WorldWideScience

Sample records for random measurement error

  1. Random measurement error: Why worry? An example of cardiovascular risk factors.

    Science.gov (United States)

    Brakenhoff, Timo B; van Smeden, Maarten; Visseren, Frank L J; Groenwold, Rolf H H

    2018-01-01

    With the increased use of data not originally recorded for research, such as routine care data (or 'big data'), measurement error is bound to become an increasingly relevant problem in medical research. A common view among medical researchers on the influence of random measurement error (i.e. classical measurement error) is that its presence leads to some degree of systematic underestimation of studied exposure-outcome relations (i.e. attenuation of the effect estimate). For the common situation where the analysis involves at least one exposure and one confounder, we demonstrate that the direction of effect of random measurement error on the estimated exposure-outcome relations can be difficult to anticipate. Using three example studies on cardiovascular risk factors, we illustrate that random measurement error in the exposure and/or confounder can lead to underestimation as well as overestimation of exposure-outcome relations. We therefore advise medical researchers to refrain from making claims about the direction of effect of measurement error in their manuscripts, unless the appropriate inferential tools are used to study or alleviate the impact of measurement error from the analysis.
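
    The mechanism described above is easy to reproduce in a few lines of simulation. The sketch below is illustrative only (not the authors' code); the effect sizes, error variance and variable names are arbitrary assumptions. It shows that classical error in a confounder leaves residual confounding, so the adjusted exposure effect can be overestimated rather than attenuated.

```python
import numpy as np

# Illustrative simulation only (not the authors' code): classical measurement
# error in a confounder leaves residual confounding, so the confounder-adjusted
# exposure effect can be OVER-estimated, even though error in the exposure
# itself would attenuate it.  All effect sizes and variances are assumptions.
rng = np.random.default_rng(0)
n = 200_000

confounder = rng.normal(size=n)                     # true confounder C
exposure = 0.6 * confounder + rng.normal(size=n)    # exposure X depends on C
outcome = 0.3 * exposure + 0.5 * confounder + rng.normal(size=n)

# classical (random, non-differential) error on the measured confounder
confounder_obs = confounder + rng.normal(scale=1.0, size=n)

def adjusted_slope(y, x, c):
    """OLS coefficient of x from regressing y on an intercept, x and c."""
    design = np.column_stack([np.ones_like(x), x, c])
    beta, *_ = np.linalg.lstsq(design, y, rcond=None)
    return beta[1]

print("true exposure effect:              0.300")
print(f"adjusted for the true confounder:  {adjusted_slope(outcome, exposure, confounder):.3f}")
print(f"adjusted for the noisy confounder: {adjusted_slope(outcome, exposure, confounder_obs):.3f}")
```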

  2. Errors due to random noise in velocity measurement using incoherent-scatter radar

    Directory of Open Access Journals (Sweden)

    P. J. S. Williams

    1996-12-01

    The random-noise errors involved in measuring the Doppler shift of an 'incoherent-scatter' spectrum are predicted theoretically for all values of Te/Ti from 1.0 to 3.0. After correction has been made for the effects of convolution during transmission and reception and the additional errors introduced by subtracting the average of the background gates, the rms errors can be expressed by a simple semi-empirical formula. The observed errors are determined from a comparison of simultaneous EISCAT measurements using an identical pulse code on several adjacent frequencies. The plot of observed versus predicted error has a slope of 0.991 and a correlation coefficient of 99.3%. The prediction also agrees well with the mean of the error distribution reported by the standard EISCAT analysis programme.

  3. Error Bounds Due to Random Noise in Cylindrical Near-Field Measurements

    OpenAIRE

    Romeu Robert, Jordi; Jofre Roca, Lluís

    1991-01-01

    The far-field errors due to near-field random noise are statistically bounded when performing the cylindrical near-field to far-field transform. In this communication, the far-field noise variance is expressed as a function of the measurement parameters and the near-field noise variance.

  4. Random errors revisited

    DEFF Research Database (Denmark)

    Jacobsen, Finn

    2000-01-01

    the random errors of estimates of the sound intensity in, say, one-third octave bands from the power and cross power spectra of the signals from an intensity probe determined with a dual channel FFT analyser. This is not very practical, though. In this paper it is demonstrated that one can predict the random...

  5. A permutation test to analyse systematic bias and random measurement errors of medical devices via boosting location and scale models.

    Science.gov (United States)

    Mayr, Andreas; Schmid, Matthias; Pfahlberg, Annette; Uter, Wolfgang; Gefeller, Olaf

    2017-06-01

    Measurement errors of medico-technical devices can be separated into systematic bias and random error. We propose a new method to address both simultaneously via generalized additive models for location, scale and shape (GAMLSS) in combination with permutation tests. More precisely, we extend a recently proposed boosting algorithm for GAMLSS to provide a test procedure to analyse potential device effects on the measurements. We carried out a large-scale simulation study to provide empirical evidence that our method is able to identify possible sources of systematic bias as well as random error under different conditions. Finally, we apply our approach to compare measurements of skin pigmentation from two different devices in an epidemiological study.

  6. Invariant measures and error bounds for random walks in the quarter-plane based on sums of geometric terms

    NARCIS (Netherlands)

    Chen, Y.; Boucherie, Richardus J.; Goseling, Jasper

    2016-01-01

    We consider homogeneous random walks in the quarter-plane. The necessary conditions which characterize random walks of which the invariant measure is a sum of geometric terms are provided in Chen et al. (arXiv:1304.3316, 2013, Probab Eng Informational Sci 29(02):233–251, 2015). Based on these

  7. Random error in cardiovascular meta-analyses

    DEFF Research Database (Denmark)

    Albalawi, Zaina; McAlister, Finlay A; Thorlund, Kristian

    2013-01-01

    BACKGROUND: Cochrane reviews are viewed as the gold standard in meta-analyses given their efforts to identify and limit systematic error which could cause spurious conclusions. The potential for random error to cause spurious conclusions in meta-analyses is less well appreciated. METHODS: We...

  8. Effects of Systematic and Random Errors on the Retrieval of Particle Microphysical Properties from Multiwavelength Lidar Measurements Using Inversion with Regularization

    Science.gov (United States)

    Ramirez, Daniel Perez; Whiteman, David N.; Veselovskii, Igor; Kolgotin, Alexei; Korenskiy, Michael; Alados-Arboledas, Lucas

    2013-01-01

    In this work we study the effects of systematic and random errors on the inversion of multiwavelength (MW) lidar data using the well-known regularization technique to obtain vertically resolved aerosol microphysical properties. The software implementation used here was developed at the Physics Instrumentation Center (PIC) in Troitsk (Russia) in conjunction with the NASA/Goddard Space Flight Center. Its applicability to Raman lidar systems based on backscattering measurements at three wavelengths (355, 532 and 1064 nm) and extinction measurements at two wavelengths (355 and 532 nm) has been demonstrated widely. The systematic error sensitivity is quantified by first determining the retrieved parameters for a given set of optical input data consistent with three different sets of aerosol physical parameters. Then each optical input is perturbed by varying amounts and the inversion is repeated. Using bimodal aerosol size distributions, we find a generally linear dependence of the retrieved errors in the microphysical properties on the induced systematic errors in the optical data. For the retrievals of effective radius, number/surface/volume concentrations and fine-mode radius and volume, we find that these results are not significantly affected by the range of the constraints used in inversions. But significant sensitivity was found to the allowed range of the imaginary part of the particle refractive index. Our results also indicate that there exists an additive property for the deviations induced by the biases present in the individual optical data. This property permits the results here to be used to predict deviations in retrieved parameters when multiple input optical data are biased simultaneously as well as to study the influence of random errors on the retrievals. The above results are applied to questions regarding lidar design, in particular for the spaceborne multiwavelength lidar under consideration for the upcoming ACE mission.

  9. Quantile Regression With Measurement Error

    KAUST Repository

    Wei, Ying

    2009-08-27

    Regression quantiles can be substantially biased when the covariates are measured with error. In this paper we propose a new method that produces consistent linear quantile estimation in the presence of covariate measurement error. The method corrects the measurement error induced bias by constructing joint estimating equations that simultaneously hold for all the quantile levels. An iterative EM-type estimation algorithm to obtain the solutions to such joint estimation equations is provided. The finite sample performance of the proposed method is investigated in a simulation study, and compared to the standard regression calibration approach. Finally, we apply our methodology to part of the National Collaborative Perinatal Project growth data, a longitudinal study with an unusual measurement error structure. © 2009 American Statistical Association.
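
    The bias that motivates this work can be seen directly by simulation. The sketch below (not the authors' correction method) compares median regression with an error-free versus an error-prone covariate using statsmodels; all data and parameter values are made up for illustration.

```python
import numpy as np
import statsmodels.api as sm

# Simulated illustration of the attenuation that motivates the paper; this is
# NOT the correction method it proposes.  True slope, error variance and the
# heavy-tailed noise are arbitrary assumptions.
rng = np.random.default_rng(1)
n = 20_000

x_true = rng.normal(size=n)
y = 1.0 + 2.0 * x_true + rng.standard_t(df=5, size=n)    # true slope = 2
x_obs = x_true + rng.normal(scale=1.0, size=n)            # covariate measured with error

for label, x in [("error-free covariate", x_true), ("error-prone covariate", x_obs)]:
    fit = sm.QuantReg(y, sm.add_constant(x)).fit(q=0.5)   # median regression
    print(f"median-regression slope with {label}: {fit.params[1]:.3f}")
```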

  10. Geometric errors measurement for coordinate measuring machines

    Science.gov (United States)

    Pan, Fangyu; Nie, Li; Bai, Yuewei; Wang, Xiaogang; Wu, Xiaoyan

    2017-08-01

    Error compensation is an effective way to improve the accuracy of coordinate measuring machines (CMMs). To achieve this goal, basic research is carried out: first, the error sources are analysed, identifying 21 geometric errors that seriously affect CMM precision; second, the measurement method is presented and its principle elaborated. Experiments validate the feasibility of the method, laying a foundation for further compensation to improve CMM accuracy.

  11. Errors in Chemical Sensor Measurements

    Directory of Open Access Journals (Sweden)

    Artur Dybko

    2001-06-01

    Various types of errors during the measurements of ion-selective electrodes, ion-sensitive field effect transistors, and fibre optic chemical sensors are described. The errors were divided according to their nature and place of origin into chemical, instrumental and non-chemical. The influence of interfering ions, leakage of the membrane components, liquid junction potential as well as sensor wiring, ambient light and temperature is presented.

  12. Capture-recapture method for estimating misclassification errors: application to the measurement of vaccine efficacy in randomized controlled trials.

    Science.gov (United States)

    Simondon, F; Khodja, H

    1999-02-01

    The measure of efficacy is optimally performed by randomized controlled trials. However, low specificity of the judgement criteria is known to bias toward lower estimation, while low sensitivity increases the required sample size. A common technique for ensuring good specificity without a drop in sensitivity is to use several diagnostic tests in parallel, with each of them being specific. This approach is similar to the more general situation of case-counting from multiple data sources, and this paper explores the application of the capture-recapture method for the analysis of the estimates of efficacy. An illustration of this application is derived from a study on the efficacy of pertussis vaccines where the outcome was based on ≥21 days of cough confirmed by at least one of three criteria performed independently for each subject: bacteriology, serology, or epidemiological link. Log-linear methods were applied to these data considered as three sources of information. The best model considered the three simple effects and an interaction term between bacteriology and epidemiological linkage. Among the 801 children experiencing ≥21 days of cough, it was estimated that 93 cases were missed, leading to a corrected total of 413 confirmed cases. The relative vaccine efficacy estimated from the same model was 1.50 (95% confidence interval: 1.24-1.82), similar to the crude estimate of 1.59 and confirming better protection afforded by one of the two vaccines. This method allows supporting analysis to interpret primary estimates of vaccine efficacy.
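
    As a toy illustration of the capture-recapture idea (two sources and the Chapman estimator only, whereas the study fits three-source log-linear models), the number of cases missed by all sources can be estimated from the overlap between sources; the counts below are invented for the example.

```python
# Toy two-source capture-recapture (Chapman estimator).  The study itself fits
# three-source log-linear models; this only illustrates the underlying idea.
# All counts below are invented for the example.
n_source_a = 250   # cases confirmed by source A (e.g. bacteriology)
n_source_b = 300   # cases confirmed by source B (e.g. serology)
n_both = 120       # cases confirmed by both sources

n_total_hat = (n_source_a + 1) * (n_source_b + 1) / (n_both + 1) - 1
n_observed = n_source_a + n_source_b - n_both
print(f"estimated total cases: {n_total_hat:.0f}")
print(f"estimated cases missed by both sources: {n_total_hat - n_observed:.0f}")
```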

  13. Correction of errors in power measurements

    DEFF Research Database (Denmark)

    Pedersen, Knud Ole Helgesen

    1998-01-01

    Small errors in voltage and current measuring transformers cause inaccuracies in power measurements. In this report correction factors are derived to compensate for such errors.

  14. Q-circle measurement error

    Science.gov (United States)

    Hearn, Chase P.; Bradshaw, Edward S.

    1991-05-01

    High-Q lumped and distributed networks near resonance are generally modeled as elementary three element RLC circuits. The widely used Q-circle measurement technique is based on this assumption. It is shown that this assumption can lead to errors when measuring the Q-factor of more complex resonators, particularly when heavily loaded by the external source. In the Q-circle technique, the resonator is assumed to behave as a pure series (or parallel) RLC circuit and the intercept frequencies are found experimentally at which the components of impedance satisfy |Im(Z)| = Re(Z) (unloaded Q) and |Im(Z)| = Ro + Re(Z) (loaded Q). The Q-factor is then determined as the ratio of the resonant frequency to the intercept bandwidth. This relationship is exact for simple series or parallel RLC circuits, regardless of the Q-factor, but not for more complex circuits. This is shown to be due to the fact that the impedance components of the circuit vary with frequency differently from those in a pure series RLC circuit, causing the Q-factor as determined above to be in error.
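
    In symbols, the procedure summarized above uses the following intercept conditions and Q-factor definition (the notation f_0, f_1, f_2 for the resonant and intercept frequencies is assumed here, not taken from the record):

```latex
% Intercept conditions and Q-factor used in the Q-circle technique; f_1 and f_2
% are the lower and upper intercept frequencies, f_0 the resonant frequency.
\begin{align*}
  \lvert \operatorname{Im} Z(f) \rvert &= \operatorname{Re} Z(f)       && \text{(unloaded-$Q$ intercepts)}\\
  \lvert \operatorname{Im} Z(f) \rvert &= R_0 + \operatorname{Re} Z(f) && \text{(loaded-$Q$ intercepts)}\\
  Q &= \frac{f_0}{f_2 - f_1} && \text{(exact only for a pure series or parallel RLC)}
\end{align*}
```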

  15. Measurement Error and Equating Error in Power Analysis

    Science.gov (United States)

    Phillips, Gary W.; Jiang, Tao

    2016-01-01

    Power analysis is a fundamental prerequisite for conducting scientific research. Without power analysis the researcher has no way of knowing whether the sample size is large enough to detect the effect he or she is looking for. This paper demonstrates how psychometric factors such as measurement error and equating error affect the power of…

  16. Protecting weak measurements against systematic errors

    OpenAIRE

    Pang, Shengshi; Alonso, Jose Raul Gonzalez; Brun, Todd A.; Jordan, Andrew N.

    2016-01-01

    In this work, we consider the systematic error of quantum metrology by weak measurements under decoherence. We derive the systematic error of maximum likelihood estimation in general to the first-order approximation of a small deviation in the probability distribution, and study the robustness of standard weak measurement and postselected weak measurements against systematic errors. We show that, with a large weak value, the systematic error of a postselected weak measurement when the probe undergoes decoherence can be significantly lower than that of a standard weak measurement.

  17. What Randomized Benchmarking Actually Measures

    Science.gov (United States)

    Proctor, Timothy; Rudinger, Kenneth; Young, Kevin; Sarovar, Mohan; Blume-Kohout, Robin

    2017-09-01

    Randomized benchmarking (RB) is widely used to measure an error rate of a set of quantum gates, by performing random circuits that would do nothing if the gates were perfect. In the limit of no finite-sampling error, the exponential decay rate of the observable survival probabilities, versus circuit length, yields a single error metric r . For Clifford gates with arbitrary small errors described by process matrices, r was believed to reliably correspond to the mean, over all Clifford gates, of the average gate infidelity between the imperfect gates and their ideal counterparts. We show that this quantity is not a well-defined property of a physical gate set. It depends on the representations used for the imperfect and ideal gates, and the variant typically computed in the literature can differ from r by orders of magnitude. We present new theories of the RB decay that are accurate for all small errors describable by process matrices, and show that the RB decay curve is a simple exponential for all such errors. These theories allow explicit computation of the error rate that RB measures (r ), but as far as we can tell it does not correspond to the infidelity of a physically allowed (completely positive) representation of the imperfect gates.
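
    The standard RB analysis referred to above amounts to fitting a single exponential to the survival probabilities. A minimal sketch with synthetic data follows; it assumes the conventional single-qubit relation r = (1 - p)/2 and is not the new theory proposed in this record.

```python
import numpy as np
from scipy.optimize import curve_fit

# Standard RB fit: survival probability vs. sequence length m is modelled as
# A * p**m + B, and the single-qubit RB error rate is taken as r = (1 - p) / 2.
# The data below are synthetic and only illustrate the fitting step.
def rb_decay(m, a, p, b):
    return a * p**m + b

lengths = np.array([1, 2, 4, 8, 16, 32, 64, 128, 256])
rng = np.random.default_rng(2)
p_true = 0.995
survival = 0.5 * p_true**lengths + 0.5 + rng.normal(scale=0.005, size=lengths.size)

(a, p, b), _ = curve_fit(rb_decay, lengths, survival, p0=[0.5, 0.99, 0.5])
print(f"fitted decay parameter p = {p:.5f}, RB error rate r = {(1 - p) / 2:.2e}")
```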

  18. Measurement error in a single regressor

    NARCIS (Netherlands)

    Meijer, H.J.; Wansbeek, T.J.

    2000-01-01

    For the setting of multiple regression with measurement error in a single regressor, we present some very simple formulas to assess the result that one may expect when correcting for measurement error. It is shown where the corrected estimated regression coefficients and the error variance may lie,

  19. Impact of Measurement Error on Synchrophasor Applications

    Energy Technology Data Exchange (ETDEWEB)

    Liu, Yilu [Univ. of Tennessee, Knoxville, TN (United States); Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Gracia, Jose R. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Ewing, Paul D. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Zhao, Jiecheng [Univ. of Tennessee, Knoxville, TN (United States); Tan, Jin [Univ. of Tennessee, Knoxville, TN (United States); Wu, Ling [Univ. of Tennessee, Knoxville, TN (United States); Zhan, Lingwei [Univ. of Tennessee, Knoxville, TN (United States)

    2015-07-01

    Phasor measurement units (PMUs), a type of synchrophasor, are powerful diagnostic tools that can help avert catastrophic failures in the power grid. Because of this, PMU measurement errors are particularly worrisome. This report examines the internal and external factors contributing to PMU phase angle and frequency measurement errors and gives a reasonable explanation for them. It also analyzes the impact of those measurement errors on several synchrophasor applications: event location detection, oscillation detection, islanding detection, and dynamic line rating. The primary finding is that dynamic line rating is more likely to be influenced by measurement error. Other findings include the possibility of reporting nonoscillatory activity as an oscillation as the result of error, failing to detect oscillations submerged by error, and the unlikely impact of error on event location and islanding detection.

  1. Quality assessment of speckle patterns for DIC by consideration of both systematic errors and random errors

    Science.gov (United States)

    Su, Yong; Zhang, Qingchuan; Xu, Xiaohai; Gao, Zeren

    2016-11-01

    The performance of digital image correlation (DIC) is influenced significantly by the quality of speckle patterns. Thus, it is crucial to present a valid and practical method to assess the quality of speckle patterns. However, existing assessment methods either lack a solid theoretical foundation or fail to consider the errors due to interpolation. In this work, it is proposed to assess the quality of speckle patterns by estimating the root mean square error (RMSE) of DIC, which is the square root of the sum of squares of the systematic error and the random error. Two performance evaluation parameters, respectively the maximum and the quadratic mean of the RMSE, are proposed to characterize the total error. An efficient algorithm is developed to estimate these parameters, and the correctness of this algorithm is verified by numerical experiments for both a one-dimensional signal and actual speckle images. The influences of correlation criterion, shape function order, and sub-pixel registration algorithm are briefly discussed. Compared to existing methods, the method presented in this paper is more valid due to the consideration of both measurement accuracy and precision.
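
    Written out, the total-error measure described above combines the two components in quadrature; the symbols below are assumed for illustration and are not taken verbatim from the paper.

```latex
% RMSE as the quadrature sum of systematic and random errors, and the two
% summary parameters (maximum and quadratic mean) over N evaluation points x_i.
\[
  \mathrm{RMSE}(\mathbf{x}) = \sqrt{e_{\mathrm{sys}}^{2}(\mathbf{x}) + e_{\mathrm{rand}}^{2}(\mathbf{x})},
  \qquad
  P_{\max} = \max_{i} \mathrm{RMSE}(\mathbf{x}_i),
  \qquad
  P_{\mathrm{qm}} = \sqrt{\tfrac{1}{N}\textstyle\sum_{i=1}^{N} \mathrm{RMSE}^{2}(\mathbf{x}_i)}.
\]
```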

  2. Influence of measurement error on Maxwell's demon

    Science.gov (United States)

    Sørdal, Vegard; Bergli, Joakim; Galperin, Y. M.

    2017-06-01

    In any general cycle of measurement, feedback, and erasure, the measurement will reduce the entropy of the system when information about the state is obtained, while erasure, according to Landauer's principle, is accompanied by a corresponding increase in entropy due to the compression of logical and physical phase space. The total process can in principle be fully reversible. A measurement error reduces the information obtained and the entropy decrease in the system. The erasure still gives the same increase in entropy, and the total process is irreversible. Another consequence of measurement error is that a bad feedback is applied, which further increases the entropy production if the proper protocol adapted to the expected error rate is not applied. We consider the effect of measurement error on a realistic single-electron box Szilard engine, and we find the optimal protocol for the cycle as a function of the desired power P and error ɛ .

  3. Quantifying and handling errors in instrumental measurements using the measurement error theory

    DEFF Research Database (Denmark)

    Andersen, Charlotte Møller; Bro, R.; Brockhoff, P.B.

    2003-01-01

    Measurement error modelling is used for investigating the influence of measurement/sampling error on univariate predictions of water content and water-holding capacity (reference measurement) from nuclear magnetic resonance (NMR) relaxations (instrumental) measured on two gadoid fish species. This is a new way of using the measurement error theory. Reliability ratios illustrate that the models for the two fish species are influenced differently by the error. However, the error seems to influence the predictions of the two reference measures in the same way. The effect of using replicated x-measurements is illustrated by simulated data and by NMR relaxations measured several times on each fish. The standard error of the physical determination of the reference values is lower than the standard error of the NMR measurements. In this case, lower prediction error is obtained by replicating the instrumental measurements.

  4. Measuring Systematic Error with Curve Fits

    Science.gov (United States)

    Rupright, Mark E.

    2011-01-01

    Systematic errors are often unavoidable in the introductory physics laboratory. As has been demonstrated in many papers in this journal, such errors can present a fundamental problem for data analysis, particularly when comparing the data to a given model [1-3]. In this paper I give three examples in which my students use popular curve-fitting software and adjust the theoretical model to account for, and even exploit, the presence of systematic errors in measured data.

  5. Feedback cooling, measurement errors, and entropy production

    Science.gov (United States)

    Munakata, T.; Rosinberg, M. L.

    2013-06-01

    The efficiency of a feedback mechanism depends on the precision of the measurement outcomes obtained from the controlled system. Accordingly, measurement errors affect the entropy production in the system. We explore this issue in the context of active feedback cooling by modeling a typical cold damping setup as a harmonic oscillator in contact with a heat reservoir and subjected to a velocity-dependent feedback force that reduces the random motion. We consider two models that distinguish whether the sensor continuously measures the position of the resonator or directly its velocity (in practice, an electric current). Adopting the standpoint of the controlled system, we identify the ‘entropy pumping’ contribution that describes the entropy reduction due to the feedback control and that modifies the second law of thermodynamics. We also assign a relaxation dynamics to the feedback mechanism and compare the apparent entropy production in the system and the heat bath (under the influence of the controller) to the total entropy production in the super-system that includes the controller. In this context, entropy pumping reflects the existence of hidden degrees of freedom and the apparent entropy production satisfies fluctuation theorems associated with an effective Langevin dynamics.

  6. Measurement Error in Education and Growth Regressions*

    NARCIS (Netherlands)

    Portela, Miguel; Alessie, Rob; Teulings, Coen

    2010-01-01

    The use of the perpetual inventory method for the construction of education data per country leads to systematic measurement error. This paper analyzes its effect on growth regressions. We suggest a methodology for correcting this error. The standard attenuation bias suggests that using these

  7. MEASUREMENT ERROR WITH DIFFERENT COMPUTER VISION TECHNIQUES

    Directory of Open Access Journals (Sweden)

    O. Icasio-Hernández

    2017-09-01

    The goal of this work is to offer a comparison of measurement error for different computer vision techniques for 3D reconstruction and to allow a metrological discrimination based on our evaluation results. The present work implements four 3D reconstruction techniques: passive stereoscopy, active stereoscopy, shape from contour and fringe profilometry, finding the measurement error and its uncertainty using different gauges. We measured several known dimensional and geometric standards. We compared the results for the techniques (average errors, standard deviations, and uncertainties), obtaining a guide to identify the tolerances that each technique can achieve and to choose the best.

  8. Measurement Error with Different Computer Vision Techniques

    Science.gov (United States)

    Icasio-Hernández, O.; Curiel-Razo, Y. I.; Almaraz-Cabral, C. C.; Rojas-Ramirez, S. R.; González-Barbosa, J. J.

    2017-09-01

    The goal of this work is to offer a comparison of measurement error for different computer vision techniques for 3D reconstruction and to allow a metrological discrimination based on our evaluation results. The present work implements four 3D reconstruction techniques: passive stereoscopy, active stereoscopy, shape from contour and fringe profilometry, finding the measurement error and its uncertainty using different gauges. We measured several known dimensional and geometric standards. We compared the results for the techniques (average errors, standard deviations, and uncertainties), obtaining a guide to identify the tolerances that each technique can achieve and to choose the best.

  9. Direction of dependence in measurement error models.

    Science.gov (United States)

    Wiedermann, Wolfgang; Merkle, Edgar C; von Eye, Alexander

    2017-09-05

    Methods to determine the direction of a regression line, that is, to determine the direction of dependence in reversible linear regression models (e.g., x→y vs. y→x), have experienced rapid development within the last decade. However, previous research largely rested on the assumption that the true predictor is measured without measurement error. The present paper extends the direction dependence principle to measurement error models. First, we discuss asymmetric representations of the reliability coefficient in terms of higher moments of variables and the attenuation of skewness and excess kurtosis due to measurement error. Second, we identify conditions where direction dependence decisions are biased due to measurement error and suggest method of moments (MOM) estimation as a remedy. Third, we address data situations in which the true outcome exhibits both regression and measurement error, and propose a sensitivity analysis approach to determining the robustness of direction dependence decisions against unreliably measured outcomes. Monte Carlo simulations were performed to assess the performance of MOM-based direction dependence measures and their robustness to violated measurement error assumptions (i.e., non-independence and non-normality). An empirical example from subjective well-being research is presented. The plausibility of model assumptions and links to modern causal inference methods for observational data are discussed. © 2017 The British Psychological Society.

  10. Defining random and systematic error in precipitation interpolation

    Science.gov (United States)

    Lebrenz, H.; Bárdossy, A.

    2012-04-01

    Variogram-based interpolation methods are widely applied for hydrology. Kriging estimates an expectation value and an associated distribution while simulations provide a distribution of possible realizations of the random function at the unknown location. The associated error in both cases is random and characterized by the convergence of its sum over time to zero, being convenient for subsequent hydrological modelling. This study addresses the quantification of a random and a systematic error for the mentioned interpolation methods. Firstly, monthly precipitation observations are fit to a two-parametric, theoretical distribution at each observation point. Prior to interpolation, the observations are decomposed into two distribution parameters and their corresponding quantiles. The distribution parameters and their quantiles are interpolated to the unknown location and finally recomposed back to precipitation amounts. This method bears the capability of addressing two types of errors: a random error defined by simulating the quantiles and associated expectation value of the parameters, and a systematic error defined by simulating the parameters and the expectation value of the quantiles. The defined random error converges over time to zero while the systematic error does not, but creates a bias. With perspective to subsequent hydrological modelling, the input uncertainty of the interpolated (areal) precipitation is thus described by a random and a systematic error.

  11. Protecting weak measurements against systematic errors

    Science.gov (United States)

    Pang, Shengshi; Alonso, Jose Raul Gonzalez; Brun, Todd A.; Jordan, Andrew N.

    2016-07-01

    In this work, we consider the systematic error of quantum metrology by weak measurements under decoherence. We derive the systematic error of maximum likelihood estimation in general to the first-order approximation of a small deviation in the probability distribution and study the robustness of standard weak measurement and postselected weak measurements against systematic errors. We show that, with a large weak value, the systematic error of a postselected weak measurement when the probe undergoes decoherence can be significantly lower than that of a standard weak measurement. This indicates another advantage of weak-value amplification in improving the performance of parameter estimation. We illustrate the results by an exact numerical simulation of decoherence arising from a bosonic mode and compare it to the first-order analytical result we obtain.

  12. Assessing Measurement Error in Medicare Coverage

    Data.gov (United States)

    U.S. Department of Health & Human Services — Assessing Measurement Error in Medicare Coverage From the National Health Interview Survey Using linked administrative data, to validate Medicare coverage estimates...

  13. At least some errors are randomly generated (Freud was wrong)

    Science.gov (United States)

    Sellen, A. J.; Senders, J. W.

    1986-01-01

    An experiment was carried out to expose something about human error generating mechanisms. In the context of the experiment, an error was made when a subject pressed the wrong key on a computer keyboard or pressed no key at all in the time allotted. These might be considered, respectively, errors of substitution and errors of omission. Each of seven subjects saw a sequence of three digital numbers, made an easily learned binary judgement about each, and was to press the appropriate one of two keys. Each session consisted of 1,000 presentations of randomly permuted, fixed numbers broken into 10 blocks of 100. One of two keys should have been pressed within one second of the onset of each stimulus. These data were subjected to statistical analyses in order to probe the nature of the error generating mechanisms. Goodness of fit tests for a Poisson distribution for the number of errors per 50 trial interval and for an exponential distribution of the length of the intervals between errors were carried out. There is evidence for an endogenous mechanism that may best be described as a random error generator. Furthermore, an item analysis of the number of errors produced per stimulus suggests the existence of a second mechanism operating on task driven factors producing exogenous errors. Some errors, at least, are the result of constant probability generating mechanisms with error rate idiosyncratically determined for each subject.
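
    The distributional checks described above can be reproduced on any error log with a short script; the sketch below simulates a constant-probability error generator and applies a chi-square goodness-of-fit test for the Poisson hypothesis (the data and block structure are illustrative, not the study's).

```python
import numpy as np
from scipy import stats

# Goodness-of-fit check in the spirit of the study: are error counts per block
# consistent with a Poisson distribution, i.e. with a constant-probability
# error generator?  The "error log" is simulated here, not the experimental data.
rng = np.random.default_rng(3)
errors = rng.random(1000) < 0.03              # 1000 trials, 3% error probability
counts = errors.reshape(20, 50).sum(axis=1)   # number of errors per 50-trial block

lam = counts.mean()                           # estimated Poisson rate per block
observed = np.bincount(counts, minlength=8)[:8]
expected = stats.poisson.pmf(np.arange(8), lam) * counts.size
expected *= observed.sum() / expected.sum()   # match totals for the test

chi2, pval = stats.chisquare(observed, expected, ddof=1)  # one dof lost to lam
print(f"chi-square = {chi2:.2f}, p = {pval:.3f} (large p: Poisson not rejected)")
```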

  14. Measurement error in longitudinal film badge data

    CERN Document Server

    Marsh, J L

    2002-01-01

    The classical measurement error model is that of a simple linear regression with unobservable variables. Information about the covariates is available only through error-prone measurements, usually with an additive structure. Ignoring errors has been shown to result in biased regression coefficients, reduced power of hypothesis tests and increased variability of parameter estimates. Radiation is known to be a causal factor for certain types of leukaemia. This link is mainly substantiated by the Atomic Bomb Survivor study, the Ankylosing Spondylitis Patients study, and studies of various other patients irradiated for therapeutic purposes. Initial logistic regressions turned up some surprising contradictory results which led to a re-sampling of Sellafield mortality controls without the date of employment matching factor. It is suggested that over-matching is the cause of the contradictory results. Comparisons of the two measurements of radiation exposure suggest a strongly linear relationship with non-Normal errors. A method has been developed using the technique of Regression Calibration to deal with these in a case-control study context, and applied to this Sellafield study.

  15. Error Separation for Wide Area Film Measurement

    Directory of Open Access Journals (Sweden)

    Shujie LIU

    2014-09-01

    We wanted to use multiple probes and a white-light interferometer to measure the surface profile of thin film. However, this system, as assessed with a scanning method, suffers from errors caused by the moving stage and from systematic sensor errors. In this paper, in order to separate the measurement error caused by the moving stage from the systematic sensor errors, least squares analysis is applied to achieve self-calibration in the measurement process. The modelling principle and solution process of the least squares analysis with multiple probes and an autocollimator are introduced, and the corresponding theoretical uncertainty calculation method is also given. Using this method, we analyse the experimental data and obtain a shape close to the real profile. Comparing with the actual value, the bias and uncertainty for different numbers of probes are discussed. The results demonstrate the feasibility of the constructed multi-ball cantilever system with the autocollimator for measuring thin film with high accuracy.

  16. Nonclassical measurements errors in nonlinear models

    DEFF Research Database (Denmark)

    Madsen, Edith; Mulalic, Ismir

    ...around zero and thicker tails than a normal distribution. In a linear regression model where the explanatory variable is measured with error it is well known that this gives a downward bias in the absolute value of the corresponding regression parameter (attenuation), Friedman (1957). In non-linear models it is more difficult to obtain an expression for the bias as it depends on the distribution of the true underlying variable as well as the error distribution. Chesher (1991) gives some approximations for very general non-linear models and Stefanski & Carroll (1985) for the logistic regression model... and if the distribution of the underlying true income is skewed then there are valid technical instruments. We investigate how this IV estimation approach works in theory and illustrate it by simulation studies using the findings about the measurement error model for income from the NTS.

  17. Multiple indicators, multiple causes measurement error models.

    Science.gov (United States)

    Tekwe, Carmen D; Carter, Randy L; Cullings, Harry M; Carroll, Raymond J

    2014-11-10

    Multiple indicators, multiple causes (MIMIC) models are often employed by researchers studying the effects of an unobservable latent variable on a set of outcomes, when causes of the latent variable are observed. There are times, however, when the causes of the latent variable are not observed because measurements of the causal variable are contaminated by measurement error. The objectives of this paper are as follows: (i) to develop a novel model by extending the classical linear MIMIC model to allow both Berkson and classical measurement errors, defining the MIMIC measurement error (MIMIC ME) model; (ii) to develop likelihood-based estimation methods for the MIMIC ME model; and (iii) to apply the newly defined MIMIC ME model to atomic bomb survivor data to study the impact of dyslipidemia and radiation dose on the physical manifestations of dyslipidemia. As a by-product of our work, we also obtain a data-driven estimate of the variance of the classical measurement error associated with an estimate of the amount of radiation dose received by atomic bomb survivors at the time of their exposure. Copyright © 2014 John Wiley & Sons, Ltd.

  18. Accurate test limits under nonnormal measurement error

    NARCIS (Netherlands)

    Albers, Willem/Wim; Kallenberg, W.C.M.; Otten, G.D.

    1998-01-01

    When screening a production process for nonconforming items the objective is to improve the average outgoing quality level. Due to measurement errors specification limits cannot be checked directly and hence test limits are required, which meet some given requirement, here given by a prescribed

  19. Application of Uniform Measurement Error Distribution

    Science.gov (United States)

    2016-03-18

    ...specific distribution and the associated joint probability density function (PDF). Then, assuming uniformly distributed measurement errors, we will try... Probability of False Accept (PFA), Probability of False Reject (PFR)... calibration tolerance limits, but the difference of the observed measurement results of the UUT and the Calibration Standard (CalStd or CAL) is within...

  20. The Effect of Random Error on Diagnostic Accuracy Illustrated with the Anthropometric Diagnosis of Malnutrition.

    Directory of Open Access Journals (Sweden)

    Emmanuel Grellety

    It is often thought that random measurement error has a minor effect upon the results of an epidemiological survey. Theoretically, errors of measurement should always increase the spread of a distribution. Defining an illness by having a measurement outside an established healthy range will lead to an inflated prevalence of that condition if there are measurement errors. A Monte Carlo simulation was conducted of anthropometric assessment of children with malnutrition. Random errors of increasing magnitude were imposed upon the populations and showed that there was an increase in the standard deviation with each of the errors that became exponentially greater with the magnitude of the error. The potential magnitudes of the resulting errors in reported prevalence of malnutrition were compared with published international data and found to be of sufficient magnitude to make a number of surveys and the numerous reports and analyses that used these data unreliable. The effect of random error in public health surveys and the data upon which diagnostic cut-off points are derived to define "health" has been underestimated. Even quite modest random errors can more than double the reported prevalence of conditions such as malnutrition. Increasing sample size does not address this problem, and may even result in less accurate estimates. More attention needs to be paid to the selection, calibration and maintenance of instruments, measurer selection, training & supervision, routine estimation of the likely magnitude of errors using standardization tests, use of statistical likelihood of error to exclude data from analysis and full reporting of these procedures in order to judge the reliability of survey reports.
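
    The inflation mechanism is straightforward to demonstrate: zero-mean random error widens the observed distribution, so more observations fall beyond a fixed cut-off such as -2 z-scores. The Monte Carlo sketch below uses invented population parameters, not the surveys analysed in the paper.

```python
import numpy as np

# Zero-mean random measurement error widens the observed distribution and
# inflates the prevalence of values beyond a fixed cut-off (-2 z-scores here).
# Population parameters are illustrative, not taken from the surveys analysed.
rng = np.random.default_rng(4)
n = 1_000_000
true_z = rng.normal(loc=-0.5, scale=1.0, size=n)   # true weight-for-height z-scores

print(f"true prevalence (z < -2): {np.mean(true_z < -2):.3%}")
for sd_error in (0.2, 0.5, 1.0):
    observed_z = true_z + rng.normal(scale=sd_error, size=n)
    print(f"with random error SD {sd_error}: {np.mean(observed_z < -2):.3%}")
```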

  1. Tropical systematic and random error energetics based on NCEP ...

    Indian Academy of Sciences (India)

    ... systematic and random error and their growth rates and different components of growth rate budgets like flux, pure generation, mixed generation and conversion in energy/variance form are investigated in physical domain for medium range tropical (30° S-30°N) weather forecast using daily horizontal wind field at 850 hPa ...

  2. Tropical systematic and random error energetics based on NCEP ...

    Indian Academy of Sciences (India)

    Systematic and random error and their growth rate and different components of growth rate budget in energy/variance form are investigated at wavenumber domain for medium range tropical (30°S-30°N) weather forecast using daily horizontal wind field of 850 hPa up to 5-day forecast for the month of June, 2000 of NCEP ...

  3. Measurement error in longitudinal film badge data

    Energy Technology Data Exchange (ETDEWEB)

    Marsh, J.L

    2002-04-01

    The classical measurement error model is that of a simple linear regression with unobservable variables. Information about the covariates is available only through error-prone measurements, usually with an additive structure. Ignoring errors has been shown to result in biased regression coefficients, reduced power of hypothesis tests and increased variability of parameter estimates. Radiation is known to be a causal factor for certain types of leukaemia. This link is mainly substantiated by the Atomic Bomb Survivor study, the Ankylosing Spondylitis Patients study, and studies of various other patients irradiated for therapeutic purposes. The carcinogenic relationship is believed to be a linear or quadratic function of dose but the risk estimates differ widely for the different studies. Previous cohort studies of the Sellafield workforce have used the cumulative annual exposure data for their risk estimates. The current 1:4 matched case-control study also uses the individual worker's film badge data, the majority of which has been unavailable in computerised form. The results from the 1:4 matched (on dates of birth and employment, sex and industrial status) case-control study are compared and contrasted with those for a 1:4 nested (within the worker cohort and matched on the same factors) case-control study using annual doses. The data consist of 186 cases and 744 controls from the work forces of four BNFL sites: Springfields, Sellafield, Capenhurst and Chapelcross. Initial logistic regressions turned up some surprising contradictory results which led to a re-sampling of Sellafield mortality controls without the date of employment matching factor. It is suggested that over matching is the cause of the contradictory results. Comparisons of the two measurements of radiation exposure suggest a strongly linear relationship with non-Normal errors. A method has been developed using the technique of Regression Calibration to deal with these in a case-control study

  4. Analysis of Random Segment Errors on Coronagraph Performance

    Science.gov (United States)

    Stahl, Mark T.; Stahl, H. Philip; Shaklan, Stuart B.; N'Diaye, Mamadou

    2016-01-01

    At 2015 SPIE O&P we presented "Preliminary Analysis of Random Segment Errors on Coronagraph Performance". Key findings: contrast leakage for a 4th-order Sinc^2(X) coronagraph is 10X more sensitive to random segment piston than to random tip/tilt; fewer segments (i.e. 1 ring) or very many segments (> 16 rings) have less contrast leakage as a function of piston or tip/tilt than an aperture with 2 to 4 rings of segments. Revised findings: piston is only 2.5X more sensitive than tip/tilt.

  5. False Positives in Multiple Regression: Unanticipated Consequences of Measurement Error in the Predictor Variables

    Science.gov (United States)

    Shear, Benjamin R.; Zumbo, Bruno D.

    2013-01-01

    Type I error rates in multiple regression, and hence the chance for false positive research findings, can be drastically inflated when multiple regression models are used to analyze data that contain random measurement error. This article shows the potential for inflated Type I error rates in commonly encountered scenarios and provides new…
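
    A minimal simulation of the scenario described above is sketched below (parameter values are arbitrary assumptions): a truly null predictor that is correlated with an error-prone real predictor is rejected far more often than the nominal 5% level.

```python
import numpy as np
import statsmodels.api as sm

# Empirical Type I error rate for a truly null predictor x2 when the real
# predictor x1 is correlated with x2 and observed with random measurement
# error.  Correlation, error variance and effect sizes are assumptions.
rng = np.random.default_rng(5)
n, n_sim, alpha = 200, 2000, 0.05

rejections = 0
for _ in range(n_sim):
    x1 = rng.normal(size=n)
    x2 = 0.7 * x1 + rng.normal(scale=0.7, size=n)   # correlated with x1, no effect on y
    y = 1.0 * x1 + rng.normal(size=n)               # y depends on x1 only
    x1_obs = x1 + rng.normal(scale=1.0, size=n)     # x1 measured with error
    fit = sm.OLS(y, sm.add_constant(np.column_stack([x1_obs, x2]))).fit()
    rejections += fit.pvalues[2] < alpha            # significance test for x2

print(f"empirical Type I error for the null predictor: {rejections / n_sim:.3f} "
      f"(nominal {alpha})")
```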

  6. Experimental research making methodical errors of analog-digital transformation of random processes

    OpenAIRE

    Єременко, В. С.; Вітрук, Ю. В.

    2005-01-01

    Results are given of an experimental study, by statistical modelling, of the methodical error of analog-to-digital conversion of random Gaussian processes. The obtained dependences allow the characteristics of the analog-to-digital converter to be matched to the parameters of the measured process.

  7. Experimental research making methodical errors of analog-digital transformation of random processes

    Directory of Open Access Journals (Sweden)

    В.С. Єременко

    2005-01-01

    Results are given of an experimental study, by statistical modelling, of the methodical error of analog-to-digital conversion of random Gaussian processes. The obtained dependences allow the characteristics of the analog-to-digital converter to be matched to the parameters of the measured process.

  8. Adjusting for the Incidence of Measurement Errors in Multilevel ...

    African Journals Online (AJOL)

    In the face of a seeming dearth of objective methods for estimating measurement error variance and realistically adjusting for the incidence of measurement errors in multilevel models, researchers often indulge in the traditional approach of arbitrarily choosing the measurement error variance, and this has the potential of giving ...

  9. Radiation risk estimation based on measurement error models

    CERN Document Server

    Masiuk, Sergii; Shklyar, Sergiy; Chepurny, Mykola; Likhtarov, Illya

    2017-01-01

    This monograph discusses statistics and risk estimates applied to radiation damage under the presence of measurement errors. The first part covers nonlinear measurement error models, with a particular emphasis on efficiency of regression parameter estimators. In the second part, risk estimation in models with measurement errors is considered. Efficiency of the methods presented is verified using data from radio-epidemiological studies.

  10. Incorporating measurement error in n=1 psychological autoregressive modeling

    NARCIS (Netherlands)

    Schuurman, Noemi K.; Houtveen, Jan H.; Hamaker, Ellen L.

    2015-01-01

    Measurement error is omnipresent in psychological data. However, the vast majority of applications of autoregressive time series analyses in psychology do not take measurement error into account. Disregarding measurement error when it is present in the data results in a bias of the autoregressive

  11. Analysis of error-prone survival data under additive hazards models: measurement error effects and adjustments.

    Science.gov (United States)

    Yan, Ying; Yi, Grace Y

    2016-07-01

    Covariate measurement error occurs commonly in survival analysis. Under the proportional hazards model, measurement error effects have been well studied, and various inference methods have been developed to correct for error effects under such a model. In contrast, error-contaminated survival data under the additive hazards model have received relatively less attention. In this paper, we investigate this problem by exploring measurement error effects on parameter estimation and the change of the hazard function. New insights of measurement error effects are revealed, as opposed to well-documented results for the Cox proportional hazards model. We propose a class of bias correction estimators that embraces certain existing estimators as special cases. In addition, we exploit the regression calibration method to reduce measurement error effects. Theoretical results for the developed methods are established, and numerical assessments are conducted to illustrate the finite sample performance of our methods.

  12. Measuring Systematic Error with Curve Fits

    Science.gov (United States)

    Rupright, Mark E.

    2011-01-01

    Systematic errors are often unavoidable in the introductory physics laboratory. As has been demonstrated in many papers in this journal, such errors can present a fundamental problem for data analysis, particularly when comparing the data to a given model. In this paper I give three examples in which my students use popular curve-fitting software…

  13. The Effect of Maternal Drug Use on Birth Weight: Measurement Error in Binary Variables

    OpenAIRE

    Robert Kaestner; Theodore Joyce; Hassan Wehbeh

    1996-01-01

    This paper develops a method to correct for non-random measurement error in a binary indicator of illicit drugs. Our results suggest that estimates of the effect of self reported prenatal drug use on birth weight are biased upwards by measurement error -- a finding contrary to predictions of a model of random measurement error. We show that more accurate estimates of the true effect of drug use on birth weight can be obtained by using the predicted probability of falsely reporting drug use. T...

  14. The combined measurement and compensation technology for robot motion error

    Science.gov (United States)

    Li, Rui; Qu, Xinghua; Deng, Yonggang; Liu, Bende

    2013-10-01

    Robot parameter errors are mainly caused by kinematic parameter errors and moving-angle errors. This paper studies the calibration of the kinematic parameter errors and the regularity of the moving-angle errors of each axis. The errors can be compensated through an error model built from pre-measurement, so the accuracy of the robot kinematic system can be improved even when no external devices are available for real-time measurement. A combined measuring system based on a laser tracker and a biaxial orthogonal inertial measuring instrument is designed and built in this paper. The laser tracker is used to build the robot kinematic parameter error model, based on the minimum constraint of distance error. The biaxial orthogonal inertial measuring instrument is used to obtain the moving-angle error model of each axis. The model is preset while the robot moves along a predetermined path to obtain the movement error, and the compensation quantity is fed back to the robot controller of the moving axis to compensate the angle. Robot kinematic parameter calibration based on the distance error model and the distribution law of the movement error of each axis are discussed in this paper. Experiments with the laser tracker show that the method can effectively improve the control accuracy of the robot system.

  15. A multisite analysis of temporal random errors in soil CO2 efflux

    Science.gov (United States)

    Cueva, Alejandro; Bahn, Michael; Litvak, Marcy; Pumpanen, Jukka; Vargas, Rodrigo

    2015-04-01

    An important component of the terrestrial carbon balance is the efflux of CO2 from soils to the atmosphere, which is strongly influenced by changes in soil moisture and temperature. Continuous measurements of soil CO2 efflux are available around the world, and there is a need to develop and improve analyses to better quantify the precision of the measurements. We focused on random errors in measurements, which are caused by unknown and unpredictable changes such as fluctuating environmental conditions. We used the CO2 gradient flux method with two different algorithms to study the temporal variation of soil CO2 efflux and associated random errors at four different ecosystems with wide ranges in mean annual temperature, soil moisture, and soil CO2 efflux. Our results show that random errors were better explained by a double-exponential distribution, had a mean value close to zero, were nonheteroscedastic, and were independent of soil moisture conditions. Random errors increased with the magnitude of soil CO2 efflux and scale isometrically (scaling exponent ≈ 1) within and across all sites, with a single relation common to all data. This isometric scaling is unaffected by ecosystem type, soil moisture conditions, and soil CO2 efflux range (maximum and minimum values within an ecosystem). These results suggest larger uncertainty under extreme events that increase soil CO2 efflux rates. The accumulated annual uncertainty due to random errors varied between ±0.38 and ±2.39%. These results provide insights on the scalability of the sensitivity of soil CO2 efflux to changing weather conditions across ecosystems.

  16. Estimation and implications of random errors in whole-body dosimetry for targeted radionuclide therapy

    Science.gov (United States)

    Flux, Glenn D.; Guy, Matthew J.; Beddows, Ruth; Pryor, Matthew; Flower, Maggie A.

    2002-09-01

    For targeted radionuclide therapy, the level of activity to be administered is often determined from whole-body dosimetry performed on a pre-therapy tracer study. The largest potential source of error in this method is due to inconsistent or inaccurate activity retention measurements. The main aim of this study was to develop a simple method to quantify the uncertainty in the absorbed dose due to these inaccuracies. A secondary aim was to assess the effect of error propagation from the results of the tracer study to predictive absorbed dose estimates for the therapy as a result of using different radionuclides for each. Standard error analysis was applied to the MIRD schema for absorbed dose calculations. An equation was derived to describe the uncertainty in the absorbed dose estimate due solely to random errors in activity-time data, requiring only these data as input. Two illustrative examples are given. It is also shown that any errors present in the dosimetry calculations following the tracer study will propagate to errors in predictions made for the therapy study according to the ratio of the respective effective half-lives. If the therapy isotope has a much longer physical half-life than the tracer isotope (as is the case, for example, when using 123I as a tracer for 131I therapy) the propagation of errors can be significant. The equations derived provide a simple means to estimate two potentially large sources of error in whole-body absorbed dose calculations.
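
    The half-life ratio statement above can be written compactly. The sketch below assumes mono-exponential retention with a shared biological clearance rate and exactly known physical decay constants; it paraphrases the abstract's claim rather than reproducing the paper's derivation.

```latex
% Error propagation from the tracer study to the therapy prediction, assuming
% mono-exponential retention with a shared biological clearance rate
% (lambda_eff = lambda_phys + lambda_bio) and exactly known physical constants.
\[
  \delta\lambda_{\mathrm{bio}} = \delta\lambda_{\mathrm{eff}}^{\mathrm{tracer}}
  \quad\Longrightarrow\quad
  \frac{\delta\lambda_{\mathrm{eff}}^{\mathrm{therapy}}}{\lambda_{\mathrm{eff}}^{\mathrm{therapy}}}
  = \frac{T_{\mathrm{eff}}^{\mathrm{therapy}}}{T_{\mathrm{eff}}^{\mathrm{tracer}}}
    \cdot
    \frac{\delta\lambda_{\mathrm{eff}}^{\mathrm{tracer}}}{\lambda_{\mathrm{eff}}^{\mathrm{tracer}}},
\]
% so a tracer with a much shorter effective half-life than the therapy
% radionuclide (e.g. 123I as a tracer for 131I) amplifies the relative error
% in the predicted therapy kinetics.
```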

  17. Identifying and Removing Systematic Error due to Resistance Tolerance from Measurement System of Inclinometer

    Directory of Open Access Journals (Sweden)

    POP Septimiu

    2012-05-01

    This paper focuses on the effect produced by systematic errors of measurement devices in the monitoring of a system, here a dam. The effect of systematic error in dam monitoring is a wrong description of the dam's evolution: measurement errors make the dam appear to deviate from its normal evolution. The physical parameter, inclination, needs to be measured with an accuracy of 0.05%. The sensor used has a fully differential voltage output. In a measurement device, one error source is imperfections of the electronic components; the performance of measurement instruments depends on resistance tolerance. The error produced by resistance tolerance in a measurement device is a systematic error, which in the monitoring process becomes a random error. Measuring the transducer with a Wheatstone bridge would require high-accuracy resistors of 0.01%, but high-accuracy resistors increase the cost of the instrument. This source of systematic error can be eliminated if the transducer is measured without a resistance divider. To obtain a positive voltage at the sensor output, the sensor is supplied relative to the common-mode voltage of the analog converter. In this case the measurement error depends only on the ADC. The acquisition is made with a differential converter; to obtain a measurement accuracy of 0.05%, a 14-bit converter is used. The ADC has an auto-calibration function, so the offset and gain errors are internally compensated.

  18. Error Averaging Effect in Parallel Mechanism Coordinate Measuring Machine

    Directory of Open Access Journals (Sweden)

    Peng-Hao Hu

    2016-11-01

    Full Text Available The error averaging effect is one of the advantages of a parallel mechanism when individual errors are relatively large. However, further investigation is needed to support this claim with mathematical analysis and experiment. In the developed parallel coordinate measuring machine (PCMM), which is based on three pairs of prismatic-universal-universal joints (3-PUU), the error averaging mechanism was investigated and is analyzed in this report. First, the error transfer coefficients of the various errors in the PCMM were studied based on the established error transfer model, which shows how the various original errors in the parallel mechanism are averaged and reduced. Second, experimental measurements were carried out, including the angular errors and straightness errors of the three moving sliders. Finally, by solving the inverse kinematics with an iterative numerical method, it can be seen that the final measuring errors of the moving platform of the PCMM are reduced by the error averaging effect in comparison with the attributed geometric errors of the three moving slides. This study reveals the significance of the error averaging effect for a PCMM.
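
    A toy Monte Carlo sketch of the error averaging effect, assuming (purely for illustration) that the platform coordinate is, to first order, the mean of three independent slider contributions rather than the full 3-PUU inverse kinematics used in the paper.

      import numpy as np

      rng = np.random.default_rng(1)
      n_trials = 100_000
      sigma_leg = 5.0      # micrometres, random error of each individual slider

      # Independent errors on the three sliders; the platform coordinate is taken
      # (for illustration only) as the mean of the three contributions.
      leg_errors = rng.normal(0.0, sigma_leg, size=(n_trials, 3))
      platform_error = leg_errors.mean(axis=1)

      print(f"single-slider error SD : {sigma_leg:.2f} um")
      print(f"platform error SD      : {platform_error.std():.2f} um")
      print(f"theory sigma/sqrt(3)   : {sigma_leg / np.sqrt(3):.2f} um")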

  19. Rapid mapping of volumetric machine errors using distance measurements

    Energy Technology Data Exchange (ETDEWEB)

    Krulewich, D.A.

    1998-04-01

    This paper describes a relatively inexpensive, fast, and easy-to-execute approach to mapping the volumetric errors of a machine tool, coordinate measuring machine, or robot. An error map is used to characterize a machine or to improve its accuracy by compensating for the systematic errors. The method consists of three steps: (1) modeling the relationship between the volumetric error and the current state of the machine; (2) acquiring error data based on distance measurements throughout the work volume; and (3) fitting the error model using the nonlinear equation for the distance. The error model is formulated from the kinematic relationship among the six degrees of freedom of error on each moving axis. Expressing each parametric error as a function of position, the errors are combined to predict the error between the functional point and the workpiece, also as a function of position. A series of distances between several fixed base locations and various functional points in the work volume is measured using a Laser Ball Bar (LBB). Each measured distance is a nonlinear function of the commanded location of the machine, the machine error, and the base locations. Using the error model, the nonlinear equations are solved, producing a fit for the error model. Note also that, given approximate distances between each pair of base locations, the exact base locations in the machine coordinate system are determined during the nonlinear fitting procedure. Furthermore, with the use of more than three base locations, bias error in the measuring instrument can be removed. The volumetric errors of a three-axis commercial machining center have been mapped using this procedure. In this study, only errors associated with the nominal position of the machine were considered; other errors, such as thermally induced and load-induced errors, were not considered, although the mathematical model has the ability to account for them. Due to the proprietary nature of the projects we are
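
    A compact sketch of step (3), using a toy one-parameter-per-axis error model and a single base location instead of the actual LBB procedure: each measured distance is written as a nonlinear function of the unknown error-model parameters, which are then recovered by nonlinear least squares.

      import numpy as np
      from scipy.optimize import least_squares

      rng = np.random.default_rng(2)

      # Nominal (commanded) functional points in the work volume (mm) and one fixed base.
      points = rng.uniform(0, 500, size=(40, 3))
      base = np.array([-300.0, -200.0, -100.0])

      def positional_error(pts, c):
          # Toy volumetric error model: the error grows linearly with each axis position.
          return pts * c                                  # c = per-axis scale errors

      true_c = np.array([4e-5, -2e-5, 3e-5])              # "unknown" scale errors to recover
      true_pts = points + positional_error(points, true_c)
      measured_d = np.linalg.norm(true_pts - base, axis=1) + rng.normal(0, 1e-3, 40)

      def residuals(c):
          pred = np.linalg.norm(points + positional_error(points, c) - base, axis=1)
          return pred - measured_d

      fit = least_squares(residuals, x0=np.zeros(3))
      print("recovered per-axis scale errors:", fit.x)    # should be close to true_c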

  20. Color speckle measurement errors using system with XYZ filters

    Science.gov (United States)

    Kinoshita, Junichi; Yamamoto, Kazuhisa; Kuroda, Kazuo

    2017-09-01

    Measurement errors of color speckle are analyzed for a measurement system equipped with revolving XYZ filters and a 2D sensor. One error is caused by filter characteristics that do not fit the ideal color matching functions. The other is caused by the lack of correlation among the optical paths via the XYZ filters. The unfitted color speckle errors of all the pixel data can easily be calibrated by conversion between the measured BGR chromaticity triangle and the true triangle obtained from the BGR wavelength measurements. For the uncorrelated errors, the measured BGR chromaticity values are spread around the true values. As a result, calibrating the uncorrelated errors is more complicated, since the triangular conversion must be repeated pixel by pixel. Color speckle and its errors also greatly affect chromaticity measurements and the image quality of displays using coherent light sources.

  1. Image pre-filtering for measurement error reduction in digital image correlation

    Science.gov (United States)

    Zhou, Yihao; Sun, Chen; Song, Yuntao; Chen, Jubing

    2015-02-01

    In digital image correlation, the sub-pixel intensity interpolation causes a systematic error in the measured displacements. The error increases toward the high-frequency components of the speckle pattern. In practice, a captured image is usually corrupted by additive white noise. The noise introduces additional energy in the high frequencies and therefore raises the systematic error. Meanwhile, the noise also elevates the random error, which increases with the noise power. In order to reduce the systematic error and the random error of the measurements, we apply pre-filtering to the images prior to the correlation so that the high-frequency contents are suppressed. Two spatial-domain filters (binomial and Gaussian) and two frequency-domain filters (Butterworth and Wiener) are tested on speckle images undergoing both simulated and real-world translations. By evaluating the errors of the various combinations of speckle patterns, interpolators, noise levels, and filter configurations, we come to the following conclusions. All four filters are able to reduce the systematic error. Meanwhile, the random error can also be reduced if the signal power is mainly distributed around DC. For high-frequency speckle patterns, the low-pass filters (binomial, Gaussian and Butterworth) slightly increase the random error, and the Butterworth filter produces the lowest random error among them. By using a Wiener filter with over-estimated noise power, the random error can be reduced but the resultant systematic error is higher than that of the low-pass filters. In general, the Butterworth filter is recommended for error reduction due to its flexibility of passband selection and maximal preservation of the allowed frequencies. The binomial filter enables efficient implementation and thus becomes a good option if computational cost is a critical issue. While used together with pre-filtering, the B-spline interpolator produces lower systematic error than the bicubic interpolator and similar level of the random
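
    A brief sketch of the pre-filtering step using generic SciPy filtering (not the authors' implementation): suppress the high-frequency content of the speckle images with a Gaussian or Butterworth low-pass filter before running the correlation, applying identical settings to the reference and deformed images.

      import numpy as np
      from numpy.fft import fft2, ifft2, fftfreq
      from scipy import ndimage

      def gaussian_prefilter(img, sigma=1.0):
          # Spatial-domain low-pass filtering prior to the correlation.
          return ndimage.gaussian_filter(img.astype(float), sigma=sigma)

      def butterworth_prefilter(img, cutoff=0.2, order=2):
          # Frequency-domain Butterworth low-pass filter (cutoff in cycles/pixel).
          img = img.astype(float)
          fy = fftfreq(img.shape[0])[:, None]
          fx = fftfreq(img.shape[1])[None, :]
          r = np.sqrt(fx**2 + fy**2)
          h = 1.0 / (1.0 + (r / cutoff) ** (2 * order))
          return np.real(ifft2(fft2(img) * h))

      # Hypothetical noisy speckle image; the same filter settings would be applied
      # to both the reference and the deformed image before sub-pixel correlation.
      speckle = np.random.default_rng(3).normal(128, 30, size=(256, 256))
      smooth_gauss = gaussian_prefilter(speckle, sigma=0.8)
      smooth_butter = butterworth_prefilter(speckle, cutoff=0.25)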

  2. Error analysis of sensor measurements in a small UAV

    OpenAIRE

    Ackerman, James S.

    2005-01-01

    This thesis focuses on evaluating the measurement errors in the gimbal system of the SUAV autonomous aircraft developed at NPS. These measurements are used by the vision based target position estimation system developed at NPS. Analysis of the errors inherent in these measurements will help direct future investment in better sensors to improve the estimation system's performance.

  3. Measurement Error Estimation for Capacitive Voltage Transformer by Insulation Parameters

    Directory of Open Access Journals (Sweden)

    Bin Chen

    2017-03-01

    Full Text Available Measurement errors of a capacitive voltage transformer (CVT) are related to its equivalent parameters, to which its capacitive divider contributes the most. In daily operation, dielectric aging, moisture, dielectric breakdown, and similar effects exert mixed influences on the capacitive divider's insulation characteristics, leading to fluctuations in the equivalent parameters which result in measurement error. This paper proposes an equivalent circuit model to represent a CVT which incorporates the insulation characteristics of the capacitive divider. Through software simulation and laboratory experiments, the relationship between measurement errors and insulation parameters is obtained. It indicates that variation of the insulation parameters in a CVT will cause an appreciable measurement error. From field tests and calculation, the equivalent capacitance mainly affects the magnitude error, while the dielectric loss mainly affects the phase error. As the capacitance changes by 0.2%, the magnitude error can reach −0.2%. As the dielectric loss factor changes by 0.2%, the phase error can reach 5′. An increase of the equivalent capacitance and dielectric loss factor in the high-voltage capacitor will cause a positive real power measurement error, whereas an increase in the low-voltage capacitor will cause a negative real power measurement error.

  4. Estimating Measurement Error of the Patient Activation Measure for Respondents with Partially Missing Data

    Directory of Open Access Journals (Sweden)

    Ariel Linden

    2015-01-01

    Full Text Available The patient activation measure (PAM) is an increasingly popular instrument used as the basis for interventions to improve patient engagement and as an outcome measure to assess intervention effect. However, a PAM score may be calculated when there are missing responses, which could lead to substantial measurement error. In this paper, measurement error is systematically estimated across the full possible range of missing items (one to twelve), using a simulation in which populated items were randomly replaced with missing data for each of 1,138 complete surveys obtained in a randomized controlled trial. The PAM score was then calculated, followed by comparisons of the overall simulated mean, minimum, and maximum PAM scores to the true PAM score in order to assess the absolute percentage error (APE) for each comparison. With only one missing item, the average APE was 2.5% comparing the true PAM score to the simulated minimum score and 4.3% compared to the simulated maximum score. APEs increased with additional missing items, such that surveys with 12 missing items had average APEs of 29.7% (minimum) and 44.4% (maximum). Several suggestions and alternative approaches are offered that could be pursued to improve measurement accuracy when responses are missing.
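
    The simulation logic can be sketched roughly as below, assuming a simplified 13-item instrument whose score is an item sum rescaled to 0-100; the real PAM uses a proprietary Rasch-based calibration, so the numbers produced here are purely illustrative.

      import numpy as np

      rng = np.random.default_rng(4)
      n_items, n_surveys = 13, 1000

      # Hypothetical complete surveys: items scored 1 (lowest) to 4 (highest).
      items = rng.integers(1, 5, size=(n_surveys, n_items))

      def score(responses):
          # Simplified PAM-like score: item sum rescaled to 0-100 (illustrative only).
          s = responses.sum(axis=1)
          return 100 * (s - n_items) / (3 * n_items)

      true = score(items)

      for n_missing in (1, 6, 12):
          miss = np.array([rng.choice(n_items, n_missing, replace=False)
                           for _ in range(n_surveys)])
          lo, hi = items.copy(), items.copy()
          rows = np.arange(n_surveys)[:, None]
          lo[rows, miss] = 1        # missing items set to the minimum response
          hi[rows, miss] = 4        # missing items set to the maximum response
          denom = np.maximum(true, 1)          # guard against zero scores
          ape_min = np.mean(np.abs(score(lo) - true) / denom) * 100
          ape_max = np.mean(np.abs(score(hi) - true) / denom) * 100
          print(f"{n_missing:2d} missing: APE {ape_min:.1f}% (min) / {ape_max:.1f}% (max)")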

  5. Slope Error Measurement Tool for Solar Parabolic Trough Collectors: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Stynes, J. K.; Ihas, B.

    2012-04-01

    The National Renewable Energy Laboratory (NREL) has developed an optical measurement tool for parabolic solar collectors that measures the combined errors due to absorber misalignment and reflector slope error. The combined absorber alignment and reflector slope errors are measured using a digital camera to photograph the reflected image of the absorber in the collector. Previous work using the image of the reflection of the absorber finds the reflector slope errors from the reflection of the absorber and an independent measurement of the absorber location. The accuracy of the reflector slope error measurement is thus dependent on the accuracy of the absorber location measurement. By measuring the combined reflector-absorber errors, the uncertainty in the absorber location measurement is eliminated. The related performance merit, the intercept factor, depends on the combined effects of the absorber alignment and reflector slope errors. Measuring the combined effect provides a simpler measurement and a more accurate input to the intercept factor estimate. The minimal equipment and setup required for this measurement technique make it ideal for field measurements.

  6. Pressure Change Measurement Leak Testing Errors

    Energy Technology Data Exchange (ETDEWEB)

    Pryor, Jeff M [ORNL; Walker, William C [ORNL

    2014-01-01

    A pressure change test is a common leak testing method used in construction and Non-Destructive Examination (NDE). The test is known as being a fast, simple, and easy to apply evaluation method. While this method may be fairly quick to conduct and require simple instrumentation, the engineering behind this type of test is more complex than is apparent on the surface. This paper discusses some of the more common errors made during the application of a pressure change test and gives the test engineer insight into how to correctly compensate for these factors. The principles discussed here apply to ideal gases such as air or other monatomic or diatomic gases; however, these same principles can be applied to polyatomic gases or liquid flow rate with altered formulas specific to those types of tests using the same methodology.
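
    As a hedged sketch of the kind of compensation discussed above (generic ideal-gas bookkeeping with assumed numbers, not the formulas from the paper), the following shows how a raw pressure drop only indicates a leak once the change in gas temperature over the test period is accounted for.

      # Leak estimation from a pressure change test with temperature compensation,
      # assuming ideal gas behaviour (PV = nRT) and a rigid test volume.
      R = 8.314                     # J / (mol K)
      V = 0.75                      # m^3, test volume (hypothetical)

      p1, t1 = 501_300.0, 295.2     # Pa, K at the start of the test
      p2, t2 = 498_000.0, 294.1     # Pa, K at the end of the test
      duration = 4 * 3600.0         # s

      n1 = p1 * V / (R * t1)        # moles of gas at the start
      n2 = p2 * V / (R * t2)        # moles of gas at the end
      leak_rate = (n1 - n2) / duration

      # Without temperature compensation the pressure drop alone would misstate
      # the leak whenever the gas cools or warms during the test.
      uncompensated = (p1 - p2) * V / (R * t1) / duration
      print(f"compensated leak rate  : {leak_rate:.3e} mol/s")
      print(f"uncompensated estimate : {uncompensated:.3e} mol/s")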

  7. Deconvolution Estimation in Measurement Error Models: The R Package decon

    Directory of Open Access Journals (Sweden)

    Xiao-Feng Wang

    2011-03-01

    Full Text Available Data from many scientific areas often come with measurement error. Density or distribution function estimation from contaminated data and nonparametric regression with errors in variables are two important topics in measurement error models. In this paper, we present a new software package decon for R, which contains a collection of functions that use the deconvolution kernel methods to deal with the measurement error problems. The functions allow the errors to be either homoscedastic or heteroscedastic. To make the deconvolution estimators computationally more efficient in R, we adapt the fast Fourier transform algorithm for density estimation with error-free data to the deconvolution kernel estimation. We discuss the practical selection of the smoothing parameter in deconvolution methods and illustrate the use of the package through both simulated and real examples.

  8. Triphasic MRI of pelvic organ descent: sources of measurement error

    Energy Technology Data Exchange (ETDEWEB)

    Morren, Geert L. [Bowel and Digestion Centre, The Oxford Clinic, 38 Oxford Terrace, Christchurch (New Zealand)]. E-mail: geert_morren@hotmail.com; Balasingam, Adrian G. [Christchurch Radiology Group, P.O. Box 21107, 4th Floor, Leicester House, 291 Madras Street, Christchurch (New Zealand); Wells, J. Elisabeth [Department of Public Health and General Medicine, Christchurch School of Medicine, St. Elmo Courts, Christchurch (New Zealand); Hunter, Anne M. [Christchurch Radiology Group, P.O. Box 21107, 4th Floor, Leicester House, 291 Madras Street, Christchurch (New Zealand); Coates, Richard H. [Christchurch Radiology Group, P.O. Box 21107, 4th Floor, Leicester House, 291 Madras Street, Christchurch (New Zealand); Perry, Richard E. [Bowel and Digestion Centre, The Oxford Clinic, 38 Oxford Terrace, Christchurch (New Zealand)

    2005-05-01

    Purpose: To identify sources of error when measuring pelvic organ displacement during straining using triphasic dynamic magnetic resonance imaging (MRI). Materials and methods: Ten healthy nulliparous women underwent triphasic dynamic 1.5 T pelvic MRI twice, with 1 week between studies. The bladder was filled with 200 ml of a saline solution, and the vagina and rectum were opacified with ultrasound gel. T2-weighted images in the sagittal plane were analysed twice by each of the two observers in a blinded fashion. Horizontal and vertical displacement of the bladder neck, bladder base, introitus vaginae, posterior fornix, cul-de-sac, pouch of Douglas, anterior rectal wall, anorectal junction and change of the vaginal axis were measured eight times in each volunteer (two images, each read twice by two observers). Variance components were calculated for subject, observer, week, interactions of these three factors, and pure error. An overall standard error of measurement was calculated for a single observation by one observer on a film from one woman at one visit. Results: For the majority of anatomical reference points, the range of displacements measured was wide and the overall measurement error was large. Intra-observer error and week-to-week variation within a subject were important sources of measurement error. Conclusion: Important sources of measurement error when using triphasic dynamic MRI to measure pelvic organ displacement during straining were identified. Recommendations to minimize those errors are made.

  9. Comments on "A New Random-Error-Correction Code"

    DEFF Research Database (Denmark)

    Paaske, Erik

    1979-01-01

    This correspondence investigates the error propagation properties of six different systems using a (12, 6) systematic double-error-correcting convolutional encoder and a one-step majority-logic feedback decoder. For the generally accepted assumption that channel errors are much more likely to occur...

  10. Compensation for straightness measurement systematic errors in six degree-of-freedom motion error simultaneous measurement system.

    Science.gov (United States)

    Cui, Cunxing; Feng, Qibo; Zhang, Bin

    2015-04-10

    The straightness measurement systematic errors induced by error crosstalk, fabrication and installation deviation of optical element, measurement sensitivity variation, and the Abbe error in six degree-of-freedom simultaneous measurement system are analyzed in detail in this paper. Models for compensating these systematic errors were established and verified through a series of comparison experiments with the Automated Precision Inc. (API) 5D measurement system, and the experimental results showed that the maximum deviation in straightness error measurement could be reduced from 6.4 to 0.9 μm in the x-direction, and 8.8 to 0.8 μm in the y-direction, after the compensation.

  11. Correlated measurement error hampers association network inference

    NARCIS (Netherlands)

    Kaduk, M.; Hoefsloot, H.C.J.; Vis, D.J.; Reijmers, T.; Greef, J. van der; Smilde, A.K.; Hendriks, M.M.W.B.

    2014-01-01

    Modern chromatography-based metabolomics measurements generate large amounts of data in the form of abundances of metabolites. An increasingly popular way of representing and analyzing such data is by means of association networks. Ideally, such a network can be interpreted in terms of the

  12. Valuation Biases, Error Measures, and the Conglomerate Discount

    NARCIS (Netherlands)

    I. Dittmann (Ingolf); E.G. Maug (Ernst)

    2006-01-01

    We document the importance of the choice of error measure (percentage vs. logarithmic errors) for the comparison of alternative valuation procedures. We demonstrate for several multiple valuation methods (averaging with the arithmetic mean, harmonic mean, median, geometric mean) that the

  13. Measurement errors in cirrus cloud microphysical properties

    Directory of Open Access Journals (Sweden)

    H. Larsen

    Full Text Available The limited accuracy of current cloud microphysics sensors used in cirrus cloud studies imposes limitations on the use of the data to examine the cloud's broadband radiative behaviour, an important element of the global energy balance. We review the limitations of the instruments, PMS probes, most widely used for measuring the microphysical structure of cirrus clouds and show the effect of these limitations on descriptions of the cloud radiative properties. The analysis is applied to measurements made as part of the European Cloud and Radiation Experiment (EUCREX to determine mid-latitude cirrus microphysical and radiative properties.

    Key words. Atmospheric composition and structure (cloud physics and chemistry · Meteorology and atmospheric dynamics · Radiative processes · Instruments and techniques

  14. Impact of Hydraulic Property Measurement Errors on Geostatistical Characterization and Stochastic Flow and Transport Modeling

    Science.gov (United States)

    Holt, R. M.

    2001-12-01

    It has long been recognized that the spatial variability of hydraulic properties in heterogeneous geologic materials directly controls the movement of contaminants in the subsurface. Heterogeneity is typically described using spatial statistics (mean, variance, and correlation length) determined from measured properties. These spatial statistics can be used in probabilistic (stochastic) flow and transport models. We ask the question, how do measurement errors affect our ability to accurately estimate spatial statistics and reliably apply stochastic models of flow and transport? Spatial statistics of hydraulic properties can be accurately estimated when measurement errors are unbiased. Unfortunately, measurements become spatially biased (i.e., their spatial pattern is systematically distorted) when random observation errors are propagated through non-linear inversion models or inversion models incorrectly describe experimental physics. This type of bias results in distortion of the distribution and variogram of the hydraulic property and errors in stochastic model predictions. We use a Monte Carlo approach to determine the spatial bias in field- and laboratory-estimated unsaturated hydraulic properties subject to simple measurement errors. For this analysis, we simulate measurements in a series of idealized realities and consider only simple measurement errors that can be easily modeled. We find that hydraulic properties are strongly biased by small observation and inversion-model errors. This bias can lead to order-of-magnitude errors in spatial statistics and artificial cross-correlation between measured properties. We also find that measurement errors amplify uncertainty in experimental variograms and can preclude identification of variogram-model parameters. The use of biased spatial statistics in stochastic flow and transport models can yield order-of-magnitude errors in critical transport results. The effects of observation and inversion-model errors are

  15. Haplotype reconstruction error as a classical misclassification problem: introducing sensitivity and specificity as error measures.

    Directory of Open Access Journals (Sweden)

    Claudia Lamina

    Full Text Available BACKGROUND: Statistically reconstructing haplotypes from single nucleotide polymorphism (SNP) genotypes can lead to falsely classified haplotypes. This can be an issue when interpreting haplotype association results or when selecting subjects with certain haplotypes for subsequent functional studies. It was our aim to quantify haplotype reconstruction error and to provide tools for it. METHODS AND RESULTS: By numerous simulation scenarios, we systematically investigated several error measures, including discrepancy, error rate, and R(2), and introduced the sensitivity and specificity to this context. We exemplified several measures in the KORA study, a large population-based study from Southern Germany. We find that the specificity is slightly reduced only for common haplotypes, while the sensitivity was decreased for some, but not all rare haplotypes. The overall error rate was generally increasing with increasing number of loci, increasing minor allele frequency of SNPs, decreasing correlation between the alleles and increasing ambiguity. CONCLUSIONS: We conclude that, with the analytical approach presented here, haplotype-specific error measures can be computed to gain insight into the haplotype uncertainty. This method provides information on whether a specific risk haplotype can be expected to be reconstructed with essentially no misclassification or with high misclassification, and thus on the magnitude of expected bias in association estimates. We also illustrate that sensitivity and specificity separate two dimensions of the haplotype reconstruction error, which completely describe the misclassification matrix and thus provide the prerequisite for methods accounting for misclassification.
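
    A minimal sketch, with invented counts rather than KORA data, of the error measures introduced above: treat each haplotype as one class of a misclassification table and compute its sensitivity and specificity from reconstructed versus true assignments.

      import numpy as np

      # Rows = true haplotype, columns = statistically reconstructed haplotype
      # (invented counts for three haplotypes).
      confusion = np.array([
          [480,  15,   5],
          [ 20, 230,  10],
          [  5,  12,  48],
      ])

      for h in range(confusion.shape[0]):
          tp = confusion[h, h]
          fn = confusion[h].sum() - tp
          fp = confusion[:, h].sum() - tp
          tn = confusion.sum() - tp - fn - fp
          sens = tp / (tp + fn)     # reconstructed correctly when truly carried
          spec = tn / (tn + fp)     # not assigned when truly not carried
          print(f"haplotype {h}: sensitivity = {sens:.3f}, specificity = {spec:.3f}")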

  16. Tropical systematic and random error energetics based on NCEP ...

    Indian Academy of Sciences (India)


    are considered to be the possible causes for errors. Kamga et al (2000) depicted ECMWF model biases over tropical Africa using summer data of 1995 and conjectured possible suggestions for potential improvement of ECMWF model and data system. Recently, Roy Bhowmik (2004) has estimated the systematic error in ...

  17. Separating variability in healthcare practice patterns from random error.

    Science.gov (United States)

    Thomas, Laine E; Schulte, Phillip J

    2018-01-01

    Improving the quality of care that patients receive is a major focus of clinical research, particularly in the setting of cardiovascular hospitalization. Quality improvement studies seek to estimate and visualize the degree of variability in dichotomous treatment patterns and outcomes across different providers, whereby naive techniques either over-estimate or under-estimate the actual degree of variation. Various statistical methods have been proposed for similar applications including (1) the Gaussian hierarchical model, (2) the semi-parametric Bayesian hierarchical model with a Dirichlet process prior and (3) the non-parametric empirical Bayes approach of smoothing by roughening. Alternatively, we propose that a recently developed method for density estimation in the presence of measurement error, moment-adjusted imputation, can be adapted for this problem. The methods are compared by an extensive simulation study. In the present context, we find that the Bayesian methods are sensitive to the choice of prior and tuning parameters, whereas moment-adjusted imputation performs well with modest sample size requirements. The alternative approaches are applied to identify disparities in the receipt of early physician follow-up after myocardial infarction across 225 hospitals in the CRUSADE registry.
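
    A condensed sketch of why naive provider-level rates overstate practice variation and hierarchical estimates pull them back, using simulated hospitals and a simple method-of-moments shrinkage as a stand-in for the models compared in the paper.

      import numpy as np

      rng = np.random.default_rng(5)
      n_hosp = 225
      true_rate = rng.beta(20, 20, n_hosp)         # true follow-up rate per hospital
      n = rng.integers(30, 400, n_hosp)            # patients per hospital
      obs = rng.binomial(n, true_rate) / n         # naive observed rates

      # Method-of-moments estimate of the between-hospital variance, then shrink
      # each observed rate toward the overall mean (Gaussian-approximation
      # empirical Bayes; a simple stand-in for the models compared in the paper).
      overall = np.average(obs, weights=n)
      within = overall * (1 - overall) / n         # binomial sampling variance
      between = max(obs.var() - within.mean(), 1e-6)
      weight = between / (between + within)
      shrunk = overall + weight * (obs - overall)

      print(f"SD of true rates     : {true_rate.std():.3f}")
      print(f"SD of naive rates    : {obs.std():.3f}   (overstates variation)")
      print(f"SD of shrunken rates : {shrunk.std():.3f}")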

  18. Comparison of Oral Reading Errors between Contextual Sentences and Random Words among Schoolchildren

    Science.gov (United States)

    Khalid, Nursyairah Mohd; Buari, Noor Halilah; Chen, Ai-Hong

    2017-01-01

    This paper compares the oral reading errors between contextual sentences and random words among schoolchildren. Two sets of reading materials were developed to test the oral reading errors in 30 schoolchildren (10.00±1.44 years). Set A comprised contextual sentences while Set B comprised random words. The schoolchildren were asked to…

  19. The impact of data errors on the outcome of randomized clinical trials

    NARCIS (Netherlands)

    Buyse, Marc; Squifflet, Pierre; Coart, Elisabeth; Quinaux, Emmanuel; Punt, Cornelis J. A.; Saad, Everardo D.

    2017-01-01

    Background/aims: Considerable human and financial resources are typically spent to ensure that data collected for clinical trials are free from errors. We investigated the impact of random and systematic errors on the outcome of randomized clinical trials. Methods: We used individual patient data

  20. Error analysis of cine phase contrast MRI velocity measurements used for strain calculation.

    Science.gov (United States)

    Jensen, Elisabeth R; Morrow, Duane A; Felmlee, Joel P; Odegard, Gregory M; Kaufman, Kenton R

    2015-01-02

    Cine Phase Contrast (CPC) MRI offers unique insight into localized skeletal muscle behavior by providing the ability to quantify muscle strain distribution during cyclic motion. Muscle strain is obtained by temporally integrating and spatially differentiating CPC-encoded velocity. The aim of this study was to quantify CPC measurement accuracy and precision and to describe error propagation into displacement and strain. Using an MRI-compatible jig to move a B-gel phantom within a 1.5 T MRI bore, CPC-encoded velocities were collected. The three orthogonal encoding gradients (through plane, frequency, and phase) were evaluated independently in post-processing. Two systematic error types were corrected: eddy current-induced bias and calibration-type error. Measurement accuracy and precision were quantified before and after removal of systematic error. Through plane- and frequency-encoded data accuracy were within 0.4 mm/s after removal of systematic error - a 70% improvement over the raw data. Corrected phase-encoded data accuracy was within 1.3 mm/s. Measured random error was between 1 to 1.4 mm/s, which followed the theoretical prediction. Propagation of random measurement error into displacement and strain was found to depend on the number of tracked time segments, time segment duration, mesh size, and dimensional order. To verify this, theoretical predictions were compared to experimentally calculated displacement and strain error. For the parameters tested, experimental and theoretical results aligned well. Random strain error approximately halved with a two-fold mesh size increase, as predicted. Displacement and strain accuracy were within 2.6 mm and 3.3%, respectively. These results can be used to predict the accuracy and precision of displacement and strain in user-specific applications. Copyright © 2014 Elsevier Ltd. All rights reserved.
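
    A short sketch, with illustrative numbers rather than the phantom data, of the random-error propagation the study quantifies: integrating a velocity with independent random error over N time segments of length dt gives a displacement error that grows as sigma_v * dt * sqrt(N).

      import numpy as np

      rng = np.random.default_rng(6)
      sigma_v = 1.2e-3       # m/s, random error of CPC-encoded velocity (illustrative)
      dt = 0.02              # s, duration of one time segment
      n_segments = 25        # tracked time segments per motion cycle
      n_trials = 200_000

      # Monte Carlo: the displacement error is the sum of per-segment velocity
      # errors multiplied by the segment duration.
      v_err = rng.normal(0.0, sigma_v, size=(n_trials, n_segments))
      disp_err = (v_err * dt).sum(axis=1)

      print(f"Monte Carlo displacement error SD : {disp_err.std() * 1e3:.3f} mm")
      print(f"theory sigma_v*dt*sqrt(N)         : {sigma_v * dt * np.sqrt(n_segments) * 1e3:.3f} mm")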

  1. Measuring worst-case errors in a robot workcell

    Energy Technology Data Exchange (ETDEWEB)

    Simon, R.W.; Brost, R.C.; Kholwadwala, D.K. [Sandia National Labs., Albuquerque, NM (United States). Intelligent Systems and Robotics Center

    1997-10-01

    Errors in model parameters, sensing, and control are inevitably present in real robot systems. These errors must be considered in order to automatically plan robust solutions to many manipulation tasks. Lozano-Perez, Mason, and Taylor proposed a formal method for synthesizing robust actions in the presence of uncertainty; this method has been extended by several subsequent researchers. All of these results presume the existence of worst-case error bounds that describe the maximum possible deviation between the robot's model of the world and reality. This paper examines the problem of measuring these error bounds for a real robot workcell. These measurements are difficult, because of the desire to completely contain all possible deviations while avoiding bounds that are overly conservative. The authors present a detailed description of a series of experiments that characterize and quantify the possible errors in visual sensing and motion control for a robot workcell equipped with standard industrial robot hardware. In addition to providing a means for measuring these specific errors, these experiments shed light on the general problem of measuring worst-case errors.

  2. Measurement error caused by spatial misalignment in environmental epidemiology.

    Science.gov (United States)

    Gryparis, Alexandros; Paciorek, Christopher J; Zeka, Ariana; Schwartz, Joel; Coull, Brent A

    2009-04-01

    In many environmental epidemiology studies, the locations and/or times of exposure measurements and health assessments do not match. In such settings, health effects analyses often use the predictions from an exposure model as a covariate in a regression model. Such exposure predictions contain some measurement error as the predicted values do not equal the true exposures. We provide a framework for spatial measurement error modeling, showing that smoothing induces a Berkson-type measurement error with nondiagonal error structure. From this viewpoint, we review the existing approaches to estimation in a linear regression health model, including direct use of the spatial predictions and exposure simulation, and explore some modified approaches, including Bayesian models and out-of-sample regression calibration, motivated by measurement error principles. We then extend this work to the generalized linear model framework for health outcomes. Based on analytical considerations and simulation results, we compare the performance of all these approaches under several spatial models for exposure. Our comparisons underscore several important points. First, exposure simulation can perform very poorly under certain realistic scenarios. Second, the relative performance of the different methods depends on the nature of the underlying exposure surface. Third, traditional measurement error concepts can help to explain the relative practical performance of the different methods. We apply the methods to data on the association between levels of particulate matter and birth weight in the greater Boston area.
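
    A compact simulated-data sketch contrasting the two error types discussed above: classical error in the exposure attenuates the regression slope, whereas Berkson-type error from smoothed predictions leaves the slope roughly unbiased but less precise.

      import numpy as np

      rng = np.random.default_rng(7)
      n, beta = 50_000, 0.8
      sigma_x, sigma_u = 1.0, 0.7

      x = rng.normal(0, sigma_x, n)                   # true exposure
      y = beta * x + rng.normal(0, 1.0, n)            # health outcome

      # Classical error: we observe w = x + u and regress y on w.
      w = x + rng.normal(0, sigma_u, n)
      slope_classical = np.cov(w, y)[0, 1] / np.var(w)

      # Berkson error: the assigned exposure is a smoothed prediction z and the
      # true exposure scatters around it, x = z + u (variances chosen so that the
      # true exposure keeps the same overall variance as above).
      z = rng.normal(0, np.sqrt(sigma_x**2 - sigma_u**2), n)
      xb = z + rng.normal(0, sigma_u, n)
      yb = beta * xb + rng.normal(0, 1.0, n)
      slope_berkson = np.cov(z, yb)[0, 1] / np.var(z)

      print(f"true slope      : {beta}")
      print(f"classical error : {slope_classical:.3f}  (attenuated)")
      print(f"Berkson error   : {slope_berkson:.3f}  (approximately unbiased)")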

  3. Resolution, measurement errors and uncertainties on deflectometric acquisition of large optical surfaces "DaOS"

    Science.gov (United States)

    Hofbauer, E.; Rascher, R.; Friedke, F.; Kometer, R.

    2017-06-01

    The basic physical measurement principle in DaOS is the vignettation of a quasi-parallel light beam emitted by an expanded light source in auto collimation arrangement. The beam is reflected by the surface under test, using invariant deflection by a moving and scanning pentaprism. Thereby nearly any curvature of the specimen is measurable. Resolution, systematic errors and random errors will be shown and explicitly discussed for the profile determination error. Measurements for a "plano-double-sombrero" device will be analyzed and reconstructed to find out the limit of resolution and errors of the reconstruction model and algorithms. These measurements are compared critically to reference results that are recorded by interferometry and Deflectometric Flatness Reference (DFR) method using a scanning penta device.

  4. Analysis of Random Errors in Horizontal Sextant Angles

    Science.gov (United States)

    1980-09-01

    sea horizon, bringing the direct and reflected images into coincidence and reading the micrometer and vernier. This is repeated several times... differences due to the direction of rotation of the micrometer drum were examined, as well as the variability in the determination of sextant index error. ... minutes of arc respectively. In addition, systematic errors resulting from angular differences due to the direction of rotation of the micrometer drum

  5. Random measures, theory and applications

    CERN Document Server

    Kallenberg, Olav

    2017-01-01

    Offering the first comprehensive treatment of the theory of random measures, this book has a very broad scope, ranging from basic properties of Poisson and related processes to the modern theories of convergence, stationarity, Palm measures, conditioning, and compensation. The three large final chapters focus on applications within the areas of stochastic geometry, excursion theory, and branching processes. Although this theory plays a fundamental role in most areas of modern probability, much of it, including the most basic material, has previously been available only in scores of journal articles. The book is primarily directed towards researchers and advanced graduate students in stochastic processes and related areas.

  6. Detection and Classification of Measurement Errors in Bioimpedance Spectroscopy.

    Directory of Open Access Journals (Sweden)

    David Ayllón

    Full Text Available Bioimpedance spectroscopy (BIS) measurement errors may be caused by parasitic stray capacitance, impedance mismatch, cross-talking or their very likely combination. An accurate detection and identification is of extreme importance for further analysis because in some cases and for some applications, certain measurement artifacts can be corrected, minimized or even avoided. In this paper we present a robust method to detect the presence of measurement artifacts and identify what kind of measurement error is present in BIS measurements. The method is based on supervised machine learning and uses a novel set of generalist features for measurement characterization in different immittance planes. Experimental validation has been carried out using a database of complex spectra BIS measurements obtained from different BIS applications and containing six different types of errors, as well as error-free measurements. The method obtained a low classification error (0.33%) and has shown good generalization. Since both the features and the classification schema are relatively simple, the implementation of this pre-processing task in the current hardware of bioimpedance spectrometers is possible.

  7. Detection and Classification of Measurement Errors in Bioimpedance Spectroscopy.

    Science.gov (United States)

    Ayllón, David; Gil-Pita, Roberto; Seoane, Fernando

    2016-01-01

    Bioimpedance spectroscopy (BIS) measurement errors may be caused by parasitic stray capacitance, impedance mismatch, cross-talking or their very likely combination. An accurate detection and identification is of extreme importance for further analysis because in some cases and for some applications, certain measurement artifacts can be corrected, minimized or even avoided. In this paper we present a robust method to detect the presence of measurement artifacts and identify what kind of measurement error is present in BIS measurements. The method is based on supervised machine learning and uses a novel set of generalist features for measurement characterization in different immittance planes. Experimental validation has been carried out using a database of complex spectra BIS measurements obtained from different BIS applications and containing six different types of errors, as well as error-free measurements. The method obtained a low classification error (0.33%) and has shown good generalization. Since both the features and the classification schema are relatively simple, the implementation of this pre-processing task in the current hardware of bioimpedance spectrometers is possible.

  8. Estimation of slope for measurement error model with equation error: Applications on serum kanamycin data

    Science.gov (United States)

    Saqr, Anwar; Khan, Shahjahan

    2017-05-01

    This paper introduces a statistical method to estimate the parameters of the bivariate structural errors-in-variables (EIV) model. This is a complex problem when there is no, or only uncertain, prior knowledge of the measurement error variances. The proposed estimators of the parameters of the EIV model are derived based on a mathematical modification method for the observed data. This method is suggested to reproduce an explanatory variable that has statistical characteristics equivalent to those of the unobserved explanatory variable, and to correct for the effects of measurement error in the predictors. The proposed method produces robust estimators; it is straightforward, easy to implement, and takes into account the equation errors. The simulation studies show the new estimator to be generally more efficient and less biased than some previous approaches. Compared to the maximum likelihood method via the simulation studies, the estimators of the proposed method are nearly asymptotically unbiased and efficient when there is no or only uncertain prior knowledge of the measurement error variances. Numerical comparisons of the simulation study results are included. In addition, the results are illustrated with an application to the well-known real data set of serum kanamycin.

  9. Random Errors induced by the Superconducting Windings in the LHC Dipoles

    CERN Document Server

    Scandale, Walter; Wolf, R

    2000-01-01

    The problem of estimating the random errors in the LHC dipole is considered. The main contributions to random errors are due to random displacements of the coil position with respect to nominal design and to the variation of the magnetization of the superconducting cable. Coil displacements can be induced either by mechanical tolerances or by the manufacturing process. Analytical and numerical scaling laws that provide the dependence of the random errors due to random displacements on the multipolar order are worked out. Both simplified and more realistic models of the coil structure are analysed. The obtained scaling laws are used to extract from experimental field shape data the amplitude of the coil displacements in the magnet prototypes. Finally, random errors due to interstrand resistance variation during the ramp are estimated

  10. A multisite and multi-model analysis of random errors in soil CO2 efflux across soil water conditions

    Science.gov (United States)

    Cueva, A.; Bahn, M.; Pumpanen, J.; Vargas, R.

    2012-12-01

    Climate change is suggested to influence patterns of precipitation and water availability around the world, and these changes are likely to alter ecosystem carbon fluxes. An important component of the ecosystem carbon balance is the efflux of CO2 from soils to the atmosphere, which is strongly influenced by soil moisture and temperature. The increasing application of automated systems is resulting in growing datasets of continuous measurements, which offer the possibility of a consistent uncertainty analysis. Recently, soil CO2 efflux has frequently been estimated from soil CO2 profiling by using the gradient flux method, which is based on Fick's first law of diffusion, reporting only the measured value without taking into account systematic and random errors. Improvements in technology and constant equipment calibration can minimize systematic errors; therefore we focused on random errors, whose characteristics are generally unknown for soil CO2 efflux. Here, we characterized random errors in soil CO2 effluxes determined with two approaches based on the gradient flux method to calculate soil CO2 efflux in three different types of ecosystems across different soil water conditions. Results showed that random errors tend to differ between approaches. While the two tested models have a similar representation of the physical process and input parameters, the random errors are distributed differently across the different ranges of soil water content. Differences between random errors are likely to be larger in extreme conditions of soil water content (i.e., dry and wet), suggesting the need for improvement in understanding the biophysical processes driving soil CO2 efflux under these conditions.
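
    A minimal sketch of the gradient flux method with illustrative parameter values (and a Millington-Quirk style diffusivity correction, which is an assumption here, not necessarily either of the two tested algorithms): estimate the efflux from the CO2 concentration difference between two soil depths and an effective gas diffusivity.

      # Gradient flux method for soil CO2 efflux, based on Fick's first law,
      # F = -Ds * dC/dz, with Ds the effective CO2 diffusivity in soil obtained
      # here from a Millington-Quirk style correction (illustrative values only).
      porosity = 0.55            # total soil porosity (m3 m-3)
      theta = 0.20               # volumetric soil water content (m3 m-3)
      air_filled = porosity - theta
      d_air = 1.47e-5            # CO2 diffusivity in free air (m2 s-1)
      ds = d_air * air_filled ** (10 / 3) / porosity ** 2

      # CO2 molar concentrations (mol m-3) at two depths (m, positive downward).
      c_shallow, z_shallow = 0.030, 0.02
      c_deep, z_deep = 0.075, 0.08

      grad = (c_deep - c_shallow) / (z_deep - z_shallow)   # dC/dz
      efflux = ds * grad         # sign flipped so positive = CO2 leaving the soil
      print(f"soil CO2 efflux ~ {efflux * 1e6:.2f} umol m-2 s-1")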

  11. An in-situ measuring method for planar straightness error

    Science.gov (United States)

    Chen, Xi; Fu, Luhua; Yang, Tongyu; Sun, Changku; Wang, Zhong; Zhao, Yan; Liu, Changjie

    2018-01-01

    To address several problems that arise when measuring the planar shape error of a workpiece, an in-situ measuring method based on laser triangulation is presented in this paper. The method avoids the inefficiency of traditional methods such as the knife straightedge, as well as the time and cost requirements of a coordinate measuring machine (CMM). A laser-based measuring head is designed and installed on the spindle of a numerical control (NC) machine. The measuring head moves along a planned path to sample the measuring points. The spatial coordinates of the measuring points are obtained by combining the laser triangulation displacement sensor with the coordinate system of the NC machine, which makes the intended measurement achievable. The planar straightness error is evaluated using particle swarm optimization (PSO). To verify the feasibility and accuracy of the measuring method, simulation experiments were carried out with a CMM. Comparing the measurement results of the measuring head with the corresponding values obtained by a composite measuring machine, it is verified that the method can realize high-precision, automatic measurement of the planar straightness error of the workpiece.

  12. QUALITATIVE DATA AND ERROR MEASUREMENT IN INPUT-OUTPUT-ANALYSIS

    NARCIS (Netherlands)

    NIJKAMP, P; OOSTERHAVEN, J; OUWERSLOOT, H; RIETVELD, P

    1992-01-01

    This paper is a contribution to the rapidly emerging field of qualitative data analysis in economics. Ordinal data techniques and error measurement in input-output analysis are here combined in order to test the reliability of a low level of measurement and precision of data by means of a stochastic

  13. Measurement error of waist circumference: Gaps in knowledge

    NARCIS (Netherlands)

    Verweij, L.M.; Terwee, C.B.; Proper, K.I.; Hulshof, C.T.; Mechelen, W.V. van

    2013-01-01

    Objective It is not clear whether measuring waist circumference in clinical practice is problematic because the measurement error is unclear, as well as what constitutes a clinically relevant change. The present study aimed to summarize what is known from state-of-the-art research. Design To

  14. Measurement error of waist circumference: gaps in knowledge

    NARCIS (Netherlands)

    Verweij, L.M.; Terwee, C.B.; Proper, K.I.; Hulshof, C.T.J.; van Mechelen, W.

    2013-01-01

    Objective It is not clear whether measuring waist circumference in clinical practice is problematic because the measurement error is unclear, as well as what constitutes a clinically relevant change. The present study aimed to summarize what is known from state-of-the-art research. Design To

  15. Assessment of salivary flow rate: biologic variation and measure error.

    NARCIS (Netherlands)

    Jongerius, P.H.; Limbeek, J. van; Rotteveel, J.J.

    2004-01-01

    OBJECTIVE: To investigate the applicability of the swab method in the measurement of salivary flow rate in multiple-handicap drooling children. To quantify the measurement error of the procedure and the biologic variation in the population. STUDY DESIGN: Cohort study. METHODS: In a repeated

  16. Measurement errors with low-cost citizen science radiometers

    OpenAIRE

    Bardají, R.; Piera Fernández, Jaume

    2016-01-01

    The KdUINO is a Do-It-Yourself buoy with low-cost radiometers that measure a parameter related to water transparency: the diffuse attenuation coefficient integrated over all the photosynthetically active radiation. In this contribution, we analyze the measurement errors of a novel low-cost multispectral radiometer that is used with the KdUINO. Peer Reviewed

  17. Detection and Classification of Measurement Errors in Bioimpedance Spectroscopy

    OpenAIRE

    David Ayllón; Roberto Gil-Pita; Fernando Seoane

    2016-01-01

    Bioimpedance spectroscopy (BIS) measurement errors may be caused by parasitic stray capacitance, impedance mismatch, cross-talking or their very likely combination. An accurate detection and identification is of extreme importance for further analysis because in some cases and for some applications, certain measurement artifacts can be corrected, minimized or even avoided. In this paper we present a robust method to detect the presence of measurement artifacts and identify what kind of measur...

  18. Nano-metrology: The art of measuring X-ray mirrors with slope errors <100 nrad.

    Science.gov (United States)

    Alcock, Simon G; Nistea, Ioana; Sawhney, Kawal

    2016-05-01

    We present a comprehensive investigation of the systematic and random errors of the nano-metrology instruments used to characterize synchrotron X-ray optics at Diamond Light Source. With experimental skill and careful analysis, we show that these instruments used in combination are capable of measuring state-of-the-art X-ray mirrors. Examples are provided of how Diamond metrology data have helped to achieve slope errors of <100 nrad for optical systems installed on synchrotron beamlines, including: iterative correction of substrates using ion beam figuring and optimal clamping of monochromator grating blanks in their holders. Simulations demonstrate how random noise from the Diamond-NOM's autocollimator adds into the overall measured value of the mirror's slope error, and thus predict how many averaged scans are required to accurately characterize different grades of mirror.
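
    The averaging argument in the last sentence can be sketched with assumed numbers: if the autocollimator noise is independent between scans, it adds in quadrature with the true mirror slope error, and averaging N scans shrinks the noise contribution by the square root of N.

      import numpy as np

      true_slope_error = 80e-9       # rad RMS, mirror figure we want to resolve
      noise_per_scan = 120e-9        # rad RMS, random autocollimator noise (assumed)

      for n_scans in (1, 4, 16, 64):
          noise = noise_per_scan / np.sqrt(n_scans)     # averaging N scans
          measured = np.hypot(true_slope_error, noise)  # quadrature sum
          print(f"{n_scans:3d} scans: measured RMS = {measured * 1e9:5.1f} nrad "
                f"(noise contribution {noise * 1e9:5.1f} nrad)")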

  19. Nano-metrology: The art of measuring X-ray mirrors with slope errors <100 nrad

    Energy Technology Data Exchange (ETDEWEB)

    Alcock, Simon G., E-mail: simon.alcock@diamond.ac.uk; Nistea, Ioana; Sawhney, Kawal [Diamond Light Source Ltd., Harwell Science and Innovation Campus, Didcot, Oxfordshire OX11 0DE (United Kingdom)

    2016-05-15

    We present a comprehensive investigation of the systematic and random errors of the nano-metrology instruments used to characterize synchrotron X-ray optics at Diamond Light Source. With experimental skill and careful analysis, we show that these instruments used in combination are capable of measuring state-of-the-art X-ray mirrors. Examples are provided of how Diamond metrology data have helped to achieve slope errors of <100 nrad for optical systems installed on synchrotron beamlines, including: iterative correction of substrates using ion beam figuring and optimal clamping of monochromator grating blanks in their holders. Simulations demonstrate how random noise from the Diamond-NOM’s autocollimator adds into the overall measured value of the mirror’s slope error, and thus predict how many averaged scans are required to accurately characterize different grades of mirror.

  20. Measuring the severity of prescribing errors: a systematic review.

    Science.gov (United States)

    Garfield, Sara; Reynolds, Matthew; Dermont, Liesbeth; Franklin, Bryony Dean

    2013-12-01

    Prescribing errors are common. It has been suggested that the severity as well as the frequency of errors should be assessed when measuring prescribing error rates. This would provide more clinically relevant information, and allow more complete evaluation of the effectiveness of interventions designed to reduce errors. The objective of this systematic review was to describe the tools used to assess prescribing error severity in studies reporting hospital prescribing error rates. The following databases were searched: MEDLINE, EMBASE, International Pharmaceutical Abstracts, and CINAHL (January 1985-January 2013). We included studies that reported the detection and rate of prescribing errors in prescriptions for adult and/or pediatric hospital inpatients, or elaborated on the properties of severity assessment tools used by these studies. Studies not published in English, or that evaluated errors for only one disease or drug class, one route of administration, or one type of prescribing error, were excluded, as were letters and conference abstracts. One reviewer screened all abstracts and obtained complete articles. A second reviewer assessed 10 % of all abstracts and complete articles to check reliability of the screening process. Tools were appraised for country and method of development, whether the tool assessed actual or potential harm, levels of severity assessed, and results of any validity and reliability studies. Fifty-seven percent of 107 studies measuring prescribing error rates included an assessment of severity. Forty tools were identified that assessed severity, only two of which had acceptable reliability and validity. In general, little information was given on the method of development or ease of use of the tools, although one tool required four reviewers and was thus potentially time consuming. The review was limited to studies written in English. One of the review authors was also the author of one of the tools, giving a potential source of bias

  1. Measurements of stem diameter: implications for individual- and stand-level errors.

    Science.gov (United States)

    Paul, Keryn I; Larmour, John S; Roxburgh, Stephen H; England, Jacqueline R; Davies, Micah J; Luck, Hamish D

    2017-08-01

    Stem diameter is one of the most common measurements made to assess the growth of woody vegetation, and the commercial and environmental benefits that it provides (e.g. wood or biomass products, carbon sequestration, landscape remediation). Yet inconsistency in its measurement is a continuing source of error in estimates of stand-scale measures such as basal area, biomass, and volume. Here we assessed errors in stem diameter measurement through repeated measurements of individual trees and shrubs of varying size and form (i.e. single- and multi-stemmed) across a range of contrasting stands, from complex mixed-species plantings to commercial single-species plantations. We compared a standard diameter tape with a Stepped Diameter Gauge (SDG) for time efficiency and measurement error. Measurement errors in diameter were slightly (but significantly) influenced by size and form of the tree or shrub, and stem height at which the measurement was made. Compared to standard tape measurement, the mean systematic error with SDG measurement was only -0.17 cm, but varied between -0.10 and -0.52 cm. Similarly, random error was relatively large, with standard deviations (and percentage coefficients of variation) averaging only 0.36 cm (and 3.8%), but varying between 0.14 and 0.61 cm (and 1.9 and 7.1%). However, at the stand scale, sampling errors (i.e. how well individual trees or shrubs selected for measurement of diameter represented the true stand population in terms of the average and distribution of diameter) generally had at least a tenfold greater influence on random errors in basal area estimates than errors in diameter measurements. This supports the use of diameter measurement tools that have high efficiency, such as the SDG. Use of the SDG almost halved the time required for measurements compared to the diameter tape. Based on these findings, recommendations include the following: (i) use of a tape to maximise accuracy when developing allometric models, or when

  2. Measurement error models for survey statistics and economic archaeology

    OpenAIRE

    Groß, Marcus

    2016-01-01

    The present work is concerned with so-called measurement error models in applied statistics. The data were analyzed and processed from two very different fields. On the one hand survey and register data, which are used in the Survey statistics and on the other hand anthropological data on prehistoric skeletons. For both fields the problem arises that some variables cannot be measured with sufficient accuracy. This can be due to privacy or measuring inaccuracies. This circumstance can be summa...

  3. Cumulative Measurement Errors for Dynamic Testing of Space Flight Hardware

    Science.gov (United States)

    Winnitoy, Susan

    2012-01-01

    measurements during hardware motion and contact. While performing dynamic testing of an active docking system, researchers found that the data from the motion platform, test hardware and two external measurement systems exhibited frame offsets and rotational errors. While the errors were relatively small when considering the motion scale overall, they substantially exceeded the individual accuracies for each component. After evaluating both the static and dynamic measurements, researchers found that the static measurements introduced significantly more error into the system than the dynamic measurements even though, in theory, the static measurement errors should be smaller than the dynamic. In several cases, the magnitude of the errors varied widely for the static measurements. Upon further investigation, researchers found the larger errors to be a consequence of hardware alignment issues, frame location and measurement technique whereas the smaller errors were dependent on the number of measurement points. This paper details and quantifies the individual and cumulative errors of the docking system and describes methods for reducing the overall measurement error. The overall quality of the dynamic docking tests for flight hardware verification was improved by implementing these error reductions.

  4. A Model of Self-Monitoring Blood Glucose Measurement Error.

    Science.gov (United States)

    Vettoretti, Martina; Facchinetti, Andrea; Sparacino, Giovanni; Cobelli, Claudio

    2017-07-01

    A reliable model of the probability density function (PDF) of self-monitoring of blood glucose (SMBG) measurement error would be important for several applications in diabetes, like testing in silico insulin therapies. In the literature, the PDF of SMBG error is usually described by a Gaussian function, whose symmetry and simplicity are unable to properly describe the variability of experimental data. Here, we propose a new methodology to derive more realistic models of SMBG error PDF. The blood glucose range is divided into zones where error (absolute or relative) presents a constant standard deviation (SD). In each zone, a suitable PDF model is fitted by maximum-likelihood to experimental data. Model validation is performed by goodness-of-fit tests. The method is tested on two databases collected by the One Touch Ultra 2 (OTU2; Lifescan Inc, Milpitas, CA) and the Bayer Contour Next USB (BCN; Bayer HealthCare LLC, Diabetes Care, Whippany, NJ). In both cases, skew-normal and exponential models are used to describe the distribution of errors and outliers, respectively. Two zones were identified: zone 1 with constant SD absolute error; zone 2 with constant SD relative error. Goodness-of-fit tests confirmed that identified PDF models are valid and superior to Gaussian models used so far in the literature. The proposed methodology allows to derive realistic models of SMBG error PDF. These models can be used in several investigations of present interest in the scientific community, for example, to perform in silico clinical trials to compare SMBG-based with nonadjunctive CGM-based insulin treatments.
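
    A condensed sketch of the methodology using simulated meter readings and an assumed zone boundary: split the glucose range into an absolute-error zone and a relative-error zone, and fit a skew-normal PDF to the error in each zone by maximum likelihood.

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(8)

      # Hypothetical paired data: reference glucose (mg/dL) and meter readings.
      ref = rng.uniform(40, 400, 4000)
      smbg = ref + rng.normal(0, np.where(ref < 75, 6.0, 0.07 * ref))  # toy error model

      zone_boundary = 75.0           # assumed boundary between the two zones (mg/dL)
      zone1 = ref < zone_boundary    # zone 1: roughly constant-SD absolute error
      zone2 = ~zone1                 # zone 2: roughly constant-SD relative error

      abs_err = smbg[zone1] - ref[zone1]
      rel_err = (smbg[zone2] - ref[zone2]) / ref[zone2]

      # Maximum-likelihood skew-normal fits in each zone (shape, location, scale).
      fit_zone1 = stats.skewnorm.fit(abs_err)
      fit_zone2 = stats.skewnorm.fit(rel_err)
      print("zone 1 (absolute error) skew-normal params:", np.round(fit_zone1, 3))
      print("zone 2 (relative error) skew-normal params:", np.round(fit_zone2, 3))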

  5. Reliability and measurement error of 3-dimensional regional lumbar motion measures

    DEFF Research Database (Denmark)

    Mieritz, Rune M; Bronfort, Gert; Kawchuk, Greg

    2012-01-01

    The purpose of this study was to systematically review the literature on reproducibility (reliability and/or measurement error) of 3-dimensional (3D) regional lumbar motion measurement systems.

  6. Peer Effects and Measurement Error: The Impact of Sampling Variation in School Survey Data (Evidence from PISA)

    Science.gov (United States)

    Micklewright, John; Schnepf, Sylke V.; Silva, Pedro N.

    2012-01-01

    Investigation of peer effects on achievement with sample survey data on schools may mean that only a random sample of the population of peers is observed for each individual. This generates measurement error in peer variables similar in form to the textbook case of errors-in-variables, resulting in the estimated peer group effects in an OLS…
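
    The attenuation mechanism referred to here is the textbook errors-in-variables result: sampling noise in the peer-mean regressor pulls the OLS slope toward zero by the reliability ratio. A minimal simulation of this effect, with made-up school and sample sizes and no connection to the PISA data, might look as follows.

        # Illustrative simulation: observing only a subsample of peers adds noise to the
        # peer-mean regressor and attenuates the OLS slope toward beta * reliability.
        import numpy as np

        rng = np.random.default_rng(1)
        n_schools, class_size, sample_size = 2000, 30, 10

        true_peer_mean = rng.normal(0, 1, n_schools)
        # Peers scatter around the school mean; only a subsample is observed.
        peers = true_peer_mean[:, None] + rng.normal(0, 1, (n_schools, class_size))
        observed_peer_mean = peers[:, :sample_size].mean(axis=1)   # error-prone regressor

        beta = 0.5
        outcome = beta * true_peer_mean + rng.normal(0, 1, n_schools)

        slope_true = np.polyfit(true_peer_mean, outcome, 1)[0]
        slope_obs = np.polyfit(observed_peer_mean, outcome, 1)[0]
        reliability = np.var(true_peer_mean) / np.var(observed_peer_mean)
        print(f"true-regressor slope {slope_true:.3f}, sampled-peer slope {slope_obs:.3f}, "
              f"predicted attenuated slope {beta * reliability:.3f}")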

  7. Bayesian modeling of measurement error in predictor variables

    NARCIS (Netherlands)

    Fox, Gerardus J.A.; Glas, Cornelis A.W.

    2003-01-01

    It is shown that measurement error in predictor variables can be modeled using item response theory (IRT). The predictor variables, that may be defined at any level of an hierarchical regression model, are treated as latent variables. The normal ogive model is used to describe the relation between

  8. GY SAMPLING THEORY IN ENVIRONMENTAL STUDIES 2: SUBSAMPLING ERROR MEASUREMENTS

    Science.gov (United States)

    Sampling can be a significant source of error in the measurement process. The characterization and cleanup of hazardous waste sites require data that meet site-specific levels of acceptable quality if scientifically supportable decisions are to be made. In support of this effort,...

  9. Consistent estimation of linear panel data models with measurement error

    NARCIS (Netherlands)

    Meijer, Erik; Spierdijk, Laura; Wansbeek, Thomas

    2017-01-01

    Measurement error causes a bias towards zero when estimating a panel data linear regression model. The panel data context offers various opportunities to derive instrumental variables allowing for consistent estimation. We consider three sources of moment conditions: (i) restrictions on the

  10. GMM estimation in panel data models with measurement error

    NARCIS (Netherlands)

    Wansbeek, T.J.

    Griliches and Hausman (J. Econom. 32 (1986) 93) have introduced GMM estimation in panel data models with measurement error. We present a simple, systematic approach to derive moment conditions for such models under a variety of assumptions. (C) 2001 Elsevier Science S.A. All rights reserved.

  11. Comparing measurement errors for formants in synthetic and natural vowels.

    Science.gov (United States)

    Shadle, Christine H; Nam, Hosung; Whalen, D H

    2016-02-01

    The measurement of formant frequencies of vowels is among the most common measurements in speech studies, but measurements are known to be biased by the particular fundamental frequency (F0) exciting the formants. Approaches to reducing the errors were assessed in two experiments. In the first, synthetic vowels were constructed with five different first formant (F1) values and nine different F0 values; formant bandwidths, and higher formant frequencies, were constant. Input formant values were compared to manual measurements and automatic measures using the linear prediction coding-Burg algorithm, linear prediction closed-phase covariance, the weighted linear prediction-attenuated main excitation (WLP-AME) algorithm [Alku, Pohjalainen, Vainio, Laukkanen, and Story (2013). J. Acoust. Soc. Am. 134(2), 1295-1313], spectra smoothed cepstrally and by averaging repeated discrete Fourier transforms. Formants were also measured manually from pruned reassigned spectrograms (RSs) [Fulop (2011). Speech Spectrum Analysis (Springer, Berlin)]. All but WLP-AME and RS had large errors in the direction of the strongest harmonic; the smallest errors occur with WLP-AME and RS. In the second experiment, these methods were used on vowels in isolated words spoken by four speakers. Results for the natural speech show that F0 bias affects all automatic methods, including WLP-AME; only the formants measured manually from RS appeared to be accurate. In addition, RS coped better with weaker formants and glottal fry.

  12. Laser straightness interferometer system with rotational error compensation and simultaneous measurement of six degrees of freedom error parameters.

    Science.gov (United States)

    Chen, Benyong; Xu, Bin; Yan, Liping; Zhang, Enzheng; Liu, Yanna

    2015-04-06

    A laser straightness interferometer system with rotational error compensation and simultaneous measurement of six degrees of freedom error parameters is proposed. The optical configuration of the proposed system is designed and the mathematic model for simultaneously measuring six degrees of freedom parameters of the measured object including three rotational parameters of the yaw, pitch and roll errors and three linear parameters of the horizontal straightness error, vertical straightness error and straightness error's position is established. To address the influence of the rotational errors produced by the measuring reflector in laser straightness interferometer, the compensation method of the straightness error and its position is presented. An experimental setup was constructed and a series of experiments including separate comparison measurement of every parameter, compensation of straightness error and its position and simultaneous measurement of six degrees of freedom parameters of a precision linear stage were performed to demonstrate the feasibility of the proposed system. Experimental results show that the measurement data of the multiple degrees of freedom parameters obtained from the proposed system are in accordance with those obtained from the compared instruments and the presented compensation method can achieve good effect in eliminating the influence of rotational errors on the measurement of straightness error and its position.

  13. #2 - An Empirical Assessment of Exposure Measurement Error ...

    Science.gov (United States)

    Background: • Differing degrees of exposure error across pollutants • Previous focus on quantifying and accounting for exposure error in single-pollutant models • Examine exposure errors for multiple pollutants and provide insights on the potential for bias and attenuation of effect estimates in single- and bi-pollutant epidemiological models. The National Exposure Research Laboratory (NERL) Human Exposure and Atmospheric Sciences Division (HEASD) conducts research in support of EPA's mission to protect human health and the environment. HEASD's research program supports Goal 1 (Clean Air) and Goal 4 (Healthy People) of EPA's strategic plan. More specifically, our division conducts research to characterize the movement of pollutants from the source to contact with humans. Our multidisciplinary research program produces Methods, Measurements, and Models to identify relationships between and characterize processes that link source emissions, environmental concentrations, human exposures, and target-tissue dose. The impact of these tools is improved regulatory programs and policies for EPA.

  14. Systematic and Random errors in prostate treatments; Errores sistematicos y aleatorios en los tratamientos de prostata

    Energy Technology Data Exchange (ETDEWEB)

    Quinones Rodriguez, L. A.; Salas Buzon, M. C.; Castro Ramirez, I. J.; Iborra Oquendo, M. A.; Urena Llinares, A.; Angulo Pain, E.; Ramos Caballero, L. J.; Seguro Fernandez, A.; Mora Melendez, R.

    2013-07-01

    In prostate treatments, the largest sources of geometric uncertainty are the inter-fraction movements of the tumor and those associated with daily patient positioning. One of the solutions adopted to reduce these sources of uncertainty is the implantation of fiducial marks in the prostate. The objective of this work is to analyze the displacement data recorded from those marks and to use the results to calculate the systematic and random uncertainties in treatments in which such fiducial marks are not used. (Author)
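
    A common way to obtain the population systematic and random components from such fiducial displacement data is to take the spread of the per-patient mean shifts and the root-mean-square of the per-patient day-to-day standard deviations, respectively. The sketch below applies this convention to synthetic one-axis data; the numbers are illustrative, not the authors' results, and the quoted margin recipe is the widely cited van Herk formula rather than anything from this abstract.

        # Population decomposition of setup errors from daily couch shifts (synthetic data).
        import numpy as np

        rng = np.random.default_rng(2)
        n_patients, n_fractions = 40, 25
        patient_offset = rng.normal(0, 2.0, n_patients)               # mm, per-patient offset
        shifts = patient_offset[:, None] + rng.normal(0, 1.5, (n_patients, n_fractions))

        per_patient_mean = shifts.mean(axis=1)
        per_patient_sd = shifts.std(axis=1, ddof=1)

        group_mean = per_patient_mean.mean()                          # M
        systematic = per_patient_mean.std(ddof=1)                     # Sigma
        random_err = np.sqrt(np.mean(per_patient_sd ** 2))            # sigma
        print(f"M = {group_mean:.2f} mm, Sigma = {systematic:.2f} mm, sigma = {random_err:.2f} mm")
        # Widely quoted CTV-to-PTV margin recipe: 2.5*Sigma + 0.7*sigma (van Herk).
        print("margin (2.5*Sigma + 0.7*sigma) =", round(2.5 * systematic + 0.7 * random_err, 1), "mm")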

  15. Bayesian Semiparametric Density Deconvolution in the Presence of Conditionally Heteroscedastic Measurement Errors

    KAUST Repository

    Sarkar, Abhra

    2014-10-02

    We consider the problem of estimating the density of a random variable when precise measurements on the variable are not available, but replicated proxies contaminated with measurement error are available for sufficiently many subjects. Under the assumption of additive measurement errors this reduces to a problem of deconvolution of densities. Deconvolution methods often make restrictive and unrealistic assumptions about the density of interest and the distribution of measurement errors, e.g., normality and homoscedasticity and thus independence from the variable of interest. This article relaxes these assumptions and introduces novel Bayesian semiparametric methodology based on Dirichlet process mixture models for robust deconvolution of densities in the presence of conditionally heteroscedastic measurement errors. In particular, the models can adapt to asymmetry, heavy tails and multimodality. In simulation experiments, we show that our methods vastly outperform a recent Bayesian approach based on estimating the densities via mixtures of splines. We apply our methods to data from nutritional epidemiology. Even in the special case when the measurement errors are homoscedastic, our methodology is novel and dominates other methods that have been proposed previously. Additional simulation results, instructions on getting access to the data set and R programs implementing our methods are included as part of online supplemental materials.

  16. Error compensation in random vector double step saccades with and without global adaptation.

    Science.gov (United States)

    Zerr, Paul; Thakkar, Katharine N; Uzunbajakau, Siarhei; Van der Stigchel, Stefan

    2016-10-01

    In saccade sequences without visual feedback endpoint errors pose a problem for subsequent saccades. Accurate error compensation has previously been demonstrated in double step saccades (DSS) and is thought to rely on a copy of the saccade motor vector. However, these studies typically use fixed target vectors on each trial, calling into question the generalizability of the findings due to the high stimulus predictability. We present a random walk DSS paradigm (random target vector amplitudes and directions) to provide a more complete, realistic and generalizable description of error compensation in saccade sequences. We regressed the vector between the endpoint of the second saccade and the endpoint of a hypothetical second saccade that does not take first saccade error into account on the ideal compensation vector. This provides a direct and complete estimation of error compensation in DSS. We observed error compensation with varying stimulus displays that was comparable to previous findings. We also employed this paradigm to extend experiments that showed accurate compensation for systematic undershoots after specific-vector saccade adaptation. Utilizing the random walk paradigm for saccade adaptation by Rolfs et al. (2010) together with our random walk DSS paradigm we now also demonstrate transfer of adaptation from reactive to memory guided saccades for global saccade adaptation. We developed a new, generalizable DSS paradigm with unpredictable stimuli and successfully employed it to verify, replicate and extend previous findings, demonstrating that endpoint errors are compensated for saccades in all directions and variable amplitudes. Copyright © 2016 Elsevier Ltd. All rights reserved.

  17. Correcting the Standard Errors of 2-Stage Residual Inclusion Estimators for Mendelian Randomization Studies.

    Science.gov (United States)

    Palmer, Tom M; Holmes, Michael V; Keating, Brendan J; Sheehan, Nuala A

    2017-11-01

    Mendelian randomization studies use genotypes as instrumental variables to test for and estimate the causal effects of modifiable risk factors on outcomes. Two-stage residual inclusion (TSRI) estimators have been used when researchers are willing to make parametric assumptions. However, researchers are currently reporting uncorrected or heteroscedasticity-robust standard errors for these estimates. We compared several different forms of the standard error for linear and logistic TSRI estimates in simulations and in real-data examples. Among others, we consider standard errors modified from the approach of Newey (1987), Terza (2016), and bootstrapping. In our simulations Newey, Terza, bootstrap, and corrected 2-stage least squares (in the linear case) standard errors gave the best results in terms of coverage and type I error. In the real-data examples, the Newey standard errors were 0.5% and 2% larger than the unadjusted standard errors for the linear and logistic TSRI estimators, respectively. We show that TSRI estimators with modified standard errors have correct type I error under the null. Researchers should report TSRI estimates with modified standard errors instead of reporting unadjusted or heteroscedasticity-robust standard errors. © The Author(s) 2017. Published by Oxford University Press on behalf of the Johns Hopkins Bloomberg School of Public Health.
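
    For the linear case, a bare-bones TSRI estimate with a nonparametric bootstrap standard error can be sketched as follows on simulated data; the instrument strength, effect size, and sample size are arbitrary, and the code is not intended to reproduce the paper's simulations or its other standard-error corrections.

        # Linear two-stage residual inclusion (TSRI) with a bootstrap standard error.
        import numpy as np

        rng = np.random.default_rng(3)
        n = 5000
        g = rng.binomial(2, 0.3, n).astype(float)        # genotype instrument
        u = rng.normal(0, 1, n)                          # unmeasured confounder
        x = 0.4 * g + u + rng.normal(0, 1, n)            # exposure
        y = 0.25 * x + u + rng.normal(0, 1, n)           # outcome, true causal effect 0.25

        def tsri(g, x, y):
            # Stage 1: exposure on instrument; Stage 2: outcome on exposure + stage-1 residual.
            X1 = np.column_stack([np.ones_like(g), g])
            resid = x - X1 @ np.linalg.lstsq(X1, x, rcond=None)[0]
            X2 = np.column_stack([np.ones_like(x), x, resid])
            return np.linalg.lstsq(X2, y, rcond=None)[0][1]    # coefficient on exposure

        estimate = tsri(g, x, y)
        boot = []
        for _ in range(500):
            idx = rng.integers(0, n, n)                  # resample subjects with replacement
            boot.append(tsri(g[idx], x[idx], y[idx]))
        print(f"TSRI estimate {estimate:.3f}, bootstrap SE {np.std(boot, ddof=1):.3f}")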

  18. Linear mixed models for replication data to efficiently allow for covariate measurement error.

    Science.gov (United States)

    Bartlett, Jonathan W; De Stavola, Bianca L; Frost, Chris

    2009-11-10

    It is well known that measurement error in the covariates of regression models generally causes bias in parameter estimates. Correction for such biases requires information concerning the measurement error, which is often in the form of internal validation or replication data. Regression calibration (RC) is a popular approach to correct for covariate measurement error, which involves predicting the true covariate using error-prone measurements. Likelihood methods have previously been proposed as an alternative approach to estimate the parameters in models affected by measurement error, but have been relatively infrequently employed in medical statistics and epidemiology, partly because of computational complexity and concerns regarding robustness to distributional assumptions. We show how a standard random-intercepts model can be used to obtain maximum likelihood (ML) estimates when the outcome model is linear or logistic regression under certain normality assumptions, when internal error-prone replicate measurements are available. Through simulations we show that for linear regression, ML gives more efficient estimates than RC, although the gain is typically small. Furthermore, we show that RC and ML estimates remain consistent even when the normality assumptions are violated. For logistic regression, our implementation of ML is consistent if the true covariate is conditionally normal given the outcome, in contrast to RC. In simulations, this ML estimator showed less bias in situations where RC gives non-negligible biases. Our proposal makes the ML approach to dealing with covariate measurement error more accessible to researchers, which we hope will improve its viability as a useful alternative to methods such as RC.
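
    The regression calibration step mentioned above can be illustrated with two error-prone replicates per subject: the measurement-error variance is estimated from the within-pair differences, the true covariate is predicted from the replicate mean via the estimated reliability, and the outcome is regressed on that prediction. The sketch below uses simulated data and arbitrary parameter values; it is not the mixed-model ML estimator proposed in the paper.

        # Regression calibration with two replicates per subject (simulated data).
        import numpy as np

        rng = np.random.default_rng(4)
        n = 4000
        x = rng.normal(0, 1, n)                          # true covariate
        w1 = x + rng.normal(0, 0.8, n)                   # replicate 1
        w2 = x + rng.normal(0, 0.8, n)                   # replicate 2
        y = 1.0 + 0.5 * x + rng.normal(0, 1, n)

        wbar = (w1 + w2) / 2
        var_u = np.var(w1 - w2, ddof=1) / 2              # measurement-error variance
        var_x = np.var(wbar, ddof=1) - var_u / 2         # true-covariate variance
        lam = var_x / (var_x + var_u / 2)                # reliability of the replicate mean
        x_hat = wbar.mean() + lam * (wbar - wbar.mean()) # calibrated covariate

        naive = np.polyfit(wbar, y, 1)[0]
        rc = np.polyfit(x_hat, y, 1)[0]
        print(f"naive slope {naive:.3f}, regression-calibration slope {rc:.3f} (true 0.5)")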

  19. Study of self-compensation of random field errors in low-/β insertion triplets of hadron colliders

    Science.gov (United States)

    Shi, Jicong

    1999-06-01

    The presence of unavoidable field errors in superconducting low-β insertion triplets is one of the major causes for limiting the dynamic aperture of colliders during collisions. Sorting of quadrupoles of the triplets, in which the quadrupoles are installed in the ring according to a certain sequence based on the measured multipole errors, is a way to reduce the adverse effects of random field errors without an increase in the cost. Because of a very small phase advance within each triplet, significant self-compensation of random field errors of the triplet can be achieved even with sorting of a small number of quadrupoles. A study on low-β insertion triplets of the LHC interaction regions show that sorting of the quadrupoles with the vector sorting scheme is quite effective in enlargement of the dynamic aperture and improvement of the linearity of the phase-space region occupied by beams. Since the sorting scheme is based entirely on the local compensation of random errors, the effectiveness of the sorting is independent of the operational condition of the collider.

  20. Modeling observation error and its effects in a random walk/extinction model.

    Science.gov (United States)

    Buonaccorsi, John P; Staudenmayer, John; Carreras, Maximo

    2006-11-01

    This paper examines the consequences of observation errors for the "random walk with drift", a model that incorporates density independence and is frequently used in population viability analysis. Exact expressions are given for biases in estimates of the mean, variance and growth parameters under very general models for the observation errors. For other quantities, such as the finite rate of increase, and probabilities about population size in the future we provide and evaluate approximate expressions. These expressions explain the biases induced by observation error without relying exclusively on simulations, and also suggest ways to correct for observation error. A secondary contribution is a careful discussion of observation error models, presented in terms of either log-abundance or abundance. This discussion recognizes that the bias and variance in observation errors may change over time, the result of changing sampling effort or dependence on the underlying population being sampled.
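
    The core point, that observation error inflates naive variance estimates from differenced log-abundance while leaving the drift estimate roughly unbiased, can be seen in a few lines of simulation. The parameter values below are arbitrary, and the correction shown in the comment is only the leading-order result for independent observation errors.

        # Bias from observation error in the "random walk with drift" model.
        import numpy as np

        rng = np.random.default_rng(5)
        T, mu, sigma2, tau2 = 2000, 0.02, 0.04, 0.03

        log_n = np.cumsum(rng.normal(mu, np.sqrt(sigma2), T))        # true log-abundance
        obs = log_n + rng.normal(0, np.sqrt(tau2), T)                # observed with error

        d_true, d_obs = np.diff(log_n), np.diff(obs)
        print("drift:    true-based %.4f  observed %.4f" % (d_true.mean(), d_obs.mean()))
        # Differencing the observation errors adds about 2*tau2 to the estimated variance.
        print("variance: true-based %.4f  observed %.4f  (sigma2 + 2*tau2 = %.4f)"
              % (d_true.var(ddof=1), d_obs.var(ddof=1), sigma2 + 2 * tau2))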

  1. Error in total ozone measurements arising from aerosol attenuation

    Science.gov (United States)

    Thomas, R. W. L.; Basher, R. E.

    1979-01-01

    A generalized least squares method for deducing both total ozone and aerosol extinction spectrum parameters from Dobson spectrophotometer measurements was developed. An error analysis applied to this system indicates that there is little advantage to additional measurements once a sufficient number of line pairs have been employed to solve for the selected detail in the attenuation model. It is shown that when there is a predominance of small particles (less than about 0.35 microns in diameter) the total ozone from the standard AD system is too high by about one percent. When larger particles are present the derived total ozone may be an overestimate or an underestimate but serious errors occur only for narrow polydispersions.

  2. M/T method based incremental encoder velocity measurement error analysis and self-adaptive error elimination algorithm

    DEFF Research Database (Denmark)

    Chen, Yangyang; Yang, Ming; Long, Jiang

    2017-01-01

    For motor control applications, the speed-loop performance largely depends on the accuracy of the speed feedback signal. The M/T method, due to its high theoretical accuracy, is the most widely used for speed measurement with incremental encoders. However, the inherent encoder optical grating error and A/D conversion error make it hard to achieve the theoretical speed measurement accuracy. In this paper, hardware-caused speed measurement errors are analyzed and modeled in detail, and a Single-Phase Self-adaptive M/T method is proposed to suppress the speed measurement error. In the end, simulation…
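
    For reference, a plain (non-adaptive) M/T speed calculation can be sketched as follows: M1 encoder pulses and M2 high-frequency clock pulses are counted over a gate that ends on an encoder edge, so the gate spans an integer number of encoder pulses. The clock frequency and encoder resolution below are placeholder values; the paper's Single-Phase Self-adaptive variant is not implemented here.

        # Basic M/T speed calculation (illustrative parameter values).
        def mt_speed_rpm(m1_encoder_pulses, m2_clock_pulses, clock_hz=80e6, pulses_per_rev=2048):
            gate_time_s = m2_clock_pulses / clock_hz
            revolutions = m1_encoder_pulses / pulses_per_rev
            return 60.0 * revolutions / gate_time_s

        # Example: 350 encoder pulses while 1_000_000 clock pulses (12.5 ms) are counted.
        print(mt_speed_rpm(350, 1_000_000))     # about 820 r/min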

  3. PROCESSING AND ANALYSIS OF THE MEASURED ALIGNMENT ERRORS FOR RHIC.

    Energy Technology Data Exchange (ETDEWEB)

    PILAT,F.; HEMMER,M.; PTITSIN,V.; TEPIKIAN,S.; TRBOJEVIC,D.

    1999-03-29

    All elements of the Relativistic Heavy Ion Collider (RHIC) have been installed in ideal survey locations, which are defined as the optimum locations of the fiducials with respect to the positions generated by the design. The alignment process included a presurvey of all elements that could affect the beams. During this procedure special attention was paid to the precise determination of the quadrupole centers as well as the roll angles of the quadrupoles and dipoles. After installation the machine was surveyed, and the resulting as-built measured positions of the fiducials were stored and structured in the survey database. We describe how the alignment errors, inferred by comparison of ideal and as-built data, have been processed and analyzed by including them in the RHIC modeling software. The RHIC model, which also includes individual measured errors for all magnets in the machine and is automatically generated from databases, allows the study of the impact of the measured alignment errors on the machine.

  4. Test-Cost-Sensitive Attribute Reduction of Data with Normal Distribution Measurement Errors

    OpenAIRE

    Hong Zhao; Fan Min; William Zhu

    2013-01-01

    The measurement error with normal distribution is universal in applications. Generally, smaller measurement error requires better instrument and higher test cost. In decision making based on attribute values of objects, we shall select an attribute subset with appropriate measurement error to minimize the total test cost. Recently, error-range-based covering rough set with uniform distribution error was proposed to investigate this issue. However, the measurement errors satisfy normal distrib...

  5. Measurement error in CT assessment of appendix diameter

    Energy Technology Data Exchange (ETDEWEB)

    Trout, Andrew T.; Towbin, Alexander J. [Cincinnati Children's Hospital Medical Center, Department of Radiology, MLC 5031, Cincinnati, OH (United States); Zhang, Bin [Cincinnati Children's Hospital Medical Center, Department of Biostatistics and Epidemiology, Cincinnati, OH (United States)

    2016-12-15

    Appendiceal diameter continues to be cited as an important criterion for diagnosis of appendicitis by computed tomography (CT). To assess sources of error and variability in appendiceal diameter measurements by CT. In this institutional review board-approved review of imaging and medical records, we reviewed CTs performed in children <18 years of age between Jan. 1 and Dec. 31, 2010. Appendiceal diameter was measured in the axial and coronal planes by two reviewers (R1, R2). One year later, 10% of cases were remeasured. For patients who had multiple CTs, serial measurements were made to assess within patient variability. Measurement differences between planes, within and between reviewers, within patients and between CT and pathological measurements were assessed using correlation coefficients and paired t-tests. Six hundred thirty-one CTs performed in 519 patients (mean age: 10.9 ± 4.9 years, 50.8% female) were reviewed. Axial and coronal measurements were strongly correlated (r = 0.92-0.94, P < 0.0001) with coronal plane measurements significantly larger (P < 0.0001). Measurements were strongly correlated between reviewers (r = 0.89-0.9, P < 0.0001) but differed significantly in both planes (axial: +0.2 mm, P=0.003; coronal: +0.1 mm, P=0.007). Repeat measurements were significantly different for one reviewer only in the axial plane (0.3 mm difference, P<0.05). Within patients imaged multiple times, measured appendix diameters differed significantly in the axial plane for both reviewers (R1: 0.5 mm, P = 0.031; R2: 0.7 mm, P = 0.022). Multiple potential sources of measurement error raise concern about the use of rigid diameter cutoffs for the diagnosis of acute appendicitis by CT. (orig.)

  6. Error reduction techniques for measuring long synchrotron mirrors

    Energy Technology Data Exchange (ETDEWEB)

    Irick, S.

    1998-07-01

    Many instruments and techniques are used for measuring long mirror surfaces. A Fizeau interferometer may be used to measure mirrors much longer than the interferometer aperture size by using grazing incidence at the mirror surface and analyzing the light reflected from a flat end mirror. Advantages of this technique are data acquisition speed and use of a common instrument. Disadvantages are reduced sampling interval, uncertainty of tangential position, and sagittal/tangential aspect ratio other than unity. Also, deep aspheric surfaces cannot be measured on a Fizeau interferometer without a specially made fringe nulling holographic plate. Other scanning instruments have been developed for measuring height, slope, or curvature profiles of the surface, but lack accuracy for very long scans required for X-ray synchrotron mirrors. The Long Trace Profiler (LTP) was developed specifically for long x-ray mirror measurement, and still outperforms other instruments, especially for aspheres. Thus, this paper focuses on error reduction techniques for the LTP.

  7. Experimental measurement-device-independent quantum random-number generation

    Science.gov (United States)

    Nie, You-Qi; Guan, Jian-Yu; Zhou, Hongyi; Zhang, Qiang; Ma, Xiongfeng; Zhang, Jun; Pan, Jian-Wei

    2016-12-01

    The randomness from a quantum random-number generator (QRNG) relies on the accurate characterization of its devices. However, device imperfections and inaccurate characterizations can result in wrong entropy estimation and bias in practice, which highly affects the genuine randomness generation and may even induce the disappearance of quantum randomness in an extreme case. Here we experimentally demonstrate a measurement-device-independent (MDI) QRNG based on time-bin encoding to achieve certified quantum randomness even when the measurement devices are uncharacterized and untrusted. The MDI-QRNG is randomly switched between the regular randomness generation mode and a test mode, in which four quantum states are randomly prepared to perform measurement tomography in real time. With a clock rate of 25 MHz, the MDI-QRNG generates a final random bit rate of 5.7 kbps. Such implementation with an all-fiber setup provides an approach to construct a fully integrated MDI-QRNG with trusted but error-prone devices in practice.

  8. Functional multiple indicators, multiple causes measurement error models.

    Science.gov (United States)

    Tekwe, Carmen D; Zoh, Roger S; Bazer, Fuller W; Wu, Guoyao; Carroll, Raymond J

    2017-05-08

    Objective measures of oxygen consumption and carbon dioxide production by mammals are used to predict their energy expenditure. Since energy expenditure is not directly observable, it can be viewed as a latent construct with multiple physical indirect measures such as respiratory quotient, volumetric oxygen consumption, and volumetric carbon dioxide production. Metabolic rate is defined as the rate at which metabolism occurs in the body. Metabolic rate is also not directly observable. However, heat is produced as a result of metabolic processes within the body. Therefore, metabolic rate can be approximated by heat production plus some errors. While energy expenditure and metabolic rates are correlated, they are not equivalent. Energy expenditure results from physical function, while metabolism can occur within the body without the occurrence of physical activities. In this manuscript, we present a novel approach for studying the relationship between metabolic rate and indicators of energy expenditure. We do so by extending our previous work on MIMIC ME models to allow responses that are sparsely observed functional data, defining the sparse functional multiple indicators, multiple cause measurement error (FMIMIC ME) models. The mean curves in our proposed methodology are modeled using basis splines. A novel approach for estimating the variance of the classical measurement error based on functional principal components is presented. The model parameters are estimated using the EM algorithm and a discussion of the model's identifiability is provided. We show that the defined model is not a trivial extension of longitudinal or functional data methods, due to the presence of the latent construct. Results from its application to data collected on Zucker diabetic fatty rats are provided. Simulation results investigating the properties of our approach are also presented. © 2017, The International Biometric Society.

  9. Calculating radiotherapy margins based on Bayesian modelling of patient specific random errors

    Science.gov (United States)

    Herschtal, A.; te Marvelde, L.; Mengersen, K.; Hosseinifard, Z.; Foroudi, F.; Devereux, T.; Pham, D.; Ball, D.; Greer, P. B.; Pichler, P.; Eade, T.; Kneebone, A.; Bell, L.; Caine, H.; Hindson, B.; Kron, T.

    2015-02-01

    Collected real-life clinical target volume (CTV) displacement data show that some patients undergoing external beam radiotherapy (EBRT) demonstrate significantly more fraction-to-fraction variability in their displacement (‘random error’) than others. This contrasts with the common assumption made by historical recipes for margin estimation for EBRT, that the random error is constant across patients. In this work we present statistical models of CTV displacements in which random errors are characterised by an inverse gamma (IG) distribution in order to assess the impact of random error variability on CTV-to-PTV margin widths, for eight real world patient cohorts from four institutions, and for different sites of malignancy. We considered a variety of clinical treatment requirements and penumbral widths. The eight cohorts consisted of a total of 874 patients and 27 391 treatment sessions. Compared to a traditional margin recipe that assumes constant random errors across patients, for a typical 4 mm penumbral width, the IG based margin model mandates that in order to satisfy the common clinical requirement that 90% of patients receive at least 95% of prescribed RT dose to the entire CTV, margins be increased by a median of 10% (range over the eight cohorts -19% to +35%). This substantially reduces the proportion of patients for whom margins are too small to satisfy clinical requirements.

  10. Proportional Hazards Model with Covariate Measurement Error and Instrumental Variables.

    Science.gov (United States)

    Song, Xiao; Wang, Ching-Yun

    2014-12-01

    In biomedical studies, covariates with measurement error may occur in survival data. Existing approaches mostly require certain replications on the error-contaminated covariates, which may not be available in the data. In this paper, we develop a simple nonparametric correction approach for estimation of the regression parameters in the proportional hazards model using a subset of the sample where instrumental variables are observed. The instrumental variables are related to the covariates through a general nonparametric model, and no distributional assumptions are placed on the error and the underlying true covariates. We further propose a novel generalized methods of moments nonparametric correction estimator to improve the efficiency over the simple correction approach. The efficiency gain can be substantial when the calibration subsample is small compared to the whole sample. The estimators are shown to be consistent and asymptotically normal. Performance of the estimators is evaluated via simulation studies and by an application to data from an HIV clinical trial. Estimation of the baseline hazard function is not addressed.

  11. Longitudinal changes in cardiorespiratory fitness: measurement error or true change?

    Science.gov (United States)

    Jackson, Andrew S; Kampert, James B; Barlow, Carolyn E; Morrow, James R; Church, Timothy S; Blair, Steven N

    2004-07-01

    This study examined the thesis that the reported Aerobics Center Longitudinal Study (ACLS) mortality reductions associated with improved cardiorespiratory fitness were because of measurement error of serial treadmill tests. We tested the research hypothesis that longitudinal changes in cardiorespiratory fitness of the ACLS cohort were a multivariate function of changes in self-report physical activity (SR-PA), resting heart rate, and body mass index (BMI). We used the results of three serial maximal treadmill tests (T1, T2, and T3) to evaluate the serial changes in cardiorespiratory fitness of 4675 men. The mean duration between the three serial tests examined was: T2 - T1, 1.9 yr; T3 - T2, 6.1 yr; and T3 - T1, 8.0 yr. Maximum and resting heart rate, BMI, SR-PA, and maximum Balke treadmill duration were measured on each occasion. General linear models analysis showed that with change in maximum heart rate statistically controlled change in treadmill time performance was a function of independent changes in SR-PA, BMI, and R-HR. These variables accounted for significant (P heart rate gained the most fitness between serial tests. These results support the research hypothesis tested. Variations in serial ACLS treadmill tests are not just due to measurement error alone, but also to systematic variation linked with changes in lifestyle.

  12. ERROR DISTRIBUTION EVALUATION OF THE THIRD VANISHING POINT BASED ON RANDOM STATISTICAL SIMULATION

    Directory of Open Access Journals (Sweden)

    C. Li

    2012-07-01

    Full Text Available POS, integrated from GPS/INS (Inertial Navigation Systems), allows rapid and accurate determination of the position and attitude of remote sensing equipment for MMS (Mobile Mapping Systems). However, INS not only has systematic error but is also very expensive. Therefore, in this paper the error distributions of vanishing points are studied and tested in order to substitute for INS in MMS in some special land-based scenes, such as ground façades, where usually only two vanishing points can be detected. Thus, the traditional calibration approach based on three orthogonal vanishing points is being challenged. In this article, firstly, the line clusters, which are parallel to each other in object space and correspond to the vanishing points, are detected based on RANSAC (Random Sample Consensus) and a parallelism geometric constraint. Secondly, condition adjustment with parameters is utilized to estimate the nonlinear error equations of two vanishing points (VX, VY), and a way to set initial weights for the adjustment solution of single-image vanishing points is presented. The vanishing points are solved and their error distributions estimated based on an iteration method with variable weights, the co-factor matrix and error ellipse theory. Thirdly, under the condition of known error ellipses of the two vanishing points (VX, VY), and on the basis of the triangle geometric relationship of three vanishing points, the error distribution of the third vanishing point (VZ) is calculated and evaluated by random statistical simulation, ignoring camera distortion. The Monte Carlo methods utilized for random statistical estimation are also presented. Finally, experimental results for the vanishing point coordinates and their error distributions are shown and analyzed.

  13. Development of an Abbe Error Free Micro Coordinate Measuring Machine

    Directory of Open Access Journals (Sweden)

    Qiangxian Huang

    2016-04-01

    Full Text Available A micro Coordinate Measuring Machine (CMM) with a measurement volume of 50 mm × 50 mm × 50 mm and a measuring accuracy of about 100 nm (2σ) has been developed. In this new micro CMM, an XYZ stage, which is driven by three piezo-motors in the X, Y and Z directions, can achieve a drive resolution of about 1 nm and a stroke of more than 50 mm. In order to reduce the crosstalk among the X-, Y- and Z-stages, a special mechanical structure, called a co-planar stage, is introduced. The movement of the stage in each direction is detected by a laser interferometer. A contact-type probe is adopted for measurement. The center of the probe ball coincides with the intersection point of the measuring axes of the three laser interferometers. Therefore, the metrological system of the CMM obeys the Abbe principle in three directions and is free from Abbe error. The CMM is placed in an anti-vibration and thermostatic chamber to avoid the influence of vibration and temperature fluctuation. A series of experimental results shows that the measurement uncertainty within 40 mm along the X, Y and Z directions is about 100 nm (2σ). The flatness of the measuring face of a gauge block was also measured, verifying the performance of the developed micro CMM.

  14. Blind Measurement Selection: A Random Matrix Theory Approach

    KAUST Repository

    Elkhalil, Khalil

    2016-12-14

    This paper considers the problem of selecting a set of $k$ measurements from $n$ available sensor observations. The selected measurements should minimize a certain error function assessing the error in estimating a certain $m$-dimensional parameter vector. The exhaustive search inspecting each of the $\binom{n}{k}$ possible choices would require a very high computational complexity and as such is not practical for large $n$ and $k$. Alternative methods with low complexity have recently been investigated but their main drawbacks are that 1) they require perfect knowledge of the measurement matrix and 2) they need to be applied at the pace of change of the measurement matrix. To overcome these issues, we consider the asymptotic regime in which $k$, $n$ and $m$ grow large at the same pace. Tools from random matrix theory are then used to approximate in closed-form the most important error measures that are commonly used. The asymptotic approximations are then leveraged to select properly $k$ measurements exhibiting low values for the asymptotic error measures. Two heuristic algorithms are proposed: the first one merely consists in applying the convex optimization artifice to the asymptotic error measure. The second algorithm is a low-complexity greedy algorithm that attempts to look for a sufficiently good solution for the original minimization problem. The greedy algorithm can be applied to both the exact and the asymptotic error measures and can be thus implemented in blind and channel-aware fashions. We present two potential applications where the proposed algorithms can be used, namely antenna selection for uplink transmissions in large scale multi-user systems and sensor selection for wireless sensor networks. Numerical results are also presented and sustain the efficiency of the proposed blind methods in reaching the performances of channel-aware algorithms.
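
    The greedy idea can be illustrated with a toy channel-aware criterion, the least-squares error proxy trace((A_S^T A_S)^(-1)): at each step the row that most reduces the proxy is added. This sketch uses a random Gaussian measurement matrix and a small ridge term for numerical stability; it does not implement the paper's random-matrix asymptotic approximations or its blind variant.

        # Generic greedy sensor selection minimizing a least-squares error proxy.
        import numpy as np

        rng = np.random.default_rng(6)
        n, m, k = 60, 8, 20
        A = rng.normal(0, 1, (n, m))

        def error_proxy(rows, ridge=1e-6):
            G = A[rows].T @ A[rows] + ridge * np.eye(m)
            return np.trace(np.linalg.inv(G))

        selected = []
        for _ in range(k):
            remaining = [i for i in range(n) if i not in selected]
            best = min(remaining, key=lambda i: error_proxy(selected + [i]))
            selected.append(best)

        print("selected rows:", selected)
        print("greedy proxy %.3f vs random-subset proxy %.3f"
              % (error_proxy(selected), error_proxy(list(rng.choice(n, k, replace=False)))))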

  15. Error sources in atomic force microscopy for dimensional measurements: Taxonomy and modeling

    DEFF Research Database (Denmark)

    Marinello, F.; Voltan, A.; Savio, E.

    2010-01-01

    This paper aimed at identifying the error sources that occur in dimensional measurements performed using atomic force microscopy. In particular, a set of characterization techniques for errors quantification is presented. The discussion on error sources is organized in four main categories......: scanning system, tip-surface interaction, environment, and data processing. The discussed errors include scaling effects, squareness errors, hysteresis, creep, tip convolution, and thermal drift. A mathematical model of the measurement system is eventually described, as a reference basis for errors...

  16. Random errors of oceanic monthly rainfall derived from SSM/I using probability distribution functions

    Science.gov (United States)

    Chang, Alfred T. C.; Chiu, Long S.; Wilheit, Thomas T.

    1993-01-01

    Global averages and random errors associated with the monthly oceanic rain rates derived from the Special Sensor Microwave/Imager (SSM/I) data using the technique developed by Wilheit et al. (1991) are computed. Accounting for the beam-filling bias, a global annual average rain rate of 1.26 m is computed. The error estimation scheme is based on the existence of independent (morning and afternoon) estimates of the monthly mean. Calculations show overall random errors of about 50-60 percent for each 5 deg x 5 deg box. The results are insensitive to different sampling strategy (odd and even days of the month). Comparison of the SSM/I estimates with raingage data collected at the Pacific atoll stations showed a low bias of about 8 percent, a correlation of 0.7, and an rms difference of 55 percent.

  17. Aerogel Antennas Communications Study Using Error Vector Magnitude Measurements

    Science.gov (United States)

    Miranda, Felix A.; Mueller, Carl H.; Meador, Mary Ann B.

    2014-01-01

    This presentation discusses an aerogel antennas communication study using error vector magnitude (EVM) measurements. The study was performed using 2x4 element polyimide (PI) aerogel-based phased arrays designed for operation at 5 GHz as transmit (Tx) and receive (Rx) antennas separated by a line of sight (LOS) distance of 8.5 meters. The results of the EVM measurements demonstrate that polyimide aerogel antennas work appropriately to support digital communication links with typically used modulation schemes such as QPSK and 4 DQPSK. As such, PI aerogel antennas with higher gain, larger bandwidth and lower mass than typically used microwave laminates could be suitable to enable aerospace-to- ground communication links with enough channel capacity to support voice, data and video links from CubeSats, unmanned air vehicles (UAV), and commercial aircraft.
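
    For readers unfamiliar with the metric, EVM is the RMS length of the error vector between received and ideal constellation points, normalized to the reference power. The sketch below computes it for a simulated QPSK constellation with additive noise standing in for the measured link impairments; it is not the measurement setup used in the study.

        # Error vector magnitude for a simulated QPSK constellation.
        import numpy as np

        rng = np.random.default_rng(7)
        n = 10000
        ideal = (rng.choice([-1, 1], n) + 1j * rng.choice([-1, 1], n)) / np.sqrt(2)  # unit-power QPSK
        received = ideal + (rng.normal(0, 0.05, n) + 1j * rng.normal(0, 0.05, n))

        error_vec = received - ideal
        evm_rms = np.sqrt(np.mean(np.abs(error_vec) ** 2) / np.mean(np.abs(ideal) ** 2))
        print(f"EVM = {100 * evm_rms:.1f}%  ({20 * np.log10(evm_rms):.1f} dB)")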

  18. On Characterization of Elasticity Parameters in Context of Measurement Errors

    Science.gov (United States)

    Slawinski, M. A.

    2007-12-01

    In this presentation, we discuss the one-to-one relation between the elasticity parameters and the traveltime and polarization of a propagating signal in the context of the measurement errors. The one-to-one relationship between seismic measurements and a model postulated in the realm of the constitutive equation of an elastic continuum provides the link between the observational and theoretical aspects of seismic tomography [1]. The existence of this link encourages us to develop methods of inferring the elasticity parameters from measurements. However, a consideration of required accuracy and the analysis of error sensitivity suggest that the pragmatic application of this one-to-one relationship might be a difficult task indeed [4]. There are eight symmetry classes of an elastic continuum whose properties are contained in the density-scaled elasticity tensor [6]. Given this tensor in an arbitrary coordinate system, we can identify to which symmetry class it belongs, as well as obtain the orientation of its symmetry axes and planes, and hence the elasticity parameters in a natural coordinate system [2]. To obtain the tensor to be studied, we consider either ray velocities and polarizations [1] or wavefront slownesses and polarizations [5]. For the former, we assume that the medium is homogeneous in order to invoke the straightness of rays to calculate ray velocity given the source and receiver position; for the latter, we assume that the medium is homogeneous in at least one direction in order to invoke the ray parameter. In spite of the limitations due to homogeneities, both approaches are sensitive to measurement errors, which are not negligible. In view of these observational concerns [4], we consider several weaker objectives based on the theoretical formulation. Rather than distinguishing among eight symmetry classes and obtaining the corresponding elasticity parameters, we might be able to distinguish among a few groups that contain several classes within

  19. Bayesian adjustment for covariate measurement errors: a flexible parametric approach.

    Science.gov (United States)

    Hossain, Shahadut; Gustafson, Paul

    2009-05-15

    In most epidemiological investigations, the study units are people, the outcome variable (or the response) is a health-related event, and the explanatory variables are usually environmental and/or socio-demographic factors. The fundamental task in such investigations is to quantify the association between the explanatory variables (covariates/exposures) and the outcome variable through a suitable regression model. The accuracy of such quantification depends on how precisely the relevant covariates are measured. In many instances, we cannot measure some of the covariates accurately. Rather, we can measure noisy (mismeasured) versions of them. In statistical terminology, mismeasurement in continuous covariates is known as measurement errors or errors-in-variables. Regression analyses based on mismeasured covariates lead to biased inference about the true underlying response-covariate associations. In this paper, we suggest a flexible parametric approach for avoiding this bias when estimating the response-covariate relationship through a logistic regression model. More specifically, we consider the flexible generalized skew-normal and the flexible generalized skew-t distributions for modeling the unobserved true exposure. For inference and computational purposes, we use Bayesian Markov chain Monte Carlo techniques. We investigate the performance of the proposed flexible parametric approach in comparison with a common flexible parametric approach through extensive simulation studies. We also compare the proposed method with the competing flexible parametric method on a real-life data set. Though emphasis is put on the logistic regression model, the proposed method is unified and is applicable to the other generalized linear models, and to other types of non-linear regression models as well. (c) 2009 John Wiley & Sons, Ltd.

  20. Measurement error as a source of QT dispersion: a computerised analysis

    NARCIS (Netherlands)

    J.A. Kors (Jan); G. van Herpen (Gerard)

    1998-01-01

    OBJECTIVE: To establish a general method to estimate the measuring error in QT dispersion (QTD) determination, and to assess this error using a computer program for automated measurement of QTD. SUBJECTS: Measurements were done on 1220 standard simultaneous

  1. Quantum Steering Inequality with Tolerance for Measurement-Setting Errors: Experimentally Feasible Signature of Unbounded Violation.

    Science.gov (United States)

    Rutkowski, Adam; Buraczewski, Adam; Horodecki, Paweł; Stobińska, Magdalena

    2017-01-13

    Quantum steering is a relatively simple test for proving that the values of quantum-mechanical measurement outcomes come into being only in the act of measurement. By exploiting quantum correlations, Alice can influence-steer-Bob's physical system in a way that is impossible in classical mechanics, as shown by the violation of steering inequalities. Demonstrating this and similar quantum effects for systems of increasing size, approaching even the classical limit, is a long-standing challenging problem. Here, we prove an experimentally feasible unbounded violation of a steering inequality. We derive its universal form where tolerance for measurement-setting errors is explicitly built in by means of the Deutsch-Maassen-Uffink entropic uncertainty relation. Then, generalizing the mutual unbiasedness, we apply the inequality to the multisinglet and multiparticle bipartite Bell state. However, the method is general and opens the possibility of employing multiparticle bipartite steering for randomness certification and development of quantum technologies, e.g., random access codes.

  2. Influence of video compression on the measurement error of the television system

    Science.gov (United States)

    Sotnik, A. V.; Yarishev, S. N.; Korotaev, V. V.

    2015-05-01

    …possible reduction of the digital stream. The discrete cosine transform is the most widely used among the possible orthogonal transformations. Errors of television measuring systems and data compression protocols are analyzed in this paper. The main characteristics of measuring systems are described and the sources of their errors identified. The most effective methods of video compression are determined, and the influence of video compression error on television measuring systems is investigated. The obtained results will increase the accuracy of measuring systems. In a television image-quality measuring system, the distortions include both distortions identical to those in analog systems and specific distortions resulting from the coding/decoding of the digital video signal and from errors in the transmission channel. The distortions associated with encoding/decoding of the signal include quantization noise, reduced resolution, the mosaic effect, the "mosquito" effect, edging on sharp brightness transitions, color blur, false patterns, the "dirty window" effect and other defects. The video compression algorithms used in television measuring systems are based on image encoding with intra- and inter-prediction of individual fragments. The encoding/decoding process is non-linear in space and in time, because the playback quality at the receiver depends on the random pre- and post-history, that is, on the preceding and succeeding frames, which can lead to inadequate distortion of the sub-picture and of the corresponding measuring signal.

  3. CORRECTING FOR MEASUREMENT ERROR IN LATENT VARIABLES USED AS PREDICTORS*

    Science.gov (United States)

    Schofield, Lynne Steuerle

    2015-01-01

    This paper represents a methodological-substantive synergy. A new model, the Mixed Effects Structural Equations (MESE) model which combines structural equations modeling and item response theory is introduced to attend to measurement error bias when using several latent variables as predictors in generalized linear models. The paper investigates racial and gender disparities in STEM retention in higher education. Using the MESE model with 1997 National Longitudinal Survey of Youth data, I find prior mathematics proficiency and personality have been previously underestimated in the STEM retention literature. Pre-college mathematics proficiency and personality explain large portions of the racial and gender gaps. The findings have implications for those who design interventions aimed at increasing the rates of STEM persistence among women and under-represented minorities. PMID:26977218

  4. Measurement error causes scale-dependent threshold erosion of biological signals in animal movement data.

    Science.gov (United States)

    Bradshaw, Corey J A; Sims, David W; Hays, Graeme C

    2007-03-01

    Recent advances in telemetry technology have created a wealth of tracking data available for many animal species moving over spatial scales from tens of meters to tens of thousands of kilometers. Increasingly, such data sets are being used for quantitative movement analyses aimed at extracting fundamental biological signals such as optimal searching behavior and scale-dependent foraging decisions. We show here that the location error inherent in various tracking technologies reduces the ability to detect patterns of behavior within movements. Our analyses endeavored to set out a series of initial ground rules for ecologists to help ensure that sampling noise is not misinterpreted as a real biological signal. We simulated animal movement tracks using specialized random walks known as Lévy flights at three spatial scales of investigation: 100-km, 10-km, and 1-km maximum daily step lengths. The locations generated in the simulations were then blurred using known error distributions associated with commonly applied tracking methods: the Global Positioning System (GPS), Argos polar-orbiting satellites, and light-level geolocation. Deviations from the idealized Lévy flight pattern were assessed for each track after incrementing levels of location error were applied at each spatial scale, with additional assessments of the effect of error on scale-dependent movement patterns measured using fractal mean dimension and first-passage time (FPT) analyses. The accuracy of parameter estimation (Lévy mu, fractal mean D, and variance in FPT) declined precipitously at threshold errors relative to each spatial scale. At 100-km maximum daily step lengths, error standard deviations of > or = 10 km seriously eroded the biological patterns evident in the simulated tracks, with analogous thresholds at the 10-km and 1-km scales (error SD > or = 1.3 km and 0.07 km, respectively). Temporal subsampling of the simulated tracks maintained some elements of the biological signals depending on
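
    The flavor of these simulations can be reproduced in a few lines: draw power-law step lengths, blur the resulting positions with Gaussian location error, and re-estimate the Lévy exponent with the standard maximum-likelihood estimator. The values below (exponent, error SD, minimum step length) are arbitrary, and the sketch ignores the fractal-dimension and first-passage-time analyses also used in the paper.

        # Toy Levy walk with Gaussian location error and ML re-estimation of mu.
        import numpy as np

        rng = np.random.default_rng(8)
        n_steps, mu_true, x_min, loc_error_sd = 5000, 2.0, 1.0, 2.0

        steps = x_min * (1 - rng.random(n_steps)) ** (-1 / (mu_true - 1))   # P(l) ~ l^(-mu)
        angles = rng.uniform(0, 2 * np.pi, n_steps)
        path = np.cumsum(np.column_stack([steps * np.cos(angles), steps * np.sin(angles)]), axis=0)
        blurred = path + rng.normal(0, loc_error_sd, path.shape)            # "GPS" error

        def mu_mle(positions):
            lengths = np.linalg.norm(np.diff(positions, axis=0), axis=1)
            lengths = lengths[lengths >= x_min]
            return 1 + lengths.size / np.sum(np.log(lengths / x_min))

        print(f"mu from true track {mu_mle(path):.2f}, from blurred track {mu_mle(blurred):.2f}")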

  5. Francesca Hughes: Architecture of Error: Matter, Measure and the Misadventure of Precision

    DEFF Research Database (Denmark)

    Foote, Jonathan

    2016-01-01

    Review of "Architecture of Error: Matter, Measure and the Misadventure of Precision" by Francesca Hughes (MIT Press, 2014)......Review of "Architecture of Error: Matter, Measure and the Misadventure of Precision" by Francesca Hughes (MIT Press, 2014)...

  6. Considerations for analysis of time-to-event outcomes measured with error: Bias and correction with SIMEX.

    Science.gov (United States)

    Oh, Eric J; Shepherd, Bryan E; Lumley, Thomas; Shaw, Pamela A

    2017-11-29

    For time-to-event outcomes, a rich literature exists on the bias introduced by covariate measurement error in regression models, such as the Cox model, and methods of analysis to address this bias. By comparison, less attention has been given to understanding the impact or addressing errors in the failure time outcome. For many diseases, the timing of an event of interest (such as progression-free survival or time to AIDS progression) can be difficult to assess or reliant on self-report and therefore prone to measurement error. For linear models, it is well known that random errors in the outcome variable do not bias regression estimates. With nonlinear models, however, even random error or misclassification can introduce bias into estimated parameters. We compare the performance of 2 common regression models, the Cox and Weibull models, in the setting of measurement error in the failure time outcome. We introduce an extension of the SIMEX method to correct for bias in hazard ratio estimates from the Cox model and discuss other analysis options to address measurement error in the response. A formula to estimate the bias induced into the hazard ratio by classical measurement error in the event time for a log-linear survival model is presented. Detailed numerical studies are presented to examine the performance of the proposed SIMEX method under varying levels and parametric forms of the error in the outcome. We further illustrate the method with observational data on HIV outcomes from the Vanderbilt Comprehensive Care Clinic. Copyright © 2017 John Wiley & Sons, Ltd.
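
    The SIMEX idea itself is easy to sketch: refit the model after adding extra noise at several multiples of the (assumed known) error variance, then extrapolate the estimates back to the no-error case. The toy example below applies it to covariate error in a linear model rather than to event-time error in a Cox model, so it illustrates only the mechanics of the method, not the paper's proposed extension.

        # Bare-bones SIMEX with quadratic extrapolation on a linear model.
        import numpy as np

        rng = np.random.default_rng(9)
        n, sigma_u = 3000, 0.7
        x = rng.normal(0, 1, n)
        w = x + rng.normal(0, sigma_u, n)                 # error-prone covariate
        y = 0.5 * x + rng.normal(0, 1, n)

        lambdas, B = np.array([0.0, 0.5, 1.0, 1.5, 2.0]), 100
        means = []
        for lam in lambdas:
            # Add extra noise with variance lam * sigma_u^2, refit, and average over B draws.
            fits = [np.polyfit(w + rng.normal(0, np.sqrt(lam) * sigma_u, n), y, 1)[0]
                    for _ in range(B)]
            means.append(np.mean(fits))

        quad = np.polyfit(lambdas, means, 2)              # quadratic extrapolant
        simex = np.polyval(quad, -1.0)                    # extrapolate to "no error"
        print(f"naive {means[0]:.3f}, SIMEX {simex:.3f}, true 0.5")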

  7. Assessment of Measurement Error when Using the Laser Spectrum Analyzers

    Directory of Open Access Journals (Sweden)

    A. A. Titov

    2015-01-01

    Full Text Available The article dwells on assessment of measurement errors when using the laser spectrum analyzers. It presents the analysis results to show that it is possible to carry out a spectral analysis of both amplitudes and phases of frequency components of signals and to analyze a changing phase of frequency components of radio signals using interferential methods of measurements. It is found that the interferometers with Mach-Zehnder arrangement are most widely used for measurement of signal phase. A possibility to increase resolution when using the combined method as compared to the other considered methods is shown since with its application spatial integration is performed over one coordinate while time integration is done over the other coordinate that is reached by the orthogonal arrangement of modulators relative each other. The article defines a drawback of this method. It is complicatedness and low-speed because of integrator that disables measurement of spectral components of a radio pulse if its width is less than a temporary aperture. There is a proposal to create an advanced option of the spectrum analyzer in which phase is determined through the signal processing. The article presents resolution when using such a spectrum analyzer. It also reviews the possible options for creating devices to measure the phase components of a spectrum depending on the methods applied to measure a phase. The analysis has shown that for phase measurement a time-pulse method is the most perspective. It is found that the known circuits of digital phase-meters using this method cannot be directly used in spectrum analyzers as they are designed for measurement of the phase only of one signal frequency. In this regard a number of circuits were developed to measure the amplitude and phase of frequency components of the radio signal. It is shown that the perspective option of creating a spectrum analyzer is device in which the phase is determined through the signal

  8. Efficacy of Visual-Acoustic Biofeedback Intervention for Residual Rhotic Errors: A Single-Subject Randomization Study.

    Science.gov (United States)

    McAllister Byun, Tara

    2017-05-24

    This study documented the efficacy of visual-acoustic biofeedback intervention for residual rhotic errors, relative to a comparison condition involving traditional articulatory treatment. All participants received both treatments in a single-subject experimental design featuring alternating treatments with blocked randomization of sessions to treatment conditions. Seven child and adolescent participants received 20 half-hour sessions of individual treatment over 10 weeks. Within each week, sessions were randomly assigned to feature traditional or biofeedback intervention. Perceptual accuracy of rhotic production was assessed in a blinded, randomized fashion. Each participant's response to the combined treatment package was evaluated by using effect sizes and visual inspection. Differences in the magnitude of response to traditional versus biofeedback intervention were measured with individual randomization tests. Four of 7 participants demonstrated a clinically meaningful response to the combined treatment package. Three of 7 participants showed a statistically significant difference between treatment conditions. In all 3 cases, the magnitude of within-session gains associated with biofeedback exceeded the gains associated with traditional treatment. These results suggest that the inclusion of visual-acoustic biofeedback can enhance the efficacy of intervention for some individuals with residual rhotic errors. Further research is needed to understand which participants represent better or poorer candidates for biofeedback treatment.
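
    The individual randomization tests mentioned above can be approximated, under blocked (weekly) randomization, by a sign-flip permutation test on the weekly difference in within-session gains. The sketch below is a generic illustration with invented gain scores, not the study's actual analysis.

        import numpy as np

        def blocked_randomization_test(gain_biofeedback, gain_traditional, n_perm=10000, seed=1):
            """One-sided test that biofeedback gains exceed traditional gains.

            Each array holds one within-session gain per week (the randomization block).
            """
            rng = np.random.default_rng(seed)
            diffs = np.asarray(gain_biofeedback) - np.asarray(gain_traditional)
            observed = diffs.mean()
            count = 0
            for _ in range(n_perm):
                # within each week the two sessions could have been assigned the
                # other way round, which flips the sign of that week's difference
                signs = rng.choice([-1, 1], size=diffs.size)
                if (signs * diffs).mean() >= observed:
                    count += 1
            return count / n_perm  # permutation p-value

        # invented weekly gains over 10 weeks of treatment
        p = blocked_randomization_test([4, 6, 3, 5, 7, 2, 6, 5, 4, 8],
                                       [1, 2, 3, 0, 4, 1, 2, 3, 1, 2])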

  9. Electronic laboratory system reduces errors in National Tuberculosis Program: a cluster randomized controlled trial.

    Science.gov (United States)

    Blaya, J A; Shin, S S; Yale, G; Suarez, C; Asencios, L; Contreras, C; Rodriguez, P; Kim, J; Cegielski, P; Fraser, H S F

    2010-08-01

    To evaluate the impact of the e-Chasqui laboratory information system in reducing reporting errors compared to the current paper system. Cluster randomized controlled trial in 76 health centers (HCs) between 2004 and 2008. Baseline data were collected every 4 months for 12 months. HCs were then randomly assigned to intervention (e-Chasqui) or control (paper). Further data were collected for the same months the following year. Comparisons were made between intervention and control HCs, and before and after the intervention. Intervention HCs had respectively 82% and 87% fewer errors in reporting results for drug susceptibility tests (2.1% vs. 11.9%, P = 0.001, OR 0.17, 95%CI 0.09-0.31) and cultures (2.0% vs. 15.1%, P Chasqui users sent on average three electronic error reports per week to the laboratories. e-Chasqui reduced the number of missing laboratory results at point-of-care health centers. Clinical users confirmed viewing electronic results not available on paper. Reporting errors to the laboratory using e-Chasqui promoted continuous quality improvement. The e-Chasqui laboratory information system is an important part of laboratory infrastructure improvements to support multidrug-resistant tuberculosis care in Peru.

  10. Bayesian modeling of measurement error in predictor variables using item response theory

    NARCIS (Netherlands)

    Fox, Gerardus J.A.; Glas, Cornelis A.W.

    2000-01-01

    This paper focuses on handling measurement error in predictor variables using item response theory (IRT). Measurement error is of great importance in the assessment of theoretical constructs, such as intelligence or the school climate. Measurement error is modeled by treating the predictors as unobserved

  11. Effects of measurement errors on psychometric measurements in ergonomics studies: Implications for correlations, ANOVA, linear regression, factor analysis, and linear discriminant analysis.

    Science.gov (United States)

    Liu, Yan; Salvendy, Gavriel

    2009-05-01

    This paper aims to demonstrate the effects of measurement errors on psychometric measurements in ergonomics studies. A variety of sources can cause random measurement errors in ergonomics studies and these errors can distort virtually every statistic computed and lead investigators to erroneous conclusions. The effects of measurement errors on the five most widely used statistical analysis tools have been discussed and illustrated: correlation; ANOVA; linear regression; factor analysis; linear discriminant analysis. It has been shown that measurement errors can greatly attenuate correlations between variables, reduce the statistical power of ANOVA, distort (overestimate, underestimate or even change the sign of) regression coefficients, underrate the explanatory contributions of the most important factors in factor analysis, and depreciate the significance of the discriminant function and the discrimination abilities of individual variables in discriminant analysis. The discussions will be restricted to subjective scales and survey methods and their reliability estimates. Other methods applied in ergonomics research, such as physical and electrophysiological measurements and chemical and biomedical analysis methods, also have issues of measurement errors, but they are beyond the scope of this paper. As there has been increasing interest in the development and testing of theories in ergonomics research, it has become very important for ergonomics researchers to understand the effects of measurement errors on their experimental results, which the authors believe is very critical to research progress in theory development and cumulative knowledge in the ergonomics field.
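
    The attenuation of correlations described above follows the classical relation r_observed ≈ r_true × sqrt(rel_X × rel_Y), where rel_X and rel_Y are the reliabilities of the two measures. A small simulation with arbitrary parameter values makes the effect visible:

        import numpy as np

        rng = np.random.default_rng(42)
        n, r_true = 100_000, 0.6

        # latent true scores with correlation r_true
        true_x = rng.normal(size=n)
        true_y = r_true * true_x + np.sqrt(1 - r_true**2) * rng.normal(size=n)

        rel_x, rel_y = 0.7, 0.8                       # assumed reliabilities
        obs_x = true_x + rng.normal(0, np.sqrt(1/rel_x - 1), size=n)
        obs_y = true_y + rng.normal(0, np.sqrt(1/rel_y - 1), size=n)

        r_obs = np.corrcoef(obs_x, obs_y)[0, 1]
        print(r_obs)                                   # close to 0.6*sqrt(0.7*0.8) ≈ 0.45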

  12. Regression calibration method for correcting measurement-error bias in nutritional epidemiology.

    Science.gov (United States)

    Spiegelman, D; McDermott, A; Rosner, B

    1997-04-01

    Regression calibration is a statistical method for adjusting point and interval estimates of effect obtained from regression models commonly used in epidemiology for bias due to measurement error in assessing nutrients or other variables. Previous work developed regression calibration for use in estimating odds ratios from logistic regression. We extend this here to estimating incidence rate ratios from Cox proportional hazards models and regression slopes from linear-regression models. Regression calibration is appropriate when a gold standard is available in a validation study and a linear measurement error with constant variance applies or when replicate measurements are available in a reliability study and linear random within-person error can be assumed. In this paper, the method is illustrated by correction of rate ratios describing the relations between the incidence of breast cancer and dietary intakes of vitamin A, alcohol, and total energy in the Nurses' Health Study. An example using linear regression is based on estimation of the relation between ultradistal radius bone density and dietary intakes of caffeine, calcium, and total energy in the Massachusetts Women's Health Study. Software implementing these methods uses SAS macros.
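
    A minimal sketch of the regression-calibration step for the linear-regression case, assuming a validation subsample in which both the gold-standard exposure and the error-prone measurement are observed (all variable names are hypothetical); in practice the standard errors would also be corrected, for example by bootstrap, which this sketch omits:

        import numpy as np

        def regression_calibration(w_main, y_main, w_valid, x_valid):
            """Replace the error-prone exposure by its calibrated expectation E[X|W].

            w_main, y_main   : error-prone exposure and outcome in the main study
            w_valid, x_valid : error-prone and gold-standard exposure in the validation study
            """
            # step 1: calibration model x = a + b*w fitted in the validation study
            b, a = np.polyfit(w_valid, x_valid, deg=1)
            x_hat = a + b * np.asarray(w_main)
            # step 2: ordinary regression of the outcome on the calibrated exposure
            slope, intercept = np.polyfit(x_hat, y_main, deg=1)
            return slope, intercept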

  13. Error Analysis for Interferometric SAR Measurements of Ice Sheet Flow

    DEFF Research Database (Denmark)

    Mohr, Johan Jacob; Madsen, Søren Nørvang

    1999-01-01

    and slope errors in conjunction with a surface parallel flow assumption. The most surprising result is that assuming a stationary flow the east component of the three-dimensional flow derived from ascending and descending orbit data is independent of slope errors and of the vertical flow....

  14. Lower extremity angle measurement with accelerometers - error and sensitivity analysis

    NARCIS (Netherlands)

    Willemsen, A.T.M.; Willemsen, Antoon Th.M.; Frigo, Carlo; Boom, H.B.K.

    1991-01-01

    The use of accelerometers for angle assessment of the lower extremities is investigated. This method is evaluated by an error-and-sensitivity analysis using healthy subject data. Of three potential error sources (the reference system, the accelerometers, and the model assumptions) the last is found

  15. Errors in GNSS radio occultation data: relevance of the measurement geometry and obliquity of profiles

    Directory of Open Access Journals (Sweden)

    U. Foelsche

    2011-02-01

    Full Text Available Atmospheric profiles retrieved from GNSS (Global Navigation Satellite System) radio occultation (RO) measurements are increasingly used to validate other measurement data. For this purpose it is important to be aware of the characteristics of RO measurements. RO data are frequently compared with vertical reference profiles, but the RO method does not provide vertical scans through the atmosphere. The average elevation angle of the tangent point trajectory (which would be 90° for a vertical scan) is about 40° at altitudes above 70 km, decreasing to about 25° at 20 km and to less than 5° below 3 km. In an atmosphere with high horizontal variability we can thus expect noticeable representativeness errors if the retrieved profiles are compared with vertical reference profiles. We have performed an end-to-end simulation study using high-resolution analysis fields (T799L91) from the European Centre for Medium-Range Weather Forecasts (ECMWF) to simulate a representative ensemble of RO profiles via high-precision 3-D ray tracing. Thereby we focused on the dependence of systematic and random errors on the measurement geometry, specifically on the incidence angle of the RO measurement rays with respect to the orbit plane of the receiving satellite, also termed azimuth angle, which determines the obliquity of RO profiles. We analyzed by how much errors are reduced if the reference profile is not taken vertical at the mean tangent point but along the retrieved tangent point trajectory (TPT) of the RO profile. The exact TPT can only be determined by performing ray tracing, but our results confirm that the retrieved TPT – calculated from observed impact parameters – is a very good approximation to the "true" one. Systematic and random errors in RO data increase with increasing azimuth angle, less so if the TPT is properly taken into account, since the increasing obliquity of the RO profiles leads to an increasing sensitivity to departures from horizontal

  16. Comparison of B-Spline Model and Iterated Conditional Modes (ICM) For Data With Measurement Error (ME)

    Science.gov (United States)

    Hartatik; Purnomo, Agus

    2017-06-01

    Direct observations are often used to build estimation models; however, observed data need to be examined carefully, because they may be affected by measurement error (ME). In regression modeling, if the predictor X is a random variable observed with measurement error, the resulting calculations become complicated and rely heavily on computational tools. Given data (X_i, Y_i), the regression model is Y_i = g(X_i) + ε_i, where X_i is the i-th element of the predictor variable X and Y_i is the i-th element of the response variable Y. In many applications the observed predictor values are not fixed constants but realizations of a random variable; in that case the model is called a regression model with measurement errors. The purpose of this research is to estimate the model with a nonparametric B-spline approach, in which the measurement errors are ignored, and with the Iterated Conditional Modes (ICM) method, which accounts for the measurement error.
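
    As a rough illustration of the B-spline baseline that the abstract contrasts with ICM (fitting the regression while ignoring the measurement error), a least-squares B-spline fit can be obtained with SciPy; the data and knot placement below are arbitrary.

        import numpy as np
        from scipy.interpolate import LSQUnivariateSpline

        rng = np.random.default_rng(0)
        x = np.sort(rng.uniform(0, 10, size=200))           # predictor (treated as error-free)
        y = np.sin(x) + rng.normal(0, 0.3, size=x.size)     # noisy response

        knots = np.linspace(1, 9, 7)                         # interior knots, strictly inside (0, 10)
        spline = LSQUnivariateSpline(x, y, knots, k=3)       # cubic B-spline least-squares fit

        y_hat = spline(x)                                    # fitted values g_hat(x_i)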

  17. A spatial error model with continuous random effects and an application to growth convergence

    Science.gov (United States)

    Laurini, Márcio Poletti

    2017-10-01

    We propose a spatial error model with continuous random effects based on Matérn covariance functions and apply this model for the analysis of income convergence processes (β -convergence). The use of a model with continuous random effects permits a clearer visualization and interpretation of the spatial dependency patterns, avoids the problems of defining neighborhoods in spatial econometrics models, and allows projecting the spatial effects for every possible location in the continuous space, circumventing the existing aggregations in discrete lattice representations. We apply this model approach to analyze the economic growth of Brazilian municipalities between 1991 and 2010 using unconditional and conditional formulations and a spatiotemporal model of convergence. The results indicate that the estimated spatial random effects are consistent with the existence of income convergence clubs for Brazilian municipalities in this period.
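
    The continuous random effects referred to above are built on the Matérn covariance family. The helper below uses a generic textbook parameterization (not necessarily the authors' exact one); SciPy supplies the modified Bessel function K_nu.

        import numpy as np
        from scipy.special import gamma, kv

        def matern_cov(d, sigma2=1.0, rho=1.0, nu=1.5):
            """Matérn covariance between points separated by distance d."""
            d = np.asarray(d, dtype=float)
            scaled = np.sqrt(2 * nu) * np.maximum(d, 1e-12) / rho   # avoid 0*inf at d = 0
            cov = sigma2 * (2 ** (1 - nu) / gamma(nu)) * scaled ** nu * kv(nu, scaled)
            return np.where(d == 0, sigma2, cov)                     # exact limit at zero distance

        # covariance matrix for a few 1-D locations
        locs = np.array([0.0, 0.5, 1.2, 3.0])
        D = np.abs(locs[:, None] - locs[None, :])
        K = matern_cov(D)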

  18. Long-term continuous acoustical suspended-sediment measurements in rivers - Theory, application, bias, and error

    Science.gov (United States)

    Topping, David J.; Wright, Scott A.

    2016-05-04

    It is commonly recognized that suspended-sediment concentrations in rivers can change rapidly in time and independently of water discharge during important sediment‑transporting events (for example, during floods); thus, suspended-sediment measurements at closely spaced time intervals are necessary to characterize suspended‑sediment loads. Because the manual collection of sufficient numbers of suspended-sediment samples required to characterize this variability is often time and cost prohibitive, several “surrogate” techniques have been developed for in situ measurements of properties related to suspended-sediment characteristics (for example, turbidity, laser-diffraction, acoustics). Herein, we present a new physically based method for the simultaneous measurement of suspended-silt-and-clay concentration, suspended-sand concentration, and suspended‑sand median grain size in rivers, using multi‑frequency arrays of single-frequency side‑looking acoustic-Doppler profilers. The method is strongly grounded in the extensive scientific literature on the incoherent scattering of sound by random suspensions of small particles. In particular, the method takes advantage of theory that relates acoustic frequency, acoustic attenuation, acoustic backscatter, suspended-sediment concentration, and suspended-sediment grain-size distribution. We develop the theory and methods, and demonstrate the application of the method at six study sites on the Colorado River and Rio Grande, where large numbers of suspended-sediment samples have been collected concurrently with acoustic attenuation and backscatter measurements over many years. The method produces acoustical measurements of suspended-silt-and-clay and suspended-sand concentration (in units of mg/L), and acoustical measurements of suspended-sand median grain size (in units of mm) that are generally in good to excellent agreement with concurrent physical measurements of these quantities in the river cross sections at

  19. Pivot and cluster strategy: a preventive measure against diagnostic errors.

    Science.gov (United States)

    Shimizu, Taro; Tokuda, Yasuharu

    2012-01-01

    Diagnostic errors constitute a substantial portion of preventable medical errors. The accumulation of evidence shows that most errors result from one or more cognitive biases and a variety of debiasing strategies have been introduced. In this article, we introduce a new diagnostic strategy, the pivot and cluster strategy (PCS), encompassing both of the two mental processes in making diagnosis referred to as the intuitive process (System 1) and analytical process (System 2) in one strategy. With PCS, physicians can recall a set of most likely differential diagnoses (System 2) of an initial diagnosis made by the physicians' intuitive process (System 1), thereby enabling physicians to double check their diagnosis with two consecutive diagnostic processes. PCS is expected to reduce cognitive errors and enhance their diagnostic accuracy and validity, thereby realizing better patient outcomes and cost- and time-effective health care management.

  20. Adjusting for the Incidence of Measurement Errors in Multilevel ...

    African Journals Online (AJOL)

    -prone explanatory variables and adjusts for the incidence of these errors giving rise to more adequate multilevel models. The illustrative data employed was drawn from an educational environment.

  1. Precision influence of a phase retrieval algorithm in fractional Fourier domains from position measurement error.

    Science.gov (United States)

    Guo, Cheng; Tan, Jiubin; Liu, Zhengjun

    2015-08-01

    An iterative amplitude-phase retrieval (APR) structure has been shown to produce more accurate reconstructions of both amplitude and phase. However, the precise influence of position measurement error, and the corresponding error correction, have not been analyzed sufficiently. We apply APR in fractional Fourier domains to reconstruct a sample image and describe the corresponding optical implementation. An error model is built to discuss the distribution of the position measurement error. A corrective method is applied to compensate for the error and obtain a retrieved image of better quality. The numerical results demonstrate that our methods are feasible and useful for correcting the error under various circumstances.

  2. Accounting for Sampling Error in Genetic Eigenvalues Using Random Matrix Theory.

    Science.gov (United States)

    Sztepanacz, Jacqueline L; Blows, Mark W

    2017-07-01

    The distribution of genetic variance in multivariate phenotypes is characterized by the empirical spectral distribution of the eigenvalues of the genetic covariance matrix. Empirical estimates of genetic eigenvalues from random effects linear models are known to be overdispersed by sampling error, where large eigenvalues are biased upward, and small eigenvalues are biased downward. The overdispersion of the leading eigenvalues of sample covariance matrices has been demonstrated to conform to the Tracy-Widom (TW) distribution. Here we show that genetic eigenvalues estimated using restricted maximum likelihood (REML) in a multivariate random effects model with an unconstrained genetic covariance structure will also conform to the TW distribution after empirical scaling and centering. However, where estimation procedures using either REML or MCMC impose boundary constraints, the resulting genetic eigenvalues tend not to be TW distributed. We show how using confidence intervals from sampling distributions of genetic eigenvalues without reference to the TW distribution is insufficient protection against mistaking sampling error for genetic variance, particularly when eigenvalues are small. By scaling such sampling distributions to the appropriate TW distribution, the critical value of the TW statistic can be used to determine if the magnitude of a genetic eigenvalue exceeds the sampling error for each eigenvalue in the spectral distribution of a given genetic covariance matrix. Copyright © 2017 by the Genetics Society of America.
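
    The empirical scaling and centering referred to above is commonly taken, for a null (white-noise) covariance matrix, from Johnstone's result for the largest eigenvalue of a Wishart matrix. The sketch below illustrates that generic null construction only, not the REML-specific procedure of the paper.

        import numpy as np

        def scaled_top_eigenvalue(X):
            """Center and scale the leading eigenvalue of X'X (X is n x p, iid N(0,1) under the null)."""
            n, p = X.shape
            l1 = np.linalg.eigvalsh(X.T @ X).max()
            mu = (np.sqrt(n - 1) + np.sqrt(p)) ** 2
            sigma = (np.sqrt(n - 1) + np.sqrt(p)) * (1 / np.sqrt(n - 1) + 1 / np.sqrt(p)) ** (1 / 3)
            return (l1 - mu) / sigma   # approximately Tracy-Widom (beta = 1) distributed

        rng = np.random.default_rng(3)
        stats = [scaled_top_eigenvalue(rng.normal(size=(200, 50))) for _ in range(200)]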

  3. Comparing methods to measure error in gynecologic cytology and surgical pathology.

    Science.gov (United States)

    Renshaw, Andrew A

    2006-05-01

    Both gynecologic cytology and surgical pathology use similar methods to measure diagnostic error, but differences exist between how these methods have been applied in the 2 fields. To compare the application of methods of error detection in gynecologic cytology and surgical pathology. Review of the literature. There are several different approaches to measuring error, all of which have limitations. Measuring error using reproducibility as the gold standard is a common method to determine error. While error rates in gynecologic cytology are well characterized and methods for objectively assessing error in the legal setting have been developed, meaningful methods to measure error rates in clinical practice are not commonly used and little is known about the error rates in this setting. In contrast, in surgical pathology the error rates are not as well characterized and methods for assessing error in the legal setting are not as well defined, but methods to measure error in actual clinical practice have been characterized and preliminary data from these methods are now available concerning the error rates in this setting.

  4. Visual acuity measures do not reliably detect childhood refractive error--an epidemiological study.

    Directory of Open Access Journals (Sweden)

    Lisa O'Donoghue

    Full Text Available PURPOSE: To investigate the utility of uncorrected visual acuity measures in screening for refractive error in white school children aged 6-7-years and 12-13-years. METHODS: The Northern Ireland Childhood Errors of Refraction (NICER) study used a stratified random cluster design to recruit children from schools in Northern Ireland. Detailed eye examinations included assessment of logMAR visual acuity and cycloplegic autorefraction. Spherical equivalent refractive data from the right eye were used to classify significant refractive error as myopia of at least 1DS, hyperopia as greater than +3.50DS and astigmatism as greater than 1.50DC, whether it occurred in isolation or in association with myopia or hyperopia. RESULTS: Results are presented from 661 white 12-13-year-old and 392 white 6-7-year-old school-children. Using a cut-off of uncorrected visual acuity poorer than 0.20 logMAR to detect significant refractive error gave a sensitivity of 50% and specificity of 92% in 6-7-year-olds and 73% and 93% respectively in 12-13-year-olds. In 12-13-year-old children a cut-off of poorer than 0.20 logMAR had a sensitivity of 92% and a specificity of 91% in detecting myopia and a sensitivity of 41% and a specificity of 84% in detecting hyperopia. CONCLUSIONS: Vision screening using logMAR acuity can reliably detect myopia, but not hyperopia or astigmatism in school-age children. Providers of vision screening programs should be cognisant that where detection of uncorrected hyperopic and/or astigmatic refractive error is an aspiration, current UK protocols will not effectively deliver.

  5. Pivot and cluster strategy: a preventive measure against diagnostic errors

    Directory of Open Access Journals (Sweden)

    Shimizu T

    2012-11-01

    Full Text Available Taro Shimizu (Rollins School of Public Health, Emory University, Atlanta, GA, USA), Yasuharu Tokuda (Institute of Clinical Medicine, Graduate School of Comprehensive Human Sciences, University of Tsukuba, Ibaraki, Japan). Abstract: Diagnostic errors constitute a substantial portion of preventable medical errors. The accumulation of evidence shows that most errors result from one or more cognitive biases and a variety of debiasing strategies have been introduced. In this article, we introduce a new diagnostic strategy, the pivot and cluster strategy (PCS), encompassing both of the two mental processes in making diagnosis referred to as the intuitive process (System 1) and analytical process (System 2) in one strategy. With PCS, physicians can recall a set of most likely differential diagnoses (System 2) of an initial diagnosis made by the physicians’ intuitive process (System 1), thereby enabling physicians to double check their diagnosis with two consecutive diagnostic processes. PCS is expected to reduce cognitive errors and enhance their diagnostic accuracy and validity, thereby realizing better patient outcomes and cost- and time-effective health care management. Keywords: diagnosis, diagnostic errors, debiasing

  6. Measurement error correction in the least absolute shrinkage and selection operator model when validation data are available.

    Science.gov (United States)

    Vasquez, Monica M; Hu, Chengcheng; Roe, Denise J; Halonen, Marilyn; Guerra, Stefano

    2017-01-01

    Measurement of serum biomarkers by multiplex assays may be more variable as compared to single biomarker assays. Measurement error in these data may bias parameter estimates in regression analysis, which could mask true associations of serum biomarkers with an outcome. The Least Absolute Shrinkage and Selection Operator (LASSO) can be used for variable selection in these high-dimensional data. Furthermore, when the distribution of measurement error is assumed to be known or estimated with replication data, a simple measurement error correction method can be applied to the LASSO method. However, in practice the distribution of the measurement error is unknown and is expensive to estimate through replication both in monetary cost and need for greater amount of sample which is often limited in quantity. We adapt an existing bias correction approach by estimating the measurement error using validation data in which a subset of serum biomarkers are re-measured on a random subset of the study sample. We evaluate this method using simulated data and data from the Tucson Epidemiological Study of Airway Obstructive Disease (TESAOD). We show that the bias in parameter estimation is reduced and variable selection is improved.

  7. Grounding the randomness of quantum measurement.

    Science.gov (United States)

    Jaeger, Gregg

    2016-05-28

    Julian Schwinger provided to physics a mathematical reconstruction of quantum mechanics on the basis of the characteristics of sequences of measurements occurring at the atomic level of physical structure. The central component of this reconstruction is an algebra of symbols corresponding to quantum measurements, conceived of as discrete processes, which serve to relate experience to theory; collections of outcomes of identically circumscribed such measurements are attributed expectation values, which constitute the predictive content of the theory. The outcomes correspond to certain phase parameters appearing in the corresponding symbols, which are complex numbers, the algebra of which he finds by a process he refers to as 'induction'. Schwinger assumed these (individually unpredictable) phase parameters to take random, uniformly distributed definite values within a natural range. I have previously suggested that the 'principle of plenitude' may serve as a basis in principle for the occurrence of the definite measured values that are those members of the collections of measurement outcomes from which the corresponding observed statistics derive (Jaeger 2015Found. Phys.45, 806-819. (doi:10.1007/s10701-015-9893-6)). Here, I evaluate Schwinger's assumption in the context of recent critiques of the notion of randomness and explicitly relate the randomness of these phases with the principle of plenitude and, in this way, provide a fundamental grounding for the objective, physically irreducible probabilities, conceived of as graded possibilities, that are attributed to measurement outcomes by quantum mechanics. © 2016 The Author(s).

  8. Basic Diagnosis and Prediction of Persistent Contrail Occurrence using High-resolution Numerical Weather Analyses/Forecasts and Logistic Regression. Part I: Effects of Random Error

    Science.gov (United States)

    Duda, David P.; Minnis, Patrick

    2009-01-01

    Straightforward application of the Schmidt-Appleman contrail formation criteria to diagnose persistent contrail occurrence from numerical weather prediction data is hindered by significant bias errors in the upper tropospheric humidity. Logistic models of contrail occurrence have been proposed to overcome this problem, but basic questions remain about how random measurement error may affect their accuracy. A set of 5000 synthetic contrail observations is created to study the effects of random error in these probabilistic models. The simulated observations are based on distributions of temperature, humidity, and vertical velocity derived from Advanced Regional Prediction System (ARPS) weather analyses. The logistic models created from the simulated observations were evaluated using two common statistical measures of model accuracy, the percent correct (PC) and the Hanssen-Kuipers discriminant (HKD). To convert the probabilistic results of the logistic models into a dichotomous yes/no choice suitable for the statistical measures, two critical probability thresholds are considered. The HKD scores are higher when the climatological frequency of contrail occurrence is used as the critical threshold, while the PC scores are higher when the critical probability threshold is 0.5. For both thresholds, typical random errors in temperature, relative humidity, and vertical velocity are found to be small enough to allow for accurate logistic models of contrail occurrence. The accuracy of the models developed from synthetic data is over 85 percent for both the prediction of contrail occurrence and non-occurrence, although in practice, larger errors would be anticipated.
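
    Both verification scores used above are simple functions of the 2 x 2 contingency table obtained after thresholding the predicted probabilities, with the Hanssen-Kuipers discriminant equal to the hit rate minus the false-alarm rate. A compact illustration (threshold and inputs arbitrary):

        import numpy as np

        def contrail_scores(y_true, p_pred, threshold=0.5):
            """Percent correct (PC) and Hanssen-Kuipers discriminant (HKD)."""
            y_true = np.asarray(y_true, dtype=bool)
            y_hat = np.asarray(p_pred) >= threshold
            hits = np.sum(y_hat & y_true)             # predicted yes, observed yes
            misses = np.sum(~y_hat & y_true)          # predicted no, observed yes
            false_alarms = np.sum(y_hat & ~y_true)    # predicted yes, observed no
            correct_neg = np.sum(~y_hat & ~y_true)    # predicted no, observed no
            pc = (hits + correct_neg) / y_true.size
            hkd = hits / (hits + misses) - false_alarms / (false_alarms + correct_neg)
            return pc, hkd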

  9. Analysis of family-wise error rates in statistical parametric mapping using random field theory.

    Science.gov (United States)

    Flandin, Guillaume; Friston, Karl J

    2017-11-01

    This technical report revisits the analysis of family-wise error rates in statistical parametric mapping-using random field theory-reported in (Eklund et al. []: arXiv 1511.01863). Contrary to the understandable spin that these sorts of analyses attract, a review of their results suggests that they endorse the use of parametric assumptions-and random field theory-in the analysis of functional neuroimaging data. We briefly rehearse the advantages parametric analyses offer over nonparametric alternatives and then unpack the implications of (Eklund et al. []: arXiv 1511.01863) for parametric procedures. Hum Brain Mapp, 2017. © 2017 The Authors Human Brain Mapping Published by Wiley Periodicals, Inc.

  10. Statistical analysis of compressive low rank tomography with random measurements

    Science.gov (United States)

    Acharya, Anirudh; Guţă, Mădălin

    2017-05-01

    We consider the statistical problem of ‘compressive’ estimation of low rank states (r ≪ d) with random basis measurements, where r and d are the rank and dimension of the state respectively. We investigate whether for a fixed sample size N, the estimation error associated with a ‘compressive’ measurement setup is ‘close’ to that of the setting where a large number of bases are measured. We generalise and extend previous results, and show that the mean square error (MSE) associated with the Frobenius norm attains the optimal rate rd/N with only O(r log d) random basis measurements for all states. An important tool in the analysis is the concentration of the Fisher information matrix (FIM). We demonstrate that although a concentration of the MSE follows from a concentration of the FIM for most states, the FIM fails to concentrate for states with eigenvalues close to zero. We analyse this phenomenon in the case of a single qubit and demonstrate a concentration of the MSE about its optimal despite a lack of concentration of the FIM for states close to the boundary of the Bloch sphere. We also consider the estimation error in terms of a different metric, the quantum infidelity. We show that a concentration in the mean infidelity (MINF) does not exist uniformly over all states, highlighting the importance of loss function choice. Specifically, we show that for states that are nearly pure, the MINF scales as 1/√N but the constant converges to zero as the number of settings is increased. This demonstrates a lack of ‘compressive’ recovery for nearly pure states in this metric.

  11. Study on error analysis and accuracy improvement for aspheric profile measurement

    Science.gov (United States)

    Gao, Huimin; Zhang, Xiaodong; Fang, Fengzhou

    2017-06-01

    Aspheric surfaces are important to optical systems and need high-precision surface metrology. Stylus profilometry is currently the most common approach to measure axially symmetric elements. However, if the asphere has rotational alignment errors, the wrong cresting point will be located, yielding significantly incorrect surface errors. This paper studies the simulated results of an asphere with rotational angles around the X-axis and Y-axis, and with stylus tip shifts in the X, Y and Z directions. Experimental results show that the same absolute value of rotational error around the X-axis causes the same profile errors, whereas different rotational errors around the Y-axis cause profile errors with different tilt angles. Moreover, the greater the rotational errors, the larger the peak-to-valley value of the profile errors. To identify the rotational angles around the X-axis and Y-axis, algorithms are performed to analyze the X-axis and Y-axis rotational angles respectively. The actual profile errors are then calculated from multiple profile measurements around the X-axis according to the proposed analysis flow chart. The aim of the multiple-measurement strategy is to reach the zero position of the X-axis rotational error. Experimental results prove that the proposed algorithms achieve accurate profile errors for aspheric surfaces while avoiding both X-axis and Y-axis rotational errors. Finally, a measurement strategy for aspheric surfaces is presented systematically.

  12. Measurement accuracy of articulated arm CMMs with circular grating eccentricity errors

    Science.gov (United States)

    Zheng, Dateng; Yin, Sanfeng; Luo, Zhiyang; Zhang, Jing; Zhou, Taiping

    2016-11-01

    The 6 circular grating eccentricity errors model attempts to improve the measurement accuracy of an articulated arm coordinate measuring machine (AACMM) without increasing the corresponding hardware cost. We analyzed the AACMM’s circular grating eccentricity and obtained the 6 joints’ circular grating eccentricity error model parameters by conducting circular grating eccentricity error experiments. We completed the calibration operations for the measurement models by using home-made standard bar components. Our results show that the measurement errors from the AACMM’s measurement model without and with circular grating eccentricity errors are 0.0834 mm and 0.0462 mm, respectively. Significantly, we determined that measurement accuracy increased by about 44.6% when the circular grating eccentricity errors were corrected. This study is significant because it promotes wider applications of AACMMs both in theory and in practice.

  13. Study of systematic errors in the luminosity measurement

    Energy Technology Data Exchange (ETDEWEB)

    Arima, Tatsumi [Tsukuba Univ., Ibaraki (Japan). Inst. of Applied Physics

    1993-04-01

    The experimental systematic error in the barrel region was estimated to be 0.44%. This value is derived considering the systematic uncertainties from the dominant sources, but does not include uncertainties which are still being studied. In the end cap region, the study of shower behavior and clustering effects is under way in order to determine the angular resolution at the low-angle edge of the Liquid Argon Calorimeter. We also expect that the systematic error in this region will be less than 1%. The technical precision of the theoretical uncertainty is better than 0.1%, based on a comparison of the Tobimatsu-Shimizu program and BABAMC as modified by ALEPH. To estimate the physical uncertainty we will use ALIBABA [9], which includes the O(α²) QED correction in leading-log approximation. (J.P.N.).

  14. Sensor Interaction as a Source of the Electromagnetic Field Measurement Error

    Directory of Open Access Journals (Sweden)

    Hartansky R.

    2014-12-01

    Full Text Available The article deals with the analytical calculation and numerical simulation of the mutual interaction of electromagnetic sensors. The sensors are components of a field probe, and their interaction causes a measurement error. An electromagnetic field probe contains three mutually perpendicular sensors in order to measure the electric field vector. The sensor error is evaluated as a function of the relative position of the sensors. Based on this, recommendations are proposed for the construction of electromagnetic field probes that minimize the sensor interaction and the measurement error.

  15. Computational Fluid Dynamics Analysis on Radiation Error of Surface Air Temperature Measurement

    Science.gov (United States)

    Yang, Jie; Liu, Qing-Quan; Ding, Ren-Hui

    2017-01-01

    Due to solar radiation effect, current air temperature sensors inside a naturally ventilated radiation shield may produce a measurement error that is 0.8 K or higher. To improve air temperature observation accuracy and correct historical temperature of weather stations, a radiation error correction method is proposed. The correction method is based on a computational fluid dynamics (CFD) method and a genetic algorithm (GA) method. The CFD method is implemented to obtain the radiation error of the naturally ventilated radiation shield under various environmental conditions. Then, a radiation error correction equation is obtained by fitting the CFD results using the GA method. To verify the performance of the correction equation, the naturally ventilated radiation shield and an aspirated temperature measurement platform are characterized in the same environment to conduct the intercomparison. The aspirated temperature measurement platform serves as an air temperature reference. The mean radiation error given by the intercomparison experiments is 0.23 K, and the mean radiation error given by the correction equation is 0.2 K. This radiation error correction method allows the radiation error to be reduced by approximately 87 %. The mean absolute error and the root mean square error between the radiation errors given by the correction equation and the radiation errors given by the experiments are 0.036 K and 0.045 K, respectively.

  16. Comparison of Neural Network Error Measures for Simulation of Slender Marine Structures

    DEFF Research Database (Denmark)

    Christiansen, Niels H.; Voie, Per Erlend Torbergsen; Winther, Ole

    2014-01-01

    Training of an artificial neural network (ANN) adjusts the internal weights of the network in order to minimize a predefined error measure. This error measure is given by an error function. Several different error functions are suggested in the literature. However, the far most common measure for regression is the mean square error. This paper looks into the possibility of improving the performance of neural networks by selecting or defining error functions that are tailor-made for a specific objective. A neural network trained to simulate tension forces in an anchor chain on a floating offshore platform is designed and tested. The purpose of setting up the network is to reduce calculation time in a fatigue life analysis. Therefore, the networks trained on different error functions are compared with respect to accuracy of rain flow counts of stress cycles over a number of time series simulations.

  17. Investigation on coupling error characteristics in angular rate matching based ship deformation measurement approach

    Science.gov (United States)

    Yang, Shuai; Wu, Wei; Wang, Xingshu; Xu, Zhiguang

    2018-01-01

    The coupling error in the measurement of ship hull deformation can significantly influence the attitude accuracy of shipborne weapons and equipment. It is therefore important to study the characteristics of the coupling error. In this paper, a comprehensive investigation of the coupling error is reported, which has the potential to help reduce the coupling error in the future. Firstly, the causes and characteristics of the coupling error are analyzed theoretically based on the basic theory of measuring ship deformation. Then, simulations are conducted to verify the correctness of the theoretical analysis. Simulation results show that the cross-correlation between dynamic flexure and ship angular motion leads to the coupling error in measuring ship deformation, and that the coupling error increases with the correlation between them. All simulation results coincide with the theoretical analysis.

  18. Measurement Error in Income and Schooling and the Bias of Linear Estimators

    DEFF Research Database (Denmark)

    Bingley, Paul; Martinello, Alessandro

    2017-01-01

    We propose a general framework for determining the extent of measurement error bias in ordinary least squares and instrumental variable (IV) estimators of linear models while allowing for measurement error in the validation source. We apply this method by validating Survey of Health, Ageing and Retirement in Europe data with Danish administrative registers. Contrary to most validation studies, we find that measurement error in income is classical once we account for imperfect validation data. We find nonclassical measurement error in schooling, causing a 38% amplification bias in IV estimators of the returns, with important implications for the program evaluation literature.
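
    Under classical measurement error in a regressor, the OLS slope is attenuated by the reliability ratio lambda = var(x) / (var(x) + var(u)), while a valid instrument that is correlated with the true regressor but not with the error recovers the slope. The simulation below illustrates that generic contrast only; it does not reproduce the paper's validation design, and all numbers are invented.

        import numpy as np

        rng = np.random.default_rng(7)
        n, beta = 200_000, 0.08                       # e.g. return to a year of schooling

        z = rng.normal(size=n)                        # instrument
        x = 0.8 * z + rng.normal(size=n)              # true regressor (schooling)
        y = beta * x + rng.normal(0, 0.5, size=n)     # outcome (log income)
        w = x + rng.normal(0, 1.0, size=n)            # classical measurement error in x

        ols_naive = np.cov(w, y)[0, 1] / np.var(w)            # attenuated towards zero
        iv = np.cov(z, y)[0, 1] / np.cov(z, w)[0, 1]          # IV (Wald) estimate, consistent
        lam = np.var(x) / (np.var(x) + 1.0)                    # predicted attenuation factor
        print(ols_naive, beta * lam, iv)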

  19. Measurement Error in Income and Schooling and the Bias of Linear Estimators

    DEFF Research Database (Denmark)

    Bingley, Paul; Martinello, Alessandro

    2017-01-01

    We propose a general framework for determining the extent of measurement error bias in ordinary least squares and instrumental variable (IV) estimators of linear models while allowing for measurement error in the validation source. We apply this method by validating Survey of Health, Ageing and Retirement in Europe data with Danish administrative registers. Contrary to most validation studies, we find that measurement error in income is classical once we account for imperfect validation data. We find nonclassical measurement error in schooling, causing a 38% amplification bias in IV estimators...

  20. Space-borne remote sensing of CO2 by IPDA lidar with heterodyne detection: random error estimation

    Science.gov (United States)

    Matvienko, G. G.; Sukhanov, A. Y.

    2015-11-01

    Possibilities of measuring the CO2 column concentration from spaceborne integrated path differential absorption (IPDA) lidar signals in the near-IR absorption bands are investigated. It is shown that coherent detection principles applied in the near-infrared spectral region promise a high sensitivity for the measurement of the integrated dry-air column mixing ratio of CO2. The simulations indicate that for CO2 the target observational requirement (0.2%) for the relative random error can be met with a telescope aperture of 0.5 m, a detector bandwidth of 10 MHz, a laser energy per pulse of 0.3 mJ and averaging over 7500 pulses. It should also be noted that the heterodyne technique allows the laser power and the receiver's overall dimensions to be reduced significantly compared to direct detection.

  1. The misinterpretation of the standard error of measurement in medical education: a primer on the problems, pitfalls and peculiarities of the three different standard errors of measurement.

    Science.gov (United States)

    McManus, I C

    2012-01-01

    In high-stakes assessments in medical education, such as final undergraduate examinations and postgraduate assessments, an attempt is frequently made to set confidence limits on the probable true score of a candidate. Typically, this is carried out using what is referred to as the standard error of measurement (SEM). However, it is often the case that the wrong formula is applied, there actually being three different formulae for use in different situations. To explain and clarify the calculation of the SEM, and differentiate three separate standard errors, which here are called the standard error of measurement (SEmeas), the standard error of estimation (SEest) and the standard error of prediction (SEpred). Most accounts describe the calculation of SEmeas. For most purposes, though, what is required is the standard error of estimation (SEest), which has to be applied not to a candidate's actual score but to their estimated true score after taking into account the regression to the mean that occurs due to the unreliability of an assessment. A third formula, the standard error of prediction (SEpred) is less commonly used in medical education, but is useful in situations such as counselling, where one needs to predict a future actual score on an examination from a previous actual score on the same examination. The various formulae can produce predictions that differ quite substantially, particularly when reliability is not particularly high, and the mark in question is far removed from the average performance of candidates. That can have important, unintended consequences, particularly in a medico-legal context.
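
    The three quantities distinguished above have simple closed forms in classical test theory: with test standard deviation SD and reliability r, SEmeas = SD*sqrt(1 - r), SEest = SD*sqrt(r*(1 - r)) and SEpred = SD*sqrt(1 - r^2). A short illustration with made-up exam statistics:

        import math

        def standard_errors(sd, reliability):
            """Return (SEmeas, SEest, SEpred) for a test with given SD and reliability."""
            se_meas = sd * math.sqrt(1 - reliability)
            se_est = sd * math.sqrt(reliability * (1 - reliability))
            se_pred = sd * math.sqrt(1 - reliability ** 2)
            return se_meas, se_est, se_pred

        # e.g. an exam with SD = 10 marks and reliability 0.75
        print(standard_errors(10, 0.75))   # (5.0, 4.33, 6.61) -- SEpred is the largest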

  2. Error Correcting Coding of Telemetry Information for Channel with Random Bit Inversions and Deletions

    Directory of Open Access Journals (Sweden)

    M. A. Elshafey

    2014-01-01

    Full Text Available This paper presents a method of error-correcting coding for digital information. A feature of this method is that it handles bit inversions and bit deletions caused by loss of synchronization between the receiving and transmitting devices or by other factors. The article gives a brief overview of the features, characteristics and modern construction methods of LDPC and convolutional codes, and considers a general model of the communication channel that takes into account the probability of bit inversion, deletion and insertion. The proposed coding scheme is based on a combination of LDPC coding and convolutional coding. A comparative analysis of the proposed combined coding scheme and a coding scheme containing only an LDPC coder is performed; both schemes have the same coding rate. Experiments were carried out on two models of communication channels at different probabilities of bit inversion and deletion. The first model allows only random bit inversion, while the other allows both random bit inversion and deletion. The experiments also analyze the decoding delay of the convolutional coder. The results of these experimental studies demonstrate the ability of the proposed coding scheme to improve the recovery of data transmitted over a communication channel with noise that causes random bit inversions and deletions, without decreasing the coding rate.

  3. An empirical study of the complexity and randomness of prediction error sequences

    Science.gov (United States)

    Ratsaby, Joel

    2011-07-01

    We investigate a population of binary mistake sequences that result from learning with parametric models of different order. We obtain estimates of their error, algorithmic complexity and divergence from a purely random Bernoulli sequence. We study the relationship of these variables to the learner's information density parameter which is defined as the ratio between the lengths of the compressed to uncompressed files that contain the learner's decision rule. The results indicate that good learners have a low information density ρ while bad learners have a high ρ. Bad learners generate mistake sequences that are atypically complex or diverge stochastically from a purely random Bernoulli sequence. Good learners generate typically complex sequences with low divergence from Bernoulli sequences and they include mistake sequences generated by the Bayes optimal predictor. Based on the static algorithmic interference model of [18] the learner here acts as a static structure which "scatters" the bits of an input sequence (to be predicted) in proportion to its information density ρ thereby deforming its randomness characteristics.
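
    The information density rho defined above is a compression ratio, so it can be approximated for any serialized decision rule or mistake sequence with a general-purpose compressor. The toy example below uses zlib and is purely illustrative; the paper's own estimates use a different setup.

        import random
        import zlib

        def information_density(data: bytes) -> float:
            """Ratio of compressed to uncompressed length (lower = more regular)."""
            return len(zlib.compress(data, 9)) / len(data)

        regular = b"01" * 5000                                      # highly regular sequence
        random.seed(0)
        noisy = bytes(random.choice(b"01") for _ in range(10000))   # near-random 0/1 sequence

        print(information_density(regular))   # near zero: the sequence is highly regular
        print(information_density(noisy))     # noticeably larger: random bits resist compression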

  4. Comparing measurement error correction methods for rate-of-change exposure variables in survival analysis.

    Science.gov (United States)

    Veronesi, Giovanni; Ferrario, Marco M; Chambless, Lloyd E

    2013-12-01

    In this article we focus on comparing measurement error correction methods for rate-of-change exposure variables in survival analysis, when longitudinal data are observed prior to the follow-up time. Motivational examples include the analysis of the association between changes in cardiovascular risk factors and subsequent onset of coronary events. We derive a measurement error model for the rate of change, estimated through subject-specific linear regression, assuming an additive measurement error model for the time-specific measurements. The rate of change is then included as a time-invariant variable in a Cox proportional hazards model, adjusting for the first time-specific measurement (baseline) and an error-free covariate. In a simulation study, we compared bias, standard deviation and mean squared error (MSE) for the regression calibration (RC) and the simulation-extrapolation (SIMEX) estimators. Our findings indicate that when the amount of measurement error is substantial, RC should be the preferred method, since it has smaller MSE for estimating the coefficients of the rate of change and of the variable measured without error. However, when the amount of measurement error is small, the choice of the method should take into account the event rate in the population and the effect size to be estimated. An application to an observational study, as well as examples of published studies where our model could have been applied, are also provided.

  5. Analysis on the dynamic error for optoelectronic scanning coordinate measurement network

    Science.gov (United States)

    Shi, Shendong; Yang, Linghui; Lin, Jiarui; Guo, Siyang; Ren, Yongjie

    2018-01-01

    Large-scale dynamic three-dimensional coordinate measurement is in strong demand in equipment manufacturing. Noted for its advantages of high accuracy, scale expandability and multitask parallel measurement, the optoelectronic scanning measurement network has received close attention. It is widely used in the joining of large components, spacecraft rendezvous and docking simulation, digital shipbuilding and automated guided vehicle navigation. At present, most research on optoelectronic scanning measurement networks focuses on static measurement capability, and research on dynamic accuracy is insufficient. Limited by the measurement principle, the dynamic error is non-negligible and restricts the application. The workshop measurement and positioning system is a representative system that can, in theory, realize dynamic measurement. In this paper we conduct in-depth research on the sources of dynamic error and divide them into two parts: phase error and synchronization error. A dynamic error model is constructed. Based on this model, simulations of the dynamic error are carried out. The dynamic error is quantified and found to exhibit both volatility and periodicity; its characteristics are shown in detail. The research results lay a foundation for further accuracy improvement.

  6. A Unified Approach to Measurement Error and Missing Data: Overview and Applications

    Science.gov (United States)

    Blackwell, Matthew; Honaker, James; King, Gary

    2017-01-01

    Although social scientists devote considerable effort to mitigating measurement error during data collection, they often ignore the issue during data analysis. And although many statistical methods have been proposed for reducing measurement error-induced biases, few have been widely used because of implausible assumptions, high levels of model…

  7. A Unified Approach to Measurement Error and Missing Data: Details and Extensions

    Science.gov (United States)

    Blackwell, Matthew; Honaker, James; King, Gary

    2017-01-01

    We extend a unified and easy-to-use approach to measurement error and missing data. In our companion article, Blackwell, Honaker, and King give an intuitive overview of the new technique, along with practical suggestions and empirical applications. Here, we offer more precise technical details, more sophisticated measurement error model…

  8. Do Survey Data Estimate Earnings Inequality Correctly? Measurement Errors among Black and White Male Workers

    Science.gov (United States)

    Kim, ChangHwan; Tamborini, Christopher R.

    2012-01-01

    Few studies have considered how earnings inequality estimates may be affected by measurement error in self-reported earnings in surveys. Utilizing restricted-use data that links workers in the Survey of Income and Program Participation with their W-2 earnings records, we examine the effect of measurement error on estimates of racial earnings…

  9. Comparing Graphical and Verbal Representations of Measurement Error in Test Score Reports

    Science.gov (United States)

    Zwick, Rebecca; Zapata-Rivera, Diego; Hegarty, Mary

    2014-01-01

    Research has shown that many educators do not understand the terminology or displays used in test score reports and that measurement error is a particularly challenging concept. We investigated graphical and verbal methods of representing measurement error associated with individual student scores. We created four alternative score reports, each…

  10. Exploring the Effectiveness of a Measurement Error Tutorial in Helping Teachers Understand Score Report Results

    Science.gov (United States)

    Zapata-Rivera, Diego; Zwick, Rebecca; Vezzu, Margaret

    2016-01-01

    The goal of this study was to explore the effectiveness of a short web-based tutorial in helping teachers to better understand the portrayal of measurement error in test score reports. The short video tutorial included both verbal and graphical representations of measurement error. Results showed a significant difference in comprehension scores…

  11. Working with Error and Uncertainty to Increase Measurement Validity

    Science.gov (United States)

    Amrein-Beardsley, Audrey; Barnett, Joshua H.

    2012-01-01

    Over the previous two decades, the era of accountability has amplified efforts to measure educational effectiveness more than Edward Thorndike, the father of educational measurement, likely would have imagined. Expressly, the measurement structure for evaluating educational effectiveness continues to rely increasingly on one sole…

  12. Sources of measurement error in laser Doppler vibrometers and proposal for unified specifications

    Science.gov (United States)

    Siegmund, Georg

    2008-06-01

    The focus of this paper is to disclose sources of measurement error in laser Doppler vibrometers (LDV) and to suggest specifications, suitable to describe their impact on measurement uncertainty. Measurement errors may be caused by both the optics and electronics sections of an LDV, caused by non-ideal measurement conditions or imperfect technical realisation. While the contribution of the optics part can be neglected in most cases, the subsequent signal processing chain may cause significant errors. Measurement error due to non-ideal behaviour of the interferometer has been observed mainly at very low vibration amplitudes and depending on the optical arrangement. The paper is organized as follows: Electronic signal processing blocks, beginning with the photo detector, are analyzed with respect to their contribution to measurement uncertainty. A set of specifications is suggested, adopting vocabulary and definitions known from traditional vibration measurement equipment. Finally a measurement setup is introduced, suitable for determination of most specifications utilizing standard electronic measurement equipment.

  13. The Relative Importance of Random Error and Observation Frequency in Detecting Trends in Upper Tropospheric Water Vapor

    Science.gov (United States)

    Whiteman, David N.; Vermeesch, Kevin C.; Oman, Luke D.; Weatherhead, Elizabeth C.

    2011-01-01

    Recent published work assessed the amount of time to detect trends in atmospheric water vapor over the coming century. We address the same question and conclude that under the most optimistic scenarios and assuming perfect data (i.e., observations with no measurement uncertainty) the time to detect trends will be at least 12 years at approximately 200 hPa in the upper troposphere. Our times to detect trends are therefore shorter than those recently reported and this difference is affected by data sources used, method of processing the data, geographic location and pressure level in the atmosphere where the analyses were performed. We then consider the question of how instrumental uncertainty plays into the assessment of time to detect trends. We conclude that due to the high natural variability in atmospheric water vapor, the amount of time to detect trends in the upper troposphere is relatively insensitive to instrumental random uncertainty and that it is much more important to increase the frequency of measurement than to decrease the random error in the measurement. This is put in the context of international networks such as the Global Climate Observing System (GCOS) Reference Upper-Air Network (GRUAN) and the Network for the Detection of Atmospheric Composition Change (NDACC) that are tasked with developing time series of climate quality water vapor data.
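
    Assessments of this kind typically build on the approximation of Weatherhead et al. (1998) for the number of years n* of monthly data needed to detect a linear trend omega, given the standard deviation sigma_N and lag-1 autocorrelation phi of the noise: n* = [(3.3 * sigma_N / |omega|) * sqrt((1 + phi) / (1 - phi))]^(2/3). The helper below is a sketch of that published formula, not necessarily the exact procedure used in this study, and the example numbers are invented.

        def years_to_detect(trend_per_year, sigma_noise, phi):
            """Approximate years of monthly data needed to detect a linear trend
            (Weatherhead et al., 1998), with 90% power at the 5% level."""
            return (3.3 * sigma_noise / abs(trend_per_year)
                    * ((1 + phi) / (1 - phi)) ** 0.5) ** (2 / 3)

        # invented example: 1% per year trend, 20% monthly variability, phi = 0.7
        print(years_to_detect(0.01, 0.20, 0.7))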

  14. Estimators of the Relations of Equivalence, Tolerance and Preference Based on Pairwise Comparisons with Random Errors

    Directory of Open Access Journals (Sweden)

    Leszek Klukowski

    2012-01-01

    Full Text Available This paper presents a review of results of the author in the area of estimation of the relations of equivalence, tolerance and preference within a finite set based on multiple, independent (in a stochastic way) pairwise comparisons with random errors, in binary and multivalent forms. These estimators require weaker assumptions than those used in the literature on the subject. Estimates of the relations are obtained based on solutions to problems from discrete optimization. They allow application of both types of comparisons - binary and multivalent (this fact relates to the tolerance and preference relations). The estimates can be verified in a statistical way; in particular, it is possible to verify the type of the relation. The estimates have been applied by the author to problems regarding forecasting, financial engineering and bio-cybernetics. (original abstract)

  15. The effect of systematic measurement errors on atmospheric CO2 inversions: a quantitative assessment

    Directory of Open Access Journals (Sweden)

    C. Rödenbeck

    2006-01-01

    Full Text Available Surface-atmosphere exchange fluxes of CO2, estimated by an interannual atmospheric transport inversion from atmospheric mixing ratio measurements, are affected by several sources of errors, one of which is experimental errors. Quantitative information about such measurement errors can be obtained from regular co-located measurements done by different laboratories or using different experimental techniques. The present quantitative assessment is based on intercomparison information from the CMDL and CSIRO atmospheric measurement programs. We show that the effects of systematic measurement errors on inversion results are very small compared to other errors in the flux estimation (as well as compared to signal variability). As a practical consequence, this assessment justifies the merging of data sets from different laboratories or different experimental techniques (flask and in-situ), if systematic differences (and their changes) are comparable to those considered here. This work also highlights the importance of regular intercomparison programs.

  16. The effect of systematic measurement errors on atmospheric CO2 inversions: a quantitative assessment

    Science.gov (United States)

    Rödenbeck, C.; Conway, T. J.; Langenfelds, R. L.

    2006-01-01

    Surface-atmosphere exchange fluxes of CO2, estimated by an interannual atmospheric transport inversion from atmospheric mixing ratio measurements, are affected by several sources of errors, one of which is experimental errors. Quantitative information about such measurement errors can be obtained from regular co-located measurements done by different laboratories or using different experimental techniques. The present quantitative assessment is based on intercomparison information from the CMDL and CSIRO atmospheric measurement programs. We show that the effects of systematic measurement errors on inversion results are very small compared to other errors in the flux estimation (as well as compared to signal variability). As a practical consequence, this assessment justifies the merging of data sets from different laboratories or different experimental techniques (flask and in-situ), if systematic differences (and their changes) are comparable to those considered here. This work also highlights the importance of regular intercomparison programs.

  17. Metrological Array of Cyber-Physical Systems. Part 11. Remote Error Correction of Measuring Channel

    Directory of Open Access Journals (Sweden)

    Yuriy YATSUK

    2015-09-01

    Full Text Available For multi-channel measuring instruments with both the classical structure and the isolated one, the major factors contributing to their errors are identified on the basis of an analysis of their general metrological properties. The limiting possibilities of a remote automatic method for correcting the additive and multiplicative errors of measuring instruments with the help of code-controlled measures are studied. For on-site calibration of multi-channel measuring instruments, portable voltage calibrator structures are suggested and their metrological properties during automatic error adjustment are analysed. It was experimentally established that the unadjusted error does not exceed ±1 mV, which satisfies most industrial applications. This confirms the main claim concerning the possibility of remote error self-adjustment of multi-channel measuring instruments, as well as their use as calibration tools for proper verification.

  18. Detecting genotyping error using measures of degree of Hardy-Weinberg disequilibrium.

    Science.gov (United States)

    Attia, John; Thakkinstian, Ammarin; McElduff, Patrick; Milne, Elizabeth; Dawson, Somer; Scott, Rodney J; Klerk, Nicholas de; Armstrong, Bruce; Thompson, John

    2010-01-01

    Tests for Hardy-Weinberg equilibrium (HWE) have been used to detect genotyping error, but those tests have low power unless the sample size is very large. We assessed the performance of measures of departure from HWE as an alternative way of screening for genotyping error. Three measures of the degree of disequilibrium (alpha, D, and F) were tested for their ability to detect genotyping error of 5% or more using simulations and a real dataset of 184 children with leukemia genotyped at 28 single nucleotide polymorphisms. The simulations indicate that all three disequilibrium coefficients can usefully detect genotyping error as judged by the area under the Receiver Operator Characteristic (ROC) curve. Their discriminative ability increases as the error rate increases, and is greater if the genotyping error is in the direction of the minor allele. Optimal thresholds for detecting genotyping error vary for different allele frequencies and patterns of genotyping error but allele frequency-specific thresholds can be nominated. Applying these thresholds would have picked up about 90% of genotyping errors in our actual dataset. Measures of departure from HWE may be useful for detecting genotyping error, but this needs to be confirmed in other real datasets.
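    For readers who want to reproduce this kind of screen, the sketch below computes two textbook measures of departure from HWE from genotype counts at a single SNP (the paper's exact parameterizations of alpha, D and F may differ; the counts are hypothetical).

      def hwd_measures(n_AA, n_Aa, n_aa):
          # Degree of Hardy-Weinberg disequilibrium at one biallelic locus.
          n = n_AA + n_Aa + n_aa
          p_AA, p_Aa = n_AA / n, n_Aa / n
          p = p_AA + p_Aa / 2.0            # frequency of allele A
          q = 1.0 - p
          D = p_AA - p ** 2                # excess of AA homozygotes over the HWE expectation
          F = 1.0 - p_Aa / (2.0 * p * q)   # inbreeding-type coefficient (heterozygote deficit)
          return D, F

      # Hypothetical counts showing a heterozygote deficit, a typical signature of
      # genotyping error such as allele dropout; a large |D| or |F| flags the SNP.
      print(hwd_measures(n_AA=60, n_Aa=80, n_aa=44))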

  19. Sharing is caring? Measurement error and the issues arising from combining 3D morphometric datasets.

    Science.gov (United States)

    Fruciano, Carmelo; Celik, Mélina A; Butler, Kaylene; Dooley, Tom; Weisbecker, Vera; Phillips, Matthew J

    2017-09-01

    Geometric morphometrics is routinely used in ecology and evolution and morphometric datasets are increasingly shared among researchers, allowing for more comprehensive studies and higher statistical power (as a consequence of increased sample size). However, sharing of morphometric data opens up the question of how much nonbiologically relevant variation (i.e., measurement error) is introduced in the resulting datasets and how this variation affects analyses. We perform a set of analyses based on an empirical 3D geometric morphometric dataset. In particular, we quantify the amount of error associated with combining data from multiple devices and digitized by multiple operators and test for the presence of bias. We also extend these analyses to a dataset obtained with a recently developed automated method, which does not require human-digitized landmarks. Further, we analyze how measurement error affects estimates of phylogenetic signal and how its effect compares with the effect of phylogenetic uncertainty. We show that measurement error can be substantial when combining surface models produced by different devices and even more among landmarks digitized by different operators. We also document the presence of small, but significant, amounts of nonrandom error (i.e., bias). Measurement error is heavily reduced by excluding landmarks that are difficult to digitize. The automated method we tested had low levels of error, if used in combination with a procedure for dimensionality reduction. Estimates of phylogenetic signal can be more affected by measurement error than by phylogenetic uncertainty. Our results generally highlight the importance of landmark choice and the usefulness of estimating measurement error. Further, measurement error may limit comparisons of estimates of phylogenetic signal across studies if these have been performed using different devices or by different operators. Finally, we also show how widely held assumptions do not always hold true

  20. Improved characterisation and modelling of measurement errors in electrical resistivity tomography (ERT) surveys

    Science.gov (United States)

    Tso, Chak-Hau Michael; Kuras, Oliver; Wilkinson, Paul B.; Uhlemann, Sebastian; Chambers, Jonathan E.; Meldrum, Philip I.; Graham, James; Sherlock, Emma F.; Binley, Andrew

    2017-11-01

    Measurement errors can play a pivotal role in geophysical inversion. Most inverse models require users to prescribe or assume a statistical model of data errors before inversion. Wrongly prescribed errors can lead to over- or under-fitting of data; however, the derivation of models of data errors is often neglected. With the heightening interest in uncertainty estimation within hydrogeophysics, better characterisation and treatment of measurement errors is needed to provide improved image appraisal. Here we focus on the role of measurement errors in electrical resistivity tomography (ERT). We have analysed two time-lapse ERT datasets: one contains 96 sets of direct and reciprocal data collected from a surface ERT line within a 24 h timeframe; the other is a two-year-long cross-borehole survey at a UK nuclear site with 246 sets of over 50,000 measurements. Our study includes the characterisation of the spatial and temporal behaviour of measurement errors using autocorrelation and correlation coefficient analysis. We find that, in addition to well-known proportionality effects, ERT measurements can also be sensitive to the combination of electrodes used, i.e. errors may not be uncorrelated as often assumed. Based on these findings, we develop a new error model that allows grouping based on electrode number in addition to fitting a linear model to transfer resistance. The new model explains the observed measurement errors better and shows superior inversion results and uncertainty estimates in synthetic examples. It is robust, because it groups errors together based on the electrodes used to make the measurements. The new model can be readily applied to the diagonal data weighting matrix widely used in common inversion methods, as well as to the data covariance matrix in a Bayesian inversion framework. We demonstrate its application using extensive ERT monitoring datasets from the two aforementioned sites.
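    The grouping idea can be prototyped in a few lines. The sketch below (Python; the exact functional form and column names are assumptions, not the authors' implementation) fits a linear error model |e| = a + b|R| to reciprocal errors separately for each electrode, letting each measurement contribute to the groups of all four electrodes it uses.

      import numpy as np
      import pandas as pd

      def fit_electrode_grouped_error_model(df):
          # df columns: A, B, M, N (electrode numbers), R_normal, R_reciprocal (ohms).
          df = df.copy()
          df["R"] = 0.5 * (df["R_normal"] + df["R_reciprocal"]).abs()   # transfer resistance
          df["err"] = (df["R_normal"] - df["R_reciprocal"]).abs()       # reciprocal error
          # Each measurement contributes to the group of every electrode it uses.
          long = df.melt(id_vars=["R", "err"], value_vars=["A", "B", "M", "N"],
                         value_name="electrode")
          models = {}
          for electrode, grp in long.groupby("electrode"):
              slope, intercept = np.polyfit(grp["R"], grp["err"], 1)    # |e| ~ a + b|R|
              models[electrode] = (intercept, slope)
          return models   # per-electrode (a, b), e.g. to build a data-weighting matrix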

  1. Correction of motion measurement errors beyond the range resolution of a synthetic aperture radar

    Science.gov (United States)

    Doerry, Armin W [Albuquerque, NM; Heard, Freddie E [Albuquerque, NM; Cordaro, J Thomas [Albuquerque, NM

    2008-06-24

    Motion measurement errors that extend beyond the range resolution of a synthetic aperture radar (SAR) can be corrected by effectively decreasing the range resolution of the SAR in order to permit measurement of the error. Range profiles can be compared across the slow-time dimension of the input data in order to estimate the error. Once the error has been determined, appropriate frequency and phase correction can be applied to the uncompressed input data, after which range and azimuth compression can be performed to produce a desired SAR image.

  2. Error Sources in the ETA Energy Analyzer Measurement

    Energy Technology Data Exchange (ETDEWEB)

    Nexsen, W E

    2004-12-13

    At present the ETA beam energy as measured by the ETA energy analyzer and the DARHT spectrometer differ by approximately 12%. This discrepancy is due to two sources, an overestimate of the effective length of the ETA energy analyzer bending-field, and data reduction methods that are not valid. The discrepancy can be eliminated if we return to the original process of measuring the angular deflection of the beam and use a value of 43.2 cm for the effective length of the axial field profile.

  3. Quantitative evaluation of statistical errors in small-angle X-ray scattering measurements.

    Science.gov (United States)

    Sedlak, Steffen M; Bruetzel, Linda K; Lipfert, Jan

    2017-04-01

    A new model is proposed for the measurement errors incurred in typical small-angle X-ray scattering (SAXS) experiments, which takes into account the setup geometry and physics of the measurement process. The model accurately captures the experimentally determined errors from a large range of synchrotron and in-house anode-based measurements. Its most general formulation gives for the variance of the buffer-subtracted SAXS intensity σ²(q) = [I(q) + const.]/(kq), where I(q) is the scattering intensity as a function of the momentum transfer q; k and const. are fitting parameters that are characteristic of the experimental setup. The model gives a concrete procedure for calculating realistic measurement errors for simulated SAXS profiles. In addition, the results provide guidelines for optimizing SAXS measurements, which are in line with established procedures for SAXS experiments, and enable a quantitative evaluation of measurement errors.
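    The quoted variance model translates directly into a noise generator for simulated profiles. The sketch below (Python; the Guinier-like test profile and the values of k and const. are placeholders for setup-specific parameters) applies σ²(q) = [I(q) + const.]/(kq).

      import numpy as np

      def add_saxs_noise(q, I, k, const, seed=None):
          # sigma^2(q) = (I(q) + const) / (k * q), per the error model above.
          rng = np.random.default_rng(seed)
          sigma = np.sqrt((I + const) / (k * q))
          return I + rng.normal(0.0, sigma), sigma

      q = np.linspace(0.01, 0.5, 500)                   # momentum transfer (1/Angstrom)
      I = 1e3 * np.exp(-(30.0 * q) ** 2 / 3.0)          # toy Guinier-like intensity profile
      I_noisy, sigma = add_saxs_noise(q, I, k=5e4, const=10.0, seed=0)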

  4. From Measurements Errors to a New Strain Gauge Design

    DEFF Research Database (Denmark)

    Mikkelsen, Lars Pilgaard; Zike, Sanita; Salviato, Marco

    2015-01-01

    Significant over-prediction of the material stiffness, in the order of 1-10% for polymer-based composites, has been experimentally observed and numerically determined when using strain gauges for strain measurements instead of non-contact methods such as digital image correlation or less stiff methods...

  5. Comparing objective and subjective error measures for color constancy

    NARCIS (Netherlands)

    Lucassen, M.P.; Gijsenij, A.; Gevers, T.

    2008-01-01

    We compare an objective and a subjective performance measure for color constancy algorithms. Eight hyper-spectral images were rendered under a neutral reference illuminant and four chromatic illuminants (Red, Green, Yellow, Blue). The scenes rendered under the chromatic illuminants were color

  6. HyDEn: a hybrid steganocryptographic approach for data encryption using randomized error-correcting DNA codes.

    Science.gov (United States)

    Tulpan, Dan; Regoui, Chaouki; Durand, Guillaume; Belliveau, Luc; Léger, Serge

    2013-01-01

    This paper presents a novel hybrid DNA encryption (HyDEn) approach that uses randomized assignments of unique error-correcting DNA Hamming code words for single characters in the extended ASCII set. HyDEn relies on custom-built quaternary codes and a private key used in the randomized assignment of code words and the cyclic permutations applied on the encoded message. Along with its ability to detect and correct errors, HyDEn equals or outperforms existing cryptographic methods and represents a promising in silico DNA steganographic approach.

  7. HyDEn: A Hybrid Steganocryptographic Approach for Data Encryption Using Randomized Error-Correcting DNA Codes

    Directory of Open Access Journals (Sweden)

    Dan Tulpan

    2013-01-01

    Full Text Available This paper presents a novel hybrid DNA encryption (HyDEn) approach that uses randomized assignments of unique error-correcting DNA Hamming code words for single characters in the extended ASCII set. HyDEn relies on custom-built quaternary codes and a private key used in the randomized assignment of code words and the cyclic permutations applied on the encoded message. Along with its ability to detect and correct errors, HyDEn equals or outperforms existing cryptographic methods and represents a promising in silico DNA steganographic approach.

  8. Eddy-covariance flux errors due to biases in gas concentration measurements: origins, quantification and correction

    Science.gov (United States)

    Fratini, G.; McDermitt, D. K.; Papale, D.

    2013-08-01

    Errors in gas concentration measurements by infrared gas analysers can occur during eddy-covariance campaigns, associated with actual or apparent instrumental drifts or to biases due to thermal expansion, dirt contamination, aging of components or errors in field operations. If occurring on long time scales (hours to days), these errors are normally ignored during flux computation, under the assumption that errors in mean gas concentrations do not affect the estimation of turbulent fluctuations and, hence, of covariances. By analysing instrument theory of operation, and using numerical simulations and field data, we show that this is not the case for instruments with curvilinear calibrations; we further show that if not appropriately accounted for, concentration biases can lead to roughly proportional systematic flux errors, where the fractional errors in fluxes are about 30-40% the fractional errors in concentrations. We quantify these errors and characterize their dependency on main determinants. We then propose a correction procedure that largely - potentially completely - eliminates these errors. The correction, to be applied during flux computation, is based on knowledge of instrument calibration curves and on field or laboratory calibration data. Finally, we demonstrate the occurrence of such errors and validate the correction procedure by means of a field experiment, and accordingly provide recommendations for in situ operations. The correction described in this paper will soon be available in the EddyPro software (www.licor.com/eddypro).
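    The mechanism can be illustrated with a toy simulation (Python; the calibration polynomials, drift size and turbulence statistics below are invented for illustration and are not the EddyPro procedure): an additive drift in the raw signal leaves the covariance untouched under a linear calibration, but under a curvilinear calibration it shifts the operating point, changes the local slope, and produces a flux error that is a sizeable fraction of the induced concentration bias.

      import numpy as np

      rng = np.random.default_rng(1)
      n = 50_000
      w = rng.normal(0.0, 0.3, n)                        # vertical wind fluctuations (m/s)
      a = 0.30 + 0.002 * w + rng.normal(0, 0.001, n)     # raw absorptance, correlated with w

      calib_linear = lambda x: 1500.0 * x                # linear calibration (ppm)
      calib_curvi = lambda x: 1200.0 * x + 900.0 * x**2  # curvilinear calibration (ppm)

      drift = 0.02                                       # additive drift of the raw signal
      for calib in (calib_linear, calib_curvi):
          flux_ref = np.cov(w, calib(a))[0, 1]
          flux_biased = np.cov(w, calib(a + drift))[0, 1]
          conc_bias = calib(a + drift).mean() / calib(a).mean() - 1.0
          print(f"conc bias {conc_bias:+.1%}, flux error {flux_biased / flux_ref - 1:+.1%}")
      # Linear calibration: flux error ~0 despite the concentration bias.
      # Curvilinear calibration: flux error is a substantial fraction of the bias.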

  9. Measurement error in income and schooling, and the bias of linear estimators

    DEFF Research Database (Denmark)

    Bingley, Paul; Martinello, Alessandro

    The characteristics of measurement error determine the bias of linear estimators. We propose a method for validating economic survey data allowing for measurement error in the validation source, and we apply this method by validating Survey of Health, Ageing and Retirement in Europe (SHARE) data with Danish administrative registers. We find that measurement error in surveys is classical for annual gross income but non-classical for years of schooling, causing a 21% amplification bias in IV estimators of returns to schooling. Using a 1958 Danish schooling reform, we contextualize our result...

  10. Measurement error in income and schooling, and the bias for linear estimators

    DEFF Research Database (Denmark)

    Bingley, Paul; Martinello, Alessandro

    The characteristics of measurement error determine the bias of linear estimators. We propose a method for validating economic survey data allowing for measurement error in the validation source, and we apply this method by validating Survey of Health, Ageing and Retirement in Europe (SHARE) data with Danish administrative registers. We find that measurement error in surveys is classical for annual gross income but non-classical for years of schooling, causing a 21% amplification bias in IV estimators of returns to schooling. Using a 1958 Danish schooling reform, we contextualize our result...

  11. Quantification and handling of sampling errors in instrumental measurements: a case study

    DEFF Research Database (Denmark)

    Andersen, Charlotte Møller; Bro, R.

    2004-01-01

    Instrumental measurements are often used to represent a whole object even though only a small part of the object is actually measured. This can introduce an error due to the inhomogeneity of the product. Together with other errors resulting from the measuring process, such errors may have a serio...... on the predictions, the approach seems to provide more accurate predictions than the naive approach. Predictions of water content of fish fillets from low-field NMR relaxations are used as examples to show the applicability of the methods. (C) 2004 Elsevier B.V. All rights reserved....

  12. Total Differential Errors in One-Port Network Analyzer Measurements with Application to Antenna Impedance

    Directory of Open Access Journals (Sweden)

    P. Zimourtopoulos

    2007-06-01

    Full Text Available The objective was to study uncertainty in antenna input impedance resulting from full one-port Vector Network Analyzer (VNA) measurements. The VNA process equation in the reflection coefficient ρ of a load, its measurement m and three errors Es, determinable from three standard loads and their measurements, was considered. Differentials were selected to represent measurement inaccuracies and load uncertainties (Differential Errors). The differential operator was applied on the process equation and the total differential error dρ for any unknown load (Device Under Test, DUT) was expressed in terms of dEs and dm, without any simplification. Consequently, the differential error of input impedance Z -or any other physical quantity differentiably dependent on ρ- is expressible. Furthermore, to express precisely a comparison relation between complex differential errors, the geometric Differential Error Region and its Differential Error Intervals were defined. Practical results are presented for an indoor UHF ground-plane antenna in contrast with a common 50 Ω DC resistor inside an aluminum box. These two built, unshielded and shielded, DUTs were tested against frequency under different system configurations and measurement considerations. Intermediate results for Es and dEs characterize the measurement system itself. A number of calculations and illustrations demonstrate the application of the method.
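    As context for the differential analysis, the sketch below (Python with complex arithmetic) spells out the standard one-port error model m = Ed + Er·ρ/(1 − Es·ρ), solves for the three error terms from three known standards, and corrects a DUT measurement; the notation is only assumed to correspond to the process equation described in the abstract.

      import numpy as np

      def solve_error_terms(rho_std, m_std):
          # Standards' true reflection coefficients and their raw measurements (complex).
          rho = np.asarray(rho_std, dtype=complex)
          m = np.asarray(m_std, dtype=complex)
          # m = Ed + Er*rho/(1 - Es*rho) rearranges to the linear form
          # m = x0 + rho*x1 + rho*m*x2, with x0 = Ed, x1 = Er - Ed*Es, x2 = Es.
          A = np.column_stack([np.ones(3, dtype=complex), rho, rho * m])
          x0, x1, x2 = np.linalg.solve(A, m)
          Ed, Es = x0, x2
          Er = x1 + Ed * Es
          return Ed, Er, Es

      def correct_dut(m, Ed, Er, Es):
          # Invert the error model for the DUT reflection coefficient.
          return (m - Ed) / (Er + Es * (m - Ed))

      # Typical standards: open, short, matched load (idealised raw readings here).
      Ed, Er, Es = solve_error_terms([1, -1, 0], [0.97 + 0.02j, -0.99 + 0.01j, 0.01 - 0.01j])
      print(correct_dut(0.30 + 0.10j, Ed, Er, Es))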

  13. Accounting for measurement error in human life history trade-offs using structural equation modeling.

    Science.gov (United States)

    Helle, Samuli

    2017-11-11

    Revealing causal effects from correlative data is very challenging and a contemporary problem in human life history research owing to the lack of experimental approach. Problems with causal inference arising from measurement error in independent variables, whether related either to inaccurate measurement technique or validity of measurements, seem not well-known in this field. The aim of this study is to show how structural equation modeling (SEM) with latent variables can be applied to account for measurement error in independent variables when the researcher has recorded several indicators of a hypothesized latent construct. As a simple example of this approach, measurement error in lifetime allocation of resources to reproduction in Finnish preindustrial women is modelled in the context of the survival cost of reproduction. In humans, lifetime energetic resources allocated in reproduction are almost impossible to quantify with precision and, thus, typically used measures of lifetime reproductive effort (e.g., lifetime reproductive success and parity) are likely to be plagued by measurement error. These results are contrasted with those obtained from a traditional regression approach where the single best proxy of lifetime reproductive effort available in the data is used for inference. As expected, the inability to account for measurement error in women's lifetime reproductive effort resulted in the underestimation of its underlying effect size on post-reproductive survival. This article emphasizes the advantages that the SEM framework can provide in handling measurement error via multiple-indicator latent variables in human life history studies. © 2017 Wiley Periodicals, Inc.

  14. Measurement error of surface-mounted fiber Bragg grating temperature sensor.

    Science.gov (United States)

    Yi, Liu; Zude, Zhou; Erlong, Zhang; Jun, Zhang; Yuegang, Tan; Mingyao, Liu

    2014-06-01

    Fiber Bragg grating (FBG) sensors are extensively used to measure surface temperatures. However, the temperature gradient effect of a surface-mounted FBG sensor is often overlooked. A surface-type temperature standard setup was prepared in this study to investigate the measurement errors of FBG temperature sensors. Experimental results show that the measurement error of a bare fiber sensor has an obvious linear relationship with surface temperature, with the largest error reaching 8.1 °C. Sensors packaged with heat conduction grease generate smaller measurement errors than do bare FBG sensors and commercial thermal resistors. Thus, high-quality packaging methods and proper modes of fixation can effectively improve the accuracy of FBG sensors in measuring surface temperatures.

  15. Intrinsic measurement errors for the speed of light in vacuum

    Science.gov (United States)

    Braun, Daniel; Schneiter, Fabienne; Fischer, Uwe R.

    2017-09-01

    The speed of light in vacuum, one of the most important and precisely measured natural constants, is fixed by convention to c=299 792 458 m s-1 . Advanced theories predict possible deviations from this universal value, or even quantum fluctuations of c. Combining arguments from quantum parameter estimation theory and classical general relativity, we here establish rigorously the existence of lower bounds on the uncertainty to which the speed of light in vacuum can be determined in a given region of space-time, subject to several reasonable restrictions. They provide a novel perspective on the experimental falsifiability of predictions for the quantum fluctuations of space-time.

  16. Estimation and Propagation of Errors in Ice Sheet Bed Elevation Measurements

    Science.gov (United States)

    Johnson, J. V.; Brinkerhoff, D.; Nowicki, S.; Plummer, J.; Sack, K.

    2012-12-01

    This work is presented in two parts. In the first, we use a numerical inversion technique to determine a "mass conserving bed" (MCB) and estimate errors in interpolation of the bed elevation. The MCB inversion technique adjusts the bed elevation to assure that the mass flux determined from surface velocity measurements does not violate conservation. Cross validation of the MCB technique is done using a subset of available flight lines. The unused flight lines provide data to compare to, quantifying the errors produced by MCB and other interpolation methods. MCB errors are found to be similar to those produced with more conventional interpolation schemes, such as kriging. However, MCB interpolation is consistent with the physics that govern ice sheet models. In the second part, a numerical model of glacial ice is used to propagate errors in bed elevation to the kinematic surface boundary condition. Initially, a control run is completed to establish the surface velocity produced by the model. The control surface velocity is subsequently used as a target for data inversions performed on perturbed versions of the control bed. The perturbation of the bed represents the magnitude of error in bed measurement. Through the inversion for traction, errors in bed measurement are propagated forward to investigate errors in the evolution of the free surface. Our primary conclusion relates the magnitude of errors in the surface evolution to errors in the bed. By linking free surface errors back to the errors in bed interpolation found in the first part, we can suggest an optimal spacing of the radar flight lines used in bed acquisition.

  17. Regression calibration for classical exposure measurement error in environmental epidemiology studies using multiple local surrogate exposures.

    Science.gov (United States)

    Bateson, Thomas F; Wright, J Michael

    2010-08-01

    Environmental epidemiologic studies are often hierarchical in nature if they estimate individuals' personal exposures using ambient metrics. Local samples are indirect surrogate measures of true local pollutant concentrations which estimate true personal exposures. These ambient metrics include classical-type nondifferential measurement error. The authors simulated subjects' true exposures and their corresponding surrogate exposures as the mean of local samples and assessed the amount of bias attributable to classical and Berkson measurement error on odds ratios, assuming that the logit of risk depends on true individual-level exposure. The authors calibrated surrogate exposures using scalar transformation functions based on observed within- and between-locality variances and compared regression-calibrated results with naive results using surrogate exposures. The authors further assessed the performance of regression calibration in the presence of Berkson-type error. Following calibration, bias due to classical-type measurement error, resulting in as much as 50% attenuation in naive regression estimates, was eliminated. Berkson-type error appeared to attenuate logistic regression results less than 1%. This regression calibration method reduces effects of classical measurement error that are typical of epidemiologic studies using multiple local surrogate exposures as indirect surrogate exposures for unobserved individual exposures. Berkson-type error did not alter the performance of regression calibration. This regression calibration method does not require a supplemental validation study to compute an attenuation factor.
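    A stripped-down version of the idea (Python; variable names, variance components and the linear outcome are illustrative simplifications of the logistic setting studied in the paper) shrinks each subject's surrogate toward the overall mean by the reliability ratio and shows how this removes the attenuation produced by classical error.

      import numpy as np

      def regression_calibrate(x_bar, n_samples, var_between, var_within):
          # Reliability ratio for a mean of n local surrogate samples.
          lam = var_between / (var_between + var_within / n_samples)
          return x_bar.mean() + lam * (x_bar - x_bar.mean())

      rng = np.random.default_rng(0)
      n_subjects, n_local, beta = 5000, 4, 0.5
      x_true = rng.normal(0.0, 1.0, n_subjects)                             # true exposure
      x_bar = x_true + rng.normal(0.0, np.sqrt(1.0 / n_local), n_subjects)  # mean of 4 noisy samples
      y = beta * x_true + rng.normal(0.0, 1.0, n_subjects)                  # outcome (linear for simplicity)

      x_cal = regression_calibrate(x_bar, n_local, var_between=1.0, var_within=1.0)
      print(np.polyfit(x_bar, y, 1)[0])   # attenuated: ~beta * 0.8 = 0.4
      print(np.polyfit(x_cal, y, 1)[0])   # approximately recovers beta = 0.5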

  18. Measurement error of a simplified protocol for quantitative sensory tests in chronic pain patients

    DEFF Research Database (Denmark)

    Müller, Monika; Biurrun Manresa, José; Limacher, Andreas

    2017-01-01

    BACKGROUND AND OBJECTIVES: Large-scale application of Quantitative Sensory Tests (QST) is impaired by lacking standardized testing protocols. One unclear methodological aspect is the number of records needed to minimize measurement error. Traditionally, measurements are repeated 3 to 5 times... measurement error and number of records. We determined the measurement error of a single versus the mean of 3 records of pressure pain detection threshold (PPDT), electrical pain detection threshold (EPDT), and nociceptive withdrawal reflex threshold (NWRT) in 429 chronic pain patients recruited in a routine clinical setting. METHODS: We calculated intraclass correlation coefficients and performed a Bland-Altman analysis. RESULTS: Intraclass correlation coefficients were all clearly greater than 0.75, and Bland-Altman analysis showed minute systematic errors with small point estimates and narrow 95% confidence...

  19. Measurement and Prediction Errors in Body Composition Assessment and the Search for the Perfect Prediction Equation.

    Science.gov (United States)

    Katch, Frank I.; Katch, Victor L.

    1980-01-01

    Sources of error in body composition assessment by laboratory and field methods can be found in hydrostatic weighing, residual air volume, skinfolds, and circumferences. Statistical analysis can and should be used in the measurement of body composition. (CJ)

  20. Local and omnibus goodness-of-fit tests in classical measurement error models

    KAUST Repository

    Ma, Yanyuan

    2010-09-14

    We consider functional measurement error models, i.e. models where covariates are measured with error and yet no distributional assumptions are made about the mismeasured variable. We propose and study a score-type local test and an orthogonal series-based, omnibus goodness-of-fit test in this context, where no likelihood function is available or calculated-i.e. all the tests are proposed in the semiparametric model framework. We demonstrate that our tests have optimality properties and computational advantages that are similar to those of the classical score tests in the parametric model framework. The test procedures are applicable to several semiparametric extensions of measurement error models, including when the measurement error distribution is estimated non-parametrically as well as for generalized partially linear models. The performance of the local score-type and omnibus goodness-of-fit tests is demonstrated through simulation studies and analysis of a nutrition data set.

  1. Small Inertial Measurement Units - Sources of Error and Limitations on Accuracy

    Science.gov (United States)

    Hoenk, M. E.

    1994-01-01

    Limits on the precision of small accelerometers for inertial measurement units are enumerated and discussed. Scaling laws and errors which affect the precision are discussed in terms of tradeoffs between size, sensitivity, and cost.

  2. Statistical analysis with measurement error or misclassification strategy, method and application

    CERN Document Server

    Yi, Grace Y

    2017-01-01

    This monograph on measurement error and misclassification covers a broad range of problems and emphasizes unique features in modeling and analyzing problems arising from medical research and epidemiological studies. Many measurement error and misclassification problems have been addressed in various fields over the years as well as with a wide spectrum of data, including event history data (such as survival data and recurrent event data), correlated data (such as longitudinal data and clustered data), multi-state event data, and data arising from case-control studies. Statistical Analysis with Measurement Error or Misclassification: Strategy, Method and Application brings together assorted methods in a single text and provides an update of recent developments for a variety of settings. Measurement error effects and strategies of handling mismeasurement for different models are closely examined in combination with applications to specific problems. Readers with diverse backgrounds and objectives can utilize th...

  3. Design, calibration and error analysis of instrumentation for heat transfer measurements in internal combustion engines

    Science.gov (United States)

    Ferguson, C. R.; Tree, D. R.; Dewitt, D. P.; Wahiduzzaman, S. A. H.

    1987-01-01

    The paper reports the methodology and uncertainty analyses of instrumentation for heat transfer measurements in internal combustion engines. Results are presented for determining the local wall heat flux in an internal combustion engine (using a surface thermocouple-type heat flux gage) and the apparent flame temperature and soot volume fraction-path length product in a diesel engine (using two-color pyrometry). It is shown that a surface thermocouple heat transfer gage, suitably constructed and calibrated, will have an accuracy of 5 to 10 percent. It is also shown that, when applying two-color pyrometry to measure the apparent flame temperature and soot volume fraction-path length, it is important to choose at least one of the two wavelengths to lie in the range of 1.3 to 2.3 micrometers. A carefully calibrated two-color pyrometer can ensure that random errors in the apparent flame temperature and in the soot volume fraction-path length will remain small (within about 1 percent and 10 percent, respectively).
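    For orientation, the sketch below implements the simplest gray-body, Wien-approximation form of two-color pyrometry in Python; the engine study uses a soot emissivity model and an iterative solution for the KL product, so this is a simplified stand-in with assumed wavelengths.

      import math

      C2 = 1.4388e-2      # second radiation constant (m*K)
      C1L = 1.191e-16     # first radiation constant for spectral radiance, 2*h*c^2 (W*m^2/sr)

      def wien_radiance(lam, T, eps=1.0):
          # Wien approximation to the Planck spectral radiance.
          return eps * C1L * lam ** -5 * math.exp(-C2 / (lam * T))

      def two_color_temperature(L1, L2, lam1, lam2):
          # Gray-body ratio pyrometry: equal emissivities cancel in the ratio L1/L2.
          return C2 * (1.0 / lam1 - 1.0 / lam2) / math.log((lam2 / lam1) ** 5 * (L2 / L1))

      lam1, lam2 = 0.85e-6, 1.6e-6   # one wavelength inside the recommended 1.3-2.3 um band
      T_true = 2200.0                # K, representative flame temperature (assumed)
      L1, L2 = wien_radiance(lam1, T_true), wien_radiance(lam2, T_true)
      print(two_color_temperature(L1, L2, lam1, lam2))   # recovers ~2200 K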

  4. Biometrics based key management of double random phase encoding scheme using error control codes

    Science.gov (United States)

    Saini, Nirmala; Sinha, Aloka

    2013-08-01

    In this paper, an optical security system has been proposed in which key of the double random phase encoding technique is linked to the biometrics of the user to make it user specific. The error in recognition due to the biometric variation is corrected by encoding the key using the BCH code. A user specific shuffling key is used to increase the separation between genuine and impostor Hamming distance distribution. This shuffling key is then further secured using the RSA public key encryption to enhance the security of the system. XOR operation is performed between the encoded key and the feature vector obtained from the biometrics. The RSA encoded shuffling key and the data obtained from the XOR operation are stored into a token. The main advantage of the present technique is that the key retrieval is possible only in the simultaneous presence of the token and the biometrics of the user which not only authenticates the presence of the original input but also secures the key of the system. Computational experiments showed the effectiveness of the proposed technique for key retrieval in the decryption process by using the live biometrics of the user.
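    The key-binding step can be sketched as a fuzzy-commitment-style construction; in the snippet below (Python) a 3x repetition code stands in for the BCH code and the shuffling and RSA layers are omitted, so it illustrates only the XOR binding and error-tolerant recovery, not the actual scheme.

      import numpy as np

      def ecc_encode(bits, r=3):                    # repetition code in place of BCH
          return np.repeat(bits, r)

      def ecc_decode(bits, r=3):                    # majority-vote decoder
          return (bits.reshape(-1, r).sum(axis=1) > r // 2).astype(np.uint8)

      rng = np.random.default_rng(7)
      key = rng.integers(0, 2, 32, dtype=np.uint8)          # secret key bits
      enroll = rng.integers(0, 2, 96, dtype=np.uint8)       # biometric feature bits at enrolment
      token = np.bitwise_xor(ecc_encode(key), enroll)       # only this XOR is stored

      probe = enroll.copy()                                 # fresh biometric at verification
      probe[[0, 10, 20, 40, 70]] ^= 1                       # a few bit flips from sensor noise
      recovered = ecc_decode(np.bitwise_xor(token, probe))  # ECC absorbs the mismatches
      print(bool(np.array_equal(recovered, key)))           # key is released only with a close-enough biometric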

  5. Field error lottery

    Science.gov (United States)

    James Elliott, C.; McVey, Brian D.; Quimby, David C.

    1991-07-01

    The level of field errors in a free electron laser (FEL) is an important determinant of its performance. We have computed 3D performance of a large laser subsystem subjected to field errors of various types. These calculations have been guided by simple models such as SWOOP. The technique of choice is use of the FELEX free electron laser code that now possesses extensive engineering capabilities. Modeling includes the ability to establish tolerances of various types: fast and slow scale field bowing, field error level, beam position monitor error level, gap errors, defocusing errors, energy slew, displacement and pointing errors. Many effects of these errors on relative gain and relative power extraction are displayed and are the essential elements of determining an error budget. The random errors also depend on the particular random number seed used in the calculation. The simultaneous display of the performance versus error level of cases with multiple seeds illustrates the variations attributable to stochasticity of this model. All these errors are evaluated numerically for comprehensive engineering of the system. In particular, gap errors are found to place requirements beyond convenient mechanical tolerances of ± 25 μm, and amelioration of these may occur by a procedure using direct measurement of the magnetic fields at assembly time.

  6. Particle Filter with Novel Nonlinear Error Model for Miniature Gyroscope-Based Measurement While Drilling Navigation

    Directory of Open Access Journals (Sweden)

    Tao Li

    2016-03-01

    Full Text Available The derivation of a conventional error model for the miniature gyroscope-based measurement while drilling (MGWD) system is based on the assumption that the errors of attitude are small enough so that the direction cosine matrix (DCM) can be approximated or simplified by the errors of small-angle attitude. However, the simplification of the DCM would introduce errors to the navigation solutions of the MGWD system if the initial alignment cannot provide precise attitude, especially for the low-cost microelectromechanical system (MEMS) sensors operated in harsh multilateral horizontal downhole drilling environments. This paper proposes a novel nonlinear error model (NNEM) by the introduction of the error of DCM, and the NNEM can reduce the propagated errors under large-angle attitude error conditions. The zero velocity and zero position are the reference points and the innovations in the states estimation of particle filter (PF) and Kalman filter (KF). The experimental results illustrate that the performance of PF is better than KF and the PF with NNEM can effectively restrain the errors of system states, especially for the azimuth, velocity, and height in the quasi-stationary condition.

  7. The impact of measurement errors in the identification of regulatory networks

    Directory of Open Access Journals (Sweden)

    Sato João R

    2009-12-01

    Full Text Available Abstract Background There are several studies in the literature depicting measurement error in gene expression data and also several others about regulatory network models. However, only a small fraction describe a combination of measurement error in mathematical regulatory networks and show how to identify these networks under different rates of noise. Results This article investigates the effects of measurement error on the estimation of the parameters in regulatory networks. Simulation studies indicate that, in both time series (dependent) and non-time series (independent) data, the measurement error strongly affects the estimated parameters of the regulatory network models, biasing them as predicted by the theory. Moreover, when testing the parameters of the regulatory network models, p-values computed by ignoring the measurement error are not reliable, since the rate of false positives is not controlled under the null hypothesis. In order to overcome these problems, we present an improved version of the Ordinary Least Squares estimator in independent (regression models) and dependent (autoregressive models) data when the variables are subject to noise. Moreover, measurement error estimation procedures for microarrays are also described. Simulation results also show that both corrected methods perform better than the standard ones (i.e., ignoring measurement error). The proposed methodologies are illustrated using microarray data from lung cancer patients and mouse liver time series data. Conclusions Measurement error dangerously affects the identification of regulatory network models, thus, it must be reduced or taken into account in order to avoid erroneous conclusions. This could be one of the reasons for high biological false positive rates identified in actual regulatory network models.
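    The attenuation effect, and a simple moment-based correction of the kind the corrected estimators build on, can be seen in a few lines (Python; the exact correction used in the paper is not reproduced here, and the noise variance is assumed known).

      import numpy as np

      rng = np.random.default_rng(3)
      n, beta, sigma_u = 2000, 1.0, 0.7
      x = rng.normal(0.0, 1.0, n)                    # true expression of the regulator
      y = beta * x + rng.normal(0.0, 0.5, n)         # target gene expression
      w = x + rng.normal(0.0, sigma_u, n)            # observed regulator, with measurement error

      s_wy = np.cov(w, y, ddof=1)[0, 1]
      naive = s_wy / np.var(w, ddof=1)                        # attenuated toward zero
      corrected = s_wy / (np.var(w, ddof=1) - sigma_u ** 2)   # errors-in-variables correction
      print(naive, corrected)   # naive ~ beta/(1 + sigma_u^2) ~ 0.67, corrected ~ 1.0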

  8. Identification and estimation of nonlinear models using two samples with nonclassical measurement errors

    KAUST Repository

    Carroll, Raymond J.

    2010-05-01

    This paper considers identification and estimation of a general nonlinear Errors-in-Variables (EIV) model using two samples. Both samples consist of a dependent variable, some error-free covariates, and an error-prone covariate, for which the measurement error has unknown distribution and could be arbitrarily correlated with the latent true values; and neither sample contains an accurate measurement of the corresponding true variable. We assume that the regression model of interest - the conditional distribution of the dependent variable given the latent true covariate and the error-free covariates - is the same in both samples, but the distributions of the latent true covariates vary with observed error-free discrete covariates. We first show that the general latent nonlinear model is nonparametrically identified using the two samples when both could have nonclassical errors, without either instrumental variables or independence between the two samples. When the two samples are independent and the nonlinear regression model is parameterized, we propose sieve Quasi Maximum Likelihood Estimation (Q-MLE) for the parameter of interest, and establish its root-n consistency and asymptotic normality under possible misspecification, and its semiparametric efficiency under correct specification, with easily estimated standard errors. A Monte Carlo simulation and a data application are presented to show the power of the approach.

  9. A Logistic Regression Model with a Hierarchical Random Error Term for Analyzing the Utilization of Public Transport

    Directory of Open Access Journals (Sweden)

    Chong Wei

    2015-01-01

    Full Text Available Logistic regression models have been widely used in previous studies to analyze public transport utilization. These studies have shown travel time to be an indispensable variable for such analysis and usually consider it to be a deterministic variable. This formulation does not allow us to capture travelers’ perception error regarding travel time, and recent studies have indicated that this error can have a significant effect on modal choice behavior. In this study, we propose a logistic regression model with a hierarchical random error term. The proposed model adds a new random error term for the travel time variable. This term structure enables us to investigate travelers’ perception error regarding travel time from a given choice behavior dataset. We also propose an extended model that allows constraining the sign of this error in the model. We develop two Gibbs samplers to estimate the basic hierarchical model and the extended model. The performance of the proposed models is examined using a well-known dataset.

  10. Metrological Array of Cyber-Physical Systems. Part 7. Additive Error Correction for Measuring Instrument

    Directory of Open Access Journals (Sweden)

    Yuriy YATSUK

    2015-06-01

    Full Text Available During design it is impossible to use the uncertainty approach, because measurement results are not yet available; as noted, the error approach can be applied instead, taking the nominal value of the instrument transformation function as the true value. The limiting possibilities of additive error correction of measuring instruments for Cyber-Physical Systems are studied on the basis of general and special methods of measurement. Principles of maximal symmetry of the measuring circuit and of its minimal reconfiguration are proposed for measurement and/or calibration. It is theoretically justified, for a variety of correction methods, that a minimum additive error of measuring instruments exists when the real equivalent parameters of the input electronic switches are taken into account. Conditions for self-calibration and in-place verification of measuring instruments are also studied.

  11. Efficacy of Visual-Acoustic Biofeedback Intervention for Residual Rhotic Errors: A Single-Subject Randomization Study

    Science.gov (United States)

    Byun, Tara McAllister

    2017-01-01

    Purpose: This study documented the efficacy of visual-acoustic biofeedback intervention for residual rhotic errors, relative to a comparison condition involving traditional articulatory treatment. All participants received both treatments in a single-subject experimental design featuring alternating treatments with blocked randomization of…

  12. Joint nonparametric correction estimator for excess relative risk regression in survival analysis with exposure measurement error.

    Science.gov (United States)

    Wang, Ching-Yun; Cullings, Harry; Song, Xiao; Kopecky, Kenneth J

    2017-11-01

    Observational epidemiological studies often confront the problem of estimating exposure-disease relationships when the exposure is not measured exactly. In the paper, we investigate exposure measurement error in excess relative risk regression, which is a widely used model in radiation exposure effect research. In the study cohort, a surrogate variable is available for the true unobserved exposure variable. The surrogate variable satisfies a generalized version of the classical additive measurement error model, but it may or may not have repeated measurements. In addition, an instrumental variable is available for individuals in a subset of the whole cohort. We develop a nonparametric correction (NPC) estimator using data from the subcohort, and further propose a joint nonparametric correction (JNPC) estimator using all observed data to adjust for exposure measurement error. An optimal linear combination estimator of JNPC and NPC is further developed. The proposed estimators are nonparametric, which are consistent without imposing a covariate or error distribution, and are robust to heteroscedastic errors. Finite sample performance is examined via a simulation study. We apply the developed methods to data from the Radiation Effects Research Foundation, in which chromosome aberration is used to adjust for the effects of radiation dose measurement error on the estimation of radiation dose responses.

  13. Estimation of heading gyrocompass error using a GPS 3DF system: Impact on ADCP measurements

    Directory of Open Access Journals (Sweden)

    Simón Ruiz

    2002-12-01

    Full Text Available Traditionally the horizontal orientation in a ship (heading) has been obtained from a gyrocompass. This instrument is still used on research vessels but has an estimated error of about 2-3 degrees, inducing a systematic error in the cross-track velocity measured by an Acoustic Doppler Current Profiler (ADCP). The three-dimensional positioning system (GPS 3DF) provides an independent heading measurement with accuracy better than 0.1 degree. The Spanish research vessel BIO Hespérides has been operating with this new system since 1996. For the first time on this vessel, the data from this new instrument are used to estimate gyrocompass error. The methodology we use follows the scheme developed by Griffiths (1994), which compares data from the gyrocompass and the GPS system in order to obtain an interpolated error function. In the present work we apply this methodology on mesoscale surveys performed during the observational phase of the OMEGA project, in the Alboran Sea. The heading-dependent gyrocompass error dominated. Errors in gyrocompass heading of 1.4-3.4 degrees have been found, which give a maximum error in measured cross-track ADCP velocity of 24 cm s⁻¹.
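    The size of the effect follows from simple geometry: a heading error dθ rotates the ship-relative velocities, so the cross-track velocity picks up an error of roughly V_ship·sin(dθ). The sketch below uses an assumed ship speed of about 10 knots, which is not stated in the abstract.

      import numpy as np

      v_ship = 5.1   # ship speed over ground (m/s), roughly 10 knots (assumed)
      for dtheta_deg in (1.4, 2.7, 3.4):        # heading errors in the range reported above
          err = v_ship * np.sin(np.radians(dtheta_deg))
          print(f"{dtheta_deg:4.1f} deg -> {100 * err:5.1f} cm/s cross-track error")
      # 12-30 cm/s, the same order as the 24 cm/s maximum quoted in the abstract.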

  14. Determining sexual dimorphism in frog measurement data: integration of statistical significance, measurement error, effect size and biological significance

    Directory of Open Access Journals (Sweden)

    Hayek Lee-Ann C.

    2005-01-01

    Full Text Available Several analytic techniques have been used to determine sexual dimorphism in vertebrate morphological measurement data with no emergent consensus on which technique is superior. A further confounding problem for frog data is the existence of considerable measurement error. To determine dimorphism, we examine a single hypothesis (Ho = equal means) for two groups (females and males). We demonstrate that frog measurement data meet assumptions for clearly defined statistical hypothesis testing with statistical linear models rather than those of exploratory multivariate techniques such as principal components, correlation or correspondence analysis. In order to distinguish biological from statistical significance of hypotheses, we propose a new protocol that incorporates measurement error and effect size. Measurement error is evaluated with a novel measurement error index. Effect size, widely used in the behavioral sciences and in meta-analysis studies in biology, proves to be the most useful single metric to evaluate whether statistically significant results are biologically meaningful. Definitions for a range of small, medium, and large effect sizes specifically for frog measurement data are provided. Examples with measurement data for species of the frog genus Leptodactylus are presented. The new protocol is recommended not only to evaluate sexual dimorphism for frog data but for any animal measurement data for which the measurement error index and observed or a priori effect sizes can be calculated.
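    As a minimal illustration of the effect-size part of the protocol, the sketch below computes the standard Cohen's d for a female-male comparison of one measurement variable (the measurement error index and the frog-specific effect-size bands defined in the paper are not reproduced; the lengths are hypothetical).

      import numpy as np

      def cohens_d(group1, group2):
          g1, g2 = np.asarray(group1, float), np.asarray(group2, float)
          n1, n2 = len(g1), len(g2)
          pooled_var = ((n1 - 1) * g1.var(ddof=1) + (n2 - 1) * g2.var(ddof=1)) / (n1 + n2 - 2)
          return (g1.mean() - g2.mean()) / np.sqrt(pooled_var)

      # Hypothetical snout-vent lengths (mm) for females and males of one species.
      females = [44.1, 46.3, 45.0, 47.2, 44.8]
      males = [42.0, 43.5, 41.8, 44.0, 42.9]
      print(cohens_d(females, males))   # compare against the chosen small/medium/large thresholds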

  15. Getting satisfied with "satisfaction of search": How to measure errors during multiple-target visual search.

    Science.gov (United States)

    Biggs, Adam T

    2017-07-01

    Visual search studies are common in cognitive psychology, and the results generally focus upon accuracy, response times, or both. Most research has focused upon search scenarios where no more than 1 target will be present for any single trial. However, if multiple targets can be present on a single trial, it introduces an additional source of error because the found target can interfere with subsequent search performance. These errors have been studied thoroughly in radiology for decades, although their emphasis in cognitive psychology studies has been more recent. One particular issue with multiple-target search is that these subsequent search errors (i.e., specific errors which occur following a found target) are measured differently by different studies. There is currently no guidance as to which measurement method is best or what impact different measurement methods could have upon various results and conclusions. The current investigation provides two efforts to address these issues. First, the existing literature is reviewed to clarify the appropriate scenarios where subsequent search errors could be observed. Second, several different measurement methods are used with several existing datasets to contrast and compare how each method would have affected the results and conclusions of those studies. The evidence is then used to provide appropriate guidelines for measuring multiple-target search errors in future studies.

  16. Measurement error of global rainbow technique: The effect of recording parameters

    Science.gov (United States)

    Wu, Xue-cheng; Li, Can; Jiang, Hao-yu; Cao, Jian-zheng; Chen, Ling-hong; Gréhan, Gerard; Cen, Ke-fa

    2017-11-01

    Rainbow refractometry can measure refractive index and size of spray droplets simultaneously. Recording parameters of global rainbow imaging system, such as recording distance and scattering angle recording range, play a vital role in in-situ high accuracy measurement. In the paper, a theoretical and experimental investigation on the effect of recording parameters on measurement error of global rainbow technique was carried out for the first time. The relation of the two recording parameters, and the monochromatic aberrations in global rainbow imaging system were analyzed. In the framework of Lorenz-Mie theory and modified Nussenzveig theory with correction coefficients, measurement error curves of refractive index and size of the droplets caused by aberrations for different recording parameters were simulated. The simulated results showed that measurement error increased with RMS radius of diffuse spot; a long recording distance and a large scattering angle recording range both caused a larger diffuse spot; recording parameters were indicated to have a great effect on refractive index measurement error, but have little effect on measurement of droplet size. A sharp rise in spot radius at large recording parameters was mainly due to spherical aberration and coma. To confirm some of the conclusions, an experiment was conducted. The experimental results showed that the refractive index measurement error was as high as 1 . 3 × 10-3 for a recording distance of 31 cm. In the case, recording parameters are suggested to be set to as small a value as possible under the same optical elements.

  17. The analysis and measurement of motion errors of the linear slide in fast tool servo diamond turning machine

    Directory of Open Access Journals (Sweden)

    Xu Zhang

    2015-03-01

    Full Text Available This article proposes a novel method for identifying the motion errors (mainly straightness error and angular error) of a linear slide, which is based on the laser interferometry technique integrated with the shifting method. First, the straightness error of a linear slide incorporated with angular error (pitch error in the vertical direction and yaw error in the horizontal direction) is schematically explained. Then, a laser interferometry–based system is constructed to measure the motion errors of a linear slide, and an algorithm of error separation technique for extracting the straightness error, angular error, and tilt angle error caused by the motion of the reflector is developed. In the proposed method, the reflector is mounted on the slide moving along the guideway. The light-phase variation of two interfering laser beams can identify the lateral translation error of the slide. The differential outputs sampled with shifting initial point at the same datum line are applied to evaluate the angular error of the slide. Furthermore, the yaw error of the slide is measured by a laser interferometer in laboratory environment and compared with the evaluated values. Experimental results demonstrate that the proposed method possesses the advantages of reducing the effects caused by the assembly error and the tilt angle errors caused by movement of the reflector, adapting to long- or short-range measurement, and operating the measurement experiment conveniently and easily.

  18. Testing in a Random Effects Panel Data Model with Spatially Correlated Error Components and Spatially Lagged Dependent Variables

    Directory of Open Access Journals (Sweden)

    Ming He

    2015-11-01

    Full Text Available We propose a random effects panel data model with both spatially correlated error components and spatially lagged dependent variables. We focus on diagnostic testing procedures and derive Lagrange multiplier (LM) test statistics for a variety of hypotheses within this model. We first construct the joint LM test for both the individual random effects and the two spatial effects (spatial error correlation and spatial lag dependence). We then provide LM tests for the individual random effects and for the two spatial effects separately. In addition, in order to guard against local model misspecification, we derive locally adjusted (robust) LM tests based on the Bera and Yoon principle (Bera and Yoon, 1993). We conduct a small Monte Carlo simulation to show the good finite sample performances of these LM test statistics and revisit the cigarette demand example in Baltagi and Levin (1992) to illustrate our testing procedures.

  19. Experimental validation of error in temperature measurements in thin walled ductile iron castings

    DEFF Research Database (Denmark)

    Pedersen, Karl Martin; Tiedje, Niels Skat

    2007-01-01

    An experimental analysis has been performed to validate the measurement error of cooling curves measured in thin-walled ductile cast iron. Specially designed thermocouples with Ø0.2 mm thermocouple wire in a Ø1.6 mm ceramic tube were used for the experiments. Temperatures were measured in plates...... to a level about 20 °C lower than the actual temperature in the casting. Factors affecting the measurement error (oxide layer on the thermocouple wire, penetration into the ceramic tube and variation in placement of the thermocouple) are discussed. Finally, it is shown how a useful cooling curve may be obtained...

  20. Pseudo-random-bit-sequence phase modulation for reduced errors in a fiber optic gyroscope.

    Science.gov (United States)

    Chamoun, Jacob; Digonnet, Michel J F

    2016-12-15

    Low noise and drift in a laser-driven fiber optic gyroscope (FOG) are demonstrated by interrogating the sensor with a low-coherence laser. The laser coherence was reduced by broadening its optical spectrum using an external electro-optic phase modulator driven by either a sinusoidal or a pseudo-random bit sequence (PRBS) waveform. The noise reduction measured in a FOG driven by a modulated laser agrees with the calculations based on the broadened laser spectrum. Using PRBS modulation, the linewidth of a laser was broadened from 10 MHz to more than 10 GHz, leading to a measured FOG noise of only 0.00073  deg/√h and a drift of 0.023  deg/h. To the best of our knowledge, these are the lowest noise and drift reported in a laser-driven FOG, and this noise is below the requirement for the inertial navigation of aircraft.
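
    The PRBS phase modulation described in this record can be sketched numerically. The following illustrative Python snippet (not from the paper) generates a maximal-length bit sequence with a simple linear-feedback shift register, applies it as 0/π phase modulation, and estimates the resulting spectral broadening; the register length, bit rate and sampling density are assumed values.

```python
import numpy as np

def prbs(n_bits, taps=(7, 6), nstages=7, seed=1):
    """Generate a pseudo-random bit sequence with a simple Fibonacci LFSR.
    The register length and tap positions are illustrative (PRBS7-style)."""
    state = [int(b) for b in np.binary_repr(seed, nstages)]
    out = []
    for _ in range(n_bits):
        fb = state[taps[0] - 1] ^ state[taps[1] - 1]   # XOR feedback
        out.append(state[-1])
        state = [fb] + state[:-1]
    return np.array(out)

bit_rate = 10e9            # assumed PRBS clock rate, Hz
samples_per_bit = 8
bits = prbs(4096)
phase = np.repeat(bits * np.pi, samples_per_bit)   # 0 / pi phase modulation
field = np.exp(1j * phase)                          # unit-amplitude optical field envelope

# Broadened spectrum: the width is set by the bit rate, not the original laser linewidth
spectrum = np.abs(np.fft.fftshift(np.fft.fft(field))) ** 2
freqs = np.fft.fftshift(np.fft.fftfreq(field.size, d=1.0 / (bit_rate * samples_per_bit)))
fwhm_idx = spectrum > spectrum.max() / 2
print("approx. spectral width: %.2e Hz" % np.ptp(freqs[fwhm_idx]))
```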

  1. Effects of excluding a set of random effects on prediction error variance of breeding value.

    Science.gov (United States)

    Cantet, R J

    1997-01-12

    The effects of excluding a set of random effects (U-effects) uncorrelated with breeding values (BV) on prediction error variance (PEV) are studied analytically. Two situations are considered for model comparison: (a) existence of a 'true' model, and (b) uncertainty about which of the competing models is 'true'. The models compared are the 'long' one, which includes BV + U-effects, and the 'short' one, which includes BVs as the only random systematic effect. Expressions for PEV(BV) were obtained for the long model (PEVL), the short model (PEVS), and the short model assuming the long model is the correct one (PEVSI). It is shown that in general PEVS ≤ PEVL ≤ PEVSI. The results are illustrated by means of an example including a computer simulation.

  2. Development of New Measurement System of Errors in the Multiaxial Machine Tool for an Active Compensation

    Directory of Open Access Journals (Sweden)

    Noureddine Barka

    2016-01-01

    Full Text Available Error compensation techniques have been widely applied to improve multiaxis machine accuracy. However, due to the lack of reliable instrumentation for direct and overall measurements, all the compensation methods are based on offline measurements of each error component separately. The results of these measurements are static in nature and can only reflect the conditions at the moment of measurement. These results are not representative under real working conditions because of disturbances from load deformations, thermal distortions, and dynamic perturbations. The present approach involves the development of a new measurement system capable of dynamically evaluating the errors according to the six degrees of freedom. The developed system allows the generation of useful data that cover all machine states regardless of the operating conditions. The obtained measurements can be used for performance evaluation of the machine, calibration, and real-time compensation of errors. This system is able to perform dynamic measurements reflecting the global accuracy of the machine tool without a long and expensive analysis of the contribution of the various error sources. Finally, the system exhibits metrological characteristics compatible with high-precision applications.

  3. Correcting for multivariate measurement error by regression calibration in meta-analyses of epidemiological studies

    DEFF Research Database (Denmark)

    Tybjærg-Hansen, Anne

    2009-01-01

    Within-person variability in measured values of multiple risk factors can bias their associations with disease. The multivariate regression calibration (RC) approach can correct for such measurement error and has been applied to studies in which true values or independent repeat measurements......-specific, averaged and empirical Bayes estimates of RC parameters. Additionally, we allow for binary covariates (e.g. smoking status) and for uncertainty and time trends in the measurement error corrections. Our methods are illustrated using a subset of individual participant data from prospective long-term studies...... in the Fibrinogen Studies Collaboration to assess the relationship between usual levels of plasma fibrinogen and the risk of coronary heart disease, allowing for measurement error in plasma fibrinogen and several confounders. Publication date: 2009/3/30...
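
    The regression calibration idea summarized above can be illustrated with a minimal univariate sketch (hypothetical data, not the Fibrinogen Studies Collaboration analysis): the error variance is estimated from repeat measurements, the error-prone covariate is replaced by its estimated conditional expectation, and the outcome model is refitted.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000
x = rng.normal(0.0, 1.0, n)                   # true (unobserved) risk factor
z = rng.binomial(1, 0.5, n).astype(float)     # error-free binary covariate (e.g. smoking status)
y = 0.5 * x + 0.3 * z + rng.normal(0, 1, n)   # outcome; true coefficient of x is 0.5

sigma_u = 0.8                                  # within-person measurement-error SD (assumed)
w1 = x + rng.normal(0, sigma_u, n)             # two independent repeat measurements
w2 = x + rng.normal(0, sigma_u, n)
w = (w1 + w2) / 2                              # usual (error-prone) exposure value

def fit(covariates):
    X = np.column_stack([np.ones(n)] + covariates)
    return np.linalg.lstsq(X, y, rcond=None)[0]

beta_naive = fit([w, z])                       # attenuated exposure coefficient expected

# Regression calibration: replace w by an estimate of E[x | w] built from the repeats
var_u_single = np.mean((w1 - w2) ** 2) / 2     # error variance of one measurement
var_u_mean = var_u_single / 2                  # error variance of the 2-replicate mean
lam = (np.var(w) - var_u_mean) / np.var(w)     # calibration slope (x and z independent here)
x_hat = np.mean(w) + lam * (w - np.mean(w))

beta_rc = fit([x_hat, z])
print("naive: %.3f  regression calibration: %.3f  true: 0.500"
      % (beta_naive[1], beta_rc[1]))
```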

  4. Positive phase error from parallel conductance in tetrapolar bio-impedance measurements and its compensation

    Directory of Open Access Journals (Sweden)

    Ivan M Roitt

    2010-01-01

    Full Text Available Bioimpedance measurements are of great use and can provide considerable insight into biological processes. However, there are a number of possible sources of measurement error that must be considered. The most dominant source of error is found in bipolar measurements where electrode polarisation effects are superimposed on the true impedance of the sample. Even with the tetrapolar approach that is commonly used to circumvent this issue, other errors can persist. Here we characterise the positive phase and rise in impedance magnitude with frequency that can result from the presence of any parallel conductive pathways in the measurement set-up. It is shown that fitting experimental data to an equivalent electrical circuit model allows for accurate determination of the true sample impedance as validated through finite element modelling (FEM) of the measurement chamber. Finally, the model is used to extract dispersion information from cell cultures to characterise their growth.

  5. Uncertainty in Measurement and Total Error: Tools for Coping with Diagnostic Uncertainty.

    Science.gov (United States)

    Theodorsson, Elvar

    2017-03-01

    Laboratory medicine decreases diagnostic uncertainty, but is influenced by factors causing uncertainties. Error and uncertainty methods are commonly seen as incompatible in laboratory medicine. New versions of the Guide to the Expression of Uncertainty in Measurement and the International Vocabulary of Metrology will incorporate both uncertainty and error methods, which will assist collaboration between metrology and laboratories. The law of propagation of uncertainty and Bayesian statistics are theoretically preferable to frequentist statistical methods in diagnostic medicine. However, frequentist statistics are better known and more widely practiced. Error and uncertainty methods should both be recognized as legitimate for calculating diagnostic uncertainty. Copyright © 2016 The Author. Published by Elsevier Inc. All rights reserved.
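
    As a concrete illustration of the law of propagation of uncertainty mentioned in this record, a small sketch (not from the article) propagates assumed standard uncertainties through a generic derived quantity f = a·b/c numerically and checks the result against a Monte Carlo simulation; all values and the form of f are illustrative.

```python
import numpy as np

def propagate_uncertainty(f, x, u, eps=1e-6):
    """Combined standard uncertainty of f(x) for uncorrelated inputs:
    u_c^2 = sum_i (df/dx_i)^2 * u_i^2, with derivatives taken numerically."""
    x = np.asarray(x, dtype=float)
    u = np.asarray(u, dtype=float)
    grads = np.empty_like(x)
    for i in range(x.size):
        dx = np.zeros_like(x)
        dx[i] = eps * max(1.0, abs(x[i]))
        grads[i] = (f(x + dx) - f(x - dx)) / (2 * dx[i])
    return np.sqrt(np.sum((grads * u) ** 2))

# Hypothetical derived quantity f = a * b / c with assumed values and uncertainties
f = lambda v: v[0] * v[1] / v[2]
x = np.array([140.0, 70.0, 1.2])     # assumed measured values
u = np.array([2.0, 1.5, 0.05])       # assumed standard uncertainties

u_c = propagate_uncertainty(f, x, u)

# Monte Carlo check (a simple frequentist alternative)
rng = np.random.default_rng(1)
samples = f(rng.normal(x[:, None], u[:, None], size=(3, 100_000)))
print("GUM propagation: %.2f   Monte Carlo SD: %.2f" % (u_c, samples.std()))
```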

  6. Active and passive compensation of APPLE II-introduced multipole errors through beam-based measurement

    Energy Technology Data Exchange (ETDEWEB)

    Chung, Ting-Yi; Huang, Szu-Jung; Fu, Huang-Wen; Chang, Ho-Ping; Chang, Cheng-Hsiang [National Synchrotron Radiation Research Center, Hsinchu Science Park, Hsinchu 30076, Taiwan (China); Hwang, Ching-Shiang [National Synchrotron Radiation Research Center, Hsinchu Science Park, Hsinchu 30076, Taiwan (China); Department of Electrophysics, National Chiao Tung University, Hsinchu 30050, Taiwan (China)

    2016-08-01

    The effect of an APPLE II-type elliptically polarized undulator (EPU) on the beam dynamics was investigated using active and passive methods. To reduce the tune shift and improve the injection efficiency, dynamic multipole errors were compensated using L-shaped iron shims, which resulted in stable top-up operation for a minimum gap. The skew quadrupole error was compensated using a multipole corrector, which was located downstream of the EPU to minimize betatron coupling, and it ensured the enhancement of the synchrotron radiation brightness. The investigation methods, a numerical simulation algorithm, a multipole error correction method, and the beam-based measurement results are discussed.

  7. Adaptation to random and systematic errors: Comparison of amputee and non-amputee control interfaces with varying levels of process noise.

    Directory of Open Access Journals (Sweden)

    Reva E Johnson

    Full Text Available The objective of this study was to understand how people adapt to errors when using a myoelectric control interface. We compared adaptation across 1) non-amputee subjects using joint angle, joint torque, and myoelectric control interfaces, and 2) amputee subjects using myoelectric control interfaces with residual and intact limbs (five total control interface conditions). We measured trial-by-trial adaptation to self-generated errors and random perturbations during a virtual, single degree-of-freedom task with two levels of feedback uncertainty, and evaluated adaptation by fitting a hierarchical Kalman filter model. We have two main results. First, adaptation to random perturbations was similar across all control interfaces, whereas adaptation to self-generated errors differed. These patterns matched predictions of our model, which was fit to each control interface by changing the process noise parameter that represented system variability. Second, in amputee subjects, we found similar adaptation rates and error levels between residual and intact limbs. These results link prosthesis control to broader areas of motor learning and adaptation and provide a useful model of adaptation with myoelectric control. The model of adaptation will help us understand and solve prosthesis control challenges, such as providing additional sensory feedback.
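
    The hierarchical Kalman filter fitted in this study is not reproduced here, but a minimal single-level Kalman-filter model of trial-by-trial adaptation shows the mechanism the record describes: the process-noise parameter (system variability) sets the steady-state gain, i.e. how strongly each observed error updates the estimate. All parameter values below are illustrative.

```python
import numpy as np

def simulate_adaptation(n_trials=200, q=0.01, r=0.05, seed=0):
    """Minimal (non-hierarchical) Kalman-filter model of trial-by-trial adaptation.
    q: process-noise variance (control-interface variability); r: feedback-noise variance.
    The steady-state Kalman gain is the adaptation rate."""
    rng = np.random.default_rng(seed)
    true_perturb = 0.0          # slowly drifting perturbation the subject must track
    x_hat, p = 0.0, 1.0         # state estimate and its variance
    estimates = []
    for _ in range(n_trials):
        true_perturb += rng.normal(0.0, np.sqrt(q))        # random-walk drift
        obs = true_perturb + rng.normal(0.0, np.sqrt(r))   # noisy feedback of the error
        p = p + q                                          # predict
        k = p / (p + r)                                    # Kalman gain = adaptation rate
        x_hat = x_hat + k * (obs - x_hat)                  # update from the observed error
        p = (1 - k) * p
        estimates.append(x_hat)
    return np.array(estimates), k

for q in (0.001, 0.01, 0.1):   # larger process noise -> larger gain -> faster adaptation
    _, gain = simulate_adaptation(q=q)
    print("process noise %.3f -> steady-state adaptation rate %.2f" % (q, gain))
```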

  8. Adaptation to random and systematic errors: Comparison of amputee and non-amputee control interfaces with varying levels of process noise.

    Science.gov (United States)

    Johnson, Reva E; Kording, Konrad P; Hargrove, Levi J; Sensinger, Jonathon W

    2017-01-01

    The objective of this study was to understand how people adapt to errors when using a myoelectric control interface. We compared adaptation across 1) non-amputee subjects using joint angle, joint torque, and myoelectric control interfaces, and 2) amputee subjects using myoelectric control interfaces with residual and intact limbs (five total control interface conditions). We measured trial-by-trial adaptation to self-generated errors and random perturbations during a virtual, single degree-of-freedom task with two levels of feedback uncertainty, and evaluated adaptation by fitting a hierarchical Kalman filter model. We have two main results. First, adaptation to random perturbations was similar across all control interfaces, whereas adaptation to self-generated errors differed. These patterns matched predictions of our model, which was fit to each control interface by changing the process noise parameter that represented system variability. Second, in amputee subjects, we found similar adaptation rates and error levels between residual and intact limbs. These results link prosthesis control to broader areas of motor learning and adaptation and provide a useful model of adaptation with myoelectric control. The model of adaptation will help us understand and solve prosthesis control challenges, such as providing additional sensory feedback.

  9. Rain radar measurement error estimation using data assimilation in an advection-based nowcasting system

    Science.gov (United States)

    Merker, Claire; Ament, Felix; Clemens, Marco

    2017-04-01

    The quantification of measurement uncertainty for rain radar data remains challenging. Radar reflectivity measurements are affected, amongst other things, by calibration errors, noise, blocking and clutter, and attenuation. Their combined impact on measurement accuracy is difficult to quantify due to incomplete process understanding and complex interdependencies. An improved quality assessment of rain radar measurements is of interest for applications both in meteorology and hydrology, for example for precipitation ensemble generation, rainfall runoff simulations, or in data assimilation for numerical weather prediction. Especially a detailed description of the spatial and temporal structure of errors is beneficial in order to make best use of the areal precipitation information provided by radars. Radar precipitation ensembles are one promising approach to represent spatially variable radar measurement errors. We present a method combining ensemble radar precipitation nowcasting with data assimilation to estimate radar measurement uncertainty at each pixel. This combination of ensemble forecast and observation yields a consistent spatial and temporal evolution of the radar error field. We use an advection-based nowcasting method to generate an ensemble reflectivity forecast from initial data of a rain radar network. Subsequently, reflectivity data from single radars is assimilated into the forecast using the Local Ensemble Transform Kalman Filter. The spread of the resulting analysis ensemble provides a flow-dependent, spatially and temporally correlated reflectivity error estimate at each pixel. We will present first case studies that illustrate the method using data from a high-resolution X-band radar network.

  10. Linear and nonlinear magnetic error measurements using action and phase jump analysis

    Directory of Open Access Journals (Sweden)

    Javier F. Cardona

    2009-01-01

    Full Text Available “Action and phase jump” analysis is presented: a beam-based method that uses amplitude and phase knowledge of a particle trajectory to locate and measure magnetic errors in an accelerator lattice. The expected performance of the method is first tested using single-particle simulations in the optical lattice of the Relativistic Heavy Ion Collider (RHIC). Such simulations predict that under ideal conditions typical quadrupole errors can be estimated within an uncertainty of 0.04%. Other simulations suggest that sextupole errors can be estimated within a 3% uncertainty. Then the action and phase jump analysis is applied to real RHIC orbits with known quadrupole errors, and to real Super Proton Synchrotron (SPS) orbits with known sextupole errors. It is possible to estimate the strength of a skew quadrupole error from measured RHIC orbits within a 1.2% uncertainty, and to estimate the strength of a strong sextupole component from the measured SPS orbits within a 7% uncertainty.

  11. MEASUREMENT ERROR EFFECT ON THE POWER OF CONTROL CHART FOR ZERO-TRUNCATED POISSON DISTRIBUTION

    Directory of Open Access Journals (Sweden)

    Ashit Chakraborty

    2013-09-01

    Full Text Available Measurement error, the difference between the true value and the measured value of a quantity, exists in practice and may considerably affect the performance of control charts in some cases. Measurement error variability has uncertainty that can arise from several sources. In this paper, we study the effect of these sources of variability on the power characteristics of the control chart and obtain values of the average run length (ARL) for the zero-truncated Poisson distribution (ZTPD). An expression for the power of the control chart for variable sample size under a standardized normal variate for the ZTPD is also derived.

  12. [Measurement Error Analysis and Calibration Technique of NTC - Based Body Temperature Sensor].

    Science.gov (United States)

    Deng, Chi; Hu, Wei; Diao, Shengxi; Lin, Fujiang; Qian, Dahong

    2015-11-01

    An NTC thermistor-based wearable body temperature sensor was designed. This paper describes the design principles and realization method of the NTC-based body temperature sensor. The temperature measurement error sources of the body temperature sensor are analyzed in detail, and an automatic measurement and calibration method for the ADC error is given. The results show that the measurement accuracy of the calibrated body temperature sensor is better than ±0.04 °C. The temperature sensor has the advantages of high accuracy, small size and low power consumption.
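
    A hedged sketch of the kind of conversion such a sensor performs (not the published design): the calibrated ADC code is converted to NTC resistance through a voltage divider, and the resistance to temperature with the Beta-parameter model. The divider topology, component values and Beta coefficient are illustrative assumptions.

```python
import math

def ntc_temperature_c(r_ntc, r0=10_000.0, t0_c=25.0, beta=3435.0):
    """Beta-equation model of an NTC thermistor:
    1/T = 1/T0 + (1/beta) * ln(R/R0), temperatures in kelvin.
    r0, t0_c and beta are illustrative datasheet-style values."""
    t0_k = t0_c + 273.15
    inv_t = 1.0 / t0_k + math.log(r_ntc / r0) / beta
    return 1.0 / inv_t - 273.15

def adc_to_resistance(adc_code, adc_max=4095, r_series=10_000.0, vref=3.3):
    """Voltage-divider readout: NTC on the low side of a series resistor.
    R_ntc = R_series * Vout / (Vref - Vout), with Vout from the (calibrated) ADC."""
    v_out = vref * adc_code / adc_max
    return r_series * v_out / (vref - v_out)

# Example: an ADC code of 1800 on a 12-bit converter
r = adc_to_resistance(1800)
print("R_ntc = %.0f ohm -> T = %.2f degC" % (r, ntc_temperature_c(r)))
```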

  13. Consequences of exposure measurement error for confounder identification in environmental epidemiology

    DEFF Research Database (Denmark)

    Budtz-Jørgensen, Esben; Keiding, Niels; Grandjean, Philippe

    2003-01-01

    Non-differential measurement error in the exposure variable is known to attenuate the dose-response relationship. The amount of attenuation introduced in a given situation is not only a function of the precision of the exposure measurement but also depends on the conditional variance of the true...... exposure given the other independent variables. In addition, confounder effects may also be affected by the exposure measurement error. These difficulties in statistical model development are illustrated by examples from an epidemiological study performed in the Faroe Islands to investigate the adverse...
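
    The attenuation mechanism described here can be shown with a small simulation (hypothetical data, not the Faroe Islands study): the attenuation factor depends on the conditional variance of the true exposure given the other independent variables, Var(X | Z) / (Var(X | Z) + sigma_e^2), rather than on the total variance of the exposure.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 200_000
z = rng.normal(size=n)                        # error-free covariate included in the model
x = 0.8 * z + rng.normal(scale=0.6, size=n)   # true exposure, correlated with z
y = 1.0 * x + 0.5 * z + rng.normal(size=n)    # outcome; true exposure effect is 1.0
w = x + rng.normal(scale=0.5, size=n)         # non-differential measurement error, variance 0.25

# Covariate-adjusted regression with the error-prone exposure
X = np.column_stack([np.ones(n), w, z])
beta = np.linalg.lstsq(X, y, rcond=None)[0]

sigma_e2 = 0.25
lam_marginal = x.var() / (x.var() + sigma_e2)                 # uses Var(X)
resid = x - 0.8 * z                                           # conditional part of X given Z
lam_conditional = resid.var() / (resid.var() + sigma_e2)      # uses Var(X | Z)

print("observed exposure slope (y ~ w + z): %.3f" % beta[1])
print("attenuation factor from Var(X)     : %.3f" % lam_marginal)
print("attenuation factor from Var(X | Z) : %.3f" % lam_conditional)
```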

  14. Dynamic Modeling Accuracy Dependence on Errors in Sensor Measurements, Mass Properties, and Aircraft Geometry

    Science.gov (United States)

    Grauer, Jared A.; Morelli, Eugene A.

    2013-01-01

    A nonlinear simulation of the NASA Generic Transport Model was used to investigate the effects of errors in sensor measurements, mass properties, and aircraft geometry on the accuracy of dynamic models identified from flight data. Measurements from a typical system identification maneuver were systematically and progressively deteriorated and then used to estimate stability and control derivatives within a Monte Carlo analysis. Based on the results, recommendations were provided for maximum allowable errors in sensor measurements, mass properties, and aircraft geometry to achieve desired levels of dynamic modeling accuracy. Results using other flight conditions, parameter estimation methods, and a full-scale F-16 nonlinear aircraft simulation were compared with these recommendations.

  15. Circular Array of Magnetic Sensors for Current Measurement: Analysis for Error Caused by Position of Conductor.

    Science.gov (United States)

    Yu, Hao; Qian, Zheng; Liu, Huayi; Qu, Jiaqi

    2018-02-14

    This paper analyzes the measurement error, caused by the position of the current-carrying conductor, of a circular array of magnetic sensors for current measurement. The circular array of magnetic sensors is an effective approach for AC or DC non-contact measurement, as it is low-cost, light-weight, has a large linear range, wide bandwidth, and low noise. In particular, it has been claimed that such a structure has excellent ability to reduce errors caused by the position of the current-carrying conductor, crosstalk current interference, the shape of the conductor cross-section, and the Earth's magnetic field. However, the position of the current-carrying conductor, including un-centeredness and un-perpendicularity, has not been analyzed in detail until now. In this paper, for the purpose of achieving minimum measurement error, a theoretical analysis is proposed based on vector inner and exterior products. In the presented mathematical model of the relative error, the un-center offset distance, the un-perpendicular angle, the radius of the circle, and the number of magnetic sensors are expressed in one equation. A comparison of the relative error caused by the position of the current-carrying conductor between four and eight sensors is conducted. Tunnel magnetoresistance (TMR) sensors are used in the experimental prototype to verify the mathematical model. The analysis results can serve as a reference for designing circular arrays of magnetic sensors for current measurement in practical situations.
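
    A simplified two-dimensional sketch (not the authors' vector-product model) reproduces the basic effect: the discrete sum of tangential field components around the circle approximates Ampère's law, and the residual error grows with the conductor's offset and shrinks with the number of sensors. The geometry and current values are made up.

```python
import numpy as np

MU0 = 4e-7 * np.pi

def relative_error(n_sensors, radius, offset, current=100.0):
    """Relative error of the current estimate from a circular array of tangential-field
    sensors when the conductor is shifted by `offset` from the array centre
    (2-D model, conductor perpendicular to the sensor plane)."""
    angles = 2 * np.pi * np.arange(n_sensors) / n_sensors
    sensors = radius * np.column_stack([np.cos(angles), np.sin(angles)])
    tangents = np.column_stack([-np.sin(angles), np.cos(angles)])   # circle tangent unit vectors
    wire = np.array([offset, 0.0])

    rel = sensors - wire                                   # vector from wire to each sensor
    dist = np.linalg.norm(rel, axis=1)
    # Field of an infinite straight wire: magnitude mu0*I/(2*pi*d), azimuthal direction
    b_dir = np.column_stack([-rel[:, 1], rel[:, 0]]) / dist[:, None]
    b = (MU0 * current / (2 * np.pi * dist))[:, None] * b_dir

    b_tan = np.sum(b * tangents, axis=1)                   # what each sensor measures
    i_est = np.mean(b_tan) * 2 * np.pi * radius / MU0      # discrete Ampere's-law estimate
    return i_est / current - 1.0

for n in (4, 8):
    print("N=%d, offset = 20%% of radius: relative error = %.2e"
          % (n, relative_error(n, radius=0.05, offset=0.01)))
```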

  16. Utilizing measure-based feedback in control-mastery theory: A clinical error.

    Science.gov (United States)

    Snyder, John; Aafjes-van Doorn, Katie

    2016-09-01

    Clinical errors and ruptures are an inevitable part of clinical practice. Oftentimes, therapists are unaware that a clinical error or rupture has occurred, leaving no space for repair, and potentially leading to patient dropout and/or less effective treatment. One way to overcome our blind spots is by frequently and systematically collecting measure-based feedback from the patient. Patient feedback measures that focus on the process of psychotherapy, such as the Patient's Experience of Attunement and Responsiveness scale (PEAR), can be used in conjunction with treatment outcome measures such as the Outcome Questionnaire 45.2 (OQ-45.2) to monitor the patient's therapeutic experience and progress. The regular use of these types of measures can aid clinicians in the identification of clinical errors and the associated patient deterioration that might otherwise go unnoticed and unaddressed. The current case study describes an instance of clinical error that occurred during the 2-year treatment of a highly traumatized young woman. The clinical error was identified using measure-based feedback and subsequently understood and addressed from the theoretical standpoint of the control-mastery theory of psychotherapy. An alternative hypothetical response is also presented and explained using control-mastery theory. (PsycINFO Database Record (c) 2016 APA, all rights reserved).

  17. Least-MSE calibration procedures for corrections of measurement and misclassification errors in generalized linear models

    Directory of Open Access Journals (Sweden)

    Parnchit Wattanasaruch

    2012-09-01

    Full Text Available The analyses of clinical and epidemiologic studies are often based on some kind of regression analysis, mainly linear regression and logistic models. These analyses are often affected by the fact that one or more of the predictors are measured with error. The error in the predictors is also known to bias the estimates and hypothesis testing results. One of the procedures frequently used to handle such a problem, in order to reduce the measurement errors, is the method of regression calibration for predicting the continuous covariate. The idea is to predict the true value of the error-prone predictor from the observed data, then to use the predicted value for the analyses. In this research we develop four calibration procedures, namely probit, complementary log-log, logit, and logistic calibration procedures, for correction of the measurement error and/or the misclassification error to predict the true values of the misclassified explanatory variables used in generalized linear models. The processes give the predicted true values of a binary explanatory variable using the calibration techniques, and then use these predicted values to fit the three models, namely the probit, the complementary log-log, and the logit models, under the binary response. All of these are investigated by considering the mean square error (MSE) in 1,000 simulation studies for each case of the known parameters and conditions. The results show that the proposed working calibration techniques that perform adequately well are the probit, logistic, and logit calibration procedures. Both the probit calibration procedure and the probit model are superior to the logistic and logit calibrations due to the smallest MSE. Furthermore, the probit model parameter estimates also improve the effects of the misclassified explanatory variable. Only the complementary log-log model and its calibration technique are appropriate when measurement error is moderate and sample size is high.

  18. Quantitative shearography: error reduction by using more than three measurement channels

    Energy Technology Data Exchange (ETDEWEB)

    Charrett, Tom O. H.; Francis, Daniel; Tatam, Ralph P.

    2011-01-10

    Shearography is a noncontact optical technique used to measure surface displacement derivatives. Full surface strain characterization can be achieved using shearography configurations employing at least three measurement channels. Each measurement channel is sensitive to a single displacement gradient component defined by its sensitivity vector. A matrix transformation is then required to convert the measured components to the orthogonal displacement gradients required for quantitative strain measurement. This transformation, conventionally performed using three measurement channels, amplifies any errors present in the measurement. This paper investigates the use of additional measurement channels using the results of a computer model and an experimental shearography system. Results are presented showing that the addition of a fourth channel can reduce the errors in the computed orthogonal components by up to 33% and that, by using 10 channels, reductions of around 45% should be possible.
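
    The least-squares idea behind using more than three channels can be sketched as follows (illustrative sensitivity vectors and noise level, not the authors' configuration): with n ≥ 3 channels the orthogonal gradient components are recovered with the pseudoinverse of the sensitivity matrix, and the Monte Carlo error shrinks as channels are added.

```python
import numpy as np

rng = np.random.default_rng(3)
g_true = np.array([1.0, -0.5, 0.25])        # orthogonal displacement-gradient components

def channel_sensitivities(n):
    """Illustrative unit sensitivity vectors for n measurement channels,
    spread evenly around a cone about the surface normal."""
    az = 2 * np.pi * np.arange(n) / n
    el = np.deg2rad(30.0)
    return np.column_stack([np.cos(el) * np.cos(az),
                            np.cos(el) * np.sin(az),
                            np.sin(el) * np.ones(n)])

def mean_error(n_channels, noise=0.02, trials=2000):
    S = channel_sensitivities(n_channels)                   # n x 3 sensitivity matrix
    errs = []
    for _ in range(trials):
        m = S @ g_true + rng.normal(0, noise, n_channels)   # measured components + noise
        g_hat = np.linalg.pinv(S) @ m                       # least-squares inversion
        errs.append(np.linalg.norm(g_hat - g_true))
    return np.mean(errs)

for n in (3, 4, 10):
    print("channels: %2d   mean error: %.4f" % (n, mean_error(n)))
```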

  19. Quantitative shearography: error reduction by using more than three measurement channels.

    Science.gov (United States)

    Charrett, Tom O H; Francis, Daniel; Tatam, Ralph P

    2011-01-10

    Shearography is a noncontact optical technique used to measure surface displacement derivatives. Full surface strain characterization can be achieved using shearography configurations employing at least three measurement channels. Each measurement channel is sensitive to a single displacement gradient component defined by its sensitivity vector. A matrix transformation is then required to convert the measured components to the orthogonal displacement gradients required for quantitative strain measurement. This transformation, conventionally performed using three measurement channels, amplifies any errors present in the measurement. This paper investigates the use of additional measurement channels using the results of a computer model and an experimental shearography system. Results are presented showing that the addition of a fourth channel can reduce the errors in the computed orthogonal components by up to 33% and that, by using 10 channels, reductions of around 45% should be possible.

  20. Error analysis and data forecast in the centre of gravity measurement system for small tractors

    NARCIS (Netherlands)

    Jiang, J.D.; Hoogmoed, W.B.; Yingdi, Z.; Xian, Z.

    2011-01-01

    A novel centre of gravity measurement system for small tractors, based on the principle of three-point reaction, is presented. According to the prototype of a small tractor gravity centre test platform, a mathematical multi-body dynamics prototype was built to analyze the measurement error in the centre
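
    A minimal sketch of the three-point reaction principle mentioned above (made-up geometry and forces): the horizontal centre-of-gravity coordinates follow directly from moment equilibrium of the three measured support reactions. Determining the height of the centre of gravity would additionally require a tilted measurement, which is not shown here.

```python
import numpy as np

# Support (load cell) positions in the platform plane, metres - illustrative layout
points = np.array([[0.0, 0.0],
                   [1.2, 0.0],
                   [0.6, 1.8]])
# Measured vertical reaction forces at the three supports, newtons - illustrative values
forces = np.array([3200.0, 3400.0, 2100.0])

total = forces.sum()
# Moment equilibrium about the two horizontal axes gives the CG coordinates directly
cg_xy = (forces[:, None] * points).sum(axis=0) / total
print("total weight: %.0f N, CG (x, y) = (%.3f, %.3f) m" % (total, *cg_xy))
```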

  1. Can i just check...? Effects of edit check questions on measurement error and survey estimates

    NARCIS (Netherlands)

    Lugtig, Peter; Jäckle, Annette

    2014-01-01

    Household income is difficult to measure, since it requires the collection of information about all potential income sources for each member of a household. We assess the effects of two types of edit check questions on measurement error and survey estimates: within-wave edit checks use responses to

  2. Visual function after correction of distance refractive error with ready-made and custom spectacles: a randomized clinical trial.

    Science.gov (United States)

    Brady, Christopher J; Villanti, Andrea C; Gandhi, Monica; Friedman, David S; Keay, Lisa

    2012-10-01

    To evaluate patient-reported outcome measures with the use of ready-made spectacles (RMS) and custom spectacles (CS) in an adult population in India with uncorrected refractive error (URE). Prospective, double-masked, randomized trial with 1-month follow-up. A total of 363 adults aged 18 to 45 years with ≥1 diopter (D) of URE (RMS, n = 183; CS, n = 180). All participants received complete refraction and were randomized to receive CS (full sphero-cylindrical correction) or RMS based on the spherical equivalent for the eye with lower refractive error but limited to the powers in the RMS inventory. Visual function and quality of life (VFQoL) instrument and participant satisfaction. Rasch scores for VFQoL increased from 1.14 to 4.37 logits in the RMS group and from 1.11 to 4.72 logits in the CS group, corresponding to mean changes of 3.23 (95% confidence interval [CI], 2.90-3.56) and 3.61 (95% CI, 3.34-3.88), respectively. Mean patient satisfaction also increased by 1.83 points (95% CI, 1.60-2.06) on a 5-point Likert scale in the RMS group and by 2.04 points (95% CI, 1.83-2.24) in the CS group. In bivariate analyses, CS was not associated with increased VFQoL or patient satisfaction compared with the RMS group. In the full multivariable linear regression, the CS group had greater improvement when compared with those receiving RMS (+0.45 logits; 95% CI, 0.02-0.88), and subjects with astigmatism >2.00 D had significantly less improvement (-0.99 logits; 95% CI, -1.68 to -0.30) after controlling for demographic and vision-related characteristics. In multivariable analysis, increased change in patient satisfaction was related to demographic and optical characteristics, but not spectacle group. Ready-made spectacles produce large but slightly smaller improvements in VFQoL and similar satisfaction with vision at 1-month follow-up when compared with CS. Ready-made spectacles are suitable for the majority of individuals with URE in our study population, although those with high

  3. Comparison of error-based and errorless learning for people with severe traumatic brain injury: study protocol for a randomized control trial.

    Science.gov (United States)

    Ownsworth, Tamara; Fleming, Jennifer; Tate, Robyn; Shum, David H K; Griffin, Janelle; Schmidt, Julia; Lane-Brown, Amanda; Kendall, Melissa; Chevignard, Mathilde

    2013-11-05

    Poor skills generalization poses a major barrier to successful outcomes of rehabilitation after traumatic brain injury (TBI). Error-based learning (EBL) is a relatively new intervention approach that aims to promote skills generalization by teaching people internal self-regulation skills, or how to anticipate, monitor and correct their own errors. This paper describes the protocol of a study that aims to compare the efficacy of EBL and errorless learning (ELL) for improving error self-regulation, behavioral competency, awareness of deficits and long-term outcomes after TBI. This randomized, controlled trial (RCT) has two arms (EBL and ELL); each arm entails 8 × 2 h training sessions conducted within the participants' homes. The first four sessions involve a meal preparation activity, and the final four sessions incorporate a multitasking errand activity. Based on a sample size estimate, 135 participants with severe TBI will be randomized into either the EBL or ELL condition. The primary outcome measure assesses error self-regulation skills on a task related to but distinct from training. Secondary outcomes include measures of self-monitoring and self-regulation, behavioral competency, awareness of deficits, role participation and supportive care needs. Assessments will be conducted at pre-intervention, post-intervention, and at 6-months post-intervention. This study seeks to determine the efficacy and long-term impact of EBL for training internal self-regulation strategies following severe TBI. In doing so, the study will advance theoretical understanding of the role of errors in task learning and skills generalization. EBL has the potential to reduce the length and costs of rehabilitation and lifestyle support because the techniques could enhance generalization success and lifelong application of strategies after TBI. ACTRN12613000585729.

  4. Maximal-entropy random walk unifies centrality measures

    Science.gov (United States)

    Ochab, J. K.

    2012-12-01

    This paper compares a number of centrality measures and several (dis-)similarity matrices with which they can be defined. These matrices, which are used among others in community detection methods, represent quantities connected to enumeration of paths on a graph and to random walks. Relationships between some of these matrices are derived in the paper. These relationships are inherited by the centrality measures. They include measures based on the principal eigenvector of the adjacency matrix, path enumeration, as well as on the stationary state, stochastic matrix, or mean first-passage times of a random walk. As the random walk defining the centrality measure can be arbitrarily chosen, we pay particular attention to the maximal-entropy random walk, which serves as a very distinct alternative to the ordinary (diffusive) random walk used in network analysis. The various importance measures, defined both with the use of ordinary random walk and the maximal-entropy random walk, are compared numerically on a set of benchmark graphs with varying mixing parameter and are grouped with the use of the agglomerative clustering technique. It is shown that centrality measures defined with the two different random walks cluster into two separate groups. In particular, the group of centrality measures defined by the maximal-entropy random walk does not cluster with any other measures on change of graphs’ parameters, and members of this group produce mutually closer results than members of the group defined by the ordinary random walk.
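
    A short sketch of the maximal-entropy random walk referred to in this record: the transition matrix is built from the principal eigenvector ψ of the adjacency matrix, P_ij = (A_ij/λ) ψ_j/ψ_i, with stationary distribution π_i ∝ ψ_i². The example graph below is an arbitrary small path graph chosen only to contrast the two walks.

```python
import numpy as np

def merw(adjacency):
    """Maximal-entropy random walk on an undirected graph.
    Returns the transition matrix and stationary distribution built from the
    principal eigenvector psi of the adjacency matrix:
        P_ij = (A_ij / lambda) * psi_j / psi_i,   pi_i = psi_i**2 (normalized)."""
    A = np.asarray(adjacency, dtype=float)
    eigvals, eigvecs = np.linalg.eigh(A)
    lam = eigvals[-1]
    psi = np.abs(eigvecs[:, -1])          # Perron vector (positive for connected graphs)
    P = A / lam * np.outer(1.0 / psi, psi)
    pi = psi ** 2 / np.sum(psi ** 2)
    return P, pi

def ordinary_walk(adjacency):
    """Ordinary (diffusive) random walk for comparison: P_ij = A_ij / deg(i)."""
    A = np.asarray(adjacency, dtype=float)
    return A / A.sum(axis=1, keepdims=True), A.sum(axis=1) / A.sum()

# Path graph of 5 nodes - the two walks weight the middle nodes differently
A = np.diag(np.ones(4), 1)
A = A + A.T
for name, (P, pi) in (("MERW    ", merw(A)), ("ordinary", ordinary_walk(A))):
    print(name, "stationary distribution:", np.round(pi, 3))
```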

  5. Maximal-entropy random walk unifies centrality measures.

    Science.gov (United States)

    Ochab, J K

    2012-12-01

    This paper compares a number of centrality measures and several (dis-)similarity matrices with which they can be defined. These matrices, which are used among others in community detection methods, represent quantities connected to enumeration of paths on a graph and to random walks. Relationships between some of these matrices are derived in the paper. These relationships are inherited by the centrality measures. They include measures based on the principal eigenvector of the adjacency matrix, path enumeration, as well as on the stationary state, stochastic matrix, or mean first-passage times of a random walk. As the random walk defining the centrality measure can be arbitrarily chosen, we pay particular attention to the maximal-entropy random walk, which serves as a very distinct alternative to the ordinary (diffusive) random walk used in network analysis. The various importance measures, defined both with the use of ordinary random walk and the maximal-entropy random walk, are compared numerically on a set of benchmark graphs with varying mixing parameter and are grouped with the use of the agglomerative clustering technique. It is shown that centrality measures defined with the two different random walks cluster into two separate groups. In particular, the group of centrality measures defined by the maximal-entropy random walk does not cluster with any other measures on change of graphs' parameters, and members of this group produce mutually closer results than members of the group defined by the ordinary random walk.

  6. Effects of cosine error in irradiance measurements from field ocean color radiometers.

    Science.gov (United States)

    Zibordi, Giuseppe; Bulgarelli, Barbara

    2007-08-01

    The cosine error of in situ seven-channel radiometers designed to measure the in-air downward irradiance for ocean color applications was investigated in the 412-683 nm spectral range with a sample of three instruments. The interchannel variability of cosine errors showed values generally lower than +/-3% below 50 degrees incidence angle with extreme values of approximately 4-20% (absolute) at 50-80 degrees for the channels at 412 and 443 nm. The intrachannel variability, estimated from the standard deviation of the cosine errors of different sensors for each center wavelength, displayed values generally lower than 2% for incidence angles up to 50 degrees and occasionally increasing up to 6% at 80 degrees. Simulations of total downward irradiance measurements, accounting for average angular responses of the investigated radiometers, were made with an accurate radiative transfer code. The estimated errors showed a significant dependence on wavelength, sun zenith, and aerosol optical thickness. For a clear sky maritime atmosphere, these errors displayed values spectrally varying and generally within +/-3%, with extreme values of approximately 4-10% (absolute) at 40-80 degrees sun zenith for the channels at 412 and 443 nm. Schemes for minimizing the cosine errors have also been proposed and discussed.

  7. Multipath error in range rate measurement by PLL-transponder/GRARR/TDRS

    Science.gov (United States)

    Sohn, S. J.

    1970-01-01

    Range rate errors due to specular and diffuse multipath are calculated for a tracking and data relay satellite (TDRS) using an S-band Goddard range and range rate (GRARR) system modified with a phase-locked loop transponder. Carrier signal processing in the coherent turn-around transponder and the GRARR receiver is taken into account. The root-mean-square (rms) range rate error was computed for the GRARR Doppler extractor and N-cycle count range rate measurement. Curves of worst-case range rate error are presented as a function of grazing angle at the reflection point. At very low grazing angles specular scattering predominates over diffuse scattering as expected, whereas for grazing angles greater than approximately 15 deg, the diffuse multipath predominates. The range rate errors at different low orbit altitudes peaked between 5 and 10 deg grazing angles.

  8. Characterization of positional errors and their influence on micro four-point probe measurements on a 100 nm Ru film

    DEFF Research Database (Denmark)

    Kjær, Daniel; Hansen, Ole; Østerberg, Frederik Westergaard

    2015-01-01

    Thin-film sheet resistance measurements at high spatial resolution and on small pads are important and can be realized with micrometer-scale four-point probes. As a result of the small scale the measurements are affected by electrode position errors. We have characterized the electrode position......-configuration measurements, however, are shown to eliminate the effect of position errors to a level limited either by electrical measurement noise or dynamic position errors. We show that the probe contact points remain almost static on the surface during the measurements (measured on an atomic scale) with a standard...... deviation of the dynamic position errors of 3 Å. We demonstrate how to experimentally distinguish between different sources of measurement errors, e.g. electrical measurement noise, probe geometry error as well as static and dynamic electrode position errors....

  9. Model selection for marginal regression analysis of longitudinal data with missing observations and covariate measurement error.

    Science.gov (United States)

    Shen, Chung-Wei; Chen, Yi-Hau

    2015-10-01

    Missing observations and covariate measurement error commonly arise in longitudinal data. However, existing methods for model selection in marginal regression analysis of longitudinal data fail to address the potential bias resulting from these issues. To tackle this problem, we propose a new model selection criterion, the Generalized Longitudinal Information Criterion, which is based on an approximately unbiased estimator for the expected quadratic error of a considered marginal model accounting for both data missingness and covariate measurement error. The simulation results reveal that the proposed method performs quite well in the presence of missing data and covariate measurement error. On the contrary, the naive procedures without taking care of such complexity in data may perform quite poorly. The proposed method is applied to data from the Taiwan Longitudinal Study on Aging to assess the relationship of depression with health and social status in the elderly, accommodating measurement error in the covariate as well as missing observations. © The Author 2015. Published by Oxford University Press. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  10. Covariate measurement error correction methods in mediation analysis with failure time data.

    Science.gov (United States)

    Zhao, Shanshan; Prentice, Ross L

    2014-12-01

    Mediation analysis is important for understanding the mechanisms whereby one variable causes changes in another. Measurement error could obscure the ability of the potential mediator to explain such changes. This article focuses on developing correction methods for measurement error in the mediator with failure time outcomes. We consider a broad definition of measurement error, including technical error, and error associated with temporal variation. The underlying model with the "true" mediator is assumed to be of the Cox proportional hazards model form. The induced hazard ratio for the observed mediator no longer has a simple form independent of the baseline hazard function, due to the conditioning event. We propose a mean-variance regression calibration approach and a follow-up time regression calibration approach, to approximate the partial likelihood for the induced hazard function. Both methods demonstrate value in assessing mediation effects in simulation studies. These methods are generalized to multiple biomarkers and to both case-cohort and nested case-control sampling designs. We apply these correction methods to the Women's Health Initiative hormone therapy trials to understand the mediation effect of several serum sex hormone measures on the relationship between postmenopausal hormone therapy and breast cancer risk. © 2014, The International Biometric Society.

  11. Measurement error analysis for polarization extinction ratio of multifunctional integrated optic chips.

    Science.gov (United States)

    Zhang, Haoliang; Yang, Jun; Li, Chuang; Yu, Zhangjun; Yang, Zhe; Yuan, Yonggui; Peng, Feng; Li, Hanyang; Hou, Changbo; Zhang, Jianzhong; Yuan, Libo; Xu, Jianming; Zhang, Chao; Yu, Quanfu

    2017-08-20

    Measurement error for the polarization extinction ratio (PER) of a multifunctional integrated optic chip (MFIOC) utilizing white light interferometry was analyzed. Three influence factors derived from the all-fiber device (or optical circuit) under test were demonstrated to be the main error sources, including: 1) the axis-alignment angle (AA) of the connection point between the extended polarization-maintaining fiber (PMF) and the chip PMF pigtail; 2) the oriented angle (OA) of the linear polarizer; and 3) the birefringence dispersion of PMF and the MFIOC chip. Theoretical calculations and experimental results indicated that by controlling the AA range within 0°±5°, the OA range within 45°±2° and combining with dispersion compensation process, the maximal PER measurement error can be limited to under 1.4 dB, with the 3σ uncertainty of 0.3 dB. The variations of birefringence dispersion effect versus PMF length were also discussed to further confirm the validity of dispersion compensation. A MFIOC with the PER of ∼50  dB was experimentally tested, and the total measurement error was calculated to be ∼0.7  dB, which proved the effectiveness of the proposed error reduction methods. We believe that these methods are able to facilitate high-accuracy PER measurement.

  12. Normal contour error measurement on-machine and compensation method for polishing complex surface by MRF

    Science.gov (United States)

    Chen, Hua; Chen, Jihong; Wang, Baorui; Zheng, Yongcheng

    2016-10-01

    The magnetorheological finishing (MRF) process, based on the dwell-time method with constant normal spacing for flexible polishing, introduces a normal contour error when fine-polishing complex surfaces such as aspheric surfaces. The normal contour error changes the ribbon's shape and the consistency of the removal characteristics in MRF. Based on continuously scanning the normal spacing between the workpiece and the laser range finder, a novel method was put forward to measure the normal contour errors along the machining track while polishing a complex surface. The normal contour errors were measured dynamically, which allowed the workpiece's clamping precision, the multi-axis machining NC program and the dynamic performance of the MRF machine to be verified and checked for the MRF process. A unit for measuring the normal contour errors of a complex surface on-machine was designed. Using the measurement unit's results as feedback to adjust the parameters of the feed-forward control and the multi-axis machining, an optimized servo control method was presented to compensate the normal contour errors. An experiment of polishing a 180 mm × 180 mm aspherical fused silica workpiece by MRF was set up to validate the method. The results show that the normal contour error was controlled to less than 10 μm, and the PV value of the polished surface accuracy was improved from 0.95λ to 0.09λ under the same process parameters. The technology in this paper has been applied in the PKC600-Q1 MRF machine developed by the China Academy of Engineering Physics since 2014, where it is used in a national large-scale optical engineering program for processing ultra-precision optical parts.

  13. Bias Errors in Measurement of Vibratory Power and Implication for Active Control of Structural Vibration

    DEFF Research Database (Denmark)

    Ohlrich, Mogens; Henriksen, Eigil; Laugesen, Søren

    1997-01-01

    Uncertainties in power measurements performed with piezoelectric accelerometers and force transducers are investigated. It is shown that the inherent structural damping of the transducers is responsible for a bias phase error, which typically is in the order of one degree. Fortunately, such bias errors can be largely compensated for by an absolute calibration of the transducers and inverse filtering that results in very small residual errors. Experimental results of this study indicate that these uncertainties will be in the order of one percent with respect to amplitude and two tenths of a degree for the phase. This implies that input power at a single point can be measured to within one dB in practical structures which possess some damping. The uncertainty is increased, however, when sums of measured power contributions from more sources are to be minimised, as is the case in active...

  14. A New Design of the Test Rig to Measure the Transmission Error of Automobile Gearbox

    Science.gov (United States)

    Hou, Yixuan; Zhou, Xiaoqin; He, Xiuzhi; Liu, Zufei; Liu, Qiang

    2017-12-01

    Noise and vibration affect the performance of an automobile gearbox, and transmission error has been regarded as an important excitation source in a gear system. Most current research focuses on the measurement and analysis of a single gear drive, and few investigations of transmission error measurement in a complete gearbox have been conducted. In order to measure transmission error in a complete automobile gearbox, an electrically closed test rig is developed. Based on the principle of modular design, the test rig can be used to test different types of gearbox by adding the necessary modules. The test rig for a front-engine, rear-wheel-drive gearbox is constructed, and static and modal analysis methods are used to verify the performance of a key component.

  15. Analysis of Measured Workpiece's Form Errors Influence on the Accuracy of Probe Heads Used on Five-Axis Measuring Systems

    Directory of Open Access Journals (Sweden)

    Wiktor Harmatys

    2017-12-01

    Full Text Available Five-axis measuring systems are among the most modern inventions in coordinate measuring technology. They are capable of performing measurements using only the rotary pairs present in their kinematic structure. This possibility is very useful because it may significantly reduce total measurement time and cost. However, it was noted that high form errors of the measured workpiece may significantly reduce the accuracy of a five-axis measuring system. An investigation of the relation between these two parameters was conducted in this paper, and possible reasons for the decrease in measurement accuracy were discussed using the example of measurements of workpieces with form errors ranging from 0.5 to 1.7 millimetres.

  16. Conclusive meta-analyses on antenatal magnesium may be inconclusive! Are we underestimating the risk of random error?

    DEFF Research Database (Denmark)

    Brok, Jesper; Huusom, Lene D; Thorlund, Kristian

    2012-01-01

    Results from meta-analyses significantly influence clinical practice. Both simulation and empirical studies have demonstrated that the risk of random error (i.e. spurious chance findings) in meta-analyses is much higher than previously anticipated. Hence, authors and users of systematic reviews...... about the investigated intervention effect(s). We outline the rationale for conducting trial sequential analysis including some examples of the meta-analysis on antenatal magnesium for women at risk of preterm birth....

  17. On the impact of covariate measurement error on spatial regression modelling.

    Science.gov (United States)

    Huque, Md Hamidul; Bondell, Howard; Ryan, Louise

    2014-12-01

    Spatial regression models have grown in popularity in response to rapid advances in GIS (Geographic Information Systems) technology that allows epidemiologists to incorporate geographically indexed data into their studies. However, it turns out that there are some subtle pitfalls in the use of these models. We show that presence of covariate measurement error can lead to significant sensitivity of parameter estimation to the choice of spatial correlation structure. We quantify the effect of measurement error on parameter estimates, and then suggest two different ways to produce consistent estimates. We evaluate the methods through a simulation study. These methods are then applied to data on Ischemic Heart Disease (IHD).

  18. Quantitative evaluation of statistical errors in small-angle X-ray scattering measurements

    Energy Technology Data Exchange (ETDEWEB)

    Sedlak, Steffen M.; Bruetzel, Linda K.; Lipfert, Jan (LMU)

    2017-03-29

    A new model is proposed for the measurement errors incurred in typical small-angle X-ray scattering (SAXS) experiments, which takes into account the setup geometry and physics of the measurement process. The model accurately captures the experimentally determined errors from a large range of synchrotron and in-house anode-based measurements. Its most general formulation gives for the variance of the buffer-subtracted SAXS intensity σ²(q) = [I(q) + const.]/(kq), where I(q) is the scattering intensity as a function of the momentum transfer q; k and const. are fitting parameters that are characteristic of the experimental setup. The model gives a concrete procedure for calculating realistic measurement errors for simulated SAXS profiles. In addition, the results provide guidelines for optimizing SAXS measurements, which are in line with established procedures for SAXS experiments, and enable a quantitative evaluation of measurement errors.
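
    The quoted variance model can be applied directly to a simulated profile. In the sketch below the sphere form factor and the parameters k and const. are illustrative placeholders, not fitted values from the paper.

```python
import numpy as np

def sphere_intensity(q, radius=30.0, scale=1.0):
    """Illustrative scattering profile: form factor of a homogeneous sphere."""
    x = q * radius
    f = 3.0 * (np.sin(x) - x * np.cos(x)) / x**3
    return scale * f**2

def saxs_sigma(q, intensity, k=5e4, const=1e-3):
    """Variance model quoted in the abstract: sigma^2(q) = [I(q) + const.] / (k q).
    k and const. are setup-dependent fitting parameters (values here are made up)."""
    return np.sqrt((intensity + const) / (k * q))

q = np.linspace(0.01, 0.5, 500)          # momentum transfer grid (assumed units of 1/Angstrom)
I = sphere_intensity(q)
sigma = saxs_sigma(q, I)

rng = np.random.default_rng(4)
I_noisy = I + rng.normal(0.0, sigma)     # simulated measurement with realistic errors
print("relative error at low q: %.3f, at high q: %.3f" % (sigma[0] / I[0], sigma[-1] / I[-1]))
```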

  19. Phantom Effects in School Composition Research: Consequences of Failure to Control Biases Due to Measurement Error in Traditional Multilevel Models

    Science.gov (United States)

    Televantou, Ioulia; Marsh, Herbert W.; Kyriakides, Leonidas; Nagengast, Benjamin; Fletcher, John; Malmberg, Lars-Erik

    2015-01-01

    The main objective of this study was to quantify the impact of failing to account for measurement error on school compositional effects. Multilevel structural equation models were incorporated to control for measurement error and/or sampling error. Study 1, a large sample of English primary students in Years 1 and 4, revealed a significantly…

  20. Inclinometer Assembly Error Calibration and Horizontal Image Correction in Photoelectric Measurement Systems

    Directory of Open Access Journals (Sweden)

    Xiaofang Kong

    2018-01-01

    Full Text Available Inclinometer assembly error is one of the key factors affecting the measurement accuracy of photoelectric measurement systems. In order to solve the problem of the lack of complete attitude information in the measurement system, this paper proposes a new inclinometer assembly error calibration and horizontal image correction method utilizing plumb lines in the scenario. Based on the principle that the plumb line in the scenario should be a vertical line on the image plane when the camera is placed horizontally in the photoelectric system, the direction cosine matrix between the geodetic coordinate system and the inclinometer coordinate system is calculated firstly by three-dimensional coordinate transformation. Then, the homography matrix required for horizontal image correction is obtained, along with the constraint equation satisfying the inclinometer-camera system requirements. Finally, the assembly error of the inclinometer is calibrated by the optimization function. Experimental results show that the inclinometer assembly error can be calibrated only by using the inclination angle information in conjunction with plumb lines in the scenario. Perturbation simulation and practical experiments using MATLAB indicate the feasibility of the proposed method. The inclined image can be horizontally corrected by the homography matrix obtained during the calculation of the inclinometer assembly error, as well.

  1. Unaccounted source of systematic errors in measurements of the Newtonian gravitational constant G

    Energy Technology Data Exchange (ETDEWEB)

    DeSalvo, Riccardo, E-mail: Riccardo.desalvo@gmail.com [California State University, Northridge, 18111 Nordhoff Street, Northridge, CA 91330-8332 (United States); University of Sannio, Corso Garibaldi 107, Benevento 82100 (Italy)

    2015-06-26

    Many precision measurements of G have produced a spread of results incompatible with measurement errors. Clearly an unknown source of systematic errors is at work. It is proposed here that most of the discrepancies derive from subtle deviations from Hooke's law, caused by avalanches of entangled dislocations. The idea is supported by deviations from linearity reported by experimenters measuring G, similar to what is observed, on a larger scale, in low-frequency spring oscillators. Some mitigating experimental apparatus modifications are suggested. - Highlights: • Source of discrepancies in universal gravitational constant G measurements. • Collective motion of dislocations results in breakdown of Hooke's law. • Self-organized criticality produces non-predictive shifts of the equilibrium point. • A new dissipation mechanism different from loss angle and viscous models is necessary. • The mitigation measures proposed may bring coherence to the measurements of G.

  2. System Error Compensation Methodology Based on a Neural Network for a Micromachined Inertial Measurement Unit

    Directory of Open Access Journals (Sweden)

    Shi Qiang Liu

    2016-01-01

    Full Text Available Error compensation of micromachined inertial measurement units (MIMU) is essential in practical applications. This paper presents a new compensation method using a neural-network-based identification for MIMU, which capably solves the universal problems of cross-coupling, misalignment, eccentricity, and other deterministic errors existing in a three-dimensional integrated system. Using a neural network to model a complex multivariate and nonlinear coupling system, the errors could be readily compensated through a comprehensive calibration. In this paper, we also present a thermal-gas MIMU based on thermal expansion, which measures three-axis angular rates and three-axis accelerations using only three thermal-gas inertial sensors, each of which capably measures one-axis angular rate and one-axis acceleration simultaneously in one chip. The developed MIMU (100 × 100 × 100 mm³) possesses the advantages of simple structure, high shock resistance, and large measuring ranges (three-axis angular rates of ±4000°/s and three-axis accelerations of ±10 g) compared with conventional MIMU, due to using a gas medium instead of a mechanical proof mass as the key moving and sensing element. However, the gas MIMU suffers from cross-coupling effects, which corrupt the system accuracy. The proposed compensation method is, therefore, applied to compensate the system errors of the MIMU. Experiments validate the effectiveness of the compensation, and the measurement errors of three-axis angular rates and three-axis accelerations are reduced to less than 1% and 3% of uncompensated errors in the rotation range of ±600°/s and the acceleration range of ±1 g, respectively.

  3. System Error Compensation Methodology Based on a Neural Network for a Micromachined Inertial Measurement Unit.

    Science.gov (United States)

    Liu, Shi Qiang; Zhu, Rong

    2016-01-29

    Error compensation of micromachined inertial measurement units (MIMU) is essential in practical applications. This paper presents a new compensation method using a neural-network-based identification for MIMU, which capably solves the universal problems of cross-coupling, misalignment, eccentricity, and other deterministic errors existing in a three-dimensional integrated system. Using a neural network to model a complex multivariate and nonlinear coupling system, the errors could be readily compensated through a comprehensive calibration. In this paper, we also present a thermal-gas MIMU based on thermal expansion, which measures three-axis angular rates and three-axis accelerations using only three thermal-gas inertial sensors, each of which capably measures one-axis angular rate and one-axis acceleration simultaneously in one chip. The developed MIMU (100 × 100 × 100 mm³) possesses the advantages of simple structure, high shock resistance, and large measuring ranges (three-axis angular rates of ±4000°/s and three-axis accelerations of ±10 g) compared with conventional MIMU, due to using a gas medium instead of a mechanical proof mass as the key moving and sensing element. However, the gas MIMU suffers from cross-coupling effects, which corrupt the system accuracy. The proposed compensation method is, therefore, applied to compensate the system errors of the MIMU. Experiments validate the effectiveness of the compensation, and the measurement errors of three-axis angular rates and three-axis accelerations are reduced to less than 1% and 3% of uncompensated errors in the rotation range of ±600°/s and the acceleration range of ±1 g, respectively.

  4. Estimating personal exposures from ambient air pollution measures: using meta-analysis to assess measurement error.

    Science.gov (United States)

    Holliday, Katelyn M; Avery, Christy L; Poole, Charles; McGraw, Kathleen; Williams, Ronald; Liao, Duanping; Smith, Richard L; Whitsel, Eric A

    2014-01-01

    Although ambient concentrations of particulate matter ≤10 μm (PM10) are often used as proxies for total personal exposure, correlation (r) between ambient and personal PM10 concentrations varies. Factors underlying this variation and its effect on health outcome-PM exposure relationships remain poorly understood. We conducted a random-effects meta-analysis to estimate effects of study, participant, and environmental factors on r; used the estimates to impute personal exposure from ambient PM10 concentrations among 4,012 nonsmoking participants with diabetes in the Women's Health Initiative clinical trial; and then estimated the associations of ambient and imputed personal PM10 concentrations with electrocardiographic measures, such as heart rate variability. We identified 15 studies (in years 1990-2009) of 342 participants in five countries. The median r was 0.46 (range = 0.13 to 0.72). There was little evidence of funnel plot asymmetry but substantial heterogeneity of r, which increased 0.05 (95% confidence interval = 0.01 to 0.09) per 10 µg/m³ increase in mean ambient PM10 concentration. Substituting imputed personal exposure for ambient PM10 concentrations shifted mean percent changes in electrocardiographic measures per 10 µg/m³ increase in exposure away from the null and decreased their precision, for example, -2.0% (-4.6% to 0.7%) versus -7.9% (-15.9% to 0.9%), for the standard deviation of normal-to-normal RR interval duration. Analogous distributions and heterogeneity of r in extant meta-analyses of ambient and personal PM2.5 concentrations suggest that observed shifts in mean percent change and decreases in precision may be generalizable across particle size.

  5. Computational fluid dynamics analysis and experimental study of a low measurement error temperature sensor used in climate observation

    Science.gov (United States)

    Yang, Jie; Liu, Qingquan; Dai, Wei

    2017-02-01

    To improve the air temperature observation accuracy, a low measurement error temperature sensor is proposed. A computational fluid dynamics (CFD) method is implemented to obtain temperature errors under various environmental conditions. Then, a temperature error correction equation is obtained by fitting the CFD results using a genetic algorithm method. The low measurement error temperature sensor, a naturally ventilated radiation shield, a thermometer screen, and an aspirated temperature measurement platform are characterized in the same environment to conduct the intercomparison. The aspirated platform served as an air temperature reference. The mean temperature errors of the naturally ventilated radiation shield and the thermometer screen are 0.74 °C and 0.37 °C, respectively. In contrast, the mean temperature error of the low measurement error temperature sensor is 0.11 °C. The mean absolute error and the root mean square error between the corrected results and the measured results are 0.008 °C and 0.01 °C, respectively. The correction equation allows the temperature error of the low measurement error temperature sensor to be reduced by approximately 93.8%. The low measurement error temperature sensor proposed in this research may be helpful to provide a relatively accurate air temperature result.
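
    The correction-equation step described above (CFD results fitted with a genetic algorithm) can be approximated with any global optimizer. The sketch below fits a hypothetical correction model dT = a·S^b / (1 + c·v), with S the solar radiation and v the wind speed, to synthetic stand-in "CFD" data using SciPy's differential evolution; the functional form, variable names and data are assumptions for illustration, not the paper's actual equation or data.

```python
# Hedged sketch: fitting a temperature-error correction equation to CFD-style
# results with an evolutionary optimizer. Model form and data are illustrative.
import numpy as np
from scipy.optimize import differential_evolution

# Hypothetical CFD results: solar radiation S (W/m^2), wind speed v (m/s),
# and the radiation-induced temperature error dT (degC) predicted by CFD.
S = np.array([200, 400, 600, 800, 1000, 200, 400, 600, 800, 1000], float)
v = np.array([1, 1, 1, 1, 1, 5, 5, 5, 5, 5], float)
dT = 0.004 * S**0.9 / (1.0 + 0.8 * v) + np.random.normal(0, 0.005, S.size)

def model(params, S, v):
    a, b, c = params
    return a * S**b / (1.0 + c * v)

def loss(params):
    return np.mean((model(params, S, v) - dT) ** 2)

bounds = [(1e-5, 1.0), (0.1, 2.0), (0.0, 5.0)]
result = differential_evolution(loss, bounds, seed=0)
a, b, c = result.x
print(f"fitted correction: dT = {a:.4g} * S^{b:.3f} / (1 + {c:.3f} v)")
# At measurement time, the corrected reading would be T_corrected = T_raw - model(...)
```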

  6. A New Algorithm of Compensation of the Time Interval Error GPS-Based Measurements

    Directory of Open Access Journals (Sweden)

    Jonny Paul ZAVALA DE PAZ

    2010-01-01

    Full Text Available In this paper we present a new algorithm for compensation of the time interval error (TIE) that applies an unbiased p-step predictive finite impulse response (FIR) filter to the signal of Global Positioning System (GPS)-based measurements. The practical use of the GPS involves various problems inherent to the signal. Two of the most important are the TIE and the instantaneous loss of the GPS signal for a small interval of time, called "holdover". The holdover error currently has no general solution, and systems affected by it produce erroneous synchronization from the GPS signal. Basic holdover algorithms are discussed along with their most critical properties. The efficiency of the predictive filter in holdover is demonstrated in applications to GPS-based measurements of the TIE.
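
    As a simplified stand-in for the p-step predictive FIR filter mentioned above, the sketch below fits a low-degree polynomial over the most recent TIE samples and extrapolates it p steps ahead; a least-squares polynomial fit extrapolated forward behaves like an unbiased polynomial FIR predictor, but the paper's exact filter gains and tuning may differ. The signal, window length and degree are assumptions.

```python
# Hedged sketch: predicting the time interval error (TIE) during a GPS holdover
# by extrapolating a low-degree polynomial fitted over the last N samples.
import numpy as np

def predict_tie(tie_history, p, window=64, degree=1):
    """Predict TIE p steps ahead from the most recent `window` samples."""
    y = np.asarray(tie_history[-window:], dtype=float)
    n = np.arange(y.size)
    coeffs = np.polyfit(n, y, degree)          # fit trend (ramp by default)
    return np.polyval(coeffs, y.size - 1 + p)  # extrapolate p steps ahead

# Toy usage: a drifting clock with noise; predict 10 steps into a holdover.
rng = np.random.default_rng(0)
tie = 2e-9 * np.arange(500) + rng.normal(0, 5e-9, 500)   # ns-level TIE samples
print(predict_tie(tie, p=10))
```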

  7. Interpolation techniques to reduce error in measurement of toe clearance during obstacle avoidance.

    Science.gov (United States)

    Heijnen, Michel J H; Muir, Brittney C; Rietdyk, Shirley

    2012-01-03

    Foot and toe clearance (TC) are used regularly to describe locomotor control for both clinical and basic research. However, accuracy of TC during obstacle crossing can be compromised by typical sample frequencies, which do not capture the frame when the foot is over the obstacle due to high limb velocities. The purpose of this study was to decrease the error of TC measures by increasing the spatial resolution of the toe trajectory with interpolation. Five young subjects stepped over an obstacle in the middle of an 8 m walkway. Position data were captured at 600 Hz as a gold standard signal (GS-600-Hz). The GS-600-Hz signal was downsampled to 60 Hz (DS-60-Hz). The DS-60-Hz was then interpolated by either upsampling or an algorithm. Error was calculated as the absolute difference in TC between GS-600-Hz and each of the remaining signals, for both the leading limb and the trailing limb. All interpolation methods reduced the TC error to a similar extent. Interpolation reduced the median error of trail TC from 5.4 to 1.1 mm; the maximum error was reduced from 23.4 to 4.2 mm (16.6-3.8%). The median lead TC error improved from 1.6 to 0.5 mm, and the maximum error improved from 9.1 to 1.8 mm (5.3-0.9%). Therefore, interpolating a 60 Hz signal is a valid technique to decrease the error of TC during obstacle crossing. Published by Elsevier Ltd.
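
    The effect described above, missing the frame when the toe is over the obstacle, can be illustrated with a short upsampling experiment. The trajectory, obstacle height and crossing instant below are synthetic assumptions; the study's actual interpolation algorithms and signals differ in detail.

```python
# Hedged sketch: upsampling a 60 Hz toe trajectory before extracting toe
# clearance (TC), in the spirit of the interpolation described above.
import numpy as np
from scipy.interpolate import interp1d

fs_low, fs_high = 60, 600
t_cross = 0.305                    # instant the toe passes over the obstacle (s)
t_low = np.arange(0, 1.0, 1.0 / fs_low)
z_low = 0.05 + 0.18 * np.sin(np.pi * t_low) ** 2     # synthetic toe height (m)

# Upsample to 600 Hz with cubic interpolation
t_high = np.arange(t_low[0], t_low[-1], 1.0 / fs_high)
z_high = interp1d(t_low, z_low, kind='cubic')(t_high)

obstacle = 0.15                    # obstacle height (m)
tc_low  = z_low[np.argmin(np.abs(t_low  - t_cross))] - obstacle
tc_high = z_high[np.argmin(np.abs(t_high - t_cross))] - obstacle
print(f"TC from the nearest 60 Hz frame: {tc_low * 1000:.1f} mm")
print(f"TC after upsampling to 600 Hz:   {tc_high * 1000:.1f} mm")
```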

  8. Wide-aperture laser beam measurement using transmission diffuser: errors modeling

    Science.gov (United States)

    Matsak, Ivan S.

    2015-06-01

    Instrumental errors in measuring the diameter of a wide-aperture laser beam were modeled in order to design a measurement setup and justify its metrological characteristics. The modeled setup is based on a CCD camera and a transmission diffuser. This method is appropriate for precision measurement of large laser beam widths from 10 mm up to 1000 mm; such beams cannot be measured with other methods based on a slit, pinhole, knife edge, or direct CCD camera measurement. The method is suitable for continuous and pulsed laser irradiation. However, the transmission diffuser method lacks the metrological justification required for verifying wide-aperture beam forming systems. Because no standard wide-aperture flat-top beam source is available, modelling is the preferred way to provide basic reference points for developing the measurement system. Modelling was conducted in MathCAD. A super-Lorentz distribution with shape parameter 6-12 was used as a model of the beam. Theoretical evaluation showed that the key parameters influencing the error are: relative beam size, spatial non-uniformity of the diffuser, lens distortion, physical vignetting, CCD spatial resolution, and effective camera ADC resolution. Errors were modeled for the 90%-of-power beam diameter criterion. A 12th-order super-Lorentz distribution was the primary model because it closely matches the experimental distribution at the output of the test beam forming system, although other orders were also used. Analytic expressions were obtained by analyzing the modelling results for each influencing factor. It was shown that an error below 1% is attainable with a suitable choice of parameters, based on commercially available components of the setup. With calibration procedures and multiple measurements, the method can achieve errors as low as 0.1%.
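
    To make the 90%-of-power diameter criterion concrete, the sketch below evaluates a 12th-order super-Lorentzian radial profile on a grid and finds the radius enclosing 90% of the total power. The exact profile definition and the width parameter are assumptions for illustration, not the paper's model parameters.

```python
# Hedged sketch: a 12th-order super-Lorentzian beam profile and the diameter
# that encloses 90% of the total power (the criterion used above).
import numpy as np

def super_lorentz(r, w, order):
    """Radial intensity profile I(r) = 1 / (1 + (r/w)^order)."""
    return 1.0 / (1.0 + (r / w) ** order)

w, order = 50.0, 12                       # width (mm) and shape parameter, assumed
r = np.linspace(0, 200.0, 20001)          # radial grid, mm
power = np.cumsum(super_lorentz(r, w, order) * 2 * np.pi * r) * (r[1] - r[0])
power /= power[-1]                        # normalized encircled power

r90 = r[np.searchsorted(power, 0.90)]
print(f"90%-power beam diameter: {2 * r90:.1f} mm")
```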

  9. [Errors in medicine. Causes, impact and improvement measures to improve patient safety].

    Science.gov (United States)

    Waeschle, R M; Bauer, M; Schmidt, C E

    2015-09-01

    The guarantee of quality of care and patient safety is of major importance in hospitals, even though increased economic pressure and work intensification are ubiquitously present. Nevertheless, adverse events still occur in 3-4 % of hospital stays, and of these 25-50 % are estimated to be avoidable. The identification of possible causes of error and the development of measures for the prevention of medical errors are essential for patient safety. The implementation and continuous development of a constructive culture of error tolerance are fundamental. The origins of errors can be differentiated into systemic latent and individual active causes, and components of both categories are typically involved when an error occurs. Systemic causes are, for example, outdated structural environments, lack of clinical standards and low personnel density. These causes arise far away from the patient, e.g. from management decisions, and can remain unrecognized for a long time. Individual causes involve, e.g. confirmation bias, error of fixation and prospective memory failure. These causes have a direct impact on patient care and can result in immediate injury to patients. Stress, unclear information, complex systems and a lack of professional experience can promote individual causes. Awareness of possible causes of error is a fundamental precondition to establishing appropriate countermeasures. Error prevention should include actions directly affecting the causes of error and includes checklists and standard operating procedures (SOP) to avoid fixation and prospective memory failure, and team resource management to improve communication and the generation of collective mental models. Critical incident reporting systems (CIRS) provide the opportunity to learn from previous incidents without resulting in injury to patients. Information technology (IT) support systems, such as the computerized physician order entry system, assist in the prevention of medication errors by providing

  10. Measurement of straightness without Abbe error using an enhanced differential plane mirror interferometer.

    Science.gov (United States)

    Jin, Tao; Ji, Hudong; Hou, Wenmei; Le, Yanfen; Shen, Lu

    2017-01-20

    This paper presents an enhanced differential plane mirror interferometer with high resolution for measuring straightness. Two sets of space symmetrical beams are used to travel through the measurement and reference arms of the straightness interferometer, which contains three specific optical devices: a Koster prism, a wedge prism assembly, and a wedge mirror assembly. Changes in the optical path in the interferometer arms caused by straightness are differential and converted into phase shift through a particular interferometer system. The interferometric beams have a completely common path and space symmetrical measurement structure. The crosstalk of the Abbe error caused by pitch, yaw, and roll angle is avoided. The dead path error is minimized, which greatly enhances the stability and accuracy of the measurement. A measurement resolution of 17.5 nm is achieved. The experimental results fit well with the theoretical analysis.

  11. Error model of geomagnetic-field measurement and extended Kalman-filter based compensation method.

    Science.gov (United States)

    Ge, Zhilei; Liu, Suyun; Li, Guopeng; Huang, Yan; Wang, Yanni

    2017-01-01

    The real-time accurate measurement of the geomagnetic field is the foundation for achieving high-precision geomagnetic navigation. The existing geomagnetic-field measurement models are essentially simplified models that cannot accurately describe the sources of measurement error. This paper, on the basis of systematically analyzing the source of geomagnetic-field measurement error, built a complete measurement model, into which the previously unconsidered geomagnetic daily variation field was introduced. This paper proposed an extended Kalman-filter based compensation method, which allows a large amount of measurement data to be used in estimating parameters to obtain the optimal solution in the sense of statistics. The experiment results showed that the compensated strength of the geomagnetic field remained close to the real value and the measurement error was basically controlled within 5 nT. In addition, this compensation method has strong applicability due to its easy data collection and ability to remove the dependence on a high-precision measurement instrument.

  12. Using Computation Curriculum-Based Measurement Probes for Error Pattern Analysis

    Science.gov (United States)

    Dennis, Minyi Shih; Calhoon, Mary Beth; Olson, Christopher L.; Williams, Cara

    2014-01-01

    This article describes how "curriculum-based measurement--computation" (CBM-C) mathematics probes can be used in combination with "error pattern analysis" (EPA) to pinpoint difficulties in basic computation skills for students who struggle with learning mathematics. Both assessment procedures provide ongoing assessment data…

  13. Assessment of measurement errors and dynamic calibration methods for three different tipping bucket rain gauges

    Science.gov (United States)

    Three different models of tipping bucket rain gauges (TBRs), viz. HS-TB3 (Hydrological Services Pty Ltd), ISCO-674 (Isco, Inc.) and TR-525 (Texas Electronics, Inc.), were calibrated in the lab to quantify measurement errors across a range of rainfall intensities (5 mm.h-1 to 250 mm.h-1) and three di...

  14. Accelerating inference for diffusions observed with measurement error and large sample sizes using approximate Bayesian computation

    DEFF Research Database (Denmark)

    Picchini, Umberto; Forman, Julie Lyng

    2016-01-01

    a nonlinear stochastic differential equation model observed with correlated measurement errors and an application to protein folding modelling. An approximate Bayesian computation (ABC)-MCMC algorithm is suggested to allow inference for model parameters within reasonable time constraints. The ABC algorithm...

  15. Methods for determining the effect of flatness deviations, eccentricity and pyramidal errors on angle measurements

    CSIR Research Space (South Africa)

    Kruger, OA

    2000-01-01

    Full Text Available , eccentricity and pyramidal errors of the measuring faces. Deviations in the flatness of angle surfaces have been held responsible for the lack of agreement in angle comparisons. An investigation has been carried out using a small-angle generator...

  16. Correlation Attenuation Due to Measurement Error: A New Approach Using the Bootstrap Procedure

    Science.gov (United States)

    Padilla, Miguel A.; Veprinsky, Anna

    2012-01-01

    Issues with correlation attenuation due to measurement error are well documented. More than a century ago, Spearman proposed a correction for attenuation. However, this correction has seen very little use since it can potentially inflate the true correlation beyond one. In addition, very little confidence interval (CI) research has been done for…
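
    For context, Spearman's disattenuation divides the observed correlation by the square root of the product of the reliabilities, and a bootstrap can attach an interval to the corrected value. The sketch below is a minimal version of that idea with simulated data and reliabilities treated as fixed known values; the article's exact bootstrap procedure may differ (for example, it may also resample the reliability estimates).

```python
# Hedged sketch: Spearman's correction for attenuation with a simple
# percentile-bootstrap confidence interval on simulated data.
import numpy as np

rng = np.random.default_rng(1)
n = 200
true_x = rng.normal(size=n)
true_y = 0.6 * true_x + rng.normal(scale=0.8, size=n)
x = true_x + rng.normal(scale=0.5, size=n)    # error-contaminated measures
y = true_y + rng.normal(scale=0.5, size=n)

# Assumed reliabilities; with the noise levels above, the true values are 0.80.
rel_x, rel_y = 0.80, 0.80

def corrected_r(x, y):
    r_obs = np.corrcoef(x, y)[0, 1]
    return min(r_obs / np.sqrt(rel_x * rel_y), 1.0)   # cap at 1.0

boot = []
for _ in range(2000):
    idx = rng.integers(0, n, n)
    boot.append(corrected_r(x[idx], y[idx]))
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"disattenuated r = {corrected_r(x, y):.3f}, 95% CI [{lo:.3f}, {hi:.3f}]")
```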

  17. Quantum Non-Demolition Singleshot Parity Measurements for a Proposed Quantum Error Correction Scheme

    Science.gov (United States)

    Petrenko, Andrei; Sun, Luyan; Leghtas, Zaki; Vlastakis, Brian; Kirchmair, Gerhard; Sliwa, Katrina; Narla, Anirudh; Hatridge, Michael; Shankar, Shyam; Blumoff, Jacob; Frunzio, Luigi; Mirrahimi, Mazyar; Devoret, Michel; Schoelkopf, Robert

    2014-03-01

    In order to be effective, a quantum error correction scheme (QEC) requires measurements of an error syndrome to be Quantum Non-Demolition (QND) and fast compared to the rate at which errors occur. Employing a superconducting circuit QED architecture, the parity of a superposition of coherent states in a cavity, or cat states, is the error syndrome for a recently proposed QEC scheme. We demonstrate the tracking of parity of cat states in a cavity and observe individual jumps of parity in real time with singleshot measurements that are much faster than the lifetime of the cavity. The projective nature of these measurements is evident when inspecting individual singleshot traces, yet when averaging the traces as an ensemble the average parity decays as predicted for a coherent state. We find our protocol to be 99.8% QND per measurement, and our sensitivity to parity jumps to be very high at 96% for an average photon number n = 1 in the cavity (85% for n = 4). Such levels of performance can already increase the lifetime of a quantum bit of information, and thereby present a promising step towards realizing a viable QEC scheme.

  18. The reliability and measurement error of protractor-based goniometry of the fingers: A systematic review

    NARCIS (Netherlands)

    Kooij, Y.E. van; Fink, A.; Nijhuis-Van der Sanden, M.W.; Speksnijder, C.M.

    2017-01-01

    STUDY DESIGN: Systematic review PURPOSE OF THE STUDY: The purpose was to review the available literature for evidence on the reliability and measurement error of protractor-based goniometry assessment of the finger joints. METHODS: Databases were searched for articles with key words "hand,"

  19. The reliability and measurement error of protractor-based goniometry of the fingers : A systematic review

    NARCIS (Netherlands)

    van Kooij, Yara E.; Fink, Alexandra; Nijhuis-van der Sanden, Maria W.; Speksnijder, Caroline M.|info:eu-repo/dai/nl/304821535

    2017-01-01

    Study Design: Systematic review. Purpose of the Study: The purpose was to review the available literature for evidence on the reliability and measurement error of protractor-based goniometry assessment of the finger joints. Methods: Databases were searched for articles with key words "hand,"

  20. A Study on Sixth Grade Students' Misconceptions and Errors in Spatial Measurement: Length, Area, and Volume

    Science.gov (United States)

    Tan Sisman, Gulcin; Aksu, Meral

    2016-01-01

    The purpose of the present study was to portray students' misconceptions and errors while solving conceptually and procedurally oriented tasks involving length, area, and volume measurement. The data were collected from 445 sixth grade students attending public primary schools in Ankara, Türkiye via a test composed of 16 constructed-response…

  1. Random and systematic errors in case-control studies calculating the injury risk of driving under the influence of psychoactive substances

    DEFF Research Database (Denmark)

    Houwing, Sjoerd; Hagenzieker, Marjan; Mathijssen, René

    2013-01-01

    injury in car crashes. The calculated odds ratios in these studies showed large variations, despite the use of uniform guidelines for the study designs. The main objective of the present article is to provide insight into the presence of random and systematic errors in the six DRUID case–control studies. Relevant information was gathered from the DRUID reports for eleven indicators of errors. The results showed that differences between the odds ratios in the DRUID case–control studies may indeed be (partially) explained by random and systematic errors. Selection bias and errors due to small sample sizes and cell counts were the most frequently observed errors in the six DRUID case–control studies. Therefore, it is recommended that epidemiological studies that assess the risk of psychoactive substances in traffic pay specific attention to avoid these potential sources of random and systematic errors.

  2. Estimating the independent effects of multiple pollutants in the presence of measurement error: an application of a measurement-error-resistant technique.

    Science.gov (United States)

    Zeka, Ariana; Schwartz, Joel

    2004-12-01

    Misclassification of exposure usually leads to biased estimates of exposure-response associations. This is particularly an issue in cases with multiple correlated exposures, where the direction of bias is uncertain. It is necessary to address this problem when considering associations with important public health implications such as the one between mortality and air pollution, because biased exposure effects can result in biased risk assessments. The National Morbidity and Mortality Air Pollution Study (NMMAPS) recently reported results from an assessment of multiple pollutants and daily mortality in 90 U.S. cities. That study assessed the independent associations of the selected pollutants with daily mortality in two-pollutant models. Excess mortality was associated with particulate matter of aerodynamic diameter less than or equal to 10 µm (PM10), but not with other pollutants, in these two-pollutant models. The extent of bias due to measurement error in these reported results is unclear. Schwartz and Coull recently proposed a method that deals with multiple exposures and, under certain conditions, is resistant to measurement error. We applied this method to reanalyze the data from NMMAPS. For PM10, we found results similar to those reported previously from NMMAPS (0.24% increase in deaths per 10-µg/m³ increase in PM10). In addition, we report an important effect of carbon monoxide that had not been observed previously.

  3. Self-test web-based pure-tone audiometry: validity evaluation and measurement error analysis.

    Science.gov (United States)

    Masalski, Marcin; Kręcicki, Tomasz

    2013-04-12

    Potential methods of application of self-administered Web-based pure-tone audiometry conducted at home on a PC with a sound card and ordinary headphones depend on the value of measurement error in such tests. The aim of this research was to determine the measurement error of the hearing threshold determined in the way described above and to identify and analyze factors influencing its value. The evaluation of the hearing threshold was made in three series: (1) tests on a clinical audiometer, (2) self-tests done on a specially calibrated computer under the supervision of an audiologist, and (3) self-tests conducted at home. The research was carried out on the group of 51 participants selected from patients of an audiology outpatient clinic. From the group of 51 patients examined in the first two series, the third series was self-administered at home by 37 subjects (73%). The average difference between the value of the hearing threshold determined in series 1 and in series 2 was -1.54 dB with standard deviation of 7.88 dB and a Pearson correlation coefficient of .90. Between the first and third series, these values were -1.35 dB ± 10.66 dB and .84, respectively. In series 3, the standard deviation was most influenced by the error connected with the procedure of hearing threshold identification (6.64 dB), calibration error (6.19 dB), and additionally at the frequency of 250 Hz by frequency nonlinearity error (7.28 dB). The obtained results confirm the possibility of applying Web-based pure-tone audiometry in screening tests. In the future, modifications of the method leading to the decrease in measurement error can broaden the scope of Web-based pure-tone audiometry application.

  4. A new method to reduce truncation errors in partial spherical near-field measurements

    DEFF Research Database (Denmark)

    Cano-Facila, F J; Pivnenko, Sergey

    2011-01-01

    A new and effective method for reduction of truncation errors in partial spherical near-field (SNF) measurements is proposed. The method is useful when measuring electrically large antennas, where the measurement time with the classical SNF technique is prohibitively long and an acquisition over a reduced angular sector is adopted; as a consequence, a truncation error is present in the calculated far-field pattern within this sector. The method is based on the Gerchberg-Papoulis algorithm used to extrapolate functions and it is able to extend the valid region of the calculated far-field pattern up to the whole forward hemisphere. To verify the effectiveness of the method, several examples are presented using both simulated and measured truncated near-field data.

  5. Influenza infection rates, measurement errors and the interpretation of paired serology.

    Directory of Open Access Journals (Sweden)

    Simon Cauchemez

    Full Text Available Serological studies are the gold standard method to estimate influenza infection attack rates (ARs) in human populations. In a common protocol, blood samples are collected before and after the epidemic in a cohort of individuals; and a rise in haemagglutination-inhibition (HI) antibody titers during the epidemic is considered as a marker of infection. Because of inherent measurement errors, a 2-fold rise is usually considered as insufficient evidence for infection and seroconversion is therefore typically defined as a 4-fold rise or more. Here, we revisit this widely accepted 70-year old criterion. We develop a Markov chain Monte Carlo data augmentation model to quantify measurement errors and reconstruct the distribution of latent true serological status in a Vietnamese 3-year serological cohort, in which replicate measurements were available. We estimate that the 1-sided probability of a 2-fold error is 9.3% (95% Credible Interval, CI: 3.3%, 17.6%) when the antibody titer is below 10 but is 20.2% (95% CI: 15.9%, 24.0%) otherwise. After correction for measurement errors, we find that the proportion of individuals with 2-fold rises in antibody titers was too large to be explained by measurement errors alone. Estimates of ARs vary greatly depending on whether those individuals are included in the definition of the infected population. A simulation study shows that our method is unbiased. The 4-fold rise case definition is relevant when aiming at a specific diagnostic for individual cases, but the justification is less obvious when the objective is to estimate ARs. In particular, it may lead to large underestimates of ARs. Determining which biological phenomenon contributes most to 2-fold rises in antibody titers is essential to assess bias with the traditional case definition and offer improved estimates of influenza ARs.
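
    A toy simulation helps build intuition for why one-dilution-step (2-fold) assay errors alone rarely produce apparent 4-fold rises. The error probability below is taken loosely from the abstract and the titer distribution is invented; the authors' MCMC data-augmentation model is far richer than this sketch.

```python
# Hedged sketch: how 2-fold measurement errors in paired HI titers can or
# cannot produce apparent >=4-fold rises in uninfected individuals.
import numpy as np

rng = np.random.default_rng(2)
n = 100_000
p_err = 0.20                       # prob. of a one-dilution-step error per assay

true_titer = rng.integers(2, 7, n)           # log2 titer; uninfected, so no true change
def measure(t):
    err = rng.choice([-1, 0, 1], size=t.size, p=[p_err / 2, 1 - p_err, p_err / 2])
    return t + err

pre, post = measure(true_titer), measure(true_titer)
rise = post - pre                             # rise in log2 (dilution) steps
print("apparent >=2-fold rises:", np.mean(rise >= 1))   # roughly 17% here
print("apparent >=4-fold rises:", np.mean(rise >= 2))   # roughly 1% here
```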

  6. Measuring and detecting molecular adaptation in codon usage against nonsense errors during protein translation.

    Science.gov (United States)

    Gilchrist, Michael A; Shah, Premal; Zaretzki, Russell

    2009-12-01

    Codon usage bias (CUB) has been documented across a wide range of taxa and is the subject of numerous studies. While most explanations of CUB invoke some type of natural selection, most measures of CUB adaptation are heuristically defined. In contrast, we present a novel and mechanistic method for defining and contextualizing CUB adaptation to reduce the cost of nonsense errors during protein translation. Using a model of protein translation, we develop a general approach for measuring the protein production cost in the face of nonsense errors of a given allele as well as the mean and variance of these costs across its coding synonyms. We then use these results to define the nonsense error adaptation index (NAI) of the allele or a contiguous subset thereof. Conceptually, the NAI value of an allele is a relative measure of its elevation on a specific and well-defined adaptive landscape. To illustrate its utility, we calculate NAI values for the entire coding sequence and across a set of nonoverlapping windows for each gene in the Saccharomyces cerevisiae S288c genome. Our results provide clear evidence of adaptation to reduce the cost of nonsense errors and increasing adaptation with codon position and expression. The magnitude and nature of this adaptation are also largely consistent with simulation results in which nonsense errors are the only selective force driving CUB evolution. Because NAI is derived from mechanistic models, it is both easier to interpret and more amenable to future refinement than other commonly used measures of codon bias. Further, our approach can also be used as a starting point for developing other mechanistically derived measures of adaptation such as for translational accuracy.

  7. Test-Retest Reliability of the Adaptive Chemistry Assessment Survey for Teachers: Measurement Error and Alternatives to Correlation

    Science.gov (United States)

    Harshman, Jordan; Yezierski, Ellen

    2016-01-01

    Determining the error of measurement is a necessity for researchers engaged in bench chemistry, chemistry education research (CER), and a multitude of other fields. Discussions regarding what constructs measurement error entails and how to best measure them have occurred, but the critiques about traditional measures have yielded few alternatives.…

  8. Modeling and Error Compensation of Robotic Articulated Arm Coordinate Measuring Machines Using BP Neural Network

    Directory of Open Access Journals (Sweden)

    Guanbin Gao

    2017-01-01

    Full Text Available The articulated arm coordinate measuring machine (AACMM) is a specific robotic structural instrument, for which the D-H method is used for kinematic modeling and error compensation. However, it is difficult for the existing error compensation models to describe the various factors that affect the accuracy of the AACMM. In this paper, a modeling and error compensation method for the AACMM is proposed based on BP neural networks. According to the available measurements, the poses of the AACMM are used as the input, and the coordinates of the probe are used as the output of the neural network. To avoid tedious training and improve the training efficiency and prediction accuracy, a data acquisition strategy is developed according to the actual measurement behavior in the joint space. A neural network model is proposed and analyzed by using data generated via the Monte-Carlo method in simulations. The structure and parameter settings of the neural network are optimized to improve the prediction accuracy and training speed. Experimental studies have been conducted to verify the proposed algorithm with neural network compensation, which show that 97% of the AACMM error can be eliminated after compensation. These experimental results reveal the effectiveness of the proposed modeling and compensation method for the AACMM.
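
    A minimal sketch of the pose-to-error mapping idea is given below: a small backpropagation-trained multilayer perceptron learns probe-coordinate errors from joint angles. The 6-joint configuration and the synthetic error model are assumptions standing in for the paper's Monte-Carlo and experimental data.

```python
# Hedged sketch: a BP neural network (MLP trained by backpropagation) mapping
# AACMM joint angles to probe-coordinate errors, on synthetic data.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)
n, n_joints = 5000, 6
q = rng.uniform(-np.pi, np.pi, (n, n_joints))         # joint angles (rad)

# Synthetic "true" probe coordinate error (mm): a smooth function of the pose
err = np.column_stack([
    0.05 * np.sin(q[:, 0]) * np.cos(q[:, 2]) + 0.02 * q[:, 4] ** 2,
    0.04 * np.cos(q[:, 1]) + 0.03 * np.sin(q[:, 3] + q[:, 5]),
    0.06 * np.sin(q[:, 2]) * q[:, 1] / np.pi,
]) + rng.normal(0, 0.002, (n, 3))

q_tr, q_te, e_tr, e_te = train_test_split(q, err, test_size=0.2, random_state=0)
net = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0)
net.fit(q_tr, e_tr)

residual = e_te - net.predict(q_te)            # error remaining after compensation
print("RMS error before compensation (mm):", np.sqrt((e_te ** 2).mean()))
print("RMS error after  compensation (mm):", np.sqrt((residual ** 2).mean()))
```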

  9. Effects of holding time and measurement error on culturing Legionella in environmental water samples.

    Science.gov (United States)

    Flanders, W Dana; Kirkland, Kimberly H; Shelton, Brian G

    2014-10-01

    Outbreaks of Legionnaires' disease require environmental testing of water samples from potentially implicated building water systems to identify the source of exposure. A previous study reports a large impact on Legionella sample results due to shipping and delays in sample processing. Specifically, this same study, without accounting for measurement error, reports more than half of shipped samples tested had Legionella levels that arbitrarily changed up or down by one or more logs, and the authors attribute this result to shipping time. Accordingly, we conducted a study to determine the effects of sample holding/shipping time on Legionella sample results while taking into account measurement error, which has previously not been addressed. We analyzed 159 samples, each split into 16 aliquots, of which one-half (8) were processed promptly after collection. The remaining half (8) were processed the following day to assess the impact of holding/shipping time. A total of 2544 samples were analyzed including replicates. After accounting for inherent measurement error, we found that the effect of holding time on observed Legionella counts was small and should have no practical impact on interpretation of results. Holding samples increased the root mean squared error by only about 3-8%. Notably, for only one of 159 samples did the average of the 8 replicate counts change by 1 log. Thus, our findings do not support the hypothesis of frequent, significant (≥1 log10 unit) Legionella colony count changes due to holding. Copyright © 2014 The Authors. Published by Elsevier Ltd. All rights reserved.

  10. Effects of Spectral Error in Efficiency Measurements of GaInAs-Based Concentrator Solar Cells

    Energy Technology Data Exchange (ETDEWEB)

    Osterwald, C. R.; Wanlass, M. W.; Moriarty, T.; Steiner, M. A.; Emery, K. A.

    2014-03-01

    This technical report documents a particular error in efficiency measurements of triple-absorber concentrator solar cells caused by incorrect spectral irradiance -- specifically, one that occurs when the irradiance from unfiltered, pulsed xenon solar simulators into the GaInAs bottom subcell is too high. For cells designed so that the light-generated photocurrents in the three subcells are nearly equal, this condition can cause a large increase in the measured fill factor, which, in turn, causes a significant artificial increase in the efficiency. The error is readily apparent when the data under concentration are compared to measurements with correctly balanced photocurrents, and manifests itself as discontinuities in plots of fill factor and efficiency versus concentration ratio. In this work, we simulate the magnitudes and effects of this error with a device-level model of two concentrator cell designs, and demonstrate how a new Spectrolab, Inc., Model 460 Tunable-High Intensity Pulsed Solar Simulator (T-HIPSS) can mitigate the error.

  11. Partial compensation interferometry for measurement of surface parameter error of high-order aspheric surfaces

    Science.gov (United States)

    Hao, Qun; Li, Tengfei; Hu, Yao

    2018-01-01

    Surface parameters are the properties that describe the shape of an aspheric surface, mainly the vertex radius of curvature (VROC) and the conic constant (CC). The VROC affects basic properties such as the focal length of an aspheric surface, while the CC is the basis of classification for aspheric surfaces. The deviations of the two parameters are defined as the surface parameter error (SPE). Precisely measuring SPE is critical for manufacturing and aligning aspheric surfaces. Generally, SPE of an aspheric surface is measured directly by curvature fitting on the absolute profile measurement data from contact or non-contact testing, and most interferometry-based methods adopt null compensators or null computer-generated holograms to measure SPE. To our knowledge, there is no effective way to measure SPE of high-order aspheric surfaces with non-null interferometry. In this paper, based on the theory of slope asphericity and the best compensation distance (BCD) established in our previous work, we propose an SPE measurement method for high-order aspheric surfaces in a partial compensation interferometry (PCI) system. In the procedure, we first establish a system of two equations by utilizing the SPE-caused BCD change and surface shape change. Then, we can simultaneously obtain the VROC error and CC error in the PCI system by solving the equations. Simulations are made to verify the method, and the results show a high relative accuracy.

  12. Out-of-squareness measurement on ultra-precision machine based on the error separation

    Science.gov (United States)

    Lai, Tao; Liu, Junfeng; Chen, Shanyong; Guan, Chaoliang; Tie, Guipeng; Liao, Quan

    2017-06-01

    Traditional methods of measuring the out-of-squareness of an ultra-precision motion stage have many limitations, especially the errors caused by the inaccuracy of standard specimens such as the bare L-square and the optical pentaprism. Generally, the accuracy of an out-of-squareness measurement is lower than the accuracy of the interior angles of the standard specimen. Based on error separation, this paper presents a novel method of out-of-squareness measurement with a polygon artifact. The angles bounded by the guideways and the edges of the polygon artifact are measured, and the out-of-squareness is extracted using the principle that the sum of the interior angles of a convex polygon is (n-2)π. An out-of-squareness measurement experiment was carried out on a profilometer using an optical square brick with interior-angle deviations of about 1140.2 arcsec. The results show that the measurement accuracy of the three out-of-squareness values of the profilometer is not affected by the internal angles of the artifact. The method can be applied to measure machine errors more accurately and to calibrate the out-of-squareness of the machine.

  13. Characterization of measurement errors using structure-from-motion and photogrammetry to measure marine habitat structural complexity.

    Science.gov (United States)

    Bryson, Mitch; Ferrari, Renata; Figueira, Will; Pizarro, Oscar; Madin, Josh; Williams, Stefan; Byrne, Maria

    2017-08-01

    Habitat structural complexity is one of the most important factors in determining the makeup of biological communities. Recent advances in structure-from-motion and photogrammetry have resulted in a proliferation of 3D digital representations of habitats from which structural complexity can be measured. Little attention has been paid to quantifying the measurement errors associated with these techniques, including the variability of results under different surveying and environmental conditions. Such errors have the potential to confound studies that compare habitat complexity over space and time. This study evaluated the accuracy, precision, and bias in measurements of marine habitat structural complexity derived from structure-from-motion and photogrammetric measurements using repeated surveys of artificial reefs (with known structure) as well as natural coral reefs. We quantified measurement errors as a function of survey image coverage, actual surface rugosity, and the morphological community composition of the habitat-forming organisms (reef corals). Our results indicated that measurements could be biased by up to 7.5% of the total observed ranges of structural complexity based on the environmental conditions present during any particular survey. Positive relationships were found between measurement errors and actual complexity, and the strength of these relationships was increased when coral morphology and abundance were also used as predictors. The numerous advantages of structure-from-motion and photogrammetry techniques for quantifying and investigating marine habitats will mean that they are likely to replace traditional measurement techniques (e.g., chain-and-tape). To this end, our results have important implications for data collection and the interpretation of measurements when examining changes in habitat complexity using structure-from-motion and photogrammetry.

  14. The reliability and measurement error of protractor-based goniometry of the fingers: A systematic review.

    Science.gov (United States)

    van Kooij, Yara E; Fink, Alexandra; Nijhuis-van der Sanden, Maria W; Speksnijder, Caroline M

    STUDY DESIGN: Systematic review. PURPOSE OF THE STUDY: The purpose was to review the available literature for evidence on the reliability and measurement error of protractor-based goniometry assessment of the finger joints. METHODS: Databases were searched for articles with key words "hand," "goniometry," "reliability," and derivatives of these terms. Assessment of the methodological quality was carried out using the Consensus-Based Standards for the Selection of Health Measurement Instruments checklist. Two independent reviewers performed a best evidence synthesis based on criteria proposed by Terwee et al (2007). RESULTS: Fifteen articles were included. One article was of fair methodological quality, and 14 articles were of poor methodological quality. An acceptable level for reliability (intraclass correlation coefficient > 0.70 or Pearson's correlation > 0.80) was reported in 1 study of fair methodological quality and in 8 articles of low methodological quality. Because the minimal important change was not calculated in the articles, there was an unknown level of evidence for the measurement error. Further research with adequate sample sizes should focus on reference outcomes for different patient groups. For valid therapy evaluation, it is important to know if the change in range of motion reflects a real change of the patient or if this is due to the measurement error of the goniometer. Until now, there is insufficient evidence to establish this cut-off point (the smallest detectable change). Following the Consensus-Based Standards for the Selection of Health Measurement Instruments criteria, there was limited level of evidence for an acceptable reliability in the dorsal measurement method and unknown level of evidence for the measurement error. LEVEL OF EVIDENCE: 2a. Copyright © 2017 Hanley & Belfus. Published by Elsevier Inc. All rights reserved.

  15. Testing capability indices for one-sided processes with measurement errors

    Directory of Open Access Journals (Sweden)

    Grau D.

    2013-01-01

    Full Text Available In the manufacturing industry, many product characteristics have one-sided tolerances. The process capability indices Cpu(u, v) and Cpl(u, v) can be used to measure process performance. Most research work related to capability indices assumes no gauge measurement errors. This assumption insufficiently reflects real situations even when advanced measuring instruments are used. In this paper we show that using a critical value without taking these errors into account severely underestimates the α-risk, which makes the capability test less accurate. In order to improve the results we suggest the use of an adjusted critical value, and we give a Maple program to obtain it. An example from a polymer granulate factory is presented to illustrate this approach.
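
    The basic effect the paper addresses can be shown with a few lines of arithmetic: gauge error variance adds to the process variance, so the observed one-sided index is attenuated by 1/sqrt(1 + γ²), where γ is the gauge-to-process standard deviation ratio. The sketch below only illustrates this attenuation; it does not reproduce the paper's adjusted critical value, and the numbers are invented.

```python
# Hedged sketch: attenuation of the one-sided capability index
# Cpu = (USL - mu) / (3 * sigma) caused by gauge measurement error.
import numpy as np

USL = 10.0
mu, sigma_p = 9.0, 0.25        # assumed true process mean and standard deviation
gamma = 0.3                    # assumed gauge error ratio sigma_m / sigma_p
sigma_m = gamma * sigma_p

cpu_true = (USL - mu) / (3 * sigma_p)
cpu_observed = (USL - mu) / (3 * np.sqrt(sigma_p**2 + sigma_m**2))
print(f"true Cpu     = {cpu_true:.3f}")
print(f"observed Cpu = {cpu_observed:.3f}   (attenuated by factor "
      f"{1 / np.sqrt(1 + gamma**2):.3f})")
```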

  16. Cost-Sensitive Feature Selection of Numeric Data with Measurement Errors

    Directory of Open Access Journals (Sweden)

    Hong Zhao

    2013-01-01

    Full Text Available Feature selection is an essential process in data mining applications since it reduces a model’s complexity. However, feature selection with various types of costs is still a new research topic. In this paper, we study the cost-sensitive feature selection problem of numeric data with measurement errors. The major contributions of this paper are fourfold. First, a new data model is built to address test costs and misclassification costs as well as error boundaries. It is distinguished from the existing models mainly on the error boundaries. Second, a covering-based rough set model with normal distribution measurement errors is constructed. With this model, coverings are constructed from data rather than assigned by users. Third, a new cost-sensitive feature selection problem is defined on this model. It is more realistic than the existing feature selection problems. Fourth, both backtracking and heuristic algorithms are proposed to deal with the new problem. Experimental results show the efficiency of the pruning techniques for the backtracking algorithm and the effectiveness of the heuristic algorithm. This study is a step toward realistic applications of the cost-sensitive learning.

  17. Estimation Error in the Correlation of Two Random Variables: A Spreadsheet-Based Exposition

    Directory of Open Access Journals (Sweden)

    Clarence C. Y. Kwan

    2009-07-01

    Full Text Available Although the statistical term correlation is well-known across many academic disciplines, estimation error in the correlation has traditionally been considered to be a topic too difficult for students outside statistical fields. This pedagogic study presents an approach for the estimation that does not require any advanced statistical concepts. By using familiar spreadsheet functions to facilitate the required computations, it intends to make the analytical material involved accessible to more students.
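
    One standard, textbook route to quantifying the estimation error of a sample correlation is the Fisher z transformation, sketched below on simulated data. This is an equivalent computation to what spreadsheet formulas can do, not the article's exact procedure; the sample size and true correlation are illustrative.

```python
# Hedged sketch: a 95% confidence interval for a sample correlation via the
# Fisher z transformation.
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
n, rho = 60, 0.5
x = rng.normal(size=n)
y = rho * x + np.sqrt(1 - rho**2) * rng.normal(size=n)

r = np.corrcoef(x, y)[0, 1]
z = np.arctanh(r)                      # Fisher z
se = 1.0 / np.sqrt(n - 3)              # approximate standard error of z
z_lo, z_hi = z + np.array([-1, 1]) * stats.norm.ppf(0.975) * se
print(f"r = {r:.3f}, 95% CI [{np.tanh(z_lo):.3f}, {np.tanh(z_hi):.3f}]")
```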

  18. Error Correction Method for Wind Speed Measured with Doppler Wind LIDAR at Low Altitude

    Science.gov (United States)

    Liu, Bingyi; Feng, Changzhong; Liu, Zhishen

    2014-11-01

    For the purpose of obtaining global vertical wind profiles, the Atmospheric Dynamics Mission Aeolus of European Space Agency (ESA), carrying the first spaceborne Doppler lidar ALADIN (Atmospheric LAser Doppler INstrument), is going to be launched in 2015. DLR (German Aerospace Center) developed the A2D (ALADIN Airborne Demonstrator) for the prelaunch validation. A ground-based wind lidar for wind profile and wind field scanning measurement developed by Ocean University of China is going to be used for the ground-based validation after the launch of Aeolus. In order to provide validation data with higher accuracy, an error correction method is investigated to improve the accuracy of low altitude wind data measured with Doppler lidar based on iodine absorption filter. The error due to nonlinear wind sensitivity is corrected, and the method for merging atmospheric return signal is improved. The correction method is validated by synchronous wind measurements with lidar and radiosonde. The results show that the accuracy of wind data measured with Doppler lidar at low altitude can be improved by the proposed error correction method.

  19. Testing and Estimating Shape-Constrained Nonparametric Density and Regression in the Presence of Measurement Error

    KAUST Repository

    Carroll, Raymond J.

    2011-03-01

    In many applications we can expect that, or are interested to know if, a density function or a regression curve satisfies some specific shape constraints. For example, when the explanatory variable, X, represents the value taken by a treatment or dosage, the conditional mean of the response, Y , is often anticipated to be a monotone function of X. Indeed, if this regression mean is not monotone (in the appropriate direction) then the medical or commercial value of the treatment is likely to be significantly curtailed, at least for values of X that lie beyond the point at which monotonicity fails. In the case of a density, common shape constraints include log-concavity and unimodality. If we can correctly guess the shape of a curve, then nonparametric estimators can be improved by taking this information into account. Addressing such problems requires a method for testing the hypothesis that the curve of interest satisfies a shape constraint, and, if the conclusion of the test is positive, a technique for estimating the curve subject to the constraint. Nonparametric methodology for solving these problems already exists, but only in cases where the covariates are observed precisely. However in many problems, data can only be observed with measurement errors, and the methods employed in the error-free case typically do not carry over to this error context. In this paper we develop a novel approach to hypothesis testing and function estimation under shape constraints, which is valid in the context of measurement errors. Our method is based on tilting an estimator of the density or the regression mean until it satisfies the shape constraint, and we take as our test statistic the distance through which it is tilted. Bootstrap methods are used to calibrate the test. The constrained curve estimators that we develop are also based on tilting, and in that context our work has points of contact with methodology in the error-free case.

  20. Algorithms for High-speed Generating CRC Error Detection Coding in Separated Ultra-precision Measurement

    Science.gov (United States)

    Zhi, Z.; Tan, J. B.; Huang, X. D.; Chen, F. F.

    2006-10-01

    In order to resolve the conflict between error detection capability, transmission rate and system resources in the data transmission of ultra-precision measurement, an algorithm for high-speed generation of CRC error-detection codes is put forward in this paper. Theoretical formulae for calculating the CRC code of 16-bit segmented data are obtained by derivation. On the basis of the 16-bit segmented-data formulae, an optimized algorithm for 32-bit segmented-data CRC coding is obtained, which resolves the trade-off between memory occupancy and coding speed. Data coding experiments were conducted successfully using a high-speed ARM embedded system. The results show that this method has the features of high error-detecting ability, high speed and low use of system resources, which improves the real-time performance and reliability of measurement data communication.
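
    For readers unfamiliar with CRC generation, the sketch below is a generic byte-wise, table-driven CRC-16/CCITT implementation (polynomial 0x1021, initial value 0xFFFF). It is a standard realization of the same error-detection code, not the paper's specific 16/32-bit segmented-data formulae.

```python
# Hedged sketch: table-driven CRC-16/CCITT (poly 0x1021, init 0xFFFF).
def make_crc16_table(poly=0x1021):
    table = []
    for byte in range(256):
        crc = byte << 8
        for _ in range(8):
            crc = ((crc << 1) ^ poly) & 0xFFFF if crc & 0x8000 else (crc << 1) & 0xFFFF
        table.append(crc)
    return table

def crc16(data, table, init=0xFFFF):
    crc = init
    for b in data:
        crc = ((crc << 8) & 0xFFFF) ^ table[((crc >> 8) ^ b) & 0xFF]
    return crc

table = make_crc16_table()
# The published check value for CRC-16/CCITT-FALSE over b"123456789" is 0x29B1.
print(hex(crc16(b"123456789", table)))
```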

  1. Does repeated palpation-digitization of pelvic landmarks for measurement of innominate motion introduce a systematic error?--A psychometric investigation.

    Science.gov (United States)

    Adhia, Divya Bharatkumar; Mani, Ramakrishnan; Milosavljevic, Stephan; Tumilty, Steve; Bussey, Melanie D

    2016-02-01

    The palpation-digitization technique for measurement of innominate motion involves repeated manual palpation-digitization of pelvic landmarks, which could introduce a systematic variation between subsequent trials and thereby influence the final innominate angular measurement. The aim of this study is to quantify the effect of repeated palpation-digitization errors on the overall variability of innominate vector length measurements, and to determine if there is a systematic variation between subsequent repeated trials. A single-group repeated-measures study, using four testers and fourteen healthy participants, was conducted. Four pelvic landmarks, the left and right posterior superior iliac spine and anterior superior iliac spine, were palpated and digitized using the 3D digitizing stylus of a Polhemus electromagnetic tracking device, for ten consecutive trials by each tester in random order. The ten individual trials of innominate vector lengths measured by each tester for each participant were used for the analysis. Repeated-measures ANOVA demonstrated a very small effect of the repeated-trial factor (≤0.66%) as well as of the error component (≤0.32%) on innominate vector length variability. Further, residual versus order plots demonstrated a random pattern of errors across zero, thus indicating no systematic variation between subsequent trials of innominate vector length measurements. Copyright © 2015 Elsevier Ltd. All rights reserved.

  2. Measurement Error Affects Risk Estimates for Recruitment to the Hudson River Stock of Striped Bass

    Directory of Open Access Journals (Sweden)

    Dennis J. Dunning

    2002-01-01

    Full Text Available We examined the consequences of ignoring the distinction between measurement error and natural variability in an assessment of risk to the Hudson River stock of striped bass posed by entrainment at the Bowline Point, Indian Point, and Roseton power plants. Risk was defined as the probability that recruitment of age-1+ striped bass would decline by 80% or more, relative to the equilibrium value, at least once during the time periods examined (1, 5, 10, and 15 years). Measurement error, estimated using two abundance indices from independent beach seine surveys conducted on the Hudson River, accounted for 50% of the variability in one index and 56% of the variability in the other. If a measurement error of 50% was ignored and all of the variability in abundance was attributed to natural causes, the risk that recruitment of age-1+ striped bass would decline by 80% or more after 15 years was 0.308 at the current level of entrainment mortality (11%). However, the risk decreased almost tenfold (0.032) if a measurement error of 50% was considered. The change in risk attributable to decreasing the entrainment mortality rate from 11 to 0% was very small (0.009) and similar in magnitude to the change in risk associated with an action proposed in Amendment #5 to the Interstate Fishery Management Plan for Atlantic striped bass (0.006), an increase in the instantaneous fishing mortality rate from 0.33 to 0.4. The proposed increase in fishing mortality was not considered an adverse environmental impact, which suggests that potentially costly efforts to reduce entrainment mortality on the Hudson River stock of striped bass are not warranted.

  3. The method of solution of equations with coefficients that contain measurement errors, using artificial neural network.

    Science.gov (United States)

    Zajkowski, Konrad

    This paper presents an algorithm for solving N equations with N unknowns. The algorithm allows the solution to be determined in situations where the coefficients Ai in the equations are burdened with measurement errors. For some values of Ai (where i = 1,…, N), there is no inverse function of the input equations. In such cases, it is impossible to determine the solution using classical methods.

  4. Measurements of Gun Tube Motion and Muzzle Pointing Error of Main Battle Tanks

    Directory of Open Access Journals (Sweden)

    Peter L. McCall

    2001-01-01

    Full Text Available Beginning in 1990, the US Army Aberdeen Test Center (ATC began testing a prototype cannon mounted in a non-armored turret fitted to an M1A1 Abrams tank chassis. The cannon design incorporated a longer gun tube as a means to increase projectile velocity. A significant increase in projectile impact dispersion was measured early in the test program. Through investigative efforts, the cause of the error was linked to the increased dynamic bending or flexure of the longer tube observed while the vehicle was moving. Research and investigative work was conducted through a collaborative effort with the US Army Research Laboratory, Benet Laboratory, Project Manager – Tank Main Armament Systems, US Army Research and Engineering Center, and Cadillac Gage Textron Inc. New test methods, instrumentation, data analysis procedures, and stabilization control design resulted through this series of investigations into the dynamic tube flexure error source. Through this joint research, improvements in tank fire control design have been developed to improve delivery accuracy. This paper discusses the instrumentation implemented, methods applied, and analysis procedures used to characterize the tube flexure during dynamic tests of a main battle tank and the relationship between gun pointing error and muzzle pointing error.

  5. Bayesian Semiparametric Mixture Tobit Models with Left-Censoring, Skewness and Covariate Measurement Errors

    Science.gov (United States)

    Dagne, Getachew A.; Huang, Yangxin

    2013-01-01

    Problems common to many longitudinal HIV/AIDS, cancer, vaccine and environmental exposure studies are the presence of a lower limit of quantification for a skewed outcome and of time-varying covariates measured with error. There has been relatively little published work dealing simultaneously with these features of longitudinal data. In particular, left-censored data falling below a limit of detection (LOD) may sometimes have a proportion larger than expected under a usually assumed log-normal distribution. In such cases, alternative models which can account for a high proportion of censored data should be considered. In this article, we present an extension of the Tobit model that incorporates a mixture of true undetectable observations and those values from a skew-normal distribution for an outcome with possible left-censoring and skewness, and covariates with substantial measurement error. To quantify the covariate process, we offer a flexible nonparametric mixed-effects model within the Tobit framework. A Bayesian modeling approach is used to assess the simultaneous impact of left-censoring, skewness and measurement error in covariates on inference. The proposed methods are illustrated using real data from an AIDS clinical study. PMID:23553914
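
    A minimal sketch of the censoring ingredient only (not the full Bayesian semiparametric mixture model of the record above): observations below the limit of detection contribute P(Y < LOD) to a Gaussian likelihood, while detected values contribute the usual density. Variable names and parameter values are invented for illustration.

```python
import numpy as np
from scipy.stats import norm

def left_censored_loglik(y, detected, mu, sigma, lod):
    """Tobit-type log-likelihood: 'detected' marks values measured above the LOD."""
    ll_observed = norm.logpdf(y[detected], mu, sigma).sum()
    ll_censored = (~detected).sum() * norm.logcdf(lod, mu, sigma)
    return ll_observed + ll_censored

rng = np.random.default_rng(0)
latent = rng.normal(2.0, 1.0, 500)      # e.g. log viral load
lod = 1.0
detected = latent >= lod
print(left_censored_loglik(latent, detected, 2.0, 1.0, lod))
```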

  6. Degradation data analysis based on a generalized Wiener process subject to measurement error

    Science.gov (United States)

    Li, Junxing; Wang, Zhihua; Zhang, Yongbo; Fu, Huimin; Liu, Chengrui; Krishnaswamy, Sridhar

    2017-09-01

    Wiener processes have received considerable attention in degradation modeling over the last two decades. In this paper, we propose a generalized Wiener process degradation model that takes unit-to-unit variation, time-correlated structure and measurement error into consideration simultaneously. The constructed methodology subsumes a series of models studied in the literature as limiting cases. A simple method is given to determine the transformed time scale forms of the Wiener process degradation model. Model parameters can then be estimated by maximum likelihood estimation (MLE). The cumulative distribution function (CDF) and the probability density function (PDF) of the Wiener process with measurement errors are given based on the concept of the first hitting time (FHT). The percentiles of performance degradation (PD) and failure time distribution (FTD) are also obtained. Finally, a comprehensive simulation study is conducted to demonstrate the necessity of incorporating measurement errors in the degradation model and the efficiency of the proposed model. Two illustrative real applications involving the degradation of carbon-film resistors and the wear of sliding metal are given. The comparative results show that the constructed approach yields reasonable results with enhanced inference precision.
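
    A minimal simulation sketch of the kind of model described above, assuming a power-law transformed time scale, unit-specific drift, and additive Gaussian measurement error (parameter values are illustrative, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(2)
t = np.linspace(0, 100, 201)                  # inspection times
q, sigma_B, sigma_eps = 1.2, 0.3, 0.5         # time-scale power, diffusion, meas. error
mu = rng.normal(0.05, 0.01)                   # unit-specific drift (unit-to-unit variation)

lam = t**q                                    # transformed time scale Lambda(t) = t^q
dB = rng.normal(0.0, np.sqrt(np.diff(lam)))   # Brownian increments on Lambda(t)
true_path = mu * lam + sigma_B * np.concatenate([[0.0], np.cumsum(dB)])
observed = true_path + rng.normal(0.0, sigma_eps, t.size)   # adds measurement error

threshold = 10.0
crossed = true_path >= threshold
print("first passage of the true degradation path:",
      t[np.argmax(crossed)] if crossed.any() else "threshold not reached")
```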

  7. Interrogating cell division errors using random and chromosome-specific missegregation approaches.

    Science.gov (United States)

    Ly, Peter; Cleveland, Don W

    2017-07-03

    Accurate segregation of the duplicated genome in mitosis is essential for maintaining genetic stability. Errors in this process can cause numerical and/or structural chromosome abnormalities - hallmark genomic features commonly associated with both tumorigenesis and developmental disorders. A cell-based approach was recently developed permitting inducible missegregation of the human Y chromosome by selectively disrupting kinetochore assembly onto the Y centromere. Although this strategy initially requires several steps of genetic manipulation, it is easy to use, highly efficient and specific for the Y without affecting the autosomes or the X, and does not require cell cycle synchronization or mitotic perturbation. Here we describe currently available tools for studying chromosome segregation errors, aneuploidy, and micronuclei, as well as discuss how the Y-specific missegregation system has been used to elucidate how chromosomal micronucleation can trigger a class of extensive rearrangements termed chromothripsis. The combinatorial use of these different tools will allow unresolved aspects of cell division defects and chromosomal instability to be experimentally explored.

  8. Optics measurement algorithms and error analysis for the proton energy frontier

    CERN Document Server

    Langner, A

    2015-01-01

    Optics measurement algorithms have been improved in preparation for the commissioning of the LHC at higher energy, i.e., with an increased damage potential. Due to machine protection considerations, the higher energy sets tighter limits on the maximum excitation amplitude and the total beam charge, reducing the signal-to-noise ratio of optics measurements. Furthermore, the precision in 2012 (4 TeV) was insufficient to understand beam size measurements and determine interaction point (IP) β-functions (β). A new, more sophisticated algorithm has been developed which takes into account both the statistical and systematic errors involved in this measurement. This makes it possible to combine more beam position monitor measurements for deriving the optical parameters and is shown to significantly improve the accuracy and precision. Measurements from the 2012 run have been reanalyzed and, due to the improved algorithms, result in a significantly higher precision of the derived optical parameters and decreased...

  9. Avoidance of large biases and large random errors in the assessment of moderate treatment effects: the need for systematic overviews.

    Science.gov (United States)

    Collins, R; Gray, R; Godwin, J; Peto, R

    1987-01-01

    In order to avoid selective biases and to minimize random errors, inference about the effects of treatment on serious endpoints needs to be based not on one, or a few, of the available trial results, but on a systematic overview of the totality of the evidence from all the relevant unconfounded randomized trials. But, only where coverage of all, or nearly all, randomized patients in all relevant trials (or a reasonably unbiased sample of such trials) can be assured, is a systematic overview of trials reasonably trustworthy, for then any selective biases are likely to be small in comparison with any moderate effects of treatment. Checks for the existence of such biases can best be conducted if reasonably detailed data are available from each trial. Future trials should take into account the results of any relevant overviews in their design, and should plan to obtain sufficient numbers of events to contribute substantially to such overviews. In many cases, this implies the need for randomized trials that are much larger than is currently standard.

  10. Optics measurement algorithms and error analysis for the proton energy frontier

    Directory of Open Access Journals (Sweden)

    A. Langner

    2015-03-01

    Full Text Available Optics measurement algorithms have been improved in preparation for the commissioning of the LHC at higher energy, i.e., with an increased damage potential. Due to machine protection considerations, the higher energy sets tighter limits on the maximum excitation amplitude and the total beam charge, reducing the signal-to-noise ratio of optics measurements. Furthermore, the precision in 2012 (4 TeV) was insufficient to understand beam size measurements and determine interaction point (IP) β-functions (β^{*}). A new, more sophisticated algorithm has been developed which takes into account both the statistical and systematic errors involved in this measurement. This makes it possible to combine more beam position monitor measurements for deriving the optical parameters and is shown to significantly improve the accuracy and precision. Measurements from the 2012 run have been reanalyzed and, due to the improved algorithms, result in a significantly higher precision of the derived optical parameters, decreasing the average error bars by a factor of three to four. This allowed the calculation of β^{*} values and proved fundamental to the understanding of emittance evolution during the energy ramp.

  11. PRECISION MEASUREMENTS OF THE CLUSTER RED SEQUENCE USING AN ERROR-CORRECTED GAUSSIAN MIXTURE MODEL

    Energy Technology Data Exchange (ETDEWEB)

    Hao, J.; Sheldon, E.

    2009-08-14

    The red sequence is an important feature of galaxy clusters and plays a crucial role in optical cluster detection. Measurement of the slope and scatter of the red sequence are affected both by selection of red sequence galaxies and measurement errors. In this paper, we describe a new error-corrected Gaussian Mixture Model for red sequence galaxy identification. Using this technique, we can remove the effects of measurement error and extract unbiased information about the intrinsic properties of the red sequence. We use this method to select red sequence galaxies in each of the 13,823 clusters in the maxBCG catalog, and measure the red sequence ridgeline location and scatter of each. These measurements provide precise constraints on the variation of the average red galaxy populations in the observed frame with redshift. We find that the scatter of the red sequence ridgeline increases mildly with redshift, and that the slope decreases with redshift. We also observe that the slope does not strongly depend on cluster richness. Using similar methods, we show that this behavior is mirrored in a spectroscopic sample of field galaxies, further emphasizing that ridgeline properties are independent of environment. These precise measurements serve as an important observational check on simulations and mock galaxy catalogs. The observed trends in the slope and scatter of the red sequence ridgeline with redshift are clues to possible intrinsic evolution of the cluster red sequence itself. Most importantly, the methods presented in this work lay the groundwork for further improvements in optically based cluster cosmology.
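
    The following is only a simplified, moment-style illustration of the idea of removing measurement error from the red-sequence scatter; the paper's error-corrected Gaussian Mixture Model folds the per-galaxy errors directly into the likelihood. All numbers below are synthetic.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(3)
intrinsic_scatter = 0.05
err = rng.uniform(0.02, 0.08, 600)                                 # per-galaxy colour errors
true = np.concatenate([rng.normal(1.0, intrinsic_scatter, 400),    # red sequence
                       rng.normal(0.6, 0.15, 200)])                # blue cloud
observed = true + rng.normal(0.0, err)

gmm = GaussianMixture(n_components=2, random_state=0).fit(observed.reshape(-1, 1))
red = int(np.argmax(gmm.means_.ravel()))                           # the redder component
sigma_obs = float(np.sqrt(gmm.covariances_.ravel()[red]))
sigma_intrinsic = np.sqrt(max(sigma_obs**2 - np.mean(err**2), 0.0))
print(f"observed scatter {sigma_obs:.3f}, error-corrected scatter {sigma_intrinsic:.3f}")
```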

  12. Precision Measurements of the Cluster Red Sequence using an Error Corrected Gaussian Mixture Model

    Energy Technology Data Exchange (ETDEWEB)

    Hao, Jiangang; /Fermilab /Michigan U.; Koester, Benjamin P.; /Chicago U.; Mckay, Timothy A.; /Michigan U.; Rykoff, Eli S.; /UC, Santa Barbara; Rozo, Eduardo; /Ohio State U.; Evrard, August; /Michigan U.; Annis, James; /Fermilab; Becker, Matthew; /Chicago U.; Busha, Michael; /KIPAC, Menlo Park /SLAC; Gerdes, David; /Michigan U.; Johnston, David E.; /Northwestern U. /Brookhaven

    2009-07-01

    The red sequence is an important feature of galaxy clusters and plays a crucial role in optical cluster detection. Measurement of the slope and scatter of the red sequence are affected both by selection of red sequence galaxies and measurement errors. In this paper, we describe a new error corrected Gaussian Mixture Model for red sequence galaxy identification. Using this technique, we can remove the effects of measurement error and extract unbiased information about the intrinsic properties of the red sequence. We use this method to select red sequence galaxies in each of the 13,823 clusters in the maxBCG catalog, and measure the red sequence ridgeline location and scatter of each. These measurements provide precise constraints on the variation of the average red galaxy populations in the observed frame with redshift. We find that the scatter of the red sequence ridgeline increases mildly with redshift, and that the slope decreases with redshift. We also observe that the slope does not strongly depend on cluster richness. Using similar methods, we show that this behavior is mirrored in a spectroscopic sample of field galaxies, further emphasizing that ridgeline properties are independent of environment. These precise measurements serve as an important observational check on simulations and mock galaxy catalogs. The observed trends in the slope and scatter of the red sequence ridgeline with redshift are clues to possible intrinsic evolution of the cluster red-sequence itself. Most importantly, the methods presented in this work lay the groundwork for further improvements in optically-based cluster cosmology.

  13. Analysis of liquid medication dose errors made by patients and caregivers using alternative measuring devices.

    Science.gov (United States)

    Ryu, Gyeong Suk; Lee, Yu Jeung

    2012-01-01

    Patients use several types of devices to measure liquid medication. Using a criterion ranging from a 10% to 40% variation from a target 5 mL for a teaspoon dose, previous studies have found that a considerable proportion of patients or caregivers make errors when dosing liquid medication with measuring devices. To determine the rate and magnitude of liquid medication dose errors that occur with patient/caregiver use of various measuring devices in a community pharmacy. Liquid medication measurements by patients or caregivers were observed in a convenience sample of community pharmacy patrons in Korea during a 2-week period in March 2011. Participants included all patients or caregivers (N = 300) who came to the pharmacy to buy over-the-counter liquid medication or to have a liquid medication prescription filled during the study period. The participants were instructed by an investigator who was also a pharmacist to select their preferred measuring devices from 6 alternatives (etched-calibration dosing cup, printed-calibration dosing cup, dosing spoon, syringe, dispensing bottle, or spoon with a bottle adapter) and measure a 5 mL dose of Coben (chlorpheniramine maleate/phenylephrine HCl, Daewoo Pharm. Co., Ltd) syrup using the device of their choice. The investigator used an ISOLAB graduated cylinder (Germany, blue grad, 10 mL) to measure the amount of syrup dispensed by the study participants. Participant characteristics were recorded including gender, age, education level, and relationship to the person for whom the medication was intended. Of the 300 participants, 257 (85.7%) were female; 286 (95.3%) had at least a high school education; and 282 (94.0%) were caregivers (parent or grandparent) for the patient. The mean (SD) measured dose was 4.949 (0.378) mL for the 300 participants. In analysis of variance of the 6 measuring devices, the greatest difference from the 5 mL target was a mean 5.552 mL for 17 subjects who used the regular (etched) dosing cup and 4

  14. Laser homodyne straightness interferometer with simultaneous measurement of six degrees of freedom motion errors for precision linear stage metrology.

    Science.gov (United States)

    Lou, Yingtian; Yan, Liping; Chen, Benyong; Zhang, Shihua

    2017-03-20

    A laser homodyne straightness interferometer with simultaneous measurement of six degrees of freedom motion errors is proposed for precision linear stage metrology. In this interferometer, the vertical straightness error and its position are measured by interference fringe counting, the yaw and pitch errors are obtained by measuring the spacing changes of interference fringe and the horizontal straightness and roll errors are determined by laser collimation. The merit of this interferometer is that four degrees of freedom motion errors are obtained by using laser interferometry with high accuracy. The optical configuration of the proposed interferometer is designed. The principle of the simultaneous measurement of six degrees of freedom errors including yaw, pitch, roll, two straightness errors and straightness error's position of measured linear stage is depicted in detail, and the compensation of crosstalk effects on straightness error and its position measurements is presented. At last, an experimental setup is constructed and several experiments are performed to demonstrate the feasibility of the proposed interferometer and the compensation method.

  15. Instrumental variables vs. grouping approach for reducing bias due to measurement error.

    Science.gov (United States)

    Batistatou, Evridiki; McNamee, Roseanne

    2008-01-01

    Attenuation of the exposure-response relationship due to exposure measurement error is often encountered in epidemiology. Given that error cannot be totally eliminated, bias correction methods of analysis are needed. Many methods require more than one exposure measurement per person to be made, but the 'group mean OLS method,' in which subjects are grouped into several a priori defined groups followed by ordinary least squares (OLS) regression on the group means, can be applied with one measurement. An alternative approach is to use an instrumental variable (IV) method in which both the single error-prone measure and an IV are used in IV analysis. In this paper we show that the 'group mean OLS' estimator is equal to an IV estimator with the group mean used as IV, but that the variance estimators for the two methods are different. We derive a simple expression for the bias in the common estimator which is a simple function of group size, reliability and contrast of exposure between groups, and show that the bias can be very small when group size is large. We compare this method with a new proposal (group mean ranking method), also applicable with a single exposure measurement, in which the IV is the rank of the group means. When there are two independent exposure measurements per subject, we propose a new IV method (EVROS IV) and compare it with Carroll and Stefanski's (CS IV) proposal in which the second measure is used as an IV; the new IV estimator combines aspects of the 'group mean' and 'CS' strategies. All methods are evaluated in terms of bias, precision and root mean square error via simulations and a dataset from occupational epidemiology. The 'group mean ranking method' does not offer much improvement over the 'group mean method.' Compared with the 'CS' method, the 'EVROS' method is less affected by low reliability of exposure. We conclude that the group IV methods we propose may provide a useful way to handle mismeasured exposures in epidemiology with or
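
    A small simulation (not the paper's data) illustrating the central point: classical measurement error attenuates the naive OLS slope, while regressing on a priori group means of the error-prone exposure, which the record above identifies as an instrumental-variable estimator, largely removes the attenuation when groups are large and well separated.

```python
import numpy as np

rng = np.random.default_rng(4)
n_groups, group_size, beta = 20, 200, 0.5
group = np.repeat(np.arange(n_groups), group_size)
x_true = rng.normal(0, 2, n_groups)[group] + rng.normal(0, 1, group.size)
w = x_true + rng.normal(0, 1, group.size)          # single error-prone measurement
y = beta * x_true + rng.normal(0, 1, group.size)

def ols_slope(x, y):
    return np.cov(x, y)[0, 1] / np.var(x, ddof=1)

w_group_mean = np.array([w[group == g].mean() for g in range(n_groups)])[group]
print("naive OLS on the error-prone exposure:", round(ols_slope(w, y), 3))
print("OLS on a priori group means          :", round(ols_slope(w_group_mean, y), 3))
```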

  16. Randomized clinical trials in dentistry: Risks of bias, risks of random errors, reporting quality, and methodologic quality over the years 1955-2013.

    Science.gov (United States)

    Saltaji, Humam; Armijo-Olivo, Susan; Cummings, Greta G; Amin, Maryam; Flores-Mir, Carlos

    2017-01-01

    To examine the risks of bias, risks of random errors, reporting quality, and methodological quality of randomized clinical trials of oral health interventions and the development of these aspects over time. We included 540 randomized clinical trials from 64 selected systematic reviews. We extracted, in duplicate, details from each of the selected randomized clinical trials with respect to publication and trial characteristics, reporting and methodologic characteristics, and Cochrane risk of bias domains. We analyzed data using logistic regression and Chi-square statistics. Sequence generation was assessed to be inadequate (at unclear or high risk of bias) in 68% (n = 367) of the trials, while allocation concealment was inadequate in the majority of trials (n = 464; 85.9%). Blinding of participants and blinding of the outcome assessment were judged to be inadequate in 28.5% (n = 154) and 40.5% (n = 219) of the trials, respectively. A sample size calculation before the initiation of the study was not performed/reported in 79.1% (n = 427) of the trials, while the sample size was assessed as adequate in only 17.6% (n = 95) of the trials. Two thirds of the trials were not described as double blinded (n = 358; 66.3%), while the method of blinding was appropriate in 53% (n = 286) of the trials. We identified a significant decrease over time (1955-2013) in the proportion of trials assessed as having inadequately addressed methodological quality items (P < 0.05) in 30 out of the 40 quality criteria, or as being inadequate (at high or unclear risk of bias) in five domains of the Cochrane risk of bias tool: sequence generation, allocation concealment, incomplete outcome data, other sources of bias, and overall risk of bias. The risks of bias, risks of random errors, reporting quality, and methodological quality of randomized clinical trials of oral health interventions have improved over time; however, further efforts that contribute to the development of more stringent

  17. The effect of clock, media, and station location errors on Doppler measurement accuracy

    Science.gov (United States)

    Miller, J. K.

    1993-01-01

    Doppler tracking by the Deep Space Network (DSN) is the primary radio metric data type used by navigation to determine the orbit of a spacecraft. The accuracy normally attributed to orbits determined exclusively with Doppler data is about 0.5 microradians in geocentric angle. Recently, the Doppler measurement system has evolved to a high degree of precision primarily because of tracking at X-band frequencies (7.2 to 8.5 GHz). However, the orbit determination system has not been able to fully utilize this improved measurement accuracy because of calibration errors associated with transmission media, the location of tracking stations on the Earth's surface, the orientation of the Earth as an observing platform, and timekeeping. With the introduction of Global Positioning System (GPS) data, it may be possible to remove a significant error associated with the troposphere. In this article, the effect of various calibration errors associated with transmission media, Earth platform parameters, and clocks are examined. With the introduction of GPS calibrations, it is predicted that a Doppler tracking accuracy of 0.05 microradians is achievable.

  18. Obesity increases precision errors in dual-energy X-ray absorptiometry measurements.

    Science.gov (United States)

    Knapp, Karen M; Welsman, Joanne R; Hopkins, Susan J; Fogelman, Ignac; Blake, Glen M

    2012-01-01

    The precision errors of dual-energy X-ray absorptiometry (DXA) measurements are important for monitoring osteoporosis. This study investigated the effect of body mass index (BMI) on precision errors for lumbar spine (LS), femoral neck (NOF), total hip (TH), and total body (TB) bone mineral density using the GE Lunar Prodigy. One hundred two women with BMIs ranging from 18.5 to 45.9 kg/m² were recruited. Participants had duplicate DXA scans of the LS, left hip, and TB with repositioning between scans. Participants were divided into 3 groups based on their BMI and the percentage coefficient of variation (%CV) calculated for each group. The %CVs for the normal, overweight, and obese (>30 kg/m²; n=28) BMI groups, respectively, were LS BMD: 0.99%, 1.30%, and 1.68%; NOF BMD: 1.32%, 1.37%, and 2.00%; TH BMD: 0.85%, 0.88%, and 1.06%; TB BMD: 0.66%, 0.73%, and 0.91%. Statistically significant differences in precision error between the normal and obese groups were found for LS (p=0.0006), NOF (p=0.005), and TB BMD (p=0.025). These results suggest that serial measurements in obese subjects should be treated with caution because the least significant change may be larger than anticipated. Copyright © 2012 The International Society for Clinical Densitometry. Published by Elsevier Inc. All rights reserved.

  19. Measurement error: Implications for diagnosis and discrepancy models of developmental dyslexia.

    Science.gov (United States)

    Cotton, Sue M; Crewther, David P; Crewther, Sheila G

    2005-08-01

    The diagnosis of developmental dyslexia (DD) is reliant on a discrepancy between intellectual functioning and reading achievement. Discrepancy-based formulae have frequently been employed to establish the significance of the difference between 'intelligence' and 'actual' reading achievement. These formulae, however, often fail to take into consideration test reliability and the error associated with a single test score. This paper provides an illustration of the potential effects that test reliability and measurement error can have on the diagnosis of dyslexia, with particular reference to discrepancy models. The roles of reliability and standard error of measurement (SEM) in classic test theory are also briefly reviewed. This is followed by illustrations of how SEM and test reliability can aid with the interpretation of a simple discrepancy-based formula of DD. It is proposed that a lack of consideration of test theory in the use of discrepancy-based models of DD can lead to misdiagnosis (both false positives and false negatives). Further, misdiagnosis in research samples affects reproducibility and generalizability of findings. This in turn, may explain current inconsistencies in research on the perceptual, sensory, and motor correlates of dyslexia.
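
    A worked example of the role of the standard error of measurement (SEM) in a discrepancy decision, assuming independent errors on the two tests; the scores and reliabilities below are illustrative.

```python
import numpy as np

sd = 15.0
r_iq, r_reading = 0.95, 0.90                 # test reliabilities
sem_iq = sd * np.sqrt(1 - r_iq)              # SEM = SD * sqrt(1 - reliability)
sem_reading = sd * np.sqrt(1 - r_reading)
se_diff = np.hypot(sem_iq, sem_reading)      # SE of the difference, assuming independent errors

iq, reading = 110.0, 92.0
discrepancy = iq - reading
ci_half_width = 1.96 * se_diff
print(f"discrepancy = {discrepancy:.1f}, 95% CI half-width = {ci_half_width:.1f}")
print("exceeds measurement error" if abs(discrepancy) > ci_half_width
      else "not distinguishable from measurement error")
```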

  20. Combined influence of CT random noise and HU-RSP calibration curve nonlinearities on proton range systematic errors

    Science.gov (United States)

    Brousmiche, S.; Souris, K.; Orban de Xivry, J.; Lee, J. A.; Macq, B.; Seco, J.

    2017-11-01

    Proton range random and systematic uncertainties are the major factors undermining the advantages of proton therapy, namely, a sharp dose falloff and a better dose conformality for lower doses in normal tissues. The influence of CT artifacts such as beam hardening or scatter can easily be understood and estimated due to their large-scale effects on the CT image, like cupping and streaks. In comparison, the effects of weakly-correlated stochastic noise are more insidious and less attention is drawn on them partly due to the common belief that they only contribute to proton range uncertainties and not to systematic errors thanks to some averaging effects. A new source of systematic errors on the range and relative stopping powers (RSP) has been highlighted and proved not to be negligible compared to the 3.5% uncertainty reference value used for safety margin design. Hence, we demonstrate that the angular points in the HU-to-RSP calibration curve are an intrinsic source of proton range systematic error for typical levels of zero-mean stochastic CT noise. Systematic errors on RSP of up to 1% have been computed for these levels. We also show that the range uncertainty does not generally vary linearly with the noise standard deviation. We define a noise-dependent effective calibration curve that better describes, for a given material, the RSP value that is actually used. The statistics of the RSP and the range continuous slowing down approximation (CSDA) have been analytically derived for the general case of a calibration curve obtained by the stoichiometric calibration procedure. These models have been validated against actual CSDA simulations for homogeneous and heterogeneous synthetical objects as well as on actual patient CTs for prostate and head-and-neck treatment planning situations.
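
    The mechanism described above can be reproduced with a short Monte Carlo: zero-mean noise pushed through a piecewise-linear HU-to-RSP curve produces a systematic RSP bias near the curve's angular points, because the curve is locally convex or concave there. The calibration values below are illustrative, not a clinical calibration.

```python
import numpy as np

hu_nodes = np.array([-1000.0, 0.0, 1000.0, 2000.0])
rsp_nodes = np.array([0.0, 1.0, 1.5, 2.2])       # slope changes at HU = 0 and HU = 1000

def hu_to_rsp(hu):
    # piecewise-linear calibration curve with "angular points" at the nodes
    return np.interp(hu, hu_nodes, rsp_nodes)

rng = np.random.default_rng(5)
sigma_noise = 30.0                               # zero-mean stochastic CT noise (HU)
for hu in (-500.0, 0.0, 500.0, 1000.0):
    noisy = hu + rng.normal(0.0, sigma_noise, 200_000)
    bias = hu_to_rsp(noisy).mean() - hu_to_rsp(hu)
    print(f"HU = {hu:7.1f}   systematic RSP bias = {bias:+.4f}")
```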

  1. Effective reduction of the phase error for gamma nonlinearity in phase measuring profilometry by BLPF

    Science.gov (United States)

    Zhao, Xiaxia; Mo, Rong; Chang, Zhiyong; Lu, Jin

    2018-01-01

    In phase measuring profilometry, the gamma nonlinearity of the system makes the captured fringe patterns non-sinusoidal, which introduces a non-negligible error into the computed phase and seriously degrades the 3D reconstruction accuracy. Based on a detailed study of existing gamma-nonlinearity compensation and phase-error reduction techniques, a method based on low-pass frequency-domain filtering is proposed. It filters out the harmonic components above the first order induced by the gamma nonlinearity while retaining as much power as possible in the power spectrum, thereby improving the sinusoidal waveform of the fringe images. Compared to other compensation methods, the proposed method does not require a complex mathematical model. Simulations and experiments confirm that the higher-order harmonic components are significantly reduced, the phase precision is effectively improved, and a given accuracy requirement can be met.
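
    A one-dimensional sketch of the filtering idea, assuming a spatial carrier fringe and a standard four-step phase-shifting demodulation (neither of which is claimed to match the paper's setup): gamma distortion creates harmonics above the carrier, and zeroing them in the Fourier domain before demodulation suppresses the resulting phase ripple.

```python
import numpy as np

n, f0, gamma = 1024, 1 / 64, 2.2            # pixels, carrier frequency, system gamma
x = np.arange(n)
phi_true = 2 * np.pi * f0 * x
shifts = (0.0, np.pi / 2, np.pi, 3 * np.pi / 2)

def capture(delta):
    # the camera records a gamma-distorted (non-sinusoidal) fringe
    return (0.5 + 0.5 * np.cos(phi_true + delta)) ** gamma

def lowpass(signal, cutoff):
    # keep only DC and the fundamental in the spatial-frequency domain
    spectrum = np.fft.rfft(signal)
    spectrum[np.fft.rfftfreq(signal.size) > cutoff] = 0.0
    return np.fft.irfft(spectrum, n=signal.size)

def demodulate(frames):
    # standard four-step phase-shifting phase retrieval
    i1, i2, i3, i4 = frames
    return np.arctan2(i4 - i2, i1 - i3)

raw = [capture(d) for d in shifts]
filtered = [lowpass(frame, 1.5 * f0) for frame in raw]
for label, frames in (("raw", raw), ("low-pass filtered", filtered)):
    err = np.angle(np.exp(1j * (demodulate(frames) - phi_true)))
    print(f"{label:18s} rms phase error = {np.sqrt(np.mean(err**2)):.4f} rad")
```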

  2. A proposed prototype for identifying and correcting sources of measurement error in classification systems.

    Science.gov (United States)

    McKenzie, D A

    1991-06-01

    Because many raters are generally involved in the implementation of a patient classification system, interrater reliability is always a concern in the development and use of such a system. In this article, a case example is used to demonstrate a prototype for identifying measurement error introduced at each step in the classification process (assessment, creating summary item responses, and use of these responses for categorization) and to illustrate how this identification may lead to error reduction strategies. The methods of analyses included percent agreement, Kappa, and visual inspection of contingency tables displaying interrater responses to assessment items, summary items, and the placement category. The extent to which raters followed instructions was analyzed by comparing their responses with computer-generated responses across the classification steps. In addition, raters were interviewed regarding their use of the system.
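
    For reference, the two agreement statistics mentioned above can be computed from a simple contingency table of two raters' category assignments (the counts below are invented):

```python
import numpy as np

table = np.array([[40, 5],      # rows: rater A's category, columns: rater B's category
                  [10, 45]])
n = table.sum()
p_observed = np.trace(table) / n                                 # percent agreement
p_chance = (table.sum(axis=0) * table.sum(axis=1)).sum() / n**2  # chance agreement
kappa = (p_observed - p_chance) / (1 - p_chance)
print(f"percent agreement = {p_observed:.2f}, Cohen's kappa = {kappa:.2f}")
```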

  3. Indirect measurement of machine tool motion axis error with single laser tracker

    Science.gov (United States)

    Wu, Zhaoyong; Li, Liangliang; Du, Zhengchun

    2015-02-01

    For high-precision machining, convenient and accurate detection of machine tool motion errors is important. Among common detection methods such as the ball-bar method, the laser tracker approach has received the most attention. As a high-accuracy measurement device, a laser tracker is capable of long-distance and dynamic measurement, which adds considerable flexibility to the measurement process. However, existing methods are not fully satisfactory in terms of measurement cost, operability, or applicability. A currently plausible method is the single-station, time-sharing method, but it needs a large working area all around the machine tool and is therefore unsuitable for machine tools surrounded by a protective cover. In this paper, a novel and convenient positioning error measurement approach using a single laser tracker is proposed, together with two corresponding mathematical models: a laser-tracker base-point-coordinate model and a target-mirror-coordinates model. An auxiliary apparatus on which the target mirrors are placed is also designed, and sensitivity analysis and Monte Carlo simulation are conducted to optimize its dimensions. Based on the proposed method, an experiment using a single API TRACKER 3 assisted by the auxiliary apparatus is carried out, and a verification experiment using a traditional RENISHAW XL-80 interferometer is conducted under the same conditions for comparison. Both results demonstrate a large increase in the Y-axis positioning error of the machine tool. Theoretical and experimental studies together verify the feasibility of this method, which offers more convenient operation and wider applicability to various kinds of machine tools.

  4. Sensitivity of the diamagnetic sensor measurements of ITER to error sources and their compensation

    Energy Technology Data Exchange (ETDEWEB)

    Fresa, R., E-mail: raffaele.fresa@unibas.it [CREATE/ENEA/Euratom Association, Scuola di Ingegneria, Università della Basilicata, Potenza (Italy); Albanese, R. [CREATE/ENEA/Euratom Association, DIETI, Università di Napoli Federico II, Naples (Italy); Arshad, S. [Fusion for Energy (F4E), Barcelona (Spain); Coccorese, V.; Magistris, M. de; Minucci, S.; Pironti, A.; Quercia, A.; Rubinacci, G. [CREATE/ENEA/Euratom Association, DIETI, Università di Napoli Federico II, Naples (Italy); Vayakis, G. [ITER Organization, Route de Vinon sur Verdon, 13115 Saint Paul Lez Durance (France); Villone, F. [CREATE/ENEA/Euratom Association, Università di Cassino, Cassino (Italy)

    2015-11-15

    Highlights: • The paper discusses the sensitivity analysis for the diamagnetic flux measurement system of the ITER tokamak. • Compensation formulas have been tested to compensate for manufacturing errors, both of the sources and of the sensors. • The poloidal beta has been estimated by evaluating the plasma's diamagnetism. - Abstract: The present paper is focused on the sensitivity analysis of the diamagnetic sensor measurements of ITER against several kinds of error sources, with the aim of compensating them to improve the accuracy of the evaluation of the energy confinement time and poloidal beta via the Shafranov formula. The virtual values of the measurements at the diamagnetic sensors were simulated by the COMPFLUX code, a numerical code able to compute the field and flux values generated at a prescribed set of output points by massive conductors and generalized filamentary currents (with an arbitrary 3D shape and a negligible cross section) in the presence of magnetic materials. The major issue to be addressed has been determining the possible deformations of the sensors and of the electromagnetic sources. The analysis has been carried out for the following cases: deformed sensors and ideal EM (electromagnetic) sources; ideal sensors and perturbed EM sources; and both sensors and EM sources perturbed. As regards the compensation, several formulas have been proposed, based on the measurements carried out by the compensation coils; they essentially use the measured flux density to compensate for the effects of the poloidal eddy currents induced in the conducting structures surrounding the plasma. The static deviation due to sensor manufacturing and positioning errors has been evaluated, and most of the pollution of the diamagnetic flux has been compensated, meeting the prescribed specifications and tolerances.

  5. Synchrotron radiation measurement of multiphase fluid saturations in porous media: Experimental technique and error analysis

    Science.gov (United States)

    Tuck, David M.; Bierck, Barnes R.; Jaffé, Peter R.

    1998-06-01

    Multiphase flow in porous media is an important research topic. In situ, nondestructive experimental methods for studying multiphase flow are important for improving our understanding and the theory. Rapid changes in fluid saturation, characteristic of immiscible displacement, are difficult to measure accurately using gamma rays due to practical restrictions on source strength. Our objective is to describe a synchrotron radiation technique for rapid, nondestructive saturation measurements of multiple fluids in porous media, and to present a precision and accuracy analysis of the technique. Synchrotron radiation provides a high intensity, inherently collimated photon beam of tunable energy which can yield accurate measurements of fluid saturation in just one second. Measurements were obtained with precision of ±0.01 or better for tetrachloroethylene (PCE) in a 2.5 cm thick glass-bead porous medium using a counting time of 1 s. The normal distribution was shown to provide acceptable confidence limits for PCE saturation changes. Sources of error include heat load on the monochromator, periodic movement of the source beam, and errors in stepping-motor positioning system. Hypodermic needles pushed into the medium to inject PCE changed porosity in a region approximately ±1 mm of the injection point. Improved mass balance between the known and measured PCE injection volumes was obtained when appropriate corrections were applied to calibration values near the injection point.

  6. Random and systematic errors in case–control studies calculating the injury risk of driving under the influence of psychoactive substances

    DEFF Research Database (Denmark)

    Houwing, Sjoerd; Hagenzieker, Marjan; Mathijssen, René P.M.

    2013-01-01

    injury in car crashes. The calculated odds ratios in these studies showed large variations, despite the use of uniform guidelines for the study designs. The main objective of the present article is to provide insight into the presence of random and systematic errors in the six DRUID case-control studies...... and cell counts were the most frequently observed errors in the six DRUID case-control studies. Therefore, it is recommended that epidemiological studies that assess the risk of psychoactive substances in traffic pay specific attention to avoid these potential sources of random and systematic errors...

  7. Integration of rain gauge measurement errors with the overall rainfall uncertainty estimation using kriging methods

    Science.gov (United States)

    Cecinati, Francesca; Moreno Ródenas, Antonio Manuel; Rico-Ramirez, Miguel Angel; ten Veldhuis, Marie-claire; Han, Dawei

    2016-04-01

    In many research studies rain gauges are used as a reference point measurement for rainfall, because they can reach very good accuracy, especially compared to radar or microwave links, and their use is very widespread. In some applications rain gauge uncertainty is assumed to be small enough to be neglected. This can be done when rain gauges are accurate and their data is correctly managed. Unfortunately, in many operational networks the importance of accurate rainfall data and of data quality control can be underestimated; budget and best practice knowledge can be limiting factors in a correct rain gauge network management. In these cases, the accuracy of rain gauges can drastically drop and the uncertainty associated with the measurements cannot be neglected. This work proposes an approach based on three different kriging methods to integrate rain gauge measurement errors in the overall rainfall uncertainty estimation. In particular, rainfall products of different complexity are derived through 1) block kriging on a single rain gauge 2) ordinary kriging on a network of different rain gauges 3) kriging with external drift to integrate all the available rain gauges with radar rainfall information. The study area is the Eindhoven catchment, contributing to the river Dommel, in the southern part of the Netherlands. The area, 590 km2, is covered by high quality rain gauge measurements by the Royal Netherlands Meteorological Institute (KNMI), which has one rain gauge inside the study area and six around it, and by lower quality rain gauge measurements by the Dommel Water Board and by the Eindhoven Municipality (six rain gauges in total). The integration of the rain gauge measurement error is accomplished in all the cases increasing the nugget of the semivariogram proportionally to the estimated error. Using different semivariogram models for the different networks allows for the separate characterisation of higher and lower quality rain gauges. For the kriging with
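
    A minimal ordinary-kriging sketch, in covariance form, of the approach described above: each gauge's estimated measurement-error variance is added to the nugget on the diagonal, so lower-quality gauges receive smaller kriging weights. Locations, variogram parameters, and error variances are illustrative only.

```python
import numpy as np

xy = np.array([[0.0, 0.0], [5.0, 1.0], [2.0, 6.0], [8.0, 7.0]])   # gauge locations (km)
z = np.array([3.1, 2.4, 4.0, 1.8])                                # rainfall (mm)
err_var = np.array([0.0, 0.0, 0.3, 0.3])                          # per-gauge error variance
sill, corr_range, nugget = 1.0, 10.0, 0.05

def cov(h):
    return sill * np.exp(-h / corr_range)        # exponential covariance model

d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=-1)
K = cov(d) + np.diag(nugget + err_var)           # measurement error inflates the nugget
target = np.array([4.0, 4.0])
k0 = cov(np.linalg.norm(xy - target, axis=1))

# ordinary-kriging system with a Lagrange multiplier enforcing weights that sum to 1
A = np.block([[K, np.ones((4, 1))], [np.ones((1, 4)), np.zeros((1, 1))]])
w = np.linalg.solve(A, np.append(k0, 1.0))[:4]
print("kriging weights:", np.round(w, 3), " estimate (mm):", round(float(w @ z), 2))
```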

  8. Measurement errors when estimating the vertical jump height with flight time using photocell devices: the example of Optojump.

    Science.gov (United States)

    Attia, A; Dhahbi, W; Chaouachi, A; Padulo, J; Wong, D P; Chamari, K

    2017-03-01

    Common methods to estimate vertical jump height (VJH) are based on the measurements of flight time (FT) or vertical reaction force. This study aimed to assess the measurement errors when estimating the VJH with flight time using photocell devices in comparison with the gold standard jump height measured by a force plate (FP). The second purpose was to determine the intrinsic reliability of the Optojump photoelectric cells in estimating VJH. For this aim, 20 subjects (age: 22.50±1.24 years) performed maximal vertical jumps in three modalities in randomized order: the squat jump (SJ), counter-movement jump (CMJ), and CMJ with arm swing (CMJarm). Each trial was simultaneously recorded by the FP and Optojump devices. High intra-class correlation coefficients (ICCs) for validity (0.98-0.99) and low limits of agreement (less than 1.4 cm) were found; even a systematic difference in jump height was consistently observed between FT and double integration of force methods (-31% to -27%; p1.2). Intra-session reliability of Optojump was excellent, with ICCs ranging from 0.98 to 0.99, low coefficients of variation (3.98%), and low standard errors of measurement (0.8 cm). It was concluded that there was a high correlation between the two methods to estimate the vertical jump height, but the FT method cannot replace the gold standard, due to the large systematic bias. According to our results, the equations of each of the three jump modalities were presented in order to obtain a better estimation of the jump height.
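
    For reference, the flight-time estimate assumes take-off and landing occur in the same body configuration, giving h = g t_f^2 / 8 under constant gravity; the systematic offset relative to force-plate double integration reported above is why device- and jump-specific correction equations are proposed.

```python
G = 9.81  # m/s^2

def jump_height_from_flight_time(flight_time_s):
    # h = g * t_f^2 / 8: the body is in free fall for t_f/2 on the way down
    return G * flight_time_s**2 / 8.0

print(f"{jump_height_from_flight_time(0.50) * 100:.1f} cm for a 0.50 s flight time")
```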

  9. Measurement errors when estimating the vertical jump height with flight time using photocell devices: the example of Optojump

    Science.gov (United States)

    Attia, A; Chaouachi, A; Padulo, J; Wong, DP; Chamari, K

    2016-01-01

    Common methods to estimate vertical jump height (VJH) are based on the measurements of flight time (FT) or vertical reaction force. This study aimed to assess the measurement errors when estimating the VJH with flight time using photocell devices in comparison with the gold standard jump height measured by a force plate (FP). The second purpose was to determine the intrinsic reliability of the Optojump photoelectric cells in estimating VJH. For this aim, 20 subjects (age: 22.50±1.24 years) performed maximal vertical jumps in three modalities in randomized order: the squat jump (SJ), counter-movement jump (CMJ), and CMJ with arm swing (CMJarm). Each trial was simultaneously recorded by the FP and Optojump devices. High intra-class correlation coefficients (ICCs) for validity (0.98-0.99) and low limits of agreement (less than 1.4 cm) were found; even a systematic difference in jump height was consistently observed between FT and double integration of force methods (-31% to -27%; p1.2). Intra-session reliability of Optojump was excellent, with ICCs ranging from 0.98 to 0.99, low coefficients of variation (3.98%), and low standard errors of measurement (0.8 cm). It was concluded that there was a high correlation between the two methods to estimate the vertical jump height, but the FT method cannot replace the gold standard, due to the large systematic bias. According to our results, the equations of each of the three jump modalities were presented in order to obtain a better estimation of the jump height. PMID:28416900

  10. Inter-rater reliability and measurement error of sonographic muscle architecture assessments.

    Science.gov (United States)

    König, Niklas; Cassel, Michael; Intziegianni, Konstantina; Mayer, Frank

    2014-05-01

    Sonography of muscle architecture provides physicians and researchers with information about muscle function and muscle-related disorders. Inter-rater reliability is a crucial parameter in daily clinical routines. The aim of this study was to assess the inter-rater reliability of sonographic muscle architecture assessments and quantification of errors that arise from inconsistent probe positioning and image interpretation. The medial gastrocnemius muscle of 15 healthy participants was measured with sagittal B-mode ultrasound scans. The muscle thickness, fascicle length, superior pennation angle, and inferior pennation angle were assessed. The participants were examined by 2 investigators. A custom-made foam cast was used for standardized positioning of the probe. To analyze inter-rater reliability, the examinations of both raters were compared. The impact of probe positioning was assessed by comparison of foam cast and freehand scans. Error arising from picture interpretation was assessed by comparing the investigators' analyses of foam cast scans independently. Reliability was expressed as the intraclass correlation coefficient (ICC), inter-rater variability (IRV), Bland-Altman analysis (bias ± limits of agreement [LoA]), and standard error of measurement (SEM). Inter-rater reliability was good overall (ICC, 0.77-0.90; IRV, 9.0%-13.4%; bias ± LoA, 0.2 ± 0.2-1.7 ± 3.0). Superior and inferior pennation angles showed high systematic bias and LoA in all setups, ranging from 2.0° ± 2.2° to 3.4° ± 4.1°. The highest IRV was found for muscle thickness (13.4%). When the probe position was standardized, the SEM for muscle thickness decreased from 0.1 to 0.05 cm. Sonographic examination of muscle architecture of the medial gastrocnemius has good to high reliability. In contrast to pennation angle measurements, length measurements can be improved by standardization of the probe position.

  11. Reduction of truncation errors in partial spherical near-field antenna measurements

    DEFF Research Database (Denmark)

    Pivnenko, Sergey; Cano Facila, Francisco J.

    2010-01-01

    In this report, a new and effective method for reduction of truncation errors in partial spherical near-field (SNF) antenna measurements is proposed. This method is based on the Gerchberg-Papoulis algorithm used to extrapolate functions and it is able to extend the valid region of the far-field pattern calculated from a truncated SNF measurement up to the whole forward hemisphere. The method is useful when measuring electrically large antennas and the measurement over the whole sphere is very time consuming. Therefore, a solution is considered to take samples over a portion of the spherical surface and then to apply the above method to reconstruct the far-field pattern. The work described in this report was carried out within the external stay of Francisco J. Cano at the Technical University of Denmark (DTU) from September 6th to December 18th in 2010.
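
    A one-dimensional sketch of the Gerchberg-Papoulis iteration referred to above (the antenna application works with spherical-wave modes; this scalar example only illustrates the alternating projections): a band-limited signal known on part of its domain is extrapolated by repeatedly enforcing the band limit in the Fourier domain and the measured samples in the signal domain.

```python
import numpy as np

rng = np.random.default_rng(6)
n, bandwidth = 512, 6
spectrum = np.zeros(n // 2 + 1, dtype=complex)
spectrum[:bandwidth] = rng.normal(size=bandwidth) + 1j * rng.normal(size=bandwidth)
truth = np.fft.irfft(spectrum, n=n)            # band-limited "pattern"
known = np.zeros(n, dtype=bool)
known[32:480] = True                           # truncated measurement region

def gap_error(estimate):
    return np.abs(estimate - truth)[~known].max() / np.abs(truth).max()

estimate = np.where(known, truth, 0.0)
print(f"relative error in the gap before extrapolation: {gap_error(estimate):.2e}")
for _ in range(2000):
    s = np.fft.rfft(estimate)
    s[bandwidth:] = 0.0                        # enforce the band limit
    estimate = np.fft.irfft(s, n=n)
    estimate[known] = truth[known]             # re-impose the measured samples
print(f"relative error in the gap after extrapolation:  {gap_error(estimate):.2e}")
```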

  12. Measured and predicted root-mean-square errors in square and triangular antenna mesh facets

    Science.gov (United States)

    Fichter, W. B.

    1989-01-01

    Deflection shapes of square and equilateral triangular facets of two tricot-knit, gold plated molybdenum wire mesh antenna materials were measured and compared, on the basis of root mean square (rms) differences, with deflection shapes predicted by linear membrane theory, for several cases of biaxial mesh tension. The two mesh materials contained approximately 10 and 16 holes per linear inch, measured diagonally with respect to the course and wale directions. The deflection measurement system employed a non-contact eddy current proximity probe and an electromagnetic distance sensing probe in conjunction with a precision optical level. Despite experimental uncertainties, rms differences between measured and predicted deflection shapes suggest the following conclusions: that replacing flat antenna facets with facets conforming to parabolically curved structural members yields smaller rms surface error; that potential accuracy gains are greater for equilateral triangular facets than for square facets; and that linear membrane theory can be a useful tool in the design of tricot knit wire mesh antennas.

  13. Low-error and broadband microwave frequency measurement in a silicon chip

    CERN Document Server

    Pagani, Mattia; Zhang, Yanbing; Casas-Bedoya, Alvaro; Aalto, Timo; Harjanne, Mikko; Kapulainen, Markku; Eggleton, Benjamin J; Marpaung, David

    2015-01-01

    Instantaneous frequency measurement (IFM) of microwave signals is a fundamental functionality for applications ranging from electronic warfare to biomedical technology. Photonic techniques, and nonlinear optical interactions in particular, have the potential to broaden the frequency measurement range beyond the limits of electronic IFM systems. The key lies in efficiently harnessing optical mixing in an integrated nonlinear platform, with low losses. In this work, we exploit the low loss of a 35 cm long, thick silicon waveguide, to efficiently harness Kerr nonlinearity, and demonstrate the first on-chip four-wave mixing (FWM) based IFM system. We achieve a large 40 GHz measurement bandwidth and record-low measurement error. Finally, we discuss the future prospect of integrating the whole IFM system on a silicon chip to enable the first reconfigurable, broadband IFM receiver with low-latency.

  14. Preliminary Analysis of Effect of Random Segment Errors on Coronagraph Performance

    Science.gov (United States)

    Stahl, Mark T.; Shaklan, Stuart B.; Stahl, H. Philip

    2015-01-01

    "Are we alone in the Universe?" is probably the most compelling science question of our generation. To answer it requires a large aperture telescope with extreme wavefront stability. To image and characterize Earth-like planets requires the ability to block 10^10 of the host star's light with a 10^-11 stability. For an internal coronagraph, this requires correcting wavefront errors and keeping that correction stable to a few picometers rms for the duration of the science observation. This requirement places severe specifications upon the performance of the observatory, telescope and primary mirror. A key task of the AMTD project (initiated in FY12) is to define telescope level specifications traceable to science requirements and flow those specifications to the primary mirror. From a systems perspective, probably the most important question is: What is the telescope wavefront stability specification? Previously, we suggested this specification should be 10 picometers per 10 minutes; considered issues of how this specification relates to architecture, i.e. monolithic or segmented primary mirror; and asked whether it was better to have few or many segments. This paper reviews the 10 picometers per 10 minutes specification; provides analysis related to the application of this specification to segmented apertures; and suggests that a 3 or 4 ring segmented aperture is more sensitive to segment rigid body motion than an aperture with fewer or more segments.

  15. Preliminary analysis of effect of random segment errors on coronagraph performance

    Science.gov (United States)

    Stahl, Mark T.; Shaklan, Stuart B.; Stahl, H. Philip

    2015-09-01

    "Are we alone in the Universe?" is probably the most compelling science question of our generation. To answer it requires a large aperture telescope with extreme wavefront stability. To image and characterize Earth-like planets requires the ability to block 1010 of the host star's light with a 10-11 stability. For an internal coronagraph, this requires correcting wavefront errors and keeping that correction stable to a few picometers rms for the duration of the science observation. This requirement places severe specifications upon the performance of the observatory, telescope and primary mirror. A key task of the AMTD project (initiated in FY12) is to define telescope level specifications traceable to science requirements and flow those specifications to the primary mirror. From a systems perspective, probably the most important question is: What is the telescope wavefront stability specification? Previously, we suggested this specification should be 10 picometers per 10 minutes; considered issues of how this specification relates to architecture, i.e. monolithic or segmented primary mirror; and asked whether it was better to have few or many segments. This paper reviews the 10 picometers per 10 minutes specification; provides analysis related to the application of this specification to segmented apertures; and suggests that a 3 or 4 ring segmented aperture is more sensitive to segment rigid body motion that an aperture with fewer or more segments.

  16. The Reliability of Randomly Generated Math Curriculum-Based Measurements

    Science.gov (United States)

    Strait, Gerald G.; Smith, Bradley H.; Pender, Carolyn; Malone, Patrick S.; Roberts, Jarod; Hall, John D.

    2015-01-01

    "Curriculum-Based Measurement" (CBM) is a direct method of academic assessment used to screen and evaluate students' skills and monitor their responses to academic instruction and intervention. Interventioncentral.org offers a math worksheet generator at no cost that creates randomly generated "math curriculum-based measures"…

  17. Backward-gazing method for heliostats shape errors measurement and calibration

    Science.gov (United States)

    Coquand, Mathieu; Caliot, Cyril; Hénault, François

    2017-06-01

    The pointing and canting accuracies and the surface shape of the heliostats have a great influence on the solar tower power plant efficiency. At the industrial scale, one of the issues to solve is the time and the efforts devoted to adjust the different mirrors of the faceted heliostats, which could take several months if the current methods were used. Accurate control of heliostat tracking requires complicated and onerous devices. Thus, methods used to adjust quickly the whole field of a plant are essential for the rise of solar tower technology with a huge number of heliostats. Wavefront detection is widely use in adaptive optics and shape error reconstruction. Such systems can be sources of inspiration for the measurement of solar facets misalignment and tracking errors. We propose a new method of heliostat characterization inspired by adaptive optics devices. This method aims at observing the brightness distributions on heliostat's surface, from different points of view close to the receiver of the power plant, in order to calculate the wavefront of the reflection of the sun on the concentrated surface to determine its errors. The originality of this new method is to use the profile of the sun to determine the defects of the mirrors. In addition, this method would be easy to set-up and could be implemented without sophisticated apparatus: only four cameras would be used to perform the acquisitions.

  18. Measurement error and outcome distributions: Methodological issues in regression analyses of behavioral coding data

    Science.gov (United States)

    Holsclaw, Tracy; Hallgren, Kevin A.; Steyvers, Mark; Smyth, Padhraic; Atkins, David C.

    2015-01-01

    Behavioral coding is increasingly used for studying mechanisms of change in psychosocial treatments for substance use disorders (SUDs). However, behavioral coding data typically include features that can be problematic in regression analyses, including measurement error in independent variables, non-normal distributions of count outcome variables, and conflation of predictor and outcome variables with third variables, such as session length. Methodological research in econometrics has shown that these issues can lead to biased parameter estimates, inaccurate standard errors, and increased type-I and type-II error rates, yet these statistical issues are not widely known within SUD treatment research, or more generally, within psychotherapy coding research. Using minimally-technical language intended for a broad audience of SUD treatment researchers, the present paper illustrates the nature in which these data issues are problematic. We draw on real-world data and simulation-based examples to illustrate how these data features can bias estimation of parameters and interpretation of models. A weighted negative binomial regression is introduced as an alternative to ordinary linear regression that appropriately addresses the data characteristics common to SUD treatment behavioral coding data. We conclude by demonstrating how to use and interpret these models with data from a study of motivational interviewing. SPSS and R syntax for weighted negative binomial regression models is included in supplementary materials. PMID:26098126
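
    A hedged sketch of one way to set up such a count-outcome regression, assuming the statsmodels GLM API; here session length enters as an exposure offset, which is related to, but not identical with, the weighted formulation proposed in the paper, and all data and variable names are invented.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(7)
n = 300
therapist_skill = rng.normal(size=n)              # predictor (error-prone in practice)
session_minutes = rng.uniform(30, 60, n)
rate = np.exp(0.4 * therapist_skill - 2.5)        # coded utterances per minute
counts = rng.negative_binomial(2, 2 / (2 + rate * session_minutes))

X = sm.add_constant(therapist_skill)
model = sm.GLM(counts, X,
               family=sm.families.NegativeBinomial(alpha=0.5),
               exposure=session_minutes)          # session length as an offset
print(model.fit().summary().tables[1])
```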

  19. Biased Random-Walk Learning A Neurobiological Correlate to Trial-and-Error

    CERN Document Server

    Anderson, R W

    1993-01-01

    Neural network models offer a theoretical testbed for the study of learning at the cellular level. The only experimentally verified learning rule, Hebb's rule, is extremely limited in its ability to train networks to perform complex tasks. An identified cellular mechanism responsible for Hebbian-type long-term potentiation, the NMDA receptor, is highly versatile. Its function and efficacy are modulated by a wide variety of compounds and conditions and are likely to be directed by non-local phenomena. Furthermore, it has been demonstrated that NMDA receptors are not essential for some types of learning. We have shown that another neural network learning rule, the chemotaxis algorithm, is theoretically much more powerful than Hebb's rule and is consistent with experimental data. A biased random-walk in synaptic weight space is a learning rule immanent in nervous activity and may account for some types of learning -- notably the acquisition of skilled movement.
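
    One simple reading of a biased random-walk (chemotaxis-like) rule, shown here only schematically and not claimed to be the authors' exact algorithm: take a fixed-size step in weight space, keep heading in the same direction while the error decreases, and "tumble" to a new random direction when it does not.

```python
import numpy as np

rng = np.random.default_rng(8)
X = rng.normal(size=(200, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.normal(size=200)   # target mapping

def loss(w):
    return np.mean((X @ w - y) ** 2)

w = np.zeros(3)
step = 0.05
direction = rng.normal(size=3)
best = loss(w)
for _ in range(5000):
    candidate = w + step * direction / np.linalg.norm(direction)
    cand_loss = loss(candidate)
    if cand_loss < best:            # "run": keep moving in a direction that helps
        w, best = candidate, cand_loss
    else:                           # "tumble": pick a new random direction
        direction = rng.normal(size=3)
print("learned weights:", np.round(w, 2), " mse:", round(best, 4))
```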

  20. Reliability, technical error of measurements and validity of length and weight measurements for children under two years old in Malaysia.

    Science.gov (United States)

    Jamaiyah, H; Geeta, A; Safiza, M N; Khor, G L; Wong, N F; Kee, C C; Rahmah, R; Ahmad, A Z; Suzana, S; Chen, W S; Rajaah, M; Adam, B

    2010-06-01

    The National Health and Morbidity Survey III 2006 planned to perform anthropometric measurements (length and weight) for children in its survey. However, there is limited literature on the reliability, technical error of measurement (TEM) and validity of these two measurements. This study assessed the above properties of length (LT) and weight (WT) measurements in 130 children aged below two years from the Hospital Universiti Kebangsaan Malaysia (HUKM) paediatric outpatient clinics during the period of December 2005 to January 2006. Two trained nurses measured WT using a Tanita digital infant scale model 1583, Japan (0.01 kg) and a Seca beam scale, Germany (0.01 kg), and LT using a Seca measuring mat, Germany (0.1 cm) and a Sensormedics stadiometer model 2130 (0.1 cm). Findings showed high inter- and intra-examiner reliability using 'change in the mean' and 'intraclass correlation' (ICC) for WT and LT. However, LT was found to be less reliable using the Bland and Altman plot. This was also true using relative TEMs, where the TEM value for LT was slightly above the acceptable limit. The test instruments were highly valid for WT using 'change in the mean' and 'ICC' but were less valid for LT measurement. In spite of this, we concluded that WT and LT measurements in children below two years old using the test instruments were reliable and valid for a community survey such as NHMS III, within the limits of their error. We recommend that LT measurement be given special attention to improve its reliability and validity.
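    For readers unfamiliar with the technical error of measurement, the following sketch applies the standard intra-examiner TEM and relative TEM formulas to duplicate measurements; the numbers are invented for illustration and are not the study's data.

```python
# Sketch: technical error of measurement (TEM) for duplicate measurements,
#   TEM = sqrt(sum(d_i^2) / (2 n)),  %TEM = 100 * TEM / overall mean.
import numpy as np

first  = np.array([72.1, 68.4, 75.0, 70.2, 66.8])   # first length measurements (cm)
second = np.array([72.4, 68.1, 74.6, 70.5, 66.9])   # repeat measurements (cm)

d = first - second
tem = np.sqrt(np.sum(d ** 2) / (2 * len(d)))
relative_tem = 100 * tem / np.mean(np.concatenate([first, second]))
print(f"TEM = {tem:.3f} cm, relative TEM = {relative_tem:.2f}%")
```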

  1. MEASURING THE INFLUENCE OF TASK COMPLEXITY ON HUMAN ERROR PROBABILITY: AN EMPIRICAL EVALUATION

    Directory of Open Access Journals (Sweden)

    LUCA PODOFILLINI

    2013-04-01

    Full Text Available A key input for the assessment of Human Error Probabilities (HEPs) with Human Reliability Analysis (HRA) methods is the evaluation of the factors influencing the human performance (often referred to as Performance Shaping Factors, PSFs). In general, the definition of these factors and the supporting guidance are such that their evaluation involves significant subjectivity. This affects the repeatability of HRA results as well as the collection of HRA data for model construction and verification. In this context, the present paper considers the TAsk COMplexity (TACOM) measure, developed by one of the authors to quantify the complexity of procedure-guided tasks (by the operating crew of nuclear power plants in emergency situations), and evaluates its use to represent (objectively and quantitatively) task complexity issues relevant to HRA methods. In particular, TACOM scores are calculated for five Human Failure Events (HFEs) for which empirical evidence on the HEPs (albeit with large uncertainty) and influencing factors are available – from the International HRA Empirical Study. The empirical evaluation has shown promising results. The TACOM score increases as the empirical HEP of the selected HFEs increases. Except for one case, TACOM scores are well distinguished if related to different difficulty categories (e.g., “easy” vs. “somewhat difficult”), while values corresponding to tasks within the same category are very close. Despite some important limitations related to the small number of HFEs investigated and the large uncertainty in their HEPs, this paper presents one of few attempts to empirically study the effect of a performance shaping factor on the human error probability. This type of study is important to enhance the empirical basis of HRA methods, to make sure that 1) the definitions of the PSFs cover the influences important for HRA (i.e., influencing the error probability), and 2) the quantitative relationships among PSFs and error

  2. Development of a simple system for simultaneously measuring 6DOF geometric motion errors of a linear guide.

    Science.gov (United States)

    Qibo, Feng; Bin, Zhang; Cunxing, Cui; Cuifang, Kuang; Yusheng, Zhai; Fenglin, You

    2013-11-04

    A simple method for simultaneously measuring the 6DOF geometric motion errors of the linear guide was proposed. The mechanisms for measuring straightness and angular errors and for enhancing their resolution are described in detail. A common-path method for measuring the laser beam drift was proposed and it was used to compensate the errors produced by the laser beam drift in the 6DOF geometric error measurements. A compact 6DOF system was built. Calibration experiments with certain standard measurement meters showed that our system has a standard deviation of 0.5 µm in a range of ± 100 µm for the straightness measurements, and standard deviations of 0.5", 0.5", and 1.0" in the range of ± 100" for pitch, yaw, and roll measurements, respectively.

  3. Compensation of errors due to incident beam drift in a 3 DOF measurement system for linear guide motion.

    Science.gov (United States)

    Hu, Pengcheng; Mao, Shuai; Tan, Jiu-Bin

    2015-11-02

    A measurement system with three degrees of freedom (3 DOF) that compensates for errors caused by incident beam drift is proposed. The system's measurement model (i.e. its mathematical foundation) is analyzed, and a measurement module (i.e. the designed orientation measurement unit) is developed and adopted to measure simultaneously straightness errors and the incident beam direction; thus, the errors due to incident beam drift can be compensated. The experimental results show that the proposed system has a deviation of 1 μm in the range of 200 mm for distance measurements, and a deviation of 1.3 μm in the range of 2 mm for straightness error measurements.

  4. Error field measurement, correction and heat flux balancing on Wendelstein 7-X

    Science.gov (United States)

    Lazerson, Samuel A.; Otte, Matthias; Jakubowski, Marcin; Israeli, Ben; Wurden, Glen A.; Wenzel, Uwe; Andreeva, Tamara; Bozhenkov, Sergey; Biedermann, Christoph; Kocsis, Gábor; Szepesi, Tamás; Geiger, Joachim; Pedersen, Thomas Sunn; Gates, David; The W7-X Team

    2017-04-01

    The measurement and correction of error fields in Wendelstein 7-X (W7-X) is critical to long pulse high beta operation, as small error fields may cause overloading of divertor plates in some configurations. Accordingly, as part of a broad collaborative effort, the detection and correction of error fields on the W7-X experiment has been performed using the trim coil system in conjunction with the flux surface mapping diagnostic and high resolution infrared camera. In the early commissioning phase of the experiment, the trim coils were used to open an n/m = 1/2 island chain in a specially designed magnetic configuration. The flux surface mapping diagnostic was then able to directly image the magnetic topology of the experiment, allowing the inference of a small ∼4 cm intrinsic island chain. The suspected main sources of the error field, slight misalignment and deformations of the superconducting coils, are then confirmed through experimental modeling using the detailed measurements of the coil positions. Observations of the limiter temperatures in module 5 show a clear dependence of the limiter heat flux pattern on the rotation of the perturbing fields. Plasma experiments without applied correcting fields show a significant asymmetry in neutral pressure (centered in module 4) and light emission (visible, H-alpha, CII, and CIII). Such pressure asymmetry is associated with plasma-wall (limiter) interaction asymmetries between the modules. Application of trim coil fields with an n = 1 waveform corrects the imbalance. Confirmation of the error fields allows the assessment of magnetic fields which resonate with the n/m = 5/5 island chain. Notice: This manuscript has been authored by Princeton University under Contract Number DE-AC02-09CH11466 with the U.S. Department of Energy. The publisher, by accepting the article for publication, acknowledges that the United States Government retains a non-exclusive, paid-up, irrevocable, world

  5. Quantification of error in optical coherence tomography central macular thickness measurement in wet age-related macular degeneration.

    Science.gov (United States)

    Ghazi, Nicola G; Kirk, Tyler; Allam, Souha; Yan, Guofen

    2009-07-01

    To assess error indicators encountered during optical coherence tomography (OCT) automated retinal thickness measurement (RTM) in neovascular age-related macular degeneration (NVAMD) before and after bevacizumab (Avastin; Genentech Inc, South San Francisco, California, USA) treatment. Retrospective observational cross-sectional study. Each of the 6 radial lines of a single Stratus fast macular OCT study before and 3 months following initiation of treatment in 46 eyes with NVAMD, for a total of 552 scans, was evaluated. Error frequency was analyzed relative to the presence of intraretinal, subretinal (SR), and subretinal pigment epithelial (SRPE) fluid. In scans with edge detection kernel (EDK) misplacement, manual caliper measurement of the central macular (CMT) and central foveal (CFT) thicknesses was performed and compared to the software-generated values. The frequency of the various types of error indicators, the risk factors for error, and the magnitude of automated RTM error were analyzed. Error indicators were found in 91.3% and 71.7% of eyes before and after treatment, respectively (P = .013). Suboptimal signal strength was the most common error indicator. EDK misplacement was the second most common type of error prior to treatment and the least common after treatment (P = .005). Eyes with SR or SRPE fluid were at the highest risk for error, particularly EDK misplacement (P = .039). There was a strong association between the software-generated and caliper-generated CMT and CFT measurements. The software overestimated measurements by up to 32% and underestimated them by up to 15% in the presence of SR and SRPE fluid, respectively. OCT errors are very frequent in NVAMD. SRF is associated with the highest risk and magnitude of error in automated CMT and CFT measurements. Manually adjusted measurements may be more reliable in such eyes.

  6. Measurement errors related to contact angle analysis of hydrogel and silicone hydrogel contact lenses.

    Science.gov (United States)

    Read, Michael L; Morgan, Philip B; Maldonado-Codina, Carole

    2009-11-01

    This work sought to undertake a comprehensive investigation of the measurement errors associated with contact angle assessment of curved hydrogel contact lens surfaces. The contact angle coefficient of repeatability (COR) associated with three measurement conditions (image analysis COR, intralens COR, and interlens COR) was determined by measuring the contact angles (using both sessile drop and captive bubble methods) for three silicone hydrogel lenses (senofilcon A, balafilcon A, lotrafilcon A) and one conventional hydrogel lens (etafilcon A). Image analysis COR values were about 2 degrees, whereas intralens COR values (95% confidence intervals) ranged from 4.0 degrees (3.3 degrees, 4.7 degrees) (lotrafilcon A, captive bubble) to 10.2 degrees (8.4 degrees, 12.1 degrees) (senofilcon A, sessile drop). Interlens COR values ranged from 4.5 degrees (3.7 degrees, 5.2 degrees) (lotrafilcon A, captive bubble) to 16.5 degrees (13.6 degrees, 19.4 degrees) (senofilcon A, sessile drop). Measurement error associated with image analysis was shown to be small as an absolute measure, although proportionally more significant for lenses with low contact angle. Sessile drop contact angles were typically less repeatable than captive bubble contact angles. For sessile drop measures, repeatability was poorer with the silicone hydrogel lenses when compared with the conventional hydrogel lens; this phenomenon was not observed for the captive bubble method, suggesting that methodological factors related to the sessile drop technique (such as surface dehydration and blotting) may play a role in the increased variability of contact angle measurements observed with silicone hydrogel contact lenses.
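    As a rough illustration, the coefficient of repeatability is often computed as 1.96·√2 times the within-subject standard deviation estimated from duplicate measurements; the sketch below uses that common definition with made-up contact angle data and does not reproduce the paper's exact computation.

```python
# Sketch: coefficient of repeatability (COR) from paired repeat measurements,
# using one common definition, COR = 1.96 * sqrt(2) * s_w, where s_w is the
# within-subject standard deviation estimated from duplicates.
import numpy as np

angle_run1 = np.array([38.2, 41.5, 55.0, 60.3, 47.8])   # contact angles, degrees
angle_run2 = np.array([40.1, 43.0, 52.4, 58.9, 49.5])   # repeat measurements

d = angle_run1 - angle_run2
s_w = np.sqrt(np.mean(d ** 2) / 2.0)        # within-subject SD from duplicates
cor = 1.96 * np.sqrt(2.0) * s_w
print(f"within-subject SD = {s_w:.2f} deg, COR = {cor:.2f} deg")
```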

  7. Research on Proximity Magnetic Field Influence in Measuring Error of Active Electronic Current Transformers

    Directory of Open Access Journals (Sweden)

    Wu Weijiang

    2016-01-01

    Full Text Available The principles of the active electronic current transformer (ECT) are introduced, and the mechanism by which a proximity magnetic field can influence the measuring error is analyzed from the perspective of the sensor section of the ECT. The impacts on active ECTs created by a three-phase proximity magnetic field at invariable and variable distances are simulated and analyzed. The theory and simulated analysis indicate that active ECTs are sensitive to a proximity magnetic field under certain conditions. Based on the simulated analysis, a product structural design and the siting of transformers at substation sites are suggested for manufacturers and power supply administrations, respectively.

  8. Correction for Measurement Error from Genotyping-by-Sequencing in Genomic Variance and Genomic Prediction Models

    DEFF Research Database (Denmark)

    Ashraf, Bilal; Janss, Luc; Jensen, Just

    Genotyping-by-sequencing (GBSeq) is becoming a cost-effective genotyping platform for species without available SNP arrays. GBSeq involves sequencing short reads from restriction sites covering a limited part of the genome (e.g., 5-10%) with low sequencing depth per individual (e.g., 5-10X per...... sample). The GBSeq data can be used directly in genomic models in the form of individual SNP allele-frequency estimates (e.g., reference reads/total reads per polymorphic site per individual), but are subject to measurement error due to the low sequencing depth per individual. Due to technical reasons...

  9. Control of Flexible Structures: Model Errors, Robustness Measures, and Optimization of Feedback Controllers

    Science.gov (United States)

    1988-10-31

    ... measured frequency response function ... Symposium on Dynamics and Control of Large Flexible Spacecraft, VPI&SU ... of large W, since the model correction term d(t) remains virtually zero. The measurement-minus-estimate variance is much ... weight matrix ... differential equations. Although the measurement error covariance matrix Rk is assumed to be known, it is strictly valid only for an ...

  10. Semiparametric Bayesian Analysis of Nutritional Epidemiology Data in the Presence of Measurement Error

    KAUST Repository

    Sinha, Samiran

    2009-08-10

    We propose a semiparametric Bayesian method for handling measurement error in nutritional epidemiological data. Our goal is to estimate nonparametrically the form of association between a disease and exposure variable while the true values of the exposure are never observed. Motivated by nutritional epidemiological data, we consider the setting where a surrogate covariate is recorded in the primary data, and a calibration data set contains information on the surrogate variable and repeated measurements of an unbiased instrumental variable of the true exposure. We develop a flexible Bayesian method where not only is the relationship between the disease and exposure variable treated semiparametrically, but also the relationship between the surrogate and the true exposure is modeled semiparametrically. The two nonparametric functions are modeled simultaneously via B-splines. In addition, we model the distribution of the exposure variable as a Dirichlet process mixture of normal distributions, thus making its modeling essentially nonparametric and placing this work into the context of functional measurement error modeling. We apply our method to the NIH-AARP Diet and Health Study and examine its performance in a simulation study.

  11. Emission Flux Measurement Error with a Mobile DOAS System and Application to NOx Flux Observations.

    Science.gov (United States)

    Wu, Fengcheng; Li, Ang; Xie, Pinhua; Chen, Hao; Hu, Zhaokun; Zhang, Qiong; Liu, Jianguo; Liu, Wenqing

    2017-01-25

    Mobile differential optical absorption spectroscopy (mobile DOAS) is an optical remote sensing method that can rapidly measure trace gas emission flux from air pollution sources (such as power plants, industrial areas, and cities) in real time. Generally, mobile DOAS is influenced by wind, drive velocity, and other factors, and is especially sensitive to the wind field used when the emission flux is derived. This paper presents a detailed error analysis and NOx emission measurements with a mobile DOAS system at a power plant in Shijiazhuang city, China. Comparison of the SO₂ emission flux from mobile DOAS observations with a continuous emission monitoring system (CEMS) under different drive speeds and wind fields revealed that the optimal drive velocity is 30-40 km/h and that the wind field at plume height should be used when mobile DOAS observations are performed. In addition, combining the uncertainties of column density, wind field, and drive velocity, the total errors of SO₂ and NO₂ emissions from mobile DOAS measurements are 32% and 30%, respectively. Furthermore, a NOx emission of 0.15 ± 0.06 kg/s from the power plant is estimated, which is in good agreement with the CEMS observation of 0.17 ± 0.07 kg/s. This study contributes significantly to mobile DOAS measurement of emissions from air pollution sources, thus improving estimation accuracy.

  12. Estimating the Persistence and the Autocorrelation Function of a Time Series that is Measured with Error

    DEFF Research Database (Denmark)

    Hansen, Peter Reinhard; Lunde, Asger

    2014-01-01

    An economic time series can often be viewed as a noisy proxy for an underlying economic variable. Measurement errors will influence the dynamic properties of the observed process and may conceal the persistence of the underlying time series. In this paper we develop instrumental variable (IV......) methods for extracting information about the latent process. Our framework can be used to estimate the autocorrelation function of the latent volatility process and a key persistence parameter. Our analysis is motivated by the recent literature on realized volatility measures that are imperfect estimates...... of actual volatility. In an empirical analysis using realized measures for the Dow Jones industrial average stocks, we find the underlying volatility to be near unit root in all cases. Although standard unit root tests are asymptotically justified, we find them to be misleading in our application despite...
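    The attenuation caused by measurement error, and the way a further lag can serve as an instrument to undo it, can be illustrated with a generic AR(1)-plus-noise simulation; the sketch below is a textbook-style illustration in the spirit of the abstract, not the authors' estimator.

```python
# Sketch: measurement error attenuates the OLS estimate of AR(1) persistence,
# while an instrumental-variable estimator using a further lag as instrument
# recovers it.
import numpy as np

rng = np.random.default_rng(2)
T, phi = 20000, 0.95
x = np.zeros(T)
for t in range(1, T):                          # latent persistent process
    x[t] = phi * x[t - 1] + rng.normal(scale=0.1)
y = x + rng.normal(scale=0.3, size=T)          # noisy observed proxy

y0, y1, y2 = y[2:], y[1:-1], y[:-2]
ols = np.sum(y0 * y1) / np.sum(y1 * y1)        # attenuated estimate
iv = np.sum(y0 * y2) / np.sum(y1 * y2)         # instrument: second lag
print(f"true phi = {phi}, OLS = {ols:.3f}, IV = {iv:.3f}")
```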

  13. Estimating the Persistence and the Autocorrelation Function of a Time Series that is Measured with Error

    DEFF Research Database (Denmark)

    Hansen, Peter Reinhard; Lunde, Asger

    An economic time series can often be viewed as a noisy proxy for an underlying economic variable. Measurement errors will influence the dynamic properties of the observed process and may conceal the persistence of the underlying time series. In this paper we develop instrumental variable (IV......) methods for extracting information about the latent process. Our framework can be used to estimate the autocorrelation function of the latent volatility process and a key persistence parameter. Our analysis is motivated by the recent literature on realized (volatility) measures, such as the realized...... variance, that are imperfect estimates of actual volatility. In an empirical analysis using realized measures for the DJIA stocks we find the underlying volatility to be near unit root in all cases. Although standard unit root tests are asymptotically justified, we find them to be misleading in our...

  14. Unreliability and error in the military's "gold standard" measure of sexual harassment by education and gender.

    Science.gov (United States)

    Murdoch, Maureen; Pryor, John B; Griffin, Joan M; Ripley, Diane Cowper; Gackstetter, Gary D; Polusny, Melissa A; Hodges, James S

    2011-01-01

    The Department of Defense's "gold standard" sexual harassment measure, the Sexual Harassment Core Measure (SHCore), is based on an earlier measure that was developed primarily in college women. Furthermore, the SHCore requires a reading grade level of 9.1. This may be higher than some troops' reading abilities and could generate unreliable estimates of their sexual harassment experiences. Results from 108 male and 96 female soldiers showed that the SHCore's temporal stability and alternate-forms reliability was significantly worse (a) in soldiers without college experience compared to soldiers with college experience and (b) in men compared to women. For men without college experience, almost 80% of the temporal variance in SHCore scores was attributable to error. A plain language version of the SHCore had mixed effects on temporal stability depending on education and gender. The SHCore may be particularly ill suited for evaluating population trends of sexual harassment in military men without college experience.

  15. Errors in shearography measurements due to the creep of the PZT shearing actuator

    Science.gov (United States)

    Zastavnik, Filip; Pyl, Lincy; Sol, Hugo; Kersemans, Mathias; Van Paepegem, Wim

    2014-08-01

    Shearography is a modern optical interferometric measurement technique. It uses the interferometric properties of coherent laser light to measure deformation gradients on the µm/m level. In the most common shearography setups, the ones employing a Michelson interferometer, the deformation gradients in both the x- and y-directions can be identified by setting angles on the shearing mirror. One of the mechanisms for setting the desired shearing angles in the Michelson interferometer is using the PZT actuators. This paper will reveal that the time-dependent creep behaviour of the PZT actuators is a major source of measurement errors. Measurements at long time spans suffer severely from this creep behaviour. Even for short time spans, which are typical for shearographic experiments, the creep behaviour of the PZT shear actuator induces considerable deviation in the measured response. In this paper the mechanism and the effect of PZT creep are explored and demonstrated with measurements. For long time-span measurements in shearography, noise is a limiting factor. Thus, the time-dependent evolution of noise is considered in this paper, with particular interest in the influence of external vibrations. Measurements with and without external vibration isolation are conducted and the difference between the two setups is analyzed. At the end of the paper some recommendations are given for minimizing and correcting the here-studied time-dependent effects.

  16. Maximal-entropy random walk unifies centrality measures

    OpenAIRE

    Ochab, J. K.

    2012-01-01

    In this paper analogies between different (dis)similarity matrices are derived. These matrices, which are connected to path enumeration and random walks, are used in community detection methods or in computation of centrality measures for complex networks. The focus is on a number of known centrality measures, which inherit the connections established for similarity matrices. These measures are based on the principal eigenvector of the adjacency matrix, path enumeration, as well as on the sta...
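    The standard construction of the maximal-entropy random walk from the adjacency matrix (transition probabilities built from the principal eigenvalue and eigenvector) can be written in a few lines; the small example graph below is an assumption for illustration.

```python
# Sketch: maximal-entropy random walk (MERW) transition matrix for a simple
# undirected graph, using the standard construction
#   P_ij = (A_ij / lam) * (psi_j / psi_i),
# where lam, psi are the leading eigenvalue/eigenvector of the adjacency matrix.
# The stationary distribution is proportional to psi_i**2.
import numpy as np

A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 1],
              [1, 1, 0, 1],
              [0, 1, 1, 0]], dtype=float)

eigvals, eigvecs = np.linalg.eigh(A)
lam = eigvals[-1]                      # largest eigenvalue
psi = np.abs(eigvecs[:, -1])           # Perron eigenvector (positive entries)

P = (A / lam) * (psi[np.newaxis, :] / psi[:, np.newaxis])
print("row sums:", P.sum(axis=1))                  # each row sums to 1
print("stationary:", psi**2 / np.sum(psi**2))
```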

  17. The Importance of Tree Height in Estimating Individual Tree Biomass While Considering Errors in Measurements and Allometric Models

    OpenAIRE

    Phalla, Thuch; Ota, Tetsuji; Mizoue, Nobuya; Kajisa, Tsuyoshi; Yoshida, Shigejiro; Vuthy, Ma; Heng, Sokh

    2018-01-01

    This study evaluated the uncertainty of individual tree biomass estimated by allometric models by both including and excluding tree height independently. Using two independent sets of measurements on the same trees, the errors in the measurement of diameter at breast height and tree height were quantified, and the uncertainty of individual tree biomass estimation caused by errors in measurement was calculated. For both allometric models, the uncertainties of the individual tree biomass estima...

  18. Control chart limits based on true process capability with consideration of measurement system error

    Directory of Open Access Journals (Sweden)

    Amara Souha Ben

    2016-01-01

    Full Text Available Shewhart X̅ and R control charts and process capability indices, proven to be effective tools in statistical process control, are widely used under the assumption that the measurement system is free from errors. However, measurement variability is unavoidable and may be evaluated by the measurement system discrimination ratio (DR). This paper investigates the effects of measurement system variability, evaluated by DR, on the process capability indices Cp and Cpm, on the expected non-conforming units of product per million (ppm), on the expected mean value of the Taguchi loss function (E(Loss)), and on the Shewhart chart properties. It is shown that when measurement system variability is neglected, an overestimation of ppm and an underestimation of E(Loss) are induced. Moreover, significant effects of the measurement variability on the control chart properties are put in evidence. Therefore, control chart limit calculation methods based on the real state of the process were developed. An example is provided in order to compare the proposed limits with those traditionally calculated for Shewhart X̅ and R charts.
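    The basic mechanism, independent gauge variance adding to the process variance and deflating the apparent capability index, can be sketched as follows; the specification limits and standard deviations are illustrative values, not the paper's.

```python
# Sketch: gauge (measurement) variance inflates the observed process spread and
# deflates the apparent capability index, assuming independent errors:
#   sigma_obs^2 = sigma_process^2 + sigma_gauge^2,  Cp = (USL - LSL) / (6 sigma).
import numpy as np

USL, LSL = 10.6, 9.4
sigma_process = 0.15
sigma_gauge = 0.08

sigma_obs = np.sqrt(sigma_process**2 + sigma_gauge**2)
cp_true = (USL - LSL) / (6 * sigma_process)
cp_obs = (USL - LSL) / (6 * sigma_obs)
print(f"true Cp = {cp_true:.2f}, observed Cp = {cp_obs:.2f}")
```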

  19. A Secure LFSR Based Random Measurement Matrix for Compressive Sensing

    Science.gov (United States)

    George, Sudhish N.; Pattathil, Deepthi P.

    2014-11-01

    In this paper, a novel approach for generating a secure measurement matrix for compressive sensing (CS) based on a linear feedback shift register (LFSR) is presented. The basic idea is to select the different states of the LFSR as the random entries of the measurement matrix and normalize these values to get independent and identically distributed (i.i.d.) random variables with zero mean and variance 1/N, where N is the number of input samples. The initial seed of the LFSR system acts as the key for the user to provide security. Since the measurement matrix is generated from the LFSR system, the memory overhead of storing the measurement matrix is avoided in the proposed system. Moreover, the proposed system can provide security while maintaining the robustness to noise of the CS system. The proposed system is validated through different block-based CS techniques for images. To enhance security, the different blocks of the images are measured with different measurement matrices so that the proposed encryption system can withstand a known-plaintext attack. A modulo division circuit is used to reseed the LFSR system to generate multiple random measurement matrices, whereby after each fundamental period of the LFSR, the feedback polynomial of the modulo circuit is modified in terms of a chaotic value. The proposed secure and robust CS paradigm for images is subjected to several forms of attacks and is proven to be resistant against them. From experimental analysis, it is shown that the proposed system provides better performance than its counterparts.
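    A rough sketch of the core idea, filling a measurement matrix from successive LFSR states and normalizing the entries, is given below; the seed, tap positions, and matrix size are illustrative assumptions, not the parameters used in the paper.

```python
# Sketch: fill a compressive-sensing measurement matrix from the successive
# states of a Fibonacci LFSR, then normalize the entries to zero mean and
# variance 1/N. Seed, taps and sizes are illustrative only.
import numpy as np

def lfsr_states(seed, taps, nbits, count):
    """Yield `count` successive register states of an `nbits`-bit Fibonacci LFSR."""
    state = seed & ((1 << nbits) - 1)
    for _ in range(count):
        fb = 0
        for t in taps:                       # XOR of the tapped bits
            fb ^= (state >> (t - 1)) & 1
        state = ((state << 1) | fb) & ((1 << nbits) - 1)
        yield state

M, N = 32, 128                               # measurements x signal length
raw = np.fromiter(
    lfsr_states(seed=0xACE1, taps=(16, 14, 13, 11), nbits=16, count=M * N),
    dtype=float, count=M * N)
phi = (raw - raw.mean()) / raw.std()         # zero mean, unit variance
phi = phi.reshape(M, N) / np.sqrt(N)         # entries now have variance 1/N
print(phi.shape, round(float(phi.mean()), 6), round(float(phi.var() * N), 3))
```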

  20. Errors in measurement of three-dimensional motions of the stapes using a laser Doppler vibrometer system.

    Science.gov (United States)

    Sim, Jae Hoon; Lauxmann, Michael; Chatzimichalis, Michail; Röösli, Christof; Eiber, Albrecht; Huber, Alexander M

    2010-12-01

    Previous studies have suggested complex modes of physiological stapes motions based upon various measurements. The goal of this study was to analyze the detailed errors in measurement of the complex stapes motions using laser Doppler vibrometer (LDV) systems, which are highly sensitive to the stimulation intensity and the exact angulations of the stapes. Stapes motions were measured with acoustic stimuli as well as mechanical stimuli using a custom-made three-axis piezoelectric actuator, and errors in the motion components were analyzed. The ratio of error in each motion component was reduced by increasing the magnitude of the stimuli, but the improvement was limited when the motion component was small relative to other components. This problem was solved with an improved reflectivity on the measurement surface. Errors in estimating the position of the stapes also caused errors on the coordinates of the measurement points and the laser beam direction relative to the stapes footplate, thus producing errors in the 3-D motion components. This effect was small when the position error of the stapes footplate did not exceed 5 degrees. Copyright © 2010 Elsevier B.V. All rights reserved.

  1. Internal errors of ground-based terrestrial earthshine measurements in 5 colour bands.

    Science.gov (United States)

    Thejll, Peter; Gleisner, Hans; Flynn, Chris

    2015-04-01

    Measurements of earthshine intensity could be an important complement to satellite-based observations of terrestrial visual and near-IR radiative budgets because they are independent and relatively inexpensive to obtain and also offer different potentials for long-term bias stability. Using ground-based photometric instruments, the Moon is imaged several times a night through a range of photometric filters, and the ratio of the intensities of the dark (Earth-lit) and bright (Sun-lit) sides is calculated - this ratio is proportional to terrestrial albedo. Using forward modelling of the expected ratio, given assumptions about reflectance, single-scattering albedo, and light-scattering processes, it is possible to deduce the terrestrial albedo. In this poster we present multicolour photometric results from observations on 10 nights, obtained at the NOAA observatory on Mauna Loa, Hawaii, in 2011. The Moon had different phases on these nights and we discuss in detail the behaviour of internal errors as a function of phase. The internal error is dependent on the photon statistics of the images obtained and its magnitude is investigated by use of bootstrapping with replacement of observations. Results indicate that standard Johnson B and V band equivalent Lambert albedos can be obtained with precisions (1 standard deviation) in the 0.1 to 1% range for phases between 40 and 90 degrees. For longer wavelengths, corresponding to broader bands on either side of the 'Vegetation edge' at 750 nm, we see larger variability in the albedo determinations and discuss whether these are due to atmospheric conditions or represent fast, intrinsic terrestrial albedo variations. The accuracy of these results, however, appears to depend on method choices, in particular the choice of lunar reflectance model -- this 'external error' will be investigated in future analyses.
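    Bootstrapping with replacement, as used for the internal error estimate described above, can be sketched generically as follows; the simulated per-image intensities are invented for illustration and do not represent the Mauna Loa data.

```python
# Sketch: bootstrap-with-replacement estimate of the internal error of a
# dark-side / bright-side intensity ratio (illustrative data).
import numpy as np

rng = np.random.default_rng(3)
dark = rng.normal(loc=1.0, scale=0.05, size=40)     # earthshine-side intensities
bright = rng.normal(loc=250.0, scale=2.0, size=40)  # sunlit-side intensities

n_boot = 5000
ratios = np.empty(n_boot)
for b in range(n_boot):
    d = rng.choice(dark, size=dark.size, replace=True)
    s = rng.choice(bright, size=bright.size, replace=True)
    ratios[b] = d.mean() / s.mean()

print(f"ratio = {dark.mean() / bright.mean():.5f} "
      f"+/- {ratios.std(ddof=1):.5f} (bootstrap 1-sigma internal error)")
```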

  2. Observer bias in randomized clinical trials with measurement scale outcomes

    DEFF Research Database (Denmark)

    Hróbjartsson, Asbjørn; Thomsen, Ann Sofia Skou; Emanuelsson, Frida

    2013-01-01

    conducted a systematic review of randomized clinical trials with both blinded and nonblinded assessment of the same measurement scale outcome. We searched PubMed, EMBASE, PsycINFO, CINAHL, Cochrane Central Register of Controlled Trials, HighWire Press and Google Scholar for relevant studies. Two...

  3. Two connections between random systems and non-Gibbsian measures

    NARCIS (Netherlands)

    van Enter, A.C.D.; Kulske, C.

    2007-01-01

    In this contribution we discuss the role disordered (or random) systems have played in the study of non-Gibbsian measures. This role has two main aspects, the distinction between which has not always been fully clear: 1) From disordered systems: Disordered systems can be used as a tool; analogies

  4. Elimination of single-beam substitution error in diffuse reflectance measurements using an integrating sphere

    Science.gov (United States)

    Vidovič, Luka; Majaron, Boris

    2013-03-01

    Diffuse reflectance spectra (DRS) of biological samples are commonly measured using an integrating sphere (IS), in which spectrally broad illumination light is multiply scattered and homogenized. The measurement begins by placing a highly reflective white standard against the IS sample opening and collecting the reflected light at the signal output port to account for the illumination field. After replacing the white standard with the test sample of interest, the DRS of the latter is determined as the ratio of the two values at each involved wavelength. However, because test samples are invariably less reflective than the white standard, such a substitution modifies the illumination field inside the IS. This leads to underestimation of the sample's reflectivity and distortion of the measured DRS, which is known as single-beam substitution error (SBSE). Barring the use of much more complex dual-beam experimental setups, involving a dedicated IS, the literature states that only approximate corrections of SBSE are possible, e.g., by using look-up tables generated with calibrated low-reflectivity standards. We present a practical way to eliminate the SBSE using an IS equipped with an additional "reference" output port. Two additional measurements performed at this port (of the white standard and the sample, respectively) enable an accurate compensation for the above-described alteration of the illumination field. In addition, we analyze the dependency of SBSE on sample reflectivity and illustrate its impact on measurements of DRS in human skin with a typical IS.

  5. Quantifying the sampling error in tree census measurements by volunteers and its effect on carbon stock estimates.

    Science.gov (United States)

    Butt, Nathalie; Slade, Eleanor; Thompson, Jill; Malhi, Yadvinder; Riutta, Terhi

    2013-06-01

    A typical way to quantify aboveground carbon in forests is to measure tree diameters and use species-specific allometric equations to estimate biomass and carbon stocks. Using "citizen scientists" to collect data that are usually time-consuming and labor-intensive can play a valuable role in ecological research. However, data validation, such as establishing the sampling error in volunteer measurements, is a crucial, but little studied, part of utilizing citizen science data. The aims of this study were to (1) evaluate the quality of tree diameter and height measurements carried out by volunteers compared to expert scientists and (2) estimate how sensitive carbon stock estimates are to these measurement sampling errors. Using all diameter data measured with a diameter tape, the volunteer mean sampling error (difference between repeated measurements of the same stem) was 9.9 mm, and the expert sampling error was 1.8 mm. Excluding those sampling errors > 1 cm, the mean sampling errors were 2.3 mm (volunteers) and 1.4 mm (experts) (this excluded 14% [volunteer] and 3% [expert] of the data). The sampling error in diameter measurements had a small effect on the biomass estimates of the plots: a volunteer (expert) diameter sampling error of 2.3 mm (1.4 mm) translated into 1.7% (0.9%) change in the biomass estimates calculated from species-specific allometric equations based upon diameter. Height sampling error had a dependent relationship with tree height. Including height measurements in biomass calculations compounded the sampling error markedly; the impact of volunteer sampling error on biomass estimates was +/- 15%, and the expert range was +/- 9%. Using dendrometer bands, used to measure growth rates, we calculated that the volunteer (vs. expert) sampling error was 0.6 mm (vs. 0.3 mm), which is equivalent to a difference in carbon storage of +/- 0.011 kg C/yr (vs. +/- 0.002 kg C/yr) per stem. Using a citizen science model for monitoring carbon stocks not only has

  6. Standard practice for construction of a stepped block and its use to estimate errors produced by speed-of-sound measurement systems for use on solids

    CERN Document Server

    American Society for Testing and Materials. Philadelphia

    1999-01-01

    1.1 This practice provides a means for evaluating both systematic and random errors for ultrasonic speed-of-sound measurement systems which are used for evaluating material characteristics associated with residual stress and which may also be used for nondestructive measurements of the dynamic elastic moduli of materials. Important features and construction details of a reference block crucial to these error evaluations are described. This practice can be used whenever the precision and bias of sound speed values are in question. 1.2 This standard does not purport to address all of the safety concerns, if any, associated with its use. It is the responsibility of the user of this standard to establish appropriate safety and health practices and determine the applicability of regulatory limitations prior to use.

  7. The quantification and correction of wind-induced precipitation measurement errors

    Science.gov (United States)

    Kochendorfer, John; Rasmussen, Roy; Wolff, Mareile; Baker, Bruce; Hall, Mark E.; Meyers, Tilden; Landolt, Scott; Jachcik, Al; Isaksen, Ketil; Brækkan, Ragnar; Leeper, Ronald

    2017-04-01

    Hydrologic measurements are important for both the short- and long-term management of water resources. Of the terms in the hydrologic budget, precipitation is typically the most important input; however, measurements of precipitation are subject to large errors and biases. For example, an all-weather unshielded weighing precipitation gauge can collect less than 50 % of the actual amount of solid precipitation when wind speeds exceed 5 m s-1. Using results from two different precipitation test beds, such errors have been assessed for unshielded weighing gauges and for weighing gauges employing four of the most common windshields currently in use. Functions to correct wind-induced undercatch were developed and tested. In addition, corrections for the single-Alter weighing gauge were developed using the combined results of two separate sites in Norway and the USA. In general, the results indicate that the functions effectively correct the undercatch bias that affects such precipitation measurements. In addition, a single function developed for the single-Alter gauges effectively decreased the bias at both sites, with the bias at the US site improving from -12 to 0 %, and the bias at the Norwegian site improving from -27 to -4 %. These correction functions require only wind speed and air temperature as inputs, and were developed for use in national and local precipitation networks, hydrological monitoring, roadway and airport safety work, and climate change research. The techniques used to develop and test these transfer functions at more than one site can also be used for other more comprehensive studies, such as the World Meteorological Organization Solid Precipitation Intercomparison Experiment (WMO-SPICE).
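    To show how such a transfer function is applied in practice, the sketch below divides the gauge measurement by a catch-efficiency estimate that depends on wind speed and air temperature; the functional form and coefficients are hypothetical placeholders, not the published WMO-SPICE or single-Alter functions.

```python
# Sketch: applying a wind-speed/temperature-based catch-efficiency correction to
# gauge precipitation. The exponential form and coefficients are HYPOTHETICAL,
# used only to illustrate how a correction of this type would be applied.
import numpy as np

def catch_efficiency(wind_ms, temp_c, a=0.05, t0=2.0):
    """Hypothetical catch efficiency: decays with wind, only below ~t0 deg C."""
    ce = np.where(temp_c < t0, np.exp(-a * wind_ms), 1.0)
    return np.clip(ce, 0.2, 1.0)

measured_mm = np.array([1.2, 0.8, 3.4])
wind_ms = np.array([6.0, 2.0, 9.0])
temp_c = np.array([-4.0, 1.0, -10.0])

corrected_mm = measured_mm / catch_efficiency(wind_ms, temp_c)
print(corrected_mm)
```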

  8. Right and left correlation of retinal vessel caliber measurements in anisometropic children: effect of refractive error.

    Science.gov (United States)

    Joachim, Nichole; Rochtchina, Elena; Tan, Ava Grace; Hong, Thomas; Mitchell, Paul; Wang, Jie Jin

    2012-08-07

    Previous studies have reported high right-left eye correlation in retinal vessel caliber. We test the hypothesis that right-left correlation in retinal vessel caliber would be reduced in anisometropic compared with emmetropic children. Retinal arteriolar and venular calibers were measured in 12-year-old children. Three groups were selected: group 1, both eyes emmetropic (n = 214); group 2, right-left spherical equivalent refraction (SER) difference ≥1.00 but <2.00 D; and group 3, right-left SER difference ≥2.00 D (n = 32). Pearson's correlations between the two eyes were compared between group 1 and group 2 or 3. Associations between right-left difference in refractive error and right-left difference in caliber measurements were assessed using linear regression models. Right-left correlation in group 1 was 0.57 for central retinal arteriolar equivalent (CRAE) and 0.70 for central retinal venular equivalent (CRVE) compared with 0.60 and 0.82 for CRAE and CRVE, respectively, in group 2 (P = 0.42 and P = 0.08), and 0.36 and 0.52, respectively, in group 3 (P = 0.08 and P = 0.07, referenced to group 1). Each 1.00-D increase in right-left SER difference was associated with a 0.74-μm increase in mean CRAE difference (P = 0.02) and a 1.23-μm increase in mean CRVE difference between the two eyes (P = 0.002). Each 0.1-mm increase in right-left difference in axial length was associated with a 0.21-μm increase in the mean difference in CRAE (P = 0.01) and a 0.42-μm increase in the mean difference in CRVE (P < 0.0001) between the two eyes. Refractive error ≥2.00 D may contribute to variation in measurements of retinal vessel caliber.

  9. Detection of microcalcifications in mammograms using error of prediction and statistical measures

    Science.gov (United States)

    Acha, Begoña; Serrano, Carmen; Rangayyan, Rangaraj M.; Leo Desautels, J. E.

    2009-01-01

    A two-stage method for detecting microcalcifications in mammograms is presented. In the first stage, the determination of the candidates for microcalcifications is performed. For this purpose, a 2-D linear prediction error filter is applied, and for those pixels where the prediction error is larger than a threshold, a statistical measure is calculated to determine whether they are candidates for microcalcifications or not. In the second stage, a feature vector is derived for each candidate, and after a classification step using a support vector machine, the final detection is performed. The algorithm is tested with 40 mammographic images, from Screen Test: The Alberta Program for the Early Detection of Breast Cancer with 50-μm resolution, and the results are evaluated using a free-response receiver operating characteristics curve. Two different analyses are performed: an individual microcalcification detection analysis and a cluster analysis. In the analysis of individual microcalcifications, detection sensitivity values of 0.75 and 0.81 are obtained at 2.6 and 6.2 false positives per image, on the average, respectively. The best performance is characterized by a sensitivity of 0.89, a specificity of 0.99, and a positive predictive value of 0.79. In cluster analysis, a sensitivity value of 0.97 is obtained at 1.77 false positives per image, and a value of 0.90 is achieved at 0.94 false positive per image.
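    A simplified sketch of the two-stage idea, a 2-D linear prediction error filter followed by a local statistical screen, is shown below; the fixed predictor weights, thresholds, and local statistic are illustrative simplifications, not the paper's trained filter or classifier.

```python
# Simplified sketch: (1) a 2-D linear prediction error filter flags pixels whose
# value is poorly predicted from their neighbours, (2) a local statistic screens
# those candidates. Weights and thresholds are illustrative only.
import numpy as np

rng = np.random.default_rng(4)
img = rng.normal(100.0, 2.0, size=(64, 64))
img[20, 30] += 25.0          # bright speck standing in for a microcalcification

# Causal predictor: average of west, north, north-west and north-east neighbours.
# np.roll wraps at the borders; edge effects are ignored in this toy example.
pred = 0.25 * (np.roll(img, 1, axis=1)
               + np.roll(img, 1, axis=0)
               + np.roll(np.roll(img, 1, axis=0), 1, axis=1)
               + np.roll(np.roll(img, 1, axis=0), -1, axis=1))
error = img - pred

candidates = np.argwhere(error > 5.0 * error.std())
for r, c in candidates:
    patch = img[max(r - 2, 0):r + 3, max(c - 2, 0):c + 3]
    if img[r, c] > patch.mean() + 3.0 * patch.std():   # simple local statistic
        print("candidate microcalcification at", (int(r), int(c)))
```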

  10. The Effect of Error Correlation on Interfactor Correlation in Psychometric Measurement

    Science.gov (United States)

    Westfall, Peter H.; Henning, Kevin S. S.; Howell, Roy D.

    2012-01-01

    This article shows how interfactor correlation is affected by error correlations. Theoretical and practical justifications for error correlations are given, and a new equivalence class of models is presented to explain the relationship between interfactor correlation and error correlations. The class allows simple, parsimonious modeling of error…

  11. Measurements on pointing error and field of view of Cimel-318 Sun photometers in the scope of AERONET

    Directory of Open Access Journals (Sweden)

    B. Torres

    2013-08-01

    Full Text Available Sensitivity studies indicate that among the diverse error sources of ground-based sky radiometer observations, the pointing error plays an important role in the correct retrieval of aerosol properties. Accurate pointing is especially critical for the characterization of desert dust aerosol. The present work relies on the analysis of two new measurement procedures (cross and matrix) specifically designed for the evaluation of the pointing error in the standard instrument of the Aerosol Robotic Network (AERONET), the Cimel CE-318 Sun photometer. The first part of the analysis contains a preliminary study whose results establish the need for a Sun movement correction for an accurate evaluation of the pointing error from both new measurements. Once this correction is applied, both measurements show equivalent results with differences under 0.01° in the pointing error estimations. The second part of the analysis includes the incorporation of the cross procedure in the AERONET routine measurement protocol in order to monitor the pointing error in field instruments. The pointing error was evaluated using the data collected for more than a year, in 7 Sun photometers belonging to AERONET sites. The registered pointing error values were generally smaller than 0.1°, though in some instruments values up to 0.3° have been observed. Moreover, the pointing error analysis shows that this measurement can be useful to detect mechanical problems in the robots or dirtiness in the 4-quadrant detector used to track the Sun. Specifically, these mechanical faults can be detected due to the stable behavior of the values over time and vs. the solar zenith angle. Finally, the matrix procedure can be used to derive the value of the solid view angle of the instruments. The methodology has been implemented and applied for the characterization of 5 Sun photometers. To validate the method, a comparison with solid angles obtained from the vicarious calibration method was

  12. Method to resolve microphone and sample location errors in the two-microphone duct measurement method

    Science.gov (United States)

    Katz

    2000-11-01

    Utilizing the two-microphone impedance tube method, the normal incidence acoustic absorption and acoustic impedance can be measured for a given sample. This method relies on the measured transfer function between two microphones, and the knowledge of their precise location relative to each other and the sample material. In this article, a method is proposed to accurately determine these locations. A third sensor is added at the end of the tube to simplify the measurement. First, a justification and investigation of the method are presented. Second, reference terminations are measured to evaluate the accuracy of the apparatus. Finally, comparisons are made between the new method and current methods for determining these distances and the variations are discussed. From this, conclusions are drawn with regard to the applicability and need for the new method and under which circumstances it is applicable. Results show that the method provides a reliable determination of both microphone locations, which is not possible using the current techniques. Errors due to inaccurate determination of these parameters between methods were on the order of 3% for R and 12% for Re Z.

  13. Distance Measurement Error in Time-of-Flight Sensors Due to Shot Noise

    Directory of Open Access Journals (Sweden)

    Julio Illade-Quinteiro

    2015-02-01

    Full Text Available Unlike other noise sources, which can be reduced or eliminated by different signal processing techniques, shot noise is an ever-present noise component in any imaging system. In this paper, we present an in-depth study of the impact of shot noise on time-of-flight sensors in terms of the error introduced in the distance estimation. The paper addresses the effect of parameters, such as the size of the photosensor, the background and signal power or the integration time, and the resulting design trade-offs. The study is demonstrated with different numerical examples, which show that, in general, the phase-shift determination technique with two background measurements is the most suitable for pixel arrays of large resolution.

  14. Perceptual, durational and tongue displacement measures following articulation therapy for rhotic sound errors.

    Science.gov (United States)

    Bressmann, Tim; Harper, Susan; Zhylich, Irina; Kulkarni, Gajanan V

    2016-01-01

    Outcomes of articulation therapy for rhotic errors are usually assessed perceptually. However, our understanding of associated changes of tongue movement is limited. This study described perceptual, durational and tongue displacement changes over 10 sessions of articulation therapy for /ɹ/ in six children. Four of the participants also received ultrasound biofeedback of their tongue shape. Speech and tongue movement were recorded pre-therapy, after 5 sessions, in the final session and at a one month follow-up. Perceptually, listeners perceived improvement and classified more productions as /ɹ/ in the final and follow-up assessments. The durations of VɹV syllables at the midway point of the therapy were longer. Cumulative tongue displacement increased in the final session. The average standard deviation was significantly higher in the middle and final assessments. The duration and tongue displacement measures illustrated how articulation therapy affected tongue movement and may be useful for outcomes research about articulation therapy.

  15. A numerical algorithm to propagate navigation error covariance matrices associated with generalized strapdown inertial measurement units

    Science.gov (United States)

    Weir, Kent A.; Wells, Eugene M.

    1990-01-01

    The design and operation of a Strapdown Navigation Analysis Program (SNAP) developed to perform covariance analysis on spacecraft inertial-measurement-unit (IMU) navigation errors are described and demonstrated. Consideration is given to the IMU modeling subroutine (with user-specified sensor characteristics), the data input procedures, state updates and the simulation of instrument failures, the determination of the nominal trajectory, the mapping-matrix and Monte Carlo covariance-matrix propagation methods, and aided-navigation simulation. Numerical results are presented in tables for sample applications involving (1) the Galileo/IUS spacecraft from its deployment from the Space Shuttle to a point 10 to the 8th ft from the center of the earth and (2) the TDRS-C/IUS spacecraft from Space Shuttle liftoff to a point about 2 h before IUS deployment. SNAP is shown to give reliable results for both cases, with good general agreement between the mapping-matrix and Monte Carlo predictions.
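    The two propagation approaches mentioned, a mapping-matrix covariance update versus a Monte Carlo estimate, can be compared on a generic linear model; the two-state system below is an illustrative stand-in, not SNAP's IMU error model.

```python
# Sketch: linear "mapping matrix" covariance propagation P <- Phi P Phi^T + Q
# compared against a Monte Carlo estimate from propagated random samples.
import numpy as np

rng = np.random.default_rng(5)
dt = 1.0
Phi = np.array([[1.0, dt],
                [0.0, 1.0]])                 # position/velocity error transition
Q = np.diag([1e-4, 1e-6])                    # process noise per step
P = np.diag([1e-2, 1e-4])                    # initial error covariance

# Mapping-matrix propagation
P_map = P.copy()
for _ in range(100):
    P_map = Phi @ P_map @ Phi.T + Q

# Monte Carlo propagation
samples = rng.multivariate_normal(np.zeros(2), P, size=20000)
for _ in range(100):
    noise = rng.multivariate_normal(np.zeros(2), Q, size=samples.shape[0])
    samples = samples @ Phi.T + noise

print(np.round(P_map, 4))
print(np.round(np.cov(samples.T), 4))
```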

  16. A Reanalysis of Toomela (2003: Spurious measurement error as cause for common variance between personality factors

    Directory of Open Access Journals (Sweden)

    MATTHIAS ZIEGLER

    2009-03-01

    Full Text Available The present article reanalyzed data collected by Toomela (2003). The data contain personality self ratings and cognitive ability test results from n = 912 men with a military background. In his original article Toomela showed that in the group with the highest cognitive ability, Big-Five-Neuroticism and -Conscientiousness were substantially correlated and could no longer be clearly separated using exploratory factor analysis. The present reanalysis was based on the hypothesis that a spurious measurement error caused by situational demand was responsible. This means that people distorted their answers. Furthermore it was hypothesized that this situational demand was felt due to a person’s military rank but not due to his intelligence. Using a multigroup structural equation model, our hypothesis could be confirmed. Moreover, the results indicate that an uncorrelated trait model might represent personalities better when situational demand is partialized. Practical and theoretical implications are discussed.

  17. Reduction of truncation errors in planar, cylindrical, and partial spherical near-field antenna measurements

    DEFF Research Database (Denmark)

    Cano-Fácila, Francisco José; Pivnenko, Sergey; Sierra-Castaner, Manuel

    2012-01-01

    A method to reduce truncation errors in near-field antenna measurements is presented. The method is based on the Gerchberg-Papoulis iterative algorithm used to extrapolate band-limited functions and it is able to extend the valid region of the calculatedfar-field pattern up to the whole forward...... hemisphere. The extension of the valid region is achieved by the iterative application of atransformation between two different domains. After each transformation, a filtering process that is based on known information at each domain is applied. The first domain is the spectral domain in which the plane wave...... spectrum (PWS) is reliable only within a known region. The second domain is the field distribution over the antenna under test (AUT) plane in which the desired field is assumed to be concentrated on the antenna aperture. The method can be applied to any scanning geometry, but in this paper, only the planar...
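    The Gerchberg-Papoulis idea, alternating between two domains and re-imposing what is known in each, can be illustrated with a one-dimensional band-limited extrapolation toy problem; the sketch below is not the planar/cylindrical near-field implementation described in the paper.

```python
# Toy 1-D Gerchberg-Papoulis iteration: alternate between two domains, each time
# re-imposing what is known there (band limitation in the spectral domain,
# measured samples on part of the support in the other domain).
import numpy as np

rng = np.random.default_rng(6)
n = 256
band = np.zeros(n, bool)
band[:20] = band[-20:] = True                 # signal is band-limited

spectrum = np.zeros(n, complex)
spectrum[band] = rng.normal(size=band.sum()) + 1j * rng.normal(size=band.sum())
signal = np.fft.ifft(spectrum)                # "true" band-limited signal

known = np.zeros(n, bool)
known[:160] = True                            # truncated measurement region

estimate = np.where(known, signal, 0.0)
for _ in range(200):
    spec = np.fft.fft(estimate)
    spec[~band] = 0.0                         # filter: impose band limitation
    estimate = np.fft.ifft(spec)
    estimate[known] = signal[known]           # impose the known samples

err = np.abs(estimate - signal)[~known].max()
print("max error on the extrapolated region:", err)
```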

  18. Analysis of misclassified correlated binary data using a multivariate probit model when covariates are subject to measurement error.

    Science.gov (United States)

    Roy, Surupa; Banerjee, Tathagata

    2009-06-01

    A multivariate probit model for correlated binary responses given the predictors of interest has been considered. Some of the responses are subject to classification errors and hence are not directly observable. Also measurements on some of the predictors are not available; instead the measurements on its surrogate are available. However, the conditional distribution of the unobservable predictors given the surrogate is completely specified. Models are proposed taking into account either or both of these sources of errors. Likelihood-based methodologies are proposed to fit these models. To ascertain the effect of ignoring classification errors and/or measurement error on the estimates of the regression and correlation parameters, a sensitivity study is carried out through simulation. Finally, the proposed methodology is illustrated through an example.

  19. Assessing discharge measurement errors at a gauging station in a small catchment (Vallcebre, Eastern Pyrenees)

    Science.gov (United States)

    Nord, G.; Martín-Vide, J. P.; Latron, J.; Soler, M.; Gallart, F.

    2009-04-01

    The Cal Rodó catchment (4.17 km2) is located in a Mediterranean mountain area. Land cover is dominated by pastures and forest, and badlands represent 2.8% of the surface of the catchment. Elevation ranges between 1100 m and 1650 m, and average annual precipitation is about 900 mm with a heterogeneous distribution over the year. Autumn and spring are the seasons with the most precipitation. Flash floods are relatively frequent, especially in autumn, and are associated with high sediment transport. The period of observation ranges from 1994 to 2008. Discharge is measured at a gauging station controlled by a two-level rectangular notch weir with two different widths and contraction conditions that ensure a unique relationship between flow depth and discharge. The structure, designed to flush sediment, makes it possible to capture a wide range of discharge. Flow depth is measured using a pressure sensor. Instantaneous discharge was lower than 0.1 m3/s approximately 95% of the time and higher than 0.5 m3/s approximately 1% of the time. The largest runoff event measured produced an instantaneous discharge of approximately 10 m3/s. The second level of the gauging station was rarely reached, since it was flooded on average 1.5 times per year, but the corresponding events contributed approximately 60% of the sediment transport. The structure is efficient, as it was never submerged over the observed period and sediment deposition was negligible, but it has a complex shape that makes it difficult to relate water depth accurately to discharge, especially for large runoff events. In situ measurement of discharge by current meters or chemical dilution during high water stages is unfeasible due to the flashiness of the response. Therefore, a hydraulic physical model (scale 1:11) was set up and calibrated to improve the stage-discharge curve and estimate the measurement errors of discharge. Sources of errors taken into account in this study are related to the precision and calibration of the pressure

  20. Genetic properties of residual feed intakes for maintenance and growth and the implications of error measurement.

    Science.gov (United States)

    Rekaya, R; Aggrey, S E

    2015-03-01

    A procedure for estimating residual feed intake (RFI) based on information used in feeding studies is presented. Koch's classical model consists of using fixed regressions of feed intake on metabolic BW and growth, and RFI is obtained as the deviation between the observed feed intake and the expected intake for an individual with a given weight and growth rate. Estimated RFI following such a procedure intrinsically suffers from the inability to separate true RFI from the sampling error. As the latter is never equal to 0, estimated RFI is always biased, and the magnitude of such bias depends on the ratio between the true RFI variance and the residual variance. Additionally, the classical approach suffers from its inability to dissect RFI into its biological components, being the metabolic efficiency (maintaining BW) and growth efficiency. To remedy these problems we proposed a procedure that directly models the individual animal variation in feed efficiency used for body maintenance and growth. The proposed model is an extension of Koch's procedure by assuming animal-specific regression coefficients rather than population-level parameters. To evaluate the performance of both models, a data simulation was performed using the structure of an existing chicken data set consisting of 2,289 records. Data was simulated using 4 ratios between the true RFI and sampling error variances (1:1, 2:1, 4:1, and 10:1) and 5 correlation values between the 2 animal-specific random regression coefficients (-0.95, -0.5, 0, 0.5, and 0.95). The results clearly showed the superiority of the proposed model compared to Koch's procedure under all 20 simulation scenarios. In fact, when the ratio was 1:1 and the true genetic correlation was equal to -0.95, the correlation between the true and estimated RFI for animals in the top 20% was 0.60 and 0.51 for the proposed and Koch's models, respectively. This is an 18% superiority for the proposed model. For the bottom 20% of animals in the ranking
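    The classical Koch calculation that the paper takes as its starting point, regressing feed intake on metabolic body weight and gain and taking the residual as RFI, can be sketched as follows; the simulated data and coefficients are illustrative, and the authors' animal-specific random-regression model is not reproduced here.

```python
# Sketch of the classical Koch residual-feed-intake calculation: regress observed
# feed intake on metabolic body weight (BW^0.75) and gain, and take the residual
# as RFI. The low correlation with "true" RFI illustrates the sampling-error
# contamination discussed in the abstract.
import numpy as np

rng = np.random.default_rng(7)
n = 500
bw = rng.normal(2.0, 0.2, n)                   # body weight, kg
gain = rng.normal(0.05, 0.01, n)               # daily gain, kg/d
true_rfi = rng.normal(0.0, 0.01, n)            # true efficiency deviation
intake = 0.08 * bw**0.75 + 1.5 * gain + true_rfi + rng.normal(0.0, 0.01, n)

X = np.column_stack([np.ones(n), bw**0.75, gain])
beta, *_ = np.linalg.lstsq(X, intake, rcond=None)
rfi_hat = intake - X @ beta                    # estimated RFI = regression residual

print("correlation(true RFI, estimated RFI):",
      np.round(np.corrcoef(true_rfi, rfi_hat)[0, 1], 3))
```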

  1. Distributed Fusion Filtering in Networked Systems with Random Measurement Matrices and Correlated Noises

    Directory of Open Access Journals (Sweden)

    Raquel Caballero-Águila

    2015-01-01

    Full Text Available The distributed fusion state estimation problem is addressed for sensor network systems with random state transition matrix and random measurement matrices, which provide a unified framework to consider some network-induced random phenomena. The process noise and all the sensor measurement noises are assumed to be one-step autocorrelated and different sensor noises are one-step cross-correlated; also, the process noise and each sensor measurement noise are two-step cross-correlated. These correlation assumptions cover many practical situations, where the classical independence hypothesis is not realistic. Using an innovation methodology, local least-squares linear filtering estimators are recursively obtained at each sensor. The distributed fusion method is then used to form the optimal matrix-weighted sum of these local filters according to the mean squared error criterion. A numerical simulation example shows the accuracy of the proposed distributed fusion filtering algorithm and illustrates some of the network-induced stochastic uncertainties that can be dealt with in the current system model, such as sensor gain degradation, missing measurements, and multiplicative noise.
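    A minimal sketch of the fusion step is shown below, assuming the local estimates and the full joint error covariance (including the cross-covariance blocks) are already available; it uses the standard weighted least-squares form of matrix-weighted fusion rather than the paper's recursive innovation-based algorithm, and all numbers are hypothetical.

    ```python
    import numpy as np

    def matrix_weighted_fusion(x_hats, Sigma):
        """Fuse local estimates with optimal (minimum-MSE, unbiased) matrix weights.

        x_hats : list of m local state estimates, each of dimension n
        Sigma  : (m*n, m*n) joint error covariance of the stacked local estimates,
                 including the cross-covariance blocks between sensors
        """
        m, n = len(x_hats), x_hats[0].size
        E = np.vstack([np.eye(n)] * m)                 # stacked identity blocks
        Sigma_inv = np.linalg.inv(Sigma)
        P_fused = np.linalg.inv(E.T @ Sigma_inv @ E)   # fused error covariance
        W = P_fused @ E.T @ Sigma_inv                  # matrix weights, sum to identity
        return W @ np.concatenate(x_hats), P_fused

    # Hypothetical two-sensor example for a 2-dimensional state
    P1 = np.array([[2.0, 0.3], [0.3, 1.5]])
    P2 = np.array([[1.0, 0.1], [0.1, 3.0]])
    P12 = np.array([[0.2, 0.0], [0.0, 0.2]])           # cross-covariance (assumed)
    Sigma = np.block([[P1, P12], [P12.T, P2]])
    x1, x2 = np.array([1.02, -0.48]), np.array([0.95, -0.55])
    print(matrix_weighted_fusion([x1, x2], Sigma))
    ```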

  2. Non Random Distribution of DMD Deletion Breakpoints and Implication of Double Strand Breaks Repair and Replication Error Repair Mechanisms.

    Science.gov (United States)

    Marey, Isabelle; Ben Yaou, Rabah; Deburgrave, Nathalie; Vasson, Aurélie; Nectoux, Juliette; Leturcq, France; Eymard, Bruno; Laforet, Pascal; Behin, Anthony; Stojkovic, Tanya; Mayer, Michèle; Tiffreau, Vincent; Desguerre, Isabelle; Boyer, François Constant; Nadaj-Pakleza, Aleksandra; Ferrer, Xavier; Wahbi, Karim; Becane, Henri-Marc; Claustres, Mireille; Chelly, Jamel; Cossee, Mireille

    2016-05-27

    Dystrophinopathies are mostly caused by copy number variations, especially deletions, in the dystrophin gene (DMD). Despite the large size of the gene, deletions do not occur randomly but mainly in two hot spots, the main one involving exons 45 to 55. The underlying mechanisms are complex and implicate two main mechanisms: Non-homologous end joining (NHEJ) and micro-homology mediated replication-dependent recombination (MMRDR). Our goals were to assess the distribution of intronic breakpoints (BPs) in the genomic sequence of the main hot spot of deletions within DMD gene and to search for specific sequences at or near to BPs that might promote BP occurrence or be associated with DNA break repair. Using comparative genomic hybridization microarray, 57 deletions within the intron 44 to 55 region were mapped. Moreover, 21 junction fragments were sequenced to search for specific sequences. Non-randomly distributed BPs were found in introns 44, 47, 48, 49 and 53 and 50% of BPs clustered within genomic regions of less than 700bp. Repeated elements (REs), known to promote gene rearrangement via several mechanisms, were present in the vicinity of 90% of clustered BPs and less frequently (72%) close to scattered BPs, illustrating the important role of such elements in the occurrence of DMD deletions. Palindromic and TTTAAA sequences, which also promote DNA instability, were identified at fragment junctions in 20% and 5% of cases, respectively. Micro-homologies (76%) and insertions or deletions of small sequences were frequently found at BP junctions. Our results illustrate, in a large series of patients, the important role of RE and other genomic features in DNA breaks, and the involvement of different mechanisms in DMD gene deletions: Mainly replication error repair mechanisms, but also NHEJ and potentially aberrant firing of replication origins. A combination of these mechanisms may also be possible.

  3. Reducing the impact of measurement errors in FRF-based substructure decoupling using a modal model

    Science.gov (United States)

    Peeters, P.; Manzato, S.; Tamarozzi, T.; Desmet, W.

    2018-01-01

    As the vibro-acoustic requirements of modern products become more stringent, the need for robust identification methods increases proportionally. Sometimes the identification of a component is greatly complicated by the presence of a supporting structure that cannot be removed during testing. This is where substructure decoupling finds its main applications. However, despite some recent advances in substructure decoupling, the number of successful applications has so far been limited. The main reason for this is the poor conditioning of the problem that tends to amplify noise and other measurement errors. This paper proposes a new approach that uses a modal model to filter the experimental frequency response functions (FRFs). This can reduce the impact of noise and mass loading considerably for decoupling applications and decrease the quality requirements for experimental data. Furthermore, based on the uncertainty of the observed eigenfrequencies, an arbitrary number of consistent (all FRFs exhibit exactly the same poles) FRF matrices can be generated that are all contained within the variation of the original measurement. This way, the variation that is observed within the measurement is taken into account. The result is a distribution of decoupled FRFs of which the average can be used as the decoupled FRF set while the spread on the results highlights the sensitivity or reliability of the obtained results. After briefly reintroducing the theory of FRF-based substructure decoupling, the main problems in decoupling are summarized. Afterwards, the new methodology is presented and tested on both numerical and experimental cases.

  4. Systematic and Statistical Errors Associated with Nuclear Decay Constant Measurements Using the Counting Technique

    Science.gov (United States)

    Koltick, David; Wang, Haoyu; Liu, Shih-Chieh; Heim, Jordan; Nistor, Jonathan

    2016-03-01

    Typical nuclear decay constants are measured at the accuracy level of 10^-2. Numerous applications, such as tests of unconventional theories, dating of materials, and long-term inventory evolution, require decay constant accuracies at the level of 10^-4 to 10^-5. The statistical and systematic errors associated with precision measurements of decays using the counting technique are presented. Precision requires high count rates, which introduce time-dependent dead-time and pile-up corrections. An approach to overcome these issues, based on continuous recording of the detector current, is presented. Other systematic corrections include the time-dependent dead time due to background radiation, control of target motion and radiation flight-path variation due to environmental conditions, and time-dependent effects caused by scattered events. The incorporation of blind experimental techniques can help make measurements independent of past results. A spectrometer design and data analysis approach that can accomplish these goals is reviewed. The author would like to thank TechSource, Inc. and Advanced Physics Technologies, LLC. for their support in this work.

  5. Valuing urban open space using the travel-cost method and the implications of measurement error.

    Science.gov (United States)

    Hanauer, Merlin M; Reid, John

    2017-08-01

    Urbanization has placed pressure on open space within and adjacent to cities. In recent decades, a greater awareness has developed to the fact that individuals derive multiple benefits from urban open space. Given the location, there is often a high opportunity cost to preserving urban open space, thus it is important for both public and private stakeholders to justify such investments. The goals of this study are twofold. First, we use detailed surveys and precise, accessible, mapping methods to demonstrate how travel-cost methods can be applied to the valuation of urban open space. Second, we assess the degree to which typical methods of estimating travel times, and thus travel costs, introduce bias to the estimates of welfare. The site we study is Taylor Mountain Regional Park, a 1100-acre space located immediately adjacent to Santa Rosa, California, which is the largest city (∼170,000 population) in Sonoma County and lies 50 miles north of San Francisco. We estimate that the average per trip access value (consumer surplus) is $13.70. We also demonstrate that typical methods of measuring travel costs significantly understate these welfare measures. Our study provides policy-relevant results and highlights the sensitivity of urban open space travel-cost studies to bias stemming from travel-cost measurement error. Copyright © 2017 Elsevier Ltd. All rights reserved.

  6. Demonstrating the Error Budget for the Climate Absolute Radiance and Refractivity Observatory Through Solar Irradiance Measurements

    Science.gov (United States)

    Thome, Kurtis; McCorkel, Joel; McAndrew, Brendan

    2016-01-01

    The Climate Absolute Radiance and Refractivity Observatory (CLARREO) mission addresses the need to observe high-accuracy, long-term climate change trends and to use decadal change observations as a method to determine the accuracy of climate change. A CLARREO objective is to improve the accuracy of SI-traceable, absolute calibration at infrared and reflected solar wavelengths to reach on-orbit accuracies required to allow climate change observations to survive data gaps and observe climate change at the limit of natural variability. Such an effort will also demonstrate National Institute of Standards and Technology (NIST) approaches for use in future spaceborne instruments. The current work describes the results of laboratory and field measurements with the Solar, Lunar for Absolute Reflectance Imaging Spectroradiometer (SOLARIS), which is the calibration demonstration system (CDS) for the reflected solar portion of CLARREO. SOLARIS allows testing and evaluation of calibration approaches, alternate design and/or implementation approaches and components for the CLARREO mission. SOLARIS also provides a test-bed for detector technologies, non-linearity determination and uncertainties, and application of future technology developments and suggested spacecraft instrument design modifications. Results of laboratory calibration measurements are provided to demonstrate key assumptions about instrument behavior that are needed to achieve CLARREO's climate measurement requirements. Absolute radiometric response is determined using laser-based calibration sources and applied to direct solar views for comparison with accepted solar irradiance models to demonstrate accuracy values giving confidence in the error budget for the CLARREO reflectance retrieval.

  7. Obesity increases precision errors in total body dual-energy x-ray absorptiometry measurements.

    Science.gov (United States)

    Knapp, Karen M; Welsman, Joanne R; Hopkins, Susan J; Shallcross, Andrew; Fogelman, Ignac; Blake, Glen M

    2015-01-01

    Total body (TB) dual-energy X-ray absorptiometry (DXA) is increasingly being used to measure body composition in research and clinical settings. This study investigated the effect of body mass index (BMI) and body fat on precision errors for total and regional TB DXA measurements of bone mineral density, fat tissue, and lean tissue using the GE Lunar Prodigy (GE Healthcare, Bedford, UK). One hundred forty-four women with BMIs ranging from 18.5 to 45.9 kg/m² were recruited. Participants had duplicate DXA scans of the TB with repositioning between examinations. Participants were divided into 3 groups based on their BMI, and the root mean square standard deviation and the percentage coefficient of variation were calculated for each group. The root mean square standard deviation (percentage coefficient of variation) for the normal, overweight, and obese (>30 kg/m²; n = 32) BMI groups, respectively, were: total BMD (g/cm²): 0.009 (0.77%), 0.009 (0.69%), 0.011 (0.91%); total fat (g): 545 (2.98%), 486 (1.72%), 677 (1.55%); total lean (g): 551 (1.42%), 540 (1.34%), and 781 (1.68%). These results suggest that serial measurements in obese subjects should be treated with caution because the least significant change may be larger than anticipated. Copyright © 2015 The International Society for Clinical Densitometry. Published by Elsevier Inc. All rights reserved.
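    For reference, the precision figures quoted above follow from the usual short-term precision formulas for duplicate scans; the sketch below applies them to made-up values rather than the study data.

    ```python
    import numpy as np

    def precision_from_duplicates(scan1, scan2):
        """Root mean square SD and %CV from paired repeat measurements.

        For duplicate scans the per-participant SD is |x1 - x2| / sqrt(2);
        RMS-SD is the root mean square of these SDs, and the %CV divides it
        by the grand mean of all measurements.
        """
        scan1, scan2 = np.asarray(scan1, float), np.asarray(scan2, float)
        sd_i = np.abs(scan1 - scan2) / np.sqrt(2.0)
        rms_sd = np.sqrt(np.mean(sd_i ** 2))
        cv_pct = 100.0 * rms_sd / np.mean((scan1 + scan2) / 2.0)
        return rms_sd, cv_pct

    # Hypothetical duplicate total-fat measurements (g) for five participants
    fat1 = [22000, 31000, 41000, 27000, 35000]
    fat2 = [22600, 30500, 41900, 26800, 35400]
    print(precision_from_duplicates(fat1, fat2))
    ```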

  8. Statistical method for quality control in presence of measurement errors; Methodes statistiques pour le controle de qualite en presence d'erreurs de mesure

    Energy Technology Data Exchange (ETDEWEB)

    Lauer-Peccoud, M.R

    1998-12-31

    In a quality inspection of a set of items where the measured values of a quality characteristic are contaminated by random errors, one can take wrong decisions that are damaging to quality. It is therefore important to control the risks in such a way that a final quality level is ensured. An item is considered defective or not according to whether the value G of its quality characteristic is larger or smaller than a given level g. We assume that, due to the limited precision of the measurement instrument, the measurement M of this characteristic is expressed as M = f(G) + ξ, where f is an increasing function such that the value f(g0) is known and ξ is a random error with mean zero and given variance. First, we study the determination of a critical measure m such that a specified quality target is reached after the classification of a lot of items, where each item is accepted or rejected depending on whether its measurement is smaller or greater than m. Then we analyse the problem of testing the global quality of a lot from measurements on a sample of items taken from the lot. For these two kinds of problems and for different quality targets, we propose solutions, with emphasis on the case where the function f is linear and the error ξ and the variable G are Gaussian. Simulation results allow the efficiency of the different control procedures considered to be assessed, as well as their robustness with respect to deviations from the assumptions used in the theoretical derivations. (author) 42 refs.

  9. 50 nm AlxOy resistive random access memory array program bit error reduction and high temperature operation

    Science.gov (United States)

    Ning, Sheyang; Ogura Iwasaki, Tomoko; Takeuchi, Ken

    2014-01-01

    In order to decrease the program bit error rate (BER) of array-level operation in AlxOy resistive random access memory (ReRAM), program BERs are compared by using 4 × 4 basic set and reset with verify methods on multiple 1024-bit pages in 50 nm, mega-bit class ReRAM arrays. Further, by using an optimized reset method, an 8.5% total BER reduction is obtained after 10^4 write cycles due to avoiding under-reset or weak reset and ameliorating over-reset caused wear-out. Then, under-set and over-set are analyzed by tuning the set word line voltage (VWL) by ±0.1 V. Moderate set current shows the best total BER. Finally, 2000 write cycles are applied at 125 and 25 °C, respectively. Reset BER increases 28.5% at 125 °C whereas set BER shows little difference, by using the optimized reset method. By applying write cycles over a 25 to 125 to 25 °C temperature variation, an immediate reset BER change can be found after the temperature transition.

  10. Prediction of rainfall intensity measurement errors using commercial microwave communication links

    Directory of Open Access Journals (Sweden)

    A. Zinevich

    2010-10-01

    Full Text Available Commercial microwave radio links forming cellular communication networks are known to be a valuable instrument for measuring near-surface rainfall. However, operational communication links are more uncertain than dedicated installations, since their geometry and frequencies are optimized for high communication performance rather than for observing rainfall. Quantification of the uncertainties of measurements that are non-optimal in the first place is essential to ensure the usability of the data.

    In this work we address modeling of instrumental impairments, i.e. signal variability due to antenna wetting, baseline attenuation uncertainty and digital quantization, as well as environmental ones, i.e. variability of drop size distribution along a link affecting accuracy of path-averaged rainfall measurement and spatial variability of rainfall in the link's neighborhood affecting the accuracy of rainfall estimation out of the link path. Expressions for root mean squared error (RMSE for estimates of path-averaged and point rainfall have been derived. To verify the RMSE expressions quantitatively, path-averaged measurements from 21 operational communication links in 12 different locations have been compared to records of five nearby rain gauges over three rainstorm events.

    The experiments show that the prediction accuracy is above 90% for temporal accumulation less than 30 min and lowers for longer accumulation intervals. Spatial variability in the vicinity of the link, baseline attenuation uncertainty and, possibly, suboptimality of wet antenna attenuation model are the major sources of link-gauge discrepancies. In addition, the dependence of the optimal coefficients of a conventional wet antenna attenuation model on spatial rainfall variability and, accordingly, link length has been shown.

    The expressions for RMSE of the path-averaged rainfall estimates can be useful for integration of measurements from multiple
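    The retrieval behind such link-based rainfall estimates is the power-law relation between rain-induced attenuation and rain rate; the sketch below inverts it for a single link, using placeholder power-law coefficients and a crude constant wet-antenna correction (both assumptions, not the paper's calibrated models).

    ```python
    import numpy as np

    def rain_rate_from_attenuation(attenuation_db, link_length_km,
                                   a=0.12, b=1.06, wet_antenna_db=1.0):
        """Path-averaged rain rate (mm/h) from the A = a * R**b * L power law.

        attenuation_db : rain-induced attenuation, i.e. loss relative to the dry baseline
        a, b           : power-law coefficients (placeholders; frequency/polarization dependent)
        wet_antenna_db : crude constant correction for antenna wetting (assumption)
        """
        rain_att = np.maximum(attenuation_db - wet_antenna_db, 0.0)
        specific_att = rain_att / link_length_km        # dB/km
        return (specific_att / a) ** (1.0 / b)

    print(rain_rate_from_attenuation(6.5, link_length_km=4.0))   # roughly 10 mm/h
    ```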

  11. Impedance measurement using a two-microphone, random-excitation method

    Science.gov (United States)

    Seybert, A. F.; Parrott, T. L.

    1978-01-01

    The feasibility of using a two-microphone, random-excitation technique for the measurement of acoustic impedance was studied. Equations were developed, including the effect of mean flow, which show that acoustic impedance is related to the pressure ratio and phase difference between two points in a duct carrying plane waves only. The impedances of a honeycomb ceramic specimen and a Helmholtz resonator were measured and compared with impedances obtained using the conventional standing-wave method. Agreement between the two methods was generally good. A sensitivity analysis was performed to pinpoint possible error sources and recommendations were made for future study. The two-microphone approach evaluated in this study appears to have some advantages over other impedance measuring techniques.
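    The underlying relation, impedance from the complex pressure ratio between two duct positions, can be sketched with the familiar no-flow transfer-function formulation shown below; the random-excitation method of the paper estimates this transfer function from cross- and auto-spectra of broadband noise rather than discrete tones, and the geometry and numbers here are hypothetical.

    ```python
    import numpy as np

    def normal_incidence_impedance(f, H12, mic_spacing, x1, c=343.0):
        """Normalized surface impedance from a two-microphone transfer function.

        f           : frequency (Hz)
        H12         : complex transfer function p2/p1 (microphone 2 nearer the sample)
        mic_spacing : distance s between the microphones (m)
        x1          : distance from the sample surface to the farther microphone (m)
        """
        k = 2.0 * np.pi * f / c
        r = (H12 - np.exp(-1j * k * mic_spacing)) / (np.exp(1j * k * mic_spacing) - H12)
        r *= np.exp(2j * k * x1)            # reflection coefficient at the surface
        z = (1.0 + r) / (1.0 - r)           # impedance normalized by rho*c
        return z, r

    # Hypothetical single-frequency example
    z, r = normal_incidence_impedance(f=1000.0, H12=0.6 - 0.3j, mic_spacing=0.03, x1=0.10)
    print(z, abs(r))
    ```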

  12. Measuring symmetry, asymmetry and randomness in neural network connectivity.

    Directory of Open Access Journals (Sweden)

    Umberto Esposito

    Full Text Available Cognitive functions are stored in the connectome, the wiring diagram of the brain, which exhibits non-random features, so-called motifs. In this work, we focus on bidirectional, symmetric motifs, i.e. two neurons that project to each other via connections of equal strength, and unidirectional, non-symmetric motifs, i.e. within a pair of neurons only one neuron projects to the other. We hypothesise that such motifs have been shaped via activity dependent synaptic plasticity processes. As a consequence, learning moves the distribution of the synaptic connections away from randomness. Our aim is to provide a global, macroscopic, single parameter characterisation of the statistical occurrence of bidirectional and unidirectional motifs. To this end we define a symmetry measure that does not require any a priori thresholding of the weights or knowledge of their maximal value. We calculate its mean and variance for random uniform or Gaussian distributions, which allows us to introduce a confidence measure of how significantly symmetric or asymmetric a specific configuration is, i.e. how likely it is that the configuration is the result of chance. We demonstrate the discriminatory power of our symmetry measure by inspecting the eigenvalues of different types of connectivity matrices. We show that a Gaussian weight distribution biases the connectivity motifs to more symmetric configurations than a uniform distribution and that introducing a random synaptic pruning, mimicking developmental regulation in synaptogenesis, biases the connectivity motifs to more asymmetric configurations, regardless of the distribution. We expect that our work will benefit the computational modelling community, by providing a systematic way to characterise symmetry and asymmetry in network structures. Further, our symmetry measure will be of use to electrophysiologists that investigate symmetry of network connectivity.
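    One simple way to reduce this idea to a single global number, not necessarily the authors' exact definition, is to compare the energies of the symmetric and antisymmetric parts of the weight matrix, as sketched below.

    ```python
    import numpy as np

    def symmetry_index(W):
        """Global symmetry index of a connectivity matrix (self-connections ignored).

        Returns +1 for a perfectly symmetric matrix, -1 for a perfectly
        antisymmetric one, and values near 0 for zero-mean unstructured weights.
        """
        W = np.asarray(W, float).copy()
        np.fill_diagonal(W, 0.0)
        Ws = (W + W.T) / 2.0            # symmetric (bidirectional) part
        Wa = (W - W.T) / 2.0            # antisymmetric (unidirectional) part
        es, ea = np.sum(Ws ** 2), np.sum(Wa ** 2)
        return (es - ea) / (es + ea)

    rng = np.random.default_rng(1)
    W = rng.normal(size=(100, 100))                       # zero-mean random weights
    print("random:", symmetry_index(W))                   # close to 0
    print("symmetrized:", symmetry_index((W + W.T) / 2))  # exactly 1
    ```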

  13. Multicollinearity and Measurement Error in Structural Equation Models: Implications for Theory Testing

    OpenAIRE

    Rajdeep Grewal; Joseph A. Cote; Hans Baumgartner

    2004-01-01

    The literature on structural equation models is unclear on whether and when multicollinearity may pose problems in theory testing (Type II errors). Two Monte Carlo simulation experiments show that multicollinearity can cause problems under certain conditions, specifically: (1) when multicollinearity is extreme, Type II error rates are generally unacceptably high (over 80%), (2) when multicollinearity is between 0.6 and 0.8, Type II error rates can be substantial (greater than 50% and frequent...

  14. Analysis of influence on back-EMF based sensorless control of PMSM due to parameter variations and measurement errors

    DEFF Research Database (Denmark)

    Wang, Z.; Lu, K.; Ye, Y.

    2011-01-01

    To achieve better performance of sensorless control of PMSM, a precise and stable estimation of rotor position and speed is required. Several parameter uncertainties and variable measurement errors may lead to estimation error, such as resistance and inductance variations due to temperature and flux saturation, current and voltage errors due to measurement uncertainties, and signal delay caused by hardware. This paper reveals some inherent principles for the performance of the back-EMF based sensorless algorithm embedded in a surface-mounted PMSM system adopting a vector control strategy, gives mathematical analysis and experimental results to support the principles, and quantifies the effects of each. It may serve as guidance for designers to minimize the estimation error and make proper on-line parameter estimations.

  15. Software Tool for Analysis of Breathing-Related Errors in Transthoracic Electrical Bioimpedance Spectroscopy Measurements

    Science.gov (United States)

    Abtahi, F.; Gyllensten, I. C.; Lindecrantz, K.; Seoane, F.

    2012-12-01

    During the last decades, Electrical Bioimpedance Spectroscopy (EBIS) has been applied in a range of different applications, mainly using the frequency-sweep technique. Traditionally the tissue under study is considered to be time-invariant, and dynamic changes of tissue activity are ignored and instead treated as a noise source. This assumption has not been adequately tested and could have a negative impact and limit the accuracy of impedance monitoring systems. In order to successfully use frequency-sweeping EBIS for monitoring time-variant systems, it is paramount to study the effect of frequency-sweep delay on Cole model-based analysis. In this work, we present a software tool that can be used to simulate the influence of respiration activity in frequency-sweep EBIS measurements of the human thorax and analyse the effects of the different error sources. Preliminary results indicate that the deviation in the EBIS measurement might be significant at any frequency, and especially in the impedance plane. Therefore the impact on Cole-model analysis might be different depending on the method applied for Cole parameter estimation.
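    To make the frequency-sweep problem concrete, the sketch below evaluates a Cole impedance model whose R0 parameter is modulated by breathing while the sweep progresses, so each frequency point samples a slightly different physiological state; all parameter values and the sweep timing are illustrative assumptions, not outputs of the presented tool.

    ```python
    import numpy as np

    def cole_impedance(f, R0, Rinf, tau, alpha):
        """Cole model: Z(w) = Rinf + (R0 - Rinf) / (1 + (j*w*tau)**alpha)."""
        w = 2.0 * np.pi * np.asarray(f, float)
        return Rinf + (R0 - Rinf) / (1.0 + (1j * w * tau) ** alpha)

    freqs = np.logspace(3, 6, 50)                    # 1 kHz .. 1 MHz sweep
    sweep_t = np.linspace(0.0, 2.0, freqs.size)      # 2 s sweep duration (assumed)
    R0_t = 450.0 + 10.0 * np.sin(2 * np.pi * 0.25 * sweep_t)   # breathing at 0.25 Hz

    Z_static = cole_impedance(freqs, R0=450.0, Rinf=300.0, tau=1e-5, alpha=0.8)
    Z_sweep = np.array([cole_impedance(f, R0, 300.0, 1e-5, 0.8)
                        for f, R0 in zip(freqs, R0_t)])
    print("max |deviation| (ohm):", np.max(np.abs(Z_sweep - Z_static)))
    ```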

  16. Impact of random pointing and tracking errors on the design of coherent and incoherent optical intersatellite communication links

    Science.gov (United States)

    Chen, Chien-Chung; Gardner, Chester S.

    1989-01-01

    Given the rms transmitter pointing error and the desired probability of bit error (PBE), it can be shown that an optimal transmitter antenna gain exists which minimizes the required transmitter power. Given the rms local oscillator tracking error, an optimum receiver antenna gain can be found which optimizes the receiver performance. The impact of pointing and tracking errors on the design of direct-detection pulse-position modulation (PPM) and heterodyne noncoherent frequency-shift keying (NCFSK) systems are then analyzed in terms of constraints on the antenna size and the power penalty incurred. It is shown that in the limit of large spatial tracking errors, the advantage in receiver sensitivity for the heterodyne system is quickly offset by the smaller antenna gain and the higher power penalty due to tracking errors. In contrast, for systems with small spatial tracking errors, the heterodyne system is superior because of the higher receiver sensitivity.

  17. Quantification of LiDAR measurement uncertainty through propagation of errors due to sensor sub-systems and terrain morphology

    Science.gov (United States)

    Goulden, T.; Hopkinson, C.

    2013-12-01

    The quantification of LiDAR sensor measurement uncertainty is important for evaluating the quality of derived DEM products, compiling risk assessment of management decisions based from LiDAR information, and enhancing LiDAR mission planning capabilities. Current quality assurance estimates of LiDAR measurement uncertainty are limited to post-survey empirical assessments or vendor estimates from commercial literature. Empirical evidence can provide valuable information for the performance of the sensor in validated areas; however, it cannot characterize the spatial distribution of measurement uncertainty throughout the extensive coverage of typical LiDAR surveys. Vendor advertised error estimates are often restricted to strict and optimal survey conditions, resulting in idealized values. Numerical modeling of individual pulse uncertainty provides an alternative method for estimating LiDAR measurement uncertainty. LiDAR measurement uncertainty is theoretically assumed to fall into three distinct categories, 1) sensor sub-system errors, 2) terrain influences, and 3) vegetative influences. This research details the procedures for numerical modeling of measurement uncertainty from the sensor sub-system (GPS, IMU, laser scanner, laser ranger) and terrain influences. Results show that errors tend to increase as the laser scan angle, altitude or laser beam incidence angle increase. An experimental survey over a flat and paved runway site, performed with an Optech ALTM 3100 sensor, showed an increase in modeled vertical errors of 5 cm, at a nadir scan orientation, to 8 cm at scan edges; for an aircraft altitude of 1200 m and half scan angle of 15°. In a survey with the same sensor, at a highly sloped glacial basin site absent of vegetation, modeled vertical errors reached over 2 m. Validation of error models within the glacial environment, over three separate flight lines, respectively showed 100%, 85%, and 75% of elevation residuals fell below error predictions. Future

  18. Setup error and motion during deep inspiration breath-hold breast radiotherapy measured with continuous portal imaging

    DEFF Research Database (Denmark)

    Lutz, Christina Maria; Poulsen, Per Rugaard; Fledelius, Walther

    2016-01-01

    At every third treatment fraction, continuous portal images were acquired. The time-resolved chest wall position during treatment was compared with the planned position to determine the inter-fraction setup errors and the intra-fraction motion of the chest wall. RESULTS: The DIBH compliance was 95% during both recruitment periods. A tendency towards smaller inter-fraction setup errors and intra-fraction motion was observed for group 2 (medial marker block position). However, apart from a significantly reduced inter-field random shift (σ = 1.7 mm vs. σ = 0.9 mm, p = 0.005), no statistically significant differences between the groups were found. In a combined analysis, the group mean inter-fraction setup error was M = -0.1 mm, with random and systematic errors of σ = 1.7 mm and Σ = 1.4 mm. The group mean inter-field shift was M = 0.0 mm (σ = 1.3 mm and Σ = 1.1 mm) and the group mean standard deviation

  19. The Thirty Gigahertz Instrument Receiver for the QUIJOTE Experiment: Preliminary Polarization Measurements and Systematic-Error Analysis

    Directory of Open Access Journals (Sweden)

    Francisco J. Casas

    2015-08-01

    Full Text Available This paper presents preliminary polarization measurements and systematic-error characterization of the Thirty Gigahertz Instrument receiver developed for the QUIJOTE experiment. The instrument has been designed to measure the polarization of Cosmic Microwave Background radiation from the sky, obtaining the Q, U, and I Stokes parameters of the incoming signal simultaneously. Two kinds of linearly polarized input signals have been used as excitations in the polarimeter measurement tests in the laboratory; these show consistent results in terms of the Stokes parameters obtained. A measurement-based systematic-error characterization technique has been used in order to determine the possible sources of instrumental errors and to assist in the polarimeter calibration process.

  20. A Measurement Error Model for Physical Activity Level as Measured by a Questionnaire With Application to the 1999–2006 NHANES Questionnaire

    OpenAIRE

    Tooze, Janet A.; Troiano, Richard P.; Carroll, Raymond J.; Moshfegh, Alanna J.; Freedman, Laurence S

    2013-01-01

    Systematic investigations into the structure of measurement error of physical activity questionnaires are lacking. We propose a measurement error model for a physical activity questionnaire that uses physical activity level (the ratio of total energy expenditure to basal energy expenditure) to relate questionnaire-based reports of physical activity level to true physical activity levels. The 1999–2006 National Health and Nutrition Examination Survey physical activity questionnaire was adminis...

  1. Analysis of the sources of error in the determination of sound power based on sound intensity measurements

    DEFF Research Database (Denmark)

    Santillan, Arturo Orozco; Jacobsen, Finn

    2010-01-01

    the resulting measurement uncertainty. The purpose of this paper is to analyze the effect of the most common sources of error in sound power determination based on sound intensity measurements. In particular the influence of the scanning procedure used in approximating the surface integral of the intensity...

  2. A Brief Look at: Test Scores and the Standard Error of Measurement. E&R Report No. 10.13

    Science.gov (United States)

    Holdzkom, David; Sumner, Brian; McMillen, Brad

    2010-01-01

    In the context of standardized testing, the standard error of measurement (SEM) is a measure of the factors other than the student's actual knowledge of the tested material that may affect the student's test score. Such factors may include distractions in the testing environment, fatigue, hunger, or even luck. This means that a student's observed…
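    In classical test theory the SEM is usually computed from the score spread and the test's reliability and then used to place a band around an observed score; a small sketch with made-up numbers (not taken from the report) follows.

    ```python
    import math

    def sem_and_band(observed_score, test_sd, reliability, z=1.96):
        """SEM = SD * sqrt(1 - reliability), plus an approximate 95% score band."""
        sem = test_sd * math.sqrt(1.0 - reliability)
        return sem, (observed_score - z * sem, observed_score + z * sem)

    # Hypothetical scale score of 420, score SD of 25, reliability of 0.90
    print(sem_and_band(observed_score=420, test_sd=25, reliability=0.90))
    # SEM is about 7.9, so the plausible range for the "true" score is roughly 404-436.
    ```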

  3. Local measurement of error field using naturally rotating tearing mode dynamics in EXTRAP T2R

    CERN Document Server

    Sweeney, R M; Brunsell, P; Fridström, R; Volpe, F A

    2016-01-01

    An error field (EF) detection technique using the amplitude modulation of a naturally rotating tearing mode (TM) is developed and validated in the EXTRAP T2R reversed field pinch. The technique was used to identify intrinsic EFs of $m/n = 1/-12$, where $m$ and $n$ are the poloidal and toroidal mode numbers. The effect of the EF and of a resonant magnetic perturbation (RMP) on the TM, in particular on amplitude modulation, is modeled with a first-order solution of the Modified Rutherford Equation. In the experiment, the TM amplitude is measured as a function of the toroidal angle as the TM rotates rapidly in the presence of an unknown EF and a known, deliberately applied RMP. The RMP amplitude is fixed while the toroidal phase is varied from one discharge to the other, completing a full toroidal scan. Using three such scans with different RMP amplitudes, the EF amplitude and phase are inferred from the phases at which the TM amplitude maximizes. The estimated EF amplitude is consistent with other estimates (e....

  4. Statistical theory for estimating sampling errors of regional radiation averages based on satellite measurements

    Science.gov (United States)

    Smith, G. L.; Bess, T. D.; Minnis, P.

    1983-01-01

    The processes which determine the weather and climate are driven by the radiation received by the earth and the radiation subsequently emitted. A knowledge of the absorbed and emitted components of radiation is thus fundamental for the study of these processes. In connection with the desire to improve the quality of long-range forecasting, NASA is developing the Earth Radiation Budget Experiment (ERBE), consisting of a three-channel scanning radiometer and a package of nonscanning radiometers. A set of these instruments is to be flown on both the NOAA-F and NOAA-G spacecraft, in sun-synchronous orbits, and on an Earth Radiation Budget Satellite. The purpose of the scanning radiometer is to obtain measurements from which the average reflected solar radiant exitance and the average earth-emitted radiant exitance at a reference level can be established. The estimate of regional average exitance obtained will not exactly equal the true value of the regional average exitance, but will differ due to spatial sampling. A method is presented for evaluating this spatial sampling error.

  5. Local measurement of error field using naturally rotating tearing mode dynamics in EXTRAP T2R

    Science.gov (United States)

    Sweeney, R. M.; Frassinetti, L.; Brunsell, P.; Fridström, R.; Volpe, F. A.

    2016-12-01

    An error field (EF) detection technique using the amplitude modulation of a naturally rotating tearing mode (TM) is developed and validated in the EXTRAP T2R reversed field pinch. The technique was used to identify intrinsic EFs of m/n  =  1/-12, where m and n are the poloidal and toroidal mode numbers. The effect of the EF and of a resonant magnetic perturbation (RMP) on the TM, in particular on amplitude modulation, is modeled with a first-order solution of the modified Rutherford equation. In the experiment, the TM amplitude is measured as a function of the toroidal angle as the TM rotates rapidly in the presence of an unknown EF and a known, deliberately applied RMP. The RMP amplitude is fixed while the toroidal phase is varied from one discharge to the other, completing a full toroidal scan. Using three such scans with different RMP amplitudes, the EF amplitude and phase are inferred from the phases at which the TM amplitude maximizes. The estimated EF amplitude is consistent with other estimates (e.g. based on the best EF-cancelling RMP, resulting in the fastest TM rotation). A passive variant of this technique is also presented, where no RMPs are applied, and the EF phase is deduced.

  6. Measurement-based analysis of error latency. [in computer operating system

    Science.gov (United States)

    Chillarege, Ram; Iyer, Ravishankar K.

    1987-01-01

    This paper demonstrates a practical methodology for the study of error latency under a real workload. The method is illustrated with sampled data on the physical memory activity, gathered by hardware instrumentation on a VAX 11/780 during the normal workload cycle of the installation. These data are used to simulate fault occurrence and to reconstruct the error discovery process in the system. The technique provides a means to study the system under different workloads and for multiple days. An approach to determine the percentage of undiscovered errors is also developed and a verification of the entire methodology is performed. This study finds that the mean error latency, in the memory containing the operating system, varies by a factor of 10 to 1 (in hours) between the low and high workloads. It is found that of all errors occurring within a day, 70 percent are detected in the same day, 82 percent within the following day, and 91 percent within the third day. The increase in failure rate due to latency is not so much a function of remaining errors but is dependent on whether or not there is a latent error.

  7. Echocardiographic methods, quality review, and measurement accuracy in a randomized multicenter clinical trial of Marfan syndrome.

    Science.gov (United States)

    Selamet Tierney, Elif Seda; Levine, Jami C; Chen, Shan; Bradley, Timothy J; Pearson, Gail D; Colan, Steven D; Sleeper, Lynn A; Campbell, M Jay; Cohen, Meryl S; De Backer, Julie; Guey, Lin T; Heydarian, Haleh; Lai, Wyman W; Lewin, Mark B; Marcus, Edward; Mart, Christopher R; Pignatelli, Ricardo H; Printz, Beth F; Sharkey, Angela M; Shirali, Girish S; Srivastava, Shubhika; Lacro, Ronald V

    2013-06-01

    The Pediatric Heart Network is conducting a large international randomized trial to compare aortic root growth and other cardiovascular outcomes in 608 subjects with Marfan syndrome randomized to receive atenolol or losartan for 3 years. The authors report here the echocardiographic methods and baseline echocardiographic characteristics of the randomized subjects, describe the interobserver agreement of aortic measurements, and identify factors influencing agreement. Individuals aged 6 months to 25 years who met the original Ghent criteria and had body surface area-adjusted maximum aortic root diameter (ROOTmax) Z scores > 3 were eligible for inclusion. The primary outcome measure for the trial is the change over time in ROOTmax Z score. A detailed echocardiographic protocol was established and implemented across 22 centers, with an extensive training and quality review process. Interobserver agreement for the aortic measurements was excellent, with intraclass correlation coefficients ranging from 0.921 to 0.989. Lower interobserver percentage error in ROOTmax measurements was independently associated (model R(2) = 0.15) with better image quality (P = .002) and later study reading date (P < .001). Echocardiographic characteristics of the randomized subjects did not differ by treatment arm. Subjects with ROOTmax Z scores ≥ 4.5 (36%) were more likely to have mitral valve prolapse and dilation of the main pulmonary artery and left ventricle, but there were no differences in aortic regurgitation, aortic stiffness indices, mitral regurgitation, or left ventricular function compared with subjects with ROOTmax Z scores < 4.5. The echocardiographic methodology, training, and quality review process resulted in a robust evaluation of aortic root dimensions, with excellent reproducibility. Copyright © 2013 American Society of Echocardiography. Published by Mosby, Inc. All rights reserved.

  8. Impact of shrinking measurement error budgets on qualification metrology sampling and cost

    Science.gov (United States)

    Sendelbach, Matthew; Sarig, Niv; Wakamoto, Koichi; Kim, Hyang Kyun (Helen); Isbester, Paul; Asano, Masafumi; Matsuki, Kazuto; Vaid, Alok; Osorio, Carmen; Archie, Chas

    2014-04-01

    When designing an experiment to assess the accuracy of a tool as compared to a reference tool, semiconductor metrologists are often confronted with the situation that they must decide on the sampling strategy before the measurements begin. This decision is usually based largely on the previous experience of the metrologist and the available resources, and not on the statistics that are needed to achieve acceptable confidence limits on the final result. This paper shows a solution to this problem, called inverse TMU analysis, by presenting statistically-based equations that allow the user to estimate the needed sampling after providing appropriate inputs, allowing him to make important "risk vs. reward" sampling, cost, and equipment decisions. Application examples using experimental data from scatterometry and critical dimension scanning electron microscope (CD-SEM) tools are used first to demonstrate how the inverse TMU analysis methodology can be used to make intelligent sampling decisions before the start of the experiment, and then to reveal why low sampling can lead to unstable and misleading results. A model is developed that can help an experimenter minimize the costs associated both with increased sampling and with making wrong decisions caused by insufficient sampling. A second cost model is described that reveals the inadequacy of current TEM (Transmission Electron Microscopy) sampling practices and the enormous costs associated with TEM sampling that is needed to provide reasonable levels of certainty in the result. These high costs reach into the tens of millions of dollars for TEM reference metrology as the measurement error budgets reach angstrom levels. The paper concludes with strategies on how to manage and mitigate these costs.

  9. Measuring edge importance: a quantitative analysis of the stochastic shielding approximation for random processes on graphs.

    Science.gov (United States)

    Schmidt, Deena R; Thomas, Peter J

    2014-04-17

    Mathematical models of cellular physiological mechanisms often involve random walks on graphs representing transitions within networks of functional states. Schmandt and Galán recently introduced a novel stochastic shielding approximation as a fast, accurate method for generating approximate sample paths from a finite state Markov process in which only a subset of states are observable. For example, in ion-channel models, such as the Hodgkin-Huxley or other conductance-based neural models, a nerve cell has a population of ion channels whose states comprise the nodes of a graph, only some of which allow a transmembrane current to pass. The stochastic shielding approximation consists of neglecting fluctuations in the dynamics associated with edges in the graph not directly affecting the observable states. We consider the problem of finding the optimal complexity reducing mapping from a stochastic process on a graph to an approximate process on a smaller sample space, as determined by the choice of a particular linear measurement functional on the graph. The partitioning of ion-channel states into conducting versus nonconducting states provides a case in point. In addition to establishing that Schmandt and Galán's approximation is in fact optimal in a specific sense, we use recent results from random matrix theory to provide heuristic error estimates for the accuracy of the stochastic shielding approximation for an ensemble of random graphs. Moreover, we provide a novel quantitative measure of the contribution of individual transitions within the reaction graph to the accuracy of the approximate process.

  10. Bit error rate analysis of Gaussian, annular Gaussian, cos Gaussian, and cosh Gaussian beams with the help of random phase screens.

    Science.gov (United States)

    Eyyuboğlu, Halil T

    2014-06-10

    Using the random phase screen approach, we carry out a simulation analysis of the probability of error performance of Gaussian, annular Gaussian, cos Gaussian, and cosh Gaussian beams. In our scenario, these beams are intensity-modulated by the randomly generated binary symbols of an electrical message signal and then launched from the transmitter plane in equal powers. They propagate through a turbulent atmosphere modeled by a series of random phase screens. Upon arriving at the receiver plane, detection is performed in a circuitry consisting of a pin photodiode and a matched filter. The symbols detected are compared with the transmitted ones, errors are counted, and from there the probability of error is evaluated numerically. Within the range of source and propagation parameters tested, the lowest probability of error is obtained for the annular Gaussian beam. Our investigation reveals that there is hardly any difference between the aperture-averaged scintillations of the beams used, and the distinctive advantage of the annular Gaussian beam lies in the fact that the receiver aperture captures the maximum amount of power when this particular beam is launched from the transmitter plane.

  11. Systematic Error Study for ALICE charged-jet v2 Measurement

    Energy Technology Data Exchange (ETDEWEB)

    Heinz, M. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Soltz, R. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2017-07-18

    We study the treatment of systematic errors in the determination of v2 for charged jets in √s_NN = 2.76 TeV Pb-Pb collisions by the ALICE Collaboration. Working with the reported values and errors for the 0-5% centrality data, we evaluate the χ2 according to the formulas given for the statistical and systematic errors, where the latter are separated into correlated and shape contributions. We reproduce both the χ2 and p-values relative to a null (zero) result. We then re-cast the systematic errors into an equivalent covariance matrix and obtain identical results, demonstrating that the two methods are equivalent.
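    The recasting step can be illustrated as follows: build a covariance matrix from the uncorrelated statistical errors plus a rank-one block for a fully correlated systematic, then form the chi-square against the null hypothesis. The data values below are invented, and the real analysis also carries a shape component with its own correlation pattern.

    ```python
    import numpy as np

    # Hypothetical v2 points with statistical and fully correlated systematic errors
    v2 = np.array([0.050, 0.060, 0.055, 0.040, 0.030])
    stat = np.array([0.010, 0.011, 0.012, 0.013, 0.015])
    corr_sys = np.array([0.008, 0.009, 0.008, 0.007, 0.006])

    # Covariance: diagonal statistical part plus a rank-one fully correlated block
    C = np.diag(stat ** 2) + np.outer(corr_sys, corr_sys)

    # Chi-square of the data relative to the null hypothesis v2 = 0
    resid = v2 - 0.0
    chi2 = resid @ np.linalg.solve(C, resid)
    print(f"chi2 = {chi2:.2f} for {v2.size} points")
    ```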

  12. Measuring the relationship between interruptions, multitasking and prescribing errors in an emergency department: a study protocol.

    Science.gov (United States)

    Raban, Magdalena Z; Walter, Scott R; Douglas, Heather E; Strumpman, Dana; Mackenzie, John; Westbrook, Johanna I

    2015-10-13

    Interruptions and multitasking are frequent in clinical settings, and have been shown in the cognitive psychology literature to affect performance, increasing the risk of error. However, comparatively less is known about their impact on errors in clinical work. This study will assess the relationship between prescribing errors, interruptions and multitasking in an emergency department (ED) using direct observations and chart review. The study will be conducted in an ED of a 440-bed teaching hospital in Sydney, Australia. Doctors will be shadowed at proximity by observers for 2 h time intervals while they are working on day shift (between 0800 and 1800). Time stamped data on tasks, interruptions and multitasking will be recorded on a handheld computer using the validated Work Observation Method by Activity Timing (WOMBAT) tool. The prompts leading to interruptions and multitasking will also be recorded. When doctors prescribe medication, type of chart and chart sections written on, along with the patient's medical record number (MRN) will be recorded. A clinical pharmacist will access patient records and assess the medication orders for prescribing errors. The prescribing error rate will be calculated per prescribing task and is defined as the number of errors divided by the number of medication orders written during the prescribing task. The association between prescribing error rates, and rates of prompts, interruptions and multitasking will be assessed using statistical modelling. Ethics approval has been obtained from the hospital research ethics committee. Eligible doctors will be provided with written information sheets and written consent will be obtained if they agree to participate. Doctor details and MRNs will be kept separate from the data on prescribing errors, and will not appear in the final data set for analysis. Study results will be disseminated in publications and feedback to the ED. Published by the BMJ Publishing Group Limited. For permission

  13. Measuring the effect of inter-study variability on estimating prediction error.

    Directory of Open Access Journals (Sweden)

    Shuyi Ma

    Full Text Available The biomarker discovery field is replete with molecular signatures that have not translated into the clinic despite ostensibly promising performance in predicting disease phenotypes. One widely cited reason is lack of classification consistency, largely due to failure to maintain performance from study to study. This failure is widely attributed to variability in data collected for the same phenotype among disparate studies, due to technical factors unrelated to phenotypes (e.g., laboratory settings) resulting in "batch-effects" and non-phenotype-associated biological variation in the underlying populations. These sources of variability persist in new data collection technologies. Here we quantify the impact of these combined "study-effects" on a disease signature's predictive performance by comparing two types of validation methods: ordinary randomized cross-validation (RCV), which extracts random subsets of samples for testing, and inter-study validation (ISV), which excludes an entire study for testing. Whereas RCV hardwires an assumption of training and testing on identically distributed data, this key property is lost in ISV, yielding systematic decreases in performance estimates relative to RCV. Measuring the RCV-ISV difference as a function of number of studies quantifies influence of study-effects on performance. As a case study, we gathered publicly available gene expression data from 1,470 microarray samples of 6 lung phenotypes from 26 independent experimental studies and 769 RNA-seq samples of 2 lung phenotypes from 4 independent studies. We find that the RCV-ISV performance discrepancy is greater in phenotypes with few studies, and that the ISV performance converges toward RCV performance as data from additional studies are incorporated into classification. We show that by examining how fast ISV performance approaches RCV as the number of studies is increased, one can estimate when "sufficient" diversity has been achieved for learning a
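    The RCV-versus-ISV comparison can be reproduced in miniature on synthetic data, with per-study offsets standing in for batch effects; the sketch below uses scikit-learn's KFold and LeaveOneGroupOut splitters and is unrelated to the actual microarray and RNA-seq collections.

    ```python
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import KFold, LeaveOneGroupOut, cross_val_score

    rng = np.random.default_rng(0)
    X, y = make_classification(n_samples=600, n_features=50, n_informative=10,
                               random_state=0)
    study = np.repeat(np.arange(6), 100)                       # 6 synthetic "studies"
    X = X + rng.normal(0.0, 2.0, size=(6, X.shape[1]))[study]  # per-study offsets

    clf = LogisticRegression(max_iter=2000)
    rcv = cross_val_score(clf, X, y, cv=KFold(n_splits=6, shuffle=True, random_state=0))
    isv = cross_val_score(clf, X, y, groups=study, cv=LeaveOneGroupOut())

    print("RCV accuracy:", rcv.mean())   # trains and tests across mixed studies
    print("ISV accuracy:", isv.mean())   # tests on an entirely held-out study
    ```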

  14. Low relative error in consumer-grade GPS units make them ideal for measuring small-scale animal movement patterns.

    Science.gov (United States)

    Breed, Greg A; Severns, Paul M

    2015-01-01

    Consumer-grade GPS units are a staple of modern field ecology, but the relatively large error radii reported by manufacturers (up to 10 m) ostensibly preclude their utility in measuring fine-scale movement of small animals such as insects. Here we demonstrate that for data collected at fine spatio-temporal scales, these devices can produce exceptionally accurate data on step-length and movement patterns of small animals. With an understanding of the properties of GPS error and how it arises, it is possible, using a simple field protocol, to use consumer-grade GPS units to collect step-length data for the movement of small animals that introduces a median error as small as 11 cm. These small errors were measured in controlled observations of real butterfly movement. Similar conclusions were reached using a ground-truth test track prepared with a field tape and compass and subsequently measured 20 times using the same methodology as the butterfly tracking. Median error in the ground-truth track was slightly higher than in the field data, mostly between 20 and 30 cm, but even for the smallest ground-truth step (70 cm), this is still a signal-to-noise ratio of 3:1, and for steps of 3 m or more, the ratio is greater than 10:1. Such small errors relative to the movements being measured make these inexpensive units useful for measuring insect and other small animal movements on small to intermediate scales with budgets orders of magnitude lower than survey-grade units used in past studies. As an additional advantage, these units are simpler to operate, and insect or other small animal trackways can be collected more quickly than with either survey-grade units or more traditional ruler/grid approaches.

  15. A study of turbulent fluxes and their measurement errors for different wind regimes over the tropical Zongo Glacier (16° S) during the dry season

    Directory of Open Access Journals (Sweden)

    M. Litt

    2015-08-01

    Full Text Available Over glaciers in the outer tropics, during the dry winter season, turbulent fluxes are an important sink of melt energy due to high sublimation rates, but measurements in stable surface layers in remote and complex terrains remain challenging. Eddy-covariance (EC) and bulk-aerodynamic (BA) methods were used to estimate surface turbulent heat fluxes of sensible (H) and latent heat (LE) in the ablation zone of the tropical Zongo Glacier, Bolivia (16° S, 5080 m a.s.l.), from 22 July to 1 September 2007. We studied the turbulent fluxes and their associated random and systematic measurement errors under the three most frequent wind regimes. For nightly, density-driven katabatic flows, and for strong downslope flows related to large-scale forcing, H generally heats the surface (i.e. is positive), while LE cools it down (i.e. is negative). On average, both fluxes exhibit similar magnitudes and cancel each other out. Most energy losses through turbulence occur for daytime upslope flows, when H is weak due to small temperature gradients and LE is strongly negative due to very dry air. Mean random errors of the BA method (6% on net H + LE fluxes) originated mainly from large uncertainties in roughness lengths. For EC fluxes, mean random errors were due mainly to poor statistical sampling of large-scale outer-layer eddies (12%). The BA method is highly sensitive to the method used to derive surface temperature from longwave radiation measurements and underestimates fluxes due to vertical flux divergence at low heights and nonstationarity of turbulent flow. The EC method also probably underestimates the fluxes, albeit to a lesser extent, due to underestimation of vertical wind speed and to vertical flux divergence. For both methods, when H and LE compensate each other in downslope fluxes, biases tend to cancel each other out or remain small. When the net turbulent fluxes (H + LE) are the largest in upslope flows, nonstationarity effects and underestimations of the
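    The BA estimates referred to above follow the standard bulk transfer formulation; the sketch below is a neutral-stability version with a single measurement height, where the roughness lengths, air density, and sample values are assumptions rather than the study's calibrated parameters (the real method also applies stability corrections).

    ```python
    import numpy as np

    def bulk_fluxes(u, Ta, Ts, qa, qs, z=2.0, z0m=1e-3, z0s=1e-4,
                    rho=0.70, cp=1005.0, Ls=2.83e6, k=0.4):
        """Neutral-stability bulk-aerodynamic sensible (H) and latent (LE) heat fluxes.

        u      : wind speed at height z (m/s)
        Ta, Ts : air and surface temperature (only the difference matters)
        qa, qs : air and surface specific humidity (kg/kg)
        Fluxes are positive when directed toward the surface, as in the abstract;
        Ls is the latent heat of sublimation, rho a high-altitude air density.
        """
        C = k ** 2 / (np.log(z / z0m) * np.log(z / z0s))   # bulk transfer coefficient
        H = rho * cp * C * u * (Ta - Ts)                   # sensible heat flux (W/m2)
        LE = rho * Ls * C * u * (qa - qs)                  # latent heat flux (W/m2)
        return H, LE

    # Hypothetical dry-season conditions: air warmer and much drier than the ice surface,
    # giving positive H and negative (sublimation-driven) LE as described above.
    print(bulk_fluxes(u=5.0, Ta=-1.0, Ts=-5.0, qa=0.002, qs=0.004))
    ```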

  16. A study on fatigue measurement of operators for human error prevention in NPPs

    Energy Technology Data Exchange (ETDEWEB)

    Ju, Oh Yeon; Il, Jang Tong; Meiling, Luo; Hee, Lee Young [KAERI, Daejeon (Korea, Republic of)

    2012-10-15

    The identification and analysis of individual factors of operators, which are among the various causes of adverse effects on human performance, is not easy in NPPs. Individual factors for operators include work type (including shift work), environment, personality, qualification, training, education, cognition, fatigue, job stress, workload, etc. Research at the Finnish Institute of Occupational Health (FIOH) reported that 'burnout' (extreme fatigue) is related to alcohol-dependent habits and must be dealt with using a stress management program. The USNRC (U.S. Nuclear Regulatory Commission) developed FFD (Fitness for Duty) requirements to improve task efficiency and prevent human errors. 'Managing Fatigue' in 10 CFR 26 presents requirements to control operator fatigue in NPPs. The committee explained that excessive fatigue is due to stressful work environments, working hours, shifts, sleep disorders, and unstable circadian rhythms. In addition, the International Labour Organization (ILO) developed and suggested a checklist to manage fatigue and job stress. Domestically, a systematic evaluation method is presented in the Final Safety Analysis Report (FSAR) chapter 18, Human Factors, as part of the licensing process. However, it focuses mostly on interface design, such as the HMI (Human Machine Interface), rather than on individual factors. In particular, because the country is in the process of exporting NPPs to the UAE, the development of fatigue management techniques is important and urgent in order to present technical standards and FFD criteria to the UAE. It is also anticipated that the domestic regulatory body will apply the FFD program as a regulatory requirement, so preparation for that situation is required. In this paper, previous research is investigated to identify fatigue measurement and evaluation methods for operators in high-reliability industries. This study also reviews the NRC report and discusses the causal factors and

  17. Solving Inverse Radiation Transport Problems with Multi-Sensor Data in the Presence of Correlated Measurement and Modeling Errors

    Energy Technology Data Exchange (ETDEWEB)

    Thomas, Edward V. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Stork, Christopher L. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Mattingly, John K. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2015-07-01

    Inverse radiation transport focuses on identifying the configuration of an unknown radiation source given its observed radiation signatures. The inverse problem is traditionally solved by finding the set of transport model parameter values that minimizes a weighted sum of the squared differences by channel between the observed signature and the signature predicted by the hypothesized model parameters. The weights are inversely proportional to the sum of the variances of the measurement and model errors at a given channel. The traditional implicit (often inaccurate) assumption is that the errors (differences between the modeled and observed radiation signatures) are independent across channels. Here, an alternative method that accounts for correlated errors between channels is described and illustrated using an inverse problem based on the combination of gamma and neutron multiplicity counting measurements.

  18. Violation of the Sphericity Assumption and Its Effect on Type-I Error Rates in Repeated Measures ANOVA and Multi-Level Linear Models (MLM).

    Science.gov (United States)

    Haverkamp, Nicolas; Beauducel, André

    2017-01-01

    We investigated the effects of violations of the sphericity assumption on Type I error rates for different methodical approaches of repeated measures analysis using a simulation approach. In contrast to previous simulation studies on this topic, up to nine measurement occasions were considered. Effects of the level of inter-correlations between measurement occasions on Type I error rates were considered for the first time. Two populations with non-violation of the sphericity assumption, one with uncorrelated measurement occasions and one with moderately correlated measurement occasions, were generated. One population with violation of the sphericity assumption combines uncorrelated with highly correlated measurement occasions. A second population with violation of the sphericity assumption combines moderately correlated and highly correlated measurement occasions. From these four populations without any between-group effect or within-subject effect 5,000 random samples were drawn. Finally, the mean Type I error rates for Multilevel linear models (MLM) with an unstructured covariance matrix (MLM-UN), MLM with compound-symmetry (MLM-CS) and for repeated measures analysis of variance (rANOVA) models (without correction, with Greenhouse-Geisser-correction, and Huynh-Feldt-correction) were computed. To examine the effect of both the sample size and the number of measurement occasions, sample sizes of n = 20, 40, 60, 80, and 100 were considered as well as measurement occasions of m = 3, 6, and 9. With respect to rANOVA, the results plead for a use of rANOVA with Huynh-Feldt-correction, especially when the sphericity assumption is violated, the sample size is rather small and the number of measurement occasions is large. For MLM-UN, the results illustrate a massive progressive bias for small sample sizes (n = 20) and m = 6 or more measurement occasions. This effect could not be found in previous simulation studies with a smaller number of measurement occasions. The
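    As a rough illustration of the quantity that the Greenhouse-Geisser correction estimates, the sketch below computes the epsilon statistic from a simulated sample in which one block of measurement occasions is highly correlated and another is only weakly correlated, mimicking the populations described above. The simulation settings and the helper name gg_epsilon are assumptions made for illustration; the epsilon formula itself is the standard Greenhouse-Geisser estimator.

```python
import numpy as np

def gg_epsilon(data):
    """Greenhouse-Geisser epsilon from an (n_subjects x k_occasions) array.

    epsilon = 1 when sphericity holds; the lower bound is 1/(k-1). The rANOVA
    degrees of freedom are multiplied by epsilon to correct the F-test.
    """
    n, k = data.shape
    S = np.cov(data, rowvar=False)
    # Orthonormal contrasts: an orthonormal basis orthogonal to the unit vector
    full = np.linalg.qr(np.column_stack([np.ones(k), np.eye(k)[:, :k - 1]]))[0]
    C = full[:, 1:].T                      # (k-1) x k, rows orthonormal, orthogonal to 1
    M = C @ S @ C.T
    eig = np.linalg.eigvalsh(M)
    return eig.sum()**2 / ((k - 1) * (eig**2).sum())

# Example: simulate data violating sphericity (mix of weak and strong correlations)
rng = np.random.default_rng(1)
k = 6
cov = np.full((k, k), 0.2) + 0.8 * np.eye(k)
cov[:3, :3] = np.full((3, 3), 0.9) + 0.1 * np.eye(3)   # one highly correlated block
data = rng.multivariate_normal(np.zeros(k), cov, size=40)
print(round(gg_epsilon(data), 3))   # clearly below 1 for this covariance structure
```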

  19. Apparently conclusive meta-analyses may be inconclusive--Trial sequential analysis adjustment of random error risk due to repetitive testing of accumulating data in apparently conclusive neonatal meta-analyses

    DEFF Research Database (Denmark)

    Brok, Jesper; Thorlund, Kristian; Wetterslev, Jørn

    2008-01-01

    BACKGROUND: Random error may cause misleading evidence in meta-analyses. The required number of participants in a meta-analysis (i.e. information size) should be at least as large as an adequately powered single trial. Trial sequential analysis (TSA) may reduce risk of random errors due...

  20. Error analysis of aspheric surface with reference datum.

    Science.gov (United States)

    Peng, Yanglin; Dai, Yifan; Chen, Shanyong; Song, Ci; Shi, Feng

    2015-07-20

    Severe location tolerance requirements pose new challenges for optical component measurement, evaluation, and manufacture. Form error, location error, and the relationship between them need to be analyzed together during error analysis of an aspheric surface with a reference datum. Based on the least-squares optimization method, we develop a least-squares local optimization method to evaluate the form error of an aspheric surface with a reference datum, and then calculate the location error. From the error analysis of a machined aspheric surface, the relationship between form error and location error is revealed, and its influence on the machining process is stated. For different radii and apertures of the aspheric surface, the change laws are simulated by superimposing normally distributed random noise on an ideal surface. This establishes linkages between machining and error analysis, and provides an effective guideline for error correction.

  1. Fitting a Bivariate Measurement Error Model for Episodically Consumed Dietary Components

    KAUST Repository

    Zhang, Saijuan

    2011-01-06

    There has been great public health interest in estimating usual, i.e., long-term average, intake of episodically consumed dietary components that are not consumed daily by everyone, e.g., fish, red meat and whole grains. Short-term measurements of episodically consumed dietary components have zero-inflated skewed distributions. So-called two-part models have been developed for such data in order to correct for measurement error due to within-person variation and to estimate the distribution of usual intake of the dietary component in the univariate case. However, there is arguably much greater public health interest in the usual intake of an episodically consumed dietary component adjusted for energy (caloric) intake, e.g., ounces of whole grains per 1000 kilo-calories, which reflects usual dietary composition and adjusts for different total amounts of caloric intake. Because of this public health interest, it is important to have models to fit such data, and it is important that the model-fitting methods can be applied to all episodically consumed dietary components. We have recently developed a nonlinear mixed effects model (Kipnis, et al., 2010), and have fit it by maximum likelihood using nonlinear mixed effects programs and methodology (the SAS NLMIXED procedure). Maximum likelihood fitting of such a nonlinear mixed model is generally slow because of 3-dimensional adaptive Gaussian quadrature, and there are times when the programs either fail to converge or converge to models with a singular covariance matrix. For these reasons, we develop a Monte-Carlo (MCMC) computation of fitting this model, which allows for both frequentist and Bayesian inference. There are technical challenges to developing this solution because one of the covariance matrices in the model is patterned. Our main application is to the National Institutes of Health (NIH)-AARP Diet and Health Study, where we illustrate our methods for modeling the energy-adjusted usual intake of fish and whole

  2. Spatio-Temporal Error Sources Analysis and Accuracy Improvement in Landsat 8 Image Ground Displacement Measurements

    Directory of Open Access Journals (Sweden)

    Chao Ding

    2016-11-01

    Full Text Available Because of the advantages of low cost, large coverage and short revisit cycle, Landsat 8 images have been widely applied to monitor earth surface movements. However, there are few systematic studies considering the error source characteristics or the improvement of the accuracy of the deformation field obtained from Landsat 8 images. In this study, we utilize the 2013 Mw 7.7 Balochistan, Pakistan earthquake to analyze the spatio-temporal characteristics of the errors and elaborate how to mitigate error sources in the deformation field extracted from multi-temporal Landsat 8 images. We found that the stripe artifacts and the topographic shadowing artifacts are two major error components in the deformation field, which currently lack overall understanding and an effective mitigation strategy. For the stripe artifacts, we propose a small spatial baseline (<200 m) method to avoid their effect on the deformation field. We also propose a small radiometric baseline method to reduce the topographic shadowing artifacts and radiometric decorrelation noises. Performance and accuracy evaluations show that these two methods are effective in improving the precision of the deformation field. This study provides the possibility of detecting, with higher precision, subtle ground movements caused by earthquakes, melting glaciers, landslides, etc., using Landsat 8 images. It is also a good reference for error source analysis and correction in deformation fields extracted from other optical satellite images.

  3. Random walks in the quarter-plane: invariant measures and performance bounds

    NARCIS (Netherlands)

    Chen, Y.

    2015-01-01

    This monograph focuses on random walks in the quarter-plane. Such random walks are frequently used to model queueing systems and the invariant measure of a random walk is of major importance in studying the performance of these systems. In special cases the invariant measure of a random walk can be

  4. The impact of crown-rump length measurement error on combined Down syndrome screening: a simulation study.

    Science.gov (United States)

    Salomon, L J; Bernard, M; Amarsy, R; Bernard, J P; Ville, Y

    2009-05-01

    To evaluate the impact of a 5-mm error in the measurement of crown-rump length (CRL) in a woman undergoing ultrasound and biochemistry sequential combined screening for Down syndrome. Based on existing risk calculation algorithms, we simulated the case of a 35-year-old-woman undergoing combined screening based on nuchal translucency (NT) measurement and early second-trimester maternal serum markers (human chorionic gonadotropin (hCG) and alpha-fetoprotein (AFP) expressed as multiples of the median (MoM)). Two measurement errors were considered (+ or - 5 mm), for four different CRLs (50, 60, 70 and 80 mm), with five different NT measurements (1, 1.5, 2, 2.5 and 3 mm) in a patient undergoing biochemistry testing at 14 + 4, 15, 16, 17 or 18 weeks' gestation. Four different values for each maternal serum marker were tested (1, 1.5, 2 and 2.5 MoM for hCG, and 0.5, 0.8, 1 and 1.5 MoM for AFP), leading to a total of 3200 simulations of the impact of measurement error. In all cases the ratio between the risk as assessed with or without the measurement error was calculated (measurement error-related risk ratio (MERR)). Over 3200 simulated cases, MERR ranged from 0.53 to 2.14. In 586 simulations (18.3%), it was 1.33. Based on a risk cut-off of 1/300, women would have been misclassified in 112 simulations (3.5%). This would go up to 33 (27.5%) out of the 120 simulations in women with 'borderline' risk, with 1.5 MoM for hCG and 0.5 MoM for AFP, and NT measurement of 1 or 2mm. Down syndrome screening may be highly sensitive to measurement errors in CRL. Quality control of CRL measurement should be performed together with quality control of NT measurement in order to provide the highest standard of care.
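    The measurement error-related risk ratio (MERR) defined above is simply the ratio of the risk computed with the erroneous CRL to the risk computed with the correct CRL. The numbers in the toy calculation below are invented for illustration and are not taken from the simulations in the paper.

```python
# Toy illustration of the measurement error-related risk ratio (MERR): the ratio of the
# Down syndrome risk obtained with an erroneous CRL to the risk obtained with the true CRL.
# Both risk values below are made up.
risk_with_error = 1 / 250      # hypothetical risk computed with CRL + 5 mm
risk_without_error = 1 / 333   # hypothetical risk computed with the true CRL
merr = risk_with_error / risk_without_error
print(f"MERR = {merr:.2f}")    # ~1.33, i.e. a 33% overstatement of the risk
```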

  5. Absorbed in the task : Personality measures predict engagement during task performance as tracked by error negativity and asymmetrical frontal activity

    NARCIS (Netherlands)

    Tops, Mattie; Boksem, Maarten A. S.

    2010-01-01

    We hypothesized that interactions between traits and context predict task engagement, as measured by the amplitude of the error-related negativity (ERN), performance, and relative frontal activity asymmetry (RFA). In Study 1, we found that drive for reward, absorption, and constraint independently

  6. Analysis of the Largest Normalized Residual Test Robustness for Measurements Gross Errors Processing in the WLS State Estimator

    Directory of Open Access Journals (Sweden)

    Breno Carvalho

    2013-10-01

    Full Text Available The purpose of this paper is to implement a computational program to estimate the states (complex nodal voltages) of a power system and to show that the largest normalized residual (LNR) test fails many times. The chosen solution method was Weighted Least Squares (WLS). Once the states are estimated, a gross error analysis is made with the purpose of detecting and identifying the measurements that may contain gross errors (GEs), which can interfere with the estimated states, leading the process to an erroneous state estimation. If a measurement is identified as having an error, it is discarded from the measurement set and the whole process is repeated until all measurements are within an acceptable error threshold. To validate the implemented software, several computer simulations were performed on the IEEE 6-bus and 14-bus systems, where satisfactory results were obtained. Another purpose is to show that even a widespread method such as the LNR test is subject to serious conceptual flaws, probably due to a lack of attention to the mathematical foundations of the methodology. The paper highlights the need for continuous improvement of the employed techniques and a critical view, on the part of researchers, to recognize those types of failures.

  7. The Use of PCs, Smartphones, and Tablets in a Probability-Based Panel Survey : Effects on Survey Measurement Error

    NARCIS (Netherlands)

    Lugtig, Peter; Toepoel, Vera

    2016-01-01

    Respondents in an Internet panel survey can often choose which device they use to complete questionnaires: a traditional PC, laptop, tablet computer, or a smartphone. Because all these devices have different screen sizes and modes of data entry, measurement errors may differ between devices. Using

  8. Interval estimation for rank correlation coefficients based on the probit transformation with extension to measurement error correction of correlated ranked data.

    Science.gov (United States)

    Rosner, Bernard; Glynn, Robert J

    2007-02-10

    The Spearman (rho(s)) and Kendall (tau) rank correlation coefficients are routinely used as measures of association between non-normally distributed random variables. However, confidence limits for rho(s) are only available under the assumption of bivariate normality and for tau under the assumption of asymptotic normality of tau. In this paper, we introduce another approach for obtaining confidence limits for rho(s) or tau based on the arcsin transformation of sample probit score correlations. This approach is shown to be applicable for an arbitrary bivariate distribution. The arcsin-based estimators for rho(s) and tau (denoted by rho(s,a), tau(a)) are shown to have asymptotic relative efficiency (ARE) of 9/π² compared with the usual estimators rho(s) and tau when rho(s) and tau are, respectively, 0. In some nutritional applications, the Spearman rank correlation between nutrient intake as assessed by a reference instrument versus nutrient intake as assessed by a surrogate instrument is used as a measure of validity of the surrogate instrument. However, if only a single replicate (or a few replicates) are available for the reference instrument, then the estimated Spearman rank correlation will be downwardly biased due to measurement error. In this paper, we use the probit transformation as a tool for specifying an ANOVA-type model for replicate ranked data resulting in a point and interval estimate of a measurement error corrected rank correlation. This extends previous work by Rosner and Willett for obtaining point and interval estimates of measurement error corrected Pearson correlations. © 2006 John Wiley & Sons, Ltd.
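    The sketch below computes the probit-score correlation mentioned above (the Pearson correlation of probit-transformed ranks) for non-normal data. For the confidence interval it substitutes an ordinary Fisher-z interval rather than the authors' arcsin-based interval, purely to make the idea of an interval on a transformed scale concrete; all function names and the simulated data are illustrative assumptions.

```python
import numpy as np
from scipy import stats

def probit_score_correlation(x, y):
    """Pearson correlation of probit-transformed ranks (illustrative only,
    not the authors' full procedure)."""
    n = len(x)
    zx = stats.norm.ppf((stats.rankdata(x) - 0.5) / n)
    zy = stats.norm.ppf((stats.rankdata(y) - 0.5) / n)
    return np.corrcoef(zx, zy)[0, 1]

def fisher_z_ci(r, n, alpha=0.05):
    """Ordinary Fisher-z confidence interval, used here in place of the paper's
    arcsin-based interval simply to illustrate a transformed-scale interval."""
    z = np.arctanh(r)
    half = stats.norm.ppf(1 - alpha / 2) / np.sqrt(n - 3)
    return np.tanh(z - half), np.tanh(z + half)

rng = np.random.default_rng(2)
x = rng.gamma(2.0, size=200)               # deliberately non-normal marginals
y = x + rng.gamma(2.0, size=200)
r = probit_score_correlation(x, y)
print(round(r, 3), [round(v, 3) for v in fisher_z_ci(r, len(x))])
```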

  9. On the importance of Task 1 and error performance measures in PRP dual-task studies

    Directory of Open Access Journals (Sweden)

    Tilo eStrobach

    2015-04-01

    Full Text Available The Psychological Refractory Period (PRP) paradigm is a dominant research tool in the literature on dual-task performance. In this paradigm a first and second component task (i.e., Task 1 and 2) are presented with variable stimulus onset asynchronies (SOAs) and priority to perform Task 1. The main indicator of dual-task impairment in PRP situations is an increasing Task 2-RT with decreasing SOAs. This impairment is typically explained with some task components being processed strictly sequentially in the context of the prominent central bottleneck theory. This assumption could implicitly suggest that processes of Task 1 are unaffected by Task 2 and bottleneck processing, i.e. decreasing SOAs do not increase RTs and error rates of the first task. The aim of the present review is to assess whether PRP dual-task studies included both RT and error data presentations and statistical analyses and whether studies including both data types (i.e., RTs and error rates) show data consistent with this assumption (i.e., decreasing SOAs and unaffected RTs and/or error rates in Task 1). This review demonstrates that, in contrast to RT presentations and analyses, error data is underrepresented in a substantial number of studies. Furthermore, a substantial number of studies with RT and error data showed a statistically significant impairment of Task 1 performance with decreasing SOA. Thus, these studies produced data that is not primarily consistent with the strong assumption that processes of Task 1 are unaffected by Task 2 and bottleneck processing in the context of PRP dual-task situations; this calls for a more careful report and analysis of Task 1 performance in PRP studies and for a more careful consideration of theories proposing additions to the bottleneck assumption, which are sufficiently general to explain Task 1 and Task 2 effects.

  10. On the importance of Task 1 and error performance measures in PRP dual-task studies.

    Science.gov (United States)

    Strobach, Tilo; Schütz, Anja; Schubert, Torsten

    2015-01-01

    The psychological refractory period (PRP) paradigm is a dominant research tool in the literature on dual-task performance. In this paradigm a first and second component task (i.e., Task 1 and Task 2) are presented with variable stimulus onset asynchronies (SOAs) and priority to perform Task 1. The main indicator of dual-task impairment in PRP situations is an increasing Task 2-RT with decreasing SOAs. This impairment is typically explained with some task components being processed strictly sequentially in the context of the prominent central bottleneck theory. This assumption could implicitly suggest that processes of Task 1 are unaffected by Task 2 and bottleneck processing, i.e., decreasing SOAs do not increase reaction times (RTs) and error rates of the first task. The aim of the present review is to assess whether PRP dual-task studies included both RT and error data presentations and statistical analyses and whether studies including both data types (i.e., RTs and error rates) show data consistent with this assumption (i.e., decreasing SOAs and unaffected RTs and/or error rates in Task 1). This review demonstrates that, in contrast to RT presentations and analyses, error data is underrepresented in a substantial number of studies. Furthermore, a substantial number of studies with RT and error data showed a statistically significant impairment of Task 1 performance with decreasing SOA. Thus, these studies produced data that is not primarily consistent with the strong assumption that processes of Task 1 are unaffected by Task 2 and bottleneck processing in the context of PRP dual-task situations; this calls for a more careful report and analysis of Task 1 performance in PRP studies and for a more careful consideration of theories proposing additions to the bottleneck assumption, which are sufficiently general to explain Task 1 and Task 2 effects.

  11. Recruitment into diabetes prevention programs: what is the impact of errors in self-reported measures of obesity?

    Science.gov (United States)

    Hernan, Andrea; Philpot, Benjamin; Janus, Edward D; Dunbar, James A

    2012-07-08

    Error in self-reported measures of obesity has been frequently described, but the effect of self-reported error on recruitment into diabetes prevention programs is not well established. The aim of this study was to examine the effect of using self-reported obesity data from the Finnish diabetes risk score (FINDRISC) on recruitment into the Greater Green Triangle Diabetes Prevention Project (GGT DPP). The GGT DPP was a structured group-based lifestyle modification program delivered in primary health care settings in South-Eastern Australia. Between 2004-05, 850 FINDRISC forms were collected during recruitment for the GGT DPP. Eligible individuals, at moderate to high risk of developing diabetes, were invited to undertake baseline tests, including anthropometric measurements performed by specially trained nurses. In addition to errors in calculating total risk scores, accuracy of self-reported data (height, weight, waist circumference (WC) and Body Mass Index (BMI)) from FINDRISCs was compared with baseline data, with impact on participation eligibility presented. Overall, calculation errors impacted on eligibility in 18 cases (2.1%). Of n = 279 GGT DPP participants with measured data, errors (total score calculation, BMI or WC) in self-report were found in n = 90 (32.3%). These errors were equally likely to result in under- or over-reported risk. Under-reporting was more common in those reporting lower risk scores (Spearman-rho = -0.226, p-value resulted in only 6% of individuals at high risk of diabetes being incorrectly categorised as moderate or low risk of diabetes. Overall FINDRISC was found to be an effective tool to screen and recruit participants at moderate to high risk of diabetes, accurately categorising levels of overweight and obesity using self-report data. The results could be generalisable to other diabetes prevention programs using screening tools which include self-reported levels of obesity.

  12. Recruitment into diabetes prevention programs: what is the impact of errors in self-reported measures of obesity?

    Directory of Open Access Journals (Sweden)

    Hernan Andrea

    2012-07-01

    Full Text Available Abstract Background Error in self-reported measures of obesity has been frequently described, but the effect of self-reported error on recruitment into diabetes prevention programs is not well established. The aim of this study was to examine the effect of using self-reported obesity data from the Finnish diabetes risk score (FINDRISC) on recruitment into the Greater Green Triangle Diabetes Prevention Project (GGT DPP). Methods The GGT DPP was a structured group-based lifestyle modification program delivered in primary health care settings in South-Eastern Australia. Between 2004–05, 850 FINDRISC forms were collected during recruitment for the GGT DPP. Eligible individuals, at moderate to high risk of developing diabetes, were invited to undertake baseline tests, including anthropometric measurements performed by specially trained nurses. In addition to errors in calculating total risk scores, accuracy of self-reported data (height, weight, waist circumference (WC) and Body Mass Index (BMI)) from FINDRISCs was compared with baseline data, with impact on participation eligibility presented. Results Overall, calculation errors impacted on eligibility in 18 cases (2.1%). Of n = 279 GGT DPP participants with measured data, errors (total score calculation, BMI or WC) in self-report were found in n = 90 (32.3%). These errors were equally likely to result in under- or over-reported risk. Under-reporting was more common in those reporting lower risk scores (Spearman-rho = −0.226, p-value ...) Conclusions Overall FINDRISC was found to be an effective tool to screen and recruit participants at moderate to high risk of diabetes, accurately categorising levels of overweight and obesity using self-report data. The results could be generalisable to other diabetes prevention programs using screening tools which include self-reported levels of obesity.

  13. MEASUREMENT ERROR EFFECT ON THE POWER OF THE CONTROL CHART FOR ZERO-TRUNCATED BINOMIAL DISTRIBUTION UNDER STANDARDIZATION PROCEDURE

    Directory of Open Access Journals (Sweden)

    Anwer Khurshid

    2014-12-01

    Full Text Available The effect of measurement error on the power of control charts for the zero-truncated Poisson distribution and the ratio of two Poisson distributions was recently studied by Chakraborty and Khurshid (2013a) and Chakraborty and Khurshid (2013b), respectively. In this paper, an expression for the power of the control chart for the ZTBD based on a standardized normal variate is obtained, and numerical calculations are presented to show the effect of errors on the power curve. To study the sensitivity of the monitoring procedure, the average run length (ARL) is also considered.

  14. Reduction of Truncation Errors in Planar Near-Field Aperture Antenna Measurements Using the Gerchberg-Papoulis Algorithm

    DEFF Research Database (Denmark)

    Martini, Enrica; Breinbjerg, Olav; Maci, Stefano

    2008-01-01

    A simple and effective procedure for the reduction of truncation errors in planar near-field measurements of aperture antennas is presented. The procedure relies on the consideration that, due to the scan plane truncation, the calculated plane wave spectrum of the field radiated by the antenna...... is reliable only within a certain portion of the visible region. Accordingly, the truncation error is reduced by extrapolating the remaining portion of the visible region by the Gerchberg-Papoulis iterative algorithm, exploiting a condition of spatial concentration of the fields on the antenna aperture plane...

  15. Reduction of truncation errors in planar near-field aperture antenna measurements using the method of alternating orthogonal projections

    DEFF Research Database (Denmark)

    Martini, Enrica; Breinbjerg, Olav; Maci, Stefano

    2006-01-01

    A simple and effective procedure for the reduction of truncation error in planar near-field to far-field transformations is presented. The starting point is the consideration that the actual scan plane truncation implies a reliability of the reconstructed plane wave spectrum of the field radiated by the antenna only within a certain region inside the visible range. Then, the truncation error is reduced by a Maxwellian continuation of the reliable portion of the spectrum: after back propagating the measured field to the antenna plane, a condition of spatial concentration of the primary field is exploited...

  16. Internal Consistency, Test–Retest Reliability and Measurement Error of the Self-Report Version of the Social Skills Rating System in a Sample of Australian Adolescents

    Science.gov (United States)

    Vaz, Sharmila; Parsons, Richard; Passmore, Anne Elizabeth; Andreou, Pantelis; Falkmer, Torbjörn

    2013-01-01

    The social skills rating system (SSRS) is used to assess social skills and competence in children and adolescents. While its characteristics based on United States samples (US) are published, corresponding Australian figures are unavailable. Using a 4-week retest design, we examined the internal consistency, retest reliability and measurement error (ME) of the SSRS secondary student form (SSF) in a sample of Year 7 students (N = 187), from five randomly selected public schools in Perth, western Australia. Internal consistency (IC) of the total scale and most subscale scores (except empathy) on the frequency rating scale was adequate to permit independent use. On the importance rating scale, most IC estimates for girls fell below the benchmark. Test–retest estimates of the total scale and subscales were insufficient to permit reliable use. ME of the total scale score (frequency rating) for boys was equivalent to the US estimate, while that for girls was lower than the US error. ME of the total scale score (importance rating) was larger than the error using the frequency rating scale. The study finding supports the idea of using multiple informants (e.g. teacher and parent reports), not just student as recommended in the manual. Future research needs to substantiate the clinical meaningfulness of the MEs calculated in this study by corroborating them against the respective Minimum Clinically Important Difference (MCID). PMID:24040116
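    One common way to express measurement error (ME) from a test-retest design is the within-subject "typical error", the standard deviation of the difference scores divided by the square root of two. The sketch below uses that convention with invented scores; it is not necessarily the exact computation used in the SSRS study.

```python
import numpy as np

def typical_error(test, retest):
    """Within-subject measurement error from a test-retest design:
    SD of the difference scores divided by sqrt(2) (a common convention,
    e.g. Hopkins 2000; shown only to make the ME quantity concrete)."""
    diff = np.asarray(retest, dtype=float) - np.asarray(test, dtype=float)
    return diff.std(ddof=1) / np.sqrt(2)

# Hypothetical total-scale scores for 8 students at baseline and 4 weeks later
test   = [52, 61, 47, 58, 66, 50, 55, 63]
retest = [55, 59, 50, 61, 64, 48, 58, 60]
print(round(typical_error(test, retest), 2))
```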

  17. Internal consistency, test-retest reliability and measurement error of the self-report version of the social skills rating system in a sample of Australian adolescents.

    Directory of Open Access Journals (Sweden)

    Sharmila Vaz

    Full Text Available The social skills rating system (SSRS) is used to assess social skills and competence in children and adolescents. While its characteristics based on United States samples (US) are published, corresponding Australian figures are unavailable. Using a 4-week retest design, we examined the internal consistency, retest reliability and measurement error (ME) of the SSRS secondary student form (SSF) in a sample of Year 7 students (N = 187), from five randomly selected public schools in Perth, Western Australia. Internal consistency (IC) of the total scale and most subscale scores (except empathy) on the frequency rating scale was adequate to permit independent use. On the importance rating scale, most IC estimates for girls fell below the benchmark. Test-retest estimates of the total scale and subscales were insufficient to permit reliable use. ME of the total scale score (frequency rating) for boys was equivalent to the US estimate, while that for girls was lower than the US error. ME of the total scale score (importance rating) was larger than the error using the frequency rating scale. The study finding supports the idea of using multiple informants (e.g. teacher and parent reports), not just student as recommended in the manual. Future research needs to substantiate the clinical meaningfulness of the MEs calculated in this study by corroborating them against the respective Minimum Clinically Important Difference (MCID).

  18. Correlation Between Analog Noise Measurements and the Expected Bit Error Rate of a Digital Signal Propagating Through Passive Components

    Science.gov (United States)

    Warner, Joseph D.; Theofylaktos, Onoufrios

    2012-01-01

    A method of determining the bit error rate (BER) of a digital circuit from the measurement of the analog S-parameters of the circuit has been developed. The method is based on the measurement of the noise and the standard deviation of the noise in the S-parameters. Once the standard deviation and the mean of the S-parameters are known, the BER of the circuit can be calculated using the normal Gaussian function.
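    A textbook version of the idea described above, for two equiprobable signal levels in additive Gaussian noise with a mid-point decision threshold, is sketched below. The function name and the numeric example are illustrative assumptions; the paper derives the BER from the measured mean and standard deviation of the S-parameters, which this simple model only approximates.

```python
import numpy as np
from scipy.special import erfc

def ber_two_level(mu0, mu1, sigma):
    """Bit error rate for equiprobable binary levels in additive Gaussian noise
    with a mid-point decision threshold (textbook model, used here only to
    illustrate how a mean and standard deviation map to a BER)."""
    q = abs(mu1 - mu0) / (2.0 * sigma)        # distance to the threshold in sigmas
    return 0.5 * erfc(q / np.sqrt(2.0))

# Example: levels separated by 1.0 with noise standard deviation 0.12
print(ber_two_level(0.0, 1.0, 0.12))          # roughly 1.5e-5
```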

  19. Errors of Measurement, Theory, and Public Policy. William H. Angoff Memorial Lecture Series

    Science.gov (United States)

    Kane, Michael

    2010-01-01

    The 12th annual William H. Angoff Memorial Lecture was presented by Dr. Michael T. Kane, ETS's (Educational Testing Service) Samuel J. Messick Chair in Test Validity and the former Director of Research at the National Conference of Bar Examiners. Dr. Kane argues that it is important for policymakers to recognize the impact of errors of measurement…

  20. Measuring and detecting errors in occupational coding: an analysis of SHARE data

    NARCIS (Netherlands)

    Belloni, M.; Brugiavini, A.; Meschi, E.; Tijdens, K.

    2016-01-01

    This article studies coding errors in occupational data, as the quality of this data is important but often neglected. In particular, we recoded open-ended questions on occupation for last and current job in the Dutch sample of the “Survey of Health, Ageing and Retirement in Europe” (SHARE) using a

  1. Human error views : a framework for benchmarking organizations and measuring the distance between academia and industry

    NARCIS (Netherlands)

    Karanikas, Nektarios

    2015-01-01

    The paper presents a framework that, through structured analysis of accident reports, explores the differences between practice and academic literature, as well as amongst organizations, regarding their views on human error. The framework is based on the hypothesis that the wording of accident reports

  2. Correction of error in two-dimensional wear measurements of cemented hip arthroplasties

    NARCIS (Netherlands)

    The, Bertram; Mol, Linda; Diercks, Ron L.; van Ooijen, Peter M. A.; Verdonschot, Nico

    The irregularity of individual wear patterns of total hip prostheses seen during patient followup may result partially from differences in radiographic projection of the components between radiographs. A method to adjust for this source of error would increase the value of individual wear curves. We

  3. Determination and error analysis of emittance and spectral emittance measurements by remote sensing. [of leaves, soil and plant canopies

    Science.gov (United States)

    Kumar, R.

    1977-01-01

    Theoretical and experimental determinations of the emittance of soils and leaves are reviewed, and an error analysis of emittance and spectral emittance measurements is developed as an aid to remote sensing applications. In particular, an equation for the upper bound of the absolute error in an emittance determination is derived. The absolute error is found to decrease with an increase in contact temperature and to increase with an increase in environmental integrated radiant flux density. The difference between temperature and band radiance temperature is plotted as a function of emittance for the wavelength intervals 4.5 to 5.5 microns, 8 to 13.5 microns and 10.2 to 12.5 microns.

  4. A Preliminary Study on the Measures to Assess the Organizational Safety: The Cultural Impact on Human Error Potential

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Yong Hee; Lee, Yong Hee [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2011-10-15

    The Fukushima I nuclear accident following the Tohoku earthquake and tsunami on 11 March 2011 occurred twelve years after the JCO accident, which was caused by an error made by JCO employees. These accidents, along with the Chernobyl accident, were associated with characteristic problems of various organizations; they caused severe social and economic disruption and have had significant environmental and health impacts. Cultural problems with human errors occur for various reasons, and different actions are needed to prevent different errors. Unfortunately, much of the research on organizations and human error has shown widely varying results, which call for different approaches. In other words, we have to find more practical solutions from various research efforts for nuclear safety and develop a systematic approach to the organizational deficiencies that cause human error. This paper reviews Hofstede's criteria, IAEA safety culture, the safety areas of the periodic safety review (PSR), teamwork and performance, and an evaluation of HANARO safety culture to verify the measures used to assess organizational safety.

  5. A Measurement Error Model for Physical Activity Level as Measured by a Questionnaire With Application to the 1999–2006 NHANES Questionnaire

    Science.gov (United States)

    Tooze, Janet A.; Troiano, Richard P.; Carroll, Raymond J.; Moshfegh, Alanna J.; Freedman, Laurence S.

    2013-01-01

    Systematic investigations into the structure of measurement error of physical activity questionnaires are lacking. We propose a measurement error model for a physical activity questionnaire that uses physical activity level (the ratio of total energy expenditure to basal energy expenditure) to relate questionnaire-based reports of physical activity level to true physical activity levels. The 1999–2006 National Health and Nutrition Examination Survey physical activity questionnaire was administered to 433 participants aged 40–69 years in the Observing Protein and Energy Nutrition (OPEN) Study (Maryland, 1999–2000). Valid estimates of participants’ total energy expenditure were also available from doubly labeled water, and basal energy expenditure was estimated from an equation; the ratio of those measures estimated true physical activity level (“truth”). We present a measurement error model that accommodates the mixture of errors that arise from assuming a classical measurement error model for doubly labeled water and a Berkson error model for the equation used to estimate basal energy expenditure. The method was then applied to the OPEN Study. Correlations between the questionnaire-based physical activity level and truth were modest (r = 0.32–0.41); attenuation factors (0.43–0.73) indicate that the use of questionnaire-based physical activity level would lead to attenuated estimates of effect size. Results suggest that sample sizes for estimating relationships between physical activity level and disease should be inflated, and that regression calibration can be used to provide measurement error–adjusted estimates of relationships between physical activity and disease. PMID:23595007
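    The practical use of the reported attenuation factors and correlations can be illustrated with a small calculation. The specific numbers below are hypothetical (chosen from within the ranges quoted above), and the 1/rho-squared sample-size rule is a common rule of thumb rather than a result stated in the abstract.

```python
# Illustration of how the reported quantities are typically used (toy numbers,
# chosen from within the ranges quoted in the abstract above).
naive_beta = 0.12          # hypothetical coefficient estimated with the questionnaire measure
attenuation = 0.55         # hypothetical attenuation factor (abstract reports 0.43-0.73)
rho = 0.35                 # hypothetical questionnaire-truth correlation (0.32-0.41 reported)

corrected_beta = naive_beta / attenuation   # regression-calibration style de-attenuation
inflation = 1.0 / rho**2                    # common rule of thumb for sample-size inflation
print(round(corrected_beta, 3), round(inflation, 1))   # 0.218, 8.2
```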

  6. Correcting for binomial measurement error in predictors in regression with application to analysis of DNA methylation rates by bisulfite sequencing.

    Science.gov (United States)

    Buonaccorsi, John; Prochenka, Agnieszka; Thoresen, Magne; Ploski, Rafal

    2016-09-30

    Motivated by a genetic application, this paper addresses the problem of fitting regression models when the predictor is a proportion measured with error. While the problem of dealing with additive measurement error in fitting regression models has been extensively studied, the problem where the additive error is of a binomial nature has not been addressed. The measurement errors here are heteroscedastic for two reasons; dependence on the underlying true value and changing sampling effort over observations. While some of the previously developed methods for treating additive measurement error with heteroscedasticity can be used in this setting, other methods need modification. A new version of simulation extrapolation is developed, and we also explore a variation on the standard regression calibration method that uses a beta-binomial model based on the fact that the true value is a proportion. Although most of the methods introduced here can be used for fitting non-linear models, this paper will focus primarily on their use in fitting a linear model. While previous work has focused mainly on estimation of the coefficients, we will, with motivation from our example, also examine estimation of the variance around the regression line. In addressing these problems, we also discuss the appropriate manner in which to bootstrap for both inferences and bias assessment. The various methods are compared via simulation, and the results are illustrated using our motivating data, for which the goal is to relate the methylation rate of a blood sample to the age of the individual providing the sample. Copyright © 2016 John Wiley & Sons, Ltd.
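    To make the simulation-extrapolation idea concrete, the sketch below implements classical SIMEX for additive Gaussian measurement error in a simple linear regression. This is a generic illustration only; the paper develops a modified version tailored to binomial (proportion) error and heteroscedastic sampling effort, which this sketch does not reproduce.

```python
import numpy as np

def simex_slope(w, y, sigma_u, lambdas=(0.5, 1.0, 1.5, 2.0), n_rep=200, seed=0):
    """Classical additive-error SIMEX for a linear regression slope:
    add extra noise at levels lambda, refit, then extrapolate back to lambda = -1."""
    rng = np.random.default_rng(seed)
    lam = np.array([0.0] + list(lambdas))
    slopes = []
    for l in lam:
        reps = []
        for _ in range(n_rep if l > 0 else 1):
            w_l = w + np.sqrt(l) * sigma_u * rng.standard_normal(len(w))
            reps.append(np.polyfit(w_l, y, 1)[0])
        slopes.append(np.mean(reps))
    coef = np.polyfit(lam, slopes, 2)          # quadratic extrapolant in lambda
    return np.polyval(coef, -1.0)              # lambda = -1 corresponds to no error

# Toy data: true slope 1.0, predictor observed with additive noise
rng = np.random.default_rng(1)
x = rng.normal(size=500)
y = 1.0 * x + rng.normal(scale=0.5, size=500)
w = x + rng.normal(scale=0.6, size=500)           # error-prone predictor
print(round(np.polyfit(w, y, 1)[0], 3))            # naive slope, attenuated (~0.74)
print(round(simex_slope(w, y, sigma_u=0.6), 3))    # SIMEX estimate, closer to 1.0
```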

  7. Bayesian Nonparametric Regression Analysis of Data with Random Effects Covariates from Longitudinal Measurements

    KAUST Repository

    Ryu, Duchwan

    2010-09-28

    We consider nonparametric regression analysis in a generalized linear model (GLM) framework for data with covariates that are the subject-specific random effects of longitudinal measurements. The usual assumption that the effects of the longitudinal covariate processes are linear in the GLM may be unrealistic and if this happens it can cast doubt on the inference of observed covariate effects. Allowing the regression functions to be unknown, we propose to apply Bayesian nonparametric methods including cubic smoothing splines or P-splines for the possible nonlinearity and use an additive model in this complex setting. To improve computational efficiency, we propose the use of data-augmentation schemes. The approach allows flexible covariance structures for the random effects and within-subject measurement errors of the longitudinal processes. The posterior model space is explored through a Markov chain Monte Carlo (MCMC) sampler. The proposed methods are illustrated and compared to other approaches, the "naive" approach and the regression calibration, via simulations and by an application that investigates the relationship between obesity in adulthood and childhood growth curves. © 2010, The International Biometric Society.

  8. [Efficacy of motivational interviewing for reducing medication errors in chronic patients over 65 years with polypharmacy: Results of a cluster randomized trial].

    Science.gov (United States)

    Pérula de Torres, Luis Angel; Pulido Ortega, Laura; Pérula de Torres, Carlos; González Lama, Jesús; Olaya Caro, Inmaculada; Ruiz Moral, Roger

    2014-10-21

    To evaluate the effectiveness of an intervention based on motivational interviewing to reduce medication errors in chronic patients over 65 with polypharmacy. Cluster randomized trial that included doctors and nurses of 16 Primary Care centers and chronic patients with polypharmacy over 65 years. The professionals were assigned to the experimental or the control group using stratified randomization. Interventions consisted of training of professionals and revision of patient treatments, with application of motivational interviewing in the experimental group and the usual approach in the control group. The primary endpoint (medication error) was analyzed at individual level, and was estimated with the absolute risk reduction (ARR), relative risk reduction (RRR), number needed to treat (NNT) and by multiple logistic regression analysis. Thirty-two professionals were randomized (19 doctors and 13 nurses), 27 of them recruited 154 patients consecutively (13 professionals in the experimental group recruited 70 patients and 14 professionals recruited 84 patients in the control group) and completed 6 months of follow-up. The mean age of patients was 76 years (68.8% women). A decrease in the average number of medication errors was observed over the study period. The reduction was greater in the experimental than in the control group (F=5.109, P=.035). ARR 29% (95% confidence interval [95% CI] 15.0-43.0%), RRR 0.59 (95% CI:0.31-0.76), and NNT 3.5 (95% CI 2.3-6.8). Motivational interviewing is more effective than the usual approach for reducing medication errors in patients over 65 with polypharmacy. Copyright © 2013 Elsevier España, S.L.U. All rights reserved.
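    The reported effect measures are related by simple arithmetic, illustrated below with hypothetical group proportions chosen so that they roughly reproduce the figures quoted in the abstract.

```python
# How the reported effect measures relate to each other (illustrative proportions,
# not the trial's actual data).
p_control = 0.49       # hypothetical proportion with a medication error, control group
p_interv  = 0.20       # hypothetical proportion, motivational-interviewing group

arr = p_control - p_interv        # absolute risk reduction  -> 0.29 (29%)
rrr = arr / p_control             # relative risk reduction  -> ~0.59
nnt = 1 / arr                     # number needed to treat   -> ~3.4 (abstract reports 3.5)
print(round(arr, 2), round(rrr, 2), round(nnt, 1))
```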

  9. Accounting for measurement error in biomarker data and misclassification of subtypes in the analysis of tumor data.

    Science.gov (United States)

    Nevo, Daniel; Zucker, David M; Tamimi, Rulla M; Wang, Molin

    2016-12-30

    A common paradigm in dealing with heterogeneity across tumors in cancer analysis is to cluster the tumors into subtypes using marker data on the tumor, and then to analyze each of the clusters separately. A more specific target is to investigate the association between risk factors and specific subtypes and to use the results for personalized preventive treatment. This task is usually carried out in two steps-clustering and risk factor assessment. However, two sources of measurement error arise in these problems. The first is the measurement error in the biomarker values. The second is the misclassification error when assigning observations to clusters. We consider the case with a specified set of relevant markers and propose a unified single-likelihood approach for normally distributed biomarkers. As an alternative, we consider a two-step procedure with the tumor type misclassification error taken into account in the second-step risk factor analysis. We describe our method for binary data and also for survival analysis data using a modified version of the Cox model. We present asymptotic theory for the proposed estimators. Simulation results indicate that our methods significantly lower the bias with a small price being paid in terms of variance. We present an analysis of breast cancer data from the Nurses' Health Study to demonstrate the utility of our method. Copyright © 2016 John Wiley & Sons, Ltd.

  10. Measurements and their uncertainties a practical guide to modern error analysis

    CERN Document Server

    Hughes, Ifan G

    2010-01-01

    This hands-on guide is primarily intended to be used in undergraduate laboratories in the physical sciences and engineering. It assumes no prior knowledge of statistics. It introduces the necessary concepts where needed, with key points illustrated with worked examples and graphic illustrations. In contrast to traditional mathematical treatments it uses a combination of spreadsheet and calculus-based approaches, suitable as a quick and easy on-the-spot reference. The emphasis throughout is on practical strategies to be adopted in the laboratory. Error analysis is introduced at a level accessible to school leavers, and carried through to research level. Error calculation and propagation is presented through a series of rules-of-thumb, look-up tables and approaches amenable to computer analysis. The general approach uses the chi-square statistic extensively. Particular attention is given to hypothesis testing and extraction of parameters and their uncertainties by fitting mathematical models to experimental data....
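    A minimal example of the kind of rule-of-thumb propagation the book covers is the quadrature combination of independent relative uncertainties, sketched below with invented measurement values.

```python
import numpy as np

# Rule-of-thumb propagation of independent uncertainties (standard quadrature formulas,
# in the spirit of the look-up-table rules described above).
# Example: density rho = m / V from m = (12.4 +/- 0.1) g and V = (4.9 +/- 0.2) cm^3.
m, dm = 12.4, 0.1
V, dV = 4.9, 0.2

rho = m / V
drho = rho * np.sqrt((dm / m) ** 2 + (dV / V) ** 2)   # relative errors add in quadrature
print(f"rho = {rho:.2f} +/- {drho:.2f} g/cm^3")
```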

  11. Use of graph theory measures to identify errors in record linkage.

    Science.gov (United States)

    Randall, Sean M; Boyd, James H; Ferrante, Anna M; Bauer, Jacqueline K; Semmens, James B

    2014-07-01

    Ensuring high linkage quality is important in many record linkage applications. Current methods for ensuring quality are manual and resource intensive. This paper seeks to determine the effectiveness of graph theory techniques in identifying record linkage errors. A range of graph theory techniques was applied to two linked datasets, with known truth sets. The ability of graph theory techniques to identify groups containing errors was compared to a widely used threshold setting technique. This methodology shows promise; however, further investigations into graph theory techniques are required. The development of more efficient and effective methods of improving linkage quality will result in higher quality datasets that can be delivered to researchers in shorter timeframes. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
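    The sketch below shows the flavour of such an approach: records are nodes, accepted links are edges, and simple graph measures (component density, bridges) flag groups that may contain an erroneous link. The specific measures and the toy graph are illustrative assumptions, not necessarily those evaluated in the paper.

```python
import networkx as nx

# Illustrative use of simple graph measures to flag suspicious linkage groups.
# Each node is a record; each edge is a link accepted by the linkage algorithm.
G = nx.Graph()
G.add_edges_from([
    ("a1", "a2"), ("a2", "a3"), ("a1", "a3"),                 # a dense, plausible group
    ("b1", "b2"), ("b2", "c1"), ("c1", "c2"), ("c2", "c3"),   # a chain-like group
])

for nodes in nx.connected_components(G):
    sub = G.subgraph(nodes)
    density = nx.density(sub)
    bridges = list(nx.bridges(sub))
    # Low density plus bridges suggests two record clusters joined by a single
    # questionable link - a candidate for clerical review.
    print(sorted(nodes), round(density, 2), bridges)
```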

  12. Evolution of the genetic code: partial optimization of a random code for robustness to translation error in a rugged fitness landscape.

    Science.gov (United States)

    Novozhilov, Artem S; Wolf, Yuri I; Koonin, Eugene V

    2007-10-23

    The standard genetic code table has a distinctly non-random structure, with similar amino acids often encoded by codons series that differ by a single nucleotide substitution, typically, in the third or the first position of the codon. It has been repeatedly argued that this structure of the code results from selective optimization for robustness to translation errors such that translational misreading has the minimal adverse effect. Indeed, it has been shown in several studies that the standard code is more robust than a substantial majority of random codes. However, it remains unclear how much evolution the standard code underwent, what is the level of optimization, and what is the likely starting point. We explored possible evolutionary trajectories of the genetic code within a limited domain of the vast space of possible codes. Only those codes were analyzed for robustness to translation error that possess the same block structure and the same degree of degeneracy as the standard code. This choice of a small part of the vast space of possible codes is based on the notion that the block structure of the standard code is a consequence of the structure of the complex between the cognate tRNA and the codon in mRNA where the third base of the codon plays a minimum role as a specificity determinant. Within this part of the fitness landscape, a simple evolutionary algorithm, with elementary evolutionary steps comprising swaps of four-codon or two-codon series, was employed to investigate the optimization of codes for the maximum attainable robustness. The properties of the standard code were compared to the properties of four sets of codes, namely, purely random codes, random codes that are more robust than the standard code, and two sets of codes that resulted from optimization of the first two sets. The comparison of these sets of codes with the standard code and its locally optimized version showed that, on average, optimization of random codes yielded evolutionary

  13. Measurement Rounding Errors in an Assessment Model of Project Led Engineering Education

    OpenAIRE

    Francisco Moreira; Sousa, Rui M., ed. lit.; Celina P Leão; Anabela C Alves; Lima, Rui M.

    2009-01-01

    This paper analyzes the rounding errors that occur in the assessment of an interdisciplinary Project-Led Education (PLE) process implemented in the Integrated Master degree on Industrial Management and Engineering (IME) at University of Minho. PLE is an innovative educational methodology which makes use of active learning, promoting higher levels of motivation and students’ autonomy. The assessment model is based on multiple evaluation components with different weights. Each component can be ...

  14. Innovative measurement of parallelism for parallel transparent plate based on optical scanning holography by using a random-phase pupil.

    Science.gov (United States)

    Luo-Zhi, Zhang; Jian-Ping, Hu; Dao-Ming, Wan; Xing, Zeng; Chun-Miao, Li; Xin, Zhou

    2015-03-20

    A potential method is proposed to measure the parallelism of a parallel transparent plate, with an improved lower limit and a convenient process, by optical scanning holography (OSH) using a random-phase pupil, which is largely distinct from traditional methods. As a new possible application of OSH, this promising method is demonstrated theoretically and numerical simulations are carried out on a 2 cm × 2 cm parallel plate. Discussion is also provided on the quality of the reconstructed image as well as the local mean square error (MSE), which are closely related to the parallelism of the sample. These quantities may serve as criteria for judging parallelism, whereas in most interference methods the criterion is the spacing between interference fringes. In addition, the randomness of the random-phase pupil also affects the quality of the reconstructed image and the local MSE. According to the simulation results, high parallelism usually brings about distinguishable reconstructed information and a suppressed local MSE.

  15. Robustness of SOC Estimation Algorithms for EV Lithium-Ion Batteries against Modeling Errors and Measurement Noise

    Directory of Open Access Journals (Sweden)

    Xue Li

    2015-01-01

    Full Text Available State of charge (SOC) is one of the most important parameters in a battery management system (BMS). There are numerous algorithms for SOC estimation, mostly of model-based observer/filter types such as Kalman filters, closed-loop observers, and robust observers. Modeling errors and measurement noises have a critical impact on the accuracy of SOC estimation in these algorithms. This paper is a comparative study of the robustness of SOC estimation algorithms against modeling errors and measurement noises. By using a typical battery platform for vehicle applications with sensor noise and battery aging characterization, three popular and representative SOC estimation methods (extended Kalman filter, PI-controlled observer, and H∞ observer) are compared on such robustness. The simulation and experimental results demonstrate the deterioration of SOC estimation accuracy under modeling errors resulting from aging and larger measurement noise, which is quantitatively characterized. The findings of this paper provide useful information on the following aspects: (1) how SOC estimation accuracy depends on modeling reliability and voltage measurement accuracy; (2) pros and cons of typical SOC estimators in their robustness and reliability; (3) guidelines for requirements on battery system identification and sensor selections.
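    As a minimal illustration of the model-based observer/filter idea, the sketch below runs a one-state Kalman filter for SOC with a linearised open-circuit-voltage curve. All model parameters and noise levels are invented; real implementations, including those compared in the paper, use an extended Kalman filter, PI-controlled observer, or H∞ observer on a full equivalent-circuit model.

```python
import numpy as np

# Toy one-state Kalman filter for SOC estimation (all parameter values are made up).
dt, Q_Ah = 1.0, 10.0                 # time step [s], cell capacity [Ah]
a, b, R0 = 3.4, 0.8, 0.01            # OCV ~ a + b*SOC [V], ohmic resistance [ohm]
q, r = 1e-7, 1e-4                    # process / measurement noise variances

def kf_soc(current, voltage, soc0=0.5, p0=0.01):
    soc, P = soc0, p0
    estimates = []
    for i, v in zip(current, voltage):
        # Predict: coulomb counting (discharge current positive)
        soc = soc - i * dt / (Q_Ah * 3600.0)
        P = P + q
        # Update: compare the predicted terminal voltage with the measurement
        v_pred = a + b * soc - R0 * i
        K = P * b / (b * P * b + r)
        soc = soc + K * (v - v_pred)
        P = (1.0 - K * b) * P
        estimates.append(soc)
    return np.array(estimates)

# Simulate a constant 5 A discharge with noisy voltage readings
rng = np.random.default_rng(0)
n = 3600
true_soc = 0.9 - 5.0 * dt * np.arange(n) / (Q_Ah * 3600.0)
voltage = a + b * true_soc - R0 * 5.0 + rng.normal(scale=0.01, size=n)
est = kf_soc(np.full(n, 5.0), voltage, soc0=0.6)   # deliberately wrong initial SOC
print(round(true_soc[-1], 3), round(est[-1], 3))   # estimate converges toward the truth
```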

  16. Measuring Edge Importance: A Quantitative Analysis of the Stochastic Shielding Approximation for Random Processes on Graphs

    Science.gov (United States)

    2014-01-01

    Mathematical models of cellular physiological mechanisms often involve random walks on graphs representing transitions within networks of functional states. Schmandt and Galán recently introduced a novel stochastic shielding approximation as a fast, accurate method for generating approximate sample paths from a finite state Markov process in which only a subset of states are observable. For example, in ion-channel models, such as the Hodgkin–Huxley or other conductance-based neural models, a nerve cell has a population of ion channels whose states comprise the nodes of a graph, only some of which allow a transmembrane current to pass. The stochastic shielding approximation consists of neglecting fluctuations in the dynamics associated with edges in the graph not directly affecting the observable states. We consider the problem of finding the optimal complexity reducing mapping from a stochastic process on a graph to an approximate process on a smaller sample space, as determined by the choice of a particular linear measurement functional on the graph. The partitioning of ion-channel states into conducting versus nonconducting states provides a case in point. In addition to establishing that Schmandt and Galán’s approximation is in fact optimal in a specific sense, we use recent results from random matrix theory to provide heuristic error estimates for the accuracy of the stochastic shielding approximation for an ensemble of random graphs. Moreover, we provide a novel quantitative measure of the contribution of individual transitions within the reaction graph to the accuracy of the approximate process. PMID:24742077

  17. Error Analysis of High Frequency Core Loss Measurement for Low-Permeability Low-Loss Magnetic Cores

    DEFF Research Database (Denmark)

    Niroumand, Farideh Javidi; Nymand, Morten

    2016-01-01

    Due to soft saturation and very low core loss, low-permeability low-loss magnetic cores are favorable in many of the high-efficiency high power-density power converters. Magnetic powder cores, among the low-permeability low-loss cores, are very attractive since they possess lower magnetic losses compared to gapped ferrites. A common method for measuring losses in magnetic cores is B-H loop measurement, where two windings are placed on the core under test. However, this method is highly vulnerable to phase shift error, especially for low-permeability, low-loss cores. This paper presents an analytical study of the phase shift error in the core... The analysis has been validated by experimental measurements for relatively low-loss magnetic cores with different permeability values...
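    The sensitivity described above can be demonstrated numerically: for a low-loss core the voltage-current phase angle is close to 90 degrees, so the relative error in measured loss grows roughly as tan(phi) times the phase-shift error. The frequency, phase angle, and error value below are illustrative assumptions.

```python
import numpy as np

# Why two-winding (B-H loop) core loss measurements are so sensitive to phase error
# for low-loss cores: with a nearly 90-degree v-i angle, the loss error grows roughly
# as tan(phi) * delta_phi. All values below are illustrative.
f = 100e3                          # excitation frequency [Hz]
phi = np.deg2rad(89.0)             # true voltage-current phase angle (low-loss core)
dphi = np.deg2rad(0.1)             # instrumentation phase-shift error

t = np.linspace(0.0, 1.0 / f, 10_000, endpoint=False)
v = np.sin(2 * np.pi * f * t)                      # normalised voltage
i_true = np.sin(2 * np.pi * f * t - phi)           # normalised current
i_meas = np.sin(2 * np.pi * f * t - phi - dphi)    # current seen with the phase error

p_true = np.mean(v * i_true)
p_meas = np.mean(v * i_meas)
# Magnitude is roughly tan(89 deg) * (0.1 deg in radians) ~ 10%
print(f"relative loss error: {(p_meas - p_true) / p_true:.1%}")
```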

  18. From transmission error measurement to Pulley-Belt slip determination in serpentine belt drives : influence of tensioner and belt characteristics

    OpenAIRE

    Manin, Lionel; Michon, Guilhem; Rémond, Didier; Dufour, Regis

    2009-01-01

    Serpentine belt drives are often used in the front-end accessory drive of automotive engines. Accessory resistant torques are increasing with new technological innovations such as the starter-alternator, and belt transmissions are required to provide ever higher capacity. Two kinds of tensioners are used to maintain the minimum tension that ensures power transmission and minimizes slip: dry friction or hydraulic tensioners. An experimental device and a specific transmission error measurement method have been u...

  19. Impact of food and fluid intake on technical and biological measurement error in body composition assessment methods in athletes.

    Science.gov (United States)

    Kerr, Ava; Slater, Gary J; Byrne, Nuala

    2017-02-01

    Two, three and four compartment (2C, 3C and 4C) models of body composition are popular methods to measure fat mass (FM) and fat-free mass (FFM) in athletes. However, the impact of food and fluid intake on measurement error has not been established. The purpose of this study was to evaluate standardised (overnight fasted, rested and hydrated) v. non-standardised (afternoon and non-fasted) presentation on technical and biological error on surface anthropometry (SA), 2C, 3C and 4C models. In thirty-two athletic males, measures of SA, dual-energy X-ray absorptiometry (DXA), bioelectrical impedance spectroscopy (BIS) and air displacement plethysmography (BOD POD) were taken to establish 2C, 3C and 4C models. Tests were conducted after an overnight fast (duplicate), about 7 h later after ad libitum food and fluid intake, and repeated 24 h later before and after ingestion of a specified meal. Magnitudes of changes in the mean and typical errors of measurement were determined. Mean change scores for non-standardised presentation and post meal tests for FM were substantially large in BIS, SA, 3C and 4C models. For FFM, mean change scores for non-standardised conditions produced large changes for BIS, 3C and 4C models, small for DXA, trivial for BOD POD and SA. Models that included a total body water (TBW) value from BIS (3C and 4C) were more sensitive to TBW changes in non-standardised conditions than 2C models. Biological error is minimised in all models with standardised presentation but DXA and BOD POD are acceptable if acute food and fluid intake remains below 500 g.

  20. Measuring spatial transmission of white maize prices between South Africa and Mozambique: An asymmetric error correction model approach

    OpenAIRE

    Acosta, Alejandro

    2012-01-01

    Over the last decade, Mozambique has experienced drastic increases in food prices, with serious implications for households’ real income. A deeper understanding of how food prices are spatially transmitted from global to domestic markets is thus fundamental for designing policy measures to reduce poverty and food insecurity. This study assesses the spatial transmission of white maize prices between South Africa and Mozambique using an asymmetric error correction model to estimate the speed ...

  1. Regions of constrained maximum likelihood parameter identifiability. [of discrete-time nonlinear dynamic systems with white measurement errors

    Science.gov (United States)

    Lee, C.-H.; Herget, C. J.

    1976-01-01

    This short paper considers the parameter-identification problem of general discrete-time, nonlinear, multiple input-multiple output dynamic systems with Gaussian white distributed measurement errors. Knowledge of the system parameterization is assumed to be available. Regions of constrained maximum likelihood (CML) parameter identifiability are established. A computation procedure employing interval arithmetic is proposed for finding explicit regions of parameter identifiability for the case of linear systems.

  2. Image registration error variance as a measure of overlay quality. [satellite data processing

    Science.gov (United States)

    Mcgillem, C. D.; Svedlow, M.

    1976-01-01

    When one image (the signal) is to be registered with a second image (the signal plus noise) of the same scene, one would like to know the accuracy possible for this registration. This paper derives an estimate of the variance of the registration error that can be expected via two approaches. The solution in each instance is found to be a function of the effective bandwidth of the signal and the noise, and the signal-to-noise ratio. Application of these results to LANDSAT-1 data indicates that for most cases, registration variances will be significantly less than the diameter of one picture element.

  3. Noninvasive Techniques for Blood Pressure Measurement Are Not a Reliable Alternative to Direct Measurement: A Randomized Crossover Trial in ICU

    Directory of Open Access Journals (Sweden)

    Sara Ribezzo

    2014-01-01

    Full Text Available Introduction. Noninvasive blood pressure (NIBP) monitoring methods are widely used in critically ill patients despite poor evidence of their accuracy. The erroneous interpretations of blood pressure (BP) may lead to clinical errors. Objectives. To test the accuracy and reliability of aneroid (ABP) and oscillometric (OBP) devices compared to the invasive BP (IBP) monitoring in an ICU population. Materials and Methods. Fifty adult patients (200 comparisons) were included in a randomized crossover trial. BP was recorded simultaneously by IBP and either by ABP or by OBP, taking IBP as gold standard. Results. Compared with ABP, IBP systolic values were significantly higher (mean difference ± standard deviation 9.74±13.8; P<0.0001). Both diastolic (-5.13±7.1; P<0.0001) and mean (-2.14±7.1; P=0.0033) IBP were instead lower. Compared with OBP, systolic (10.80±14.9; P<0.0001) and mean (5.36±7.1; P<0.0001) IBP were higher, while diastolic IBP (-3.62±6.0; P<0.0001) was lower. Bland-Altman plots showed wide limits of agreement in both NIBP-IBP comparisons. Conclusions. BP measurements with different devices produced significantly different results. Since in critically ill patients the importance of BP readings is often crucial, noninvasive techniques cannot be regarded as reliable alternatives to direct measurements.
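    The Bland-Altman limits of agreement mentioned in the results are simply the mean paired difference (the bias) plus or minus 1.96 times the standard deviation of the paired differences. A minimal sketch on simulated paired readings (the offset and scatter are assumed, not the trial's data):

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated paired systolic readings (mmHg): invasive (IBP) vs. noninvasive (NIBP).
# The 10 mmHg offset and the scatter are assumed, for illustration only.
ibp = rng.normal(125, 20, size=200)
nibp = ibp - 10 + rng.normal(0, 14, size=200)

diff = ibp - nibp
bias = diff.mean()
half_width = 1.96 * diff.std(ddof=1)

print(f"bias = {bias:.1f} mmHg, "
      f"95% limits of agreement = [{bias - half_width:.1f}, {bias + half_width:.1f}] mmHg")
```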

  4. The Measure of Human Error: Direct and Indirect Performance Shaping Factors

    Energy Technology Data Exchange (ETDEWEB)

    Ronald L. Boring; Candice D. Griffith; Jeffrey C. Joe

    2007-08-01

    The goal of performance shaping factors (PSFs) is to provide measures to account for human performance. PSFs fall into two categories—direct and indirect measures of human performance. While some PSFs such as “time to complete a task” are directly measurable, other PSFs, such as “fitness for duty,” can only be measured indirectly through other measures and PSFs, such as through fatigue measures. This paper explores the role of direct and indirect measures in human reliability analysis (HRA) and the implications that measurement theory has on analyses and applications using PSFs. The paper concludes with suggestions for maximizing the reliability and validity of PSFs.

  5. A Simulation Study of Categorizing Continuous Exposure Variables Measured with Error in Autism Research: Small Changes with Large Effects.

    Science.gov (United States)

    Heavner, Karyn; Burstyn, Igor

    2015-08-24

    Variation in the odds ratio (OR) resulting from selection of cutoffs for categorizing continuous variables is rarely discussed. We present results for the effect of varying cutoffs used to categorize a mismeasured exposure in a simulated population in the context of autism spectrum disorders research. Simulated cohorts were created with three distinct exposure-outcome curves and three measurement error variances for the exposure. ORs were calculated using logistic regression for 61 cutoffs (mean ± 3 standard deviations) used to dichotomize the observed exposure. ORs were calculated for five categories with a wide range for the cutoffs. For each scenario and cutoff, the OR, sensitivity, and specificity were calculated. The three exposure-outcome relationships had distinctly shaped OR (versus cutoff) curves, but increasing measurement error obscured the shape. At extreme cutoffs, there was non-monotonic oscillation in the ORs that cannot be attributed to "small numbers." Exposure misclassification following categorization of the mismeasured exposure was differential, as predicted by theory. Sensitivity was higher among cases and specificity among controls. Cutoffs chosen for categorizing continuous variables can have profound effects on study results. When measurement error is not too great, the shape of the OR curve may provide insight into the true shape of the exposure-disease relationship.
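    A simulation of this kind is straightforward to reproduce in outline: draw a true exposure, add classical measurement error, dichotomize the observed exposure at a chosen cutoff and compute the odds ratio. The sketch below uses assumed parameters (not the authors' exposure-outcome curves) and reads the OR from the 2x2 table, which for a single dichotomized exposure equals the logistic-regression OR.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 20_000

# True exposure and a binary outcome generated from an assumed logistic curve.
x_true = rng.normal(0, 1, n)
p = 1 / (1 + np.exp(-(-2.0 + 0.8 * x_true)))
y = rng.binomial(1, p)

# Observed exposure = true exposure + classical (random) measurement error.
x_obs = x_true + rng.normal(0, 1.0, n)   # assumed error standard deviation

for cutoff in (-1.0, 0.0, 1.0):          # a few of the many possible cutoffs
    exposed = x_obs > cutoff
    a = np.sum(exposed & (y == 1)); b = np.sum(exposed & (y == 0))
    c = np.sum(~exposed & (y == 1)); d = np.sum(~exposed & (y == 0))
    odds_ratio = (a * d) / (b * c)
    print(f"cutoff {cutoff:+.1f}: OR = {odds_ratio:.2f}")
```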

  6. A Simulation Study of Categorizing Continuous Exposure Variables Measured with Error in Autism Research: Small Changes with Large Effects

    Directory of Open Access Journals (Sweden)

    Karyn Heavner

    2015-08-01

    Full Text Available Variation in the odds ratio (OR) resulting from selection of cutoffs for categorizing continuous variables is rarely discussed. We present results for the effect of varying cutoffs used to categorize a mismeasured exposure in a simulated population in the context of autism spectrum disorders research. Simulated cohorts were created with three distinct exposure-outcome curves and three measurement error variances for the exposure. ORs were calculated using logistic regression for 61 cutoffs (mean ± 3 standard deviations) used to dichotomize the observed exposure. ORs were calculated for five categories with a wide range for the cutoffs. For each scenario and cutoff, the OR, sensitivity, and specificity were calculated. The three exposure-outcome relationships had distinctly shaped OR (versus cutoff) curves, but increasing measurement error obscured the shape. At extreme cutoffs, there was non-monotonic oscillation in the ORs that cannot be attributed to “small numbers.” Exposure misclassification following categorization of the mismeasured exposure was differential, as predicted by theory. Sensitivity was higher among cases and specificity among controls. Cutoffs chosen for categorizing continuous variables can have profound effects on study results. When measurement error is not too great, the shape of the OR curve may provide insight into the true shape of the exposure-disease relationship.

  7. Use of rigid-body motion for the investigation and estimation of the measurement errors related to digital image correlation technique

    Science.gov (United States)

    Haddadi, H.; Belhabib, S.

    2008-02-01

    The aim of this work is to investigate the sources of error related to the digital image correlation (DIC) technique applied to strain measurements. Knowledge of such information is important before the measured kinematic fields can be exploited. After recalling the principle of DIC, some sources of error related to this technique are listed. Both numerical and experimental tests, based on rigid-body motion, are proposed. These tests are simple and easy to implement. They make it possible to quickly assess the errors related to lighting, the optical lens (distortion), the CCD sensor, the out-of-plane displacement, the speckle pattern, the grid pitch, the size of the subset and the correlation algorithm. The error sources that cannot be uncoupled were estimated by amplifying their contribution to the global error. The results obtained allow a classification of the errors related to the equipment used. The paper ends with some suggestions for minimizing the errors.

  8. On Measurement of Efficiency of Cobb-Douglas Production Function with Additive and Multiplicative Errors

    Directory of Open Access Journals (Sweden)

    Md. Moyazzem Hossain

    2015-02-01

    Full Text Available In developing countries, the efficiency of economic development is determined by the analysis of industrial production. An examination of the characteristics of the industrial sector is an essential aspect of growth studies. Most developed countries are highly industrialized, as summarised by “The more industrialization, the more development”. For proper industrialization and industrial development, the industrial input-output relationship must be studied, which leads to production analysis. For a number of reasons, econometricians believe that industrial production is the most important component of economic development: if domestic industrial production increases, GDP will increase; if the elasticity of labor is higher, employment rates will increase; and investment will increase if the elasticity of capital is higher. In this regard, this paper should be helpful in suggesting the most suitable Cobb-Douglas production function to forecast the production process for some selected manufacturing industries of developing countries such as Bangladesh. This paper chooses the appropriate Cobb-Douglas function that gives the optimal combination of inputs, that is, the combination that enables the desired level of output to be produced with minimum cost and hence maximum profitability, for some selected manufacturing industries of Bangladesh over the period 1978-79 to 2011-2012. The estimated results show that the estimates of both capital and labor elasticity for the Cobb-Douglas production function with additive errors are more efficient than the estimates for the Cobb-Douglas production function with multiplicative errors.
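    The practical difference between the two error specifications is worth spelling out: with a multiplicative error, Q = A·K^α·L^β·e^ε, taking logarithms gives a linear model estimable by OLS, whereas with an additive error, Q = A·K^α·L^β + ε, the level equation must be fitted by nonlinear least squares. A minimal sketch on simulated data (assumed elasticities, not the Bangladesh estimates):

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(2)
n = 200

# Simulated inputs (capital K, labour L) and Cobb-Douglas output with
# assumed elasticities alpha = 0.4, beta = 0.6 and multiplicative noise.
K = rng.lognormal(3.0, 0.4, n)
L = rng.lognormal(4.0, 0.3, n)
Q = 2.0 * K**0.4 * L**0.6 * np.exp(rng.normal(0, 0.1, n))

# Multiplicative-error specification: log-linear OLS.
X = np.column_stack([np.ones(n), np.log(K), np.log(L)])
coef, *_ = np.linalg.lstsq(X, np.log(Q), rcond=None)
print("log-linear OLS: alpha = %.3f, beta = %.3f" % (coef[1], coef[2]))

# Additive-error specification: nonlinear least squares on the level equation.
def cobb_douglas(inputs, A, alpha, beta):
    K, L = inputs
    return A * K**alpha * L**beta

popt, _ = curve_fit(cobb_douglas, (K, L), Q, p0=[1.0, 0.5, 0.5])
print("nonlinear LS:   alpha = %.3f, beta = %.3f" % (popt[1], popt[2]))
```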

  9. Measuring Residential Segregation With the ACS: How the Margin of Error Affects the Dissimilarity Index.

    Science.gov (United States)

    Napierala, Jeffrey; Denton, Nancy

    2017-02-01

    The American Community Survey (ACS) provides valuable, timely population estimates but with increased levels of sampling error. Although the margin of error is included with aggregate estimates, it has not been incorporated into segregation indexes. With the increasing levels of diversity in small and large places throughout the United States comes a need to track accurately and study changes in racial and ethnic segregation between censuses. The 2005-2009 ACS is used to calculate three dissimilarity indexes (D) for all core-based statistical areas (CBSAs) in the United States. We introduce a simulation method for computing segregation indexes and examine them with particular regard to the size of the CBSAs. Additionally, a subset of CBSAs is used to explore how ACS indexes differ from those computed using the 2000 and 2010 censuses. Findings suggest that the precision and accuracy of D from the ACS is influenced by a number of factors, including the number of tracts and minority population size. For smaller areas, point estimates systematically overstate actual levels of segregation, and large confidence intervals lead to limited statistical power.
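    The index in question is the dissimilarity index, D = ½ Σ_i |a_i/A − b_i/B| summed over tracts. The upward bias for small areas can be illustrated by a toy simulation in which the true D is zero and sampling noise alone pushes the estimate above zero; the counts and noise model below are assumed, not the ACS-based procedure used in the paper.

```python
import numpy as np

rng = np.random.default_rng(3)

def dissimilarity(a, b):
    """D = 0.5 * sum_i |a_i/A - b_i/B| over census tracts."""
    return 0.5 * np.abs(a / a.sum() - b / b.sum()).sum()

# Assumed 'true' tract counts in a small area: both groups are evenly spread,
# so the true dissimilarity index is exactly 0.
tracts = 30
minority = np.full(tracts, 120.0)
majority = np.full(tracts, 800.0)
print("true D:", dissimilarity(minority, majority))

# Mimic survey sampling error with Poisson noise on the tract counts.
sims = [dissimilarity(rng.poisson(minority), rng.poisson(majority))
        for _ in range(1000)]
print("mean estimated D under sampling noise: %.3f" % np.mean(sims))
```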

  10. Quantitative shearography: error reduction by using more than three measurement channels

    OpenAIRE

    Charrett, Thomas O. H.; Francis, Daniel; Tatam, Ralph P.

    2011-01-01

    Shearography is a noncontact optical technique used to measure surface displacement derivatives. Full surface strain characterization can be achieved using shearography configurations employing at least three measurement channels. Each measurement channel is sensitive to a single displacement gradient component defined by its sensitivity vector. A matrix transformation is then required to convert the measured components to the orthogonal displacement gradients required for q...
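    With more than three channels the conversion from measured components to the orthogonal displacement gradients becomes an overdetermined linear system, and solving it in a least-squares sense averages down random phase noise. A minimal sketch with assumed sensitivity vectors (illustrative only, not the instrument geometry of the paper):

```python
import numpy as np

rng = np.random.default_rng(4)

# Each row is an assumed sensitivity vector of one measurement channel; the
# unknowns are the three orthogonal displacement-gradient components.
S = np.array([[1.0, 0.2, 0.1],
              [0.1, 1.0, 0.2],
              [0.2, 0.1, 1.0],
              [0.6, 0.6, 0.3],     # extra channels beyond the minimum of three
              [0.3, 0.6, 0.6]])
g_true = np.array([1e-4, -5e-5, 2e-5])   # 'true' displacement gradients

err3, err5 = [], []
for _ in range(2000):
    noise = rng.normal(0, 2e-6, size=S.shape[0])   # random measurement noise
    m = S @ g_true + noise
    g3 = np.linalg.solve(S[:3], m[:3])             # exactly determined, 3 channels
    g5, *_ = np.linalg.lstsq(S, m, rcond=None)     # least squares, 5 channels
    err3.append(np.linalg.norm(g3 - g_true))
    err5.append(np.linalg.norm(g5 - g_true))

print("mean gradient error, 3 channels: %.2e" % np.mean(err3))
print("mean gradient error, 5 channels: %.2e" % np.mean(err5))
```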

  11. Narrowband (LPC-10) Vocoder Performance under Combined Effects of Random Bit Errors and Jet Aircraft Cabin Noise.

    Science.gov (United States)

    1983-12-01

    Comparison of Regression Lines Estimating Scores for the Sustention Intelligibility Feature vs Bit Error Rate for the DOD LPC-10 Vocoder [report figure caption]. In both conditions, the feature "sibilation" obtained the highest scores, and the features "graveness" and "sustention" received the poorest scores, but ... were under much greater impairment in the noise environment. Details of the variations in scores for sustention are shown in Figure 34.

  12. A measure of the impact of CV incompleteness on prediction error estimation with application to PCA and normalization.

    Science.gov (United States)

    Hornung, Roman; Bernau, Christoph; Truntzer, Caroline; Wilson, Rory; Stadler, Thomas; Boulesteix, Anne-Laure

    2015-11-04

    In applications of supervised statistical learning in the biomedical field it is necessary to assess the prediction error of the respective prediction rules. Often, data preparation steps are performed on the dataset, in its entirety, before training/test-set-based prediction error estimation by cross-validation (CV), an approach referred to as "incomplete CV". Whether incomplete CV can result in an optimistically biased error estimate depends on the data preparation step under consideration. Several empirical studies have investigated the extent of bias induced by performing preliminary supervised variable selection before CV. To our knowledge, however, the potential bias induced by other data preparation steps has not yet been examined in the literature. In this paper we investigate this bias for two common data preparation steps: normalization and principal component analysis for dimension reduction of the covariate space (PCA). Furthermore we obtain preliminary results for the following steps: optimization of tuning parameters, variable filtering by variance and imputation of missing values. We devise the easily interpretable and general measure CVIIM ("CV Incompleteness Impact Measure") to quantify the extent of bias induced by incomplete CV with respect to a data preparation step of interest. This measure can be used to determine whether a specific data preparation step should, as a general rule, be performed in each CV iteration or whether an incomplete CV procedure would be acceptable in practice. We apply CVIIM to large collections of microarray datasets to answer this question for normalization and PCA. Performing normalization on the entire dataset before CV did not result in a noteworthy optimistic bias in any of the investigated cases. In contrast, when performing PCA before CV, medium to strong underestimates of the prediction error were observed in multiple settings. While the investigated forms of normalization can be safely performed before CV, PCA
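    The mechanism behind incomplete CV is easiest to see with the supervised variable selection case cited above: selecting features on the full dataset and only then cross-validating leaks outcome information into the test folds. The sketch below contrasts the two pipelines on pure-noise data using scikit-learn (assumed available); it illustrates the general phenomenon, not the CVIIM software or the normalization/PCA analyses of the paper.

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(5)
n, p, keep = 60, 2000, 20
X = rng.normal(size=(n, p))            # pure noise: no real signal
y = rng.integers(0, 2, size=n)

def top_features(X, y, k):
    """Rank features by absolute correlation with the outcome."""
    score = np.abs(np.corrcoef(X, y, rowvar=False)[-1, :-1])
    return np.argsort(score)[-k:]

cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)

# Incomplete CV: supervised selection on the whole dataset, then CV.
sel = top_features(X, y, keep)
acc_incomplete = []
for tr, te in cv.split(X, y):
    clf = LogisticRegression(max_iter=1000).fit(X[tr][:, sel], y[tr])
    acc_incomplete.append(clf.score(X[te][:, sel], y[te]))

# Full CV: selection repeated inside every training fold.
acc_full = []
for tr, te in cv.split(X, y):
    sel_tr = top_features(X[tr], y[tr], keep)
    clf = LogisticRegression(max_iter=1000).fit(X[tr][:, sel_tr], y[tr])
    acc_full.append(clf.score(X[te][:, sel_tr], y[te]))

print("incomplete CV accuracy: %.2f (optimistic)" % np.mean(acc_incomplete))
print("full CV accuracy:       %.2f (near 0.5, as expected for noise)" % np.mean(acc_full))
```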

  13. Exponential Decay of Reconstruction Error from Binary Measurements of Sparse Signals

    Science.gov (United States)

    2014-08-01

    activity in the past decade (see the website [DSP] or the references in the monographs [EK12, FR13]), it is now well known that when A consists of, say...since then, see [DSP] for a growing list of literature in this area. Several efficient recovery algorithms have been proposed, based on linear...developed above. The computations, performed in MATLAB, are reproducible and can be downloaded from the second author’s webpage. The random

  14. Error analysis for resonant thermonuclear reaction rates

    CERN Document Server

    Thompson, W J

    1999-01-01

    A detailed presentation is given of estimating uncertainties in thermonuclear reaction rates for stellar nucleosynthesis involving narrow resonances, starting from random errors in measured or calculated resonance and nuclear level properties. Special attention is given to statistical matters such as probability distributions, error propagation, and correlations between errors. Interpretation of resulting uncertainties in reaction rates and the distinction between symmetric and asymmetric errors are also discussed. Computing reaction rate uncertainties is described. We give examples from explosive nucleosynthesis by hydrogen burning on light nuclei.
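    For a narrow resonance the rate is proportional to the resonance strength times exp(−E_r/kT), so Gaussian errors on the resonance energy propagate into markedly asymmetric rate uncertainties, one of the points the paper emphasizes. A minimal Monte Carlo propagation sketch with assumed resonance parameters (not values from the paper):

```python
import numpy as np

rng = np.random.default_rng(6)

# Assumed resonance properties (illustrative): strength and energy with
# Gaussian measurement errors; the stellar temperature is held fixed.
kT = 0.01            # MeV, thermal energy at the assumed temperature
E_r = 0.150          # MeV, resonance energy
sigma_E = 0.005      # MeV, 1-sigma error on the energy
wg = 1.0e-6          # MeV, resonance strength (omega-gamma)
sigma_wg = 0.2e-6    # 1-sigma error on the strength

samples = 100_000
E = rng.normal(E_r, sigma_E, samples)
w = rng.normal(wg, sigma_wg, samples)

rate = w * np.exp(-E / kT)   # narrow-resonance rate, up to constant factors

lo, med, hi = np.percentile(rate, [16, 50, 84])
print("median rate (arb. units): %.3e" % med)
print("68%% interval: -%.0f%% / +%.0f%%" % (100 * (1 - lo / med), 100 * (hi / med - 1)))
```

    The resulting interval is visibly asymmetric, which is why log-normal-like descriptions of rate uncertainties are often preferred over symmetric error bars.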

  15. A Statistical Method and Tool to Account for Indirect Calorimetry Differential Measurement Error in a Single-Subject Analysis.

    Science.gov (United States)

    Tenan, Matthew S

    2016-01-01

    Indirect calorimetry and oxygen consumption (VO2) are accepted tools in human physiology research. It has been shown that indirect calorimetry systems exhibit differential measurement error, where the error of a device is systematically different depending on the volume of gas flow. Moreover, systems commonly report multiple decimal places of precision, giving the clinician a false sense of device accuracy. The purpose of this manuscript is to demonstrate the use of a novel statistical tool which models the reliability of two specific indirect calorimetry systems, Douglas bag and Parvomedics 2400 TrueOne, as univariate normal distributions and implements the distribution overlapping coefficient to determine the likelihood that two VO2 measures are the same. A command line implementation of the tool is available for the R programming language as well as a web-based graphical user interface (GUI). This tool is valuable for clinicians performing a single-subject analysis as well as researchers interested in determining if their observed differences exceed the error of the device.
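    The distribution overlapping coefficient referred to here is the integral of the pointwise minimum of the two normal densities. A minimal sketch (the means and standard deviations below are assumed for illustration; the tool itself derives the spread from the devices' flow-dependent error models):

```python
import numpy as np
from scipy.stats import norm

def overlap_coefficient(mu1, sd1, mu2, sd2, n_grid=20001):
    """Overlapping coefficient of two normal densities via trapezoidal integration."""
    lo = min(mu1 - 6 * sd1, mu2 - 6 * sd2)
    hi = max(mu1 + 6 * sd1, mu2 + 6 * sd2)
    x = np.linspace(lo, hi, n_grid)
    y = np.minimum(norm.pdf(x, mu1, sd1), norm.pdf(x, mu2, sd2))
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

# Two VO2 readings (L/min) modelled as normal distributions; values assumed.
ovl = overlap_coefficient(mu1=3.20, sd1=0.08, mu2=3.35, sd2=0.10)
print(f"overlapping coefficient = {ovl:.2f}")  # near 1 => likely the same true VO2
```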

  16. Snow Precipitation Measured by Gauges: Systematic Error Estimation and Data Series Correction in the Central Italian Alps

    Directory of Open Access Journals (Sweden)

    Giovanna Grossi

    2017-06-01

    Full Text Available Precipitation measurements by rain gauges are usually affected by a systematic underestimation, which can be larger in case of snowfall. The wind, disturbing the trajectory of the falling water droplets or snowflakes above the rain gauge, is the major source of error, but when tipping-bucket recording gauges are used, the induced evaporation due to the heating device must also be taken into account. Manual measurements of fresh snow water equivalent (SWE) were taken in Alpine areas of Valtellina and Vallecamonica, in Northern Italy, and compared with daily precipitation and melted snow measured by manual precipitation gauges and by mechanical and electronic heated tipping-bucket recording gauges without any wind-shield: all of these gauges underestimated the SWE in a range between 15% and 66%. In some experimental monitoring sites, instead, electronic weighing storage gauges with Alter-type wind-shields are coupled with snow pillows data: daily SWE measurements from these instruments are in good agreement. In order to correct the historical data series of precipitation affected by systematic errors in snowfall measurements, a simple ‘at-site’ and instrument-dependent model was first developed that applies a correction factor as a function of daily air temperature, which is an index of the solid/liquid precipitation type. The threshold air temperatures were estimated through a statistical analysis of snow field observations. The correction model applied to daily observations led to 5–37% total annual precipitation increments, growing with altitude (1740÷2190 m above sea level, a.s.l.) and wind exposure. A second ‘climatological’ correction model based on daily air temperature and wind speed was proposed, leading to errors only slightly higher than those obtained for the at-site corrections.
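    The 'at-site' correction described reduces, in its simplest form, to multiplying each daily gauge total by an instrument-dependent factor whenever the daily air temperature indicates solid or mixed precipitation. The sketch below shows that logic with assumed thresholds and correction factors; the paper's own thresholds come from its statistical analysis of snow-field observations.

```python
import numpy as np

def correct_daily_precip(p_gauge, t_air, t_snow=-1.0, t_rain=3.0,
                         k_snow=1.55, k_mixed=1.20):
    """Apply an at-site, temperature-indexed correction to daily gauge totals.

    Thresholds (deg C) and correction factors are assumed, illustrative values:
    below t_snow the day is treated as snow, above t_rain as rain, and in
    between as mixed precipitation.  Rain days are left unchanged.
    """
    p = np.asarray(p_gauge, dtype=float).copy()
    t = np.asarray(t_air, dtype=float)
    p[t <= t_snow] *= k_snow
    p[(t > t_snow) & (t < t_rain)] *= k_mixed
    return p

# Example daily series (mm and deg C), assumed values:
precip = [12.0, 4.5, 0.0, 8.2, 20.1]
temp = [-5.0, 1.2, 0.5, 6.0, -2.3]
print(correct_daily_precip(precip, temp))
```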

  17. Measuring milk fat content by random laser emission

    Science.gov (United States)

    Abegão, Luis M. G.; Pagani, Alessandra A. C.; Zílio, Sérgio C.; Alencar, Márcio A. R. C.; Rodrigues, José J.

    2016-10-01

    The luminescence spectra of milk containing rhodamine 6G are shown to exhibit typical signatures of random lasing when excited with 532 nm laser pulses. Experiments carried out on whole and skim forms of two commercial brands of UHT milk, with fat volume concentrations ranging from 0 to 4%, presented lasing threshold values dependent on the fat concentration, suggesting that a random laser technique can be developed to monitor such an important parameter.

  18. Comparing range data across the slow-time dimension to correct motion measurement errors beyond the range resolution of a synthetic aperture radar

    Science.gov (United States)

    Doerry, Armin W.; Heard, Freddie E.; Cordaro, J. Thomas

    2010-08-17

    Motion measurement errors that extend beyond the range resolution of a synthetic aperture radar (SAR) can be corrected by effectively decreasing the range resolution of the SAR in order to permit measurement of the error. Range profiles can be compared across the slow-time dimension of the input data in order to estimate the error. Once the error has been determined, appropriate frequency and phase correction can be applied to the uncompressed input data, after which range and azimuth compression can be performed to produce a desired SAR image.
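    Stripped of the SAR-specific details, the comparison of range profiles across slow time can be illustrated by cross-correlating each pulse's range profile against a reference profile and reading the residual range drift from the correlation peak. The synthetic sketch below is a simplification under assumed data, not the patented processing chain:

```python
import numpy as np

rng = np.random.default_rng(7)

n_range, n_pulses = 256, 64
reference = rng.normal(size=n_range)   # stand-in for a reference range profile

# Simulate a slow-time range drift (in bins) caused by uncompensated motion.
true_drift = np.round(np.linspace(0, 9, n_pulses)).astype(int)
profiles = np.stack([np.roll(reference, d) + 0.2 * rng.normal(size=n_range)
                     for d in true_drift])

# Estimate each pulse's drift from the peak of the circular cross-correlation
# with the reference profile (computed via the FFT).
spec_ref = np.fft.fft(reference)
est_drift = []
for prof in profiles:
    xcorr = np.fft.ifft(np.fft.fft(prof) * np.conj(spec_ref)).real
    est_drift.append(int(np.argmax(xcorr)))

print("max |drift estimation error| (bins):",
      np.max(np.abs(np.array(est_drift) - true_drift)))
```

    In the actual processing chain the estimated error would then drive a frequency and phase correction of the uncompressed data before range and azimuth compression, as the abstract describes.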

  19. MAX-DOAS measurements of HONO slant column densities during the MAD-CAT campaign: inter-comparison, sensitivity studies on spectral analysis settings, and error budget

    Science.gov (United States)

    Wang, Yang; Beirle, Steffen; Hendrick, Francois; Hilboll, Andreas; Jin, Junli; Kyuberis, Aleksandra A.; Lampel, Johannes; Li, Ang; Luo, Yuhan; Lodi, Lorenzo; Ma, Jianzhong; Navarro, Monica; Ortega, Ivan; Peters, Enno; Polyansky, Oleg L.; Remmers, Julia; Richter, Andreas; Puentedura, Olga; Van Roozendael, Michel; Seyler, André; Tennyson, Jonathan; Volkamer, Rainer; Xie, Pinhua; Zobov, Nikolai F.; Wagner, Thomas

    2017-10-01

    retrieved in the selected three spectral ranges 335-361, 335-373 and 335-390 nm are considerable (up to 0.57 × 10^15 molecules cm^-2) for both real measurements and synthetic spectra. We performed sensitivity studies to quantify the dominant systematic error sources and to find a recommended DOAS setting in the three spectral ranges. The results show that water vapour absorption, temperature and wavelength dependence of O4 absorption, temperature dependence of the Ring spectrum, and polynomial and intensity offset correction all together dominate the systematic errors. We recommend a fit range of 335-373 nm for HONO retrievals. In such a fit range the overall systematic uncertainty is about 0.87 × 10^15 molecules cm^-2, much smaller than those in the other two ranges. The typical random uncertainty is estimated to be about 0.16 × 10^15 molecules cm^-2, which is only 25 % of the total systematic uncertainty for most of the instruments in the MAD-CAT campaign. In summary, for most of the MAX-DOAS instruments for elevation angles below 5°, half of the daytime measurements (usually in the morning) of HONO delta SCD can be over the detection limit of 0.2 × 10^15 molecules cm^-2 with an uncertainty of ~0.9 × 10^15 molecules cm^-2.

  20. Effects of exposure measurement error in the analysis of health effects from traffic-related air pollution.

    Science.gov (United States)

    Baxter, Lisa K; Wright, Rosalind J; Paciorek, Christopher J; Laden, Francine; Suh, Helen H; Levy, Jonathan I

    2010-01-01

    In large epidemiological studies, many researchers use surrogates of air pollution exposure such as geographic information system (GIS)-based characterizations of traffic or simple housing characteristics. It is important to evaluate quantitatively these surrogates against measured pollutant concentrations to determine how their use affects the interpretation of epidemiological study results. In this study, we quantified the implications of using exposure models derived from validation studies, and other alternative surrogate models with varying amounts of measurement error on epidemiological study findings. We compared previously developed multiple regression models characterizing residential indoor nitrogen dioxide (NO(2)), fine particulate matter (PM(2.5)), and elemental carbon (EC) concentrations to models with less explanatory power that may be applied in the absence of validation studies. We constructed a hypothetical epidemiological study, under a range of odds ratios, and determined the bias and uncertainty caused by the use of various exposure models predicting residential indoor exposure levels. Our simulations illustrated that exposure models with fairly modest R(2) (0.3 to 0.4 for the previously developed multiple regression models for PM(2.5) and NO(2)) yielded substantial improvements in epidemiological study performance, relative to the application of regression models created in the absence of validation studies or poorer-performing validation study models (e.g., EC). In many studies, models based on validation data may not be possible, so it may be necessary to use a surrogate model with more measurement error. This analysis provides a technique to quantify the implications of applying various exposure models with different degrees of measurement error in epidemiological research.
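    The attenuation caused by a weak surrogate can be simulated directly: generate a true exposure, a binary outcome with a known odds ratio, and surrogates that explain different fractions of the exposure variance, then refit the logistic model on each surrogate. The sketch below uses statsmodels (assumed available) and assumed parameters, not the NO2/PM2.5/EC exposure models of the study:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(8)
n = 50_000
true_or = 1.5                       # assumed OR per unit of the true exposure
beta = np.log(true_or)

x_true = rng.normal(0, 1, n)
p = 1 / (1 + np.exp(-(-1.0 + beta * x_true)))
y = rng.binomial(1, p)

for r2 in (1.0, 0.4, 0.1):
    # Surrogate explaining a fraction r2 of the true exposure's variance.
    surrogate = np.sqrt(r2) * x_true + np.sqrt(1 - r2) * rng.normal(0, 1, n)
    fit = sm.Logit(y, sm.add_constant(surrogate)).fit(disp=0)
    print(f"surrogate R^2 = {r2:.1f}: estimated OR = {np.exp(fit.params[1]):.2f}")
```

    As the surrogate's R^2 falls, the estimated OR is pulled toward 1, which is the bias the simulation framework in the paper quantifies for the various exposure models.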