WorldWideScience

Sample records for range measurement error

  1. Correction of motion measurement errors beyond the range resolution of a synthetic aperture radar

    Science.gov (United States)

    Doerry, Armin W [Albuquerque, NM]; Heard, Freddie E [Albuquerque, NM]; Cordaro, J Thomas [Albuquerque, NM]

    2008-06-24

    Motion measurement errors that extend beyond the range resolution of a synthetic aperture radar (SAR) can be corrected by effectively decreasing the range resolution of the SAR in order to permit measurement of the error. Range profiles can be compared across the slow-time dimension of the input data in order to estimate the error. Once the error has been determined, appropriate frequency and phase correction can be applied to the uncompressed input data, after which range and azimuth compression can be performed to produce a desired SAR image.
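    The slow-time comparison of range profiles described above can be illustrated with a toy simulation. The sketch below is not the patented algorithm; it assumes a single point target, an integer-bin range drift, and numpy cross-correlation to recover the drift:

```python
import numpy as np

rng = np.random.default_rng(0)
n_pulses, n_bins = 64, 256

# Hypothetical motion error: the target drifts by a few range bins across slow time.
true_shift = np.round(3.0 * np.sin(np.linspace(0, np.pi, n_pulses))).astype(int)

profiles = np.zeros((n_pulses, n_bins))
for p in range(n_pulses):
    profiles[p, 100 + true_shift[p]] = 1.0              # point target
profiles += 0.01 * rng.standard_normal(profiles.shape)  # receiver noise

# Estimate each pulse's bin shift by cross-correlating its range profile with pulse 0.
est_shift = np.empty(n_pulses, dtype=int)
for p in range(n_pulses):
    xc = np.correlate(profiles[p], profiles[0], mode="full")
    est_shift[p] = np.argmax(xc) - (n_bins - 1)

# Apply the correction by rolling each profile back by its estimated shift.
corrected = np.stack([np.roll(profiles[p], -est_shift[p]) for p in range(n_pulses)])
```

    Once the residual shift is removed, all pulses place the target in the same range bin, after which azimuth compression can proceed as usual.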

  2. Multipath error in range rate measurement by PLL-transponder/GRARR/TDRS

    Science.gov (United States)

    Sohn, S. J.

    1970-01-01

    Range rate errors due to specular and diffuse multipath are calculated for a tracking and data relay satellite (TDRS) using an S-band Goddard range and range rate (GRARR) system modified with a phase-locked loop transponder. Carrier signal processing in the coherent turn-around transponder and the GRARR receiver is taken into account. The root-mean-square (rms) range rate error was computed for the GRARR Doppler extractor and N-cycle count range rate measurement. Curves of worst-case range rate error are presented as a function of grazing angle at the reflection point. At very low grazing angles specular scattering predominates over diffuse scattering, as expected, whereas for grazing angles greater than approximately 15 deg the diffuse multipath predominates. The range rate errors at different low orbit altitudes peaked between 5 and 10 deg grazing angles.

  3. Comparing range data across the slow-time dimension to correct motion measurement errors beyond the range resolution of a synthetic aperture radar

    Science.gov (United States)

    Doerry, Armin W.; Heard, Freddie E.; Cordaro, J. Thomas

    2010-08-17

    Motion measurement errors that extend beyond the range resolution of a synthetic aperture radar (SAR) can be corrected by effectively decreasing the range resolution of the SAR in order to permit measurement of the error. Range profiles can be compared across the slow-time dimension of the input data in order to estimate the error. Once the error has been determined, appropriate frequency and phase correction can be applied to the uncompressed input data, after which range and azimuth compression can be performed to produce a desired SAR image.

  4. Systematic errors in the measurement of adsorption isotherms by frontal analysis: impact of the choice of column hold-up volume, range and density of the data points.

    Science.gov (United States)

    Gritti, Fabrice; Guiochon, Georges

    2005-12-02

    Besides the accuracy and precision of the measurements of the data points, several important parameters affect the accuracy of adsorption isotherms derived from data acquired by frontal analysis (FA). The influence of these parameters is discussed. First, the effects of the width of the concentration range within which the adsorption data are measured, and of the distribution of the data points in this range, are investigated. Systematic elimination of parts of the data points before the nonlinear regression of the data to the model illustrates the importance of the number of data points (1) within the linear range and (2) at high concentrations. The influence of an inaccurate estimate of the column hold-up volume on each adsorption data point, on the selection of the isotherm model, and on the best estimates of the adsorption isotherm parameters is also stressed. Depending on the method used to measure it, the hold-up time can vary by more than 10%. The high-concentration part of the adsorption isotherm is particularly sensitive to errors in the experimental hold-up time t0,exp, and as a result, when the isotherm follows bi-Langmuir behavior, the equilibrium constant of the low-energy sites may change by a factor of 2. This study shows that agreement between calculated and experimental overloaded band profiles is a necessary condition for validating the choice of an adsorption model and the calculation of its numerical parameters, but that this condition is not sufficient.
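    The sensitivity to the hold-up volume can be sketched numerically. The example below assumes a Langmuir isotherm and ideal frontal-analysis breakthrough volumes; all parameter values are hypothetical, chosen only to show how a hold-up error propagates:

```python
import numpy as np

# Hypothetical column and Langmuir isotherm parameters (illustrative only).
qs, b = 100.0, 0.05        # saturation capacity (g/L), equilibrium constant (L/g)
V0, Va = 1.0, 0.5          # true hold-up volume (mL) and adsorbent volume (mL)

C = np.linspace(1.0, 200.0, 8)           # plateau concentrations (g/L)
q_true = qs * b * C / (1 + b * C)        # Langmuir isotherm
VR = V0 + Va * q_true / C                # ideal breakthrough (equivalent) volumes

# Re-derive the isotherm assuming a hold-up volume that is 10% too large.
q_est = C * (VR - 1.1 * V0) / Va

# The systematic error -0.1*V0*C/Va grows linearly with C, so the
# high-concentration points are distorted most.
print(q_est - q_true)
```

    At the lowest concentration the error here is a few percent of q; at the highest it is comparable to q itself, which mirrors the abstract's observation that the high-concentration part of the isotherm is most sensitive to the hold-up estimate.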

  5. Quantile Regression With Measurement Error

    KAUST Repository

    Wei, Ying

    2009-08-27

    Regression quantiles can be substantially biased when the covariates are measured with error. In this paper we propose a new method that produces consistent linear quantile estimation in the presence of covariate measurement error. The method corrects the measurement error induced bias by constructing joint estimating equations that simultaneously hold for all the quantile levels. An iterative EM-type estimation algorithm to obtain the solutions to such joint estimation equations is provided. The finite sample performance of the proposed method is investigated in a simulation study, and compared to the standard regression calibration approach. Finally, we apply our methodology to part of the National Collaborative Perinatal Project growth data, a longitudinal study with an unusual measurement error structure. © 2009 American Statistical Association.

  6. Geometric errors measurement for coordinate measuring machines

    Science.gov (United States)

    Pan, Fangyu; Nie, Li; Bai, Yuewei; Wang, Xiaogang; Wu, Xiaoyan

    2017-08-01

    Error compensation is a good way to improve the accuracy of coordinate measuring machines (CMMs). To achieve this goal, basic research was carried out. First, the error sources were analyzed, identifying 21 geometric errors that seriously affect CMM precision; second, the measurement method is presented and its principle elaborated. Experiments validated the feasibility of the method, laying a foundation for further compensation to improve CMM accuracy.

  7. Errors in Chemical Sensor Measurements

    Directory of Open Access Journals (Sweden)

    Artur Dybko

    2001-06-01

    Full Text Available Various types of errors during the measurements of ion-selective electrodes, ion-sensitive field effect transistors, and fibre optic chemical sensors are described. The errors were divided according to their nature and place of origin into chemical, instrumental and non-chemical. The influence of interfering ions, leakage of the membrane components, liquid junction potential as well as sensor wiring, ambient light and temperature is presented.

  8. Radar range measurements in the atmosphere.

    Energy Technology Data Exchange (ETDEWEB)

    Doerry, Armin Walter

    2013-02-01

    The earth's atmosphere affects the velocity of propagation of microwave signals. This imparts a range error to radar range measurements that assume the typical simplistic model for propagation velocity. This range error is a function of atmospheric constituents, such as water vapor, as well as of the geometry of the radar data collection, notably altitude and range. Models are presented for calculating atmospheric effects on radar range measurements and compared against more elaborate atmospheric models.
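    A minimal model of this kind is a one-term exponential refractivity profile. The surface refractivity and scale height below are assumed, illustrative values, not the report's model:

```python
import numpy as np

# Simple exponential refractivity model (assumed parameters, not a standard):
Ns = 320.0    # surface refractivity (N-units)
H = 7.0e3     # refractivity scale height (m)

def range_error_vertical(h):
    """One-way range error (m) for a vertical path from the surface to altitude h.

    dR = 1e-6 * integral of N(h') dh', with N(h') = Ns * exp(-h'/H),
    which integrates to 1e-6 * Ns * H * (1 - exp(-h/H)).
    """
    return 1e-6 * Ns * H * (1.0 - np.exp(-h / H))

for h in (1e3, 5e3, 10e3):
    print(f"altitude {h/1e3:4.0f} km: range error {range_error_vertical(h):.2f} m")
```

    Even this crude model shows a meter-scale bias for paths through the lower troposphere, and the error saturates with altitude because most of the refractive delay accumulates near the surface.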

  9. Correction of errors in power measurements

    DEFF Research Database (Denmark)

    Pedersen, Knud Ole Helgesen

    1998-01-01

    Small errors in voltage and current measuring transformers cause inaccuracies in power measurements. In this report, correction factors are derived to compensate for such errors.
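    For illustration, a correction factor of this kind can be written in terms of the transformers' ratio errors and phase displacements. The sign convention and numerical values below are assumptions for the sketch, not taken from the report:

```python
import math

def power_correction_factor(phi, eps_u, eps_i, delta_u, delta_i):
    """Factor multiplying the ratio-referred measured power to recover true power.

    phi       : true load phase angle (rad)
    eps_u/i   : per-unit ratio errors of the voltage and current transformers
    delta_u/i : phase displacements of the VT and CT (rad); assumed convention:
                positive means the secondary leads the primary.
    """
    return math.cos(phi) / ((1 + eps_u) * (1 + eps_i)
                            * math.cos(phi + delta_i - delta_u))

# Example: 0.2% ratio errors and 10-arcminute phase displacements of opposite
# sign, at power factor 0.5.
phi = math.acos(0.5)
k = power_correction_factor(phi, 0.002, 0.002,
                            math.radians(10 / 60), -math.radians(10 / 60))
print(f"correction factor: {k:.4f}")
```

    At low power factor the phase displacements dominate: here even small transformer errors shift the measured power by roughly 1.4%, which is why such correction factors matter.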

  10. Q-circle measurement error

    Science.gov (United States)

    Hearn, Chase P.; Bradshaw, Edward S.

    1991-05-01

    High-Q lumped and distributed networks near resonance are generally modeled as elementary three-element RLC circuits. The widely used Q-circle measurement technique is based on this assumption. It is shown that this assumption can lead to errors when measuring the Q-factor of more complex resonators, particularly when heavily loaded by the external source. In the Q-circle technique, the resonator is assumed to behave as a pure series (or parallel) RLC circuit and the intercept frequencies are found experimentally at which the components of impedance satisfy |Im(Z)| = Re(Z) (unloaded Q) and |Im(Z)| = Ro + Re(Z) (loaded Q). The Q-factor is then determined as the ratio of the resonant frequency to the intercept bandwidth. This relationship is exact for simple series or parallel RLC circuits, regardless of the Q-factor, but not for more complex circuits. This is shown to be due to the fact that the impedance components of the circuit vary with frequency differently from those in a pure series RLC circuit, causing the Q-factor as determined above to be in error.
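    For a pure series RLC circuit the intercept construction is exact, which can be checked numerically. The component values below are arbitrary:

```python
import numpy as np

# Series RLC resonator (assumed illustrative values).
R, L, C = 2.0, 1e-6, 1e-12          # ohm, henry, farad
w0 = 1 / np.sqrt(L * C)             # resonant angular frequency
Q_true = w0 * L / R                 # unloaded Q of a series RLC

w = np.linspace(0.9 * w0, 1.1 * w0, 200_001)
Z = R + 1j * (w * L - 1 / (w * C))

# Unloaded-Q intercept frequencies: |Im(Z)| = Re(Z).
idx = np.where(np.diff(np.sign(np.abs(Z.imag) - Z.real)))[0]
w1, w2 = w[idx[0]], w[idx[-1]]
Q_circle = w0 / (w2 - w1)

print(Q_true, Q_circle)  # the two values agree for a pure series RLC
```

    For the more complex resonators the abstract discusses, Im(Z) varies with frequency differently from this ideal case, and the same intercept construction misestimates Q.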

  11. Atmospheric Error Correction of the Laser Beam Ranging

    Directory of Open Access Journals (Sweden)

    J. Saydi

    2014-01-01

    Full Text Available Atmospheric models based on surface measurements of pressure, temperature, and relative humidity have been used to increase laser ranging accuracy by ray tracing. Atmospheric refraction can cause significant errors in laser ranging systems. In the present research, the atmospheric effects on the laser beam were investigated using the principles of laser ranging. Atmospheric correction was calculated for 0.532, 1.3, and 10.6 micron wavelengths for the weather conditions of Tehran, Isfahan, and Bushehr in Iran from March 2012 to March 2013. The correction was computed on the basis of monthly means of meteorological data received from meteorological stations in Tehran, Isfahan, and Bushehr. Atmospheric correction was calculated for 11, 100, and 200 kilometer laser beam propagations under 30°, 60°, and 90° rising angles for each propagation. The results showed that, for the same months and beam emission angles, the atmospheric correction was most accurate for the 10.6 micron wavelength. The laser ranging error decreased with increasing laser emission angle. The atmospheric corrections from the Marini-Murray and Mendes-Pavlis models were compared for the 0.532 micron wavelength.

  12. Measurement Error and Equating Error in Power Analysis

    Science.gov (United States)

    Phillips, Gary W.; Jiang, Tao

    2016-01-01

    Power analysis is a fundamental prerequisite for conducting scientific research. Without power analysis the researcher has no way of knowing whether the sample size is large enough to detect the effect he or she is looking for. This paper demonstrates how psychometric factors such as measurement error and equating error affect the power of…

  13. Computation of the Different Errors in the Ballistic Missiles Range

    OpenAIRE

    Abd El-Salam, F. A.; Abd El-Bar, S. E.

    2011-01-01

    The ranges of ballistic missile trajectories are very sensitive to any kind of errors. Most of the missile trajectory is a part of an elliptical orbit. In this work, the missile problem is stated. The variations in the orbital elements are derived using Lagrange planetary equations. Explicit expressions for the errors in the missile range due to the in-orbit plane changes are derived. Explicit expressions for the errors in the missile range due to the out-of-orbit plane changes are derived.

  14. Protecting weak measurements against systematic errors

    OpenAIRE

    Pang, Shengshi; Alonso, Jose Raul Gonzalez; Brun, Todd A.; Jordan, Andrew N.

    2016-01-01

    In this work, we consider the systematic error of quantum metrology by weak measurements under decoherence. We derive the systematic error of maximum likelihood estimation in general to the first-order approximation of a small deviation in the probability distribution, and study the robustness of standard weak measurement and postselected weak measurements against systematic errors. We show that, with a large weak value, the systematic error of a postselected weak measurement when the probe undergoes decoherence can be significantly lower than that of a standard weak measurement.

  15. Measurement error in a single regressor

    NARCIS (Netherlands)

    Meijer, H.J.; Wansbeek, T.J.

    2000-01-01

    For the setting of multiple regression with measurement error in a single regressor, we present some very simple formulas to assess the result that one may expect when correcting for measurement error. It is shown where the corrected estimated regression coefficients and the error variance may lie…
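    The classical attenuation result underlying such corrections can be sketched in a few lines. The simulation below assumes classical additive, independent measurement error with known error variance; the parameter values are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200_000
beta, sigma_x, sigma_u = 2.0, 1.0, 0.5   # true slope, signal sd, noise sd (assumed)

x = rng.normal(0, sigma_x, n)            # true regressor (unobserved)
w = x + rng.normal(0, sigma_u, n)        # error-prone measurement
y = beta * x + rng.normal(0, 1.0, n)

b_naive = np.cov(w, y)[0, 1] / np.var(w)          # attenuated OLS slope
reliability = sigma_x**2 / (sigma_x**2 + sigma_u**2)
b_corrected = b_naive / reliability               # classical correction

print(b_naive, b_corrected)   # roughly 1.6 and 2.0
```

    With reliability 0.8, the naive slope shrinks toward zero by exactly that factor, and dividing by the reliability recovers the true coefficient, which is the basic mechanism the formulas in the abstract quantify.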

  16. Impact of Measurement Error on Synchrophasor Applications

    Energy Technology Data Exchange (ETDEWEB)

    Liu, Yilu [Univ. of Tennessee, Knoxville, TN (United States); Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Gracia, Jose R. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Ewing, Paul D. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Zhao, Jiecheng [Univ. of Tennessee, Knoxville, TN (United States); Tan, Jin [Univ. of Tennessee, Knoxville, TN (United States); Wu, Ling [Univ. of Tennessee, Knoxville, TN (United States); Zhan, Lingwei [Univ. of Tennessee, Knoxville, TN (United States)

    2015-07-01

    Phasor measurement units (PMUs), which provide synchrophasor measurements, are powerful diagnostic tools that can help avert catastrophic failures in the power grid. Because of this, PMU measurement errors are particularly worrisome. This report examines the internal and external factors contributing to PMU phase angle and frequency measurement errors and gives a reasonable explanation for them. It also analyzes the impact of those measurement errors on several synchrophasor applications: event location detection, oscillation detection, islanding detection, and dynamic line rating. The primary finding is that dynamic line rating is more likely to be influenced by measurement error. Other findings include the possibility of reporting nonoscillatory activity as an oscillation as a result of error, failing to detect oscillations submerged by error, and the unlikely impact of error on event location and islanding detection.

  17. Influence of measurement error on Maxwell's demon

    Science.gov (United States)

    Sørdal, Vegard; Bergli, Joakim; Galperin, Y. M.

    2017-06-01

    In any general cycle of measurement, feedback, and erasure, the measurement will reduce the entropy of the system when information about the state is obtained, while erasure, according to Landauer's principle, is accompanied by a corresponding increase in entropy due to the compression of logical and physical phase space. The total process can in principle be fully reversible. A measurement error reduces the information obtained and the entropy decrease in the system. The erasure still gives the same increase in entropy, and the total process is irreversible. Another consequence of measurement error is that a bad feedback is applied, which further increases the entropy production if the proper protocol adapted to the expected error rate is not applied. We consider the effect of measurement error on a realistic single-electron box Szilard engine, and we find the optimal protocol for the cycle as a function of the desired power P and error ɛ.

  18. Quantifying and handling errors in instrumental measurements using the measurement error theory

    DEFF Research Database (Denmark)

    Andersen, Charlotte Møller; Bro, R.; Brockhoff, P.B.

    2003-01-01

    Measurement error modelling is used for investigating the influence of measurement/sampling error on univariate predictions of water content and water-holding capacity (reference measurement) from nuclear magnetic resonance (NMR) relaxations (instrumental) measured on two gadoid fish species. This is a new way of using the measurement error theory. Reliability ratios illustrate that the models for the two fish species are influenced differently by the error. However, the error seems to influence the predictions of the two reference measures in the same way. The effect of using replicated x-measurements is illustrated by simulated data and by NMR relaxations measured several times on each fish. The standard error of the physical determination of the reference values is lower than the standard error of the NMR measurements. In this case, lower prediction error is obtained by replicating the instrumental…

  19. Measuring Systematic Error with Curve Fits

    Science.gov (United States)

    Rupright, Mark E.

    2011-01-01

    Systematic errors are often unavoidable in the introductory physics laboratory. As has been demonstrated in many papers in this journal, such errors can present a fundamental problem for data analysis, particularly when comparing the data to a given model [1-3]. In this paper I give three examples in which my students use popular curve-fitting software and adjust the theoretical model to account for, and even exploit, the presence of systematic errors in measured data.

  20. Measurement Error in Education and Growth Regressions*

    NARCIS (Netherlands)

    Portela, Miguel; Alessie, Rob; Teulings, Coen

    2010-01-01

    The use of the perpetual inventory method for the construction of education data per country leads to systematic measurement error. This paper analyzes its effect on growth regressions. We suggest a methodology for correcting this error. The standard attenuation bias suggests that using these…

  1. MEASUREMENT ERROR WITH DIFFERENT COMPUTER VISION TECHNIQUES

    Directory of Open Access Journals (Sweden)

    O. Icasio-Hernández

    2017-09-01

    Full Text Available The goal of this work is to offer a comparison of measurement error for different computer vision techniques for 3D reconstruction and to allow a metrological discrimination based on our evaluation results. The present work implements four 3D reconstruction techniques, passive stereoscopy, active stereoscopy, shape from contour and fringe profilometry, to find the measurement error and its uncertainty using different gauges. We measured several known dimensional and geometric standards. We compared the results of the techniques, the average errors, standard deviations, and uncertainties, obtaining a guide to identify the tolerances that each technique can achieve and to choose the best one.

  2. Measurement Error with Different Computer Vision Techniques

    Science.gov (United States)

    Icasio-Hernández, O.; Curiel-Razo, Y. I.; Almaraz-Cabral, C. C.; Rojas-Ramirez, S. R.; González-Barbosa, J. J.

    2017-09-01

    The goal of this work is to offer a comparison of measurement error for different computer vision techniques for 3D reconstruction and to allow a metrological discrimination based on our evaluation results. The present work implements four 3D reconstruction techniques, passive stereoscopy, active stereoscopy, shape from contour and fringe profilometry, to find the measurement error and its uncertainty using different gauges. We measured several known dimensional and geometric standards. We compared the results of the techniques, the average errors, standard deviations, and uncertainties, obtaining a guide to identify the tolerances that each technique can achieve and to choose the best one.

  3. Direction of dependence in measurement error models.

    Science.gov (United States)

    Wiedermann, Wolfgang; Merkle, Edgar C; von Eye, Alexander

    2017-09-05

    Methods to determine the direction of a regression line, that is, to determine the direction of dependence in reversible linear regression models (e.g., x→y vs. y→x), have experienced rapid development within the last decade. However, previous research largely rested on the assumption that the true predictor is measured without measurement error. The present paper extends the direction dependence principle to measurement error models. First, we discuss asymmetric representations of the reliability coefficient in terms of higher moments of variables and the attenuation of skewness and excess kurtosis due to measurement error. Second, we identify conditions where direction dependence decisions are biased due to measurement error and suggest method of moments (MOM) estimation as a remedy. Third, we address data situations in which the true outcome exhibits both regression and measurement error, and propose a sensitivity analysis approach to determining the robustness of direction dependence decisions against unreliably measured outcomes. Monte Carlo simulations were performed to assess the performance of MOM-based direction dependence measures and their robustness to violated measurement error assumptions (i.e., non-independence and non-normality). An empirical example from subjective well-being research is presented. The plausibility of model assumptions and links to modern causal inference methods for observational data are discussed. © 2017 The British Psychological Society.

  4. Protecting weak measurements against systematic errors

    Science.gov (United States)

    Pang, Shengshi; Alonso, Jose Raul Gonzalez; Brun, Todd A.; Jordan, Andrew N.

    2016-07-01

    In this work, we consider the systematic error of quantum metrology by weak measurements under decoherence. We derive the systematic error of maximum likelihood estimation in general to the first-order approximation of a small deviation in the probability distribution and study the robustness of standard weak measurement and postselected weak measurements against systematic errors. We show that, with a large weak value, the systematic error of a postselected weak measurement when the probe undergoes decoherence can be significantly lower than that of a standard weak measurement. This indicates another advantage of weak-value amplification in improving the performance of parameter estimation. We illustrate the results by an exact numerical simulation of decoherence arising from a bosonic mode and compare it to the first-order analytical result we obtain.

  5. Assessing Measurement Error in Medicare Coverage

    Data.gov (United States)

    U.S. Department of Health & Human Services — Assessing Measurement Error in Medicare Coverage from the National Health Interview Survey: using linked administrative data to validate Medicare coverage estimates…

  6. Measurement error in longitudinal film badge data

    CERN Document Server

    Marsh, J L

    2002-01-01

    The classical measurement error model is that of a simple linear regression with unobservable variables. Information about the covariates is available only through error-prone measurements, usually with an additive structure. Ignoring errors has been shown to result in biased regression coefficients, reduced power of hypothesis tests and increased variability of parameter estimates. Radiation is known to be a causal factor for certain types of leukaemia. This link is mainly substantiated by the Atomic Bomb Survivor study, the Ankylosing Spondylitis Patients study, and studies of various other patients irradiated for therapeutic purposes. Initial logistic regressions turned up some surprising contradictory results, which led to a re-sampling of Sellafield mortality controls without the date-of-employment matching factor. It is suggested that over-matching is the cause of the contradictory results. Comparisons of the two measurements of radiation exposure suggest a strongly linear relationship with non-Normal errors. A method has been developed using the technique of Regression Calibration to deal with these in a case-control study context, and applied to this Sellafield study.
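    Regression Calibration, the technique named above, replaces the error-prone measurement with an estimate of E[X|W]. The sketch below illustrates the idea on a linear outcome model with two replicate measurements per subject; all distributions and parameter values are assumed for illustration (the study itself used logistic regression on dose data):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 100_000
x = rng.gamma(2.0, 1.0, n)                    # true exposure (unobserved)
w1 = x + rng.normal(0, 0.7, n)                # two error-prone replicates
w2 = x + rng.normal(0, 0.7, n)

# Estimate the measurement-error variance from replicate differences,
# then form the regression-calibration substitute E[x | w].
wbar = (w1 + w2) / 2
var_u = np.var(w1 - w2) / 2                   # per-replicate error variance
var_x = np.var(wbar) - var_u / 2              # signal variance
lam = var_x / (var_x + var_u / 2)             # reliability of the replicate mean
x_hat = wbar.mean() + lam * (wbar - wbar.mean())

# Linear outcome model y = a + b*x + noise, fitted with x_hat in place of x.
y = 1.0 + 0.5 * x + rng.normal(0, 1.0, n)
b_naive = np.cov(wbar, y)[0, 1] / np.var(wbar)
b_rc = np.cov(x_hat, y)[0, 1] / np.var(x_hat)
print(b_naive, b_rc)   # naive slope is attenuated; RC recovers roughly 0.5
```

    The replicate structure is what makes the error variance identifiable; without replicates (or an external validation sample), the calibration step has nothing to estimate the reliability from.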

  7. Triphasic MRI of pelvic organ descent: sources of measurement error

    Energy Technology Data Exchange (ETDEWEB)

    Morren, Geert L. [Bowel and Digestion Centre, The Oxford Clinic, 38 Oxford Terrace, Christchurch (New Zealand)]. E-mail: geert_morren@hotmail.com; Balasingam, Adrian G. [Christchurch Radiology Group, P.O. Box 21107, 4th Floor, Leicester House, 291 Madras Street, Christchurch (New Zealand); Wells, J. Elisabeth [Department of Public Health and General Medicine, Christchurch School of Medicine, St. Elmo Courts, Christchurch (New Zealand); Hunter, Anne M. [Christchurch Radiology Group, P.O. Box 21107, 4th Floor, Leicester House, 291 Madras Street, Christchurch (New Zealand); Coates, Richard H. [Christchurch Radiology Group, P.O. Box 21107, 4th Floor, Leicester House, 291 Madras Street, Christchurch (New Zealand); Perry, Richard E. [Bowel and Digestion Centre, The Oxford Clinic, 38 Oxford Terrace, Christchurch (New Zealand)

    2005-05-01

    Purpose: To identify sources of error when measuring pelvic organ displacement during straining using triphasic dynamic magnetic resonance imaging (MRI). Materials and methods: Ten healthy nulliparous woman underwent triphasic dynamic 1.5 T pelvic MRI twice with 1 week between studies. The bladder was filled with 200 ml of a saline solution, the vagina and rectum were opacified with ultrasound gel. T2 weighted images in the sagittal plane were analysed twice by each of the two observers in a blinded fashion. Horizontal and vertical displacement of the bladder neck, bladder base, introitus vaginae, posterior fornix, cul-de sac, pouch of Douglas, anterior rectal wall, anorectal junction and change of the vaginal axis were measured eight times in each volunteer (two images, each read twice by two observers). Variance components were calculated for subject, observer, week, interactions of these three factors, and pure error. An overall standard error of measurement was calculated for a single observation by one observer on a film from one woman at one visit. Results: For the majority of anatomical reference points, the range of displacements measured was wide and the overall measurement error was large. Intra-observer error and week-to-week variation within a subject were important sources of measurement error. Conclusion: Important sources of measurement error when using triphasic dynamic MRI to measure pelvic organ displacement during straining were identified. Recommendations to minimize those errors are made.

  8. Test-Cost-Sensitive Attribute Reduction of Data with Normal Distribution Measurement Errors

    OpenAIRE

    Hong Zhao; Fan Min; William Zhu

    2013-01-01

    The measurement error with normal distribution is universal in applications. Generally, a smaller measurement error requires a better instrument and a higher test cost. In decision making based on attribute values of objects, we shall select an attribute subset with appropriate measurement error to minimize the total test cost. Recently, an error-range-based covering rough set with uniformly distributed error was proposed to investigate this issue. However, measurement errors satisfy a normal distribution…

  9. Error Separation for Wide Area Film Measurement

    Directory of Open Access Journals (Sweden)

    Shujie LIU

    2014-09-01

    Full Text Available We wanted to use multiple probes and a white-light interferometer to measure the surface profile of thin film. However, this system, as assessed with a scanning method, suffers from the presence of a moving stage and systematic sensor errors. In this paper, in order to separate the measurement error caused by the moving stage from systematic sensor errors, least-squares analysis is applied to achieve self-calibration in the measurement process. The modeling principle and resolution process of the least-squares analysis with multiple probes and an autocollimator are introduced, and the corresponding theoretical uncertainty calculation method is also given. Using this method, we analysed the experimental data and obtained a shape close to the real profile. Comparing with the actual value, the bias and uncertainty for different numbers of probes are discussed. The results demonstrate the feasibility of the constructed multi-ball cantilever system with the autocollimator for measuring thin film with high accuracy.

  10. Nonclassical measurements errors in nonlinear models

    DEFF Research Database (Denmark)

    Madsen, Edith; Mulalic, Ismir

    …around zero and thicker tails than a normal distribution. In a linear regression model where the explanatory variable is measured with error, it is well known that this gives a downward bias in the absolute value of the corresponding regression parameter (attenuation), Friedman (1957). In non-linear models it is more difficult to obtain an expression for the bias, as it depends on the distribution of the true underlying variable as well as the error distribution. Chesher (1991) gives some approximations for very general non-linear models and Stefanski & Carroll (1985) for the logistic regression model… and the distribution of the underlying true income is skewed, then there are valid technical instruments. We investigate how this IV estimation approach works in theory and illustrate it by simulation studies using the findings about the measurement error model for income from the NTS…

  11. Multiple indicators, multiple causes measurement error models.

    Science.gov (United States)

    Tekwe, Carmen D; Carter, Randy L; Cullings, Harry M; Carroll, Raymond J

    2014-11-10

    Multiple indicators, multiple causes (MIMIC) models are often employed by researchers studying the effects of an unobservable latent variable on a set of outcomes, when causes of the latent variable are observed. There are times, however, when the causes of the latent variable are not observed because measurements of the causal variable are contaminated by measurement error. The objectives of this paper are as follows: (i) to develop a novel model by extending the classical linear MIMIC model to allow both Berkson and classical measurement errors, defining the MIMIC measurement error (MIMIC ME) model; (ii) to develop likelihood-based estimation methods for the MIMIC ME model; and (iii) to apply the newly defined MIMIC ME model to atomic bomb survivor data to study the impact of dyslipidemia and radiation dose on the physical manifestations of dyslipidemia. As a by-product of our work, we also obtain a data-driven estimate of the variance of the classical measurement error associated with an estimate of the amount of radiation dose received by atomic bomb survivors at the time of their exposure. Copyright © 2014 John Wiley & Sons, Ltd.

  12. Accurate test limits under nonnormal measurement error

    NARCIS (Netherlands)

    Albers, Willem/Wim; Kallenberg, W.C.M.; Otten, G.D.

    1998-01-01

    When screening a production process for nonconforming items the objective is to improve the average outgoing quality level. Due to measurement errors, specification limits cannot be checked directly and hence test limits are required, which meet some given requirement, here given by a prescribed…

  13. Application of Uniform Measurement Error Distribution

    Science.gov (United States)

    2016-03-18

    …specific distribution and the associated joint probability density function (PDF). Then, assuming uniformly distributed measurement errors, we will try… Probability of False Accept (PFA), Probability of False Reject (PFR). …calibration tolerance limits, but the difference of the observed measurement results of the UUT and the Calibration Standard (CalStd or CAL) is within…

  14. Precision and shortcomings of yaw error estimation using spinner-based light detection and ranging

    DEFF Research Database (Denmark)

    Kragh, Knud Abildgaard; Hansen, Morten Hartvig; Mikkelsen, Torben

    2013-01-01

    When extracting energy from the wind using horizontal axis wind turbines, the ability to align the rotor axis with the mean wind direction is crucial. In previous work, a method for estimating the yaw error based on measurements from a spinner-mounted light detection and ranging (LIDAR) device was presented. … the shortcomings of using a spinner-mounted LIDAR for yaw error estimation are discussed. The extended simulation study shows that with the applied method, the yaw error can be estimated with a precision of a few degrees, even in highly turbulent flows. Applying the method to experimental data reveals an average yaw error of approximately 9° during a period of 2 h, and good correlation is seen between LIDAR-based estimates and met-mast data. The final discussion suggests a number of challenges of the method when applied to measurements in complex flow. Copyright © 2012 John Wiley & Sons, Ltd.
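    The geometry behind spinner-LIDAR yaw estimation can be sketched as a sinusoid fit to line-of-sight speeds over rotor azimuth. The setup below (half-cone angle, noise level, frozen uniform inflow) is a simplified assumption for illustration, not the paper's method:

```python
import numpy as np

rng = np.random.default_rng(3)
U, gamma = 10.0, np.radians(9.0)      # wind speed and true yaw error (assumed)
alpha = np.radians(15.0)              # LIDAR half-cone angle (assumed geometry)

psi = np.linspace(0, 20 * np.pi, 4000)            # rotor azimuth over many turns
# Line-of-sight speed for a beam at half-cone angle alpha with horizontal yaw
# misalignment gamma: U*(cos(gamma)*cos(alpha) + sin(gamma)*sin(alpha)*cos(psi)).
v_los = U * (np.cos(gamma) * np.cos(alpha)
             + np.sin(gamma) * np.sin(alpha) * np.cos(psi))
v_los += 0.2 * rng.standard_normal(psi.size)      # turbulence/measurement noise

# Least-squares fit of v_los = a0 + ac*cos(psi) + as*sin(psi).
A = np.column_stack([np.ones_like(psi), np.cos(psi), np.sin(psi)])
a0, ac, a_s = np.linalg.lstsq(A, v_los, rcond=None)[0]

# The yaw error is the ratio of the once-per-revolution amplitude to the mean
# line-of-sight speed, scaled by the cone angle.
gamma_hat = np.arctan((ac / a0) / np.tan(alpha))
print(np.degrees(gamma_hat))   # close to the 9 deg yaw error
```

    In this idealized flow the fit recovers the misalignment to a fraction of a degree; the degree-level precision quoted in the abstract reflects turbulence, shear, and inflow inhomogeneity that this sketch does not model.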

  15. Measurement error in longitudinal film badge data

    Energy Technology Data Exchange (ETDEWEB)

    Marsh, J.L

    2002-04-01

The classical measurement error model is that of a simple linear regression with unobservable variables. Information about the covariates is available only through error-prone measurements, usually with an additive structure. Ignoring errors has been shown to result in biased regression coefficients, reduced power of hypothesis tests and increased variability of parameter estimates. Radiation is known to be a causal factor for certain types of leukaemia. This link is mainly substantiated by the Atomic Bomb Survivor study, the Ankylosing Spondylitis Patients study, and studies of various other patients irradiated for therapeutic purposes. The carcinogenic relationship is believed to be a linear or quadratic function of dose but the risk estimates differ widely for the different studies. Previous cohort studies of the Sellafield workforce have used the cumulative annual exposure data for their risk estimates. The current 1:4 matched case-control study also uses the individual worker's film badge data, the majority of which had been unavailable in computerised form. The results from the 1:4 matched (on dates of birth and employment, sex and industrial status) case-control study are compared and contrasted with those for a 1:4 nested (within the worker cohort and matched on the same factors) case-control study using annual doses. The data consist of 186 cases and 744 controls from the work forces of four BNFL sites: Springfields, Sellafield, Capenhurst and Chapelcross. Initial logistic regressions turned up some surprising contradictory results which led to a re-sampling of Sellafield mortality controls without the date of employment matching factor. It is suggested that overmatching is the cause of the contradictory results. Comparisons of the two measurements of radiation exposure suggest a strongly linear relationship with non-Normal errors. A method has been developed using the technique of Regression Calibration to deal with these in a case-control study.

  16. Error sensitivity analysis in 10-30-day extended range forecasting by using a nonlinear cross-prediction error model

    Science.gov (United States)

    Xia, Zhiye; Xu, Lisheng; Chen, Hongbin; Wang, Yongqian; Liu, Jinbao; Feng, Wenlan

    2017-06-01

Extended range forecasting of 10-30 days, which lies between medium-term and climate prediction in terms of timescale, plays a significant role in decision-making processes for the prevention and mitigation of disastrous meteorological events. The sensitivity of initial error, model parameter error, and random error in a nonlinear cross-prediction error (NCPE) model, and their stability in the prediction validity period in 10-30-day extended range forecasting, are analyzed quantitatively. The associated sensitivity of precipitable water, temperature, and geopotential height during cases of heavy rain and hurricane is also discussed. The results are summarized as follows. First, the initial error and random error interact. When the ratio of random error to initial error is small (10^-6 to 10^-2), minor variation in random error cannot significantly change the dynamic features of a chaotic system, and therefore random error has minimal effect on the prediction. When the ratio is in the range of 10^-1 to 10^2 (i.e., random error dominates), attention should be paid to the random error instead of only the initial error. When the ratio is around 10^-2 to 10^-1, both influences must be considered. Their mutual effects may bring considerable uncertainty to extended range forecasting, and de-noising is therefore necessary. Second, in terms of model parameter error, the embedding dimension m should be determined by the actual nonlinear time series. The dynamic features of a chaotic system cannot be depicted because of the incomplete structure of the attractor when m is small. When m is large, prediction indicators can vanish because of the scarcity of phase points in phase space. A method for overcoming the cut-off effect (m > 4) is proposed. Third, for heavy rains, precipitable water is more sensitive to the prediction validity period than temperature or geopotential height; however, for hurricanes, geopotential height is most sensitive, followed by precipitable water.
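The three ratio regimes described above can be sketched as a simple classifier; the thresholds are taken from the abstract, while the function name and interface are illustrative assumptions:

```python
def error_regime(random_err, initial_err):
    """Classify which error source dominates an extended-range forecast,
    using the ratio thresholds reported for the NCPE model (assumed here)."""
    ratio = random_err / initial_err
    if ratio < 1e-2:
        return "initial error dominates"   # random error has minimal effect
    elif ratio < 1e-1:
        return "both errors matter"        # consider de-noising
    else:
        return "random error dominates"

print(error_regime(1e-5, 1.0))  # initial error dominates
```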

  17. Measuring the severity of prescribing errors: a systematic review.

    Science.gov (United States)

    Garfield, Sara; Reynolds, Matthew; Dermont, Liesbeth; Franklin, Bryony Dean

    2013-12-01

A wide range of severity assessment tools are used in the literature. Developing a basis of comparison between tools would potentially be helpful in comparing findings across studies. There is a potential need to establish a less time-consuming method of measuring severity of prescribing error, with acceptable international reliability and validity.

  18. Cumulative Measurement Errors for Dynamic Testing of Space Flight Hardware

    Science.gov (United States)

    Winnitoy, Susan

    2012-01-01

Located at the NASA Johnson Space Center in Houston, TX, the Six-Degree-of-Freedom Dynamic Test System (SDTS) is a real-time, six degree-of-freedom, short range motion base simulator originally designed to simulate the relative dynamics of two bodies in space mating together (i.e., docking or berthing). The SDTS has the capability to test full scale docking and berthing systems utilizing a two body dynamic docking simulation for docking operations and a Space Station Remote Manipulator System (SSRMS) simulation for berthing operations. The SDTS can also be used for nonmating applications such as sensors and instruments evaluations requiring proximity or short range motion operations. The motion base is a hydraulic powered Stewart platform, capable of supporting a 3,500 lb payload with a positional accuracy of 0.03 inches. The SDTS is currently being used for the NASA Docking System testing and has also been used by other government agencies. The SDTS is also under consideration for use by commercial companies. Examples of tests include the verification of on-orbit robotic inspection systems, space vehicle assembly procedures and docking/berthing systems. The facility integrates a dynamic simulation of on-orbit spacecraft mating or de-mating using flight-like mechanical interface hardware. A force moment sensor is used for input during the contact phase, thus simulating the contact dynamics. While the verification of flight hardware presents unique challenges, one particular area of interest involves the use of external measurement systems to ensure accurate feedback of dynamic contact. The measurement systems for the test facility have two separate functions. The first is to take static measurements of facility and test hardware to determine both the static and moving frames used in the simulation and control system. The test hardware must be measured after each configuration change to determine both sets of reference frames. The second function is to take dynamic

  19. Feedback cooling, measurement errors, and entropy production

    Science.gov (United States)

    Munakata, T.; Rosinberg, M. L.

    2013-06-01

    The efficiency of a feedback mechanism depends on the precision of the measurement outcomes obtained from the controlled system. Accordingly, measurement errors affect the entropy production in the system. We explore this issue in the context of active feedback cooling by modeling a typical cold damping setup as a harmonic oscillator in contact with a heat reservoir and subjected to a velocity-dependent feedback force that reduces the random motion. We consider two models that distinguish whether the sensor continuously measures the position of the resonator or directly its velocity (in practice, an electric current). Adopting the standpoint of the controlled system, we identify the ‘entropy pumping’ contribution that describes the entropy reduction due to the feedback control and that modifies the second law of thermodynamics. We also assign a relaxation dynamics to the feedback mechanism and compare the apparent entropy production in the system and the heat bath (under the influence of the controller) to the total entropy production in the super-system that includes the controller. In this context, entropy pumping reflects the existence of hidden degrees of freedom and the apparent entropy production satisfies fluctuation theorems associated with an effective Langevin dynamics.

  20. A Model of Self-Monitoring Blood Glucose Measurement Error.

    Science.gov (United States)

    Vettoretti, Martina; Facchinetti, Andrea; Sparacino, Giovanni; Cobelli, Claudio

    2017-07-01

A reliable model of the probability density function (PDF) of self-monitoring of blood glucose (SMBG) measurement error would be important for several applications in diabetes, like testing in silico insulin therapies. In the literature, the PDF of SMBG error is usually described by a Gaussian function, whose symmetry and simplicity are unable to properly describe the variability of experimental data. Here, we propose a new methodology to derive more realistic models of SMBG error PDF. The blood glucose range is divided into zones where error (absolute or relative) presents a constant standard deviation (SD). In each zone, a suitable PDF model is fitted by maximum-likelihood to experimental data. Model validation is performed by goodness-of-fit tests. The method is tested on two databases collected by the One Touch Ultra 2 (OTU2; Lifescan Inc, Milpitas, CA) and the Bayer Contour Next USB (BCN; Bayer HealthCare LLC, Diabetes Care, Whippany, NJ). In both cases, skew-normal and exponential models are used to describe the distribution of errors and outliers, respectively. Two zones were identified: zone 1 with constant SD absolute error; zone 2 with constant SD relative error. Goodness-of-fit tests confirmed that the identified PDF models are valid and superior to the Gaussian models used so far in the literature. The proposed methodology makes it possible to derive realistic models of SMBG error PDF. These models can be used in several investigations of present interest in the scientific community, for example, to perform in silico clinical trials to compare SMBG-based with nonadjunctive CGM-based insulin treatments.
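The zone structure described above (constant-SD absolute error at low glucose, constant-SD relative error above a threshold) can be sketched as follows; the threshold and SD values are illustrative assumptions, not the paper's fitted parameters:

```python
import numpy as np

def smbg_error_sd(bg, threshold=75.0, sd_abs=7.0, sd_rel=0.06):
    """Zone-based SMBG error model (illustrative parameters, not the
    paper's fitted values): below `threshold` mg/dL the absolute error
    has constant SD; above it the relative error has constant SD."""
    bg = np.asarray(bg, dtype=float)
    return np.where(bg < threshold, sd_abs, sd_rel * bg)

# Error SD at 50, 75, and 200 mg/dL: 7.0, 4.5, and 12.0 mg/dL
print(smbg_error_sd([50, 75, 200]))
```

A real application would fit a skew-normal PDF by maximum likelihood within each zone, as the abstract describes.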

  1. Adjusting for the Incidence of Measurement Errors in Multilevel ...

    African Journals Online (AJOL)

    In the face of seeming dearth of objective methods of estimating measurement error variance and realistically adjusting for the incidence of measurement errors in multilevel models, researchers often indulge in the traditional approach of arbitrary choice of measurement error variance and this has the potential of giving ...

  2. Radiation risk estimation based on measurement error models

    CERN Document Server

    Masiuk, Sergii; Shklyar, Sergiy; Chepurny, Mykola; Likhtarov, Illya

    2017-01-01

    This monograph discusses statistics and risk estimates applied to radiation damage under the presence of measurement errors. The first part covers nonlinear measurement error models, with a particular emphasis on efficiency of regression parameter estimators. In the second part, risk estimation in models with measurement errors is considered. Efficiency of the methods presented is verified using data from radio-epidemiological studies.

  3. Incorporating measurement error in n=1 psychological autoregressive modeling

    NARCIS (Netherlands)

    Schuurman, Noemi K.; Houtveen, Jan H.; Hamaker, Ellen L.

    2015-01-01

    Measurement error is omnipresent in psychological data. However, the vast majority of applications of autoregressive time series analyses in psychology do not take measurement error into account. Disregarding measurement error when it is present in the data results in a bias of the autoregressive

  4. Analysis of error-prone survival data under additive hazards models: measurement error effects and adjustments.

    Science.gov (United States)

    Yan, Ying; Yi, Grace Y

    2016-07-01

    Covariate measurement error occurs commonly in survival analysis. Under the proportional hazards model, measurement error effects have been well studied, and various inference methods have been developed to correct for error effects under such a model. In contrast, error-contaminated survival data under the additive hazards model have received relatively less attention. In this paper, we investigate this problem by exploring measurement error effects on parameter estimation and the change of the hazard function. New insights of measurement error effects are revealed, as opposed to well-documented results for the Cox proportional hazards model. We propose a class of bias correction estimators that embraces certain existing estimators as special cases. In addition, we exploit the regression calibration method to reduce measurement error effects. Theoretical results for the developed methods are established, and numerical assessments are conducted to illustrate the finite sample performance of our methods.
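The regression calibration step mentioned above can be sketched for the classical additive error model W = X + U; the normal-model shrinkage formula is the generic textbook version, not the paper's estimator, and all numbers are assumptions:

```python
import numpy as np

def regression_calibration(W, sigma_u2):
    """Classical measurement error: W = X + U, U ~ N(0, sigma_u2).
    Returns E[X | W] under a normal model for X, the standard
    regression-calibration substitute used before fitting the
    hazards model (a generic sketch, not the paper's estimator)."""
    mu_w = W.mean()
    sigma_w2 = W.var(ddof=1)
    sigma_x2 = max(sigma_w2 - sigma_u2, 1e-12)  # attenuation-corrected variance
    lam = sigma_x2 / sigma_w2                   # reliability ratio, in (0, 1)
    return mu_w + lam * (W - mu_w)

rng = np.random.default_rng(1)
X = rng.normal(2.0, 1.0, 5000)
W = X + rng.normal(0.0, 0.5, 5000)   # error-prone covariate
Xhat = regression_calibration(W, 0.5**2)
# Calibrated values shrink toward the mean, reducing attenuation bias:
print(Xhat.var(ddof=1) < W.var(ddof=1))  # True
```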

  5. Measuring Systematic Error with Curve Fits

    Science.gov (United States)

    Rupright, Mark E.

    2011-01-01

    Systematic errors are often unavoidable in the introductory physics laboratory. As has been demonstrated in many papers in this journal, such errors can present a fundamental problem for data analysis, particularly when comparing the data to a given model. In this paper I give three examples in which my students use popular curve-fitting software…

  6. High Precision Ranging and Range-Rate Measurements over Free-Space-Laser Communication Link

    Science.gov (United States)

    Yang, Guangning; Lu, Wei; Krainak, Michael; Sun, Xiaoli

    2016-01-01

We present a high-precision ranging and range-rate measurement system via an optical-ranging or combined ranging-communication link. A complete bench-top optical communication system was built. It included a ground terminal and a space terminal. Ranging and range rate tests were conducted in two configurations. In the communication configuration with 622 data rate, we achieved a two-way range-rate error of 2 microns/s, or a modified Allan deviation of 9 × 10^-15 with 10 second averaging time. Ranging and range-rate as a function of Bit Error Rate of the communication link is reported. They are not sensitive to the link error rate. In the single-frequency amplitude modulation mode, we report a two-way range rate error of 0.8 microns/s, or a modified Allan deviation of 2.6 × 10^-15 with 10 second averaging time. We identified the major noise sources in the current system as the transmitter modulation injected noise and receiver electronics generated noise. A new improved system will be constructed to further improve the system performance for both operating modes.
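The modified Allan deviation quoted above characterizes stability versus averaging time. As a rough illustration of the underlying computation, here is the plain (non-modified) Allan deviation applied to synthetic fractional-frequency data; the noise level, sample count, and averaging factors are arbitrary assumptions:

```python
import numpy as np

def allan_deviation(y, m):
    """Plain Allan deviation of fractional-frequency samples y at
    averaging factor m (a textbook sketch; the paper quotes the
    *modified* Allan deviation, which additionally averages phase)."""
    n = len(y) // m
    ybar = y[:n * m].reshape(n, m).mean(axis=1)   # block averages over tau
    return np.sqrt(0.5 * np.mean(np.diff(ybar) ** 2))

rng = np.random.default_rng(0)
white = rng.normal(0, 1e-12, 100_000)            # white frequency noise
# For white FM noise the Allan deviation falls as 1/sqrt(tau):
print(allan_deviation(white, 1) / allan_deviation(white, 100))  # ~10
```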

  7. The combined measurement and compensation technology for robot motion error

    Science.gov (United States)

    Li, Rui; Qu, Xinghua; Deng, Yonggang; Liu, Bende

    2013-10-01

Robot parameter errors are mainly caused by kinematic parameter errors and moving-angle errors. This paper focuses on the calibration of the kinematic parameter errors and the regularity of the moving-angle errors of each axis. The errors can be compensated by the error model through pre-measurement, so the accuracy of the robot kinematic system can be improved without external devices for real-time measurement. A combined measuring system based on a laser tracker and a biaxial orthogonal inertial measuring instrument is designed and built in this paper. The laser tracker is used to build the robot kinematic parameter error model, which is based on the minimum constraint of distance error. The biaxial orthogonal inertial measuring instrument is used to obtain the moving-angle error model of each axis. The model is preset while the robot moves along the predetermined path to obtain the movement error, and the compensation quantity is fed back to the robot controller module of the moving axis to compensate the angle. The robot kinematic parameter calibration based on the distance error model and the distribution law of the movement error of each axis are discussed in this paper. The laser tracker is applied to verify that the method can effectively improve the control accuracy of the robot system.

  8. Error Averaging Effect in Parallel Mechanism Coordinate Measuring Machine

    Directory of Open Access Journals (Sweden)

    Peng-Hao Hu

    2016-11-01

Full Text Available Error averaging effect is one of the advantages of a parallel mechanism when individual errors are relatively large. However, further investigation is necessary to clarify the evidence with mathematical analysis and experiment. In the developed parallel coordinate measuring machine (PCMM), which is based on three pairs of prismatic-universal-universal joints (3-PUU), the error averaging mechanism is investigated and analyzed in this report. Firstly, the error transfer coefficients of the various errors in the PCMM were studied based on the established error transfer model, showing how the various original errors in the parallel mechanism are averaged and reduced. Secondly, experimental measurements were carried out, including the angular errors and straightness errors of the three moving sliders. Lastly, by solving the inverse kinematics with an iterative numerical method, it can be seen that the final measuring errors of the moving platform of the PCMM are reduced by the error averaging effect in comparison with the attributed geometric errors of the three moving sliders. This study reveals the significance of the error averaging effect for a PCMM.
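The error averaging effect can be illustrated in miniature with independent errors: averaging N contributions with standard deviation sigma shrinks the combined SD toward sigma/sqrt(N). This toy model is an assumption for illustration, not the paper's 3-PUU error transfer model:

```python
import numpy as np

# Error averaging in miniature: if the platform pose averages N leg
# contributions with independent errors of SD sigma, the combined
# error SD shrinks toward sigma / sqrt(N).
rng = np.random.default_rng(42)
sigma, N = 10.0, 3                      # per-leg error SD (um), leg count
leg_errors = rng.normal(0, sigma, size=(100_000, N))
platform_error = leg_errors.mean(axis=1)

print(round(platform_error.std() / sigma, 2))  # ~0.58, i.e. 1/sqrt(3)
```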

  9. Rapid mapping of volumetric machine errors using distance measurements

    Energy Technology Data Exchange (ETDEWEB)

    Krulewich, D.A.

    1998-04-01

This paper describes a relatively inexpensive, fast, and easy-to-execute approach to mapping the volumetric errors of a machine tool, coordinate measuring machine, or robot. An error map is used to characterize a machine or to improve its accuracy by compensating for the systematic errors. The method consists of three steps: (1) modeling the relationship between volumetric error and the current state of the machine; (2) acquiring error data based on distance measurements throughout the work volume; and (3) fitting the error model using the nonlinear equation for the distance. The error model is formulated from the kinematic relationship among the six degrees of freedom of error on each moving axis. Each parametric error, expressed as a function of position, is combined to predict the error between the functional point and workpiece, also as a function of position. A series of distances between several fixed base locations and various functional points in the work volume is measured using a Laser Ball Bar (LBB). Each measured distance is a non-linear function dependent on the commanded location of the machine, the machine error, and the location of the base locations. Using the error model, the non-linear equation is solved, producing a fit for the error model. Also note that, given approximate distances between each pair of base locations, the exact base locations in the machine coordinate system are determined during the non-linear fitting procedure. Furthermore, with the use of more than three base locations, bias error in the measuring instrument can be removed. The volumetric errors of a three-axis commercial machining center have been mapped using this procedure. In this study, only errors associated with the nominal position of the machine were considered. Other errors such as thermally induced and load induced errors were not considered, although the mathematical model has the ability to account for these errors. Due to the proprietary nature of the projects we are
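The distance-equation fit of step (3) can be illustrated in miniature with a trilateration example: recovering an unknown base location from distances to known machine positions. The positions, base location, and Gauss-Newton solver below are assumptions standing in for the LBB fitting procedure:

```python
import numpy as np

# Trilateration sketch: recover an unknown base location from distance
# measurements to known commanded machine positions -- the nonlinear
# distance equation of step (3), solved by a few Gauss-Newton iterations.
positions = np.array([[0, 0, 0], [500, 0, 0], [0, 500, 0],
                      [0, 0, 500], [300, 300, 300]], dtype=float)
true_base = np.array([120.0, -80.0, 40.0])
d_meas = np.linalg.norm(positions - true_base, axis=1)  # noiseless distances

b = np.array([1.0, 1.0, 1.0])            # initial guess for the base
for _ in range(20):                       # Gauss-Newton on the residuals
    diff = b - positions                  # shape (5, 3)
    dist = np.linalg.norm(diff, axis=1)
    r = dist - d_meas                     # distance residuals
    J = diff / dist[:, None]              # Jacobian d(dist)/d(b)
    b -= np.linalg.lstsq(J, r, rcond=None)[0]

print(np.allclose(b, true_base, atol=1e-6))  # True
```

With noisy measurements and an error model for the machine positions, the same least-squares machinery would additionally fit the error-model coefficients.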

  10. Color speckle measurement errors using system with XYZ filters

    Science.gov (United States)

    Kinoshita, Junichi; Yamamoto, Kazuhisa; Kuroda, Kazuo

    2017-09-01

Measurement errors of color speckle are analyzed for a measurement system equipped with revolving XYZ filters and a 2D sensor. One error is caused by filter characteristics that do not fit the ideal color matching functions. The other is caused by the lack of correlation among the optical paths via the XYZ filters. The unfitted color speckle errors of all the pixel data can be easily calibrated by conversion between the measured BGR chromaticity triangle and the true triangle obtained by the BGR wavelength measurements. For the uncorrelated errors, the measured BGR chromaticity values spread around the true values. As a result, calibrating the uncorrelated errors is more complicated, since the triangular conversion must be repeated pixel by pixel. Color speckle and its errors also greatly affect chromaticity measurements and the image quality of displays using coherent light sources.
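The triangle-to-triangle conversion described above can be sketched as an affine map fitted from the three vertices; the affine form and the chromaticity coordinates below are illustrative assumptions, and the paper's exact conversion may differ:

```python
import numpy as np

def triangle_calibration(measured_tri, true_tri):
    """Affine map (x, y) -> A(x, y) + t taking the measured BGR
    chromaticity triangle onto the true one (assumed affine sketch
    of the calibration described above)."""
    M = np.hstack([measured_tri, np.ones((3, 1))])   # rows [x, y, 1]
    coeffs = np.linalg.solve(M, true_tri)            # 3x2: A^T stacked on t
    return lambda xy: np.hstack([xy, np.ones((len(xy), 1))]) @ coeffs

measured = np.array([[0.15, 0.05], [0.18, 0.72], [0.66, 0.33]])  # assumed
true = np.array([[0.14, 0.04], [0.17, 0.74], [0.68, 0.32]])      # assumed
cal = triangle_calibration(measured, true)
print(np.allclose(cal(measured), true))  # True: vertices map exactly
```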

  11. Error analysis of sensor measurements in a small UAV

    OpenAIRE

    Ackerman, James S.

    2005-01-01

    This thesis focuses on evaluating the measurement errors in the gimbal system of the SUAV autonomous aircraft developed at NPS. These measurements are used by the vision based target position estimation system developed at NPS. Analysis of the errors inherent in these measurements will help direct future investment in better sensors to improve the estimation system's performance.

  12. Measurement Error Estimation for Capacitive Voltage Transformer by Insulation Parameters

    Directory of Open Access Journals (Sweden)

    Bin Chen

    2017-03-01

Full Text Available Measurement errors of a capacitive voltage transformer (CVT) are relevant to its equivalent parameters, to which its capacitive divider contributes the most. In daily operation, dielectric aging, moisture, dielectric breakdown, etc., exert combined effects on a capacitive divider's insulation characteristics, leading to fluctuations in the equivalent parameters which result in measurement error. This paper proposes an equivalent circuit model to represent a CVT which incorporates the insulation characteristics of the capacitive divider. After software simulation and laboratory experiments, the relationship between measurement errors and insulation parameters is obtained. It indicates that variation of the insulation parameters in a CVT will cause a corresponding measurement error. From field tests and calculation, equivalent capacitance mainly affects the magnitude error, while dielectric loss mainly affects the phase error. As the capacitance changes by 0.2%, the magnitude error can reach −0.2%. As the dielectric loss factor changes by 0.2%, the phase error can reach 5′. An increase of equivalent capacitance and dielectric loss factor in the high-voltage capacitor will cause a positive real power measurement error, while an increase in the low-voltage capacitor will cause a negative real power measurement error.
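The sensitivity of the divider ratio to capacitance drift can be checked with simple arithmetic; the capacitance and voltage values below are assumptions for illustration, not the paper's equivalent-circuit model:

```python
# Illustrative arithmetic for a CVT capacitive divider: the intermediate
# voltage is U_mid = U_in * C1 / (C1 + C2), so a drift in C1 shifts the
# ratio and produces a magnitude error.
C1, C2 = 20e-9, 80e-9          # assumed divider capacitances (F)
U_in = 110e3                   # assumed primary voltage (V)

ratio = lambda c1, c2: c1 / (c1 + c2)
nominal = U_in * ratio(C1, C2)
drifted = U_in * ratio(C1 * 1.002, C2)   # C1 drifts up by 0.2 %

magnitude_error = (drifted - nominal) / nominal
print(f"{magnitude_error:+.4%}")   # about +0.16 % ratio error
```

The sign and size of the error depend on which capacitor drifts, consistent with the abstract's observation that the high- and low-voltage capacitors push the real power error in opposite directions.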

  13. Slope Error Measurement Tool for Solar Parabolic Trough Collectors: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Stynes, J. K.; Ihas, B.

    2012-04-01

    The National Renewable Energy Laboratory (NREL) has developed an optical measurement tool for parabolic solar collectors that measures the combined errors due to absorber misalignment and reflector slope error. The combined absorber alignment and reflector slope errors are measured using a digital camera to photograph the reflected image of the absorber in the collector. Previous work using the image of the reflection of the absorber finds the reflector slope errors from the reflection of the absorber and an independent measurement of the absorber location. The accuracy of the reflector slope error measurement is thus dependent on the accuracy of the absorber location measurement. By measuring the combined reflector-absorber errors, the uncertainty in the absorber location measurement is eliminated. The related performance merit, the intercept factor, depends on the combined effects of the absorber alignment and reflector slope errors. Measuring the combined effect provides a simpler measurement and a more accurate input to the intercept factor estimate. The minimal equipment and setup required for this measurement technique make it ideal for field measurements.

  14. Pressure Change Measurement Leak Testing Errors

    Energy Technology Data Exchange (ETDEWEB)

    Pryor, Jeff M [ORNL; Walker, William C [ORNL

    2014-01-01

A pressure change test is a common leak testing method used in construction and Non-Destructive Examination (NDE). The test is known as a fast, simple, and easy-to-apply evaluation method. While this method may be fairly quick to conduct and require only simple instrumentation, the engineering behind this type of test is more complex than is apparent on the surface. This paper discusses some of the more common errors made during the application of a pressure change test and gives the test engineer insight into how to correctly compensate for these factors. The principles discussed here apply to ideal gases such as air and other monatomic or diatomic gases; however, the same principles can be applied to polyatomic gases or liquid flow rates with altered formulas specific to those types of tests, using the same methodology.
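One common error the ideal-gas analysis must compensate for is temperature drift during the hold: for a sealed volume, P/T is constant, so readings should be normalized to a common temperature before differencing. The formula below is the generic textbook version with assumed numbers, not the paper's procedure:

```python
# Leak rate from a pressure change test with ideal-gas temperature
# compensation: a 1 K swing near ambient alone mimics a ~0.3 % pressure
# change, which can dwarf a genuine small leak.
V = 2.5                      # test volume, m^3 (assumed)
P1, T1 = 501_300.0, 294.0    # start: absolute pressure (Pa), temperature (K)
P2, T2 = 500_800.0, 295.5    # end of a 24 h hold (assumed readings)
hours = 24.0

# Temperature-corrected pressure drop (P/T constant for a sealed volume):
dP_corrected = P1 - P2 * (T1 / T2)
leak_rate = V * dP_corrected / hours     # Pa*m^3 per hour
print(f"{leak_rate:.1f} Pa*m^3/h")
```

Without the temperature correction this data set would suggest a leak roughly six times smaller, which is exactly the kind of error the paper warns about.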

  15. Deconvolution Estimation in Measurement Error Models: The R Package decon

    Directory of Open Access Journals (Sweden)

    Xiao-Feng Wang

    2011-03-01

    Full Text Available Data from many scientific areas often come with measurement error. Density or distribution function estimation from contaminated data and nonparametric regression with errors in variables are two important topics in measurement error models. In this paper, we present a new software package decon for R, which contains a collection of functions that use the deconvolution kernel methods to deal with the measurement error problems. The functions allow the errors to be either homoscedastic or heteroscedastic. To make the deconvolution estimators computationally more efficient in R, we adapt the fast Fourier transform algorithm for density estimation with error-free data to the deconvolution kernel estimation. We discuss the practical selection of the smoothing parameter in deconvolution methods and illustrate the use of the package through both simulated and real examples.

  16. Compact ranges in antenna and RCS measurements

    Science.gov (United States)

    Audone, B.

    1989-09-01

With the increased complexity and extended frequency range of operation, model measurements and far-field test ranges are no longer suitable to satisfy the demand for accurate testing. Moreover, plane-wave test conditions are required for Radar Cross Section (RCS) measurements, which represent a key point in stealth technology. Compact ranges represent the best test facilities presently available, since they allow for indoor measurements under far-field conditions in real time without any calculation effort. Several types of compact ranges are described and compared, discussing their relevant advantages with regard to RCS and antenna measurements. In parallel with measuring systems, sophisticated computer models were developed with such a high level of accuracy that it is questionable whether experiments give better results than theory. Tests performed on simple structures show the correlation between experimental results and theoretical ones derived on the basis of GTD computer codes.

  17. Compensation for straightness measurement systematic errors in six degree-of-freedom motion error simultaneous measurement system.

    Science.gov (United States)

    Cui, Cunxing; Feng, Qibo; Zhang, Bin

    2015-04-10

    The straightness measurement systematic errors induced by error crosstalk, fabrication and installation deviation of optical element, measurement sensitivity variation, and the Abbe error in six degree-of-freedom simultaneous measurement system are analyzed in detail in this paper. Models for compensating these systematic errors were established and verified through a series of comparison experiments with the Automated Precision Inc. (API) 5D measurement system, and the experimental results showed that the maximum deviation in straightness error measurement could be reduced from 6.4 to 0.9 μm in the x-direction, and 8.8 to 0.8 μm in the y-direction, after the compensation.

  18. Correlated measurement error hampers association network inference

    NARCIS (Netherlands)

    Kaduk, M.; Hoefsloot, H.C.J.; Vis, D.J.; Reijmers, T.; Greef, J. van der; Smilde, A.K.; Hendriks, M.M.W.B.

    2014-01-01

    Modern chromatography-based metabolomics measurements generate large amounts of data in the form of abundances of metabolites. An increasingly popular way of representing and analyzing such data is by means of association networks. Ideally, such a network can be interpreted in terms of the

  19. Valuation Biases, Error Measures, and the Conglomerate Discount

    NARCIS (Netherlands)

    I. Dittmann (Ingolf); E.G. Maug (Ernst)

    2006-01-01

We document the importance of the choice of error measure (percentage vs. logarithmic errors) for the comparison of alternative valuation procedures. We demonstrate for several multiple valuation methods (averaging with the arithmetic mean, harmonic mean, median, geometric mean) that the

  20. Measurement errors in cirrus cloud microphysical properties

    Directory of Open Access Journals (Sweden)

    H. Larsen

Full Text Available The limited accuracy of current cloud microphysics sensors used in cirrus cloud studies imposes limitations on the use of the data to examine the cloud's broadband radiative behaviour, an important element of the global energy balance. We review the limitations of the instruments (PMS probes) most widely used for measuring the microphysical structure of cirrus clouds and show the effect of these limitations on descriptions of the cloud radiative properties. The analysis is applied to measurements made as part of the European Cloud and Radiation Experiment (EUCREX) to determine mid-latitude cirrus microphysical and radiative properties.

    Key words. Atmospheric composition and structure (cloud physics and chemistry · Meteorology and atmospheric dynamics · Radiative processes · Instruments and techniques

  1. Haplotype reconstruction error as a classical misclassification problem: introducing sensitivity and specificity as error measures.

    Directory of Open Access Journals (Sweden)

    Claudia Lamina

Full Text Available BACKGROUND: Statistically reconstructing haplotypes from single nucleotide polymorphism (SNP) genotypes can lead to falsely classified haplotypes. This can be an issue when interpreting haplotype association results or when selecting subjects with certain haplotypes for subsequent functional studies. It was our aim to quantify haplotype reconstruction error and to provide tools for it. METHODS AND RESULTS: By numerous simulation scenarios, we systematically investigated several error measures, including discrepancy, error rate, and R², and introduced sensitivity and specificity to this context. We exemplified several measures in the KORA study, a large population-based study from Southern Germany. We find that the specificity is slightly reduced only for common haplotypes, while the sensitivity was decreased for some, but not all, rare haplotypes. The overall error rate generally increased with increasing number of loci, increasing minor allele frequency of SNPs, decreasing correlation between the alleles, and increasing ambiguity. CONCLUSIONS: We conclude that, with the analytical approach presented here, haplotype-specific error measures can be computed to gain insight into the haplotype uncertainty. This method provides the information whether a specific risk haplotype can be expected to be reconstructed with rather no or high misclassification and thus on the magnitude of expected bias in association estimates. We also illustrate that sensitivity and specificity separate two dimensions of the haplotype reconstruction error, which completely describe the misclassification matrix and thus provide the prerequisite for methods accounting for misclassification.
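Per-haplotype sensitivity and specificity follow directly from the misclassification matrix, treating "carries the haplotype" as the positive class. The function and toy data below are illustrative assumptions, not the KORA analysis:

```python
def haplotype_sens_spec(true_haps, inferred_haps, hap):
    """Sensitivity and specificity of reconstructing one haplotype,
    with 'carries `hap`' as the positive class (per the measures
    introduced above; toy data, not the study's pipeline)."""
    pairs = list(zip(true_haps, inferred_haps))
    tp = sum(t == hap and i == hap for t, i in pairs)  # correctly assigned
    fn = sum(t == hap and i != hap for t, i in pairs)  # missed carriers
    tn = sum(t != hap and i != hap for t, i in pairs)  # correct non-carriers
    fp = sum(t != hap and i == hap for t, i in pairs)  # false carriers
    return tp / (tp + fn), tn / (tn + fp)

true_h = ["AC", "AC", "AG", "GG", "AG", "AC"]     # assumed true haplotypes
inferred = ["AC", "AG", "AG", "GG", "AC", "AC"]   # assumed reconstruction
sens, spec = haplotype_sens_spec(true_h, inferred, "AC")
print(sens, spec)  # 2/3 each: one carrier missed, one non-carrier flagged
```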

  2. Target range evaluation using video sensor and analysis of the influence of measurement noise and errors

    Directory of Open Access Journals (Sweden)

    Dragoslav Ugarak

    2006-01-01

    Full Text Available This paper presents a mathematical model for determining target range by analyzing video frames during tracking. The contributions of the parameters that affect error magnitude are analyzed, and the corresponding standard deviations are determined.

  3. Correction of a phase dependent error in a time-of-flight range sensor

    Science.gov (United States)

    Seiter, Johannes; Hofbauer, Michael; Davidovic, Milos; Zimmermann, Horst

    2013-04-01

    Time-of-flight (TOF) 3D cameras determine distance by means of a propagation delay measurement. The delay is acquired by correlating the sent and received continuous-wave signals at discrete phase delay steps. To reduce the measurement time as well as the resources required for signal processing, the number of phase steps can be decreased. However, doing so gives rise to a significant systematic, phase-dependent distance error. In this publication we investigate this phase-dependent error systematically by means of a fiber-based measurement setup. Furthermore, the phase shift is varied with an electrical delay line rather than by moving an object in front of the camera. This procedure allows the phase-dependent error to be investigated in isolation from other error sources, such as the amplitude-dependent error. In other publications this error is corrected by means of a look-up table stored in a memory device. Here we demonstrate an analytical correction method that drastically reduces the required memory size. For four phase steps, this approach reduces the error by 89.4% to 13.5 mm at a modulation frequency of 12.5 MHz. For 20.0 MHz, a reduction of 86.8% to 11.5 mm was achieved.
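    For context, the baseline four-step correlation demodulation that the reduced-phase-step setup builds on can be sketched as follows; this is an idealized textbook model, not the paper's analytical correction:

```python
import numpy as np

C = 299_792_458.0     # speed of light, m/s
F_MOD = 12.5e6        # modulation frequency from the paper, Hz

def distance_from_four_steps(a0, a1, a2, a3, f_mod=F_MOD):
    """Four-bucket demodulation: correlation samples taken at 0/90/180/270
    degree phase offsets yield the round-trip phase, hence the distance."""
    phase = np.arctan2(a1 - a3, a0 - a2) % (2 * np.pi)
    return C * phase / (4 * np.pi * f_mod)

# ideal sinusoidal correlation for a 3 m target (well inside the 12 m
# unambiguous range at 12.5 MHz)
d_true = 3.0
phi = 4 * np.pi * F_MOD * d_true / C
a = [np.cos(phi - k * np.pi / 2) for k in range(4)]
d_est = distance_from_four_steps(*a)
```

    With a perfectly sinusoidal correlation the four-step estimate is exact; the systematic error studied in the paper appears because the real correlation waveform is not sinusoidal.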

  4. Measuring worst-case errors in a robot workcell

    Energy Technology Data Exchange (ETDEWEB)

    Simon, R.W.; Brost, R.C.; Kholwadwala, D.K. [Sandia National Labs., Albuquerque, NM (United States). Intelligent Systems and Robotics Center

    1997-10-01

    Errors in model parameters, sensing, and control are inevitably present in real robot systems. These errors must be considered in order to automatically plan robust solutions to many manipulation tasks. Lozano-Perez, Mason, and Taylor proposed a formal method for synthesizing robust actions in the presence of uncertainty; this method has been extended by several subsequent researchers. All of these results presume the existence of worst-case error bounds that describe the maximum possible deviation between the robot's model of the world and reality. This paper examines the problem of measuring these error bounds for a real robot workcell. These measurements are difficult, because of the desire to completely contain all possible deviations while avoiding bounds that are overly conservative. The authors present a detailed description of a series of experiments that characterize and quantify the possible errors in visual sensing and motion control for a robot workcell equipped with standard industrial robot hardware. In addition to providing a means for measuring these specific errors, these experiments shed light on the general problem of measuring worst-case errors.

  5. Measurement error caused by spatial misalignment in environmental epidemiology.

    Science.gov (United States)

    Gryparis, Alexandros; Paciorek, Christopher J; Zeka, Ariana; Schwartz, Joel; Coull, Brent A

    2009-04-01

    In many environmental epidemiology studies, the locations and/or times of exposure measurements and health assessments do not match. In such settings, health effects analyses often use the predictions from an exposure model as a covariate in a regression model. Such exposure predictions contain some measurement error as the predicted values do not equal the true exposures. We provide a framework for spatial measurement error modeling, showing that smoothing induces a Berkson-type measurement error with nondiagonal error structure. From this viewpoint, we review the existing approaches to estimation in a linear regression health model, including direct use of the spatial predictions and exposure simulation, and explore some modified approaches, including Bayesian models and out-of-sample regression calibration, motivated by measurement error principles. We then extend this work to the generalized linear model framework for health outcomes. Based on analytical considerations and simulation results, we compare the performance of all these approaches under several spatial models for exposure. Our comparisons underscore several important points. First, exposure simulation can perform very poorly under certain realistic scenarios. Second, the relative performance of the different methods depends on the nature of the underlying exposure surface. Third, traditional measurement error concepts can help to explain the relative practical performance of the different methods. We apply the methods to data on the association between levels of particulate matter and birth weight in the greater Boston area.
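    The distinction between classical and Berkson error that drives these results can be demonstrated with a small simulation (illustrative parameters only): classical error in the exposure attenuates the regression slope, while Berkson error leaves it unbiased.

```python
import numpy as np

rng = np.random.default_rng(0)
n, beta = 100_000, 2.0

# classical error: observe w = x + u, regress y on w
# -> slope attenuated by var(x) / (var(x) + var(u)) = 0.5 here
x = rng.normal(0, 1, n)                  # true exposure
y = beta * x + rng.normal(0, 1, n)       # health outcome
w = x + rng.normal(0, 1, n)              # mismeasured exposure
b_classical = np.polyfit(w, y, 1)[0]

# Berkson error: true exposure scatters around the predicted value z,
# x = z + u; regressing y on z leaves the slope unbiased
z = rng.normal(0, 1, n)
x_b = z + rng.normal(0, 1, n)
y_b = beta * x_b + rng.normal(0, 1, n)
b_berkson = np.polyfit(z, y_b, 1)[0]
```

    Smoothed exposure predictions behave more like the Berkson case, which is why plugging them into the health model can still give nearly unbiased slopes in the linear setting.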

  6. On Characterization of Elasticity Parameters in Context of Measurement Errors

    Science.gov (United States)

    Slawinski, M. A.

    2007-12-01

    them and are characterized by ranges of parameters. Such an approach takes advantage of similarities among several eigenproperties that distinguish a given group from the others. Furthermore, we might not need to measure the traveltimes of all three waves, the quasishear wave being more difficult to observe. Also, we might not need to measure polarizations, which, in general, exhibit a larger measurement error than the traveltimes do. (To obtain a complete elasticity tensor we need both polarizations and traveltimes for the three waves [3].) 1. Bóna, A., Bucataru, I., Slawinski, M.A. (2007) Elasticity parameters from traveltime and polarization measurements. Journal of Applied Geophysics (accepted) 2. Bóna, A., Bucataru, I., Slawinski, M.A. (2007) Coordinate-free characterization of elasticity tensor. Journal of Elasticity 87(2-3), 109--132 3. Bóna, A., Bucataru, I., Slawinski, M.A. (2007) Material symmetries versus wavefront symmetries. Q. Jl Mech. appl. Math 60(2), 73--8 4. Bóna, A., Slawinski, M.A. (2007) Comparison of two inversions for elasticity tensor. Journal of Applied Geophysics (submitted) 5. Dewangan, P., Grechka, V. (2003) Inversion of multicomponent, multiazimuth, walkaway VSP data for the stiffness tensor. Geophysics 68(3), 1022--1031 6. Ting, T.C.T. (2003) Generalized Cowin-Mehrabadi theorems and a direct proof that the number of linear elastic symmetries is eight. Internat. J. of Solids and Structures 40, 7129--7142

  7. Detection and Classification of Measurement Errors in Bioimpedance Spectroscopy.

    Directory of Open Access Journals (Sweden)

    David Ayllón

    Full Text Available Bioimpedance spectroscopy (BIS) measurement errors may be caused by parasitic stray capacitance, impedance mismatch, cross-talking or their very likely combination. An accurate detection and identification is of extreme importance for further analysis because in some cases and for some applications, certain measurement artifacts can be corrected, minimized or even avoided. In this paper we present a robust method to detect the presence of measurement artifacts and identify what kind of measurement error is present in BIS measurements. The method is based on supervised machine learning and uses a novel set of generalist features for measurement characterization in different immittance planes. Experimental validation has been carried out using a database of complex spectra BIS measurements obtained from different BIS applications and containing six different types of errors, as well as error-free measurements. The method obtained a low classification error (0.33%) and has shown good generalization. Since both the features and the classification schema are relatively simple, the implementation of this pre-processing task in the current hardware of bioimpedance spectrometers is possible.

  8. Detection and Classification of Measurement Errors in Bioimpedance Spectroscopy.

    Science.gov (United States)

    Ayllón, David; Gil-Pita, Roberto; Seoane, Fernando

    2016-01-01

    Bioimpedance spectroscopy (BIS) measurement errors may be caused by parasitic stray capacitance, impedance mismatch, cross-talking or their very likely combination. An accurate detection and identification is of extreme importance for further analysis because in some cases and for some applications, certain measurement artifacts can be corrected, minimized or even avoided. In this paper we present a robust method to detect the presence of measurement artifacts and identify what kind of measurement error is present in BIS measurements. The method is based on supervised machine learning and uses a novel set of generalist features for measurement characterization in different immittance planes. Experimental validation has been carried out using a database of complex spectra BIS measurements obtained from different BIS applications and containing six different types of errors, as well as error-free measurements. The method obtained a low classification error (0.33%) and has shown good generalization. Since both the features and the classification schema are relatively simple, the implementation of this pre-processing task in the current hardware of bioimpedance spectrometers is possible.
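    As a rough sketch of the supervised-classification idea, and emphatically not the paper's actual features or classifier, a nearest-centroid rule on synthetic two-dimensional "spectral" features separates hypothetical error classes:

```python
import numpy as np

rng = np.random.default_rng(7)

# hypothetical 2-D feature vectors per error class (imagine summary
# statistics of the spectrum in different immittance planes)
def make_class(center, n=60):
    return rng.normal(center, 0.3, size=(n, 2))

classes = {"error-free": (0, 0), "stray-capacitance": (2, 0), "mismatch": (0, 2)}
X = np.vstack([make_class(c) for c in classes.values()])
y = np.repeat(list(classes), 60)

# train: one centroid per class; predict: closest centroid wins
centroids = {label: X[y == label].mean(axis=0) for label in classes}

def predict(x):
    return min(centroids, key=lambda c: np.linalg.norm(x - centroids[c]))

preds = np.array([predict(x) for x in X])
accuracy = np.mean(preds == y)
```

    Any classifier simple enough to run on a spectrometer's embedded hardware, as the paper argues, follows the same train-on-labels / score-new-spectra pattern.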

  9. Estimation of slope for measurement error model with equation error: Applications on serum kanamycin data

    Science.gov (United States)

    Saqr, Anwar; Khan, Shahjahan

    2017-05-01

    This paper introduces a statistical method to estimate the parameters of a bivariate structural errors-in-variables (EIV) model. Estimation is a difficult problem when there is no, or only uncertain, prior knowledge of the measurement error variances. The proposed estimators of the EIV model parameters are derived from a mathematical modification of the observed data. The modification is designed to reproduce an explanatory variable with statistical characteristics equivalent to those of the unobserved explanatory variable, and to correct for the effects of measurement error in the predictors. The proposed method produces robust estimators, is straightforward and easy to implement, and takes the equation errors into account. Simulation studies show the new estimator to be generally more efficient and less biased than several previous approaches. Compared with the maximum likelihood method in the simulation studies, the estimators of the proposed method are nearly asymptotically unbiased and efficient when there is no, or only uncertain, prior knowledge of the measurement error variances. Numerical comparisons from the simulation studies are included. In addition, the results are illustrated with an application to a well-known real data set on serum kanamycin.
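    The attenuation that plain least squares suffers in an EIV model, and one standard correction when the error-variance ratio is known, can be sketched as follows. The Deming estimator below is shown as a familiar stand-in; it is not the paper's modification method.

```python
import numpy as np

rng = np.random.default_rng(1)
n, beta0, beta1 = 5000, 1.0, 2.5
xi = rng.normal(10, 2, n)                       # true (latent) predictor
x = xi + rng.normal(0, 1, n)                    # observed with measurement error
y = beta0 + beta1 * xi + rng.normal(0, 1, n)    # response with equation error

b_ols = np.polyfit(x, y, 1)[0]                  # attenuated toward zero

# Deming estimator, assuming the ratio of error variances is known:
# lam = var(equation error) / var(measurement error) = 1 here
lam = 1.0
sxx = np.var(x, ddof=1)
syy = np.var(y, ddof=1)
sxy = np.cov(x, y, ddof=1)[0, 1]
b_deming = (syy - lam * sxx
            + np.sqrt((syy - lam * sxx) ** 2 + 4 * lam * sxy ** 2)) / (2 * sxy)
```

    With reliability var(ξ)/var(x) = 4/5, OLS converges to 2.0 instead of 2.5, while the Deming slope recovers the true value; the paper's contribution is precisely to avoid needing the variance-ratio assumption.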

  10. Suspected time errors along the satellite laser ranging network and impact on the reference frame

    Science.gov (United States)

    Belli, Alexandre; Exertier, Pierre; Lemoine, Frank; Zelensky, Nikita

    2017-04-01

    Systematic errors in the laser ranging technology must be considered given the GGOS objective of maintaining a network with an accuracy of 1 mm and a stability of 0.1 mm per year for the station ground coordinates in the ITRF. Range and time biases account for a major part of these systematic errors and are difficult to detect. Concerning the range bias, analysts and working groups estimate its value from LAGEOS-1 & 2 observations (c.f. Appleby et al. 2016). Time errors, on the other hand, are often neglected (they are presumed to be negligible). We integrated an ultra-stable oscillator (USO) frequency model in order to take care of the frequency instabilities caused by the space environment. The integration provides a model which becomes an "on-orbit" time realization that can be connected to each of the SLR stations by the ground-to-space laser link. We estimated time biases per station, with a repeatability of 3-4 ns, for 25 stations which observe T2L2 regularly. We investigated the effect on LAGEOS and Starlette orbits and discuss the impact of time errors on the station coordinates. We show that the effects on the global POD are negligible (< 1 mm) but reach 4-6 mm for the coordinates. We conclude by proposing to introduce time errors in future analyses (IDS and ILRS), which would lead to the computation of improved reference frame solutions.

  11. The analysis and measurement of motion errors of the linear slide in fast tool servo diamond turning machine

    Directory of Open Access Journals (Sweden)

    Xu Zhang

    2015-03-01

    Full Text Available This article proposes a novel method for identifying the motion errors (mainly the straightness error and the angular error) of a linear slide, based on laser interferometry integrated with the shifting method. First, the straightness error of a linear slide coupled with angular error (pitch error in the vertical direction and yaw error in the horizontal direction) is schematically explained. Then, a laser interferometry based system is constructed to measure the motion errors of a linear slide, and an error separation algorithm is developed for extracting the straightness error, the angular error, and the tilt angle error caused by the motion of the reflector. In the proposed method, the reflector is mounted on the slide moving along the guideway. The phase variation of the two interfering laser beams identifies the lateral translation error of the slide. Differential outputs, sampled with a shifted initial point on the same datum line, are applied to evaluate the angular error of the slide. Furthermore, the yaw error of the slide was measured with a laser interferometer in a laboratory environment and compared with the evaluated values. Experimental results demonstrate that the proposed method reduces the effects caused by assembly error and by the tilt angle errors due to movement of the reflector, adapts to long- or short-range measurement, and is convenient and easy to operate.
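    The differential idea behind the shifting method can be caricatured in a few lines: subtracting a straightness profile from a copy shifted by one sampling interval approximates the local slope, i.e. the yaw angle of the carriage. This is a deliberately simplified sketch; the paper's full error-separation algorithm also removes the reflector tilt term.

```python
import numpy as np

# toy straightness profile of a slide (micrometres) sampled every 10 mm
d = 10.0                                   # sampling / shift interval, mm
x = np.arange(0.0, 300.0, d)               # carriage position, mm
s = 0.5 * np.sin(2 * np.pi * x / 300.0)    # lateral deviation, um

# shifted-copy difference ~ local slope between adjacent positions;
# um/mm equals mrad, so multiply by 1000 to express the yaw in urad
yaw_urad = (s[1:] - s[:-1]) / d * 1000.0
```

    For the 0.5 µm, 300 mm sinusoidal profile above, the recovered yaw peaks at roughly 10 µrad, matching the analytic derivative of the profile.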

  12. An in-situ measuring method for planar straightness error

    Science.gov (United States)

    Chen, Xi; Fu, Luhua; Yang, Tongyu; Sun, Changku; Wang, Zhong; Zhao, Yan; Liu, Changjie

    2018-01-01

    In view of some current problems in measuring the plane shape error of a workpiece, an in-situ measuring method based on laser triangulation is presented in this paper. The method avoids the inefficiency of traditional methods such as the knife straightedge, as well as the time and cost requirements of a coordinate measuring machine (CMM). A laser-based measuring head is designed and installed on the spindle of a numerical control (NC) machine. The measuring head moves along a planned path over the measuring points. The spatial coordinates of the measuring points are obtained by combining the laser triangulation displacement sensor with the coordinate system of the NC machine, which makes in-situ measurement achievable. The planar straightness error is evaluated using particle swarm optimization (PSO). To verify the feasibility and accuracy of the measuring method, simulation experiments were implemented with a CMM. Comparison of the measuring head's results with the corresponding values obtained by the coordinate measuring machine verifies that the method can realize high-precision, automatic measurement of the planar straightness error of a workpiece.

  13. Quantitative evaluation of statistical errors in small-angle X-ray scattering measurements.

    Science.gov (United States)

    Sedlak, Steffen M; Bruetzel, Linda K; Lipfert, Jan

    2017-04-01

    A new model is proposed for the measurement errors incurred in typical small-angle X-ray scattering (SAXS) experiments, which takes into account the setup geometry and the physics of the measurement process. The model accurately captures the experimentally determined errors from a large range of synchrotron and in-house anode-based measurements. Its most general formulation gives for the variance of the buffer-subtracted SAXS intensity σ²(q) = [I(q) + const.]/(kq), where I(q) is the scattering intensity as a function of the momentum transfer q; k and const. are fitting parameters that are characteristic of the experimental setup. The model gives a concrete procedure for calculating realistic measurement errors for simulated SAXS profiles. In addition, the results provide guidelines for optimizing SAXS measurements, which are in line with established procedures for SAXS experiments, and enable a quantitative evaluation of measurement errors.
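    The stated variance model is easy to apply when attaching realistic error bars to a simulated profile. In the sketch below, the setup constants k and const., and the toy intensity curve, are made up for illustration:

```python
import numpy as np

def saxs_variance(I, q, k, const):
    """Variance model from the paper: sigma^2(q) = (I(q) + const) / (k*q)."""
    return (np.asarray(I) + const) / (k * np.asarray(q))

# attach error bars to a simulated Guinier-style toy profile
q = np.linspace(0.01, 0.5, 50)            # momentum transfer, 1/Angstrom
I = 1e3 * np.exp(-(q * 20.0) ** 2 / 3.0)  # toy scattering intensity
sigma = np.sqrt(saxs_variance(I, q, k=5e4, const=50.0))
```

    In practice k and const. would be fitted once per beamline configuration and then reused for any simulated profile measured on that setup.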

  14. Estimating Measurement Error of the Patient Activation Measure for Respondents with Partially Missing Data

    Directory of Open Access Journals (Sweden)

    Ariel Linden

    2015-01-01

    Full Text Available The patient activation measure (PAM) is an increasingly popular instrument used as the basis for interventions to improve patient engagement and as an outcome measure to assess intervention effects. However, a PAM score may be calculated even when some responses are missing, which can introduce substantial measurement error. In this paper, measurement error is systematically estimated across the full possible range of missing items (one to twelve) using simulation, in which populated items were randomly replaced with missing data for each of 1,138 complete surveys obtained in a randomized controlled trial. The PAM score was then calculated, and the overall simulated average mean, minimum, and maximum PAM scores were compared to the true PAM score in order to assess the absolute percentage error (APE) for each comparison. With only one missing item, the average APE was 2.5% when comparing the true PAM score to the simulated minimum score and 4.3% when comparing to the simulated maximum score. APEs increased with additional missing items, such that surveys with 12 missing items had average APEs of 29.7% (minimum) and 44.4% (maximum). Several suggestions and alternative approaches are offered that could improve measurement accuracy when responses are missing.
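    The simulation design can be sketched generically. Since the actual PAM scoring relies on a proprietary Rasch calibration, a rescaled mean of Likert items stands in for it below, and all numbers are illustrative rather than the study's results:

```python
import random

random.seed(42)
N_ITEMS = 13   # PAM-13; a rescaled 0-100 item mean substitutes here
               # for the real Rasch-based scoring

def score(items):
    """Toy activation score: mean of answered 1-4 items, rescaled to 0-100."""
    answered = [v for v in items if v is not None]
    return (sum(answered) / len(answered) - 1.0) / 3.0 * 100.0

def average_ape(items, n_missing, trials=200):
    """Mean absolute percentage error of the score when n_missing
    randomly chosen responses are dropped, as in the paper's design."""
    true = score(items)
    total = 0.0
    for _ in range(trials):
        drop = set(random.sample(range(N_ITEMS), n_missing))
        masked = [None if i in drop else v for i, v in enumerate(items)]
        total += abs(score(masked) - true) / true * 100.0
    return total / trials

survey = [1, 2, 3, 4, 2, 3, 4, 1, 2, 3, 4, 2, 3]   # one complete response
errors = {m: average_ape(survey, m) for m in (1, 6, 12)}
```

    Even in this toy version the APE grows steeply with the number of missing items, mirroring the qualitative pattern the paper reports.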

  15. QUALITATIVE DATA AND ERROR MEASUREMENT IN INPUT-OUTPUT-ANALYSIS

    NARCIS (Netherlands)

    NIJKAMP, P; OOSTERHAVEN, J; OUWERSLOOT, H; RIETVELD, P

    1992-01-01

    This paper is a contribution to the rapidly emerging field of qualitative data analysis in economics. Ordinal data techniques and error measurement in input-output analysis are here combined in order to test the reliability of a low level of measurement and precision of data by means of a stochastic

  16. Measurement error of waist circumference: Gaps in knowledge

    NARCIS (Netherlands)

    Verweij, L.M.; Terwee, C.B.; Proper, K.I.; Hulshof, C.T.; Mechelen, W.V. van

    2013-01-01

    Objective It is not clear whether measuring waist circumference in clinical practice is problematic because the measurement error is unclear, as well as what constitutes a clinically relevant change. The present study aimed to summarize what is known from state-of-the-art research. Design To

  17. Measurement error of waist circumference: gaps in knowledge

    NARCIS (Netherlands)

    Verweij, L.M.; Terwee, C.B.; Proper, K.I.; Hulshof, C.T.J.; van Mechelen, W.

    2013-01-01

    Objective It is not clear whether measuring waist circumference in clinical practice is problematic because the measurement error is unclear, as well as what constitutes a clinically relevant change. The present study aimed to summarize what is known from state-of-the-art research. Design To

  18. Assessment of salivary flow rate: biologic variation and measure error.

    NARCIS (Netherlands)

    Jongerius, P.H.; Limbeek, J. van; Rotteveel, J.J.

    2004-01-01

    OBJECTIVE: To investigate the applicability of the swab method in the measurement of salivary flow rate in multiple-handicap drooling children. To quantify the measurement error of the procedure and the biologic variation in the population. STUDY DESIGN: Cohort study. METHODS: In a repeated

  19. Measurement errors with low-cost citizen science radiometers

    OpenAIRE

    Bardají, R.; Piera Fernández, Jaume

    2016-01-01

    The KdUINO is a do-it-yourself buoy with low-cost radiometers that measures a parameter related to water transparency: the diffuse attenuation coefficient integrated over the photosynthetically active radiation. In this contribution, we analyze the measurement errors of a novel low-cost multispectral radiometer that is used with the KdUINO. Peer Reviewed

  20. Detection and Classification of Measurement Errors in Bioimpedance Spectroscopy

    OpenAIRE

    David Ayllón; Roberto Gil-Pita; Fernando Seoane

    2016-01-01

    Bioimpedance spectroscopy (BIS) measurement errors may be caused by parasitic stray capacitance, impedance mismatch, cross-talking or their very likely combination. An accurate detection and identification is of extreme importance for further analysis because in some cases and for some applications, certain measurement artifacts can be corrected, minimized or even avoided. In this paper we present a robust method to detect the presence of measurement artifacts and identify what kind of measur...

  1. Active stabilization of error field penetration via control field and bifurcation of its stable frequency range

    Science.gov (United States)

    Inoue, S.; Shiraishi, J.; Takechi, M.; Matsunaga, G.; Isayama, A.; Hayashi, N.; Ide, S.

    2017-11-01

    An active stabilization effect of a rotating control field against error field penetration is numerically studied. We have developed a resistive magnetohydrodynamic code, 'AEOLUS-IT', which can simulate plasma responses to rotating/static external magnetic fields. Adopting a non-uniform flux coordinate system, AEOLUS-IT simulations can employ the high magnetic Reynolds number conditions relevant to present tokamaks. With AEOLUS-IT, we successfully clarified the stabilization mechanism of the control field against error field penetration. The physical process by which the control field drives plasma rotation is demonstrated by nonlinear simulation, which reveals that the rotation amplitude at the resonant surface is not a monotonic function of the control field frequency but has an extremum. Consequently, two 'bifurcated' frequency ranges of the control field are found for stabilization of the error field penetration.

  2. Measurement error models for survey statistics and economic archaeology

    OpenAIRE

    Groß, Marcus

    2016-01-01

    The present work is concerned with so-called measurement error models in applied statistics. Data from two very different fields were analyzed and processed: on the one hand, survey and register data, as used in survey statistics, and on the other hand, anthropological data on prehistoric skeletons. In both fields the problem arises that some variables cannot be measured with sufficient accuracy, whether due to privacy constraints or measuring inaccuracies. This circumstance can be summa...

  3. Statistical analysis with measurement error or misclassification strategy, method and application

    CERN Document Server

    Yi, Grace Y

    2017-01-01

    This monograph on measurement error and misclassification covers a broad range of problems and emphasizes unique features in modeling and analyzing problems arising from medical research and epidemiological studies. Many measurement error and misclassification problems have been addressed in various fields over the years as well as with a wide spectrum of data, including event history data (such as survival data and recurrent event data), correlated data (such as longitudinal data and clustered data), multi-state event data, and data arising from case-control studies. Statistical Analysis with Measurement Error or Misclassification: Strategy, Method and Application brings together assorted methods in a single text and provides an update of recent developments for a variety of settings. Measurement error effects and strategies of handling mismeasurement for different models are closely examined in combination with applications to specific problems. Readers with diverse backgrounds and objectives can utilize th...

  4. Reliability and measurement error of 3-dimensional regional lumbar motion measures

    DEFF Research Database (Denmark)

    Mieritz, Rune M; Bronfort, Gert; Kawchuk, Greg

    2012-01-01

    The purpose of this study was to systematically review the literature on the reproducibility (reliability and/or measurement error) of 3-dimensional (3D) regional lumbar motion measurement systems.

  5. Bayesian modeling of measurement error in predictor variables

    NARCIS (Netherlands)

    Fox, Gerardus J.A.; Glas, Cornelis A.W.

    2003-01-01

    It is shown that measurement error in predictor variables can be modeled using item response theory (IRT). The predictor variables, that may be defined at any level of an hierarchical regression model, are treated as latent variables. The normal ogive model is used to describe the relation between

  6. GY SAMPLING THEORY IN ENVIRONMENTAL STUDIES 2: SUBSAMPLING ERROR MEASUREMENTS

    Science.gov (United States)

    Sampling can be a significant source of error in the measurement process. The characterization and cleanup of hazardous waste sites require data that meet site-specific levels of acceptable quality if scientifically supportable decisions are to be made. In support of this effort,...

  7. Consistent estimation of linear panel data models with measurement error

    NARCIS (Netherlands)

    Meijer, Erik; Spierdijk, Laura; Wansbeek, Thomas

    2017-01-01

    Measurement error causes a bias towards zero when estimating a panel data linear regression model. The panel data context offers various opportunities to derive instrumental variables allowing for consistent estimation. We consider three sources of moment conditions: (i) restrictions on the

  8. GMM estimation in panel data models with measurement error

    NARCIS (Netherlands)

    Wansbeek, T.J.

    Griliches and Hausman (J. Econom. 32 (1986) 93) have introduced GMM estimation in panel data models with measurement error. We present a simple, systematic approach to derive moment conditions for such models under a variety of assumptions. (C) 2001 Elsevier Science S.A. All rights reserved.

  9. Comparing measurement errors for formants in synthetic and natural vowels.

    Science.gov (United States)

    Shadle, Christine H; Nam, Hosung; Whalen, D H

    2016-02-01

    The measurement of vowel formant frequencies is among the most common measurements in speech studies, but the measurements are known to be biased by the particular fundamental frequency (F0) exciting the formants. Approaches to reducing these errors were assessed in two experiments. In the first, synthetic vowels were constructed with five different first formant (F1) values and nine different F0 values; formant bandwidths and higher formant frequencies were held constant. Input formant values were compared to manual measurements and to automatic measures using the linear predictive coding (LPC)-Burg algorithm, LPC closed-phase covariance, the weighted linear prediction-attenuated main excitation (WLP-AME) algorithm [Alku, Pohjalainen, Vainio, Laukkanen, and Story (2013). J. Acoust. Soc. Am. 134(2), 1295-1313], and spectra smoothed cepstrally and by averaging repeated discrete Fourier transforms. Formants were also measured manually from pruned reassigned spectrograms (RSs) [Fulop (2011). Speech Spectrum Analysis (Springer, Berlin)]. All but WLP-AME and RS had large errors in the direction of the strongest harmonic; the smallest errors occurred with WLP-AME and RS. In the second experiment, these methods were used on vowels in isolated words spoken by four speakers. Results for natural speech show that F0 bias affects all automatic methods, including WLP-AME; only the formants measured manually from RS appeared to be accurate. In addition, RS coped better with weaker formants and glottal fry.
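    A minimal version of the LPC pipeline these methods build on (autocorrelation LPC via Levinson-Durbin, then pole angles converted to frequencies) can be sketched on a synthetic one-resonance "vowel". This is not the Burg or WLP-AME implementation evaluated in the paper, just the generic idea:

```python
import numpy as np

def lpc_coeffs(x, order):
    """Autocorrelation-method LPC coefficients via Levinson-Durbin."""
    r = np.correlate(x, x, mode="full")[len(x) - 1:len(x) + order]
    a = np.zeros(order + 1)
    a[0] = 1.0
    err = r[0]
    for i in range(1, order + 1):
        k = -(r[i] + np.dot(a[1:i], r[i - 1:0:-1])) / err
        a[1:i] = a[1:i] + k * a[i - 1:0:-1]   # update lower coefficients
        a[i] = k
        err *= 1.0 - k * k
    return a

def first_formant(x, fs, order=2):
    """Frequency of the lowest upper-half-plane LPC pole (a crude formant)."""
    roots = np.roots(lpc_coeffs(x, order))
    roots = roots[np.imag(roots) > 1e-9]
    freqs = np.sort(np.angle(roots) * fs / (2 * np.pi))
    return freqs[0]

fs = 10_000
n = np.arange(500)
vowel = 0.99 ** n * np.sin(2 * np.pi * 500 * n / fs)  # one damped resonance
f_est = first_formant(vowel, fs)
```

    On this clean, unexcited resonance the estimate lands essentially on 500 Hz; the paper's point is that a harmonic-rich glottal source pulls such estimates toward the strongest harmonic.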

  10. Measurement error of global rainbow technique: The effect of recording parameters

    Science.gov (United States)

    Wu, Xue-cheng; Li, Can; Jiang, Hao-yu; Cao, Jian-zheng; Chen, Ling-hong; Gréhan, Gerard; Cen, Ke-fa

    2017-11-01

    Rainbow refractometry can measure the refractive index and the size of spray droplets simultaneously. Recording parameters of the global rainbow imaging system, such as the recording distance and the scattering angle recording range, play a vital role in in-situ, high-accuracy measurement. In this paper, a theoretical and experimental investigation of the effect of recording parameters on the measurement error of the global rainbow technique was carried out for the first time. The relation between the two recording parameters and the monochromatic aberrations in the global rainbow imaging system was analyzed. In the framework of Lorenz-Mie theory and a modified Nussenzveig theory with correction coefficients, measurement error curves of the refractive index and size of the droplets caused by aberrations were simulated for different recording parameters. The simulated results showed that measurement error increases with the RMS radius of the diffuse spot, and that a long recording distance and a large scattering angle recording range both cause a larger diffuse spot. Recording parameters were found to have a large effect on the refractive index measurement error, but little effect on the droplet size measurement. A sharp rise in spot radius at large recording parameters was mainly due to spherical aberration and coma. To confirm these conclusions, an experiment was conducted. The experimental results showed a refractive index measurement error as high as 1.3 × 10⁻³ for a recording distance of 31 cm. Recording parameters are therefore suggested to be set to as small a value as possible for the same optical elements.

  11. Laser straightness interferometer system with rotational error compensation and simultaneous measurement of six degrees of freedom error parameters.

    Science.gov (United States)

    Chen, Benyong; Xu, Bin; Yan, Liping; Zhang, Enzheng; Liu, Yanna

    2015-04-06

    A laser straightness interferometer system with rotational error compensation and simultaneous measurement of six degrees of freedom (DOF) error parameters is proposed. The optical configuration of the system is designed, and a mathematical model is established for simultaneously measuring six DOF parameters of the measured object: three rotational parameters (the yaw, pitch and roll errors) and three linear parameters (the horizontal straightness error, the vertical straightness error, and the straightness error's position). To address the influence of the rotational errors introduced by the measuring reflector of the straightness interferometer, a compensation method for the straightness error and its position is presented. An experimental setup was constructed, and a series of experiments, including separate comparison measurements of every parameter, compensation of the straightness error and its position, and simultaneous measurement of the six DOF parameters of a precision linear stage, were performed to demonstrate the feasibility of the proposed system. Experimental results show that the measurements of the multiple DOF parameters obtained with the proposed system agree with those obtained from the reference instruments, and that the presented compensation method effectively eliminates the influence of rotational errors on the measurement of the straightness error and its position.
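    The need for rotational error compensation follows the familiar Abbe principle: a yaw of the reflector acting over a lever arm contaminates the straightness reading. A small-angle sketch of such a correction (the geometry and numbers are illustrative, not the paper's exact model):

```python
import math

def compensate_straightness(measured_um, yaw_rad, offset_mm):
    """Remove the Abbe-type contribution of the reflector's yaw from a
    horizontal straightness reading (small-angle approximation).
    mm * rad -> mm, times 1000 -> um."""
    return measured_um - yaw_rad * offset_mm * 1000.0

# a reading contaminated by a 20 urad yaw acting over a 50 mm offset:
# true 3.0 um straightness plus 1.0 um of yaw crosstalk
raw = 3.0 + 20e-6 * 50 * 1000
corrected = compensate_straightness(raw, 20e-6, 50.0)
```

    Because the proposed system measures yaw, pitch and roll simultaneously with the linear parameters, corrections of this kind can be applied point by point along the stage travel.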

  12. #2 - An Empirical Assessment of Exposure Measurement Error ...

    Science.gov (United States)

    Background: • Differing degrees of exposure error across pollutants • Previous focus on quantifying and accounting for exposure error in single-pollutant models • Examine exposure errors for multiple pollutants and provide insights on the potential for bias and attenuation of effect estimates in single- and bi-pollutant epidemiological models. The National Exposure Research Laboratory (NERL) Human Exposure and Atmospheric Sciences Division (HEASD) conducts research in support of the EPA's mission to protect human health and the environment. HEASD's research program supports Goal 1 (Clean Air) and Goal 4 (Healthy People) of the EPA strategic plan. More specifically, our division conducts research to characterize the movement of pollutants from the source to contact with humans. Our multidisciplinary research program produces Methods, Measurements, and Models to identify relationships between and characterize processes that link source emissions, environmental concentrations, human exposures, and target-tissue dose. The impact of these tools is improved regulatory programs and policies for EPA.

  13. Reliability of three measures of ankle dorsiflexion range of motion.

    Science.gov (United States)

    Konor, Megan M; Morton, Sam; Eckerson, Joan M; Grindstaff, Terry L

    2012-06-01

    A variety of methods exist to measure ankle dorsiflexion range of motion (ROM). Few studies have examined the reliability of a novice rater. The purpose of this study was to determine the reliability of ankle ROM measurements using three different techniques in a novice rater. Twenty healthy subjects (mean±SD, age=24±3 years, height=173.2±8.1 cm, mass=72.6±15.2 kg) participated in this study. Ankle dorsiflexion ROM measures were obtained in a weight-bearing lunge position using a standard goniometer, digital inclinometer, and a tape measure using the distance-to-wall technique. All measures were obtained three times per side, with 10 minutes of rest between the first and second set of measures. Intrarater reliability was determined using an intraclass correlation coefficient (ICC(2,3)) and associated 95% confidence intervals (CI). Standard error of measurement (SEM) and the minimal detectable change (MDC) for each measurement technique were also calculated. The within-session intrarater reliability (ICC(2,3)) estimates for each measure are as follows: tape measure (right 0.98, left 0.99), digital inclinometer (right 0.96; left 0.97), and goniometer (right 0.85; left 0.96). The SEM for the tape measure method ranged from 0.4-0.6 cm and the MDC was between 1.1-1.5 cm. The SEM for the inclinometer was between 1.3-1.4° and the MDC was 3.7-3.8°. The SEM for the goniometer ranged from 1.8-2.8° with an MDC of 5.0-7.7°. The results indicate that reliable measures of weight-bearing ankle dorsiflexion ROM can be obtained from a novice rater. All three techniques had good reliability and low measurement error, with the distance-to-wall technique using a tape measure and inclinometer methods resulting in higher reliability coefficients (ICC(2,3)=0.96 to 0.99) and a lower SEM compared to the goniometer (ICC(2,3)=0.85 to 0.96). Level of evidence: 2b.
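
The SEM and MDC figures quoted above follow from the standard formulas SEM = SD·√(1 − ICC) and MDC95 = 1.96·√2·SEM. A minimal sketch (the SD value below is an illustrative placeholder, not the study's raw data):

```python
import math

def sem(sd, icc):
    """Standard error of measurement from the between-subject SD and ICC."""
    return sd * math.sqrt(1 - icc)

def mdc95(sem_value):
    """Minimal detectable change at 95% confidence (two measurement occasions)."""
    return 1.96 * math.sqrt(2) * sem_value

# Illustrative inclinometer-like values: between-subject SD 7.0 deg, ICC 0.96
s = sem(7.0, 0.96)
print(f"SEM = {s:.2f} deg, MDC95 = {mdc95(s):.2f} deg")
```

With these placeholder inputs the formulas reproduce numbers of the same order as the inclinometer results reported above (SEM ≈ 1.4°, MDC ≈ 3.9°).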

  14. Determining sexual dimorphism in frog measurement data: integration of statistical significance, measurement error, effect size and biological significance

    Directory of Open Access Journals (Sweden)

    Hayek Lee-Ann C.

    2005-01-01

    Full Text Available Several analytic techniques have been used to determine sexual dimorphism in vertebrate morphological measurement data with no emergent consensus on which technique is superior. A further confounding problem for frog data is the existence of considerable measurement error. To determine dimorphism, we examine a single hypothesis (H0: equal means) for two groups (females and males). We demonstrate that frog measurement data meet assumptions for clearly defined statistical hypothesis testing with statistical linear models rather than those of exploratory multivariate techniques such as principal components, correlation or correspondence analysis. In order to distinguish biological from statistical significance of hypotheses, we propose a new protocol that incorporates measurement error and effect size. Measurement error is evaluated with a novel measurement error index. Effect size, widely used in the behavioral sciences and in meta-analysis studies in biology, proves to be the most useful single metric to evaluate whether statistically significant results are biologically meaningful. Definitions for a range of small, medium, and large effect sizes specifically for frog measurement data are provided. Examples with measurement data for species of the frog genus Leptodactylus are presented. The new protocol is recommended not only to evaluate sexual dimorphism for frog data but for any animal measurement data for which the measurement error index and observed or a priori effect sizes can be calculated.
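
Effect size for a two-group (female vs. male) comparison is commonly computed as Cohen's d, the mean difference divided by the pooled standard deviation. A minimal sketch with hypothetical measurement values (not the Leptodactylus data):

```python
import math

def cohens_d(group1, group2):
    """Cohen's d effect size for two independent groups (pooled SD)."""
    n1, n2 = len(group1), len(group2)
    m1 = sum(group1) / n1
    m2 = sum(group2) / n2
    v1 = sum((x - m1) ** 2 for x in group1) / (n1 - 1)
    v2 = sum((x - m2) ** 2 for x in group2) / (n2 - 1)
    pooled_sd = math.sqrt(((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2))
    return (m1 - m2) / pooled_sd

# Hypothetical snout-vent lengths (mm) for females and males
females = [52.1, 54.3, 51.8, 55.0, 53.2]
males = [48.9, 50.1, 49.5, 51.2, 48.7]
print(f"d = {cohens_d(females, males):.2f}")
```

A d near 0.2 is conventionally "small" and near 0.8 "large"; the paper's contribution is recalibrating such thresholds specifically for frog measurement data and weighing them against the measurement error index.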

  15. Proton range verification in homogeneous materials through acoustic measurements

    Science.gov (United States)

    Nie, Wei; Jones, Kevin C.; Petro, Scott; Kassaee, Alireza; Sehgal, Chandra M.; Avery, Stephen

    2018-01-01

    Clinical proton beam quality assurance (QA) requires a simple and accurate method to measure the proton beam Bragg peak (BP) depth. Protoacoustics, the measurement of the pressure waves emitted by thermal expansion resulting from proton dose deposition, may be used to obtain the depth of the BP in a phantom by measuring the time-of-flight of the pressure wave. Rectangular and cylindrical phantoms of different materials (aluminum, lead, and polyethylene) were used for protoacoustic studies. Four different methods for analyzing the protoacoustic signals are compared. Data analysis shows that, for Methods 1 and 2, plastic phantoms have better accuracy than metallic ones because of the lower speed of sound. Method 3 does not require characterizing the speed of sound in the material, but it results in the largest error. Method 4 exhibits minimal error, less than 3 mm (with an uncertainty ⩽1.5 mm) for all the materials and geometries. Pseudospectral wave-equation simulations (k-Wave MATLAB toolbox) are used to understand the origin of acoustic reflections within the phantom. The presented simulations and experiments show that protoacoustic measurements may provide a low cost and simple QA procedure for proton beam range verification as long as the proper phantoms and calculation methods are used.
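
The core of the time-of-flight approach is simple: the Bragg-peak-to-detector distance is the acoustic transit time multiplied by the speed of sound in the phantom. A sketch with hypothetical values; note how the lower speed of sound in water-like plastics makes the distance estimate less sensitive to a fixed timing error, consistent with the plastic-vs-metal comparison above:

```python
def protoacoustic_distance(time_of_flight_us, speed_of_sound_mm_per_us):
    """Bragg-peak-to-detector distance from the pressure wave's
    time of flight (simplified point-source model)."""
    return time_of_flight_us * speed_of_sound_mm_per_us

# Speed of sound: ~1.54 mm/us in water-like media, ~6.32 mm/us in aluminum
tof = 32.5  # us, hypothetical arrival time of the pressure wave
print(f"water-like phantom: {protoacoustic_distance(tof, 1.54):.1f} mm")
print(f"aluminum phantom:   {protoacoustic_distance(tof, 6.32):.1f} mm")
# A 0.5 us timing error costs ~0.8 mm in plastic but ~3.2 mm in aluminum
```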

  16. Error in total ozone measurements arising from aerosol attenuation

    Science.gov (United States)

    Thomas, R. W. L.; Basher, R. E.

    1979-01-01

    A generalized least squares method for deducing both total ozone and aerosol extinction spectrum parameters from Dobson spectrophotometer measurements was developed. An error analysis applied to this system indicates that there is little advantage to additional measurements once a sufficient number of line pairs have been employed to solve for the selected detail in the attenuation model. It is shown that when there is a predominance of small particles (less than about 0.35 microns in diameter) the total ozone from the standard AD system is too high by about one percent. When larger particles are present the derived total ozone may be an overestimate or an underestimate but serious errors occur only for narrow polydispersions.

  17. M/T method based incremental encoder velocity measurement error analysis and self-adaptive error elimination algorithm

    DEFF Research Database (Denmark)

    Chen, Yangyang; Yang, Ming; Long, Jiang

    2017-01-01

    For motor control applications, the speed loop performance largely depends on the accuracy of the speed feedback signal. The M/T method, due to its high theoretical accuracy, is the most widely used in incremental-encoder-based speed measurement. However, the inherent encoder optical grating error and A/D conversion error make it hard to achieve the theoretical speed measurement accuracy. In this paper, hardware-caused speed measurement errors are analyzed and modeled in detail; a Single-Phase Self-adaptive M/T method is proposed to ideally suppress speed measurement error. In the end, simulation......
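
For reference, the M/T method gates a high-frequency clock over a whole number of encoder pulses, so speed follows from two counts. A minimal sketch with hypothetical counter values (the ±1-count quantization of the clock count M2 is one of the hardware error sources such papers analyze):

```python
def mt_speed_rpm(m1_encoder_pulses, m2_clock_pulses, clock_hz, pulses_per_rev):
    """Rotor speed via the M/T method: count M1 encoder pulses and M2
    high-frequency clock pulses over the same gate interval."""
    gate_time_s = m2_clock_pulses / clock_hz        # measured gate duration
    revolutions = m1_encoder_pulses / pulses_per_rev
    return 60.0 * revolutions / gate_time_s

# Hypothetical setup: 2500-line encoder, 10 MHz clock, ~10 ms gate
rpm = mt_speed_rpm(m1_encoder_pulses=417, m2_clock_pulses=100_000,
                   clock_hz=10_000_000, pulses_per_rev=2500)
print(f"{rpm:.1f} rpm")
# Relative quantization error from a +/-1 clock count is ~1/M2 = 1e-5 here
```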

  18. PROCESSING AND ANALYSIS OF THE MEASURED ALIGNMENT ERRORS FOR RHIC.

    Energy Technology Data Exchange (ETDEWEB)

    PILAT,F.; HEMMER,M.; PTITSIN,V.; TEPIKIAN,S.; TRBOJEVIC,D.

    1999-03-29

    All elements of the Relativistic Heavy Ion Collider (RHIC) have been installed in ideal survey locations, which are defined as the optimum locations of the fiducials with respect to the positions generated by the design. The alignment process included the presurvey of all elements which could affect the beams. During this procedure a special attention was paid to the precise determination of the quadrupole centers as well as the roll angles of the quadrupoles and dipoles. After installation the machine has been surveyed and the resulting as-built measured position of the fiducials have been stored and structured in the survey database. We describe how the alignment errors, inferred by comparison of ideal and as-built data, have been processed and analyzed by including them in the RHIC modeling software. The RHIC model, which also includes individual measured errors for all magnets in the machine and is automatically generated from databases, allows the study of the impact of the measured alignment errors on the machine.

  19. Entanglement-enhanced lidars for simultaneous range and velocity measurements

    Science.gov (United States)

    Zhuang, Quntao; Zhang, Zheshen; Shapiro, Jeffrey H.

    2017-10-01

    Lidar is a well-known optical technology for measuring a target's range and radial velocity. We describe two lidar systems that use entanglement between transmitted signals and retained idlers to obtain significant quantum enhancements in simultaneous measurements of these parameters. The first entanglement-enhanced lidar circumvents the Arthurs-Kelly uncertainty relation for simultaneous measurements of range and radial velocity from the detection of a single photon returned from the target. This performance presumes there is no extraneous (background) light, but is robust to the round-trip loss incurred by the signal photons. The second entanglement-enhanced lidar—which requires a lossless, noiseless environment—realizes Heisenberg-limited accuracies for both its range and radial-velocity measurements, i.e., their root-mean-square estimation errors are both proportional to 1/M when M signal photons are transmitted. These two lidars derive their entanglement-based enhancements from the use of a unitary transformation that takes a signal-idler photon pair with frequencies ωS and ωI and converts it to a signal-idler photon pair whose frequencies are (ωS+ωI)/2 and (ωS-ωI)/2. Insight into how this transformation provides its benefits is provided through an analogy to continuous-variable superdense coding.
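
The scaling claim can be made concrete: a classical (standard-quantum-limit) estimator's rms error falls as 1/√M with M transmitted photons, while the Heisenberg-limited lidar's falls as 1/M, so the quantum advantage grows as √M:

```python
import math

def sql_error(m):
    """Standard quantum limit: rms error scales as 1/sqrt(M) for M photons."""
    return 1.0 / math.sqrt(m)

def heisenberg_error(m):
    """Heisenberg limit: rms error scales as 1/M for M photons."""
    return 1.0 / m

for m in (10, 100, 1000):
    # The ratio (quantum advantage factor) is sqrt(M)
    print(m, sql_error(m) / heisenberg_error(m))
```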

  20. Assessment of measurement errors and dynamic calibration methods for three different tipping bucket rain gauges

    Science.gov (United States)

    Three different models of tipping bucket rain gauges (TBRs), viz. HS-TB3 (Hydrological Services Pty Ltd), ISCO-674 (Isco, Inc.) and TR-525 (Texas Electronics, Inc.), were calibrated in the lab to quantify measurement errors across a range of rainfall intensities (5 mm·h⁻¹ to 250 mm·h⁻¹) and three di...
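
Dynamic calibration of a TBR typically fits a correction curve mapping the gauge reading to the reference intensity, since undercatch grows with rainfall rate (water spills while the bucket tips). A sketch with synthetic calibration data; the undercatch coefficient below is invented for illustration:

```python
import numpy as np

# Hypothetical lab calibration: reference intensity (mm/h) vs. TBR reading.
ref = np.array([5, 25, 50, 100, 150, 200, 250], dtype=float)
measured = ref * (1 - 0.0002 * ref)   # synthetic undercatch growing with rate

# Fit a dynamic calibration curve: true intensity as a function of the reading
coef = np.polyfit(measured, ref, deg=2)
corrected = np.polyval(coef, measured)

raw_err = np.max(np.abs(measured - ref) / ref) * 100
cal_err = np.max(np.abs(corrected - ref) / ref) * 100
print(f"max relative error: raw {raw_err:.1f}% -> calibrated {cal_err:.2f}%")
```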

  1. Measurement error in CT assessment of appendix diameter

    Energy Technology Data Exchange (ETDEWEB)

    Trout, Andrew T.; Towbin, Alexander J. [Cincinnati Children's Hospital Medical Center, Department of Radiology, MLC 5031, Cincinnati, OH (United States); Zhang, Bin [Cincinnati Children's Hospital Medical Center, Department of Biostatistics and Epidemiology, Cincinnati, OH (United States)

    2016-12-15

    Appendiceal diameter continues to be cited as an important criterion for diagnosis of appendicitis by computed tomography (CT). To assess sources of error and variability in appendiceal diameter measurements by CT. In this institutional review board-approved review of imaging and medical records, we reviewed CTs performed in children <18 years of age between Jan. 1 and Dec. 31, 2010. Appendiceal diameter was measured in the axial and coronal planes by two reviewers (R1, R2). One year later, 10% of cases were remeasured. For patients who had multiple CTs, serial measurements were made to assess within patient variability. Measurement differences between planes, within and between reviewers, within patients and between CT and pathological measurements were assessed using correlation coefficients and paired t-tests. Six hundred thirty-one CTs performed in 519 patients (mean age: 10.9 ± 4.9 years, 50.8% female) were reviewed. Axial and coronal measurements were strongly correlated (r = 0.92-0.94, P < 0.0001) with coronal plane measurements significantly larger (P < 0.0001). Measurements were strongly correlated between reviewers (r = 0.89-0.9, P < 0.0001) but differed significantly in both planes (axial: +0.2 mm, P=0.003; coronal: +0.1 mm, P=0.007). Repeat measurements were significantly different for one reviewer only in the axial plane (0.3 mm difference, P<0.05). Within patients imaged multiple times, measured appendix diameters differed significantly in the axial plane for both reviewers (R1: 0.5 mm, P = 0.031; R2: 0.7 mm, P = 0.022). Multiple potential sources of measurement error raise concern about the use of rigid diameter cutoffs for the diagnosis of acute appendicitis by CT. (orig.)
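
Plane-to-plane bias of the kind reported above (coronal readings slightly larger than axial) is assessed with a paired t-test on per-patient differences. A minimal sketch with hypothetical diameters, not the study's data:

```python
import math

def paired_t(a, b):
    """Paired t statistic and mean difference for two sets of measurements
    made on the same subjects."""
    diffs = [x - y for x, y in zip(a, b)]
    n = len(diffs)
    mean_d = sum(diffs) / n
    var_d = sum((d - mean_d) ** 2 for d in diffs) / (n - 1)
    return mean_d / math.sqrt(var_d / n), mean_d

# Hypothetical appendix diameters (mm); coronal tends to read slightly larger
axial   = [6.1, 7.0, 5.8, 8.2, 6.6, 7.4, 5.9, 6.8]
coronal = [6.3, 7.2, 6.0, 8.3, 6.9, 7.5, 6.1, 7.1]
t, bias = paired_t(coronal, axial)
print(f"systematic bias = {bias:+.2f} mm, t = {t:.2f}")
```

Even a small but consistent bias (here +0.2 mm) becomes statistically significant with enough patients, which is exactly why rigid diameter cutoffs are fragile.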

  2. Error reduction techniques for measuring long synchrotron mirrors

    Energy Technology Data Exchange (ETDEWEB)

    Irick, S.

    1998-07-01

    Many instruments and techniques are used for measuring long mirror surfaces. A Fizeau interferometer may be used to measure mirrors much longer than the interferometer aperture size by using grazing incidence at the mirror surface and analyzing the light reflected from a flat end mirror. Advantages of this technique are data acquisition speed and use of a common instrument. Disadvantages are reduced sampling interval, uncertainty of tangential position, and sagittal/tangential aspect ratio other than unity. Also, deep aspheric surfaces cannot be measured on a Fizeau interferometer without a specially made fringe nulling holographic plate. Other scanning instruments have been developed for measuring height, slope, or curvature profiles of the surface, but lack accuracy for very long scans required for X-ray synchrotron mirrors. The Long Trace Profiler (LTP) was developed specifically for long x-ray mirror measurement, and still outperforms other instruments, especially for aspheres. Thus, this paper focuses on error reduction techniques for the LTP.

  3. Measurement error analysis for polarization extinction ratio of multifunctional integrated optic chips.

    Science.gov (United States)

    Zhang, Haoliang; Yang, Jun; Li, Chuang; Yu, Zhangjun; Yang, Zhe; Yuan, Yonggui; Peng, Feng; Li, Hanyang; Hou, Changbo; Zhang, Jianzhong; Yuan, Libo; Xu, Jianming; Zhang, Chao; Yu, Quanfu

    2017-08-20

    Measurement error for the polarization extinction ratio (PER) of a multifunctional integrated optic chip (MFIOC) utilizing white light interferometry was analyzed. Three influence factors derived from the all-fiber device (or optical circuit) under test were demonstrated to be the main error sources, including: 1) the axis-alignment angle (AA) of the connection point between the extended polarization-maintaining fiber (PMF) and the chip PMF pigtail; 2) the oriented angle (OA) of the linear polarizer; and 3) the birefringence dispersion of the PMF and the MFIOC chip. Theoretical calculations and experimental results indicated that by controlling the AA range within 0°±5°, the OA range within 45°±2° and combining with a dispersion compensation process, the maximal PER measurement error can be limited to under 1.4 dB, with a 3σ uncertainty of 0.3 dB. The variations of the birefringence dispersion effect versus PMF length were also discussed to further confirm the validity of dispersion compensation. A MFIOC with a PER of ∼50 dB was experimentally tested, and the total measurement error was calculated to be ∼0.7 dB, which proved the effectiveness of the proposed error reduction methods. We believe that these methods are able to facilitate high-accuracy PER measurement.
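
As rough intuition for why the axis-alignment (AA) angle matters, a splice misaligned by θ couples sin²θ of the power into the unwanted polarization axis. The simple total-power model below deliberately ignores the white-light interferometry that lets the method separate these contributions in the delay domain (which is why the paper's actual error stays under 1.4 dB); it is only a worst-case sketch:

```python
import math

def apparent_per_db(true_per_db, misalignment_deg):
    """Apparent PER if splice crosstalk (sin^2 of the misalignment angle)
    simply added to the device's own crosstalk power - a crude bound,
    not the paper's interferometric analysis."""
    eps = 10 ** (-true_per_db / 10)          # device's own crosstalk power
    t = math.radians(misalignment_deg)
    leak = math.sin(t) ** 2 + eps * math.cos(t) ** 2
    return -10 * math.log10(leak)

for aa in (0.0, 1.0, 5.0):
    print(f"AA = {aa} deg -> apparent PER = {apparent_per_db(50, aa):.1f} dB")
```

The steep degradation in this naive model shows why the delay-domain separation of white-light interferometry (plus dispersion compensation) is essential to measure a ∼50 dB PER at all.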

  4. Measurements of stem diameter: implications for individual- and stand-level errors.

    Science.gov (United States)

    Paul, Keryn I; Larmour, John S; Roxburgh, Stephen H; England, Jacqueline R; Davies, Micah J; Luck, Hamish D

    2017-08-01

    Stem diameter is one of the most common measurements made to assess the growth of woody vegetation, and the commercial and environmental benefits that it provides (e.g. wood or biomass products, carbon sequestration, landscape remediation). Yet inconsistency in its measurement is a continuing source of error in estimates of stand-scale measures such as basal area, biomass, and volume. Here we assessed errors in stem diameter measurement through repeated measurements of individual trees and shrubs of varying size and form (i.e. single- and multi-stemmed) across a range of contrasting stands, from complex mixed-species plantings to commercial single-species plantations. We compared a standard diameter tape with a Stepped Diameter Gauge (SDG) for time efficiency and measurement error. Measurement errors in diameter were slightly (but significantly) influenced by size and form of the tree or shrub, and stem height at which the measurement was made. Compared to standard tape measurement, the mean systematic error with SDG measurement was only -0.17 cm, but varied between -0.10 and -0.52 cm. Similarly, random error was relatively large, with standard deviations (and percentage coefficients of variation) averaging only 0.36 cm (and 3.8%), but varying between 0.14 and 0.61 cm (and 1.9 and 7.1%). However, at the stand scale, sampling errors (i.e. how well individual trees or shrubs selected for measurement of diameter represented the true stand population in terms of the average and distribution of diameter) generally had at least a tenfold greater influence on random errors in basal area estimates than errors in diameter measurements. This supports the use of diameter measurement tools that have high efficiency, such as the SDG. Use of the SDG almost halved the time required for measurements compared to the diameter tape. Based on these findings, recommendations include the following: (i) use of a tape to maximise accuracy when developing allometric models, or when

  5. Functional multiple indicators, multiple causes measurement error models.

    Science.gov (United States)

    Tekwe, Carmen D; Zoh, Roger S; Bazer, Fuller W; Wu, Guoyao; Carroll, Raymond J

    2017-05-08

    Objective measures of oxygen consumption and carbon dioxide production by mammals are used to predict their energy expenditure. Since energy expenditure is not directly observable, it can be viewed as a latent construct with multiple physical indirect measures such as respiratory quotient, volumetric oxygen consumption, and volumetric carbon dioxide production. Metabolic rate is defined as the rate at which metabolism occurs in the body. Metabolic rate is also not directly observable. However, heat is produced as a result of metabolic processes within the body. Therefore, metabolic rate can be approximated by heat production plus some errors. While energy expenditure and metabolic rates are correlated, they are not equivalent. Energy expenditure results from physical function, while metabolism can occur within the body without the occurrence of physical activities. In this manuscript, we present a novel approach for studying the relationship between metabolic rate and indicators of energy expenditure. We do so by extending our previous work on MIMIC ME models to allow responses that are sparsely observed functional data, defining the sparse functional multiple indicators, multiple cause measurement error (FMIMIC ME) models. The mean curves in our proposed methodology are modeled using basis splines. A novel approach for estimating the variance of the classical measurement error based on functional principal components is presented. The model parameters are estimated using the EM algorithm and a discussion of the model's identifiability is provided. We show that the defined model is not a trivial extension of longitudinal or functional data methods, due to the presence of the latent construct. Results from its application to data collected on Zucker diabetic fatty rats are provided. Simulation results investigating the properties of our approach are also presented. © 2017, The International Biometric Society.

  6. Ultrasonic range measurements on the human body

    NARCIS (Netherlands)

    Weenk, D.; van Beijnum, Bernhard J.F.; Droog, Adriaan; Hermens, Hermanus J.; Veltink, Petrus H.

    2013-01-01

    Ambulatory range estimation on the human body is important for the assessment of the performance of upper- and lower limb tasks outside a laboratory. In this paper an ultrasound sensor for estimating ranges on the human body is presented and validated during gait. The distance between the feet is

  7. System Error Compensation Methodology Based on a Neural Network for a Micromachined Inertial Measurement Unit

    Directory of Open Access Journals (Sweden)

    Shi Qiang Liu

    2016-01-01

    Full Text Available Errors compensation of micromachined-inertial-measurement-units (MIMU) is essential in practical applications. This paper presents a new compensation method using a neural-network-based identification for MIMU, which capably solves the universal problems of cross-coupling, misalignment, eccentricity, and other deterministic errors existing in a three-dimensional integrated system. Using a neural network to model a complex multivariate and nonlinear coupling system, the errors could be readily compensated through a comprehensive calibration. In this paper, we also present a thermal-gas MIMU based on thermal expansion, which measures three-axis angular rates and three-axis accelerations using only three thermal-gas inertial sensors, each of which capably measures one-axis angular rate and one-axis acceleration simultaneously in one chip. The developed MIMU (100 × 100 × 100 mm³) possesses the advantages of simple structure, high shock resistance, and large measuring ranges (three-axis angular rates of ±4000°/s and three-axis accelerations of ±10 g) compared with conventional MIMU, due to using gas medium instead of mechanical proof mass as the key moving and sensing elements. However, the gas MIMU suffers from cross-coupling effects, which corrupt the system accuracy. The proposed compensation method is, therefore, applied to compensate the system errors of the MIMU. Experiments validate the effectiveness of the compensation, and the measurement errors of three-axis angular rates and three-axis accelerations are reduced to less than 1% and 3% of uncompensated errors in the rotation range of ±600°/s and the acceleration range of ±1 g, respectively.

  8. System Error Compensation Methodology Based on a Neural Network for a Micromachined Inertial Measurement Unit.

    Science.gov (United States)

    Liu, Shi Qiang; Zhu, Rong

    2016-01-29

    Errors compensation of micromachined-inertial-measurement-units (MIMU) is essential in practical applications. This paper presents a new compensation method using a neural-network-based identification for MIMU, which capably solves the universal problems of cross-coupling, misalignment, eccentricity, and other deterministic errors existing in a three-dimensional integrated system. Using a neural network to model a complex multivariate and nonlinear coupling system, the errors could be readily compensated through a comprehensive calibration. In this paper, we also present a thermal-gas MIMU based on thermal expansion, which measures three-axis angular rates and three-axis accelerations using only three thermal-gas inertial sensors, each of which capably measures one-axis angular rate and one-axis acceleration simultaneously in one chip. The developed MIMU (100 × 100 × 100 mm³) possesses the advantages of simple structure, high shock resistance, and large measuring ranges (three-axes angular rates of ±4000°/s and three-axes accelerations of ± 10 g) compared with conventional MIMU, due to using gas medium instead of mechanical proof mass as the key moving and sensing elements. However, the gas MIMU suffers from cross-coupling effects, which corrupt the system accuracy. The proposed compensation method is, therefore, applied to compensate the system errors of the MIMU. Experiments validate the effectiveness of the compensation, and the measurement errors of three-axis angular rates and three-axis accelerations are reduced to less than 1% and 3% of uncompensated errors in the rotation range of ±600°/s and the acceleration range of ± 1 g, respectively.
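
The cross-coupling compensation idea can be sketched with a linear least-squares calibration standing in for the paper's neural network (a linear model suffices here because the synthetic coupling is itself linear; the NN is needed for the nonlinear terms a real MIMU exhibits). All data below are synthetic:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic MIMU: true 6-axis inputs vs. raw outputs with cross-coupling + bias
n = 500
true = rng.uniform(-1, 1, size=(n, 6))            # normalized rates + accels
coupling = np.eye(6) + 0.05 * rng.standard_normal((6, 6))
bias = 0.02 * rng.standard_normal(6)
raw = true @ coupling.T + bias

# Least-squares affine calibration (linear stand-in for the neural network)
X = np.hstack([raw, np.ones((n, 1))])
W, *_ = np.linalg.lstsq(X, true, rcond=None)
compensated = X @ W

raw_rms = np.sqrt(np.mean((raw - true) ** 2))
cal_rms = np.sqrt(np.mean((compensated - true) ** 2))
print(f"rms error: raw {raw_rms:.4f} -> compensated {cal_rms:.6f}")
```

Because the synthetic errors are exactly affine, the fit removes nearly all of them; the paper's neural network plays the same role for the nonlinear, multivariate coupling of the thermal-gas sensors.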

  9. Analysis of Measured Workpiece's Form Errors Influence on the Accuracy of Probe Heads Used on Five-Axis Measuring Systems

    Directory of Open Access Journals (Sweden)

    Wiktor Harmatys

    2017-12-01

    Full Text Available The five-axis measuring systems are one of the most modern inventions in coordinate measuring technique. They are capable of performing measurements using only the rotary pairs present in their kinematic structure. This possibility is very useful because it may significantly reduce total measurement time and cost. However, it was noted that high values of the measured workpiece's form errors may cause a significant reduction of five-axis measuring system accuracy. The relation between these two parameters was investigated in this paper, and possible reasons for the decrease in measurement accuracy were discussed using example measurements of workpieces with form errors ranging from 0.5 to 1.7 mm.

  10. Proportional Hazards Model with Covariate Measurement Error and Instrumental Variables.

    Science.gov (United States)

    Song, Xiao; Wang, Ching-Yun

    2014-12-01

    In biomedical studies, covariates with measurement error may occur in survival data. Existing approaches mostly require certain replications on the error-contaminated covariates, which may not be available in the data. In this paper, we develop a simple nonparametric correction approach for estimation of the regression parameters in the proportional hazards model using a subset of the sample where instrumental variables are observed. The instrumental variables are related to the covariates through a general nonparametric model, and no distributional assumptions are placed on the error and the underlying true covariates. We further propose a novel generalized methods of moments nonparametric correction estimator to improve the efficiency over the simple correction approach. The efficiency gain can be substantial when the calibration subsample is small compared to the whole sample. The estimators are shown to be consistent and asymptotically normal. Performance of the estimators is evaluated via simulation studies and by an application to data from an HIV clinical trial. Estimation of the baseline hazard function is not addressed.

  11. Circular Array of Magnetic Sensors for Current Measurement: Analysis for Error Caused by Position of Conductor.

    Science.gov (United States)

    Yu, Hao; Qian, Zheng; Liu, Huayi; Qu, Jiaqi

    2018-02-14

    This paper analyzes the measurement error, caused by the position of the current-carrying conductor, of a circular array of magnetic sensors for current measurement. The circular array of magnetic sensors is an effective approach for AC or DC non-contact measurement, as it is low-cost, light-weight, has a large linear range, wide bandwidth, and low noise. In particular, it has been claimed that such a structure has excellent ability to reduce errors caused by the position of the current-carrying conductor, crosstalk current interference, the shape of the conductor cross-section, and the Earth's magnetic field. However, the positions of the current-carrying conductor, including un-centeredness and un-perpendicularity, have not been analyzed in detail until now. In this paper, for the purpose of achieving minimum measurement error, a theoretical analysis has been proposed based on vector inner and exterior products. In the presented mathematical model of relative error, the un-center offset distance, the un-perpendicular angle, the radius of the circle, and the number of magnetic sensors are expressed in one equation. The relative error caused by the position of the current-carrying conductor is compared between four and eight sensors. Tunnel magnetoresistance (TMR) sensors are used in the experimental prototype to verify the mathematical model. The analysis results can serve as a reference for designing the details of a circular array of magnetic sensors for current measurement in practical situations.
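
The conductor-position robustness of the sensor ring comes from the array approximating Ampère's law: averaging the tangential field over N equally spaced sensors recovers the enclosed current, with a residual error that shrinks rapidly with N. A 2-D sketch for an infinite straight conductor (illustrative geometry, not the paper's model):

```python
import numpy as np

MU0 = 4e-7 * np.pi  # vacuum permeability, H/m

def estimated_current(n_sensors, radius, conductor_xy, current):
    """Estimate I from tangential-field samples of n sensors on a circle,
    for an infinite straight conductor at conductor_xy (2-D model)."""
    phi = 2 * np.pi * np.arange(n_sensors) / n_sensors
    pos = radius * np.stack([np.cos(phi), np.sin(phi)], axis=1)
    r = pos - np.asarray(conductor_xy)          # conductor-to-sensor vectors
    r2 = np.sum(r ** 2, axis=1)
    # Field of a line current: B = mu0*I/(2*pi*|r|^2) * (-ry, rx)
    b = MU0 * current / (2 * np.pi * r2[:, None]) \
        * np.stack([-r[:, 1], r[:, 0]], axis=1)
    tang = np.stack([-np.sin(phi), np.cos(phi)], axis=1)  # tangential units
    bt = np.sum(b * tang, axis=1)
    return np.mean(bt) * 2 * np.pi * radius / MU0  # discrete Ampere's law

I = 100.0
centered = estimated_current(8, 0.05, (0.0, 0.0), I)
offset = estimated_current(8, 0.05, (0.015, 0.0), I)  # conductor 30% off-center
print(f"centered: {centered:.6f} A, off-center relative error: "
      f"{abs(offset - I) / I:.2e}")
```

For a centered conductor the discrete sum is exact; off-center, the leading error term scales like (d/R)^N, which is why eight sensors beat four.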

  12. Longitudinal changes in cardiorespiratory fitness: measurement error or true change?

    Science.gov (United States)

    Jackson, Andrew S; Kampert, James B; Barlow, Carolyn E; Morrow, James R; Church, Timothy S; Blair, Steven N

    2004-07-01

    This study examined the thesis that the reported Aerobics Center Longitudinal Study (ACLS) mortality reductions associated with improved cardiorespiratory fitness were because of measurement error of serial treadmill tests. We tested the research hypothesis that longitudinal changes in cardiorespiratory fitness of the ACLS cohort were a multivariate function of changes in self-report physical activity (SR-PA), resting heart rate, and body mass index (BMI). We used the results of three serial maximal treadmill tests (T1, T2, and T3) to evaluate the serial changes in cardiorespiratory fitness of 4675 men. The mean duration between the three serial tests examined was: T2 - T1, 1.9 yr; T3 - T2, 6.1 yr; and T3 - T1, 8.0 yr. Maximum and resting heart rate, BMI, SR-PA, and maximum Balke treadmill duration were measured on each occasion. General linear models analysis showed that, with change in maximum heart rate statistically controlled, change in treadmill time performance was a function of independent changes in SR-PA, BMI, and resting heart rate. These variables accounted for significant variance in the serial changes; men who increased SR-PA and decreased BMI and resting heart rate gained the most fitness between serial tests. These results support the research hypothesis tested. Variations in serial ACLS treadmill tests are not just due to measurement error alone, but also to systematic variation linked with changes in lifestyle.

  13. Development of an Abbe Error Free Micro Coordinate Measuring Machine

    Directory of Open Access Journals (Sweden)

    Qiangxian Huang

    2016-04-01

    Full Text Available A micro Coordinate Measuring Machine (CMM) with a measurement volume of 50 mm × 50 mm × 50 mm and measuring accuracy of about 100 nm (2σ) has been developed. In this new micro CMM, an XYZ stage, which is driven by three piezo-motors in the X, Y and Z directions, can achieve a drive resolution of about 1 nm and a stroke of more than 50 mm. In order to reduce the crosstalk among the X-, Y- and Z-stages, a special mechanical structure, which is called a co-planar stage, is introduced. The movement of the stage in each direction is detected by a laser interferometer. A contact type of probe is adopted for measurement. The center of the probe ball coincides with the intersection point of the measuring axes of the three laser interferometers. Therefore, the metrological system of the CMM obeys the Abbe principle in three directions and is free from Abbe error. The CMM is placed in an anti-vibration and thermostatic chamber to avoid the influence of vibration and temperature fluctuation. A series of experimental results show that the measurement uncertainty within 40 mm along the X, Y and Z directions is about 100 nm (2σ). The flatness of the measuring face of a gauge block was also measured, verifying the performance of the developed micro CMM.
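
The benefit of the Abbe-error-free layout can be quantified with the first-order Abbe formula: a stage tilt θ combined with an offset L between the measurement axis and the probing point produces an error L·tan θ, which vanishes when the interferometer axes intersect at the probe ball. Illustrative numbers:

```python
import math

def abbe_error(offset_m, tilt_arcsec):
    """First-order Abbe error: measurement-axis offset times the stage tilt."""
    tilt_rad = tilt_arcsec * math.pi / (180 * 3600)
    return offset_m * math.tan(tilt_rad)

# A 10 arcsec pitch error with a 50 mm Abbe offset; a zero-offset design
# (axes intersecting at the probe ball) makes this term vanish entirely.
err = abbe_error(0.050, 10)
print(f"Abbe error: {err * 1e9:.0f} nm")
```

The ~2.4 µm error in this example dwarfs the machine's 100 nm uncertainty budget, which is why the co-planar, axis-intersecting design matters.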

  14. Development of Algorithms and Error Analyses for the Short Baseline Lightning Detection and Ranging System

    Science.gov (United States)

    Starr, Stanley O.

    1998-01-01

    NASA, at the John F. Kennedy Space Center (KSC), developed and operates a unique high-precision lightning location system to provide lightning-related weather warnings. These warnings are used to stop lightning- sensitive operations such as space vehicle launches and ground operations where equipment and personnel are at risk. The data is provided to the Range Weather Operations (45th Weather Squadron, U.S. Air Force) where it is used with other meteorological data to issue weather advisories and warnings for Cape Canaveral Air Station and KSC operations. This system, called Lightning Detection and Ranging (LDAR), provides users with a graphical display in three dimensions of 66 megahertz radio frequency events generated by lightning processes. The locations of these events provide a sound basis for the prediction of lightning hazards. This document provides the basis for the design approach and data analysis for a system of radio frequency receivers to provide azimuth and elevation data for lightning pulses detected simultaneously by the LDAR system. The intent is for this direction-finding system to correct and augment the data provided by LDAR and, thereby, increase the rate of valid data and to correct or discard any invalid data. This document develops the necessary equations and algorithms, identifies sources of systematic errors and means to correct them, and analyzes the algorithms for random error. This data analysis approach is not found in the existing literature and was developed to facilitate the operation of this Short Baseline LDAR (SBLDAR). These algorithms may also be useful for other direction-finding systems using radio pulses or ultrasonic pulse data.
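
    As a hedged illustration of short-baseline direction finding in general (not the SBLDAR algorithms themselves, which the report derives in full): two receivers separated by a baseline d observing a pulse with arrival-time difference Δt yield an angle of arrival θ = arcsin(cΔt/d). The baseline and delay below are hypothetical.

    ```python
    import math

    C = 299_792_458.0  # speed of light, m/s

    def arrival_angle(delta_t_s: float, baseline_m: float) -> float:
        """Angle of arrival (degrees from broadside) from a two-antenna time difference."""
        s = C * delta_t_s / baseline_m
        if abs(s) > 1.0:
            raise ValueError("delay inconsistent with baseline length")
        return math.degrees(math.asin(s))

    # A pulse arriving 10 ns later at the far antenna of a 10 m baseline
    angle = arrival_angle(10e-9, 10.0)  # ~17.46 degrees off broadside
    ```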

  15. Error sources in atomic force microscopy for dimensional measurements: Taxonomy and modeling

    DEFF Research Database (Denmark)

    Marinello, F.; Voltan, A.; Savio, E.

    2010-01-01

    This paper aimed at identifying the error sources that occur in dimensional measurements performed using atomic force microscopy. In particular, a set of characterization techniques for errors quantification is presented. The discussion on error sources is organized in four main categories: scanning system, tip-surface interaction, environment, and data processing. The discussed errors include scaling effects, squareness errors, hysteresis, creep, tip convolution, and thermal drift. A mathematical model of the measurement system is eventually described, as a reference basis for errors …

  16. Development of a simple system for simultaneously measuring 6DOF geometric motion errors of a linear guide.

    Science.gov (United States)

    Qibo, Feng; Bin, Zhang; Cunxing, Cui; Cuifang, Kuang; Yusheng, Zhai; Fenglin, You

    2013-11-04

    A simple method for simultaneously measuring the 6DOF geometric motion errors of the linear guide was proposed. The mechanisms for measuring straightness and angular errors and for enhancing their resolution are described in detail. A common-path method for measuring the laser beam drift was proposed and it was used to compensate the errors produced by the laser beam drift in the 6DOF geometric error measurements. A compact 6DOF system was built. Calibration experiments with certain standard measurement meters showed that our system has a standard deviation of 0.5 µm in a range of ± 100 µm for the straightness measurements, and standard deviations of 0.5", 0.5", and 1.0" in the range of ± 100" for pitch, yaw, and roll measurements, respectively.

  17. Compensation of errors due to incident beam drift in a 3 DOF measurement system for linear guide motion.

    Science.gov (United States)

    Hu, Pengcheng; Mao, Shuai; Tan, Jiu-Bin

    2015-11-02

    A measurement system with three degrees of freedom (3 DOF) that compensates for errors caused by incident beam drift is proposed. The system's measurement model (i.e. its mathematical foundation) is analyzed, and a measurement module (i.e. the designed orientation measurement unit) is developed and adopted to measure simultaneously straightness errors and the incident beam direction; thus, the errors due to incident beam drift can be compensated. The experimental results show that the proposed system has a deviation of 1 μm in the range of 200 mm for distance measurements, and a deviation of 1.3 μm in the range of 2 mm for straightness error measurements.

  18. Effects of cosine error in irradiance measurements from field ocean color radiometers.

    Science.gov (United States)

    Zibordi, Giuseppe; Bulgarelli, Barbara

    2007-08-01

    The cosine error of in situ seven-channel radiometers designed to measure the in-air downward irradiance for ocean color applications was investigated in the 412-683 nm spectral range with a sample of three instruments. The interchannel variability of cosine errors showed values generally lower than +/-3% below 50 degrees incidence angle with extreme values of approximately 4-20% (absolute) at 50-80 degrees for the channels at 412 and 443 nm. The intrachannel variability, estimated from the standard deviation of the cosine errors of different sensors for each center wavelength, displayed values generally lower than 2% for incidence angles up to 50 degrees and occasionally increasing up to 6% at 80 degrees. Simulations of total downward irradiance measurements, accounting for average angular responses of the investigated radiometers, were made with an accurate radiative transfer code. The estimated errors showed a significant dependence on wavelength, sun zenith, and aerosol optical thickness. For a clear sky maritime atmosphere, these errors displayed values spectrally varying and generally within +/-3%, with extreme values of approximately 4-10% (absolute) at 40-80 degrees sun zenith for the channels at 412 and 443 nm. Schemes for minimizing the cosine errors have also been proposed and discussed.
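
    The cosine error quantified in this record is the deviation of a collector's angular response from the ideal cosine law, f(θ) = R(θ)/(R(0)·cos θ) − 1. A minimal sketch with hypothetical response values (not the measured responses from the three instruments):

    ```python
    import math

    def cosine_error(resp_theta: float, resp_0: float, theta_deg: float) -> float:
        """Fractional cosine error: measured angular response vs the ideal cosine law."""
        ideal = resp_0 * math.cos(math.radians(theta_deg))
        return resp_theta / ideal - 1.0

    # Hypothetical collector: response 0.49 at 60 deg incidence, where an ideal
    # cosine collector with R(0) = 1.0 would read 0.5  ->  -2% cosine error
    err = cosine_error(0.49, 1.0, 60.0)
    ```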

  19. Long-range temporal correlations in resting-state α oscillations predict human timing-error dynamics

    NARCIS (Netherlands)

    Smit, D.J.A.; Linkenkaer-Hansen, K.; de Geus, E.J.C.

    2013-01-01

    Human behavior is imperfect. This is notably clear during repetitive tasks in which sequences of errors or deviations from perfect performance result. These errors are not random, but show patterned fluctuations with long-range temporal correlations that are well described using power-law spectra …

  20. Aerogel Antennas Communications Study Using Error Vector Magnitude Measurements

    Science.gov (United States)

    Miranda, Felix A.; Mueller, Carl H.; Meador, Mary Ann B.

    2014-01-01

    This presentation discusses an aerogel antennas communication study using error vector magnitude (EVM) measurements. The study was performed using 2x4 element polyimide (PI) aerogel-based phased arrays designed for operation at 5 GHz as transmit (Tx) and receive (Rx) antennas separated by a line of sight (LOS) distance of 8.5 meters. The results of the EVM measurements demonstrate that polyimide aerogel antennas work appropriately to support digital communication links with typically used modulation schemes such as QPSK and 4 DQPSK. As such, PI aerogel antennas with higher gain, larger bandwidth and lower mass than typically used microwave laminates could be suitable to enable aerospace-to-ground communication links with enough channel capacity to support voice, data and video links from CubeSats, unmanned air vehicles (UAV), and commercial aircraft.
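
    EVM itself is straightforward to compute from constellation samples: the RMS error vector between received and ideal symbols, normalized by the RMS reference amplitude. A sketch with hypothetical QPSK symbols (not data from the aerogel-antenna study):

    ```python
    import math

    def evm_percent(received, reference):
        """RMS error vector magnitude, in percent of the RMS reference amplitude."""
        err = sum(abs(r - s) ** 2 for r, s in zip(received, reference))
        ref = sum(abs(s) ** 2 for s in reference)
        return 100.0 * math.sqrt(err / ref)

    # Hypothetical QPSK burst: ideal symbols and slightly perturbed receptions
    ideal = [1 + 1j, -1 + 1j, -1 - 1j, 1 - 1j]
    rx = [1.05 + 0.95j, -1.0 + 1.1j, -0.9 - 1.0j, 1.0 - 1.05j]
    evm = evm_percent(rx, ideal)  # ~5.86 %
    ```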

  1. Bayesian adjustment for covariate measurement errors: a flexible parametric approach.

    Science.gov (United States)

    Hossain, Shahadut; Gustafson, Paul

    2009-05-15

    In most epidemiological investigations, the study units are people, the outcome variable (or the response) is a health-related event, and the explanatory variables are usually environmental and/or socio-demographic factors. The fundamental task in such investigations is to quantify the association between the explanatory variables (covariates/exposures) and the outcome variable through a suitable regression model. The accuracy of such quantification depends on how precisely the relevant covariates are measured. In many instances, we cannot measure some of the covariates accurately. Rather, we can measure noisy (mismeasured) versions of them. In statistical terminology, mismeasurement in continuous covariates is known as measurement errors or errors-in-variables. Regression analyses based on mismeasured covariates lead to biased inference about the true underlying response-covariate associations. In this paper, we suggest a flexible parametric approach for avoiding this bias when estimating the response-covariate relationship through a logistic regression model. More specifically, we consider the flexible generalized skew-normal and the flexible generalized skew-t distributions for modeling the unobserved true exposure. For inference and computational purposes, we use Bayesian Markov chain Monte Carlo techniques. We investigate the performance of the proposed flexible parametric approach in comparison with a common flexible parametric approach through extensive simulation studies. We also compare the proposed method with the competing flexible parametric method on a real-life data set. Though emphasis is put on the logistic regression model, the proposed method is unified and is applicable to the other generalized linear models, and to other types of non-linear regression models as well. (c) 2009 John Wiley & Sons, Ltd.
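
    The bias the authors address can be seen in the simplest classical-error case: regressing an outcome on a noisily measured covariate attenuates the slope by the reliability ratio λ = σx²/(σx² + σu²). A stdlib-only simulation with hypothetical parameters (linear rather than logistic, for transparency; the correction shown is regression-calibration style, not the paper's Bayesian approach):

    ```python
    import random
    import statistics

    random.seed(1)
    beta, sigma_x, sigma_u = 2.0, 1.0, 1.0  # true slope, exposure SD, error SD
    x = [random.gauss(0, sigma_x) for _ in range(20000)]
    y = [beta * xi + random.gauss(0, 0.5) for xi in x]   # outcome from true exposure
    w = [xi + random.gauss(0, sigma_u) for xi in x]      # mismeasured exposure

    def ols_slope(xs, ys):
        """Ordinary least-squares slope of ys on xs."""
        mx, my = statistics.fmean(xs), statistics.fmean(ys)
        num = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
        den = sum((a - mx) ** 2 for a in xs)
        return num / den

    naive = ols_slope(w, y)           # ~1.0: attenuated toward zero
    reliability = sigma_x**2 / (sigma_x**2 + sigma_u**2)  # lambda = 0.5
    corrected = naive / reliability   # ~2.0: recovers the true slope
    ```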

  2. Measurement error as a source of QT dispersion: a computerised analysis

    NARCIS (Netherlands)

    J.A. Kors (Jan); G. van Herpen (Gerard)

    1998-01-01

    OBJECTIVE: To establish a general method to estimate the measuring error in QT dispersion (QTD) determination, and to assess this error using a computer program for automated measurement of QTD. SUBJECTS: Measurements were done on 1220 standard simultaneous …
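
    The mechanism under study, measurement error acting as a source of QT dispersion, can be sketched directly: QTD is a max-minus-min statistic across leads, so independent per-lead errors create spurious dispersion even when the true QT interval is identical in every lead. The values below are hypothetical:

    ```python
    import random

    def qt_dispersion(qt_ms):
        """QT dispersion: range of QT intervals (ms) across ECG leads."""
        return max(qt_ms) - min(qt_ms)

    # 12 hypothetical leads with an identical true QT of 400 ms: true QTD = 0
    true_qt = [400.0] * 12
    assert qt_dispersion(true_qt) == 0.0

    # Adding 5 ms (SD) of independent measuring error creates spurious dispersion
    random.seed(0)
    noisy = [q + random.gauss(0, 5) for q in true_qt]
    spurious = qt_dispersion(noisy)   # > 0 purely from measurement error
    ```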

  3. Normal contour error measurement on-machine and compensation method for polishing complex surface by MRF

    Science.gov (United States)

    Chen, Hua; Chen, Jihong; Wang, Baorui; Zheng, Yongcheng

    2016-10-01

    The magnetorheological finishing (MRF) process, based on the dwell-time method with a constant normal spacing for flexible polishing, introduces normal contour errors when fine-polishing complex surfaces such as aspheres. The normal contour error changes the ribbon's shape and the consistency of the removal characteristics in MRF. A novel method is put forward to measure the normal contour errors on the machining track while polishing a complex surface, by continuously scanning the normal spacing between the workpiece and the laser range finder. Because the normal contour errors are measured dynamically, the workpiece's clamping precision, the multi-axis NC machining program, and the dynamic performance of the MRF machine can be verified and security-checked before the MRF process. A unit for on-machine measurement of the normal contour errors of complex surfaces was designed. Using the measurement unit's results as feedback to adjust the parameters of the feed-forward control and the multi-axis machining, an optimized servo control method is presented to compensate the normal contour errors. An experiment polishing a 180 mm × 180 mm aspherical workpiece of fused silica by MRF was set up to validate the method. The results show that the normal contour error was kept below 10 µm, and the PV value of the polished surface accuracy improved from 0.95λ to 0.09λ under the same process parameters. The technique has been applied in the PKC600-Q1 MRF machine developed by the China Academy of Engineering Physics since 2014, where it is used to process ultra-precision optical parts for large national optical engineering projects.

  4. Quantitative evaluation of statistical errors in small-angle X-ray scattering measurements

    Energy Technology Data Exchange (ETDEWEB)

    Sedlak, Steffen M.; Bruetzel, Linda K.; Lipfert, Jan (LMU)

    2017-03-29

    A new model is proposed for the measurement errors incurred in typical small-angle X-ray scattering (SAXS) experiments, which takes into account the setup geometry and physics of the measurement process. The model accurately captures the experimentally determined errors from a large range of synchrotron and in-house anode-based measurements. Its most general formulation gives for the variance of the buffer-subtracted SAXS intensity σ²(q) = [I(q) + const.]/(kq), where I(q) is the scattering intensity as a function of the momentum transfer q; k and const. are fitting parameters that are characteristic of the experimental setup. The model gives a concrete procedure for calculating realistic measurement errors for simulated SAXS profiles. In addition, the results provide guidelines for optimizing SAXS measurements, which are in line with established procedures for SAXS experiments, and enable a quantitative evaluation of measurement errors.
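
    The variance formula can be applied directly to attach realistic error bars to a simulated profile. In the sketch below, k and const. are hypothetical setup constants and the intensity profile is invented, not values fitted in the paper:

    ```python
    import math

    def saxs_sigma(I_q: float, q: float, k: float, const: float) -> float:
        """Predicted SD of buffer-subtracted SAXS intensity at momentum transfer q,
        following sigma^2(q) = (I(q) + const.) / (k * q)."""
        return math.sqrt((I_q + const) / (k * q))

    # Hypothetical setup constants and a simulated (q, I(q)) profile
    k, const = 5000.0, 10.0
    profile = [(0.02, 120.0), (0.1, 40.0), (0.3, 5.0)]
    errors = [saxs_sigma(I, q, k, const) for q, I in profile]
    ```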

  5. CORRECTING FOR MEASUREMENT ERROR IN LATENT VARIABLES USED AS PREDICTORS*

    Science.gov (United States)

    Schofield, Lynne Steuerle

    2015-01-01

    This paper represents a methodological-substantive synergy. A new model, the Mixed Effects Structural Equations (MESE) model which combines structural equations modeling and item response theory is introduced to attend to measurement error bias when using several latent variables as predictors in generalized linear models. The paper investigates racial and gender disparities in STEM retention in higher education. Using the MESE model with 1997 National Longitudinal Survey of Youth data, I find prior mathematics proficiency and personality have been previously underestimated in the STEM retention literature. Pre-college mathematics proficiency and personality explain large portions of the racial and gender gaps. The findings have implications for those who design interventions aimed at increasing the rates of STEM persistence among women and under-represented minorities. PMID:26977218

  6. Fusing range measurements from ultrasonic beacons and a laser range finder for localization of a mobile robot.

    Science.gov (United States)

    Ko, Nak Yong; Kuc, Tae-Yong

    2015-05-11

    This paper proposes a method for mobile robot localization in a partially unknown indoor environment. The method fuses two types of range measurements: the range from the robot to the beacons measured by ultrasonic sensors and the range from the robot to the walls surrounding the robot measured by a laser range finder (LRF). For the fusion, the unscented Kalman filter (UKF) is utilized. Because finding the Jacobian matrix is not feasible for range measurement using an LRF, UKF has an advantage in this situation over the extended KF. The locations of the beacons and range data from the beacons are available, whereas the correspondence of the range data to the beacon is not given. Therefore, the proposed method also deals with the problem of data association to determine which beacon corresponds to the given range data. The proposed approach is evaluated using different sets of design parameter values and is compared with the method that uses only an LRF or ultrasonic beacons. Comparative analysis shows that even though ultrasonic beacons are sparsely populated, have a large error and have a slow update rate, they improve the localization performance when fused with the LRF measurement. In addition, proper adjustment of the UKF design parameters is crucial for full utilization of the UKF approach for sensor fusion. This study contributes to the derivation of a UKF-based design methodology to fuse two exteroceptive measurements that are complementary to each other in localization.
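
    The data-association step described above can be sketched as a gated nearest-neighbor test on predicted ranges: each reading is assigned to the beacon whose predicted range best explains it, or discarded if no beacon falls within a validation gate. This is a simplification of the paper's UKF-based treatment, and all coordinates below are hypothetical:

    ```python
    import math

    def associate(range_m, robot_xy, beacons, gate_m=1.0):
        """Return the id of the beacon whose predicted range best matches the
        measurement, or None if no beacon falls within the validation gate."""
        best_id, best_residual = None, gate_m
        for bid, (bx, by) in beacons.items():
            predicted = math.hypot(bx - robot_xy[0], by - robot_xy[1])
            residual = abs(range_m - predicted)
            if residual < best_residual:
                best_id, best_residual = bid, residual
        return best_id

    beacons = {"A": (0.0, 0.0), "B": (10.0, 0.0), "C": (0.0, 10.0)}
    robot = (4.0, 3.0)
    # True range to beacon A is 5.0 m; a noisy 5.2 m reading associates to A
    assert associate(5.2, robot, beacons) == "A"
    assert associate(50.0, robot, beacons) is None  # outside every gate
    ```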

  7. Francesca Hughes: Architecture of Error: Matter, Measure and the Misadventure of Precision

    DEFF Research Database (Denmark)

    Foote, Jonathan

    2016-01-01

    Review of "Architecture of Error: Matter, Measure and the Misadventure of Precision" by Francesca Hughes (MIT Press, 2014).

  8. Assessment of Measurement Error when Using the Laser Spectrum Analyzers

    Directory of Open Access Journals (Sweden)

    A. A. Titov

    2015-01-01

    Full Text Available The article dwells on the assessment of measurement errors when using laser spectrum analyzers. The analysis results show that it is possible to carry out a spectral analysis of both the amplitudes and the phases of the frequency components of signals, and to analyze the changing phase of the frequency components of radio signals, using interferential methods of measurement. It is found that interferometers with the Mach-Zehnder arrangement are most widely used for measurement of signal phase. The combined method is shown to offer higher resolution than the other methods considered, because spatial integration is performed over one coordinate while time integration is performed over the other, achieved by arranging the modulators orthogonally to each other. The article also identifies the drawback of this method: its complexity and low speed, caused by the integrator, prevent measurement of the spectral components of a radio pulse whose width is less than the time aperture. An advanced version of the spectrum analyzer is therefore proposed in which the phase is determined through signal processing, and the resolution achievable with such a spectrum analyzer is presented. The article also reviews possible options for devices measuring the phase components of a spectrum, depending on the phase-measurement methods applied. The analysis shows that a time-pulse method is the most promising for phase measurement. It is found that the known circuits of digital phase meters using this method cannot be directly used in spectrum analyzers, as they are designed to measure the phase of only one signal frequency. In this regard, a number of circuits were developed to measure the amplitude and phase of the frequency components of a radio signal. It is shown that a promising option for a spectrum analyzer is a device in which the phase is determined through signal processing.

  9. Wind shear proportional errors in the horizontal wind speed sensed by focused, range gated lidars

    DEFF Research Database (Denmark)

    Lindelöw, Per Jonas Petter; Courtney, Michael; Parmentier, R.

    2008-01-01

    The 10-minute average horizontal wind speeds sensed with lidar and mast-mounted cup anemometers, at 60 to 116 meters altitude at Høvsøre, are compared. The lidar deviation from the cup value as a function of wind velocity and wind shear is studied in a 2-parametric regression analysis, which reveals an altitude-dependent relation between the lidar error and the wind shear. A likely explanation for this relation is an error in the intended sensing altitude. At most this error is estimated to 9 m, which induced errors in the horizontal wind velocity of up to 0.5 m/s as compared to a cup at the intended … for wind velocity and wind shear dependent errors are discussed. The 2-parametric regression analysis described in this paper is proven to be a better approach when acceptance testing and calibrating lidars.
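
    A 2-parametric regression of the kind described fits the lidar-minus-cup deviation jointly on wind velocity and wind shear. The sketch below uses ordinary least squares via the normal equations on synthetic 10-minute means in which the deviation is driven by shear alone; all numbers are hypothetical, not Høvsøre data:

    ```python
    def fit_two_param(v, s, e):
        """Least-squares fit of e = a*v + b*s (no intercept) via normal equations."""
        svv = sum(x * x for x in v)
        sss = sum(x * x for x in s)
        svs = sum(x * y for x, y in zip(v, s))
        sve = sum(x * y for x, y in zip(v, e))
        sse = sum(x * y for x, y in zip(s, e))
        det = svv * sss - svs * svs
        a = (sve * sss - sse * svs) / det
        b = (sse * svv - sve * svs) / det
        return a, b

    # Hypothetical 10-minute means: deviation driven purely by shear (b = 0.6)
    wind = [4.0, 6.0, 8.0, 10.0, 12.0]
    shear = [0.1, 0.3, 0.2, 0.4, 0.5]
    dev = [0.6 * s for s in shear]           # lidar error proportional to shear
    a, b = fit_two_param(wind, shear, dev)   # recovers a ~ 0, b ~ 0.6
    ```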

  10. Bayesian modeling of measurement error in predictor variables using item response theory

    NARCIS (Netherlands)

    Fox, Gerardus J.A.; Glas, Cornelis A.W.

    2000-01-01

    This paper focuses on handling measurement error in predictor variables using item response theory (IRT). Measurement error is of great importance in the assessment of theoretical constructs, such as intelligence or the school climate. Measurement error is modeled by treating the predictors as unobserved latent variables …

  11. Error Analysis for Interferometric SAR Measurements of Ice Sheet Flow

    DEFF Research Database (Denmark)

    Mohr, Johan Jacob; Madsen, Søren Nørvang

    1999-01-01

    … and slope errors in conjunction with a surface-parallel flow assumption. The most surprising result is that, assuming a stationary flow, the east component of the three-dimensional flow derived from ascending and descending orbit data is independent of slope errors and of the vertical flow.

  12. Lower extremity angle measurement with accelerometers - error and sensitivity analysis

    NARCIS (Netherlands)

    Willemsen, A.T.M.; Willemsen, Antoon Th.M.; Frigo, Carlo; Boom, H.B.K.

    1991-01-01

    The use of accelerometers for angle assessment of the lower extremities is investigated. This method is evaluated by an error-and-sensitivity analysis using healthy subject data. Of three potential error sources (the reference system, the accelerometers, and the model assumptions), the last is found …

  13. Measuring and detecting molecular adaptation in codon usage against nonsense errors during protein translation.

    Science.gov (United States)

    Gilchrist, Michael A; Shah, Premal; Zaretzki, Russell

    2009-12-01

    Codon usage bias (CUB) has been documented across a wide range of taxa and is the subject of numerous studies. While most explanations of CUB invoke some type of natural selection, most measures of CUB adaptation are heuristically defined. In contrast, we present a novel and mechanistic method for defining and contextualizing CUB adaptation to reduce the cost of nonsense errors during protein translation. Using a model of protein translation, we develop a general approach for measuring the protein production cost in the face of nonsense errors of a given allele as well as the mean and variance of these costs across its coding synonyms. We then use these results to define the nonsense error adaptation index (NAI) of the allele or a contiguous subset thereof. Conceptually, the NAI value of an allele is a relative measure of its elevation on a specific and well-defined adaptive landscape. To illustrate its utility, we calculate NAI values for the entire coding sequence and across a set of nonoverlapping windows for each gene in the Saccharomyces cerevisiae S288c genome. Our results provide clear evidence of adaptation to reduce the cost of nonsense errors and increasing adaptation with codon position and expression. The magnitude and nature of this adaptation are also largely consistent with simulation results in which nonsense errors are the only selective force driving CUB evolution. Because NAI is derived from mechanistic models, it is both easier to interpret and more amenable to future refinement than other commonly used measures of codon bias. Further, our approach can also be used as a starting point for developing other mechanistically derived measures of adaptation such as for translational accuracy.
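
    The core bookkeeping behind nonsense-error cost can be sketched simply (a simplification, not the authors' full NAI model): with a per-codon nonsense-error probability, the chance of completing translation is the product of per-codon survival probabilities, and the expected number of initiations per finished protein is its reciprocal. The ORF length and error rates below are hypothetical:

    ```python
    def completion_probability(p_nonsense):
        """Probability a ribosome finishes the transcript with no nonsense error."""
        prob = 1.0
        for p in p_nonsense:
            prob *= 1.0 - p
        return prob

    def expected_attempts(p_nonsense):
        """Expected translation initiations needed per completed protein."""
        return 1.0 / completion_probability(p_nonsense)

    fast_codons = [1e-4] * 300  # hypothetical 300-codon ORF, low-error codons
    slow_codons = [1e-3] * 300  # synonymous choices with 10x the error rate

    assert completion_probability(fast_codons) > completion_probability(slow_codons)
    cost_ratio = expected_attempts(slow_codons) / expected_attempts(fast_codons)
    # ~1.31: the error-prone synonyms waste ~31% more initiations per protein
    ```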

  14. Characterization of measurement errors using structure-from-motion and photogrammetry to measure marine habitat structural complexity.

    Science.gov (United States)

    Bryson, Mitch; Ferrari, Renata; Figueira, Will; Pizarro, Oscar; Madin, Josh; Williams, Stefan; Byrne, Maria

    2017-08-01

    Habitat structural complexity is one of the most important factors in determining the makeup of biological communities. Recent advances in structure-from-motion and photogrammetry have resulted in a proliferation of 3D digital representations of habitats from which structural complexity can be measured. Little attention has been paid to quantifying the measurement errors associated with these techniques, including the variability of results under different surveying and environmental conditions. Such errors have the potential to confound studies that compare habitat complexity over space and time. This study evaluated the accuracy, precision, and bias in measurements of marine habitat structural complexity derived from structure-from-motion and photogrammetric measurements using repeated surveys of artificial reefs (with known structure) as well as natural coral reefs. We quantified measurement errors as a function of survey image coverage, actual surface rugosity, and the morphological community composition of the habitat-forming organisms (reef corals). Our results indicated that measurements could be biased by up to 7.5% of the total observed ranges of structural complexity based on the environmental conditions present during any particular survey. Positive relationships were found between measurement errors and actual complexity, and the strength of these relationships was increased when coral morphology and abundance were also used as predictors. The numerous advantages of structure-from-motion and photogrammetry techniques for quantifying and investigating marine habitats will mean that they are likely to replace traditional measurement techniques (e.g., chain-and-tape). To this end, our results have important implications for data collection and the interpretation of measurements when examining changes in habitat complexity using structure-from-motion and photogrammetry.
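
    One structural-complexity metric commonly extracted from such 3D reconstructions is surface rugosity: triangulated 3D surface area divided by planar area. Height noise from the reconstruction biases it upward, which is one way the measurement errors discussed above arise. A sketch on a hypothetical height grid (not the study's DEM data):

    ```python
    import math

    def tri_area(p, q, r):
        """Area of a 3D triangle via the cross-product magnitude."""
        ux, uy, uz = q[0] - p[0], q[1] - p[1], q[2] - p[2]
        vx, vy, vz = r[0] - p[0], r[1] - p[1], r[2] - p[2]
        cx, cy, cz = uy * vz - uz * vy, uz * vx - ux * vz, ux * vy - uy * vx
        return 0.5 * math.sqrt(cx * cx + cy * cy + cz * cz)

    def rugosity(heights, cell=1.0):
        """Surface rugosity of a height grid: triangulated 3D area / planar area."""
        rows, cols = len(heights), len(heights[0])
        area3d = 0.0
        for i in range(rows - 1):
            for j in range(cols - 1):
                # split each grid cell into two triangles and sum their areas
                z00, z01 = heights[i][j], heights[i][j + 1]
                z10, z11 = heights[i + 1][j], heights[i + 1][j + 1]
                area3d += tri_area((0, 0, z00), (cell, 0, z01), (0, cell, z10))
                area3d += tri_area((cell, 0, z01), (cell, cell, z11), (0, cell, z10))
        planar = (rows - 1) * (cols - 1) * cell * cell
        return area3d / planar

    flat = [[0.0] * 3 for _ in range(3)]
    assert rugosity(flat) == 1.0   # perfectly flat surface
    bumpy = [[0.0, 1.0, 0.0], [1.0, 0.0, 1.0], [0.0, 1.0, 0.0]]
    assert rugosity(bumpy) > 1.0   # structure (or height noise) raises the ratio
    ```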

  15. The reliability and measurement error of protractor-based goniometry of the fingers: A systematic review.

    Science.gov (United States)

    van Kooij, Yara E; Fink, Alexandra; Nijhuis-van der Sanden, Maria W; Speksnijder, Caroline M

    Systematic review. PURPOSE OF THE STUDY: The purpose was to review the available literature for evidence on the reliability and measurement error of protractor-based goniometry assessment of the finger joints. Databases were searched for articles with the key words "hand," "goniometry," "reliability," and derivatives of these terms. Assessment of methodological quality was carried out using the Consensus-Based Standards for the Selection of Health Measurement Instruments (COSMIN) checklist. Two independent reviewers performed a best-evidence synthesis based on criteria proposed by Terwee et al (2007). Fifteen articles were included. One article was of fair methodological quality, and 14 articles were of poor methodological quality. An acceptable level of reliability (intraclass correlation coefficient > 0.70 or Pearson's correlation > 0.80) was reported in 1 study of fair methodological quality and in 8 articles of low methodological quality. Because the minimal important change was not calculated in the articles, there was an unknown level of evidence for the measurement error. Further research with adequate sample sizes should focus on reference outcomes for different patient groups. For valid therapy evaluation, it is important to know whether a change in range of motion reflects a real change in the patient or is due to the measurement error of the goniometer. Until now, there is insufficient evidence to establish this cut-off point (the smallest detectable change). Following the COSMIN criteria, there was a limited level of evidence for an acceptable reliability of the dorsal measurement method and an unknown level of evidence for the measurement error. 2a. Copyright © 2017 Hanley & Belfus. Published by Elsevier Inc. All rights reserved.
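
    The cut-off the review finds missing, the smallest detectable change, follows directly from the standard error of measurement: SEM = SD·√(1 − ICC) and SDC = 1.96·√2·SEM. The goniometry numbers below are hypothetical, chosen only to show the arithmetic:

    ```python
    import math

    def smallest_detectable_change(sd: float, icc: float) -> float:
        """SDC (same units as the measurement) from the sample SD and reliability ICC."""
        sem = sd * math.sqrt(1.0 - icc)      # standard error of measurement
        return 1.96 * math.sqrt(2.0) * sem   # 95% smallest detectable change

    # Hypothetical goniometry data: between-subject SD of 8 degrees, ICC of 0.85
    sdc = smallest_detectable_change(8.0, 0.85)  # ~8.6 degrees
    ```

    A measured change smaller than this SDC cannot be distinguished from goniometer measurement error, which is exactly why the review calls for it to be reported.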

  16. Pivot and cluster strategy: a preventive measure against diagnostic errors.

    Science.gov (United States)

    Shimizu, Taro; Tokuda, Yasuharu

    2012-01-01

    Diagnostic errors constitute a substantial portion of preventable medical errors. The accumulation of evidence shows that most errors result from one or more cognitive biases and a variety of debiasing strategies have been introduced. In this article, we introduce a new diagnostic strategy, the pivot and cluster strategy (PCS), encompassing both of the two mental processes in making diagnosis referred to as the intuitive process (System 1) and analytical process (System 2) in one strategy. With PCS, physicians can recall a set of most likely differential diagnoses (System 2) of an initial diagnosis made by the physicians' intuitive process (System 1), thereby enabling physicians to double check their diagnosis with two consecutive diagnostic processes. PCS is expected to reduce cognitive errors and enhance their diagnostic accuracy and validity, thereby realizing better patient outcomes and cost- and time-effective health care management.

  17. Adjusting for the Incidence of Measurement Errors in Multilevel ...

    African Journals Online (AJOL)

    … error-prone explanatory variables and adjusts for the incidence of these errors, giving rise to more adequate multilevel models. The illustrative data employed were drawn from an educational environment. …

  18. Low-error and broadband microwave frequency measurement in a silicon chip

    CERN Document Server

    Pagani, Mattia; Zhang, Yanbing; Casas-Bedoya, Alvaro; Aalto, Timo; Harjanne, Mikko; Kapulainen, Markku; Eggleton, Benjamin J; Marpaung, David

    2015-01-01

    Instantaneous frequency measurement (IFM) of microwave signals is a fundamental functionality for applications ranging from electronic warfare to biomedical technology. Photonic techniques, and nonlinear optical interactions in particular, have the potential to broaden the frequency measurement range beyond the limits of electronic IFM systems. The key lies in efficiently harnessing optical mixing in an integrated nonlinear platform, with low losses. In this work, we exploit the low loss of a 35 cm long, thick silicon waveguide, to efficiently harness Kerr nonlinearity, and demonstrate the first on-chip four-wave mixing (FWM) based IFM system. We achieve a large 40 GHz measurement bandwidth and record-low measurement error. Finally, we discuss the future prospect of integrating the whole IFM system on a silicon chip to enable the first reconfigurable, broadband IFM receiver with low-latency.

  19. Precision influence of a phase retrieval algorithm in fractional Fourier domains from position measurement error.

    Science.gov (United States)

    Guo, Cheng; Tan, Jiubin; Liu, Zhengjun

    2015-08-01

    An iterative structure of amplitude-phase retrieval (APR) was proved to obtain more accurate reconstructed data for both amplitude and phase. However, the precise influence of position measurement error, and the corresponding error correction, have not been sufficiently analyzed. We apply the APR algorithm in fractional Fourier domains to reconstruct a sample image and describe the corresponding optical implementation. An error model is built to discuss the distribution of the position measurement error. A corrective method is applied to amend the error and obtain a better quality of retrieved image. The numerical results demonstrate that our methods are feasible and useful for correcting the error under various circumstances.

  20. Comparing methods to measure error in gynecologic cytology and surgical pathology.

    Science.gov (United States)

    Renshaw, Andrew A

    2006-05-01

    Both gynecologic cytology and surgical pathology use similar methods to measure diagnostic error, but differences exist between how these methods have been applied in the 2 fields. To compare the application of methods of error detection in gynecologic cytology and surgical pathology. Review of the literature. There are several different approaches to measuring error, all of which have limitations. Measuring error using reproducibility as the gold standard is a common method to determine error. While error rates in gynecologic cytology are well characterized and methods for objectively assessing error in the legal setting have been developed, meaningful methods to measure error rates in clinical practice are not commonly used and little is known about the error rates in this setting. In contrast, in surgical pathology the error rates are not as well characterized and methods for assessing error in the legal setting are not as well defined, but methods to measure error in actual clinical practice have been characterized and preliminary data from these methods are now available concerning the error rates in this setting.

  1. Pivot and cluster strategy: a preventive measure against diagnostic errors

    Directory of Open Access Journals (Sweden)

    Shimizu T

    2012-11-01

    Full Text Available Taro Shimizu,1 Yasuharu Tokuda2 1Rollins School of Public Health, Emory University, Atlanta, GA, USA; 2Institute of Clinical Medicine, Graduate School of Comprehensive Human Sciences, University of Tsukuba, Ibaraki, Japan. Abstract: Diagnostic errors constitute a substantial portion of preventable medical errors. The accumulating evidence shows that most errors result from one or more cognitive biases, and a variety of debiasing strategies have been introduced. In this article, we introduce a new diagnostic strategy, the pivot and cluster strategy (PCS), encompassing both of the two mental processes in making a diagnosis, referred to as the intuitive process (System 1) and the analytical process (System 2), in one strategy. With PCS, physicians can recall a set of the most likely differential diagnoses (System 2) of an initial diagnosis made by the physician's intuitive process (System 1), thereby enabling physicians to double-check their diagnosis with two consecutive diagnostic processes. PCS is expected to reduce cognitive errors and enhance diagnostic accuracy and validity, thereby realizing better patient outcomes and cost- and time-effective health care management. Keywords: diagnosis, diagnostic errors, debiasing

  2. High speed high dynamic range high accuracy measurement system

    Science.gov (United States)

    Deibele, Craig E.; Curry, Douglas E.; Dickson, Richard W.; Xie, Zaipeng

    2016-11-29

    A measuring system includes an input that emulates a bandpass filter with no signal reflections. A directional coupler connected to the input passes the filtered input to electrically isolated measuring circuits. Each of the measuring circuits includes an amplifier that amplifies the signal through logarithmic functions. The output of the measuring system is an accurate high dynamic range measurement.

  3. Identifying and Removing Systematic Error due to Resistance Tolerance from Measurement System of Inclinometer

    Directory of Open Access Journals (Sweden)

    POP Septimiu

    2012-05-01

    Full Text Available This paper focuses on the effect produced by the systematic error of measurement devices in the monitoring of a system, in this case a dam. The effect of systematic error in dam monitoring is an incorrect description of the dam's evolution; measurement errors appear as a deflection of the dam from its normal evolution. The physical parameter, inclination, needs to be measured with an accuracy of 0.05%. The sensor used has a fully differential voltage output. In a measurement device, one error source is the imperfection of the electronic components: the performance of measurement instruments depends on resistance tolerance. The error produced by tolerance in a measurement device is a systematic error, which in the monitoring process becomes a random error. Measuring the transducer with a Wheatstone bridge requires high-accuracy resistors of 0.01%, but high-accuracy resistors increase the cost of the instrument. This source of systematic error can be eliminated if the transducer is measured without a resistance divider. To obtain a positive voltage at the sensor output, the sensor is supplied relative to the common-mode voltage of the analog converter. In this case, the measurement error depends only on the ADC. The acquisition is made with a differential converter; to obtain a measurement accuracy of 0.05%, a 14-bit converter is used. The ADC has an auto-calibration function, so offset and gain errors are internally compensated.
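As a rough plausibility check on the resolution argument above (the 0.05% accuracy target and the 14-bit converter come from the abstract; the full-scale framing and ideal-converter assumption are ours), the quantization step of an N-bit ADC can be computed directly:

```python
# Quantization resolution of an ideal N-bit ADC as a fraction of full scale.
# Assumption: offset and gain errors are compensated by the converter's
# auto-calibration, as the abstract states, leaving quantization dominant.

def adc_resolution_fraction(bits: int) -> float:
    """One least-significant bit as a fraction of the full-scale range."""
    return 1.0 / (2 ** bits)

target = 0.05 / 100                    # 0.05% accuracy target from the abstract
lsb_14 = adc_resolution_fraction(14)

print(f"14-bit LSB: {lsb_14:.6%}")     # ~0.0061% of full scale
print(f"meets 0.05% target: {lsb_14 < target}")
```

With roughly an eightfold margin between the LSB and the target, the 14-bit choice leaves headroom for residual noise and nonlinearity.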

  4. CENTIMETER COSMO-SKYMED RANGE MEASUREMENTS FOR MONITORING GROUND DISPLACEMENTS

    Directory of Open Access Journals (Sweden)

    F. Fratarcangeli

    2016-06-01

    Full Text Available SAR (Synthetic Aperture Radar) imagery is widely used to monitor displacements impacting the Earth's surface and infrastructures. The main remote sensing technique for extracting sub-centimeter information from SAR imagery is Differential SAR Interferometry (DInSAR), based on the phase information only. However, it is well known that the DInSAR technique may suffer from lack of coherence among the considered stack of images. New Earth observation SAR satellite sensors, such as COSMO-SkyMed, TerraSAR-X, and the coming PAZ, can also acquire imagery with high amplitude resolution, up to a few decimeters. Thanks to this feature, and to on-board dual-frequency GPS receivers allowing orbit determination with an accuracy at the few-centimeter level, it was proven by different groups that TerraSAR-X imagery offers the capability to achieve, in a global reference frame, 3D positioning accuracies in the decimeter range and even better, just by exploiting the slant-range measurements coming from the amplitude information, provided proper corrections of all the involved geophysical phenomena are carefully applied. The core of this work is to test this methodology on COSMO-SkyMed data acquired over the Corvara area (Bolzano, Northern Italy), where a landslide with relevant yearly displacements, up to decimeters, is currently monitored using GPS surveys and the DInSAR technique. The leading idea is to measure the distance between the satellite and a well-identifiable natural or artificial Persistent Scatterer (PS), taking into account the signal propagation delays through the troposphere and ionosphere and filtering out the known geophysical effects that induce periodic and secular ground displacements. The preliminary results presented and discussed here indicate that COSMO-SkyMed Himage imagery appears able to guarantee displacement monitoring with an accuracy of a few centimeters using only the amplitude data, provided few (at least one stable PS's are

  5. Study on error analysis and accuracy improvement for aspheric profile measurement

    Science.gov (United States)

    Gao, Huimin; Zhang, Xiaodong; Fang, Fengzhou

    2017-06-01

    Aspheric surfaces are important to optical systems and need high-precision surface metrology. Stylus profilometry is currently the most common approach to measuring axially symmetric elements. However, if the asphere has rotational alignment errors, the wrong cresting point will be located, yielding significantly incorrect surface errors. This paper studied the simulated results for an asphere with rotational angles around the X-axis and Y-axis, and the stylus tip shift in the X, Y and Z directions. Experimental results show that the same absolute value of rotational error around the X-axis causes the same profile errors, while different values of rotational error around the Y-axis cause profile errors with different tilt angles. Moreover, the greater the rotational errors, the bigger the peak-to-valley value of the profile errors. To identify the rotational angles around the X-axis and Y-axis, algorithms are performed to analyze the X-axis and Y-axis rotational angles respectively. Then the actual profile errors with multiple profile measurements around the X-axis are calculated according to the proposed analysis flow chart. The aim of the multiple-measurement strategy is to achieve the zero position of the X-axis rotational error. Experimental results prove that the proposed algorithms achieve accurate profile errors for aspheric surfaces, avoiding both X-axis and Y-axis rotational errors. Finally, a measurement strategy for aspheric surfaces is presented systematically.

  6. False Positives in Multiple Regression: Unanticipated Consequences of Measurement Error in the Predictor Variables

    Science.gov (United States)

    Shear, Benjamin R.; Zumbo, Bruno D.

    2013-01-01

    Type I error rates in multiple regression, and hence the chance for false positive research findings, can be drastically inflated when multiple regression models are used to analyze data that contain random measurement error. This article shows the potential for inflated Type I error rates in commonly encountered scenarios and provides new…
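The inflation mechanism the abstract describes can be sketched with a small Monte Carlo simulation: when a true predictor is measured with error, a correlated second predictor with no real effect picks up the unexplained part and tests as "significant" far more often than the nominal 5%. All parameter values below (sample size, correlation, noise level) are illustrative assumptions, not the article's.

```python
# Monte Carlo sketch of Type I error inflation from measurement error in a
# predictor. x2 has NO effect on y once x1 is controlled; we count how often
# its coefficient nonetheless tests significant at the 5% level.
import numpy as np

def rejection_rate(noise_sd: float, reps: int = 1000, n: int = 200,
                   seed: int = 0) -> float:
    """Fraction of replications rejecting H0: beta_x2 = 0 at the 5% level."""
    rng = np.random.default_rng(seed)
    rejections = 0
    for _ in range(reps):
        x1 = rng.standard_normal(n)
        x2 = 0.7 * x1 + np.sqrt(1 - 0.7**2) * rng.standard_normal(n)
        y = x1 + rng.standard_normal(n)              # true model: only x1 matters
        w1 = x1 + noise_sd * rng.standard_normal(n)  # x1 observed with error
        X = np.column_stack([np.ones(n), w1, x2])
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        resid = y - X @ beta
        sigma2 = resid @ resid / (n - 3)
        cov = sigma2 * np.linalg.inv(X.T @ X)
        t = beta[2] / np.sqrt(cov[2, 2])
        rejections += abs(t) > 1.96
    return rejections / reps

print("no measurement error :", rejection_rate(0.0))   # ~0.05, nominal level
print("noise SD = 1.0       :", rejection_rate(1.0))   # far above 0.05
```

With a reliability of 0.5 for the mismeasured predictor, the false-positive rate for the null predictor climbs close to 1, which is the "drastic inflation" the abstract warns about.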

  7. Measurement accuracy of articulated arm CMMs with circular grating eccentricity errors

    Science.gov (United States)

    Zheng, Dateng; Yin, Sanfeng; Luo, Zhiyang; Zhang, Jing; Zhou, Taiping

    2016-11-01

    A model of the 6 joints' circular grating eccentricity errors is proposed to improve the measurement accuracy of an articulated arm coordinate measuring machine (AACMM) without increasing the corresponding hardware cost. We analyzed the AACMM's circular grating eccentricity and obtained the 6 joints' circular grating eccentricity error model parameters by conducting circular grating eccentricity error experiments. We completed the calibration of the measurement models by using home-made standard bar components. Our results show that the measurement errors from the AACMM's measurement model without and with circular grating eccentricity errors are 0.0834 mm and 0.0462 mm, respectively. Significantly, measurement accuracy increased by about 44.6% when the circular grating eccentricity errors were corrected. This study is significant because it promotes wider applications of AACMMs both in theory and in practice.

  8. Study of systematic errors in the luminosity measurement

    Energy Technology Data Exchange (ETDEWEB)

    Arima, Tatsumi [Tsukuba Univ., Ibaraki (Japan). Inst. of Applied Physics

    1993-04-01

    The experimental systematic error in the barrel region was estimated to be 0.44%. This value is derived by considering the systematic uncertainties from the dominant sources, but does not include uncertainties which are still being studied. In the end cap region, the study of shower behavior and the clustering effect is under way in order to determine the angular resolution at the low-angle edge of the Liquid Argon Calorimeter. We also expect that the systematic error in this region will be less than 1%. The technical precision of the theoretical uncertainty is better than 0.1%, based on a comparison of the Tobimatsu-Shimizu program and BABAMC as modified by ALEPH. To estimate the physical uncertainty we will use ALIBABA [9], which includes the O({alpha}{sup 2}) QED correction in leading-log approximation. (J.P.N.).

  9. Sensor Interaction as a Source of the Electromagnetic Field Measurement Error

    Directory of Open Access Journals (Sweden)

    Hartansky R.

    2014-12-01

    Full Text Available The article deals with the analytical calculation and numerical simulation of the mutual interaction of electromagnetic sensors. The sensors are components of a field probe, and their mutual interaction causes a measurement error. The electromagnetic field probe contains three mutually perpendicular sensors in order to measure the electric field vector. The sensor error is evaluated as a function of the sensors' relative positions. Based on that, recommendations are proposed for electromagnetic field probe construction that minimize sensor interaction and measurement error.

  10. Combined influence of CT random noise and HU-RSP calibration curve nonlinearities on proton range systematic errors

    Science.gov (United States)

    Brousmiche, S.; Souris, K.; Orban de Xivry, J.; Lee, J. A.; Macq, B.; Seco, J.

    2017-11-01

    Proton range random and systematic uncertainties are the major factors undermining the advantages of proton therapy, namely, a sharp dose falloff and a better dose conformality for lower doses in normal tissues. The influence of CT artifacts such as beam hardening or scatter can easily be understood and estimated due to their large-scale effects on the CT image, like cupping and streaks. In comparison, the effects of weakly-correlated stochastic noise are more insidious and less attention is drawn on them partly due to the common belief that they only contribute to proton range uncertainties and not to systematic errors thanks to some averaging effects. A new source of systematic errors on the range and relative stopping powers (RSP) has been highlighted and proved not to be negligible compared to the 3.5% uncertainty reference value used for safety margin design. Hence, we demonstrate that the angular points in the HU-to-RSP calibration curve are an intrinsic source of proton range systematic error for typical levels of zero-mean stochastic CT noise. Systematic errors on RSP of up to 1% have been computed for these levels. We also show that the range uncertainty does not generally vary linearly with the noise standard deviation. We define a noise-dependent effective calibration curve that better describes, for a given material, the RSP value that is actually used. The statistics of the RSP and the range continuous slowing down approximation (CSDA) have been analytically derived for the general case of a calibration curve obtained by the stoichiometric calibration procedure. These models have been validated against actual CSDA simulations for homogeneous and heterogeneous synthetical objects as well as on actual patient CTs for prostate and head-and-neck treatment planning situations.
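The key effect above, zero-mean CT noise producing a systematic RSP error at an "angular point" (slope change) of the calibration curve, can be demonstrated in a few lines. The piecewise-linear curve and noise level below are hypothetical stand-ins, not the paper's calibration data; the point is only that the mean of the mapped values shifts even though the HU noise averages to zero.

```python
# Sketch: zero-mean HU noise through a kinked HU-to-RSP calibration curve
# yields a biased mean RSP. Curve nodes and noise sigma are invented.
import numpy as np

hu_pts  = np.array([-1000.0, 100.0, 2000.0])   # hypothetical calibration nodes
rsp_pts = np.array([0.0, 1.1, 2.0])            # piecewise-linear HU -> RSP

def rsp(hu):
    return np.interp(hu, hu_pts, rsp_pts)

hu0, sigma = 100.0, 20.0                 # material sitting exactly at the kink
rng = np.random.default_rng(1)
noisy = rsp(hu0 + sigma * rng.standard_normal(200_000))

bias = noisy.mean() - rsp(hu0)           # nonzero despite zero-mean HU noise

# For a Gaussian centered on the kink the expected bias is
# (s2 - s1) * sigma / sqrt(2*pi), with s1, s2 the slopes on each side.
s1 = (rsp_pts[1] - rsp_pts[0]) / (hu_pts[1] - hu_pts[0])
s2 = (rsp_pts[2] - rsp_pts[1]) / (hu_pts[2] - hu_pts[1])
analytic = (s2 - s1) * sigma / np.sqrt(2 * np.pi)

print(f"Monte Carlo bias: {bias:.5f}, analytic: {analytic:.5f}")
```

Because the slope decreases across this particular kink, the RSP is systematically underestimated; a slope increase would bias it the other way. This is the intuition behind the paper's noise-dependent "effective calibration curve".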

  11. Computational Fluid Dynamics Analysis on Radiation Error of Surface Air Temperature Measurement

    Science.gov (United States)

    Yang, Jie; Liu, Qing-Quan; Ding, Ren-Hui

    2017-01-01

    Due to solar radiation effect, current air temperature sensors inside a naturally ventilated radiation shield may produce a measurement error that is 0.8 K or higher. To improve air temperature observation accuracy and correct historical temperature of weather stations, a radiation error correction method is proposed. The correction method is based on a computational fluid dynamics (CFD) method and a genetic algorithm (GA) method. The CFD method is implemented to obtain the radiation error of the naturally ventilated radiation shield under various environmental conditions. Then, a radiation error correction equation is obtained by fitting the CFD results using the GA method. To verify the performance of the correction equation, the naturally ventilated radiation shield and an aspirated temperature measurement platform are characterized in the same environment to conduct the intercomparison. The aspirated temperature measurement platform serves as an air temperature reference. The mean radiation error given by the intercomparison experiments is 0.23 K, and the mean radiation error given by the correction equation is 0.2 K. This radiation error correction method allows the radiation error to be reduced by approximately 87 %. The mean absolute error and the root mean square error between the radiation errors given by the correction equation and the radiation errors given by the experiments are 0.036 K and 0.045 K, respectively.

  12. Comparison of Neural Network Error Measures for Simulation of Slender Marine Structures

    DEFF Research Database (Denmark)

    Christiansen, Niels H.; Voie, Per Erlend Torbergsen; Winther, Ole

    2014-01-01

    Training of an artificial neural network (ANN) adjusts the internal weights of the network in order to minimize a predefined error measure. This error measure is given by an error function. Several different error functions are suggested in the literature. However, the far most common measure for regression is the mean square error. This paper looks into the possibility of improving the performance of neural networks by selecting or defining error functions that are tailor-made for a specific objective. A neural network trained to simulate tension forces in an anchor chain on a floating offshore platform is designed and tested. The purpose of setting up the network is to reduce calculation time in a fatigue life analysis. Therefore, the networks trained on different error functions are compared with respect to accuracy of rain flow counts of stress cycles over a number of time series simulations.
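The premise that the choice of training error function changes the fitted model can be seen even without a neural network. Below, a one-parameter model y = a·x is fitted under mean-square error and under mean-absolute error on the same data; the data, outlier pattern, and optimizer settings are invented for illustration only.

```python
# Toy comparison of two error functions on the same fit: MSE (closed form)
# versus MAE (subgradient descent). Outliers pull the MSE fit away from the
# true slope far more than the MAE fit.
import numpy as np

rng = np.random.default_rng(2)
x = np.linspace(0.0, 1.0, 100)
y = 2.0 * x + 0.05 * rng.standard_normal(100)   # true slope 2.0
y[::10] += 5.0                                  # a few large spurious spikes

# MSE has a closed-form minimizer for y = a*x
a_mse = np.sum(x * y) / np.sum(x * x)

# MAE minimized by subgradient descent with a decaying step size
a_mae = 0.0
for t in range(20000):
    grad = -np.mean(np.sign(y - a_mae * x) * x)
    a_mae -= 0.5 / np.sqrt(t + 1.0) * grad

print(f"true slope 2.0 | MSE fit {a_mse:.3f} | MAE fit {a_mae:.3f}")
```

In the paper's setting the "tailor-made" objective is accuracy of rain-flow counts rather than robustness to outliers, but the mechanism is the same: different error functions reward different aspects of the fit.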

  13. Investigation on coupling error characteristics in angular rate matching based ship deformation measurement approach

    Science.gov (United States)

    Yang, Shuai; Wu, Wei; Wang, Xingshu; Xu, Zhiguang

    2018-01-01

    The coupling error in the measurement of ship hull deformation can significantly influence the attitude accuracy of shipborne weapons and equipment. It is therefore important to study the characteristics of the coupling error. In this paper, a comprehensive investigation of the coupling error is reported, which has the potential to reduce the coupling error in the future. Firstly, the causes and characteristics of the coupling error are analyzed theoretically based on the basic theory of measuring ship deformation. Then, simulations are conducted to verify the correctness of the theoretical analysis. Simulation results show that the cross-correlation between dynamic flexure and ship angular motion leads to the coupling error in measuring ship deformation, and that the coupling error increases with the correlation between them. All the simulation results coincide with the theoretical analysis.

  14. Obesity increases precision errors in dual-energy X-ray absorptiometry measurements.

    Science.gov (United States)

    Knapp, Karen M; Welsman, Joanne R; Hopkins, Susan J; Fogelman, Ignac; Blake, Glen M

    2012-01-01

    The precision errors of dual-energy X-ray absorptiometry (DXA) measurements are important for monitoring osteoporosis. This study investigated the effect of body mass index (BMI) on precision errors for lumbar spine (LS), femoral neck (NOF), total hip (TH), and total body (TB) bone mineral density using the GE Lunar Prodigy. One hundred two women with BMIs ranging from 18.5 to 45.9 kg/m(2) were recruited. Participants had duplicate DXA scans of the LS, left hip, and TB with repositioning between scans. Participants were divided into 3 groups based on their BMI and the percentage coefficient of variation (%CV) calculated for each group. The %CVs for the normal (<25 kg/m(2)), overweight (25-30 kg/m(2)), and obese (>30 kg/m(2)) (n=28) BMI groups, respectively, were LS BMD: 0.99%, 1.30%, and 1.68%; NOF BMD: 1.32%, 1.37%, and 2.00%; TH BMD: 0.85%, 0.88%, and 1.06%; TB BMD: 0.66%, 0.73%, and 0.91%. Statistically significant differences in precision error between the normal and obese groups were found for LS (p=0.0006), NOF (p=0.005), and TB BMD (p=0.025). These results suggest that serial measurements in obese subjects should be treated with caution because the least significant change may be larger than anticipated. Copyright © 2012 The International Society for Clinical Densitometry. Published by Elsevier Inc. All rights reserved.
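The %CV precision error from duplicate scans with repositioning, as used above, is conventionally computed as the root-mean-square within-subject standard deviation divided by the mean. A minimal sketch of that calculation follows; the BMD values are invented, not the study's data.

```python
# Precision error (%CV) from duplicate measurements of the same subjects,
# using the RMS within-subject SD: sqrt(sum(d_i^2) / (2n)) over paired
# differences d_i, expressed as a percentage of the overall mean.
import numpy as np

def cv_percent(scan1, scan2):
    """%CV precision error from duplicate measurements."""
    scan1, scan2 = np.asarray(scan1, float), np.asarray(scan2, float)
    d = scan1 - scan2
    sd = np.sqrt(np.sum(d * d) / (2 * d.size))   # RMS within-subject SD
    return 100.0 * sd / np.mean((scan1 + scan2) / 2.0)

# hypothetical lumbar-spine BMD duplicates (g/cm^2) for four subjects
first  = [1.02, 0.95, 1.10, 0.88]
second = [1.00, 0.97, 1.08, 0.90]
print(f"precision error: {cv_percent(first, second):.2f} %CV")
```

The least significant change the abstract refers to is then typically taken as 2.77 times the precision error, which is why a higher %CV in obese subjects directly widens the change that can be declared real.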

  15. Measurement Error in Income and Schooling and the Bias of Linear Estimators

    DEFF Research Database (Denmark)

    Bingley, Paul; Martinello, Alessandro

    2017-01-01

    We propose a general framework for determining the extent of measurement error bias in ordinary least squares and instrumental variable (IV) estimators of linear models while allowing for measurement error in the validation source. We apply this method by validating Survey of Health, Ageing and Retirement in Europe data with Danish administrative registers. Contrary to most validation studies, we find that measurement error in income is classical once we account for imperfect validation data. We find nonclassical measurement error in schooling, causing a 38% amplification bias in IV estimators of the returns, with important implications for the program evaluation literature.
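The classical-measurement-error benchmark against which the paper's findings are stated can be simulated directly: OLS on a noisily measured regressor is attenuated by the reliability ratio λ = var(x) / (var(x) + var(noise)). The values below are illustrative, not from the study.

```python
# Simulation sketch of classical measurement error: with reliability 0.5,
# the OLS slope on the mismeasured regressor shrinks from 1.0 toward 0.5.
import numpy as np

rng = np.random.default_rng(3)
n = 100_000
x = rng.standard_normal(n)              # true regressor (e.g., income), var 1
y = 1.0 * x + rng.standard_normal(n)    # outcome with unit true coefficient
x_obs = x + rng.standard_normal(n)      # classical error, var 1 -> lambda = 0.5

beta_ols = np.cov(x_obs, y)[0, 1] / np.var(x_obs, ddof=1)
print(f"OLS slope on mismeasured x: {beta_ols:.3f}  (theory: ~0.5)")
```

Nonclassical error, as the paper finds for schooling, breaks this simple attenuation formula, which is why its IV estimators end up amplified rather than attenuated.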

  17. Pile volume measurement by range imaging camera in indoor environment

    Directory of Open Access Journals (Sweden)

    C. Altuntas

    2014-06-01

    Full Text Available The range imaging (RIM) camera is a recent technology for 3D location measurement. New study areas in measurement and data processing have emerged together with the RIM camera. It offers a low-cost and fast measurement technique compared to current techniques. However, its measurement accuracy varies according to effects resulting from the device and the environment. Direct sunlight affects the measurement accuracy of the camera; thus, the RIM camera should be used for indoor measurements. In this study, the volume of a gravel pile was measured by a SwissRanger SR4000 camera. The measured volume differed by 8.13% from the known volume.

  18. Quantifying the sampling error in tree census measurements by volunteers and its effect on carbon stock estimates.

    Science.gov (United States)

    Butt, Nathalie; Slade, Eleanor; Thompson, Jill; Malhi, Yadvinder; Riutta, Terhi

    2013-06-01

    A typical way to quantify aboveground carbon in forests is to measure tree diameters and use species-specific allometric equations to estimate biomass and carbon stocks. Using "citizen scientists" to collect data that are usually time-consuming and labor-intensive can play a valuable role in ecological research. However, data validation, such as establishing the sampling error in volunteer measurements, is a crucial, but little studied, part of utilizing citizen science data. The aims of this study were to (1) evaluate the quality of tree diameter and height measurements carried out by volunteers compared to expert scientists and (2) estimate how sensitive carbon stock estimates are to these measurement sampling errors. Using all diameter data measured with a diameter tape, the volunteer mean sampling error (difference between repeated measurements of the same stem) was 9.9 mm, and the expert sampling error was 1.8 mm. Excluding those sampling errors > 1 cm, the mean sampling errors were 2.3 mm (volunteers) and 1.4 mm (experts) (this excluded 14% [volunteer] and 3% [expert] of the data). The sampling error in diameter measurements had a small effect on the biomass estimates of the plots: a volunteer (expert) diameter sampling error of 2.3 mm (1.4 mm) translated into 1.7% (0.9%) change in the biomass estimates calculated from species-specific allometric equations based upon diameter. Height sampling error had a dependent relationship with tree height. Including height measurements in biomass calculations compounded the sampling error markedly; the impact of volunteer sampling error on biomass estimates was +/- 15%, and the expert range was +/- 9%. Using dendrometer bands, used to measure growth rates, we calculated that the volunteer (vs. expert) sampling error was 0.6 mm (vs. 0.3 mm), which is equivalent to a difference in carbon storage of +/- 0.011 kg C/yr (vs. +/- 0.002 kg C/yr) per stem. Using a citizen science model for monitoring carbon stocks not only has
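The propagation step above, a diameter sampling error translating into a percentage change in biomass through a species-specific allometric equation, can be sketched with a generic power-law form B = a·D^b. The coefficients below are hypothetical (the study uses species-specific equations); the point is that the relative biomass error is roughly b times the relative diameter error, which is why millimetre-scale diameter errors map to only a few percent of biomass.

```python
# Propagating a diameter sampling error through a power-law allometric
# equation. Coefficients a and b are invented placeholders.
def biomass(d_mm, a=0.05, b=2.5):
    """Hypothetical allometric biomass (arbitrary units) from diameter in mm."""
    return a * d_mm ** b

d, err = 300.0, 2.3            # 300 mm stem, 2.3 mm volunteer sampling error
rel_change = (biomass(d + err) - biomass(d)) / biomass(d)
linear_approx = 2.5 * err / d  # first-order: b * (dD / D)

print(f"exact: {rel_change:.4f}, linear approx: {linear_approx:.4f}")
```

The same first-order logic explains why including height, with its much larger and height-dependent sampling error, compounds the biomass uncertainty so markedly.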

  19. Human-Induced Effects on RSS Ranging Measurements for Cooperative Positioning

    OpenAIRE

    Francescantonio Della Rosa; Mauro Pelosi; Jari Nurmi

    2012-01-01

    We present experimental evaluations of human-induced perturbations on received-signal-strength-(RSS-) based ranging measurements for cooperative mobile positioning. To the best of our knowledge, this work is the first attempt to gain insight and understand the impact of both body loss and hand grip on the RSS for enhancing proximity measurements among neighbouring devices in cooperative scenarios. Our main contribution is represented by experimental investigations. Analysis of the errors intr...

  20. Acoustic absorption measurement of human hair and skin within the audible frequency range.

    Science.gov (United States)

    Katz, B F

    2000-11-01

    Utilizing the two-microphone impedance tube method, the acoustic absorption of human skin and hair is measured in the frequency range 1-6 kHz. Various locations on a number of human subjects are measured to determine if the presence of bone or an air pocket affects the acoustic absorption of human skin. The absorption coefficient of human hair is also measured. Additional techniques are utilized to minimize errors due to sample mounting methods. Techniques are employed to minimize potential errors in sensor and sample locations. The results of these measurements are compared to relevant historical papers on similar investigations. Results for skin measurements compare well with previous work. Measured hair absorption data do not agree with previous work in the area but do coincide with expected trends, which previous works do not.

  1. Measurement errors related to contact angle analysis of hydrogel and silicone hydrogel contact lenses.

    Science.gov (United States)

    Read, Michael L; Morgan, Philip B; Maldonado-Codina, Carole

    2009-11-01

    This work sought to undertake a comprehensive investigation of the measurement errors associated with contact angle assessment of curved hydrogel contact lens surfaces. The contact angle coefficient of repeatability (COR) associated with three measurement conditions (image analysis COR, intralens COR, and interlens COR) was determined by measuring the contact angles (using both sessile drop and captive bubble methods) for three silicone hydrogel lenses (senofilcon A, balafilcon A, lotrafilcon A) and one conventional hydrogel lens (etafilcon A). Image analysis COR values were about 2°, whereas intralens COR values (95% confidence intervals) ranged from 4.0° (3.3°, 4.7°) (lotrafilcon A, captive bubble) to 10.2° (8.4°, 12.1°) (senofilcon A, sessile drop). Interlens COR values ranged from 4.5° (3.7°, 5.2°) (lotrafilcon A, captive bubble) to 16.5° (13.6°, 19.4°) (senofilcon A, sessile drop). Measurement error associated with image analysis was shown to be small as an absolute measure, although proportionally more significant for lenses with low contact angle. Sessile drop contact angles were typically less repeatable than captive bubble contact angles. For sessile drop measures, repeatability was poorer with the silicone hydrogel lenses when compared with the conventional hydrogel lens; this phenomenon was not observed for the captive bubble method, suggesting that methodological factors related to the sessile drop technique (such as surface dehydration and blotting) may play a role in the increased variability of contact angle measurements observed with silicone hydrogel contact lenses.
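A coefficient of repeatability like the ones reported above can be computed from paired repeated measurements; a common Bland-Altman-style definition is COR = 1.96 × SD of the paired differences (the exact formula used in the study is not specified here, and the contact-angle data below are invented).

```python
# Coefficient of repeatability (COR) from two repeated measurement runs,
# using COR = 1.96 * SD of the paired differences.
import numpy as np

def coefficient_of_repeatability(m1, m2):
    diffs = np.asarray(m1, float) - np.asarray(m2, float)
    return 1.96 * np.std(diffs, ddof=1)

# hypothetical sessile-drop contact angles (degrees), two runs per lens
run1 = [98.0, 101.5, 95.2, 103.1, 99.8]
run2 = [96.5, 104.0, 97.0, 100.0, 101.2]
print(f"COR: {coefficient_of_repeatability(run1, run2):.1f} degrees")
```

Under this definition, 95% of repeat measurements are expected to fall within ±COR of each other, which is what makes the ~16° interlens figure for senofilcon A practically significant.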

  2. Sensorless SPMSM Position Estimation Using Position Estimation Error Suppression Control and EKF in Wide Speed Range

    Directory of Open Access Journals (Sweden)

    Zhanshan Wang

    2014-01-01

    Full Text Available The control of a high-performance alternating current (AC) motor drive under sensorless operation needs accurate estimation of the rotor position. In this paper, a method of accurately estimating rotor position by using both motor complex-number-model-based position estimation and a position estimation error suppression proportional-integral (PI) controller is proposed for the sensorless control of the surface permanent magnet synchronous motor (SPMSM). In order to guarantee the accuracy of rotor position estimation in the flux-weakening region, a scheme for identifying the permanent magnet flux of the SPMSM by an extended Kalman filter (EKF) is also proposed, which forms an effective combined method to realize the sensorless control of the SPMSM with high accuracy. The simulation results demonstrate the validity and feasibility of the proposed position/speed estimation system.

  3. Errors in GNSS radio occultation data: relevance of the measurement geometry and obliquity of profiles

    Directory of Open Access Journals (Sweden)

    U. Foelsche

    2011-02-01

    Full Text Available Atmospheric profiles retrieved from GNSS (Global Navigation Satellite System) radio occultation (RO) measurements are increasingly used to validate other measurement data. For this purpose it is important to be aware of the characteristics of RO measurements. RO data are frequently compared with vertical reference profiles, but the RO method does not provide vertical scans through the atmosphere. The average elevation angle of the tangent point trajectory (which would be 90° for a vertical scan) is about 40° at altitudes above 70 km, decreasing to about 25° at 20 km and to less than 5° below 3 km. In an atmosphere with high horizontal variability we can thus expect noticeable representativeness errors if the retrieved profiles are compared with vertical reference profiles. We have performed an end-to-end simulation study using high-resolution analysis fields (T799L91) from the European Centre for Medium-Range Weather Forecasts (ECMWF) to simulate a representative ensemble of RO profiles via high-precision 3-D ray tracing. Thereby we focused on the dependence of systematic and random errors on the measurement geometry, specifically on the incidence angle of the RO measurement rays with respect to the orbit plane of the receiving satellite, also termed azimuth angle, which determines the obliquity of RO profiles. We analyzed by how much errors are reduced if the reference profile is not taken vertical at the mean tangent point but along the retrieved tangent point trajectory (TPT) of the RO profile. The exact TPT can only be determined by performing ray tracing, but our results confirm that the retrieved TPT – calculated from observed impact parameters – is a very good approximation to the "true" one. Systematic and random errors in RO data increase with increasing azimuth angle, less if the TPT is properly taken into account, since the increasing obliquity of the RO profiles leads to an increasing sensitivity to departures from horizontal

  4. The misinterpretation of the standard error of measurement in medical education: a primer on the problems, pitfalls and peculiarities of the three different standard errors of measurement.

    Science.gov (United States)

    McManus, I C

    2012-01-01

    In high-stakes assessments in medical education, such as final undergraduate examinations and postgraduate assessments, an attempt is frequently made to set confidence limits on the probable true score of a candidate. Typically, this is carried out using what is referred to as the standard error of measurement (SEM). However, it is often the case that the wrong formula is applied, there actually being three different formulae for use in different situations. To explain and clarify the calculation of the SEM, and differentiate three separate standard errors, which here are called the standard error of measurement (SEmeas), the standard error of estimation (SEest) and the standard error of prediction (SEpred). Most accounts describe the calculation of SEmeas. For most purposes, though, what is required is the standard error of estimation (SEest), which has to be applied not to a candidate's actual score but to their estimated true score after taking into account the regression to the mean that occurs due to the unreliability of an assessment. A third formula, the standard error of prediction (SEpred) is less commonly used in medical education, but is useful in situations such as counselling, where one needs to predict a future actual score on an examination from a previous actual score on the same examination. The various formulae can produce predictions that differ quite substantially, particularly when reliability is not particularly high, and the mark in question is far removed from the average performance of candidates. That can have important, unintended consequences, particularly in a medico-legal context.
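The three standard errors the abstract distinguishes have compact classical-test-theory forms (sd = score standard deviation, r = test reliability): SEmeas = sd·√(1−r), SEest = sd·√(r(1−r)), and SEpred = sd·√(1−r²). A small sketch makes the divergence concrete; the sd and reliability values below are illustrative only.

```python
# The three standard errors from classical test theory, side by side.
import math

def se_meas(sd, r):   # spread of observed scores around a true score
    return sd * math.sqrt(1 - r)

def se_est(sd, r):    # spread of true scores around an estimated true score
    return sd * math.sqrt(r * (1 - r))

def se_pred(sd, r):   # spread of a future observed score given a past one
    return sd * math.sqrt(1 - r * r)

sd = 10.0
for r in (0.9, 0.7):
    print(f"r={r}: SEmeas={se_meas(sd, r):.2f}, "
          f"SEest={se_est(sd, r):.2f}, SEpred={se_pred(sd, r):.2f}")
```

Note that SEest must be applied to the estimated true score, mean + r·(observed − mean), not to the observed score itself, which is the regression-to-the-mean adjustment the abstract emphasises; the gap between the three formulas widens as reliability falls.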

  5. Design Optimization for the Measurement Accuracy Improvement of a Large Range Nanopositioning Stage

    Directory of Open Access Journals (Sweden)

    Marta Torralba

    2016-01-01

    Full Text Available Both an accurate machine design and an adequate metrology loop definition are critical factors when precision positioning represents a key issue for the final system performance. This article discusses the error budget methodology as an advantageous technique to improve the measurement accuracy of a 2D long-range stage during its design phase. The nanopositioning platform NanoPla is presented here. Its specifications (e.g., an XY travel range of 50 mm × 50 mm and sub-micrometric accuracy) and some novel design solutions (e.g., a three-layer, two-stage architecture) are described. Once the prototype is defined, an error analysis is performed to propose design improvements. Then, the metrology loop of the system is mathematically modelled to define the propagation of the different error sources. Several simplifications and design hypotheses are justified and validated, including the assumption of rigid-body behavior, which is demonstrated by finite element analysis. The different error sources and their estimated contributions are enumerated in order to conclude with the final error values obtained from the error budget. The measurement deviations obtained demonstrate the important influence of the working environmental conditions, the flatness error of the plane mirror reflectors, and the accurate manufacture and assembly of the components forming the metrological loop. Thus, with temperature control of ±0.1 °C, the developed NanoPla stage achieves an acceptable maximum positioning error, i.e., 41 nm, 36 nm and 48 nm in the X-, Y- and Z-axis, respectively.

  6. Discrete filtering techniques applied to sequential GPS range measurements

    Science.gov (United States)

    Vangraas, Frank

    1987-01-01

    The basic navigation solution is described for position and velocity based on range and delta range (Doppler) measurements from NAVSTAR Global Positioning System satellites. The application of discrete filtering techniques is examined to reduce the white noise distortions on the sequential range measurements. A second order (position and velocity states) Kalman filter is implemented to obtain smoothed estimates of range by filtering the dynamics of the signal from each satellite separately. Test results using a simulated GPS receiver show a steady-state noise reduction, the input noise variance divided by the output noise variance, of a factor of four. Recommendations for further noise reduction based on higher order Kalman filters or additional delta range measurements are included.
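    The per-satellite filtering scheme described above can be sketched with a minimal two-state (range, range-rate) Kalman filter. The parameter values below (1 s update interval, 25 m² measurement variance, small white-acceleration process noise) are illustrative assumptions, not the settings of the simulated GPS receiver in the study:

```python
import random

def smooth_ranges(zs, dt=1.0, meas_var=25.0, accel_var=0.01):
    """Second-order (range, range-rate) Kalman filter over one satellite's
    sequential range measurements zs."""
    x = [zs[0], 0.0]                     # state: range, range rate
    P = [[meas_var, 0.0], [0.0, 100.0]]  # covariance, with a loose velocity prior
    smoothed = []
    for z in zs:
        # Predict: constant-velocity model with white-acceleration noise q
        x = [x[0] + dt * x[1], x[1]]
        q = accel_var
        p00 = P[0][0] + dt * (P[0][1] + P[1][0]) + dt * dt * P[1][1] + q * dt**4 / 4
        p01 = P[0][1] + dt * P[1][1] + q * dt**3 / 2
        p10 = P[1][0] + dt * P[1][1] + q * dt**3 / 2
        p11 = P[1][1] + q * dt * dt
        P = [[p00, p01], [p10, p11]]
        # Update with the range measurement (observation matrix H = [1, 0])
        s = P[0][0] + meas_var
        k0, k1 = P[0][0] / s, P[1][0] / s
        innov = z - x[0]
        x = [x[0] + k0 * innov, x[1] + k1 * innov]
        P = [[(1 - k0) * P[0][0], (1 - k0) * P[0][1]],
             [P[1][0] - k1 * P[0][0], P[1][1] - k1 * P[0][1]]]
        smoothed.append(x[0])
    return smoothed

# Illustrative run: range increasing at 2 m/s, white noise with sigma = 5 m
random.seed(1)
truth = [10.0 + 2.0 * t for t in range(200)]
noisy = [r + random.gauss(0.0, 5.0) for r in truth]
est = smooth_ranges(noisy)
raw_mse = sum((n - t) ** 2 for n, t in zip(noisy[100:], truth[100:])) / 100
kf_mse = sum((e - t) ** 2 for e, t in zip(est[100:], truth[100:])) / 100
# after convergence, the filtered ranges show a markedly lower error variance
```

    The actual noise-reduction factor depends on the ratio of process to measurement noise; the factor of four quoted in the abstract applies to the study's receiver simulation, not to this sketch.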

  7. Analysis of liquid medication dose errors made by patients and caregivers using alternative measuring devices.

    Science.gov (United States)

    Ryu, Gyeong Suk; Lee, Yu Jeung

    2012-01-01

    Patients use several types of devices to measure liquid medication. Using a criterion ranging from a 10% to 40% variation from a target 5 mL for a teaspoon dose, previous studies have found that a considerable proportion of patients or caregivers make errors when dosing liquid medication with measuring devices. To determine the rate and magnitude of liquid medication dose errors that occur with patient/caregiver use of various measuring devices in a community pharmacy. Liquid medication measurements by patients or caregivers were observed in a convenience sample of community pharmacy patrons in Korea during a 2-week period in March 2011. Participants included all patients or caregivers (N = 300) who came to the pharmacy to buy over-the-counter liquid medication or to have a liquid medication prescription filled during the study period. The participants were instructed by an investigator who was also a pharmacist to select their preferred measuring devices from 6 alternatives (etched-calibration dosing cup, printed-calibration dosing cup, dosing spoon, syringe, dispensing bottle, or spoon with a bottle adapter) and measure a 5 mL dose of Coben (chlorpheniramine maleate/phenylephrine HCl, Daewoo Pharm. Co., Ltd) syrup using the device of their choice. The investigator used an ISOLAB graduated cylinder (Germany, blue grad, 10 mL) to measure the amount of syrup dispensed by the study participants. Participant characteristics were recorded including gender, age, education level, and relationship to the person for whom the medication was intended. Of the 300 participants, 257 (85.7%) were female; 286 (95.3%) had at least a high school education; and 282 (94.0%) were caregivers (parent or grandparent) for the patient. The mean (SD) measured dose was 4.949 (0.378) mL for the 300 participants. In analysis of variance of the 6 measuring devices, the greatest difference from the 5 mL target was a mean 5.552 mL for 17 subjects who used the regular (etched) dosing cup and 4

  8. Comparison of methods of measuring active cervical range of motion.

    Science.gov (United States)

    Whitcroft, Katherine L; Massouh, Laura; Amirfeyz, Rouin; Bannister, Gordon

    2010-09-01

    Experimental study. Cervical range of motion (CROM) was measured using different clinical methods. The objective was to compare the reliability and accuracy of visual estimation, tape measurement, and the universal goniometer (UG) with that of the CROM goniometer in measuring active CROM in healthy volunteers. The secondary objective was to identify the single neck movement that best represents overall range of motion. Neck movement is affected by pathology in the spine and shoulder. A reliable and accurate measurement of neck movement is required to quantify injury, recovery, and disability. Various methods of measuring neck movement have been described, of which radiography remains the accepted reference standard. However, radiography is impractical for routine clinical assessment. Visual estimation, tape measurement, and the UG are convenient alternatives. To date, the accuracy and reliability of these methods have not been compared in healthy subjects, and the single neck movement that best reflects overall range has not yet been identified. Active cervical flexion, extension, right and left lateral flexion and rotation were measured in 100 healthy volunteers. Visual estimation, tape measurement between fixed landmarks, and the UG aligned on fixed and anatomic landmarks were compared with the CROM goniometer, which was used as the reference standard. Compared with the CROM goniometer, the UG aligned on fixed landmarks was the most accurate method, followed by the UG on anatomic landmarks. The reliability of the UG was between substantial and perfect. Visual estimation was reproducible but measured range of movement inaccurately. Tape measurement was inaccurate. Extension best reflected overall range. The UG aligned on a fixed landmark is the most reliable method of measuring neck movement clinically. Where range must be quickly assessed, extension should be measured.

  9. Impact of Hydraulic Property Measurement Errors on Geostatistical Characterization and Stochastic Flow and Transport Modeling

    Science.gov (United States)

    Holt, R. M.

    2001-12-01

    It has long been recognized that the spatial variability of hydraulic properties in heterogeneous geologic materials directly controls the movement of contaminants in the subsurface. Heterogeneity is typically described using spatial statistics (mean, variance, and correlation length) determined from measured properties. These spatial statistics can be used in probabilistic (stochastic) flow and transport models. We ask the question, how do measurement errors affect our ability to accurately estimate spatial statistics and reliably apply stochastic models of flow and transport? Spatial statistics of hydraulic properties can be accurately estimated when measurement errors are unbiased. Unfortunately, measurements become spatially biased (i.e., their spatial pattern is systematically distorted) when random observation errors are propagated through non-linear inversion models or inversion models incorrectly describe experimental physics. This type of bias results in distortion of the distribution and variogram of the hydraulic property and errors in stochastic model predictions. We use a Monte Carlo approach to determine the spatial bias in field- and laboratory-estimated unsaturated hydraulic properties subject to simple measurement errors. For this analysis, we simulate measurements in a series of idealized realities and consider only simple measurement errors that can be easily modeled. We find that hydraulic properties are strongly biased by small observation and inversion-model errors. This bias can lead to order-of-magnitude errors in spatial statistics and artificial cross-correlation between measured properties. We also find that measurement errors amplify uncertainty in experimental variograms and can preclude identification of variogram-model parameters. The use of biased spatial statistics in stochastic flow and transport models can yield order-of-magnitude errors in critical transport results. The effects of observation and inversion-model errors are

  10. Comparing measurement error correction methods for rate-of-change exposure variables in survival analysis.

    Science.gov (United States)

    Veronesi, Giovanni; Ferrario, Marco M; Chambless, Lloyd E

    2013-12-01

    In this article we focus on comparing measurement error correction methods for rate-of-change exposure variables in survival analysis, when longitudinal data are observed prior to the follow-up time. Motivational examples include the analysis of the association between changes in cardiovascular risk factors and subsequent onset of coronary events. We derive a measurement error model for the rate of change, estimated through subject-specific linear regression, assuming an additive measurement error model for the time-specific measurements. The rate of change is then included as a time-invariant variable in a Cox proportional hazards model, adjusting for the first time-specific measurement (baseline) and an error-free covariate. In a simulation study, we compared bias, standard deviation and mean squared error (MSE) for the regression calibration (RC) and the simulation-extrapolation (SIMEX) estimators. Our findings indicate that when the amount of measurement error is substantial, RC should be the preferred method, since it has smaller MSE for estimating the coefficients of the rate of change and of the variable measured without error. However, when the amount of measurement error is small, the choice of the method should take into account the event rate in the population and the effect size to be estimated. An application to an observational study, as well as examples of published studies where our model could have been applied, are also provided.
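    As an illustration of the regression calibration idea, here is a deliberately simplified linear-outcome sketch, not the Cox survival setting of the article; the measurement-error variance is assumed known, and all data are simulated:

```python
import random

random.seed(42)
n = 5000
var_u = 1.0                                              # known error variance
x = [random.gauss(0.0, 1.0) for _ in range(n)]           # true exposure (rate of change)
w = [xi + random.gauss(0.0, var_u ** 0.5) for xi in x]   # error-prone measurement
y = [2.0 * xi + random.gauss(0.0, 0.5) for xi in x]      # outcome; true slope = 2

def slope(u, v):
    """Ordinary least-squares slope of v on u."""
    mu, mv = sum(u) / len(u), sum(v) / len(v)
    return (sum((a - mu) * (b - mv) for a, b in zip(u, v))
            / sum((a - mu) ** 2 for a in u))

naive = slope(w, y)            # attenuated toward zero (about 2 * 0.5 = 1 here)
# Regression calibration: replace w with E[x | w]
mu_w = sum(w) / n
var_w = sum((wi - mu_w) ** 2 for wi in w) / (n - 1)
lam = (var_w - var_u) / var_w  # reliability ratio
w_cal = [mu_w + lam * (wi - mu_w) for wi in w]
corrected = slope(w_cal, y)    # approximately recovers the true slope of 2
```

    The naive estimate is attenuated by the reliability ratio, while the calibrated exposure restores the slope; SIMEX, the competing method in the article, instead extrapolates from fits at deliberately inflated error levels.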

  11. Analysis on the dynamic error for optoelectronic scanning coordinate measurement network

    Science.gov (United States)

    Shi, Shendong; Yang, Linghui; Lin, Jiarui; Guo, Siyang; Ren, Yongjie

    2018-01-01

    Large-scale dynamic three-dimensional coordinate measurement techniques are in strong demand in equipment manufacturing. Noted for its advantages of high accuracy, scale expandability and multitask parallel measurement, the optoelectronic scanning measurement network has attracted close attention. It is widely used in large-component joining, spacecraft rendezvous and docking simulation, digital shipbuilding and automated guided vehicle navigation. At present, most research on optoelectronic scanning measurement networks focuses on static measurement capacity, and research on dynamic accuracy is insufficient. Limited by the measurement principle, the dynamic error is non-negligible and restricts applications. The workshop measurement and positioning system is a representative system that can, in theory, realize dynamic measurement. In this paper we investigate the sources of dynamic error in depth and divide them into two parts: phase error and synchronization error. A dynamic error model is constructed. Based on this model, simulations of the dynamic error are carried out. The dynamic error is quantified, and rules of volatility and periodicity are found. Dynamic error characteristics are shown in detail. These results lay the foundation for further accuracy improvement.

  12. A Unified Approach to Measurement Error and Missing Data: Overview and Applications

    Science.gov (United States)

    Blackwell, Matthew; Honaker, James; King, Gary

    2017-01-01

    Although social scientists devote considerable effort to mitigating measurement error during data collection, they often ignore the issue during data analysis. And although many statistical methods have been proposed for reducing measurement error-induced biases, few have been widely used because of implausible assumptions, high levels of model…

  13. A Unified Approach to Measurement Error and Missing Data: Details and Extensions

    Science.gov (United States)

    Blackwell, Matthew; Honaker, James; King, Gary

    2017-01-01

    We extend a unified and easy-to-use approach to measurement error and missing data. In our companion article, Blackwell, Honaker, and King give an intuitive overview of the new technique, along with practical suggestions and empirical applications. Here, we offer more precise technical details, more sophisticated measurement error model…

  14. Do Survey Data Estimate Earnings Inequality Correctly? Measurement Errors among Black and White Male Workers

    Science.gov (United States)

    Kim, ChangHwan; Tamborini, Christopher R.

    2012-01-01

    Few studies have considered how earnings inequality estimates may be affected by measurement error in self-reported earnings in surveys. Utilizing restricted-use data that links workers in the Survey of Income and Program Participation with their W-2 earnings records, we examine the effect of measurement error on estimates of racial earnings…

  15. Comparing Graphical and Verbal Representations of Measurement Error in Test Score Reports

    Science.gov (United States)

    Zwick, Rebecca; Zapata-Rivera, Diego; Hegarty, Mary

    2014-01-01

    Research has shown that many educators do not understand the terminology or displays used in test score reports and that measurement error is a particularly challenging concept. We investigated graphical and verbal methods of representing measurement error associated with individual student scores. We created four alternative score reports, each…

  16. Exploring the Effectiveness of a Measurement Error Tutorial in Helping Teachers Understand Score Report Results

    Science.gov (United States)

    Zapata-Rivera, Diego; Zwick, Rebecca; Vezzu, Margaret

    2016-01-01

    The goal of this study was to explore the effectiveness of a short web-based tutorial in helping teachers to better understand the portrayal of measurement error in test score reports. The short video tutorial included both verbal and graphical representations of measurement error. Results showed a significant difference in comprehension scores…

  17. Inter-rater reliability and measurement error of sonographic muscle architecture assessments.

    Science.gov (United States)

    König, Niklas; Cassel, Michael; Intziegianni, Konstantina; Mayer, Frank

    2014-05-01

    Sonography of muscle architecture provides physicians and researchers with information about muscle function and muscle-related disorders. Inter-rater reliability is a crucial parameter in daily clinical routines. The aim of this study was to assess the inter-rater reliability of sonographic muscle architecture assessments and quantification of errors that arise from inconsistent probe positioning and image interpretation. The medial gastrocnemius muscle of 15 healthy participants was measured with sagittal B-mode ultrasound scans. The muscle thickness, fascicle length, superior pennation angle, and inferior pennation angle were assessed. The participants were examined by 2 investigators. A custom-made foam cast was used for standardized positioning of the probe. To analyze inter-rater reliability, the examinations of both raters were compared. The impact of probe positioning was assessed by comparison of foam cast and freehand scans. Error arising from picture interpretation was assessed by comparing the investigators' analyses of foam cast scans independently. Reliability was expressed as the intraclass correlation coefficient (ICC), inter-rater variability (IRV), Bland-Altman analysis (bias ± limits of agreement [LoA]), and standard error of measurement (SEM). Inter-rater reliability was good overall (ICC, 0.77-0.90; IRV, 9.0%-13.4%; bias ± LoA, 0.2 ± 0.2-1.7 ± 3.0). Superior and inferior pennation angles showed high systematic bias and LoA in all setups, ranging from 2.0° ± 2.2° to 3.4° ± 4.1°. The highest IRV was found for muscle thickness (13.4%). When the probe position was standardized, the SEM for muscle thickness decreased from 0.1 to 0.05 cm. Sonographic examination of muscle architecture of the medial gastrocnemius has good to high reliability. In contrast to pennation angle measurements, length measurements can be improved by standardization of the probe position.
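    The agreement statistics used above are straightforward to compute; a minimal sketch with hypothetical two-rater muscle-thickness readings (not the study's data):

```python
import math
import statistics

def bland_altman(rater1, rater2):
    """Bias and 1.96-SD limits of agreement (LoA) between two raters."""
    diffs = [a - b for a, b in zip(rater1, rater2)]
    bias = statistics.mean(diffs)
    loa = 1.96 * statistics.stdev(diffs)
    return bias, loa

def sem_from_icc(sd, icc):
    """Standard error of measurement from the between-subject SD and an ICC."""
    return sd * math.sqrt(1 - icc)

# Hypothetical muscle-thickness readings (cm) from two raters
r1 = [1.62, 1.75, 1.58, 1.81, 1.69]
r2 = [1.60, 1.78, 1.55, 1.85, 1.70]
bias, loa = bland_altman(r1, r2)
```

    A bias near zero with narrow limits of agreement indicates good inter-rater agreement; the SEM expresses the same reliability information in the measurement's own units, which is why standardizing probe position (raising the ICC) directly shrinks it.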

  18. Working with Error and Uncertainty to Increase Measurement Validity

    Science.gov (United States)

    Amrein-Beardsley, Audrey; Barnett, Joshua H.

    2012-01-01

    Over the previous two decades, the era of accountability has amplified efforts to measure educational effectiveness more than Edward Thorndike, the father of educational measurement, likely would have imagined. Expressly, the measurement structure for evaluating educational effectiveness continues to rely increasingly on one sole…

  19. Sources of measurement error in laser Doppler vibrometers and proposal for unified specifications

    Science.gov (United States)

    Siegmund, Georg

    2008-06-01

    The focus of this paper is to disclose sources of measurement error in laser Doppler vibrometers (LDVs) and to suggest specifications suitable for describing their impact on measurement uncertainty. Measurement errors may originate in both the optics and electronics sections of an LDV, whether caused by non-ideal measurement conditions or by imperfect technical realisation. While the contribution of the optics part can be neglected in most cases, the subsequent signal-processing chain may cause significant errors. Measurement error due to non-ideal behaviour of the interferometer has been observed mainly at very low vibration amplitudes and depends on the optical arrangement. The paper is organized as follows: electronic signal-processing blocks, beginning with the photodetector, are analyzed with respect to their contribution to measurement uncertainty. A set of specifications is suggested, adopting vocabulary and definitions known from traditional vibration measurement equipment. Finally, a measurement setup is introduced that is suitable for determining most specifications using standard electronic measurement equipment.

  20. The effect of systematic measurement errors on atmospheric CO2 inversions: a quantitative assessment

    Directory of Open Access Journals (Sweden)

    C. Rödenbeck

    2006-01-01

    Full Text Available Surface-atmosphere exchange fluxes of CO2, estimated by an interannual atmospheric transport inversion from atmospheric mixing ratio measurements, are affected by several sources of errors, one of which is experimental errors. Quantitative information about such measurement errors can be obtained from regular co-located measurements done by different laboratories or using different experimental techniques. The present quantitative assessment is based on intercomparison information from the CMDL and CSIRO atmospheric measurement programs. We show that the effects of systematic measurement errors on inversion results are very small compared to other errors in the flux estimation (as well as compared to signal variability). As a practical consequence, this assessment justifies the merging of data sets from different laboratories or different experimental techniques (flask and in-situ), if systematic differences (and their changes) are comparable to those considered here. This work also highlights the importance of regular intercomparison programs.

  1. The effect of systematic measurement errors on atmospheric CO2 inversions: a quantitative assessment

    Science.gov (United States)

    Rödenbeck, C.; Conway, T. J.; Langenfelds, R. L.

    2006-01-01

    Surface-atmosphere exchange fluxes of CO2, estimated by an interannual atmospheric transport inversion from atmospheric mixing ratio measurements, are affected by several sources of errors, one of which is experimental errors. Quantitative information about such measurement errors can be obtained from regular co-located measurements done by different laboratories or using different experimental techniques. The present quantitative assessment is based on intercomparison information from the CMDL and CSIRO atmospheric measurement programs. We show that the effects of systematic measurement errors on inversion results are very small compared to other errors in the flux estimation (as well as compared to signal variability). As a practical consequence, this assessment justifies the merging of data sets from different laboratories or different experimental techniques (flask and in-situ), if systematic differences (and their changes) are comparable to those considered here. This work also highlights the importance of regular intercomparison programs.

  2. Metrological Array of Cyber-Physical Systems. Part 11. Remote Error Correction of Measuring Channel

    Directory of Open Access Journals (Sweden)

    Yuriy YATSUK

    2015-09-01

    Full Text Available For multi-channel measuring instruments with both the classical structure and the isolated one, the major factors behind their errors are identified on the basis of an analysis of their general metrological properties. The limiting possibilities of the remote automatic method for correcting the additive and multiplicative errors of measuring instruments with the help of code-controlled measures are studied. For on-site calibration of multi-channel measuring instruments, portable voltage calibrator structures are suggested and their metrological properties during automatic error adjustment are analysed. It was experimentally found that the unadjusted error value does not exceed ±1 mV, which satisfies most industrial applications. This confirms the main claim concerning the possibility of remote error self-adjustment of multi-channel measuring instruments, as well as their use as calibration tools for verification.

  3. Detecting genotyping error using measures of degree of Hardy-Weinberg disequilibrium.

    Science.gov (United States)

    Attia, John; Thakkinstian, Ammarin; McElduff, Patrick; Milne, Elizabeth; Dawson, Somer; Scott, Rodney J; Klerk, Nicholas de; Armstrong, Bruce; Thompson, John

    2010-01-01

    Tests for Hardy-Weinberg equilibrium (HWE) have been used to detect genotyping error, but those tests have low power unless the sample size is very large. We assessed the performance of measures of departure from HWE as an alternative way of screening for genotyping error. Three measures of the degree of disequilibrium (alpha, D, and F) were tested for their ability to detect genotyping error of 5% or more using simulations and a real dataset of 184 children with leukemia genotyped at 28 single nucleotide polymorphisms. The simulations indicate that all three disequilibrium coefficients can usefully detect genotyping error as judged by the area under the Receiver Operator Characteristic (ROC) curve. Their discriminative ability increases as the error rate increases, and is greater if the genotyping error is in the direction of the minor allele. Optimal thresholds for detecting genotyping error vary for different allele frequencies and patterns of genotyping error, but allele frequency-specific thresholds can be nominated. Applying these thresholds would have picked up about 90% of genotyping errors in our actual dataset. Measures of departure from HWE may be useful for detecting genotyping error, but this needs to be confirmed in other real datasets.
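    As an illustration, two commonly used disequilibrium coefficients can be computed directly from genotype counts. The definitions below follow standard population-genetics usage (and may differ in detail from the parameterization in the article); the example counts are hypothetical:

```python
def hwe_disequilibrium(n_aa, n_ab, n_bb):
    """F (heterozygote deficit) and D (homozygote excess) at a biallelic locus."""
    n = n_aa + n_ab + n_bb
    p = (2 * n_aa + n_ab) / (2 * n)  # frequency of allele A
    q = 1 - p
    h_obs = n_ab / n                 # observed heterozygosity
    h_exp = 2 * p * q                # expected heterozygosity under HWE
    f = 1 - h_obs / h_exp
    d = n_aa / n - p * p             # departure of the AA frequency from p^2
    return f, d

f0, d0 = hwe_disequilibrium(25, 50, 25)  # perfect HWE: both coefficients are 0
f1, d1 = hwe_disequilibrium(30, 40, 30)  # heterozygote deficit: F = 0.2, D = 0.05
```

    Genotyping errors that systematically miscall heterozygotes push F and D away from zero even when a formal HWE test is underpowered, which is the screening idea exploited above.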

  4. Sharing is caring? Measurement error and the issues arising from combining 3D morphometric datasets.

    Science.gov (United States)

    Fruciano, Carmelo; Celik, Mélina A; Butler, Kaylene; Dooley, Tom; Weisbecker, Vera; Phillips, Matthew J

    2017-09-01

    Geometric morphometrics is routinely used in ecology and evolution and morphometric datasets are increasingly shared among researchers, allowing for more comprehensive studies and higher statistical power (as a consequence of increased sample size). However, sharing of morphometric data opens up the question of how much nonbiologically relevant variation (i.e., measurement error) is introduced in the resulting datasets and how this variation affects analyses. We perform a set of analyses based on an empirical 3D geometric morphometric dataset. In particular, we quantify the amount of error associated with combining data from multiple devices and digitized by multiple operators and test for the presence of bias. We also extend these analyses to a dataset obtained with a recently developed automated method, which does not require human-digitized landmarks. Further, we analyze how measurement error affects estimates of phylogenetic signal and how its effect compares with the effect of phylogenetic uncertainty. We show that measurement error can be substantial when combining surface models produced by different devices and even more among landmarks digitized by different operators. We also document the presence of small, but significant, amounts of nonrandom error (i.e., bias). Measurement error is heavily reduced by excluding landmarks that are difficult to digitize. The automated method we tested had low levels of error, if used in combination with a procedure for dimensionality reduction. Estimates of phylogenetic signal can be more affected by measurement error than by phylogenetic uncertainty. Our results generally highlight the importance of landmark choice and the usefulness of estimating measurement error. Further, measurement error may limit comparisons of estimates of phylogenetic signal across studies if these have been performed using different devices or by different operators. Finally, we also show how widely held assumptions do not always hold true

  5. Improved characterisation and modelling of measurement errors in electrical resistivity tomography (ERT) surveys

    Science.gov (United States)

    Tso, Chak-Hau Michael; Kuras, Oliver; Wilkinson, Paul B.; Uhlemann, Sebastian; Chambers, Jonathan E.; Meldrum, Philip I.; Graham, James; Sherlock, Emma F.; Binley, Andrew

    2017-11-01

    Measurement errors can play a pivotal role in geophysical inversion. Most inverse models require users to prescribe or assume a statistical model of data errors before inversion. Wrongly prescribed errors can lead to over- or under-fitting of data; however, the derivation of models of data errors is often neglected. With the heightening interest in uncertainty estimation within hydrogeophysics, better characterisation and treatment of measurement errors is needed to provide improved image appraisal. Here we focus on the role of measurement errors in electrical resistivity tomography (ERT). We have analysed two time-lapse ERT datasets: one contains 96 sets of direct and reciprocal data collected from a surface ERT line within a 24 h timeframe; the other is a two-year-long cross-borehole survey at a UK nuclear site with 246 sets of over 50,000 measurements. Our study includes the characterisation of the spatial and temporal behaviour of measurement errors using autocorrelation and correlation coefficient analysis. We find that, in addition to well-known proportionality effects, ERT measurements can also be sensitive to the combination of electrodes used, i.e. errors may not be uncorrelated as often assumed. Based on these findings, we develop a new error model that allows grouping based on electrode number in addition to fitting a linear model to transfer resistance. The new model explains the observed measurement errors better and shows superior inversion results and uncertainty estimates in synthetic examples. It is robust, because it groups errors together based on the electrodes used to make the measurements. The new model can be readily applied to the diagonal data weighting matrix widely used in common inversion methods, as well as to the data covariance matrix in a Bayesian inversion framework. We demonstrate its application using extensive ERT monitoring datasets from the two aforementioned sites.
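    As a sketch of the baseline that the proposed model extends, a linear error model can be fitted to direct/reciprocal transfer-resistance pairs. The per-electrode grouping introduced in the article is omitted here, and the data are synthetic:

```python
def fit_error_model(direct, reciprocal):
    """Least-squares fit of |error| = a + b*|R| from direct/reciprocal
    transfer-resistance pairs."""
    r_best = [0.5 * (d + r) for d, r in zip(direct, reciprocal)]  # best estimate of R
    err = [abs(d - r) for d, r in zip(direct, reciprocal)]        # reciprocal error
    x = [abs(v) for v in r_best]
    n = len(x)
    mx, me = sum(x) / n, sum(err) / n
    b = (sum((xi - mx) * (ei - me) for xi, ei in zip(x, err))
         / sum((xi - mx) ** 2 for xi in x))
    a = me - b * mx
    return a, b

# Synthetic pairs whose reciprocal error is exactly 0.01 + 0.02 * R
resistances = [1.0, 2.0, 5.0, 10.0, 20.0]
direct = [r + 0.5 * (0.01 + 0.02 * r) for r in resistances]
reciprocal = [r - 0.5 * (0.01 + 0.02 * r) for r in resistances]
a, b = fit_error_model(direct, reciprocal)  # recovers a ≈ 0.01, b ≈ 0.02
```

    The fitted intercept and slope feed directly into the diagonal data-weighting matrix of a conventional inversion; the article's contribution is to additionally group errors by the electrodes used, capturing the correlation that this simple model ignores.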

  6. Testing accuracy of long-range ultrasonic sensors for olive tree canopy measurements.

    Science.gov (United States)

    Gamarra-Diezma, Juan Luis; Miranda-Fuentes, Antonio; Llorens, Jordi; Cuenca, Andrés; Blanco-Roldán, Gregorio L; Rodríguez-Lizana, Antonio

    2015-01-28

    Ultrasonic sensors are often used to adjust spray volume by allowing the calculation of the crown volume of tree crops. The special conditions of the olive tree require the use of long-range sensors, which are less accurate and faster than the most commonly used sensors. The main objectives of the study were to determine the suitability of the sensor in terms of sound cone determination, angle errors, crosstalk errors and field measurements. Different laboratory tests were performed to check the suitability of a commercial long-range ultrasonic sensor: the experimental determination of the sound cone diameter at several distances for several target materials; the determination of the influence of the angle of incidence of the sound wave on the target, and of distance, on measurement accuracy for several materials; and the determination of the magnitude of errors due to interference between sensors for different sensor spacings and distances for two different materials. Furthermore, sensor accuracy was tested under real field conditions. The results show that the studied sensor is appropriate for olive trees because the sound cone is narrower for an olive tree than for the other studied materials, the olive tree canopy does not have a large influence on sensor accuracy with respect to distance and angle, the interference errors are insignificant for large sensor spacings, and the sensor's field distance measurements were deemed sufficiently accurate.

  7. Long-Range Channel Measurements on Small Terminal Antennas Using Optics

    DEFF Research Database (Denmark)

    Yanakiev, Boyan; Nielsen, Jesper Ødum; Christensen, Morten

    2012-01-01

    In this paper, details are given on a novel measurement device for radio propagation-channel measurements. To avoid measurement errors due to the conductive cables on small terminal antennas, as well as to improve the handling of the prototypes under investigation, an optical measurement device has been developed. It utilizes thin, light, and flexible glass fibers as opposed to heavy, stiff, and conductive coaxial cables. This paper looks at the various system parameters such as overall gain, noise figure, and dynamic range and compares the solution to other methods. An estimate of the device...

  8. Signal restoration method for restraining the range walk error of Geiger-mode avalanche photodiode lidar in acquiring a merged three-dimensional image.

    Science.gov (United States)

    Xu, Lu; Zhang, Yu; Zhang, Yong; Wu, Long; Yang, Chenghua; Yang, Xu; Zhang, Zijing; Zhao, Yuan

    2017-04-10

    The fluctuation in the number of signal photoelectrons causes a range walk error in a Geiger-mode avalanche photodiode (Gm-APD) lidar that depends significantly on the target intensity. For a nanosecond-pulsed laser, the range walk error of the traditional time-of-flight method causes a noticeable deterioration in ranging accuracy. A new signal restoration method, based on the Poisson probability response model and the center-of-mass algorithm, is proposed to restrain the range walk error. We obtain a high-precision merged depth and intensity 3D image using this method. The range accuracy is 0.6 cm, and the intensity error is less than 3%.
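    The center-of-mass step of such a method can be illustrated in isolation (the Poisson probability response model is not reproduced here); the bin times and counts below are hypothetical:

```python
def centroid_time(bin_times, counts):
    """Center-of-mass arrival time from a Gm-APD timing histogram."""
    return sum(t * c for t, c in zip(bin_times, counts)) / sum(counts)

# A symmetric return pulse gives the same centroid at any signal strength,
# whereas a fixed-threshold estimate would walk earlier as counts grow
t_bins = [0.0, 1.0, 2.0, 3.0, 4.0]   # bin centers, ns
weak = [1, 4, 6, 4, 1]               # photoelectron counts, weak return
strong = [c * 10 for c in weak]      # same pulse shape, 10x intensity
t_weak = centroid_time(t_bins, weak)      # 2.0 ns
t_strong = centroid_time(t_bins, strong)  # 2.0 ns
```

    Because the centroid is invariant to a uniform scaling of the counts, the intensity-dependent bias of threshold-based timing is largely removed, which is the sense in which the range walk error is restrained.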

  9. A quantum inspired model of radar range and range-rate measurements with applications to weak value measurements

    Science.gov (United States)

    Escalante, George

    2017-05-01

    Weak Value Measurements (WVMs) with pre- and post-selected quantum mechanical ensembles were proposed by Aharonov, Albert, and Vaidman in 1988 and have found numerous applications in both theoretical and applied physics. In the field of precision metrology, WVM techniques have been demonstrated and proven valuable as a means to shift, amplify, and detect signals and to make precise measurements of small effects in both quantum and classical systems, including: particle spin, the Spin-Hall effect of light, optical beam deflections, frequency shifts, field gradients, and many others. In principle, WVM amplification techniques are also possible in radar and could be a valuable tool for precision measurements. However, relatively limited research has been done in this area. This article presents a quantum-inspired model of radar range and range-rate measurements of arbitrary strength, including standard and pre- and post-selected measurements. The model is used to extend WVM amplification theory to radar, with the receive filter performing the post-selection role. It is shown that the description of range and range-rate measurements based on the quantum-mechanical measurement model and formalism produces the same results as the conventional approach used in radar based on signal processing and filtering of the reflected signal at the radar receiver. Numerical simulation results using simple point scatterer configurations are presented, applying the quantum-inspired model of radar range and range-rate measurements that occur in the weak measurement regime. Potential applications and benefits of the quantum-inspired approach to radar measurements are presented, including improved range and Doppler measurement resolution.

  10. Errors due to random noise in velocity measurement using incoherent-scatter radar

    Directory of Open Access Journals (Sweden)

    P. J. S. Williams

    1996-12-01

    Full Text Available The random-noise errors involved in measuring the Doppler shift of an 'incoherent-scatter' spectrum are predicted theoretically for all values of Te/Ti from 1.0 to 3.0. After correction has been made for the effects of convolution during transmission and reception and the additional errors introduced by subtracting the average of the background gates, the rms errors can be expressed by a simple semi-empirical formula. The observed errors are determined from a comparison of simultaneous EISCAT measurements using an identical pulse code on several adjacent frequencies. The plot of observed versus predicted error has a slope of 0.991 and a correlation coefficient of 99.3%. The prediction also agrees well with the mean of the error distribution reported by the standard EISCAT analysis programme.

  12. Internal errors of ground-based terrestrial earthshine measurements in 5 colour bands.

    Science.gov (United States)

    Thejll, Peter; Gleisner, Hans; Flynn, Chris

    2015-04-01

    Measurements of earthshine intensity could be an important complement to satellite-based observations of terrestrial visual and near-IR radiative budgets because they are independent and relatively inexpensive to obtain and also offer different potentials for long-term bias stability. Using ground-based photometric instruments, the Moon is imaged several times a night through a range of photometric filters, and the ratio of the intensities of the dark (Earth-lit) and bright (Sun-lit) sides is calculated - this ratio is proportional to terrestrial albedo. Using forward modelling of the expected ratio, given assumptions about reflectance, single-scattering albedo, and light-scattering processes, it is possible to deduce the terrestrial albedo. In this poster we present multicolour photometric results from observations on 10 nights, obtained at the NOAA observatory on Mauna Loa, Hawaii, in 2011. The Moon had different phases on these nights and we discuss in detail the behaviour of internal errors as a function of phase. The internal error is dependent on the photon statistics of the images obtained and its magnitude is investigated by use of bootstrapping with replacement of observations. Results indicate that standard Johnson B and V band equivalent Lambert albedos can be obtained with precisions (1 standard deviation) in the 0.1 to 1% range for phases between 40 and 90 degrees. For longer wavelengths, corresponding to broader bands on either side of the 'Vegetation edge' at 750 nm, we see larger variability in the albedo determinations and discuss whether these are due to atmospheric conditions or represent fast, intrinsic terrestrial albedo variations. The accuracy of these results, however, appears to depend on method choices, in particular the choice of lunar reflectance model -- this 'external error' will be investigated in future analyses.
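
    The bootstrap-with-replacement estimate of the internal error can be sketched as follows; the nightly dark/bright ratio values and their scatter are invented for illustration, not the Mauna Loa data.

```python
import numpy as np

rng = np.random.default_rng(42)

# hypothetical dark/bright intensity ratios from ~30 images on one night
ratios = rng.normal(0.12, 0.002, size=30)

def bootstrap_se(samples, n_boot=5000):
    """Standard error of the nightly mean via resampling with replacement."""
    means = np.array([rng.choice(samples, samples.size, replace=True).mean()
                      for _ in range(n_boot)])
    return means.std(ddof=1)

se = bootstrap_se(ratios)
relative_error = se / ratios.mean()   # internal precision of the nightly albedo proxy
```

    For a well-behaved sample the bootstrap standard error agrees with the analytic standard error of the mean; the bootstrap's advantage is that it needs no distributional assumption about the per-image photon noise.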

  13. Error Sources in the ETA Energy Analyzer Measurement

    Energy Technology Data Exchange (ETDEWEB)

    Nexsen, W E

    2004-12-13

    At present the ETA beam energy as measured by the ETA energy analyzer and the DARHT spectrometer differ by ≈12%. This discrepancy is due to two sources: an overestimate of the effective length of the ETA energy analyzer bending field, and data reduction methods that are not valid. The discrepancy can be eliminated if we return to the original process of measuring the angular deflection of the beam and use a value of 43.2 cm for the effective length of the axial field profile.

  14. Smartphone photography utilized to measure wrist range of motion.

    Science.gov (United States)

    Wagner, Eric R; Conti Mica, Megan; Shin, Alexander Y

    2018-02-01

    The purpose was to determine if smartphone photography is a reliable tool in measuring wrist movement. Smartphones were used to take digital photos of both wrists in 32 normal participants (64 wrists) at extremes of wrist motion. The smartphone measurements were compared with clinical goniometry measurements. There was a very high correlation between the clinical goniometry and smartphone measurements, as the concordance coefficients were high for radial deviation, ulnar deviation, wrist extension and wrist flexion. The Pearson coefficients also demonstrated the high precision of the smartphone measurements. The Bland-Altman plots demonstrated 29-31 of 32 smartphone measurements were within the 95% confidence interval of the clinical measurements for all positions of the wrists. There was high reliability between the photography taken by the volunteer and researcher, as well as high inter-observer reliability. Smartphone digital photography is a reliable and accurate tool for measuring wrist range of motion. Level of evidence: II.
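
    The agreement statistics used here, concordance correlation and Bland-Altman limits, are straightforward to compute; the wrist-extension angles below are made-up numbers, not the study data.

```python
import numpy as np

def concordance_ccc(x, y):
    """Lin's concordance correlation coefficient (agreement, not just correlation)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    mx, my = x.mean(), y.mean()
    cov = ((x - mx) * (y - my)).mean()
    return 2 * cov / (x.var() + y.var() + (mx - my) ** 2)

def bland_altman_limits(x, y):
    """Mean difference +/- 1.96 SD: the 95% limits of agreement."""
    d = np.asarray(x, float) - np.asarray(y, float)
    return d.mean() - 1.96 * d.std(ddof=1), d.mean() + 1.96 * d.std(ddof=1)

goniometer = [60, 65, 70, 55, 62]   # hypothetical wrist extension, degrees
smartphone = [61, 64, 69, 56, 63]

ccc = concordance_ccc(goniometer, smartphone)
lo, hi = bland_altman_limits(goniometer, smartphone)
```

    Unlike the Pearson coefficient, the concordance coefficient penalizes any systematic offset between the two methods, which is why the paper reports both.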

  15. Random measurement error: Why worry? An example of cardiovascular risk factors.

    Science.gov (United States)

    Brakenhoff, Timo B; van Smeden, Maarten; Visseren, Frank L J; Groenwold, Rolf H H

    2018-01-01

    With the increased use of data not originally recorded for research, such as routine care data (or 'big data'), measurement error is bound to become an increasingly relevant problem in medical research. A common view among medical researchers on the influence of random measurement error (i.e. classical measurement error) is that its presence leads to some degree of systematic underestimation of studied exposure-outcome relations (i.e. attenuation of the effect estimate). For the common situation where the analysis involves at least one exposure and one confounder, we demonstrate that the direction of effect of random measurement error on the estimated exposure-outcome relations can be difficult to anticipate. Using three example studies on cardiovascular risk factors, we illustrate that random measurement error in the exposure and/or confounder can lead to underestimation as well as overestimation of exposure-outcome relations. We therefore advise medical researchers to refrain from making claims about the direction of effect of measurement error in their manuscripts, unless the appropriate inferential tools are used to study or alleviate the impact of measurement error from the analysis.
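
    A small simulation makes the point concrete: classical error in the exposure attenuates its estimated effect, while the same error in a confounder leaves residual confounding that can push the adjusted estimate upward. The coefficients and error variances below are arbitrary choices for illustration, not taken from the cardiovascular examples.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200_000

conf = rng.normal(size=n)                         # true confounder
expo = 0.8 * conf + rng.normal(size=n)            # exposure, correlated with confounder
outcome = 0.5 * expo + 1.0 * conf + rng.normal(size=n)   # true exposure effect = 0.5

def adjusted_slope(x, c, y):
    """OLS coefficient of x in the model y ~ 1 + x + c."""
    X = np.column_stack([np.ones_like(x), x, c])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta[1]

b_clean = adjusted_slope(expo, conf, outcome)                          # ~0.5
b_conf_err = adjusted_slope(expo, conf + rng.normal(size=n), outcome)  # inflated
b_expo_err = adjusted_slope(expo + rng.normal(size=n), conf, outcome)  # attenuated
```

    The same amount of random noise thus biases the exposure coefficient in opposite directions depending on which variable it contaminates, which is exactly why the authors warn against assuming attenuation.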

  16. From Measurements Errors to a New Strain Gauge Design

    DEFF Research Database (Denmark)

    Mikkelsen, Lars Pilgaard; Zike, Sanita; Salviato, Marco

    2015-01-01

    Significant over-prediction of the material stiffness in the order of 1-10% for polymer based composites has been experimentally observed and numerically determined when using strain gauges for strain measurements instead of non-contact methods such as digital image correlation or less stiff methods...

  17. Comparing objective and subjective error measures for color constancy

    NARCIS (Netherlands)

    Lucassen, M.P.; Gijsenij, A.; Gevers, T.

    2008-01-01

    We compare an objective and a subjective performance measure for color constancy algorithms. Eight hyper-spectral images were rendered under a neutral reference illuminant and four chromatic illuminants (Red, Green, Yellow, Blue). The scenes rendered under the chromatic illuminants were color...

  18. Eddy-covariance flux errors due to biases in gas concentration measurements: origins, quantification and correction

    Science.gov (United States)

    Fratini, G.; McDermitt, D. K.; Papale, D.

    2013-08-01

    Errors in gas concentration measurements by infrared gas analysers can occur during eddy-covariance campaigns, associated with actual or apparent instrumental drifts or with biases due to thermal expansion, dirt contamination, aging of components or errors in field operations. If occurring on long time scales (hours to days), these errors are normally ignored during flux computation, under the assumption that errors in mean gas concentrations do not affect the estimation of turbulent fluctuations and, hence, of covariances. By analysing instrument theory of operation, and using numerical simulations and field data, we show that this is not the case for instruments with curvilinear calibrations; we further show that if not appropriately accounted for, concentration biases can lead to roughly proportional systematic flux errors, where the fractional errors in fluxes are about 30-40% of the fractional errors in concentrations. We quantify these errors and characterize their dependency on main determinants. We then propose a correction procedure that largely - potentially completely - eliminates these errors. The correction, to be applied during flux computation, is based on knowledge of instrument calibration curves and on field or laboratory calibration data. Finally, we demonstrate the occurrence of such errors and validate the correction procedure by means of a field experiment, and accordingly provide recommendations for in situ operations. The correction described in this paper will soon be available in the EddyPro software (www.licor.com/eddypro).
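
    The mechanism, a concentration bias shifting the operating point on a curvilinear calibration and thereby rescaling the turbulent fluctuations, can be reproduced with a toy calibration polynomial. The polynomial coefficients, wind statistics, and bias magnitude are invented for this sketch and are not LI-COR calibration values, so the exact error ratio differs from the paper's 30-40%.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 100_000
w = rng.normal(0, 0.3, n)                          # vertical wind fluctuations (m/s)
x_true = 1.0 + 0.05 * w + rng.normal(0, 0.02, n)   # raw signal, correlated with w

def calib(x):
    """Hypothetical curvilinear calibration: raw signal -> concentration."""
    return 400.0 * x + 80.0 * x ** 2

flux_true = np.cov(w, calib(x_true))[0, 1]         # eddy flux ~ cov(w, c)

bias = 0.05                                        # drift/contamination in raw signal
flux_biased = np.cov(w, calib(x_true + bias))[0, 1]

conc_frac_err = (calib(1.0 + bias) - calib(1.0)) / calib(1.0)
flux_frac_err = flux_biased / flux_true - 1.0      # systematic flux error
```

    Because the bias moves the operating point to a region of different calibration slope, the flux error is a roughly constant fraction of the concentration error, which is the proportionality the paper quantifies and corrects.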

  19. The Effect of Maternal Drug Use on Birth Weight: Measurement Error in Binary Variables

    OpenAIRE

    Robert Kaestner; Theodore Joyce; Hassan Wehbeh

    1996-01-01

    This paper develops a method to correct for non-random measurement error in a binary indicator of illicit drugs. Our results suggest that estimates of the effect of self reported prenatal drug use on birth weight are biased upwards by measurement error -- a finding contrary to predictions of a model of random measurement error. We show that more accurate estimates of the true effect of drug use on birth weight can be obtained by using the predicted probability of falsely reporting drug use. T...

  20. Measurement error in income and schooling, and the bias of linear estimators

    DEFF Research Database (Denmark)

    Bingley, Paul; Martinello, Alessandro

    The characteristics of measurement error determine the bias of linear estimators. We propose a method for validating economic survey data allowing for measurement error in the validation source, and we apply this method by validating Survey of Health, Ageing and Retirement in Europe (SHARE) data with Danish administrative registers. We find that measurement error in surveys is classical for annual gross income but non-classical for years of schooling, causing a 21% amplification bias in IV estimators of returns to schooling. Using a 1958 Danish schooling reform, we contextualize our result...

  2. Quantification and handling of sampling errors in instrumental measurements: a case study

    DEFF Research Database (Denmark)

    Andersen, Charlotte Møller; Bro, R.

    2004-01-01

    Instrumental measurements are often used to represent a whole object even though only a small part of the object is actually measured. This can introduce an error due to the inhomogeneity of the product. Together with other errors resulting from the measuring process, such errors may have a serious...... on the predictions, the approach seems to provide more accurate predictions than the naive approach. Predictions of water content of fish fillets from low-field NMR relaxations are used as examples to show the applicability of the methods. (C) 2004 Elsevier B.V. All rights reserved....

  3. Total Differential Errors in One-Port Network Analyzer Measurements with Application to Antenna Impedance

    Directory of Open Access Journals (Sweden)

    P. Zimourtopoulos

    2007-06-01

    Full Text Available The objective was to study uncertainty in antenna input impedance resulting from full one-port Vector Network Analyzer (VNA) measurements. The VNA process equation in the reflection coefficient ρ of a load, its measurement m and three errors Es, determinable from three standard loads and their measurements, was considered. Differentials were selected to represent measurement inaccuracies and load uncertainties (Differential Errors). The differential operator was applied on the process equation and the total differential error dρ for any unknown load (Device Under Test, DUT) was expressed in terms of dEs and dm, without any simplification. Consequently, the differential error of input impedance Z - or any other physical quantity differentiably dependent on ρ - is expressible. Furthermore, to express precisely a comparison relation between complex differential errors, the geometric Differential Error Region and its Differential Error Intervals were defined. Practical results are presented for an indoor UHF ground-plane antenna in contrast with a common 50 Ω DC resistor inside an aluminum box. These two built, unshielded and shielded, DUTs were tested against frequency under different system configurations and measurement considerations. Intermediate results for Es and dEs characterize the measurement system itself. A number of calculations and illustrations demonstrate the application of the method.
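
    A numerical sketch of a standard one-port error model and the propagation of a measurement differential dm to the input impedance Z. All error-term values are hypothetical, and the derivative is taken by finite differences rather than symbolically as in the paper.

```python
import numpy as np

Z0 = 50.0  # reference impedance, ohms

def rho_from_m(m, Ed, Er, Es):
    """Invert the one-port error model m = Ed + Er*rho / (1 - Es*rho)."""
    return (m - Ed) / (Er + Es * (m - Ed))

def z_from_rho(rho):
    """Input impedance from the reflection coefficient."""
    return Z0 * (1 + rho) / (1 - rho)

# hypothetical error terms, as obtained from a 3-standard calibration
Ed, Er, Es = 0.02 + 0.01j, 0.98 + 0.0j, 0.03 - 0.02j
rho_true = 0.3 + 0.1j
m = Ed + Er * rho_true / (1 - Es * rho_true)       # simulated VNA reading

# total differential of Z w.r.t. the measurement m (finite-difference estimate)
dm = 1e-3                                          # measurement inaccuracy magnitude
dZ = abs(z_from_rho(rho_from_m(m + dm, Ed, Er, Es))
         - z_from_rho(rho_from_m(m, Ed, Er, Es)))
```

    In this configuration dZ comes out near 0.2 Ω for |dm| = 10^-3: the ρ→Z mapping magnifies measurement differentials by roughly |2·Z0/(1-ρ)²|, which is the kind of sensitivity the Differential Error Region formalizes for complex-valued errors.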

  4. Accounting for measurement error in human life history trade-offs using structural equation modeling.

    Science.gov (United States)

    Helle, Samuli

    2017-11-11

    Revealing causal effects from correlative data is very challenging and a contemporary problem in human life history research owing to the lack of experimental approach. Problems with causal inference arising from measurement error in independent variables, whether related either to inaccurate measurement technique or validity of measurements, seem not well-known in this field. The aim of this study is to show how structural equation modeling (SEM) with latent variables can be applied to account for measurement error in independent variables when the researcher has recorded several indicators of a hypothesized latent construct. As a simple example of this approach, measurement error in lifetime allocation of resources to reproduction in Finnish preindustrial women is modelled in the context of the survival cost of reproduction. In humans, lifetime energetic resources allocated in reproduction are almost impossible to quantify with precision and, thus, typically used measures of lifetime reproductive effort (e.g., lifetime reproductive success and parity) are likely to be plagued by measurement error. These results are contrasted with those obtained from a traditional regression approach where the single best proxy of lifetime reproductive effort available in the data is used for inference. As expected, the inability to account for measurement error in women's lifetime reproductive effort resulted in the underestimation of its underlying effect size on post-reproductive survival. This article emphasizes the advantages that the SEM framework can provide in handling measurement error via multiple-indicator latent variables in human life history studies. © 2017 Wiley Periodicals, Inc.

  5. Measurement error of surface-mounted fiber Bragg grating temperature sensor.

    Science.gov (United States)

    Yi, Liu; Zude, Zhou; Erlong, Zhang; Jun, Zhang; Yuegang, Tan; Mingyao, Liu

    2014-06-01

    Fiber Bragg grating (FBG) sensors are extensively used to measure surface temperatures. However, the temperature gradient effect of a surface-mounted FBG sensor is often overlooked. A surface-type temperature standard setup was prepared in this study to investigate the measurement errors of FBG temperature sensors. Experimental results show that the measurement error of a bare fiber sensor has an obvious linear relationship with surface temperature, with the largest error reaching 8.1 °C. Sensors packaged with heat conduction grease generate smaller measurement errors than do bare FBG sensors and commercial thermal resistors. Thus, high-quality packaging methods and proper modes of fixation can effectively improve the accuracy of FBG sensors in measuring surface temperatures.

  6. Intrinsic measurement errors for the speed of light in vacuum

    Science.gov (United States)

    Braun, Daniel; Schneiter, Fabienne; Fischer, Uwe R.

    2017-09-01

    The speed of light in vacuum, one of the most important and precisely measured natural constants, is fixed by convention to c=299 792 458 m s-1 . Advanced theories predict possible deviations from this universal value, or even quantum fluctuations of c. Combining arguments from quantum parameter estimation theory and classical general relativity, we here establish rigorously the existence of lower bounds on the uncertainty to which the speed of light in vacuum can be determined in a given region of space-time, subject to several reasonable restrictions. They provide a novel perspective on the experimental falsifiability of predictions for the quantum fluctuations of space-time.

  7. Reduction of truncation errors in planar near-field aperture antenna measurements using the method of alternating orthogonal projections

    DEFF Research Database (Denmark)

    Martini, Enrica; Breinbjerg, Olav; Maci, Stefano

    2006-01-01

    A simple and effective procedure for the reduction of truncation error in planar near-field to far-field transformations is presented. The starting point is the consideration that the actual scan plane truncation implies a reliability of the reconstructed plane wave spectrum of the field radiated by the antenna only within a certain region inside the visible range. Then, the truncation error is reduced by a Maxwellian continuation of the reliable portion of the spectrum: after back propagating the measured field to the antenna plane, a condition of spatial concentration of the primary field is exploited...

  8. Estimation and Propagation of Errors in Ice Sheet Bed Elevation Measurements

    Science.gov (United States)

    Johnson, J. V.; Brinkerhoff, D.; Nowicki, S.; Plummer, J.; Sack, K.

    2012-12-01

    This work is presented in two parts. In the first, we use a numerical inversion technique to determine a "mass conserving bed" (MCB) and estimate errors in interpolation of the bed elevation. The MCB inversion technique adjusts the bed elevation to assure that the mass flux determined from surface velocity measurements does not violate conservation. Cross validation of the MCB technique is done using a subset of available flight lines. The unused flight lines provide data to compare to, quantifying the errors produced by MCB and other interpolation methods. MCB errors are found to be similar to those produced with more conventional interpolation schemes, such as kriging. However, MCB interpolation is consistent with the physics that govern ice sheet models. In the second part, a numerical model of glacial ice is used to propagate errors in bed elevation to the kinematic surface boundary condition. Initially, a control run is completed to establish the surface velocity produced by the model. The control surface velocity is subsequently used as a target for data inversions performed on perturbed versions of the control bed. The perturbation of the bed represents the magnitude of error in bed measurement. Through the inversion for traction, errors in bed measurement are propagated forward to investigate errors in the evolution of the free surface. Our primary conclusion relates the magnitude of errors in the surface evolution to errors in the bed. By linking free surface errors back to the errors in bed interpolation found in the first part, we can suggest an optimal spacing of the radar flight lines used in bed acquisition.

  9. Regression calibration for classical exposure measurement error in environmental epidemiology studies using multiple local surrogate exposures.

    Science.gov (United States)

    Bateson, Thomas F; Wright, J Michael

    2010-08-01

    Environmental epidemiologic studies are often hierarchical in nature if they estimate individuals' personal exposures using ambient metrics. Local samples are indirect surrogate measures of true local pollutant concentrations which estimate true personal exposures. These ambient metrics include classical-type nondifferential measurement error. The authors simulated subjects' true exposures and their corresponding surrogate exposures as the mean of local samples and assessed the amount of bias attributable to classical and Berkson measurement error on odds ratios, assuming that the logit of risk depends on true individual-level exposure. The authors calibrated surrogate exposures using scalar transformation functions based on observed within- and between-locality variances and compared regression-calibrated results with naive results using surrogate exposures. The authors further assessed the performance of regression calibration in the presence of Berkson-type error. Following calibration, bias due to classical-type measurement error, resulting in as much as 50% attenuation in naive regression estimates, was eliminated. Berkson-type error appeared to attenuate logistic regression results less than 1%. This regression calibration method reduces effects of classical measurement error that are typical of epidemiologic studies using multiple local surrogate exposures as indirect surrogate exposures for unobserved individual exposures. Berkson-type error did not alter the performance of regression calibration. This regression calibration method does not require a supplemental validation study to compute an attenuation factor.
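
    The core of the method is the classical-error attenuation factor λ = σ²_between / (σ²_between + σ²_within/n), estimated from the local samples and divided out of the naive slope. A simulated sketch follows; the variances, sample counts, and true slope are arbitrary, and a real analysis would of course also handle confounders and use a proper regression model.

```python
import numpy as np

rng = np.random.default_rng(3)
n_subjects, n_samples = 2000, 4

true_exp = rng.normal(10.0, 2.0, n_subjects)                 # true local exposure
surrogate = true_exp[:, None] + rng.normal(0, 3.0, (n_subjects, n_samples))
x_bar = surrogate.mean(axis=1)                               # mean of local samples

# attenuation factor: lambda = s2_between / (s2_between + s2_within / n)
s2_within = surrogate.var(axis=1, ddof=1).mean()
s2_between = x_bar.var(ddof=1) - s2_within / n_samples
lam = s2_between / (s2_between + s2_within / n_samples)

# outcome depends on TRUE exposure with slope 0.4; naive slope is attenuated
y = 0.4 * true_exp + rng.normal(0, 1.0, n_subjects)
naive = np.cov(x_bar, y)[0, 1] / x_bar.var(ddof=1)
calibrated = naive / lam                                     # regression calibration
```

    Averaging more local samples raises λ toward 1 (less attenuation), which is why the correction needs only the within- and between-locality variances rather than a separate validation study.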

  10. Growth of Errors and Uncertainties in Medium Range Ensemble Forecasts of U.S. East Coast Cool Season Extratropical Cyclones

    Science.gov (United States)

    Zheng, Minghua

    Cool-season extratropical cyclones near the U.S. East Coast often have significant impacts on the safety, health, environment and economy of this most densely populated region. Hence it is of vital importance to forecast these high-impact winter storm events as accurately as possible by numerical weather prediction (NWP), including in the medium-range. Ensemble forecasts are appealing to operational forecasters when forecasting such events because they can provide an envelope of likely solutions to serve user communities. However, it is generally accepted that ensemble outputs are not used efficiently in NWS operations mainly due to the lack of simple and quantitative tools to communicate forecast uncertainties and ensemble verification to assess model errors and biases. Ensemble sensitivity analysis (ESA), which employs a linear correlation and regression between a chosen forecast metric and the forecast state vector, can be used to analyze the forecast uncertainty development for both short- and medium-range forecasts. The application of ESA to a high-impact winter storm in December 2010 demonstrated that the sensitivity signals based on different forecast metrics are robust. In particular, the ESA based on the leading two EOF PCs can separate sensitive regions associated with cyclone amplitude and intensity uncertainties, respectively. The sensitivity signals were verified using the leave-one-out cross validation (LOOCV) method based on a multi-model ensemble from CMC, ECMWF, and NCEP. The climatology of ensemble sensitivities for the leading two EOF PCs based on 3-day and 6-day forecasts of historical cyclone cases was presented. It was found that the EOF1 pattern often represents the intensity variations while the EOF2 pattern represents the track variations along west-southwest and east-northeast direction. For PC1, the upper-level trough associated with the East Coast cyclone and its downstream ridge are important to the forecast uncertainty in cyclone
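
    Ensemble sensitivity analysis as described here reduces to a per-grid-point linear regression of a scalar forecast metric on the ensemble state. The toy version below uses a made-up grid, metric, and member count purely to show the mechanics, not any real cyclone case.

```python
import numpy as np

rng = np.random.default_rng(5)
n_members, n_grid = 50, 100

# toy ensemble: perturbations at grid point 10 drive the forecast metric J
state = rng.normal(size=(n_members, n_grid))          # e.g. 500-hPa height anomalies
J = 2.0 * state[:, 10] + 0.1 * rng.normal(size=n_members)   # e.g. cyclone depth / EOF PC

def ensemble_sensitivity(state, J):
    """dJ/dx_i estimated by member-wise linear regression at each grid point."""
    xa = state - state.mean(axis=0)
    Ja = J - J.mean()
    return (xa * Ja[:, None]).sum(axis=0) / (xa ** 2).sum(axis=0)

sens = ensemble_sensitivity(state, J)                 # sensitivity map over the grid
```

    In practice J would be a PC of the leading EOFs of the cyclone forecast, and the regression map would highlight the upstream trough and ridge regions the abstract describes.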

  11. Assessing discharge measurement errors at a gauging station in a small catchment (Vallcebre, Eastern Pyrenees)

    Science.gov (United States)

    Nord, G.; Martín-Vide, J. P.; Latron, J.; Soler, M.; Gallart, F.

    2009-04-01

    The Cal Rodó catchment (4.17 km2) is located in a Mediterranean mountain area. Land cover is dominated by pastures and forest, and badlands represent 2.8% of the surface of the catchment. Elevation ranges between 1100 m and 1650 m, and average annual precipitation is about 900 mm with a heterogeneous distribution over the year. Autumn and spring are the seasons with more precipitation. Flash floods are relatively frequent, especially in autumn, and are associated with high sediment transport. The period of observation ranges from 1994 to 2008. Discharge is measured at a gauging station controlled by a two-level rectangular notch weir with two different widths and contraction conditions that ensure a unique relationship between flow depth and discharge. The structure, designed to flush sediment, enables the capture of a wide range of discharges. Flow depth is measured using a pressure sensor. Instantaneous discharge was lower than 0.1 m3/s approximately 95% of the time and higher than 0.5 m3/s approximately 1% of the time. The largest runoff event measured produced an instantaneous discharge of approximately 10 m3/s. The second level of the gauging station was rarely reached, since it was flooded on average 1.5 times per year, but the corresponding events contributed approximately 60% of the sediment transport. The structure is efficient, as it was never submerged over the observed period and sediment deposition was negligible, but it has a complex shape that makes it difficult to relate water depth accurately to discharge, especially for large runoff events. In situ measurement of discharge by current meters or chemical dilution during high water stages is infeasible due to the flashiness of the response. Therefore, a hydraulic physical model (scale 1:11) was set up and calibrated to improve the stage-discharge curve and estimate the measurement errors of discharge. Sources of errors taken into account in this study are related to the precision and calibration of the pressure...

  12. Small Device For Short-Range Antenna Measurements Using Optics

    DEFF Research Database (Denmark)

    Yanakiev, Boyan Radkov; Nielsen, Jesper Ødum; Christensen, Morten

    2011-01-01

    This paper gives a practical solution for implementing an antenna radiation pattern measurement device using optical fibers. It is suitable for anechoic chambers as well as short range channel sounding. The device is optimized for small size and provides a cheap and easy way to make optical antenna...

  13. Measuring the relativistic perigee advance with satellite laser ranging

    CERN Document Server

    Iorio, L; Pavlis, E C

    2002-01-01

    The pericentric advance of a test body by a central mass is one of the classical tests of general relativity. Today, this effect is measured with radar ranging by the perihelion shift of Mercury and other planets in the gravitational field of the Sun, with a relative accuracy of the order of 10^-2-10^-3. In this paper, we explore the possibility of a measurement of the pericentric advance in the gravitational field of Earth by analysing the laser-ranged data of some orbiting, or proposed, laser-ranged geodetic satellites. Such a measurement of the perigee advance would place limits on hypothetical, very weak, Yukawa-type components of the gravitational interaction with a finite range of the order of 10^4 km. Thus, we show that, at the present level of knowledge of the orbital perturbations, the relative accuracy, achievable with suitably combined orbital elements of LAGEOS and LAGEOS II, is of the order of 10^-3. With the corresponding measured value of (2 + 2γ - β)/3, ...
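
    For scale, the general-relativistic pericenter advance per orbit, Δω = 6πGM/(c²a(1-e²)), can be evaluated with rough LAGEOS II orbital elements. The element values below are approximate and for illustration only.

```python
import math

GM_EARTH = 3.986004418e14        # Earth's GM, m^3 s^-2
C = 299_792_458.0                # speed of light, m/s
a, e = 1.2163e7, 0.014           # approx. LAGEOS II semi-major axis (m), eccentricity

d_omega = 6 * math.pi * GM_EARTH / (C ** 2 * a * (1 - e ** 2))  # rad per orbit
period = 2 * math.pi * math.sqrt(a ** 3 / GM_EARTH)             # Kepler period, s
orbits_per_year = 365.25 * 86400 / period
arcsec_per_year = math.degrees(d_omega) * 3600 * orbits_per_year
```

    This works out to roughly 3.3 arcsec per year of relativistic perigee drift, which sets the size of the signal that the combined LAGEOS/LAGEOS II elements must separate from much larger classical perturbations.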

  14. Ion range measurements using fluorescent nuclear track detectors

    DEFF Research Database (Denmark)

    Klimpki, G.; Osinga, J.-M.; Herrmann, R.

    2013-01-01

    Fluorescent nuclear track detectors (FNTDs) show excellent detection properties for heavy charged particles and have, therefore, been investigated in this study in terms of their potential for in-vivo range measurements. We irradiated FNTDs with protons as well as with C, Mg, S, Fe and Xe ion beams...

  15. Measurement error of a simplified protocol for quantitative sensory tests in chronic pain patients

    DEFF Research Database (Denmark)

    Müller, Monika; Biurrun Manresa, José; Limacher, Andreas

    2017-01-01

    BACKGROUND AND OBJECTIVES: Large-scale application of Quantitative Sensory Tests (QST) is impaired by lacking standardized testing protocols. One unclear methodological aspect is the number of records needed to minimize measurement error. Traditionally, measurements are repeated 3 to 5 times... measurement error and number of records. We determined the measurement error of a single versus the mean of 3 records of pressure pain detection threshold (PPDT), electrical pain detection threshold (EPDT), and nociceptive withdrawal reflex threshold (NWRT) in 429 chronic pain patients recruited in a routine clinical setting. METHODS: We calculated intraclass correlation coefficients and performed a Bland-Altman analysis. RESULTS: Intraclass correlation coefficients were all clearly greater than 0.75, and Bland-Altman analysis showed minute systematic errors with small point estimates and narrow 95% confidence...
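
    The single-versus-mean-of-3 question is at heart a reliability calculation. A simulated sketch with a one-way intraclass correlation and the Spearman-Brown step follows; the threshold values, variances, and per-patient record layout are invented, not the study data.

```python
import numpy as np

rng = np.random.default_rng(11)
n_patients, n_records = 429, 3

true_thr = rng.normal(50.0, 10.0, n_patients)        # true threshold per patient
records = true_thr[:, None] + rng.normal(0, 4.0, (n_patients, n_records))

def icc_oneway(x):
    """One-way random-effects ICC(1,1) from a subjects x records matrix."""
    n, k = x.shape
    grand = x.mean()
    msb = k * ((x.mean(axis=1) - grand) ** 2).sum() / (n - 1)          # between subjects
    msw = ((x - x.mean(axis=1, keepdims=True)) ** 2).sum() / (n * (k - 1))  # within
    return (msb - msw) / (msb + (k - 1) * msw)

icc1 = icc_oneway(records)                           # reliability of a single record
# Spearman-Brown: reliability of the mean of k records
icc_mean3 = n_records * icc1 / (1 + (n_records - 1) * icc1)
```

    The gain from averaging 3 records over taking 1 is the gap between the two coefficients; when a single record is already highly reliable, as the abstract reports, that gap is small, which supports the simplified protocol.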

  16. Measurement and Prediction Errors in Body Composition Assessment and the Search for the Perfect Prediction Equation.

    Science.gov (United States)

    Katch, Frank I.; Katch, Victor L.

    1980-01-01

    Sources of error in body composition assessment by laboratory and field methods can be found in hydrostatic weighing, residual air volume, skinfolds, and circumferences. Statistical analysis can and should be used in the measurement of body composition. (CJ)

  17. Resolution, measurement errors and uncertainties on deflectometric acquisition of large optical surfaces "DaOS"

    Science.gov (United States)

    Hofbauer, E.; Rascher, R.; Friedke, F.; Kometer, R.

    2017-06-01

    The basic physical measurement principle in DaOS is the vignetting of a quasi-parallel light beam emitted by an expanded light source in an autocollimation arrangement. The beam is reflected by the surface under test, using invariant deflection by a moving and scanning pentaprism. Thereby nearly any curvature of the specimen is measurable. Resolution, systematic errors and random errors are shown and explicitly discussed for the profile determination error. Measurements of a "plano-double-sombrero" device are analyzed and reconstructed to find the limits of resolution and the errors of the reconstruction model and algorithms. These measurements are compared critically to reference results recorded by interferometry and the Deflectometric Flatness Reference (DFR) method using a scanning penta device.

  18. Local and omnibus goodness-of-fit tests in classical measurement error models

    KAUST Repository

    Ma, Yanyuan

    2010-09-14

    We consider functional measurement error models, i.e. models where covariates are measured with error and yet no distributional assumptions are made about the mismeasured variable. We propose and study a score-type local test and an orthogonal series-based, omnibus goodness-of-fit test in this context, where no likelihood function is available or calculated; i.e., all the tests are proposed in the semiparametric model framework. We demonstrate that our tests have optimality properties and computational advantages that are similar to those of the classical score tests in the parametric model framework. The test procedures are applicable to several semiparametric extensions of measurement error models, including when the measurement error distribution is estimated non-parametrically as well as for generalized partially linear models. The performance of the local score-type and omnibus goodness-of-fit tests is demonstrated through simulation studies and analysis of a nutrition data set.

  19. Small Inertial Measurement Units - Sources of Error and Limitations on Accuracy

    Science.gov (United States)

    Hoenk, M. E.

    1994-01-01

    Limits on the precision of small accelerometers for inertial measurement units are enumerated and discussed. Scaling laws and errors which affect the precision are discussed in terms of tradeoffs between size, sensitivity, and cost.

  20. Image pre-filtering for measurement error reduction in digital image correlation

    Science.gov (United States)

    Zhou, Yihao; Sun, Chen; Song, Yuntao; Chen, Jubing

    2015-02-01

    In digital image correlation, the sub-pixel intensity interpolation causes a systematic error in the measured displacements. The error increases toward the high-frequency components of the speckle pattern. In practice, a captured image is usually corrupted by additive white noise. The noise introduces additional energy in the high frequencies and therefore raises the systematic error. Meanwhile, the noise also elevates the random error, which increases with the noise power. In order to reduce the systematic error and the random error of the measurements, we apply a pre-filtering to the images prior to the correlation so that the high-frequency contents are suppressed. Two spatial-domain filters (binomial and Gaussian) and two frequency-domain filters (Butterworth and Wiener) are tested on speckle images undergoing both simulated and real-world translations. By evaluating the errors of the various combinations of speckle patterns, interpolators, noise levels, and filter configurations, we come to the following conclusions. All four filters are able to reduce the systematic error. Meanwhile, the random error can also be reduced if the signal power is mainly distributed around DC. For high-frequency speckle patterns, the low-pass filters (binomial, Gaussian and Butterworth) slightly increase the random error, and the Butterworth filter produces the lowest random error among them. By using the Wiener filter with over-estimated noise power, the random error can be reduced, but the resultant systematic error is higher than that of the low-pass filters. In general, the Butterworth filter is recommended for error reduction due to its flexibility of passband selection and maximal preservation of the allowed frequencies. The binomial filter enables efficient implementation and thus becomes a good option if computational cost is a critical issue. When used together with pre-filtering, the B-spline interpolator produces lower systematic error than the bicubic interpolator and a similar level of the random
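The noise-suppression effect of the spatial-domain filters can be illustrated in one dimension: convolving white noise with the 3-tap binomial kernel [1, 2, 1]/4 scales its variance by (1² + 2² + 1²)/16 = 0.375. A minimal sketch (a 1-D analogue of the image pre-filtering, not the authors' implementation):

```python
import random

def binomial_filter(signal):
    """Smooth a 1-D signal with the 3-tap binomial kernel [1, 2, 1] / 4
    (replicate-padded at the borders)."""
    padded = [signal[0]] + list(signal) + [signal[-1]]
    return [(padded[i - 1] + 2 * padded[i] + padded[i + 1]) / 4.0
            for i in range(1, len(padded) - 1)]

def variance(x):
    m = sum(x) / len(x)
    return sum((v - m) ** 2 for v in x) / len(x)

random.seed(0)
noise = [random.gauss(0.0, 1.0) for _ in range(100_000)]
filtered = binomial_filter(noise)

# For white noise, the kernel scales the variance by
# (1^2 + 2^2 + 1^2) / 16 = 0.375, i.e. the random error drops markedly.
print(variance(noise), variance(filtered))
```

The same trade-off the abstract describes applies here: the smoothing that lowers random error also attenuates the high-frequency signal content that carries speckle information.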

  1. Particle Filter with Novel Nonlinear Error Model for Miniature Gyroscope-Based Measurement While Drilling Navigation

    Directory of Open Access Journals (Sweden)

    Tao Li

    2016-03-01

    Full Text Available The derivation of a conventional error model for the miniature gyroscope-based measurement while drilling (MGWD) system is based on the assumption that the attitude errors are small enough that the direction cosine matrix (DCM) can be approximated or simplified by the errors of small-angle attitude. However, the simplification of the DCM would introduce errors into the navigation solutions of the MGWD system if the initial alignment cannot provide precise attitude, especially for the low-cost microelectromechanical system (MEMS) sensors operated in harsh multilateral horizontal downhole drilling environments. This paper proposes a novel nonlinear error model (NNEM) by the introduction of the error of the DCM, and the NNEM can reduce the propagated errors under large-angle attitude error conditions. The zero velocity and zero position are the reference points and the innovations in the state estimation of the particle filter (PF) and Kalman filter (KF). The experimental results illustrate that the performance of the PF is better than that of the KF, and the PF with the NNEM can effectively restrain the errors of the system states, especially for the azimuth, velocity, and height in the quasi-stationary condition.

  2. Absolute stress measurements at the rangely anticline, Northwestern Colorado

    Science.gov (United States)

    de la Cruz, R. V.; Raleigh, C.B.

    1972-01-01

    Five different methods of measuring absolute state of stress in rocks in situ were used at sites near Rangely, Colorado, and the results compared. For near-surface measurements, overcoring of the borehole-deformation gage is the most convenient and rapid means of obtaining reliable values for the magnitude and direction of the state of stress in rocks in situ. The magnitudes and directions of the principal stresses are compared to the geologic features of the different areas of measurement. The in situ stresses are consistent in orientation with the stress direction inferred from the earthquake focal-plane solutions and existing joint patterns but inconsistent with stress directions likely to have produced the Rangely anticline. ?? 1972.

  3. The impact of measurement errors in the identification of regulatory networks

    Directory of Open Access Journals (Sweden)

    Sato João R

    2009-12-01

    Full Text Available Abstract Background There are several studies in the literature depicting measurement error in gene expression data and also several others about regulatory network models. However, only a small fraction describes a combination of measurement error in mathematical regulatory networks and shows how to identify these networks under different rates of noise. Results This article investigates the effects of measurement error on the estimation of the parameters in regulatory networks. Simulation studies indicate that, in both time series (dependent) and non-time series (independent) data, the measurement error strongly affects the estimated parameters of the regulatory network models, biasing them as predicted by the theory. Moreover, when testing the parameters of the regulatory network models, p-values computed by ignoring the measurement error are not reliable, since the rate of false positives is not controlled under the null hypothesis. In order to overcome these problems, we present an improved version of the Ordinary Least Squares estimator for independent (regression) and dependent (autoregressive) models when the variables are subject to noise. Moreover, measurement error estimation procedures for microarrays are also described. Simulation results also show that both corrected methods perform better than the standard ones (i.e., ignoring measurement error). The proposed methodologies are illustrated using microarray data from lung cancer patients and mouse liver time series data. Conclusions Measurement error dangerously affects the identification of regulatory network models; thus, it must be reduced or taken into account in order to avoid erroneous conclusions. This could be one of the reasons for the high biological false positive rates identified in actual regulatory network models.
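The attenuation bias described above, and a method-of-moments correction of the ordinary least squares slope when the noise variance is known, can be sketched as follows (a simplified illustration of errors-in-variables correction, not the authors' estimator):

```python
import random

random.seed(1)
beta, sigma_x, sigma_u = 2.0, 1.0, 0.5   # true slope, signal SD, noise SD
n = 200_000

x = [random.gauss(0, sigma_x) for _ in range(n)]          # latent covariate
w = [xi + random.gauss(0, sigma_u) for xi in x]           # error-prone measurement
y = [beta * xi + random.gauss(0, 0.3) for xi in x]

def slope(u, v):
    mu, mv = sum(u) / len(u), sum(v) / len(v)
    cov = sum((a - mu) * (b - mv) for a, b in zip(u, v)) / len(u)
    var = sum((a - mu) ** 2 for a in u) / len(u)
    return cov / var

naive = slope(w, y)                            # biased toward zero (attenuation)
lam = sigma_x**2 / (sigma_x**2 + sigma_u**2)   # reliability ratio = 0.8
corrected = naive / lam                        # method-of-moments correction

print(naive, corrected)
```

The naive slope converges to beta times the reliability ratio (here 2.0 × 0.8 = 1.6), which is exactly the bias "predicted by the theory" in the abstract; dividing by the ratio recovers the true slope.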

  4. Identification and estimation of nonlinear models using two samples with nonclassical measurement errors

    KAUST Repository

    Carroll, Raymond J.

    2010-05-01

    This paper considers identification and estimation of a general nonlinear Errors-in-Variables (EIV) model using two samples. Both samples consist of a dependent variable, some error-free covariates, and an error-prone covariate, for which the measurement error has unknown distribution and could be arbitrarily correlated with the latent true values; and neither sample contains an accurate measurement of the corresponding true variable. We assume that the regression model of interest - the conditional distribution of the dependent variable given the latent true covariate and the error-free covariates - is the same in both samples, but the distributions of the latent true covariates vary with observed error-free discrete covariates. We first show that the general latent nonlinear model is nonparametrically identified using the two samples when both could have nonclassical errors, without either instrumental variables or independence between the two samples. When the two samples are independent and the nonlinear regression model is parameterized, we propose sieve Quasi Maximum Likelihood Estimation (Q-MLE) for the parameter of interest, and establish its root-n consistency and asymptotic normality under possible misspecification, and its semiparametric efficiency under correct specification, with easily estimated standard errors. A Monte Carlo simulation and a data application are presented to show the power of the approach.

  5. Metrological Array of Cyber-Physical Systems. Part 7. Additive Error Correction for Measuring Instrument

    Directory of Open Access Journals (Sweden)

    Yuriy YATSUK

    2015-06-01

    Full Text Available During design, the uncertainty approach cannot be applied because measurement results do not yet exist; as noted, the error approach can be applied successfully by taking the nominal value of the instrument transformation function as true. The limiting possibilities of additive error correction of measuring instruments for Cyber-Physical Systems are studied on the basis of general and special methods of measurement. Principles of maximal symmetry of the measuring circuit and its minimal reconfiguration are proposed for measurement and/or calibration. For a variety of correction methods, it is theoretically justified that a minimum additive error of measuring instruments exists when the real equivalent parameters of the input electronic switches are considered. Conditions for in-place self-calibration and verification of the measuring instruments are also studied.

  6. Joint nonparametric correction estimator for excess relative risk regression in survival analysis with exposure measurement error.

    Science.gov (United States)

    Wang, Ching-Yun; Cullings, Harry; Song, Xiao; Kopecky, Kenneth J

    2017-11-01

    Observational epidemiological studies often confront the problem of estimating exposure-disease relationships when the exposure is not measured exactly. In the paper, we investigate exposure measurement error in excess relative risk regression, which is a widely used model in radiation exposure effect research. In the study cohort, a surrogate variable is available for the true unobserved exposure variable. The surrogate variable satisfies a generalized version of the classical additive measurement error model, but it may or may not have repeated measurements. In addition, an instrumental variable is available for individuals in a subset of the whole cohort. We develop a nonparametric correction (NPC) estimator using data from the subcohort, and further propose a joint nonparametric correction (JNPC) estimator using all observed data to adjust for exposure measurement error. An optimal linear combination estimator of JNPC and NPC is further developed. The proposed estimators are nonparametric, which are consistent without imposing a covariate or error distribution, and are robust to heteroscedastic errors. Finite sample performance is examined via a simulation study. We apply the developed methods to data from the Radiation Effects Research Foundation, in which chromosome aberration is used to adjust for the effects of radiation dose measurement error on the estimation of radiation dose responses.

  7. Human-Induced Effects on RSS Ranging Measurements for Cooperative Positioning

    DEFF Research Database (Denmark)

    Della Rosa, Francescantonio; Pelosi, Mauro; Nurmi, Jari

    2012-01-01

    We present experimental evaluations of human-induced perturbations on received-signal-strength (RSS)-based ranging measurements for cooperative mobile positioning. To the best of our knowledge, this work is the first attempt to gain insight into and understand the impact of both body loss and hand grip... on the RSS for enhancing proximity measurements among neighbouring devices in cooperative scenarios. Our main contribution is represented by experimental investigations. An analysis of the errors introduced in the distance estimation using path-loss-based methods has been carried out. Moreover, the exploitation...

  8. Estimation of heading gyrocompass error using a GPS 3DF system: Impact on ADCP measurements

    Directory of Open Access Journals (Sweden)

    Simón Ruiz

    2002-12-01

    Full Text Available Traditionally, the horizontal orientation of a ship (heading) has been obtained from a gyrocompass. This instrument is still used on research vessels but has an estimated error of about 2-3 degrees, inducing a systematic error in the cross-track velocity measured by an Acoustic Doppler Current Profiler (ADCP). The three-dimensional positioning system (GPS 3DF) provides an independent heading measurement with accuracy better than 0.1 degree. The Spanish research vessel BIO Hespérides has been operating with this new system since 1996. For the first time on this vessel, the data from this new instrument are used to estimate the gyrocompass error. The methodology we use follows the scheme developed by Griffiths (1994), which compares data from the gyrocompass and the GPS system in order to obtain an interpolated error function. In the present work we apply this methodology to mesoscale surveys performed during the observational phase of the OMEGA project, in the Alboran Sea. The heading-dependent gyrocompass error dominated. Errors in gyrocompass heading of 1.4-3.4 degrees have been found, which give a maximum error in measured cross-track ADCP velocity of 24 cm/s.
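The size of the reported effect follows directly from geometry: a heading error Δθ rotates the measured velocity frame, so the cross-track component picks up an error of about U·sin(Δθ), where U is the ship speed over ground. A quick check, assuming a survey speed of roughly 4 m/s (a value chosen to match the reported 24 cm/s; the abstract does not state the speed):

```python
import math

U = 4.05  # m/s, assumed survey speed (about 8 knots); not from the abstract
for dtheta_deg in (1.4, 3.4):
    # cross-track velocity error induced by a heading error of dtheta
    err = U * math.sin(math.radians(dtheta_deg))
    print(f"{dtheta_deg:>4} deg -> {100 * err:.0f} cm/s cross-track error")
```

At this assumed speed, the 3.4-degree heading error reproduces the 24 cm/s maximum cross-track velocity error quoted above.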

  9. Error analysis of cine phase contrast MRI velocity measurements used for strain calculation.

    Science.gov (United States)

    Jensen, Elisabeth R; Morrow, Duane A; Felmlee, Joel P; Odegard, Gregory M; Kaufman, Kenton R

    2015-01-02

    Cine Phase Contrast (CPC) MRI offers unique insight into localized skeletal muscle behavior by providing the ability to quantify muscle strain distribution during cyclic motion. Muscle strain is obtained by temporally integrating and spatially differentiating CPC-encoded velocity. The aim of this study was to quantify CPC measurement accuracy and precision and to describe error propagation into displacement and strain. Using an MRI-compatible jig to move a B-gel phantom within a 1.5 T MRI bore, CPC-encoded velocities were collected. The three orthogonal encoding gradients (through plane, frequency, and phase) were evaluated independently in post-processing. Two systematic error types were corrected: eddy current-induced bias and calibration-type error. Measurement accuracy and precision were quantified before and after removal of systematic error. Through plane- and frequency-encoded data accuracy were within 0.4 mm/s after removal of systematic error - a 70% improvement over the raw data. Corrected phase-encoded data accuracy was within 1.3 mm/s. Measured random error was between 1 to 1.4 mm/s, which followed the theoretical prediction. Propagation of random measurement error into displacement and strain was found to depend on the number of tracked time segments, time segment duration, mesh size, and dimensional order. To verify this, theoretical predictions were compared to experimentally calculated displacement and strain error. For the parameters tested, experimental and theoretical results aligned well. Random strain error approximately halved with a two-fold mesh size increase, as predicted. Displacement and strain accuracy were within 2.6 mm and 3.3%, respectively. These results can be used to predict the accuracy and precision of displacement and strain in user-specific applications. Copyright © 2014 Elsevier Ltd. All rights reserved.
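The propagation of random velocity error into displacement can be checked with a quick Monte Carlo: temporally integrating k independent noise samples of standard deviation sigma over time steps dt yields displacement noise of standard deviation sigma·dt·sqrt(k), i.e. it grows with the number of tracked time segments as the abstract describes. A sketch with assumed parameter values (not the study's):

```python
import math
import random

sigma, dt, k, trials = 1.2, 0.02, 20, 50_000  # mm/s, s, segments (all assumed)
random.seed(3)

# theoretical displacement noise SD after integrating k noisy velocity samples
disp_sd_pred = sigma * dt * math.sqrt(k)

disps = []
for _ in range(trials):
    d = sum(random.gauss(0.0, sigma) * dt for _ in range(k))  # trapezoid-free Euler sum
    disps.append(d)
mean = sum(disps) / trials
sd = math.sqrt(sum((d - mean) ** 2 for d in disps) / trials)

print(disp_sd_pred, sd)  # the empirical SD matches the sqrt(k) prediction
```

Spatial differentiation of such displacement fields then amplifies this noise further, which is why the strain error in the study also depends on mesh size.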

  10. Getting satisfied with "satisfaction of search": How to measure errors during multiple-target visual search.

    Science.gov (United States)

    Biggs, Adam T

    2017-07-01

    Visual search studies are common in cognitive psychology, and the results generally focus upon accuracy, response times, or both. Most research has focused upon search scenarios where no more than 1 target will be present for any single trial. However, if multiple targets can be present on a single trial, it introduces an additional source of error because the found target can interfere with subsequent search performance. These errors have been studied thoroughly in radiology for decades, although their emphasis in cognitive psychology studies has been more recent. One particular issue with multiple-target search is that these subsequent search errors (i.e., specific errors which occur following a found target) are measured differently by different studies. There is currently no guidance as to which measurement method is best or what impact different measurement methods could have upon various results and conclusions. The current investigation provides two efforts to address these issues. First, the existing literature is reviewed to clarify the appropriate scenarios where subsequent search errors could be observed. Second, several different measurement methods are used with several existing datasets to contrast and compare how each method would have affected the results and conclusions of those studies. The evidence is then used to provide appropriate guidelines for measuring multiple-target search errors in future studies.

  11. Range-limited centrality measures in complex networks

    Science.gov (United States)

    Ercsey-Ravasz, Mária; Lichtenwalter, Ryan N.; Chawla, Nitesh V.; Toroczkai, Zoltán

    2012-06-01

    Here we present a range-limited approach to centrality measures in both nonweighted and weighted directed complex networks. We introduce an efficient method that generates for every node and every edge its betweenness centrality based on shortest paths of lengths not longer than ℓ=1,...,L in the case of nonweighted networks, and for weighted networks the corresponding quantities based on minimum weight paths with path weights not larger than wℓ=ℓΔ, ℓ=1,2,...,L=R/Δ. These measures provide a systematic description of the positioning importance of a node (edge) with respect to its network neighborhoods one step out, two steps out, etc., up to and including the whole network. They are more informative than traditional centrality measures, as network transport typically happens on all length scales, from transport to nearest neighbors to the farthest reaches of the network. We show that range-limited centralities obey universal scaling laws for large nonweighted networks. As the computation of traditional centrality measures is costly, this scaling behavior can be exploited to efficiently estimate centralities of nodes and edges for all ranges, including the traditional ones. The scaling behavior can also be exploited to show that the ranking top list of nodes (edges) based on their range-limited centralities quickly freezes as a function of the range, and hence the diameter-range top list can be efficiently predicted. We also show how to estimate the typical largest node-to-node distance for a network of N nodes, exploiting the aforementioned scaling behavior. These observations were made on model networks and on a large social network inferred from cell-phone trace logs (~5.5×10^6 nodes and ~2.7×10^7 edges). Finally, we apply these concepts to efficiently detect the vulnerability backbone of a network (defined as the smallest percolating cluster of the highest betweenness nodes and edges) and illustrate the importance of weight-based centrality measures in
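For nonweighted networks, a range-limited betweenness can be obtained from Brandes' algorithm by simply not expanding the breadth-first search beyond depth L, so only shortest paths of length at most L contribute. A minimal pure-Python sketch (counting ordered source-target pairs; the paper's exact normalization may differ):

```python
from collections import deque

def range_limited_betweenness(adj, L):
    """Betweenness centrality counting only shortest paths of length <= L
    (Brandes' algorithm with a BFS depth cutoff; ordered (s, t) pairs)."""
    bc = {v: 0.0 for v in adj}
    for s in adj:
        dist = {s: 0}
        sigma = {v: 0 for v in adj}   # number of shortest paths from s
        sigma[s] = 1
        pred = {v: [] for v in adj}
        order, queue = [], deque([s])
        while queue:
            v = queue.popleft()
            order.append(v)
            if dist[v] == L:          # do not expand past the range limit
                continue
            for w in adj[v]:
                if w not in dist:
                    dist[w] = dist[v] + 1
                    queue.append(w)
                if dist[w] == dist[v] + 1:
                    sigma[w] += sigma[v]
                    pred[w].append(v)
        # back-propagate pair dependencies (standard Brandes accumulation)
        delta = {v: 0.0 for v in adj}
        for w in reversed(order):
            for v in pred[w]:
                delta[v] += sigma[v] / sigma[w] * (1.0 + delta[w])
            if w != s:
                bc[w] += delta[w]
    return bc

# 5-node path graph 0-1-2-3-4: the middle node's centrality grows with range
path = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}
print(range_limited_betweenness(path, L=2))
print(range_limited_betweenness(path, L=4))
```

Increasing L from 2 to the graph diameter recovers the traditional betweenness, illustrating how these measures interpolate from local to global importance.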

  12. Mode-locked laser autocollimator with an expanded measurement range.

    Science.gov (United States)

    Chen, Yuan-Liu; Shimizu, Yuki; Kudo, Yukitoshi; Ito, So; Gao, Wei

    2016-07-11

    A mode-locked laser is employed as the light source of a laser autocollimator, instead of the conventionally employed single-wavelength laser, for an expanded range of tilt angle measurement. A group of the spatially separated diffracted beams from a diffraction grating are focused by a collimator objective to form an array of light spots on the focal plane of the collimator objective where a light position-sensing photodiode is located for detecting the linear displacement of the light spot array corresponding to the tilt angle of the reflector. A prototype mode-locked femtosecond laser autocollimator is designed and constructed for achieving a measurement range of 11000 arc-seconds.

  13. Experimental validation of error in temperature measurements in thin walled ductile iron castings

    DEFF Research Database (Denmark)

    Pedersen, Karl Martin; Tiedje, Niels Skat

    2007-01-01

    An experimental analysis has been performed to validate the measurement error of cooling curves measured in thin-walled ductile cast iron. Specially designed thermocouples with Ø0.2 mm thermocouple wire in a Ø1.6 mm ceramic tube were used for the experiments. Temperatures were measured in plates... to a level about 20 °C lower than the actual temperature in the casting. Factors affecting the measurement error (oxide layer on the thermocouple wire, penetration into the ceramic tube and variation in placement of the thermocouple) are discussed. Finally, it is shown how a useful cooling curve may be obtained...

  14. Development of New Measurement System of Errors in the Multiaxial Machine Tool for an Active Compensation

    Directory of Open Access Journals (Sweden)

    Noureddine Barka

    2016-01-01

    Full Text Available Error compensation techniques have been widely applied to improve multiaxis machine accuracy. However, due to the lack of reliable instrumentation for direct and overall measurements, all compensation methods are based on offline measurements of each error component separately. The results of these measurements are static in nature and can only reflect the conditions at the moment of measurement. These results are not representative under real working conditions because of disturbances from load deformations, thermal distortions, and dynamic perturbations. The present approach involves the development of a new measurement system capable of dynamically evaluating the errors according to the six degrees of freedom. The developed system allows the generation of useful data that cover all machine states regardless of the operating conditions. The obtained measurements can be used for performance evaluation, calibration, and real-time compensation of errors. This system is able to perform dynamic measurements reflecting the global accuracy of the machine tool without a long and expensive analysis of the various error sources' contributions. Finally, the system exhibits metrological characteristics compatible with high-precision applications.

  15. Correcting for multivariate measurement error by regression calibration in meta-analyses of epidemiological studies

    DEFF Research Database (Denmark)

    Tybjærg-Hansen, Anne

    2009-01-01

    Within-person variability in measured values of multiple risk factors can bias their associations with disease. The multivariate regression calibration (RC) approach can correct for such measurement error and has been applied to studies in which true values or independent repeat measurements... -specific, averaged and empirical Bayes estimates of RC parameters. Additionally, we allow for binary covariates (e.g. smoking status) and for uncertainty and time trends in the measurement error corrections. Our methods are illustrated using a subset of individual participant data from prospective long-term studies... in the Fibrinogen Studies Collaboration to assess the relationship between usual levels of plasma fibrinogen and the risk of coronary heart disease, allowing for measurement error in plasma fibrinogen and several confounders. Publication date: 2009/3/30...

  16. Positive phase error from parallel conductance in tetrapolar bio-impedance measurements and its compensation

    Directory of Open Access Journals (Sweden)

    Ivan M Roitt

    2010-01-01

    Full Text Available Bioimpedance measurements are of great use and can provide considerable insight into biological processes.  However, there are a number of possible sources of measurement error that must be considered.  The most dominant source of error is found in bipolar measurements where electrode polarisation effects are superimposed on the true impedance of the sample.  Even with the tetrapolar approach that is commonly used to circumvent this issue, other errors can persist. Here we characterise the positive phase and rise in impedance magnitude with frequency that can result from the presence of any parallel conductive pathways in the measurement set-up.  It is shown that fitting experimental data to an equivalent electrical circuit model allows for accurate determination of the true sample impedance as validated through finite element modelling (FEM of the measurement chamber.  Finally, the model is used to extract dispersion information from cell cultures to characterise their growth.

  17. Linear mixed models for replication data to efficiently allow for covariate measurement error.

    Science.gov (United States)

    Bartlett, Jonathan W; De Stavola, Bianca L; Frost, Chris

    2009-11-10

    It is well known that measurement error in the covariates of regression models generally causes bias in parameter estimates. Correction for such biases requires information concerning the measurement error, which is often in the form of internal validation or replication data. Regression calibration (RC) is a popular approach to correct for covariate measurement error, which involves predicting the true covariate using error-prone measurements. Likelihood methods have previously been proposed as an alternative approach to estimate the parameters in models affected by measurement error, but have been relatively infrequently employed in medical statistics and epidemiology, partly because of computational complexity and concerns regarding robustness to distributional assumptions. We show how a standard random-intercepts model can be used to obtain maximum likelihood (ML) estimates when the outcome model is linear or logistic regression under certain normality assumptions, when internal error-prone replicate measurements are available. Through simulations we show that for linear regression, ML gives more efficient estimates than RC, although the gain is typically small. Furthermore, we show that RC and ML estimates remain consistent even when the normality assumptions are violated. For logistic regression, our implementation of ML is consistent if the true covariate is conditionally normal given the outcome, in contrast to RC. In simulations, this ML estimator showed less bias in situations where RC gives non-negligible biases. Our proposal makes the ML approach to dealing with covariate measurement error more accessible to researchers, which we hope will improve its viability as a useful alternative to methods such as RC.
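The RC approach discussed above predicts the true covariate from error-prone replicates by shrinking the replicate mean toward its overall mean with the reliability ratio lambda = sx²/(sx² + su²/k) for the mean of k replicates, where su² is estimated from within-person replicate differences. A simulation sketch of this idea (illustrative, not the paper's implementation):

```python
import random

random.seed(2)
n, beta = 100_000, 0.5           # sample size and true regression slope
sigma_x, sigma_u = 1.0, 0.7      # true covariate SD and measurement error SD

x = [random.gauss(0, sigma_x) for _ in range(n)]
w1 = [xi + random.gauss(0, sigma_u) for xi in x]   # replicate 1
w2 = [xi + random.gauss(0, sigma_u) for xi in x]   # replicate 2
y = [beta * xi + random.gauss(0, 0.2) for xi in x]
wbar = [(a + b) / 2 for a, b in zip(w1, w2)]

def mean(v): return sum(v) / len(v)
def var(v):
    m = mean(v)
    return sum((t - m) ** 2 for t in v) / len(v)

# estimate the error variance from replicate differences: Var(w1 - w2) = 2*su^2
su2_hat = sum((a - b) ** 2 for a, b in zip(w1, w2)) / n / 2

# reliability of the 2-replicate mean: lambda = sx^2 / (sx^2 + su^2 / 2)
sx2_hat = var(wbar) - su2_hat / 2
lam = sx2_hat / (sx2_hat + su2_hat / 2)

mw = mean(wbar)
xhat = [mw + lam * (w - mw) for w in wbar]   # RC-predicted true covariate

def slope(u, v):
    mu, mv = mean(u), mean(v)
    return (sum((a - mu) * (b - mv) for a, b in zip(u, v)) / len(u)) / var(u)

print(slope(wbar, y), slope(xhat, y))   # attenuated vs. RC-corrected slope
```

Regressing the outcome on the RC prediction recovers an approximately unbiased slope, while the naive regression on the replicate mean remains attenuated; the abstract's point is that ML can squeeze out somewhat more efficiency than this RC estimator under normality.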

  18. Uncertainty in Measurement and Total Error: Tools for Coping with Diagnostic Uncertainty.

    Science.gov (United States)

    Theodorsson, Elvar

    2017-03-01

    Laboratory medicine decreases diagnostic uncertainty but is itself influenced by factors causing uncertainty. Error and uncertainty methods are commonly seen as incompatible in laboratory medicine. New versions of the Guide to the Expression of Uncertainty in Measurement and the International Vocabulary of Metrology will incorporate both uncertainty and error methods, which will assist collaboration between metrology and laboratories. The law of propagation of uncertainty and Bayesian statistics are theoretically preferable to frequentist statistical methods in diagnostic medicine. However, frequentist statistics are better known and more widely practiced. Error and uncertainty methods should both be recognized as legitimate for calculating diagnostic uncertainty. Copyright © 2016 The Author. Published by Elsevier Inc. All rights reserved.
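The law of propagation of uncertainty referred to above combines input standard uncertainties through the partial derivatives of the measurement function; for a product f = a·b with uncorrelated inputs, relative uncertainties add in quadrature. A minimal numeric sketch with generic input values:

```python
import math

# Law of propagation of uncertainty for f(a, b) = a * b, uncorrelated inputs:
#   u_c(f)^2 = (df/da)^2 * u(a)^2 + (df/db)^2 * u(b)^2
#            = b^2 * u(a)^2 + a^2 * u(b)^2
a, u_a = 10.0, 0.2   # value and standard uncertainty of input a
b, u_b = 3.0, 0.1    # value and standard uncertainty of input b

f = a * b
u_c = math.sqrt((b * u_a) ** 2 + (a * u_b) ** 2)   # combined standard uncertainty
print(f, u_c)
```

Equivalently, (u_c/f)² = (u_a/a)² + (u_b/b)², which is the quadrature rule for relative uncertainties of a product.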

  19. Active and passive compensation of APPLE II-introduced multipole errors through beam-based measurement

    Energy Technology Data Exchange (ETDEWEB)

    Chung, Ting-Yi; Huang, Szu-Jung; Fu, Huang-Wen; Chang, Ho-Ping; Chang, Cheng-Hsiang [National Synchrotron Radiation Research Center, Hsinchu Science Park, Hsinchu 30076, Taiwan (China); Hwang, Ching-Shiang [National Synchrotron Radiation Research Center, Hsinchu Science Park, Hsinchu 30076, Taiwan (China); Department of Electrophysics, National Chiao Tung University, Hsinchu 30050, Taiwan (China)

    2016-08-01

    The effect of an APPLE II-type elliptically polarized undulator (EPU) on the beam dynamics was investigated using active and passive methods. To reduce the tune shift and improve the injection efficiency, dynamic multipole errors were compensated using L-shaped iron shims, which resulted in stable top-up operation for a minimum gap. The skew quadrupole error was compensated using a multipole corrector, which was located downstream of the EPU to minimize betatron coupling, and it ensured the enhancement of the synchrotron radiation brightness. The investigation methods, a numerical simulation algorithm, a multipole error correction method, and the beam-based measurement results are discussed.

  20. The immediate effects of ankle balance taping with kinesiology tape on ankle active range of motion and performance in the Balance Error Scoring System.

    Science.gov (United States)

    Lee, Sun-Min; Lee, Jung-Hoon

    2017-05-01

    This study investigated the changes in ankle active range of motion (AROM) and performance on the Balance Error Scoring System (BESS) in cases in which no tape, placebo taping, or ankle balance taping (ABT) with kinesiology tape was used. Randomized cross-over trial. University laboratory. Fifteen physically active individuals (7 men, 8 women). Postural control was assessed based on performances on the BESS. Active ankle flexibility was assessed by measuring the ankle AROM of both ankles under each taping condition in a random order at 1-week intervals. Ankle AROM did not differ significantly among the taping conditions. There were no significant differences in the error scores of single-leg and tandem stances on a firm surface among the taping conditions. Compared to those obtained in the absence of taping, the error scores of the single-leg and tandem stances on a foam surface were significantly lower with ABT, but they did not significantly differ from the placebo taping scores. This study showed that ABT with kinesiology tape immediately improved postural control on unstable surfaces without changes in ankle AROM. Copyright © 2016 Elsevier Ltd. All rights reserved.

  1. Obesity increases precision errors in total body dual-energy x-ray absorptiometry measurements.

    Science.gov (United States)

    Knapp, Karen M; Welsman, Joanne R; Hopkins, Susan J; Shallcross, Andrew; Fogelman, Ignac; Blake, Glen M

    2015-01-01

    Total body (TB) dual-energy X-ray absorptiometry (DXA) is increasingly being used to measure body composition in research and clinical settings. This study investigated the effect of body mass index (BMI) and body fat on precision errors for total and regional TB DXA measurements of bone mineral density, fat tissue, and lean tissue using the GE Lunar Prodigy (GE Healthcare, Bedford, UK). One hundred forty-four women with BMIs ranging from 18.5 to 45.9 kg/m² were recruited. Participants had duplicate DXA scans of the TB with repositioning between examinations. Participants were divided into 3 groups based on their BMI, and the root mean square standard deviation and the percentage coefficient of variation were calculated for each group. The root mean square standard deviation (percentage coefficient of variation) for the normal, overweight, and obese (>30 kg/m²; n = 32) BMI groups, respectively, were: total BMD (g/cm²): 0.009 (0.77%), 0.009 (0.69%), 0.011 (0.91%); total fat (g): 545 (2.98%), 486 (1.72%), 677 (1.55%); total lean (g): 551 (1.42%), 540 (1.34%), and 781 (1.68%). These results suggest that serial measurements in obese subjects should be treated with caution because the least significant change may be larger than anticipated. Copyright © 2015 The International Society for Clinical Densitometry. Published by Elsevier Inc. All rights reserved.
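
    The precision statistics used in this record (RMS-SD and %CV from duplicate scans) can be sketched in a few lines. The pairing convention follows standard densitometry practice; the sample readings below are illustrative and are not the study's data:

```python
import math

def precision_errors(pairs):
    """RMS-SD and %CV from duplicate measurements.

    Each duplicate pair contributes SD_i^2 = d_i^2 / 2, where d_i is
    the within-pair difference; RMS-SD = sqrt(mean of SD_i^2), and
    %CV = 100 * RMS-SD / grand mean of all measurements.
    """
    n = len(pairs)
    rms_sd = math.sqrt(sum((a - b) ** 2 / 2.0 for a, b in pairs) / n)
    grand_mean = sum(a + b for a, b in pairs) / (2.0 * n)
    return rms_sd, 100.0 * rms_sd / grand_mean

# Illustrative duplicate total-fat readings (g), not the study's data:
rms, cv = precision_errors([(24500, 24900), (30200, 29800), (41800, 42500)])
```

    The least significant change quoted in such studies is then typically 2.77 × RMS-SD for 95% confidence.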

  2. Rain radar measurement error estimation using data assimilation in an advection-based nowcasting system

    Science.gov (United States)

    Merker, Claire; Ament, Felix; Clemens, Marco

    2017-04-01

    The quantification of measurement uncertainty for rain radar data remains challenging. Radar reflectivity measurements are affected, amongst other things, by calibration errors, noise, blocking and clutter, and attenuation. Their combined impact on measurement accuracy is difficult to quantify due to incomplete process understanding and complex interdependencies. An improved quality assessment of rain radar measurements is of interest for applications both in meteorology and hydrology, for example for precipitation ensemble generation, rainfall runoff simulations, or in data assimilation for numerical weather prediction. Especially a detailed description of the spatial and temporal structure of errors is beneficial in order to make best use of the areal precipitation information provided by radars. Radar precipitation ensembles are one promising approach to represent spatially variable radar measurement errors. We present a method combining ensemble radar precipitation nowcasting with data assimilation to estimate radar measurement uncertainty at each pixel. This combination of ensemble forecast and observation yields a consistent spatial and temporal evolution of the radar error field. We use an advection-based nowcasting method to generate an ensemble reflectivity forecast from initial data of a rain radar network. Subsequently, reflectivity data from single radars is assimilated into the forecast using the Local Ensemble Transform Kalman Filter. The spread of the resulting analysis ensemble provides a flow-dependent, spatially and temporally correlated reflectivity error estimate at each pixel. We will present first case studies that illustrate the method using data from a high-resolution X-band radar network.
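
    The core idea of the record above, using per-pixel ensemble spread as a spatially resolved error estimate, can be illustrated with a toy grid and synthetic perturbations (this is not the authors' LETKF pipeline, just the spread computation):

```python
import numpy as np

rng = np.random.default_rng(0)
m, ny, nx = 20, 4, 5                                      # members, grid size
base = rng.uniform(10.0, 40.0, size=(ny, nx))             # synthetic reflectivity (dBZ)
ensemble = base + rng.normal(0.0, 2.0, size=(m, ny, nx))  # perturbed ensemble members

# Per-pixel ensemble spread (std over members) serves as a flow-dependent,
# spatially and temporally varying estimate of the reflectivity error.
spread = ensemble.std(axis=0, ddof=1)
```

    In the actual method the ensemble comes from advection-based nowcasting and is updated by assimilating single-radar observations, so the spread inherits realistic spatial and temporal correlations.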

  3. Bayesian Semiparametric Density Deconvolution in the Presence of Conditionally Heteroscedastic Measurement Errors

    KAUST Repository

    Sarkar, Abhra

    2014-10-02

    We consider the problem of estimating the density of a random variable when precise measurements on the variable are not available, but replicated proxies contaminated with measurement error are available for sufficiently many subjects. Under the assumption of additive measurement errors this reduces to a problem of deconvolution of densities. Deconvolution methods often make restrictive and unrealistic assumptions about the density of interest and the distribution of measurement errors, e.g., normality and homoscedasticity and thus independence from the variable of interest. This article relaxes these assumptions and introduces novel Bayesian semiparametric methodology based on Dirichlet process mixture models for robust deconvolution of densities in the presence of conditionally heteroscedastic measurement errors. In particular, the models can adapt to asymmetry, heavy tails and multimodality. In simulation experiments, we show that our methods vastly outperform a recent Bayesian approach based on estimating the densities via mixtures of splines. We apply our methods to data from nutritional epidemiology. Even in the special case when the measurement errors are homoscedastic, our methodology is novel and dominates other methods that have been proposed previously. Additional simulation results, instructions on getting access to the data set and R programs implementing our methods are included as part of online supplemental materials.

  4. An Observability Metric for Underwater Vehicle Localization Using Range Measurements

    Science.gov (United States)

    Arrichiello, Filippo; Antonelli, Gianluca; Aguiar, Antonio Pedro; Pascoal, Antonio

    2013-01-01

    The paper addresses observability issues related to the general problem of single and multiple Autonomous Underwater Vehicle (AUV) localization using only range measurements. While an AUV is submerged, localization devices, such as Global Navigation Satellite Systems, are ineffective, due to the attenuation of electromagnetic waves. AUV localization based on dead reckoning techniques and the use of affordable motion sensor units is also not practical, due to divergence caused by sensor bias and drift. For these reasons, localization systems often build on trilateration algorithms that rely on the measurements of the ranges between an AUV and a set of fixed transponders using acoustic devices. Still, such solutions are often expensive, require cumbersome calibration procedures and only allow for AUV localization in an area that is defined by the geometrical arrangement of the transponders. A viable alternative for AUV localization that has recently come to the fore exploits the use of complementary information on the distance from the AUV to a single transponder, together with information provided by on-board resident motion sensors, such as, for example, depth, velocity and acceleration measurements. This concept can be extended to address the problem of relative localization between two AUVs equipped with acoustic sensors for inter-vehicle range measurements. Motivated by these developments, in this paper, we show that both the problems of absolute localization of a single vehicle and the relative localization of multiple vehicles can be treated using the same mathematical framework, and tailoring concepts of observability derived for nonlinear systems, we analyze how the performance in localization depends on the types of motion imparted to the AUVs. For this effect, we propose a well-defined observability metric and validate its usefulness, both in simulation and by carrying out experimental tests with a real marine vehicle during which the performance of an

  5. An Observability Metric for Underwater Vehicle Localization Using Range Measurements

    Directory of Open Access Journals (Sweden)

    Filippo Arrichiello

    2013-11-01

    Full Text Available The paper addresses observability issues related to the general problem of single and multiple Autonomous Underwater Vehicle (AUV) localization using only range measurements. While an AUV is submerged, localization devices, such as Global Navigation Satellite Systems, are ineffective, due to the attenuation of electromagnetic waves. AUV localization based on dead reckoning techniques and the use of affordable motion sensor units is also not practical, due to divergence caused by sensor bias and drift. For these reasons, localization systems often build on trilateration algorithms that rely on the measurements of the ranges between an AUV and a set of fixed transponders using acoustic devices. Still, such solutions are often expensive, require cumbersome calibration procedures and only allow for AUV localization in an area that is defined by the geometrical arrangement of the transponders. A viable alternative for AUV localization that has recently come to the fore exploits the use of complementary information on the distance from the AUV to a single transponder, together with information provided by on-board resident motion sensors, such as, for example, depth, velocity and acceleration measurements. This concept can be extended to address the problem of relative localization between two AUVs equipped with acoustic sensors for inter-vehicle range measurements. Motivated by these developments, in this paper, we show that both the problems of absolute localization of a single vehicle and the relative localization of multiple vehicles can be treated using the same mathematical framework, and tailoring concepts of observability derived for nonlinear systems, we analyze how the performance in localization depends on the types of motion imparted to the AUVs. For this effect, we propose a well-defined observability metric and validate its usefulness, both in simulation and by carrying out experimental tests with a real marine vehicle during which the
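
    As background to the trilateration approach mentioned in these two records, a standard linearized range-based position fix can be sketched as follows. The transponder layout and noise-free ranges are illustrative assumptions, not the paper's experimental setup:

```python
import numpy as np

def trilaterate(beacons, ranges):
    """Position from ranges to known beacons.

    Subtracting the first range equation |p - b_0|^2 = r_0^2 from the
    others removes the quadratic term in p, leaving the linear system
    2 (b_i - b_0) . p = r_0^2 - r_i^2 + |b_i|^2 - |b_0|^2,
    solved here in the least-squares sense.
    """
    beacons = np.asarray(beacons, dtype=float)
    ranges = np.asarray(ranges, dtype=float)
    b0, r0 = beacons[0], ranges[0]
    A = 2.0 * (beacons[1:] - b0)
    rhs = (r0**2 - ranges[1:]**2
           + np.sum(beacons[1:]**2, axis=1) - np.sum(b0**2))
    p, *_ = np.linalg.lstsq(A, rhs, rcond=None)
    return p

# Noise-free example: vehicle at (3, 4) with three fixed transponders.
pos = trilaterate([[0, 0], [10, 0], [0, 10]],
                  [5.0, np.sqrt(65.0), np.sqrt(45.0)])
```

    The single-transponder alternative discussed in the abstract replaces the fixed geometry with ranges collected over time along the vehicle's trajectory, which is exactly why observability then depends on the motion imparted to the AUV.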

  6. Linear and nonlinear magnetic error measurements using action and phase jump analysis

    Directory of Open Access Journals (Sweden)

    Javier F. Cardona

    2009-01-01

    Full Text Available “Action and phase jump” analysis is presented—a beam-based method that uses amplitude and phase knowledge of a particle trajectory to locate and measure magnetic errors in an accelerator lattice. The expected performance of the method is first tested using single-particle simulations in the optical lattice of the Relativistic Heavy Ion Collider (RHIC). Such simulations predict that under ideal conditions typical quadrupole errors can be estimated within an uncertainty of 0.04%. Other simulations suggest that sextupole errors can be estimated within a 3% uncertainty. Then the action and phase jump analysis is applied to real RHIC orbits with known quadrupole errors, and to real Super Proton Synchrotron (SPS) orbits with known sextupole errors. It is possible to estimate the strength of a skew quadrupole error from measured RHIC orbits within a 1.2% uncertainty, and to estimate the strength of a strong sextupole component from the measured SPS orbits within a 7% uncertainty.

  7. MEASUREMENT ERROR EFFECT ON THE POWER OF CONTROL CHART FOR ZERO-TRUNCATED POISSON DISTRIBUTION

    Directory of Open Access Journals (Sweden)

    Ashit Chakraborty

    2013-09-01

    Full Text Available Measurement error is the difference between the true value and the measured value of a quantity that exists in practice and may considerably affect the performance of control charts in some cases. Measurement error variability has uncertainty which can be from several sources. In this paper, we have studied the effect of these sources of variability on the power characteristics of control chart and obtained the values of average run length (ARL) for zero-truncated Poisson distribution (ZTPD). Expression of the power of control chart for variable sample size under standardized normal variate for ZTPD is also derived.
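
    The link between the chart's signal probability (power) and the ARL can be made concrete with a small sketch. The control limits and the ZTPD parameter below are illustrative choices, not values from the paper:

```python
import math

def ztp_pmf(k, lam):
    """Zero-truncated Poisson: P(X = k) for k >= 1."""
    return math.exp(-lam) * lam**k / (math.factorial(k) * (1.0 - math.exp(-lam)))

def power_and_arl(lam, lcl, ucl):
    """Power = P(signal) = P(X < lcl or X > ucl) under ZTPD(lam),
    with integer limits lcl >= 1 and ucl; run lengths are geometric,
    so ARL = 1 / power."""
    in_control = sum(ztp_pmf(k, lam) for k in range(lcl, ucl + 1))
    power = 1.0 - in_control
    return power, 1.0 / power

power, arl = power_and_arl(4.0, 1, 9)
```

    Measurement error effectively shifts or inflates the observed distribution relative to the true one, changing the in-control sum above and hence the ARL, which is the mechanism the paper studies.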

  8. [Measurement Error Analysis and Calibration Technique of NTC - Based Body Temperature Sensor].

    Science.gov (United States)

    Deng, Chi; Hu, Wei; Diao, Shengxi; Lin, Fujiang; Qian, Dahong

    2015-11-01

    An NTC thermistor-based wearable body temperature sensor was designed. This paper described the design principles and realization method of the NTC-based body temperature sensor. The temperature measurement error sources of the body temperature sensor were analyzed in detail, and an automatic measurement and calibration method for the ADC error is given. The results showed that the measurement accuracy of the calibrated body temperature sensor is better than ±0.04 °C. The sensor offers high accuracy, small size, and low power consumption.
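
    The resistance-to-temperature conversion at the heart of such a sensor is commonly done with the Beta model; a minimal sketch with illustrative part parameters (10 kΩ at 25 °C, B = 3950 K are typical catalog values, not values from the paper):

```python
import math

def ntc_temperature_c(r_ohm, r0=10_000.0, t0_c=25.0, beta=3950.0):
    """NTC Beta model: 1/T = 1/T0 + ln(R/R0)/B, with T in kelvin.

    r_ohm: measured thermistor resistance; r0: resistance at the
    reference temperature t0_c; beta: material constant in kelvin.
    """
    t0_k = t0_c + 273.15
    inv_t = 1.0 / t0_k + math.log(r_ohm / r0) / beta
    return 1.0 / inv_t - 273.15
```

    At R = R0 this returns exactly t0_c; in practice, calibration against reference temperature points (as with the ADC calibration described in the paper) corrects the residual model and conversion errors.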

  9. Consequences of exposure measurement error for confounder identification in environmental epidemiology

    DEFF Research Database (Denmark)

    Budtz-Jørgensen, Esben; Keiding, Niels; Grandjean, Philippe

    2003-01-01

    Non-differential measurement error in the exposure variable is known to attenuate the dose-response relationship. The amount of attenuation introduced in a given situation is not only a function of the precision of the exposure measurement but also depends on the conditional variance of the true...... exposure given the other independent variables. In addition, confounder effects may also be affected by the exposure measurement error. These difficulties in statistical model development are illustrated by examples from an epidemiological study performed in the Faroe Islands to investigate the adverse...
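
    The attenuation described above (regression dilution) can be demonstrated with synthetic data; in this sketch the attenuation factor var(X)/(var(X)+var(U)) is 0.5 by construction:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000
x = rng.normal(0.0, 1.0, n)             # true exposure
w = x + rng.normal(0.0, 1.0, n)         # error-prone measurement (var_u = 1)
y = 2.0 * x + rng.normal(0.0, 1.0, n)   # outcome; true slope = 2

# OLS of y on the mismeasured w recovers only lambda * slope, where
# lambda = var(x) / (var(x) + var(u)) = 1 / (1 + 1) = 0.5.
slope_w = np.cov(w, y)[0, 1] / np.var(w, ddof=1)
```

    When other covariates correlate with the true exposure, the same mechanism lets part of the exposure effect leak into the confounder coefficients, which is the model-development difficulty the abstract highlights.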

  10. Dynamic Modeling Accuracy Dependence on Errors in Sensor Measurements, Mass Properties, and Aircraft Geometry

    Science.gov (United States)

    Grauer, Jared A.; Morelli, Eugene A.

    2013-01-01

    A nonlinear simulation of the NASA Generic Transport Model was used to investigate the effects of errors in sensor measurements, mass properties, and aircraft geometry on the accuracy of dynamic models identified from flight data. Measurements from a typical system identification maneuver were systematically and progressively deteriorated and then used to estimate stability and control derivatives within a Monte Carlo analysis. Based on the results, recommendations were provided for maximum allowable errors in sensor measurements, mass properties, and aircraft geometry to achieve desired levels of dynamic modeling accuracy. Results using other flight conditions, parameter estimation methods, and a full-scale F-16 nonlinear aircraft simulation were compared with these recommendations.

  11. Optical measurements of long-range protein vibrations

    Science.gov (United States)

    Acbas, Gheorghe; Niessen, Katherine A.; Snell, Edward H.; Markelz, A. G.

    2014-01-01

    Protein biological function depends on structural flexibility and change. From cellular communication through membrane ion channels to oxygen uptake and delivery by haemoglobin, structural changes are critical. It has been suggested that vibrations that extend through the protein play a crucial role in controlling these structural changes. While nature may utilize such long-range vibrations for optimization of biological processes, bench-top characterization of these extended structural motions for engineered biochemistry has been elusive. Here we show the first optical observation of long-range protein vibrational modes. This is achieved by orientation-sensitive terahertz near-field microscopy measurements of chicken egg white lysozyme single crystals. Underdamped modes are found to exist for frequencies >10 cm⁻¹. The existence of these persisting motions indicates that damping and intermode coupling are weaker than previously assumed. The methodology developed permits protein engineering based on dynamical network optimization.

  12. Utilizing measure-based feedback in control-mastery theory: A clinical error.

    Science.gov (United States)

    Snyder, John; Aafjes-van Doorn, Katie

    2016-09-01

    Clinical errors and ruptures are an inevitable part of clinical practice. Oftentimes, therapists are unaware that a clinical error or rupture has occurred, leaving no space for repair, and potentially leading to patient dropout and/or less effective treatment. One way to overcome our blind spots is by frequently and systematically collecting measure-based feedback from the patient. Patient feedback measures that focus on the process of psychotherapy such as the Patient's Experience of Attunement and Responsiveness scale (PEAR) can be used in conjunction with treatment outcome measures such as the Outcome Questionnaire 45.2 (OQ-45.2) to monitor the patient's therapeutic experience and progress. The regular use of these types of measures can aid clinicians in the identification of clinical errors and the associated patient deterioration that might otherwise go unnoticed and unaddressed. The current case study describes an instance of clinical error that occurred during the 2-year treatment of a highly traumatized young woman. The clinical error was identified using measure-based feedback and subsequently understood and addressed from the theoretical standpoint of the control-mastery theory of psychotherapy. An alternative hypothetical response is also presented and explained using control-mastery theory. (PsycINFO Database Record (c) 2016 APA, all rights reserved).

  13. Dynamic Data Filtering of Long-Range Doppler LiDAR Wind Speed Measurements

    Directory of Open Access Journals (Sweden)

    Hauke Beck

    2017-06-01

    Full Text Available Doppler LiDARs have become flexible and versatile remote sensing devices for wind energy applications. The possibility to measure radial wind speed components contemporaneously at multiple distances is an advantage with respect to meteorological masts. However, these measurements must be filtered due to the measurement geometry, hard targets and atmospheric conditions. To ensure a maximum data availability while producing low measurement errors, we introduce a dynamic data filter approach that conditionally decouples the dependency of data availability with increasing range. The new filter approach is based on the assumption of self-similarity, which has not been used so far for LiDAR data filtering. We tested the accuracy of the dynamic data filter approach together with other commonly used filter approaches from research and industry applications. This has been done with data from a long-range pulsed LiDAR installed at the offshore wind farm ‘alpha ventus’. There, an ultrasonic anemometer located approximately 2.8 km from the LiDAR was used as reference. The analysis of around 1.5 weeks of data shows that the error of the mean radial velocity can be minimised for wake and free-stream conditions.

  14. Least-MSE calibration procedures for corrections of measurement and misclassification errors in generalized linear models

    Directory of Open Access Journals (Sweden)

    Parnchit Wattanasaruch

    2012-09-01

    Full Text Available The analyses of clinical and epidemiologic studies are often based on some kind of regression analysis, mainly linear regression and logistic models. These analyses are often affected by the fact that one or more of the predictors are measured with error. The error in the predictors is also known to bias the estimates and hypothesis testing results. One of the procedures frequently used to handle such problems and reduce the measurement errors is regression calibration for predicting the continuous covariate. The idea is to predict the true value of an error-prone predictor from the observed data, then to use the predicted value for the analyses. In this research we develop four calibration procedures, namely probit, complementary log-log, logit, and logistic calibration procedures, for correction of the measurement error and/or the misclassification error, to predict the true values of the misclassified explanatory variables used in generalized linear models. The processes give the predicted true values of a binary explanatory variable using the calibration techniques, then use these predicted values to fit three models, the probit, the complementary log-log, and the logit models, under the binary response. All are investigated by considering the mean square error (MSE) in 1,000 simulation studies in each case of the known parameters and conditions. The results show that the proposed calibration techniques that perform adequately well are the probit, logistic, and logit calibration procedures. Both the probit calibration procedure and the probit model are superior to the logistic and logit calibrations due to the smallest MSE. Furthermore, the probit model-parameter estimates also improve the effects of the misclassified explanatory variable. Only the complementary log-log model and its calibration technique are appropriate when measurement error is moderate and sample size is high.

  15. Quantitative shearography: error reduction by using more than three measurement channels

    Energy Technology Data Exchange (ETDEWEB)

    Charrett, Tom O. H.; Francis, Daniel; Tatam, Ralph P.

    2011-01-10

    Shearography is a noncontact optical technique used to measure surface displacement derivatives. Full surface strain characterization can be achieved using shearography configurations employing at least three measurement channels. Each measurement channel is sensitive to a single displacement gradient component defined by its sensitivity vector. A matrix transformation is then required to convert the measured components to the orthogonal displacement gradients required for quantitative strain measurement. This transformation, conventionally performed using three measurement channels, amplifies any errors present in the measurement. This paper investigates the use of additional measurement channels using the results of a computer model and an experimental shearography system. Results are presented showing that the addition of a fourth channel can reduce the errors in the computed orthogonal components by up to 33% and that, by using 10 channels, reductions of around 45% should be possible.
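
    The error amplification of the channel-to-gradient transformation described above can be quantified directly: for unit-variance channel noise, the summed variance of the least-squares gradient estimate is trace((SᵀS)⁻¹), and adding a (nonzero) channel row can only shrink it. A sketch with random sensitivity vectors, which are illustrative and not the paper's geometry:

```python
import numpy as np

def noise_amplification(S):
    """Summed variance of the least-squares gradient estimate
    g = S^+ m under unit-variance channel noise: trace((S^T S)^-1).
    Rows of S are the channel sensitivity vectors."""
    return float(np.trace(np.linalg.inv(S.T @ S)))

rng = np.random.default_rng(2)
S3 = rng.normal(size=(3, 3))                    # three sensitivity vectors
S4 = np.vstack([S3, rng.normal(size=(1, 3))])   # add a fourth channel
```

    By the Sherman-Morrison identity, appending a nonzero row strictly decreases this trace, which is the mechanism behind the reported error reductions of up to 33% with a fourth channel and around 45% with ten.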

  16. Quantitative shearography: error reduction by using more than three measurement channels.

    Science.gov (United States)

    Charrett, Tom O H; Francis, Daniel; Tatam, Ralph P

    2011-01-10

    Shearography is a noncontact optical technique used to measure surface displacement derivatives. Full surface strain characterization can be achieved using shearography configurations employing at least three measurement channels. Each measurement channel is sensitive to a single displacement gradient component defined by its sensitivity vector. A matrix transformation is then required to convert the measured components to the orthogonal displacement gradients required for quantitative strain measurement. This transformation, conventionally performed using three measurement channels, amplifies any errors present in the measurement. This paper investigates the use of additional measurement channels using the results of a computer model and an experimental shearography system. Results are presented showing that the addition of a fourth channel can reduce the errors in the computed orthogonal components by up to 33% and that, by using 10 channels, reductions of around 45% should be possible.

  17. The impact of crown-rump length measurement error on combined Down syndrome screening: a simulation study.

    Science.gov (United States)

    Salomon, L J; Bernard, M; Amarsy, R; Bernard, J P; Ville, Y

    2009-05-01

    To evaluate the impact of a 5-mm error in the measurement of crown-rump length (CRL) in a woman undergoing ultrasound and biochemistry sequential combined screening for Down syndrome. Based on existing risk calculation algorithms, we simulated the case of a 35-year-old-woman undergoing combined screening based on nuchal translucency (NT) measurement and early second-trimester maternal serum markers (human chorionic gonadotropin (hCG) and alpha-fetoprotein (AFP) expressed as multiples of the median (MoM)). Two measurement errors were considered (±5 mm) for four different CRLs (50, 60, 70 and 80 mm), with five different NT measurements (1, 1.5, 2, 2.5 and 3 mm) in a patient undergoing biochemistry testing at 14 + 4, 15, 16, 17 or 18 weeks' gestation. Four different values for each maternal serum marker were tested (1, 1.5, 2 and 2.5 MoM for hCG, and 0.5, 0.8, 1 and 1.5 MoM for AFP), leading to a total of 3200 simulations of the impact of measurement error. In all cases the ratio between the risk as assessed with or without the measurement error was calculated (measurement error-related risk ratio (MERR)). Over 3200 simulated cases, MERR ranged from 0.53 to 2.14. In 586 simulations (18.3%), it was 1.33. Based on a risk cut-off of 1/300, women would have been misclassified in 112 simulations (3.5%). This would go up to 33 (27.5%) out of the 120 simulations in women with 'borderline' risk, with 1.5 MoM for hCG and 0.5 MoM for AFP, and NT measurement of 1 or 2 mm. Down syndrome screening may be highly sensitive to measurement errors in CRL. Quality control of CRL measurement should be performed together with quality control of NT measurement in order to provide the highest standard of care.

  18. Measurement of individual loudness functions by trisection of loudness ranges.

    Science.gov (United States)

    Villchur, Edgar; Killion, Mead C

    2008-10-01

    Loudness-balance measurements with monaurally impaired subjects have shown that the shape of the loudness versus sound-pressure curve among hearing-impaired persons varies significantly. But the effectiveness of adjusting the compression characteristics of wide-dynamic-range compression hearing aids (the compression ratios, the variation of compression ratio with level, and the threshold of compression) to restore normal loudness growth for the individual patient has never been properly tested; individual loudness measurements have been too uncertain to permit meaningful individual adjustments. Recent investigators have reported standard deviations of such measurements in normal-hearing subjects of 6.4 dB and 7.8 dB. This investigation describes a method of measuring the loudness function with a standard deviation in normal-hearing subjects of the order of 1 dB, both significantly lower than that of previous methods and sufficiently accurate for individual-subject adjustments. Each of nine normal-hearing subjects (seven of them inexperienced, one a 9-year-old) was asked to make three successive loudness trisections within an amplitude range of 40 to 80 dB SPL, providing six points from which to plot a loudness-function curve between these limits. The individual and average curves were validated as accurate loudness functions by comparing them to the curve defined by the equation of loudness versus amplitude in current Standards. In a second validation experiment, the loudness functions of masked ears measured by trisection were compared to the loudness function of those ears measured by loudness balance between masked and unmasked ears. The difference between a loudness function based on the average of subject trisections and the loudness function defined by the ANSI Standard loudness equation was -1.92 dB at the lowest trisection level and +0.05 dB at the highest level. The standard deviations of subject responses were 1.63 dB at the lowest trisection level and 0.68 dB at the highest.

  19. Error analysis and data forecast in the centre of gravity measurement system for small tractors

    NARCIS (Netherlands)

    Jiang, J.D.; Hoogmoed, W.B.; Yingdi, Z.; Xian, Z.

    2011-01-01

    A novel centre of gravity measurement system for small tractors with the principle of the three-point reaction is presented. According to the prototype of a small tractor gravity centre test platform, a mathematic multi-body dynamics prototype was built to analyze the measurement error in the centre

  20. Can i just check...? Effects of edit check questions on measurement error and survey estimates

    NARCIS (Netherlands)

    Lugtig, Peter; Jäckle, Annette

    2014-01-01

    Household income is difficult to measure, since it requires the collection of information about all potential income sources for each member of a household. We assess the effects of two types of edit check questions on measurement error and survey estimates: within-wave edit checks use responses to

  1. Design, calibration and error analysis of instrumentation for heat transfer measurements in internal combustion engines

    Science.gov (United States)

    Ferguson, C. R.; Tree, D. R.; Dewitt, D. P.; Wahiduzzaman, S. A. H.

    1987-01-01

    The paper reports the methodology and uncertainty analyses of instrumentation for heat transfer measurements in internal combustion engines. Results are presented for determining the local wall heat flux in an internal combustion engine (using a surface thermocouple-type heat flux gage) and the apparent flame-temperature and soot volume fraction-path length product in a diesel engine (using two-color pyrometry). It is shown that a surface thermocouple heat transfer gage suitably constructed and calibrated will have an accuracy of 5 to 10 percent. It is also shown that, when applying two-color pyrometry to measure the apparent flame temperature and soot volume fraction-path length product, it is important to choose at least one of the two wavelengths to lie in the range of 1.3 to 2.3 micrometers. A carefully calibrated two-color pyrometer can ensure that random errors in the apparent flame temperature and in the soot volume fraction-path length product will remain small (within about 1 percent and 10 percent, respectively).
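
    The two-color (ratio) pyrometry principle mentioned above can be sketched with the Wien approximation and a gray-body assumption. The wavelengths below are illustrative picks within the recommended 1.3 to 2.3 µm range; this is not the authors' calibration:

```python
import math

C2 = 1.4388e-2  # second radiation constant (m*K)

def wien_intensity(lam_m, t_k, eps=1.0):
    """Wien-approximation spectral intensity (arbitrary scale)."""
    return eps * lam_m**-5 * math.exp(-C2 / (lam_m * t_k))

def ratio_temperature(i1, i2, lam1, lam2):
    """Two-color pyrometry: with equal emissivities at the two
    wavelengths the emissivity cancels from the intensity ratio, so
    T = C2 (1/lam1 - 1/lam2) / (5 ln(lam2/lam1) - ln(i1/i2))."""
    return (C2 * (1.0 / lam1 - 1.0 / lam2)
            / (5.0 * math.log(lam2 / lam1) - math.log(i1 / i2)))

# Round trip at 2000 K with wavelengths of 1.6 um and 2.2 um:
t_est = ratio_temperature(wien_intensity(1.6e-6, 2000.0),
                          wien_intensity(2.2e-6, 2000.0),
                          1.6e-6, 2.2e-6)
```

    The soot volume fraction-path length product then follows from the absolute intensity at one wavelength once the apparent temperature is known.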

  2. Software Tool for Analysis of Breathing-Related Errors in Transthoracic Electrical Bioimpedance Spectroscopy Measurements

    Science.gov (United States)

    Abtahi, F.; Gyllensten, I. C.; Lindecrantz, K.; Seoane, F.

    2012-12-01

    During the last decades, Electrical Bioimpedance Spectroscopy (EBIS) has been applied in a range of different applications, mainly using the frequency-sweep technique. Traditionally the tissue under study is considered to be time-invariant, and dynamic changes of tissue activity are ignored and instead treated as a noise source. This assumption has not been adequately tested and could have a negative impact and limit the accuracy of impedance monitoring systems. In order to successfully use frequency-sweeping EBIS for monitoring time-variant systems, it is paramount to study the effect of frequency-sweep delay on Cole model-based analysis. In this work, we present a software tool that can be used to simulate the influence of respiration activity in frequency-sweep EBIS measurements of the human thorax and analyse the effects of the different error sources. Preliminary results indicate that the deviation in the EBIS measurement might be significant at any frequency, and especially in the impedance plane. Therefore the impact on Cole-model analysis might differ depending on the method applied for Cole parameter estimation.

  3. Characterization of positional errors and their influence on micro four-point probe measurements on a 100 nm Ru film

    DEFF Research Database (Denmark)

    Kjær, Daniel; Hansen, Ole; Østerberg, Frederik Westergaard

    2015-01-01

    Thin-film sheet resistance measurements at high spatial resolution and on small pads are important and can be realized with micrometer-scale four-point probes. As a result of the small scale, the measurements are affected by electrode position errors. We have characterized the electrode position… -configuration measurements, however, are shown to eliminate the effect of position errors to a level limited either by electrical measurement noise or dynamic position errors. We show that the probe contact points remain almost static on the surface during the measurements (measured on an atomic scale), with a standard deviation of the dynamic position errors of 3 Å. We demonstrate how to experimentally distinguish between different sources of measurement errors, e.g. electrical measurement noise, probe geometry error, and static and dynamic electrode position errors…

  4. Model selection for marginal regression analysis of longitudinal data with missing observations and covariate measurement error.

    Science.gov (United States)

    Shen, Chung-Wei; Chen, Yi-Hau

    2015-10-01

    Missing observations and covariate measurement error commonly arise in longitudinal data. However, existing methods for model selection in marginal regression analysis of longitudinal data fail to address the potential bias resulting from these issues. To tackle this problem, we propose a new model selection criterion, the Generalized Longitudinal Information Criterion, which is based on an approximately unbiased estimator for the expected quadratic error of a candidate marginal model, accounting for both data missingness and covariate measurement error. The simulation results reveal that the proposed method performs quite well in the presence of missing data and covariate measurement error, whereas naive procedures that ignore such complexity in the data may perform quite poorly. The proposed method is applied to data from the Taiwan Longitudinal Study on Aging to assess the relationship of depression with health and social status in the elderly, accommodating measurement error in the covariate as well as missing observations. © The Author 2015. Published by Oxford University Press. All rights reserved.

  5. Covariate measurement error correction methods in mediation analysis with failure time data.

    Science.gov (United States)

    Zhao, Shanshan; Prentice, Ross L

    2014-12-01

    Mediation analysis is important for understanding the mechanisms whereby one variable causes changes in another. Measurement error could obscure the ability of the potential mediator to explain such changes. This article focuses on developing correction methods for measurement error in the mediator with failure time outcomes. We consider a broad definition of measurement error, including technical error, and error associated with temporal variation. The underlying model with the "true" mediator is assumed to be of the Cox proportional hazards model form. The induced hazard ratio for the observed mediator no longer has a simple form independent of the baseline hazard function, due to the conditioning event. We propose a mean-variance regression calibration approach and a follow-up time regression calibration approach, to approximate the partial likelihood for the induced hazard function. Both methods demonstrate value in assessing mediation effects in simulation studies. These methods are generalized to multiple biomarkers and to both case-cohort and nested case-control sampling designs. We apply these correction methods to the Women's Health Initiative hormone therapy trials to understand the mediation effect of several serum sex hormone measures on the relationship between postmenopausal hormone therapy and breast cancer risk. © 2014, The International Biometric Society.

  6. Bias Errors in Measurement of Vibratory Power and Implication for Active Control of Structural Vibration

    DEFF Research Database (Denmark)

    Ohlrich, Mogens; Henriksen, Eigil; Laugesen, Søren

    1997-01-01

    Uncertainties in power measurements performed with piezoelectric accelerometers and force transducers are investigated. It is shown that the inherent structural damping of the transducers is responsible for a bias phase error, which typically is in the order of one degree. Fortunately, such bias errors can be largely compensated for by an absolute calibration of the transducers and inverse filtering that results in very small residual errors. Experimental results of this study indicate that these uncertainties will be in the order of one percent with respect to amplitude and two tenths of a degree for the phase. This implies that input power at a single point can be measured to within one dB in practical structures which possess some damping. The uncertainty is increased, however, when sums of measured power contributions from more sources are to be minimised, as is the case in active…

  7. A New Design of the Test Rig to Measure the Transmission Error of Automobile Gearbox

    Science.gov (United States)

    Hou, Yixuan; Zhou, Xiaoqin; He, Xiuzhi; Liu, Zufei; Liu, Qiang

    2017-12-01

    Noise and vibration affect the performance of automobile gearboxes, and transmission error has been regarded as an important excitation source in gear systems. Most current research is focused on the measurement and analysis of single gear drives, and few investigations of transmission error measurement in a complete gearbox have been conducted. In order to measure transmission error in a complete automobile gearbox, an electrically closed test rig was developed. Based on the principle of modular design, the test rig can be used to test different types of gearbox by adding the necessary modules. A test rig for a front-engine, rear-wheel-drive gearbox was constructed, and static and modal analyses were performed to verify the performance of a key component.

  8. Measuring Systems for Thermometer Calibration in Low-Temperature Range

    Science.gov (United States)

    Szmyrka-Grzebyk, A.; Lipiński, L.; Manuszkiewicz, H.; Kowal, A.; Grykałowska, A.; Jancewicz, D.

    2011-12-01

    The national temperature standard for the low-temperature range between 13.8033 K and 273.16 K has been established in Poland at the Institute of Low Temperature and Structure Research (INTiBS). The standard consists of sealed cells for the realization of six fixed points of the International Temperature Scale of 1990 (ITS-90) in the low-temperature range, an adiabatic cryostat, Isotech water and mercury triple-point baths, capsule standard resistance thermometers (CSPRTs), and AC and DC bridges with standard resistors for thermometer resistance measurements. INTiBS calibrates CSPRTs at the low-temperature fixed points with uncertainties below 1 mK. In the lower temperature range, between 2.5 K and about 25 K, rhodium-iron (RhFe) resistance thermometers are calibrated by comparison with a standard that participated in the EURAMET.T-K1.1 comparison. INTiBS offers a calibration service for industrial platinum resistance thermometers and for digital thermometers between 77 K and 273 K; these types of thermometers may also be calibrated at INTiBS in a higher temperature range, up to 550 °C. The Laboratory of Temperature Standard at INTiBS acquired accreditation from the Polish Centre for Accreditation. A management system according to EN ISO/IEC 17025:2005 was established at the Laboratory and presented at the EURAMET QSM Forum.

  9. On the impact of covariate measurement error on spatial regression modelling.

    Science.gov (United States)

    Huque, Md Hamidul; Bondell, Howard; Ryan, Louise

    2014-12-01

    Spatial regression models have grown in popularity in response to rapid advances in GIS (Geographic Information Systems) technology that allows epidemiologists to incorporate geographically indexed data into their studies. However, there are some subtle pitfalls in the use of these models. We show that the presence of covariate measurement error can lead to significant sensitivity of parameter estimation to the choice of spatial correlation structure. We quantify the effect of measurement error on parameter estimates, and then suggest two different ways to produce consistent estimates. We evaluate the methods through a simulation study. These methods are then applied to data on Ischemic Heart Disease (IHD).
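    The bias that covariate measurement error induces, and the kind of correction such methods aim at, can be illustrated with a deliberately simplified, non-spatial toy model (classical attenuation plus regression calibration with a known error variance; all numbers are illustrative, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(42)
n = 200_000
beta = 2.0
sigma_x, sigma_u = 1.0, 0.5             # true-covariate SD and error SD

x = rng.normal(0.0, sigma_x, n)         # true covariate (unobserved)
w = x + rng.normal(0.0, sigma_u, n)     # error-prone observed covariate
y = 1.0 + beta * x + rng.normal(0.0, 1.0, n)

# Naive OLS slope of y on w is attenuated by var(x) / (var(x) + var(u))
naive = np.cov(w, y)[0, 1] / np.var(w)

# Regression-calibration correction, assuming the error variance is known
reliability = (np.var(w) - sigma_u**2) / np.var(w)
corrected = naive / reliability
```

    Here the naive slope shrinks toward roughly 1.6 (= 2.0 x 1/1.25), while the calibrated estimate recovers the true value of 2.0; the spatial setting adds the further complication that this bias interacts with the assumed correlation structure.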

  10. Influence of video compression on the measurement error of the television system

    Science.gov (United States)

    Sotnik, A. V.; Yarishev, S. N.; Korotaev, V. V.

    2015-05-01

    Video data require a very large memory capacity. Finding the optimal quality/volume ratio for video encoding is one of the most pressing problems, given the urgent need to transfer large amounts of video over various networks. Digital TV signal compression reduces the amount of data used to represent the video stream, effectively reducing the stream required for transmission and storage. When television measuring systems are used, the uncertainties caused by compression of the video signal must be taken into account. There are many digital compression methods. The aim of the proposed work is to study the influence of video compression on the measurement error in television systems. Measurement error of an object parameter is the main characteristic of television measuring systems; accuracy characterizes the difference between the measured value and the actual parameter value. Errors caused by the optical system are one source of error in television system measurements; the processing method applied to the received video signal is another. In compression with a constant data stream rate, the presence of errors leads to large distortions; with constant quality, errors increase the amount of data required to transmit or record an image frame. The purpose of intra-coding is to reduce the spatial redundancy within a frame (or field) of the television image, redundancy caused by the strong correlation between the elements of the image. If a suitable orthogonal transformation can be found, an array of image samples can be converted into a matrix of coefficients that are not correlated with each other. Entropy coding can then be applied to these uncorrelated coefficients to achieve a reduction in the digital stream. One can select a transformation such that, for typical images, most of the matrix coefficients will be almost zero. Excluding these zero coefficients also…
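    The decorrelating-transform idea described above can be illustrated with an orthonormal 8x8 DCT-II, a standard choice for intra-coding; this is a generic sketch, not the specific codec analyzed in the paper:

```python
import numpy as np

def dct_matrix(N=8):
    # Orthonormal DCT-II matrix: rows are the cosine basis vectors
    n = np.arange(N)
    C = np.sqrt(2.0 / N) * np.cos(np.pi * (2 * n[None, :] + 1) * n[:, None] / (2 * N))
    C[0, :] = np.sqrt(1.0 / N)
    return C

C = dct_matrix(8)
block = np.add.outer(np.arange(8.0), np.arange(8.0))   # smooth 8x8 test block

coeffs = C @ block @ C.T                            # decorrelating forward transform
kept = np.where(np.abs(coeffs) > 1.0, coeffs, 0.0)  # drop near-zero coefficients
restored = C.T @ kept @ C                           # inverse transform of kept coefficients
```

    For a smooth block the energy concentrates in a handful of low-order coefficients, so discarding the near-zero remainder barely changes the reconstruction; that residual difference is exactly the compression-induced error the paper studies in television measuring systems.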

  11. Phantom Effects in School Composition Research: Consequences of Failure to Control Biases Due to Measurement Error in Traditional Multilevel Models

    Science.gov (United States)

    Televantou, Ioulia; Marsh, Herbert W.; Kyriakides, Leonidas; Nagengast, Benjamin; Fletcher, John; Malmberg, Lars-Erik

    2015-01-01

    The main objective of this study was to quantify the impact of failing to account for measurement error on school compositional effects. Multilevel structural equation models were incorporated to control for measurement error and/or sampling error. Study 1, a large sample of English primary students in Years 1 and 4, revealed a significantly…

  12. Intersatellite laser ranging with homodyne optical phase locking for Space Advanced Gravity Measurements mission.

    Science.gov (United States)

    Yeh, Hsien-Chi; Yan, Qi-Zhong; Liang, Yu-Rong; Wang, Ying; Luo, Jun

    2011-04-01

    In this paper, we present the scheme and preliminary results of an intersatellite laser ranging system designed for the Earth's gravity recovery mission proposed in China, called Space Advanced Gravity Measurements (SAGM). The proposed intersatellite distance is about 100 km, and the required precision of intersatellite range monitoring is 10 nm/Hz(1/2) at 0.1 Hz. To meet these needs, we designed a transponder-type intersatellite laser ranging system using a homodyne optical phase-locking technique, which differs from the heterodyne optical phase-locked loop used in the GRACE follow-on mission. Since an ultrastable oscillator is unnecessary in the homodyne phase-locked loop, the measurement error caused by the frequency instability of such an oscillator need not be taken into account. In the preliminary study, a heterodyne interferometer with a 10-m baseline (measurement arm length) was built to demonstrate the validity of the measurement scheme. The measurement results show that a displacement measurement resolution of about 3.2 nm was achieved. © 2011 American Institute of Physics.

  13. Broadband Measurement of Aerosol Extinction in the Visible Range

    Science.gov (United States)

    He, Quanfu; Bluvshtein, Nir; Segev, Lior; Flores, Michel; Rudich, Yinon; Washenfelder, Rebecca; Brown, Steven

    2017-04-01

    Atmospheric aerosols influence the Earth's radiative budget directly by scattering and absorbing incoming solar radiation. Aerosol direct forcing remains one of the largest uncertainties in quantifying the role that aerosols play in the Earth's radiative budget. The optical properties of aerosols vary as a function of wavelength, but few measurements have reported the wavelength dependence of aerosol extinction cross sections and complex refractive indices, particularly in the blue and visible spectral range. There is also currently a large gap in our knowledge of how optical properties in the visible spectrum evolve as a function of atmospheric aging. In this study, we constructed a new and novel laboratory instrument to measure aerosol extinction as a function of wavelength, using cavity enhanced spectroscopy with a white light source. This broadband cavity enhanced spectroscopy (BBCES) system covers the 395-700 nm spectral region using a broadband light source and a grating spectrometer with a charge-coupled device (CCD) detector. We evaluated the BBCES by measuring extinction cross sections for purely scattering, slightly absorbing, and strongly absorbing aerosols atomized from standard materials. We also retrieved refractive indices from the measured extinction cross sections. Secondary organic aerosols (SOA) from biogenic and anthropogenic precursors were "aged" to different time scales (1 to 10 days) in an Oxidation Flow Reactor (OFR) under the combined influence of OH, O3 and UV light. The new BBCES was used to measure the extinction cross sections of the SOA online. This talk will provide a comprehensive understanding of how aerosol optical properties in the 395-700 nm spectrum evolve during the aging process.

  14. Inclinometer Assembly Error Calibration and Horizontal Image Correction in Photoelectric Measurement Systems

    Directory of Open Access Journals (Sweden)

    Xiaofang Kong

    2018-01-01

    Inclinometer assembly error is one of the key factors affecting the measurement accuracy of photoelectric measurement systems. In order to solve the problem of the lack of complete attitude information in the measurement system, this paper proposes a new inclinometer assembly error calibration and horizontal image correction method utilizing plumb lines in the scene. Based on the principle that a plumb line in the scene should appear as a vertical line on the image plane when the camera in the photoelectric system is placed horizontally, the direction cosine matrix between the geodetic coordinate system and the inclinometer coordinate system is first calculated by three-dimensional coordinate transformation. Then, the homography matrix required for horizontal image correction is obtained, along with the constraint equation satisfying the inclinometer-camera system requirements. Finally, the assembly error of the inclinometer is calibrated by an optimization function. Experimental results show that the inclinometer assembly error can be calibrated using only the inclination angle information in conjunction with plumb lines in the scene. Perturbation simulations and practical experiments in MATLAB indicate the feasibility of the proposed method. The tilted image can also be corrected to horizontal by the homography matrix obtained during the calculation of the inclinometer assembly error.
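    A minimal sketch of the horizontal-correction step, assuming a pinhole camera with known intrinsics K and modeling the leveling homography as K R K^-1 for the measured tilt angles; the angle convention and function names are my assumptions, not the paper's formulation:

```python
import numpy as np

def rot_x(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[1.0, 0.0, 0.0], [0.0, c, -s], [0.0, s, c]])

def rot_z(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def leveling_homography(K, pitch, roll):
    # Homography mapping the tilted image onto a virtually level camera:
    # undo the measured pitch (about x) and roll (about the optical axis z)
    R = rot_x(-pitch) @ rot_z(-roll)
    return K @ R @ np.linalg.inv(K)

def warp_point(H, u, v):
    # Apply a homography to one pixel coordinate
    p = H @ np.array([u, v, 1.0])
    return p[:2] / p[2]
```

    Applied to an image rotated by a known roll, this homography restores points on a plumb line to a vertical column, which is the leveling criterion the method exploits.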

  15. Unaccounted source of systematic errors in measurements of the Newtonian gravitational constant G

    Energy Technology Data Exchange (ETDEWEB)

    DeSalvo, Riccardo, E-mail: Riccardo.desalvo@gmail.com [California State University, Northridge, 18111 Nordhoff Street, Northridge, CA 91330-8332 (United States); University of Sannio, Corso Garibaldi 107, Benevento 82100 (Italy)

    2015-06-26

    Many precision measurements of G have produced a spread of results incompatible with measurement errors. Clearly an unknown source of systematic errors is at work. It is proposed here that most of the discrepancies derive from subtle deviations from Hooke's law, caused by avalanches of entangled dislocations. The idea is supported by deviations from linearity reported by experimenters measuring G, similar to what is observed, on a larger scale, in low-frequency spring oscillators. Some mitigating experimental apparatus modifications are suggested. - Highlights: • Source of discrepancies in measurements of the universal gravitational constant G. • Collective motion of dislocations results in a breakdown of Hooke's law. • Self-organized criticality produces non-predictive shifts of the equilibrium point. • A new dissipation mechanism, different from loss-angle and viscous models, is necessary. • The mitigation measures proposed may bring coherence to the measurements of G.

  16. Measurement of peak impact loads differ between accelerometers - Effects of system operating range and sampling rate.

    Science.gov (United States)

    Ziebart, Christina; Giangregorio, Lora M; Gibbs, Jenna C; Levine, Iris C; Tung, James; Laing, Andrew C

    2017-06-14

    A wide variety of accelerometer systems, with differing sensor characteristics, are used to detect impact loading during physical activities. The study examined the effects of system characteristics on measured peak impact loading during a variety of activities by comparing outputs from three separate accelerometer systems, and by assessing the influence of simulated reductions in operating range and sampling rate. Twelve healthy young adults performed seven tasks (vertical jump, box drop, heel drop, and bilateral single leg and lateral jumps) while simultaneously wearing three tri-axial accelerometers including a criterion standard laboratory-grade unit (Endevco 7267A) and two systems primarily used for activity-monitoring (ActiGraph GT3X+, GCDC X6-2mini). Peak acceleration (gmax) was compared across accelerometers, and errors resulting from down-sampling (from 640 to 100Hz) and range-limiting (to ±6g) the criterion standard output were characterized. The Actigraph activity-monitoring accelerometer underestimated gmax by an average of 30.2%; underestimation by the X6-2mini was not significant. Underestimation error was greater for tasks with greater impact magnitudes. gmax was underestimated when the criterion standard signal was down-sampled (by an average of 11%), range limited (by 11%), and by combined down-sampling and range-limiting (by 18%). These effects explained 89% of the variance in gmax error for the Actigraph system. This study illustrates that both the type and intensity of activity should be considered when selecting an accelerometer for characterizing impact events. In addition, caution may be warranted when comparing impact magnitudes from studies that use different accelerometers, and when comparing accelerometer outputs to osteogenic impact thresholds proposed in literature. Crown Copyright © 2017. Published by Elsevier Ltd. All rights reserved.
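    The simulated range-limiting and down-sampling described above can be reproduced on a synthetic impact signal; the spike shape, width, and timing below are illustrative, not the study's data:

```python
import numpy as np

fs_hi = 640                     # criterion-standard sampling rate, Hz
t = np.arange(0, 1.0, 1.0 / fs_hi)

# Synthetic impact: a narrow 8 g spike (about 2 ms wide) on a 1 g baseline,
# deliberately placed off the 100 Hz sampling grid
accel = 1.0 + 7.0 * np.exp(-0.5 * ((t - 0.5037) / 0.002) ** 2)

g_max_true = accel.max()

# Range-limiting: clip to a +/-6 g operating range
g_max_clipped = np.clip(accel, -6.0, 6.0).max()

# Down-sampling to 100 Hz: keep the nearest sample on the coarse grid
idx = np.round(np.arange(0.0, 1.0, 0.01) * fs_hi).astype(int)
g_max_down = accel[idx].max()
```

    On this synthetic spike, clipping caps the peak at the 6 g range limit, and the 100 Hz grid misses the narrow spike almost entirely, reproducing qualitatively the underestimation mechanisms the study quantifies.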

  17. Computational fluid dynamics analysis and experimental study of a low measurement error temperature sensor used in climate observation

    Science.gov (United States)

    Yang, Jie; Liu, Qingquan; Dai, Wei

    2017-02-01

    To improve the air temperature observation accuracy, a low measurement error temperature sensor is proposed. A computational fluid dynamics (CFD) method is implemented to obtain temperature errors under various environmental conditions. Then, a temperature error correction equation is obtained by fitting the CFD results using a genetic algorithm method. The low measurement error temperature sensor, a naturally ventilated radiation shield, a thermometer screen, and an aspirated temperature measurement platform are characterized in the same environment to conduct the intercomparison. The aspirated platform served as an air temperature reference. The mean temperature errors of the naturally ventilated radiation shield and the thermometer screen are 0.74 °C and 0.37 °C, respectively. In contrast, the mean temperature error of the low measurement error temperature sensor is 0.11 °C. The mean absolute error and the root mean square error between the corrected results and the measured results are 0.008 °C and 0.01 °C, respectively. The correction equation allows the temperature error of the low measurement error temperature sensor to be reduced by approximately 93.8%. The low measurement error temperature sensor proposed in this research may be helpful to provide a relatively accurate air temperature result.

  18. A New Algorithm of Compensation of the Time Interval Error GPS-Based Measurements

    Directory of Open Access Journals (Sweden)

    Jonny Paul ZAVALA DE PAZ

    2010-01-01

    In this paper, we present a new algorithm for compensation of the time interval error (TIE) that applies an unbiased p-step predictive finite impulse response (FIR) filter to the signal of Global Positioning System (GPS)-based measurements. Practical use of the GPS involves various problems inherent to the signal. Two of the most important are the TIE and the instantaneous loss of the GPS signal for a small interval of time, called "holdover". The holdover error is a problem that at present has no solution, and systems that exhibit this type of error produce erroneous synchronization of the GPS signal. Basic holdover algorithms are discussed along with their most critical properties. The efficiency of the predictive filter during holdover is demonstrated in applications to GPS-based measurements of the TIE.
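    For a polynomial signal model, the unbiased p-step predictive FIR estimate coincides with a least-squares polynomial fit over the last N samples extrapolated p steps ahead; the sketch below uses that equivalence (window length, prediction step, and degree are illustrative, not the paper's tuning):

```python
import numpy as np

def predict_p_steps(x, N=32, p=10, degree=1):
    # Unbiased p-step predictive FIR estimate: least-squares polynomial
    # fit over the last N samples, extrapolated p steps ahead
    window = np.asarray(x[-N:], dtype=float)
    n = np.arange(N)
    coef = np.polynomial.polynomial.polyfit(n, window, degree)
    return np.polynomial.polynomial.polyval(N - 1 + p, coef)
```

    During holdover, the predictor output can stand in for the missing GPS-derived TIE samples until lock is reacquired.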

  19. Nano-metrology: The art of measuring X-ray mirrors with slope errors <100 nrad.

    Science.gov (United States)

    Alcock, Simon G; Nistea, Ioana; Sawhney, Kawal

    2016-05-01

    We present a comprehensive investigation of the systematic and random errors of the nano-metrology instruments used to characterize synchrotron X-ray optics at Diamond Light Source. With experimental skill and careful analysis, we show that these instruments used in combination are capable of measuring state-of-the-art X-ray mirrors. Examples are provided of how Diamond metrology data have helped to achieve slope errors of <100 nrad for optical systems installed on synchrotron beamlines, including: iterative correction of substrates using ion beam figuring and optimal clamping of monochromator grating blanks in their holders. Simulations demonstrate how random noise from the Diamond-NOM's autocollimator adds into the overall measured value of the mirror's slope error, and thus predict how many averaged scans are required to accurately characterize different grades of mirror.
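    The scan-averaging point can be illustrated directly: independent autocollimator noise averages down roughly as 1/sqrt(n), which is what sets the number of scans needed for a given mirror grade. The noise level and trace length below are illustrative, not the Diamond-NOM's specification:

```python
import numpy as np

rng = np.random.default_rng(0)
n_points = 500                  # slope samples along the mirror
sigma = 50.0                    # autocollimator noise per scan, nrad RMS

def rms_slope_error(n_scans):
    # Average n_scans noisy scans of an ideal (zero slope error) mirror
    # and report the residual RMS slope error of the averaged trace
    scans = rng.normal(0.0, sigma, (n_scans, n_points))
    return np.sqrt(np.mean(scans.mean(axis=0) ** 2))
```

    A single 50 nrad RMS scan averages down to about 5 nrad after 100 scans, comfortably below a 100 nrad mirror specification.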

  20. Nano-metrology: The art of measuring X-ray mirrors with slope errors <100 nrad

    Energy Technology Data Exchange (ETDEWEB)

    Alcock, Simon G., E-mail: simon.alcock@diamond.ac.uk; Nistea, Ioana; Sawhney, Kawal [Diamond Light Source Ltd., Harwell Science and Innovation Campus, Didcot, Oxfordshire OX11 0DE (United Kingdom)

    2016-05-15

    We present a comprehensive investigation of the systematic and random errors of the nano-metrology instruments used to characterize synchrotron X-ray optics at Diamond Light Source. With experimental skill and careful analysis, we show that these instruments used in combination are capable of measuring state-of-the-art X-ray mirrors. Examples are provided of how Diamond metrology data have helped to achieve slope errors of <100 nrad for optical systems installed on synchrotron beamlines, including: iterative correction of substrates using ion beam figuring and optimal clamping of monochromator grating blanks in their holders. Simulations demonstrate how random noise from the Diamond-NOM’s autocollimator adds into the overall measured value of the mirror’s slope error, and thus predict how many averaged scans are required to accurately characterize different grades of mirror.

  1. Interpolation techniques to reduce error in measurement of toe clearance during obstacle avoidance.

    Science.gov (United States)

    Heijnen, Michel J H; Muir, Brittney C; Rietdyk, Shirley

    2012-01-03

    Foot and toe clearance (TC) are used regularly to describe locomotor control for both clinical and basic research. However, accuracy of TC during obstacle crossing can be compromised by typical sample frequencies, which do not capture the frame when the foot is over the obstacle due to high limb velocities. The purpose of this study was to decrease the error of TC measures by increasing the spatial resolution of the toe trajectory with interpolation. Five young subjects stepped over an obstacle in the middle of an 8 m walkway. Position data were captured at 600 Hz as a gold standard signal (GS-600-Hz). The GS-600-Hz signal was downsampled to 60 Hz (DS-60-Hz). The DS-60-Hz was then interpolated by either upsampling or an algorithm. Error was calculated as the absolute difference in TC between GS-600-Hz and each of the remaining signals, for both the leading limb and the trailing limb. All interpolation methods reduced the TC error to a similar extent. Interpolation reduced the median error of trail TC from 5.4 to 1.1 mm; the maximum error was reduced from 23.4 to 4.2 mm (16.6-3.8%). The median lead TC error improved from 1.6 to 0.5 mm, and the maximum error improved from 9.1 to 1.8 mm (5.3-0.9%). Therefore, interpolating a 60 Hz signal is a valid technique to decrease the error of TC during obstacle crossing. Published by Elsevier Ltd.
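    One common interpolation algorithm for recovering a sub-sample extremum, three-point parabolic interpolation around the discrete peak, can stand in for the upsampling step described above; the toe trajectory below is synthetic, not the study's data:

```python
import numpy as np

fs_hi = 600
t_hi = np.arange(0.0, 0.4, 1.0 / fs_hi)

# Synthetic vertical toe trajectory (gold standard at 600 Hz); the true
# peak is placed between samples of the 60 Hz grid
z = 0.10 + 0.08 * np.sin(2.0 * np.pi * 2.5 * (t_hi - 0.007))

z_lo = z[::10]                                 # down-sampled 60 Hz signal

# Three-point parabolic interpolation around the discrete maximum
i = int(np.argmax(z_lo))
a, b, c = z_lo[i - 1], z_lo[i], z_lo[i + 1]
offset = 0.5 * (a - c) / (a - 2.0 * b + c)     # sub-sample peak location
z_peak = b - 0.25 * (a - c) * offset           # interpolated peak height

err_raw = abs(z.max() - z_lo.max())            # error of raw 60 Hz maximum
err_interp = abs(z.max() - z_peak)             # error after interpolation
```

    As in the study, the interpolated estimate cuts the error of the down-sampled extremum by well over an order of magnitude on this smooth trajectory.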

  2. Picosecond X-ray streak camera dynamic range measurement

    Energy Technology Data Exchange (ETDEWEB)

    Zuber, C., E-mail: celine.zuber@cea.fr; Bazzoli, S.; Brunel, P.; Gontier, D.; Raimbourg, J.; Rubbelynck, C.; Trosseille, C. [CEA, DAM, DIF, F-91297 Arpajon (France); Fronty, J.-P.; Goulmy, C. [Photonis SAS, Avenue Roger Roncier, BP 520, 19106 Brive Cedex (France)

    2016-09-15

    Streak cameras are widely used to record the spatio-temporal evolution of laser-induced plasma. A prototype picosecond X-ray streak camera has been developed and tested by the Commissariat à l'Énergie Atomique et aux Énergies Alternatives to meet the specific needs of the Laser MegaJoule. The dynamic range of this instrument was measured with picosecond X-ray pulses generated by the interaction of a laser beam with a copper target. The required value of 100 is reached only in configurations combining the slowest sweep speed with optimization of the streak tube electron throughput by an appropriate choice of the high voltages applied to its electrodes.

  3. Tunnel and Subsurface Void Detection and Range to Target Measurement

    Energy Technology Data Exchange (ETDEWEB)

    Phillip B. West

    2009-06-01

    Engineers and technicians at the Idaho National Laboratory invented, designed, built, and tested a device capable of detecting, and measuring the distance to, an underground void or tunnel. Preliminary tests demonstrated positive detection of, and range to, a void through as much as 30 meters of topsoil. The device uses acoustic driving-point impedance principles pioneered by the Laboratory for well-bore physical properties logging. Data recorded by the device indicate constructive-destructive interference patterns characteristic of acoustic wave reflection from a downward step change in impedance. Prototype tests demonstrated that interference patterns in the received waves could indicate specific distances. A tool with this capability can quickly (in seconds) indicate the presence and depth/distance of a void or tunnel. Using such a device, border security and military personnel can identify threats of intrusion or weapons caches in most soil conditions, including moist and rocky ones.

  4. Wide-aperture laser beam measurement using transmission diffuser: errors modeling

    Science.gov (United States)

    Matsak, Ivan S.

    2015-06-01

    Instrumental errors in measuring the diameter of a wide-aperture laser beam were modeled in order to build a measurement setup and justify its metrological characteristics. The modeled setup is based on a CCD camera and a transmission diffuser. This method is appropriate for precision measurement of large laser beam widths from 10 mm up to 1000 mm; such beams cannot be measured with other methods based on a slit, pinhole, knife edge, or direct CCD camera measurement. The method is suitable for continuous and pulsed laser irradiation. However, the transmission diffuser method lacks the metrological justification required in the field of wide-aperture beam forming system verification. Considering the non-availability of a standard wide-aperture flat-top beam, modelling is the preferred way to provide basic reference points for developing the measurement system. Modelling was conducted in MathCAD. A super-Lorentz distribution with shape parameter 6-12 was used as the beam model. Theoretical evaluation showed that the key parameters influencing the error are: relative beam size, spatial non-uniformity of the diffuser, lens distortion, physical vignetting, CCD spatial resolution, and effective camera ADC resolution. Errors were modeled for the 90%-of-power beam diameter criterion. The 12th-order super-Lorentz distribution was the primary model, because it precisely matches the experimental distribution at the output of the test beam forming system, although other orders were also used. Analytic expressions were obtained by analyzing the modelling results for each influencing parameter. It was shown that an error of <1% is attainable through an appropriate choice of parameters in these expressions, based on commercially available components of the setup. The method can provide down to 0.1% error when calibration procedures and multiple measurements are used.
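    The 90%-of-power diameter criterion applied to a super-Lorentz beam model can be sketched numerically; the profile order, width, and integration settings below are illustrative:

```python
import numpy as np

def super_lorentz(r, w=1.0, p=12):
    # Super-Lorentzian radial intensity profile of order p (flat-top beam model)
    return 1.0 / (1.0 + (r / w) ** (2 * p))

def d90(w=1.0, p=12, r_max=3.0, n=200_000):
    # Diameter enclosing 90% of the total power, by radial integration:
    # dP is proportional to I(r) * r * dr for a circularly symmetric beam
    r = np.linspace(0.0, r_max, n)
    power = np.cumsum(super_lorentz(r, w, p) * r)
    frac = power / power[-1]
    return 2.0 * r[np.searchsorted(frac, 0.90)]
```

    For a 12th-order profile the 90%-power diameter comes out slightly above the ideal top-hat value of 2 x sqrt(0.9) x w, reflecting the small amount of power carried in the shoulders.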

  5. [Errors in medicine. Causes, impact and improvement measures to improve patient safety].

    Science.gov (United States)

    Waeschle, R M; Bauer, M; Schmidt, C E

    2015-09-01

    The guarantee of quality of care and patient safety is of major importance in hospitals, even though increased economic pressure and work intensification are ubiquitous. Nevertheless, adverse events still occur in 3-4% of hospital stays, and 25-50% of these are estimated to be avoidable. The identification of possible causes of error and the development of measures for the prevention of medical errors are essential for patient safety. The implementation and continuous development of a constructive culture of error tolerance are fundamental. The origins of errors can be differentiated into systemic latent and individual active causes; components of both categories are typically involved when an error occurs. Systemic causes include, for example, outdated structural environments, lack of clinical standards, and low personnel density. These causes arise far from the patient, e.g. in management decisions, and can remain unrecognized for a long time. Individual causes involve, e.g., confirmation bias, fixation error, and prospective memory failure. These causes have a direct impact on patient care and can result in immediate injury to patients. Stress, unclear information, complex systems, and a lack of professional experience can promote individual causes. Awareness of possible causes of error is a fundamental precondition to establishing appropriate countermeasures. Error prevention should include actions directly addressing the causes of error, including checklists and standard operating procedures (SOPs) to avoid fixation and prospective memory failure, and team resource management to improve communication and the generation of shared mental models. Critical incident reporting systems (CIRS) provide the opportunity to learn from previous incidents without injury to patients. Information technology (IT) support systems, such as computerized physician order entry systems, assist in the prevention of medication errors by providing…

  6. Measurement of straightness without Abbe error using an enhanced differential plane mirror interferometer.

    Science.gov (United States)

    Jin, Tao; Ji, Hudong; Hou, Wenmei; Le, Yanfen; Shen, Lu

    2017-01-20

    This paper presents an enhanced differential plane mirror interferometer with high resolution for measuring straightness. Two sets of space symmetrical beams are used to travel through the measurement and reference arms of the straightness interferometer, which contains three specific optical devices: a Koster prism, a wedge prism assembly, and a wedge mirror assembly. Changes in the optical path in the interferometer arms caused by straightness are differential and converted into phase shift through a particular interferometer system. The interferometric beams have a completely common path and space symmetrical measurement structure. The crosstalk of the Abbe error caused by pitch, yaw, and roll angle is avoided. The dead path error is minimized, which greatly enhances the stability and accuracy of the measurement. A measurement resolution of 17.5 nm is achieved. The experimental results fit well with the theoretical analysis.

  7. Statistical theory for estimating sampling errors of regional radiation averages based on satellite measurements

    Science.gov (United States)

    Smith, G. L.; Bess, T. D.; Minnis, P.

    1983-01-01

    The processes which determine the weather and climate are driven by the radiation received by the earth and the radiation subsequently emitted. A knowledge of the absorbed and emitted components of radiation is thus fundamental for the study of these processes. In connection with the desire to improve the quality of long-range forecasting, NASA is developing the Earth Radiation Budget Experiment (ERBE), consisting of a three-channel scanning radiometer and a package of nonscanning radiometers. A set of these instruments is to be flown on both the NOAA-F and NOAA-G spacecraft, in sun-synchronous orbits, and on an Earth Radiation Budget Satellite. The purpose of the scanning radiometer is to obtain measurements from which the average reflected solar radiant exitance and the average earth-emitted radiant exitance at a reference level can be established. The estimate of regional average exitance obtained will not exactly equal the true value of the regional average exitance, but will differ due to spatial sampling. A method is presented for evaluating this spatial sampling error.

  8. Error model of geomagnetic-field measurement and extended Kalman-filter based compensation method.

    Science.gov (United States)

    Ge, Zhilei; Liu, Suyun; Li, Guopeng; Huang, Yan; Wang, Yanni

    2017-01-01

    The real-time, accurate measurement of the geomagnetic field is the foundation for achieving high-precision geomagnetic navigation. The existing geomagnetic-field measurement models are essentially simplified models that cannot accurately describe the sources of measurement error. This paper, on the basis of systematically analyzing the sources of geomagnetic-field measurement error, built a complete measurement model, into which the previously unconsidered geomagnetic daily variation field was introduced. This paper proposed an extended Kalman-filter based compensation method, which allows a large amount of measurement data to be used in estimating parameters to obtain the optimal solution in the statistical sense. The experimental results showed that the compensated strength of the geomagnetic field remained close to the real value and the measurement error was basically controlled within 5 nT. In addition, this compensation method has strong applicability due to its easy data collection and its removal of the dependence on a high-precision measurement instrument.
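The compensation idea can be sketched with a toy extended Kalman filter. This is not the paper's full model (which also covers daily-variation and other error terms); it is a minimal illustration assuming only a constant hard-iron bias and a known reference field strength, with all names and values illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: the true field strength B0 is assumed known (e.g. from a
# reference model), and only a constant hard-iron bias b is estimated.
B0 = 50.0                     # reference field magnitude (arbitrary units)
b_true = np.array([3.0, -2.0, 1.0])

# Simulated magnetometer samples taken at many random orientations.
n = 2000
u = rng.normal(size=(n, 3))
u /= np.linalg.norm(u, axis=1, keepdims=True)
m = B0 * u + b_true + rng.normal(scale=0.05, size=(n, 3))

# EKF over the bias state: the measurement model is h(b) = ||m_k - b||,
# which should equal B0 when the bias estimate is correct.
b = np.zeros(3)               # state estimate
P = np.eye(3) * 10.0          # state covariance
R = 0.05 ** 2                 # measurement noise variance

for mk in m:
    v = mk - b
    norm_v = np.linalg.norm(v)
    H = -(v / norm_v)         # Jacobian of h(b) = ||m_k - b|| w.r.t. b
    y = B0 - norm_v           # innovation
    S = H @ P @ H + R
    K = P @ H / S             # Kalman gain (3-vector)
    b = b + K * y
    P = P - np.outer(K, H @ P)

print(b)  # should approach b_true
```

Because only the field magnitude is used as the measurement, the bias is observable once samples cover many orientations, which mirrors the paper's point that a large amount of measurement data drives the parameter estimate toward the statistical optimum.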

  9. Using Computation Curriculum-Based Measurement Probes for Error Pattern Analysis

    Science.gov (United States)

    Dennis, Minyi Shih; Calhoon, Mary Beth; Olson, Christopher L.; Williams, Cara

    2014-01-01

    This article describes how "curriculum-based measurement--computation" (CBM-C) mathematics probes can be used in combination with "error pattern analysis" (EPA) to pinpoint difficulties in basic computation skills for students who struggle with learning mathematics. Both assessment procedures provide ongoing assessment data…

  10. Accelerating inference for diffusions observed with measurement error and large sample sizes using approximate Bayesian computation

    DEFF Research Database (Denmark)

    Picchini, Umberto; Forman, Julie Lyng

    2016-01-01

    a nonlinear stochastic differential equation model observed with correlated measurement errors and an application to protein folding modelling. An approximate Bayesian computation (ABC)-MCMC algorithm is suggested to allow inference for model parameters within reasonable time constraints. The ABC algorithm...
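The record above is truncated, but the core idea of approximate Bayesian computation can be illustrated with a minimal ABC rejection sampler (simpler than the ABC-MCMC algorithm the paper proposes) for a toy random-walk model observed with measurement error; the model, summary statistic, and tolerance are all illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy model: observations are a random walk (a crude Euler discretisation of
# dX = sigma dW) seen through Gaussian measurement error. We infer sigma by
# ABC rejection: keep prior draws whose simulated data match an observed
# summary statistic.
n_obs, tau = 200, 0.1         # series length, measurement-error SD

def simulate(sigma):
    x = np.cumsum(rng.normal(scale=sigma, size=n_obs))
    return x + rng.normal(scale=tau, size=n_obs)

def summary(y):
    # SD of increments is informative about sigma for this model
    return np.std(np.diff(y))

sigma_true = 0.5
y_obs = simulate(sigma_true)
s_obs = summary(y_obs)

# ABC rejection: uniform prior on sigma, accept if summaries are close
draws = rng.uniform(0.05, 2.0, size=20000)
accepted = [s for s in draws if abs(summary(simulate(s)) - s_obs) < 0.02]
post_mean = np.mean(accepted)
print(post_mean)  # should be near sigma_true
```

ABC-MCMC replaces the blind prior draws with a Markov chain over the parameter, which is what makes inference feasible "within reasonable time constraints" for expensive simulators.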

  11. Methods for determining the effect of flatness deviations, eccentricity and pyramidal errors on angle measurements

    CSIR Research Space (South Africa)

    Kruger, OA

    2000-01-01

    Full Text Available , eccentricity and pyramidal errors of the measuring faces. Deviations in the flatness of angle surfaces have been held responsible for the lack of agreement in angle comparisons. An investigation has been carried out using a small-angle generator...

  12. Correlation Attenuation Due to Measurement Error: A New Approach Using the Bootstrap Procedure

    Science.gov (United States)

    Padilla, Miguel A.; Veprinsky, Anna

    2012-01-01

    Issues with correlation attenuation due to measurement error are well documented. More than a century ago, Spearman proposed a correction for attenuation. However, this correction has seen very little use since it can potentially inflate the true correlation beyond one. In addition, very little confidence interval (CI) research has been done for…

  13. Quantum Non-Demolition Singleshot Parity Measurements for a Proposed Quantum Error Correction Scheme

    Science.gov (United States)

    Petrenko, Andrei; Sun, Luyan; Leghtas, Zaki; Vlastakis, Brian; Kirchmair, Gerhard; Sliwa, Katrina; Narla, Anirudh; Hatridge, Michael; Shankar, Shyam; Blumoff, Jacob; Frunzio, Luigi; Mirrahimi, Mazyar; Devoret, Michel; Schoelkopf, Robert

    2014-03-01

    In order to be effective, a quantum error correction (QEC) scheme requires measurements of an error syndrome to be quantum non-demolition (QND) and fast compared to the rate at which errors occur. Employing a superconducting circuit QED architecture, the parity of a superposition of coherent states in a cavity, or cat states, is the error syndrome for a recently proposed QEC scheme. We demonstrate the tracking of the parity of cat states in a cavity and observe individual jumps of parity in real time with singleshot measurements that are much faster than the lifetime of the cavity. The projective nature of these measurements is evident when inspecting individual singleshot traces, yet when averaging the traces as an ensemble the average parity decays as predicted for a coherent state. We find our protocol to be 99.8% QND per measurement, and our sensitivity to parity jumps to be very high at 96% for an average photon number n = 1 in the cavity (85% for n = 4). Such levels of performance can already increase the lifetime of a quantum bit of information, and thereby present a promising step towards realizing a viable QEC scheme.

  14. The reliability and measurement error of protractor-based goniometry of the fingers: A systematic review

    NARCIS (Netherlands)

    Kooij, Y.E. van; Fink, A.; Nijhuis-Van der Sanden, M.W.; Speksnijder, C.M.

    2017-01-01

    STUDY DESIGN: Systematic review PURPOSE OF THE STUDY: The purpose was to review the available literature for evidence on the reliability and measurement error of protractor-based goniometry assessment of the finger joints. METHODS: Databases were searched for articles with key words "hand,"

  15. The reliability and measurement error of protractor-based goniometry of the fingers : A systematic review

    NARCIS (Netherlands)

    van Kooij, Yara E.; Fink, Alexandra; Nijhuis-van der Sanden, Maria W.; Speksnijder, Caroline M.|info:eu-repo/dai/nl/304821535

    2017-01-01

    Study Design: Systematic review. Purpose of the Study: The purpose was to review the available literature for evidence on the reliability and measurement error of protractor-based goniometry assessment of the finger joints. Methods: Databases were searched for articles with key words "hand,"

  16. Error Bounds Due to Random Noise in Cylindrical Near-Field Measurements

    OpenAIRE

    Romeu Robert, Jordi; Jofre Roca, Lluís

    1991-01-01

    The far-field errors due to near-field random noise are statistically bounded when performing the cylindrical near-field to far-field transform. In this communication, the far-field noise variance is expressed as a function of the measurement parameters and the near-field noise variance. Peer Reviewed

  17. A Study on Sixth Grade Students' Misconceptions and Errors in Spatial Measurement: Length, Area, and Volume

    Science.gov (United States)

    Tan Sisman, Gulcin; Aksu, Meral

    2016-01-01

    The purpose of the present study was to portray students' misconceptions and errors while solving conceptually and procedurally oriented tasks involving length, area, and volume measurement. The data were collected from 445 sixth grade students attending public primary schools in Ankara, Türkiye via a test composed of 16 constructed-response…

  18. Estimating the independent effects of multiple pollutants in the presence of measurement error: an application of a measurement-error-resistant technique.

    Science.gov (United States)

    Zeka, Ariana; Schwartz, Joel

    2004-12-01

    Misclassification of exposure usually leads to biased estimates of exposure-response associations. This is particularly an issue in cases with multiple correlated exposures, where the direction of bias is uncertain. It is necessary to address this problem when considering associations with important public health implications such as the one between mortality and air pollution, because biased exposure effects can result in biased risk assessments. The National Morbidity and Mortality Air Pollution Study (NMMAPS) recently reported results from an assessment of multiple pollutants and daily mortality in 90 U.S. cities. That study assessed the independent associations of the selected pollutants with daily mortality in two-pollutant models. Excess mortality was associated with particulate matter of aerodynamic diameter ≤10 μm (PM10), but not with other pollutants, in these two-pollutant models. The extent of bias due to measurement error in these reported results is unclear. Schwartz and Coull recently proposed a method that deals with multiple exposures and, under certain conditions, is resistant to measurement error. We applied this method to reanalyze the data from NMMAPS. For PM10, we found results similar to those reported previously from NMMAPS (0.24% increase in deaths per 10-μg/m3 increase in PM10). In addition, we report an important effect of carbon monoxide that had not been observed previously.

  19. Feasibility of RACT for 3D dose measurement and range verification in a water phantom.

    Science.gov (United States)

    Alsanea, Fahed; Moskvin, Vadim; Stantz, Keith M

    2015-02-01

    The objective of this study is to establish the feasibility of using radiation-induced acoustics to measure the range and Bragg peak dose from a pulsed proton beam. Simulation studies implementing a prototype scanner design based on computed tomographic methods were performed to investigate the sensitivity to proton range and integral dose. Derived from the thermodynamic wave equation, the pressure signals generated from the dose deposited from a pulsed proton beam with a 1 cm lateral beam width and a range of 16, 20, and 27 cm in water were simulated using Monte Carlo methods. The resulting dosimetric images were reconstructed implementing a 3D filtered backprojection algorithm with the pressure signals acquired from a 71-transducer array with a cylindrical geometry (30 × 40 cm) rotated over 2π about its central axis. Dependencies on the detector bandwidth and proton beam pulse width were examined, after which different noise levels were added to the detector signals (using a 1 μs pulse width and a 0.5 MHz cutoff frequency/hydrophone) to investigate the statistical and systematic errors in the proton range (at 20 cm) and Bragg peak dose (of 1 cGy). The reconstructed radioacoustic computed tomographic image intensity was shown to be linearly correlated with the dose within the Bragg peak. Based on noise-dependent studies, a detector sensitivity of 38 mPa was necessary to determine the proton range to within 1.0 mm (full-width at half-maximum) (systematic error ...). In conclusion, ionizing radiation-induced acoustics can be used to verify dose distribution and proton range with centi-Gray sensitivity. Realizing this technology in the clinic has the potential to significantly impact beam commissioning, treatment verification during particle beam therapy and image-guided techniques.

  20. Self-test web-based pure-tone audiometry: validity evaluation and measurement error analysis.

    Science.gov (United States)

    Masalski, Marcin; Kręcicki, Tomasz

    2013-04-12

    Potential methods of application of self-administered Web-based pure-tone audiometry conducted at home on a PC with a sound card and ordinary headphones depend on the value of measurement error in such tests. The aim of this research was to determine the measurement error of the hearing threshold determined in the way described above and to identify and analyze factors influencing its value. The evaluation of the hearing threshold was made in three series: (1) tests on a clinical audiometer, (2) self-tests done on a specially calibrated computer under the supervision of an audiologist, and (3) self-tests conducted at home. The research was carried out on a group of 51 participants selected from patients of an audiology outpatient clinic. From the group of 51 patients examined in the first two series, the third series was self-administered at home by 37 subjects (73%). The average difference between the value of the hearing threshold determined in series 1 and in series 2 was -1.54 dB with a standard deviation of 7.88 dB and a Pearson correlation coefficient of .90. Between the first and third series, these values were -1.35 dB ± 10.66 dB and .84, respectively. In series 3, the standard deviation was most influenced by the error connected with the procedure of hearing threshold identification (6.64 dB), the calibration error (6.19 dB), and additionally at the frequency of 250 Hz by the frequency nonlinearity error (7.28 dB). The obtained results confirm the possibility of applying Web-based pure-tone audiometry in screening tests. In the future, modifications of the method leading to a decrease in measurement error can broaden the scope of Web-based pure-tone audiometry applications.

  1. A new method to reduce truncation errors in partial spherical near-field measurements

    DEFF Research Database (Denmark)

    Cano-Facila, F J; Pivnenko, Sergey

    2011-01-01

    A new and effective method for reduction of truncation errors in partial spherical near-field (SNF) measurements is proposed. The method is useful when measuring electrically large antennas, where the measurement time with the classical SNF technique is prohibitively long and an acquisition over ... angular sector as well as a truncation error is present in the calculated far-field pattern within this sector. The method is based on the Gerchberg-Papoulis algorithm used to extrapolate functions and it is able to extend the valid region of the calculated far-field pattern up to the whole forward hemisphere. To verify the effectiveness of the method, several examples are presented using both simulated and measured truncated near-field data.

  2. Influenza infection rates, measurement errors and the interpretation of paired serology.

    Directory of Open Access Journals (Sweden)

    Simon Cauchemez

    Full Text Available Serological studies are the gold standard method to estimate influenza infection attack rates (ARs) in human populations. In a common protocol, blood samples are collected before and after the epidemic in a cohort of individuals, and a rise in haemagglutination-inhibition (HI) antibody titers during the epidemic is considered as a marker of infection. Because of inherent measurement errors, a 2-fold rise is usually considered as insufficient evidence for infection and seroconversion is therefore typically defined as a 4-fold rise or more. Here, we revisit this widely accepted 70-year-old criterion. We develop a Markov chain Monte Carlo data augmentation model to quantify measurement errors and reconstruct the distribution of latent true serological status in a Vietnamese 3-year serological cohort, in which replicate measurements were available. We estimate that the 1-sided probability of a 2-fold error is 9.3% (95% Credible Interval, CI: 3.3%, 17.6%) when antibody titer is below 10 but is 20.2% (95% CI: 15.9%, 24.0%) otherwise. After correction for measurement errors, we find that the proportion of individuals with 2-fold rises in antibody titers was too large to be explained by measurement errors alone. Estimates of ARs vary greatly depending on whether those individuals are included in the definition of the infected population. A simulation study shows that our method is unbiased. The 4-fold rise case definition is relevant when aiming at a specific diagnostic for individual cases, but the justification is less obvious when the objective is to estimate ARs. In particular, it may lead to large underestimates of ARs. Determining which biological phenomenon contributes most to 2-fold rises in antibody titers is essential to assess bias with the traditional case definition and offer improved estimates of influenza ARs.
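How dilution-step measurement error produces apparent titer rises in uninfected individuals can be illustrated with a small simulation. The one-step error probability below is illustrative, not the paper's estimate:

```python
import numpy as np

rng = np.random.default_rng(3)

# Titers live on a 2-fold dilution ladder; represent them as log2 steps.
# Assume (illustratively) each measurement independently errs by one
# dilution step up or down with probability p each.
p = 0.15
n = 100000
true_titer = np.full(n, 5)                  # uninfected: no true rise

def measure(t):
    err = rng.choice([-1, 0, 1], size=t.shape, p=[p, 1 - 2 * p, p])
    return t + err

pre, post = measure(true_titer), measure(true_titer)
rise = post - pre                           # apparent rise in log2 units

frac_2fold = np.mean(rise >= 1)             # apparent >= 2-fold rise
frac_4fold = np.mean(rise >= 2)             # apparent >= 4-fold rise ("seroconversion")
print(frac_2fold, frac_4fold)
```

An apparent 4-fold rise needs two opposite errors (probability roughly p squared), while a 2-fold rise needs only one, which is why the 4-fold criterion is specific for individual diagnosis yet discards many genuine 2-fold rises when estimating attack rates.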

  3. Measurement error causes scale-dependent threshold erosion of biological signals in animal movement data.

    Science.gov (United States)

    Bradshaw, Corey J A; Sims, David W; Hays, Graeme C

    2007-03-01

    Recent advances in telemetry technology have created a wealth of tracking data available for many animal species moving over spatial scales from tens of meters to tens of thousands of kilometers. Increasingly, such data sets are being used for quantitative movement analyses aimed at extracting fundamental biological signals such as optimal searching behavior and scale-dependent foraging decisions. We show here that the location error inherent in various tracking technologies reduces the ability to detect patterns of behavior within movements. Our analyses endeavored to set out a series of initial ground rules for ecologists to help ensure that sampling noise is not misinterpreted as a real biological signal. We simulated animal movement tracks using specialized random walks known as Lévy flights at three spatial scales of investigation: 100-km, 10-km, and 1-km maximum daily step lengths. The locations generated in the simulations were then blurred using known error distributions associated with commonly applied tracking methods: the Global Positioning System (GPS), Argos polar-orbiting satellites, and light-level geolocation. Deviations from the idealized Lévy flight pattern were assessed for each track after incrementing levels of location error were applied at each spatial scale, with additional assessments of the effect of error on scale-dependent movement patterns measured using fractal mean dimension and first-passage time (FPT) analyses. The accuracy of parameter estimation (Lévy mu, fractal mean D, and variance in FPT) declined precipitously at threshold errors relative to each spatial scale. At 100-km maximum daily step lengths, error standard deviations of ≥10 km seriously eroded the biological patterns evident in the simulated tracks, with analogous thresholds at the 10-km and 1-km scales (error SD ≥1.3 km and 0.07 km, respectively). Temporal subsampling of the simulated tracks maintained some elements of the biological signals depending on
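The core experiment, simulating a Lévy flight, blurring the locations, and re-estimating the power-law exponent, can be sketched as follows; the maximum-likelihood (Hill-type) estimator and the noise scale are illustrative choices, not the authors' exact analysis:

```python
import numpy as np

rng = np.random.default_rng(4)

# Simulate a 2-D Levy flight: step lengths Pareto-distributed, p(l) ~ l^-mu.
mu, lmin, n = 2.0, 1.0, 5000
lengths = lmin * (1.0 - rng.uniform(size=n)) ** (-1.0 / (mu - 1.0))
angles = rng.uniform(0.0, 2.0 * np.pi, size=n)
steps = np.column_stack([lengths * np.cos(angles), lengths * np.sin(angles)])
track = np.vstack([[0.0, 0.0], np.cumsum(steps, axis=0)])

def mle_mu(track):
    # Maximum-likelihood exponent for power-law step lengths (fixed lmin)
    l = np.linalg.norm(np.diff(track, axis=0), axis=1)
    l = l[l >= lmin]
    return 1.0 + l.size / np.sum(np.log(l / lmin))

mu_clean = mle_mu(track)

# Blur locations with Gaussian error comparable to the minimum step length
noisy = track + rng.normal(scale=1.0, size=track.shape)
mu_noisy = mle_mu(noisy)
print(mu_clean, mu_noisy)  # noise biases the exponent estimate
```

Because location error inflates the many short steps while barely affecting the rare long ones, the apparent step-length distribution is distorted at the small-scale end, which is the mechanism behind the scale-dependent threshold erosion reported above.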

  4. Test-Retest Reliability of the Adaptive Chemistry Assessment Survey for Teachers: Measurement Error and Alternatives to Correlation

    Science.gov (United States)

    Harshman, Jordan; Yezierski, Ellen

    2016-01-01

    Determining the error of measurement is a necessity for researchers engaged in bench chemistry, chemistry education research (CER), and a multitude of other fields. Discussions have occurred regarding what measurement error entails and how best to measure it, but the critiques of traditional measures have yielded few alternatives.…

  5. A Simulation Study of Categorizing Continuous Exposure Variables Measured with Error in Autism Research: Small Changes with Large Effects.

    Science.gov (United States)

    Heavner, Karyn; Burstyn, Igor

    2015-08-24

    Variation in the odds ratio (OR) resulting from selection of cutoffs for categorizing continuous variables is rarely discussed. We present results for the effect of varying cutoffs used to categorize a mismeasured exposure in a simulated population in the context of autism spectrum disorders research. Simulated cohorts were created with three distinct exposure-outcome curves and three measurement error variances for the exposure. ORs were calculated using logistic regression for 61 cutoffs (mean ± 3 standard deviations) used to dichotomize the observed exposure. ORs were calculated for five categories with a wide range for the cutoffs. For each scenario and cutoff, the OR, sensitivity, and specificity were calculated. The three exposure-outcome relationships had distinctly shaped OR (versus cutoff) curves, but increasing measurement error obscured the shape. At extreme cutoffs, there was non-monotonic oscillation in the ORs that cannot be attributed to "small numbers." Exposure misclassification following categorization of the mismeasured exposure was differential, as predicted by theory. Sensitivity was higher among cases and specificity among controls. Cutoffs chosen for categorizing continuous variables can have profound effects on study results. When measurement error is not too great, the shape of the OR curve may provide insight into the true shape of the exposure-disease relationship.

  6. A Simulation Study of Categorizing Continuous Exposure Variables Measured with Error in Autism Research: Small Changes with Large Effects

    Directory of Open Access Journals (Sweden)

    Karyn Heavner

    2015-08-01

    Full Text Available Variation in the odds ratio (OR) resulting from selection of cutoffs for categorizing continuous variables is rarely discussed. We present results for the effect of varying cutoffs used to categorize a mismeasured exposure in a simulated population in the context of autism spectrum disorders research. Simulated cohorts were created with three distinct exposure-outcome curves and three measurement error variances for the exposure. ORs were calculated using logistic regression for 61 cutoffs (mean ± 3 standard deviations) used to dichotomize the observed exposure. ORs were calculated for five categories with a wide range for the cutoffs. For each scenario and cutoff, the OR, sensitivity, and specificity were calculated. The three exposure-outcome relationships had distinctly shaped OR (versus cutoff) curves, but increasing measurement error obscured the shape. At extreme cutoffs, there was non-monotonic oscillation in the ORs that cannot be attributed to “small numbers.” Exposure misclassification following categorization of the mismeasured exposure was differential, as predicted by theory. Sensitivity was higher among cases and specificity among controls. Cutoffs chosen for categorizing continuous variables can have profound effects on study results. When measurement error is not too great, the shape of the OR curve may provide insight into the true shape of the exposure-disease relationship.
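The cutoff effect described above can be reproduced with a few lines of simulation; the exposure-outcome curve, error variance, and cutoffs below are illustrative, not the paper's scenarios:

```python
import numpy as np

rng = np.random.default_rng(5)

# True exposure X drives the outcome; only a mismeasured W = X + error is seen.
n = 50000
x = rng.normal(size=n)
w = x + rng.normal(scale=0.5, size=n)        # exposure with measurement error
p = 1.0 / (1.0 + np.exp(-(-1.0 + 1.0 * x)))  # logistic exposure-outcome curve
y = rng.uniform(size=n) < p

def odds_ratio(w, y, cutoff):
    e = w >= cutoff                          # dichotomised "exposed"
    a, b = np.sum(e & y), np.sum(e & ~y)
    c, d = np.sum(~e & y), np.sum(~e & ~y)
    return (a * d) / (b * c)

# The OR depends strongly on where the continuous exposure is cut
for cut in (-1.0, 0.0, 1.0, 2.0):
    print(cut, odds_ratio(w, y, cut))
```

Even with this smooth monotone curve the OR grows toward extreme cutoffs and becomes unstable as cell counts shrink, which is the oscillation the authors observe at the tails.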

  7. Modeling and Error Compensation of Robotic Articulated Arm Coordinate Measuring Machines Using BP Neural Network

    Directory of Open Access Journals (Sweden)

    Guanbin Gao

    2017-01-01

    Full Text Available The articulated arm coordinate measuring machine (AACMM) is a specific robotic structural instrument, which uses the D-H method for kinematic modeling and error compensation. However, it is difficult for the existing error compensation models to describe the various factors that affect the accuracy of the AACMM. In this paper, a modeling and error compensation method for the AACMM is proposed based on BP neural networks. According to the available measurements, the poses of the AACMM are used as the input, and the coordinates of the probe are used as the output, of the neural network. To avoid tedious training and improve the training efficiency and prediction accuracy, a data acquisition strategy is developed according to the actual measurement behavior in the joint space. A neural network model is proposed and analyzed by using data generated via the Monte-Carlo method in simulations. The structure and parameter settings of the neural network are optimized to improve the prediction accuracy and training speed. Experimental studies have been conducted to verify the proposed algorithm with neural network compensation, which show that 97% of the AACMM error can be eliminated after compensation. These experimental results reveal the effectiveness of the proposed modeling and compensation method for the AACMM.
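A minimal version of such a BP (backpropagation) network can be sketched in plain NumPy; the data here are synthetic stand-ins for AACMM poses and probe errors, and the architecture and learning rate are illustrative:

```python
import numpy as np

rng = np.random.default_rng(6)

# Toy stand-in for the AACMM data: joint poses (inputs) and probe coordinate
# errors (outputs) generated from a smooth function unknown to the network.
n, n_in, n_hidden, n_out = 1000, 6, 20, 3
X = rng.uniform(-1.0, 1.0, size=(n, n_in))
Y = np.column_stack([
    0.1 * np.sin(X @ rng.normal(size=n_in)),
    0.1 * np.cos(X @ rng.normal(size=n_in)),
    0.1 * np.sin(X[:, 0] * X[:, 1]),
])

# One-hidden-layer BP network trained with plain gradient descent on MSE
W1 = rng.normal(scale=0.5, size=(n_in, n_hidden)); b1 = np.zeros(n_hidden)
W2 = rng.normal(scale=0.5, size=(n_hidden, n_out)); b2 = np.zeros(n_out)
lr = 0.05

def forward(X):
    H = np.tanh(X @ W1 + b1)
    return H, H @ W2 + b2

_, Y0 = forward(X)
loss0 = np.mean((Y0 - Y) ** 2)
for _ in range(2000):
    H, Yhat = forward(X)
    dY = 2.0 * (Yhat - Y) / n                # gradient of MSE w.r.t. output
    dW2, db2 = H.T @ dY, dY.sum(axis=0)
    dZ = (dY @ W2.T) * (1.0 - H ** 2)        # backprop through tanh
    dW1, db1 = X.T @ dZ, dZ.sum(axis=0)
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2
_, Y1 = forward(X)
loss1 = np.mean((Y1 - Y) ** 2)
print(loss1 < loss0)  # training reduces the residual probe error
```

The appeal of this approach over an explicit D-H error model is exactly what the abstract argues: the network absorbs factors that are hard to parameterize, at the cost of needing a well-designed data acquisition strategy in joint space.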

  8. Effects of holding time and measurement error on culturing Legionella in environmental water samples.

    Science.gov (United States)

    Flanders, W Dana; Kirkland, Kimberly H; Shelton, Brian G

    2014-10-01

    Outbreaks of Legionnaires' disease require environmental testing of water samples from potentially implicated building water systems to identify the source of exposure. A previous study reported a large impact on Legionella sample results due to shipping and delays in sample processing. Specifically, that study, without accounting for measurement error, reported that more than half of shipped samples tested had Legionella levels that arbitrarily changed up or down by one or more logs, and the authors attributed this result to shipping time. Accordingly, we conducted a study to determine the effects of sample holding/shipping time on Legionella sample results while taking into account measurement error, which has previously not been addressed. We analyzed 159 samples, each split into 16 aliquots, of which one-half (8) were processed promptly after collection. The remaining half (8) were processed the following day to assess the impact of holding/shipping time. A total of 2544 samples were analyzed including replicates. After accounting for inherent measurement error, we found that the effect of holding time on observed Legionella counts was small and should have no practical impact on the interpretation of results. Holding samples increased the root mean squared error by only about 3-8%. Notably, for only one of 159 samples did the average of the 8 replicate counts change by 1 log. Thus, our findings do not support the hypothesis of frequent, significant (≥1 log10 unit) Legionella colony count changes due to holding. Copyright © 2014 The Authors. Published by Elsevier Ltd. All rights reserved.

  9. Effects of Spectral Error in Efficiency Measurements of GaInAs-Based Concentrator Solar Cells

    Energy Technology Data Exchange (ETDEWEB)

    Osterwald, C. R.; Wanlass, M. W.; Moriarty, T.; Steiner, M. A.; Emery, K. A.

    2014-03-01

    This technical report documents a particular error in efficiency measurements of triple-absorber concentrator solar cells caused by incorrect spectral irradiance -- specifically, one that occurs when the irradiance from unfiltered, pulsed xenon solar simulators into the GaInAs bottom subcell is too high. For cells designed so that the light-generated photocurrents in the three subcells are nearly equal, this condition can cause a large increase in the measured fill factor, which, in turn, causes a significant artificial increase in the efficiency. The error is readily apparent when the data under concentration are compared to measurements with correctly balanced photocurrents, and manifests itself as discontinuities in plots of fill factor and efficiency versus concentration ratio. In this work, we simulate the magnitudes and effects of this error with a device-level model of two concentrator cell designs, and demonstrate how a new Spectrolab, Inc., Model 460 Tunable-High Intensity Pulsed Solar Simulator (T-HIPSS) can mitigate the error.

  10. Partial compensation interferometry for measurement of surface parameter error of high-order aspheric surfaces

    Science.gov (United States)

    Hao, Qun; Li, Tengfei; Hu, Yao

    2018-01-01

    Surface parameters are the properties that describe the shape characteristics of an aspheric surface, and mainly include the vertex radius of curvature (VROC) and the conic constant (CC). The VROC affects basic properties, such as the focal length, of an aspheric surface, while the CC is the basis of classification for aspheric surfaces. The deviations of the two parameters are defined as surface parameter error (SPE). Precisely measuring SPE is critical for manufacturing and aligning aspheric surfaces. Generally, SPE of an aspheric surface is measured directly by curvature fitting on absolute profile measurement data from contact or non-contact testing. And most interferometry-based methods adopt null compensators or null computer-generated holograms to measure SPE. To our knowledge, there is no effective way to measure SPE of high-order aspheric surfaces with non-null interferometry. In this paper, based on the theory of slope asphericity and the best compensation distance (BCD) established in our previous work, we propose an SPE measurement method for high-order aspheric surfaces in a partial compensation interferometry (PCI) system. In the procedure, we first establish a system of two equations by utilizing the SPE-caused BCD change and surface shape change. Then, we can simultaneously obtain the VROC error and CC error in the PCI system by solving the equations. Simulations are made to verify the method, and the results show a high relative accuracy.

  11. Out-of-squareness measurement on ultra-precision machine based on the error separation

    Science.gov (United States)

    Lai, Tao; Liu, Junfeng; Chen, Shanyong; Guan, Chaoliang; Tie, Guipeng; Liao, Quan

    2017-06-01

    Traditional methods of measuring the out-of-squareness of an ultra-precision motion stage have many limitations, especially the errors caused by the inaccuracy of standard specimens such as a bare L-square or an optical pentaprism. Generally, the accuracy of out-of-squareness measurement is lower than the accuracy of the interior angles of the standard specimen. Based on error separation, this paper presents a novel method of out-of-squareness measurement with a polygon artifact. The angles bounded by the guideways and the edges of the polygon artifact are measured, and the out-of-squareness is extracted using the principle that the sum of the interior angles of a convex polygon is (n-2)π. An out-of-squareness measurement experiment was carried out on the profilometer using an optical square brick with an interior-angle out-of-squareness of about 1140.2 arcsec. The results show that the measurement accuracy of the three out-of-squareness values of the profilometer is not affected by the interior angles. The method can be applied to measure machine errors more accurately and to calibrate the out-of-squareness of the machine.
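The closure principle behind the error separation can be shown in a few lines: if every measured interior angle of a convex polygon artifact carries the same systematic offset, the deviation of the angle sum from (n-2)π recovers that offset regardless of the artifact's own angle values. This is a simplified illustration of the idea, not the paper's full procedure:

```python
import numpy as np

rng = np.random.default_rng(7)

# Closure principle: the interior angles of a convex n-gon sum to (n - 2)*pi.
# If every measured angle carries the same systematic offset e (e.g. an
# instrument squareness error), the closure defect reveals e directly.
n = 12
t = np.full(n, (n - 2) * np.pi / n)           # near-regular true interior angles
t += rng.normal(scale=1e-6, size=n)
t += ((n - 2) * np.pi - t.sum()) / n          # re-enforce exact closure

e_true = 5.5e-6                               # common systematic error, rad (~1.1 arcsec)
measured = t + e_true + rng.normal(scale=2e-7, size=n)  # plus random noise

# Error separation: closure defect divided by the number of angles
e_est = (measured.sum() - (n - 2) * np.pi) / n
print(e_est)  # close to e_true
```

This is why the method's accuracy does not depend on how accurately the artifact's interior angles are known: the closure constraint, not a calibrated standard, supplies the reference.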

  12. Estimating personal exposures from ambient air pollution measures: using meta-analysis to assess measurement error.

    Science.gov (United States)

    Holliday, Katelyn M; Avery, Christy L; Poole, Charles; McGraw, Kathleen; Williams, Ronald; Liao, Duanping; Smith, Richard L; Whitsel, Eric A

    2014-01-01

    Although ambient concentrations of particulate matter ≤10 μm (PM10) are often used as proxies for total personal exposure, the correlation (r) between ambient and personal PM10 concentrations varies. Factors underlying this variation and its effect on health outcome-PM exposure relationships remain poorly understood. We conducted a random-effects meta-analysis to estimate effects of study, participant, and environmental factors on r; used the estimates to impute personal exposure from ambient PM10 concentrations among 4,012 nonsmoking participants with diabetes in the Women's Health Initiative clinical trial; and then estimated the associations of ambient and imputed personal PM10 concentrations with electrocardiographic measures, such as heart rate variability. We identified 15 studies (in years 1990-2009) of 342 participants in five countries. The median r was 0.46 (range = 0.13 to 0.72). There was little evidence of funnel plot asymmetry but substantial heterogeneity of r, which increased 0.05 (95% confidence interval = 0.01 to 0.09) per 10 µg/m³ increase in mean ambient PM10 concentration. Substituting imputed personal exposure for ambient PM10 concentrations shifted mean percent changes in electrocardiographic measures per 10 µg/m³ increase in exposure away from the null and decreased their precision, for example, -2.0% (-4.6% to 0.7%) versus -7.9% (-15.9% to 0.9%), for the standard deviation of normal-to-normal RR interval duration. Analogous distributions and heterogeneity of r in extant meta-analyses of ambient and personal PM2.5 concentrations suggest that the observed shifts in mean percent change and decreases in precision may generalize across particle sizes.

  13. Long-term continuous acoustical suspended-sediment measurements in rivers - Theory, application, bias, and error

    Science.gov (United States)

    Topping, David J.; Wright, Scott A.

    2016-05-04

    these sites. In addition, detailed, step-by-step procedures are presented for the general river application of the method.Quantification of errors in sediment-transport measurements made using this acoustical method is essential if the measurements are to be used effectively, for example, to evaluate uncertainty in long-term sediment loads and budgets. Several types of error analyses are presented to evaluate (1) the stability of acoustical calibrations over time, (2) the effect of neglecting backscatter from silt and clay, (3) the bias arising from changes in sand grain size, (4) the time-varying error in the method, and (5) the influence of nonrandom processes on error. Results indicate that (1) acoustical calibrations can be stable for long durations (multiple years), (2) neglecting backscatter from silt and clay can result in unacceptably high bias, (3) two frequencies are likely required to obtain sand-concentration measurements that are unbiased by changes in grain size, depending on site-specific conditions and acoustic frequency, (4) relative errors in silt-and-clay- and sand-concentration measurements decrease substantially as concentration increases, and (5) nonrandom errors may arise from slow changes in the spatial structure of suspended sediment that affect the relations between concentration in the acoustically ensonified part of the cross section and concentration in the entire river cross section. Taken together, the error analyses indicate that the two-frequency method produces unbiased measurements of suspended-silt-and-clay and sand concentration, with errors that are similar to, or larger than, those associated with conventional sampling methods.

  14. Testing capability indices for one-sided processes with measurement errors

    Directory of Open Access Journals (Sweden)

    Grau D.

    2013-01-01

    Full Text Available In the manufacturing industry, many product characteristics have one-sided tolerances. The process capability indices Cpu(u, v) and Cpl(u, v) can be used to measure process performance. Most research on capability indices assumes no gauge measurement error. This assumption insufficiently reflects real situations even when advanced measuring instruments are used. In this paper we show that using a critical value without taking these errors into account severely underestimates the α-risk, which makes the capability test less accurate. To improve the results we suggest the use of an adjusted critical value, and we give a Maple program to compute it. An example from a polymer granulate factory is presented to illustrate this approach.
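
A minimal sketch of why gauge error matters here (illustrative, not the paper's procedure): with independent additive gauge error, the observed variance is inflated, so the one-sided index Cpu = (USL - μ)/(3σ) computed from observed data understates the true capability:

```python
import math

def cpu(usl, mu, sigma):
    """One-sided upper capability index C_pu = (USL - mu) / (3 * sigma)."""
    return (usl - mu) / (3.0 * sigma)

def cpu_corrected(usl, mu, sigma_observed, sigma_gauge):
    """Back out independent additive gauge variance before computing C_pu:
    sigma_obs^2 = sigma_true^2 + sigma_gauge^2.  Illustrative only: the
    paper's contribution is an adjusted critical value for the capability
    test, which this simple variance correction does not reproduce."""
    sigma_true = math.sqrt(sigma_observed ** 2 - sigma_gauge ** 2)
    return cpu(usl, mu, sigma_true)
```

Because the observed sigma is always at least the true sigma, the naive index is biased low, and a capability test based on it is correspondingly distorted.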

  15. Human-Induced Effects on RSS Ranging Measurements for Cooperative Positioning

    Directory of Open Access Journals (Sweden)

    Francescantonio Della Rosa

    2012-01-01

    Full Text Available We present experimental evaluations of human-induced perturbations on received-signal-strength (RSS)-based ranging measurements for cooperative mobile positioning. To the best of our knowledge, this work is the first attempt to gain insight into and understand the impact of both body loss and hand grip on the RSS for enhancing proximity measurements among neighbouring devices in cooperative scenarios. Our main contribution is the experimental investigation. We analyze the errors introduced into distance estimation by path-loss-based methods, and we assess the exploitation of human-induced perturbations for enhancing the final positioning accuracy through cooperative schemes. The results show that the benefit of cooperation is very limited if human factors are not taken into account when performing experimental activities.
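
A sketch of the path-loss-based distance estimation the analysis refers to, with illustrative parameter values (reference power P0 and path-loss exponent n are assumptions, not values from the paper). Extra attenuation from body loss or hand grip lowers the RSS and inflates the estimated range:

```python
def rss_to_distance(rss_dbm, p0_dbm=-40.0, n=2.7, d0=1.0):
    """Invert the log-distance path-loss model
    RSS(d) = P0 - 10 * n * log10(d / d0)
    to estimate the range d from a measured RSS. Parameters illustrative."""
    return d0 * 10 ** ((p0_dbm - rss_dbm) / (10.0 * n))

d_clear = rss_to_distance(-67.0)         # unobstructed reading
d_body = rss_to_distance(-67.0 - 5.0)    # assumed 5 dB extra body loss
```

With n = 2.7, a 5 dB body loss inflates the distance estimate by a factor of 10^(5/27) ≈ 1.53, which is the kind of ranging error the paper quantifies experimentally.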

  16. Cost-Sensitive Feature Selection of Numeric Data with Measurement Errors

    Directory of Open Access Journals (Sweden)

    Hong Zhao

    2013-01-01

    Full Text Available Feature selection is an essential process in data mining applications since it reduces a model’s complexity. However, feature selection with various types of costs is still a new research topic. In this paper, we study the cost-sensitive feature selection problem of numeric data with measurement errors. The major contributions of this paper are fourfold. First, a new data model is built to address test costs and misclassification costs as well as error boundaries. It is distinguished from the existing models mainly on the error boundaries. Second, a covering-based rough set model with normal distribution measurement errors is constructed. With this model, coverings are constructed from data rather than assigned by users. Third, a new cost-sensitive feature selection problem is defined on this model. It is more realistic than the existing feature selection problems. Fourth, both backtracking and heuristic algorithms are proposed to deal with the new problem. Experimental results show the efficiency of the pruning techniques for the backtracking algorithm and the effectiveness of the heuristic algorithm. This study is a step toward realistic applications of the cost-sensitive learning.

  17. Correction of the temperature dependent error in a correlation based time-of-flight system by measuring the distortion of the correlation signal

    Science.gov (United States)

    Hofbauer, M.; Seiter, J.; Davidovic, M.; Zimmermann, H.

    2013-04-01

    Correlation based time-of-flight systems suffer from a temperature-dependent distance measurement error induced by the illumination source of the system. A change in the temperature of the illumination source changes the bandwidth of the light emitters, which are typically light-emitting diodes (LEDs). For typical illumination sources this can result in a drift of the measured distance in the range of ~20 cm, especially during the heat-up phase. Because the bandwidth of the LEDs changes, the shape of the output signal changes as well. In this paper we propose a method to correct this temperature-dependent error by measuring this change in the shape of the output signal. Our measurements show that the presented approach is capable of correcting the temperature-dependent error over a large range of operation without the need for additional hardware.

  18. Error Correction Method for Wind Speed Measured with Doppler Wind LIDAR at Low Altitude

    Science.gov (United States)

    Liu, Bingyi; Feng, Changzhong; Liu, Zhishen

    2014-11-01

    For the purpose of obtaining global vertical wind profiles, the Atmospheric Dynamics Mission Aeolus of European Space Agency (ESA), carrying the first spaceborne Doppler lidar ALADIN (Atmospheric LAser Doppler INstrument), is going to be launched in 2015. DLR (German Aerospace Center) developed the A2D (ALADIN Airborne Demonstrator) for the prelaunch validation. A ground-based wind lidar for wind profile and wind field scanning measurement developed by Ocean University of China is going to be used for the ground-based validation after the launch of Aeolus. In order to provide validation data with higher accuracy, an error correction method is investigated to improve the accuracy of low altitude wind data measured with Doppler lidar based on iodine absorption filter. The error due to nonlinear wind sensitivity is corrected, and the method for merging atmospheric return signal is improved. The correction method is validated by synchronous wind measurements with lidar and radiosonde. The results show that the accuracy of wind data measured with Doppler lidar at low altitude can be improved by the proposed error correction method.

  19. Testing and Estimating Shape-Constrained Nonparametric Density and Regression in the Presence of Measurement Error

    KAUST Repository

    Carroll, Raymond J.

    2011-03-01

    In many applications we can expect that, or are interested to know if, a density function or a regression curve satisfies some specific shape constraints. For example, when the explanatory variable, X, represents the value taken by a treatment or dosage, the conditional mean of the response, Y , is often anticipated to be a monotone function of X. Indeed, if this regression mean is not monotone (in the appropriate direction) then the medical or commercial value of the treatment is likely to be significantly curtailed, at least for values of X that lie beyond the point at which monotonicity fails. In the case of a density, common shape constraints include log-concavity and unimodality. If we can correctly guess the shape of a curve, then nonparametric estimators can be improved by taking this information into account. Addressing such problems requires a method for testing the hypothesis that the curve of interest satisfies a shape constraint, and, if the conclusion of the test is positive, a technique for estimating the curve subject to the constraint. Nonparametric methodology for solving these problems already exists, but only in cases where the covariates are observed precisely. However in many problems, data can only be observed with measurement errors, and the methods employed in the error-free case typically do not carry over to this error context. In this paper we develop a novel approach to hypothesis testing and function estimation under shape constraints, which is valid in the context of measurement errors. Our method is based on tilting an estimator of the density or the regression mean until it satisfies the shape constraint, and we take as our test statistic the distance through which it is tilted. Bootstrap methods are used to calibrate the test. The constrained curve estimators that we develop are also based on tilting, and in that context our work has points of contact with methodology in the error-free case.

  20. Algorithms for High-speed Generating CRC Error Detection Coding in Separated Ultra-precision Measurement

    Science.gov (United States)

    Zhi, Z.; Tan, J. B.; Huang, X. D.; Chen, F. F.

    2006-10-01

    In order to resolve the conflict between error detection capability, transmission rate, and system resources in the data transmission of ultra-precision measurement, an algorithm for high-speed generation of CRC codes is put forward in this paper. Theoretical formulae for calculating the CRC code of 16-bit segmented data are derived. On the basis of the 16-bit formulae, an optimized algorithm for 32-bit segmented-data CRC coding is obtained, which resolves the trade-off between memory occupancy and coding speed. Data coding experiments were conducted successfully on a high-speed ARM embedded system. The results show that this method offers high error-detecting ability and high speed while saving system resources, improving the real-time performance and reliability of measurement data communication.
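
The paper's segment-wise formulae are not reproduced in the abstract; as a generic illustration of table-driven CRC generation (the standard way to trade a small amount of memory for coding speed), here is a byte-at-a-time CRC-16/CCITT-FALSE sketch:

```python
def make_crc16_table(poly=0x1021):
    """Precompute the 256-entry lookup table for CRC-16 with poly 0x1021."""
    table = []
    for byte in range(256):
        crc = byte << 8
        for _ in range(8):
            crc = ((crc << 1) ^ poly) & 0xFFFF if crc & 0x8000 else (crc << 1) & 0xFFFF
        table.append(crc)
    return table

_CRC_TABLE = make_crc16_table()

def crc16(data, crc=0xFFFF):
    """Byte-at-a-time table-driven CRC-16/CCITT-FALSE. The table lookup
    replaces per-bit shifting; the paper's formulae go further and process
    16- and 32-bit segments at once rather than single bytes."""
    for b in data:
        crc = ((crc << 8) & 0xFFFF) ^ _CRC_TABLE[((crc >> 8) ^ b) & 0xFF]
    return crc
```

The same precompute-then-lookup structure underlies segment-wise variants: widening the segment enlarges the precomputed tables but cuts the number of per-message iterations.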

  1. Use of graph theory measures to identify errors in record linkage.

    Science.gov (United States)

    Randall, Sean M; Boyd, James H; Ferrante, Anna M; Bauer, Jacqueline K; Semmens, James B

    2014-07-01

    Ensuring high linkage quality is important in many record linkage applications. Current methods for ensuring quality are manual and resource intensive. This paper seeks to determine the effectiveness of graph theory techniques in identifying record linkage errors. A range of graph theory techniques was applied to two linked datasets, with known truth sets. The ability of graph theory techniques to identify groups containing errors was compared to a widely used threshold setting technique. This methodology shows promise; however, further investigations into graph theory techniques are required. The development of more efficient and effective methods of improving linkage quality will result in higher quality datasets that can be delivered to researchers in shorter timeframes. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
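
The abstract does not detail which graph measures were used; as a hedged sketch of the general idea, linked record groups can be formed as connected components of the pairwise link graph, and sparse components (held together by a few bridging links) flagged for review, since a correctly linked group of one person tends toward a clique. The density threshold below is an assumed illustrative value:

```python
from collections import defaultdict

def connected_components(edges):
    """Union-find over record-pair links; each component is one linked group."""
    parent = {}
    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x
    for a, b in edges:
        parent[find(a)] = find(b)
    groups = defaultdict(set)
    for x in list(parent):
        groups[find(x)].add(x)
    return list(groups.values())

def suspicious_groups(edges, density_threshold=0.5):
    """Flag components with low edge density as candidates for clerical
    review. Threshold is illustrative, not from the paper."""
    flagged = []
    for group in connected_components(edges):
        n = len(group)
        if n < 3:
            continue
        m = sum(1 for a, b in edges if a in group and b in group)
        if m / (n * (n - 1) / 2) < density_threshold:
            flagged.append(group)
    return flagged
```

A five-record chain (density 0.4) is flagged, while a fully linked triangle (density 1.0) is not.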

  2. Measurement Error Affects Risk Estimates for Recruitment to the Hudson River Stock of Striped Bass

    Directory of Open Access Journals (Sweden)

    Dennis J. Dunning

    2002-01-01

    Full Text Available We examined the consequences of ignoring the distinction between measurement error and natural variability in an assessment of risk to the Hudson River stock of striped bass posed by entrainment at the Bowline Point, Indian Point, and Roseton power plants. Risk was defined as the probability that recruitment of age-1+ striped bass would decline by 80% or more, relative to the equilibrium value, at least once during the time periods examined (1, 5, 10, and 15 years). Measurement error, estimated using two abundance indices from independent beach seine surveys conducted on the Hudson River, accounted for 50% of the variability in one index and 56% of the variability in the other. If a measurement error of 50% was ignored and all of the variability in abundance was attributed to natural causes, the risk that recruitment of age-1+ striped bass would decline by 80% or more after 15 years was 0.308 at the current level of entrainment mortality (11%). However, the risk decreased almost tenfold (to 0.032) if a measurement error of 50% was considered. The change in risk attributable to decreasing the entrainment mortality rate from 11% to 0% was very small (0.009) and similar in magnitude to the change in risk associated with an action proposed in Amendment #5 to the Interstate Fishery Management Plan for Atlantic striped bass (0.006): an increase in the instantaneous fishing mortality rate from 0.33 to 0.4. The proposed increase in fishing mortality was not considered an adverse environmental impact, which suggests that potentially costly efforts to reduce entrainment mortality on the Hudson River stock of striped bass are not warranted.
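
A toy Monte Carlo illustration (not the paper's stock-assessment model) of the mechanism described above: attributing the measurement-error share of the observed variance to natural variability inflates the simulated variance of recruitment and hence the estimated probability of an 80% decline:

```python
import math
import random

def decline_risk(sigma_total, me_fraction, years=15, threshold=0.2,
                 trials=20000, seed=1):
    """Probability that lognormal recruitment (mean 1) falls below
    `threshold` (an 80% decline) at least once in `years` years, after
    attributing fraction `me_fraction` of the observed variance to
    measurement error. Purely illustrative parameters and model."""
    rng = random.Random(seed)
    sigma = sigma_total * math.sqrt(1.0 - me_fraction)
    hits = 0
    for _ in range(trials):
        # lognormal with mean 1: ln-scale mean is -sigma^2 / 2
        if any(math.exp(rng.gauss(-0.5 * sigma ** 2, sigma)) < threshold
               for _ in range(years)):
            hits += 1
    return hits / trials
```

With these illustrative settings, ignoring a 50% measurement-error share (me_fraction=0) yields a several-fold larger risk estimate than accounting for it, mirroring the direction of the paper's result.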

  3. The method of solution of equations with coefficients that contain measurement errors, using artificial neural network.

    Science.gov (United States)

    Zajkowski, Konrad

    This paper presents an algorithm for solving a system of N equations in N unknowns. The algorithm can determine a solution when the coefficients Ai in the equations are affected by measurement errors. For some values of Ai (where i = 1,…, N), no inverse function of the input equations exists; in this case it is impossible to determine the solution with classical methods.

  4. Measurements of Gun Tube Motion and Muzzle Pointing Error of Main Battle Tanks

    Directory of Open Access Journals (Sweden)

    Peter L. McCall

    2001-01-01

    Full Text Available Beginning in 1990, the US Army Aberdeen Test Center (ATC) began testing a prototype cannon mounted in a non-armored turret fitted to an M1A1 Abrams tank chassis. The cannon design incorporated a longer gun tube as a means to increase projectile velocity. A significant increase in projectile impact dispersion was measured early in the test program. Through investigative efforts, the cause of the error was linked to the increased dynamic bending or flexure of the longer tube observed while the vehicle was moving. Research and investigative work was conducted through a collaborative effort with the US Army Research Laboratory, Benet Laboratory, Project Manager - Tank Main Armament Systems, US Army Research and Engineering Center, and Cadillac Gage Textron Inc. New test methods, instrumentation, data analysis procedures, and stabilization control designs resulted from this series of investigations into the dynamic tube flexure error source. Through this joint research, improvements in tank fire control design have been developed to improve delivery accuracy. This paper discusses the instrumentation implemented, the methods applied, and the analysis procedures used to characterize the tube flexure during dynamic tests of a main battle tank, and the relationship between gun pointing error and muzzle pointing error.

  5. Bayesian Semiparametric Mixture Tobit Models with Left-Censoring, Skewness and Covariate Measurement Errors

    Science.gov (United States)

    Dagne, Getachew A.; Huang, Yangxin

    2013-01-01

    Common problems to many longitudinal HIV/AIDS, cancer, vaccine and environmental exposure studies are the presence of a lower limit of quantification of an outcome with skewness and time-varying covariates with measurement errors. There has been relatively little work published simultaneously dealing with these features of longitudinal data. In particular, left-censored data falling below a limit of detection (LOD) may sometimes have a proportion larger than expected under a usually assumed log-normal distribution. In such cases, alternative models which can account for a high proportion of censored data should be considered. In this article, we present an extension of the Tobit model that incorporates a mixture of true undetectable observations and those values from a skew-normal distribution for an outcome with possible left-censoring and skewness, and covariates with substantial measurement error. To quantify the covariate process, we offer a flexible nonparametric mixed-effects model within the Tobit framework. A Bayesian modeling approach is used to assess the simultaneous impact of left-censoring, skewness and measurement error in covariates on inference. The proposed methods are illustrated using real data from an AIDS clinical study. PMID:23553914

  6. Degradation data analysis based on a generalized Wiener process subject to measurement error

    Science.gov (United States)

    Li, Junxing; Wang, Zhihua; Zhang, Yongbo; Fu, Huimin; Liu, Chengrui; Krishnaswamy, Sridhar

    2017-09-01

    Wiener processes have received considerable attention in degradation modeling over the last two decades. In this paper, we propose a generalized Wiener process degradation model that takes unit-to-unit variation, time-correlated structure and measurement error into considerations simultaneously. The constructed methodology subsumes a series of models studied in the literature as limiting cases. A simple method is given to determine the transformed time scale forms of the Wiener process degradation model. Then model parameters can be estimated based on a maximum likelihood estimation (MLE) method. The cumulative distribution function (CDF) and the probability distribution function (PDF) of the Wiener process with measurement errors are given based on the concept of the first hitting time (FHT). The percentiles of performance degradation (PD) and failure time distribution (FTD) are also obtained. Finally, a comprehensive simulation study is accomplished to demonstrate the necessity of incorporating measurement errors in the degradation model and the efficiency of the proposed model. Two illustrative real applications involving the degradation of carbon-film resistors and the wear of sliding metal are given. The comparative results show that the constructed approach can derive a reasonable result and an enhanced inference precision.
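
A minimal simulation sketch of the kind of data such a model describes — a Wiener degradation path with drift plus independent Gaussian measurement error at the observation times (identity time-scale transformation assumed for simplicity; the MLE step and FHT-based distributions are not shown):

```python
import math
import random

def simulate_path(drift, sigma_b, sigma_eps, times, seed=0):
    """Simulate a Wiener degradation process X(t) = drift*t + sigma_b * B(t)
    and noisy observations Y(t_k) = X(t_k) + eps_k with
    eps_k ~ N(0, sigma_eps^2). Illustrative; the paper's generalized model
    also allows unit-to-unit variation and transformed time scales."""
    rng = random.Random(seed)
    x, t_prev = 0.0, 0.0
    xs, ys = [], []
    for t in times:
        dt = t - t_prev
        # independent Gaussian increment of the underlying Wiener process
        x += drift * dt + sigma_b * math.sqrt(dt) * rng.gauss(0.0, 1.0)
        xs.append(x)
        # measurement error corrupts only the observation, not the state
        ys.append(x + rng.gauss(0.0, sigma_eps))
        t_prev = t
    return xs, ys
```

Fitting a plain Wiener model to `ys` while ignoring `sigma_eps` inflates the apparent diffusion, which is the bias the paper's joint model is designed to avoid.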

  7. Regression calibration method for correcting measurement-error bias in nutritional epidemiology.

    Science.gov (United States)

    Spiegelman, D; McDermott, A; Rosner, B

    1997-04-01

    Regression calibration is a statistical method for adjusting point and interval estimates of effect obtained from regression models commonly used in epidemiology for bias due to measurement error in assessing nutrients or other variables. Previous work developed regression calibration for use in estimating odds ratios from logistic regression. We extend this here to estimating incidence rate ratios from Cox proportional hazards models and regression slopes from linear-regression models. Regression calibration is appropriate when a gold standard is available in a validation study and a linear measurement error with constant variance applies or when replicate measurements are available in a reliability study and linear random within-person error can be assumed. In this paper, the method is illustrated by correction of rate ratios describing the relations between the incidence of breast cancer and dietary intakes of vitamin A, alcohol, and total energy in the Nurses' Health Study. An example using linear regression is based on estimation of the relation between ultradistal radius bone density and dietary intakes of caffeine, calcium, and total energy in the Massachusetts Women's Health Study. Software implementing these methods uses SAS macros.
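
A minimal regression-calibration sketch under the validation-study setting described above: fit E[X | W] on the validation data (true exposure X, surrogate W), impute the exposure in the main study, and refit the outcome model. Function names and the simulated data are illustrative, not from the paper:

```python
def ols(x, y):
    """Simple-linear-regression (intercept, slope) of y on x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    slope = sxy / sxx
    return my - slope * mx, slope

def regression_calibration(w_main, y_main, w_val, x_val):
    """Fit the calibration model E[X | W] = a + b*W in the validation study,
    impute X in the main study, and fit the outcome model on the imputed
    exposure, correcting the attenuation caused by error in W."""
    a, b = ols(w_val, x_val)
    x_hat = [a + b * w for w in w_main]
    return ols(x_hat, y_main)
```

With classical error W = X + U, the naive slope of Y on W is attenuated by var(X)/(var(X) + var(U)); the calibrated fit recovers the true slope in expectation.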

  8. Measurements of pulse rate using long-range imaging photoplethysmography and sunlight illumination outdoors

    Science.gov (United States)

    Blackford, Ethan B.; Estepp, Justin R.

    2017-02-01

    Imaging photoplethysmography, a method using imagers to record absorption variations caused by microvascular blood volume pulsations, shows promise as a non-contact cardiovascular sensing technology. The first long-range imaging photoplethysmography measurements, at distances of 25, 50, and 100 meters from the participant, were recently demonstrated. Degraded signal quality was observed with increasing imager-to-subject distance and was hypothesized to be largely attributable to inadequate light return to the image sensor with increasing lens focal length. To test this hypothesis, a follow-up evaluation with 27 participants was conducted outdoors under natural sunlight, providing 5-33 times the illumination intensity. Video was recorded from cameras equipped with ultra-telephoto lenses and positioned at distances of 25, 50, 100, and 150 meters. The brighter illumination allowed high-definition video recordings at increased frame rates of 60 fps, shorter exposure times, and lower ISO settings, leading to higher-quality image formation than in the previous indoor evaluation. Results were compared to simultaneous reference measurements from electrocardiography. Compared to the previous indoor study, we observed lower overall error in pulse rate measurement, with the same pattern of degrading signal quality with increasing distance. This effect was corroborated by the signal-to-noise ratio of the blood volume pulse signal, which also decreased with increasing distance. Finally, a popular chrominance-based method was compared to a blind source separation approach; while comparable in signal-to-noise ratio, the chrominance method showed higher overall error in pulse rate measurement on these data.

  9. Snow Precipitation Measured by Gauges: Systematic Error Estimation and Data Series Correction in the Central Italian Alps

    Directory of Open Access Journals (Sweden)

    Giovanna Grossi

    2017-06-01

    Full Text Available Precipitation measurements by rain gauges are usually affected by a systematic underestimation, which can be larger in the case of snowfall. The wind, disturbing the trajectory of the falling water droplets or snowflakes above the rain gauge, is the major source of error, but when tipping-bucket recording gauges are used, the induced evaporation due to the heating device must also be taken into account. Manual measurements of fresh snow water equivalent (SWE) were taken in Alpine areas of Valtellina and Vallecamonica, in Northern Italy, and compared with daily precipitation and melted snow measured by manual precipitation gauges and by mechanical and electronic heated tipping-bucket recording gauges without any wind-shield: all of these gauges underestimated the SWE by between 15% and 66%. In some experimental monitoring sites, instead, electronic weighing storage gauges with Alter-type wind-shields are coupled with snow pillow data: daily SWE measurements from these instruments are in good agreement. In order to correct historical precipitation data series affected by systematic errors in snowfall measurements, a simple 'at-site' and instrument-dependent model was first developed that applies a correction factor as a function of daily air temperature, which serves as an index of the solid/liquid precipitation type. The threshold air temperatures were estimated through a statistical analysis of snow field observations. The correction model applied to daily observations led to 5-37% total annual precipitation increments, growing with altitude (1740-2190 m above sea level) and wind exposure. A second 'climatological' correction model based on daily air temperature and wind speed was proposed, leading to errors only slightly higher than those obtained with the at-site corrections.
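
The at-site model described above applies a temperature-indexed catch correction; a hedged sketch with illustrative thresholds and correction factor (assumed values, not the paper's fitted ones):

```python
def correct_precipitation(p_gauge_mm, t_mean_c,
                          k_snow=1.6, t_snow=0.0, t_rain=2.0):
    """Apply a multiplicative catch-correction factor to daily gauge
    precipitation, using daily mean air temperature as a proxy for the
    solid/liquid precipitation phase. Thresholds (t_snow, t_rain) and
    k_snow are illustrative assumptions."""
    if t_mean_c <= t_snow:          # solid precipitation: full correction
        return p_gauge_mm * k_snow
    if t_mean_c >= t_rain:          # liquid precipitation: no correction
        return p_gauge_mm
    # mixed phase: linear blend between the two regimes
    frac_snow = (t_rain - t_mean_c) / (t_rain - t_snow)
    return p_gauge_mm * (1.0 + (k_snow - 1.0) * frac_snow)
```

Applied day by day over a winter season, such a scheme produces the kind of altitude- and exposure-dependent annual increments the paper reports.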

  10. Optics measurement algorithms and error analysis for the proton energy frontier

    CERN Document Server

    Langner, A

    2015-01-01

    Optics measurement algorithms have been improved in preparation for the commissioning of the LHC at higher energy, i.e., with an increased damage potential. Due to machine protection considerations the higher energy sets tighter limits on the maximum excitation amplitude and the total beam charge, reducing the signal-to-noise ratio of optics measurements. Furthermore, the precision in 2012 (4 TeV) was insufficient to understand beam size measurements and determine interaction point (IP) β-functions (β*). A new, more sophisticated algorithm has been developed which takes into account both the statistical and systematic errors involved in this measurement. This makes it possible to combine more beam position monitor measurements for deriving the optical parameters and is demonstrated to significantly improve the accuracy and precision. Measurements from the 2012 run have been reanalyzed which, due to the improved algorithms, result in a significantly higher precision of the derived optical parameters and decreased...

  11. Feasibility of RACT for 3D dose measurement and range verification in a water phantom

    Energy Technology Data Exchange (ETDEWEB)

    Alsanea, Fahed [School of Health Sciences, Purdue University, 550 Stadium Mall Drive, West Lafayette, Indiana 47907-2051 (United States); Moskvin, Vadim [Radiation Oncology, Indiana University School of Medicine, 535 Barnhill Drive, RT 041, Indianapolis, Indiana 46202-5289 (United States); Stantz, Keith M., E-mail: kstantz@purdue.edu [School of Health Sciences, Purdue University, 550 Stadium Mall Drive, West Lafayette, Indiana 47907-2051 and Radiology and Imaging Sciences, Indiana University School of Medicine, 950 West Walnut Street, Indianapolis, Indiana 46202-5289 (United States)

    2015-02-15

    Purpose: The objective of this study is to establish the feasibility of using radiation-induced acoustics to measure the range and Bragg peak dose from a pulsed proton beam. Simulation studies implementing a prototype scanner design based on computed tomographic methods were performed to investigate the sensitivity to proton range and integral dose. Methods: Starting from the thermodynamic wave equation, the pressure signals generated by the dose deposited from a pulsed proton beam with a 1 cm lateral beam width and a range of 16, 20, and 27 cm in water were simulated using Monte Carlo methods. The resulting dosimetric images were reconstructed implementing a 3D filtered backprojection algorithm and the pressure signals acquired from a 71-transducer array with a cylindrical geometry (30 × 40 cm) rotated over 2π about its central axis. Dependencies on the detector bandwidth and proton beam pulse width were investigated, after which different noise levels were added to the detector signals (using a 1 μs pulse width and a 0.5 MHz cutoff frequency/hydrophone) to quantify the statistical and systematic errors in the proton range (at 20 cm) and Bragg peak dose (of 1 cGy). Results: The reconstructed radioacoustic computed tomographic image intensity was shown to be linearly correlated with the dose within the Bragg peak. Based on the noise-dependence studies, a detector sensitivity of 38 mPa was necessary to determine the proton range to within 1.0 mm (full-width at half-maximum) (systematic error < 150 μm) for a 1 cGy Bragg peak dose, where the integral dose within the Bragg peak was measured to within 2%. For existing hydrophone detector sensitivities, a Bragg peak dose of 1.6 cGy is possible. Conclusions: This study demonstrates that a computed tomographic scanner based on ionizing-radiation-induced acoustics can be used to verify dose distribution and proton range with centi-Gray sensitivity. Realizing this technology into the clinic has the potential to significantly

  12. Optics measurement algorithms and error analysis for the proton energy frontier

    Directory of Open Access Journals (Sweden)

    A. Langner

    2015-03-01

    Full Text Available Optics measurement algorithms have been improved in preparation for the commissioning of the LHC at higher energy, i.e., with an increased damage potential. Due to machine protection considerations the higher energy sets tighter limits on the maximum excitation amplitude and the total beam charge, reducing the signal-to-noise ratio of optics measurements. Furthermore, the precision in 2012 (4 TeV) was insufficient to understand beam size measurements and determine interaction point (IP) β-functions (β^{*}). A new, more sophisticated algorithm has been developed which takes into account both the statistical and systematic errors involved in this measurement. This makes it possible to combine more beam position monitor measurements for deriving the optical parameters and is demonstrated to significantly improve the accuracy and precision. Measurements from the 2012 run have been reanalyzed which, due to the improved algorithms, result in a significantly higher precision of the derived optical parameters and decreased the average error bars by a factor of three to four. This allowed the calculation of β^{*} values and proved fundamental in understanding the emittance evolution during the energy ramp.

  13. PRECISION MEASUREMENTS OF THE CLUSTER RED SEQUENCE USING AN ERROR-CORRECTED GAUSSIAN MIXTURE MODEL

    Energy Technology Data Exchange (ETDEWEB)

    Hao, J.; Sheldon, E.

    2009-08-14

    The red sequence is an important feature of galaxy clusters and plays a crucial role in optical cluster detection. Measurement of the slope and scatter of the red sequence are affected both by selection of red sequence galaxies and measurement errors. In this paper, we describe a new error-corrected Gaussian Mixture Model for red sequence galaxy identification. Using this technique, we can remove the effects of measurement error and extract unbiased information about the intrinsic properties of the red sequence. We use this method to select red sequence galaxies in each of the 13,823 clusters in the maxBCG catalog, and measure the red sequence ridgeline location and scatter of each. These measurements provide precise constraints on the variation of the average red galaxy populations in the observed frame with redshift. We find that the scatter of the red sequence ridgeline increases mildly with redshift, and that the slope decreases with redshift. We also observe that the slope does not strongly depend on cluster richness. Using similar methods, we show that this behavior is mirrored in a spectroscopic sample of field galaxies, further emphasizing that ridgeline properties are independent of environment. These precise measurements serve as an important observational check on simulations and mock galaxy catalogs. The observed trends in the slope and scatter of the red sequence ridgeline with redshift are clues to possible intrinsic evolution of the cluster red sequence itself. Most importantly, the methods presented in this work lay the groundwork for further improvements in optically based cluster cosmology.
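
    The heart of the error correction can be shown in a one-component toy version: with per-galaxy measurement errors known, the intrinsic scatter is the observed scatter with the mean error variance subtracted in quadrature. A self-contained sketch with simulated data (not the ECGMM implementation; numbers are invented):

```python
import math
import random

# Toy model: one red-sequence component with intrinsic scatter 0.05 mag,
# observed through per-galaxy Gaussian measurement errors (invented values).
random.seed(1)
sigma_int, n = 0.05, 20000
colors, errs = [], []
for _ in range(n):
    e = random.uniform(0.02, 0.06)            # known measurement error
    errs.append(e)
    colors.append(random.gauss(1.0, sigma_int) + random.gauss(0.0, e))

mean = sum(colors) / n
var_obs = sum((c - mean) ** 2 for c in colors) / (n - 1)
var_err = sum(e * e for e in errs) / n
# Error-corrected intrinsic scatter: subtract the mean error variance.
sigma_intrinsic = math.sqrt(max(var_obs - var_err, 0.0))
```

    The full method applies the same correction inside an expectation-maximization fit of a multi-component Gaussian mixture, so that both the red sequence and the background are deconvolved from the errors.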

  14. Precision Measurements of the Cluster Red Sequence using an Error Corrected Gaussian Mixture Model

    Energy Technology Data Exchange (ETDEWEB)

    Hao, Jiangang; /Fermilab /Michigan U.; Koester, Benjamin P.; /Chicago U.; Mckay, Timothy A.; /Michigan U.; Rykoff, Eli S.; /UC, Santa Barbara; Rozo, Eduardo; /Ohio State U.; Evrard, August; /Michigan U.; Annis, James; /Fermilab; Becker, Matthew; /Chicago U.; Busha, Michael; /KIPAC, Menlo Park /SLAC; Gerdes, David; /Michigan U.; Johnston, David E.; /Northwestern U. /Brookhaven

    2009-07-01

    The red sequence is an important feature of galaxy clusters and plays a crucial role in optical cluster detection. Measurement of the slope and scatter of the red sequence are affected both by selection of red sequence galaxies and measurement errors. In this paper, we describe a new error corrected Gaussian Mixture Model for red sequence galaxy identification. Using this technique, we can remove the effects of measurement error and extract unbiased information about the intrinsic properties of the red sequence. We use this method to select red sequence galaxies in each of the 13,823 clusters in the maxBCG catalog, and measure the red sequence ridgeline location and scatter of each. These measurements provide precise constraints on the variation of the average red galaxy populations in the observed frame with redshift. We find that the scatter of the red sequence ridgeline increases mildly with redshift, and that the slope decreases with redshift. We also observe that the slope does not strongly depend on cluster richness. Using similar methods, we show that this behavior is mirrored in a spectroscopic sample of field galaxies, further emphasizing that ridgeline properties are independent of environment. These precise measurements serve as an important observational check on simulations and mock galaxy catalogs. The observed trends in the slope and scatter of the red sequence ridgeline with redshift are clues to possible intrinsic evolution of the cluster red-sequence itself. Most importantly, the methods presented in this work lay the groundwork for further improvements in optically-based cluster cosmology.

  15. Visual acuity measures do not reliably detect childhood refractive error--an epidemiological study.

    Directory of Open Access Journals (Sweden)

    Lisa O'Donoghue

    Full Text Available PURPOSE: To investigate the utility of uncorrected visual acuity measures in screening for refractive error in white school children aged 6-7-years and 12-13-years. METHODS: The Northern Ireland Childhood Errors of Refraction (NICER study used a stratified random cluster design to recruit children from schools in Northern Ireland. Detailed eye examinations included assessment of logMAR visual acuity and cycloplegic autorefraction. Spherical equivalent refractive data from the right eye were used to classify significant refractive error as myopia of at least 1DS, hyperopia as greater than +3.50DS and astigmatism as greater than 1.50DC, whether it occurred in isolation or in association with myopia or hyperopia. RESULTS: Results are presented from 661 white 12-13-year-old and 392 white 6-7-year-old school-children. Using a cut-off of uncorrected visual acuity poorer than 0.20 logMAR to detect significant refractive error gave a sensitivity of 50% and specificity of 92% in 6-7-year-olds and 73% and 93% respectively in 12-13-year-olds. In 12-13-year-old children a cut-off of poorer than 0.20 logMAR had a sensitivity of 92% and a specificity of 91% in detecting myopia and a sensitivity of 41% and a specificity of 84% in detecting hyperopia. CONCLUSIONS: Vision screening using logMAR acuity can reliably detect myopia, but not hyperopia or astigmatism in school-age children. Providers of vision screening programs should be cognisant that where detection of uncorrected hyperopic and/or astigmatic refractive error is an aspiration, current UK protocols will not effectively deliver.
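
    Sensitivity and specificity follow directly from the 2×2 screening table. A sketch with hypothetical counts chosen to reproduce the reported 50% sensitivity and 92% specificity for the 392 younger children:

```python
def sens_spec(tp, fn, tn, fp):
    """Sensitivity and specificity from a 2x2 screening table."""
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical counts consistent with the reported 6-7-year-old results:
# 92 children with significant refractive error, 300 without.
sens, spec = sens_spec(tp=46, fn=46, tn=276, fp=24)
```

    At this cut-off half of the truly affected younger children would pass the screen, which is why the authors caution against relying on acuity alone for hyperopia and astigmatism.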

  16. Laser homodyne straightness interferometer with simultaneous measurement of six degrees of freedom motion errors for precision linear stage metrology.

    Science.gov (United States)

    Lou, Yingtian; Yan, Liping; Chen, Benyong; Zhang, Shihua

    2017-03-20

    A laser homodyne straightness interferometer capable of simultaneous measurement of six degrees of freedom motion errors is proposed for precision linear stage metrology. In this interferometer, the vertical straightness error and its position are measured by interference fringe counting, the yaw and pitch errors are obtained by measuring the spacing changes of the interference fringes, and the horizontal straightness and roll errors are determined by laser collimation. The merit of this interferometer is that four degrees of freedom motion errors are obtained with high accuracy by laser interferometry. The optical configuration of the proposed interferometer is designed. The principle of the simultaneous measurement of the six degrees of freedom errors of the measured linear stage, including yaw, pitch, roll, the two straightness errors and the straightness error's position, is described in detail, and the compensation of crosstalk effects on the straightness error and its position measurements is presented. Finally, an experimental setup is constructed and several experiments are performed to demonstrate the feasibility of the proposed interferometer and the compensation method.

  17. Instrumental variables vs. grouping approach for reducing bias due to measurement error.

    Science.gov (United States)

    Batistatou, Evridiki; McNamee, Roseanne

    2008-01-01

    Attenuation of the exposure-response relationship due to exposure measurement error is often encountered in epidemiology. Given that error cannot be totally eliminated, bias correction methods of analysis are needed. Many methods require more than one exposure measurement per person to be made, but the 'group mean OLS' method, in which subjects are grouped into several a priori defined groups followed by ordinary least squares (OLS) regression on the group means, can be applied with one measurement. An alternative approach is to use an instrumental variable (IV) method in which both the single error-prone measure and an IV are used in IV analysis. In this paper we show that the 'group mean OLS' estimator is equal to an IV estimator with the group mean used as the IV, but that the variance estimators for the two methods are different. We derive a simple expression for the bias of the common estimator as a function of group size, reliability and contrast of exposure between groups, and show that the bias can be very small when group size is large. We compare this method with a new proposal (the group mean ranking method), also applicable with a single exposure measurement, in which the IV is the rank of the group means. When there are two independent exposure measurements per subject, we propose a new IV method (EVROS IV) and compare it with Carroll and Stefanski's (CS IV) proposal in which the second measure is used as an IV; the new IV estimator combines aspects of the 'group mean' and 'CS' strategies. All methods are evaluated in terms of bias, precision and root mean square error via simulations and a dataset from occupational epidemiology. The 'group mean ranking' method does not offer much improvement over the 'group mean' method. Compared with the 'CS' method, the 'EVROS' method is less affected by low reliability of exposure. We conclude that the group IV methods we propose may provide a useful way to handle mismeasured exposures in epidemiology with or
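
    The attenuation, and its removal by grouping, can be reproduced in a small simulation (a generic sketch of the idea with invented numbers, not the authors' code): regressing on an error-prone measurement shrinks the slope by the reliability factor, whereas regressing group-mean outcomes on group-mean exposures is nearly unbiased when groups are large.

```python
import random

random.seed(2)
beta_true, per_group = 2.0, 400
group_means = [0.0, 1.0, 2.0, 3.0, 4.0]     # a priori defined exposure groups
W, Y, gW, gY = [], [], [], []
for m in group_means:
    ws, ys = [], []
    for _ in range(per_group):
        x = random.gauss(m, 1.0)             # true exposure
        w = x + random.gauss(0.0, 1.0)       # single error-prone measurement
        y = beta_true * x + random.gauss(0.0, 1.0)
        ws.append(w); ys.append(y)
        W.append(w); Y.append(y)
    gW.append(sum(ws) / per_group)
    gY.append(sum(ys) / per_group)

def ols_slope(xs, ys):
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    sxy = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
    sxx = sum((a - mx) ** 2 for a in xs)
    return sxy / sxx

naive = ols_slope(W, Y)       # attenuated toward zero (reliability ~0.75 here)
grouped = ols_slope(gW, gY)   # group means as the IV: nearly unbiased
```

    With these invented variances the individual-level slope is attenuated to roughly 1.5 of a true 2.0, while the group-mean slope recovers close to 2.0, illustrating the paper's point that the grouping bias shrinks with group size.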

  18. The effect of clock, media, and station location errors on Doppler measurement accuracy

    Science.gov (United States)

    Miller, J. K.

    1993-01-01

    Doppler tracking by the Deep Space Network (DSN) is the primary radio metric data type used by navigation to determine the orbit of a spacecraft. The accuracy normally attributed to orbits determined exclusively with Doppler data is about 0.5 microradians in geocentric angle. Recently, the Doppler measurement system has evolved to a high degree of precision primarily because of tracking at X-band frequencies (7.2 to 8.5 GHz). However, the orbit determination system has not been able to fully utilize this improved measurement accuracy because of calibration errors associated with transmission media, the location of tracking stations on the Earth's surface, the orientation of the Earth as an observing platform, and timekeeping. With the introduction of Global Positioning System (GPS) data, it may be possible to remove a significant error associated with the troposphere. In this article, the effect of various calibration errors associated with transmission media, Earth platform parameters, and clocks are examined. With the introduction of GPS calibrations, it is predicted that a Doppler tracking accuracy of 0.05 microradians is achievable.

  19. Measurement error: Implications for diagnosis and discrepancy models of developmental dyslexia.

    Science.gov (United States)

    Cotton, Sue M; Crewther, David P; Crewther, Sheila G

    2005-08-01

    The diagnosis of developmental dyslexia (DD) is reliant on a discrepancy between intellectual functioning and reading achievement. Discrepancy-based formulae have frequently been employed to establish the significance of the difference between 'intelligence' and 'actual' reading achievement. These formulae, however, often fail to take into consideration test reliability and the error associated with a single test score. This paper provides an illustration of the potential effects that test reliability and measurement error can have on the diagnosis of dyslexia, with particular reference to discrepancy models. The roles of reliability and standard error of measurement (SEM) in classic test theory are also briefly reviewed. This is followed by illustrations of how SEM and test reliability can aid with the interpretation of a simple discrepancy-based formula of DD. It is proposed that a lack of consideration of test theory in the use of discrepancy-based models of DD can lead to misdiagnosis (both false positives and false negatives). Further, misdiagnosis in research samples affects reproducibility and generalizability of findings. This, in turn, may explain current inconsistencies in research on the perceptual, sensory, and motor correlates of dyslexia.
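
    The role of SEM in a discrepancy formula can be made concrete with classic test theory: SEM = SD·√(1 − r), and the standard error of a difference between two observed scores combines the two SEMs in quadrature. A minimal sketch (the reliabilities are illustrative):

```python
import math

def sem(sd, reliability):
    """Standard error of measurement: SEM = SD * sqrt(1 - r)."""
    return sd * math.sqrt(1.0 - reliability)

# Hypothetical IQ-style scales (SD = 15) with illustrative reliabilities.
sem_iq = sem(15.0, 0.90)
sem_read = sem(15.0, 0.85)
# Standard error of the difference between the two observed scores:
se_diff = math.sqrt(sem_iq ** 2 + sem_read ** 2)
# An observed discrepancy must exceed this band to be significant at ~95%:
band = 1.96 * se_diff
```

    With these assumed reliabilities an IQ-reading gap of less than about 15 points is indistinguishable from measurement noise, which is exactly the false-positive/false-negative risk the paper describes.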

  20. Effects of exposure measurement error in the analysis of health effects from traffic-related air pollution.

    Science.gov (United States)

    Baxter, Lisa K; Wright, Rosalind J; Paciorek, Christopher J; Laden, Francine; Suh, Helen H; Levy, Jonathan I

    2010-01-01

    In large epidemiological studies, many researchers use surrogates of air pollution exposure such as geographic information system (GIS)-based characterizations of traffic or simple housing characteristics. It is important to evaluate quantitatively these surrogates against measured pollutant concentrations to determine how their use affects the interpretation of epidemiological study results. In this study, we quantified the implications of using exposure models derived from validation studies, and other alternative surrogate models with varying amounts of measurement error on epidemiological study findings. We compared previously developed multiple regression models characterizing residential indoor nitrogen dioxide (NO(2)), fine particulate matter (PM(2.5)), and elemental carbon (EC) concentrations to models with less explanatory power that may be applied in the absence of validation studies. We constructed a hypothetical epidemiological study, under a range of odds ratios, and determined the bias and uncertainty caused by the use of various exposure models predicting residential indoor exposure levels. Our simulations illustrated that exposure models with fairly modest R(2) (0.3 to 0.4 for the previously developed multiple regression models for PM(2.5) and NO(2)) yielded substantial improvements in epidemiological study performance, relative to the application of regression models created in the absence of validation studies or poorer-performing validation study models (e.g., EC). In many studies, models based on validation data may not be possible, so it may be necessary to use a surrogate model with more measurement error. This analysis provides a technique to quantify the implications of applying various exposure models with different degrees of measurement error in epidemiological research.
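
    For modest effect sizes under classical measurement error, the log odds ratio estimated from a surrogate exposure is approximately the true log odds ratio multiplied by the surrogate's reliability (here loosely identified with the validation R²). A hedged sketch of that back-of-envelope calculation:

```python
import math

def attenuated_or(true_or, reliability):
    """Approximate observed odds ratio under classical exposure error:
    the log odds ratio is scaled by the reliability of the surrogate.
    This is a textbook approximation, not the paper's simulation."""
    return math.exp(reliability * math.log(true_or))

or_good = attenuated_or(1.5, 0.4)   # surrogate comparable to an R^2 ~ 0.4 model
or_poor = attenuated_or(1.5, 0.1)   # much weaker surrogate
```

    A true odds ratio of 1.5 shrinks to roughly 1.18 with a moderately good surrogate and to about 1.04 with a poor one, which is why even modest-R² validation models improve study performance so markedly.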

  1. Effective reduction of the phase error for gamma nonlinearity in phase measuring profilometry by BLPF

    Science.gov (United States)

    Zhao, Xiaxia; Mo, Rong; Chang, Zhiyong; Lu, Jin

    2018-01-01

    In phase measuring profilometry, the gamma nonlinearity of the system makes the captured fringe patterns non-sinusoidal, which introduces a non-negligible error into the computed phase and seriously affects the 3D reconstruction accuracy. Based on a detailed study of existing gamma nonlinearity compensation and phase error reduction techniques, a method based on low-pass frequency domain filtering is proposed. It filters out the higher-than-first-order harmonic components induced by the gamma nonlinearity while retaining as much power as possible in the power spectrum, thus improving the sinusoidal waveform of the fringe images. Compared to other compensation methods, the proposed method does not require a complex mathematical model. Simulations and experiments confirm that the higher-order harmonic components are significantly reduced, the phase precision can be effectively improved and a given accuracy requirement can be reached.
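
    The mechanism can be demonstrated with an ideal low-pass filter in the DFT domain standing in for the paper's Butterworth low-pass filter (BLPF): gamma distortion of a sinusoidal fringe creates harmonics at multiples of the carrier frequency, and zeroing everything above the fundamental restores a sinusoidal profile. All numbers are illustrative:

```python
import cmath
import math

N, f, gamma = 256, 8, 2.2
# Ideal fringe distorted by gamma: harmonics appear at 2f, 3f, ...
fringe = [(0.5 + 0.5 * math.cos(2 * math.pi * f * n / N)) ** gamma
          for n in range(N)]

def dft(x):
    M = len(x)
    return [sum(x[n] * cmath.exp(-2j * math.pi * k * n / M) for n in range(M))
            for k in range(M)]

def idft(X):
    M = len(X)
    return [sum(X[k] * cmath.exp(2j * math.pi * k * n / M) for k in range(M)).real / M
            for n in range(M)]

X = dft(fringe)
second = abs(X[2 * f]) / abs(X[f])    # 2nd-harmonic ratio before filtering
cutoff = int(1.5 * f)                 # pass DC and the fundamental only
Xf = [X[k] if (k <= cutoff or k >= N - cutoff) else 0j for k in range(N)]
filtered = idft(Xf)
Xc = dft(filtered)
second_f = abs(Xc[2 * f]) / abs(Xc[f])   # essentially zero after filtering
```

    A real implementation would use an FFT and a smooth Butterworth transition rather than the brick-wall filter used here for clarity.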

  2. A proposed prototype for identifying and correcting sources of measurement error in classification systems.

    Science.gov (United States)

    McKenzie, D A

    1991-06-01

    Because many raters are generally involved in the implementation of a patient classification system, interrater reliability is always a concern in the development and use of such a system. In this article, a case example is used to demonstrate a prototype for identifying measurement error introduced at each step in the classification process (assessment, creating summary item responses, and use of these responses for categorization) and to illustrate how this identification may lead to error reduction strategies. The methods of analyses included percent agreement, Kappa, and visual inspection of contingency tables displaying interrater responses to assessment items, summary items, and the placement category. The extent to which raters followed instructions was analyzed by comparing their responses with computer-generated responses across the classification steps. In addition, raters were interviewed regarding their use of the system.
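
    Percent agreement and Cohen's kappa are computed directly from an interrater contingency table; a sketch with a hypothetical three-category table:

```python
def agreement_and_kappa(table):
    """Percent agreement and Cohen's kappa from an interrater table
    (table[i][j]: rater A chose category i, rater B chose category j)."""
    total = sum(sum(row) for row in table)
    po = sum(table[i][i] for i in range(len(table))) / total
    pe = sum((sum(table[i]) / total) * (sum(row[i] for row in table) / total)
             for i in range(len(table)))
    return po, (po - pe) / (1 - pe)

# Hypothetical two-rater classification of 100 patients into three categories:
po, kappa = agreement_and_kappa([[20, 5, 0],
                                 [5, 40, 5],
                                 [0, 5, 20]])
```

    Kappa corrects the raw agreement (0.80 here) for chance agreement, which is why the article pairs the two statistics when tracing where error enters the classification steps.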

  3. Indirect measurement of machine tool motion axis error with single laser tracker

    Science.gov (United States)

    Wu, Zhaoyong; Li, Liangliang; Du, Zhengchun

    2015-02-01

    For high-precision machining, convenient and accurate detection of the motion error of machine tools is significant. Among common detection methods such as the ball-bar method, the laser tracker approach has received the most attention. As a high-accuracy measurement device, the laser tracker is capable of long-distance and dynamic measurement, which adds much flexibility to the measurement process. However, existing methods are not satisfactory in terms of measurement cost, operability or applicability. A plausible current method is the single-station, time-sharing method, but it needs a large working area all around the machine tool and is thus not suitable for machine tools surrounded by a protective cover. In this paper, a novel and convenient positioning error measurement approach utilizing a single laser tracker is proposed, together with two corresponding mathematical models: a laser-tracker base-point-coordinate model and a target-mirror-coordinates model. An auxiliary apparatus on which the target mirrors are placed is also designed, and sensitivity analysis and Monte Carlo simulation are conducted to optimize its dimensions. Based on the proposed method, a real experiment using a single API TRACKER 3 assisted by the auxiliary apparatus is carried out, and a verification experiment using a traditional RENISHAW XL-80 interferometer is conducted under the same conditions for comparison. Both results show a pronounced increase in the Y-axis positioning error of the machine tool. Theoretical and experimental studies together verify the feasibility of this method, which offers more convenient operation and wider application to various kinds of machine tools.
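
    The Monte Carlo step can be sketched generically: perturb the target position with random mounting errors and observe the spread this induces in the tracker's range measurement. The error model and numbers below are hypothetical, not the paper's:

```python
import math
import random

random.seed(3)

def measured_distance(true_xyz, sigma):
    """One Monte Carlo draw: tracker range to a target mirror whose mounting
    position is perturbed by independent Gaussian errors (hypothetical model)."""
    x, y, z = (c + random.gauss(0.0, sigma) for c in true_xyz)
    return math.sqrt(x * x + y * y + z * z)

true_target = (1.0, 2.0, 2.0)   # m; true range is 3.0 m
draws = [measured_distance(true_target, 0.001) for _ in range(20000)]
mean_d = sum(draws) / len(draws)
spread = (sum((d - mean_d) ** 2 for d in draws) / len(draws)) ** 0.5
```

    Repeating such draws over candidate apparatus dimensions and picking the geometry with the smallest propagated spread is the essence of a Monte Carlo dimension optimization.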

  4. Sensitivity of the diamagnetic sensor measurements of ITER to error sources and their compensation

    Energy Technology Data Exchange (ETDEWEB)

    Fresa, R., E-mail: raffaele.fresa@unibas.it [CREATE/ENEA/Euratom Association, Scuola di Ingegneria, Università della Basilicata, Potenza (Italy); Albanese, R. [CREATE/ENEA/Euratom Association, DIETI, Università di Napoli Federico II, Naples (Italy); Arshad, S. [Fusion for Energy (F4E), Barcelona (Spain); Coccorese, V.; Magistris, M. de; Minucci, S.; Pironti, A.; Quercia, A.; Rubinacci, G. [CREATE/ENEA/Euratom Association, DIETI, Università di Napoli Federico II, Naples (Italy); Vayakis, G. [ITER Organization, Route de Vinon sur Verdon, 13115 Saint Paul Lez Durance (France); Villone, F. [CREATE/ENEA/Euratom Association, Università di Cassino, Cassino (Italy)

    2015-11-15

    Highlights: • The paper discusses the sensitivity analysis for the measurement system of the diamagnetic flux in the ITER tokamak. • Compensation formulas have been tested to compensate for manufacturing errors, both for the sources and the sensors. • An estimation of the poloidal beta has been carried out by estimating the plasma's diamagnetism. - Abstract: The present paper is focused on the sensitivity analysis of the diamagnetic sensor measurements of ITER against several kinds of error sources, with the aim of compensating them to improve the accuracy in the evaluation of the energy confinement time and poloidal beta via the Shafranov formula. The virtual values of the measurements at the diamagnetic sensors were simulated by the COMPFLUX code, a numerical code able to compute the field and flux values generated at a prescribed set of output points by massive conductors and generalized filamentary currents (with an arbitrary 3D shape and a negligible cross section) in the presence of magnetic materials. The major issue was to determine the possible deformations of the sensors and of the electromagnetic sources. The analysis has been carried out considering the following cases: (i) deformed sensors and ideal EM (electromagnetic) sources; (ii) ideal sensors and perturbed EM sources; (iii) both sensors and EM sources perturbed. As regards the compensation, several formulas have been proposed, based on the measurements carried out by the compensation coils; they basically use the measured flux density to compensate for the effects of the poloidal eddy currents induced in the conducting structures surrounding the plasma. The static deviation due to sensor manufacturing and positioning errors has been evaluated, and most of the pollution of the diamagnetic flux has been compensated, meeting the prescribed specifications and tolerances.

  5. Extending the range of turbidity measurement using polarimetry

    Energy Technology Data Exchange (ETDEWEB)

    Baba, Justin S.

    2017-11-21

    Turbidity measurements are obtained by directing a polarized optical beam at a scattering sample. Scattered portions of the beam are measured in orthogonal polarization states to determine a scattering minimum and a scattering maximum. These values are used to determine the degree of polarization of the scattered portions of the beam, and concentrations of scattering materials or turbidity can be estimated from the degree of polarization. Typically, linear polarizations are used, and scattering is measured along an axis that is orthogonal to the direction of propagation of the polarized optical beam.
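
    The degree of polarization follows directly from the two orthogonal measurements; a minimal sketch (the intensities are illustrative, and turning a DOP value into a concentration requires a calibration curve for the instrument):

```python
def degree_of_polarization(i_max, i_min):
    """DOP from the scattering maximum and minimum measured in
    orthogonal polarization states."""
    return (i_max - i_min) / (i_max + i_min)

# A clear sample largely preserves polarization (DOP near 1); multiple
# scattering in a turbid sample depolarizes the beam (DOP falls toward 0).
dop_clear = degree_of_polarization(0.98, 0.02)
dop_turbid = degree_of_polarization(0.60, 0.40)
```

    Because the ratio is normalized by total intensity, it stays usable at turbidities where a plain transmitted-intensity reading would saturate, which is the range-extension claim of the patent.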

  6. Synchrotron radiation measurement of multiphase fluid saturations in porous media: Experimental technique and error analysis

    Science.gov (United States)

    Tuck, David M.; Bierck, Barnes R.; Jaffé, Peter R.

    1998-06-01

    Multiphase flow in porous media is an important research topic. In situ, nondestructive experimental methods for studying multiphase flow are important for improving our understanding and the theory. Rapid changes in fluid saturation, characteristic of immiscible displacement, are difficult to measure accurately using gamma rays due to practical restrictions on source strength. Our objective is to describe a synchrotron radiation technique for rapid, nondestructive saturation measurements of multiple fluids in porous media, and to present a precision and accuracy analysis of the technique. Synchrotron radiation provides a high-intensity, inherently collimated photon beam of tunable energy which can yield accurate measurements of fluid saturation in just one second. Measurements were obtained with a precision of ±0.01 or better for tetrachloroethylene (PCE) in a 2.5 cm thick glass-bead porous medium using a counting time of 1 s. The normal distribution was shown to provide acceptable confidence limits for PCE saturation changes. Sources of error include the heat load on the monochromator, periodic movement of the source beam, and errors in the stepping-motor positioning system. Hypodermic needles pushed into the medium to inject PCE changed the porosity in a region within approximately ±1 mm of the injection point. Improved mass balance between the known and measured PCE injection volumes was obtained when appropriate corrections were applied to calibration values near the injection point.
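
    The one-second precision figure is a counting-statistics question: saturation follows from Beer-Lambert attenuation of the transmitted beam, and its variance from Poisson statistics of the photon counts. A hedged sketch of that arithmetic (the attenuation coefficient and counts are invented):

```python
import math

def saturation(counts, counts_ref, mu_l):
    """Fluid saturation from transmitted photon counts via Beer-Lambert.
    mu_l: attenuation coefficient times path length for full saturation.
    Hypothetical single-energy model, not the beamline's calibration."""
    return math.log(counts_ref / counts) / mu_l

def saturation_sigma(counts, counts_ref, mu_l):
    """1-sigma precision from Poisson counting statistics."""
    return math.sqrt(1.0 / counts + 1.0 / counts_ref) / mu_l

s = saturation(8.0e5, 1.0e6, 0.5)          # ~0.45 saturation
ds = saturation_sigma(8.0e5, 1.0e6, 0.5)   # ~0.003, well inside +/-0.01
```

    With the ~10^6 counts per second that a synchrotron beam delivers, the Poisson term alone sits comfortably below the reported ±0.01 precision, which a conventional gamma source cannot match in one second.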

  7. Integration of rain gauge measurement errors with the overall rainfall uncertainty estimation using kriging methods

    Science.gov (United States)

    Cecinati, Francesca; Moreno Ródenas, Antonio Manuel; Rico-Ramirez, Miguel Angel; ten Veldhuis, Marie-claire; Han, Dawei

    2016-04-01

    In many research studies rain gauges are used as a reference point measurement for rainfall, because they can reach very good accuracy, especially compared to radar or microwave links, and their use is very widespread. In some applications rain gauge uncertainty is assumed to be small enough to be neglected. This can be done when rain gauges are accurate and their data are correctly managed. Unfortunately, in many operational networks the importance of accurate rainfall data and of data quality control can be underestimated; budget and best practice knowledge can be limiting factors in correct rain gauge network management. In these cases, the accuracy of rain gauges can drastically drop and the uncertainty associated with the measurements cannot be neglected. This work proposes an approach based on three different kriging methods to integrate rain gauge measurement errors in the overall rainfall uncertainty estimation. In particular, rainfall products of different complexity are derived through (1) block kriging on a single rain gauge, (2) ordinary kriging on a network of different rain gauges, and (3) kriging with external drift to integrate all the available rain gauges with radar rainfall information. The study area is the Eindhoven catchment, contributing to the river Dommel, in the southern part of the Netherlands. The area, 590 km2, is covered by high quality rain gauge measurements by the Royal Netherlands Meteorological Institute (KNMI), which has one rain gauge inside the study area and six around it, and by lower quality rain gauge measurements by the Dommel Water Board and by the Eindhoven Municipality (six rain gauges in total). The integration of the rain gauge measurement error is accomplished in all cases by increasing the nugget of the semivariogram proportionally to the estimated error. Using different semivariogram models for the different networks allows for the separate characterisation of higher and lower quality rain gauges. For the kriging with
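
    The nugget mechanism is easy to see in a tiny covariance-form ordinary-kriging example: adding each gauge's error variance to the diagonal of the kriging system down-weights the less reliable gauge. This is a generic sketch with invented distances and variances, not the study's code:

```python
import math

def solve(A, b):
    """Gaussian elimination with partial pivoting for a small dense system."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            fac = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= fac * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def cov(h, sill=1.0, rng=10.0):
    """Exponential covariance model (illustrative parameters, km scale)."""
    return sill * math.exp(-3.0 * h / rng)

def ok_weights(xs, x0, err_var):
    """Ordinary kriging weights in 1-D; each gauge's error variance is
    added as a nugget on the covariance-matrix diagonal."""
    n = len(xs)
    A = [[cov(abs(xs[i] - xs[j])) + (err_var[i] if i == j else 0.0)
          for j in range(n)] + [1.0] for i in range(n)]
    A.append([1.0] * n + [0.0])           # unbiasedness constraint
    b = [cov(abs(x - x0)) for x in xs] + [1.0]
    return solve(A, b)[:n]

# Two gauges equidistant from the target: equal weights if equally reliable.
w_equal = ok_weights([0.0, 10.0], 5.0, [0.0, 0.0])
# Give the second gauge a large error variance: it gets down-weighted.
w_noisy = ok_weights([0.0, 10.0], 5.0, [0.0, 0.5])
```

    The same idea carries over to block kriging and kriging with external drift: the error-dependent nugget shifts weight toward the better-maintained KNMI gauges without discarding the municipal ones.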

  8. Reduction of truncation errors in partial spherical near-field antenna measurements

    DEFF Research Database (Denmark)

    Pivnenko, Sergey; Cano Facila, Francisco J.

    2010-01-01

    In this report, a new and effective method for reduction of truncation errors in partial spherical near-field (SNF) antenna measurements is proposed. This method is based on the Gerchberg-Papoulis algorithm used to extrapolate functions and it is able to extend the valid region of the far-field pattern calculated from a truncated SNF measurement up to the whole forward hemisphere. The method is useful when measuring electrically large antennas and the measurement over the whole sphere is very time consuming. Therefore, a solution is considered to take samples over a portion of the spherical surface and then to apply the above method to reconstruct the far-field pattern. The work described in this report was carried out within the external stay of Francisco J. Cano at the Technical University of Denmark (DTU) from September 6th to December 18th in 2010.
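
    The Gerchberg-Papoulis iteration itself is compact: alternately enforce the band limit in the transform domain and restore the measured samples in the signal domain. A 1-D toy sketch (the antenna application works on spherical-mode spectra; the sizes and signal here are illustrative):

```python
import cmath
import math

def dft(x):
    M = len(x)
    return [sum(x[n] * cmath.exp(-2j * math.pi * k * n / M) for n in range(M))
            for k in range(M)]

def idft(X):
    M = len(X)
    return [sum(X[k] * cmath.exp(2j * math.pi * k * n / M) for k in range(M)).real / M
            for n in range(M)]

N, B = 32, 1                                   # band limit: harmonics |k| <= 1
truth = [0.2 + math.cos(2 * math.pi * n / N) for n in range(N)]
known = set(range(4, 28))                      # measured (truncated) region

est = [truth[n] if n in known else 0.0 for n in range(N)]
err0 = max(abs(truth[n]) for n in range(N) if n not in known)
for _ in range(100):
    X = dft(est)
    X = [X[k] if (k <= B or k >= N - B) else 0j for k in range(N)]  # band-limit
    est = idft(X)
    for n in known:                            # re-impose measured samples
        est[n] = truth[n]

err = max(abs(est[n] - truth[n]) for n in range(N) if n not in known)
```

    Because the signal really is band-limited and the known region is large, the iteration converges geometrically; the truncation error in the unmeasured region collapses from order one to numerical noise.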

  9. Measured and predicted root-mean-square errors in square and triangular antenna mesh facets

    Science.gov (United States)

    Fichter, W. B.

    1989-01-01

    Deflection shapes of square and equilateral triangular facets of two tricot-knit, gold plated molybdenum wire mesh antenna materials were measured and compared, on the basis of root mean square (rms) differences, with deflection shapes predicted by linear membrane theory, for several cases of biaxial mesh tension. The two mesh materials contained approximately 10 and 16 holes per linear inch, measured diagonally with respect to the course and wale directions. The deflection measurement system employed a non-contact eddy current proximity probe and an electromagnetic distance sensing probe in conjunction with a precision optical level. Despite experimental uncertainties, rms differences between measured and predicted deflection shapes suggest the following conclusions: that replacing flat antenna facets with facets conforming to parabolically curved structural members yields smaller rms surface error; that potential accuracy gains are greater for equilateral triangular facets than for square facets; and that linear membrane theory can be a useful tool in the design of tricot knit wire mesh antennas.
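
    The comparison metric is a plain root-mean-square difference over matched sample points; a minimal sketch:

```python
import math

def rms_difference(measured, predicted):
    """RMS of pointwise deviations between measured and predicted deflections."""
    return math.sqrt(sum((m - p) ** 2 for m, p in zip(measured, predicted))
                     / len(measured))

# Illustrative deflection samples (arbitrary units), not the paper's data.
rms = rms_difference([1.0, 2.0, 3.0, 4.0], [1.1, 1.9, 3.1, 3.9])
```

    Computed over a grid of probe points on each facet, this single number is what the study uses to rank flat versus curved facet designs.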

  10. Intensity autocorrelation measurements of frequency combs in the terahertz range

    Science.gov (United States)

    Benea-Chelmus, Ileana-Cristina; Rösch, Markus; Scalari, Giacomo; Beck, Mattias; Faist, Jérôme

    2017-09-01

    We report on direct measurements of the emission character of quantum cascade laser based frequency combs, using intensity autocorrelation. Our implementation is based on fast electro-optic sampling, with a detection spectral bandwidth matching the emission bandwidth of the comb laser, around 2.5 THz. We find the output of these frequency combs to be continuous even in the locked regime, but accompanied by a strong intensity modulation. Moreover, with our record temporal resolution of only a few hundred femtoseconds, we can resolve correlated intensity modulation occurring on time scales as short as the gain recovery time, about 4 ps. By direct comparison with pulsed terahertz light originating from a photoconductive emitter, we demonstrate the peculiar emission pattern of these lasers. The measurement technique is self-referenced and ultrafast, and requires no reconstruction. It will be of significant importance in future measurements of ultrashort pulses from quantum cascade lasers.
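
    A discrete intensity autocorrelation distinguishes a constant (continuous-wave) output from a strongly modulated one: g²(0) equals 1 for constant intensity and grows with modulation depth. A toy sketch on synthetic traces (not the electro-optic sampling pipeline):

```python
import math

def intensity_autocorrelation(I, lag):
    """Normalized intensity autocorrelation g2 at an integer lag (cyclic)."""
    N = len(I)
    mean_sq = (sum(I) / N) ** 2
    return sum(I[n] * I[(n + lag) % N] for n in range(N)) / N / mean_sq

N = 200
cw = [1.0] * N                                   # ideal continuous-wave output
mod = [1.0 + 0.8 * math.cos(2 * math.pi * 5 * n / N) for n in range(N)]

g2_cw = intensity_autocorrelation(cw, 0)         # 1 for constant intensity
g2_mod0 = intensity_autocorrelation(mod, 0)      # >1 for a modulated trace
```

    A continuous but modulated comb output thus shows g²(0) above 1 without the large peaks an isolated pulse train would produce, which is the signature the paper reports.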

  11. Silicon device performance measurements to support temperature range enhancement

    Science.gov (United States)

    Johnson, R. Wayne; Askew, Ray; Bromstead, James; Weir, Bennett

    1991-01-01

    The results of the NPN bipolar junction transistor (BJT) (2N6023) breakdown voltage measurements were analyzed. Switching measurements were made on the NPN BJT, the insulated gate bipolar transistor (IGBT) (TA9796) and the N-channel metal oxide semiconductor field effect transistor (MOSFET) (RFH75N05E). Efforts were also made to build an H-bridge inverter. Plans are also discussed for life testing the devices, building an inductive switching test circuit and building a dc/dc switched-mode converter.

  12. Mathematical Model and Calibration Experiment of a Large Measurement Range Flexible Joints 6-UPUR Six-Axis Force Sensor

    Directory of Open Access Journals (Sweden)

    Yanzhi Zhao

    2016-08-01

    Full Text Available Nowadays, improving the accuracy and enlarging the measuring range of six-axis force sensors for wider applications in aircraft landing, rocket thrust, and spacecraft docking testing experiments has become an urgent objective. However, it is still difficult to achieve high accuracy and a large measuring range with traditional parallel six-axis force sensors due to the influence of the gap and friction of the joints. Therefore, to overcome these limitations, this paper proposes a 6-Universal-Prismatic-Universal-Revolute (UPUR) joints parallel mechanism with flexible joints to develop a large measurement range six-axis force sensor. The structural characteristics of the sensor are analyzed in comparison with a traditional parallel sensor based on the Stewart platform. The force transfer relation of the sensor is deduced, and the force Jacobian matrix is obtained using screw theory in two cases: the ideal state, and the state in which the flexibility of each flexible joint is considered. The prototype and loading calibration system are designed and developed. The K value method and the least squares method are used to process the experimental data, and class I and class II linearity errors are obtained. The experimental results show that the calibration error of the K value method is more than 13.4%, while the calibration error of the least squares method is 2.67%. The experimental results prove the feasibility of the sensor and the correctness of the theoretical analysis, which are expected to be adopted in practical applications.
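
    The least-squares calibration step can be sketched for a single axis: fit a through-origin gain between applied load and sensor output, then express the worst residual as a percent-of-full-scale linearity error. The numbers are invented, and the real sensor uses a full six-axis calibration matrix rather than one scalar gain:

```python
def least_squares_gain(loads, outputs):
    """Least-squares (through-origin) gain between load and sensor output."""
    num = sum(l * o for l, o in zip(loads, outputs))
    den = sum(o * o for o in outputs)
    return num / den

loads = [0.0, 100.0, 200.0, 300.0, 400.0]      # applied force, N (hypothetical)
outputs = [0.0, 0.101, 0.199, 0.302, 0.400]    # sensor reading, V (hypothetical)
k = least_squares_gain(loads, outputs)
full_scale = 400.0
max_err = max(abs(k * o - l) for l, o in zip(loads, outputs))
linearity_pct = 100.0 * max_err / full_scale   # percent-of-full-scale linearity
```

    Fitting all six axes jointly, with cross-coupling terms, is the same computation on a 6×6 matrix, and it is this global fit that beats the single-point K value method in the paper.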

  13. Offshore wind profiling using light detection and ranging measurements

    DEFF Research Database (Denmark)

    Pena Diaz, Alfredo; Hasager, Charlotte Bay; Gryning, Sven-Erik

    2009-01-01

    The advantages and limitations of the ZephIR (R), a continuous-wave, focused light detection and ranging (LiDAR) wind profiler, to observe offshore winds and turbulence characteristics were tested during a 6 month campaign at the transformer platform of Horns Rev, the world's largest offshore wind farm......-derived friction velocities and roughness lengths were compared to Charnock's sea roughness model. These average values were found to be close to the model, although the scatter of the individual estimations of sea roughness length was large. Copyright (C) 2008 John Wiley & Sons, Ltd....
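
Charnock's sea roughness model mentioned above relates the roughness length to the friction velocity as z0 = α·u*²/g. A small sketch, assuming a typical open-sea value for the Charnock constant (the constant is our assumption, not the paper's fitted value):

```python
G = 9.81                  # gravitational acceleration [m s^-2]
ALPHA_CHARNOCK = 0.0144   # typical open-sea Charnock constant (assumed value)

def charnock_roughness(u_star: float) -> float:
    """Sea-surface roughness length z0 [m] from friction velocity u* [m/s]."""
    return ALPHA_CHARNOCK * u_star ** 2 / G

print(charnock_roughness(0.3))  # ~1.3e-4 m for a moderate wind
```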

  14. Measurements of Capture Efficiency of Range Hoods in Homes

    DEFF Research Database (Denmark)

    Simone, Angela; Sherman, Max H.; Walker, Iain S.

    2015-01-01

    mapped the pollution distribution in the room, and showed that the pollutants escape more at the sides of the cooktop. These preliminary results suggest that more measurements should be conducted investigating the capture efficiency at different pollutant source temperatures, sizes and locations...

  15. Backward-gazing method for heliostats shape errors measurement and calibration

    Science.gov (United States)

    Coquand, Mathieu; Caliot, Cyril; Hénault, François

    2017-06-01

    The pointing and canting accuracies and the surface shape of the heliostats have a great influence on the efficiency of a solar tower power plant. At the industrial scale, one of the issues to solve is the time and effort devoted to adjusting the different mirrors of the faceted heliostats, which could take several months with current methods. Accurate control of heliostat tracking requires complicated and onerous devices. Thus, methods to quickly adjust the whole field of a plant are essential for the rise of solar tower technology with a huge number of heliostats. Wavefront detection is widely used in adaptive optics and shape error reconstruction. Such systems can be sources of inspiration for the measurement of solar facet misalignment and tracking errors. We propose a new method of heliostat characterization inspired by adaptive optics devices. This method aims at observing the brightness distributions on the heliostat's surface from different points of view close to the receiver of the power plant, in order to calculate the wavefront of the sunlight reflected by the concentrating surface and thereby determine its errors. The originality of this new method is to use the profile of the sun to determine the defects of the mirrors. In addition, this method would be easy to set up and could be implemented without sophisticated apparatus: only four cameras would be used to perform the acquisitions.

  16. Measurement error and outcome distributions: Methodological issues in regression analyses of behavioral coding data

    Science.gov (United States)

    Holsclaw, Tracy; Hallgren, Kevin A.; Steyvers, Mark; Smyth, Padhraic; Atkins, David C.

    2015-01-01

    Behavioral coding is increasingly used for studying mechanisms of change in psychosocial treatments for substance use disorders (SUDs). However, behavioral coding data typically include features that can be problematic in regression analyses, including measurement error in independent variables, non-normal distributions of count outcome variables, and conflation of predictor and outcome variables with third variables, such as session length. Methodological research in econometrics has shown that these issues can lead to biased parameter estimates, inaccurate standard errors, and increased type-I and type-II error rates, yet these statistical issues are not widely known within SUD treatment research, or more generally, within psychotherapy coding research. Using minimally-technical language intended for a broad audience of SUD treatment researchers, the present paper illustrates the nature in which these data issues are problematic. We draw on real-world data and simulation-based examples to illustrate how these data features can bias estimation of parameters and interpretation of models. A weighted negative binomial regression is introduced as an alternative to ordinary linear regression that appropriately addresses the data characteristics common to SUD treatment behavioral coding data. We conclude by demonstrating how to use and interpret these models with data from a study of motivational interviewing. SPSS and R syntax for weighted negative binomial regression models is included in supplementary materials. PMID:26098126
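
The negative binomial model recommended above addresses overdispersion that ordinary linear or Poisson models ignore. A sketch illustrating that overdispersion, drawing negative binomial counts as a Poisson-Gamma mixture (the parameter values and function names are illustrative only, not from the paper):

```python
import math
import random

random.seed(1)

def sample_poisson(lam: float) -> int:
    """Knuth's algorithm; adequate for small lambda."""
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p <= L:
            return k
        k += 1

def sample_negbin(mu: float, theta: float) -> int:
    """Negative binomial via Poisson-Gamma mixture: Var = mu + mu**2 / theta."""
    lam = random.gammavariate(theta, mu / theta)
    return sample_poisson(lam)

draws = [sample_negbin(mu=4.0, theta=2.0) for _ in range(20000)]
mean = sum(draws) / len(draws)
var = sum((d - mean) ** 2 for d in draws) / len(draws)
print(mean, var)  # mean near 4, variance near 4 + 16/2 = 12: overdispersed
```

A Poisson model would force the variance to equal the mean; the extra `mu**2/theta` term is exactly the slack that makes the negative binomial suitable for behavioral count data.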

  17. Reliability, technical error of measurements and validity of length and weight measurements for children under two years old in Malaysia.

    Science.gov (United States)

    Jamaiyah, H; Geeta, A; Safiza, M N; Khor, G L; Wong, N F; Kee, C C; Rahmah, R; Ahmad, A Z; Suzana, S; Chen, W S; Rajaah, M; Adam, B

    2010-06-01

    The National Health and Morbidity Survey III 2006 wanted to perform anthropometric measurements (length and weight) for children in their survey. However, there is limited literature on the reliability, technical error of measurement (TEM) and validity of these two measurements. This study assessed the above properties of length (LT) and weight (WT) measurements in 130 children aged below two years, from the Hospital Universiti Kebangsaan Malaysia (HUKM) paediatric outpatient clinics, during the period of December 2005 to January 2006. Two trained nurses measured WT using a Tanita digital infant scale model 1583, Japan (0.01 kg) and a Seca beam scale, Germany (0.01 kg), and LT using a Seca measuring mat, Germany (0.1 cm) and a Sensormedics stadiometer model 2130 (0.1 cm). Findings showed high inter- and intra-examiner reliability using 'change in the mean' and 'intraclass correlation' (ICC) for WT and LT. However, LT was found to be less reliable using the Bland and Altman plot. This was also true using relative TEMs, where the TEM value of LT was slightly more than the acceptable limit. The test instruments were highly valid for WT using 'change in the mean' and 'ICC' but were less valid for LT measurement. In spite of this, we concluded that WT and LT measurements in children below two years old using the test instruments were reliable and valid for a community survey such as NHMS III within the limits of their error. We recommend that LT measurements be given special attention to improve their reliability and validity.
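
The technical error of measurement used above is conventionally computed, for duplicate measurements, as TEM = √(Σd²/2n), with the relative TEM expressed as a percentage of the grand mean. A sketch under that standard formula (the example data are invented, not the study's):

```python
import math

def tem(pairs):
    """Absolute technical error of measurement for duplicate measurements.

    pairs: list of (first, second) measurements of the same subject.
    TEM = sqrt(sum(d_i ** 2) / (2 * n)) over the trial differences d_i.
    """
    n = len(pairs)
    return math.sqrt(sum((a - b) ** 2 for a, b in pairs) / (2 * n))

def relative_tem(pairs):
    """Relative TEM (%) = 100 * TEM / grand mean of all measurements."""
    grand_mean = sum(a + b for a, b in pairs) / (2 * len(pairs))
    return 100.0 * tem(pairs) / grand_mean

# Hypothetical duplicate length measurements (cm) of five infants
lengths = [(61.2, 61.0), (58.4, 58.9), (65.0, 64.6), (70.1, 70.3), (55.7, 55.6)]
print(round(tem(lengths), 3), round(relative_tem(lengths), 2))  # ≈ 0.224 cm, 0.36 %
```

A relative TEM is judged against a cut-off (commonly a few percent, depending on the measure); the study's LT measurements slightly exceeded their chosen limit.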

  18. Validation of a photography-based goniometry method for measuring joint range of motion.

    Science.gov (United States)

    Blonna, Davide; Zarkadas, Peter C; Fitzsimmons, James S; O'Driscoll, Shawn W

    2012-01-01

    A critical component of evaluating the outcomes after surgery to restore lost elbow motion is the range of motion (ROM) of the elbow. This study examined if digital photography-based goniometry is as accurate and reliable as clinical goniometry for measuring elbow ROM. Instrument validity and reliability for photography-based goniometry were evaluated for a consecutive series of 50 elbow contractures by 4 observers with different levels of elbow experience. Goniometric ROM measurements were taken with the elbows in full extension and full flexion directly in the clinic (once) and from digital photographs (twice in a blinded random manner). Instrument validity for photography-based goniometry was extremely high (intraclass correlation coefficient: extension = 0.98, flexion = 0.96). For extension and flexion measurements by the expert surgeon, systematic error was negligible (0° and 1°, respectively). Limits of agreement were 7° (95% confidence interval [CI], 5° to 9°) and -7° (95% CI, -5° to -9°) for extension and 8° (95% CI, 6° to 10°) and -7° (95% CI, -5° to -9°) for flexion. Interobserver reliability for photography-based goniometry was better than that for clinical goniometry. The least experienced observer's photographic goniometry measurements were closer to the reference measurements than the clinical goniometry measurements. Photography-based goniometry is accurate and reliable for measuring elbow ROM. The photography-based method relied less on observer expertise than clinical goniometry. This validates an objective measure of patient outcome without requiring doctor-patient contact at a tertiary care center, where most contracture surgeries are done. Copyright © 2012 Journal of Shoulder and Elbow Surgery Board of Trustees. Published by Mosby, Inc. All rights reserved.
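
The limits of agreement quoted above follow the standard Bland-Altman construction: bias ± 1.96 standard deviations of the paired differences. A sketch with made-up paired angle readings (not the study's data):

```python
import math

def limits_of_agreement(method_a, method_b):
    """Bland-Altman bias and 95% limits of agreement for paired measurements."""
    diffs = [a - b for a, b in zip(method_a, method_b)]
    n = len(diffs)
    bias = sum(diffs) / n
    sd = math.sqrt(sum((d - bias) ** 2 for d in diffs) / (n - 1))
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

# Hypothetical clinical vs. photographic extension angles (degrees)
clinic = [30, 45, 25, 60, 40, 35, 50, 20]
photo  = [32, 44, 27, 58, 41, 36, 49, 22]
bias, lower, upper = limits_of_agreement(clinic, photo)
print(bias, lower, upper)
```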

  19. MEASURING THE INFLUENCE OF TASK COMPLEXITY ON HUMAN ERROR PROBABILITY: AN EMPIRICAL EVALUATION

    Directory of Open Access Journals (Sweden)

    LUCA PODOFILLINI

    2013-04-01

    Full Text Available A key input for the assessment of Human Error Probabilities (HEPs) with Human Reliability Analysis (HRA) methods is the evaluation of the factors influencing the human performance (often referred to as Performance Shaping Factors, PSFs). In general, the definition of these factors and the supporting guidance are such that their evaluation involves significant subjectivity. This affects the repeatability of HRA results as well as the collection of HRA data for model construction and verification. In this context, the present paper considers the TAsk COMplexity (TACOM) measure, developed by one of the authors to quantify the complexity of procedure-guided tasks (performed by the operating crew of nuclear power plants in emergency situations), and evaluates its use to represent (objectively and quantitatively) task complexity issues relevant to HRA methods. In particular, TACOM scores are calculated for five Human Failure Events (HFEs) for which empirical evidence on the HEPs (albeit with large uncertainty) and influencing factors is available from the International HRA Empirical Study. The empirical evaluation has shown promising results. The TACOM score increases as the empirical HEP of the selected HFEs increases. Except for one case, TACOM scores are well distinguished if related to different difficulty categories (e.g., "easy" vs. "somewhat difficult"), while values corresponding to tasks within the same category are very close. Despite some important limitations related to the small number of HFEs investigated and the large uncertainty in their HEPs, this paper presents one of few attempts to empirically study the effect of a performance shaping factor on the human error probability. This type of study is important to enhance the empirical basis of HRA methods, to make sure that (1) the definitions of the PSFs cover the influences important for HRA (i.e., influencing the error probability), and (2) the quantitative relationships among PSFs and error

  20. Error field measurement, correction and heat flux balancing on Wendelstein 7-X

    Science.gov (United States)

    Lazerson, Samuel A.; Otte, Matthias; Jakubowski, Marcin; Israeli, Ben; Wurden, Glen A.; Wenzel, Uwe; Andreeva, Tamara; Bozhenkov, Sergey; Biedermann, Christoph; Kocsis, Gábor; Szepesi, Tamás; Geiger, Joachim; Pedersen, Thomas Sunn; Gates, David; The W7-X Team

    2017-04-01

    The measurement and correction of error fields in Wendelstein 7-X (W7-X) is critical to long pulse high beta operation, as small error fields may cause overloading of divertor plates in some configurations. Accordingly, as part of a broad collaborative effort, the detection and correction of error fields on the W7-X experiment has been performed using the trim coil system in conjunction with the flux surface mapping diagnostic and high resolution infrared camera. In the early commissioning phase of the experiment, the trim coils were used to open an n/m  =  1/2 island chain in a specially designed magnetic configuration. The flux surface mapping diagnostic was then able to directly image the magnetic topology of the experiment, allowing the inference of a small  ∼4 cm intrinsic island chain. The suspected main sources of the error field, slight misalignment and deformations of the superconducting coils, are then confirmed through experimental modeling using the detailed measurements of the coil positions. Observations of the limiter temperatures in module 5 show a clear dependence of the limiter heat flux pattern on the rotation of the perturbing fields. Plasma experiments without applied correcting fields show a significant asymmetry in neutral pressure (centered in module 4) and light emission (visible, H-alpha, CII, and CIII). Such pressure asymmetry is associated with plasma-wall (limiter) interaction asymmetries between the modules. Application of trim coil fields with an n  =  1 waveform corrects the imbalance. Confirmation of the error fields allows the assessment of magnetic fields which resonate with the n/m  =  5/5 island chain. Notice: This manuscript has been authored by Princeton University under Contract Number DE-AC02-09CH11466 with the U.S. Department of Energy. The publisher, by accepting the article for publication, acknowledges that the United States Government retains a non-exclusive, paid-up, irrevocable, world

  1. Quantification of error in optical coherence tomography central macular thickness measurement in wet age-related macular degeneration.

    Science.gov (United States)

    Ghazi, Nicola G; Kirk, Tyler; Allam, Souha; Yan, Guofen

    2009-07-01

    To assess error indicators encountered during optical coherence tomography (OCT) automated retinal thickness measurement (RTM) in neovascular age-related macular degeneration (NVAMD) before and after bevacizumab (Avastin; Genentech Inc, South San Francisco, California, USA) treatment. Retrospective observational cross-sectional study. Each of the 6 radial lines of a single Stratus fast macular OCT study before and 3 months following initiation of treatment in 46 eyes with NVAMD, for a total of 552 scans, was evaluated. Error frequency was analyzed relative to the presence of intraretinal, subretinal (SR), and subretinal pigment epithelial (SRPE) fluid. In scans with edge detection kernel (EDK) misplacement, manual caliper measurement of the central macular (CMT) and central foveal (CFT) thicknesses was performed and compared to the software-generated values. The frequency of the various types of error indicators, the risk factors for error, and the magnitude of automated RTM error were analyzed. Error indicators were found in 91.3% and 71.7% of eyes before and after treatment, respectively (P = .013). Suboptimal signal strength was the most common error indicator. EDK misplacement was the second most common type of error prior to treatment and the least common after treatment (P = .005). Eyes with SR or SRPE fluid were at the highest risk for error, particularly EDK misplacement (P = .039). There was a strong association between the software-generated and caliper-generated CMT and CFT measurements. The software overestimated measurements by up to 32% and underestimated them by up to 15% in the presence of SR and SRPE fluid, respectively. OCT errors are very frequent in NVAMD. SRF is associated with the highest risk and magnitude of error in automated CMT and CFT measurements. Manually adjusted measurements may be more reliable in such eyes.

  2. Research on Proximity Magnetic Field Influence in Measuring Error of Active Electronic Current Transformers

    Directory of Open Access Journals (Sweden)

    Wu Weijiang

    2016-01-01

    Full Text Available The principles of the active electronic current transformer (ECT) are introduced, and the mechanism by which a proximity magnetic field can influence the measurement error is analyzed from the perspective of the sensor section of the ECT. The impacts on active ECTs created by a three-phase proximity magnetic field at invariable and variable distances are simulated and analyzed. The theory and simulated analysis indicate that active ECTs are sensitive to a proximity magnetic field under certain conditions. Based on the simulated analysis, a product structural design is suggested for manufacturers, and guidance on the siting of transformers at substations is suggested for power supply administrations.

  3. Correction for Measurement Error from Genotyping-by-Sequencing in Genomic Variance and Genomic Prediction Models

    DEFF Research Database (Denmark)

    Ashraf, Bilal; Janss, Luc; Jensen, Just

    Genotyping-by-sequencing (GBSeq) is becoming a cost-effective genotyping platform for species without available SNP arrays. GBSeq sequences short reads from restriction sites covering a limited part of the genome (e.g., 5-10%) with low sequencing depth per individual (e.g., 5-10X per...... sample). The GBSeq data can be used directly in genomic models in the form of individual SNP allele-frequency estimates (e.g., reference reads/total reads per polymorphic site per individual), but is subject to measurement error due to the low sequencing depth per individual. Due to technical reasons......

  4. Control of Flexible Structures: Model Errors, Robustness Measures, and Optimization of Feedback Controllers

    Science.gov (United States)

    1988-10-31

    measured frequency response function and the Symposium on Dynamics and Control of Large Flexible Spacecraft, VPI&SU ... where the model correction term d(t) remains virtually zero. The measurement-minus-estimate variance is much ... the weight matrix ... differential equations. Although the measurement error covariance matrix Rk is assumed to be known, it is strictly valid only for an

  5. Semiparametric Bayesian Analysis of Nutritional Epidemiology Data in the Presence of Measurement Error

    KAUST Repository

    Sinha, Samiran

    2009-08-10

    We propose a semiparametric Bayesian method for handling measurement error in nutritional epidemiological data. Our goal is to estimate nonparametrically the form of association between a disease and exposure variable while the true values of the exposure are never observed. Motivated by nutritional epidemiological data, we consider the setting where a surrogate covariate is recorded in the primary data, and a calibration data set contains information on the surrogate variable and repeated measurements of an unbiased instrumental variable of the true exposure. We develop a flexible Bayesian method where not only is the relationship between the disease and exposure variable treated semiparametrically, but also the relationship between the surrogate and the true exposure is modeled semiparametrically. The two nonparametric functions are modeled simultaneously via B-splines. In addition, we model the distribution of the exposure variable as a Dirichlet process mixture of normal distributions, thus making its modeling essentially nonparametric and placing this work into the context of functional measurement error modeling. We apply our method to the NIH-AARP Diet and Health Study and examine its performance in a simulation study.

  6. Emission Flux Measurement Error with a Mobile DOAS System and Application to NOx Flux Observations.

    Science.gov (United States)

    Wu, Fengcheng; Li, Ang; Xie, Pinhua; Chen, Hao; Hu, Zhaokun; Zhang, Qiong; Liu, Jianguo; Liu, Wenqing

    2017-01-25

    Mobile differential optical absorption spectroscopy (mobile DOAS) is an optical remote sensing method that can rapidly measure trace gas emission flux from air pollution sources (such as power plants, industrial areas, and cities) in real time. Generally, mobile DOAS is influenced by wind, drive velocity, and other factors, especially the choice of wind field, when emission flux is observed with a mobile DOAS system. This paper presents a detailed error analysis and NOx emission measurements with a mobile DOAS system at a power plant in Shijiazhuang city, China. Comparison of the SO₂ emission flux from mobile DOAS observations with a continuous emission monitoring system (CEMS) under different drive speeds and wind fields revealed that the optimal drive velocity is 30-40 km/h and that the wind field at plume height should be used when mobile DOAS observations are performed. In addition, the total errors of SO₂ and NO₂ emissions with mobile DOAS measurements are 32% and 30%, respectively, combining the uncertainties of column density, wind field, and drive velocity. Furthermore, the NOx emission from the power plant is estimated at 0.15 ± 0.06 kg/s, which is in good agreement with the CEMS observation of 0.17 ± 0.07 kg/s. This study contributes to mobile DOAS measurement of emissions from air pollution sources, improving estimation accuracy.
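
Total errors that combine independent relative uncertainties, as above, are commonly obtained by quadrature propagation. A sketch under that assumption (the component percentages below are illustrative, not the paper's exact budget):

```python
import math

def combined_relative_error(*components):
    """Combine independent relative uncertainties in quadrature."""
    return math.sqrt(sum(c ** 2 for c in components))

# Illustrative component uncertainties (as fractions):
column_density, wind_field, drive_velocity = 0.15, 0.25, 0.10
total = combined_relative_error(column_density, wind_field, drive_velocity)
print(round(total, 3))  # ≈ 0.31, i.e. a ~31% total relative error
```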

  7. Measurement of soil water potential over an extended range by polymer tensiometers: comparison with other instruments

    Science.gov (United States)

    van der Ploeg, M. J.; Gooren, H. P.; Hoogendam, R. C.; Bakker, G.; Huiskes, C.; Koopal, L. K.; Kruidhof, H.; de Rooij, G. H.

    2007-12-01

    In water scarce areas, plant growth and productivity can be severely hampered by irregular precipitation and overall water shortage. Root water uptake is mainly driven by matric potential gradients, but measurement of soil water matric potential is limited by the measurement range of water-filled tensiometers (-0.085 MPa). Other measurement techniques indirectly measure soil water potential by converting soil water content with the use of the water retention curve. In dry soils, the water content measurements may become insensitive to small variations, and consequently this conversion may lead to large errors. We developed a polymer tensiometer (POT) that is able to measure matric potentials down to -2.0 MPa. The POT consists of a solid ceramic, a stainless steel cup and a pressure transducer. The ceramic consists of a support layer and a membrane with a 2 nm pore size to prevent polymer leakage. Between the ceramic membrane and the pressure transducer a tiny chamber is located, which contains the polymer solution. The polymer's osmotic potential strongly reduces the total water potential inside the polymer tensiometer, which causes build-up of osmotic pressure. Hence, the water in the polymer tensiometer will cavitate at a much lower matric potential than the nearly pure water in a conventional tensiometer. Direct observation of the potential of soil water at different locations in the root-system will yield knowledge about the ability of a plant to take up water under conditions of water shortage or salinity stress. With this knowledge it will be possible to adjust existing unsaturated flow models accounting for root water uptake. We tested 8 POTs in an experimental setup, where we compared matric potential measurements to TDR water content measurements, matric potentials derived from measured water contents, and matric potentials measured by water-filled tensiometers. 
The experimental setup consisted of two evaporation boxes, one filled with sand (97.6% sand, 1

  8. Estimating the Persistence and the Autocorrelation Function of a Time Series that is Measured with Error

    DEFF Research Database (Denmark)

    Hansen, Peter Reinhard; Lunde, Asger

    2014-01-01

    An economic time series can often be viewed as a noisy proxy for an underlying economic variable. Measurement errors will influence the dynamic properties of the observed process and may conceal the persistence of the underlying time series. In this paper we develop instrumental variable (IV......) methods for extracting information about the latent process. Our framework can be used to estimate the autocorrelation function of the latent volatility process and a key persistence parameter. Our analysis is motivated by the recent literature on realized volatility measures that are imperfect estimates...... of actual volatility. In an empirical analysis using realized measures for the Dow Jones industrial average stocks, we find the underlying volatility to be near unit root in all cases. Although standard unit root tests are asymptotically justified, we find them to be misleading in our application despite...
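
One standard instrumental-variable construction for this setting (not necessarily the paper's exact estimator) instruments the lagged observation with a deeper lag, which removes the attenuation bias that measurement noise causes in the naive persistence estimate. A simulation sketch:

```python
import random

random.seed(7)

# Latent AR(1) process x_t = rho * x_{t-1} + e_t, observed with heavy
# measurement noise: y_t = x_t + u_t (a noisy proxy, as in the abstract).
rho, n = 0.95, 100000
x, xs = 0.0, []
for _ in range(n):
    x = rho * x + random.gauss(0.0, 1.0)
    xs.append(x)
ys = [xi + random.gauss(0.0, 2.0) for xi in xs]

def cov(a, b):
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    return sum((p - ma) * (q - mb) for p, q in zip(a, b)) / len(a)

# Naive AR(1) slope on the noisy series is attenuated toward zero ...
naive = cov(ys[1:], ys[:-1]) / cov(ys[:-1], ys[:-1])
# ... while instrumenting y_{t-1} with y_{t-2} cancels the noise bias,
# because u_t is uncorrelated across time: the ratio of lag-1 to lag-2
# autocovariances of y identifies rho of the latent process.
iv = cov(ys[2:], ys[:-2]) / cov(ys[1:-1], ys[:-2])
print(naive, iv)  # naive well below rho; IV close to rho = 0.95
```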

  9. Estimating the Persistence and the Autocorrelation Function of a Time Series that is Measured with Error

    DEFF Research Database (Denmark)

    Hansen, Peter Reinhard; Lunde, Asger

    An economic time series can often be viewed as a noisy proxy for an underlying economic variable. Measurement errors will influence the dynamic properties of the observed process and may conceal the persistence of the underlying time series. In this paper we develop instrumental variable (IV......) methods for extracting information about the latent process. Our framework can be used to estimate the autocorrelation function of the latent volatility process and a key persistence parameter. Our analysis is motivated by the recent literature on realized (volatility) measures, such as the realized...... variance, that are imperfect estimates of actual volatility. In an empirical analysis using realized measures for the DJIA stocks we find the underlying volatility to be near unit root in all cases. Although standard unit root tests are asymptotically justified, we find them to be misleading in our...

  10. Unreliability and error in the military's "gold standard" measure of sexual harassment by education and gender.

    Science.gov (United States)

    Murdoch, Maureen; Pryor, John B; Griffin, Joan M; Ripley, Diane Cowper; Gackstetter, Gary D; Polusny, Melissa A; Hodges, James S

    2011-01-01

    The Department of Defense's "gold standard" sexual harassment measure, the Sexual Harassment Core Measure (SHCore), is based on an earlier measure that was developed primarily in college women. Furthermore, the SHCore requires a reading grade level of 9.1. This may be higher than some troops' reading abilities and could generate unreliable estimates of their sexual harassment experiences. Results from 108 male and 96 female soldiers showed that the SHCore's temporal stability and alternate-forms reliability were significantly worse (a) in soldiers without college experience compared to soldiers with college experience and (b) in men compared to women. For men without college experience, almost 80% of the temporal variance in SHCore scores was attributable to error. A plain language version of the SHCore had mixed effects on temporal stability depending on education and gender. The SHCore may be particularly ill suited for evaluating population trends of sexual harassment in military men without college experience.

  11. Quantum Steering Inequality with Tolerance for Measurement-Setting Errors: Experimentally Feasible Signature of Unbounded Violation.

    Science.gov (United States)

    Rutkowski, Adam; Buraczewski, Adam; Horodecki, Paweł; Stobińska, Magdalena

    2017-01-13

    Quantum steering is a relatively simple test for proving that the values of quantum-mechanical measurement outcomes come into being only in the act of measurement. By exploiting quantum correlations, Alice can influence (steer) Bob's physical system in a way that is impossible in classical mechanics, as shown by the violation of steering inequalities. Demonstrating this and similar quantum effects for systems of increasing size, approaching even the classical limit, is a long-standing challenging problem. Here, we prove an experimentally feasible unbounded violation of a steering inequality. We derive its universal form where tolerance for measurement-setting errors is explicitly built in by means of the Deutsch-Maassen-Uffink entropic uncertainty relation. Then, generalizing the mutual unbiasedness, we apply the inequality to the multisinglet and multiparticle bipartite Bell state. However, the method is general and opens the possibility of employing multiparticle bipartite steering for randomness certification and development of quantum technologies, e.g., random access codes.

  12. Errors in shearography measurements due to the creep of the PZT shearing actuator

    Science.gov (United States)

    Zastavnik, Filip; Pyl, Lincy; Sol, Hugo; Kersemans, Mathias; Van Paepegem, Wim

    2014-08-01

    Shearography is a modern optical interferometric measurement technique. It uses the interferometric properties of coherent laser light to measure deformation gradients at the µm/m level. In the most common shearography setups, those employing a Michelson interferometer, the deformation gradients in both the x- and y-directions can be identified by setting angles on the shearing mirror. One of the mechanisms for setting the desired shearing angles in the Michelson interferometer is the use of PZT actuators. This paper reveals that the time-dependent creep behaviour of the PZT actuators is a major source of measurement errors. Measurements over long time spans suffer severely from this creep behaviour. Even for short time spans, which are typical for shearographic experiments, the creep behaviour of the PZT shear actuator induces considerable deviation in the measured response. In this paper the mechanism and the effect of PZT creep are explored and demonstrated with measurements. For long time-span measurements in shearography, noise is a limiting factor. Thus, the time-dependent evolution of noise is considered in this paper, with particular interest in the influence of external vibrations. Measurements with and without external vibration isolation are conducted and the difference between the two setups is analyzed. At the end of the paper some recommendations are given for minimizing and correcting the time-dependent effects studied here.
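
PZT actuator creep is often described by an empirical logarithmic-in-time model. A sketch of that common model (the creep factor and reference time are illustrative values, not measurements from the paper):

```python
import math

def pzt_creep(x0: float, t: float, gamma: float = 0.02, t0: float = 0.1) -> float:
    """Empirical logarithmic creep model for a PZT actuator.

    x0: nominal displacement reached at the reference time t0 [s]
    gamma: creep factor per decade of time (device dependent; illustrative)
    Returns the actual displacement at time t [s].
    """
    return x0 * (1.0 + gamma * math.log10(t / t0))

# Drift of a 10 um shearing step over a 100 s acquisition window:
drift = pzt_creep(10.0, 100.0) - pzt_creep(10.0, 0.1)
print(drift)  # ≈ 0.6 um of extra displacement from creep alone
```

Even this sub-micrometre drift matters, because shearography resolves deformation gradients at the µm/m level.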

  13. Distance associated heterophoria measured with polarized Cross test of MKH method and its relationship to refractive error and age

    Directory of Open Access Journals (Sweden)

    Kříž P

    2017-03-01

    Full Text Available Pavel Kříž,1 Šárka Skorkovská1,2 1Faculty of Medicine, Department of Ophthalmology and Optometry, Masaryk University, 2Eye Clinic NeoVize Brno, Brno, Czech Republic Purpose: With the expansion of modern liquid crystal display optotypes using positive polarization, measurement of heterophorias (HTFs) by means of polarization, and thus partial dissociation of perceptions, has become increasingly accessible. Our aims were to establish the prevalence of distance associated HTF measured with the polarized Cross test of the MKH [measuring and correcting methodology after H-J Haase] method and its association with age and refractive error in a clinical population of wide age range. Methods: A cross-sectional study was carried out with 170 clinical subjects aged 15–78 years, with an average age of 40.7±16.62 years. All the participants had best-corrected visual acuity better than 20/25, stereopsis ≤60 seconds of arc, no heterotropia, had not undergone vision therapy, and had no eye disease. The distance associated HTF was measured with the Cross test of the MKH methodology. The quantification of associated HTF was acquired by means of a Risley rotary prism. Results: Distance associated HTF was found in 71.2% of participants. Of the total, 36.5% of the cases had esophoria (EP), 9.4% EP and hyperphoria, 10.6% exophoria (XP), 7.1% XP and hyperphoria, 7.6% hyperphoria, and 28.8% orthophoria. The mean distance horizontal associated HTF was +0.76±2.38 ∆. With EP, the mean value was +2.47±2.18 ∆, and with XP, −2.1±1.72 ∆. There was no correlation observed between the amount of distance associated HTF and age. There was no effect of the type and amount of refractive error on the amount of distance associated HTF. Conclusion: A high occurrence of distance associated HTF was revealed with the polarized Cross test of the MKH method. The relationship between the degree of associated HTF and refractive error and age

  14. Robust Long-Range Optical Tracking for Tunneling Measurement Tasks

    Science.gov (United States)

    Mossel, Annette; Gerstweiler, Georg; Vonach, Emanuel; Chmelina, Klaus; Kaufmann, Hannes

    2013-04-01

    Over the last years, automation in tunnel construction and mining has increased rapidly. To allow for enhanced tunneling measurement, monitoring of workers and remote control of machines, systems are required that are capable of real-time positioning of several static as well as moving targets. Such a system must provide continuous and precise 3D position estimation in large volumes, and must be capable of being installed and working correctly during on-going tunneling or mining tasks. Tracking systems are a fundamental component of a VR system, determining the 3D position and orientation of a target in 3D space. Infrared optical tracking systems use infrared light to track several static or moving targets simultaneously with low latency in small tracking volumes. To benefit from the capabilities of infrared optical tracking, a system is proposed that tracks static as well as moving optical targets in large tracking volumes with a maximum depth extent of 70 meters. Our system needs a minimal hardware setup consisting of two high-quality machine vision cameras, which are mounted on both walls of the tunnel, and a standard (portable) workstation for data processing. Targets are equipped with infrared LEDs and can be either carried by workers or attached to a machine. The two cameras form a stereo rig and face into the measurement volume to allow for continuous tracking. Using image processing techniques, the LEDs of the target(s) are detected in both 2D camera images and are back-projected into 3D using projective reconstruction algorithms. Thereby, the 3D position estimate of the target is determined. Using image filtering techniques, fitting methods based on the target's geometric constraints and prediction heuristics, the system allows for unique target identification during calibration and tracking, even in environments with heavy interference such as vibrations, tunnel illumination or machine lights. We extensively tested the system to (1) determine optimal
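The back-projection step described in this abstract can be sketched as a standard linear (DLT) triangulation from two calibrated cameras; the camera matrices, baseline and LED pixel coordinates below are made-up stand-ins, not the system's actual calibration:

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one 3D point from two 2D detections.

    P1, P2 : 3x4 camera projection matrices
    x1, x2 : (u, v) pixel coordinates of the same LED in each image
    """
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)        # null vector = homogeneous solution
    X = Vt[-1]
    return X[:3] / X[3]                # de-homogenize

# Made-up stereo rig: two cameras 2 m apart, looking down the tunnel (+z)
K = np.array([[1000.0, 0.0, 640.0],
              [0.0, 1000.0, 480.0],
              [0.0, 0.0, 1.0]])
P1 = K @ np.hstack([np.eye(3), [[-1.0], [0.0], [0.0]]])
P2 = K @ np.hstack([np.eye(3), [[1.0], [0.0], [0.0]]])

X_true = np.array([0.5, -0.2, 40.0])   # LED 40 m into the tunnel
h = np.append(X_true, 1.0)
x1 = (P1 @ h)[:2] / (P1 @ h)[2]
x2 = (P2 @ h)[:2] / (P2 @ h)[2]

print(triangulate(P1, P2, x1, x2))     # recovers [0.5, -0.2, 40.0]
```

With noisy detections the same least-squares machinery applies; the null vector then minimizes the algebraic error rather than solving exactly.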

  15. Error sources in the retrieval of aerosol information over bright surfaces from satellite measurements in the oxygen A band

    Science.gov (United States)

    Nanda, Swadhin; de Graaf, Martin; Sneep, Maarten; de Haan, Johan F.; Stammes, Piet; Sanders, Abram F. J.; Tuinder, Olaf; Pepijn Veefkind, J.; Levelt, Pieternel F.

    2018-01-01

    Retrieving aerosol optical thickness and aerosol layer height over a bright surface from measured top-of-atmosphere reflectance spectrum in the oxygen A band is known to be challenging, often resulting in large errors. In certain atmospheric conditions and viewing geometries, a loss of sensitivity to aerosol optical thickness has been reported in the literature. This loss of sensitivity has been attributed to a phenomenon known as critical surface albedo regime, which is a range of surface albedos for which the top-of-atmosphere reflectance has minimal sensitivity to aerosol optical thickness. This paper extends the concept of critical surface albedo for aerosol layer height retrievals in the oxygen A band, and discusses its implications. The underlying physics are introduced by analysing the top-of-atmosphere reflectance spectrum as a sum of atmospheric path contribution and surface contribution, obtained using a radiative transfer model. Furthermore, error analysis of an aerosol layer height retrieval algorithm is conducted over dark and bright surfaces to show the dependence on surface reflectance. The analysis shows that the derivative with respect to aerosol layer height of the atmospheric path contribution to the top-of-atmosphere reflectance is opposite in sign to that of the surface contribution - an increase in surface brightness results in a decrease in information content. In the case of aerosol optical thickness, these derivatives are anti-correlated, leading to large retrieval errors in high surface albedo regimes. The consequence of this anti-correlation is demonstrated with measured spectra in the oxygen A band from the GOME-2 instrument on board the Metop-A satellite over the 2010 Russian wildfires incident.
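The opposite-signed derivatives described above can be reproduced with a deliberately crude two-term model of top-of-atmosphere reflectance: a path term that grows with aerosol optical thickness and a surface term attenuated by two-way transmittance. The functional forms and constants are illustrative choices, not a radiative-transfer result:

```python
import numpy as np

def toa_reflectance(tau, surf_albedo, a=0.3, s=0.1):
    """Toy two-term TOA reflectance model (illustrative shapes only).

    r_path grows with aerosol optical thickness tau; the surface term is
    damped by the two-way transmittance and the sphere-albedo factor s.
    """
    r_path = a * (1 - np.exp(-tau))
    t_two_way = np.exp(-2 * tau)
    return r_path + t_two_way * surf_albedo / (1 - surf_albedo * s)

# Finite-difference sensitivity dR/dtau over a dark and a bright surface
tau, d_tau = 1.0, 1e-4
sens = {alb: (toa_reflectance(tau + d_tau, alb) - toa_reflectance(tau, alb)) / d_tau
        for alb in (0.05, 0.8)}
print(sens)   # positive over the dark surface, negative over the bright one
```

The sign change implies a surface albedo in between at which dR/dtau passes through zero, i.e. the critical surface albedo regime where the measurement carries almost no optical-thickness information.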

  16. The Importance of Tree Height in Estimating Individual Tree Biomass While Considering Errors in Measurements and Allometric Models

    OpenAIRE

    Phalla, Thuch; Ota, Tetsuji; Mizoue, Nobuya; Kajisa, Tsuyoshi; Yoshida, Shigejiro; Vuthy, Ma; Heng, Sokh

    2018-01-01

    This study evaluated the uncertainty of individual tree biomass estimated by allometric models by both including and excluding tree height independently. Using two independent sets of measurements on the same trees, the errors in the measurement of diameter at breast height and tree height were quantified, and the uncertainty of individual tree biomass estimation caused by errors in measurement was calculated. For both allometric models, the uncertainties of the individual tree biomass estima...

  17. Peer Effects and Measurement Error: The Impact of Sampling Variation in School Survey Data (Evidence from PISA)

    Science.gov (United States)

    Micklewright, John; Schnepf, Sylke V.; Silva, Pedro N.

    2012-01-01

    Investigation of peer effects on achievement with sample survey data on schools may mean that only a random sample of the population of peers is observed for each individual. This generates measurement error in peer variables similar in form to the textbook case of errors-in-variables, resulting in the estimated peer group effects in an OLS…

  18. Control chart limits based on true process capability with consideration of measurement system error

    Directory of Open Access Journals (Sweden)

    Amara Souha Ben

    2016-01-01

    Full Text Available Shewhart X̅ and R control charts and process capability indices, which have proven to be effective tools in statistical process control, are widely used under the assumption that the measurement system is free from errors. However, measurement variability is unavoidable and may be evaluated by the measurement system discrimination ratio (DR). This paper investigates the effects of measurement system variability, evaluated by DR, on the process capability indices Cp and Cpm, on the expected nonconforming units of product per million (ppm), on the expected mean value of the Taguchi loss function (E(Loss)), and on the Shewhart chart properties. It is shown that when measurement system variability is neglected, an overestimation of ppm and an underestimation of E(Loss) are induced. Moreover, significant effects of the measurement variability on the control chart properties were demonstrated. Therefore, control chart limit calculation methods based on the real state of the process were developed. An example is provided in order to compare the proposed limits with those traditionally calculated for Shewhart X̅, R charts.
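The variance split behind such corrections can be sketched as follows. This sketch assumes Wheeler's definition of the discrimination ratio, DR = sqrt(2·σp²/σm² + 1), and independent measurement error (definitions of DR vary between authors), so it illustrates the idea rather than reproducing the paper's method:

```python
import math

def true_process_capability(usl, lsl, sigma_obs, dr):
    """Recover the measurement-free process Cp from observed data.

    Assumes sigma_obs**2 = sigma_p**2 + sigma_m**2 (independent errors)
    and Wheeler's discrimination ratio DR = sqrt(2*sigma_p**2/sigma_m**2 + 1).
    """
    ratio = (dr**2 - 1) / 2                  # sigma_p**2 / sigma_m**2
    sigma_m2 = sigma_obs**2 / (1 + ratio)    # measurement variance
    sigma_p2 = sigma_obs**2 - sigma_m2       # true process variance
    cp_obs = (usl - lsl) / (6 * sigma_obs)
    cp_true = (usl - lsl) / (6 * math.sqrt(sigma_p2))
    return cp_obs, cp_true

# Illustrative numbers: observed sigma inflated by gauge error with DR = 4
cp_obs, cp_true = true_process_capability(usl=10.6, lsl=9.4,
                                          sigma_obs=0.11, dr=4.0)
print(f"observed Cp = {cp_obs:.3f}, measurement-free Cp = {cp_true:.3f}")
```

Because the observed sigma includes gauge variance, the naive Cp understates true capability, which is the same mechanism that makes the naive ppm an overestimate.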

  19. Measuring Relativistic effects in the field of the Earth with Laser Ranged Satellites and the LARASE research program

    Science.gov (United States)

    Lucchesi, David; Anselmo, Luciano; Bassan, Massimo; Magnafico, Carmelo; Pardini, Carmen; Peron, Roberto; Pucacco, Giuseppe; Stanga, Ruggero; Visco, Massimo

    2017-04-01

    The main goal of the LARASE (LAser RAnged Satellites Experiment) research program is to obtain refined tests of Einstein's theory of General Relativity (GR) by means of very precise measurements of the round-trip time between a number of ground stations of the International Laser Ranging Service (ILRS) network and a set of geodetic satellites. These measurements are provided by the powerful and precise Satellite Laser Ranging (SLR) technique. In particular, a major effort of LARASE is dedicated to improving the dynamical models of the LAGEOS, LAGEOS II and LARES satellites, with the objective of obtaining a more precise and accurate determination of their orbits. These activities contribute to a final error budget that should be robust and reliable in the evaluation of the main systematic error sources that play a major role in masking the relativistic precession on the orbits of these laser-ranged satellites. These error sources may be of gravitational and non-gravitational origin. It is important to stress that a more accurate and precise orbit determination, based on more reliable dynamical models, represents a fundamental prerequisite for reaching sub-mm precision in the root-mean-square of the SLR range residuals and, consequently, for gathering benefits in the fields of geophysics and space geodesy, such as knowledge of station coordinates, geocenter determination and the realization of the Earth's reference frame. The results reached over the last year will be presented in terms of the improvements achieved in the dynamical models, in the orbit determination and, finally, in the measurement of the relativistic precessions that act on the orbits of the satellites considered.
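For a sense of the signal size, the Lense-Thirring (frame-dragging) precession of a satellite's ascending node, one of the relativistic effects targeted by such analyses, has a standard closed-form rate. The constants below are rounded textbook values, not LARASE's fitted parameters:

```python
# Lense-Thirring precession of the ascending node:
#   dOmega/dt = 2*G*J / (c**2 * a**3 * (1 - e**2)**1.5)
G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8            # speed of light, m/s
J_EARTH = 5.86e33      # Earth's angular momentum, kg m^2/s (approximate)

def lense_thirring_node_rate(a, e):
    """Nodal precession rate in milliarcseconds per year.

    a : semi-major axis (m), e : eccentricity
    """
    rate_rad_s = 2 * G * J_EARTH / (c**2 * a**3 * (1 - e**2)**1.5)
    return rate_rad_s * 3.156e7 * 206265e3   # rad/s -> mas/yr

# LAGEOS orbit: a ~ 12270 km, e ~ 0.0045
print(f"{lense_thirring_node_rate(12270e3, 0.0045):.1f} mas/yr")
```

This evaluates to roughly 31 mas/yr for LAGEOS, which is why sub-mm range residuals and carefully modeled non-gravitational forces are needed to isolate it.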

  20. Errors in measurement of three-dimensional motions of the stapes using a laser Doppler vibrometer system.

    Science.gov (United States)

    Sim, Jae Hoon; Lauxmann, Michael; Chatzimichalis, Michail; Röösli, Christof; Eiber, Albrecht; Huber, Alexander M

    2010-12-01

    Previous studies have suggested complex modes of physiological stapes motions based upon various measurements. The goal of this study was to analyze the detailed errors in measurement of the complex stapes motions using laser Doppler vibrometer (LDV) systems, which are highly sensitive to the stimulation intensity and the exact angulations of the stapes. Stapes motions were measured with acoustic stimuli as well as mechanical stimuli using a custom-made three-axis piezoelectric actuator, and errors in the motion components were analyzed. The ratio of error in each motion component was reduced by increasing the magnitude of the stimuli, but the improvement was limited when the motion component was small relative to other components. This problem was solved with an improved reflectivity on the measurement surface. Errors in estimating the position of the stapes also caused errors on the coordinates of the measurement points and the laser beam direction relative to the stapes footplate, thus producing errors in the 3-D motion components. This effect was small when the position error of the stapes footplate did not exceed 5 degrees. Copyright © 2010 Elsevier B.V. All rights reserved.

  1. Measurement errors when estimating the vertical jump height with flight time using photocell devices: the example of Optojump.

    Science.gov (United States)

    Attia, A; Dhahbi, W; Chaouachi, A; Padulo, J; Wong, D P; Chamari, K

    2017-03-01

    Common methods to estimate vertical jump height (VJH) are based on the measurements of flight time (FT) or vertical reaction force. This study aimed to assess the measurement errors when estimating the VJH with flight time using photocell devices in comparison with the gold standard jump height measured by a force plate (FP). The second purpose was to determine the intrinsic reliability of the Optojump photoelectric cells in estimating VJH. For this aim, 20 subjects (age: 22.50±1.24 years) performed maximal vertical jumps in three modalities in randomized order: the squat jump (SJ), counter-movement jump (CMJ), and CMJ with arm swing (CMJarm). Each trial was simultaneously recorded by the FP and Optojump devices. High intra-class correlation coefficients (ICCs) for validity (0.98-0.99) and low limits of agreement (less than 1.4 cm) were found, even though a systematic difference in jump height was consistently observed between the FT and double-integration-of-force methods (-31% to -27%; p1.2). Intra-session reliability of Optojump was excellent, with ICCs ranging from 0.98 to 0.99, low coefficients of variation (3.98%), and low standard errors of measurement (0.8 cm). It was concluded that there was a high correlation between the two methods to estimate the vertical jump height, but the FT method cannot replace the gold standard, due to the large systematic bias. According to our results, the equations of each of the three jump modalities were presented in order to obtain a better estimation of the jump height.
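The flight-time method rests on a single kinematic identity: if takeoff and landing posture are identical, the jump height is g·t²/8. The systematic bias reported above is commonly attributed to violations of that posture assumption (e.g., landing with flexed ankles lengthens the flight time). A minimal sketch:

```python
def jump_height_from_flight_time(t_flight, g=9.81):
    """Vertical jump height (m) from flight time (s).

    Assumes takeoff and landing posture are identical, so time up equals
    time down and h = 0.5 * g * (t/2)**2 = g * t**2 / 8.
    """
    return g * t_flight**2 / 8

# A typical counter-movement-jump flight time of 0.55 s
t = 0.55
print(f"flight-time estimate: {100 * jump_height_from_flight_time(t):.1f} cm")
```

Note that the error grows quadratically with flight time, so small timing gains at landing translate into the percent-level systematic offsets reported against the force-plate reference.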

  2. Measurement errors when estimating the vertical jump height with flight time using photocell devices: the example of Optojump

    Science.gov (United States)

    Attia, A; Chaouachi, A; Padulo, J; Wong, DP; Chamari, K

    2016-01-01

    Common methods to estimate vertical jump height (VJH) are based on the measurements of flight time (FT) or vertical reaction force. This study aimed to assess the measurement errors when estimating the VJH with flight time using photocell devices in comparison with the gold standard jump height measured by a force plate (FP). The second purpose was to determine the intrinsic reliability of the Optojump photoelectric cells in estimating VJH. For this aim, 20 subjects (age: 22.50±1.24 years) performed maximal vertical jumps in three modalities in randomized order: the squat jump (SJ), counter-movement jump (CMJ), and CMJ with arm swing (CMJarm). Each trial was simultaneously recorded by the FP and Optojump devices. High intra-class correlation coefficients (ICCs) for validity (0.98-0.99) and low limits of agreement (less than 1.4 cm) were found, even though a systematic difference in jump height was consistently observed between the FT and double-integration-of-force methods (-31% to -27%; p1.2). Intra-session reliability of Optojump was excellent, with ICCs ranging from 0.98 to 0.99, low coefficients of variation (3.98%), and low standard errors of measurement (0.8 cm). It was concluded that there was a high correlation between the two methods to estimate the vertical jump height, but the FT method cannot replace the gold standard, due to the large systematic bias. According to our results, the equations of each of the three jump modalities were presented in order to obtain a better estimation of the jump height. PMID:28416900

  3. Assessment of long-range kinematic GPS positioning errors by comparison with airborne laser altimetry and satellite altimetry

    DEFF Research Database (Denmark)

    Zhang, X.H.; Forsberg, René

    2007-01-01

    Long-range airborne laser altimetry and laser scanning (LIDAR) or airborne gravity surveys in, for example, polar or oceanic areas require airborne kinematic GPS baselines of many hundreds of kilometers in length. In such instances, with the complications of ionospheric biases, it can be a real c...

  4. Elimination of single-beam substitution error in diffuse reflectance measurements using an integrating sphere

    Science.gov (United States)

    Vidovič, Luka; Majaron, Boris

    2013-03-01

    Diffuse reflectance spectra (DRS) of biological samples are commonly measured using an integrating sphere (IS), in which spectrally broad illumination light is multiply scattered and homogenized. The measurement begins by placing a highly reflective white standard against the IS sample opening and collecting the reflected light at the signal output port to account for the illumination field. After replacing the white standard with the test sample of interest, the DRS of the latter is determined as the ratio of the two values at each involved wavelength. However, because test samples are invariably less reflective than the white standard, such a substitution modifies the illumination field inside the IS. This leads to underestimation of the sample's reflectivity and distortion of the measured DRS, which is known as single-beam substitution error (SBSE). Short of using much more complex dual-beam experimental setups with a dedicated IS, the literature states that only approximate corrections of SBSE are possible, e.g., by using look-up tables generated with calibrated low-reflectivity standards. We present a practical way to eliminate the SBSE using an IS equipped with an additional "reference" output port. Two additional measurements performed at this port (of the white standard and the sample, respectively) enable an accurate compensation for the above-described alteration of the illumination field. In addition, we analyze the dependency of SBSE on sample reflectivity and illustrate its impact on measurements of DRS in human skin with a typical IS.
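One plausible form of the compensation (our sketch of the idea, not necessarily the authors' exact formula) is to normalize each signal-port reading by the simultaneous reference-port reading, since the reference port tracks the illumination field inside the sphere:

```python
def sbse_corrected_reflectance(sig_sample, sig_white,
                               ref_sample, ref_white, r_white=0.99):
    """Substitution-error-compensated diffuse reflectance (sketch).

    sig_* : signal-port readings with sample / white standard in place
    ref_* : matching readings at the auxiliary reference port, which
            monitor the illumination field inside the sphere
    r_white : calibrated reflectance of the white standard (assumed value)
    """
    # Normalizing by the reference reading removes the field change
    # caused by swapping the white standard for a darker sample.
    return r_white * (sig_sample / ref_sample) / (sig_white / ref_white)

# A darker sample dims the sphere field (ref drops from 7 to 6 a.u.),
# so the corrected value exceeds the naive ratio 0.99 * 2/3.
print(sbse_corrected_reflectance(2.0, 3.0, 6.0, 7.0))
```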

  5. The quantification and correction of wind-induced precipitation measurement errors

    Science.gov (United States)

    Kochendorfer, John; Rasmussen, Roy; Wolff, Mareile; Baker, Bruce; Hall, Mark E.; Meyers, Tilden; Landolt, Scott; Jachcik, Al; Isaksen, Ketil; Brækkan, Ragnar; Leeper, Ronald

    2017-04-01

    Hydrologic measurements are important for both the short- and long-term management of water resources. Of the terms in the hydrologic budget, precipitation is typically the most important input; however, measurements of precipitation are subject to large errors and biases. For example, an all-weather unshielded weighing precipitation gauge can collect less than 50 % of the actual amount of solid precipitation when wind speeds exceed 5 m s-1. Using results from two different precipitation test beds, such errors have been assessed for unshielded weighing gauges and for weighing gauges employing four of the most common windshields currently in use. Functions to correct wind-induced undercatch were developed and tested. In addition, corrections for the single-Alter weighing gauge were developed using the combined results of two separate sites in Norway and the USA. In general, the results indicate that the functions effectively correct the undercatch bias that affects such precipitation measurements. In addition, a single function developed for the single-Alter gauges effectively decreased the bias at both sites, with the bias at the US site improving from -12 to 0 %, and the bias at the Norwegian site improving from -27 to -4 %. These correction functions require only wind speed and air temperature as inputs, and were developed for use in national and local precipitation networks, hydrological monitoring, roadway and airport safety work, and climate change research. The techniques used to develop and test these transfer functions at more than one site can also be used for other more comprehensive studies, such as the World Meteorological Organization Solid Precipitation Intercomparison Experiment (WMO-SPICE).
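A transfer-function correction of this kind can be sketched as below. The exponential dependence on wind speed and air temperature is typical of the undercatch literature, but the coefficients a, b, c here are illustrative placeholders, not the fitted WMO-SPICE values:

```python
import math

def catch_efficiency(wind_ms, temp_c, a=0.05, b=0.7, c=0.6):
    """Wind-induced catch efficiency of a weighing gauge, in (0, 1].

    Generic exponential transfer-function shape: efficiency drops with
    wind speed, and the arctan term switches between a snow regime
    (cold, large undercatch) and a rain regime (warm, small undercatch).
    Coefficients are made-up for illustration.
    """
    return math.exp(-a * wind_ms * (1 - math.atan(b * temp_c) + c))

def correct_precip(measured_mm, wind_ms, temp_c):
    """Undo the undercatch: corrected = measured / catch efficiency."""
    return measured_mm / catch_efficiency(wind_ms, temp_c)

# Snow at -5 degC in a 5 m/s wind: the gauge catches under half of what fell
print(f"CE = {catch_efficiency(5.0, -5.0):.2f}")
print(f"corrected: {correct_precip(2.0, 5.0, -5.0):.1f} mm")
```

With these placeholder coefficients, the same 5 m/s wind barely affects rain at +10 degC (efficiency above 0.95), mirroring the solid-vs-liquid contrast described in the abstract.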

  6. Right and left correlation of retinal vessel caliber measurements in anisometropic children: effect of refractive error.

    Science.gov (United States)

    Joachim, Nichole; Rochtchina, Elena; Tan, Ava Grace; Hong, Thomas; Mitchell, Paul; Wang, Jie Jin

    2012-08-07

    Previous studies have reported high right-left eye correlation in retinal vessel caliber. We test the hypothesis that right-left correlation in retinal vessel caliber would be reduced in anisometropic compared with emmetropic children. Retinal arteriolar and venular calibers were measured in 12-year-old children. Three groups were selected: group 1, both eyes emmetropic (n = 214); group 2, right-left spherical equivalent refraction (SER) difference ≥1.00 but <2.00 D; and group 3, right-left SER difference ≥2.00 D (n = 32). Pearson's correlations between the two eyes were compared between group 1 and group 2 or 3. Associations between right-left difference in refractive error and right-left difference in caliber measurements were assessed using linear regression models. Right-left correlation in group 1 was 0.57 for central retinal arteriolar equivalent (CRAE) and 0.70 for central retinal venular equivalent (CRVE) compared with 0.60 and 0.82 for CRAE and CRVE, respectively, in group 2 (P = 0.42 and P = 0.08), and 0.36 and 0.52, respectively, in group 3 (P = 0.08 and P = 0.07, referenced to group 1). Each 1.00-D increase in right-left SER difference was associated with a 0.74-μm increase in mean CRAE difference (P = 0.02) and a 1.23-μm increase in mean CRVE difference between the two eyes (P = 0.002). Each 0.1-mm increase in right-left difference in axial length was associated with a 0.21-μm increase in the mean difference in CRAE (P = 0.01) and a 0.42-μm increase in the mean difference in CRVE (P < 0.0001) between the two eyes. Refractive error ≥2.00 D may contribute to variation in measurements of retinal vessel caliber.

  7. Detection of microcalcifications in mammograms using error of prediction and statistical measures

    Science.gov (United States)

    Acha, Begoña; Serrano, Carmen; Rangayyan, Rangaraj M.; Leo Desautels, J. E.

    2009-01-01

    A two-stage method for detecting microcalcifications in mammograms is presented. In the first stage, the determination of the candidates for microcalcifications is performed. For this purpose, a 2-D linear prediction error filter is applied, and for those pixels where the prediction error is larger than a threshold, a statistical measure is calculated to determine whether they are candidates for microcalcifications or not. In the second stage, a feature vector is derived for each candidate, and after a classification step using a support vector machine, the final detection is performed. The algorithm is tested with 40 mammographic images, from Screen Test: The Alberta Program for the Early Detection of Breast Cancer with 50-μm resolution, and the results are evaluated using a free-response receiver operating characteristics curve. Two different analyses are performed: an individual microcalcification detection analysis and a cluster analysis. In the analysis of individual microcalcifications, detection sensitivity values of 0.75 and 0.81 are obtained at 2.6 and 6.2 false positives per image, on the average, respectively. The best performance is characterized by a sensitivity of 0.89, a specificity of 0.99, and a positive predictive value of 0.79. In cluster analysis, a sensitivity value of 0.97 is obtained at 1.77 false positives per image, and a value of 0.90 is achieved at 0.94 false positive per image.
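The first stage can be sketched with a fixed planar predictor (west + north − northwest, exact for locally planar intensity ramps). The paper's 2-D linear prediction filter estimates its coefficients from the image, so this fixed choice and the synthetic patch are illustrations only:

```python
import numpy as np

def prediction_error(img):
    """2-D linear prediction error with fixed planar coefficients.

    Each pixel is predicted as west + north - northwest; the residual is
    small over smooth background and large at point-like structures.
    """
    img = img.astype(float)
    err = np.zeros_like(img)
    err[1:, 1:] = img[1:, 1:] - (img[1:, :-1] + img[:-1, 1:] - img[:-1, :-1])
    return err

# Synthetic patch: smooth noisy background plus one bright spot standing
# in for a microcalcification (sizes and contrast are arbitrary)
rng = np.random.default_rng(0)
patch = rng.normal(100.0, 2.0, (64, 64))
patch[30, 30] += 40.0

err = prediction_error(patch)
candidates = np.abs(err) > 5 * err.std()   # first-stage thresholding
print(np.argwhere(candidates))
```

The spot triggers residuals at itself and its three causal neighbours, which is why the paper follows the filter with a statistical measure and a classifier to prune candidates.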

  8. Long-range measurement system using ultrasonic range sensor with high-power transmitter array in air.

    Science.gov (United States)

    Kumar, Sahdev; Furuhashi, Hideo

    2017-02-01

    A long-range measurement system comprising an ultrasonic range sensor with a high-power ultrasonic transmitter array in air was investigated. The system is simple in construction and can be used under adverse conditions such as fog, rain, darkness, and smoke. However, because ultrasonic waves are strongly absorbed by air molecules, the measurable range is limited to a few meters. Therefore, we developed a high-power ultrasonic transmitter array consisting of 144 transmitting elements. All elements are arranged in the form of a 12×12 array pattern. The sound pressure level at 5 m from the transmitter array was >30 dB higher than that of a single element. A measuring range of over 25 m was achieved using this transmitter array in conjunction with a receiver array having 32 receiving elements. The characteristics of the transmitter array and range sensor system are discussed by comparing simulation and experimental results. Copyright © 2016 Elsevier B.V. All rights reserved.
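Two back-of-the-envelope relations sit behind such a system: the ideal on-axis gain of an N-element coherent array is 20·log10(N), and pulse-echo range follows from the temperature-dependent speed of sound. The numbers below are illustrative:

```python
import math

def array_gain_db(n_elements):
    """Ideal on-axis pressure gain of n coherent transmitters vs one."""
    return 20 * math.log10(n_elements)

def range_from_echo(t_round_trip, temp_c=20.0):
    """Pulse-echo distance; speed of sound in air ~= 331.4 + 0.6*T m/s."""
    c = 331.4 + 0.6 * temp_c
    return c * t_round_trip / 2

print(f"ideal 12x12 array gain: {array_gain_db(144):.1f} dB")
print(f"echo after 145.7 ms   : {range_from_echo(0.1457):.2f} m")
```

The ideal 144-element gain is about 43 dB; the roughly 30 dB measured at 5 m is plausibly the same figure reduced by air absorption and element directivity.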

  9. The Effect of Error Correlation on Interfactor Correlation in Psychometric Measurement

    Science.gov (United States)

    Westfall, Peter H.; Henning, Kevin S. S.; Howell, Roy D.

    2012-01-01

    This article shows how interfactor correlation is affected by error correlations. Theoretical and practical justifications for error correlations are given, and a new equivalence class of models is presented to explain the relationship between interfactor correlation and error correlations. The class allows simple, parsimonious modeling of error…

  10. Measured Response of Local, Mid-range and Far-range Discontinuities of Large Metal Groundplanes using Time Domain Techniques

    Directory of Open Access Journals (Sweden)

    T. Schrader

    2005-01-01

    Full Text Available This work describes a method to detect and to quantify any local or mid-range discontinuity on extended flat metal planes. Often these planes are used for antenna calibration (open area test site - OATS), or the plane may be the ground of a semi-anechoic chamber used in Electromagnetic Compatibility (EMC) testing. The measurement uncertainty of antenna calibration or EMC testing depends on the groundplane's quality, which can be assessed using this method. A vector network analyzer with time-domain option is used to determine the complex-valued input scattering parameter S11,F of an aperture antenna in a monostatic setup. S11,F contains the desired information about the discontinuities and is measured in the frequency domain with high dynamic range. Only after linear filtering utilizing the Chirp-Z transform does the resulting time-domain signal S11,T reveal the local and mid-range discontinuities.

  11. Effects of measurement errors on psychometric measurements in ergonomics studies: Implications for correlations, ANOVA, linear regression, factor analysis, and linear discriminant analysis.

    Science.gov (United States)

    Liu, Yan; Salvendy, Gavriel

    2009-05-01

    This paper aims to demonstrate the effects of measurement errors on psychometric measurements in ergonomics studies. A variety of sources can cause random measurement errors in ergonomics studies and these errors can distort virtually every statistic computed and lead investigators to erroneous conclusions. The effects of measurement errors on five most widely used statistical analysis tools have been discussed and illustrated: correlation; ANOVA; linear regression; factor analysis; linear discriminant analysis. It has been shown that measurement errors can greatly attenuate correlations between variables, reduce statistical power of ANOVA, distort (overestimate, underestimate or even change the sign of) regression coefficients, underrate the explanation contributions of the most important factors in factor analysis and depreciate the significance of discriminant function and discrimination abilities of individual variables in discrimination analysis. The discussions will be restricted to subjective scales and survey methods and their reliability estimates. Other methods applied in ergonomics research, such as physical and electrophysiological measurements and chemical and biomedical analysis methods, also have issues of measurement errors, but they are beyond the scope of this paper. As there has been increasing interest in the development and testing of theories in ergonomics research, it has become very important for ergonomics researchers to understand the effects of measurement errors on their experiment results, which the authors believe is very critical to research progress in theory development and cumulative knowledge in the ergonomics field.
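The attenuation effect on correlations can be reproduced in a few lines: with scale reliabilities rel_x and rel_y, the observed correlation shrinks to r·sqrt(rel_x·rel_y) (Spearman's attenuation formula). A simulation sketch with made-up numbers:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000
true_r, rel = 0.6, 0.7       # latent correlation; reliability of each scale

# Latent (error-free) scores with correlation true_r
x = rng.normal(size=n)
y = true_r * x + np.sqrt(1 - true_r**2) * rng.normal(size=n)

# Add random measurement error so each observed scale has reliability 0.7
err_sd = np.sqrt((1 - rel) / rel)
x_obs = x + err_sd * rng.normal(size=n)
y_obs = y + err_sd * rng.normal(size=n)

r_obs = np.corrcoef(x_obs, y_obs)[0, 1]
print(f"observed r ~= {r_obs:.2f}")         # near 0.6 * 0.7 = 0.42
print(f"disattenuated ~= {r_obs / rel:.2f}")  # near the latent 0.6
```

The same shrinkage propagates into regression slopes and factor loadings, which is the mechanism behind the distortions the paper catalogues.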

  12. Measurements on pointing error and field of view of Cimel-318 Sun photometers in the scope of AERONET

    Directory of Open Access Journals (Sweden)

    B. Torres

    2013-08-01

    Full Text Available Sensitivity studies indicate that among the diverse error sources of ground-based sky radiometer observations, the pointing error plays an important role in the correct retrieval of aerosol properties. Accurate pointing is especially critical for the characterization of desert dust aerosol. The present work relies on the analysis of two new measurement procedures (cross and matrix) specifically designed for the evaluation of the pointing error in the standard instrument of the Aerosol Robotic Network (AERONET), the Cimel CE-318 Sun photometer. The first part of the analysis contains a preliminary study whose results establish the need for a Sun-movement correction for an accurate evaluation of the pointing error from both new measurements. Once this correction is applied, both measurements show equivalent results, with differences under 0.01° in the pointing error estimations. The second part of the analysis covers the incorporation of the cross procedure into the AERONET routine measurement protocol in order to monitor the pointing error in field instruments. The pointing error was evaluated using the data collected for more than a year in 7 Sun photometers belonging to AERONET sites. The registered pointing error values were generally smaller than 0.1°, though in some instruments values up to 0.3° have been observed. Moreover, the pointing error analysis shows that this measurement can be useful to detect mechanical problems in the robots or dirtiness in the 4-quadrant detector used to track the Sun. Specifically, these mechanical faults can be detected due to the stable behavior of the values over time and vs. the solar zenith angle. Finally, the matrix procedure can be used to derive the value of the solid view angle of the instruments. The methodology has been implemented and applied for the characterization of 5 Sun photometers. To validate the method, a comparison with solid angles obtained from the vicarious calibration method was
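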

  13. Considerations for analysis of time-to-event outcomes measured with error: Bias and correction with SIMEX.

    Science.gov (United States)

    Oh, Eric J; Shepherd, Bryan E; Lumley, Thomas; Shaw, Pamela A

    2017-11-29

    For time-to-event outcomes, a rich literature exists on the bias introduced by covariate measurement error in regression models, such as the Cox model, and methods of analysis to address this bias. By comparison, less attention has been given to understanding the impact or addressing errors in the failure time outcome. For many diseases, the timing of an event of interest (such as progression-free survival or time to AIDS progression) can be difficult to assess or reliant on self-report and therefore prone to measurement error. For linear models, it is well known that random errors in the outcome variable do not bias regression estimates. With nonlinear models, however, even random error or misclassification can introduce bias into estimated parameters. We compare the performance of 2 common regression models, the Cox and Weibull models, in the setting of measurement error in the failure time outcome. We introduce an extension of the SIMEX method to correct for bias in hazard ratio estimates from the Cox model and discuss other analysis options to address measurement error in the response. A formula to estimate the bias induced into the hazard ratio by classical measurement error in the event time for a log-linear survival model is presented. Detailed numerical studies are presented to examine the performance of the proposed SIMEX method under varying levels and parametric forms of the error in the outcome. We further illustrate the method with observational data on HIV outcomes from the Vanderbilt Comprehensive Care Clinic. Copyright © 2017 John Wiley & Sons, Ltd.
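The paper extends SIMEX to errors in the failure-time outcome of Cox models; the generic SIMEX machinery is easiest to see on the toy covariate-error problem for which the method was introduced. The sketch below is a self-contained illustration of that machinery, not the paper's extension:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 20_000
sigma_u = 0.5                       # known measurement-error SD

# True relationship and an error-prone covariate (classical additive error)
x = rng.normal(size=n)
y = 2.0 * x + rng.normal(size=n)
w = x + sigma_u * rng.normal(size=n)

def slope(w, y):
    return np.cov(w, y)[0, 1] / np.var(w)

# SIMEX: add *extra* error at several lambda levels, refit, then
# extrapolate the fitted slopes back to lambda = -1 (the no-error point)
lambdas = np.array([0.0, 0.5, 1.0, 1.5, 2.0])
slopes = []
for lam in lambdas:
    fits = [slope(w + np.sqrt(lam) * sigma_u * rng.normal(size=n), y)
            for _ in range(20)]     # average over simulated error draws
    slopes.append(np.mean(fits))

coef = np.polyfit(lambdas, slopes, 2)      # quadratic extrapolant
beta_simex = np.polyval(coef, -1.0)
print(f"naive = {slopes[0]:.3f}, SIMEX = {beta_simex:.3f}")  # true beta = 2
```

The naive fit is attenuated toward zero; the quadratic extrapolation recovers most, though not all, of the bias, which is the usual trade-off of SIMEX's approximate extrapolation step.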

  14. Method to resolve microphone and sample location errors in the two-microphone duct measurement method

    Science.gov (United States)

    Katz

    2000-11-01

    Utilizing the two-microphone impedance tube method, the normal incidence acoustic absorption and acoustic impedance can be measured for a given sample. This method relies on the measured transfer function between two microphones, and on knowledge of their precise locations relative to each other and to the sample material. In this article, a method is proposed to accurately determine these locations. A third sensor is added at the end of the tube to simplify the measurement. First, a justification and investigation of the method is presented. Second, reference terminations are measured to evaluate the accuracy of the apparatus. Finally, comparisons are made between the new method and current methods for determining these distances, and the variations are discussed. From this, conclusions are drawn with regard to the applicability of and need for the new method and the circumstances under which it is applicable. Results show that the method provides a reliable determination of both microphone locations, which is not possible using the current techniques. Errors due to inaccurate determination of these parameters between methods were on the order of 3% for R and 12% for Re Z.
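The underlying computation is the standard two-microphone transfer-function formula (ISO 10534-2), which makes clear why the reflection coefficient is sensitive to the mic spacing s and the sample distance x1. The sketch below forward-simulates a tube sound field for a known reflection coefficient and recovers it; the geometry values are illustrative:

```python
import numpy as np

def reflection_coefficient(H12, f, s, x1, c=343.0):
    """Normal-incidence reflection coefficient, two-microphone method.

    H12 : transfer function p(near mic) / p(far mic)
    f   : frequency (Hz), s : microphone spacing (m)
    x1  : distance from the sample face to the *farther* microphone (m)
    """
    k = 2 * np.pi * f / c
    return ((H12 - np.exp(-1j * k * s)) / (np.exp(1j * k * s) - H12)
            * np.exp(2j * k * x1))

# Forward-simulate the standing-wave field for a known reflection
# coefficient, then confirm the formula recovers it exactly.
f, s, x1, c = 1000.0, 0.05, 0.15, 343.0
k = 2 * np.pi * f / c
r_true = 0.4 * np.exp(0.3j)
p = lambda x: np.exp(1j * k * x) + r_true * np.exp(-1j * k * x)  # x from sample
H12 = p(x1 - s) / p(x1)

r_est = reflection_coefficient(H12, f, s, x1)
alpha = 1 - abs(r_est) ** 2          # normal-incidence absorption
print(abs(r_est), alpha)             # 0.4, 0.84
```

Perturbing s or x1 in this sketch directly biases r and alpha, which is the error mechanism the proposed third-sensor calibration is designed to remove.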

  15. Distance Measurement Error in Time-of-Flight Sensors Due to Shot Noise

    Directory of Open Access Journals (Sweden)

    Julio Illade-Quinteiro

    2015-02-01

    Full Text Available Unlike other noise sources, which can be reduced or eliminated by different signal processing techniques, shot noise is an ever-present noise component in any imaging system. In this paper, we present an in-depth study of the impact of shot noise on time-of-flight sensors in terms of the error introduced in the distance estimation. The paper addresses the effect of parameters such as the size of the photosensor, the background and signal power, and the integration time, and the resulting design trade-offs. The study is demonstrated with different numerical examples, which show that, in general, the phase-shift determination technique with two background measurements is the most suitable for pixel arrays of large resolution.
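A minimal sketch of the phase-shift distance estimation that shot noise perturbs, using one common four-sample demodulation convention; the modulation frequency, amplitude, and background values are assumptions, not values from the paper.

```python
import math

C = 299_792_458.0    # speed of light, m/s
F_MOD = 20e6         # modulation frequency (assumed); unambiguous range c/(2f) = 7.5 m

def distance_from_samples(c0, c1, c2, c3):
    """Four-bucket phase-shift demodulation (one common convention)."""
    phase = math.atan2(c1 - c3, c0 - c2) % (2.0 * math.pi)
    return C * phase / (4.0 * math.pi * F_MOD)

# Synthesize ideal correlation samples for a target at d = 3.0 m.  Shot noise
# would add Poisson fluctuations scaling with B + A, which is why a large
# background B degrades the distance estimate.
d_true = 3.0
phi = 4.0 * math.pi * F_MOD * d_true / C
A, B = 100.0, 500.0          # signal amplitude and background level (assumed)
samples = [B + A * math.cos(phi - i * math.pi / 2) for i in range(4)]
print(distance_from_samples(*samples))
```

In the noise-free case the target distance is recovered exactly; adding Poisson noise to each bucket would reproduce the shot-noise error the paper quantifies.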

  16. Perceptual, durational and tongue displacement measures following articulation therapy for rhotic sound errors.

    Science.gov (United States)

    Bressmann, Tim; Harper, Susan; Zhylich, Irina; Kulkarni, Gajanan V

    2016-01-01

    Outcomes of articulation therapy for rhotic errors are usually assessed perceptually. However, our understanding of the associated changes in tongue movement is limited. This study described perceptual, durational and tongue displacement changes over 10 sessions of articulation therapy for /ɹ/ in six children. Four of the participants also received ultrasound biofeedback of their tongue shape. Speech and tongue movement were recorded pre-therapy, after five sessions, in the final session and at a one-month follow-up. Listeners perceived improvement and classified more productions as /ɹ/ in the final and follow-up assessments. The durations of VɹV syllables at the midway point of the therapy were longer. Cumulative tongue displacement increased in the final session. The average standard deviation was significantly higher in the middle and final assessments. The duration and tongue displacement measures illustrated how articulation therapy affected tongue movement and may be useful for outcomes research on articulation therapy.

  17. A numerical algorithm to propagate navigation error covariance matrices associated with generalized strapdown inertial measurement units

    Science.gov (United States)

    Weir, Kent A.; Wells, Eugene M.

    1990-01-01

    The design and operation of a Strapdown Navigation Analysis Program (SNAP) developed to perform covariance analysis on spacecraft inertial-measurement-unit (IMU) navigation errors are described and demonstrated. Consideration is given to the IMU modeling subroutine (with user-specified sensor characteristics), the data input procedures, state updates and the simulation of instrument failures, the determination of the nominal trajectory, the mapping-matrix and Monte Carlo covariance-matrix propagation methods, and aided-navigation simulation. Numerical results are presented in tables for sample applications involving (1) the Galileo/IUS spacecraft from its deployment from the Space Shuttle to a point 10^8 ft from the center of the earth and (2) the TDRS-C/IUS spacecraft from Space Shuttle liftoff to a point about 2 h before IUS deployment. SNAP is shown to give reliable results for both cases, with good general agreement between the mapping-matrix and Monte Carlo predictions.
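The mapping-matrix propagation at the core of such a covariance analysis reduces to repeated application of P' = F P Fᵀ + Q along the nominal trajectory. A toy sketch with a hypothetical two-state model (not SNAP's actual IMU error states):

```python
# One discrete covariance-propagation step: P' = F P F^T + Q, the operation a
# mapping-matrix method applies repeatedly along the nominal trajectory.

def mat_mul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(len(b)))
             for j in range(len(b[0]))] for i in range(len(a))]

def transpose(a):
    return [list(row) for row in zip(*a)]

def mat_add(a, b):
    return [[x + y for x, y in zip(ra, rb)] for ra, rb in zip(a, b)]

def propagate(P, F, Q):
    return mat_add(mat_mul(mat_mul(F, P), transpose(F)), Q)

# Hypothetical 2-state example (position, velocity) over a time step dt:
dt = 0.1
F = [[1.0, dt], [0.0, 1.0]]        # state-transition (mapping) matrix
Q = [[0.0, 0.0], [0.0, 1e-4]]      # process noise driving the velocity state
P = [[1e-2, 0.0], [0.0, 1e-2]]     # initial error covariance

for _ in range(100):               # propagate 10 s of trajectory
    P = propagate(P, F, Q)

print(P[0][0])   # position variance grows as velocity error integrates in
```

A Monte Carlo check, as in the abstract, would propagate sampled error vectors through the same F and compare the sample covariance against P.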

  18. A Reanalysis of Toomela (2003): Spurious measurement error as cause for common variance between personality factors

    Directory of Open Access Journals (Sweden)

    MATTHIAS ZIEGLER

    2009-03-01

    Full Text Available The present article reanalyzed data collected by Toomela (2003). The data contain personality self-ratings and cognitive ability test results from n = 912 men with military background. In his original article, Toomela showed that in the group with the highest cognitive ability, Big-Five Neuroticism and Conscientiousness were substantially correlated and could no longer be clearly separated using exploratory factor analysis. The present reanalysis was based on the hypothesis that a spurious measurement error caused by situational demand was responsible; that is, people distorted their answers. Furthermore, it was hypothesized that this situational demand was felt due to a person’s military rank but not due to his intelligence. Using a multigroup structural equation model, this hypothesis was confirmed. Moreover, the results indicate that an uncorrelated trait model might represent personality better when situational demand is partialed out. Practical and theoretical implications are discussed.

  19. Reduction of truncation errors in planar, cylindrical, and partial spherical near-field antenna measurements

    DEFF Research Database (Denmark)

    Cano-Fácila, Francisco José; Pivnenko, Sergey; Sierra-Castaner, Manuel

    2012-01-01

    A method to reduce truncation errors in near-field antenna measurements is presented. The method is based on the Gerchberg-Papoulis iterative algorithm used to extrapolate band-limited functions, and it is able to extend the valid region of the calculated far-field pattern up to the whole forward hemisphere. The extension of the valid region is achieved by the iterative application of a transformation between two different domains. After each transformation, a filtering process that is based on known information at each domain is applied. The first domain is the spectral domain, in which the plane wave spectrum (PWS) is reliable only within a known region. The second domain is the field distribution over the antenna under test (AUT) plane, in which the desired field is assumed to be concentrated on the antenna aperture. The method can be applied to any scanning geometry, but in this paper, only the planar...
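The Gerchberg-Papoulis iteration can be sketched in one dimension: alternately enforce the band limit in the spectral domain and restore the known samples in the signal domain, so the signal is extrapolated into the truncated region. The sizes, band limit, and known region below are arbitrary toy choices, not antenna-measurement parameters.

```python
import cmath
import math

N = 32
BAND = 2                   # the "truth" only contains harmonics |k| <= BAND
KNOWN = set(range(26))     # trusted ("valid") samples; the last 6 are unknown

def dft(x):
    return [sum(x[n] * cmath.exp(-2j * math.pi * k * n / N) for n in range(N))
            for k in range(N)]

def idft(X):
    return [sum(X[k] * cmath.exp(2j * math.pi * k * n / N) for k in range(N)) / N
            for n in range(N)]

# Band-limited "truth" and its truncated measurement (zero where not known):
truth = [math.cos(2 * math.pi * n / N) + 0.5 * math.sin(4 * math.pi * n / N)
         for n in range(N)]
x = [truth[n] if n in KNOWN else 0.0 for n in range(N)]

def err(sig):
    return math.sqrt(sum((sig[n] - truth[n]) ** 2 for n in range(N)
                         if n not in KNOWN))

e0 = err(x)
for _ in range(300):
    X = dft(x)
    X = [X[k] if (k <= BAND or k >= N - BAND) else 0.0   # 1) enforce band limit
         for k in range(N)]
    x = [v.real for v in idft(X)]
    x = [truth[n] if n in KNOWN else x[n]                # 2) restore known data
         for n in range(N)]

print(e0, err(x))   # the error outside the valid region shrinks over iterations
```

In the antenna problem the two constraint sets are the reliable part of the PWS and the aperture support on the AUT plane, but the alternating-projection structure is the same.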

  20. Effect of age, sex, and refractive errors on central corneal thickness measured by Oculus Pentacam®

    Directory of Open Access Journals (Sweden)

    Hashmani N

    2017-06-01

    Full Text Available Nauman Hashmani,1 Sharif Hashmani,1 Azfar N Hanfi,1 Misbah Ayub,2 Choudhry M Saad,2 Hina Rajani,2 Marium G Muhammad,2 Misbahul Aziz1 (1Department of Ophthalmology, Hashmanis Hospital, Karachi, Pakistan; 2Dow Medical College, Karachi, Pakistan) Background: Central corneal thickness (CCT) can be used to assess the corneal physiological condition as well as the pathological changes associated with ocular diseases. It influences the measurement of intraocular pressure and is used as a screening tool for refractive surgery candidates. The aim of this study was to determine the median CCT in a normal Pakistani population and to correlate CCT with age, sex, and refractive errors. Methods: We conducted a retrospective analysis of 5,171 healthy eyes in 2,598 patients who came to Hashmanis Hospital, Karachi, Pakistan. The age of the patients ranged from 6 to 70 years. Refractive error was gauged by an auto-refractometer, and CCT was measured using an Oculus Pentacam®. Results: The median CCT was 541.0 µm with an interquartile range (IQR) of 44.0 µm. The median age was 26.0 years (IQR: 8.0). The median spherical equivalent (SE) was −4.3 D (IQR: 3.3), with a median sphere value of −4.0 D (IQR: 3.8). Lastly, the median cylinder was −1.0 D (IQR: 1.3). Age had a weak negative but statistically significant correlation with CCT (r=−0.058, P<0.001). Additionally, males had thinner CCT readings than females (P=0.001). The cylinder values, on the other hand, had a significant (P=0.004) positive correlation (r=0.154). Three values showed no significant correlation: sphere (P=0.100), SE (P=0.782), and the left or right eye (P=0.151). Conclusion: In the Pakistani population, CCT was significantly affected by three variables: sex, age, and cylinder. No relationship of CCT was observed with the left or right eye, sphere, or SE. Keywords: refractive surgery, glaucoma, topography, sex, refractive errors, astigmatism

  1. Analysis of misclassified correlated binary data using a multivariate probit model when covariates are subject to measurement error.

    Science.gov (United States)

    Roy, Surupa; Banerjee, Tathagata

    2009-06-01

    A multivariate probit model for correlated binary responses given the predictors of interest has been considered. Some of the responses are subject to classification errors and hence are not directly observable. Also measurements on some of the predictors are not available; instead the measurements on its surrogate are available. However, the conditional distribution of the unobservable predictors given the surrogate is completely specified. Models are proposed taking into account either or both of these sources of errors. Likelihood-based methodologies are proposed to fit these models. To ascertain the effect of ignoring classification errors and/or measurement error on the estimates of the regression and correlation parameters, a sensitivity study is carried out through simulation. Finally, the proposed methodology is illustrated through an example.

  2. Performance Analysis of ToA-Based Positioning Algorithms for Static and Dynamic Targets with Low Ranging Measurements.

    Science.gov (United States)

    Ferreira, André G; Fernandes, Duarte; Catarino, André P; Monteiro, João L

    2017-08-19

    Indoor Positioning Systems (IPSs) for emergency responders is a challenging field attracting researchers worldwide. When compared with traditional indoor positioning solutions, the IPSs for emergency responders stand out as they have to operate in harsh and unstructured environments. From the various technologies available for the localization process, ultra-wide band (UWB) is a promising technology for such systems due to its robust signaling in harsh environments, through-wall propagation and high-resolution ranging. However, during emergency responders' missions, the availability of UWB signals is generally low (the nodes have to be deployed as the emergency responders enter a building) and can be affected by the non-line-of-sight (NLOS) conditions. In this paper, the performance of four typical distance-based positioning algorithms (Analytical, Least Squares, Taylor Series, and Extended Kalman Filter methods) with only three ranging measurements is assessed based on a COTS UWB transceiver. These algorithms are compared based on accuracy, precision and root mean square error (RMSE). The algorithms were evaluated under two environments with different propagation conditions (an atrium and a lab), for static and mobile devices, and under the human body's influence. A NLOS identification and error mitigation algorithm was also used to improve the ranging measurements. The results show that the Extended Kalman Filter outperforms the other algorithms in almost every scenario, but it is affected by the low measurement rate of the UWB system.
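With exactly three ranges in 2-D, a least-squares-style position fix reduces to a linear system obtained by subtracting range equations, which is roughly the flavor of the simpler methods compared above. A minimal sketch with hypothetical anchors and injected range errors (not the paper's UWB setup):

```python
import math

# Hypothetical 2-D anchor layout and a target at (3.0, 2.0):
anchors = [(0.0, 0.0), (10.0, 0.0), (0.0, 8.0)]
target = (3.0, 2.0)
# Small fixed range errors stand in for UWB measurement noise / NLOS bias:
ranges = [math.dist(a, target) + e for a, e in zip(anchors, (0.05, -0.08, 0.03))]

def trilaterate(anchors, r):
    """Linearize by subtracting the first range equation from the other two."""
    (x1, y1), (x2, y2), (x3, y3) = anchors
    a11, a12 = 2 * (x2 - x1), 2 * (y2 - y1)
    a21, a22 = 2 * (x3 - x1), 2 * (y3 - y1)
    b1 = r[0]**2 - r[1]**2 + x2**2 - x1**2 + y2**2 - y1**2
    b2 = r[0]**2 - r[2]**2 + x3**2 - x1**2 + y3**2 - y1**2
    det = a11 * a22 - a12 * a21      # nonzero when anchors are not collinear
    return ((b1 * a22 - b2 * a12) / det, (a11 * b2 - a21 * b1) / det)

x, y = trilaterate(anchors, ranges)
print(x, y)   # close to (3.0, 2.0); the residual reflects the injected errors
```

An EKF, by contrast, would fuse successive range sets with a motion model, which is why it tolerates low measurement rates and noisy ranges better in the paper's mobile scenarios.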

  3. The Errors Caused by Test Site Configuration at the Radiated Emission Measurement

    Directory of Open Access Journals (Sweden)

    Miki Bittera

    2004-01-01

    Full Text Available Nowadays, it is very important to know the uncertainty of EMC measurements and to keep it low, to ensure the comparability of measurement results from different laboratories. This paper deals with the analysis of uncertainties caused by improper test site configuration, especially by receiving antenna positioning. The analysis is performed over the frequency range in which a biconical broadband antenna operates, and it is based on measurements. Although it would be simpler to obtain results by theoretical analysis, such analysis does not capture the properties of the test site.

  4. Reducing the impact of measurement errors in FRF-based substructure decoupling using a modal model

    Science.gov (United States)

    Peeters, P.; Manzato, S.; Tamarozzi, T.; Desmet, W.

    2018-01-01

    As the vibro-acoustic requirements of modern products become more stringent, the need for robust identification methods increases proportionally. Sometimes the identification of a component is greatly complicated by the presence of a supporting structure that cannot be removed during testing. This is where substructure decoupling finds its main applications. However, despite some recent advances in substructure decoupling, the number of successful applications has so far been limited. The main reason for this is the poor conditioning of the problem that tends to amplify noise and other measurement errors. This paper proposes a new approach that uses a modal model to filter the experimental frequency response functions (FRFs). This can reduce the impact of noise and mass loading considerably for decoupling applications and decrease the quality requirements for experimental data. Furthermore, based on the uncertainty of the observed eigenfrequencies, an arbitrary number of consistent (all FRFs exhibit exactly the same poles) FRF matrices can be generated that are all contained within the variation of the original measurement. This way, the variation that is observed within the measurement is taken into account. The result is a distribution of decoupled FRFs of which the average can be used as the decoupled FRF set while the spread on the results highlights the sensitivity or reliability of the obtained results. After briefly reintroducing the theory of FRF-based substructure decoupling, the main problems in decoupling are summarized. Afterwards, the new methodology is presented and tested on both numerical and experimental cases.

  5. Systematic and Statistical Errors Associated with Nuclear Decay Constant Measurements Using the Counting Technique

    Science.gov (United States)

    Koltick, David; Wang, Haoyu; Liu, Shih-Chieh; Heim, Jordan; Nistor, Jonathan

    2016-03-01

    Typical nuclear decay constants are measured at an accuracy level of 10^-2. Numerous applications, such as tests of unconventional theories, dating of materials, and long-term inventory evolution, require decay-constant accuracy at a level of 10^-4 to 10^-5. The statistical and systematic errors associated with precision measurements of decays using the counting technique are presented. Precision requires high count rates, which introduce time-dependent dead-time and pile-up corrections. An approach to overcome these issues, based on continuous recording of the detector current, is presented. Other systematic corrections include the time-dependent dead time due to background radiation, control of target motion and of radiation flight-path variation due to environmental conditions, and time-dependent effects caused by scattered events. The incorporation of blind experimental techniques can help make the measurement independent of past results. A spectrometer design and data analysis that can accomplish these goals are reviewed. The author would like to thank TechSource, Inc. and Advanced Physics Technologies, LLC. for their support in this work.
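One of the rate-dependent corrections mentioned, dead time, has a standard non-paralyzable form; a minimal sketch with a hypothetical dead-time value:

```python
# Non-paralyzable dead-time correction: if each recorded event blocks the
# detector for tau seconds, the true rate n relates to the measured rate m by
#   m = n / (1 + n * tau)   =>   n = m / (1 - m * tau)

TAU = 2e-6    # dead time per event, seconds (hypothetical)

def true_rate(measured_rate, tau=TAU):
    return measured_rate / (1.0 - measured_rate * tau)

m = 50_000.0            # counts/s actually recorded
n = true_rate(m)
print(n, (n - m) / n)   # at this rate a fraction m*tau = 0.1 of events is lost
```

At the 10^-4 accuracy levels the abstract targets, tau itself (and its time dependence) must be known far better than this simple constant-tau model assumes, which motivates the current-recording approach described above.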

  6. Valuing urban open space using the travel-cost method and the implications of measurement error.

    Science.gov (United States)

    Hanauer, Merlin M; Reid, John

    2017-08-01

    Urbanization has placed pressure on open space within and adjacent to cities. In recent decades, a greater awareness has developed to the fact that individuals derive multiple benefits from urban open space. Given the location, there is often a high opportunity cost to preserving urban open space, thus it is important for both public and private stakeholders to justify such investments. The goals of this study are twofold. First, we use detailed surveys and precise, accessible, mapping methods to demonstrate how travel-cost methods can be applied to the valuation of urban open space. Second, we assess the degree to which typical methods of estimating travel times, and thus travel costs, introduce bias to the estimates of welfare. The site we study is Taylor Mountain Regional Park, a 1100-acre space located immediately adjacent to Santa Rosa, California, which is the largest city (∼170,000 population) in Sonoma County and lies 50 miles north of San Francisco. We estimate that the average per trip access value (consumer surplus) is $13.70. We also demonstrate that typical methods of measuring travel costs significantly understate these welfare measures. Our study provides policy-relevant results and highlights the sensitivity of urban open space travel-cost studies to bias stemming from travel-cost measurement error. Copyright © 2017 Elsevier Ltd. All rights reserved.

  7. Demonstrating the Error Budget for the Climate Absolute Radiance and Refractivity Observatory Through Solar Irradiance Measurements

    Science.gov (United States)

    Thome, Kurtis; McCorkel, Joel; McAndrew, Brendan

    2016-01-01

    The Climate Absolute Radiance and Refractivity Observatory (CLARREO) mission addresses the need to observe high-accuracy, long-term climate change trends and to use decadal change observations as a method to determine the accuracy of climate change. A CLARREO objective is to improve the accuracy of SI-traceable, absolute calibration at infrared and reflected solar wavelengths to reach on-orbit accuracies required to allow climate change observations to survive data gaps and observe climate change at the limit of natural variability. Such an effort will also demonstrate National Institute of Standards and Technology (NIST) approaches for use in future spaceborne instruments. The current work describes the results of laboratory and field measurements with the Solar, Lunar for Absolute Reflectance Imaging Spectroradiometer (SOLARIS), which is the calibration demonstration system (CDS) for the reflected solar portion of CLARREO. SOLARIS allows testing and evaluation of calibration approaches, alternate design and/or implementation approaches and components for the CLARREO mission. SOLARIS also provides a test-bed for detector technologies, non-linearity determination and uncertainties, and application of future technology developments and suggested spacecraft instrument design modifications. Results of laboratory calibration measurements are provided to demonstrate key assumptions about instrument behavior that are needed to achieve CLARREO's climate measurement requirements. Absolute radiometric response is determined using laser-based calibration sources and applied to direct solar views for comparison with accepted solar irradiance models to demonstrate accuracy values giving confidence in the error budget for the CLARREO reflectance retrieval.

  8. Cloud cover detection combining high dynamic range sky images and ceilometer measurements

    Science.gov (United States)

    Román, R.; Cazorla, A.; Toledano, C.; Olmo, F. J.; Cachorro, V. E.; de Frutos, A.; Alados-Arboledas, L.

    2017-11-01

    This paper presents a new algorithm for cloud detection based on high dynamic range images from a sky camera and ceilometer measurements. The algorithm is also able to detect the obstruction of the sun. This algorithm, called CPC (Camera Plus Ceilometer), is based on the assumption that under cloud-free conditions the sky field must show symmetry. The symmetry criteria are applied depending on ceilometer measurements of the cloud base height. The CPC algorithm is applied at two Spanish locations (Granada and Valladolid). The performance of CPC in retrieving the sun condition (obstructed or unobstructed) is analyzed in detail using pyranometer measurements at Granada as reference. CPC retrievals agree with those derived from the reference pyranometer in 85% of the cases (this agreement does not appear to depend on aerosol size or optical depth). The agreement percentage drops to only 48% when another algorithm, based on the Red-Blue Ratio (RBR), is applied to the sky camera images. The retrieved cloud cover at Granada and Valladolid is compared with that registered by trained meteorological observers. CPC cloud cover agrees with the reference, showing a slight overestimation and a mean absolute error of around 1 okta. A major advantage of the CPC algorithm with respect to the RBR method is that the determined cloud cover is independent of aerosol properties; the RBR algorithm overestimates cloud cover for coarse aerosols and high loads. Cloud cover obtained from the ceilometer alone shows results similar to the CPC algorithm, but the horizontal distribution cannot be obtained. In addition, it has been observed that under rapid and strong changes in cloud cover, ceilometer-derived cloud cover matches the real cloud cover less well.

  9. Prediction of rainfall intensity measurement errors using commercial microwave communication links

    Directory of Open Access Journals (Sweden)

    A. Zinevich

    2010-10-01

    Full Text Available Commercial microwave radio links forming cellular communication networks are known to be a valuable instrument for measuring near-surface rainfall. However, operational communication links are more uncertain relative to dedicated installations, since their geometry and frequencies are optimized for high communication performance rather than for observing rainfall. Quantification of the uncertainties for measurements that are non-optimal in the first place is essential to assure usability of the data.

    In this work we address modeling of instrumental impairments, i.e. signal variability due to antenna wetting, baseline attenuation uncertainty and digital quantization, as well as environmental ones, i.e. variability of drop size distribution along a link affecting accuracy of path-averaged rainfall measurement and spatial variability of rainfall in the link's neighborhood affecting the accuracy of rainfall estimation out of the link path. Expressions for root mean squared error (RMSE for estimates of path-averaged and point rainfall have been derived. To verify the RMSE expressions quantitatively, path-averaged measurements from 21 operational communication links in 12 different locations have been compared to records of five nearby rain gauges over three rainstorm events.

    The experiments show that the prediction accuracy is above 90% for temporal accumulation less than 30 min and lowers for longer accumulation intervals. Spatial variability in the vicinity of the link, baseline attenuation uncertainty and, possibly, suboptimality of wet antenna attenuation model are the major sources of link-gauge discrepancies. In addition, the dependence of the optimal coefficients of a conventional wet antenna attenuation model on spatial rainfall variability and, accordingly, link length has been shown.

    The expressions for RMSE of the path-averaged rainfall estimates can be useful for integration of measurements from multiple
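The basic link retrieval these error models wrap around is the power-law inversion of rain-induced attenuation; a minimal sketch with hypothetical coefficients (real ones would come from, e.g., ITU-R tables for the link frequency and polarization), showing how the baseline and wet-antenna terms enter as uncertainty sources:

```python
# Path-averaged rain rate from link attenuation via the power law k = a * R**b,
# with k the specific attenuation in dB/km.  The coefficients below are
# hypothetical placeholders, not fitted or ITU values.

A_COEF, B_COEF = 0.12, 1.05     # hypothetical power-law coefficients
LENGTH_KM = 4.0                 # link path length
BASELINE_DB = 42.0              # dry-weather (baseline) loss, itself uncertain
WET_ANTENNA_DB = 1.5            # assumed wet-antenna excess attenuation

def rain_rate(measured_loss_db):
    """Invert R = (k / a)**(1/b) from the rain-induced specific attenuation."""
    rain_db = max(measured_loss_db - BASELINE_DB - WET_ANTENNA_DB, 0.0)
    k = rain_db / LENGTH_KM
    return (k / A_COEF) ** (1.0 / B_COEF)

print(rain_rate(47.5))                      # rain rate for 5.5 dB excess loss
# A 0.5 dB baseline error propagates directly into the estimate:
print(rain_rate(47.5) - rain_rate(47.0))
```

The RMSE expressions in the paper quantify exactly such propagated terms, plus the spatial-variability contribution that no single-link formula can remove.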

  10. Measurement error correction in the least absolute shrinkage and selection operator model when validation data are available.

    Science.gov (United States)

    Vasquez, Monica M; Hu, Chengcheng; Roe, Denise J; Halonen, Marilyn; Guerra, Stefano

    2017-01-01

    Measurement of serum biomarkers by multiplex assays may be more variable as compared to single biomarker assays. Measurement error in these data may bias parameter estimates in regression analysis, which could mask true associations of serum biomarkers with an outcome. The Least Absolute Shrinkage and Selection Operator (LASSO) can be used for variable selection in these high-dimensional data. Furthermore, when the distribution of measurement error is assumed to be known or estimated with replication data, a simple measurement error correction method can be applied to the LASSO method. However, in practice the distribution of the measurement error is unknown and is expensive to estimate through replication, both in monetary cost and in the need for a greater amount of sample, which is often limited in quantity. We adapt an existing bias correction approach by estimating the measurement error using validation data in which a subset of serum biomarkers are re-measured on a random subset of the study sample. We evaluate this method using simulated data and data from the Tucson Epidemiological Study of Airway Obstructive Disease (TESAOD). We show that the bias in parameter estimation is reduced and variable selection is improved.
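The validation-data idea can be sketched in its simplest form: use replicate measurements on a subsample to estimate the error variance, then deattenuate a naive slope. This is a toy regression-calibration analogue of the approach, not the paper's corrected LASSO; all data are synthetic.

```python
import random

random.seed(1)

# Main study: outcome y depends on the true biomarker x, but only the
# error-prone w = x + u is measured.
N, BETA = 3000, 1.5
x = [random.gauss(0.0, 1.0) for _ in range(N)]
y = [BETA * xi + random.gauss(0.0, 0.3) for xi in x]
w = [xi + random.gauss(0.0, 0.6) for xi in x]

def var(v):
    m = sum(v) / len(v)
    return sum((a - m) ** 2 for a in v) / (len(v) - 1)

def slope(xs, ys):
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    return (sum((a - mx) * (b - my) for a, b in zip(xs, ys)) /
            sum((a - mx) ** 2 for a in xs))

# Validation data: re-measure w on a random subset; half the mean squared
# difference of the two replicates estimates the error variance var(u).
idx = random.sample(range(N), 300)
w2 = [x[i] + random.gauss(0.0, 0.6) for i in idx]
var_u = sum((w[i] - w2[j]) ** 2 for j, i in enumerate(idx)) / (2 * len(idx))

naive = slope(w, y)
corrected = naive * var(w) / (var(w) - var_u)   # undo the attenuation factor
print(naive, corrected)   # corrected is much closer to BETA = 1.5
```

The paper's method plays the same game inside the LASSO objective, so that both the coefficient bias and the resulting variable selection improve.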

  11. Multicollinearity and Measurement Error in Structural Equation Models: Implications for Theory Testing

    OpenAIRE

    Rajdeep Grewal; Joseph A. Cote; Hans Baumgartner

    2004-01-01

    The literature on structural equation models is unclear on whether and when multicollinearity may pose problems in theory testing (Type II errors). Two Monte Carlo simulation experiments show that multicollinearity can cause problems under certain conditions, specifically: (1) when multicollinearity is extreme, Type II error rates are generally unacceptably high (over 80%), (2) when multicollinearity is between 0.6 and 0.8, Type II error rates can be substantial (greater than 50% and frequent...

  12. Analysis of influence on back-EMF based sensorless control of PMSM due to parameter variations and measurement errors

    DEFF Research Database (Denmark)

    Wang, Z.; Lu, K.; Ye, Y.

    2011-01-01

    To achieve better performance of sensorless control of PMSM, a precise and stable estimation of rotor position and speed is required. Several parameter uncertainties and measurement errors may lead to estimation error, such as resistance and inductance variations due to temperature and flux saturation, current and voltage errors due to measurement uncertainties, and signal delay caused by hardware. This paper reveals some inherent principles for the performance of the back-EMF based sensorless algorithm embedded in a surface-mounted PMSM system adopting a vector control strategy, gives mathematical analysis and experimental results to support the principles, and quantifies the effects of each. It may serve as guidance for designers to minimize the estimation error and make proper on-line parameter estimations.

  13. Range Measurements of keV Hydrogen Ions in Solid Oxygen and Carbon Monoxide

    DEFF Research Database (Denmark)

    Schou, Jørgen; Sørensen, H.; Andersen, H.H.

    1984-01-01

    Ranges of 1.3–3.5 keV/atom hydrogen and deuterium molecular ions have been measured by a thin-film reflection method. The technique, used here for range measurements in solid oxygen and carbon monoxide targets, is identical to the one used previously for range measurements in hydrogen and nitrogen. The main aim was to look for phase effects, i.e. gas-solid differences in the stopping processes. While measured ranges in solid oxygen were in agreement with known gas data, the ranges in solid carbon monoxide were up to 50% larger than those calculated from gas-stopping data. The latter result agrees...

  14. Communication system features dual mode range acquisition plus time delay measurement

    Science.gov (United States)

    Atwood, S. W.; Kline, A. W., Jr.; Welter, N. E.

    1968-01-01

    Communication system combines range acquisition system and time measurement system for tracking high velocity aircraft and spacecraft. The range acquisition system uses a pseudonoise code to determine range and the time measurement system reduces uncontrolled phase variations in the demodulated signal.

  15. Quantification of LiDAR measurement uncertainty through propagation of errors due to sensor sub-systems and terrain morphology

    Science.gov (United States)

    Goulden, T.; Hopkinson, C.

    2013-12-01

    The quantification of LiDAR sensor measurement uncertainty is important for evaluating the quality of derived DEM products, compiling risk assessment of management decisions based from LiDAR information, and enhancing LiDAR mission planning capabilities. Current quality assurance estimates of LiDAR measurement uncertainty are limited to post-survey empirical assessments or vendor estimates from commercial literature. Empirical evidence can provide valuable information for the performance of the sensor in validated areas; however, it cannot characterize the spatial distribution of measurement uncertainty throughout the extensive coverage of typical LiDAR surveys. Vendor advertised error estimates are often restricted to strict and optimal survey conditions, resulting in idealized values. Numerical modeling of individual pulse uncertainty provides an alternative method for estimating LiDAR measurement uncertainty. LiDAR measurement uncertainty is theoretically assumed to fall into three distinct categories, 1) sensor sub-system errors, 2) terrain influences, and 3) vegetative influences. This research details the procedures for numerical modeling of measurement uncertainty from the sensor sub-system (GPS, IMU, laser scanner, laser ranger) and terrain influences. Results show that errors tend to increase as the laser scan angle, altitude or laser beam incidence angle increase. An experimental survey over a flat and paved runway site, performed with an Optech ALTM 3100 sensor, showed an increase in modeled vertical errors of 5 cm, at a nadir scan orientation, to 8 cm at scan edges; for an aircraft altitude of 1200 m and half scan angle of 15°. In a survey with the same sensor, at a highly sloped glacial basin site absent of vegetation, modeled vertical errors reached over 2 m. Validation of error models within the glacial environment, over three separate flight lines, respectively showed 100%, 85%, and 75% of elevation residuals fell below error predictions. Future
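The qualitative behavior reported, errors growing with altitude and scan angle, falls out of a first-order propagation of the range and angle errors through the scan geometry. A toy sketch over flat terrain with hypothetical sensor sigmas (not Optech ALTM 3100 specifications):

```python
import math

# First-order vertical-error propagation for a scanning laser altimeter over
# flat terrain: the return height depends on slant range r and scan angle
# theta as z = h - r*cos(theta), so to first order
#   sigma_z^2 ≈ (cos(theta) * sigma_r)^2 + (r * sin(theta) * sigma_theta)^2

SIGMA_R = 0.03                        # ranging error, m (hypothetical)
SIGMA_THETA = math.radians(0.01)      # combined scan/attitude angle error

def sigma_z(altitude, theta_deg):
    th = math.radians(theta_deg)
    r = altitude / math.cos(th)       # slant range to flat ground
    return math.sqrt((math.cos(th) * SIGMA_R) ** 2 +
                     (r * math.sin(th) * SIGMA_THETA) ** 2)

for theta in (0, 5, 10, 15):
    print(theta, round(sigma_z(1200.0, theta), 3))
# vertical error grows from nadir toward the swath edge, the same qualitative
# pattern as the nadir-to-edge increase described above
```

On sloped terrain the horizontal component of the pointing error also maps into height error through the slope, which is why the glacial-basin errors above are an order of magnitude larger.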

  16. Validation of an in-vivo proton beam range check method in an anthropomorphic pelvic phantom using dose measurements.

    Science.gov (United States)

    Bentefour, El H; Tang, Shikui; Cascio, Ethan W; Testa, Mauro; Samuel, Deepak; Prieels, Damien; Gottschalk, Bernard; Lu, Hsiao-Ming

    2015-04-01

    requirements, WEPL accuracy and minimum dose, necessary for clinical use, and thus its potential for in-vivo proton range verification. Further development is needed, namely, devising a workflow that takes into account the limits imposed by proton range mixing and the susceptibility of the comparison of measured and expected WEPLs to errors in the detector positions. The methods may also be used for in-vivo dosimetry and could benefit various proton therapy treatments.

  17. The Thirty Gigahertz Instrument Receiver for the QUIJOTE Experiment: Preliminary Polarization Measurements and Systematic-Error Analysis

    Directory of Open Access Journals (Sweden)

    Francisco J. Casas

    2015-08-01

    Full Text Available This paper presents preliminary polarization measurements and systematic-error characterization of the Thirty Gigahertz Instrument receiver developed for the QUIJOTE experiment. The instrument has been designed to measure the polarization of Cosmic Microwave Background radiation from the sky, obtaining the Q, U, and I Stokes parameters of the incoming signal simultaneously. Two kinds of linearly polarized input signals have been used as excitations in the polarimeter measurement tests in the laboratory; these show consistent results in terms of the Stokes parameters obtained. A measurement-based systematic-error characterization technique has been used in order to determine the possible sources of instrumental errors and to assist in the polarimeter calibration process.

  18. A Measurement Error Model for Physical Activity Level as Measured by a Questionnaire With Application to the 1999–2006 NHANES Questionnaire

    OpenAIRE

    Tooze, Janet A.; Troiano, Richard P.; Carroll, Raymond J.; Moshfegh, Alanna J.; Freedman, Laurence S

    2013-01-01

    Systematic investigations into the structure of measurement error of physical activity questionnaires are lacking. We propose a measurement error model for a physical activity questionnaire that uses physical activity level (the ratio of total energy expenditure to basal energy expenditure) to relate questionnaire-based reports of physical activity level to true physical activity levels. The 1999–2006 National Health and Nutrition Examination Survey physical activity questionnaire was adminis...

  19. Comparison of B-Spline Model and Iterated Conditional Modes (ICM) For Data With Measurement Error (ME)

    Science.gov (United States)

    Hartatik; Purnomo, Agus

    2017-06-01

    Direct observation results are often used to estimate regression models. However, observed data still need to be re-examined because of measurement error (ME) factors. In regression modeling, if the predictor X is a random variable observed with measurement error, the resulting computations become complicated enough to require computational support. Consider the following estimation problem: given data (Xi, Yi), the regression model is Y i = g(X i ) + ɛ i, where Xi is the i-th element of the predictor variable X and Yi is the i-th element of the response variable Y. In some designs the observed predictor values are fixed constants, but in general X is a random variable whose value is not fixed; in that case the model is called a regression model with measurement errors. The purpose of this research is to compare a nonparametric estimation approach, the B-spline method, in which the measurement errors are ignored, with the Iterated Conditional Modes (ICM) method for the regression model with measurement error.
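As a rough sketch of the naive B-spline approach described above (the variant that ignores measurement error), the following fits a least-squares cubic spline of Y on an error-prone predictor using SciPy. The data-generating function, noise levels, and knot placement are all illustrative assumptions, not details from the paper:

```python
import numpy as np
from scipy.interpolate import make_lsq_spline

rng = np.random.default_rng(1)
n, k = 500, 3                                # sample size and cubic spline order

x_true = np.sort(rng.uniform(0.0, 1.0, n))   # unobserved true predictor
x_obs = x_true + rng.normal(0.0, 0.05, n)    # predictor with measurement error
y = np.sin(2 * np.pi * x_true) + rng.normal(0.0, 0.1, n)  # g(x) = sin(2*pi*x)

# Naive B-spline regression of y on the error-prone x_obs: sort by x_obs and
# build a clamped knot vector with interior knots at the quartiles.
order = np.argsort(x_obs)
xs, ys = x_obs[order], y[order]
interior = np.quantile(xs, [0.25, 0.5, 0.75])
t = np.r_[(xs[0],) * (k + 1), interior, (xs[-1],) * (k + 1)]
spline = make_lsq_spline(xs, ys, t, k)

fitted = spline(xs)
print(fitted.shape, np.all(np.isfinite(fitted)))
```

Because the spline is fit against the noisy x_obs rather than x_true, the estimated curve is an attenuated version of g; methods such as ICM attempt to correct for exactly this.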

  20. Analysis of the sources of error in the determination of sound power based on sound intensity measurements

    DEFF Research Database (Denmark)

    Santillan, Arturo Orozco; Jacobsen, Finn

    2010-01-01

    the resulting measurement uncertainty. The purpose of this paper is to analyze the effect of the most common sources of error in sound power determination based on sound intensity measurements. In particular the influence of the scanning procedure used in approximating the surface integral of the intensity...

  1. A Brief Look at: Test Scores and the Standard Error of Measurement. E&R Report No. 10.13

    Science.gov (United States)

    Holdzkom, David; Sumner, Brian; McMillen, Brad

    2010-01-01

    In the context of standardized testing, the standard error of measurement (SEM) is a measure of the factors other than the student's actual knowledge of the tested material that may affect the student's test score. Such factors may include distractions in the testing environment, fatigue, hunger, or even luck. This means that a student's observed…
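The standard error of measurement is commonly computed from the score standard deviation and the test's reliability coefficient as SEM = SD·√(1 − r). A minimal sketch (the SD, reliability, and observed score below are made-up values):

```python
import math

def standard_error_of_measurement(sd: float, reliability: float) -> float:
    """SEM = score standard deviation times sqrt(1 - reliability coefficient)."""
    return sd * math.sqrt(1.0 - reliability)

# Hypothetical test: score SD of 10 points, reliability coefficient 0.91.
sem = standard_error_of_measurement(10.0, 0.91)
print(round(sem, 1))  # 3.0

# A rough 68% band around an observed score of 400: one SEM either side.
low, high = 400 - sem, 400 + sem
```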

  2. Micro-Viscometer for Measuring Shear-Varying Blood Viscosity over a Wide-Ranging Shear Rate

    Science.gov (United States)

    Kim, Byung Jun; Lee, Seung Yeob; Jee, Solkeun; Atajanov, Arslan; Yang, Sung

    2017-01-01

    In this study, a micro-viscometer is developed for measuring shear-varying blood viscosity over a wide-ranging shear rate. The micro-viscometer consists of 10 microfluidic channel arrays, each of which has a different micro-channel width. The proposed design enables the retrieval of 10 different shear rates from a single flow rate, thereby enabling the measurement of shear-varying blood viscosity under a fixed flow rate condition. For this purpose, an optimal design that guarantees accurate viscosity measurement is selected from a parametric study. The functionality of the micro-viscometer is verified by both numerical and experimental studies. The proposed micro-viscometer shows relative errors of 6.8% (numerical) and 5.3% (experimental) when compared to the result from a standard rotational viscometer. Moreover, a reliability test is performed by repeated measurement (N = 7), and the result shows 2.69 ± 2.19% for the mean relative error. Accurate viscosity measurements are performed on blood samples with variations in the hematocrit (35%, 45%, and 55%), which significantly influences blood viscosity. Since blood viscosity is correlated with various physical parameters of the blood, the micro-viscometer is anticipated to be a significant advancement toward the realization of blood on a chip. PMID:28632151

  3. Local measurement of error field using naturally rotating tearing mode dynamics in EXTRAP T2R

    CERN Document Server

    Sweeney, R M; Brunsell, P; Fridström, R; Volpe, F A

    2016-01-01

    An error field (EF) detection technique using the amplitude modulation of a naturally rotating tearing mode (TM) is developed and validated in the EXTRAP T2R reversed field pinch. The technique was used to identify intrinsic EFs of $m/n = 1/-12$, where $m$ and $n$ are the poloidal and toroidal mode numbers. The effect of the EF and of a resonant magnetic perturbation (RMP) on the TM, in particular on amplitude modulation, is modeled with a first-order solution of the Modified Rutherford Equation. In the experiment, the TM amplitude is measured as a function of the toroidal angle as the TM rotates rapidly in the presence of an unknown EF and a known, deliberately applied RMP. The RMP amplitude is fixed while the toroidal phase is varied from one discharge to the other, completing a full toroidal scan. Using three such scans with different RMP amplitudes, the EF amplitude and phase are inferred from the phases at which the TM amplitude maximizes. The estimated EF amplitude is consistent with other estimates (e....

  4. Local measurement of error field using naturally rotating tearing mode dynamics in EXTRAP T2R

    Science.gov (United States)

    Sweeney, R. M.; Frassinetti, L.; Brunsell, P.; Fridström, R.; Volpe, F. A.

    2016-12-01

    An error field (EF) detection technique using the amplitude modulation of a naturally rotating tearing mode (TM) is developed and validated in the EXTRAP T2R reversed field pinch. The technique was used to identify intrinsic EFs of m/n  =  1/-12, where m and n are the poloidal and toroidal mode numbers. The effect of the EF and of a resonant magnetic perturbation (RMP) on the TM, in particular on amplitude modulation, is modeled with a first-order solution of the modified Rutherford equation. In the experiment, the TM amplitude is measured as a function of the toroidal angle as the TM rotates rapidly in the presence of an unknown EF and a known, deliberately applied RMP. The RMP amplitude is fixed while the toroidal phase is varied from one discharge to the other, completing a full toroidal scan. Using three such scans with different RMP amplitudes, the EF amplitude and phase are inferred from the phases at which the TM amplitude maximizes. The estimated EF amplitude is consistent with other estimates (e.g. based on the best EF-cancelling RMP, resulting in the fastest TM rotation). A passive variant of this technique is also presented, where no RMPs are applied, and the EF phase is deduced.

  5. Reliability of two goniometric methods of measuring active inversion and eversion range of motion at the ankle

    Directory of Open Access Journals (Sweden)

    Refshauge Kathryn M

    2006-07-01

    Full Text Available Abstract Background Active inversion and eversion ankle range of motion (ROM) is widely used to evaluate treatment effect, however the error associated with the available measurement protocols is unknown. This study aimed to establish the reliability of goniometry as used in clinical practice. Methods 30 subjects (60 ankles) with a wide variety of ankle conditions participated in this study. Three observers, with different skill levels, measured active inversion and eversion ankle ROM three times on each of two days. Measurements were performed with subjects positioned (a) sitting and (b) prone. Intra-class correlation coefficients (ICC[2,1]) were calculated to determine intra- and inter-observer reliability. Results Within session intra-observer reliability ranged from ICC[2,1] 0.82 to 0.96 and between session intra-observer reliability ranged from ICC[2,1] 0.42 to 0.80. Reliability was similar for the sitting and the prone positions, however, between sessions, inversion measurements were more reliable than eversion measurements. Within session inter-observer measurements in sitting were more reliable than in prone and inversion measurements were more reliable than eversion measurements. Conclusion Our findings show that ankle inversion and eversion ROM can be measured with high to very high reliability by the same observer within sessions and with low to moderate reliability by different observers within a session. The reliability of measures made by the same observer between sessions varies depending on the direction, being low to moderate for eversion measurements and moderate to high for inversion measurements in both positions.

  6. Reliability of two goniometric methods of measuring active inversion and eversion range of motion at the ankle.

    Science.gov (United States)

    Menadue, Collette; Raymond, Jacqueline; Kilbreath, Sharon L; Refshauge, Kathryn M; Adams, Roger

    2006-07-28

    Active inversion and eversion ankle range of motion (ROM) is widely used to evaluate treatment effect, however the error associated with the available measurement protocols is unknown. This study aimed to establish the reliability of goniometry as used in clinical practice. 30 subjects (60 ankles) with a wide variety of ankle conditions participated in this study. Three observers, with different skill levels, measured active inversion and eversion ankle ROM three times on each of two days. Measurements were performed with subjects positioned (a) sitting and (b) prone. Intra-class correlation coefficients (ICC[2,1]) were calculated to determine intra- and inter-observer reliability. Within session intra-observer reliability ranged from ICC[2,1] 0.82 to 0.96 and between session intra-observer reliability ranged from ICC[2,1] 0.42 to 0.80. Reliability was similar for the sitting and the prone positions, however, between sessions, inversion measurements were more reliable than eversion measurements. Within session inter-observer measurements in sitting were more reliable than in prone and inversion measurements were more reliable than eversion measurements. Our findings show that ankle inversion and eversion ROM can be measured with high to very high reliability by the same observer within sessions and with low to moderate reliability by different observers within a session. The reliability of measures made by the same observer between sessions varies depending on the direction, being low to moderate for eversion measurements and moderate to high for inversion measurements in both positions.
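The ICC[2,1] statistic used in the goniometry studies above can be computed from the two-way ANOVA mean squares in the Shrout-Fleiss form, (MSR − MSE)/(MSR + (k − 1)MSE + k(MSC − MSE)/n). A small self-contained sketch, with a made-up ratings matrix:

```python
import numpy as np

def icc_2_1(data: np.ndarray) -> float:
    """ICC(2,1): two-way random effects, absolute agreement, single rater.

    `data` is an (n subjects) x (k raters) matrix of scores.
    """
    n, k = data.shape
    grand = data.mean()
    row_means = data.mean(axis=1)
    col_means = data.mean(axis=0)

    # Mean squares from the two-way ANOVA decomposition
    msr = k * np.sum((row_means - grand) ** 2) / (n - 1)   # between subjects
    msc = n * np.sum((col_means - grand) ** 2) / (k - 1)   # between raters
    sse = np.sum((data - row_means[:, None] - col_means[None, :] + grand) ** 2)
    mse = sse / ((n - 1) * (k - 1))                        # residual

    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# Perfect agreement between two raters gives ICC(2,1) = 1.
perfect = np.array([[1.0, 1.0], [2.0, 2.0], [3.0, 3.0]])
print(icc_2_1(perfect))  # 1.0
```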

  7. Reliability of the spin-T cervical goniometer in measuring cervical range of motion in an asymptomatic Indian population.

    Science.gov (United States)

    Agarwal, Shabnam; Allison, Garry T; Singer, Kevin P

    2005-09-01

    To examine the intratester reliability of the Spin-T goniometer, a cervical range of motion device, in a normal Indian population. Subjects comprised 30 healthy adults with mean age of 34 years (range, 18-65 years). The subjects were stabilized in the sitting position and the Spin-T goniometer mounted on the head of the subject. The study design was a within-subject repeated intratester reliability trial conducted for cervical range of motion in 6 directions of movement. Three measurements were taken in each direction (flexion, extension, lateral flexion, and lateral rotation) per participant. Reliability coefficients, intraclass correlation coefficients, and 95% confidence intervals were derived from repeated-measures analysis of variance (ANOVA). Where differences in ANOVA were detected, a paired t test was conducted and the typical error values and coefficient of variance were calculated. All repeated measures showed high intraclass correlation coefficients (all >0.96). The Spin-T goniometer proved to be a reliable measuring instrument for cervical range of movement in an Indian population. The use of a laser pointer fixed to the instrument ensured a consistent neutral start position.

  8. Measurement-based analysis of error latency. [in computer operating system

    Science.gov (United States)

    Chillarege, Ram; Iyer, Ravishankar K.

    1987-01-01

    This paper demonstrates a practical methodology for the study of error latency under a real workload. The method is illustrated with sampled data on the physical memory activity, gathered by hardware instrumentation on a VAX 11/780 during the normal workload cycle of the installation. These data are used to simulate fault occurrence and to reconstruct the error discovery process in the system. The technique provides a means to study the system under different workloads and for multiple days. An approach to determine the percentage of undiscovered errors is also developed and a verification of the entire methodology is performed. This study finds that the mean error latency, in the memory containing the operating system, varies by a factor of 10 to 1 (in hours) between the low and high workloads. It is found that of all errors occurring within a day, 70 percent are detected in the same day, 82 percent within the following day, and 91 percent within the third day. The increase in failure rate due to latency is not so much a function of remaining errors but is dependent on whether or not there is a latent error.
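The fault-injection idea described above can be caricatured with a tiny Monte Carlo: an error occurs, and each subsequent day there is some workload-dependent probability that the affected memory is touched and the error is discovered. The daily discovery probabilities below are invented for illustration and are not the values measured in the paper:

```python
import random

random.seed(42)

DAYS = 30
ACCESS_PROB = {"low": 0.3, "high": 0.8}   # assumed daily discovery probabilities

def simulate_latency(workload: str, n_errors: int = 10_000) -> list[float]:
    """Fraction of simulated errors discovered within 1, 2 and 3 days."""
    p = ACCESS_PROB[workload]
    latencies = []
    for _ in range(n_errors):
        days = 1
        # Each day the error survives undiscovered with probability 1 - p
        while random.random() > p and days < DAYS:
            days += 1
        latencies.append(days)
    return [sum(l <= d for l in latencies) / n_errors for d in (1, 2, 3)]

within = simulate_latency("high")
print(within)  # roughly [0.8, 0.96, 0.99] under these assumed probabilities
```

Even this crude geometric model reproduces the qualitative finding that discovery fractions climb steeply over the first few days and that latency depends strongly on workload.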

  9. Weightbearing and nonweightbearing ankle dorsiflexion range of motion: are we measuring the same thing?

    Science.gov (United States)

    Rabin, Alon; Kozol, Zvi

    2012-01-01

    Ankle dorsiflexion range of motion has been measured in weightbearing and nonweightbearing conditions. The different measurement conditions may contribute to inconsistent conclusions regarding the role of ankle dorsiflexion in several pathologic conditions. The purpose of this study was to examine the relationship between ankle dorsiflexion range of motion as measured in weightbearing and nonweightbearing conditions. We compared ankle dorsiflexion range of motion as measured in a weightbearing versus a nonweightbearing position in 43 healthy volunteers. Measurements were taken separately by two examiners. Weightbearing and nonweightbearing ankle dorsiflexion measurements produced significantly different results and showed only a moderate correlation, suggesting that these two measurements should not be used interchangeably as measures of ankle dorsiflexion range of motion.

  10. A permutation test to analyse systematic bias and random measurement errors of medical devices via boosting location and scale models.

    Science.gov (United States)

    Mayr, Andreas; Schmid, Matthias; Pfahlberg, Annette; Uter, Wolfgang; Gefeller, Olaf

    2017-06-01

    Measurement errors of medico-technical devices can be separated into systematic bias and random error. We propose a new method to address both simultaneously via generalized additive models for location, scale and shape (GAMLSS) in combination with permutation tests. More precisely, we extend a recently proposed boosting algorithm for GAMLSS to provide a test procedure to analyse potential device effects on the measurements. We carried out a large-scale simulation study to provide empirical evidence that our method is able to identify possible sources of systematic bias as well as random error under different conditions. Finally, we apply our approach to compare measurements of skin pigmentation from two different devices in an epidemiological study.

  11. Impact of shrinking measurement error budgets on qualification metrology sampling and cost

    Science.gov (United States)

    Sendelbach, Matthew; Sarig, Niv; Wakamoto, Koichi; Kim, Hyang Kyun (Helen); Isbester, Paul; Asano, Masafumi; Matsuki, Kazuto; Vaid, Alok; Osorio, Carmen; Archie, Chas

    2014-04-01

    When designing an experiment to assess the accuracy of a tool as compared to a reference tool, semiconductor metrologists are often confronted with the situation that they must decide on the sampling strategy before the measurements begin. This decision is usually based largely on the previous experience of the metrologist and the available resources, and not on the statistics that are needed to achieve acceptable confidence limits on the final result. This paper shows a solution to this problem, called inverse TMU analysis, by presenting statistically-based equations that allow the user to estimate the needed sampling after providing appropriate inputs, allowing him to make important "risk vs. reward" sampling, cost, and equipment decisions. Application examples using experimental data from scatterometry and critical dimension scanning electron microscope (CD-SEM) tools are used first to demonstrate how the inverse TMU analysis methodology can be used to make intelligent sampling decisions before the start of the experiment, and then to reveal why low sampling can lead to unstable and misleading results. A model is developed that can help an experimenter minimize the costs associated both with increased sampling and with making wrong decisions caused by insufficient sampling. A second cost model is described that reveals the inadequacy of current TEM (Transmission Electron Microscopy) sampling practices and the enormous costs associated with TEM sampling that is needed to provide reasonable levels of certainty in the result. These high costs reach into the tens of millions of dollars for TEM reference metrology as the measurement error budgets reach angstrom levels. The paper concludes with strategies on how to manage and mitigate these costs.

  12. Reliability of digital compass goniometer in knee joint range of motion measurement.

    Science.gov (United States)

    Yaikwawongs, Nammond; Limpaphayom, Noppachart; Wilairatana, Vajara

    2009-04-01

    To compare the reliability of knee joint range of motion measurement using a digital compass goniometer combined with an inclinometer against standard range of motion measurement from roentgenographic pictures. Range of flexion and extension of the knee joint in volunteer participants was measured by the newly developed digital compass goniometer combined with an inclinometer (DCG). The results were compared with the range of knee joint motion obtained from standard roentgenographic pictures using the intraclass correlation coefficient. Range of motion of the knee joint measured by the DCG correlated very well with the data obtained from the standard knee roentgenographic picture; the intraclass correlation coefficient was 0.973. The digital compass goniometer is a reliable tool for measuring knee joint range of motion in the flexion-extension plane.

  13. Effects of Systematic and Random Errors on the Retrieval of Particle Microphysical Properties from Multiwavelength Lidar Measurements Using Inversion with Regularization

    Science.gov (United States)

    Ramirez, Daniel Perez; Whiteman, David N.; Veselovskii, Igor; Kolgotin, Alexei; Korenskiy, Michael; Alados-Arboledas, Lucas

    2013-01-01

    In this work we study the effects of systematic and random errors on the inversion of multiwavelength (MW) lidar data using the well-known regularization technique to obtain vertically resolved aerosol microphysical properties. The software implementation used here was developed at the Physics Instrumentation Center (PIC) in Troitsk (Russia) in conjunction with the NASA/Goddard Space Flight Center. Its applicability to Raman lidar systems based on backscattering measurements at three wavelengths (355, 532 and 1064 nm) and extinction measurements at two wavelengths (355 and 532 nm) has been demonstrated widely. The systematic error sensitivity is quantified by first determining the retrieved parameters for a given set of optical input data consistent with three different sets of aerosol physical parameters. Then each optical input is perturbed by varying amounts and the inversion is repeated. Using bimodal aerosol size distributions, we find a generally linear dependence of the retrieved errors in the microphysical properties on the induced systematic errors in the optical data. For the retrievals of effective radius, number/surface/volume concentrations and fine-mode radius and volume, we find that these results are not significantly affected by the range of the constraints used in inversions. But significant sensitivity was found to the allowed range of the imaginary part of the particle refractive index. Our results also indicate that there exists an additive property for the deviations induced by the biases present in the individual optical data. This property permits the results here to be used to predict deviations in retrieved parameters when multiple input optical data are biased simultaneously as well as to study the influence of random errors on the retrievals. The above results are applied to questions regarding lidar design, in particular for the spaceborne multiwavelength lidar under consideration for the upcoming ACE mission.

  14. Systematic Error Study for ALICE charged-jet v2 Measurement

    Energy Technology Data Exchange (ETDEWEB)

    Heinz, M. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Soltz, R. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2017-07-18

    We study the treatment of systematic errors in the determination of v2 for charged jets in √sNN = 2.76 TeV Pb-Pb collisions by the ALICE Collaboration. Working with the reported values and errors for the 0-5% centrality data, we evaluate the χ2 according to the formulas given for the statistical and systematic errors, where the latter are separated into correlated and shape contributions. We reproduce both the χ2 and p-values relative to a null (zero) result. We then re-cast the systematic errors into an equivalent covariance matrix and obtain identical results, demonstrating that the two methods are equivalent.
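The covariance re-casting described above can be sketched in a few lines: statistical errors contribute a diagonal term, a fully correlated systematic error contributes a rank-one term, and χ2 against a null result is then a single quadratic form. The measurement values and uncertainties below are invented placeholders, not the ALICE data:

```python
import numpy as np

# Hypothetical v2 points with statistical and correlated systematic errors
v2 = np.array([0.05, 0.04, 0.03])
stat = np.array([0.010, 0.012, 0.015])
sys_corr = np.array([0.005, 0.004, 0.003])  # treated as 100% correlated

# Equivalent covariance matrix: diagonal statistical part plus a rank-one
# block for the fully correlated systematic component.
cov = np.diag(stat**2) + np.outer(sys_corr, sys_corr)

# chi^2 of the data relative to a null (zero) result
diff = v2 - 0.0
chi2 = diff @ np.linalg.solve(cov, diff)

# Sanity check: with no systematic component the matrix form reduces to the
# familiar sum of squared pulls.
chi2_stat_only = diff @ np.linalg.solve(np.diag(stat**2), diff)
print(round(chi2, 2), round(chi2_stat_only, 2))
```

Adding the positive-semidefinite correlated block can only loosen the constraint, so the full-covariance χ2 never exceeds the statistics-only value.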

  15. Measuring the relationship between interruptions, multitasking and prescribing errors in an emergency department: a study protocol.

    Science.gov (United States)

    Raban, Magdalena Z; Walter, Scott R; Douglas, Heather E; Strumpman, Dana; Mackenzie, John; Westbrook, Johanna I

    2015-10-13

    Interruptions and multitasking are frequent in clinical settings, and have been shown in the cognitive psychology literature to affect performance, increasing the risk of error. However, comparatively less is known about their impact on errors in clinical work. This study will assess the relationship between prescribing errors, interruptions and multitasking in an emergency department (ED) using direct observations and chart review. The study will be conducted in an ED of a 440-bed teaching hospital in Sydney, Australia. Doctors will be shadowed at close proximity by observers for 2 h time intervals while they are working on day shift (between 0800 and 1800). Time-stamped data on tasks, interruptions and multitasking will be recorded on a handheld computer using the validated Work Observation Method by Activity Timing (WOMBAT) tool. The prompts leading to interruptions and multitasking will also be recorded. When doctors prescribe medication, the type of chart and chart sections written on, along with the patient's medical record number (MRN), will be recorded. A clinical pharmacist will access patient records and assess the medication orders for prescribing errors. The prescribing error rate will be calculated per prescribing task and is defined as the number of errors divided by the number of medication orders written during the prescribing task. The association between prescribing error rates and rates of prompts, interruptions and multitasking will be assessed using statistical modelling. Ethics approval has been obtained from the hospital research ethics committee. Eligible doctors will be provided with written information sheets and written consent will be obtained if they agree to participate. Doctor details and MRNs will be kept separate from the data on prescribing errors, and will not appear in the final data set for analysis. Study results will be disseminated in publications and feedback to the ED.

  16. Low relative error in consumer-grade GPS units make them ideal for measuring small-scale animal movement patterns.

    Science.gov (United States)

    Breed, Greg A; Severns, Paul M

    2015-01-01

    Consumer-grade GPS units are a staple of modern field ecology, but the relatively large error radii reported by manufacturers (up to 10 m) ostensibly preclude their utility in measuring fine-scale movement of small animals such as insects. Here we demonstrate that for data collected at fine spatio-temporal scales, these devices can produce exceptionally accurate data on step length and movement patterns of small animals. With an understanding of the properties of GPS error and how it arises, it is possible, using a simple field protocol, to use consumer-grade GPS units to collect step-length data for the movement of small animals that introduces a median error as small as 11 cm. These small error rates were measured in controlled observations of real butterfly movement. Similar conclusions were reached using a ground-truth test track prepared with a field tape and compass and subsequently measured 20 times using the same methodology as the butterfly tracking. Median error in the ground-truth track was slightly higher than in the field data, mostly between 20 and 30 cm, but even for the smallest ground-truth step (70 cm), this is still a signal-to-noise ratio of 3:1, and for steps of 3 m or more, the ratio is greater than 10:1. Such small errors relative to the movements being measured make these inexpensive units useful for measuring insect and other small animal movements on small to intermediate scales with budgets orders of magnitude lower than the survey-grade units used in past studies. As an additional advantage, these units are simpler to operate, and insect or other small animal trackways can be collected more quickly than with either survey-grade units or more traditional ruler/grid approaches.
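Converting consecutive GPS fixes into step lengths, and comparing them against an assumed median error, can be sketched with a standard haversine distance. The coordinates and the 0.3 m error figure below are illustrative, not the paper's data:

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two lat/lon points (degrees)."""
    r = 6_371_000.0  # mean Earth radius in metres
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

# Hypothetical consecutive GPS fixes along a butterfly trackway
track = [(44.00000, -123.00000), (44.00002, -123.00000), (44.00005, -123.00001)]
steps = [haversine_m(*track[i], *track[i + 1]) for i in range(len(track) - 1)]

# Signal-to-noise ratio of each step against an assumed 0.3 m median GPS error
snr = [s / 0.3 for s in steps]
print([round(s, 2) for s in steps])
```

Steps of a couple of metres against a sub-half-metre error give the multi-to-one signal-to-noise ratios the abstract describes.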

  17. A study on fatigue measurement of operators for human error prevention in NPPs

    Energy Technology Data Exchange (ETDEWEB)

    Ju, Oh Yeon; Il, Jang Tong; Meiling, Luo; Hee, Lee Young [KAERI, Daejeon (Korea, Republic of)

    2012-10-15

    The identification and analysis of individual factors of operators, which are among the various causes of adverse effects on human performance, is not easy in NPPs. Individual factors for operators include work type (including shift work), environment, personality, qualification, training, education, cognition, fatigue, job stress, workload, etc. Research at the Finnish Institute of Occupational Health (FIOH) reported that 'burn out (extreme fatigue)' is related to alcohol-dependent habits and must be dealt with using a stress management program. The USNRC (U.S. Nuclear Regulatory Commission) developed FFD (Fitness for Duty) for improving task efficiency and preventing human errors. 'Managing Fatigue' in 10 CFR 26 is presented as a requirement to control operator fatigue in NPPs. The committee explained that excessive fatigue is due to stressful work environments, working hours, shifts, sleep disorders, and unstable circadian rhythms. In addition, the International Labour Organization (ILO) developed and suggested a checklist to manage fatigue and job stress. In Korea, a systematic evaluation approach is presented in the Final Safety Analysis Report (FSAR) Chapter 18, Human Factors, in the licensing process. However, it is focused mostly on interface design such as the HMI (Human Machine Interface), not on individual factors. In particular, because Korea is in the process of exporting an NPP to the UAE, the development and establishment of a fatigue management technique is important and urgent in order to present the technical standard and FFD criteria to the UAE. It is also anticipated that the domestic regulatory body will apply the FFD program as a regulatory requirement, so preparation for that situation is required. In this paper, advanced research is investigated to find fatigue measurement and evaluation methods for operators in high-reliability industries. This study also reviews the NRC report and discusses the causal factors and

  18. Solving Inverse Radiation Transport Problems with Multi-Sensor Data in the Presence of Correlated Measurement and Modeling Errors

    Energy Technology Data Exchange (ETDEWEB)

    Thomas, Edward V. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Stork, Christopher L. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Mattingly, John K. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2015-07-01

    Inverse radiation transport focuses on identifying the configuration of an unknown radiation source given its observed radiation signatures. The inverse problem is traditionally solved by finding the set of transport model parameter values that minimizes a weighted sum of the squared differences by channel between the observed signature and the signature predicted by the hypothesized model parameters. The weights are inversely proportional to the sum of the variances of the measurement and model errors at a given channel. The traditional implicit (often inaccurate) assumption is that the errors (differences between the modeled and observed radiation signatures) are independent across channels. Here, an alternative method that accounts for correlated errors between channels is described and illustrated using an inverse problem based on the combination of gamma and neutron multiplicity counting measurements.

  19. Fitting a Bivariate Measurement Error Model for Episodically Consumed Dietary Components

    KAUST Repository

    Zhang, Saijuan

    2011-01-06

    There has been great public health interest in estimating usual, i.e., long-term average, intake of episodically consumed dietary components that are not consumed daily by everyone, e.g., fish, red meat and whole grains. Short-term measurements of episodically consumed dietary components have zero-inflated skewed distributions. So-called two-part models have been developed for such data in order to correct for measurement error due to within-person variation and to estimate the distribution of usual intake of the dietary component in the univariate case. However, there is arguably much greater public health interest in the usual intake of an episodically consumed dietary component adjusted for energy (caloric) intake, e.g., ounces of whole grains per 1000 kilo-calories, which reflects usual dietary composition and adjusts for different total amounts of caloric intake. Because of this public health interest, it is important to have models to fit such data, and it is important that the model-fitting methods can be applied to all episodically consumed dietary components. We have recently developed a nonlinear mixed effects model (Kipnis, et al., 2010), and have fit it by maximum likelihood using nonlinear mixed effects programs and methodology (the SAS NLMIXED procedure). Maximum likelihood fitting of such a nonlinear mixed model is generally slow because of 3-dimensional adaptive Gaussian quadrature, and there are times when the programs either fail to converge or converge to models with a singular covariance matrix. For these reasons, we develop a Markov chain Monte Carlo (MCMC) computation for fitting this model, which allows for both frequentist and Bayesian inference. There are technical challenges to developing this solution because one of the covariance matrices in the model is patterned.
Our main application is to the National Institutes of Health (NIH)-AARP Diet and Health Study, where we illustrate our methods for modeling the energy-adjusted usual intake of fish and whole

  20. Spatio-Temporal Error Sources Analysis and Accuracy Improvement in Landsat 8 Image Ground Displacement Measurements

    Directory of Open Access Journals (Sweden)

    Chao Ding

    2016-11-01

    Full Text Available Because of the advantages of low cost, large coverage and short revisit cycle, Landsat 8 images have been widely applied to monitor earth surface movements. However, there are few systematic studies considering the error source characteristics or the improvement of the accuracy of the deformation field obtained from Landsat 8 imagery. In this study, we utilize the 2013 Mw 7.7 Balochistan, Pakistan earthquake to analyze spatio-temporal error characteristics and elaborate how to mitigate error sources in the deformation field extracted from multi-temporal Landsat 8 images. We found that the stripe artifacts and the topographic shadowing artifacts are two major error components in the deformation field, which currently lack an overall understanding and an effective mitigation strategy. For the stripe artifacts, we propose a small spatial baseline (<200 m) method to avoid their effect on the deformation field. We also propose a small radiometric baseline method to reduce the topographic shadowing artifacts and radiometric decorrelation noise. The performance and accuracy evaluations show that these two methods are effective in improving the precision of the deformation field. This study provides the possibility to detect, with higher precision, subtle ground movement caused by earthquakes, melting glaciers, landslides, etc., with Landsat 8 images. It is also a good reference for error source analysis and correction in deformation fields extracted from other optical satellite images.

  1. Program to perform research on use of lidar for range resolved turbulence measurements

    Science.gov (United States)

    Moskowitz, Warren P.; Garner, Richard C.

    1989-11-01

    The design of a lidar system capable of remotely measuring range resolved atmospheric turbulence is presented. The connection between the measured quantities and the accepted turbulence strength parameter Cn^2 is developed theoretically. Simulations of an operating system were made, and the results provide a measure of system capability. A typical value for Cn^2 of 10^-16 m^(-2/3) at a 3 km vertical range is measurable with a 200 m range resolution.

  2. Absorbed in the task : Personality measures predict engagement during task performance as tracked by error negativity and asymmetrical frontal activity

    NARCIS (Netherlands)

    Tops, Mattie; Boksem, Maarten A. S.

    2010-01-01

    We hypothesized that interactions between traits and context predict task engagement, as measured by the amplitude of the error-related negativity (ERN), performance, and relative frontal activity asymmetry (RFA). In Study 1, we found that drive for reward, absorption, and constraint independently

  3. Analysis of the Largest Normalized Residual Test Robustness for Measurements Gross Errors Processing in the WLS State Estimator

    Directory of Open Access Journals (Sweden)

    Breno Carvalho

    2013-10-01

    Full Text Available The purpose of this paper is to implement a computational program to estimate the states (complex nodal voltages) of a power system and to show that the largest normalized residual (LNR) test fails many times. The chosen solution method was Weighted Least Squares (WLS). Once the states are estimated, a gross error analysis is made with the purpose of detecting and identifying the measurements that may contain gross errors (GEs), which can interfere with the estimated states, leading the process to an erroneous state estimation. If a measurement is identified as containing error, it is discarded from the measurement set and the whole process is repeated until all measurements are within an acceptable error threshold. To validate the implemented software, several computer simulations were performed on the IEEE 6-bus and 14-bus test systems, where satisfactory results were obtained. Another purpose is to show that even a widespread method such as the LNR test is subject to serious conceptual flaws, probably due to a lack of attention to the mathematical foundations of the methodology. The paper highlights the need for continuous improvement of the employed techniques and for a critical view, on the part of researchers, of these types of failures.
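The WLS estimate and the LNR test can be sketched on a toy linear measurement model. This is not the authors' implementation (power-system state estimation uses nonlinear measurement functions and iterative solvers), but the residual-normalization idea is the same; the matrix H, the injected gross error, and all numbers below are illustrative:

```python
import numpy as np

# Toy linear "state estimation": z = H x + e, solved by weighted least squares
H = np.array([[1.0,  0.0],
              [0.0,  1.0],
              [1.0,  1.0],
              [1.0, -1.0]])
x_true = np.array([1.0, 0.5])
sigma = np.array([0.01, 0.01, 0.02, 0.02])   # measurement standard deviations
R = np.diag(sigma**2)

rng = np.random.default_rng(1)
z = H @ x_true + rng.normal(0.0, sigma)
z[2] += 0.5                                   # inject a gross error into measurement 3

W = np.linalg.inv(R)
G = H.T @ W @ H                               # gain matrix
x_hat = np.linalg.solve(G, H.T @ W @ z)       # WLS state estimate

# Largest normalized residual (LNR) test: normalize by residual covariance
r = z - H @ x_hat
S = R - H @ np.linalg.solve(G, H.T)           # residual covariance matrix
r_norm = np.abs(r) / np.sqrt(np.diag(S))
suspect = int(np.argmax(r_norm))
print(suspect)                                # index of the measurement flagged as a GE
```

After the flagged measurement is discarded, the estimate is recomputed; the abstract's point is that this loop can misidentify the bad measurement in less benign configurations than this one.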

  4. The Use of PCs, Smartphones, and Tablets in a Probability-Based Panel Survey : Effects on Survey Measurement Error

    NARCIS (Netherlands)

    Lugtig, Peter; Toepoel, Vera

    2016-01-01

    Respondents in an Internet panel survey can often choose which device they use to complete questionnaires: a traditional PC, laptop, tablet computer, or a smartphone. Because all these devices have different screen sizes and modes of data entry, measurement errors may differ between devices. Using

  5. Neck range of motion measurements using a new three-dimensional motion analysis system: validity and repeatability.

    Science.gov (United States)

    Inokuchi, Haruhi; Tojima, Michio; Mano, Hiroshi; Ishikawa, Yuki; Ogata, Naoshi; Haga, Nobuhiko

    2015-12-01

    Neck movement is important for many activities of daily living (ADL). Neck disorders, such as cervical spondylosis and whiplash, can limit neck movement and ADL. The cervical range of motion (CROM) device has been recently used to measure neck range of motion (ROM); however, this measurement includes trunk motion, and therefore does not represent a pure neck ROM measurement. The authors aimed to develop a new method to establish pure neck ROM measurements during flexion, extension, lateral bending, and rotation using a three-dimensional motion analysis system, VICON. Twelve healthy participants were recruited and neck ROMs during flexion, extension, lateral bending, and rotation were measured using VICON and the CROM device. Test-retest repeatability was assessed using intraclass correlation coefficients (ICCs), standard error of measurement (SEM), and minimal detectable change (MDC). Validity between the two measurements was evaluated using a determination coefficient and Pearson's correlation coefficient. ICCs of neck ROM measured using VICON and the CROM device were all at substantial or almost perfect levels [VICON: ICC(1,2) = 0.786-0.962, the CROM device: ICC(1,2) = 0.736-0.950]. Both SEMs and MDCs were low in all measurement directions (VICON: SEM = 1.3°-4.5°, MDC = 3.6°-12.5°; the CROM device: SEM = 2.2°-3.9°, MDC = 6.1°-10.7°). Determination coefficients (R(2)s) and Pearson's correlation coefficients (rs) between the two measurement methods were high (R(2) = 0.607-0.745, r = 0.779-0.863). VICON is a useful system to measure neck ROMs and evaluate the efficacy of interventions, such as surgery or physiotherapeutic exercise.
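The reliability statistics reported above are connected by standard formulas: SEM is derived from the between-subject SD and the ICC, and the MDC at the 95% level follows from the SEM. A minimal sketch with hypothetical values (not the study's data):

```python
import math

def sem_from_icc(sd, icc):
    """Standard error of measurement from the between-subject SD and the ICC."""
    return sd * math.sqrt(1.0 - icc)

def mdc95(sem):
    """Minimal detectable change at the 95% level for a test-retest design."""
    return 1.96 * math.sqrt(2.0) * sem

sd_flexion = 6.0       # hypothetical between-subject SD, degrees
icc_flexion = 0.95     # hypothetical test-retest ICC
sem = sem_from_icc(sd_flexion, icc_flexion)
print(round(sem, 2), round(mdc95(sem), 2))   # SEM and MDC in degrees
```

With these illustrative inputs the SEM and MDC land near the lower end of the ranges reported in the abstract, which is the expected behavior for a high ICC.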

  6. On the importance of Task 1 and error performance measures in PRP dual-task studies

    Directory of Open Access Journals (Sweden)

    Tilo eStrobach

    2015-04-01

    Full Text Available The Psychological Refractory Period (PRP) paradigm is a dominant research tool in the literature on dual-task performance. In this paradigm a first and second component task (i.e., Task 1 and Task 2) are presented with variable stimulus onset asynchronies (SOAs) and priority to perform Task 1. The main indicator of dual-task impairment in PRP situations is an increasing Task 2-RT with decreasing SOAs. This impairment is typically explained with some task components being processed strictly sequentially in the context of the prominent central bottleneck theory. This assumption could implicitly suggest that processes of Task 1 are unaffected by Task 2 and bottleneck processing, i.e., decreasing SOAs do not increase RTs and error rates of the first task. The aim of the present review is to assess whether PRP dual-task studies included both RT and error data presentations and statistical analyses, and whether studies including both data types (i.e., RTs and error rates) show data consistent with this assumption (i.e., decreasing SOAs and unaffected RTs and/or error rates in Task 1). This review demonstrates that, in contrast to RT presentations and analyses, error data is underrepresented in a substantial number of studies. Furthermore, a substantial number of studies with RT and error data showed a statistically significant impairment of Task 1 performance with decreasing SOA. Thus, these studies produced data that is not primarily consistent with the strong assumption that processes of Task 1 are unaffected by Task 2 and bottleneck processing in the context of PRP dual-task situations; this calls for a more careful report and analysis of Task 1 performance in PRP studies and for a more careful consideration of theories proposing additions to the bottleneck assumption that are sufficiently general to explain Task 1 and Task 2 effects.

  7. On the importance of Task 1 and error performance measures in PRP dual-task studies.

    Science.gov (United States)

    Strobach, Tilo; Schütz, Anja; Schubert, Torsten

    2015-01-01

    The psychological refractory period (PRP) paradigm is a dominant research tool in the literature on dual-task performance. In this paradigm a first and second component task (i.e., Task 1 and Task 2) are presented with variable stimulus onset asynchronies (SOAs) and priority to perform Task 1. The main indicator of dual-task impairment in PRP situations is an increasing Task 2-RT with decreasing SOAs. This impairment is typically explained with some task components being processed strictly sequentially in the context of the prominent central bottleneck theory. This assumption could implicitly suggest that processes of Task 1 are unaffected by Task 2 and bottleneck processing, i.e., decreasing SOAs do not increase reaction times (RTs) and error rates of the first task. The aim of the present review is to assess whether PRP dual-task studies included both RT and error data presentations and statistical analyses and whether studies including both data types (i.e., RTs and error rates) show data consistent with this assumption (i.e., decreasing SOAs and unaffected RTs and/or error rates in Task 1). This review demonstrates that, in contrast to RT presentations and analyses, error data is underrepresented in a substantial number of studies. Furthermore, a substantial number of studies with RT and error data showed a statistically significant impairment of Task 1 performance with decreasing SOA. Thus, these studies produced data that is not primarily consistent with the strong assumption that processes of Task 1 are unaffected by Task 2 and bottleneck processing in the context of PRP dual-task situations; this calls for a more careful report and analysis of Task 1 performance in PRP studies and for a more careful consideration of theories proposing additions to the bottleneck assumption, which are sufficiently general to explain Task 1 and Task 2 effects.

  8. Recruitment into diabetes prevention programs: what is the impact of errors in self-reported measures of obesity?

    Science.gov (United States)

    Hernan, Andrea; Philpot, Benjamin; Janus, Edward D; Dunbar, James A

    2012-07-08

    Error in self-reported measures of obesity has been frequently described, but the effect of self-reported error on recruitment into diabetes prevention programs is not well established. The aim of this study was to examine the effect of using self-reported obesity data from the Finnish diabetes risk score (FINDRISC) on recruitment into the Greater Green Triangle Diabetes Prevention Project (GGT DPP). The GGT DPP was a structured group-based lifestyle modification program delivered in primary health care settings in South-Eastern Australia. Between 2004-05, 850 FINDRISC forms were collected during recruitment for the GGT DPP. Eligible individuals, at moderate to high risk of developing diabetes, were invited to undertake baseline tests, including anthropometric measurements performed by specially trained nurses. In addition to errors in calculating total risk scores, accuracy of self-reported data (height, weight, waist circumference (WC) and Body Mass Index (BMI)) from FINDRISCs was compared with baseline data, with impact on participation eligibility presented. Overall, calculation errors impacted on eligibility in 18 cases (2.1%). Of n = 279 GGT DPP participants with measured data, errors (total score calculation, BMI or WC) in self-report were found in n = 90 (32.3%). These errors were equally likely to result in under- or over-reported risk. Under-reporting was more common in those reporting lower risk scores (Spearman-rho = -0.226, p-value resulted in only 6% of individuals at high risk of diabetes being incorrectly categorised as moderate or low risk of diabetes. Overall FINDRISC was found to be an effective tool to screen and recruit participants at moderate to high risk of diabetes, accurately categorising levels of overweight and obesity using self-report data. The results could be generalisable to other diabetes prevention programs using screening tools which include self-reported levels of obesity.

  9. Recruitment into diabetes prevention programs: what is the impact of errors in self-reported measures of obesity?

    Directory of Open Access Journals (Sweden)

    Hernan Andrea

    2012-07-01

    Full Text Available Abstract Background Error in self-reported measures of obesity has been frequently described, but the effect of self-reported error on recruitment into diabetes prevention programs is not well established. The aim of this study was to examine the effect of using self-reported obesity data from the Finnish diabetes risk score (FINDRISC) on recruitment into the Greater Green Triangle Diabetes Prevention Project (GGT DPP). Methods The GGT DPP was a structured group-based lifestyle modification program delivered in primary health care settings in South-Eastern Australia. Between 2004–05, 850 FINDRISC forms were collected during recruitment for the GGT DPP. Eligible individuals, at moderate to high risk of developing diabetes, were invited to undertake baseline tests, including anthropometric measurements performed by specially trained nurses. In addition to errors in calculating total risk scores, accuracy of self-reported data (height, weight, waist circumference (WC) and Body Mass Index (BMI)) from FINDRISCs was compared with baseline data, with impact on participation eligibility presented. Results Overall, calculation errors impacted on eligibility in 18 cases (2.1%). Of n = 279 GGT DPP participants with measured data, errors (total score calculation, BMI or WC) in self-report were found in n = 90 (32.3%). These errors were equally likely to result in under- or over-reported risk. Under-reporting was more common in those reporting lower risk scores (Spearman-rho = −0.226, p-value  Conclusions Overall FINDRISC was found to be an effective tool to screen and recruit participants at moderate to high risk of diabetes, accurately categorising levels of overweight and obesity using self-report data. The results could be generalisable to other diabetes prevention programs using screening tools which include self-reported levels of obesity.

  10. MEASUREMENT ERROR EFFECT ON THE POWER OF THE CONTROL CHART FOR ZERO-TRUNCATED BINOMIAL DISTRIBUTION UNDER STANDARDIZATION PROCEDURE

    Directory of Open Access Journals (Sweden)

    Anwer Khurshid

    2014-12-01

    Full Text Available Measurement error effects on the power of control charts for the zero-truncated Poisson distribution and the ratio of two Poisson distributions have recently been studied by Chakraborty and Khurshid (2013a) and Chakraborty and Khurshid (2013b), respectively. In this paper, in addition to deriving an expression for the power of the control chart for the ZTBD based on a standardized normal variate, numerical calculations are presented to show the effect of errors on the power curve. To study the sensitivity of the monitoring procedure, the average run length (ARL) is also considered.

  11. Reduction of Truncation Errors in Planar Near-Field Aperture Antenna Measurements Using the Gerchberg-Papoulis Algorithm

    DEFF Research Database (Denmark)

    Martini, Enrica; Breinbjerg, Olav; Maci, Stefano

    2008-01-01

    A simple and effective procedure for the reduction of truncation errors in planar near-field measurements of aperture antennas is presented. The procedure relies on the consideration that, due to the scan plane truncation, the calculated plane wave spectrum of the field radiated by the antenna... is reliable only within a certain portion of the visible region. Accordingly, the truncation error is reduced by extrapolating the remaining portion of the visible region by the Gerchberg-Papoulis iterative algorithm, exploiting a condition of spatial concentration of the fields on the antenna aperture plane...

  12. Test-retest reproducibility of elbow goniometric measurements in a rigid double-blinded protocol: intervals for distinguishing between measurement error and clinical change.

    Science.gov (United States)

    Cleffken, Berry; van Breukelen, Gerard; van Mameren, Henk; Brink, Peter; Olde Damink, Steven

    2007-01-01

    Increasingly, goniometry of elbow motion is used to quantify research results. Reliability, however, is usually expressed in parameters that are not suitable for comparing results. We modified Bland and Altman's method, resulting in smallest detectable differences (SDDs). Two raters measured elbow excursions in 42 individuals (144 ratings per test person) with an electronic digital inclinometer in a classical test-retest crossover study design. The SDDs were 0 +/- 4.2 degrees for active extension and 0 +/- 8.2 degrees for active flexion, both without upper arm fixation; 0 +/- 6.3 degrees for active extension, 0 +/- 5.7 degrees for active flexion, and 0 +/- 7.4 degrees for passive flexion with upper arm fixation; 0 +/- 10.1 degrees for active flexion with upper arm retroflexion; and 0 +/- 8.5 degrees and 0 +/- 10.8 degrees for active and passive range of motion. Differences smaller than these SDDs found in clinical or research settings are attributable to measurement error and do not indicate improvement.
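The SDD construction follows Bland and Altman's limits-of-agreement idea: assuming no systematic bias between test and retest, differences smaller than 1.96 times the SD of the paired differences cannot be distinguished from measurement error. A minimal sketch on simulated data (not the study's measurements):

```python
import numpy as np

rng = np.random.default_rng(2)
# Hypothetical paired test-retest elbow flexion readings, in degrees
test = rng.normal(140.0, 8.0, 42)
retest = test + rng.normal(0.0, 2.1, 42)   # pure measurement noise, no bias

diff = retest - test
sdd = 1.96 * np.std(diff, ddof=1)          # smallest detectable difference
print(round(float(sdd), 1))                # degrees; changes below this are noise
```

A clinically observed change is only interpretable as real improvement when it exceeds the SDD computed this way.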

  13. Enhanced Strain Measurement Range of an FBG Sensor Embedded in Seven-Wire Steel Strands

    Directory of Open Access Journals (Sweden)

    Jae-Min Kim

    2017-07-01

    Full Text Available FBG sensors offer many advantages, such as a lack of sensitivity to electromagnetic waves, small size, high durability, and high sensitivity. However, their maximum strain measurement range when embedded in steel strands is lower than the yield strain of the strands (about 1.0%). This study proposes a new FBG sensing technique in which an FBG sensor is recoated with polyimide and protected by a polyimide tube in an effort to enhance the maximum strain measurement range of FBG sensors embedded in strands. The validation test results showed that the proposed FBG sensing technique has a maximum strain measurement range of 1.73% on average, which is 1.73 times the yield strain of the strands. It was confirmed that recoating the FBG sensor with polyimide and protecting it with a polyimide tube can effectively enhance the maximum strain measurement range of FBG sensors embedded in strands.

  14. Correlation Between Analog Noise Measurements and the Expected Bit Error Rate of a Digital Signal Propagating Through Passive Components

    Science.gov (United States)

    Warner, Joseph D.; Theofylaktos, Onoufrios

    2012-01-01

    A method of determining the bit error rate (BER) of a digital circuit from measurements of the analog S-parameters of the circuit has been developed. The method is based on measurement of the noise and the standard deviation of the noise in the S-parameters. Once the standard deviation and the mean of the S-parameters are known, the BER of the circuit can be calculated using the Gaussian (normal) distribution function.
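For Gaussian noise and an optimally placed decision threshold, the mean-and-standard-deviation-to-BER relationship is the classic Q-factor formula. A sketch with hypothetical signal levels (the abstract does not give the actual values):

```python
import math

def ber_from_noise(mu1, mu0, sigma1, sigma0):
    """Bit error rate from measured logic-level means and noise standard
    deviations, assuming Gaussian noise and an optimal decision threshold."""
    q = (mu1 - mu0) / (sigma1 + sigma0)        # classic Q-factor
    return 0.5 * math.erfc(q / math.sqrt(2.0))

# Hypothetical levels inferred from S-parameter measurements
ber = ber_from_noise(1.0, 0.0, 0.07, 0.07)
print(ber)
```

The Gaussian tail makes the BER extremely sensitive to the noise standard deviation, which is why accurate analog noise measurement translates into a usable digital BER prediction.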

  15. Errors of Measurement, Theory, and Public Policy. William H. Angoff Memorial Lecture Series

    Science.gov (United States)

    Kane, Michael

    2010-01-01

    The 12th annual William H. Angoff Memorial Lecture was presented by Dr. Michael T. Kane, ETS's (Educational Testing Service) Samuel J. Messick Chair in Test Validity and the former Director of Research at the National Conference of Bar Examiners. Dr. Kane argues that it is important for policymakers to recognize the impact of errors of measurement…

  16. Measuring and detecting errors in occupational coding: an analysis of SHARE data

    NARCIS (Netherlands)

    Belloni, M.; Brugiavini, A.; Meschi, E.; Tijdens, K.

    2016-01-01

    This article studies coding errors in occupational data, as the quality of this data is important but often neglected. In particular, we recoded open-ended questions on occupation for last and current job in the Dutch sample of the “Survey of Health, Ageing and Retirement in Europe” (SHARE) using a

  17. Human error views : a framework for benchmarking organizations and measuring the distance between academia and industry

    NARCIS (Netherlands)

    Karanikas, Nektarios

    2015-01-01

    The paper presents a framework that, through structured analysis of accident reports, explores the differences between practice and academic literature, as well as amongst organizations, regarding their views on human error. The framework is based on the hypothesis that the wording of accident reports

  18. Correction of error in two-dimensional wear measurements of cemented hip arthroplasties

    NARCIS (Netherlands)

    The, Bertram; Mol, Linda; Diercks, Ron L.; van Ooijen, Peter M. A.; Verdonschot, Nico

    The irregularity of individual wear patterns of total hip prostheses seen during patient followup may result partially from differences in radiographic projection of the components between radiographs. A method to adjust for this source of error would increase the value of individual wear curves. We

  19. Determination and error analysis of emittance and spectral emittance measurements by remote sensing. [of leaves, soil and plant canopies

    Science.gov (United States)

    Kumar, R.

    1977-01-01

    Theoretical and experimental determinations of the emittance of soils and leaves are reviewed, and an error analysis of emittance and spectral emittance measurements is developed as an aid to remote sensing applications. In particular, an equation for the upper bound of the absolute error in an emittance determination is derived. The absolute error is found to decrease with an increase in contact temperature and to increase with an increase in environmental integrated radiant flux density. The difference between temperature and band radiance temperature is plotted as a function of emittance for the wavelength intervals 4.5 to 5.5 microns, 8 to 13.5 microns and 10.2 to 12.5 microns.

  20. A Preliminary Study on the Measures to Assess the Organizational Safety: The Cultural Impact on Human Error Potential

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Yong Hee; Lee, Yong Hee [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2011-10-15

    The Fukushima I nuclear accident, following the Tohoku earthquake and tsunami on 11 March 2011, occurred twelve years after the JCO accident, which was caused by an error made by JCO employees. These accidents, along with the Chernobyl accident, are associated with characteristic problems of various organizations; they caused severe social and economic disruptions and have had significant environmental and health impacts. Cultural problems with human errors occur for various reasons, and different actions are needed to prevent different errors. Unfortunately, much of the research on organizations and human error has produced widely varying results, which call for different approaches. In other words, we have to find more practical solutions from various lines of research for nuclear safety and take a systematic approach to the organizational deficiencies that cause human error. This paper reviews Hofstede's criteria, the IAEA safety culture, the safety areas of the periodic safety review (PSR), teamwork and performance, and an evaluation of HANARO safety culture to verify the measures used to assess organizational safety.

  1. A Measurement Error Model for Physical Activity Level as Measured by a Questionnaire With Application to the 1999–2006 NHANES Questionnaire

    Science.gov (United States)

    Tooze, Janet A.; Troiano, Richard P.; Carroll, Raymond J.; Moshfegh, Alanna J.; Freedman, Laurence S.

    2013-01-01

    Systematic investigations into the structure of measurement error of physical activity questionnaires are lacking. We propose a measurement error model for a physical activity questionnaire that uses physical activity level (the ratio of total energy expenditure to basal energy expenditure) to relate questionnaire-based reports of physical activity level to true physical activity levels. The 1999–2006 National Health and Nutrition Examination Survey physical activity questionnaire was administered to 433 participants aged 40–69 years in the Observing Protein and Energy Nutrition (OPEN) Study (Maryland, 1999–2000). Valid estimates of participants’ total energy expenditure were also available from doubly labeled water, and basal energy expenditure was estimated from an equation; the ratio of those measures estimated true physical activity level (“truth”). We present a measurement error model that accommodates the mixture of errors that arise from assuming a classical measurement error model for doubly labeled water and a Berkson error model for the equation used to estimate basal energy expenditure. The method was then applied to the OPEN Study. Correlations between the questionnaire-based physical activity level and truth were modest (r = 0.32–0.41); attenuation factors (0.43–0.73) indicate that the use of questionnaire-based physical activity level would lead to attenuated estimates of effect size. Results suggest that sample sizes for estimating relationships between physical activity level and disease should be inflated, and that regression calibration can be used to provide measurement error–adjusted estimates of relationships between physical activity and disease. PMID:23595007
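Under the classical measurement error model, the attenuation factor reported above is the ratio of the true-value variance to the observed-value variance, and it is exactly the factor by which a naive regression slope shrinks. A simulation sketch with hypothetical variances chosen to land near the reported range:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 5000
truth = rng.normal(1.75, 0.15, n)              # true physical activity level
report = truth + rng.normal(0.0, 0.18, n)      # questionnaire with classical error

# Attenuation factor lambda = cov(truth, report) / var(report);
# regressing an outcome on the error-prone report shrinks slopes by lambda
lam = np.cov(truth, report)[0, 1] / np.var(report, ddof=1)
print(round(float(lam), 2))
```

Regression calibration undoes this shrinkage by replacing the report with E[truth | report], which is why the abstract recommends it for measurement error-adjusted effect estimates.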

  2. Correcting for binomial measurement error in predictors in regression with application to analysis of DNA methylation rates by bisulfite sequencing.

    Science.gov (United States)

    Buonaccorsi, John; Prochenka, Agnieszka; Thoresen, Magne; Ploski, Rafal

    2016-09-30

    Motivated by a genetic application, this paper addresses the problem of fitting regression models when the predictor is a proportion measured with error. While the problem of dealing with additive measurement error in fitting regression models has been extensively studied, the problem where the additive error is of a binomial nature has not been addressed. The measurement errors here are heteroscedastic for two reasons; dependence on the underlying true value and changing sampling effort over observations. While some of the previously developed methods for treating additive measurement error with heteroscedasticity can be used in this setting, other methods need modification. A new version of simulation extrapolation is developed, and we also explore a variation on the standard regression calibration method that uses a beta-binomial model based on the fact that the true value is a proportion. Although most of the methods introduced here can be used for fitting non-linear models, this paper will focus primarily on their use in fitting a linear model. While previous work has focused mainly on estimation of the coefficients, we will, with motivation from our example, also examine estimation of the variance around the regression line. In addressing these problems, we also discuss the appropriate manner in which to bootstrap for both inferences and bias assessment. The various methods are compared via simulation, and the results are illustrated using our motivating data, for which the goal is to relate the methylation rate of a blood sample to the age of the individual providing the sample. Copyright © 2016 John Wiley & Sons, Ltd.
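A schematic version of simulation extrapolation (SIMEX) for a binomially measured proportion predictor is sketched below. The noise-generation scheme and the quadratic extrapolant are simplified relative to the paper's method, and all parameter values are illustrative:

```python
import numpy as np

rng = np.random.default_rng(4)
n, m = 4000, 30

p_true = rng.beta(4, 4, n)                 # true proportions (e.g., methylation rates)
p_obs = rng.binomial(m, p_true) / m        # proportions measured with binomial error
y = 2.0 + 3.0 * p_true + rng.normal(0.0, 0.5, n)

def slope(x, y):
    """Ordinary least-squares slope of y on x."""
    return np.cov(x, y)[0, 1] / np.var(x, ddof=1)

naive = slope(p_obs, y)                    # attenuated by the measurement error

# SIMEX: add extra binomial-style noise at levels lam, then extrapolate to lam = -1
lams = np.array([0.0, 0.5, 1.0, 1.5, 2.0])
slopes = []
for lam in lams:
    extra_sd = np.sqrt(lam * p_obs * (1.0 - p_obs) / m)   # heteroscedastic pseudo-errors
    reps = [slope(p_obs + rng.normal(0.0, extra_sd), y) for _ in range(20)]
    slopes.append(np.mean(reps))

coef = np.polyfit(lams, slopes, 2)         # quadratic extrapolant in lam
simex = float(np.polyval(coef, -1.0))      # "no measurement error" extrapolation
print(round(float(naive), 2), round(simex, 2))
```

The extrapolated slope moves back toward the true coefficient of 3.0, while the naive slope stays attenuated; the heteroscedastic pseudo-error step is what distinguishes this setting from classical homoscedastic SIMEX.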

  3. Error Budgeting

    Energy Technology Data Exchange (ETDEWEB)

    Vinyard, Natalia Sergeevna [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Perry, Theodore Sonne [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Usov, Igor Olegovich [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2017-10-04

    We calculate opacity from k(hν) = -ln[T(hν)]/(ρL), where T(hν) is the transmission for photon energy hν, ρ is the sample density, and L is the path length through the sample. The density and path length are measured together by Rutherford backscatter. Δk = (∂k/∂T)ΔT + (∂k/∂(ρL))Δ(ρL). We can re-write this in terms of fractional error as Δk/k = Δln(T)/ln(T) + Δ(ρL)/(ρL). Transmission itself is calculated from T = (U-E)/(V-E) = B/B0, where B is the transmitted backlighter (BL) signal and B0 is the unattenuated backlighter signal. Then ΔT/T = Δln(T) = ΔB/B + ΔB0/B0, and consequently Δk/k = (1/ln(T))(ΔB/B + ΔB0/B0) + Δ(ρL)/(ρL). Transmission is measured in the range of 0.2
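The relations above assemble into a one-line error budget. A sketch (the example numbers are illustrative, not measured values from this work):

```python
import math

def opacity_error_budget(T, dB_over_B, dB0_over_B0, dpL_over_pL):
    """Fractional opacity error dk/k from transmission T, fractional backlighter
    signal uncertainties, and the fractional density-path-length uncertainty."""
    dlnT = dB_over_B + dB0_over_B0           # d(ln T) = dB/B + dB0/B0
    return dlnT / abs(math.log(T)) + dpL_over_pL

# Example: T = 0.3, 2% error on each backlighter signal, 3% error on rho*L
print(round(opacity_error_budget(0.3, 0.02, 0.02, 0.03), 3))
```

Note the 1/|ln T| factor: as T approaches 1 (a thin or transparent sample), signal errors are strongly amplified in the opacity, which is why transmission is kept well below unity.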

  4. MAX-DOAS measurements of HONO slant column densities during the MAD-CAT campaign: inter-comparison, sensitivity studies on spectral analysis settings, and error budget

    Science.gov (United States)

    Wang, Yang; Beirle, Steffen; Hendrick, Francois; Hilboll, Andreas; Jin, Junli; Kyuberis, Aleksandra A.; Lampel, Johannes; Li, Ang; Luo, Yuhan; Lodi, Lorenzo; Ma, Jianzhong; Navarro, Monica; Ortega, Ivan; Peters, Enno; Polyansky, Oleg L.; Remmers, Julia; Richter, Andreas; Puentedura, Olga; Van Roozendael, Michel; Seyler, André; Tennyson, Jonathan; Volkamer, Rainer; Xie, Pinhua; Zobov, Nikolai F.; Wagner, Thomas

    2017-10-01

    In order to promote the development of the passive DOAS technique, the Multi Axis DOAS - Comparison campaign for Aerosols and Trace gases (MAD-CAT) was held at the Max Planck Institute for Chemistry in Mainz, Germany, from June to October 2013. Here, we systematically compare the differential slant column densities (dSCDs) of nitrous acid (HONO) derived from measurements of seven different instruments. We also compare the tropospheric difference of SCDs (delta SCD) of HONO, namely the difference of the SCDs for the non-zenith observations and the zenith observation of the same elevation sequence. Different research groups analysed the spectra from their own instruments using their individual fit software. The fit errors of HONO dSCDs from the instruments with cooled large-size detectors are mostly in the range of 0.1 to 0.3 × 1015 molecules cm-2 for an integration time of 1 min. The fit error for the mini-MAX-DOAS is around 0.7 × 1015 molecules cm-2. Although the HONO delta SCDs are normally smaller than 6 × 1015 molecules cm-2, consistent time series of HONO delta SCDs are retrieved from the measurements of different instruments. Both fits with a sequential Fraunhofer reference spectrum (FRS) and a daily noon FRS lead to similar consistency. Apart from the mini-MAX-DOAS, the systematic absolute differences of HONO delta SCDs between the instruments are smaller than 0.63 × 1015 molecules cm-2. The correlation coefficients are higher than 0.7 and the slopes of linear regressions deviate from unity by less than 16 % for the elevation angle of 1°. The correlations decrease with an increase in elevation angle. All the participants also analysed synthetic spectra using the same baseline DOAS settings to evaluate the systematic errors of HONO results from their respective fit programs. In general the errors are smaller than 0.3 × 1015 molecules cm-2, which is about half of the systematic difference between the real measurements. The differences of HONO delta SCDs

  5. Accounting for measurement error in biomarker data and misclassification of subtypes in the analysis of tumor data.

    Science.gov (United States)

    Nevo, Daniel; Zucker, David M; Tamimi, Rulla M; Wang, Molin

    2016-12-30

    A common paradigm in dealing with heterogeneity across tumors in cancer analysis is to cluster the tumors into subtypes using marker data on the tumor, and then to analyze each of the clusters separately. A more specific target is to investigate the association between risk factors and specific subtypes and to use the results for personalized preventive treatment. This task is usually carried out in two steps: clustering and risk factor assessment. However, two sources of measurement error arise in these problems. The first is the measurement error in the biomarker values. The second is the misclassification error when assigning observations to clusters. We consider the case with a specified set of relevant markers and propose a unified single-likelihood approach for normally distributed biomarkers. As an alternative, we consider a two-step procedure with the tumor type misclassification error taken into account in the second-step risk factor analysis. We describe our method for binary data and also for survival analysis data using a modified version of the Cox model. We present asymptotic theory for the proposed estimators. Simulation results indicate that our methods significantly lower the bias with a small price being paid in terms of variance. We present an analysis of breast cancer data from the Nurses' Health Study to demonstrate the utility of our method. Copyright © 2016 John Wiley & Sons, Ltd.

  6. Measurements and their uncertainties a practical guide to modern error analysis

    CERN Document Server

    Hughes, Ifan G

    2010-01-01

    This hands-on guide is primarily intended to be used in undergraduate laboratories in the physical sciences and engineering. It assumes no prior knowledge of statistics. It introduces the necessary concepts where needed, with key points illustrated with worked examples and graphic illustrations. In contrast to traditional mathematical treatments it uses a combination of spreadsheet and calculus-based approaches, suitable as a quick and easy on-the-spot reference. The emphasis throughout is on practical strategies to be adopted in the laboratory. Error analysis is introduced at a level accessible to school leavers, and carried through to research level. Error calculation and propagation is presented through a series of rules-of-thumb, look-up tables and approaches amenable to computer analysis. The general approach uses the chi-square statistic extensively. Particular attention is given to hypothesis testing and extraction of parameters and their uncertainties by fitting mathematical models to experimental data....

  7. Coherent change detection and interferometric ISAR measurements in the folded compact range

    Energy Technology Data Exchange (ETDEWEB)

    Sorensen, K.W.

    1996-08-01

    A folded compact range configuration has been developed at the Sandia National Laboratories' compact range antenna and radar-cross-section measurement facility as a means of performing indoor, environmentally-controlled, far-field simulations of synthetic aperture radar (SAR) measurements of distributed target samples (i.e. gravel, sand, etc.). The folded compact range configuration has previously been used to perform coherent-change-detection (CCD) measurements, which allow disturbances to distributed targets on the order of fractions of a wavelength to be detected. This report describes follow-on CCD measurements of other distributed target samples, and also investigates the sensitivity of the CCD measurement process to changes in the relative spatial location of the SAR sensor between observations of the target. Additionally, this report describes the theoretical and practical aspects of performing interferometric inverse-synthetic-aperture-radar (IFISAR) measurements in the folded compact range environment. IFISAR measurements provide resolution of the relative heights of targets with accuracies on the order of a wavelength. Several examples are given of digital height maps that have been generated from measurements performed at the folded compact range facility.

  8. Measurement Rounding Errors in an Assessment Model of Project Led Engineering Education

    OpenAIRE

    Francisco Moreira; Sousa, Rui M., ed. lit.; Celina P Leão; Anabela C Alves; Lima, Rui M.

    2009-01-01

    This paper analyzes the rounding errors that occur in the assessment of an interdisciplinary Project-Led Education (PLE) process implemented in the Integrated Master degree on Industrial Management and Engineering (IME) at University of Minho. PLE is an innovative educational methodology which makes use of active learning, promoting higher levels of motivation and students’ autonomy. The assessment model is based on multiple evaluation components with different weights. Each component can be ...

  9. Reliability and responsiveness of a goniometric device for measuring the range of motion in the dart-throwing motion plane.

    Science.gov (United States)

    Kasubuchi, Kenji; Dohi, Yoshihiro; Fujita, Hiroyuki; Fukumoto, Takahiko

    2018-02-26

    Dart-throwing motion (DTM) is an important component of wrist function and, consequently, has the potential to become an evaluation tool in rehabilitation. However, no measurement method is currently available to reliably measure range of motion (ROM) of the wrist in the DTM plane. To determine the reliability and responsiveness of a goniometric device to measure wrist ROM in the DTM plane. ROM of the wrist in the DTM plane was measured in 70 healthy participants. The intra-class correlation coefficient (ICC) was used to evaluate the relative reliability of measurement, and a Bland-Altman analysis conducted to establish its absolute reliability, including the 95% limits of agreement (95% LOA). The standard error of the measurement (SEM) and minimal detectable change at the 95% confidence level (MDC 95 ) were calculated as measures of responsiveness. The intra-rater ICC was 0.87, and an inter-rater ICC of 0.71. There was no evidence of a fixed or proportional bias. For intra- and inter-rater reliability, 95% LOA ranged from -13.83 to 11.12 and from -17.75 to 16.19, respectively. The SEM and MDC 95 were 4.5° and 12.4°, respectively, for intra-rater reliability, and 6.0° and 16.6°, respectively, for inter-rater reliability. The ROM of the wrist in the DTM plane was measured with fair-to-good reliability and responsiveness and, therefore, has the potential to become an evaluation tool for rehabilitation.
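    The SEM and MDC95 figures reported above follow from the ICC and the between-subject spread by the standard formulas SEM = SD·sqrt(1 − ICC) and MDC95 = 1.96·sqrt(2)·SEM. A minimal sketch (the 12.5° standard deviation below is an illustrative value, not one taken from the study):

```python
import math

def sem_and_mdc95(sd, icc):
    """Standard error of measurement and minimal detectable change (95% level).

    SEM = SD * sqrt(1 - ICC); MDC95 = 1.96 * sqrt(2) * SEM.
    """
    sem = sd * math.sqrt(1.0 - icc)
    mdc95 = 1.96 * math.sqrt(2.0) * sem
    return sem, mdc95

# Illustrative inputs: assumed between-subject SD = 12.5 deg, intra-rater ICC = 0.87
sem, mdc95 = sem_and_mdc95(12.5, 0.87)
print(round(sem, 1), round(mdc95, 1))  # → 4.5 12.5
```

With these assumed inputs the sketch reproduces values of the same magnitude as the study's intra-rater SEM (4.5°) and MDC95 (12.4°).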

  10. Airborne Measurements of CO2 Column Concentration and Range Using a Pulsed Direct-Detection IPDA Lidar

    Science.gov (United States)

    Abshire, James B.; Ramanathan, Anand; Riris, Haris; Mao, Jianping; Allan, Graham R.; Hasselbrack, William E.; Weaver, Clark J.; Browell, Edward V.

    2013-01-01

    We have previously demonstrated a pulsed direct detection IPDA lidar to measure range and the column concentration of atmospheric CO2. The lidar measures the atmospheric backscatter profiles and samples the shape of the 1,572.33 nm CO2 absorption line. We participated in the ASCENDS science flights on the NASA DC-8 aircraft during August 2011 and report here lidar measurements made on four flights over a variety of surface and cloud conditions near the US. These included over a stratus cloud deck over the Pacific Ocean, to a dry lake bed surrounded by mountains in Nevada, to a desert area with a coal-fired power plant, and from the Rocky Mountains to Iowa, with segments with both cumulus and cirrus clouds. Most flights were to altitudes >12 km and had 5-6 altitude steps. Analyses show the retrievals of lidar range, CO2 column absorption, and CO2 mixing ratio worked well when measuring over topography with rapidly changing height and reflectivity, through thin clouds, between cumulus clouds, and to stratus cloud tops. The retrievals show the decrease in column CO2 due to growing vegetation when flying over Iowa cropland as well as a sudden increase in CO2 concentration near a coal-fired power plant. For regions where the CO2 concentration was relatively constant, the measured CO2 absorption lineshape (averaged for 50 s) matched the predicted shapes to better than 1% RMS error. For 10 s averaging, the scatter in the retrievals was typically 2-3 ppm and was limited by the received signal photon count. Retrievals were made using atmospheric parameters from both an atmospheric model and from in situ temperature and pressure from the aircraft. The retrievals had no free parameters and did not use empirical adjustments, and >70% of the measurements passed screening and were used in analysis. The differences between the lidar-measured retrievals and in situ measured average CO2 column concentrations were 6 km.

  11. Optimal frequency range for medical radar measurements of human heartbeats using body-contact radar.

    Science.gov (United States)

    Brovoll, Sverre; Aardal, Øyvind; Paichard, Yoann; Berger, Tor; Lande, Tor Sverre; Hamran, Svein-Erik

    2013-01-01

    In this paper the optimal frequency range for heartbeat measurements using body-contact radar is experimentally evaluated. A body-contact radar senses electromagnetic waves that have penetrated the human body, but the range of frequencies that can be used is limited by the electric properties of human tissue. The optimal frequency range is an important property needed for the design of body-contact radar systems for heartbeat measurements. In this study heartbeats are measured using three different antennas at discrete frequencies from 0.1 - 10 GHz, and the strength of the received heartbeat signal is calculated. To characterize the antennas when in contact with the body, two-port S-parameters are measured for the antennas using a pork rib as a phantom for the human body. The results show that frequencies up to 2.5 GHz can be used for heartbeat measurements with body-contact radar.

  12. Robustness of SOC Estimation Algorithms for EV Lithium-Ion Batteries against Modeling Errors and Measurement Noise

    Directory of Open Access Journals (Sweden)

    Xue Li

    2015-01-01

    State of charge (SOC) is one of the most important parameters in a battery management system (BMS). There are numerous algorithms for SOC estimation, mostly of model-based observer/filter types such as Kalman filters, closed-loop observers, and robust observers. Modeling errors and measurement noises have a critical impact on the accuracy of SOC estimation in these algorithms. This paper is a comparative study of the robustness of SOC estimation algorithms against modeling errors and measurement noises. Using a typical battery platform for vehicle applications with sensor noise and battery aging characterization, three popular and representative SOC estimation methods (extended Kalman filter, PI-controlled observer, and H∞ observer) are compared on such robustness. The simulation and experimental results demonstrate the deterioration of SOC estimation accuracy under modeling errors resulting from aging and larger measurement noise, which is quantitatively characterized. The findings of this paper provide useful information on the following aspects: (1) how SOC estimation accuracy depends on modeling reliability and voltage measurement accuracy; (2) the pros and cons of typical SOC estimators in their robustness and reliability; (3) guidelines for requirements on battery system identification and sensor selection.
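    The role of measurement noise in such estimators can be illustrated with a toy scalar Kalman filter on a coulomb-counting SOC model with a noisy, hypothetical SOC-proportional measurement. This is only a sketch of the general filtering idea, not the paper's EKF, PI-observer, or H∞ setup, and all noise parameters are assumptions:

```python
import math
import random

def simulate_soc_kf(n=500, seed=1):
    """Toy scalar Kalman filter for SOC tracking.

    Process model: coulomb counting with a known constant drain.
    Measurement: SOC plus Gaussian noise (a stand-in for a noisy,
    linearized OCV-based SOC reading). Returns (raw RMSE, filtered RMSE).
    """
    random.seed(seed)
    q_proc, r_meas = 1e-6, 4e-4          # assumed process/measurement noise variances
    soc_true, soc_est, p = 1.0, 1.0, 1e-3
    drain = 0.9 / n                       # discharge from 100% to ~10% over the run
    raw_err2 = flt_err2 = 0.0
    for _ in range(n):
        soc_true -= drain
        z = soc_true + random.gauss(0.0, math.sqrt(r_meas))  # noisy "measured" SOC
        # predict step
        soc_est -= drain
        p += q_proc
        # update step
        k = p / (p + r_meas)
        soc_est += k * (z - soc_est)
        p *= (1.0 - k)
        raw_err2 += (z - soc_true) ** 2
        flt_err2 += (soc_est - soc_true) ** 2
    return math.sqrt(raw_err2 / n), math.sqrt(flt_err2 / n)

raw_rmse, kf_rmse = simulate_soc_kf()
print(round(raw_rmse, 4), round(kf_rmse, 4))  # filtered error should be the smaller
```

Raising `r_meas` or adding a bias to `drain` (a crude stand-in for aging-induced model error) shows the accuracy deterioration the paper quantifies.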

  13. Error Analysis of High Frequency Core Loss Measurement for Low-Permeability Low-Loss Magnetic Cores

    DEFF Research Database (Denmark)

    Niroumand, Farideh Javidi; Nymand, Morten

    2016-01-01

    in magnetic cores is B-H loop measurement where two windings are placed on the core under test. However, this method is highly vulnerable to phase shift error, especially for low-permeability, low-loss cores. Due to soft saturation and very low core loss, low-permeability low-loss magnetic cores are favorable...... in many of the high-efficiency high power-density power converters. Magnetic powder cores, among the low-permeability low-loss cores, are very attractive since they possess lower magnetic losses in compared to gapped ferrites. This paper presents an analytical study of the phase shift error in the core....... The analysis has been validated by experimental measurements for relatively low-loss magnetic cores with different permeability values....

  14. Measurement of dragging of inertial frames and gravitomagnetic field using laser-ranged satellites.

    Science.gov (United States)

    Ciufolini, I.; Lucchesi, D.; Vespe, F.; Mandiello, A.

    1996-05-01

    By analysing the observations of the orbits of the laser-ranged satellites LAGEOS and LAGEOS II, using the program GEODYN, the authors have obtained the first direct measurement of the Lense-Thirring effect, or dragging of inertial frames, and the first direct experimental evidence for the gravitomagnetic field. The accuracy of their measurement is about 30%.

  15. Expanding the dynamic measurement range for polymeric nanoparticle pH sensors

    DEFF Research Database (Denmark)

    Sun, Honghao; Almdal, Kristoffer; Andresen, Thomas Lars

    2011-01-01

    Conventional optical nanoparticle pH sensors that are designed for ratiometric measurements in cells have been based on utilizing one sensor fluorophore and one reference fluorophore in each nanoparticle, which results in a relatively narrow dynamic measurement range. This results in substantial...

  16. Variation in measurements of range of motion : a study in reflex sympathetic dystrophy patients

    NARCIS (Netherlands)

    Geertzen, J.H.B.; Dijkstra, P.U.; Stewart, R.E; Groothoff, J.W.; ten Duis, H J; Eisma, W.H.

    1998-01-01

    Objective: To quantify the amount of variation attributed to different sources of variation in measurement results of upper extremity range of motion, and to estimate the smallest detectable difference (SDD) between measurements in reflex sympathetic dystrophy (RSD) patients. Design: Two observers

  17. From transmission error measurement to Pulley-Belt slip determination in serpentine belt drives : influence of tensioner and belt characteristics

    OpenAIRE

    Manin, Lionel; Michon, Guilhem; Rémond, Didier; Dufour, Regis

    2009-01-01

    Serpentine belt drives are often used in the front-end accessory drive of automotive engines. The accessories' resistant torques are getting higher with new technological innovations such as the starter-alternator, and belt transmissions are always asked for higher capacity. Two kinds of tensioners are used to maintain the minimum tension that ensures power transmission and minimizes slip: dry-friction or hydraulic tensioners. An experimental device and a specific transmission error measurement method have been u...

  18. Impact of food and fluid intake on technical and biological measurement error in body composition assessment methods in athletes.

    Science.gov (United States)

    Kerr, Ava; Slater, Gary J; Byrne, Nuala

    2017-02-01

    Two, three and four compartment (2C, 3C and 4C) models of body composition are popular methods to measure fat mass (FM) and fat-free mass (FFM) in athletes. However, the impact of food and fluid intake on measurement error has not been established. The purpose of this study was to evaluate standardised (overnight fasted, rested and hydrated) v. non-standardised (afternoon and non-fasted) presentation on technical and biological error on surface anthropometry (SA), 2C, 3C and 4C models. In thirty-two athletic males, measures of SA, dual-energy X-ray absorptiometry (DXA), bioelectrical impedance spectroscopy (BIS) and air displacement plethysmography (BOD POD) were taken to establish 2C, 3C and 4C models. Tests were conducted after an overnight fast (duplicate), about 7 h later after ad libitum food and fluid intake, and repeated 24 h later before and after ingestion of a specified meal. Magnitudes of changes in the mean and typical errors of measurement were determined. Mean change scores for non-standardised presentation and post meal tests for FM were substantially large in BIS, SA, 3C and 4C models. For FFM, mean change scores for non-standardised conditions produced large changes for BIS, 3C and 4C models, small for DXA, trivial for BOD POD and SA. Models that included a total body water (TBW) value from BIS (3C and 4C) were more sensitive to TBW changes in non-standardised conditions than 2C models. Biological error is minimised in all models with standardised presentation but DXA and BOD POD are acceptable if acute food and fluid intake remains below 500 g.

  19. Measuring spatial transmission of white maize prices between South Africa and Mozambique: An asymmetric error correction model approach

    OpenAIRE

    Acosta, Alejandro

    2012-01-01

    Over the last decade, Mozambique has experienced drastic increases in food prices, with serious implications for households’ real income. A deeper understanding of how food prices are spatially transmitted from global to domestic markets is thus fundamental for designing policy measures to reduce poverty and food insecurity. This study assesses the spatial transmission of white maize prices between South Africa and Mozambique using an asymmetric error correction model to estimate the speed ...

  20. Regions of constrained maximum likelihood parameter identifiability. [of discrete-time nonlinear dynamic systems with white measurement errors

    Science.gov (United States)

    Lee, C.-H.; Herget, C. J.

    1976-01-01

    This short paper considers the parameter-identification problem of general discrete-time, nonlinear, multiple input-multiple output dynamic systems with Gaussian white distributed measurement errors. Knowledge of the system parameterization is assumed to be available. Regions of constrained maximum likelihood (CML) parameter identifiability are established. A computation procedure employing interval arithmetic is proposed for finding explicit regions of parameter identifiability for the case of linear systems.

  1. Image registration error variance as a measure of overlay quality. [satellite data processing

    Science.gov (United States)

    Mcgillem, C. D.; Svedlow, M.

    1976-01-01

    When one image (the signal) is to be registered with a second image (the signal plus noise) of the same scene, one would like to know the accuracy possible for this registration. This paper derives an estimate of the variance of the registration error that can be expected via two approaches. The solution in each instance is found to be a function of the effective bandwidth of the signal and the noise, and the signal-to-noise ratio. Application of these results to LANDSAT-1 data indicates that for most cases, registration variances will be significantly less than the diameter of one picture element.

  2. The Measure of Human Error: Direct and Indirect Performance Shaping Factors

    Energy Technology Data Exchange (ETDEWEB)

    Ronald L. Boring; Candice D. Griffith; Jeffrey C. Joe

    2007-08-01

    The goal of performance shaping factors (PSFs) is to provide measures to account for human performance. PSFs fall into two categories—direct and indirect measures of human performance. While some PSFs such as “time to complete a task” are directly measurable, other PSFs, such as “fitness for duty,” can only be measured indirectly through other measures and PSFs, such as through fatigue measures. This paper explores the role of direct and indirect measures in human reliability analysis (HRA) and the implications that measurement theory has on analyses and applications using PSFs. The paper concludes with suggestions for maximizing the reliability and validity of PSFs.

  3. Clinical measurement of range of motion. Review of goniometry emphasizing reliability and validity.

    Science.gov (United States)

    Gajdosik, R L; Bohannon, R W

    1987-12-01

    Clinical measurement of range of motion is a fundamental evaluation procedure with ubiquitous application in physical therapy. Objective measurements of ROM and correct interpretation of the measurement results can have a substantial impact on the development of the scientific basis of therapeutic interventions. The purpose of this article is to review the related literature on the reliability and validity of goniometric measurements of the extremities. Special emphasis is placed on how the reliability of goniometry is influenced by instrumentation and procedures, differences among joint actions and body regions, passive versus active measurements, intratester versus intertester measurements, and different patient types. Our discussion of validity encourages objective interpretation of the meaning of ROM measurements in light of the purposes and the limitations of goniometry. We conclude that clinicians should adopt standardized methods of testing and should interpret and report goniometric results as ROM measurements only, not as measurements of factors that may affect ROM.

  4. Simultaneous measurement of spectra at multiple ranges using a single spectrometer.

    Science.gov (United States)

    Lienert, Barry; Porter, John; Sharma, Shiv K

    2009-08-20

    We have designed and built an instrument having the capability to measure and display spectra at multiple ranges near simultaneously in real time. An excitation laser beam is oriented parallel to and offset from the axis of the light collection optics. The image of the laser beam is then displaced with range. Multiple optical fibers collect the displaced images at different ranges. The output ends of these fibers are positioned vertically along the input slit of a spectrometer that disperses the light from each fiber along different rows of the spectrometer's two-dimensional detector array. The detector array rows then give an immediate visual comparison of spectra at different ranges. A small prototype of this system covering a range from 3 to 13 m has been built. It has been successfully tested using containers holding two distinct fluorescent dyes. Numerical simulations indicate that the technique can be extended to longer-range systems.

  5. Use of rigid-body motion for the investigation and estimation of the measurement errors related to digital image correlation technique

    Science.gov (United States)

    Haddadi, H.; Belhabib, S.

    2008-02-01

    The aim of this work is to investigate the sources of errors related to the digital image correlation (DIC) technique applied to strain measurements. Such information is important before the measured kinematic fields can be exploited. After recalling the principle of DIC, some sources of errors related to this technique are listed. Both numerical and experimental tests, based on rigid-body motion, are proposed. These tests are simple and easy to implement. They make it possible to quickly assess the errors related to lighting, the optical lens (distortion), the CCD sensor, the out-of-plane displacement, the speckle pattern, the grid pitch, the size of the subset and the correlation algorithm. The error sources that could not be uncoupled were estimated by amplifying their contribution to the global error. The results obtained allow a classification of the errors related to the equipment used. The paper ends with some suggestions for minimizing the errors.

  6. Near Field HF Antenna Pattern Measurement Method Using an Antenna Pattern Range

    Science.gov (United States)

    2015-12-01

    TECHNICAL REPORT 3006, December 2015. Near-Field HF Antenna Pattern Measurement Method Using an Antenna Pattern Range. Ani Siripuram, Michael Daly. ... link budget. This report focuses on computing absolute gain for HF antennas measured on the APR. Recent research efforts by SSC Pacific's Applied Electromagnetics Branch (Code 52250) show that the APR extends to accurate measurement of normalized far-field radiation patterns of HF antennas. The

  7. On Measurement of Efficiency of Cobb-Douglas Production Function with Additive and Multiplicative Errors

    Directory of Open Access Journals (Sweden)

    Md. Moyazzem Hossain

    2015-02-01

    In developing countries, the efficiency of economic development is determined by the analysis of industrial production. An examination of the characteristics of the industrial sector is an essential aspect of growth studies. Most developed countries are highly industrialized, as summarized in the phrase "the more industrialization, the more development". For proper industrialization and industrial development we have to study the industrial input-output relationship, which leads to production analysis. For a number of reasons, econometricians believe that industrial production is the most important component of economic development: if domestic industrial production increases, GDP will increase; if the elasticity of labor is higher, employment rates will increase; and investment will increase if the elasticity of capital is higher. In this regard, this paper should be helpful in suggesting the most suitable Cobb-Douglas production function to forecast the production process for some selected manufacturing industries of developing countries like Bangladesh. This paper chooses the appropriate Cobb-Douglas function which gives the optimal combination of inputs, that is, the combination that enables production of the desired level of output with minimum cost and hence with maximum profitability, for some selected manufacturing industries of Bangladesh over the period 1978-79 to 2011-2012. The estimated results show that the estimates of both capital and labor elasticity of the Cobb-Douglas production function with additive errors are more efficient than those of the Cobb-Douglas production function with multiplicative errors.
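    The multiplicative-error specification mentioned above is the one that makes the familiar log-linear OLS fit of Q = A·K^α·L^β valid, since taking logs turns the model into ln Q = ln A + α ln K + β ln L plus an additive disturbance. A self-contained sketch on synthetic data (all parameter values illustrative, not from the paper):

```python
import math
import random

def fit_cobb_douglas(K, L, Q):
    """OLS fit of ln Q = ln A + alpha*ln K + beta*ln L (multiplicative-error form).

    Solves the 3x3 normal equations by Gauss-Jordan elimination; returns (A, alpha, beta).
    """
    X = [[1.0, math.log(k), math.log(l)] for k, l in zip(K, L)]
    y = [math.log(q) for q in Q]
    n = 3
    M = [[sum(r[i] * r[j] for r in X) for j in range(n)] for i in range(n)]
    v = [sum(r[i] * yi for r, yi in zip(X, y)) for i in range(n)]
    for i in range(n):                     # Gauss-Jordan on (X'X) v = X'y
        piv = M[i][i]
        for j in range(i, n):
            M[i][j] /= piv
        v[i] /= piv
        for r2 in range(n):
            if r2 != i:
                f = M[r2][i]
                for j in range(i, n):
                    M[r2][j] -= f * M[i][j]
                v[r2] -= f * v[i]
    ln_A, alpha, beta = v
    return math.exp(ln_A), alpha, beta

# Synthetic industry data with known alpha = 0.3, beta = 0.7 and lognormal error
random.seed(0)
K = [random.uniform(10, 100) for _ in range(200)]
L = [random.uniform(10, 100) for _ in range(200)]
Q = [2.0 * k**0.3 * l**0.7 * math.exp(random.gauss(0, 0.05)) for k, l in zip(K, L)]
A, alpha, beta = fit_cobb_douglas(K, L, Q)
print(round(alpha, 2), round(beta, 2))  # estimates should recover ~0.3 and ~0.7
```

An additive-error specification (Q = A·K^α·L^β + ε) would instead require nonlinear least squares, which is the efficiency comparison the paper makes.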

  8. Measuring Residential Segregation With the ACS: How the Margin of Error Affects the Dissimilarity Index.

    Science.gov (United States)

    Napierala, Jeffrey; Denton, Nancy

    2017-02-01

    The American Community Survey (ACS) provides valuable, timely population estimates but with increased levels of sampling error. Although the margin of error is included with aggregate estimates, it has not been incorporated into segregation indexes. With the increasing levels of diversity in small and large places throughout the United States comes a need to track accurately and study changes in racial and ethnic segregation between censuses. The 2005-2009 ACS is used to calculate three dissimilarity indexes (D) for all core-based statistical areas (CBSAs) in the United States. We introduce a simulation method for computing segregation indexes and examine them with particular regard to the size of the CBSAs. Additionally, a subset of CBSAs is used to explore how ACS indexes differ from those computed using the 2000 and 2010 censuses. Findings suggest that the precision and accuracy of D from the ACS is influenced by a number of factors, including the number of tracts and minority population size. For smaller areas, point estimates systematically overstate actual levels of segregation, and large confidence intervals lead to limited statistical power.
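    The dissimilarity index in question is D = (1/2) Σ_i |m_i/M − w_i/W| over tracts. A small sketch shows both the computation and why sampling error biases D upward in small areas: noise in tract counts is zero-mean, but the absolute values in D accumulate it as apparent segregation (toy numbers, not ACS data):

```python
import random

def dissimilarity(minority, majority):
    """Index of dissimilarity D = 1/2 * sum_i |m_i/M - w_i/W| over tracts."""
    M, W = sum(minority), sum(majority)
    return 0.5 * sum(abs(m / M - w / W) for m, w in zip(minority, majority))

# Perfectly integrated toy area: every tract has the same 10/90 split, so D = 0.
minority = [10] * 20
majority = [90] * 20
print(dissimilarity(minority, majority))  # → 0.0

# Adding ACS-style sampling noise to the tract counts pushes D above zero,
# overstating segregation; the effect is worst when tract counts are small.
random.seed(42)
noisy = [max(1, round(m + random.gauss(0, 3))) for m in minority]
print(dissimilarity(noisy, majority) > 0)
```

Repeating the noisy draw many times and averaging mimics the simulation approach the authors use to attach uncertainty to D.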

  9. Quantitative shearography: error reduction by using more than three measurement channels

    OpenAIRE

    Charrett, Thomas O. H.; Francis, Daniel; Tatam, Ralph P.

    2011-01-01

    Shearography is a noncontact optical technique used to measure surface displacement derivatives. Full surface strain characterization can be achieved using shearography configurations employing at least three measurement channels. Each measurement channel is sensitive to a single displacement gradient component defined by its sensitivity vector. A matrix transformation is then required to convert the measured components to the orthogonal displacement gradients required for q...

  10. Accuracy and Reliability of Visual Inspection and Smartphone Applications for Measuring Finger Range of Motion.

    Science.gov (United States)

    Lee, Hannah H; St Louis, Kwesi; Fowler, John R

    2018-01-08

    Measurement of finger range of motion is critical in clinical settings, especially for outcome analysis, clinical decision making, and rehabilitation/disability assessment. Although goniometer measurement is clinically considered the gold standard, its accuracy compared with the true radiographic measurements of the joint angles remains questionable. The authors compared 3 smartphone applications and visual inspection measurements of the finger joints with the radiographic measurements and determined interrater reliability for these measurement tools. A finger was held in place using an aluminum-alloy splint, and a fluoroscopic image was acquired by a mini C-arm. An independent observer measured each joint flexion angle of the fluoroscopic image using a universal handheld goniometer, and this was used as the reference. Finger joint flexion angles were then independently measured by 3 observers using 3 different smartphone applications. In addition, visual inspection was used to estimate the flexion angles of finger joints. The results of this study suggest that all 3 smartphone measurement tools, as well as visual inspection, agree and correlate well with the reference fluoroscopic image measurement. Average differences between the fluoroscopic image measurements and the angles measured using the tools studied ranged from 9.4° to 12.2°. The mean correlation coefficients for each smartphone application exceeded 0.7. Overall interrater reliabilities were similar, with the interclass correlation coefficient being greater than 0.9 for all of the measurement tools. These data suggest that new smartphone applications hold promise for providing accurate and reliable measures of range of motion. [Orthopedics. 201x; xx(x):xx-xx.]. Copyright 2018, SLACK Incorporated.

  11. Earth gravity field modeling and relativistic measurements with laser-ranged satellites and the LARASE research program

    Science.gov (United States)

    Pucacco, Giuseppe; Lucchesi, David; Anselmo, Luciano; Bassan, Massimo; Magnafico, Carmelo; Pardini, Carmen; Peron, Roberto; Stanga, Ruggero; Visco, Massimo

    2017-04-01

    The importance of General Relativity (GR) for space geodesy — and for geodesy in general — is well known since several decades and it has been confirmed by a number of very significant results. For instance, GR plays a fundamental role for the following very notable techniques: Satellite-and-Lunar Laser Ranging (SLR/LLR), Very Long Baseline Interferometry (VLBI), Doppler Orbitography and Radio-positioning Integrated by Satellite (DORIS), and Global Navigation Satellite Systems (GNSS). Each of these techniques is intimately and closely related with both GR and geodesy, i.e. they are linked in a loop where benefits in one field provide positive improvements in the other ones. A common ingredient for a suitable and reliable use of each of these techniques is represented by the knowledge of the Earth's gravitational field, both in its static and temporal dependence. Spaceborne gravimetry, with the inclusion of accelerometers and gradiometers on board dedicated satellites, together with microwave links between satellites and GPS measurements, have allowed a huge improvement in the determination of the Earth's geopotential during the last 15 years. In the near future, further improvements are expected in this knowledge thanks to the inclusion of laser inter-satellite link and the possibility to compare frequency and atomic standards by a direct use of atomic clocks, both on the Earth's surface and in space. Such results will be also important for the possibility to further improve the GR tests and measurements in the field of the Earth with laser-ranged satellites in order to compare the predictions of Einstein's theory with those of other (proposed) relativistic theories for the interpretation of the gravitational interaction. Within the present paper we describe the state of the art of such measurements with geodetic satellites, as the two LAGEOS and LARES, and we discuss the effective impact of the systematic errors of gravitational origin on the measurement of

  12. A measure of the impact of CV incompleteness on prediction error estimation with application to PCA and normalization.

    Science.gov (United States)

    Hornung, Roman; Bernau, Christoph; Truntzer, Caroline; Wilson, Rory; Stadler, Thomas; Boulesteix, Anne-Laure

    2015-11-04

    In applications of supervised statistical learning in the biomedical field it is necessary to assess the prediction error of the respective prediction rules. Often, data preparation steps are performed on the dataset in its entirety before training/test-set-based prediction error estimation by cross-validation (CV), an approach referred to as "incomplete CV". Whether incomplete CV can result in an optimistically biased error estimate depends on the data preparation step under consideration. Several empirical studies have investigated the extent of bias induced by performing preliminary supervised variable selection before CV. To our knowledge, however, the potential bias induced by other data preparation steps has not yet been examined in the literature. In this paper we investigate this bias for two common data preparation steps: normalization and principal component analysis for dimension reduction of the covariate space (PCA). Furthermore we obtain preliminary results for the following steps: optimization of tuning parameters, variable filtering by variance and imputation of missing values. We devise the easily interpretable and general measure CVIIM ("CV Incompleteness Impact Measure") to quantify the extent of bias induced by incomplete CV with respect to a data preparation step of interest. This measure can be used to determine whether a specific data preparation step should, as a general rule, be performed in each CV iteration or whether an incomplete CV procedure would be acceptable in practice. We apply CVIIM to large collections of microarray datasets to answer this question for normalization and PCA. Performing normalization on the entire dataset before CV did not result in a noteworthy optimistic bias in any of the investigated cases. In contrast, when performing PCA before CV, medium to strong underestimates of the prediction error were observed in multiple settings. While the investigated forms of normalization can be safely performed before CV, PCA

  13. Ankle joint range of motion measurements in spastic cerebral palsy children: intraobserver and interobserver reliability and reproducibility of goniometry and visual estimation.

    Science.gov (United States)

    Allington, Nanni J; Leroy, Nathalie; Doneux, Carole

    2002-07-01

    The aim of this study was to assess the intra- and interobserver reliability and reproducibility of goniometry and visual estimation of ankle joint range of motion measurements in children with spastic cerebral palsy. Forty-six ankles of 24 spastic cerebral palsy children were measured under a strict protocol. The global mean measurement error was 5 degrees (SD, 5 degrees) for intra- and interobserver measurements and 3 degrees (SD, 3 degrees) for goniometry versus visual estimation. Statistical analysis showed high reliability for intra- and interobserver measurements (r > 0.75) and between visual estimation and goniometry (correlation coefficient r > 0.967; concordance coefficient r > 0.957). Both visual estimation and goniometry ankle range-of-motion measurements are reliable and reproducible in spastic cerebral palsy children if a strict but simple protocol is applied.
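The concordance coefficient reported above is typically Lin's concordance correlation coefficient, which penalizes both scatter and systematic offset between paired measurements. It can be computed directly from paired goniometer and visual-estimation readings; the sketch below uses population moments and purely illustrative data, not the study's measurements.

```python
import statistics

def concordance_ccc(x, y):
    # Lin's concordance correlation coefficient: agreement of paired
    # measurements with the 45-degree line of perfect concordance.
    # CCC = 2*cov(x, y) / (var(x) + var(y) + (mean(x) - mean(y))**2)
    n = len(x)
    mx, my = statistics.mean(x), statistics.mean(y)
    vx = sum((a - mx) ** 2 for a in x) / n
    vy = sum((b - my) ** 2 for b in y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / n
    return 2 * cov / (vx + vy + (mx - my) ** 2)

# Illustrative paired readings (degrees): goniometer vs visual estimation.
goniometer = [10.0, 15.0, 20.0, 25.0, 30.0]
visual = [12.0, 14.0, 21.0, 24.0, 31.0]
```

Unlike Pearson's r, the CCC drops below 1 even for perfectly correlated readings if one method is systematically offset from the other.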

  14. An impedance bridge measuring the capacitance ratio in the high frequency range up to 1 MHz

    Science.gov (United States)

    Bee Kim, Dan; Kew Lee, Hyung; Kim, Wan-Seop

    2017-02-01

    This paper describes a 2-terminal-pair impedance bridge measuring the capacitance ratio in the high frequency range up to 1 MHz. The bridge was configured with two voltage sources and a phase control unit that enabled bridge balance by synchronizing the voltage sources with enhanced phase resolution. Because it employs no transformers (inductive voltage divider, injection and detection transformers, etc.), the bridge system is simple to set up, and the balance procedure is quick and easy. Using this dual-source coaxial bridge, the 1:1 and 10:1 capacitance ratios were measured with 1 pF-1 nF capacitors in the frequency range from 1 kHz to 1 MHz. The measurement values obtained by the dual-source bridge were then compared with reference values measured using a commercial precision capacitance bridge (AH2700A), the Z-matrix method developed in-house, and the 4-terminal-pair coaxial bridge of the Czech Metrological Institute. All the measurements agreed within the reference uncertainty range, of the order of 10^-6 to 10^-5, confirming the bridge's ability as a trustworthy tool for measuring the capacitance ratio in the high frequency range.
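In an idealized dual-source ratio bridge of this kind, balance means the detector current is nulled, so the capacitance ratio follows directly from the complex ratio of the two source voltages. The sketch below is a simplified lumped model under stated assumptions (perfect sources, no cable or parasitic impedances); the component values are illustrative only.

```python
import math

OMEGA = 2 * math.pi * 1e6  # angular frequency at 1 MHz

def detector_current(v1, v2, c1, c2):
    # Idealized model: both sources drive the detector node through their
    # respective capacitors; at balance the two currents cancel exactly.
    return v1 * 1j * OMEGA * c1 + v2 * 1j * OMEGA * c2

def balance_voltage(v1, c1, c2):
    # Complex voltage the second source must supply for a null detector,
    # from v1*jwC1 + v2*jwC2 = 0  =>  v2 = -v1 * C1/C2.
    return -v1 * c1 / c2
```

With a 10:1 capacitance ratio the second source must supply ten times the first source's voltage in antiphase, which is why fine amplitude and phase resolution of the sources replaces the inductive voltage divider.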

  15. A Statistical Method and Tool to Account for Indirect Calorimetry Differential Measurement Error in a Single-Subject Analysis.

    Science.gov (United States)

    Tenan, Matthew S

    2016-01-01

    Indirect calorimetry and oxygen consumption (VO2) are accepted tools in human physiology research. It has been shown that indirect calorimetry systems exhibit differential measurement error, where the error of a device is systematically different depending on the volume of gas flow. Moreover, systems commonly report multiple decimal places of precision, giving the clinician a false sense of device accuracy. The purpose of this manuscript is to demonstrate the use of a novel statistical tool which models the reliability of two specific indirect calorimetry systems, Douglas bag and Parvomedics 2400 TrueOne, as univariate normal distributions and implements the distribution overlapping coefficient to determine the likelihood that two VO2 measures are the same. A command line implementation of the tool is available for the R programming language as well as a web-based graphical user interface (GUI). This tool is valuable for clinicians performing a single-subject analysis as well as researchers interested in determining if their observed differences exceed the error of the device.
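The overlapping coefficient used by the tool is the shared area under two probability density functions: 1 means the two VO2 readings are statistically indistinguishable given device error, 0 means no overlap. For two univariate normals it can be approximated by numerical integration, as in this sketch (an illustration of the statistic, not the author's R implementation; names are hypothetical).

```python
import math

def normal_pdf(x, mu, sigma):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def overlap_coefficient(mu1, s1, mu2, s2, n=20000):
    # Midpoint-rule integration of min(f, g) over a range covering
    # both distributions out to six standard deviations.
    lo = min(mu1 - 6 * s1, mu2 - 6 * s2)
    hi = max(mu1 + 6 * s1, mu2 + 6 * s2)
    dx = (hi - lo) / n
    return sum(min(normal_pdf(lo + (i + 0.5) * dx, mu1, s1),
                   normal_pdf(lo + (i + 0.5) * dx, mu2, s2))
               for i in range(n)) * dx
```

Two VO2 readings would each be modeled as a normal centered on the measured value with the device's flow-dependent error as the standard deviation; a large overlap means the apparent difference is within device noise.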

  16. Evaluation of EIT systems and algorithms for handling full void fraction range in two-phase flow measurement

    Science.gov (United States)

    Jia, Jiabin; Wang, Mi; Faraj, Yousef

    2015-01-01

    In aqueous-based two-phase flow, if the void fraction of the dispersed phase exceeds 0.25, conventional electrical impedance tomography (EIT) produces a considerable error due to the linear approximation of the sensitivity back-projection (SBP) method, which limits EIT's wider application in the process industry. In this paper, an EIT sensing system able to handle the full void fraction range in two-phase flow is reported. This EIT system employs a voltage source, conducts true mutual impedance measurement and reconstructs online images with the modified sensitivity back-projection (MSBP) algorithm. The capability of the Maxwell relationship to convey the full void fraction range is investigated, and the limitation of the linear sensitivity back-projection method is analysed. The MSBP algorithm is used to derive the relative conductivity change in the evaluation. A series of static and dynamic experiments demonstrates that the mean void fraction obtained using this EIT system agrees well with reference void fractions over the range from 0 to 1. The combination of the new EIT system and the MSBP algorithm would significantly extend the applications of EIT in industrial process measurement.
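The Maxwell relationship mentioned above maps the measured mixture conductivity to void fraction. One common form, assuming a non-conducting dispersed phase (e.g. gas) in a conducting continuous phase, is alpha = 2(sigma_c - sigma_m) / (2*sigma_c + sigma_m); the sketch below assumes this form, which may differ in detail from the variant used in the paper.

```python
def maxwell_void_fraction(sigma_c, sigma_m):
    # Maxwell relation for a non-conducting dispersed phase:
    #   alpha = 2 * (sigma_c - sigma_m) / (2 * sigma_c + sigma_m)
    # sigma_c: conductivity of the continuous (aqueous) phase
    # sigma_m: measured conductivity of the mixture
    return 2.0 * (sigma_c - sigma_m) / (2.0 * sigma_c + sigma_m)
```

The relation is nonlinear in sigma_m, which is one reason a linear back-projection of conductivity change breaks down at high void fractions.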

  17. Propagation Loss Measurements at 400 Hertz in the BIFI Range Using a Towed Source.

    Science.gov (United States)

    1971-01-27

    series using the BIFI Range (Reference 1) located between Block Island, Rhode Island and Fishers Island, New York. Three types of acoustic tests were... Colossus theoretical predictions (Reference 7). The agreement is fairly good; it is apparent, however, that the Colossus predictions do not take into...given in reference 11. Propagation loss was measured as a function of range and the results compared to the Colossus predictions (reference 7). The

  18. Varying the item format improved the range of measurement in patient-reported outcome measures assessing physical function

    DEFF Research Database (Denmark)

    Liegl, Gregor; Gandek, Barbara; Fischer, H. Felix

    2017-01-01

    Background: Physical function (PF) is a core patient-reported outcome domain in clinical trials in rheumatic diseases. Frequently used PF measures have ceiling effects, leading to large sample size requirements and low sensitivity to change. In most of these instruments, the response category...... easy, increases the range of precise measurement of self-reported PF. Methods: Three five-item PF short forms were constructed from the Patient-Reported Outcomes Measurement Information System (PROMIS®) wave 1 data. All forms included the same physical activities but varied in item stem and response...... precision between the short forms using different item formats. Results: Sufficient unidimensionality of all short-form items and the original PF item bank was supported. Compared to formats A and B, format C increased the range of reliable measurement by about 0.5 standard deviations on the positive side...

  19. Accurate Measurement of First Metatarsophalangeal Range of Motion in Patients With Hallux Rigidus.

    Science.gov (United States)

    Vulcano, Ettore; Tracey, Joseph A; Myerson, Mark S

    2016-05-01

    The reliability of range of motion (ROM) measurements has not been established for the hallux metatarsophalangeal (MTP) joint in patients with hallux rigidus. The aim of the present study was to prospectively assess the clinical versus radiographic difference in ROM of the arthritic hallux MTP joint. One hundred consecutive patients who presented with any grade of hallux rigidus were included in this prospective study to determine hallux MTP range of motion. Clinical range of motion was measured using a goniometer and radiographic range of motion on dynamic x-rays. The mean difference between clinical and radiographic dorsiflexion was 13 degrees; clinically measured dorsiflexion was equal to or less than radiographically measured dorsiflexion. The difference was significantly greater in patients with a clinical dorsiflexion of less than 30 degrees than in patients with 30 degrees or more. Radiographic measurement of hallux dorsiflexion had excellent intra- and interobserver reliability. We describe a reliable, reproducible, and straightforward method of measuring hallux MTP ROM that improves upon clinical ROM measurement. Level II, prospective comparative study. © The Author(s) 2015.

  20. TYPE Ia SUPERNOVA DISTANCE MODULUS BIAS AND DISPERSION FROM K-CORRECTION ERRORS: A DIRECT MEASUREMENT USING LIGHT CURVE FITS TO OBSERVED SPECTRAL TIME SERIES

    Energy Technology Data Exchange (ETDEWEB)

    Saunders, C.; Aldering, G.; Aragon, C.; Bailey, S.; Childress, M.; Fakhouri, H. K.; Kim, A. G. [Physics Division, Lawrence Berkeley National Laboratory, 1 Cyclotron Road, Berkeley, CA 94720 (United States); Antilogus, P.; Bongard, S.; Canto, A.; Cellier-Holzem, F.; Guy, J. [Laboratoire de Physique Nucléaire et des Hautes Énergies, Université Pierre et Marie Curie Paris 6, Université Paris Diderot Paris 7, CNRS-IN2P3, 4 Place Jussieu, F-75252 Paris Cedex 05 (France); Baltay, C. [Department of Physics, Yale University, New Haven, CT 06250-8121 (United States); Buton, C.; Chotard, N.; Copin, Y.; Gangler, E. [Université de Lyon, Université Lyon 1, CNRS/IN2P3, Institut de Physique Nucléaire de Lyon, 69622 Villeurbanne (France); Feindt, U.; Kerschhaggl, M.; Kowalski, M. [Physikalisches Institut, Universität Bonn, Nußallee 12, D-53115 Bonn (Germany); and others

    2015-02-10

    We estimate systematic errors due to K-corrections in standard photometric analyses of high-redshift Type Ia supernovae. Errors due to K-correction occur when the spectral template model underlying the light curve fitter poorly represents the actual supernova spectral energy distribution, meaning that the distance modulus cannot be recovered accurately. To quantify this effect, synthetic photometry is performed on artificially redshifted spectrophotometric data from 119 low-redshift supernovae from the Nearby Supernova Factory, and the resulting light curves are fit with a conventional light curve fitter. We measure the variation in the standardized magnitude that would be fit for a given supernova if located at a range of redshifts and observed with various filter sets corresponding to current and future supernova surveys. We find significant variation in the measurements of the same supernovae placed at different redshifts regardless of the filters used, which causes dispersion greater than ∼0.05 mag for photometric measurements using Sloan-like filters and a bias that corresponds to a 0.03 shift in w when applied to an outside data set. To test the effect of a shift in supernova population or environment at higher redshifts, we repeat our calculations with the addition of a reweighting of the supernovae as a function of redshift and find that this strongly affects the results and would have repercussions for cosmology. We discuss possible methods to reduce the contribution of the K-correction bias and uncertainty.
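Synthetic photometry of an artificially redshifted spectrum, as used in the study, amounts to stretching the wavelength grid by (1+z), diluting the flux density by the same factor, and integrating through a filter passband. The sketch below uses an idealized top-hat filter and deliberately omits cosmological distance dimming, so only the K-correction-like band-shift effect remains; all names and values are illustrative, not the study's pipeline.

```python
import math

def redshift_spectrum(wave, flux, z):
    # Observed-frame wavelengths stretch by (1+z); flux density per unit
    # wavelength shrinks by (1+z). Distance dimming is intentionally omitted.
    return [w * (1.0 + z) for w in wave], [f / (1.0 + z) for f in flux]

def synthetic_mag(wave, flux, band, zp=0.0):
    # Synthetic magnitude through a top-hat filter band = (lo, hi):
    # flux-weighted average over the in-band samples, on a magnitude scale.
    lo, hi = band
    num = sum(f * w for w, f in zip(wave, flux) if lo <= w <= hi)
    den = sum(w for w in wave if lo <= w <= hi)
    return zp - 2.5 * math.log10(num / den)

# Illustrative flat rest-frame spectrum sampled every 10 Angstroms.
rest_wave = [float(w) for w in range(3000, 10001, 10)]
rest_flux = [1.0 for _ in rest_wave]
```

For a real spectrum, repeating this over a grid of redshifts with a survey's actual filter curves reproduces the study's experiment: any mismatch between the template and the true spectrum shows up as a redshift-dependent magnitude error.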

