WorldWideScience

Sample records for range measurement error

  1. Error Analysis of Relative Calibration for RCS Measurement on Ground Plane Range

    Directory of Open Access Journals (Sweden)

    Wu Peng-fei

    2012-03-01

Ground plane range is a kind of outdoor Radar Cross Section (RCS) test range used for static measurement of full-size or scaled targets. Starting from the characteristics of the ground plane range, the impact of the environment on targets and calibrators during calibration in RCS measurements is analyzed. The relative calibration error produced by the different illumination of the target and the calibrator is studied. The relative calibration technique used on a ground plane range is to place the calibrator on a fixed auxiliary pylon somewhere between the radar and the target under test. By considering the effects of ground reflection and the antenna pattern, the relationship between the magnitude of the echoes and the position of the calibrator is discussed. For different distances between the calibrator and the target, the difference between free space and the ground plane range is studied and the relative calibration error is calculated. Numerical simulation results are presented with useful conclusions: the relative calibration error varies with calibrator position, frequency and antenna beamwidth, and in most cases placing the calibrator close to the target keeps the error under control.
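The interference effect described above can be sketched with a simple two-ray model. The snippet below is a minimal illustration, assuming a perfectly reflecting flat ground (reflection coefficient -1), the flat-earth path-difference approximation delta = 2*h_antenna*h_target/distance, and entirely hypothetical geometry and wavelength; it is not the paper's simulation.

```python
import math

def propagation_factor(h_ant, h_pt, dist, wavelength, refl=-1.0):
    """One-way two-ray propagation factor |1 + refl*exp(j*k*delta)|,
    with the flat-earth path-difference approximation delta = 2*h_ant*h_pt/dist."""
    delta = 2.0 * h_ant * h_pt / dist
    k = 2.0 * math.pi / wavelength
    return abs(1.0 + refl * complex(math.cos(k * delta), math.sin(k * delta)))

def relative_calibration_error_db(h_ant, h_cal, d_cal, h_tgt, d_tgt, wavelength):
    """Relative calibration error (dB) when calibrator and target are illuminated
    differently; two-way radar propagation gives a factor F**4 (40*log10 in dB)."""
    f_cal = propagation_factor(h_ant, h_cal, d_cal, wavelength)
    f_tgt = propagation_factor(h_ant, h_tgt, d_tgt, wavelength)
    return 40.0 * math.log10(f_cal / f_tgt)

# Hypothetical geometry: identical calibrator/target placement gives zero error,
# while moving the calibrator closer to the radar introduces a sizable error.
err_matched = relative_calibration_error_db(3.0, 2.0, 100.0, 2.0, 100.0, 0.032)
err_offset = relative_calibration_error_db(3.0, 2.0, 70.0, 2.0, 100.0, 0.032)
```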

  2. Potentiometric Measurement of Transition Ranges and Titration Errors for Acid/Base Indicators

    Science.gov (United States)

    Flowers, Paul A.

    1997-07-01

    Sophomore analytical chemistry courses typically devote a substantial amount of lecture time to acid/base equilibrium theory, and usually include at least one laboratory project employing potentiometric titrations. In an effort to provide students a laboratory experience that more directly supports their classroom discussions on this important topic, an experiment involving potentiometric measurement of transition ranges and titration errors for common acid/base indicators has been developed. The pH and visually-assessed color of a millimolar strong acid/base system are monitored as a function of added titrant volume, and the resultant data plotted to permit determination of the indicator's transition range and associated titration error. Student response is typically quite positive, and the measured quantities correlate reasonably well to literature values.

  3. Compact disk error measurements

    Science.gov (United States)

    Howe, D.; Harriman, K.; Tehranchi, B.

    1993-01-01

The objectives of this project are as follows: provide hardware and software that will perform simple, real-time, high-resolution (single-byte) measurement of the error-burst and good-data-gap statistics seen by a photoCD player read channel when recorded CD write-once discs of variable quality (i.e., condition) are being read; extend the above system to enable measurement of the hard-decision (i.e., 1-bit error flags) and soft-decision (i.e., 2-bit error flags) decoding information that is produced/used by the Cross-Interleaved Reed-Solomon Code (CIRC) block decoder employed in the photoCD player read channel; construct a model that uses data obtained via the systems described above to produce meaningful estimates of output error rates (due to both uncorrected ECC words and misdecoded ECC words) when a CD disc having specific (measured) error statistics is read (completion date to be determined); and check the hypothesis that current adaptive CIRC block decoders are optimized for pressed (DAD/ROM) CD discs. If warranted, do a conceptual design of an adaptive CIRC decoder that is optimized for write-once CD discs.
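The first objective — collecting error-burst and good-data-gap statistics from a stream of per-byte error flags — amounts to run-length encoding. A minimal sketch (the flag stream here is invented for illustration):

```python
from itertools import groupby

def burst_gap_stats(error_flags):
    """Run-length statistics of per-byte error flags from a read channel:
    returns (error-burst lengths, good-data gap lengths)."""
    bursts, gaps = [], []
    for flag, run in groupby(error_flags):
        (bursts if flag else gaps).append(sum(1 for _ in run))
    return bursts, gaps

# 1 = byte in error, 0 = good byte (illustrative sequence)
flags = [0, 0, 1, 1, 1, 0, 1, 0, 0, 0, 0, 1, 1]
bursts, gaps = burst_gap_stats(flags)
# bursts -> [3, 1, 2], gaps -> [2, 1, 4]
```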

  4. Correcting AUC for Measurement Error.

    Science.gov (United States)

    Rosner, Bernard; Tworoger, Shelley; Qiu, Weiliang

    2015-12-01

Diagnostic biomarkers are used frequently in epidemiologic and clinical work. The ability of a diagnostic biomarker to discriminate between subjects who develop disease (cases) and subjects who do not (controls) is often measured by the area under the receiver operating characteristic curve (AUC). Diagnostic biomarkers are usually measured with error, and ignoring measurement error can bias the estimate of AUC, resulting in misleading interpretation of the efficacy of a diagnostic biomarker. Several methods have been proposed to correct AUC for measurement error, most of which require the normality assumption for the distributions of diagnostic biomarkers. In this article, we propose a new method to correct AUC for measurement error and derive approximate confidence limits for the corrected AUC. The proposed method does not require the normality assumption. Both real data analyses and simulation studies show good performance of the proposed measurement error correction method.
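The attenuation of AUC under classical measurement error can be illustrated with a small simulation. This sketch shows the bias the paper corrects, not the proposed correction method itself; all distribution parameters are hypothetical.

```python
import random

def empirical_auc(cases, controls):
    """Mann-Whitney estimate of AUC: P(case marker > control marker)."""
    wins = sum((x > y) + 0.5 * (x == y) for x in cases for y in controls)
    return wins / (len(cases) * len(controls))

rng = random.Random(0)
n = 400
# True biomarker: cases shifted by 1 SD, so true AUC = Phi(1/sqrt(2)), about 0.76
cases = [rng.gauss(1.0, 1.0) for _ in range(n)]
controls = [rng.gauss(0.0, 1.0) for _ in range(n)]
# The same biomarker observed with classical additive measurement error (SD = 1)
noisy_cases = [x + rng.gauss(0.0, 1.0) for x in cases]
noisy_controls = [y + rng.gauss(0.0, 1.0) for y in controls]

auc_true = empirical_auc(cases, controls)
auc_noisy = empirical_auc(noisy_cases, noisy_controls)  # attenuated toward 0.5
```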

  5. Standard error of measurement of five health utility indexes across the range of health for use in estimating reliability and responsiveness

    Science.gov (United States)

    Palta, Mari; Chen, Han-Yang; Kaplan, Robert M.; Feeny, David; Cherepanov, Dasha; Fryback, Dennis

    2011-01-01

Background Standard errors of measurement (SEMs) of health-related quality of life (HRQoL) indexes are not well characterized. SEM is needed to estimate responsiveness statistics and provides guidance on using indexes at the individual and group level. SEM is also a component of reliability. Purpose To estimate the SEM of five HRQoL indexes. Design The National Health Measurement Study (NHMS) was a population-based telephone survey. The Clinical Outcomes and Measurement of Health Study (COMHS) provided repeated measures 1 and 6 months post cataract surgery. Subjects 3844 randomly selected adults from the non-institutionalized population 35 to 89 years old in the contiguous United States and 265 cataract patients. Measurements The SF-6D (scored from the SF-36v2™), QWB-SA, EQ-5D, HUI2 and HUI3 were included. An item-response theory (IRT) approach captured joint variation in indexes into a composite construct of health (theta). We estimated: (1) the test-retest standard deviation (SEM-TR) from COMHS, (2) the structural standard deviation (SEM-S) around the composite construct from NHMS and (3) corresponding reliability coefficients. Results SEM-TR was 0.068 (SF-6D), 0.087 (QWB-SA), 0.093 (EQ-5D), 0.100 (HUI2) and 0.134 (HUI3), while SEM-S was 0.071, 0.094, 0.084, 0.074 and 0.117, respectively. These translate into reliability coefficients for SF-6D: 0.66 (COMHS) and 0.71 (NHMS), for QWB-SA: 0.59 and 0.64, for EQ-5D: 0.61 and 0.70, for HUI2: 0.64 and 0.80, and for HUI3: 0.75 and 0.77, respectively. The SEM varied considerably across levels of health, especially for HUI2, HUI3 and EQ-5D, and was strongly influenced by ceiling effects. Limitations Repeated measures were five months apart and the estimated theta contains measurement error. Conclusions The two types of SEM are similar and substantial for all the indexes, and vary across the range of health. PMID:20935280
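The quantities above are linked by standard classical-test-theory formulas: the test-retest SEM is the standard deviation of the paired differences divided by sqrt(2), and reliability = 1 - SEM^2/SD^2. A minimal sketch with invented scores (not NHMS/COMHS data):

```python
import math
import statistics

def sem_from_test_retest(scores_t1, scores_t2):
    """Test-retest SEM: standard deviation of paired differences / sqrt(2)."""
    diffs = [a - b for a, b in zip(scores_t1, scores_t2)]
    return statistics.stdev(diffs) / math.sqrt(2)

def reliability(sem, sd_total):
    """Classical-test-theory reliability implied by an SEM: 1 - SEM^2 / SD^2."""
    return 1.0 - (sem / sd_total) ** 2

# Hypothetical utility scores for eight subjects at two occasions
t1 = [0.71, 0.85, 0.62, 0.90, 0.55, 0.78, 0.66, 0.81]
t2 = [0.74, 0.80, 0.65, 0.88, 0.60, 0.74, 0.70, 0.79]
sem_tr = sem_from_test_retest(t1, t2)
```

For example, an SEM of 0.1 on an index with total SD 0.2 implies reliability 0.75.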

  6. Quantile Regression With Measurement Error

    KAUST Repository

    Wei, Ying

    2009-08-27

    Regression quantiles can be substantially biased when the covariates are measured with error. In this paper we propose a new method that produces consistent linear quantile estimation in the presence of covariate measurement error. The method corrects the measurement error induced bias by constructing joint estimating equations that simultaneously hold for all the quantile levels. An iterative EM-type estimation algorithm to obtain the solutions to such joint estimation equations is provided. The finite sample performance of the proposed method is investigated in a simulation study, and compared to the standard regression calibration approach. Finally, we apply our methodology to part of the National Collaborative Perinatal Project growth data, a longitudinal study with an unusual measurement error structure. © 2009 American Statistical Association.
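The covariate-measurement-error bias that motivates the paper can be reproduced with a toy median regression. This sketch shows only the attenuation, not the authors' joint-estimating-equation correction; it fits a slope-only median regression by grid search over the check (pinball) loss, with hypothetical data.

```python
import random

def pinball_loss(y, yhat, tau=0.5):
    """Quantile check loss at level tau (tau = 0.5 gives median regression)."""
    r = y - yhat
    return tau * r if r >= 0 else (tau - 1.0) * r

def median_slope(xs, ys, grid):
    """Slope-only median regression y = b*x, minimizing check loss over a grid."""
    return min(grid, key=lambda b: sum(pinball_loss(y, b * x) for x, y in zip(xs, ys)))

rng = random.Random(1)
n = 1000
x_true = [rng.gauss(0.0, 1.0) for _ in range(n)]
y = [2.0 * x + rng.gauss(0.0, 1.0) for x in x_true]        # true slope 2
x_obs = [x + rng.gauss(0.0, 1.0) for x in x_true]          # classical measurement error
grid = [i / 100.0 for i in range(0, 301)]
b_clean = median_slope(x_true, y, grid)  # near the true slope 2
b_noisy = median_slope(x_obs, y, grid)   # attenuated (here roughly halved)
```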

  7. Redundant measurements for controlling errors

    International Nuclear Information System (INIS)

    Ehinger, M.H.; Crawford, J.M.; Madeen, M.L.

    1979-07-01

    Current federal regulations for nuclear materials control require consideration of operating data as part of the quality control program and limits of error propagation. Recent work at the BNFP has revealed that operating data are subject to a number of measurement problems which are very difficult to detect and even more difficult to correct in a timely manner. Thus error estimates based on operational data reflect those problems. During the FY 1978 and FY 1979 R and D demonstration runs at the BNFP, redundant measurement techniques were shown to be effective in detecting these problems to allow corrective action. The net effect is a reduction in measurement errors and a significant increase in measurement sensitivity. Results show that normal operation process control measurements, in conjunction with routine accountability measurements, are sensitive problem indicators when incorporated in a redundant measurement program

  8. Soliton microcomb range measurement

    Science.gov (United States)

    Suh, Myoung-Gyun; Vahala, Kerry J.

    2018-02-01

    Laser-based range measurement systems are important in many application areas, including autonomous vehicles, robotics, manufacturing, formation flying of satellites, and basic science. Coherent laser ranging systems using dual-frequency combs provide an unprecedented combination of long range, high precision, and fast update rate. We report dual-comb distance measurement using chip-based soliton microcombs. A single pump laser was used to generate dual-frequency combs within a single microresonator as counterpropagating solitons. We demonstrated time-of-flight measurement with 200-nanometer precision at an averaging time of 500 milliseconds within a range ambiguity of 16 millimeters. Measurements at distances up to 25 meters with much lower precision were also performed. Our chip-based source is an important step toward miniature dual-comb laser ranging systems that are suitable for photonic integration.

  9. KMRR thermal power measurement error estimation

    International Nuclear Information System (INIS)

    Rhee, B.W.; Sim, B.S.; Lim, I.C.; Oh, S.K.

    1990-01-01

    The thermal power measurement error of the Korea Multi-purpose Research Reactor has been estimated by a statistical Monte Carlo method, and compared with those obtained by the other methods including deterministic and statistical approaches. The results show that the specified thermal power measurement error of 5% cannot be achieved if the commercial RTDs are used to measure the coolant temperatures of the secondary cooling system and the error can be reduced below the requirement if the commercial RTDs are replaced by the precision RTDs. The possible range of the thermal power control operation has been identified to be from 100% to 20% of full power
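A statistical Monte Carlo estimate of thermal power error of the kind described can be sketched as follows, with Q = m_dot * cp * dT and entirely hypothetical instrument accuracies (the 0.3 K temperature sigma stands in for a commercial RTD; a precision RTD would shrink the dominant dT term):

```python
import random
import statistics

def thermal_power_mw(m_dot, cp, t_in, t_out):
    """Coolant thermal power: m_dot [kg/s] * cp [kJ/(kg.K)] * dT [K], in MW."""
    return m_dot * cp * (t_out - t_in) / 1000.0

rng = random.Random(42)
samples = []
for _ in range(20000):
    m_dot = rng.gauss(150.0, 1.5)   # flow meter, hypothetical 1% (1-sigma)
    t_in = rng.gauss(35.0, 0.3)     # commercial-RTD-like 0.3 K (1-sigma), assumed
    t_out = rng.gauss(45.0, 0.3)
    samples.append(thermal_power_mw(m_dot, 4.18, t_in, t_out))

mean_q = statistics.mean(samples)                 # about 6.27 MW nominal
rel_err = statistics.stdev(samples) / mean_q      # fractional 1-sigma power error
```

With these assumed sigmas the relative power error lands above 4%, illustrating why a 5% target can be missed with commercial RTDs on a 10 K temperature rise.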

  10. Measurement Errors and Uncertainties Theory and Practice

    CERN Document Server

    Rabinovich, Semyon G

    2006-01-01

Measurement Errors and Uncertainties addresses the most important problems that physicists and engineers encounter when estimating errors and uncertainty. Building from the fundamentals of measurement theory, the author develops the theory of accuracy of measurements and offers a wealth of practical recommendations and examples of applications. This new edition covers a wide range of subjects, including: - Basic concepts of metrology - Measuring instruments characterization, standardization and calibration - Estimation of errors and uncertainty of single and multiple measurements - Modern probability-based methods of estimating measurement uncertainty With this new edition, the author completes the development of the new theory of indirect measurements. This theory provides more accurate and efficient methods for processing indirect measurement data. It eliminates the need to calculate the correlation coefficient - a stumbling block in measurement data processing - and offers for the first time a way to obtain...

  11. Standard error of measurement of 5 health utility indexes across the range of health for use in estimating reliability and responsiveness.

    Science.gov (United States)

    Palta, Mari; Chen, Han-Yang; Kaplan, Robert M; Feeny, David; Cherepanov, Dasha; Fryback, Dennis G

    2011-01-01

Standard errors of measurement (SEMs) of health-related quality of life (HRQoL) indexes are not well characterized. SEM is needed to estimate responsiveness statistics, and is a component of reliability. To estimate the SEM of 5 HRQoL indexes. The National Health Measurement Study (NHMS) was a population-based survey. The Clinical Outcomes and Measurement of Health Study (COMHS) provided repeated measures. A total of 3844 randomly selected adults from the noninstitutionalized population aged 35 to 89 y in the contiguous United States and 265 cataract patients. The SF-6D (scored from the SF-36v2™), QWB-SA, EQ-5D, HUI2, and HUI3 were included. An item-response theory approach captured joint variation in indexes into a composite construct of health (theta). The authors estimated 1) the test-retest standard deviation (SEM-TR) from COMHS, 2) the structural standard deviation (SEM-S) around theta from NHMS, and 3) reliability coefficients. SEM-TR was 0.068 (SF-6D), 0.087 (QWB-SA), 0.093 (EQ-5D), 0.100 (HUI2), and 0.134 (HUI3), whereas SEM-S was 0.071, 0.094, 0.084, 0.074, and 0.117, respectively. These yield reliability coefficients of 0.66 (COMHS) and 0.71 (NHMS) for SF-6D, 0.59 and 0.64 for QWB-SA, 0.61 and 0.70 for EQ-5D, 0.64 and 0.80 for HUI2, and 0.75 and 0.77 for HUI3, respectively. The SEM varied across levels of health, especially for HUI2, HUI3, and EQ-5D, and was influenced by ceiling effects. Limitations: Repeated measures were 5 mo apart, and the estimated theta contained measurement error. The 2 types of SEM are similar and substantial for all the indexes and vary across health.

  12. Atmospheric Error Correction of the Laser Beam Ranging

    Directory of Open Access Journals (Sweden)

    J. Saydi

    2014-01-01

Atmospheric models based on surface measurements of pressure, temperature, and relative humidity have been used to increase laser ranging accuracy by ray tracing. Atmospheric refraction can cause significant errors in laser ranging systems. In the present research, the atmospheric effects on the laser beam were investigated using the principles of laser ranging. The atmospheric correction was calculated for 0.532, 1.3, and 10.6 micron wavelengths under the weather conditions of Tehran, Isfahan, and Bushehr in Iran from March 2012 to March 2013, computed from monthly mean meteorological data obtained from the meteorological stations in those cities. The atmospheric correction was calculated for 11, 100, and 200 kilometer laser beam propagation paths at 30°, 60°, and 90° elevation angles for each path. The results showed that, for the same months and beam emission angles, the atmospheric correction was most accurate for the 10.6 micron wavelength, and that the laser ranging error decreased as the laser emission angle increased. The atmospheric corrections given by the Marini-Murray and Mendes-Pavlis models were compared for 0.532 micron.
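The reported decrease of ranging error with increasing emission angle follows already from the simplest flat-atmosphere mapping, in which the delay scales as 1/sin(elevation). The sketch below uses an assumed 2.4 m zenith delay purely as an order of magnitude; it is far cruder than the Marini-Murray or Mendes-Pavlis models discussed in the paper.

```python
import math

def range_error_m(zenith_delay_m, elevation_deg):
    """Flat-atmosphere mapping: one-way range error grows as 1/sin(elevation)."""
    return zenith_delay_m / math.sin(math.radians(elevation_deg))

# Assumed 2.4 m zenith delay; error shrinks as the beam rises toward zenith
err_30 = range_error_m(2.4, 30.0)   # largest error of the three angles
err_60 = range_error_m(2.4, 60.0)
err_90 = range_error_m(2.4, 90.0)   # smallest (zenith) error
```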

  13. Measurement error models with interactions

    Science.gov (United States)

    Midthune, Douglas; Carroll, Raymond J.; Freedman, Laurence S.; Kipnis, Victor

    2016-01-01

An important use of measurement error models is to correct regression models for bias due to covariate measurement error. Most measurement error models assume that the observed error-prone covariate (W) is a linear function of the unobserved true covariate (X) plus other covariates (Z) in the regression model. In this paper, we consider models for W that include interactions between X and Z. We derive the conditional distribution of...

  14. Measurement error in a single regressor

    NARCIS (Netherlands)

    Meijer, H.J.; Wansbeek, T.J.

    2000-01-01

    For the setting of multiple regression with measurement error in a single regressor, we present some very simple formulas to assess the result that one may expect when correcting for measurement error. It is shown where the corrected estimated regression coefficients and the error variance may lie,

  15. Impact of Measurement Error on Synchrophasor Applications

    Energy Technology Data Exchange (ETDEWEB)

    Liu, Yilu [Univ. of Tennessee, Knoxville, TN (United States); Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Gracia, Jose R. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Ewing, Paul D. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Zhao, Jiecheng [Univ. of Tennessee, Knoxville, TN (United States); Tan, Jin [Univ. of Tennessee, Knoxville, TN (United States); Wu, Ling [Univ. of Tennessee, Knoxville, TN (United States); Zhan, Lingwei [Univ. of Tennessee, Knoxville, TN (United States)

    2015-07-01

Phasor measurement units (PMUs), which provide synchrophasor measurements, are powerful diagnostic tools that can help avert catastrophic failures in the power grid. Because of this, PMU measurement errors are particularly worrisome. This report examines the internal and external factors contributing to PMU phase angle and frequency measurement errors and gives a reasonable explanation for them. It also analyzes the impact of those measurement errors on several synchrophasor applications: event location detection, oscillation detection, islanding detection, and dynamic line rating. The primary finding is that dynamic line rating is the application most likely to be influenced by measurement error. Other findings include the possibility of reporting nonoscillatory activity as an oscillation as the result of error, failing to detect oscillations submerged by error, and the unlikely impact of error on event location and islanding detection.

  16. Comparing biomarker measurements to a normal range: when to use standard error of the mean (SEM) or standard deviation (SD) confidence intervals tests

    Science.gov (United States)

    This commentary is the second of a series outlining one specific concept in interpreting biomarkers data. In the first, an observational method was presented for assessing the distribution of measurements before making parametric calculations. Here, the discussion revolves around...

  17. Errors of Inference Due to Errors of Measurement.

    Science.gov (United States)

    Linn, Robert L.; Werts, Charles E.

Failure to consider errors of measurement when using partial correlation or analysis of covariance techniques can result in erroneous conclusions. Certain aspects of this problem are discussed and particular attention is given to issues raised in a recent article by Brewer, Campbell, and Crano. (Author)

  18. Measurement error models with uncertainty about the error variance

    NARCIS (Netherlands)

    Oberski, D.L.; Satorra, A.

    2013-01-01

    It is well known that measurement error in observable variables induces bias in estimates in standard regression analysis and that structural equation models are a typical solution to this problem. Often, multiple indicator equations are subsumed as part of the structural equation model, allowing

  19. Measurement Error in Education and Growth Regressions

    NARCIS (Netherlands)

    Portela, M.; Teulings, C.N.; Alessie, R.

    The perpetual inventory method used for the construction of education data per country leads to systematic measurement error. This paper analyses the effect of this measurement error on GDP regressions. There is a systematic difference in the education level between census data and observations

  20. Measurement error in education and growth regressions

    NARCIS (Netherlands)

    Portela, Miguel; Teulings, Coen; Alessie, R.

    2004-01-01

    The perpetual inventory method used for the construction of education data per country leads to systematic measurement error. This paper analyses the effect of this measurement error on GDP regressions. There is a systematic difference in the education level between census data and observations

  1. The error model and experiment of measuring angular position error based on laser collimation

    Science.gov (United States)

    Cai, Yangyang; Yang, Jing; Li, Jiakun; Feng, Qibo

    2018-01-01

The rotary axis is the reference component of rotation motion. Angular position error is the most critical factor impairing machining precision among the six degree-of-freedom (DOF) geometric errors of a rotary axis. In this paper, a method for measuring the angular position error of a rotary axis based on laser collimation is thoroughly researched, the error model is established, and 360° full-range measurement is realized by using a high-precision servo turntable. The change of spatial attitude of each moving part is described accurately by 3×3 transformation matrices, and the influences of various factors on the measurement results are analyzed in detail. Experimental results show that the measurement method can achieve high measurement accuracy and a large measurement range.
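The 3×3 transformation-matrix description can be illustrated for a single axis: the residual rotation between the actual and commanded attitudes encodes the angular position error. A minimal sketch with a hypothetical 50 µrad error:

```python
import math

def rot_z(theta):
    """3x3 rotation matrix about the z (rotary) axis."""
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]]

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def transpose(a):
    return [[a[j][i] for j in range(3)] for i in range(3)]

# Commanded rotation vs actual rotation with a small angular position error
commanded = math.radians(30.0)
actual = commanded + 5e-5          # hypothetical 50 microradian error
# Residual rotation R_err = R_actual * R_commanded^T; extract the error angle
r_err = matmul(rot_z(actual), transpose(rot_z(commanded)))
error_angle = math.atan2(r_err[1][0], r_err[0][0])
```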

  2. Error calculations statistics in radioactive measurements

    International Nuclear Information System (INIS)

    Verdera, Silvia

    1994-01-01

Basic approaches and procedures frequently used in the practice of radioactive measurements. The statistical principles applied are part of Good Radiopharmaceutical Practices and quality assurance. Concept of error, and its classification into systematic and random errors. Statistical fundamentals: probability theory, population distributions (Bernoulli, Poisson, Gauss, Student's t), the χ2 test, and error propagation based on analysis of variance. Bibliography. z table, t-test table, Poisson index, χ2 table
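For Poisson counting statistics, the sqrt(N) rule combined with quadrature propagation gives the uncertainty of a background-subtracted count rate. A minimal sketch with invented counts:

```python
import math

def net_rate_and_sigma(gross_counts, t_gross, bkg_counts, t_bkg):
    """Net count rate (cps) with Poisson (sqrt-N) errors propagated in quadrature:
    sigma^2 = gross/t_gross^2 + bkg/t_bkg^2."""
    rate = gross_counts / t_gross - bkg_counts / t_bkg
    sigma = math.sqrt(gross_counts / t_gross**2 + bkg_counts / t_bkg**2)
    return rate, sigma

# Invented counts: 10000 gross in 100 s, 400 background in 100 s
rate, sigma = net_rate_and_sigma(10000, 100.0, 400, 100.0)
# rate = 96.0 cps, sigma = sqrt(1.0 + 0.04) ~ 1.02 cps
```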

  3. Measurement Error in Education and Growth Regressions

    NARCIS (Netherlands)

    Portela, Miguel; Alessie, Rob; Teulings, Coen

    2010-01-01

    The use of the perpetual inventory method for the construction of education data per country leads to systematic measurement error. This paper analyzes its effect on growth regressions. We suggest a methodology for correcting this error. The standard attenuation bias suggests that using these

  4. Assessing Measurement Error in Medicare Coverage

    Data.gov (United States)

U.S. Department of Health & Human Services — Assessing Measurement Error in Medicare Coverage from the National Health Interview Survey: using linked administrative data to validate Medicare coverage estimates...

  5. Fixturing error measurement and analysis using CMMs

    International Nuclear Information System (INIS)

    Wang, Y; Chen, X; Gindy, N

    2005-01-01

The influence of the fixture on the errors of a machined surface can be very significant. The machined surface errors generated during machining can be measured with a coordinate measurement machine (CMM) through the displacements of three coordinate systems on a fixture-workpiece pair in relation to the deviation of the machined surface. The surface errors consist of component movement, component twist, and the deviation between the actual machined surface and the defined tool path. A turbine blade fixture for a grinding operation is used as a case study.

  6. Aliasing errors in measurements of beam position and ellipticity

    International Nuclear Information System (INIS)

    Ekdahl, Carl

    2005-01-01

    Beam position monitors (BPMs) are used in accelerators and ion experiments to measure currents, position, and azimuthal asymmetry. These usually consist of discrete arrays of electromagnetic field detectors, with detectors located at several equally spaced azimuthal positions at the beam tube wall. The discrete nature of these arrays introduces systematic errors into the data, independent of uncertainties resulting from signal noise, lack of recording dynamic range, etc. Computer simulations were used to understand and quantify these aliasing errors. If required, aliasing errors can be significantly reduced by employing more than the usual four detectors in the BPMs. These simulations show that the error in measurements of the centroid position of a large beam is indistinguishable from the error in the position of a filament. The simulations also show that aliasing errors in the measurement of beam ellipticity are very large unless the beam is accurately centered. The simulations were used to quantify the aliasing errors in beam parameter measurements during early experiments on the DARHT-II accelerator, demonstrating that they affected the measurements only slightly, if at all
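The aliasing mechanism can be reproduced in a few lines: sample the known wall image-current distribution of an offset pencil beam at N equally spaced detectors and reconstruct the centroid from the first harmonic. The detector counts and beam offset below are illustrative, not the DARHT-II simulation.

```python
import math

def wall_signal(theta, rho, phi):
    """Image-current density on the beam-pipe wall (pipe radius normalized to 1)
    for a pencil beam at normalized radius rho and azimuth phi."""
    return (1.0 - rho**2) / (1.0 + rho**2 - 2.0 * rho * math.cos(theta - phi))

def estimated_x(n_detectors, rho, phi):
    """First-harmonic centroid estimate from n equally spaced wall detectors.
    Discrete sampling folds higher harmonics into the estimate (aliasing)."""
    thetas = [2.0 * math.pi * i / n_detectors for i in range(n_detectors)]
    s = [wall_signal(t, rho, phi) for t in thetas]
    return sum(si * math.cos(t) for si, t in zip(s, thetas)) / sum(s)

rho, phi = 0.5, math.radians(20.0)   # beam offset to half the pipe radius
x_true = rho * math.cos(phi)
err4 = abs(estimated_x(4, rho, phi) - x_true)   # standard four-button BPM
err8 = abs(estimated_x(8, rho, phi) - x_true)   # more detectors, less aliasing
```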

  7. Aliasing errors in measurements of beam position and ellipticity

    Science.gov (United States)

    Ekdahl, Carl

    2005-09-01

    Beam position monitors (BPMs) are used in accelerators and ion experiments to measure currents, position, and azimuthal asymmetry. These usually consist of discrete arrays of electromagnetic field detectors, with detectors located at several equally spaced azimuthal positions at the beam tube wall. The discrete nature of these arrays introduces systematic errors into the data, independent of uncertainties resulting from signal noise, lack of recording dynamic range, etc. Computer simulations were used to understand and quantify these aliasing errors. If required, aliasing errors can be significantly reduced by employing more than the usual four detectors in the BPMs. These simulations show that the error in measurements of the centroid position of a large beam is indistinguishable from the error in the position of a filament. The simulations also show that aliasing errors in the measurement of beam ellipticity are very large unless the beam is accurately centered. The simulations were used to quantify the aliasing errors in beam parameter measurements during early experiments on the DARHT-II accelerator, demonstrating that they affected the measurements only slightly, if at all.

  8. Quantifying and handling errors in instrumental measurements using the measurement error theory

    DEFF Research Database (Denmark)

    Andersen, Charlotte Møller; Bro, R.; Brockhoff, P.B.

    2003-01-01

    . This is a new way of using the measurement error theory. Reliability ratios illustrate that the models for the two fish species are influenced differently by the error. However, the error seems to influence the predictions of the two reference measures in the same way. The effect of using replicated x...... measurements. A new general formula is given for how to correct the least squares regression coefficient when a different number of replicated x-measurements is used for prediction than for calibration. It is shown that the correction should be applied when the number of replicates in prediction is less than...

  9. Nonclassical measurements errors in nonlinear models

    DEFF Research Database (Denmark)

    Madsen, Edith; Mulalic, Ismir

    Discrete choice models and in particular logit type models play an important role in understanding and quantifying individual or household behavior in relation to transport demand. An example is the choice of travel mode for a given trip under the budget and time restrictions that the individuals...... estimates of the income effect it is of interest to investigate the magnitude of the estimation bias and if possible use estimation techniques that take the measurement error problem into account. We use data from the Danish National Travel Survey (NTS) and merge it with administrative register data...... that contains very detailed information about incomes. This gives a unique opportunity to learn about the magnitude and nature of the measurement error in income reported by the respondents in the Danish NTS compared to income from the administrative register (correct measure). We find that the classical...

  10. Range walk error correction and modeling on Pseudo-random photon counting system

    Science.gov (United States)

    Shen, Shanshan; Chen, Qian; He, Weiji

    2017-08-01

Signal-to-noise ratio and depth accuracy are modeled for a pseudo-random ranging system with two random processes. The theoretical results developed herein capture the effects of code length and signal energy fluctuation and are shown to agree with Monte Carlo simulation measurements. First, the SNR is developed as a function of the code length. Using Geiger-mode avalanche photodiodes (GMAPDs), a longer code is shown to reduce the noise effect and improve the SNR. Second, the Cramer-Rao lower bound (CRLB) on range accuracy is derived to show that a longer code also yields better range accuracy. Combining the SNR and CRLB models, it follows that range accuracy can be improved by increasing the code length to reduce the noise-induced error. Third, the CRLB on range accuracy is shown to converge to previously published theories, and a Gaussian range walk model is introduced into the range accuracy analysis. Experimental tests also converge to the boundary model presented in this paper. It is shown that the depth error caused by fluctuation of the number of detected photon counts in the laser echo pulse leads to a depth drift of the Time Point Spread Function (TPSF). Finally, a numerical fitting function is used to determine the relationship between the depth error and the photon counting ratio. Depth error due to different echo energies is calibrated so that the corrected depth accuracy is improved to 1 cm.

  11. Errors and Correction of Precipitation Measurements in China

    Institute of Scientific and Technical Information of China (English)

    REN Zhihua; LI Mingqin

    2007-01-01

In order to discover the range of various errors in Chinese precipitation measurements and seek a correction method, 30 precipitation evaluation stations were set up countrywide before 1993. All the stations are reference stations in China. To seek a correction method for wind-induced error, a precipitation correction instrument called the "horizontal precipitation gauge" was devised beforehand. Field intercomparison observations covering 29,000 precipitation events have been conducted using one pit gauge, two elevated operational gauges, and one horizontal gauge at the above 30 stations. The range of precipitation measurement errors in China is obtained by analysis of the intercomparison measurement results, and the distributions of random and systematic errors in precipitation measurements are studied in this paper. A correction method, especially for wind-induced errors, is developed. The results prove that a power-function correlation exists between the precipitation amount caught by the horizontal gauge and the absolute difference between the observations of the operational gauge and the pit gauge, with a correlation coefficient of 0.99. For operational observations, precipitation correction can be carried out simply by parallel observation with a horizontal precipitation gauge. The precipitation accuracy after correction approaches that of the pit gauge. The correction method developed is simple and feasible.
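A power-function relationship of the kind reported can be fitted by ordinary least squares in log-log space. The gauge data below are synthetic, since the paper's actual coefficients are not given in the abstract.

```python
import math

def fit_power_law(xs, ys):
    """Least-squares fit of y = a * x**b, done linearly in log-log space."""
    lx = [math.log(x) for x in xs]
    ly = [math.log(y) for y in ys]
    n = len(xs)
    mx, my = sum(lx) / n, sum(ly) / n
    b = (sum((u - mx) * (v - my) for u, v in zip(lx, ly))
         / sum((u - mx) ** 2 for u in lx))
    a = math.exp(my - b * mx)
    return a, b

# Synthetic example: wind-induced catch deficit vs horizontal-gauge catch (mm),
# generated from an exact power law so the fit recovers a = 0.12, b = 0.8
horizontal = [0.5, 1.0, 2.0, 4.0, 8.0]
deficit = [0.12 * h ** 0.8 for h in horizontal]
a, b = fit_power_law(horizontal, deficit)
```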

  12. Errors in practical measurement in surveying, engineering, and technology

    International Nuclear Information System (INIS)

    Barry, B.A.; Morris, M.D.

    1991-01-01

    This book discusses statistical measurement, error theory, and statistical error analysis. Its topics include an introduction to measurement, measurement errors, the reliability of measurements, the probability theory of errors, measures of reliability, the reliability of repeated measurements, the propagation of errors in computing, errors and weights, practical application of the theory of errors in measurement, and two-dimensional errors. A bibliography is included, along with appendices addressing significant figures in measurement; basic concepts of probability and the normal probability curve; writing a sample specification for a procedure; classification, standards of accuracy, and general specifications of geodetic control surveys; the geoid; the frequency distribution curve; and the computer and calculator solution of problems

  13. System tuning and measurement error detection testing

    International Nuclear Information System (INIS)

    Krejci, Petr; Machek, Jindrich

    2008-09-01

    The project includes the use of the PEANO (Process Evaluation and Analysis by Neural Operators) system to verify the monitoring of the status of dependent measurements with a view to early measurement fault detection and estimation of selected signal levels. At the present stage, the system's capability to detect measurement errors was assessed and the quality of the estimates was evaluated for various system configurations and empirical model structures, and rules were sought for system training at chosen process data recording parameters and operating modes. The aim was to find a suitable system configuration and to document the quality of the tuned system on artificial failures

  14. Range-Measuring Video Sensors

    Science.gov (United States)

    Howard, Richard T.; Briscoe, Jeri M.; Corder, Eric L.; Broderick, David

    2006-01-01

    Optoelectronic sensors of a proposed type would perform the functions of both electronic cameras and triangulation- type laser range finders. That is to say, these sensors would both (1) generate ordinary video or snapshot digital images and (2) measure the distances to selected spots in the images. These sensors would be well suited to use on robots that are required to measure distances to targets in their work spaces. In addition, these sensors could be used for all the purposes for which electronic cameras have been used heretofore. The simplest sensor of this type, illustrated schematically in the upper part of the figure, would include a laser, an electronic camera (either video or snapshot), a frame-grabber/image-capturing circuit, an image-data-storage memory circuit, and an image-data processor. There would be no moving parts. The laser would be positioned at a lateral distance d to one side of the camera and would be aimed parallel to the optical axis of the camera. When the range of a target in the field of view of the camera was required, the laser would be turned on and an image of the target would be stored and preprocessed to locate the angle (a) between the optical axis and the line of sight to the centroid of the laser spot.
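The triangulation geometry described above reduces to a one-line formula: with the laser mounted a lateral distance d from the camera and aimed parallel to the optical axis, a spot seen at angle a off the axis lies at range R = d/tan(a). A minimal sketch follows; the function name and pixel-scale parameters are assumptions, not from the article.

```python
import math

def range_from_spot(pixel_offset, focal_length_px, baseline_m):
    """Range to the laser spot from its centroid offset in the image.

    The laser sits a lateral distance `baseline_m` from the camera and is
    aimed parallel to the optical axis, so the spot appears at angle
    a = atan(offset / f) off the axis and the range is baseline / tan(a).
    """
    a = math.atan2(pixel_offset, focal_length_px)
    return baseline_m / math.tan(a)
```

For example, with an assumed 1000-pixel focal length and a 0.1 m baseline, a spot centroid 20 pixels off-axis corresponds to a 5 m range; closer targets push the spot further off-axis.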

  15. Error sensitivity analysis in 10-30-day extended range forecasting by using a nonlinear cross-prediction error model

    Science.gov (United States)

    Xia, Zhiye; Xu, Lisheng; Chen, Hongbin; Wang, Yongqian; Liu, Jinbao; Feng, Wenlan

    2017-06-01

    Extended range forecasting of 10-30 days, which lies between medium-term and climate prediction in terms of timescale, plays a significant role in decision-making processes for the prevention and mitigation of disastrous meteorological events. The sensitivity to initial error, model parameter error, and random error in a nonlinear cross-prediction error (NCPE) model, and their stability over the prediction validity period in 10-30-day extended range forecasting, are analyzed quantitatively. The associated sensitivity of precipitable water, temperature, and geopotential height during cases of heavy rain and hurricane is also discussed. The results are summarized as follows. First, the initial error and random error interact. When the ratio of random error to initial error is small (10^-6 to 10^-2), minor variation in the random error cannot significantly change the dynamic features of a chaotic system, and therefore the random error has minimal effect on the prediction. When the ratio is in the range of 10^-1 to 10^2 (i.e., random error dominates), attention should be paid to the random error instead of only the initial error. When the ratio is around 10^-2 to 10^-1, both influences must be considered. Their mutual effects may bring considerable uncertainty to extended range forecasting, and de-noising is therefore necessary. Second, in terms of model parameter error, the embedding dimension m should be determined from the actual nonlinear time series. When m is small, the dynamic features of a chaotic system cannot be depicted because the structure of the attractor is incomplete. When m is large, prediction indicators can vanish because of the scarcity of phase points in phase space. A method for overcoming the cut-off effect (m > 4) is proposed. Third, for heavy rain, precipitable water is more sensitive to the prediction validity period than temperature or geopotential height; for hurricanes, however, geopotential height is most sensitive, followed by precipitable water.

  16. Adjusting for the Incidence of Measurement Errors in Multilevel ...

    African Journals Online (AJOL)

    the incidence of measurement errors using these techniques generally revealed coefficient estimates of ... physical, biological, social and medical science, measurement errors are found. The errors are ... (M) and Science and Technology (ST).

  17. Measurement error in longitudinal film badge data

    International Nuclear Information System (INIS)

    Marsh, J.L.

    2002-04-01

    The classical measurement error model is that of a simple linear regression with unobservable variables. Information about the covariates is available only through error-prone measurements, usually with an additive structure. Ignoring these errors has been shown to result in biased regression coefficients, reduced power of hypothesis tests and increased variability of parameter estimates. Radiation is known to be a causal factor for certain types of leukaemia. This link is mainly substantiated by the Atomic Bomb Survivor study, the Ankylosing Spondylitis Patients study, and studies of various other patients irradiated for therapeutic purposes. The carcinogenic relationship is believed to be a linear or quadratic function of dose, but the risk estimates differ widely between studies. Previous cohort studies of the Sellafield workforce have used the cumulative annual exposure data for their risk estimates. The current 1:4 matched case-control study also uses the individual workers' film badge data, the majority of which has been unavailable in computerised form. The results from the 1:4 matched (on dates of birth and employment, sex and industrial status) case-control study are compared and contrasted with those for a 1:4 nested (within the worker cohort and matched on the same factors) case-control study using annual doses. The data consist of 186 cases and 744 controls from the workforces of four BNFL sites: Springfields, Sellafield, Capenhurst and Chapelcross. Initial logistic regressions turned up some surprising contradictory results, which led to a re-sampling of Sellafield mortality controls without the date-of-employment matching factor. It is suggested that overmatching is the cause of the contradictory results. Comparisons of the two measurements of radiation exposure suggest a strongly linear relationship with non-Normal errors. A method has been developed, using the technique of regression calibration, to deal with these errors in a case-control study context
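Regression calibration, mentioned at the end of this abstract, works by replacing the error-prone measurement W = X + U with its conditional expectation E[X | W] before fitting. A schematic simulation under a classical additive error model (all parameter values below are assumptions, not from the thesis):

```python
import numpy as np

rng = np.random.default_rng(0)
n, beta = 50_000, 0.5
sigma_x, sigma_u = 1.0, 0.8          # assumed SDs of true dose and error

x = rng.normal(0.0, sigma_x, n)      # true (unobserved) cumulative dose
w = x + rng.normal(0.0, sigma_u, n)  # film-badge measurement, classical error
y = beta*x + rng.normal(0.0, 1.0, n) # outcome with linear dose response

# Naive fit on w: slope attenuated by sigma_x^2 / (sigma_x^2 + sigma_u^2).
naive = np.polyfit(w, y, 1)[0]

# Regression calibration: fit on E[X | W] instead.
lam = sigma_x**2 / (sigma_x**2 + sigma_u**2)
x_hat = w.mean() + lam*(w - w.mean())
calibrated = np.polyfit(x_hat, y, 1)[0]
```

The naive slope is biased toward zero by the reliability factor, while the calibrated fit recovers the true coefficient; in practice the error variance must itself be estimated, e.g. from repeated badge readings.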

  18. Impact of exposure measurement error in air pollution epidemiology: effect of error type in time-series studies.

    Science.gov (United States)

    Goldman, Gretchen T; Mulholland, James A; Russell, Armistead G; Strickland, Matthew J; Klein, Mitchel; Waller, Lance A; Tolbert, Paige E

    2011-06-22

    Two distinctly different types of measurement error are Berkson and classical. Impacts of measurement error in epidemiologic studies of ambient air pollution are expected to depend on error type. We characterize measurement error due to instrument imprecision and spatial variability as multiplicative (i.e. additive on the log scale) and model it over a range of error types to assess impacts on risk ratio estimates both on a per measurement unit basis and on a per interquartile range (IQR) basis in a time-series study in Atlanta. Daily measures of twelve ambient air pollutants were analyzed: NO2, NOx, O3, SO2, CO, PM10 mass, PM2.5 mass, and PM2.5 components sulfate, nitrate, ammonium, elemental carbon and organic carbon. Semivariogram analysis was applied to assess spatial variability. Error due to this spatial variability was added to a reference pollutant time-series on the log scale using Monte Carlo simulations. Each of these time-series was exponentiated and introduced to a Poisson generalized linear model of cardiovascular disease emergency department visits. Measurement error resulted in reduced statistical significance for the risk ratio estimates for all amounts (corresponding to different pollutants) and types of error. When modelled as classical-type error, risk ratios were attenuated, particularly for primary air pollutants, with average attenuation in risk ratios on a per unit of measurement basis ranging from 18% to 92% and on an IQR basis ranging from 18% to 86%. When modelled as Berkson-type error, risk ratios per unit of measurement were biased away from the null hypothesis by 2% to 31%, whereas risk ratios per IQR were attenuated (i.e. biased toward the null) by 5% to 34%. For CO modelled error amount, a range of error types were simulated and effects on risk ratio bias and significance were observed. For multiplicative error, both the amount and type of measurement error impact health effect estimates in air pollution epidemiology. By modelling
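The opposite behaviour of the two error types can be reproduced in a few lines: classical error (measurement = truth + noise) attenuates a regression slope, while Berkson error (truth = assigned value + noise) leaves the slope unbiased. The study models error multiplicatively on the log scale; the sketch below uses additive error for simplicity, and all sizes and variances are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
n, beta = 100_000, 0.3

# Classical: we regress on z = x + u, so the slope is attenuated.
x = rng.normal(0.0, 1.0, n)
z_classical = x + rng.normal(0.0, 1.0, n)
y = beta*x + rng.normal(0.0, 1.0, n)
slope_classical = np.polyfit(z_classical, y, 1)[0]   # about beta/2 here

# Berkson: the true exposure scatters around the assigned value z.
z_berkson = rng.normal(0.0, 1.0, n)
x_b = z_berkson + rng.normal(0.0, 1.0, n)
y_b = beta*x_b + rng.normal(0.0, 1.0, n)
slope_berkson = np.polyfit(z_berkson, y_b, 1)[0]     # about beta, unbiased
```

Both cases lose precision relative to an error-free regressor, which is consistent with the reduced statistical significance the study reports for every error type.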

  19. Modeling and estimation of measurement errors

    International Nuclear Information System (INIS)

    Neuilly, M.

    1998-01-01

    Anyone in charge of taking measurements is aware of the inaccuracy of the results, however carefully the measurements are made. Sensitivity, accuracy and reproducibility define the significance of a result. The use of statistical methods is one of the important tools for improving the quality of measurement. The accuracy achieved by these methods revealed the small difference in the isotopic composition of uranium ore which led to the discovery of the Oklo fossil reactor. This book is dedicated to scientists and engineers interested in measurement, whatever their fields of investigation. Experimental results are presented as random variables, and their laws of probability are approximated by the normal law, the Poisson law or Pearson distributions. The impact of one or more parameters on the total error can be evaluated by designing factorial plans and by using variance analysis methods. This approach is also used in intercomparison procedures between laboratories and to detect any abnormal shift in a series of measurements. (A.C.)

  20. Multi-GNSS signal-in-space range error assessment - Methodology and results

    Science.gov (United States)

    Montenbruck, Oliver; Steigenberger, Peter; Hauschild, André

    2018-06-01

    The positioning accuracy of global and regional navigation satellite systems (GNSS/RNSS) depends on a variety of influence factors. For constellation-specific performance analyses it has become common practice to separate a geometry-related quality factor (the dilution of precision, DOP) from the measurement and modeling errors of the individual ranging measurements (known as user equivalent range error, UERE). The latter is further divided into user equipment errors and contributions related to the space and control segment. The present study reviews the fundamental concepts and underlying assumptions of signal-in-space range error (SISRE) analyses and presents a harmonized framework for multi-GNSS performance monitoring based on the comparison of broadcast and precise ephemerides. The implications of inconsistent geometric reference points, non-common time systems, and signal-specific range biases are analyzed, and strategies for coping with these issues in the definition and computation of SIS range errors are developed. The presented concepts are, furthermore, applied to current navigation satellite systems, and representative results are presented along with a discussion of constellation-specific problems in their determination. Based on data for the January to December 2017 time frame, representative global average root-mean-square (RMS) SISRE values of 0.2 m, 0.6 m, 1 m, and 2 m are obtained for Galileo, GPS, BeiDou-2, and GLONASS, respectively. Roughly two times larger values apply for the corresponding 95th-percentile values. Overall, the study contributes to a better understanding and harmonization of multi-GNSS SISRE analyses and their use as key performance indicators for the various constellations.
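The SIS range error combines broadcast-minus-precise orbit and clock differences with constellation-specific projection weights. The sketch below uses weight values commonly quoted for GPS (w_R ≈ 0.98, along/cross divisor ≈ 7); the function name and the exact weights are assumptions for illustration, not values taken from this study.

```python
import math

def sisre(d_radial, d_along, d_cross, d_clock, w_r=0.98, w_ac=1.0/7.0):
    """Global-average SIS range error from orbit/clock differences (meters).

    d_clock is the broadcast-minus-precise clock offset expressed in meters.
    The radial orbit error and the clock error partly cancel along the line
    of sight, hence the (w_r*dR - dT) term; along- and cross-track errors
    project only weakly onto the range and are down-weighted by w_ac.
    """
    return math.sqrt((w_r*d_radial - d_clock)**2
                     + w_ac**2*(d_along**2 + d_cross**2))
```

With these weights, a 1 m radial error alone contributes about 0.98 m of SISRE, while a 7 m along-track error contributes only about 1 m, which is why along- and cross-track accuracy requirements are much looser.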

  1. Varying coefficients model with measurement error.

    Science.gov (United States)

    Li, Liang; Greene, Tom

    2008-06-01

    We propose a semiparametric partially varying coefficient model to study the relationship between serum creatinine concentration and the glomerular filtration rate (GFR) among kidney donors and patients with chronic kidney disease. A regression model is used to relate serum creatinine to GFR and demographic factors, in which the coefficient of GFR is expressed as a function of age to allow its effect to be age dependent. GFR measurements obtained from the clearance of a radioactively labeled isotope are assumed to be a surrogate for the true GFR, with the relationship between measured and true GFR expressed using an additive error model. We use locally corrected score equations to estimate parameters and coefficient functions, and propose an expected generalized cross-validation (EGCV) method to select the kernel bandwidth. The performance of the proposed methods, which avoid distributional assumptions on the true GFR and residuals, is investigated by simulation. Accounting for measurement error using the proposed model reduced apparent inconsistencies in the relationship between serum creatinine and GFR among different clinical data sets derived from kidney donor and chronic kidney disease source populations.

  2. Measuring Error Identification and Recovery Skills in Surgical Residents.

    Science.gov (United States)

    Sternbach, Joel M; Wang, Kevin; El Khoury, Rym; Teitelbaum, Ezra N; Meyerson, Shari L

    2017-02-01

    Although error identification and recovery skills are essential for the safe practice of surgery, they have not traditionally been taught or evaluated in residency training. This study validates a method for assessing error identification and recovery skills in surgical residents using a thoracoscopic lobectomy simulator. We developed a 5-station, simulator-based examination containing the most commonly encountered cognitive and technical errors occurring during division of the superior pulmonary vein for left upper lobectomy. Successful completion of each station requires identification and correction of these errors. Examinations were video recorded and scored in a blinded fashion using an examination-specific rating instrument evaluating task performance as well as error identification and recovery skills. Evidence of validity was collected in the categories of content, response process, internal structure, and relationship to other variables. Fifteen general surgical residents (9 interns and 6 third-year residents) completed the examination. Interrater reliability was high, with an intraclass correlation coefficient of 0.78 between 4 trained raters. Station scores ranged from 64% to 84% correct. All stations adequately discriminated between high- and low-performing residents, with discrimination ranging from 0.35 to 0.65. The overall examination score was significantly higher for intermediate residents than for interns (mean, 74 versus 64 of 90 possible; p = 0.03). The described simulator-based examination with embedded errors and its accompanying assessment tool can be used to measure error identification and recovery skills in surgical residents. This examination provides a valid method for comparing teaching strategies designed to improve error recognition and recovery to enhance patient safety. Copyright © 2017 The Society of Thoracic Surgeons. Published by Elsevier Inc. All rights reserved.

  3. Practical application of the theory of errors in measurement

    International Nuclear Information System (INIS)

    Anon.

    1991-01-01

    This chapter addresses the practical application of the theory of errors in measurement. The topics of the chapter include fixing on a maximum desired error, selecting a maximum error, the procedure for limiting the error, utilizing a standard procedure, setting specifications for a standard procedure, and selecting the number of measurements to be made

  4. Improved linearity using harmonic error rejection in a full-field range imaging system

    Science.gov (United States)

    Payne, Andrew D.; Dorrington, Adrian A.; Cree, Michael J.; Carnegie, Dale A.

    2008-02-01

    Full field range imaging cameras are used to simultaneously measure the distance for every pixel in a given scene using an intensity modulated illumination source and a gain modulated receiver array. The light is reflected from an object in the scene, and the modulation envelope experiences a phase shift proportional to the target distance. Ideally the waveforms are sinusoidal, allowing the phase, and hence object range, to be determined from four measurements using an arctangent function. In practice these waveforms are often not perfectly sinusoidal, and in some cases square waveforms are instead used to simplify the electronic drive requirements. The waveforms therefore commonly contain odd harmonics which contribute a nonlinear error to the phase determination, and therefore an error in the range measurement. We have developed a unique sampling method to cancel the effect of these harmonics, with the results showing an order of magnitude improvement in the measurement linearity without the need for calibration or lookup tables, while the acquisition time remains unchanged. The technique can be applied to existing range imaging systems without having to change or modify the complex illumination or sensor systems, instead only requiring a change to the signal generation and timing electronics.
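For the ideal sinusoidal case described above, the phase (and hence range) follows from four samples of the correlation waveform taken 90° apart. A minimal sketch of that arctangent step (function and variable names assumed; the paper's harmonic-cancelling sampling scheme is not reproduced here):

```python
import math

C = 299_792_458.0  # speed of light, m/s

def range_from_four_samples(a0, a1, a2, a3, mod_freq_hz):
    """Phase-shift range from samples taken at 0, 90, 180 and 270 degrees."""
    phase = math.atan2(a1 - a3, a0 - a2) % (2*math.pi)
    # Light travels out and back, so range = c * phase / (4*pi*f).
    return C * phase / (4*math.pi*mod_freq_hz)
```

The pairwise differences cancel the constant offset and any even harmonics, but odd harmonics in non-sinusoidal waveforms still bias this estimate, which is the nonlinearity the paper's sampling method is designed to reject.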

  5. Ranging error analysis of single photon satellite laser altimetry under different terrain conditions

    Science.gov (United States)

    Huang, Jiapeng; Li, Guoyuan; Gao, Xiaoming; Wang, Jianmin; Fan, Wenfeng; Zhou, Shihong

    2018-02-01

    A single-photon satellite laser altimeter operates in Geiger mode and is characterized by a small footprint, a high repetition rate, etc. In this paper, a formula for the ranging error over sloped terrain is derived and evaluated numerically. The Monte Carlo method is used to simulate measurements over different terrain. The experimental results show that ranging accuracy is not affected by the spot size over flat terrain, but inclined terrain influences the ranging error dramatically: when the satellite pointing angle is 0.001° and the terrain slope is about 12°, the ranging error reaches 0.5 m, and the accuracy requirement cannot be met when the slope exceeds 70°. The Monte Carlo simulation results show that a single-photon laser altimetry satellite with a high repetition rate can improve the ranging accuracy over complex terrain. In order to ensure repeated observation of the same point 25 times, and based on the parameters of ICESat-2, we deduce the quantitative relation between the footprint size, the footprint spacing, and the repetition frequency. These conclusions can provide a reference for the design and demonstration of a domestic single-photon laser altimetry satellite.
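A toy Monte Carlo in the spirit of the simulation above shows how terrain slope converts spot size into elevation spread: a photon can return from anywhere in the footprint, and on a slope different positions sit at different heights. This is a 1-D sketch with uniform photon positions; all parameter values are assumptions, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(2)

def elevation_spread(footprint_diam_m, slope_deg, n=200_000):
    """Std of single-photon surface elevations inside one footprint (1-D sketch)."""
    half = footprint_diam_m/2.0
    x = rng.uniform(-half, half, n)       # along-slope position within footprint
    z = x*np.tan(np.radians(slope_deg))   # terrain elevation at each photon
    return float(z.std())
```

The spread is exactly zero on flat terrain and grows with tan(slope); averaging repeated observations of the same spot, enabled by a high repetition rate, shrinks the error of the mean roughly as the square root of the number of repeats, which motivates the 25-observation requirement.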

  6. A Model of Self-Monitoring Blood Glucose Measurement Error.

    Science.gov (United States)

    Vettoretti, Martina; Facchinetti, Andrea; Sparacino, Giovanni; Cobelli, Claudio

    2017-07-01

    A reliable model of the probability density function (PDF) of self-monitoring of blood glucose (SMBG) measurement error would be important for several applications in diabetes, like testing in silico insulin therapies. In the literature, the PDF of SMBG error is usually described by a Gaussian function, whose symmetry and simplicity are unable to properly describe the variability of experimental data. Here, we propose a new methodology to derive more realistic models of SMBG error PDF. The blood glucose range is divided into zones where error (absolute or relative) presents a constant standard deviation (SD). In each zone, a suitable PDF model is fitted by maximum-likelihood to experimental data. Model validation is performed by goodness-of-fit tests. The method is tested on two databases collected by the One Touch Ultra 2 (OTU2; Lifescan Inc, Milpitas, CA) and the Bayer Contour Next USB (BCN; Bayer HealthCare LLC, Diabetes Care, Whippany, NJ). In both cases, skew-normal and exponential models are used to describe the distribution of errors and outliers, respectively. Two zones were identified: zone 1 with constant SD absolute error; zone 2 with constant SD relative error. Goodness-of-fit tests confirmed that identified PDF models are valid and superior to Gaussian models used so far in the literature. The proposed methodology allows to derive realistic models of SMBG error PDF. These models can be used in several investigations of present interest in the scientific community, for example, to perform in silico clinical trials to compare SMBG-based with nonadjunctive CGM-based insulin treatments.
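The zoning step of the method can be sketched without the full skew-normal fit: split the glucose range at a breakpoint and summarize the absolute error below it and the relative error above it. The breakpoint, noise levels, and synthetic data below are assumptions for illustration, not values from the two databases.

```python
import numpy as np

rng = np.random.default_rng(3)

def zone_error_sd(reference, measured, breakpoint=100.0):
    """SD of absolute error below the breakpoint, of relative error above."""
    err = measured - reference
    low = reference < breakpoint
    sd_abs = float(err[low].std())                      # zone 1, mg/dL
    sd_rel = float((err[~low]/reference[~low]).std())   # zone 2, dimensionless
    return sd_abs, sd_rel

# Synthetic SMBG readings: constant-SD absolute error below 100 mg/dL,
# constant-SD relative error above (mimicking the two identified zones).
ref = rng.uniform(40.0, 400.0, 50_000)
noise = np.where(ref < 100.0,
                 rng.normal(0.0, 5.0, ref.size),
                 ref*rng.normal(0.0, 0.07, ref.size))
sd_abs, sd_rel = zone_error_sd(ref, ref + noise)
```

In the actual method a skew-normal PDF is then fitted by maximum likelihood within each zone, with an exponential component for outliers; the sketch above only recovers the per-zone standard deviations.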

  7. Incorporating measurement error in n=1 psychological autoregressive modeling

    NARCIS (Netherlands)

    Schuurman, Noemi K.; Houtveen, Jan H.; Hamaker, Ellen L.

    2015-01-01

    Measurement error is omnipresent in psychological data. However, the vast majority of applications of autoregressive time series analyses in psychology do not take measurement error into account. Disregarding measurement error when it is present in the data results in a bias of the autoregressive
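The bias alluded to above is easy to demonstrate: adding white measurement error to an AR(1) series attenuates the lag-1 autocorrelation by the reliability factor var(x)/(var(x)+var(e)). A schematic simulation, with all parameter values assumed:

```python
import numpy as np

rng = np.random.default_rng(4)
n, phi = 100_000, 0.8

# Latent AR(1) process with unit innovation variance.
e = rng.normal(size=n)
x = np.empty(n)
x[0] = 0.0
for t in range(1, n):
    x[t] = phi*x[t-1] + e[t]

y = x + rng.normal(0.0, 1.0, n)   # observed scores with measurement error

def lag1(v):
    """Lag-1 autocorrelation estimate."""
    return float(np.corrcoef(v[:-1], v[1:])[0, 1])

phi_clean, phi_noisy = lag1(x), lag1(y)
```

Here var(x) = 1/(1 − 0.8²) ≈ 2.78, so the reliability is about 0.74 and the naive estimate drops from 0.8 to roughly 0.59, the kind of attenuation that motivates explicitly modeling measurement error in n=1 autoregressive analyses.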

  8. Radiation risk estimation based on measurement error models

    CERN Document Server

    Masiuk, Sergii; Shklyar, Sergiy; Chepurny, Mykola; Likhtarov, Illya

    2017-01-01

    This monograph discusses statistics and risk estimates applied to radiation damage in the presence of measurement errors. The first part covers nonlinear measurement error models, with a particular emphasis on the efficiency of regression parameter estimators. In the second part, risk estimation in models with measurement errors is considered. The efficiency of the methods presented is verified using data from radio-epidemiological studies.

  9. Automatic diagnostic system for measuring ocular refractive errors

    Science.gov (United States)

    Ventura, Liliane; Chiaradia, Caio; de Sousa, Sidney J. F.; de Castro, Jarbas C.

    1996-05-01

    Ocular refractive errors (myopia, hyperopia and astigmatism) are automatically and objectively determined by projecting a light target onto the retina using an infrared (850 nm) diode laser. The light vergence which emerges from the eye (light scattered from the retina) is evaluated in order to determine the corresponding ametropia. The system basically consists of projecting a target (a ring) onto the retina and analyzing the scattered light with a CCD camera. The light scattered by the eye is divided into six portions (3 meridians) by using a mask and a set of six prisms. The distance between the two images provided by each of the meridians leads to the refractive error of the referred meridian. Hence, it is possible to determine the refractive error at three different meridians, which gives the exact solution for the eye's refractive error (spherical and cylindrical components and the axis of the astigmatism). The computational basis used for the image analysis is a heuristic search, which provides satisfactory calculation times for our purposes. The peculiar shape of the target, a ring, provides a wider range of measurement and also saves parts of the retina from unnecessary laser irradiation. Measurements were done in artificial and in vivo eyes (using cycloplegics) and the results were in good agreement with the retinoscopic measurements.

  10. High Precision Ranging and Range-Rate Measurements over Free-Space-Laser Communication Link

    Science.gov (United States)

    Yang, Guangning; Lu, Wei; Krainak, Michael; Sun, Xiaoli

    2016-01-01

    We present a high-precision ranging and range-rate measurement system operating over an optical ranging or combined ranging-communication link. A complete bench-top optical communication system was built, comprising a ground terminal and a space terminal. Ranging and range-rate tests were conducted in two configurations. In the communication configuration, at a 622 Mbps data rate, we achieved a two-way range-rate error of 2 microns/s, or a modified Allan deviation of 9 x 10 (exp -15) with 10 second averaging time. Ranging and range-rate performance as a function of the bit error rate of the communication link is reported; it is not sensitive to the link error rate. In the single-frequency amplitude modulation mode, we report a two-way range-rate error of 0.8 microns/s, or a modified Allan deviation of 2.6 x 10 (exp -15) with 10 second averaging time. We identified the major noise sources in the current system as transmitter modulation injected noise and receiver electronics generated noise. A new improved system will be constructed to further improve the performance in both operating modes.
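The modified Allan deviation quoted above can be computed directly from equally spaced time-error (phase) samples. The sketch below follows the standard second-difference definition used in frequency-stability analysis (variable names are assumptions; this is not the authors' code):

```python
import math

def mod_adev(x, tau0, m):
    """Modified Allan deviation at averaging time m*tau0.

    x: time-error samples (seconds), equally spaced by tau0 seconds.
    Standard estimator: phase-averaged second differences, squared and
    averaged, normalized by 2 * m^2 * tau^2.
    """
    n = len(x)
    terms = []
    for j in range(n - 3*m + 1):
        s = sum(x[i + 2*m] - 2*x[i + m] + x[i] for i in range(j, j + m))
        terms.append(s*s)
    var = sum(terms) / (2.0 * m*m * (m*tau0)**2 * len(terms))
    return math.sqrt(var)
```

A constant frequency offset (time error growing linearly) has zero second differences and thus zero modified Allan deviation, so the statistic isolates genuine instability from a fixed rate bias, which is what makes it suitable for quoting range-rate performance.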

  11. Assessing errors related to characteristics of the items measured

    International Nuclear Information System (INIS)

    Liggett, W.

    1980-01-01

    Errors that are related to some intrinsic property of the items measured are often encountered in nuclear material accounting. An example is the error in nondestructive assay measurements caused by uncorrected matrix effects. Nuclear material accounting requires for each materials type one measurement method for which bounds on these errors can be determined. If such a method is available, a second method might be used to reduce costs or to improve precision. If the measurement error for the first method is longer-tailed than Gaussian, then precision might be improved by measuring all items by both methods. 8 refs

  12. MEASURING LOCAL GRADIENT AND SKEW QUADRUPOLE ERRORS IN RHIC IRS

    International Nuclear Information System (INIS)

    CARDONA, J.; PEGGS, S.; PILAT, R.; PTITSYN, V.

    2004-01-01

    The measurement of local linear errors at RHIC interaction regions using an "action and phase" analysis of difference orbits has already been presented [2]. This paper evaluates the accuracy of this technique using difference orbits that were taken when known gradient errors and skew quadrupole errors were intentionally introduced. It also presents action and phase analysis of simulated orbits when controlled errors are intentionally placed in a RHIC simulation model

  13. Measurement Error Estimation for Capacitive Voltage Transformer by Insulation Parameters

    Directory of Open Access Journals (Sweden)

    Bin Chen

    2017-03-01

    Full Text Available Measurement errors of a capacitive voltage transformer (CVT) are related to its equivalent parameters, to which its capacitive divider contributes the most. In daily operation, dielectric aging, moisture, partial dielectric breakdown, etc., exert combined effects on the capacitive divider's insulation characteristics, leading to fluctuations in the equivalent parameters which result in measurement error. This paper proposes an equivalent circuit model to represent a CVT which incorporates the insulation characteristics of the capacitive divider. Through software simulation and laboratory experiments, the relationship between measurement errors and insulation parameters is obtained. It indicates that variation of the insulation parameters in a CVT causes an appreciable measurement error. From field tests and calculation, equivalent capacitance mainly affects the magnitude error, while dielectric loss mainly affects the phase error. As the capacitance changes by 0.2%, the magnitude error can reach −0.2%; as the dielectric loss factor changes by 0.2%, the phase error can reach 5′. An increase of equivalent capacitance or dielectric loss factor in the high-voltage capacitor causes a positive real-power measurement error, while the same increase in the low-voltage capacitor causes a negative real-power measurement error.
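The sensitivities reported above can be checked against a lossy-capacitor divider model: a small conductance in parallel with each capacitance represents dielectric loss. The component values, the choice of which arm is high-voltage, and the function name below are all illustrative assumptions, not the paper's parameters.

```python
import cmath
import math

def divider_ratio(c1, c2, tan_d1=0.0, tan_d2=0.0, f_hz=50.0):
    """Complex voltage ratio V_mid/V_in of a two-capacitor divider.

    A lossy capacitor is modelled by the admittance Y = omega*C*(tan_d + 1j),
    i.e. a small conductance in parallel with the ideal capacitance.
    """
    w = 2*math.pi*f_hz
    y1 = w*c1*(tan_d1 + 1j)   # high-voltage arm
    y2 = w*c2*(tan_d2 + 1j)   # low-voltage arm
    return y1/(y1 + y2)       # voltage across c2 relative to the input

# Magnitude error from a 0.2% drift in c1, phase error from added loss.
base = divider_ratio(5e-9, 50e-9)
drift = divider_ratio(5e-9*1.002, 50e-9)
mag_err = abs(drift)/abs(base) - 1.0

lossy = divider_ratio(5e-9, 50e-9, tan_d1=0.002)
phase_err = cmath.phase(lossy) - cmath.phase(base)
```

In this sketch a 0.2% capacitance drift moves the magnitude by nearly 0.2% while barely touching the phase, and a 0.002 change in tan δ shifts the phase by a few arcminutes while barely touching the magnitude, reproducing the separation of effects described in the abstract.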

  14. Range camera on conveyor belts: estimating size distribution and systematic errors due to occlusion

    Science.gov (United States)

    Blomquist, Mats; Wernersson, Ake V.

    1999-11-01

    When range cameras are used for analyzing irregular material on a conveyor belt, there will be complications such as missing segments caused by occlusion, and a number of range discontinuities will be present. In a framework based on stochastic geometry, conditions are found for the cases in which range discontinuities take place. The test objects in this paper are pellets for the steel industry. An illuminating laser plane gives range discontinuities at the edges of each individual object. These discontinuities are used to detect and measure the chord created by the intersection of the laser plane and the object. From the measured chords we derive the average diameter and its variance. An improved method is to use a pair of parallel illuminating light planes to extract two chords. The estimation error for this method is no larger than the natural shape fluctuations (the difference in diameter) of the pellets. The laser-camera optronics is sensitive enough both for material on a conveyor belt and for free-falling material leaving the conveyor.

  15. Radon measurements-discussion of error estimates for selected methods

    International Nuclear Information System (INIS)

    Zhukovsky, Michael; Onischenko, Alexandra; Bastrikov, Vladislav

    2010-01-01

    The main sources of uncertainty for grab sampling, short-term (charcoal canister) and long-term (track detector) measurements are: systematic bias of the reference equipment; random Poisson and non-Poisson errors during calibration; and random Poisson and non-Poisson errors during measurement. The origins of the non-Poisson random errors during calibration differ between the different kinds of instrumental measurements. The main sources of uncertainty for retrospective measurements conducted by the surface-trap technique can be divided into two groups: errors of surface 210Pb (210Po) activity measurements, and uncertainties in the transfer from 210Pb surface activity in glass objects to the average radon concentration during the object's exposure. It is shown that the total measurement error of the surface-trap retrospective technique can be decreased to 35%.

  16. Slope Error Measurement Tool for Solar Parabolic Trough Collectors: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Stynes, J. K.; Ihas, B.

    2012-04-01

    The National Renewable Energy Laboratory (NREL) has developed an optical measurement tool for parabolic solar collectors that measures the combined errors due to absorber misalignment and reflector slope error. The combined absorber alignment and reflector slope errors are measured using a digital camera to photograph the reflected image of the absorber in the collector. Previous work using the image of the reflection of the absorber finds the reflector slope errors from the reflection of the absorber and an independent measurement of the absorber location. The accuracy of the reflector slope error measurement is thus dependent on the accuracy of the absorber location measurement. By measuring the combined reflector-absorber errors, the uncertainty in the absorber location measurement is eliminated. The related performance merit, the intercept factor, depends on the combined effects of the absorber alignment and reflector slope errors. Measuring the combined effect provides a simpler measurement and a more accurate input to the intercept factor estimate. The minimal equipment and setup required for this measurement technique make it ideal for field measurements.

  17. Reduction of measurement errors in OCT scanning

    Science.gov (United States)

    Morel, E. N.; Tabla, P. M.; Sallese, M.; Torga, J. R.

    2018-03-01

    Optical coherence tomography (OCT) is a non-destructive optical technique which uses a wide-bandwidth light source focused on a point in the sample to determine the distance (strictly, the optical path difference, OPD) between this point and a reference surface. The point can be on the surface or at an interior interface of the sample (transparent or semitransparent), allowing topographies and/or tomographies of different materials. The Michelson interferometer is the traditional experimental scheme for this technique, in which a beam of light is divided into two arms, one the reference and the other the sample. The overlap of the light reflected by the sample and by the reference generates an interference signal that gives information about the OPD between the arms. In this work, we employ an experimental configuration in which the reference signal and the signal reflected by the sample travel along the same arm, improving the quality of the interference signal. Most importantly, the noise and errors produced by relative reference-sample movement and by the dispersion of the refractive index are considerably reduced. It is thus possible to obtain 3D images of surfaces with a spatial resolution on the order of microns. Results obtained on the topography of metallic surfaces, glass, and inks printed on paper are presented.

  18. Quantification and handling of sampling errors in instrumental measurements: a case study

    DEFF Research Database (Denmark)

    Andersen, Charlotte Møller; Bro, R.

    2004-01-01

    in certain situations, the effect of systematic errors is also considerable. The relevant errors contributing to the prediction error are: error in instrumental measurements (x-error), error in reference measurements (y-error), error in the estimated calibration model (regression coefficient error) and model...

  19. A straightness error measurement method matched new generation GPS

    International Nuclear Information System (INIS)

    Zhang, X B; Lu, H; Jiang, X Q; Li, Z

    2005-01-01

    The axis of the non-diffracting beam produced by an axicon is very stable and can be adopted as the datum line for measuring spatial straightness error over a short, medium or long working distance. By combining the non-diffracting beam datum line with an LVDT displacement detector, a new straightness error measurement method is developed. Because the non-diffracting beam datum line corrects the straightness error gauged by the LVDT, the straightness error is reliable and the method matches the new-generation GPS.

  20. Unit of measurement used and parent medication dosing errors.

    Science.gov (United States)

    Yin, H Shonna; Dreyer, Benard P; Ugboaja, Donna C; Sanchez, Dayana C; Paul, Ian M; Moreira, Hannah A; Rodriguez, Luis; Mendelsohn, Alan L

    2014-08-01

    Adopting the milliliter as the preferred unit of measurement has been suggested as a strategy to improve the clarity of medication instructions; teaspoon and tablespoon units may inadvertently endorse nonstandard kitchen spoon use. We examined the association between unit used and parent medication errors and whether nonstandard instruments mediate this relationship. Cross-sectional analysis of baseline data from a larger study of provider communication and medication errors. English- or Spanish-speaking parents (n = 287) whose children were prescribed liquid medications in 2 emergency departments were enrolled. Medication error defined as: error in knowledge of prescribed dose, error in observed dose measurement (compared to intended or prescribed dose); >20% deviation threshold for error. Multiple logistic regression performed adjusting for parent age, language, country, race/ethnicity, socioeconomic status, education, health literacy (Short Test of Functional Health Literacy in Adults); child age, chronic disease; site. Medication errors were common: 39.4% of parents made an error in measurement of the intended dose, 41.1% made an error in the prescribed dose. Furthermore, 16.7% used a nonstandard instrument. Compared with parents who used milliliter-only, parents who used teaspoon or tablespoon units had twice the odds of making an error with the intended (42.5% vs 27.6%, P = .02; adjusted odds ratio=2.3; 95% confidence interval, 1.2-4.4) and prescribed (45.1% vs 31.4%, P = .04; adjusted odds ratio=1.9; 95% confidence interval, 1.03-3.5) dose; associations greater for parents with low health literacy and non-English speakers. Nonstandard instrument use partially mediated teaspoon and tablespoon-associated measurement errors. Findings support a milliliter-only standard to reduce medication errors. Copyright © 2014 by the American Academy of Pediatrics.
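The study's >20% deviation criterion for classifying a dose measurement as an error can be sketched as a simple check. The function name and dose values below are illustrative, not taken from the paper:

```python
def is_dosing_error(measured_ml, prescribed_ml, threshold=0.20):
    """Flag an error when the measured dose deviates from the
    prescribed dose by more than the threshold fraction (20% here,
    matching the study's deviation criterion)."""
    return abs(measured_ml - prescribed_ml) / prescribed_ml > threshold

print(is_dosing_error(6.0, 5.0))  # exactly 20% deviation: not flagged
print(is_dosing_error(6.5, 5.0))  # 30% deviation: flagged
```

The same check applies against either the prescribed dose or the parent's intended dose, the two comparisons the study reports separately.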

  1. Correlated measurement error hampers association network inference

    NARCIS (Netherlands)

    Kaduk, M.; Hoefsloot, H.C.J.; Vis, D.J.; Reijmers, T.; Greef, J. van der; Smilde, A.K.; Hendriks, M.M.W.B.

    2014-01-01

    Modern chromatography-based metabolomics measurements generate large amounts of data in the form of abundances of metabolites. An increasingly popular way of representing and analyzing such data is by means of association networks. Ideally, such a network can be interpreted in terms of the

  2. Incorporating measurement error in n = 1 psychological autoregressive modeling

    Science.gov (United States)

    Schuurman, Noémi K.; Houtveen, Jan H.; Hamaker, Ellen L.

    2015-01-01

    Measurement error is omnipresent in psychological data. However, the vast majority of applications of autoregressive time series analyses in psychology do not take measurement error into account. Disregarding measurement error when it is present in the data results in a bias of the autoregressive parameters. We discuss two models that take measurement error into account: An autoregressive model with a white noise term (AR+WN), and an autoregressive moving average (ARMA) model. In a simulation study we compare the parameter recovery performance of these models, and compare this performance for both a Bayesian and frequentist approach. We find that overall, the AR+WN model performs better. Furthermore, we find that for realistic (i.e., small) sample sizes, psychological research would benefit from a Bayesian approach in fitting these models. Finally, we illustrate the effect of disregarding measurement error in an AR(1) model by means of an empirical application on mood data in women. We find that, depending on the person, approximately 30–50% of the total variance was due to measurement error, and that disregarding this measurement error results in a substantial underestimation of the autoregressive parameters. PMID:26283988
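The attenuation of the autoregressive parameter when measurement error is ignored can be reproduced in a few lines. This is a hypothetical simulation, not the authors' code: a latent AR(1) process with phi = 0.8 is observed with added white noise, and the naive lag-1 estimate shrinks toward zero:

```python
import random

random.seed(1)
phi, n = 0.8, 20000

# Latent AR(1) process with unit-variance innovations
x = [0.0]
for _ in range(n - 1):
    x.append(phi * x[-1] + random.gauss(0, 1))

# Observed series = latent process + white measurement noise
y = [xi + random.gauss(0, 1) for xi in x]

def ar1_estimate(series):
    """Naive AR(1) estimate: lag-1 autocovariance over variance."""
    m = sum(series) / len(series)
    num = sum((series[t] - m) * (series[t - 1] - m) for t in range(1, len(series)))
    den = sum((v - m) ** 2 for v in series)
    return num / den

print(round(ar1_estimate(x), 2))  # close to the true phi = 0.8
print(round(ar1_estimate(y), 2))  # noticeably attenuated by the noise
```

The attenuation factor is the reliability var(x)/(var(x) + noise variance), which is why a larger measurement-error share (the 30-50% found in the mood data) produces a larger underestimate.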

  3. Measurement errors in cirrus cloud microphysical properties

    Directory of Open Access Journals (Sweden)

    H. Larsen

    Full Text Available The limited accuracy of current cloud microphysics sensors used in cirrus cloud studies imposes limitations on the use of the data to examine the cloud's broadband radiative behaviour, an important element of the global energy balance. We review the limitations of the instruments (PMS probes) most widely used for measuring the microphysical structure of cirrus clouds and show the effect of these limitations on descriptions of the cloud radiative properties. The analysis is applied to measurements made as part of the European Cloud and Radiation Experiment (EUCREX) to determine mid-latitude cirrus microphysical and radiative properties.

    Key words. Atmospheric composition and structure (cloud physics and chemistry · Meteorology and atmospheric dynamics · Radiative processes · Instruments and techniques

  4. Valuation Biases, Error Measures, and the Conglomerate Discount

    NARCIS (Netherlands)

    I. Dittmann (Ingolf); E.G. Maug (Ernst)

    2006-01-01

    textabstractWe document the importance of the choice of error measure (percentage vs. logarithmic errors) for the comparison of alternative valuation procedures. We demonstrate for several multiple valuation methods (averaging with the arithmetic mean, harmonic mean, median, geometric mean) that the

  5. Određivanje daljine cilja pomoću video senzora i analiza uticaja grešaka i šuma merenja / Target range evaluation using video sensor and analysis of the influence of measurement noise and errors

    Directory of Open Access Journals (Sweden)

    Dragoslav Ugarak

    2006-01-01

    Full Text Available This paper presents a mathematical model for determining target range by processing video frames during tracking. The contributions of the parameters affecting error magnitude are analyzed, and the values of the standard deviation are determined.

  6. Haplotype reconstruction error as a classical misclassification problem: introducing sensitivity and specificity as error measures.

    Directory of Open Access Journals (Sweden)

    Claudia Lamina

    Full Text Available BACKGROUND: Statistically reconstructing haplotypes from single nucleotide polymorphism (SNP) genotypes can lead to falsely classified haplotypes. This can be an issue when interpreting haplotype association results or when selecting subjects with certain haplotypes for subsequent functional studies. It was our aim to quantify haplotype reconstruction error and to provide tools for it. METHODS AND RESULTS: In numerous simulation scenarios, we systematically investigated several error measures, including discrepancy, error rate, and R², and introduced sensitivity and specificity in this context. We exemplified several measures in the KORA study, a large population-based study from Southern Germany. We find that the specificity is slightly reduced only for common haplotypes, while the sensitivity is decreased for some, but not all, rare haplotypes. The overall error rate generally increases with increasing number of loci, increasing minor allele frequency of SNPs, decreasing correlation between the alleles, and increasing ambiguity. CONCLUSIONS: We conclude that, with the analytical approach presented here, haplotype-specific error measures can be computed to gain insight into the haplotype uncertainty. This method provides information on whether a specific risk haplotype can be expected to be reconstructed with essentially no or with high misclassification, and thus on the magnitude of the expected bias in association estimates. We also illustrate that sensitivity and specificity separate two dimensions of the haplotype reconstruction error, which completely describe the misclassification matrix and thus provide the prerequisite for methods accounting for misclassification.
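Sensitivity and specificity for a given haplotype follow directly from a 2x2 misclassification table comparing true and reconstructed carrier status. A small illustration with hypothetical counts (not taken from the KORA study):

```python
def sensitivity_specificity(tp, fn, tn, fp):
    """Haplotype-level error measures from a misclassification table:
    'positive' means the subject truly carries the haplotype."""
    sensitivity = tp / (tp + fn)  # true carriers correctly reconstructed
    specificity = tn / (tn + fp)  # true non-carriers correctly not assigned
    return sensitivity, specificity

# Hypothetical counts for one haplotype across reconstructed subjects
sens, spec = sensitivity_specificity(tp=90, fn=10, tn=880, fp=20)
print(sens, round(spec, 3))
```

Because the two measures condition on different true states, they capture the two separate dimensions of the misclassification matrix that the abstract refers to.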

  7. Estimation of the measurement error of eccentrically installed orifice plates

    Energy Technology Data Exchange (ETDEWEB)

    Barton, Neil; Hodgkinson, Edwin; Reader-Harris, Michael

    2005-07-01

    The presentation discusses methods for simulation and estimation of flow measurement errors. The main conclusions are: Computational Fluid Dynamics (CFD) simulation methods and published test measurements have been used to estimate the error of a metering system over a period when its orifice plates were eccentric and when leaking O-rings allowed some gas to bypass the meter. It was found that plate eccentricity effects would result in errors of between -2% and -3% for individual meters. Validation against test data suggests that these estimates of error should be within 1% of the actual error, but it is unclear whether the simulations over-estimate or under-estimate the error. Simulations were also run to assess how leakage at the periphery affects the metering error. Various alternative leakage scenarios were modelled and it was found that the leakage rate has an effect on the error, but that the leakage distribution does not. Correction factors, based on the CFD results, were then used to predict the system's mis-measurement over a three-year period.

  8. Ionospheric error analysis in gps measurements

    Directory of Open Access Journals (Sweden)

    G. Pugliano

    2008-06-01

    Full Text Available The results of an experiment aimed at evaluating the effects of the ionosphere on GPS positioning applications are presented in this paper. Specifically, the study, based upon a differential approach, was conducted utilizing GPS measurements acquired by various receivers located at increasing inter-distances. The experimental research was developed upon the basis of two groups of baselines: the first group comprises "short" baselines (less than 10 km); the second group is characterized by greater distances (up to 90 km). The obtained results were compared both on the basis of the geometric characteristics, for six different baseline lengths, using 24 hours of data, and on temporal variations, by examining two periods of differing ionospheric intensity, coinciding respectively with the maximum of solar cycle 23 and with conditions of low ionospheric activity. The analysis revealed variations in terms of inter-distance as well as different performances primarily owing to temporal modifications in the state of the ionosphere.

  9. Evaluation of measurement precision errors at different bone density values

    International Nuclear Information System (INIS)

    Wilson, M.; Wong, J.; Bartlett, M.; Lee, N.

    2002-01-01

    Full text: The precision error commonly used in serial monitoring of BMD values using Dual Energy X-ray Absorptiometry (DEXA) is 0.01-0.015 g/cm² for both the L2-L4 lumbar spine and the total femur. However, this limit is based on normal individuals with bone densities similar to the population mean. The purpose of this study was to systematically evaluate precision errors over the range of bone density values encountered in clinical practice. In 96 patients a BMD scan of the spine and femur was immediately repeated by the same technologist, with the patient taken off the bed and repositioned between scans. Nine technologists participated. Values were obtained for the total femur and spine. Each value was classified as low range (0.75-1.05 g/cm²) or medium range (1.05-1.35 g/cm²) for the spine, and low range (0.55-0.85 g/cm²) or medium range (0.85-1.15 g/cm²) for the total femur. Results show that the precision error was significantly lower in the medium range for total femur results, with the medium-range value at 0.015 g/cm² and the low-range value at 0.025 g/cm² (p<0.01). No significant difference was found for the spine results. We also analysed precision errors between three technologists and found a significant difference (p=0.05) between only two technologists, and this was seen in the spine data only. We conclude that there is some evidence that the precision error increases at the outer limits of the normal bone density range. Also, the results show that having multiple trained operators does not greatly increase the BMD precision error. Copyright (2002) The Australian and New Zealand Society of Nuclear Medicine Inc

  10. An introduction to the measurement errors and data handling

    International Nuclear Information System (INIS)

    Rubio, J.A.

    1979-01-01

    Some usual methods to estimate and correlate measurement errors are presented. An introduction to the theory of parameter determination and goodness of the estimates is also presented. Some examples are discussed. (author)

  11. Fusing metabolomics data sets with heterogeneous measurement errors

    Science.gov (United States)

    Waaijenborg, Sandra; Korobko, Oksana; Willems van Dijk, Ko; Lips, Mirjam; Hankemeier, Thomas; Wilderjans, Tom F.; Smilde, Age K.

    2018-01-01

    Combining different metabolomics platforms can contribute significantly to the discovery of complementary processes expressed under different conditions. However, analysing the fused data might be hampered by the difference in their quality. In metabolomics data, one often observes that measurement errors increase with increasing measurement level and that different platforms have different measurement error variance. In this paper we compare three different approaches to correct for the measurement error heterogeneity, by transformation of the raw data, by weighted filtering before modelling and by a modelling approach using a weighted sum of residuals. For an illustration of these different approaches we analyse data from healthy obese and diabetic obese individuals, obtained from two metabolomics platforms. Concluding, the filtering and modelling approaches that both estimate a model of the measurement error did not outperform the data transformation approaches for this application. This is probably due to the limited difference in measurement error and the fact that estimation of measurement error models is unstable due to the small number of repeats available. A transformation of the data improves the classification of the two groups. PMID:29698490

  12. Measuring worst-case errors in a robot workcell

    International Nuclear Information System (INIS)

    Simon, R.W.; Brost, R.C.; Kholwadwala, D.K.

    1997-10-01

    Errors in model parameters, sensing, and control are inevitably present in real robot systems. These errors must be considered in order to automatically plan robust solutions to many manipulation tasks. Lozano-Perez, Mason, and Taylor proposed a formal method for synthesizing robust actions in the presence of uncertainty; this method has been extended by several subsequent researchers. All of these results presume the existence of worst-case error bounds that describe the maximum possible deviation between the robot's model of the world and reality. This paper examines the problem of measuring these error bounds for a real robot workcell. These measurements are difficult, because of the desire to completely contain all possible deviations while avoiding bounds that are overly conservative. The authors present a detailed description of a series of experiments that characterize and quantify the possible errors in visual sensing and motion control for a robot workcell equipped with standard industrial robot hardware. In addition to providing a means for measuring these specific errors, these experiments shed light on the general problem of measuring worst-case errors

  13. Comparing Measurement Error between Two Different Methods of Measurement of Various Magnitudes

    Science.gov (United States)

    Zavorsky, Gerald S.

    2010-01-01

    Measurement error is a common problem in several fields of research such as medicine, physiology, and exercise science. The standard deviation of repeated measurements on the same person is the measurement error. One way of presenting measurement error is called the repeatability, which is 2.77 multiplied by the within subject standard deviation.…
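The quoted repeatability (2.77 times the within-subject standard deviation) can be computed directly from duplicate measurements. A sketch with made-up duplicate readings; s_w uses the standard formula sqrt(mean(d²)/2) for paired repeats:

```python
import math

def repeatability(pairs):
    """Within-subject SD from duplicate measurements per subject,
    s_w = sqrt(mean(d^2) / 2), and repeatability = 2.77 * s_w
    (the value below which 95% of repeat differences should fall)."""
    diffs_sq = [(a - b) ** 2 for a, b in pairs]
    s_w = math.sqrt(sum(diffs_sq) / (2 * len(diffs_sq)))
    return s_w, 2.77 * s_w

# Hypothetical duplicate readings for five subjects
pairs = [(10.2, 10.6), (12.1, 11.8), (9.9, 10.3), (11.5, 11.1), (10.8, 10.8)]
s_w, rpt = repeatability(pairs)
print(round(s_w, 3), round(rpt, 3))
```

The factor 2.77 is sqrt(2) times 1.96: the SD of a difference of two measurements is sqrt(2)·s_w, and 1.96 gives the 95% limit under normality.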

  14. Correcting systematic errors in high-sensitivity deuteron polarization measurements

    Science.gov (United States)

    Brantjes, N. P. M.; Dzordzhadze, V.; Gebel, R.; Gonnella, F.; Gray, F. E.; van der Hoek, D. J.; Imig, A.; Kruithof, W. L.; Lazarus, D. M.; Lehrach, A.; Lorentz, B.; Messi, R.; Moricciani, D.; Morse, W. M.; Noid, G. A.; Onderwater, C. J. G.; Özben, C. S.; Prasuhn, D.; Levi Sandri, P.; Semertzidis, Y. K.; da Silva e Silva, M.; Stephenson, E. J.; Stockhorst, H.; Venanzoni, G.; Versolato, O. O.

    2012-02-01

    This paper reports deuteron vector and tensor beam polarization measurements taken to investigate the systematic variations due to geometric beam misalignments and high data rates. The experiments used the In-Beam Polarimeter at the KVI-Groningen and the EDDA detector at the Cooler Synchrotron COSY at Jülich. By measuring with very high statistical precision, the contributions that are second-order in the systematic errors become apparent. By calibrating the sensitivity of the polarimeter to such errors, it becomes possible to obtain information from the raw count rate values on the size of the errors and to use this information to correct the polarization measurements. During the experiment, it was possible to demonstrate that corrections were satisfactory at the level of 10^-5 for deliberately large errors. This may facilitate the real-time observation of vector polarization changes smaller than 10^-6 in a search for an electric dipole moment using a storage ring.

  15. Correcting systematic errors in high-sensitivity deuteron polarization measurements

    Energy Technology Data Exchange (ETDEWEB)

    Brantjes, N.P.M. [Kernfysisch Versneller Instituut, University of Groningen, NL-9747AA Groningen (Netherlands); Dzordzhadze, V. [Brookhaven National Laboratory, Upton, NY 11973 (United States); Gebel, R. [Institut fuer Kernphysik, Juelich Center for Hadron Physics, Forschungszentrum Juelich, D-52425 Juelich (Germany); Gonnella, F. [Physica Department of ' Tor Vergata' University, Rome (Italy); INFN-Sez. ' Roma tor Vergata,' Rome (Italy); Gray, F.E. [Regis University, Denver, CO 80221 (United States); Hoek, D.J. van der [Kernfysisch Versneller Instituut, University of Groningen, NL-9747AA Groningen (Netherlands); Imig, A. [Brookhaven National Laboratory, Upton, NY 11973 (United States); Kruithof, W.L. [Kernfysisch Versneller Instituut, University of Groningen, NL-9747AA Groningen (Netherlands); Lazarus, D.M. [Brookhaven National Laboratory, Upton, NY 11973 (United States); Lehrach, A.; Lorentz, B. [Institut fuer Kernphysik, Juelich Center for Hadron Physics, Forschungszentrum Juelich, D-52425 Juelich (Germany); Messi, R. [Physica Department of ' Tor Vergata' University, Rome (Italy); INFN-Sez. ' Roma tor Vergata,' Rome (Italy); Moricciani, D. [INFN-Sez. ' Roma tor Vergata,' Rome (Italy); Morse, W.M. [Brookhaven National Laboratory, Upton, NY 11973 (United States); Noid, G.A. [Indiana University Cyclotron Facility, Bloomington, IN 47408 (United States); and others

    2012-02-01

    This paper reports deuteron vector and tensor beam polarization measurements taken to investigate the systematic variations due to geometric beam misalignments and high data rates. The experiments used the In-Beam Polarimeter at the KVI-Groningen and the EDDA detector at the Cooler Synchrotron COSY at Juelich. By measuring with very high statistical precision, the contributions that are second-order in the systematic errors become apparent. By calibrating the sensitivity of the polarimeter to such errors, it becomes possible to obtain information from the raw count rate values on the size of the errors and to use this information to correct the polarization measurements. During the experiment, it was possible to demonstrate that corrections were satisfactory at the level of 10^-5 for deliberately large errors. This may facilitate the real-time observation of vector polarization changes smaller than 10^-6 in a search for an electric dipole moment using a storage ring.

  16. Interval sampling methods and measurement error: a computer simulation.

    Science.gov (United States)

    Wirth, Oliver; Slaven, James; Taylor, Matthew A

    2014-01-01

    A simulation study was conducted to provide a more thorough account of measurement error associated with interval sampling methods. A computer program simulated the application of momentary time sampling, partial-interval recording, and whole-interval recording methods on target events randomly distributed across an observation period. The simulation yielded measures of error for multiple combinations of observation period, interval duration, event duration, and cumulative event duration. The simulations were conducted up to 100 times to yield measures of error variability. Although the present simulation confirmed some previously reported characteristics of interval sampling methods, it also revealed many new findings that pertain to each method's inherent strengths and weaknesses. The analysis and resulting error tables can help guide the selection of the most appropriate sampling method for observation-based behavioral assessments. © Society for the Experimental Analysis of Behavior.
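The core of such a simulation, scoring intervals and comparing the estimate with the true event proportion, fits in a few lines. A hypothetical sketch of momentary time sampling only (the study also covers partial- and whole-interval recording, which score intervals by any or full occupancy respectively):

```python
import random

def mts_error(occupancy, interval):
    """Momentary time sampling: score each interval by the state at its
    end point; return |estimated - true| event-time proportion."""
    true_p = sum(occupancy) / len(occupancy)
    samples = occupancy[interval - 1::interval]
    est_p = sum(samples) / len(samples)
    return abs(est_p - true_p)

random.seed(7)
# Hypothetical 600 s observation, second-by-second occupancy, ~30% event time
occupancy = [1 if random.random() < 0.3 else 0 for _ in range(600)]
print(round(mts_error(occupancy, interval=10), 3))
```

Repeating the simulation many times, as the study does, yields the distribution of this error for each combination of interval duration and event characteristics.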

  17. Measurement errors in voice-key naming latency for Hiragana.

    Science.gov (United States)

    Yamada, Jun; Tamaoka, Katsuo

    2003-12-01

    This study makes explicit the limitations and possibilities of voice-key naming latency research on single hiragana symbols (a Japanese syllabic script) by examining three sets of voice-key naming data against Sakuma, Fushimi, and Tatsumi's 1997 speech-analyzer voice-waveform data. Analysis showed that voice-key measurement errors can be substantial in standard procedures as they may conceal the true effects of significant variables involved in hiragana-naming behavior. While one can avoid voice-key measurement errors to some extent by applying Sakuma, et al.'s deltas and by excluding initial phonemes which induce measurement errors, such errors may be ignored when test items are words and other higher-level linguistic materials.

  18. Laser tracker error determination using a network measurement

    International Nuclear Information System (INIS)

    Hughes, Ben; Forbes, Alistair; Lewis, Andrew; Sun, Wenjuan; Veal, Dan; Nasr, Karim

    2011-01-01

    We report on a fast, easily implemented method to determine all the geometrical alignment errors of a laser tracker, to high precision. The technique requires no specialist equipment and can be performed in less than an hour. The technique is based on the determination of parameters of a geometric model of the laser tracker, using measurements of a set of fixed target locations, from multiple locations of the tracker. After fitting of the model parameters to the observed data, the model can be used to perform error correction of the raw laser tracker data or to derive correction parameters in the format of the tracker manufacturer's internal error map. In addition to determination of the model parameters, the method also determines the uncertainties and correlations associated with the parameters. We have tested the technique on a commercial laser tracker in the following way. We disabled the tracker's internal error compensation, and used a five-position, fifteen-target network to estimate all the geometric errors of the instrument. Using the error map generated from this network test, the tracker was able to pass a full performance validation test, conducted according to a recognized specification standard (ASME B89.4.19-2006). We conclude that the error correction determined from the network test is as effective as the manufacturer's own error correction methodologies

  19. Influence of measurement errors and estimated parameters on combustion diagnosis

    International Nuclear Information System (INIS)

    Payri, F.; Molina, S.; Martin, J.; Armas, O.

    2006-01-01

    Thermodynamic diagnosis models are valuable tools for the study of Diesel combustion. Inputs required by such models comprise measured mean and instantaneous variables, together with suitable values for adjustable parameters used in different submodels. In the case of measured variables, one may estimate the uncertainty associated with measurement errors; however, the influence of errors in model parameter estimation may not be so easily established on an experimental basis. In this paper, a simulated pressure cycle has been used along with known input parameters, so that any uncertainty in the inputs is avoided. Then, the influence of errors in measured variables and in geometric and heat-transmission parameters on the results of a combustion diagnosis model for direct-injection diesel engines has been studied. This procedure allowed us to establish the relative importance of these parameters and to set limits on the maximal errors of the model, accounting for both the maximal expected errors in the input parameters and the sensitivity of the model to those errors.

  20. An in-situ measuring method for planar straightness error

    Science.gov (United States)

    Chen, Xi; Fu, Luhua; Yang, Tongyu; Sun, Changku; Wang, Zhong; Zhao, Yan; Liu, Changjie

    2018-01-01

    In view of some current problems in measuring the planar shape error of a workpiece, an in-situ measuring method based on laser triangulation is presented in this paper. The method avoids the inefficiency of traditional methods such as the knife-edge straightedge, as well as the time and cost requirements of a coordinate measuring machine (CMM). A laser-based measuring head is designed and installed on the spindle of a numerical control (NC) machine. The measuring head moves along a planned path over the measuring points. The spatial coordinates of the measuring points are obtained by combining the laser triangulation displacement sensor with the coordinate system of the NC machine, which makes in-situ measurement possible. The planar straightness error is evaluated using particle swarm optimization (PSO). To verify the feasibility and accuracy of the measuring method, simulation experiments were implemented with a CMM. Comparison of the measuring head's results with the corresponding values obtained by a composite measuring machine verifies that the method can realize high-precision, automatic measurement of the planar straightness error of a workpiece.
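As a toy stand-in for the PSO-based evaluation, straightness error can be illustrated with a least-squares reference line and the peak-to-valley residual. The profile points below are hypothetical; a true minimum-zone fit, which PSO approximates, would generally give a slightly smaller value:

```python
def straightness_error(points):
    """Least-squares reference line through (x, z) profile points;
    straightness error taken as the peak-to-valley residual."""
    n = len(points)
    mx = sum(x for x, _ in points) / n
    mz = sum(z for _, z in points) / n
    slope = (sum((x - mx) * (z - mz) for x, z in points)
             / sum((x - mx) ** 2 for x, _ in points))
    resid = [z - (mz + slope * (x - mx)) for x, z in points]
    return max(resid) - min(resid)

# Hypothetical profile heights (mm) sampled along a 40 mm line
pts = [(0, 0.0), (10, 0.012), (20, 0.018), (30, 0.010), (40, 0.004)]
print(round(straightness_error(pts), 4))
```

A perfectly straight profile gives zero error, since all residuals from the fitted line vanish.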

  1. Beam induced vacuum measurement error in BEPC II

    Institute of Scientific and Technical Information of China (English)

    2011-01-01

    When the beam in the BEPCII storage ring aborts suddenly, the measured pressure of the cold cathode gauges and ion pumps drops suddenly and then decreases gradually to the base pressure. This shows that there is a beam-induced positive error in the pressure measurement during beam operation, the error being the difference between the measured and real pressures. Right after the beam aborts, the error disappears immediately and the measured pressure then equals the real pressure. For one gauge, we fit a non-linear pressure-time curve to its measured pressure data from the 20 seconds after a sudden beam abortion. From this negative-exponential pumping-down curve, the real pressure at the moment the beam starts aborting is extrapolated. With data from several sudden beam abortions we obtained the errors of that gauge at different beam currents and found that the error is directly proportional to the beam current, as expected. A linear fit gives the proportionality coefficient of the equation, which we derived to evaluate the real pressure throughout operation with the beam at varying currents.
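The extrapolation described, fitting a negative-exponential pump-down curve and evaluating it at the abort time, can be sketched with a log-linear least-squares fit. All numbers below are hypothetical; the paper's actual fit form and units are not reproduced here:

```python
import math

def extrapolate_p0(times, pressures, p_base):
    """Fit p(t) = p_base + A*exp(-t/tau) by log-linear least squares
    on ln(p - p_base) vs t, then extrapolate to t = 0 (beam abort)."""
    ys = [math.log(p - p_base) for p in pressures]
    n = len(times)
    mx, my = sum(times) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(times, ys))
             / sum((x - mx) ** 2 for x in times))
    intercept = my - slope * mx
    return p_base + math.exp(intercept)  # fitted pressure at t = 0

# Hypothetical readings (arbitrary units) starting 20 s after an abort
t = [20, 30, 40, 50, 60]
p = [1e-9 + 5e-9 * math.exp(-ti / 15) for ti in t]
print(f"{extrapolate_p0(t, p, 1e-9):.2e}")
```

The base pressure must be known (or fitted) first; subtracting it is what makes the remaining decay linear in log space.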

  2. Error Analysis of Ceramographic Sample Preparation for Coating Thickness Measurement of Coated Fuel Particles

    International Nuclear Information System (INIS)

    Liu Xiaoxue; Li Ziqiang; Zhao Hongsheng; Zhang Kaihong; Tang Chunhe

    2014-01-01

    The thicknesses of the four coatings of an HTR coated fuel particle are very important parameters. Controlling the thickness of the four coatings of coated fuel particles is indispensable for the safety of the HTR. A measurement method, the ceramographic sample-microanalysis method, was developed to analyze the thickness of the coatings. During ceramographic sample-microanalysis there are two main errors: the ceramographic sample preparation error and the thickness measurement error. With the development of microscopic techniques, the thickness measurement error can easily be controlled to meet the design requirements. However, because the coated particles are spheres of different diameters ranging from 850 to 1000 μm, the sample preparation process introduces an error, and this error differs from one sample to another and from one particle to another within the same sample. In this article, the error of ceramographic sample preparation was calculated and analyzed. Results show that the error introduced by sample preparation is minor. This minor error guarantees the high accuracy of the method, indicating that it is a proper method for measuring the thickness of the four coatings of coated particles. (author)

  3. Accounting for measurement error: a critical but often overlooked process.

    Science.gov (United States)

    Harris, Edward F; Smith, Richard N

    2009-12-01

    Due to instrument imprecision and human inconsistencies, measurements are not free of error. Technical error of measurement (TEM) is the variability encountered between dimensions when the same specimens are measured at multiple sessions. A goal of a data collection regimen is to minimise TEM. The few studies that actually quantify TEM, regardless of discipline, report that it is substantial and can affect results and inferences. This paper reviews some statistical approaches for identifying and controlling TEM. Statistically, TEM is part of the residual ('unexplained') variance in a statistical test, so accounting for TEM, which requires repeated measurements, enhances the chances of finding a statistically significant difference if one exists. The aim of this paper was to review and discuss common statistical designs relating to types of error and statistical approaches to error accountability. This paper addresses issues of landmark location, validity, technical and systematic error, analysis of variance, scaled measures and correlation coefficients in order to guide the reader towards correct identification of true experimental differences. Researchers commonly infer characteristics about populations from comparatively restricted study samples. Most inferences are statistical and, aside from concerns about adequate accounting for known sources of variation with the research design, an important source of variability is measurement error. Variability in locating landmarks that define variables is obvious in odontometrics, cephalometrics and anthropometry, but the same concerns about measurement accuracy and precision extend to all disciplines. With increasing accessibility to computer-assisted methods of data collection, the ease of incorporating repeated measures into statistical designs has improved. Accounting for this technical source of variation increases the chance of finding biologically true differences when they exist.
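
The duplicate-measurement TEM discussed above is commonly computed with Dahlberg's formula, TEM = sqrt(Σdᵢ²/2N); the paper may use other variants, but a minimal sketch with hypothetical odontometric data looks like this:

```python
import math

def technical_error_of_measurement(session1, session2):
    """Dahlberg's formula for duplicate measurements:
    TEM = sqrt(sum(d_i^2) / (2N)), with d_i the between-session difference."""
    n = len(session1)
    ss = sum((a - b) ** 2 for a, b in zip(session1, session2))
    return math.sqrt(ss / (2 * n))

def relative_tem_percent(session1, session2):
    """TEM expressed as a percentage of the grand mean (%TEM),
    useful for comparing variables measured on different scales."""
    tem = technical_error_of_measurement(session1, session2)
    grand_mean = (sum(session1) + sum(session2)) / (2 * len(session1))
    return 100.0 * tem / grand_mean

# Hypothetical crown diameters (mm) measured at two sessions
s1 = [8.1, 7.9, 8.4, 8.0]
s2 = [8.0, 8.1, 8.3, 8.0]
tem = technical_error_of_measurement(s1, s2)   # ~0.087 mm
```

The %TEM form makes it easy to see whether remeasurement variability is small relative to the dimensions being studied.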

  4. Measurement Model Specification Error in LISREL Structural Equation Models.

    Science.gov (United States)

    Baldwin, Beatrice; Lomax, Richard

    This LISREL study examines the robustness of the maximum likelihood estimates under varying degrees of measurement model misspecification. A true model containing five latent variables (two endogenous and three exogenous) and two indicator variables per latent variable was used. Measurement model misspecification considered included errors of…

  5. QUALITATIVE DATA AND ERROR MEASUREMENT IN INPUT-OUTPUT-ANALYSIS

    NARCIS (Netherlands)

    NIJKAMP, P; OOSTERHAVEN, J; OUWERSLOOT, H; RIETVELD, P

    1992-01-01

    This paper is a contribution to the rapidly emerging field of qualitative data analysis in economics. Ordinal data techniques and error measurement in input-output analysis are here combined in order to test the reliability of a low level of measurement and precision of data by means of a stochastic

  6. Assessment of salivary flow rate: biologic variation and measure error.

    NARCIS (Netherlands)

    Jongerius, P.H.; Limbeek, J. van; Rotteveel, J.J.

    2004-01-01

    OBJECTIVE: To investigate the applicability of the swab method in the measurement of salivary flow rate in multiple-handicap drooling children. To quantify the measurement error of the procedure and the biologic variation in the population. STUDY DESIGN: Cohort study. METHODS: In a repeated

  7. Content Validity of a Tool Measuring Medication Errors.

    Science.gov (United States)

    Tabassum, Nishat; Allana, Saleema; Saeed, Tanveer; Dias, Jacqueline Maria

    2015-08-01

    The objective of this study was to determine the content and face validity of a tool measuring medication errors among nursing students in baccalaureate nursing education. Data were collected at the Aga Khan University School of Nursing and Midwifery (AKUSoNaM), Karachi, from March to August 2014. The tool was developed utilizing the literature and the expertise of the team members, who are experts in different areas. The developed tool was then sent to five experts from across Karachi to ensure its content validity, which was measured on the relevance and clarity of the questions. The Scale Content Validity Index (S-CVI) for clarity and relevance of the questions was found to be 0.94 and 0.98, respectively. The tool measuring medication errors therefore has excellent content validity. It should be used in future studies on medication errors with different study populations, such as medical students, doctors, and nurses.

  8. Analysis and improvement of gas turbine blade temperature measurement error

    International Nuclear Information System (INIS)

    Gao, Shan; Wang, Lixin; Feng, Chi; Daniel, Ketui

    2015-01-01

    Gas turbine blade components are easily damaged; they also operate in harsh high-temperature, high-pressure environments over extended durations. Therefore, ensuring that the blade temperature remains within the design limits is very important. In this study, measurement errors in turbine blade temperatures were analyzed, taking into account detector lens contamination, the reflection of environmental energy from the target surface, the effects of the combustion gas, and the emissivity of the blade surface. In this paper, each of the above sources of measurement error is discussed, and an iterative computing method for calculating blade temperature is proposed. (paper)

  9. Analysis and improvement of gas turbine blade temperature measurement error

    Science.gov (United States)

    Gao, Shan; Wang, Lixin; Feng, Chi; Daniel, Ketui

    2015-10-01

    Gas turbine blade components are easily damaged; they also operate in harsh high-temperature, high-pressure environments over extended durations. Therefore, ensuring that the blade temperature remains within the design limits is very important. In this study, measurement errors in turbine blade temperatures were analyzed, taking into account detector lens contamination, the reflection of environmental energy from the target surface, the effects of the combustion gas, and the emissivity of the blade surface. In this paper, each of the above sources of measurement error is discussed, and an iterative computing method for calculating blade temperature is proposed.

  10. A methodology for translating positional error into measures of attribute error, and combining the two error sources

    Science.gov (United States)

    Yohay Carmel; Curtis Flather; Denis Dean

    2006-01-01

    This paper summarizes our efforts to investigate the nature, behavior, and implications of positional error and attribute error in spatiotemporal datasets. Estimating the combined influence of these errors on map analysis has been hindered by the fact that these two error types are traditionally expressed in different units (distance units, and categorical units,...

  11. Validation of the measurement model concept for error structure identification

    International Nuclear Information System (INIS)

    Shukla, Pavan K.; Orazem, Mark E.; Crisalle, Oscar D.

    2004-01-01

    The development of different forms of measurement models for impedance has allowed examination of the key assumptions on which the use of such models to assess error structure is based. The stochastic error structures obtained using the transfer-function and Voigt measurement models were identical, even when non-stationary phenomena caused some of the data to be inconsistent with the Kramers-Kronig relations. The suitability of the measurement model for assessing consistency with the Kramers-Kronig relations, however, was found to be more sensitive to the confidence interval for the parameter estimates than to the number of parameters in the model. A tighter confidence interval was obtained for the Voigt measurement model, which made it a more sensitive tool for identifying inconsistencies with the Kramers-Kronig relations.

  12. Statistical analysis with measurement error or misclassification strategy, method and application

    CERN Document Server

    Yi, Grace Y

    2017-01-01

    This monograph on measurement error and misclassification covers a broad range of problems and emphasizes unique features in modeling and analyzing problems arising from medical research and epidemiological studies. Many measurement error and misclassification problems have been addressed in various fields over the years as well as with a wide spectrum of data, including event history data (such as survival data and recurrent event data), correlated data (such as longitudinal data and clustered data), multi-state event data, and data arising from case-control studies. Statistical Analysis with Measurement Error or Misclassification: Strategy, Method and Application brings together assorted methods in a single text and provides an update of recent developments for a variety of settings. Measurement error effects and strategies of handling mismeasurement for different models are closely examined in combination with applications to specific problems. Readers with diverse backgrounds and objectives can utilize th...

  13. GY SAMPLING THEORY IN ENVIRONMENTAL STUDIES 2: SUBSAMPLING ERROR MEASUREMENTS

    Science.gov (United States)

    Sampling can be a significant source of error in the measurement process. The characterization and cleanup of hazardous waste sites require data that meet site-specific levels of acceptable quality if scientifically supportable decisions are to be made. In support of this effort,...

  14. Bayesian modeling of measurement error in predictor variables

    NARCIS (Netherlands)

    Fox, Gerardus J.A.; Glas, Cornelis A.W.

    2003-01-01

    It is shown that measurement error in predictor variables can be modeled using item response theory (IRT). The predictor variables, which may be defined at any level of a hierarchical regression model, are treated as latent variables. The normal ogive model is used to describe the relation between

  15. Conditional Standard Errors of Measurement for Scale Scores.

    Science.gov (United States)

    Kolen, Michael J.; And Others

    1992-01-01

    A procedure is described for estimating the reliability and conditional standard errors of measurement of scale scores incorporating the discrete transformation of raw scores to scale scores. The method is illustrated using a strong true score model, and practical applications are described. (SLD)

  16. Confounding and exposure measurement error in air pollution epidemiology

    NARCIS (Netherlands)

    Sheppard, L.; Burnett, R.T.; Szpiro, A.A.; Kim, J.Y.; Jerrett, M.; Pope, C.; Brunekreef, B.|info:eu-repo/dai/nl/067548180

    2012-01-01

    Studies in air pollution epidemiology may suffer from some specific forms of confounding and exposure measurement error. This contribution discusses these, mostly in the framework of cohort studies. Evaluation of potential confounding is critical in studies of the health effects of air pollution.

  17. Measurement error in pressure-decay leak testing

    International Nuclear Information System (INIS)

    Robinson, J.N.

    1979-04-01

    The effect of measurement error in pressure-decay leak testing is considered, and examples are presented to demonstrate how it can be properly accommodated in analyzing data from such tests. Suggestions for more effective specification and conduct of leak tests are presented.

  18. Research on Measurement Accuracy of Laser Tracking System Based on Spherical Mirror with Rotation Errors of Gimbal Mount Axes

    Science.gov (United States)

    Shi, Zhaoyao; Song, Huixu; Chen, Hongfang; Sun, Yanqiang

    2018-02-01

    This paper presents a novel experimental approach for confirming that the spherical mirror of a laser tracking system can reduce the influence of rotation errors of the gimbal mount axes on the measurement accuracy. By simplifying the optical system model of a laser tracking system based on a spherical mirror, we can easily extract the laser ranging measurement error caused by rotation errors of the gimbal mount axes from the positions of the spherical mirror, biconvex lens, cat's eye reflector, and measuring beam. The motions of the polarization beam splitter and the biconvex lens, along the optical axis and perpendicular to it, are driven by the error motions of the gimbal mount axes. To simplify the experimental process, the motion of the biconvex lens is substituted by the motion of the spherical mirror according to the principle of relative motion. The laser ranging measurement error caused by the rotation errors of the gimbal mount axes can then be recorded in the readings of a laser interferometer. The experimental results showed that the laser ranging measurement error caused by rotation errors was less than 0.1 μm when the radial and axial error motions were within ±10 μm. The experimental method simplified the experimental procedure, and the spherical mirror reduced the influence of rotation errors of the gimbal mount axes on the measurement accuracy of the laser tracking system.

  19. Reducing systematic errors in measurements made by a SQUID magnetometer

    International Nuclear Information System (INIS)

    Kiss, L.F.; Kaptás, D.; Balogh, J.

    2014-01-01

    A simple method is described which reduces those systematic errors of a superconducting quantum interference device (SQUID) magnetometer that arise from possible radial displacements of the sample in the second-order gradiometer superconducting pickup coil. By rotating the sample rod (and hence the sample) around its axis into a position where the best fit is obtained to the output voltage of the SQUID as the sample is moved through the pickup coil, the accuracy of measuring magnetic moments can be increased significantly. In the cases of an examined Co1.9Fe1.1Si Heusler alloy, pure iron and nickel samples, the accuracy could be increased over the value given in the specification of the device. The suggested method is only meaningful if the measurement uncertainty is dominated by systematic errors – radial displacement in particular – and not by instrumental or environmental noise. - Highlights: • A simple method is described which reduces systematic errors of a SQUID. • The errors arise from a radial displacement of the sample in the gradiometer coil. • The procedure is to rotate the sample rod (with the sample) around its axis. • The best fit to the SQUID voltage has to be attained moving the sample through the coil. • The accuracy of measuring magnetic moment can be increased significantly

  20. #2 - An Empirical Assessment of Exposure Measurement Error ...

    Science.gov (United States)

    Background: • Differing degrees of exposure error across pollutants • Previous focus on quantifying and accounting for exposure error in single-pollutant models • Examine exposure errors for multiple pollutants and provide insights on the potential for bias and attenuation of effect estimates in single- and bi-pollutant epidemiological models. The National Exposure Research Laboratory (NERL) Human Exposure and Atmospheric Sciences Division (HEASD) conducts research in support of EPA's mission to protect human health and the environment. HEASD's research program supports Goal 1 (Clean Air) and Goal 4 (Healthy People) of the EPA strategic plan. More specifically, our division conducts research to characterize the movement of pollutants from the source to contact with humans. Our multidisciplinary research program produces Methods, Measurements, and Models to identify relationships between, and characterize processes that link, source emissions, environmental concentrations, human exposures, and target-tissue dose. The impact of these tools is improved regulatory programs and policies for EPA.

  1. Confounding and exposure measurement error in air pollution epidemiology.

    Science.gov (United States)

    Sheppard, Lianne; Burnett, Richard T; Szpiro, Adam A; Kim, Sun-Young; Jerrett, Michael; Pope, C Arden; Brunekreef, Bert

    2012-06-01

    Studies in air pollution epidemiology may suffer from some specific forms of confounding and exposure measurement error. This contribution discusses these, mostly in the framework of cohort studies. Evaluation of potential confounding is critical in studies of the health effects of air pollution. The association between long-term exposure to ambient air pollution and mortality has been investigated using cohort studies in which subjects are followed over time with respect to their vital status. In such studies, control for individual-level confounders such as smoking is important, as is control for area-level confounders such as neighborhood socio-economic status. In addition, there may be spatial dependencies in the survival data that need to be addressed. These issues are illustrated using the American Cancer Society Cancer Prevention II cohort. Exposure measurement error is a challenge in epidemiology because inference about health effects can be incorrect when the measured or predicted exposure used in the analysis is different from the underlying true exposure. Air pollution epidemiology rarely if ever uses personal measurements of exposure for reasons of cost and feasibility. Exposure measurement error in air pollution epidemiology comes in various dominant forms, which are different for time-series and cohort studies. The challenges are reviewed and a number of suggested solutions are discussed for both study domains.

  2. Measurement of the magnetic field errors on TCV

    International Nuclear Information System (INIS)

    Piras, F.; Moret, J.-M.; Rossel, J.X.

    2010-01-01

    A set of 24 saddle loops is used on the Tokamak à Configuration Variable (TCV) to measure the radial magnetic flux at different toroidal and vertical positions. The new system is calibrated together with the standard magnetic diagnostics on TCV. Based on the results of this calibration, the effective current in the poloidal field coils and their position is computed. These corrections are then used to compute the distribution of the error field inside the vacuum vessel for a typical TCV discharge. Since the saddle loops measure the magnetic flux at different toroidal positions, the non-axisymmetric error field is also estimated and correlated to a shift or a tilt of the poloidal field coils.

  3. Error evaluation method for material accountancy measurement. Evaluation of random and systematic errors based on material accountancy data

    International Nuclear Information System (INIS)

    Nidaira, Kazuo

    2008-01-01

    The International Target Values (ITV) give random and systematic measurement uncertainty components as a reference for routinely achievable measurement quality in accountancy measurement. The measurement uncertainty (called error henceforth) needs to be evaluated periodically and checked against the ITV for consistency, as the error varies with measurement methods, instruments, operators, certified reference samples, frequency of calibration, and so on. In this paper an error evaluation method was developed, focusing on (1) specifying the error calculation model clearly, (2) always obtaining positive random and systematic error variances, (3) obtaining the probability density distribution of an error variance, and (4) confirming the evaluation method by simulation. In addition, the method was demonstrated by applying it to real data. (author)

  4. Measurement Error Correction for Predicted Spatiotemporal Air Pollution Exposures.

    Science.gov (United States)

    Keller, Joshua P; Chang, Howard H; Strickland, Matthew J; Szpiro, Adam A

    2017-05-01

    Air pollution cohort studies are frequently analyzed in two stages, first modeling exposure then using predicted exposures to estimate health effects in a second regression model. The difference between predicted and unobserved true exposures introduces a form of measurement error in the second stage health model. Recent methods for spatial data correct for measurement error with a bootstrap and by requiring the study design ensure spatial compatibility, that is, monitor and subject locations are drawn from the same spatial distribution. These methods have not previously been applied to spatiotemporal exposure data. We analyzed the association between fine particulate matter (PM2.5) and birth weight in the US state of Georgia using records with estimated date of conception during 2002-2005 (n = 403,881). We predicted trimester-specific PM2.5 exposure using a complex spatiotemporal exposure model. To improve spatial compatibility, we restricted to mothers residing in counties with a PM2.5 monitor (n = 180,440). We accounted for additional measurement error via a nonparametric bootstrap. Third trimester PM2.5 exposure was associated with lower birth weight in the uncorrected (-2.4 g per 1 μg/m³ difference in exposure; 95% confidence interval [CI]: -3.9, -0.8) and bootstrap-corrected (-2.5 g, 95% CI: -4.2, -0.8) analyses. Results for the unrestricted analysis were attenuated (-0.66 g, 95% CI: -1.7, 0.35). This study presents a novel application of measurement error correction for spatiotemporal air pollution exposures. Our results demonstrate the importance of spatial compatibility between monitor and subject locations and provide evidence of the association between air pollution exposure and birth weight.
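
A generic illustration of the two ideas in this abstract — attenuation from regressing on a predicted, error-prone exposure, and a nonparametric bootstrap for uncertainty — can be sketched as follows. The data, the true slope of -2.5, and the error variances are invented; this is not the authors' actual estimator:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: true exposure x, error-prone predicted exposure w
# (as from a first-stage exposure model), outcome y with true slope -2.5.
n = 2000
x = rng.normal(10.0, 2.0, n)
w = x + rng.normal(0.0, 1.0, n)           # classical measurement error
y = 100.0 - 2.5 * x + rng.normal(0.0, 5.0, n)

def slope(pred, out):
    """OLS slope of out on pred."""
    return np.polyfit(pred, out, 1)[0]

# Using w instead of x attenuates the slope towards zero, roughly by
# var(x) / (var(x) + var(error)) = 4/5 here, i.e. naive ~ -2.0.
naive = slope(w, y)

# Nonparametric bootstrap: resample subjects, refit, take percentile CI.
boot = [slope(w[i], y[i]) for i in (rng.integers(0, n, n) for _ in range(500))]
lo, hi = np.percentile(boot, [2.5, 97.5])
```

The bootstrap quantifies sampling uncertainty in the second-stage fit; correcting the attenuation itself requires the kind of exposure-model-aware resampling the abstract describes.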

  5. Effects of Measurement Error on the Output Gap in Japan

    OpenAIRE

    Koichiro Kamada; Kazuto Masuda

    2000-01-01

    Potential output is the largest amount of products that can be produced by fully utilizing available labor and capital stock; the output gap is defined as the discrepancy between actual and potential output. If data on production factors contain measurement errors, total factor productivity (TFP) cannot be estimated accurately from the Solow residual (i.e., the portion of output that is not attributable to labor and capital inputs). This may give rise to distortions in the estimation of potent...

  6. Statistical method for quality control in presence of measurement errors

    International Nuclear Information System (INIS)

    Lauer-Peccoud, M.R.

    1998-01-01

    In a quality inspection of a set of items, where the measured values of a quality characteristic of the items are contaminated by random errors, one can take wrong decisions that are damaging to quality. It is therefore important to control the risks in such a way that a final quality level is ensured. We consider an item defective or not according to whether the value G of its quality characteristic is larger or smaller than a given level g0. We assume that, due to the limited precision of the measurement instrument, the measurement M of this characteristic is expressed as f(G) + ξ, where f is an increasing function such that the value f(g0) is known, and ξ is a random error with mean zero and given variance. First we study the determination of a critical measure m such that a specified quality target is reached after classifying a lot of items, each item being accepted or rejected depending on whether its measurement is smaller or greater than m. Then we analyse the problem of testing the global quality of a lot from the measurements on a sample of items taken from the lot. For these two kinds of problems and for different quality targets, we propose solutions, with emphasis on the case where the function f is linear and the error ξ and the variable G are Gaussian. Simulation results allow the efficiency of the different control procedures to be appreciated, together with their robustness with respect to deviations from the assumptions used in the theoretical derivations. (author)
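
For the linear-Gaussian special case emphasized in the abstract (f the identity, G and ξ Gaussian), the trade-off controlled by the critical measure m can be illustrated by simulation. All distribution parameters below are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(1)

# Linear-Gaussian case: M = G + xi, item defective iff G > g0.
g0 = 10.0
G = rng.normal(9.0, 1.0, 200_000)        # quality characteristic
xi = rng.normal(0.0, 0.5, G.size)        # zero-mean instrument error
M = G + xi                               # observed measurement

def risks(m):
    """False-reject and false-accept rates when rejecting items with M > m."""
    good, bad = G <= g0, G > g0
    return np.mean(M[good] > m), np.mean(M[bad] <= m)

# Moving the critical measure m below g0 trades false accepts for false rejects,
# which is how a quality target can be met despite the measurement error.
fr_loose, fa_loose = risks(g0)
fr_tight, fa_tight = risks(g0 - 0.5)
```

Sweeping m over a grid of values and reading off the two risk curves gives the kind of operating characteristic the paper's procedures are designed around.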

  7. Measurement error in CT assessment of appendix diameter

    Energy Technology Data Exchange (ETDEWEB)

    Trout, Andrew T.; Towbin, Alexander J. [Cincinnati Children's Hospital Medical Center, Department of Radiology, MLC 5031, Cincinnati, OH (United States); Zhang, Bin [Cincinnati Children's Hospital Medical Center, Department of Biostatistics and Epidemiology, Cincinnati, OH (United States)

    2016-12-15

    Appendiceal diameter continues to be cited as an important criterion for diagnosis of appendicitis by computed tomography (CT). To assess sources of error and variability in appendiceal diameter measurements by CT. In this institutional review board-approved review of imaging and medical records, we reviewed CTs performed in children <18 years of age between Jan. 1 and Dec. 31, 2010. Appendiceal diameter was measured in the axial and coronal planes by two reviewers (R1, R2). One year later, 10% of cases were remeasured. For patients who had multiple CTs, serial measurements were made to assess within patient variability. Measurement differences between planes, within and between reviewers, within patients and between CT and pathological measurements were assessed using correlation coefficients and paired t-tests. Six hundred thirty-one CTs performed in 519 patients (mean age: 10.9 ± 4.9 years, 50.8% female) were reviewed. Axial and coronal measurements were strongly correlated (r = 0.92-0.94, P < 0.0001) with coronal plane measurements significantly larger (P < 0.0001). Measurements were strongly correlated between reviewers (r = 0.89-0.9, P < 0.0001) but differed significantly in both planes (axial: +0.2 mm, P=0.003; coronal: +0.1 mm, P=0.007). Repeat measurements were significantly different for one reviewer only in the axial plane (0.3 mm difference, P<0.05). Within patients imaged multiple times, measured appendix diameters differed significantly in the axial plane for both reviewers (R1: 0.5 mm, P = 0.031; R2: 0.7 mm, P = 0.022). Multiple potential sources of measurement error raise concern about the use of rigid diameter cutoffs for the diagnosis of acute appendicitis by CT. (orig.)
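
The between-plane and between-reviewer comparisons described above (correlation coefficients plus paired t-tests) can be sketched with `scipy.stats`; the eight appendix diameters below are invented for illustration:

```python
from scipy.stats import pearsonr, ttest_rel

# Hypothetical appendix diameters (mm) measured in two planes for 8 patients
axial   = [6.1, 7.3, 5.8, 9.2, 6.5, 8.0, 7.1, 6.9]
coronal = [6.3, 7.4, 6.1, 9.5, 6.6, 8.3, 7.2, 7.2]

r, r_p = pearsonr(axial, coronal)    # strong correlation between planes
t, t_p = ttest_rel(coronal, axial)   # paired test: coronal systematically larger
mean_bias = sum(c - a for c, a in zip(coronal, axial)) / len(axial)
```

A high correlation with a significant paired difference is exactly the pattern the abstract reports: the two planes agree well in rank order, yet one yields systematically larger diameters, which is what undermines rigid diameter cutoffs.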

  8. Tracking and shape errors measurement of concentrating heliostats

    Science.gov (United States)

    Coquand, Mathieu; Caliot, Cyril; Hénault, François

    2017-09-01

    In solar tower power plants, factors such as tracking accuracy, facet misalignment and surface shape errors of concentrating heliostats are of prime importance to the efficiency of the system. At industrial scale, one critical issue is the time and effort required to adjust the different mirrors of the faceted heliostats, which can take several months using current techniques. Thus, methods enabling quick adjustment of a field with a huge number of heliostats are essential for the rise of solar tower technology. This communication describes a new method for heliostat characterization that makes use of four cameras located near the solar receiver, simultaneously recording images of the sun reflected by the optical surfaces. From knowledge of a measured sun profile, data processing of the acquired images allows the slope and shape errors of the heliostats, including tracking and canting errors, to be reconstructed. The mathematical basis of this shape reconstruction process is explained comprehensively. Numerical simulations demonstrate that the measurement accuracy of this "backward-gazing method" complies with the requirements of solar concentrating optics. Finally, we present our first experimental results obtained at the THEMIS experimental solar tower plant in Targasonne, France.

  9. Error reduction techniques for measuring long synchrotron mirrors

    International Nuclear Information System (INIS)

    Irick, S.

    1998-07-01

    Many instruments and techniques are used for measuring long mirror surfaces. A Fizeau interferometer may be used to measure mirrors much longer than the interferometer aperture size by using grazing incidence at the mirror surface and analyzing the light reflected from a flat end mirror. Advantages of this technique are data acquisition speed and use of a common instrument. Disadvantages are a reduced sampling interval, uncertainty of tangential position, and a sagittal/tangential aspect ratio other than unity. Also, deep aspheric surfaces cannot be measured with a Fizeau interferometer without a specially made fringe-nulling holographic plate. Other scanning instruments have been developed for measuring height, slope, or curvature profiles of the surface, but lack accuracy for the very long scans required for X-ray synchrotron mirrors. The Long Trace Profiler (LTP) was developed specifically for long X-ray mirror measurement, and still outperforms other instruments, especially for aspheres. Thus, this paper focuses on error reduction techniques for the LTP.

  10. Error Characterization of Altimetry Measurements at Climate Scales

    Science.gov (United States)

    Ablain, Michael; Larnicol, Gilles; Faugere, Yannice; Cazenave, Anny; Meyssignac, Benoit; Picot, Nicolas; Benveniste, Jerome

    2013-09-01

    Thanks to studies performed in the framework of the SALP project (supported by CNES) since the TOPEX era, and more recently in the framework of the Sea-Level Climate Change Initiative project (supported by ESA), strong improvements have been made in the estimation of the global and regional mean sea level over the whole altimeter period, for all altimetric missions. Building on these efforts, a better characterization of altimeter measurement errors at climate scales has been performed and is presented in this paper. These errors have been compared with user requirements in order to determine whether the scientific goals of the altimeter missions are being met. The main message of this paper is the importance of strengthening the link between the altimeter and climate communities to improve or refine user requirements, to better specify future altimeter systems for climate applications, and also to reprocess older missions beyond their original specifications.

  11. DOI resolution measurement and error analysis with LYSO and APDs

    International Nuclear Information System (INIS)

    Lee, Chae-hun; Cho, Gyuseong

    2008-01-01

    Spatial resolution degradation in PET occurs at the edge of the Field Of View (FOV) due to parallax error. To improve spatial resolution at the edge of the FOV, Depth-Of-Interaction (DOI) PET has been investigated and several methods for DOI positioning have been proposed. In this paper, a DOI-PET detector module using two 8×4 avalanche photodiode (APD) arrays (Hamamatsu, S8550) and a 2 cm long LYSO scintillation crystal is proposed and its DOI characteristics are investigated experimentally. To measure DOI positions, the signals from the two APDs were compared. The energy resolution was obtained from the sum of the two APDs' signals, and the DOI positioning error was calculated. Finally, an optimum DOI step size for the 2 cm long LYSO crystal is suggested to aid the design of a DOI-PET.

  12. Measurement system and model for simultaneously measuring 6DOF geometric errors.

    Science.gov (United States)

    Zhao, Yuqiong; Zhang, Bin; Feng, Qibo

    2017-09-04

    A measurement system to simultaneously measure six degree-of-freedom (6DOF) geometric errors is proposed. The measurement method is based on a combination of mono-frequency laser interferometry and laser fiber collimation. A simpler and more integrated optical configuration is designed. To compensate for the measurement errors introduced by error crosstalk, element fabrication error, laser beam drift, and nonparallelism of the two measurement beams, a unified measurement model, which can improve the measurement accuracy, is deduced and established using the ray-tracing method. A numerical simulation using the optical design software Zemax is conducted, and the results verify the correctness of the model. Several experiments are performed to demonstrate the feasibility and effectiveness of the proposed system and measurement model.

  13. Earth orientation from lunar laser ranging and an error analysis of polar motion services

    Science.gov (United States)

    Dickey, J. O.; Newhall, X. X.; Williams, J. G.

    1985-01-01

    Lunar laser ranging (LLR) data are obtained by timing laser pulses travelling from observatories on earth to retroreflectors placed on the moon's surface during the Apollo program. The modeling and analysis of LLR data can provide valuable insights into the earth's dynamics. The ability to model the lunar orbit accurately over the full 13-year observation span makes it possible to conduct relatively long-term studies of variations in the earth's rotation. A description is provided of general analysis techniques, and the calculation of universal time (UT1) from LLR is discussed. Attention is also given to a summary of intercomparisons with different techniques, polar motion results and intercomparisons, and a polar motion error analysis.

  14. Propagation of Radiosonde Pressure Sensor Errors to Ozonesonde Measurements

    Science.gov (United States)

    Stauffer, R. M.; Morris, G.A.; Thompson, A. M.; Joseph, E.; Coetzee, G. J. R.; Nalli, N. R.

    2014-01-01

    Several previous studies highlight pressure (or, equivalently, pressure altitude) discrepancies between the radiosonde pressure sensor and that derived from a GPS flown with the radiosonde. The offsets vary during the ascent both in absolute and percent pressure differences. To investigate this problem further, a total of 731 radiosonde-ozonesonde launches from the Southern Hemisphere subtropics to Northern mid-latitudes are considered, with launches between 2005 and 2013 from both longer-term and campaign-based intensive stations. Five series of radiosondes from two manufacturers (International Met Systems: iMet, iMet-P, iMet-S, and Vaisala: RS80-15N and RS92-SGP) are analyzed to determine the magnitude of the pressure offset. Additionally, electrochemical concentration cell (ECC) ozonesondes from three manufacturers (Science Pump Corporation; SPC and ENSCI-Droplet Measurement Technologies; DMT) are analyzed to quantify the effects these offsets have on the calculation of ECC ozone (O3) mixing ratio profiles (O3MR) from the ozonesonde-measured partial pressure. Approximately half of all offsets are 0.6 hPa in the free troposphere, with nearly a third 1.0 hPa at 26 km, where a 1.0 hPa error represents 5 percent of the total atmospheric pressure. Pressure offsets have negligible effects on O3MR below 20 km (96 percent of launches lie within 5 percent O3MR error at 20 km). Ozone mixing ratio errors above 10 hPa (30 km) can exceed 10 percent (25 percent of launches that reach 30 km exceed this threshold). These errors cause the integrated ozonesonde-only column O3 from the GPS and radiosonde pressure profiles to disagree by an average of +6.5 DU. Comparisons of total column O3 between the GPS and radiosonde pressure profiles yield average differences of +1.1 DU when the O3 is integrated to burst with the addition of the McPeters and Labow (2012) above-burst O3 column climatology. Total column differences are reduced to an average of -0.5 DU when
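
    The altitude dependence reported above follows from the definition of mixing ratio as ozone partial pressure divided by ambient pressure: a fractional error in the radiosonde pressure maps almost one-to-one (with opposite sign) into the mixing ratio. A minimal sketch of that arithmetic, using illustrative values (the function name and conditions below are assumptions, not taken from the paper):

```python
def o3_mixing_ratio_ppmv(p_o3_mpa, p_air_hpa):
    """Ozone mixing ratio (ppmv) from ECC partial pressure (mPa) and
    ambient pressure (hPa); 1 mPa = 1e-5 hPa, so the ratio is 10*pO3/P."""
    return 10.0 * p_o3_mpa / p_air_hpa

# Illustrative stratospheric conditions near 30 km: ~10 hPa, ~5 mPa ozone.
p_o3, p_true = 5.0, 10.0
true_mr = o3_mixing_ratio_ppmv(p_o3, p_true)           # 5.0 ppmv

# A +1 hPa radiosonde pressure offset (10% of ambient) biases the ratio low.
biased_mr = o3_mixing_ratio_ppmv(p_o3, p_true + 1.0)
rel_error = (biased_mr - true_mr) / true_mr            # about -9.1%
```

    The same 1 hPa offset near 20 km (~50 hPa ambient) produces only a ~2% mixing ratio error, consistent with the errors growing with altitude as described above.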

  15. Measurements of stem diameter: implications for individual- and stand-level errors.

    Science.gov (United States)

    Paul, Keryn I; Larmour, John S; Roxburgh, Stephen H; England, Jacqueline R; Davies, Micah J; Luck, Hamish D

    2017-08-01

    Stem diameter is one of the most common measurements made to assess the growth of woody vegetation, and the commercial and environmental benefits that it provides (e.g. wood or biomass products, carbon sequestration, landscape remediation). Yet inconsistency in its measurement is a continuing source of error in estimates of stand-scale measures such as basal area, biomass, and volume. Here we assessed errors in stem diameter measurement through repeated measurements of individual trees and shrubs of varying size and form (i.e. single- and multi-stemmed) across a range of contrasting stands, from complex mixed-species plantings to commercial single-species plantations. We compared a standard diameter tape with a Stepped Diameter Gauge (SDG) for time efficiency and measurement error. Measurement errors in diameter were slightly (but significantly) influenced by the size and form of the tree or shrub, and by the stem height at which the measurement was made. Compared to standard tape measurement, the mean systematic error with SDG measurement was only -0.17 cm, though it varied between -0.10 and -0.52 cm. Random error was likewise relatively small, with standard deviations (and percentage coefficients of variation) averaging 0.36 cm (and 3.8%), but varying between 0.14 and 0.61 cm (and 1.9 and 7.1%). However, at the stand scale, sampling errors (i.e. how well the individual trees or shrubs selected for measurement of diameter represented the true stand population in terms of the average and distribution of diameter) generally had at least a tenfold greater influence on random errors in basal area estimates than errors in diameter measurements. This supports the use of diameter measurement tools that have high efficiency, such as the SDG. Use of the SDG almost halved the time required for measurements compared to the diameter tape. Based on these findings, recommendations include the following: (i) use of a tape to maximise accuracy when developing allometric models, or when
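
    Because basal area scales with the square of diameter, a small relative diameter error roughly doubles when propagated to basal area. A short sketch of that propagation, using the ~3.8% coefficient of variation quoted above (the diameter value is illustrative):

```python
import math

def basal_area_cm2(d_cm):
    """Stem basal area (cm^2) from over-bark diameter (cm)."""
    return math.pi * (d_cm / 2.0) ** 2

d = 20.0        # true diameter, cm (illustrative)
cv_d = 0.038    # ~3.8% relative diameter error, as quoted above

ba_true = basal_area_cm2(d)
ba_biased = basal_area_cm2(d * (1.0 + cv_d))
rel_ba_error = ba_biased / ba_true - 1.0   # (1+cv)^2 - 1 ~= 2*cv, about 7.7%
```

    Even so, as the abstract notes, stand-level sampling error (which trees get measured at all) typically dominates this per-tree measurement error by an order of magnitude.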

  16. Validation and Error Characterization for the Global Precipitation Measurement

    Science.gov (United States)

    Bidwell, Steven W.; Adams, W. J.; Everett, D. F.; Smith, E. A.; Yuter, S. E.

    2003-01-01

    The Global Precipitation Measurement (GPM) is an international effort to increase scientific knowledge on the global water cycle, with specific goals of improving the understanding and prediction of climate, weather, and hydrology. These goals will be achieved through several satellites specifically dedicated to GPM, along with the integration of numerous meteorological satellite data streams from international and domestic partners. The GPM effort is led by the National Aeronautics and Space Administration (NASA) of the United States and the National Space Development Agency (NASDA) of Japan. In addition to the spaceborne assets, international and domestic partners will provide ground-based resources for validating the satellite observations and retrievals. This paper describes the validation effort of Global Precipitation Measurement to provide quantitative estimates of the errors of the GPM satellite retrievals. The GPM validation approach will build upon the research experience of the Tropical Rainfall Measuring Mission (TRMM) retrieval comparisons and its validation program. The GPM ground validation program will employ instrumentation, physical infrastructure, and research capabilities at Supersites located in important meteorological regimes of the globe. NASA will provide two Supersites, one in a tropical oceanic and the other in a mid-latitude continental regime. GPM international partners will provide Supersites for other important regimes. Those objectives or regimes not addressed by Supersites will be covered through focused field experiments. This paper describes the specific errors that GPM ground validation will address, quantify, and relate to the GPM satellite physical retrievals. GPM will attempt to identify the sources of errors within retrievals, including those of instrument calibration, retrieval physical assumptions, and algorithm applicability. With the identification of error sources, improvements will be made to the respective calibration

  17. The effect of misclassification errors on case mix measurement.

    Science.gov (United States)

    Sutherland, Jason M; Botz, Chas K

    2006-12-01

    Case mix systems have been implemented for hospital reimbursement and performance measurement across Europe and North America. Case mix categorizes patients into discrete groups based on clinical information obtained from patient charts, in an attempt to identify clinical or cost differences among these groups. The diagnosis related group (DRG) case mix system is the most common methodology, with variants adopted in many countries. External validation studies of coding quality have confirmed that widespread variability exists between originally recorded diagnoses and re-abstracted clinical information. DRG assignment errors in hospitals that share patient-level cost data for the purpose of establishing cost weights affect cost weight accuracy. The purpose of this study is to estimate bias in cost weights due to measurement error in reported clinical information. DRG assignment error rates are simulated based on recent clinical re-abstraction study results. Our simulation study estimates that 47% of cost weights representing the least severe cases are overweighted by 10%, while 32% of cost weights representing the most severe cases are underweighted by 10%. Applying the simulated weights to a cross-section of hospitals, we find that teaching hospitals tend to be underweighted. Since inaccurate cost weights challenge the ability of case mix systems to accurately reflect patient mix and may distort hospital funding, bias in hospital case mix measurement highlights the role clinical data quality plays in hospital funding in countries that use DRG-type case mix systems. The quality of clinical information should be carefully considered when it comes from hospitals that contribute financial data for establishing cost weights.
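
    The direction of the bias the study reports can be reproduced with toy arithmetic: symmetric miscoding between a mild and a severe DRG pulls the two observed group means toward each other. All numbers below are invented for illustration, not taken from the study:

```python
# Hypothetical true mean costs for a mild and a severe DRG, equal group sizes.
cost_mild, cost_severe = 1000.0, 5000.0
p_swap = 0.10   # 10% of each group miscoded into the other group

# Group means computed from the (mis)coded data.
obs_mild = (1 - p_swap) * cost_mild + p_swap * cost_severe    # 1400.0
obs_severe = (1 - p_swap) * cost_severe + p_swap * cost_mild  # 4600.0

bias_mild = obs_mild / cost_mild - 1.0        # +40%: least severe overweighted
bias_severe = obs_severe / cost_severe - 1.0  # -8%: most severe underweighted
```

    Severe cases miscoded as mild raise the mild group's mean (overweighting it), while mild cases miscoded as severe lower the severe group's mean (underweighting it), matching the pattern in the simulation results above.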

  18. Integration of Error Compensation of Coordinate Measuring Machines into Feature Measurement: Part I—Model Development

    Science.gov (United States)

    Calvo, Roque; D’Amato, Roberto; Gómez, Emilio; Domingo, Rosario

    2016-01-01

    The development of an error compensation model for coordinate measuring machines (CMMs) and its integration into feature measurement is presented. CMMs are widespread and dependable instruments in industry and laboratories for dimensional measurement. From the tip probe sensor to the machine display, there is a complex transformation of probed point coordinates through the geometrical feature model that makes the assessment of measurement accuracy and uncertainty difficult. Therefore, error compensation is not standardized, unlike for other, simpler instruments. Detailed coordinate error compensation models are generally based on treating the CMM as a rigid body and require a detailed mapping of the CMM's behavior. In this paper a new type of error compensation model is proposed. It evaluates the error from the vectorial composition of per-axis length errors and integrates it into the geometrical measurement model. The variability not explained by the model is incorporated into the uncertainty budget. Model parameters are analyzed and linked to the geometrical errors and the uncertainty of the CMM response. Next, the measurement models of flatness, angle, and roundness are developed. The proposed models are useful for measurement improvement with easy integration into CMM signal processing, in particular in industrial environments where built-in solutions are sought. A battery of implementation tests is presented in Part II, where the experimental endorsement of the model is included. PMID:27690052
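
    The core idea of composing per-axis length errors vectorially can be sketched as follows. This is a deliberately simplified illustration assuming only linear scale errors per axis (the real model in the paper is richer); all values and names are hypothetical:

```python
import math

# Hypothetical linear scale errors per axis (dimensionless, e.g. 2e-6 = 2 um/m).
k = {"x": 2e-6, "y": -1e-6, "z": 3e-6}

def indicated(p):
    """Coordinates the CMM would display for a true point p = (x, y, z) in mm,
    under the assumed per-axis linear scale errors."""
    return tuple(c * (1 + k[axis]) for c, axis in zip(p, "xyz"))

def length(p, q):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

p_true, q_true = (0.0, 0.0, 0.0), (300.0, 400.0, 0.0)   # true length 500 mm
err = length(indicated(p_true), indicated(q_true)) - length(p_true, q_true)
# err ~= (x*dx + y*dy)/L = (300*0.0006 - 400*0.0004)/500 = 4e-5 mm (40 nm)
```

    The length error of any measured feature is thus the projection of the per-axis errors onto the measurement direction, which is what lets a compact per-axis model replace a full rigid-body error map.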

  19. Integration of Error Compensation of Coordinate Measuring Machines into Feature Measurement: Part I—Model Development

    Directory of Open Access Journals (Sweden)

    Roque Calvo

    2016-09-01

    Full Text Available The development of an error compensation model for coordinate measuring machines (CMMs) and its integration into feature measurement is presented. CMMs are widespread and dependable instruments in industry and laboratories for dimensional measurement. From the tip probe sensor to the machine display, there is a complex transformation of probed point coordinates through the geometrical feature model that makes the assessment of measurement accuracy and uncertainty difficult. Therefore, error compensation is not standardized, unlike for other, simpler instruments. Detailed coordinate error compensation models are generally based on treating the CMM as a rigid body and require a detailed mapping of the CMM's behavior. In this paper a new type of error compensation model is proposed. It evaluates the error from the vectorial composition of per-axis length errors and integrates it into the geometrical measurement model. The variability not explained by the model is incorporated into the uncertainty budget. Model parameters are analyzed and linked to the geometrical errors and the uncertainty of the CMM response. Next, the measurement models of flatness, angle, and roundness are developed. The proposed models are useful for measurement improvement with easy integration into CMM signal processing, in particular in industrial environments where built-in solutions are sought. A battery of implementation tests is presented in Part II, where the experimental endorsement of the model is included.

  20. Measurement errors for thermocouples attached to thin plates

    International Nuclear Information System (INIS)

    Sobolik, K.B.; Keltner, N.R.; Beck, J.V.

    1989-01-01

    This paper discusses Unsteady Surface Element (USE) methods, which are applied to a model of a thermocouple wire attached to a thin disk. Green's functions are used to develop the integral equations for the wire and the disk. The model can be used to evaluate transient and steady-state responses for many types of heat flux measurement devices, including thin-skin calorimeters and circular-foil (Gardon) heat flux gauges. The model can accommodate either surface or volumetric heating of the disk. The boundary condition at the outer radius of the disk can be either insulated or constant temperature. The effect of geometrical and thermal factors on the errors can be assessed. Examples are given.

  1. Circular Array of Magnetic Sensors for Current Measurement: Analysis for Error Caused by Position of Conductor.

    Science.gov (United States)

    Yu, Hao; Qian, Zheng; Liu, Huayi; Qu, Jiaqi

    2018-02-14

    This paper analyzes the measurement error, caused by the position of the current-carrying conductor, of a circular array of magnetic sensors for current measurement. The circular array of magnetic sensors is an effective approach for non-contact measurement of AC or DC, as it is low-cost and light-weight, and has a large linear range, wide bandwidth, and low noise. In particular, it has been claimed that such a structure has an excellent ability to reduce errors caused by the position of the current-carrying conductor, crosstalk current interference, the shape of the conductor cross-section, and the Earth's magnetic field. However, the position of the current-carrying conductor, including un-centeredness and un-perpendicularity, had not been analyzed in detail until now. In this paper, with the aim of minimizing measurement error, a theoretical analysis based on vector inner and exterior products is proposed. In the presented mathematical model of relative error, the un-center offset distance, the un-perpendicular angle, the radius of the circle, and the number of magnetic sensors are expressed in one equation. The relative error caused by the position of the current-carrying conductor is compared between four and eight sensors. Tunnel magnetoresistance (TMR) sensors are used in the experimental prototype to verify the mathematical model. The analysis results can serve as a reference for designing the details of a circular array of magnetic sensors for current measurement in practical situations.
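
    The sensor-count comparison can be illustrated numerically: summing the tangential field at N points on a circle is a discretized Ampère's law, exact for a centered conductor and increasingly tolerant of offset as N grows. The geometry and values below are illustrative assumptions, not the paper's model:

```python
import math

MU0 = 4e-7 * math.pi  # vacuum permeability, H/m

def estimated_current(i_true, r, n, offset):
    """Discrete Ampere's-law estimate of a long straight conductor's current
    from N tangential-field samples on a circle of radius r (m); the conductor
    is displaced by `offset` (m) along +x from the array center."""
    total = 0.0
    for k in range(n):
        theta = 2 * math.pi * k / n
        sx, sy = r * math.cos(theta), r * math.sin(theta)   # sensor position
        tx, ty = -math.sin(theta), math.cos(theta)          # tangent direction
        dx, dy = sx - offset, sy                            # conductor -> sensor
        d2 = dx * dx + dy * dy
        bx = -MU0 * i_true * dy / (2 * math.pi * d2)        # long-wire field
        by = MU0 * i_true * dx / (2 * math.pi * d2)
        total += bx * tx + by * ty                          # tangential component
    return total * (2 * math.pi * r / n) / MU0

# 100 A conductor, 50 mm array radius, 15 mm offset (offset/r = 0.3).
err4 = abs(estimated_current(100.0, 0.05, 4, 0.015) / 100.0 - 1)
err8 = abs(estimated_current(100.0, 0.05, 8, 0.015) / 100.0 - 1)
# err8 << err4: the leading error term scales as (offset/r)**N
```

    With the conductor centered the estimate is exact for any N, which matches the claimed robustness of the structure; the residual position error shrinks rapidly as sensors are added.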

  2. Compact range for variable-zone measurements

    Science.gov (United States)

    Burnside, Walter D.; Rudduck, Roger C.; Yu, Jiunn S.

    1988-08-02

    A compact range for testing antennas or radar targets includes a source for directing energy along a feedline toward a parabolic reflector. The reflected wave is a spherical wave with a radius dependent on the distance of the source from the focal point of the reflector.

  3. Development of an Abbe Error Free Micro Coordinate Measuring Machine

    Directory of Open Access Journals (Sweden)

    Qiangxian Huang

    2016-04-01

    Full Text Available A micro Coordinate Measuring Machine (CMM) with a measurement volume of 50 mm × 50 mm × 50 mm and a measuring accuracy of about 100 nm (2σ) has been developed. In this new micro CMM, an XYZ stage, which is driven by three piezo-motors in the X, Y and Z directions, can achieve a drive resolution of about 1 nm and a stroke of more than 50 mm. In order to reduce the crosstalk among the X-, Y- and Z-stages, a special mechanical structure, called a co-planar stage, is introduced. The movement of the stage in each direction is detected by a laser interferometer. A contact-type probe is adopted for measurement. The center of the probe ball coincides with the intersection point of the measuring axes of the three laser interferometers. Therefore, the metrological system of the CMM obeys the Abbe principle in all three directions and is free from Abbe error. The CMM is placed in an anti-vibration and thermostatic chamber to avoid the influence of vibration and temperature fluctuation. A series of experimental results shows that the measurement uncertainty within 40 mm along the X, Y and Z directions is about 100 nm (2σ). The flatness of the measuring face of a gauge block was also measured, verifying the performance of the developed micro CMM.

  4. Comparison of Neural Network Error Measures for Simulation of Slender Marine Structures

    DEFF Research Database (Denmark)

    Christiansen, Niels H.; Voie, Per Erlend Torbergsen; Winther, Ole

    2014-01-01

    Training of an artificial neural network (ANN) adjusts the internal weights of the network in order to minimize a predefined error measure. This error measure is given by an error function. Several different error functions are suggested in the literature. However, by far the most common measure...
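
    The abstract is cut off, but its point, that the choice of error function changes which model the training prefers, is easy to illustrate. In the toy comparison below (values invented), two prediction sets have identical mean squared error yet different mean absolute error, so the two measures rank them differently:

```python
def mse(y, yhat):
    """Mean squared error: penalizes large deviations quadratically."""
    return sum((a - b) ** 2 for a, b in zip(y, yhat)) / len(y)

def mae(y, yhat):
    """Mean absolute error: penalizes all deviations linearly."""
    return sum(abs(a - b) for a, b in zip(y, yhat)) / len(y)

y = [0.0, 0.0, 0.0, 0.0]        # targets
pred_a = [1.0, 1.0, 1.0, 1.0]   # uniform small errors
pred_b = [0.0, 0.0, 0.0, 2.0]   # one large outlier

# mse(y, pred_a) == mse(y, pred_b) == 1.0,
# but mae(y, pred_b) == 0.5 < mae(y, pred_a) == 1.0
```

    A network trained under MSE would treat the two equally; under MAE it would favor the outlier-heavy predictions. For simulation of physical response signals, which measure is appropriate depends on whether occasional large deviations are tolerable.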

  5. Modeling gene expression measurement error: a quasi-likelihood approach

    Directory of Open Access Journals (Sweden)

    Strimmer Korbinian

    2003-03-01

    Full Text Available Abstract Background Using suitable error models for gene expression measurements is essential in the statistical analysis of microarray data. However, the true probabilistic model underlying gene expression intensity readings is generally not known. Instead, in currently used approaches some simple parametric model is assumed (usually a transformed normal distribution) or the empirical distribution is estimated. However, both these strategies may not be optimal for gene expression data, as the non-parametric approach ignores known structural information whereas the fully parametric models run the risk of misspecification. A further related problem is the choice of a suitable scale for the model (e.g. observed vs. log-scale). Results Here a simple semi-parametric model for gene expression measurement error is presented. In this approach inference is based on an approximate likelihood function (the extended quasi-likelihood). Only partial knowledge about the unknown true distribution is required to construct this function. In the case of gene expression this information is available in the form of the postulated (e.g. quadratic) variance structure of the data. As the quasi-likelihood behaves (almost) like a proper likelihood, it allows for the estimation of calibration and variance parameters, and it is also straightforward to obtain corresponding approximate confidence intervals. Unlike most other frameworks, it also allows analysis on any preferred scale, i.e. both on the original linear scale as well as on a transformed scale. It can also be employed in regression approaches to model systematic (e.g. array or dye) effects. Conclusions The quasi-likelihood framework provides a simple and versatile approach to analyze gene expression data that does not make any strong distributional assumptions about the underlying error model. For several simulated as well as real data sets it provides a better fit to the data than competing models. In an example it also

  6. The regression-calibration method for fitting generalized linear models with additive measurement error

    OpenAIRE

    James W. Hardin; Henrik Schmeidiche; Raymond J. Carroll

    2003-01-01

    This paper discusses and illustrates the method of regression calibration. This is a straightforward technique for fitting models with additive measurement error. We present this discussion in terms of generalized linear models (GLMs) following the notation defined in Hardin and Carroll (2003). Discussion will include specified measurement error, measurement error estimated by replicate error-prone proxies, and measurement error estimated by instrumental variables. The discussion focuses on s...

  7. Aerogel Antennas Communications Study Using Error Vector Magnitude Measurements

    Science.gov (United States)

    Miranda, Felix A.; Mueller, Carl H.; Meador, Mary Ann B.

    2014-01-01

    This presentation discusses an aerogel antennas communication study using error vector magnitude (EVM) measurements. The study was performed using 2x4 element polyimide (PI) aerogel-based phased arrays designed for operation at 5 GHz as transmit (Tx) and receive (Rx) antennas separated by a line of sight (LOS) distance of 8.5 meters. The results of the EVM measurements demonstrate that polyimide aerogel antennas work appropriately to support digital communication links with typically used modulation schemes such as QPSK and pi/4 DQPSK. As such, PI aerogel antennas with higher gain, larger bandwidth and lower mass than typically used microwave laminates could be suitable to enable aerospace-to-ground communication links with enough channel capacity to support voice, data and video links from CubeSats, unmanned air vehicles (UAV), and commercial aircraft.

  8. Aerogel Antennas Communications Study Using Error Vector Magnitude Measurements

    Science.gov (United States)

    Miranda, Felix A.; Mueller, Carl H.; Meador, Mary Ann B.

    2014-01-01

    This paper discusses an aerogel antennas communication study using error vector magnitude (EVM) measurements. The study was performed using 4x2 element polyimide (PI) aerogel-based phased arrays designed for operation at 5 GHz as transmit (Tx) and receive (Rx) antennas separated by a line of sight (LOS) distance of 8.5 meters. The results of the EVM measurements demonstrate that polyimide aerogel antennas work appropriately to support digital communication links with typically used modulation schemes such as QPSK and pi/4 DQPSK. As such, PI aerogel antennas with higher gain, larger bandwidth and lower mass than typically used microwave laminates could be suitable to enable aerospace-to-ground communication links with enough channel capacity to support voice, data and video links from CubeSats, unmanned air vehicles (UAV), and commercial aircraft.

  9. Development of a simple system for simultaneously measuring 6DOF geometric motion errors of a linear guide.

    Science.gov (United States)

    Qibo, Feng; Bin, Zhang; Cunxing, Cui; Cuifang, Kuang; Yusheng, Zhai; Fenglin, You

    2013-11-04

    A simple method for simultaneously measuring the 6DOF geometric motion errors of a linear guide was proposed. The mechanisms for measuring straightness and angular errors and for enhancing their resolution are described in detail. A common-path method for measuring laser beam drift was proposed, and it was used to compensate for the errors produced by laser beam drift in the 6DOF geometric error measurements. A compact 6DOF system was built. Calibration experiments against standard reference instruments showed that our system has a standard deviation of 0.5 µm over a range of ± 100 µm for the straightness measurements, and standard deviations of 0.5", 0.5", and 1.0" over the range of ± 100" for pitch, yaw, and roll measurements, respectively.

  10. Impact of Glucose Meter Error on Glycemic Variability and Time in Target Range During Glycemic Control After Cardiovascular Surgery.

    Science.gov (United States)

    Karon, Brad S; Meeusen, Jeffrey W; Bryant, Sandra C

    2015-08-25

    We retrospectively studied the impact of glucose meter error on the efficacy of glycemic control after cardiovascular surgery. Adult patients undergoing intravenous insulin glycemic control therapy after cardiovascular surgery, with 12-24 consecutive glucose meter measurements used to make insulin dosing decisions, had glucose values analyzed to determine glycemic variability, by both standard deviation (SD) and continuous overall net glycemic action (CONGA), and the percentage of glucose values in the target range (110-150 mg/dL). Information was recorded for 70 patients during each of 2 periods, with a different glucose meter used to measure glucose and dose insulin during each period but no other changes to the glycemic control protocol. The accuracy and precision of each meter were also compared using whole blood specimens from ICU patients. Glucose meter 1 (GM1) had a median bias of 11 mg/dL compared to a laboratory reference method, while glucose meter 2 (GM2) had a median bias of 1 mg/dL. GM1 and GM2 differed little in precision (CV = 2.0% and 2.7%, respectively). Compared to the period when GM1 was used to make insulin dosing decisions, patients whose insulin dose was managed by GM2 demonstrated reduced glycemic variability as measured by SD (13.7 vs 21.6 mg/dL). Reduced glucose meter error (bias) was associated with decreased glycemic variability and an increased percentage of values in the target glucose range for patients placed on intravenous insulin therapy following cardiovascular surgery. © 2015 Diabetes Technology Society.
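
    The two variability metrics named above measure different things: SD captures the overall spread of glucose values, while CONGA-n is conventionally the standard deviation of differences between readings n hours apart, so it reflects short-term swings. A minimal sketch (using observation steps rather than hours, with invented readings) shows how the two can disagree:

```python
import statistics

def conga(glucose, n=1):
    """CONGA-n: standard deviation of differences between readings n steps
    apart (the clinical definition uses a time lag in hours)."""
    diffs = [b - a for a, b in zip(glucose, glucose[n:])]
    return statistics.stdev(diffs)

ramp = [100, 110, 120, 130, 140]   # steadily rising glucose, mg/dL
sd = statistics.stdev(ramp)        # ~15.8 mg/dL: large overall spread
c1 = conga(ramp, 1)                # 0.0: no step-to-step variability at all
```

    A steady drift inflates SD but leaves CONGA at zero, which is why studies like this one report both before attributing variability changes to meter error.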

  11. Characterization of the main error sources of chromatic confocal probes for dimensional measurement

    International Nuclear Information System (INIS)

    Nouira, H; El-Hayek, N; Yuan, X; Anwer, N

    2014-01-01

    Chromatic confocal probes are increasingly used in high-precision dimensional metrology applications such as roughness, form, thickness and surface profile measurements; however, their measurement behaviour is not well understood and must be characterized at a nanometre level. This paper provides a calibration bench for the characterization of two chromatic confocal probes of 20 and 350 µm travel ranges. The metrology loop that includes the chromatic confocal probe is stable and enables measurement repeatability at the nanometre level. With the proposed system, the major error sources, such as the relative axial and radial motions of the probe with respect to the sample, the material, colour and roughness of the measured sample, the relative deviation/tilt of the probe, and the scanning speed, are identified. Experimental test results show that the chromatic confocal probes are sensitive to these errors and that their measurement behaviour is highly dependent on them. (paper)

  12. On modeling animal movements using Brownian motion with measurement error.

    Science.gov (United States)

    Pozdnyakov, Vladimir; Meyer, Thomas; Wang, Yu-Bo; Yan, Jun

    2014-02-01

    Modeling animal movements with Brownian motion (or more generally by a Gaussian process) has a long tradition in ecological studies. The recent Brownian bridge movement model (BBMM), which incorporates measurement errors, has been quickly adopted by ecologists because of its simplicity and tractability. We discuss some nontrivial properties of the discrete-time stochastic process that results from observing a Brownian motion with added normal noise at discrete times. In particular, we demonstrate that the observed sequence of random variables is not Markov. Consequently the expected occupation time between two successively observed locations does not depend on just those two observations; the whole path must be taken into account. Nonetheless, the exact likelihood function of the observed time series remains tractable; it requires only sparse matrix computations. The likelihood-based estimation procedure is described in detail and compared to the BBMM estimation.
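
    The observation model the abstract describes, Brownian motion observed with additive normal noise, has covariance Cov(Y_i, Y_j) = σ²·min(t_i, t_j) + τ²·δ_ij, and because the observed sequence is not Markov, the exact likelihood must use the full covariance matrix rather than one-step transitions. A small self-contained sketch of that exact Gaussian likelihood (pure Python, tiny dense Cholesky; in practice one would exploit the sparse structure the authors mention, and BBMM itself additionally conditions on bridge endpoints):

```python
import math

def bm_noise_cov(times, sigma2, tau2):
    """Covariance of Y_i = X(t_i) + eps_i, with X Brownian motion (variance
    parameter sigma2) and independent N(0, tau2) measurement noise."""
    n = len(times)
    return [[sigma2 * min(times[i], times[j]) + (tau2 if i == j else 0.0)
             for j in range(n)] for i in range(n)]

def log_likelihood(y, cov):
    """Exact zero-mean Gaussian log-likelihood via Cholesky: cov = L L^T."""
    n = len(y)
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = sum(L[i][k] * L[j][k] for k in range(j))
            L[i][j] = (math.sqrt(cov[i][i] - s) if i == j
                       else (cov[i][j] - s) / L[j][j])
    z = []                                   # solve L z = y (forward subst.)
    for i in range(n):
        z.append((y[i] - sum(L[i][k] * z[k] for k in range(i))) / L[i][i])
    logdet = 2.0 * sum(math.log(L[i][i]) for i in range(n))
    return -0.5 * (n * math.log(2 * math.pi) + logdet + sum(v * v for v in z))

times = [1.0, 2.0, 4.0]                      # irregular observation times
cov = bm_noise_cov(times, sigma2=1.0, tau2=0.25)
ll = log_likelihood([0.3, -0.1, 0.8], cov)   # evaluate at candidate parameters
```

    Maximizing `ll` over sigma2 and tau2 gives the likelihood-based estimates the abstract compares against the BBMM procedure.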

  13. Reducing the sensitivity of IMPT treatment plans to setup errors and range uncertainties via probabilistic treatment planning

    International Nuclear Information System (INIS)

    Unkelbach, Jan; Bortfeld, Thomas; Martin, Benjamin C.; Soukup, Martin

    2009-01-01

    Treatment plans optimized for intensity modulated proton therapy (IMPT) may be very sensitive to setup errors and range uncertainties. If these errors are not accounted for during treatment planning, the dose distribution realized in the patient may be strongly degraded compared to the planned dose distribution. The authors implemented the probabilistic approach to incorporate uncertainties directly into the optimization of an intensity modulated treatment plan. Following this approach, the dose distribution depends on a set of random variables which parameterize the uncertainty, as does the objective function used to optimize the treatment plan. The authors optimize the expected value of the objective function. They investigate IMPT treatment planning regarding range uncertainties and setup errors. They demonstrate that incorporating these uncertainties into the optimization yields qualitatively different treatment plans compared to conventional plans which do not account for uncertainty. The sensitivity of an IMPT plan depends on the dose contributions of individual beam directions. Roughly speaking, steep dose gradients in the beam direction make treatment plans sensitive to range errors, while steep lateral dose gradients make plans sensitive to setup errors. More robust treatment plans are obtained by redistributing dose among different beam directions. This can be achieved by the probabilistic approach. In contrast, the safety margin approach as widely applied in photon therapy fails in IMPT and is not suitable for handling either range variations or setup errors.
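
    The probabilistic approach, optimizing the expected value of the objective over uncertainty scenarios, can be sketched at toy scale. The 2-voxel, 2-beam dose-influence matrices and scenario set below are invented for illustration; real IMPT optimization works on full dose-influence matrices with gradient-based solvers rather than grid search:

```python
def dose(D, w):
    """Dose in each voxel given influence matrix D and beam weights w."""
    return [sum(dij * wj for dij, wj in zip(row, w)) for row in D]

def objective(D, w, presc=(1.0, 1.0)):
    """Quadratic deviation from the prescribed dose."""
    return sum((d - p) ** 2 for d, p in zip(dose(D, w), presc))

# Hypothetical dose-influence matrices: nominal setup, range undershoot,
# and range overshoot scenarios (equal probability).
scenarios = [
    [[1.0, 0.2], [0.2, 1.0]],
    [[0.6, 0.2], [0.5, 1.0]],
    [[1.2, 0.2], [0.0, 1.0]],
]

def expected_objective(w):
    return sum(objective(D, w) for D in scenarios) / len(scenarios)

grid = [i / 100.0 for i in range(201)]  # candidate beam weights 0.00 .. 2.00

def argmin(f):
    return min(((w1, w2) for w1 in grid for w2 in grid), key=f)

w_nominal = argmin(lambda w: objective(scenarios[0], w))  # ignores uncertainty
w_robust = argmin(expected_objective)                     # probabilistic plan
# expected_objective(w_robust) <= expected_objective(w_nominal):
# the robust plan degrades less when range errors occur
```

    The nominal plan is best if nothing goes wrong, but the probabilistic plan redistributes weight between beams to limit the worst-case degradation, which is the qualitative difference the abstract describes.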

  14. Simulation error propagation for a dynamic rod worth measurement technique

    International Nuclear Information System (INIS)

    Kastanya, D.F.; Turinsky, P.J.

    1996-01-01

    The dynamic rod worth measurement (DRWM) technique for measuring pressurized water reactor rod worths was introduced at the Krsko nuclear station and subsequently adapted by Westinghouse. This technique has the potential for reduced test time and less primary loop waste water compared with alternatives. The measurement is performed starting from a slightly supercritical state with all rods out (ARO), driving a bank in at the maximum stepping rate, and recording the ex-core detector responses and bank position as a function of time. The static bank worth is obtained by (1) using the ex-core detector responses to obtain the core average flux; (2) using the core average flux in the inverse point-kinetics equations to obtain the dynamic bank worth; and (3) converting the dynamic bank worth to the static bank worth. In this data interpretation process, various calculated quantities obtained from a core simulator are utilized. This paper presents an analysis of the sensitivity of the deduced static bank worth to core simulator errors.
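
    Step (2), inverse point kinetics, recovers the reactivity history from the measured flux history: with precursor concentrations C_i integrated forward, rho(t) = beta + (Lambda/n)(dn/dt - sum_i lambda_i C_i). A one-delayed-group sketch with illustrative (non-plant) constants:

```python
# One delayed-neutron group, illustrative constants (not plant data).
BETA = 0.0065        # delayed neutron fraction
LAM = 0.08           # precursor decay constant, 1/s
GEN_TIME = 2e-5      # prompt neutron generation time, s

def inverse_kinetics(n, dt):
    """Reactivity history (dimensionless) from a flux history n(t),
    sampled every dt seconds, via one-group inverse point kinetics."""
    c = BETA * n[0] / (GEN_TIME * LAM)   # precursors in equilibrium at t=0
    rho = []
    for i in range(1, len(n)):
        dn_dt = (n[i] - n[i - 1]) / dt
        c += dt * (BETA * n[i] / GEN_TIME - LAM * c)   # Euler precursor step
        rho.append(BETA + GEN_TIME / n[i] * (dn_dt - LAM * c))
    return rho

flat = [1.0] * 100
rho_flat = inverse_kinetics(flat, dt=0.1)   # steady flux -> zero reactivity
```

    In DRWM the flux input comes from the ex-core detector signals (after conversion to core average flux with simulator-derived factors), which is precisely where core simulator errors propagate into the deduced worth.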

  15. Francesca Hughes: Architecture of Error: Matter, Measure and the Misadventure of Precision

    DEFF Research Database (Denmark)

    Foote, Jonathan

    2016-01-01

    Review of "Architecture of Error: Matter, Measure and the Misadventure of Precision" by Francesca Hughes (MIT Press, 2014).

  16. Local and omnibus goodness-of-fit tests in classical measurement error models

    KAUST Repository

    Ma, Yanyuan; Hart, Jeffrey D.; Janicki, Ryan; Carroll, Raymond J.

    2010-01-01

    We consider functional measurement error models, i.e. models where covariates are measured with error and yet no distributional assumptions are made about the mismeasured variable. We propose and study a score-type local test and an orthogonal

  17. A Technique for Real-Time Ionospheric Ranging Error Correction Based On Radar Dual-Frequency Detection

    Science.gov (United States)

    Lyu, Jiang-Tao; Zhou, Chen

    2017-12-01

Ionospheric refraction is one of the principal error sources limiting the accuracy of radar systems for space target detection. High-accuracy measurement of the ionospheric electron density along the propagation path of the radar wave is the most important procedure for ionospheric refraction correction. Traditionally, an ionospheric model or ionospheric detection instruments, such as ionosondes or GPS receivers, are employed for obtaining the electron density. However, neither method is capable of satisfying the correction-accuracy requirements of advanced space target radar systems. In this study, we propose a novel technique for ionospheric refraction correction based on radar dual-frequency detection. Radar target range measurements at two adjacent frequencies are utilized for calculating the electron density integral exactly along the propagation path of the radar wave, which can generate an accurate ionospheric range correction. The implementation of radar dual-frequency detection is validated by a P-band radar located in midlatitude China. The experimental results show that this novel technique is more accurate than traditional ionospheric model correction. The technique proposed in this study is very promising for high-accuracy radar detection and tracking of objects in geospace.
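The dual-frequency idea rests on the first-order dispersion law: the ionospheric group delay in range is approximately 40.3·TEC/f², so two ranges at different frequencies determine both the true range and the total electron content (TEC). A minimal sketch under that standard first-order model follows; the frequencies and TEC value are illustrative, not the radar parameters of the study.

```python
# Dual-frequency ionospheric range correction (first-order model):
# measured range at frequency f is  R_true + K * TEC / f**2,
# with TEC in electrons/m^2 and f in Hz.
K = 40.3  # first-order refraction constant, m^3/s^2 per electron

def correct_range(r1, r2, f1, f2):
    """Return (ionosphere-free range, TEC) from ranges r1@f1 and r2@f2."""
    r_true = (f1**2 * r1 - f2**2 * r2) / (f1**2 - f2**2)
    tec = (r2 - r1) * f1**2 * f2**2 / (K * (f1**2 - f2**2))
    return r_true, tec

# Synthetic check: a target at 1000 km through a 50 TECU ionosphere,
# observed at two adjacent (hypothetical) P-band frequencies.
tec_true = 50e16                  # 1 TECU = 1e16 electrons/m^2
f1, f2 = 430e6, 440e6             # Hz (illustrative)
r_true = 1.0e6                    # m
r1 = r_true + K * tec_true / f1**2
r2 = r_true + K * tec_true / f2**2
r_est, tec_est = correct_range(r1, r2, f1, f2)
```

At these frequencies the uncorrected delay is on the order of 100 m, which is why single-frequency P-band detection needs an external ionospheric model.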

  18. Bayesian modeling of measurement error in predictor variables using item response theory

    NARCIS (Netherlands)

    Fox, Gerardus J.A.; Glas, Cornelis A.W.

    2000-01-01

This paper focuses on handling measurement error in predictor variables using item response theory (IRT). Measurement error is of great importance in the assessment of theoretical constructs, such as intelligence or the school climate. Measurement error is modeled by treating the predictors as unobserved

  19. Characterization of measurement errors using structure-from-motion and photogrammetry to measure marine habitat structural complexity.

    Science.gov (United States)

    Bryson, Mitch; Ferrari, Renata; Figueira, Will; Pizarro, Oscar; Madin, Josh; Williams, Stefan; Byrne, Maria

    2017-08-01

    Habitat structural complexity is one of the most important factors in determining the makeup of biological communities. Recent advances in structure-from-motion and photogrammetry have resulted in a proliferation of 3D digital representations of habitats from which structural complexity can be measured. Little attention has been paid to quantifying the measurement errors associated with these techniques, including the variability of results under different surveying and environmental conditions. Such errors have the potential to confound studies that compare habitat complexity over space and time. This study evaluated the accuracy, precision, and bias in measurements of marine habitat structural complexity derived from structure-from-motion and photogrammetric measurements using repeated surveys of artificial reefs (with known structure) as well as natural coral reefs. We quantified measurement errors as a function of survey image coverage, actual surface rugosity, and the morphological community composition of the habitat-forming organisms (reef corals). Our results indicated that measurements could be biased by up to 7.5% of the total observed ranges of structural complexity based on the environmental conditions present during any particular survey. Positive relationships were found between measurement errors and actual complexity, and the strength of these relationships was increased when coral morphology and abundance were also used as predictors. The numerous advantages of structure-from-motion and photogrammetry techniques for quantifying and investigating marine habitats will mean that they are likely to replace traditional measurement techniques (e.g., chain-and-tape). To this end, our results have important implications for data collection and the interpretation of measurements when examining changes in habitat complexity using structure-from-motion and photogrammetry.
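The structural-complexity metric at stake here is typically rugosity: surface (contour) length divided by straight-line length, the digital analogue of the chain-and-tape method the authors expect these techniques to replace. A minimal one-dimensional sketch (profile format is an assumption, not the authors' pipeline):

```python
import math

# Linear rugosity of a depth profile: contour length / straight length.
# A perfectly flat transect gives 1.0; structure raises the value.
def linear_rugosity(profile):
    """profile: list of (x, z) points along a transect, x strictly increasing."""
    contour = sum(math.hypot(x2 - x1, z2 - z1)
                  for (x1, z1), (x2, z2) in zip(profile, profile[1:]))
    straight = profile[-1][0] - profile[0][0]
    return contour / straight
```

The same ratio generalizes to 3D meshes as surface area over planar (projected) area; measurement noise in the point cloud inflates the contour term, which is one mechanism behind the positive error-complexity relationship reported above.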

  20. Relative range error evaluation of terrestrial laser scanners using a plate, a sphere, and a novel dual-sphere-plate target.

    Science.gov (United States)

    Muralikrishnan, Bala; Rachakonda, Prem; Lee, Vincent; Shilling, Meghan; Sawyer, Daniel; Cheok, Geraldine; Cournoyer, Luc

    2017-12-01

    Terrestrial laser scanners (TLS) are a class of 3D imaging systems that produce a 3D point cloud by measuring the range and two angles to a point. The fundamental measurement of a TLS is range. Relative range error is one component of the overall range error of TLS and its estimation is therefore an important aspect in establishing metrological traceability of measurements performed using these systems. Target geometry is an important aspect to consider when realizing the relative range tests. The recently published ASTM E2938-15 mandates the use of a plate target for the relative range tests. While a plate target may reasonably be expected to produce distortion free data even at far distances, the target itself needs careful alignment at each of the relative range test positions. In this paper, we discuss relative range experiments performed using a plate target and then address the advantages and limitations of using a sphere target. We then present a novel dual-sphere-plate target that draws from the advantages of the sphere and the plate without the associated limitations. The spheres in the dual-sphere-plate target are used simply as fiducials to identify a point on the surface of the plate that is common to both the scanner and the reference instrument, thus overcoming the need to carefully align the target.

  1. Measurement error in the Liebowitz Social Anxiety Scale: results from a general adult population in Japan.

    Science.gov (United States)

    Takada, Koki; Takahashi, Kana; Hirao, Kazuki

    2018-01-17

Although the self-report version of the Liebowitz Social Anxiety Scale (LSAS) is frequently used to measure social anxiety, data are lacking on the smallest detectable change (SDC), an important index of measurement error. We therefore aimed to determine the SDC of the LSAS. Japanese adults aged 20-69 years were invited from a panel managed by a nationwide internet research agency. We then conducted a test-retest internet survey with a two-week interval to estimate the SDC at the individual (SDC_ind) and group (SDC_group) levels. The analysis included 1300 participants. The SDC_ind and SDC_group for the total fear subscale (scoring range: 0-72) were 23.52 points (32.7%) and 0.65 points (0.9%), respectively. The SDC_ind and SDC_group for the total avoidance subscale (scoring range: 0-72) were 32.43 points (45.0%) and 0.90 points (1.2%), respectively. The SDC_ind and SDC_group for the overall total score (scoring range: 0-144) were 45.90 points (31.9%) and 1.27 points (0.9%), respectively. The measurement error is large and indicates the potential for major problems when attempting to use the LSAS to detect changes at the individual level. These results should be considered when using the LSAS as a measure of treatment change.
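The reported SDC values follow from standard test-retest formulas; a sketch assuming the common definitions SEM = SD(diffs)/√2, SDC_ind = 1.96·√2·SEM = 1.96·SD(diffs), and SDC_group = SDC_ind/√n (the paper does not print its formulas, so these are assumptions):

```python
import math

def sdc(diffs):
    """Smallest detectable change from test-retest difference scores.

    SEM = SD(diffs)/sqrt(2); SDC_ind = 1.96*sqrt(2)*SEM = 1.96*SD(diffs);
    SDC_group = SDC_ind / sqrt(n).
    """
    n = len(diffs)
    mean = sum(diffs) / n
    sd = math.sqrt(sum((d - mean) ** 2 for d in diffs) / (n - 1))  # sample SD
    sdc_ind = 1.96 * sd
    return sdc_ind, sdc_ind / math.sqrt(n)
```

Consistency check against the abstract: with n = 1300 and SDC_ind = 45.90 for the total score, 45.90/√1300 ≈ 1.27, matching the reported SDC_group.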

  2. A heteroscedastic measurement error model for method comparison data with replicate measurements.

    Science.gov (United States)

    Nawarathna, Lakshika S; Choudhary, Pankaj K

    2015-03-30

    Measurement error models offer a flexible framework for modeling data collected in studies comparing methods of quantitative measurement. These models generally make two simplifying assumptions: (i) the measurements are homoscedastic, and (ii) the unobservable true values of the methods are linearly related. One or both of these assumptions may be violated in practice. In particular, error variabilities of the methods may depend on the magnitude of measurement, or the true values may be nonlinearly related. Data with these features call for a heteroscedastic measurement error model that allows nonlinear relationships in the true values. We present such a model for the case when the measurements are replicated, discuss its fitting, and explain how to evaluate similarity of measurement methods and agreement between them, which are two common goals of data analysis, under this model. Model fitting involves dealing with lack of a closed form for the likelihood function. We consider estimation methods that approximate either the likelihood or the model to yield approximate maximum likelihood estimates. The fitting methods are evaluated in a simulation study. The proposed methodology is used to analyze a cholesterol dataset. Copyright © 2015 John Wiley & Sons, Ltd.

  3. Error Ellipsoid Analysis for the Diameter Measurement of Cylindroid Components Using a Laser Radar Measurement System

    Directory of Open Access Journals (Sweden)

    Zhengchun Du

    2016-05-01

Full Text Available The use of three-dimensional (3D) data in the industrial measurement field is becoming increasingly popular because of the rapid development of laser scanning techniques based on the time-of-flight principle. However, the accuracy and uncertainty of these types of measurement methods are seldom investigated. In this study, a mathematical uncertainty evaluation model for the diameter measurement of standard cylindroid components has been proposed and applied to a 3D laser radar measurement system (LRMS). First, a single-point error ellipsoid analysis for the LRMS was established. An error ellipsoid model and algorithm for diameter measurement of cylindroid components was then proposed based on the single-point error ellipsoid. Finally, four experiments were conducted using the LRMS to measure the diameter of a standard cylinder in the laboratory. The experimental results of the uncertainty evaluation consistently matched well with the predictions. The proposed uncertainty evaluation model for cylindrical diameters can provide a reliable method for actual measurements and support further accuracy improvement of the LRMS.
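The single-point error ellipsoid idea can be sketched generically: propagate the range/azimuth/elevation uncertainties through the Jacobian of the spherical-to-Cartesian transform to get a 3×3 Cartesian covariance, whose eigenstructure is the ellipsoid. This is an illustration under assumed coordinates and symbols, not the LRMS-specific model of the paper.

```python
import math

# Single-point error ellipsoid: Cartesian covariance J * diag(S) * J^T
# from uncorrelated uncertainties in range r, azimuth az, elevation el,
# using x = r*cos(el)*cos(az), y = r*cos(el)*sin(az), z = r*sin(el).
def point_covariance(r, az, el, sig_r, sig_az, sig_el):
    ca, sa = math.cos(az), math.sin(az)
    ce, se = math.cos(el), math.sin(el)
    # Rows of the Jacobian d(x,y,z)/d(r,az,el):
    J = [[ce * ca, -r * ce * sa, -r * se * ca],
         [ce * sa,  r * ce * ca, -r * se * sa],
         [se,       0.0,          r * ce]]
    S = [sig_r ** 2, sig_az ** 2, sig_el ** 2]
    return [[sum(J[i][k] * S[k] * J[j][k] for k in range(3))
             for j in range(3)] for i in range(3)]
```

Note the familiar structure: angular uncertainties contribute to the transverse variances scaled by r², so the ellipsoid elongates sideways with distance while the radial axis is set by the range noise alone.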

  4. M/T method based incremental encoder velocity measurement error analysis and self-adaptive error elimination algorithm

    DEFF Research Database (Denmark)

    Chen, Yangyang; Yang, Ming; Long, Jiang

    2017-01-01

For motor control applications, the speed-loop performance largely depends on the accuracy of the speed feedback signal. The M/T method, due to its high theoretical accuracy, is the most widely used for speed measurement with incremental encoders. However, the inherent encoder optical grating error...
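For reference, the M/T method counts M1 encoder pulses and M2 high-frequency clock pulses over a gate that closes on an encoder edge, so the measured time spans a whole number of encoder pulses. A minimal sketch with illustrative parameters (PPR and F_CLK are assumptions, not values from the paper):

```python
import math

# M/T speed measurement: the gate ends on an encoder edge, so the
# time M2/F_CLK covers exactly M1 encoder pulses.
PPR = 2500      # encoder pulses per revolution (assumed)
F_CLK = 20e6    # high-frequency counting clock, Hz (assumed)

def mt_speed(m1, m2):
    """Angular speed in rad/s from pulse counts m1 (encoder), m2 (clock)."""
    angle = 2.0 * math.pi * m1 / PPR   # angle traversed over the gate, rad
    elapsed = m2 / F_CLK               # gated time, s
    return angle / elapsed
```

Because the quantization error is one clock tick rather than one encoder pulse, the theoretical resolution is high across the speed range; the grating error discussed above violates the assumption that all M1 pulse intervals are equal.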

  5. Combined influence of CT random noise and HU-RSP calibration curve nonlinearities on proton range systematic errors

    Science.gov (United States)

    Brousmiche, S.; Souris, K.; Orban de Xivry, J.; Lee, J. A.; Macq, B.; Seco, J.

    2017-11-01

Proton range random and systematic uncertainties are the major factors undermining the advantages of proton therapy, namely, a sharp dose falloff and better dose conformality for lower doses in normal tissues. The influence of CT artifacts such as beam hardening or scatter can easily be understood and estimated due to their large-scale effects on the CT image, like cupping and streaks. In comparison, the effects of weakly correlated stochastic noise are more insidious, and less attention is paid to them, partly due to the common belief that they only contribute to proton range uncertainties and not to systematic errors, thanks to averaging effects. A new source of systematic errors on the range and relative stopping powers (RSP) has been highlighted and proved not to be negligible compared to the 3.5% uncertainty reference value used for safety-margin design. Hence, we demonstrate that the angular points in the HU-to-RSP calibration curve are an intrinsic source of proton range systematic error for typical levels of zero-mean stochastic CT noise. Systematic errors on RSP of up to 1% have been computed for these levels. We also show that the range uncertainty does not generally vary linearly with the noise standard deviation. We define a noise-dependent effective calibration curve that better describes, for a given material, the RSP value that is actually used. The statistics of the RSP and of the range continuous slowing down approximation (CSDA) have been analytically derived for the general case of a calibration curve obtained by the stoichiometric calibration procedure. These models have been validated against actual CSDA simulations for homogeneous and heterogeneous synthetic objects as well as on actual patient CTs for prostate and head-and-neck treatment planning situations.
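The mechanism is a Jensen-type effect: zero-mean noise on HU maps to a nonzero mean shift in RSP wherever the piecewise-linear calibration curve bends. A minimal numerical illustration with a made-up calibration kink (the curve, slopes, and noise level below are assumptions chosen only to show the effect):

```python
import random

# Zero-mean CT noise pushed through a kinked (piecewise-linear)
# HU-to-RSP curve yields a biased mean RSP at the angular point.
def hu_to_rsp(hu):
    # Slope changes at hu = 0: an "angular point" (illustrative curve).
    return 1.0 + 0.001 * hu if hu <= 0 else 1.0 + 0.0017 * hu

random.seed(42)
sigma = 30.0                        # CT noise standard deviation, HU
noisy = [hu_to_rsp(random.gauss(0.0, sigma)) for _ in range(200000)]
bias = sum(noisy) / len(noisy) - hu_to_rsp(0.0)   # mean RSP shift
```

Analytically the bias is (slope difference)·σ/√(2π) ≈ 0.0007·30/2.507 ≈ 0.008, i.e. just under 1% of RSP ≈ 1 for this made-up curve, the same order as the up-to-1% RSP errors quoted in the abstract.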

  6. Error prevention at a radon measurement service laboratory

    International Nuclear Information System (INIS)

    Cohen, B.L.; Cohen, F.

    1989-01-01

    This article describes the steps taken at a high volume counting laboratory to avoid human, instrument, and computer errors. The laboratory analyzes diffusion barrier charcoal adsorption canisters which have been used to test homes and commercial buildings. A series of computer and human cross-checks are utilized to assure that accurate results are reported to the correct client

  7. Total error vs. measurement uncertainty: revolution or evolution?

    Science.gov (United States)

    Oosterhuis, Wytze P; Theodorsson, Elvar

    2016-02-01

The first strategic EFLM conference "Defining analytical performance goals, 15 years after the Stockholm Conference" was held in the autumn of 2014 in Milan. It maintained the Stockholm 1999 hierarchy of performance goals but rearranged them and established five task and finish groups to work on topics related to analytical performance goals, including one on the "total error" theory. Jim Westgard recently wrote a comprehensive overview of performance goals and of the total error theory, critical of the results and intentions of the Milan 2014 conference. The "total error" theory originated by Jim Westgard and co-workers has a dominating influence on the theory and practice of clinical chemistry but is not accepted in other fields of metrology. The generally accepted uncertainty theory, however, suffers from complex mathematics and perceived impracticability in clinical chemistry. The pros and cons of the total error theory need to be debated, making way for methods that can incorporate all relevant causes of uncertainty when making medical diagnoses and monitoring treatment effects. This development should preferably proceed not as a revolution but as an evolution.

  8. CENTIMETER COSMO-SKYMED RANGE MEASUREMENTS FOR MONITORING GROUND DISPLACEMENTS

    Directory of Open Access Journals (Sweden)

    F. Fratarcangeli

    2016-06-01

Full Text Available SAR (Synthetic Aperture Radar) imagery is widely used to monitor displacements impacting the Earth's surface and infrastructures. The main remote sensing technique to extract sub-centimeter information from SAR imagery is Differential SAR Interferometry (DInSAR), based on the phase information only. However, it is well known that the DInSAR technique may suffer from a lack of coherence among the considered stack of images. New Earth-observation SAR satellite sensors, such as COSMO-SkyMed, TerraSAR-X, and the coming PAZ, can also acquire imagery with high amplitude resolutions, up to a few decimeters. Thanks to this feature, and to the on-board dual-frequency GPS receivers allowing orbit determination with an accuracy at the few-centimeter level, it was proven by different groups that TerraSAR-X imagery offers the capability to achieve, in a global reference frame, 3D positioning accuracies in the decimeter range and even better just by exploiting the slant-range measurements coming from the amplitude information, provided proper corrections for all the involved geophysical phenomena are carefully applied. The core of this work is to test this methodology on COSMO-SkyMed data acquired over the Corvara area (Bolzano, Northern Italy), where, currently, a landslide with relevant yearly displacements, up to decimeters, is monitored using GPS surveys and the DInSAR technique. The leading idea is to measure the distance between the satellite and a well-identifiable natural or artificial Persistent Scatterer (PS), taking into account the signal propagation delays through the troposphere and ionosphere and filtering out the known geophysical effects that induce periodic and secular ground displacements. The preliminary results here presented and discussed indicate that COSMO-SkyMed Himage imagery appears able to guarantee displacement monitoring with an accuracy of a few centimeters using only the amplitude data, provided few (at least one stable PS’s are

  9. Sensorless SPMSM Position Estimation Using Position Estimation Error Suppression Control and EKF in Wide Speed Range

    Directory of Open Access Journals (Sweden)

    Zhanshan Wang

    2014-01-01

Full Text Available The control of a high-performance alternating current (AC) motor drive under sensorless operation needs accurate estimation of the rotor position. In this paper, a method of accurately estimating the rotor position by using both motor complex-number-model-based position estimation and a position estimation error suppression proportional-integral (PI) controller is proposed for the sensorless control of the surface permanent magnet synchronous motor (SPMSM). In order to guarantee the accuracy of rotor position estimation in the flux-weakening region, a scheme for identifying the permanent-magnet flux of the SPMSM by an extended Kalman filter (EKF) is also proposed, which forms an effective combined method to realize the sensorless control of the SPMSM with high accuracy. The simulation results demonstrated the validity and feasibility of the proposed position/speed estimation system.

  10. Swath-altimetry measurements of the main stem Amazon River: measurement errors and hydraulic implications

    Science.gov (United States)

    Wilson, M. D.; Durand, M.; Jung, H. C.; Alsdorf, D.

    2015-04-01

    The Surface Water and Ocean Topography (SWOT) mission, scheduled for launch in 2020, will provide a step-change improvement in the measurement of terrestrial surface-water storage and dynamics. In particular, it will provide the first, routine two-dimensional measurements of water-surface elevations. In this paper, we aimed to (i) characterise and illustrate in two dimensions the errors which may be found in SWOT swath measurements of terrestrial surface water, (ii) simulate the spatio-temporal sampling scheme of SWOT for the Amazon, and (iii) assess the impact of each of these on estimates of water-surface slope and river discharge which may be obtained from SWOT imagery. We based our analysis on a virtual mission for a ~260 km reach of the central Amazon (Solimões) River, using a hydraulic model to provide water-surface elevations according to SWOT spatio-temporal sampling to which errors were added based on a two-dimensional height error spectrum derived from the SWOT design requirements. We thereby obtained water-surface elevation measurements for the Amazon main stem as may be observed by SWOT. Using these measurements, we derived estimates of river slope and discharge and compared them to those obtained directly from the hydraulic model. We found that cross-channel and along-reach averaging of SWOT measurements using reach lengths greater than 4 km for the Solimões and 7.5 km for Purus reduced the effect of systematic height errors, enabling discharge to be reproduced accurately from the water height, assuming known bathymetry and friction. Using cross-sectional averaging and 20 km reach lengths, results show Nash-Sutcliffe model efficiency values of 0.99 for the Solimões and 0.88 for the Purus, with 2.6 and 19.1 % average overall error in discharge, respectively. We extend the results to other rivers worldwide and infer that SWOT-derived discharge estimates may be more accurate for rivers with larger channel widths (permitting a greater level of cross
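The benefit of along-reach averaging reported above can be illustrated with a toy experiment: estimate water-surface slope by differencing reach-averaged elevations under per-node height noise, and compare short versus long reaches. This sketch uses white noise only; the actual SWOT height-error spectrum is spatially correlated, so real gains are smaller than the 1/√n scaling shown here. All parameter values are assumptions.

```python
import random, statistics

# Toy reach-averaging experiment: slope from the difference of two
# reach-averaged water-surface elevations, each node noisy.
random.seed(1)
SLOPE = 2e-5    # true water-surface slope, m/m (assumed)
DX = 200.0      # along-channel node spacing, m (assumed)
SIGMA = 0.5     # per-node height noise, m (assumed)

def slope_error(reach_len_m, trials=2000):
    """Std. dev. of the slope-estimate error for a given reach length."""
    n = int(reach_len_m / DX)             # nodes per reach
    errs = []
    for _ in range(trials):
        up = [SLOPE * DX * i + random.gauss(0, SIGMA) for i in range(n)]
        dn = [SLOPE * DX * (i + n) + random.gauss(0, SIGMA) for i in range(n)]
        # Reach centers are n*DX apart.
        est = (statistics.mean(dn) - statistics.mean(up)) / (n * DX)
        errs.append(est - SLOPE)
    return statistics.pstdev(errs)
```

Longer reaches average more nodes and divide by a longer baseline, so the slope error falls on both counts; this is the qualitative reason the study finds reach lengths of several kilometers necessary before discharge can be reproduced accurately.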

  11. The error sources appearing for the gamma radioactive source measurement in dynamic condition

    International Nuclear Information System (INIS)

    Sirbu, M.

    1977-01-01

The error analysis for the measurement of gamma radioactive sources placed on the soil, with the help of a helicopter, is presented. The analysis is based on a new formula that takes account of the gamma-ray attenuation factor in the helicopter walls. A complete error formula and an error diagram are given. (author)

  12. Study of systematic errors in the luminosity measurement

    International Nuclear Information System (INIS)

    Arima, Tatsumi

    1993-01-01

The experimental systematic error in the barrel region was estimated to be 0.44 %. This value is derived considering the systematic uncertainties from the dominant sources but does not include uncertainties which are being studied. In the end cap region, the study of shower behavior and clustering effect is under way in order to determine the angular resolution at the low angle edge of the Liquid Argon Calorimeter. We also expect that the systematic error in this region will be less than 1 %. The technical precision of theoretical uncertainty is better than 0.1 % comparing the Tobimatsu-Shimizu program and BABAMC modified by ALEPH. To estimate the physical uncertainty we will use the ALIBABA [9] which includes O(α²) QED correction in leading-log approximation. (J.P.N.)

  13. Study of systematic errors in the luminosity measurement

    Energy Technology Data Exchange (ETDEWEB)

    Arima, Tatsumi [Tsukuba Univ., Ibaraki (Japan). Inst. of Applied Physics

    1993-04-01

The experimental systematic error in the barrel region was estimated to be 0.44 %. This value is derived considering the systematic uncertainties from the dominant sources but does not include uncertainties which are being studied. In the end cap region, the study of shower behavior and clustering effect is under way in order to determine the angular resolution at the low angle edge of the Liquid Argon Calorimeter. We also expect that the systematic error in this region will be less than 1 %. The technical precision of theoretical uncertainty is better than 0.1 % comparing the Tobimatsu-Shimizu program and BABAMC modified by ALEPH. To estimate the physical uncertainty we will use the ALIBABA [9] which includes O(α²) QED correction in leading-log approximation. (J.P.N.).

  14. Application of round grating angle measurement composite error amendment in the online measurement accuracy improvement of large diameter

    Science.gov (United States)

    Wang, Biao; Yu, Xiaofen; Li, Qinzhao; Zheng, Yu

    2008-10-01

Aiming at the influence of the round-grating dividing error and the rolling wheel's eccentricity and surface shape errors, the paper provides a correction method, based on the rolling wheel, that builds a composite error model including all of the above influence factors and then corrects the non-circular angle measurement error of the rolling wheel. Software simulation and experiments were carried out; the results indicate that the composite error correction method can improve the diameter measurement accuracy of the rolling-wheel approach. It has wide application prospects for measurement accuracy higher than 5 μm/m.

  15. The error analysis of coke moisture measured by neutron moisture gauge

    International Nuclear Information System (INIS)

    Tian Huixing

    1995-01-01

The error of coke moisture measured by the neutron method in the iron and steel industry is analyzed. The errors are caused by inaccurate sampling locations in the on-site calibration procedure. By comparison, the instrument error and the statistical fluctuation error are smaller. The sampling proportion should therefore be made as large as possible in the on-site calibration procedure, and a satisfactory calibration can then be obtained on a suitably sized hopper.

  16. Sensor Interaction as a Source of the Electromagnetic Field Measurement Error

    Directory of Open Access Journals (Sweden)

    Hartansky R.

    2014-12-01

Full Text Available The article deals with the analytical calculation and numerical simulation of the mutual influence of electromagnetic sensors. The sensors are components of a field probe, and their mutual influence causes measurement error. The electromagnetic field probe contains three mutually perpendicular sensors in order to measure the electric-field vector. The sensor error is evaluated as a function of the relative position of the sensors. Based on this, recommendations are proposed for the construction of electromagnetic field probes that minimize sensor interaction and measurement error.

  17. Towards New Empirical Versions of Financial and Accounting Models Corrected for Measurement Errors

    OpenAIRE

    Francois-Éric Racicot; Raymond Théoret; Alain Coen

    2006-01-01

In this paper, we propose a new empirical version of the Fama and French model based on the Hausman (1978) specification test and aimed at discarding measurement errors in the variables. The proposed empirical framework is general enough to be used for correcting other financial and accounting models for measurement errors. Removing measurement errors is important at many levels, such as information disclosure, corporate governance, and the protection of investors.

  18. Study of errors in absolute flux density measurements of Cassiopeia A

    International Nuclear Information System (INIS)

    Kanda, M.

    1975-10-01

    An error analysis for absolute flux density measurements of Cassiopeia A is discussed. The lower-bound quadrature-accumulation error for state-of-the-art measurements of the absolute flux density of Cas A around 7 GHz is estimated to be 1.71% for 3 sigma limits. The corresponding practicable error for the careful but not state-of-the-art measurement is estimated to be 4.46% for 3 sigma limits

  19. Investigation on coupling error characteristics in angular rate matching based ship deformation measurement approach

    Science.gov (United States)

    Yang, Shuai; Wu, Wei; Wang, Xingshu; Xu, Zhiguang

    2018-01-01

The coupling error in the measurement of ship hull deformation can significantly influence the attitude accuracy of shipborne weapons and equipment. It is therefore important to study the characteristics of the coupling error. In this paper, a comprehensive investigation of the coupling error is reported, which has the potential of reducing the coupling error in the future. Firstly, the causes and characteristics of the coupling error are analyzed theoretically based on the basic theory of measuring ship deformation. Then, simulations are conducted to verify the correctness of the theoretical analysis. Simulation results show that the cross-correlation between dynamic flexure and ship angular motion leads to the coupling error in measuring ship deformation, and the coupling error increases with the correlation between them. All the simulation results coincide with the theoretical analysis.

  20. Suppression of the Nonlinear Zeeman Effect and Heading Error in Earth-Field-Range Alkali-Vapor Magnetometers.

    Science.gov (United States)

    Bao, Guzhi; Wickenbrock, Arne; Rochester, Simon; Zhang, Weiping; Budker, Dmitry

    2018-01-19

    The nonlinear Zeeman effect can induce splitting and asymmetries of magnetic-resonance lines in the geophysical magnetic-field range. This is a major source of "heading error" for scalar atomic magnetometers. We demonstrate a method to suppress the nonlinear Zeeman effect and heading error based on spin locking. In an all-optical synchronously pumped magnetometer with separate pump and probe beams, we apply a radio-frequency field which is in phase with the precessing magnetization. This results in the collapse of the multicomponent asymmetric magnetic-resonance line with ∼100  Hz width in the Earth-field range into a single peak with a width of 22 Hz, whose position is largely independent of the orientation of the sensor within a range of orientation angles. The technique is expected to be broadly applicable in practical magnetometry, potentially boosting the sensitivity and accuracy of Earth-surveying magnetometers by increasing the magnetic-resonance amplitude, decreasing its width, and removing the important and limiting heading-error systematic.

  1. Suppression of the Nonlinear Zeeman Effect and Heading Error in Earth-Field-Range Alkali-Vapor Magnetometers

    Science.gov (United States)

    Bao, Guzhi; Wickenbrock, Arne; Rochester, Simon; Zhang, Weiping; Budker, Dmitry

    2018-01-01

    The nonlinear Zeeman effect can induce splitting and asymmetries of magnetic-resonance lines in the geophysical magnetic-field range. This is a major source of "heading error" for scalar atomic magnetometers. We demonstrate a method to suppress the nonlinear Zeeman effect and heading error based on spin locking. In an all-optical synchronously pumped magnetometer with separate pump and probe beams, we apply a radio-frequency field which is in phase with the precessing magnetization. This results in the collapse of the multicomponent asymmetric magnetic-resonance line with ˜100 Hz width in the Earth-field range into a single peak with a width of 22 Hz, whose position is largely independent of the orientation of the sensor within a range of orientation angles. The technique is expected to be broadly applicable in practical magnetometry, potentially boosting the sensitivity and accuracy of Earth-surveying magnetometers by increasing the magnetic-resonance amplitude, decreasing its width, and removing the important and limiting heading-error systematic.

  2. Measurement Error in Income and Schooling and the Bias of Linear Estimators

    DEFF Research Database (Denmark)

    Bingley, Paul; Martinello, Alessandro

    2017-01-01

    and Retirement in Europe data with Danish administrative registers. Contrary to most validation studies, we find that measurement error in income is classical once we account for imperfect validation data. We find nonclassical measurement error in schooling, causing a 38% amplification bias in IV estimators......We propose a general framework for determining the extent of measurement error bias in ordinary least squares and instrumental variable (IV) estimators of linear models while allowing for measurement error in the validation source. We apply this method by validating Survey of Health, Ageing...

  3. Effects of Measurement Errors on Individual Tree Stem Volume Estimates for the Austrian National Forest Inventory

    Science.gov (United States)

    Ambros Berger; Thomas Gschwantner; Ronald E. McRoberts; Klemens. Schadauer

    2014-01-01

    National forest inventories typically estimate individual tree volumes using models that rely on measurements of predictor variables such as tree height and diameter, both of which are subject to measurement error. The aim of this study was to quantify the impacts of these measurement errors on the uncertainty of the model-based tree stem volume estimates. The impacts...

  4. A Sensor Dynamic Measurement Error Prediction Model Based on NAPSO-SVM.

    Science.gov (United States)

    Jiang, Minlan; Jiang, Lan; Jiang, Dingde; Li, Fei; Song, Houbing

    2018-01-15

Dynamic measurement error correction is an effective way to improve sensor precision. Dynamic measurement error prediction is an important part of error correction, and the support vector machine (SVM) is often used for predicting the dynamic measurement errors of sensors. Traditionally, the SVM parameters were always set manually, which cannot ensure the model's performance. In this paper, an SVM method based on an improved particle swarm optimization (NAPSO) is proposed to predict the dynamic measurement errors of sensors. Natural selection and simulated annealing are added to the PSO to raise its ability to avoid local optima. To verify the performance of NAPSO-SVM, three algorithms are selected to optimize the SVM's parameters: the particle swarm optimization algorithm (PSO), the improved PSO optimization algorithm (NAPSO), and the glowworm swarm optimization (GSO). The dynamic measurement error data of two sensors are applied as the test data. The root mean squared error and mean absolute percentage error are employed to evaluate the prediction models' performances. The experimental results show that among the three tested algorithms the NAPSO-SVM method has better prediction precision and smaller prediction errors, and is an effective method for predicting the dynamic measurement errors of sensors.
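For reference, the two evaluation metrics named above have standard definitions, sketched here (the function names are ours, not the paper's):

```python
import math

# Root mean squared error (RMSE) and mean absolute percentage error
# (MAPE, in percent) between actual and predicted series.
def rmse(actual, predicted):
    return math.sqrt(sum((a - p) ** 2 for a, p in zip(actual, predicted))
                     / len(actual))

def mape(actual, predicted):
    # Undefined when any actual value is zero.
    return 100.0 * sum(abs((a - p) / a)
                       for a, p in zip(actual, predicted)) / len(actual)
```

RMSE penalizes large deviations quadratically, while MAPE normalizes each error by the true value, which is why papers of this kind usually report both.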

5. Influence of Marker Movement Errors on Measuring 3-Dimensional Scapular Position and Orientation

    Directory of Open Access Journals (Sweden)

    Afsoun Nodehi-Moghaddam

    2003-12-01

Full Text Available Objective: Scapulothoracic muscle weakness or fatigue can result in abnormal scapular positioning, compromising scapulohumeral rhythm and causing shoulder dysfunction. The scapula moves in a 3-dimensional fashion, so 2-dimensional techniques cannot fully capture scapular motion. One approach to positioning the markers of kinematic systems is to mount each marker directly on the skin, generally over a bony anatomical landmark. However, skin movement and motion of the underlying bony structures are not necessarily identical, and substantial errors may be introduced into the description of bone movement when using skin-mounted markers. The objective was to evaluate the influence of marker movement errors on 3-dimensional scapular position and orientation. Materials & Methods: 10 healthy subjects with a mean age of 30.5 years participated in the study. They were tested in three sessions. A 3-dimensional electromechanical digitizer was used to measure scapular position and orientation. Measures were obtained with the arm placed at the side of the body and elevated to 45, 90 and 120 degrees and through the full range of motion in the scapular plane. At each test position six bony landmarks were palpated and skin markers were mounted on them. This procedure was repeated in the second test session; in the third session the markers were not removed while the entire range of motion was obtained after mounting them. Results: The intraclass correlation coefficients (ICCs) for scapular variables were higher (0.84-0.92) when markers were replaced and re-mounted on bony landmarks with increasing angle of elevation. Conclusion: Our findings suggest significant marker movement error in measuring the upward rotation and posterior tilt angles of the scapula.

  6. Improved measurement linearity and precision for AMCW time-of-flight range imaging cameras.

    Science.gov (United States)

    Payne, Andrew D; Dorrington, Adrian A; Cree, Michael J; Carnegie, Dale A

    2010-08-10

    Time-of-flight range imaging systems utilizing the amplitude modulated continuous wave (AMCW) technique often suffer from measurement nonlinearity due to the presence of aliased harmonics within the amplitude modulation signals. Typically a calibration is performed to correct these errors. We demonstrate an alternative phase encoding approach that attenuates the harmonics during the sampling process, thereby improving measurement linearity in the raw measurements. This mitigates the need to measure the system's response or calibrate for environmental changes. In conjunction with improved linearity, we demonstrate that measurement precision can also be increased by reducing the duty cycle of the amplitude modulated illumination source (while maintaining overall illumination power).

  7. Analysis of measured data of human body based on error correcting frequency

    Science.gov (United States)

    Jin, Aiyan; Peipei, Gao; Shang, Xiaomei

    2014-04-01

Anthropometry measures all parts of the human body surface, and the measured data are the basis for analysis and study of the human body, for the establishment and modification of garment sizes, and for the formulation and implementation of online clothing stores. In this paper, several groups of measured data are obtained, and the data errors are analysed using error frequency and the analysis-of-variance method of mathematical statistics. Determination of the accuracy of the measured data and of the difficulty of measuring particular parts of the human body, further study of the causes of data errors, and a summary of the key points for minimizing errors are also covered. This paper analyses the measured data based on error frequency and, in this way, provides reference material to promote the development of the garment industry.

  8. Measurements of Capture Efficiency of Range Hoods in Homes

    DEFF Research Database (Denmark)

    Simone, Angela; Sherman, Max H.; Walker, Iain S.

    2015-01-01

... want a range hood to use little energy and have high capture efficiency to minimize the air flow required to capture the cooking pollutants. Currently there are no standards for rating range hoods for capture efficiency. In this study, measurements of range hood capture efficiency were made in a tight kitchen-room built in a laboratory chamber, and a methodology for standardizing the measurement of capture efficiency was developed. The results for a wall-mounted range hood showed that up to half of the cooking pollutants were not captured at a flow rate of 230 m3/h. A more detailed set of measurements mapped the pollution distribution in the room, and showed that the pollutants escape more at the sides of the cooktop. These preliminary results suggest that more measurements should be conducted investigating the capture efficiency at different pollutant source temperatures, sizes and locations...

  9. Working with Error and Uncertainty to Increase Measurement Validity

    Science.gov (United States)

    Amrein-Beardsley, Audrey; Barnett, Joshua H.

    2012-01-01

    Over the previous two decades, the era of accountability has amplified efforts to measure educational effectiveness more than Edward Thorndike, the father of educational measurement, likely would have imagined. Expressly, the measurement structure for evaluating educational effectiveness continues to rely increasingly on one sole…

  10. Analysis of liquid medication dose errors made by patients and caregivers using alternative measuring devices.

    Science.gov (United States)

    Ryu, Gyeong Suk; Lee, Yu Jeung

    2012-01-01

    Patients use several types of devices to measure liquid medication. Using a criterion ranging from a 10% to 40% variation from a target 5 mL for a teaspoon dose, previous studies have found that a considerable proportion of patients or caregivers make errors when dosing liquid medication with measuring devices. To determine the rate and magnitude of liquid medication dose errors that occur with patient/caregiver use of various measuring devices in a community pharmacy. Liquid medication measurements by patients or caregivers were observed in a convenience sample of community pharmacy patrons in Korea during a 2-week period in March 2011. Participants included all patients or caregivers (N = 300) who came to the pharmacy to buy over-the-counter liquid medication or to have a liquid medication prescription filled during the study period. The participants were instructed by an investigator who was also a pharmacist to select their preferred measuring devices from 6 alternatives (etched-calibration dosing cup, printed-calibration dosing cup, dosing spoon, syringe, dispensing bottle, or spoon with a bottle adapter) and measure a 5 mL dose of Coben (chlorpheniramine maleate/phenylephrine HCl, Daewoo Pharm. Co., Ltd) syrup using the device of their choice. The investigator used an ISOLAB graduated cylinder (Germany, blue grad, 10 mL) to measure the amount of syrup dispensed by the study participants. Participant characteristics were recorded including gender, age, education level, and relationship to the person for whom the medication was intended. Of the 300 participants, 257 (85.7%) were female; 286 (95.3%) had at least a high school education; and 282 (94.0%) were caregivers (parent or grandparent) for the patient. The mean (SD) measured dose was 4.949 (0.378) mL for the 300 participants. In analysis of variance of the 6 measuring devices, the greatest difference from the 5 mL target was a mean 5.552 mL for 17 subjects who used the regular (etched) dosing cup and 4

  11. Development of an Experimental Measurement System for Human Error Characteristics and a Pilot Test

    International Nuclear Information System (INIS)

    Jang, Tong-Il; Lee, Hyun-Chul; Moon, Kwangsu

    2017-01-01

Some individual and team characteristics were selected, and a pilot test was performed to measure and evaluate them using the experimental measurement system for human error characteristics. This is one of the processes that produces input data for the Eco-DBMS. Through the pilot test, methods to measure and acquire physiological data were also explored, and a data format and quantification methods for the database were developed. In this study, a pilot test measuring stress and tension levels and team cognitive characteristics, among the human error characteristics, was performed using the human error characteristics measurement and experimental evaluation system. In an experiment measuring the stress level, physiological characteristics were measured using EEG in a simulated unexpected situation. The results show that, although this was a pilot experiment, relevant results can be obtained for evaluating how well workers' FFD management guidelines, and guidelines for unexpected situations, cope with human error. In subsequent research, additional experiments covering other human error characteristics will be conducted. Furthermore, the human error characteristics measurement and experimental evaluation system will be used to validate various human error coping solutions, such as human factors criteria, designs, and guidelines, as well as to supplement the human error characteristics database.

  12. Do Survey Data Estimate Earnings Inequality Correctly? Measurement Errors among Black and White Male Workers

    Science.gov (United States)

    Kim, ChangHwan; Tamborini, Christopher R.

    2012-01-01

    Few studies have considered how earnings inequality estimates may be affected by measurement error in self-reported earnings in surveys. Utilizing restricted-use data that links workers in the Survey of Income and Program Participation with their W-2 earnings records, we examine the effect of measurement error on estimates of racial earnings…

  13. Analysis on the dynamic error for optoelectronic scanning coordinate measurement network

    Science.gov (United States)

    Shi, Shendong; Yang, Linghui; Lin, Jiarui; Guo, Siyang; Ren, Yongjie

    2018-01-01

Large-scale dynamic three-dimensional coordinate measurement techniques are in strong demand in equipment manufacturing. Noted for its advantages of high accuracy, scale expandability and multitask parallel measurement, the optoelectronic scanning measurement network has received close attention. It is widely used in large-component joining, spacecraft rendezvous and docking simulation, digital shipbuilding and automated guided vehicle navigation. At present, most research on optoelectronic scanning measurement networks is focused on static measurement capacity, and research on dynamic accuracy is insufficient. Limited by the measurement principle, the dynamic error is non-negligible and restricts applications. The workshop measurement and positioning system is a representative system which can, in theory, realize dynamic measurement. In this paper we conduct deep research on dynamic error sources and divide them into two parts: phase error and synchronization error. A dynamic error model is constructed. Based on this theory, a simulation of the dynamic error is carried out. The dynamic error is quantified, and rules of volatility and periodicity have been found. Dynamic error characteristics are shown in detail. The research results lay a foundation for further accuracy improvement.

  14. On the determinants of measurement error in time-driven costing

    NARCIS (Netherlands)

    Cardinaels, E.; Labro, E.

    2008-01-01

    Although time estimates are used extensively for costing purposes, they are prone to measurement error. In an experimental setting, we research how measurement error in time estimates varies with: (1) the level of aggregation in the definition of costing system activities (aggregated or

  15. Accounting for covariate measurement error in a Cox model analysis of recurrence of depression.

    Science.gov (United States)

    Liu, K; Mazumdar, S; Stone, R A; Dew, M A; Houck, P R; Reynolds, C F

    2001-01-01

    When a covariate measured with error is used as a predictor in a survival analysis using the Cox model, the parameter estimate is usually biased. In clinical research, covariates measured without error such as treatment procedure or sex are often used in conjunction with a covariate measured with error. In a randomized clinical trial of two types of treatments, we account for the measurement error in the covariate, log-transformed total rapid eye movement (REM) activity counts, in a Cox model analysis of the time to recurrence of major depression in an elderly population. Regression calibration and two variants of a likelihood-based approach are used to account for measurement error. The likelihood-based approach is extended to account for the correlation between replicate measures of the covariate. Using the replicate data decreases the standard error of the parameter estimate for log(total REM) counts while maintaining the bias reduction of the estimate. We conclude that covariate measurement error and the correlation between replicates can affect results in a Cox model analysis and should be accounted for. In the depression data, these methods render comparable results that have less bias than the results when measurement error is ignored.

  16. Design of roundness measurement model with multi-systematic error for cylindrical components with large radius.

    Science.gov (United States)

    Sun, Chuanzhi; Wang, Lei; Tan, Jiubin; Zhao, Bo; Tang, Yangchao

    2016-02-01

    The paper designs a roundness measurement model with multi-systematic error, which takes eccentricity, probe offset, radius of tip head of probe, and tilt error into account for roundness measurement of cylindrical components. The effects of the systematic errors and radius of components are analysed in the roundness measurement. The proposed method is built on the instrument with a high precision rotating spindle. The effectiveness of the proposed method is verified by experiment with the standard cylindrical component, which is measured on a roundness measuring machine. Compared to the traditional limacon measurement model, the accuracy of roundness measurement can be increased by about 2.2 μm using the proposed roundness measurement model for the object with a large radius of around 37 mm. The proposed method can improve the accuracy of roundness measurement and can be used for error separation, calibration, and comparison, especially for cylindrical components with a large radius.

  17. Error analysis of thermocouple measurements in the Radiant Heat Facility

    International Nuclear Information System (INIS)

    Nakos, J.T.; Strait, B.G.

    1980-12-01

    The measurement most frequently made in the Radiant Heat Facility is temperature, and the transducer which is used almost exclusively is the thermocouple. Other methods, such as resistance thermometers and thermistors, are used but very rarely. Since a majority of the information gathered at Radiant Heat is from thermocouples, a reasonable measure of the quality of the measurements made at the facility is the accuracy of the thermocouple temperature data

  18. Metrological Array of Cyber-Physical Systems. Part 11. Remote Error Correction of Measuring Channel

    Directory of Open Access Journals (Sweden)

    Yuriy YATSUK

    2015-09-01

Full Text Available The major error factors of multi-channel measuring instruments, with both the classical structure and the isolated one, are identified based on an analysis of their general metrological properties. The limiting possibilities of the remote automatic method for correcting the additive and multiplicative errors of measuring instruments with the help of code-controlled measures are studied. For on-site calibration of multi-channel measuring instruments, portable voltage calibrator structures are suggested, and their metrological properties during automatic error adjustment are analysed. It was experimentally established that the unadjusted error value does not exceed ±1 mV, which satisfies most industrial applications. This confirms the main claim concerning the possibility of remote error self-adjustment, as well as the use of the suggested calibrators as tools for the proper verification of multi-channel measuring instruments.

  19. Error-measure for anisotropic grid-adaptation in turbulence-resolving simulations

    Science.gov (United States)

    Toosi, Siavash; Larsson, Johan

    2015-11-01

    Grid-adaptation requires an error-measure that identifies where the grid should be refined. In the case of turbulence-resolving simulations (DES, LES, DNS), a simple error-measure is the small-scale resolved energy, which scales with both the modeled subgrid-stresses and the numerical truncation errors in many situations. Since this is a scalar measure, it does not carry any information on the anisotropy of the optimal grid-refinement. The purpose of this work is to introduce a new error-measure for turbulence-resolving simulations that is capable of predicting nearly-optimal anisotropic grids. Turbulent channel flow at Reτ ~ 300 is used to assess the performance of the proposed error-measure. The formulation is geometrically general, applicable to any type of unstructured grid.

  20. Improved characterisation and modelling of measurement errors in electrical resistivity tomography (ERT) surveys

    Science.gov (United States)

    Tso, Chak-Hau Michael; Kuras, Oliver; Wilkinson, Paul B.; Uhlemann, Sebastian; Chambers, Jonathan E.; Meldrum, Philip I.; Graham, James; Sherlock, Emma F.; Binley, Andrew

    2017-11-01

    Measurement errors can play a pivotal role in geophysical inversion. Most inverse models require users to prescribe or assume a statistical model of data errors before inversion. Wrongly prescribed errors can lead to over- or under-fitting of data; however, the derivation of models of data errors is often neglected. With the heightening interest in uncertainty estimation within hydrogeophysics, better characterisation and treatment of measurement errors is needed to provide improved image appraisal. Here we focus on the role of measurement errors in electrical resistivity tomography (ERT). We have analysed two time-lapse ERT datasets: one contains 96 sets of direct and reciprocal data collected from a surface ERT line within a 24 h timeframe; the other is a two-year-long cross-borehole survey at a UK nuclear site with 246 sets of over 50,000 measurements. Our study includes the characterisation of the spatial and temporal behaviour of measurement errors using autocorrelation and correlation coefficient analysis. We find that, in addition to well-known proportionality effects, ERT measurements can also be sensitive to the combination of electrodes used, i.e. errors may not be uncorrelated as often assumed. Based on these findings, we develop a new error model that allows grouping based on electrode number in addition to fitting a linear model to transfer resistance. The new model explains the observed measurement errors better and shows superior inversion results and uncertainty estimates in synthetic examples. It is robust, because it groups errors together based on the electrodes used to make the measurements. The new model can be readily applied to the diagonal data weighting matrix widely used in common inversion methods, as well as to the data covariance matrix in a Bayesian inversion framework. We demonstrate its application using extensive ERT monitoring datasets from the two aforementioned sites.
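
The proposed error model fits a linear function of transfer resistance within groups defined by the electrodes used to make the measurements. A minimal sketch of that fitting step (ordinary least squares; function and variable names are illustrative, not the authors' code):

```python
def fit_linear_error_model(R, err):
    """OLS fit of |error| = a + b*|R| against transfer resistance R,
    the proportionality model commonly fitted to reciprocal errors."""
    x = [abs(r) for r in R]
    y = [abs(e) for e in err]
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
    return my - b * mx, b        # intercept a, slope b

def fit_by_electrode_group(R, err, groups):
    """Fit one (a, b) pair per electrode group, reflecting the paper's
    finding that errors correlate with the electrodes used."""
    return {g: fit_linear_error_model(
                [r for r, gi in zip(R, groups) if gi == g],
                [e for e, gi in zip(err, groups) if gi == g])
            for g in set(groups)}
```

The fitted per-group (a, b) values would then populate the diagonal data-weighting matrix (or the data covariance matrix in a Bayesian framework) mentioned in the abstract.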

  1. A quantum inspired model of radar range and range-rate measurements with applications to weak value measurements

    Science.gov (United States)

    Escalante, George

    2017-05-01

Weak Value Measurements (WVMs) with pre- and post-selected quantum mechanical ensembles were proposed by Aharonov, Albert, and Vaidman in 1988 and have found numerous applications in both theoretical and applied physics. In the field of precision metrology, WVM techniques have been demonstrated and proven valuable as a means to shift, amplify, and detect signals and to make precise measurements of small effects in both quantum and classical systems, including: particle spin, the Spin-Hall effect of light, optical beam deflections, frequency shifts, field gradients, and many others. In principle, WVM amplification techniques are also possible in radar and could be a valuable tool for precision measurements. However, relatively limited research has been done in this area. This article presents a quantum-inspired model of radar range and range-rate measurements of arbitrary strength, including standard and pre- and post-selected measurements. The model is used to extend WVM amplification theory to radar, with the receive filter performing the post-selection role. It is shown that the description of range and range-rate measurements based on the quantum-mechanical measurement model and formalism produces the same results as the conventional approach used in radar based on signal processing and filtering of the reflected signal at the radar receiver. Numerical simulation results using simple point-scatterer configurations are presented, applying the quantum-inspired model of radar range and range-rate measurements in the weak measurement regime. Potential applications and benefits of the quantum-inspired approach to radar measurements are presented, including improved range and Doppler measurement resolution.

  2. Errors in anthropometric measurements in neonates and infants

    Directory of Open Access Journals (Sweden)

    D Harrison

    2001-09-01

Full Text Available The accuracy of methods used in Cape Town hospitals and clinics for the measurement of weight, length and age in neonates and infants became suspect during a survey of 12 local-authority and 5 private-sector clinics in 1994-1995 (Harrison et al. 1998). A descriptive prospective study was carried out to determine the accuracy of these methods in neonates at four maternity hospitals (2 public and 2 private) and in infants at four child health clinics of the Cape Town City Council. The main outcome measures were an assessment of three currently used methods of measuring crown-heel length (with a measuring board, a mat and a tape measure); a comparison of weight differences when an infant is fully clothed, naked and in napkin only; and the differences in age estimated by calendar dates and by a specially designed electronic calculator. The results showed that the current methods used to measure infants in Cape Town vary widely from one institution to another. Many measurements are inaccurate, and there is a real need for uniformity and accuracy. This can only be achieved by an effective education programme to ensure that accurate measurements are used in monitoring the health of young children in Cape Town and elsewhere.

  3. Measurement of positron range in matter in strong magnetic fields

    International Nuclear Information System (INIS)

    Hammer, B.E.; Christensen, N.L.

    1995-01-01

    Positron range is one factor that places a limitation on Positron Emission Tomography (PET) resolution. The distance a positron travels through matter before it annihilates with an electron is a function of its initial energy and the electron density of the medium. A strong magnetic field limits positron range when momentum components are transverse to the field. Measurement of positron range was determined by deconvolving the effects of detector response and radioactive distribution from the measured annihilation spread function. The annihilation spread function for a 0.5 mm bead of 68 Ga was measured with 0.2 and 1.0 mm wide slit collimators. Based on the annihilation spread function FWHM (Full Width at Half Maximum) for a 1.0 mm wide slit the median positron range in tissue equivalent material is 0.87, 0.50, 0.22 mm at 0, 5.0 and 9.4 T, respectively

  4. State-independent error-disturbance trade-off for measurement operators

    International Nuclear Information System (INIS)

    Zhou, S.S.; Wu, Shengjun; Chau, H.F.

    2016-01-01

    In general, classical measurement statistics of a quantum measurement is disturbed by performing an additional incompatible quantum measurement beforehand. Using this observation, we introduce a state-independent definition of disturbance by relating it to the distinguishability problem between two classical statistical distributions – one resulting from a single quantum measurement and the other from a succession of two quantum measurements. Interestingly, we find an error-disturbance trade-off relation for any measurements in two-dimensional Hilbert space and for measurements with mutually unbiased bases in any finite-dimensional Hilbert space. This relation shows that error should be reduced to zero in order to minimize the sum of error and disturbance. We conjecture that a similar trade-off relation with a slightly relaxed definition of error can be generalized to any measurements in an arbitrary finite-dimensional Hilbert space.

  5. Errors due to random noise in velocity measurement using incoherent-scatter radar

    Directory of Open Access Journals (Sweden)

    P. J. S. Williams

    1996-12-01

    Full Text Available The random-noise errors involved in measuring the Doppler shift of an 'incoherent-scatter' spectrum are predicted theoretically for all values of Te/Ti from 1.0 to 3.0. After correction has been made for the effects of convolution during transmission and reception and the additional errors introduced by subtracting the average of the background gates, the rms errors can be expressed by a simple semi-empirical formula. The observed errors are determined from a comparison of simultaneous EISCAT measurements using an identical pulse code on several adjacent frequencies. The plot of observed versus predicted error has a slope of 0.991 and a correlation coefficient of 99.3%. The prediction also agrees well with the mean of the error distribution reported by the standard EISCAT analysis programme.

  6. Period, epoch, and prediction errors of ephemerides from continuous sets of timing measurements

    Science.gov (United States)

    Deeg, H. J.

    2015-06-01

Space missions such as Kepler and CoRoT have led to large numbers of eclipse or transit measurements in nearly continuous time series. This paper shows how to obtain the period error in such measurements from a basic linear least-squares fit, and how to correctly derive the timing error in the prediction of future transit or eclipse events. Assuming strict periodicity, a formula for the period error of these time series is derived: σP = σT (12 / (N³ − N))^(1/2), where σP is the period error, σT the timing error of a single measurement, and N the number of measurements. Compared to the iterative method for period error estimation by Mighell & Plavchan (2013), this much simpler formula leads to smaller period errors, whose correctness has been verified through simulations. For the prediction of times of future periodic events, the usual linear ephemerides, where epoch errors are quoted for the first timing measurement, are prone to an overestimation of the error of that prediction. This may be avoided by a correction for the duration of the time series. An alternative is the derivation of ephemerides whose reference epoch and epoch error are given for the centre of the time series. For long continuous or near-continuous time series whose acquisition is completed, such central epochs should be the preferred way of quoting linear ephemerides. While this work was motivated by the analysis of eclipse timing measurements in space-based light curves, it should be applicable to any other problem with an uninterrupted sequence of discrete timings for which the determination of a zero point, of a constant period and of the associated errors is needed.

  7. Discrete time interval measurement system: fundamentals, resolution and errors in the measurement of angular vibrations

    International Nuclear Information System (INIS)

    Gómez de León, F C; Meroño Pérez, P A

    2010-01-01

The traditional method for measuring the velocity and the angular vibration of the shaft of rotating machines using incremental encoders is based on counting pulses over given time intervals. This method is generically called the time interval measurement system (TIMS). A variant of this method developed in this work consists of measuring the time of each pulse from the encoder and sampling the signal by means of an A/D converter as if it were an analog signal, that is to say, in discrete time. For this reason, we have named this method the discrete time interval measurement system (DTIMS). This measurement system provides a substantial improvement in precision and frequency resolution compared with the traditional pulse-counting method. In addition, this method permits modification of the width of some pulses in order to obtain a mark-phase on every lap. This paper explains the theoretical fundamentals of the DTIMS and its application for measuring the angular vibrations of rotating machines. It also presents the required relationship between the sampling rate of the signal, the number of pulses of the encoder and the rotating velocity in order to obtain the required resolution and to bound the methodological errors in the measurement.
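
A hypothetical sketch of the underlying pulse-timing computation: with an encoder emitting M pulses per revolution, each measured inter-pulse interval yields one angular-velocity sample (idealized timestamps; names are illustrative, not from the paper):

```python
import math

def angular_velocity(timestamps, pulses_per_rev):
    """DTIMS-style estimate: each encoder pulse advances the shaft by
    2*pi/M rad, so the velocity between consecutive pulses is that
    pulse angle divided by the measured inter-pulse time."""
    dtheta = 2 * math.pi / pulses_per_rev
    return [dtheta / (t1 - t0)
            for t0, t1 in zip(timestamps, timestamps[1:])]
```

Sampling the pulse times directly, rather than counting pulses per fixed window, is what gives the method its finer velocity and frequency resolution.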

  8. Smartphone photography utilized to measure wrist range of motion.

    Science.gov (United States)

    Wagner, Eric R; Conti Mica, Megan; Shin, Alexander Y

    2018-02-01

The purpose was to determine whether smartphone photography is a reliable tool for measuring wrist movement. Smartphones were used to take digital photos of both wrists in 32 normal participants (64 wrists) at the extremes of wrist motion. The smartphone measurements were compared with clinical goniometry measurements. There was a very high correlation between the clinical goniometry and smartphone measurements, as the concordance coefficients were high for radial deviation, ulnar deviation, wrist extension and wrist flexion. The Pearson coefficients also demonstrated the high precision of the smartphone measurements. The Bland-Altman plots demonstrated that 29-31 of the 32 smartphone measurements were within the 95% confidence interval of the clinical measurements for all wrist positions. There was high reliability between the photographs taken by the volunteer and the researcher, as well as high inter-observer reliability. Smartphone digital photography is a reliable and accurate tool for measuring wrist range of motion. Level of evidence: II.
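
The abstract reports concordance coefficients; Lin's concordance correlation coefficient is the standard agreement statistic for such method-comparison studies and is straightforward to compute (a sketch, not the authors' code):

```python
import statistics

def concordance_ccc(x, y):
    """Lin's concordance correlation coefficient between two sets of
    paired measurements (e.g. goniometer vs smartphone angles):
    2*cov(x, y) / (var(x) + var(y) + (mean(x) - mean(y))^2).
    Equals 1 only for perfect agreement; unlike Pearson's r, it is
    penalized by location and scale shifts between the two methods."""
    n = len(x)
    mx, my = statistics.mean(x), statistics.mean(y)
    vx, vy = statistics.pvariance(x), statistics.pvariance(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / n
    return 2 * cov / (vx + vy + (mx - my) ** 2)
```

This distinction is why the study reports both concordance and Pearson coefficients: the former captures agreement, the latter precision.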

  9. Comparing objective and subjective error measures for color constancy

    NARCIS (Netherlands)

    Lucassen, M.P.; Gijsenij, A.; Gevers, T.

    2008-01-01

    We compare an objective and a subjective performance measure for color constancy algorithms. Eight hyper-spectral images were rendered under a neutral reference illuminant and four chromatic illuminants (Red, Green, Yellow, Blue). The scenes rendered under the chromatic illuminants were color

  10. From Measurements Errors to a New Strain Gauge Design

    DEFF Research Database (Denmark)

    Mikkelsen, Lars Pilgaard; Zike, Sanita; Salviato, Marco

    2015-01-01

    Significant over-prediction of the material stiffness in the order of 1-10% for polymer based composites has been experimentally observed and numerical determined when using strain gauges for strain measurements instead of non-contact methods such as digital image correlation or less stiff method...

  11. Investigation of an Error Theory for Conjoint Measurement Methodology.

    Science.gov (United States)

    1983-05-01

Nygren, 1982; Srinivasan and Shocker, 1973a, 1973b; Ullrich and Cummins, 1973; Takane, Young, and de Leeuw, 1980; Young, 1972). ... procedures as a diagnostic tool. Specifically, they used the computed STRESS value and a measure of fit they called PRECAP that could be obtained

  12. Correcting systematic errors in high-sensitivity deuteron polarization measurements

    NARCIS (Netherlands)

    Brantjes, N. P. M.; Dzordzhadze, V.; Gebel, R.; Gonnella, F.; Gray, F. E.; van der Hoek, D. J.; Imig, A.; Kruithof, W. L.; Lazarus, D. M.; Lehrach, A.; Lorentz, B.; Messi, R.; Moricciani, D.; Morse, W. M.; Noid, G. A.; Onderwater, C. J. G.; Ozben, C. S.; Prasuhn, D.; Sandri, P. Levi; Semertzidis, Y. K.; da Silva e Silva, M.; Stephenson, E. J.; Stockhorst, H.; Venanzoni, G.; Versolato, O. O.

    2012-01-01

    This paper reports deuteron vector and tensor beam polarization measurements taken to investigate the systematic variations due to geometric beam misalignments and high data rates. The experiments used the In-Beam Polarimeter at the KVI-Groningen and the EDDA detector at the Cooler Synchrotron COSY

  13. Spatial filtering velocimeter for vehicle navigation with extended measurement range

    Science.gov (United States)

    He, Xin; Zhou, Jian; Nie, Xiaoming; Long, Xingwu

    2015-05-01

    A spatial filtering velocimeter is proposed to provide accurate velocity information for a vehicle autonomous navigation system. The presented spatial filtering velocimeter is based on a CMOS linear image sensor, whose limited frame rate restricts high-speed measurement of the vehicle. To extend the measurement range of the velocimeter, a method of frequency shifting is put forward. Theoretical analysis shows that the frequency of the output signal can be reduced and the measurement range doubled by this method when the shifting direction is set the same as that of the image velocity. The fast Fourier transform (FFT) is employed to obtain the power spectra of the spatially filtered signals. Because of the limited frequency resolution of the FFT, a frequency spectrum correction algorithm, called energy centrobaric correction, is used to improve the frequency resolution, and its correction accuracy is analyzed. Experiments were carried out to measure the moving surface of a conveyor belt. The experimental results show that the maximum measurable velocity is about 800 deg/s without frequency shifting and 1600 deg/s with frequency shifting, when the frame rate of the image sensor is about 8117 Hz. Therefore, the measurement range is doubled by the method of frequency shifting. Furthermore, experiments were carried out to measure vehicle velocity simultaneously using both the designed SFV and a laser Doppler velocimeter (LDV). The measurement results of the presented SFV are coincident with those of the LDV, but with larger fluctuation. Therefore, it has potential for application to vehicular autonomous navigation.
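
    The energy-centroid ("centrobaric") idea can be illustrated with a minimal sketch: the coarse FFT peak bin is refined by the power-weighted centroid of its neighbouring bins. The test signal is invented; only the 8117 Hz frame rate is borrowed from the abstract:

    ```python
    import numpy as np

    def centroid_corrected_peak(signal, fs):
        """Estimate the dominant frequency of `signal` (sampled at `fs` Hz)
        from an FFT power spectrum, refining the coarse peak-bin estimate with
        an energy-centroid (centrobaric) correction over neighbouring bins."""
        n = len(signal)
        spec = np.abs(np.fft.rfft(signal - np.mean(signal))) ** 2
        freqs = np.fft.rfftfreq(n, d=1.0 / fs)
        k = int(np.argmax(spec))
        # power-weighted centroid over a small window around the peak bin
        lo, hi = max(k - 2, 0), min(k + 3, len(spec))
        return np.sum(freqs[lo:hi] * spec[lo:hi]) / np.sum(spec[lo:hi])

    fs = 8117.0                              # frame rate from the abstract
    t = np.arange(2048) / fs
    sig = np.sin(2 * np.pi * 1234.5 * t)     # true frequency lies between FFT bins
    print(centroid_corrected_peak(sig, fs))  # close to the 1234.5 Hz tone
    ```

    With a bin width of about 4 Hz here, the raw peak bin can be off by up to half a bin, while the centroid lands within a fraction of a bin of the true frequency.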

  14. Random measurement error: Why worry? An example of cardiovascular risk factors.

    Science.gov (United States)

    Brakenhoff, Timo B; van Smeden, Maarten; Visseren, Frank L J; Groenwold, Rolf H H

    2018-01-01

    With the increased use of data not originally recorded for research, such as routine care data (or 'big data'), measurement error is bound to become an increasingly relevant problem in medical research. A common view among medical researchers on the influence of random measurement error (i.e. classical measurement error) is that its presence leads to some degree of systematic underestimation of studied exposure-outcome relations (i.e. attenuation of the effect estimate). For the common situation where the analysis involves at least one exposure and one confounder, we demonstrate that the direction of effect of random measurement error on the estimated exposure-outcome relations can be difficult to anticipate. Using three example studies on cardiovascular risk factors, we illustrate that random measurement error in the exposure and/or confounder can lead to underestimation as well as overestimation of exposure-outcome relations. We therefore advise medical researchers to refrain from making claims about the direction of effect of measurement error in their manuscripts, unless the appropriate inferential tools are used to study or alleviate the impact of measurement error from the analysis.
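
    The paper's point, that random error in the exposure attenuates the estimate while random error in a confounder can inflate it, is easy to reproduce in a small simulation (all effect sizes below are invented for illustration):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n = 200_000
    confounder = rng.normal(size=n)
    exposure = 0.8 * confounder + rng.normal(size=n)   # exposure correlated with confounder
    outcome = 1.0 * exposure + 1.0 * confounder + rng.normal(size=n)

    def adjusted_effect(x, c, y):
        """OLS coefficient of x in the model y ~ x + c + intercept."""
        X = np.column_stack([x, c, np.ones_like(x)])
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        return beta[0]

    noisy = lambda v: v + rng.normal(scale=1.0, size=n)  # classical measurement error

    print(adjusted_effect(exposure, confounder, outcome))        # ~1.0 (no error)
    print(adjusted_effect(noisy(exposure), confounder, outcome)) # attenuated (~0.5 here)
    print(adjusted_effect(exposure, noisy(confounder), outcome)) # inflated (~1.3 here)
    ```

    The third line is the case medical researchers often overlook: error in the confounder leaves residual confounding, which here biases the exposure effect upward rather than toward the null.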

  15. Random measurement error: Why worry? An example of cardiovascular risk factors.

    Directory of Open Access Journals (Sweden)

    Timo B Brakenhoff

    Full Text Available With the increased use of data not originally recorded for research, such as routine care data (or 'big data'), measurement error is bound to become an increasingly relevant problem in medical research. A common view among medical researchers on the influence of random measurement error (i.e. classical measurement error) is that its presence leads to some degree of systematic underestimation of studied exposure-outcome relations (i.e. attenuation of the effect estimate). For the common situation where the analysis involves at least one exposure and one confounder, we demonstrate that the direction of effect of random measurement error on the estimated exposure-outcome relations can be difficult to anticipate. Using three example studies on cardiovascular risk factors, we illustrate that random measurement error in the exposure and/or confounder can lead to underestimation as well as overestimation of exposure-outcome relations. We therefore advise medical researchers to refrain from making claims about the direction of effect of measurement error in their manuscripts, unless the appropriate inferential tools are used to study or alleviate the impact of measurement error from the analysis.

  16. Measurement error and timing of predictor values for multivariable risk prediction models are poorly reported.

    Science.gov (United States)

    Whittle, Rebecca; Peat, George; Belcher, John; Collins, Gary S; Riley, Richard D

    2018-05-18

    Measurement error in predictor variables may threaten the validity of clinical prediction models. We sought to evaluate the possible extent of the problem. A secondary objective was to examine whether predictors are measured at the intended moment of model use. A systematic search of Medline was used to identify a sample of articles reporting the development of a clinical prediction model published in 2015. After screening according to predefined inclusion criteria, information on predictors, strategies to control for measurement error and intended moment of model use were extracted. Susceptibility to measurement error for each predictor was classified as low or high risk. Thirty-three studies were reviewed, including 151 different predictors in the final prediction models. Fifty-one (33.7%) predictors were categorised as high risk of error; however, this was not accounted for in the model development. Only 8 (24.2%) studies explicitly stated the intended moment of model use and when the predictors were measured. Reporting of measurement error and intended moment of model use is poor in prediction model studies. There is a need to identify circumstances where ignoring measurement error in prediction models is consequential and whether accounting for the error will improve the predictions. Copyright © 2018. Published by Elsevier Inc.

  17. Sources of errors in the measurements of underwater profiling radiometer

    Digital Repository Service at National Institute of Oceanography (India)

    Silveira, N.; Suresh, T.; Talaulikar, M.; Desa, E.; Matondkar, S.G.P.; Lotlikar, A.

    to meet the stringent quality requirements of marine optical data for satellite ocean color sensor validation, development of algorithms and other related applications, it is essential to take great care while measuring these parameters. There are two... of the pelican hook. The radiometer dives vertically and the cable is paid out with less tension, keeping pace with the descent of the radiometer while taking care to release only the required amount of cable. The operation of the release mechanism lever...

  18. Mean-Square Error Due to Gradiometer Field Measuring Devices

    Science.gov (United States)

    1991-06-01

    ...convolving the gradiometer data with the inverse transform of 1/T(α, β)... Hence (2) may be expressed in the transform domain as... applying the inverse transform of 1/T(α, β) will not be possible because its inverse does not exist, and because it is a high-pass function its use in an inverse transform technique... (interleaved citation fragment: "...frequency measurements," in Superconductor Applications: SQUIDs and Machines, B. B. Schwartz and S. Foner, Eds., New York: Plenum Press)

  19. Error Modelling for Multi-Sensor Measurements in Infrastructure-Free Indoor Navigation

    Directory of Open Access Journals (Sweden)

    Laura Ruotsalainen

    2018-02-01

    Full Text Available The long-term objective of our research is to develop a method for infrastructure-free simultaneous localization and mapping (SLAM) and context recognition for tactical situational awareness. Localization will be realized by propagating motion measurements obtained using a monocular camera, a foot-mounted Inertial Measurement Unit (IMU), sonar, and a barometer. Due to the size and weight requirements set by tactical applications, Micro-Electro-Mechanical (MEMS) sensors will be used. However, MEMS sensors suffer from biases and drift errors that may substantially decrease the position accuracy, so sophisticated error modelling and careful implementation of the integration algorithms are key to a viable result. Algorithms used for multi-sensor fusion have traditionally been different versions of Kalman filters. However, Kalman filters are based on the assumptions that the state propagation and measurement models are linear with additive Gaussian noise. Neither assumption holds for tactical applications, especially for dismounted soldiers or rescue personnel. Our approach is therefore to use particle filtering (PF), which is a sophisticated option for integrating measurements emerging from pedestrian motion having non-Gaussian error characteristics. This paper discusses the statistical modelling of the measurement errors from inertial sensors and vision-based heading and translation measurements to include the correct error probability density functions (pdfs) in the particle filter implementation. Model fitting is then used to verify the pdfs of the measurement errors. Based on the deduced error models of the measurements, a particle filtering method is developed to fuse all this information, where the weight of each particle is computed based on the specific models derived. The performance of the developed method is
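
    The weight-update step of such a particle filter can be sketched as follows. The Student-t measurement model and all numbers below are stand-ins for the fitted non-Gaussian error pdfs described in the abstract, not the paper's actual models:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    n_particles = 5000
    true_pos = 3.0

    # Particles drawn from a broad prior (stands in for the propagated motion model)
    particles = rng.normal(loc=0.0, scale=5.0, size=n_particles)

    def student_t_logpdf(x, df=3.0, scale=0.5):
        """Unnormalised log-density of a scaled Student-t: a heavy-tailed error
        model standing in for a fitted non-Gaussian measurement pdf. Constant
        terms are dropped because they cancel when the weights are normalised."""
        z = x / scale
        return -0.5 * (df + 1) * np.log1p(z * z / df) - np.log(scale)

    measurement = true_pos + 0.3   # one noisy observation (e.g. vision-based translation)

    # Weight each particle by the measurement likelihood, in log space for stability
    log_w = student_t_logpdf(measurement - particles)
    w = np.exp(log_w - log_w.max())
    w /= w.sum()

    estimate = np.sum(w * particles)   # weighted posterior mean
    print(estimate)                    # close to the 3.3 measurement
    ```

    Replacing the Gaussian likelihood of a Kalman filter with an arbitrary pdf like this is exactly the flexibility that motivates particle filtering for pedestrian motion; a resampling step would follow in a full filter.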

  20. Measuring Articulatory Error Consistency in Children with Developmental Apraxia of Speech

    Science.gov (United States)

    Betz, Stacy K.; Stoel-Gammon, Carol

    2005-01-01

    Error inconsistency is often cited as a characteristic of children with speech disorders, particularly developmental apraxia of speech (DAS); however, few researchers operationally define error inconsistency and the definitions that do exist are not standardized across studies. This study proposes three formulas for measuring various aspects of…

  1. To Error Problem Concerning Measuring Concentration of Carbon Monoxide by Thermo-Chemical Sensors

    Directory of Open Access Journals (Sweden)

    V. I. Nazarov

    2007-01-01

    Full Text Available The paper presents additional errors in measuring the concentration of carbon monoxide with thermo-chemical sensors. A number of analytical expressions have been obtained for calculating these errors and for corrections when environmental factors deviate from admissible values.

  2. About Error in Measuring Oxygen Concentration by Solid-Electrolyte Sensors

    Directory of Open Access Journals (Sweden)

    V. I. Nazarov

    2008-01-01

    Full Text Available The paper evaluates additional errors in measuring the oxygen concentration of a gas mixture with a solid-electrolyte cell. Experimental dependences have been obtained for the additional errors caused by changes in temperature in the sensor zone, the flow rate of the gas mixture supplied to the sensor zone, the partial pressure of the gas mixture, and fluctuations in the oxygen concentration of the air.

  3. Unaccounted source of systematic errors in measurements of the Newtonian gravitational constant G

    Science.gov (United States)

    DeSalvo, Riccardo

    2015-06-01

    Many precision measurements of G have produced a spread of results incompatible with measurement errors. Clearly an unknown source of systematic errors is at work. It is proposed here that most of the discrepancies derive from subtle deviations from Hooke's law, caused by avalanches of entangled dislocations. The idea is supported by deviations from linearity reported by experimenters measuring G, similarly to what is observed, on a larger scale, in low-frequency spring oscillators. Some mitigating experimental apparatus modifications are suggested.

  4. Intrinsic measurement errors for the speed of light in vacuum

    Science.gov (United States)

    Braun, Daniel; Schneiter, Fabienne; Fischer, Uwe R.

    2017-09-01

    The speed of light in vacuum, one of the most important and precisely measured natural constants, is fixed by convention to c = 299 792 458 m s^-1. Advanced theories predict possible deviations from this universal value, or even quantum fluctuations of c. Combining arguments from quantum parameter estimation theory and classical general relativity, we here establish rigorously the existence of lower bounds on the uncertainty to which the speed of light in vacuum can be determined in a given region of space-time, subject to several reasonable restrictions. They provide a novel perspective on the experimental falsifiability of predictions for the quantum fluctuations of space-time.

  5. Low-frequency Periodic Error Identification and Compensation for Star Tracker Attitude Measurement

    Institute of Scientific and Technical Information of China (English)

    WANG Jiongqi; XIONG Kai; ZHOU Haiyin

    2012-01-01

    The low-frequency periodic error of a star tracker is one of the most critical problems for high-accuracy satellite attitude determination. In this paper an approach is proposed to identify and compensate the low-frequency periodic error of a star tracker in attitude measurement. The analytical expression between the estimated gyro drift and the low-frequency periodic error of the star tracker is derived first. The low-frequency periodic error, which can be expressed by a Fourier series, is then identified from the frequency spectrum of the estimated gyro drift according to the solution of the first step. Furthermore, a compensation model of the low-frequency periodic error is established based on the identified parameters to improve the attitude determination accuracy. Finally, promising simulated experimental results demonstrate the validity and effectiveness of the proposed method. The periodic error for attitude determination is basically eliminated and the estimation precision is greatly improved.
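
    The identification step, reading the Fourier coefficients of the periodic error off the spectrum of the estimated gyro drift and subtracting the reconstructed term, can be sketched on a synthetic drift series. The sample rate, period and amplitudes below are invented for illustration:

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    fs = 1.0                       # one drift estimate per second (assumed)
    t = np.arange(4096) / fs
    f0 = 8 / t.size                # hypothetical periodic component, 8 cycles over the record
    drift = 0.05 * np.sin(2 * np.pi * f0 * t + 0.7) + 0.01 * rng.normal(size=t.size)

    # Identify the dominant low-frequency line from the drift spectrum
    spec = np.fft.rfft(drift)
    freqs = np.fft.rfftfreq(t.size, d=1.0 / fs)
    k = int(np.argmax(np.abs(spec[1:]))) + 1      # skip the DC bin
    amp = 2 * np.abs(spec[k]) / t.size            # Fourier amplitude of the line
    phase = np.angle(spec[k])

    # Compensate by subtracting the reconstructed periodic term
    compensated = drift - amp * np.cos(2 * np.pi * freqs[k] * t + phase)
    print(np.std(drift), np.std(compensated))     # the compensated series is much quieter
    ```

    In the paper the same idea is applied to the gyro drift estimated on orbit, with several Fourier terms rather than a single line.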

  6. Measurement error in income and schooling, and the bias of linear estimators

    DEFF Research Database (Denmark)

    Bingley, Paul; Martinello, Alessandro

    The characteristics of measurement error determine the bias of linear estimators. We propose a method for validating economic survey data allowing for measurement error in the validation source, and we apply this method by validating Survey of Health, Ageing and Retirement in Europe (SHARE) data with Danish administrative registers. We find that measurement error in surveys is classical for annual gross income but non-classical for years of schooling, causing a 21% amplification bias in IV estimators of returns to schooling. Using a 1958 Danish schooling reform, we contextualize our result...

  7. Incorporating Measurement Error from Modeled Air Pollution Exposures into Epidemiological Analyses.

    Science.gov (United States)

    Samoli, Evangelia; Butland, Barbara K

    2017-12-01

    Outdoor air pollution exposures used in epidemiological studies are commonly predicted from spatiotemporal models incorporating limited measurements, temporal factors, geographic information system variables, and/or satellite data. Measurement error in these exposure estimates leads to imprecise estimation of health effects and their standard errors. We reviewed methods for measurement error correction that have been applied in epidemiological studies that use model-derived air pollution data. We identified seven cohort studies and one panel study that have employed measurement error correction methods. These methods included regression calibration, risk set regression calibration, regression calibration with instrumental variables, the simulation extrapolation approach (SIMEX), and methods based on the non-parametric or parametric bootstrap. Corrections resulted in small increases in the absolute magnitude of the health effect estimate and its standard error under most scenarios. Limited application of measurement error correction methods in air pollution studies may be attributed to the absence of exposure validation data and the methodological complexity of the proposed methods. Future epidemiological studies should consider in their design phase the requirements for the measurement error correction method to be later applied, while methodological advances are needed under the multi-pollutant setting.
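
    Regression calibration, the most common of the corrections listed, replaces the error-prone exposure by its expectation given the measurement. A minimal sketch with simulated data follows; the error variance is assumed known here, whereas in practice it would come from validation data:

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    n = 50_000
    true_exposure = rng.normal(10, 2, size=n)              # e.g. a pollutant concentration
    observed = true_exposure + rng.normal(0, 2, size=n)    # classical error, variance 4
    outcome = 0.5 * true_exposure + rng.normal(size=n)

    # Naive regression on the error-prone exposure: attenuated slope
    naive = np.polyfit(observed, outcome, 1)[0]

    # Regression calibration: replace the observed value with E[X | X*],
    # using the reliability ratio lambda = var(X) / var(X*)
    lam = 4.0 / (4.0 + 4.0)                                # assumed-known error variance
    calibrated = observed.mean() + lam * (observed - observed.mean())
    corrected = np.polyfit(calibrated, outcome, 1)[0]

    print(naive, corrected)   # ~0.25 (attenuated) vs ~0.5 (recovered)
    ```

    Regressing the outcome on the calibrated exposure rescales the slope by 1/lambda, undoing the attenuation, though the standard error still needs adjustment (e.g. by bootstrap, as the review notes).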

  8. Accounting for measurement error in human life history trade-offs using structural equation modeling.

    Science.gov (United States)

    Helle, Samuli

    2018-03-01

    Revealing causal effects from correlative data is very challenging and a contemporary problem in human life history research owing to the lack of experimental approach. Problems with causal inference arising from measurement error in independent variables, whether related either to inaccurate measurement technique or validity of measurements, seem not well-known in this field. The aim of this study is to show how structural equation modeling (SEM) with latent variables can be applied to account for measurement error in independent variables when the researcher has recorded several indicators of a hypothesized latent construct. As a simple example of this approach, measurement error in lifetime allocation of resources to reproduction in Finnish preindustrial women is modelled in the context of the survival cost of reproduction. In humans, lifetime energetic resources allocated in reproduction are almost impossible to quantify with precision and, thus, typically used measures of lifetime reproductive effort (e.g., lifetime reproductive success and parity) are likely to be plagued by measurement error. These results are contrasted with those obtained from a traditional regression approach where the single best proxy of lifetime reproductive effort available in the data is used for inference. As expected, the inability to account for measurement error in women's lifetime reproductive effort resulted in the underestimation of its underlying effect size on post-reproductive survival. This article emphasizes the advantages that the SEM framework can provide in handling measurement error via multiple-indicator latent variables in human life history studies. © 2017 Wiley Periodicals, Inc.

  9. Using surrogate biomarkers to improve measurement error models in nutritional epidemiology

    Science.gov (United States)

    Keogh, Ruth H; White, Ian R; Rodwell, Sheila A

    2013-01-01

    Nutritional epidemiology relies largely on self-reported measures of dietary intake, errors in which give biased estimated diet–disease associations. Self-reported measurements come from questionnaires and food records. Unbiased biomarkers are scarce; however, surrogate biomarkers, which are correlated with intake but not unbiased, can also be useful. It is important to quantify and correct for the effects of measurement error on diet–disease associations. Challenges arise because there is no gold standard, and errors in self-reported measurements are correlated with true intake and each other. We describe an extended model for error in questionnaire, food record, and surrogate biomarker measurements. The focus is on estimating the degree of bias in estimated diet–disease associations due to measurement error. In particular, we propose using sensitivity analyses to assess the impact of changes in values of model parameters which are usually assumed fixed. The methods are motivated by and applied to measures of fruit and vegetable intake from questionnaires, 7-day diet diaries, and surrogate biomarker (plasma vitamin C) from over 25000 participants in the Norfolk cohort of the European Prospective Investigation into Cancer and Nutrition. Our results show that the estimated effects of error in self-reported measurements are highly sensitive to model assumptions, resulting in anything from a large attenuation to a small amplification in the diet–disease association. Commonly made assumptions could result in a large overcorrection for the effects of measurement error. Increased understanding of relationships between potential surrogate biomarkers and true dietary intake is essential for obtaining good estimates of the effects of measurement error in self-reported measurements on observed diet–disease associations. Copyright © 2013 John Wiley & Sons, Ltd. PMID:23553407

  10. Error of the slanted edge method for measuring the modulation transfer function of imaging systems.

    Science.gov (United States)

    Xie, Xufen; Fan, Hongda; Wang, Hongyuan; Wang, Zebin; Zou, Nianyu

    2018-03-01

    The slanted edge method is a basic approach for measuring the modulation transfer function (MTF) of imaging systems; however, its measurement accuracy is limited in practice. Theoretical analysis of the slanted edge MTF measurement method performed in this paper reveals that inappropriate edge angles and random noise reduce this accuracy. The error caused by edge angles is analyzed using sampling and reconstruction theory. Furthermore, an error model combining noise and edge angles is proposed. We verify the analyses and model with respect to (i) the edge angle, (ii) a statistical analysis of the measurement error, (iii) the full width at half-maximum of a point spread function, and (iv) the error model. The experimental results verify the theoretical findings. This research can serve as a reference for applications of the slanted edge MTF measurement method.
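
    The core ESF -> LSF -> MTF pipeline of the slanted edge method can be sketched in one dimension (the slant itself only supplies sub-pixel sampling of the edge and is omitted here). A Gaussian blur stands in for the system response because its analytic MTF gives something to check against:

    ```python
    import numpy as np

    dx = 1.0 / 64                          # supersampled ESF spacing, in pixels
    x = np.arange(-8, 8, dx)
    sigma = 1.0                            # assumed blur width, in pixels
    psf = np.exp(-x**2 / (2 * sigma**2))
    esf = np.cumsum(psf)                   # edge spread function: ideal edge * blur
    esf /= esf[-1]

    lsf = np.gradient(esf, dx)             # differentiate the ESF to get the LSF
    lsf /= lsf.sum() * dx                  # normalise the LSF to unit area

    freqs = np.fft.rfftfreq(lsf.size, d=dx)    # spatial frequency, cycles/pixel
    mtf = np.abs(np.fft.rfft(lsf)) * dx        # MTF = |Fourier transform of LSF|
    mtf /= mtf[0]

    # For a Gaussian blur the analytic MTF is exp(-2 pi^2 sigma^2 f^2)
    print(np.interp(0.25, freqs, mtf))     # ~0.29 for sigma = 1
    ```

    The paper's error sources enter this pipeline at the first step: noise corrupts the ESF samples, and an inappropriate edge angle corrupts the supersampled spacing itself.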

  11. Small Device For Short-Range Antenna Measurements Using Optics

    DEFF Research Database (Denmark)

    Yanakiev, Boyan Radkov; Nielsen, Jesper Ødum; Christensen, Morten

    2011-01-01

    This paper gives a practical solution for implementing an antenna radiation pattern measurement device using optical fibers. It is suitable for anechoic chambers as well as short range channel sounding. The device is optimized for small size and provides a cheap and easy way to make optical antenna...

  12. Measuring the relativistic perigee advance with satellite laser ranging

    CERN Document Server

    Iorio, L; Pavlis, E C

    2002-01-01

    The pericentric advance of a test body by a central mass is one of the classical tests of general relativity. Today, this effect is measured with radar ranging by the perihelion shift of Mercury and other planets in the gravitational field of the Sun, with a relative accuracy of the order of 10^-2-10^-3. In this paper, we explore the possibility of a measurement of the pericentric advance in the gravitational field of Earth by analysing the laser-ranged data of some orbiting, or proposed, laser-ranged geodetic satellites. Such a measurement of the perigee advance would place limits on hypothetical, very weak, Yukawa-type components of the gravitational interaction with a finite range of the order of 10^4 km. Thus, we show that, at the present level of knowledge of the orbital perturbations, the relative accuracy, achievable with suitably combined orbital elements of LAGEOS and LAGEOS II, is of the order of 10^-3. With the corresponding measured value of (2 + 2 gamma - beta)/3, ...

  13. Recoil range distribution measurement in 20Ne + 181Ta reaction

    International Nuclear Information System (INIS)

    Tripathi, R.; Sudarshan, K.; Goswami, A.; Guin, R.; Reddy, A.V.R.

    2005-01-01

    In order to investigate linear momentum transfer in various transfer channels in 20Ne + 181Ta, recoil range distribution measurements have been carried out at E_lab = 180 MeV, populating a significant number of l-waves above l_crit

  14. Quantifying error of lidar and sodar Doppler beam swinging measurements of wind turbine wakes using computational fluid dynamics

    Science.gov (United States)

    Lundquist, J. K.; Churchfield, M. J.; Lee, S.; Clifton, A.

    2015-02-01

    Wind-profiling lidars are now regularly used in boundary-layer meteorology and in applications such as wind energy and air quality. Lidar wind profilers exploit the Doppler shift of laser light backscattered from particulates carried by the wind to measure a line-of-sight (LOS) velocity. The Doppler beam swinging (DBS) technique, used by many commercial systems, considers measurements of this LOS velocity in multiple radial directions in order to estimate horizontal and vertical winds. The method relies on the assumption of homogeneous flow across the region sampled by the beams. Using such a system in inhomogeneous flow, such as wind turbine wakes or complex terrain, will result in errors. To quantify the errors expected from such violation of the assumption of horizontal homogeneity, we simulate inhomogeneous flow in the atmospheric boundary layer, notably stably stratified flow past a wind turbine, with a mean wind speed of 6.5 m s^-1 at the turbine hub-height of 80 m. This slightly stable case results in 15° of wind direction change across the turbine rotor disk. The resulting flow field is sampled in the same fashion that a lidar samples the atmosphere with the DBS approach, including the lidar range weighting function, enabling quantification of the error in the DBS observations. The observations from the instruments located upwind have small errors, which are ameliorated with time averaging. However, the downwind observations, particularly within the first two rotor diameters downwind from the wind turbine, suffer from errors due to the heterogeneity of the wind turbine wake. Errors in the stream-wise component of the flow approach 30% of the hub-height inflow wind speed close to the rotor disk. Errors in the cross-stream and vertical velocity components are also significant: cross-stream component errors are on the order of 15% of the hub-height inflow wind speed (1.0 m s^-1) and errors in the vertical velocity measurement exceed the actual vertical velocity
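
    The DBS retrieval itself is a small linear inversion. The sketch below assumes an idealised four-beam (N/E/S/W) scan geometry and shows that the retrieval is exact when the flow really is homogeneous across the beams, which is precisely the assumption a turbine wake violates:

    ```python
    import numpy as np

    def dbs_winds(v_los, elevation_deg=75.0):
        """Recover (u, v, w) from four DBS line-of-sight velocities taken at
        azimuths N, E, S, W, assuming horizontally homogeneous flow."""
        el = np.radians(elevation_deg)
        vn, ve, vs, vw = v_los
        u = (ve - vw) / (2 * np.cos(el))
        v = (vn - vs) / (2 * np.cos(el))
        w = (vn + ve + vs + vw) / (4 * np.sin(el))
        return u, v, w

    # Forward-simulate the LOS velocities for a homogeneous wind and check recovery
    u, v, w = 5.0, 2.0, 0.1
    el = np.radians(75.0)
    v_los = [v * np.cos(el) + w * np.sin(el),    # north beam
             u * np.cos(el) + w * np.sin(el),    # east beam
            -v * np.cos(el) + w * np.sin(el),    # south beam
            -u * np.cos(el) + w * np.sin(el)]    # west beam
    print(dbs_winds(v_los))  # recovers (5.0, 2.0, 0.1)
    ```

    When the beams sample different parts of a wake, the four LOS velocities are no longer consistent with a single (u, v, w), and the inversion returns the biased values the study quantifies.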

  15. An integrity measure to benchmark quantum error correcting memories

    Science.gov (United States)

    Xu, Xiaosi; de Beaudrap, Niel; O'Gorman, Joe; Benjamin, Simon C.

    2018-02-01

    Rapidly developing experiments across multiple platforms now aim to realise small quantum codes, and so demonstrate a memory within which a logical qubit can be protected from noise. There is a need to benchmark the achievements in these diverse systems, and to compare the inherent power of the codes they rely upon. We describe a recently introduced performance measure called integrity, which relates to the probability that an ideal agent will successfully ‘guess’ the state of a logical qubit after a period of storage in the memory. Integrity is straightforward to evaluate experimentally without state tomography and it can be related to various established metrics such as the logical fidelity and the pseudo-threshold. We offer a set of experimental milestones that are steps towards demonstrating unconditionally superior encoded memories. Using intensive numerical simulations we compare memories based on the five-qubit code, the seven-qubit Steane code, and a nine-qubit code which is the smallest instance of a surface code; we assess both the simple and fault-tolerant implementations of each. While the ‘best’ code upon which to base a memory does vary according to the nature and severity of the noise, nevertheless certain trends emerge.

  16. Accelerating inference for diffusions observed with measurement error and large sample sizes using approximate Bayesian computation

    DEFF Research Database (Denmark)

    Picchini, Umberto; Forman, Julie Lyng

    2016-01-01

    In recent years, dynamical modelling has been provided with a range of breakthrough methods to perform exact Bayesian inference. However, it is often computationally unfeasible to apply exact statistical methodologies in the context of large data sets and complex models. This paper considers a nonlinear stochastic differential equation model observed with correlated measurement errors and an application to protein folding modelling. An approximate Bayesian computation (ABC)-MCMC algorithm is suggested to allow inference for model parameters within reasonable time constraints. The ABC algorithm... applications. A simulation study is conducted to compare our strategy with exact Bayesian inference, the latter resulting two orders of magnitude slower than ABC-MCMC for the considered set-up. Finally, the ABC algorithm is applied to a large size protein data. The suggested methodology is fairly general...
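
    The flavour of ABC can be conveyed by its simplest rejection form, on a toy Gaussian-mean problem rather than the paper's SDE model: parameter draws are kept when their simulated summary statistic lands near the observed one:

    ```python
    import numpy as np

    rng = np.random.default_rng(5)
    observed = rng.normal(2.0, 1.0, size=200)     # toy data with unknown mean
    s_obs = observed.mean()                       # observed summary statistic

    # Rejection ABC: draw from the prior, simulate the same summary, keep close draws
    draws = rng.uniform(-5, 5, size=200_000)      # prior over the mean
    sims = rng.normal(draws, 1.0 / np.sqrt(200))  # simulated summary (mean of 200 points)
    accepted = draws[np.abs(sims - s_obs) < 0.02]

    print(accepted.size, accepted.mean())         # posterior sample, centred near s_obs
    ```

    ABC-MCMC replaces the blind prior draws with a Markov chain that concentrates proposals in the accepted region, which is what makes the approach feasible for expensive simulators like SDE models.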

  17. Statistical methods for biodosimetry in the presence of both Berkson and classical measurement error

    Science.gov (United States)

    Miller, Austin

    In radiation epidemiology, the true dose received by those exposed cannot be assessed directly. Physical dosimetry uses a deterministic function of the source term, distance and shielding to estimate dose. For the atomic bomb survivors, the physical dosimetry system is well established. The classical measurement errors plaguing the location and shielding inputs to the physical dosimetry system are well known. Adjusting for the associated biases requires an estimate for the classical measurement error variance, for which no data-driven estimate exists. In this case, an instrumental variable solution is the most viable option to overcome the classical measurement error indeterminacy. Biological indicators of dose may serve as instrumental variables. Specification of the biodosimeter dose-response model requires identification of the radiosensitivity variables, for which we develop statistical definitions and variables. More recently, researchers have recognized Berkson error in the dose estimates, introduced by averaging assumptions for many components in the physical dosimetry system. We show that Berkson error induces a bias in the instrumental variable estimate of the dose-response coefficient, and then address the estimation problem. This model is specified by developing an instrumental variable mixed measurement error likelihood function, which is then maximized using a Monte Carlo EM Algorithm. These methods produce dose estimates that incorporate information from both physical and biological indicators of dose, as well as the first instrumental variable based data-driven estimate for the classical measurement error variance.
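
    The instrumental-variable idea, using a biological indicator correlated with true dose to undo the classical-error attenuation in physical dosimetry, can be sketched with simulated data. All models and numbers below are invented for illustration:

    ```python
    import numpy as np

    rng = np.random.default_rng(4)
    n = 100_000
    dose = rng.gamma(shape=2.0, scale=1.0, size=n)        # true (unobserved) dose
    physical = dose + rng.normal(0, 0.8, size=n)          # physical dosimetry, classical error
    biomarker = 0.7 * dose + rng.normal(0, 0.5, size=n)   # biological indicator as instrument
    outcome = 0.3 * dose + rng.normal(size=n)             # true dose-response slope = 0.3

    def slope(x, y):
        return np.cov(x, y)[0, 1] / np.var(x)

    naive = slope(physical, outcome)                      # attenuated by the error variance
    iv = np.cov(biomarker, outcome)[0, 1] / np.cov(biomarker, physical)[0, 1]
    print(naive, iv)   # naive ~0.23 (attenuated), IV ~0.30 (consistent)
    ```

    The IV ratio is consistent because the biomarker's errors are independent of the dosimetry errors; the thesis's extension handles the additional Berkson component that this simple estimator ignores.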

  18. Measurements of short-range ordering in Ni3Al

    International Nuclear Information System (INIS)

    Okamoto, J.K.; Ahn, C.C.

    1992-01-01

    This paper reports on extended electron energy-loss fine structure (EXELFS), which has been used to measure short-range ordering in Ni3Al. Films of fcc Ni3Al with suppressed short-range order were synthesized by vacuum evaporation of Ni3Al onto room-temperature substrates. EXELFS data were taken from both Al K and Ni L23 edges. The development of short-range order was observed after the samples were annealed for various times at temperatures below 350 degrees C. Upon comparison with ab initio planewave EXELFS calculations, it was found that the Warren-Cowley short-range order parameter α(1nn) changed by about -0.1 after 210 minutes of annealing at 150 degrees C

  19. Measuring the relativistic perigee advance with satellite laser ranging

    International Nuclear Information System (INIS)

    Iorio, Lorenzo; Ciufolini, Ignazio; Pavlis, Erricos C

    2002-01-01

    The pericentric advance of the orbit of a test body around a central mass is one of the classical tests of general relativity. Today, this effect is measured with radar ranging by the perihelion shift of Mercury and other planets in the gravitational field of the Sun, with a relative accuracy of the order of 10 -2 -10 -3 . In this paper, we explore the possibility of measuring the pericentric advance in the gravitational field of Earth by analysing the laser-ranging data of some orbiting, or proposed, laser-ranged geodetic satellites. Such a measurement of the perigee advance would place limits on hypothetical, very weak, Yukawa-type components of the gravitational interaction with a finite range of the order of 10 4 km. We show that, at the present level of knowledge of the orbital perturbations, the relative accuracy achievable with suitably combined orbital elements of LAGEOS and LAGEOS II is of the order of 10 -3 . With the corresponding measured value of (2 + 2γ - β)/3, and using η = 4β - γ - 3 from lunar laser ranging, we could obtain an estimate of the PPN parameters γ and β with an accuracy of the order of 10 -2 -10 -3 . These accuracies would be substantially improved in the near future with the new Earth gravity field models from the CHAMP and GRACE missions. The use of the perigee of LARES (LAser RElativity Satellite), with a suitable combination of orbital residuals including also the node and the perigee of LAGEOS II, would further improve the accuracy of the proposed measurement.
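
    The pericenter shift discussed above follows the standard general-relativistic formula Δω = 6πGM/(c²a(1-e²)) per orbit. A quick sanity check against Mercury's well-known ~43 arcsec/century (orbital elements below are rounded textbook values, not the paper's data):

```python
import math

GM_SUN = 1.32712440018e20   # m^3 s^-2
C = 299_792_458.0           # m/s

def advance_arcsec_per_century(a_m, ecc, period_days, gm=GM_SUN):
    """Relativistic pericenter advance, 6*pi*GM/(c^2 a (1-e^2)) per orbit,
    accumulated over a Julian century and converted to arcseconds."""
    per_orbit = 6.0 * math.pi * gm / (C**2 * a_m * (1.0 - ecc**2))  # radians
    orbits = 36525.0 / period_days
    return math.degrees(per_orbit * orbits) * 3600.0

# Mercury (rounded elements): reproduces the famous ~43 arcsec/century
mercury = advance_arcsec_per_century(5.791e10, 0.2056, 87.969)
```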

  20. Measuring Identification and Quantification Errors in Spectral CT Material Decomposition

    Directory of Open Access Journals (Sweden)

    Aamir Younis Raja

    2018-03-01

    Material decomposition methods are used to identify and quantify multiple tissue components in spectral CT, but there is no published method to quantify the misidentification of materials. This paper describes a new method for assessing misidentification and mis-quantification in spectral CT. We scanned a phantom containing gadolinium (1, 2, 4, 8 mg/mL), hydroxyapatite (54.3, 211.7, 808.5 mg/mL), water and vegetable oil using a MARS spectral scanner equipped with a poly-energetic X-ray source operated at 118 kVp and a CdTe Medipix3RX camera. Two imaging protocols were used: one with and one without a 0.375 mm external brass filter. A proprietary material decomposition method identified voxels as gadolinium, hydroxyapatite, lipid or water. Sensitivity and specificity information was used to evaluate material misidentification. Biological samples were also scanned. There were marked differences in identification and quantification between the two protocols, even though spectral and linear correlation of gadolinium and hydroxyapatite in the reconstructed images was high and no qualitative segmentation differences in the material decomposed images were observed. At 8 mg/mL, gadolinium was correctly identified for both protocols, but its concentration was underestimated by over half for the unfiltered protocol. At 1 mg/mL, gadolinium was misidentified in 38% of voxels for the filtered protocol and 58% of voxels for the unfiltered protocol. Hydroxyapatite was correctly identified at the two higher concentrations for both protocols, but mis-quantified for the unfiltered protocol. Gadolinium concentration as measured in the biological specimen showed a two-fold difference between protocols. In future, this methodology could be used to compare and optimize scanning protocols, image reconstruction methods, and methods for material differentiation in spectral CT.

  1. A simulation study to quantify the impacts of exposure measurement error on air pollution health risk estimates in copollutant time-series models.

    Science.gov (United States)

    Dionisio, Kathie L; Chang, Howard H; Baxter, Lisa K

    2016-11-25

    Exposure measurement error in copollutant epidemiologic models has the potential to introduce bias in relative risk (RR) estimates. A simulation study was conducted using empirical data to quantify the impact of correlated measurement errors in time-series analyses of air pollution and health. ZIP-code level estimates of exposure for six pollutants (CO, NO x , EC, PM 2.5 , SO 4 , O 3 ) from 1999 to 2002 in the Atlanta metropolitan area were used to calculate spatial, population (i.e. ambient versus personal), and total exposure measurement error. Empirically determined covariance of pollutant concentration pairs and the associated measurement errors were used to simulate true exposure (exposure without error) from observed exposure. Daily emergency department visits for respiratory diseases were simulated using a Poisson time-series model with a main pollutant RR = 1.05 per interquartile range, and a null association for the copollutant (RR = 1). Monte Carlo experiments were used to evaluate the impacts of correlated exposure errors of different copollutant pairs. Substantial attenuation of RRs due to exposure error was evident in nearly all copollutant pairs studied, ranging from 10 to 40% attenuation for spatial error, 3-85% for population error, and 31-85% for total error. When CO, NO x or EC is the main pollutant, we demonstrated the possibility of false positives, specifically identifying significant, positive associations for copollutants based on the estimated type I error rate. The impact of exposure error must be considered when interpreting results of copollutant epidemiologic models, due to the possibility of attenuation of main pollutant RRs and the increased probability of false positives when measurement error is present.
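
    A stripped-down version of such a simulation, with a single error-prone exposure rather than the paper's correlated copollutant structure (all numbers below are invented), shows the attenuation mechanism in a Poisson time-series model:

```python
import numpy as np

rng = np.random.default_rng(1)
n_days = 5000
log_rr = 0.05                              # assumed true log relative risk

z = rng.normal(0.0, 1.0, n_days)           # true daily exposure (standardized)
x = z + rng.normal(0.0, 1.0, n_days)       # measured exposure with classical error
y = rng.poisson(np.exp(3.0 + log_rr * z))  # daily emergency department visits

def poisson_slope(expo, counts, iters=25):
    """Newton-Raphson fit of counts ~ Poisson(exp(b0 + b1*expo)); returns b1."""
    X = np.column_stack([np.ones_like(expo), expo])
    b = np.array([np.log(counts.mean()), 0.0])   # safe starting point
    for _ in range(iters):
        mu = np.exp(X @ b)
        b = b + np.linalg.solve(X.T @ (mu[:, None] * X), X.T @ (counts - mu))
    return b[1]

slope_true = poisson_slope(z, y)   # close to 0.05
slope_err = poisson_slope(x, y)    # attenuated roughly by var(z)/(var(z)+1) = 0.5
```

    Extending the sketch to two correlated exposures with correlated errors is what lets the paper demonstrate transfer of signal to a null copollutant, i.e. the false-positive mechanism.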

  2. Clinical measuring system for the form and position errors of circular workpieces using optical fiber sensors

    Science.gov (United States)

    Tan, Jiubin; Qiang, Xifu; Ding, Xuemei

    1991-08-01

    Optical sensors have two notable advantages in modern precision measurement. One is that they can be used in nondestructive measurement, because the sensors need not touch the surfaces of workpieces during measurement. The other is that they strongly resist electromagnetic interference, vibration, and noise, so they are suitable for use at machining sites. However, the drift of light intensity and changes in the reflection coefficient at different measuring positions of a workpiece may greatly influence measured results. To solve this problem, a spectroscopic differential characteristic compensating method is put forward. The method can effectively compensate not only the measuring errors resulting from the drift of light intensity but also the influence on measured results caused by changes in the reflection coefficient. The article also analyzes the possibility of, and the means of, separating data errors in a clinical measuring system for form and position errors of circular workpieces.

  3. Broadband Laser Ranging for Position Measurements in Shock Physics Experiments

    Science.gov (United States)

    Rhodes, Michelle; Bennett, Corey; Daykin, Edward; Younk, Patrick; Lalone, Brandon; Kostinski, Natalie

    2017-06-01

    Broadband laser ranging (BLR) is a recently developed measurement system that provides an attractive option for determining the position of shock-driven surfaces. This system uses broadband, picosecond (or femtosecond) laser pulses and a fiber interferometer to measure relative travel time to a target and to a reference mirror. The difference in travel time produces a delay difference between pulse replicas that creates a spectral beat frequency. The spectral beating is recorded in real time using a dispersive Fourier transform and an oscilloscope. BLR systems have been designed that measure position at 12.5-40 MHz with better than 100 micron accuracy over ranges greater than 10 cm. We will give an overview of the basic operating principles of these systems. Prepared by LLNL under Contract DE-AC52-07NA27344, by LANL under Contract DE-AC52-06NA25396, and by NSTec Contract DE-AC52-06NA25946.
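
    As a rough numeric sketch of the delay-to-range relation described above: a delay τ between the pulse replicas produces a spectral fringe (beat) period of Δν = 1/τ in optical frequency. The factor of two below assumes a round-trip geometry and the specific numbers are illustrative, not from the fielded systems.

```python
import math

C = 299_792_458.0   # m/s

def range_offset_from_beat(delta_nu_hz):
    """Target-to-reference range offset implied by a spectral beat period
    delta_nu; the replica delay is tau = 1/delta_nu, and the factor of two
    assumes the light travels to the target and back."""
    tau = 1.0 / delta_nu_hz
    return C * tau / 2.0

# A 1.5 GHz beat period across the optical spectrum corresponds to ~10 cm
r = range_offset_from_beat(1.5e9)
```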

  4. The systematic error of temperature noise correlation measurement method and self-calibration

    International Nuclear Information System (INIS)

    Tian Hong; Tong Yunxian

    1993-04-01

    The turbulent transport behavior of fluid noise and the effect of noise on the velocity measurement system have been studied. The systematic error of the velocity measurement system is analyzed. A theoretical calibration method is proposed, which makes time-correlation velocity measurement an absolute measurement method. The theoretical results are in good agreement with experiments.
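
    The underlying time-correlation velocimetry idea, a temperature-noise pattern convected past two probes with the transit time recovered from the cross-correlation peak, can be sketched as follows (the geometry and signal model are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(2)
fs = 10_000.0        # sampling rate, Hz (assumed)
gap = 0.05           # probe separation, m (assumed)
v_true = 2.5         # flow velocity, m/s

# Band-limited temperature noise convected past both probes
n = 4000
raw = np.convolve(rng.normal(size=n + 500), np.ones(50) / 50, mode="same")
delay = int(round(gap / v_true * fs))     # 200 samples of transit time
up = raw[delay:delay + n]                 # upstream probe
down = raw[:n]                            # downstream probe sees it `delay` later

# Transit time from the peak of the cross-correlation
corr = np.correlate(down - down.mean(), up - up.mean(), mode="full")
lag = int(corr.argmax()) - (n - 1)        # transit time in samples
v_est = gap * fs / lag                    # recovered velocity
```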

  5. Image pre-filtering for measurement error reduction in digital image correlation

    Science.gov (United States)

    Zhou, Yihao; Sun, Chen; Song, Yuntao; Chen, Jubing

    2015-02-01

    In digital image correlation, the sub-pixel intensity interpolation causes a systematic error in the measured displacements. The error increases toward the high-frequency components of the speckle pattern. In practice, a captured image is usually corrupted by additive white noise. The noise introduces additional energy in the high frequencies and therefore raises the systematic error. Meanwhile, the noise also elevates the random error, which increases with the noise power. In order to reduce the systematic error and the random error of the measurements, we apply pre-filtering to the images prior to the correlation so that the high-frequency contents are suppressed. Two spatial-domain filters (binomial and Gaussian) and two frequency-domain filters (Butterworth and Wiener) are tested on speckle images undergoing both simulated and real-world translations. By evaluating the errors of the various combinations of speckle patterns, interpolators, noise levels, and filter configurations, we come to the following conclusions. All four filters are able to reduce the systematic error. Meanwhile, the random error can also be reduced if the signal power is mainly distributed around DC. For high-frequency speckle patterns, the low-pass filters (binomial, Gaussian and Butterworth) slightly increase the random error, and the Butterworth filter produces the lowest random error among them. By using the Wiener filter with over-estimated noise power, the random error can be reduced, but the resultant systematic error is higher than that of the low-pass filters. In general, the Butterworth filter is recommended for error reduction due to its flexibility of passband selection and maximal preservation of the allowed frequencies. The binomial filter enables efficient implementation and thus becomes a good option if computational cost is a critical issue. While used together with pre-filtering, the B-spline interpolator produces lower systematic error than the bicubic interpolator and a similar level of random error.
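
    A minimal pre-filtering sketch using the binomial kernel, one of the two spatial-domain filters mentioned above. The "speckle" here is a synthetic low-frequency pattern with invented noise levels, so the filter suppresses the added white noise with little signal bias:

```python
import numpy as np

rng = np.random.default_rng(3)

# Low-frequency synthetic "speckle" pattern plus additive white noise
t = np.linspace(0.0, 8.0 * np.pi, 256)
signal = np.outer(np.sin(t), np.cos(1.3 * t))
noisy = signal + rng.normal(0.0, 0.2, signal.shape)

def binomial_filter(img, passes=2):
    """Separable 5-tap binomial low-pass filter [1, 4, 6, 4, 1]/16."""
    k = np.array([1.0, 4.0, 6.0, 4.0, 1.0]) / 16.0
    out = img.copy()
    for _ in range(passes):
        out = np.apply_along_axis(np.convolve, 0, out, k, mode="same")
        out = np.apply_along_axis(np.convolve, 1, out, k, mode="same")
    return out

filtered = binomial_filter(noisy)
rms_before = np.sqrt(np.mean((noisy - signal) ** 2))     # ~0.2 (the noise sigma)
rms_after = np.sqrt(np.mean((filtered - signal) ** 2))   # substantially smaller
```

    For a genuinely high-frequency speckle pattern the same filter would also bite into the signal, which is the trade-off the paper quantifies.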

  6. Systematic error in the precision measurement of the mean wavelength of a nearly monochromatic neutron beam due to geometric errors

    Energy Technology Data Exchange (ETDEWEB)

    Coakley, K.J., E-mail: kevin.coakley@nist.go [National Institute of Standards and Technology, 325 Broadway, Boulder, CO 80305 (United States); Dewey, M.S. [National Institute of Standards and Technology, Gaithersburg, MD (United States); Yue, A.T. [University of Tennessee, Knoxville, TN (United States); Laptev, A.B. [Tulane University, New Orleans, LA (United States)

    2009-12-11

    Many experiments at neutron scattering facilities require nearly monochromatic neutron beams. In such experiments, one must accurately measure the mean wavelength of the beam. We seek to reduce the systematic uncertainty of this measurement to approximately 0.1%. This work is motivated mainly by an effort to improve the measurement of the neutron lifetime determined from data collected in a 2003 in-beam experiment performed at NIST. More specifically, we seek to reduce systematic uncertainty by calibrating the neutron detector used in this lifetime experiment. This calibration requires simultaneous measurement of the responses of both the neutron detector used in the lifetime experiment and an absolute black neutron detector to a highly collimated nearly monochromatic beam of cold neutrons, as well as a separate measurement of the mean wavelength of the neutron beam. The calibration uncertainty will depend on the uncertainty of the measured efficiency of the black neutron detector and the uncertainty of the measured mean wavelength. The mean wavelength of the beam is measured by Bragg diffracting the beam from a nearly perfect silicon analyzer crystal. Given the rocking curve data and knowledge of the directions of the rocking axis and the normal to the scattering planes in the silicon crystal, one determines the mean wavelength of the beam. In practice, the direction of the rocking axis and the normal to the silicon scattering planes are not known exactly. Based on Monte Carlo simulation studies, we quantify systematic uncertainties in the mean wavelength measurement due to these geometric errors. Both theoretical and empirical results are presented and compared.
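
    The wavelength determination and its geometric sensitivity follow from the Bragg condition λ = 2d sin θ, whose relative error for a small angular misalignment δθ is cot(θ)·δθ. A numeric sketch (the Bragg angle and the misalignment below are illustrative values, not the experiment's):

```python
import math

D_220 = 1.920e-10   # Si(220) lattice-plane spacing, m (approximate)

theta = 60.0                       # Bragg angle, degrees (illustrative)
wavelength = 2.0 * D_220 * math.sin(math.radians(theta))   # ~3.3 angstrom

# A geometric misalignment dtheta changes the inferred wavelength by
# dlambda/lambda = cot(theta) * dtheta
dtheta = math.radians(0.01)        # 0.01 degree misalignment (assumed)
rel_err = dtheta / math.tan(math.radians(theta))           # ~1e-4, i.e. 0.01%
```

    At this level a hundredth of a degree of geometric error already consumes a tenth of the 0.1% uncertainty budget, which is why the rocking-axis and plane-normal directions need the Monte Carlo treatment described above.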

  7. Semiparametric Bayesian Analysis of Nutritional Epidemiology Data in the Presence of Measurement Error

    KAUST Repository

    Sinha, Samiran; Mallick, Bani K.; Kipnis, Victor; Carroll, Raymond J.

    2009-01-01

    We propose a semiparametric Bayesian method for handling measurement error in nutritional epidemiological data. Our goal is to estimate nonparametrically the form of association between a disease and exposure variable while the true values

  8. Local and omnibus goodness-of-fit tests in classical measurement error models

    KAUST Repository

    Ma, Yanyuan

    2010-09-14

    We consider functional measurement error models, i.e. models where covariates are measured with error and yet no distributional assumptions are made about the mismeasured variable. We propose and study a score-type local test and an orthogonal series-based, omnibus goodness-of-fit test in this context, where no likelihood function is available or calculated; i.e., all the tests are proposed in the semiparametric model framework. We demonstrate that our tests have optimality properties and computational advantages that are similar to those of the classical score tests in the parametric model framework. The test procedures are applicable to several semiparametric extensions of measurement error models, including when the measurement error distribution is estimated non-parametrically as well as for generalized partially linear models. The performance of the local score-type and omnibus goodness-of-fit tests is demonstrated through simulation studies and analysis of a nutrition data set.

  9. Correction for Measurement Error from Genotyping-by-Sequencing in Genomic Variance and Genomic Prediction Models

    DEFF Research Database (Denmark)

    Ashraf, Bilal; Janss, Luc; Jensen, Just

    The GBSeq data can be used directly in genomic models in the form of individual SNP allele-frequency estimates (e.g., reference reads/total reads per polymorphic site per individual), but is subject to measurement error due to the low sequencing depth per individual. In the current work we show how the correction for measurement error in GBSeq can also be applied in whole-genome genomic variance and genomic prediction models. Bayesian whole-genome random regression models are proposed to allow implementation of large-scale SNP-based models with a per-SNP correction for measurement error. We show correct retrieval of genomic explained variance, and improved genomic prediction when accounting for the measurement error in GBSeq data.

  10. Measurement error in mobile source air pollution exposure estimates due to residential mobility during pregnancy.

    Science.gov (United States)

    Pennington, Audrey Flak; Strickland, Matthew J; Klein, Mitchel; Zhai, Xinxin; Russell, Armistead G; Hansen, Craig; Darrow, Lyndsey A

    2017-09-01

    Prenatal air pollution exposure is frequently estimated using maternal residential location at the time of delivery as a proxy for residence during pregnancy. We describe residential mobility during pregnancy among 19,951 children from the Kaiser Air Pollution and Pediatric Asthma Study, quantify measurement error in spatially resolved estimates of prenatal exposure to mobile source fine particulate matter (PM 2.5 ) due to ignoring this mobility, and simulate the impact of this error on estimates of epidemiologic associations. Two exposure estimates were compared, one calculated using complete residential histories during pregnancy (weighted average based on time spent at each address) and the second calculated using only residence at birth. Estimates were computed using annual averages of primary PM 2.5 from traffic emissions modeled using a Research LINE-source dispersion model for near-surface releases (RLINE) at 250 m resolution. In this cohort, 18.6% of children were born to mothers who moved at least once during pregnancy. Mobile source PM 2.5 exposure estimates calculated using complete residential histories during pregnancy and only residence at birth were highly correlated (r S >0.9). Simulations indicated that ignoring residential mobility resulted in modest bias of epidemiologic associations toward the null, but varied by maternal characteristics and prenatal exposure windows of interest (ranging from -2% to -10% bias).
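
    The two exposure assignments compared above, a time-weighted average over the full residential history versus the birth address alone, can be sketched as follows. The dates and concentrations are invented for the example; only the weighting scheme mirrors the description in the abstract.

```python
from datetime import date

def weighted_exposure(residences, conception, delivery):
    """Time-weighted mean exposure over pregnancy.
    residences: list of (move_in_date, annual_avg_concentration) pairs,
    sorted by date; the first entry must predate conception."""
    total_days = (delivery - conception).days
    acc = 0.0
    for i, (start, conc) in enumerate(residences):
        seg_start = max(start, conception)
        seg_end = residences[i + 1][0] if i + 1 < len(residences) else delivery
        seg_end = min(seg_end, delivery)
        acc += conc * max((seg_end - seg_start).days, 0)
    return acc / total_days

# Hypothetical mother who moves once during pregnancy (ug/m^3 values invented)
history = [(date(2016, 1, 1), 1.8), (date(2016, 7, 1), 0.9)]
full_history = weighted_exposure(history, date(2016, 3, 1), date(2016, 12, 1))
birth_address_only = history[-1][1]   # 0.9: the usual proxy
```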

  11. Particle Filter with Novel Nonlinear Error Model for Miniature Gyroscope-Based Measurement While Drilling Navigation

    Directory of Open Access Journals (Sweden)

    Tao Li

    2016-03-01

    The derivation of a conventional error model for the miniature gyroscope-based measurement while drilling (MGWD) system is based on the assumption that the errors of attitude are small enough so that the direction cosine matrix (DCM) can be approximated or simplified by the errors of small-angle attitude. However, the simplification of the DCM would introduce errors to the navigation solutions of the MGWD system if the initial alignment cannot provide precise attitude, especially for the low-cost microelectromechanical system (MEMS) sensors operated in harsh multilateral horizontal downhole drilling environments. This paper proposes a novel nonlinear error model (NNEM) by the introduction of the error of DCM, and the NNEM can reduce the propagated errors under large-angle attitude error conditions. The zero velocity and zero position are the reference points and the innovations in the states estimation of particle filter (PF) and Kalman filter (KF). The experimental results illustrate that the performance of PF is better than KF and the PF with NNEM can effectively restrain the errors of system states, especially for the azimuth, velocity, and height in the quasi-stationary condition.

  12. Particle Filter with Novel Nonlinear Error Model for Miniature Gyroscope-Based Measurement While Drilling Navigation.

    Science.gov (United States)

    Li, Tao; Yuan, Gannan; Li, Wang

    2016-03-15

    The derivation of a conventional error model for the miniature gyroscope-based measurement while drilling (MGWD) system is based on the assumption that the errors of attitude are small enough so that the direction cosine matrix (DCM) can be approximated or simplified by the errors of small-angle attitude. However, the simplification of the DCM would introduce errors to the navigation solutions of the MGWD system if the initial alignment cannot provide precise attitude, especially for the low-cost microelectromechanical system (MEMS) sensors operated in harsh multilateral horizontal downhole drilling environments. This paper proposes a novel nonlinear error model (NNEM) by the introduction of the error of DCM, and the NNEM can reduce the propagated errors under large-angle attitude error conditions. The zero velocity and zero position are the reference points and the innovations in the states estimation of particle filter (PF) and Kalman filter (KF). The experimental results illustrate that the performance of PF is better than KF and the PF with NNEM can effectively restrain the errors of system states, especially for the azimuth, velocity, and height in the quasi-stationary condition.
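
    The core issue, the first-order DCM approximation degrading as the attitude error grows, is easy to quantify against the exact rotation matrix. This is a generic sketch of the approximation error, not the paper's NNEM:

```python
import numpy as np

def skew(v):
    """Skew-symmetric cross-product matrix of a 3-vector."""
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

def dcm_exact(rotvec):
    """Exact DCM from a rotation vector (Rodrigues formula)."""
    angle = np.linalg.norm(rotvec)
    K = skew(rotvec / angle)
    return np.eye(3) + np.sin(angle) * K + (1.0 - np.cos(angle)) * (K @ K)

def dcm_small_angle(rotvec):
    """First-order approximation used in conventional error models."""
    return np.eye(3) + skew(rotvec)

axis = np.array([1.0, 1.0, 1.0]) / np.sqrt(3.0)   # example rotation axis
errors = [np.linalg.norm(dcm_exact(np.radians(d) * axis)
                         - dcm_small_angle(np.radians(d) * axis))
          for d in (0.1, 1.0, 10.0, 30.0)]
# The Frobenius-norm error grows roughly as angle^2: negligible at 0.1 deg,
# but large at the 30-deg misalignments possible after a poor initial alignment.
```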

  13. The impact of measurement errors in the identification of regulatory networks

    Directory of Open Access Journals (Sweden)

    Sato João R

    2009-12-01

    Abstract Background: There are several studies in the literature depicting measurement error in gene expression data and also several others about regulatory network models. However, only a small fraction describes a combination of measurement error in mathematical regulatory networks and shows how to identify these networks under different rates of noise. Results: This article investigates the effects of measurement error on the estimation of the parameters in regulatory networks. Simulation studies indicate that, in both time series (dependent) and non-time series (independent) data, the measurement error strongly affects the estimated parameters of the regulatory network models, biasing them as predicted by the theory. Moreover, when testing the parameters of the regulatory network models, p-values computed by ignoring the measurement error are not reliable, since the rate of false positives is not controlled under the null hypothesis. In order to overcome these problems, we present an improved version of the Ordinary Least Squares estimator for independent (regression) models and dependent (autoregressive) models when the variables are subject to noise. Moreover, measurement error estimation procedures for microarrays are also described. Simulation results also show that both corrected methods perform better than the standard ones (i.e., ignoring measurement error). The proposed methodologies are illustrated using microarray data from lung cancer patients and mouse liver time series data. Conclusions: Measurement error dangerously affects the identification of regulatory network models; thus, it must be reduced or taken into account in order to avoid erroneous conclusions. This could be one of the reasons for the high biological false positive rates identified in actual regulatory network models.
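
    The paper's corrected estimator is developed for regression and autoregressive network models; the one-covariate version of the moment correction, with the error variance assumed known (e.g. estimated from replicate microarray measurements), looks like this. All numbers below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 50_000
beta = 1.5           # assumed true regulatory effect
sigma_u2 = 0.5       # measurement-error variance, assumed known from replicates

x_true = rng.normal(0.0, 1.0, n)                        # true expression level
x_obs = x_true + rng.normal(0.0, np.sqrt(sigma_u2), n)  # noisy microarray reading
y = beta * x_true + rng.normal(0.0, 1.0, n)             # regulated gene

sxy = np.cov(x_obs, y)[0, 1]
b_naive = sxy / np.var(x_obs, ddof=1)                   # biased toward zero
# Moment correction: subtract the known error variance from Var(x_obs)
b_corrected = sxy / (np.var(x_obs, ddof=1) - sigma_u2)
```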

  14. Relating Tropical Cyclone Track Forecast Error Distributions with Measurements of Forecast Uncertainty

    Science.gov (United States)

    2016-03-01

    Master's thesis by Nicholas M. Chisler, March 2016. Only report-documentation metadata (title, author, date, report type) is available for this record.

  15. Identification and estimation of nonlinear models using two samples with nonclassical measurement errors

    KAUST Repository

    Carroll, Raymond J.

    2010-05-01

    This paper considers identification and estimation of a general nonlinear Errors-in-Variables (EIV) model using two samples. Both samples consist of a dependent variable, some error-free covariates, and an error-prone covariate, for which the measurement error has unknown distribution and could be arbitrarily correlated with the latent true values; and neither sample contains an accurate measurement of the corresponding true variable. We assume that the regression model of interest - the conditional distribution of the dependent variable given the latent true covariate and the error-free covariates - is the same in both samples, but the distributions of the latent true covariates vary with observed error-free discrete covariates. We first show that the general latent nonlinear model is nonparametrically identified using the two samples when both could have nonclassical errors, without either instrumental variables or independence between the two samples. When the two samples are independent and the nonlinear regression model is parameterized, we propose sieve Quasi Maximum Likelihood Estimation (Q-MLE) for the parameter of interest, and establish its root-n consistency and asymptotic normality under possible misspecification, and its semiparametric efficiency under correct specification, with easily estimated standard errors. A Monte Carlo simulation and a data application are presented to show the power of the approach.

  16. Lifetime measurements in the picosecond range: Achievements and Perspectives

    International Nuclear Information System (INIS)

    Kruecken, Reiner

    1999-01-01

    This contribution will review the recoil distance method (RDM), its current range of applications, as well as future perspectives for the measurement of lifetimes of excited nuclear levels in the picosecond range. Recent Doppler-shift lifetime experiments with large gamma-ray spectrometers have achieved a new level of precision and sensitivity, providing new insights into nuclear structure physics. High-precision RDM measurements of near-yrast states in various mass regions have revealed dynamic shape effects beyond the framework of collective models and have also allowed the study of the interaction between coexisting shapes. The measurement of lifetimes in superdeformed bands has shown that lifetimes can be measured for nuclear excitations which are populated with only a few percent of the production cross-section of a nucleus. These experiments have also enabled us to study the mechanism of the decay-out of superdeformed bands. Another example of the need for precise lifetime measurements is the recent verification of the concept of 'magnetic rotation' in nuclei by the experimental observation of the characteristic drop of B(M1) values as a function of angular momentum. These recent breakthroughs have also opened new perspectives for the use of the RDM technique in more exotic regions of nuclei and nuclear excitations. Here the measurement of lifetimes in neutron-rich nuclei, which are not accessible with conventional nuclear reactions using stable beams and targets, is of special interest. Possible experimental approaches and simple estimates of the feasibility of such experiments will be presented. (author)
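
    The RDM extracts a lifetime from the fraction of decays occurring after the recoiling nucleus reaches the stopper. A schematic version of the core relation (the velocity, flight distance and decay fraction are invented values chosen to land in the picosecond range):

```python
import math

C = 299_792_458.0   # m/s

def rdm_lifetime(stopper_distance_m, v_over_c, unshifted_fraction):
    """Recoil distance method: the fraction of gamma rays emitted at rest
    (after reaching the stopper) is exp(-d/(v*tau)), so
    tau = -d / (v * ln(fraction))."""
    v = v_over_c * C
    return -stopper_distance_m / (v * math.log(unshifted_fraction))

# Illustrative numbers: 100 um flight distance, v/c = 2%, half the decays unshifted
tau = rdm_lifetime(100e-6, 0.02, 0.5)   # ~24 ps
```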

  17. Metrological Array of Cyber-Physical Systems. Part 7. Additive Error Correction for Measuring Instrument

    Directory of Open Access Journals (Sweden)

    Yuriy YATSUK

    2015-06-01

    Since at the design stage measurement results are not yet available, the uncertainty approach cannot be used; instead, the error approach can be successfully applied, taking the nominal transformation function of the instrument as true. The limiting possibilities of additive error correction of measuring instruments for Cyber-Physical Systems are studied on the basis of general and special methods of measurement. Principles of maximal symmetry of the measuring circuit and of its minimal reconfiguration are proposed for measurement and/or calibration. It is theoretically justified, for a variety of correction methods, that a minimum additive error of measuring instruments exists when the real equivalent parameters of the input electronic switches are considered. Conditions for in-place self-calibration and verification of the measuring instruments are also studied.

  18. Non-linear quantization error reduction for the temperature measurement subsystem on-board LISA Pathfinder

    Science.gov (United States)

    Sanjuan, J.; Nofrarias, M.

    2018-04-01

    Laser Interferometer Space Antenna (LISA) Pathfinder is a mission to test the technology enabling gravitational wave detection in space and to demonstrate that sub-femto-g free fall levels are possible. To do so, the distance between two free falling test masses is measured to unprecedented sensitivity by means of laser interferometry. Temperature fluctuations are one of the noise sources limiting the free fall accuracy and the interferometer performance and need to be known at the ˜10 μK Hz-1/2 level in the sub-millihertz frequency range in order to validate the noise models for the future space-based gravitational wave detector LISA. The temperature measurement subsystem on LISA Pathfinder is in charge of monitoring the thermal environment at key locations with noise levels of 7.5 μK Hz-1/2 at the sub-millihertz. However, its performance worsens by one to two orders of magnitude when slowly changing temperatures are measured due to errors introduced by analog-to-digital converter non-linearities. In this paper, we present a method to reduce this effect by data post-processing. The method is applied to experimental data available from on-ground validation tests to demonstrate its performance and the potential benefit for in-flight data. The analog-to-digital converter effects are reduced by a factor between three and six in the frequencies where the errors play an important role. An average 2.7 fold noise reduction is demonstrated in the 0.3 mHz-2 mHz band.

  19. Uncertainty quantification for radiation measurements: Bottom-up error variance estimation using calibration information

    International Nuclear Information System (INIS)

    Burr, T.; Croft, S.; Krieger, T.; Martin, K.; Norman, C.; Walsh, S.

    2016-01-01

    One example of top-down uncertainty quantification (UQ) involves comparing two or more measurements on each of multiple items. One example of bottom-up UQ expresses a measurement result as a function of one or more input variables that have associated errors, such as a measured count rate, which individually (or collectively) can be evaluated for impact on the uncertainty in the resulting measured value. In practice, it is often found that top-down UQ exhibits larger error variances than bottom-up UQ, because some error sources are present in the fielded assay methods used in top-down UQ that are not present (or not recognized) in the assay studies used in bottom-up UQ. One would like better consistency between the two approaches in order to claim understanding of the measurement process. The purpose of this paper is to refine bottom-up uncertainty estimation by using calibration information so that if there are no unknown error sources, the refined bottom-up uncertainty estimate will agree with the top-down uncertainty estimate to within a specified tolerance. Then, in practice, if the top-down uncertainty estimate is larger than the refined bottom-up uncertainty estimate by more than the specified tolerance, there must be omitted sources of error beyond those predicted from calibration uncertainty. The paper develops a refined bottom-up uncertainty approach for four cases of simple linear calibration: (1) inverse regression with negligible error in predictors, (2) inverse regression with non-negligible error in predictors, (3) classical regression followed by inversion with negligible error in predictors, and (4) classical regression followed by inversion with non-negligible errors in predictors. Our illustrations are of general interest, but are drawn from our experience with nuclear material assay by non-destructive assay. The main example we use is gamma spectroscopy that applies the enrichment meter principle. 
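
    For case (1) above, inverse regression with negligible error in predictors, the bottom-up variance can be sketched with textbook calibration formulas. The standards and readings below are invented, and this is only the starting point that the paper refines, not its full treatment:

```python
import numpy as np

# Calibration standards: known values x vs instrument response y (invented data)
x_std = np.array([0.7, 1.5, 3.0, 4.5, 6.0])
y_std = np.array([0.72, 1.49, 3.05, 4.43, 6.08])

m = len(x_std)
b, a = np.polyfit(x_std, y_std, 1)                 # slope, intercept
resid = y_std - (a + b * x_std)
s2 = resid @ resid / (m - 2)                       # residual variance
sxx = np.sum((x_std - x_std.mean()) ** 2)

y_new = 3.6                                        # response from a new item
x_hat = (y_new - a) / b                            # inverse prediction

# First-order (bottom-up) variance of x_hat from calibration uncertainty alone
var_x_hat = (s2 / b**2) * (1.0 + 1.0 / m + (x_hat - x_std.mean())**2 / sxx)
```

    If the top-down variance observed in fielded measurements exceeds `var_x_hat` by more than the chosen tolerance, some error source is missing from the bottom-up model, which is exactly the diagnostic the paper formalizes.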

  20. A portable non-contact displacement sensor and its application of lens centration error measurement

    Science.gov (United States)

    Yu, Zong-Ru; Peng, Wei-Jei; Wang, Jung-Hsing; Chen, Po-Jui; Chen, Hua-Lin; Lin, Yi-Hao; Chen, Chun-Cheng; Hsu, Wei-Yao; Chen, Fong-Zhi

    2018-02-01

    We present a portable non-contact displacement sensor (NCDS) based on the astigmatic method for micron-scale displacement measurement. The NCDS is composed of a collimated laser, a polarized beam splitter, a 1/4 wave plate, an aspheric objective lens, an astigmatic lens and a four-quadrant photodiode. A visible laser source is adopted for easier alignment and usage. The dimensions of the sensor are limited to 115 mm x 36 mm x 56 mm, and a control box handles signal and power control between the sensor and the computer. The NCDS achieves micron accuracy over a +/-30 μm working range, with a working distance constrained to a few millimeters. We also demonstrate the application of the NCDS to lens centration error measurement, which is similar to measuring the total indicator runout (TIR) or edge thickness difference (ETD) of a lens with a contact dial indicator. This application is advantageous for measuring lenses made of soft materials that would be scratched by a contact dial indicator.

  1. Design and application of location error teaching aids in measuring and visualization

    Directory of Open Access Journals (Sweden)

    Yu Fengning

    2015-01-01

Full Text Available As an abstract concept, 'location error' is considered an important element that is difficult to understand and apply. The paper designs and develops an instrument to measure location error. Location error is affected by the positioning method and the reference selection, so the positioning element is chosen by rotating the disk. The tiny movement is transferred by a grating ruler and, through PLC programming, the error is shown on a text display, which also helps students understand the positioning principle and related concepts of location error. After comparing measurement results with theoretical calculations and analyzing the measurement accuracy, the paper concludes that the teaching aid is reliable and well worth promoting.

  2. Errors in measuring transverse and energy jitter by beam position monitors

    Energy Technology Data Exchange (ETDEWEB)

    Balandin, V.; Decking, W.; Golubeva, N.

    2010-02-15

The problem of errors in difference orbit parameters, arising due to finite BPM resolution when the parameters are found as a least squares fit to the BPM data, is one of the standard and important problems of accelerator physics. Even though for the case of transversely uncoupled motion the covariance matrix of reconstruction errors can be calculated ''by hand'', the direct use of the obtained solution as a tool for designing a ''good measurement system'' is not entirely straightforward. A better understanding of the nature of the problem is still desirable. We make a step in this direction by introducing dynamics into a problem that at first glance seems static. We consider a virtual beam consisting of virtual particles obtained by applying the reconstruction procedure to ''all possible values'' of the BPM reading errors. This beam propagates along the beam line according to the same rules as any real beam and has all beam-dynamical characteristics, such as emittances, energy spread, dispersions, betatron functions, etc. All these values become properties of the BPM measurement system. One can compare two BPM systems by comparing their error emittances and rms error energy spreads, or, for a given measurement system, one can achieve the needed balance between coordinate and momentum reconstruction errors by matching the error betatron functions at the point of interest to the desired values. (orig.)

  3. Estimation of heading gyrocompass error using a GPS 3DF system: Impact on ADCP measurements

    Directory of Open Access Journals (Sweden)

    Simón Ruiz

    2002-12-01

    Full Text Available Traditionally the horizontal orientation in a ship (heading has been obtained from a gyrocompass. This instrument is still used on research vessels but has an estimated error of about 2-3 degrees, inducing a systematic error in the cross-track velocity measured by an Acoustic Doppler Current Profiler (ADCP. The three-dimensional positioning system (GPS 3DF provides an independent heading measurement with accuracy better than 0.1 degree. The Spanish research vessel BIO Hespérides has been operating with this new system since 1996. For the first time on this vessel, the data from this new instrument are used to estimate gyrocompass error. The methodology we use follows the scheme developed by Griffiths (1994, which compares data from the gyrocompass and the GPS system in order to obtain an interpolated error function. In the present work we apply this methodology on mesoscale surveys performed during the observational phase of the OMEGA project, in the Alboran Sea. The heading-dependent gyrocompass error dominated. Errors in gyrocompass heading of 1.4-3.4 degrees have been found, which give a maximum error in measured cross-track ADCP velocity of 24 cm s-1.
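The reported magnitude is consistent with simple geometry: a heading error Δθ rotates the measured velocity vector, leaking roughly v·sin(Δθ) of the ship speed into the cross-track component. A quick sanity check (the 4 m/s ship speed is an assumption, not stated in the record):

```python
import math

ship_speed = 4.0         # m/s, assumed typical survey speed (not from the record)
heading_error_deg = 3.4  # upper end of the reported 1.4-3.4 degree range

# A heading error rotates the velocity vector; the cross-track leakage is v*sin(dtheta)
cross_track_error = ship_speed * math.sin(math.radians(heading_error_deg))
print(f"{cross_track_error * 100:.0f} cm/s")  # prints "24 cm/s", matching the record
```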

  4. Errors in measuring transverse and energy jitter by beam position monitors

    International Nuclear Information System (INIS)

    Balandin, V.; Decking, W.; Golubeva, N.

    2010-02-01

The problem of errors in difference orbit parameters, arising due to finite BPM resolution when the parameters are found as a least squares fit to the BPM data, is one of the standard and important problems of accelerator physics. Even though for the case of transversely uncoupled motion the covariance matrix of reconstruction errors can be calculated ''by hand'', the direct use of the obtained solution as a tool for designing a ''good measurement system'' is not entirely straightforward. A better understanding of the nature of the problem is still desirable. We make a step in this direction by introducing dynamics into a problem that at first glance seems static. We consider a virtual beam consisting of virtual particles obtained by applying the reconstruction procedure to ''all possible values'' of the BPM reading errors. This beam propagates along the beam line according to the same rules as any real beam and has all beam-dynamical characteristics, such as emittances, energy spread, dispersions, betatron functions, etc. All these values become properties of the BPM measurement system. One can compare two BPM systems by comparing their error emittances and rms error energy spreads, or, for a given measurement system, one can achieve the needed balance between coordinate and momentum reconstruction errors by matching the error betatron functions at the point of interest to the desired values. (orig.)

  5. High-temperature absorbed dose measurements in the megagray range

    International Nuclear Information System (INIS)

    Balian, P.; Ardonceau, J.; Zuppiroli, L.

    1988-01-01

Organic conductors of the tetraselenotetracene family have been tested as ''high-temperature'' absorbed dose dosimeters. They were heated up to 120 °C and irradiated at this temperature with 1-MeV electrons in order to simulate, in a short time, a much longer γ-ray irradiation. The electric resistance increase of the crystal can be considered a good measurement of the absorbed dose in the range 10^6 Gy to a few 10^8 Gy, and presumably one order of magnitude more. This dosimeter also permits on-line (in-situ) measurements of the absorbed dose without removing the sensor from the irradiation site. The respective advantages of organic and inorganic dosimeters at these temperature and dose ranges are also discussed. In this connection, we outline new, but negative, results concerning the possible use of silica as a high-temperature, high-dose dosimeter. (author)

  6. Getting satisfied with "satisfaction of search": How to measure errors during multiple-target visual search.

    Science.gov (United States)

    Biggs, Adam T

    2017-07-01

    Visual search studies are common in cognitive psychology, and the results generally focus upon accuracy, response times, or both. Most research has focused upon search scenarios where no more than 1 target will be present for any single trial. However, if multiple targets can be present on a single trial, it introduces an additional source of error because the found target can interfere with subsequent search performance. These errors have been studied thoroughly in radiology for decades, although their emphasis in cognitive psychology studies has been more recent. One particular issue with multiple-target search is that these subsequent search errors (i.e., specific errors which occur following a found target) are measured differently by different studies. There is currently no guidance as to which measurement method is best or what impact different measurement methods could have upon various results and conclusions. The current investigation provides two efforts to address these issues. First, the existing literature is reviewed to clarify the appropriate scenarios where subsequent search errors could be observed. Second, several different measurement methods are used with several existing datasets to contrast and compare how each method would have affected the results and conclusions of those studies. The evidence is then used to provide appropriate guidelines for measuring multiple-target search errors in future studies.
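The measurement-method issue the abstract raises can be illustrated with a toy dataset (the trial structure and both definitions are hypothetical, not the ones compared in the study): changing the denominator used for subsequent search errors changes the reported error rate.

```python
# Each trial: (targets_present, first_target_found, second_target_found)
trials = [
    (2, True, True),
    (2, True, False),   # subsequent search error: second target missed
    (2, True, False),
    (1, True, None),    # single-target trial: no subsequent error possible
    (2, False, False),  # first target missed: excluded under some definitions
]

# Method A: misses among dual-target trials where the first target was found
eligible = [t for t in trials if t[0] == 2 and t[1]]
rate_a = sum(1 for t in eligible if not t[2]) / len(eligible)

# Method B: second-target misses across all dual-target trials
dual = [t for t in trials if t[0] == 2]
rate_b = sum(1 for t in dual if not t[2]) / len(dual)

print(rate_a, rate_b)  # ~0.67 vs 0.75 -- the definition changes the number
```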

  7. Correcting for multivariate measurement error by regression calibration in meta-analyses of epidemiological studies.

    NARCIS (Netherlands)

    Kromhout, D.

    2009-01-01

    Within-person variability in measured values of multiple risk factors can bias their associations with disease. The multivariate regression calibration (RC) approach can correct for such measurement error and has been applied to studies in which true values or independent repeat measurements of the

  8. A first look at measurement error on FIA plots using blind plots in the Pacific Northwest

    Science.gov (United States)

    Susanna Melson; David Azuma; Jeremy S. Fried

    2002-01-01

    Measurement error in the Forest Inventory and Analysis work of the Pacific Northwest Station was estimated with a recently implemented blind plot measurement protocol. A small subset of plots was revisited by a crew having limited knowledge of the first crew's measurements. This preliminary analysis of the first 18 months' blind plot data indicates that...

  9. Experimental validation of error in temperature measurements in thin walled ductile iron castings

    DEFF Research Database (Denmark)

    Pedersen, Karl Martin; Tiedje, Niels Skat

    2007-01-01

An experimental analysis has been performed to validate the measurement error of cooling curves measured in thin-walled ductile cast iron. Specially designed thermocouples with Ø0.2 mm thermocouple wire in a Ø1.6 mm ceramic tube were used for the experiments. Temperatures were measured in plates...

  10. Adjustment of Measurements with Multiplicative Errors: Error Analysis, Estimates of the Variance of Unit Weight, and Effect on Volume Estimation from LiDAR-Type Digital Elevation Models

    Directory of Open Access Journals (Sweden)

    Yun Shi

    2014-01-01

Full Text Available Modern observation technology has verified that measurement errors can be proportional to the true values of measurements, as in GPS, VLBI baselines and LiDAR. Observational models of this type are called multiplicative error models. This paper extends the work of Xu and Shimada (2000) on multiplicative error models to the analytical error analysis of quantities of practical interest and to estimates of the variance of unit weight. We analytically derive the variance-covariance matrices of the three least squares (LS) adjustments, the adjusted measurements and the corrections of measurements in multiplicative error models. For quality evaluation, we construct five estimators for the variance of unit weight in association with the three LS adjustment methods. Although LiDAR measurements are contaminated with multiplicative random errors, LiDAR-based digital elevation models (DEMs) have been constructed as if the errors were additive. We simulate a model landslide, which is assumed to be surveyed with LiDAR, and investigate the effect of LiDAR-type multiplicative measurement errors on DEM construction and its effect on the estimate of landslide mass volume from the constructed DEM.
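A minimal sketch of the multiplicative error model (the 2% relative error is an arbitrary choice, not a value from the paper): unlike an additive model, the absolute scatter scales with the true value.

```python
import random
import statistics

random.seed(1)

REL_SD = 0.02  # 2% multiplicative error, proportional to the true value

def measure(true_value):
    # multiplicative model: the error scales with the quantity being measured
    return true_value * (1.0 + random.gauss(0, REL_SD))

# The absolute scatter grows with the true value -- unlike an additive model
sds = []
for truth in (10.0, 100.0, 1000.0):
    sds.append(statistics.stdev(measure(truth) for _ in range(20_000)))
print([round(s, 2) for s in sds])  # roughly 0.2, 2, 20: scatter ~ 2% of truth
```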

  11. Continuous glucose monitoring in newborn infants: how do errors in calibration measurements affect detected hypoglycemia?

    Science.gov (United States)

    Thomas, Felicity; Signal, Mathew; Harris, Deborah L; Weston, Philip J; Harding, Jane E; Shaw, Geoffrey M; Chase, J Geoffrey

    2014-05-01

Neonatal hypoglycemia is common and can cause serious brain injury. Continuous glucose monitoring (CGM) could improve hypoglycemia detection while reducing the number of blood glucose (BG) measurements. Calibration algorithms use BG measurements to convert sensor signals into CGM data. Thus, inaccuracies in calibration BG measurements directly affect CGM values and any metrics calculated from them. The aim was to quantify the effect of timing delays and calibration BG measurement errors on hypoglycemia metrics in newborn infants. Data from 155 babies were used. Two timing and 3 BG meter error models (Abbott Optium Xceed, Roche Accu-Chek Inform II, Nova Statstrip) were created using empirical data. Monte-Carlo methods were employed, and each simulation was run 1000 times. In each simulation, each set of patient data had randomly selected timing and/or measurement error added to the BG measurements before the CGM data were calibrated. The number of hypoglycemic events, duration of hypoglycemia, and hypoglycemic index were then calculated using the CGM data and compared to baseline values. Timing error alone had little effect on hypoglycemia metrics, but measurement error caused substantial variation. Abbott results underreported the number of hypoglycemic events by up to 8 and Roche overreported by up to 4 where the original number reported was 2. Nova results were closest to baseline. Similar trends were observed in the other hypoglycemia metrics. Errors in the blood glucose concentration measurements used for calibration of CGM devices can have a clinically important impact on detection of hypoglycemia. If CGM devices are going to be used for assessing hypoglycemia, it is important to understand the impact of these errors on CGM data. © 2014 Diabetes Technology Society.
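A stripped-down version of the Monte-Carlo idea (the trace, threshold, one-point proportional recalibration, and meter error model are all illustrative assumptions, not the study's data or algorithm): perturbing a single calibration BG rescales the whole CGM trace, so the count of detected hypoglycemic samples varies from run to run.

```python
import random

random.seed(7)

THRESHOLD = 2.6  # mmol/L, a commonly used neonatal hypoglycemia threshold

# Hypothetical "true" glucose trace (mmol/L) and a single calibration point
true_trace = [3.0, 2.8, 2.5, 2.4, 2.7, 3.1, 2.5, 2.3, 2.9, 3.2]
calib_true = 3.0

def hypo_samples(trace):
    # count samples below threshold (each sample stands in for one interval)
    return sum(1 for g in trace if g < THRESHOLD)

baseline = hypo_samples(true_trace)

# Monte Carlo: perturb the calibration BG and rescale the whole trace,
# mimicking how a one-point calibration propagates its error to all CGM values
results = []
for _ in range(1000):
    calib_measured = calib_true + random.gauss(0, 0.2)  # meter error, sd assumed
    scale = calib_measured / calib_true
    results.append(hypo_samples([g * scale for g in true_trace]))

print(baseline, min(results), max(results))  # detected hypoglycemia varies widely
```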

  12. An overview of Broadband Laser Ranging Architecture and Measurement Considerations

    Science.gov (United States)

    Daykin, Edward; La Lone, Brandon; Miller, Edward; Younk, Patrick; Bennett, Corey; Catenacci, Jared; LLNL BLR Development Group Collaboration; LANL BLR Development Group Collaboration

    2017-06-01

Broadband Laser Ranging (BLR) is a developmental diagnostic intended to measure the position of rapidly moving surfaces in combination with optical velocimetry. Designing and employing a BLR diagnostic on dynamic experiments requires consideration of both the inherent measurement-system tradeoffs and the architectural choices appropriate to the nature of the investigation. The diagnostic uses spectral interferometry to measure distance by mapping femtosecond laser pulses to the time domain via chromatic dispersion within the fiber-optic architecture. The system parameters and governing equations that describe measurement range, resolution, and Doppler sensitivity will be discussed. We will also briefly review the impact of diagnostic architectural choices, including the nature of the interferometer, interferometric dispersion matching, optical amplification, integration of optical velocimetry, BLR calibration, and field operability. To summarize, we will present the architectural and operational approach currently being pursued by NSTec within an on-going collaboration between NSTec, Lawrence Livermore and Los Alamos National Labs. This work was done by National Security Technologies, LLC, under Contract No. DE-AC52-06NA25946 with the U.S. Department of Energy.

  13. Formulation of uncertainty relation of error and disturbance in quantum measurement by using quantum estimation theory

    International Nuclear Information System (INIS)

    Yu Watanabe; Masahito Ueda

    2012-01-01

Full text: When we try to obtain information about a quantum system, we need to perform a measurement on the system. The measurement process causes an unavoidable state change. Heisenberg discussed a thought experiment on the position measurement of a particle using a gamma-ray microscope, and found a trade-off relation between the error of the measured position and the disturbance in the momentum caused by the measurement process. The trade-off relation epitomizes the complementarity in quantum measurements: we cannot perform a measurement of an observable without causing disturbance in its canonically conjugate observable. However, at the time Heisenberg found the complementarity, quantum measurement theory was not yet established, and Kennard and Robertson's inequality was erroneously interpreted as a mathematical formulation of the complementarity. Kennard and Robertson's inequality actually expresses the indeterminacy of the quantum state: non-commuting observables cannot have definite values simultaneously. It reflects the inherent nature of a quantum state alone, and does not concern any trade-off relation between the error and disturbance in the measurement process. In this talk, we report a resolution to the complementarity in quantum measurements. First, we find that it is necessary to involve the estimation process from the outcome of the measurement for quantifying the error and disturbance in the quantum measurement. We clarify the implicitly involved estimation process in Heisenberg's gamma-ray microscope and other measurement schemes, and formulate the error and disturbance for an arbitrary quantum measurement by using quantum estimation theory. The error and disturbance are defined in terms of the Fisher information, which gives the upper bound of the accuracy of the estimation.
Second, we obtain uncertainty relations between the measurement errors of two observables [1], and between the error and disturbance in the

  14. A new accuracy measure based on bounded relative error for time series forecasting.

    Science.gov (United States)

    Chen, Chao; Twycross, Jamie; Garibaldi, Jonathan M

    2017-01-01

    Many accuracy measures have been proposed in the past for time series forecasting comparisons. However, many of these measures suffer from one or more issues such as poor resistance to outliers and scale dependence. In this paper, while summarising commonly used accuracy measures, a special review is made on the symmetric mean absolute percentage error. Moreover, a new accuracy measure called the Unscaled Mean Bounded Relative Absolute Error (UMBRAE), which combines the best features of various alternative measures, is proposed to address the common issues of existing measures. A comparative evaluation on the proposed and related measures has been made with both synthetic and real-world data. The results indicate that the proposed measure, with user selectable benchmark, performs as well as or better than other measures on selected criteria. Though it has been commonly accepted that there is no single best accuracy measure, we suggest that UMBRAE could be a good choice to evaluate forecasting methods, especially for cases where measures based on geometric mean of relative errors, such as the geometric mean relative absolute error, are preferred.
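A sketch of UMBRAE as described by Chen et al. (the 0.5 convention for the zero-error tie case is an assumption of this sketch): each forecast error is bounded against a benchmark forecast's error, the bounded errors are averaged, and the mean is then unscaled so that values below 1 indicate the method beats the benchmark on average.

```python
def umbrae(actual, forecast, benchmark):
    """Unscaled Mean Bounded Relative Absolute Error (after Chen et al., 2017).

    Each error is bounded by comparing it to a benchmark forecast's error,
    then the mean bounded error is unscaled back to an interpretable ratio:
    UMBRAE < 1 means the method beats the benchmark on average.
    """
    braes = []
    for a, f, b in zip(actual, forecast, benchmark):
        e, e_star = abs(a - f), abs(a - b)
        # bounded relative absolute error lies in [0, 1]
        braes.append(e / (e + e_star) if (e + e_star) > 0 else 0.5)
    mbrae = sum(braes) / len(braes)
    return mbrae / (1.0 - mbrae)

# Toy check: a forecast with half the benchmark's error on every point
actual = [10.0, 12.0, 11.0, 13.0]
benchmark = [a + 2.0 for a in actual]  # naive-style benchmark, always off by 2
forecast = [a + 1.0 for a in actual]   # always off by 1
print(umbrae(actual, forecast, benchmark))  # 0.5: half the benchmark's error
```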

  15. Positive phase error from parallel conductance in tetrapolar bio-impedance measurements and its compensation

    Directory of Open Access Journals (Sweden)

    Ivan M Roitt

    2010-01-01

    Full Text Available Bioimpedance measurements are of great use and can provide considerable insight into biological processes.  However, there are a number of possible sources of measurement error that must be considered.  The most dominant source of error is found in bipolar measurements where electrode polarisation effects are superimposed on the true impedance of the sample.  Even with the tetrapolar approach that is commonly used to circumvent this issue, other errors can persist. Here we characterise the positive phase and rise in impedance magnitude with frequency that can result from the presence of any parallel conductive pathways in the measurement set-up.  It is shown that fitting experimental data to an equivalent electrical circuit model allows for accurate determination of the true sample impedance as validated through finite element modelling (FEM of the measurement chamber.  Finally, the model is used to extract dispersion information from cell cultures to characterise their growth.

  16. Efficacy of Yumeiho therapy massage on repositioning error, trunk flexion range of motion, and functional power in women volleyball players with hyperlordosis

    Directory of Open Access Journals (Sweden)

    Yousef yarahmadi

    2018-03-01

    Conclusion: The results showed that Yumeiho therapy massage had a significant effect on repositioning error, trunk flexion range of motion, and functional power. It is recommended that therapists include Yumeiho therapy massage in order to enhance these variables.

  17. MTF measurement of IR optics in different temperature ranges

    Science.gov (United States)

    Bai, Alexander; Duncker, Hannes; Dumitrescu, Eugen

    2017-10-01

Infrared (IR) optical systems are at the core of many military, civilian and manufacturing applications and perform mission-critical functions. To reliably fulfill the demanding requirements imposed on today's high-performance IR optics, highly accurate, reproducible and fast lens testing is of crucial importance. Testing the optical performance within different temperature ranges is key in many military applications. Due to highly complex IR applications in the aerospace, military and automotive industries, MTF measurement under realistic environmental conditions becomes more and more relevant. A Modulation Transfer Function (MTF) test bench with an integrated thermal chamber allows measuring several sample sizes in a temperature range from -40 °C to +120 °C. To reach reliable measurement results under these difficult conditions, a specially developed temperature-stable design including an insulating vacuum is used. The main function of this instrument is the measurement of the MTF both on- and off-axis at up to +/-70° field angle, as well as measurement of effective focal length, flange focal length and distortion. The vertical configuration of the system guarantees a small overall footprint. By integrating a high-resolution IR camera with a focal plane array (FPA) in the detection unit, time-consuming measurement procedures such as scanning slit with liquid-nitrogen-cooled detectors can be avoided. The specified absolute accuracy of +/-3% MTF is validated using internationally traceable reference optics. Together with a complete and intuitive software solution, this makes the instrument a turn-key device for today's state-of-the-art optical testing.

  18. Active and passive compensation of APPLE II-introduced multipole errors through beam-based measurement

    Energy Technology Data Exchange (ETDEWEB)

    Chung, Ting-Yi; Huang, Szu-Jung; Fu, Huang-Wen; Chang, Ho-Ping; Chang, Cheng-Hsiang [National Synchrotron Radiation Research Center, Hsinchu Science Park, Hsinchu 30076, Taiwan (China); Hwang, Ching-Shiang [National Synchrotron Radiation Research Center, Hsinchu Science Park, Hsinchu 30076, Taiwan (China); Department of Electrophysics, National Chiao Tung University, Hsinchu 30050, Taiwan (China)

    2016-08-01

The effects of an APPLE II-type elliptically polarized undulator (EPU) on the beam dynamics were investigated using active and passive methods. To reduce the tune shift and improve the injection efficiency, dynamic multipole errors were compensated using L-shaped iron shims, which resulted in stable top-up operation for a minimum gap. The skew quadrupole error was compensated using a multipole corrector located downstream of the EPU to minimize betatron coupling, ensuring enhancement of the synchrotron radiation brightness. The investigation methods, a numerical simulation algorithm, a multipole error correction method, and the beam-based measurement results are discussed.

  19. An Observability Metric for Underwater Vehicle Localization Using Range Measurements

    Directory of Open Access Journals (Sweden)

    Filippo Arrichiello

    2013-11-01

    Full Text Available The paper addresses observability issues related to the general problem of single and multiple Autonomous Underwater Vehicle (AUV localization using only range measurements. While an AUV is submerged, localization devices, such as Global Navigation Satellite Systems, are ineffective, due to the attenuation of electromagnetic waves. AUV localization based on dead reckoning techniques and the use of affordable motion sensor units is also not practical, due to divergence caused by sensor bias and drift. For these reasons, localization systems often build on trilateration algorithms that rely on the measurements of the ranges between an AUV and a set of fixed transponders using acoustic devices. Still, such solutions are often expensive, require cumbersome calibration procedures and only allow for AUV localization in an area that is defined by the geometrical arrangement of the transponders. A viable alternative for AUV localization that has recently come to the fore exploits the use of complementary information on the distance from the AUV to a single transponder, together with information provided by on-board resident motion sensors, such as, for example, depth, velocity and acceleration measurements. This concept can be extended to address the problem of relative localization between two AUVs equipped with acoustic sensors for inter-vehicle range measurements. Motivated by these developments, in this paper, we show that both the problems of absolute localization of a single vehicle and the relative localization of multiple vehicles can be treated using the same mathematical framework, and tailoring concepts of observability derived for nonlinear systems, we analyze how the performance in localization depends on the types of motion imparted to the AUVs. For this effect, we propose a well-defined observability metric and validate its usefulness, both in simulation and by carrying out experimental tests with a real marine vehicle during which the

  20. An in-process form error measurement system for precision machining

    International Nuclear Information System (INIS)

    Gao, Y; Huang, X; Zhang, Y

    2010-01-01

In-process form error measurement for precision machining is studied. Due to two key problems, an opaque barrier and vibration, in-process optical measurement of form error for precision machining has been a hard topic, and so far very few existing research works can be found. In this project, an in-process form error measurement device is proposed to deal with the two key problems. Based on our existing studies, a prototype system has been developed. It is the first of its kind to overcome the two key problems. The prototype is based on a single laser sensor design of 50 nm resolution together with two techniques, a damping technique and a moving average technique, proposed for use with the device. The proposed damping technique is able to improve vibration attenuation by up to 21 times compared to the case of natural attenuation. The proposed moving average technique is able to reduce errors by seven to ten times without distortion of the form profile results. The two proposed techniques are simple, but they are especially useful for the proposed device. For a workpiece sample, the measurement result under the coolant condition is only 2.5% larger than that under the no-coolant condition. For a certified Wyko test sample, the overall system measurement error can be as low as 0.3 µm. The measurement repeatability error can be as low as 2.2%. The experimental results give confidence in using the proposed in-process form error measurement device. For better results, further improvements in design and testing are necessary.
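The error reduction from a moving average is easy to reproduce in a sketch (the signal, noise level and window size are illustrative, not the paper's): a centered running mean suppresses random sensor noise by roughly the square root of the window length while leaving a slowly varying form profile largely intact.

```python
import random
import statistics

random.seed(3)

def moving_average(signal, window):
    # centered running mean; shrinks random noise by roughly sqrt(window)
    half = window // 2
    return [statistics.fmean(signal[max(0, i - half): i + half + 1])
            for i in range(len(signal))]

# Slowly varying "form profile" plus random sensor noise
true_form = [0.001 * i for i in range(2000)]  # micrometres, a gentle slope
noisy = [f + random.gauss(0, 0.05) for f in true_form]

smoothed = moving_average(noisy, 81)
raw_err = statistics.stdev(n - f for n, f in zip(noisy, true_form))
avg_err = statistics.stdev(s - f for s, f in zip(smoothed, true_form))
print(round(raw_err / avg_err, 1))  # noise suppressed several-fold
```

Note the tradeoff hinted at in the abstract: a longer window suppresses more noise but risks distorting sharper form features, so window length must be chosen against the profile's spatial bandwidth.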

  1. Measuring nuclear-spin-dependent parity violation with molecules: Experimental methods and analysis of systematic errors

    Science.gov (United States)

    Altuntaş, Emine; Ammon, Jeffrey; Cahn, Sidney B.; DeMille, David

    2018-04-01

    Nuclear-spin-dependent parity violation (NSD-PV) effects in atoms and molecules arise from Z0 boson exchange between electrons and the nucleus and from the magnetic interaction between electrons and the parity-violating nuclear anapole moment. It has been proposed to study NSD-PV effects using an enhancement of the observable effect in diatomic molecules [D. DeMille et al., Phys. Rev. Lett. 100, 023003 (2008), 10.1103/PhysRevLett.100.023003]. Here we demonstrate highly sensitive measurements of this type, using the test system 138Ba19F. We show that systematic errors associated with our technique can be suppressed to at least the level of the present statistical sensitivity. With ˜170 h of data, we measure the matrix element W of the NSD-PV interaction with uncertainty δ W /(2 π )<0.7 Hz for each of two configurations where W must have different signs. This sensitivity would be sufficient to measure NSD-PV effects of the size anticipated across a wide range of nuclei.

  2. An extended set-value observer for position estimation using single range measurements

    DEFF Research Database (Denmark)

    Marcal, Jose; Jouffroy, Jerome; Fossen, Thor I.

    The ability of estimating the position of an underwater vehicle from single range measurements is important in applications where one transducer marks an important geographical point, when there is a limitation in the size or cost of the vehicle, or when there is a failure in a system of transponders. The knowledge of the bearing of the vehicle and the range measurements from a single location can provide a solution which is sensitive to the trajectory that the vehicle is following, since there is no complete constraint on the position estimate with a single beacon. In this paper the observability of the system is briefly discussed and an extended set-valued observer is presented, with some discussion of the effect of measurement noise on the final solution. This observer estimates bounds on the errors, assuming that the exogenous signals are bounded, providing a safe region...

  3. Bayesian Semiparametric Density Deconvolution in the Presence of Conditionally Heteroscedastic Measurement Errors

    KAUST Repository

    Sarkar, Abhra

    2014-10-02

    We consider the problem of estimating the density of a random variable when precise measurements on the variable are not available, but replicated proxies contaminated with measurement error are available for sufficiently many subjects. Under the assumption of additive measurement errors this reduces to a problem of deconvolution of densities. Deconvolution methods often make restrictive and unrealistic assumptions about the density of interest and the distribution of measurement errors, e.g., normality and homoscedasticity and thus independence from the variable of interest. This article relaxes these assumptions and introduces novel Bayesian semiparametric methodology based on Dirichlet process mixture models for robust deconvolution of densities in the presence of conditionally heteroscedastic measurement errors. In particular, the models can adapt to asymmetry, heavy tails and multimodality. In simulation experiments, we show that our methods vastly outperform a recent Bayesian approach based on estimating the densities via mixtures of splines. We apply our methods to data from nutritional epidemiology. Even in the special case when the measurement errors are homoscedastic, our methodology is novel and dominates other methods that have been proposed previously. Additional simulation results, instructions on getting access to the data set and R programs implementing our methods are included as part of online supplemental materials.

  4. Tests for detecting overdispersion in models with measurement error in covariates.

    Science.gov (United States)

    Yang, Yingsi; Wong, Man Yu

    2015-11-30

    Measurement error in covariates can affect the accuracy in count data modeling and analysis. In overdispersion identification, the true mean-variance relationship can be obscured under the influence of measurement error in covariates. In this paper, we propose three tests for detecting overdispersion when covariates are measured with error: a modified score test and two score tests based on the proposed approximate likelihood and quasi-likelihood, respectively. The proposed approximate likelihood is derived under the classical measurement error model, and the resulting approximate maximum likelihood estimator is shown to have superior efficiency. Simulation results also show that the score test based on approximate likelihood outperforms the test based on quasi-likelihood and other alternatives in terms of empirical power. By analyzing a real dataset containing the health-related quality-of-life measurements of a particular group of patients, we demonstrate the importance of the proposed methods by showing that the analyses with and without measurement error correction yield significantly different results. Copyright © 2015 John Wiley & Sons, Ltd.

  5. Using Generalizability Theory to Disattenuate Correlation Coefficients for Multiple Sources of Measurement Error.

    Science.gov (United States)

    Vispoel, Walter P; Morris, Carrie A; Kilinc, Murat

    2018-05-02

    Over the years, research in the social sciences has been dominated by reporting of reliability coefficients that fail to account for key sources of measurement error. Use of these coefficients, in turn, to correct for measurement error can hinder scientific progress by misrepresenting true relationships among the underlying constructs being investigated. In the research reported here, we addressed these issues using generalizability theory (G-theory) in both traditional and new ways to account for the three key sources of measurement error (random-response, specific-factor, and transient) that affect scores from objectively scored measures. Results from 20 widely used measures of personality, self-concept, and socially desirable responding showed that conventional indices consistently misrepresented reliability and relationships among psychological constructs by failing to account for key sources of measurement error and correlated transient errors within occasions. The results further revealed that G-theory served as an effective framework for remedying these problems. We discuss possible extensions in future research and provide code from the computer package R in an online supplement to enable readers to apply the procedures we demonstrate to their own research.
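    The basic correction the abstract builds on is the classical (Spearman) disattenuation formula, which G-theory generalizes by replacing single reliability coefficients with generalizability coefficients that partition multiple error sources. A minimal sketch with illustrative numbers:

```python
import math

def disattenuate(r_obs, rel_x, rel_y):
    """Classical correction for attenuation: estimate the correlation
    between true scores from the observed correlation and the two
    score reliabilities (or, in G-theory, generalizability coefficients)."""
    return r_obs / math.sqrt(rel_x * rel_y)

# Observed correlation 0.42 with reliabilities 0.80 and 0.70
r_true = disattenuate(0.42, 0.80, 0.70)
print(round(r_true, 3))  # 0.561
```

    The point of the article is that if the reliabilities fed into this formula omit specific-factor or transient error, the corrected correlation misrepresents the true relationship.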

  6. Bayesian Semiparametric Density Deconvolution in the Presence of Conditionally Heteroscedastic Measurement Errors

    KAUST Repository

    Sarkar, Abhra; Mallick, Bani K.; Staudenmayer, John; Pati, Debdeep; Carroll, Raymond J.

    2014-01-01

    We consider the problem of estimating the density of a random variable when precise measurements on the variable are not available, but replicated proxies contaminated with measurement error are available for sufficiently many subjects. Under the assumption of additive measurement errors this reduces to a problem of deconvolution of densities. Deconvolution methods often make restrictive and unrealistic assumptions about the density of interest and the distribution of measurement errors, e.g., normality and homoscedasticity and thus independence from the variable of interest. This article relaxes these assumptions and introduces novel Bayesian semiparametric methodology based on Dirichlet process mixture models for robust deconvolution of densities in the presence of conditionally heteroscedastic measurement errors. In particular, the models can adapt to asymmetry, heavy tails and multimodality. In simulation experiments, we show that our methods vastly outperform a recent Bayesian approach based on estimating the densities via mixtures of splines. We apply our methods to data from nutritional epidemiology. Even in the special case when the measurement errors are homoscedastic, our methodology is novel and dominates other methods that have been proposed previously. Additional simulation results, instructions on getting access to the data set and R programs implementing our methods are included as part of online supplemental materials.

  7. Linear and nonlinear magnetic error measurements using action and phase jump analysis

    Directory of Open Access Journals (Sweden)

    Javier F. Cardona

    2009-01-01

    Full Text Available “Action and phase jump” analysis is presented: a beam-based method that uses amplitude and phase knowledge of a particle trajectory to locate and measure magnetic errors in an accelerator lattice. The expected performance of the method is first tested using single-particle simulations in the optical lattice of the Relativistic Heavy Ion Collider (RHIC). Such simulations predict that under ideal conditions typical quadrupole errors can be estimated within an uncertainty of 0.04%. Other simulations suggest that sextupole errors can be estimated within a 3% uncertainty. The action and phase jump analysis is then applied to real RHIC orbits with known quadrupole errors, and to real Super Proton Synchrotron (SPS) orbits with known sextupole errors. It is possible to estimate the strength of a skew quadrupole error from measured RHIC orbits within a 1.2% uncertainty, and the strength of a strong sextupole component from the measured SPS orbits within a 7% uncertainty.

  8. Consequences of exposure measurement error for confounder identification in environmental epidemiology

    DEFF Research Database (Denmark)

    Budtz-Jørgensen, Esben; Keiding, Niels; Grandjean, Philippe

    2003-01-01

    Non-differential measurement error in the exposure variable is known to attenuate the dose-response relationship. The amount of attenuation introduced in a given situation is not only a function of the precision of the exposure measurement but also depends on the conditional variance of the true...... exposure given the other independent variables. In addition, confounder effects may also be affected by the exposure measurement error. These difficulties in statistical model development are illustrated by examples from an epidemiological study performed in the Faroe Islands to investigate the adverse...
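    In the simplest case the attenuation has a closed form: with classical additive error, the regression slope is multiplied by the reliability ratio lambda = Var(X) / (Var(X) + Var(U)). A small simulation with illustrative values (not the Faroe Islands data) shows the effect:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
x = rng.normal(0.0, 1.0, n)        # true exposure, Var(X) = 1
u = rng.normal(0.0, 0.5, n)        # classical measurement error, Var(U) = 0.25
w = x + u                          # observed (error-prone) exposure
y = 2.0 * x + rng.normal(0.0, 1.0, n)

# OLS slope of y on w is attenuated by lambda = Var(X) / (Var(X) + Var(U))
slope = np.cov(w, y)[0, 1] / np.var(w, ddof=1)   # close to 1.6, not the true 2.0
lam = 1.0 / (1.0 + 0.25)
print(round(2.0 * lam, 2))  # 1.6: the predicted attenuated slope
```

    The abstract's further point is that when other covariates are present, the relevant quantity is the conditional variance of the true exposure given those covariates, so confounder coefficients are distorted as well.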

  9. MEASUREMENT ERROR EFFECT ON THE POWER OF CONTROL CHART FOR ZERO-TRUNCATED POISSON DISTRIBUTION

    Directory of Open Access Journals (Sweden)

    Ashit Chakraborty

    2013-09-01

    Full Text Available Measurement error is the difference between the true value and the measured value of a quantity; it exists in practice and may considerably affect the performance of control charts in some cases. Measurement error variability has uncertainty which can come from several sources. In this paper, we have studied the effect of these sources of variability on the power characteristics of the control chart and obtained the values of the average run length (ARL) for the zero-truncated Poisson distribution (ZTPD). An expression for the power of the control chart for variable sample size under a standardized normal variate for the ZTPD is also derived.
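    The link between a chart's per-sample signal probability p and its average run length is ARL = 1/p. As a hedged normal-approximation illustration (not the paper's ZTPD derivation): if control limits are set at plus or minus k sigma of the error-free statistic but measurement error adds extra variance, the in-control ARL drops, i.e., false alarms become more frequent.

```python
import math

def signal_prob(k=3.0, me_ratio=0.0):
    """In-control false-alarm probability for +-k sigma limits computed on
    the error-free statistic, when measurement error adds me_ratio * sigma^2
    of extra variance (normal approximation)."""
    z = k / math.sqrt(1.0 + me_ratio)
    return math.erfc(z / math.sqrt(2.0))

# ARL = 1 / p: the textbook 3-sigma in-control ARL is about 370.4,
# and it shrinks as the measurement-error share of the variance grows
for r in (0.0, 0.5, 1.0):
    print(round(1.0 / signal_prob(3.0, r), 1))
```

    The same mechanism also shifts the out-of-control power curve, which is what the paper quantifies for the zero-truncated Poisson case.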

  10. Dynamic Modeling Accuracy Dependence on Errors in Sensor Measurements, Mass Properties, and Aircraft Geometry

    Science.gov (United States)

    Grauer, Jared A.; Morelli, Eugene A.

    2013-01-01

    A nonlinear simulation of the NASA Generic Transport Model was used to investigate the effects of errors in sensor measurements, mass properties, and aircraft geometry on the accuracy of dynamic models identified from flight data. Measurements from a typical system identification maneuver were systematically and progressively deteriorated and then used to estimate stability and control derivatives within a Monte Carlo analysis. Based on the results, recommendations were provided for maximum allowable errors in sensor measurements, mass properties, and aircraft geometry to achieve desired levels of dynamic modeling accuracy. Results using other flight conditions, parameter estimation methods, and a full-scale F-16 nonlinear aircraft simulation were compared with these recommendations.
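    A hedged toy version of that Monte Carlo idea: estimate a single "derivative" (here just a slope fitted by least squares) from progressively noisier measurements and watch the scatter of the estimates grow. Names and values are illustrative, not the Generic Transport Model setup.

```python
import numpy as np

rng = np.random.default_rng(4)
t = np.linspace(0.0, 10.0, 200)
true_slope = -0.8                    # stand-in for a stability derivative
signal = true_slope * t

def estimate_scatter(noise_sd, trials=500):
    """Monte Carlo spread of the slope estimate at one sensor-noise level."""
    fits = [np.polyfit(t, signal + rng.normal(0.0, noise_sd, t.size), 1)[0]
            for _ in range(trials)]
    return np.std(fits)

# progressively deteriorated measurements -> growing estimate scatter
print(estimate_scatter(0.05) < estimate_scatter(0.2) < estimate_scatter(0.8))  # True
```

    Inverting this relationship, as the paper does, yields the maximum allowable sensor error for a desired modeling accuracy.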

  11. Utilizing measure-based feedback in control-mastery theory: A clinical error.

    Science.gov (United States)

    Snyder, John; Aafjes-van Doorn, Katie

    2016-09-01

    Clinical errors and ruptures are an inevitable part of clinical practice. Oftentimes, therapists are unaware that a clinical error or rupture has occurred, leaving no space for repair, and potentially leading to patient dropout and/or less effective treatment. One way to overcome our blind spots is by frequently and systematically collecting measure-based feedback from the patient. Patient feedback measures that focus on the process of psychotherapy such as the Patient's Experience of Attunement and Responsiveness scale (PEAR) can be used in conjunction with treatment outcome measures such as the Outcome Questionnaire 45.2 (OQ-45.2) to monitor the patient's therapeutic experience and progress. The regular use of these types of measures can aid clinicians in the identification of clinical errors and the associated patient deterioration that might otherwise go unnoticed and unaddressed. The current case study describes an instance of clinical error that occurred during the 2-year treatment of a highly traumatized young woman. The clinical error was identified using measure-based feedback and subsequently understood and addressed from the theoretical standpoint of the control-mastery theory of psychotherapy. An alternative hypothetical response is also presented and explained using control-mastery theory. (PsycINFO Database Record (c) 2016 APA, all rights reserved).

  12. Model-based bootstrapping when correcting for measurement error with application to logistic regression.

    Science.gov (United States)

    Buonaccorsi, John P; Romeo, Giovanni; Thoresen, Magne

    2018-03-01

    When fitting regression models, measurement error in any of the predictors typically leads to biased coefficients and incorrect inferences. A plethora of methods have been proposed to correct for this. Obtaining standard errors and confidence intervals using the corrected estimators can be challenging and, in addition, there is concern about remaining bias in the corrected estimators. The bootstrap, which is one option to address these problems, has received limited attention in this context. It has usually been employed by simply resampling observations, which, while suitable in some situations, is not always formally justified. In addition, the simple bootstrap does not allow for estimating bias in non-linear models, including logistic regression. Model-based bootstrapping, which can potentially estimate bias in addition to being robust to the original sampling design and to whether or not the measurement error variance is constant, has received less attention still. However, it faces challenges that are not present in handling regression models with no measurement error. This article develops new methods for model-based bootstrapping when correcting for measurement error in logistic regression with replicate measures. The methodology is illustrated using two examples, and a series of simulations are carried out to assess and compare the simple and model-based bootstrap methods, as well as other standard methods. While not always perfect, the model-based approaches offer some distinct improvements over the other methods. © 2017, The International Biometric Society.
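    As a hedged sketch of the ingredients involved (replicate measures to estimate the error variance, regression calibration to correct the attenuated slope, and resampling subjects for uncertainty), under an assumed classical error model with illustrative values; the article's model-based bootstrap is more involved than this simple resampling scheme:

```python
import numpy as np

rng = np.random.default_rng(5)
n = 4000
x = rng.normal(size=n)                              # true covariate
w = x[:, None] + rng.normal(0.0, 0.7, (n, 2))       # two error-prone replicates
y = rng.binomial(1, 1 / (1 + np.exp(-(x - 0.5))))   # logistic outcome, true slope 1

def fit_logistic(u, y, iters=30):
    """Bare-bones Newton-Raphson logistic regression (intercept + slope)."""
    X = np.column_stack([np.ones_like(u), u])
    beta = np.zeros(2)
    for _ in range(iters):
        p = 1 / (1 + np.exp(-X @ beta))
        beta += np.linalg.solve(X.T @ (X * (p * (1 - p))[:, None]), X.T @ (y - p))
    return beta[1]

def calibrated_slope(w, y):
    """Regression calibration: replace the replicate mean by E[X | W-bar]."""
    wbar = w.mean(axis=1)
    sig_u2 = np.mean((w[:, 0] - w[:, 1]) ** 2) / 2    # error variance from replicates
    lam = (np.var(wbar) - sig_u2 / 2) / np.var(wbar)  # reliability of the mean
    xhat = wbar.mean() + lam * (wbar - wbar.mean())
    return fit_logistic(xhat, y)

naive = fit_logistic(w.mean(axis=1), y)
corrected = calibrated_slope(w, y)

# simple bootstrap over subjects for the corrected estimator's spread
boot = [calibrated_slope(w[i], y[i])
        for i in (rng.integers(0, n, n) for _ in range(200))]
print(naive < corrected, np.std(boot) > 0)  # True True
```

    Resampling whole subjects keeps each subject's replicates together; the model-based variant in the article instead regenerates data from the fitted error and outcome models, which additionally supports bias estimation.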

  13. Slotted rotatable target assembly and systematic error analysis for a search for long range spin dependent interactions from exotic vector boson exchange using neutron spin rotation

    Science.gov (United States)

    Haddock, C.; Crawford, B.; Fox, W.; Francis, I.; Holley, A.; Magers, S.; Sarsour, M.; Snow, W. M.; Vanderwerp, J.

    2018-03-01

    We discuss the design and construction of a novel target array of nonmagnetic test masses used in a neutron polarimetry measurement made in search of possible new exotic spin-dependent neutron-atom interactions of Nature at sub-mm length scales. This target was designed to accept and efficiently transmit a transversely polarized slow neutron beam through a series of long open parallel slots bounded by flat rectangular plates. These openings possessed equal atom density gradients normal to the slots from the flat test masses, with dimensions optimized to achieve maximum sensitivity to an exotic spin-dependent interaction from vector boson exchanges with ranges in the mm - μm regime. The parallel slots were oriented differently in four quadrants that can be rotated about the neutron beam axis in discrete 90° increments using a Geneva drive. The spin rotation signals from the four quadrants were measured using a segmented neutron ion chamber to suppress possible systematic errors from stray magnetic fields in the target region. We discuss the per-neutron sensitivity of the target to the exotic interaction, the design constraints, the potential sources of systematic error that could be present in this design, and our estimate of the achievable sensitivity using this method.

  14. Quantitative shearography: error reduction by using more than three measurement channels

    International Nuclear Information System (INIS)

    Charrett, Tom O. H.; Francis, Daniel; Tatam, Ralph P.

    2011-01-01

    Shearography is a noncontact optical technique used to measure surface displacement derivatives. Full surface strain characterization can be achieved using shearography configurations employing at least three measurement channels. Each measurement channel is sensitive to a single displacement gradient component defined by its sensitivity vector. A matrix transformation is then required to convert the measured components to the orthogonal displacement gradients required for quantitative strain measurement. This transformation, conventionally performed using three measurement channels, amplifies any errors present in the measurement. This paper investigates the use of additional measurement channels using the results of a computer model and an experimental shearography system. Results are presented showing that the addition of a fourth channel can reduce the errors in the computed orthogonal components by up to 33% and that, by using 10 channels, reductions of around 45% should be possible.
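    The error-reduction mechanism can be made concrete: each channel row of a sensitivity matrix maps the orthogonal gradients to one measurement, and the propagated noise variance of the least-squares solution is the trace of (S^T S)^(-1), which can only shrink as rows are added. A minimal sketch with illustrative sensitivity vectors, not a particular instrument's geometry:

```python
import numpy as np

# Each row maps the three orthogonal displacement gradients to one
# measured channel (illustrative sensitivity vectors)
S3 = np.array([[1.0, 0.2, 0.1],
               [0.2, 1.0, 0.2],
               [0.1, 0.2, 1.0]])
S4 = np.vstack([S3, [0.6, 0.6, 0.6]])   # one extra measurement channel

def error_amplification(S):
    """Total variance of the least-squares gradient estimate per unit of
    channel noise variance: the trace of (S^T S)^{-1}."""
    return np.trace(np.linalg.inv(S.T @ S))

a3, a4 = error_amplification(S3), error_amplification(S4)
print(a4 < a3)   # True: the extra channel reduces propagated error
```

    Since S4^T S4 equals S3^T S3 plus a positive semidefinite rank-one term, the reduction is guaranteed; how large it is depends on how well the extra sensitivity vector fills the directions the first three leave poorly conditioned.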

  15. Measurement error of a simplified protocol for quantitative sensory tests in chronic pain patients

    DEFF Research Database (Denmark)

    Müller, Monika; Biurrun Manresa, José; Limacher, Andreas

    2017-01-01

    BACKGROUND AND OBJECTIVES: Large-scale application of Quantitative Sensory Tests (QST) is impaired by lacking standardized testing protocols. One unclear methodological aspect is the number of records needed to minimize measurement error. Traditionally, measurements are repeated 3 to 5 times...

  16. Can i just check...? Effects of edit check questions on measurement error and survey estimates

    NARCIS (Netherlands)

    Lugtig, Peter; Jäckle, Annette

    2014-01-01

    Household income is difficult to measure, since it requires the collection of information about all potential income sources for each member of a household. We assess the effects of two types of edit check questions on measurement error and survey estimates: within-wave edit checks use responses to

  17. Assessing thermally induced errors of machine tools by 3D length measurements

    NARCIS (Netherlands)

    Florussen, G.H.J.; Delbressine, F.L.M.; Schellekens, P.H.J.

    2003-01-01

    A new measurement technique is proposed for the assessment of thermally induced errors of machine tools. The basic idea is to measure changes of length by a telescopic double ball bar (TDEB) at multiple locations in the machine's workspace while the machine is thermally excited. In addition thermal

  18. Measuring Systems for Thermometer Calibration in Low-Temperature Range

    Science.gov (United States)

    Szmyrka-Grzebyk, A.; Lipiński, L.; Manuszkiewicz, H.; Kowal, A.; Grykałowska, A.; Jancewicz, D.

    2011-12-01

    The national temperature standard for the low-temperature range between 13.8033 K and 273.16 K has been established in Poland at the Institute of Low Temperature and Structure Research (INTiBS). The standard consists of sealed cells for the realization of six fixed points of the International Temperature Scale of 1990 (ITS-90) in the low-temperature range, an adiabatic cryostat and Isotech water and mercury triple-point baths, capsule standard resistance thermometers (CSPRTs), and AC and DC bridges with standard resistors for thermometer resistance measurements. INTiBS calibrates CSPRTs at the low-temperature fixed points with uncertainties of less than 1 mK. In a lower temperature range, between 2.5 K and about 25 K, rhodium-iron (RhFe) resistance thermometers are calibrated by comparison with a standard that participated in the EURAMET.T-K1.1 comparison. INTiBS offers a calibration service for industrial platinum resistance thermometers and for digital thermometers between 77 K and 273 K. These types of thermometers may also be calibrated at INTiBS in a higher temperature range, up to 550°C. The Laboratory of Temperature Standard at INTiBS acquired accreditation from the Polish Centre for Accreditation. A management system according to EN ISO/IEC 17025:2005 was established at the Laboratory and presented at the EURAMET QSM Forum.

  19. Continuous glucose monitoring in newborn infants: how do errors in calibration measurements affect detected hypoglycemia?

    OpenAIRE

    Thomas, Felicity Louise; Signal, Mathew; Harris, Deborah L.; Weston, Philip J.; Harding, Jane E.; Shaw, Geoffrey M.; Chase, J. Geoffrey

    2014-01-01

    Neonatal hypoglycemia is common and can cause serious brain injury. Continuous glucose monitoring (CGM) could improve hypoglycemia detection, while reducing blood glucose (BG) measurements. Calibration algorithms use BG measurements to convert sensor signals into CGM data. Thus, inaccuracies in calibration BG measurements directly affect CGM values and any metrics calculated from them. The aim was to quantify the effect of timing delays and calibration BG measurement errors on hypoglycemia me...

  20. Quantifying the potential impact of measurement error in an investigation of autism spectrum disorder (ASD).

    Science.gov (United States)

    Heavner, Karyn; Newschaffer, Craig; Hertz-Picciotto, Irva; Bennett, Deborah; Burstyn, Igor

    2014-05-01

    The Early Autism Risk Longitudinal Investigation (EARLI), an ongoing study of a risk-enriched pregnancy cohort, examines genetic and environmental risk factors for autism spectrum disorders (ASDs). We simulated the potential effects of both measurement error (ME) in exposures and misclassification of ASD-related phenotype (assessed as Autism Observation Scale for Infants (AOSI) scores) on measures of association generated under this study design. We investigated the impact on the power to detect true associations with exposure and the false positive rate (FPR) for a non-causal correlate of exposure (X2, r=0.7) for continuous AOSI score (linear model) versus dichotomised AOSI (logistic regression) when the sample size (n), degree of ME in exposure, and strength of the expected (true) odds ratio (eOR) between exposure and AOSI varied. Exposure was a continuous variable in all linear models and dichotomised at one SD above the mean in logistic models. Simulations reveal complex patterns and suggest that: (1) There was attenuation of associations that increased with eOR and ME; (2) The FPR was considerable under many scenarios; and (3) The FPR has a complex dependence on the eOR, ME and model choice, but was greater for logistic models. The findings will stimulate work examining cost-effective strategies to reduce the impact of ME in realistic sample sizes and affirm the importance for EARLI of investment in biological samples that help precisely quantify a wide range of environmental exposures.
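    One mechanism behind those false positives can be shown in a few lines: adjusting for an error-prone exposure removes only part of the exposure signal, and a non-causal correlate absorbs the residual. This is a hedged illustrative simulation, not the EARLI design or its eOR scenarios.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 20_000
x = rng.normal(size=n)                                  # causal exposure
x2 = 0.7 * x + np.sqrt(1 - 0.49) * rng.normal(size=n)   # non-causal, r = 0.7
y = x + rng.normal(size=n)                              # outcome driven by x only
w = x + rng.normal(0.0, 1.0, n)                         # x measured with error

# adjusting for the error-prone w leaves residual exposure signal,
# which the non-causal correlate x2 absorbs as a spurious "effect"
X = np.column_stack([np.ones(n), w, x2])
coef = np.linalg.lstsq(X, y, rcond=None)[0]
print(coef[2] > 0.4)   # True, despite x2 having no causal effect
```

    With this much error (reliability 0.5), the spurious coefficient on x2 is close to half the size of the true exposure effect, so at a realistic sample size the false positive rate is essentially one.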

  1. Design, calibration and error analysis of instrumentation for heat transfer measurements in internal combustion engines

    Science.gov (United States)

    Ferguson, C. R.; Tree, D. R.; Dewitt, D. P.; Wahiduzzaman, S. A. H.

    1987-01-01

    The paper reports the methodology and uncertainty analyses of instrumentation for heat transfer measurements in internal combustion engines. Results are presented for determining the local wall heat flux in an internal combustion engine (using a surface thermocouple-type heat flux gage) and the apparent flame temperature and soot volume fraction-path length product in a diesel engine (using two-color pyrometry). It is shown that a surface thermocouple heat transfer gage, suitably constructed and calibrated, will have an accuracy of 5 to 10 percent. It is also shown that, when applying two-color pyrometry to measure the apparent flame temperature and the soot volume fraction-path length product, it is important to choose at least one of the two wavelengths to lie in the range of 1.3 to 2.3 micrometers. A carefully calibrated two-color pyrometer can ensure that random errors in the apparent flame temperature and in the soot volume fraction-path length product remain small (within about 1 percent and 10 percent, respectively).

  2. Three-dimensional patient setup errors at different treatment sites measured by the Tomotherapy megavoltage CT

    Energy Technology Data Exchange (ETDEWEB)

    Hui, S.K.; Lusczek, E.; Dusenbery, K. [Univ. of Minnesota Medical School, Minneapolis, MN (United States). Dept. of Therapeutic Radiology - Radiation Oncology; DeFor, T. [Univ. of Minnesota Medical School, Minneapolis, MN (United States). Biostatistics and Informatics Core; Levitt, S. [Univ. of Minnesota Medical School, Minneapolis, MN (United States). Dept. of Therapeutic Radiology - Radiation Oncology; Karolinska Institutet, Stockholm (Sweden). Dept. of Onkol-Patol

    2012-04-15

    Reduction of interfraction setup uncertainty is vital for assuring the accuracy of conformal radiotherapy. We report a systematic study of setup error to assess patients' three-dimensional (3D) localization at various treatment sites. Tomotherapy megavoltage CT (MVCT) images were scanned daily in 259 patients from 2005-2008. We analyzed 6,465 MVCT images to measure setup error for head and neck (H and N), chest/thorax, abdomen, prostate, legs, and total marrow irradiation (TMI). Statistical comparisons of the absolute displacements across sites and time were performed in rotation (R), lateral (x), craniocaudal (y), and vertical (z) directions. The global systematic errors were measured to be less than 3 mm in each direction with increasing order of errors for different sites: H and N, prostate, chest, pelvis, spine, legs, and TMI. The differences in displacements in the x, y, and z directions, and 3D average displacement between treatment sites were significant (p < 0.01). Overall improvement in patient localization with time (after 3-4 treatment fractions) was observed. Large displacement (> 5 mm) was observed in the 75th percentile of the patient groups for chest, pelvis, legs, and spine in the x and y direction in the second week of the treatment. MVCT imaging is essential for determining 3D setup error and to reduce uncertainty in localization at all anatomical locations. Setup error evaluation should be performed daily for all treatment regions, preferably for all treatment fractions. (orig.)

  3. Model-based cartilage thickness measurement in the submillimeter range

    International Nuclear Information System (INIS)

    Streekstra, G. J.; Strackee, S. D.; Maas, M.; Wee, R. ter; Venema, H. W.

    2007-01-01

    Current methods of image-based thickness measurement in thin sheet structures utilize second derivative zero crossings to locate the layer boundaries. It is generally acknowledged that the nonzero width of the point spread function (PSF) limits the accuracy of this measurement procedure. We propose a model-based method that strongly reduces PSF-induced bias by incorporating the PSF into the thickness estimation method. We estimated the bias in thickness measurements in simulated thin sheet images as obtained from second derivative zero crossings. To gain insight into the range of sheet thickness where our method is expected to yield improved results, sheet thickness was varied between 0.15 and 1.2 mm with an assumed PSF as present in the high-resolution modes of current computed tomography (CT) scanners [full width at half maximum (FWHM) 0.5-0.8 mm]. Our model-based method was evaluated in practice by measuring layer thickness from CT images of a phantom mimicking two parallel cartilage layers in an arthrography procedure. CT arthrography images of cadaver wrists were also evaluated, and thickness estimates were compared to those obtained from high-resolution anatomical sections that served as a reference. The thickness estimates from the simulated images reveal that the method based on second derivative zero crossings shows considerable bias for layers in the submillimeter range. This bias is negligible for sheet thickness larger than 1 mm, where the size of the sheet is more than twice the FWHM of the PSF but can be as large as 0.2 mm for a 0.5 mm sheet. The results of the phantom experiments show that the bias is effectively reduced by our method. The deviations from the true thickness, due to random fluctuations induced by quantum noise in the CT images, are of the order of 3% for a standard wrist imaging protocol. In the wrist the submillimeter thickness estimates from the CT arthrography images correspond within 10% to those estimated from the anatomical

  4. Model selection for marginal regression analysis of longitudinal data with missing observations and covariate measurement error.

    Science.gov (United States)

    Shen, Chung-Wei; Chen, Yi-Hau

    2015-10-01

    Missing observations and covariate measurement error commonly arise in longitudinal data. However, existing methods for model selection in marginal regression analysis of longitudinal data fail to address the potential bias resulting from these issues. To tackle this problem, we propose a new model selection criterion, the Generalized Longitudinal Information Criterion, which is based on an approximately unbiased estimator for the expected quadratic error of a considered marginal model accounting for both data missingness and covariate measurement error. The simulation results reveal that the proposed method performs quite well in the presence of missing data and covariate measurement error. On the contrary, the naive procedures without taking care of such complexity in data may perform quite poorly. The proposed method is applied to data from the Taiwan Longitudinal Study on Aging to assess the relationship of depression with health and social status in the elderly, accommodating measurement error in the covariate as well as missing observations. © The Author 2015. Published by Oxford University Press. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  5. ac driving amplitude dependent systematic error in scanning Kelvin probe microscope measurements: Detection and correction

    International Nuclear Information System (INIS)

    Wu Yan; Shannon, Mark A.

    2006-01-01

    The dependence of the contact potential difference (CPD) reading on the ac driving amplitude in scanning Kelvin probe microscope (SKPM) measurements hinders researchers from quantifying true material properties. We show theoretically and demonstrate experimentally that an ac driving amplitude dependence in the SKPM measurement can come from a systematic error, and it is common to all tip-sample systems as long as there is a nonzero tracking error in the feedback control loop of the instrument. We further propose a methodology to detect and correct the ac driving amplitude dependent systematic error in SKPM measurements. The true contact potential difference can be found by applying a linear regression to the measured CPD versus the reciprocal of the ac driving amplitude. Two scenarios are studied: (a) when the surface being scanned by SKPM is not semiconducting and there is an ac driving amplitude dependent systematic error; (b) when a semiconductor surface is probed and asymmetric band bending occurs when the systematic error is present. Experiments are conducted using a commercial SKPM, and CPD measurement results of two systems, platinum-iridium/gap/gold and platinum-iridium/gap/thermal oxide/silicon, are discussed
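    The proposed correction reduces to a straight-line extrapolation: plot the measured CPD against the reciprocal of the ac driving amplitude and read off the intercept as the true CPD. A minimal sketch with hypothetical data following the model CPD_meas = CPD_true + c / V_ac:

```python
import numpy as np

# Hypothetical measured CPD (V) at several ac driving amplitudes (V),
# generated from CPD_meas = 0.5 + 0.2 / V_ac for illustration
v_ac = np.array([0.5, 1.0, 2.0, 4.0, 8.0])
cpd = np.array([0.90, 0.70, 0.60, 0.55, 0.525])

# linear regression of CPD against 1 / V_ac; the intercept is the
# extrapolation to infinite driving amplitude, i.e., the true CPD
slope, intercept = np.polyfit(1.0 / v_ac, cpd, 1)
print(round(intercept, 3))  # 0.5
```

    In practice the measured points scatter around the line, and the intercept's confidence interval indicates how well the systematic error has been removed.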

  6. The Influence of Training Phase on Error of Measurement in Jump Performance.

    Science.gov (United States)

    Taylor, Kristie-Lee; Hopkins, Will G; Chapman, Dale W; Cronin, John B

    2016-03-01

    The purpose of this study was to calculate the coefficients of variation in jump performance for individual participants in multiple trials over time to determine the extent to which there are real differences in the error of measurement between participants. The effect of training phase on measurement error was also investigated. Six subjects participated in a resistance-training intervention for 12 wk with mean power from a countermovement jump measured 6 d/wk. Using a mixed-model meta-analysis, differences between subjects, within-subject changes between training phases, and the mean error values during different phases of training were examined. Small, substantial factor differences of 1.11 were observed between subjects; however, the finding was unclear based on the width of the confidence limits. The mean error was clearly higher during overload training than baseline training, by a factor of ×/÷ 1.3 (confidence limits 1.0-1.6). The random factor representing the interaction between subjects and training phases revealed further substantial differences of ×/÷ 1.2 (1.1-1.3), indicating that on average, the error of measurement in some subjects changes more than in others when overload training is introduced. The results from this study provide the first indication that within-subject variability in performance is substantially different between training phases and, possibly, different between individuals. The implications of these findings for monitoring individuals and estimating sample size are discussed.

  7. Exact sampling of the unobserved covariates in Bayesian spline models for measurement error problems.

    Science.gov (United States)

    Bhadra, Anindya; Carroll, Raymond J

    2016-07-01

    In truncated polynomial spline or B-spline models where the covariates are measured with error, a fully Bayesian approach to model fitting requires the covariates and model parameters to be sampled at every Markov chain Monte Carlo iteration. Sampling the unobserved covariates poses a major computational problem and usually Gibbs sampling is not possible. This forces the practitioner to use a Metropolis-Hastings step which might suffer from unacceptable performance due to poor mixing and might require careful tuning. In this article we show for the cases of truncated polynomial spline or B-spline models of degree equal to one, the complete conditional distribution of the covariates measured with error is available explicitly as a mixture of double-truncated normals, thereby enabling a Gibbs sampling scheme. We demonstrate via a simulation study that our technique performs favorably in terms of computational efficiency and statistical performance. Our results indicate increases of up to 62% and 54% in mean integrated squared error efficiency, compared to existing alternatives, when using truncated polynomial splines and B-splines, respectively. Furthermore, there is evidence that the gain in efficiency increases with the measurement error variance, indicating the proposed method is a particularly valuable tool for challenging applications that present high measurement error. We conclude with a demonstration on a nutritional epidemiology data set from the NIH-AARP study and by pointing out some possible extensions of the current work.
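    The building block of such a Gibbs step is an exact draw from a truncated normal, which can be done by inverse-CDF sampling without any Metropolis-Hastings tuning. A hedged stdlib-only sketch of that component draw (the article's full conditional mixes several such components with computed weights):

```python
from statistics import NormalDist
import random

def sample_truncated_normal(mu, sigma, lo, hi, rng=random):
    """Inverse-CDF draw from N(mu, sigma^2) truncated to [lo, hi]:
    draw u uniformly between the CDF values at the bounds and invert."""
    nd = NormalDist(mu, sigma)
    u = rng.uniform(nd.cdf(lo), nd.cdf(hi))
    return nd.inv_cdf(u)

random.seed(0)
draws = [sample_truncated_normal(0.0, 1.0, -1.0, 2.0) for _ in range(10_000)]
print(all(-1.0 <= d <= 2.0 for d in draws))  # True
```

    Because every draw is exact, the resulting Gibbs chain has none of the mixing and acceptance-rate concerns of a Metropolis-Hastings step, which is the source of the efficiency gains the abstract reports.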

  8. Covariate measurement error correction methods in mediation analysis with failure time data.

    Science.gov (United States)

    Zhao, Shanshan; Prentice, Ross L

    2014-12-01

    Mediation analysis is important for understanding the mechanisms whereby one variable causes changes in another. Measurement error could obscure the ability of the potential mediator to explain such changes. This article focuses on developing correction methods for measurement error in the mediator with failure time outcomes. We consider a broad definition of measurement error, including technical error, and error associated with temporal variation. The underlying model with the "true" mediator is assumed to be of the Cox proportional hazards model form. The induced hazard ratio for the observed mediator no longer has a simple form independent of the baseline hazard function, due to the conditioning event. We propose a mean-variance regression calibration approach and a follow-up time regression calibration approach, to approximate the partial likelihood for the induced hazard function. Both methods demonstrate value in assessing mediation effects in simulation studies. These methods are generalized to multiple biomarkers and to both case-cohort and nested case-control sampling designs. We apply these correction methods to the Women's Health Initiative hormone therapy trials to understand the mediation effect of several serum sex hormone measures on the relationship between postmenopausal hormone therapy and breast cancer risk. © 2014, The International Biometric Society.

  9. Adaptive digital fringe projection technique for high dynamic range three-dimensional shape measurement.

    Science.gov (United States)

    Lin, Hui; Gao, Jian; Mei, Qing; He, Yunbo; Liu, Junxiu; Wang, Xingjin

    2016-04-04

    It is a challenge for any optical method to measure objects with a large range of reflectivity variation across the surface. Image saturation results in incorrect intensities in captured fringe pattern images, leading to phase and measurement errors. This paper presents a new adaptive digital fringe projection technique which avoids image saturation and has a high signal-to-noise ratio (SNR) in the three-dimensional (3-D) shape measurement of objects that have a large range of reflectivity variation across the surface. Compared to previous high dynamic range 3-D scanning methods using many exposures and fringe pattern projections, which are time-consuming, the proposed technique uses only two preliminary steps of fringe pattern projection and image capture to generate the adapted fringe patterns, by adaptively adjusting the pixel-wise intensity of the projected fringe patterns based on the saturated pixels in the captured images of the surface being measured. For bright regions caused by high surface reflectivity, high ambient illumination, or surface interreflections, the projected intensity is reduced just enough to avoid image saturation. Simultaneously, the maximum intensity of 255 is used for dark regions with low surface reflectivity to maintain a high SNR. Our experiments demonstrate that the proposed technique can achieve higher 3-D measurement accuracy across a surface with a large range of reflectivity variation.
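    The pixel-wise adaptation step can be illustrated as follows. This is a simplified sketch, not the authors' algorithm: it assumes a linear camera response and applies a single proportional reduction at saturated pixels, whereas the paper refines the adapted pattern over two preliminary projection/capture steps:

```python
import numpy as np

SATURATION = 255

def adapt_projection(projected, captured, margin=0.95):
    """Reduce the projected fringe intensity wherever the captured image
    saturated; dark regions keep their full projected intensity.
    Assumes (simplification) a linear camera response."""
    projected = projected.astype(float)
    captured = captured.astype(float)
    adapted = projected.copy()
    sat = captured >= SATURATION
    # Scale down in proportion to the (clipped) observed response; a second
    # capture/adjust pass would refine pixels that remain saturated.
    adapted[sat] = projected[sat] * margin * SATURATION / captured[sat]
    return np.clip(np.rint(adapted), 0, SATURATION)

# Four pixels: two saturated bright pixels, two unsaturated dark pixels
proj = np.full(4, 200.0)
capt = np.array([255.0, 255.0, 120.0, 60.0])
adapted = adapt_projection(proj, capt)
```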

  10. Robust estimation of partially linear models for longitudinal data with dropouts and measurement error.

    Science.gov (United States)

    Qin, Guoyou; Zhang, Jiajia; Zhu, Zhongyi; Fung, Wing

    2016-12-20

    Outliers, measurement error, and missing data are commonly seen in longitudinal data because of its data collection process. However, no existing method can address all three of these issues simultaneously. This paper focuses on the robust estimation of partially linear models for longitudinal data with dropouts and measurement error. A new robust estimating equation, simultaneously tackling outliers, measurement error, and missingness, is proposed. The asymptotic properties of the proposed estimator are established under some regularity conditions. The proposed method is easy to implement in practice by utilizing the existing standard generalized estimating equations algorithms. Comprehensive simulation studies show the strength of the proposed method in dealing with longitudinal data with all three features. Finally, the proposed method is applied to data from the Lifestyle Education for Activity and Nutrition study and confirms the effectiveness of the intervention in producing weight loss at month 9. Copyright © 2016 John Wiley & Sons, Ltd.

  11. Novel birefringence interrogation for Sagnac loop interferometer sensor with unlimited linear measurement range.

    Science.gov (United States)

    He, Haijun; Shao, Liyang; Qian, Heng; Zhang, Xinpu; Liang, Jiawei; Luo, Bin; Pan, Wei; Yan, Lianshan

    2017-03-20

    A novel demodulation method for Sagnac loop interferometer based sensors has been proposed and demonstrated, unwrapping the phase changes with birefringence interrogation. A temperature sensor based on a Sagnac loop interferometer has been used to verify the feasibility of the proposed method. Several tests over a 40 °C temperature range have been accomplished with a linearity of 0.9996 over the full range. The proposed scheme is universal for all Sagnac loop interferometer based sensors, and it has an unlimited linear measurement range, which surpasses the conventional demodulation method based on peak/dip tracing. Furthermore, the influence of the wavelength sampling interval and wavelength span on the demodulation error is discussed in this work. The proposed interrogation method is of great significance for Sagnac loop interferometer sensors and might greatly enhance the usability of this type of sensor in practical applications.
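    The unlimited linear range comes from unwrapping the interferometer phase rather than tracing a single spectral peak or dip, which folds back every 2π. A minimal illustration of phase unwrapping on a synthetic monotonic drift (the birefringence interrogation itself is not modeled here):

```python
import numpy as np

# Wrapped phase from a Sagnac-type response: the raw phase folds back
# at +/- pi, limiting the usable measurement range of peak/dip tracing.
true_phase = np.linspace(0.0, 6 * np.pi, 400)   # e.g. a temperature drift
wrapped = np.angle(np.exp(1j * true_phase))     # folded into (-pi, pi]

# Unwrapping removes the 2*pi jumps, restoring an unlimited linear range
unwrapped = np.unwrap(wrapped)
```

    As long as the phase change between successive samples stays below π, the unwrapped phase tracks the true phase over an arbitrarily large range.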

  12. Generalized weighted ratio method for accurate turbidity measurement over a wide range.

    Science.gov (United States)

    Liu, Hongbo; Yang, Ping; Song, Hong; Guo, Yilu; Zhan, Shuyue; Huang, Hui; Wang, Hangzhou; Tao, Bangyi; Mu, Quanquan; Xu, Jing; Li, Dejun; Chen, Ying

    2015-12-14

    Turbidity measurement is important for water quality assessment, food safety, medicine, ocean monitoring, etc. In this paper, a method that accurately estimates the turbidity over a wide range is proposed, where the turbidity of the sample is represented as a weighted ratio of the scattered light intensities at a series of angles. An improvement in the accuracy is achieved by expanding the structure of the ratio function, thus adding more flexibility to the turbidity-intensity fitting. Experiments have been carried out with an 850 nm laser and a power meter fixed on a turntable to measure the light intensity at different angles. The results show that the relative estimation error of the proposed method is 0.58% on average for a four-angle intensity combination for all test samples with a turbidity ranging from 160 NTU to 4000 NTU.
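    The weighted-ratio idea can be sketched as fitting turbidity to a ratio of linear combinations of multi-angle intensities. Everything below is synthetic and illustrative (the angles, intensity model, and the ratio form T ≈ a·I / (1 + b·I) are assumptions, not the paper's exact parameterization):

```python
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(2)

# Synthetic scattered intensities at four angles for samples of known
# turbidity (NTU); this generating model exists only to make demo data.
T_true = np.linspace(160, 4000, 30)
I = np.column_stack([
    0.8 * T_true / (1 + 2e-4 * T_true),
    0.5 * T_true / (1 + 3e-4 * T_true),
    0.3 * T_true / (1 + 1e-4 * T_true),
    0.1 * T_true / (1 + 5e-5 * T_true),
])
I *= 1 + rng.normal(0, 0.005, I.shape)   # 0.5% intensity noise

# Weighted-ratio model: T_hat = (a . I) / (1 + b . I)
def t_hat(p, I):
    a, b = p[:4], p[4:]
    return (I @ a) / (1.0 + I @ b)

def residuals(p):
    return (t_hat(p, I) - T_true) / T_true   # fit relative error

fit = least_squares(residuals, x0=np.r_[np.ones(4), np.zeros(4)])
rel_err = np.abs(t_hat(fit.x, I) - T_true) / T_true
```

    Expanding the ratio beyond a single-angle reading is what gives the fit enough flexibility to stay accurate across the whole 160-4000 NTU range.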

  13. Simultaneous Treatment of Missing Data and Measurement Error in HIV Research Using Multiple Overimputation.

    Science.gov (United States)

    Schomaker, Michael; Hogger, Sara; Johnson, Leigh F; Hoffmann, Christopher J; Bärnighausen, Till; Heumann, Christian

    2015-09-01

    Both CD4 count and viral load in HIV-infected persons are measured with error. There is no clear guidance on how to deal with this measurement error in the presence of missing data. We used multiple overimputation, a method recently developed in the political sciences, to account for both measurement error and missing data in CD4 count and viral load measurements from four South African cohorts of a Southern African HIV cohort collaboration. Our knowledge about the measurement error of ln CD4 and log10 viral load is part of an imputation model that imputes both missing and mismeasured data. In an illustrative example, we estimate the association of CD4 count and viral load with the hazard of death among patients on highly active antiretroviral therapy by means of a Cox model. Simulation studies evaluate the extent to which multiple overimputation is able to reduce bias in survival analyses. Multiple overimputation emphasizes more strongly the influence of having high baseline CD4 counts compared to both a complete case analysis and multiple imputation (hazard ratio for >200 cells/mm³ vs. <25 cells/mm³: 0.21 [95% confidence interval: 0.18, 0.24] vs. 0.38 [0.29, 0.48], and 0.29 [0.25, 0.34], respectively). Similar results are obtained when varying assumptions about measurement error, when using p-splines, and when evaluating time-updated CD4 count in a longitudinal analysis. The estimates of the association with viral load are slightly more attenuated when using multiple imputation instead of multiple overimputation. Our simulation studies suggest that multiple overimputation is able to reduce bias and mean squared error in survival analyses. Multiple overimputation, which can be used with existing software, offers a convenient approach to account for both missing and mismeasured data in HIV research.

  14. Influence of video compression on the measurement error of the television system

    Science.gov (United States)

    Sotnik, A. V.; Yarishev, S. N.; Korotaev, V. V.

    2015-05-01

    Video data require a very large memory capacity. Choosing a video encoding method with an optimal quality-to-volume ratio is one of the most pressing problems, due to the urgent need to transfer large amounts of video over various networks. The technology of digital TV signal compression reduces the amount of data used for video stream representation. Video compression effectively reduces the stream required for transmission and storage. It is important to take into account the uncertainties caused by compression of the video signal when television measuring systems are used. There are many digital compression methods. The aim of the proposed work is to research the influence of video compression on the measurement error in television systems. The measurement error of an object parameter is the main characteristic of television measuring systems. Accuracy characterizes the difference between the measured value and the actual parameter value. Errors caused by the optical system can be identified as one source of error in television system measurements. The method of processing the received video signal is another source of error. The presence of errors leads to large distortions in the case of compression with a constant data stream rate, and increases the amount of data required to transmit or record an image frame in the case of constant quality. The purpose of intra-coding is to reduce the spatial redundancy within a frame (or field) of a television image. This redundancy is caused by the strong correlation between the elements of the image. If one can find a corresponding orthogonal transformation, it is possible to convert an array of image samples into a matrix of coefficients that are not correlated with each other. It is possible to apply entropy coding to these uncorrelated coefficients and achieve a reduction in the digital stream. One can select a transformation such that most of the matrix coefficients will be almost zero for typical images. Excluding these zero coefficients further reduces the digital stream.

  15. Compensation for positioning error of industrial robot for flexible vision measuring system

    Science.gov (United States)

    Guo, Lei; Liang, Yajun; Song, Jincheng; Sun, Zengyu; Zhu, Jigui

    2013-01-01

    The positioning error of the robot is a main factor in the accuracy of a flexible coordinate measuring system consisting of a universal industrial robot and a visual sensor. Present compensation methods for positioning error based on the kinematic model of the robot have a significant limitation: they are not effective throughout the whole measuring space. A new compensation method for the positioning error of the robot based on vision measuring techniques is presented. One approach is to set global control points in the measured field and attach an orientation camera to the vision sensor. The global control points are then measured by the orientation camera to calculate the transformation from the current position of the sensor system to the global coordinate system, and the positioning error of the robot is compensated. Another approach is to set control points on the vision sensor and two large-field cameras behind the sensor. The three-dimensional coordinates of the control points are then measured, and the pose and position of the sensor are calculated in real time. Experimental results show that the RMS spatial positioning error is 3.422 mm with the single-camera method and 0.031 mm with the dual-camera method. The conclusion is that the single-camera algorithm needs to be improved for higher accuracy, while the accuracy of the dual-camera method is suitable for application.
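    Both approaches reduce to estimating the rigid transformation between the sensor frame and the global frame from matched control points. A hedged sketch using the standard Kabsch (SVD) solution on noiseless synthetic points; the paper's photogrammetric details are not reproduced:

```python
import numpy as np

def rigid_transform(src, dst):
    """Least-squares rotation R and translation t with dst ~ R @ src + t,
    estimated from matched 3-D control points (Kabsch algorithm)."""
    src_c = src - src.mean(axis=0)
    dst_c = dst - dst.mean(axis=0)
    U, _, Vt = np.linalg.svd(src_c.T @ dst_c)
    d = np.sign(np.linalg.det(Vt.T @ U.T))       # guard against reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dst.mean(axis=0) - R @ src.mean(axis=0)
    return R, t

# Control points in the sensor frame and their coordinates in the global
# frame (synthetic: a known rotation about z plus a translation)
rng = np.random.default_rng(3)
src = rng.uniform(-1, 1, (6, 3))
theta = 0.3
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([0.5, -0.2, 1.0])
dst = src @ R_true.T + t_true

R, t = rigid_transform(src, dst)
```

    With real measurements the same estimate is recomputed at each robot pose, so the compensation does not depend on a kinematic model of the robot.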

  16. Alpha Beam Energy Determination Using a Range Measuring Device for Radioisotope Production

    Energy Technology Data Exchange (ETDEWEB)

    Choi, Jun Yong; Kim, Byeon Gil; Hong, Seung Pyo; Kim, Ran Young; Chun, Kwon Soo [Korea Institute of Radiological and Medical Sciences, Seoul (Korea, Republic of)

    2016-05-15

    The threshold energy of the {sup 209}Bi(α,3n){sup 210}At reaction is at about 30 MeV. Our laboratory has suggested an energy measurement method to confirm the proton beam's energy by using a range measurement device. Here, an experiment was performed to measure the energy of an alpha beam. An alpha beam of energy 29 MeV was extracted from the cyclotron for the production of {sup 211}At. The device is composed of four parts: an absorber, a drive shaft, a servo motor, and a Faraday cup. The drive shaft is mounted on the absorber, connects with the axis of the servo motor, and is rotated by the servo motor. The Faraday cup is for measuring the beam flux. As the drive shaft rotates, the thickness of the absorber varies depending on the rotation angle of the absorber. The energy of the alpha particles accelerated and extracted from the MC-50 cyclotron was calculated from the measurement of the particle range in Al foil using the ASTAR, SRIM, and MCNPX software. There was a small discrepancy between the expected energy and the calculated energy, within a 0.5 MeV error range. We plan to experiment with various alpha particle energies and with another methodology, for example, measurement of the cross section of the nuclear reaction.

  17. Correcting for multivariate measurement error by regression calibration in meta-analyses of epidemiological studies

    DEFF Research Database (Denmark)

    Tybjærg-Hansen, Anne

    2009-01-01

    Within-person variability in measured values of multiple risk factors can bias their associations with disease. The multivariate regression calibration (RC) approach can correct for such measurement error and has been applied to studies in which true values or independent repeat measurements of the risk factors are observed on a subsample. We extend the multivariate RC techniques to a meta-analysis framework where multiple studies provide independent repeat measurements and information on disease outcome. We consider the cases where some or all studies have repeat measurements, and compare study-specific, averaged and empirical Bayes estimates of RC parameters. Additionally, we allow for binary covariates (e.g. smoking status) and for uncertainty and time trends in the measurement error corrections. Our methods are illustrated using a subset of individual participant data from prospective long-term studies...

  18. Unaccounted source of systematic errors in measurements of the Newtonian gravitational constant G

    International Nuclear Information System (INIS)

    DeSalvo, Riccardo

    2015-01-01

    Many precision measurements of G have produced a spread of results incompatible with measurement errors. Clearly an unknown source of systematic errors is at work. It is proposed here that most of the discrepancies derive from subtle deviations from Hooke's law, caused by avalanches of entangled dislocations. The idea is supported by deviations from linearity reported by experimenters measuring G, similar to what is observed, on a larger scale, in low-frequency spring oscillators. Some mitigating experimental apparatus modifications are suggested. - Highlights: • Source of discrepancies in universal gravitational constant G measurements. • Collective motion of dislocations results in breakdown of Hooke's law. • Self-organized criticality produces non-predictive shifts of the equilibrium point. • A new dissipation mechanism different from loss angle and viscous models is necessary. • The mitigation measures proposed may bring coherence to the measurements of G

  19. Lifetime measurements in the picosecond range: achievements and perspectives

    International Nuclear Information System (INIS)

    Kruecken, R.

    2000-01-01

    Recent developments in the measurement of lifetimes in the picosecond range using the recoil distance method (RDM) are reviewed. Results from recent RDM experiments on superdeformed bands in the mass-190 region, shears bands in the neutron-deficient lead isotopes, and ground state bands in the mass-130 region are presented. New experimental devices for lifetime experiments at Yale, such as the New Yale Plunger Device (N.Y.P.D.), the SPEctrometer for Doppler-shift Experiments at Yale (SPEEDY) and the plans for the gas-filled recoil separator SASSYER are presented. Perspectives for the use of the RDM technique in the study of exotic nuclei and its potential use with radioactive beams are discussed. (author)

  20. Picosecond X-ray streak camera dynamic range measurement

    Energy Technology Data Exchange (ETDEWEB)

    Zuber, C., E-mail: celine.zuber@cea.fr; Bazzoli, S.; Brunel, P.; Gontier, D.; Raimbourg, J.; Rubbelynck, C.; Trosseille, C. [CEA, DAM, DIF, F-91297 Arpajon (France); Fronty, J.-P.; Goulmy, C. [Photonis SAS, Avenue Roger Roncier, BP 520, 19106 Brive Cedex (France)

    2016-09-15

    Streak cameras are widely used to record the spatio-temporal evolution of laser-induced plasma. A prototype of picosecond X-ray streak camera has been developed and tested by Commissariat à l’Énergie Atomique et aux Énergies Alternatives to answer the Laser MegaJoule specific needs. The dynamic range of this instrument is measured with picosecond X-ray pulses generated by the interaction of a laser beam and a copper target. The required value of 100 is reached only in the configurations combining the slowest sweeping speed and optimization of the streak tube electron throughput by an appropriate choice of high voltages applied to its electrodes.

  1. Measurement of peak impact loads differ between accelerometers - Effects of system operating range and sampling rate.

    Science.gov (United States)

    Ziebart, Christina; Giangregorio, Lora M; Gibbs, Jenna C; Levine, Iris C; Tung, James; Laing, Andrew C

    2017-06-14

    A wide variety of accelerometer systems, with differing sensor characteristics, are used to detect impact loading during physical activities. The study examined the effects of system characteristics on measured peak impact loading during a variety of activities by comparing outputs from three separate accelerometer systems, and by assessing the influence of simulated reductions in operating range and sampling rate. Twelve healthy young adults performed seven tasks (vertical jump, box drop, heel drop, and bilateral single leg and lateral jumps) while simultaneously wearing three tri-axial accelerometers including a criterion standard laboratory-grade unit (Endevco 7267A) and two systems primarily used for activity-monitoring (ActiGraph GT3X+, GCDC X6-2mini). Peak acceleration (gmax) was compared across accelerometers, and errors resulting from down-sampling (from 640 Hz to 100 Hz) and range-limiting (to ±6 g) the criterion standard output were characterized. The Actigraph activity-monitoring accelerometer underestimated gmax by an average of 30.2%; underestimation by the X6-2mini was not significant. Underestimation error was greater for tasks with greater impact magnitudes. gmax was underestimated when the criterion standard signal was down-sampled (by an average of 11%), range limited (by 11%), and by combined down-sampling and range-limiting (by 18%). These effects explained 89% of the variance in gmax error for the Actigraph system. This study illustrates that both the type and intensity of activity should be considered when selecting an accelerometer for characterizing impact events. In addition, caution may be warranted when comparing impact magnitudes from studies that use different accelerometers, and when comparing accelerometer outputs to osteogenic impact thresholds proposed in literature. Crown Copyright © 2017. Published by Elsevier Ltd. All rights reserved.
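    The down-sampling and range-limiting effects can be reproduced on a synthetic impact pulse. The pulse shape, width, and 8 g peak below are illustrative assumptions, not the study's data:

```python
import numpy as np

FS_REF, FS_LOW = 640.0, 100.0      # criterion and activity-monitor rates, Hz
CLIP_G = 6.0                       # simulated operating-range limit, g

# Half-sine impact pulse, 15 ms wide, peaking at 8 g (illustrative)
t_ref = np.arange(0, 0.1, 1 / FS_REF)
pulse = np.where((t_ref >= 0.04) & (t_ref <= 0.055),
                 8.0 * np.sin(np.pi * (t_ref - 0.04) / 0.015), 0.0)

peak_ref = pulse.max()                                   # criterion peak
peak_clipped = np.clip(pulse, -CLIP_G, CLIP_G).max()     # range-limited
t_low = np.arange(0, 0.1, 1 / FS_LOW)
peak_downsampled = np.interp(t_low, t_ref, pulse).max()  # 100 Hz sampling
```

    Clipping caps the peak at the operating range, while the coarse sampling grid simply misses the instant of the true maximum; both mechanisms bias gmax downward, as the study reports.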

  2. AN INDUCTION SENSOR FOR MEASURING CURRENTS OF NANOSECOND RANGE

    Directory of Open Access Journals (Sweden)

    S. P. Shalamov

    2016-11-01

    Full Text Available Purpose. A current meter based on the principle of electromagnetic induction is designed to register the current flowing in a lightning rod. The aim of the article is to describe a way of increasing the sensitivity of the converters by means of their series connection. Methodology. The recorded current is in the nanosecond range. Compared with other methods, meters based on the principle of electromagnetic induction have several advantages, such as simplicity of construction, reliability, low cost, no need for a power source, and relatively high sensitivity. Creation of such a meter is necessary because in some cases a shunt cannot be used. The transient properties of a meter are determined by the number of turns and the integration constant. The sensitivity is determined by the number of turns, the coil sectional area, the core material, and the integration constant. For measuring magnetic field pulses with a rise time of 5 ns to 50 ns, a meter has 5 to 15 turns. The sensitivity of such a meter is low. When the number of turns is increased, the output signal increases, but so does the rise time. The dependencies described earlier were used to select the main parameters of the converter, based on the generally accepted and widely known equivalent circuit. The experience of previously created pulsed magnetic field meters, for measuring both magnetic fields and large pulsed currents, was taken into account. Originality. A series connection of converters has the properties of a long line. The level of the transient response of the meter is calculated. The influence of parasitic parameters on the shape of the meter's transient response is examined. The presented design has not been described previously. Practical value. The results of the meter's implementation are given, and the design peculiarities of the given measuring instruments are shown.

  3. Measurement Rounding Errors in an Assessment Model of Project Led Engineering Education

    Directory of Open Access Journals (Sweden)

    Francisco Moreira

    2009-11-01

    Full Text Available This paper analyzes the rounding errors that occur in the assessment of an interdisciplinary Project-Led Education (PLE) process implemented in the Integrated Master degree on Industrial Management and Engineering (IME) at University of Minho. PLE is an innovative educational methodology which makes use of active learning, promoting higher levels of motivation and students’ autonomy. The assessment model is based on multiple evaluation components with different weights. Each component can be evaluated by several teachers involved in different Project Supporting Courses (PSC). This model can be affected by different types of errors, namely: (1) rounding errors, and (2) non-uniform criteria for rounding the grades. A rigorous analysis of the assessment model was made and the rounding errors involved in each project component were characterized and measured. This resulted in a global maximum error of 0.308 on the individual student project grade, on a 0 to 100 scale. This analysis intended to improve not only the reliability of the assessment results, but also teachers’ awareness of this problem. Recommendations are also made in order to improve the assessment model and reduce the rounding errors as much as possible.
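    The generic worst-case bound for rounding in a weighted assessment model can be computed directly. The weights below are hypothetical; the paper's 0.308 figure comes from its specific component structure and rounding criteria, not from this simple bound:

```python
from fractions import Fraction

# Hypothetical component weights (must sum to 1); each component grade is
# rounded to the nearest integer on a 0-100 scale, so each carries a
# rounding error of at most 0.5 before weighting.
weights = [Fraction(w, 100) for w in (40, 25, 20, 15)]

# Worst case: every component rounds in the same direction by 0.5,
# then the weighted total is itself rounded once more (another 0.5).
per_component = sum(w * Fraction(1, 2) for w in weights)   # = 0.5
worst_case = per_component + Fraction(1, 2)                # = 1.0
```

    Exact rational arithmetic makes the propagation transparent: because the weights sum to one, the weighted component errors can contribute at most 0.5, and the final rounding adds at most another 0.5.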

  4. Multiobjective optimization framework for landmark measurement error correction in three-dimensional cephalometric tomography.

    Science.gov (United States)

    DeCesare, A; Secanell, M; Lagravère, M O; Carey, J

    2013-01-01

    The purpose of this study is to minimize errors that occur when using a four vs six landmark superimpositioning method in the cranial base to define the co-ordinate system. Cone beam CT volumetric data from ten patients were used for this study. Co-ordinate system transformations were performed. A co-ordinate system was constructed using two planes defined by four anatomical landmarks located by an orthodontist. A second co-ordinate system was constructed using four anatomical landmarks that are corrected using a numerical optimization algorithm for any landmark location operator error using information from six landmarks. The optimization algorithm minimizes the relative distance and angle between the known fixed points in the two images to find the correction. Measurement errors and co-ordinates in all axes were obtained for each co-ordinate system. Significant improvement is observed after using the landmark correction algorithm to position the final co-ordinate system. The errors found in a previous study are significantly reduced. Errors found were between 1 mm and 2 mm. When analysing real patient data, it was found that the 6-point correction algorithm reduced errors between images and increased intrapoint reliability. A novel method of optimizing the overlay of three-dimensional images using a 6-point correction algorithm was introduced and examined. This method demonstrated greater reliability and reproducibility than the previous 4-point correction algorithm.

  5. Methods for determining the effect of flatness deviations, eccentricity and pyramidal errors on angle measurements

    CSIR Research Space (South Africa)

    Kruger, OA

    2000-01-01

    Full Text Available on face-to-face angle measurements. The results show that flatness and eccentricity deviations have less effect on angle measurements than do pyramidal errors. 1. Introduction Polygons and angle blocks are the most important transfer standards in the field of angle metrology. Polygons are used by national metrology institutes (NMIs) as transfer standards to industry, where they are used in conjunction with autocollimators to calibrate index tables, rotary tables and other forms of angle-measuring equipment...

  6. Nano-metrology: The art of measuring X-ray mirrors with slope errors <100 nrad

    Energy Technology Data Exchange (ETDEWEB)

    Alcock, Simon G., E-mail: simon.alcock@diamond.ac.uk; Nistea, Ioana; Sawhney, Kawal [Diamond Light Source Ltd., Harwell Science and Innovation Campus, Didcot, Oxfordshire OX11 0DE (United Kingdom)

    2016-05-15

    We present a comprehensive investigation of the systematic and random errors of the nano-metrology instruments used to characterize synchrotron X-ray optics at Diamond Light Source. With experimental skill and careful analysis, we show that these instruments used in combination are capable of measuring state-of-the-art X-ray mirrors. Examples are provided of how Diamond metrology data have helped to achieve slope errors of <100 nrad for optical systems installed on synchrotron beamlines, including: iterative correction of substrates using ion beam figuring and optimal clamping of monochromator grating blanks in their holders. Simulations demonstrate how random noise from the Diamond-NOM’s autocollimator adds into the overall measured value of the mirror’s slope error, and thus predict how many averaged scans are required to accurately characterize different grades of mirror.

  7. Nano-metrology: The art of measuring X-ray mirrors with slope errors <100 nrad

    International Nuclear Information System (INIS)

    Alcock, Simon G.; Nistea, Ioana; Sawhney, Kawal

    2016-01-01

    We present a comprehensive investigation of the systematic and random errors of the nano-metrology instruments used to characterize synchrotron X-ray optics at Diamond Light Source. With experimental skill and careful analysis, we show that these instruments used in combination are capable of measuring state-of-the-art X-ray mirrors. Examples are provided of how Diamond metrology data have helped to achieve slope errors of <100 nrad for optical systems installed on synchrotron beamlines, including: iterative correction of substrates using ion beam figuring and optimal clamping of monochromator grating blanks in their holders. Simulations demonstrate how random noise from the Diamond-NOM’s autocollimator adds into the overall measured value of the mirror’s slope error, and thus predict how many averaged scans are required to accurately characterize different grades of mirror.

  8. Nano-metrology: The art of measuring X-ray mirrors with slope errors <100 nrad.

    Science.gov (United States)

    Alcock, Simon G; Nistea, Ioana; Sawhney, Kawal

    2016-05-01

    We present a comprehensive investigation of the systematic and random errors of the nano-metrology instruments used to characterize synchrotron X-ray optics at Diamond Light Source. With experimental skill and careful analysis, we show that these instruments used in combination are capable of measuring state-of-the-art X-ray mirrors. Examples are provided of how Diamond metrology data have helped to achieve slope errors of <100 nrad for optical systems installed on synchrotron beamlines, including: iterative correction of substrates using ion beam figuring and optimal clamping of monochromator grating blanks in their holders. Simulations demonstrate how random noise from the Diamond-NOM's autocollimator adds into the overall measured value of the mirror's slope error, and thus predict how many averaged scans are required to accurately characterize different grades of mirror.
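    The closing point, that random autocollimator noise averages down over repeated scans, follows the usual 1/√N law for uncorrelated noise. A minimal sketch (the 150 nrad and 30 nrad figures are illustrative, not Diamond-NOM specifications):

```python
import math

# Random per-scan noise averages down as 1/sqrt(N); to push its
# contribution to the measured slope error below a target level, the
# number of averaged scans needed is:
def scans_needed(noise_per_scan_nrad, target_nrad):
    return math.ceil((noise_per_scan_nrad / target_nrad) ** 2)

# e.g. 150 nrad single-scan noise, 30 nrad target contribution
n = scans_needed(150.0, 30.0)   # 25 scans
```

    The quadratic growth is the practical point: halving the target noise contribution quadruples the number of scans, which is why characterizing the best mirror grades is so time-consuming.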

  9. Accounting for measurement error in log regression models with applications to accelerated testing.

    Directory of Open Access Journals (Sweden)

    Robert Richardson

    Full Text Available In regression settings, parameter estimates will be biased when the explanatory variables are measured with error. This bias can significantly affect modeling goals. In particular, accelerated lifetime testing involves an extrapolation of the fitted model, and a small amount of bias in parameter estimates may result in a significant increase in the bias of the extrapolated predictions. Additionally, bias may arise when the stochastic component of a log regression model is assumed to be multiplicative when the actual underlying stochastic component is additive. To account for these possible sources of bias, a log regression model with measurement error and additive error is approximated by a weighted regression model which can be estimated using Iteratively Re-weighted Least Squares. Using the reduced Eyring equation in an accelerated testing setting, the model is compared to previously accepted approaches to modeling accelerated testing data with both simulations and real data.

  10. Accounting for measurement error in log regression models with applications to accelerated testing.

    Science.gov (United States)

    Richardson, Robert; Tolley, H Dennis; Evenson, William E; Lunt, Barry M

    2018-01-01

    In regression settings, parameter estimates will be biased when the explanatory variables are measured with error. This bias can significantly affect modeling goals. In particular, accelerated lifetime testing involves an extrapolation of the fitted model, and a small amount of bias in parameter estimates may result in a significant increase in the bias of the extrapolated predictions. Additionally, bias may arise when the stochastic component of a log regression model is assumed to be multiplicative when the actual underlying stochastic component is additive. To account for these possible sources of bias, a log regression model with measurement error and additive error is approximated by a weighted regression model which can be estimated using Iteratively Re-weighted Least Squares. Using the reduced Eyring equation in an accelerated testing setting, the model is compared to previously accepted approaches to modeling accelerated testing data with both simulations and real data.

  11. Estimation of Dynamic Errors in Laser Optoelectronic Dimension Gauges for Geometric Measurement of Details

    Directory of Open Access Journals (Sweden)

    Khasanov Zimfir

    2018-01-01

    Full Text Available The article reviews the capabilities and particularities of an approach to the improvement of the metrological characteristics of fiber-optic pressure sensors (FOPS), based on the estimation of dynamic errors in laser optoelectronic dimension gauges for geometric measurement of details. It is shown that the proposed criteria render new methods for the conjugation of optoelectronic converters in the dimension gauge for geometric measurements, in order to reduce the speed and volume requirements for the Random Access Memory (RAM) of the video controller which processes the signal. It is found that the lower the relative error, the higher the interrogation speed of the CCD array. It is shown that, thus, the maximum achievable dynamic accuracy characteristics of the optoelectronic gauge are determined by the following conditions: the parameter stability of the electronic circuits in the CCD array and the microprocessor calculator; the linearity of characteristics; and the error dynamics and noise in all electronic circuits of the CCD array and microprocessor calculator.

  12. Computational fluid dynamics analysis and experimental study of a low measurement error temperature sensor used in climate observation.

    Science.gov (United States)

    Yang, Jie; Liu, Qingquan; Dai, Wei

    2017-02-01

    To improve the air temperature observation accuracy, a low measurement error temperature sensor is proposed. A computational fluid dynamics (CFD) method is implemented to obtain temperature errors under various environmental conditions. Then, a temperature error correction equation is obtained by fitting the CFD results using a genetic algorithm method. The low measurement error temperature sensor, a naturally ventilated radiation shield, a thermometer screen, and an aspirated temperature measurement platform are characterized in the same environment to conduct the intercomparison. The aspirated platform served as an air temperature reference. The mean temperature errors of the naturally ventilated radiation shield and the thermometer screen are 0.74 °C and 0.37 °C, respectively. In contrast, the mean temperature error of the low measurement error temperature sensor is 0.11 °C. The mean absolute error and the root mean square error between the corrected results and the measured results are 0.008 °C and 0.01 °C, respectively. The correction equation allows the temperature error of the low measurement error temperature sensor to be reduced by approximately 93.8%. The low measurement error temperature sensor proposed in this research may be helpful to provide a relatively accurate air temperature result.
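The summary statistics quoted above (mean absolute error, root mean square error, and percentage error reduction) follow from their standard definitions. A sketch with illustrative numbers, not the paper's CFD or field data:

```python
import numpy as np

# Hypothetical temperature errors (deg C) before and after applying a fitted
# correction equation; the values are illustrative only.
raw_error = np.array([0.12, 0.10, 0.11, 0.13, 0.09])
residual = np.array([0.010, -0.008, 0.006, 0.012, -0.009])

mae = np.mean(np.abs(residual))                  # mean absolute error
rmse = np.sqrt(np.mean(residual ** 2))           # root mean square error
reduction = 100.0 * (1.0 - mae / np.mean(np.abs(raw_error)))  # percent reduction
```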

  13. Measured long-range repulsive Casimir–Lifshitz forces

    Science.gov (United States)

    Munday, J. N.; Capasso, Federico; Parsegian, V. Adrian

    2014-01-01

    Quantum fluctuations create intermolecular forces that pervade macroscopic bodies1–3. At molecular separations of a few nanometres or less, these interactions are the familiar van der Waals forces4. However, as recognized in the theories of Casimir, Polder and Lifshitz5–7, at larger distances and between macroscopic condensed media they reveal retardation effects associated with the finite speed of light. Although these long-range forces exist within all matter, only attractive interactions have so far been measured between material bodies8–11. Here we show experimentally that, in accord with theoretical prediction12, the sign of the force can be changed from attractive to repulsive by suitable choice of interacting materials immersed in a fluid. The measured repulsive interaction is found to be weaker than the attractive. However, in both cases the magnitude of the force increases with decreasing surface separation. Repulsive Casimir–Lifshitz forces could allow quantum levitation of objects in a fluid and lead to a new class of switchable nanoscale devices with ultra-low static friction13–15. PMID:19129843

  14. Measured long-range repulsive Casimir-Lifshitz forces.

    Science.gov (United States)

    Munday, J N; Capasso, Federico; Parsegian, V Adrian

    2009-01-08

    Quantum fluctuations create intermolecular forces that pervade macroscopic bodies. At molecular separations of a few nanometres or less, these interactions are the familiar van der Waals forces. However, as recognized in the theories of Casimir, Polder and Lifshitz, at larger distances and between macroscopic condensed media they reveal retardation effects associated with the finite speed of light. Although these long-range forces exist within all matter, only attractive interactions have so far been measured between material bodies. Here we show experimentally that, in accord with theoretical prediction, the sign of the force can be changed from attractive to repulsive by suitable choice of interacting materials immersed in a fluid. The measured repulsive interaction is found to be weaker than the attractive. However, in both cases the magnitude of the force increases with decreasing surface separation. Repulsive Casimir-Lifshitz forces could allow quantum levitation of objects in a fluid and lead to a new class of switchable nanoscale devices with ultra-low static friction.

  15. Ionospheric Coherence Bandwidth Measurements in the Lower VHF Frequency Range

    Science.gov (United States)

    Suszcynsky, D. M.; Light, M. E.; Pigue, M. J.

    2015-12-01

    The United States Department of Energy's Radio Frequency Propagation (RFProp) experiment consists of a satellite-based radio receiver suite to study various aspects of trans-ionospheric signal propagation and detection in four frequency bands, 2 - 55 MHz, 125 - 175 MHz, 365 - 415 MHz and 820 - 1100 MHz. In this paper, we present simultaneous ionospheric coherence bandwidth and S4 scintillation index measurements in the 32 - 44 MHz frequency range collected during the ESCINT equatorial scintillation experiment. 40-MHz continuous wave (CW) and 32 - 44 MHz swept frequency signals were transmitted simultaneously to the RFProp receiver suite from the Reagan Test Site at Kwajalein Atoll in the Marshall Islands (8.7° N, 167.7° E) in three separate campaigns during the 2014 and 2015 equinoxes. Results show coherence bandwidths as small as ~ 1 kHz for strong scintillation (S4 > 0.7) and indicate a high degree of ionospheric variability and irregularity on 10-m spatial scales. Spread-Doppler clutter effects arising from preferential ray paths to the satellite due to refraction off of isolated density irregularities are also observed and are dominant at low elevation angles. The results are compared to previous measurements and available scaling laws.
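The S4 scintillation index referred to above is the normalized standard deviation of received signal intensity, S4 = sqrt((<I^2> - <I>^2) / <I>^2). A minimal sketch on synthetic intensity series (not the ESCINT data); for saturated Rayleigh fading the intensity is exponentially distributed and S4 approaches 1:

```python
import numpy as np

def s4_index(intensity):
    """S4 index: normalized standard deviation of signal intensity."""
    I = np.asarray(intensity, dtype=float)
    return float(np.sqrt((np.mean(I ** 2) - np.mean(I) ** 2) / np.mean(I) ** 2))

rng = np.random.default_rng(1)
weak = 1.0 + 0.1 * rng.standard_normal(100_000)   # weak scintillation, S4 ~ 0.1
strong = rng.exponential(1.0, 100_000)            # saturated fading, S4 ~ 1
```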

  16. [Errors in medicine. Causes, impact and improvement measures to improve patient safety].

    Science.gov (United States)

    Waeschle, R M; Bauer, M; Schmidt, C E

    2015-09-01

    The guarantee of quality of care and patient safety is of major importance in hospitals, even though increased economic pressure and work intensification are ubiquitously present. Nevertheless, adverse events still occur in 3-4 % of hospital stays and of these 25-50 % are estimated to be avoidable. The identification of possible causes of error and the development of measures for the prevention of medical errors are essential for patient safety. The implementation and continuous development of a constructive culture of error tolerance are fundamental. The origins of errors can be differentiated into systemic latent and individual active causes, and components of both categories are typically involved when an error occurs. Systemic causes are, for example, outdated structural environments, lack of clinical standards and low personnel density. These causes arise far away from the patient, e.g. in management decisions, and can remain unrecognized for a long time. Individual causes involve, e.g. confirmation bias, fixation errors and prospective memory failure. These causes have a direct impact on patient care and can result in immediate injury to patients. Stress, unclear information, complex systems and a lack of professional experience can promote individual causes. Awareness of possible causes of error is a fundamental precondition to establishing appropriate countermeasures. Error prevention should include actions directly affecting the causes of error and includes checklists and standard operating procedures (SOP) to avoid fixation and prospective memory failure and team resource management to improve communication and the generation of collective mental models. Critical incident reporting systems (CIRS) provide the opportunity to learn from previous incidents without resulting in injury to patients. Information technology (IT) support systems, such as the computerized physician order entry system, assist in the prevention of medication errors by providing

  17. Assessment and Calibration of Ultrasonic Measurement Errors in Estimating Weathering Index of Stone Cultural Heritage

    Science.gov (United States)

    Lee, Y.; Keehm, Y.

    2011-12-01

    Estimating the degree of weathering in stone cultural heritage, such as pagodas and statues, is very important for planning conservation and restoration. Ultrasonic measurement is one of the most commonly used techniques to evaluate the weathering index of stone cultural properties, since it is easy to use and non-destructive. Typically we use a portable ultrasonic device, PUNDIT, with exponential sensors. However, there are many factors that cause errors in measurements, such as operators, sensor layouts or measurement directions. In this study, we carried out a variety of measurements with different operators (male and female), different sensor layouts (direct and indirect), and sensor directions (anisotropy). For operator bias, we found no significant differences by the operator's sex, while the pressure an operator exerts can create larger errors in measurements. Calibrating with a standard sample for each operator is essential in this case. For the sensor layout, we found that the indirect measurement (commonly used for cultural properties, since the direct measurement is difficult in most cases) gives lower velocity than the real one. We found that the correction coefficient differs slightly for different types of rock: 1.50 for granite and sandstone and 1.46 for marble. For the sensor directions, we found that many rocks have slight anisotropy in their ultrasonic velocity, though they are considered isotropic on a macroscopic scale. Thus averaging four different directional measurements (0°, 45°, 90°, 135°) gives much smaller errors (the variance is 2-3 times smaller). In conclusion, we quantitatively reported the errors in ultrasonic measurement of stone cultural properties from various sources and suggested the amount of correction and procedures to calibrate the measurements. Acknowledgement: This study, which forms a part of the project, has been achieved with the support of national R&D project, which has been hosted by
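The corrections described above amount to simple arithmetic: multiply an indirect reading by the rock-specific coefficient, and average the four directional readings to suppress anisotropy. A sketch with hypothetical velocities (the coefficients 1.50 and 1.46 are the ones reported in the abstract):

```python
import numpy as np

# Hypothetical ultrasonic velocities (m/s) in four directions on a slightly
# anisotropic sample; averaging suppresses the directional dependence.
v_directions = np.array([3120.0, 3080.0, 3040.0, 3100.0])  # 0, 45, 90, 135 deg
v_avg = v_directions.mean()

# Indirect (surface) measurements read low; apply the reported coefficient.
COEFF_GRANITE_SANDSTONE = 1.50
COEFF_MARBLE = 1.46
v_indirect = 2060.0                                   # hypothetical reading
v_corrected = COEFF_GRANITE_SANDSTONE * v_indirect    # granite sample
```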

  18. Reliability and measurement error of sagittal spinal motion parameters in 220 patients with chronic low back pain using a three-dimensional measurement device.

    Science.gov (United States)

    Mieritz, Rune M; Bronfort, Gert; Jakobsen, Markus D; Aagaard, Per; Hartvigsen, Jan

    2014-09-01

    A basic premise for any instrument measuring spinal motion is that reliable outcomes can be obtained on a relevant sample under standardized conditions. The purpose of this study was to assess the overall reliability and measurement error of regional spinal sagittal plane motion in patients with chronic low back pain (LBP), and then to evaluate the influence of body mass index, examiner, gender, stability of pain, and pain distribution on reliability and measurement error. This study comprises a test-retest design separated by 7 to 14 days. The patient cohort consisted of 220 individuals with chronic LBP. Kinematics of the lumbar spine were sampled during standardized spinal extension-flexion testing using a 6-df instrumented spatial linkage system. Test-retest reliability and measurement error were evaluated using intraclass correlation coefficients (ICC(1,1)) and Bland-Altman limits of agreement (LOAs). The overall test-retest reliability (ICC(1,1)) for the various motion parameters ranged from 0.51 to 0.70, and relatively wide LOAs were observed for all parameters. Reliability measures in patient subgroups (ICC(1,1)) ranged between 0.34 and 0.77. In general, greater ICC(1,1) coefficients and smaller LOAs were found in subgroups of patients examined by the same examiner, patients with a stable pain level, patients with a body mass index below 30 kg/m(2), patients who were men, and patients in Quebec Task Force classification Group 1. This study shows that sagittal plane kinematic data from patients with chronic LBP may be sufficiently reliable for measurements on groups of patients. However, because of the large LOAs, this test procedure appears unusable at the individual patient level. Furthermore, reliability and measurement error vary substantially among subgroups of patients. Copyright © 2014 Elsevier Inc. All rights reserved.
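The two statistics used above can be computed directly from paired test-retest data: ICC(1,1) from a one-way random-effects ANOVA, and Bland-Altman 95% limits of agreement from the within-subject differences. A sketch on simulated scores (illustrative, not the study's kinematic data):

```python
import numpy as np

def icc_1_1(test, retest):
    """One-way random-effects ICC(1,1) for a test-retest design (k = 2)."""
    data = np.column_stack([test, retest])
    n, k = data.shape
    row_means = data.mean(axis=1)
    ms_between = k * np.sum((row_means - data.mean()) ** 2) / (n - 1)
    ms_within = np.sum((data - row_means[:, None]) ** 2) / (n * (k - 1))
    return (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)

def limits_of_agreement(test, retest):
    """Bland-Altman 95% limits of agreement for the paired differences."""
    d = np.asarray(retest, float) - np.asarray(test, float)
    return d.mean() - 1.96 * d.std(ddof=1), d.mean() + 1.96 * d.std(ddof=1)

rng = np.random.default_rng(2)
true_rom = rng.normal(60.0, 10.0, 220)        # hypothetical motion scores
test = true_rom + rng.normal(0.0, 7.0, 220)   # day-1 measurement error
retest = true_rom + rng.normal(0.0, 7.0, 220)
```

With a between-subject SD of 10 and a within-subject error SD of 7, the expected ICC(1,1) is 100/149, about 0.67, in the range reported above, while the limits of agreement span roughly ±19, illustrating how group-level reliability can coexist with poor individual-level precision.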

  19. Statistical theory for estimating sampling errors of regional radiation averages based on satellite measurements

    Science.gov (United States)

    Smith, G. L.; Bess, T. D.; Minnis, P.

    1983-01-01

    The processes which determine the weather and climate are driven by the radiation received by the earth and the radiation subsequently emitted. A knowledge of the absorbed and emitted components of radiation is thus fundamental for the study of these processes. In connection with the desire to improve the quality of long-range forecasting, NASA is developing the Earth Radiation Budget Experiment (ERBE), consisting of a three-channel scanning radiometer and a package of nonscanning radiometers. A set of these instruments is to be flown on both the NOAA-F and NOAA-G spacecraft, in sun-synchronous orbits, and on an Earth Radiation Budget Satellite. The purpose of the scanning radiometer is to obtain measurements from which the average reflected solar radiant exitance and the average earth-emitted radiant exitance at a reference level can be established. The estimate of regional average exitance obtained will not exactly equal the true value of the regional average exitance, but will differ due to spatial sampling. A method is presented for evaluating this spatial sampling error.

  20. Error analysis for intrinsic quality factor measurement in superconducting radio frequency resonators.

    Science.gov (United States)

    Melnychuk, O; Grassellino, A; Romanenko, A

    2014-12-01

    In this paper, we discuss error analysis for intrinsic quality factor (Q0) and accelerating gradient (Eacc) measurements in superconducting radio frequency (SRF) resonators. The analysis is applicable for cavity performance tests that are routinely performed at SRF facilities worldwide. We review the sources of uncertainties along with the assumptions on their correlations and present uncertainty calculations with a more complete procedure for treatment of correlations than in previous publications [T. Powers, in Proceedings of the 12th Workshop on RF Superconductivity, SuP02 (Elsevier, 2005), pp. 24-27]. Applying this approach to cavity data collected at Vertical Test Stand facility at Fermilab, we estimated total uncertainty for both Q0 and Eacc to be at the level of approximately 4% for input coupler coupling parameter β1 in the [0.5, 2.5] range. Above 2.5 (below 0.5) Q0 uncertainty increases (decreases) with β1 whereas Eacc uncertainty, in contrast with results in Powers [in Proceedings of the 12th Workshop on RF Superconductivity, SuP02 (Elsevier, 2005), pp. 24-27], is independent of β1. Overall, our estimated Q0 uncertainty is approximately half as large as that in Powers [in Proceedings of the 12th Workshop on RF Superconductivity, SuP02 (Elsevier, 2005), pp. 24-27].
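The role of correlations in the uncertainty budget can be sketched with textbook propagation of uncertainty for a product or quotient of measured quantities (this is the generic formula, not the paper's full cavity-specific treatment):

```python
import numpy as np

def combined_rel_uncertainty(rel_u, corr=None):
    """Relative uncertainty of a product/quotient of measured quantities:
    u_total^2 = u^T C u, where C holds the pairwise correlations."""
    u = np.asarray(rel_u, dtype=float)
    C = np.eye(len(u)) if corr is None else np.asarray(corr, dtype=float)
    return float(np.sqrt(u @ C @ u))

# Two hypothetical 2% power-measurement uncertainties:
u_indep = combined_rel_uncertainty([0.02, 0.02])               # quadrature sum
u_corr = combined_rel_uncertainty([0.02, 0.02],
                                  [[1.0, 1.0],
                                   [1.0, 1.0]])                # fully correlated
```

Treating correlated sources as independent can understate or overstate the total, depending on the sign of the correlation and whether the quantities multiply or divide; carrying such correlation terms through the Q0 and Eacc formulas is what yields the roughly 4% figures above.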

  1. Potential errors in optical density measurements due to scanning side in EBT and EBT2 Gafchromic film dosimetry.

    Science.gov (United States)

    Desroches, Joannie; Bouchard, Hugo; Lacroix, Frédéric

    2010-04-01

    The purpose of this study is to determine the effect on the measured optical density of scanning on either side of a Gafchromic EBT and EBT2 film using an Epson (Epson Canada Ltd., Toronto, Ontario) 10000XL flat bed scanner. Calibration curves were constructed using EBT2 film scanned in landscape orientation in both reflection and transmission mode on an Epson 10000XL scanner. Calibration curves were also constructed using EBT film. Potential errors due to an optical density difference from scanning the film on either side ("face up" or "face down") were simulated. Scanning the film face up or face down on the scanner bed while keeping the film angular orientation constant affects the measured optical density when scanning in reflection mode. In contrast, no statistically significant effect was seen when scanning in transmission mode. This effect can significantly affect relative and absolute dose measurements. As an application example, the authors demonstrate potential errors of 17.8% by inverting the film scanning side on the gamma index for 3%-3 mm criteria on a head and neck intensity modulated radiotherapy plan, and errors in absolute dose measurements ranging from 10% to 35% between 2 and 5 Gy. Process consistency is the key to obtaining accurate and precise results in Gafchromic film dosimetry. When scanning in reflection mode, care must be taken to place the film consistently on the same side on the scanner bed.

  2. Technical Note: Potential errors in optical density measurements due to scanning side in EBT and EBT2 Gafchromic film dosimetry

    International Nuclear Information System (INIS)

    Desroches, Joannie; Bouchard, Hugo; Lacroix, Frederic

    2010-01-01

    Purpose: The purpose of this study is to determine the effect on the measured optical density of scanning on either side of a Gafchromic EBT and EBT2 film using an Epson (Epson Canada Ltd., Toronto, Ontario) 10000XL flat bed scanner. Methods: Calibration curves were constructed using EBT2 film scanned in landscape orientation in both reflection and transmission mode on an Epson 10000XL scanner. Calibration curves were also constructed using EBT film. Potential errors due to an optical density difference from scanning the film on either side ("face up" or "face down") were simulated. Results: Scanning the film face up or face down on the scanner bed while keeping the film angular orientation constant affects the measured optical density when scanning in reflection mode. In contrast, no statistically significant effect was seen when scanning in transmission mode. This effect can significantly affect relative and absolute dose measurements. As an application example, the authors demonstrate potential errors of 17.8% by inverting the film scanning side on the gamma index for 3%-3 mm criteria on a head and neck intensity modulated radiotherapy plan, and errors in absolute dose measurements ranging from 10% to 35% between 2 and 5 Gy. Conclusions: Process consistency is the key to obtaining accurate and precise results in Gafchromic film dosimetry. When scanning in reflection mode, care must be taken to place the film consistently on the same side on the scanner bed.

  3. Analysis of interactive fixed effects dynamic linear panel regression with measurement error

    OpenAIRE

    Nayoung Lee; Hyungsik Roger Moon; Martin Weidner

    2011-01-01

    This paper studies a simple dynamic panel linear regression model with interactive fixed effects in which the variable of interest is measured with error. To estimate the dynamic coefficient, we consider the least-squares minimum distance (LS-MD) estimation method.

  4. Measurement-device-independent quantum key distribution with correlated source-light-intensity errors

    Science.gov (United States)

    Jiang, Cong; Yu, Zong-Wen; Wang, Xiang-Bin

    2018-04-01

    We present an analysis for measurement-device-independent quantum key distribution with correlated source-light-intensity errors. Numerical results show that the results here can greatly improve the key rate especially with large intensity fluctuations and channel attenuation compared with prior results if the intensity fluctuations of different sources are correlated.

  5. The Combined Effects of Measurement Error and Omitting Confounders in the Single-Mediator Model.

    Science.gov (United States)

    Fritz, Matthew S; Kenny, David A; MacKinnon, David P

    2016-01-01

    Mediation analysis requires a number of strong assumptions be met in order to make valid causal inferences. Failing to account for violations of these assumptions, such as not modeling measurement error or omitting a common cause of the effects in the model, can bias the parameter estimates of the mediated effect. When the independent variable is perfectly reliable, for example when participants are randomly assigned to levels of treatment, measurement error in the mediator tends to underestimate the mediated effect, while the omission of a confounding variable of the mediator-to-outcome relation tends to overestimate the mediated effect. Violations of these two assumptions often co-occur, however, in which case the mediated effect could be overestimated, underestimated, or even, in very rare circumstances, unbiased. To explore the combined effect of measurement error and omitted confounders in the same model, the effect of each violation on the single-mediator model is first examined individually. Then the combined effect of having measurement error and omitted confounders in the same model is discussed. Throughout, an empirical example is provided to illustrate the effect of violating these assumptions on the mediated effect.
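The attenuation half of the story is easy to reproduce in a stripped-down simulation: with no direct effect and no confounder, adding noise to the mediator leaves the a path unchanged but shrinks the b path by the mediator's reliability. The setup below is illustrative only (simple rather than partial regression, invented values):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 200_000
a, b = 0.5, 0.4                        # true paths X -> M and M -> Y
x = rng.standard_normal(n)
m = a * x + rng.standard_normal(n)
y = b * m + rng.standard_normal(n)

reliability = 0.7                      # var(M) / var(M observed)
noise_sd = np.sqrt(np.var(m) * (1.0 - reliability) / reliability)
m_obs = m + rng.normal(0.0, noise_sd, n)

def slope(u, v):
    """Simple-regression slope of v on u."""
    return np.cov(u, v)[0, 1] / np.var(u)

a_hat = slope(x, m_obs)        # ~ a: error in M does not bias the a path
b_hat = slope(m_obs, y)        # ~ reliability * b: attenuated toward zero
ab_obs = a_hat * b_hat         # observed mediated effect ~ 0.7 * a * b
```

An omitted mediator-to-outcome confounder would push b_hat in the other direction, which is why the combination of the two violations can leave the mediated effect over- or underestimated.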

  6. Multiple imputation to account for measurement error in marginal structural models

    Science.gov (United States)

    Edwards, Jessie K.; Cole, Stephen R.; Westreich, Daniel; Crane, Heidi; Eron, Joseph J.; Mathews, W. Christopher; Moore, Richard; Boswell, Stephen L.; Lesko, Catherine R.; Mugavero, Michael J.

    2015-01-01

    Background Marginal structural models are an important tool for observational studies. These models typically assume that variables are measured without error. We describe a method to account for differential and non-differential measurement error in a marginal structural model. Methods We illustrate the method estimating the joint effects of antiretroviral therapy initiation and current smoking on all-cause mortality in a United States cohort of 12,290 patients with HIV followed for up to 5 years between 1998 and 2011. Smoking status was likely measured with error, but a subset of 3686 patients who reported smoking status on separate questionnaires composed an internal validation subgroup. We compared a standard joint marginal structural model fit using inverse probability weights to a model that also accounted for misclassification of smoking status using multiple imputation. Results In the standard analysis, current smoking was not associated with increased risk of mortality. After accounting for misclassification, current smoking without therapy was associated with increased mortality [hazard ratio (HR): 1.2 (95% CI: 0.6, 2.3)]. The HR for current smoking and therapy (0.4 (95% CI: 0.2, 0.7)) was similar to the HR for no smoking and therapy (0.4; 95% CI: 0.2, 0.6). Conclusions Multiple imputation can be used to account for measurement error in concert with methods for causal inference to strengthen results from observational studies. PMID:26214338

  7. Estimating the Persistence and the Autocorrelation Function of a Time Series that is Measured with Error

    DEFF Research Database (Denmark)

    Hansen, Peter Reinhard; Lunde, Asger

    2014-01-01

    An economic time series can often be viewed as a noisy proxy for an underlying economic variable. Measurement errors will influence the dynamic properties of the observed process and may conceal the persistence of the underlying time series. In this paper we develop instrumental variable (IV...

  8. Multiple Imputation to Account for Measurement Error in Marginal Structural Models.

    Science.gov (United States)

    Edwards, Jessie K; Cole, Stephen R; Westreich, Daniel; Crane, Heidi; Eron, Joseph J; Mathews, W Christopher; Moore, Richard; Boswell, Stephen L; Lesko, Catherine R; Mugavero, Michael J

    2015-09-01

    Marginal structural models are an important tool for observational studies. These models typically assume that variables are measured without error. We describe a method to account for differential and nondifferential measurement error in a marginal structural model. We illustrate the method estimating the joint effects of antiretroviral therapy initiation and current smoking on all-cause mortality in a United States cohort of 12,290 patients with HIV followed for up to 5 years between 1998 and 2011. Smoking status was likely measured with error, but a subset of 3,686 patients who reported smoking status on separate questionnaires composed an internal validation subgroup. We compared a standard joint marginal structural model fit using inverse probability weights to a model that also accounted for misclassification of smoking status using multiple imputation. In the standard analysis, current smoking was not associated with increased risk of mortality. After accounting for misclassification, current smoking without therapy was associated with increased mortality (hazard ratio [HR]: 1.2 [95% confidence interval [CI] = 0.6, 2.3]). The HR for current smoking and therapy [0.4 (95% CI = 0.2, 0.7)] was similar to the HR for no smoking and therapy (0.4; 95% CI = 0.2, 0.6). Multiple imputation can be used to account for measurement error in concert with methods for causal inference to strengthen results from observational studies.
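Once smoking status has been multiply imputed, the per-imputation estimates must be pooled, and the standard tool is Rubin's rules. A minimal sketch with hypothetical log-hazard-ratio estimates and variances (not the study's values):

```python
import numpy as np

def pool_rubin(estimates, variances):
    """Rubin's rules: pooled point estimate and total variance across
    m imputed datasets (within- plus inflated between-imputation variance)."""
    q = np.asarray(estimates, dtype=float)
    u = np.asarray(variances, dtype=float)
    m = len(q)
    q_bar = q.mean()                       # pooled point estimate
    within = u.mean()                      # average within-imputation variance
    between = q.var(ddof=1)                # between-imputation variance
    total = within + (1.0 + 1.0 / m) * between
    return q_bar, total

# Hypothetical log-HR estimates from m = 5 imputations of smoking status
q_bar, total_var = pool_rubin([0.18, 0.22, 0.15, 0.20, 0.25],
                              [0.09, 0.08, 0.10, 0.09, 0.08])
```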

  9. The Combined Effects of Measurement Error and Omitting Confounders in the Single-Mediator Model

    Science.gov (United States)

    Fritz, Matthew S.; Kenny, David A.; MacKinnon, David P.

    2016-01-01

    Mediation analysis requires a number of strong assumptions be met in order to make valid causal inferences. Failing to account for violations of these assumptions, such as not modeling measurement error or omitting a common cause of the effects in the model, can bias the parameter estimates of the mediated effect. When the independent variable is perfectly reliable, for example when participants are randomly assigned to levels of treatment, measurement error in the mediator tends to underestimate the mediated effect, while the omission of a confounding variable of the mediator to outcome relation tends to overestimate the mediated effect. Violations of these two assumptions often co-occur, however, in which case the mediated effect could be overestimated, underestimated, or even, in very rare circumstances, unbiased. In order to explore the combined effect of measurement error and omitted confounders in the same model, the impact of each violation on the single-mediator model is first examined individually. Then the combined effect of having measurement error and omitted confounders in the same model is discussed. Throughout, an empirical example is provided to illustrate the effect of violating these assumptions on the mediated effect. PMID:27739903

  10. Evaluation of Two Methods for Modeling Measurement Errors When Testing Interaction Effects with Observed Composite Scores

    Science.gov (United States)

    Hsiao, Yu-Yu; Kwok, Oi-Man; Lai, Mark H. C.

    2018-01-01

    Path models with observed composites based on multiple items (e.g., mean or sum score of the items) are commonly used to test interaction effects. Under this practice, researchers generally assume that the observed composites are measured without errors. In this study, we reviewed and evaluated two alternative methods within the structural…

  11. Feasibility of RACT for 3D dose measurement and range verification in a water phantom.

    Science.gov (United States)

    Alsanea, Fahed; Moskvin, Vadim; Stantz, Keith M

    2015-02-01

    The objective of this study is to establish the feasibility of using radiation-induced acoustics to measure the range and Bragg peak dose from a pulsed proton beam. Simulation studies implementing a prototype scanner design based on computed tomographic methods were performed to investigate the sensitivity to proton range and integral dose. Derived from the thermodynamic wave equation, the pressure signals generated by the dose deposited from a pulsed proton beam with a 1 cm lateral beam width and a range of 16, 20, and 27 cm in water were simulated using Monte Carlo methods. The resulting dosimetric images were reconstructed by applying a 3D filtered backprojection algorithm to the pressure signals acquired from a 71-transducer array with a cylindrical geometry (30 × 40 cm) rotated over 2π about its central axis. Dependencies on the detector bandwidth and proton beam pulse width were investigated, after which different noise levels were added to the detector signals (using a 1 μs pulse width and a 0.5 MHz cutoff frequency/hydrophone) to quantify the statistical and systematic errors in the proton range (at 20 cm) and Bragg peak dose (of 1 cGy). The reconstructed radioacoustic computed tomographic image intensity was shown to be linearly correlated with the dose within the Bragg peak, and based on the noise-dependent studies, a detector sensitivity of 38 mPa was necessary to determine the proton range to within 1.0 mm (full-width at half-maximum). These results indicate that ionizing radiation-induced acoustics can be used to verify dose distribution and proton range with centigray sensitivity. Realizing this technology in the clinic has the potential to significantly impact beam commissioning, treatment verification during particle beam therapy and image-guided techniques.

  12. An audit strategy for time-to-event outcomes measured with error: application to five randomized controlled trials in oncology.

    Science.gov (United States)

    Dodd, Lori E; Korn, Edward L; Freidlin, Boris; Gu, Wenjuan; Abrams, Jeffrey S; Bushnell, William D; Canetta, Renzo; Doroshow, James H; Gray, Robert J; Sridhara, Rajeshwari

    2013-10-01

    Measurement error in time-to-event end points complicates interpretation of treatment effects in clinical trials. Non-differential measurement error is unlikely to produce large bias [1]. When error depends on treatment arm, bias is of greater concern. Blinded-independent central review (BICR) of all images from a trial is commonly undertaken to mitigate differential measurement-error bias that may be present in hazard ratios (HRs) based on local evaluations. Similar BICR and local evaluation HRs may provide reassurance about the treatment effect, but BICR adds considerable time and expense to trials. We describe a BICR audit strategy [2] and apply it to five randomized controlled trials to evaluate its use and to provide practical guidelines. The strategy requires BICR on a subset of study subjects, rather than a complete-case BICR, and makes use of an auxiliary-variable estimator. When the effect size is relatively large, the method provides a substantial reduction in the size of the BICRs. In a trial with 722 participants and a HR of 0.48, an average audit of 28% of the data was needed and always confirmed the treatment effect as assessed by local evaluations. More moderate effect sizes and/or smaller trial sizes required larger proportions of audited images, ranging from 57% to 100% for HRs ranging from 0.55 to 0.77 and sample sizes between 209 and 737. The method is developed for a simple random sample of study subjects. In studies with low event rates, more efficient estimation may result from sampling individuals with events at a higher rate. The proposed strategy can greatly decrease the costs and time associated with BICR, by reducing the number of images undergoing review. The savings will depend on the underlying treatment effect and trial size, with larger treatment effects and larger trials requiring smaller proportions of audited data.

  13. Self-test web-based pure-tone audiometry: validity evaluation and measurement error analysis.

    Science.gov (United States)

    Masalski, Marcin; Kręcicki, Tomasz

    2013-04-12

    Potential methods of application of self-administered Web-based pure-tone audiometry conducted at home on a PC with a sound card and ordinary headphones depend on the value of measurement error in such tests. The aim of this research was to determine the measurement error of the hearing threshold determined in the way described above and to identify and analyze factors influencing its value. The evaluation of the hearing threshold was made in three series: (1) tests on a clinical audiometer, (2) self-tests done on a specially calibrated computer under the supervision of an audiologist, and (3) self-tests conducted at home. The research was carried out on a group of 51 participants selected from patients of an audiology outpatient clinic. From the group of 51 patients examined in the first two series, the third series was self-administered at home by 37 subjects (73%). The average difference between the value of the hearing threshold determined in series 1 and in series 2 was -1.54 dB with a standard deviation of 7.88 dB and a Pearson correlation coefficient of .90. Between the first and third series, these values were -1.35 dB ± 10.66 dB and .84, respectively. In series 3, the standard deviation was most influenced by the error connected with the procedure of hearing threshold identification (6.64 dB), the calibration error (6.19 dB), and additionally, at the frequency of 250 Hz, by the frequency nonlinearity error (7.28 dB). The obtained results confirm the possibility of applying Web-based pure-tone audiometry in screening tests. In the future, modifications of the method leading to a decrease in measurement error could broaden the scope of Web-based pure-tone audiometry applications.
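The agreement statistics reported above (mean between-series difference, its standard deviation, and the Pearson correlation) can be reproduced on any paired threshold series. A sketch with hypothetical thresholds in dB (not the study's data):

```python
import numpy as np

# Hypothetical hearing thresholds (dB) for the same ears measured on a
# clinical audiometer (series 1) and in a home self-test (series 3).
clinic = np.array([20.0, 35.0, 50.0, 15.0, 40.0, 25.0, 60.0, 30.0])
home = np.array([25.0, 30.0, 55.0, 15.0, 45.0, 30.0, 55.0, 35.0])

diff = clinic - home
mean_diff = diff.mean()        # systematic bias between the two methods
sd_diff = diff.std(ddof=1)     # spread of the disagreement
r = np.corrcoef(clinic, home)[0, 1]
```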

  14. Research on Error Modelling and Identification of 3 Axis NC Machine Tools Based on Cross Grid Encoder Measurement

    International Nuclear Information System (INIS)

    Du, Z C; Lv, C F; Hong, M S

    2006-01-01

    A new error modelling and identification method based on the cross grid encoder is proposed in this paper. Generally, there are 21 error components in the geometric error of a 3 axis NC machine tool. However, according to our theoretical analysis, the squareness error among the different guide ways affects not only the translational error components but also the rotational ones. Therefore, a revised synthetic error model is developed, and the mapping relationship between the error components and the radial motion error of a round workpiece manufactured on the machine is deduced. This mapping relationship shows that the radial error of circular motion is a comprehensive function of all the error components of the link, worktable, sliding table and main spindle block. To overcome the solution-singularity shortcoming of traditional error-component identification methods, a new multi-step identification method based on cross grid encoder measurement is proposed, built on the kinematic error model of the NC machine tool. First, the 12 translational error components are measured and identified by the least squares method (LSM) while the machine moves linearly in the three orthogonal planes XOY, XOZ and YOZ. Second, circular error tracks are measured with the cross grid encoder Heidenhain KGM 182 while the machine traverses circles in the same planes, and the 9 rotational error components are identified by LSM. Finally, the modelling theory and identification method are validated experimentally on the 3 axis CNC vertical machining centre Cincinnati 750 Arrow, where all 21 error components were successfully measured. This research shows that the multi-step modelling and identification method is well suited to 'on machine measurement'.

  15. Characterization of positional errors and their influence on micro four-point probe measurements on a 100 nm Ru film

    DEFF Research Database (Denmark)

    Kjær, Daniel; Hansen, Ole; Østerberg, Frederik Westergaard

    2015-01-01

    Thin-film sheet resistance measurements at high spatial resolution and on small pads are important and can be realized with micrometer-scale four-point probes. As a result of the small scale, the measurements are affected by electrode position errors. We have characterized the electrode position errors in measurements on a Ru thin film using an Au-coated 12-point probe. We show that the standard deviation of the static electrode position error is on the order of 5 nm, which significantly affects the results of single-configuration measurements. Position-error-corrected dual-configuration measurements, however, are shown to eliminate the effect of position errors to a level limited either by electrical measurement noise or by dynamic position errors. We show that the probe contact points remain almost static on the surface during the measurements (measured on an atomic scale) with a standard...

  16. 10.23 Mcps laser pseudo-code ranging system with 0.33 mm (1σ) pseudo-range measurement precision.

    Science.gov (United States)

    Yu, Xiaonan; Tong, Shoufeng; Zhang, Lei; Dong, Yan; Zhao, Xin; Qiao, Yue

    2017-07-01

    The inter-satellite laser link is the backbone of the next inter-satellite information network, and ranging and communication are the main functions of the inter-satellite laser link. This study focuses on inter-satellite laser ranging based on pseudo-code correlation technology. In this paper, several typical laser-ranging methods are compared, and we determined that the laser pseudo-code ranging architecture is more suitable for the inter-satellite laser communication link. The pseudo-code ranging system is easy to combine with a digital communication system, and we used it to calculate the integer ambiguity by modulating the time information. The main challenge for the ranging system is range precision, which is the main focus of this paper. First, the framework of the pseudo-code ranging system is introduced; the dual one-way ranging architecture is used to eliminate the clock error between the two transceivers, and the uncertainty of the phase detector is then analyzed. In the analysis, the carrier-to-noise ratio and the ranging code rate are constrained by the laser communication link margin and by electronic hardware limitations; the relationship between the sampling depth and the phase detector uncertainty is therefore verified. A series of optical fiber channel laser pseudo-code ranging experiments demonstrated the effect of sampling depth on the ranging precision. By adjusting the depth of storage, for example to 1.6 Mb, we obtained a pseudo-range measurement precision of 0.33 mm (1σ), which is equivalent to 0.0001 times code subdivision of the 10.23 Mcps pseudo-code. This work achieves high precision in pseudo-range measurement, which is the foundation of the inter-satellite laser link.
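
    The dual one-way ranging architecture mentioned above cancels the clock error because the two one-way pseudo-ranges carry the offset between the two terminal clocks with opposite signs. A minimal numeric sketch (an idealized illustration with invented values, ignoring relative motion and propagation asymmetry; not the authors' implementation):

```python
C = 299_792_458.0                 # speed of light, m/s

true_range = 1_000_000.0          # inter-satellite distance, m (invented)
clock_offset = 3.2e-6             # terminal B's clock leads A's by 3.2 us (invented)

# Each one-way pseudo-range absorbs the clock offset with opposite sign
rho_ab = true_range + C * clock_offset   # measured at B for the A -> B signal
rho_ba = true_range - C * clock_offset   # measured at A for the B -> A signal

# Averaging the two measurements cancels the common clock offset
range_est = (rho_ab + rho_ba) / 2
```

    Averaging recovers the geometric range to floating-point precision, while half the difference of the two pseudo-ranges recovers the clock offset itself, which is why the same architecture also supports time synchronization between the transceivers.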

  17. Characterization of model errors in the calculation of tangent heights for atmospheric infrared limb measurements

    Directory of Open Access Journals (Sweden)

    M. Ridolfi

    2014-12-01

    Full Text Available We review the main factors driving the calculation of the tangent height of spaceborne limb measurements: the ray-tracing method, the refractive index model and the assumed atmosphere. We find that commonly used ray-tracing and refraction models are very accurate, at least in the mid-infrared. The factor with the largest effect on the tangent height calculation is the assumed atmosphere. Using a climatological model in place of the real atmosphere may cause tangent height errors of up to ± 200 m. Depending on the adopted retrieval scheme, these errors may have a significant impact on the derived profiles.

  18. Thin film thickness measurement error reduction by wavelength selection in spectrophotometry

    International Nuclear Information System (INIS)

    Tsepulin, Vladimir G; Perchik, Alexey V; Tolstoguzov, Victor L; Karasik, Valeriy E

    2015-01-01

    Fast and accurate volumetric profilometry of thin film structures is an important problem in the electronic visual display industry. We propose to use spectrophotometry with a limited number of working wavelengths to achieve high-speed control and an approach to selecting the optimal working wavelengths to reduce the thickness measurement error. A simple expression for error estimation is presented and tested using a Monte Carlo simulation. The experimental setup is designed to confirm the stability of film thickness determination using a limited number of wavelengths

  19. Physical measurements for ion range verification in charged particle therapy

    International Nuclear Information System (INIS)

    Testa, M.

    2010-10-01

    This PhD thesis reports on the experimental investigation of the prompt photons created during the fragmentation of the carbon beams used in particle therapy. Two series of experiments were performed at the GANIL and GSI facilities with 95 MeV/u and 305 MeV/u ¹²C⁶⁺ ion beams stopped in PMMA and water phantoms. In both experiments a clear correlation was obtained between the C-ion range and the prompt photon profile. A major issue in these measurements is the discrimination between the prompt photon signal (which is correlated with the ion path) and a vast neutron background uncorrelated with the Bragg peak position. Two techniques are employed to allow for this photon-neutron discrimination: time-of-flight (TOF) and pulse-shape discrimination (PSD). The TOF technique demonstrated the correlation between prompt photon production and the primary ion path, while the PSD technique provided insight into the photon and neutron contributions in the TOF spectra. In this work we demonstrated that a collimated set-up detecting prompt photons by means of TOF measurements could allow real-time control of the longitudinal position of the Bragg peak under clinical conditions. In the second part of the PhD thesis, a simulation study was performed with the Geant4 Monte Carlo code to assess the influence of the main design parameters on the efficiency and spatial resolution achievable with a multidetector, multi-collimated prompt gamma camera. Several geometrical configurations of both the collimators and the stack of detectors were systematically studied, and considerations on the main design constraints are reported. (author)

  20. Effects of holding time and measurement error on culturing Legionella in environmental water samples.

    Science.gov (United States)

    Flanders, W Dana; Kirkland, Kimberly H; Shelton, Brian G

    2014-10-01

    Outbreaks of Legionnaires' disease require environmental testing of water samples from potentially implicated building water systems to identify the source of exposure. A previous study reported a large impact on Legionella sample results due to shipping and delays in sample processing. Specifically, that study, without accounting for measurement error, reported that more than half of shipped samples tested had Legionella levels that changed up or down by one or more logs, and the authors attributed this result to shipping time. Accordingly, we conducted a study to determine the effects of sample holding/shipping time on Legionella sample results while taking into account measurement error, which has previously not been addressed. We analyzed 159 samples, each split into 16 aliquots, of which one-half (8) were processed promptly after collection. The remaining half (8) were processed the following day to assess the impact of holding/shipping time. A total of 2544 samples were analyzed, including replicates. After accounting for inherent measurement error, we found that the effect of holding time on observed Legionella counts was small and should have no practical impact on the interpretation of results. Holding samples increased the root mean squared error by only about 3-8%. Notably, for only one of 159 samples did the average of the 8 replicate counts change by 1 log. Thus, our findings do not support the hypothesis of frequent, significant (≥ 1 log10 unit) Legionella colony count changes due to holding. Copyright © 2014 The Authors. Published by Elsevier Ltd. All rights reserved.

  1. Direct measurement of the poliovirus RNA polymerase error frequency in vitro

    International Nuclear Information System (INIS)

    Ward, C.D.; Stokes, M.A.M.; Flanegan, J.B.

    1988-01-01

    The fidelity of RNA replication by the poliovirus RNA-dependent RNA polymerase was examined by copying homopolymeric RNA templates in vitro. The poliovirus RNA polymerase was extensively purified and used to copy poly(A), poly(C), or poly(I) templates with equimolar concentrations of noncomplementary and complementary ribonucleotides. The error frequency was expressed as the amount of a noncomplementary nucleotide incorporated divided by the total amount of complementary and noncomplementary nucleotide incorporated. The polymerase error frequencies were very high and depended on the specific reaction conditions. The activity of the polymerase on poly(U) and poly(G) was too low to measure error frequencies on these templates. A fivefold increase in the error frequency was observed when the reaction conditions were changed from 3.0 mM Mg²⁺ (pH 7.0) to 7.0 mM Mg²⁺ (pH 8.0). This increase in the error frequency correlates with an eightfold increase in the elongation rate that was observed under the same conditions in a previous study.

  2. A study of the effect of measurement error in predictor variables in nondestructive assay

    International Nuclear Information System (INIS)

    Burr, Tom L.; Knepper, Paula L.

    2000-01-01

    It is not widely known that ordinary least squares estimates exhibit bias if there are errors in the predictor variables. For example, enrichment measurements are often fit to two predictors: Poisson-distributed count rates in the region of interest and in the background. Both count rates have at least random variation due to counting statistics. Therefore, the parameter estimates will be biased. In this case, the effect of bias is a minor issue because there is almost no interest in the parameters themselves. Instead, the parameters will be used to convert count rates into estimated enrichment. In other cases, this bias source is potentially more important. For example, in tomographic gamma scanning, there is an emission stage which depends on predictors (the 'system matrix') that are estimated with error during the transmission stage. In this paper, we provide background information for the impact and treatment of errors in predictors, present results of candidate methods of compensating for the effect, review some of the nondestructive assay situations where errors in predictors occurs, and provide guidance for when errors in predictors should be considered in nondestructive assay
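
    The attenuation effect described above is easy to reproduce. The sketch below (a generic simulation, not taken from the paper) regresses a response on a predictor observed with additive error; the classical result is that the OLS slope shrinks by the reliability ratio Var(x) / (Var(x) + Var(u)):

```python
import numpy as np

rng = np.random.default_rng(0)
n, beta = 100_000, 2.0

x_true = rng.normal(0.0, 1.0, n)             # true predictor, variance 1
y = beta * x_true + rng.normal(0.0, 0.5, n)  # response depends on the true predictor

sigma_u = 1.0                                # std of the error in the predictor
x_obs = x_true + rng.normal(0.0, sigma_u, n)

# OLS slope of y on the error-contaminated predictor
slope_naive = np.cov(x_obs, y)[0, 1] / np.var(x_obs)

# Predicted attenuation: beta * Var(x) / (Var(x) + Var(u)) = 2.0 * 1 / (1 + 1)
slope_expected = beta / (1.0 + sigma_u**2)
```

    With these settings the fitted slope comes out near 1.0 rather than the true 2.0, i.e. the effect estimate is biased toward zero, which is exactly the bias source the abstract describes for noisy count-rate predictors.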

  3. Test-Retest Reliability of the Adaptive Chemistry Assessment Survey for Teachers: Measurement Error and Alternatives to Correlation

    Science.gov (United States)

    Harshman, Jordan; Yezierski, Ellen

    2016-01-01

    Determining the error of measurement is a necessity for researchers engaged in bench chemistry, chemistry education research (CER), and a multitude of other fields. Discussions regarding what constructs measurement error entails and how to best measure them have occurred, but the critiques about traditional measures have yielded few alternatives.…

  4. A correction for emittance-measurement errors caused by finite slit and collector widths

    International Nuclear Information System (INIS)

    Connolly, R.C.

    1992-01-01

    One method of measuring the transverse phase-space distribution of a particle beam is to intercept the beam with a slit and measure the angular distribution of the beam passing through the slit using a parallel-strip collector. Together the finite widths of the slit and each collector strip form an acceptance window in phase space whose size and orientation are determined by the slit width, the strip width, and the slit-collector distance. If a beam is measured using a detector with a finite-size phase-space window, the measured distribution is different from the true distribution. The calculated emittance is larger than the true emittance, and the error depends both on the dimensions of the detector and on the Courant-Snyder parameters of the beam. Specifically, the error gets larger as the beam drifts farther from a waist. This can be important for measurements made on high-brightness beams, since power density considerations require that the beam be intercepted far from a waist. In this paper we calculate the measurement error and we show how the calculated emittance and Courant-Snyder parameters can be corrected for the effects of finite sizes of slit and collector. (Author) 5 figs., 3 refs

  5. The role of errors in the measurements performed at the reprocessing plant head-end for material accountancy purposes

    International Nuclear Information System (INIS)

    Foggi, C.; Liebetrau, A.M.; Petraglia, E.

    1999-01-01

    One of the most common procedures used in determining the amount of nuclear material contained in solutions consists of first measuring the volume and the density of the solution, and then determining the concentration of the material. This presentation focuses on errors generated on the process line in the measurement of volume and density. These errors and their associated uncertainties can be grouped into distinct categories depending on their origin: those attributable to measuring instruments; those attributable to operational procedures; variability in measurement conditions; and errors in the analysis and interpretation of results. Possible error sources, their relative magnitudes, and an error propagation rationale are discussed, with emphasis placed on biases and on errors of the last three types, called systematic errors.

  6. A national prediction model for PM2.5 component exposures and measurement error-corrected health effect inference.

    Science.gov (United States)

    Bergen, Silas; Sheppard, Lianne; Sampson, Paul D; Kim, Sun-Young; Richards, Mark; Vedal, Sverre; Kaufman, Joel D; Szpiro, Adam A

    2013-09-01

    Studies estimating health effects of long-term air pollution exposure often use a two-stage approach: building exposure models to assign individual-level exposures, which are then used in regression analyses. This requires accurate exposure modeling and careful treatment of exposure measurement error. To illustrate the importance of accounting for exposure model characteristics in two-stage air pollution studies, we considered a case study based on data from the Multi-Ethnic Study of Atherosclerosis (MESA). We built national spatial exposure models that used partial least squares and universal kriging to estimate annual average concentrations of four PM2.5 components: elemental carbon (EC), organic carbon (OC), silicon (Si), and sulfur (S). We predicted PM2.5 component exposures for the MESA cohort and estimated cross-sectional associations with carotid intima-media thickness (CIMT), adjusting for subject-specific covariates. We corrected for measurement error using recently developed methods that account for the spatial structure of predicted exposures. Our models performed well, with cross-validated R² values ranging from 0.62 to 0.95. Naïve analyses that did not account for measurement error indicated statistically significant associations between CIMT and exposure to OC, Si, and S. EC and OC exhibited little spatial correlation, and the corrected inference was unchanged from the naïve analysis. The Si and S exposure surfaces displayed notable spatial correlation, resulting in corrected confidence intervals (CIs) that were 50% wider than the naïve CIs, but that were still statistically significant. The impact of correcting for measurement error on health effect inference is concordant with the degree of spatial correlation in the exposure surfaces. Exposure model characteristics must be considered when performing two-stage air pollution epidemiologic analyses because naïve health effect inference may be inappropriate.

  7. Assessment of Sampling Error Associated with Collection and Analysis of Soil Samples at a Firing Range Contaminated with HMX

    National Research Council Canada - National Science Library

    Jenkins, Thomas F

    1997-01-01

    Short-range and mid-range (grid size) spatial heterogeneity in explosives concentrations within surface soils was studied at an active antitank firing range at the Canadian Force Base-Valcartier, Val-Belair, Quebec...

  8. Human-Induced Effects on RSS Ranging Measurements for Cooperative Positioning

    DEFF Research Database (Denmark)

    Della Rosa, Francescantonio; Pelosi, Mauro; Nurmi, Jari

    2012-01-01

    We present experimental evaluations of human-induced perturbations on received-signal-strength- (RSS-) based ranging measurements for cooperative mobile positioning. To the best of our knowledge, this work is the first attempt to gain insight into and understand the impact of both body loss and hand grip on the RSS for enhancing proximity measurements among neighbouring devices in cooperative scenarios. Our main contribution is represented by experimental investigations. Analysis of the errors introduced in the distance estimation using path-loss-based methods has been carried out. Moreover, the exploitation of human-induced perturbations for enhancing the final positioning accuracy through cooperative schemes has been assessed. It has been proved that the effect of cooperation is very limited if human factors are not taken into account when performing experimental activities.

  9. Measurement error potential and control when quantifying volatile hydrocarbon concentrations in soils

    International Nuclear Information System (INIS)

    Siegrist, R.L.

    1991-01-01

    Due to their widespread use throughout commerce and industry, volatile hydrocarbons such as toluene, trichloroethene, and 1,1,1-trichloroethane routinely appear as principal pollutants in contaminated soil systems. Quantification of soil hydrocarbon concentrations is necessary to confirm the presence of contamination and its nature and extent; to assess site risks and the need for cleanup; to evaluate remedial technologies; and to verify the performance of a selected alternative. Decisions regarding these issues have far-reaching impacts and, ideally, should be based on accurate measurements of soil hydrocarbon concentrations. Unfortunately, quantification of volatile hydrocarbons in soils is extremely difficult, and there is normally little understanding of the accuracy and precision of these measurements. Rather, the assumption is often implicitly made that the hydrocarbon data are sufficiently accurate for the intended purpose. This paper presents a discussion of measurement error potential when quantifying volatile hydrocarbons in soils, and outlines some methods for understanding and managing these errors.

  10. Long-term continuous acoustical suspended-sediment measurements in rivers - Theory, application, bias, and error

    Science.gov (United States)

    Topping, David J.; Wright, Scott A.

    2016-05-04

    these sites. In addition, detailed, step-by-step procedures are presented for the general river application of the method.Quantification of errors in sediment-transport measurements made using this acoustical method is essential if the measurements are to be used effectively, for example, to evaluate uncertainty in long-term sediment loads and budgets. Several types of error analyses are presented to evaluate (1) the stability of acoustical calibrations over time, (2) the effect of neglecting backscatter from silt and clay, (3) the bias arising from changes in sand grain size, (4) the time-varying error in the method, and (5) the influence of nonrandom processes on error. Results indicate that (1) acoustical calibrations can be stable for long durations (multiple years), (2) neglecting backscatter from silt and clay can result in unacceptably high bias, (3) two frequencies are likely required to obtain sand-concentration measurements that are unbiased by changes in grain size, depending on site-specific conditions and acoustic frequency, (4) relative errors in silt-and-clay- and sand-concentration measurements decrease substantially as concentration increases, and (5) nonrandom errors may arise from slow changes in the spatial structure of suspended sediment that affect the relations between concentration in the acoustically ensonified part of the cross section and concentration in the entire river cross section. Taken together, the error analyses indicate that the two-frequency method produces unbiased measurements of suspended-silt-and-clay and sand concentration, with errors that are similar to, or larger than, those associated with conventional sampling methods.

  11. Performance of bias-correction methods for exposure measurement error using repeated measurements with and without missing data.

    Science.gov (United States)

    Batistatou, Evridiki; McNamee, Roseanne

    2012-12-10

    It is known that measurement error leads to bias in assessing exposure effects, which can, however, be corrected if independent replicates are available. For expensive replicates, two-stage (2S) studies that produce data 'missing by design' may be preferred over a single-stage (1S) study, because in the second stage measurement of replicates is restricted to a sample of first-stage subjects. Motivated by an occupational study on the acute effect of carbon black exposure on respiratory morbidity, we compare the performance of several bias-correction methods for both designs in a simulation study: an instrumental variable method (EVROS IV) based on grouping strategies, which had been recommended especially when measurement error is large; the regression calibration method; and the simulation extrapolation method. For the 2S design, either the problem of 'missing' data was ignored or the 'missing' data were imputed using multiple imputations. In both 1S and 2S designs, in the case of small or moderate measurement error, regression calibration was shown to be the preferred approach in terms of root mean square error. For 2S designs, regression calibration as implemented by Stata software is not recommended, in contrast to our implementation of this method; the 'problematic' implementation of regression calibration was, however, substantially improved with the use of multiple imputations. The EVROS IV method, under a good/fairly good grouping, outperforms the regression calibration approach in both design scenarios when exposure mismeasurement is severe. In both 1S and 2S designs with moderate or large measurement error, simulation extrapolation severely failed to correct for bias. Copyright © 2012 John Wiley & Sons, Ltd.
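
    As a point of reference for the methods compared above, the replicate-based regression calibration idea can be sketched in a few lines (a toy single-stage simulation under classical additive error, not the authors' implementation): two replicates per subject give an estimate of the error variance, and the naive slope is then rescaled by the reliability of the replicate mean.

```python
import numpy as np

rng = np.random.default_rng(1)
n, beta = 50_000, 1.5

x = rng.normal(0.0, 1.0, n)                  # true exposure
y = beta * x + rng.normal(0.0, 0.5, n)

sigma_u = 0.8                                # measurement-error std (classical, additive)
w1 = x + rng.normal(0.0, sigma_u, n)         # replicate 1
w2 = x + rng.normal(0.0, sigma_u, n)         # replicate 2
wbar = (w1 + w2) / 2

# Within-pair differences estimate the error variance: Var(w1 - w2) = 2 * Var(u)
var_u = np.var(w1 - w2) / 2

# Reliability of the replicate mean: Var(x) / (Var(x) + Var(u)/2)
lam = (np.var(wbar) - var_u / 2) / np.var(wbar)

slope_naive = np.cov(wbar, y)[0, 1] / np.var(wbar)
slope_calibrated = slope_naive / lam         # regression-calibration correction
```

    The naive slope is attenuated (about 1.14 here against a true effect of 1.5), and dividing by the estimated reliability recovers the true slope, which is the sense in which regression calibration "corrects" the bias when measurement error is moderate.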

  12. Cost-Sensitive Feature Selection of Numeric Data with Measurement Errors

    Directory of Open Access Journals (Sweden)

    Hong Zhao

    2013-01-01

    Full Text Available Feature selection is an essential process in data mining applications since it reduces a model’s complexity. However, feature selection with various types of costs is still a new research topic. In this paper, we study the cost-sensitive feature selection problem of numeric data with measurement errors. The major contributions of this paper are fourfold. First, a new data model is built to address test costs and misclassification costs as well as error boundaries. It is distinguished from the existing models mainly on the error boundaries. Second, a covering-based rough set model with normal distribution measurement errors is constructed. With this model, coverings are constructed from data rather than assigned by users. Third, a new cost-sensitive feature selection problem is defined on this model. It is more realistic than the existing feature selection problems. Fourth, both backtracking and heuristic algorithms are proposed to deal with the new problem. Experimental results show the efficiency of the pruning techniques for the backtracking algorithm and the effectiveness of the heuristic algorithm. This study is a step toward realistic applications of the cost-sensitive learning.

  13. Measurement error of spiral CT volumetry: influence of low dose CT technique

    International Nuclear Information System (INIS)

    Chung, Myung Jin; Cho, Jae Min; Lee, Tae Gyu; Cho, Sung Bum; Kim, Seog Joon; Baik, Sang Hyun

    2004-01-01

    To examine the possible measurement errors of lung nodule volumetry at various scan parameters by using a small nodule phantom. We obtained images of a nodule phantom using a spiral CT scanner. The nodule phantom was made of paraffin and urethane and its real volume was known. For the CT scanning experiments, we used three different values for both the pitch of the table feed, i.e. 1:1, 1:1.5 and 1:2, and the tube current, i.e. 40 mA, 80 mA and 120 mA. All of the images acquired through CT scanning were reconstructed three-dimensionally and measured with volumetry software. We tested the correlation between the true volume and the measured volume for each set of parameters using linear regression analysis. For the pitches of table feed of 1:1, 1:1.5 and 1:2, the mean relative errors were 23.3%, 22.8% and 22.6%, respectively. There were perfect correlations among the three sets of measurements (Pearson's coefficient = 1.000, p < 0.001). For the tube currents of 40 mA, 80 mA and 120 mA, the mean relative errors were 22.6%, 22.6% and 22.9%, respectively. There were perfect correlations among them (Pearson's coefficient = 1.000, p < 0.001). In the measurement of lung nodule volume using spiral CT, the measurement error did not increase even when the tube current was decreased or the pitch of table feed was increased.

  14. The impact of statistical adjustment on conditional standard errors of measurement in the assessment of physician communication skills.

    Science.gov (United States)

    Raymond, Mark R; Clauser, Brian E; Furman, Gail E

    2010-10-01

    The use of standardized patients to assess communication skills is now an essential part of assessing a physician's readiness for practice. To improve the reliability of communication scores, it has become increasingly common in recent years to use statistical models to adjust ratings provided by standardized patients. This study employed ordinary least squares regression to adjust ratings, and then used generalizability theory to evaluate the impact of these adjustments on score reliability and the overall standard error of measurement. In addition, conditional standard errors of measurement were computed for both observed and adjusted scores to determine whether the improvements in measurement precision were uniform across the score distribution. Results indicated that measurement was generally less precise for communication ratings toward the lower end of the score distribution; and the improvement in measurement precision afforded by statistical modeling varied slightly across the score distribution such that the most improvement occurred in the upper-middle range of the score scale. Possible reasons for these patterns in measurement precision are discussed, as are the limitations of the statistical models used for adjusting performance ratings.

  15. Estimation methods with ordered exposure subject to measurement error and missingness in semi-ecological design

    Directory of Open Access Journals (Sweden)

    Kim Hyang-Mi

    2012-09-01

    Full Text Available Abstract Background In epidemiological studies, it is often not possible to measure accurately exposures of participants even if their response variable can be measured without error. When there are several groups of subjects, occupational epidemiologists employ a group-based strategy (GBS) for exposure assessment to reduce bias due to measurement errors: individuals of a group/job within the study sample are assigned commonly to the sample mean of exposure measurements from their group in evaluating the effect of exposure on the response. Therefore, exposure is estimated on an ecological level while health outcomes are ascertained for each subject. Such a study design leads to negligible bias in risk estimates when group means are estimated from 'large' samples. However, in many cases, only a small number of observations are available to estimate the group means, and this causes bias in the observed exposure-disease association. Also, the analysis in a semi-ecological design may involve exposure data with the majority missing and the rest observed with measurement errors, and complete response data collected with ascertainment. Methods In workplaces, groups/jobs are naturally ordered and this can be incorporated in the estimation procedure by constrained estimation methods, together with the expectation and maximization (EM) algorithms, for regression models having measurement error and missing values. Four methods were compared by a simulation study: naive complete-case analysis, GBS, the constrained GBS (CGBS), and the constrained expectation and maximization (CEM). We illustrated the methods in the analysis of decline in lung function due to exposures to carbon black. Results Naive and GBS approaches were shown to be inadequate when the number of exposure measurements is too small to accurately estimate group means. The CEM method appears to be best among them when within each exposure group at least a 'moderate' number of individuals have their

  16. General problems of metrology and indirect measuring in cardiology: error estimation criteria for indirect measurements of heart cycle phase durations

    Directory of Open Access Journals (Sweden)

    Konstantine K. Mamberger

    2012-11-01

    Full Text Available Aims: This paper treats general problems of metrology and indirect measurement methods in cardiology. It aims to identify error estimation criteria for indirect measurements of heart cycle phase durations. Materials and methods: A comparative analysis of an ECG of the ascending aorta recorded with the Hemodynamic Analyzer Cardiocode (HDA) lead versus conventional V3, V4, V5, V6 lead system ECGs is presented herein. Criteria for heart cycle phase boundaries are identified by graphic mathematical differentiation. Stroke volumes of blood (SV) calculated on the basis of the HDA phase duration measurements are compared with echocardiography data. Results: The comparative data obtained in the study show an average difference at the level of 1%. The innovative noninvasive measuring technology, originally developed by a Russian R & D team, offers measurement of stroke volume of blood with high accuracy. Conclusion: In practice, it is necessary to take into account possible errors in measurements caused by hardware. Special attention should be paid to systematic errors.

  17. Correction for dynamic bias error in transmission measurements of void fraction

    International Nuclear Information System (INIS)

    Andersson, P.; Sundén, E. Andersson; Svärd, S. Jacobsson; Sjöstrand, H.

    2012-01-01

    Dynamic bias errors occur in transmission measurements, such as X-ray, gamma, or neutron radiography or tomography. They are observed when the properties of the object are not stationary in time and its average properties are assessed. The nonlinear measurement response to changes in transmission within the time scale of the measurement implies a bias, which can be difficult to correct for. A typical example is the tomographic or radiographic mapping of void content in dynamic two-phase flow systems. In this work, the dynamic bias error is described and a method to make a first-order correction is derived. A prerequisite for this method is a variance estimate of the system dynamics, which can be obtained using high-speed, time-resolved data acquisition. In the absence of such acquisition, a priori knowledge may be used in place of the time-resolved data. Using synthetic data, a void fraction measurement case study was simulated to demonstrate the performance of the suggested method. The transmission length of the radiation in the object under study and the type of fluctuation of the void fraction were varied. Significant decreases in the dynamic bias error were achieved at the expense of marginal decreases in precision.
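The first-order correction can be sketched for an exponential-attenuation measurement of a fluctuating void fraction (a minimal sketch; the attenuation coefficient and fluctuation model are assumptions, not the paper's values). Time-averaging the transmission before inverting it biases the void estimate, because E[exp(x)] > exp(E[x]) for a fluctuating exponent; subtracting half the exponent's variance corrects this to first order.

```python
import numpy as np

rng = np.random.default_rng(1)
mu, L = 0.5, 10.0   # attenuation coefficient (cm^-1) and path length (cm), assumed
alpha = rng.normal(0.4, 0.1, 100_000)        # fluctuating void fraction samples

# Instantaneous transmission; the detector reports only its time average.
T_mean = np.exp(-mu * (1.0 - alpha) * L).mean()

# Naive inversion of the averaged transmission overestimates the void fraction.
alpha_naive = 1.0 + np.log(T_mean) / (mu * L)

# First-order correction: subtract half the variance of the exponent,
# estimated here from time-resolved samples of the dynamics.
var_exponent = np.var(mu * L * alpha)
alpha_corrected = 1.0 + (np.log(T_mean) - var_exponent / 2.0) / (mu * L)

alpha_true = alpha.mean()
```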

  18. Measurement of tokamak error fields using plasma response and its applicability to ITER

    International Nuclear Information System (INIS)

    Strait, E.J.; Buttery, R.J.; Chu, M.S.; Garofalo, A.M.; La Haye, R.J.; Schaffer, M.J.; Casper, T.A.; Gribov, Y.; Hanson, J.M.; Reimerdes, H.; Volpe, F.A.

    2014-01-01

    The nonlinear response of a low-beta tokamak plasma to non-axisymmetric fields offers an alternative to direct measurement of the non-axisymmetric part of the vacuum magnetic fields, often termed ‘error fields’. Possible approaches are discussed for determination of error fields and the required current in non-axisymmetric correction coils, with an emphasis on two relatively new methods: measurement of the torque balance on a saturated magnetic island, and measurement of the braking of plasma rotation in the absence of an island. The former is well suited to ohmically heated discharges, while the latter is more appropriate for discharges with a modest amount of neutral beam heating to drive rotation. Both can potentially provide continuous measurements during a discharge, subject to the limitation of a minimum averaging time. The applicability of these methods to ITER is discussed, and an estimate is made of their uncertainties in light of the specifications of ITER's diagnostic systems. The use of plasma response-based techniques in normal ITER operational scenarios may allow identification of the error field contributions by individual central solenoid coils, but identification of the individual contributions by the outer poloidal field coils or other sources is less likely to be feasible. (paper)

  19. Volumetric error modeling, identification and compensation based on screw theory for a large multi-axis propeller-measuring machine

    Science.gov (United States)

    Zhong, Xuemin; Liu, Hongqi; Mao, Xinyong; Li, Bin; He, Songping; Peng, Fangyu

    2018-05-01

    Large multi-axis propeller-measuring machines have two types of geometric error, position-independent geometric errors (PIGEs) and position-dependent geometric errors (PDGEs), which both have significant effects on the volumetric error of the measuring tool relative to the worktable. This paper focuses on modeling, identifying and compensating for the volumetric error of the measuring machine. A volumetric error model in the base coordinate system is established based on screw theory considering all the geometric errors. In order to fully identify all the geometric error parameters, a new method for systematic measurement and identification is proposed. All the PIGEs of adjacent axes and the six PDGEs of the linear axes are identified with a laser tracker using the proposed model. Finally, a volumetric error compensation strategy is presented and an inverse kinematic solution for compensation is proposed. The final measuring and compensation experiments have further verified the efficiency and effectiveness of the measuring and identification method, indicating that the method can be used in volumetric error compensation for large machine tools.

  20. Testing and Estimating Shape-Constrained Nonparametric Density and Regression in the Presence of Measurement Error

    KAUST Repository

    Carroll, Raymond J.

    2011-03-01

    In many applications we can expect that, or are interested to know if, a density function or a regression curve satisfies some specific shape constraints. For example, when the explanatory variable, X, represents the value taken by a treatment or dosage, the conditional mean of the response, Y , is often anticipated to be a monotone function of X. Indeed, if this regression mean is not monotone (in the appropriate direction) then the medical or commercial value of the treatment is likely to be significantly curtailed, at least for values of X that lie beyond the point at which monotonicity fails. In the case of a density, common shape constraints include log-concavity and unimodality. If we can correctly guess the shape of a curve, then nonparametric estimators can be improved by taking this information into account. Addressing such problems requires a method for testing the hypothesis that the curve of interest satisfies a shape constraint, and, if the conclusion of the test is positive, a technique for estimating the curve subject to the constraint. Nonparametric methodology for solving these problems already exists, but only in cases where the covariates are observed precisely. However in many problems, data can only be observed with measurement errors, and the methods employed in the error-free case typically do not carry over to this error context. In this paper we develop a novel approach to hypothesis testing and function estimation under shape constraints, which is valid in the context of measurement errors. Our method is based on tilting an estimator of the density or the regression mean until it satisfies the shape constraint, and we take as our test statistic the distance through which it is tilted. Bootstrap methods are used to calibrate the test. The constrained curve estimators that we develop are also based on tilting, and in that context our work has points of contact with methodology in the error-free case.
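In the error-free case, the "tilt the estimate until the constraint holds, and use the distance tilted as a test statistic" idea can be illustrated with a monotonicity constraint, where the tilt reduces to an isotonic projection computed by the pool-adjacent-violators algorithm. This is a simplified analogue only; the paper's method additionally handles measurement error in the covariates.

```python
import numpy as np

def pava(y):
    """Pool-adjacent-violators: least-squares monotone (increasing) fit."""
    level, weight, counts = [], [], []
    for yi in np.asarray(y, float):
        level.append(yi); weight.append(1.0); counts.append(1)
        # Merge blocks while the monotonicity constraint is violated.
        while len(level) > 1 and level[-2] > level[-1]:
            w = weight[-2] + weight[-1]
            level[-2] = (weight[-2] * level[-2] + weight[-1] * level[-1]) / w
            weight[-2] = w; counts[-2] += counts[-1]
            level.pop(); weight.pop(); counts.pop()
    return np.repeat(level, counts)

rng = np.random.default_rng(9)
x = np.linspace(0.0, 1.0, 50)
y = x**2 + rng.normal(0, 0.05, 50)       # monotone truth plus noise

fit = pava(y)                            # constrained ("tilted") estimate
tilt_distance = np.sqrt(np.mean((fit - y) ** 2))  # how far we had to move
```

Here `tilt_distance` plays the role of the test statistic: large values suggest the unconstrained estimate is far from any curve satisfying the constraint.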

  1. Some effects of random dose measurement errors on analysis of atomic bomb survivor data

    International Nuclear Information System (INIS)

    Gilbert, E.S.

    1985-01-01

    The effects of random dose measurement errors on analyses of atomic bomb survivor data are described and quantified for several procedures. It is found that the ways in which measurement error is most likely to mislead are through downward bias in the estimated regression coefficients and through distortion of the shape of the dose-response curve. The magnitude of the bias with simple linear regression is evaluated for several dose treatments including the use of grouped and ungrouped data, analyses with and without truncation at 600 rad, and analyses which exclude doses exceeding 200 rad. Limited calculations have also been made for maximum likelihood estimation based on Poisson regression. 16 refs., 6 tabs
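The downward bias described above is the classical attenuation effect. A short simulation (with assumed, illustrative variances) shows the fitted slope shrinking by the reliability ratio σx²/(σx² + σe²):

```python
import numpy as np

rng = np.random.default_rng(2)
n, beta = 50_000, 2.0
sigma_x, sigma_e = 1.0, 0.5   # true dose spread and dose measurement error

x = rng.normal(0, sigma_x, n)
x_obs = x + rng.normal(0, sigma_e, n)     # recorded (mismeasured) dose
y = beta * x + rng.normal(0, 1.0, n)      # response depends on the true dose

# Simple linear regression slope of y on the observed dose:
slope_obs = np.cov(x_obs, y)[0, 1] / np.var(x_obs, ddof=1)

# Expected attenuation factor (reliability ratio), 0.8 with these values:
attenuation = sigma_x**2 / (sigma_x**2 + sigma_e**2)
```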

  2. Simulation study on heterogeneous variance adjustment for observations with different measurement error variance

    DEFF Research Database (Denmark)

    Pitkänen, Timo; Mäntysaari, Esa A; Nielsen, Ulrik Sander

    2013-01-01

    The Nordic Holstein yield evaluation model describes all available milk, protein and fat test-day yields from Denmark, Finland and Sweden. In its current form all variance components are estimated from observations recorded under conventional milking systems. Also the model for heterogeneity of variance correction is developed for the same observations. As automated milking systems are becoming more popular, the current evaluation model needs to be enhanced to account for the different measurement error variances of observations from automated milking systems. In this simulation study, different models and different approaches to account for heterogeneous variance when observations have different measurement error variances were investigated. Based on the results we propose to upgrade the currently applied models and to calibrate the heterogeneous variance adjustment method to yield the same genetic…
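One standard way to account for two different measurement error variances is inverse-variance weighting. A minimal sketch, with made-up error SDs for conventional versus automated milking observations (these numbers are illustrative, not from the study):

```python
import numpy as np

rng = np.random.default_rng(3)
true_mean = 30.0                       # underlying yield level, say kg milk
sd_conv, sd_ams = 1.0, 2.0             # assumed conventional vs automated error SDs

y_conv = true_mean + rng.normal(0, sd_conv, 200)
y_ams = true_mean + rng.normal(0, sd_ams, 200)
y = np.concatenate([y_conv, y_ams])
var_i = np.concatenate([np.full(200, sd_conv**2), np.full(200, sd_ams**2)])

# Inverse-variance weights: noisier observations contribute less.
w = 1.0 / var_i
est = np.sum(w * y) / np.sum(w)
se = np.sqrt(1.0 / np.sum(w))          # smaller than from either subgroup alone
```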

  3. Estimating the Persistence and the Autocorrelation Function of a Time Series that is Measured with Error

    DEFF Research Database (Denmark)

    Hansen, Peter Reinhard; Lunde, Asger

    An economic time series can often be viewed as a noisy proxy for an underlying economic variable. Measurement errors will influence the dynamic properties of the observed process and may conceal the persistence of the underlying time series. In this paper we develop instrumental variable (IV) estimators… application despite the large sample. Unit root tests based on the IV estimator have better finite sample properties in this context.
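The IV idea for persistence can be sketched as follows: for an AR(1) process observed with additive noise, the OLS autoregression on the noisy series is attenuated, while using the twice-lagged observation as an instrument removes the bias. This is a generic sketch with illustrative parameters, not the authors' estimator in detail.

```python
import numpy as np

rng = np.random.default_rng(4)
n, phi = 100_000, 0.9
sigma_eps, sigma_u = 1.0, 1.0

# Latent AR(1) process observed with additive measurement noise.
x = np.empty(n)
x[0] = rng.normal(0, sigma_eps / np.sqrt(1 - phi**2))   # stationary start
for t in range(1, n):
    x[t] = phi * x[t - 1] + rng.normal(0, sigma_eps)
y = x + rng.normal(0, sigma_u, n)

# OLS on the noisy series underestimates the persistence:
phi_ols = np.sum(y[1:] * y[:-1]) / np.sum(y[:-1] ** 2)

# Twice-lagged observation as instrument (uncorrelated with the noise):
phi_iv = np.sum(y[2:] * y[:-2]) / np.sum(y[1:-1] * y[:-2])
```

The instrument works because the measurement noise is serially uncorrelated, so Cov(y_t, y_{t-2}) / Cov(y_{t-1}, y_{t-2}) recovers φ.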

  4. Measurement Error Affects Risk Estimates for Recruitment to the Hudson River Stock of Striped Bass

    Directory of Open Access Journals (Sweden)

    Dennis J. Dunning

    2002-01-01

    Full Text Available We examined the consequences of ignoring the distinction between measurement error and natural variability in an assessment of risk to the Hudson River stock of striped bass posed by entrainment at the Bowline Point, Indian Point, and Roseton power plants. Risk was defined as the probability that recruitment of age-1+ striped bass would decline by 80% or more, relative to the equilibrium value, at least once during the time periods examined (1, 5, 10, and 15 years). Measurement error, estimated using two abundance indices from independent beach seine surveys conducted on the Hudson River, accounted for 50% of the variability in one index and 56% of the variability in the other. If a measurement error of 50% was ignored and all of the variability in abundance was attributed to natural causes, the risk that recruitment of age-1+ striped bass would decline by 80% or more after 15 years was 0.308 at the current level of entrainment mortality (11%). However, the risk decreased almost tenfold (to 0.032) if a measurement error of 50% was considered. The change in risk attributable to decreasing the entrainment mortality rate from 11% to 0% was very small (0.009) and similar in magnitude to the change in risk associated with an action proposed in Amendment #5 to the Interstate Fishery Management Plan for Atlantic striped bass (0.006), an increase in the instantaneous fishing mortality rate from 0.33 to 0.4. The proposed increase in fishing mortality was not considered an adverse environmental impact, which suggests that potentially costly efforts to reduce entrainment mortality on the Hudson River stock of striped bass are not warranted.
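The effect of attributing measurement error to natural variability can be sketched generically (the recruitment model and all numbers here are hypothetical, not the study's population model): removing the measurement error share from the total index variance sharply lowers the simulated probability of an extreme decline.

```python
import numpy as np

rng = np.random.default_rng(10)
var_total = 1.0       # total log-scale variance of the abundance index (assumed)
me_share = 0.5        # fraction of that variance due to measurement error

def decline_risk(var_natural, years=15, reps=20_000, drop=0.8):
    """P(recruitment falls 80% below equilibrium at least once in `years`),
    under lognormal year-to-year natural variability."""
    sd = np.sqrt(var_natural)
    logs = rng.normal(0, sd, (reps, years))
    return np.mean((logs < np.log(1 - drop)).any(axis=1))

risk_naive = decline_risk(var_total)                  # all variance "natural"
risk_adj = decline_risk(var_total * (1 - me_share))   # measurement error removed
```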

  5. Rate estimation in partially observed Markov jump processes with measurement errors

    OpenAIRE

    Amrein, Michael; Kuensch, Hans R.

    2010-01-01

    We present a simulation methodology for Bayesian estimation of rate parameters in Markov jump processes arising for example in stochastic kinetic models. To handle the problem of missing components and measurement errors in observed data, we embed the Markov jump process into the framework of a general state space model. We do not use diffusion approximations. Markov chain Monte Carlo and particle filter type algorithms are introduced, which allow sampling from the posterior distribution of t...

  6. Measurements of Gun Tube Motion and Muzzle Pointing Error of Main Battle Tanks

    Directory of Open Access Journals (Sweden)

    Peter L. McCall

    2001-01-01

    Full Text Available Beginning in 1990, the US Army Aberdeen Test Center (ATC) began testing a prototype cannon mounted in a non-armored turret fitted to an M1A1 Abrams tank chassis. The cannon design incorporated a longer gun tube as a means to increase projectile velocity. A significant increase in projectile impact dispersion was measured early in the test program. Through investigative efforts, the cause of the error was linked to the increased dynamic bending or flexure of the longer tube observed while the vehicle was moving. Research and investigative work was conducted through a collaborative effort with the US Army Research Laboratory, Benet Laboratory, Project Manager – Tank Main Armament Systems, US Army Research and Engineering Center, and Cadillac Gage Textron Inc. New test methods, instrumentation, data analysis procedures, and stabilization control design resulted from this series of investigations into the dynamic tube flexure error source. Through this joint research, improvements in tank fire control design have been developed to improve delivery accuracy. This paper discusses the instrumentation implemented, methods applied, and analysis procedures used to characterize the tube flexure during dynamic tests of a main battle tank and the relationship between gun pointing error and muzzle pointing error.

  7. Negative control exposure studies in the presence of measurement error: implications for attempted effect estimate calibration.

    Science.gov (United States)

    Sanderson, Eleanor; Macdonald-Wallis, Corrie; Davey Smith, George

    2018-04-01

    Negative control exposure studies are increasingly being used in epidemiological studies to strengthen causal inference regarding an exposure-outcome association when unobserved confounding is thought to be present. Negative control exposure studies contrast the magnitude of association of the negative control, which has no causal effect on the outcome but is associated with the unmeasured confounders in the same way as the exposure, with the magnitude of the association of the exposure with the outcome. A markedly larger effect of the exposure on the outcome than the negative control on the outcome strengthens inference that the exposure has a causal effect on the outcome. We investigate the effect of measurement error in the exposure and negative control variables on the results obtained from a negative control exposure study. We do this in models with continuous and binary exposure and negative control variables using analysis of the bias of the estimated coefficients and Monte Carlo simulations. Our results show that measurement error in either the exposure or negative control variables can bias the estimated results from the negative control exposure study. Measurement error is common in the variables used in epidemiological studies; these results show that negative control exposure studies cannot be used to precisely determine the size of the effect of the exposure variable, or adequately adjust for unobserved confounding; however, they can be used as part of a body of evidence to aid inference as to whether a causal effect of the exposure on the outcome is present.
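A small simulation of the bias mechanism (all coefficients are illustrative): the exposure and the negative control share a confounder, the exposure has no causal effect, and measurement error in the negative control attenuates its association with the outcome, which could be misread as evidence of a causal exposure effect.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 200_000
u = rng.normal(0, 1, n)                      # unmeasured confounder
x = u + rng.normal(0, 1, n)                  # exposure of interest
z = u + rng.normal(0, 1, n)                  # negative control (no effect on y)
y = 1.0 * u + rng.normal(0, 1, n)            # x truly has no causal effect

def slope(a, b):
    return np.cov(a, b)[0, 1] / np.var(a, ddof=1)

b_x = slope(x, y)                            # confounded association
b_z = slope(z, y)                            # equal to b_x by construction
b_z_err = slope(z + rng.normal(0, 2, n), y)  # mismeasured control: attenuated
```

With error-free measurement, b_x ≈ b_z, correctly signalling that the exposure association is entirely due to confounding; with a mismeasured control, b_z_err is much smaller than b_x, which would wrongly strengthen causal inference.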

  8. Accounting for response misclassification and covariate measurement error improves power and reduces bias in epidemiologic studies.

    Science.gov (United States)

    Cheng, Dunlei; Branscum, Adam J; Stamey, James D

    2010-07-01

    To quantify the impact of ignoring misclassification of a response variable and measurement error in a covariate on statistical power, and to develop software for sample size and power analysis that accounts for these flaws in epidemiologic data. A Monte Carlo simulation-based procedure is developed to illustrate the differences in design requirements and inferences between analytic methods that properly account for misclassification and measurement error to those that do not in regression models for cross-sectional and cohort data. We found that failure to account for these flaws in epidemiologic data can lead to a substantial reduction in statistical power, over 25% in some cases. The proposed method substantially reduced bias by up to a ten-fold margin compared to naive estimates obtained by ignoring misclassification and mismeasurement. We recommend as routine practice that researchers account for errors in measurement of both response and covariate data when determining sample size, performing power calculations, or analyzing data from epidemiological studies. 2010 Elsevier Inc. All rights reserved.
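A Monte Carlo sketch of the power loss from response misclassification (hypothetical design values; the paper's procedure also accounts for covariate measurement error and bias correction):

```python
import numpy as np

rng = np.random.default_rng(6)

def power(n=200, p1=0.30, p2=0.45, flip=0.0, reps=2000, z_crit=1.96):
    """Monte Carlo power of a two-proportion z-test when each binary
    response is misclassified with symmetric probability `flip`."""
    hits = 0
    for _ in range(reps):
        y1 = rng.random(n) < p1
        y2 = rng.random(n) < p2
        if flip > 0:  # flip each response with probability `flip`
            y1 ^= rng.random(n) < flip
            y2 ^= rng.random(n) < flip
        q1, q2 = y1.mean(), y2.mean()
        q = (q1 + q2) / 2
        se = np.sqrt(q * (1 - q) * 2 / n)
        hits += abs(q2 - q1) > z_crit * se
    return hits / reps

pow_clean = power(flip=0.0)
pow_noisy = power(flip=0.15)   # substantial power loss from misclassification
```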

  9. Bayesian semiparametric mixture Tobit models with left censoring, skewness, and covariate measurement errors.

    Science.gov (United States)

    Dagne, Getachew A; Huang, Yangxin

    2013-09-30

    Problems common to many longitudinal HIV/AIDS, cancer, vaccine, and environmental exposure studies are the presence of a lower limit of quantification of an outcome with skewness and time-varying covariates with measurement errors. Relatively little work has been published that deals with these features of longitudinal data simultaneously. In particular, left-censored data falling below a limit of detection may sometimes have a proportion larger than expected under a usually assumed log-normal distribution. In such cases, alternative models, which can account for a high proportion of censored data, should be considered. In this article, we present an extension of the Tobit model that incorporates a mixture of true undetectable observations and those values from a skew-normal distribution for an outcome with possible left censoring and skewness, and covariates with substantial measurement error. To quantify the covariate process, we offer a flexible nonparametric mixed-effects model within the Tobit framework. A Bayesian modeling approach is used to assess the simultaneous impact of left censoring, skewness, and measurement error in covariates on inference. The proposed methods are illustrated using real data from an AIDS clinical study. Copyright © 2013 John Wiley & Sons, Ltd.

  10. Degradation data analysis based on a generalized Wiener process subject to measurement error

    Science.gov (United States)

    Li, Junxing; Wang, Zhihua; Zhang, Yongbo; Fu, Huimin; Liu, Chengrui; Krishnaswamy, Sridhar

    2017-09-01

    Wiener processes have received considerable attention in degradation modeling over the last two decades. In this paper, we propose a generalized Wiener process degradation model that takes unit-to-unit variation, time-correlated structure and measurement error into consideration simultaneously. The constructed methodology subsumes a series of models studied in the literature as limiting cases. A simple method is given to determine the transformed time scale forms of the Wiener process degradation model. Model parameters can then be estimated by maximum likelihood estimation (MLE). The cumulative distribution function (CDF) and the probability density function (PDF) of the Wiener process with measurement errors are given based on the concept of the first hitting time (FHT). The percentiles of performance degradation (PD) and failure time distribution (FTD) are also obtained. Finally, a comprehensive simulation study is carried out to demonstrate the necessity of incorporating measurement errors in the degradation model and the efficiency of the proposed model. Two illustrative real applications involving the degradation of carbon-film resistors and the wear of sliding metal are given. The comparative results show that the constructed approach yields reasonable results with enhanced inference precision.
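A minimal simulation of such a degradation model (all parameters are illustrative): a Wiener process with unit-to-unit drift variation is observed through additive measurement error, and failure is the first hitting time (FHT) of the latent path at a fixed threshold.

```python
import numpy as np

rng = np.random.default_rng(7)
n_units, n_steps, dt = 500, 400, 0.05
threshold = 5.0

# Unit-to-unit drift variation, common diffusion, additive measurement error.
drift = rng.normal(1.0, 0.2, n_units)                # random drift per unit
incr = drift[:, None] * dt + 0.3 * np.sqrt(dt) * rng.normal(size=(n_units, n_steps))
paths = np.cumsum(incr, axis=1)                      # latent degradation X(t)
obs = paths + rng.normal(0, 0.2, paths.shape)        # what the gauge records

t = dt * np.arange(1, n_steps + 1)

def first_hit(trajectories):
    """First time each trajectory reaches the threshold (inf if never)."""
    return np.array([t[np.argmax(p >= threshold)] if (p >= threshold).any()
                     else np.inf for p in trajectories])

fht = first_hit(paths)        # true failure times, median near threshold/drift
fht_obs = first_hit(obs)      # naive FHT from noisy data: biased early
median_fht = np.median(fht)
```

Noise makes the observed trajectory cross the threshold earlier on average than the latent path, which is one reason the paper argues for modeling measurement error explicitly.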

  11. Influence of the statistical distribution of bioassay measurement errors on the intake estimation

    International Nuclear Information System (INIS)

    Lee, T. Y; Kim, J. K

    2006-01-01

    The purpose of this study is to provide guidance for selecting an error distribution by analyzing the influence of the statistical distribution assumed for bioassay measurement errors on the intake estimation. For this purpose, intakes were estimated by the maximum likelihood method under both normal and lognormal error distributions, and the estimated intakes under the two assumptions were compared. According to the results of this study, when measurement results for lung retention were somewhat greater than the limit of detection, the distribution type had negligible influence on the results. For measurements of the daily excretion rate, however, the results obtained under a lognormal assumption were 10% higher than those obtained under a normal assumption. In view of these facts, where the uncertainty is governed by counting statistics the distribution type has no influence on intake estimation, whereas when other uncertainty components predominate it is clearly desirable to estimate the intake assuming a lognormal distribution.
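The difference between the two distributional assumptions can be sketched for a single-parameter intake fit to daily excretion data (the excretion fractions and error size below are made-up illustrative values, not from the study):

```python
import numpy as np

rng = np.random.default_rng(8)
true_intake = 100.0                       # e.g. Bq
f = np.array([0.02, 0.015, 0.011, 0.008, 0.006, 0.004])  # assumed excretion fractions
m = true_intake * f * rng.lognormal(0.0, 0.5, f.size)    # daily excretion data

# MLE assuming normal errors: ordinary least squares on the linear scale.
intake_normal = np.sum(m * f) / np.sum(f ** 2)

# MLE assuming lognormal errors: least squares on the log scale.
intake_lognormal = np.exp(np.mean(np.log(m / f)))
```

The two estimates generally differ when the scatter is multiplicative, which is the situation the abstract describes for excretion-rate data.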

  12. Analysis and compensation of synchronous measurement error for multi-channel laser interferometer

    International Nuclear Information System (INIS)

    Du, Shengwu; Hu, Jinchun; Zhu, Yu; Hu, Chuxiong

    2017-01-01

    Dual-frequency laser interferometers have been widely used in precision motion systems as displacement sensors to achieve nanoscale positioning or synchronization accuracy. In a multi-channel laser interferometer synchronous measurement system, signal delays differ between channels, which causes asynchronous measurement and in turn leads to a measurement error, the synchronous measurement error (SME). Based on a signal delay analysis of the measurement system, this paper presents a multi-channel SME framework for synchronous measurement and establishes a model relating SME to motion velocity. Further, a real-time compensation method for SME is proposed. The method has been verified in a self-developed laser interferometer signal processing board (SPB). The experimental results showed that, using this compensation method at a motion velocity of 0.89 m s−1, the maximum SME between two measuring channels in the SPB is 1.1 nm. This method is more easily implemented and applied in engineering than directly testing for smaller signal delays. (paper)

  13. Analysis and compensation of synchronous measurement error for multi-channel laser interferometer

    Science.gov (United States)

    Du, Shengwu; Hu, Jinchun; Zhu, Yu; Hu, Chuxiong

    2017-05-01

    Dual-frequency laser interferometers have been widely used in precision motion systems as displacement sensors to achieve nanoscale positioning or synchronization accuracy. In a multi-channel laser interferometer synchronous measurement system, signal delays differ between channels, which causes asynchronous measurement and in turn leads to a measurement error, the synchronous measurement error (SME). Based on a signal delay analysis of the measurement system, this paper presents a multi-channel SME framework for synchronous measurement and establishes a model relating SME to motion velocity. Further, a real-time compensation method for SME is proposed. The method has been verified in a self-developed laser interferometer signal processing board (SPB). The experimental results showed that, using this compensation method at a motion velocity of 0.89 m s-1, the maximum SME between two measuring channels in the SPB is 1.1 nm. This method is more easily implemented and applied in engineering than directly testing for smaller signal delays.
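The velocity dependence of the SME can be sketched with assumed numbers (the delay skew and velocity-estimate error below are hypothetical, chosen only to illustrate the first-order relationship SME ≈ v·Δτ and its compensation):

```python
# Hypothetical numbers: the channel delay skew and the velocity-estimate
# error are assumptions for illustration, not values from the paper.
v = 0.89            # axis velocity (m/s)
dtau = 25e-9        # inter-channel signal delay difference (s), assumed

# A channel lagging by dtau reports x(t - dtau) ≈ x(t) - v*dtau at constant
# velocity, so the synchronous measurement error is approximately v*dtau.
sme = v * dtau                            # ≈ 22 nm with these numbers

# First-order compensation using a (slightly imperfect) velocity estimate:
v_est = 0.89 + 0.001
residual_error = sme - v_est * dtau       # what remains after compensation
```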

  14. Optics measurement algorithms and error analysis for the proton energy frontier

    CERN Document Server

    Langner, A

    2015-01-01

    Optics measurement algorithms have been improved in preparation for the commissioning of the LHC at higher energy, i.e., with an increased damage potential. Due to machine protection considerations, the higher energy sets tighter limits on the maximum excitation amplitude and the total beam charge, reducing the signal-to-noise ratio of optics measurements. Furthermore, the precision in 2012 (4 TeV) was insufficient to understand beam size measurements and determine interaction point (IP) β-functions (β*). A new, more sophisticated algorithm has been developed which takes into account both the statistical and systematic errors involved in this measurement. This makes it possible to combine more beam position monitor measurements for deriving the optical parameters and is demonstrated to significantly improve the accuracy and precision. Measurements from the 2012 run have been reanalyzed which, due to the improved algorithms, result in a significantly higher precision of the derived optical parameters and decreased…

  15. Feasibility of RACT for 3D dose measurement and range verification in a water phantom

    Energy Technology Data Exchange (ETDEWEB)

    Alsanea, Fahed [School of Health Sciences, Purdue University, 550 Stadium Mall Drive, West Lafayette, Indiana 47907-2051 (United States); Moskvin, Vadim [Radiation Oncology, Indiana University School of Medicine, 535 Barnhill Drive, RT 041, Indianapolis, Indiana 46202-5289 (United States); Stantz, Keith M., E-mail: kstantz@purdue.edu [School of Health Sciences, Purdue University, 550 Stadium Mall Drive, West Lafayette, Indiana 47907-2051 and Radiology and Imaging Sciences, Indiana University School of Medicine, 950 West Walnut Street, Indianapolis, Indiana 46202-5289 (United States)

    2015-02-15

    Purpose: The objective of this study is to establish the feasibility of using radiation-induced acoustics to measure the range and Bragg peak dose from a pulsed proton beam. Simulation studies implementing a prototype scanner design based on computed tomographic methods were performed to investigate the sensitivity to proton range and integral dose. Methods: Derived from thermodynamic wave equation, the pressure signals generated from the dose deposited from a pulsed proton beam with a 1 cm lateral beam width and a range of 16, 20, and 27 cm in water using Monte Carlo methods were simulated. The resulting dosimetric images were reconstructed implementing a 3D filtered backprojection algorithm and the pressure signals acquired from a 71-transducer array with a cylindrical geometry (30 × 40 cm) rotated over 2π about its central axis. Dependencies on the detector bandwidth and proton beam pulse width were performed, after which, different noise levels were added to the detector signals (using 1 μs pulse width and a 0.5 MHz cutoff frequency/hydrophone) to investigate the statistical and systematic errors in the proton range (at 20 cm) and Bragg peak dose (of 1 cGy). Results: The reconstructed radioacoustic computed tomographic image intensity was shown to be linearly correlated to the dose within the Bragg peak. And, based on noise dependent studies, a detector sensitivity of 38 mPa was necessary to determine the proton range to within 1.0 mm (full-width at half-maximum) (systematic error < 150 μm) for a 1 cGy Bragg peak dose, where the integral dose within the Bragg peak was measured to within 2%. For existing hydrophone detector sensitivities, a Bragg peak dose of 1.6 cGy is possible. Conclusions: This study demonstrates that computed tomographic scanner based on ionizing radiation-induced acoustics can be used to verify dose distribution and proton range with centi-Gray sensitivity. Realizing this technology into the clinic has the potential to significantly

  16. The Construction and Calibration of a LADAR (Laser Detection and Ranging) Cross-Section Measurement Range

    Science.gov (United States)

    1985-12-01

    The resonator optics consist of two Porro prisms which are oriented 90° from one another about the cavity's optical axis. In other words, the roof edges of each prism are perpendicular to one another. The Nd:YAG laser rod measures 5 mm in diameter by 75 mm long and is optically pumped by a xenon flashlamp. Q-switching of the laser is performed by a Pockels cell. A dielectric polarizer is sealed between two right-angle prisms which are joined symmetrically…

  17. Measurements on the extended range of the wake

    International Nuclear Information System (INIS)

    Kumbartzki, G.J.; Kroesing, G.; Neuburger, H.

    1981-01-01

    The Coulomb explosion of H2+ ions at 28 MeV is used to probe the wake over a range of about 400 Å in Al. Preliminary results give good agreement with the wavelength prediction of the simple plasma oscillation wake model. (author)

  18. Visual acuity measures do not reliably detect childhood refractive error--an epidemiological study.

    Directory of Open Access Journals (Sweden)

    Lisa O'Donoghue

    Full Text Available PURPOSE: To investigate the utility of uncorrected visual acuity measures in screening for refractive error in white school children aged 6-7-years and 12-13-years. METHODS: The Northern Ireland Childhood Errors of Refraction (NICER) study used a stratified random cluster design to recruit children from schools in Northern Ireland. Detailed eye examinations included assessment of logMAR visual acuity and cycloplegic autorefraction. Spherical equivalent refractive data from the right eye were used to classify significant refractive error as myopia of at least 1DS, hyperopia as greater than +3.50DS and astigmatism as greater than 1.50DC, whether it occurred in isolation or in association with myopia or hyperopia. RESULTS: Results are presented from 661 white 12-13-year-old and 392 white 6-7-year-old school-children. Using a cut-off of uncorrected visual acuity poorer than 0.20 logMAR to detect significant refractive error gave a sensitivity of 50% and specificity of 92% in 6-7-year-olds and 73% and 93% respectively in 12-13-year-olds. In 12-13-year-old children a cut-off of poorer than 0.20 logMAR had a sensitivity of 92% and a specificity of 91% in detecting myopia and a sensitivity of 41% and a specificity of 84% in detecting hyperopia. CONCLUSIONS: Vision screening using logMAR acuity can reliably detect myopia, but not hyperopia or astigmatism, in school-age children. Providers of vision screening programs should be cognisant that where detection of uncorrected hyperopic and/or astigmatic refractive error is an aspiration, current UK protocols will not effectively deliver.

  19. Optics measurement algorithms and error analysis for the proton energy frontier

    Directory of Open Access Journals (Sweden)

    A. Langner

    2015-03-01

    Full Text Available Optics measurement algorithms have been improved in preparation for the commissioning of the LHC at higher energy, i.e., with an increased damage potential. Due to machine protection considerations, the higher energy sets tighter limits on the maximum excitation amplitude and the total beam charge, reducing the signal-to-noise ratio of optics measurements. Furthermore, the precision in 2012 (4 TeV) was insufficient to understand beam size measurements and determine interaction point (IP) β-functions (β*). A new, more sophisticated algorithm has been developed which takes into account both the statistical and systematic errors involved in this measurement. This makes it possible to combine more beam position monitor measurements for deriving the optical parameters and is demonstrated to significantly improve the accuracy and precision. Measurements from the 2012 run have been reanalyzed and, due to the improved algorithms, yield a significantly higher precision of the derived optical parameters, with average error bars decreased by a factor of three to four. This allowed the calculation of β* values and proved fundamental to understanding the emittance evolution during the energy ramp.

  20. Optics measurement algorithms and error analysis for the proton energy frontier

    Science.gov (United States)

    Langner, A.; Tomás, R.

    2015-03-01

    Optics measurement algorithms have been improved in preparation for the commissioning of the LHC at higher energy, i.e., with an increased damage potential. Due to machine protection considerations, the higher energy sets tighter limits on the maximum excitation amplitude and the total beam charge, reducing the signal-to-noise ratio of optics measurements. Furthermore, the precision in 2012 (4 TeV) was insufficient to understand beam size measurements and determine interaction point (IP) β-functions (β*). A new, more sophisticated algorithm has been developed which takes into account both the statistical and systematic errors involved in this measurement. This makes it possible to combine more beam position monitor measurements for deriving the optical parameters and is demonstrated to significantly improve the accuracy and precision. Measurements from the 2012 run have been reanalyzed and, due to the improved algorithms, yield a significantly higher precision of the derived optical parameters, with average error bars decreased by a factor of three to four. This allowed the calculation of β* values and proved fundamental to understanding the emittance evolution during the energy ramp.
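    The error-bar reduction from pooling many beam position monitor measurements can be illustrated with generic inverse-variance weighting (a sketch only, not the paper's actual algorithm; the function name and numbers are illustrative):

```python
import math

def combine_measurements(values, sigmas):
    # Inverse-variance weighted mean of independent estimates; the
    # combined uncertainty is 1 / sqrt(sum of the weights).
    weights = [1.0 / s ** 2 for s in sigmas]
    wsum = sum(weights)
    mean = sum(w * v for w, v in zip(weights, values)) / wsum
    return mean, math.sqrt(1.0 / wsum)

# Three hypothetical beta-function estimates (m), each with a 4 m error:
beta_mean, beta_sigma = combine_measurements([100.0, 104.0, 98.0], [4.0, 4.0, 4.0])
```

    Pooling N equally precise, independent estimates shrinks the error bar by a factor of √N, which is why combining more BPM measurements tightens the derived optical parameters.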

  1. Precision Measurements of the Cluster Red Sequence using an Error Corrected Gaussian Mixture Model

    Energy Technology Data Exchange (ETDEWEB)

    Hao, Jiangang; /Fermilab /Michigan U.; Koester, Benjamin P.; /Chicago U.; Mckay, Timothy A.; /Michigan U.; Rykoff, Eli S.; /UC, Santa Barbara; Rozo, Eduardo; /Ohio State U.; Evrard, August; /Michigan U.; Annis, James; /Fermilab; Becker, Matthew; /Chicago U.; Busha, Michael; /KIPAC, Menlo Park /SLAC; Gerdes, David; /Michigan U.; Johnston, David E.; /Northwestern U. /Brookhaven

    2009-07-01

    The red sequence is an important feature of galaxy clusters and plays a crucial role in optical cluster detection. Measurements of the slope and scatter of the red sequence are affected both by the selection of red sequence galaxies and by measurement errors. In this paper, we describe a new error corrected Gaussian Mixture Model for red sequence galaxy identification. Using this technique, we can remove the effects of measurement error and extract unbiased information about the intrinsic properties of the red sequence. We use this method to select red sequence galaxies in each of the 13,823 clusters in the maxBCG catalog, and measure the red sequence ridgeline location and scatter of each. These measurements provide precise constraints on the variation of the average red galaxy populations in the observed frame with redshift. We find that the scatter of the red sequence ridgeline increases mildly with redshift, and that the slope decreases with redshift. We also observe that the slope does not strongly depend on cluster richness. Using similar methods, we show that this behavior is mirrored in a spectroscopic sample of field galaxies, further emphasizing that ridgeline properties are independent of environment. These precise measurements serve as an important observational check on simulations and mock galaxy catalogs. The observed trends in the slope and scatter of the red sequence ridgeline with redshift are clues to possible intrinsic evolution of the cluster red-sequence itself. Most importantly, the methods presented in this work lay the groundwork for further improvements in optically-based cluster cosmology.

  2. PRECISION MEASUREMENTS OF THE CLUSTER RED SEQUENCE USING AN ERROR-CORRECTED GAUSSIAN MIXTURE MODEL

    International Nuclear Information System (INIS)

    Hao Jiangang; Annis, James; Koester, Benjamin P.; Mckay, Timothy A.; Evrard, August; Gerdes, David; Rykoff, Eli S.; Rozo, Eduardo; Becker, Matthew; Busha, Michael; Wechsler, Risa H.; Johnston, David E.; Sheldon, Erin

    2009-01-01

    The red sequence is an important feature of galaxy clusters and plays a crucial role in optical cluster detection. Measurements of the slope and scatter of the red sequence are affected both by the selection of red sequence galaxies and by measurement errors. In this paper, we describe a new error-corrected Gaussian Mixture Model for red sequence galaxy identification. Using this technique, we can remove the effects of measurement error and extract unbiased information about the intrinsic properties of the red sequence. We use this method to select red sequence galaxies in each of the 13,823 clusters in the maxBCG catalog, and measure the red sequence ridgeline location and scatter of each. These measurements provide precise constraints on the variation of the average red galaxy populations in the observed frame with redshift. We find that the scatter of the red sequence ridgeline increases mildly with redshift, and that the slope decreases with redshift. We also observe that the slope does not strongly depend on cluster richness. Using similar methods, we show that this behavior is mirrored in a spectroscopic sample of field galaxies, further emphasizing that ridgeline properties are independent of environment. These precise measurements serve as an important observational check on simulations and mock galaxy catalogs. The observed trends in the slope and scatter of the red sequence ridgeline with redshift are clues to possible intrinsic evolution of the cluster red sequence itself. Most importantly, the methods presented in this work lay the groundwork for further improvements in optically based cluster cosmology.
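    The core of the error-correction idea, separating intrinsic red-sequence scatter from measurement noise, can be illustrated with a simple variance subtraction (a toy sketch, not the paper's full per-object Gaussian mixture fit; the function name and data are illustrative):

```python
import math

def intrinsic_scatter(colors, errors):
    # Subtract the mean measurement-error variance from the observed
    # color variance (in quadrature) to estimate the intrinsic scatter,
    # clipping at zero when noise dominates.
    n = len(colors)
    mean = sum(colors) / n
    var_obs = sum((c - mean) ** 2 for c in colors) / (n - 1)
    var_err = sum(e ** 2 for e in errors) / n
    return math.sqrt(max(var_obs - var_err, 0.0))
```

    Without this correction, the measured ridgeline scatter is biased high by photometric errors, which grow with redshift and would masquerade as intrinsic evolution.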

  3. A statistical model for measurement error that incorporates variation over time in the target measure, with application to nutritional epidemiology.

    Science.gov (United States)

    Freedman, Laurence S; Midthune, Douglas; Dodd, Kevin W; Carroll, Raymond J; Kipnis, Victor

    2015-11-30

    Most statistical methods that adjust analyses for measurement error assume that the target exposure T is a fixed quantity for each individual. However, in many applications, the value of T for an individual varies with time. We develop a model that accounts for such variation, describing the model within the framework of a meta-analysis of validation studies of dietary self-report instruments, where the reference instruments are biomarkers. We demonstrate that in this application, the estimates of the attenuation factor and correlation with true intake, key parameters quantifying the accuracy of the self-report instrument, are sometimes substantially modified under the time-varying exposure model compared with estimates obtained under a traditional fixed-exposure model. We conclude that accounting for the time element in measurement error problems is potentially important. Copyright © 2015 John Wiley & Sons, Ltd.
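    The attenuation factor mentioned above is the slope from regressing the true exposure on the error-prone report. A minimal sketch, assuming paired true and reported values are available (as in a validation study with biomarker references):

```python
def attenuation_factor(truth, reported):
    # lambda = cov(T, Q) / var(Q): the slope of true exposure T on the
    # error-prone report Q.  Values below 1 mean risk estimates based on
    # Q are attenuated toward the null by that factor.
    n = len(truth)
    mt = sum(truth) / n
    mr = sum(reported) / n
    cov = sum((t - mt) * (r - mr) for t, r in zip(truth, reported)) / (n - 1)
    var = sum((r - mr) ** 2 for r in reported) / (n - 1)
    return cov / var
```

    Under the time-varying exposure model of the paper, T itself changes between occasions, which is what shifts these estimated factors relative to the fixed-exposure assumption.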

  4. Extending the range of turbidity measurement using polarimetry

    Science.gov (United States)

    Baba, Justin S.

    2017-11-21

    Turbidity measurements are obtained by directing a polarized optical beam to a scattering sample. Scattered portions of the beam are measured in orthogonal polarization states to determine a scattering minimum and a scattering maximum. These values are used to determine a degree of polarization of the scattered portions of the beam, and concentrations of scattering materials or turbidity can be estimated using the degree of polarization. Typically, linear polarizations are used, and scattering is measured along an axis that is orthogonal to the direction of propagation of the polarized optical beam.
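    The degree-of-polarization computation described above reduces to a one-line formula on the two orthogonal intensity measurements (a minimal sketch; the function name is illustrative):

```python
def degree_of_polarization(i_par, i_perp):
    # DOP = (Imax - Imin) / (Imax + Imin) from the co- and cross-polarized
    # scattered intensities.  DOP falls from 1 toward 0 as multiple
    # scattering (turbidity) depolarizes the beam, which is what extends
    # the usable measurement range beyond simple attenuation.
    i_max, i_min = max(i_par, i_perp), min(i_par, i_perp)
    return (i_max - i_min) / (i_max + i_min)
```

    A calibration curve of DOP versus known scatterer concentration then lets the turbidity of an unknown sample be read off its measured DOP.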

  5. The estimation of differential counting measurements of positive quantities with relatively large statistical errors

    International Nuclear Information System (INIS)

    Vincent, C.H.

    1982-01-01

    Bayes' principle is applied to the differential counting measurement of a positive quantity in which the statistical errors are not necessarily small in relation to the true value of the quantity. The methods of estimation derived are found to give consistent results and to avoid the anomalous negative estimates sometimes obtained by conventional methods. One of the methods given provides a simple means of deriving the required estimates from conventionally presented results and appears to have wide potential applications. Both methods provide the actual posterior probability distribution of the quantity to be measured. A particularly important potential application is the correction of counts on low-radioactivity samples for background. (orig.)
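    The flavor of the Bayesian estimate can be sketched with a Gaussian approximation to the counting likelihood and a flat prior restricted to non-negative values (an illustrative sketch, not the paper's exact derivation):

```python
import math

def positive_net_estimate(gross, background, smax=None, npts=2000):
    # Gaussian approximation to the likelihood of the net signal s:
    # d = gross - background, with sigma^2 ~ gross + background.
    d = gross - background
    sigma = math.sqrt(gross + background)
    if smax is None:
        smax = max(d, 0.0) + 8.0 * sigma
    ds = smax / npts
    # Flat prior on s >= 0: posterior mean over a grid of the truncated density.
    grid = [(k + 0.5) * ds for k in range(npts)]
    weights = [math.exp(-0.5 * ((s - d) / sigma) ** 2) for s in grid]
    return sum(s * w for s, w in zip(grid, weights)) / sum(weights)
```

    When the background exceeds the gross count, naive subtraction returns a negative activity; the truncated posterior mean stays positive, matching the behavior the abstract describes.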

  6. Development of a simulation program to study error propagation in the reprocessing input accountancy measurements

    International Nuclear Information System (INIS)

    Sanfilippo, L.

    1987-01-01

    A physical model and a computer program have been developed to simulate all the measurement operations involved in the Isotopic Dilution Analysis technique currently applied in the Volume-Concentration method for Reprocessing Input Accountancy, together with their errors or uncertainties. The simulator can readily solve a number of problems related to the measurement activities of the plant operator and the inspector. The program, written in Fortran 77, is based on a particular Monte Carlo technique named "Random Sampling"; a full description of the code is reported.
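    The "Random Sampling" Monte Carlo approach can be sketched in a few lines: perturb each measured input by its uncertainty and read the combined uncertainty off the spread of the simulated results (a toy model with illustrative inputs, not the Fortran simulator itself):

```python
import random
import statistics

def propagate_mc(volume, conc, u_vol, u_conc, n=100_000, seed=1):
    # Perturb each input by its relative standard uncertainty (Gaussian,
    # independent) and recompute the amount of material for every trial.
    rng = random.Random(seed)
    amounts = [volume * (1.0 + rng.gauss(0.0, u_vol)) *
               conc * (1.0 + rng.gauss(0.0, u_conc))
               for _ in range(n)]
    mean = statistics.fmean(amounts)
    rel_u = statistics.stdev(amounts) / mean
    return mean, rel_u

# Hypothetical input batch: 1000 L at 2.0 g/L, with 0.3% and 0.4% uncertainties.
mean_amount, rel_uncertainty = propagate_mc(1000.0, 2.0, 0.003, 0.004)
```

    For small independent relative errors, the simulated spread reproduces the quadrature sum √(0.3%² + 0.4%²) = 0.5%, which is the quick analytic cross-check on such a simulator.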

  7. Long-Range Channel Measurements on Small Terminal Antennas Using Optics

    DEFF Research Database (Denmark)

    Yanakiev, Boyan; Nielsen, Jesper Ødum; Christensen, Morten

    2012-01-01

    In this paper, details are given on a novel measurement device for radio propagation-channel measurements. To avoid measurement errors due to the conductive cables on small terminal antennas, as well as to improve the handling of the prototypes under investigation, an optical measurement device has...

  8. An adaptive scheme for robot localization and mapping with dynamically configurable inter-beacon range measurements.

    Science.gov (United States)

    Torres-González, Arturo; Martinez-de Dios, Jose Ramiro; Ollero, Anibal

    2014-04-25

    This work is motivated by robot-sensor network cooperation techniques where sensor nodes (beacons) are used as landmarks for range-only (RO) simultaneous localization and mapping (SLAM). This paper presents a RO-SLAM scheme that actuates over the measurement gathering process using mechanisms that dynamically modify the rate and variety of measurements that are integrated in the SLAM filter. It includes a measurement gathering module that can be configured to collect direct robot-beacon and inter-beacon measurements with different inter-beacon depth levels and at different rates. It also includes a supervision module that monitors the SLAM performance and dynamically selects the measurement gathering configuration balancing SLAM accuracy and resource consumption. The proposed scheme has been applied to an extended Kalman filter SLAM with auxiliary particle filters for beacon initialization (PF-EKF SLAM) and validated with experiments performed in the CONET Integrated Testbed. It achieved lower map and robot errors (34% and 14%, respectively) than traditional methods with a lower computational burden (16%) and similar beacon energy consumption.

  9. The relative size of measurement error and attrition error in a panel survey. Comparing them with a new multi-trait multi-method model

    NARCIS (Netherlands)

    Lugtig, Peter

    2017-01-01

    This paper proposes a method to simultaneously estimate both measurement and nonresponse errors for attitudinal and behavioural questions in a longitudinal survey. The method uses a Multi-Trait Multi-Method (MTMM) approach, which is commonly used to estimate the reliability and validity of survey

  10. Climatologies from satellite measurements: the impact of orbital sampling on the standard error of the mean

    Directory of Open Access Journals (Sweden)

    M. Toohey

    2013-04-01

    Full Text Available Climatologies of atmospheric observations are often produced by binning measurements according to latitude and calculating zonal means. The uncertainty in these climatological means is characterised by the standard error of the mean (SEM. However, the usual estimator of the SEM, i.e., the sample standard deviation divided by the square root of the sample size, holds only for uncorrelated randomly sampled measurements. Measurements of the atmospheric state along a satellite orbit cannot always be considered as independent because (a the time-space interval between two nearest observations is often smaller than the typical scale of variations in the atmospheric state, and (b the regular time-space sampling pattern of a satellite instrument strongly deviates from random sampling. We have developed a numerical experiment where global chemical fields from a chemistry climate model are sampled according to real sampling patterns of satellite-borne instruments. As case studies, the model fields are sampled using sampling patterns of the Michelson Interferometer for Passive Atmospheric Sounding (MIPAS and Atmospheric Chemistry Experiment Fourier-Transform Spectrometer (ACE-FTS satellite instruments. Through an iterative subsampling technique, and by incorporating information on the random errors of the MIPAS and ACE-FTS measurements, we produce empirical estimates of the standard error of monthly mean zonal mean model O3 in 5° latitude bins. We find that generally the classic SEM estimator is a conservative estimate of the SEM, i.e., the empirical SEM is often less than or approximately equal to the classic estimate. 
Exceptions occur only when natural variability is larger than the random measurement error, and specifically in instances where the zonal sampling distribution shows non-uniformity with a similar zonal structure as variations in the sampled field, leading to maximum sensitivity to arbitrary phase shifts between the sample distribution and
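    The contrast between the classic SEM estimator and a correlation-aware alternative can be sketched with the common AR(1) effective-sample-size correction (a standard approximation, not the iterative subsampling method used in the paper):

```python
import math

def classic_sem(x):
    # s / sqrt(n): valid for independent, randomly drawn samples only.
    n = len(x)
    m = sum(x) / n
    s = math.sqrt(sum((v - m) ** 2 for v in x) / (n - 1))
    return s / math.sqrt(n)

def autocorr_sem(x):
    # Rescale the SEM using the AR(1) effective sample size
    # n_eff = n * (1 - r1) / (1 + r1) for lag-1 autocorrelation r1;
    # positively correlated (e.g., orbital) sampling inflates the error bar.
    n = len(x)
    m = sum(x) / n
    var = sum((v - m) ** 2 for v in x) / n
    r1 = sum((x[i] - m) * (x[i + 1] - m) for i in range(n - 1)) / (n * var)
    n_eff = n * (1.0 - r1) / (1.0 + r1)
    return classic_sem(x) * math.sqrt(n / max(n_eff, 1.0))
```

    Positively correlated samples carry less independent information than their count suggests, so the corrected SEM exceeds s/√n; anticorrelated samples do the opposite, consistent with the paper's finding that the classic estimator is usually conservative only in particular sampling regimes.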

  11. The determination of carbon dioxide concentration using atmospheric pressure ionization mass spectrometry/isotopic dilution and errors in concentration measurements caused by dryers.

    Science.gov (United States)

    DeLacy, Brendan G; Bandy, Alan R

    2008-01-01

    An atmospheric pressure ionization mass spectrometry/isotopically labeled standard (APIMS/ILS) method has been developed for the determination of carbon dioxide (CO(2)) concentration. Descriptions of the instrumental components, the ionization chemistry, and the statistics associated with the analytical method are provided. This method represents an alternative to the nondispersive infrared (NDIR) technique, which is currently used in the atmospheric community to determine atmospheric CO(2) concentrations. The APIMS/ILS and NDIR methods exhibit a decreased sensitivity for CO(2) in the presence of water vapor. Therefore, dryers such as a Nafion dryer are used to remove water before detection. The APIMS/ILS method measures mixing ratios and demonstrates linearity and range in the presence or absence of a dryer. The NDIR technique, on the other hand, measures molar concentrations. The second half of this paper describes errors in molar concentration measurements that are caused by drying. An equation describing the errors was derived from the ideal gas law, the conservation of mass, and Dalton's Law. The purpose of this derivation was to quantify errors in the NDIR technique that are caused by drying. Laboratory experiments were conducted to verify the errors created solely by the dryer in CO(2) concentration measurements post-dryer. The laboratory experiments verified the theoretically predicted errors in the derived equations. There are numerous references in the literature that describe the use of a dryer in conjunction with the NDIR technique. However, these references do not address the errors that are caused by drying.
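    The dryer-induced error follows from Dalton's law: removing water inflates the mole fraction of every remaining component. A minimal sketch of the wet/dry conversion (illustrative function name; not the paper's full equation):

```python
def dryer_error(x_co2_dry, x_h2o):
    # Removing the water raises each remaining component's share of the
    # total, so the ambient (wet) mole fraction is x_wet = x_dry * (1 - x_h2o).
    # Returns x_wet and the relative error incurred by reporting the
    # post-dryer value as if it were ambient.
    x_wet = x_co2_dry * (1.0 - x_h2o)
    rel_error = (x_co2_dry - x_wet) / x_wet
    return x_wet, rel_error
```

    At 2% water vapor, reporting the dry value as ambient overstates CO2 by about 2%, far larger than the sub-ppm accuracy targets of atmospheric monitoring.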

  12. Intensity autocorrelation measurements of frequency combs in the terahertz range

    Science.gov (United States)

    Benea-Chelmus, Ileana-Cristina; Rösch, Markus; Scalari, Giacomo; Beck, Mattias; Faist, Jérôme

    2017-09-01

    We report on direct measurements of the emission character of quantum cascade laser based frequency combs, using intensity autocorrelation. Our implementation is based on fast electro-optic sampling, with a detection spectral bandwidth matching the emission bandwidth of the comb laser, around 2.5 THz. We find the output of these frequency combs to be continuous even in the locked regime, but accompanied by a strong intensity modulation. Moreover, with our record temporal resolution of only few hundreds of femtoseconds, we can resolve correlated intensity modulation occurring on time scales as short as the gain recovery time, about 4 ps. By direct comparison with pulsed terahertz light originating from a photoconductive emitter, we demonstrate the peculiar emission pattern of these lasers. The measurement technique is self-referenced and ultrafast, and requires no reconstruction. It will be of significant importance in future measurements of ultrashort pulses from quantum cascade lasers.

  13. Free tropospheric measurements of CS2 over a 45 deg N to 45 deg S latitude range

    Science.gov (United States)

    Tucker, B. J.; Maroulis, P. J.; Bandy, A. R.

    1985-01-01

    The mean value obtained from 52 free tropospheric measurements of CS2 over the 45 deg N-45 deg S latitude range was 5.7 pptv, with standard deviation and standard error of 1.9 and 0.3 pptv, respectively. Large fluctuations in the CS2 concentration are observed which reflect the apparent short atmospheric residence time and inhomogeneities in the surface sources of CS2. The amounts of CS2 in the Northern and Southern Hemispheres are statistically equal.
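    The quoted standard error is consistent with SE = s/√n:

```python
import math

# 52 free-tropospheric CS2 measurements with sample standard deviation 1.9 pptv:
standard_error = 1.9 / math.sqrt(52)  # about 0.26 pptv, matching the quoted 0.3
```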

  14. Determination of the range of control limits in radioimmunoassay measurements

    International Nuclear Information System (INIS)

    Fiori, A.M.C.

    1981-01-01

    A grouping technique is proposed for control limits in radioimmunoassay measurements. It has the advantage of working with control limits of 99.7% without the inconvenience of the confidence intervals. The method is practical and simple. It provides considerable flexibility for the processing of data. As the number of samples increases, the control limits become better defined. (author)

  15. Influence of material surface on the scanning error of a powder-free 3D measuring system.

    Science.gov (United States)

    Kurz, Michael; Attin, Thomas; Mehl, Albert

    2015-11-01

    This study aims to evaluate the accuracy of a powder-free three-dimensional (3D) measuring system (CEREC Omnicam, Sirona), when scanning the surface of a material at different angles. Additionally, the influence of water was investigated. Nine different materials were combined with human tooth surface (enamel) to create n = 27 specimens. These materials were: Controls (InCoris TZI and Cerec Guide Bloc), ceramics (Vitablocs® Mark II and IPS Empress CAD), metals (gold and amalgam) and composites (Tetric Ceram, Filtek Supreme A2B and A2E). The highly polished samples were scanned at different angles with and without water. The 216 scans were then analyzed and descriptive statistics were obtained. The height difference between the tooth and material surfaces, as measured with the 3D scans, ranged from 0.83 μm (±2.58 μm) to -14.79 μm (±3.45 μm), while the scan noise on the materials was between 3.23 μm (±0.79 μm) and 14.24 μm (±6.79 μm) without considering the control groups. Depending on the thickness of the water film, measurement errors in the order of 300-1,600 μm could be observed. The inaccuracies between the tooth and material surfaces, as well as the scan noise for the materials, were within the range of error for measurements used for conventional impressions and are therefore negligible. The presence of water, however, greatly affects the scan. The tested powder-free 3D measuring system can safely be used to scan different material surfaces without the prior application of a powder, although drying of the surface prior to scanning is highly advisable.

  16. Time Biases in laser ranging measurements; impacts on geodetic products (Reference Frame and Orbitography)

    Science.gov (United States)

    Belli, A.; Exertier, P.; Lemoine, F. G.; Chinn, D. S.; Zelensky, N. P.

    2017-12-01

    The GGOS objectives are to maintain a geodetic network with an accuracy of 1 mm and a stability of 0.1 mm per year. For years, the laser ranging technique, which provides very accurate absolute distances to geodetic targets, has made it possible to determine the scale factor as well as the coordinates of the geocenter. To achieve this goal, systematic errors appearing in the laser ranging measurements must be identified and resolved. In addition to the Range Bias (RB), which is the primary source of uncertainty in the technique, a Time Bias (TB) has recently been detected by using the Time Transfer by Laser Link (T2L2) space instrument on board the satellite Jason-2. Instead of determining TB through the precise orbit determination applied to commonly used geodetic targets like LAGEOS to estimate global geodetic products, we have independently developed a dedicated method to transfer time between remote satellite laser ranging stations. As a result, the evolving clock phase shift relative to UTC of around 30 stations has been determined, in the form of time series of time bias per station from 2008 to 2016, with an accuracy of 3-4 ns. This demonstrated how difficult it is, with the time and frequency technologies in use, to locally maintain accuracy and long-term stability even within 100 ns, the current requirement for time measurements (UTC) in the laser ranging technique. Because some laser ranging stations often exceed this limit (from 100 ns to a few μs), we have studied the resulting effects, first on the precise orbit determination itself and second on station positioning. We discuss the impact of TB on the LAGEOS and Jason-2 orbits, which appears to affect essentially the along-track component. We also investigate the role of TB in global geodetic parameters such as the station coordinates.
Finally, we propose to provide the community with time series of the time bias of laser ranging stations, in the form of a data-handling file, in order to be included in

  17. Reliability and Measurement Error of Tensiomyography to Assess Mechanical Muscle Function: A Systematic Review.

    Science.gov (United States)

    Martín-Rodríguez, Saúl; Loturco, Irineu; Hunter, Angus M; Rodríguez-Ruiz, David; Munguia-Izquierdo, Diego

    2017-12-01

    Martín-Rodríguez, S, Loturco, I, Hunter, AM, Rodríguez-Ruiz, D, and Munguia-Izquierdo, D. Reliability and measurement error of tensiomyography to assess mechanical muscle function: A systematic review. J Strength Cond Res 31(12): 3524-3536, 2017-Interest in studying mechanical skeletal muscle function through tensiomyography (TMG) has increased in recent years. This systematic review aimed to (a) report the reliability and measurement error of all TMG parameters (i.e., maximum radial displacement of the muscle belly [Dm], contraction time [Tc], delay time [Td], half-relaxation time [½ Tr], and sustained contraction time [Ts]) and (b) provide critical reflection on how to perform accurate and appropriate measurements for informing clinicians, exercise professionals, and researchers. A comprehensive literature search was performed of the Pubmed, Scopus, Science Direct, and Cochrane databases up to July 2017. Eight studies were included in this systematic review. Meta-analysis could not be performed because of the low quality of the evidence of some studies evaluated. Overall, the review of the included studies, involving 158 participants, revealed high relative reliability (intraclass correlation coefficient [ICC]) for Dm (0.91-0.99); moderate-to-high ICC for Ts (0.80-0.96), Tc (0.70-0.98), and ½ Tr (0.77-0.93); and low-to-high ICC for Td (0.60-0.98), independently of the evaluated muscles. In addition, absolute reliability (coefficient of variation [CV]) was low for all TMG parameters except ½ Tr (CV > 20%), whereas measurement error indexes were high for this parameter. In conclusion, this study indicates that three of the TMG parameters (Dm, Td, and Tc) are highly reliable, whereas ½ Tr demonstrates insufficient reliability and thus should not be used in future studies.

  18. Improvement of vision measurement accuracy using Zernike moment based edge location error compensation model

    International Nuclear Information System (INIS)

    Cui, J W; Tan, J B; Zhou, Y; Zhang, H

    2007-01-01

    This paper presents a Zernike moment based model developed to compensate edge location errors and further improve vision measurement accuracy. The model compensates for the slight changes resulting from sampling and establishes mathematical expressions for the subpixel location of theoretical and actual edges that are either perpendicular to or at an angle with the X-axis. Experimental results show that the proposed model can achieve a vision measurement accuracy of up to 0.08 pixel with a measurement uncertainty of less than 0.36 μm. It is therefore concluded that the model yields a significant improvement in vision measurement accuracy and is especially suitable for edge location in low-contrast images.

  19. ADC border effect and suppression of quantization error in the digital dynamic measurement

    International Nuclear Information System (INIS)

    Bai Li-Na; Liu Hai-Dong; Zhou Wei; Zhai Hong-Qi; Cui Zhen-Jian; Zhao Ming-Ying; Gu Xiao-Qian; Liu Bei-Ling; Huang Li-Bei; Zhang Yong

    2017-01-01

    Digital measurement and processing is an important direction in the measurement and control field. The quantization error present throughout digital processing is often the decisive factor restricting the development and application of digital technology. In this paper, we find that the stability of a digital quantization system is markedly better than its quantization resolution. Exploiting the border effect in digital quantization can therefore greatly improve the accuracy of digital processing: the effective precision is independent of the number of quantization bits and depends only on the stability of the quantization system. The high-precision measurement results obtained in a low-level quantization system with a high sampling rate are of significant value for progress in the digital measurement and processing field. (paper)
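    The border effect can be illustrated by averaging a noisy ("dithered") signal near an ADC code boundary: the duty cycle between adjacent codes encodes the sub-LSB value, so the averaged result resolves far below one quantization step (a toy illustration, not the authors' system):

```python
import random

def quantize(v, lsb=1.0):
    # Ideal mid-tread quantizer with step size lsb.
    return round(v / lsb) * lsb

def averaged_measurement(v, noise_rms=0.5, n=10_000, seed=7):
    # Near a code border, noise toggles the converter between adjacent
    # codes with a duty cycle proportional to the sub-LSB value, so the
    # mean of many samples converges well below one quantization step.
    rng = random.Random(seed)
    return sum(quantize(v + rng.gauss(0.0, noise_rms)) for _ in range(n)) / n
```

    A single conversion of 3.3 returns 3.0 (a 0.3 LSB error), while the average of many dithered conversions recovers approximately 3.3; the attainable precision is set by the sample count and system stability, not by the quantizer's bit depth.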

  20. Measurement errors in multifrequency bioelectrical impedance analyzers with and without impedance electrode mismatch

    International Nuclear Information System (INIS)

    Bogónez-Franco, P; Nescolarde, L; Bragós, R; Rosell-Ferrer, J; Yandiola, I

    2009-01-01

    The purpose of this study is to compare measurement errors in two commercially available multi-frequency bioimpedance analyzers, a Xitron 4000B and an ImpediMed SFB7, including electrode impedance mismatch. The comparison was made using resistive electrical models and in ten human volunteers. We used three different electrical models simulating three different body segments: the right side, leg and thorax. In the electrical models, we tested the effect of the capacitive coupling of the patient to ground and the skin–electrode impedance mismatch. Results showed that both sets of equipment are optimized for right-side measurements and for moderate skin–electrode impedance mismatch. In right-side measurements with mismatched electrodes, the 4000B is more accurate than the SFB7. When an electrode impedance mismatch was simulated, errors increased in both bioimpedance analyzers, and the effect of the mismatch in the voltage detection leads was greater than that in the current injection leads. For segments with lower impedance, such as the leg and thorax, the SFB7 is more accurate than the 4000B and also shows less dependence on electrode mismatch. In both devices, impedance measurements were not significantly affected (p > 0.05) by the capacitive coupling to ground.

  1. A bivariate measurement error model for semicontinuous and continuous variables: Application to nutritional epidemiology.

    Science.gov (United States)

    Kipnis, Victor; Freedman, Laurence S; Carroll, Raymond J; Midthune, Douglas

    2016-03-01

    Semicontinuous data in the form of a mixture of a large portion of zero values and continuously distributed positive values frequently arise in many areas of biostatistics. This article is motivated by the analysis of relationships between disease outcomes and intakes of episodically consumed dietary components. An important aspect of studies in nutritional epidemiology is that true diet is unobservable and commonly evaluated by food frequency questionnaires with substantial measurement error. Following the regression calibration approach for measurement error correction, unknown individual intakes in the risk model are replaced by their conditional expectations given mismeasured intakes and other model covariates. Those regression calibration predictors are estimated using short-term unbiased reference measurements in a calibration substudy. Since dietary intakes are often "energy-adjusted," e.g., by using ratios of the intake of interest to total energy intake, the correct estimation of the regression calibration predictor for each energy-adjusted episodically consumed dietary component requires modeling short-term reference measurements of the component (a semicontinuous variable), and energy (a continuous variable) simultaneously in a bivariate model. In this article, we develop such a bivariate model, together with its application to regression calibration. We illustrate the new methodology using data from the NIH-AARP Diet and Health Study (Schatzkin et al., 2001, American Journal of Epidemiology 154, 1119-1125), and also evaluate its performance in a simulation study. © 2015, The International Biometric Society.
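    The regression calibration step, in its simplest univariate form, replaces the unknown true intake with its conditional expectation given the report, fitted in the calibration substudy (a minimal sketch; the paper's bivariate semicontinuous/continuous model is considerably richer):

```python
def regression_calibration(reference, reported):
    # Least-squares fit of E[T | Q] = a + b * Q from calibration-substudy
    # pairs (unbiased reference measurement T, error-prone report Q);
    # the returned predictor replaces T in the risk model.
    n = len(reported)
    mq = sum(reported) / n
    mt = sum(reference) / n
    b = (sum((q - mq) * (t - mt) for q, t in zip(reported, reference))
         / sum((q - mq) ** 2 for q in reported))
    a = mt - b * mq
    return lambda q: a + b * q

# Toy calibration data: reference measurements paired with reports.
predict_intake = regression_calibration([2.0, 4.0, 6.0], [1.0, 2.0, 3.0])
```

    For energy-adjusted, episodically consumed components, the analogous predictor must model the semicontinuous intake and continuous energy jointly, which is the bivariate extension the article develops.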

  2. Measurement error in epidemiologic studies of air pollution based on land-use regression models.

    Science.gov (United States)

    Basagaña, Xavier; Aguilera, Inmaculada; Rivera, Marcela; Agis, David; Foraster, Maria; Marrugat, Jaume; Elosua, Roberto; Künzli, Nino

    2013-10-15

    Land-use regression (LUR) models are increasingly used to estimate air pollution exposure in epidemiologic studies. These models use air pollution measurements taken at a small set of locations and modeling based on geographical covariates for which data are available at all study participant locations. The process of LUR model development commonly includes a variable selection procedure. When LUR model predictions are used as explanatory variables in a model for a health outcome, measurement error can lead to bias of the regression coefficients and to inflation of their variance. In previous studies dealing with spatial predictions of air pollution, bias was shown to be small while most of the effect of measurement error was on the variance. In this study, we show that in realistic cases where LUR models are applied to health data, bias in health-effect estimates can be substantial. This bias depends on the number of air pollution measurement sites, the number of available predictors for model selection, and the amount of explainable variability in the true exposure. These results should be taken into account when interpreting health effects from studies that used LUR models.
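    The benign baseline case, pure Berkson-type prediction error with the residual independent of the prediction, leaves the health-effect slope unbiased; the study's point is that realistic LUR model selection departs from this baseline. A small simulation of the unbiased case (illustrative parameters only):

```python
import random

def health_effect_slope(n=50_000, beta=1.0, seed=3):
    # True exposure X = P + U: P is the part a LUR-style model predicts,
    # U is residual variation independent of P (Berkson-type error).
    # Regress the outcome Y = beta * X + noise on P alone; with purely
    # Berkson error the expected slope equals beta (no attenuation).
    rng = random.Random(seed)
    num = den = 0.0
    for _ in range(n):
        p = rng.gauss(0.0, 1.0)
        x = p + rng.gauss(0.0, 1.0)
        y = beta * x + rng.gauss(0.0, 0.5)
        num += p * y
        den += p * p
    return num / den
```

    When the prediction error is instead correlated with the predictions, as variable selection on a small monitoring network can induce, the slope no longer recovers beta, which is the substantial bias the study documents.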

  3. Comparing Two Inferential Approaches to Handling Measurement Error in Mixed-Mode Surveys

    Directory of Open Access Journals (Sweden)

    Buelens Bart

    2017-06-01

    Full Text Available Nowadays sample survey data collection strategies combine web, telephone, face-to-face, or other modes of interviewing in a sequential fashion. The measurement bias of survey estimates of means and totals is composed of different mode-dependent measurement errors, as each data collection mode has its own associated measurement error. This article contains an appraisal of two recently proposed methods of inference in this setting. The first is a calibration adjustment to the survey weights so as to balance the survey response to a prespecified distribution of the respondents over the modes. The second is a prediction method that seeks to correct measurements towards a benchmark mode. The two methods are motivated differently but at the same time coincide in some circumstances and agree in terms of required assumptions. The methods are applied to the Labour Force Survey in the Netherlands and are found to provide almost identical estimates of the number of unemployed. Each method has its own specific merits. Both can be applied easily in practice as they do not require additional data collection beyond the regular sequential mixed-mode survey, an attractive element for national statistical institutes and other survey organisations.
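
The first (calibration) approach amounts to reweighting the respondents so that the weighted mode distribution matches a prespecified one. A minimal sketch, with invented mode counts and target shares:

```python
import numpy as np

# Respondents, each interviewed in exactly one mode (hypothetical data)
modes = np.array(["web"] * 60 + ["tel"] * 25 + ["f2f"] * 15)
w = np.ones(modes.size)                        # design weights (equal here)
target = {"web": 0.5, "tel": 0.3, "f2f": 0.2}  # prespecified mode shares

# Calibrate: scale each mode's weights so weighted shares hit the targets
total = w.sum()
w_cal = w.copy()
for m, share in target.items():
    mask = modes == m
    w_cal[mask] *= share * total / w[mask].sum()

shares = {m: w_cal[modes == m].sum() / w_cal.sum() for m in target}
```

With one scaling factor per mode the calibration is exact and the total weight is preserved; in practice this adjustment would be combined with the survey's usual auxiliary-variable calibration.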

  4. Lifetime measurements in the 10⁻¹³ s range

    International Nuclear Information System (INIS)

    Bellini, Dzh.; Foa, L.; Dzhordzhi, M.

    1984-01-01

    Semiconductor detectors used in experimental high energy physics are described. The performance of Ge and Si detectors, and of telescopes developed on their basis, as well as some problems associated with the separation of coherent and incoherent events, are described in detail. New fields of semiconductor detector application are considered: lifetime measurements of heavy particles decaying via the weak interaction, such as D-mesons, and the procedure for determining the meson production and decay points with a spatial resolution that makes it possible to measure the length of the meson path. The spatial resolution of detectors operating as proportional chambers approaches 10-20 μm. Principles of designing the electronics for active target processors, in which solid state detectors are used, are described

  5. Mathematical Model and Calibration Experiment of a Large Measurement Range Flexible Joints 6-UPUR Six-Axis Force Sensor

    Directory of Open Access Journals (Sweden)

    Yanzhi Zhao

    2016-08-01

    Full Text Available Nowadays improving the accuracy and enlarging the measuring range of six-axis force sensors for wider applications in aircraft landing, rocket thrust, and spacecraft docking testing experiments has become an urgent objective. However, it is still difficult to achieve high accuracy and large measuring range with traditional parallel six-axis force sensors due to the influence of the gap and friction of the joints. Therefore, to overcome the mentioned limitations, this paper proposed a 6-Universal-Prismatic-Universal-Revolute (UPUR) joints parallel mechanism with flexible joints to develop a large measurement range six-axis force sensor. The structural characteristics of the sensor are analyzed in comparison with a traditional parallel sensor based on the Stewart platform. The force transfer relation of the sensor is deduced, and the force Jacobian matrix is obtained using screw theory in two cases: the ideal state, and the state in which the flexibility of each flexible joint is considered. The prototype and loading calibration system are designed and developed. The K value method and least squares method are used to process experimental data, and the type I and type II linearity errors are obtained. The experimental results show that the calibration error of the K value method is more than 13.4%, and the calibration error of the least squares method is 2.67%. The experimental results prove the feasibility of the sensor and the correctness of the theoretical analysis, which are expected to be adopted in practical applications.

  6. Mathematical Model and Calibration Experiment of a Large Measurement Range Flexible Joints 6-UPUR Six-Axis Force Sensor.

    Science.gov (United States)

    Zhao, Yanzhi; Zhang, Caifeng; Zhang, Dan; Shi, Zhongpan; Zhao, Tieshi

    2016-08-11

    Nowadays improving the accuracy and enlarging the measuring range of six-axis force sensors for wider applications in aircraft landing, rocket thrust, and spacecraft docking testing experiments has become an urgent objective. However, it is still difficult to achieve high accuracy and large measuring range with traditional parallel six-axis force sensors due to the influence of the gap and friction of the joints. Therefore, to overcome the mentioned limitations, this paper proposed a 6-Universal-Prismatic-Universal-Revolute (UPUR) joints parallel mechanism with flexible joints to develop a large measurement range six-axis force sensor. The structural characteristics of the sensor are analyzed in comparison with a traditional parallel sensor based on the Stewart platform. The force transfer relation of the sensor is deduced, and the force Jacobian matrix is obtained using screw theory in two cases: the ideal state, and the state in which the flexibility of each flexible joint is considered. The prototype and loading calibration system are designed and developed. The K value method and least squares method are used to process experimental data, and the type I and type II linearity errors are obtained. The experimental results show that the calibration error of the K value method is more than 13.4%, and the calibration error of the least squares method is 2.67%. The experimental results prove the feasibility of the sensor and the correctness of the theoretical analysis, which are expected to be adopted in practical applications.
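
The least-squares calibration step can be sketched as a linear estimation problem: known loading wrenches and measured sensor outputs determine a calibration matrix by pseudoinverse. The 6x6 gain matrix, load range, and noise level below are all invented, and this linear sketch ignores the flexible-joint nonlinearities the paper actually models:

```python
import numpy as np

rng = np.random.default_rng(3)

# True (unknown) 6x6 gain matrix mapping the applied wrench to raw outputs
G = np.eye(6) + 0.1 * rng.normal(size=(6, 6))

# Loading calibration: m known wrenches F (6 x m) and measured outputs V
m = 60
F = rng.uniform(-100.0, 100.0, size=(6, m))       # applied loads, full scale 100
V = G @ F + rng.normal(0, 0.05, size=(6, m))      # outputs with small noise

# Least-squares calibration matrix C minimizing ||C V - F||_F  (C = F V+)
C = F @ np.linalg.pinv(V)

# Full-scale relative reconstruction error of the calibrated sensor
err = float(np.abs(C @ V - F).max() / 100.0)
```

In use, an unknown wrench is recovered as `C @ v` from a raw reading `v`; the linearity errors reported in the record correspond to residuals of exactly this kind of fit, evaluated against the applied loads.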

  7. High Dynamic Range Nonlinear Measurement using Analog Cancellation

    Science.gov (United States)

    2012-10-01

    shield around sensitive areas. The target may also be sensitive to radiated coupling from the system and will benefit from a shield box or Faraday cage, if it is not already enclosed. On the shared measurement path and through the target, cross-channel coupling cannot be prevented, so low-PIM ... testing is desired, traditional filtering is recommended, as the primary benefits of the analog canceller are effectively nullified. 2.4 Wideband

  8. Temperature measurement in the liquid helium range at pressure

    International Nuclear Information System (INIS)

    Itskevich, E.S.; Krajdenov, V.F.

    1978-01-01

    The use of bronze and germanium resistance thermometers, and of a (Au + 0.07% Fe)-Cu thermocouple, for temperature measurements from 1.5 to 4.2 K under hydrostatic compression of up to 10 kbar is considered. To this end, the thermometer resistance is measured as a function of temperature and pressure. It is found that pressure does not change the thermometric response of the bronze resistance thermometer but only shifts it towards lower temperatures. Identical investigations of the germanium resistance thermometer show that the strong temperature dependence and the shift of its thermometric response under pressure make germanium resistance thermometers very inconvenient to use in high-pressure chambers. Analysis of the (Au + 0.07% Fe)-Cu thermocouple shows that, to within 2 per cent, the thermocouple Seebeck coefficient does not depend on pressure. This permits the thermocouple to be used for temperature measurements at high pressures

  9. Validation of a photography-based goniometry method for measuring joint range of motion.

    Science.gov (United States)

    Blonna, Davide; Zarkadas, Peter C; Fitzsimmons, James S; O'Driscoll, Shawn W

    2012-01-01

    A critical component of evaluating the outcomes after surgery to restore lost elbow motion is the range of motion (ROM) of the elbow. This study examined if digital photography-based goniometry is as accurate and reliable as clinical goniometry for measuring elbow ROM. Instrument validity and reliability for photography-based goniometry were evaluated for a consecutive series of 50 elbow contractures by 4 observers with different levels of elbow experience. Goniometric ROM measurements were taken with the elbows in full extension and full flexion directly in the clinic (once) and from digital photographs (twice in a blinded random manner). Instrument validity for photography-based goniometry was extremely high (intraclass correlation coefficient: extension = 0.98, flexion = 0.96). For extension and flexion measurements by the expert surgeon, systematic error was negligible (0° and 1°, respectively). Limits of agreement were 7° (95% confidence interval [CI], 5° to 9°) and -7° (95% CI, -5° to -9°) for extension and 8° (95% CI, 6° to 10°) and -7° (95% CI, -5° to -9°) for flexion. Interobserver reliability for photography-based goniometry was better than that for clinical goniometry. The least experienced observer's photographic goniometry measurements were closer to the reference measurements than the clinical goniometry measurements. Photography-based goniometry is accurate and reliable for measuring elbow ROM. The photography-based method relied less on observer expertise than clinical goniometry. This validates an objective measure of patient outcome without requiring doctor-patient contact at a tertiary care center, where most contracture surgeries are done. Copyright © 2012 Journal of Shoulder and Elbow Surgery Board of Trustees. Published by Mosby, Inc. All rights reserved.
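
Agreement statistics of the kind reported above (systematic error, limits of agreement, intraclass correlation) are straightforward to compute. A sketch on simulated paired angle readings; all numbers are invented, and a one-way random-effects ICC is used for simplicity rather than the exact variant in the study:

```python
import numpy as np

rng = np.random.default_rng(5)

# Paired elbow angles: clinical goniometer vs photography-based method
true_angle = rng.uniform(-10.0, 60.0, 50)      # 50 contracture elbows (toy)
clinic = true_angle + rng.normal(0, 3, 50)     # 3 deg measurement noise each
photo = true_angle + rng.normal(0, 3, 50)

# Bland-Altman: systematic error (bias) and 95% limits of agreement
diff = photo - clinic
bias = float(diff.mean())
sd = diff.std(ddof=1)
loa = (bias - 1.96 * sd, bias + 1.96 * sd)

# One-way random-effects ICC(1,1) across the two methods
data = np.column_stack([clinic, photo])
n, k = data.shape
subj_means = data.mean(axis=1)
grand = data.mean()
bms = k * np.sum((subj_means - grand) ** 2) / (n - 1)   # between-subject MS
wms = np.sum((data - subj_means[:, None]) ** 2) / (n * (k - 1))  # within MS
icc = float((bms - wms) / (bms + (k - 1) * wms))
```

Because between-subject spread in contracture angles is large relative to the per-reading noise, the ICC is high even with several degrees of random error per measurement, which mirrors why the reported instrument validity coefficients (0.96 to 0.98) coexist with limits of agreement near plus or minus 7 to 8 degrees.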

  10. Measuring systolic arterial blood pressure. Possible errors from extension tubes or disposable transducer domes.

    Science.gov (United States)

    Rothe, C F; Kim, K C

    1980-11-01

    The purpose of this study was to evaluate the magnitude of possible error in the measurement of systolic blood pressure if disposable, built-in diaphragm, transducer domes or long extension tubes between the patient and pressure transducer are used. Sinusoidal or arterial pressure patterns were generated with specially designed equipment. With a long extension tube or trapped air bubbles, the resonant frequency of the catheter system was reduced so that the arterial pulse was amplified as it acted on the transducer and, thus, gave an erroneously high systolic pressure measurement. The authors found this error to be as much as 20 mm Hg. Trapped air bubbles, not stopcocks or connections, per se, lead to poor fidelity. The utility of a continuous catheter flush system (Sorenson, Intraflow) to estimate the resonant frequency and degree of damping of a catheter-transducer system is described, as are possibly erroneous conclusions. Given a rough estimate of the resonant frequency of a catheter-transducer system and the magnitude of overshoot in response to a pulse, the authors present a table to predict the magnitude of probable error. These studies confirm the variability and unreliability of static calibration that may occur using some safety diaphragm domes and show that the system frequency response is decreased if air bubbles are trapped between the diaphragms. The authors conclude that regular procedures should be established to evaluate the accuracy of the pressure measuring systems in use, the transducer should be placed as close to the patient as possible, the air bubbles should be assiduously eliminated from the system.
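
The flush-test idea, estimating the resonant frequency and damping of the catheter-transducer system from the ringing that follows a fast flush, can be sketched with the standard second-order-system formulas. The peak amplitudes and spacing below are hypothetical numbers, not data from the study:

```python
import math

# Two successive same-side overshoot peaks after a fast flush (hypothetical)
a1, a2 = 12.0, 6.0          # peak amplitudes above baseline, mm Hg
period = 0.04               # time between the two peaks, s

delta = math.log(a1 / a2)                            # logarithmic decrement
zeta = delta / math.sqrt(4 * math.pi**2 + delta**2)  # damping ratio
f_d = 1.0 / period                                   # damped frequency, Hz
f_n = f_d / math.sqrt(1 - zeta**2)                   # natural frequency, Hz

def amplification(f):
    # Gain of the second-order catheter-transducer system at frequency f;
    # values > 1 mean pulse components are exaggerated (systolic overshoot)
    r = f / f_n
    return 1.0 / math.sqrt((1 - r**2) ** 2 + (2 * zeta * r) ** 2)
```

An underdamped system like this one amplifies arterial-pulse harmonics below resonance, which is exactly the mechanism by which long extension tubes and trapped air bubbles (which lower the resonant frequency) produce erroneously high systolic readings.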

  11. Errors of car wheels rotation rate measurement using roller follower on test benches

    Science.gov (United States)

    Potapov, A. S.; Svirbutovich, O. A.; Krivtsov, S. N.

    2018-03-01

    The article deals with rotation rate measurement errors on roller test benches, which depend on the speed of the motor vehicle. Monitoring of vehicle performance under operating conditions is performed on roller test benches. Roller test benches are not flawless; they have drawbacks affecting the accuracy of vehicle performance monitoring. An increase in the base velocity of the vehicle requires an increase in the accuracy of wheel rotation rate monitoring, which determines the accuracy of mode identification for a wheel of the tested vehicle. Ensuring measurement accuracy for the rotation velocity of the rollers is not an issue; the problem arises when measuring the rotation velocity of a car wheel. The higher the rotation velocity of the wheel, the lower the accuracy of measurement. At present, wheel rotation frequency monitoring on roller test benches is carried out by follow-up systems whose sensors are rollers following wheel rotation. The rollers of the system are not kinematically linked to the supporting rollers of the test bench; the roller follower is forced against the wheels of the tested vehicle by means of a spring-lever mechanism. Experience with the test bench equipment has shown that measurement accuracy is satisfactory at small speeds of the vehicles diagnosed on roller test benches. With rising diagnostic speed, rotation velocity measurement errors occur in both braking and pulling modes because the roller slips against the tire tread. The paper shows oscillograms of changes in wheel rotation velocity and of the rotation velocity measurement system's signals when testing a vehicle on roller test benches at specified speeds.

  12. Errors in short circuit measurements due to spectral mismatch between sunlight and solar simulators

    Science.gov (United States)

    Curtis, H. B.

    1976-01-01

    Errors in short circuit current measurement were calculated for a variety of spectral mismatch conditions. The differences in spectral irradiance between terrestrial sunlight and three types of solar simulator were studied, as well as the differences in spectral response between three types of reference solar cells and various test cells. The simulators considered were a short arc xenon lamp AMO sunlight simulator, an ordinary quartz halogen lamp, and an ELH-type quartz halogen lamp. Three types of solar cells studied were a silicon cell, a cadmium sulfide cell and a gallium arsenide cell.
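
A common way to quantify this error is the spectral mismatch factor, a ratio of four irradiance-response overlap integrals. The spectra and responses below are illustrative Gaussians, not the measured curves from the study:

```python
import numpy as np

wl = np.linspace(400.0, 1100.0, 400)    # wavelength grid, nm

def gauss(x, mu, sig):
    return np.exp(-0.5 * ((x - mu) / sig) ** 2)

def integ(y):
    # trapezoidal integration over the wavelength grid
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(wl)))

def mismatch(E_ref, E_sim, S_ref, S_test):
    # Spectral mismatch factor (IEC 60904-7 form); M = 1 means the
    # simulator introduces no short-circuit-current measurement error
    return (integ(E_sim * S_test) * integ(E_ref * S_ref)) / (
        integ(E_sim * S_ref) * integ(E_ref * S_test))

E_sun = gauss(wl, 550, 200)     # sunlight-like irradiance (illustrative)
E_elh = gauss(wl, 800, 250)     # ELH-lamp-like irradiance (illustrative)
S_si = gauss(wl, 850, 150)      # silicon reference cell response
S_cds = gauss(wl, 650, 120)     # CdS-like test cell response

M = mismatch(E_sun, E_elh, S_si, S_cds)
isc_error_pct = 100.0 * (M - 1.0)   # relative short-circuit current error
```

When the test cell's response matches the reference cell's, the factor is exactly 1 regardless of the simulator spectrum; the error grows with the mismatch between the two responses, which is why a silicon reference cell paired with a CdS or GaAs test cell under a quartz halogen lamp is a worst case.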

  13. Error detection in viscosity-temperature measurements. Pt. B. Results, usefulness. Fehlersuche bei Viskositaet-Temperatur-Messungen. T. B. Resultate, Nuetzlichkeit

    Energy Technology Data Exchange (ETDEWEB)

    Schwen, R. (BASF, Farbenlaboratorium, Ludwigshafen am Rhein (Germany)); Puhl, H. (BASF, Ammoniaklaboratorium, Ludwigshafen am Rhein (Germany))

    1992-06-01

    The temperature dependence of the viscosity often spreads over a large range. It can be measured with less than one per cent error with usual effort, but the result cannot yet be checked to the same accuracy: graphic methods are far too inaccurate, and the numerous approximate equations given in the literature do not adequately represent the true shape of the curves for all types of substances over the whole range of temperatures of interest. The different slopes and curvatures of the temperature dependence of the dynamic and kinematic viscosities can now be represented by means of one-term or multi-term exponential functions with a maximum of eight coefficients. The Antoine equation is included in this investigation, and the Ubbelohde-Walther equation for comparison only. Tests on more than 400 data sets show that there is no single equation able to cope with all existing slopes. The numerical values of the coefficients are determined by the Marquardt statistical search method; the starting values are obtained by fixed rules. Using a non-linear regression of exponential sums, the method exactly describes the viscosity-temperature behavior of normal liquids and real gases, as well as the supercritical region, over any desired range, starting with four measured values and being complete with nine measured values or more; it allows tabulation, interpolation and, with caution, extrapolation. In the first part published, the problem and the mathematical procedure were discussed. The present publication presents the results and considers the applicability. (orig.).
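
The simplest member of the exponential family discussed above, a one-term Andrade-type law eta = A * exp(B / T), becomes linear after taking logarithms, so fitting it needs no iterative search at all. The data below are synthetic; A, B, and the noise level are invented:

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic viscosity data following eta = A * exp(B / T), T in kelvin
A_true, B_true = 0.05, 1500.0
T = np.linspace(280.0, 380.0, 9)                 # nine "measured" points
eta = A_true * np.exp(B_true / T) * (1 + rng.normal(0, 0.005, T.size))

# ln(eta) = ln(A) + B / T is linear in 1/T -> ordinary least squares
B_fit, lnA_fit = np.polyfit(1.0 / T, np.log(eta), 1)
A_fit = float(np.exp(lnA_fit))

# Interpolation at an unmeasured temperature
eta_330 = A_fit * np.exp(B_fit / 330.0)
```

Multi-term exponential sums of the kind used in the paper lose this linearity and do require a Marquardt-style nonlinear search with the fixed starting-value rules the abstract mentions; the one-term case is shown here only because it can be verified by hand.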

  14. Semiparametric Bayesian Analysis of Nutritional Epidemiology Data in the Presence of Measurement Error

    KAUST Repository

    Sinha, Samiran

    2009-08-10

    We propose a semiparametric Bayesian method for handling measurement error in nutritional epidemiological data. Our goal is to estimate nonparametrically the form of association between a disease and exposure variable while the true values of the exposure are never observed. Motivated by nutritional epidemiological data, we consider the setting where a surrogate covariate is recorded in the primary data, and a calibration data set contains information on the surrogate variable and repeated measurements of an unbiased instrumental variable of the true exposure. We develop a flexible Bayesian method where not only is the relationship between the disease and exposure variable treated semiparametrically, but also the relationship between the surrogate and the true exposure is modeled semiparametrically. The two nonparametric functions are modeled simultaneously via B-splines. In addition, we model the distribution of the exposure variable as a Dirichlet process mixture of normal distributions, thus making its modeling essentially nonparametric and placing this work into the context of functional measurement error modeling. We apply our method to the NIH-AARP Diet and Health Study and examine its performance in a simulation study.

  15. Accounting for baseline differences and measurement error in the analysis of change over time.

    Science.gov (United States)

    Braun, Julia; Held, Leonhard; Ledergerber, Bruno

    2014-01-15

    If change over time is compared in several groups, it is important to take into account baseline values so that the comparison is carried out under the same preconditions. As the observed baseline measurements are distorted by measurement error, it may not be sufficient to include them as covariate. By fitting a longitudinal mixed-effects model to all data including the baseline observations and subsequently calculating the expected change conditional on the underlying baseline value, a solution to this problem has been provided recently so that groups with the same baseline characteristics can be compared. In this article, we present an extended approach where a broader set of models can be used. Specifically, it is possible to include any desired set of interactions between the time variable and the other covariates, and also, time-dependent covariates can be included. Additionally, we extend the method to adjust for baseline measurement error of other time-varying covariates. We apply the methodology to data from the Swiss HIV Cohort Study to address the question if a joint infection with HIV-1 and hepatitis C virus leads to a slower increase of CD4 lymphocyte counts over time after the start of antiretroviral therapy. Copyright © 2013 John Wiley & Sons, Ltd.

  16. Estimating the acute health effects of coarse particulate matter accounting for exposure measurement error.

    Science.gov (United States)

    Chang, Howard H; Peng, Roger D; Dominici, Francesca

    2011-10-01

    In air pollution epidemiology, there is a growing interest in estimating the health effects of coarse particulate matter (PM) with aerodynamic diameter between 2.5 and 10 μm. Coarse PM concentrations can exhibit considerable spatial heterogeneity because the particles travel shorter distances and do not remain suspended in the atmosphere for an extended period of time. In this paper, we develop a modeling approach for estimating the short-term effects of air pollution in time series analysis when the ambient concentrations vary spatially within the study region. Specifically, our approach quantifies the error in the exposure variable by characterizing, on any given day, the disagreement in ambient concentrations measured across monitoring stations. This is accomplished by viewing monitor-level measurements as error-prone repeated measurements of the unobserved population average exposure. Inference is carried out in a Bayesian framework to fully account for uncertainty in the estimation of model parameters. Finally, by using different exposure indicators, we investigate the sensitivity of the association between coarse PM and daily hospital admissions based on a recent national multisite time series analysis. Among Medicare enrollees from 59 US counties between the period 1999 and 2005, we find a consistent positive association between coarse PM and same-day admission for cardiovascular diseases.

  17. Accounting for the measurement error of spectroscopically inferred soil carbon data for improved precision of spatial predictions.

    Science.gov (United States)

    Somarathna, P D S N; Minasny, Budiman; Malone, Brendan P; Stockmann, Uta; McBratney, Alex B

    2018-08-01

    Spatial modelling of environmental data commonly only considers spatial variability as the single source of uncertainty. In reality, however, the measurement errors should also be accounted for. In recent years, infrared spectroscopy has been shown to offer low cost, yet invaluable information needed for digital soil mapping at meaningful spatial scales for land management. However, spectrally inferred soil carbon data are known to be less accurate compared to laboratory analysed measurements. This study establishes a methodology to filter out the measurement error variability by incorporating the measurement error variance in the spatial covariance structure of the model. The study was carried out in the Lower Hunter Valley, New South Wales, Australia, where a combination of laboratory measured, and vis-NIR and MIR inferred topsoil and subsoil soil carbon data are available. We investigated the applicability of residual maximum likelihood (REML) and Markov Chain Monte Carlo (MCMC) simulation methods to generate parameters of the Matérn covariance function directly from the data in the presence of measurement error. The results revealed that the measurement error can be effectively filtered out through the proposed technique. When the measurement error was filtered from the data, the prediction variance almost halved, which ultimately yielded a greater certainty in spatial predictions of soil carbon. Further, the MCMC technique was successfully used to define the posterior distribution of measurement error. This is an important outcome, as the MCMC technique can be used to estimate the measurement error if it is not explicitly quantified. Although this study dealt with soil carbon data, this method is amenable to filtering the measurement error of any kind of continuous spatial environmental data. Copyright © 2018 Elsevier B.V. All rights reserved.
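
The core device, placing the known measurement-error variance on the diagonal of the spatial covariance so that it is not kriged back into the predictions, can be sketched in one dimension. The exponential covariance (the Matérn with smoothness 1/2), the range, and the per-site error variances are all invented:

```python
import numpy as np

rng = np.random.default_rng(1)

def expcov(d, sill=1.0, range_=10.0):
    # Exponential covariance = Matern covariance with smoothness nu = 1/2
    return sill * np.exp(-np.asarray(d, dtype=float) / range_)

x = np.linspace(0.0, 100.0, 30)              # observation locations
signal = np.sin(x / 15.0)                    # toy "true" spatial field
tau2 = np.full(x.size, 0.3**2)               # per-site error variance
tau2[10] = 1.0**2                            # one site measured very poorly
y = signal + rng.normal(0.0, np.sqrt(tau2))

# Simple kriging: error variance enters the diagonal only, so it is
# filtered out rather than treated as spatial variability
K = expcov(np.abs(x[:, None] - x[None, :])) + np.diag(tau2)
x0 = 35.0
k0 = expcov(np.abs(x - x0))
weights = np.linalg.solve(K, k0)
pred = float(weights @ y)
pred_var = float(expcov(0.0) - k0 @ np.linalg.solve(K, k0))
```

Sites with large measurement-error variance are automatically down-weighted, and the prediction variance reflects only the residual spatial uncertainty, which is the mechanism behind the halved prediction variance reported in the abstract.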

  18. Unreliability and error in the military's "gold standard" measure of sexual harassment by education and gender.

    Science.gov (United States)

    Murdoch, Maureen; Pryor, John B; Griffin, Joan M; Ripley, Diane Cowper; Gackstetter, Gary D; Polusny, Melissa A; Hodges, James S

    2011-01-01

    The Department of Defense's "gold standard" sexual harassment measure, the Sexual Harassment Core Measure (SHCore), is based on an earlier measure that was developed primarily in college women. Furthermore, the SHCore requires a reading grade level of 9.1. This may be higher than some troops' reading abilities and could generate unreliable estimates of their sexual harassment experiences. Results from 108 male and 96 female soldiers showed that the SHCore's temporal stability and alternate-forms reliability was significantly worse (a) in soldiers without college experience compared to soldiers with college experience and (b) in men compared to women. For men without college experience, almost 80% of the temporal variance in SHCore scores was attributable to error. A plain language version of the SHCore had mixed effects on temporal stability depending on education and gender. The SHCore may be particularly ill suited for evaluating population trends of sexual harassment in military men without college experience.

  19. Three-point method for measuring the geometric error components of linear and rotary axes based on sequential multilateration

    International Nuclear Information System (INIS)

    Zhang, Zhenjiu; Hu, Hong

    2013-01-01

    The linear and rotary axes are fundamental parts of multi-axis machine tools. The geometric error components of the axes must be measured for motion error compensation to improve the accuracy of the machine tools. In this paper, a simple method named the three point method is proposed to measure the geometric error of the linear and rotary axes of the machine tools using a laser tracker. A sequential multilateration method, where uncertainty is verified through simulation, is applied to measure the 3D coordinates. Three noncollinear points fixed on the stage of each axis are selected. The coordinates of these points are simultaneously measured using a laser tracker to obtain their volumetric errors by comparing these coordinates with ideal values. Numerous equations can be established using the geometric error models of each axis. The geometric error components can be obtained by solving these equations. The validity of the proposed method is verified through a series of experiments. The results indicate that the proposed method can measure the geometric error of the axes to compensate for the errors in multi-axis machine tools.
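
The multilateration step, recovering a 3D point from ranges to known station positions, reduces to a linear least-squares problem once one range equation is subtracted from the others. The station layout and target point below are hypothetical and noise-free:

```python
import numpy as np

# Known station positions (hypothetical layout, metres)
P = np.array([[0.0, 0.0, 0.0],
              [4.0, 0.0, 0.0],
              [0.0, 4.0, 0.0],
              [0.0, 0.0, 3.0],
              [4.0, 4.0, 2.0]])
x_true = np.array([1.2, 2.5, 0.8])
r = np.linalg.norm(P - x_true, axis=1)   # measured ranges (noise-free here)

# Subtracting the first range equation from the others cancels |x|^2:
#   2 (p_i - p_0) . x = r_0^2 - r_i^2 + |p_i|^2 - |p_0|^2
A = 2.0 * (P[1:] - P[0])
b = r[0]**2 - r[1:]**2 + np.sum(P[1:]**2, axis=1) - np.sum(P[0]**2)
x_hat, *_ = np.linalg.lstsq(A, b, rcond=None)
```

With noisy ranges the same least-squares solve gives the best linear estimate, and the residuals feed directly into the volumetric-error comparison against ideal coordinates described in the abstract.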

  20. The error analysis of lobular and segmental division of the right liver by volume measurement.

    Science.gov (United States)

    Zhang, Jianfei; Lin, Weigang; Chi, Yanyan; Zheng, Nan; Xu, Qiang; Zhang, Guowei; Yu, Shengbo; Li, Chan; Wang, Bin; Sui, Hongjin

    2017-07-01

    The aim of this study is to explore the inconsistencies between right liver volume as measured by imaging and the actual anatomical appearance of the right lobe. Five healthy donated livers were studied. The liver slices were obtained with hepatic segments multicolor-infused through the portal vein. In the slices, the lobes were divided by two methods: radiological landmarks and real anatomical boundaries. The areas of the right anterior lobe (RAL) and right posterior lobe (RPL) on each slice were measured using Photoshop CS5 and AutoCAD, and the volumes of the two lobes were calculated. There was no statistically significant difference between the volumes of the RAL or RPL as measured by the radiological landmarks (RL) and anatomical boundaries (AB) methods. However, the curves of the square error value of the RAL and RPL measured using CT showed that the three lowest points were at the cranial, intermediate, and caudal levels. The U- or V-shaped curves of the square error rate of the RAL and RPL revealed that the lowest value is at the intermediate level and the highest at the cranial and caudal levels. On CT images, less accurate landmarks were used to divide the RAL and RPL at the cranial and caudal layers. The measured volumes of hepatic segments VIII and VI would be less than their true values, and the measured volumes of hepatic segments VII and V would be greater than their true values, according to radiological landmarks. Clin. Anat. 30:585-590, 2017. © 2017 Wiley Periodicals, Inc.

  1. Application of a repeat-measure biomarker measurement error model to 2 validation studies: examination of the effect of within-person variation in biomarker measurements.

    Science.gov (United States)

    Preis, Sarah Rosner; Spiegelman, Donna; Zhao, Barbara Bojuan; Moshfegh, Alanna; Baer, David J; Willett, Walter C

    2011-03-15

    Repeat-biomarker measurement error models accounting for systematic correlated within-person error can be used to estimate the correlation coefficient (ρ) and deattenuation factor (λ), used in measurement error correction. These models account for correlated errors in the food frequency questionnaire (FFQ) and the 24-hour diet recall and random within-person variation in the biomarkers. Failure to account for within-person variation in biomarkers can exaggerate correlated errors between FFQs and 24-hour diet recalls. For 2 validation studies, ρ and λ were calculated for total energy and protein density. In the Automated Multiple-Pass Method Validation Study (n=471), doubly labeled water (DLW) and urinary nitrogen (UN) were measured twice in 52 adults approximately 16 months apart (2002-2003), yielding intraclass correlation coefficients of 0.43 for energy (DLW) and 0.54 for protein density (UN/DLW). The deattenuated correlation coefficient for protein density was 0.51 for correlation between the FFQ and the 24-hour diet recall and 0.49 for correlation between the FFQ and the biomarker. Use of repeat-biomarker measurement error models resulted in a ρ of 0.42. These models were similarly applied to the Observing Protein and Energy Nutrition Study (1999-2000). In conclusion, within-person variation in biomarkers can be substantial, and to adequately assess the impact of correlated subject-specific error, this variation should be assessed in validation studies of FFQs. © The Author 2011. Published by Oxford University Press on behalf of the Johns Hopkins Bloomberg School of Public Health. All rights reserved.
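
The deattenuation logic can be sketched with a toy simulation: two biomarker replicates give the within-person variation via their intraclass correlation, and the observed FFQ-biomarker correlation is then divided by the square root of that ICC. All variances are invented and everything is standardized for brevity; the full repeat-biomarker model in the article additionally handles correlated FFQ and recall errors:

```python
import numpy as np

rng = np.random.default_rng(4)

n = 5000
T = rng.normal(0, 1, n)                 # true long-term intake (standardized)
Q = T + rng.normal(0, 1, n)             # FFQ with measurement error
M1 = T + rng.normal(0, 1, n)            # biomarker, replicate 1
M2 = T + rng.normal(0, 1, n)            # biomarker, replicate 2

# Within-person biomarker variation: intraclass correlation of replicates
icc = float(np.corrcoef(M1, M2)[0, 1])

# Observed FFQ-biomarker correlation, deattenuated for biomarker error
r_obs = float(np.corrcoef(Q, M1)[0, 1])
rho = r_obs / np.sqrt(icc)              # estimate of corr(Q, true intake)

r_true = float(np.corrcoef(Q, T)[0, 1])  # observable only in a simulation
```

Ignoring the replicate information (using `r_obs` directly) understates the FFQ's validity, which is the abstract's point that within-person biomarker variation should be assessed in validation studies.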

  2. Error sources in the retrieval of aerosol information over bright surfaces from satellite measurements in the oxygen A band

    Science.gov (United States)

    Nanda, Swadhin; de Graaf, Martin; Sneep, Maarten; de Haan, Johan F.; Stammes, Piet; Sanders, Abram F. J.; Tuinder, Olaf; Pepijn Veefkind, J.; Levelt, Pieternel F.

    2018-01-01

    Retrieving aerosol optical thickness and aerosol layer height over a bright surface from measured top-of-atmosphere reflectance spectrum in the oxygen A band is known to be challenging, often resulting in large errors. In certain atmospheric conditions and viewing geometries, a loss of sensitivity to aerosol optical thickness has been reported in the literature. This loss of sensitivity has been attributed to a phenomenon known as critical surface albedo regime, which is a range of surface albedos for which the top-of-atmosphere reflectance has minimal sensitivity to aerosol optical thickness. This paper extends the concept of critical surface albedo for aerosol layer height retrievals in the oxygen A band, and discusses its implications. The underlying physics are introduced by analysing the top-of-atmosphere reflectance spectrum as a sum of atmospheric path contribution and surface contribution, obtained using a radiative transfer model. Furthermore, error analysis of an aerosol layer height retrieval algorithm is conducted over dark and bright surfaces to show the dependence on surface reflectance. The analysis shows that the derivative with respect to aerosol layer height of the atmospheric path contribution to the top-of-atmosphere reflectance is opposite in sign to that of the surface contribution - an increase in surface brightness results in a decrease in information content. In the case of aerosol optical thickness, these derivatives are anti-correlated, leading to large retrieval errors in high surface albedo regimes. The consequence of this anti-correlation is demonstrated with measured spectra in the oxygen A band from the GOME-2 instrument on board the Metop-A satellite over the 2010 Russian wildfires incident.

  3. Research on the Factors Influencing the Measurement Errors of the Discrete Rogowski Coil †

    Science.gov (United States)

    Xu, Mengyuan; Yan, Jing; Geng, Yingsan; Zhang, Kun; Sun, Chao

    2018-01-01

    An innovative array of magnetic coils (the discrete Rogowski coil—RC) with the advantages of flexible structure, miniaturization and mass producibility is investigated. First, the mutual inductance between the discrete RC and circular and rectangular conductors is calculated using the magnetic vector potential (MVP) method. The results are found to be consistent with those calculated using the finite element method, but the MVP method is simpler and more practical. Then, the influence of conductor section parameters, inclination, and eccentricity on the accuracy of the discrete RC is calculated to provide a reference. Studying the influence of an external current on the discrete RC’s interference error reveals optimal values for the length, winding density, and position arrangement of the solenoids. It is also found that eccentricity and interference errors decrease with an increasing number of solenoids. Finally, a discrete RC prototype is devised and manufactured. The experimental results show consistent output characteristics, with the calculated sensitivity and mutual inductance of the discrete RC being very close to the experimental results. The influence of an external conductor on the measurement of the discrete RC is analyzed experimentally, and the results show that interference from an external current decreases with increasing distance between the external and measured conductors. PMID:29534006

  4. Research on the Factors Influencing the Measurement Errors of the Discrete Rogowski Coil.

    Science.gov (United States)

    Xu, Mengyuan; Yan, Jing; Geng, Yingsan; Zhang, Kun; Sun, Chao

    2018-03-13

An innovative array of magnetic coils (the discrete Rogowski coil-RC) with the advantages of flexible structure, miniaturization and mass producibility is investigated. First, the mutual inductance between the discrete RC and circular and rectangular conductors is calculated using the magnetic vector potential (MVP) method. The results are found to be consistent with those calculated using the finite element method, but the MVP method is simpler and more practical. Then, the influence of conductor section parameters, inclination, and eccentricity on the accuracy of the discrete RC is calculated to provide a reference. Studying the influence of an external current on the discrete RC's interference error reveals optimal values for the length, winding density, and position arrangement of the solenoids. It was also found that eccentricity and interference errors decrease with an increasing number of solenoids. Finally, a discrete RC prototype is devised and manufactured. The experimental results show consistent output characteristics, with the calculated sensitivity and mutual inductance of the discrete RC being very close to the experimental results. The influence of an external conductor on the measurement of the discrete RC is analyzed experimentally, and the results show that interference from an external current decreases with increasing distance between the external and measured conductors.

  5. Research on the Factors Influencing the Measurement Errors of the Discrete Rogowski Coil

    Directory of Open Access Journals (Sweden)

    Mengyuan Xu

    2018-03-01

Full Text Available An innovative array of magnetic coils (the discrete Rogowski coil—RC) with the advantages of flexible structure, miniaturization and mass producibility is investigated. First, the mutual inductance between the discrete RC and circular and rectangular conductors is calculated using the magnetic vector potential (MVP) method. The results are found to be consistent with those calculated using the finite element method, but the MVP method is simpler and more practical. Then, the influence of conductor section parameters, inclination, and eccentricity on the accuracy of the discrete RC is calculated to provide a reference. Studying the influence of an external current on the discrete RC’s interference error reveals optimal values for the length, winding density, and position arrangement of the solenoids. It was also found that eccentricity and interference errors decrease with an increasing number of solenoids. Finally, a discrete RC prototype is devised and manufactured. The experimental results show consistent output characteristics, with the calculated sensitivity and mutual inductance of the discrete RC being very close to the experimental results. The influence of an external conductor on the measurement of the discrete RC is analyzed experimentally, and the results show that interference from an external current decreases with increasing distance between the external and measured conductors.
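The reported falloff of external-conductor interference with distance follows from Ampère's law for a long straight wire; a minimal sketch in which the conductor geometry and current are hypothetical:

```python
import numpy as np

MU0 = 4e-7 * np.pi  # vacuum permeability in H/m

def external_flux_density(current_a, distance_m):
    """Flux density of a long straight external conductor (Ampere's law):
    B = mu0 * I / (2 * pi * d)."""
    return MU0 * current_a / (2 * np.pi * distance_m)

# The interference picked up by a solenoid element of the discrete RC scales
# with B, so doubling the distance to the external conductor halves it.
b_near = external_flux_density(100.0, 0.05)   # hypothetical: 100 A at 5 cm
b_far = external_flux_density(100.0, 0.10)    # same current at 10 cm
ratio = b_near / b_far
print(ratio)  # -> 2.0
```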

  6. Study of principle error sources in gamma spectrometry. Application to cross sections measurement

    International Nuclear Information System (INIS)

    Majah, M. Ibn.

    1985-01-01

The principal error sources in gamma spectrometry have been studied with the aim of measuring cross sections with high precision. Three error sources have been studied: dead time and pile-up, which depend on the counting rate, and the coincidence effect, which depends on the disintegration scheme of the radionuclide in question. A constant-frequency pulse generator has been used to correct the counting losses due to dead time and pile-up for both long and short disintegration periods. The loss due to the coincidence effect can reach 25% or more, depending on the disintegration scheme and on the source-detector distance. After establishing the correction formula and verifying its validity for four examples (iron-56, scandium-48, antimony-120 and gold-196m), an application was made by measuring cross sections of nuclear reactions that lead to long disintegration periods, which require counting at a short source-detector distance and thus correction of the losses due to dead time, pile-up and the coincidence effect. 16 refs., 45 figs., 25 tabs. (author)
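The pulse-generator correction mentioned in the abstract can be sketched as follows: the generator injects pulses at a known rate, and the fraction of them lost to dead time and pile-up is assumed to match the fraction of true events lost. All numbers below are hypothetical:

```python
def pulser_corrected_counts(measured_counts, pulses_injected, pulses_recorded):
    """Pulser method: dead time and pile-up remove the same fraction of
    generator pulses as of true events, so scaling by the ratio
    injected/recorded restores the lost counts."""
    if pulses_recorded == 0:
        raise ValueError("no pulser events recorded")
    return measured_counts * pulses_injected / pulses_recorded

# 5% of the injected pulser pulses were lost, so the photopeak counts
# are scaled up by the same factor.
corrected = pulser_corrected_counts(9500, 10000, 9500)
print(corrected)  # -> 10000.0
```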

  7. EFFECT OF MEASUREMENT ERRORS ON PREDICTED COSMOLOGICAL CONSTRAINTS FROM SHEAR PEAK STATISTICS WITH LARGE SYNOPTIC SURVEY TELESCOPE

    Energy Technology Data Exchange (ETDEWEB)

    Bard, D.; Chang, C.; Kahn, S. M.; Gilmore, K.; Marshall, S. [KIPAC, Stanford University, 452 Lomita Mall, Stanford, CA 94309 (United States); Kratochvil, J. M.; Huffenberger, K. M. [Department of Physics, University of Miami, Coral Gables, FL 33124 (United States); May, M. [Physics Department, Brookhaven National Laboratory, Upton, NY 11973 (United States); AlSayyad, Y.; Connolly, A.; Gibson, R. R.; Jones, L.; Krughoff, S. [Department of Astronomy, University of Washington, Seattle, WA 98195 (United States); Ahmad, Z.; Bankert, J.; Grace, E.; Hannel, M.; Lorenz, S. [Department of Physics, Purdue University, West Lafayette, IN 47907 (United States); Haiman, Z.; Jernigan, J. G., E-mail: djbard@slac.stanford.edu [Department of Astronomy and Astrophysics, Columbia University, New York, NY 10027 (United States); and others

    2013-09-01

    We study the effect of galaxy shape measurement errors on predicted cosmological constraints from the statistics of shear peak counts with the Large Synoptic Survey Telescope (LSST). We use the LSST Image Simulator in combination with cosmological N-body simulations to model realistic shear maps for different cosmological models. We include both galaxy shape noise and, for the first time, measurement errors on galaxy shapes. We find that the measurement errors considered have relatively little impact on the constraining power of shear peak counts for LSST.

  8. Measuring Relativistic effects in the field of the Earth with Laser Ranged Satellites and the LARASE research program

    Science.gov (United States)

    Lucchesi, David; Anselmo, Luciano; Bassan, Massimo; Magnafico, Carmelo; Pardini, Carmen; Peron, Roberto; Pucacco, Giuseppe; Stanga, Ruggero; Visco, Massimo

    2017-04-01

The main goal of the LARASE (LAser RAnged Satellites Experiment) research program is to obtain refined tests of Einstein's theory of General Relativity (GR) by means of very precise measurements of the round-trip time between a number of ground stations of the International Laser Ranging Service (ILRS) network and a set of geodetic satellites. These measurements are provided by the powerful and precise Satellite Laser Ranging (SLR) technique. In particular, a major effort of LARASE is dedicated to improving the dynamical models of the LAGEOS, LAGEOS II and LARES satellites, with the objective of obtaining a more precise and accurate determination of their orbits. These activities contribute to a final error budget that should be robust and reliable in the evaluation of the main systematic error sources that play a major role in masking the relativistic precession of the orbits of these laser-ranged satellites. These error sources may be of gravitational and non-gravitational origin. It is important to stress that a more accurate and precise orbit determination, based on more reliable dynamical models, represents a fundamental prerequisite to reach sub-mm precision in the root-mean-square of the SLR range residuals and, consequently, to gather benefits in the fields of geophysics and space geodesy, such as knowledge of station coordinates, geocenter determination and the realization of the Earth's reference frame. The results reached over the last year will be presented in terms of the improvements achieved in the dynamical models, in the orbit determination and, finally, in the measurement of the relativistic precessions that act on the orbits of the satellites considered.

  9. Assessment of long-range kinematic GPS positioning errors by comparison with airborne laser altimetry and satellite altimetry

    DEFF Research Database (Denmark)

    Zhang, X.H.; Forsberg, René

    2007-01-01

Long-range airborne laser altimetry and laser scanning (LIDAR) or airborne gravity surveys in, for example, polar or oceanic areas require airborne kinematic GPS baselines of many hundreds of kilometers in length. In such instances, with the complications of ionospheric biases, it can be a real challenge for traditional differential kinematic GPS software to obtain reasonable solutions. In this paper, we will describe attempts to validate an implementation of the precise point positioning (PPP) technique on an aircraft without the use of a local GPS reference station. We will compare PPP solutions … of the Arctic Ocean north of Greenland, near-coincident in time and space with the ICESat satellite laser altimeter. Both of these flights were more than 800 km long. Comparisons between different GPS methods and four different software packages do not suggest a clear preference for any one, with the heights …

  10. Comparison of the balance accelerometer measure and balance error scoring system in adolescent concussions in sports.

    Science.gov (United States)

    Furman, Gabriel R; Lin, Chia-Cheng; Bellanca, Jennica L; Marchetti, Gregory F; Collins, Michael W; Whitney, Susan L

    2013-06-01

    High-technology methods demonstrate that balance problems may persist up to 30 days after a concussion, whereas with low-technology methods such as the Balance Error Scoring System (BESS), performance becomes normal after only 3 days based on previously published studies in collegiate and high school athletes. To compare the National Institutes of Health's Balance Accelerometer Measure (BAM) with the BESS regarding the ability to detect differences in postural sway between adolescents with sports concussions and age-matched controls. Cohort study (diagnosis); Level of evidence, 2. Forty-three patients with concussions and 27 control participants were tested with the standard BAM protocol, while sway was quantified using the normalized path length (mG/s) of pelvic accelerations in the anterior-posterior direction. The BESS was scored by experts using video recordings. The BAM was not able to discriminate between healthy and concussed adolescents, whereas the BESS, especially the tandem stance conditions, was good at discriminating between healthy and concussed adolescents. A total BESS score of 21 or more errors optimally identified patients in the acute concussion group versus healthy participants at 60% sensitivity and 82% specificity. The BAM is not as effective as the BESS in identifying abnormal postural control in adolescents with sports concussions. The BESS, a simple and economical method of assessing postural control, was effective in discriminating between young adults with acute concussions and young healthy people, suggesting that the test has value in the assessment of acute concussions.
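The quoted sensitivity and specificity at a cutoff of 21 errors come from a standard confusion-matrix calculation; a sketch with invented BESS score totals, not the study's data:

```python
def sensitivity_specificity(concussed_scores, healthy_scores, cutoff):
    """Classify as 'concussed' when the BESS error total meets the cutoff;
    returns (sensitivity, specificity)."""
    true_pos = sum(s >= cutoff for s in concussed_scores)
    true_neg = sum(s < cutoff for s in healthy_scores)
    return true_pos / len(concussed_scores), true_neg / len(healthy_scores)

# Invented BESS error totals for illustration
concussed = [25, 30, 18, 22, 21]
healthy = [12, 15, 22, 10, 19]
sens, spec = sensitivity_specificity(concussed, healthy, cutoff=21)
print(sens, spec)  # -> 0.8 0.8
```

Sweeping the cutoff over all observed scores is exactly how an "optimal" operating point such as the study's 21-error threshold is chosen.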

  11. Modeling Data with Excess Zeros and Measurement Error: Application to Evaluating Relationships between Episodically Consumed Foods and Health Outcomes

    KAUST Repository

    Kipnis, Victor; Midthune, Douglas; Buckman, Dennis W.; Dodd, Kevin W.; Guenther, Patricia M.; Krebs-Smith, Susan M.; Subar, Amy F.; Tooze, Janet A.; Carroll, Raymond J.; Freedman, Laurence S.

    2009-01-01

    Dietary assessment of episodically consumed foods gives rise to nonnegative data that have excess zeros and measurement error. Tooze et al. (2006, Journal of the American Dietetic Association 106, 1575-1587) describe a general statistical approach

  12. The Effect of Error Correlation on Interfactor Correlation in Psychometric Measurement

    Science.gov (United States)

    Westfall, Peter H.; Henning, Kevin S. S.; Howell, Roy D.

    2012-01-01

    This article shows how interfactor correlation is affected by error correlations. Theoretical and practical justifications for error correlations are given, and a new equivalence class of models is presented to explain the relationship between interfactor correlation and error correlations. The class allows simple, parsimonious modeling of error…

  13. Temperature and SAR measurement errors in the evaluation of metallic linear structures heating during MRI using fluoroptic® probes

    Energy Technology Data Exchange (ETDEWEB)

    Mattei, E [Department of Technologies and Health, Italian National Institute of Health, Rome (Italy); Triventi, M [Department of Technologies and Health, Italian National Institute of Health, Rome (Italy); Calcagnini, G [Department of Technologies and Health, Italian National Institute of Health, Rome (Italy); Censi, F [Department of Technologies and Health, Italian National Institute of Health, Rome (Italy); Kainz, W [Center for Devices and Radiological Health, Food and Drug Administration, Rockville, MD (United States); Bassen, H I [Center for Devices and Radiological Health, Food and Drug Administration, Rockville, MD (United States); Bartolini, P [Department of Technologies and Health, Italian National Institute of Health, Rome (Italy)

    2007-03-21

The purpose of this work is to evaluate the error associated with temperature and SAR measurements using fluoroptic® temperature probes on pacemaker (PM) leads during magnetic resonance imaging (MRI). We performed temperature measurements on pacemaker leads, excited with a 25, 64, and 128 MHz current. The PM lead tip heating was measured with a fluoroptic® thermometer (Luxtron, Model 3100, USA). Different contact configurations between the pigmented portion of the temperature probe and the PM lead tip were investigated to find the contact position minimizing the temperature and SAR underestimation. A computer model was used to estimate the error made by fluoroptic® probes in temperature and SAR measurement. The transversal contact of the pigmented portion of the temperature probe and the PM lead tip minimizes the underestimation for temperature and SAR. This contact position also has the lowest temperature and SAR error. For other contact positions, the maximum temperature error can be as high as -45%, whereas the maximum SAR error can be as high as -54%. MRI heating evaluations with temperature probes should use a contact position minimizing the maximum error and be accompanied by a thorough uncertainty budget, and the temperature and SAR errors should be specified.
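Local SAR is commonly estimated from the initial temperature slope via SAR = c·dT/dt, so any probe-contact error in the measured slope propagates directly into the SAR value. A sketch of that calculation with an assumed tissue-like heat capacity and synthetic heating data (neither is taken from the paper):

```python
import numpy as np

def sar_from_heating(times_s, temps_c, heat_capacity=3630.0):
    """Estimate local SAR (W/kg) from the initial heating slope via
    SAR = c * dT/dt. The heat capacity is an assumed tissue-like value
    in J/(kg K)."""
    slope = np.polyfit(times_s, temps_c, 1)[0]  # dT/dt from a linear fit
    return heat_capacity * slope

t = np.array([0.0, 1.0, 2.0, 3.0, 4.0])   # seconds after RF switch-on
temp = 37.0 + 0.01 * t                     # synthetic 0.01 K/s heating
sar = sar_from_heating(t, temp)
print(sar)  # ~36.3 W/kg
```

A probe contact that underestimates the temperature rise by 45% would underestimate SAR by roughly the same relative amount, which is why the abstract quotes comparable error figures for both quantities.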

  14. Analysis of influence on back-EMF based sensorless control of PMSM due to parameter variations and measurement errors

    DEFF Research Database (Denmark)

    Wang, Z.; Lu, K.; Ye, Y.

    2011-01-01

To achieve better performance of sensorless control of PMSM, a precise and stable estimation of rotor position and speed is required. Several parameter uncertainties and variable measurement errors may lead to estimation error, such as resistance and inductance variations due to temperature and flux saturation, current and voltage errors due to measurement uncertainties, and signal delay caused by hardware. This paper reveals some inherent principles for the performance of the back-EMF based sensorless algorithm embedded in a surface-mounted PMSM system adopting a vector control strategy …

  15. Soil pH Errors Propagation from Measurements to Spatial Predictions - Cost Benefit Analysis and Risk Assessment Implications for Practitioners and Modelers

    Science.gov (United States)

    Owens, P. R.; Libohova, Z.; Seybold, C. A.; Wills, S. A.; Peaslee, S.; Beaudette, D.; Lindbo, D. L.

    2017-12-01

The measurement errors and spatial prediction uncertainties of soil properties in the modeling community are usually assessed against measured values when available. However, of equal importance is the assessment of the impacts of errors and uncertainty on cost-benefit analysis and risk assessments. Soil pH was selected as one of the most commonly measured soil properties used for liming recommendations. The objective of this study was to assess the error size from different sources and their implications with respect to management decisions. Error sources include measurement methods, laboratory sources, pedotransfer functions, database transactions, spatial aggregations, etc. Several databases of measured and predicted soil pH were used for this study, including the United States National Cooperative Soil Survey Characterization Database (NCSS-SCDB) and the US Soil Survey Geographic (SSURGO) Database. The distribution of errors among different sources, from measurement methods to spatial aggregation, showed a wide range of values. The greatest RMSE of 0.79 pH units was from spatial aggregation (SSURGO vs Kriging), while the measurement methods had the lowest RMSE of 0.06 pH units. Assuming the order of data acquisition is based on the transaction distance, i.e. from measurement method to spatial aggregation, the RMSE increased from 0.06 to 0.8 pH units, suggesting an "error propagation". This has major implications for practitioners and the modeling community. Most soil liming rate recommendations are based on 0.1 pH unit increments, while the desired soil pH level increments are based on 0.4 to 0.5 pH units. Thus, even when the measured and desired target soil pH are the same, most guidelines recommend 1 ton ha-1 lime, which translates into 111 ha-1 that the farmer has to factor into the cost-benefit analysis. However, this analysis needs to be based on uncertainty predictions (0.5-1.0 pH units) rather than measurement errors (0.1 pH units), which would translate into 555-1,111 investment that
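If the error sources along such a chain were independent, their combined effect would add in quadrature; a sketch with hypothetical per-source RMSE values (only the 0.06 floor and the roughly 0.8 total echo the abstract):

```python
import math

# Hypothetical RMSE contributions (pH units) along the data chain
sources = {
    "measurement method": 0.06,
    "pedotransfer function": 0.25,
    "database harmonization": 0.30,
    "spatial aggregation": 0.65,
}

# Independent error sources combine in quadrature (root-sum-of-squares)
combined = math.sqrt(sum(v ** 2 for v in sources.values()))
print(f"combined RMSE: {combined:.2f} pH units")  # -> combined RMSE: 0.76 pH units
```

The quadrature sum is dominated by the largest term, which matches the abstract's observation that spatial aggregation, not the measurement method, controls the end-to-end error.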

  16. Measurement of Systematic Error Effects for a Sensitive Storage Ring EDM Polarimeter

    Science.gov (United States)

    Imig, Astrid; Stephenson, Edward

    2009-10-01

The Storage Ring EDM Collaboration was using the Cooler Synchrotron (COSY) and the EDDA detector at the Forschungszentrum Jülich to explore systematic errors in very sensitive storage-ring polarization measurements. Polarized deuterons of 235 MeV were used. The analyzer target was a block of 17 mm thick carbon placed close to the beam so that white noise applied to upstream electrostatic plates increases the vertical phase space of the beam, allowing deuterons to strike the front face of the block. For a detector acceptance that covers laboratory angles larger than 9°, the efficiency for particles to scatter into the polarimeter detectors was about 0.1% (all directions) and the vector analyzing power was about 0.2. Measurements were made of the sensitivity of the polarization measurement to beam position and angle. Both vector and tensor asymmetries were measured using beams with both vector and tensor polarization. Effects were seen that depend upon both the beam geometry and the data rate in the detectors.

  17. Correction of thickness measurement errors for two adjacent sheet structures in MR images

    International Nuclear Information System (INIS)

    Cheng Yuanzhi; Wang Shuguo; Sato, Yoshinobu; Nishii, Takashi; Tamura, Shinichi

    2007-01-01

    We present a new method for measuring the thickness of two adjacent sheet structures in MR images. In the hip joint, in which the femoral and acetabular cartilages are adjacent to each other, a conventional measurement technique based on the second derivative zero crossings (called the zero-crossings method) can introduce large underestimation errors in measurements of cartilage thickness. In this study, we have developed a model-based approach for accurate thickness measurement. We model the imaging process for two adjacent sheet structures, which simulate the two articular cartilages in the hip joint. This model can be used to predict the shape of the intensity profile along the sheet normal orientation. Using an optimization technique, the model parameters are adjusted to minimize the differences between the predicted intensity profile and the actual intensity profiles observed in the MR data. The set of model parameters that minimize the difference between the model and the MR data yield the thickness estimation. Using three phantoms and one normal cadaveric specimen, the usefulness of the new model-based method is demonstrated by comparing the model-based results with the results generated using the zero-crossings method. (author)

  18. Analytical model and error analysis of arbitrary phasing technique for bunch length measurement

    Science.gov (United States)

    Chen, Qushan; Qin, Bin; Chen, Wei; Fan, Kuanjun; Pei, Yuanji

    2018-05-01

An analytical model of an RF phasing method using arbitrary phase scanning for bunch length measurement is reported. We set up a statistical model instead of a linear chirp approximation to analyze the energy modulation process. It is found that, assuming a short bunch (σφ/2π → 0) and small relative energy spread (σγ/γr → 0), the energy spread (Y = σγ²) at the exit of the traveling wave linac has a parabolic relationship with the cosine value of the injection phase (X = cos φr|z=0), i.e., Y = AX² + BX + C. Analogous to quadrupole strength scanning for emittance measurement, this phase scanning method can be used to obtain the bunch length by measuring the energy spread at different injection phases. The injection phases can be randomly chosen, which is significantly different from the commonly used zero-phasing method. Further, the systematic error of the reported method, such as the influence of the space charge effect, is analyzed. This technique will be especially useful at low energies when the beam quality is dramatically degraded and is hard to measure using the zero-phasing method.
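The parabolic relationship Y = AX² + BX + C can be recovered from a simulated phase scan with an ordinary least-squares fit, analogous to a quadrupole-strength scan; all parameter values below are invented:

```python
import numpy as np

# Synthetic phase scan following the model Y = A X^2 + B X + C with
# X = cos(injection phase)
A_true, B_true, C_true = 4.0, -1.2, 0.5
phases = np.deg2rad(np.linspace(0.0, 180.0, 13))   # arbitrary injection phases
X = np.cos(phases)
Y = A_true * X**2 + B_true * X + C_true            # noise-free for illustration

# Least-squares parabola recovers the coefficients, and hence the bunch length
A_fit, B_fit, C_fit = np.polyfit(X, Y, 2)
print(round(A_fit, 3), round(B_fit, 3), round(C_fit, 3))  # -> 4.0 -1.2 0.5
```

Because any set of injection phases determines the parabola, the scan points really can be "randomly chosen", which is the practical advantage over zero-phasing that the abstract emphasizes.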

  19. Correcting the error in neutron moisture probe measurements caused by a water density gradient

    International Nuclear Information System (INIS)

    Wilson, D.J.

    1988-01-01

    If a neutron probe lies in or near a water density gradient, the probe may register a water density different to that at the measuring point. The effect of a thin stratum of soil containing an excess or depletion of water at various distances from a probe in an otherwise homogeneous system has been calculated, producing an 'importance' curve. The effect of these strata can be integrated over the soil region in close proximity to the probe resulting in the net effect of the presence of a water density gradient. In practice, the probe is scanned through the point of interest and the count rate at that point is corrected for the influence of the water density on each side of it. An example shows that the technique can reduce an error of 10 per cent to about 2 per cent
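The 'importance'-curve correction can be sketched as a weighted average of the water density around the probe; the weights and density profile below are hypothetical:

```python
import numpy as np

# Hypothetical normalized "importance" weights: relative contribution of soil
# at -2..+2 scan steps around the probe to its registered count rate
importance = np.array([0.1, 0.2, 0.4, 0.2, 0.1])

def apparent_density(true_profile):
    """What the probe registers at each scan position: a weighted average
    of the true water density within its sphere of influence."""
    return np.convolve(true_profile, importance, mode="same")

# A sharp wet stratum is smeared out by the probe's sphere of influence
true_profile = np.array([0.10, 0.10, 0.30, 0.10, 0.10])
registered = apparent_density(true_profile)
print(registered)  # the 0.30 peak is smoothed down to 0.18
```

Correcting a scan, as the abstract describes, amounts to inverting this smoothing: given the registered readings on either side of the point of interest, the true value at that point is recovered.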

  20. Evaluation of error bands and confidence limits for thermal measurements in the CFTL bundle

    International Nuclear Information System (INIS)

    Childs, K.W.; Sanders, J.P.; Conklin, J.C.

    1979-01-01

Surface cladding temperatures for the fuel rod simulators in the Core Flow Test Loop (CFTL) must be inferred from a measurement at a thermocouple junction within the rod. This step requires the evaluation of the thermal field within the rod based on known parameters such as heat generation rate, dimensional tolerances, thermal properties, and contact coefficients. Uncertainties in the surface temperature can be evaluated by assigning error bands to each of the parameters used in the calculation. A statistical method has been employed to establish the confidence limits for the surface temperature from a combination of the standard deviations of the important parameters. This method indicates that for a CFTL fuel rod simulator with a total power of 38 kW and a ratio of maximum to average axial power of 1.21, the 95% confidence limit for the calculated surface temperature is ±45 °C at the midpoint of the rod
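Combining parameter standard deviations into a confidence limit on the inferred surface temperature can also be done by Monte Carlo sampling; a sketch with a deliberately simplified placeholder thermal model and assumed standard deviations (none of these values come from the CFTL analysis):

```python
import numpy as np

rng = np.random.default_rng(42)

def surface_temperature(power_kw, contact_coeff, conductivity):
    """Placeholder thermal model mapping rod parameters to cladding surface
    temperature; the real CFTL evaluation uses a detailed thermal field."""
    return 300.0 + power_kw / (contact_coeff * conductivity)

# Assumed (illustrative) standard deviations for the uncertain parameters
n = 100_000
power = rng.normal(38.0, 0.5, n)       # kW
contact = rng.normal(1.0, 0.05, n)     # relative contact coefficient
conduct = rng.normal(0.2, 0.01, n)     # relative conductivity

temps = surface_temperature(power, contact, conduct)
lo, hi = np.percentile(temps, [2.5, 97.5])  # 95% confidence limits
print(f"95% confidence limits: {lo:.0f} to {hi:.0f} degrees")
```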

  1. Statistics and error considerations at the application of the SSNTD technique in radon measurement

    International Nuclear Information System (INIS)

    Jonsson, G.

    1993-01-01

Plastic films are used for the detection of alpha particles from disintegrating radon and radon daughter nuclei. After etching there are tracks (cones) or holes in the film as a result of the exposure. The step from a counted number of tracks/holes per surface unit of the film to a reliable value of the radon and radon daughter level is subject to statistical considerations of various kinds. Among them are the number of counted tracks, the length of the time of exposure, the season of the time of exposure, the etching technique and the method of counting the tracks or holes. The number of background tracks of an unexposed film increases the error of the measured radon level. Some of these statistical effects are discussed in the report. (Author)
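The counting statistics involved are Poissonian: the standard error of a background-subtracted track count combines signal and background in quadrature, so the background of an unexposed film directly inflates the error, as the abstract notes. A minimal sketch with invented counts:

```python
import math

def net_track_density(tracks, background_tracks, area_cm2):
    """Net track density and its Poisson standard error: both counts are
    Poisson-distributed, so their uncertainties add in quadrature."""
    net = tracks - background_tracks
    sigma = math.sqrt(tracks + background_tracks)
    return net / area_cm2, sigma / area_cm2

# Hypothetical counts from an exposed film and an unexposed background film
density, sigma = net_track_density(tracks=400, background_tracks=50, area_cm2=1.0)
print(density, round(sigma, 1))  # -> 350.0 21.2
```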

  2. Impact of mixed modes on measurement errors and estimates of change in panel data

    Directory of Open Access Journals (Sweden)

    Alexandru Cernat

    2015-07-01

Full Text Available Mixed mode designs are receiving increased interest as a possible solution for saving costs in panel surveys, although the lasting effects on data quality are unknown. To better understand the effects of mixed mode designs on panel data we will examine their impact on random and systematic error and on estimates of change. The SF12, a health scale in the Understanding Society Innovation Panel, is used for the analysis. Results indicate that only one variable out of 12 has systematic differences due to the mixed mode design. Also, four of the 12 items overestimate variance of change in time in the mixed mode design. We conclude that using a mixed mode approach leads to minor measurement differences but it can result in the overestimation of individual change compared to a single mode design.

  3. The Euler equation with habits and measurement errors: Estimates on Russian micro data

    Directory of Open Access Journals (Sweden)

    Khvostova Irina

    2016-01-01

Full Text Available This paper presents estimates of the consumption Euler equation for Russia. The estimation is based on micro-level panel data and accounts for the heterogeneity of agents’ preferences and measurement errors. The presence of multiplicative habits is checked using the Lagrange multiplier (LM) test in a generalized method of moments (GMM) framework. We obtain estimates of the elasticity of intertemporal substitution and of the subjective discount factor, which are consistent with the theoretical model and can be used for the calibration and the Bayesian estimation of dynamic stochastic general equilibrium (DSGE) models for the Russian economy. We also show that the effects of habit formation are not significant. The hypotheses of multiplicative habits (external, internal, and both external and internal) are not supported by the data.

  4. A Reanalysis of Toomela (2003: Spurious measurement error as cause for common variance between personality factors

    Directory of Open Access Journals (Sweden)

    MATTHIAS ZIEGLER

    2009-03-01

Full Text Available The present article reanalyzed data collected by Toomela (2003). The data contain personality self-ratings and cognitive ability test results from n = 912 men with military background. In his original article Toomela showed that in the group with the highest cognitive ability, Big-Five-Neuroticism and -Conscientiousness were substantially correlated and could no longer be clearly separated using exploratory factor analysis. The present reanalysis was based on the hypothesis that a spurious measurement error caused by situational demand was responsible; that is, people distorted their answers. Furthermore, it was hypothesized that this situational demand was felt due to a person’s military rank but not due to his intelligence. Using a multigroup structural equation model, our hypothesis could be confirmed. Moreover, the results indicate that an uncorrelated trait model might represent personalities better when situational demand is partialized. Practical and theoretical implications are discussed.

  5. Considerations for analysis of time-to-event outcomes measured with error: Bias and correction with SIMEX.

    Science.gov (United States)

    Oh, Eric J; Shepherd, Bryan E; Lumley, Thomas; Shaw, Pamela A

    2018-04-15

    For time-to-event outcomes, a rich literature exists on the bias introduced by covariate measurement error in regression models, such as the Cox model, and methods of analysis to address this bias. By comparison, less attention has been given to understanding the impact or addressing errors in the failure time outcome. For many diseases, the timing of an event of interest (such as progression-free survival or time to AIDS progression) can be difficult to assess or reliant on self-report and therefore prone to measurement error. For linear models, it is well known that random errors in the outcome variable do not bias regression estimates. With nonlinear models, however, even random error or misclassification can introduce bias into estimated parameters. We compare the performance of 2 common regression models, the Cox and Weibull models, in the setting of measurement error in the failure time outcome. We introduce an extension of the SIMEX method to correct for bias in hazard ratio estimates from the Cox model and discuss other analysis options to address measurement error in the response. A formula to estimate the bias induced into the hazard ratio by classical measurement error in the event time for a log-linear survival model is presented. Detailed numerical studies are presented to examine the performance of the proposed SIMEX method under varying levels and parametric forms of the error in the outcome. We further illustrate the method with observational data on HIV outcomes from the Vanderbilt Comprehensive Care Clinic. Copyright © 2017 John Wiley & Sons, Ltd.
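The SIMEX idea (deliberately inflating the measurement error, recording how the estimate degrades, and extrapolating the trend back to zero error) can be sketched for a simple linear model with a mismeasured covariate. The abstract applies it to event times in Cox and Weibull models; this toy uses ordinary least squares purely to show the mechanics, with invented data:

```python
import numpy as np

rng = np.random.default_rng(1)

# True model: y = 2*x + noise, but only w = x + u is observed (classical
# additive measurement error with known standard deviation sigma_u)
n, beta_true, sigma_u = 5000, 2.0, 0.5
x = rng.normal(0.0, 1.0, n)
y = beta_true * x + rng.normal(0.0, 0.1, n)
w = x + rng.normal(0.0, sigma_u, n)

def ols_slope(covariate, outcome):
    return np.polyfit(covariate, outcome, 1)[0]

naive = ols_slope(w, y)  # attenuated towards zero by the measurement error

# SIMEX: add extra error so the total error variance is (1 + lam) * sigma_u^2,
# record the increasingly biased estimates, then extrapolate back to lam = -1
lams = np.array([0.0, 0.5, 1.0, 1.5, 2.0])
estimates = [np.mean([ols_slope(w + rng.normal(0.0, np.sqrt(lam) * sigma_u, n), y)
                      for _ in range(20)]) for lam in lams]
quad = np.polyfit(lams, estimates, 2)          # quadratic extrapolant
simex_beta = np.polyval(quad, -1.0)
print(round(naive, 2), round(simex_beta, 2))   # naive is well below 2, SIMEX close to it
```

The same simulate-then-extrapolate loop applies when the error sits in the failure time rather than a covariate, as in the paper's extension; only the refitting step changes.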

  6. Measurements on pointing error and field of view of Cimel-318 Sun photometers in the scope of AERONET

    Directory of Open Access Journals (Sweden)

    B. Torres

    2013-08-01

Full Text Available Sensitivity studies indicate that among the diverse error sources of ground-based sky radiometer observations, the pointing error plays an important role in the correct retrieval of aerosol properties. Accurate pointing is especially critical for the characterization of desert dust aerosol. The present work relies on the analysis of two new measurement procedures (cross and matrix) specifically designed for the evaluation of the pointing error in the standard instrument of the Aerosol Robotic Network (AERONET), the Cimel CE-318 Sun photometer. The first part of the analysis contains a preliminary study whose results conclude on the need for a Sun movement correction for an accurate evaluation of the pointing error from both new measurements. Once this correction is applied, both measurements show equivalent results with differences under 0.01° in the pointing error estimations. The second part of the analysis includes the incorporation of the cross procedure in the AERONET routine measurement protocol in order to monitor the pointing error in field instruments. The pointing error was evaluated using the data collected for more than a year, in 7 Sun photometers belonging to AERONET sites. The registered pointing error values were generally smaller than 0.1°, though in some instruments values up to 0.3° have been observed. Moreover, the pointing error analysis shows that this measurement can be useful to detect mechanical problems in the robots or dirtiness in the 4-quadrant detector used to track the Sun. Specifically, these mechanical faults can be detected due to the stable behavior of the values over time and vs. the solar zenith angle. Finally, the matrix procedure can be used to derive the value of the solid view angle of the instruments. The methodology has been implemented and applied for the characterization of 5 Sun photometers. To validate the method, a comparison with solid angles obtained from the vicarious calibration method was

  7. Effects of measurement errors on psychometric measurements in ergonomics studies: Implications for correlations, ANOVA, linear regression, factor analysis, and linear discriminant analysis.

    Science.gov (United States)

    Liu, Yan; Salvendy, Gavriel

    2009-05-01

    This paper aims to demonstrate the effects of measurement errors on psychometric measurements in ergonomics studies. A variety of sources can cause random measurement errors in ergonomics studies, and these errors can distort virtually every statistic computed and lead investigators to erroneous conclusions. The effects of measurement errors on the five most widely used statistical analysis tools are discussed and illustrated: correlation, ANOVA, linear regression, factor analysis, and linear discriminant analysis. It is shown that measurement errors can greatly attenuate correlations between variables, reduce the statistical power of ANOVA, distort (overestimate, underestimate, or even change the sign of) regression coefficients, understate the contributions of the most important factors in factor analysis, and depreciate the significance of the discriminant function and the discrimination abilities of individual variables in discriminant analysis. The discussion is restricted to subjective scales and survey methods and their reliability estimates. Other methods applied in ergonomics research, such as physical and electrophysiological measurements and chemical and biomedical analysis methods, also suffer from measurement errors, but they are beyond the scope of this paper. As there has been increasing interest in the development and testing of theories in ergonomics research, it has become very important for ergonomics researchers to understand the effects of measurement errors on their experimental results, which the authors believe is critical to progress in theory development and cumulative knowledge in the ergonomics field.
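As a hedged illustration of the attenuation effect described above, the sketch below simulates two error-prone scales (reliabilities and sample size are made-up values): the observed correlation drops toward r_true·sqrt(rel_x·rel_y), the classical attenuation formula.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# True scores with correlation r_true
r_true = 0.6
true_x = rng.normal(size=n)
true_y = r_true * true_x + np.sqrt(1 - r_true**2) * rng.normal(size=n)

# Add independent measurement error; reliability = var(true) / var(observed)
rel_x, rel_y = 0.7, 0.8
obs_x = true_x + rng.normal(scale=np.sqrt(1 / rel_x - 1), size=n)
obs_y = true_y + rng.normal(scale=np.sqrt(1 / rel_y - 1), size=n)

r_obs = np.corrcoef(obs_x, obs_y)[0, 1]
r_pred = r_true * np.sqrt(rel_x * rel_y)   # classical attenuation formula
print(round(r_obs, 2), round(r_pred, 2))
```

With both reliabilities below 1, the observed correlation is noticeably smaller than the true 0.6, matching the attenuation the abstract warns about.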

  8. On the importance of measurement error correlations in data assimilation for integrated hydrological models

    Science.gov (United States)

    Camporese, Matteo; Botto, Anna

    2017-04-01

    Data assimilation is becoming increasingly popular in hydrological and earth system modeling, as it allows us to integrate multisource observation data into modeling predictions and, in doing so, to reduce uncertainty. For this reason, data assimilation has recently been the focus of much attention also for physically based integrated hydrological models, whereby multiple terrestrial compartments (e.g., snow cover, surface water, groundwater) are solved simultaneously in an attempt to tackle environmental problems holistically. Recent examples include the joint assimilation of water table, soil moisture, and river discharge measurements in catchment models of coupled surface-subsurface flow using the ensemble Kalman filter (EnKF). One of the typical assumptions in these studies is that the measurement errors are uncorrelated, whereas in certain situations it is reasonable to believe that some degree of correlation occurs, due for example to a pair of sensors sharing the same soil type. The goal of this study is to show whether and how measurement error correlations between different observation data play a significant role in the assimilation results in a real-world application of an integrated hydrological model. The model CATHY (CATchment HYdrology) is applied to reproduce the hydrological dynamics observed in an experimental hillslope. The physical model, located in the Department of Civil, Environmental and Architectural Engineering of the University of Padova (Italy), consists of a reinforced concrete box containing a soil prism with a maximum height of 3.5 m, a length of 6 m, and a width of 2 m. The hillslope is equipped with sensors to monitor the pressure head and soil moisture responses to a series of generated rainfall events applied onto a 60 cm thick sand layer overlying a sandy clay soil. The measurement network is completed by two tipping-bucket flow gages that measure the two components (subsurface and surface) of the outflow. 
By collecting

  9. Error Correction and Calibration of a Sun Protection Measurement System for Textile Fabrics

    International Nuclear Information System (INIS)

    Moss, A.R.L.

    2000-01-01

    Clothing is increasingly being labelled with a Sun Protection Factor number which indicates the protection against sunburn provided by the textile fabric. This Factor is obtained by measuring the transmittance of samples of the fabric in the ultraviolet region (290-400 nm). The accuracy and hence the reliability of the label depends on the accuracy of the measurement. Some sun protection measurement systems quote a transmittance accuracy at 2%T of ± 1.5%T. This means a fabric classified under the Australian standard (AS/NZ 4399:1996) with an Ultraviolet Protection Factor (UPF) of 40 would have an uncertainty of +15 or -10. This would not allow classification to the nearest 5, and a UVR protection category of 'excellent protection' might in fact be only 'very good protection'. An accuracy of ±0.1%T is required to give a UPF uncertainty of ±2.5. The measurement system then does not contribute significantly to the error, and the problems are now limited to sample conditioning, position and consistency. A commercial sun protection measurement system has been developed by Camspec Ltd which used traceable neutral density filters and appropriate design to ensure high accuracy. The effects of small zero offsets are corrected and the effect of the reflectivity of the sample fabric on the integrating sphere efficiency is measured and corrected. Fabric orientation relative to the light patch is considered. Signal stability is ensured by means of a reference beam. Traceable filters also allow wavelength accuracy to be conveniently checked. (author)

  10. Error Correction and Calibration of a Sun Protection Measurement System for Textile Fabrics

    Energy Technology Data Exchange (ETDEWEB)

    Moss, A.R.L

    2000-07-01

    Clothing is increasingly being labelled with a Sun Protection Factor number which indicates the protection against sunburn provided by the textile fabric. This Factor is obtained by measuring the transmittance of samples of the fabric in the ultraviolet region (290-400 nm). The accuracy and hence the reliability of the label depends on the accuracy of the measurement. Some sun protection measurement systems quote a transmittance accuracy at 2%T of ± 1.5%T. This means a fabric classified under the Australian standard (AS/NZ 4399:1996) with an Ultraviolet Protection Factor (UPF) of 40 would have an uncertainty of +15 or -10. This would not allow classification to the nearest 5, and a UVR protection category of 'excellent protection' might in fact be only 'very good protection'. An accuracy of ±0.1%T is required to give a UPF uncertainty of ±2.5. The measurement system then does not contribute significantly to the error, and the problems are now limited to sample conditioning, position and consistency. A commercial sun protection measurement system has been developed by Camspec Ltd which used traceable neutral density filters and appropriate design to ensure high accuracy. The effects of small zero offsets are corrected and the effect of the reflectivity of the sample fabric on the integrating sphere efficiency is measured and corrected. Fabric orientation relative to the light patch is considered. Signal stability is ensured by means of a reference beam. Traceable filters also allow wavelength accuracy to be conveniently checked. (author)

  11. Valuing urban open space using the travel-cost method and the implications of measurement error.

    Science.gov (United States)

    Hanauer, Merlin M; Reid, John

    2017-08-01

    Urbanization has placed pressure on open space within and adjacent to cities. In recent decades, a greater awareness has developed of the fact that individuals derive multiple benefits from urban open space. Given the location, there is often a high opportunity cost to preserving urban open space, so it is important for both public and private stakeholders to justify such investments. The goals of this study are twofold. First, we use detailed surveys and precise, accessible mapping methods to demonstrate how travel-cost methods can be applied to the valuation of urban open space. Second, we assess the degree to which typical methods of estimating travel times, and thus travel costs, introduce bias into the estimates of welfare. The site we study is Taylor Mountain Regional Park, an 1100-acre space located immediately adjacent to Santa Rosa, California, which is the largest city (∼170,000 population) in Sonoma County and lies 50 miles north of San Francisco. We estimate that the average per-trip access value (consumer surplus) is $13.70. We also demonstrate that typical methods of measuring travel costs significantly understate these welfare measures. Our study provides policy-relevant results and highlights the sensitivity of urban open space travel-cost studies to bias stemming from travel-cost measurement error. Copyright © 2017 Elsevier Ltd. All rights reserved.

  12. Calibration of a camera–projector measurement system and error impact analysis

    International Nuclear Information System (INIS)

    Huang, Junhui; Wang, Zhao; Xue, Qi; Gao, Jianmin

    2012-01-01

    In a camera–projector measurement system, calibration is key to measurement accuracy; in particular, it is more difficult to achieve the same calibration accuracy for the projector as for the camera because of the inaccurate correspondence between its calibration points and imaging points. Thus, based on stereo-vision measurement models of the camera and the projector, a calibration method using direct linear transformation (DLT) and bundle adjustment (BA) is introduced to adjust the correspondences for better optimization, which minimizes the effect of inaccurate calibration points. An integral method is also presented to improve the precision of the projection patterns, compensating for the projector's limited resolution. Moreover, the impacts of errors in the system parameters and calibration points are evaluated as the calibration-point positions change, which not only provides theoretical guidance for the rational layout of the calibration points but can also be used to optimize the system structure. Finally, the calibration of the system is carried out, and the experimental results show that better precision can be achieved with these procedures. (paper)

  13. Using the area under the curve to reduce measurement error in predicting young adult blood pressure from childhood measures.

    Science.gov (United States)

    Cook, Nancy R; Rosner, Bernard A; Chen, Wei; Srinivasan, Sathanur R; Berenson, Gerald S

    2004-11-30

    Tracking correlations of blood pressure, particularly childhood measures, may be attenuated by within-person variability. Combining multiple measurements can reduce this error substantially. The area under the curve (AUC) computed from longitudinal growth curve models can be used to improve the prediction of young adult blood pressure from childhood measures. Quadratic random-effects models over unequally spaced repeated measures were used to compute the area under the curve separately within the age periods 5-14 and 20-34 years in the Bogalusa Heart Study. This method adjusts for the uneven age distribution and captures the underlying or average blood pressure, leading to improved estimates of correlation and risk prediction. Tracking correlations were computed by race and gender, and were approximately 0.6 for systolic, 0.5-0.6 for K4 diastolic, and 0.4-0.6 for K5 diastolic blood pressure. The AUC can also be used to regress young adult blood pressure on childhood blood pressure and childhood and young adult body mass index (BMI). In these data, while childhood blood pressure and young adult BMI were generally directly predictive of young adult blood pressure, childhood BMI was negatively correlated with young adult blood pressure when childhood blood pressure was in the model. In addition, racial differences in young adult blood pressure were reduced, but not eliminated, after controlling for childhood blood pressure, childhood BMI, and young adult BMI, suggesting that other genetic or lifestyle factors contribute to this difference. 2004 John Wiley & Sons, Ltd.
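A minimal sketch of the area-under-the-curve idea, assuming a simple per-person quadratic fit in place of the paper's random-effects model (the ages and blood-pressure values below are invented for illustration):

```python
import numpy as np

def quadratic_auc(ages, values, lo, hi):
    """Average level over [lo, hi] from a per-person quadratic fit.

    A simplified stand-in for the paper's random-effects curves: fit a
    quadratic to one person's unequally spaced measurements, integrate
    it, and divide by the age span so the result is an average level
    rather than a raw area (this adjusts for uneven age coverage).
    """
    coeffs = np.polyfit(ages, values, deg=2)   # highest power first
    integral = np.polyint(coeffs)              # antiderivative
    area = np.polyval(integral, hi) - np.polyval(integral, lo)
    return area / (hi - lo)

# Noisy blood-pressure-like readings at uneven ages (hypothetical data)
ages = np.array([5.0, 7.5, 9.0, 12.0, 14.0])
bp = 95 + 1.2 * ages + np.array([2.1, -1.4, 0.8, -0.6, 1.0])
auc = quadratic_auc(ages, bp, 5, 14)
print(round(auc, 1))
```

Averaging the fitted curve over the whole age window pools the repeated measures, which is what damps the within-person variability that attenuates tracking correlations.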

  14. Cloud cover detection combining high dynamic range sky images and ceilometer measurements

    Science.gov (United States)

    Román, R.; Cazorla, A.; Toledano, C.; Olmo, F. J.; Cachorro, V. E.; de Frutos, A.; Alados-Arboledas, L.

    2017-11-01

    This paper presents a new algorithm for cloud detection based on high dynamic range images from a sky camera and on ceilometer measurements. The algorithm is also able to detect obstruction of the sun. This algorithm, called CPC (Camera Plus Ceilometer), is based on the assumption that under cloud-free conditions the sky field must show symmetry. The symmetry criteria are applied depending on ceilometer measurements of the cloud base height. The CPC algorithm is applied at two Spanish locations (Granada and Valladolid). The performance of CPC in retrieving the sun conditions (obstructed or unobstructed) is analyzed in detail using pyranometer measurements at Granada as reference. CPC retrievals agree with those derived from the reference pyranometer in 85% of the cases (this agreement does not appear to depend on aerosol size or optical depth). The agreement percentage drops to only 48% when another algorithm, based on the Red-Blue Ratio (RBR), is applied to the sky camera images. The retrieved cloud cover at Granada and Valladolid is compared with that registered by trained meteorological observers. CPC cloud cover agrees with the reference, showing a slight overestimation and a mean absolute error of around 1 okta. A major advantage of the CPC algorithm with respect to the RBR method is that the determined cloud cover is independent of aerosol properties; the RBR algorithm overestimates cloud cover for coarse aerosols and high aerosol loads. Cloud cover obtained from the ceilometer alone gives results similar to the CPC algorithm, but it cannot provide the horizontal distribution. In addition, it has been observed that under quick and strong changes in cloud cover, ceilometer retrievals fit the real cloud cover less well.

  15. Exploring Senior Residents' Intraoperative Error Management Strategies: A Potential Measure of Performance Improvement.

    Science.gov (United States)

    Law, Katherine E; Ray, Rebecca D; D'Angelo, Anne-Lise D; Cohen, Elaine R; DiMarco, Shannon M; Linsmeier, Elyse; Wiegmann, Douglas A; Pugh, Carla M

    The study aim was to determine whether residents' error management strategies changed across 2 simulated laparoscopic ventral hernia (LVH) repair procedures after receiving feedback on their initial performance. We hypothesized that error detection and recovery strategies would improve during the second procedure without hands-on practice. Retrospective review of participant procedural performances of simulated laparoscopic ventral herniorrhaphy. A total of 3 investigators reviewed procedure videos to identify surgical errors. Errors were deconstructed. Error management events were noted, including error identification and recovery. Residents performed the simulated LVH procedures during a course on advanced laparoscopy. Participants had 30 minutes to complete an LVH procedure. After verbal and simulator feedback, residents returned 24 hours later to perform a different, more difficult simulated LVH repair. Senior (N = 7; postgraduate year 4-5) residents in attendance at the course participated in this study. In the first LVH procedure, residents committed 121 errors (M = 17.14, standard deviation = 4.38). Although the number of errors increased to 146 (M = 20.86, standard deviation = 6.15) during the second procedure, residents progressed further in the second procedure. There was no significant difference in the number of errors committed across the two procedures, but errors shifted to the late stage of the second procedure. Residents changed the error types that they attempted to recover (χ²(5) = 24.96); recovery attempts increased for some error types but decreased for strategy errors. Residents also recovered the most errors in the late stage of the second procedure. Error management strategies changed between procedures following verbal feedback on residents' initial performance and feedback from the simulator. Errors and recovery attempts shifted to later steps during the second procedure. This may reflect residents' error management success in the earlier stages, which allowed further progression in the

  16. Errors of first-order probe correction for higher-order probes in spherical near-field antenna measurements

    DEFF Research Database (Denmark)

    Laitinen, Tommi; Nielsen, Jeppe Majlund; Pivnenko, Sergiy

    2004-01-01

    An investigation is performed to study the error of the far-field pattern determined from a spherical near-field antenna measurement in the case where a first-order (μ = ±1) probe correction scheme is applied to the near-field signal measured by a higher-order probe.

  17. Calibration, field-testing, and error analysis of a gamma-ray probe for in situ measurement of dry bulk density

    International Nuclear Information System (INIS)

    Bertuzzi, P.; Bruckler, L.; Gabilly, Y.; Gaudu, J.C.

    1987-01-01

    This paper describes a new gamma-ray probe for measuring dry bulk density in the field. This equipment can be used with three different tube spacings (15, 20 and 30 cm). Calibration procedures and local error analyses are proposed for two cases: (1) for the case where the access tubes are parallel, calibration equations are given for the three tube spacings. The linear correlation coefficient obtained in the laboratory is satisfactory (0.999), and a local error analysis shows that the standard deviation in the measured dry bulk density is small (±0.02 g/cm³); (2) when the access tubes are not parallel, a new calibration procedure is presented that accounts for and corrects measurement bias due to the deviating probe spacing. The standard deviation associated with the measured dry bulk density is greater (±0.05 g/cm³), but the measurements themselves are regarded as unbiased. After comparisons of core samplings and gamma-ray probe measurements, a field validation of the gamma-ray measurements is presented. Field validation was carried out on a variety of soils (clay, clay loam, loam, and silty clay loam), using gravimetric water contents that varied from 0.11 to 0.27 and dry bulk densities ranging from 1.30-1.80 g/cm³. Finally, an example of dry bulk density field variability is shown, and the spatial variability is analyzed with regard to the measurement errors

  18. A measurement error model for physical activity level as measured by a questionnaire with application to the 1999-2006 NHANES questionnaire.

    Science.gov (United States)

    Tooze, Janet A; Troiano, Richard P; Carroll, Raymond J; Moshfegh, Alanna J; Freedman, Laurence S

    2013-06-01

    Systematic investigations into the structure of measurement error of physical activity questionnaires are lacking. We propose a measurement error model for a physical activity questionnaire that uses physical activity level (the ratio of total energy expenditure to basal energy expenditure) to relate questionnaire-based reports of physical activity level to true physical activity levels. The 1999-2006 National Health and Nutrition Examination Survey physical activity questionnaire was administered to 433 participants aged 40-69 years in the Observing Protein and Energy Nutrition (OPEN) Study (Maryland, 1999-2000). Valid estimates of participants' total energy expenditure were also available from doubly labeled water, and basal energy expenditure was estimated from an equation; the ratio of those measures estimated true physical activity level ("truth"). We present a measurement error model that accommodates the mixture of errors that arise from assuming a classical measurement error model for doubly labeled water and a Berkson error model for the equation used to estimate basal energy expenditure. The method was then applied to the OPEN Study. Correlations between the questionnaire-based physical activity level and truth were modest (r = 0.32-0.41); attenuation factors (0.43-0.73) indicate that the use of questionnaire-based physical activity level would lead to attenuated estimates of effect size. Results suggest that sample sizes for estimating relationships between physical activity level and disease should be inflated, and that regression calibration can be used to provide measurement error-adjusted estimates of relationships between physical activity and disease.
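The attenuation factor and its use in regression calibration can be sketched as follows (a toy simulation with made-up variances, not the OPEN Study model):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

truth = rng.normal(1.6, 0.2, size=n)              # true PAL-like exposure
questionnaire = truth + rng.normal(0, 0.3, n)     # error-prone report
outcome = 2.0 * truth + rng.normal(0, 1.0, n)     # outcome depends on truth

# Attenuation factor lambda = cov(Q, T) / var(Q); naive slope ~= lambda * beta
lam = np.cov(questionnaire, truth)[0, 1] / np.var(questionnaire, ddof=1)
naive = np.cov(questionnaire, outcome)[0, 1] / np.var(questionnaire, ddof=1)
corrected = naive / lam                           # regression calibration
print(round(lam, 2), round(naive, 2), round(corrected, 2))
```

Dividing the naive slope by the attenuation factor recovers the true effect size, which is why the abstract recommends regression calibration and inflated sample sizes when only the questionnaire measure is available.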

  19. Measurement error correction in the least absolute shrinkage and selection operator model when validation data are available.

    Science.gov (United States)

    Vasquez, Monica M; Hu, Chengcheng; Roe, Denise J; Halonen, Marilyn; Guerra, Stefano

    2017-01-01

    Measurement of serum biomarkers by multiplex assays may be more variable than measurement by single-biomarker assays. Measurement error in these data may bias parameter estimates in regression analysis, which could mask true associations of serum biomarkers with an outcome. The Least Absolute Shrinkage and Selection Operator (LASSO) can be used for variable selection in these high-dimensional data. Furthermore, when the distribution of the measurement error is assumed known, or is estimated from replication data, a simple measurement error correction can be applied to the LASSO method. In practice, however, the distribution of the measurement error is unknown, and estimating it through replication is expensive, both in monetary cost and in the additional sample material required, which is often limited in quantity. We adapt an existing bias correction approach by estimating the measurement error using validation data, in which a subset of serum biomarkers is re-measured on a random subset of the study sample. We evaluate this method using simulated data and data from the Tucson Epidemiological Study of Airway Obstructive Disease (TESAOD). We show that the bias in parameter estimation is reduced and variable selection is improved.
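A simplified sketch of the validation-data idea for a single biomarker (a method-of-moments correction of one regression slope rather than the paper's corrected LASSO; all numbers are invented):

```python
import numpy as np

rng = np.random.default_rng(2)
n, n_val = 5000, 500

truth = rng.normal(0, 1, n)
y = 1.5 * truth + rng.normal(0, 1, n)
x = truth + rng.normal(0, 0.6, n)                # error-prone biomarker

# Validation subset: the same biomarker re-measured on a random subset
idx = rng.choice(n, n_val, replace=False)
x2 = truth[idx] + rng.normal(0, 0.6, n_val)
# Half the variance of the paired difference estimates the error variance
err_var = 0.5 * np.var(x[idx] - x2, ddof=1)

# Method-of-moments correction: subtract error variance from var(X)
beta_naive = np.cov(x, y)[0, 1] / np.var(x, ddof=1)
beta_corr = np.cov(x, y)[0, 1] / (np.var(x, ddof=1) - err_var)
print(round(beta_naive, 2), round(beta_corr, 2))
```

Re-measuring only a random subset is enough to estimate the error variance, which is the economy the abstract emphasizes when full replication is too costly.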

  20. A note on errors and signal to noise ratio of binary cross-correlation measurements of system impulse response

    International Nuclear Information System (INIS)

    Cummins, J.D.

    1964-02-01

    The sources of error in the measurement of system impulse response using test signals of a discrete interval binary nature are considered. Methods of correcting for the errors due to theoretical imperfections are given and the variance of the estimate of the system impulse response due to random noise is determined. Several topics related to the main topic are considered e.g. determination of a theoretical model from experimental results. General conclusions about the magnitude of the errors due to the theoretical imperfections are made. (author)
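The cross-correlation idea can be sketched numerically; the snippet below uses a random ±1 sequence as a stand-in for a discrete-interval binary test signal and an invented exponential impulse response:

```python
import numpy as np

rng = np.random.default_rng(3)

# True impulse response of the system under test (hypothetical)
h = np.exp(-np.arange(20) / 5.0)

# Discrete-interval binary test signal (random +/-1 here, standing in
# for a maximum-length PRBS) and the noisy measured output
u = rng.choice([-1.0, 1.0], size=20_000)
y = np.convolve(u, h)[: len(u)] + 0.1 * rng.normal(size=len(u))

# Cross-correlation estimate: mean of u[t-k] * y[t] ~= h[k] * var(u)
n = len(u)
h_est = np.array([np.dot(u[: n - k], y[k:]) / (n - k) for k in range(20)])
err = np.max(np.abs(h_est - h))
print(round(err, 3))
```

Because the binary input has unit variance and a near-impulsive autocorrelation, the input-output cross-correlation recovers the impulse response up to random noise, whose variance shrinks with the record length.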

  1. A note on errors and signal to noise ratio of binary cross-correlation measurements of system impulse response

    Energy Technology Data Exchange (ETDEWEB)

    Cummins, J D [Dynamics Group, Control and Instrumentation Division, Atomic Energy Establishment, Winfrith, Dorchester, Dorset (United Kingdom)

    1964-02-15

    The sources of error in the measurement of system impulse response using test signals of a discrete interval binary nature are considered. Methods of correcting for the errors due to theoretical imperfections are given and the variance of the estimate of the system impulse response due to random noise is determined. Several topics related to the main topic are considered e.g. determination of a theoretical model from experimental results. General conclusions about the magnitude of the errors due to the theoretical imperfections are made. (author)

  2. Error Correction of Measured Unstructured Road Profiles Based on Accelerometer and Gyroscope Data

    Directory of Open Access Journals (Sweden)

    Jinhua Han

    2017-01-01

    Full Text Available This paper describes a noncontact acquisition system composed of several time-synchronized laser height sensors, accelerometers, a gyroscope, and so forth, used to collect road profiles while a vehicle rides on unstructured roads. A method of correcting the road profiles based on the accelerometer and gyroscope data is proposed to eliminate the adverse impacts of vehicle vibration and attitude changes. Because the power spectral density (PSD) of the gyro attitudes concentrates in the low-frequency band, a method called frequency division is presented to divide the road profiles into two parts: a high-frequency part and a low-frequency part. The vibration error of the road profiles is corrected using displacement data obtained by double integration of the measured acceleration data. After building a mathematical model relating gyro attitudes to road profiles, the gyro attitude signals are separated from the low-frequency road profile by a sliding-block overlap method based on correlation analysis. The accuracy and limitations of the system have been analyzed, and its validity has been verified by deploying the system on wheeled equipment measuring road profiles at a vehicle testing ground. The paper offers an accurate and practical approach to obtaining unstructured road profiles for road simulation tests.
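The double-integration step can be sketched in the frequency domain, where integrating twice amounts to dividing by −(2πf)²; the cutoff and test signal below are illustrative assumptions, not values from the paper:

```python
import numpy as np

def vibration_displacement(acc, fs, f_cut=0.5):
    """Body displacement from measured acceleration (a sketch).

    Integrates acceleration twice in the frequency domain and high-pass
    filters the result to suppress the low-frequency drift that double
    integration amplifies; the cutoff f_cut is a tuning assumption.
    """
    n = len(acc)
    freq = np.fft.rfftfreq(n, d=1.0 / fs)
    spec = np.fft.rfft(acc)
    disp = np.zeros_like(spec)
    band = freq > f_cut
    # Double integration divides each component by (i*2*pi*f)^2 = -(2*pi*f)^2
    disp[band] = spec[band] / -((2 * np.pi * freq[band]) ** 2)
    return np.fft.irfft(disp, n)

# Sinusoidal body vibration: d(t) = A*sin(w*t), so acc = -A*w^2*sin(w*t)
fs, f0, amp = 200.0, 3.125, 0.01
t = np.arange(4096) / fs
acc = -amp * (2 * np.pi * f0) ** 2 * np.sin(2 * np.pi * f0 * t)
d_est = vibration_displacement(acc, fs)
err = np.max(np.abs(d_est - amp * np.sin(2 * np.pi * f0 * t)))
print(f"max error {err:.2e}")
```

Subtracting such a displacement estimate from the laser-height signal is the kind of vibration correction the abstract describes, with the gyro-attitude separation handled separately in the low-frequency band.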

  3. System for simultaneously measuring 6DOF geometric motion errors using a polarization maintaining fiber-coupled dual-frequency laser.

    Science.gov (United States)

    Cui, Cunxing; Feng, Qibo; Zhang, Bin; Zhao, Yuqiong

    2016-03-21

    A novel method for simultaneously measuring six degree-of-freedom (6DOF) geometric motion errors is proposed in this paper, and the corresponding measurement instrument is developed. Simultaneous measurement of 6DOF geometric motion errors using a polarization-maintaining fiber-coupled dual-frequency laser is accomplished for the first time, to the best of the authors' knowledge. Orthogonally linearly polarized dual-frequency laser beams were adopted as the measuring datum. Positioning error measurement was achieved by heterodyne interferometry, and the other 5DOF geometric motion errors were obtained by fiber collimation measurement. A series of experiments was performed to verify the effectiveness of the developed instrument. The experimental results showed that the stability and accuracy of the positioning error measurement are 31.1 nm and 0.5 μm, respectively. For the straightness error measurements, the stability and resolution are 60 and 40 nm, respectively, and the maximum deviation of repeatability is ±0.15 μm in the x direction and ±0.1 μm in the y direction. For pitch and yaw measurements, the stabilities are 0.03″ and 0.04″, the maximum deviations of repeatability are ±0.18″ and ±0.24″, and the accuracies are 0.4″ and 0.35″, respectively. The stability and resolution of the roll measurement are 0.29″ and 0.2″, respectively, and the accuracy is 0.6″.

  4. Reduction of determinate errors in mass bias-corrected isotope ratios measured using a multi-collector plasma mass spectrometer

    International Nuclear Information System (INIS)

    Doherty, W.

    2015-01-01

    A nebulizer-centric instrument response function model of the plasma mass spectrometer was combined with a signal drift model, and the result was used to identify the causes of the non-spectroscopic determinate errors remaining in mass bias-corrected Pb isotope ratios (Tl as internal standard) measured using a multi-collector plasma mass spectrometer. Model calculations, confirmed by measurement, show that the detectable time-dependent errors are a result of the combined effect of signal drift and differences in the coordinates of the Pb and Tl response function maxima (horizontal offset effect). If there are no horizontal offsets, then the mass bias-corrected isotope ratios are approximately constant in time. In the absence of signal drift, the response surface curvature and horizontal offset effects are responsible for proportional errors in the mass bias-corrected isotope ratios. The proportional errors will be different for different analyte isotope ratios and different at every instrument operating point. Consequently, mass bias coefficients calculated using different isotope ratios are not necessarily equal. The error analysis based on the combined model provides strong justification for recommending a three step correction procedure (mass bias correction, drift correction and a proportional error correction, in that order) for isotope ratio measurements using a multi-collector plasma mass spectrometer
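Mass bias correction with an internal standard is commonly done with the exponential law; the sketch below applies it to invented Tl/Pb ratios (the paper analyzes the errors that remain after such a correction, so this only illustrates the first step of the recommended procedure):

```python
import math

# Exponential-law mass bias correction with a Tl internal standard
# (a textbook sketch; isotope masses from standard tables, measured
# ratios hypothetical).
M_TL205, M_TL203 = 204.974427, 202.972344
M_PB208, M_PB206 = 207.976652, 205.974465

TL_TRUE = 2.3871            # accepted 205Tl/203Tl ratio
tl_measured = 2.4205        # hypothetical measured ratio

# Solve measured = true * (m_heavy / m_light)^beta for beta
beta = math.log(tl_measured / TL_TRUE) / math.log(M_TL205 / M_TL203)

# Apply the same mass bias coefficient to an analyte Pb ratio
pb_measured = 2.1000        # hypothetical measured 208Pb/206Pb
pb_corrected = pb_measured / (M_PB208 / M_PB206) ** beta
print(round(beta, 3), round(pb_corrected, 4))
```

The abstract's point is that a single beta derived from Tl is not exactly right for every Pb ratio, which motivates the drift and proportional-error corrections applied after this step.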

  5. Modifying Spearman's Attenuation Equation to Yield Partial Corrections for Measurement Error--With Application to Sample Size Calculations

    Science.gov (United States)

    Nicewander, W. Alan

    2018-01-01

    Spearman's correction for attenuation (measurement error) corrects a correlation coefficient for measurement errors in either or both of two variables, and follows from the assumptions of classical test theory. Spearman's equation removes all measurement error from a correlation coefficient, which translates into "increasing the reliability of…
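Spearman's equation, and the partial correction discussed here, are one-liners; in the sketch below the reliabilities and the observed correlation are arbitrary example values:

```python
import math

def disattenuate(r_xy, rel_x=1.0, rel_y=1.0):
    """Spearman's correction for attenuation.

    Pass both reliabilities for the full correction, or only one of
    them for a partial correction that removes measurement error from
    just one of the two variables (the modification the abstract
    discusses).
    """
    return r_xy / math.sqrt(rel_x * rel_y)

r_observed, rel_x, rel_y = 0.30, 0.64, 0.81
print(disattenuate(r_observed, rel_x, rel_y))   # full correction
print(disattenuate(r_observed, rel_x=rel_x))    # correct x only
```

Leaving one reliability at 1.0 is what makes the correction partial: only the error in the other variable is removed, so the corrected correlation lies between the observed and fully disattenuated values.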

  6. The Thirty Gigahertz Instrument Receiver for the QUIJOTE Experiment: Preliminary Polarization Measurements and Systematic-Error Analysis

    Directory of Open Access Journals (Sweden)

    Francisco J. Casas

    2015-08-01

    Full Text Available This paper presents preliminary polarization measurements and systematic-error characterization of the Thirty Gigahertz Instrument receiver developed for the QUIJOTE experiment. The instrument has been designed to measure the polarization of Cosmic Microwave Background radiation from the sky, obtaining the Q, U, and I Stokes parameters of the incoming signal simultaneously. Two kinds of linearly polarized input signals have been used as excitations in the polarimeter measurement tests in the laboratory; these show consistent results in terms of the Stokes parameters obtained. A measurement-based systematic-error characterization technique has been used in order to determine the possible sources of instrumental errors and to assist in the polarimeter calibration process.

  7. Estimation of perspective errors in 2D2C-PIV measurements for 3D concentrated vortices

    Science.gov (United States)

    Ma, Bao-Feng; Jiang, Hong-Gang

    2018-06-01

    Two-dimensional planar PIV (2D2C) is still extensively employed in flow measurement owing to its availability and reliability, although more advanced PIVs have been developed. It has long been recognized that there exist perspective errors in velocity fields when employing the 2D2C PIV to measure three-dimensional (3D) flows, the magnitude of which depends on out-of-plane velocity and geometric layouts of the PIV. For a variety of vortex flows, however, the results are commonly represented by vorticity fields, instead of velocity fields. The present study indicates that the perspective error in vorticity fields relies on gradients of the out-of-plane velocity along a measurement plane, instead of the out-of-plane velocity itself. More importantly, an estimation approach to the perspective error in 3D vortex measurements was proposed based on a theoretical vortex model and an analysis on physical characteristics of the vortices, in which the gradient of out-of-plane velocity is uniquely determined by the ratio of the maximum out-of-plane velocity to maximum swirling velocity of the vortex; meanwhile, the ratio has upper limits for naturally formed vortices. Therefore, if the ratio is imposed with the upper limits, the perspective error will only rely on the geometric layouts of PIV that are known in practical measurements. Using this approach, the upper limits of perspective errors of a concentrated vortex can be estimated for vorticity and other characteristic quantities of the vortex. In addition, the study indicates that the perspective errors in vortex location, vortex strength, and vortex radius can be all zero for axisymmetric vortices if they are calculated by proper methods. The dynamic mode decomposition on an oscillatory vortex indicates that the perspective errors of each DMD mode are also only dependent on the gradient of out-of-plane velocity if the modes are represented by vorticity.
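The geometric origin of the perspective error can be sketched with a pinhole-camera approximation (the formula below is the textbook first-order estimate, not the paper's vortex-model bound; the geometry values are invented):

```python
def perspective_error(w, x, y, z_cam):
    """In-plane velocity error induced by out-of-plane velocity w.

    First-order pinhole-camera estimate: a particle at in-plane
    position (x, y) relative to the optical axis, viewed from distance
    z_cam, leaks w * (x / z_cam, y / z_cam) into the measured in-plane
    velocity components.
    """
    return w * x / z_cam, w * y / z_cam

# Out-of-plane velocity 1 m/s, point 0.1 m off-axis, camera 1 m away
du, dv = perspective_error(1.0, 0.1, 0.05, 1.0)
print(du, dv)
```

The error grows linearly with off-axis distance and with the out-of-plane velocity, which is why bounding the ratio of out-of-plane to swirling velocity lets the paper bound the vorticity error from the geometry alone.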

  8. Mixtures of Berkson and classical covariate measurement error in the linear mixed model: Bias analysis and application to a study on ultrafine particles.

    Science.gov (United States)

    Deffner, Veronika; Küchenhoff, Helmut; Breitner, Susanne; Schneider, Alexandra; Cyrys, Josef; Peters, Annette

    2018-03-13

The ultrafine particle measurements in the Augsburger Umweltstudie, a panel study conducted in Augsburg, Germany, exhibit measurement error from various sources. Measurements from mobile devices show classical, possibly individual-specific, measurement error; Berkson-type error, which may also vary individually, occurs if measurements from fixed monitoring stations are used. The combination of fixed site and individual exposure measurements results in a mixture of the two error types. We extended existing bias analysis approaches to linear mixed models with a complex error structure including individual-specific error components, autocorrelated errors, and a mixture of classical and Berkson error. Theoretical considerations and simulation results show that autocorrelation may severely change the attenuation of the effect estimates. Furthermore, unbalanced designs and the inclusion of confounding variables influence the degree of attenuation. Bias correction with the method of moments using data with mixture measurement error partially yielded better results compared with using incomplete data with classical error. Confidence intervals (CIs) based on the delta method achieved better coverage probabilities than those based on bootstrap samples. Moreover, we present the application of these new methods to heart rate measurements within the Augsburger Umweltstudie: the corrected effect estimates were slightly higher than their naive equivalents. The substantial measurement error of ultrafine particle measurements has little impact on the results. The developed methodology is generally applicable to longitudinal data with measurement error. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
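The qualitative difference between the two error types can be sketched with a minimal regression simulation (synthetic data, not the Augsburg panel): classical error attenuates the slope by the reliability ratio λ = var(x)/(var(x)+var(u)), while pure Berkson error leaves the slope unbiased.

```python
import numpy as np

rng = np.random.default_rng(42)
n, beta = 100_000, 2.0
x = rng.normal(0.0, 1.0, n)                 # true exposure, variance 1
y = beta * x + rng.normal(0.0, 1.0, n)

def slope(w, resp):
    return np.cov(w, resp)[0, 1] / np.var(w)

# Classical error: we observe w = x + u, so the slope shrinks by
# lambda = var(x) / (var(x) + var(u)) = 0.5 here.
w_classical = x + rng.normal(0.0, 1.0, n)

# Berkson error: the assigned value w is fixed and the truth scatters
# around it, so the regression of y on w stays unbiased.
w_berkson = rng.normal(0.0, 1.0, n)
y_berkson = beta * (w_berkson + rng.normal(0.0, 1.0, n)) + rng.normal(0.0, 1.0, n)

print(slope(w_classical, y))         # ~1.0: attenuated from 2.0
print(slope(w_berkson, y_berkson))   # ~2.0: unbiased
```

Mixtures of the two types, individual-specific components, and autocorrelation complicate this picture, which is what the paper's bias analysis addresses.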

  9. Error in interpreting field chlorophyll fluorescence measurements: heat gain from solar radiation

    International Nuclear Information System (INIS)

    Marler, T.E.; Lawton, P.D.

    1994-01-01

Temperature and chlorophyll fluorescence characteristics were determined on leaves of various horticultural species following a dark adaptation period where dark adaptation cuvettes were shielded from or exposed to solar radiation. In one study, temperature of Swietenia mahagoni (L.) Jacq. leaflets within cuvettes increased from approximately 36 °C to approximately 50 °C during a 30-minute exposure to solar radiation. Alternatively, when the leaflets and cuvettes were shielded from solar radiation, leaflet temperature declined to 33 °C in 10 to 15 minutes. In a second study, 16 horticultural species exhibited a lower variable-to-maximum fluorescence ratio (Fv:Fm) when cuvettes were exposed to solar radiation during the 30-minute dark adaptation than when cuvettes were shielded. In a third study with S. mahagoni, the influence of self-shielding the cuvettes by wrapping them with white tape, white paper, or aluminum foil on temperature and fluorescence was compared to exposing or shielding the entire leaflet and cuvette. All of the shielding methods reduced leaflet temperature and increased the Fv:Fm ratio compared to leaving cuvettes exposed. These results indicate that heat stress from direct exposure to solar radiation is a potential source of error when interpreting chlorophyll fluorescence measurements on intact leaves. Methods for moderating or minimizing radiation interception during dark adaptation are recommended. (author)

  10. Local measurement of error field using naturally rotating tearing mode dynamics in EXTRAP T2R

    Science.gov (United States)

    Sweeney, R. M.; Frassinetti, L.; Brunsell, P.; Fridström, R.; Volpe, F. A.

    2016-12-01

    An error field (EF) detection technique using the amplitude modulation of a naturally rotating tearing mode (TM) is developed and validated in the EXTRAP T2R reversed field pinch. The technique was used to identify intrinsic EFs of m/n  =  1/-12, where m and n are the poloidal and toroidal mode numbers. The effect of the EF and of a resonant magnetic perturbation (RMP) on the TM, in particular on amplitude modulation, is modeled with a first-order solution of the modified Rutherford equation. In the experiment, the TM amplitude is measured as a function of the toroidal angle as the TM rotates rapidly in the presence of an unknown EF and a known, deliberately applied RMP. The RMP amplitude is fixed while the toroidal phase is varied from one discharge to the other, completing a full toroidal scan. Using three such scans with different RMP amplitudes, the EF amplitude and phase are inferred from the phases at which the TM amplitude maximizes. The estimated EF amplitude is consistent with other estimates (e.g. based on the best EF-cancelling RMP, resulting in the fastest TM rotation). A passive variant of this technique is also presented, where no RMPs are applied, and the EF phase is deduced.

  11. Spoken Word Recognition Errors in Speech Audiometry: A Measure of Hearing Performance?

    Directory of Open Access Journals (Sweden)

    Martine Coene

    2015-01-01

    Full Text Available This report provides a detailed analysis of incorrect responses from an open-set spoken word-repetition task which is part of a Dutch speech audiometric test battery. Single-consonant confusions were analyzed from 230 normal hearing participants in terms of the probability of choice of a particular response on the basis of acoustic-phonetic, lexical, and frequency variables. The results indicate that consonant confusions are better predicted by lexical knowledge than by acoustic properties of the stimulus word. A detailed analysis of the transmission of phonetic features indicates that “voicing” is best preserved whereas “manner of articulation” yields most perception errors. As consonant confusion matrices are often used to determine the degree and type of a patient’s hearing impairment, to predict a patient’s gain in hearing performance with hearing devices and to optimize the device settings in view of maximum output, the observed findings are highly relevant for the audiological practice. Based on our findings, speech audiometric outcomes provide a combined auditory-linguistic profile of the patient. The use of confusion matrices might therefore not be the method best suited to measure hearing performance. Ideally, they should be complemented by other listening task types that are known to have less linguistic bias, such as phonemic discrimination.

  12. Measurement-based analysis of error latency. [in computer operating system

    Science.gov (United States)

    Chillarege, Ram; Iyer, Ravishankar K.

    1987-01-01

    This paper demonstrates a practical methodology for the study of error latency under a real workload. The method is illustrated with sampled data on the physical memory activity, gathered by hardware instrumentation on a VAX 11/780 during the normal workload cycle of the installation. These data are used to simulate fault occurrence and to reconstruct the error discovery process in the system. The technique provides a means to study the system under different workloads and for multiple days. An approach to determine the percentage of undiscovered errors is also developed and a verification of the entire methodology is performed. This study finds that the mean error latency, in the memory containing the operating system, varies by a factor of 10 to 1 (in hours) between the low and high workloads. It is found that of all errors occurring within a day, 70 percent are detected in the same day, 82 percent within the following day, and 91 percent within the third day. The increase in failure rate due to latency is not so much a function of remaining errors but is dependent on whether or not there is a latent error.

  13. Impact of shrinking measurement error budgets on qualification metrology sampling and cost

    Science.gov (United States)

    Sendelbach, Matthew; Sarig, Niv; Wakamoto, Koichi; Kim, Hyang Kyun (Helen); Isbester, Paul; Asano, Masafumi; Matsuki, Kazuto; Vaid, Alok; Osorio, Carmen; Archie, Chas

    2014-04-01

    When designing an experiment to assess the accuracy of a tool as compared to a reference tool, semiconductor metrologists are often confronted with the situation that they must decide on the sampling strategy before the measurements begin. This decision is usually based largely on the previous experience of the metrologist and the available resources, and not on the statistics that are needed to achieve acceptable confidence limits on the final result. This paper shows a solution to this problem, called inverse TMU analysis, by presenting statistically-based equations that allow the user to estimate the needed sampling after providing appropriate inputs, allowing him to make important "risk vs. reward" sampling, cost, and equipment decisions. Application examples using experimental data from scatterometry and critical dimension scanning electron microscope (CD-SEM) tools are used first to demonstrate how the inverse TMU analysis methodology can be used to make intelligent sampling decisions before the start of the experiment, and then to reveal why low sampling can lead to unstable and misleading results. A model is developed that can help an experimenter minimize the costs associated both with increased sampling and with making wrong decisions caused by insufficient sampling. A second cost model is described that reveals the inadequacy of current TEM (Transmission Electron Microscopy) sampling practices and the enormous costs associated with TEM sampling that is needed to provide reasonable levels of certainty in the result. These high costs reach into the tens of millions of dollars for TEM reference metrology as the measurement error budgets reach angstrom levels. The paper concludes with strategies on how to manage and mitigate these costs.
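As a rough illustration of the "decide sampling before measuring" idea (a generic normal-approximation rule of thumb, not the paper's inverse TMU equations): to estimate an error standard deviation to within a relative half-width r at a confidence multiplier z, one needs roughly n ≈ 1 + 0.5·(z/r)² samples, which shows how quickly tighter error budgets inflate sampling cost.

```python
import math

def needed_samples(rel_halfwidth, z=1.96):
    """Samples needed to pin down an error standard deviation to within
    +/- rel_halfwidth (relative), at confidence multiplier z (1.96 ~ 95%).
    Generic normal-approximation rule, not the paper's formulas."""
    return math.ceil(1 + 0.5 * (z / rel_halfwidth) ** 2)

print(needed_samples(0.20))   # ~50 samples for a +/-20% half-width
print(needed_samples(0.10))   # ~194 samples for +/-10%
```

Halving the acceptable uncertainty roughly quadruples the sampling, which is the "risk vs. reward" trade-off the paper quantifies, including for costly TEM reference metrology.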

  14. A permutation test to analyse systematic bias and random measurement errors of medical devices via boosting location and scale models.

    Science.gov (United States)

    Mayr, Andreas; Schmid, Matthias; Pfahlberg, Annette; Uter, Wolfgang; Gefeller, Olaf

    2017-06-01

    Measurement errors of medico-technical devices can be separated into systematic bias and random error. We propose a new method to address both simultaneously via generalized additive models for location, scale and shape (GAMLSS) in combination with permutation tests. More precisely, we extend a recently proposed boosting algorithm for GAMLSS to provide a test procedure to analyse potential device effects on the measurements. We carried out a large-scale simulation study to provide empirical evidence that our method is able to identify possible sources of systematic bias as well as random error under different conditions. Finally, we apply our approach to compare measurements of skin pigmentation from two different devices in an epidemiological study.
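A stripped-down version of the idea can be sketched as follows (synthetic data; the paper's actual procedure boosts GAMLSS models rather than comparing simple statistics): a permutation test on a location statistic probes systematic bias between devices, and the same test on a scale statistic probes differences in random error.

```python
import numpy as np

rng = np.random.default_rng(0)
device_a = rng.normal(50.0, 2.0, 200)   # reference device (simulated)
device_b = rng.normal(51.0, 3.0, 200)   # hypothetical device: biased and noisier

def perm_pvalue(a, b, stat, n_perm=2000):
    """Two-sided permutation p-value for a difference in stat between groups."""
    observed = abs(stat(a) - stat(b))
    pooled = np.concatenate([a, b])
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        if abs(stat(pooled[:len(a)]) - stat(pooled[len(a):])) >= observed:
            hits += 1
    return hits / n_perm

p_location = perm_pvalue(device_a, device_b, np.mean)  # systematic bias?
p_scale = perm_pvalue(device_a, device_b, np.std)      # different random error?
print(p_location, p_scale)   # both small here: the devices differ in bias and spread
```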

  15. Prioritising interventions against medication errors

    DEFF Research Database (Denmark)

    Lisby, Marianne; Pape-Larsen, Louise; Sørensen, Ann Lykkegaard

Abstract Authors: Lisby M, Larsen LP, Soerensen AL, Nielsen LP, Mainz J Title: Prioritising interventions against medication errors – the importance of a definition Objective: To develop and test a restricted definition of medication errors across health care settings in Denmark Methods: Medication errors constitute a major quality and safety problem in modern healthcare. However, far from all are clinically important. The prevalence of medication errors ranges from 2-75%, indicating a global problem in defining and measuring these [1]. New cut-off levels focusing on the clinical impact of medication errors are therefore needed. Development of definition: A definition of medication errors including an index of error types for each stage in the medication process was developed from existing terminology and through a modified Delphi-process in 2008. The Delphi panel consisted of 25 interdisciplinary...

  16. Systematic Error Study for ALICE charged-jet v2 Measurement

    Energy Technology Data Exchange (ETDEWEB)

    Heinz, M. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Soltz, R. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2017-07-18

We study the treatment of systematic errors in the determination of v2 for charged jets in √sNN = 2.76 TeV Pb-Pb collisions by the ALICE Collaboration. Working with the reported values and errors for the 0-5% centrality data we evaluate the χ2 according to the formulas given for the statistical and systematic errors, where the latter are separated into correlated and shape contributions. We reproduce both the χ2 and p-values relative to a null (zero) result. We then re-cast the systematic errors into an equivalent covariance matrix and obtain identical results, demonstrating that the two methods are equivalent.
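The recasting step can be sketched with toy numbers (illustrative values, not the ALICE measurements): a fully correlated systematic error σ_c enters the covariance matrix as an outer product added to the diagonal statistical terms, and the χ² against the null hypothesis follows by solving the resulting linear system.

```python
import numpy as np

v = np.array([0.05, 0.07, 0.04])         # toy v2 values (not ALICE data)
stat = np.array([0.010, 0.012, 0.011])   # statistical errors
sigma_c = np.array([0.02, 0.02, 0.02])   # fully correlated systematic errors

# Recast the correlated systematic into a covariance matrix ...
C = np.diag(stat**2) + np.outer(sigma_c, sigma_c)
# ... and evaluate chi^2 against the null (zero) hypothesis
chi2 = v @ np.linalg.solve(C, v)
print(chi2)
```

Shape contributions would add further structured terms to C; the equivalence shown in the paper is that this matrix form reproduces the χ² obtained from the published error formulas.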

  17. Measuring the relationship between interruptions, multitasking and prescribing errors in an emergency department: a study protocol.

    Science.gov (United States)

    Raban, Magdalena Z; Walter, Scott R; Douglas, Heather E; Strumpman, Dana; Mackenzie, John; Westbrook, Johanna I

    2015-10-13

Interruptions and multitasking are frequent in clinical settings, and have been shown in the cognitive psychology literature to affect performance, increasing the risk of error. However, comparatively less is known about their impact on errors in clinical work. This study will assess the relationship between prescribing errors, interruptions and multitasking in an emergency department (ED) using direct observations and chart review. The study will be conducted in an ED of a 440-bed teaching hospital in Sydney, Australia. Doctors will be shadowed in close proximity by observers for 2 h time intervals while they are working on day shift (between 0800 and 1800). Time-stamped data on tasks, interruptions and multitasking will be recorded on a handheld computer using the validated Work Observation Method by Activity Timing (WOMBAT) tool. The prompts leading to interruptions and multitasking will also be recorded. When doctors prescribe medication, the type of chart and the chart sections written on, along with the patient's medical record number (MRN), will be recorded. A clinical pharmacist will access patient records and assess the medication orders for prescribing errors. The prescribing error rate will be calculated per prescribing task and is defined as the number of errors divided by the number of medication orders written during the prescribing task. The association between prescribing error rates, and rates of prompts, interruptions and multitasking will be assessed using statistical modelling. Ethics approval has been obtained from the hospital research ethics committee. Eligible doctors will be provided with written information sheets and written consent will be obtained if they agree to participate. Doctor details and MRNs will be kept separate from the data on prescribing errors, and will not appear in the final data set for analysis. Study results will be disseminated in publications and feedback to the ED. Published by the BMJ Publishing Group Limited. For permission

  18. Z-boson-exchange contributions to the luminosity measurements at LEP and c.m.s.-energy-dependent theoretical errors

    International Nuclear Information System (INIS)

    Beenakker, W.; Martinez, M.; Pietrzyk, B.

    1995-02-01

    The precision of the calculation of Z-boson-exchange contributions to the luminosity measurements at LEP is studied for both the first and second generation of LEP luminosity detectors. It is shown that the theoretical errors associated with these contributions are sufficiently small so that the high-precision measurements at LEP, based on the second generation of luminosity detectors, are not limited. The same is true for the c.m.s.-energy-dependent theoretical errors of the Z line-shape formulae. (author) 19 refs.; 3 figs.; 7 tabs

  19. A measurement strategy and an error-compensation model for the on-machine laser measurement of large-scale free-form surfaces

    International Nuclear Information System (INIS)

    Li, Bin; Li, Feng; Liu, Hongqi; Cai, Hui; Mao, Xinyong; Peng, Fangyu

    2014-01-01

    This study presents a novel measurement strategy and an error-compensation model for the measurement of large-scale free-form surfaces in on-machine laser measurement systems. To improve the measurement accuracy, the effects of the scan depth, surface roughness, incident angle and azimuth angle on the measurement results were investigated experimentally, and a practical measurement strategy considering the position and orientation of the sensor is presented. Also, a semi-quantitative model based on geometrical optics is proposed to compensate for the measurement error associated with the incident angle. The normal vector of the measurement point is determined using a cross-curve method from the acquired surface data. Then, the azimuth angle and incident angle are calculated to inform the measurement strategy and error-compensation model, respectively. The measurement strategy and error-compensation model are verified through the measurement of a large propeller blade on a heavy machine tool in a factory environment. The results demonstrate that the strategy and the model are effective in increasing the measurement accuracy. (paper)
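For illustration (a hypothetical helper, not the paper's code): once the normal vector of a measurement point has been estimated, e.g. via the cross-curve method, the incident angle fed into the error-compensation model is simply the angle between the normal and the laser beam direction.

```python
import numpy as np

def incident_angle_deg(normal, beam):
    """Angle between a surface normal and the laser beam direction, in degrees.
    (Illustrative helper; names and interface are assumptions.)"""
    n = normal / np.linalg.norm(normal)
    d = beam / np.linalg.norm(beam)
    return np.degrees(np.arccos(abs(float(n @ d))))

# Normal incidence: beam anti-parallel to the surface normal
print(incident_angle_deg(np.array([0.0, 0.0, 1.0]), np.array([0.0, 0.0, -1.0])))
# Surface tilted 45 degrees relative to the beam
print(incident_angle_deg(np.array([0.0, 1.0, 1.0]), np.array([0.0, 0.0, -1.0])))
```

The azimuth angle informing the measurement strategy follows similarly from the projection of the normal onto the measurement plane.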

  20. Effects of Systematic and Random Errors on the Retrieval of Particle Microphysical Properties from Multiwavelength Lidar Measurements Using Inversion with Regularization

    Science.gov (United States)

    Ramirez, Daniel Perez; Whiteman, David N.; Veselovskii, Igor; Kolgotin, Alexei; Korenskiy, Michael; Alados-Arboledas, Lucas

    2013-01-01

    In this work we study the effects of systematic and random errors on the inversion of multiwavelength (MW) lidar data using the well-known regularization technique to obtain vertically resolved aerosol microphysical properties. The software implementation used here was developed at the Physics Instrumentation Center (PIC) in Troitsk (Russia) in conjunction with the NASA/Goddard Space Flight Center. Its applicability to Raman lidar systems based on backscattering measurements at three wavelengths (355, 532 and 1064 nm) and extinction measurements at two wavelengths (355 and 532 nm) has been demonstrated widely. The systematic error sensitivity is quantified by first determining the retrieved parameters for a given set of optical input data consistent with three different sets of aerosol physical parameters. Then each optical input is perturbed by varying amounts and the inversion is repeated. Using bimodal aerosol size distributions, we find a generally linear dependence of the retrieved errors in the microphysical properties on the induced systematic errors in the optical data. For the retrievals of effective radius, number/surface/volume concentrations and fine-mode radius and volume, we find that these results are not significantly affected by the range of the constraints used in inversions. But significant sensitivity was found to the allowed range of the imaginary part of the particle refractive index. Our results also indicate that there exists an additive property for the deviations induced by the biases present in the individual optical data. This property permits the results here to be used to predict deviations in retrieved parameters when multiple input optical data are biased simultaneously as well as to study the influence of random errors on the retrievals. The above results are applied to questions regarding lidar design, in particular for the spaceborne multiwavelength lidar under consideration for the upcoming ACE mission.
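The additive property noted above holds exactly for any retrieval that is linear in the optical data, which a toy operator makes concrete (an illustrative stand-in, not the PIC inversion code): biasing several inputs at once produces the same deviation as summing the deviations from biasing each input alone.

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.normal(size=(4, 5))    # stand-in linear retrieval operator
x = rng.normal(size=5)         # unperturbed optical data

def retrieve(data):
    return A @ data

base = retrieve(x)

# Bias inputs 0, 2, and 4 jointly ...
factors = np.array([1.10, 1.00, 0.95, 1.00, 1.05])
joint_dev = retrieve(x * factors) - base

# ... and individually, then sum the deviations
individual_dev = sum(
    retrieve(x * np.where(np.arange(5) == i, f, 1.0)) - base
    for i, f in [(0, 1.10), (2, 0.95), (4, 1.05)]
)
print(np.allclose(joint_dev, individual_dev))  # True: deviations add linearly
```

For the (nonlinear) regularized inversion the property is approximate, which is what lets the single-bias results predict deviations under simultaneous biases and random errors.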

  1. Shared and unshared exposure measurement error in occupational cohort studies and their effects on statistical inference in proportional hazards models

    Science.gov (United States)

    Laurier, Dominique; Rage, Estelle

    2018-01-01

    Exposure measurement error represents one of the most important sources of uncertainty in epidemiology. When exposure uncertainty is not or only poorly accounted for, it can lead to biased risk estimates and a distortion of the shape of the exposure-response relationship. In occupational cohort studies, the time-dependent nature of exposure and changes in the method of exposure assessment may create complex error structures. When a method of group-level exposure assessment is used, individual worker practices and the imprecision of the instrument used to measure the average exposure for a group of workers may give rise to errors that are shared between workers, within workers or both. In contrast to unshared measurement error, the effects of shared errors remain largely unknown. Moreover, exposure uncertainty and magnitude of exposure are typically highest for the earliest years of exposure. We conduct a simulation study based on exposure data of the French cohort of uranium miners to compare the effects of shared and unshared exposure uncertainty on risk estimation and on the shape of the exposure-response curve in proportional hazards models. Our results indicate that uncertainty components shared within workers cause more bias in risk estimation and a more severe attenuation of the exposure-response relationship than unshared exposure uncertainty or exposure uncertainty shared between individuals. These findings underline the importance of careful characterisation and modeling of exposure uncertainty in observational studies. PMID:29408862
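A linear-model caricature of this finding (synthetic data, and ordinary least squares instead of the paper's proportional hazards models): multiplicative errors shared within a worker across years do not average out in cumulative exposure, so they attenuate the slope estimate more than independent yearly errors.

```python
import numpy as np

rng = np.random.default_rng(7)
n_workers, n_years, beta = 5_000, 20, 1.0
x = rng.lognormal(0.0, 0.5, (n_workers, n_years))        # true yearly exposures

# Multiplicative log-normal errors with mean 1 (mu = -sigma^2 / 2)
eps_unshared = rng.lognormal(-0.125, 0.5, (n_workers, n_years))
eps_shared = rng.lognormal(-0.125, 0.5, (n_workers, 1))  # one draw per worker

X_true = x.sum(axis=1)                                   # cumulative exposure
y = beta * X_true + rng.normal(0.0, 1.0, n_workers)      # linear outcome

def slope(Z):
    return np.cov(Z, y)[0, 1] / np.var(Z)

b_unshared = slope((x * eps_unshared).sum(axis=1))       # yearly errors average out
b_shared = slope((x * eps_shared).sum(axis=1))           # shared error persists
print(b_unshared, b_shared)   # shared error attenuates the slope far more
```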

  2. Low relative error in consumer-grade GPS units make them ideal for measuring small-scale animal movement patterns

    Directory of Open Access Journals (Sweden)

    Greg A. Breed

    2015-08-01

Full Text Available Consumer-grade GPS units are a staple of modern field ecology, but the relatively large error radii reported by manufacturers (up to 10 m) ostensibly preclude their utility in measuring fine-scale movement of small animals such as insects. Here we demonstrate that for data collected at fine spatio-temporal scales, these devices can produce exceptionally accurate data on step-length and movement patterns of small animals. With an understanding of the properties of GPS error and how it arises, it is possible, using a simple field protocol, to use consumer-grade GPS units to collect step-length data for the movement of small animals that introduces a median error as small as 11 cm. These small error rates were measured in controlled observations of real butterfly movement. Similar conclusions were reached using a ground-truth test track prepared with a field tape and compass and subsequently measured 20 times using the same methodology as the butterfly tracking. Median error in the ground-truth track was slightly higher than the field data, mostly between 20 and 30 cm, but even for the smallest ground-truth step (70 cm), this is still a signal-to-noise ratio of 3:1, and for steps of 3 m or more, the ratio is greater than 10:1. Such small errors relative to the movements being measured make these inexpensive units useful for measuring insect and other small animal movements on small to intermediate scales with budgets orders of magnitude lower than survey-grade units used in past studies. As an additional advantage, these units are simpler to operate, and insect or other small animal trackways can be collected more quickly than either survey-grade units or more traditional ruler/grid approaches.

  3. Sensitivity of SWOT discharge algorithm to measurement errors: Testing on the Sacramento River

    Science.gov (United States)

    Durand, Micheal; Andreadis, Konstantinos; Yoon, Yeosang; Rodriguez, Ernesto

    2013-04-01

    Scheduled for launch in 2019, the Surface Water and Ocean Topography (SWOT) satellite mission will utilize a Ka-band radar interferometer to measure river heights, widths, and slopes, globally, as well as characterize storage change in lakes and ocean surface dynamics with a spatial resolution ranging from 10 - 70 m, with temporal revisits on the order of a week. A discharge algorithm has been formulated to solve the inverse problem of characterizing river bathymetry and the roughness coefficient from SWOT observations. The algorithm uses a Bayesian Markov Chain estimation approach, treats rivers as sets of interconnected reaches (typically 5 km - 10 km in length), and produces best estimates of river bathymetry, roughness coefficient, and discharge, given SWOT observables. AirSWOT (the airborne version of SWOT) consists of a radar interferometer similar to SWOT, but mounted aboard an aircraft. AirSWOT spatial resolution will range from 1 - 35 m. In early 2013, AirSWOT will perform several flights over the Sacramento River, capturing river height, width, and slope at several different flow conditions. The Sacramento River presents an excellent target given that the river includes some stretches heavily affected by management (diversions, bypasses, etc.). AirSWOT measurements will be used to validate SWOT observation performance, but are also a unique opportunity for testing and demonstrating the capabilities and limitations of the discharge algorithm. This study uses HEC-RAS simulations of the Sacramento River to first, characterize expected discharge algorithm accuracy on the Sacramento River, and second to explore the required AirSWOT measurements needed to perform a successful inverse with the discharge algorithm. We focus on the sensitivity of the algorithm accuracy to the uncertainty in AirSWOT measurements of height, width, and slope.

  4. Response of residential electricity demand to price: The effect of measurement error

    International Nuclear Information System (INIS)

    Alberini, Anna; Filippini, Massimo

    2011-01-01

In this paper we present an empirical analysis of the residential demand for electricity using annual aggregate data at the state level for 48 US states from 1995 to 2007. Earlier literature has examined residential energy consumption at the state level using annual or monthly data, focusing on the variation in price elasticities of demand across states or regions, but has failed to recognize or address two major issues. The first is that, when fitting dynamic panel models, the lagged consumption term in the right-hand side of the demand equation is endogenous. This has resulted in potentially inconsistent estimates of the long-run price elasticity of demand. The second is that energy price is likely mismeasured. To address these issues, we estimate a dynamic partial adjustment model using the Kiviet corrected Least Square Dummy Variables (LSDV) (1995) and the Blundell-Bond (1998) estimators. We find that the long-term elasticities produced by the Blundell-Bond system GMM methods are largest, and that from the bias-corrected LSDV are greater than that from the conventional LSDV. From an energy policy point of view, the results obtained using the Blundell-Bond estimator where we instrument for price imply that a carbon tax or other price-based policy may be effective in discouraging residential electricity consumption and hence curbing greenhouse gas emissions in an electricity system mainly based on coal and gas power plants. - Research Highlights: → Updated information on price elasticities for US energy policy. → Taking into account measurement error in the price variable increases the estimated price elasticity. → Room for discouraging residential electricity consumption using price increases.
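The endogeneity of the lagged consumption term can be demonstrated with a small synthetic panel (illustrative, not the authors' state-level data): in a short panel with fixed effects, the within (LSDV) estimator of the persistence parameter is biased downward (Nickell bias), which is what motivates the bias-corrected LSDV and Blundell-Bond estimators.

```python
import numpy as np

rng = np.random.default_rng(1)
N, T, rho = 48, 13, 0.8            # 48 "states", 13 "years", true persistence
alpha = rng.normal(0.0, 1.0, N)    # state fixed effects

# Simulate a stationary AR(1) panel with fixed effects
y = np.zeros((N, T))
y[:, 0] = alpha / (1 - rho) + rng.normal(0.0, 1.0 / np.sqrt(1 - rho**2), N)
for t in range(1, T):
    y[:, t] = alpha + rho * y[:, t - 1] + rng.normal(0.0, 1.0, N)

# Within (LSDV) estimate: demean per state, regress y_t on y_{t-1}
y_lag, y_cur = y[:, :-1], y[:, 1:]
y_lag_d = y_lag - y_lag.mean(axis=1, keepdims=True)
y_cur_d = y_cur - y_cur.mean(axis=1, keepdims=True)
rho_lsdv = (y_lag_d * y_cur_d).sum() / (y_lag_d**2).sum()
print(rho_lsdv)   # noticeably below the true value of 0.8
```

Since long-run elasticities scale with 1/(1 − ρ), even a modest downward bias in the persistence estimate distorts them substantially.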

  5. A study on fatigue measurement of operators for human error prevention in NPPs

    Energy Technology Data Exchange (ETDEWEB)

    Ju, Oh Yeon; Il, Jang Tong; Meiling, Luo; Hee, Lee Young [KAERI, Daejeon (Korea, Republic of)

    2012-10-15

The identification and analysis of individual factors of operators, which are among the various causes of adverse effects on human performance, is not easy in NPPs. Individual factors for operators include work type (including shift work), environment, personality, qualification, training, education, cognition, fatigue, job stress, and workload. Research at the Finnish Institute of Occupational Health (FIOH) reported that 'burn out' (extreme fatigue) is related to alcohol-dependent habits and must be dealt with using a stress management program. The USNRC (U.S. Nuclear Regulatory Commission) developed FFD (Fitness for Duty) requirements for improving task efficiency and preventing human errors. 'Managing Fatigue' in 10CFR26 is presented as a set of requirements to control operator fatigue in NPPs. The committee explained that excessive fatigue is due to stressful work environments, working hours, shifts, sleep disorders, and unstable circadian rhythms. In addition, the International Labor Organization (ILO) developed and suggested a checklist to manage fatigue and job stress. Domestically, a systematic evaluation approach is presented in the Final Safety Analysis Report (FSAR) chapter 18, Human Factors, in the licensing process; however, it focuses mostly on interface design, such as the HMI (Human Machine Interface), rather than on individual factors. In particular, because Korea is in the process of exporting NPPs to the UAE, developing and establishing fatigue management techniques is important and urgent in order to present technical standards and FFD criteria to the UAE. It is also anticipated that the domestic regulatory body will apply the FFD program as a regulatory requirement, so preparation for that situation is required. In this paper, prior research is surveyed to identify fatigue measurement and evaluation methods for operators in high-reliability industries. Also, this study tries to review the NRC report and discuss the causal factors and

  6. Response of residential electricity demand to price: The effect of measurement error

    Energy Technology Data Exchange (ETDEWEB)

Alberini, Anna [Department of Agricultural Economics, University of Maryland (United States); Centre for Energy Policy and Economics (CEPE), ETH Zurich (Switzerland); Gibson Institute and Institute for a Sustainable World, School of Biological Sciences, Queen's University Belfast, Northern Ireland (United Kingdom); Filippini, Massimo, E-mail: mfilippini@ethz.ch [Centre for Energy Policy and Economics (CEPE), ETH Zurich (Switzerland); Department of Economics, University of Lugano (Switzerland)

    2011-09-15

In this paper we present an empirical analysis of the residential demand for electricity using annual aggregate data at the state level for 48 US states from 1995 to 2007. Earlier literature has examined residential energy consumption at the state level using annual or monthly data, focusing on the variation in price elasticities of demand across states or regions, but has failed to recognize or address two major issues. The first is that, when fitting dynamic panel models, the lagged consumption term in the right-hand side of the demand equation is endogenous. This has resulted in potentially inconsistent estimates of the long-run price elasticity of demand. The second is that energy price is likely mismeasured. To address these issues, we estimate a dynamic partial adjustment model using the Kiviet corrected Least Square Dummy Variables (LSDV) (1995) and the Blundell-Bond (1998) estimators. We find that the long-term elasticities produced by the Blundell-Bond system GMM methods are largest, and that from the bias-corrected LSDV are greater than that from the conventional LSDV. From an energy policy point of view, the results obtained using the Blundell-Bond estimator where we instrument for price imply that a carbon tax or other price-based policy may be effective in discouraging residential electricity consumption and hence curbing greenhouse gas emissions in an electricity system mainly based on coal and gas power plants. - Research Highlights: > Updated information on price elasticities for US energy policy. > Taking into account measurement error in the price variable increases the estimated price elasticity. > Room for discouraging residential electricity consumption using price increases.

  7. A study on fatigue measurement of operators for human error prevention in NPPs

    International Nuclear Information System (INIS)

    Ju, Oh Yeon; Il, Jang Tong; Meiling, Luo; Hee, Lee Young

    2012-01-01

    The identification and analysis of individual factors of operators, one of the various causes of degraded human performance, is not easy in NPPs. Individual factors for operators include work type (including shift work), environment, personality, qualification, training, education, cognition, fatigue, job stress, and workload. Research at the Finnish Institute of Occupational Health (FIOH) reported that 'burnout' (extreme fatigue) is related to alcohol-dependent habits and must be dealt with through a stress management program. The USNRC (U.S. Nuclear Regulatory Commission) developed the FFD (Fitness for Duty) program to improve task efficiency and prevent human errors, and 'Managing Fatigue' in 10CFR26 sets out requirements for controlling operator fatigue in NPPs. The committee explained that excessive fatigue is due to stressful work environments, working hours, shifts, sleep disorders, and unstable circadian rhythms. In addition, the International Labour Organization (ILO) developed and suggested a checklist to manage fatigue and job stress. Domestically, a systematic evaluation approach is presented in the Final Safety Analysis Report (FSAR) chapter 18, Human Factors, in the licensing process; however, it focuses mostly on interface design such as HMI (Human Machine Interface), not on individual factors. In particular, because our country is in the process of exporting NPPs to the UAE, developing and establishing fatigue management techniques is important and urgent in order to present technical standards and FFD criteria to the UAE. It is also anticipated that the domestic regulatory body will apply the FFD program as a regulatory requirement, so preparation for that situation is required. In this paper, prior research is surveyed to identify fatigue measurement and evaluation methods for operators in high-reliability industries. This study also reviews the NRC report and discusses the causal factors and management

  8. Einstein's error

    International Nuclear Information System (INIS)

    Winterflood, A.H.

    1980-01-01

    In discussing Einstein's Special Relativity theory, it is claimed that the theory violates the principle of relativity itself and that an anomalous sign appears in the mathematical factor which transforms one inertial observer's measurements into those of another. The apparent source of this error is discussed. Having corrected the error, a new theory, called Observational Kinematics, is introduced to replace Einstein's Special Relativity. (U.K.)

  9. Method of high precision interval measurement in pulse laser ranging system

    Science.gov (United States)

    Wang, Zhen; Lv, Xin-yuan; Mao, Jin-jin; Liu, Wei; Yang, Dong

    2013-09-01

    Laser ranging has the advantages of high measurement precision, fast measurement speed, no need for cooperative targets, and strong resistance to electromagnetic interference; the time-interval measurement is the key parameter affecting the performance of the whole system. The precision of a pulsed laser ranging system is determined by the precision of its time-interval measurement. This paper introduces the basic structure of a laser ranging system and establishes a method of high-precision time-interval measurement for pulsed laser ranging. Based on an analysis of the factors that affect range precision, a rising-edge discriminator is adopted to produce the timing marks for start-stop time discrimination, and a high-precision interval measurement system based on the TDC-GP2 time-to-digital converter and a TMS320F2812 DSP is designed to improve measurement precision. Experimental results indicate that the time-interval measurement method presented here achieves higher range accuracy. Compared with traditional time-interval measurement systems, the method simplifies the system design, reduces the influence of bad weather conditions, and satisfies the requirements of low cost and miniaturization.
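    The core arithmetic behind pulsed time-of-flight ranging is simple: the one-way range is half the round-trip interval times the speed of light. A brief sketch, where the ~65 ps resolution is an assumption typical of TDC-GP2-class time-to-digital converters rather than a figure from the paper:

```python
C = 299_792_458.0  # speed of light in vacuum, m/s

def range_from_interval(dt_seconds: float) -> float:
    """Convert a measured round-trip time interval to one-way range (m)."""
    return C * dt_seconds / 2.0

# a 1 microsecond round trip corresponds to roughly 150 m of range
r = range_from_interval(1e-6)

# assumed ~65 ps TDC resolution translates to roughly 1 cm of range resolution
LSB = 65e-12
range_resolution = range_from_interval(LSB)
```

    This makes the dependence explicit: any improvement in time-interval resolution maps directly, at half the speed of light, into range resolution.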

  10. Mass measurement errors of Fourier-transform mass spectrometry (FTMS): distribution, recalibration, and application.

    Science.gov (United States)

    Zhang, Jiyang; Ma, Jie; Dou, Lei; Wu, Songfeng; Qian, Xiaohong; Xie, Hongwei; Zhu, Yunping; He, Fuchu

    2009-02-01

    The hybrid linear trap quadrupole Fourier-transform (LTQ-FT) ion cyclotron resonance mass spectrometer, an instrument with high accuracy and resolution, is widely used in the identification and quantification of peptides and proteins. However, time-dependent errors in the system may lead to deterioration of the accuracy of these instruments, negatively influencing the determination of the mass error tolerance (MET) in database searches. Here, a comprehensive discussion of LTQ-FT precursor ion mass error is provided. On the basis of an investigation of the mass error distribution, we propose an improved recalibration formula and introduce a new tool, FTDR (Fourier-transform data recalibration), that employs a graphical user interface (GUI) for automatic calibration. It was found that the calibration could adjust the mass error distribution to more closely approximate a normal distribution and reduce the standard deviation (SD). Consequently, we present a new strategy, LDSF (Large MET database search and small MET filtration), for database-search MET specification and validation of database search results. As the name implies, a large-MET database search is conducted and the search results are then filtered using the statistical MET estimated from high-confidence results. By applying this strategy to a standard protein data set and a complex data set, we demonstrate that LDSF can significantly improve the sensitivity of the result validation procedure.
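    The recalibrate-then-filter workflow can be sketched as follows. The linear-in-m/z error model and the mean ± k·SD cutoff are simplifying assumptions for illustration, not the paper's exact recalibration formula:

```python
import numpy as np

def ppm_error(measured, theoretical):
    """Relative mass error in parts per million."""
    return (measured - theoretical) / theoretical * 1e6

def recalibrate(measured, theoretical):
    """Remove a systematic mass-error trend.
    Assumes (for illustration) the systematic error is linear in m/z."""
    err = ppm_error(measured, theoretical)
    slope, intercept = np.polyfit(measured, err, 1)
    predicted = slope * measured + intercept
    return measured / (1.0 + predicted / 1e6)

def met_filter(errors_ppm, k=4.0):
    """LDSF-style small-MET filtration: keep hits within mean +/- k*SD
    of the empirical error distribution from high-confidence results."""
    mu, sd = errors_ppm.mean(), errors_ppm.std()
    return np.abs(errors_ppm - mu) <= k * sd

# synthetic demonstration: masses with a drifting systematic error
rng = np.random.default_rng(0)
theoretical = np.linspace(400.0, 2000.0, 300)
true_err = 3.0 + 0.002 * theoretical + rng.normal(0.0, 0.3, 300)  # ppm
measured = theoretical * (1.0 + true_err / 1e6)

corrected = recalibrate(measured, theoretical)
before = np.abs(ppm_error(measured, theoretical)).mean()
after = np.abs(ppm_error(corrected, theoretical)).mean()  # much smaller
```

    After recalibration the residual errors are roughly centered and normal, so a small statistical MET estimated from them can safely filter a large-MET search.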

  11. Fitting a Bivariate Measurement Error Model for Episodically Consumed Dietary Components

    KAUST Repository

    Zhang, Saijuan

    2011-01-06

    There has been great public health interest in estimating usual, i.e., long-term average, intake of episodically consumed dietary components that are not consumed daily by everyone, e.g., fish, red meat and whole grains. Short-term measurements of episodically consumed dietary components have zero-inflated skewed distributions. So-called two-part models have been developed for such data in order to correct for measurement error due to within-person variation and to estimate the distribution of usual intake of the dietary component in the univariate case. However, there is arguably much greater public health interest in the usual intake of an episodically consumed dietary component adjusted for energy (caloric) intake, e.g., ounces of whole grains per 1000 kilocalories, which reflects usual dietary composition and adjusts for different total amounts of caloric intake. Because of this public health interest, it is important to have models to fit such data, and it is important that the model-fitting methods can be applied to all episodically consumed dietary components. We have recently developed a nonlinear mixed effects model (Kipnis et al., 2010) and have fit it by maximum likelihood using nonlinear mixed effects programs and methodology (the SAS NLMIXED procedure). Maximum likelihood fitting of such a nonlinear mixed model is generally slow because of the 3-dimensional adaptive Gaussian quadrature, and there are times when the programs either fail to converge or converge to models with a singular covariance matrix. For these reasons, we develop a Markov chain Monte Carlo (MCMC) approach to fitting this model, which allows for both frequentist and Bayesian inference. There are technical challenges to developing this solution because one of the covariance matrices in the model is patterned. Our main application is to the National Institutes of Health (NIH)-AARP Diet and Health Study, where we illustrate our methods for modeling the energy-adjusted usual intake of fish and whole

  12. Fitting a Bivariate Measurement Error Model for Episodically Consumed Dietary Components

    KAUST Repository

    Zhang, Saijuan; Krebs-Smith, Susan M.; Midthune, Douglas; Perez, Adriana; Buckman, Dennis W.; Kipnis, Victor; Freedman, Laurence S.; Dodd, Kevin W.; Carroll, Raymond J

    2011-01-01

    There has been great public health interest in estimating usual, i.e., long-term average, intake of episodically consumed dietary components that are not consumed daily by everyone, e.g., fish, red meat and whole grains. Short-term measurements of episodically consumed dietary components have zero-inflated skewed distributions. So-called two-part models have been developed for such data in order to correct for measurement error due to within-person variation and to estimate the distribution of usual intake of the dietary component in the univariate case. However, there is arguably much greater public health interest in the usual intake of an episodically consumed dietary component adjusted for energy (caloric) intake, e.g., ounces of whole grains per 1000 kilocalories, which reflects usual dietary composition and adjusts for different total amounts of caloric intake. Because of this public health interest, it is important to have models to fit such data, and it is important that the model-fitting methods can be applied to all episodically consumed dietary components. We have recently developed a nonlinear mixed effects model (Kipnis et al., 2010) and have fit it by maximum likelihood using nonlinear mixed effects programs and methodology (the SAS NLMIXED procedure). Maximum likelihood fitting of such a nonlinear mixed model is generally slow because of the 3-dimensional adaptive Gaussian quadrature, and there are times when the programs either fail to converge or converge to models with a singular covariance matrix. For these reasons, we develop a Markov chain Monte Carlo (MCMC) approach to fitting this model, which allows for both frequentist and Bayesian inference. There are technical challenges to developing this solution because one of the covariance matrices in the model is patterned. Our main application is to the National Institutes of Health (NIH)-AARP Diet and Health Study, where we illustrate our methods for modeling the energy-adjusted usual intake of fish and whole
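    A two-part data-generating process of the kind these models target can be sketched as follows. The distributional choices and parameter values are illustrative, and the two person-level random effects are taken as independent here, although the full bivariate model correlates them:

```python
import numpy as np

rng = np.random.default_rng(1)
n_people, n_days = 1000, 2

# person-level random effects; independent here for simplicity
u_prob = rng.normal(0.0, 1.0, n_people)   # propensity to consume at all
u_amt = rng.normal(0.0, 0.5, n_people)    # typical amount when consuming

intake = np.zeros((n_people, n_days))
for d in range(n_days):
    # part 1: probit-style indicator of any consumption on day d
    consumed = rng.normal(0.0, 1.0, n_people) + u_prob > 0.5
    # part 2: lognormal amount given consumption, with day-to-day error
    amount = np.exp(1.0 + u_amt + rng.normal(0.0, 0.7, n_people))
    intake[:, d] = np.where(consumed, amount, 0.0)

zero_frac = (intake == 0.0).mean()   # zero inflation from part 1
positives = intake[intake > 0.0]     # right-skewed positive part
```

    The simulated short-term data reproduce the two features the abstract describes: a large point mass at zero and a skewed positive distribution, which is why a single lognormal or normal error model does not suffice.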

  13. An improved estimator for the hydration of fat-free mass from in vivo measurements subject to additive technical errors

    International Nuclear Information System (INIS)

    Kinnamon, Daniel D; Ludwig, David A; Lipshultz, Steven E; Miller, Tracie L; Lipsitz, Stuart R

    2010-01-01

    The hydration of fat-free mass, or hydration fraction (HF), is often defined as a constant body composition parameter in a two-compartment model and then estimated from in vivo measurements. We showed that the widely used estimator for the HF parameter in this model, the mean of the ratios of measured total body water (TBW) to fat-free mass (FFM) in individual subjects, can be inaccurate in the presence of additive technical errors. We then proposed a new instrumental variables estimator that accurately estimates the HF parameter in the presence of such errors. In Monte Carlo simulations, the mean of the ratios of TBW to FFM was an inaccurate estimator of the HF parameter, and inferences based on it had actual type I error rates more than 13 times the nominal 0.05 level under certain conditions. The instrumental variables estimator was accurate and maintained an actual type I error rate close to the nominal level in all simulations. When estimating and performing inference on the HF parameter, the proposed instrumental variables estimator should yield accurate estimates and correct inferences in the presence of additive technical errors, but the mean of the ratios of TBW to FFM in individual subjects may not.
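    The contrast between the mean-of-ratios estimator and an instrumental-variables alternative can be illustrated by simulation. This sketch assumes a replicate FFM measurement is available to serve as the instrument, which is not necessarily the paper's exact construction; parameter values are illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 20_000
HF_TRUE = 0.73                                      # illustrative hydration fraction

ffm_true = rng.normal(50.0, 3.0, n)                 # true fat-free mass (kg)
tbw = HF_TRUE * ffm_true + rng.normal(0.0, 2.0, n)  # TBW with additive error
ffm_meas = ffm_true + rng.normal(0.0, 8.0, n)       # error-prone FFM measurement
ffm_repl = ffm_true + rng.normal(0.0, 8.0, n)       # replicate used as instrument

# conventional estimator: mean of per-subject ratios
# (biased upward because of additive error in the denominator)
hf_ratio = np.mean(tbw / ffm_meas)

# instrumental-variables estimator: the replicate instruments the error-prone
# FFM, since its error is independent of the first measurement's error
hf_iv = np.sum(tbw * ffm_repl) / np.sum(ffm_meas * ffm_repl)
```

    The ratio estimator drifts above the true value because measurement error in the denominator inflates the expected ratio, while the IV estimator stays close to it.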

  14. On the importance of Task 1 and error performance measures in PRP dual-task studies

    Science.gov (United States)

    Strobach, Tilo; Schütz, Anja; Schubert, Torsten

    2015-01-01

    The psychological refractory period (PRP) paradigm is a dominant research tool in the literature on dual-task performance. In this paradigm a first and a second component task (i.e., Task 1 and Task 2) are presented with variable stimulus onset asynchronies (SOAs), with priority given to performing Task 1. The main indicator of dual-task impairment in PRP situations is an increasing Task 2 RT with decreasing SOAs. This impairment is typically explained by assuming that some task components are processed strictly sequentially, in the context of the prominent central bottleneck theory. This assumption could implicitly suggest that processes of Task 1 are unaffected by Task 2 and bottleneck processing, i.e., that decreasing SOAs do not increase reaction times (RTs) and error rates of the first task. The aim of the present review is to assess whether PRP dual-task studies included both RT and error-data presentations and statistical analyses, and whether studies including both data types (i.e., RTs and error rates) show data consistent with this assumption (i.e., decreasing SOAs and unaffected RTs and/or error rates in Task 1). This review demonstrates that, in contrast to RT presentations and analyses, error data are underrepresented in a substantial number of studies. Furthermore, a substantial number of studies with RT and error data showed a statistically significant impairment of Task 1 performance with decreasing SOA. Thus, these studies produced data that is not primarily
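    The central bottleneck account referenced above makes a concrete quantitative prediction for Task 2 RT as a function of SOA. A minimal sketch with illustrative stage durations (the specific millisecond values are assumptions, not data from any study in the review):

```python
def rt2_central_bottleneck(soa, a1=100, b1=150, a2=100, b2=150, c2=100):
    """Predicted Task 2 RT (ms) under a strict central bottleneck.
    a* = pre-bottleneck perceptual stage, b* = central (bottleneck) stage,
    c2 = post-bottleneck motor stage; all durations are illustrative."""
    # Task 2's central stage cannot start before Task 1's central stage ends
    t2_central_start = max(soa + a2, a1 + b1)
    return t2_central_start + b2 + c2 - soa

# the PRP effect: Task 2 slows as SOA shrinks, then flattens at long SOAs
rts = [rt2_central_bottleneck(s) for s in (50, 150, 300, 600)]
```

    Note that in this idealized model Task 1 RT is entirely unaffected by SOA, which is exactly the implicit assumption the review examines against the empirical Task 1 data.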