Quantile Regression With Measurement Error
Wei, Ying
2009-08-27
Regression quantiles can be substantially biased when the covariates are measured with error. In this paper we propose a new method that produces consistent linear quantile estimation in the presence of covariate measurement error. The method corrects the measurement error induced bias by constructing joint estimating equations that simultaneously hold for all the quantile levels. An iterative EM-type estimation algorithm to obtain the solutions to such joint estimation equations is provided. The finite sample performance of the proposed method is investigated in a simulation study, and compared to the standard regression calibration approach. Finally, we apply our methodology to part of the National Collaborative Perinatal Project growth data, a longitudinal study with an unusual measurement error structure. © 2009 American Statistical Association.
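The attenuation this paper corrects for can be seen in a minimal simulation. A proper quantile-regression fit needs an LP solver, so as a rough proxy this sketch shows the analogous slope attenuation for the conditional mean when the covariate is observed with classical error (all parameter values are assumed for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
beta = 2.0                            # true slope (assumed)
sigma_x, sigma_u = 1.0, 0.5           # true-covariate and measurement-error SDs

x = rng.normal(0.0, sigma_x, n)       # true covariate
w = x + rng.normal(0.0, sigma_u, n)   # error-prone observation of x
y = beta * x + rng.normal(0.0, 1.0, n)

# Naive slope from regressing y on the mismeasured w: attenuated toward zero
beta_naive = np.cov(w, y)[0, 1] / np.var(w, ddof=1)

# Theoretical attenuation factor (reliability ratio)
lam = sigma_x**2 / (sigma_x**2 + sigma_u**2)   # 0.8 here, so E[beta_naive] ~ 1.6
```

The same mechanism biases every conditional quantile, which is why the paper's joint estimating equations are needed.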
Geometric errors measurement for coordinate measuring machines
Pan, Fangyu; Nie, Li; Bai, Yuewei; Wang, Xiaogang; Wu, Xiaoyan
2017-08-01
Error compensation is an effective way to improve the accuracy of Coordinate Measuring Machines (CMMs). To achieve this goal, the underlying research is carried out. First, the error sources are analyzed, identifying 21 geometric errors that seriously affect CMM precision; second, the measurement method is presented and its principle elaborated. The feasibility of the method is validated by experiment. This work therefore lays a foundation for further compensation to improve CMM accuracy.
Errors in Chemical Sensor Measurements
Directory of Open Access Journals (Sweden)
Artur Dybko
2001-06-01
Various types of errors arising during measurements with ion-selective electrodes, ion-sensitive field effect transistors, and fibre optic chemical sensors are described. The errors are divided according to their nature and place of origin into chemical, instrumental, and non-chemical. The influence of interfering ions, leakage of the membrane components, and the liquid junction potential, as well as of sensor wiring, ambient light, and temperature, is presented.
Correction of errors in power measurements
DEFF Research Database (Denmark)
Pedersen, Knud Ole Helgesen
1998-01-01
Small errors in voltage and current measuring transformers cause inaccuracies in power measurements. In this report, correction factors are derived to compensate for such errors.
Hearn, Chase P.; Bradshaw, Edward S.
1991-05-01
High-Q lumped and distributed networks near resonance are generally modeled as elementary three-element RLC circuits. The widely used Q-circle measurement technique is based on this assumption. It is shown that this assumption can lead to errors when measuring the Q-factor of more complex resonators, particularly when heavily loaded by the external source. In the Q-circle technique, the resonator is assumed to behave as a pure series (or parallel) RLC circuit, and the intercept frequencies are found experimentally at which the components of impedance satisfy |Im(Z)| = Re(Z) (unloaded Q) and |Im(Z)| = Ro + Re(Z) (loaded Q). The Q-factor is then determined as the ratio of the resonant frequency to the intercept bandwidth. This relationship is exact for simple series or parallel RLC circuits, regardless of the Q-factor, but not for more complex circuits. This is shown to be due to the fact that the impedance components of the circuit vary with frequency differently from those in a pure series RLC circuit, causing the Q-factor determined as above to be in error.
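For a pure series RLC circuit the intercept-bandwidth relationship quoted above is exact, which a short numerical check confirms (the component values below are arbitrary):

```python
import math

R, L, C = 2.0, 1e-6, 1e-9          # series RLC component values (assumed)
w0 = 1.0 / math.sqrt(L * C)        # resonant angular frequency

# Intercept frequencies where |Im(Z)| = Re(Z), i.e. wL - 1/(wC) = +/- R
disc = math.sqrt(R**2 + 4.0 * L / C)
w_hi = (R + disc) / (2.0 * L)
w_lo = (-R + disc) / (2.0 * L)

Q_circle = w0 / (w_hi - w_lo)      # Q from the intercept bandwidth (= w0 * L / R)
Q_true = w0 * L / R                # textbook unloaded Q of a series RLC
```

For more complex resonators the impedance components no longer vary with frequency this way, which is exactly where the paper locates the error.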
Measurement Error and Equating Error in Power Analysis
Phillips, Gary W.; Jiang, Tao
2016-01-01
Power analysis is a fundamental prerequisite for conducting scientific research. Without power analysis the researcher has no way of knowing whether the sample size is large enough to detect the effect he or she is looking for. This paper demonstrates how psychometric factors such as measurement error and equating error affect the power of…
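A hedged sketch of the psychometric point: measurement error shrinks the standardized effect size by the square root of the reliability, which lowers power. This uses a normal-approximation power formula for a two-sided two-sample z-test; the sample size and reliability values are assumed for illustration:

```python
from statistics import NormalDist

def power_two_sample(d, n, alpha=0.05):
    """Approximate power of a two-sided two-sample z-test with n per group."""
    z = NormalDist().inv_cdf(1 - alpha / 2)
    noncentrality = d * (n / 2) ** 0.5
    return 1 - NormalDist().cdf(z - noncentrality)

d_true, reliability = 0.5, 0.8
d_obs = d_true * reliability ** 0.5   # effect size attenuated by measurement error

p_ideal = power_two_sample(d_true, n=64)  # power with error-free measurement
p_attn = power_two_sample(d_obs, n=64)    # power actually achieved
```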
Measurement error in a single regressor
Meijer, H.J.; Wansbeek, T.J.
2000-01-01
For the setting of multiple regression with measurement error in a single regressor, we present some very simple formulas to assess the result that one may expect when correcting for measurement error. It is shown where the corrected estimated regression coefficients and the error variance may lie,
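A minimal sketch of such a correction in the single-regressor case, assuming the measurement-error variance is known: divide the naive slope by the estimated reliability ratio (all parameter values are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
n, beta, sigma_u = 100_000, 1.5, 0.6   # assumed true slope and error SD

x = rng.normal(0.0, 1.0, n)            # true regressor
w = x + rng.normal(0.0, sigma_u, n)    # observed, error-prone regressor
y = beta * x + rng.normal(0.0, 1.0, n)

b_naive = np.cov(w, y)[0, 1] / np.var(w, ddof=1)

# Estimated reliability ratio: (var(w) - known error variance) / var(w)
lam_hat = (np.var(w, ddof=1) - sigma_u**2) / np.var(w, ddof=1)

b_corrected = b_naive / lam_hat        # method-of-moments correction
```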
Impact of Measurement Error on Synchrophasor Applications
Energy Technology Data Exchange (ETDEWEB)
Liu, Yilu [Univ. of Tennessee, Knoxville, TN (United States); Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Gracia, Jose R. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Ewing, Paul D. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Zhao, Jiecheng [Univ. of Tennessee, Knoxville, TN (United States); Tan, Jin [Univ. of Tennessee, Knoxville, TN (United States); Wu, Ling [Univ. of Tennessee, Knoxville, TN (United States); Zhan, Lingwei [Univ. of Tennessee, Knoxville, TN (United States)
2015-07-01
Phasor measurement units (PMUs), which provide synchrophasor measurements, are powerful diagnostic tools that can help avert catastrophic failures in the power grid. Because of this, PMU measurement errors are particularly worrisome. This report examines the internal and external factors contributing to PMU phase angle and frequency measurement errors and gives a reasonable explanation for them. It also analyzes the impact of those measurement errors on several synchrophasor applications: event location detection, oscillation detection, islanding detection, and dynamic line rating. The primary finding is that dynamic line rating is the application most likely to be influenced by measurement error. Other findings include the possibility of reporting nonoscillatory activity as an oscillation as a result of error, failing to detect oscillations submerged by error, and the unlikely impact of error on event location and islanding detection.
HWVP submerged bed scrubber waste treatment by ion exchange at high pH
Energy Technology Data Exchange (ETDEWEB)
Bray, L.A.; Carson, K.J.; Elovich, R.J.; Eakin, D.E.
1996-03-01
The Hanford Waste Vitrification Plant (HWVP) is expected to produce aqueous waste streams that will require further processing for cesium, strontium, and transuranic (TRU) removal prior to incorporation into grout. Fluor Daniel, Inc. has recommended that zeolite be added to these waste streams for adsorption of cesium (Cs) and strontium (Sr) following pH adjustment by sodium hydroxide (NaOH) addition. Filtration will then be used to remove the TRU elements associated with the process solids and the zeolite containing the Cs and Sr.
Energy Technology Data Exchange (ETDEWEB)
Wiemers, K.D.; Langowski, M.H.; Powell, M.R.; Larson, D.E.
1996-03-01
The Hanford Waste Vitrification Plant (HWVP) is being designed for the Department of Energy to immobilize pretreated radioactive high-level waste and transuranic waste as glass for permanent disposal. Laboratory studies were conducted to characterize HWVP slurry chemistry during selected processing steps, using pretreated Neutralized Current Acid Waste (NCAW) simulant. Laboratory tests were designed to provide bases for determining the potential for hazardous gas generation, making chemical adjustments for glass redox control, and assessing the potential for rapid exothermic reactions of dried NCAW slurry. Offgas generation rates and the total moles of gas released as a function of selected pretreated NCAW components and process variables were measured. An emphasis was placed on identifying conditions that initiate significant H₂ generation. Glass redox measurements, using Fe²⁺/ΣFe as an indicator of the glass oxidation state, were made to develop guidelines for HCOOH addition. Thermal analyses of dried NCAW simulant were conducted to assess the potential of a rapid uncontrollable exothermic reaction in the chemical processing cell tanks.
Influence of measurement error on Maxwell's demon
Sørdal, Vegard; Bergli, Joakim; Galperin, Y. M.
2017-06-01
In any general cycle of measurement, feedback, and erasure, the measurement will reduce the entropy of the system when information about the state is obtained, while erasure, according to Landauer's principle, is accompanied by a corresponding increase in entropy due to the compression of logical and physical phase space. The total process can in principle be fully reversible. A measurement error reduces the information obtained and the entropy decrease in the system. The erasure still gives the same increase in entropy, and the total process is irreversible. Another consequence of measurement error is that a bad feedback is applied, which further increases the entropy production if the proper protocol adapted to the expected error rate is not applied. We consider the effect of measurement error on a realistic single-electron-box Szilard engine, and we find the optimal protocol for the cycle as a function of the desired power P and error ε.
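The information an imperfect binary measurement yields can be sketched as the mutual information of a binary symmetric channel with error probability ε; the second law then bounds the extractable work per cycle by kT times this quantity. This is a standard textbook relation, not the paper's full protocol optimization:

```python
import math

def plogp(p):
    return 0.0 if p == 0.0 else p * math.log(p)

def info_per_cycle(eps):
    """Mutual information (in nats) between the particle's side and a binary
    measurement that errs with probability eps, for a uniform prior."""
    return math.log(2) + plogp(eps) + plogp(1 - eps)

# Second-law bound: extractable work per cycle <= kT * info_per_cycle(eps)
I_perfect = info_per_cycle(0.0)   # = ln 2, the ideal Szilard engine
I_useless = info_per_cycle(0.5)   # = 0, the measurement carries no information
```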
Quantifying and handling errors in instrumental measurements using the measurement error theory
DEFF Research Database (Denmark)
Andersen, Charlotte Møller; Bro, R.; Brockhoff, P.B.
2003-01-01
Measurement error modelling is used for investigating the influence of measurement/sampling error on univariate predictions of water content and water-holding capacity (reference measurement) from nuclear magnetic resonance (NMR) relaxations (instrumental) measured on two gadoid fish species. This is a new way of using the measurement error theory. Reliability ratios illustrate that the models for the two fish species are influenced differently by the error. However, the error seems to influence the predictions of the two reference measures in the same way. The effect of using replicated x-measurements is illustrated by simulated data and by NMR relaxations measured several times on each fish. The standard error of the physical determination of the reference values is lower than the standard error of the NMR measurements. In this case, lower prediction error is obtained by replicating the instrumental measurements.
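The reliability-ratio idea above can be sketched with replicated instrumental measurements: the within-item variance of replicates estimates the error variance, and a method-of-moments decomposition gives the reliability. This is a generic sketch on simulated data with assumed variances, not the paper's NMR models:

```python
import numpy as np

rng = np.random.default_rng(2)
n_items, n_reps = 500, 3
sigma_b, sigma_w = 1.0, 0.5        # between-item SD and within-item (error) SD

truth = rng.normal(0.0, sigma_b, n_items)
meas = truth[:, None] + rng.normal(0.0, sigma_w, (n_items, n_reps))

# Within-item variance of replicates estimates the measurement-error variance
within = meas.var(axis=1, ddof=1).mean()

# Method-of-moments between-item variance: var of item means minus within/n_reps
between = meas.mean(axis=1).var(ddof=1) - within / n_reps

reliability = between / (between + within)                # single measurement
reliability_mean = between / (between + within / n_reps)  # mean of replicates
```

Replication raises the reliability of the averaged measurement, matching the abstract's observation that replicating the instrumental measurements lowers prediction error.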
Measuring Systematic Error with Curve Fits
Rupright, Mark E.
2011-01-01
Systematic errors are often unavoidable in the introductory physics laboratory. As has been demonstrated in many papers in this journal, such errors can present a fundamental problem for data analysis, particularly when comparing the data to a given model.1-3 In this paper I give three examples in which my students use popular curve-fitting software and adjust the theoretical model to account for, and even exploit, the presence of systematic errors in measured data.
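A minimal illustration of the approach: adding a fit parameter that absorbs a constant systematic offset recovers the slope that a model forced through the origin would bias. The toy data and values below are assumed, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(3)
x = np.linspace(0.1, 2.0, 40)
a_true, offset = 9.8, 0.3          # 'offset' plays the systematic (zero) error
y = a_true * x + offset + rng.normal(0.0, 0.05, x.size)

# Naive model ignoring the systematic error: y = a*x, forced through the origin
a_forced = (x @ y) / (x @ x)

# Adjusted model with an extra parameter that absorbs the systematic offset
a_fit, b_fit = np.polyfit(x, y, 1)
```

The fitted intercept b_fit both protects the slope estimate and measures the systematic error itself, which is the "exploit" part of the abstract.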
Measurement Error in Education and Growth Regressions*
Portela, Miguel; Alessie, Rob; Teulings, Coen
2010-01-01
The use of the perpetual inventory method for the construction of education data per country leads to systematic measurement error. This paper analyzes its effect on growth regressions. We suggest a methodology for correcting this error. The standard attenuation bias suggests that using these
Measurement Error with Different Computer Vision Techniques
Icasio-Hernández, O.; Curiel-Razo, Y. I.; Almaraz-Cabral, C. C.; Rojas-Ramirez, S. R.; González-Barbosa, J. J.
2017-09-01
The goal of this work is to offer a comparison of the measurement error of different computer vision techniques for 3D reconstruction, and to allow a metrological discrimination based on our evaluation results. The present work implements four 3D reconstruction techniques: passive stereoscopy, active stereoscopy, shape from contour, and fringe profilometry, determining the measurement error and its uncertainty using different gauges. We measured several known dimensional and geometric standards and compared the techniques' average errors, standard deviations, and uncertainties, obtaining a guide to the tolerances each technique can achieve and to choosing the best one.
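A sketch of the kind of per-technique summary such a comparison implies, using invented gauge-block readings (the technique names and values below are hypothetical): average error against the standard, Type A standard uncertainty, and expanded uncertainty with coverage factor k = 2:

```python
import math

# Hypothetical repeated measurements (mm) of a 50.000 mm gauge block
readings = {
    "passive_stereo": [50.12, 49.95, 50.08, 49.90, 50.11],
    "fringe_profilometry": [50.01, 50.00, 50.02, 49.99, 50.01],
}
nominal = 50.0

summary = {}
for name, vals in readings.items():
    n = len(vals)
    mean = sum(vals) / n
    s = math.sqrt(sum((v - mean) ** 2 for v in vals) / (n - 1))
    summary[name] = {
        "error": mean - nominal,       # average error vs the standard
        "u_A": s / math.sqrt(n),       # Type A standard uncertainty of the mean
        "U": 2 * s / math.sqrt(n),     # expanded uncertainty, k = 2 (~95 %)
    }
```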
Direction of dependence in measurement error models.
Wiedermann, Wolfgang; Merkle, Edgar C; von Eye, Alexander
2017-09-05
Methods to determine the direction of a regression line, that is, to determine the direction of dependence in reversible linear regression models (e.g., x→y vs. y→x), have experienced rapid development within the last decade. However, previous research largely rested on the assumption that the true predictor is measured without measurement error. The present paper extends the direction dependence principle to measurement error models. First, we discuss asymmetric representations of the reliability coefficient in terms of higher moments of variables and the attenuation of skewness and excess kurtosis due to measurement error. Second, we identify conditions where direction dependence decisions are biased due to measurement error and suggest method of moments (MOM) estimation as a remedy. Third, we address data situations in which the true outcome exhibits both regression and measurement error, and propose a sensitivity analysis approach to determining the robustness of direction dependence decisions against unreliably measured outcomes. Monte Carlo simulations were performed to assess the performance of MOM-based direction dependence measures and their robustness to violated measurement error assumptions (i.e., non-independence and non-normality). An empirical example from subjective well-being research is presented. The plausibility of model assumptions and links to modern causal inference methods for observational data are discussed. © 2017 The British Psychological Society.
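The attenuation of skewness mentioned above follows because independent symmetric classical error leaves the third central moment unchanged while inflating the variance, so observed skewness shrinks by the reliability to the 3/2 power. A simulation sketch with assumed distributions:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 500_000
x = rng.exponential(1.0, n)        # skewed true predictor (skewness = 2)
u = rng.normal(0.0, 1.0, n)        # classical, symmetric measurement error
w = x + u                          # observed predictor

def skew(v):
    c = v - v.mean()
    return (c**3).mean() / (c**2).mean() ** 1.5

# Reliability: var(x) / (var(x) + var(u)) = 1 / (1 + 1) here
lam = 0.5
# Predicted observed skewness: skew(x) * lam**1.5
```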
Protecting weak measurements against systematic errors
Pang, Shengshi; Alonso, Jose Raul Gonzalez; Brun, Todd A.; Jordan, Andrew N.
2016-07-01
In this work, we consider the systematic error of quantum metrology by weak measurements under decoherence. We derive the systematic error of maximum likelihood estimation in general to the first-order approximation of a small deviation in the probability distribution and study the robustness of standard weak measurement and postselected weak measurements against systematic errors. We show that, with a large weak value, the systematic error of a postselected weak measurement when the probe undergoes decoherence can be significantly lower than that of a standard weak measurement. This indicates another advantage of weak-value amplification in improving the performance of parameter estimation. We illustrate the results by an exact numerical simulation of decoherence arising from a bosonic mode and compare it to the first-order analytical result we obtain.
Assessing Measurement Error in Medicare Coverage
U.S. Department of Health & Human Services — Assessing Measurement Error in Medicare Coverage From the National Health Interview Survey Using linked administrative data, to validate Medicare coverage estimates...
Error Separation for Wide Area Film Measurement
Directory of Open Access Journals (Sweden)
Shujie LIU
2014-09-01
We use multiple probes and a white light interferometer to measure the surface profile of thin film. However, this scanning system suffers from moving-stage errors and systematic sensor errors. In this paper, in order to separate the measurement error caused by the moving stage from the systematic sensor errors, least squares analysis is applied to achieve self-calibration in the measurement process. The modeling principle and solution process of the least squares analysis with multiple probes and an autocollimator are introduced, and the corresponding theoretical uncertainty calculation method is also given. Using this method, we analyze the experimental data and obtain a shape close to the real profile. Comparing with the actual value, the bias and uncertainty for different numbers of probes are discussed. The results demonstrate the feasibility of the constructed multi-ball cantilever system with the autocollimator for measuring thin film with high accuracy.
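An idealized sketch of the separation idea: if two probes one grid step apart see the same stage error at each scan position, differencing their readings cancels the stage error and the profile is recovered by summation. This toy model ignores the systematic sensor offsets that the paper's least squares analysis with the autocollimator handles:

```python
import numpy as np

rng = np.random.default_rng(5)
n = 200
x = np.arange(n + 1)
profile = 0.5 * np.sin(2 * np.pi * x / 80) + 0.001 * x   # assumed film profile
stage_err = rng.normal(0.0, 0.2, n)                      # moving-stage error per step

# Two probes one grid step apart share the same stage error at each position
m1 = profile[:-1] + stage_err
m2 = profile[1:] + stage_err

diff = m2 - m1                     # stage error cancels in the difference
recon = np.concatenate([[profile[0]], profile[0] + np.cumsum(diff)])
```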
Nonclassical measurements errors in nonlinear models
DEFF Research Database (Denmark)
Madsen, Edith; Mulalic, Ismir
…around zero and thicker tails than a normal distribution. In a linear regression model where the explanatory variable is measured with error, it is well known that this gives a downward bias in the absolute value of the corresponding regression parameter (attenuation), Friedman (1957). In non-linear models it is more difficult to obtain an expression for the bias, as it depends on the distribution of the true underlying variable as well as on the error distribution. Chesher (1991) gives some approximations for very general non-linear models and Stefanski & Carroll (1985) for the logistic regression model… and the distribution of the underlying true income is skewed, then there are valid technical instruments. We investigate how this IV estimation approach works in theory and illustrate it by simulation studies using the findings about the measurement error model for income from the NTS.
Multiple indicators, multiple causes measurement error models.
Tekwe, Carmen D; Carter, Randy L; Cullings, Harry M; Carroll, Raymond J
2014-11-10
Multiple indicators, multiple causes (MIMIC) models are often employed by researchers studying the effects of an unobservable latent variable on a set of outcomes, when causes of the latent variable are observed. There are times, however, when the causes of the latent variable are not observed because measurements of the causal variable are contaminated by measurement error. The objectives of this paper are as follows: (i) to develop a novel model by extending the classical linear MIMIC model to allow both Berkson and classical measurement errors, defining the MIMIC measurement error (MIMIC ME) model; (ii) to develop likelihood-based estimation methods for the MIMIC ME model; and (iii) to apply the newly defined MIMIC ME model to atomic bomb survivor data to study the impact of dyslipidemia and radiation dose on the physical manifestations of dyslipidemia. As a by-product of our work, we also obtain a data-driven estimate of the variance of the classical measurement error associated with an estimate of the amount of radiation dose received by atomic bomb survivors at the time of their exposure. Copyright © 2014 John Wiley & Sons, Ltd.
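The Berkson/classical distinction that motivates the MIMIC ME model can be sketched by simulation: classical error attenuates a regression slope, while Berkson error leaves it unbiased. The linear model and parameter values below are assumed for illustration only:

```python
import numpy as np

rng = np.random.default_rng(6)
n, beta = 200_000, 1.0
u = rng.normal(0.0, 0.5, n)        # measurement error, SD 0.5

# Classical error: we observe w = x + u, and y depends on the true x
x_c = rng.normal(0.0, 1.0, n)
w_c = x_c + u
y_c = beta * x_c + rng.normal(0.0, 1.0, n)
b_classical = np.cov(w_c, y_c)[0, 1] / np.var(w_c, ddof=1)   # ~ beta * 0.8

# Berkson error: w is the assigned/observed value, the true x varies around it
w_b = rng.normal(0.0, 1.0, n)
x_b = w_b + u
y_b = beta * x_b + rng.normal(0.0, 1.0, n)
b_berkson = np.cov(w_b, y_b)[0, 1] / np.var(w_b, ddof=1)     # ~ beta, unbiased
```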
Accurate test limits under nonnormal measurement error
Albers, Willem/Wim; Kallenberg, W.C.M.; Otten, G.D.
1998-01-01
When screening a production process for nonconforming items the objective is to improve the average outgoing quality level. Due to measurement errors specification limits cannot be checked directly and hence test limits are required, which meet some given requirement, here given by a prescribed
Application of Uniform Measurement Error Distribution
2016-03-18
…specific distribution and the associated joint probability density function (PDF). Then, assuming uniformly distributed measurement errors, we will try… Probability of False Accept (PFA), Probability of False Reject (PFR). …calibration tolerance limits but the difference of the observed measurement results of the UUT and the Calibration Standard (CalStd or CAL) is within…
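A hedged reconstruction of the computation the fragment describes: with uniformly distributed measurement error, PFA and PFR can be estimated by Monte Carlo given an assumed distribution of true values. All limits and distributions here are illustrative, not the report's:

```python
import numpy as np

rng = np.random.default_rng(7)
n = 1_000_000
tol = 1.0                           # tolerance limit: |true value| <= tol conforms
a = 0.25                            # half-width of the uniform measurement error

true_val = rng.normal(0.0, 0.5, n)  # assumed distribution of true UUT values
observed = true_val + rng.uniform(-a, a, n)

in_tol = np.abs(true_val) <= tol
accepted = np.abs(observed) <= tol

pfa = np.mean(accepted & ~in_tol)   # Probability of False Accept
pfr = np.mean(~accepted & in_tol)   # Probability of False Reject
```

With the density of true values falling off past the tolerance limit, false rejects outnumber false accepts; guard-banded test limits trade one for the other.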
Measurement error in longitudinal film badge data
Energy Technology Data Exchange (ETDEWEB)
Marsh, J.L
2002-04-01
The classical measurement error model is that of a simple linear regression with unobservable variables. Information about the covariates is available only through error-prone measurements, usually with an additive structure. Ignoring errors has been shown to result in biased regression coefficients, reduced power of hypothesis tests and increased variability of parameter estimates. Radiation is known to be a causal factor for certain types of leukaemia. This link is mainly substantiated by the Atomic Bomb Survivor study, the Ankylosing Spondylitis Patients study, and studies of various other patients irradiated for therapeutic purposes. The carcinogenic relationship is believed to be a linear or quadratic function of dose but the risk estimates differ widely for the different studies. Previous cohort studies of the Sellafield workforce have used the cumulative annual exposure data for their risk estimates. The current 1:4 matched case-control study also uses the individual worker's film badge data, the majority of which has been unavailable in computerised form. The results from the 1:4 matched (on dates of birth and employment, sex and industrial status) case-control study are compared and contrasted with those for a 1:4 nested (within the worker cohort and matched on the same factors) case-control study using annual doses. The data consist of 186 cases and 744 controls from the work forces of four BNFL sites: Springfields, Sellafield, Capenhurst and Chapelcross. Initial logistic regressions turned up some surprising contradictory results which led to a re-sampling of Sellafield mortality controls without the date of employment matching factor. It is suggested that over-matching is the cause of the contradictory results. Comparisons of the two measurements of radiation exposure suggest a strongly linear relationship with non-Normal errors. A method has been developed using the technique of Regression Calibration to deal with these in a case-control study context, and applied to this Sellafield study.
Letter report: Evaluation of LFCM off-gas system technologies for the HWVP
Energy Technology Data Exchange (ETDEWEB)
Goles, R.W.; Mishima, J.; Schmidt, A.J.
1996-03-01
Radioactive high-level liquid waste (HLLW), a byproduct of defense nuclear fuel reprocessing activities, is currently being stored in underground tanks at several US sites. Because its mobility poses significant environmental risks, HLLW is not a suitable waste form for long-term storage. Thus, high-temperature processes for solidifying and isolating the radioactive components of HLLW have been developed and demonstrated by the US Department of Energy (DOE) and its contractors. Vitrification using liquid-fed ceramic melters (LFCMs) is the reference process for converting US HLLW into a borosilicate glass. Two vitrification plants are currently under construction in the United States: the West Valley Demonstration Plant (WVDP) being built at the former West Valley Nuclear Fuels Services site in West Valley, New York; and the Defense Waste Processing Facility (DWPF), which is currently 85% complete at DOE's Savannah River Plant (SRP). A third facility, the Hanford Waste Vitrification Plant (HWVP), is being designed at DOE's Hanford Site.
Feedback cooling, measurement errors, and entropy production
Munakata, T.; Rosinberg, M. L.
2013-06-01
The efficiency of a feedback mechanism depends on the precision of the measurement outcomes obtained from the controlled system. Accordingly, measurement errors affect the entropy production in the system. We explore this issue in the context of active feedback cooling by modeling a typical cold damping setup as a harmonic oscillator in contact with a heat reservoir and subjected to a velocity-dependent feedback force that reduces the random motion. We consider two models that distinguish whether the sensor continuously measures the position of the resonator or directly its velocity (in practice, an electric current). Adopting the standpoint of the controlled system, we identify the ‘entropy pumping’ contribution that describes the entropy reduction due to the feedback control and that modifies the second law of thermodynamics. We also assign a relaxation dynamics to the feedback mechanism and compare the apparent entropy production in the system and the heat bath (under the influence of the controller) to the total entropy production in the super-system that includes the controller. In this context, entropy pumping reflects the existence of hidden degrees of freedom and the apparent entropy production satisfies fluctuation theorems associated with an effective Langevin dynamics.
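The cold-damping setup can be sketched as a harmonic oscillator with a velocity-proportional feedback force: the feedback adds damping without adding thermal noise, so the effective velocity temperature drops to T·γ/(γ+g). A crude Euler-Maruyama sketch with assumed units and gains, and with perfect velocity measurement (i.e. none of the paper's measurement error):

```python
import numpy as np

rng = np.random.default_rng(8)
dt, n_steps = 1e-3, 200_000
gamma, kT, m, k = 1.0, 1.0, 1.0, 1.0   # bath coupling, temperature, mass, stiffness
g = 4.0                                 # cold-damping feedback gain (assumed)

def simulate(gain):
    """Langevin dynamics with feedback force -gain*v; returns velocity trace."""
    x = v = 0.0
    vs = np.empty(n_steps)
    noise = rng.normal(0.0, 1.0, n_steps)
    kick = np.sqrt(2.0 * gamma * kT * dt) / m   # thermal noise amplitude
    for i in range(n_steps):
        f = -k * x - gamma * v - gain * v
        v += (f / m) * dt + kick * noise[i]
        x += v * dt
        vs[i] = v
    return vs

v_free = simulate(0.0)
v_fb = simulate(g)
# Effective temperature from equipartition, discarding the transient:
T_eff = m * np.mean(v_fb[n_steps // 2:] ** 2)   # expect ~ kT*gamma/(gamma+g) = 0.2
```

Imperfect velocity measurements would feed noise back through the gain and raise T_eff, which is how measurement error enters the entropy accounting above.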
Adjusting for the Incidence of Measurement Errors in Multilevel ...
African Journals Online (AJOL)
In the face of seeming dearth of objective methods of estimating measurement error variance and realistically adjusting for the incidence of measurement errors in multilevel models, researchers often indulge in the traditional approach of arbitrary choice of measurement error variance and this has the potential of giving ...
Radiation risk estimation based on measurement error models
Masiuk, Sergii; Shklyar, Sergiy; Chepurny, Mykola; Likhtarov, Illya
2017-01-01
This monograph discusses statistics and risk estimates applied to radiation damage under the presence of measurement errors. The first part covers nonlinear measurement error models, with a particular emphasis on efficiency of regression parameter estimators. In the second part, risk estimation in models with measurement errors is considered. Efficiency of the methods presented is verified using data from radio-epidemiological studies.
Incorporating measurement error in n=1 psychological autoregressive modeling
Schuurman, Noemi K.; Houtveen, Jan H.; Hamaker, Ellen L.
2015-01-01
Measurement error is omnipresent in psychological data. However, the vast majority of applications of autoregressive time series analyses in psychology do not take measurement error into account. Disregarding measurement error when it is present in the data results in a bias of the autoregressive
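The bias referred to above can be sketched for an AR(1) process: white measurement noise leaves the lag-1 autocovariance intact but inflates the variance, so the observed lag-1 autocorrelation shrinks by the reliability ratio. A simulation with assumed parameters:

```python
import numpy as np

rng = np.random.default_rng(9)
n, phi, sigma_e = 200_000, 0.6, 0.5    # AR coefficient and measurement-error SD

# Latent AR(1) with unit innovation variance, started in its stationary state
eps = rng.normal(0.0, 1.0, n)
x = np.empty(n)
x[0] = eps[0] / np.sqrt(1 - phi**2)
for t in range(1, n):
    x[t] = phi * x[t - 1] + eps[t]

y = x + rng.normal(0.0, sigma_e, n)    # observed series with measurement error

def lag1_corr(v):
    return np.corrcoef(v[:-1], v[1:])[0, 1]

var_x = 1.0 / (1 - phi**2)             # stationary variance of the latent process
lam = var_x / (var_x + sigma_e**2)     # reliability; observed lag-1 corr ~ phi*lam
```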
Yan, Ying; Yi, Grace Y
2016-07-01
Covariate measurement error occurs commonly in survival analysis. Under the proportional hazards model, measurement error effects have been well studied, and various inference methods have been developed to correct for error effects under such a model. In contrast, error-contaminated survival data under the additive hazards model have received relatively less attention. In this paper, we investigate this problem by exploring measurement error effects on parameter estimation and the change of the hazard function. New insights of measurement error effects are revealed, as opposed to well-documented results for the Cox proportional hazards model. We propose a class of bias correction estimators that embraces certain existing estimators as special cases. In addition, we exploit the regression calibration method to reduce measurement error effects. Theoretical results for the developed methods are established, and numerical assessments are conducted to illustrate the finite sample performance of our methods.
The combined measurement and compensation technology for robot motion error
Li, Rui; Qu, Xinghua; Deng, Yonggang; Liu, Bende
2013-10-01
Robot parameter errors are mainly caused by kinematic parameter errors and moving-angle errors. This paper mainly investigates the calibration of the kinematic parameter errors and the regularity of the moving-angle error of each axis. The errors can be compensated via the error model through pre-measurement, so the accuracy of the robot kinematic system can be improved without external devices for real-time measurement. A combined measuring system based on a laser tracker and a biaxial orthogonal inertial measuring instrument is designed and built. The laser tracker is used to build the robot kinematic parameter error model, based on the minimum constraint of distance error. The biaxial orthogonal inertial measuring instrument is used to obtain the moving-angle error model of each axis. The model is preset while the robot moves along a predetermined path to obtain the movement error, and the compensation quantity is fed back to the robot controller module of the moving axis to compensate the angle. The robot kinematic parameter calibration based on the distance error model and the distribution law of each axis's movement error are discussed, and the laser tracker is applied to verify that the method can effectively improve the control accuracy of the robot system.
Error Averaging Effect in Parallel Mechanism Coordinate Measuring Machine
Directory of Open Access Journals (Sweden)
Peng-Hao Hu
2016-11-01
The error averaging effect is one of the advantages of a parallel mechanism when individual errors are relatively large. However, further investigation is necessary to support this claim with mathematical analysis and experiment. In the developed parallel coordinate measuring machine (PCMM), which is based on three pairs of prismatic-universal-universal (3-PUU) joints, the error averaging mechanism is investigated and analyzed in this report. Firstly, the error transfer coefficients of the various errors in the PCMM were studied based on the established error transfer model, showing how the various original errors in the parallel mechanism are averaged and reduced. Secondly, experimental measurements were carried out, including the angular errors and straightness errors of the three moving sliders. Lastly, solving the inverse kinematics by a numerical iteration method shows that the final measuring errors of the moving platform of the PCMM are reduced by the error averaging effect in comparison with the attributed geometric errors of the three moving slides. This study reveals the significance of the error averaging effect for a PCMM.
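A toy version of the averaging effect: if the platform error were simply the equal-weight mean of three independent slider errors, its standard deviation would fall by 1/√3. The real PCMM weights come from the kinematic error transfer model, so this equal-weight sketch is only an idealization:

```python
import numpy as np

rng = np.random.default_rng(10)
n = 1_000_000
sigma = 0.010                       # assumed positioning-error SD per slider (mm)

slider_err = rng.normal(0.0, sigma, (n, 3))   # independent errors of 3 slides
platform_err = slider_err.mean(axis=1)        # idealized platform error

ratio = platform_err.std() / sigma            # expect ~ 1/sqrt(3) ~ 0.577
```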
Rapid mapping of volumetric machine errors using distance measurements
Energy Technology Data Exchange (ETDEWEB)
Krulewich, D.A.
1998-04-01
This paper describes a relatively inexpensive, fast, and easy-to-execute approach to mapping the volumetric errors of a machine tool, coordinate measuring machine, or robot. An error map is used to characterize a machine or to improve its accuracy by compensating for the systematic errors. The method consists of three steps: (1) modeling the relationship between volumetric error and the current state of the machine; (2) acquiring error data based on distance measurements throughout the work volume; and (3) fitting the error model using the nonlinear equation for the distance. The error model is formulated from the kinematic relationship among the six degrees of freedom of error on each moving axis. Expressing each parametric error as a function of position, the errors are combined to predict the error between the functional point and the workpiece, also as a function of position. A series of distances between several fixed base locations and various functional points in the work volume is measured using a Laser Ball Bar (LBB). Each measured distance is a nonlinear function of the commanded location of the machine, the machine error, and the base locations. Using the error model, the nonlinear equation is solved, producing a fit for the error model. Note that, given approximate distances between each pair of base locations, the exact base locations in the machine coordinate system are determined during the nonlinear fitting procedure. Furthermore, with the use of more than three base locations, bias error in the measuring instrument can be removed. The volumetric errors of a three-axis commercial machining center have been mapped using this procedure. In this study, only errors associated with the nominal position of the machine were considered; other errors, such as thermally induced and load-induced errors, were not considered, although the mathematical model has the ability to account for them. Due to the proprietary nature of the projects we are
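The distance-based fitting step can be sketched with a deliberately simplified error model: a single scale error per axis rather than the full six-degree-of-freedom parameterization described above. The base locations, point count, and noise level below are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(0)

# Hypothetical error model: one pure scale error per axis, e(p) = diag(s) @ p
true_s = np.array([50e-6, -30e-6, 20e-6])   # dimensionless scale errors

base = np.array([[0., 0., 0.], [800., 0., 0.], [0., 800., 0.], [0., 0., 400.]])
pts = rng.uniform(0, 500, size=(60, 3))     # commanded functional points (mm)

def distances(s, pts, base):
    # each measured distance is a nonlinear function of the commanded
    # location, the (modeled) machine error, and the base location
    actual = pts * (1.0 + s)                # actual position under scale error
    return np.concatenate([np.linalg.norm(actual - b, axis=1) for b in base])

meas = distances(true_s, pts, base) + rng.normal(0, 1e-4, size=4 * len(pts))

# fit the error model by nonlinear least squares on the distance residuals
fit = least_squares(lambda s: distances(s, pts, base) - meas, x0=np.zeros(3))
print(fit.x)    # recovered per-axis scale errors
```

With a full parametric model, the same least-squares machinery would fit all error terms simultaneously; here it simply recovers three scale errors from distance data alone.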
Color speckle measurement errors using system with XYZ filters
Kinoshita, Junichi; Yamamoto, Kazuhisa; Kuroda, Kazuo
2017-09-01
Measurement errors of color speckle are analyzed for a measurement system equipped with revolving XYZ filters and a 2D sensor. One error source is the filter characteristics, which do not fit the ideal color matching functions; the other is the lack of correlation among the optical paths through the XYZ filters. The unfitted color speckle errors of all the pixel data can easily be calibrated by conversion between the measured BGR chromaticity triangle and the true triangle obtained from the BGR wavelength measurements. For the uncorrelated errors, the measured BGR chromaticity values spread around the true values; as a result, calibrating them is more complicated, requiring the triangular conversion to be repeated pixel by pixel. Color speckle and its errors also strongly affect chromaticity measurements and the image quality of displays using coherent light sources.
Error analysis of sensor measurements in a small UAV
Ackerman, James S.
2005-01-01
This thesis focuses on evaluating the measurement errors in the gimbal system of the SUAV autonomous aircraft developed at NPS. These measurements are used by the vision based target position estimation system developed at NPS. Analysis of the errors inherent in these measurements will help direct future investment in better sensors to improve the estimation system's performance.
Measurement Error Estimation for Capacitive Voltage Transformer by Insulation Parameters
Directory of Open Access Journals (Sweden)
Bin Chen
2017-03-01
Full Text Available Measurement errors of a capacitive voltage transformer (CVT) are relevant to its equivalent parameters, to which its capacitive divider contributes the most. In daily operation, dielectric aging, moisture, dielectric breakdown, and similar effects act jointly on the capacitive divider's insulation characteristics, leading to fluctuations in the equivalent parameters and hence to measurement error. This paper proposes an equivalent circuit model to represent a CVT that incorporates the insulation characteristics of the capacitive divider. Through software simulation and laboratory experiments, the relationship between measurement errors and insulation parameters is obtained. It indicates that variation of the insulation parameters in a CVT causes a corresponding measurement error. From field tests and calculation, equivalent capacitance mainly affects magnitude error, while dielectric loss mainly affects phase error. As capacitance changes 0.2%, magnitude error can reach −0.2%. As dielectric loss factor changes 0.2%, phase error can reach 5′. An increase of equivalent capacitance and dielectric loss factor in the high-voltage capacitor causes a positive real power measurement error, while an increase in the low-voltage capacitor causes a negative real power measurement error.
Slope Error Measurement Tool for Solar Parabolic Trough Collectors: Preprint
Energy Technology Data Exchange (ETDEWEB)
Stynes, J. K.; Ihas, B.
2012-04-01
The National Renewable Energy Laboratory (NREL) has developed an optical measurement tool for parabolic solar collectors that measures the combined errors due to absorber misalignment and reflector slope error. The combined absorber alignment and reflector slope errors are measured using a digital camera to photograph the reflected image of the absorber in the collector. Previous work derived reflector slope errors from the reflected image of the absorber together with an independent measurement of the absorber location, so the accuracy of the slope error measurement depended on the accuracy of the absorber location measurement. By measuring the combined reflector-absorber errors, the uncertainty in the absorber location measurement is eliminated. The related performance merit, the intercept factor, depends on the combined effects of the absorber alignment and reflector slope errors, so measuring the combined effect provides a simpler measurement and a more accurate input to the intercept factor estimate. The minimal equipment and setup required make this technique ideal for field measurements.
Pressure Change Measurement Leak Testing Errors
Energy Technology Data Exchange (ETDEWEB)
Pryor, Jeff M [ORNL; Walker, William C [ORNL
2014-01-01
A pressure change test is a common leak testing method used in construction and Non-Destructive Examination (NDE). The test is known as a fast, simple, and easy-to-apply evaluation method. While it may be fairly quick to conduct and require only simple instrumentation, the engineering behind this type of test is more complex than is apparent on the surface. This paper discusses some of the more common errors made during the application of a pressure change test and gives the test engineer insight into how to correctly compensate for these factors. The principles discussed here apply to ideal gases such as air or other monatomic or diatomic gases; however, the same principles can be applied to polyatomic gases or to liquid flow rate, with altered formulas specific to those types of tests, using the same methodology.
Deconvolution Estimation in Measurement Error Models: The R Package decon
Directory of Open Access Journals (Sweden)
Xiao-Feng Wang
2011-03-01
Full Text Available Data from many scientific areas often come with measurement error. Density or distribution function estimation from contaminated data and nonparametric regression with errors in variables are two important topics in measurement error models. In this paper, we present a new software package decon for R, which contains a collection of functions that use the deconvolution kernel methods to deal with the measurement error problems. The functions allow the errors to be either homoscedastic or heteroscedastic. To make the deconvolution estimators computationally more efficient in R, we adapt the fast Fourier transform algorithm for density estimation with error-free data to the deconvolution kernel estimation. We discuss the practical selection of the smoothing parameter in deconvolution methods and illustrate the use of the package through both simulated and real examples.
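The deconvoluting-kernel idea behind such methods can be illustrated with a generic Fourier-domain sketch for known Gaussian error. This is not the decon package's API; the bandwidth and Fourier kernel below are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(1)
n, sigma, h = 2000, 0.5, 0.4
x_true = rng.normal(0.0, 1.0, n)            # latent variable of interest
w = x_true + rng.normal(0.0, sigma, n)      # observed = truth + Gaussian error

# Deconvoluting kernel estimator in the Fourier domain:
#   fhat(x) = (1/2pi) * Int exp(-i t x) * phi_K(h t) * ecf_W(t) / phi_eps(t) dt
# with phi_K(s) = (1 - s^2)^3 on [-1, 1] and phi_eps(t) = exp(-sigma^2 t^2 / 2)
t = np.linspace(-1.0 / h, 1.0 / h, 801)
ecf = np.exp(1j * t[None, :] * w[:, None]).mean(axis=0)   # empirical char. fn of W
integrand = (1.0 - (h * t) ** 2) ** 3 * ecf * np.exp(sigma**2 * t**2 / 2.0)

xs = np.linspace(-6.0, 6.0, 241)
fhat = np.trapz(np.exp(-1j * np.outer(xs, t)) * integrand, t, axis=1).real / (2 * np.pi)

print(np.trapz(fhat, xs))   # total mass of the estimate, close to 1
```

Dividing by the error characteristic function undoes the blur the measurement error adds; the compactly supported Fourier kernel keeps that division stable.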
Triphasic MRI of pelvic organ descent: sources of measurement error
Energy Technology Data Exchange (ETDEWEB)
Morren, Geert L. [Bowel and Digestion Centre, The Oxford Clinic, 38 Oxford Terrace, Christchurch (New Zealand)]. E-mail: geert_morren@hotmail.com; Balasingam, Adrian G. [Christchurch Radiology Group, P.O. Box 21107, 4th Floor, Leicester House, 291 Madras Street, Christchurch (New Zealand); Wells, J. Elisabeth [Department of Public Health and General Medicine, Christchurch School of Medicine, St. Elmo Courts, Christchurch (New Zealand); Hunter, Anne M. [Christchurch Radiology Group, P.O. Box 21107, 4th Floor, Leicester House, 291 Madras Street, Christchurch (New Zealand); Coates, Richard H. [Christchurch Radiology Group, P.O. Box 21107, 4th Floor, Leicester House, 291 Madras Street, Christchurch (New Zealand); Perry, Richard E. [Bowel and Digestion Centre, The Oxford Clinic, 38 Oxford Terrace, Christchurch (New Zealand)
2005-05-01
Purpose: To identify sources of error when measuring pelvic organ displacement during straining using triphasic dynamic magnetic resonance imaging (MRI). Materials and methods: Ten healthy nulliparous women underwent triphasic dynamic 1.5 T pelvic MRI twice, with 1 week between studies. The bladder was filled with 200 ml of saline solution, and the vagina and rectum were opacified with ultrasound gel. T2-weighted images in the sagittal plane were analysed twice by each of two observers in a blinded fashion. Horizontal and vertical displacement of the bladder neck, bladder base, introitus vaginae, posterior fornix, cul-de-sac, pouch of Douglas, anterior rectal wall, anorectal junction and the change of the vaginal axis were measured eight times in each volunteer (two images, each read twice by two observers). Variance components were calculated for subject, observer, week, the interactions of these three factors, and pure error. An overall standard error of measurement was calculated for a single observation by one observer on a film from one woman at one visit. Results: For the majority of anatomical reference points, the range of displacements measured was wide and the overall measurement error was large. Intra-observer error and week-to-week variation within a subject were important sources of measurement error. Conclusion: Important sources of measurement error when using triphasic dynamic MRI to measure pelvic organ displacement during straining were identified, and recommendations to minimize those errors are made.
Cui, Cunxing; Feng, Qibo; Zhang, Bin
2015-04-10
The straightness measurement systematic errors induced by error crosstalk, fabrication and installation deviations of the optical elements, measurement sensitivity variation, and the Abbe error in a six-degree-of-freedom simultaneous measurement system are analyzed in detail in this paper. Models for compensating these systematic errors were established and verified through a series of comparison experiments with the Automated Precision Inc. (API) 5D measurement system. The experimental results showed that, after compensation, the maximum deviation in straightness error measurement could be reduced from 6.4 to 0.9 μm in the x-direction, and from 8.8 to 0.8 μm in the y-direction.
Correlated measurement error hampers association network inference
Kaduk, M.; Hoefsloot, H.C.J.; Vis, D.J.; Reijmers, T.; Greef, J. van der; Smilde, A.K.; Hendriks, M.M.W.B.
2014-01-01
Modern chromatography-based metabolomics measurements generate large amounts of data in the form of abundances of metabolites. An increasingly popular way of representing and analyzing such data is by means of association networks. Ideally, such a network can be interpreted in terms of the
Valuation Biases, Error Measures, and the Conglomerate Discount
I. Dittmann (Ingolf); E.G. Maug (Ernst)
2006-01-01
We document the importance of the choice of error measure (percentage vs. logarithmic errors) for the comparison of alternative valuation procedures. We demonstrate for several multiple valuation methods (averaging with the arithmetic mean, harmonic mean, median, geometric mean) that the
Measurement errors in cirrus cloud microphysical properties
Directory of Open Access Journals (Sweden)
H. Larsen
Full Text Available The limited accuracy of current cloud microphysics sensors used in cirrus cloud studies imposes limitations on the use of the data to examine the cloud's broadband radiative behaviour, an important element of the global energy balance. We review the limitations of the instruments (PMS probes) most widely used for measuring the microphysical structure of cirrus clouds and show the effect of these limitations on descriptions of the cloud radiative properties. The analysis is applied to measurements made as part of the European Cloud and Radiation Experiment (EUCREX) to determine mid-latitude cirrus microphysical and radiative properties.
Key words. Atmospheric composition and structure (cloud physics and chemistry · Meteorology and atmospheric dynamics · Radiative processes · Instruments and techniques
Directory of Open Access Journals (Sweden)
Claudia Lamina
Full Text Available BACKGROUND: Statistically reconstructing haplotypes from single nucleotide polymorphism (SNP) genotypes can lead to falsely classified haplotypes. This can be an issue when interpreting haplotype association results or when selecting subjects with certain haplotypes for subsequent functional studies. It was our aim to quantify haplotype reconstruction error and to provide tools for doing so. METHODS AND RESULTS: In numerous simulation scenarios, we systematically investigated several error measures, including discrepancy, error rate, and R², and introduced sensitivity and specificity to this context. We exemplified several measures in the KORA study, a large population-based study from Southern Germany. We find that the specificity is slightly reduced only for common haplotypes, while the sensitivity is decreased for some, but not all, rare haplotypes. The overall error rate generally increases with an increasing number of loci, increasing minor allele frequency of the SNPs, decreasing correlation between the alleles, and increasing ambiguity. CONCLUSIONS: We conclude that, with the analytical approach presented here, haplotype-specific error measures can be computed to gain insight into the haplotype uncertainty. This method indicates whether a specific risk haplotype can be expected to be reconstructed with low or high misclassification, and thus the magnitude of the expected bias in association estimates. We also illustrate that sensitivity and specificity separate two dimensions of the haplotype reconstruction error, which completely describe the misclassification matrix and thus provide the prerequisite for methods accounting for misclassification.
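Sensitivity and specificity of haplotype reconstruction can be computed directly from a misclassification matrix; the counts below are invented for illustration.

```python
import numpy as np

# Rows: true haplotype, columns: reconstructed haplotype (hypothetical counts)
M = np.array([[90,  5,  5],
              [ 4, 46,  0],
              [ 6,  0, 44]])
total = M.sum()

# Sensitivity of haplotype h: P(reconstructed as h | truly h)
sensitivity = np.diag(M) / M.sum(axis=1)

# Specificity of haplotype h: P(not reconstructed as h | truly not h)
tn = total - M.sum(axis=0) - M.sum(axis=1) + np.diag(M)
specificity = tn / (total - M.sum(axis=1))

print(sensitivity, specificity)
```

The two vectors describe complementary dimensions of the same misclassification matrix, as the abstract notes.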
Measuring worst-case errors in a robot workcell
Energy Technology Data Exchange (ETDEWEB)
Simon, R.W.; Brost, R.C.; Kholwadwala, D.K. [Sandia National Labs., Albuquerque, NM (United States). Intelligent Systems and Robotics Center
1997-10-01
Errors in model parameters, sensing, and control are inevitably present in real robot systems. These errors must be considered in order to automatically plan robust solutions to many manipulation tasks. Lozano-Perez, Mason, and Taylor proposed a formal method for synthesizing robust actions in the presence of uncertainty; this method has been extended by several subsequent researchers. All of these results presume the existence of worst-case error bounds that describe the maximum possible deviation between the robot's model of the world and reality. This paper examines the problem of measuring these error bounds for a real robot workcell. These measurements are difficult, because of the desire to completely contain all possible deviations while avoiding bounds that are overly conservative. The authors present a detailed description of a series of experiments that characterize and quantify the possible errors in visual sensing and motion control for a robot workcell equipped with standard industrial robot hardware. In addition to providing a means for measuring these specific errors, these experiments shed light on the general problem of measuring worst-case errors.
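An empirical worst-case bound of the kind discussed above can be sketched from repeated trials. The pose, bias, and noise values below are invented, and the inflation factor is an arbitrary hedge against extremes not seen in the sample.

```python
import numpy as np

rng = np.random.default_rng(6)

# Synthetic repeated visual-sensing trials of a part at a known pose (mm);
# a small systematic bias plus random noise, both invented for illustration
true_pos = np.array([100.0, 50.0])
trials = true_pos + np.array([0.02, -0.01]) + rng.normal(0.0, 0.05, (500, 2))

dev = np.linalg.norm(trials - true_pos, axis=1)
empirical_bound = dev.max()             # contains every observed deviation
safety_bound = 1.5 * empirical_bound    # inflated: the sample may miss the extreme

print(empirical_bound, safety_bound)
```

This mirrors the tension the abstract describes: the raw maximum contains only observed deviations, while any inflation trades containment guarantees against over-conservatism.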
Measurement error caused by spatial misalignment in environmental epidemiology.
Gryparis, Alexandros; Paciorek, Christopher J; Zeka, Ariana; Schwartz, Joel; Coull, Brent A
2009-04-01
In many environmental epidemiology studies, the locations and/or times of exposure measurements and health assessments do not match. In such settings, health effects analyses often use the predictions from an exposure model as a covariate in a regression model. Such exposure predictions contain some measurement error as the predicted values do not equal the true exposures. We provide a framework for spatial measurement error modeling, showing that smoothing induces a Berkson-type measurement error with nondiagonal error structure. From this viewpoint, we review the existing approaches to estimation in a linear regression health model, including direct use of the spatial predictions and exposure simulation, and explore some modified approaches, including Bayesian models and out-of-sample regression calibration, motivated by measurement error principles. We then extend this work to the generalized linear model framework for health outcomes. Based on analytical considerations and simulation results, we compare the performance of all these approaches under several spatial models for exposure. Our comparisons underscore several important points. First, exposure simulation can perform very poorly under certain realistic scenarios. Second, the relative performance of the different methods depends on the nature of the underlying exposure surface. Third, traditional measurement error concepts can help to explain the relative practical performance of the different methods. We apply the methods to data on the association between levels of particulate matter and birth weight in the greater Boston area.
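The abstract distinguishes Berkson-type from classical error; the classical case, with the attenuation it causes and a regression-calibration-style correction, can be simulated in a few lines (the error variance is assumed known here).

```python
import numpy as np

rng = np.random.default_rng(2)
n, beta = 50_000, 2.0
x = rng.normal(0, 1, n)                   # true exposure, variance 1
w = x + rng.normal(0, 1, n)               # classical error, variance 1
y = beta * x + rng.normal(0, 1, n)        # health outcome

naive = np.cov(w, y)[0, 1] / np.var(w)    # attenuated toward 0 (factor ~0.5 here)

# reliability ratio lambda = var(x) / var(w), with error variance known to be 1
lam = (np.var(w) - 1.0) / np.var(w)
corrected = naive / lam                   # regression-calibration-style correction

print(naive, corrected)
```

With reliability 0.5, the naive slope sits near half the true effect, and dividing by the reliability ratio recovers it.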
Detection and Classification of Measurement Errors in Bioimpedance Spectroscopy.
Directory of Open Access Journals (Sweden)
David Ayllón
Full Text Available Bioimpedance spectroscopy (BIS) measurement errors may be caused by parasitic stray capacitance, impedance mismatch, cross-talking or their very likely combination. An accurate detection and identification is of extreme importance for further analysis because in some cases and for some applications, certain measurement artifacts can be corrected, minimized or even avoided. In this paper we present a robust method to detect the presence of measurement artifacts and identify what kind of measurement error is present in BIS measurements. The method is based on supervised machine learning and uses a novel set of generalist features for measurement characterization in different immittance planes. Experimental validation has been carried out using a database of complex spectra BIS measurements obtained from different BIS applications and containing six different types of errors, as well as error-free measurements. The method obtained a low classification error (0.33%) and has shown good generalization. Since both the features and the classification schema are relatively simple, the implementation of this pre-processing task in the current hardware of bioimpedance spectrometers is possible.
Detection and Classification of Measurement Errors in Bioimpedance Spectroscopy.
Ayllón, David; Gil-Pita, Roberto; Seoane, Fernando
2016-01-01
Bioimpedance spectroscopy (BIS) measurement errors may be caused by parasitic stray capacitance, impedance mismatch, cross-talking or their very likely combination. An accurate detection and identification is of extreme importance for further analysis because in some cases and for some applications, certain measurement artifacts can be corrected, minimized or even avoided. In this paper we present a robust method to detect the presence of measurement artifacts and identify what kind of measurement error is present in BIS measurements. The method is based on supervised machine learning and uses a novel set of generalist features for measurement characterization in different immittance planes. Experimental validation has been carried out using a database of complex spectra BIS measurements obtained from different BIS applications and containing six different types of errors, as well as error-free measurements. The method obtained a low classification error (0.33%) and has shown good generalization. Since both the features and the classification schema are relatively simple, the implementation of this pre-processing task in the current hardware of bioimpedance spectrometers is possible.
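A minimal stand-in for such a supervised artifact classifier, using synthetic two-dimensional features and a nearest-centroid rule in place of the authors' feature set and model:

```python
import numpy as np

rng = np.random.default_rng(7)

# Synthetic 2-D "immittance-plane" features for three artifact classes
centers = np.array([[0.0, 0.0], [3.0, 0.0], [0.0, 3.0]])
X = np.vstack([c + 0.5 * rng.normal(size=(200, 2)) for c in centers])
labels = np.repeat([0, 1, 2], 200)

# Nearest-centroid rule: assign each measurement to the closest class mean
cent = np.array([X[labels == k].mean(axis=0) for k in range(3)])
pred = ((X[:, None, :] - cent[None]) ** 2).sum(axis=-1).argmin(axis=1)
error_rate = (pred != labels).mean()

print(error_rate)
```

On well-separated synthetic classes the error rate is near zero; real BIS artifacts overlap far more, which is why the paper engineers generalist features across several immittance planes.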
Saqr, Anwar; Khan, Shahjahan
2017-05-01
This paper introduces a statistical method to estimate the parameters of the bivariate structural errors-in-variables (EIV) model, a difficult problem when there is no, or only uncertain, prior knowledge of the measurement error variances. The proposed estimators of the EIV model parameters are derived from a mathematical modification of the observed data: the method reproduces an explanatory variable with statistical characteristics equivalent to those of the unobserved explanatory variable, and corrects for the effects of measurement error in the predictors. The proposed method produces robust estimators, is straightforward and easy to implement, and takes the equation errors into account. Simulation studies show the new estimator to be generally more efficient and less biased than some previous approaches. Compared to the maximum likelihood method in the simulations, the estimators of the proposed method are nearly asymptotically unbiased and efficient when there is no or uncertain prior knowledge of the measurement error variances. Numerical comparisons of the simulation results are included. In addition, the results are illustrated with an application to a well-known real data set of serum kanamycin.
An in-situ measuring method for planar straightness error
Chen, Xi; Fu, Luhua; Yang, Tongyu; Sun, Changku; Wang, Zhong; Zhao, Yan; Liu, Changjie
2018-01-01
In view of some current problems in measuring the plane shape error of a workpiece, an in-situ measuring method based on laser triangulation is presented in this paper. The method avoids the inefficiency of traditional methods such as the knife straightedge, as well as the time and cost requirements of a coordinate measuring machine (CMM). A laser-based measuring head is designed and installed on the spindle of a numerical control (NC) machine. The measuring head moves along a planned path to sample the measuring points. The spatial coordinates of the measuring points are obtained by combining the laser triangulation displacement sensor with the coordinate system of the NC machine, which realizes the in-situ measurement. The planar straightness error is evaluated using particle swarm optimization (PSO). To verify the feasibility and accuracy of the measuring method, simulation experiments were implemented with a CMM. Comparing the measurement results of the measuring head with the corresponding values obtained by a composite measuring machine verifies that the method achieves high-precision, automatic measurement of the planar straightness error of a workpiece.
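The PSO evaluation step can be sketched on a simulated profile. The swarm parameters and form-error magnitudes below are arbitrary, and vertical deviations stand in for true geometric distances.

```python
import numpy as np

rng = np.random.default_rng(3)

# Simulated profile: nominal slope plus a small periodic form error and noise (mm)
x = np.linspace(0.0, 100.0, 50)
y = 0.002 * x + 0.004 * np.sin(x / 9.0) + rng.normal(0.0, 0.0005, x.size)

def max_dev(p):
    # Chebyshev objective: largest deviation from the reference line y = a + b*x
    a, b = p
    return np.abs(y - (a + b * x)).max()

# Bare-bones particle swarm (swarm size and coefficients chosen arbitrarily)
m, iters, w_in, c1, c2 = 40, 200, 0.7, 1.5, 1.5
pos = rng.uniform(-0.01, 0.01, (m, 2))
vel = np.zeros((m, 2))
pbest, pcost = pos.copy(), np.array([max_dev(p) for p in pos])
gbest = pbest[pcost.argmin()].copy()
for _ in range(iters):
    vel = (w_in * vel + c1 * rng.random((m, 2)) * (pbest - pos)
           + c2 * rng.random((m, 2)) * (gbest - pos))
    pos = pos + vel
    cost = np.array([max_dev(p) for p in pos])
    better = cost < pcost
    pbest[better], pcost[better] = pos[better], cost[better]
    gbest = pbest[pcost.argmin()].copy()

straightness = 2.0 * max_dev(gbest)   # minimum-zone straightness estimate
```

Minimizing the maximum deviation over both line coefficients yields the minimum-zone line; the zone width is twice the optimal deviation.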
QUALITATIVE DATA AND ERROR MEASUREMENT IN INPUT-OUTPUT-ANALYSIS
NIJKAMP, P; OOSTERHAVEN, J; OUWERSLOOT, H; RIETVELD, P
1992-01-01
This paper is a contribution to the rapidly emerging field of qualitative data analysis in economics. Ordinal data techniques and error measurement in input-output analysis are here combined in order to test the reliability of a low level of measurement and precision of data by means of a stochastic
Measurement error of waist circumference: Gaps in knowledge
Verweij, L.M.; Terwee, C.B.; Proper, K.I.; Hulshof, C.T.; Mechelen, W.V. van
2013-01-01
Objective It is not clear whether measuring waist circumference in clinical practice is problematic because the measurement error is unclear, as well as what constitutes a clinically relevant change. The present study aimed to summarize what is known from state-of-the-art research. Design To
Assessment of salivary flow rate: biologic variation and measure error.
Jongerius, P.H.; Limbeek, J. van; Rotteveel, J.J.
2004-01-01
OBJECTIVE: To investigate the applicability of the swab method in the measurement of salivary flow rate in multiple-handicap drooling children. To quantify the measurement error of the procedure and the biologic variation in the population. STUDY DESIGN: Cohort study. METHODS: In a repeated
Measurement errors with low-cost citizen science radiometers
Bardají, R.; Piera Fernández, Jaume
2016-01-01
The KdUINO is a do-it-yourself buoy with low-cost radiometers that measures a parameter related to water transparency: the diffuse attenuation coefficient integrated over all the photosynthetically active radiation. In this contribution, we analyze the measurement errors of a novel low-cost multispectral radiometer that is used with the KdUINO.
Measuring the severity of prescribing errors: a systematic review.
Garfield, Sara; Reynolds, Matthew; Dermont, Liesbeth; Franklin, Bryony Dean
2013-12-01
Prescribing errors are common. It has been suggested that the severity as well as the frequency of errors should be assessed when measuring prescribing error rates. This would provide more clinically relevant information, and allow more complete evaluation of the effectiveness of interventions designed to reduce errors. The objective of this systematic review was to describe the tools used to assess prescribing error severity in studies reporting hospital prescribing error rates. The following databases were searched: MEDLINE, EMBASE, International Pharmaceutical Abstracts, and CINAHL (January 1985-January 2013). We included studies that reported the detection and rate of prescribing errors in prescriptions for adult and/or pediatric hospital inpatients, or elaborated on the properties of severity assessment tools used by these studies. Studies not published in English, or that evaluated errors for only one disease or drug class, one route of administration, or one type of prescribing error, were excluded, as were letters and conference abstracts. One reviewer screened all abstracts and obtained complete articles. A second reviewer assessed 10% of all abstracts and complete articles to check the reliability of the screening process. Tools were appraised for country and method of development, whether the tool assessed actual or potential harm, levels of severity assessed, and results of any validity and reliability studies. Fifty-seven percent of 107 studies measuring prescribing error rates included an assessment of severity. Forty tools were identified that assessed severity, only two of which had acceptable reliability and validity. In general, little information was given on the method of development or ease of use of the tools, although one tool required four reviewers and was thus potentially time consuming. The review was limited to studies written in English. One of the review authors was also the author of one of the tools, giving a potential source of bias.
Measurement error models for survey statistics and economic archaeology
Groß, Marcus
2016-01-01
The present work is concerned with so-called measurement error models in applied statistics. Data from two very different fields were analyzed and processed: on the one hand, survey and register data, which are used in survey statistics, and on the other hand, anthropological data on prehistoric skeletons. In both fields the problem arises that some variables cannot be measured with sufficient accuracy, which can be due to privacy protection or to measurement inaccuracies. This circumstance can be summa...
Cumulative Measurement Errors for Dynamic Testing of Space Flight Hardware
Winnitoy, Susan
2012-01-01
measurements during hardware motion and contact. While performing dynamic testing of an active docking system, researchers found that the data from the motion platform, test hardware and two external measurement systems exhibited frame offsets and rotational errors. While the errors were relatively small when considering the motion scale overall, they substantially exceeded the individual accuracies for each component. After evaluating both the static and dynamic measurements, researchers found that the static measurements introduced significantly more error into the system than the dynamic measurements even though, in theory, the static measurement errors should be smaller than the dynamic. In several cases, the magnitude of the errors varied widely for the static measurements. Upon further investigation, researchers found the larger errors to be a consequence of hardware alignment issues, frame location and measurement technique whereas the smaller errors were dependent on the number of measurement points. This paper details and quantifies the individual and cumulative errors of the docking system and describes methods for reducing the overall measurement error. The overall quality of the dynamic docking tests for flight hardware verification was improved by implementing these error reductions.
A Model of Self-Monitoring Blood Glucose Measurement Error.
Vettoretti, Martina; Facchinetti, Andrea; Sparacino, Giovanni; Cobelli, Claudio
2017-07-01
A reliable model of the probability density function (PDF) of self-monitoring of blood glucose (SMBG) measurement error would be important for several applications in diabetes, such as in silico testing of insulin therapies. In the literature, the PDF of SMBG error is usually described by a Gaussian function, whose symmetry and simplicity are unable to properly describe the variability of experimental data. Here, we propose a new methodology to derive more realistic models of the SMBG error PDF. The blood glucose range is divided into zones where the error (absolute or relative) has a constant standard deviation (SD). In each zone, a suitable PDF model is fitted by maximum likelihood to experimental data, and model validation is performed by goodness-of-fit tests. The method is tested on two databases collected by the One Touch Ultra 2 (OTU2; Lifescan Inc, Milpitas, CA) and the Bayer Contour Next USB (BCN; Bayer HealthCare LLC, Diabetes Care, Whippany, NJ). In both cases, skew-normal and exponential models are used to describe the distribution of errors and outliers, respectively. Two zones were identified: zone 1 with constant-SD absolute error, and zone 2 with constant-SD relative error. Goodness-of-fit tests confirmed that the identified PDF models are valid and superior to the Gaussian models used so far in the literature. The proposed methodology allows one to derive realistic models of the SMBG error PDF, which can be used in several investigations of present interest to the scientific community, for example, to perform in silico clinical trials comparing SMBG-based with nonadjunctive CGM-based insulin treatments.
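The zone-wise fitting idea can be sketched with synthetic data; the skew-normal parameters are invented, and SciPy's generic maximum-likelihood fit stands in for the authors' procedure.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)

# Synthetic SMBG errors for one glucose zone; the skew-normal parameters
# (shape, location, scale) are invented for illustration
err = stats.skewnorm.rvs(4.0, loc=-2.0, scale=6.0, size=5000, random_state=rng)

# Zone-wise maximum-likelihood fit of a skew-normal PDF
shape, loc, scale = stats.skewnorm.fit(err)

# Goodness of fit: Kolmogorov-Smirnov test against the fitted distribution
ks = stats.kstest(err, 'skewnorm', args=(shape, loc, scale))
print(shape, loc, scale, ks.pvalue)
```

The skew-normal's shape parameter captures the asymmetry a Gaussian cannot; in the paper's setup a separate fit would be run per glucose zone, plus an exponential model for outliers.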
Reliability and measurement error of 3-dimensional regional lumbar motion measures
DEFF Research Database (Denmark)
Mieritz, Rune M; Bronfort, Gert; Kawchuk, Greg
2012-01-01
The purpose of this study was to systematically review the literature on reproducibility (reliability and/or measurement error) of 3-dimensional (3D) regional lumbar motion measurement systems.
Bayesian modeling of measurement error in predictor variables
Fox, Gerardus J.A.; Glas, Cornelis A.W.
2003-01-01
It is shown that measurement error in predictor variables can be modeled using item response theory (IRT). The predictor variables, that may be defined at any level of an hierarchical regression model, are treated as latent variables. The normal ogive model is used to describe the relation between
GY SAMPLING THEORY IN ENVIRONMENTAL STUDIES 2: SUBSAMPLING ERROR MEASUREMENTS
Sampling can be a significant source of error in the measurement process. The characterization and cleanup of hazardous waste sites require data that meet site-specific levels of acceptable quality if scientifically supportable decisions are to be made. In support of this effort,...
Consistent estimation of linear panel data models with measurement error
Meijer, Erik; Spierdijk, Laura; Wansbeek, Thomas
2017-01-01
Measurement error causes a bias towards zero when estimating a panel data linear regression model. The panel data context offers various opportunities to derive instrumental variables allowing for consistent estimation. We consider three sources of moment conditions: (i) restrictions on the
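The "bias towards zero" mentioned above is easy to reproduce numerically: in the classical errors-in-variables model the OLS slope is attenuated by the reliability ratio σ²ₓ/(σ²ₓ+σ²ᵤ). A small simulation sketch (illustrative only, not the paper's panel estimator):

```python
import numpy as np

rng = np.random.default_rng(42)
n, beta = 100_000, 2.0
x = rng.normal(0.0, 1.0, n)        # true covariate, variance 1
u = rng.normal(0.0, 1.0, n)        # measurement error, variance 1
w = x + u                          # observed, error-contaminated covariate
y = beta * x + rng.normal(0.0, 1.0, n)

slope_true = np.polyfit(x, y, 1)[0]   # close to 2.0
# Reliability ratio = 1 / (1 + 1) = 0.5, so the naive slope is ~ 1.0:
slope_naive = np.polyfit(w, y, 1)[0]
```

Instrumental-variable estimators of the kind the paper derives are designed to undo exactly this attenuation.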
GMM estimation in panel data models with measurement error
Wansbeek, T.J.
Griliches and Hausman (J. Econom. 32 (1986) 93) have introduced GMM estimation in panel data models with measurement error. We present a simple, systematic approach to derive moment conditions for such models under a variety of assumptions. (C) 2001 Elsevier Science S.A. All rights reserved.
Comparing measurement errors for formants in synthetic and natural vowels.
Shadle, Christine H; Nam, Hosung; Whalen, D H
2016-02-01
The measurement of formant frequencies of vowels is among the most common measurements in speech studies, but measurements are known to be biased by the particular fundamental frequency (F0) exciting the formants. Approaches to reducing the errors were assessed in two experiments. In the first, synthetic vowels were constructed with five different first formant (F1) values and nine different F0 values; formant bandwidths and higher formant frequencies were held constant. Input formant values were compared to manual measurements and automatic measures using the linear prediction coding-Burg algorithm, linear prediction closed-phase covariance, the weighted linear prediction-attenuated main excitation (WLP-AME) algorithm [Alku, Pohjalainen, Vainio, Laukkanen, and Story (2013). J. Acoust. Soc. Am. 134(2), 1295-1313], and spectra smoothed cepstrally and by averaging repeated discrete Fourier transforms. Formants were also measured manually from pruned reassigned spectrograms (RSs) [Fulop (2011). Speech Spectrum Analysis (Springer, Berlin)]. All but WLP-AME and RS had large errors in the direction of the strongest harmonic; the smallest errors occurred with WLP-AME and RS. In the second experiment, these methods were used on vowels in isolated words spoken by four speakers. Results for the natural speech show that F0 bias affects all automatic methods, including WLP-AME; only the formants measured manually from RS appeared to be accurate. In addition, RS coped better with weaker formants and glottal fry.
Chen, Benyong; Xu, Bin; Yan, Liping; Zhang, Enzheng; Liu, Yanna
2015-04-06
A laser straightness interferometer system with rotational error compensation and simultaneous measurement of six degrees of freedom error parameters is proposed. The optical configuration of the proposed system is designed, and a mathematical model is established for simultaneously measuring six degrees of freedom parameters of the measured object: three rotational parameters (the yaw, pitch and roll errors) and three linear parameters (the horizontal straightness error, the vertical straightness error and the straightness error's position). To address the influence of the rotational errors produced by the measuring reflector in the laser straightness interferometer, a compensation method for the straightness error and its position is presented. An experimental setup was constructed, and a series of experiments, including separate comparison measurement of every parameter, compensation of the straightness error and its position, and simultaneous measurement of the six degrees of freedom parameters of a precision linear stage, were performed to demonstrate the feasibility of the proposed system. Experimental results show that the measurements of the multiple degrees of freedom parameters obtained from the proposed system agree with those obtained from the reference instruments, and that the presented compensation method is effective in eliminating the influence of rotational errors on the measurement of the straightness error and its position.
#2 - An Empirical Assessment of Exposure Measurement Error ...
Background • Differing degrees of exposure error across pollutants • Previous focus on quantifying and accounting for exposure error in single-pollutant models • Examine exposure errors for multiple pollutants and provide insights on the potential for bias and attenuation of effect estimates in single- and bi-pollutant epidemiological models. The National Exposure Research Laboratory (NERL) Human Exposure and Atmospheric Sciences Division (HEASD) conducts research in support of the EPA's mission to protect human health and the environment. HEASD's research program supports Goal 1 (Clean Air) and Goal 4 (Healthy People) of the EPA's strategic plan. More specifically, our division conducts research to characterize the movement of pollutants from the source to contact with humans. Our multidisciplinary research program produces Methods, Measurements, and Models to identify relationships between and characterize processes that link source emissions, environmental concentrations, human exposures, and target-tissue dose. The impact of these tools is improved regulatory programs and policies for EPA.
Error in total ozone measurements arising from aerosol attenuation
Thomas, R. W. L.; Basher, R. E.
1979-01-01
A generalized least squares method for deducing both total ozone and aerosol extinction spectrum parameters from Dobson spectrophotometer measurements was developed. An error analysis applied to this system indicates that there is little advantage to additional measurements once a sufficient number of line pairs have been employed to solve for the selected detail in the attenuation model. It is shown that when there is a predominance of small particles (less than about 0.35 microns in diameter) the total ozone from the standard AD system is too high by about one percent. When larger particles are present the derived total ozone may be an overestimate or an underestimate but serious errors occur only for narrow polydispersions.
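The generalized least squares step underlying this kind of retrieval solves x̂ = (AᵀWA)⁻¹AᵀWy for the parameter vector. A minimal sketch (the design matrix and weights below are illustrative stand-ins, not Dobson line-pair coefficients):

```python
import numpy as np

def gls(A, y, W):
    """Generalized least squares: minimize (y - A x)^T W (y - A x)."""
    AtW = A.T @ W
    return np.linalg.solve(AtW @ A, AtW @ y)

# Illustrative system: two unknowns (e.g., total ozone and one aerosol
# extinction parameter) observed through four line pairs with
# inverse-variance weights.
A = np.array([[1.0, 0.5],
              [1.0, 1.0],
              [1.0, 2.0],
              [1.0, 3.0]])
x_true = np.array([3.0, -0.7])
y = A @ x_true                        # noise-free for the sketch
W = np.diag([4.0, 2.0, 1.0, 1.0])
x_hat = gls(A, y, W)                  # recovers x_true exactly here
```

With noisy observations, adding further line pairs beyond those needed to resolve the chosen attenuation detail yields diminishing returns, consistent with the abstract's error analysis.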
DEFF Research Database (Denmark)
Chen, Yangyang; Yang, Ming; Long, Jiang
2017-01-01
For motor control applications, the speed loop performance largely depends on the accuracy of the speed feedback signal. The M/T method, owing to its high theoretical accuracy, is the most widely used for speed measurement with incremental encoders. However, the inherent encoder optical grating error and A/D conversion error make it hard to achieve the theoretical speed measurement accuracy. In this paper, hardware-caused speed measurement errors are analyzed and modeled in detail, and a Single-Phase Self-adaptive M/T method is proposed to ideally suppress speed measurement error. In the end, simulation...
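The conventional M/T computation that this record builds on combines an encoder pulse count with a high-frequency clock count over the same gate. A minimal sketch of that baseline calculation (function and parameter names are illustrative):

```python
def mt_speed_rpm(m1, m2, f_clk, ppr):
    """Classic M/T speed estimate: m1 encoder pulses and m2 clock pulses
    (clock frequency f_clk, Hz) counted over one gate; ppr = encoder
    pulses per revolution.  The exact gate duration is T = m2 / f_clk."""
    T = m2 / f_clk            # gate duration in seconds
    revs = m1 / ppr           # revolutions completed during the gate
    return 60.0 * revs / T    # speed in rev/min

# 1000 pulses from a 2500-ppr encoder, timed by 20 000 ticks of a 1 MHz
# clock (a 20 ms gate):
rpm = mt_speed_rpm(m1=1000, m2=20_000, f_clk=1e6, ppr=2500)  # -> 1200.0
```

The hardware errors the paper models (grating error, A/D conversion error) perturb m1 and the effective gate edges, which is what the proposed self-adaptive variant suppresses.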
PROCESSING AND ANALYSIS OF THE MEASURED ALIGNMENT ERRORS FOR RHIC.
Energy Technology Data Exchange (ETDEWEB)
PILAT,F.; HEMMER,M.; PTITSIN,V.; TEPIKIAN,S.; TRBOJEVIC,D.
1999-03-29
All elements of the Relativistic Heavy Ion Collider (RHIC) have been installed in ideal survey locations, which are defined as the optimum locations of the fiducials with respect to the positions generated by the design. The alignment process included the presurvey of all elements which could affect the beams. During this procedure a special attention was paid to the precise determination of the quadrupole centers as well as the roll angles of the quadrupoles and dipoles. After installation the machine has been surveyed and the resulting as-built measured position of the fiducials have been stored and structured in the survey database. We describe how the alignment errors, inferred by comparison of ideal and as-built data, have been processed and analyzed by including them in the RHIC modeling software. The RHIC model, which also includes individual measured errors for all magnets in the machine and is automatically generated from databases, allows the study of the impact of the measured alignment errors on the machine.
Test-Cost-Sensitive Attribute Reduction of Data with Normal Distribution Measurement Errors
Hong Zhao; Fan Min; William Zhu
2013-01-01
Measurement error with a normal distribution is universal in applications. Generally, a smaller measurement error requires a better instrument and a higher test cost. In decision making based on attribute values of objects, we shall select an attribute subset with appropriate measurement error to minimize the total test cost. Recently, error-range-based covering rough sets with uniform distribution error were proposed to investigate this issue. However, the measurement errors satisfy normal distrib...
Measurement error in CT assessment of appendix diameter
Energy Technology Data Exchange (ETDEWEB)
Trout, Andrew T.; Towbin, Alexander J. [Cincinnati Children's Hospital Medical Center, Department of Radiology, MLC 5031, Cincinnati, OH (United States); Zhang, Bin [Cincinnati Children's Hospital Medical Center, Department of Biostatistics and Epidemiology, Cincinnati, OH (United States)]
2016-12-15
Appendiceal diameter continues to be cited as an important criterion for diagnosis of appendicitis by computed tomography (CT). To assess sources of error and variability in appendiceal diameter measurements by CT. In this institutional review board-approved review of imaging and medical records, we reviewed CTs performed in children <18 years of age between Jan. 1 and Dec. 31, 2010. Appendiceal diameter was measured in the axial and coronal planes by two reviewers (R1, R2). One year later, 10% of cases were remeasured. For patients who had multiple CTs, serial measurements were made to assess within-patient variability. Measurement differences between planes, within and between reviewers, within patients and between CT and pathological measurements were assessed using correlation coefficients and paired t-tests. Six hundred thirty-one CTs performed in 519 patients (mean age: 10.9 ± 4.9 years, 50.8% female) were reviewed. Axial and coronal measurements were strongly correlated (r = 0.92-0.94, P < 0.0001) with coronal plane measurements significantly larger (P < 0.0001). Measurements were strongly correlated between reviewers (r = 0.89-0.9, P < 0.0001) but differed significantly in both planes (axial: +0.2 mm, P = 0.003; coronal: +0.1 mm, P = 0.007). Repeat measurements were significantly different for one reviewer only in the axial plane (0.3 mm difference, P < 0.05). Within patients imaged multiple times, measured appendix diameters differed significantly in the axial plane for both reviewers (R1: 0.5 mm, P = 0.031; R2: 0.7 mm, P = 0.022). Multiple potential sources of measurement error raise concern about the use of rigid diameter cutoffs for the diagnosis of acute appendicitis by CT. (orig.)
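The statistics this study relies on (a correlation coefficient plus a paired t-test on the same set of measurements) can be sketched as follows. The data here are synthetic stand-ins, built with a deliberate +0.2 mm systematic offset like the one reported between planes:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
axial = rng.normal(8.0, 2.0, 300)                  # axial diameters, mm
# Coronal readings: strongly correlated but systematically 0.2 mm larger
coronal = axial + 0.2 + rng.normal(0.0, 0.8, 300)

r, p_corr = stats.pearsonr(axial, coronal)         # strong correlation
t, p_paired = stats.ttest_rel(coronal, axial)      # detects the offset
```

This illustrates how two measurement sets can be "strongly correlated" yet "differ significantly": correlation ignores a constant bias that the paired test exposes.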
Error reduction techniques for measuring long synchrotron mirrors
Energy Technology Data Exchange (ETDEWEB)
Irick, S.
1998-07-01
Many instruments and techniques are used for measuring long mirror surfaces. A Fizeau interferometer may be used to measure mirrors much longer than the interferometer aperture size by using grazing incidence at the mirror surface and analyzing the light reflected from a flat end mirror. Advantages of this technique are data acquisition speed and use of a common instrument. Disadvantages are reduced sampling interval, uncertainty of tangential position, and sagittal/tangential aspect ratio other than unity. Also, deep aspheric surfaces cannot be measured on a Fizeau interferometer without a specially made fringe nulling holographic plate. Other scanning instruments have been developed for measuring height, slope, or curvature profiles of the surface, but lack accuracy for very long scans required for X-ray synchrotron mirrors. The Long Trace Profiler (LTP) was developed specifically for long x-ray mirror measurement, and still outperforms other instruments, especially for aspheres. Thus, this paper focuses on error reduction techniques for the LTP.
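Slope-measuring instruments like the LTP recover the mirror height profile by integrating the measured slope over position; integration is also one place where long-scan errors accumulate. A minimal sketch of the reconstruction step, using trapezoidal integration (an assumption; the LTP's actual processing is more involved):

```python
import numpy as np

def height_from_slope(x, slope):
    """Integrate a measured slope profile (rad) over position x (m) with
    the trapezoidal rule to get the height profile (m), with h(x[0]) = 0."""
    steps = 0.5 * (slope[1:] + slope[:-1]) * np.diff(x)
    return np.concatenate(([0.0], np.cumsum(steps)))

# Ideal parabolic mirror: h = x^2 / (2R), so the slope is x / R.
R = 50.0                          # focal-geometry parameter, m
x = np.linspace(0.0, 1.0, 1001)   # a 1 m long mirror
h = height_from_slope(x, x / R)   # reconstructs x**2 / (2R)
```

Because each height point sums all preceding slope samples, a small systematic slope error grows linearly along the scan, which is why long synchrotron mirrors are especially demanding.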
Functional multiple indicators, multiple causes measurement error models.
Tekwe, Carmen D; Zoh, Roger S; Bazer, Fuller W; Wu, Guoyao; Carroll, Raymond J
2017-05-08
Objective measures of oxygen consumption and carbon dioxide production by mammals are used to predict their energy expenditure. Since energy expenditure is not directly observable, it can be viewed as a latent construct with multiple physical indirect measures such as respiratory quotient, volumetric oxygen consumption, and volumetric carbon dioxide production. Metabolic rate is defined as the rate at which metabolism occurs in the body. Metabolic rate is also not directly observable. However, heat is produced as a result of metabolic processes within the body. Therefore, metabolic rate can be approximated by heat production plus some errors. While energy expenditure and metabolic rates are correlated, they are not equivalent. Energy expenditure results from physical function, while metabolism can occur within the body without the occurrence of physical activities. In this manuscript, we present a novel approach for studying the relationship between metabolic rate and indicators of energy expenditure. We do so by extending our previous work on MIMIC ME models to allow responses that are sparsely observed functional data, defining the sparse functional multiple indicators, multiple cause measurement error (FMIMIC ME) models. The mean curves in our proposed methodology are modeled using basis splines. A novel approach for estimating the variance of the classical measurement error based on functional principal components is presented. The model parameters are estimated using the EM algorithm and a discussion of the model's identifiability is provided. We show that the defined model is not a trivial extension of longitudinal or functional data methods, due to the presence of the latent construct. Results from its application to data collected on Zucker diabetic fatty rats are provided. Simulation results investigating the properties of our approach are also presented. © 2017, The International Biometric Society.
Proportional Hazards Model with Covariate Measurement Error and Instrumental Variables.
Song, Xiao; Wang, Ching-Yun
2014-12-01
In biomedical studies, covariates with measurement error may occur in survival data. Existing approaches mostly require certain replications on the error-contaminated covariates, which may not be available in the data. In this paper, we develop a simple nonparametric correction approach for estimation of the regression parameters in the proportional hazards model using a subset of the sample where instrumental variables are observed. The instrumental variables are related to the covariates through a general nonparametric model, and no distributional assumptions are placed on the error and the underlying true covariates. We further propose a novel generalized method of moments nonparametric correction estimator to improve the efficiency over the simple correction approach. The efficiency gain can be substantial when the calibration subsample is small compared to the whole sample. The estimators are shown to be consistent and asymptotically normal. Performance of the estimators is evaluated via simulation studies and by an application to data from an HIV clinical trial. Estimation of the baseline hazard function is not addressed.
Longitudinal changes in cardiorespiratory fitness: measurement error or true change?
Jackson, Andrew S; Kampert, James B; Barlow, Carolyn E; Morrow, James R; Church, Timothy S; Blair, Steven N
2004-07-01
This study examined the thesis that the reported Aerobics Center Longitudinal Study (ACLS) mortality reductions associated with improved cardiorespiratory fitness were because of measurement error of serial treadmill tests. We tested the research hypothesis that longitudinal changes in cardiorespiratory fitness of the ACLS cohort were a multivariate function of changes in self-report physical activity (SR-PA), resting heart rate, and body mass index (BMI). We used the results of three serial maximal treadmill tests (T1, T2, and T3) to evaluate the serial changes in cardiorespiratory fitness of 4675 men. The mean duration between the three serial tests examined was: T2 - T1, 1.9 yr; T3 - T2, 6.1 yr; and T3 - T1, 8.0 yr. Maximum and resting heart rate, BMI, SR-PA, and maximum Balke treadmill duration were measured on each occasion. General linear models analysis showed that, with change in maximum heart rate statistically controlled, change in treadmill time performance was a function of independent changes in SR-PA, BMI, and R-HR. These variables accounted for significant (P heart rate gained the most fitness between serial tests. These results support the research hypothesis tested. Variations in serial ACLS treadmill tests are not just due to measurement error alone, but also to systematic variation linked with changes in lifestyle.
Development of an Abbe Error Free Micro Coordinate Measuring Machine
Directory of Open Access Journals (Sweden)
Qiangxian Huang
2016-04-01
Full Text Available A micro Coordinate Measuring Machine (CMM) with a measurement volume of 50 mm × 50 mm × 50 mm and a measuring accuracy of about 100 nm (2σ) has been developed. In this new micro CMM, an XYZ stage, which is driven by three piezo-motors in the X, Y and Z directions, can achieve a drive resolution of about 1 nm and a stroke of more than 50 mm. In order to reduce the crosstalk among the X-, Y- and Z-stages, a special mechanical structure, called a co-planar stage, is introduced. The movement of the stage in each direction is detected by a laser interferometer. A contact-type probe is adopted for measurement. The center of the probe ball coincides with the intersection point of the measuring axes of the three laser interferometers. Therefore, the metrological system of the CMM obeys the Abbe principle in three directions and is free from Abbe error. The CMM is placed in an anti-vibration and thermostatic chamber to avoid the influence of vibration and temperature fluctuation. A series of experimental results show that the measurement uncertainty within 40 mm along the X, Y and Z directions is about 100 nm (2σ). The flatness of the measuring face of a gauge block is also measured, verifying the performance of the developed micro CMM.
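The Abbe error that this design eliminates grows with the offset between the measurement axis and the probing point: to first order e ≈ d·tan(θ) for an angular motion error θ at offset d. A quick illustration of why the error matters at this accuracy level (the offset and angle values are illustrative):

```python
import math

def abbe_error(offset_m, angle_rad):
    """First-order Abbe error: the offset between the measurement axis
    and the point of interest, times the tangent of the angular error."""
    return offset_m * math.tan(angle_rad)

# A 10 mm Abbe offset combined with a 2 arc-second pitch error:
theta = 2.0 / 3600.0 * math.pi / 180.0   # 2 arcsec in radians
e = abbe_error(0.010, theta)             # ~ 97 nm, comparable to the
                                         # 100 nm (2σ) CMM accuracy
```

Aligning the probe ball with the intersection of the three interferometer axes drives the offset d, and hence this error term, to zero in all three directions.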
Error sources in atomic force microscopy for dimensional measurements: Taxonomy and modeling
DEFF Research Database (Denmark)
Marinello, F.; Voltan, A.; Savio, E.
2010-01-01
This paper aimed at identifying the error sources that occur in dimensional measurements performed using atomic force microscopy. In particular, a set of characterization techniques for errors quantification is presented. The discussion on error sources is organized in four main categories......: scanning system, tip-surface interaction, environment, and data processing. The discussed errors include scaling effects, squareness errors, hysteresis, creep, tip convolution, and thermal drift. A mathematical model of the measurement system is eventually described, as a reference basis for errors...
Aerogel Antennas Communications Study Using Error Vector Magnitude Measurements
Miranda, Felix A.; Mueller, Carl H.; Meador, Mary Ann B.
2014-01-01
This presentation discusses an aerogel antennas communication study using error vector magnitude (EVM) measurements. The study was performed using 2x4 element polyimide (PI) aerogel-based phased arrays designed for operation at 5 GHz as transmit (Tx) and receive (Rx) antennas separated by a line of sight (LOS) distance of 8.5 meters. The results of the EVM measurements demonstrate that polyimide aerogel antennas work appropriately to support digital communication links with typically used modulation schemes such as QPSK and π/4 DQPSK. As such, PI aerogel antennas with higher gain, larger bandwidth and lower mass than typically used microwave laminates could be suitable to enable aerospace-to-ground communication links with enough channel capacity to support voice, data and video links from CubeSats, unmanned air vehicles (UAV), and commercial aircraft.
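EVM compares received symbols against their ideal constellation points; a common definition is the RMS error vector normalized by the RMS ideal-symbol magnitude. A sketch for a QPSK link (synthetic symbols and noise level are assumptions, not the study's measured data):

```python
import numpy as np

def evm_percent(received, ideal):
    """RMS error vector magnitude as a percentage of the RMS
    ideal-symbol magnitude."""
    err = received - ideal
    return 100.0 * np.sqrt(np.mean(np.abs(err) ** 2)
                           / np.mean(np.abs(ideal) ** 2))

rng = np.random.default_rng(3)
bits = rng.integers(0, 4, 5000)
qpsk = np.exp(1j * (np.pi / 4 + np.pi / 2 * bits))   # unit-energy QPSK
noisy = qpsk + (rng.normal(0.0, 0.05, 5000)
                + 1j * rng.normal(0.0, 0.05, 5000))  # additive noise

evm = evm_percent(noisy, qpsk)   # ~ 7% for this noise level
```

A lower EVM means the received constellation is tighter around the ideal points, which is how the study quantifies whether the aerogel arrays can support a given modulation scheme.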
On Characterization of Elasticity Parameters in Context of Measurement Errors
Slawinski, M. A.
2007-12-01
In this presentation, we discuss the one-to-one relation between the elasticity parameters and the traveltime and polarization of a propagating signal in the context of the measurement errors. The one-to-one relationship between seismic measurements and a model postulated in the realm of the constitutive equation of an elastic continuum provides the link between the observational and theoretical aspects of seismic tomography [1]. The existence of this link encourages us to develop methods of inferring the elasticity parameters from measurements. However, a consideration of required accuracy and the analysis of error sensitivity suggest that the pragmatic application of this one-to-one relationship might be a difficult task indeed [4]. There are eight symmetry classes of an elastic continuum whose properties are contained in the density-scaled elasticity tensor [6]. Given this tensor in an arbitrary coordinate system, we can identify to which symmetry class it belongs, as well as obtain the orientation of its symmetry axes and planes, and hence the elasticity parameters in a natural coordinate system [2]. To obtain the tensor to be studied, we consider either ray velocities and polarizations [1] or wavefront slownesses and polarizations [5]. For the former, we assume that the medium is homogeneous in order to invoke the straightness of rays to calculate ray velocity given the source and receiver position; for the latter, we assume that the medium is homogeneous in at least one direction in order to invoke the ray parameter. In spite of the limitations due to homogeneities, both approaches are sensitive to measurement errors, which are not negligible. In view of these observational concerns [4], we consider several weaker objectives based on the theoretical formulation. Rather than distinguishing among eight symmetry classes and obtaining the corresponding elasticity parameters, we might be able to distinguish among a few groups that contain several classes within
Bayesian adjustment for covariate measurement errors: a flexible parametric approach.
Hossain, Shahadut; Gustafson, Paul
2009-05-15
In most epidemiological investigations, the study units are people, the outcome variable (or the response) is a health-related event, and the explanatory variables are usually environmental and/or socio-demographic factors. The fundamental task in such investigations is to quantify the association between the explanatory variables (covariates/exposures) and the outcome variable through a suitable regression model. The accuracy of such quantification depends on how precisely the relevant covariates are measured. In many instances, we cannot measure some of the covariates accurately. Rather, we can measure noisy (mismeasured) versions of them. In statistical terminology, mismeasurement in continuous covariates is known as measurement errors or errors-in-variables. Regression analyses based on mismeasured covariates lead to biased inference about the true underlying response-covariate associations. In this paper, we suggest a flexible parametric approach for avoiding this bias when estimating the response-covariate relationship through a logistic regression model. More specifically, we consider the flexible generalized skew-normal and the flexible generalized skew-t distributions for modeling the unobserved true exposure. For inference and computational purposes, we use Bayesian Markov chain Monte Carlo techniques. We investigate the performance of the proposed flexible parametric approach in comparison with a common flexible parametric approach through extensive simulation studies. We also compare the proposed method with the competing flexible parametric method on a real-life data set. Though emphasis is put on the logistic regression model, the proposed method is unified and is applicable to the other generalized linear models, and to other types of non-linear regression models as well. (c) 2009 John Wiley & Sons, Ltd.
Measurement error as a source of QT dispersion: a computerised analysis
J.A. Kors (Jan); G. van Herpen (Gerard)
1998-01-01
OBJECTIVE: To establish a general method to estimate the measurement error in QT dispersion (QTD) determination, and to assess this error using a computer program for automated measurement of QTD. SUBJECTS: Measurements were done on 1220 standard simultaneous
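QT dispersion itself is simply the range of the QT intervals measured across the ECG leads, so any per-lead measurement error propagates directly into the dispersion value. A minimal sketch of the basic quantity (the interval values below are invented):

```python
def qt_dispersion(qt_ms):
    """QT dispersion: maximum minus minimum QT interval (ms) across the
    measurable leads; None marks a lead where QT could not be measured."""
    valid = [q for q in qt_ms if q is not None]
    return max(valid) - min(valid)

# QT intervals (ms) from a 12-lead ECG; two leads unmeasurable
leads = [402, 398, 410, None, 395, 407, 400, 399, 405, 396, None, 403]
qtd = qt_dispersion(leads)   # -> 15
```

Because the statistic is a max-minus-min, it is especially sensitive to a single mismeasured lead, which is the concern the study's error analysis addresses.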
CORRECTING FOR MEASUREMENT ERROR IN LATENT VARIABLES USED AS PREDICTORS*
Schofield, Lynne Steuerle
2015-01-01
This paper represents a methodological-substantive synergy. A new model, the Mixed Effects Structural Equations (MESE) model which combines structural equations modeling and item response theory is introduced to attend to measurement error bias when using several latent variables as predictors in generalized linear models. The paper investigates racial and gender disparities in STEM retention in higher education. Using the MESE model with 1997 National Longitudinal Survey of Youth data, I find prior mathematics proficiency and personality have been previously underestimated in the STEM retention literature. Pre-college mathematics proficiency and personality explain large portions of the racial and gender gaps. The findings have implications for those who design interventions aimed at increasing the rates of STEM persistence among women and under-represented minorities. PMID:26977218
Francesca Hughes: Architecture of Error: Matter, Measure and the Misadventure of Precision
DEFF Research Database (Denmark)
Foote, Jonathan
2016-01-01
Review of "Architecture of Error: Matter, Measure and the Misadventure of Precision" by Francesca Hughes (MIT Press, 2014).
Assessment of Measurement Error when Using the Laser Spectrum Analyzers
Directory of Open Access Journals (Sweden)
A. A. Titov
2015-01-01
Full Text Available The article assesses the measurement errors that arise when using laser spectrum analyzers. The analysis shows that interferential measurement methods make it possible to carry out a spectral analysis of both the amplitudes and the phases of the frequency components of signals, and to analyze a changing phase of the frequency components of radio signals. Interferometers with the Mach-Zehnder arrangement are found to be the most widely used for measuring signal phase. The combined method can achieve higher resolution than the other considered methods, since it performs spatial integration over one coordinate and time integration over the other, achieved by arranging the modulators orthogonally to each other. The drawback of this method is its complexity and low speed, caused by the integrator, which prevents measurement of the spectral components of a radio pulse whose width is less than the temporal aperture. An improved spectrum analyzer is therefore proposed in which the phase is determined through signal processing, and its resolution is presented. The article also reviews the possible options for building devices that measure the phase components of a spectrum, depending on the method applied to measure phase. The analysis shows that the time-pulse method is the most promising for phase measurement. However, the known digital phase-meter circuits using this method cannot be used directly in spectrum analyzers, as they are designed to measure the phase of only one signal frequency. For this reason, a number of circuits were developed to measure the amplitude and phase of the frequency components of the radio signal. A promising option is thus a spectrum analyzer in which the phase is determined through the signal
Bayesian modeling of measurement error in predictor variables using item response theory
Fox, Gerardus J.A.; Glas, Cornelis A.W.
2000-01-01
This paper focuses on handling measurement error in predictor variables using item response theory (IRT). Measurement error is of great important in assessment of theoretical constructs, such as intelligence or the school climate. Measurement error is modeled by treating the predictors as unobserved
Error Analysis for Interferometric SAR Measurements of Ice Sheet Flow
DEFF Research Database (Denmark)
Mohr, Johan Jacob; Madsen, Søren Nørvang
1999-01-01
and slope errors in conjunction with a surface parallel flow assumption. The most surprising result is that assuming a stationary flow the east component of the three-dimensional flow derived from ascending and descending orbit data is independent of slope errors and of the vertical flow....
Lower extremity angle measurement with accelerometers - error and sensitivity analysis
Willemsen, A.T.M.; Willemsen, Antoon Th.M.; Frigo, Carlo; Boom, H.B.K.
1991-01-01
The use of accelerometers for angle assessment of the lower extremities is investigated. This method is evaluated by an error-and-sensitivity analysis using healthy subject data. Of three potential error sources (the reference system, the accelerometers, and the model assumptions) the last is found
Pivot and cluster strategy: a preventive measure against diagnostic errors.
Shimizu, Taro; Tokuda, Yasuharu
2012-01-01
Diagnostic errors constitute a substantial portion of preventable medical errors. The accumulation of evidence shows that most errors result from one or more cognitive biases and a variety of debiasing strategies have been introduced. In this article, we introduce a new diagnostic strategy, the pivot and cluster strategy (PCS), encompassing both of the two mental processes in making diagnosis referred to as the intuitive process (System 1) and analytical process (System 2) in one strategy. With PCS, physicians can recall a set of most likely differential diagnoses (System 2) of an initial diagnosis made by the physicians' intuitive process (System 1), thereby enabling physicians to double check their diagnosis with two consecutive diagnostic processes. PCS is expected to reduce cognitive errors and enhance their diagnostic accuracy and validity, thereby realizing better patient outcomes and cost- and time-effective health care management.
Adjusting for the Incidence of Measurement Errors in Multilevel ...
African Journals Online (AJOL)
-prone explanatory variables and adjusts for the incidence of these errors giving rise to more adequate multilevel models. 2.0 Methodology. 2.1. Data Structure. The illustrative data employed was drawn from an educational environment. There.
Guo, Cheng; Tan, Jiubin; Liu, Zhengjun
2015-08-01
An iterative structure for amplitude-phase retrieval (APR) has been proved to yield more accurate reconstructed amplitude and phase data. However, the precise influence of position measurement error, and the corresponding error correction, have not been sufficiently analyzed. We apply the APR in fractional Fourier domains to reconstruct a sample image and describe the corresponding optical implementation. An error model is built to discuss the distribution of the position measurement error. A corrective method is applied to amend the error and obtain a better quality of retrieved image. The numerical results demonstrate that our methods are feasible and useful for correcting the error under various circumstances.
Comparing methods to measure error in gynecologic cytology and surgical pathology.
Renshaw, Andrew A
2006-05-01
Both gynecologic cytology and surgical pathology use similar methods to measure diagnostic error, but differences exist between how these methods have been applied in the 2 fields. To compare the application of methods of error detection in gynecologic cytology and surgical pathology. Review of the literature. There are several different approaches to measuring error, all of which have limitations. Measuring error using reproducibility as the gold standard is a common method to determine error. While error rates in gynecologic cytology are well characterized and methods for objectively assessing error in the legal setting have been developed, meaningful methods to measure error rates in clinical practice are not commonly used and little is known about the error rates in this setting. In contrast, in surgical pathology the error rates are not as well characterized and methods for assessing error in the legal setting are not as well defined, but methods to measure error in actual clinical practice have been characterized and preliminary data from these methods are now available concerning the error rates in this setting.
Pivot and cluster strategy: a preventive measure against diagnostic errors
Directory of Open Access Journals (Sweden)
Shimizu T
2012-11-01
Full Text Available Taro Shimizu,1 Yasuharu Tokuda2. 1Rollins School of Public Health, Emory University, Atlanta, GA, USA; 2Institute of Clinical Medicine, Graduate School of Comprehensive Human Sciences, University of Tsukuba, Ibaraki, Japan. Abstract: Diagnostic errors constitute a substantial portion of preventable medical errors. The accumulating evidence shows that most errors result from one or more cognitive biases, and a variety of debiasing strategies have been introduced. In this article, we introduce a new diagnostic strategy, the pivot and cluster strategy (PCS), which combines the two mental processes of making a diagnosis, the intuitive process (System 1) and the analytical process (System 2), in one strategy. With PCS, physicians can recall a set of the most likely differential diagnoses (System 2) of an initial diagnosis made by their intuitive process (System 1), thereby enabling them to double-check their diagnosis with two consecutive diagnostic processes. PCS is expected to reduce cognitive errors and enhance diagnostic accuracy and validity, thereby realizing better patient outcomes and cost- and time-effective health care management. Keywords: diagnosis, diagnostic errors, debiasing
Directory of Open Access Journals (Sweden)
Ariel Linden
2015-01-01
Full Text Available The patient activation measure (PAM) is an increasingly popular instrument used as the basis for interventions to improve patient engagement and as an outcome measure to assess intervention effect. However, a PAM score may be calculated when there are missing responses, which could lead to substantial measurement error. In this paper, measurement error is systematically estimated across the full possible range of missing items (one to twelve), using simulation in which populated items were randomly replaced with missing data for each of 1,138 complete surveys obtained in a randomized controlled trial. The PAM score was then calculated, followed by comparisons of overall simulated average mean, minimum, and maximum PAM scores to the true PAM score in order to assess the absolute percentage error (APE) for each comparison. With only one missing item, the average APE was 2.5% comparing the true PAM score to the simulated minimum score and 4.3% compared to the simulated maximum score. APEs increased with additional missing items, such that surveys with 12 missing items had average APEs of 29.7% (minimum) and 44.4% (maximum). Several suggestions and alternative approaches are offered that could be pursued to improve measurement accuracy when responses are missing.
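The simulation design described above can be sketched generically. Note that the real PAM-13 uses a Rasch-calibrated scoring table; the toy version below simply prorates answered items of a hypothetical 13-item, 1-4 Likert instrument to a 0-100 scale, which is enough to reproduce the qualitative finding that APE grows with the number of missing items:

```python
import random

random.seed(1)
N_ITEMS = 13  # hypothetical 13-item instrument, items scored 1-4


def prorated_score(items):
    """Score a survey by prorating answered items to a 0-100 scale."""
    answered = [v for v in items if v is not None]
    return 100 * (sum(answered) / len(answered) - 1) / 3


def avg_ape(items, n_missing, reps=200):
    """Average absolute percentage error when n_missing items are blanked at random."""
    true = prorated_score(items)
    apes = []
    for _ in range(reps):
        blanked = list(items)
        for i in random.sample(range(N_ITEMS), n_missing):
            blanked[i] = None  # replace a populated item with a missing response
        apes.append(abs(prorated_score(blanked) - true) / true * 100)
    return sum(apes) / reps


survey = [1, 2, 3, 4, 2, 3, 1, 4, 2, 3, 4, 1, 2]  # one complete response
# APE grows as more items are missing, mirroring the study's finding:
print([round(avg_ape(survey, m), 1) for m in (1, 6, 12)])
```

The monotone growth of the error, rather than its exact magnitude, is the point of the sketch; the published APEs come from the actual Rasch-scored instrument.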
Directory of Open Access Journals (Sweden)
POP Septimiu
2012-05-01
Full Text Available This paper is focused on the effect produced by the systematic error of measurement devices in the monitoring of a system, in this case a dam. The effect of systematic error in dam monitoring is a wrong description of the dam's evolution: measurement errors lead to an apparent deflection of the dam from its normal evolution. The physical parameter, inclination, needs to be measured with an accuracy of 0.05%. The sensor used has a fully differential voltage output. In a measurement device, one error source is the imperfection of the electronic components; the performance of measurement instruments depends on resistance tolerance. The error produced by tolerance in a measurement device is a systematic error, which in the monitoring process becomes a random error. Measuring the transducer with a Wheatstone bridge requires high-accuracy resistors of 0.01%, but high-accuracy resistors increase the cost of the instrument. This source of systematic error can be eliminated if the transducer is measured without a resistance divider. To obtain a positive voltage at the sensor output, the sensor is powered relative to the common-mode voltage of the analog converter. In this case the measurement error depends only on the ADC. The acquisition is made with a differential converter; to obtain a measurement accuracy of 0.05%, a 14-bit converter is used. The ADC has an auto-calibration function, so the offset and gain errors are internally compensated.
Study on error analysis and accuracy improvement for aspheric profile measurement
Gao, Huimin; Zhang, Xiaodong; Fang, Fengzhou
2017-06-01
Aspheric surfaces are important in optical systems and require high-precision surface metrology. Stylus profilometry is currently the most common approach to measuring axially symmetric elements. However, if the asphere has rotational alignment errors, the wrong cresting point will be located, yielding significantly incorrect surface errors. This paper studies the simulated results for an asphere with rotational angles around the X-axis and Y-axis, and with stylus tip shifts in the X, Y and Z directions. Experimental results show that rotational errors around the X-axis of the same absolute value cause the same profile errors, while different rotational errors around the Y-axis cause profile errors with different tilt angles. Moreover, the greater the rotational errors, the larger the peak-to-valley value of the profile errors. To identify the rotational angles around the X-axis and Y-axis, algorithms are performed to analyze each rotational angle respectively. The actual profile errors are then calculated from multiple profile measurements around the X-axis according to the proposed analysis flow chart; the aim of the multiple-measurement strategy is to reach the zero position of the X-axis rotational errors. Experimental results prove that the proposed algorithms achieve accurate profile errors for aspheric surfaces while avoiding both X-axis and Y-axis rotational errors. Finally, a measurement strategy for aspheric surfaces is presented systematically.
Shear, Benjamin R.; Zumbo, Bruno D.
2013-01-01
Type I error rates in multiple regression, and hence the chance for false positive research findings, can be drastically inflated when multiple regression models are used to analyze data that contain random measurement error. This article shows the potential for inflated Type I error rates in commonly encountered scenarios and provides new…
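The mechanism behind this inflation is easy to reproduce. In the sketch below (illustrative parameter values, not taken from the article), the outcome depends only on x1; but because x1 is observed with error, part of its effect loads onto a correlated, truly irrelevant covariate x2, biasing that coefficient away from zero and producing spurious "significant" findings:

```python
import numpy as np

rng = np.random.default_rng(0)
n, reps = 200, 500
b2_with_error, b2_no_error = [], []

for _ in range(reps):
    x1 = rng.standard_normal(n)                               # true predictor
    x2 = 0.7 * x1 + np.sqrt(0.51) * rng.standard_normal(n)    # correlated, truly irrelevant
    y = x1 + rng.standard_normal(n)                           # y depends on x1 only
    w1 = x1 + rng.standard_normal(n)                          # x1 observed with error
    for x, store in ((w1, b2_with_error), (x1, b2_no_error)):
        X = np.column_stack([np.ones(n), x, x2])              # intercept, predictor, x2
        beta = np.linalg.lstsq(X, y, rcond=None)[0]
        store.append(beta[2])                                 # coefficient on x2

# With error-free x1 the x2 coefficient averages near 0; with the
# error-prone proxy it is biased away from zero (about 0.46 for these
# parameters), so tests of "beta2 = 0" reject far above the nominal rate.
print(np.mean(b2_no_error), np.mean(b2_with_error))
```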
Measurement accuracy of articulated arm CMMs with circular grating eccentricity errors
Zheng, Dateng; Yin, Sanfeng; Luo, Zhiyang; Zhang, Jing; Zhou, Taiping
2016-11-01
A six-joint circular grating eccentricity error model is proposed to improve the measurement accuracy of an articulated arm coordinate measuring machine (AACMM) without increasing the corresponding hardware cost. We analyzed the AACMM's circular grating eccentricity and obtained the error model parameters for the six joints by conducting circular grating eccentricity experiments. We completed the calibration of the measurement models using home-made standard bar components. Our results show that the measurement errors from the AACMM's measurement model without and with circular grating eccentricity errors are 0.0834 mm and 0.0462 mm, respectively; that is, measurement accuracy increased by about 44.6% when the circular grating eccentricity errors were corrected. This study is significant because it promotes wider application of AACMMs in both theory and practice.
Study of systematic errors in the luminosity measurement
Energy Technology Data Exchange (ETDEWEB)
Arima, Tatsumi [Tsukuba Univ., Ibaraki (Japan). Inst. of Applied Physics
1993-04-01
The experimental systematic error in the barrel region was estimated to be 0.44%. This value is derived considering the systematic uncertainties from the dominant sources, but does not include uncertainties which are still being studied. In the end cap region, a study of shower behavior and the clustering effect is under way in order to determine the angular resolution at the low-angle edge of the Liquid Argon Calorimeter. We also expect that the systematic error in this region will be less than 1%. The technical precision of the theoretical uncertainty is better than 0.1%, from a comparison of the Tobimatsu-Shimizu program and BABAMC as modified by ALEPH. To estimate the physical uncertainty we will use ALIBABA [9], which includes the O({alpha}{sup 2}) QED correction in leading-log approximation. (J.P.N.).
Sensor Interaction as a Source of the Electromagnetic Field Measurement Error
Directory of Open Access Journals (Sweden)
Hartansky R.
2014-12-01
Full Text Available The article deals with the analytical calculation and numerical simulation of the mutual influence of electromagnetic sensors. The sensors are components of a field probe, and their mutual influence causes measurement error. An electromagnetic field probe contains three mutually perpendicular sensors in order to measure the electric field vector. The sensor error is enumerated as a function of the sensors' relative positions. Based on this, recommendations are proposed for electromagnetic field probe construction that minimize sensor interaction and measurement error.
Computational Fluid Dynamics Analysis on Radiation Error of Surface Air Temperature Measurement
Yang, Jie; Liu, Qing-Quan; Ding, Ren-Hui
2017-01-01
Due to solar radiation effect, current air temperature sensors inside a naturally ventilated radiation shield may produce a measurement error that is 0.8 K or higher. To improve air temperature observation accuracy and correct historical temperature of weather stations, a radiation error correction method is proposed. The correction method is based on a computational fluid dynamics (CFD) method and a genetic algorithm (GA) method. The CFD method is implemented to obtain the radiation error of the naturally ventilated radiation shield under various environmental conditions. Then, a radiation error correction equation is obtained by fitting the CFD results using the GA method. To verify the performance of the correction equation, the naturally ventilated radiation shield and an aspirated temperature measurement platform are characterized in the same environment to conduct the intercomparison. The aspirated temperature measurement platform serves as an air temperature reference. The mean radiation error given by the intercomparison experiments is 0.23 K, and the mean radiation error given by the correction equation is 0.2 K. This radiation error correction method allows the radiation error to be reduced by approximately 87 %. The mean absolute error and the root mean square error between the radiation errors given by the correction equation and the radiation errors given by the experiments are 0.036 K and 0.045 K, respectively.
Comparison of Neural Network Error Measures for Simulation of Slender Marine Structures
DEFF Research Database (Denmark)
Christiansen, Niels H.; Voie, Per Erlend Torbergsen; Winther, Ole
2014-01-01
Training of an artificial neural network (ANN) adjusts the internal weights of the network in order to minimize a predefined error measure. This error measure is given by an error function. Several different error functions are suggested in the literature; however, by far the most common measure for regression is the mean square error. This paper looks into the possibility of improving the performance of neural networks by selecting or defining error functions that are tailor-made for a specific objective. A neural network trained to simulate tension forces in an anchor chain on a floating offshore platform is designed and tested. The purpose of setting up the network is to reduce calculation time in a fatigue life analysis. Therefore, the networks trained on different error functions are compared with respect to accuracy of rain-flow counts of stress cycles over a number of time series simulations.
Yang, Shuai; Wu, Wei; Wang, Xingshu; Xu, Zhiguang
2018-01-01
The coupling error in the measurement of ship hull deformation can significantly influence the attitude accuracy of shipborne weapons and equipment. It is therefore important to study the characteristics of this coupling error. In this paper, a comprehensive investigation of the coupling error is reported, which offers the potential of reducing the coupling error in the future. Firstly, the causes and characteristics of the coupling error are analyzed theoretically, based on the basic theory of measuring ship deformation. Then, simulations are conducted to verify the correctness of the theoretical analysis. Simulation results show that the cross-correlation between dynamic flexure and ship angular motion leads to the coupling error in measuring ship deformation, and that the coupling error increases with the correlation between them. All simulation results coincide with the theoretical analysis.
Measurement Error in Income and Schooling and the Bias of Linear Estimators
DEFF Research Database (Denmark)
Bingley, Paul; Martinello, Alessandro
2017-01-01
We propose a general framework for determining the extent of measurement error bias in ordinary least squares and instrumental variable (IV) estimators of linear models, while allowing for measurement error in the validation source. We apply this method by validating Survey of Health, Ageing and Retirement in Europe data with Danish administrative registers. Contrary to most validation studies, we find that measurement error in income is classical once we account for imperfect validation data. We find nonclassical measurement error in schooling, causing a 38% amplification bias in IV estimators of the returns, with important implications for the program evaluation literature.
McManus, I C
2012-01-01
In high-stakes assessments in medical education, such as final undergraduate examinations and postgraduate assessments, an attempt is frequently made to set confidence limits on the probable true score of a candidate. Typically, this is carried out using what is referred to as the standard error of measurement (SEM). However, it is often the case that the wrong formula is applied, there actually being three different formulae for use in different situations. To explain and clarify the calculation of the SEM, and differentiate three separate standard errors, which here are called the standard error of measurement (SEmeas), the standard error of estimation (SEest) and the standard error of prediction (SEpred). Most accounts describe the calculation of SEmeas. For most purposes, though, what is required is the standard error of estimation (SEest), which has to be applied not to a candidate's actual score but to their estimated true score after taking into account the regression to the mean that occurs due to the unreliability of an assessment. A third formula, the standard error of prediction (SEpred) is less commonly used in medical education, but is useful in situations such as counselling, where one needs to predict a future actual score on an examination from a previous actual score on the same examination. The various formulae can produce predictions that differ quite substantially, particularly when reliability is not particularly high, and the mark in question is far removed from the average performance of candidates. That can have important, unintended consequences, particularly in a medico-legal context.
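For reference, the three standard errors distinguished above have simple closed forms in classical test theory, expressed in terms of the score standard deviation and the test reliability (the numeric values below are illustrative, not from the article):

```python
import math


def standard_errors(sd, reliability):
    """Three classical-test-theory standard errors for a test.

    SEmeas: spread of observed scores around a fixed true score.
    SEest:  spread of true scores around the estimated true score
            (after regression to the mean).
    SEpred: spread of a future observed score predicted from a past one.
    """
    se_meas = sd * math.sqrt(1 - reliability)
    se_est = sd * math.sqrt(reliability * (1 - reliability))
    se_pred = sd * math.sqrt(1 - reliability ** 2)
    return se_meas, se_est, se_pred


# With modest reliability the three differ substantially:
se_meas, se_est, se_pred = standard_errors(sd=10, reliability=0.7)
# SEmeas ~ 5.48, SEest ~ 4.58, SEpred ~ 7.14
```

The ordering SEest < SEmeas < SEpred always holds for 0 < reliability < 1, which is why applying the wrong formula can produce confidence limits that are systematically too wide or too narrow.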
Holt, R. M.
2001-12-01
It has long been recognized that the spatial variability of hydraulic properties in heterogeneous geologic materials directly controls the movement of contaminants in the subsurface. Heterogeneity is typically described using spatial statistics (mean, variance, and correlation length) determined from measured properties. These spatial statistics can be used in probabilistic (stochastic) flow and transport models. We ask the question, how do measurement errors affect our ability to accurately estimate spatial statistics and reliably apply stochastic models of flow and transport? Spatial statistics of hydraulic properties can be accurately estimated when measurement errors are unbiased. Unfortunately, measurements become spatially biased (i.e., their spatial pattern is systematically distorted) when random observation errors are propagated through non-linear inversion models or inversion models incorrectly describe experimental physics. This type of bias results in distortion of the distribution and variogram of the hydraulic property and errors in stochastic model predictions. We use a Monte Carlo approach to determine the spatial bias in field- and laboratory-estimated unsaturated hydraulic properties subject to simple measurement errors. For this analysis, we simulate measurements in a series of idealized realities and consider only simple measurement errors that can be easily modeled. We find that hydraulic properties are strongly biased by small observation and inversion-model errors. This bias can lead to order-of-magnitude errors in spatial statistics and artificial cross-correlation between measured properties. We also find that measurement errors amplify uncertainty in experimental variograms and can preclude identification of variogram-model parameters. The use of biased spatial statistics in stochastic flow and transport models can yield order-of-magnitude errors in critical transport results. The effects of observation and inversion-model errors are
Veronesi, Giovanni; Ferrario, Marco M; Chambless, Lloyd E
2013-12-01
In this article we focus on comparing measurement error correction methods for rate-of-change exposure variables in survival analysis, when longitudinal data are observed prior to the follow-up time. Motivational examples include the analysis of the association between changes in cardiovascular risk factors and subsequent onset of coronary events. We derive a measurement error model for the rate of change, estimated through subject-specific linear regression, assuming an additive measurement error model for the time-specific measurements. The rate of change is then included as a time-invariant variable in a Cox proportional hazards model, adjusting for the first time-specific measurement (baseline) and an error-free covariate. In a simulation study, we compared bias, standard deviation and mean squared error (MSE) for the regression calibration (RC) and the simulation-extrapolation (SIMEX) estimators. Our findings indicate that when the amount of measurement error is substantial, RC should be the preferred method, since it has smaller MSE for estimating the coefficients of the rate of change and of the variable measured without error. However, when the amount of measurement error is small, the choice of the method should take into account the event rate in the population and the effect size to be estimated. An application to an observational study, as well as examples of published studies where our model could have been applied, are also provided.
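Regression calibration, the method preferred here when measurement error is substantial, replaces the error-prone measurement W by the best linear predictor of the true covariate X. A minimal sketch under classical additive error, with the reliability ratio assumed known (illustrative values, not the authors' simulation design):

```python
import numpy as np

rng = np.random.default_rng(42)
n = 5000
x = rng.normal(0, 1, n)              # true covariate (unobserved)
w = x + rng.normal(0, 1, n)          # observed with classical error; reliability = 0.5
y = 2.0 * x + rng.normal(0, 0.5, n)  # outcome depends on the true covariate

lam = 0.5                            # reliability ratio Var(X)/Var(W), assumed known
x_hat = w.mean() + lam * (w - w.mean())   # E[X | W] under joint normality

naive = np.polyfit(w, y, 1)[0]       # attenuated slope: about 2 * 0.5 = 1.0
rc = np.polyfit(x_hat, y, 1)[0]      # calibrated slope: about 2.0
```

In practice the reliability ratio is itself estimated from replicate or validation measurements, which is where the variance of the RC estimator (compared in the abstract with SIMEX) comes from.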
Analysis on the dynamic error for optoelectronic scanning coordinate measurement network
Shi, Shendong; Yang, Linghui; Lin, Jiarui; Guo, Siyang; Ren, Yongjie
2018-01-01
Large-scale dynamic three-dimensional coordinate measurement techniques are in strong demand in equipment manufacturing. Noted for its advantages of high accuracy, scale expandability and multitask parallel measurement, the optoelectronic scanning measurement network has received close attention. It is widely used in large-component joining, spacecraft rendezvous and docking simulation, digital shipbuilding, and automated guided vehicle navigation. At present, most research on optoelectronic scanning measurement networks is focused on static measurement capacity, and research on dynamic accuracy is insufficient. Limited by the measurement principle, the dynamic error is non-negligible and restricts applications. The workshop measurement and positioning system is a representative system that can realize the dynamic measurement function in theory. In this paper we conduct deep research on the dynamic error sources and divide them into two parts: phase error and synchronization error. A dynamic error model is constructed. Based on this theory, simulations of the dynamic error are carried out; the dynamic error is quantified, and its volatility and periodicity are characterized. The dynamic error characteristics are shown in detail. The research results lay a foundation for further accuracy improvement.
A Unified Approach to Measurement Error and Missing Data: Overview and Applications
Blackwell, Matthew; Honaker, James; King, Gary
2017-01-01
Although social scientists devote considerable effort to mitigating measurement error during data collection, they often ignore the issue during data analysis. And although many statistical methods have been proposed for reducing measurement error-induced biases, few have been widely used because of implausible assumptions, high levels of model…
A Unified Approach to Measurement Error and Missing Data: Details and Extensions
Blackwell, Matthew; Honaker, James; King, Gary
2017-01-01
We extend a unified and easy-to-use approach to measurement error and missing data. In our companion article, Blackwell, Honaker, and King give an intuitive overview of the new technique, along with practical suggestions and empirical applications. Here, we offer more precise technical details, more sophisticated measurement error model…
Kim, ChangHwan; Tamborini, Christopher R.
2012-01-01
Few studies have considered how earnings inequality estimates may be affected by measurement error in self-reported earnings in surveys. Utilizing restricted-use data that links workers in the Survey of Income and Program Participation with their W-2 earnings records, we examine the effect of measurement error on estimates of racial earnings…
Comparing Graphical and Verbal Representations of Measurement Error in Test Score Reports
Zwick, Rebecca; Zapata-Rivera, Diego; Hegarty, Mary
2014-01-01
Research has shown that many educators do not understand the terminology or displays used in test score reports and that measurement error is a particularly challenging concept. We investigated graphical and verbal methods of representing measurement error associated with individual student scores. We created four alternative score reports, each…
Zapata-Rivera, Diego; Zwick, Rebecca; Vezzu, Margaret
2016-01-01
The goal of this study was to explore the effectiveness of a short web-based tutorial in helping teachers to better understand the portrayal of measurement error in test score reports. The short video tutorial included both verbal and graphical representations of measurement error. Results showed a significant difference in comprehension scores…
Working with Error and Uncertainty to Increase Measurement Validity
Amrein-Beardsley, Audrey; Barnett, Joshua H.
2012-01-01
Over the previous two decades, the era of accountability has amplified efforts to measure educational effectiveness more than Edward Thorndike, the father of educational measurement, likely would have imagined. Expressly, the measurement structure for evaluating educational effectiveness continues to rely increasingly on one sole…
Sources of measurement error in laser Doppler vibrometers and proposal for unified specifications
Siegmund, Georg
2008-06-01
The focus of this paper is to disclose sources of measurement error in laser Doppler vibrometers (LDV) and to suggest specifications suitable to describe their impact on measurement uncertainty. Measurement errors may be caused by both the optics and the electronics sections of an LDV, due to non-ideal measurement conditions or imperfect technical realisation. While the contribution of the optics part can be neglected in most cases, the subsequent signal-processing chain may cause significant errors. Measurement error due to non-ideal behaviour of the interferometer has been observed mainly at very low vibration amplitudes, depending on the optical arrangement. The paper is organized as follows: electronic signal-processing blocks, beginning with the photo detector, are analyzed with respect to their contribution to measurement uncertainty. A set of specifications is suggested, adopting vocabulary and definitions known from traditional vibration measurement equipment. Finally, a measurement setup is introduced, suitable for determining most specifications using standard electronic measurement equipment.
The effect of systematic measurement errors on atmospheric CO2 inversions: a quantitative assessment
Directory of Open Access Journals (Sweden)
C. Rödenbeck
2006-01-01
Full Text Available Surface-atmosphere exchange fluxes of CO2, estimated by an interannual atmospheric transport inversion from atmospheric mixing ratio measurements, are affected by several sources of errors, one of which is experimental errors. Quantitative information about such measurement errors can be obtained from regular co-located measurements done by different laboratories or using different experimental techniques. The present quantitative assessment is based on intercomparison information from the CMDL and CSIRO atmospheric measurement programs. We show that the effects of systematic measurement errors on inversion results are very small compared to other errors in the flux estimation (as well as compared to signal variability). As a practical consequence, this assessment justifies the merging of data sets from different laboratories or different experimental techniques (flask and in-situ), if systematic differences (and their changes) are comparable to those considered here. This work also highlights the importance of regular intercomparison programs.
The effect of systematic measurement errors on atmospheric CO2 inversions: a quantitative assessment
Rödenbeck, C.; Conway, T. J.; Langenfelds, R. L.
2006-01-01
Surface-atmosphere exchange fluxes of CO2, estimated by an interannual atmospheric transport inversion from atmospheric mixing ratio measurements, are affected by several sources of errors, one of which is experimental errors. Quantitative information about such measurement errors can be obtained from regular co-located measurements done by different laboratories or using different experimental techniques. The present quantitative assessment is based on intercomparison information from the CMDL and CSIRO atmospheric measurement programs. We show that the effects of systematic measurement errors on inversion results are very small compared to other errors in the flux estimation (as well as compared to signal variability). As a practical consequence, this assessment justifies the merging of data sets from different laboratories or different experimental techniques (flask and in-situ), if systematic differences (and their changes) are comparable to those considered here. This work also highlights the importance of regular intercomparison programs.
Metrological Array of Cyber-Physical Systems. Part 11. Remote Error Correction of Measuring Channel
Directory of Open Access Journals (Sweden)
Yuriy YATSUK
2015-09-01
Full Text Available For multi-channel measuring instruments with both the classical structure and the isolated one, the major factors behind their errors are identified based on an analysis of their general metrological properties. The limiting possibilities of a remote automatic method for correcting the additive and multiplicative errors of measuring instruments with the help of code-controlled measures are studied. For on-site calibration of multi-channel measuring instruments, portable voltage calibrator structures are suggested, and their metrological properties under automatic error adjustment are analysed. It was experimentally established that the unadjusted error value does not exceed ±1 mV, which satisfies most industrial applications. This confirms the main claim concerning the possibility of remote error self-adjustment of multi-channel measuring instruments, as well as their use as calibration tools for verification.
Detecting genotyping error using measures of degree of Hardy-Weinberg disequilibrium.
Attia, John; Thakkinstian, Ammarin; McElduff, Patrick; Milne, Elizabeth; Dawson, Somer; Scott, Rodney J; Klerk, Nicholas de; Armstrong, Bruce; Thompson, John
2010-01-01
Tests for Hardy-Weinberg equilibrium (HWE) have been used to detect genotyping error, but those tests have low power unless the sample size is very large. We assessed the performance of measures of departure from HWE as an alternative way of screening for genotyping error. Three measures of the degree of disequilibrium (alpha, D, and F) were tested for their ability to detect genotyping error of 5% or more using simulations and a real dataset of 184 children with leukemia genotyped at 28 single nucleotide polymorphisms. The simulations indicate that all three disequilibrium coefficients can usefully detect genotyping error as judged by the area under the Receiver Operator Characteristic (ROC) curve. Their discriminative ability increases as the error rate increases, and is greater if the genotyping error is in the direction of the minor allele. Optimal thresholds for detecting genotyping error vary for different allele frequencies and patterns of genotyping error but allele frequency-specific thresholds can be nominated. Applying these thresholds would have picked up about 90% of genotyping errors in our actual dataset. Measures of departure from HWE may be useful for detecting genotyping error, but this needs to be confirmed in other real datasets.
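Of the three coefficients, the fixation index F has the simplest closed form. The sketch below uses the standard population-genetics definition (not the authors' implementation) to show how a heterozygote deficit, a typical signature of genotyping errors such as allele dropout, yields F > 0:

```python
def fixation_index(n_aa, n_ab, n_bb):
    """F = 1 - Hobs/Hexp for a biallelic SNP from genotype counts.

    F is near 0 under Hardy-Weinberg equilibrium; systematic genotyping
    error that miscalls heterozygotes as homozygotes pushes F positive.
    """
    n = n_aa + n_ab + n_bb
    p = (2 * n_aa + n_ab) / (2 * n)   # frequency of allele A
    h_obs = n_ab / n                  # observed heterozygosity
    h_exp = 2 * p * (1 - p)           # expected heterozygosity under HWE
    return 1 - h_obs / h_exp


# Counts close to HWE (p = 0.5): F = 0
print(fixation_index(25, 50, 25))     # 0.0
# Heterozygote deficit, as produced by dropout-type errors: F > 0
print(fixation_index(35, 30, 35))
```

Screening then amounts to flagging SNPs whose F exceeds an allele frequency-specific threshold, in the spirit of the thresholds nominated in the abstract.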
Sharing is caring? Measurement error and the issues arising from combining 3D morphometric datasets.
Fruciano, Carmelo; Celik, Mélina A; Butler, Kaylene; Dooley, Tom; Weisbecker, Vera; Phillips, Matthew J
2017-09-01
Geometric morphometrics is routinely used in ecology and evolution and morphometric datasets are increasingly shared among researchers, allowing for more comprehensive studies and higher statistical power (as a consequence of increased sample size). However, sharing of morphometric data opens up the question of how much nonbiologically relevant variation (i.e., measurement error) is introduced in the resulting datasets and how this variation affects analyses. We perform a set of analyses based on an empirical 3D geometric morphometric dataset. In particular, we quantify the amount of error associated with combining data from multiple devices and digitized by multiple operators and test for the presence of bias. We also extend these analyses to a dataset obtained with a recently developed automated method, which does not require human-digitized landmarks. Further, we analyze how measurement error affects estimates of phylogenetic signal and how its effect compares with the effect of phylogenetic uncertainty. We show that measurement error can be substantial when combining surface models produced by different devices and even more among landmarks digitized by different operators. We also document the presence of small, but significant, amounts of nonrandom error (i.e., bias). Measurement error is heavily reduced by excluding landmarks that are difficult to digitize. The automated method we tested had low levels of error, if used in combination with a procedure for dimensionality reduction. Estimates of phylogenetic signal can be more affected by measurement error than by phylogenetic uncertainty. Our results generally highlight the importance of landmark choice and the usefulness of estimating measurement error. Further, measurement error may limit comparisons of estimates of phylogenetic signal across studies if these have been performed using different devices or by different operators. Finally, we also show how widely held assumptions do not always hold true
Tso, Chak-Hau Michael; Kuras, Oliver; Wilkinson, Paul B.; Uhlemann, Sebastian; Chambers, Jonathan E.; Meldrum, Philip I.; Graham, James; Sherlock, Emma F.; Binley, Andrew
2017-11-01
Measurement errors can play a pivotal role in geophysical inversion. Most inverse models require users to prescribe or assume a statistical model of data errors before inversion. Wrongly prescribed errors can lead to over- or under-fitting of data; however, the derivation of models of data errors is often neglected. With the heightening interest in uncertainty estimation within hydrogeophysics, better characterisation and treatment of measurement errors is needed to provide improved image appraisal. Here we focus on the role of measurement errors in electrical resistivity tomography (ERT). We have analysed two time-lapse ERT datasets: one contains 96 sets of direct and reciprocal data collected from a surface ERT line within a 24 h timeframe; the other is a two-year-long cross-borehole survey at a UK nuclear site with 246 sets of over 50,000 measurements. Our study includes the characterisation of the spatial and temporal behaviour of measurement errors using autocorrelation and correlation coefficient analysis. We find that, in addition to well-known proportionality effects, ERT measurements can also be sensitive to the combination of electrodes used, i.e. errors may not be uncorrelated as often assumed. Based on these findings, we develop a new error model that allows grouping based on electrode number in addition to fitting a linear model to transfer resistance. The new model explains the observed measurement errors better and shows superior inversion results and uncertainty estimates in synthetic examples. It is robust, because it groups errors together based on the electrodes used to make the measurements. The new model can be readily applied to the diagonal data weighting matrix widely used in common inversion methods, as well as to the data covariance matrix in a Bayesian inversion framework. We demonstrate its application using extensive ERT monitoring datasets from the two aforementioned sites.
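The baseline that the new error model extends, fitting a linear model of reciprocal error against transfer resistance, can be sketched from normal/reciprocal measurement pairs. A sketch only (function and variable names are illustrative, not from the paper, and the per-electrode grouping step is omitted):

```python
import numpy as np

def fit_linear_error_model(r_normal, r_reciprocal):
    """Fit the common linear ERT error model |e| = a + b*|R| from
    normal/reciprocal measurement pairs (illustrative sketch)."""
    r = 0.5 * (np.abs(r_normal) + np.abs(r_reciprocal))  # transfer resistance
    e = np.abs(r_normal - r_reciprocal)                  # reciprocal error
    b, a = np.polyfit(r, e, 1)                           # slope, intercept
    return a, b
```

The fitted `a` and `b` then parameterise the diagonal data weighting matrix used in the inversion.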
Correction of motion measurement errors beyond the range resolution of a synthetic aperture radar
Doerry, Armin W [Albuquerque, NM; Heard, Freddie E [Albuquerque, NM; Cordaro, J Thomas [Albuquerque, NM
2008-06-24
Motion measurement errors that extend beyond the range resolution of a synthetic aperture radar (SAR) can be corrected by effectively decreasing the range resolution of the SAR in order to permit measurement of the error. Range profiles can be compared across the slow-time dimension of the input data in order to estimate the error. Once the error has been determined, appropriate frequency and phase correction can be applied to the uncompressed input data, after which range and azimuth compression can be performed to produce a desired SAR image.
Errors due to random noise in velocity measurement using incoherent-scatter radar
Directory of Open Access Journals (Sweden)
P. J. S. Williams
1996-12-01
Full Text Available The random-noise errors involved in measuring the Doppler shift of an 'incoherent-scatter' spectrum are predicted theoretically for all values of Te/Ti from 1.0 to 3.0. After correction has been made for the effects of convolution during transmission and reception and the additional errors introduced by subtracting the average of the background gates, the rms errors can be expressed by a simple semi-empirical formula. The observed errors are determined from a comparison of simultaneous EISCAT measurements using an identical pulse code on several adjacent frequencies. The plot of observed versus predicted error has a slope of 0.991 and a correlation coefficient of 99.3%. The prediction also agrees well with the mean of the error distribution reported by the standard EISCAT analysis programme.
Error Sources in the ETA Energy Analyzer Measurement
Energy Technology Data Exchange (ETDEWEB)
Nexsen, W E
2004-12-13
At present the ETA beam energy as measured by the ETA energy analyzer and the DARHT spectrometer differ by approximately 12%. This discrepancy is due to two sources: an overestimate of the effective length of the ETA energy analyzer bending field, and data reduction methods that are not valid. The discrepancy can be eliminated if we return to the original process of measuring the angular deflection of the beam and use a value of 43.2 cm for the effective length of the axial field profile.
Random measurement error: Why worry? An example of cardiovascular risk factors.
Brakenhoff, Timo B; van Smeden, Maarten; Visseren, Frank L J; Groenwold, Rolf H H
2018-01-01
With the increased use of data not originally recorded for research, such as routine care data (or 'big data'), measurement error is bound to become an increasingly relevant problem in medical research. A common view among medical researchers on the influence of random measurement error (i.e. classical measurement error) is that its presence leads to some degree of systematic underestimation of studied exposure-outcome relations (i.e. attenuation of the effect estimate). For the common situation where the analysis involves at least one exposure and one confounder, we demonstrate that the direction of effect of random measurement error on the estimated exposure-outcome relations can be difficult to anticipate. Using three example studies on cardiovascular risk factors, we illustrate that random measurement error in the exposure and/or confounder can lead to underestimation as well as overestimation of exposure-outcome relations. We therefore advise medical researchers to refrain from making claims about the direction of effect of measurement error in their manuscripts, unless the appropriate inferential tools are used to study or alleviate the impact of measurement error from the analysis.
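The point that error in a confounder can bias an exposure effect upward, not merely attenuate it, is easy to demonstrate by simulation. A sketch under invented effect sizes (all coefficients and variances below are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
z = rng.normal(size=n)                      # true confounder
x = 0.8 * z + rng.normal(size=n)            # exposure, confounded by z
y = 1.0 * x + 1.0 * z + rng.normal(size=n)  # outcome; true exposure effect = 1.0

def adjusted_slope(x, z, y):
    """OLS coefficient of x from regressing y on [1, x, z]."""
    design = np.column_stack([np.ones_like(x), x, z])
    beta, *_ = np.linalg.lstsq(design, y, rcond=None)
    return beta[1]

b_true = adjusted_slope(x, z, y)             # ~1.0 with an error-free confounder
z_noisy = z + rng.normal(size=n)             # classical error on the confounder
b_noisy = adjusted_slope(x, z_noisy, y)      # residual confounding inflates the slope
```

Here the noisy confounder is only partially adjusted for, so the exposure coefficient is overestimated (about 1.3 in this setup), illustrating why the direction of bias is hard to anticipate in general.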
Quantitative evaluation of statistical errors in small-angle X-ray scattering measurements.
Sedlak, Steffen M; Bruetzel, Linda K; Lipfert, Jan
2017-04-01
A new model is proposed for the measurement errors incurred in typical small-angle X-ray scattering (SAXS) experiments, which takes into account the setup geometry and physics of the measurement process. The model accurately captures the experimentally determined errors from a large range of synchrotron and in-house anode-based measurements. Its most general formulation gives for the variance of the buffer-subtracted SAXS intensity σ²(q) = [I(q) + const.]/(kq), where I(q) is the scattering intensity as a function of the momentum transfer q; k and const. are fitting parameters that are characteristic of the experimental setup. The model gives a concrete procedure for calculating realistic measurement errors for simulated SAXS profiles. In addition, the results provide guidelines for optimizing SAXS measurements, which are in line with established procedures for SAXS experiments, and enable a quantitative evaluation of measurement errors.
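The variance formula above can be applied directly to attach realistic noise to a simulated SAXS profile. A sketch, where the intensity profile and the values of k and const. are invented for illustration (real values are setup-specific fitting parameters):

```python
import numpy as np

def saxs_variance(q, intensity, k, const):
    """Variance model for buffer-subtracted SAXS intensity:
    sigma^2(q) = (I(q) + const) / (k * q)."""
    return (intensity + const) / (k * q)

q = np.linspace(0.01, 0.5, 100)            # momentum transfer grid (toy units)
i_q = 100.0 * np.exp(-(q * 20) ** 2 / 3)   # toy Guinier-like intensity profile
sigma = np.sqrt(saxs_variance(q, i_q, k=5e4, const=10.0))
noisy = i_q + np.random.default_rng(1).normal(scale=sigma)  # simulated noisy profile
```

As expected from the formula, the error is largest at low q where the intensity is high, and decays toward high q.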
From Measurements Errors to a New Strain Gauge Design
DEFF Research Database (Denmark)
Mikkelsen, Lars Pilgaard; Zike, Sanita; Salviato, Marco
2015-01-01
Significant over-prediction of the material stiffness, on the order of 1-10% for polymer-based composites, has been experimentally observed and numerically determined when using strain gauges for strain measurements instead of non-contact methods such as digital image correlation or less stiff methods...
Comparing objective and subjective error measures for color constancy
Lucassen, M.P.; Gijsenij, A.; Gevers, T.
2008-01-01
We compare an objective and a subjective performance measure for color constancy algorithms. Eight hyper-spectral images were rendered under a neutral reference illuminant and four chromatic illuminants (Red, Green, Yellow, Blue). The scenes rendered under the chromatic illuminants were color
Fratini, G.; McDermitt, D. K.; Papale, D.
2013-08-01
Errors in gas concentration measurements by infrared gas analysers can occur during eddy-covariance campaigns, associated with actual or apparent instrumental drifts or with biases due to thermal expansion, dirt contamination, aging of components or errors in field operations. If occurring on long time scales (hours to days), these errors are normally ignored during flux computation, under the assumption that errors in mean gas concentrations do not affect the estimation of turbulent fluctuations and, hence, of covariances. By analysing instrument theory of operation, and using numerical simulations and field data, we show that this is not the case for instruments with curvilinear calibrations; we further show that if not appropriately accounted for, concentration biases can lead to roughly proportional systematic flux errors, where the fractional errors in fluxes are about 30-40% of the fractional errors in concentrations. We quantify these errors and characterize their dependency on main determinants. We then propose a correction procedure that largely, potentially completely, eliminates these errors. The correction, to be applied during flux computation, is based on knowledge of instrument calibration curves and on field or laboratory calibration data. Finally, we demonstrate the occurrence of such errors and validate the correction procedure by means of a field experiment, and accordingly provide recommendations for in situ operations. The correction described in this paper will soon be available in the EddyPro software (www.licor.com/eddypro).
The Effect of Maternal Drug Use on Birth Weight: Measurement Error in Binary Variables
Robert Kaestner; Theodore Joyce; Hassan Wehbeh
1996-01-01
This paper develops a method to correct for non-random measurement error in a binary indicator of illicit drugs. Our results suggest that estimates of the effect of self reported prenatal drug use on birth weight are biased upwards by measurement error -- a finding contrary to predictions of a model of random measurement error. We show that more accurate estimates of the true effect of drug use on birth weight can be obtained by using the predicted probability of falsely reporting drug use. T...
Measurement error in income and schooling, and the bias of linear estimators
DEFF Research Database (Denmark)
Bingley, Paul; Martinello, Alessandro
The characteristics of measurement error determine the bias of linear estimators. We propose a method for validating economic survey data allowing for measurement error in the validation source, and we apply this method by validating Survey of Health, Ageing and Retirement in Europe (SHARE) data with Danish administrative registers. We find that measurement error in surveys is classical for annual gross income but non-classical for years of schooling, causing a 21% amplification bias in IV estimators of returns to schooling. Using a 1958 Danish schooling reform, we contextualize our result...
Quantification and handling of sampling errors in instrumental measurements: a case study
DEFF Research Database (Denmark)
Andersen, Charlotte Møller; Bro, R.
2004-01-01
Instrumental measurements are often used to represent a whole object even though only a small part of the object is actually measured. This can introduce an error due to the inhomogeneity of the product. Together with other errors resulting from the measuring process, such errors may have a serious...... on the predictions, the approach seems to provide more accurate predictions than the naive approach. Predictions of water content of fish fillets from low-field NMR relaxations are used as examples to show the applicability of the methods. (C) 2004 Elsevier B.V. All rights reserved....
Directory of Open Access Journals (Sweden)
P. Zimourtopoulos
2007-06-01
Full Text Available The objective was to study uncertainty in antenna input impedance resulting from full one-port Vector Network Analyzer (VNA) measurements. The VNA process equation in the reflection coefficient Γ of a load, its measurement m and three errors Es, determinable from three standard loads and their measurements, was considered. Differentials were selected to represent measurement inaccuracies and load uncertainties (Differential Errors). The differential operator was applied on the process equation and the total differential error dΓ for any unknown load (Device Under Test, DUT) was expressed in terms of dEs and dm, without any simplification. Consequently, the differential error of input impedance Z (or any other physical quantity differentiably dependent on Γ) is expressible. Furthermore, to express precisely a comparison relation between complex differential errors, the geometric Differential Error Region and its Differential Error Intervals were defined. Practical results are presented for an indoor UHF ground-plane antenna in contrast with a common 50 Ω DC resistor inside an aluminum box. These two built, unshielded and shielded, DUTs were tested against frequency under different system configurations and measurement considerations. Intermediate results for Es and dEs characterize the measurement system itself. A number of calculations and illustrations demonstrate the application of the method.
Helle, Samuli
2017-11-11
Revealing causal effects from correlative data is very challenging and a contemporary problem in human life history research owing to the lack of experimental approach. Problems with causal inference arising from measurement error in independent variables, whether related either to inaccurate measurement technique or validity of measurements, seem not well-known in this field. The aim of this study is to show how structural equation modeling (SEM) with latent variables can be applied to account for measurement error in independent variables when the researcher has recorded several indicators of a hypothesized latent construct. As a simple example of this approach, measurement error in lifetime allocation of resources to reproduction in Finnish preindustrial women is modelled in the context of the survival cost of reproduction. In humans, lifetime energetic resources allocated in reproduction are almost impossible to quantify with precision and, thus, typically used measures of lifetime reproductive effort (e.g., lifetime reproductive success and parity) are likely to be plagued by measurement error. These results are contrasted with those obtained from a traditional regression approach where the single best proxy of lifetime reproductive effort available in the data is used for inference. As expected, the inability to account for measurement error in women's lifetime reproductive effort resulted in the underestimation of its underlying effect size on post-reproductive survival. This article emphasizes the advantages that the SEM framework can provide in handling measurement error via multiple-indicator latent variables in human life history studies. © 2017 Wiley Periodicals, Inc.
Energy Technology Data Exchange (ETDEWEB)
Patello, G.K.; Wiemers, K.D.; Bell, R.D.; Smith, H.D.; Williford, R.E.; Clemmer, R.G.
1995-03-01
The High-Level Waste Vitrification Program (HWVP) is developing technology for the Department of Energy to immobilize high-level and transuranic wastes as glass for permanent disposal. Pacific Northwest Laboratory (PNL) is conducting laboratory-scale melter feed preparation studies using a HWVP simulated waste slurry, Neutralized Current Acid Waste (NCAW). A FY 1993 laboratory-scale study focused on the effects of noble metals (Pd, Rh, and Ru) on feed preparation offgas generation and NH₃ production. The noble metals catalyze H₂ and NH₃ production, which leads to safety concerns. The information gained from this study is intended to be used for technology development in pilot-scale testing and design of the Hanford High-Level Waste Vitrification Facility. Six laboratory-scale feed preparation tests were performed as part of the FY 1993 testing activities using nonradioactive NCAW simulant. Tests were performed with 10%, 25%, and 50% of the nominal noble metals content. Also tested were 25% of the nominal Rh and a repeat of 25% of the nominal noble metals. The results of the test activities are described. 6 refs., 28 figs., 12 tabs.
Measurement error of surface-mounted fiber Bragg grating temperature sensor.
Yi, Liu; Zude, Zhou; Erlong, Zhang; Jun, Zhang; Yuegang, Tan; Mingyao, Liu
2014-06-01
Fiber Bragg grating (FBG) sensors are extensively used to measure surface temperatures. However, the temperature gradient effect of a surface-mounted FBG sensor is often overlooked. A surface-type temperature standard setup was prepared in this study to investigate the measurement errors of FBG temperature sensors. Experimental results show that the measurement error of a bare fiber sensor has an obvious linear relationship with surface temperature, with the largest error reaching 8.1 °C. Sensors packaged with heat conduction grease generate smaller measurement errors than do bare FBG sensors and commercial thermal resistors. Thus, high-quality packaging methods and proper modes of fixation can effectively improve the accuracy of FBG sensors in measuring surface temperatures.
Intrinsic measurement errors for the speed of light in vacuum
Braun, Daniel; Schneiter, Fabienne; Fischer, Uwe R.
2017-09-01
The speed of light in vacuum, one of the most important and most precisely measured natural constants, is fixed by convention to c = 299 792 458 m s⁻¹. Advanced theories predict possible deviations from this universal value, or even quantum fluctuations of c. Combining arguments from quantum parameter estimation theory and classical general relativity, we here establish rigorously the existence of lower bounds on the uncertainty to which the speed of light in vacuum can be determined in a given region of space-time, subject to several reasonable restrictions. They provide a novel perspective on the experimental falsifiability of predictions for the quantum fluctuations of space-time.
Estimation and Propagation of Errors in Ice Sheet Bed Elevation Measurements
Johnson, J. V.; Brinkerhoff, D.; Nowicki, S.; Plummer, J.; Sack, K.
2012-12-01
This work is presented in two parts. In the first, we use a numerical inversion technique to determine a "mass conserving bed" (MCB) and estimate errors in interpolation of the bed elevation. The MCB inversion technique adjusts the bed elevation to assure that the mass flux determined from surface velocity measurements does not violate conservation. Cross validation of the MCB technique is done using a subset of available flight lines. The unused flight lines provide data to compare to, quantifying the errors produced by MCB and other interpolation methods. MCB errors are found to be similar to those produced with more conventional interpolation schemes, such as kriging. However, MCB interpolation is consistent with the physics that govern ice sheet models. In the second part, a numerical model of glacial ice is used to propagate errors in bed elevation to the kinematic surface boundary condition. Initially, a control run is completed to establish the surface velocity produced by the model. The control surface velocity is subsequently used as a target for data inversions performed on perturbed versions of the control bed. The perturbation of the bed represents the magnitude of error in bed measurement. Through the inversion for traction, errors in bed measurement are propagated forward to investigate errors in the evolution of the free surface. Our primary conclusion relates the magnitude of errors in the surface evolution to errors in the bed. By linking free surface errors back to the errors in bed interpolation found in the first part, we can suggest an optimal spacing of the radar flight lines used in bed acquisition.
Bateson, Thomas F; Wright, J Michael
2010-08-01
Environmental epidemiologic studies are often hierarchical in nature if they estimate individuals' personal exposures using ambient metrics. Local samples are indirect surrogate measures of true local pollutant concentrations which estimate true personal exposures. These ambient metrics include classical-type nondifferential measurement error. The authors simulated subjects' true exposures and their corresponding surrogate exposures as the mean of local samples and assessed the amount of bias attributable to classical and Berkson measurement error on odds ratios, assuming that the logit of risk depends on true individual-level exposure. The authors calibrated surrogate exposures using scalar transformation functions based on observed within- and between-locality variances and compared regression-calibrated results with naive results using surrogate exposures. The authors further assessed the performance of regression calibration in the presence of Berkson-type error. Following calibration, bias due to classical-type measurement error, resulting in as much as 50% attenuation in naive regression estimates, was eliminated. Berkson-type error appeared to attenuate logistic regression results less than 1%. This regression calibration method reduces effects of classical measurement error that are typical of epidemiologic studies using multiple local surrogate exposures as indirect surrogate exposures for unobserved individual exposures. Berkson-type error did not alter the performance of regression calibration. This regression calibration method does not require a supplemental validation study to compute an attenuation factor.
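The regression calibration described here, rescaling the surrogate exposure toward its mean by an estimated reliability ratio, can be sketched for the simple case of classical error on a single exposure measured by the mean of k local samples. All variances and effect sizes below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(42)
n, k = 100_000, 3
x = rng.normal(loc=10.0, scale=2.0, size=n)          # true exposure
w = x[:, None] + rng.normal(scale=3.0, size=(n, k))  # k local surrogate samples
w_bar = w.mean(axis=1)                               # surrogate = mean of samples
y = 0.5 * x + rng.normal(size=n)                     # outcome with true slope 0.5

# Naive slope is attenuated by lambda = var(x) / (var(x) + var(u)/k)
naive = np.polyfit(w_bar, y, 1)[0]

# Regression calibration: shrink the surrogate toward its mean by the
# estimated reliability ratio, then regress on the calibrated exposure
var_u_k = 3.0 ** 2 / k                   # error variance of the k-sample mean
lam = (w_bar.var() - var_u_k) / w_bar.var()
x_hat = w_bar.mean() + lam * (w_bar - w_bar.mean())
calibrated = np.polyfit(x_hat, y, 1)[0]  # approximately recovers the true slope
```

With these numbers the naive slope is attenuated to roughly 0.29, while the calibrated slope returns close to the true 0.5, mirroring the up-to-50% attenuation the authors report eliminating.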
Measurement error of a simplified protocol for quantitative sensory tests in chronic pain patients
DEFF Research Database (Denmark)
Müller, Monika; Biurrun Manresa, José; Limacher, Andreas
2017-01-01
BACKGROUND AND OBJECTIVES: Large-scale application of Quantitative Sensory Tests (QST) is impaired by lacking standardized testing protocols. One unclear methodological aspect is the number of records needed to minimize measurement error. Traditionally, measurements are repeated 3 to 5 times... measurement error and number of records. We determined the measurement error of a single versus the mean of 3 records of pressure pain detection threshold (PPDT), electrical pain detection threshold (EPDT), and nociceptive withdrawal reflex threshold (NWRT) in 429 chronic pain patients recruited in a routine clinical setting. METHODS: We calculated intraclass correlation coefficients and performed a Bland-Altman analysis. RESULTS: Intraclass correlation coefficients were all clearly greater than 0.75, and Bland-Altman analysis showed minute systematic errors with small point estimates and narrow 95% confidence...
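The Bland-Altman analysis used in this study compares two measurement protocols through the distribution of their paired differences. A minimal sketch on simulated thresholds (the scales and error magnitudes are illustrative, not taken from the study):

```python
import numpy as np

def bland_altman_limits(a, b):
    """Bland-Altman comparison of two paired measurement series:
    mean difference (systematic error) and 95% limits of agreement."""
    diff = np.asarray(a) - np.asarray(b)
    bias = diff.mean()
    loa = 1.96 * diff.std(ddof=1)
    return bias, bias - loa, bias + loa

rng = np.random.default_rng(7)
true = rng.normal(50.0, 10.0, size=1000)       # hypothetical pain thresholds
rec1 = true + rng.normal(0.0, 2.0, size=1000)  # a single record
rec3 = true + rng.normal(0.0, 2.0 / np.sqrt(3), size=1000)  # mean of 3 records

bias, lo, hi = bland_altman_limits(rec1, rec3)
```

A near-zero bias with narrow limits of agreement is what supports replacing the mean of 3 records with a single record in a simplified protocol.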
Katch, Frank I.; Katch, Victor L.
1980-01-01
Sources of error in body composition assessment by laboratory and field methods can be found in hydrostatic weighing, residual air volume, skinfolds, and circumferences. Statistical analysis can and should be used in the measurement of body composition. (CJ)
Hofbauer, E.; Rascher, R.; Friedke, F.; Kometer, R.
2017-06-01
The basic physical measurement principle in DaOS is the vignetting of a quasi-parallel light beam emitted by an expanded light source in an autocollimation arrangement. The beam is reflected by the surface under test, using invariant deflection by a moving and scanning pentaprism; thereby nearly any curvature of the specimen is measurable. Resolution, systematic errors, and random errors are shown and explicitly discussed for the profile determination error. Measurements of a "plano-double-sombrero" device are analyzed and reconstructed to find the limits of resolution and the errors of the reconstruction model and algorithms. These measurements are compared critically to reference results recorded by interferometry and by the Deflectometric Flatness Reference (DFR) method using a scanning penta device.
Local and omnibus goodness-of-fit tests in classical measurement error models
Ma, Yanyuan
2010-09-14
We consider functional measurement error models, i.e. models where covariates are measured with error and yet no distributional assumptions are made about the mismeasured variable. We propose and study a score-type local test and an orthogonal series-based, omnibus goodness-of-fit test in this context, where no likelihood function is available or calculated; that is, all the tests are proposed in the semiparametric model framework. We demonstrate that our tests have optimality properties and computational advantages that are similar to those of the classical score tests in the parametric model framework. The test procedures are applicable to several semiparametric extensions of measurement error models, including when the measurement error distribution is estimated non-parametrically as well as for generalized partially linear models. The performance of the local score-type and omnibus goodness-of-fit tests is demonstrated through simulation studies and analysis of a nutrition data set.
Small Inertial Measurement Units - Sources of Error and Limitations on Accuracy
Hoenk, M. E.
1994-01-01
Limits on the precision of small accelerometers for inertial measurement units are enumerated and discussed. Scaling laws and errors which affect the precision are discussed in terms of tradeoffs between size, sensitivity, and cost.
Statistical analysis with measurement error or misclassification strategy, method and application
Yi, Grace Y
2017-01-01
This monograph on measurement error and misclassification covers a broad range of problems and emphasizes unique features in modeling and analyzing problems arising from medical research and epidemiological studies. Many measurement error and misclassification problems have been addressed in various fields over the years as well as with a wide spectrum of data, including event history data (such as survival data and recurrent event data), correlated data (such as longitudinal data and clustered data), multi-state event data, and data arising from case-control studies. Statistical Analysis with Measurement Error or Misclassification: Strategy, Method and Application brings together assorted methods in a single text and provides an update of recent developments for a variety of settings. Measurement error effects and strategies of handling mismeasurement for different models are closely examined in combination with applications to specific problems. Readers with diverse backgrounds and objectives can utilize th...
Image pre-filtering for measurement error reduction in digital image correlation
Zhou, Yihao; Sun, Chen; Song, Yuntao; Chen, Jubing
2015-02-01
In digital image correlation, the sub-pixel intensity interpolation causes a systematic error in the measured displacements. The error increases toward high-frequency component of the speckle pattern. In practice, a captured image is usually corrupted by additive white noise. The noise introduces additional energy in the high frequencies and therefore raises the systematic error. Meanwhile, the noise also elevates the random error which increases with the noise power. In order to reduce the systematic error and the random error of the measurements, we apply a pre-filtering to the images prior to the correlation so that the high-frequency contents are suppressed. Two spatial-domain filters (binomial and Gaussian) and two frequency-domain filters (Butterworth and Wiener) are tested on speckle images undergoing both simulated and real-world translations. By evaluating the errors of the various combinations of speckle patterns, interpolators, noise levels, and filter configurations, we come to the following conclusions. All the four filters are able to reduce the systematic error. Meanwhile, the random error can also be reduced if the signal power is mainly distributed around DC. For high-frequency speckle patterns, the low-pass filters (binomial, Gaussian and Butterworth) slightly increase the random error and Butterworth filter produces the lowest random error among them. By using Wiener filter with over-estimated noise power, the random error can be reduced but the resultant systematic error is higher than that of low-pass filters. In general, Butterworth filter is recommended for error reduction due to its flexibility of passband selection and maximal preservation of the allowed frequencies. Binomial filter enables efficient implementation and thus becomes a good option if computational cost is a critical issue. While used together with pre-filtering, B-spline interpolator produces lower systematic error than bicubic interpolator and similar level of the random
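Of the filters compared above, the binomial filter is the cheapest to implement. A sketch of a separable 3x3 binomial pre-filter in pure NumPy, applied to an image before correlation (the edge handling and the `passes` parameter are implementation choices, not from the paper):

```python
import numpy as np

def binomial_prefilter(img, passes=1):
    """Separable 3x3 binomial (1/4, 1/2, 1/4) low-pass filter, applied to
    suppress the high-frequency content that drives interpolation-induced
    systematic error in DIC; borders are handled by edge replication."""
    kernel = np.array([0.25, 0.5, 0.25])
    out = img.astype(float)
    for _ in range(passes):
        padded = np.pad(out, 1, mode="edge")
        # filter along columns of each row, then along rows of each column
        rows = (kernel[0] * padded[:, :-2] + kernel[1] * padded[:, 1:-1]
                + kernel[2] * padded[:, 2:])
        out = (kernel[0] * rows[:-2, :] + kernel[1] * rows[1:-1, :]
               + kernel[2] * rows[2:, :])
    return out
```

The filter leaves a constant image unchanged and attenuates white noise, consistent with its role of reducing both the interpolation-induced systematic error and the noise-driven random error.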
Directory of Open Access Journals (Sweden)
Tao Li
2016-03-01
Full Text Available The derivation of a conventional error model for the miniature gyroscope-based measurement while drilling (MGWD) system is based on the assumption that the attitude errors are small enough that the direction cosine matrix (DCM) can be approximated or simplified using small-angle attitude errors. However, the simplification of the DCM introduces errors into the navigation solutions of the MGWD system if the initial alignment cannot provide a precise attitude, especially for low-cost microelectromechanical system (MEMS) sensors operated in harsh multilateral horizontal downhole drilling environments. This paper proposes a novel nonlinear error model (NNEM) that incorporates the error of the DCM, and the NNEM can reduce the propagated errors under large-angle attitude error conditions. Zero velocity and zero position serve as the reference points and the innovations in the state estimation of the particle filter (PF) and Kalman filter (KF). The experimental results illustrate that the PF performs better than the KF, and the PF with the NNEM can effectively restrain the errors of the system states, especially the azimuth, velocity, and height in the quasi-stationary condition.
The impact of measurement errors in the identification of regulatory networks
Directory of Open Access Journals (Sweden)
Sato João R
2009-12-01
Full Text Available Abstract Background There are several studies in the literature depicting measurement error in gene expression data and also several others about regulatory network models. However, only a small fraction describe measurement error in mathematical regulatory network models and show how to identify these networks under different rates of noise. Results This article investigates the effects of measurement error on the estimation of the parameters in regulatory networks. Simulation studies indicate that, in both time series (dependent) and non-time series (independent) data, the measurement error strongly affects the estimated parameters of the regulatory network models, biasing them as predicted by the theory. Moreover, when testing the parameters of the regulatory network models, p-values computed by ignoring the measurement error are not reliable, since the rate of false positives is not controlled under the null hypothesis. In order to overcome these problems, we present an improved version of the ordinary least squares estimator for independent (regression) and dependent (autoregressive) models when the variables are subject to noise. Moreover, measurement error estimation procedures for microarrays are also described. Simulation results also show that both corrected methods perform better than the standard ones (i.e., ignoring measurement error). The proposed methodologies are illustrated using microarray data from lung cancer patients and mouse liver time series data. Conclusions Measurement error seriously affects the identification of regulatory network models, and thus it must be reduced or taken into account in order to avoid erroneous conclusions. This could be one of the reasons for the high biological false positive rates identified in actual regulatory network models.
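The attenuation bias described in the Results, and a moment-based correction of the kind proposed for the independent (regression) case, can be illustrated with a toy simulation; the slope and variances below are illustrative assumptions, not values from the article:

```python
import numpy as np

rng = np.random.default_rng(1)
n, beta, sigma_u2 = 200_000, 2.0, 0.5   # sample size, true slope, known error variance
x = rng.normal(size=n)                  # true covariate, unit variance
w = x + rng.normal(scale=np.sqrt(sigma_u2), size=n)  # noisy measurement of x
y = beta * x + rng.normal(size=n)

# Naive OLS through the origin is attenuated by var(x) / (var(x) + sigma_u2)
naive = np.sum(w * y) / np.sum(w * w)
# Correcting the denominator by the known error variance removes the bias
corrected = np.sum(w * y) / (np.sum(w * w) - n * sigma_u2)

print(naive < corrected)             # True: the naive slope is biased toward zero
print(abs(corrected - beta) < 0.05)  # True: the correction recovers the slope
```

The same idea, applied to lagged values in an autoregressive model, underlies the correction for the dependent-data case.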
Carroll, Raymond J.
2010-05-01
This paper considers identification and estimation of a general nonlinear Errors-in-Variables (EIV) model using two samples. Both samples consist of a dependent variable, some error-free covariates, and an error-prone covariate, for which the measurement error has unknown distribution and could be arbitrarily correlated with the latent true values; and neither sample contains an accurate measurement of the corresponding true variable. We assume that the regression model of interest - the conditional distribution of the dependent variable given the latent true covariate and the error-free covariates - is the same in both samples, but the distributions of the latent true covariates vary with observed error-free discrete covariates. We first show that the general latent nonlinear model is nonparametrically identified using the two samples when both could have nonclassical errors, without either instrumental variables or independence between the two samples. When the two samples are independent and the nonlinear regression model is parameterized, we propose sieve Quasi Maximum Likelihood Estimation (Q-MLE) for the parameter of interest, and establish its root-n consistency and asymptotic normality under possible misspecification, and its semiparametric efficiency under correct specification, with easily estimated standard errors. A Monte Carlo simulation and a data application are presented to show the power of the approach.
Directory of Open Access Journals (Sweden)
Yuriy YATSUK
2015-06-01
Full Text Available During design the uncertainty approach cannot be used because measurement results are not yet available; instead, the error approach can be applied, taking the nominal value of the instrument's transformation function as the true value. The limiting possibilities of additive error correction of measuring instruments for Cyber-Physical Systems are studied based on general and special methods of measurement. Principles of maximal symmetry of the measuring circuit and its minimal reconfiguration are proposed for measurement and/or calibration. It is theoretically justified, for a variety of correction methods, that a minimum additive error of measuring instruments exists when the real equivalent parameters of the input electronic switches are taken into account. Conditions for in-place self-calibration and verification of the measuring instruments are also studied.
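One classical route to the additive error correction discussed above is a two-phase (auto-zero) measurement, where the same channel is read with and without the input connected and the readings are subtracted; the toy channel and offset below are hypothetical illustrations, not the circuit studied in the paper:

```python
def auto_zero(read_channel):
    # Phase 1 reads signal + additive error; phase 2 reads the error alone.
    # Subtracting the two cancels any additive error common to both phases.
    y_signal = read_channel(connect_input=True)
    y_zero = read_channel(connect_input=False)
    return y_signal - y_zero

def channel(connect_input, x=125.0, offset=0.25):
    # Toy measuring channel with a constant additive offset error of 0.25 units
    return (x if connect_input else 0.0) + offset

print(channel(connect_input=True))  # 125.25: the raw reading is biased
print(auto_zero(channel))           # 125.0: the additive error cancels
```

The residual error of such a scheme is set by whatever differs between the two phases, which is why the paper's attention to the real equivalent parameters of the input switches matters.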
Wang, Ching-Yun; Cullings, Harry; Song, Xiao; Kopecky, Kenneth J
2017-11-01
Observational epidemiological studies often confront the problem of estimating exposure-disease relationships when the exposure is not measured exactly. In this paper, we investigate exposure measurement error in excess relative risk regression, which is a widely used model in radiation exposure effect research. In the study cohort, a surrogate variable is available for the true unobserved exposure variable. The surrogate variable satisfies a generalized version of the classical additive measurement error model, but it may or may not have repeated measurements. In addition, an instrumental variable is available for individuals in a subset of the whole cohort. We develop a nonparametric correction (NPC) estimator using data from the subcohort, and further propose a joint nonparametric correction (JNPC) estimator using all observed data to adjust for exposure measurement error. An optimal linear combination estimator of JNPC and NPC is further developed. The proposed estimators are nonparametric, consistent without imposing a covariate or error distribution, and robust to heteroscedastic errors. Finite sample performance is examined via a simulation study. We apply the developed methods to data from the Radiation Effects Research Foundation, in which chromosome aberration is used to adjust for the effects of radiation dose measurement error on the estimation of radiation dose responses.
Estimation of heading gyrocompass error using a GPS 3DF system: Impact on ADCP measurements
Directory of Open Access Journals (Sweden)
Simón Ruiz
2002-12-01
Full Text Available Traditionally the horizontal orientation of a ship (heading) has been obtained from a gyrocompass. This instrument is still used on research vessels but has an estimated error of about 2-3 degrees, inducing a systematic error in the cross-track velocity measured by an Acoustic Doppler Current Profiler (ADCP). The three-dimensional positioning system (GPS 3DF) provides an independent heading measurement with accuracy better than 0.1 degree. The Spanish research vessel BIO Hespérides has been operating with this new system since 1996. For the first time on this vessel, the data from this new instrument are used to estimate the gyrocompass error. The methodology we use follows the scheme developed by Griffiths (1994), which compares data from the gyrocompass and the GPS system in order to obtain an interpolated error function. In the present work we apply this methodology to mesoscale surveys performed during the observational phase of the OMEGA project, in the Alboran Sea. The heading-dependent gyrocompass error dominated. Errors in gyrocompass heading of 1.4-3.4 degrees have been found, which give a maximum error in measured cross-track ADCP velocity of 24 cm s-1.
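The size of the cross-track velocity error induced by a heading offset follows directly from rotating the ship-speed vector; with an assumed ship speed of 5 m/s (an illustrative value, not stated in the abstract), the largest reported heading error reproduces the order of the 24 cm s-1 figure:

```python
import math

def cross_track_error(ship_speed_cm_s, heading_error_deg):
    # A heading error of dtheta leaks a fraction sin(dtheta) of the
    # along-track ship speed into the measured cross-track velocity
    return ship_speed_cm_s * math.sin(math.radians(heading_error_deg))

print(round(cross_track_error(500.0, 2.8), 1))  # 24.4 cm/s, the order of the reported maximum
print(cross_track_error(500.0, 0.0))            # 0.0: no heading error, no leakage
```

This is why even a sub-degree heading accuracy from GPS 3DF translates into a several-fold reduction of the cross-track velocity bias.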
Directory of Open Access Journals (Sweden)
Hayek Lee-Ann C.
2005-01-01
Full Text Available Several analytic techniques have been used to determine sexual dimorphism in vertebrate morphological measurement data, with no emergent consensus on which technique is superior. A further confounding problem for frog data is the existence of considerable measurement error. To determine dimorphism, we examine a single hypothesis (H0: equal means) for two groups (females and males). We demonstrate that frog measurement data meet assumptions for clearly defined statistical hypothesis testing with statistical linear models rather than those of exploratory multivariate techniques such as principal components, correlation, or correspondence analysis. In order to distinguish biological from statistical significance of hypotheses, we propose a new protocol that incorporates measurement error and effect size. Measurement error is evaluated with a novel measurement error index. Effect size, widely used in the behavioral sciences and in meta-analysis studies in biology, proves to be the most useful single metric to evaluate whether statistically significant results are biologically meaningful. Definitions for a range of small, medium, and large effect sizes specifically for frog measurement data are provided. Examples with measurement data for species of the frog genus Leptodactylus are presented. The new protocol is recommended not only to evaluate sexual dimorphism for frog data but for any animal measurement data for which the measurement error index and observed or a priori effect sizes can be calculated.
Error analysis of cine phase contrast MRI velocity measurements used for strain calculation.
Jensen, Elisabeth R; Morrow, Duane A; Felmlee, Joel P; Odegard, Gregory M; Kaufman, Kenton R
2015-01-02
Cine Phase Contrast (CPC) MRI offers unique insight into localized skeletal muscle behavior by providing the ability to quantify muscle strain distribution during cyclic motion. Muscle strain is obtained by temporally integrating and spatially differentiating CPC-encoded velocity. The aim of this study was to quantify CPC measurement accuracy and precision and to describe error propagation into displacement and strain. Using an MRI-compatible jig to move a B-gel phantom within a 1.5 T MRI bore, CPC-encoded velocities were collected. The three orthogonal encoding gradients (through-plane, frequency, and phase) were evaluated independently in post-processing. Two systematic error types were corrected: eddy current-induced bias and calibration-type error. Measurement accuracy and precision were quantified before and after removal of systematic error. Through-plane- and frequency-encoded data accuracy was within 0.4 mm/s after removal of systematic error, a 70% improvement over the raw data. Corrected phase-encoded data accuracy was within 1.3 mm/s. Measured random error was between 1 and 1.4 mm/s, which followed the theoretical prediction. Propagation of random measurement error into displacement and strain was found to depend on the number of tracked time segments, time segment duration, mesh size, and dimensional order. To verify this, theoretical predictions were compared to experimentally calculated displacement and strain error. For the parameters tested, experimental and theoretical results aligned well. Random strain error approximately halved with a two-fold mesh size increase, as predicted. Displacement and strain accuracy were within 2.6 mm and 3.3%, respectively. These results can be used to predict the accuracy and precision of displacement and strain in user-specific applications. Copyright © 2014 Elsevier Ltd. All rights reserved.
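The dependence of the integrated displacement error on the number of tracked time segments can be checked with a small Monte Carlo sketch; the velocity noise level and timing below are illustrative assumptions consistent with the reported 1-1.4 mm/s random error, not the study's actual scan parameters:

```python
import numpy as np

rng = np.random.default_rng(3)
sigma_v, dt, n_steps, trials = 1.2, 0.02, 50, 20_000  # mm/s, s, segments, repeats

# Temporal integration of zero-mean velocity noise over n_steps time segments
noise = rng.normal(scale=sigma_v, size=(trials, n_steps))
disp_err = (noise * dt).sum(axis=1)

empirical = disp_err.std()
predicted = sigma_v * dt * np.sqrt(n_steps)  # random-walk accumulation: sqrt(N) growth
print(round(predicted, 4))                   # 0.1697 mm
print(abs(empirical - predicted) / predicted < 0.05)  # True
```

The sqrt(N) growth is what makes the number of tracked segments and segment duration first-order terms in the displacement and strain error budget.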
Biggs, Adam T
2017-07-01
Visual search studies are common in cognitive psychology, and the results generally focus upon accuracy, response times, or both. Most research has focused upon search scenarios where no more than 1 target will be present for any single trial. However, if multiple targets can be present on a single trial, it introduces an additional source of error because the found target can interfere with subsequent search performance. These errors have been studied thoroughly in radiology for decades, although their emphasis in cognitive psychology studies has been more recent. One particular issue with multiple-target search is that these subsequent search errors (i.e., specific errors which occur following a found target) are measured differently by different studies. There is currently no guidance as to which measurement method is best or what impact different measurement methods could have upon various results and conclusions. The current investigation provides two efforts to address these issues. First, the existing literature is reviewed to clarify the appropriate scenarios where subsequent search errors could be observed. Second, several different measurement methods are used with several existing datasets to contrast and compare how each method would have affected the results and conclusions of those studies. The evidence is then used to provide appropriate guidelines for measuring multiple-target search errors in future studies.
Measurements of stem diameter: implications for individual- and stand-level errors.
Paul, Keryn I; Larmour, John S; Roxburgh, Stephen H; England, Jacqueline R; Davies, Micah J; Luck, Hamish D
2017-08-01
Stem diameter is one of the most common measurements made to assess the growth of woody vegetation, and the commercial and environmental benefits that it provides (e.g. wood or biomass products, carbon sequestration, landscape remediation). Yet inconsistency in its measurement is a continuing source of error in estimates of stand-scale measures such as basal area, biomass, and volume. Here we assessed errors in stem diameter measurement through repeated measurements of individual trees and shrubs of varying size and form (i.e. single- and multi-stemmed) across a range of contrasting stands, from complex mixed-species plantings to commercial single-species plantations. We compared a standard diameter tape with a Stepped Diameter Gauge (SDG) for time efficiency and measurement error. Measurement errors in diameter were slightly (but significantly) influenced by size and form of the tree or shrub, and stem height at which the measurement was made. Compared to standard tape measurement, the mean systematic error with SDG measurement was only -0.17 cm, but varied between -0.10 and -0.52 cm. Similarly, random error was relatively large, with standard deviations (and percentage coefficients of variation) averaging only 0.36 cm (and 3.8%), but varying between 0.14 and 0.61 cm (and 1.9 and 7.1%). However, at the stand scale, sampling errors (i.e. how well individual trees or shrubs selected for measurement of diameter represented the true stand population in terms of the average and distribution of diameter) generally had at least a tenfold greater influence on random errors in basal area estimates than errors in diameter measurements. This supports the use of diameter measurement tools that have high efficiency, such as the SDG. Use of the SDG almost halved the time required for measurements compared to the diameter tape. Based on these findings, recommendations include the following: (i) use of a tape to maximise accuracy when developing allometric models, or when
Measurement error of global rainbow technique: The effect of recording parameters
Wu, Xue-cheng; Li, Can; Jiang, Hao-yu; Cao, Jian-zheng; Chen, Ling-hong; Gréhan, Gerard; Cen, Ke-fa
2017-11-01
Rainbow refractometry can measure the refractive index and size of spray droplets simultaneously. Recording parameters of the global rainbow imaging system, such as recording distance and scattering angle recording range, play a vital role in in-situ high accuracy measurement. In this paper, a theoretical and experimental investigation of the effect of recording parameters on the measurement error of the global rainbow technique is carried out for the first time. The relation between the two recording parameters and the monochromatic aberrations in the global rainbow imaging system is analyzed. In the framework of Lorenz-Mie theory and modified Nussenzveig theory with correction coefficients, measurement error curves of the refractive index and size of the droplets caused by aberrations were simulated for different recording parameters. The simulated results showed that measurement error increased with the RMS radius of the diffuse spot; a long recording distance and a large scattering angle recording range both caused a larger diffuse spot; and recording parameters were indicated to have a great effect on refractive index measurement error, but little effect on measurement of droplet size. A sharp rise in spot radius at large recording parameters was mainly due to spherical aberration and coma. To confirm some of the conclusions, an experiment was conducted. The experimental results showed that the refractive index measurement error was as high as 1.3 × 10⁻³ for a recording distance of 31 cm. Accordingly, recording parameters should be set to as small a value as possible with the same optical elements.
Directory of Open Access Journals (Sweden)
Xu Zhang
2015-03-01
Full Text Available This article proposes a novel method for identifying the motion errors (mainly straightness and angular errors) of a linear slide, based on the laser interferometry technique integrated with a shifting method. First, the straightness error of a linear slide, together with the angular error (pitch error in the vertical direction and yaw error in the horizontal direction), is schematically explained. Then, a laser interferometry-based system is constructed to measure the motion errors of a linear slide, and an error separation algorithm is developed for extracting the straightness error, the angular error, and the tilt angle error caused by the motion of the reflector. In the proposed method, the reflector is mounted on the slide moving along the guideway. The light-phase variation of two interfering laser beams identifies the lateral translation error of the slide. The differential outputs, sampled with a shifted initial point on the same datum line, are applied to evaluate the angular error of the slide. Furthermore, the yaw error of the slide is measured by a laser interferometer in a laboratory environment and compared with the evaluated values. Experimental results demonstrate that the proposed method has the advantages of reducing the effects caused by assembly error and by the tilt angle errors caused by movement of the reflector, adapting to long- or short-range measurement, and allowing the measurement experiment to be performed conveniently and easily.
Experimental validation of error in temperature measurements in thin walled ductile iron castings
DEFF Research Database (Denmark)
Pedersen, Karl Martin; Tiedje, Niels Skat
2007-01-01
An experimental analysis has been performed to validate the measurement error of cooling curves measured in thin-walled ductile cast iron. Specially designed thermocouples with Ø0.2 mm thermocouple wire in a Ø1.6 mm ceramic tube were used for the experiments. Temperatures were measured in plates...... to a level about 20 °C lower than the actual temperature in the casting. Factors affecting the measurement error (oxide layer on the thermocouple wire, penetration into the ceramic tube, and variation in placement of the thermocouple) are discussed. Finally, it is shown how a useful cooling curve may be obtained...
Directory of Open Access Journals (Sweden)
Noureddine Barka
2016-01-01
Full Text Available Error compensation techniques have been widely applied to improve multiaxis machine accuracy. However, due to the lack of reliable instrumentation for direct and overall measurements, all compensation methods are based on offline measurements of each error component separately. The results of these measurements are static in nature and can only reflect the conditions at the moment of measurement. These results are not representative under real working conditions because of disturbances from load deformations, thermal distortions, and dynamic perturbations. The present approach involves the development of a new measurement system capable of dynamically evaluating the errors according to the six degrees of freedom. The developed system allows the generation of useful data that cover all machine states regardless of the operating conditions. The obtained measurements can be used to evaluate the performance of the machine, for calibration, and for real-time compensation of errors. This system is able to perform dynamic measurements reflecting the global accuracy of the machine tool without a long and expensive analysis of the contribution of various error sources. Finally, the system exhibits metrological characteristics compatible with high-precision applications.
DEFF Research Database (Denmark)
Tybjærg-Hansen, Anne
2009-01-01
Within-person variability in measured values of multiple risk factors can bias their associations with disease. The multivariate regression calibration (RC) approach can correct for such measurement error and has been applied to studies in which true values or independent repeat measurements......-specific, averaged, and empirical Bayes estimates of RC parameters. Additionally, we allow for binary covariates (e.g. smoking status) and for uncertainty and time trends in the measurement error corrections. Our methods are illustrated using a subset of individual participant data from prospective long-term studies...... in the Fibrinogen Studies Collaboration to assess the relationship between usual levels of plasma fibrinogen and the risk of coronary heart disease, allowing for measurement error in plasma fibrinogen and several confounders. Publication date: 2009/3/30
Directory of Open Access Journals (Sweden)
Ivan M Roitt
2010-01-01
Full Text Available Bioimpedance measurements are of great use and can provide considerable insight into biological processes. However, there are a number of possible sources of measurement error that must be considered. The most dominant source of error is found in bipolar measurements where electrode polarisation effects are superimposed on the true impedance of the sample. Even with the tetrapolar approach that is commonly used to circumvent this issue, other errors can persist. Here we characterise the positive phase and rise in impedance magnitude with frequency that can result from the presence of any parallel conductive pathways in the measurement set-up. It is shown that fitting experimental data to an equivalent electrical circuit model allows for accurate determination of the true sample impedance as validated through finite element modelling (FEM of the measurement chamber. Finally, the model is used to extract dispersion information from cell cultures to characterise their growth.
Linear mixed models for replication data to efficiently allow for covariate measurement error.
Bartlett, Jonathan W; De Stavola, Bianca L; Frost, Chris
2009-11-10
It is well known that measurement error in the covariates of regression models generally causes bias in parameter estimates. Correction for such biases requires information concerning the measurement error, which is often in the form of internal validation or replication data. Regression calibration (RC) is a popular approach to correct for covariate measurement error, which involves predicting the true covariate using error-prone measurements. Likelihood methods have previously been proposed as an alternative approach to estimate the parameters in models affected by measurement error, but have been relatively infrequently employed in medical statistics and epidemiology, partly because of computational complexity and concerns regarding robustness to distributional assumptions. We show how a standard random-intercepts model can be used to obtain maximum likelihood (ML) estimates when the outcome model is linear or logistic regression under certain normality assumptions, when internal error-prone replicate measurements are available. Through simulations we show that for linear regression, ML gives more efficient estimates than RC, although the gain is typically small. Furthermore, we show that RC and ML estimates remain consistent even when the normality assumptions are violated. For logistic regression, our implementation of ML is consistent if the true covariate is conditionally normal given the outcome, in contrast to RC. In simulations, this ML estimator showed less bias in situations where RC gives non-negligible biases. Our proposal makes the ML approach to dealing with covariate measurement error more accessible to researchers, which we hope will improve its viability as a useful alternative to methods such as RC.
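Regression calibration with internal replicate measurements, against which the abstract compares its ML approach, can be sketched with method-of-moments estimates; the data-generating values below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)
n, beta = 100_000, 1.5
x = rng.normal(loc=1.0, size=n)                      # true covariate
w = x[:, None] + rng.normal(scale=0.8, size=(n, 2))  # two error-prone replicates
y = beta * x + rng.normal(size=n)                    # linear outcome model

wbar = w.mean(axis=1)
sigma_u2 = np.mean(np.var(w, axis=1, ddof=1))   # error variance from the replicates
sigma_x2 = np.var(wbar, ddof=1) - sigma_u2 / 2  # between-person variance

# RC: replace the error-prone mean by its best linear predictor of x,
# then run the ordinary regression on the calibrated covariate
lam = sigma_x2 / (sigma_x2 + sigma_u2 / 2)
x_hat = wbar.mean() + lam * (wbar - wbar.mean())
b_rc = np.cov(x_hat, y)[0, 1] / np.var(x_hat, ddof=1)
print(abs(b_rc - beta) < 0.05)  # True: RC removes the attenuation bias
```

The ML alternative discussed in the abstract instead maximizes the joint likelihood of the outcome and the replicates under a random-intercepts model, which is where its (typically small) efficiency gain comes from.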
Uncertainty in Measurement and Total Error: Tools for Coping with Diagnostic Uncertainty.
Theodorsson, Elvar
2017-03-01
Laboratory medicine decreases diagnostic uncertainty but is influenced by factors causing uncertainties. Error and uncertainty methods are commonly seen as incompatible in laboratory medicine. New versions of the Guide to the Expression of Uncertainty in Measurement and the International Vocabulary of Metrology will incorporate both uncertainty and error methods, which will assist collaboration between metrology and laboratories. The law of propagation of uncertainty and Bayesian statistics are theoretically preferable to frequentist statistical methods in diagnostic medicine. However, frequentist statistics are better known and more widely practiced. Error and uncertainty methods should both be recognized as legitimate for calculating diagnostic uncertainty. Copyright © 2016 The Author. Published by Elsevier Inc. All rights reserved.
Energy Technology Data Exchange (ETDEWEB)
Chung, Ting-Yi; Huang, Szu-Jung; Fu, Huang-Wen; Chang, Ho-Ping; Chang, Cheng-Hsiang [National Synchrotron Radiation Research Center, Hsinchu Science Park, Hsinchu 30076, Taiwan (China); Hwang, Ching-Shiang [National Synchrotron Radiation Research Center, Hsinchu Science Park, Hsinchu 30076, Taiwan (China); Department of Electrophysics, National Chiao Tung University, Hsinchu 30050, Taiwan (China)
2016-08-01
The effect of an APPLE II-type elliptically polarized undulator (EPU) on the beam dynamics was investigated using active and passive methods. To reduce the tune shift and improve the injection efficiency, dynamic multipole errors were compensated using L-shaped iron shims, which resulted in stable top-up operation at the minimum gap. The skew quadrupole error was compensated using a multipole corrector located downstream of the EPU to minimize betatron coupling, which ensured the enhancement of the synchrotron radiation brightness. The investigation methods, a numerical simulation algorithm, a multipole error correction method, and the beam-based measurement results are discussed.
Merker, Claire; Ament, Felix; Clemens, Marco
2017-04-01
The quantification of measurement uncertainty for rain radar data remains challenging. Radar reflectivity measurements are affected, amongst other things, by calibration errors, noise, blocking and clutter, and attenuation. Their combined impact on measurement accuracy is difficult to quantify due to incomplete process understanding and complex interdependencies. An improved quality assessment of rain radar measurements is of interest for applications both in meteorology and hydrology, for example for precipitation ensemble generation, rainfall runoff simulations, or in data assimilation for numerical weather prediction. Especially a detailed description of the spatial and temporal structure of errors is beneficial in order to make best use of the areal precipitation information provided by radars. Radar precipitation ensembles are one promising approach to represent spatially variable radar measurement errors. We present a method combining ensemble radar precipitation nowcasting with data assimilation to estimate radar measurement uncertainty at each pixel. This combination of ensemble forecast and observation yields a consistent spatial and temporal evolution of the radar error field. We use an advection-based nowcasting method to generate an ensemble reflectivity forecast from initial data of a rain radar network. Subsequently, reflectivity data from single radars is assimilated into the forecast using the Local Ensemble Transform Kalman Filter. The spread of the resulting analysis ensemble provides a flow-dependent, spatially and temporally correlated reflectivity error estimate at each pixel. We will present first case studies that illustrate the method using data from a high-resolution X-band radar network.
Sarkar, Abhra
2014-10-02
We consider the problem of estimating the density of a random variable when precise measurements on the variable are not available, but replicated proxies contaminated with measurement error are available for sufficiently many subjects. Under the assumption of additive measurement errors this reduces to a problem of deconvolution of densities. Deconvolution methods often make restrictive and unrealistic assumptions about the density of interest and the distribution of measurement errors, e.g., normality and homoscedasticity and thus independence from the variable of interest. This article relaxes these assumptions and introduces novel Bayesian semiparametric methodology based on Dirichlet process mixture models for robust deconvolution of densities in the presence of conditionally heteroscedastic measurement errors. In particular, the models can adapt to asymmetry, heavy tails and multimodality. In simulation experiments, we show that our methods vastly outperform a recent Bayesian approach based on estimating the densities via mixtures of splines. We apply our methods to data from nutritional epidemiology. Even in the special case when the measurement errors are homoscedastic, our methodology is novel and dominates other methods that have been proposed previously. Additional simulation results, instructions on getting access to the data set and R programs implementing our methods are included as part of online supplemental materials.
Linear and nonlinear magnetic error measurements using action and phase jump analysis
Directory of Open Access Journals (Sweden)
Javier F. Cardona
2009-01-01
Full Text Available “Action and phase jump” analysis is presented: a beam-based method that uses amplitude and phase knowledge of a particle trajectory to locate and measure magnetic errors in an accelerator lattice. The expected performance of the method is first tested using single-particle simulations in the optical lattice of the Relativistic Heavy Ion Collider (RHIC). Such simulations predict that under ideal conditions typical quadrupole errors can be estimated within an uncertainty of 0.04%. Other simulations suggest that sextupole errors can be estimated within a 3% uncertainty. The action and phase jump analysis is then applied to real RHIC orbits with known quadrupole errors, and to real Super Proton Synchrotron (SPS) orbits with known sextupole errors. It is possible to estimate the strength of a skew quadrupole error from measured RHIC orbits within a 1.2% uncertainty, and to estimate the strength of a strong sextupole component from the measured SPS orbits within a 7% uncertainty.
MEASUREMENT ERROR EFFECT ON THE POWER OF CONTROL CHART FOR ZERO-TRUNCATED POISSON DISTRIBUTION
Directory of Open Access Journals (Sweden)
Ashit Chakraborty
2013-09-01
Full Text Available Measurement error is the difference between the true value and the measured value of a quantity; it exists in practice and may considerably affect the performance of control charts in some cases. Measurement error variability has uncertainty which can arise from several sources. In this paper, we have studied the effect of these sources of variability on the power characteristics of control charts and obtained the values of average run length (ARL) for the zero-truncated Poisson distribution (ZTPD). An expression for the power of the control chart for variable sample size under a standardized normal variate for the ZTPD is also derived.
[Measurement Error Analysis and Calibration Technique of NTC-Based Body Temperature Sensor].
Deng, Chi; Hu, Wei; Diao, Shengxi; Lin, Fujiang; Qian, Dahong
2015-11-01
An NTC thermistor-based wearable body temperature sensor was designed. This paper describes the design principles and realization method of the NTC-based body temperature sensor, and analyzes the temperature measurement error sources of the sensor in detail. The automatic measurement and calibration method for the ADC error is given. The results showed that the measurement accuracy of the calibrated body temperature sensor is better than ±0.04 °C. The temperature sensor has the advantages of high accuracy, small size, and low power consumption.
DEFF Research Database (Denmark)
Budtz-Jørgensen, Esben; Keiding, Niels; Grandjean, Philippe
2003-01-01
Non-differential measurement error in the exposure variable is known to attenuate the dose-response relationship. The amount of attenuation introduced in a given situation is not only a function of the precision of the exposure measurement but also depends on the conditional variance of the true exposure given the other independent variables. In addition, confounder effects may also be affected by the exposure measurement error. These difficulties in statistical model development are illustrated by examples from an epidemiological study performed in the Faroe Islands to investigate the adverse…
Grauer, Jared A.; Morelli, Eugene A.
2013-01-01
A nonlinear simulation of the NASA Generic Transport Model was used to investigate the effects of errors in sensor measurements, mass properties, and aircraft geometry on the accuracy of dynamic models identified from flight data. Measurements from a typical system identification maneuver were systematically and progressively deteriorated and then used to estimate stability and control derivatives within a Monte Carlo analysis. Based on the results, recommendations were provided for maximum allowable errors in sensor measurements, mass properties, and aircraft geometry to achieve desired levels of dynamic modeling accuracy. Results using other flight conditions, parameter estimation methods, and a full-scale F-16 nonlinear aircraft simulation were compared with these recommendations.
Yu, Hao; Qian, Zheng; Liu, Huayi; Qu, Jiaqi
2018-02-14
This paper analyzes the measurement error, caused by the position of the current-carrying conductor, of a circular array of magnetic sensors for current measurement. The circular array of magnetic sensors is an effective approach for AC or DC non-contact measurement, as it is low-cost and light-weight, and has a large linear range, wide bandwidth, and low noise. In particular, it has been claimed that such a structure has excellent reduction ability for errors caused by the position of the current-carrying conductor, crosstalk current interference, the shape of the conductor cross-section, and the Earth's magnetic field. However, the position of the current-carrying conductor, including un-centeredness and un-perpendicularity, has not been analyzed in detail until now. In this paper, for the purpose of achieving minimum measurement error, a theoretical analysis is proposed based on vector inner and exterior products. In the presented mathematical model of relative error, the un-center offset distance, the un-perpendicular angle, the radius of the circle, and the number of magnetic sensors are expressed in one equation. The relative error caused by the position of the current-carrying conductor is compared between four and eight sensors. Tunnel magnetoresistance (TMR) sensors are used in the experimental prototype to verify the mathematical model. The analysis results can serve as a reference for designing the details of circular arrays of magnetic sensors for current measurement in practical situations.
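The position-error behaviour of such a circular array can be reproduced with a small numerical sketch. The array radius, conductor offset, and current below are hypothetical, and an idealized infinite straight conductor in the array plane is assumed; the sketch only illustrates the discrete-Ampère's-law principle and the four-versus-eight-sensor comparison:

```python
import math

MU0 = 4e-7 * math.pi  # vacuum permeability

def field_at(sensor_xy, wire_xy, current):
    # 2-D field of an infinite straight wire (current out of the plane)
    dx, dy = sensor_xy[0] - wire_xy[0], sensor_xy[1] - wire_xy[1]
    r = math.hypot(dx, dy)
    b = MU0 * current / (2.0 * math.pi * r)
    # field is perpendicular to the radius vector from the wire
    return (-b * dy / r, b * dx / r)

def estimate_current(n_sensors, radius, wire_xy, current):
    # discrete Ampere's law: sum of tangential components times arc length, over mu0
    total = 0.0
    for i in range(n_sensors):
        ang = 2.0 * math.pi * i / n_sensors
        sx, sy = radius * math.cos(ang), radius * math.sin(ang)
        tx, ty = -math.sin(ang), math.cos(ang)   # tangential unit vector
        bx, by = field_at((sx, sy), wire_xy, current)
        total += bx * tx + by * ty
    return total * (2.0 * math.pi * radius / n_sensors) / MU0

def relative_error(n_sensors, radius, offset, current=100.0):
    # un-centered conductor displaced along x by `offset`
    est = estimate_current(n_sensors, radius, (offset, 0.0), current)
    return abs(est - current) / current
```

For a centered conductor the discrete sum is exact; for an off-center conductor the error shrinks rapidly as sensors are added, consistent with the four-versus-eight comparison in the abstract.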
Utilizing measure-based feedback in control-mastery theory: A clinical error.
Snyder, John; Aafjes-van Doorn, Katie
2016-09-01
Clinical errors and ruptures are an inevitable part of clinical practice. Often times, therapists are unaware that a clinical error or rupture has occurred, leaving no space for repair, and potentially leading to patient dropout and/or less effective treatment. One way to overcome our blind spots is by frequently and systematically collecting measure-based feedback from the patient. Patient feedback measures that focus on the process of psychotherapy such as the Patient's Experience of Attunement and Responsiveness scale (PEAR) can be used in conjunction with treatment outcome measures such as the Outcome Questionnaire 45.2 (OQ-45.2) to monitor the patient's therapeutic experience and progress. The regular use of these types of measures can aid clinicians in the identification of clinical errors and the associated patient deterioration that might otherwise go unnoticed and unaddressed. The current case study describes an instance of clinical error that occurred during the 2-year treatment of a highly traumatized young woman. The clinical error was identified using measure-based feedback and subsequently understood and addressed from the theoretical standpoint of the control-mastery theory of psychotherapy. An alternative hypothetical response is also presented and explained using control-mastery theory. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
Directory of Open Access Journals (Sweden)
Parnchit Wattanasaruch
2012-09-01
Full Text Available The analyses of clinical and epidemiologic studies are often based on some kind of regression analysis, mainly linear regression and logistic models. These analyses are often affected by the fact that one or more of the predictors are measured with error. Error in the predictors is also known to bias the estimates and hypothesis-testing results. One of the procedures frequently used to handle this problem, in order to reduce the measurement errors, is the method of regression calibration for predicting the continuous covariate. The idea is to predict the true value of the error-prone predictor from the observed data, then to use the predicted value in the analyses. In this research we develop four calibration procedures, namely probit, complementary log-log, logit, and logistic calibration procedures, for correction of measurement error and/or misclassification error, to predict the true values of the misclassified explanatory variables used in generalized linear models. The procedures give the predicted true values of a binary explanatory variable using the calibration techniques, and these predicted values are then used to fit three models, namely the probit, the complementary log-log, and the logit models, under a binary response. All are investigated by considering the mean square error (MSE) in 1,000 simulation studies for each case of known parameters and conditions. The results show that the proposed working calibration techniques that perform adequately well are the probit, logistic, and logit calibration procedures. Both the probit calibration procedure and the probit model are superior to the logistic and logit calibrations due to the smallest MSE. Furthermore, the probit model parameter estimates also improve the effects of the misclassified explanatory variable. Only the complementary log-log model and its calibration technique are appropriate when measurement error is moderate and the sample size is large.
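The core regression-calibration idea, predicting the true value of the error-prone covariate and then using the prediction in the outcome model, can be sketched in a few lines. This minimal version uses a simple conditional-mean calibration for a binary covariate with hypothetical sensitivity/specificity and coefficient values, and a linear outcome, rather than the probit/logit fits studied in the paper:

```python
import random

random.seed(1)

def simulate(n, sens=0.8, spec=0.9, beta0=1.0, beta1=2.0):
    # true binary covariate X, misclassified surrogate W, continuous outcome Y
    data = []
    for _ in range(n):
        x = 1 if random.random() < 0.4 else 0
        w = (1 if random.random() < sens else 0) if x else (1 if random.random() > spec else 0)
        y = beta0 + beta1 * x + random.gauss(0.0, 0.5)
        data.append((x, w, y))
    return data

def ols_slope(pairs):
    # simple least-squares slope of y on x
    n = len(pairs)
    mx = sum(p[0] for p in pairs) / n
    my = sum(p[1] for p in pairs) / n
    sxy = sum((p[0] - mx) * (p[1] - my) for p in pairs)
    sxx = sum((p[0] - mx) ** 2 for p in pairs)
    return sxy / sxx

data = simulate(20000)
# calibration step: estimate E[X | W] from a validation subsample with known X
val = data[:2000]
e_x_w1 = sum(x for x, w, _ in val if w == 1) / sum(1 for _, w, _ in val if w == 1)
e_x_w0 = sum(x for x, w, _ in val if w == 0) / sum(1 for _, w, _ in val if w == 0)

naive = ols_slope([(w, y) for _, w, y in data])                                  # attenuated
calibrated = ols_slope([(e_x_w1 if w else e_x_w0, y) for _, w, y in data])       # corrected
```

The naive slope is attenuated toward zero by the misclassification, while the calibrated slope recovers the true coefficient up to sampling noise.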
Quantitative shearography: error reduction by using more than three measurement channels
Energy Technology Data Exchange (ETDEWEB)
Charrett, Tom O. H.; Francis, Daniel; Tatam, Ralph P.
2011-01-10
Shearography is a noncontact optical technique used to measure surface displacement derivatives. Full surface strain characterization can be achieved using shearography configurations employing at least three measurement channels. Each measurement channel is sensitive to a single displacement gradient component defined by its sensitivity vector. A matrix transformation is then required to convert the measured components to the orthogonal displacement gradients required for quantitative strain measurement. This transformation, conventionally performed using three measurement channels, amplifies any errors present in the measurement. This paper investigates the use of additional measurement channels using the results of a computer model and an experimental shearography system. Results are presented showing that the addition of a fourth channel can reduce the errors in the computed orthogonal components by up to 33% and that, by using 10 channels, reductions of around 45% should be possible.
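The error-amplifying matrix transformation described above, and the benefit of redundant channels, can be illustrated with a least-squares sketch. The sensitivity vectors, noise level, and gradient values below are hypothetical (the real channel geometry is instrument-specific); the point is only that an overdetermined system attenuates the propagated noise:

```python
import random

random.seed(7)

def solve3(A, b):
    # Gauss-Jordan elimination for a 3x3 system (sufficient for this sketch)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for i in range(3):
        p = max(range(i, 3), key=lambda r: abs(M[r][i]))
        M[i], M[p] = M[p], M[i]
        for r in range(3):
            if r != i:
                f = M[r][i] / M[i][i]
                M[r] = [a - f * c for a, c in zip(M[r], M[i])]
    return [M[i][3] / M[i][i] for i in range(3)]

def lstsq(S, m):
    # least squares via normal equations: (S^T S) g = S^T m
    StS = [[sum(S[k][i] * S[k][j] for k in range(len(S))) for j in range(3)] for i in range(3)]
    Stm = [sum(S[k][i] * m[k] for k in range(len(S))) for i in range(3)]
    return solve3(StS, Stm)

def rms_error(S, g_true=(1.0, -0.5, 0.25), noise=0.01, trials=2000):
    # RMS error of the recovered orthogonal gradients under channel noise
    err2 = 0.0
    for _ in range(trials):
        m = [sum(si * gi for si, gi in zip(row, g_true)) + random.gauss(0.0, noise) for row in S]
        g = lstsq(S, m)
        err2 += sum((a - b) ** 2 for a, b in zip(g, g_true))
    return (err2 / trials) ** 0.5

# three channels: minimal determined system; a fourth channel adds redundancy
S3 = [[1.0, 0.0, 0.2], [0.0, 1.0, 0.2], [0.3, 0.3, 1.0]]
S4 = S3 + [[0.5, -0.5, 0.6]]
```

With these (made-up) sensitivity vectors, the fourth channel reduces the RMS error of the computed orthogonal components, the same qualitative effect the paper quantifies.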
Error analysis and data forecast in the centre of gravity measurement system for small tractors
Jiang, J.D.; Hoogmoed, W.B.; Yingdi, Z.; Xian, Z.
2011-01-01
A novel centre of gravity measurement system for small tractors, based on the principle of the three-point reaction, is presented. According to the prototype of a small tractor gravity centre test platform, a mathematical multi-body dynamics model was built to analyze the measurement error in the centre…
Can i just check...? Effects of edit check questions on measurement error and survey estimates
Lugtig, Peter; Jäckle, Annette
2014-01-01
Household income is difficult to measure, since it requires the collection of information about all potential income sources for each member of a household. We assess the effects of two types of edit check questions on measurement error and survey estimates: within-wave edit checks use responses to…
Effects of cosine error in irradiance measurements from field ocean color radiometers.
Zibordi, Giuseppe; Bulgarelli, Barbara
2007-08-01
The cosine error of in situ seven-channel radiometers designed to measure the in-air downward irradiance for ocean color applications was investigated in the 412-683 nm spectral range with a sample of three instruments. The interchannel variability of cosine errors showed values generally lower than ±3% below 50 degrees incidence angle with extreme values of approximately 4-20% (absolute) at 50-80 degrees for the channels at 412 and 443 nm. The intrachannel variability, estimated from the standard deviation of the cosine errors of different sensors for each center wavelength, displayed values generally lower than 2% for incidence angles up to 50 degrees and occasionally increasing up to 6% at 80 degrees. Simulations of total downward irradiance measurements, accounting for average angular responses of the investigated radiometers, were made with an accurate radiative transfer code. The estimated errors showed a significant dependence on wavelength, sun zenith, and aerosol optical thickness. For a clear sky maritime atmosphere, these errors displayed values spectrally varying and generally within ±3%, with extreme values of approximately 4-10% (absolute) at 40-80 degrees sun zenith for the channels at 412 and 443 nm. Schemes for minimizing the cosine errors have also been proposed and discussed.
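For the direct-beam component alone, the cosine error at a given sun zenith angle is simply the deviation of the sensor's angular response from the ideal cosine law. A minimal sketch, where the response function is hypothetical rather than one of the characterized radiometers, and diffuse skylight is ignored:

```python
import math

def cosine_error(response, sun_zenith_deg):
    # relative irradiance error for the direct-beam component only
    th = math.radians(sun_zenith_deg)
    return response(th) / math.cos(th) - 1.0

# hypothetical angular response: a deficit growing quadratically with angle
def resp(th):
    return math.cos(th) * (1.0 - 0.05 * (th / (math.pi / 2)) ** 2)
```

With this made-up response, the error vanishes at zenith and grows (here, more negative) toward grazing incidence, the same qualitative angle dependence reported in the abstract.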
Multipath error in range rate measurement by PLL-transponder/GRARR/TDRS
Sohn, S. J.
1970-01-01
Range rate errors due to specular and diffuse multipath are calculated for a tracking and data relay satellite (TDRS) using an S band Goddard range and range rate (GRARR) system modified with a phase-locked loop transponder. Carrier signal processing in the coherent turn-around transponder and the GRARR receiver is taken into account. The root-mean-square (rms) range rate error was computed for the GRARR Doppler extractor and N-cycle count range rate measurement. Curves of worst-case range rate error are presented as a function of grazing angle at the reflection point. At very low grazing angles specular scattering predominates over diffuse scattering as expected, whereas for grazing angles greater than approximately 15 deg, the diffuse multipath predominates. The range rate errors at different low orbit altitudes peaked between 5 and 10 deg grazing angles.
DEFF Research Database (Denmark)
Kjær, Daniel; Hansen, Ole; Østerberg, Frederik Westergaard
2015-01-01
Thin-film sheet resistance measurements at high spatial resolution and on small pads are important and can be realized with micrometer-scale four-point probes. As a result of the small scale the measurements are affected by electrode position errors. We have characterized the electrode position […]-configuration measurements, however, are shown to eliminate the effect of position errors to a level limited either by electrical measurement noise or dynamic position errors. We show that the probe contact points remain almost static on the surface during the measurements (measured on an atomic scale) with a standard deviation of the dynamic position errors of 3 Å. We demonstrate how to experimentally distinguish between different sources of measurement errors, e.g. electrical measurement noise, probe geometry error, as well as static and dynamic electrode position errors.
Shen, Chung-Wei; Chen, Yi-Hau
2015-10-01
Missing observations and covariate measurement error commonly arise in longitudinal data. However, existing methods for model selection in marginal regression analysis of longitudinal data fail to address the potential bias resulting from these issues. To tackle this problem, we propose a new model selection criterion, the Generalized Longitudinal Information Criterion, which is based on an approximately unbiased estimator for the expected quadratic error of a considered marginal model accounting for both data missingness and covariate measurement error. The simulation results reveal that the proposed method performs quite well in the presence of missing data and covariate measurement error. On the contrary, the naive procedures without taking care of such complexity in data may perform quite poorly. The proposed method is applied to data from the Taiwan Longitudinal Study on Aging to assess the relationship of depression with health and social status in the elderly, accommodating measurement error in the covariate as well as missing observations. © The Author 2015. Published by Oxford University Press. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
Covariate measurement error correction methods in mediation analysis with failure time data.
Zhao, Shanshan; Prentice, Ross L
2014-12-01
Mediation analysis is important for understanding the mechanisms whereby one variable causes changes in another. Measurement error could obscure the ability of the potential mediator to explain such changes. This article focuses on developing correction methods for measurement error in the mediator with failure time outcomes. We consider a broad definition of measurement error, including technical error, and error associated with temporal variation. The underlying model with the "true" mediator is assumed to be of the Cox proportional hazards model form. The induced hazard ratio for the observed mediator no longer has a simple form independent of the baseline hazard function, due to the conditioning event. We propose a mean-variance regression calibration approach and a follow-up time regression calibration approach, to approximate the partial likelihood for the induced hazard function. Both methods demonstrate value in assessing mediation effects in simulation studies. These methods are generalized to multiple biomarkers and to both case-cohort and nested case-control sampling designs. We apply these correction methods to the Women's Health Initiative hormone therapy trials to understand the mediation effect of several serum sex hormone measures on the relationship between postmenopausal hormone therapy and breast cancer risk. © 2014, The International Biometric Society.
Zhang, Haoliang; Yang, Jun; Li, Chuang; Yu, Zhangjun; Yang, Zhe; Yuan, Yonggui; Peng, Feng; Li, Hanyang; Hou, Changbo; Zhang, Jianzhong; Yuan, Libo; Xu, Jianming; Zhang, Chao; Yu, Quanfu
2017-08-20
Measurement error for the polarization extinction ratio (PER) of a multifunctional integrated optic chip (MFIOC) utilizing white light interferometry was analyzed. Three influence factors derived from the all-fiber device (or optical circuit) under test were demonstrated to be the main error sources, including: 1) the axis-alignment angle (AA) of the connection point between the extended polarization-maintaining fiber (PMF) and the chip PMF pigtail; 2) the oriented angle (OA) of the linear polarizer; and 3) the birefringence dispersion of PMF and the MFIOC chip. Theoretical calculations and experimental results indicated that by controlling the AA range within 0°±5°, the OA range within 45°±2° and combining with dispersion compensation process, the maximal PER measurement error can be limited to under 1.4 dB, with the 3σ uncertainty of 0.3 dB. The variations of birefringence dispersion effect versus PMF length were also discussed to further confirm the validity of dispersion compensation. A MFIOC with the PER of ∼50 dB was experimentally tested, and the total measurement error was calculated to be ∼0.7 dB, which proved the effectiveness of the proposed error reduction methods. We believe that these methods are able to facilitate high-accuracy PER measurement.
Chen, Hua; Chen, Jihong; Wang, Baorui; Zheng, Yongcheng
2016-10-01
The magnetorheological finishing (MRF) process, based on the dwell time method with constant normal spacing for flexible polishing, introduces normal contour error when fine-polishing complex surfaces such as aspheric surfaces. The normal contour error changes the ribbon's shape and the consistency of the removal characteristics of MRF. Based on continuously scanning, with a laser range finder, the normal spacing between the workpiece and the finder, a novel method is put forward to measure the normal contour errors along the machining track while polishing a complex surface. The normal contour errors were measured dynamically, by which the workpiece's clamping precision, the multi-axis machining NC program, and the dynamic performance of the MRF machine were verified for the security check of the MRF process. A unit for on-machine measurement of the normal contour errors of complex surfaces was designed. Using the measurement unit's results as feedback to adjust the parameters of the feed-forward control and the multi-axis machining, an optimized servo control method is presented to compensate the normal contour errors. An experiment polishing a 180 mm × 180 mm aspherical workpiece of fused silica by MRF was set up to validate the method. The results show that the normal contour error was controlled to less than 10 μm, and the PV value of the polished surface accuracy was improved from 0.95λ to 0.09λ under the same process parameters. The technology in this paper has been applied in the PKC600-Q1 MRF machine developed by the China Academy of Engineering Physics since 2014, and is being used in large national optical engineering projects for processing ultra-precision optical parts.
DEFF Research Database (Denmark)
Ohlrich, Mogens; Henriksen, Eigil; Laugesen, Søren
1997-01-01
Uncertainties in power measurements performed with piezoelectric accelerometers and force transducers are investigated. It is shown that the inherent structural damping of the transducers is responsible for a bias phase error, which typically is in the order of one degree. Fortunately, such bias errors can be largely compensated for by an absolute calibration of the transducers and inverse filtering that results in very small residual errors. Experimental results of this study indicate that these uncertainties will be in the order of one percent with respect to amplitude and two tenths of a degree for the phase. This implies that input power at a single point can be measured to within one dB in practical structures which possess some damping. The uncertainty is increased, however, when sums of measured power contributions from more sources are to be minimised, as is the case in active…
A New Design of the Test Rig to Measure the Transmission Error of Automobile Gearbox
Hou, Yixuan; Zhou, Xiaoqin; He, Xiuzhi; Liu, Zufei; Liu, Qiang
2017-12-01
Noise and vibration affect the performance of automobile gearboxes, and transmission error has been regarded as an important excitation source in gear systems. Most current research has focused on the measurement and analysis of single gear drives, and few investigations of transmission error measurement in a complete gearbox have been conducted. In order to measure transmission error in a complete automobile gearbox, an electrically closed test rig is developed. Based on the principle of modular design, the test rig can be used to test different types of gearbox by adding the necessary modules. A test rig for a front-engine, rear-wheel-drive gearbox is constructed, and static and modal analysis methods are used to verify the performance of a key component.
Directory of Open Access Journals (Sweden)
Wiktor Harmatys
2017-12-01
Full Text Available Five-axis measuring systems are among the most modern inventions in coordinate measuring technology. They are capable of performing measurements using only the rotary pairs present in their kinematic structure. This possibility is very useful because it may significantly reduce total measurement time and cost. However, it was noted that high values of the measured workpiece's form errors may significantly reduce the accuracy of a five-axis measuring system. The relation between these two parameters is investigated in this paper, and possible reasons for the decrease in measurement accuracy are discussed, using the example of measurements of workpieces with form errors ranging from 0.5 to 1.7 mm.
On the impact of covariate measurement error on spatial regression modelling.
Huque, Md Hamidul; Bondell, Howard; Ryan, Louise
2014-12-01
Spatial regression models have grown in popularity in response to rapid advances in GIS (Geographic Information Systems) technology that allows epidemiologists to incorporate geographically indexed data into their studies. However, it turns out that there are some subtle pitfalls in the use of these models. We show that presence of covariate measurement error can lead to significant sensitivity of parameter estimation to the choice of spatial correlation structure. We quantify the effect of measurement error on parameter estimates, and then suggest two different ways to produce consistent estimates. We evaluate the methods through a simulation study. These methods are then applied to data on Ischemic Heart Disease (IHD).
Influence of video compression on the measurement error of the television system
Sotnik, A. V.; Yarishev, S. N.; Korotaev, V. V.
2015-05-01
Video data require a very large memory capacity. Finding the optimal quality/volume ratio for video encoding is a pressing problem, given the urgent need to transfer large amounts of video over various networks. Digital TV signal compression reduces the amount of data used to represent the video stream, effectively reducing the bandwidth required for transmission and storage. When television measuring systems are used, it is important to take into account the uncertainties caused by compression of the video signal. There are many digital compression methods. The aim of the proposed work is to study the influence of video compression on the measurement error in television systems. Measurement error of an object parameter is the main characteristic of television measuring systems: accuracy characterizes the difference between the measured value and the actual parameter value. Both the optical system and the method of processing the received video signal are sources of error in television system measurements. With compression at a constant data stream rate, errors lead to large distortions; with compression at constant quality, errors increase the amount of data required to transmit or record an image frame. The purpose of intra-coding is to reduce the spatial redundancy within a frame (or field) of a television image; this redundancy is caused by the strong correlation between the elements of the image. If a suitable orthogonal transformation can be found, an array of image samples can be converted into a matrix of coefficients that are not correlated with each other. Entropy coding can then be applied to these uncorrelated coefficients to reduce the digital stream. For typical images, a transformation can be chosen such that most of the matrix coefficients are almost zero. Excluding these zero coefficients also…
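The decorrelating orthogonal transformation described above is, in practice, usually the discrete cosine transform. A small pure-Python sketch of the DCT-II on a hypothetical smooth 8×8 block shows how a correlated image patch collapses to a few significant coefficients:

```python
import math

def dct1d(v):
    # DCT-II with orthonormal scaling, so total energy is preserved
    n = len(v)
    out = []
    for k in range(n):
        s = sum(x * math.cos(math.pi * (2 * i + 1) * k / (2 * n)) for i, x in enumerate(v))
        scale = math.sqrt(1.0 / n) if k == 0 else math.sqrt(2.0 / n)
        out.append(scale * s)
    return out

def dct2d(block):
    # separable 2-D transform: rows first, then columns
    rows = [dct1d(r) for r in block]
    cols = [dct1d(list(c)) for c in zip(*rows)]
    return [list(r) for r in zip(*cols)]

# smooth 8x8 "image": a horizontal luminance ramp (highly correlated samples)
block = [[float(16 * j) for j in range(8)] for _ in range(8)]
coeffs = dct2d(block)
# count coefficients that are (numerically) zero after the transform
small = sum(1 for row in coeffs for c in row if abs(c) < 1e-6)
```

For this made-up ramp block, the overwhelming majority of the 64 coefficients are zero, which is exactly what makes entropy coding of the transformed block so effective.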
Quantitative evaluation of statistical errors in small-angle X-ray scattering measurements
Energy Technology Data Exchange (ETDEWEB)
Sedlak, Steffen M.; Bruetzel, Linda K.; Lipfert, Jan (LMU)
2017-03-29
A new model is proposed for the measurement errors incurred in typical small-angle X-ray scattering (SAXS) experiments, which takes into account the setup geometry and physics of the measurement process. The model accurately captures the experimentally determined errors from a large range of synchrotron and in-house anode-based measurements. Its most general formulation gives for the variance of the buffer-subtracted SAXS intensity σ^{2}(
Televantou, Ioulia; Marsh, Herbert W.; Kyriakides, Leonidas; Nagengast, Benjamin; Fletcher, John; Malmberg, Lars-Erik
2015-01-01
The main objective of this study was to quantify the impact of failing to account for measurement error on school compositional effects. Multilevel structural equation models were incorporated to control for measurement error and/or sampling error. Study 1, a large sample of English primary students in Years 1 and 4, revealed a significantly…
Directory of Open Access Journals (Sweden)
Xiaofang Kong
2018-01-01
Full Text Available Inclinometer assembly error is one of the key factors affecting the measurement accuracy of photoelectric measurement systems. In order to solve the problem of the lack of complete attitude information in the measurement system, this paper proposes a new inclinometer assembly error calibration and horizontal image correction method utilizing plumb lines in the scenario. Based on the principle that the plumb line in the scenario should be a vertical line on the image plane when the camera is placed horizontally in the photoelectric system, the direction cosine matrix between the geodetic coordinate system and the inclinometer coordinate system is calculated firstly by three-dimensional coordinate transformation. Then, the homography matrix required for horizontal image correction is obtained, along with the constraint equation satisfying the inclinometer-camera system requirements. Finally, the assembly error of the inclinometer is calibrated by the optimization function. Experimental results show that the inclinometer assembly error can be calibrated only by using the inclination angle information in conjunction with plumb lines in the scenario. Perturbation simulation and practical experiments using MATLAB indicate the feasibility of the proposed method. The inclined image can be horizontally corrected by the homography matrix obtained during the calculation of the inclinometer assembly error, as well.
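The horizontal-correction step can be sketched for the pure-rotation case: given tilt angles (as an inclinometer would supply), the homography H = K·Rᵀ·K⁻¹ maps the tilted image back to a level view, so a projected plumb line becomes vertical on the image plane. The intrinsics, tilt angles, and plumb-line coordinates below are hypothetical:

```python
import math

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)] for i in range(3)]

def transpose(A):
    return [list(r) for r in zip(*A)]

def rot_xy(pitch, roll):
    # camera tilt as rotations about the x (pitch) and y (roll) axes
    cp, sp, cr, sr = math.cos(pitch), math.sin(pitch), math.cos(roll), math.sin(roll)
    Rx = [[1, 0, 0], [0, cp, -sp], [0, sp, cp]]
    Ry = [[cr, 0, sr], [0, 1, 0], [-sr, 0, cr]]
    return matmul(Ry, Rx)

def project(K, R, X):
    # pinhole projection of a world point through a rotated camera
    p = [sum(R[i][k] * X[k] for k in range(3)) for i in range(3)]
    q = [sum(K[i][k] * p[k] for k in range(3)) for i in range(3)]
    return (q[0] / q[2], q[1] / q[2])

def apply_h(H, uv):
    v = [H[i][0] * uv[0] + H[i][1] * uv[1] + H[i][2] for i in range(3)]
    return (v[0] / v[2], v[1] / v[2])

f, cx, cy = 800.0, 320.0, 240.0                    # hypothetical intrinsics
K = [[f, 0, cx], [0, f, cy], [0, 0, 1]]
Kinv = [[1 / f, 0, -cx / f], [0, 1 / f, -cy / f], [0, 0, 1]]

R = rot_xy(math.radians(5), math.radians(3))       # tilted camera pose
H = matmul(K, matmul(transpose(R), Kinv))          # correction homography

# a plumb line: world-vertical points in front of the camera
line = [(0.5, y, 4.0) for y in (-1.0, 0.0, 1.0)]
tilted = [project(K, R, X) for X in line]          # slanted in the tilted image
level = [apply_h(H, uv) for uv in tilted]          # vertical after correction
```

In the tilted image the plumb-line points have different horizontal coordinates; after applying H they share one x coordinate, i.e. the line is vertical, which is the condition the paper's calibration exploits.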
Unaccounted source of systematic errors in measurements of the Newtonian gravitational constant G
Energy Technology Data Exchange (ETDEWEB)
DeSalvo, Riccardo, E-mail: Riccardo.desalvo@gmail.com [California State University, Northridge, 18111 Nordhoff Street, Northridge, CA 91330-8332 (United States); University of Sannio, Corso Garibaldi 107, Benevento 82100 (Italy)
2015-06-26
Many precision measurements of G have produced a spread of results incompatible with measurement errors. Clearly an unknown source of systematic errors is at work. It is proposed here that most of the discrepancies derive from subtle deviations from Hooke's law, caused by avalanches of entangled dislocations. The idea is supported by deviations from linearity reported by experimenters measuring G, similar to what is observed, on a larger scale, in low-frequency spring oscillators. Some mitigating experimental apparatus modifications are suggested. - Highlights: • Source of discrepancies in universal gravitational constant G measurements. • Collective motion of dislocations results in breakdown of Hooke's law. • Self-organized criticality produces non-predictive shifts of the equilibrium point. • A new dissipation mechanism, different from loss-angle and viscous models, is necessary. • The mitigation measures proposed may bring coherence to the measurements of G.
Directory of Open Access Journals (Sweden)
Shi Qiang Liu
2016-01-01
Full Text Available Error compensation of micromachined inertial measurement units (MIMU) is essential in practical applications. This paper presents a new compensation method using neural-network-based identification for MIMU, which capably solves the universal problems of cross-coupling, misalignment, eccentricity, and other deterministic errors existing in a three-dimensional integrated system. Using a neural network to model a complex multivariate and nonlinear coupling system, the errors can be readily compensated through a comprehensive calibration. In this paper, we also present a thermal-gas MIMU based on thermal expansion, which measures three-axis angular rates and three-axis accelerations using only three thermal-gas inertial sensors, each of which capably measures one-axis angular rate and one-axis acceleration simultaneously in one chip. The developed MIMU (100 × 100 × 100 mm³) possesses the advantages of simple structure, high shock resistance, and large measuring ranges (three-axis angular rates of ±4000°/s and three-axis accelerations of ±10 g) compared with conventional MIMU, due to using a gas medium instead of a mechanical proof mass as the key moving and sensing element. However, the gas MIMU suffers from cross-coupling effects, which corrupt the system accuracy. The proposed compensation method is, therefore, applied to compensate the system errors of the MIMU. Experiments validate the effectiveness of the compensation, and the measurement errors of three-axis angular rates and three-axis accelerations are reduced to less than 1% and 3% of uncompensated errors in the rotation range of ±600°/s and the acceleration range of ±1 g, respectively.
Liu, Shi Qiang; Zhu, Rong
2016-01-29
Error compensation of micromachined inertial measurement units (MIMU) is essential in practical applications. This paper presents a new compensation method using neural-network-based identification for MIMU, which capably solves the universal problems of cross-coupling, misalignment, eccentricity, and other deterministic errors existing in a three-dimensional integrated system. By using a neural network to model a complex multivariate and nonlinear coupling system, the errors can be readily compensated through a comprehensive calibration. In this paper, we also present a thermal-gas MIMU based on thermal expansion, which measures three-axis angular rates and three-axis accelerations using only three thermal-gas inertial sensors, each of which capably measures one-axis angular rate and one-axis acceleration simultaneously in one chip. The developed MIMU (100 × 100 × 100 mm³) possesses the advantages of simple structure, high shock resistance, and large measuring ranges (three-axis angular rates of ±4000°/s and three-axis accelerations of ±10 g) compared with conventional MIMU, owing to its use of a gas medium instead of a mechanical proof mass as the key moving and sensing element. However, the gas MIMU suffers from cross-coupling effects, which corrupt the system accuracy. The proposed compensation method is, therefore, applied to compensate the system errors of the MIMU. Experiments validate the effectiveness of the compensation, and the measurement errors of three-axis angular rates and three-axis accelerations are reduced to less than 1% and 3% of the uncompensated errors in the rotation range of ±600°/s and the acceleration range of ±1 g, respectively.
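The calibration idea behind this compensation — learn an inverse mapping from the coupled raw outputs back to the true inputs using calibration data — can be sketched in a deliberately simplified form. The sketch below substitutes a linear least-squares model for the paper's neural network; the cross-coupling matrix `C` and the calibration data are invented for illustration:

```python
import random

# True inputs x (e.g., one-axis rate and acceleration) are corrupted by a
# cross-coupling matrix C: y = C x. We estimate the inverse mapping from
# calibration pairs (y, x) by least squares, then apply it to new readings.
C = [[1.00, 0.12], [0.08, 1.00]]          # hypothetical cross-coupling

def mix(x):
    return [sum(C[i][j] * x[j] for j in range(2)) for i in range(2)]

random.seed(0)
cal_x = [[random.uniform(-1, 1), random.uniform(-1, 1)] for _ in range(200)]
cal_y = [mix(x) for x in cal_x]

def solve2(a, b):                          # solve a 2x2 system a @ m = b
    det = a[0][0] * a[1][1] - a[0][1] * a[1][0]
    return [(b[0] * a[1][1] - b[1] * a[0][1]) / det,
            (b[1] * a[0][0] - b[0] * a[1][0]) / det]

# Normal equations: row i of M minimizes sum ||M_i . y - x_i||^2
G = [[sum(y[i] * y[j] for y in cal_y) for j in range(2)] for i in range(2)]
M = []
for i in range(2):
    rhs = [sum(y[j] * x[i] for y, x in zip(cal_y, cal_x)) for j in range(2)]
    M.append(solve2(G, rhs))

def compensate(y):
    return [sum(M[i][j] * y[j] for j in range(2)) for i in range(2)]

x_true = [0.5, -0.3]
raw = mix(x_true)
corr = compensate(raw)
raw_err = max(abs(raw[i] - x_true[i]) for i in range(2))
corr_err = max(abs(corr[i] - x_true[i]) for i in range(2))
print(raw_err, corr_err)                   # corrected error is far smaller
```

With a real sensor the coupling is nonlinear, which is why the authors train a neural network on comprehensive calibration data rather than fitting a single matrix.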
Yang, Jie; Liu, Qingquan; Dai, Wei
2017-02-01
To improve the air temperature observation accuracy, a low measurement error temperature sensor is proposed. A computational fluid dynamics (CFD) method is implemented to obtain temperature errors under various environmental conditions. Then, a temperature error correction equation is obtained by fitting the CFD results using a genetic algorithm method. The low measurement error temperature sensor, a naturally ventilated radiation shield, a thermometer screen, and an aspirated temperature measurement platform are characterized in the same environment to conduct the intercomparison. The aspirated platform serves as the air temperature reference. The mean temperature errors of the naturally ventilated radiation shield and the thermometer screen are 0.74 °C and 0.37 °C, respectively. In contrast, the mean temperature error of the low measurement error temperature sensor is 0.11 °C. The mean absolute error and the root mean square error between the corrected results and the measured results are 0.008 °C and 0.01 °C, respectively. The correction equation allows the temperature error of the low measurement error temperature sensor to be reduced by approximately 93.8%. The low measurement error temperature sensor proposed in this research may help provide relatively accurate air temperature measurements.
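The correction step — fit an equation to a table of modelled temperature errors, then subtract the predicted error from each reading — can be sketched as follows. The error form, its coefficients, and the data are invented stand-ins, and ordinary least squares replaces the paper's genetic-algorithm fit (sufficient here because the sketch model is linear in its parameters):

```python
# Hypothetical correction model: radiation error grows with irradiance S
# and falls with wind speed U, e ~ c0 + c1*S + c2*S/(1+U).
data = []  # (S in W/m^2, U in m/s, error in deg C) -- synthetic "CFD" table
for S in (200, 400, 600, 800, 1000):
    for U in (1, 2, 4, 8):
        e = 0.0001 * S + 0.0008 * S / (1 + U)
        data.append((S, U, e))

def basis(S, U):
    return [1.0, S, S / (1 + U)]

# Build and solve the 3x3 normal equations A c = b by Gaussian elimination.
A = [[0.0] * 3 for _ in range(3)]
b = [0.0] * 3
for S, U, e in data:
    phi = basis(S, U)
    for i in range(3):
        b[i] += phi[i] * e
        for j in range(3):
            A[i][j] += phi[i] * phi[j]

for k in range(3):                     # naive elimination, fine for 3x3
    for i in range(k + 1, 3):
        f = A[i][k] / A[k][k]
        for j in range(k, 3):
            A[i][j] -= f * A[k][j]
        b[i] -= f * b[k]
c = [0.0] * 3
for i in (2, 1, 0):
    c[i] = (b[i] - sum(A[i][j] * c[j] for j in range(i + 1, 3))) / A[i][i]

def corrected(T_meas, S, U):
    """Subtract the fitted radiation error from a raw reading."""
    return T_meas - sum(ci * pi for ci, pi in zip(c, basis(S, U)))

print(corrected(25.30, 600, 2))        # removes the modelled 0.22 deg C error
```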
A New Algorithm of Compensation of the Time Interval Error in GPS-Based Measurements
Directory of Open Access Journals (Sweden)
Jonny Paul ZAVALA DE PAZ
2010-01-01
Full Text Available In this paper we present a new algorithm for compensation of the time interval error (TIE), applying an unbiased p-step predictive finite impulse response (FIR) filter to the signal of Global Positioning System (GPS)-based measurements. The practical use of the GPS system involves various problems inherent in the signal. Two of the most important are the TIE and the instantaneous loss of the GPS signal for a small interval of time, called "holdover". The holdover error is a problem that at present has no solution, and systems exhibiting this type of error produce erroneous synchronization of the GPS signal. Basic holdover algorithms are discussed along with their most critical properties. The efficiency of the predictive filter during holdover is demonstrated in applications to GPS-based measurements of the TIE.
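For a clock drifting linearly, an unbiased p-step predictive FIR filter over the last N samples coincides with a least-squares straight-line fit extrapolated p steps ahead (higher-degree polynomial models generalize this). A minimal sketch of that ramp case, with invented TIE values:

```python
def predict_tie(samples, p):
    """Extrapolate p steps ahead by a least-squares line fit over the last
    N samples -- the unbiased FIR predictor for a ramp (linear-drift) model."""
    N = len(samples)
    xs = range(N)
    mx = (N - 1) / 2
    my = sum(samples) / N
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, samples))
    slope = sxy / sxx
    return my + slope * (N - 1 + p - mx)

# During holdover the GPS-derived TIE is unavailable; continue the predicted
# trend instead of the (missing) measurement.
tie = [0.1 * n + 2.0 for n in range(10)]      # ideal linear drift, ns
print(predict_tie(tie, 3))                     # -> 3.2, continues the ramp
```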
Nano-metrology: The art of measuring X-ray mirrors with slope errors <100 nrad.
Alcock, Simon G; Nistea, Ioana; Sawhney, Kawal
2016-05-01
We present a comprehensive investigation of the systematic and random errors of the nano-metrology instruments used to characterize synchrotron X-ray optics at Diamond Light Source. With experimental skill and careful analysis, we show that these instruments used in combination are capable of measuring state-of-the-art X-ray mirrors. Examples are provided of how Diamond metrology data have helped to achieve slope errors of <100 nrad for optical systems installed on synchrotron beamlines, including: iterative correction of substrates using ion beam figuring and optimal clamping of monochromator grating blanks in their holders. Simulations demonstrate how random noise from the Diamond-NOM's autocollimator adds into the overall measured value of the mirror's slope error, and thus predict how many averaged scans are required to accurately characterize different grades of mirror.
Nano-metrology: The art of measuring X-ray mirrors with slope errors <100 nrad
Energy Technology Data Exchange (ETDEWEB)
Alcock, Simon G., E-mail: simon.alcock@diamond.ac.uk; Nistea, Ioana; Sawhney, Kawal [Diamond Light Source Ltd., Harwell Science and Innovation Campus, Didcot, Oxfordshire OX11 0DE (United Kingdom)
2016-05-15
We present a comprehensive investigation of the systematic and random errors of the nano-metrology instruments used to characterize synchrotron X-ray optics at Diamond Light Source. With experimental skill and careful analysis, we show that these instruments used in combination are capable of measuring state-of-the-art X-ray mirrors. Examples are provided of how Diamond metrology data have helped to achieve slope errors of <100 nrad for optical systems installed on synchrotron beamlines, including: iterative correction of substrates using ion beam figuring and optimal clamping of monochromator grating blanks in their holders. Simulations demonstrate how random noise from the Diamond-NOM’s autocollimator adds into the overall measured value of the mirror’s slope error, and thus predict how many averaged scans are required to accurately characterize different grades of mirror.
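The averaging argument can be made concrete: if the autocollimator contributes independent random noise of standard deviation σ per scan, the noise on an average of n scans is σ/√n, so the number of scans needed to keep the noise below a chosen fraction of the target slope error follows directly. The numbers below are illustrative, not Diamond-NOM specifications:

```python
import math

def scans_required(sigma_noise_nrad, target_nrad, fraction=0.1):
    """Smallest n such that the averaged noise, sigma/sqrt(n), stays below
    fraction * target slope error."""
    return math.ceil((sigma_noise_nrad / (fraction * target_nrad)) ** 2)

# e.g. hypothetical 50 nrad single-scan noise against a 100 nrad mirror spec:
print(scans_required(50, 100))   # -> 25 averaged scans
```

Higher-grade mirrors (smaller target slope errors) therefore need quadratically more averaged scans, which is the trade-off the simulations above quantify.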
Interpolation techniques to reduce error in measurement of toe clearance during obstacle avoidance.
Heijnen, Michel J H; Muir, Brittney C; Rietdyk, Shirley
2012-01-03
Foot and toe clearance (TC) are used regularly to describe locomotor control for both clinical and basic research. However, accuracy of TC during obstacle crossing can be compromised by typical sample frequencies, which do not capture the frame when the foot is over the obstacle due to high limb velocities. The purpose of this study was to decrease the error of TC measures by increasing the spatial resolution of the toe trajectory with interpolation. Five young subjects stepped over an obstacle in the middle of an 8 m walkway. Position data were captured at 600 Hz as a gold standard signal (GS-600-Hz). The GS-600-Hz signal was downsampled to 60 Hz (DS-60-Hz). The DS-60-Hz was then interpolated by either upsampling or an algorithm. Error was calculated as the absolute difference in TC between GS-600-Hz and each of the remaining signals, for both the leading limb and the trailing limb. All interpolation methods reduced the TC error to a similar extent. Interpolation reduced the median error of trail TC from 5.4 to 1.1 mm; the maximum error was reduced from 23.4 to 4.2 mm (16.6-3.8%). The median lead TC error improved from 1.6 to 0.5 mm, and the maximum error improved from 9.1 to 1.8 mm (5.3-0.9%). Therefore, interpolating a 60 Hz signal is a valid technique to decrease the error of TC during obstacle crossing. Published by Elsevier Ltd.
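The reason interpolation helps can be sketched directly: toe clearance is the toe height at the instant the toe passes the obstacle edge, and at 60 Hz no frame lands exactly on that instant. The toy trajectory below (invented, not the study's data) shows linear interpolation between the bracketing frames recovering the crossing height far better than the nearest frame:

```python
# Toe moves forward at constant speed while its height follows a parabola;
# clearance is the height when horizontal position equals the obstacle edge.
def x(t): return 1.2 * t                        # horizontal position, m
def z(t): return 0.30 - 2.0 * (t - 0.35) ** 2   # vertical position, m

x_obs = 0.53                                    # obstacle location, m
t_true = x_obs / 1.2
tc_true = z(t_true)                             # gold-standard clearance

ts = [n / 60 for n in range(40)]                # 60 Hz samples
xs, zs = [x(t) for t in ts], [z(t) for t in ts]

# Nearest-frame estimate (no interpolation)
k = min(range(len(ts)), key=lambda i: abs(xs[i] - x_obs))
tc_nearest = zs[k]

# Linear interpolation between the frames bracketing the crossing
i = next(i for i in range(len(xs) - 1) if xs[i] <= x_obs <= xs[i + 1])
w = (x_obs - xs[i]) / (xs[i + 1] - xs[i])
tc_interp = zs[i] + w * (zs[i + 1] - zs[i])

print(abs(tc_nearest - tc_true), abs(tc_interp - tc_true))  # m
```

The same idea extends to the spline upsampling and algorithmic interpolation compared in the study.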
Wide-aperture laser beam measurement using transmission diffuser: errors modeling
Matsak, Ivan S.
2015-06-01
Instrumental errors in measuring the diameter of wide-aperture laser beams were modeled in order to design a measurement setup and justify its metrological characteristics. The modeled setup is based on a CCD camera and a transmission diffuser. The method is appropriate for precision measurement of large laser beam widths from 10 mm up to 1000 mm; such beams cannot be measured with methods based on a slit, pinhole, knife edge, or direct CCD camera measurement. The method is suitable for continuous and pulsed laser irradiation. However, the transmission diffuser method lacks the metrological justification required in the field of wide-aperture beam forming system verification. Given that no standard for wide-aperture flat-top beams is available, modeling is the preferred way to provide basic reference points for developing a measurement system. Modeling was conducted in MathCAD. A super-Lorentz distribution with shape parameter 6-12 was used as the beam model. Theoretical evaluation showed that the key parameters influencing the error are: relative beam size, spatial non-uniformity of the diffuser, lens distortion, physical vignetting, CCD spatial resolution, and effective camera ADC resolution. Errors were modeled for the 90%-of-power beam diameter criterion. The 12th-order super-Lorentz distribution was the primary model, because it precisely matches the experimental distribution at the output of the test beam forming system, although other orders were also used. Analytic expressions were obtained by analyzing the modeling results for each influencing factor. An error of <1% was shown to be attainable through an appropriate choice of the expression parameters, based on commercially available components of the setup. The method can provide down to 0.1% error when calibration procedures and multiple measurements are used.
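The 90%-of-power diameter criterion mentioned above can be sketched on a sampled profile: grow a centered window until it contains 90% of the total power. The order-12 super-Lorentzian profile below mimics the flat-top beams discussed, with an invented beam radius (a CCD image would be treated the same way via encircled energy):

```python
# 1D super-Lorentzian beam profile, order 12: I(x) = 1 / (1 + (|x|/w)^12)
N, L = 2001, 40.0                        # samples, window size in mm
dx = L / (N - 1)
xs = [-L / 2 + dx * i for i in range(N)]
w = 10.0                                 # hypothetical beam radius, mm
profile = [1.0 / (1.0 + (abs(x) / w) ** 12) for x in xs]

total = sum(profile)
c = N // 2                               # beam centre index
power = profile[c]
k = 0
while power < 0.9 * total:               # grow a centred window to 90% power
    k += 1
    power += profile[c - k] + profile[c + k]
d90 = 2 * k * dx
print(d90)                               # slightly under 2*w for this flat-top
```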
[Errors in medicine. Causes, impact and measures to improve patient safety].
Waeschle, R M; Bauer, M; Schmidt, C E
2015-09-01
The guarantee of quality of care and patient safety is of major importance in hospitals, even though increased economic pressure and work intensification are ubiquitously present. Nevertheless, adverse events still occur in 3-4 % of hospital stays, and of these 25-50 % are estimated to be avoidable. The identification of possible causes of error and the development of measures for the prevention of medical errors are essential for patient safety. The implementation and continuous development of a constructive culture of error tolerance are fundamental. The origins of errors can be differentiated into systemic latent and individual active causes, and components of both categories are typically involved when an error occurs. Systemic causes are, for example, outdated structural environments, lack of clinical standards and low personnel density. These causes arise far away from the patient, e.g. in management decisions, and can remain unrecognized for a long time. Individual causes involve, e.g. confirmation bias, fixation error and prospective memory failure. These causes have a direct impact on patient care and can result in immediate injury to patients. Stress, unclear information, complex systems and a lack of professional experience can promote individual causes. Awareness of possible causes of error is a fundamental precondition for establishing appropriate countermeasures. Error prevention should include actions directly affecting the causes of error and includes checklists and standard operating procedures (SOP) to avoid fixation and prospective memory failure, and team resource management to improve communication and the generation of collective mental models. Critical incident reporting systems (CIRS) provide the opportunity to learn from previous incidents without resulting in injury to patients. Information technology (IT) support systems, such as the computerized physician order entry system, assist in the prevention of medication errors by providing
Jin, Tao; Ji, Hudong; Hou, Wenmei; Le, Yanfen; Shen, Lu
2017-01-20
This paper presents an enhanced differential plane mirror interferometer with high resolution for measuring straightness. Two sets of space symmetrical beams are used to travel through the measurement and reference arms of the straightness interferometer, which contains three specific optical devices: a Koster prism, a wedge prism assembly, and a wedge mirror assembly. Changes in the optical path in the interferometer arms caused by straightness are differential and converted into phase shift through a particular interferometer system. The interferometric beams have a completely common path and space symmetrical measurement structure. The crosstalk of the Abbe error caused by pitch, yaw, and roll angle is avoided. The dead path error is minimized, which greatly enhances the stability and accuracy of the measurement. A measurement resolution of 17.5 nm is achieved. The experimental results fit well with the theoretical analysis.
Error model of geomagnetic-field measurement and extended Kalman-filter based compensation method.
Ge, Zhilei; Liu, Suyun; Li, Guopeng; Huang, Yan; Wang, Yanni
2017-01-01
The real-time, accurate measurement of the geomagnetic field is the foundation for achieving high-precision geomagnetic navigation. The existing geomagnetic-field measurement models are essentially simplified models that cannot accurately describe the sources of measurement error. On the basis of a systematic analysis of the sources of geomagnetic-field measurement error, this paper builds a complete measurement model, into which the previously unconsidered geomagnetic daily variation field is introduced. This paper proposes an extended Kalman-filter based compensation method, which allows a large amount of measurement data to be used in estimating parameters to obtain the optimal solution in the statistical sense. The experimental results showed that the compensated strength of the geomagnetic field remained close to the real value and the measurement error was basically controlled within 5 nT. In addition, this compensation method has strong applicability due to its easy data collection and its removal of the dependence on a high-precision measurement instrument.
Using Computation Curriculum-Based Measurement Probes for Error Pattern Analysis
Dennis, Minyi Shih; Calhoon, Mary Beth; Olson, Christopher L.; Williams, Cara
2014-01-01
This article describes how "curriculum-based measurement--computation" (CBM-C) mathematics probes can be used in combination with "error pattern analysis" (EPA) to pinpoint difficulties in basic computation skills for students who struggle with learning mathematics. Both assessment procedures provide ongoing assessment data…
Three different models of tipping bucket rain gauges (TBRs), viz. HS-TB3 (Hydrological Services Pty Ltd), ISCO-674 (Isco, Inc.) and TR-525 (Texas Electronics, Inc.), were calibrated in the lab to quantify measurement errors across a range of rainfall intensities (5 mm·h⁻¹ to 250 mm·h⁻¹) and three di...
DEFF Research Database (Denmark)
Picchini, Umberto; Forman, Julie Lyng
2016-01-01
a nonlinear stochastic differential equation model observed with correlated measurement errors and an application to protein folding modelling. An approximate Bayesian computation (ABC)-MCMC algorithm is suggested to allow inference for model parameters within reasonable time constraints. The ABC algorithm...
CSIR Research Space (South Africa)
Kruger, OA
2000-01-01
Full Text Available , eccentricity and pyramidal errors of the measuring faces. Deviations in the flatness of angle surfaces have been held responsible for the lack of agreement in angle comparisons. An investigation has been carried out using a small-angle generator...
Correlation Attenuation Due to Measurement Error: A New Approach Using the Bootstrap Procedure
Padilla, Miguel A.; Veprinsky, Anna
2012-01-01
Issues with correlation attenuation due to measurement error are well documented. More than a century ago, Spearman proposed a correction for attenuation. However, this correction has seen very little use since it can potentially inflate the true correlation beyond one. In addition, very little confidence interval (CI) research has been done for…
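Spearman's correction divides the observed correlation by the square root of the product of the two reliabilities, and a bootstrap over the observed pairs gives a percentile CI for the corrected value. The sketch below uses synthetic data with known reliabilities and caps the corrected estimate at 1 — one simple way to handle the inflation problem noted above:

```python
import math, random

def corrected_r(r_xy, rel_x, rel_y):
    """Spearman's correction for attenuation."""
    return r_xy / math.sqrt(rel_x * rel_y)

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / (sx * sy)

# Synthetic data: true scores correlate at 0.8; observed scores add noise.
random.seed(1)
true_x = [random.gauss(0, 1) for _ in range(300)]
true_y = [0.8 * x + 0.6 * random.gauss(0, 1) for x in true_x]
obs_x = [x + random.gauss(0, 0.5) for x in true_x]
obs_y = [y + random.gauss(0, 0.5) for y in true_y]

# Reliability = var(true) / var(observed) = 1 / 1.25 here (assumed known).
rel_x = rel_y = 1 / 1.25
boot = []
idx = range(len(obs_x))
for _ in range(1000):                    # bootstrap the corrected correlation
    s = [random.choice(idx) for _ in idx]
    r = pearson([obs_x[i] for i in s], [obs_y[i] for i in s])
    boot.append(min(1.0, corrected_r(r, rel_x, rel_y)))  # cap at 1
boot.sort()
print(boot[25], boot[974])               # 95% percentile CI, centred near 0.8
```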
Quantum Non-Demolition Singleshot Parity Measurements for a Proposed Quantum Error Correction Scheme
Petrenko, Andrei; Sun, Luyan; Leghtas, Zaki; Vlastakis, Brian; Kirchmair, Gerhard; Sliwa, Katrina; Narla, Anirudh; Hatridge, Michael; Shankar, Shyam; Blumoff, Jacob; Frunzio, Luigi; Mirrahimi, Mazyar; Devoret, Michel; Schoelkopf, Robert
2014-03-01
In order to be effective, a quantum error correction (QEC) scheme requires measurements of an error syndrome to be quantum non-demolition (QND) and fast compared to the rate at which errors occur. Employing a superconducting circuit QED architecture, the parity of a superposition of coherent states in a cavity, or cat states, is the error syndrome for a recently proposed QEC scheme. We demonstrate the tracking of parity of cat states in a cavity and observe individual jumps of parity in real time with single-shot measurements that are much faster than the lifetime of the cavity. The projective nature of these measurements is evident when inspecting individual single-shot traces, yet when averaging the traces as an ensemble the average parity decays as predicted for a coherent state. We find our protocol to be 99.8% QND per measurement, and our sensitivity to parity jumps to be very high at 96% for an average photon number n = 1 in the cavity (85% for n = 4). Such levels of performance can already increase the lifetime of a quantum bit of information, and thereby present a promising step towards realizing a viable QEC scheme.
Kooij, Y.E. van; Fink, A.; Nijhuis-Van der Sanden, M.W.; Speksnijder, C.M.
2017-01-01
STUDY DESIGN: Systematic review. PURPOSE OF THE STUDY: The purpose was to review the available literature for evidence on the reliability and measurement error of protractor-based goniometry assessment of the finger joints. METHODS: Databases were searched for articles with key words "hand,"
van Kooij, Yara E.; Fink, Alexandra; Nijhuis-van der Sanden, Maria W.; Speksnijder, Caroline M.|info:eu-repo/dai/nl/304821535
2017-01-01
Study Design: Systematic review. Purpose of the Study: The purpose was to review the available literature for evidence on the reliability and measurement error of protractor-based goniometry assessment of the finger joints. Methods: Databases were searched for articles with key words "hand,"
Error Bounds Due to Random Noise in Cylindrical Near-Field Measurements
Romeu Robert, Jordi; Jofre Roca, Lluís
1991-01-01
The far-field errors due to near-field random noise are statistically bounded when performing the cylindrical near-field to far-field transform. In this communication, the far-field noise variance is expressed as a function of the measurement parameters and the near-field noise variance. Peer Reviewed
Tan Sisman, Gulcin; Aksu, Meral
2016-01-01
The purpose of the present study was to portray students' misconceptions and errors while solving conceptually and procedurally oriented tasks involving length, area, and volume measurement. The data were collected from 445 sixth grade students attending public primary schools in Ankara, Türkiye via a test composed of 16 constructed-response…
Zeka, Ariana; Schwartz, Joel
2004-12-01
Misclassification of exposure usually leads to biased estimates of exposure-response associations. This is particularly an issue in cases with multiple correlated exposures, where the direction of bias is uncertain. It is necessary to address this problem when considering associations with important public health implications such as the one between mortality and air pollution, because biased exposure effects can result in biased risk assessments. The National Morbidity and Mortality Air Pollution Study (NMMAPS) recently reported results from an assessment of multiple pollutants and daily mortality in 90 U.S. cities. That study assessed the independent associations of the selected pollutants with daily mortality in two-pollutant models. Excess mortality was associated with particulate matter of aerodynamic diameter less than or equal to 10 μm (PM10), but not with other pollutants, in these two-pollutant models. The extent of bias due to measurement error in these reported results is unclear. Schwartz and Coull recently proposed a method that deals with multiple exposures and, under certain conditions, is resistant to measurement error. We applied this method to reanalyze the data from NMMAPS. For PM10, we found results similar to those reported previously from NMMAPS (0.24% increase in deaths per 10-μg/m3 increase in PM10). In addition, we report an important effect of carbon monoxide that had not been observed previously.
Self-test web-based pure-tone audiometry: validity evaluation and measurement error analysis.
Masalski, Marcin; Kręcicki, Tomasz
2013-04-12
Potential methods of application of self-administered Web-based pure-tone audiometry conducted at home on a PC with a sound card and ordinary headphones depend on the value of measurement error in such tests. The aim of this research was to determine the measurement error of the hearing threshold determined in the way described above and to identify and analyze factors influencing its value. The evaluation of the hearing threshold was made in three series: (1) tests on a clinical audiometer, (2) self-tests done on a specially calibrated computer under the supervision of an audiologist, and (3) self-tests conducted at home. The research was carried out on the group of 51 participants selected from patients of an audiology outpatient clinic. From the group of 51 patients examined in the first two series, the third series was self-administered at home by 37 subjects (73%). The average difference between the value of the hearing threshold determined in series 1 and in series 2 was -1.54 dB with a standard deviation of 7.88 dB and a Pearson correlation coefficient of .90. Between the first and third series, these values were -1.35 dB ± 10.66 dB and .84, respectively. In series 3, the standard deviation was most influenced by the error connected with the procedure of hearing threshold identification (6.64 dB), the calibration error (6.19 dB), and additionally at the frequency of 250 Hz by the frequency nonlinearity error (7.28 dB). The obtained results confirm the possibility of applying Web-based pure-tone audiometry in screening tests. In the future, modifications of the method leading to the decrease in measurement error can broaden the scope of Web-based pure-tone audiometry application.
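Treating the error components reported for the at-home series as independent, their combined contribution follows from addition in quadrature; the total need not reproduce the overall standard deviation exactly, since smaller unlisted terms also contribute:

```python
import math

# Independent error components add in quadrature. Values are the three
# components reported above for the at-home series at 250 Hz.
components = {
    "threshold identification": 6.64,   # dB
    "calibration": 6.19,                # dB
    "frequency nonlinearity": 7.28,     # dB
}
combined = math.sqrt(sum(v ** 2 for v in components.values()))
print(round(combined, 2))               # -> 11.64 dB, no single term dominates
```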
A new method to reduce truncation errors in partial spherical near-field measurements
DEFF Research Database (Denmark)
Cano-Facila, F J; Pivnenko, Sergey
2011-01-01
A new and effective method for reduction of truncation errors in partial spherical near-field (SNF) measurements is proposed. The method is useful when measuring electrically large antennas, where the measurement time with the classical SNF technique is prohibitively long and an acquisition over a reduced angular sector is preferred; in that case, a truncation error is present in the calculated far-field pattern within this sector. The method is based on the Gerchberg-Papoulis algorithm used to extrapolate functions, and it is able to extend the valid region of the calculated far-field pattern up to the whole forward hemisphere. To verify the effectiveness of the method, several examples are presented using both simulated and measured truncated near-field data.
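The Gerchberg-Papoulis idea — alternate between enforcing the band limit (the antenna's finite spatial bandwidth) and restoring the measured samples — can be sketched in a 1D analogue. The signal, band limit, and sector below are invented, and the truncated sector is kept modest, since the extrapolation degrades as the unknown region grows:

```python
import cmath, random

N, B = 64, 2                             # samples; band limit |k| <= B
known = set(range(56))                   # indices of the "measured" sector

# Precomputed DFT twiddle table: W[k][n] = exp(-2j*pi*k*n/N)
W = [[cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N)] for k in range(N)]

def dft(x):
    return [sum(x[n] * W[k][n] for n in range(N)) for k in range(N)]

def idft(X):
    return [sum(X[k] * W[k][n].conjugate() for k in range(N)) / N for n in range(N)]

# Band-limited "true" pattern, known only on the measured sector
random.seed(3)
coef = {k: complex(random.uniform(-1, 1), random.uniform(-1, 1))
        for k in range(1, B + 1)}
true = [1.0 + 2 * sum((coef[k] * W[k][n].conjugate()).real
                      for k in range(1, B + 1)) for n in range(N)]

g = [complex(true[n]) if n in known else 0j for n in range(N)]
for _ in range(150):                     # Gerchberg-Papoulis iterations
    G = dft(g)
    for k in range(N):
        if min(k, N - k) > B:            # project onto the band limit
            G[k] = 0j
    g = idft(G)
    for n in known:                      # restore the measured samples
        g[n] = complex(true[n])

err0 = max(abs(true[n]) for n in range(N) if n not in known)
err = max(abs(g[n].real - true[n]) for n in range(N) if n not in known)
print(err0, err)                         # truncation error shrinks markedly
```

The real algorithm works on spherical-wave expansions of the measured field rather than a 1D DFT, but the alternating-projection structure is the same.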
Influenza infection rates, measurement errors and the interpretation of paired serology.
Directory of Open Access Journals (Sweden)
Simon Cauchemez
Full Text Available Serological studies are the gold standard method to estimate influenza infection attack rates (ARs) in human populations. In a common protocol, blood samples are collected before and after the epidemic in a cohort of individuals, and a rise in haemagglutination-inhibition (HI) antibody titers during the epidemic is considered as a marker of infection. Because of inherent measurement errors, a 2-fold rise is usually considered as insufficient evidence for infection and seroconversion is therefore typically defined as a 4-fold rise or more. Here, we revisit this widely accepted 70-year-old criterion. We develop a Markov chain Monte Carlo data augmentation model to quantify measurement errors and reconstruct the distribution of latent true serological status in a Vietnamese 3-year serological cohort, in which replicate measurements were available. We estimate that the 1-sided probability of a 2-fold error is 9.3% (95% Credible Interval, CI: 3.3%, 17.6%) when antibody titer is below 10 but is 20.2% (95% CI: 15.9%, 24.0%) otherwise. After correction for measurement errors, we find that the proportion of individuals with 2-fold rises in antibody titers was too large to be explained by measurement errors alone. Estimates of ARs vary greatly depending on whether those individuals are included in the definition of the infected population. A simulation study shows that our method is unbiased. The 4-fold rise case definition is relevant when aiming at a specific diagnostic for individual cases, but the justification is less obvious when the objective is to estimate ARs. In particular, it may lead to large underestimates of ARs. Determining which biological phenomenon contributes most to 2-fold rises in antibody titers is essential to assess bias with the traditional case definition and offer improved estimates of influenza ARs.
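Why a 2-fold rise is weak evidence while a 4-fold rise is much stronger can be sketched with a simulation: titers are read on a log2 dilution scale, and each measurement is allowed to err by one dilution step with some probability. The error rate and titer below are invented round numbers, simpler than the paper's estimated titer-dependent rates:

```python
import random

random.seed(7)
P_ERR = 0.15          # hypothetical 1-sided probability of a one-step error

def measure(true_log2):
    """Observed titer on the log2 dilution scale, with a +/- one-step error."""
    u = random.random()
    if u < P_ERR:
        return true_log2 + 1
    if u < 2 * P_ERR:
        return true_log2 - 1
    return true_log2

trials = 100000
rise2 = rise4 = 0
for _ in range(trials):
    pre, post = measure(4), measure(4)   # uninfected: no true change
    if post - pre >= 1:                   # apparent >= 2-fold rise
        rise2 += 1
    if post - pre >= 2:                   # apparent >= 4-fold rise
        rise4 += 1
print(rise2 / trials, rise4 / trials)
```

Under this toy model measurement error alone produces apparent 2-fold rises in roughly a quarter of uninfected pairs but 4-fold rises in only a few percent, which is the rationale behind the traditional criterion the paper re-examines.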
Gilchrist, Michael A; Shah, Premal; Zaretzki, Russell
2009-12-01
Codon usage bias (CUB) has been documented across a wide range of taxa and is the subject of numerous studies. While most explanations of CUB invoke some type of natural selection, most measures of CUB adaptation are heuristically defined. In contrast, we present a novel and mechanistic method for defining and contextualizing CUB adaptation to reduce the cost of nonsense errors during protein translation. Using a model of protein translation, we develop a general approach for measuring the protein production cost in the face of nonsense errors of a given allele as well as the mean and variance of these costs across its coding synonyms. We then use these results to define the nonsense error adaptation index (NAI) of the allele or a contiguous subset thereof. Conceptually, the NAI value of an allele is a relative measure of its elevation on a specific and well-defined adaptive landscape. To illustrate its utility, we calculate NAI values for the entire coding sequence and across a set of nonoverlapping windows for each gene in the Saccharomyces cerevisiae S288c genome. Our results provide clear evidence of adaptation to reduce the cost of nonsense errors and increasing adaptation with codon position and expression. The magnitude and nature of this adaptation are also largely consistent with simulation results in which nonsense errors are the only selective force driving CUB evolution. Because NAI is derived from mechanistic models, it is both easier to interpret and more amenable to future refinement than other commonly used measures of codon bias. Further, our approach can also be used as a starting point for developing other mechanistically derived measures of adaptation such as for translational accuracy.
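The cost model's core quantity — the probability that a ribosome reaches the stop codon when every codon carries some nonsense-error probability — is a simple product over codons. The rates below are invented round numbers, not estimates from the paper:

```python
# Probability of producing a complete protein when codon i has nonsense-error
# (premature termination) probability e_i; alleles built from lower-error
# synonyms waste less ribosome effort, especially late in the message.
def completion_prob(errors):
    p = 1.0
    for e in errors:
        p *= 1.0 - e
    return p

fast_codons = [0.0004] * 300          # low nonsense-error synonyms
slow_codons = [0.0020] * 300          # high nonsense-error synonyms
p_fast = completion_prob(fast_codons)
p_slow = completion_prob(slow_codons)
print(p_fast, p_slow)                 # ~0.887 vs ~0.548

# Expected number of initiation attempts per complete protein = 1 / p
print(1 / p_fast, 1 / p_slow)
```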
Bradshaw, Corey J A; Sims, David W; Hays, Graeme C
2007-03-01
Recent advances in telemetry technology have created a wealth of tracking data available for many animal species moving over spatial scales from tens of meters to tens of thousands of kilometers. Increasingly, such data sets are being used for quantitative movement analyses aimed at extracting fundamental biological signals such as optimal searching behavior and scale-dependent foraging decisions. We show here that the location error inherent in various tracking technologies reduces the ability to detect patterns of behavior within movements. Our analyses endeavored to set out a series of initial ground rules for ecologists to help ensure that sampling noise is not misinterpreted as a real biological signal. We simulated animal movement tracks using specialized random walks known as Lévy flights at three spatial scales of investigation: 100-km, 10-km, and 1-km maximum daily step lengths. The locations generated in the simulations were then blurred using known error distributions associated with commonly applied tracking methods: the Global Positioning System (GPS), Argos polar-orbiting satellites, and light-level geolocation. Deviations from the idealized Lévy flight pattern were assessed for each track after incrementing levels of location error were applied at each spatial scale, with additional assessments of the effect of error on scale-dependent movement patterns measured using fractal mean dimension and first-passage time (FPT) analyses. The accuracy of parameter estimation (Lévy mu, fractal mean D, and variance in FPT) declined precipitously at threshold errors relative to each spatial scale. At 100-km maximum daily step lengths, error standard deviations of ≥10 km seriously eroded the biological patterns evident in the simulated tracks, with analogous thresholds at the 10-km and 1-km scales (error SD ≥1.3 km and 0.07 km, respectively). Temporal subsampling of the simulated tracks maintained some elements of the biological signals depending on
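The simulation design described above — generate an idealized Lévy flight, then blur each location with a known error distribution — can be sketched as follows; the distortion of the observed step lengths grows with the error SD, which is what erodes the detectable pattern:

```python
import math, random

# Levy flight: step lengths drawn from a Pareto law, u^(-1/(mu-1)), mu = 2.
random.seed(11)
MU, N = 2.0, 2000
track = [(0.0, 0.0)]
for _ in range(N):
    step = (1 - random.random()) ** (-1 / (MU - 1))  # heavy-tailed step >= 1
    ang = random.uniform(0, 2 * math.pi)
    px, py = track[-1]
    track.append((px + step * math.cos(ang), py + step * math.sin(ang)))

def step_lengths(tr):
    return [math.hypot(bx - ax, by - ay) for (ax, ay), (bx, by) in zip(tr, tr[1:])]

def blur(tr, sd):
    """Add isotropic Gaussian location error, as with GPS/Argos/geolocation."""
    return [(px + random.gauss(0, sd), py + random.gauss(0, sd)) for px, py in tr]

true_steps = step_lengths(track)
mads = []
for sd in (0.0, 1.0, 10.0):              # increasing location-error SD
    obs_steps = step_lengths(blur(track, sd))
    mads.append(sum(abs(o - t) for o, t in zip(obs_steps, true_steps)) / N)
print(mads)      # step-length distortion grows with the error SD
```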
Harshman, Jordan; Yezierski, Ellen
2016-01-01
Determining the error of measurement is a necessity for researchers engaged in bench chemistry, chemistry education research (CER), and a multitude of other fields. Discussions have occurred regarding what the construct of measurement error entails and how best to measure it, but the critiques of traditional measures have yielded few alternatives.…
Directory of Open Access Journals (Sweden)
Guanbin Gao
2017-01-01
Full Text Available The articulated arm coordinate measuring machine (AACMM) is a specific robotic structural instrument, which uses the D-H method for kinematic modeling and error compensation. However, it is difficult for existing error compensation models to describe the various factors that affect the accuracy of the AACMM. In this paper, a modeling and error compensation method for the AACMM is proposed based on BP neural networks. According to the available measurements, the poses of the AACMM are used as the input, and the coordinates of the probe are used as the output, of the neural network. To avoid tedious training and improve the training efficiency and prediction accuracy, a data acquisition strategy is developed according to the actual measurement behavior in the joint space. A neural network model is proposed and analyzed using data generated via the Monte-Carlo method in simulations. The structure and parameter settings of the neural network are optimized to improve the prediction accuracy and training speed. Experimental studies have been conducted to verify the proposed algorithm with neural network compensation, showing that 97% of the error of the AACMM can be eliminated after compensation. These experimental results reveal the effectiveness of the proposed modeling and compensation method for the AACMM.
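The D-H forward model that underlies both the classical compensation and the simulated data generation is a product of per-joint homogeneous transforms mapping joint readings to probe coordinates. A minimal sketch for a toy two-link planar arm (parameters invented, not an actual AACMM):

```python
import math

def dh(theta, d, a, alpha):
    """Classic Denavit-Hartenberg homogeneous transform for one joint."""
    ct, st = math.cos(theta), math.sin(theta)
    ca, sa = math.cos(alpha), math.sin(alpha)
    return [[ct, -st * ca,  st * sa, a * ct],
            [st,  ct * ca, -ct * sa, a * st],
            [0.0,      sa,       ca,      d],
            [0.0,     0.0,      0.0,    1.0]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def probe_position(joints, params):
    """Chain the per-joint transforms; the probe sits at the last origin."""
    T = [[float(i == j) for j in range(4)] for i in range(4)]
    for theta, (d, a, alpha) in zip(joints, params):
        T = matmul(T, dh(theta, d, a, alpha))
    return T[0][3], T[1][3], T[2][3]

links = [(0.0, 0.3, 0.0), (0.0, 0.2, 0.0)]   # (d, a, alpha) per joint, metres
t1, t2 = math.radians(30), math.radians(45)
x, y, z = probe_position([t1, t2], links)
print(x, y, z)
# Planar check: x = a1*cos(t1) + a2*cos(t1+t2), y likewise with sines, z = 0
```

The neural-network approach learns residual corrections on top of exactly this kind of pose-to-probe mapping.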
Flanders, W Dana; Kirkland, Kimberly H; Shelton, Brian G
2014-10-01
Outbreaks of Legionnaires' disease require environmental testing of water samples from potentially implicated building water systems to identify the source of exposure. A previous study reports a large impact on Legionella sample results due to shipping and delays in sample processing. Specifically, this same study, without accounting for measurement error, reports more than half of shipped samples tested had Legionella levels that arbitrarily changed up or down by one or more logs, and the authors attribute this result to shipping time. Accordingly, we conducted a study to determine the effects of sample holding/shipping time on Legionella sample results while taking into account measurement error, which has previously not been addressed. We analyzed 159 samples, each split into 16 aliquots, of which one-half (8) were processed promptly after collection. The remaining half (8) were processed the following day to assess the impact of holding/shipping time. A total of 2544 samples were analyzed including replicates. After accounting for inherent measurement error, we found that the effect of holding time on observed Legionella counts was small and should have no practical impact on interpretation of results. Holding samples increased the root mean squared error by only about 3-8%. Notably, for only one of 159 samples did the average of the 8 replicate counts change by 1 log. Thus, our findings do not support the hypothesis of frequent, significant (≥1 log10 unit) Legionella colony count changes due to holding. Copyright © 2014 The Authors. Published by Elsevier Ltd. All rights reserved.
Effects of Spectral Error in Efficiency Measurements of GaInAs-Based Concentrator Solar Cells
Energy Technology Data Exchange (ETDEWEB)
Osterwald, C. R.; Wanlass, M. W.; Moriarty, T.; Steiner, M. A.; Emery, K. A.
2014-03-01
This technical report documents a particular error in efficiency measurements of triple-absorber concentrator solar cells caused by incorrect spectral irradiance -- specifically, one that occurs when the irradiance from unfiltered, pulsed xenon solar simulators into the GaInAs bottom subcell is too high. For cells designed so that the light-generated photocurrents in the three subcells are nearly equal, this condition can cause a large increase in the measured fill factor, which, in turn, causes a significant artificial increase in the efficiency. The error is readily apparent when the data under concentration are compared to measurements with correctly balanced photocurrents, and manifests itself as discontinuities in plots of fill factor and efficiency versus concentration ratio. In this work, we simulate the magnitudes and effects of this error with a device-level model of two concentrator cell designs, and demonstrate how a new Spectrolab, Inc., Model 460 Tunable-High Intensity Pulsed Solar Simulator (T-HIPSS) can mitigate the error.
Hao, Qun; Li, Tengfei; Hu, Yao
2018-01-01
Surface parameters describe the shape characteristics of an aspheric surface; they mainly include the vertex radius of curvature (VROC) and the conic constant (CC). The VROC affects basic properties, such as the focal length of an aspheric surface, while the CC is the basis for classifying aspheric surfaces. The deviations of the two parameters are defined as surface parameter error (SPE). Precisely measuring SPE is critical for manufacturing and aligning aspheric surfaces. Generally, the SPE of an aspheric surface is measured directly by curvature fitting on absolute profile measurement data from contact or non-contact testing, and most interferometry-based methods adopt null compensators or null computer-generated holograms to measure SPE. To our knowledge, there is no effective way to measure the SPE of a high-order aspheric surface with non-null interferometry. In this paper, based on the theory of slope asphericity and the best compensation distance (BCD) established in our previous work, we propose an SPE measurement method for high-order aspheric surfaces in a partial compensation interferometry (PCI) system. In the procedure, we first establish a system of two equations by utilizing the SPE-caused BCD change and surface shape change. We can then obtain the VROC error and CC error simultaneously in the PCI system by solving the equations. Simulations verify the method, and the results show a high relative accuracy.
Out-of-squareness measurement on ultra-precision machine based on the error separation
Lai, Tao; Liu, Junfeng; Chen, Shanyong; Guan, Chaoliang; Tie, Guipeng; Liao, Quan
2017-06-01
Traditional methods of measuring the out-of-squareness of an ultra-precision motion stage have many limitations, especially errors caused by inaccuracy of standard specimens such as the bare L-square and optical pentaprism. In general, the accuracy of an out-of-squareness measurement is limited by the accuracy of the interior angles of the standard specimen. Based on error separation, this paper presents a novel method of out-of-squareness measurement with a polygon artifact. The angles bounded by the guideways and the edges of the polygon artifact are measured, and the out-of-squareness is extracted using the principle that the interior angles of a convex polygon sum to (n-2)π. An out-of-squareness measurement experiment was carried out on a profilometer using an optical square brick with interior-angle out-of-squareness of about 1140.2 arcsec. The results show that the measured out-of-squareness of the profilometer's three axes is not affected by the internal angles. The method can be applied to measure machine error more accurately and to calibrate the out-of-squareness of a machine.
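The angle-sum closure behind this separation is easy to demonstrate numerically. In the simplified model below (an assumption for illustration, not the paper's full procedure), the machine's out-of-squareness s enters every measured interior angle of a square artifact, while the artifact's own angle errors must sum to zero because the interior angles of a convex n-gon sum to (n-2)π, so s is recovered from the angle sum regardless of the artifact errors.

```python
import math

arcsec = math.pi / (180 * 3600)

s_true = 12.0 * arcsec                 # machine out-of-squareness (unknown)
e = [3.1, -1.4, 0.9, -2.6]             # artifact interior-angle errors, arcsec
assert abs(sum(e)) < 1e-9              # closure: artifact errors sum to zero

# Each measured interior angle = true artifact angle + machine error.
measured = [math.pi / 2 + ei * arcsec + s_true for ei in e]

# The (n-2)*pi angle-sum separates s from the artifact errors:
s_est = (sum(measured) - (4 - 2) * math.pi) / 4
print(s_est / arcsec)                  # recovers ~12.0, independent of e_i
```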
Bryson, Mitch; Ferrari, Renata; Figueira, Will; Pizarro, Oscar; Madin, Josh; Williams, Stefan; Byrne, Maria
2017-08-01
Habitat structural complexity is one of the most important factors in determining the makeup of biological communities. Recent advances in structure-from-motion and photogrammetry have resulted in a proliferation of 3D digital representations of habitats from which structural complexity can be measured. Little attention has been paid to quantifying the measurement errors associated with these techniques, including the variability of results under different surveying and environmental conditions. Such errors have the potential to confound studies that compare habitat complexity over space and time. This study evaluated the accuracy, precision, and bias in measurements of marine habitat structural complexity derived from structure-from-motion and photogrammetric measurements using repeated surveys of artificial reefs (with known structure) as well as natural coral reefs. We quantified measurement errors as a function of survey image coverage, actual surface rugosity, and the morphological community composition of the habitat-forming organisms (reef corals). Our results indicated that measurements could be biased by up to 7.5% of the total observed ranges of structural complexity based on the environmental conditions present during any particular survey. Positive relationships were found between measurement errors and actual complexity, and the strength of these relationships was increased when coral morphology and abundance were also used as predictors. The numerous advantages of structure-from-motion and photogrammetry techniques for quantifying and investigating marine habitats will mean that they are likely to replace traditional measurement techniques (e.g., chain-and-tape). To this end, our results have important implications for data collection and the interpretation of measurements when examining changes in habitat complexity using structure-from-motion and photogrammetry.
van Kooij, Yara E; Fink, Alexandra; Nijhuis-van der Sanden, Maria W; Speksnijder, Caroline M
Systematic review PURPOSE OF THE STUDY: The purpose was to review the available literature for evidence on the reliability and measurement error of protractor-based goniometry assessment of the finger joints. Databases were searched for articles with key words "hand," "goniometry," "reliability," and derivatives of these terms. Assessment of the methodological quality was carried out using the Consensus-Based Standards for the Selection of Health Measurement Instruments checklist. Two independent reviewers performed a best evidence synthesis based on criteria proposed by Terwee et al (2007). Fifteen articles were included. One article was of fair methodological quality, and 14 articles were of poor methodological quality. An acceptable level for reliability (intraclass correlation coefficient > 0.70 or Pearson's correlation > 0.80) was reported in 1 study of fair methodological quality and in 8 articles of low methodological quality. Because the minimal important change was not calculated in the articles, there was an unknown level of evidence for the measurement error. Further research with adequate sample sizes should focus on reference outcomes for different patient groups. For valid therapy evaluation, it is important to know if the change in range of motion reflects a real change of the patient or if this is due to the measurement error of the goniometer. Until now, there is insufficient evidence to establish this cut-off point (the smallest detectable change). Following the Consensus-Based Standards for the Selection of Health Measurement Instruments criteria, there was limited level of evidence for an acceptable reliability in the dorsal measurement method and unknown level of evidence for the measurement error. 2a. Copyright © 2017 Hanley & Belfus. Published by Elsevier Inc. All rights reserved.
Topping, David J.; Wright, Scott A.
2016-05-04
these sites. In addition, detailed, step-by-step procedures are presented for the general river application of the method.Quantification of errors in sediment-transport measurements made using this acoustical method is essential if the measurements are to be used effectively, for example, to evaluate uncertainty in long-term sediment loads and budgets. Several types of error analyses are presented to evaluate (1) the stability of acoustical calibrations over time, (2) the effect of neglecting backscatter from silt and clay, (3) the bias arising from changes in sand grain size, (4) the time-varying error in the method, and (5) the influence of nonrandom processes on error. Results indicate that (1) acoustical calibrations can be stable for long durations (multiple years), (2) neglecting backscatter from silt and clay can result in unacceptably high bias, (3) two frequencies are likely required to obtain sand-concentration measurements that are unbiased by changes in grain size, depending on site-specific conditions and acoustic frequency, (4) relative errors in silt-and-clay- and sand-concentration measurements decrease substantially as concentration increases, and (5) nonrandom errors may arise from slow changes in the spatial structure of suspended sediment that affect the relations between concentration in the acoustically ensonified part of the cross section and concentration in the entire river cross section. Taken together, the error analyses indicate that the two-frequency method produces unbiased measurements of suspended-silt-and-clay and sand concentration, with errors that are similar to, or larger than, those associated with conventional sampling methods.
Testing capability indices for one-sided processes with measurement errors
Directory of Open Access Journals (Sweden)
Grau D.
2013-01-01
Full Text Available In the manufacturing industry, many product characteristics have one-sided tolerances. The process capability indices Cpu(u, v) and Cpl(u, v) can be used to measure process performance. Most research work related to capability indices assumes no gauge measurement error. This assumption insufficiently reflects real situations even when advanced measuring instruments are used. In this paper we show that using a critical value without taking these errors into account severely underestimates the α-risk, which leads to less accurate capability testing. In order to improve the results we suggest the use of an adjusted critical value, and we give a Maple program to compute it. An example from a polymer granulate factory is presented to illustrate this approach.
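The direction of the bias is easy to see: gauge error inflates the observed spread, which deflates the one-sided index Cpu = (USL - μ)/(3σ). A minimal numeric sketch with illustrative values (not the paper's adjusted critical values):

```python
import math

# Illustrative values, not from the paper.
USL, mu = 10.0, 9.1
sigma_process = 0.2          # true process standard deviation
sigma_gauge = 0.1            # gauge (measurement) standard deviation

cpu_true = (USL - mu) / (3 * sigma_process)

# Observed data mix process and gauge variation:
# sigma_obs^2 = sigma_process^2 + sigma_gauge^2
sigma_obs = math.hypot(sigma_process, sigma_gauge)
cpu_observed = (USL - mu) / (3 * sigma_obs)

print(round(cpu_true, 3), round(cpu_observed, 3))  # 1.5 vs ~1.342
```

Testing capability against the observed index without correcting for the gauge contribution therefore understates the process's true capability.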
Cost-Sensitive Feature Selection of Numeric Data with Measurement Errors
Directory of Open Access Journals (Sweden)
Hong Zhao
2013-01-01
Full Text Available Feature selection is an essential process in data mining applications since it reduces a model’s complexity. However, feature selection with various types of costs is still a new research topic. In this paper, we study the cost-sensitive feature selection problem of numeric data with measurement errors. The major contributions of this paper are fourfold. First, a new data model is built to address test costs and misclassification costs as well as error boundaries. It is distinguished from the existing models mainly on the error boundaries. Second, a covering-based rough set model with normal distribution measurement errors is constructed. With this model, coverings are constructed from data rather than assigned by users. Third, a new cost-sensitive feature selection problem is defined on this model. It is more realistic than the existing feature selection problems. Fourth, both backtracking and heuristic algorithms are proposed to deal with the new problem. Experimental results show the efficiency of the pruning techniques for the backtracking algorithm and the effectiveness of the heuristic algorithm. This study is a step toward realistic applications of the cost-sensitive learning.
Error Correction Method for Wind Speed Measured with Doppler Wind LIDAR at Low Altitude
Liu, Bingyi; Feng, Changzhong; Liu, Zhishen
2014-11-01
For the purpose of obtaining global vertical wind profiles, the Atmospheric Dynamics Mission Aeolus of European Space Agency (ESA), carrying the first spaceborne Doppler lidar ALADIN (Atmospheric LAser Doppler INstrument), is going to be launched in 2015. DLR (German Aerospace Center) developed the A2D (ALADIN Airborne Demonstrator) for the prelaunch validation. A ground-based wind lidar for wind profile and wind field scanning measurement developed by Ocean University of China is going to be used for the ground-based validation after the launch of Aeolus. In order to provide validation data with higher accuracy, an error correction method is investigated to improve the accuracy of low altitude wind data measured with Doppler lidar based on iodine absorption filter. The error due to nonlinear wind sensitivity is corrected, and the method for merging atmospheric return signal is improved. The correction method is validated by synchronous wind measurements with lidar and radiosonde. The results show that the accuracy of wind data measured with Doppler lidar at low altitude can be improved by the proposed error correction method.
Carroll, Raymond J.
2011-03-01
In many applications we can expect that, or are interested to know if, a density function or a regression curve satisfies some specific shape constraints. For example, when the explanatory variable, X, represents the value taken by a treatment or dosage, the conditional mean of the response, Y , is often anticipated to be a monotone function of X. Indeed, if this regression mean is not monotone (in the appropriate direction) then the medical or commercial value of the treatment is likely to be significantly curtailed, at least for values of X that lie beyond the point at which monotonicity fails. In the case of a density, common shape constraints include log-concavity and unimodality. If we can correctly guess the shape of a curve, then nonparametric estimators can be improved by taking this information into account. Addressing such problems requires a method for testing the hypothesis that the curve of interest satisfies a shape constraint, and, if the conclusion of the test is positive, a technique for estimating the curve subject to the constraint. Nonparametric methodology for solving these problems already exists, but only in cases where the covariates are observed precisely. However in many problems, data can only be observed with measurement errors, and the methods employed in the error-free case typically do not carry over to this error context. In this paper we develop a novel approach to hypothesis testing and function estimation under shape constraints, which is valid in the context of measurement errors. Our method is based on tilting an estimator of the density or the regression mean until it satisfies the shape constraint, and we take as our test statistic the distance through which it is tilted. Bootstrap methods are used to calibrate the test. The constrained curve estimators that we develop are also based on tilting, and in that context our work has points of contact with methodology in the error-free case.
Zhi, Z.; Tan, J. B.; Huang, X. D.; Chen, F. F.
2006-10-01
In order to resolve the conflict between error detection capability, transmission rate and system resources in the data transmission of ultra-precision measurement, an algorithm for high-speed CRC code generation is put forward in this paper. Theoretical formulae for calculating the CRC code of 16-bit segmented data are obtained by derivation. On the basis of the 16-bit segmented data formulae, an optimized algorithm for 32-bit segmented data CRC coding is obtained, which resolves the conflict between memory occupancy and coding speed. Data coding experiments were conducted successfully using a high-speed ARM embedded system. The results show that this method features high error-detecting ability, high speed and low consumption of system resources, which improves the real-time performance and reliability of measurement data communication.
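The memory-for-speed trade-off behind such high-speed CRC coding can be illustrated with a generic byte-wise, table-driven CRC-16 (CCITT polynomial 0x1021): precomputing a 256-entry table replaces eight bit-shifts per byte with one lookup. This is the standard textbook form, not the paper's segmented 16-bit/32-bit formulae.

```python
# Table-driven CRC-16/CCITT-FALSE (poly 0x1021, init 0xFFFF, MSB-first).
POLY = 0x1021

TABLE = []
for byte in range(256):
    crc = byte << 8
    for _ in range(8):
        crc = ((crc << 1) ^ POLY if crc & 0x8000 else crc << 1) & 0xFFFF
    TABLE.append(crc)

def crc16(data: bytes, init: int = 0xFFFF) -> int:
    crc = init
    for b in data:
        # One table lookup per byte instead of eight bit-level steps.
        crc = ((crc << 8) & 0xFFFF) ^ TABLE[((crc >> 8) ^ b) & 0xFF]
    return crc

print(hex(crc16(b"123456789")))  # 0x29b1, the standard check value
```

Larger segment widths (the paper's 32-bit case) push the same trade-off further: bigger or multiple tables in exchange for fewer iterations per word.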
Measurement Error Affects Risk Estimates for Recruitment to the Hudson River Stock of Striped Bass
Directory of Open Access Journals (Sweden)
Dennis J. Dunning
2002-01-01
Full Text Available We examined the consequences of ignoring the distinction between measurement error and natural variability in an assessment of risk to the Hudson River stock of striped bass posed by entrainment at the Bowline Point, Indian Point, and Roseton power plants. Risk was defined as the probability that recruitment of age-1+ striped bass would decline by 80% or more, relative to the equilibrium value, at least once during the time periods examined (1, 5, 10, and 15 years). Measurement error, estimated using two abundance indices from independent beach seine surveys conducted on the Hudson River, accounted for 50% of the variability in one index and 56% of the variability in the other. If a measurement error of 50% was ignored and all of the variability in abundance was attributed to natural causes, the risk that recruitment of age-1+ striped bass would decline by 80% or more after 15 years was 0.308 at the current level of entrainment mortality (11%). However, the risk decreased almost tenfold (to 0.032) if a measurement error of 50% was considered. The change in risk attributable to decreasing the entrainment mortality rate from 11% to 0% was very small (0.009) and similar in magnitude to the change in risk associated with an action proposed in Amendment #5 to the Interstate Fishery Management Plan for Atlantic striped bass (0.006): an increase in the instantaneous fishing mortality rate from 0.33 to 0.4. The proposed increase in fishing mortality was not considered an adverse environmental impact, which suggests that potentially costly efforts to reduce entrainment mortality on the Hudson River stock of striped bass are not warranted.
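The size of the effect can be reproduced schematically. In the sketch below (illustrative numbers, not the study's stochastic model), attributing half of the observed log-abundance variance to measurement error halves the natural variance that drives real declines, and the estimated risk of an 80% drop falls accordingly.

```python
import math
import random

random.seed(1)
var_observed = 0.40          # total variance of a log10 recruitment index
share_meas = 0.50            # fraction attributable to measurement error
threshold = math.log10(0.2)  # an 80% decline on the log10 scale (~ -0.699)

def risk(var, n=100_000):
    """Monte-Carlo probability that a N(0, var) log10 change crosses the
    decline threshold."""
    sd = math.sqrt(var)
    return sum(random.gauss(0, sd) <= threshold for _ in range(n)) / n

r_naive = risk(var_observed)                       # all variance "natural"
r_corrected = risk(var_observed * (1 - share_meas))  # measurement error removed
print(r_naive, r_corrected)
```

As in the study, the corrected risk is several-fold smaller than the naive one, because apparent extreme declines are partly sampling noise.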
Zajkowski, Konrad
This paper presents an algorithm for solving N equations in N unknowns. The algorithm makes it possible to determine the solution in situations where the coefficients Ai in the equations are burdened with measurement errors. For some values of Ai (where i = 1,…, N), the coefficient matrix of the input equations has no inverse; in this case, it is impossible to determine the solution of the equations by classical methods.
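One standard way to return a usable solution when the error-laden coefficient matrix becomes singular is the least-squares pseudo-inverse. This generic sketch illustrates the failure mode and the fallback, not the paper's specific algorithm.

```python
import numpy as np

# Measurement errors in the coefficients have made the rows dependent,
# so the classical inverse does not exist.
A = np.array([[1.0, 2.0],
              [2.0, 4.0]])
b = np.array([3.0, 6.0])

# np.linalg.solve(A, b) would raise LinAlgError here (singular matrix);
# the Moore-Penrose pseudo-inverse returns the minimum-norm least-squares
# solution instead.
x = np.linalg.pinv(A) @ b
print(x, A @ x)   # A @ x still reproduces b
```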
Measurements of Gun Tube Motion and Muzzle Pointing Error of Main Battle Tanks
Directory of Open Access Journals (Sweden)
Peter L. McCall
2001-01-01
Full Text Available Beginning in 1990, the US Army Aberdeen Test Center (ATC) began testing a prototype cannon mounted in a non-armored turret fitted to an M1A1 Abrams tank chassis. The cannon design incorporated a longer gun tube as a means to increase projectile velocity. A significant increase in projectile impact dispersion was measured early in the test program. Through investigative efforts, the cause of the error was linked to the increased dynamic bending or flexure of the longer tube observed while the vehicle was moving. Research and investigative work was conducted through a collaborative effort with the US Army Research Laboratory, Benet Laboratory, Project Manager – Tank Main Armament Systems, US Army Research and Engineering Center, and Cadillac Gage Textron Inc. New test methods, instrumentation, data analysis procedures, and stabilization control design resulted through this series of investigations into the dynamic tube flexure error source. Through this joint research, improvements in tank fire control design have been developed to improve delivery accuracy. This paper discusses the instrumentation implemented, methods applied, and analysis procedures used to characterize the tube flexure during dynamic tests of a main battle tank and the relationship between gun pointing error and muzzle pointing error.
Dagne, Getachew A.; Huang, Yangxin
2013-01-01
Common problems to many longitudinal HIV/AIDS, cancer, vaccine and environmental exposure studies are the presence of a lower limit of quantification of an outcome with skewness and time-varying covariates with measurement errors. There has been relatively little work published simultaneously dealing with these features of longitudinal data. In particular, left-censored data falling below a limit of detection (LOD) may sometimes have a proportion larger than expected under a usually assumed log-normal distribution. In such cases, alternative models which can account for a high proportion of censored data should be considered. In this article, we present an extension of the Tobit model that incorporates a mixture of true undetectable observations and those values from a skew-normal distribution for an outcome with possible left-censoring and skewness, and covariates with substantial measurement error. To quantify the covariate process, we offer a flexible nonparametric mixed-effects model within the Tobit framework. A Bayesian modeling approach is used to assess the simultaneous impact of left-censoring, skewness and measurement error in covariates on inference. The proposed methods are illustrated using real data from an AIDS clinical study. PMID:23553914
Degradation data analysis based on a generalized Wiener process subject to measurement error
Li, Junxing; Wang, Zhihua; Zhang, Yongbo; Fu, Huimin; Liu, Chengrui; Krishnaswamy, Sridhar
2017-09-01
Wiener processes have received considerable attention in degradation modeling over the last two decades. In this paper, we propose a generalized Wiener process degradation model that takes unit-to-unit variation, time-correlated structure and measurement error into consideration simultaneously. The constructed methodology subsumes a series of models studied in the literature as limiting cases. A simple method is given to determine the transformed time scale forms of the Wiener process degradation model. Model parameters can then be estimated by maximum likelihood estimation (MLE). The cumulative distribution function (CDF) and the probability density function (PDF) of the Wiener process with measurement errors are given based on the concept of the first hitting time (FHT). The percentiles of performance degradation (PD) and failure time distribution (FTD) are also obtained. Finally, a comprehensive simulation study is carried out to demonstrate the necessity of incorporating measurement errors in the degradation model and the efficiency of the proposed model. Two illustrative real applications involving the degradation of carbon-film resistors and the wear of sliding metal are given. The comparative results show that the constructed approach can derive a reasonable result and an enhanced inference precision.
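A minimal simulation of the model's ingredients (all parameter values are assumptions for illustration): a drift on a transformed time scale Λ(t), Brownian variation on that same transformed clock, and additive measurement error on top of the latent degradation path.

```python
import numpy as np

rng = np.random.default_rng(42)

eta, sigma_B, sigma_eps = 1.5, 0.3, 0.1   # drift, diffusion, meas. error sd
Lam = lambda t: t ** 0.8                  # transformed time scale Lambda(t)

t = np.linspace(0, 10, 201)
lam = Lam(t)

# Brownian motion on the transformed clock: increments ~ N(0, dLambda).
dB = rng.normal(0, np.sqrt(np.diff(lam, prepend=0.0)))
X = eta * lam + sigma_B * np.cumsum(dB)   # latent degradation path X(t)
Y = X + rng.normal(0, sigma_eps, t.size)  # observed path with meas. error

# Under this model, var(Y(t)) = sigma_B^2 * Lambda(t) + sigma_eps^2,
# so ignoring sigma_eps inflates the apparent diffusion.
print(Y[-1], eta * lam[-1])
```

Failure-time quantities (the FHT-based CDF/PDF in the abstract) would then be evaluated for the latent X(t) crossing a threshold, not the noisy Y(t).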
Regression calibration method for correcting measurement-error bias in nutritional epidemiology.
Spiegelman, D; McDermott, A; Rosner, B
1997-04-01
Regression calibration is a statistical method for adjusting point and interval estimates of effect obtained from regression models commonly used in epidemiology for bias due to measurement error in assessing nutrients or other variables. Previous work developed regression calibration for use in estimating odds ratios from logistic regression. We extend this here to estimating incidence rate ratios from Cox proportional hazards models and regression slopes from linear-regression models. Regression calibration is appropriate when a gold standard is available in a validation study and a linear measurement error with constant variance applies or when replicate measurements are available in a reliability study and linear random within-person error can be assumed. In this paper, the method is illustrated by correction of rate ratios describing the relations between the incidence of breast cancer and dietary intakes of vitamin A, alcohol, and total energy in the Nurses' Health Study. An example using linear regression is based on estimation of the relation between ultradistal radius bone density and dietary intakes of caffeine, calcium, and total energy in the Massachusetts Women's Health Study. Software implementing these methods uses SAS macros.
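The core of the method can be sketched on simulated data with classical measurement error (an illustration, not the SAS macros mentioned in the abstract): the error-prone covariate W is replaced by an estimate of E[X | W], which undoes the attenuation of the naive slope.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 50_000
beta = 0.8                             # true slope (assumed)

X = rng.normal(0.0, 1.0, n)            # true exposure (unobserved)
W = X + rng.normal(0.0, 1.0, n)        # observed with classical error
Y = beta * X + rng.normal(0.0, 0.5, n)

# Naive slope of Y on W is attenuated by the reliability ratio var(X)/var(W).
naive = np.cov(W, Y)[0, 1] / np.var(W)

# Calibration: E[X|W] = mean(W) + lam * (W - mean(W)), with lam estimated
# from the error variance (here assumed known from a reliability study).
sigma_u2 = 1.0
lam = (np.var(W) - sigma_u2) / np.var(W)
X_hat = W.mean() + lam * (W - W.mean())
corrected = np.cov(X_hat, Y)[0, 1] / np.var(X_hat)

print(round(naive, 2), round(corrected, 2))  # attenuated vs ~0.8
```

The same substitution extends to logistic and Cox models, which is the setting of the abstract.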
Optics measurement algorithms and error analysis for the proton energy frontier
Directory of Open Access Journals (Sweden)
A. Langner
2015-03-01
Full Text Available Optics measurement algorithms have been improved in preparation for the commissioning of the LHC at higher energy, i.e., with an increased damage potential. Due to machine protection considerations, the higher energy sets tighter limits on the maximum excitation amplitude and the total beam charge, reducing the signal-to-noise ratio of optics measurements. Furthermore, the precision in 2012 (4 TeV) was insufficient to understand beam size measurements and determine interaction point (IP) β-functions (β^{*}). A new, more sophisticated algorithm has been developed which takes into account both the statistical and systematic errors involved in this measurement. This makes it possible to combine more beam position monitor measurements for deriving the optical parameters, and is shown to significantly improve the accuracy and precision. Measurements from the 2012 run have been reanalyzed and, due to the improved algorithms, yield a significantly higher precision of the derived optical parameters, decreasing the average error bars by a factor of three to four. This allowed the calculation of β^{*} values and proved fundamental to understanding the emittance evolution during the energy ramp.
PRECISION MEASUREMENTS OF THE CLUSTER RED SEQUENCE USING AN ERROR-CORRECTED GAUSSIAN MIXTURE MODEL
Energy Technology Data Exchange (ETDEWEB)
Hao, J.; Sheldon, E.
2009-08-14
The red sequence is an important feature of galaxy clusters and plays a crucial role in optical cluster detection. Measurement of the slope and scatter of the red sequence are affected both by selection of red sequence galaxies and measurement errors. In this paper, we describe a new error-corrected Gaussian Mixture Model for red sequence galaxy identification. Using this technique, we can remove the effects of measurement error and extract unbiased information about the intrinsic properties of the red sequence. We use this method to select red sequence galaxies in each of the 13,823 clusters in the maxBCG catalog, and measure the red sequence ridgeline location and scatter of each. These measurements provide precise constraints on the variation of the average red galaxy populations in the observed frame with redshift. We find that the scatter of the red sequence ridgeline increases mildly with redshift, and that the slope decreases with redshift. We also observe that the slope does not strongly depend on cluster richness. Using similar methods, we show that this behavior is mirrored in a spectroscopic sample of field galaxies, further emphasizing that ridgeline properties are independent of environment. These precise measurements serve as an important observational check on simulations and mock galaxy catalogs. The observed trends in the slope and scatter of the red sequence ridgeline with redshift are clues to possible intrinsic evolution of the cluster red sequence itself. Most importantly, the methods presented in this work lay the groundwork for further improvements in optically based cluster cosmology.
Visual acuity measures do not reliably detect childhood refractive error--an epidemiological study.
Directory of Open Access Journals (Sweden)
Lisa O'Donoghue
Full Text Available PURPOSE: To investigate the utility of uncorrected visual acuity measures in screening for refractive error in white school children aged 6-7 years and 12-13 years. METHODS: The Northern Ireland Childhood Errors of Refraction (NICER) study used a stratified random cluster design to recruit children from schools in Northern Ireland. Detailed eye examinations included assessment of logMAR visual acuity and cycloplegic autorefraction. Spherical equivalent refractive data from the right eye were used to classify significant refractive error: myopia of at least 1DS, hyperopia greater than +3.50DS, and astigmatism greater than 1.50DC, whether it occurred in isolation or in association with myopia or hyperopia. RESULTS: Results are presented for 661 white 12-13-year-old and 392 white 6-7-year-old school children. Using a cut-off of uncorrected visual acuity poorer than 0.20 logMAR to detect significant refractive error gave a sensitivity of 50% and specificity of 92% in 6-7-year-olds, and 73% and 93%, respectively, in 12-13-year-olds. In 12-13-year-old children, a cut-off of poorer than 0.20 logMAR had a sensitivity of 92% and a specificity of 91% in detecting myopia, and a sensitivity of 41% and a specificity of 84% in detecting hyperopia. CONCLUSIONS: Vision screening using logMAR acuity can reliably detect myopia, but not hyperopia or astigmatism, in school-age children. Providers of vision screening programs should be cognisant that where detection of uncorrected hyperopic and/or astigmatic refractive error is an aspiration, current UK protocols will not effectively deliver it.
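Sensitivity and specificity as quoted here follow directly from the screening counts. A trivial sketch with hypothetical counts chosen only to match the reported 6-7-year-old figures (the study's actual cell counts are not given in the abstract):

```python
def sens_spec(tp, fn, tn, fp):
    """Sensitivity = TP/(TP+FN); specificity = TN/(TN+FP)."""
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical counts: 50 of 100 refractive-error cases flagged by the
# acuity cut-off, 276 of 300 children without refractive error passed.
sens, spec = sens_spec(tp=50, fn=50, tn=276, fp=24)
print(sens, spec)  # 0.5, 0.92
```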
Directory of Open Access Journals (Sweden)
U. Foelsche
2011-02-01
Atmospheric profiles retrieved from GNSS (Global Navigation Satellite System) radio occultation (RO) measurements are increasingly used to validate other measurement data. For this purpose it is important to be aware of the characteristics of RO measurements. RO data are frequently compared with vertical reference profiles, but the RO method does not provide vertical scans through the atmosphere. The average elevation angle of the tangent point trajectory (which would be 90° for a vertical scan) is about 40° at altitudes above 70 km, decreasing to about 25° at 20 km and to less than 5° below 3 km. In an atmosphere with high horizontal variability we can thus expect noticeable representativeness errors if the retrieved profiles are compared with vertical reference profiles. We have performed an end-to-end simulation study using high-resolution analysis fields (T799L91) from the European Centre for Medium-Range Weather Forecasts (ECMWF) to simulate a representative ensemble of RO profiles via high-precision 3-D ray tracing. Thereby we focused on the dependence of systematic and random errors on the measurement geometry, specifically on the incidence angle of the RO measurement rays with respect to the orbit plane of the receiving satellite, also termed azimuth angle, which determines the obliquity of RO profiles. We analyzed by how much errors are reduced if the reference profile is not taken vertical at the mean tangent point but along the retrieved tangent point trajectory (TPT) of the RO profile. The exact TPT can only be determined by performing ray tracing, but our results confirm that the retrieved TPT – calculated from observed impact parameters – is a very good approximation to the "true" one. Systematic and random errors in RO data increase with increasing azimuth angle, less so if the TPT is properly taken into account, since the increasing obliquity of the RO profiles leads to an increasing sensitivity to departures from horizontal
Ryu, Gyeong Suk; Lee, Yu Jeung
2012-01-01
Patients use several types of devices to measure liquid medication. Using a criterion ranging from a 10% to 40% variation from a target 5 mL for a teaspoon dose, previous studies have found that a considerable proportion of patients or caregivers make errors when dosing liquid medication with measuring devices. To determine the rate and magnitude of liquid medication dose errors that occur with patient/caregiver use of various measuring devices in a community pharmacy. Liquid medication measurements by patients or caregivers were observed in a convenience sample of community pharmacy patrons in Korea during a 2-week period in March 2011. Participants included all patients or caregivers (N = 300) who came to the pharmacy to buy over-the-counter liquid medication or to have a liquid medication prescription filled during the study period. The participants were instructed by an investigator who was also a pharmacist to select their preferred measuring devices from 6 alternatives (etched-calibration dosing cup, printed-calibration dosing cup, dosing spoon, syringe, dispensing bottle, or spoon with a bottle adapter) and measure a 5 mL dose of Coben (chlorpheniramine maleate/phenylephrine HCl, Daewoo Pharm. Co., Ltd) syrup using the device of their choice. The investigator used an ISOLAB graduated cylinder (Germany, blue grad, 10 mL) to measure the amount of syrup dispensed by the study participants. Participant characteristics were recorded including gender, age, education level, and relationship to the person for whom the medication was intended. Of the 300 participants, 257 (85.7%) were female; 286 (95.3%) had at least a high school education; and 282 (94.0%) were caregivers (parent or grandparent) for the patient. The mean (SD) measured dose was 4.949 (0.378) mL for the 300 participants. In analysis of variance of the 6 measuring devices, the greatest difference from the 5 mL target was a mean 5.552 mL for 17 subjects who used the regular (etched) dosing cup and 4
Lou, Yingtian; Yan, Liping; Chen, Benyong; Zhang, Shihua
2017-03-20
A laser homodyne straightness interferometer with simultaneous measurement of six degrees of freedom motion errors is proposed for precision linear stage metrology. In this interferometer, the vertical straightness error and its position are measured by interference fringe counting, the yaw and pitch errors are obtained by measuring the spacing changes of the interference fringes, and the horizontal straightness and roll errors are determined by laser collimation. The merit of this interferometer is that four of the degrees of freedom motion errors are obtained with high accuracy by laser interferometry. The optical configuration of the proposed interferometer is designed. The principle of the simultaneous measurement of six degrees of freedom errors, including yaw, pitch, roll, the two straightness errors, and the straightness error's position of the measured linear stage, is described in detail, and the compensation of crosstalk effects on the straightness error and its position measurements is presented. Finally, an experimental setup is constructed and several experiments are performed to demonstrate the feasibility of the proposed interferometer and the compensation method.
Instrumental variables vs. grouping approach for reducing bias due to measurement error.
Batistatou, Evridiki; McNamee, Roseanne
2008-01-01
Attenuation of the exposure-response relationship due to exposure measurement error is often encountered in epidemiology. Given that error cannot be totally eliminated, bias correction methods of analysis are needed. Many methods require more than one exposure measurement per person to be made, but the 'group mean OLS method,' in which subjects are grouped into several a priori defined groups followed by ordinary least squares (OLS) regression on the group means, can be applied with one measurement. An alternative approach is to use an instrumental variable (IV) method in which both the single error-prone measure and an IV are used in IV analysis. In this paper we show that the 'group mean OLS' estimator is equal to an IV estimator with the group mean used as the IV, but that the variance estimators for the two methods are different. We derive a simple expression for the bias in the common estimator, which is a simple function of group size, reliability, and contrast of exposure between groups, and show that the bias can be very small when group size is large. We compare this method with a new proposal (the group mean ranking method), also applicable with a single exposure measurement, in which the IV is the rank of the group means. When there are two independent exposure measurements per subject, we propose a new IV method (EVROS IV) and compare it with Carroll and Stefanski's (CS IV) proposal in which the second measure is used as an IV; the new IV estimator combines aspects of the 'group mean' and 'CS' strategies. All methods are evaluated in terms of bias, precision, and root mean square error via simulations and a dataset from occupational epidemiology. The 'group mean ranking method' does not offer much improvement over the 'group mean method.' Compared with the 'CS' method, the 'EVROS' method is less affected by low reliability of exposure. We conclude that the group IV methods we propose may provide a useful way to handle mismeasured exposures in epidemiology with or
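The attenuation, and its removal by a priori grouping, can be demonstrated with a small simulation. This is a sketch of the general mechanism only, not the paper's EVROS estimator, and all constants are arbitrary:

```python
import random

def ols_slope(x, y):
    """Ordinary least squares slope of y on x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    return sxy / sxx

random.seed(1)
beta = 2.0                            # true exposure-response slope
w_all, y_all, gw, gy = [], [], [], []
for centre in range(10):              # 10 a priori groups with exposure contrast
    ws, ys = [], []
    for _ in range(500):
        x = random.gauss(centre, 1.0)      # true exposure
        w = x + random.gauss(0.0, 1.0)     # single error-prone measurement
        ws.append(w)
        ys.append(beta * x + random.gauss(0.0, 1.0))
    w_all += ws
    y_all += ys
    gw.append(sum(ws) / 500)          # group mean of the error-prone measure
    gy.append(sum(ys) / 500)

naive = ols_slope(w_all, y_all)       # attenuated towards zero
grouped = ols_slope(gw, gy)           # group mean OLS: bias shrinks with group size
```

The naive slope is attenuated by the reliability var(X)/(var(X)+var(U)), while the group-mean slope is close to 2 because averaging 500 measurements all but removes the error from the instrument.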
The effect of clock, media, and station location errors on Doppler measurement accuracy
Miller, J. K.
1993-01-01
Doppler tracking by the Deep Space Network (DSN) is the primary radio metric data type used by navigation to determine the orbit of a spacecraft. The accuracy normally attributed to orbits determined exclusively with Doppler data is about 0.5 microradians in geocentric angle. Recently, the Doppler measurement system has evolved to a high degree of precision primarily because of tracking at X-band frequencies (7.2 to 8.5 GHz). However, the orbit determination system has not been able to fully utilize this improved measurement accuracy because of calibration errors associated with transmission media, the location of tracking stations on the Earth's surface, the orientation of the Earth as an observing platform, and timekeeping. With the introduction of Global Positioning System (GPS) data, it may be possible to remove a significant error associated with the troposphere. In this article, the effect of various calibration errors associated with transmission media, Earth platform parameters, and clocks are examined. With the introduction of GPS calibrations, it is predicted that a Doppler tracking accuracy of 0.05 microradians is achievable.
Obesity increases precision errors in dual-energy X-ray absorptiometry measurements.
Knapp, Karen M; Welsman, Joanne R; Hopkins, Susan J; Fogelman, Ignac; Blake, Glen M
2012-01-01
The precision errors of dual-energy X-ray absorptiometry (DXA) measurements are important for monitoring osteoporosis. This study investigated the effect of body mass index (BMI) on precision errors for lumbar spine (LS), femoral neck (NOF), total hip (TH), and total body (TB) bone mineral density using the GE Lunar Prodigy. One hundred two women with BMIs ranging from 18.5 to 45.9 kg/m² were recruited. Participants had duplicate DXA scans of the LS, left hip, and TB with repositioning between scans. Participants were divided into 3 groups based on their BMI, and the percentage coefficient of variation (%CV) was calculated for each group. The %CVs for the normal (<25 kg/m²), overweight (25-30 kg/m²), and obese (>30 kg/m²) (n=28) BMI groups, respectively, were LS BMD: 0.99%, 1.30%, and 1.68%; NOF BMD: 1.32%, 1.37%, and 2.00%; TH BMD: 0.85%, 0.88%, and 1.06%; TB BMD: 0.66%, 0.73%, and 0.91%. Statistically significant differences in precision error between the normal and obese groups were found for LS (p=0.0006), NOF (p=0.005), and TB BMD (p=0.025). These results suggest that serial measurements in obese subjects should be treated with caution because the least significant change may be larger than anticipated. Copyright © 2012 The International Society for Clinical Densitometry. Published by Elsevier Inc. All rights reserved.
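Precision errors of this kind are computed from duplicate scans with the RMS formula below, and the least significant change (LSC) derived from them is what makes the monitoring caution concrete. A generic sketch with made-up BMD values, not the study's data:

```python
import math

def precision_cv(pairs):
    """Short-term precision from duplicate scans: RMS SD = sqrt(sum d_i^2 / (2n)),
    expressed as a percentage of the grand mean (%CV)."""
    n = len(pairs)
    sd = math.sqrt(sum((a - b) ** 2 for a, b in pairs) / (2.0 * n))
    grand_mean = sum(a + b for a, b in pairs) / (2.0 * n)
    return 100.0 * sd / grand_mean

def least_significant_change(cv_percent):
    """95% confidence LSC commonly used in densitometry: 2.77 x precision error."""
    return 2.77 * cv_percent
```

At the obese-group LS precision of 1.68% the LSC is about 4.7%, versus about 2.7% at the normal-BMI precision of 0.99%, which is why a change that looks significant in a lean patient may be within noise in an obese one.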
Measurement error: Implications for diagnosis and discrepancy models of developmental dyslexia.
Cotton, Sue M; Crewther, David P; Crewther, Sheila G
2005-08-01
The diagnosis of developmental dyslexia (DD) is reliant on a discrepancy between intellectual functioning and reading achievement. Discrepancy-based formulae have frequently been employed to establish the significance of the difference between 'intelligence' and 'actual' reading achievement. These formulae, however, often fail to take into consideration test reliability and the error associated with a single test score. This paper provides an illustration of the potential effects that test reliability and measurement error can have on the diagnosis of dyslexia, with particular reference to discrepancy models. The roles of reliability and the standard error of measurement (SEM) in classic test theory are also briefly reviewed. This is followed by illustrations of how SEM and test reliability can aid the interpretation of a simple discrepancy-based formula of DD. It is proposed that a lack of consideration of test theory in the use of discrepancy-based models of DD can lead to misdiagnosis (both false positives and false negatives). Further, misdiagnosis in research samples affects the reproducibility and generalizability of findings. This, in turn, may explain current inconsistencies in research on the perceptual, sensory, and motor correlates of dyslexia.
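In classic test theory the quantities involved are simple; a minimal sketch of how the SEM turns a discrepancy score into a yes/no judgement (the score SD, reliabilities, and cut-off below are illustrative values, not figures from the paper):

```python
import math

def sem(sd, reliability):
    """Standard error of measurement: SEM = SD * sqrt(1 - r)."""
    return sd * math.sqrt(1.0 - reliability)

def discrepancy_exceeds_noise(iq, reading, sd=15.0, r_iq=0.95, r_read=0.90, z=1.96):
    """True if the IQ-reading discrepancy is larger than measurement error
    alone would produce at the chosen confidence level."""
    se_diff = math.sqrt(sem(sd, r_iq) ** 2 + sem(sd, r_read) ** 2)
    return abs(iq - reading) > z * se_diff
```

With these reliabilities a 15-point discrepancy clears the noise band (about ±11.4 points) but a 10-point one does not, which is exactly the false positive/false negative boundary the authors warn about.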
Zhao, Xiaxia; Mo, Rong; Chang, Zhiyong; Lu, Jin
2018-01-01
In phase measuring profilometry, the system gamma nonlinearity makes the captured fringe patterns non-sinusoidal, which introduces a non-negligible error into the computed phase and seriously degrades the 3D reconstruction accuracy. Based on a detailed study of existing gamma nonlinearity compensation and phase error reduction techniques, a method based on low-pass frequency-domain filtering is proposed. It filters out the harmonic components above first order induced by the gamma nonlinearity while retaining as much power as possible in the power spectrum, thus improving the sinusoidal waveform of the fringe images. Compared to other compensation methods, the proposed method needs no complex mathematical model. The simulation and experiments confirm that the higher-order harmonic components are significantly reduced, the phase precision is effectively improved, and a given accuracy requirement can be met.
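The effect of the filter can be sketched in one dimension: projecting a gamma-distorted fringe onto its first-order (fundamental) harmonic removes exactly what the low-pass step is meant to remove. This is a stdlib illustration on a single fringe period; the actual method operates on the 2-D image spectrum:

```python
import math

def keep_fundamental(samples):
    """Keep only the DC term and first-order harmonic of a profile sampled
    over exactly one fringe period; higher harmonics are discarded."""
    n = len(samples)
    a0 = sum(samples) / n
    a1 = 2.0 / n * sum(s * math.cos(2 * math.pi * k / n) for k, s in enumerate(samples))
    b1 = 2.0 / n * sum(s * math.sin(2 * math.pi * k / n) for k, s in enumerate(samples))
    return [a0 + a1 * math.cos(2 * math.pi * k / n) + b1 * math.sin(2 * math.pi * k / n)
            for k in range(n)]
```

Discrete-Fourier orthogonality makes the recovery exact when the distortion consists of pure higher harmonics, which is precisely the form the gamma nonlinearity induces.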
McKenzie, D A
1991-06-01
Because many raters are generally involved in the implementation of a patient classification system, interrater reliability is always a concern in the development and use of such a system. In this article, a case example is used to demonstrate a prototype for identifying measurement error introduced at each step in the classification process (assessment, creating summary item responses, and use of these responses for categorization) and to illustrate how this identification may lead to error reduction strategies. The methods of analyses included percent agreement, Kappa, and visual inspection of contingency tables displaying interrater responses to assessment items, summary items, and the placement category. The extent to which raters followed instructions was analyzed by comparing their responses with computer-generated responses across the classification steps. In addition, raters were interviewed regarding their use of the system.
Indirect measurement of machine tool motion axis error with single laser tracker
Wu, Zhaoyong; Li, Liangliang; Du, Zhengchun
2015-02-01
For high-precision machining, convenient and accurate detection of motion error for machine tools is significant. Among common detection methods such as the ball-bar method, the laser tracker approach has received much attention. As a high-accuracy measurement device, the laser tracker is capable of long-distance and dynamic measurement, which adds much flexibility to the measurement process. However, existing methods are not satisfactory in measurement cost, operability, or applicability. Currently, a plausible method is the single-station and time-sharing method, but it needs a large working area all around the machine tool, making it unsuitable for machine tools surrounded by a protective cover. In this paper, a novel and convenient positioning error measurement approach utilizing a single laser tracker is proposed, followed by two corresponding mathematical models: a laser-tracker base-point-coordinate model and a target-mirror-coordinates model. Also, an auxiliary apparatus for the target mirrors to be placed on is designed, for which sensitivity analysis and Monte Carlo simulation are conducted to optimize its dimensions. Based on the proposed method, a real experiment using a single API TRACKER 3 assisted by the auxiliary apparatus is carried out, and a verification experiment using a traditional RENISHAW XL-80 interferometer is conducted under the same conditions for comparison. Both results reveal a large increase in the Y-axis positioning error of the machine tool. Theoretical and experimental studies together verify the feasibility of this method, which offers more convenient operation and wider application to various kinds of machine tools.
Sensitivity of the diamagnetic sensor measurements of ITER to error sources and their compensation
Energy Technology Data Exchange (ETDEWEB)
Fresa, R., E-mail: raffaele.fresa@unibas.it [CREATE/ENEA/Euratom Association, Scuola di Ingegneria, Università della Basilicata, Potenza (Italy); Albanese, R. [CREATE/ENEA/Euratom Association, DIETI, Università di Napoli Federico II, Naples (Italy); Arshad, S. [Fusion for Energy (F4E), Barcelona (Spain); Coccorese, V.; Magistris, M. de; Minucci, S.; Pironti, A.; Quercia, A.; Rubinacci, G. [CREATE/ENEA/Euratom Association, DIETI, Università di Napoli Federico II, Naples (Italy); Vayakis, G. [ITER Organization, Route de Vinon sur Verdon, 13115 Saint Paul Lez Durance (France); Villone, F. [CREATE/ENEA/Euratom Association, Università di Cassino, Cassino (Italy)
2015-11-15
Highlights: • In the paper we discuss the sensitivity analysis for the measurement system of diamagnetic flux in the ITER tokamak. • Some compensation formulas have been tested to compensate the manufacturing errors, both for the sources and the sensors. • An estimation of the poloidal beta has been carried out by estimating the plasma's diamagnetism. - Abstract: The present paper is focused on the sensitivity analysis of the diamagnetic sensor measurements of ITER against several kinds of error sources, with the aim of compensating for them to improve the accuracy of the evaluation of the energy confinement time and poloidal beta via the Shafranov formula. The virtual values of the measurements at the diamagnetic sensors were simulated by the COMPFLUX code, a numerical code able to compute the field and flux values generated at a prescribed set of output points by massive conductors and generalized filamentary currents (with an arbitrary 3D shape and a negligible cross section) in the presence of magnetic materials. The major issue has been to determine the possible deformations of the sensors and electromagnetic sources. The analysis has been carried out considering the following cases: deformed sensors and ideal EM (electromagnetic) sources; ideal sensors and perturbed EM sources; both sensors and EM sources perturbed. As regards the compensation, several formulas have been proposed, based on the measurements carried out by the compensation coils; they basically use the measured flux density to compensate the effects of the poloidal eddy currents induced in the conducting structures surrounding the plasma. The static deviation due to sensor manufacturing and positioning errors has been evaluated, and most of the pollution of the diamagnetic flux has been compensated, meeting the prescribed specifications and tolerances.
Tuck, David M.; Bierck, Barnes R.; Jaffé, Peter R.
1998-06-01
Multiphase flow in porous media is an important research topic. In situ, nondestructive experimental methods for studying multiphase flow are important for improving our understanding and the theory. Rapid changes in fluid saturation, characteristic of immiscible displacement, are difficult to measure accurately using gamma rays due to practical restrictions on source strength. Our objective is to describe a synchrotron radiation technique for rapid, nondestructive saturation measurements of multiple fluids in porous media, and to present a precision and accuracy analysis of the technique. Synchrotron radiation provides a high intensity, inherently collimated photon beam of tunable energy which can yield accurate measurements of fluid saturation in just one second. Measurements were obtained with precision of ±0.01 or better for tetrachloroethylene (PCE) in a 2.5 cm thick glass-bead porous medium using a counting time of 1 s. The normal distribution was shown to provide acceptable confidence limits for PCE saturation changes. Sources of error include the heat load on the monochromator, periodic movement of the source beam, and errors in the stepping-motor positioning system. Hypodermic needles pushed into the medium to inject PCE changed the porosity within approximately ±1 mm of the injection point. Improved mass balance between the known and measured PCE injection volumes was obtained when appropriate corrections were applied to calibration values near the injection point.
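The measurement behind these numbers is Beer-Lambert attenuation plus photon counting statistics; a hedged sketch follows, in which the attenuation coefficient, porosity, and counts are invented round numbers, not the experiment's calibration values:

```python
import math

def pce_saturation(n_ref, n_meas, mu_pce, porosity, thickness_cm):
    """PCE saturation from transmitted photon counts via Beer-Lambert,
    with its 1-sigma counting-statistics precision."""
    path = math.log(n_ref / n_meas) / mu_pce          # cm of PCE in the beam
    saturation = path / (porosity * thickness_cm)     # fraction of pore space
    sigma = math.sqrt(1.0 / n_ref + 1.0 / n_meas) / (mu_pce * porosity * thickness_cm)
    return saturation, sigma
```

With counts on the order of 10^5 to 10^6 in a 1 s window, the counting term alone sits well below ±0.01, consistent with the precision reported above.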
Cecinati, Francesca; Moreno Ródenas, Antonio Manuel; Rico-Ramirez, Miguel Angel; ten Veldhuis, Marie-claire; Han, Dawei
2016-04-01
In many research studies rain gauges are used as a reference point measurement for rainfall, because they can reach very good accuracy, especially compared to radar or microwave links, and their use is very widespread. In some applications rain gauge uncertainty is assumed to be small enough to be neglected. This can be done when rain gauges are accurate and their data is correctly managed. Unfortunately, in many operational networks the importance of accurate rainfall data and of data quality control can be underestimated; budget and best practice knowledge can be limiting factors in a correct rain gauge network management. In these cases, the accuracy of rain gauges can drastically drop and the uncertainty associated with the measurements cannot be neglected. This work proposes an approach based on three different kriging methods to integrate rain gauge measurement errors into the overall rainfall uncertainty estimation. In particular, rainfall products of different complexity are derived through 1) block kriging on a single rain gauge, 2) ordinary kriging on a network of different rain gauges, and 3) kriging with external drift to integrate all the available rain gauges with radar rainfall information. The study area is the Eindhoven catchment, contributing to the river Dommel, in the southern part of the Netherlands. The area, 590 km², is covered by high quality rain gauge measurements by the Royal Netherlands Meteorological Institute (KNMI), which has one rain gauge inside the study area and six around it, and by lower quality rain gauge measurements by the Dommel Water Board and by the Eindhoven Municipality (six rain gauges in total). The integration of the rain gauge measurement error is accomplished in all cases by increasing the nugget of the semivariogram in proportion to the estimated error. Using different semivariogram models for the different networks allows for the separate characterisation of higher and lower quality rain gauges. For the kriging with
Inter-rater reliability and measurement error of sonographic muscle architecture assessments.
König, Niklas; Cassel, Michael; Intziegianni, Konstantina; Mayer, Frank
2014-05-01
Sonography of muscle architecture provides physicians and researchers with information about muscle function and muscle-related disorders. Inter-rater reliability is a crucial parameter in daily clinical routines. The aim of this study was to assess the inter-rater reliability of sonographic muscle architecture assessments and quantification of errors that arise from inconsistent probe positioning and image interpretation. The medial gastrocnemius muscle of 15 healthy participants was measured with sagittal B-mode ultrasound scans. The muscle thickness, fascicle length, superior pennation angle, and inferior pennation angle were assessed. The participants were examined by 2 investigators. A custom-made foam cast was used for standardized positioning of the probe. To analyze inter-rater reliability, the examinations of both raters were compared. The impact of probe positioning was assessed by comparison of foam cast and freehand scans. Error arising from picture interpretation was assessed by comparing the investigators' analyses of foam cast scans independently. Reliability was expressed as the intraclass correlation coefficient (ICC), inter-rater variability (IRV), Bland-Altman analysis (bias ± limits of agreement [LoA]), and standard error of measurement (SEM). Inter-rater reliability was good overall (ICC, 0.77-0.90; IRV, 9.0%-13.4%; bias ± LoA, 0.2 ± 0.2-1.7 ± 3.0). Superior and inferior pennation angles showed high systematic bias and LoA in all setups, ranging from 2.0° ± 2.2° to 3.4° ± 4.1°. The highest IRV was found for muscle thickness (13.4%). When the probe position was standardized, the SEM for muscle thickness decreased from 0.1 to 0.05 cm. Sonographic examination of muscle architecture of the medial gastrocnemius has good to high reliability. In contrast to pennation angle measurements, length measurements can be improved by standardization of the probe position.
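The agreement statistics used in studies like this are quick to compute from paired rater data; a minimal sketch (the muscle-thickness values in the usage note are made up, not the study's measurements):

```python
import math
import statistics

def agreement(rater_a, rater_b):
    """Bland-Altman bias and 95% limits of agreement for two raters, plus
    the standard error of measurement SEM = SD_diff / sqrt(2)."""
    diffs = [a - b for a, b in zip(rater_a, rater_b)]
    bias = statistics.mean(diffs)
    sd_diff = statistics.stdev(diffs)
    return bias, 1.96 * sd_diff, sd_diff / math.sqrt(2.0)
```

Standardizing the probe position shrinks the between-rater differences and hence the SEM, which is the mechanism behind the 0.1 to 0.05 cm improvement in muscle thickness reported above.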
Reduction of truncation errors in partial spherical near-field antenna measurements
DEFF Research Database (Denmark)
Pivnenko, Sergey; Cano Facila, Francisco J.
2010-01-01
In this report, a new and effective method for reduction of truncation errors in partial spherical near-field (SNF) antenna measurements is proposed. This method is based on the Gerchberg-Papoulis algorithm used to extrapolate functions and it is able to extend the valid region of the far-field pattern calculated from a truncated SNF measurement up to the whole forward hemisphere. The method is useful when measuring electrically large antennas and the measurement over the whole sphere is very time consuming. Therefore, a solution is considered to take samples over a portion of the spherical surface and then to apply the above method to reconstruct the far-field pattern. The work described in this report was carried out within the external stay of Francisco J. Cano at the Technical University of Denmark (DTU) from September 6th to December 18th in 2010.
Measured and predicted root-mean-square errors in square and triangular antenna mesh facets
Fichter, W. B.
1989-01-01
Deflection shapes of square and equilateral triangular facets of two tricot-knit, gold plated molybdenum wire mesh antenna materials were measured and compared, on the basis of root mean square (rms) differences, with deflection shapes predicted by linear membrane theory, for several cases of biaxial mesh tension. The two mesh materials contained approximately 10 and 16 holes per linear inch, measured diagonally with respect to the course and wale directions. The deflection measurement system employed a non-contact eddy current proximity probe and an electromagnetic distance sensing probe in conjunction with a precision optical level. Despite experimental uncertainties, rms differences between measured and predicted deflection shapes suggest the following conclusions: that replacing flat antenna facets with facets conforming to parabolically curved structural members yields smaller rms surface error; that potential accuracy gains are greater for equilateral triangular facets than for square facets; and that linear membrane theory can be a useful tool in the design of tricot knit wire mesh antennas.
Low-error and broadband microwave frequency measurement in a silicon chip
Pagani, Mattia; Zhang, Yanbing; Casas-Bedoya, Alvaro; Aalto, Timo; Harjanne, Mikko; Kapulainen, Markku; Eggleton, Benjamin J; Marpaung, David
2015-01-01
Instantaneous frequency measurement (IFM) of microwave signals is a fundamental functionality for applications ranging from electronic warfare to biomedical technology. Photonic techniques, and nonlinear optical interactions in particular, have the potential to broaden the frequency measurement range beyond the limits of electronic IFM systems. The key lies in efficiently harnessing optical mixing in an integrated nonlinear platform, with low losses. In this work, we exploit the low loss of a 35 cm long, thick silicon waveguide, to efficiently harness Kerr nonlinearity, and demonstrate the first on-chip four-wave mixing (FWM) based IFM system. We achieve a large 40 GHz measurement bandwidth and record-low measurement error. Finally, we discuss the future prospect of integrating the whole IFM system on a silicon chip to enable the first reconfigurable, broadband IFM receiver with low-latency.
Backward-gazing method for heliostats shape errors measurement and calibration
Coquand, Mathieu; Caliot, Cyril; Hénault, François
2017-06-01
The pointing and canting accuracies and the surface shape of the heliostats have a great influence on the efficiency of a solar tower power plant. At the industrial scale, one of the issues to solve is the time and effort devoted to adjusting the different mirrors of the faceted heliostats, which could take several months with current methods. Accurate control of heliostat tracking requires complicated and onerous devices. Thus, methods to adjust the whole field of a plant quickly are essential for the rise of solar tower technology with a huge number of heliostats. Wavefront detection is widely used in adaptive optics and shape error reconstruction. Such systems can be sources of inspiration for the measurement of solar facet misalignment and tracking errors. We propose a new method of heliostat characterization inspired by adaptive optics devices. This method aims at observing the brightness distributions on the heliostat's surface, from different points of view close to the receiver of the power plant, in order to calculate the wavefront of the reflection of the sun on the concentrating surface and to determine its errors. The originality of this new method is to use the profile of the sun to determine the defects of the mirrors. In addition, this method would be easy to set up and could be implemented without sophisticated apparatus: only four cameras would be used to perform the acquisitions.
Holsclaw, Tracy; Hallgren, Kevin A.; Steyvers, Mark; Smyth, Padhraic; Atkins, David C.
2015-01-01
Behavioral coding is increasingly used for studying mechanisms of change in psychosocial treatments for substance use disorders (SUDs). However, behavioral coding data typically include features that can be problematic in regression analyses, including measurement error in independent variables, non-normal distributions of count outcome variables, and conflation of predictor and outcome variables with third variables, such as session length. Methodological research in econometrics has shown that these issues can lead to biased parameter estimates, inaccurate standard errors, and increased type-I and type-II error rates, yet these statistical issues are not widely known within SUD treatment research, or more generally, within psychotherapy coding research. Using minimally-technical language intended for a broad audience of SUD treatment researchers, the present paper illustrates the nature in which these data issues are problematic. We draw on real-world data and simulation-based examples to illustrate how these data features can bias estimation of parameters and interpretation of models. A weighted negative binomial regression is introduced as an alternative to ordinary linear regression that appropriately addresses the data characteristics common to SUD treatment behavioral coding data. We conclude by demonstrating how to use and interpret these models with data from a study of motivational interviewing. SPSS and R syntax for weighted negative binomial regression models is included in supplementary materials. PMID:26098126
Jamaiyah, H; Geeta, A; Safiza, M N; Khor, G L; Wong, N F; Kee, C C; Rahmah, R; Ahmad, A Z; Suzana, S; Chen, W S; Rajaah, M; Adam, B
2010-06-01
The National Health and Morbidity Survey III 2006 aimed to perform anthropometric measurements (length and weight) for children in its survey. However, there is limited literature on the reliability, technical error of measurement (TEM), and validity of these two measurements. This study assessed the above properties of length (LT) and weight (WT) measurements in 130 children aged below two years, from the Hospital Universiti Kebangsaan Malaysia (HUKM) paediatric outpatient clinics, during the period of December 2005 to January 2006. Two trained nurses measured WT using a Tanita digital infant scale model 1583, Japan (0.01 kg) and a Seca beam scale, Germany (0.01 kg), and LT using a Seca measuring mat, Germany (0.1 cm) and a Sensormedics stadiometer model 2130 (0.1 cm). Findings showed high inter- and intra-examiner reliability using 'change in the mean' and the 'intraclass correlation' (ICC) for WT and LT. However, LT was found to be less reliable using the 'Bland and Altman plot'. This was also true using relative TEMs, where the TEM value of LT was slightly more than the acceptable limit. The test instruments were highly valid for WT using 'change in the mean' and 'ICC' but were less valid for LT measurement. In spite of this, we concluded that WT and LT measurements in children below two years old using the test instruments were reliable and valid for a community survey such as NHMS III within the limits of their error. We recommend that LT measurements be given special attention to improve their reliability and validity.
MEASURING THE INFLUENCE OF TASK COMPLEXITY ON HUMAN ERROR PROBABILITY: AN EMPIRICAL EVALUATION
Directory of Open Access Journals (Sweden)
LUCA PODOFILLINI
2013-04-01
Full Text Available A key input for the assessment of Human Error Probabilities (HEPs) with Human Reliability Analysis (HRA) methods is the evaluation of the factors influencing the human performance (often referred to as Performance Shaping Factors, PSFs). In general, the definition of these factors and the supporting guidance are such that their evaluation involves significant subjectivity. This affects the repeatability of HRA results as well as the collection of HRA data for model construction and verification. In this context, the present paper considers the TAsk COMplexity (TACOM) measure, developed by one of the authors to quantify the complexity of procedure-guided tasks (by the operating crew of nuclear power plants in emergency situations), and evaluates its use to represent (objectively and quantitatively) task complexity issues relevant to HRA methods. In particular, TACOM scores are calculated for five Human Failure Events (HFEs) for which empirical evidence on the HEPs (albeit with large uncertainty) and influencing factors is available from the International HRA Empirical Study. The empirical evaluation has shown promising results. The TACOM score increases as the empirical HEP of the selected HFEs increases. Except for one case, TACOM scores are well distinguished if related to different difficulty categories (e.g., “easy” vs. “somewhat difficult”), while values corresponding to tasks within the same category are very close. Despite some important limitations related to the small number of HFEs investigated and the large uncertainty in their HEPs, this paper presents one of few attempts to empirically study the effect of a performance shaping factor on the human error probability. This type of study is important to enhance the empirical basis of HRA methods, to make sure that 1) the definitions of the PSFs cover the influences important for HRA (i.e., influencing the error probability), and 2) the quantitative relationships among PSFs and error
Qibo, Feng; Bin, Zhang; Cunxing, Cui; Cuifang, Kuang; Yusheng, Zhai; Fenglin, You
2013-11-04
A simple method for simultaneously measuring the 6DOF geometric motion errors of the linear guide was proposed. The mechanisms for measuring straightness and angular errors and for enhancing their resolution are described in detail. A common-path method for measuring the laser beam drift was proposed and it was used to compensate the errors produced by the laser beam drift in the 6DOF geometric error measurements. A compact 6DOF system was built. Calibration experiments with certain standard measurement meters showed that our system has a standard deviation of 0.5 µm in a range of ± 100 µm for the straightness measurements, and standard deviations of 0.5", 0.5", and 1.0" in the range of ± 100" for pitch, yaw, and roll measurements, respectively.
Hu, Pengcheng; Mao, Shuai; Tan, Jiu-Bin
2015-11-02
A measurement system with three degrees of freedom (3 DOF) that compensates for errors caused by incident beam drift is proposed. The system's measurement model (i.e. its mathematical foundation) is analyzed, and a measurement module (i.e. the designed orientation measurement unit) is developed and adopted to measure simultaneously straightness errors and the incident beam direction; thus, the errors due to incident beam drift can be compensated. The experimental results show that the proposed system has a deviation of 1 μm in the range of 200 mm for distance measurements, and a deviation of 1.3 μm in the range of 2 mm for straightness error measurements.
Error field measurement, correction and heat flux balancing on Wendelstein 7-X
Lazerson, Samuel A.; Otte, Matthias; Jakubowski, Marcin; Israeli, Ben; Wurden, Glen A.; Wenzel, Uwe; Andreeva, Tamara; Bozhenkov, Sergey; Biedermann, Christoph; Kocsis, Gábor; Szepesi, Tamás; Geiger, Joachim; Pedersen, Thomas Sunn; Gates, David; The W7-X Team
2017-04-01
The measurement and correction of error fields in Wendelstein 7-X (W7-X) is critical to long-pulse high-beta operation, as small error fields may cause overloading of divertor plates in some configurations. Accordingly, as part of a broad collaborative effort, the detection and correction of error fields on the W7-X experiment has been performed using the trim coil system in conjunction with the flux surface mapping diagnostic and a high-resolution infrared camera. In the early commissioning phase of the experiment, the trim coils were used to open an n/m = 1/2 island chain in a specially designed magnetic configuration. The flux surface mapping diagnostic was then able to directly image the magnetic topology of the experiment, allowing the inference of a small ∼4 cm intrinsic island chain. The suspected main sources of the error field, slight misalignments and deformations of the superconducting coils, were then confirmed through experimental modeling using the detailed measurements of the coil positions. Observations of the limiter temperatures in module 5 show a clear dependence of the limiter heat flux pattern on the rotation of the perturbing fields. Plasma experiments without applied correcting fields show a significant asymmetry in neutral pressure (centered in module 4) and light emission (visible, H-alpha, CII, and CIII). Such pressure asymmetry is associated with plasma-wall (limiter) interaction asymmetries between the modules. Application of trim coil fields with an n = 1 waveform corrects the imbalance. Confirmation of the error fields allows the assessment of magnetic fields which resonate with the n/m = 5/5 island chain. Notice: This manuscript has been authored by Princeton University under Contract Number DE-AC02-09CH11466 with the U.S. Department of Energy. The publisher, by accepting the article for publication, acknowledges that the United States Government retains a non-exclusive, paid-up, irrevocable, world
Ghazi, Nicola G; Kirk, Tyler; Allam, Souha; Yan, Guofen
2009-07-01
To assess error indicators encountered during optical coherence tomography (OCT) automated retinal thickness measurement (RTM) in neovascular age-related macular degeneration (NVAMD) before and after bevacizumab (Avastin; Genentech Inc, South San Francisco, California, USA) treatment. Retrospective observational cross-sectional study. Each of the 6 radial lines of a single Stratus fast macular OCT study before and 3 months following initiation of treatment in 46 eyes with NVAMD, for a total of 552 scans, was evaluated. Error frequency was analyzed relative to the presence of intraretinal, subretinal (SR), and subretinal pigment epithelial (SRPE) fluid. In scans with edge detection kernel (EDK) misplacement, manual caliper measurement of the central macular (CMT) and central foveal (CFT) thicknesses was performed and compared to the software-generated values. The frequency of the various types of error indicators, the risk factors for error, and the magnitude of automated RTM error were analyzed. Error indicators were found in 91.3% and 71.7% of eyes before and after treatment, respectively (P = .013). Suboptimal signal strength was the most common error indicator. EDK misplacement was the second most common type of error prior to treatment and the least common after treatment (P = .005). Eyes with SR or SRPE fluid were at the highest risk for error, particularly EDK misplacement (P = .039). There was a strong association between the software-generated and caliper-generated CMT and CFT measurements. The software overestimated measurements by up to 32% and underestimated them by up to 15% in the presence of SR and SRPE fluid, respectively. OCT errors are very frequent in NVAMD. SRF is associated with the highest risk and magnitude of error in automated CMT and CFT measurements. Manually adjusted measurements may be more reliable in such eyes.
Read, Michael L; Morgan, Philip B; Maldonado-Codina, Carole
2009-11-01
This work sought to undertake a comprehensive investigation of the measurement errors associated with contact angle assessment of curved hydrogel contact lens surfaces. The contact angle coefficient of repeatability (COR) associated with three measurement conditions (image analysis COR, intralens COR, and interlens COR) was determined by measuring the contact angles (using both sessile drop and captive bubble methods) for three silicone hydrogel lenses (senofilcon A, balafilcon A, lotrafilcon A) and one conventional hydrogel lens (etafilcon A). Image analysis COR values were about 2 degrees , whereas intralens COR values (95% confidence intervals) ranged from 4.0 degrees (3.3 degrees , 4.7 degrees ) (lotrafilcon A, captive bubble) to 10.2 degrees (8.4 degrees , 12.1 degrees ) (senofilcon A, sessile drop). Interlens COR values ranged from 4.5 degrees (3.7 degrees , 5.2 degrees ) (lotrafilcon A, captive bubble) to 16.5 degrees (13.6 degrees , 19.4 degrees ) (senofilcon A, sessile drop). Measurement error associated with image analysis was shown to be small as an absolute measure, although proportionally more significant for lenses with low contact angle. Sessile drop contact angles were typically less repeatable than captive bubble contact angles. For sessile drop measures, repeatability was poorer with the silicone hydrogel lenses when compared with the conventional hydrogel lens; this phenomenon was not observed for the captive bubble method, suggesting that methodological factors related to the sessile drop technique (such as surface dehydration and blotting) may play a role in the increased variability of contact angle measurements observed with silicone hydrogel contact lenses.
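The coefficient of repeatability reported in this study is commonly computed as 1.96 times the standard deviation of the differences between repeated measurements (one common Bland-Altman definition; other variants use 1.96·√2 times the within-subject SD). The contact angle values below are hypothetical:

```python
import math

def coefficient_of_repeatability(first, second):
    """Bland-Altman repeatability coefficient: 1.96 * sample SD of the
    differences between paired repeated measurements."""
    diffs = [a - b for a, b in zip(first, second)]
    m = sum(diffs) / len(diffs)
    sd = math.sqrt(sum((d - m) ** 2 for d in diffs) / (len(diffs) - 1))
    return 1.96 * sd

# Hypothetical repeated sessile-drop contact angles (degrees) on the same lenses
run1 = [60.0, 65.0, 70.0, 55.0]
run2 = [62.0, 64.0, 71.0, 57.0]
print(round(coefficient_of_repeatability(run1, run2), 2))  # -> 2.77
```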
Directory of Open Access Journals (Sweden)
Wu Weijiang
2016-01-01
Full Text Available The principles of the active electronic current transformer (ECT) are introduced, and the mechanism by which a proximity magnetic field influences measurement errors is analyzed from the perspective of the sensor section of the ECT. The impacts on active ECTs created by a three-phase proximity magnetic field with invariable distance and with variable distance are simulated and analyzed. The theory and simulated analysis indicate that active ECTs are sensitive to a proximity magnetic field under certain conditions. According to the simulated analysis, a product structural design and the siting of transformers at substations are suggested for manufacturers and power supply administrations, respectively.
DEFF Research Database (Denmark)
Ashraf, Bilal; Janss, Luc; Jensen, Just
Genotyping-by-sequencing (GBSeq) is becoming a cost-effective genotyping platform for species without available SNP arrays. GBSeq sequences short reads from restriction sites covering a limited part of the genome (e.g., 5-10%) with low sequencing depth per individual (e.g., 5-10X per...... sample). The GBSeq data can be used directly in genomic models in the form of individual SNP allele-frequency estimates (e.g., reference reads/total reads per polymorphic site per individual), but is subject to measurement error due to the low sequencing depth per individual. Due to technical reasons...
1988-10-31
measured frequency response function and the Symposium on Dynamics and Control of Large Flexible Spacecraft, VPI&SU ... of large W, since the model correction term d(t) remains virtually zero. The measurement-minus-estimate variance is much ... weight matrix ... differential equations. Although the measurement error covariance matrix, Rk, is assumed to be known, it is strictly valid only for an
Sinha, Samiran
2009-08-10
We propose a semiparametric Bayesian method for handling measurement error in nutritional epidemiological data. Our goal is to estimate nonparametrically the form of association between a disease and exposure variable while the true values of the exposure are never observed. Motivated by nutritional epidemiological data, we consider the setting where a surrogate covariate is recorded in the primary data, and a calibration data set contains information on the surrogate variable and repeated measurements of an unbiased instrumental variable of the true exposure. We develop a flexible Bayesian method where not only is the relationship between the disease and exposure variable treated semiparametrically, but also the relationship between the surrogate and the true exposure is modeled semiparametrically. The two nonparametric functions are modeled simultaneously via B-splines. In addition, we model the distribution of the exposure variable as a Dirichlet process mixture of normal distributions, thus making its modeling essentially nonparametric and placing this work into the context of functional measurement error modeling. We apply our method to the NIH-AARP Diet and Health Study and examine its performance in a simulation study.
Emission Flux Measurement Error with a Mobile DOAS System and Application to NOx Flux Observations.
Wu, Fengcheng; Li, Ang; Xie, Pinhua; Chen, Hao; Hu, Zhaokun; Zhang, Qiong; Liu, Jianguo; Liu, Wenqing
2017-01-25
Mobile differential optical absorption spectroscopy (mobile DOAS) is an optical remote sensing method that can rapidly measure trace gas emission flux from air pollution sources (such as power plants, industrial areas, and cities) in real time. Mobile DOAS flux estimates are influenced by wind, drive velocity, and other factors, particularly the wind field used when the emission flux is derived. This paper presents a detailed error analysis and NOx emission measurement with a mobile DOAS system at a power plant in Shijiazhuang city, China. Comparison of the SO₂ emission flux from mobile DOAS observations with a continuous emission monitoring system (CEMS) under different drive speeds and wind fields revealed that the optimal drive velocity is 30-40 km/h and that the wind field at plume height should be used when mobile DOAS observations are performed. In addition, the total errors of SO₂ and NO₂ emissions with mobile DOAS measurements are 32% and 30%, respectively, combining the uncertainties of column density, wind field, and drive velocity. Furthermore, a NOx emission of 0.15 ± 0.06 kg/s from the power plant is estimated, in good agreement with the CEMS observation of 0.17 ± 0.07 kg/s. These results contribute to the application of mobile DOAS measurements to emissions from air pollution sources by improving estimation accuracy.
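Total errors of this kind are typically obtained by combining independent component uncertainties in quadrature. Assuming that convention (the component percentages below are hypothetical illustrations, not the paper's values), the combination looks like:

```python
import math

def combined_relative_error(*components):
    """Combine independent relative uncertainties in quadrature:
    e_total = sqrt(e1^2 + e2^2 + ...)."""
    return math.sqrt(sum(c ** 2 for c in components))

# Hypothetical component uncertainties for a mobile DOAS flux:
# column density 20%, wind field 20%, drive velocity 10%
total = combined_relative_error(0.20, 0.20, 0.10)
print(f"total relative error: {total:.0%}")  # -> 30%
```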
DEFF Research Database (Denmark)
Hansen, Peter Reinhard; Lunde, Asger
2014-01-01
An economic time series can often be viewed as a noisy proxy for an underlying economic variable. Measurement errors will influence the dynamic properties of the observed process and may conceal the persistence of the underlying time series. In this paper we develop instrumental variable (IV......) methods for extracting information about the latent process. Our framework can be used to estimate the autocorrelation function of the latent volatility process and a key persistence parameter. Our analysis is motivated by the recent literature on realized volatility measures that are imperfect estimates...... of actual volatility. In an empirical analysis using realized measures for the Dow Jones industrial average stocks, we find the underlying volatility to be near unit root in all cases. Although standard unit root tests are asymptotically justified, we find them to be misleading in our application despite...
DEFF Research Database (Denmark)
Hansen, Peter Reinhard; Lunde, Asger
An economic time series can often be viewed as a noisy proxy for an underlying economic variable. Measurement errors will influence the dynamic properties of the observed process and may conceal the persistence of the underlying time series. In this paper we develop instrumental variable (IV......) methods for extracting information about the latent process. Our framework can be used to estimate the autocorrelation function of the latent volatility process and a key persistence parameter. Our analysis is motivated by the recent literature on realized (volatility) measures, such as the realized...... variance, that are imperfect estimates of actual volatility. In an empirical analysis using realized measures for the DJIA stocks we find the underlying volatility to be near unit root in all cases. Although standard unit root tests are asymptotically justified, we find them to be misleading in our...
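The IV idea in these two abstracts can be sketched with a simulated latent AR(1) process observed with noise: OLS on the noisy proxy understates persistence, while using a longer lag as instrument recovers it. This is a minimal sketch with made-up parameters, not the authors' estimator:

```python
import numpy as np

rng = np.random.default_rng(42)

# Latent AR(1) "volatility" process, observed with measurement error
T, phi = 200_000, 0.8
x = np.empty(T)
x[0] = 0.0
eta = rng.normal(size=T)
for t in range(1, T):
    x[t] = phi * x[t - 1] + eta[t]
y = x + rng.normal(scale=1.0, size=T)   # noisy proxy for the latent series

# OLS of y_t on y_{t-1}: attenuated toward zero by the measurement error
ols = np.cov(y[1:], y[:-1])[0, 1] / np.var(y[:-1])

# IV with y_{t-2} as instrument for y_{t-1}: the noise is serially
# uncorrelated, so the lag-2 / lag-1 autocovariance ratio recovers phi
iv = np.cov(y[2:], y[:-2])[0, 1] / np.cov(y[1:], y[:-1])[0, 1]

print(f"OLS: {ols:.3f}  IV: {iv:.3f}  true: {phi}")
```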
Murdoch, Maureen; Pryor, John B; Griffin, Joan M; Ripley, Diane Cowper; Gackstetter, Gary D; Polusny, Melissa A; Hodges, James S
2011-01-01
The Department of Defense's "gold standard" sexual harassment measure, the Sexual Harassment Core Measure (SHCore), is based on an earlier measure that was developed primarily in college women. Furthermore, the SHCore requires a reading grade level of 9.1. This may be higher than some troops' reading abilities and could generate unreliable estimates of their sexual harassment experiences. Results from 108 male and 96 female soldiers showed that the SHCore's temporal stability and alternate-forms reliability was significantly worse (a) in soldiers without college experience compared to soldiers with college experience and (b) in men compared to women. For men without college experience, almost 80% of the temporal variance in SHCore scores was attributable to error. A plain language version of the SHCore had mixed effects on temporal stability depending on education and gender. The SHCore may be particularly ill suited for evaluating population trends of sexual harassment in military men without college experience.
Rutkowski, Adam; Buraczewski, Adam; Horodecki, Paweł; Stobińska, Magdalena
2017-01-13
Quantum steering is a relatively simple test for proving that the values of quantum-mechanical measurement outcomes come into being only in the act of measurement. By exploiting quantum correlations, Alice can influence (steer) Bob's physical system in a way that is impossible in classical mechanics, as shown by the violation of steering inequalities. Demonstrating this and similar quantum effects for systems of increasing size, approaching even the classical limit, is a long-standing challenging problem. Here, we prove an experimentally feasible unbounded violation of a steering inequality. We derive its universal form, where tolerance for measurement-setting errors is explicitly built in by means of the Deutsch-Maassen-Uffink entropic uncertainty relation. Then, generalizing mutual unbiasedness, we apply the inequality to the multisinglet and multiparticle bipartite Bell state. However, the method is general and opens the possibility of employing multiparticle bipartite steering for randomness certification and the development of quantum technologies, e.g., random access codes.
Errors in shearography measurements due to the creep of the PZT shearing actuator
Zastavnik, Filip; Pyl, Lincy; Sol, Hugo; Kersemans, Mathias; Van Paepegem, Wim
2014-08-01
Shearography is a modern optical interferometric measurement technique. It uses the interferometric properties of coherent laser light to measure deformation gradients at the µm/m level. In the most common shearography setups, those employing a Michelson interferometer, the deformation gradients in both the x- and y-directions can be identified by setting angles on the shearing mirror. One of the mechanisms for setting the desired shearing angles in the Michelson interferometer uses PZT actuators. This paper reveals that the time-dependent creep behaviour of the PZT actuators is a major source of measurement errors. Measurements over long time spans suffer severely from this creep behaviour. Even for short time spans, which are typical for shearographic experiments, the creep behaviour of the PZT shear actuator induces considerable deviation in the measured response. In this paper the mechanism and the effect of PZT creep are explored and demonstrated with measurements. For long time-span measurements in shearography, noise is a limiting factor. Thus, the time-dependent evolution of noise is considered in this paper, with particular interest in the influence of external vibrations. Measurements with and without external vibration isolation are conducted and the difference between the two setups is analyzed. At the end of the paper some recommendations are given for minimizing and correcting the time-dependent effects studied here.
Holliday, Katelyn M; Avery, Christy L; Poole, Charles; McGraw, Kathleen; Williams, Ronald; Liao, Duanping; Smith, Richard L; Whitsel, Eric A
2014-01-01
Although ambient concentrations of particulate matter ≤10 μm (PM10) are often used as proxies for total personal exposure, the correlation (r) between ambient and personal PM10 concentrations varies. Factors underlying this variation and its effect on health outcome-PM exposure relationships remain poorly understood. We conducted a random-effects meta-analysis to estimate effects of study, participant, and environmental factors on r; used the estimates to impute personal exposure from ambient PM10 concentrations among 4,012 nonsmoking participants with diabetes in the Women's Health Initiative clinical trial; and then estimated the associations of ambient and imputed personal PM10 concentrations with electrocardiographic measures, such as heart rate variability. We identified 15 studies (in years 1990-2009) of 342 participants in five countries. The median r was 0.46 (range = 0.13 to 0.72). There was little evidence of funnel plot asymmetry but substantial heterogeneity of r, which increased 0.05 (95% confidence interval = 0.01 to 0.09) per 10 µg/m³ increase in mean ambient PM10 concentration. Substituting imputed personal exposure for ambient PM10 concentrations shifted mean percent changes in electrocardiographic measures per 10 µg/m³ increase in exposure away from the null and decreased their precision, for example, -2.0% (-4.6% to 0.7%) versus -7.9% (-15.9% to 0.9%), for the standard deviation of normal-to-normal RR interval duration. Analogous distributions and heterogeneity of r in extant meta-analyses of ambient and personal PM2.5 concentrations suggest that the observed shifts in mean percent change and decreases in precision may be generalizable across particle size.
Phalla, Thuch; Ota, Tetsuji; Mizoue, Nobuya; Kajisa, Tsuyoshi; Yoshida, Shigejiro; Vuthy, Ma; Heng, Sokh
2018-01-01
This study evaluated the uncertainty of individual tree biomass estimated by allometric models by both including and excluding tree height independently. Using two independent sets of measurements on the same trees, the errors in the measurement of diameter at breast height and tree height were quantified, and the uncertainty of individual tree biomass estimation caused by errors in measurement was calculated. For both allometric models, the uncertainties of the individual tree biomass estima...
Micklewright, John; Schnepf, Sylke V.; Silva, Pedro N.
2012-01-01
Investigation of peer effects on achievement with sample survey data on schools may mean that only a random sample of the population of peers is observed for each individual. This generates measurement error in peer variables similar in form to the textbook case of errors-in-variables, resulting in the estimated peer group effects in an OLS…
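The errors-in-variables attenuation described here has a well-known form: the OLS slope shrinks by the reliability ratio λ = var(x)/(var(x) + var(u)). A small simulation with hypothetical parameters illustrates it:

```python
import numpy as np

rng = np.random.default_rng(1)

n, beta = 200_000, 2.0
x = rng.normal(size=n)                 # true (unobserved) peer variable
w = x + rng.normal(size=n)             # observed with error, var(u) = 1
y = beta * x + rng.normal(size=n)

# OLS slope of y on w is attenuated by the reliability ratio
# lambda = var(x) / (var(x) + var(u)) = 1 / (1 + 1) = 0.5
slope = np.cov(y, w)[0, 1] / np.var(w)
print(f"estimated: {slope:.3f}  expected: {beta * 0.5}")
```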
Control chart limits based on true process capability with consideration of measurement system error
Directory of Open Access Journals (Sweden)
Amara Souha Ben
2016-01-01
Full Text Available Shewhart X̅ and R control charts and process capability indices, proven to be effective tools in statistical process control, are widely used under the assumption that the measurement system is free from errors. However, measurement variability is unavoidable and may be evaluated by the measurement system discrimination ratio (DR). This paper investigates the effects of measurement system variability, evaluated by DR, on the process capability indices Cp and Cpm, on the expected nonconforming units of product per million (ppm), on the expected mean value of the Taguchi loss function (E(Loss)), and on the properties of Shewhart charts. It is shown that when measurement system variability is neglected, an overestimation of ppm and an underestimation of E(Loss) are induced. Moreover, significant effects of the measurement variability on the control chart properties are demonstrated. Therefore, methods for calculating control chart limits based on the true process state were developed. An example is provided to compare the proposed limits with those traditionally calculated for Shewhart X̅ and R charts.
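Assuming one common definition of the discrimination ratio, DR² = 2σ_p²/σ_m² + 1 (so σ_p²/σ_obs² = (DR² − 1)/(DR² + 1)), the process-only spread and a capability index corrected for measurement error can be recovered from observed values. The numbers below are illustrative, not the paper's:

```python
import math

def true_process_sigma(sigma_obs, dr):
    """Recover the process-only sigma from the observed sigma, given a
    discrimination ratio DR with DR^2 = 2*sigma_p^2/sigma_m^2 + 1,
    which implies sigma_p^2 / sigma_obs^2 = (DR^2 - 1) / (DR^2 + 1)."""
    return sigma_obs * math.sqrt((dr ** 2 - 1) / (dr ** 2 + 1))

def cp(usl, lsl, sigma):
    """Basic process capability index Cp = (USL - LSL) / (6 sigma)."""
    return (usl - lsl) / (6 * sigma)

sigma_obs, dr = 1.0, 3.0
sigma_p = true_process_sigma(sigma_obs, dr)
print(round(cp(9.0, 3.0, sigma_obs), 3))   # Cp from observed (inflated) spread
print(round(cp(9.0, 3.0, sigma_p), 3))     # Cp from the true process spread
```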
Sim, Jae Hoon; Lauxmann, Michael; Chatzimichalis, Michail; Röösli, Christof; Eiber, Albrecht; Huber, Alexander M
2010-12-01
Previous studies have suggested complex modes of physiological stapes motions based upon various measurements. The goal of this study was to analyze the detailed errors in measurement of the complex stapes motions using laser Doppler vibrometer (LDV) systems, which are highly sensitive to the stimulation intensity and the exact angulations of the stapes. Stapes motions were measured with acoustic stimuli as well as mechanical stimuli using a custom-made three-axis piezoelectric actuator, and errors in the motion components were analyzed. The ratio of error in each motion component was reduced by increasing the magnitude of the stimuli, but the improvement was limited when the motion component was small relative to other components. This problem was solved with an improved reflectivity on the measurement surface. Errors in estimating the position of the stapes also caused errors on the coordinates of the measurement points and the laser beam direction relative to the stapes footplate, thus producing errors in the 3-D motion components. This effect was small when the position error of the stapes footplate did not exceed 5 degrees. Copyright © 2010 Elsevier B.V. All rights reserved.
Internal errors of ground-based terrestrial earthshine measurements in 5 colour bands.
Thejll, Peter; Gleisner, Hans; Flynn, Chris
2015-04-01
Measurements of earthshine intensity could be an important complement to satellite-based observations of terrestrial visual and near-IR radiative budgets because they are independent and relatively inexpensive to obtain and also offer different potentials for long-term bias stability. Using ground-based photometric instruments, the Moon is imaged several times a night through a range of photometric filters, and the ratio of the intensities of the dark (Earth-lit) and bright (Sun-lit) sides is calculated - this ratio is proportional to terrestrial albedo. Using forward modelling of the expected ratio, given assumptions about reflectance, single-scattering albedo, and light-scattering processes it is possible to deduce the terrestrial albedo. In this poster we present multicolour photometric results from observations on 10 nights, obtained at the NOAA observatory on Mauna Loa, Hawaii, in 2011. The Moon had different phases on these nights and we discuss in detail the behaviour of internal errors as a function of phase. The internal error is dependent on the photon-statistics of the images obtained and its magnitude is investigated by use of bootstrapping with replacement of observations. Results indicate that standard Johnson B and V band equivalent Lambert albedos can be obtained with precisions (1 standard deviation) in the 0.1 to 1% range for phases between 40 and 90 degrees. For longer wavelengths, corresponding to broader bands on either side of the 'Vegetation edge' at 750nm, we see larger variability in the albedo determinations and discuss whether these are due to atmospheric conditions or represent fast, intrinsic terrestrial albedo variations. The accuracy of these results, however, appear to depend on method choices, in particular the choice of lunar reflectance model -- this 'external error' will be investigated in future analyses.
Vidovič, Luka; Majaron, Boris
2013-03-01
Diffuse reflectance spectra (DRS) of biological samples are commonly measured using an integrating sphere (IS), in which spectrally broad illumination light is multiply scattered and homogenized. The measurement begins by placing a highly reflective white standard against the IS sample opening and collecting the reflected light at the signal output port to account for illumination field. After replacing the white standard with test sample of interest, DRS of the latter is determined as the ratio of the two values at each involved wavelength. However, because test samples are invariably less reflective than the white standard, such a substitution modifies the illumination field inside the IS. This leads to underestimation of the sample's reflectivity and distortion of measured DRS, which is known as single-beam substitution error (SBSE). Barring the use of much more complex dual-beam experimental setups, involving dedicated IS, literature states that only approximate corrections of SBSE are possible, e.g., by using look-up tables generated with calibrated low-reflectivity standards. We present a practical way to eliminate the SBSE using IS equipped with an additional "reference" output port. Two additional measurements performed at this port (of the white standard and sample, respectively) namely enable an accurate compensation for above described alteration of the illumination field. In addition, we analyze the dependency of SBSE on sample reflectivity and illustrate its impact on measurements of DRS in human skin with a typical IS.
Butt, Nathalie; Slade, Eleanor; Thompson, Jill; Malhi, Yadvinder; Riutta, Terhi
2013-06-01
A typical way to quantify aboveground carbon in forests is to measure tree diameters and use species-specific allometric equations to estimate biomass and carbon stocks. Using "citizen scientists" to collect data that are usually time-consuming and labor-intensive can play a valuable role in ecological research. However, data validation, such as establishing the sampling error in volunteer measurements, is a crucial, but little studied, part of utilizing citizen science data. The aims of this study were to (1) evaluate the quality of tree diameter and height measurements carried out by volunteers compared to expert scientists and (2) estimate how sensitive carbon stock estimates are to these measurement sampling errors. Using all diameter data measured with a diameter tape, the volunteer mean sampling error (difference between repeated measurements of the same stem) was 9.9 mm, and the expert sampling error was 1.8 mm. Excluding those sampling errors > 1 cm, the mean sampling errors were 2.3 mm (volunteers) and 1.4 mm (experts) (this excluded 14% [volunteer] and 3% [expert] of the data). The sampling error in diameter measurements had a small effect on the biomass estimates of the plots: a volunteer (expert) diameter sampling error of 2.3 mm (1.4 mm) translated into 1.7% (0.9%) change in the biomass estimates calculated from species-specific allometric equations based upon diameter. Height sampling error had a dependent relationship with tree height. Including height measurements in biomass calculations compounded the sampling error markedly; the impact of volunteer sampling error on biomass estimates was ±15%, and the expert range was ±9%. Using dendrometer bands, used to measure growth rates, we calculated that the volunteer (vs. expert) sampling error was 0.6 mm (vs. 0.3 mm), which is equivalent to a difference in carbon storage of ±0.011 kg C/yr (vs. ±0.002 kg C/yr) per stem. Using a citizen science model for monitoring carbon stocks not only has
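For a power-law allometric model B = a·D^b, first-order propagation gives ΔB/B ≈ b·ΔD/D, which shows how diameter sampling errors of the magnitudes reported translate into biomass error. The diameter and coefficients below are hypothetical, not the study's species-specific values:

```python
def biomass(d_cm, a=0.1, b=2.4):
    """Hypothetical allometric model B = a * D^b (a and b illustrative only)."""
    return a * d_cm ** b

def relative_biomass_error(d_cm, d_err_cm, b=2.4):
    """First-order error propagation: dB/B ~= b * dD/D for B = a * D^b."""
    return b * d_err_cm / d_cm

# 2.3 mm (volunteer) vs 1.4 mm (expert) sampling error on a 20 cm stem
d, vol_err, exp_err = 20.0, 0.23, 0.14     # all in cm
print(f"volunteer: {relative_biomass_error(d, vol_err):.1%}")
print(f"expert:    {relative_biomass_error(d, exp_err):.1%}")
```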
The quantification and correction of wind-induced precipitation measurement errors
Kochendorfer, John; Rasmussen, Roy; Wolff, Mareile; Baker, Bruce; Hall, Mark E.; Meyers, Tilden; Landolt, Scott; Jachcik, Al; Isaksen, Ketil; Brækkan, Ragnar; Leeper, Ronald
2017-04-01
Hydrologic measurements are important for both the short- and long-term management of water resources. Of the terms in the hydrologic budget, precipitation is typically the most important input; however, measurements of precipitation are subject to large errors and biases. For example, an all-weather unshielded weighing precipitation gauge can collect less than 50 % of the actual amount of solid precipitation when wind speeds exceed 5 m s⁻¹. Using results from two different precipitation test beds, such errors have been assessed for unshielded weighing gauges and for weighing gauges employing four of the most common windshields currently in use. Functions to correct wind-induced undercatch were developed and tested. In addition, corrections for the single-Alter weighing gauge were developed using the combined results of two separate sites in Norway and the USA. In general, the results indicate that the functions effectively correct the undercatch bias that affects such precipitation measurements. In addition, a single function developed for the single-Alter gauges effectively decreased the bias at both sites, with the bias at the US site improving from -12 % to 0 %, and the bias at the Norwegian site improving from -27 % to -4 %. These correction functions require only wind speed and air temperature as inputs, and were developed for use in national and local precipitation networks, hydrological monitoring, roadway and airport safety work, and climate change research. The techniques used to develop and test these transfer functions at more than one site can also be used for other more comprehensive studies, such as the World Meteorological Organization Solid Precipitation Intercomparison Experiment (WMO-SPICE).
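The correction described above can be sketched as a transfer function that maps wind speed and air temperature to a catch efficiency, by which the gauge measurement is divided. The functional form and coefficients below are purely illustrative assumptions (the published functions differ); the sketch only shows how such a correction is applied.

```python
import math

def catch_efficiency(wind_speed_ms, air_temp_c):
    # Hypothetical transfer function: efficiency is ~1 for rain (warm
    # temperatures) and decays exponentially with wind speed for solid
    # precipitation, floored to avoid unbounded corrections.
    if air_temp_c > 2.0:
        return 1.0
    return max(0.3, math.exp(-0.14 * wind_speed_ms))

def corrected_precipitation(measured_mm, wind_speed_ms, air_temp_c):
    # Divide the gauge catch by the estimated catch efficiency.
    return measured_mm / catch_efficiency(wind_speed_ms, air_temp_c)
```

For example, at -5 °C and 5 m s⁻¹ the assumed efficiency is exp(-0.7) ≈ 0.50, so a 1 mm gauge catch would be corrected to roughly 2 mm.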
Joachim, Nichole; Rochtchina, Elena; Tan, Ava Grace; Hong, Thomas; Mitchell, Paul; Wang, Jie Jin
2012-08-07
Previous studies have reported high right-left eye correlation in retinal vessel caliber. We test the hypothesis that right-left correlation in retinal vessel caliber would be reduced in anisometropic compared with emmetropic children. Retinal arteriolar and venular calibers were measured in 12-year-old children. Three groups were selected: group 1, both eyes emmetropic (n = 214); group 2, right-left spherical equivalent refraction (SER) difference ≥1.00 but <2.00 D; and group 3, right-left SER difference ≥2.00 D (n = 32). Pearson's correlations between the two eyes were compared between group 1 and group 2 or 3. Associations between right-left difference in refractive error and right-left difference in caliber measurements were assessed using linear regression models. Right-left correlation in group 1 was 0.57 for central retinal arteriolar equivalent (CRAE) and 0.70 for central retinal venular equivalent (CRVE) compared with 0.60 and 0.82 for CRAE and CRVE, respectively, in group 2 (P = 0.42 and P = 0.08), and 0.36 and 0.52, respectively, in group 3 (P = 0.08 and P = 0.07, referenced to group 1). Each 1.00-D increase in right-left SER difference was associated with a 0.74-μm increase in mean CRAE difference (P = 0.02) and a 1.23-μm increase in mean CRVE difference between the two eyes (P = 0.002). Each 0.1-mm increase in right-left difference in axial length was associated with a 0.21-μm increase in the mean difference in CRAE (P = 0.01) and a 0.42-μm increase in the mean difference in CRVE (P < 0.0001) between the two eyes. Refractive error ≥2.00 D may contribute to variation in measurements of retinal vessel caliber.
Detection of microcalcifications in mammograms using error of prediction and statistical measures
Acha, Begoña; Serrano, Carmen; Rangayyan, Rangaraj M.; Leo Desautels, J. E.
2009-01-01
A two-stage method for detecting microcalcifications in mammograms is presented. In the first stage, the determination of the candidates for microcalcifications is performed. For this purpose, a 2-D linear prediction error filter is applied, and for those pixels where the prediction error is larger than a threshold, a statistical measure is calculated to determine whether they are candidates for microcalcifications or not. In the second stage, a feature vector is derived for each candidate, and after a classification step using a support vector machine, the final detection is performed. The algorithm is tested with 40 mammographic images, from Screen Test: The Alberta Program for the Early Detection of Breast Cancer, with 50-μm resolution, and the results are evaluated using a free-response receiver operating characteristics curve. Two different analyses are performed: an individual microcalcification detection analysis and a cluster analysis. In the analysis of individual microcalcifications, detection sensitivity values of 0.75 and 0.81 are obtained at 2.6 and 6.2 false positives per image, on average, respectively. The best performance is characterized by a sensitivity of 0.89, a specificity of 0.99, and a positive predictive value of 0.79. In cluster analysis, a sensitivity value of 0.97 is obtained at 1.77 false positives per image, and a value of 0.90 is achieved at 0.94 false positives per image.
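The first stage can be illustrated with a toy prediction-error filter. The causal three-neighbor average predictor and the fixed threshold below are simplified stand-ins, not the paper's optimized filter coefficients or statistical measure; the sketch only shows why a bright, isolated microcalcification produces a large prediction error.

```python
def prediction_error(img, r, c):
    # Causal first-order 2-D linear predictor: average of the west,
    # north, and northwest neighbours (a simplified stand-in for an
    # optimized linear prediction error filter).
    pred = (img[r][c - 1] + img[r - 1][c] + img[r - 1][c - 1]) / 3.0
    return img[r][c] - pred

def candidate_pixels(img, threshold):
    # Flag pixels whose prediction error exceeds the threshold.
    out = []
    for r in range(1, len(img)):
        for c in range(1, len(img[0])):
            if prediction_error(img, r, c) > threshold:
                out.append((r, c))
    return out
```

On a flat background, only a pixel much brighter than its causal neighborhood is flagged; such candidates would then go to the second-stage classifier.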
The Effect of Error Correlation on Interfactor Correlation in Psychometric Measurement
Westfall, Peter H.; Henning, Kevin S. S.; Howell, Roy D.
2012-01-01
This article shows how interfactor correlation is affected by error correlations. Theoretical and practical justifications for error correlations are given, and a new equivalence class of models is presented to explain the relationship between interfactor correlation and error correlations. The class allows simple, parsimonious modeling of error…
Liu, Yan; Salvendy, Gavriel
2009-05-01
This paper aims to demonstrate the effects of measurement errors on psychometric measurements in ergonomics studies. A variety of sources can cause random measurement errors in ergonomics studies, and these errors can distort virtually every statistic computed and lead investigators to erroneous conclusions. The effects of measurement errors on the five most widely used statistical analysis tools are discussed and illustrated: correlation, ANOVA, linear regression, factor analysis, and linear discriminant analysis. It has been shown that measurement errors can greatly attenuate correlations between variables, reduce the statistical power of ANOVA, distort (overestimate, underestimate or even change the sign of) regression coefficients, underrate the explanation contributions of the most important factors in factor analysis, and depreciate the significance of the discriminant function and the discrimination abilities of individual variables in discriminant analysis. The discussions will be restricted to subjective scales and survey methods and their reliability estimates. Other methods applied in ergonomics research, such as physical and electrophysiological measurements and chemical and biomedical analysis methods, also have issues of measurement errors, but they are beyond the scope of this paper. As there has been increasing interest in the development and testing of theories in ergonomics research, it has become very important for ergonomics researchers to understand the effects of measurement errors on their experiment results, which the authors believe is critical to research progress in theory development and cumulative knowledge in the ergonomics field.
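The attenuation of correlations mentioned above follows Spearman's classical formula: the observed correlation equals the true correlation scaled by the square root of the product of the two measures' reliabilities. This is a standard result, not specific to this paper's examples.

```python
def attenuated_correlation(true_r, reliability_x, reliability_y):
    # Spearman's attenuation formula:
    #   r_observed = r_true * sqrt(rel_x * rel_y)
    return true_r * (reliability_x * reliability_y) ** 0.5
```

For instance, a true correlation of 0.8 between constructs measured with reliability 0.7 each is observed as only about 0.56, which is the kind of distortion the paper warns about.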
Directory of Open Access Journals (Sweden)
B. Torres
2013-08-01
Full Text Available Sensitivity studies indicate that among the diverse error sources of ground-based sky radiometer observations, the pointing error plays an important role in the correct retrieval of aerosol properties. Accurate pointing is especially critical for the characterization of desert dust aerosol. The present work relies on the analysis of two new measurement procedures (cross and matrix) specifically designed for the evaluation of the pointing error in the standard instrument of the Aerosol Robotic Network (AERONET), the Cimel CE-318 Sun photometer. The first part of the analysis contains a preliminary study whose results establish the need for a Sun movement correction for an accurate evaluation of the pointing error from both new measurements. Once this correction is applied, both measurements show equivalent results, with differences under 0.01° in the pointing error estimations. The second part of the analysis includes the incorporation of the cross procedure in the AERONET routine measurement protocol in order to monitor the pointing error in field instruments. The pointing error was evaluated using data collected over more than a year from 7 Sun photometers belonging to AERONET sites. The registered pointing error values were generally smaller than 0.1°, though in some instruments values up to 0.3° have been observed. Moreover, the pointing error analysis shows that this measurement can be useful to detect mechanical problems in the robots or dirtiness in the 4-quadrant detector used to track the Sun. Specifically, these mechanical faults can be detected due to the stable behavior of the values over time and with solar zenith angle. Finally, the matrix procedure can be used to derive the value of the solid view angle of the instruments. The methodology has been implemented and applied for the characterization of 5 Sun photometers. To validate the method, a comparison with solid angles obtained from the vicarious calibration method was
Oh, Eric J; Shepherd, Bryan E; Lumley, Thomas; Shaw, Pamela A
2017-11-29
For time-to-event outcomes, a rich literature exists on the bias introduced by covariate measurement error in regression models, such as the Cox model, and methods of analysis to address this bias. By comparison, less attention has been given to understanding the impact or addressing errors in the failure time outcome. For many diseases, the timing of an event of interest (such as progression-free survival or time to AIDS progression) can be difficult to assess or reliant on self-report and therefore prone to measurement error. For linear models, it is well known that random errors in the outcome variable do not bias regression estimates. With nonlinear models, however, even random error or misclassification can introduce bias into estimated parameters. We compare the performance of 2 common regression models, the Cox and Weibull models, in the setting of measurement error in the failure time outcome. We introduce an extension of the SIMEX method to correct for bias in hazard ratio estimates from the Cox model and discuss other analysis options to address measurement error in the response. A formula to estimate the bias induced into the hazard ratio by classical measurement error in the event time for a log-linear survival model is presented. Detailed numerical studies are presented to examine the performance of the proposed SIMEX method under varying levels and parametric forms of the error in the outcome. We further illustrate the method with observational data on HIV outcomes from the Vanderbilt Comprehensive Care Clinic. Copyright © 2017 John Wiley & Sons, Ltd.
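The simulate-extrapolate idea behind SIMEX can be illustrated in the simpler linear-model setting (the paper extends it to Cox hazard ratios). In this sketch, all data are synthetic, the error variance is assumed known, and a quadratic extrapolant through λ = 0, 1, 2 is used; evaluated at λ = -1 it equals 3s₀ - 3s₁ + s₂.

```python
import random

def slope(x, y):
    # Ordinary least-squares slope of y on x.
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    return sxy / sxx

random.seed(0)
n, beta, sigma_u = 5000, 2.0, 0.6
x = [random.gauss(0.0, 1.0) for _ in range(n)]
y = [beta * xi + random.gauss(0.0, 0.2) for xi in x]
w = [xi + random.gauss(0.0, sigma_u) for xi in x]   # error-prone covariate

def mean_slope(lam, reps=25):
    # Average naive slope after adding extra noise of variance lam * sigma_u^2.
    total = 0.0
    for _ in range(reps):
        w_lam = [wi + random.gauss(0.0, sigma_u * lam ** 0.5) for wi in w]
        total += slope(w_lam, y)
    return total / reps

s0, s1, s2 = mean_slope(0.0), mean_slope(1.0), mean_slope(2.0)
# Quadratic extrapolation through (0, s0), (1, s1), (2, s2) at lambda = -1.
beta_simex = 3.0 * s0 - 3.0 * s1 + s2
```

The naive slope s₀ is attenuated toward zero (about β·σₓ²/(σₓ² + σᵤ²)); the extrapolated estimate moves back toward the true β = 2.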
Katz
2000-11-01
Utilizing the two-microphone impedance tube method, the normal incidence acoustic absorption and acoustic impedance can be measured for a given sample. This method relies on the measured transfer function between two microphones, and the knowledge of their precise location relative to each other and the sample material. In this article, a method is proposed to accurately determine these locations. A third sensor is added at the end of the tube to simplify the measurement. First, a justification and investigation of the method is presented. Second, reference terminations are measured to evaluate the accuracy of the apparatus. Finally, comparisons are made between the new method and current methods for determining these distances and the variations are discussed. From this, conclusions are drawn with regard to the applicability and need for the new method and under which circumstances it is applicable. Results show that the method provides a reliable determination of both microphone locations, which is not possible using the current techniques. Errors due to inaccurate determination of these parameters between methods were on the order of 3% for R and 12% for Re Z.
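The sensitivity to microphone positions comes from how the transfer function is inverted for the reflection coefficient. The sketch below uses a generic two-position inversion derived from a one-dimensional standing-wave field (my own sign convention, not the paper's third-sensor procedure or the exact ISO formulation): given the transfer function H between two points at distances xa and xb from the sample, solve for the complex reflection coefficient R.

```python
import cmath

def pressure(x, k, R):
    # 1-D standing-wave field: unit incident wave plus reflected wave of
    # complex coefficient R; x is the distance from the sample surface.
    return cmath.exp(1j * k * x) + R * cmath.exp(-1j * k * x)

def reflection_from_transfer(H, k, xa, xb):
    # Invert H = p(xb) / p(xa) for R (algebraic rearrangement of the
    # field expression above).
    num = cmath.exp(1j * k * xb) - H * cmath.exp(1j * k * xa)
    den = H * cmath.exp(-1j * k * xa) - cmath.exp(-1j * k * xb)
    return num / den

def absorption(R):
    # Normal-incidence absorption coefficient.
    return 1.0 - abs(R) ** 2
```

Because xa and xb enter through complex exponentials, small position errors rotate the recovered R, which is why accurate microphone locations matter.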
Distance Measurement Error in Time-of-Flight Sensors Due to Shot Noise
Directory of Open Access Journals (Sweden)
Julio Illade-Quinteiro
2015-02-01
Full Text Available Unlike other noise sources, which can be reduced or eliminated by different signal processing techniques, shot noise is an ever-present noise component in any imaging system. In this paper, we present an in-depth study of the impact of shot noise on time-of-flight sensors in terms of the error introduced in the distance estimation. The paper addresses the effects of parameters such as the size of the photosensor, the background and signal power, and the integration time, and the resulting design trade-offs. The study is demonstrated with different numerical examples, which show that, in general, the phase-shift determination technique with two background measurements is the most suitable for pixel arrays of large resolution.
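The phase-shift distance estimation underlying such sensors can be sketched with the standard four-phase (four-bucket) demodulation, a textbook formulation rather than the paper's specific technique: four correlation samples taken 90° apart recover the modulation phase, and hence the distance.

```python
import math

C = 299_792_458.0  # speed of light, m/s

def phase_from_taps(c0, c1, c2, c3):
    # Four correlation samples c_i = B + A*cos(phi - i*pi/2), so
    # c1 - c3 = 2A*sin(phi) and c0 - c2 = 2A*cos(phi).
    return math.atan2(c1 - c3, c0 - c2) % (2.0 * math.pi)

def tof_distance(c0, c1, c2, c3, f_mod):
    # Distance from the phase shift; unambiguous up to C / (2 * f_mod).
    phi = phase_from_taps(c0, c1, c2, c3)
    return C * phi / (4.0 * math.pi * f_mod)
```

Shot noise perturbs each tap with a standard deviation that grows as the square root of the collected charge, which propagates through the atan2 into a distance error; the paper quantifies this propagation for different sensor parameters.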
Bressmann, Tim; Harper, Susan; Zhylich, Irina; Kulkarni, Gajanan V
2016-01-01
Outcomes of articulation therapy for rhotic errors are usually assessed perceptually. However, our understanding of associated changes of tongue movement is limited. This study described perceptual, durational and tongue displacement changes over 10 sessions of articulation therapy for /ɹ/ in six children. Four of the participants also received ultrasound biofeedback of their tongue shape. Speech and tongue movement were recorded pre-therapy, after 5 sessions, in the final session and at a one month follow-up. Perceptually, listeners perceived improvement and classified more productions as /ɹ/ in the final and follow-up assessments. The durations of VɹV syllables at the midway point of the therapy were longer. Cumulative tongue displacement increased in the final session. The average standard deviation was significantly higher in the middle and final assessments. The duration and tongue displacement measures illustrated how articulation therapy affected tongue movement and may be useful for outcomes research about articulation therapy.
Weir, Kent A.; Wells, Eugene M.
1990-01-01
The design and operation of a Strapdown Navigation Analysis Program (SNAP) developed to perform covariance analysis on spacecraft inertial-measurement-unit (IMU) navigation errors are described and demonstrated. Consideration is given to the IMU modeling subroutine (with user-specified sensor characteristics), the data input procedures, state updates and the simulation of instrument failures, the determination of the nominal trajectory, the mapping-matrix and Monte Carlo covariance-matrix propagation methods, and aided-navigation simulation. Numerical results are presented in tables for sample applications involving (1) the Galileo/IUS spacecraft from its deployment from the Space Shuttle to a point 10⁸ ft from the center of the earth and (2) the TDRS-C/IUS spacecraft from Space Shuttle liftoff to a point about 2 h before IUS deployment. SNAP is shown to give reliable results for both cases, with good general agreement between the mapping-matrix and Monte Carlo predictions.
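The core operation behind any such covariance-analysis tool is linear covariance propagation, P' = F P Fᵀ + Q. The 2x2 constant-velocity example below is a generic illustration of that step, not SNAP's actual error model.

```python
def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def transpose(A):
    return [list(row) for row in zip(*A)]

def propagate(P, F, Q):
    # One step of linear covariance propagation: P' = F P F^T + Q.
    FPFt = mat_mul(mat_mul(F, P), transpose(F))
    return [[FPFt[i][j] + Q[i][j] for j in range(len(P))]
            for i in range(len(P))]
```

For a [position, velocity] state with time step 0.5 s and identity initial covariance, velocity uncertainty leaks into position uncertainty, exactly the mechanism a tool like SNAP iterates along a trajectory.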
Directory of Open Access Journals (Sweden)
MATTHIAS ZIEGLER
2009-03-01
Full Text Available The present article reanalyzed data collected by Toomela (2003). The data contain personality self-ratings and cognitive ability test results from n = 912 men with military background. In his original article Toomela showed that in the group with the highest cognitive ability, Big-Five-Neuroticism and -Conscientiousness were substantially correlated and could no longer be clearly separated using exploratory factor analysis. The present reanalysis was based on the hypothesis that a spurious measurement error caused by situational demand was responsible. This means that people distorted their answers. Furthermore it was hypothesized that this situational demand was felt due to a person’s military rank but not due to his intelligence. Using a multigroup structural equation model, our hypothesis could be confirmed. Moreover, the results indicate that an uncorrelated trait model might represent personalities better when situational demand is partialized. Practical and theoretical implications are discussed.
DEFF Research Database (Denmark)
Cano-Fácila, Francisco José; Pivnenko, Sergey; Sierra-Castaner, Manuel
2012-01-01
A method to reduce truncation errors in near-field antenna measurements is presented. The method is based on the Gerchberg-Papoulis iterative algorithm used to extrapolate band-limited functions, and it is able to extend the valid region of the calculated far-field pattern up to the whole forward hemisphere. The extension of the valid region is achieved by the iterative application of a transformation between two different domains. After each transformation, a filtering process that is based on known information in each domain is applied. The first domain is the spectral domain, in which the plane wave spectrum (PWS) is reliable only within a known region. The second domain is the field distribution over the antenna under test (AUT) plane, in which the desired field is assumed to be concentrated on the antenna aperture. The method can be applied to any scanning geometry, but in this paper, only the planar
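The Gerchberg-Papoulis iteration can be shown on a 1-D toy problem (the paper applies the same idea in 2-D to the plane wave spectrum): alternately enforce the band limit in the DFT domain and reimpose the known samples, which extrapolates a band-limited signal into the unmeasured region. Signal, band limit, and iteration count here are all illustrative choices.

```python
import cmath
import math

def dft(x):
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
            for k in range(N)]

def idft(X):
    N = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * n / N) for k in range(N)).real / N
            for n in range(N)]

N, known = 32, 26
truth = [math.cos(2.0 * math.pi * 2 * n / N) for n in range(N)]  # band-limited signal
est = [truth[n] if n < known else 0.0 for n in range(N)]         # last samples unknown

for _ in range(1000):
    X = dft(est)
    # Projection 1: enforce the band limit (keep only |k| <= 2).
    X = [X[k] if (k <= 2 or k >= N - 2) else 0.0 for k in range(N)]
    est = idft(X)
    # Projection 2: reimpose the known (measured) samples.
    for n in range(known):
        est[n] = truth[n]
```

After the iterations, the estimate on the unknown samples is close to the true signal, which is the discrete analogue of extending the reliable region of the far-field pattern.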
Roy, Surupa; Banerjee, Tathagata
2009-06-01
A multivariate probit model for correlated binary responses given the predictors of interest has been considered. Some of the responses are subject to classification errors and hence are not directly observable. Also measurements on some of the predictors are not available; instead the measurements on its surrogate are available. However, the conditional distribution of the unobservable predictors given the surrogate is completely specified. Models are proposed taking into account either or both of these sources of errors. Likelihood-based methodologies are proposed to fit these models. To ascertain the effect of ignoring classification errors and/or measurement error on the estimates of the regression and correlation parameters, a sensitivity study is carried out through simulation. Finally, the proposed methodology is illustrated through an example.
Nord, G.; Martín-Vide, J. P.; Latron, J.; Soler, M.; Gallart, F.
2009-04-01
The Cal Rodó catchment (4.17 km²) is located in a Mediterranean mountain area. Land cover is dominated by pastures and forest, and badlands represent 2.8% of the surface of the catchment. Elevation ranges between 1100 m and 1650 m, and average annual precipitation is about 900 mm, distributed heterogeneously through the year. Autumn and spring are the seasons with more precipitation. Flash floods are relatively frequent, especially in autumn, and are associated with high sediment transport. The period of observation ranges from 1994 to 2008. Discharge is measured at a gauging station controlled by a two-level rectangular notch weir with two different widths and contraction conditions that ensure a unique relationship between flow depth and discharge. The structure, designed to flush sediment, makes it possible to capture a wide range of discharge. Flow depth is measured using a pressure sensor. Instantaneous discharge was lower than 0.1 m³/s approximately 95% of the time and higher than 0.5 m³/s approximately 1% of the time. The largest runoff event measured produced an instantaneous discharge of approximately 10 m³/s. The second level of the gauging station was rarely reached, since it was flooded on average 1.5 times per year, but the corresponding events contributed approximately 60% of the sediment transport. The structure is efficient, as it was never submerged over the observed period and sediment deposition was negligible, but it has a complex shape that makes it difficult to relate water depth accurately to discharge, especially for large runoff events. In situ measurement of discharge by current meters or chemical dilution during high water stages is infeasible due to the flashiness of the response. Therefore, a hydraulic physical model (scale 1:11) was set up and calibrated to improve the stage-discharge curve and estimate the measurement errors of discharge. Sources of errors taken into account in this study are related to the precision and calibration of the pressure
Reducing the impact of measurement errors in FRF-based substructure decoupling using a modal model
Peeters, P.; Manzato, S.; Tamarozzi, T.; Desmet, W.
2018-01-01
As the vibro-acoustic requirements of modern products become more stringent, the need for robust identification methods increases proportionally. Sometimes the identification of a component is greatly complicated by the presence of a supporting structure that cannot be removed during testing. This is where substructure decoupling finds its main applications. However, despite some recent advances in substructure decoupling, the number of successful applications has so far been limited. The main reason for this is the poor conditioning of the problem that tends to amplify noise and other measurement errors. This paper proposes a new approach that uses a modal model to filter the experimental frequency response functions (FRFs). This can reduce the impact of noise and mass loading considerably for decoupling applications and decrease the quality requirements for experimental data. Furthermore, based on the uncertainty of the observed eigenfrequencies, an arbitrary number of consistent (all FRFs exhibit exactly the same poles) FRF matrices can be generated that are all contained within the variation of the original measurement. This way, the variation that is observed within the measurement is taken into account. The result is a distribution of decoupled FRFs of which the average can be used as the decoupled FRF set while the spread on the results highlights the sensitivity or reliability of the obtained results. After briefly reintroducing the theory of FRF-based substructure decoupling, the main problems in decoupling are summarized. Afterwards, the new methodology is presented and tested on both numerical and experimental cases.
Koltick, David; Wang, Haoyu; Liu, Shih-Chieh; Heim, Jordan; Nistor, Jonathan
2016-03-01
Typical nuclear decay constants are measured at the accuracy level of 10⁻². There are numerous applications, such as tests of unconventional theories, dating of materials, and long-term inventory evolution, that require decay-constant accuracy at a level of 10⁻⁴ to 10⁻⁵. The statistical and systematic errors associated with precision measurements of decays using the counting technique are presented. Precision requires high count rates, which introduces time-dependent dead-time and pile-up corrections. An approach to overcome these issues by continuously recording the detector current is presented. Other systematic corrections are also discussed, including the time-dependent dead time due to background radiation, control of target motion and radiation flight-path variation due to environmental conditions, and time-dependent effects caused by scattered events. The incorporation of blind experimental techniques can help make a measurement independent of past results. A spectrometer design and data analysis that can accomplish these goals are reviewed. The author would like to thank TechSource, Inc. and Advanced Physics Technologies, LLC. for their support in this work.
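The dead-time correction mentioned above is commonly handled with the standard non-paralyzable detector model, in which a measured rate m and a dead time τ give the true rate n = m / (1 - mτ). This is the textbook formula, shown here as a minimal sketch rather than the authors' continuous-current approach.

```python
def true_rate(measured_rate, dead_time):
    # Non-paralyzable dead-time model: m = n / (1 + n*tau), inverted to
    # n = m / (1 - m*tau). Valid while measured_rate * dead_time < 1.
    return measured_rate / (1.0 - measured_rate * dead_time)

def measured_rate(true_rate_cps, dead_time):
    # Forward model, useful for checking the inversion.
    return true_rate_cps / (1.0 + true_rate_cps * dead_time)
```

At 10⁵ counts/s with a 1 µs dead time, the correction is already 10%, which illustrates why high-precision decay-constant work must treat dead time carefully.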
Valuing urban open space using the travel-cost method and the implications of measurement error.
Hanauer, Merlin M; Reid, John
2017-08-01
Urbanization has placed pressure on open space within and adjacent to cities. In recent decades, a greater awareness has developed to the fact that individuals derive multiple benefits from urban open space. Given the location, there is often a high opportunity cost to preserving urban open space, thus it is important for both public and private stakeholders to justify such investments. The goals of this study are twofold. First, we use detailed surveys and precise, accessible, mapping methods to demonstrate how travel-cost methods can be applied to the valuation of urban open space. Second, we assess the degree to which typical methods of estimating travel times, and thus travel costs, introduce bias to the estimates of welfare. The site we study is Taylor Mountain Regional Park, a 1100-acre space located immediately adjacent to Santa Rosa, California, which is the largest city (∼170,000 population) in Sonoma County and lies 50 miles north of San Francisco. We estimate that the average per trip access value (consumer surplus) is $13.70. We also demonstrate that typical methods of measuring travel costs significantly understate these welfare measures. Our study provides policy-relevant results and highlights the sensitivity of urban open space travel-cost studies to bias stemming from travel-cost measurement error. Copyright © 2017 Elsevier Ltd. All rights reserved.
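In count-data travel-cost models of the kind used in such studies, per-trip consumer surplus has a closed form: -1/β, where β is the (negative) travel-cost coefficient from a semi-log trip-demand equation. The coefficient below is hypothetical, chosen only so the sketch reproduces a value near the paper's reported $13.70; it is not the paper's estimate.

```python
def per_trip_consumer_surplus(beta_travel_cost):
    # Semi-log (e.g., Poisson) demand: ln E[trips] = a + beta * travel_cost,
    # with beta < 0. Per-trip consumer surplus is -1 / beta.
    return -1.0 / beta_travel_cost
```

This is also where travel-cost measurement error bites: a biased travel-cost variable biases β, and the welfare estimate scales as its reciprocal.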
Thome, Kurtis; McCorkel, Joel; McAndrew, Brendan
2016-01-01
The Climate Absolute Radiance and Refractivity Observatory (CLARREO) mission addresses the need to observe high-accuracy, long-term climate change trends and to use decadal change observations as a method to determine the accuracy of climate change. A CLARREO objective is to improve the accuracy of SI-traceable, absolute calibration at infrared and reflected solar wavelengths to reach on-orbit accuracies required to allow climate change observations to survive data gaps and observe climate change at the limit of natural variability. Such an effort will also demonstrate National Institute of Standards and Technology (NIST) approaches for use in future spaceborne instruments. The current work describes the results of laboratory and field measurements with the Solar, Lunar for Absolute Reflectance Imaging Spectroradiometer (SOLARIS), which is the calibration demonstration system (CDS) for the reflected solar portion of CLARREO. SOLARIS allows testing and evaluation of calibration approaches, alternate design and/or implementation approaches and components for the CLARREO mission. SOLARIS also provides a test-bed for detector technologies, non-linearity determination and uncertainties, and application of future technology developments and suggested spacecraft instrument design modifications. Results of laboratory calibration measurements are provided to demonstrate key assumptions about instrument behavior that are needed to achieve CLARREO's climate measurement requirements. Absolute radiometric response is determined using laser-based calibration sources and applied to direct solar views for comparison with accepted solar irradiance models to demonstrate accuracy values giving confidence in the error budget for the CLARREO reflectance retrieval.
Obesity increases precision errors in total body dual-energy x-ray absorptiometry measurements.
Knapp, Karen M; Welsman, Joanne R; Hopkins, Susan J; Shallcross, Andrew; Fogelman, Ignac; Blake, Glen M
2015-01-01
Total body (TB) dual-energy X-ray absorptiometry (DXA) is increasingly being used to measure body composition in research and clinical settings. This study investigated the effect of body mass index (BMI) and body fat on precision errors for total and regional TB DXA measurements of bone mineral density, fat tissue, and lean tissue using the GE Lunar Prodigy (GE Healthcare, Bedford, UK). One hundred forty-four women with BMIs ranging from 18.5 to 45.9 kg/m² were recruited. Participants had duplicate DXA scans of the TB with repositioning between examinations. Participants were divided into 3 groups based on their BMI, and the root mean square standard deviation and the percentage coefficient of variation were calculated for each group. The root mean square standard deviation (percentage coefficient of variation) for the normal (<25 kg/m²), overweight (25-30 kg/m²), and obese (>30 kg/m²; n = 32) BMI groups, respectively, were total BMD (g/cm²): 0.009 (0.77%), 0.009 (0.69%), 0.011 (0.91%); total fat (g): 545 (2.98%), 486 (1.72%), 677 (1.55%); total lean (g): 551 (1.42%), 540 (1.34%), and 781 (1.68%). These results suggest that serial measurements in obese subjects should be treated with caution because the least significant change may be larger than anticipated. Copyright © 2015 The International Society for Clinical Densitometry. Published by Elsevier Inc. All rights reserved.
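The precision statistics used above follow a standard recipe for duplicate scans: each pair's standard deviation is |difference|/√2, pooled across participants as a root mean square, and the least significant change at 95% confidence is 2.77 times the precision error. The numbers in the test are made up for illustration.

```python
def rms_sd(pairs):
    # Precision error from duplicate measurements: the SD of each pair
    # is |a - b| / sqrt(2), pooled as a root mean square across subjects.
    n = len(pairs)
    return (sum((a - b) ** 2 / 2.0 for a, b in pairs) / n) ** 0.5

def pct_cv(pairs):
    # Percentage coefficient of variation relative to the grand mean.
    grand_mean = sum(a + b for a, b in pairs) / (2.0 * len(pairs))
    return 100.0 * rms_sd(pairs) / grand_mean

def least_significant_change(precision_error):
    # 95% confidence LSC for comparing two measurements.
    return 2.77 * precision_error
```

A serial change smaller than the LSC cannot be distinguished from measurement noise, which is exactly the caution the abstract raises for obese subjects.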
Prediction of rainfall intensity measurement errors using commercial microwave communication links
Directory of Open Access Journals (Sweden)
A. Zinevich
2010-10-01
Full Text Available Commercial microwave radio links forming cellular communication networks are known to be a valuable instrument for measuring near-surface rainfall. However, operational communication links are more uncertain relative to dedicated installations, since their geometry and frequencies are optimized for high communication performance rather than for observing rainfall. Quantification of the uncertainties for measurements that are non-optimal in the first place is essential to ensure usability of the data.
In this work we address modeling of instrumental impairments, i.e. signal variability due to antenna wetting, baseline attenuation uncertainty and digital quantization, as well as environmental ones, i.e. variability of drop size distribution along a link affecting accuracy of path-averaged rainfall measurement and spatial variability of rainfall in the link's neighborhood affecting the accuracy of rainfall estimation out of the link path. Expressions for root mean squared error (RMSE) for estimates of path-averaged and point rainfall have been derived. To verify the RMSE expressions quantitatively, path-averaged measurements from 21 operational communication links in 12 different locations have been compared to records of five nearby rain gauges over three rainstorm events.
The experiments show that the prediction accuracy is above 90% for temporal accumulation of less than 30 min and decreases for longer accumulation intervals. Spatial variability in the vicinity of the link, baseline attenuation uncertainty and, possibly, suboptimality of the wet antenna attenuation model are the major sources of link-gauge discrepancies. In addition, the dependence of the optimal coefficients of a conventional wet antenna attenuation model on spatial rainfall variability and, accordingly, link length has been shown.
The expressions for RMSE of the path-averaged rainfall estimates can be useful for integration of measurements from multiple
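The rainfall retrieval underlying such link measurements rests on the power-law attenuation-rain relation A = a Rᵇ L (ITU-style), inverted for the path-averaged rain rate. The coefficients a and b depend on frequency and polarization; the values used in the test below are hypothetical round numbers, not those of any specific link.

```python
def path_attenuation_db(rain_rate_mm_h, length_km, a, b):
    # Power-law rain attenuation along a microwave link: A = a * R^b * L.
    return a * rain_rate_mm_h ** b * length_km

def rain_rate_mm_h(attenuation_db, length_km, a, b):
    # Inverse relation used to retrieve the path-averaged rain rate.
    return (attenuation_db / (a * length_km)) ** (1.0 / b)
```

The error sources the paper models (wet antenna, baseline uncertainty, quantization) all perturb the attenuation A before this inversion, which is how they propagate into the rainfall RMSE.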
Vasquez, Monica M; Hu, Chengcheng; Roe, Denise J; Halonen, Marilyn; Guerra, Stefano
2017-01-01
Measurement of serum biomarkers by multiplex assays may be more variable as compared to single biomarker assays. Measurement error in these data may bias parameter estimates in regression analysis, which could mask true associations of serum biomarkers with an outcome. The Least Absolute Shrinkage and Selection Operator (LASSO) can be used for variable selection in these high-dimensional data. Furthermore, when the distribution of measurement error is assumed to be known or estimated with replication data, a simple measurement error correction method can be applied to the LASSO method. However, in practice the distribution of the measurement error is unknown and is expensive to estimate through replication, both in monetary cost and in the greater amount of sample required, which is often limited in quantity. We adapt an existing bias correction approach by estimating the measurement error using validation data in which a subset of serum biomarkers are re-measured on a random subset of the study sample. We evaluate this method using simulated data and data from the Tucson Epidemiological Study of Airway Obstructive Disease (TESAOD). We show that the bias in parameter estimation is reduced and variable selection is improved.
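The "simple measurement error correction" idea, in its most basic single-covariate form, is a method-of-moments fix: subtract the known error variance from the covariate's sample variance before forming the slope. The sketch below uses synthetic data and is only the univariate analogue of the corrections the paper builds into the LASSO.

```python
import random

def naive_and_corrected_slope(w, y, sigma_u2):
    n = len(w)
    mw, my = sum(w) / n, sum(y) / n
    s_wy = sum((a - mw) * (b - my) for a, b in zip(w, y)) / n
    s_ww = sum((a - mw) ** 2 for a in w) / n
    # Naive slope is attenuated by the error variance inflating s_ww;
    # subtracting the known sigma_u^2 corrects it (method of moments).
    return s_wy / s_ww, s_wy / (s_ww - sigma_u2)

random.seed(1)
x = [random.gauss(0.0, 1.0) for _ in range(4000)]
y = [1.5 * xi + random.gauss(0.0, 0.3) for xi in x]
w = [xi + random.gauss(0.0, 0.5) for xi in x]   # error variance 0.25

naive, corrected = naive_and_corrected_slope(w, y, 0.25)
```

In practice σᵤ² is unknown, which is why the paper estimates it from validation re-measurements rather than assuming it.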
Rajdeep Grewal; Joseph A. Cote; Hans Baumgartner
2004-01-01
The literature on structural equation models is unclear on whether and when multicollinearity may pose problems in theory testing (Type II errors). Two Monte Carlo simulation experiments show that multicollinearity can cause problems under certain conditions, specifically: (1) when multicollinearity is extreme, Type II error rates are generally unacceptably high (over 80%), (2) when multicollinearity is between 0.6 and 0.8, Type II error rates can be substantial (greater than 50% and frequent...
DEFF Research Database (Denmark)
Wang, Z.; Lu, K.; Ye, Y.
2011-01-01
To achieve better performance of sensorless control of PMSM, a precise and stable estimation of rotor position and speed is required. Several parameter uncertainties and variable measurement errors may lead to estimation error, such as resistance and inductance variations due to temperature and flux saturation, current and voltage errors due to measurement uncertainties, and signal delay caused by hardware. This paper reveals some inherent principles for the performance of the back-EMF based sensorless algorithm embedded in a surface-mounted PMSM system adopting a vector control strategy, gives mathematical analysis and experimental results to support the principles, and quantifies the effects of each error source. It may serve as guidance for designers to minimize the estimation error and make proper on-line parameter estimations.
Ferguson, C. R.; Tree, D. R.; Dewitt, D. P.; Wahiduzzaman, S. A. H.
1987-01-01
The paper reports the methodology and uncertainty analyses of instrumentation for heat transfer measurements in internal combustion engines. Results are presented for determining the local wall heat flux in an internal combustion engine (using a surface thermocouple-type heat flux gage) and the apparent flame temperature and soot volume fraction-path length product in a diesel engine (using two-color pyrometry). It is shown that a surface thermocouple heat transfer gage, suitably constructed and calibrated, will have an accuracy of 5 to 10 percent. It is also shown that, when applying two-color pyrometry to measure the apparent flame temperature and soot volume fraction-path length, it is important to choose at least one of the two wavelengths to lie in the range of 1.3 to 2.3 micrometers. A carefully calibrated two-color pyrometer can ensure that random errors in the apparent flame temperature and in the soot volume fraction-path length remain small (within about 1 percent and 10 percent, respectively).
Abtahi, F.; Gyllensten, I. C.; Lindecrantz, K.; Seoane, F.
2012-12-01
During the last decades, Electrical Bioimpedance Spectroscopy (EBIS) has been applied in a range of different applications, mainly using the frequency-sweep technique. Traditionally the tissue under study is considered to be time-invariant, and dynamic changes of tissue activity are ignored and instead treated as a noise source. This assumption has not been adequately tested and could have a negative impact and limit the accuracy of impedance monitoring systems. In order to successfully use frequency-sweep EBIS for monitoring time-variant systems, it is paramount to study the effect of frequency-sweep delay on Cole-model-based analysis. In this work, we present a software tool that can be used to simulate the influence of respiration activity in frequency-sweep EBIS measurements of the human thorax and to analyse the effects of the different error sources. Preliminary results indicate that the deviation in the EBIS measurement might be significant at any frequency, and especially in the impedance plane. Therefore, the impact on Cole-model analysis might differ depending on the method applied for Cole parameter estimation.
Goulden, T.; Hopkinson, C.
2013-12-01
The quantification of LiDAR sensor measurement uncertainty is important for evaluating the quality of derived DEM products, compiling risk assessments of management decisions based on LiDAR information, and enhancing LiDAR mission planning capabilities. Current quality assurance estimates of LiDAR measurement uncertainty are limited to post-survey empirical assessments or vendor estimates from commercial literature. Empirical evidence can provide valuable information on the performance of the sensor in validated areas; however, it cannot characterize the spatial distribution of measurement uncertainty throughout the extensive coverage of typical LiDAR surveys. Vendor-advertised error estimates are often restricted to strict and optimal survey conditions, resulting in idealized values. Numerical modeling of individual pulse uncertainty provides an alternative method for estimating LiDAR measurement uncertainty. LiDAR measurement uncertainty is theoretically assumed to fall into three distinct categories: 1) sensor sub-system errors, 2) terrain influences, and 3) vegetative influences. This research details the procedures for numerical modeling of measurement uncertainty from the sensor sub-system (GPS, IMU, laser scanner, laser ranger) and terrain influences. Results show that errors tend to increase as the laser scan angle, altitude or laser beam incidence angle increases. An experimental survey over a flat and paved runway site, performed with an Optech ALTM 3100 sensor, showed an increase in modeled vertical errors from 5 cm, at a nadir scan orientation, to 8 cm at scan edges, for an aircraft altitude of 1200 m and a half scan angle of 15°. In a survey with the same sensor at a highly sloped glacial basin site absent of vegetation, modeled vertical errors reached over 2 m. Validation of error models within the glacial environment, over three separate flight lines, respectively showed 100%, 85%, and 75% of elevation residuals fell below error predictions. Future
Directory of Open Access Journals (Sweden)
Francisco J. Casas
2015-08-01
Full Text Available This paper presents preliminary polarization measurements and systematic-error characterization of the Thirty Gigahertz Instrument receiver developed for the QUIJOTE experiment. The instrument has been designed to measure the polarization of Cosmic Microwave Background radiation from the sky, obtaining the Q, U, and I Stokes parameters of the incoming signal simultaneously. Two kinds of linearly polarized input signals have been used as excitations in the polarimeter measurement tests in the laboratory; these show consistent results in terms of the Stokes parameters obtained. A measurement-based systematic-error characterization technique has been used in order to determine the possible sources of instrumental errors and to assist in the polarimeter calibration process.
Tooze, Janet A.; Troiano, Richard P.; Carroll, Raymond J.; Moshfegh, Alanna J.; Freedman, Laurence S
2013-01-01
Systematic investigations into the structure of measurement error of physical activity questionnaires are lacking. We propose a measurement error model for a physical activity questionnaire that uses physical activity level (the ratio of total energy expenditure to basal energy expenditure) to relate questionnaire-based reports of physical activity level to true physical activity levels. The 1999–2006 National Health and Nutrition Examination Survey physical activity questionnaire was adminis...
Hartatik; Purnomo, Agus
2017-06-01
Direct observations are often used to build estimation models, but the observed data need to be re-examined because of measurement error (ME) factors. In regression modeling, when X is a random variable observed with measurement error, the resulting computations are involved enough to require computational support. Given data (Xi, Yi), the regression model is Yi = g(Xi) + εi, where Xi is the i-th element of the predictor variable X and Yi is the i-th element of the response variable Y. In classical regression the predictor values are treated as fixed constants, but in practice X is often a random variable rather than a fixed quantity; such a model is called a regression model with measurement errors. The purpose of this research is to estimate the nonparametric model with the B-spline method when the measurement errors are ignored, and with the Iterative Conditional Mode (ICM) method when the measurement errors are taken into account.
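The first of the two approaches above — a naive B-spline regression fit that ignores measurement error — can be sketched with SciPy's least-squares spline; the sine test function, knot placement, and noise level are illustrative assumptions, and the ICM correction step is not shown:

```python
import numpy as np
from scipy.interpolate import LSQUnivariateSpline

rng = np.random.default_rng(2)
x = np.sort(rng.uniform(0.0, 1.0, 300))            # predictor (sorted, as required)
y = np.sin(2 * np.pi * x) + rng.normal(0, 0.2, 300)  # response with noise

# Cubic B-spline regression: least-squares fit with 4 interior knots.
knots = np.linspace(0.2, 0.8, 4)
spline = LSQUnivariateSpline(x, y, knots, k=3)

# Compare the fitted curve g-hat against the true g on a grid.
grid = np.linspace(0.0, 1.0, 101)
rmse = np.sqrt(np.mean((spline(grid) - np.sin(2 * np.pi * grid)) ** 2))
```

With error-free predictors the spline recovers g well; when x itself is measured with error, this naive fit is biased, which is what motivates the ICM method in the abstract.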
DEFF Research Database (Denmark)
Santillan, Arturo Orozco; Jacobsen, Finn
2010-01-01
the resulting measurement uncertainty. The purpose of this paper is to analyze the effect of the most common sources of error in sound power determination based on sound intensity measurements. In particular the influence of the scanning procedure used in approximating the surface integral of the intensity...
A Brief Look at: Test Scores and the Standard Error of Measurement. E&R Report No. 10.13
Holdzkom, David; Sumner, Brian; McMillen, Brad
2010-01-01
In the context of standardized testing, the standard error of measurement (SEM) is a measure of the factors other than the student's actual knowledge of the tested material that may affect the student's test score. Such factors may include distractions in the testing environment, fatigue, hunger, or even luck. This means that a student's observed…
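The SEM described above is conventionally computed from the test's score standard deviation and its reliability coefficient; a minimal sketch with assumed illustrative values (the report's own figures are not used here):

```python
import math

def sem(sd, reliability):
    """Standard error of measurement: the expected spread of observed
    scores around a student's 'true' score, given the test's score
    standard deviation and reliability coefficient."""
    return sd * math.sqrt(1.0 - reliability)

# Assumed values: score SD of 15, reliability 0.91.
s = sem(15.0, 0.91)                  # ≈ 4.5 score points
band = (100.0 - s, 100.0 + s)        # ~68% band around an observed score of 100
```

A student scoring 100 on such a test would thus have a true score falling between roughly 95.5 and 104.5 about two times out of three.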
Smith, G. L.; Bess, T. D.; Minnis, P.
1983-01-01
The processes which determine the weather and climate are driven by the radiation received by the earth and the radiation subsequently emitted. A knowledge of the absorbed and emitted components of radiation is thus fundamental for the study of these processes. In connection with the desire to improve the quality of long-range forecasting, NASA is developing the Earth Radiation Budget Experiment (ERBE), consisting of a three-channel scanning radiometer and a package of nonscanning radiometers. A set of these instruments is to be flown on both the NOAA-F and NOAA-G spacecraft, in sun-synchronous orbits, and on an Earth Radiation Budget Satellite. The purpose of the scanning radiometer is to obtain measurements from which the average reflected solar radiant exitance and the average earth-emitted radiant exitance at a reference level can be established. The estimate of regional average exitance obtained will not exactly equal the true value of the regional average exitance, but will differ due to spatial sampling. A method is presented for evaluating this spatial sampling error.
Local measurement of error field using naturally rotating tearing mode dynamics in EXTRAP T2R
Sweeney, R. M.; Frassinetti, L.; Brunsell, P.; Fridström, R.; Volpe, F. A.
2016-12-01
An error field (EF) detection technique using the amplitude modulation of a naturally rotating tearing mode (TM) is developed and validated in the EXTRAP T2R reversed field pinch. The technique was used to identify intrinsic EFs of m/n = 1/-12, where m and n are the poloidal and toroidal mode numbers. The effect of the EF and of a resonant magnetic perturbation (RMP) on the TM, in particular on amplitude modulation, is modeled with a first-order solution of the modified Rutherford equation. In the experiment, the TM amplitude is measured as a function of the toroidal angle as the TM rotates rapidly in the presence of an unknown EF and a known, deliberately applied RMP. The RMP amplitude is fixed while the toroidal phase is varied from one discharge to the other, completing a full toroidal scan. Using three such scans with different RMP amplitudes, the EF amplitude and phase are inferred from the phases at which the TM amplitude maximizes. The estimated EF amplitude is consistent with other estimates (e.g. based on the best EF-cancelling RMP, resulting in the fastest TM rotation). A passive variant of this technique is also presented, where no RMPs are applied, and the EF phase is deduced.
Measurement-based analysis of error latency. [in computer operating system
Chillarege, Ram; Iyer, Ravishankar K.
1987-01-01
This paper demonstrates a practical methodology for the study of error latency under a real workload. The method is illustrated with sampled data on the physical memory activity, gathered by hardware instrumentation on a VAX 11/780 during the normal workload cycle of the installation. These data are used to simulate fault occurrence and to reconstruct the error discovery process in the system. The technique provides a means to study the system under different workloads and for multiple days. An approach to determine the percentage of undiscovered errors is also developed and a verification of the entire methodology is performed. This study finds that the mean error latency, in the memory containing the operating system, varies by a factor of 10 to 1 (in hours) between the low and high workloads. It is found that of all errors occurring within a day, 70 percent are detected in the same day, 82 percent within the following day, and 91 percent within the third day. The increase in failure rate due to latency is not so much a function of remaining errors but is dependent on whether or not there is a latent error.
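The workload dependence of latency reported above can be illustrated with a toy memoryless-discovery model; this is not the paper's instrumented-VAX methodology, and the discovery rates below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(3)

def detected_within(rate_per_hour, days, n=100_000):
    """Fraction of latent errors discovered within `days` days, assuming
    a memoryless (exponential) discovery process: higher workload means
    more memory activity and hence a higher discovery rate."""
    latency = rng.exponential(1.0 / rate_per_hour, n)   # latency in hours
    return float(np.mean(latency <= days * 24.0))

high_workload = detected_within(0.10, days=1)   # assumed high-workload rate
low_workload = detected_within(0.01, days=1)    # assumed low-workload rate
```

Because mean latency is the reciprocal of the discovery rate, a tenfold rate difference between workloads yields the factor-of-ten latency spread the abstract describes, and the fraction detected within a day differs accordingly.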
Mayr, Andreas; Schmid, Matthias; Pfahlberg, Annette; Uter, Wolfgang; Gefeller, Olaf
2017-06-01
Measurement errors of medico-technical devices can be separated into systematic bias and random error. We propose a new method to address both simultaneously via generalized additive models for location, scale and shape (GAMLSS) in combination with permutation tests. More precisely, we extend a recently proposed boosting algorithm for GAMLSS to provide a test procedure to analyse potential device effects on the measurements. We carried out a large-scale simulation study to provide empirical evidence that our method is able to identify possible sources of systematic bias as well as random error under different conditions. Finally, we apply our approach to compare measurements of skin pigmentation from two different devices in an epidemiological study.
Impact of shrinking measurement error budgets on qualification metrology sampling and cost
Sendelbach, Matthew; Sarig, Niv; Wakamoto, Koichi; Kim, Hyang Kyun (Helen); Isbester, Paul; Asano, Masafumi; Matsuki, Kazuto; Vaid, Alok; Osorio, Carmen; Archie, Chas
2014-04-01
When designing an experiment to assess the accuracy of a tool as compared to a reference tool, semiconductor metrologists are often confronted with the situation that they must decide on the sampling strategy before the measurements begin. This decision is usually based largely on the previous experience of the metrologist and the available resources, and not on the statistics that are needed to achieve acceptable confidence limits on the final result. This paper shows a solution to this problem, called inverse TMU analysis, by presenting statistically-based equations that allow the user to estimate the needed sampling after providing appropriate inputs, allowing him to make important "risk vs. reward" sampling, cost, and equipment decisions. Application examples using experimental data from scatterometry and critical dimension scanning electron microscope (CD-SEM) tools are used first to demonstrate how the inverse TMU analysis methodology can be used to make intelligent sampling decisions before the start of the experiment, and then to reveal why low sampling can lead to unstable and misleading results. A model is developed that can help an experimenter minimize the costs associated both with increased sampling and with making wrong decisions caused by insufficient sampling. A second cost model is described that reveals the inadequacy of current TEM (Transmission Electron Microscopy) sampling practices and the enormous costs associated with TEM sampling that is needed to provide reasonable levels of certainty in the result. These high costs reach into the tens of millions of dollars for TEM reference metrology as the measurement error budgets reach angstrom levels. The paper concludes with strategies on how to manage and mitigate these costs.
Systematic Error Study for ALICE charged-jet v2 Measurement
Energy Technology Data Exchange (ETDEWEB)
Heinz, M. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Soltz, R. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)
2017-07-18
We study the treatment of systematic errors in the determination of v_2 for charged jets in √s_NN = 2.76 TeV Pb-Pb collisions by the ALICE Collaboration. Working with the reported values and errors for the 0-5% centrality data we evaluate the χ^2 according to the formulas given for the statistical and systematic errors, where the latter are separated into correlated and shape contributions. We reproduce both the χ^2 and p-values relative to a null (zero) result. We then re-cast the systematic errors into an equivalent covariance matrix and obtain identical results, demonstrating that the two methods are equivalent.
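The equivalence claimed above — profiling a correlated systematic as a nuisance parameter versus folding it into a covariance matrix — is an exact algebraic identity (via Sherman-Morrison) and can be checked numerically. The residuals and error sizes below are made up, not the ALICE data:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 8
r = rng.normal(size=n)                 # data-minus-model residuals
sig = rng.uniform(0.5, 1.5, n)         # uncorrelated statistical errors
b = rng.uniform(0.1, 0.4, n)           # correlated systematic shift per point

# Method 1: profile a nuisance parameter eps that shifts all points
# coherently by eps * b, with a unit Gaussian penalty on eps.
V_inv = np.diag(1.0 / sig**2)
eps = (b @ V_inv @ r) / (1.0 + b @ V_inv @ b)      # minimizing value
chi2_profile = (r - eps * b) @ V_inv @ (r - eps * b) + eps**2

# Method 2: fold the systematic into a covariance matrix V + b b^T.
C = np.diag(sig**2) + np.outer(b, b)
chi2_cov = r @ np.linalg.solve(C, r)
```

Both χ² values agree to machine precision, which is the "identical results" statement in the abstract.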
Raban, Magdalena Z; Walter, Scott R; Douglas, Heather E; Strumpman, Dana; Mackenzie, John; Westbrook, Johanna I
2015-10-13
Interruptions and multitasking are frequent in clinical settings, and have been shown in the cognitive psychology literature to affect performance, increasing the risk of error. However, comparatively less is known about their impact on errors in clinical work. This study will assess the relationship between prescribing errors, interruptions and multitasking in an emergency department (ED) using direct observations and chart review. The study will be conducted in an ED of a 440-bed teaching hospital in Sydney, Australia. Doctors will be shadowed in close proximity by observers for 2-hour intervals while they are working on day shift (between 0800 and 1800). Time-stamped data on tasks, interruptions and multitasking will be recorded on a handheld computer using the validated Work Observation Method by Activity Timing (WOMBAT) tool. The prompts leading to interruptions and multitasking will also be recorded. When doctors prescribe medication, the type of chart and chart sections written on, along with the patient's medical record number (MRN), will be recorded. A clinical pharmacist will access patient records and assess the medication orders for prescribing errors. The prescribing error rate will be calculated per prescribing task and is defined as the number of errors divided by the number of medication orders written during the prescribing task. The association between prescribing error rates and rates of prompts, interruptions and multitasking will be assessed using statistical modelling. Ethics approval has been obtained from the hospital research ethics committee. Eligible doctors will be provided with written information sheets and written consent will be obtained if they agree to participate. Doctor details and MRNs will be kept separate from the data on prescribing errors, and will not appear in the final data set for analysis. Study results will be disseminated in publications and as feedback to the ED. Published by the BMJ Publishing Group Limited.
Breed, Greg A; Severns, Paul M
2015-01-01
Consumer-grade GPS units are a staple of modern field ecology, but the relatively large error radii reported by manufacturers (up to 10 m) ostensibly preclude their utility in measuring fine-scale movement of small animals such as insects. Here we demonstrate that for data collected at fine spatio-temporal scales, these devices can produce exceptionally accurate data on step-length and movement patterns of small animals. With an understanding of the properties of GPS error and how it arises, it is possible, using a simple field protocol, to use consumer-grade GPS units to collect step-length data for the movement of small animals that introduces a median error as small as 11 cm. These small error rates were measured in controlled observations of real butterfly movement. Similar conclusions were reached using a ground-truth test track prepared with a field tape and compass and subsequently measured 20 times using the same methodology as the butterfly tracking. Median error in the ground-truth track was slightly higher than the field data, mostly between 20 and 30 cm, but even for the smallest ground-truth step (70 cm), this is still a signal-to-noise ratio of 3:1, and for steps of 3 m or more, the ratio is greater than 10:1. Such small errors relative to the movements being measured make these inexpensive units useful for measuring insect and other small animal movements on small to intermediate scales with budgets orders of magnitude lower than survey-grade units used in past studies. As an additional advantage, these units are simpler to operate, and insect or other small animal trackways can be collected more quickly than with either survey-grade units or more traditional ruler/grid approaches.
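The property exploited above is that consumer GPS error is strongly autocorrelated over short time intervals, so it largely cancels when differencing consecutive fixes into step lengths, even though absolute position error stays large. A minimal simulation of that idea (the drift and jitter magnitudes and step sizes are invented, not the paper's measurements):

```python
import numpy as np

rng = np.random.default_rng(5)
n = 200
# True butterfly track: ~1.5 m (per axis) random-walk steps.
true_pos = np.cumsum(rng.normal(0.0, 1.5, (n, 2)), axis=0)

# GPS error = slowly drifting bias (highly correlated between fixes taken
# seconds apart) plus a small independent jitter per fix.
bias = np.cumsum(rng.normal(0.0, 0.05, (n, 2)), axis=0)   # correlated drift
jitter = rng.normal(0.0, 0.10, (n, 2))                    # independent noise
obs = true_pos + bias + jitter

# Differencing consecutive fixes cancels most of the correlated bias.
true_steps = np.linalg.norm(np.diff(true_pos, axis=0), axis=1)
obs_steps = np.linalg.norm(np.diff(obs, axis=0), axis=1)
median_step_err = float(np.median(np.abs(obs_steps - true_steps)))

# Absolute position error, by contrast, accumulates the drift and is
# typically far larger than the step-length error.
median_pos_err = float(np.median(np.linalg.norm(obs - true_pos, axis=1)))
```

Under these assumptions the median step-length error is on the order of 10 cm, comparable to the field results the abstract reports, while the absolute fix error is several times larger.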
A study on fatigue measurement of operators for human error prevention in NPPs
Energy Technology Data Exchange (ETDEWEB)
Ju, Oh Yeon; Il, Jang Tong; Meiling, Luo; Hee, Lee Young [KAERI, Daejeon (Korea, Republic of)
2012-10-15
The identification and analysis of individual factors of operators, which are among the various causes of degraded human performance, is not easy in NPPs. Individual factors for operators include work type (including shift work), environment, personality, qualification, training, education, cognition, fatigue, job stress, workload, etc. Research at the Finnish Institute of Occupational Health (FIOH) reported that 'burnout' (extreme fatigue) is related to alcohol-dependent habits and must be dealt with using a stress management program. The USNRC (U.S. Nuclear Regulatory Commission) developed FFD (Fitness for Duty) requirements for improving task efficiency and preventing human errors. 'Managing Fatigue' in 10CFR26 presents requirements to control operator fatigue in NPPs. The committee explained that excessive fatigue is due to stressful work environments, working hours, shifts, sleep disorders, and unstable circadian rhythms. In addition, the International Labor Organization (ILO) developed and suggested a checklist to manage fatigue and job stress. In Korea, a systematic evaluation method is presented in the Final Safety Analysis Report (FSAR) chapter 18, Human Factors, in the licensing process. However, it focuses mostly on interface design such as HMI (Human Machine Interface), not on individual factors. In particular, because Korea is in the process of exporting an NPP to the UAE, the development of a fatigue management technique is important and urgent in order to present technical standards and FFD criteria to the UAE. It is also anticipated that the domestic regulatory body will apply the FFD program as a regulatory requirement, so preparation for that situation is required. In this paper, prior research is surveyed to find fatigue measurement and evaluation methods for operators in high-reliability industries. This study also reviews the NRC report and discusses the causal factors and
Energy Technology Data Exchange (ETDEWEB)
Thomas, Edward V. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Stork, Christopher L. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Mattingly, John K. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)
2015-07-01
Inverse radiation transport focuses on identifying the configuration of an unknown radiation source given its observed radiation signatures. The inverse problem is traditionally solved by finding the set of transport model parameter values that minimizes a weighted sum of the squared differences by channel between the observed signature and the signature predicted by the hypothesized model parameters. The weights are inversely proportional to the sum of the variances of the measurement and model errors at a given channel. The traditional implicit (often inaccurate) assumption is that the errors (differences between the modeled and observed radiation signatures) are independent across channels. Here, an alternative method that accounts for correlated errors between channels is described and illustrated using an inverse problem based on the combination of gamma and neutron multiplicity counting measurements.
Fitting a Bivariate Measurement Error Model for Episodically Consumed Dietary Components
Zhang, Saijuan
2011-01-06
There has been great public health interest in estimating usual, i.e., long-term average, intake of episodically consumed dietary components that are not consumed daily by everyone, e.g., fish, red meat and whole grains. Short-term measurements of episodically consumed dietary components have zero-inflated skewed distributions. So-called two-part models have been developed for such data in order to correct for measurement error due to within-person variation and to estimate the distribution of usual intake of the dietary component in the univariate case. However, there is arguably much greater public health interest in the usual intake of an episodically consumed dietary component adjusted for energy (caloric) intake, e.g., ounces of whole grains per 1000 kilo-calories, which reflects usual dietary composition and adjusts for different total amounts of caloric intake. Because of this public health interest, it is important to have models to fit such data, and it is important that the model-fitting methods can be applied to all episodically consumed dietary components. We have recently developed a nonlinear mixed effects model (Kipnis et al., 2010), and have fit it by maximum likelihood using nonlinear mixed effects programs and methodology (the SAS NLMIXED procedure). Maximum likelihood fitting of such a nonlinear mixed model is generally slow because of 3-dimensional adaptive Gaussian quadrature, and there are times when the programs either fail to converge or converge to models with a singular covariance matrix. For these reasons, we develop a Markov chain Monte Carlo (MCMC) method for fitting this model, which allows for both frequentist and Bayesian inference. There are technical challenges to developing this solution because one of the covariance matrices in the model is patterned. Our main application is to the National Institutes of Health (NIH)-AARP Diet and Health Study, where we illustrate our methods for modeling the energy-adjusted usual intake of fish and whole
Directory of Open Access Journals (Sweden)
Chao Ding
2016-11-01
Full Text Available Because of the advantages of low cost, large coverage and short revisit cycle, Landsat 8 images have been widely applied to monitor earth surface movements. However, there are few systematic studies considering the error source characteristics or the improvement of the deformation field accuracy obtained by Landsat 8 images. In this study, we utilize the 2013 Mw 7.7 Balochistan, Pakistan earthquake to analyze the spatio-temporal error characteristics and elaborate how to mitigate error sources in the deformation field extracted from multi-temporal Landsat 8 images. We found that the stripe artifacts and the topographic shadowing artifacts are two major error components in the deformation field, which currently lack overall understanding and an effective mitigation strategy. For the stripe artifacts, we propose a small spatial baseline (<200 m) method to avoid their effect on the deformation field. We also propose a small radiometric baseline method to reduce the topographic shadowing artifacts and radiometric decorrelation noise. Performance and accuracy evaluations show that these two methods are effective in improving the precision of the deformation field. This study provides the possibility to detect, with higher precision, subtle ground movement caused by earthquakes, melting glaciers, landslides, etc., with Landsat 8 images. It is also a good reference for error source analysis and corrections in deformation fields extracted from other optical satellite images.
Salomon, L J; Bernard, M; Amarsy, R; Bernard, J P; Ville, Y
2009-05-01
To evaluate the impact of a 5-mm error in the measurement of crown-rump length (CRL) in a woman undergoing ultrasound and biochemistry sequential combined screening for Down syndrome. Based on existing risk calculation algorithms, we simulated the case of a 35-year-old-woman undergoing combined screening based on nuchal translucency (NT) measurement and early second-trimester maternal serum markers (human chorionic gonadotropin (hCG) and alpha-fetoprotein (AFP) expressed as multiples of the median (MoM)). Two measurement errors were considered (+ or - 5 mm), for four different CRLs (50, 60, 70 and 80 mm), with five different NT measurements (1, 1.5, 2, 2.5 and 3 mm) in a patient undergoing biochemistry testing at 14 + 4, 15, 16, 17 or 18 weeks' gestation. Four different values for each maternal serum marker were tested (1, 1.5, 2 and 2.5 MoM for hCG, and 0.5, 0.8, 1 and 1.5 MoM for AFP), leading to a total of 3200 simulations of the impact of measurement error. In all cases the ratio between the risk as assessed with or without the measurement error was calculated (measurement error-related risk ratio (MERR)). Over 3200 simulated cases, MERR ranged from 0.53 to 2.14. In 586 simulations (18.3%), it was 1.33. Based on a risk cut-off of 1/300, women would have been misclassified in 112 simulations (3.5%). This would go up to 33 (27.5%) out of the 120 simulations in women with 'borderline' risk, with 1.5 MoM for hCG and 0.5 MoM for AFP, and NT measurement of 1 or 2mm. Down syndrome screening may be highly sensitive to measurement errors in CRL. Quality control of CRL measurement should be performed together with quality control of NT measurement in order to provide the highest standard of care.
Tops, Mattie; Boksem, Maarten A. S.
2010-01-01
We hypothesized that interactions between traits and context predict task engagement, as measured by the amplitude of the error-related negativity (ERN), performance, and relative frontal activity asymmetry (RFA). In Study 1, we found that drive for reward, absorption, and constraint independently
Directory of Open Access Journals (Sweden)
Breno Carvalho
2013-10-01
Full Text Available The purpose of this paper is to implement a computational program to estimate the states (complex nodal voltages) of a power system and to show that the largest normalized residual (LNR) test fails many times. The chosen solution method was Weighted Least Squares (WLS). Once the states are estimated, a gross error analysis is performed to detect and identify the measurements that may contain gross errors (GEs), which can interfere with the estimated states, leading the process to an erroneous state estimation. If a measurement is identified as containing an error, it is discarded from the measurement set and the whole process is repeated until all measurements are within an acceptable error threshold. To validate the implemented software, several computer simulations were performed on the IEEE 6-bus and 14-bus systems, where satisfactory results were obtained. A further purpose is to show that even a widespread method such as the LNR test is subject to serious conceptual flaws, probably due to a lack of attention to the mathematical foundations of the methodology. The paper highlights the need for continuous improvement of the employed techniques and for a critical view, on the part of researchers, so as to recognize these types of failures.
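The WLS estimation and LNR test described above can be sketched on a toy linear(ized) measurement model; the matrix H, noise levels, and injected gross error below are illustrative assumptions, not a real power network:

```python
import numpy as np

def wls_lnr(H, z, sigma):
    """One pass of WLS state estimation with the largest normalized
    residual (LNR) test on the linear measurement model z = H x + e."""
    W = np.diag(1.0 / sigma**2)
    G = H.T @ W @ H                       # gain matrix
    x = np.linalg.solve(G, H.T @ W @ z)   # WLS state estimate
    r = z - H @ x                         # measurement residuals
    # Residual covariance: Omega = R - H G^{-1} H^T.
    R = np.diag(sigma**2)
    Omega = R - H @ np.linalg.solve(G, H.T)
    rN = np.abs(r) / np.sqrt(np.diag(Omega))   # normalized residuals
    return x, rN

# Toy system: 2 states, 5 measurements, one gross error injected.
rng = np.random.default_rng(6)
H = rng.normal(size=(5, 2))
x_true = np.array([1.0, -0.5])
sigma = np.full(5, 0.01)
z = H @ x_true + rng.normal(0.0, 0.01, 5)
z[3] += 0.2                               # gross error (20 sigma) on measurement 3

x_hat, rN = wls_lnr(H, z, sigma)
suspect = int(np.argmax(rN))              # measurement flagged for removal
```

In the full iterative scheme, the flagged measurement would be discarded and the estimation repeated until all normalized residuals fall below a threshold (commonly 3.0); the abstract's point is that this flagging can misidentify the bad measurement in less benign configurations.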
Lugtig, Peter; Toepoel, Vera
2016-01-01
Respondents in an Internet panel survey can often choose which device they use to complete questionnaires: a traditional PC, laptop, tablet computer, or a smartphone. Because all these devices have different screen sizes and modes of data entry, measurement errors may differ between devices. Using
On the importance of Task 1 and error performance measures in PRP dual-task studies
Directory of Open Access Journals (Sweden)
Tilo eStrobach
2015-04-01
Full Text Available The Psychological Refractory Period (PRP) paradigm is a dominant research tool in the literature on dual-task performance. In this paradigm a first and second component task (i.e., Task 1 and Task 2) are presented with variable stimulus onset asynchronies (SOAs) and priority to perform Task 1. The main indicator of dual-task impairment in PRP situations is an increasing Task 2-RT with decreasing SOAs. This impairment is typically explained with some task components being processed strictly sequentially in the context of the prominent central bottleneck theory. This assumption could implicitly suggest that processes of Task 1 are unaffected by Task 2 and bottleneck processing, i.e., decreasing SOAs do not increase RTs and error rates of the first task. The aim of the present review is to assess whether PRP dual-task studies included both RT and error data presentations and statistical analyses and whether studies including both data types (i.e., RTs and error rates) show data consistent with this assumption (i.e., decreasing SOAs and unaffected RTs and/or error rates in Task 1). This review demonstrates that, in contrast to RT presentations and analyses, error data is underrepresented in a substantial number of studies. Furthermore, a substantial number of studies with RT and error data showed a statistically significant impairment of Task 1 performance with decreasing SOA. Thus, these studies produced data that is not primarily consistent with the strong assumption that processes of Task 1 are unaffected by Task 2 and bottleneck processing in the context of PRP dual-task situations; this calls for a more careful report and analysis of Task 1 performance in PRP studies and for a more careful consideration of theories proposing additions to the bottleneck assumption, which are sufficiently general to explain Task 1 and Task 2 effects.
On the importance of Task 1 and error performance measures in PRP dual-task studies.
Strobach, Tilo; Schütz, Anja; Schubert, Torsten
2015-01-01
The psychological refractory period (PRP) paradigm is a dominant research tool in the literature on dual-task performance. In this paradigm a first and second component task (i.e., Task 1 and Task 2) are presented with variable stimulus onset asynchronies (SOAs) and priority to perform Task 1. The main indicator of dual-task impairment in PRP situations is an increasing Task 2-RT with decreasing SOAs. This impairment is typically explained with some task components being processed strictly sequentially in the context of the prominent central bottleneck theory. This assumption could implicitly suggest that processes of Task 1 are unaffected by Task 2 and bottleneck processing, i.e., decreasing SOAs do not increase reaction times (RTs) and error rates of the first task. The aim of the present review is to assess whether PRP dual-task studies included both RT and error data presentations and statistical analyses and whether studies including both data types (i.e., RTs and error rates) show data consistent with this assumption (i.e., decreasing SOAs and unaffected RTs and/or error rates in Task 1). This review demonstrates that, in contrast to RT presentations and analyses, error data is underrepresented in a substantial number of studies. Furthermore, a substantial number of studies with RT and error data showed a statistically significant impairment of Task 1 performance with decreasing SOA. Thus, these studies produced data that is not primarily consistent with the strong assumption that processes of Task 1 are unaffected by Task 2 and bottleneck processing in the context of PRP dual-task situations; this calls for a more careful report and analysis of Task 1 performance in PRP studies and for a more careful consideration of theories proposing additions to the bottleneck assumption, which are sufficiently general to explain Task 1 and Task 2 effects.
Hernan, Andrea; Philpot, Benjamin; Janus, Edward D; Dunbar, James A
2012-07-08
Error in self-reported measures of obesity has been frequently described, but the effect of self-reported error on recruitment into diabetes prevention programs is not well established. The aim of this study was to examine the effect of using self-reported obesity data from the Finnish diabetes risk score (FINDRISC) on recruitment into the Greater Green Triangle Diabetes Prevention Project (GGT DPP). The GGT DPP was a structured group-based lifestyle modification program delivered in primary health care settings in South-Eastern Australia. Between 2004-05, 850 FINDRISC forms were collected during recruitment for the GGT DPP. Eligible individuals, at moderate to high risk of developing diabetes, were invited to undertake baseline tests, including anthropometric measurements performed by specially trained nurses. In addition to errors in calculating total risk scores, accuracy of self-reported data (height, weight, waist circumference (WC) and Body Mass Index (BMI)) from FINDRISCs was compared with baseline data, with impact on participation eligibility presented. Overall, calculation errors impacted on eligibility in 18 cases (2.1%). Of n = 279 GGT DPP participants with measured data, errors (total score calculation, BMI or WC) in self-report were found in n = 90 (32.3%). These errors were equally likely to result in under- or over-reported risk. Under-reporting was more common in those reporting lower risk scores (Spearman-rho = -0.226, p-value resulted in only 6% of individuals at high risk of diabetes being incorrectly categorised as moderate or low risk of diabetes. Overall FINDRISC was found to be an effective tool to screen and recruit participants at moderate to high risk of diabetes, accurately categorising levels of overweight and obesity using self-report data. The results could be generalisable to other diabetes prevention programs using screening tools which include self-reported levels of obesity.
Directory of Open Access Journals (Sweden)
Hernan Andrea
2012-07-01
Full Text Available Abstract Background Error in self-reported measures of obesity has been frequently described, but the effect of self-reported error on recruitment into diabetes prevention programs is not well established. The aim of this study was to examine the effect of using self-reported obesity data from the Finnish diabetes risk score (FINDRISC) on recruitment into the Greater Green Triangle Diabetes Prevention Project (GGT DPP). Methods The GGT DPP was a structured group-based lifestyle modification program delivered in primary health care settings in South-Eastern Australia. Between 2004–05, 850 FINDRISC forms were collected during recruitment for the GGT DPP. Eligible individuals, at moderate to high risk of developing diabetes, were invited to undertake baseline tests, including anthropometric measurements performed by specially trained nurses. In addition to errors in calculating total risk scores, accuracy of self-reported data (height, weight, waist circumference (WC) and Body Mass Index (BMI)) from FINDRISCs was compared with baseline data, with impact on participation eligibility presented. Results Overall, calculation errors impacted on eligibility in 18 cases (2.1%). Of n = 279 GGT DPP participants with measured data, errors (total score calculation, BMI or WC) in self-report were found in n = 90 (32.3%). These errors were equally likely to result in under- or over-reported risk. Under-reporting was more common in those reporting lower risk scores (Spearman-rho = −0.226, p-value Conclusions Overall FINDRISC was found to be an effective tool to screen and recruit participants at moderate to high risk of diabetes, accurately categorising levels of overweight and obesity using self-report data. The results could be generalisable to other diabetes prevention programs using screening tools which include self-reported levels of obesity.
Directory of Open Access Journals (Sweden)
Anwer Khurshid
2014-12-01
Full Text Available Measurement error effects on the power of control charts for the zero-truncated Poisson distribution and the ratio of two Poisson distributions were recently studied by Chakraborty and Khurshid (2013a) and Chakraborty and Khurshid (2013b), respectively. In this paper, an expression for the power of the control chart for the ZTBD based on the standardized normal variate is obtained, and numerical calculations are presented to show the effect of errors on the power curve. To study the sensitivity of the monitoring procedure, the average run length (ARL) is also considered.
DEFF Research Database (Denmark)
Martini, Enrica; Breinbjerg, Olav; Maci, Stefano
2008-01-01
A simple and effective procedure for the reduction of truncation errors in planar near-field measurements of aperture antennas is presented. The procedure relies on the consideration that, due to the scan plane truncation, the calculated plane wave spectrum of the field radiated by the antenna is reliable only within a certain portion of the visible region. Accordingly, the truncation error is reduced by extrapolating the remaining portion of the visible region with the Gerchberg-Papoulis iterative algorithm, exploiting a condition of spatial concentration of the fields on the antenna aperture plane.
DEFF Research Database (Denmark)
Martini, Enrica; Breinbjerg, Olav; Maci, Stefano
2006-01-01
A simple and effective procedure for the reduction of truncation error in planar near-field to far-field transformations is presented. The starting point is the consideration that the actual scan plane truncation implies that the reconstructed plane wave spectrum of the field radiated by the antenna is reliable only within a certain region inside the visible range. Then, the truncation error is reduced by a Maxwellian continuation of the reliable portion of the spectrum: after back-propagating the measured field to the antenna plane, a condition of spatial concentration of the primary field is exploited.
Warner, Joseph D.; Theofylaktos, Onoufrios
2012-01-01
A method of determining the bit error rate (BER) of a digital circuit from the measurement of the analog S-parameters of the circuit has been developed. The method is based on the measurement of the noise and the standard deviation of the noise in the S-parameters. Once the standard deviation and the mean of the S-parameters are known, the BER of the circuit can be calculated using the normal Gaussian function.
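The Gaussian relation the abstract alludes to can be sketched as follows. This is the standard textbook BER formula for two equally likely Gaussian-distributed logic levels with a midpoint decision threshold; the numeric levels and noise standard deviation are hypothetical, not taken from the paper's measurements.

```python
import math

def gaussian_ber(mu0, mu1, sigma):
    """BER = Q(d / (2*sigma)) with d = |mu1 - mu0|, Q(x) = 0.5*erfc(x/sqrt(2)),
    assuming equally likely symbols and a threshold midway between the means."""
    q_arg = abs(mu1 - mu0) / (2.0 * sigma)
    return 0.5 * math.erfc(q_arg / math.sqrt(2.0))

# Hypothetical logic-level means and noise standard deviation, as might be
# extracted from the mean and standard deviation of measured S-parameters.
ber = gaussian_ber(mu0=0.0, mu1=1.0, sigma=0.08)
print(ber)
```

As expected, increasing the noise standard deviation (or shrinking the eye opening between the two levels) drives the computed BER up.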
Errors of Measurement, Theory, and Public Policy. William H. Angoff Memorial Lecture Series
Kane, Michael
2010-01-01
The 12th annual William H. Angoff Memorial Lecture was presented by Dr. Michael T. Kane, ETS's (Educational Testing Service) Samuel J. Messick Chair in Test Validity and the former Director of Research at the National Conference of Bar Examiners. Dr. Kane argues that it is important for policymakers to recognize the impact of errors of measurement…
Measuring and detecting errors in occupational coding: an analysis of SHARE data
Belloni, M.; Brugiavini, A.; Meschi, E.; Tijdens, K.
2016-01-01
This article studies coding errors in occupational data, as the quality of this data is important but often neglected. In particular, we recoded open-ended questions on occupation for last and current job in the Dutch sample of the “Survey of Health, Ageing and Retirement in Europe” (SHARE) using a
Karanikas, Nektarios
2015-01-01
The paper presents a framework that through structured analysis of accident reports explores the differences between practice and academic literature as well amongst organizations regarding their views on human error. The framework is based on the hypothesis that the wording of accident reports
Correction of error in two-dimensional wear measurements of cemented hip arthroplasties
The, Bertram; Mol, Linda; Diercks, Ron L.; van Ooijen, Peter M. A.; Verdonschot, Nico
The irregularity of individual wear patterns of total hip prostheses seen during patient followup may result partially from differences in radiographic projection of the components between radiographs. A method to adjust for this source of error would increase the value of individual wear curves. We
Kumar, R.
1977-01-01
Theoretical and experimental determinations of the emittance of soils and leaves are reviewed, and an error analysis of emittance and spectral emittance measurements is developed as an aid to remote sensing applications. In particular, an equation for the upper bound of the absolute error in an emittance determination is derived. The absolute error is found to decrease with an increase in contact temperature and to increase with an increase in environmental integrated radiant flux density. The difference between temperature and band radiance temperature is plotted as a function of emittance for the wavelength intervals 4.5 to 5.5 microns, 8 to 13.5 microns and 10.2 to 12.5 microns.
Energy Technology Data Exchange (ETDEWEB)
Lee, Yong Hee [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)]
2011-10-15
The Fukushima I nuclear accident following the Tohoku earthquake and tsunami on 11 March 2011 occurred twelve years after the JCO accident, which was caused by an error made by JCO employees. These accidents, along with the Chernobyl accident, were associated with characteristic problems of various organizations, caused severe social and economic disruption, and have had significant environmental and health impacts. Cultural problems involving human error occur for various reasons, and different actions are needed to prevent different errors. Unfortunately, much of the research on organizations and human error has shown widely varying results, which call for different approaches. In other words, we have to find more practical solutions from various research efforts for nuclear safety and take a systematic approach to the organizational deficiencies that cause human error. This paper reviews Hofstede's criteria, the IAEA safety culture, the safety areas of the periodic safety review (PSR), teamwork and performance, and an evaluation of the HANARO safety culture to verify the measures used to assess organizational safety.
Tooze, Janet A.; Troiano, Richard P.; Carroll, Raymond J.; Moshfegh, Alanna J.; Freedman, Laurence S.
2013-01-01
Systematic investigations into the structure of measurement error of physical activity questionnaires are lacking. We propose a measurement error model for a physical activity questionnaire that uses physical activity level (the ratio of total energy expenditure to basal energy expenditure) to relate questionnaire-based reports of physical activity level to true physical activity levels. The 1999–2006 National Health and Nutrition Examination Survey physical activity questionnaire was administered to 433 participants aged 40–69 years in the Observing Protein and Energy Nutrition (OPEN) Study (Maryland, 1999–2000). Valid estimates of participants’ total energy expenditure were also available from doubly labeled water, and basal energy expenditure was estimated from an equation; the ratio of those measures estimated true physical activity level (“truth”). We present a measurement error model that accommodates the mixture of errors that arise from assuming a classical measurement error model for doubly labeled water and a Berkson error model for the equation used to estimate basal energy expenditure. The method was then applied to the OPEN Study. Correlations between the questionnaire-based physical activity level and truth were modest (r = 0.32–0.41); attenuation factors (0.43–0.73) indicate that the use of questionnaire-based physical activity level would lead to attenuated estimates of effect size. Results suggest that sample sizes for estimating relationships between physical activity level and disease should be inflated, and that regression calibration can be used to provide measurement error–adjusted estimates of relationships between physical activity and disease. PMID:23595007
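The attenuation factor and regression calibration correction described above can be illustrated with simulated data. The numbers below are hypothetical and assume a purely classical error model, which is a simplification of the mixed classical/Berkson structure the abstract describes.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
true_pal = rng.normal(1.6, 0.2, n)              # "true" physical activity level
obs_pal = true_pal + rng.normal(0.0, 0.2, n)    # error-prone report (classical error)

# Attenuation factor lambda = Var(true) / Var(observed) under classical error
lam = true_pal.var() / obs_pal.var()

# Naive regression of an outcome on the observed exposure recovers ~lambda*beta;
# dividing by lambda (regression calibration-style) undoes the attenuation.
beta = 2.0
y = beta * true_pal + rng.normal(0.0, 0.1, n)
slope_naive = np.cov(obs_pal, y)[0, 1] / obs_pal.var()
slope_corrected = slope_naive / lam
print(lam, slope_naive, slope_corrected)
```

With equal true-score and error variances, lambda is about 0.5, so the naive slope is roughly half the true effect, matching the attenuation factors of 0.43 to 0.73 reported above in spirit.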
Buonaccorsi, John; Prochenka, Agnieszka; Thoresen, Magne; Ploski, Rafal
2016-09-30
Motivated by a genetic application, this paper addresses the problem of fitting regression models when the predictor is a proportion measured with error. While the problem of dealing with additive measurement error in fitting regression models has been extensively studied, the problem where the additive error is of a binomial nature has not been addressed. The measurement errors here are heteroscedastic for two reasons; dependence on the underlying true value and changing sampling effort over observations. While some of the previously developed methods for treating additive measurement error with heteroscedasticity can be used in this setting, other methods need modification. A new version of simulation extrapolation is developed, and we also explore a variation on the standard regression calibration method that uses a beta-binomial model based on the fact that the true value is a proportion. Although most of the methods introduced here can be used for fitting non-linear models, this paper will focus primarily on their use in fitting a linear model. While previous work has focused mainly on estimation of the coefficients, we will, with motivation from our example, also examine estimation of the variance around the regression line. In addressing these problems, we also discuss the appropriate manner in which to bootstrap for both inferences and bias assessment. The various methods are compared via simulation, and the results are illustrated using our motivating data, for which the goal is to relate the methylation rate of a blood sample to the age of the individual providing the sample. Copyright © 2016 John Wiley & Sons, Ltd. Copyright © 2016 John Wiley & Sons, Ltd.
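The simulation extrapolation (SIMEX) idea mentioned above can be sketched for the simpler case of additive homoscedastic error: progressively inflate the measurement error, track how the slope estimate degrades, and extrapolate back to the no-error case. All parameters below are hypothetical, and the quadratic extrapolant is one common choice, not the paper's specific new version.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5000
x = rng.uniform(0.2, 0.8, n)                    # true predictor
sigma_u = 0.1
w = x + rng.normal(0.0, sigma_u, n)             # error-prone observation
y = 1.0 + 2.0 * x + rng.normal(0.0, 0.1, n)

def ols_slope(pred, resp):
    return np.cov(pred, resp)[0, 1] / pred.var()

# SIMEX: add extra error so the total error variance is (1 + lambda)*sigma_u^2,
# average the slope over B replicates, then extrapolate to lambda = -1.
lambdas = np.array([0.0, 0.5, 1.0, 1.5, 2.0])
B = 50
slopes = []
for lam in lambdas:
    reps = [ols_slope(w + rng.normal(0.0, np.sqrt(lam) * sigma_u, n), y)
            for _ in range(B)]
    slopes.append(np.mean(reps))

coef = np.polyfit(lambdas, slopes, 2)           # quadratic extrapolant
slope_simex = np.polyval(coef, -1.0)
print(ols_slope(w, y), slope_simex)
```

The naive slope is attenuated well below the true value of 2, while the extrapolated SIMEX estimate recovers most of the lost effect size.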
Nevo, Daniel; Zucker, David M; Tamimi, Rulla M; Wang, Molin
2016-12-30
A common paradigm in dealing with heterogeneity across tumors in cancer analysis is to cluster the tumors into subtypes using marker data on the tumor, and then to analyze each of the clusters separately. A more specific target is to investigate the association between risk factors and specific subtypes and to use the results for personalized preventive treatment. This task is usually carried out in two steps-clustering and risk factor assessment. However, two sources of measurement error arise in these problems. The first is the measurement error in the biomarker values. The second is the misclassification error when assigning observations to clusters. We consider the case with a specified set of relevant markers and propose a unified single-likelihood approach for normally distributed biomarkers. As an alternative, we consider a two-step procedure with the tumor type misclassification error taken into account in the second-step risk factor analysis. We describe our method for binary data and also for survival analysis data using a modified version of the Cox model. We present asymptotic theory for the proposed estimators. Simulation results indicate that our methods significantly lower the bias with a small price being paid in terms of variance. We present an analysis of breast cancer data from the Nurses' Health Study to demonstrate the utility of our method. Copyright © 2016 John Wiley & Sons, Ltd. Copyright © 2016 John Wiley & Sons, Ltd.
Measurements and their uncertainties a practical guide to modern error analysis
Hughes, Ifan G
2010-01-01
This hands-on guide is primarily intended to be used in undergraduate laboratories in the physical sciences and engineering. It assumes no prior knowledge of statistics. It introduces the necessary concepts where needed, with key points illustrated with worked examples and graphic illustrations. In contrast to traditional mathematical treatments it uses a combination of spreadsheet and calculus-based approaches, suitable as a quick and easy on-the-spot reference. The emphasis throughout is on practical strategies to be adopted in the laboratory. Error analysis is introduced at a level accessible to school leavers, and carried through to research level. Error calculation and propagation is presented through a series of rules-of-thumb, look-up tables and approaches amenable to computer analysis. The general approach uses the chi-square statistic extensively. Particular attention is given to hypothesis testing and extraction of parameters and their uncertainties by fitting mathematical models to experimental data.
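One of the rules-of-thumb such treatments present is quadrature combination of independent fractional errors for products. A minimal sketch (the rectangle example and its numbers are illustrative, not from the book):

```python
import math

def propagate_product(x, sx, y, sy):
    """For f = x * y with independent errors, fractional errors add in
    quadrature: (sf/f)^2 = (sx/x)^2 + (sy/y)^2."""
    f = x * y
    sf = f * math.sqrt((sx / x) ** 2 + (sy / y) ** 2)
    return f, sf

# Example: area of a rectangle measured as (2.00 +/- 0.02) m by (3.00 +/- 0.06) m
area, s_area = propagate_product(2.00, 0.02, 3.00, 0.06)
print(area, round(s_area, 3))
```

Note that quadrature combination assumes the two measurement errors are independent; correlated errors require the full covariance term.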
Use of graph theory measures to identify errors in record linkage.
Randall, Sean M; Boyd, James H; Ferrante, Anna M; Bauer, Jacqueline K; Semmens, James B
2014-07-01
Ensuring high linkage quality is important in many record linkage applications. Current methods for ensuring quality are manual and resource intensive. This paper seeks to determine the effectiveness of graph theory techniques in identifying record linkage errors. A range of graph theory techniques was applied to two linked datasets, with known truth sets. The ability of graph theory techniques to identify groups containing errors was compared to a widely used threshold setting technique. This methodology shows promise; however, further investigations into graph theory techniques are required. The development of more efficient and effective methods of improving linkage quality will result in higher quality datasets that can be delivered to researchers in shorter timeframes. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
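One of the simplest graph-theoretic checks in this spirit treats linked record pairs as edges and inspects the connected components: components that are unexpectedly large often contain false links. The record IDs, links, and size threshold below are hypothetical illustrations, not the paper's datasets or specific techniques.

```python
from collections import defaultdict

# Record-linkage results as pairwise links between record IDs (hypothetical).
links = [("a1", "a2"), ("a2", "a3"), ("b1", "b2"),
         ("a3", "b1"),            # a dubious link merging two groups of records
         ("c1", "c2")]

def connected_components(pairs):
    graph = defaultdict(set)
    for u, v in pairs:
        graph[u].add(v)
        graph[v].add(u)
    seen, comps = set(), []
    for node in graph:
        if node in seen:
            continue
        stack, comp = [node], set()
        while stack:                      # iterative depth-first traversal
            cur = stack.pop()
            if cur in comp:
                continue
            comp.add(cur)
            stack.extend(graph[cur] - comp)
        seen |= comp
        comps.append(comp)
    return comps

# Unusually large groups are candidates for clerical review.
suspect = [c for c in connected_components(links) if len(c) > 3]
print(suspect)
```

Other graph measures (component density, bridge edges) can refine this screen, but even component size alone surfaces the merged group created by the dubious link.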
Measurement Rounding Errors in an Assessment Model of Project Led Engineering Education
Francisco Moreira; Sousa, Rui M., ed. lit.; Celina P Leão; Anabela C Alves; Lima, Rui M.
2009-01-01
This paper analyzes the rounding errors that occur in the assessment of an interdisciplinary Project-Led Education (PLE) process implemented in the Integrated Master degree on Industrial Management and Engineering (IME) at University of Minho. PLE is an innovative educational methodology which makes use of active learning, promoting higher levels of motivation and students’ autonomy. The assessment model is based on multiple evaluation components with different weights. Each component can be ...
Directory of Open Access Journals (Sweden)
Xue Li
2015-01-01
Full Text Available State of charge (SOC) is one of the most important parameters in a battery management system (BMS). There are numerous algorithms for SOC estimation, mostly of model-based observer/filter types such as Kalman filters, closed-loop observers, and robust observers. Modeling errors and measurement noises have a critical impact on the accuracy of SOC estimation in these algorithms. This paper is a comparative study of the robustness of SOC estimation algorithms against modeling errors and measurement noises. Using a typical battery platform for vehicle applications with sensor noise and battery aging characterization, three popular and representative SOC estimation methods (extended Kalman filter, PI-controlled observer, and H∞ observer) are compared on such robustness. The simulation and experimental results demonstrate the deterioration of SOC estimation accuracy under modeling errors resulting from aging and under larger measurement noise, which is quantitatively characterized. The findings of this paper provide useful information on the following aspects: (1) how SOC estimation accuracy depends on modeling reliability and voltage measurement accuracy; (2) the pros and cons of typical SOC estimators in their robustness and reliability; (3) guidelines for requirements on battery system identification and sensor selection.
Error Analysis of High Frequency Core Loss Measurement for Low-Permeability Low-Loss Magnetic Cores
DEFF Research Database (Denmark)
Niroumand, Farideh Javidi; Nymand, Morten
2016-01-01
A common method for measuring loss in magnetic cores is B-H loop measurement, where two windings are placed on the core under test. However, this method is highly vulnerable to phase shift error, especially for low-permeability, low-loss cores. Due to soft saturation and very low core loss, low-permeability low-loss magnetic cores are favorable in many high-efficiency, high power-density power converters. Magnetic powder cores, among the low-permeability low-loss cores, are very attractive since they possess lower magnetic losses compared to gapped ferrites. This paper presents an analytical study of the phase shift error in the core loss measurement. The analysis has been validated by experimental measurements for relatively low-loss magnetic cores with different permeability values.
Manin, Lionel; Michon, Guilhem; Rémond, Didier; Dufour, Regis
2009-01-01
Serpentine belt drives are often used in the front end accessory drive of automotive engines. The accessories' resistant torques are getting higher with new technological innovations such as the starter-alternator, and belt transmissions are always asked for higher capacity. Two kinds of tensioners are used to maintain the minimum tension that ensures power transmission and minimizes slip: dry friction or hydraulic tensioners. An experimental device and a specific transmission error measurement method have been used...
Kerr, Ava; Slater, Gary J; Byrne, Nuala
2017-02-01
Two, three and four compartment (2C, 3C and 4C) models of body composition are popular methods to measure fat mass (FM) and fat-free mass (FFM) in athletes. However, the impact of food and fluid intake on measurement error has not been established. The purpose of this study was to evaluate standardised (overnight fasted, rested and hydrated) v. non-standardised (afternoon and non-fasted) presentation on technical and biological error on surface anthropometry (SA), 2C, 3C and 4C models. In thirty-two athletic males, measures of SA, dual-energy X-ray absorptiometry (DXA), bioelectrical impedance spectroscopy (BIS) and air displacement plethysmography (BOD POD) were taken to establish 2C, 3C and 4C models. Tests were conducted after an overnight fast (duplicate), about 7 h later after ad libitum food and fluid intake, and repeated 24 h later before and after ingestion of a specified meal. Magnitudes of changes in the mean and typical errors of measurement were determined. Mean change scores for non-standardised presentation and post meal tests for FM were substantially large in BIS, SA, 3C and 4C models. For FFM, mean change scores for non-standardised conditions produced large changes for BIS, 3C and 4C models, small for DXA, trivial for BOD POD and SA. Models that included a total body water (TBW) value from BIS (3C and 4C) were more sensitive to TBW changes in non-standardised conditions than 2C models. Biological error is minimised in all models with standardised presentation but DXA and BOD POD are acceptable if acute food and fluid intake remains below 500 g.
Acosta, Alejandro
2012-01-01
Over the last decade, Mozambique has experienced drastic increases in food prices, with serious implications for households' real income. A deeper understanding of how food prices are spatially transmitted from global to domestic markets is thus fundamental for designing policy measures to reduce poverty and food insecurity. This study assesses the spatial transmission of white maize prices between South Africa and Mozambique using an asymmetric error correction model to estimate the speed ...
Lee, C.-H.; Herget, C. J.
1976-01-01
This short paper considers the parameter-identification problem of general discrete-time, nonlinear, multiple input-multiple output dynamic systems with Gaussian white distributed measurement errors. Knowledge of the system parameterization is assumed to be available. Regions of constrained maximum likelihood (CML) parameter identifiability are established. A computation procedure employing interval arithmetic is proposed for finding explicit regions of parameter identifiability for the case of linear systems.
Image registration error variance as a measure of overlay quality. [satellite data processing
Mcgillem, C. D.; Svedlow, M.
1976-01-01
When one image (the signal) is to be registered with a second image (the signal plus noise) of the same scene, one would like to know the accuracy possible for this registration. This paper derives an estimate of the variance of the registration error that can be expected via two approaches. The solution in each instance is found to be a function of the effective bandwidth of the signal and the noise, and the signal-to-noise ratio. Application of these results to LANDSAT-1 data indicates that for most cases, registration variances will be significantly less than the diameter of one picture element.
The Measure of Human Error: Direct and Indirect Performance Shaping Factors
Energy Technology Data Exchange (ETDEWEB)
Ronald L. Boring; Candice D. Griffith; Jeffrey C. Joe
2007-08-01
The goal of performance shaping factors (PSFs) is to provide measures to account for human performance. PSFs fall into two categories—direct and indirect measures of human performance. While some PSFs such as “time to complete a task” are directly measurable, other PSFs, such as “fitness for duty,” can only be measured indirectly through other measures and PSFs, such as through fatigue measures. This paper explores the role of direct and indirect measures in human reliability analysis (HRA) and the implications that measurement theory has on analyses and applications using PSFs. The paper concludes with suggestions for maximizing the reliability and validity of PSFs.
Heavner, Karyn; Burstyn, Igor
2015-08-24
Variation in the odds ratio (OR) resulting from selection of cutoffs for categorizing continuous variables is rarely discussed. We present results for the effect of varying cutoffs used to categorize a mismeasured exposure in a simulated population in the context of autism spectrum disorders research. Simulated cohorts were created with three distinct exposure-outcome curves and three measurement error variances for the exposure. ORs were calculated using logistic regression for 61 cutoffs (mean ± 3 standard deviations) used to dichotomize the observed exposure. ORs were calculated for five categories with a wide range for the cutoffs. For each scenario and cutoff, the OR, sensitivity, and specificity were calculated. The three exposure-outcome relationships had distinctly shaped OR (versus cutoff) curves, but increasing measurement error obscured the shape. At extreme cutoffs, there was non-monotonic oscillation in the ORs that cannot be attributed to "small numbers." Exposure misclassification following categorization of the mismeasured exposure was differential, as predicted by theory. Sensitivity was higher among cases and specificity among controls. Cutoffs chosen for categorizing continuous variables can have profound effects on study results. When measurement error is not too great, the shape of the OR curve may provide insight into the true shape of the exposure-disease relationship.
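The core of the simulation design above can be sketched as follows: simulate a continuous exposure with classical measurement error and a binary outcome, dichotomize the observed exposure at a range of cutoffs, and compute the OR at each cutoff. The distributions, effect size, and cutoff grid below are hypothetical, and the OR is taken from the 2x2 table (equivalent to logistic regression on a single binary exposure) rather than the authors' exact setup.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 20_000
true_x = rng.normal(0.0, 1.0, n)
obs_x = true_x + rng.normal(0.0, 0.5, n)        # mismeasured exposure

# Binary outcome with a genuine log-linear effect of the true exposure
p = 1.0 / (1.0 + np.exp(-(-2.0 + 0.7 * true_x)))
case = rng.random(n) < p

def odds_ratio(exposed, case):
    a = np.sum(exposed & case);   b = np.sum(exposed & ~case)
    c = np.sum(~exposed & case);  d = np.sum(~exposed & ~case)
    return (a * d) / (b * c)

# OR from dichotomizing the observed exposure at a range of cutoffs
cutoffs = np.linspace(-1.5, 1.5, 7)
ors = [odds_ratio(obs_x > cut, case) for cut in cutoffs]
print([round(v, 2) for v in ors])
```

Sweeping the cutoff shows directly how the estimated OR depends on an essentially arbitrary analytic choice, which is the phenomenon the study quantifies.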
Directory of Open Access Journals (Sweden)
Karyn Heavner
2015-08-01
Full Text Available Variation in the odds ratio (OR) resulting from selection of cutoffs for categorizing continuous variables is rarely discussed. We present results for the effect of varying cutoffs used to categorize a mismeasured exposure in a simulated population in the context of autism spectrum disorders research. Simulated cohorts were created with three distinct exposure-outcome curves and three measurement error variances for the exposure. ORs were calculated using logistic regression for 61 cutoffs (mean ± 3 standard deviations) used to dichotomize the observed exposure. ORs were calculated for five categories with a wide range for the cutoffs. For each scenario and cutoff, the OR, sensitivity, and specificity were calculated. The three exposure-outcome relationships had distinctly shaped OR (versus cutoff) curves, but increasing measurement error obscured the shape. At extreme cutoffs, there was non-monotonic oscillation in the ORs that cannot be attributed to “small numbers.” Exposure misclassification following categorization of the mismeasured exposure was differential, as predicted by theory. Sensitivity was higher among cases and specificity among controls. Cutoffs chosen for categorizing continuous variables can have profound effects on study results. When measurement error is not too great, the shape of the OR curve may provide insight into the true shape of the exposure-disease relationship.
Haddadi, H.; Belhabib, S.
2008-02-01
The aim of this work is to investigate the sources of error in strain measurements obtained with the digital image correlation (DIC) technique. Such information is important before the measured kinematic fields can be exploited. After recalling the principle of DIC, some sources of error related to this technique are listed. Both numerical and experimental tests, based on rigid-body motion, are proposed. These tests are simple and easy to implement. They make it possible to quickly assess the errors related to lighting, the optical lens (distortion), the CCD sensor, out-of-plane displacement, the speckle pattern, the grid pitch, the size of the subset and the correlation algorithm. The error sources that could not be uncoupled were estimated by amplifying their contribution to the global error. The results obtained permit a classification of the errors related to the equipment used. The paper ends with some suggestions for minimizing these errors.
Directory of Open Access Journals (Sweden)
Md. Moyazzem Hossain
2015-02-01
In developing countries, the efficiency of economic development is assessed through the analysis of industrial production. An examination of the characteristics of the industrial sector is an essential aspect of growth studies. Most developed countries are highly industrialized, in line with the maxim "the more industrialization, the more development". For proper industrialization and industrial development we have to study the industrial input-output relationship, which leads to production analysis. For a number of reasons econometricians believe that industrial production is the most important component of economic development: if domestic industrial production increases, GDP will increase; if the elasticity of labor is higher, employment rates will increase; and investment will increase if the elasticity of capital is higher. In this regard, this paper should be helpful in suggesting the most suitable Cobb-Douglas production function to forecast the production process for some selected manufacturing industries of developing countries like Bangladesh. The paper chooses the appropriate Cobb-Douglas function that gives the optimal combination of inputs, that is, the combination that enables production of the desired level of output with minimum cost and hence with maximum profitability, for some selected manufacturing industries of Bangladesh over the period 1978-79 to 2011-2012. The estimated results show that the estimates of both capital and labor elasticity of the Cobb-Douglas production function with additive errors are more efficient than those of the Cobb-Douglas production function with multiplicative errors.
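The additive- versus multiplicative-error distinction can be sketched as follows. A multiplicative error makes the model log-linear (ordinary least squares on logs), while an additive error requires nonlinear least squares on the level equation. The inputs and "true" parameter values below are invented for illustration; they are not the paper's data or estimation code:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200
K = rng.uniform(50, 500, n)          # capital input (illustrative units)
L = rng.uniform(10, 100, n)          # labor input
A, alpha, beta = 2.0, 0.45, 0.55     # assumed "true" parameters
Q = A * K**alpha * L**beta + rng.normal(0, 5, n)   # additive error

# multiplicative-error form: ln Q = ln A + a ln K + b ln L + e  ->  OLS
X = np.column_stack([np.ones(n), np.log(K), np.log(L)])
coef, *_ = np.linalg.lstsq(X, np.log(Q), rcond=None)
ols = np.array([np.exp(coef[0]), coef[1], coef[2]])

# additive-error form: Q = A K^a L^b + e  ->  Gauss-Newton least squares,
# started from the OLS estimates
th = ols.copy()
for _ in range(20):
    f = th[0] * K**th[1] * L**th[2]
    J = np.column_stack([f / th[0], f * np.log(K), f * np.log(L)])
    th += np.linalg.lstsq(J, Q - f, rcond=None)[0]

print("multiplicative (OLS on logs):", ols)
print("additive (nonlinear LS):     ", th)
```

With data generated under an additive error, the nonlinear fit is the correctly specified one; comparing the two sets of estimates against the assumed parameters mirrors the efficiency comparison the abstract reports.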
Napierala, Jeffrey; Denton, Nancy
2017-02-01
The American Community Survey (ACS) provides valuable, timely population estimates but with increased levels of sampling error. Although the margin of error is included with aggregate estimates, it has not been incorporated into segregation indexes. With the increasing levels of diversity in small and large places throughout the United States comes a need to accurately track and study changes in racial and ethnic segregation between censuses. The 2005-2009 ACS is used to calculate three dissimilarity indexes (D) for all core-based statistical areas (CBSAs) in the United States. We introduce a simulation method for computing segregation indexes and examine them with particular regard to the size of the CBSAs. Additionally, a subset of CBSAs is used to explore how ACS indexes differ from those computed using the 2000 and 2010 censuses. Findings suggest that the precision and accuracy of D from the ACS are influenced by a number of factors, including the number of tracts and minority population size. For smaller areas, point estimates systematically overstate actual levels of segregation, and large confidence intervals lead to limited statistical power.
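The dissimilarity index itself, and the way sampling error pushes it upward in an evenly mixed area, can be sketched as follows. The tract counts and Poisson noise model below are illustrative stand-ins, not the paper's ACS-based simulation method:

```python
import numpy as np

rng = np.random.default_rng(2)

def dissimilarity(minor, major):
    """Index of dissimilarity D across tracts: 0 = perfectly even, 1 = fully segregated."""
    return 0.5 * np.sum(np.abs(minor / minor.sum() - major / major.sum()))

# a perfectly integrated area: every tract is exactly 10% minority
tracts = 50
total = rng.integers(1000, 4000, tracts)
minor = 0.10 * total
major = total - minor
print("true D:", dissimilarity(minor, major))          # 0.0 by construction

# survey-style sampling error inflates D upward even when true D is zero
noisy_minor = rng.poisson(minor).astype(float)
noisy_major = rng.poisson(major).astype(float)
print("D with sampling error:", dissimilarity(noisy_minor, noisy_major))
```

Because D sums absolute deviations, any noise moves the estimate away from zero in one direction only, which is one intuition for the systematic overstatement in small areas noted above.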
Quantitative shearography: error reduction by using more than three measurement channels
Charrett, Thomas O. H.; Francis, Daniel; Tatam, Ralph P.
2011-01-01
Shearography is a noncontact optical technique used to measure surface displacement derivatives. Full surface strain characterization can be achieved using shearography configurations employing at least three measurement channels. Each measurement channel is sensitive to a single displacement gradient component defined by its sensitivity vector. A matrix transformation is then required to convert the measured components to the orthogonal displacement gradients required for q...
Hornung, Roman; Bernau, Christoph; Truntzer, Caroline; Wilson, Rory; Stadler, Thomas; Boulesteix, Anne-Laure
2015-11-04
In applications of supervised statistical learning in the biomedical field it is necessary to assess the prediction error of the respective prediction rules. Often, data preparation steps are performed on the dataset as a whole, before training/test set based prediction error estimation by cross-validation (CV), an approach referred to as "incomplete CV". Whether incomplete CV can result in an optimistically biased error estimate depends on the data preparation step under consideration. Several empirical studies have investigated the extent of bias induced by performing preliminary supervised variable selection before CV. To our knowledge, however, the potential bias induced by other data preparation steps has not yet been examined in the literature. In this paper we investigate this bias for two common data preparation steps: normalization and principal component analysis for dimension reduction of the covariate space (PCA). Furthermore we obtain preliminary results for the following steps: optimization of tuning parameters, variable filtering by variance and imputation of missing values. We devise the easily interpretable and general measure CVIIM ("CV Incompleteness Impact Measure") to quantify the extent of bias induced by incomplete CV with respect to a data preparation step of interest. This measure can be used to determine whether a specific data preparation step should, as a general rule, be performed in each CV iteration or whether an incomplete CV procedure would be acceptable in practice. We apply CVIIM to large collections of microarray datasets to answer this question for normalization and PCA. Performing normalization on the entire dataset before CV did not result in a noteworthy optimistic bias in any of the investigated cases. In contrast, when performing PCA before CV, medium to strong underestimates of the prediction error were observed in multiple settings. While the investigated forms of normalization can be safely performed before CV, PCA
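The optimistic bias of incomplete CV is easiest to see for supervised variable selection, the step the earlier empirical studies cited above examined. A minimal numpy sketch with pure-noise data (the classifier, fold count, and feature counts are arbitrary choices for illustration):

```python
import numpy as np

rng = np.random.default_rng(3)
n, p, k = 60, 2000, 20            # samples, features, features kept
X = rng.normal(size=(n, p))       # pure noise: the true error rate is 50%
y = np.repeat([0, 1], n // 2)

def nearest_centroid_error(Xtr, ytr, Xte, yte):
    c0, c1 = Xtr[ytr == 0].mean(0), Xtr[ytr == 1].mean(0)
    pred = np.linalg.norm(Xte - c1, axis=1) < np.linalg.norm(Xte - c0, axis=1)
    return np.mean(pred != yte)

def top_k(X, y, k):
    """Supervised selection: keep the k features with largest class-mean gap."""
    score = np.abs(X[y == 0].mean(0) - X[y == 1].mean(0))
    return np.argsort(score)[-k:]

def cv_error(X, y, select_inside):
    folds = np.array_split(rng.permutation(n), 5)
    errs = []
    for te in folds:
        tr = np.setdiff1d(np.arange(n), te)
        cols = top_k(X[tr], y[tr], k) if select_inside else top_k(X, y, k)
        errs.append(nearest_centroid_error(X[tr][:, cols], y[tr],
                                           X[te][:, cols], y[te]))
    return np.mean(errs)

print("incomplete CV (selection on full data):", cv_error(X, y, False))
print("full CV (selection inside each fold):  ", cv_error(X, y, True))
```

Selecting features on the full dataset lets the test folds leak into the selection, so the incomplete-CV estimate falls far below the true 50% error rate; selection repeated inside each fold does not.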
Tenan, Matthew S
2016-01-01
Indirect calorimetry and oxygen consumption (VO2) are accepted tools in human physiology research. It has been shown that indirect calorimetry systems exhibit differential measurement error, where the error of a device is systematically different depending on the volume of gas flow. Moreover, systems commonly report multiple decimal places of precision, giving the clinician a false sense of device accuracy. The purpose of this manuscript is to demonstrate the use of a novel statistical tool which models the reliability of two specific indirect calorimetry systems, Douglas bag and Parvomedics 2400 TrueOne, as univariate normal distributions and implements the distribution overlapping coefficient to determine the likelihood that two VO2 measures are the same. A command line implementation of the tool is available for the R programming language as well as a web-based graphical user interface (GUI). This tool is valuable for clinicians performing a single-subject analysis as well as researchers interested in determining if their observed differences exceed the error of the device.
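The overlapping coefficient of two univariate normal densities can be computed numerically as below; the VO2 means and device standard deviations are hypothetical, not values from the cited systems or the tool's actual implementation:

```python
import math

def normal_pdf(x, mu, sigma):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def overlapping_coefficient(mu1, s1, mu2, s2, n=20_000):
    """Area shared by two normal densities (0 = disjoint, 1 = identical),
    via trapezoidal integration of min(f1, f2)."""
    lo = min(mu1 - 6 * s1, mu2 - 6 * s2)
    hi = max(mu1 + 6 * s1, mu2 + 6 * s2)
    h = (hi - lo) / n
    fs = [min(normal_pdf(lo + i * h, mu1, s1), normal_pdf(lo + i * h, mu2, s2))
          for i in range(n + 1)]
    return h * (sum(fs) - 0.5 * (fs[0] + fs[-1]))

# e.g. two VO2 readings (L/min) with a hypothetical device SD of 0.08 L/min
ovl = overlapping_coefficient(3.10, 0.08, 3.25, 0.08)
print(f"overlap = {ovl:.3f}")   # smaller overlap -> the two measures likely differ
```

For equal standard deviations the result matches the closed form 2Φ(-|μ1-μ2|/(2σ)), so the numeric routine can be checked against that.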
Directory of Open Access Journals (Sweden)
Giovanna Grossi
2017-06-01
Precipitation measurements by rain gauges are usually affected by a systematic underestimation, which can be larger in the case of snowfall. The wind, disturbing the trajectory of the falling water droplets or snowflakes above the rain gauge, is the major source of error, but when tipping-bucket recording gauges are used, the induced evaporation due to the heating device must also be taken into account. Manual measurements of fresh snow water equivalent (SWE) were taken in Alpine areas of Valtellina and Vallecamonica, in Northern Italy, and compared with daily precipitation and melted snow measured by manual precipitation gauges and by mechanical and electronic heated tipping-bucket recording gauges without any wind-shield: all of these gauges underestimated the SWE by between 15% and 66%. In some experimental monitoring sites, instead, electronic weighing storage gauges with Alter-type wind-shields are coupled with snow pillow data: daily SWE measurements from these instruments are in good agreement. In order to correct historical precipitation series affected by systematic errors in snowfall measurements, a simple 'at-site' and instrument-dependent model was first developed that applies a correction factor as a function of daily air temperature, which serves as an index of the solid/liquid precipitation type. The threshold air temperatures were estimated through a statistical analysis of snow field observations. The correction model applied to daily observations led to 5-37% total annual precipitation increments, growing with altitude (1740-2190 m above sea level, a.s.l.) and wind exposure. A second 'climatological' correction model based on daily air temperature and wind speed was proposed, leading to errors only slightly higher than those obtained for the at-site corrections.
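The 'at-site' temperature-threshold idea can be sketched as a simple correction function. The thresholds and undercatch factor below are placeholders, not the paper's fitted, instrument-dependent values:

```python
def correct_precipitation(p_mm, t_air_c, t_snow=0.5, t_rain=2.5, k_snow=1.4):
    """Apply a gauge-dependent correction factor to a daily precipitation
    total when air temperature indicates solid precipitation.
    Thresholds (deg C) and factor are illustrative assumptions."""
    if t_air_c <= t_snow:          # snow: full undercatch correction
        return p_mm * k_snow
    if t_air_c >= t_rain:          # rain: no correction
        return p_mm
    # mixed phase: interpolate the factor linearly between the thresholds
    w = (t_rain - t_air_c) / (t_rain - t_snow)
    return p_mm * (1.0 + (k_snow - 1.0) * w)

print(correct_precipitation(10.0, -3.0))  # 14.0 (snow day, corrected)
print(correct_precipitation(10.0, 5.0))   # 10.0 (rain day, unchanged)
```

Applying such a function day by day over a historical series, and summing by year, gives the kind of annual precipitation increment the abstract reports.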
Doerry, Armin W.; Heard, Freddie E.; Cordaro, J. Thomas
2010-08-17
Motion measurement errors that extend beyond the range resolution of a synthetic aperture radar (SAR) can be corrected by effectively decreasing the range resolution of the SAR in order to permit measurement of the error. Range profiles can be compared across the slow-time dimension of the input data in order to estimate the error. Once the error has been determined, appropriate frequency and phase correction can be applied to the uncompressed input data, after which range and azimuth compression can be performed to produce a desired SAR image.
Rekaya, R; Aggrey, S E
2015-03-01
A procedure for estimating residual feed intake (RFI) based on information used in feeding studies is presented. Koch's classical model uses fixed regressions of feed intake on metabolic BW and growth, and RFI is obtained as the deviation between the observed feed intake and the expected intake for an individual with a given weight and growth rate. RFI estimated by such a procedure intrinsically suffers from the inability to separate true RFI from sampling error. As the latter is never equal to 0, estimated RFI is always biased, and the magnitude of the bias depends on the ratio between the true RFI variance and the residual variance. Additionally, the classical approach cannot dissect RFI into its biological components, namely metabolic efficiency (maintaining BW) and growth efficiency. To remedy these problems we proposed a procedure that directly models the individual animal variation in feed efficiency used for body maintenance and growth. The proposed model is an extension of Koch's procedure that assumes animal-specific regression coefficients rather than population-level parameters. To evaluate the performance of both models, a data simulation was performed using the structure of an existing chicken data set consisting of 2,289 records. Data were simulated using 4 ratios between the true RFI and sampling error variances (1:1, 2:1, 4:1, and 10:1) and 5 correlation values between the 2 animal-specific random regression coefficients (-0.95, -0.5, 0, 0.5, and 0.95). The results clearly showed the superiority of the proposed model over Koch's procedure under all 20 simulation scenarios. In fact, when the ratio was 1:1 and the true genetic correlation was equal to -0.95, the correlation between the true and estimated RFI for animals in the top 20% was 0.60 and 0.51 for the proposed and Koch's models, respectively, an 18% superiority for the proposed model. For the bottom 20% of animals in the ranking
Baxter, Lisa K; Wright, Rosalind J; Paciorek, Christopher J; Laden, Francine; Suh, Helen H; Levy, Jonathan I
2010-01-01
In large epidemiological studies, many researchers use surrogates of air pollution exposure such as geographic information system (GIS)-based characterizations of traffic or simple housing characteristics. It is important to evaluate these surrogates quantitatively against measured pollutant concentrations to determine how their use affects the interpretation of epidemiological study results. In this study, we quantified the implications of using exposure models derived from validation studies, and other alternative surrogate models with varying amounts of measurement error, on epidemiological study findings. We compared previously developed multiple regression models characterizing residential indoor nitrogen dioxide (NO(2)), fine particulate matter (PM(2.5)), and elemental carbon (EC) concentrations to models with less explanatory power that may be applied in the absence of validation studies. We constructed a hypothetical epidemiological study, under a range of odds ratios, and determined the bias and uncertainty caused by the use of various exposure models predicting residential indoor exposure levels. Our simulations illustrated that exposure models with fairly modest R(2) (0.3 to 0.4 for the previously developed multiple regression models for PM(2.5) and NO(2)) yielded substantial improvements in epidemiological study performance, relative to the application of regression models created in the absence of validation studies or poorer-performing validation study models (e.g., EC). In many studies, models based on validation data may not be possible, so it may be necessary to use a surrogate model with more measurement error. This analysis provides a technique to quantify the implications of applying various exposure models with different degrees of measurement error in epidemiological research.
Power of tests for a dichotomous independent variable measured with error.
McCaffrey, Daniel F; Elliott, Marc N
2008-06-01
To examine the implications for statistical power of using predicted probabilities for a dichotomous independent variable, rather than the actual variable. An application uses 271,479 observations from the 2000 to 2002 CAHPS Medicare Fee-for-Service surveys. Study design and data: a methodological study with simulation results and a substantive application to previously collected data. Researchers often must employ key dichotomous predictors that are unobserved but for which predictions exist. We consider three approaches to such data: the classification estimator (1); the direct substitution estimator (2); and the partial information maximum likelihood estimator (3, PIMLE). The efficiency of (1) (its power relative to testing with the true variable) roughly scales with the square of one minus the classification error. The efficiency of (2) roughly scales with the R(2) for predicting the unobserved dichotomous variable, and (2) is usually more powerful than (1). Approach (3) is the most powerful, but for testing differences in means of 0.2-0.5 standard deviations, (2) is typically more than 95 percent as efficient as (3). The information loss from not observing actual values of dichotomous predictors can be quite large. Direct substitution is easy to implement and interpret and nearly as efficient as the PIMLE.
Dong, Li-hu; Li, Feng-ri; Jia, Wei-wei; Liu, Fu-xiang; Wang, He-zhi
2011-10-01
Based on biomass data from 516 sample trees, and by using a non-linear error-in-variables modeling approach, compatible models for the total biomass and the biomass of six components (aboveground part, underground part, stem, crown, branch, and foliage) of 15 major tree species (or groups) in Heilongjiang Province were established, and the best models for total and component biomass were selected. The compatible models based on total biomass were developed by jointly controlling ratio functions at different levels. The heteroscedasticity of the total biomass models was eliminated with a log transformation, and weighted regression was applied to the models for each individual component. Among the compatible biomass models established for the 15 major species (or groups), the model for total biomass had the highest prediction precision (90% or more), followed by the models for aboveground part and stem biomass, with a precision of 87.5% or more. The prediction precision of the biomass models for the other components was relatively low, but it was still greater than 80% for most tree species tested. The modeling efficiency (EF) values of the total, aboveground part, and stem biomass models for all tree species (or groups) were over 0.9, and the EF values of the underground part, crown, branch, and foliage biomass models were over 0.8.
The Errors Caused by Test Site Configuration at the Radiated Emission Measurement
Directory of Open Access Journals (Sweden)
Miki Bittera
2004-01-01
Nowadays, it is very important to know and to keep the uncertainty of EMC measurements at a low value to ensure the comparability of measurement results from different laboratories. This paper deals with the analysis of uncertainties caused by improper test site configuration, especially by receiving antenna positioning. The analysis is performed over the frequency range in which a biconical broadband antenna operates, and it is based on measurements. It can be simpler to obtain results using theoretical analysis, but such analysis does not include the test site properties.
Geise, Robert
2017-07-01
Any measurement of an electrical quantity, e.g. in network or spectrum analysis, is influenced by noise, inducing a measurement uncertainty whose statistical quantification is rarely discussed in the literature. A measurement uncertainty in this context means a measurement error that is associated with a given probability, e.g. one standard deviation. The measurement uncertainty mainly depends on the signal-to-noise ratio (SNR), but can additionally be influenced by the acquisition stage of the measurement setup. The analytical treatment of noise is hardly feasible, as the physical nature of a noise vector needs to account for a certain magnitude and phase in a combined probability function. However, in previous work a closed-form analytical solution for the uncertainties of amplitude and phase measurements depending on the SNR was derived and validated. The derived formula turned out to be a good representation of the measured reality, though several approximations had to be made for the sake of an analytical expression. This contribution gives a physical interpretation of the approximations made and discusses the results in the context of the acquisition of measurement data.
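The dependence of amplitude and phase uncertainty on SNR can also be explored by Monte Carlo simulation rather than a closed form. The sketch below assumes additive complex Gaussian noise on a unit phasor, a common but not universal noise model:

```python
import numpy as np

rng = np.random.default_rng(4)

def amp_phase_std(snr_db, trials=200_000):
    """Spread of measured amplitude and phase when complex Gaussian
    noise is added to a unit phasor at the given power SNR."""
    sigma = 10 ** (-snr_db / 20) / np.sqrt(2)     # per-quadrature noise std
    z = 1.0 + sigma * (rng.normal(size=trials) + 1j * rng.normal(size=trials))
    return np.std(np.abs(z)), np.std(np.angle(z))

for snr in (20, 40, 60):
    a, p = amp_phase_std(snr)
    print(f"SNR {snr} dB: amplitude std = {a:.2e}, phase std = {p:.2e} rad")
```

At high SNR both standard deviations approach the per-quadrature noise level, so each 20 dB of SNR buys roughly a factor of 10 in measurement uncertainty.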
Barlow, Matthew J; Oldroyd, Brian; Smith, Debbie; Lees, Matthew J; Brightmore, Amy; Till, Kevin; Jones, Benjamin; Hind, Karen
2015-01-01
Body composition analysis using dual-energy X-ray absorptiometry (DXA) is becoming increasingly popular in both clinical and sports science settings. Obesity, characterized by high fat mass (FM), is associated with larger precision errors; however, precision errors for athletic groups with high levels of lean mass (LM) are unclear. Total body (TB) and regional (limbs and trunk) body composition were determined from 2 consecutive total body scans (GE Lunar iDXA) with re-positioning in 45 elite male rugby league players (age: 21.8 ± 5.4 yr; body mass index: 27.8 ± 2.5 kg m(-2)). The root mean square standard deviations (percentage coefficients of variation) were TB bone mineral content: 24 g (1.7%), TB LM: 321 g (1.6%), and TB FM: 280 g (2.3%). Regional precision values were superior for measurements of bone mineral content: 4.7-16.3 g (1.7-2.1%) and LM: 137-402 g (2.0-2.4%) than for FM: 63-299 g (3.1-4.1%). The precision error of DXA body composition measurements in elite male rugby players is higher than that reported elsewhere for normal adult populations and similar to that reported in those who are obese. It is advised that caution is applied when interpreting longitudinal DXA-derived body composition measurements in male rugby players, and population-specific least significant change values should be adopted. Copyright © 2015 The International Society for Clinical Densitometry. Published by Elsevier Inc. All rights reserved.
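Precision error from duplicate scans is conventionally summarized as a root mean square standard deviation (RMS-SD) and a percentage coefficient of variation. A sketch with hypothetical duplicate lean mass values (not the study's data):

```python
import math

def precision(pairs):
    """Short-term precision from duplicate scans with repositioning:
    RMS-SD and %CV, for exactly 2 scans per subject."""
    n = len(pairs)
    # with 2 scans, the per-subject variance is (a - b)^2 / 2
    rms_sd = math.sqrt(sum((a - b) ** 2 / 2 for a, b in pairs) / n)
    mean = sum(a + b for a, b in pairs) / (2 * n)
    return rms_sd, 100 * rms_sd / mean

# hypothetical duplicate total-body lean mass scans (g) for 4 athletes
scans = [(78200, 78650), (81400, 81020), (76900, 77210), (84100, 83800)]
rms_sd, cv = precision(scans)
print(f"RMS-SD = {rms_sd:.0f} g, CV = {cv:.2f}%")
```

Multiplying the RMS-SD by 2.77 gives the least significant change at 95% confidence, the population-specific threshold the abstract recommends adopting.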
Non Linear Error Analysis from Orbit Measurements in SPS and RHIC
Cardona, Javier F
2005-01-01
Recently, an "action and phase" analysis of SPS orbit measurements proved to be sensitive to sextupole components intentionally activated at specific locations in the ring. In this paper we attempt to determine the strength of such sextupoles from the measured orbits and compare them with the set values. Action and phase analysis of orbit trajectories generated by RHIC models with nonlinearities will also be presented and compared with RHIC experiments.
Directory of Open Access Journals (Sweden)
Kříž P
2017-03-01
Pavel Kříž, Šárka Skorkovská (Faculty of Medicine, Department of Ophthalmology and Optometry, Masaryk University, and Eye Clinic NeoVize Brno, Brno, Czech Republic). Purpose: With the expansion of modern liquid crystal display optotypes using positive polarization, measurement of heterophorias (HTFs) by means of polarization, and thus partial dissociation of percepts, has become increasingly accessible. Our aims were to establish the prevalence of distance associated HTF, measured with the polarized Cross test of the MKH (measuring and correcting methodology after H-J Haase) method, and its association with age and refractive error in a clinical population of wide age range. Methods: A cross-sectional study was carried out with 170 clinical subjects aged 15-78 years, with an average age of 40.7±16.62 years. All participants had best-corrected visual acuity better than 20/25, stereopsis ≤60 seconds of arc, no heterotropia, had not undergone vision therapy, and had no eye disease. The distance associated HTF was measured with the Cross test of the MKH methodology and quantified by means of a Risley rotary prism. Results: Distance associated HTF was found in 71.2% of participants. Of the total, 36.5% of the cases had esophoria (EP), 9.4% EP and hyperphoria, 10.6% exophoria (XP), 7.1% XP and hyperphoria, 7.6% hyperphoria, and 28.8% orthophoria. The mean distance horizontal associated HTF was +0.76±2.38 ∆; with EP, the mean value was +2.47±2.18 ∆, and with XP, -2.1±1.72 ∆. No correlation was observed between the amount of distance associated HTF and age, and there was no effect of the type or amount of refractive error on the amount of distance associated HTF. Conclusion: A high occurrence of distance associated HTF was revealed when performing the polarized Cross test of the MKH method. The relationship between the degree of associated HTF and refractive error and age
Estimating the Error of an Analog Quantum Simulator by Additional Measurements
Schwenk, Iris; Zanker, Sebastian; Reiner, Jan-Michael; Leppäkangas, Juha; Marthaler, Michael
2017-12-01
We study an analog quantum simulator coupled to a reservoir with a known spectral density. The reservoir perturbs the quantum simulation by causing decoherence. The simulator is used to measure an operator average, which cannot be calculated using any classical means. Since we cannot predict the result, it is difficult to estimate the effect of the environment. In particular, it is difficult to resolve whether the perturbation is small or whether the actual result of the simulation is in fact very different from the ideal system we intend to study. Here, we show that in specific systems a measurement of additional correlators can be used to verify the reliability of the quantum simulation. The procedure only requires additional measurements on the quantum simulator itself. We demonstrate the method theoretically in the case of a single spin connected to a bosonic environment.
Propagation of positional measurement errors to agricultural field boundaries and associated costs
Bruin, de S.; Heuvelink, G.B.M.; Brown, J.D.
2008-01-01
It has been argued that the upcoming targeted approach to managing field operations, or precision farming, requires that field boundaries are measured with cm level accuracy, thus avoiding losses such as wasted inputs, unharvested crops and inefficient use of the land. This paper demonstrates a
ADC non-linear error corrections for low-noise temperature measurements in the LISA band
Energy Technology Data Exchange (ETDEWEB)
Sanjuan, J; Lobo, A; Mateos, N [Institut de Ciencies de l' Espai, CSIC, Fac. de Ciencies, Torre C5, 08193 Bellaterra (Spain); Ramos-Castro, J [Dep. Eng. Electronica, UPC, Campus Nord, Ed. C4, J Girona 1-3, 08034 Barcelona (Spain); DIaz-Aguilo, M, E-mail: sanjuan@ieec.fcr.e [Dep. Fisica Aplicada, UPC, Campus Nord, Ed. B4/B5, J Girona 1-3, 08034 Barcelona (Spain)
2010-05-01
Temperature fluctuations degrade the performance of different subsystems in the LISA mission. For instance, they can exert stray forces on the test masses and thus hamper the required drag-free accuracy. Also, the interferometric system performance depends on the stability of the temperature in the optical elements. Therefore, monitoring the temperature at specific points of the LISA subsystems is required. These measurements will be useful to identify the sources of excess noise caused by temperature fluctuations. The required temperature stability is still to be defined, but a figure around 10 μK Hz^(-1/2) from 0.1 mHz to 0.1 Hz is a good rough guess. The temperature measurement subsystem on board the LISA Pathfinder mission exhibits noise levels of 10 μK Hz^(-1/2) for f > 0.1 mHz. For LISA, based on the above hypothesis, the measurement system should overcome limitations related to the analog-to-digital conversion stage, which degrades the performance of the measurement when the temperature drifts. Investigations on the mitigation of such noise are presented here.
From measurements errors to a new strain gauge design for composite materials
DEFF Research Database (Denmark)
Mikkelsen, Lars Pilgaard; Salviato, Marco; Gili, Jacopo
2015-01-01
Significant over-prediction of the material stiffness, in the order of 1-10% for polymer based composites, has been experimentally observed and numerically determined when using strain gauges for strain measurements instead of non-contact methods such as digital image correlation or less stiff methods...
Nanda, Swadhin; de Graaf, Martin; Sneep, Maarten; de Haan, Johan F.; Stammes, Piet; Sanders, Abram F. J.; Tuinder, Olaf; Pepijn Veefkind, J.; Levelt, Pieternel F.
2018-01-01
Retrieving aerosol optical thickness and aerosol layer height over a bright surface from measured top-of-atmosphere reflectance spectrum in the oxygen A band is known to be challenging, often resulting in large errors. In certain atmospheric conditions and viewing geometries, a loss of sensitivity to aerosol optical thickness has been reported in the literature. This loss of sensitivity has been attributed to a phenomenon known as critical surface albedo regime, which is a range of surface albedos for which the top-of-atmosphere reflectance has minimal sensitivity to aerosol optical thickness. This paper extends the concept of critical surface albedo for aerosol layer height retrievals in the oxygen A band, and discusses its implications. The underlying physics are introduced by analysing the top-of-atmosphere reflectance spectrum as a sum of atmospheric path contribution and surface contribution, obtained using a radiative transfer model. Furthermore, error analysis of an aerosol layer height retrieval algorithm is conducted over dark and bright surfaces to show the dependence on surface reflectance. The analysis shows that the derivative with respect to aerosol layer height of the atmospheric path contribution to the top-of-atmosphere reflectance is opposite in sign to that of the surface contribution - an increase in surface brightness results in a decrease in information content. In the case of aerosol optical thickness, these derivatives are anti-correlated, leading to large retrieval errors in high surface albedo regimes. The consequence of this anti-correlation is demonstrated with measured spectra in the oxygen A band from the GOME-2 instrument on board the Metop-A satellite over the 2010 Russian wildfires incident.
Lamadrid-Figueroa, Héctor; Téllez-Rojo, Martha M.; Ángeles, Gustavo; Hernández-Ávila, Mauricio; Hu, Howard
2010-01-01
In-vivo measurement of bone lead by means of K-X ray fluorescence (KXRF) is the preferred biological marker of chronic exposure to lead. Unfortunately, considerable measurement error associated with KXRF estimations can introduce bias in estimates of the effect of bone lead when this variable is included as the exposure in a regression model. Estimates of uncertainty reported by the KXRF instrument reflect the variance of the measurement error and, although they can be used to correct the mea...
Aerodynamical errors on tower mounted wind speed measurements due to the presence of the tower
Energy Technology Data Exchange (ETDEWEB)
Bergstroem, H. [Uppsala Univ. (Sweden). Dept. of Meteorology; Dahlberg, J.Aa. [Aeronautical Research Inst. of Sweden, Bromma (Sweden)
1996-12-01
Field measurements of wind speed from two lattice towers showed large differences for wind directions where the anemometers of both towers should be unaffected by any upstream obstacle. The wind speed was measured by cup anemometers mounted on booms along the side of the tower. A simple wind tunnel test indicates that the boom, for the studied conditions, could cause minor flow disturbances. A theoretical study, by means of simple 2D flow modelling of the flow around the mast, demonstrates that the tower itself could cause large wind flow disturbances. A theoretical study, based on simple treatment of the physics of motion of a cup anemometer, demonstrates that a cup anemometer is sensitive to velocity gradients across the cups and responds clearly to velocity gradients in the vicinity of the tower. Comparison of the results from the theoretical study and field tests show promising agreement. 2 refs, 8 figs
The effect of measurement error of phenotypes on genome wide association studies
Directory of Open Access Journals (Sweden)
Barendse William
2011-05-01
Background: There is an unspoken assumption that imprecision of measurement of phenotypes will not have large systematic effects on the location of significant associations in a genome wide association study (GWAS). In this report, the effects of two independent measurements of the same trait, subcutaneous fat thickness, were examined in a GWAS of 940 individuals. Results: The trait values obtained by two independent groups working to the same trait definition were correlated with r = 0.72. The allele effects obtained from the two analyses were only moderately correlated, with r = 0.53, and there was one significant (P Conclusions: It is recommended that trait values in GWAS experiments be examined for repeatability before the experiment is performed. For traits that do not have high repeatability (r
Chen, Yuan-Liu; Niu, Zengyuan; Matsuura, Daiki; Lee, Jung Chul; Shimizu, Yuki; Gao, Wei; Oh, Jeong Seok; Park, Chun Hong
2017-10-01
In this paper, a four-probe measurement system is implemented and verified for the carriage slide motion error measurement of a large-scale roll lathe used in hybrid manufacturing where a laser machining probe and a diamond cutting tool are placed on two sides of a roll workpiece for manufacturing. The motion error of the carriage slide of the roll lathe is composed of two straightness motion error components and two parallelism motion error components in the vertical and horizontal planes. Four displacement measurement probes, which are mounted on the carriage slide with respect to four opposing sides of the roll workpiece, are employed for the measurement. Firstly, based on the reversal technique, the four probes are moved by the carriage slide to scan the roll workpiece before and after a 180-degree rotation of the roll workpiece. Taking into consideration the fact that the machining accuracy of the lathe is influenced by not only the carriage slide motion error but also the gravity deformation of the large-scale roll workpiece due to its heavy weight, the vertical motion error is thus characterized relating to the deformed axis of the roll workpiece. The horizontal straightness motion error can also be synchronously obtained based on the reversal technique. In addition, based on an error separation algorithm, the vertical and horizontal parallelism motion error components are identified by scanning the rotating roll workpiece at the start and the end positions of the carriage slide, respectively. The feasibility and reliability of the proposed motion error measurement system are demonstrated by the experimental results and the measurement uncertainty analysis.
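The reversal principle behind the measurement can be illustrated in one dimension: rotating the workpiece by 180 degrees flips the sign of its form profile in the probe reading while the slide straightness error stays fixed, so the two components separate exactly. This is an idealized sketch; the four-probe arrangement and gravity-deformation handling of the paper are not modeled:

```python
import numpy as np

x = np.linspace(0, 1, 200)                    # normalized slide position
straightness = 3e-6 * np.sin(2 * np.pi * x)   # slide straightness error (m)
workpiece = 1e-6 * np.cos(4 * np.pi * x)      # roll form/deflection profile (m)

# probe readings before and after a 180-degree rotation of the workpiece:
# the workpiece profile changes sign relative to the fixed slide error
m1 = straightness + workpiece
m2 = straightness - workpiece

slide_est = (m1 + m2) / 2     # reversal separates the two components exactly
work_est = (m1 - m2) / 2

print(np.allclose(slide_est, straightness), np.allclose(work_est, workpiece))
```

In practice the separation is only as good as the repeatability of the scans and the rotation, which is why the paper pairs the reversal with an uncertainty analysis.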
Error Modeling of a Fast Digital Integrator for Magnetic Measurements at CERN
Arpaia, Pasquale; Spiezia, Giovanni; Tiso, Stefano
2007-01-01
A statistical behavioral modeling approach for assessing dynamic metrological performance during the concept design of accurate digitizers is proposed. A response-surface approach based on statistical design of experiments is exploited in order to avoid unrealistic hypotheses of linearity, optimize simulation effort, systematically explore operating conditions, and verify identification and validation uncertainty. An actual case study on the dynamic metrological characterization of a Fast Digital Integrator for high-performance magnetic measurements at the European Organization for Nuclear Research (CERN) is presented.
Energy Technology Data Exchange (ETDEWEB)
Kobayashi, J.; Asahi, T.; Takahashi, S.; Glazer, A.M.
1988-10-01
It was shown in the course of developing the high-accuracy universal polarimeter (HAUP) that no polarimetric optical analysis can be free from a systematic error γ originating in the parasitic ellipticities of the constituent Nicol prisms. A method by which one can remove γ in the HAUP method is presented. This has been successfully applied to measurements of the gyration tensors and birefringence of two enantiomorphic crystals of α-quartz. It was found that the order of magnitude of this error for a polarizer is about 10⁻⁴, and consequently γ lies between 10⁻⁴ and 10⁻³ in typical optical systems. Our results, g₁₁ = (5.7 ± 0.52) × 10⁻⁵ and g₃₃ = (−13.6 ± 0.52) × 10⁻⁵ at a wavelength of 6328 Å for laevorotatory quartz at 300 K, are in good agreement with some of the previous reports.
Giannini, John P; York, Andrew G; Shroff, Hari
2017-01-01
We describe a method to speed up microelectromechanical system (MEMS) mirror scanning by > 20x, while also improving scan accuracy. We use Landweber deconvolution to determine an input voltage which would produce a desired output, based on the measured MEMS impulse response. Since the MEMS is weakly nonlinear, the observed behavior deviates from expectations, and we iteratively improve our input to minimize this deviation. This allows customizable MEMS angle vs. time with <1% deviation from the desired scan pattern. We demonstrate our technique by optimizing a point scanning microscope's raster patterns to image mammal submandibular gland and pollen at ~10 frames/s.
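The iterative pre-shaping idea can be sketched as a Landweber iteration on a toy linear system: model the device as y = H·u, then repeatedly nudge the input by the adjoint of the residual. The impulse response, step size, and waveforms below are illustrative stand-ins, not the actual MEMS dynamics.

```python
import numpy as np

# Landweber-style input shaping on a toy linear system y = H @ u:
# iterate u <- u + alpha * H^T (desired - H u) until the output tracks
# the desired scan. All signals and constants here are illustrative.

n = 200
h = np.exp(-np.arange(30) / 8.0)
h /= h.sum()                                  # toy low-pass impulse response

H = np.zeros((n, n))                          # causal convolution as a matrix
for i, hi in enumerate(h):
    H += hi * np.eye(n, k=-i)

desired = np.sin(2 * np.pi * np.arange(n) / 50.0)  # target scan waveform

u = desired.copy()                            # naive input: the target itself
alpha = 1.0 / np.linalg.norm(H, 2) ** 2       # step size ensuring convergence
for _ in range(2000):
    u += alpha * H.T @ (desired - H @ u)      # Landweber update

err_naive = np.abs(H @ desired - desired).max()   # drive with target directly
err_shaped = np.abs(H @ u - desired).max()        # drive with shaped input
assert err_shaped < 0.5 * err_naive
```

The paper's extra step, handling weak nonlinearity, corresponds to re-measuring the true output at each iteration instead of trusting the linear model H.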
DEFF Research Database (Denmark)
Gaynor, J. E.; Kristensen, Leif
1986-01-01
For pt. I see ibid., vol. 3, no. 3, p. 523-8 (1986). The authors use the theoretical results presented in part I to correct turbulence parameters derived from monostatic sodar wind measurements in an attempt to improve the statistical comparisons with the sonic anemometers on the Boulder Atmospheric … and after the application of the spatial and temporal volume separation correction, are presented. The improvement appears to be significant. The effects of correcting for pulse volume averaging derived in part I are also discussed.
Directory of Open Access Journals (Sweden)
Adytia Darmawan
2016-12-01
Full Text Available Position estimation using a WIMU (Wireless Inertial Measurement Unit) is one of the emerging technologies in the field of indoor positioning systems. A WIMU can detect movement and does not depend on GPS signals. The position is then estimated using a modified ZUPT (Zero Velocity Update) method that uses Filter Magnitude Acceleration (FMA), Variance Magnitude Acceleration (VMA) and Angular Rate (AR) estimation. Performance of this method was evaluated on a six-legged robot navigation system. Experimental results show that the combination of VMA-AR gives the best position estimation.
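A zero-velocity update can be sketched in a few lines: integrate acceleration to velocity, and clamp the velocity to zero whenever a stance detector (here, the variance of the acceleration signal, in the spirit of the VMA idea above) says the platform is stationary. The detector threshold and signals are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def integrate_with_zupt(acc, dt, window=10, var_threshold=1e-4):
    """Integrate acceleration to velocity, resetting velocity to zero
    whenever the local variance of the acceleration indicates rest.
    Without the reset, any accelerometer bias integrates into drift."""
    v = np.zeros_like(acc)
    for k in range(1, len(acc)):
        v[k] = v[k - 1] + acc[k] * dt
        lo = max(0, k - window)
        if np.var(acc[lo:k + 1]) < var_threshold:
            v[k] = 0.0              # zero-velocity update: clamp drift
    return v

dt = 0.01
acc = np.zeros(300)
acc[100:110] = 1.0                  # short burst of acceleration
acc[110:120] = -1.0                 # deceleration back to rest
v = integrate_with_zupt(acc, dt)

assert v[-1] == 0.0                 # velocity clamped once at rest again
assert v.max() > 0.05               # motion is still captured in between
```

Position would then follow from a second integration, with the ZUPT resets preventing unbounded drift between steps.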
Kellman, Philip J; Mnookin, Jennifer L; Erlikhman, Gennady; Garrigan, Patrick; Ghose, Tandra; Mettler, Everett; Charlton, David; Dror, Itiel E
2014-01-01
Latent fingerprint examination is a complex task that, despite advances in image processing, still fundamentally depends on the visual judgments of highly trained human examiners. Fingerprints collected from crime scenes typically contain less information than fingerprints collected under controlled conditions. Specifically, they are often noisy and distorted and may contain only a portion of the total fingerprint area. Expertise in fingerprint comparison, like other forms of perceptual expertise, such as face recognition or aircraft identification, depends on perceptual learning processes that lead to the discovery of features and relations that matter in comparing prints. Relatively little is known about the perceptual processes involved in making comparisons, and even less is known about what characteristics of fingerprint pairs make particular comparisons easy or difficult. We measured expert examiner performance and judgments of difficulty and confidence on a new fingerprint database. We developed a number of quantitative measures of image characteristics and used multiple regression techniques to discover objective predictors of error as well as perceived difficulty and confidence. A number of useful predictors emerged, and these included variables related to image quality metrics, such as intensity and contrast information, as well as measures of information quantity, such as the total fingerprint area. Also included were configural features that fingerprint experts have noted, such as the presence and clarity of global features and fingerprint ridges. Within the constraints of the overall low error rates of experts, a regression model incorporating the derived predictors demonstrated reasonable success in predicting objective difficulty for print pairs, as shown both in goodness of fit measures to the original data set and in a cross validation test. The results indicate the plausibility of using objective image metrics to predict expert performance and
Attia, A; Dhahbi, W; Chaouachi, A; Padulo, J; Wong, D P; Chamari, K
2017-03-01
Common methods to estimate vertical jump height (VJH) are based on measurements of flight time (FT) or vertical reaction force. This study aimed to assess the measurement errors when estimating VJH from flight time using photocell devices, in comparison with the gold-standard jump height measured by a force plate (FP). The second purpose was to determine the intrinsic reliability of the Optojump photoelectric cells in estimating VJH. To this aim, 20 subjects (age: 22.50±1.24 years) performed maximal vertical jumps in three modalities in randomized order: the squat jump (SJ), counter-movement jump (CMJ), and CMJ with arm swing (CMJarm). Each trial was simultaneously recorded by the FP and Optojump devices. High intra-class correlation coefficients (ICCs) for validity (0.98-0.99) and low limits of agreement (less than 1.4 cm) were found, even though a systematic difference in jump height was consistently observed between the FT and double-integration-of-force methods (-31% to -27%; p < …; … > 1.2). Intra-session reliability of Optojump was excellent, with ICCs ranging from 0.98 to 0.99, low coefficients of variation (3.98%), and low standard errors of measurement (0.8 cm). It was concluded that there was a high correlation between the two methods of estimating vertical jump height, but the FT method cannot replace the gold standard, due to the large systematic bias. According to our results, equations for each of the three jump modalities are presented in order to obtain a better estimation of the jump height.
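The flight-time method rests on simple ballistics: if takeoff and landing occur at the same body configuration, jump height is h = g·FT²/8. The systematic bias the study reports arises precisely because that assumption is violated (e.g., the legs flex before landing, lengthening the flight time). A minimal sketch of the formula:

```python
# Ballistic model behind the flight-time (FT) method: the body rises for
# FT/2 seconds under gravity, so h = g * (FT/2)^2 / 2 = g * FT^2 / 8.

G = 9.81  # gravitational acceleration, m/s^2

def jump_height_from_flight_time(ft_s: float) -> float:
    """Jump height in metres from flight time in seconds (ballistic model)."""
    return G * ft_s ** 2 / 8.0

# A 0.5 s flight time corresponds to roughly 0.31 m of rise.
h = jump_height_from_flight_time(0.5)
assert abs(h - 0.3066) < 1e-3
```

Any extension of the flight time at landing inflates FT and therefore overestimates h quadratically, which is consistent with the modality-specific correction equations the authors propose.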
Redmond, Sean M
2016-02-01
The empirical record regarding the expected co-occurrence of attention-deficit/hyperactivity disorder (ADHD) and specific language impairment is confusing and contradictory. A research plan is presented that has the potential to untangle links between these 2 common neurodevelopmental disorders. Data from completed and ongoing research projects examining the relative value of different clinical markers for separating cases of specific language impairment from ADHD are presented. The best option for measuring core language impairments in a manner that does not potentially penalize individuals with ADHD is to focus assessment on key grammatical and verbal memory skills. Likewise, assessment of ADHD symptoms through standardized informant rating scales is optimized when they are adjusted for overlapping language and academic symptoms. As a collection, these clinical metrics set the stage for further examination of potential linkages between attention deficits and language impairments.
Effect of measurement error budgets and hybrid metrology on qualification metrology sampling
Sendelbach, Matthew; Sarig, Niv; Wakamoto, Koichi; Kim, Hyang Kyun (Helen); Isbester, Paul; Asano, Masafumi; Matsuki, Kazuto; Osorio, Carmen; Archie, Chas
2014-10-01
Until now, metrologists had no statistics-based method to determine the sampling needed for an experiment before the start of that accuracy experiment. We show a solution to this problem, called inverse total measurement uncertainty (TMU) analysis, by presenting statistically based equations that allow the user to estimate the needed sampling after providing appropriate inputs, allowing him to make important "risk versus reward" sampling, cost, and equipment decisions. Application examples using experimental data from scatterometry and critical-dimension scanning electron microscope tools are used first to demonstrate how inverse TMU analysis can be used to make intelligent sampling decisions, and then to reveal why low sampling can lead to unstable and misleading results. One model is developed that can help experimenters minimize sampling costs. A second cost model reveals the inadequacy of some current sampling practices, and the enormous costs associated with sampling that provides reasonable levels of certainty in the result. We introduce strategies on how to manage and mitigate these costs and begin the discussion on how fabs are able to manufacture devices using minimal reference sampling when qualifying metrology steps. Finally, the relationship between inverse TMU analysis and hybrid metrology is explored.
Lamadrid-Figueroa, Héctor; Téllez-Rojo, Martha M; Angeles, Gustavo; Hernández-Ávila, Mauricio; Hu, Howard
2011-01-01
In-vivo measurement of bone lead by means of K-X-ray fluorescence (KXRF) is the preferred biological marker of chronic exposure to lead. Unfortunately, considerable measurement error associated with KXRF estimates can introduce bias in estimates of the effect of bone lead when this variable is included as the exposure in a regression model. Estimates of uncertainty reported by the KXRF instrument reflect the variance of the measurement error and, although they can be used to correct the measurement error bias, they are seldom used in epidemiological statistical analyses. Errors-in-variables (EIV) regression allows for correction of bias caused by measurement error in predictor variables, based on knowledge of the reliability of such variables. The authors propose a way to obtain reliability coefficients for bone lead measurements from uncertainty data reported by the KXRF instrument and compare, by the use of Monte Carlo simulations, results obtained using EIV regression models vs. those obtained by standard procedures. Results of the simulations show that ordinary least squares (OLS) regression models provide severely biased estimates of effect, and that EIV provides nearly unbiased estimates. Although EIV effect estimates are more imprecise, their mean squared error is much smaller than that of the OLS estimates. In conclusion, EIV is a better alternative than OLS for estimating the effect of bone lead when measured by KXRF. Copyright © 2010 Elsevier Inc. All rights reserved.
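The core mechanism can be demonstrated in a few lines of simulation: under classical measurement error, the OLS slope is attenuated by the reliability ratio λ = var(X)/(var(X)+var(U)), so dividing the naive slope by an estimate of λ removes the bias. This is a generic sketch of reliability-based EIV correction on simulated data, not the authors' KXRF-specific procedure.

```python
import numpy as np

# Attenuation under classical measurement error, and its reliability-based
# correction. The "bone lead" variables here are simulated stand-ins.

rng = np.random.default_rng(42)
n = 200_000
beta = 2.0                                     # true effect of exposure
x_true = rng.normal(0.0, 1.0, n)               # true exposure (standardized)
sigma_u = 0.8                                  # measurement-error SD (assumed known)
x_obs = x_true + rng.normal(0.0, sigma_u, n)   # KXRF-like noisy measurement
y = beta * x_true + rng.normal(0.0, 1.0, n)    # health outcome

slope_ols = np.cov(x_obs, y)[0, 1] / np.var(x_obs)
reliability = 1.0 / (1.0 + sigma_u ** 2)       # var(X) / (var(X) + var(U))
slope_eiv = slope_ols / reliability            # de-attenuated estimate

assert abs(slope_ols - beta * reliability) < 0.05   # OLS biased toward zero
assert abs(slope_eiv - beta) < 0.05                 # EIV nearly unbiased
```

The paper's contribution is obtaining the reliability coefficient from the per-measurement uncertainties the KXRF instrument itself reports, rather than assuming sigma_u is known as done here.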
Taub, Marc B.; Peyser, Thomas A.; Erik Rosenquist, J.
2007-01-01
Background: A 5-day in-patient study designed to assess the accuracy of the FreeStyle Navigator® Continuous Glucose Monitoring System revealed that the level of accuracy of the continuous sensor measurements was dependent on the rate of glucose change. When the absolute rate of change was less than 1 mg·dl⁻¹·min⁻¹ (75% of the time), the median absolute relative difference (ARD) was 8.5%, with 85% of all points falling within the A zone of the Clarke error grid. When the absolute rate of change was greater than 2 mg·dl⁻¹·min⁻¹ (8% of the time), the median ARD was 17.5%, with 59% of all points falling within the Clarke A zone. Method: Numerical simulations were performed to investigate the effects of the rate of change of glucose on sensor measurement error. This approach enabled physiologically relevant distributions of glucose values to be reordered to explore the effect of different glucose rate-of-change distributions on apparent sensor accuracy. Results: The physiological lag between blood and interstitial fluid glucose levels is sufficient to account for the observed difference in sensor accuracy between periods of stable glucose and periods of rapidly changing glucose. Conclusions: The role of physiological lag in the apparent decrease in sensor accuracy at high glucose rates of change has implications for clinical study design, regulatory review of continuous glucose sensors, and development of performance standards for this new technology. This work demonstrates the difficulty of comparing accuracy measures between different clinical studies and highlights the need for studies to include both relevant glucose distributions and relevant glucose rate-of-change distributions. PMID:19885136
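The lag mechanism is easy to reproduce numerically: model interstitial glucose as a first-order lag of blood glucose, d(ISF)/dt = (BG − ISF)/τ. Even a perfect sensor reading the interstitial value then shows large apparent error during rapid glucose change and none at steady state. The time constant and glucose trace below are illustrative assumptions, not the study's parameters.

```python
import numpy as np

# First-order lag between blood glucose (BG) and interstitial fluid (ISF)
# glucose, integrated with forward Euler. A perfect ISF sensor still
# disagrees with BG whenever BG is changing quickly.

tau = 10.0                        # lag time constant, minutes (assumed)
dt = 1.0                          # 1-minute time steps
t = np.arange(0, 240, dt)

# BG: flat at 100, then rising at 2 mg/dL/min for an hour, then flat at 220
bg = np.piecewise(t, [t < 60, (t >= 60) & (t < 120), t >= 120],
                  [100.0, lambda x: 100.0 + 2.0 * (x - 60.0), 220.0])

isf = np.empty_like(bg)
isf[0] = bg[0]
for k in range(1, len(t)):
    isf[k] = isf[k - 1] + dt * (bg[k - 1] - isf[k - 1]) / tau

ard = np.abs(isf - bg) / bg * 100.0          # absolute relative difference, %
stable = ard[(t > 30) & (t < 60)]            # before the ramp
ramping = ard[(t > 90) & (t < 120)]          # during the 2 mg/dL/min ramp

assert stable.max() < 1.0                    # lag invisible at steady state
assert ramping.mean() > stable.mean()        # large during rapid change
```

During the ramp the lag settles toward rate × τ (here about 20 mg/dL), matching the abstract's observation that apparent error roughly doubles at high rates of change.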
Directory of Open Access Journals (Sweden)
Paikar Fatima Mazhar Hameed
2016-02-01
Full Text Available The craziness of English spelling has undeniably perplexed learners, especially in an EFL context such as the Kingdom of Saudi Arabia. In these situations, among other obstacles, learners also have to tackle the perpetual and unavoidable problem of mother-tongue (MT) interference. Sadly, this perplexity takes the shape of a real problem in the language classroom, where the English teacher has a tough time rationalizing with the learners why 'cough' is not spelt as /kuf/ or why 'knee' has a silent /k/. It is observed that students of English as a second/foreign language in Saudi Arabia commit spelling errors that not only cause a lot of confusion to the teachers but also lower the self-esteem of the students concerned. The current study aims to identify the key problem areas as far as the English spelling ability of Saudi EFL learners is concerned. It also aims to suggest remedial and pedagogical measures to improve the learners' competence in this crucial, though hitherto nascent, skill area in the Saudi education system. Keywords: EFL, error-pattern, spelling instructions, orthography, phonology, vocabulary, language skills, language users
Directory of Open Access Journals (Sweden)
Amy Mizen
2015-11-01
Full Text Available The aim of this study was to quantify the error associated with different accessibility methods commonly used by public health researchers. Network distances were calculated from each household to the nearest GP in our study area in the UK. Household-level network distances were taken as the gold standard and compared to alternative, widely used accessibility methods. Four spatial aggregation units, two centroid types and two distance calculation methods represent commonly used accessibility calculation methods. Spearman's rank coefficients were calculated to show the extent to which distance measurements were correlated with the gold standard. We assessed the proportion of households that were incorrectly assigned to a GP for each method. The distance method, level of spatial aggregation and centroid type were compared between urban and rural regions. Urban distances deviated less from the gold standard, with smaller errors, compared to rural regions. For urban regions, Euclidean distances are significantly related to network distances. Network distances assigned a larger proportion of households to the correct GP compared to Euclidean distances, for both urban and rural morphologies. Our results, stratified by urban and rural populations, explain why contradictory results have been reported in the literature. The results we present are intended to be used as an aide-memoire by public health researchers using geographically aggregated data in accessibility research.
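The comparison the study performs can be sketched with synthetic data: stand in for road-network distance with a grid ("taxicab") metric, compare it with straight-line Euclidean distance to a single facility, and measure agreement with Spearman's rank correlation. The points, the single-GP setup, and the Manhattan proxy are all illustrative assumptions.

```python
import numpy as np

# Euclidean vs. a grid-network proxy distance from each "household" to one
# "GP surgery", with agreement summarized by Spearman's rank correlation.

rng = np.random.default_rng(7)
households = rng.uniform(0, 10, size=(500, 2))   # synthetic household coords
gp = np.array([5.0, 5.0])                        # a single GP location

euclid = np.linalg.norm(households - gp, axis=1)     # straight-line distance
network = np.abs(households - gp).sum(axis=1)        # Manhattan proxy for roads

def spearman(a, b):
    """Spearman rank correlation: Pearson correlation of the ranks
    (no tie handling; fine for continuous simulated data)."""
    ra = np.argsort(np.argsort(a)).astype(float)
    rb = np.argsort(np.argsort(b)).astype(float)
    return np.corrcoef(ra, rb)[0, 1]

rho = spearman(euclid, network)
assert rho > 0.9    # the two metrics rank households very similarly
```

A high rank correlation does not imply correct nearest-GP assignment, which is exactly the distinction the study draws: assignment errors depend on which facility is nearest, not on the overall ordering of distances.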
Measuring the effect of inter-study variability on estimating prediction error.
Directory of Open Access Journals (Sweden)
Shuyi Ma
Full Text Available The biomarker discovery field is replete with molecular signatures that have not translated into the clinic despite ostensibly promising performance in predicting disease phenotypes. One widely cited reason is lack of classification consistency, largely due to failure to maintain performance from study to study. This failure is widely attributed to variability in data collected for the same phenotype among disparate studies, due to technical factors unrelated to phenotypes (e.g., laboratory settings resulting in "batch effects") and non-phenotype-associated biological variation in the underlying populations. These sources of variability persist in new data collection technologies. Here we quantify the impact of these combined "study effects" on a disease signature's predictive performance by comparing two types of validation methods: ordinary randomized cross-validation (RCV), which extracts random subsets of samples for testing, and inter-study validation (ISV), which excludes an entire study for testing. Whereas RCV hardwires an assumption of training and testing on identically distributed data, this key property is lost in ISV, yielding systematic decreases in performance estimates relative to RCV. Measuring the RCV-ISV difference as a function of the number of studies quantifies the influence of study effects on performance. As a case study, we gathered publicly available gene expression data from 1,470 microarray samples of 6 lung phenotypes from 26 independent experimental studies and 769 RNA-seq samples of 2 lung phenotypes from 4 independent studies. We find that the RCV-ISV performance discrepancy is greater in phenotypes with few studies, and that the ISV performance converges toward RCV performance as data from additional studies are incorporated into classification. We show that by examining how fast ISV performance approaches RCV as the number of studies is increased, one can estimate when "sufficient" diversity has been achieved for learning a …
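The RCV-vs-ISV gap can be reproduced with a minimal simulation: give each synthetic "study" a batch shift along the signal axis and compare a random split (both studies in training and test) against holding out an entire study. The classifier, the effect sizes, and the two-study setup are illustrative assumptions, not the paper's pipeline.

```python
import numpy as np

rng = np.random.default_rng(3)

def make_study(offset, n=100, noise=0.1):
    """One 'study': two phenotype classes separated along a signal axis,
    plus a study-wide batch shift along the same axis."""
    labels = np.repeat([1, -1], n // 2)
    x = labels.astype(float) + offset + rng.normal(0.0, noise, n)
    return x, labels

def nearest_centroid_acc(x_tr, y_tr, x_te, y_te):
    c_pos, c_neg = x_tr[y_tr == 1].mean(), x_tr[y_tr == -1].mean()
    pred = np.where(np.abs(x_te - c_pos) < np.abs(x_te - c_neg), 1, -1)
    return (pred == y_te).mean()

x_a, y_a = make_study(+0.6)            # strong batch effects (assumed)
x_b, y_b = make_study(-0.6)
x, y = np.concatenate([x_a, x_b]), np.concatenate([y_a, y_b])
study = np.repeat([0, 1], 100)

# RCV: random half split, so both studies appear in training and test
perm = rng.permutation(len(x))
tr, te = perm[:100], perm[100:]
acc_rcv = nearest_centroid_acc(x[tr], y[tr], x[te], y[te])

# ISV: hold out an entire study for testing, average over both choices
acc_isv = np.mean([
    nearest_centroid_acc(x[study != s], y[study != s],
                         x[study == s], y[study == s])
    for s in (0, 1)
])

assert acc_rcv > acc_isv + 0.2     # study effects inflate RCV estimates
```

With the batch shift comparable to the class separation, RCV is near perfect while ISV collapses toward chance, which is the systematic optimism of within-distribution validation that the paper quantifies.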
Directory of Open Access Journals (Sweden)
Sang-Wook Jin
2017-01-01
Full Text Available One of the most important issues in keeping membrane structures in stable condition is maintaining the proper stress distribution over the membrane. However, it is difficult to determine the quantitative real stress level in the membrane after the completion of the structure. The stress relaxation phenomenon of the membrane, and the fluttering effect due to strong wind or ponding caused by precipitation, may cause severe damage to the membrane structure itself. Therefore, it is very important to know the magnitude of the existing stress in membrane structures for their maintenance. The authors have proposed a new method for separately estimating the membrane stress in two different directions using sound waves instead of directly measuring the membrane stress. The new method utilizes the resonance phenomenon of the membrane, which is induced by sound excitations given through an audio speaker. During such an experiment, the effect of the surrounding air on the vibrating membrane cannot be overlooked if high measurement precision is to be assured. In this paper, an evaluation scheme for the added mass of the membrane accounting for the effect of air on the vibrating membrane, and the correction of the measurement error, is discussed. In addition, three types of membrane materials are used in the experiment in order to verify the expandability and accuracy of the membrane measurement equipment.
Tops, Mattie; Boksem, Maarten A S
2010-12-01
We hypothesized that interactions between traits and context predict task engagement, as measured by the amplitude of the error-related negativity (ERN), performance, and relative frontal activity asymmetry (RFA). In Study 1, we found that drive for reward, absorption, and constraint independently predicted self-reported persistence. We hypothesized that, during a prolonged monotonous task, absorption would predict initial ERN amplitudes, constraint would delay declines in ERN amplitudes and deterioration of performance, and drive for reward would predict left RFA when a reward could be obtained. Study 2, employing EEG recordings, confirmed our predictions. The results showed that most traits that have in previous research been related to ERN amplitudes have a relationship with the motivational trait persistence in common. In addition, trait-context combinations that are likely associated with increased engagement predict larger ERN amplitudes and RFA. Together, these results support the hypothesis that engagement may be a common underlying factor predicting ERN amplitude.
Directory of Open Access Journals (Sweden)
Giuseppe Arbia
2015-10-01
Full Text Available In many microeconometric models we use distances. For instance, in modelling individual behavior in labor economics or in health studies, the distance from a relevant point of interest (such as a hospital or a workplace) is often used as a predictor in a regression framework. However, in order to preserve confidentiality, spatial micro-data are often geo-masked, thus reducing their quality and dramatically distorting the inferential conclusions. In particular, in this case a measurement error is introduced in the independent variable, which negatively affects the properties of the estimators. This paper studies these negative effects, discusses their consequences, and suggests possible interpretations and directions to data producers, end users, and practitioners.
Effect of age, sex, and refractive errors on central corneal thickness measured by Oculus Pentacam®
Directory of Open Access Journals (Sweden)
Hashmani N
2017-06-01
Full Text Available Nauman Hashmani,1 Sharif Hashmani,1 Azfar N Hanfi,1 Misbah Ayub,2 Choudhry M Saad,2 Hina Rajani,2 Marium G Muhammad,2 Misbahul Aziz1 1Department of Ophthalmology, Hashmanis Hospital, Karachi, Pakistan; 2Dow Medical College, Karachi, Pakistan. Background: Central corneal thickness (CCT) can be used to assess the corneal physiological condition as well as the pathological changes associated with ocular diseases. It has an influence on the measurement of intraocular pressure and is used as a screening tool for refractive surgery candidates. The aim of this study was to determine the median CCT in a normal Pakistani population and to correlate CCT with age, sex, and refractive errors. Methods: We conducted a retrospective analysis of 5,171 healthy eyes of 2,598 patients who came to Hashmanis Hospital, Karachi, Pakistan. The age of the patients ranged from 6 to 70 years. The refractive error was gauged by an auto-refractometer, and CCT was measured using an Oculus Pentacam®. Results: The median CCT of our study was 541.0 µm with an interquartile range (IQR) of 44.0 µm. The median age was 26.0 years (IQR: 8.0). Median spherical equivalent (SE) of the patients was −4.3 D (IQR: 3.3), with a median sphere value of −4.0 D (IQR: 3.8). Lastly, the median cylinder was −1.0 D (IQR: 1.3). Age had a weak negative correlation with CCT (r=−0.058) that was statistically significant (P<0.001). Additionally, males had thinner CCT readings than females (P=0.001). The cylinder values, on the other hand, had a significant (P=0.004) positive correlation (r=0.154). Three values showed no significant correlation: sphere (P=0.100), SE (P=0.782), and the left or right eye (P=0.151). Conclusion: Among the Pakistani population, CCT was significantly affected by three variables: sex, age, and cylinder. No relationship of CCT was observed with the left or right eye, sphere, or SE. Keywords: refractive surgery, glaucoma, topography, sex, refractive errors, astigmatism
Kipnis, Victor
2009-03-03
Dietary assessment of episodically consumed foods gives rise to nonnegative data that have excess zeros and measurement error. Tooze et al. (2006, Journal of the American Dietetic Association 106, 1575-1587) describe a general statistical approach (National Cancer Institute method) for modeling such food intakes reported on two or more 24-hour recalls (24HRs) and demonstrate its use to estimate the distribution of the food's usual intake in the general population. In this article, we propose an extension of this method to predict individual usual intake of such foods and to evaluate the relationships of usual intakes with health outcomes. Following the regression calibration approach for measurement error correction, individual usual intake is generally predicted as the conditional mean intake given 24HR-reported intake and other covariates in the health model. One feature of the proposed method is that additional covariates potentially related to usual intake may be used to increase the precision of estimates of usual intake and of diet-health outcome associations. Applying the method to data from the Eating at America's Table Study, we quantify the increased precision obtained from including reported frequency of intake on a food frequency questionnaire (FFQ) as a covariate in the calibration model. We then demonstrate the method in evaluating the linear relationship between log blood mercury levels and fish intake in women by using data from the National Health and Nutrition Examination Survey, and show increased precision when including the FFQ information. Finally, we present simulation results evaluating the performance of the proposed method in this context.
Nae, Jenny; Creaby, Mark W; Cronström, Anna; Ageberg, Eva
2017-09-01
To systematically review measurement properties of visual assessment and rating of Postural Orientation Errors (POEs) in participants with or without lower extremity musculoskeletal disorders. A systematic review according to the PRISMA guidelines was conducted. The search was performed in Medline (Pubmed), CINAHL and EMBASE (OVID) databases until August 2016. Studies reporting measurement properties for visual rating of postural orientation during the performance of weight-bearing functional tasks were included. No limits were placed on participant age, sex or whether they had a musculoskeletal disorder affecting the lower extremity. Twenty-eight articles were included, 5 of which included populations with a musculoskeletal disorder. Visual rating of the knee-medial-to-foot position (KMFP) was reliable within and between raters, and meta-analyses showed that this POE was valid against 2D and 3D kinematics in asymptomatic populations. Other segment-specific POEs showed either poor to moderate reliability or there were too few studies to permit synthesis. Intra-rater reliability was at least moderate for POEs within a task, whereas inter-rater reliability was at most moderate. Visual rating of KMFP appears to be valid and reliable in asymptomatic adult populations. Measurement properties remain to be determined for POEs other than KMFP. Copyright © 2017 Elsevier Ltd. All rights reserved.
Zeka, Ariana; Schwartz, Joel
2004-01-01
Misclassification of exposure usually leads to biased estimates of exposure–response associations. This is particularly an issue in cases with multiple correlated exposures, where the direction of bias is uncertain. It is necessary to address this problem when considering associations with important public health implications, such as the one between mortality and air pollution, because biased exposure effects can result in biased risk assessments. The National Morbidity and Mortality Air Pollution Study (NMMAPS) recently reported results from an assessment of multiple pollutants and daily mortality in 90 U.S. cities. That study assessed the independent associations of the selected pollutants with daily mortality in two-pollutant models. Excess mortality was associated with particulate matter with aerodynamic diameter ≤10 μm (PM10), but not with other pollutants, in these two-pollutant models. The extent of bias due to measurement error in these reported results is unclear. Schwartz and Coull recently proposed a method that deals with multiple exposures and, under certain conditions, is resistant to measurement error. We applied this method to reanalyze the data from NMMAPS. For PM10, we found results similar to those reported previously from NMMAPS (0.24% increase in deaths per 10-μg/m3 increase in PM10). In addition, we report an important effect of carbon monoxide that had not been observed previously. PMID:15579414
Energy Technology Data Exchange (ETDEWEB)
Langner, Andy Sven
2017-02-03
The Large Hadron Collider (LHC) is currently the world's largest particle accelerator with the highest center of mass energy in particle collision experiments. The control of the particle beam focusing is essential for the performance reach of such an accelerator. For the characterization of the focusing properties at the LHC, turn-by-turn beam position data is simultaneously recorded at numerous measurement devices (BPMs) along the accelerator, while an oscillation is excited on the beam. A novel analysis method for these measurements (N-BPM method) is developed here, which is based on a detailed analysis of systematic and statistical error sources and their correlations. It has been applied during the commissioning of the LHC for operation at an unprecedented energy of 6.5 TeV. In this process, focusing stronger than the design specification has been achieved. This results in smaller transverse beam sizes at the collision points and allows for a higher rate of particle collisions. For the derivation of the focusing parameters at many synchrotron light sources, the change of the beam orbit is observed, which is induced by deliberate changes of magnetic fields (orbit response matrix). In contrast, the analysis of turn-by-turn beam position measurements is less precise for many of these machines due to the distance between neighbouring BPMs. The N-BPM method overcomes this limitation by allowing measurement data from more BPMs to be included in the analysis. It has been applied at the ALBA synchrotron light source and compared to the orbit response method. The significantly faster measurement with the N-BPM method is a considerable advantage in this case. Finally, an outlook is given to the challenges which lie ahead for the control of the beam focusing at the HL-LHC, which is a future major upgrade of the LHC.
Apparicio, Philippe; Gelb, Jérémy; Dubé, Anne-Sophie; Kingham, Simon; Gauvin, Lise; Robitaille, Éric
2017-08-23
The potential spatial access to urban health services is an important issue in health geography, spatial epidemiology and public health. Computing geographical accessibility measures for residential areas (e.g. census tracts) depends on a type of distance, a method of aggregation, and a measure of accessibility. The aim of this paper is to compare discrepancies in results for the geographical accessibility of health services computed using six distance types (Euclidean and Manhattan distances; shortest network time on foot, by bicycle, by public transit, and by car), four aggregation methods, and fourteen accessibility measures. To explore variations in results according to the six types of distance and the aggregation methods, correlation analyses are performed. To measure how the assessment of potential spatial access varies according to three parameters (type of distance, aggregation method, and accessibility measure), sensitivity analysis (SA) and uncertainty analysis (UA) are conducted. First, independently of the type of distance used except for shortest network time by public transit, the results are globally similar (correlation >0.90). However, important local variations in correlation between Cartesian and the four shortest network time distances are observed, notably in suburban areas where Cartesian distances are less precise. Second, the choice of the aggregation method is also important: compared with the most accurate aggregation method, accessibility measures computed from census tract centroids, though not inaccurate, yield important measurement errors for 10% of census tracts. Third, the SA results show that the evaluation of potential geographic access may vary a great deal depending on the accessibility measure and, to a lesser degree, the type of distance and aggregation method. Fourth, the UA results clearly indicate areas of strong uncertainty in suburban areas, whereas central neighbourhoods show lower levels of uncertainty. In order to
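The first two distance types are straightforward to compute; a minimal sketch with hypothetical coordinates (a real accessibility study would use projected coordinates and network routing for the other four distance types):

```python
import math

def euclidean(p, q):
    # Straight-line (Cartesian) distance.
    return math.hypot(q[0] - p[0], q[1] - p[1])

def manhattan(p, q):
    # Grid (taxicab) distance along axis-aligned segments.
    return abs(q[0] - p[0]) + abs(q[1] - p[1])

centroid, clinic = (0.0, 0.0), (3.0, 4.0)   # km, hypothetical
print(euclidean(centroid, clinic))  # 5.0
print(manhattan(centroid, clinic))  # 7.0
```

The gap between the two values for the same pair of points already hints at why the choice of distance type matters for accessibility measures.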
Langner, Andy Sven; Rossbach, Jörg; Tomás, Rogelio
2017-02-17
The Large Hadron Collider (LHC) is currently the world's largest particle accelerator with the highest center of mass energy in particle collision experiments. The control of the particle beam focusing is essential for the performance reach of such an accelerator. For the characterization of the focusing properties at the LHC, turn-by-turn beam position data is simultaneously recorded at numerous measurement devices (BPMs) along the accelerator, while an oscillation is excited on the beam. A novel analysis method for these measurements ($N$-BPM method) is developed here, which is based on a detailed analysis of systematic and statistical error sources and their correlations. It has been applied during the commissioning of the LHC for operation at an unprecedented energy of 6.5 TeV. In this process a stronger focusing than its design specifications has been achieved. This results in smaller transverse beam sizes at the collision points and allows for a higher rate of particle collisions. For the derivation of ...
Koppenhaver, Shane L; Parent, Eric C; Teyhen, Deydre S; Hebert, Jeffrey J; Fritz, Julie M
2009-08-01
Clinical measurement, reliability study. To investigate the improvements in precision when averaging multiple measurements of percent change in muscle thickness of the transversus abdominis (TrA) and lumbar multifidus (LM) muscles. Although the reliability of TrA and LM muscle thickness measurements using rehabilitative ultrasound imaging (RUSI) is good, measurement error is often large relative to mean muscle thickness. Additionally, percent thickness change measures incorporate measurement error from both resting and contracted conditions. Thirty volunteers with nonspecific low back pain participated. Thickness measurements of the TrA and LM muscles were obtained using RUSI at rest and during standardized tasks. Percent thickness change was calculated with the formula (thickness_contracted - thickness_rest) / thickness_rest. Standard error of measurement (SEM) quantified precision when using 1 or a mean of 2 to 6 consecutive measurements. Compared to when using a single measurement, SEM of both the TrA and LM decreased by nearly 25% when using a mean of 2 measures, and by 50% when using the mean of 3 measures. Little precision was gained by averaging more than 3 measurements. When using RUSI to determine percent change in TrA and LM muscle thickness, intraexaminer measurement precision appears to be optimized by using an average of 3 consecutive measurements.
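The precision gain from averaging follows from the fact that, for independent errors, the SEM of a mean of k trials scales as 1/√k (a sketch; the single-trial SEM below is a hypothetical value, not taken from the study):

```python
import math

def sem_of_mean(sem_single, k):
    # For independent measurement errors, averaging k trials
    # divides the standard error of measurement by sqrt(k).
    return sem_single / math.sqrt(k)

sem1 = 10.0  # hypothetical single-measurement SEM (percent thickness change)
for k in (1, 2, 3, 6):
    print(k, round(sem_of_mean(sem1, k), 2))
```

The observed reductions of roughly 25% with 2 measurements and 50% with 3 are close to the theoretical 1 - 1/√2 ≈ 29% and 1 - 1/√3 ≈ 42%.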
James Elliott, C.; McVey, Brian D.; Quimby, David C.
1991-07-01
The level of field errors in a free electron laser (FEL) is an important determinant of its performance. We have computed 3D performance of a large laser subsystem subjected to field errors of various types. These calculations have been guided by simple models such as SWOOP. The technique of choice is use of the FELEX free electron laser code that now possesses extensive engineering capabilities. Modeling includes the ability to establish tolerances of various types: fast and slow scale field bowing, field error level, beam position monitor error level, gap errors, defocusing errors, energy slew, displacement and pointing errors. Many effects of these errors on relative gain and relative power extraction are displayed and are the essential elements of determining an error budget. The random errors also depend on the particular random number seed used in the calculation. The simultaneous display of the performance versus error level of cases with multiple seeds illustrates the variations attributable to stochasticity of this model. All these errors are evaluated numerically for comprehensive engineering of the system. In particular, gap errors are found to place requirements beyond convenient mechanical tolerances of ± 25 μm, and amelioration of these may occur by a procedure using direct measurement of the magnetic fields at assembly time.
Goldman, Gretchen T.; Mulholland, James A.; Russell, Armistead G.; Gass, Katherine; Strickland, Matthew J.; Tolbert, Paige E.
2012-09-01
In recent years, geostatistical modeling has been used to inform air pollution health studies. In this study, distributions of daily ambient concentrations were modeled over space and time for 12 air pollutants. Simulated pollutant fields were produced for a 6-year time period over the 20-county metropolitan Atlanta area using the Stanford Geostatistical Modeling Software (SGeMS). These simulations incorporate the temporal and spatial autocorrelation structure of ambient pollutants, as well as season and day-of-week temporal and spatial trends; these fields were considered to be the true ambient pollutant fields for the purposes of the simulations that followed. Simulated monitor data at the locations of actual monitors were then generated that contain error representative of instrument imprecision. From the simulated monitor data, four exposure metrics were calculated: central monitor and unweighted, population-weighted, and area-weighted averages. For each metric, the amount and type of error relative to the simulated pollutant fields are characterized and the impact of error on an epidemiologic time-series analysis is predicted. The amount of error, as indicated by a lack of spatial autocorrelation, is greater for primary pollutants than for secondary pollutants and is only moderately reduced by averaging across monitors; more error will result in less statistical power in the epidemiologic analysis. The type of error, as indicated by the correlations of error with the monitor data and with the true ambient concentration, varies with exposure metric, with error in the central monitor metric more of the classical type (i.e., independent of the monitor data) and error in the spatial average metrics more of the Berkson type (i.e., independent of the true ambient concentration). Error type will affect the bias in the health risk estimate, with bias toward the null and away from the null predicted depending on the exposure metric; population-weighting yielded the
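The practical consequence of Berkson-type error (error independent of the true ambient concentration) can be checked in a few lines; this is an illustrative simulation with assumed variances, not the SGeMS setup:

```python
import random
import statistics as st

def slope(x, y):
    # Least-squares slope: cov(x, y) / var(x).
    mx, my = st.fmean(x), st.fmean(y)
    return sum((a - mx) * (b - my) for a, b in zip(x, y)) / \
           sum((a - mx) ** 2 for a in x)

random.seed(1)
n = 20000
w = [random.gauss(0, 1) for _ in range(n)]         # assigned exposure (e.g. spatial average)
x = [wi + random.gauss(0, 0.5) for wi in w]        # true exposure scatters around it (Berkson)
y = [1.0 * xi + random.gauss(0, 0.5) for xi in x]  # outcome with true slope 1.0
print(slope(w, y))   # near 1.0: Berkson error leaves the slope unbiased
```

The slope stays centered on the truth; the error costs statistical power (wider confidence intervals) rather than introducing bias, in contrast to the classical case.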
Leake, M. A.
1982-01-01
Planetary imagery techniques, errors in measurement or degradation assignment, and statistical formulas are presented with respect to cratering data. Base map photograph preparation, measurement of crater diameters and sampled area, and instruments used are discussed. Possible uncertainties, such as Sun angle, scale factors, degradation classification, and biases in crater recognition are discussed. The mathematical formulas used in crater statistics are presented.
Raghunathan, Srinivasan; Patil, Sanjaykumar; Baxter, Eric J.; Bianchini, Federico; Bleem, Lindsey E.; Crawford, Thomas M.; Holder, Gilbert P.; Manzotti, Alessandro; Reichardt, Christian L.
2017-08-01
We develop a Maximum Likelihood estimator (MLE) to measure the masses of galaxy clusters through the impact of gravitational lensing on the temperature and polarization anisotropies of the cosmic microwave background (CMB). We show that, at low noise levels in temperature, this optimal estimator outperforms the standard quadratic estimator by a factor of two. For polarization, we show that the Stokes Q/U maps can be used instead of the traditional E- and B-mode maps without losing information. We test and quantify the bias in the recovered lensing mass for a comprehensive list of potential systematic errors. Using realistic simulations, we examine the cluster mass uncertainties from CMB-cluster lensing as a function of an experiment's beam size and noise level. We predict the cluster mass uncertainties will be 3 - 6% for SPT-3G, AdvACT, and Simons Array experiments with 10,000 clusters and less than 1% for the CMB-S4 experiment with a sample containing 100,000 clusters. The mass constraints from CMB polarization are very sensitive to the experimental beam size and map noise level: for a factor of three reduction in either the beam size or noise level, the lensing signal-to-noise improves by roughly a factor of two.
Hoede, C.; Li, Z.
2001-01-01
In coding theory the problem of decoding focuses on error vectors. In the simplest situation code words are $(0,1)$-vectors, as are the received messages and the error vectors. Comparison of a received word with the code words yields a set of error vectors. In deciding on the original code word,
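Minimum-distance decoding of binary words can be sketched as follows (the tiny codebook is invented for illustration):

```python
def hamming(u, v):
    # Number of positions in which two words differ.
    return sum(a != b for a, b in zip(u, v))

def decode(received, codebook):
    # Pick the code word closest in Hamming distance; the error vector
    # is the component-wise XOR of the received word and that code word.
    best = min(codebook, key=lambda c: hamming(received, c))
    error = tuple(a ^ b for a, b in zip(received, best))
    return best, error

codebook = [(0, 0, 0, 0, 0), (1, 1, 1, 0, 0), (0, 0, 1, 1, 1), (1, 1, 0, 1, 1)]
word, err = decode((1, 1, 1, 1, 0), codebook)
print(word, err)  # (1, 1, 1, 0, 0) (0, 0, 0, 1, 0)
```

Here the received word lies at distance 1 from one code word, so that code word is chosen and the remaining single-bit discrepancy is the inferred error vector.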
Hofbauer, M.; Seiter, J.; Davidovic, M.; Zimmermann, H.
2013-04-01
Correlation-based time-of-flight systems suffer from a temperature-dependent distance measurement error induced by the illumination source of the system. A change in the temperature of the illumination source changes the bandwidth of the light emitters used, which are most often light-emitting diodes (LEDs). For typical illumination sources this can result in a drift of the measured distance in the range of ~20 cm, especially during the heat-up phase. Because the bandwidth of the LEDs changes, the shape of the output signal changes as well. In this paper we propose a method to correct this temperature-dependent error by analyzing this change in the shape of the output signal. Our measurements show that the presented approach is capable of correcting the temperature-dependent error over a large range of operation without the need for additional hardware.
Özdel, Kadir; Taymur, Ibrahim; Guriz, Seher Olga; Tulaci, Riza Gökcer; Kuru, Erkan; Turkcapar, Mehmet Hakan
2014-01-01
The Cognitive Distortions Scale was developed to assess thinking errors using case examples in two domains: interpersonal and personal achievement. Although its validity and reliability have been previously demonstrated in non-clinical samples, its psychometric properties and scoring have not yet been evaluated. The aim of the current study was to evaluate the psychometric properties of the Cognitive Distortions Scale in two Turkish samples and to examine the usefulness of the categorical scoring system. A total of 325 individuals (Sample 1 and Sample 2) were enrolled in this study to assess those psychometric properties. Our Sample 1 consisted of 225 individuals working as interns at the Diskapi Yildirim Beyazit Teaching and Research Hospital and Sample 2 consisted of 100 patients diagnosed with depression presenting to the outpatient unit of the same Hospital. Construct validity was assessed using the Beck Depression Inventory, the State Trait Anxiety Inventory, the Dysfunctional Attitude Scale, and the Automatic Thought Questionnaire. Factor analyses supported a one-factor model in these clinical and non-clinical samples. Cronbach's α values were excellent in both the non-clinical and clinical samples (0.933 and 0.918 respectively). Cognitive Distortions Scale scores showed significant correlation with relevant clinical measures. Study Cognitive Distortions Scale scores were stable over a time span of two weeks. This study showed that the Cognitive Distortions Scale is a valid and reliable measure in clinical and non-clinical populations. In addition, it shows that the categorical exists/does not exist scoring system is relevant and could be used in clinical settings.
Arndt, Stefan K; Irawan, Andi; Sanders, Gregor J
2015-12-01
Relative water content (RWC) and the osmotic potential (π) of plant leaves are important plant traits that can be used to assess drought tolerance or adaptation of plants. We estimated the magnitude of errors that are introduced by dilution of π from apoplastic water in osmometry methods and the errors that occur during rehydration of leaves for RWC and π in 14 different plant species from trees, grasses and herbs. Our data indicate that rehydration technique and length of rehydration can introduce significant errors in both RWC and π. Leaves from all species were fully turgid after 1-3 h of rehydration and increasing the rehydration time resulted in a significant underprediction of RWC. Standing rehydration via the petiole introduced the least errors while rehydration via floating disks and submerging leaves for rehydration led to a greater underprediction of RWC. The same effect was also observed for π. The π values following standing rehydration could be corrected by applying a dilution factor from apoplastic water dilution using an osmometric method but not by using apoplastic water fraction (AWF) from pressure volume (PV) curves. The apoplastic water dilution error was between 5 and 18%, while the two other rehydration methods introduced much greater errors. We recommend the use of the standing rehydration method because (1) the correct rehydration time can be evaluated by measuring water potential, (2) overhydration effects were smallest, and (3) π can be accurately corrected by using osmometric methods to estimate apoplastic water dilution. © 2015 Scandinavian Plant Physiology Society.
Cleffken, Berry; van Breukelen, Gerard; van Mameren, Henk; Brink, Peter; Olde Damink, Steven
2007-01-01
Increasingly, goniometry of elbow motion is used for the quantification of research results. Reliability is commonly expressed in parameters that are not suitable for comparing results. We modified Bland and Altman's method, resulting in smallest detectable differences (SDDs). Two raters measured elbow excursions in 42 individuals (144 ratings per test person) with an electronic digital inclinometer in a classical test-retest crossover study design. The SDDs were 0 +/- 4.2 degrees for active extension and 0 +/- 8.2 degrees for active flexion, both without upper arm fixation; 0 +/- 6.3 degrees for active extension, 0 +/- 5.7 degrees for active flexion, and 0 +/- 7.4 degrees for passive flexion with upper arm fixation; 0 +/- 10.1 degrees for active flexion with upper arm retroflexion; and 0 +/- 8.5 degrees and 0 +/- 10.8 degrees for active and passive range of motion. Differences smaller than these SDDs found in clinical or research settings are attributable to measurement error and do not indicate improvement.
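A Bland–Altman-style SDD can be computed directly from test–retest ratings; a sketch with hypothetical flexion angles (the paper's modification may differ in detail):

```python
import statistics as st

def smallest_detectable_difference(rating1, rating2):
    # Bland–Altman limits of agreement on test–retest differences:
    # changes smaller than ~1.96 * SD(diff) lie within measurement error.
    diffs = [a - b for a, b in zip(rating1, rating2)]
    return st.fmean(diffs), 1.96 * st.stdev(diffs)

test = [138, 142, 150, 135, 145, 140]    # hypothetical flexion angles, degrees
retest = [140, 141, 147, 137, 146, 138]
bias, sdd = smallest_detectable_difference(test, retest)
print(round(bias, 1), round(sdd, 1))     # near-zero bias, SDD of a few degrees
```

An observed change in a patient would have to exceed the SDD before it could be attributed to real improvement rather than rater noise.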
Directory of Open Access Journals (Sweden)
Moosang Kim
2013-01-01
Purpose: To evaluate frequency and severity of segmentation errors of two spectral-domain optical coherence tomography (SD-OCT) devices and the effect of these errors on central macular thickness (CMT) measurements. Materials and Methods: Twenty-seven eyes of 25 patients with neovascular age-related macular degeneration, examined using the Cirrus HD-OCT and Spectralis HRA + OCT, were retrospectively reviewed. Macular cube 512 × 128 and 5-line raster scans were performed with the Cirrus and 512 × 25 volume scans with the Spectralis. Frequency and severity of segmentation errors were compared between scans. Results: Segmentation error frequency was 47.4% (baseline), 40.7% (1 month), 40.7% (2 months), and 48.1% (6 months) for the Cirrus, and 59.3%, 62.2%, 57.8%, and 63.7%, respectively, for the Spectralis, differing significantly between devices at all examinations (P < 0.05) except at baseline. Average error score was 1.21 ± 1.65 (baseline), 0.79 ± 1.18 (1 month), 0.74 ± 1.12 (2 months), and 0.96 ± 1.11 (6 months) for the Cirrus, and 1.73 ± 1.50, 1.54 ± 1.35, 1.38 ± 1.40, and 1.49 ± 1.30, respectively, for the Spectralis, differing significantly at 1 month and 2 months (P < 0.02). Automated and manual CMT measurements by the Spectralis were larger than those by the Cirrus. Conclusions: The Cirrus HD-OCT had a lower frequency and severity of segmentation error than the Spectralis HRA + OCT. SD-OCT error should be considered when evaluating retinal thickness.
Wang, Jindong; Chen, Peng; Deng, Yufen; Guo, Junjie
2018-01-01
As a three-dimensional measuring instrument, the laser tracker is widely used in industrial measurement. To avoid the influence of angle measurement error on the overall measurement accuracy, multi-station and time-sharing measurement with a laser tracker, based on the global positioning system (GPS) principle, is introduced in this paper. For the proposed method, accurately determining the coordinates of each measuring point from a large amount of measured data is a critical issue. Taking the detection of the motion error of a numerical control machine tool as an example, the corresponding measurement algorithms are investigated thoroughly. By establishing the mathematical model of detecting the motion error of a machine tool with this method, an analytical algorithm for base station calibration and measuring point determination is deduced that does not require selecting an initial iterative value. However, when the motion area of the machine tool lies in a 2D plane, the coefficient matrix of the base station calibration is singular, which produces a distorted result. In order to overcome this limitation of the original algorithm, an improved analytical algorithm is also derived. Meanwhile, the calibration accuracy of the base station with the improved algorithm is compared with that of the original analytical algorithm and of iterative algorithms such as the Gauss-Newton and Levenberg-Marquardt algorithms. An experiment further verifies the feasibility and effectiveness of the improved algorithm. In addition, different motion areas of the machine tool have a certain influence on the calibration accuracy of the base station, and the influence of measurement error on the calibration result is analyzed in terms of the condition number of the coefficient matrix.
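The GPS-like determination of a measuring point from station-to-point distances can be sketched, in 2D, by linearizing the range equations against one reference station (an illustrative analytical solution with invented coordinates, not the paper's full algorithm):

```python
import math

def locate(stations, dists):
    # 2-D multilateration: subtract the range equation of the first station
    # from the others to obtain linear equations in (x, y), then solve the
    # resulting 2x2 system by Cramer's rule.
    (x1, y1), d1 = stations[0], dists[0]
    rows = []
    for (xj, yj), dj in zip(stations[1:], dists[1:]):
        a = 2 * (xj - x1)
        b = 2 * (yj - y1)
        c = d1**2 - dj**2 + xj**2 - x1**2 + yj**2 - y1**2
        rows.append((a, b, c))
    (a1, b1, c1), (a2, b2, c2) = rows[:2]
    det = a1 * b2 - a2 * b1
    return ((c1 * b2 - c2 * b1) / det, (a1 * c2 - a2 * c1) / det)

stations = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]   # assumed base stations
target = (3.0, 4.0)                                  # point to recover
dists = [math.dist(s, target) for s in stations]     # ideal range readings
print(locate(stations, dists))                       # close to (3.0, 4.0)
```

The singularity the paper discusses has an analogue here: if the stations are collinear, the determinant of the linear system vanishes and the solution degenerates.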
Energy Technology Data Exchange (ETDEWEB)
Vinyard, Natalia Sergeevna [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Perry, Theodore Sonne [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Usov, Igor Olegovich [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
2017-10-04
We calculate opacity from k(hν) = -ln[T(hν)]/(ρL), where T(hν) is the transmission for photon energy hν, ρ is sample density, and L is path length through the sample. The density and path length are measured together by Rutherford backscatter. The error propagates as Δk = (∂k/∂T)ΔT + (∂k/∂(ρL))Δ(ρL). We can rewrite this in terms of fractional error as Δk/k = Δln(T)/ln(T) + Δ(ρL)/(ρL). Transmission itself is calculated from T = (U-E)/(V-E) = B/B₀, where B is the transmitted backlighter (BL) signal and B₀ is the unattenuated backlighter signal. Then ΔT/T = Δln(T) = ΔB/B + ΔB₀/B₀, and consequently Δk/k = (1/ln T)(ΔB/B + ΔB₀/B₀) + Δ(ρL)/(ρL). Transmission is measured in the range of 0.2
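These relations translate directly into code; the sample values below are assumed for illustration only:

```python
import math

def opacity(B, B0, rhoL):
    # k = -ln(T) / (rho*L), with transmission T = B / B0.
    return -math.log(B / B0) / rhoL

def frac_error(B, B0, rhoL, dB, dB0, drhoL):
    # dk/k = |1/ln(T)| * (dB/B + dB0/B0) + d(rho*L)/(rho*L),
    # adding the fractional error magnitudes.
    T = B / B0
    return abs(1.0 / math.log(T)) * (dB / B + dB0 / B0) + drhoL / rhoL

k = opacity(0.3, 1.0, 0.01)                              # assumed T = 0.3, rho*L = 0.01
rel = frac_error(0.3, 1.0, 0.01, 0.006, 0.02, 0.0003)    # assumed 2%, 2%, 3% errors
print(round(k, 2), round(rel, 4))
```

Note that the 1/|ln T| prefactor grows as T approaches 1, which is why transmissions far from unity are preferred for opacity measurements.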
Boshuizen, H.C.; Lanti, M.; Menotti, A.; Moschandreas, J.; Tolonen, H.; Nissinen, A.; Nedeljkovic, S.; Kafatos, A.; Kromhout, D.
2008-01-01
The authors aimed to quantify the effects of current systolic blood pressure (SBP) and serum total cholesterol on the risk of mortality in comparison with SBP or serum cholesterol 25 years previously, taking measurement error into account. The authors reanalyzed 35-year follow-up data on mortality
Boshuizen, H.C.; Lanti, M.; Menotti, A.; Moschandreas, J.; Tolonen, H.; Nissinen, A.; Nedeljkovic, S.; Kafatos, A.; Kromhout, D.
2007-01-01
The authors aimed to quantify the effects of current systolic blood pressure (SBP) and serum total cholesterol on the risk of mortality in comparison with SBP or serum cholesterol 25 years previously, taking measurement error into account. The authors reanalyzed 35-year follow-up data on mortality
DEFF Research Database (Denmark)
Harsted, Steen; Mieritz, Rune M; Bronfort, Gert
2016-01-01
mechanical LBP, classified as either Quebec Task Force group 1, 2, 3 or 4 were included, and kinematics of the lumbar spine were sampled during standardized spinal lateral flexion and rotation motion using a 6-df instrumented spatial linkage system. Test-retest reliability and measurement error were...
Talma, H.; Chinapaw, M.J.M.; Bakker, B.; Hirasing, R.A.; Terwee, C.B.; Altenburg, T.M.
2013-01-01
Bioelectrical impedance analysis (BIA) is a practical method to estimate percentage body fat (%BF). In this systematic review, we aimed to assess validity, responsiveness, reliability and measurement error of BIA methods in estimating %BF in children and adolescents. We searched for relevant studies
Adhia, Divya Bharatkumar; Mani, Ramakrishnan; Milosavljevic, Stephan; Tumilty, Steve; Bussey, Melanie D
2016-02-01
The palpation-digitization technique for measurement of innominate motion involves repeated manual palpation and digitization of pelvic landmarks, which could introduce a systematic variation between subsequent trials and thereby influence the final innominate angular measurement. The aim of this study was to quantify the effect of repeated palpation-digitization errors on the overall variability of innominate vector length measurements, and to determine whether there is a systematic variation between subsequent repeated trials. A single-group repeated-measures study, using four testers and fourteen healthy participants, was conducted. Four pelvic landmarks, the left and right posterior superior iliac spines and anterior superior iliac spines, were palpated and digitized using the 3D digitizing stylus of a Polhemus electromagnetic tracking device for ten consecutive trials by each tester in random order. The ten individual trials of innominate vector lengths measured by each tester for each participant were used for the analysis. Repeated-measures ANOVA demonstrated a very small effect of the repeated-trial factor (≤0.66%) as well as of the error component (≤0.32%) on innominate vector length variability. Further, residual-versus-order plots demonstrated a random pattern of errors across zero, indicating no systematic variation between subsequent trials of innominate vector length measurements. Copyright © 2015 Elsevier Ltd. All rights reserved.
African Journals Online (AJOL)
QuickSilver
Studies in the USA have shown that medical error is the 8th most common cause of death [2,3]. The most common causes of medical error are administration of the wrong medication or the wrong dose of the correct medication, use of the wrong route of administration, and giving a treatment to the wrong patient or at the wrong time [4]. ...
Rosales, Roberto S; García-Gutierrez, Rayco; Reboso-Morales, Luis; Atroshi, Isam
2017-08-24
The Patient-Rated Wrist Evaluation (PRWE) is a widely used measure of patient-reported disability and pain related to wrist disorders. We performed cross-cultural adaptation of the PRWE into Spanish (Spain) and assessed reliability and construct validity in patients with distal radius fracture. Adaptation of the English version to Spanish (Spain) was performed using translation/back translation methodology. The measurement properties of the PRWE-Spanish were assessed in a sample of 40 consecutive patients (31 women), mean age 58 (SD 19) years, with extra-articular distal radius fractures treated with closed reduction and cast. The patients completed the PRWE-Spanish and the standard Spanish versions of the 11-item Disabilities of the Arm, Shoulder and Hand (QuickDASH) and EQ-5D questionnaires at baseline (health status before fracture) and at 8, 9, 12, and 13 weeks after treatment. Internal-consistency reliability was assessed with the Cronbach alpha coefficient and test-retest reliability with the intraclass correlation coefficient (ICC) comparing responses at 8 and 9 weeks and responses at 12 and 13 weeks. Cross-sectional precision was analyzed with the Standard Error of the Measurement (SEM). Longitudinal precision for test-retest reliability coefficient was analyzed with the Standard Error of the Measurement difference (SEMdiff) and the Minimal Detectable Change at 90% (MDC90) and 95% (MDC95) confidence levels. For assessing construct validity we hypothesized that the PRWE-Spanish (lower score indicates less disability and pain) would have strong positive correlation with the QuickDASH (lower score indicates less disability) and moderate negative correlation with the EQ-5D Index (higher score indicates better health); Spearman correlation coefficient (r) was used. For the PRWE total score, Cronbach alpha was 0.98 (SEM = 2.67) at baseline and 0.96 (SEM = 4.37) at 8 weeks. For test-retest reliability ICC was 0.94 (8 and 9 weeks) and 0.96 (12 and 13
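The reported precision statistics are related by standard formulas: SEM = SD·√(1−ICC), and MDC = z·√2·SEM (the √2 accounts for the difference of two measured scores). A sketch, where the score SD is a hypothetical value rather than one taken from the study:

```python
import math

def sem(sd, icc):
    # Standard Error of Measurement from score SD and reliability (ICC).
    return sd * math.sqrt(1 - icc)

def mdc(sem_value, z=1.96):
    # Minimal Detectable Change: z * sqrt(2) * SEM
    # (z = 1.96 for MDC95, z = 1.645 for MDC90).
    return z * math.sqrt(2) * sem_value

sd_scores = 21.8      # hypothetical SD of PRWE total scores
s = sem(sd_scores, 0.94)
print(round(s, 2), round(mdc(s), 2), round(mdc(s, 1.645), 2))
```

A patient's PRWE change score would need to exceed the MDC before it could be interpreted as real change rather than test–retest noise.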
DEFF Research Database (Denmark)
Lutz, Christina Maria; Poulsen, Per Rugaard; Fledelius, Walther
2016-01-01
). At every third treatment fraction, continuous portal images were acquired. The time-resolved chest wall position during treatment was compared with the planned position to determine the inter-fraction setup errors and the intra-fraction motion of the chest wall. RESULTS: The DIBH compliance was 95% during...... both recruitment periods. A tendency of smaller inter-fraction setup errors and intra-fraction motion was observed for group 2 (medial marker block position). However, apart from a significantly reduced inter-field random shift (σ = 1.7 mm vs. σ = 0.9 mm, p = 0.005), no statistically significant...... differences between the groups were found. In a combined analysis, the group mean inter-fraction setup error was M = - 0.1 mm, with random and systematic errors of σ = 1.7 mm and Σ = 1.4 mm. The group mean inter-field shift was M = 0.0 (σ = 1.3 mm and Σ = 1.1 mm) and the group mean standard deviation...
Lee, Paul H.
2017-01-01
Purpose: Some confounders are nonlinearly associated with dependent variables, but they are often adjusted using a linear term. The purpose of this study was to examine the error of mis-specifying the nonlinear confounding effect. Methods: We carried out a simulation study to investigate the effect of adjusting for a nonlinear confounder in the…
Shimazu, Chisato; Hoshino, Satoshi; Furukawa, Taiji
2013-08-01
We constructed an integrated personal identification workflow chart using both bar code reading and an all-in-one laboratory information system. The information system handles not only test data but also the information needed for patient guidance in the laboratory department. The reception terminals at the entrance, displays for patient guidance and patient identification tools at blood-sampling booths are all controlled by the information system. The number of patient identification errors was greatly reduced by the system. However, identification errors have not been abolished in the ultrasound department. After re-evaluation of the patient identification process in this department, we recognized that the major reason for the errors was an excessive identification workflow. Ordinarily, an ultrasound test requires patient identification 3 times, because 3 different systems are used during the entire test process, i.e. the ultrasound modality system, the laboratory information system and a system for producing reports. We are trying to connect the 3 different systems to develop a one-time identification workflow, but it is not a simple task and has not yet been completed. Utilization of the laboratory information system is effective, but is not yet perfect for patient identification. The most fundamental procedure for patient identification is still to ask a person's name. Everyday checks in the ordinary workflow and everyone's participation in safety-management activity are important for the prevention of patient identification errors.
Chow, M.-D.
1975-01-01
A direct inversion method for inverting the temperature profile from satellite-measured radiation is discussed. The nth power of the weighting function in the integral radiative-transfer equation is used as the weight in the averaging process. The vertical resolution of the inverted temperature profile and the response of the inverted temperature profile to the measurement errors are examined in terms of n. It is found that for smaller values of n, the vertical resolution and the effect of measurement errors are reduced. When n = 0, both the vertical resolution and error effect are minimum. The temperature profile is adjusted by a constant; any structure different from the initial shape cannot be resolved. This is equivalent to the case where the entire atmosphere is treated as one layer with a fixed shape of temperature profile. When n approaches infinity, both the vertical resolution and error effect are maximum. This is equivalent to the case where the entire atmosphere is divided into m (the number of spectral channels) layers. Within each layer, the temperatures are adjusted by a constant, and any structure different from the initial shape cannot be resolved. Also, the shape of the final solution is closer to the initial profile if the value of n is smaller.
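The trade-off described above can be illustrated numerically. The sketch below is not from the paper; the Gaussian weighting function and pressure grid are invented. It shows that raising a weighting function to the nth power narrows the effective averaging kernel, i.e. sharpens vertical resolution, which is the mechanism the abstract links to increased sensitivity to measurement error:

```python
import numpy as np

def effective_width(n, center=0.5, sigma=0.15, npts=2001):
    """Std. dev. of pressure under the normalized weight K(p)**n (toy kernel)."""
    p = np.linspace(0.0, 1.0, npts)
    K = np.exp(-((p - center) ** 2) / (2.0 * sigma ** 2))  # invented weighting fn
    w = K ** n
    w /= w.sum()                                           # normalize the weights
    mean = (w * p).sum()
    return float(np.sqrt((w * (p - mean) ** 2).sum()))

# Larger n -> narrower averaging kernel -> finer vertical resolution
# (but, per the abstract, also larger sensitivity to measurement error).
widths = {n: effective_width(n) for n in (1, 2, 4, 8)}
```

For a Gaussian kernel, K**n is again Gaussian with width sigma/sqrt(n), so the shrinking effective width falls straight out of the algebra.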
Teruyama, Yuta; Watanabe, Takashi
2013-01-01
The wearable sensor system developed by our group, which measures lower limb angles with a Kalman-filtering-based method, has been suggested to be useful for evaluating gait function in rehabilitation support. However, the variability of its measurement errors needed to be reduced. In this paper, a variable-Kalman-gain method based on the angle error calculated from acceleration signals is proposed to improve measurement accuracy. The proposed method was tested against a fixed-gain Kalman filter and a variable-Kalman-gain method based on acceleration magnitude used in previous studies. First, in angle measurement during treadmill walking, the proposed method measured lower limb angles with the highest accuracy: it significantly improved the foot inclination angle measurement, while improving the shank and thigh inclination angles slightly. The variable-gain method based on acceleration magnitude was not effective for our Kalman filter system. Then, in angle measurement of a rigid-body model, the proposed method showed measurement accuracy similar to or higher than that reported in other studies, which fixed markers of a camera-based motion measurement system on a rigid plate together with a sensor, or on the sensor directly. The proposed method was thus found to be effective for angle measurement with inertial sensors.
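As a rough illustration of the idea, and not the authors' filter, here is a minimal one-state Kalman-style sketch in which the measurement-noise variance (and hence the gain) is inflated when the accelerometer-derived angle disagrees with the prediction; all signals, noise levels, and the gain-scaling constant are invented:

```python
import numpy as np

rng = np.random.default_rng(0)
dt, T = 0.01, 10.0
t = np.arange(0.0, T, dt)
true_angle = 0.5 * np.sin(2 * np.pi * 0.5 * t)          # rad
gyro = np.gradient(true_angle, dt) + 0.05               # rate + constant bias
acc_angle = true_angle + rng.normal(0.0, 0.05, t.size)  # accel-derived angle

def fuse(variable_gain=True, q=1e-4, r0=2.5e-3, k=50.0):
    """1-state Kalman filter; optionally scale R by the apparent angle error."""
    x, P, out = 0.0, 1.0, []
    for i in range(t.size):
        x += gyro[i] * dt          # predict: integrate the (biased) gyro rate
        P += q
        err = abs(acc_angle[i] - x)
        R = r0 * (1.0 + k * err) if variable_gain else r0
        K = P / (P + R)            # larger apparent error -> larger R -> smaller gain
        x += K * (acc_angle[i] - x)
        P *= (1.0 - K)
        out.append(x)
    return np.array(out)

rmse = lambda est: float(np.sqrt(np.mean((est - true_angle) ** 2)))
drift_only = np.cumsum(gyro) * dt  # pure gyro integration drifts with the bias
```

The fused estimate suppresses both the gyro bias drift and much of the accelerometer noise, which is the role the abstract assigns to the (far more carefully designed) proposed filter.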
Fletcher, R; Werner, O; Nordström, L; Jonson, B
1983-02-01
The Siemens-Elema CO2 Analyzer 930 allows calculation of carbon dioxide elimination from the instantaneous measurement of expired gas flow (VE) and carbon dioxide fraction (FECO2). VE is measured in the ventilator and FECO2 at the Y-piece. The most important source of error in the measurement of carbon dioxide elimination is rebreathing, which corresponds to about 24 ml of end-expiratory gas per breath with the standard Y-piece and tubing. This problem may be reduced by the use of non-return valves in the Y-piece. Allowance must be made for the effects of intermolecular interaction between carbon dioxide and the carrier gas, as the reading is about 20% greater with nitrous oxide than with oxygen. This problem can be largely circumvented by calibration with appropriate gas mixtures. Errors resulting from analyser delay are small, and are eliminated completely by the inclusion of fast electronic components. Carbon dioxide analysis is linear with air as the carrier gas, but slightly nonlinear with nitrous oxide in oxygen mixtures. This error can be minimized by using calibration gases with a carbon dioxide content close to that of expired gas. The expiratory flow meter is linear if kept in good condition. Variations in the temperature and water content of expired gas cause overestimation of FECO2 by a factor of 1.01-1.02. Compressed gas in the tubing causes a small error which may be neglected at normal airway pressures with low-compliance tubing. Carbon dioxide measurement is slightly affected by barometric pressure. During mechanical ventilation of the lungs with air in 10 patients, FECO2 obtained after correction for the known errors agreed well with Scholander analysis of mixed expired gas.
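The underlying computation, carbon dioxide elimination as the integral of expired flow times CO2 fraction, can be sketched as follows. The carrier-gas factor and the rebreathing correction are simplified placeholders, not the analyzer's actual algorithm:

```python
import numpy as np

def vco2_per_breath(flow, fco2, dt, carrier_factor=1.0, rebreathed_ml=0.0):
    """CO2 elimination for one breath (ml), from expired flow (l/s) and CO2 fraction.

    carrier_factor : correction for the CO2/carrier-gas interaction (e.g. the
                     ~20% higher reading with N2O, removed by calibration).
    rebreathed_ml  : estimated end-expiratory gas re-inspired per breath.
    """
    vco2_ml = 1000.0 * float(np.sum(flow * fco2) * dt) / carrier_factor
    # crude correction: rebreathed gas carries CO2 that is counted twice
    return vco2_ml - rebreathed_ml * float(np.mean(fco2))

# Toy breath: constant 0.5 l/s expiratory flow for 2 s at FCO2 = 0.04
dt = 0.01
flow = np.full(200, 0.5)
fco2 = np.full(200, 0.04)
```

For this toy breath the uncorrected elimination is 0.5 l/s x 0.04 x 2 s = 40 ml, and the (assumed) 24 ml rebreathing volume shaves off about 1 ml.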
Energy Technology Data Exchange (ETDEWEB)
Shirasaki, Masato; Yoshida, Naoki, E-mail: masato.shirasaki@utap.phys.s.u-tokyo.ac.jp [Department of Physics, University of Tokyo, Tokyo 113-0033 (Japan)
2014-05-01
The measurement of cosmic shear using weak gravitational lensing is a challenging task that involves a number of complicated procedures. We study in detail the systematic errors in the measurement of weak-lensing Minkowski Functionals (MFs). Specifically, we focus on systematics associated with galaxy shape measurements, photometric redshift errors, and shear calibration correction. We first generate mock weak-lensing catalogs that directly incorporate the actual observational characteristics of the Canada-France-Hawaii Lensing Survey (CFHTLenS). We then perform a Fisher analysis using the large set of mock catalogs for various cosmological models. We find that the statistical error associated with the observational effects degrades the cosmological parameter constraints by a factor of a few. The Subaru Hyper Suprime-Cam (HSC) survey with a sky coverage of ~1400 deg² will constrain the dark-energy equation-of-state parameter with an error of Δw₀ ~ 0.25 by the lensing MFs alone, but biases induced by the systematics can be comparable to the 1σ error. We conclude that the lensing MFs are powerful statistics beyond the two-point statistics only if well-calibrated measurement of both the redshifts and the shapes of source galaxies is performed. Finally, we analyze the CFHTLenS data to explore the ability of the MFs to break degeneracies between a few cosmological parameters. Using a combined analysis of the MFs and the shear correlation function, we derive the matter density Ω_m0 = 0.256 (+0.054/−0.046).
Agogo, George O.; van der Voet, Hilko; Veer, Pieter van’t; Ferrari, Pietro; Leenders, Max; Muller, David C.; Sánchez-Cantalejo, Emilio; Bamia, Christina; Braaten, Tonje; Knüppel, Sven; Johansson, Ingegerd; van Eeuwijk, Fred A.; Boshuizen, Hendriek
2014-01-01
In epidemiologic studies, measurement error in dietary variables often attenuates the association between dietary intake and disease occurrence. Regression calibration is commonly used to adjust for this attenuation, but it requires unbiased reference measurements. Short-term reference measurements for foods that are not consumed daily contain excess zeroes, which pose challenges for the calibration model. We adapted a two-part regression calibration model, initially developed for multiple replicates of reference measurements per individual, to a single-replicate setting. We show how to handle excess-zero reference measurements with a two-step modeling approach, how to explore heteroscedasticity in the consumed amount with a variance-mean graph, how to explore nonlinearity with generalized additive modeling (GAM) and the empirical logit approach, and how to select covariates for the calibration model. The performance of the two-part calibration model was compared with its one-part counterpart using vegetable intake and mortality data from the European Prospective Investigation into Cancer and Nutrition (EPIC) study, in which reference measurements were taken with 24-hour recalls. For each of the three vegetable subgroups assessed separately, correcting for error with an appropriately specified two-part calibration model resulted in an approximately threefold increase in the strength of association with all-cause mortality, as measured by the log hazard ratio. We further found that the standard way of including covariates in the calibration model can lead to overfitting the two-part calibration model, and that the extent of error adjustment is influenced by the number and form of the covariates. For episodically consumed foods, we advise researchers to pay special attention to the response distribution, nonlinearity, and covariate inclusion when specifying the calibration model. PMID:25402487
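A minimal sketch of a two-part model of this kind is given below, assuming a single covariate and invented coefficients; it is not the calibration model used in the EPIC analysis. Part 1 models the probability of a non-zero intake, part 2 the amount given consumption:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 2000
z = rng.normal(size=n)                      # invented covariate
p_consume = 1.0 / (1.0 + np.exp(-(0.3 + 0.8 * z)))
consumed = rng.random(n) < p_consume
amount = np.where(consumed, np.exp(0.5 + 0.4 * z + rng.normal(0, 0.5, n)), 0.0)

def logistic_fit(X, y, iters=25):
    """Plain Newton-Raphson logistic regression (no external packages)."""
    b = np.zeros(X.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-X @ b))
        W = p * (1.0 - p)
        b += np.linalg.solve(X.T @ (W[:, None] * X), X.T @ (y - p))
    return b

X = np.column_stack([np.ones(n), z])
# Part 1: probability of a non-zero intake
b1 = logistic_fit(X, consumed.astype(float))
# Part 2: expected log-amount given consumption
pos = amount > 0
b2, *_ = np.linalg.lstsq(X[pos], np.log(amount[pos]), rcond=None)
# Calibrated intake = P(consume) * E[amount | consume] (lognormal mean shift ignored)
calibrated = (1.0 / (1.0 + np.exp(-X @ b1))) * np.exp(X @ b2)
```

The two fitted parts recover the generating coefficients; in a real calibration the predicted intake would then replace the error-prone dietary variable in the disease model.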
... halos around bright lights, squinting, headaches, or eye strain. Glasses or contact lenses can usually correct refractive errors. Laser eye surgery may also be a possibility. NIH: National Eye ...
Vanhaelewyn, Gauthier; Duchatelet, Pierre; Vigouroux, Corinne; Dils, Bart; Kumps, Nicolas; Hermans, Christian; Demoulin, Philippe; Mahieu, Emmanuel; Sussmann, Ralf; de Mazière, Martine
2010-05-01
The Fourier Transform Infrared (FTIR) remote measurements of atmospheric constituents at the observatories at Saint-Denis (20.90°S, 55.48°E, 50 m a.s.l., Île de la Réunion) and Jungfraujoch (46.55°N, 7.98°E, 3580 m a.s.l., Switzerland) are affiliated with the Network for the Detection of Atmospheric Composition Change (NDACC). The European NDACC FTIR data for CH4 were improved and homogenized among the stations in the EU project HYMN. One important application of these data is the validation of satellite products, such as SCIAMACHY or IASI CH4 columns. It is therefore very important that the errors and uncertainties associated with the ground-based FTIR CH4 data are well characterized. In this poster we present a comparison of the errors on retrieved vertical concentration profiles of CH4 between Saint-Denis and Jungfraujoch. At both stations, we have used the same retrieval algorithm, namely SFIT2 v3.92, developed jointly at the NASA Langley Research Center, the National Center for Atmospheric Research (NCAR) and the National Institute of Water and Atmospheric Research (NIWA) at Lauder, New Zealand, together with error evaluation tools developed at the Belgian Institute for Space Aeronomy (BIRA-IASB). The error components investigated in this study are: smoothing, noise, temperature, instrumental line shape (ILS) (in particular the modulation amplitude and phase), spectroscopy (in particular the pressure broadening and intensity), interfering species, and solar zenith angle (SZA) error. We determine whether the characteristics of the sites in terms of altitude, geographic location and atmospheric conditions produce significant differences in the error budgets for the retrieved CH4 vertical profiles.
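A common way to summarize such a budget, shown here as a hedged sketch with invented component values rather than the study's results, is to combine independent random components in quadrature while bounding systematic components by their summed magnitudes:

```python
import math

# Hypothetical per-component 1-sigma errors on a retrieved CH4 column (%),
# standing in for the smoothing/noise/temperature/ILS/spectroscopy/SZA terms.
random_components = {"noise": 0.9, "temperature": 0.6, "SZA": 0.3}
systematic_components = {"spectroscopy": 2.0, "ILS": 0.8}

# Independent random terms add in quadrature.
random_total = math.sqrt(sum(v * v for v in random_components.values()))
# Systematic terms do not average out; summing magnitudes gives a conservative bound.
systematic_bound = sum(systematic_components.values())
```

Which terms count as random versus systematic, and whether they are truly independent, is exactly the kind of site-by-site question the poster addresses.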
Ramirez, Daniel Perez; Whiteman, David N.; Veselovskii, Igor; Kolgotin, Alexei; Korenskiy, Michael; Alados-Arboledas, Lucas
2013-01-01
In this work we study the effects of systematic and random errors on the inversion of multiwavelength (MW) lidar data using the well-known regularization technique to obtain vertically resolved aerosol microphysical properties. The software implementation used here was developed at the Physics Instrumentation Center (PIC) in Troitsk (Russia) in conjunction with the NASA/Goddard Space Flight Center. Its applicability to Raman lidar systems based on backscattering measurements at three wavelengths (355, 532 and 1064 nm) and extinction measurements at two wavelengths (355 and 532 nm) has been demonstrated widely. The systematic error sensitivity is quantified by first determining the retrieved parameters for a given set of optical input data consistent with three different sets of aerosol physical parameters. Then each optical input is perturbed by varying amounts and the inversion is repeated. Using bimodal aerosol size distributions, we find a generally linear dependence of the retrieved errors in the microphysical properties on the induced systematic errors in the optical data. For the retrievals of effective radius, number/surface/volume concentrations and fine-mode radius and volume, we find that these results are not significantly affected by the range of the constraints used in inversions. But significant sensitivity was found to the allowed range of the imaginary part of the particle refractive index. Our results also indicate that there exists an additive property for the deviations induced by the biases present in the individual optical data. This property permits the results here to be used to predict deviations in retrieved parameters when multiple input optical data are biased simultaneously as well as to study the influence of random errors on the retrievals. The above results are applied to questions regarding lidar design, in particular for the spaceborne multiwavelength lidar under consideration for the upcoming ACE mission.
American Society for Testing and Materials. Philadelphia
1999-01-01
1.1 This practice provides a means for evaluating both systematic and random errors for ultrasonic speed-of-sound measurement systems which are used for evaluating material characteristics associated with residual stress and which may also be used for nondestructive measurements of the dynamic elastic moduli of materials. Important features and construction details of a reference block crucial to these error evaluations are described. This practice can be used whenever the precision and bias of sound speed values are in question. 1.2 This standard does not purport to address all of the safety concerns, if any, associated with its use. It is the responsibility of the user of this standard to establish appropriate safety and health practices and determine the applicability of regulatory limitations prior to use.
Dobson, F; Hinman, R S; Hall, M; Marshall, C J; Sayer, T; Anderson, C; Newcomb, N; Stratford, P W; Bennell, K L
2017-11-01
To estimate the reliability and measurement error of performance-based tests of physical function recommended by the Osteoarthritis Research Society International (OARSI) in people with hip and/or knee osteoarthritis (OA). Prospective repeated measures between independent raters within a session and within-rater over a one-week interval. Relative reliability was estimated for 51 people with hip and/or knee OA (mean age 64.5 years, standard deviation (SD) 6.21 years; 47% female; 36 (70%) primary knee OA) on the 30s Chair Stand Test (30sCST), 40m Fast-Paced Walk Test (40mFPWT), 11-step Stair Climb Test (11-step SCT), Timed Up and Go (TUG), Six-Minute Walk Test (6MWT), 10m Fast-Paced Walk Test (10mFPWT) and 20s Stair Climb Test (20sSCT) using intra-class correlation coefficients (ICC). Absolute reliability was calculated using the standard error of measurement (SEM) and minimal detectable change (MDC). Measurement error (SEM) was acceptable for all tests. Between-rater reliability was: optimal (ICC > 0.9, lower 1-sided 95% CI > 0.7) for the 40mFPWT, 6MWT and 10mFPWT; sufficient (ICC > 0.8, lower 1-sided 95% CI > 0.7) for the 30sCST and 20sSCT; and unacceptable (lower 1-sided 95% CI < 0.7) for the remaining tests. Within-rater reliability was optimal for the 40mFPWT and 6MWT, sufficient for the 30sCST and 10mFPWT, and unacceptable for the 11-step SCT, TUG and 20sSCT. The 30sCST, 40mFPWT, 6MWT and 10mFPWT demonstrated, at minimum, acceptable levels of both between- and within-rater reliability and measurement error. All tests demonstrated sufficiently small measurement error, indicating they are adequate for measuring change over time in individuals with knee/hip OA. Copyright © 2017 Osteoarthritis Research Society International. All rights reserved.
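The absolute-reliability quantities used here follow standard formulas: SEM = SD·sqrt(1 − ICC) and MDC95 = 1.96·sqrt(2)·SEM. A small sketch (the SD and ICC values are invented, not taken from this study):

```python
import math

def sem(sd_pooled, icc):
    """Standard error of measurement from pooled SD and reliability (ICC)."""
    return sd_pooled * math.sqrt(1.0 - icc)

def mdc95(sem_value):
    """Minimal detectable change at 95% confidence for a test-retest design."""
    return 1.96 * math.sqrt(2.0) * sem_value

# e.g. a walk test with a hypothetical pooled SD of 60 m and ICC = 0.92
s = sem(60.0, 0.92)   # ~17.0 m
change = mdc95(s)     # ~47.0 m: smaller changes are indistinguishable from error
```

The MDC is the smallest change in an individual that exceeds measurement noise, which is why the abstract reports it alongside the SEM.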
Kunkl, Annalisa; Risso, Domenico; Terranova, Maria Paola; Girotto, Mauro; Brando, Bruno; Mortara, Lorenzo; Lantieri, Pasquale Bruno
2002-04-15
We addressed the definition of limits of error for %CD4+ and CD4+ counts (AbsCD4+) typical of laboratories of excellence, as well as the grading of laboratories based on the decision to take these limits as boundaries of unacceptable data. We studied the 99.9% confidence intervals of the means of 24 human immunodeficiency virus (HIV)+ and HIV- blood samples analyzed by 18 laboratories of the Liguria Region Quality Assessment Program (Liguria Region QALI). Regression equations of the lower (L1) and upper (L2) confidence limits over the means of data cleared of unusual results were used to interpolate limits of error for mean values in the tested range. L1 and L2 were symmetric around the mean, and a single absolute difference (Abs Res) between the limits and the mean was found. Abs Res increased significantly over mean values (P = 0.0005 for %CD4+, P < 0.0001 for AbsCD4+). The limits were compatible with the errors shown with blind replicates. Unacceptable results, outside the limits, accounted for 25% and 30% of %CD4+ results and for 18% and 35% of AbsCD4+ results in the Liguria Region QALI and the Piemonte Region QA Program, respectively. Limits interpolated over the median showed a similar grading. A comparable fraction of unacceptable data was also found with the method used in the U.K. National External Quality Assessment Scheme (NEQAS) for immune monitoring. We propose the general use of these regression equations to determine bounds for unacceptable data in proficiency testing and to identify laboratories of excellence. Published 2002 Wiley-Liss, Inc.
Haverkamp, Nicolas; Beauducel, André
2017-01-01
We investigated the effects of violations of the sphericity assumption on Type I error rates for different methodical approaches of repeated measures analysis using a simulation approach. In contrast to previous simulation studies on this topic, up to nine measurement occasions were considered. Effects of the level of inter-correlations between measurement occasions on Type I error rates were considered for the first time. Two populations with non-violation of the sphericity assumption, one with uncorrelated measurement occasions and one with moderately correlated measurement occasions, were generated. One population with violation of the sphericity assumption combines uncorrelated with highly correlated measurement occasions. A second population with violation of the sphericity assumption combines moderately correlated and highly correlated measurement occasions. From these four populations without any between-group effect or within-subject effect, 5,000 random samples were drawn. Finally, the mean Type I error rates were computed for multilevel linear models (MLM) with an unstructured covariance matrix (MLM-UN), MLM with compound symmetry (MLM-CS), and repeated measures analysis of variance (rANOVA) models (without correction, with Greenhouse-Geisser correction, and with Huynh-Feldt correction). To examine the effect of both the sample size and the number of measurement occasions, sample sizes of n = 20, 40, 60, 80, and 100 were considered, as well as measurement occasions of m = 3, 6, and 9. With respect to rANOVA, the results argue for the use of rANOVA with Huynh-Feldt correction, especially when the sphericity assumption is violated, the sample size is rather small, and the number of measurement occasions is large. For MLM-UN, the results illustrate a massive progressive bias for small sample sizes (n = 20) and m = 6 or more measurement occasions. This effect could not be found in previous simulation studies with a smaller number of measurement occasions. The
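One ingredient of such analyses is the Greenhouse-Geisser epsilon, computed from the covariance matrix of the measurement occasions; epsilon = 1 indicates sphericity holds, and smaller values indicate violation. A sketch with invented compound-symmetry and AR(1)-like matrices:

```python
import numpy as np

def gg_epsilon(S):
    """Greenhouse-Geisser epsilon from a k x k occasion covariance matrix."""
    k = S.shape[0]
    C = np.eye(k) - np.ones((k, k)) / k   # centering (contrast) projector
    Sc = C @ S @ C                        # double-centered covariance
    lam = np.linalg.eigvalsh(Sc)
    # epsilon = (sum of eigenvalues)^2 / ((k-1) * sum of squared eigenvalues)
    return float(lam.sum() ** 2 / ((k - 1) * (lam ** 2).sum()))

k = 6
# Compound symmetry (equal variances, equal correlations) satisfies sphericity.
cs = 1.0 * np.eye(k) + 0.5 * (np.ones((k, k)) - np.eye(k))
# AR(1)-like decay of correlation with lag violates sphericity.
rho = 0.7 ** np.abs(np.subtract.outer(np.arange(k), np.arange(k)))
```

rANOVA corrections multiply the degrees of freedom by epsilon (or the Huynh-Feldt variant), which is how they trade power for Type I error control.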
Schiefer, Ulrich; Kraus, Christina; Baumbach, Peter; Ungewiß, Judith; Michels, Ralf
2016-10-14
All over the world, refractive errors are among the most frequently occurring treatable disturbances of visual function. Ametropias have a prevalence of nearly 70% among adults in Germany and are thus of great epidemiologic and socio-economic relevance. In the light of their own clinical experience, the authors review pertinent articles retrieved by a selective literature search employing the terms "ametropia," "anisometropia," "refraction," "visual acuity," and "epidemiology." In 2011, only 31% of persons over age 16 in Germany did not use any kind of visual aid; 63.4% wore eyeglasses and 5.3% wore contact lenses. Refractive errors were the most common reason for consulting an ophthalmologist, accounting for 21.1% of all outpatient visits. A pinhole aperture (stenopeic slit) is a suitable instrument for the basic diagnostic evaluation of impaired visual function due to optical factors. Spherical refractive errors (myopia and hyperopia), cylindrical refractive errors (astigmatism), unequal refractive errors in the two eyes (anisometropia), and the typical optical disturbance of old age (presbyopia) cause specific functional limitations and can be detected by a physician who does not need to be an ophthalmologist. Simple functional tests can be used in everyday clinical practice to determine quickly, easily, and safely whether the patient is suffering from a benign and easily correctable type of visual impairment, or whether there are other, more serious underlying causes.
Energy Technology Data Exchange (ETDEWEB)
Lauer-Peccoud, M.R
1998-12-31
In a quality inspection of a set of items in which the measured values of a quality characteristic are contaminated by random errors, one can take wrong decisions that are damaging to quality. It is therefore important to control the risks in such a way that a final quality level is ensured. We consider an item to be defective or not according to whether the value G of its quality characteristic is larger or smaller than a given level g0. We assume that, owing to the limited precision of the measurement instrument, the measurement M of this characteristic is expressed as M = f(G) + ξ, where f is an increasing function such that the value f(g0) is known, and ξ is a random error with mean zero and given variance. First we study the determination of a critical measure m such that a specified quality target is reached after classification of a lot of items, where each item is accepted or rejected depending on whether its measurement is smaller or greater than m. We then analyse the problem of testing the global quality of a lot from measurements on a sample of items taken from the lot. For these two kinds of problems and for different quality targets, we propose solutions, with emphasis on the case where the function f is linear and the error ξ and the variable G are Gaussian. Simulation results allow the efficiency of the different control procedures to be appreciated, together with their robustness with respect to deviations from the assumptions used in the theoretical derivations. (author) 42 refs.
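In the Gaussian case emphasized above, the critical measure m can be approximated by Monte Carlo. The sketch below takes f as the identity, defines an item as defective when G > g0, and targets a cap on the fraction of good items wrongly rejected; all distribution parameters are invented:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 200_000
g0 = 10.0                              # defect threshold on the true value G
G = rng.normal(10.2, 0.6, n)           # true characteristic of the items
M = G + rng.normal(0.0, 0.3, n)        # measurement: f = identity, Gaussian error

def critical_measure(alpha=0.05):
    """Largest m such that at most alpha of the good items (G <= g0) exceed m."""
    good_measurements = M[G <= g0]
    return float(np.quantile(good_measurements, 1.0 - alpha))

m = critical_measure(0.05)
false_reject = float(np.mean(M[G <= g0] > m))   # ~= 0.05 by construction
```

Controlling the complementary risk, defective items accepted because M < m, pulls m in the opposite direction, which is the trade-off the paper's quality targets formalize.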
Energy Technology Data Exchange (ETDEWEB)
Yan, M.; Lovelock, D.; Hunt, M.; Mechalakos, J.; Hu, Y.; Pham, H.; Jackson, A., E-mail: jacksona@mskcc.org [Department of Medical Physics, Memorial Sloan-Kettering Cancer Center, New York, New York 10065 (United States)
2013-12-15
Purpose: To use Cone Beam CT scans obtained just prior to treatments of head and neck cancer patients to measure the setup error and cumulative dose uncertainty of the cochlea. Methods: Data from 10 head and neck patients with 10 planning CTs and 52 Cone Beam CTs taken at time of treatment were used in this study. Patients were treated with conventional fractionation using an IMRT dose painting technique, most with 33 fractions. Weekly radiographic imaging was used to correct the patient setup. The authors used rigid registration of the planning CT and Cone Beam CT scans to find the translational and rotational setup errors, and the spatial setup errors of the cochlea. The planning CT was rotated and translated such that the cochlea positions match those seen in the cone beam scans, cochlea doses were recalculated and fractional doses accumulated. Uncertainties in the positions and cumulative doses of the cochlea were calculated with and without setup adjustments from radiographic imaging. Results: The mean setup error of the cochlea was 0.04 ± 0.33 or 0.06 ± 0.43 cm for RL, 0.09 ± 0.27 or 0.07 ± 0.48 cm for AP, and 0.00 ± 0.21 or −0.24 ± 0.45 cm for SI with and without radiographic imaging, respectively. Setup with radiographic imaging reduced the standard deviation of the setup error by roughly 1–2 mm. The uncertainty of the cochlea dose depends on the treatment plan and the relative positions of the cochlea and target volumes. Combining results for the left and right cochlea, the authors found the accumulated uncertainty of the cochlea dose per fraction was 4.82 (0.39–16.8) cGy, or 10.1 (0.8–32.4) cGy, with and without radiographic imaging, respectively; the percentage uncertainties relative to the planned doses were 4.32% (0.28%–9.06%) and 10.2% (0.7%–63.6%), respectively. Conclusions: Patient setup error introduces uncertainty in the position of the cochlea during radiation treatment. With the assistance of radiographic imaging during setup
Directory of Open Access Journals (Sweden)
Ruy Laurenti
1975-12-01
Among the traditionally used health indicators, the infant mortality rate stands out as one of the most important. It is frequently used by public health professionals to characterize health levels and to evaluate programmes. There are, however, several error factors that affect its value, among which are: the definition of a live birth and its application in practice; under-registration of deaths and births; registration of deaths by place of occurrence; the definition of live births within the year; and incorrect age reporting. There are also qualitative errors, mainly concerning incorrect statements of the cause of death. Several of these factors were measured for São Paulo.
Rosner, Bernard; Glynn, Robert J
2007-02-10
The Spearman (ρs) and Kendall (τ) rank correlation coefficients are routinely used as measures of association between non-normally distributed random variables. However, confidence limits for ρs are only available under the assumption of bivariate normality, and for τ under the assumption of asymptotic normality of τ. In this paper, we introduce another approach for obtaining confidence limits for ρs or τ based on the arcsin transformation of sample probit score correlations. This approach is shown to be applicable for an arbitrary bivariate distribution. The arcsin-based estimators for ρs and τ (denoted by ρs,a and τa) are shown to have an asymptotic relative efficiency (ARE) of 9/π² compared with the usual estimators ρs and τ when ρs and τ are each 0. In some nutritional applications, the Spearman rank correlation between nutrient intake as assessed by a reference instrument and nutrient intake as assessed by a surrogate instrument is used as a measure of validity of the surrogate instrument. However, if only a single replicate (or a few replicates) of the reference instrument are available, then the estimated Spearman rank correlation will be downwardly biased due to measurement error. In this paper, we use the probit transformation as a tool for specifying an ANOVA-type model for replicated ranked data, resulting in point and interval estimates of a measurement-error-corrected rank correlation. This extends previous work by Rosner and Willett on point and interval estimates of measurement-error-corrected Pearson correlations. 2006 John Wiley & Sons, Ltd.
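The first step of this approach, correlating probit-transformed ranks, can be sketched as below. The interval uses an assumed asymptotic standard error of 1/sqrt(n − 3) on the arcsin scale as a placeholder; it is not the variance derived in the paper:

```python
import numpy as np
from statistics import NormalDist

def probit_score_correlation(x, y):
    """Pearson correlation of probit-transformed ranks (no tie handling here)."""
    inv = NormalDist().inv_cdf
    n = len(x)
    rx = np.argsort(np.argsort(x)) + 1        # ranks 1..n
    ry = np.argsort(np.argsort(y)) + 1
    px = np.array([inv(r / (n + 1.0)) for r in rx])
    py = np.array([inv(r / (n + 1.0)) for r in ry])
    return float(np.corrcoef(px, py)[0, 1])

def arcsin_ci(r, n, z=1.96):
    """Interval on the arcsin scale; SE = 1/sqrt(n-3) is an assumption, not the
    variance derived in the paper."""
    h, se = np.arcsin(r), 1.0 / np.sqrt(n - 3)
    return (np.sin(max(h - z * se, -np.pi / 2)),
            np.sin(min(h + z * se, np.pi / 2)))

x = np.random.default_rng(3).normal(size=300)
y = 0.8 * x + 0.6 * np.random.default_rng(4).normal(size=300)
r = probit_score_correlation(x, y)
lo, hi = arcsin_ci(r, len(x))
```

Because the transform depends only on ranks, the same estimate is obtained under any monotone distortion of x or y, which is the distribution-free property the abstract emphasizes.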
Uncertainty quantification and error analysis
Energy Technology Data Exchange (ETDEWEB)
Higdon, Dave M [Los Alamos National Laboratory; Anderson, Mark C [Los Alamos National Laboratory; Habib, Salman [Los Alamos National Laboratory; Klein, Richard [Los Alamos National Laboratory; Berliner, Mark [OHIO STATE UNIV.; Covey, Curt [LLNL; Ghattas, Omar [UNIV OF TEXAS; Graziani, Carlo [UNIV OF CHICAGO; Seager, Mark [LLNL; Sefcik, Joseph [LLNL; Stark, Philip [UC/BERKELEY; Stewart, James [SNL
2010-01-01
UQ studies all sources of error and uncertainty, including: systematic and stochastic measurement error; ignorance; limitations of theoretical models; limitations of numerical representations of those models; limitations on the accuracy and reliability of computations, approximations, and algorithms; and human error. A more precise definition for UQ is suggested below.
Skylab water balance error analysis
Leonard, J. I.
1977-01-01
Estimates of the precision of the net water balance were obtained for the entire Skylab preflight and inflight phases, as well as for the first two weeks of flight. Quantitative estimates of both total sampling errors and instrumentation errors were obtained. It was shown that measurement error is minimal in comparison to biological variability, and little can be gained from improvement in analytical accuracy. In addition, a propagation-of-error analysis demonstrated that the total water balance error could be accounted for almost entirely by the errors associated with body mass changes. Errors due to interaction between terms in the water balance equation (covariances) represented less than 10% of the total error. Overall, the analysis provides evidence that daily measurements of body water changes obtained from the indirect balance technique are reasonable, precise, and reliable. The method is not biased toward net retention or loss.
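The propagation-of-error logic can be sketched numerically: the variance of the balance is the sum of the term variances plus the covariance terms. All numbers below are invented for illustration, chosen only so that the body-mass error dominates and the covariances stay under 10%, as the analysis found:

```python
import math

# Hypothetical daily 1-sigma errors (g/day) for each water-balance term.
sd = {"body_mass": 220.0, "intake": 60.0, "urine": 40.0, "evaporative": 50.0}
cov_sum = 1500.0   # assumed total of the pairwise covariance terms (2 * sum cov)

# Var(balance) = sum of term variances + covariance terms
var_total = sum(v * v for v in sd.values()) + cov_sum
sd_total = math.sqrt(var_total)
mass_share = sd["body_mass"] ** 2 / var_total   # fraction explained by body mass
```

Because variances add, the largest single term dominates the total quadratically, which is why improving the smaller analytical errors buys little precision.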
Directory of Open Access Journals (Sweden)
Roya Dokoohaki
2015-03-01
Background: The global prevalence of hypertension is one billion persons, resulting in about 7.1 million deaths per year. In Iran, no national statistics on hypertension are available, but it has been reported that 40% of deaths are caused by cardiovascular diseases. Measuring blood pressure is one of the basic elements of a medical examination; however, the scientific standards of the technique are often not observed. Objectives: The present study aimed to evaluate the frequency of errors in measuring blood pressure among B.Sc. nurses working in government hospitals of Shiraz, Iran. Materials and Methods: This descriptive-analytical, cross-sectional study was conducted on 250 nurses selected from various wards. The study data were collected using the standard checklist for blood pressure measurement technique of the American Heart Association (AHA) guideline, and a questionnaire containing questions on knowledge of blood pressure measurement skills. Results: This study showed that 54.0% and 78.0% of the participants obtained moderate scores (50-74.99) in the theoretical and practical tests, respectively. Pearson's correlation coefficient demonstrated no significant relationship between the theory and practice scores (P > 0.05). Most of the errors in measuring blood pressure consisted of not measuring blood pressure in two stages, not carrying out the required preparations, not observing the proper time interval between the two stages, and not following the measurement sequence according to the checklist. Conclusions: Considering the participants' theory and practice scores, the evaluation of blood pressure measurement, an important basis of diagnosis and treatment, should be considered an educational priority for health teams.
Yanez Rausell, L.; Malenovsky, Z.; Clevers, J.G.P.W.; Schaepman, M.E.
2014-01-01
We present uncertainties associated with the measurement of coniferous needle-leaf optical properties (OPs) with an integrating sphere using an optimized gap-fraction (GF) correction method, where GF refers to the air gaps appearing between the needles of a measured sample. We used an optically
van de Ridder, Bert; Hakvoort, Wouter; van Dijk, Johannes; Lötters, Joost Conrad; de Boer, Andries
2014-01-01
In this paper the influence of external vibrations on the measurement value of a Coriolis mass-flow meter (CMFM) for low flows is investigated and quantified. Model results are compared with experimental results to improve the knowledge on how external vibrations affect the mass-flow measurement
Inoue, Mitsuhiro; Shiomi, Hiroya; Iwata, Hiromitsu; Taguchi, Junichi; Okawa, Kohei; Kikuchi, Chie; Inada, Kosaku; Iwabuchi, Michio; Murai, Taro; Koike, Izumi; Tatewaki, Koshi; Ohta, Seiji; Inoue, Tomio
2015-01-08
The accuracy of the CyberKnife Synchrony Respiratory Tracking System (SRTS) is considered to be patient-dependent because the SRTS relies on an individual correlation between the internal tumor position (ITP) and the external marker position (EMP), as well as a prediction method to compensate for the delay incurred to adjust the position of the linear accelerator (linac). We aimed to develop a system for obtaining pretreatment statistical measurements of the SRTS tracking error by using beam's eye view (BEV) images, to enable the prediction of the patient-specific accuracy. The respiratory motion data for the ITP and the EMP were derived from cine MR images obtained from 23 patients. The dynamic motion phantom was used to reproduce both the ITP and EMP motions. The CyberKnife was subsequently operated with the SRTS, with a CCD camera mounted on the head of the linac. BEV images from the CCD camera were recorded during the tracking of a ball target by the linac. The tracking error was measured at 15 Hz using in-house software. To assess the precision of the position detection using an MR image, the positions of test tubes (determined from MR images) were compared with their actual positions. To assess the precision of the position detection of the ball, ball positions measured from BEV images were compared with values measured using a Vernier caliper. The SRTS accuracy was evaluated by determining the tracking error that could be identified with a probability of more than 95% (Ep95). The detection precision of the tumor position (determined from cine MR images) was < 0.2 mm. The detection precision of the tracking error when using the BEV images was < 0.2mm. These two detection precisions were derived from our measurement system and were not obtained from the SRTS. The median of Ep95 was found to be 1.5 (range, 1.0-3.5) mm. The difference between the minimum and maximum Ep95 was 2.5mm, indicating that this provides a better means of evaluating patient-specific SRTS
Vaz, Sharmila; Parsons, Richard; Passmore, Anne Elizabeth; Andreou, Pantelis; Falkmer, Torbjörn
2013-01-01
The social skills rating system (SSRS) is used to assess social skills and competence in children and adolescents. While its characteristics based on United States (US) samples are published, corresponding Australian figures are unavailable. Using a 4-week retest design, we examined the internal consistency, retest reliability and measurement error (ME) of the SSRS secondary student form (SSF) in a sample of Year 7 students (N = 187) from five randomly selected public schools in Perth, Western Australia. Internal consistency (IC) of the total scale and most subscale scores (except empathy) on the frequency rating scale was adequate to permit independent use. On the importance rating scale, most IC estimates for girls fell below the benchmark. Test–retest estimates of the total scale and subscales were insufficient to permit reliable use. ME of the total scale score (frequency rating) for boys was equivalent to the US estimate, while that for girls was lower than the US error. ME of the total scale score (importance rating) was larger than the error using the frequency rating scale. The study finding supports the idea of using multiple informants (e.g. teacher and parent reports), not just the student, as recommended in the manual. Future research needs to substantiate the clinical meaningfulness of the MEs calculated in this study by corroborating them against the respective Minimum Clinically Important Difference (MCID). PMID:24040116
[The error, source of learning].
Joyeux, Stéphanie; Bohic, Valérie
2016-05-01
The error itself is not recognised as a fault. It is the intentionality which differentiates between an error and a fault. An error is unintentional while a fault is a failure to respect known rules. The risk of error is omnipresent in health institutions. Public authorities have therefore set out a series of measures to reduce this risk. Copyright © 2016 Elsevier Masson SAS. All rights reserved.
Rapid mapping of volumetric errors
Energy Technology Data Exchange (ETDEWEB)
Krulewich, D.; Hale, L.; Yordy, D.
1995-09-13
This paper describes a relatively inexpensive, fast, and easy-to-execute approach to mapping the volumetric errors of a machine tool, coordinate measuring machine, or robot. An error map is used to characterize a machine or to improve its accuracy by compensating for the systematic errors. The method consists of three steps: (1) modeling the relationship between the volumetric error and the current state of the machine; (2) acquiring error data based on length measurements throughout the work volume; and (3) fitting the model to the particular machine.
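Step (3) can be sketched as an ordinary least-squares fit of an error model to length-measurement residuals. The single-parameter-per-axis scale-error model below is a hypothetical stand-in for the paper's actual machine model, and all data are synthetic:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: measured-minus-nominal length errors for 50 measurement
# lines with random orientations through the work volume
n = 50
directions = rng.normal(size=(n, 3))
directions /= np.linalg.norm(directions, axis=1, keepdims=True)
lengths = rng.uniform(100.0, 500.0, n)            # nominal lengths, mm
true_scale = np.array([30e-6, -20e-6, 10e-6])     # assumed axis scale errors

# Under pure axis scale errors s_i, a length L along direction cosines d_i
# has error dL = L * sum_i s_i * d_i^2
A = lengths[:, None] * directions**2              # design matrix (n x 3)
dL = A @ true_scale + rng.normal(0, 1e-4, n)      # observed errors + noise, mm

# Least-squares estimate of the three scale-error parameters
s_hat, *_ = np.linalg.lstsq(A, dL, rcond=None)
print(np.round(s_hat * 1e6, 1))                   # recovered scale errors, ppm
```

A real error map would include straightness, squareness, and angular error terms per axis; the fitting machinery stays the same.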
Directory of Open Access Journals (Sweden)
Priteshkumar Sureshchand Ganna
2014-01-01
Full Text Available Aim and objective: The aim of this study was to compare cephalometric measurements made using Nemoceph software with manual tracings. Materials and methods: The sample consisted of 60 lateral cephalometric radiographs of patients randomly selected from the existing records of the Department of Orthodontics and Dentofacial Orthopedics, KVG Dental College and Hospital, Sullia, Dakshina Kannada. Nineteen angular and 11 linear measurements were analyzed on each radiograph. All the lateral cephalographs were hand-traced; the same cephalographs were then scanned and digitally traced with Nemoceph software. The results were tabulated in Microsoft Excel. The level of significance was set at p < 0.05. A paired t-test was performed using SPSS software for comparison between tracings done by the manual method and by Nemoceph software. Results: Significant differences were found between the two methods for five (four angular and one linear) of the 30 measurements: saddle angle, articular angle, upper lip to E-line, Frankfort horizontal to lower incisor axis angle, and lower incisor axis to mandibular plane angle. Conclusion: Both angular and linear measurements were accurate and reliable. Except for the few measurements showing highly significant differences, the measurements obtained with the Nemoceph software and with the conventional method were highly correlated.
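The per-measurement comparison above is a paired t-test: the same angle is measured twice on each radiograph, once by each method. The sketch below uses made-up saddle-angle readings and the stdlib only; the critical value 2.00 is the two-sided t threshold for 59 degrees of freedom at alpha = 0.05:

```python
import math
import random

random.seed(1)
# Hypothetical saddle-angle readings (degrees) on 60 radiographs
manual = [random.gauss(123.0, 5.0) for _ in range(60)]
software = [m + random.gauss(0.8, 1.0) for m in manual]   # software reads slightly higher

# Paired t-test: t = mean(d) / (sd(d) / sqrt(n)) on the per-film differences
diffs = [s - m for s, m in zip(software, manual)]
n = len(diffs)
mean_d = sum(diffs) / n
sd_d = math.sqrt(sum((d - mean_d) ** 2 for d in diffs) / (n - 1))
t_stat = mean_d / (sd_d / math.sqrt(n))

print(f"mean difference = {mean_d:.2f} deg, t = {t_stat:.2f}")
if abs(t_stat) > 2.00:      # critical value for df = 59 at p < 0.05
    print("significant difference between the two methods")
```

Pairing by radiograph removes between-patient variation, which is why the test is sensitive to even sub-degree systematic offsets between tracing methods.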
Wang, Yang; Beirle, Steffen; Hendrick, Francois; Hilboll, Andreas; Jin, Junli; Kyuberis, Aleksandra A.; Lampel, Johannes; Li, Ang; Luo, Yuhan; Lodi, Lorenzo; Ma, Jianzhong; Navarro, Monica; Ortega, Ivan; Peters, Enno; Polyansky, Oleg L.; Remmers, Julia; Richter, Andreas; Puentedura, Olga; Van Roozendael, Michel; Seyler, André; Tennyson, Jonathan; Volkamer, Rainer; Xie, Pinhua; Zobov, Nikolai F.; Wagner, Thomas
2017-10-01
In order to promote the development of the passive DOAS technique, the Multi Axis DOAS - Comparison campaign for Aerosols and Trace gases (MAD-CAT) was held at the Max Planck Institute for Chemistry in Mainz, Germany, from June to October 2013. Here, we systematically compare the differential slant column densities (dSCDs) of nitrous acid (HONO) derived from measurements of seven different instruments. We also compare the tropospheric difference of SCDs (delta SCD) of HONO, namely the difference between the SCDs for the non-zenith observations and the zenith observation of the same elevation sequence. Different research groups analysed the spectra from their own instruments using their individual fit software. The fit errors of HONO dSCDs from the instruments with cooled large-size detectors are mostly in the range of 0.1 to 0.3 × 10^15 molecules cm^-2 for an integration time of 1 min. The fit error for the mini-MAX-DOAS is around 0.7 × 10^15 molecules cm^-2. Although the HONO delta SCDs are normally smaller than 6 × 10^15 molecules cm^-2, consistent time series of HONO delta SCDs are retrieved from the measurements of the different instruments. Both fits with a sequential Fraunhofer reference spectrum (FRS) and with a daily noon FRS lead to similar consistency. Apart from the mini-MAX-DOAS, the systematic absolute differences of HONO delta SCDs between the instruments are smaller than 0.63 × 10^15 molecules cm^-2. The correlation coefficients are higher than 0.7 and the slopes of linear regressions deviate from unity by less than 16 % for the elevation angle of 1°. The correlations decrease with increasing elevation angle. All the participants also analysed synthetic spectra using the same baseline DOAS settings to evaluate the systematic errors of the HONO results from their respective fit programs. In general the errors are smaller than 0.3 × 10^15 molecules cm^-2, which is about half of the systematic difference between the real measurements. The differences of HONO delta SCDs
Energy Technology Data Exchange (ETDEWEB)
Saunders, C.; Aldering, G.; Aragon, C.; Bailey, S.; Childress, M.; Fakhouri, H. K.; Kim, A. G. [Physics Division, Lawrence Berkeley National Laboratory, 1 Cyclotron Road, Berkeley, CA 94720 (United States); Antilogus, P.; Bongard, S.; Canto, A.; Cellier-Holzem, F.; Guy, J. [Laboratoire de Physique Nucléaire et des Hautes Énergies, Université Pierre et Marie Curie Paris 6, Université Paris Diderot Paris 7, CNRS-IN2P3, 4 Place Jussieu, F-75252 Paris Cedex 05 (France); Baltay, C. [Department of Physics, Yale University, New Haven, CT 06250-8121 (United States); Buton, C.; Chotard, N.; Copin, Y.; Gangler, E. [Université de Lyon, Université Lyon 1, CNRS/IN2P3, Institut de Physique Nucléaire de Lyon, 69622 Villeurbanne (France); Feindt, U.; Kerschhaggl, M.; Kowalski, M. [Physikalisches Institut, Universität Bonn, Nußallee 12, D-53115 Bonn (Germany); and others
2015-02-10
We estimate systematic errors due to K-corrections in standard photometric analyses of high-redshift Type Ia supernovae. Errors due to K-correction occur when the spectral template model underlying the light curve fitter poorly represents the actual supernova spectral energy distribution, meaning that the distance modulus cannot be recovered accurately. In order to quantify this effect, synthetic photometry is performed on artificially redshifted spectrophotometric data from 119 low-redshift supernovae from the Nearby Supernova Factory, and the resulting light curves are fit with a conventional light curve fitter. We measure the variation in the standardized magnitude that would be fit for a given supernova if located at a range of redshifts and observed with various filter sets corresponding to current and future supernova surveys. We find significant variation in the measurements of the same supernovae placed at different redshifts regardless of filters used, which causes dispersion greater than ∼0.05 mag for measurements of photometry using the Sloan-like filters and a bias that corresponds to a 0.03 shift in w when applied to an outside data set. To test the result of a shift in supernova population or environment at higher redshifts, we repeat our calculations with the addition of a reweighting of the supernovae as a function of redshift and find that this strongly affects the results and would have repercussions for cosmology. We discuss possible methods to reduce the contribution of the K-correction bias and uncertainty.
Energy Technology Data Exchange (ETDEWEB)
Bunt, Fabian van de [VU University Medical Center, Amsterdam (Netherlands); Pearl, Michael L.; Lee, Eric K.; Peng, Lauren; Didomenico, Paul [Kaiser Permanente, Los Angeles, CA (United States)
2015-11-15
Recent studies have challenged the accuracy of conventional measurements of glenoid version. Variability in the orientation of the scapula arising from individual anatomical differences and patient positioning, combined with differences in observer measurement practices, has been identified as a source of variability. The purpose of this study was to explore the utility and reliability of clinically available software that allows manipulation of three-dimensional images in order to bridge the variance between clinical and anatomic version in a clinical setting. Twenty CT scans of normal glenoids of patients who had proximal humerus fractures were measured for version. Four reviewers first measured version in the conventional manner (clinical version); measurements were made again (anatomic version) after employing a protocol for reformatting the CT data to align the coronal and sagittal planes with the superior-inferior axis of the glenoid and the scapular body, respectively. The average value of clinical retroversion for all reviewers and all subjects was -1.4° (range, -16° to 21°), as compared to -3.2° (range, -21° to 6°) when measured from reformatted images. The mean difference between anatomical and clinical version was 1.9° ± 5.6° but ranged on individual measurements from -13° to 26°. In no instance did all four observers choose the same image slice from the sequence of images. This study confirmed the variation in glenoid version dependent on scapular orientation previously identified in other studies using scapular models, and presents a clinically accessible protocol to correct for scapular orientation from the patient's CT data. (orig.)
Energy Technology Data Exchange (ETDEWEB)
Yukawa, Osamu [Hiroshima General Hospital (Japan)
2001-02-01
A new method termed the opposite directional flow-encoding (ODFE) technique is proposed to increase the accuracy and reproducibility of phase-contrast flow measurements by correcting the non-linear background of velocity images induced by concomitant magnetic fields (Maxwell terms). In this technique, the volume flow rate is calculated from the difference of two region-of-interest (ROI) values derived from two velocity images obtained by reversing the flow-encoding direction. To evaluate the technique, various phantom experiments were carried out and the volume blood flow rates of the internal carotid arteries (ICAs) were measured in four volunteers. The technique could measure the volume flow rates of the phantom with higher accuracy (mean absolute percentage error = 1.04%) and reproducibility (coefficient of variation = 1.18%) than conventional methods. Flow measurement with the technique was not significantly affected by ROI size variation, measuring position, or flow obliquity not exceeding 30°. The volume flow rates in the ICAs of a volunteer were measured with high reproducibility (coefficient of variation = 2.89% on the right, 1.48% on the left), and the flow measurement was not significantly affected by ROI size variation. The ODFE technique can minimize the effect of the non-linear background due to Maxwell terms. The technique allows the use of ROIs of approximately the size of the flow signal and provides accurate and objective phase-contrast flow measurements. (author)
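The core idea can be sketched numerically: the Maxwell-term background adds the same offset to both encodings, while the true velocity changes sign when the flow-encoding direction is reversed, so differencing the two images cancels the background. This is a voxelwise illustration with made-up fields, not the study's ROI-based volume-flow calculation:

```python
import numpy as np

# Synthetic 65 x 65 velocity images, cm/s
x = np.linspace(-1, 1, 65)
X, Y = np.meshgrid(x, x)
background = 0.5 * (X**2 + Y**2)                    # nonlinear Maxwell-term background
v_true = 30.0 * np.exp(-(X**2 + Y**2) / 0.02)       # narrow flow jet

v_plus = v_true + background                        # flow-encoding in one direction
v_minus = -v_true + background                      # flow-encoding reversed

# The background is identical in both images, so the half-difference
# recovers the true velocity
v_corrected = (v_plus - v_minus) / 2.0
print(np.max(np.abs(v_corrected - v_true)))         # residual is ~0
```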
DEFF Research Database (Denmark)
Gibbon, Timothy Braidwood; Yu, Xianbin; Tafur Monroy, Idelfonso
2009-01-01
We propose the novel generation of photonic ultra-wideband signals using an uncooled DFB laser. For the first time we experimentally demonstrate bit-for-bit DSP BER measurements for transmission of a 781.25 Mbit/s photonic UWB signal.
Directory of Open Access Journals (Sweden)
M. Litt
2015-08-01
Full Text Available Over glaciers in the outer tropics, during the dry winter season, turbulent fluxes are an important sink of melt energy due to high sublimation rates, but measurements in stable surface layers in remote and complex terrains remain challenging. Eddy-covariance (EC) and bulk-aerodynamic (BA) methods were used to estimate the surface turbulent heat fluxes of sensible (H) and latent heat (LE) in the ablation zone of the tropical Zongo Glacier, Bolivia (16° S, 5080 m a.s.l.), from 22 July to 1 September 2007. We studied the turbulent fluxes and their associated random and systematic measurement errors under the three most frequent wind regimes. For nightly, density-driven katabatic flows, and for strong downslope flows related to large-scale forcing, H generally heats the surface (i.e. is positive), while LE cools it down (i.e. is negative). On average, both fluxes exhibit similar magnitudes and cancel each other out. Most energy losses through turbulence occur for daytime upslope flows, when H is weak due to small temperature gradients and LE is strongly negative due to very dry air. Mean random errors of the BA method (6 % on net H + LE fluxes) originated mainly from large uncertainties in roughness lengths. For EC fluxes, mean random errors were due mainly to poor statistical sampling of large-scale outer-layer eddies (12 %). The BA method is highly sensitive to the method used to derive surface temperature from longwave radiation measurements and underestimates fluxes due to vertical flux divergence at low heights and nonstationarity of turbulent flow. The EC method also probably underestimates the fluxes, albeit to a lesser extent, due to underestimation of vertical wind speed and to vertical flux divergence. For both methods, when H and LE compensate each other in downslope fluxes, biases tend to cancel each other out or remain small. When the net turbulent fluxes (H + LE) are the largest in upslope flows, nonstationarity effects and underestimations of the
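The bulk-aerodynamic fluxes have the textbook form H = ρ·cp·C·u·(Ta − Ts) and LE = ρ·Lv·C·u·(qa − qs). The sketch below uses that generic formulation with assumed values; the study's actual implementation includes stability corrections and measured roughness lengths:

```python
# All input values are illustrative, not the study's data
rho = 0.74       # air density at ~5000 m, kg m^-3
cp = 1005.0      # specific heat of air, J kg^-1 K^-1
Lv = 2.83e6      # latent heat of sublimation, J kg^-1
C = 2.0e-3       # bulk exchange coefficient (assumed)
u = 4.0          # wind speed, m s^-1
Ta, Ts = 272.0, 270.5     # air and surface temperature, K (katabatic case)
qa, qs = 2.0e-3, 3.0e-3   # specific humidity of air and at the surface, kg kg^-1

H = rho * cp * C * u * (Ta - Ts)    # air warmer than surface -> H heats the surface
LE = rho * Lv * C * u * (qa - qs)   # dry air -> sublimation, LE cools the surface
print(f"H = {H:.1f} W m^-2, LE = {LE:.1f} W m^-2")
```

With these numbers H is positive and LE negative, reproducing the sign pattern the abstract reports for downslope flows.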
Chen, Y.; Boucherie, Richardus J.; Goseling, Jasper
2016-01-01
We consider homogeneous random walks in the quarter-plane. The necessary conditions characterizing random walks whose invariant measure is a sum of geometric terms are provided in Chen et al. (arXiv:1304.3316, 2013; Probab Eng Informational Sci 29(02):233–251, 2015). Based on these
DEFF Research Database (Denmark)
Jentzsch, G.; Knudsen, Per; Ramatschi, M.
2000-01-01
observations. Near the coast ocean tidal loading causes additional vertical deformations in the order of 1 to 10 cm Therefore, tidal gravity measurements were carried out at four fiducial sites around Greenland in order to provide corrections for the kinematic part of the coordinates of these sites. Starting...
van de Ridder, Bert; Hakvoort, Wouter; van Dijk, Johannes; Lötters, Joost Conrad; de Boer, Andries; Dimitrovova, Z.; de Almeida, J.R.
2013-01-01
In this paper the quantitative influence of external vibrations on the measurement value of a Coriolis mass-flow meter for low flows is investigated, with the eventual goal of reducing the influence of vibrations. Model results are compared with experimental results to improve the knowledge on how
da Cunha, Antonio Ribeiro
2015-05-01
This study aimed to assess measurements of temperature and relative humidity obtained with a HOBO data logger, under various conditions of exposure to solar radiation, comparing them with those obtained through the use of a temperature/relative humidity probe and a copper-constantan thermocouple psychrometer, which are considered the standards for obtaining such measurements. Data were collected over a 6-day period (from 25 March to 1 April, 2010), during which the equipment was monitored continuously and simultaneously. We employed the following combinations of equipment and conditions: a HOBO data logger in full sunlight; a HOBO data logger shielded within a white plastic cup with windows for air circulation; a HOBO data logger shielded within a gill-type shelter (a multi-plate plastic prototype); a copper-constantan thermocouple psychrometer exposed to natural ventilation and protected from sunlight; and a temperature/relative humidity probe under a commercial, multi-plate radiation shield. Comparisons between the measurements obtained with the various devices were made on the basis of statistical indicators: linear regression, with coefficient of determination; index of agreement; maximum absolute error; and mean absolute error. The prototype multi-plate shelter (gill-type) used to protect the HOBO data logger was found to provide the best protection against the effects of solar radiation on measurements of temperature and relative humidity. The precision and accuracy of a device that measures temperature and relative humidity depend on an efficient shelter that minimizes the interference caused by solar radiation, thereby avoiding erroneous analysis of the data obtained.
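The four agreement statistics named above can be computed in a few lines. The sketch assumes the "index of agreement" is Willmott's d, the usual choice in sensor-comparison studies, and uses made-up temperature series:

```python
import numpy as np

ref = np.array([21.3, 22.8, 25.1, 27.4, 26.0, 23.2, 20.9])   # reference sensor, °C
hobo = np.array([21.6, 23.1, 25.9, 28.3, 26.4, 23.0, 21.2])  # HOBO logger, °C

# Linear regression and coefficient of determination
slope, intercept = np.polyfit(ref, hobo, 1)
pred = slope * ref + intercept
r2 = 1 - np.sum((hobo - pred) ** 2) / np.sum((hobo - hobo.mean()) ** 2)

# Willmott's index of agreement (1 = perfect agreement)
d = 1 - np.sum((hobo - ref) ** 2) / np.sum(
    (np.abs(hobo - ref.mean()) + np.abs(ref - ref.mean())) ** 2)

mae = np.mean(np.abs(hobo - ref))    # mean absolute error
emax = np.max(np.abs(hobo - ref))    # maximum absolute error
print(f"R^2 = {r2:.3f}, d = {d:.3f}, MAE = {mae:.2f} °C, max error = {emax:.2f} °C")
```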
Dowdell, S; Tyler, M; McNamara, J; Sloan, K; Ceylan, A; Rinks, A
2016-11-15
Plane-parallel ionisation chambers are regularly used to conduct relative dosimetry measurements for therapeutic kilovoltage beams during commissioning and routine quality assurance. This paper presents the first quantification of the polarity effect in kilovoltage photon beams for two types of commercially available plane-parallel ionisation chambers used for such measurements. Measurements were performed at various depths along the central axis in a solid water phantom and for different field sizes at 2 cm depth to determine the polarity effect for PTW Advanced Markus and Roos ionisation chambers (PTW-Freiburg, Germany). Data were acquired for kilovoltage beams between 100 kVp (half-value layer (HVL) = 2.88 mm Al) and 250 kVp (HVL = 2.12 mm Cu) and for field sizes of 3-15 cm diameter at 30 cm focus-source distance (FSD) and 4 × 4 cm² to 20 × 20 cm² at 50 cm FSD. Substantial polarity effects, up to 9.6%, were observed for the Advanced Markus chamber, compared to a maximum of 0.5% for the Roos chamber. The magnitude of the polarity effect was observed to increase with field size and beam energy but was consistent with depth. The polarity effect is directly influenced by chamber design, with potentially large polarity effects for some plane-parallel ionisation chambers. Depending on the specific chamber used, polarity corrections may be required for output factor measurements of kilovoltage photon beams. Failure to account for polarity effects could lead to an incorrect dose being delivered to the patient.
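Polarity corrections of the kind discussed above are conventionally expressed as the factor k_pol = (|M+| + |M−|) / (2|M|), as defined in dosimetry codes of practice such as IAEA TRS-398, where M is the reading at the routinely used polarity. The readings below are illustrative, chosen to mimic the 9.6% and 0.5% effects reported for the two chambers:

```python
def k_pol(m_plus: float, m_minus: float, m_routine: float) -> float:
    """Polarity correction factor from readings at both polarities."""
    return (abs(m_plus) + abs(m_minus)) / (2.0 * abs(m_routine))

# Hypothetical normalised readings: a 9.6% polarity effect (Advanced
# Markus-like case) versus a 0.5% effect (Roos-like case)
print(k_pol(1.000, -1.096, 1.000))   # ≈ 1.048
print(k_pol(1.000, -1.005, 1.000))   # ≈ 1.0025
```

A large spread between the two polarity readings translates directly into a k_pol far from unity, which is why output factors measured with such a chamber need the correction.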
Directory of Open Access Journals (Sweden)
Adriana VÂLCU
2011-09-01
Full Text Available Based on the reference document, the article proposes a way to calculate the errors of indication and the associated measurement uncertainties, using the general information provided by the calibration certificate of a balance (a non-automatic weighing instrument, NAWI) used in the medical field. The paper may also be considered a useful guideline for: operators working in laboratories accredited in the medical (or other) fields where weighing operations are part of their testing activities; test houses, laboratories, or manufacturers using calibrated non-automatic weighing instruments for measurements relevant to the quality of production subject to QM requirements (e.g. the ISO 9000 series, ISO 10012, ISO/IEC 17025); bodies accrediting laboratories; and laboratories accredited for the calibration of NAWI. The article refers only to electronic weighing instruments with a maximum capacity of up to 30 kg. Starting from the results provided by a calibration certificate, an example of the calculation is presented.
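A minimal sketch of such a calculation, following the usual NAWI calibration conventions (e.g. EURAMET cg-18): the error of indication is E = I − m_ref, and the standard uncertainty components are combined in quadrature and expanded with k = 2. All numbers below are assumed, not taken from the article:

```python
import math

indication = 10.0002     # balance reading, kg
ref_mass = 10.0000       # conventional mass of the reference weight, kg

error_of_indication = indication - ref_mass       # E = I - m_ref

# Assumed standard uncertainty components, kg
u_ref = 5.0e-6                       # calibration of the reference weight
u_rep = 8.0e-6                       # repeatability of the balance
u_res = 0.1e-3 / math.sqrt(12)       # resolution d = 0.1 g, rectangular distribution

u_c = math.sqrt(u_ref**2 + u_rep**2 + u_res**2)   # combined standard uncertainty
U = 2 * u_c                                       # expanded uncertainty, k = 2 (~95 %)
print(f"E = {error_of_indication * 1000:.2f} g, U = {U * 1000:.3f} g")
```

In this toy case the resolution term dominates the budget, which is typical for balances read near their display resolution.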
Energy Technology Data Exchange (ETDEWEB)
Beddo, M.E.; Spinka, H.; Underwood, D.G.
1992-08-14
Studies of inclusive direct-γ production by pp interactions at RHIC energies were performed. Rates and the associated uncertainties on spin-spin observables for this process were computed for the planned PHENIX and STAR detectors at energies between √s = 50 and 500 GeV. Also, rates were computed for direct-γ + jet production for the STAR detector. The goal was to study the gluon spin distribution functions with such measurements. Recommendations concerning the electromagnetic calorimeter design and the need for an endcap calorimeter for STAR are made.
Directory of Open Access Journals (Sweden)
Talita Adão Perini
2005-02-01
Anthropometrical measurements have been widely utilized to follow children's development, to verify adaptations to physical training, in athlete selection, and in studies of ethnic characterization, among other applications. Control of the precision and accuracy of the measurements results in more reliable data. The objective of the present study was to disseminate the strategy for computing the technical error of measurement (TEM), following the methodology of Kevin Norton and Tim Olds (2000), and to evaluate the performance of laboratory trainees. Three beginner anthropometrists of the Exercise Physiology Laboratory (Labofise) of the University of Brazil were evaluated. They performed skinfold thickness measurements (Cescorf, 0.1 mm) at nine different anthropometric sites on 35 volunteers (25.45 ± 9.96 years). The standardization of the International Society for the Advancement of Kinanthropometry (ISAK) was adopted for the measurements. For the intra-evaluator TEM, the measurements were taken on the same volunteers on two different days; for the inter-evaluator TEM, the measurements were taken on the same group of volunteers, on the same day, by the three anthropometrists. The results showed unacceptable TEMs for only two evaluators in the intra-evaluator analysis; all other TEMs reached acceptable values. The unacceptable TEMs demonstrated the need for technical training of the anthropometrists in order to minimize the observed variability.
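The intra-evaluator TEM in the Norton & Olds (2000) methodology is TEM = sqrt(Σd² / 2n) over the test-retest differences d, often reported as %TEM = 100·TEM/mean. The skinfold readings below are hypothetical:

```python
import math

def tem(day1, day2):
    """Technical error of measurement from paired test-retest readings."""
    diffs = [a - b for a, b in zip(day1, day2)]
    return math.sqrt(sum(d * d for d in diffs) / (2 * len(diffs)))

# Hypothetical triceps skinfold readings (mm) by one evaluator on two days
day1 = [10.2, 12.5, 8.9, 15.1, 11.0]
day2 = [10.6, 12.1, 9.3, 14.8, 11.4]

t = tem(day1, day2)
mean_value = sum(day1 + day2) / (2 * len(day1))
print(f"TEM = {t:.3f} mm, %TEM = {100 * t / mean_value:.1f} %")
```

A common acceptability benchmark for skinfolds is a %TEM below roughly 5% for beginners; the threshold actually used should be taken from the methodology itself.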
Koziej, Mateusz; Trybus, Marek; Mydłowska, Anna; Sałapa, Kinga; Gniadek, Maksymilian; Banach, Marta; Brudnicki, Jarosław
2018-02-01
The aims of this study were to translate the Michigan Hand Outcomes Questionnaire into the Polish language and to test the measurement properties of its quality criteria. A total of 120 patients with hand complaints completed the Polish Michigan Hand Outcomes Questionnaire and the Disabilities of the Arm, Shoulder, and Hand questionnaire on the first assessment, along with the grip test, pinch test, and pain score assessed using a visual analogue scale during activity. After 7 days, 76 patients completed the Michigan Hand Outcomes Questionnaire a second time. The Cronbach alpha of the Michigan Hand Outcomes Questionnaire subscales ranged from 0.79 to 0.96. The intraclass correlation coefficient varied from 0.82 to 0.97, and the Bland-Altman method indicated that the limits of agreement of the Michigan Hand Outcomes Questionnaire total score were -13.2 to 12.3 and -9.18 to 9.62 for the right and left hand, respectively. The construct validity revealed a moderate to strong correlation between every subscale of the Polish Michigan Hand Outcomes Questionnaire and the Disabilities of the Arm, Shoulder, and Hand questionnaire; the subscales also correlated with the grip test and the visual analogue scale, but not with the pinch test. The study demonstrated properties similar to those of the original version, validating the belief that the use of this questionnaire in medical practice in Poland is justified.
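The internal-consistency statistic reported above, Cronbach's alpha, is α = k/(k−1) · (1 − Σσ²_item / σ²_total). The sketch below applies it to simulated item responses (a common latent trait plus item noise); the data and subscale size are made up:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for a respondents x items score matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

# Simulated subscale: 120 respondents, 6 items driven by one latent trait
rng = np.random.default_rng(42)
trait = rng.normal(3.0, 1.0, size=(120, 1))
items = trait + rng.normal(0, 0.7, size=(120, 6))

alpha = cronbach_alpha(items)
print(round(alpha, 2))
```

Strongly inter-correlated items push alpha toward 1; the 0.79-0.96 range reported for the questionnaire indicates good to excellent internal consistency.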
Simondon, F; Khodja, H
1999-02-01
The measure of efficacy is optimally performed by randomized controlled trials. However, low specificity of the judgement criteria is known to bias the estimate downward, while low sensitivity increases the required sample size. A common technique for ensuring good specificity without a drop in sensitivity is to use several diagnostic tests in parallel, each of them being specific. This approach is similar to the more general situation of case-counting from multiple data sources, and this paper explores the application of the capture-recapture method to the analysis of estimates of efficacy. An illustration of this application is derived from a study on the efficacy of pertussis vaccines where the outcome was based on ≥21 days of cough confirmed by at least one of three criteria performed independently for each subject: bacteriology, serology, or epidemiological link. Log-linear methods were applied to these data considered as three sources of information. The best model included the three simple effects and an interaction term between bacteriology and epidemiological linkage. Among the 801 children experiencing ≥21 days of cough, it was estimated that 93 cases were missed, leading to a corrected total of 413 confirmed cases. The relative vaccine efficacy estimated from the same model was 1.50 (95% confidence interval: 1.24-1.82), similar to the crude estimate of 1.59 and confirming the better protection afforded by one of the two vaccines. This method provides a supporting analysis for the interpretation of primary estimates of vaccine efficacy.
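The study fits log-linear models to three sources, which also accommodates dependence between sources (the bacteriology-epidemiological link interaction). The simplest capture-recapture case, two independent sources, conveys the core idea of estimating missed cases; the counts below are illustrative, not the study's data:

```python
def chapman_estimate(n1: int, n2: int, m: int) -> float:
    """Chapman's nearly unbiased two-source capture-recapture estimator.

    n1, n2: cases identified by each source; m: cases identified by both.
    """
    return (n1 + 1) * (n2 + 1) / (m + 1) - 1

# Hypothetical counts: two diagnostic criteria confirm overlapping case sets
n_total = chapman_estimate(n1=300, n2=250, m=200)
observed = 300 + 250 - 200       # distinct cases actually seen by either source
print(f"estimated total = {n_total:.0f}, estimated missed = {n_total - observed:.0f}")
```

With three sources, the log-linear approach fits cell counts of the 2×2×2 ascertainment table (minus the unobserved cell) and projects the missing cell, which is how the 93 missed cases were estimated.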