WorldWideScience

Sample records for bivariate measurement error

  1. Fitting a Bivariate Measurement Error Model for Episodically Consumed Dietary Components

    KAUST Repository

    Zhang, Saijuan

    2011-01-06

    There has been great public health interest in estimating usual, i.e., long-term average, intake of episodically consumed dietary components that are not consumed daily by everyone, e.g., fish, red meat and whole grains. Short-term measurements of episodically consumed dietary components have zero-inflated skewed distributions. So-called two-part models have been developed for such data in order to correct for measurement error due to within-person variation and to estimate the distribution of usual intake of the dietary component in the univariate case. However, there is arguably much greater public health interest in the usual intake of an episodically consumed dietary component adjusted for energy (caloric) intake, e.g., ounces of whole grains per 1000 kilo-calories, which reflects usual dietary composition and adjusts for different total amounts of caloric intake. Because of this public health interest, it is important to have models to fit such data, and it is important that the model-fitting methods can be applied to all episodically consumed dietary components. We have recently developed a nonlinear mixed effects model (Kipnis et al., 2010), and have fit it by maximum likelihood using nonlinear mixed effects programs and methodology (the SAS NLMIXED procedure). Maximum likelihood fitting of such a nonlinear mixed model is generally slow because of 3-dimensional adaptive Gaussian quadrature, and there are times when the programs either fail to converge or converge to models with a singular covariance matrix. For these reasons, we develop a Markov chain Monte Carlo (MCMC) computation of fitting this model, which allows for both frequentist and Bayesian inference. There are technical challenges to developing this solution because one of the covariance matrices in the model is patterned. Our main application is to the National Institutes of Health (NIH)-AARP Diet and Health Study, where we illustrate our methods for modeling the energy-adjusted usual intake of fish and whole
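
    A rough sketch of the two-part idea on simulated data (all variable names and parameter values here are illustrative assumptions; this is not the authors' NLMIXED or MCMC implementation): part one models the probability of consumption on a given day, part two models the skewed positive amount given consumption, and the two combine into an estimate of mean usual intake.

```python
# Sketch only: zero-inflated, skewed intake data and a naive two-part fit.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 5000
usual = rng.lognormal(mean=0.0, sigma=0.5, size=n)     # latent usual intake
p_consume = 1 / (1 + np.exp(-(np.log(usual) + 0.5)))   # consumption probability
consumed = rng.random(n) < p_consume                   # zero-inflation part
amount = np.where(consumed, rng.lognormal(np.log(usual), 0.8), 0.0)

p_hat = consumed.mean()                                    # part 1: P(consume)
mu_hat, sd_hat = stats.norm.fit(np.log(amount[consumed]))  # part 2: log-amounts
mean_intake = p_hat * np.exp(mu_hat + sd_hat**2 / 2)       # lognormal mean
print(f"P(consume) = {p_hat:.3f}, estimated mean usual intake = {mean_intake:.3f}")
```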

  2. Recursively determined representing measures for bivariate truncated moment sequences

    CERN Document Server

    Curto, Raul E

    2012-01-01

    A theorem of Bayer and Teichmann implies that if a finite real multisequence β = β^(2d) has a representing measure, then the associated moment matrix M_d admits positive, recursively generated moment matrix extensions M_(d+1), M_(d+2), ... For a bivariate recursively determinate M_d, we show that the existence of positive, recursively generated extensions M_(d+1), ..., M_(2d-1) is sufficient for a measure. Examples illustrate that all of these extensions may be required to show that β has a measure. We describe in detail a constructive procedure for determining whether such extensions exist. Under mild additional hypotheses, we show that M_d admits an extension M_(d+1) which has many of the properties of a positive, recursively generated extension.
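
    A small numerical illustration of the objects involved (my own construction, not the paper's recursive procedure): for an atomic measure, the bivariate moment matrix M_d is assembled from the moments β and must be positive semidefinite.

```python
# Sketch: build M_d for a finite atomic measure and check positive semidefiniteness.
import numpy as np

def moment_matrix(points, weights, d):
    # Monomials x^i y^j with i + j <= d, listed in degree order.
    monos = [(i, s - i) for s in range(d + 1) for i in range(s + 1)]
    x, y = np.asarray(points, dtype=float).T
    w = np.asarray(weights, dtype=float)
    M = np.empty((len(monos), len(monos)))
    for r, (i1, j1) in enumerate(monos):
        for c, (i2, j2) in enumerate(monos):
            # Entry is the moment beta_(a+b) of the atomic measure.
            M[r, c] = np.sum(w * x**(i1 + i2) * y**(j1 + j2))
    return M

pts, wts = [(0.0, 0.0), (1.0, 0.5), (-0.5, 1.0)], [0.2, 0.5, 0.3]
for d in (1, 2):
    eigs = np.linalg.eigvalsh(moment_matrix(pts, wts, d))
    print(f"M_{d}: min eigenvalue = {eigs.min():.2e}")  # >= 0 up to roundoff
```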

  3. Error Estimates of Fitting for Bivariate Fractal Interpolation

    Institute of Scientific and Technical Information of China (English)

    王宏勇

    2009-01-01

    A given bivariate continuous function is fitted by using a bivariate fractal interpolation function, and the error of fitting is studied in this paper. The results of error estimates are obtained in two metric cases. This provides a theoretical basis for the algorithms of fractal surface reconstruction.

  4. Characterizations of bivariate models using dynamic Kullback-Leibler discrimination measures

    OpenAIRE

    Navarro, J.; S. M. Sunoj; Linu, M.N.

    2011-01-01

    In this paper the residual Kullback-Leibler discrimination information measure is extended to conditionally specified models. The extension is used to characterize some bivariate distributions. These distributions are also characterized in terms of proportional hazard rate models and weighted distributions. Moreover, we also obtain some bounds for this dynamic discrimination function by using the likelihood ratio order and some preceding results.

  5. Measuring early or late dependence for bivariate lifetimes of twins

    DEFF Research Database (Denmark)

    Scheike, Thomas; Holst, Klaus K; Hjelmborg, Jacob B

    2015-01-01

    We consider data from the Danish twin registry and aim to study in detail how lifetimes for twin-pairs are correlated. We consider models where we specify the marginals using a regression structure, here Cox's regression model or the additive hazards model. The best known such model is the Clayton...... procedures are applied to Danish twin data to describe dependence in the lifetimes of the twins. Here we show that the early deaths are more correlated than the later deaths, and by comparing MZ and DZ associations we suggest that early deaths might be more driven by genetic factors. This conclusion requires...... models that are able to look at more local dependence measures. We further show that the dependence differs for MZ and DZ twins and appears to be the same for males and females, and that there are indications that the dependence increases over calendar time....

  6. Payment Error Rate Measurement (PERM)

    Data.gov (United States)

    U.S. Department of Health & Human Services — The PERM program measures improper payments in Medicaid and CHIP and produces error rates for each program. The error rates are based on reviews of the...

  7. Measurement Error Models in Astronomy

    CERN Document Server

    Kelly, Brandon C

    2011-01-01

    I discuss the effects of measurement error on regression and density estimation. I review the statistical methods that have been developed to correct for measurement error that are most popular in astronomical data analysis, discussing their advantages and disadvantages. I describe functional models for accounting for measurement error in regression, with emphasis on the methods of moments approach and the modified loss function approach. I then describe structural models for accounting for measurement error in regression and density estimation, with emphasis on maximum-likelihood and Bayesian methods. As an example of a Bayesian application, I analyze an astronomical data set subject to large measurement errors and a non-linear dependence between the response and covariate. I conclude with some directions for future research.
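
    The method-of-moments correction mentioned above can be sketched in a few lines (simulated data; treating the error variance var_u as known is an assumption of the sketch): the naive slope is attenuated by covariate measurement error, and subtracting the error variance from the observed covariate variance corrects it.

```python
# Sketch: attenuation of a regression slope by covariate measurement error,
# and the moment-based correction using a known error variance.
import numpy as np

rng = np.random.default_rng(1)
n, var_u = 2000, 0.5**2
x = rng.normal(0, 1, n)                        # true covariate
y = 2.0 * x + rng.normal(0, 0.3, n)            # response, true slope = 2
w = x + rng.normal(0, np.sqrt(var_u), n)       # covariate observed with error

naive = np.cov(w, y)[0, 1] / np.var(w, ddof=1)
corrected = np.cov(w, y)[0, 1] / (np.var(w, ddof=1) - var_u)
print(f"naive = {naive:.3f}, corrected = {corrected:.3f}, truth = 2.0")
```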

  8. Redundant measurements for controlling errors

    Energy Technology Data Exchange (ETDEWEB)

    Ehinger, M. H.; Crawford, J. M.; Madeen, M. L.

    1979-07-01

    Current federal regulations for nuclear materials control require consideration of operating data as part of the quality control program and limits of error propagation. Recent work at the BNFP has revealed that operating data are subject to a number of measurement problems which are very difficult to detect and even more difficult to correct in a timely manner. Thus error estimates based on operational data reflect those problems. During the FY 1978 and FY 1979 R and D demonstration runs at the BNFP, redundant measurement techniques were shown to be effective in detecting these problems to allow corrective action. The net effect is a reduction in measurement errors and a significant increase in measurement sensitivity. Results show that normal operation process control measurements, in conjunction with routine accountability measurements, are sensitive problem indicators when incorporated in a redundant measurement program.

  9. Errors in Chemical Sensor Measurements

    Directory of Open Access Journals (Sweden)

    Artur Dybko

    2001-06-01

    Various types of errors during the measurements of ion-selective electrodes, ion-sensitive field effect transistors, and fibre optic chemical sensors are described. The errors were divided according to their nature and place of origin into chemical, instrumental and non-chemical. The influence of interfering ions, leakage of the membrane components, liquid junction potential as well as sensor wiring, ambient light and temperature is presented.

  10. Measuring Test Measurement Error: A General Approach

    Science.gov (United States)

    Boyd, Donald; Lankford, Hamilton; Loeb, Susanna; Wyckoff, James

    2013-01-01

    Test-based accountability as well as value-added assessments and much experimental and quasi-experimental research in education rely on achievement tests to measure student skills and knowledge. Yet, we know little regarding fundamental properties of these tests, an important example being the extent of measurement error and its implications for…

  11. Correction of errors in power measurements

    DEFF Research Database (Denmark)

    Pedersen, Knud Ole Helgesen

    1998-01-01

    Small errors in voltage and current measuring transformers cause inaccuracies in power measurements. In this report correction factors are derived to compensate for such errors.
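
    A numerical sketch of the kind of correction factor involved (the error model below, with ratio errors and phase displacements for the voltage and current transformers, and all values are my assumptions, not the report's derivation):

```python
# Sketch: transformer ratio errors and phase displacements bias active power;
# a correction factor recovers the true value.
import math

U, I, phi = 230.0, 10.0, math.radians(30)    # true voltage, current, phase angle
eps_u, eps_i = 0.004, -0.003                 # assumed ratio errors (per unit)
delta_u, delta_i = math.radians(0.05), math.radians(-0.08)  # phase displacements

p_true = U * I * math.cos(phi)
p_meas = U * (1 + eps_u) * I * (1 + eps_i) * math.cos(phi + delta_i - delta_u)
k = p_true / p_meas                          # correction factor for this setup
print(f"measured = {p_meas:.2f} W, corrected = {k * p_meas:.2f} W, true = {p_true:.2f} W")
```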

  12. Offset Error Compensation in Roundness Measurement

    Institute of Scientific and Technical Information of China (English)

    朱喜林; 史俊; 李晓梅

    2004-01-01

    This paper analyses three causes of offset error in roundness measurement and presents corresponding compensation methods. The causes of offset error include excursion error resulting from the deflection of the sensor's line of measurement from the rotational center in measurement (datum center), eccentricity error resulting from the variance between the workpiece's geometrical center and the rotational center, and tilt error resulting from the tilt between the workpiece's geometrical axes and the rotational centerline.

  13. Improved Error Thresholds for Measurement-Free Error Correction

    Science.gov (United States)

    Crow, Daniel; Joynt, Robert; Saffman, M.

    2016-09-01

    Motivated by limitations and capabilities of neutral atom qubits, we examine whether measurement-free error correction can produce practical error thresholds. We show that this can be achieved by extracting redundant syndrome information, giving our procedure extra fault tolerance and eliminating the need for ancilla verification. The procedure is particularly favorable when multiqubit gates are available for the correction step. Simulations of the bit-flip, Bacon-Shor, and Steane codes indicate that coherent error correction can produce threshold error rates that are on the order of 10^-3 to 10^-4—comparable with or better than measurement-based values, and much better than previous results for other coherent error correction schemes. This indicates that coherent error correction is worthy of serious consideration for achieving protected logical qubits.

  14. Measurement Error and Equating Error in Power Analysis

    Science.gov (United States)

    Phillips, Gary W.; Jiang, Tao

    2016-01-01

    Power analysis is a fundamental prerequisite for conducting scientific research. Without power analysis the researcher has no way of knowing whether the sample size is large enough to detect the effect he or she is looking for. This paper demonstrates how psychometric factors such as measurement error and equating error affect the power of…

  15. Measurement error in a single regressor

    NARCIS (Netherlands)

    Meijer, H.J.; Wansbeek, T.J.

    2000-01-01

    For the setting of multiple regression with measurement error in a single regressor, we present some very simple formulas to assess the result that one may expect when correcting for measurement error. It is shown where the corrected estimated regression coefficients and the error variance may lie,

  16. Impact of Measurement Error on Synchrophasor Applications

    Energy Technology Data Exchange (ETDEWEB)

    Liu, Yilu [Univ. of Tennessee, Knoxville, TN (United States); Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Gracia, Jose R. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Ewing, Paul D. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Zhao, Jiecheng [Univ. of Tennessee, Knoxville, TN (United States); Tan, Jin [Univ. of Tennessee, Knoxville, TN (United States); Wu, Ling [Univ. of Tennessee, Knoxville, TN (United States); Zhan, Lingwei [Univ. of Tennessee, Knoxville, TN (United States)

    2015-07-01

    Phasor measurement units (PMUs), which provide synchronized phasor measurements, are powerful diagnostic tools that can help avert catastrophic failures in the power grid. Because of this, PMU measurement errors are particularly worrisome. This report examines the internal and external factors contributing to PMU phase angle and frequency measurement errors and gives a reasonable explanation for them. It also analyzes the impact of those measurement errors on several synchrophasor applications: event location detection, oscillation detection, islanding detection, and dynamic line rating. The primary finding is that dynamic line rating is the application most likely to be influenced by measurement error. Other findings include the possibility of reporting nonoscillatory activity as an oscillation as the result of error, failing to detect oscillations submerged by error, and the unlikely impact of error on event location and islanding detection.

  17. Bivariate analysis of basal serum anti-Mullerian hormone measurements and human blastocyst development after IVF

    LENUS (Irish Health Repository)

    Sills, E Scott

    2011-12-02

    Background: To report on relationships among baseline serum anti-Müllerian hormone (AMH) measurements, blastocyst development and other selected embryology parameters observed in non-donor oocyte IVF cycles. Methods: Pre-treatment AMH was measured in patients undergoing IVF (n = 79) and retrospectively correlated to in vitro embryo development noted during culture. Results: Mean (± SD) age for study patients in this study group was 36.3 ± 4.0 (range = 28-45) yrs, and mean (± SD) terminal serum estradiol during IVF was 5929 ± 4056 pmol/l. A moderate positive correlation (0.49; 95% CI 0.31 to 0.65) was noted between basal serum AMH and number of MII oocytes retrieved. Similarly, a moderate positive correlation (0.44) was observed between serum AMH and number of early cleavage-stage embryos (95% CI 0.24 to 0.61), suggesting a relationship between serum AMH and embryo development in IVF. Of note, serum AMH levels at baseline were significantly different for patients who did and did not undergo blastocyst transfer (15.6 vs. 10.9 pmol/l; p = 0.029). Conclusions: While serum AMH has found increasing application as a predictor of ovarian reserve for patients prior to IVF, its roles to estimate in vitro embryo morphology and potential to advance to blastocyst stage have not been extensively investigated. These data suggest that baseline serum AMH determinations can help forecast blastocyst development during IVF. Serum AMH measured before treatment may assist patients, clinicians and embryologists as scheduling of embryo transfer is outlined. Additional studies are needed to confirm these correlations and to better define the role of baseline serum AMH level in the prediction of blastocyst formation.

  18. POTASSIUM MEASUREMENT: CAUSES OF ERRORS IN MEASUREMENT

    Directory of Open Access Journals (Sweden)

    Kavitha

    2014-07-01

    It is not an easy task to recognize errors in potassium measurement in the laboratory. If falsely elevated potassium levels go unrecognized by the laboratory and the clinician, it is difficult to treat the masked hypokalemic state, which is itself a medical emergency. Such cases require proper monitoring by the clinician, so that cases with a history of pseudohyperkalemia, which cannot be easily identified in the laboratory, do not go unrecognized. The aim of this article is to discuss the causes and mechanisms of spuriously elevated potassium and to minimize the factors causing pseudohyperkalemia. A literature search was performed on PubMed using the terms "pseudohyperkalemia", "spurious hyperkalemia", "masked hyperkalemia", "reverse pseudohyperkalemia" and "factitious hyperkalemia".

  19. Errors and Uncertainty in Physics Measurement.

    Science.gov (United States)

    Blasiak, Wladyslaw

    1983-01-01

    Classifies errors as either systematic or blunder and uncertainties as either systematic or random. Discusses use of error/uncertainty analysis in direct/indirect measurement, describing the process of planning experiments to ensure lowest possible uncertainty. Also considers appropriate level of error analysis for high school physics students'…

  20. Assigning error to an M2 measurement

    Science.gov (United States)

    Ross, T. Sean

    2006-02-01

    The ISO 11146:1999 standard has been published for 6 years and set forth the proper way to measure the M2 parameter. In spite of the strong experimental guidance given by this standard and the many commercial devices based upon ISO 11146, it is still the custom to quote M2 measurements without any reference to significant figures or error estimation. To the author's knowledge, no commercial M2 measurement device includes error estimation. There exists, perhaps, a false belief that M2 numbers are high precision and of insignificant error. This paradigm causes program managers and purchasers to over-specify a beam quality parameter and researchers not to question the accuracy and precision of their M2 measurements. This paper will examine the experimental sources of error in an M2 measurement, including discretization error, CCD noise, discrete filter sets, noise equivalent aperture estimation, laser fluctuation and curve fitting error. These sources of error will be explained in their experimental context, and convenient formulas are given to properly estimate the error in a given M2 measurement. This work is the result of the author's inability to find error estimation and disclosure of methods in commercial beam quality measurement devices, of building an ISO 11146-compliant, computer-automated M2 measurement device, and of the resulting lessons learned and concepts developed.
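
    A minimal sketch of one way such independent contributions are commonly combined into a single uncertainty (the source names follow the list above, but the magnitudes are placeholders, not values from the paper):

```python
# Sketch: combine independent relative error contributions in quadrature.
import math

sources = {
    "discretization": 0.01,
    "CCD noise": 0.015,
    "discrete filter set": 0.01,
    "noise equivalent aperture": 0.02,
    "laser fluctuation": 0.01,
    "curve fitting": 0.02,
}
rel_err = math.sqrt(sum(v**2 for v in sources.values()))
m2 = 1.8                                     # hypothetical measured M2
print(f"M2 = {m2:.2f} +/- {m2 * rel_err:.2f} ({100 * rel_err:.1f}%)")
```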

  1. Quantifying and handling errors in instrumental measurements using the measurement error theory

    DEFF Research Database (Denmark)

    Andersen, Charlotte Møller; Bro, R.; Brockhoff, P.B.

    2003-01-01

    Measurement error modelling is used for investigating the influence of measurement/sampling error on univariate predictions of water content and water-holding capacity (reference measurement) from nuclear magnetic resonance (NMR) relaxations (instrumental) measured on two gadoid fish species....... This is a new way of using the measurement error theory. Reliability ratios illustrate that the models for the two fish species are influenced differently by the error. However, the error seems to influence the predictions of the two reference measures in the same way. The effect of using replicated x-measurements is illustrated by simulated data and by NMR relaxations measured several times on each fish. The standard error of the physical determination of the reference values is lower than the standard error of the NMR measurements. In this case, lower prediction error is obtained by replicating the instrumental...
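
    A minimal sketch of the replication idea on simulated data (names and values are illustrative, not the NMR data): the within-sample variance of replicated measurements estimates the error variance, from which a reliability ratio follows.

```python
# Sketch: estimate error variance and a reliability ratio from replicates.
import numpy as np

rng = np.random.default_rng(2)
n_samples, n_rep = 50, 3
true = rng.normal(80, 5, n_samples)            # e.g. water content (%)
obs = true[:, None] + rng.normal(0, 2, (n_samples, n_rep))  # replicated readings

var_within = obs.var(axis=1, ddof=1).mean()    # measurement error variance
means = obs.mean(axis=1)
var_true = means.var(ddof=1) - var_within / n_rep
reliability = var_true / (var_true + var_within)   # for a single measurement
# Averaging n_rep replicates raises it to var_true / (var_true + var_within/n_rep).
print(f"error variance = {var_within:.2f}, reliability ratio = {reliability:.2f}")
```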

  2. Prediction with measurement errors in finite populations.

    Science.gov (United States)

    Singer, Julio M; Stanek, Edward J; Lencina, Viviana B; González, Luz Mery; Li, Wenjun; Martino, Silvina San

    2012-02-01

    We address the problem of selecting the best linear unbiased predictor (BLUP) of the latent value (e.g., serum glucose fasting level) of sample subjects with heteroskedastic measurement errors. Using a simple example, we compare the usual mixed model BLUP to a similar predictor based on a mixed model framed in a finite population (FPMM) setup with two sources of variability, the first of which corresponds to simple random sampling and the second, to heteroskedastic measurement errors. Under this last approach, we show that when measurement errors are subject-specific, the BLUP shrinkage constants are based on a pooled measurement error variance, as opposed to the individual ones generally considered for the usual mixed model BLUP. In contrast, when the heteroskedastic measurement errors are measurement condition-specific, the FPMM BLUP involves different shrinkage constants. We also show that in this setup, when measurement errors are subject-specific, the usual mixed model predictor is biased but has a smaller mean squared error than the FPMM BLUP, which points to some difficulties in the interpretation of such predictors.
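
    The contrast between the two shrinkage choices can be sketched as follows (illustrative values, not the paper's finite-population derivation; the mean and variances are assumed known here):

```python
# Sketch: subject-specific vs pooled shrinkage for heteroskedastic errors.
import numpy as np

rng = np.random.default_rng(3)
n = 8
mu, sigma2_b = 100.0, 4.0                     # overall mean, between-subject var
sigma2_e = rng.uniform(0.5, 6.0, n)           # subject-specific error variances
latent = mu + rng.normal(0, np.sqrt(sigma2_b), n)   # latent values (e.g. glucose)
y = latent + rng.normal(0, np.sqrt(sigma2_e))       # error-prone measurements

k_subject = sigma2_b / (sigma2_b + sigma2_e)        # subject-specific shrinkage
k_pooled = sigma2_b / (sigma2_b + sigma2_e.mean())  # pooled error variance
blup_subject = mu + k_subject * (y - mu)
blup_pooled = mu + k_pooled * (y - mu)
print(np.round(blup_subject - blup_pooled, 2))      # the two predictors differ
```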

  3. Measurement Errors and Uncertainties Theory and Practice

    CERN Document Server

    Rabinovich, Semyon G

    2006-01-01

    Measurement Errors and Uncertainties addresses the most important problems that physicists and engineers encounter when estimating errors and uncertainty. Building from the fundamentals of measurement theory, the author develops the theory of accuracy of measurements and offers a wealth of practical recommendations and examples of applications. This new edition covers a wide range of subjects, including: - Basic concepts of metrology - Measuring instruments characterization, standardization and calibration - Estimation of errors and uncertainty of single and multiple measurements - Modern probability-based methods of estimating measurement uncertainty With this new edition, the author completes the development of the new theory of indirect measurements. This theory provides more accurate and efficient methods for processing indirect measurement data. It eliminates the need to calculate the correlation coefficient - a stumbling block in measurement data processing - and offers for the first time a way to obtain...

  4. Assessing Measurement Error in Medicare Coverage

    Data.gov (United States)

    U.S. Department of Health & Human Services — Assessing Measurement Error in Medicare Coverage From the National Health Interview Survey Using linked administrative data, to validate Medicare coverage estimates...

  5. Measurement error in longitudinal film badge data

    CERN Document Server

    Marsh, J L

    2002-01-01

    Initial logistic regressions turned up some surprising contradictory results which led to a re-sampling of Sellafield mortality controls without the date of employment matching factor. It is suggested that over matching is the cause of the contradictory results. Comparisons of the two measurements of radiation exposure suggest a strongly linear relationship with non-Normal errors. A method has been developed using the technique of Regression Calibration to deal with these in a case-control study context, and applied to this Sellafield study. The classical measurement error model is that of a simple linear regression with unobservable variables. Information about the covariates is available only through error-prone measurements, usually with an additive structure. Ignoring errors has been shown to result in biased regression coefficients, reduced power of hypothesis tests and increased variability of parameter estimates. Radiation is known to be a causal factor for certain types of leukaemia. This link is main...

  6. Gear Transmission Error Measurement System Made Operational

    Science.gov (United States)

    Oswald, Fred B.

    2002-01-01

    A system directly measuring the transmission error between meshing spur or helical gears was installed at the NASA Glenn Research Center and made operational in August 2001. This system employs light beams directed by lenses and prisms through gratings mounted on the two gear shafts. The amount of light that passes through both gratings is directly proportional to the transmission error of the gears. The device is capable of resolution better than 0.1 µm (one thousandth the thickness of a human hair). The measured transmission error can be displayed in a "map" that shows how the transmission error varies with the gear rotation, or it can be converted to spectra to show the components at the meshing frequencies. Accurate transmission error data will help researchers better understand the mechanisms that cause gear noise and vibration. The Design Unit at the University of Newcastle in England specifically designed the new system for NASA. It is the only device in the United States that can measure dynamic transmission error at high rotational speeds. The new system will be used to develop new techniques to reduce dynamic transmission error along with the resulting noise and vibration of aeronautical transmissions.

  7. Errors of measurement by laser goniometer

    Science.gov (United States)

    Agapov, Mikhail Y.; Bournashev, Milhail N.

    2000-11-01

    The report is dedicated to research of systematic errors of angle measurement by a dynamic laser goniometer (DLG) on the basis of a ring laser (RL), intended for certification of optical angle encoders (OE), and to the development of methods for separating errors of different types and compensating for them algorithmically. The OE was an absolute photoelectric angle encoder with an informational capacity of 14 bits. Kinematic connection with a rotary platform was made through a mechanical connection unit (CU). The measurement and separation of the systematic error into components was carried out by applying a method of cross-calibration with mutual rotations of the OE relative to the DLG base and of the CU relative to the OE rotor. Fourier analysis of the observed data was then performed. Dynamic errors of angle measurement were investigated using the dependence on angular rotation rate of the measured angle between a reference direction, assigned by an interference null-indicator (NI) with an 8-faced optical polygon (OP), and the direction defined by means of the OE. The obtained results allow algorithmic compensation of the systematic error and, in total, a considerable reduction of the overall measurement error.

  8. Statistical error analysis of reactivity measurement

    Energy Technology Data Exchange (ETDEWEB)

    Thammaluckan, Sithisak; Hah, Chang Joo [KEPCO International Nuclear Graduate School, Ulsan (Korea, Republic of)

    2013-10-15

    After statistical analysis, it was confirmed that each group was sampled from the same population. It is observed in Table 7 that the mean error decreases as core size increases. Application of the bias factor obtained from this research reduces the mean error further. The point kinetics model had been used to measure control rod worth without 3D spatial information of the neutron flux or power distribution, which causes inaccurate results. Dynamic Control rod Reactivity Measurement (DCRM) was employed to take into account the 3D spatial information of the flux in the point kinetics model. The measured bank worth probably contains some uncertainty, such as methodology uncertainty and measurement uncertainty. Those uncertainties may vary with the size of the core and the magnitude of the reactivity. The goal of this research is to investigate the effect of core size and magnitude of control rod worth on the error of reactivity measurement using statistics.

  9. Neutron multiplication error in TRU waste measurements

    Energy Technology Data Exchange (ETDEWEB)

    Veilleux, John [Los Alamos National Laboratory; Stanfield, Sean B [CCP; Wachter, Joe [CCP; Ceo, Bob [CCP

    2009-01-01

    Total Measurement Uncertainty (TMU) in neutron assays of transuranic (TRU) waste is comprised of several components including counting statistics, matrix and source distribution, calibration inaccuracy, background effects, and neutron multiplication error. While a minor component for low plutonium masses, neutron multiplication error is often the major contributor to the TMU for items containing more than 140 g of weapons grade plutonium. Neutron multiplication arises when neutrons from spontaneous fission and other nuclear events induce fissions in other fissile isotopes in the waste, thereby multiplying the overall coincidence neutron response in passive neutron measurements. Since passive neutron counters cannot differentiate between spontaneous and induced fission neutrons, multiplication can lead to positive bias in the measurements. Although neutron multiplication can only result in a positive bias, it has, for the purpose of mathematical simplicity, generally been treated as an error that can lead to either a positive or negative result in the TMU. While the factors that contribute to neutron multiplication include the total mass of fissile nuclides, the presence of moderating material in the matrix, the concentration and geometry of the fissile sources, and other factors, measurement uncertainty is generally determined as a function of the fissile mass in most TMU software calculations because this is the only quantity determined by the passive neutron measurement. Neutron multiplication error has a particularly pernicious consequence for TRU waste analysis because the measured Fissile Gram Equivalent (FGE) plus twice the TMU error must be less than 200 for TRU waste packaged in 55-gal drums and less than 325 for boxed waste. For this reason, large errors due to neutron multiplication can lead to increased rejections of TRU waste containers. This report will attempt to better define the error term due to neutron multiplication and arrive at values that are

  10. Algorithmic Error Correction of Impedance Measuring Sensors

    Directory of Open Access Journals (Sweden)

    Vira Tyrsa

    2009-12-01

    This paper describes novel design concepts and some advanced techniques proposed for increasing the accuracy of low cost impedance measuring devices without reduction of operational speed. The proposed structural method for algorithmic error correction and iterating correction method provide linearization of transfer functions of the measuring sensor and signal conditioning converter, which contribute the principal additive and relative measurement errors. Some measuring systems have been implemented in order to estimate in practice the performance of the proposed methods. Particularly, a measuring system for analysis of C-V, G-V characteristics has been designed and constructed. It has been tested during technological process control of charge-coupled device CCD manufacturing. The obtained results are discussed in order to define a reasonable range of applied methods, their utility, and performance.

  11. Algorithmic Error Correction of Impedance Measuring Sensors

    Science.gov (United States)

    Starostenko, Oleg; Alarcon-Aquino, Vicente; Hernandez, Wilmar; Sergiyenko, Oleg; Tyrsa, Vira

    2009-01-01

    This paper describes novel design concepts and some advanced techniques proposed for increasing the accuracy of low cost impedance measuring devices without reduction of operational speed. The proposed structural method for algorithmic error correction and iterating correction method provide linearization of transfer functions of the measuring sensor and signal conditioning converter, which contribute the principal additive and relative measurement errors. Some measuring systems have been implemented in order to estimate in practice the performance of the proposed methods. Particularly, a measuring system for analysis of C-V, G-V characteristics has been designed and constructed. It has been tested during technological process control of charge-coupled device CCD manufacturing. The obtained results are discussed in order to define a reasonable range of applied methods, their utility, and performance. PMID:22303177

  12. New Gear Transmission Error Measurement System Designed

    Science.gov (United States)

    Oswald, Fred B.

    2001-01-01

    The prime source of vibration and noise in a gear system is the transmission error between the meshing gears. Transmission error is caused by manufacturing inaccuracy, mounting errors, and elastic deflections under load. Gear designers often attempt to compensate for transmission error by modifying gear teeth. This is done traditionally by a rough "rule of thumb" or more recently under the guidance of an analytical code. In order for a designer to have confidence in a code, the code must be validated through experiment. NASA Glenn Research Center contracted with the Design Unit of the University of Newcastle in England for a system to measure the transmission error of spur and helical test gears in the NASA Gear Noise Rig. The new system measures transmission error optically by means of light beams directed by lenses and prisms through gratings mounted on the gear shafts. The amount of light that passes through both gratings is directly proportional to the transmission error of the gears. A photodetector circuit converts the light to an analog electrical signal. To increase accuracy and reduce "noise" due to transverse vibration, there are parallel light paths at the top and bottom of the gears. The two signals are subtracted via differential amplifiers in the electronics package. The output of the system is 40 mV/µm, giving a resolution in the time domain of better than 0.1 µm, and discrimination in the frequency domain of better than 0.01 µm. The new system will be used to validate gear analytical codes and to investigate mechanisms that produce vibration and noise in parallel axis gears.

  13. Improving Localization Accuracy: Successive Measurements Error Modeling

    Directory of Open Access Journals (Sweden)

    Najah Abu Ali

    2015-07-01

    Vehicle self-localization is an essential requirement for many of the safety applications envisioned for vehicular networks. The mathematical models used in current vehicular localization schemes focus on modeling the localization error itself, and overlook the potential correlation between successive localization measurement errors. In this paper, we first investigate the existence of correlation between successive positioning measurements, and then incorporate this correlation into the modeling of the positioning error. We use the Yule–Walker equations to determine the degree of correlation between a vehicle's future position and its past positions, and then propose a higher-order Gauss–Markov model to predict the future position of a vehicle from its past positions. We investigate the existence of correlation for two datasets representing the mobility traces of two vehicles over a period of time. We prove the existence of correlation between successive measurements in the two datasets, and show that the time correlation between measurements can have a value of up to four minutes. Through simulations, we validate the robustness of our model and show that it is possible to use the first-order Gauss–Markov model, which has the least complexity, and still maintain an accurate estimation of a vehicle's future location over time using only its current position. Our model can assist in providing better modeling of positioning errors and can be used as a prediction tool to improve the performance of classical localization algorithms such as the Kalman filter.
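
    A minimal sketch of the first-order case described above, on a synthetic AR(1) positioning-error series (the coefficient and noise level are assumptions): for order one, the Yule–Walker relation reduces to the lag-1 autocorrelation, which then drives the prediction.

```python
# Sketch: first-order Gauss-Markov prediction of the next positioning error.
import numpy as np

rng = np.random.default_rng(4)
T, a_true = 500, 0.9
err = np.zeros(T)
for t in range(1, T):                   # synthetic AR(1) error process
    err[t] = a_true * err[t - 1] + rng.normal(0, 1.0)

e = err - err.mean()
a_hat = (e[1:] @ e[:-1]) / (e @ e)      # Yule-Walker, order 1: a = r(1)/r(0)
pred_next = a_hat * err[-1]             # predicted next positioning error
print(f"a_hat = {a_hat:.3f}, predicted next error = {pred_next:.3f}")
```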

  14. Ordinal Bivariate Inequality

    DEFF Research Database (Denmark)

    Sonne-Schmidt, Christoffer Scavenius; Tarp, Finn; Østerdal, Lars Peter Raahave

    2016-01-01

    This paper introduces a concept of inequality comparisons with ordinal bivariate categorical data. In our model, one population is more unequal than another when they have common arithmetic median outcomes and the first can be obtained from the second by correlation-increasing switches and...

  15. Nonclassical measurements errors in nonlinear models

    DEFF Research Database (Denmark)

    Madsen, Edith; Mulalic, Ismir

    Discrete choice models and in particular logit type models play an important role in understanding and quantifying individual or household behavior in relation to transport demand. An example is the choice of travel mode for a given trip under the budget and time restrictions that the individuals...... estimates of the income effect it is of interest to investigate the magnitude of the estimation bias and if possible use estimation techniques that take the measurement error problem into account. We use data from the Danish National Travel Survey (NTS) and merge it with administrative register data...... of a households face. In this case an important policy parameter is the effect of income (reflecting the household budget) on the choice of travel mode. This paper deals with the consequences of measurement error in income (an explanatory variable) in discrete choice models. Since it is likely to give misleading...

  16. Bivariate value-at-risk

    Directory of Open Access Journals (Sweden)

    Giuseppe Arbia

    2007-10-01

    In this paper we extend the concept of value-at-risk (VaR) to bivariate return distributions in order to obtain measures of the market risk of an asset taking into account additional features linked to downside risk exposure. We first present a general definition of risk as the probability of an adverse event over a random distribution, and we then introduce a measure of market risk (β-VaR) that admits the traditional β of an asset in portfolio management as a special case when asset returns are normally distributed. Empirical evidence is provided by using Italian stock market data.
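
    The ingredients the paper builds on can be computed directly. The sketch below simulates bivariate normal asset/market returns (normality is the special case mentioned above; the covariance values are illustrative assumptions) and computes the 5% VaR of the asset and its traditional β.

```python
# Sketch: 5% value-at-risk and traditional beta from a bivariate return sample.
import numpy as np

rng = np.random.default_rng(5)
cov = [[0.04, 0.018], [0.018, 0.02]]       # assumed asset/market covariance
asset, market = rng.multivariate_normal([0.06, 0.05], cov, 100_000).T

var_95 = -np.quantile(asset, 0.05)         # loss exceeded with only 5% probability
beta = np.cov(asset, market)[0, 1] / np.var(market, ddof=1)
print(f"95% VaR = {var_95:.3f}, beta = {beta:.3f}")
```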

  17. Measurement error in longitudinal film badge data

    Energy Technology Data Exchange (ETDEWEB)

    Marsh, J.L

    2002-04-01

    The classical measurement error model is that of a simple linear regression with unobservable variables. Information about the covariates is available only through error-prone measurements, usually with an additive structure. Ignoring errors has been shown to result in biased regression coefficients, reduced power of hypothesis tests and increased variability of parameter estimates. Radiation is known to be a causal factor for certain types of leukaemia. This link is mainly substantiated by the Atomic Bomb Survivor study, the Ankylosing Spondylitis Patients study, and studies of various other patients irradiated for therapeutic purposes. The carcinogenic relationship is believed to be a linear or quadratic function of dose but the risk estimates differ widely for the different studies. Previous cohort studies of the Sellafield workforce have used the cumulative annual exposure data for their risk estimates. The current 1:4 matched case-control study also uses the individual worker's film badge data, the majority of which has been unavailable in computerised form. The results from the 1:4 matched (on dates of birth and employment, sex and industrial status) case-control study are compared and contrasted with those for a 1:4 nested (within the worker cohort and matched on the same factors) case-control study using annual doses. The data consist of 186 cases and 744 controls from the work forces of four BNFL sites: Springfields, Sellafield, Capenhurst and Chapelcross. Initial logistic regressions turned up some surprising contradictory results which led to a re-sampling of Sellafield mortality controls without the date of employment matching factor. It is suggested that over matching is the cause of the contradictory results. Comparisons of the two measurements of radiation exposure suggest a strongly linear relationship with non-Normal errors. A method has been developed using the technique of Regression Calibration to deal with these in a case-control study
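
    A minimal sketch of the regression calibration step described above, assuming a simple additive error model with known error variance (illustrative only, not the Sellafield dose data):

```python
# Sketch: regression calibration replaces the error-prone dose W with E[X | W].
import numpy as np

rng = np.random.default_rng(6)
n, var_u = 3000, 1.0
x = rng.gamma(2.0, 1.0, n)                 # true cumulative dose (arbitrary units)
w = x + rng.normal(0, np.sqrt(var_u), n)   # error-prone film badge dose

lam = (np.var(w, ddof=1) - var_u) / np.var(w, ddof=1)   # reliability ratio
x_calib = w.mean() + lam * (w - w.mean())  # best linear estimate of E[X | W]
# x_calib would now replace w as the exposure in the risk (e.g. logistic) model.
print(f"attenuation factor lambda = {lam:.3f}")
```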

  18. Measurement Error and Misclassification in Statistics and Epidemiology

    CERN Document Server

    Gustafson, Paul

    2003-01-01

    This book addresses statistical challenges posed by inaccurately measuring explanatory variables, a common problem in biostatistics and epidemiology. The author explores both measurement error in continuous variables and misclassification in categorical variables. He also describes the circumstances in which it is necessary to explicitly adjust for imprecise covariates using the Bayesian approach and a Markov chain Monte Carlo algorithm. The book offers a mix of basic and more specialized topics such as "wrong-model" fitting. Mathematical details are featured in the final sections of each chapter.

  19. Laser measurement and analysis of reposition error in polishing systems

    Science.gov (United States)

    Liu, Weisen; Wang, Junhua; Xu, Min; He, Xiaoying

    2015-10-01

    In this paper, a robotic reposition error measurement method based on laser interference remote positioning is presented. The geometric error of a robot-based polishing system is analyzed, and a mathematical model of the tilt error is presented. Studies show that errors of less than 1 mm are mainly caused by the tilt error at small incident angles. Marking the spot position with an interference fringe greatly enhances the error measurement precision; the measurement precision of the tilt error can reach 5 µm. Measurement results show that the reposition error of the polishing system mainly stems from the tilt error caused by motor A, and the repositioning precision is greatly increased after improvement of the polishing system. The measurement method has important applications in practical error measurement, with low cost and simple operation.

  20. Orthogonality of inductosyn angle-measuring system error and error-separating technology

    Institute of Scientific and Technical Information of China (English)

    任顺清; 曾庆双; 王常虹

    2003-01-01

    Round inductosyn is widely used in inertial navigation test equipment, and its accuracy has a significant effect on the overall accuracy of the equipment. Four main errors of the round inductosyn, i.e. the first-order long-period (360°) harmonic error, the second-order long-period harmonic error, the first-order short-period harmonic error and the second-order short-period harmonic error, are described, and the orthogonality of these four kinds of errors is studied. An error-separating technology is proposed to separate these four kinds of errors; in the process of separating the short-period harmonic errors, arranging the data in order of the decimal part of the angle pitch number can be omitted. The effectiveness of the proposed technology is proved by measuring and adjusting the angular errors.
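
    A minimal sketch of separating orthogonal harmonic components by least squares (a synthetic long-period error curve with assumed amplitudes; the short-period terms would be fitted the same way at the pitch frequency):

```python
# Sketch: fit 1st- and 2nd-order long-period harmonics of an angular error curve.
import numpy as np

rng = np.random.default_rng(7)
theta = np.linspace(0, 2 * np.pi, 360, endpoint=False)
err = (3.0 * np.sin(theta + 0.4) + 1.2 * np.sin(2 * theta - 0.9)
       + rng.normal(0, 0.2, theta.size))          # synthetic error, arcsec

# Columns are mutually orthogonal over a full circle, so the fits decouple.
A = np.column_stack([np.sin(theta), np.cos(theta),
                     np.sin(2 * theta), np.cos(2 * theta)])
coef, *_ = np.linalg.lstsq(A, err, rcond=None)
amp1, amp2 = np.hypot(coef[0], coef[1]), np.hypot(coef[2], coef[3])
print(f"1st-order amplitude = {amp1:.2f}, 2nd-order amplitude = {amp2:.2f}")
```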

  1. Linear approximation for measurement errors in phase shifting interferometry

    Science.gov (United States)

    van Wingerden, Johannes; Frankena, Hans J.; Smorenburg, Cornelis

    1991-07-01

    This paper shows how measurement errors in phase shifting interferometry (PSI) can be described to a high degree of accuracy in a linear approximation. System error sources considered here are light source instability, imperfect reference phase shifting, mechanical vibrations, nonlinearity of the detector, and quantization of the detector signal. The measurement inaccuracies resulting from these errors are calculated in linear approximation for several formulas commonly used for PSI. The results are presented in tables for easy calculation of the measurement error magnitudes for known system errors. In addition, this paper discusses the measurement error reduction which can be achieved by choosing an appropriate phase calculation formula.

  2. System modeling based measurement error analysis of digital sun sensors

    Institute of Scientific and Technical Information of China (English)

    WEI Minsong; XING Fei; WANG Geng; YOU Zheng

    2015-01-01

    Stringent attitude determination accuracy is required for the development of advanced space technologies, and thus improvement of the accuracy of digital sun sensors is necessary. In this paper, we present a proposal for measurement error analysis of a digital sun sensor. A system model including three different error sources was built and employed for system error analysis. Numerical simulations were also conducted to study the measurement error introduced by the different sources of error. Based on our model and study, the system errors from different error sources are coupled, and the system calibration should be elaborately designed to realize a digital sun sensor with extra-high accuracy.

  3. Radiation risk estimation based on measurement error models

    CERN Document Server

    Masiuk, Sergii; Shklyar, Sergiy; Chepurny, Mykola; Likhtarov, Illya

    2017-01-01

    This monograph discusses statistics and risk estimates applied to radiation damage under the presence of measurement errors. The first part covers nonlinear measurement error models, with a particular emphasis on efficiency of regression parameter estimators. In the second part, risk estimation in models with measurement errors is considered. Efficiency of the methods presented is verified using data from radio-epidemiological studies.

  4. Analysis of error-prone survival data under additive hazards models: measurement error effects and adjustments.

    Science.gov (United States)

    Yan, Ying; Yi, Grace Y

    2016-07-01

    Covariate measurement error occurs commonly in survival analysis. Under the proportional hazards model, measurement error effects have been well studied, and various inference methods have been developed to correct for error effects under such a model. In contrast, error-contaminated survival data under the additive hazards model have received relatively less attention. In this paper, we investigate this problem by exploring measurement error effects on parameter estimation and the change of the hazard function. New insights of measurement error effects are revealed, as opposed to well-documented results for the Cox proportional hazards model. We propose a class of bias correction estimators that embraces certain existing estimators as special cases. In addition, we exploit the regression calibration method to reduce measurement error effects. Theoretical results for the developed methods are established, and numerical assessments are conducted to illustrate the finite sample performance of our methods.

  5. Bivariate analysis of basal serum anti-Müllerian hormone measurements and human blastocyst development after IVF

    Directory of Open Access Journals (Sweden)

    Sills E Scott

    2011-12-01

    Background: To report on relationships among baseline serum anti-Müllerian hormone (AMH) measurements, blastocyst development and other selected embryology parameters observed in non-donor oocyte IVF cycles. Methods: Pre-treatment AMH was measured in patients undergoing IVF (n = 79) and retrospectively correlated to in vitro embryo development noted during culture. Results: Mean (± SD) age for study patients in this study group was 36.3 ± 4.0 (range = 28-45) yrs, and mean (± SD) terminal serum estradiol during IVF was 5929 ± 4056 pmol/l. A moderate positive correlation (0.49; 95% CI 0.31 to 0.65) was noted between basal serum AMH and number of MII oocytes retrieved. Similarly, a moderate positive correlation (0.44) was observed between serum AMH and number of early cleavage-stage embryos (95% CI 0.24 to 0.61), suggesting a relationship between serum AMH and embryo development in IVF. Of note, serum AMH levels at baseline were significantly different for patients who did and did not undergo blastocyst transfer (15.6 vs. 10.9 pmol/l; p = 0.029). Conclusions: While serum AMH has found increasing application as a predictor of ovarian reserve for patients prior to IVF, its roles to estimate in vitro embryo morphology and potential to advance to blastocyst stage have not been extensively investigated. These data suggest that baseline serum AMH determinations can help forecast blastocyst development during IVF. Serum AMH measured before treatment may assist patients, clinicians and embryologists as scheduling of embryo transfer is outlined. Additional studies are needed to confirm these correlations and to better define the role of baseline serum AMH level in the prediction of blastocyst formation.

  6. Median Unbiased Estimation of Bivariate Predictive Regression Models with Heavy-tailed or Heteroscedastic Errors

    Institute of Scientific and Technical Information of China (English)

    朱复康; 王德军

    2007-01-01

    In this paper, we consider median unbiased estimation of bivariate predictive regression models with non-normal, heavy-tailed or heteroscedastic errors. We construct confidence intervals and a median unbiased estimator for the parameter of interest. We show via simulation that the proposed estimator has better predictive potential than the usual least squares estimator. An empirical application to finance is given, and a possible extension of the estimation procedure to cointegration models is also described.

  7. Rapid mapping of volumetric machine errors using distance measurements

    Energy Technology Data Exchange (ETDEWEB)

    Krulewich, D.A.

    1998-04-01

    This paper describes a relatively inexpensive, fast, and easy-to-execute approach to mapping the volumetric errors of a machine tool, coordinate measuring machine, or robot. An error map is used to characterize a machine or to improve its accuracy by compensating for the systematic errors. The method consists of three steps: (1) modeling the relationship between the volumetric error and the current state of the machine; (2) acquiring error data based on distance measurements throughout the work volume; and (3) fitting the error model using the nonlinear equation for the distance. The error model is formulated from the kinematic relationship among the six degrees of freedom of error on each moving axis. Expressing each parametric error as a function of position, the errors are combined to predict the error between the functional point and the workpiece, also as a function of position. A series of distances between several fixed base locations and various functional points in the work volume is measured using a Laser Ball Bar (LBB). Each measured distance is a nonlinear function of the commanded location of the machine, the machine error, and the base locations. Using the error model, the nonlinear equation is solved, producing a fit for the error model. Also note that, given approximate distances between each pair of base locations, the exact base locations in the machine coordinate system are determined during the nonlinear fitting procedure. Furthermore, with the use of more than three base locations, bias error in the measuring instrument can be removed. The volumetric errors of a three-axis commercial machining center have been mapped using this procedure. In this study, only errors associated with the nominal position of the machine were considered. Other errors such as thermally induced and load-induced errors were not considered, although the mathematical model has the ability to account for these errors. Due to the proprietary nature of the projects we are
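
    A heavily simplified sketch of step (3) (the error model here is reduced to three axis scale errors, an assumption made for illustration; a real map would carry the full set of parametric errors): predicted base-to-point distances are fitted to "measured" ones by nonlinear least squares.

```python
# Sketch: recover axis scale errors from distance measurements by least squares.
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(8)
base = np.array([0.1, -0.2, 0.05])               # base socket location (m)
pts = rng.uniform(0.0, 0.5, (40, 3))             # commanded positions (m)
scale_true = np.array([80e-6, -50e-6, 120e-6])   # per-axis scale errors

def distances(scale):
    actual = pts * (1 + scale)                   # actual = commanded * (1 + error)
    return np.linalg.norm(actual - base, axis=1)

d_meas = distances(scale_true) + rng.normal(0, 1e-7, len(pts))  # "measured"
fit = least_squares(lambda s: distances(s) - d_meas, x0=np.zeros(3))
print(np.round(fit.x * 1e6, 1))                  # recovered scale errors (ppm)
```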

  8. Filtered kriging for spatial data with heterogeneous measurement error variances.

    Science.gov (United States)

    Christensen, William F

    2011-09-01

    When predicting values for the measurement-error-free component of an observed spatial process, it is generally assumed that the process has a common measurement error variance. However, it is often the case that each measurement in a spatial data set has a known, site-specific measurement error variance, rendering the observed process nonstationary. We present a simple approach for estimating the semivariogram of the unobservable measurement-error-free process using a bias adjustment of the classical semivariogram formula. We then develop a new kriging predictor that filters the measurement errors. For scenarios where each site's measurement error variance is a function of the process of interest, we recommend an approach that also uses a variance-stabilizing transformation. The properties of the heterogeneous variance measurement-error-filtered kriging (HFK) predictor and variance-stabilized HFK predictor, and the improvement of these approaches over standard measurement-error-filtered kriging are demonstrated using simulation. The approach is illustrated with climate model output from the Hudson Strait area in northern Canada. In the illustration, locations with high or low measurement error variances are appropriately down- or upweighted in the prediction of the underlying process, yielding a realistically smooth picture of the phenomenon of interest.
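
    A minimal sketch of the bias adjustment described above on a toy one-dimensional transect (the kriging predictor itself is omitted; the process and variances are simulated assumptions):

```python
# Sketch: the classical semivariogram of noisy data overstates that of the
# error-free process by the mean pair error variance, which is subtracted.
import numpy as np

def adjusted_semivariogram(z, s2, pairs):
    i, j = np.asarray(pairs).T
    classical = 0.5 * np.mean((z[i] - z[j]) ** 2)
    bias = 0.5 * np.mean(s2[i] + s2[j])     # average measurement error variance
    return classical - bias

rng = np.random.default_rng(9)
true = np.cumsum(rng.normal(0, 0.3, 100))   # smooth latent process
s2 = rng.uniform(0.05, 0.4, 100)            # known site-specific error variances
z = true + rng.normal(0, np.sqrt(s2))       # heteroskedastic noisy observations
pairs = [(k, k + 1) for k in range(99)]     # all lag-1 pairs
print(f"adjusted gamma(1) = {adjusted_semivariogram(z, s2, pairs):.3f}")
```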

  9. MEASURING LOCAL GRADIENT AND SKEW QUADRUPOLE ERRORS IN RHIC IRS.

    Energy Technology Data Exchange (ETDEWEB)

    CARDONA, J.; PEGGS, S.; PILAT, R.; PTITSYN, V.

    2004-07-05

    The measurement of local linear errors at RHIC interaction regions using an "action and phase" analysis of difference orbits has already been presented. This paper evaluates the accuracy of this technique using difference orbits that were taken when known gradient errors and skew quadrupole errors were intentionally introduced. It also presents action and phase analysis of simulated orbits when controlled errors are intentionally placed in a RHIC simulation model.

  10. Statistical Modeling of Bivariate Data.

    Science.gov (United States)

    1982-08-01

    Keywords: joint density-quantile function, dependence-density, non-parametric bivariate density estimation, entropy, exponential model. The estimation of bivariate densities by autoregressive or exponential model estimators with maximum entropy properties is investigated in this thesis. The results provide important and useful procedures for nonparametric bivariate density estimation. The thesis also discusses estimators of the entropy H(d) of the dependence-density.

  11. Quantitating error in blood flow measurements with radioactive microspheres

    Energy Technology Data Exchange (ETDEWEB)

    Austin, R.E. Jr.; Hauck, W.W.; Aldea, G.S.; Flynn, A.E.; Coggins, D.L.; Hoffman, J.I.

    1989-07-01

    Accurate determination of the reproducibility of measurements using the microsphere technique is important in assessing differences in blood flow to different organs or regions within organs, as well as changes in perfusion under various experimental conditions. The sources of error of the technique are briefly reviewed. In addition, we derived a method for combining quantifiable sources of error into a single estimate that was evaluated experimentally by simultaneously injecting eight or nine sets of microspheres (each with a different radionuclide label) into four anesthetized dogs. Each nuclide was used to calculate blood flow in 145-190 myocardial regions. We compared each flow determination (using a single nuclide label) with a weighted mean for the piece (based on the remaining nuclides). The difference was defined as "measured" error. In all, there were a total of 5,975 flow observations. We compared measured error with theoretical estimates based on the Poisson error of radioactive disintegration and microsphere entrapment, nuclide separation error, and reference flow error. We found that combined estimates based on these sources completely accounted for measured error in the relative distribution of microspheres. In addition, our estimates of the error in measuring absolute flows (which were established using microsphere reference samples) slightly, but significantly, underestimated measured error in absolute flow.

  12. Error analysis for a laser differential confocal radius measurement system.

    Science.gov (United States)

    Wang, Xu; Qiu, Lirong; Zhao, Weiqian; Xiao, Yang; Wang, Zhongyu

    2015-02-10

    In order to further improve the measurement accuracy of the laser differential confocal radius measurement system (DCRMS) developed previously, a DCRMS error compensation model is established for the error sources, including laser source offset, test sphere position adjustment offset, test sphere figure, and motion error, based on analyzing the influences of these errors on the measurement accuracy of radius of curvature. Theoretical analyses and experiments indicate that the expanded uncertainty of the DCRMS is reduced to U=0.13  μm+0.9  ppm·R (k=2) through the error compensation model. The error analysis and compensation model established in this study can provide the theoretical foundation for improving the measurement accuracy of the DCRMS.

  13. Slope Error Measurement Tool for Solar Parabolic Trough Collectors: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Stynes, J. K.; Ihas, B.

    2012-04-01

    The National Renewable Energy Laboratory (NREL) has developed an optical measurement tool for parabolic solar collectors that measures the combined errors due to absorber misalignment and reflector slope error. The combined absorber alignment and reflector slope errors are measured using a digital camera to photograph the reflected image of the absorber in the collector. Previous work using the image of the reflection of the absorber finds the reflector slope errors from the reflection of the absorber and an independent measurement of the absorber location. The accuracy of the reflector slope error measurement is thus dependent on the accuracy of the absorber location measurement. By measuring the combined reflector-absorber errors, the uncertainty in the absorber location measurement is eliminated. The related performance merit, the intercept factor, depends on the combined effects of the absorber alignment and reflector slope errors. Measuring the combined effect provides a simpler measurement and a more accurate input to the intercept factor estimate. The minimal equipment and setup required for this measurement technique make it ideal for field measurements.

  14. Triphasic MRI of pelvic organ descent: sources of measurement error

    Energy Technology Data Exchange (ETDEWEB)

    Morren, Geert L. [Bowel and Digestion Centre, The Oxford Clinic, 38 Oxford Terrace, Christchurch (New Zealand)]. E-mail: geert_morren@hotmail.com; Balasingam, Adrian G. [Christchurch Radiology Group, P.O. Box 21107, 4th Floor, Leicester House, 291 Madras Street, Christchurch (New Zealand); Wells, J. Elisabeth [Department of Public Health and General Medicine, Christchurch School of Medicine, St. Elmo Courts, Christchurch (New Zealand); Hunter, Anne M. [Christchurch Radiology Group, P.O. Box 21107, 4th Floor, Leicester House, 291 Madras Street, Christchurch (New Zealand); Coates, Richard H. [Christchurch Radiology Group, P.O. Box 21107, 4th Floor, Leicester House, 291 Madras Street, Christchurch (New Zealand); Perry, Richard E. [Bowel and Digestion Centre, The Oxford Clinic, 38 Oxford Terrace, Christchurch (New Zealand)

    2005-05-01

    Purpose: To identify sources of error when measuring pelvic organ displacement during straining using triphasic dynamic magnetic resonance imaging (MRI). Materials and methods: Ten healthy nulliparous women underwent triphasic dynamic 1.5 T pelvic MRI twice, with 1 week between studies. The bladder was filled with 200 ml of a saline solution, and the vagina and rectum were opacified with ultrasound gel. T2-weighted images in the sagittal plane were analysed twice by each of the two observers in a blinded fashion. Horizontal and vertical displacement of the bladder neck, bladder base, introitus vaginae, posterior fornix, cul-de-sac, pouch of Douglas, anterior rectal wall, anorectal junction and change of the vaginal axis were measured eight times in each volunteer (two images, each read twice by two observers). Variance components were calculated for subject, observer, week, interactions of these three factors, and pure error. An overall standard error of measurement was calculated for a single observation by one observer on a film from one woman at one visit. Results: For the majority of anatomical reference points, the range of displacements measured was wide and the overall measurement error was large. Intra-observer error and week-to-week variation within a subject were important sources of measurement error. Conclusion: Important sources of measurement error when using triphasic dynamic MRI to measure pelvic organ displacement during straining were identified. Recommendations to minimize those errors are made.

  15. ERROR COMPENSATION OF COORDINATE MEASURING MACHINES WITH LOW STIFFNESS

    Institute of Scientific and Technical Information of China (English)

    2000-01-01

    A technique for compensating the errors of coordinate measuring machines (CMMs) with low stiffness is proposed. Some additional items related to force deformation are introduced into the error compensation equations. The research was carried out on a moving-column horizontal-arm CMM. Experimental results show that the effects of both the systematic components of the error motions and the force deformations are greatly reduced, which demonstrates the effectiveness of the proposed technique.

  16. System Measures Errors Between Time-Code Signals

    Science.gov (United States)

    Cree, David; Venkatesh, C. N.

    1993-01-01

    System measures timing errors between signals produced by three asynchronous time-code generators. Errors between 1-second clock pulses resolved to 2 microseconds. Basic principle of computation of timing errors as follows: central processing unit in microcontroller constantly monitors time data received from time-code generators for changes in 1-second time-code intervals. In response to any such change, microprocessor buffers count of 16-bit internal timer.

  17. Contouring error compensation on a micro coordinate measuring machine

    Science.gov (United States)

    Fan, Kuang-Chao; Wang, Hung-Yu; Ye, Jyun-Kuan

    2011-12-01

    In recent years, three-dimensional measurement for nanotechnology research has received great attention worldwide. Given the demand for high accuracy, error compensation of the measuring machine is very important. In this study, a high-precision micro-CMM (coordinate measuring machine) has been developed, composed of a coplanar stage for reducing the Abbé error in the vertical direction, a linear diffraction grating interferometer (LDGI) as the position feedback sensor with nanometer resolution, and ultrasonic motors for position control. This paper presents the error compensation strategy, covering both "home accuracy" and "position accuracy" on both axes. For home error compensation, we utilize a commercial DVD pick-up head and its S-curve principle to accurately locate the origin of each axis. For positioning error compensation, the absolute positions relative to the home are calibrated by laser interferometer and an error budget table is stored for feed-forward error compensation. Contouring error can thus be compensated if the corrections for both the X and Y positioning errors are applied, as sketched below. Experiments show that the contouring accuracy can be controlled to within 50 nm after compensation.
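
    A minimal sketch of the feed-forward positioning correction described above: a calibrated error budget table is interpolated at the commanded position and the correction is subtracted. The table values and function name are hypothetical.

```python
import numpy as np

# Hypothetical error budget table for one axis, calibrated against a
# laser interferometer: positions in mm, corresponding errors in micrometers.
cal_pos = np.array([0.0, 10.0, 20.0, 30.0, 40.0])
cal_err_um = np.array([0.00, 0.03, 0.01, -0.02, 0.04])

def corrected_command(target_mm: float) -> float:
    """Offset the commanded position by the interpolated calibration error."""
    err_mm = np.interp(target_mm, cal_pos, cal_err_um) * 1e-3
    return target_mm - err_mm

print(corrected_command(17.5))
```

    Applying the same correction independently on the X and Y axes yields the contouring compensation the record reports.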

  18. Correlated measurement error hampers association network inference

    NARCIS (Netherlands)

    Kaduk, M.; Hoefsloot, H.C.J.; Vis, D.J.; Reijmers, T.; Greef, J. van der; Smilde, A.K.; Hendriks, M.M.W.B.

    2014-01-01

    Modern chromatography-based metabolomics measurements generate large amounts of data in the form of abundances of metabolites. An increasingly popular way of representing and analyzing such data is by means of association networks. Ideally, such a network can be interpreted in terms of the underlyin

  19. Valuation Biases, Error Measures, and the Conglomerate Discount

    NARCIS (Netherlands)

    I. Dittmann (Ingolf); E.G. Maug (Ernst)

    2006-01-01

    textabstractWe document the importance of the choice of error measure (percentage vs. logarithmic errors) for the comparison of alternative valuation procedures. We demonstrate for several multiple valuation methods (averaging with the arithmetic mean, harmonic mean, median, geometric mean) that the

  20. Conditional Standard Errors of Measurement for Composite Scores Using IRT

    Science.gov (United States)

    Kolen, Michael J.; Wang, Tianyou; Lee, Won-Chan

    2012-01-01

    Composite scores are often formed from test scores on educational achievement test batteries to provide a single index of achievement over two or more content areas or two or more item types on that test. Composite scores are subject to measurement error, and as with scores on individual tests, the amount of error variability typically depends on…

  1. Laser Doppler anemometer measurements using nonorthogonal velocity components: error estimates.

    Science.gov (United States)

    Orloff, K L; Snyder, P K

    1982-01-15

    Laser Doppler anemometers (LDAs) that are arranged to measure nonorthogonal velocity components (from which orthogonal components are computed through transformation equations) are more susceptible to calibration and sampling errors than are systems with uncoupled channels. In this paper uncertainty methods and estimation theory are used to evaluate, respectively, the systematic and statistical errors that are present when such devices are applied to the measurement of mean velocities in turbulent flows. Statistical errors are estimated for two-channel LDA data that are either correlated or uncorrelated. For uncorrelated data the directional uncertainty of the measured velocity vector is considered for applications where mean streamline patterns are desired.

  2. Measurement errors in cirrus cloud microphysical properties

    Directory of Open Access Journals (Sweden)

    H. Larsen

    The limited accuracy of current cloud microphysics sensors used in cirrus cloud studies imposes limitations on the use of the data to examine the cloud's broadband radiative behaviour, an important element of the global energy balance. We review the limitations of the instruments, PMS probes, most widely used for measuring the microphysical structure of cirrus clouds and show the effect of these limitations on descriptions of the cloud radiative properties. The analysis is applied to measurements made as part of the European Cloud and Radiation Experiment (EUCREX) to determine mid-latitude cirrus microphysical and radiative properties.

    Key words. Atmospheric composition and structure (cloud physics and chemistry) · Meteorology and atmospheric dynamics · Radiative processes · Instruments and techniques

  3. Dyadic Bivariate Wavelet Multipliers in L2(R2)

    Institute of Scientific and Technical Information of China (English)

    Zhong Yan LI; Xian Liang SHI

    2011-01-01

    The single 2-dilation wavelet multipliers in the one-dimensional case and the single A-dilation wavelet multipliers (where A is any expansive matrix with integer entries and |det A| = 2) in the two-dimensional case were completely characterized by the Wutam Consortium (1998) and by Li Z. et al. (2010). But there exist no results on multivariate wavelet multipliers corresponding to an integer expansive dilation matrix whose determinant has absolute value other than 2 in L2(R2). In this paper, we choose 2I2 = diag(2, 2) as the dilation matrix and consider multipliers for the 2I2-dilation multivariate wavelet Ψ = {ψ1, ψ2, ψ3} (which is called a dyadic bivariate wavelet). Here we call a measurable function family f = {f1, f2, f3} a dyadic bivariate wavelet multiplier if Ψ1 = {F−1(f1ψ̂1), F−1(f2ψ̂2), F−1(f3ψ̂3)} is a dyadic bivariate wavelet for any dyadic bivariate wavelet Ψ = {ψ1, ψ2, ψ3}, where ψ̂ and F−1 denote the Fourier transform and the inverse Fourier transform, respectively. We study dyadic bivariate wavelet multipliers, give some conditions for dyadic bivariate wavelet multipliers, and give concrete forms of the linear phases of dyadic MRA bivariate wavelets.

  4. Haplotype reconstruction error as a classical misclassification problem: introducing sensitivity and specificity as error measures.

    Directory of Open Access Journals (Sweden)

    Claudia Lamina

    BACKGROUND: Statistically reconstructing haplotypes from single nucleotide polymorphism (SNP) genotypes can lead to falsely classified haplotypes. This can be an issue when interpreting haplotype association results or when selecting subjects with certain haplotypes for subsequent functional studies. It was our aim to quantify haplotype reconstruction error and to provide tools for it. METHODS AND RESULTS: In numerous simulation scenarios, we systematically investigated several error measures, including discrepancy, error rate, and R², and introduced sensitivity and specificity to this context. We exemplified several measures in the KORA study, a large population-based study from Southern Germany. We find that the specificity is slightly reduced only for common haplotypes, while the sensitivity was decreased for some, but not all, rare haplotypes. The overall error rate generally increased with increasing number of loci, increasing minor allele frequency of SNPs, decreasing correlation between the alleles and increasing ambiguity. CONCLUSIONS: We conclude that, with the analytical approach presented here, haplotype-specific error measures can be computed to gain insight into the haplotype uncertainty. This method provides the information on whether a specific risk haplotype can be expected to be reconstructed with essentially no or with high misclassification, and thus on the magnitude of the expected bias in association estimates. We also illustrate that sensitivity and specificity separate two dimensions of the haplotype reconstruction error, which completely describe the misclassification matrix and thus provide the prerequisite for methods accounting for misclassification.
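
    A minimal sketch of the per-haplotype sensitivity and specificity computation, given aligned true and reconstructed labels; the function name and data layout are assumptions for illustration.

```python
def sens_spec(true_haps, recon_haps, h):
    """Sensitivity and specificity of reconstructing haplotype h.

    true_haps and recon_haps are per-chromosome haplotype labels,
    aligned so that entry i refers to the same chromosome in both.
    """
    pairs = list(zip(true_haps, recon_haps))
    tp = sum(t == h and r == h for t, r in pairs)  # correctly assigned h
    fn = sum(t == h and r != h for t, r in pairs)  # missed h
    fp = sum(t != h and r == h for t, r in pairs)  # falsely called h
    tn = sum(t != h and r != h for t, r in pairs)
    sens = tp / (tp + fn) if tp + fn else float("nan")
    spec = tn / (tn + fp) if tn + fp else float("nan")
    return sens, spec

print(sens_spec(["h1", "h1", "h2", "h3"], ["h1", "h2", "h2", "h3"], "h1"))
```

    Together the two measures describe the full misclassification matrix for that haplotype, which is the point the record makes.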

  5. ASSESSING THE DYNAMIC ERRORS OF COORDINATE MEASURING MACHINES

    Institute of Scientific and Technical Information of China (English)

    1998-01-01

    The main factors affecting the dynamic errors of coordinate measuring machines are analyzed. It is pointed out that there are two main contributors to the dynamic errors: one is the rotation of the elements around the joints connected with air bearings and the other is the bending of the elements caused by the dynamic inertial forces. A method for obtaining the displacement errors at the probe position from dynamic rotational errors is presented. The dynamic rotational errors are measured with inductive position sensors and a laser interferometer. The theoretical and experimental results both show that during fast probing, due to the dynamic inertial forces, there are not only large rotations of the elements around the joints connected with air bearings but also large bending of the weak elements themselves.

  6. Ionospheric error analysis in gps measurements

    Directory of Open Access Journals (Sweden)

    G. Pugliano

    2008-06-01

    The results of an experiment aimed at evaluating the effects of the ionosphere on GPS positioning applications are presented in this paper. Specifically, the study, based upon a differential approach, was conducted utilizing GPS measurements acquired by various receivers located at increasing inter-distances. The experimental research was developed upon the basis of two groups of baselines: the first group comprises "short" baselines (less than 10 km); the second group is characterized by greater distances (up to 90 km). The obtained results were compared both on the basis of the geometric characteristics, for six different baseline lengths, using 24 hours of data, and on the basis of temporal variations, by examining two periods of differing ionospheric activity, coinciding respectively with the maximum of solar cycle 23 and with conditions of low ionospheric activity. The analysis revealed variations in terms of inter-distance as well as different performances primarily owing to temporal modifications in the state of the ionosphere.

  7. Bivariate Blending Thiele-Werner's Osculatory Rational Interpolation

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    Both the expansion of Newton's interpolating polynomial and Thiele-Werner interpolation are used to construct a kind of bivariate blending Thiele-Werner osculatory rational interpolation. A recursive algorithm and its characteristic properties are given. An error estimate is obtained and a numerical example is presented.

  8. Areal measurement error with a dot planimeter: Some experimental estimates

    Science.gov (United States)

    Yuill, R. S.

    1971-01-01

    A shape analysis is presented which uses a computer to simulate a multiplicity of dot grids mathematically. Results indicate that the number of dots placed over the area to be measured accounts almost entirely for the accuracy of measurement, the indices of shape being of little significance. Equations and graphs are provided from which the average expected error, and the maximum range of error, for various numbers of dot points can be read.

  9. Measuring worst-case errors in a robot workcell

    Energy Technology Data Exchange (ETDEWEB)

    Simon, R.W.; Brost, R.C.; Kholwadwala, D.K. [Sandia National Labs., Albuquerque, NM (United States). Intelligent Systems and Robotics Center

    1997-10-01

    Errors in model parameters, sensing, and control are inevitably present in real robot systems. These errors must be considered in order to automatically plan robust solutions to many manipulation tasks. Lozano-Perez, Mason, and Taylor proposed a formal method for synthesizing robust actions in the presence of uncertainty; this method has been extended by several subsequent researchers. All of these results presume the existence of worst-case error bounds that describe the maximum possible deviation between the robot's model of the world and reality. This paper examines the problem of measuring these error bounds for a real robot workcell. These measurements are difficult, because of the desire to completely contain all possible deviations while avoiding bounds that are overly conservative. The authors present a detailed description of a series of experiments that characterize and quantify the possible errors in visual sensing and motion control for a robot workcell equipped with standard industrial robot hardware. In addition to providing a means for measuring these specific errors, these experiments shed light on the general problem of measuring worst-case errors.

  10. Interval sampling methods and measurement error: a computer simulation.

    Science.gov (United States)

    Wirth, Oliver; Slaven, James; Taylor, Matthew A

    2014-01-01

    A simulation study was conducted to provide a more thorough account of measurement error associated with interval sampling methods. A computer program simulated the application of momentary time sampling, partial-interval recording, and whole-interval recording methods on target events randomly distributed across an observation period. The simulation yielded measures of error for multiple combinations of observation period, interval duration, event duration, and cumulative event duration. The simulations were conducted up to 100 times to yield measures of error variability. Although the present simulation confirmed some previously reported characteristics of interval sampling methods, it also revealed many new findings that pertain to each method's inherent strengths and weaknesses. The analysis and resulting error tables can help guide the selection of the most appropriate sampling method for observation-based behavioral assessments.
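
    A condensed sketch of the kind of simulation the record describes: a symmetric two-state behavior stream is generated, each interval is scored by one of the three methods, and the scored proportion is compared with the true duty cycle. All parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def sampling_error(method, reps=200, total=600.0, dt=0.1,
                   interval=10.0, mean_bout=3.0):
    """Monte Carlo error of one interval sampling method (illustrative).

    A two-state (event on/off) stream with geometrically distributed
    bouts is scored interval by interval; the scored proportion is
    compared with the true time fraction of the event.
    """
    n = int(total / dt)
    per = int(interval / dt)
    errs = np.empty(reps)
    for i in range(reps):
        flips = rng.random(n) < dt / mean_bout          # state changes
        stream = (np.cumsum(flips) % 2).astype(bool)    # on/off stream
        chunks = stream[: (n // per) * per].reshape(-1, per)
        if method == "momentary":    # momentary time sampling
            scored = chunks[:, -1].mean()
        elif method == "partial":    # partial-interval recording
            scored = chunks.any(axis=1).mean()
        else:                        # whole-interval recording
            scored = chunks.all(axis=1).mean()
        errs[i] = scored - stream.mean()
    return errs.mean(), errs.std()

for m in ("momentary", "partial", "whole"):
    print(m, sampling_error(m))
```

    Run this way, partial-interval recording overestimates and whole-interval recording underestimates the event's duty cycle, while momentary time sampling is roughly unbiased, consistent with the strengths and weaknesses the record refers to.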

  11. Correcting systematic errors in high-sensitivity deuteron polarization measurements

    Energy Technology Data Exchange (ETDEWEB)

    Brantjes, N.P.M. [Kernfysisch Versneller Instituut, University of Groningen, NL-9747AA Groningen (Netherlands); Dzordzhadze, V. [Brookhaven National Laboratory, Upton, NY 11973 (United States); Gebel, R. [Institut fuer Kernphysik, Juelich Center for Hadron Physics, Forschungszentrum Juelich, D-52425 Juelich (Germany); Gonnella, F. [Physica Department of ' Tor Vergata' University, Rome (Italy); INFN-Sez. ' Roma tor Vergata,' Rome (Italy); Gray, F.E. [Regis University, Denver, CO 80221 (United States); Hoek, D.J. van der [Kernfysisch Versneller Instituut, University of Groningen, NL-9747AA Groningen (Netherlands); Imig, A. [Brookhaven National Laboratory, Upton, NY 11973 (United States); Kruithof, W.L. [Kernfysisch Versneller Instituut, University of Groningen, NL-9747AA Groningen (Netherlands); Lazarus, D.M. [Brookhaven National Laboratory, Upton, NY 11973 (United States); Lehrach, A.; Lorentz, B. [Institut fuer Kernphysik, Juelich Center for Hadron Physics, Forschungszentrum Juelich, D-52425 Juelich (Germany); Messi, R. [Physica Department of ' Tor Vergata' University, Rome (Italy); INFN-Sez. ' Roma tor Vergata,' Rome (Italy); Moricciani, D. [INFN-Sez. ' Roma tor Vergata,' Rome (Italy); Morse, W.M. [Brookhaven National Laboratory, Upton, NY 11973 (United States); Noid, G.A. [Indiana University Cyclotron Facility, Bloomington, IN 47408 (United States); and others

    2012-02-01

    This paper reports deuteron vector and tensor beam polarization measurements taken to investigate the systematic variations due to geometric beam misalignments and high data rates. The experiments used the In-Beam Polarimeter at the KVI-Groningen and the EDDA detector at the Cooler Synchrotron COSY at Juelich. By measuring with very high statistical precision, the contributions that are second-order in the systematic errors become apparent. By calibrating the sensitivity of the polarimeter to such errors, it becomes possible to obtain information from the raw count rate values on the size of the errors and to use this information to correct the polarization measurements. During the experiment, it was possible to demonstrate that corrections were satisfactory at the level of 10^-5 for deliberately large errors. This may facilitate the real time observation of vector polarization changes smaller than 10^-6 in a search for an electric dipole moment using a storage ring.

  12. Detection and Classification of Measurement Errors in Bioimpedance Spectroscopy.

    Science.gov (United States)

    Ayllón, David; Gil-Pita, Roberto; Seoane, Fernando

    2016-01-01

    Bioimpedance spectroscopy (BIS) measurement errors may be caused by parasitic stray capacitance, impedance mismatch, cross-talking or their very likely combination. An accurate detection and identification is of extreme importance for further analysis because in some cases and for some applications, certain measurement artifacts can be corrected, minimized or even avoided. In this paper we present a robust method to detect the presence of measurement artifacts and identify what kind of measurement error is present in BIS measurements. The method is based on supervised machine learning and uses a novel set of generalist features for measurement characterization in different immittance planes. Experimental validation has been carried out using a database of complex spectra BIS measurements obtained from different BIS applications and containing six different types of errors, as well as error-free measurements. The method obtained a low classification error (0.33%) and has shown good generalization. Since both the features and the classification schema are relatively simple, the implementation of this pre-processing task in the current hardware of bioimpedance spectrometers is possible.

  13. Detection and Classification of Measurement Errors in Bioimpedance Spectroscopy.

    Directory of Open Access Journals (Sweden)

    David Ayllón

    Bioimpedance spectroscopy (BIS) measurement errors may be caused by parasitic stray capacitance, impedance mismatch, cross-talking or their very likely combination. An accurate detection and identification is of extreme importance for further analysis because in some cases and for some applications, certain measurement artifacts can be corrected, minimized or even avoided. In this paper we present a robust method to detect the presence of measurement artifacts and identify what kind of measurement error is present in BIS measurements. The method is based on supervised machine learning and uses a novel set of generalist features for measurement characterization in different immittance planes. Experimental validation has been carried out using a database of complex spectra BIS measurements obtained from different BIS applications and containing six different types of errors, as well as error-free measurements. The method obtained a low classification error (0.33%) and has shown good generalization. Since both the features and the classification schema are relatively simple, the implementation of this pre-processing task in the current hardware of bioimpedance spectrometers is possible.

  14. Efficient measurement of quantum gate error by interleaved randomized benchmarking.

    Science.gov (United States)

    Magesan, Easwar; Gambetta, Jay M; Johnson, B R; Ryan, Colm A; Chow, Jerry M; Merkel, Seth T; da Silva, Marcus P; Keefe, George A; Rothwell, Mary B; Ohki, Thomas A; Ketchen, Mark B; Steffen, M

    2012-08-24

    We describe a scalable experimental protocol for estimating the average error of individual quantum computational gates. This protocol consists of interleaving random Clifford gates between the gate of interest and provides an estimate as well as theoretical bounds for the average error of the gate under test, so long as the average noise variation over all Clifford gates is small. This technique takes into account both state preparation and measurement errors and is scalable in the number of qubits. We apply this protocol to a superconducting qubit system and find a bounded average error of 0.003 [0,0.016] for the single-qubit gates X(π/2) and Y(π/2). These bounded values provide better estimates of the average error than those extracted via quantum process tomography.
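
    A worked sketch of the point estimate used in interleaved randomized benchmarking; the interval bounds the record mentions are omitted, and the decay parameters below are illustrative, not the paper's data.

```python
def interleaved_rb_error(p_ref: float, p_int: float, n_qubits: int = 1) -> float:
    """Point estimate of the average error of an interleaved gate.

    p_ref: depolarizing decay parameter of the reference Clifford sequences
    p_int: decay parameter with the gate of interest interleaved
    Standard interleaved-RB estimate:
        r = (d - 1) * (1 - p_int / p_ref) / d,  with d = 2**n_qubits
    (the protocol also supplies theoretical bounds, not computed here).
    """
    d = 2 ** n_qubits
    return (d - 1) * (1 - p_int / p_ref) / d

# Illustrative single-qubit decay parameters giving an error near 0.003:
print(interleaved_rb_error(p_ref=0.9850, p_int=0.9792))
```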

  15. Corneal topography measurement by means of radial shearing interference: Part III - measurement errors

    Science.gov (United States)

    Kowalik, Waldemar W.; Garncarz, Beata E.; Kasprzak, Henryk T.

    This work presents the results of computer simulation studies that define the requirements on measurement conditions which should be fulfilled so that the measurement results remain within allowable errors. They define the allowable measurement (interferogram scanning) errors and the conditions that computer programs should fulfill so that the errors introduced by mathematical operations and by the computer are minimized.

  16. Measurement uncertainty evaluation of conicity error inspected on CMM

    Science.gov (United States)

    Wang, Dongxia; Song, Aiguo; Wen, Xiulan; Xu, Youxiong; Qiao, Guifang

    2016-01-01

    The cone is widely used in mechanical design for rotation, centering and fixing. Whether the conicity error can be measured and evaluated accurately will directly influence its assembly accuracy and working performance. According to the new-generation geometrical product specification (GPS), the error and its measurement uncertainty should be evaluated together. The mathematical model of the minimum zone conicity error is established and an improved immune evolutionary algorithm (IIEA) is proposed to search for the conicity error. In the IIEA, initial antibodies are first generated by using quasi-random sequences and two kinds of affinities are calculated. Then, each antibody clone is generated and the clones are self-adaptively mutated so as to maintain diversity. Similar antibodies are suppressed and new random antibodies are generated. Because the mathematical model of conicity error is strongly nonlinear and the input quantities are not independent, it is difficult to use the Guide to the expression of uncertainty in measurement (GUM) method to evaluate measurement uncertainty. An adaptive Monte Carlo method (AMCM) is proposed to estimate measurement uncertainty, in which the number of Monte Carlo trials is selected adaptively and the quality of the numerical results is directly controlled. The cone part was machined on a CK6140 lathe and measured on a Miracle NC 454 coordinate measuring machine (CMM). The experimental results confirm that the proposed method not only can search for the approximate solution of the minimum zone conicity error (MZCE) rapidly and precisely, but also can evaluate measurement uncertainty and give control variables with an expected numerical tolerance. The conicity errors computed by the proposed method are 20%-40% less than those computed by the NC454 CMM software and the evaluation accuracy improves significantly.
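
    A minimal Monte Carlo uncertainty propagation sketch in the spirit of the adaptive method described above, here with a fixed trial count and a toy cone half-angle measurand; the adaptive variant would grow the trial count until the outputs stabilize. All names and values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

def mc_uncertainty(measurand, x, u_x, n_trials=100_000):
    """Monte Carlo propagation of input uncertainties (fixed trial count)."""
    samples = rng.normal(x, u_x, size=(n_trials, len(x)))
    y = measurand(samples)
    lo, hi = np.percentile(y, [2.5, 97.5])
    return y.mean(), y.std(ddof=1), (lo, hi)

def cone_half_angle(s):
    """Toy measurand: half-angle (deg) from two section radii and their spacing."""
    r1, r2, h = s[:, 0], s[:, 1], s[:, 2]
    return np.degrees(np.arctan((r2 - r1) / h))

est, u, ci = mc_uncertainty(cone_half_angle,
                            x=[10.0, 12.0, 20.0],       # mm, illustrative
                            u_x=[0.002, 0.002, 0.005])  # standard uncertainties
print(f"half-angle {est:.4f} deg, u = {u:.5f} deg, 95% interval {ci}")
```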

  17. Errors and Correction of Precipitation Measurements in China

    Institute of Scientific and Technical Information of China (English)

    REN Zhihua; LI Mingqin

    2007-01-01

    In order to discover the range of various errors in Chinese precipitation measurements and seek a correction method, 30 precipitation evaluation stations were set up countrywide before 1993. All the stations are reference stations in China. To seek a correction method for wind-induced error, a precipitation correction instrument called the "horizontal precipitation gauge" was devised beforehand. Field intercomparison observations covering 29,000 precipitation events have been conducted using one pit gauge, two elevated operational gauges and one horizontal gauge at the above 30 stations. The range of precipitation measurement errors in China is obtained by analysis of the intercomparison measurement results. The distributions of random errors and systematic errors in precipitation measurements are studied in this paper. A correction method, especially for wind-induced errors, is developed. The results show that a power-function correlation exists between the precipitation amount caught by the horizontal gauge and the absolute difference between the observations of the operational gauge and the pit gauge, with a correlation coefficient of 0.99. For operational observations, precipitation correction can be carried out simply by parallel observation with a horizontal precipitation gauge. The precipitation accuracy after correction approaches that of the pit gauge. The correction method developed is simple and feasible.

  18. Influence of measurement errors and estimated parameters on combustion diagnosis

    Energy Technology Data Exchange (ETDEWEB)

    Payri, F.; Molina, S.; Martin, J. [CMT-Motores Termicos, Universidad Politecnica de Valencia, Camino de Vera s/n. 46022 Valencia (Spain); Armas, O. [Departamento de Mecanica Aplicada e Ingenieria de proyectos, Universidad de Castilla-La Mancha. Av. Camilo Jose Cela s/n 13071,Ciudad Real (Spain)

    2006-02-01

    Thermodynamic diagnosis models are valuable tools for the study of Diesel combustion. Inputs required by such models comprise measured mean and instantaneous variables, together with suitable values for adjustable parameters used in different submodels. In the case of measured variables, one may estimate the uncertainty associated with measurement errors; however, the influence of errors in model parameter estimation may not be so easily established on an experimental basis. In this paper, a simulated pressure cycle has been used along with known input parameters, so that any uncertainty in the inputs is avoided. Then, the influence of errors in measured variables and in geometric and heat transmission parameters on the results of a combustion diagnosis model for direct injection diesel engines has been studied. This procedure allowed the relative importance of these parameters to be established, and limits to be set on the maximal errors of the model, accounting for both the maximal expected errors in the input parameters and the sensitivity of the model to those errors. (author)

  19. Multiscale measurement error models for aggregated small area health data.

    Science.gov (United States)

    Aregay, Mehreteab; Lawson, Andrew B; Faes, Christel; Kirby, Russell S; Carroll, Rachel; Watjou, Kevin

    2016-08-01

    Spatial data are often aggregated from a finer (smaller) to a coarser (larger) geographical level. The process of data aggregation induces a scaling effect which smoothes the variation in the data. To address the scaling problem, multiscale models that link the convolution models at different scale levels via a shared random effect have been proposed. One of the main goals in aggregated health data is to investigate the relationship between predictors and an outcome at different geographical levels. In this paper, we extend multiscale models to examine whether a predictor effect at a finer level holds true at a coarser level. To adjust for predictor uncertainty due to aggregation, we applied measurement error models in the framework of the multiscale approach. To assess the benefit of using multiscale measurement error models, we compare the performance of multiscale models with and without measurement error in both real and simulated data. We found that ignoring the measurement error in multiscale models underestimates the regression coefficient, while it overestimates the variance of the spatially structured random effect. On the other hand, accounting for the measurement error in multiscale models provides a better model fit and unbiased parameter estimates.

  20. Beam induced vacuum measurement error in BEPC II

    Institute of Scientific and Technical Information of China (English)

    2011-01-01

    When the beam in the BEPCII storage ring aborts suddenly, the measured pressure at cold cathode gauges and ion pumps drops suddenly and then decreases gradually to the base pressure. This shows that there is a beam-induced positive error in the pressure measurement during beam operation. The error is the difference between the measured and real pressures. Right after the beam aborts, the error disappears immediately and the measured pressure then equals the real pressure. For one gauge, we can fit a non-linear pressure-time curve to its measured pressure data over the 20 seconds after a sudden beam abort. From this negative-exponential pumping-down curve, the real pressure at the time the beam starts aborting is extrapolated. With data from several sudden beam aborts we obtained the errors of that gauge at different beam currents and found that the error is directly proportional to the beam current, as expected. A linear fit then gives the proportionality coefficient of the equation we derived, which is used to evaluate the real pressure at all times when the beam, at whatever current, is on.
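
    A sketch of the extrapolation step: fit a negative-exponential pumping-down curve to readings taken after a sudden beam abort and evaluate it at the abort time. The synthetic numbers below stand in for gauge data and are purely illustrative.

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(2)

# Synthetic gauge readings (Pa), sampled after a sudden beam abort:
t = np.linspace(0.5, 20.0, 40)                            # seconds after abort
p = 2.0e-9 + 1.5e-9 * np.exp(-t / 4.0) + rng.normal(0, 2e-11, t.size)

def pump_down(t, p_base, a, tau):
    # negative-exponential pumping-down toward the base pressure
    return p_base + a * np.exp(-t / tau)

popt, _ = curve_fit(pump_down, t, p, p0=(p[-1], p[0] - p[-1], 5.0))
p_at_abort = pump_down(0.0, *popt)       # extrapolated real pressure at abort
reading_before_abort = 4.5e-9            # gauge value with beam on (illustrative)
error = reading_before_abort - p_at_abort
print(f"beam-induced error ~ {error:.2e} Pa")
```

    Repeating this at several beam currents and fitting error versus current linearly gives the proportionality coefficient the record describes.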

  1. Phase measurement error in summation of electron holography series

    Energy Technology Data Exchange (ETDEWEB)

    McLeod, Robert A., E-mail: robbmcleod@gmail.com [Department of Physics, University of Alberta, Edmonton, AB, Canada T6G 2E1 (Canada); National Institute for Nanotechnology, 11421 Saskatchewan Dr., Edmonton, AB, Canada T6G 2M9 (Canada); Bergen, Michael [National Institute for Nanotechnology, 11421 Saskatchewan Dr., Edmonton, AB, Canada T6G 2M9 (Canada); Malac, Marek [National Institute for Nanotechnology, 11421 Saskatchewan Dr., Edmonton, AB, Canada T6G 2M9 (Canada); Department of Physics, University of Alberta, Edmonton, AB, Canada T6G 2E1 (Canada)

    2014-06-01

    Off-axis electron holography is a method for the transmission electron microscope (TEM) that measures the electric and magnetic properties of a specimen. The electrostatic and magnetic potentials modulate the electron wavefront phase. The error in measurement of the phase therefore determines the smallest observable changes in electric and magnetic properties. Here we explore the summation of a hologram series to reduce the phase error and thereby improve the sensitivity of electron holography. Summation of hologram series requires independent registration and correction of image drift and phase wavefront drift, the consequences of which are discussed. Optimization of the electro-optical configuration of the TEM for the double biprism configuration is examined. An analytical model of image and phase drift, composed of a combination of linear drift and Brownian random-walk, is derived and experimentally verified. The accuracy of image registration via cross-correlation and phase registration is characterized by simulated hologram series. The model of series summation errors allows the optimization of phase error as a function of exposure time and fringe carrier frequency for a target spatial resolution. An experimental example of hologram series summation is provided on WS2 fullerenes. A metric is provided to measure the object phase error from experimental results and compared to analytical predictions. The ultimate experimental object root-mean-square phase error is 0.006 rad (2π/1050) at a spatial resolution less than 0.615 nm and a total exposure time of 900 s. The ultimate phase error in vacuum adjacent to the specimen is 0.0037 rad (2π/1700). The analytical prediction of phase error differs with the experimental metrics by +7% inside the object and −5% in the vacuum, indicating that the model can provide reliable quantitative predictions.

  2. Measurement errors with low-cost citizen science radiometers

    OpenAIRE

    Bardají, Raúl; Piera, Jaume

    2016-01-01

    The KdUINO is a Do-It-Yourself buoy with low-cost radiometers that measures a parameter related to water transparency: the diffuse attenuation coefficient integrated over all the photosynthetically active radiation. In this contribution, we analyze the measurement errors of a novel low-cost multispectral radiometer that is used with the KdUINO.

  3. Measurement error of waist circumference: gaps in knowledge.

    NARCIS (Netherlands)

    Verweij, L.M.; Terwee, C.B.; Proper, K.I.; Hulshof, C.T.J.; Mechelen, W. van

    2013-01-01

    Objective: It is not clear whether measuring waist circumference in clinical practice is problematic because the measurement error is unclear, as well as what constitutes a clinically relevant change. The present study aimed to summarize what is known from state-of-the-art research. Design: To ident

  4. Measurement error of waist circumference: Gaps in knowledge

    NARCIS (Netherlands)

    Verweij, L.M.; Terwee, C.B.; Proper, K.I.; Hulshof, C.T.; Mechelen, W.V. van

    2013-01-01

    Objective It is not clear whether measuring waist circumference in clinical practice is problematic because the measurement error is unclear, as well as what constitutes a clinically relevant change. The present study aimed to summarize what is known from state-of-the-art research. Design To identif

  5. ALGORITHM FOR SPHERICITY ERROR AND THE NUMBER OF MEASURED POINTS

    Institute of Scientific and Technical Information of China (English)

    HE Gaiyun; WANG Taiyong; ZHAO Jian; YU Baoqin; LI Guoqin

    2006-01-01

    The data processing technique and the method for determining the optimal number of measured points are studied for the sphericity error measured on a coordinate measuring machine (CMM). The criterion for the minimum zone of a spherical surface is analyzed first, and then an approximation technique searching for the minimum sphericity error from the form data is studied. In order to obtain the minimum zone of the spherical surface, the radial separation is reduced gradually by moving the center of the concentric spheres along certain directions with certain steps, so the algorithm is precise and efficient. After the appropriate mathematical model for the approximation technique is created, a data processing program is developed accordingly. By processing the metrical data with the developed program, the sphericity errors are evaluated when different numbers of measured points are taken from the same sample, and the corresponding scatter diagram and fitted curve for the sample are graphically represented. The optimal number of measured points is determined through regression analysis. Experiments show that both the data processing technique and the method for determining the optimal number of measured points are effective. On average, the obtained sphericity error is 5.78 μm smaller than the least-squares solution, an accuracy increase of 8.63%; the obtained optimal number of measured points is half of the number usually measured.
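
    A compact sketch of the approximation idea described above: starting from the centroid, the common center of the concentric spheres is nudged along coordinate directions with a shrinking step whenever the move reduces the radial separation. This is an illustrative coordinate search, not the paper's exact algorithm.

```python
import numpy as np

def min_zone_sphericity(points, steps=200, step0=1e-2):
    """Approximate minimum-zone sphericity of an (n, 3) point cloud."""
    def separation(c):
        r = np.linalg.norm(points - c, axis=1)
        return r.max() - r.min()                 # radial separation R_max - R_min
    c = points.mean(axis=0)                      # start at the centroid
    best, step = separation(c), step0
    dirs = np.vstack([np.eye(3), -np.eye(3)])    # +/- moves along each axis
    for _ in range(steps):
        improved = False
        for d in dirs:
            cand = c + step * d
            s = separation(cand)
            if s < best:
                c, best, improved = cand, s, True
        if not improved:
            step /= 2.0                          # shrink the step and retry
    return best, c

# Synthetic test: points on a unit sphere with a small radial form error.
rng = np.random.default_rng(5)
u = rng.normal(size=(500, 3))
u /= np.linalg.norm(u, axis=1, keepdims=True)
pts = u * (1 + rng.uniform(-5e-4, 5e-4, (500, 1)))
print(min_zone_sphericity(pts)[0])   # close to the 1e-3 radial band width
```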

  6. QUALITATIVE DATA AND ERROR MEASUREMENT IN INPUT-OUTPUT-ANALYSIS

    NARCIS (Netherlands)

    NIJKAMP, P; OOSTERHAVEN, J; OUWERSLOOT, H; RIETVELD, P

    1992-01-01

    This paper is a contribution to the rapidly emerging field of qualitative data analysis in economics. Ordinal data techniques and error measurement in input-output analysis are here combined in order to test the reliability of a low level of measurement and precision of data by means of a stochastic

  7. Assessment of salivary flow rate: biologic variation and measure error.

    NARCIS (Netherlands)

    Jongerius, P.H.; Limbeek, J. van; Rotteveel, J.J.

    2004-01-01

    OBJECTIVE: To investigate the applicability of the swab method in the measurement of salivary flow rate in multiple-handicap drooling children. To quantify the measurement error of the procedure and the biologic variation in the population. STUDY DESIGN: Cohort study. METHODS: In a repeated measurem

  8. Automated High Resolution Measurement of Heliostat Slope Errors

    OpenAIRE

    Ulmer, Steffen; März, Tobias; Reinalter, Wolfgang; Belhomme, Boris

    2010-01-01

    A new optical measurement method that simplifies and optimizes the mounting and canting of heliostats and helps to assure their optical quality before commissioning of the solar field was developed. This method is based on the reflection of regular patterns in the mirror surface and their distortions due to mirror surface errors. The measurement has a resolution of about one million points per heliostat with a measurement uncertainty of less than 0.2 mrad and a measurement time of about one m...

  9. AUTOMATED HIGH RESOLUTION MEASUREMENT OF HELIOSTAT SLOPE ERRORS

    OpenAIRE

    Ulmer, Steffen; März, Tobias; Prahl, Christoph; Reinalter, Wolfgang; Belhomme, Boris

    2009-01-01

    A new optical measurement method that simplifies and optimizes the mounting and canting of heliostats and helps to assure their optical quality before commissioning of the solar field was developed. This method is based on the reflection of regular patterns in the mirror surface and their distortions due to mirror surface errors. The measurement has a resolution of about one million points per heliostat with a measurement uncertainty of less than 0.2 mrad and a measurement time of about one m...

  10. Errors Associated With Excess Air Multipoint Measurement Systems

    Directory of Open Access Journals (Sweden)

    Ramsunkar Charlene

    2015-12-01

    Boiler combustion air is generally controlled by the excess air content measured at the boiler economiser outlet using oxygen (O2) analysers. Due to duct geometry and dimensions, areas of high and low O2 concentration occur in the flue gas duct, which poses a problem in obtaining a representative measurement of O2 in the flue gas stream. Multipoint systems, as opposed to single-point systems, are more favourable for achieving representative readings. However, ash blockages and air leakages influence the accuracy of O2 measurement. The design of the multipoint system varies across Eskom's power stations. This research was aimed at evaluating the accuracy of the multipoint oxygen measurement system installed at Power Station A and at determining the systematic errors associated with the different multipoint system designs installed at Power Stations A and B. Using the flow simulation software FloEFD™ and Flownex®, studies were conducted on the two types of multipoint system designs. This study established that significantly large errors, as high as 50%, occurred between the actual and measured flue gas O2. The design of the multipoint system extraction pipes also introduces significant errors, as high as 23%, in the O2 measured. The results indicated that the sampling errors introduced with Power Station A's system can be significantly reduced by adopting the sampling pipe design installed at Power Station B.

  11. Error-disturbance uncertainty relations in neutron spin measurements

    Science.gov (United States)

    Sponar, Stephan

    2016-05-01

    Heisenberg’s uncertainty principle in a formulation of uncertainties, intrinsic to any quantum system, is rigorously proven and demonstrated in various quantum systems. Nevertheless, Heisenberg’s original formulation of the uncertainty principle was given in terms of a reciprocal relation between the error of a position measurement and the thereby induced disturbance on a subsequent momentum measurement. However, a naive generalization of a Heisenberg-type error-disturbance relation for arbitrary observables is not valid. An alternative universally valid relation was derived by Ozawa in 2003. Though universally valid, Ozawa’s relation is not optimal. Recently, Branciard has derived a tight error-disturbance uncertainty relation (EDUR), describing the optimal trade-off between error and disturbance under certain conditions. Here, we report a neutron-optical experiment that records the error of a spin-component measurement, as well as the disturbance caused on another spin-component to test EDURs. We demonstrate that Heisenberg’s original EDUR is violated, and Ozawa’s and Branciard’s EDURs are valid in a wide range of experimental parameters, as well as the tightness of Branciard’s relation.

  12. Non-Gaussian error distribution of 7Li abundance measurements

    Science.gov (United States)

    Crandall, Sara; Houston, Stephen; Ratra, Bharat

    2015-07-01

    We construct the error distribution of 7Li abundance measurements for 66 observations (with error bars) used by Spite et al. (2012) that give A(Li) = 2.21 ± 0.065 (median and 1σ symmetrized error). This error distribution is somewhat non-Gaussian, with larger probability in the tails than is predicted by a Gaussian distribution. The 95.4% confidence limits are 3.0σ in terms of the quoted errors. We fit the data to four commonly used distributions: Gaussian, Cauchy, Student’s t and double exponential with the center of the distribution found with both weighted mean and median statistics. It is reasonably well described by a widened n = 8 Student’s t distribution. Assuming Gaussianity, the observed A(Li) is 6.5σ away from that expected from standard Big Bang Nucleosynthesis (BBN) given the Planck observations. Accounting for the non-Gaussianity of the observed A(Li) error distribution reduces the discrepancy to 4.9σ, which is still significant.

  13. Optimal measurement strategies for effective suppression of drift errors

    Energy Technology Data Exchange (ETDEWEB)

    Yashchuk, Valeriy V.

    2009-04-16

    Drifting of experimental set-ups with change of temperature or other environmental conditions is the limiting factor of many, if not all, precision measurements. The measurement error due to a drift is, in some sense, in-between random noise and systematic error. In the general case, the error contribution of a drift cannot be averaged out using a number of measurements identically carried out over a reasonable time. In contrast to systematic errors, drifts are usually not stable enough for a precise calibration. Here a rather general method for effective suppression of the spurious effects caused by slow drifts in a large variety of instruments and experimental set-ups is described. An analytical derivation of an identity, describing the optimal measurement strategies suitable for suppressing the contribution of a slow drift described with a certain order polynomial function, is presented. A recursion rule as well as a general mathematical proof of the identity is given. The effectiveness of the discussed method is illustrated with an application of the derived optimal scanning strategies to precise surface slope measurements with a surface profiler.
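
    A one-line instance of such an optimal strategy: the symmetric A-B-B-A ordering cancels any linear drift in a difference measurement exactly, a special case of the polynomial-suppressing schemes the record derives. Values are illustrative.

```python
import numpy as np

# A linear drift d(t) = k*t contaminates a difference measurement A - B.
# Taking readings in the order A B B A at equal time spacing and combining
# them as (A1 - B1 - B2 + A2) / 2 cancels the drift exactly for any slope k.
k, A_true, B_true = 0.37, 10.0, 7.0
t = np.array([0.0, 1.0, 2.0, 3.0])                 # equally spaced readings
readings = np.array([A_true, B_true, B_true, A_true]) + k * t

naive = readings[0] - readings[1]                  # A - B from the first pair
abba = (readings[0] - readings[1] - readings[2] + readings[3]) / 2
print(naive - (A_true - B_true))                   # biased by -k
print(abba - (A_true - B_true))                    # 0.0: linear drift cancelled
```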

  14. The effect of measurement error on surveillance metrics

    Energy Technology Data Exchange (ETDEWEB)

    Weaver, Brian Phillip [Los Alamos National Laboratory; Hamada, Michael S. [Los Alamos National Laboratory

    2012-04-24

    The purpose of this manuscript is to describe different simulation studies that CCS-6 has performed to understand the effects of measurement error on the surveillance metrics. We assume that the measured items come from a larger population of items. We denote the random variable associated with an item's value of an attribute of interest as X, with X ~ N(μ, σ²). This distribution represents the variability in the population of interest, and we wish to make inference on the parameters μ and σ or on some function of these parameters. When an item X is selected from the larger population, a measurement is made on some attribute of it. This measurement is made with error, and the true value of X is not observed. The rest of this section presents simulation results for the different measurement cases encountered.
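
    A minimal version of such a simulation: true item values X ~ N(μ, σ²) are observed with additive gauge error, and the naive standard-deviation estimate targets the inflated population value sqrt(σ² + σ_m²). Parameters are illustrative, not the report's.

```python
import numpy as np

rng = np.random.default_rng(3)

# Items come from X ~ N(mu, sigma^2); each measurement adds independent
# gauge noise e ~ N(0, sigma_m^2), so observed values have variance
# sigma^2 + sigma_m^2.
mu, sigma, sigma_m, n, reps = 100.0, 2.0, 1.0, 30, 10_000

sd_est = np.empty(reps)
for i in range(reps):
    x = rng.normal(mu, sigma, n)              # true item values
    y = x + rng.normal(0, sigma_m, n)         # error-contaminated readings
    sd_est[i] = y.std(ddof=1)

print(sd_est.mean())               # approaches sqrt(sigma^2 + sigma_m^2)
print(np.hypot(sigma, sigma_m))    # = 2.236, the SD actually being estimated
```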

  15. GY SAMPLING THEORY IN ENVIRONMENTAL STUDIES 2: SUBSAMPLING ERROR MEASUREMENTS

    Science.gov (United States)

    Sampling can be a significant source of error in the measurement process. The characterization and cleanup of hazardous waste sites require data that meet site-specific levels of acceptable quality if scientifically supportable decisions are to be made. In support of this effort,...

  16. Confounding and exposure measurement error in air pollution epidemiology

    NARCIS (Netherlands)

    Sheppard, L.; Burnett, R.T.; Szpiro, A.A.; Kim, J.Y.; Jerrett, M.; Pope, C.; Brunekreef, B.

    2012-01-01

    Studies in air pollution epidemiology may suffer from some specific forms of confounding and exposure measurement error. This contribution discusses these, mostly in the framework of cohort studies. Evaluation of potential confounding is critical in studies of the health effects of air pollution. Th

  17. Semiparametric maximum likelihood for nonlinear regression with measurement errors.

    Science.gov (United States)

    Suh, Eun-Young; Schafer, Daniel W

    2002-06-01

    This article demonstrates semiparametric maximum likelihood estimation of a nonlinear growth model for fish lengths using imprecisely measured ages. Data on the species corvina reina, found in the Gulf of Nicoya, Costa Rica, consist of lengths and imprecise ages for 168 fish and precise ages for a subset of 16 fish. The statistical problem may therefore be classified as nonlinear errors-in-variables regression with internal validation data. Inferential techniques are based on ideas extracted from several previous works on semiparametric maximum likelihood for errors-in-variables problems. The illustration of the example clarifies practical aspects of the associated computational, inferential, and data analytic techniques.

  18. Time variance effects and measurement error indications for MLS measurements

    DEFF Research Database (Denmark)

    Liu, Jiyuan

    1999-01-01

    Mathematical characteristics of Maximum-Length-Sequences are discussed, and effects of measuring on slightly time-varying systems with the MLS method are examined with computer simulations with MATLAB. A new coherence measure is suggested for the indication of time-variance effects. The results of the simulations show that the proposed MLS coherence can give an indication of time-variance effects.

  19. Quantification and handling of sampling errors in instrumental measurements: a case study

    DEFF Research Database (Denmark)

    Andersen, Charlotte Møller; Bro, R.

    2004-01-01

    Instrumental measurements are often used to represent a whole object even though only a small part of the object is actually measured. This can introduce an error due to the inhomogeneity of the product. Together with other errors resulting from the measuring process, such errors may have a serious impact on the results when the instrumental measurements are used for multivariate regression and prediction. This paper gives examples of how errors influencing the predictions obtained by a multivariate regression model can be quantified and handled. Only random errors are considered here, while in certain situations the effect of systematic errors is also considerable. The relevant errors contributing to the prediction error are: error in instrumental measurements (x-error), error in reference measurements (y-error), error in the estimated calibration model (regression coefficient error) and model error.

  20. Measurement error adjustment in essential fatty acid intake from a food frequency questionnaire: alternative approaches and methods

    Directory of Open Access Journals (Sweden)

    Satia Jessie A

    2007-09-01

    Background: We aimed at assessing the degree of measurement error in essential fatty acid intakes from a food frequency questionnaire and the impact of correcting for such an error on precision and bias of odds ratios in logistic models. To assess these impacts, and for illustrative purposes, alternative approaches and methods were used with the binary outcome of cognitive decline in verbal fluency. Methods: Using the Atherosclerosis Risk in Communities (ARIC) study, we conducted a sensitivity analysis. The error-prone exposure – visit 1 fatty acid intake (1987–89) – was available for 7,814 subjects 50 years or older at baseline with complete data on cognitive decline between visits 2 (1990–92) and 4 (1996–98). Our binary outcome of interest was clinically significant decline in verbal fluency. Point estimates and 95% confidence intervals were compared between naïve and measurement-error adjusted odds ratios of decline with every SD increase in fatty acid intake as % of energy. Two approaches were explored for adjustment: (A) external validation against biomarkers (plasma fatty acids in cholesteryl esters and phospholipids) and (B) internal repeat measurements at visits 2 and 3. The main difference between the two is that Approach B makes a stronger assumption regarding lack of error correlations in the structural model. Additionally, we compared results from regression calibration (RCAL) to those from simulation extrapolation (SIMEX). Finally, using structural equations modeling, we estimated attenuation factors associated with each dietary exposure to assess the degree of measurement error in a bivariate scenario for regression calibration of the logistic regression model. Results and conclusion: Attenuation factors for Approach A were smaller than for B, suggesting a larger amount of measurement error in the dietary exposure. Replicate measures (Approach B), unlike concentration biomarkers (Approach A), may lead to imprecise odds ratios due to larger
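
    A linear-model sketch of the replicate-based adjustment (in the flavour of Approach B): the error variance is estimated from within-person replicate differences, an attenuation factor is formed, and the naive slope is divided by it. This simplifies the record's logistic setting to keep the attenuation arithmetic visible; all values are simulated.

```python
import numpy as np

rng = np.random.default_rng(4)

n, beta = 5000, 0.8
T = rng.normal(0, 1, n)                    # true exposure (unobserved)
W1 = T + rng.normal(0, 0.7, n)             # error-prone replicate 1
W2 = T + rng.normal(0, 0.7, n)             # error-prone replicate 2
y = beta * T + rng.normal(0, 1, n)         # outcome

var_err = 0.5 * np.var(W1 - W2, ddof=1)    # error variance from replicates
Wbar = 0.5 * (W1 + W2)                     # mean of two replicates
# attenuation factor: var(T) / var(Wbar), with var(T) = var(Wbar) - var_err/2
lam = (np.var(Wbar, ddof=1) - var_err / 2) / np.var(Wbar, ddof=1)

naive = np.polyfit(Wbar, y, 1)[0]          # attenuated slope ~ lam * beta
print(naive, naive / lam)                  # corrected slope recovers ~ beta
```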

  1. Numerical Integration Based on Bivariate Quartic Quasi-Interpolation Operators

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    In this paper, we propose a method for numerical integration using two kinds of C2 quasi-interpolation operators on a bivariate spline space, and we also discuss the convergence properties and error estimates. Moreover, the proposed method is applied to the numerical evaluation of 2-D singular integrals. Numerical experiments are carried out and the results are compared with some previously published results.

  2. GLOBAL SMOOTHNESS PRESERVATION BY BIVARIATE INTERPOLATION OPERATORS

    Institute of Scientific and Technical Information of China (English)

    S.G.Gal; J.Szabados

    2003-01-01

    Extending the results of [4] in the univariate case, in this paper we prove that the bivariate interpolation polynomials of Hermite-Fejer based on the Chebyshev nodes of the first kind, those of Lagrange based on the Chebyshev nodes of second kind and ± 1, and those of bivariate Shepard operators, have the property of partial preservation of global smoothness, with respect to various bivariate moduli of continuity.

  3. New time-domain three-point error separation methods for measurement roundness and spindle error motion

    Science.gov (United States)

    Liu, Wenwen; Tao, Tingting; Zeng, Hao

    2016-10-01

    Error separation is a key technology for the online measurement of spindle radial error motion or artifact form errors such as roundness and cylindricity. Three time-domain three-point error separation methods are proposed based on solving the minimum-norm solution of linear equations. Three laser displacement sensors are used to collect a set of discrete measurements, from which a group of linear measurement equations is derived according to the criterion of prior separation of form (PSF), prior separation of spindle error motion (PSM) or synchronous separation of both form and spindle error motion (SSFM). The work discusses the correlations between the angles of the three sensors in the measuring system, the rank of the coefficient matrix in the measurement equations and the harmonic distortions in the separation results; it reveals the regularities of the first-order harmonic distortion and recommends the applicable situations of each method. Theoretical research and extensive simulations show that SSFM is the most precise method because of its lower distortion.
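
    For orientation, a sketch of the classical frequency-domain three-point separation, not the time-domain minimum-norm methods the record proposes: probe angles are assumed to be integer multiples of the angular sampling step, and harmonics where the transfer function vanishes are lost, which is exactly the harmonic-distortion issue the record analyzes.

```python
import numpy as np

def three_point_roundness(s1, s2, s3, phi1, phi2):
    """Classical frequency-domain three-point separation (illustrative sketch).

    Probes at angles 0, phi1, phi2 (radians, integer multiples of the
    angular sampling step) read s_i = r(theta + phi_i)
    + x(theta)*cos(phi_i) + y(theta)*sin(phi_i).  Weights (1, c2, c3)
    are chosen so the spindle terms x, y cancel; the part profile r is
    then recovered by deconvolution in the Fourier domain.
    """
    # Solve: c2*cos(phi1) + c3*cos(phi2) = -1,  c2*sin(phi1) + c3*sin(phi2) = 0
    A = np.array([[np.cos(phi1), np.cos(phi2)],
                  [np.sin(phi1), np.sin(phi2)]])
    c2, c3 = np.linalg.solve(A, [-1.0, 0.0])

    m = s1 + c2 * s2 + c3 * s3                 # spindle error cancelled
    k = np.arange(len(s1))
    G = 1 + c2 * np.exp(1j * k * phi1) + c3 * np.exp(1j * k * phi2)
    R = np.fft.fft(m)
    good = np.abs(G) > 1e-9                    # harmonics with G ~ 0 are lost
    R[good] /= G[good]
    R[~good] = 0.0
    return np.fft.ifft(R).real

# Example setup: 360-sample profile with probes at 0, 99 and 201 degrees
# (both angles are integer multiples of the 1-degree sampling step).
```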

  4. Calibrating Car-Following Model Considering Measurement Errors

    Directory of Open Access Journals (Sweden)

    Chang-qiao Shao

    2013-01-01

    Car-following models have important applications in traffic and safety engineering. To enhance the accuracy of a model in predicting the behavior of an individual driver, considerable studies strive to improve model calibration technologies. However, microscopic car-following models are generally calibrated using macroscopic traffic data, ignoring measurement errors in variables, which leads to unreliable and erroneous conclusions. This paper aims to develop a technology to calibrate the well-known Van Aerde model. In particular, the effect of measurement errors in variables on the accuracy of the estimates is considered. In order to calibrate the model using microscopic data, a new parameter estimation method, named the two-step approach, is proposed. The results show that the modified Van Aerde model is, to a certain extent, more reliable than the generic model.

  5. Statistical Test for Bivariate Uniformity

    Directory of Open Access Journals (Sweden)

    Zhenmin Chen

    2014-01-01

    Full Text Available The purpose of a multidimensional uniformity test is to check whether the underlying probability distribution of a multidimensional population differs from the multidimensional uniform distribution. The multidimensional uniformity test has applications in various fields such as biology, astronomy, and computer science. Such a test, however, has received less attention in the literature compared with the univariate case. A new test statistic for checking multidimensional uniformity is proposed in this paper. Some important properties of the proposed test statistic are discussed. As a special case, the bivariate test statistic is discussed in detail in this paper. Monte Carlo simulation is used to compare the power of the newly proposed test with that of the distance-to-boundary test, which is a recently published statistical test for multidimensional uniformity. It is shown that the test proposed in this paper is more powerful than the distance-to-boundary test in some cases.
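
    The Monte Carlo power comparison described above follows a generic recipe: simulate the null distribution of the statistic to obtain critical values, then simulate an alternative and count rejections. The sketch below uses a mean nearest-neighbour distance as a stand-in statistic; it is not the statistic proposed in the paper.

        import numpy as np
        from scipy.spatial import cKDTree

        def nn_stat(pts):
            """Mean nearest-neighbour distance (illustrative uniformity statistic)."""
            d, _ = cKDTree(pts).query(pts, k=2)
            return d[:, 1].mean()

        rng = np.random.default_rng(1)
        n, n_mc = 100, 2000

        # Null distribution and two-sided 5% critical values by Monte Carlo.
        null = np.array([nn_stat(rng.uniform(size=(n, 2))) for _ in range(n_mc)])
        lo, hi = np.quantile(null, [0.025, 0.975])

        def alternative(rng, n):
            # Clustered alternative: a Gaussian clump mixed with uniform noise.
            m = rng.binomial(n, 0.3)
            clump = np.clip(rng.normal(0.5, 0.05, size=(m, 2)), 0.0, 1.0)
            return np.vstack([clump, rng.uniform(size=(n - m, 2))])

        rej = sum(not (lo <= nn_stat(alternative(rng, n)) <= hi) for _ in range(n_mc))
        print(f"estimated power at alpha = 0.05: {rej / n_mc:.3f}")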

  6. Distance Measurement Error Reduction Analysis for the Indoor Positioning System

    Directory of Open Access Journals (Sweden)

    Tariq Jamil SaifullahKhanzada

    2012-10-01

    Full Text Available This paper presents the DME (Distance Measurement Error) estimation analysis for the wireless indoor positioning channel. The channel model for indoor positioning is derived and implemented using an 8-antenna WLAN (Wireless Local Area Network) system compliant with the IEEE 802.11 a/b/g standard. Channel impairments are derived for the TDOA (Time Difference of Arrival) range estimation. DME calculation is performed over distinct experiments in the TDOA channel profiles using systems deployed with 1, 2, 4 and 8 antennas. An analysis of the DME for the different antennas is presented. The spiral antenna achieves a minimum DME in the range of 1 m. The scattering of the data for the error spread in the TDOA channel profile is analyzed to show the error behavior. The results show the effect of an increasing number of recordings on the DME. The behavior of the transmitter antennas with respect to DME and their standard deviations are depicted through the results, which reduce the error floor to less than 1 m. To the best of our knowledge, this reduction has not previously been achieved in the literature.
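
    As a rough illustration of how a DME figure arises from TDOA range estimation, the hypothetical sketch below adds timing jitter to the arrival-time differences seen by four receivers, solves the hyperbolic positioning problem by nonlinear least squares, and reports the resulting distance error. The geometry and noise level are invented and are not those of the 8-antenna WLAN testbed.

        import numpy as np
        from scipy.optimize import least_squares

        c = 3.0e8                                        # propagation speed (m/s)
        rx = np.array([[0.0, 0.0], [10.0, 0.0],
                       [0.0, 10.0], [10.0, 10.0]])       # receiver positions (m)
        target = np.array([3.0, 4.0])
        rng = np.random.default_rng(2)

        # TDOA relative to receiver 0, corrupted by timing jitter.
        d = np.linalg.norm(rx - target, axis=1)
        tdoa = (d - d[0]) / c + rng.normal(0.0, 0.3e-9, len(d))  # 0.3 ns jitter

        def residual(pos):
            dd = np.linalg.norm(rx - pos, axis=1)
            return (dd - dd[0]) / c - tdoa

        est = least_squares(residual, x0=np.array([5.0, 5.0])).x
        print(f"DME = {np.linalg.norm(est - target):.3f} m")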

  7. Lidar Uncertainty Measurement Experiment (LUMEX) - Understanding Sampling Errors

    Science.gov (United States)

    Choukulkar, A.; Brewer, W. A.; Banta, R. M.; Hardesty, M.; Pichugina, Y.; Senff, Christoph; Sandberg, S.; Weickmann, A.; Carroll, B.; Delgado, R.; Muschinski, A.

    2016-06-01

    Coherent Doppler LIDAR (Light Detection and Ranging) has been widely used to provide measurements of several boundary layer parameters such as profiles of wind speed, wind direction, vertical velocity statistics, mixing layer heights and turbulent kinetic energy (TKE). An important aspect of providing this wide range of meteorological data is to properly characterize the uncertainty associated with these measurements. With the above intent in mind, the Lidar Uncertainty Measurement Experiment (LUMEX) was conducted at Erie, Colorado during the period June 23rd to July 13th, 2014. The major goals of this experiment were the following: (i) characterize the sampling error for vertical velocity statistics; (ii) analyze the sensitivities of different Doppler lidar systems; (iii) compare various single- and dual-Doppler retrieval techniques; (iv) characterize the error of spatial representativeness for separation distances up to 3 km; and (v) validate turbulence analysis techniques and retrievals from Doppler lidars. This experiment brought together 5 Doppler lidars, both commercial and research grade, for a period of three weeks for a comprehensive intercomparison study. The Doppler lidars were deployed at the Boulder Atmospheric Observatory (BAO) site in Erie, site of a 300 m meteorological tower. This tower was instrumented with six sonic anemometers at levels from 50 m to 300 m with 50 m vertical spacing. A brief overview of the experiment outline and deployment will be presented. Results from the sampling error analysis and its implications on scanning strategy will be discussed.

  8. Effects of vibration measurement error on remote sensing image restoration

    Science.gov (United States)

    Sun, Xuan; Wei, Zhang; Zhi, Xiyang

    2016-10-01

    Satellite vibrations lead to image motion blur. Since vibration isolators cannot fully suppress the influence of vibrations, image restoration methods are usually adopted, and the vibration characteristics of the imaging system are usually required as algorithm inputs for better restoration results, making the vibration measurement error strongly connected to the final outcome. If the measurement error surpasses a certain range, the restoration may not be implemented successfully. It is therefore important to test the applicable scope of restoration algorithms and to control the vibrations within that range; on the other hand, if the algorithm is robust, then the requirements for both the vibration isolator and the vibration detector can be lowered, reducing the financial cost. In this paper, vibration-induced degradation is first analyzed, and on that basis the effects of measurement error on image restoration are further analyzed. The vibration-induced degradation is simulated using high-resolution satellite images, and the applicable working conditions of typical restoration algorithms are then tested with corresponding simulation experiments. The research carried out in this paper provides a valuable reference for future satellite designs that plan to implement restoration algorithms.

  9. Measurement error in CT assessment of appendix diameter

    Energy Technology Data Exchange (ETDEWEB)

    Trout, Andrew T.; Towbin, Alexander J. [Cincinnati Children's Hospital Medical Center, Department of Radiology, MLC 5031, Cincinnati, OH (United States); Zhang, Bin [Cincinnati Children's Hospital Medical Center, Department of Biostatistics and Epidemiology, Cincinnati, OH (United States)

    2016-12-15

    Appendiceal diameter continues to be cited as an important criterion for diagnosis of appendicitis by computed tomography (CT). To assess sources of error and variability in appendiceal diameter measurements by CT. In this institutional review board-approved review of imaging and medical records, we reviewed CTs performed in children <18 years of age between Jan. 1 and Dec. 31, 2010. Appendiceal diameter was measured in the axial and coronal planes by two reviewers (R1, R2). One year later, 10% of cases were remeasured. For patients who had multiple CTs, serial measurements were made to assess within-patient variability. Measurement differences between planes, within and between reviewers, within patients and between CT and pathological measurements were assessed using correlation coefficients and paired t-tests. Six hundred thirty-one CTs performed in 519 patients (mean age: 10.9 ± 4.9 years, 50.8% female) were reviewed. Axial and coronal measurements were strongly correlated (r = 0.92-0.94, P < 0.0001) with coronal plane measurements significantly larger (P < 0.0001). Measurements were strongly correlated between reviewers (r = 0.89-0.9, P < 0.0001) but differed significantly in both planes (axial: +0.2 mm, P = 0.003; coronal: +0.1 mm, P = 0.007). Repeat measurements were significantly different for one reviewer only in the axial plane (0.3 mm difference, P < 0.05). Within patients imaged multiple times, measured appendix diameters differed significantly in the axial plane for both reviewers (R1: 0.5 mm, P = 0.031; R2: 0.7 mm, P = 0.022). Multiple potential sources of measurement error raise concern about the use of rigid diameter cutoffs for the diagnosis of acute appendicitis by CT. (orig.)

  10. Motion measurement errors and autofocus in bistatic SAR.

    Science.gov (United States)

    Rigling, Brian D; Moses, Randolph L

    2006-04-01

    This paper discusses the effect of motion measurement errors (MMEs) on measured bistatic synthetic aperture radar (SAR) phase history data that has been motion compensated to the scene origin. We characterize the effect of low-frequency MMEs on bistatic SAR images, and, based on this characterization, we derive limits on the allowable MMEs to be used as system specifications. Finally, we demonstrate that proper orientation of a bistatic SAR image during the image formation process allows application of monostatic SAR autofocus algorithms in postprocessing to mitigate image defocus.

  11. Measurement Error Effects of Beam Parameters Determined by Beam Profiles

    CERN Document Server

    Jang, Ji-Ho; Jeon, Dong-O

    2015-01-01

    A conventional method to determine beam parameters is to use profile measurements and convert them into the values of the Twiss parameters and beam emittance at a specified position. The beam information can be used to improve transverse beam matching between two different beam lines or accelerating structures. This work addresses the measurement error effects on the beam parameters and the optimal number of profile monitors in the section between the MEBT (medium energy beam transport) and the QWR (quarter wave resonator) of the RAON linear accelerator.

  12. Error reduction techniques for measuring long synchrotron mirrors

    Energy Technology Data Exchange (ETDEWEB)

    Irick, S.

    1998-07-01

    Many instruments and techniques are used for measuring long mirror surfaces. A Fizeau interferometer may be used to measure mirrors much longer than the interferometer aperture size by using grazing incidence at the mirror surface and analyzing the light reflected from a flat end mirror. Advantages of this technique are data acquisition speed and use of a common instrument. Disadvantages are reduced sampling interval, uncertainty of tangential position, and sagittal/tangential aspect ratio other than unity. Also, deep aspheric surfaces cannot be measured on a Fizeau interferometer without a specially made fringe nulling holographic plate. Other scanning instruments have been developed for measuring height, slope, or curvature profiles of the surface, but lack accuracy for very long scans required for X-ray synchrotron mirrors. The Long Trace Profiler (LTP) was developed specifically for long x-ray mirror measurement, and still outperforms other instruments, especially for aspheres. Thus, this paper focuses on error reduction techniques for the LTP.

  13. Quantifying soil CO2 respiration measurement error across instruments

    Science.gov (United States)

    Creelman, C. A.; Nickerson, N. R.; Risk, D. A.

    2010-12-01

    A variety of instrumental methodologies have been developed in an attempt to accurately measure the rate of soil CO2 respiration. Among the most commonly used are the static and dynamic chamber systems. The degree to which these methods misread or perturb the soil CO2 signal, however, is poorly understood. One source of error in particular is the introduction of lateral diffusion due to the disturbance of the steady-state CO2 concentrations. The addition of soil collars to the chamber system attempts to address this perturbation, but may induce additional errors from the increased physical disturbance. Using a numerical 3D soil-atmosphere diffusion model, we are undertaking a comprehensive comparative study of existing static and dynamic chambers, as well as a solid-state CTFD probe. Specifically, we are examining the 3D diffusion errors associated with each method and opportunities for correction. In this study, the impact of collar length, chamber geometry, chamber mixing and diffusion parameters on the magnitude of lateral diffusion around the instrument are quantified in order to provide insight into obtaining more accurate soil respiration estimates. Results suggest that while each method can approximate the true flux rate under idealized conditions, the associated errors can be of a high magnitude and may vary substantially in their sensitivity to these parameters. In some cases, factors such as the collar length and chamber exchange rate used are coupled in their effect on accuracy. Due to the widespread use of these instruments, it is critical that the nature of their biases and inaccuracies be understood in order to inform future development, ensure the accuracy of current measurements and to facilitate inter-comparison between existing datasets.

  14. Approximation of bivariate copulas by patched bivariate Fréchet copulas

    KAUST Repository

    Zheng, Yanting

    2011-03-01

    Bivariate Fréchet (BF) copulas characterize dependence as a mixture of three simple structures: comonotonicity, independence and countermonotonicity. They are easily interpretable but have limitations when used as approximations to general dependence structures. To improve the approximation property of BF copulas while keeping the advantage of easy interpretation, we develop a new copula approximation scheme that uses BF copulas locally and patches the local pieces together. Error bounds and a probabilistic interpretation of this approximation scheme are developed. The new approximation scheme is compared with several existing copula approximations, including the shuffle-of-min, checkmin, checkerboard and Bernstein approximations, and exhibits better performance, especially in characterizing local dependence. The utility of the new approximation scheme in insurance and finance is illustrated in the computation of rainbow option prices and stop-loss premiums. © 2010 Elsevier B.V.
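
    The BF copula itself is a three-term mixture, which is what makes the patching idea easy to picture. A minimal sketch of the building block follows (of the mixture only, not of the full patched construction).

        import numpy as np

        def bf_copula(u, v, a, b):
            """Bivariate Frechet copula: mixture of the comonotone, independence
            and countermonotone copulas with weights a, b and 1 - a - b."""
            w = 1.0 - a - b
            assert min(a, b, w) >= 0.0
            return (a * np.minimum(u, v)                  # comonotonicity
                    + b * u * v                           # independence
                    + w * np.maximum(u + v - 1.0, 0.0))   # countermonotonicity

        # Spearman's rho is linear in the copula, so rho_s = a - (1 - a - b);
        # a patched approximation applies such mixtures locally on subrectangles.
        print(bf_copula(0.3, 0.6, a=0.5, b=0.3))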

  15. Integration of Error Compensation of Coordinate Measuring Machines into Feature Measurement: Part I—Model Development

    Directory of Open Access Journals (Sweden)

    Roque Calvo

    2016-09-01

    Full Text Available The development of an error compensation model for coordinate measuring machines (CMMs) and its integration into feature measurement is presented. CMMs are widespread and dependable instruments in industry and laboratories for dimensional measurement. From the tip probe sensor to the machine display, there is a complex transformation of probed point coordinates through the geometrical feature model that makes the assessment of accuracy and of uncertainty in measurement results difficult. As a consequence, error compensation is not standardized, in contrast to other, simpler instruments. Detailed coordinate error compensation models are generally based on treating the CMM as a rigid body and require a detailed mapping of the CMM's behavior. In this paper a new type of error compensation model is proposed. It evaluates the error from the vectorial composition of length errors by axis and integrates it into the geometrical measurement model. The variability not explained by the model is incorporated into the uncertainty budget. Model parameters are analyzed and linked to the geometrical errors and uncertainty of the CMM response. Next, the outstanding measurement models of flatness, angle, and roundness are developed. The proposed models are useful for measurement improvement with easy integration into CMM signal processing, in particular in industrial environments where built-in solutions are sought. A battery of implementation tests is presented in Part II, where the experimental endorsement of the model is included.

  16. Patient motion tracking in the presence of measurement errors.

    Science.gov (United States)

    Haidegger, Tamás; Benyó, Zoltán; Kazanzides, Peter

    2009-01-01

    The primary aim of computer-integrated surgical systems is to provide physicians with superior surgical tools for better patient outcomes. Robotic technology is capable of both minimally invasive surgery and microsurgery, offering remarkable advantages for the surgeon and the patient. Current systems allow for sub-millimeter intraoperative spatial positioning; however, certain limitations still remain. Measurement noise and unintended changes in the operating room environment can result in major errors. Positioning errors are a significant danger to patients in procedures involving robots and other automated devices. We have developed a new robotic system at the Johns Hopkins University to support cranial drilling in neurosurgery procedures. The robot provides advanced visualization and safety features. The generic algorithm described in this paper allows for automated compensation of patient motion through optical tracking and Kalman filtering. When applied to the neurosurgery setup, preliminary results show that it is possible to identify patient motion within 700 ms and apply the appropriate compensation with an average positioning error of 1.24 mm after 2 s of setup time.
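
    The motion-compensation loop rests on standard Kalman filtering of tracker readings. A minimal one-dimensional constant-velocity filter is sketched below; the sampling rate, noise levels and drift are invented stand-ins for the optical tracking setup described above.

        import numpy as np

        dt = 0.05                                  # 20 Hz tracker, illustrative
        F = np.array([[1.0, dt], [0.0, 1.0]])      # constant-velocity transition
        H = np.array([[1.0, 0.0]])                 # only position is measured
        Q = 1e-4 * np.eye(2)                       # process noise (patient motion)
        Rm = np.array([[0.25]])                    # tracker noise variance (mm^2)

        x = np.zeros((2, 1))                       # state: [position, velocity]
        P = np.eye(2)
        rng = np.random.default_rng(3)
        true_pos = 0.0

        for _ in range(100):
            true_pos += 0.02                       # slow drift, mm per step
            z = true_pos + rng.normal(0.0, 0.5)    # noisy marker reading (mm)

            x = F @ x                              # predict
            P = F @ P @ F.T + Q
            K = P @ H.T @ np.linalg.inv(H @ P @ H.T + Rm)
            x = x + K * (z - (H @ x).item())       # update
            P = (np.eye(2) - K @ H) @ P

        print(f"true = {true_pos:.2f} mm, estimate = {x[0, 0]:.2f} mm")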

  17. Comparison of Neural Network Error Measures for Simulation of Slender Marine Structures

    DEFF Research Database (Denmark)

    Christiansen, Niels H.; Voie, Per Erlend Torbergsen; Winther, Ole

    2014-01-01

    Training of an artificial neural network (ANN) adjusts the internal weights of the network in order to minimize a predefined error measure. This error measure is given by an error function. Several different error functions are suggested in the literature. However, by far the most common measure...

  18. Propagation of radiosonde pressure sensor errors to ozonesonde measurements

    Directory of Open Access Journals (Sweden)

    R. M. Stauffer

    2013-08-01

    Full Text Available Several previous studies highlight pressure (or, equivalently, pressure altitude) discrepancies between the radiosonde pressure sensor and that derived from a GPS flown with the radiosonde. The offsets vary during the ascent both in absolute and percent pressure differences. To investigate this, a total of 501 radiosonde/ozonesonde launches from the Southern Hemisphere subtropics to northern mid-latitudes are considered, with launches between 2006–2013 from both historical and campaign-based intensive stations. Three types of electrochemical concentration cell (ECC) ozonesondes from two manufacturers (Science Pump Corporation, SPC, and ENSCI/Droplet Measurement Technologies, DMT) and five series of radiosondes from two manufacturers (International Met Systems: iMet, iMet-P, iMet-S; and Vaisala: RS80 and RS92) are analyzed to determine the magnitude of the pressure offset and the effects these offsets have on the calculation of ECC ozone (O3) mixing ratio profiles (O3MR) from the ozonesonde-measured partial pressure. Approximately half of all offsets are > ±0.7 hPa in the free troposphere, with nearly a quarter > ±1.0 hPa at 26 km, where the 1.0 hPa error represents ~5% of the total atmospheric pressure. Pressure offsets have negligible effects on O3MR below 20 km (98% of launches lie within ±5% O3MR error at 20 km). Ozone mixing ratio errors in the 7–15 hPa layer (29–32 km), a region critical for detection of long-term O3 trends, can approach greater than ±10% (>25% of launches that reach 30 km exceed this threshold). Comparisons of total column O3 yield average differences of +1.6 DU (−1.1 to +4.9 DU, 10th to 90th percentiles) when the O3 is integrated to burst with addition of the McPeters and Labow (2012) above-burst O3 column climatology. Total column differences are reduced to an average of +0.1 DU (−1.1 to +2.2 DU) when the O3 profile is integrated to 10 hPa with subsequent addition of the O3 climatology above 10 hPa. The RS92 radiosondes are clearly
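
    The leverage of a pressure offset on the mixing ratio is simple to reproduce: the ozone mixing ratio is the measured partial pressure divided by the total pressure, so a fixed offset dP produces a relative error of roughly -dP/P, negligible in the lower troposphere but large near 10 hPa. The levels below are illustrative.

        # O3MR ~ pO3 / P, so a fixed radiosonde pressure offset dP changes the
        # mixing ratio by roughly -dP/P in relative terms.
        dP = 1.0                                 # pressure offset (hPa)
        for P in (500.0, 100.0, 26.0, 10.0):     # total pressure levels (hPa)
            print(f"P = {P:5.1f} hPa: relative O3MR error ~ {100.0 * dP / P:.1f}%")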

  19. Examiner error in curriculum-based measurement of oral reading.

    Science.gov (United States)

    Cummings, Kelli D; Biancarosa, Gina; Schaper, Andrew; Reed, Deborah K

    2014-08-01

    Although curriculum-based measures of oral reading (CBM-R) have strong technical adequacy, there is still reason to believe that student performance may be influenced by factors of the testing situation, such as errors examiners make in administering and scoring the test. This study examined the construct-irrelevant variance introduced by examiners using a cross-classified multilevel model. We sought to determine the extent of variance in student CBM-R scores attributable to examiners and, if present, the extent to which it was moderated by students' grade level and English learner (EL) status. Fit indices indicated that a cross-classified random effects model (CCREM) best fit the data, with measures nested within students, students nested within schools, and examiners crossing schools. Intraclass correlations of the CCREM revealed that roughly 16% of the variance in student CBM-R scores was associated with examiners. The remaining variance was associated with the measurement level, 3.59%; between students, 75.23%; and between schools, 5.21%. Results were moderated by grade level but not by EL status. The discussion addresses the implications of this error for low-stakes and high-stakes decisions about students, teacher evaluation systems, and hypothesis testing in reading intervention research.

  20. Measurements of Aperture Averaging on Bit-Error-Rate

    Science.gov (United States)

    Bastin, Gary L.; Andrews, Larry C.; Phillips, Ronald L.; Nelson, Richard A.; Ferrell, Bobby A.; Borbath, Michael R.; Galus, Darren J.; Chin, Peter G.; Harris, William G.; Marin, Jose A.; Burdge, Geoffrey L.; Wayne, David; Pescatore, Robert

    2005-01-01

    We report on measurements made at the Shuttle Landing Facility (SLF) runway at Kennedy Space Center of receiver aperture averaging effects on a propagating optical Gaussian beam wave over a propagation path of 1,000 m. A commercially available instrument with both transmit and receive apertures was used to transmit a modulated laser beam operating at 1550 nm through a transmit aperture of 2.54 cm. An identical model of the same instrument was used as a receiver with a single aperture that was varied in size up to 20 cm to measure the effect of receiver aperture averaging on bit error rate. Simultaneous measurements were also made with a scintillometer instrument and local weather station instruments to characterize atmospheric conditions along the propagation path during the experiments.

  1. Development of an Abbe Error Free Micro Coordinate Measuring Machine

    Directory of Open Access Journals (Sweden)

    Qiangxian Huang

    2016-04-01

    Full Text Available A micro Coordinate Measuring Machine (CMM) with a measurement volume of 50 mm × 50 mm × 50 mm and a measuring accuracy of about 100 nm (2σ) has been developed. In this new micro CMM, an XYZ stage, which is driven by three piezo-motors in the X, Y and Z directions, can achieve a drive resolution of about 1 nm and a stroke of more than 50 mm. In order to reduce the crosstalk among the X-, Y- and Z-stages, a special mechanical structure, called a co-planar stage, is introduced. The movement of the stage in each direction is detected by a laser interferometer. A contact-type probe is adopted for measurement. The center of the probe ball coincides with the intersection point of the measuring axes of the three laser interferometers. Therefore, the metrological system of the CMM obeys the Abbe principle in three directions and is free from Abbe error. The CMM is placed in an anti-vibration and thermostatic chamber to avoid the influence of vibration and temperature fluctuation. A series of experimental results shows that the measurement uncertainty within 40 mm along the X, Y and Z directions is about 100 nm (2σ). The flatness of the measuring face of a gauge block is also measured, verifying the performance of the developed micro CMM.

  2. Interval estimation for rank correlation coefficients based on the probit transformation with extension to measurement error correction of correlated ranked data.

    Science.gov (United States)

    Rosner, Bernard; Glynn, Robert J

    2007-02-10

    The Spearman (ρ_s) and Kendall (τ) rank correlation coefficients are routinely used as measures of association between non-normally distributed random variables. However, confidence limits for ρ_s are only available under the assumption of bivariate normality, and for τ under the assumption of asymptotic normality of the estimate. In this paper, we introduce another approach for obtaining confidence limits for ρ_s or τ based on the arcsin transformation of sample probit score correlations. This approach is shown to be applicable for an arbitrary bivariate distribution. The arcsin-based estimators for ρ_s and τ (denoted by ρ_s,a and τ_a) are shown to have an asymptotic relative efficiency (ARE) of 9/π² compared with the usual estimators of ρ_s and τ when ρ_s and τ are, respectively, 0. In some nutritional applications, the Spearman rank correlation between nutrient intake as assessed by a reference instrument and nutrient intake as assessed by a surrogate instrument is used as a measure of validity of the surrogate instrument. However, if only a single replicate (or a few replicates) are available for the reference instrument, then the estimated Spearman rank correlation will be downwardly biased due to measurement error. In this paper, we use the probit transformation as a tool for specifying an ANOVA-type model for replicate ranked data, resulting in point and interval estimates of a measurement-error-corrected rank correlation. This extends previous work by Rosner and Willett for obtaining point and interval estimates of measurement-error-corrected Pearson correlations.
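
    The core of the approach is to replace ranks by probit (normal) scores, correlate the scores, and build the interval on the arcsin scale. The sketch below is a simplified, hypothetical rendering: the standard error used is a convenient large-sample stand-in, not the variance formula derived in the paper.

        import numpy as np
        from scipy.stats import norm, rankdata

        def probit_corr_ci(x, y, alpha=0.05):
            """Correlation of probit scores of the ranks with an arcsin-based CI
            (illustrative; the paper's exact variance expressions differ)."""
            n = len(x)
            zx = norm.ppf(rankdata(x) / (n + 1.0))   # probit scores
            zy = norm.ppf(rankdata(y) / (n + 1.0))
            r = np.corrcoef(zx, zy)[0, 1]
            t = np.arcsin(r)                         # variance-stabilising transform
            half = norm.ppf(1.0 - alpha / 2.0) / np.sqrt(n - 3.0)  # assumed SE
            lo = np.sin(np.clip(t - half, -np.pi / 2, np.pi / 2))
            hi = np.sin(np.clip(t + half, -np.pi / 2, np.pi / 2))
            return r, (lo, hi)

        rng = np.random.default_rng(4)
        x = rng.gamma(2.0, size=200)                 # deliberately non-normal margins
        y = x + rng.gamma(2.0, size=200)
        print(probit_corr_ci(x, y))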

  3. Error sources in atomic force microscopy for dimensional measurements: Taxonomy and modeling

    DEFF Research Database (Denmark)

    Marinello, F.; Voltan, A.; Savio, E.

    2010-01-01

    This paper aimed at identifying the error sources that occur in dimensional measurements performed using atomic force microscopy. In particular, a set of characterization techniques for error quantification is presented. The discussion on error sources is organized into four main categories: scanning system, tip-surface interaction, environment, and data processing. The discussed errors include scaling effects, squareness errors, hysteresis, creep, tip convolution, and thermal drift. A mathematical model of the measurement system is eventually described, as a reference basis for errors

  4. Aerogel Antennas Communications Study Using Error Vector Magnitude Measurements

    Science.gov (United States)

    Miranda, Felix A.; Mueller, Carl H.; Meador, Mary Ann B.

    2014-01-01

    This presentation discusses an aerogel antenna communication study using error vector magnitude (EVM) measurements. The study was performed using 2x4 element polyimide (PI) aerogel-based phased arrays designed for operation at 5 GHz as transmit (Tx) and receive (Rx) antennas separated by a line-of-sight (LOS) distance of 8.5 meters. The results of the EVM measurements demonstrate that polyimide aerogel antennas work appropriately to support digital communication links with typically used modulation schemes such as QPSK and pi/4 DQPSK. As such, PI aerogel antennas with higher gain, larger bandwidth and lower mass than typically used microwave laminates could be suitable to enable aerospace-to-ground communication links with enough channel capacity to support voice, data and video links from CubeSats, unmanned air vehicles (UAV), and commercial aircraft.

  5. Aerogel Antennas Communications Study Using Error Vector Magnitude Measurements

    Science.gov (United States)

    Miranda, Felix A.; Mueller, Carl H.; Meador, Mary Ann B.

    2014-01-01

    This paper discusses an aerogel antennas communication study using error vector magnitude (EVM) measurements. The study was performed using 4x2 element polyimide (PI) aerogel-based phased arrays designed for operation at 5 GHz as transmit (Tx) and receive (Rx) antennas separated by a line of sight (LOS) distance of 8.5 meters. The results of the EVM measurements demonstrate that polyimide aerogel antennas work appropriately to support digital communication links with typically used modulation schemes such as QPSK and pi/4 DQPSK. As such, PI aerogel antennas with higher gain, larger bandwidth and lower mass than typically used microwave laminates could be suitable to enable aerospace-to-ground communication links with enough channel capacity to support voice, data and video links from cubesats, unmanned air vehicles (UAV), and commercial aircraft.

  6. Sampling errors in the measurement of rain and hail parameters

    Science.gov (United States)

    Gertzman, H. S.; Atlas, D.

    1977-01-01

    Attention is given to a general derivation of the fractional standard deviation (FSD) of any integrated property X such that X(D) = cD^n. This work extends that of Joss and Waldvogel (1969). The equation is applicable to measuring integrated properties of cloud, rain or hail populations (such as water content, precipitation rate, kinetic energy, or radar reflectivity) which are subject to statistical sampling errors due to the Poisson-distributed fluctuations of particles sampled in each particle size interval and the weighted sum of the associated variances in proportion to their contribution to the integral parameter to be measured. Universal curves are presented which are applicable to the exponential size distribution, permitting FSD estimation of any parameter from n = 0 to n = 6. The equations and curves also permit corrections for finite upper limits in the size spectrum and a realistic fall speed law.

  7. Cylindricity Error Measuring and Evaluating for Engine Cylinder Bore in Manufacturing Procedure

    OpenAIRE

    Qiang Chen; Xueheng Tao; Jinshi Lu; Xuejun Wang

    2016-01-01

    An on-line measuring device for cylindricity error is designed based on the two-point method error separation technique (EST), which can separate spindle rotation error from measuring error. According to the principle of the measuring device, the mathematical model of the minimum zone method for cylindricity error evaluation is established. The number of optimized parameters of the objective function decreases from six to four by assuming that c is equal to zero and h is equal to one. Initial values of optimized parameters...

  8. Seeing-Induced Errors in Solar Doppler Velocity Measurements

    CERN Document Server

    Padinhatteeri, Sreejith; Sankarasubramanian, K; 10.1007/s11207-010-9597-1

    2010-01-01

    Imaging systems based on a narrow-band tunable filter are used to obtain Doppler velocity maps of solar features. These velocity maps are created by taking the difference between the blue- and red-wing intensity images of a chosen spectral line. This method has the inherent assumption that these two images are obtained under identical conditions. Given the dynamical nature of solar features as well as of the Earth's atmosphere, systematic errors can be introduced in such measurements. In this paper, a quantitative estimate of the errors introduced by variable seeing conditions in ground-based observations is simulated and compared with real observational data to assess their reliability. It is shown, under such conditions, that there is a strong cross-talk from the total intensity to the velocity estimates. These spurious velocities are larger in magnitude for the umbral regions compared to the penumbra or quiet-sun regions surrounding the sunspots. The variable seeing can induce spurious velocities...

  9. Bivariate ensemble model output statistics approach for joint forecasting of wind speed and temperature

    Science.gov (United States)

    Baran, Sándor; Möller, Annette

    2017-02-01

    Forecast ensembles are typically employed to account for prediction uncertainties in numerical weather prediction models. However, ensembles often exhibit biases and dispersion errors, so they require statistical post-processing to improve their predictive performance. Two popular univariate post-processing models are Bayesian model averaging (BMA) and ensemble model output statistics (EMOS). In the last few years, increased interest has emerged in developing multivariate post-processing models incorporating dependencies between weather quantities, such as a bivariate distribution for wind vectors or an even more general setting allowing any types of weather variables to be combined. In line with a recently proposed approach to model temperature and wind speed jointly by a bivariate BMA model, this paper introduces an EMOS model for these weather quantities based on a bivariate truncated normal distribution. The bivariate EMOS model is applied to temperature and wind speed forecasts of the 8-member University of Washington mesoscale ensemble and the 11-member ALADIN-HUNEPS ensemble of the Hungarian Meteorological Service, and its predictive performance is compared to that of the bivariate BMA model and a multivariate Gaussian copula approach which post-processes the margins with univariate EMOS. While the predictive skills of the compared methods are similar, the bivariate EMOS model requires considerably lower computation times than the bivariate BMA method.
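
    Stripped of the truncation and of ensemble-spread terms, a bivariate EMOS fit reduces to maximum likelihood for a bivariate normal whose mean is an affine function of the ensemble mean. The sketch below uses synthetic data and a Cholesky parameterisation of the covariance; it is a cartoon of the model class, not the paper's truncated-normal implementation.

        import numpy as np
        from scipy.optimize import minimize
        from scipy.stats import multivariate_normal

        rng = np.random.default_rng(5)
        T = 500                                        # forecast cases

        # Synthetic ensemble means and verifying (temperature, wind) observations.
        ens_mean = rng.normal([15.0, 5.0], [5.0, 2.0], size=(T, 2))
        obs = ens_mean + rng.multivariate_normal(
            [0.5, -0.3], [[1.0, 0.4], [0.4, 0.8]], size=T)

        def nll(p):
            a = p[:2]
            b = p[2:4]
            # Cholesky factor guarantees a positive definite covariance.
            L = np.array([[np.exp(p[4]), 0.0], [p[5], np.exp(p[6])]])
            S = L @ L.T
            mu = a + b * ens_mean
            return -multivariate_normal.logpdf(obs - mu, mean=[0, 0], cov=S).sum()

        p0 = np.array([0.0, 0.0, 1.0, 1.0, 0.0, 0.0, 0.0])
        res = minimize(nll, p0, method="Nelder-Mead", options={"maxiter": 5000})
        print("fitted bias and scale coefficients:", np.round(res.x[:4], 3))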

  10. A new bivariate negative binomial regression model

    Science.gov (United States)

    Faroughi, Pouya; Ismail, Noriszura

    2014-12-01

    This paper introduces a new form of bivariate negative binomial (BNB-1) regression which can be fitted to bivariate and correlated count data with covariates. The BNB regression discussed in this study can be fitted to bivariate and overdispersed count data with positive, zero or negative correlations. The joint p.m.f. of the BNB-1 distribution is derived from the product of two negative binomial marginals with a multiplicative factor parameter. Several testing methods were used to check the overdispersion and goodness-of-fit of the model. The application of BNB-1 regression is illustrated on a Malaysian motor insurance dataset. The results indicate that BNB-1 regression has a better fit than the bivariate Poisson and BNB-2 models with regard to the Akaike information criterion.
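
    One common way to build such a joint p.m.f. from two negative binomial marginals and a multiplicative factor parameter is a Sarmanov-type construction, sketched below; the exact factor used in the paper may differ, so treat this as illustrative.

        import numpy as np
        from scipy.stats import nbinom

        def bnb1_pmf(y1, y2, r1, p1, r2, p2, lam):
            """NB marginals times 1 + lam*(exp(-y1) - c1)*(exp(-y2) - c2), where
            ci = E[exp(-Yi)] so the margins are preserved; lam must be small
            enough to keep the factor non-negative."""
            c1 = (p1 / (1.0 - (1.0 - p1) * np.exp(-1.0))) ** r1
            c2 = (p2 / (1.0 - (1.0 - p2) * np.exp(-1.0))) ** r2
            base = nbinom.pmf(y1, r1, p1) * nbinom.pmf(y2, r2, p2)
            return base * (1.0 + lam * (np.exp(-y1) - c1) * (np.exp(-y2) - c2))

        # Sanity check: the joint p.m.f. sums to ~1 over a large grid.
        g1, g2 = np.meshgrid(np.arange(150), np.arange(150), indexing="ij")
        print(bnb1_pmf(g1, g2, 2.0, 0.4, 3.0, 0.5, 0.5).sum())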

  11. Statistical Inference for Partially Linear Regression Models with Measurement Errors

    Institute of Scientific and Technical Information of China (English)

    Jinhong YOU; Qinfeng XU; Bin ZHOU

    2008-01-01

    In this paper, the authors investigate three aspects of statistical inference for partially linear regression models where some covariates are measured with errors. Firstly, a bandwidth selection procedure is proposed, which is a combination of the difference-based technique and the GCV method. Secondly, a goodness-of-fit test procedure is proposed, which is an extension of the generalized likelihood technique. Thirdly, a variable selection procedure for the parametric part is provided based on nonconcave penalization and corrected profile least squares. As in "Variable selection via nonconcave penalized likelihood and its oracle properties" (J. Amer. Statist. Assoc., 96, 2001, 1348-1360), it is shown that the resulting estimator has an oracle property with a proper choice of regularization parameters and penalty function. Simulation studies are conducted to illustrate the finite sample performances of the proposed procedures.

  12. Francesca Hughes: Architecture of Error: Matter, Measure and the Misadventure of Precision

    DEFF Research Database (Denmark)

    Foote, Jonathan

    2016-01-01

    Review of "Architecture of Error: Matter, Measure and the Misadventure of Precision" by Francesca Hughes (MIT Press, 2014).

  13. Bayesian modeling of measurement error in predictor variables using item response theory

    NARCIS (Netherlands)

    Fox, Jean-Paul; Glas, Cees A.W.

    2000-01-01

    This paper focuses on handling measurement error in predictor variables using item response theory (IRT). Measurement error is of great importance in the assessment of theoretical constructs, such as intelligence or school climate. Measurement error is modeled by treating the predictors as unobserved

  14. Cylindricity Error Measuring and Evaluating for Engine Cylinder Bore in Manufacturing Procedure

    Directory of Open Access Journals (Sweden)

    Qiang Chen

    2016-01-01

    Full Text Available An on-line measuring device for cylindricity error is designed based on the two-point method error separation technique (EST), which can separate spindle rotation error from measuring error. According to the principle of the measuring device, the mathematical model of the minimum zone method for cylindricity error evaluation is established. The number of optimized parameters of the objective function decreases from six to four by assuming that c is equal to zero and h is equal to one. Initial values of the optimized parameters are obtained from the least squares method and final values are acquired by the genetic algorithm. The ideal axis of the cylinder is fitted in MATLAB. Compared to the error results of the least squares method, the minimum circumscribed cylinder method, and the maximum inscribed cylinder method, the error result of the minimum zone method conforms to the theory of error evaluation. The results indicate that the method can meet the requirements of engine cylinder bore cylindricity error measuring and evaluating.
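
    The minimum zone evaluation amounts to minimising the radial peak-to-valley over the four free axis parameters (with c = 0 and h = 1 fixing the redundant ones, as above). The sketch below uses least-squares style initial values and a Nelder-Mead search in place of the paper's genetic algorithm; the data are synthetic.

        import numpy as np
        from scipy.optimize import minimize

        def radii(params, pts):
            # Axis through (x0, y0, 0) with direction (a, b, 1).
            x0, y0, a, b = params
            v = np.array([a, b, 1.0])
            v = v / np.linalg.norm(v)
            d = pts - np.array([x0, y0, 0.0])
            return np.linalg.norm(np.cross(d, v), axis=1)

        def min_zone_cylindricity(pts):
            start = [pts[:, 0].mean(), pts[:, 1].mean(), 0.0, 0.0]
            res = minimize(lambda p: np.ptp(radii(p, pts)), start,
                           method="Nelder-Mead")
            return res.fun

        # Synthetic bore: radius 10 mm, slight tilt, 2-lobe form error.
        rng = np.random.default_rng(6)
        t = rng.uniform(0.0, 2.0 * np.pi, 500)
        z = rng.uniform(0.0, 50.0, 500)
        r = 10.0 + 0.01 * np.cos(2.0 * t)
        pts = np.stack([r * np.cos(t) + 0.002 * z, r * np.sin(t), z], axis=1)
        print(f"cylindricity error: {min_zone_cylindricity(pts):.4f} mm")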

  15. Error Analysis for Interferometric SAR Measurements of Ice Sheet Flow

    DEFF Research Database (Denmark)

    Mohr, Johan Jacob; Madsen, Søren Nørvang

    1999-01-01

    and slope errors in conjunction with a surface parallel flow assumption. The most surprising result is that assuming a stationary flow the east component of the three-dimensional flow derived from ascending and descending orbit data is independent of slope errors and of the vertical flow....

  16. Lower extremity angle measurement with accelerometers - error and sensitivity analysis

    NARCIS (Netherlands)

    Willemsen, Antoon Th.M.; Frigo, Carlo; Boom, Herman B.K.

    1991-01-01

    The use of accelerometers for angle assessment of the lower extremities is investigated. This method is evaluated by an error-and-sensitivity analysis using healthy subject data. Of three potential error sources (the reference system, the accelerometers, and the model assumptions) the last is found

  17. Simultaneous Strain and Temperature Measurement with Optical Fiber Gratings: Error Analysis

    Institute of Scientific and Technical Information of China (English)

    JIA Hongzhi; LI Yulin

    2000-01-01

    Many schemes designed to simultaneously measure strain and temperature with optical fiber grating sensors have been reported in recent years. In this paper, the influence of systematic errors associated with the measurement process is analyzed and the error formulas are derived. The results are applied to a range of techniques that are of current interest in the literature. The performance of these schemes is contrasted with respect to the influence of wavelength measurement error and sensitivity matrix error.
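
    The usual setting behind such schemes is a 2 x 2 sensitivity matrix K mapping strain and temperature onto the two wavelength shifts, so measurement errors propagate through the inverse of K and the conditioning of K governs performance. The matrix entries below are invented for illustration.

        import numpy as np

        # Assumed sensitivities (pm per microstrain, pm per kelvin) of two gratings.
        K = np.array([[1.2, 10.0],
                      [0.8, 14.0]])

        dlam = np.array([15.0, 25.0])              # measured wavelength shifts (pm)
        eps, dT = np.linalg.solve(K, dlam)         # strain and temperature change

        # Worst-case effect of a 1 pm wavelength error on each grating.
        worst = np.abs(np.linalg.inv(K)) @ np.array([1.0, 1.0])
        print(f"strain = {eps:.2f} ue, dT = {dT:.2f} K, worst-case errors = {worst}")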

  18. Swath altimetry measurements of the mainstem Amazon River: measurement errors and hydraulic implications

    Directory of Open Access Journals (Sweden)

    M. D. Wilson

    2014-08-01

    Full Text Available The Surface Water and Ocean Topography (SWOT) mission, scheduled for launch in 2020, will provide a step-change improvement in the measurement of terrestrial surface water storage and dynamics. In particular, it will provide the first, routine two-dimensional measurements of water surface elevations. In this paper, we aimed to (i) characterize and illustrate in two dimensions the errors which may be found in SWOT swath measurements of terrestrial surface water, (ii) simulate the spatio-temporal sampling scheme of SWOT for the Amazon, and (iii) assess the impact of each of these on estimates of water surface slope and river discharge which may be obtained from SWOT imagery. We based our analysis on a "virtual mission" for a 300 km reach of the central Amazon (Solimões) River at its confluence with the Purus River, using a hydraulic model to provide water surface elevations according to SWOT spatio-temporal sampling, to which errors were added based on a two-dimensional height error spectrum derived from the SWOT design requirements. We thereby obtained water surface elevation measurements for the Amazon mainstem as may be observed by SWOT. Using these measurements, we derived estimates of river slope and discharge and compared them to those obtained directly from the hydraulic model. We found that cross-channel and along-reach averaging of SWOT measurements using reach lengths of greater than 4 km for the Solimões and 7.5 km for the Purus reduced the effect of systematic height errors, enabling discharge to be reproduced accurately from the water height, assuming known bathymetry and friction. Using cross-section averaging and 20 km reach lengths, results show Nash–Sutcliffe model efficiency values of 0.99 for the Solimões and 0.88 for the Purus, with 2.6 and 19.1% average overall error in discharge, respectively.

  19. Swath altimetry measurements of the mainstem Amazon River: measurement errors and hydraulic implications

    Science.gov (United States)

    Wilson, M. D.; Durand, M.; Jung, H. C.; Alsdorf, D.

    2014-08-01

    The Surface Water and Ocean Topography (SWOT) mission, scheduled for launch in 2020, will provide a step-change improvement in the measurement of terrestrial surface water storage and dynamics. In particular, it will provide the first, routine two-dimensional measurements of water surface elevations. In this paper, we aimed to (i) characterize and illustrate in two dimensions the errors which may be found in SWOT swath measurements of terrestrial surface water, (ii) simulate the spatio-temporal sampling scheme of SWOT for the Amazon, and (iii) assess the impact of each of these on estimates of water surface slope and river discharge which may be obtained from SWOT imagery. We based our analysis on a "virtual mission" for a 300 km reach of the central Amazon (Solimões) River at its confluence with the Purus River, using a hydraulic model to provide water surface elevations according to SWOT spatio-temporal sampling to which errors were added based on a two-dimensional height error spectrum derived from the SWOT design requirements. We thereby obtained water surface elevation measurements for the Amazon mainstem as may be observed by SWOT. Using these measurements, we derived estimates of river slope and discharge and compared them to those obtained directly from the hydraulic model. We found that cross-channel and along-reach averaging of SWOT measurements using reach lengths of greater than 4 km for the Solimões and 7.5 km for Purus reduced the effect of systematic height errors, enabling discharge to be reproduced accurately from the water height, assuming known bathymetry and friction. Using cross-section averaging and 20 km reach lengths, results show Nash-Sutcliffe model efficiency values of 0.99 for the Solimões and 0.88 for the Purus, with 2.6 and 19.1% average overall error in discharge, respectively.
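
    The benefit of along-reach averaging is easy to reproduce: averaging the node heights within a reach shrinks the height noise roughly by the square root of the number of nodes before slopes are differenced. The node spacing, slope and noise level below are invented, not the SWOT error spectrum used in the paper.

        import numpy as np

        rng = np.random.default_rng(7)
        dx = 200.0                                      # node spacing (m)
        x = np.arange(0.0, 100_000.0, dx)               # 100 km of river
        h_true = 50.0 - 2e-5 * x                        # 2 cm/km water-surface slope
        h_obs = h_true + rng.normal(0.0, 0.10, x.size)  # 10 cm height noise

        def reach_slopes(x, h, reach_len):
            nb = int(reach_len // dx)                   # nodes per reach
            n = (x.size // nb) * nb
            xr = x[:n].reshape(-1, nb).mean(axis=1)
            hr = h[:n].reshape(-1, nb).mean(axis=1)
            return np.diff(hr) / np.diff(xr)

        for L in (1_000, 4_000, 20_000):                # reach lengths (m)
            s = reach_slopes(x, h_obs, L)
            print(f"reach {L / 1000.0:4.0f} km: slope {s.mean():.2e} +- {s.std():.1e}")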

  20. Angle measurement error and compensation for decentration rotation of circular gratings

    Institute of Scientific and Technical Information of China (English)

    CHEN Xi-jun; WANG Zhen-huan; ZENG Qing-shuang

    2010-01-01

    As the geometric center of a circular grating does not coincide with the rotation center, the angle measurement error of the circular grating is analyzed. Based on the moiré fringe equations in the decentration condition, the mathematical model of the angle measurement error is derived. It is concluded that the decentration between the center of the circular grating and the center of the revolving shaft leads to a first-harmonic error in the angle measurement. The correctness of this result is proved by experimental data. A method of error compensation is presented, and the angle measurement accuracy of the circular grating is effectively improved by the error compensation.
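
    A hypothetical numerical rendering of this compensation: generate the first-harmonic angle error produced by a small decentration, fit its sine and cosine components against a calibration reference by least squares, and subtract the fit from subsequent readings. Amplitude and phase are illustrative.

        import numpy as np

        N = 3600
        theta = 2 * np.pi * np.arange(N) / N           # reference shaft angle (rad)

        # Decentration e relative to the grating radius R gives, to first order,
        # an angle error of (e/R) * sin(theta + phi).
        e_over_R, phi = 2e-4, 0.7
        measured = theta + e_over_R * np.sin(theta + phi)

        # Least-squares fit of the first harmonic to the calibration error signal.
        err = measured - theta
        A = np.stack([np.sin(theta), np.cos(theta)], axis=1)
        coef, *_ = np.linalg.lstsq(A, err, rcond=None)
        compensated = measured - A @ coef
        print("residual error (rad):", np.abs(compensated - theta).max())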

  1. Specification and Measurement of Mid-Frequency Wavefront Errors

    Institute of Scientific and Technical Information of China (English)

    XUAN Bin; XIE Jing-jiang

    2006-01-01

    Mid-frequency wavefront errors can be of great importance for some optical components, but they are not explicitly covered by the corresponding international standards such as ISO 10110. The testing methods for these errors also leave many aspects to be improved. This paper gives an overview of the specifications, especially of the power spectral density (PSD). The NIF project, developed in the USA, and the XMM project, developed in Europe, have both introduced new testing methods.

  2. Single-step spatial rotation error separation technique for the ultraprecision measurement of surface profiles.

    Science.gov (United States)

    Hou, Maosheng; Qiu, Lirong; Zhao, Weiqian; Wang, Fan; Liu, Entao; Ji, Lin

    2014-01-20

    To improve the measurement accuracy of the profilometer for large optical surfaces, a new single-step spatial rotation error separation technique (SSEST) is proposed to separate the surface profile error and the spindle spatial rotation error, and a novel SSEST-based system for surface profile measurement is developed. In the process of separation, two sets of measured results at the i-th measurement circle are obtained before and after the rotation of the error separation table; the surface profile error and the spatial rotation error of the spindle can then be determined using the discrete Fourier transform and harmonic analysis. Theoretical analyses and experimental results indicate that SSEST can accurately separate the spatial rotation error of the spindle from the measured surface profile results within the range of 1-100 upr and improve the accuracy of surface profile measurements.

  3. Error Ellipsoid Analysis for the Diameter Measurement of Cylindroid Components Using a Laser Radar Measurement System.

    Science.gov (United States)

    Du, Zhengchun; Wu, Zhaoyong; Yang, Jianguo

    2016-05-19

    The use of three-dimensional (3D) data in the industrial measurement field is becoming increasingly popular because of the rapid development of laser scanning techniques based on the time-of-flight principle. However, the accuracy and uncertainty of these types of measurement methods are seldom investigated. In this study, a mathematical uncertainty evaluation model for the diameter measurement of standard cylindroid components has been proposed and applied to a 3D laser radar measurement system (LRMS). First, a single-point error ellipsoid analysis for the LRMS was established. An error ellipsoid model and algorithm for diameter measurement of cylindroid components was then proposed based on the single-point error ellipsoid. Finally, four experiments were conducted using the LRMS to measure the diameter of a standard cylinder in the laboratory. The experimental results of the uncertainty evaluation consistently matched well with the predictions. The proposed uncertainty evaluation model for cylindrical diameters can provide a reliable method for actual measurements and support further accuracy improvement of the LRMS.
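
    The single-point error ellipsoid follows from propagating the range and angle uncertainties through the spherical-to-Cartesian Jacobian, Sigma_xyz = J Sigma J^T. A sketch with illustrative uncertainties (not the LRMS specification) follows.

        import numpy as np

        def point_cov(rho, az, el, s_rho=10e-6, s_ang=2e-6):
            """Cartesian covariance of one measured point from range standard
            deviation s_rho (m) and angle standard deviation s_ang (rad)."""
            ce, se, ca, sa = np.cos(el), np.sin(el), np.cos(az), np.sin(az)
            # x = rho*ce*ca, y = rho*ce*sa, z = rho*se
            J = np.array([[ce * ca, -rho * ce * sa, -rho * se * ca],
                          [ce * sa,  rho * ce * ca, -rho * se * sa],
                          [se,       0.0,            rho * ce]])
            S = np.diag([s_rho ** 2, s_ang ** 2, s_ang ** 2])
            return J @ S @ J.T

        C = point_cov(5.0, 0.3, 0.2)                 # a point 5 m away
        semi_axes = np.sqrt(np.linalg.eigvalsh(C))   # 1-sigma ellipsoid semi-axes (m)
        print(semi_axes)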

  4. Incomplete Bivariate Fibonacci and Lucas p-Polynomials

    Directory of Open Access Journals (Sweden)

    Dursun Tasci

    2012-01-01

    Full Text Available We define the incomplete bivariate Fibonacci and Lucas p-polynomials. In the case x = 1, y = 1, we obtain the incomplete Fibonacci and Lucas p-numbers. If x = 2, y = 1, we have the incomplete Pell and Pell-Lucas p-numbers. On choosing x = 1, y = 2, we get the incomplete generalized Jacobsthal numbers and, for p = 1, the incomplete generalized Jacobsthal-Lucas numbers. In the case x = 1, y = 1, p = 1, we have the incomplete Fibonacci and Lucas numbers. If x = 1, y = 1, p = 1, k = ⌊(n−1)/(p+1)⌋, we obtain the Fibonacci and Lucas numbers. The generating function and properties of the incomplete bivariate Fibonacci and Lucas p-polynomials are also given.

  5. Manifest variable path analysis: potentially serious and misleading consequences due to uncorrected measurement error.

    Science.gov (United States)

    Cole, David A; Preacher, Kristopher J

    2014-06-01

    Despite clear evidence that manifest variable path analysis requires highly reliable measures, path analyses with fallible measures are commonplace even in premier journals. Using fallible measures in path analysis can cause several serious problems: (a) As measurement error pervades a given data set, many path coefficients may be either over- or underestimated. (b) Extensive measurement error diminishes power and can prevent invalid models from being rejected. (c) Even a little measurement error can cause valid models to appear invalid. (d) Differential measurement error in various parts of a model can change the substantive conclusions that derive from path analysis. (e) All of these problems become increasingly serious and intractable as models become more complex. Methods to prevent and correct these problems are reviewed. The conclusion is that researchers should use more reliable measures (or correct for measurement error in the measures they do use), obtain multiple measures for use in latent variable modeling, and test simpler models containing fewer variables.

  6. BIVARIATE FRACTAL INTERPOLATION FUNCTIONS ON RECTANGULAR DOMAINS

    Institute of Scientific and Technical Information of China (English)

    Xiao-yuan Qian

    2002-01-01

    Non-tensor product bivariate fractal interpolation functions defined on gridded rectangular domains are constructed. Linear spaces consisting of these functions are introduced.The relevant Lagrange interpolation problem is discussed. A negative result about the existence of affine fractal interpolation functions defined on such domains is obtained.

  7. BIVARIATE REAL-VALUED ORTHOGONAL PERIODIC WAVELETS

    Institute of Scientific and Technical Information of China (English)

    Qiang Li; Xuezhang Liang

    2005-01-01

    In this paper, we construct a kind of bivariate real-valued orthogonal periodic wavelets. The corresponding decomposition and reconstruction algorithms involve only 8 terms respectively which are very simple in practical computation. Moreover, the relation between periodic wavelets and Fourier series is also discussed.

  8. Swath-altimetry measurements of the main stem Amazon River: measurement errors and hydraulic implications

    Science.gov (United States)

    Wilson, M. D.; Durand, M.; Jung, H. C.; Alsdorf, D.

    2015-04-01

    The Surface Water and Ocean Topography (SWOT) mission, scheduled for launch in 2020, will provide a step-change improvement in the measurement of terrestrial surface-water storage and dynamics. In particular, it will provide the first, routine two-dimensional measurements of water-surface elevations. In this paper, we aimed to (i) characterise and illustrate in two dimensions the errors which may be found in SWOT swath measurements of terrestrial surface water, (ii) simulate the spatio-temporal sampling scheme of SWOT for the Amazon, and (iii) assess the impact of each of these on estimates of water-surface slope and river discharge which may be obtained from SWOT imagery. We based our analysis on a virtual mission for a ~260 km reach of the central Amazon (Solimões) River, using a hydraulic model to provide water-surface elevations according to SWOT spatio-temporal sampling to which errors were added based on a two-dimensional height error spectrum derived from the SWOT design requirements. We thereby obtained water-surface elevation measurements for the Amazon main stem as may be observed by SWOT. Using these measurements, we derived estimates of river slope and discharge and compared them to those obtained directly from the hydraulic model. We found that cross-channel and along-reach averaging of SWOT measurements using reach lengths greater than 4 km for the Solimões and 7.5 km for Purus reduced the effect of systematic height errors, enabling discharge to be reproduced accurately from the water height, assuming known bathymetry and friction. Using cross-sectional averaging and 20 km reach lengths, results show Nash-Sutcliffe model efficiency values of 0.99 for the Solimões and 0.88 for the Purus, with 2.6 and 19.1 % average overall error in discharge, respectively. We extend the results to other rivers worldwide and infer that SWOT-derived discharge estimates may be more accurate for rivers with larger channel widths (permitting a greater level of cross

  9. Total error vs. measurement uncertainty: revolution or evolution?

    Science.gov (United States)

    Oosterhuis, Wytze P; Theodorsson, Elvar

    2016-02-01

    The first strategic EFLM conference "Defining analytical performance goals, 15 years after the Stockholm Conference" was held in the autumn of 2014 in Milan. It maintained the Stockholm 1999 hierarchy of performance goals but rearranged them and established five task-and-finish groups to work on topics related to analytical performance goals, including one on the "total error" theory. Jim Westgard recently wrote a comprehensive overview of performance goals and of the total error theory that is critical of the results and intentions of the Milan 2014 conference. The "total error" theory, originated by Jim Westgard and co-workers, has a dominating influence on the theory and practice of clinical chemistry but is not accepted in other fields of metrology. The generally accepted uncertainty theory, however, suffers from complex mathematics and perceived impracticability in clinical chemistry. The pros and cons of the total error theory need to be debated, making way for methods that can incorporate all relevant causes of uncertainty when making medical diagnoses and monitoring treatment effects. This development should preferably proceed not as a revolution but as an evolution.

  10. Two-Sample, Bivariate Hypothesis Testing Methods Based on Tukey's Depth.

    Science.gov (United States)

    Wilcox, Rand R.

    2003-01-01

    Conducted simulations to explore methods for comparing bivariate distributions corresponding to two independent groups, all of which are based on Tukey's "depth," a generalization of the notion of ranks to multivariate data. Discusses steps needed to control Type I error. (SLD)

  11. Non-differential measurement error does not always bias diagnostic likelihood ratios towards the null

    Directory of Open Access Journals (Sweden)

    Fosgate GT

    2006-07-01

    Full Text Available Abstract Diagnostic test evaluations are susceptible to random and systematic error. Simulated non-differential random error for six different error distributions was evaluated for its effect on measures of diagnostic accuracy for a brucellosis competitive ELISA. Test results were divided into four categories:

  12. Measurement accuracy of articulated arm CMMs with circular grating eccentricity errors

    Science.gov (United States)

    Zheng, Dateng; Yin, Sanfeng; Luo, Zhiyang; Zhang, Jing; Zhou, Taiping

    2016-11-01

    The six-joint circular grating eccentricity error model attempts to improve the measurement accuracy of an articulated arm coordinate measuring machine (AACMM) without increasing the corresponding hardware cost. We analyzed the AACMM's circular grating eccentricity and obtained the error model parameters for the circular gratings of the six joints by conducting circular grating eccentricity error experiments. We completed the calibration operations for the measurement models by using home-made standard bar components. Our results show that the measurement errors from the AACMM's measurement model without and with circular grating eccentricity errors are 0.0834 mm and 0.0462 mm, respectively. Significantly, we determined that measurement accuracy increased by about 44.6% when the circular grating eccentricity errors were corrected. This study is significant because it promotes wider applications of AACMMs both in theory and in practice.

  13. Non-piece-wise error compensation for grating displacement measurement system with absolute zero mark

    Institute of Scientific and Technical Information of China (English)

    Xiaojun Jiang; Huijie Huang; Xiangzhao Wang; Lihua Huang

    2009-01-01

    A method for compensating the measuring error of a grating displacement measurement system with an absolute zero mark is presented. It divides the full-scale range into piece-wise subsections and compares the maximum variation of the measuring errors of two adjacent subsections with a threshold. Whether a given subsection is divided into smaller subsections is determined by the comparison result. After the different compensation parameters and the weighted average values of the random errors are obtained, the error compensation algorithm is applied in the left and right subsections, and the overall measuring error of the grating displacement measurement system is reduced by about 73%. Experimental results show that the method can not only effectively compensate for spike errors but also greatly improve the precision of the measuring system.

  14. Information-theoretic approach to quantum error correction and reversible measurement

    CERN Document Server

    Nielsen, M. A.; Caves, Carlton M.; Schumacher, Benjamin; Barnum, Howard

    1997-01-01

    Quantum operations provide a general description of the state changes allowed by quantum mechanics. The reversal of quantum operations is important for quantum error-correcting codes, teleportation, and reversing quantum measurements. We derive information-theoretic conditions and equivalent algebraic conditions that are necessary and sufficient for a general quantum operation to be reversible. We analyze the thermodynamic cost of error correction and show that error correction can be regarded as a kind of ``Maxwell demon,'' for which there is an entropy cost associated with information obtained from measurements performed during error correction. A prescription for thermodynamically efficient error correction is given.

  15. Automated suppression of errors in LTP-II slope measurements with x-ray optics. Part 1: Review of LTP errors and methods for the error reduction

    Energy Technology Data Exchange (ETDEWEB)

    Ali, Zulfiqar [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Yashchuk, Valeriy V. [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States)

    2011-05-11

    Systematic error and instrumental drift are the major limiting factors of sub-microradian slope metrology with state-of-the-art x-ray optics. Significant suppression of the errors can be achieved by using an optimal measurement strategy suggested in [Rev. Sci. Instrum. 80, 115101 (2009)]. With this series of LSBL Notes, we report on development of an automated, kinematic, rotational system that provides fully controlled flipping, tilting, and shifting of a surface under test. The system is integrated into the Advanced Light Source long trace profiler, LTP-II, allowing for complete realization of the advantages of the optimal measurement strategy method. We provide details of the system’s design, operational control and data acquisition. The high performance of the system is demonstrated via the results of high precision measurements with a spherical test mirror.

  16. Computational Fluid Dynamics Analysis on Radiation Error of Surface Air Temperature Measurement

    Science.gov (United States)

    Yang, Jie; Liu, Qing-Quan; Ding, Ren-Hui

    2017-01-01

    Due to the solar radiation effect, current air temperature sensors inside a naturally ventilated radiation shield may produce a measurement error that is 0.8 K or higher. To improve air temperature observation accuracy and to correct historical temperatures from weather stations, a radiation error correction method is proposed. The correction method is based on a computational fluid dynamics (CFD) method and a genetic algorithm (GA) method. The CFD method is implemented to obtain the radiation error of the naturally ventilated radiation shield under various environmental conditions. Then, a radiation error correction equation is obtained by fitting the CFD results using the GA method. To verify the performance of the correction equation, the naturally ventilated radiation shield and an aspirated temperature measurement platform are characterized in the same environment to conduct the intercomparison. The aspirated temperature measurement platform serves as an air temperature reference. The mean radiation error given by the intercomparison experiments is 0.23 K, and the mean radiation error given by the correction equation is 0.2 K. This radiation error correction method allows the radiation error to be reduced by approximately 87%. The mean absolute error and the root mean square error between the radiation errors given by the correction equation and the radiation errors given by the experiments are 0.036 K and 0.045 K, respectively.
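    A sketch of the second stage of such a pipeline: fitting a correction equation to precomputed radiation errors. The paper fits its equation with a genetic algorithm; here an ordinary least-squares fit via scipy stands in, and the functional form, irradiance and wind-speed values, and error values are all made-up placeholders:

```python
import numpy as np
from scipy.optimize import curve_fit

def radiation_error(X, a, b, c):
    """Hypothetical correction-equation form: error grows with solar
    irradiance S (W/m^2) and decays with wind speed u (m/s)."""
    S, u = X
    return a * S**b / (1.0 + u)**c

# Stand-ins for CFD results: (irradiance, wind speed) -> radiation error (K)
S = np.array([200, 400, 600, 800, 1000, 1000], dtype=float)
u = np.array([1.0, 1.0, 2.0, 2.0, 3.0, 1.0], dtype=float)
err = np.array([0.10, 0.22, 0.25, 0.35, 0.35, 0.80])

popt, _ = curve_fit(radiation_error, (S, u), err, p0=(1e-3, 1.0, 1.0))
print("fitted (a, b, c):", popt)
print("predicted error at S=900, u=1.5:", radiation_error((900.0, 1.5), *popt))
```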

  17. Inference for the Bivariate and Multivariate Hidden Truncated Pareto(type II) and Pareto(type IV) Distribution and Some Measures of Divergence Related to Incompatibility of Probability Distribution

    Science.gov (United States)

    Ghosh, Indranil

    2011-01-01

    Consider a discrete bivariate random variable (X, Y) with possible values x_1, x_2, ..., x_I for X and y_1, y_2, ..., y_J for Y. Further suppose that the corresponding families of conditional distributions, for X given values of Y and for Y given values of X, are available. We…

  18. Measurement errors in dietary assessment using duplicate portions as reference method

    NARCIS (Netherlands)

    Trijsburg, L.E.

    2016-01-01

    Background: As Food Frequency Questionnaires (FFQs) are subject to measurement error, associations between self-reported intake by FFQ and outcome measures should be…

  19. Methodical errors of measurement of the human body tissues electrical parameters

    OpenAIRE

    Antoniuk, O.; Pokhodylo, Y.

    2015-01-01

    Sources of methodical measurement errors of the immittance parameters of biological tissues are described. Modeling of the measurement errors of the RC-parameters of equivalent circuits of biological tissues over the frequency range is analyzed. Recommendations on the choice of test signal frequency for the measurement of these elements are provided.

  20. Systematic errors in cosmic microwave background polarization measurements

    CERN Document Server

    O'Dea, Daniel; Challinor, Anthony; Johnson, B. R.

    2006-01-01

    We investigate the impact of instrumental systematic errors on the potential of cosmic microwave background polarization experiments targeting primordial B-modes. To do so, we introduce spin-weighted Mueller matrix-valued fields describing the linear response of the imperfect optical system and receiver, and give a careful discussion of the behaviour of the induced systematic effects under rotation of the instrument. We give the correspondence between the matrix components and known optical and receiver imperfections, and compare the likely performance of pseudo-correlation receivers and those that modulate the polarization with a half-wave plate. The latter is shown to have the significant advantage of not coupling the total intensity into polarization for perfect optics, but potential effects like optical distortions that may be introduced by the quasi-optical wave plate warrant further investigation. A fast method for tolerancing time-invariant systematic effects is presented, which propagates errors through…

  1. Measurement errors and scaling relations in astrophysics: a review

    CERN Document Server

    Andreon, S

    2012-01-01

    This review article considers some of the most common methods used in astronomy for regressing one quantity against another in order to estimate the model parameters or to predict an observationally expensive quantity using trends between object values. These methods have to tackle some of the awkward features prevalent in astronomical data, namely heteroscedastic (point-dependent) errors, intrinsic scatter, non-ignorable data collection and selection effects, data structure and non-uniform population (often called Malmquist bias), non-Gaussian data, outliers and mixtures of regressions. We outline how least square fits, weighted least squares methods, Maximum Likelihood, survival analysis, and Bayesian methods have been applied in the astrophysics literature when one or more of these features is present. In particular we concentrate on errors-in-variables regression and we advocate Bayesian techniques.

  2. Integration of Error Compensation of Coordinate Measuring Machines into Feature Measurement: Part II—Experimental Implementation

    Directory of Open Access Journals (Sweden)

    Roque Calvo

    2016-10-01

    Full Text Available Coordinate measuring machines (CMM) are main instruments of measurement in laboratories and in industrial quality control. A compensation error model has been formulated (Part I). It integrates error and uncertainty in the feature measurement model. Experimental implementation for the verification of this model is carried out based on direct testing on a moving bridge CMM. The regression results by axis are quantified and compared to the CMM indication with respect to the assigned values of the measurand. Next, testing of selected measurements of length, flatness, dihedral angle, and roundness features is accomplished. The measurement of calibrated gauge blocks for length or angle, flatness verification of the CMM granite table, and roundness of a precision glass hemisphere are presented under a setup of repeatability conditions. The results are analysed and compared with alternative methods of estimation. The overall performance of the model is endorsed through experimental verification, as well as the practical use and the model capability to contribute to the improvement of current standard CMM measuring capabilities.

  3. Relative measurement error analysis in the process of the Nakagami-m fading parameter estimation

    Directory of Open Access Journals (Sweden)

    Milentijević Vladeta

    2011-01-01

    Full Text Available An approach to the relative measurement error analysis in the process of the Nakagami-m fading signal moments estimation will be presented in this paper. Relative error expressions will also be derived for the cases when the MRC (Maximal Ratio Combining) diversity technique is performed at the receiver. Capitalizing on them, results will be graphically presented and discussed to show the influence of various parameters, such as diversity order and fading severity, on the relative measurement error bounds.
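    As background for the moment-based estimation the abstract refers to, a small sketch of the common inverse-normalized-variance estimator of the Nakagami m parameter, m_hat = E[R^2]^2 / Var(R^2), together with its empirical relative error; this illustrative estimator is not necessarily the exact one analyzed in the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def nakagami_samples(m, omega, n):
    """R ~ Nakagami(m, omega) iff R^2 ~ Gamma(shape=m, scale=omega/m)."""
    return np.sqrt(rng.gamma(shape=m, scale=omega / m, size=n))

def inv_variance_estimator(r):
    """Moment-based (inverse normalized variance) estimator of m:
    m_hat = E[R^2]^2 / Var(R^2)."""
    p = r ** 2
    return p.mean() ** 2 / p.var()

m_true = 2.0
for n in (100, 1_000, 100_000):
    m_hat = inv_variance_estimator(nakagami_samples(m_true, 1.0, n))
    print(n, m_hat, abs(m_hat - m_true) / m_true)  # relative error shrinks with n
```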

  4. Bivariate phase-rectified signal averaging

    CERN Document Server

    Schumann, Aicko Y; Bauer, Axel; Schmidt, Georg

    2008-01-01

    Phase-Rectified Signal Averaging (PRSA) was shown to be a powerful tool for the study of quasi-periodic oscillations and nonlinear effects in non-stationary signals. Here we present a bivariate PRSA technique for the study of the inter-relationship between two simultaneous data recordings. Its performance is compared with traditional cross-correlation analysis, which, however, does not work well for non-stationary data and cannot distinguish the coupling directions in complex nonlinear situations. We show that bivariate PRSA allows the analysis of events in one signal at times when the other signal is in a certain phase or state; it is stable in the presence of noise and insensitive to non-stationarities.

  5. Analysis of measured data of human body based on error correcting frequency

    Science.gov (United States)

    Jin, Aiyan; Peipei, Gao; Shang, Xiaomei

    2014-04-01

    Anthropometry measures all parts of the human body surface, and the measured data are the basis for analysis and study of the human body, for the establishment and modification of garment sizes, and for the formulation and implementation of online clothing stores. In this paper, several groups of measured data are obtained, and the data errors are analyzed by examining error frequency and by using the analysis-of-variance method from mathematical statistics. The paper also addresses the determination of the accuracy of the measured data and the difficulty of measuring particular parts of the human body, studies the causes of data errors further, and summarizes the key points for minimizing errors as far as possible. By analyzing the measured data on the basis of error frequency, the paper provides reference material for promoting the development of the garment industry.

  6. Measuring the achievable error of query sets under differential privacy

    CERN Document Server

    Li, Chao

    2012-01-01

    A common goal of privacy research is to release synthetic data that satisfies a formal privacy guarantee and can be used by an analyst in place of the original data. To achieve reasonable accuracy, a synthetic data set must be tuned to support a specified set of queries accurately, sacrificing fidelity for other queries. This work considers methods for producing synthetic data under differential privacy and investigates what makes a set of queries "easy" or "hard" to answer. We consider answering sets of linear counting queries using the matrix mechanism, a recent differentially-private mechanism that can reduce error by adding complex correlated noise adapted to a specified workload. Our main result is a novel lower bound on the minimum total error required to simultaneously release answers to a set of workload queries. The bound reveals that the hardness of a query workload is related to the spectral properties of the workload when it is represented in matrix form. The bound is tight and, because it satisfies…
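    To make the workload-dependence of error concrete, here is a small sketch of the standard expected-total-squared-error expression for the Laplace matrix mechanism, err(W, A) = (2/eps^2) * ||A||_1^2 * ||W A+||_F^2, evaluated for a prefix-range workload under two candidate strategies; this is the textbook error formula for the mechanism, not the lower bound discussed above:

```python
import numpy as np

def matrix_mechanism_error(W, A, eps=1.0):
    """Total expected squared error of answering workload W with strategy A
    under the Laplace matrix mechanism:
        err = (2 / eps^2) * ||A||_1^2 * ||W A^+||_F^2,
    where ||A||_1 is the maximum column L1 norm (the query sensitivity)
    and A^+ is the Moore-Penrose pseudoinverse of A."""
    sens = np.abs(A).sum(axis=0).max()
    WA = W @ np.linalg.pinv(A)
    return (2.0 / eps**2) * sens**2 * np.linalg.norm(WA, "fro")**2

n = 8
I = np.eye(n)                        # strategy: answer each count directly
T = np.tril(np.ones((n, n)))         # workload: all prefix-range queries
print(matrix_mechanism_error(T, I))  # identity strategy
print(matrix_mechanism_error(T, T))  # use the workload itself as strategy
```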

  7. Characterizations of some bivariate models using reciprocal coordinate subtangents

    OpenAIRE

    Sreenarayanapurath Madhavan Sunoj; Sreejith Thoppil Bhargavan; Jorge Navarro

    2014-01-01

    In the present paper, we consider the bivariate version of reciprocal coordinate subtangent (RCST) and study its usefulness in characterizing some important bivariate models.  In particular, characterization results are proved for a general bivariate model whose conditional distributions are proportional hazard rate models (see Navarro and Sarabia, 2011), Sarmanov family and Ali-Mikhail-Haq family of bivariate distributions.  We also study the relationship between local dependence function and reciprocal subtangent, and a characterization result is proved for a bivariate model proposed by Jones (1998).

  8. Covariate analysis of bivariate survival data

    Energy Technology Data Exchange (ETDEWEB)

    Bennett, L.E.

    1992-01-01

    The methods developed are used to analyze the effects of covariates on bivariate survival data when censoring and ties are present. The proposed method provides models for bivariate survival data that include differential covariate effects and censored observations. The proposed models are based on an extension of the univariate Buckley-James estimators which replace censored data points by their expected values, conditional on the censoring time and the covariates. For the bivariate situation, it is necessary to determine the expectation of the failure times for one component conditional on the failure or censoring time of the other component. Two different methods have been developed to estimate these expectations. In the semiparametric approach these expectations are determined from a modification of Burke's estimate of the bivariate empirical survival function. In the parametric approach censored data points are also replaced by their conditional expected values where the expected values are determined from a specified parametric distribution. The model estimation will be based on the revised data set, comprised of uncensored components and expected values for the censored components. The variance-covariance matrix for the estimated covariate parameters has also been derived for both the semiparametric and parametric methods. Data from the Demographic and Health Survey were analyzed by these methods. The two outcome variables are post-partum amenorrhea and breastfeeding; education and parity were used as the covariates. Both the covariate parameter estimates and the variance-covariance estimates for the semiparametric and parametric models will be compared. In addition, a multivariate test statistic was used in the semiparametric model to examine contrasts. The significance of the statistic was determined from a bootstrap distribution of the test statistic.

  9. Measure short separation for space debris based on radar angle error measurement information

    Science.gov (United States)

    Zhang, Yao; Wang, Qiao; Zhou, Lai-jian; Zhang, Zhuo; Li, Xiao-long

    2016-11-01

    With the increasingly frequent human activities in space, the number of dead satellites and pieces of space debris has increased dramatically, bringing greater risks to operational spacecraft. However, the measuring equipment currently in widespread use for space targets has many problems, such as high development costs or limited conditions of use. To solve this problem, we use the multi-target angle-error measurement information of radar and, combining the geometric relationship between the targets and the radar station, build a horizontal-distance decoding model. By adopting an improved signal quantization bit depth, timing synchronization, and outlier processing, the measurement precision is improved and the requirement of multi-target short-range measurement is satisfied; the efficiency of use is also analyzed. A validation test demonstrates the feasibility and effectiveness of the proposed methods.

  10. Accounting for covariate measurement error in a Cox model analysis of recurrence of depression.

    Science.gov (United States)

    Liu, K; Mazumdar, S; Stone, R A; Dew, M A; Houck, P R; Reynolds, C F

    2001-01-01

    When a covariate measured with error is used as a predictor in a survival analysis using the Cox model, the parameter estimate is usually biased. In clinical research, covariates measured without error such as treatment procedure or sex are often used in conjunction with a covariate measured with error. In a randomized clinical trial of two types of treatments, we account for the measurement error in the covariate, log-transformed total rapid eye movement (REM) activity counts, in a Cox model analysis of the time to recurrence of major depression in an elderly population. Regression calibration and two variants of a likelihood-based approach are used to account for measurement error. The likelihood-based approach is extended to account for the correlation between replicate measures of the covariate. Using the replicate data decreases the standard error of the parameter estimate for log(total REM) counts while maintaining the bias reduction of the estimate. We conclude that covariate measurement error and the correlation between replicates can affect results in a Cox model analysis and should be accounted for. In the depression data, these methods render comparable results that have less bias than the results when measurement error is ignored.

  11. Comparing measurement error correction methods for rate-of-change exposure variables in survival analysis.

    Science.gov (United States)

    Veronesi, Giovanni; Ferrario, Marco M; Chambless, Lloyd E

    2013-12-01

    In this article we focus on comparing measurement error correction methods for rate-of-change exposure variables in survival analysis, when longitudinal data are observed prior to the follow-up time. Motivational examples include the analysis of the association between changes in cardiovascular risk factors and subsequent onset of coronary events. We derive a measurement error model for the rate of change, estimated through subject-specific linear regression, assuming an additive measurement error model for the time-specific measurements. The rate of change is then included as a time-invariant variable in a Cox proportional hazards model, adjusting for the first time-specific measurement (baseline) and an error-free covariate. In a simulation study, we compared bias, standard deviation and mean squared error (MSE) for the regression calibration (RC) and the simulation-extrapolation (SIMEX) estimators. Our findings indicate that when the amount of measurement error is substantial, RC should be the preferred method, since it has smaller MSE for estimating the coefficients of the rate of change and of the variable measured without error. However, when the amount of measurement error is small, the choice of the method should take into account the event rate in the population and the effect size to be estimated. An application to an observational study, as well as examples of published studies where our model could have been applied, are also provided.
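    For intuition on why regression calibration (RC) works in settings like this, a minimal sketch of RC for a linear model with classical additive error, where the error variance is assumed known (e.g. estimated from replicates); the numbers are synthetic and the survival-model machinery of the article is deliberately left out:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5_000
z = rng.normal(0.0, 1.0, n)          # true (rate-of-change) exposure
x = z + rng.normal(0.0, 0.8, n)      # error-prone measurement, var_u = 0.64
y = 0.5 * z + rng.normal(0.0, 1.0, n)

naive = np.polyfit(x, y, 1)[0]       # attenuated slope

var_u = 0.64                         # assumed known, e.g. from replicates
lam = (x.var() - var_u) / x.var()    # reliability ratio sigma_Z^2 / sigma_X^2
x_rc = x.mean() + lam * (x - x.mean())   # E[Z | X], the calibrated exposure
rc = np.polyfit(x_rc, y, 1)[0]       # equivalent to dividing naive slope by lam

print(f"naive slope {naive:.3f}, RC-corrected {rc:.3f} (truth 0.5)")
```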

  12. Exploring the Effectiveness of a Measurement Error Tutorial in Helping Teachers Understand Score Report Results

    Science.gov (United States)

    Zapata-Rivera, Diego; Zwick, Rebecca; Vezzu, Margaret

    2016-01-01

    The goal of this study was to explore the effectiveness of a short web-based tutorial in helping teachers to better understand the portrayal of measurement error in test score reports. The short video tutorial included both verbal and graphical representations of measurement error. Results showed a significant difference in comprehension scores…

  13. Comparing Graphical and Verbal Representations of Measurement Error in Test Score Reports

    Science.gov (United States)

    Zwick, Rebecca; Zapata-Rivera, Diego; Hegarty, Mary

    2014-01-01

    Research has shown that many educators do not understand the terminology or displays used in test score reports and that measurement error is a particularly challenging concept. We investigated graphical and verbal methods of representing measurement error associated with individual student scores. We created four alternative score reports, each…

  14. Detecting bit-flip errors in a logical qubit using stabilizer measurements.

    Science.gov (United States)

    Ristè, D; Poletto, S; Huang, M-Z; Bruno, A; Vesterinen, V; Saira, O-P; DiCarlo, L

    2015-04-29

    Quantum data are susceptible to decoherence induced by the environment and to errors in the hardware processing it. A future fault-tolerant quantum computer will use quantum error correction to actively protect against both. In the smallest error correction codes, the information in one logical qubit is encoded in a two-dimensional subspace of a larger Hilbert space of multiple physical qubits. For each code, a set of non-demolition multi-qubit measurements, termed stabilizers, can discretize and signal physical qubit errors without collapsing the encoded information. Here using a five-qubit superconducting processor, we realize the two parity measurements comprising the stabilizers of the three-qubit repetition code protecting one logical qubit from physical bit-flip errors. While increased physical qubit coherence times and shorter quantum error correction blocks are required to actively safeguard the quantum information, this demonstration is a critical step towards larger codes based on multiple parity measurements.
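    As a plain-Python illustration of how the two parity checks of the three-qubit repetition code locate a single bit-flip, here is a classical toy simulation; it captures only the syndrome logic, not the superconducting hardware or the non-demolition character of the real stabilizer measurements:

```python
import numpy as np

rng = np.random.default_rng(7)

def encode(bit):
    """Three-qubit repetition code (classical stand-in for the bit-flip code)."""
    return np.array([bit, bit, bit])

def stabilizer_syndrome(q):
    """The two parity (stabilizer) measurements Z1Z2 and Z2Z3: they flag a
    bit-flip without reading out the encoded bit itself."""
    return q[0] ^ q[1], q[1] ^ q[2]

def correct(q):
    s = stabilizer_syndrome(q)
    flip = {(1, 0): 0, (1, 1): 1, (0, 1): 2}.get(s)  # which qubit flipped
    if flip is not None:
        q[flip] ^= 1
    return q

# single random bit-flip error on an encoded "1"
q = encode(1)
q[rng.integers(3)] ^= 1
print("syndrome:", stabilizer_syndrome(q), "-> corrected:", correct(q))
```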

  15. Algorithm-supported visual error correction (AVEC) of heart rate measurements in dogs, Canis lupus familiaris.

    Science.gov (United States)

    Schöberl, Iris; Kortekaas, Kim; Schöberl, Franz F; Kotrschal, Kurt

    2015-12-01

    Dog heart rate (HR) is characterized by a respiratory sinus arrhythmia, and therefore makes an automatic algorithm for error correction of HR measurements hard to apply. Here, we present a new method of error correction for HR data collected with the Polar system, including (1) visual inspection of the data, (2) a standardized way to decide with the aid of an algorithm whether or not a value is an outlier (i.e., "error"), and (3) the subsequent removal of this error from the data set. We applied our new error correction method to the HR data of 24 dogs and compared the uncorrected and corrected data, as well as the algorithm-supported visual error correction (AVEC) with the Polar error correction. The results showed that fewer values were identified as errors after AVEC than after the Polar error correction. We conclude that AVEC is more suitable for dog HR and HR variability than the customized Polar error correction, especially because AVEC decreases the likelihood of Type I errors, preserves the natural variability in HR, and does not lead to a time shift in the data.
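    A toy stand-in for the algorithm-supported step, assuming a simple rule that flags an inter-beat interval deviating from its local rolling median by more than 25%; the window length, threshold, and rule itself are illustrative choices, not the published AVEC algorithm, which additionally requires visual inspection before removal:

```python
import numpy as np

def flag_outliers(rr_ms, window=11, thresh=0.25):
    """Mark an inter-beat interval (ms) as a candidate error when it deviates
    from the local rolling median by more than `thresh` (here 25%)."""
    rr = np.asarray(rr_ms, dtype=float)
    half = window // 2
    flags = np.zeros(rr.size, dtype=bool)
    for i in range(rr.size):
        lo, hi = max(0, i - half), min(rr.size, i + half + 1)
        med = np.median(rr[lo:hi])
        flags[i] = abs(rr[i] - med) > thresh * med
    return flags

rr = [650, 660, 640, 1320, 655, 648, 330, 662, 651]   # two artefacts
print(flag_outliers(rr))   # flags the 1320 ms and 330 ms beats only
```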

  16. Working with Error and Uncertainty to Increase Measurement Validity

    Science.gov (United States)

    Amrein-Beardsley, Audrey; Barnett, Joshua H.

    2012-01-01

    Over the previous two decades, the era of accountability has amplified efforts to measure educational effectiveness more than Edward Thorndike, the father of educational measurement, likely would have imagined. Expressly, the measurement structure for evaluating educational effectiveness continues to rely increasingly on one sole…

  17. Development of a simple test device for spindle error measurement using a position sensitive detector

    Science.gov (United States)

    Liu, Chien-Hung; Jywe, Wen-Yuh; Lee, Hau-Wei

    2004-09-01

    A new spindle error measurement system has been developed in this paper. It employs a specially developed rotational fixture with a built-in laser diode and four batteries to replace the precision reference master ball or cylinder used in the traditional method. Two measuring devices with two position sensitive detectors (one designed for the measurement of the compound X-axis and Y-axis errors and the other designed with a lens for the measurement of the tilt angular errors) are fixed on the machine table to detect the laser spot position from the laser diode in the rotational fixture. When the spindle rotates, the spindle error changes the direction of the laser beam. The laser beam is then divided into two separate beams by a beam splitter. The two separate beams are projected onto the two measuring devices and are detected by the two position sensitive detectors, respectively. Thus, the compound motion errors and the tilt angular errors of the spindle can be obtained. Theoretical analysis and experimental tests are presented in this paper to separate the compound errors into two radial errors and the tilt angular errors. This system is proposed as a new instrument and method for spindle metrology.

  18. Period, epoch and prediction errors of ephemeris from continuous sets of timing measurements

    CERN Document Server

    Deeg, Hans J

    2015-01-01

    Space missions such as Kepler and CoRoT have led to large numbers of eclipse or transit measurements in nearly continuous time series. This paper shows how to obtain the period error in such measurements from a basic linear least-squares fit, and how to correctly derive the timing error in the prediction of future transit or eclipse events. Assuming strict periodicity, a formula for the period error of such time series is derived: sigma_P = sigma_T * sqrt(12 / (N^3 - N)), where sigma_P is the period error, sigma_T the timing error of a single measurement and N the number of measurements. Relative to the iterative method for period error estimation by Mighell & Plavchan (2013), this much simpler formula leads to smaller period errors, whose correctness has been verified through simulations. For the prediction of times of future periodic events, the usual linear ephemeris, where epoch errors are quoted for the first time measurement, is prone to overestimation of the error of that prediction. This may be avoided...
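    A quick numerical check of the quoted formula, treating the fitted slope of observed timing versus epoch number as the period estimate; sigma_T, N, and the trial count are arbitrary demonstration values:

```python
import numpy as np

rng = np.random.default_rng(3)

def period_error(sigma_t, n):
    """sigma_P = sigma_T * sqrt(12 / (N^3 - N)) from the abstract."""
    return sigma_t * np.sqrt(12.0 / (n**3 - n))

# Monte-Carlo check with a strictly periodic signal and Gaussian timing noise
P, sigma_t, n, trials = 2.5, 0.01, 50, 2_000
epochs = np.arange(n)
slopes = []
for _ in range(trials):
    times = epochs * P + rng.normal(0.0, sigma_t, n)
    slopes.append(np.polyfit(epochs, times, 1)[0])   # fitted period
print("analytic :", period_error(sigma_t, n))
print("empirical:", np.std(slopes))                  # should agree closely
```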

  19. Measurement of implicit associations between emotional states and computer errors using the implicit association test

    Directory of Open Access Journals (Sweden)

    Maricutoiu, Laurentiu P.

    2011-12-01

    Full Text Available Previous research identified two main emotional outcomes of computer error: anxiety and frustration. These emotions have been associated with low levels of performance in using a computer. The present research used innovative methodology for studying the relations between computer error messages, user anxiety and user frustration. We used the Implicit Association Test (IAT to measure automated associations between error messages and these two emotional outcomes. A sample of 80 participants completed two questionnaires and two IAT designs. Results indicated that user error messages are more strongly associated with anxiety than with frustration. Personal characteristics such as emotional stability and English proficiency were significantly associated with the implicit anxiety measure, but not with the frustration measure. No significant relations were found between two measures of computer experience and the emotional measures. These results indicated that error related anxiety is associated with personal characteristics.

  20. Validation of Large-Scale Geophysical Estimates Using In Situ Measurements with Representativeness Error

    Science.gov (United States)

    Konings, A. G.; Gruber, A.; Mccoll, K. A.; Alemohammad, S. H.; Entekhabi, D.

    2015-12-01

    Validating large-scale estimates of geophysical variables by comparing them to in situ measurements neglects the fact that these in situ measurements are not generally representative of the larger area. That is, in situ measurements contain some `representativeness error'. They also have their own sensor errors. The naïve approach of characterizing the errors of a remote sensing or modeling dataset by comparison to in situ measurements thus leads to error estimates that are spuriously inflated by the representativeness and other errors in the in situ measurements. Nevertheless, this naïve approach is still very common in the literature. In this work, we introduce an alternative estimator of the large-scale dataset error that explicitly takes into account the fact that the in situ measurements have some unknown error. The performance of the two estimators is then compared in the context of soil moisture datasets under different conditions for the true soil moisture climatology and dataset biases. The new estimator is shown to lead to a more accurate characterization of the dataset errors under the most common conditions. If a third dataset is available, the principles of the triple collocation method can be used to determine the errors of both the large-scale estimates and in situ measurements. However, triple collocation requires that the errors in all datasets are uncorrelated with each other and with the truth. We show that even when the assumptions of triple collocation are violated, a triple collocation-based validation approach may still be more accurate than a naïve comparison to in situ measurements that neglects representativeness errors.
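    For reference, a compact sketch of the classical covariance-based triple collocation estimator the abstract builds on, under its usual assumptions (a common truth, zero-mean mutually independent errors, and no relative biases or scaling differences); the soil-moisture numbers are synthetic:

```python
import numpy as np

rng = np.random.default_rng(5)

def triple_collocation_rmse(x, y, z):
    """Error standard deviations of three collocated datasets, assuming a
    common truth and independent zero-mean errors in each dataset."""
    c = np.cov(np.vstack([x, y, z]))
    ex = c[0, 0] - c[0, 1] * c[0, 2] / c[1, 2]
    ey = c[1, 1] - c[0, 1] * c[1, 2] / c[0, 2]
    ez = c[2, 2] - c[0, 2] * c[1, 2] / c[0, 1]
    return np.sqrt([ex, ey, ez])

truth  = rng.normal(0.25, 0.05, 10_000)             # "true" soil moisture
sat    = truth + rng.normal(0, 0.04, truth.size)    # large-scale estimate
model  = truth + rng.normal(0, 0.03, truth.size)
insitu = truth + rng.normal(0, 0.02, truth.size)    # incl. representativeness

print(triple_collocation_rmse(sat, model, insitu))  # ~[0.04, 0.03, 0.02]
```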

  1. A conditional likelihood approach for regression analysis using biomarkers measured with batch-specific error.

    Science.gov (United States)

    Wang, Ming; Flanders, W Dana; Bostick, Roberd M; Long, Qi

    2012-12-20

    Measurement error is common in epidemiological and biomedical studies. When biomarkers are measured in batches or groups, measurement error is potentially correlated within each batch or group. In regression analysis, most existing methods are not applicable in the presence of batch-specific measurement error in predictors. We propose a robust conditional likelihood approach to account for batch-specific error in predictors when batch effect is additive and the predominant source of error, which requires no assumptions on the distribution of measurement error. Although a regression model with batch as a categorical covariable yields the same parameter estimates as the proposed conditional likelihood approach for linear regression, this result does not hold in general for all generalized linear models, in particular, logistic regression. Our simulation studies show that the conditional likelihood approach achieves better finite sample performance than the regression calibration approach or a naive approach without adjustment for measurement error. In the case of logistic regression, our proposed approach is shown to also outperform the regression approach with batch as a categorical covariate. In addition, we also examine a 'hybrid' approach combining the conditional likelihood method and the regression calibration method, which is shown in simulations to achieve good performance in the presence of both batch-specific and measurement-specific errors. We illustrate our method by using data from a colorectal adenoma study.

  2. The method of translation additive and multiplicative error in the instrumental component of the measurement uncertainty

    Science.gov (United States)

    Vasilevskyi, Olexander M.; Kucheruk, Volodymyr Y.; Bogachuk, Volodymyr V.; Gromaszek, Konrad; Wójcik, Waldemar; Smailova, Saule; Askarova, Nursanat

    2016-09-01

    The paper proposes a method for translating additive and multiplicative errors into the instrumental component of measurement uncertainty; the mathematical models are obtained by a Taylor expansion of the transformation equations of the measuring instruments used.

  3. Dyadic Bivariate Fourier Multipliers for Multi-Wavelets in L2(R2)

    Institute of Scientific and Technical Information of China (English)

    Zhongyan Li; Xiaodi Xu

    2015-01-01

    The single 2-dilation orthogonal wavelet multipliers in the one-dimensional case and the single A-dilation (where A is any expansive matrix with integer entries and |det A| = 2) wavelet multipliers in the high-dimensional case were completely characterized by the Wutam Consortium (1998) and Z. Y. Li, et al. (2010). But there exist no further results on orthogonal multivariate wavelet matrix multipliers corresponding to integer expansive dilation matrices with the absolute value of the determinant not equal to 2 in L2(R2). In this paper, we choose 2I2 as the dilation matrix and consider the 2I2-dilation orthogonal multivariate wavelets Y = {y1, y2, y3} (which are called dyadic bivariate wavelets) and their multipliers. We call the 3×3 matrix-valued function A(s) = [f_{i,j}(s)]_{3×3}, where the f_{i,j} are measurable functions, a dyadic bivariate matrix Fourier wavelet multiplier if the inverse Fourier transform of A(s)(ŷ1(s), ŷ2(s), ŷ3(s))^T = (ĝ1(s), ĝ2(s), ĝ3(s))^T is a dyadic bivariate wavelet whenever (y1, y2, y3) is any dyadic bivariate wavelet. We give some conditions for dyadic matrix bivariate wavelet multipliers. The results extend those of Z. Y. Li and X. L. Shi (2011). As an application, we construct some useful dyadic bivariate wavelets by using dyadic Fourier matrix wavelet multipliers and use them for image denoising.

  4. Correction of motion measurement errors beyond the range resolution of a synthetic aperture radar

    Science.gov (United States)

    Doerry, Armin W.; Heard, Freddie E.; Cordaro, J. Thomas

    2008-06-24

    Motion measurement errors that extend beyond the range resolution of a synthetic aperture radar (SAR) can be corrected by effectively decreasing the range resolution of the SAR in order to permit measurement of the error. Range profiles can be compared across the slow-time dimension of the input data in order to estimate the error. Once the error has been determined, appropriate frequency and phase correction can be applied to the uncompressed input data, after which range and azimuth compression can be performed to produce a desired SAR image.

  5. Analysis of the possible measurement errors for the PM10 concentration measurement at Gosan, Korea

    Science.gov (United States)

    Shin, S.; Kim, Y.; Jung, C.

    2010-12-01

    The reliability of the measurement of ambient trace species is an important issue, especially in a background area such as Gosan on Jeju Island, Korea. In a previous episodic study in Gosan (NIER, 2006), it was found that the PM10 concentration measured by the β-ray absorption method (BAM) was higher than that from the gravimetric method (GMM) and that the correlation between them was low. Based on previous studies (Chang et al., 2001; Katsuyuki et al., 2008), two probable reasons for the discrepancy are identified: (1) negative measurement error due to the evaporation of volatile ambient species, such as nitrate, chloride, and ammonium, from the filter in GMM, and (2) positive error due to the absorption of water vapor during measurement in BAM. There was no heater at the inlet of the BAM in Gosan during the sampling period. In this study, we have analyzed the negative and positive errors quantitatively by using the gas/particle equilibrium model SCAPE (Simulating Composition of Atmospheric Particles at Equilibrium) for the data between May 2001 and June 2008, together with the aerosol and gaseous composition data. We have estimated the degree of evaporation from the filter in GMM by comparing the volatile ionic species concentration calculated by SCAPE at the thermodynamic equilibrium state under the meteorological conditions during the sampling period with the mass concentration measured by ion chromatography. Also, based on the aerosol water content calculated by SCAPE, we have estimated quantitatively the effect of ambient humidity during measurement in BAM. Subsequently, this study shows whether the discrepancy can be explained by some other factors by applying multiple regression analyses. References: Chang, C. T., Tsai, C. J., Lee, C. T., Chang, S. Y., Cheng, M. T., Chein, H. M., 2001, Differences in PM10 concentrations measured by β-gauge monitor and hi-vol sampler, Atmospheric Environment, 35, 5741-5748. Katsuyuki, T. K., Hiroaki, M. R., and Kazuhiko, S. K., 2008, Examination of discrepancies between beta…

  6. Model selection for marginal regression analysis of longitudinal data with missing observations and covariate measurement error.

    Science.gov (United States)

    Shen, Chung-Wei; Chen, Yi-Hau

    2015-10-01

    Missing observations and covariate measurement error commonly arise in longitudinal data. However, existing methods for model selection in marginal regression analysis of longitudinal data fail to address the potential bias resulting from these issues. To tackle this problem, we propose a new model selection criterion, the Generalized Longitudinal Information Criterion, which is based on an approximately unbiased estimator for the expected quadratic error of a considered marginal model accounting for both data missingness and covariate measurement error. The simulation results reveal that the proposed method performs quite well in the presence of missing data and covariate measurement error. On the contrary, the naive procedures without taking care of such complexity in data may perform quite poorly. The proposed method is applied to data from the Taiwan Longitudinal Study on Aging to assess the relationship of depression with health and social status in the elderly, accommodating measurement error in the covariate as well as missing observations.

  7. Characterizations of some bivariate models using reciprocal coordinate subtangents

    Directory of Open Access Journals (Sweden)

    Sreenarayanapurath Madhavan Sunoj

    2014-06-01

    Full Text Available In the present paper, we consider the bivariate version of reciprocal coordinate subtangent (RCST) and study its usefulness in characterizing some important bivariate models.  In particular, characterization results are proved for a general bivariate model whose conditional distributions are proportional hazard rate models (see Navarro and Sarabia, 2011), Sarmanov family and Ali-Mikhail-Haq family of bivariate distributions.  We also study the relationship between local dependence function and reciprocal subtangent and a characterization result is proved for a bivariate model proposed by Jones (1998).  Further, the concept of reciprocal coordinate subtangent is extended to conditionally specified models.

  8. Sources of errors in the measurements of underwater profiling radiometer

    Digital Repository Service at National Institute of Oceanography (India)

    Silveira, N.; Suresh, T.; Talaulikar, M.; Desa, E.; Matondkar, S.G.P.; Lotlikar, A.

    To meet the stringent quality requirements of marine optical data for satellite ocean color sensor validation, development of algorithms and other related applications, it is essential to take great care while measuring these parameters. There are two...

  9. Slide error measurement of a large-scale ultra-precision lathe

    Science.gov (United States)

    Lee, Jung Chul; Gao, Wei; Noh, Young Jin; Hwang, Joo Ho; Oh, Jeoung Seok; Park, Chun Hong

    2010-08-01

    This paper presents the measurement of the slide error of a large-scale ultra-precision lathe with an effective fabricating length of 2000 mm. A cylinder workpiece with a diameter of 320 mm and a length of 1500 mm was mounted on the spindle of the lathe with its rotational axis along the Z-direction. Two capacitive displacement probes with a measurement range of 100 μm were mounted on the slide of the lathe with its moving axis along the Z-direction. The displacement probes were placed on the two sides of the cylinder workpiece over the horizontal plane (XZ-plane). The cylinder workpiece, which was rotated by the spindle, was scanned by the displacement probes moved by the slide. The X-directional horizontal slide error can be accurately evaluated from the probe outputs by using a proposed rotating-reversal method through separating the influences of the form error of the cylinder workpiece and the rotational error of the spindle. In addition to the out-of-straightness error component, the parallelism error component with respect to the spindle axis can also be evaluated. The out-of-straightness error component and the parallelism error component of the slide error were measured to be 3.3 μm and 1.68 arc-seconds over a slide travel range of 1450.08 mm, respectively.
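    To illustrate the reversal principle underlying such error separation, a minimal sketch of the classic straightness reversal, in which flipping the artefact between two runs flips the sign of its form error relative to the slide error; the paper's rotating-reversal method additionally separates the spindle rotational error, which this toy example omits, and all profiles below are made up:

```python
import numpy as np

def straightness_reversal(m_normal, m_reversed):
    """Classic reversal separation: with the artefact flipped between runs,
    the two probe traces are
        M1(z) = F(z) + S(z)   and   M2(z) = F(z) - S(z),
    so the artefact form error F and the slide straightness error S follow
    as the half-sum and half-difference."""
    m1, m2 = np.asarray(m_normal), np.asarray(m_reversed)
    return 0.5 * (m1 + m2), 0.5 * (m1 - m2)

z = np.linspace(0.0, 1450.0, 200)                    # slide travel (mm)
true_form = 0.8e-3 * np.sin(2 * np.pi * z / 700.0)   # synthetic profiles (mm)
true_slide = 1.5e-3 * (z / z.max()) ** 2
m1, m2 = true_form + true_slide, true_form - true_slide
form, slide = straightness_reversal(m1, m2)
print(np.allclose(form, true_form), np.allclose(slide, true_slide))
```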

  10. Errors and uncertainties in the measurement of ultrasonic wave attenuation and phase velocity.

    Science.gov (United States)

    Kalashnikov, Alexander N; Challis, Richard E

    2005-10-01

    This paper presents an analysis of the error generation mechanisms that affect the accuracy of measurements of ultrasonic wave attenuation coefficient and phase velocity as functions of frequency. In the first stage of the analysis we show that electronic system noise, expressed in the frequency domain, maps into errors in the attenuation and the phase velocity spectra in a highly nonlinear way; the condition for minimum error is when the total measured attenuation is around 1 Neper. The maximum measurable total attenuation has a practical limit of around 6 Nepers and the minimum measurable value is around 0.1 Neper. In the second part of the paper we consider electronic noise as the primary source of measurement error; errors in attenuation result from additive noise whereas errors in phase velocity result from both additive noise and system timing jitter. Quantization noise can be neglected if the amplitude of the additive noise is comparable with the quantization step, and coherent averaging is employed. Experimental results are presented which confirm the relationship between electronic noise and measurement errors. The analytical technique is applicable to the design of ultrasonic spectrometers, formal assessment of the accuracy of ultrasonic measurements, and the optimization of signal processing procedures to achieve a specified accuracy.

  11. Register mark measurement errors in high-precision roll-to-roll continuous systems: The effect of register mark geometry on measurement error

    Science.gov (United States)

    Lee, Jongsu; Isto, Pekka; Jeong, Hakyung; Park, Janghoon; Lee, Dongjin; Shin, Kee-Hyun

    2016-10-01

    It is important to achieve high-precision register control in roll-to-roll continuous printing systems. Thus far, many studies on the dynamics of registers and tension and on register control techniques have identified register control as a problem of controlling and minimizing the disturbance of strain of the substrate. However, register control using printed register marks is necessary, and printing defects in creating these marks cause measurement errors. This study demonstrates by experimental verification that the measurement error is generated by the widening and agglomeration of the register mark. Furthermore, the error is shown to differ with the size and shape of the mark under identical printing conditions. The results illustrate the importance of improving the printing quality of the register mark, selecting the desired geometry for register marks with regard to printability, and utilizing an edge-detection algorithm in the control program for high-precision register control.

  12. Measurement and analysis of typical motion error traces from a circular test

    Institute of Scientific and Technical Information of China (English)

    2008-01-01

    The circular test provides a rapid and efficient way of measuring the contouring accuracy of a machine tool. To get the actual point coordinate in the work plane, an improved measurement instrument - a new ball bar test system - is presented in this paper to identify both the radial error and the rotation angle error when the machine is manipulated to move in circular traces. Based on the measured circular error, a combination of Fourier components is chosen to represent the systematic form error that fluctuates in the radial direction. The typical motion errors represented by the corresponding Fourier components can thus be identified. The values for machine compensation can be calculated and adjusted until the desired results are achieved.
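    A small sketch of the Fourier-decomposition step, extracting the amplitude and phase of the low harmonics from a radial error trace sampled uniformly over one revolution; the mapping of particular harmonics to particular machine errors (e.g. the second harmonic as the usual signature of axis squareness error) is conventional ball-bar practice rather than anything specific to this paper, and the synthetic trace is made up:

```python
import numpy as np

def radial_fourier_components(radial_dev, n_harmonics=5):
    """Decompose a ball-bar radial error trace r(theta), sampled uniformly
    over one revolution, into amplitude and phase of its lowest harmonics."""
    r = np.asarray(radial_dev, dtype=float)
    c = np.fft.rfft(r) / r.size
    return {k: (2 * np.abs(c[k]), np.angle(c[k]))
            for k in range(1, n_harmonics + 1)}

theta = np.linspace(0, 2 * np.pi, 360, endpoint=False)
# synthetic trace: a 2/rev squareness-like term plus a 3/rev ripple (um)
trace = 4.0 * np.cos(2 * theta - 0.3) + 1.0 * np.cos(3 * theta)
for k, (amp, ph) in radial_fourier_components(trace).items():
    print(k, round(amp, 3), round(ph, 3))   # harmonic 2: amp 4.0, phase -0.3
```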

  13. A measuring and correcting method about locus errors in robot welding

    Institute of Scientific and Technical Information of China (English)

    2001-01-01

    When tubules that are regularly arranged are welded onto a bobbin by a robot, the position and orientation of some tubules may be changed by factors such as thermal deformation and positioning errors, which makes it very difficult to weld automatically and continuously by the teach-and-playback method. In this paper, a kind of error measuring system is presented by which the position and orientation errors of the tubules relative to the taught one can be measured. A method to correct the locus errors is also proposed, by which the moving locus planned via teaching points can be corrected in real time according to the measured error parameters, so that, by teaching only one tubule, all tubules on a bobbin can be welded automatically.

  14. Measurement, Sampling, and Equating Errors in Large-Scale Assessments

    Science.gov (United States)

    Wu, Margaret

    2010-01-01

    In large-scale assessments, such as state-wide testing programs, national sample-based assessments, and international comparative studies, there are many steps involved in the measurement and reporting of student achievement. There are always sources of inaccuracies in each of the steps. It is of interest to identify the source and magnitude of…

  15. Measurement of Root-Mean-Square Phase Errors in Arrayed Waveguide Gratings

    Institute of Scientific and Technical Information of China (English)

    ZHENG Xiao-Ping; CHU Yuan-Liang; ZHAO Wei; ZHANG Han-Yi; GUO Yi-Li

    2004-01-01

    The interference-based method to measure the root-mean-square phase errors in SiO2-based arrayed waveguide gratings (AWGs) is presented. The experimental results show that the rms phase error of the tested AWG is 0.72 rad.

  16. Spectral density regression for bivariate extremes

    KAUST Repository

    Castro Camilo, Daniela

    2016-05-11

    We introduce a density regression model for the spectral density of a bivariate extreme value distribution, that allows us to assess how extremal dependence can change over a covariate. Inference is performed through a double kernel estimator, which can be seen as an extension of the Nadaraya–Watson estimator where the usual scalar responses are replaced by mean constrained densities on the unit interval. Numerical experiments with the methods illustrate their resilience in a variety of contexts of practical interest. An extreme temperature dataset is used to illustrate our methods. © 2016 Springer-Verlag Berlin Heidelberg

  17. A bivariate chromatic polynomial for signed graphs

    CERN Document Server

    Beck, Matthias

    2012-01-01

    We study Dohmen--P\\"onitz--Tittmann's bivariate chromatic polynomial $c_\\Gamma(k,l)$ which counts all $(k+l)$-colorings of a graph $\\Gamma$ such that adjacent vertices get different colors if they are $\\le k$. Our first contribution is an extension of $c_\\Gamma(k,l)$ to signed graphs, for which we obtain an inclusion--exclusion formula and several special evaluations giving rise, e.g., to polynomials that encode balanced subgraphs. Our second goal is to derive combinatorial reciprocity theorems for $c_\\Gamma(k,l)$ and its signed-graph analogues, reminiscent of Stanley's reciprocity theorem linking chromatic polynomials to acyclic orientations.

  18. The structure of bivariate rational hypergeometric functions

    CERN Document Server

    Cattani, Eduardo; Villegas, Fernando Rodriguez

    2009-01-01

    We describe the structure of all codimension-two lattice configurations $A$ which admit a stable rational $A$-hypergeometric function, that is, a rational function $F$ all of whose partial derivatives are nonzero, and which is a solution of the $A$-hypergeometric system of partial differential equations defined by Gel'fand, Kapranov and Zelevinsky. We show, moreover, that all stable rational $A$-hypergeometric functions may be described by toric residues, and apply our results to study the rationality of bivariate series whose coefficients are quotients of factorials of linear forms.

  19. APPROXIMATE SAMPLING THEOREM FOR BIVARIATE CONTINUOUS FUNCTION

    Institute of Scientific and Technical Information of China (English)

    杨守志; 程正兴; 唐远炎

    2003-01-01

    An approximate solution of the refinement equation is given by its mask, and the approximate sampling theorem for bivariate continuous functions is proved by applying the approximate solution. The approximate sampling function, defined uniquely by the mask of the refinement equation, is the approximate solution of the equation, a piece-wise linear function, and possesses an explicit computation formula. Therefore the mask of the refinement equation can be selected according to one's requirements, so that one may control the decay speed of the approximate sampling function.

  20. Position error correction in absolute surface measurement based on a multi-angle averaging method

    Science.gov (United States)

    Wang, Weibo; Wu, Biwei; Liu, Pengfei; Liu, Jian; Tan, Jiubin

    2017-04-01

    We present a method for position error correction in absolute surface measurement based on a multi-angle averaging method. Differences in shear rotation measurements at overlapping areas can be used to estimate the unknown relative position errors of the measurements. The model and the solving of the estimation algorithm have been discussed in detail. The estimation algorithm adopts a least-squares technique to eliminate azimuthal errors caused by rotation inaccuracy. The cost functions can be minimized to determine the true values of the unknowns of Zernike polynomial coefficients and rotation angle. Experimental results show the validity of the method proposed.

  1. Low-frequency Periodic Error Identification and Compensation for Star Tracker Attitude Measurement

    Institute of Scientific and Technical Information of China (English)

    WANG Jiongqi; XIONG Kai; ZHOU Haiyin

    2012-01-01

    The low-frequency periodic error of a star tracker is one of the most critical problems for high-accuracy satellite attitude determination. In this paper an approach is proposed to identify and compensate the low-frequency periodic error of a star tracker in attitude measurement. The analytical expression between the estimated gyro drift and the low-frequency periodic error of the star tracker is derived first. Then the low-frequency periodic error, which can be expressed by a Fourier series, is identified from the frequency spectrum of the estimated gyro drift according to the solution of the first step. Furthermore, a compensation model of the low-frequency periodic error is established based on the identified parameters to improve the attitude determination accuracy. Finally, promising simulated experimental results demonstrate the validity and effectiveness of the proposed method. The periodic error for attitude determination is basically eliminated and the estimation precision is greatly improved.
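    A simplified numerical sketch of the identification step: pick the strongest spectral lines of an estimated gyro-drift series and rebuild them as a Fourier-series error model for compensation. The paper derives the drift-to-error relationship analytically; here the drift series itself is synthetic and the one-cycle "orbital" period is an arbitrary stand-in:

```python
import numpy as np

def identify_periodic_error(drift, dt, n_terms=3):
    """Return the n strongest spectral lines of the estimated gyro drift as
    a Fourier-series model e(t) = sum_k a_k cos(2 pi f_k t) + b_k sin(...)."""
    d = np.asarray(drift, dtype=float)
    c = np.fft.rfft(d - d.mean()) / d.size
    f = np.fft.rfftfreq(d.size, dt)
    idx = np.argsort(np.abs(c[1:]))[::-1][:n_terms] + 1   # skip DC bin
    return [(f[k], 2 * c[k].real, -2 * c[k].imag) for k in idx]

def compensate(t, model):
    e = np.zeros_like(t)
    for fk, ak, bk in model:
        e += ak * np.cos(2 * np.pi * fk * t) + bk * np.sin(2 * np.pi * fk * t)
    return e

t = np.arange(0, 6000.0, 1.0)                    # one-second samples
true = 0.5 * np.cos(2 * np.pi * t / 6000)        # one cycle over the series
drift = true + np.random.default_rng(2).normal(0, 0.05, t.size)
model = identify_periodic_error(drift, 1.0, n_terms=1)
print("residual std:", np.std(drift - compensate(t, model)))  # ~noise level
```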

  2. Conservative error measures for classical and quantum metrology

    CERN Document Server

    Tsang, Mankei

    2016-01-01

    The classical and quantum Cram\\'er-Rao bounds have become standard measures of parameter-estimation uncertainty for a variety of sensing and imaging applications in recent years, but their assumption of unbiased estimators potentially undermines their significance as fundamental limits. In this note we advocate a Bayesian approach with Van Trees inequalities and worst-case priors to overcome the problem. Applications to superlocalization and gravitational-wave parameter estimation are discussed.

  3. A Nonlinear Consensus Protocol of Multiagent Systems Considering Measuring Errors

    Directory of Open Access Journals (Sweden)

    Xiaochu Wang

    2013-01-01

    Full Text Available In order to avoid a potential waste of energy during consensus controls in the case where there exist measurement uncertainties, a nonlinear protocol is proposed for multiagent systems under a fixed, connected, undirected communication topology, and it is extended to the cases of both full and partial access to a reference. Distributed estimators are utilized to help all agents agree on their understanding of the reference, even though some agents may not have direct access to the reference. An additional condition is also considered, where self-known configuration offsets are desired. Theoretical analyses of stability are given. Finally, simulations are performed, and the results show that the proposed protocols can lead agents to achieve loose consensus and work effectively, with less energy cost, to keep the formation, which illustrates the theoretical results.

  4. Offset Error Compensation in Roundness Measurement%圆度测量中偏置误差的软补偿

    Institute of Scientific and Technical Information of China (English)

    朱喜林; 史俊; 李晓梅

    2004-01-01

    This paper analyses three causes of offset error in roundness measurement and presents corresponding compensation methods. The causes of offset error include excursion error resulting from the deflection of the sensor's line of measurement from the rotational center in measurement (datum center), eccentricity error resulting from the variance between the workpiece's geometrical center and the rotational center, and tilt error resulting from the tilt between the workpiece's geometrical axes and the rotational centerline.

  5. Effect of Measurement Errors on Predicted Cosmological Constraints from Shear Peak Statistics with LSST

    CERN Document Server

    Bard, D; Chang, C; May, M; Kahn, S M; AlSayyad, Y; Ahmad, Z; Bankert, J; Connolly, A; Gibson, R R; Gilmore, K; Grace, E; Haiman, Z; Hannel, M; Huffenberger, K M; Jernigan, J G; Jones, L; Krughoff, S; Lorenz, S; Marshall, S; Meert, A; Nagarajan, S; Peng, E; Peterson, J; Rasmussen, A P; Shmakova, M; Sylvestre, N; Todd, N; Young, M

    2013-01-01

    The statistics of peak counts in reconstructed shear maps contain information beyond the power spectrum, and can improve cosmological constraints from measurements of the power spectrum alone if systematic errors can be controlled. We study the effect of galaxy shape measurement errors on predicted cosmological constraints from the statistics of shear peak counts with the Large Synoptic Survey Telescope (LSST). We use the LSST image simulator in combination with cosmological N-body simulations to model realistic shear maps for different cosmological models. We include both galaxy shape noise and, for the first time, measurement errors on galaxy shapes. We find that the measurement errors considered have relatively little impact on the constraining power of shear peak counts for LSST.

  6. Local and omnibus goodness-of-fit tests in classical measurement error models

    KAUST Repository

    Ma, Yanyuan

    2010-09-14

    We consider functional measurement error models, i.e. models where covariates are measured with error and yet no distributional assumptions are made about the mismeasured variable. We propose and study a score-type local test and an orthogonal series-based, omnibus goodness-of-fit test in this context, where no likelihood function is available or calculated, i.e. all the tests are proposed in the semiparametric model framework. We demonstrate that our tests have optimality properties and computational advantages that are similar to those of the classical score tests in the parametric model framework. The test procedures are applicable to several semiparametric extensions of measurement error models, including when the measurement error distribution is estimated non-parametrically as well as for generalized partially linear models. The performance of the local score-type and omnibus goodness-of-fit tests is demonstrated through simulation studies and analysis of a nutrition data set.

  7. Image pre-filtering for measurement error reduction in digital image correlation

    Science.gov (United States)

    Zhou, Yihao; Sun, Chen; Song, Yuntao; Chen, Jubing

    2015-02-01

    In digital image correlation, the sub-pixel intensity interpolation causes a systematic error in the measured displacements. The error increases toward the high-frequency components of the speckle pattern. In practice, a captured image is usually corrupted by additive white noise. The noise introduces additional energy in the high frequencies and therefore raises the systematic error. Meanwhile, the noise also elevates the random error, which increases with the noise power. In order to reduce the systematic error and the random error of the measurements, we apply pre-filtering to the images prior to the correlation so that the high-frequency contents are suppressed. Two spatial-domain filters (binomial and Gaussian) and two frequency-domain filters (Butterworth and Wiener) are tested on speckle images undergoing both simulated and real-world translations. By evaluating the errors of the various combinations of speckle patterns, interpolators, noise levels, and filter configurations, we come to the following conclusions. All four filters are able to reduce the systematic error. Meanwhile, the random error can also be reduced if the signal power is mainly distributed around DC. For high-frequency speckle patterns, the low-pass filters (binomial, Gaussian and Butterworth) slightly increase the random error, and the Butterworth filter produces the lowest random error among them. By using the Wiener filter with over-estimated noise power, the random error can be reduced, but the resultant systematic error is higher than that of the low-pass filters. In general, the Butterworth filter is recommended for error reduction due to its flexibility of passband selection and maximal preservation of the allowed frequencies. The binomial filter enables efficient implementation and thus becomes a good option if computational cost is a critical issue. When used together with pre-filtering, the B-spline interpolator produces lower systematic error than the bicubic interpolator and a similar level of random error.
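
    As a minimal illustration of the pre-filtering step (a sketch assuming a Gaussian spatial-domain filter; the kernel width sigma is a made-up parameter, and the correlation step itself is omitted):

```python
import numpy as np
from scipy import ndimage

def prefilter(image: np.ndarray, sigma: float = 1.0) -> np.ndarray:
    """Gaussian low-pass pre-filter applied before correlation.

    Suppresses high-frequency content (where interpolation bias and
    noise-induced error concentrate) at the cost of some contrast.
    """
    return ndimage.gaussian_filter(image.astype(float), sigma=sigma)

# Toy usage: filter both reference and deformed speckle images with the
# same kernel before passing them to a DIC correlation routine.
rng = np.random.default_rng(1)
ref = rng.random((256, 256))          # stand-in for a speckle image
deformed = ref + 0.02 * rng.standard_normal(ref.shape)
ref_f, def_f = prefilter(ref), prefilter(deformed)
```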

  8. Bivariate mass-size relation as a function of morphology as determined by Galaxy Zoo 2 crowdsourced visual classifications

    Science.gov (United States)

    Beck, Melanie; Scarlata, Claudia; Fortson, Lucy; Willett, Kyle; Galloway, Melanie

    2016-01-01

    It is well known that the mass-size distribution evolves as a function of cosmic time and that this evolution is different between passive and star-forming galaxy populations. However, the devil is in the details and the precise evolution is still a matter of debate since this requires careful comparison between similar galaxy populations over cosmic time while simultaneously taking into account changes in image resolution, rest-frame wavelength, and surface brightness dimming in addition to properly selecting representative morphological samples. Here we present the first step in an ambitious undertaking to calculate the bivariate mass-size distribution as a function of time and morphology. We begin with a large sample (~3 × 10^5) of SDSS galaxies at z ~ 0.1. Morphologies for this sample have been determined by Galaxy Zoo crowdsourced visual classifications and we split the sample not only by disk- and bulge-dominated galaxies but also in finer morphology bins such as bulge strength. Bivariate distribution functions are the only way to properly account for biases and selection effects. In particular, we quantify the mass-size distribution with a version of the parametric Maximum Likelihood estimator which has been modified to account for measurement errors as well as upper limits on galaxy sizes.

  9. Measurement Error Affects Risk Estimates for Recruitment to the Hudson River Stock of Striped Bass

    OpenAIRE

    Dennis J. Dunning; Ross, Quentin E.; Munch, Stephan B.; Ginzburg, Lev R.

    2002-01-01

    We examined the consequences of ignoring the distinction between measurement error and natural variability in an assessment of risk to the Hudson River stock of striped bass posed by entrainment at the Bowline Point, Indian Point, and Roseton power plants. Risk was defined as the probability that recruitment of age-1+ striped bass would decline by 80% or more, relative to the equilibrium value, at least once during the time periods examined (1, 5, 10, and 15 years). Measurement error, estimat...

  10. The impact of measurement errors in the identification of regulatory networks

    Directory of Open Access Journals (Sweden)

    Sato João R

    2009-12-01

    Full Text Available Abstract Background There are several studies in the literature depicting measurement error in gene expression data and, separately, several about regulatory network models. However, only a small fraction describe a combination of measurement error in mathematical regulatory networks and show how to identify these networks under different rates of noise. Results This article investigates the effects of measurement error on the estimation of the parameters in regulatory networks. Simulation studies indicate that, in both time series (dependent) and non-time series (independent) data, the measurement error strongly affects the estimated parameters of the regulatory network models, biasing them as predicted by the theory. Moreover, when testing the parameters of the regulatory network models, p-values computed by ignoring the measurement error are not reliable, since the rate of false positives is not controlled under the null hypothesis. In order to overcome these problems, we present an improved version of the Ordinary Least Squares estimator for independent (regression) and dependent (autoregressive) models when the variables are subject to noise. Moreover, measurement error estimation procedures for microarrays are also described. Simulation results also show that both corrected methods perform better than the standard ones (i.e., ignoring measurement error). The proposed methodologies are illustrated using microarray data from lung cancer patients and mouse liver time series data. Conclusions Measurement error dangerously affects the identification of regulatory network models, and thus must be reduced or taken into account in order to avoid erroneous conclusions. This could be one of the reasons for the high biological false positive rates identified in actual regulatory network models.
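
    The paper's corrected estimator is not reproduced here, but the underlying phenomenon, attenuation of regression coefficients under covariate noise, and a classical moment-based correction assuming a known error variance can be sketched as follows (all data simulated):

```python
import numpy as np

rng = np.random.default_rng(42)
n, beta = 5000, 0.8
x = rng.standard_normal(n)                  # true regulator expression
y = beta * x + 0.3 * rng.standard_normal(n) # target gene expression
sigma_u = 0.5                               # measurement-error SD (assumed known)
w = x + sigma_u * rng.standard_normal(n)    # observed, noisy covariate

# Naive OLS is attenuated towards zero by the factor
# lambda = var(x) / (var(x) + sigma_u^2).
beta_naive = np.cov(w, y)[0, 1] / np.var(w, ddof=1)

# Moment correction: replace var(w) by var(w) - sigma_u^2.
beta_corr = np.cov(w, y)[0, 1] / (np.var(w, ddof=1) - sigma_u**2)
print(f"naive {beta_naive:.3f}, corrected {beta_corr:.3f}, true {beta}")
```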

  11. Flexible error-reduction method for shape measurement by temporal phase unwrapping: phase averaging method.

    Science.gov (United States)

    Yong, Liu; Dingfa, Huang; Yong, Jiang

    2012-07-20

    Temporal phase unwrapping is an important method for shape measurement in structured light projection. Its measurement errors mainly come from camera noise and nonlinearity. Analysis shows that least-squares fitting cannot completely eliminate the nonlinear errors, though it can significantly reduce the random errors. To further reduce the measurement errors of current temporal phase unwrapping algorithms, in this paper we propose a phase averaging method (PAM), built on fast classical temporal phase unwrapping algorithms, in which an additional fringe sequence at the highest fringe density is employed during data processing and the phase offset of each set of four frames is carefully chosen according to the period of the nonlinear phase errors. This method decreases both the random errors and the systematic errors through statistical averaging. In addition, the length of the additional fringe sequence can be changed flexibly according to the required measurement precision. Theoretical analysis and simulation results show the validity of the proposed method.
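
    A hedged sketch of the averaging idea (not the paper's full PAM: the offset selection tied to the nonlinearity period is omitted, and the frame sets are assumed to be captured with their offsets already applied). Averaging unit phasors rather than raw wrapped phases avoids 2*pi discontinuities:

```python
import numpy as np

def four_step_phase(frames):
    """Standard four-step phase retrieval, frames shifted by pi/2."""
    i0, i1, i2, i3 = frames
    return np.arctan2(i3 - i1, i0 - i2)

def averaged_phase(frame_sets):
    """Average the phases computed from several four-step frame sets.

    Averaging unit phasors instead of raw phase values avoids
    2*pi wrapping artifacts.
    """
    phasors = [np.exp(1j * four_step_phase(fs)) for fs in frame_sets]
    return np.angle(np.mean(phasors, axis=0))

# Toy one-pixel demo: background A, modulation B, phase 1.0 rad.
A, B, phi = 1.0, 0.5, 1.0
set1 = [A + B * np.cos(phi + k * np.pi / 2) for k in range(4)]
print(averaged_phase([set1]))   # ~1.0
```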

  12. Comparison of Transmission Error Predictions with Noise Measurements for Several Spur and Helical Gears

    Science.gov (United States)

    Houser, Donald R.; Oswald, Fred B.; Valco, Mark J.; Drago, Raymond J.; Lenski, Joseph W., Jr.

    1994-01-01

    Measured sound power data from eight different spur, single and double helical gear designs are compared with predictions of transmission error by the Load Distribution Program. The sound power data was taken from the recent Army-funded Advanced Rotorcraft Transmission project. Tests were conducted in the NASA gear noise rig. Results of both test data and transmission error predictions are made for each harmonic of mesh frequency at several operating conditions. In general, the transmission error predictions compare favorably with the measured noise levels.

  14. Particle Filter with Novel Nonlinear Error Model for Miniature Gyroscope-Based Measurement While Drilling Navigation.

    Science.gov (United States)

    Li, Tao; Yuan, Gannan; Li, Wang

    2016-03-15

    The derivation of a conventional error model for the miniature gyroscope-based measurement while drilling (MGWD) system is based on the assumption that the errors of attitude are small enough so that the direction cosine matrix (DCM) can be approximated or simplified by the errors of small-angle attitude. However, the simplification of the DCM would introduce errors to the navigation solutions of the MGWD system if the initial alignment cannot provide precise attitude, especially for the low-cost microelectromechanical system (MEMS) sensors operated in harsh multilateral horizontal downhole drilling environments. This paper proposes a novel nonlinear error model (NNEM) by the introduction of the error of DCM, and the NNEM can reduce the propagated errors under large-angle attitude error conditions. The zero velocity and zero position are the reference points and the innovations in the states estimation of particle filter (PF) and Kalman filter (KF). The experimental results illustrate that the performance of PF is better than KF and the PF with NNEM can effectively restrain the errors of system states, especially for the azimuth, velocity, and height in the quasi-stationary condition.
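
    For readers unfamiliar with the filter being compared, here is a minimal bootstrap particle filter on a generic one-dimensional nonlinear state-space model; the transition and measurement functions are toy stand-ins, not the MGWD error model:

```python
import numpy as np

rng = np.random.default_rng(7)

def f(x):              # nonlinear state transition (toy stand-in)
    return x + 0.1 * np.sin(x)

def h(x):              # nonlinear measurement function
    return x**2 / 20.0

q, r, n_particles, steps = 0.05, 0.1, 1000, 50
x_true = 0.5
particles = 0.5 + 0.5 * rng.standard_normal(n_particles)
estimates = []
for _ in range(steps):
    x_true = f(x_true) + q * rng.standard_normal()
    z = h(x_true) + r * rng.standard_normal()
    # Propagate, weight by measurement likelihood, resample (bootstrap PF).
    particles = f(particles) + q * rng.standard_normal(n_particles)
    w = np.exp(-0.5 * ((z - h(particles)) / r) ** 2)
    w /= w.sum()
    idx = rng.choice(n_particles, size=n_particles, p=w)
    particles = particles[idx]
    estimates.append(particles.mean())
```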

  15. Particle Filter with Novel Nonlinear Error Model for Miniature Gyroscope-Based Measurement While Drilling Navigation

    Directory of Open Access Journals (Sweden)

    Tao Li

    2016-03-01

    Full Text Available The derivation of a conventional error model for the miniature gyroscope-based measurement while drilling (MGWD) system is based on the assumption that the errors of attitude are small enough so that the direction cosine matrix (DCM) can be approximated or simplified by the errors of small-angle attitude. However, the simplification of the DCM would introduce errors to the navigation solutions of the MGWD system if the initial alignment cannot provide precise attitude, especially for the low-cost microelectromechanical system (MEMS) sensors operated in harsh multilateral horizontal downhole drilling environments. This paper proposes a novel nonlinear error model (NNEM) by the introduction of the error of DCM, and the NNEM can reduce the propagated errors under large-angle attitude error conditions. The zero velocity and zero position are the reference points and the innovations in the states estimation of particle filter (PF) and Kalman filter (KF). The experimental results illustrate that the performance of PF is better than KF and the PF with NNEM can effectively restrain the errors of system states, especially for the azimuth, velocity, and height in the quasi-stationary condition.

  16. Identification and estimation of nonlinear models using two samples with nonclassical measurement errors

    KAUST Repository

    Carroll, Raymond J.

    2010-05-01

    This paper considers identification and estimation of a general nonlinear Errors-in-Variables (EIV) model using two samples. Both samples consist of a dependent variable, some error-free covariates, and an error-prone covariate, for which the measurement error has unknown distribution and could be arbitrarily correlated with the latent true values; and neither sample contains an accurate measurement of the corresponding true variable. We assume that the regression model of interest - the conditional distribution of the dependent variable given the latent true covariate and the error-free covariates - is the same in both samples, but the distributions of the latent true covariates vary with observed error-free discrete covariates. We first show that the general latent nonlinear model is nonparametrically identified using the two samples when both could have nonclassical errors, without either instrumental variables or independence between the two samples. When the two samples are independent and the nonlinear regression model is parameterized, we propose sieve Quasi Maximum Likelihood Estimation (Q-MLE) for the parameter of interest, and establish its root-n consistency and asymptotic normality under possible misspecification, and its semiparametric efficiency under correct specification, with easily estimated standard errors. A Monte Carlo simulation and a data application are presented to show the power of the approach.

  17. Bivariate correlation coefficients in family-type clustered studies.

    Science.gov (United States)

    Luo, Jingqin; D'Angela, Gina; Gao, Feng; Ding, Jimin; Xiong, Chengjie

    2015-11-01

    We propose a unified approach based on a bivariate linear mixed effects model to estimate three types of bivariate correlation coefficients (BCCs), as well as the associated variances between two quantitative variables in cross-sectional data from a family-type clustered design. These BCCs are defined at different levels of experimental units including clusters (e.g., families) and subjects within clusters and assess different aspects of the relationships between two variables. We study likelihood-based inferences for these BCCs, and provide easy implementation using the standard software SAS. Unlike several existing BCC estimators in the literature on clustered data, our approach can seamlessly handle two major analytic challenges arising from a family-type clustered design: (1) many families may consist of only one single subject; (2) one of the paired measurements may be missing for some subjects. Hence, our approach maximizes the use of data from all subjects (even those missing one of the two variables to be correlated) from all families, regardless of family size. We also conduct extensive simulations to show that our estimators are superior to existing estimators in handling missing data and/or imbalanced family sizes and that the proposed Wald test maintains good size and power for hypothesis testing. Finally, we analyze a real-world Alzheimer's disease dataset from a family clustered study to investigate the BCCs across different modalities of disease markers including cognitive tests, cerebrospinal fluid biomarkers, and neuroimaging biomarkers.

  18. COMPENSATION OF MEASUREMENT ERRORS WHEN REDUCING LINEAR DIMENSIONS OF THE KELVIN PROBE

    Directory of Open Access Journals (Sweden)

    A. K. Tyavlovsky

    2013-01-01

    Full Text Available The study is based on the results of modeling a measurement circuit containing a vibrating-plate capacitor, using a complex-harmonic analysis technique. The low normalized frequency of a small-sized scanning Kelvin probe leads to a high distortion factor in the probe's measurement signal, which in turn leads to high measurement errors. The way to lower the measurement errors is to register the measurement signal at its second harmonic and to control the probe-to-sample gap by monitoring the ratio between the second and first harmonics' amplitudes.
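
    A sketch of the harmonic-ratio measurement this suggests (single-bin Fourier, i.e. lock-in style, demodulation at the first and second harmonics of a hypothetical probe signal; the frequencies and distortion level are made up):

```python
import numpy as np

def harmonic_amplitudes(signal, fs, f0, harmonics=(1, 2)):
    """Single-bin Fourier (lock-in style) amplitude at each k*f0."""
    t = np.arange(signal.size) / fs
    return {k: 2.0 * abs(np.mean(signal * np.exp(-2j * np.pi * k * f0 * t)))
            for k in harmonics}

# Toy probe signal: distorted sine at the probe vibration frequency f0.
fs, f0 = 100_000.0, 500.0
t = np.arange(20_000) / fs
sig = 1.0 * np.sin(2 * np.pi * f0 * t) + 0.3 * np.sin(4 * np.pi * f0 * t)
amps = harmonic_amplitudes(sig, fs, f0)
print(amps[2] / amps[1])   # second/first harmonic ratio, ~0.3
```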

  19. The holographic reconstructing algorithm and its error analysis about phase-shifting phase measurement

    Institute of Scientific and Technical Information of China (English)

    LU Xiaoxu; ZHONG Liyun; ZHANG Yimo

    2007-01-01

    Phase-shifting measurement and its error estimation method were studied according to the holographic principle. A function of synchronous superposition of the object complex amplitude reconstructed from N-step phase-shifting through one integral period (the N-step phase-shifting function for short) is proposed. In N-step phase-shifting measurement, the interferograms are treated as a series of in-line holograms and the reference beam is an ideal parallel plane wave, so the N-step phase-shifting function can be obtained by multiplying each interferogram by the original reference wave. Under ideal conditions, the proposed method is a kind of synchronous superposition algorithm in which the complex amplitude is separated, measured and superposed. When errors exist in the measurement, the result of the N-step phase-shifting function is the optimal expected value of the least-squares fitting method. In the above method, the N+1-step phase-shifting function can be obtained from the N-step phase-shifting function. It shows that the N-step phase-shifting function can be separated into two parts: the ideal N-step phase-shifting function and its errors. Under this understanding of the N+1-step phase-shifting function, the phase-shifting errors in N-step phase-shifting measurement can be treated in the same way as the relative errors of amplitude and intensity. This restricts the difficulties of error estimation in phase-shifting phase measurement. Meanwhile, a maximum error estimation method for phase-shifting phase measurement and its formula are proposed.
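
    As we read the described synchronous superposition, each interferogram (captured at reference phase 2*pi*n/N) is multiplied by the corresponding reference phasor and the products are summed, which demodulates the object term. A one-pixel NumPy check under ideal conditions:

```python
import numpy as np

def n_step_complex_amplitude(interferograms):
    """Synchronous superposition over one integral period.

    interferograms[n] is assumed captured with reference phase 2*pi*n/N;
    the weighted sum demodulates the object term O = |O|*exp(i*phi).
    """
    N = len(interferograms)
    deltas = 2.0 * np.pi * np.arange(N) / N
    return sum(I * np.exp(1j * d) for I, d in zip(interferograms, deltas)) / N

# One-pixel check: I_n = |R|^2 + |O|^2 + 2*|R||O|*cos(phi - delta_n).
phi, O, R = 0.7, 0.8, 1.0
frames = [R**2 + O**2 + 2 * R * O * np.cos(phi - d)
          for d in 2 * np.pi * np.arange(4) / 4]
print(np.angle(n_step_complex_amplitude(frames)))   # ~0.7
```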

  20. Design and application of location error teaching aids in measuring and visualization

    Directory of Open Access Journals (Sweden)

    Yu Fengning

    2015-01-01

    Full Text Available As an abstract concept, 'location error' is considered an important element that is difficult to understand and apply. This paper designs and develops an instrument to measure location error. The location error is affected by the choice of positioning method and datum, so the positioning element is selected by rotating a disk. The tiny movement is transferred by a grating ruler and, via PLC programming, the error is shown on a text display, which also helps students understand the positioning principle and related concepts of location error. After comparing measurement results with theoretical calculations and analyzing the measurement accuracy, the paper concludes that the teaching aid is reliable and of high value for wider adoption.

  1. Errors in Thermographic Camera Measurement Caused by Known Heat Sources and Depth Based Correction

    Directory of Open Access Journals (Sweden)

    Mark Christian E. Manuel

    2016-03-01

    Full Text Available Thermal imaging has been shown to be a better tool for the quantitative measurement of temperature than single-spot infrared thermometers. However, thermographic cameras can encounter errors in acquiring accurate temperature measurements in the presence of other environmental heat sources. Some of these errors arise from the inability of the thermal camera to detect objects and features in the infrared domain. In this paper, the thermal image is registered to a stereo image from a Kinect system prior to depth-based correction. Experiments demonstrating the error are presented, together with the determination of the measurement errors under prior knowledge of the thermographed scene. The proposed correction scheme improves the accuracy of the thermal image through augmentation using the Kinect system.

  2. Determining sexual dimorphism in frog measurement data: integration of statistical significance, measurement error, effect size and biological significance

    Directory of Open Access Journals (Sweden)

    Hayek Lee-Ann C.

    2005-01-01

    Full Text Available Several analytic techniques have been used to determine sexual dimorphism in vertebrate morphological measurement data, with no emergent consensus on which technique is superior. A further confounding problem for frog data is the existence of considerable measurement error. To determine dimorphism, we examine a single hypothesis (H0: equal means) for two groups (females and males). We demonstrate that frog measurement data meet the assumptions for clearly defined statistical hypothesis testing with statistical linear models, rather than those of exploratory multivariate techniques such as principal components, correlation or correspondence analysis. In order to distinguish biological from statistical significance of hypotheses, we propose a new protocol that incorporates measurement error and effect size. Measurement error is evaluated with a novel measurement error index. Effect size, widely used in the behavioral sciences and in meta-analysis studies in biology, proves to be the most useful single metric to evaluate whether statistically significant results are biologically meaningful. Definitions of small, medium, and large effect sizes specifically for frog measurement data are provided. Examples with measurement data for species of the frog genus Leptodactylus are presented. The new protocol is recommended not only for evaluating sexual dimorphism in frog data but for any animal measurement data for which the measurement error index and observed or a priori effect sizes can be calculated.
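
    The paper's measurement error index is not reproduced here, but the effect-size side of the protocol can be illustrated with the standard Cohen's d for a two-group comparison (the snout-vent length data below are hypothetical):

```python
import numpy as np

def cohens_d(a, b):
    """Effect size for a two-group mean comparison, using the pooled SD."""
    na, nb = len(a), len(b)
    pooled = np.sqrt(((na - 1) * np.var(a, ddof=1) +
                      (nb - 1) * np.var(b, ddof=1)) / (na + nb - 2))
    return (np.mean(a) - np.mean(b)) / pooled

# Hypothetical snout-vent lengths (mm) for females and males.
females = np.array([48.1, 50.3, 47.9, 51.0, 49.4])
males = np.array([44.2, 45.8, 43.9, 46.5, 45.1])
print(f"d = {cohens_d(females, males):.2f}")
```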

  3. A stochastic model for the analysis of bivariate longitudinal AIDS data.

    Science.gov (United States)

    Sy, J P; Taylor, J M; Cumberland, W G

    1997-06-01

    We present a model for multivariate repeated measures that incorporates random effects, correlated stochastic processes, and measurement errors. The model is a multivariate generalization of the model for univariate longitudinal data given by Taylor, Cumberland, and Sy (1994, Journal of the American Statistical Association 89, 727-736). The stochastic process used in this paper is the multivariate integrated Ornstein-Uhlenbeck (OU) process, which includes Brownian motion and a random effects model as special limiting cases. This process is an underlying continuous-time autoregressive order [AR(1)] process for the derivatives of the multivariate observations. The model allows unequally spaced observations and missing values for some of the variables. We analyze CD4 T-cell and beta-2-microglobulin measurements of the seroconverters at multiple time points from the Los Angeles section of the Multicenter AIDS Cohort Study. The model allows us to investigate the relationship between CD4 and beta-2-microglobulin through the correlations between their random effects and their serial correlation. The data suggest that CD4 and beta-2-microglobulin follow a bivariate Brownian motion process. The fit of the model implies that an increase in beta-2-microglobulin is associated with a decrease in future CD4 but not vice versa, agreeing with immunologic postulates about the relationship between these two variables.
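
    A univariate sketch of the core ingredient (an integrated OU process, here simulated by Euler discretization; the paper's process is the multivariate version, and the parameter values below are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(3)
dt, steps = 0.01, 5000
alpha, sigma = 1.0, 1.0   # mean reversion and diffusion of the OU derivative
v, x = 0.0, 0.0           # OU velocity and its integral (the observed track)
path = np.empty(steps)
for t in range(steps):
    # OU process on the derivative; its integral is a smooth track.
    # Per the abstract, this family includes Brownian motion and a
    # random-effects model as special limiting cases.
    v += -alpha * v * dt + sigma * np.sqrt(dt) * rng.standard_normal()
    x += v * dt
    path[t] = x
```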

  4. Bayesian Data Analysis with the Bivariate Hierarchical Ornstein-Uhlenbeck Process Model.

    Science.gov (United States)

    Oravecz, Zita; Tuerlinckx, Francis; Vandekerckhove, Joachim

    2016-01-01

    In this paper, we propose a multilevel process modeling approach to describing individual differences in within-person changes over time. To characterize changes within an individual, repeated measures over time are modeled in terms of three person-specific parameters: a baseline level, intraindividual variation around the baseline, and regulatory mechanisms adjusting toward baseline. Variation due to measurement error is separated from meaningful intraindividual variation. The proposed model allows for the simultaneous analysis of longitudinal measurements of two linked variables (bivariate longitudinal modeling) and captures their relationship via two person-specific parameters. Relationships between explanatory variables and model parameters can be studied in a one-stage analysis, meaning that model parameters and regression coefficients are estimated simultaneously. Mathematical details of the approach, including a description of the core process model-the Ornstein-Uhlenbeck model-are provided. We also describe a user friendly, freely accessible software program that provides a straightforward graphical interface to carry out parameter estimation and inference. The proposed approach is illustrated by analyzing data collected via self-reports on affective states.

  5. An AFM-based methodology for measuring axial and radial error motions of spindles

    Science.gov (United States)

    Geng, Yanquan; Zhao, Xuesen; Yan, Yongda; Hu, Zhenjiang

    2014-05-01

    This paper presents a novel atomic force microscopy (AFM)-based methodology for measurement of axial and radial error motions of a high precision spindle. Based on a modified commercial AFM system, the AFM tip is employed as a cutting tool by which nano-grooves are scratched on a flat surface with the rotation of the spindle. By extracting the radial motion data of the spindle from the scratched nano-grooves, the radial error motion of the spindle can be calculated after subtracting the tilting errors from the original measurement data. Through recording the variation of the PZT displacement in the Z direction in AFM tapping mode during the spindle rotation, the axial error motion of the spindle can be obtained. Moreover the effects of the nano-scratching parameters on the scratched grooves, the tilting error removal method for both conditions and the method of data extraction from the scratched groove depth are studied in detail. The axial error motion of 124 nm and the radial error motion of 279 nm of a commercial high precision air bearing spindle are achieved by this novel method, which are comparable with the values provided by the manufacturer, verifying this method. This approach does not need an expensive standard part as in most conventional measurement approaches. Moreover, the axial and radial error motions of the spindle can both be obtained, indicating that this is a potential means of measuring the error motions of the high precision moving parts of ultra-precision machine tools in the future.

  6. Error analysis of cine phase contrast MRI velocity measurements used for strain calculation.

    Science.gov (United States)

    Jensen, Elisabeth R; Morrow, Duane A; Felmlee, Joel P; Odegard, Gregory M; Kaufman, Kenton R

    2015-01-02

    Cine Phase Contrast (CPC) MRI offers unique insight into localized skeletal muscle behavior by providing the ability to quantify muscle strain distribution during cyclic motion. Muscle strain is obtained by temporally integrating and spatially differentiating CPC-encoded velocity. The aim of this study was to quantify CPC measurement accuracy and precision and to describe error propagation into displacement and strain. Using an MRI-compatible jig to move a B-gel phantom within a 1.5 T MRI bore, CPC-encoded velocities were collected. The three orthogonal encoding gradients (through plane, frequency, and phase) were evaluated independently in post-processing. Two systematic error types were corrected: eddy current-induced bias and calibration-type error. Measurement accuracy and precision were quantified before and after removal of systematic error. Through plane- and frequency-encoded data accuracy were within 0.4 mm/s after removal of systematic error - a 70% improvement over the raw data. Corrected phase-encoded data accuracy was within 1.3 mm/s. Measured random error was between 1 to 1.4 mm/s, which followed the theoretical prediction. Propagation of random measurement error into displacement and strain was found to depend on the number of tracked time segments, time segment duration, mesh size, and dimensional order. To verify this, theoretical predictions were compared to experimentally calculated displacement and strain error. For the parameters tested, experimental and theoretical results aligned well. Random strain error approximately halved with a two-fold mesh size increase, as predicted. Displacement and strain accuracy were within 2.6 mm and 3.3%, respectively. These results can be used to predict the accuracy and precision of displacement and strain in user-specific applications.

  7. Experimental test of error-disturbance uncertainty relations by weak measurement.

    Science.gov (United States)

    Kaneda, Fumihiro; Baek, So-Young; Ozawa, Masanao; Edamatsu, Keiichi

    2014-01-17

    We experimentally test the error-disturbance uncertainty relation (EDR) in generalized, strength-variable measurement of a single photon polarization qubit, making use of weak measurement that keeps the initial signal state practically unchanged. We demonstrate that the Heisenberg EDR is violated, yet the Ozawa and Branciard EDRs are valid throughout the range of our measurement strength.

  8. Correcting for multivariate measurement error by regression calibration in meta-analyses of epidemiological studies

    DEFF Research Database (Denmark)

    Tybjærg-Hansen, Anne

    2009-01-01

    Within-person variability in measured values of multiple risk factors can bias their associations with disease. The multivariate regression calibration (RC) approach can correct for such measurement error and has been applied to studies in which true values or independent repeat measurements of t...

  9. [Instrumentation for blood pressure measurements: historical aspects, concepts and sources of error].

    Science.gov (United States)

    de Araujo, T L; Arcuri, E A; Martins, E

    1998-04-01

    According to the International Council of Nurses the measurement of blood pressure is the procedure most performed by nurses in all the world. The aim of this study is to analyse the polemical aspects of instruments used in blood pressure measurement. Considering the analyses of the literature and the American Heart Association Recommendations, the main source of errors when measuring blood pressure are discussed.

  10. Displacement sensor with controlled measuring force and its error analysis and precision verification

    Science.gov (United States)

    Yang, Liangen; Wang, Xuanze; Lv, Wei

    2011-05-01

    A displacement sensor with controlled measuring force, together with its error analysis and precision verification, is discussed in this paper. The displacement sensor consists of a high-resolution electric induction transducer and a voice coil motor (VCM). The measuring principles, the structure, the method for enlarging the measuring range, and the signal processing of the sensor are discussed. The main error sources, such as parallelism error and inclination of the framework due to unequal lengths of the leaf springs, rigidity of the measuring rods, shape error of the stylus, friction between the iron core and other parts, damping of the leaf springs, variation of voltage, linearity of the induction transducer, resolution and stability, are analyzed. A measuring system for surface topography with a large measuring range is constructed based on the displacement sensor and a 2D moving platform. The measuring precision and stability of the measuring system are verified. The measuring force of the sensor can be controlled at the μN level during surface topography measurement and hardly changes. It has been used in measurements of bearing balls, bullet marks, etc. It has a measuring range of up to 2 mm and nm-level precision.

  11. The analysis and measurement of motion errors of the linear slide in fast tool servo diamond turning machine

    Directory of Open Access Journals (Sweden)

    Xu Zhang

    2015-03-01

    Full Text Available This article proposes a novel method for identifying the motion errors (mainly straightness error and angular error of a linear slide, which is based on the laser interferometry technique integrated with the shifting method. First, the straightness error of a linear slide incorporated with angular error (pitch error in the vertical direction and yaw error in the horizontal direction is schematically explained. Then, a laser interferometry–based system is constructed to measure the motion errors of a linear slide, and an algorithm of error separation technique for extracting the straightness error, angular error, and tilt angle error caused by the motion of the reflector is developed. In the proposed method, the reflector is mounted on the slide moving along the guideway. The light-phase variation of two interfering laser beams can identify the lateral translation error of the slide. The differential outputs sampled with shifting initial point at the same datum line are applied to evaluate the angular error of the slide. Furthermore, the yaw error of the slide is measured by a laser interferometer in laboratory environment and compared with the evaluated values. Experimental results demonstrate that the proposed method possesses the advantages of reducing the effects caused by the assembly error and the tilt angle errors caused by movement of the reflector, adapting to long- or short-range measurement, and operating the measurement experiment conveniently and easily.

  12. A Universal Generator for Bivariate Log-Concave Distributions

    OpenAIRE

    Hörmann, Wolfgang

    1995-01-01

    Different universal (also called automatic or black-box) methods have been suggested to sample from univariate log-concave distributions. The description of a universal generator for bivariate distributions has not been published up to now. The new algorithm for bivariate log-concave distributions is based on the method of transformed density rejection. In order to construct a hat function for a rejection algorithm the bivariate density is transformed by the logarithm into a concave function....

  13. Experimental validation of error in temperature measurements in thin walled ductile iron castings

    DEFF Research Database (Denmark)

    Pedersen, Karl Martin; Tiedje, Niels Skat

    2007-01-01

    An experimental analysis has been performed to validate the measurement error of cooling curves measured in thin-walled ductile cast iron. Specially designed thermocouples with Ø0.2 mm thermocouple wire in a Ø1.6 mm ceramic tube were used for the experiments. Temperatures were measured in plates... to a level about 20 °C lower than the actual temperature in the casting. Factors affecting the measurement error (oxide layer on the thermocouple wire, penetration into the ceramic tube and variation in placement of the thermocouple) are discussed. Finally, it is shown how a useful cooling curve may be obtained...

  14. Adjustment of measurements with multiplicative errors: error analysis, estimates of the variance of unit weight, and effect on volume estimation from LiDAR-type digital elevation models.

    Science.gov (United States)

    Shi, Yun; Xu, Peiliang; Peng, Junhuan; Shi, Chuang; Liu, Jingnan

    2014-01-10

    Modern observation technology has verified that measurement errors can be proportional to the true values of measurements, as in GPS, VLBI baselines and LiDAR. Observational models of this type are called multiplicative error models. This paper extends the work of Xu and Shimada (2000) on multiplicative error models to the analytical error analysis of quantities of practical interest and to estimates of the variance of unit weight. We analytically derive the variance-covariance matrices of the three least squares (LS) adjustments, the adjusted measurements and the corrections of measurements in multiplicative error models. For quality evaluation, we construct five estimators for the variance of unit weight in association with the three LS adjustment methods. Although LiDAR measurements are contaminated with multiplicative random errors, LiDAR-based digital elevation models (DEMs) have been constructed as if the errors were additive. We simulate a model landslide, assumed to be surveyed with LiDAR, and investigate the effect of LiDAR-type multiplicative measurement errors on DEM construction and on the estimate of landslide mass volume from the constructed DEM.
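
    The paper's three LS adjustments and five variance estimators are not reproduced here; the following sketch only illustrates a multiplicative-error observation model and an iteratively reweighted least-squares fit, in which the weights follow from the variance being proportional to the squared signal (all values simulated):

```python
import numpy as np

rng = np.random.default_rng(11)
n = 500
x = np.linspace(1.0, 10.0, n)
a, b = 2.0, 0.5
y_true = a + b * x
y = y_true * (1.0 + 0.05 * rng.standard_normal(n))  # error proportional to value

# Iteratively reweighted LS: var(y_i) ~ (a + b*x_i)^2, so weight by 1/fit^2.
A = np.column_stack([np.ones(n), x])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)         # ordinary LS start
for _ in range(5):
    w = 1.0 / (A @ coef) ** 2
    sw = np.sqrt(w)
    coef, *_ = np.linalg.lstsq(A * sw[:, None], y * sw, rcond=None)
print(coef)   # ~ [2.0, 0.5]
```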

  15. A new accuracy measure based on bounded relative error for time series forecasting.

    Science.gov (United States)

    Chen, Chao; Twycross, Jamie; Garibaldi, Jonathan M

    2017-01-01

    Many accuracy measures have been proposed in the past for time series forecasting comparisons. However, many of these measures suffer from one or more issues such as poor resistance to outliers and scale dependence. In this paper, while summarising commonly used accuracy measures, a special review is made on the symmetric mean absolute percentage error. Moreover, a new accuracy measure called the Unscaled Mean Bounded Relative Absolute Error (UMBRAE), which combines the best features of various alternative measures, is proposed to address the common issues of existing measures. A comparative evaluation on the proposed and related measures has been made with both synthetic and real-world data. The results indicate that the proposed measure, with user selectable benchmark, performs as well as or better than other measures on selected criteria. Though it has been commonly accepted that there is no single best accuracy measure, we suggest that UMBRAE could be a good choice to evaluate forecasting methods, especially for cases where measures based on geometric mean of relative errors, such as the geometric mean relative absolute error, are preferred.
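
    A sketch of the measure as we read its definition (BRAE_t = |e_t| / (|e_t| + |e*_t|) against a benchmark forecast error e*_t, averaged and then unscaled via MBRAE / (1 - MBRAE); cases where both errors are zero would need special handling):

```python
import numpy as np

def umbrae(actual, forecast, benchmark):
    """Unscaled Mean Bounded Relative Absolute Error (our reading).

    BRAE_t is bounded in [0, 1]; after averaging, the unscaling step
    maps the mean back so that values < 1 indicate the forecast
    outperforms the benchmark.
    """
    e = np.abs(np.asarray(actual) - np.asarray(forecast))
    e_star = np.abs(np.asarray(actual) - np.asarray(benchmark))
    brae = e / (e + e_star)
    mbrae = brae.mean()
    return mbrae / (1.0 - mbrae)

y = np.array([10.0, 12.0, 11.0, 13.0])
print(umbrae(y, y + 0.5, y + 1.0))   # 0.5 < 1: better than the benchmark
```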

  16. Task committee on experimental uncertainty and measurement errors in hydraulic engineering: An update

    Science.gov (United States)

    Wahlin, B.; Wahl, T.; Gonzalez-Castro, J. A.; Fulford, J.; Robeson, M.

    2005-01-01

    As part of their long range goals for disseminating information on measurement techniques, instrumentation, and experimentation in the field of hydraulics, the Technical Committee on Hydraulic Measurements and Experimentation formed the Task Committee on Experimental Uncertainty and Measurement Errors in Hydraulic Engineering in January 2003. The overall mission of this Task Committee is to provide information and guidance on the current practices used for describing and quantifying measurement errors and experimental uncertainty in hydraulic engineering and experimental hydraulics. The final goal of the Task Committee on Experimental Uncertainty and Measurement Errors in Hydraulic Engineering is to produce a report on the subject that will cover: (1) sources of error in hydraulic measurements, (2) types of experimental uncertainty, (3) procedures for quantifying error and uncertainty, and (4) special practical applications that range from uncertainty analysis for planning an experiment to estimating uncertainty in flow monitoring at gaging sites and hydraulic structures. Currently, the Task Committee has adopted the first order variance estimation method outlined by Coleman and Steele as the basic methodology to follow when assessing the uncertainty in hydraulic measurements. In addition, the Task Committee has begun to develop its report on uncertainty in hydraulic engineering. This paper is intended as an update on the Task Committee's overall progress. Copyright ASCE 2005.

  17. Phase error analysis and compensation considering ambient light for phase measuring profilometry

    Science.gov (United States)

    Zhou, Ping; Liu, Xinran; He, Yi; Zhu, Tongjing

    2014-04-01

    The accuracy of a phase measuring profilometry (PMP) system based on the phase-shifting method is inevitably susceptible to the gamma non-linearity of the projector-camera pair and to uncertain ambient light. Although many gamma models and phase error compensation methods have been developed, the effect of ambient light has never been made explicit. In this paper, we perform theoretical analysis and experiments on phase error compensation taking account of both gamma non-linearity and uncertain ambient light. First of all, a mathematical phase error model is proposed to illustrate the origin of the phase error in detail. We propose that the phase error is related not only to the gamma non-linearity of the projector-camera pair, but also to the ratio of intensity modulation to average intensity in the fringe patterns captured by the camera, which is affected by the ambient light. Subsequently, an accurate phase error compensation algorithm is proposed based on the mathematical model, where the relationship between phase error and ambient light is illustrated. Experimental results with a four-step phase-shifting PMP system show that the proposed algorithm can alleviate the phase error effectively even in the presence of ambient light.

  18. Bivariate Rayleigh Distribution and its Properties

    Directory of Open Access Journals (Sweden)

    Ahmad Saeed Akhter

    2007-01-01

    Full Text Available Rayleigh (1880) observed that sea waves follow no law because of the complexities of the sea, but it has been seen that the probability distributions of wave heights, wave lengths, wave-induced pitch, and the wave and heave motions of ships follow the Rayleigh distribution. At present, several different quantities are in use for describing the state of the sea; for example, the mean height of the waves, the root mean square height, the height of the "significant waves" (the mean height of the highest one-third of all the waves), the maximum height over a given interval of time, and so on. At present, the shipbuilding industry knows less than any other construction industry about the service conditions under which it must operate. Only small efforts have been made to establish the stresses and motions and to incorporate the results of such studies into design. This is due to the complexity of the problem caused by the extensive variability of the sea and the corresponding response of ships. Although the problem appears formidable, it is possible to predict service conditions for ships in an orderly and relatively simple manner. Rayleigh (1880) derived the distribution from the amplitude of sound resulting from many independent sources. This distribution is also connected with one or two dimensions and is sometimes referred to as the "random walk" frequency distribution. The Rayleigh distribution can be derived from the bivariate normal distribution when the variates are independent and have equal variances. We construct a bivariate Rayleigh distribution with Rayleigh marginal distribution functions and discuss its fundamental properties.
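
    The construction mentioned in the last sentences is easy to check by Monte Carlo: the radius of a pair of independent, zero-mean, equal-variance normal variates is Rayleigh distributed. A small NumPy verification against the Rayleigh CDF:

```python
import numpy as np

rng = np.random.default_rng(5)
sigma, n = 2.0, 200_000
x = sigma * rng.standard_normal(n)
y = sigma * rng.standard_normal(n)
r = np.hypot(x, y)   # Rayleigh(sigma) by construction

# Compare the empirical CDF with F(r) = 1 - exp(-r^2 / (2*sigma^2)).
for q in (1.0, 2.0, 4.0):
    emp = (r <= q).mean()
    theory = 1.0 - np.exp(-q**2 / (2.0 * sigma**2))
    print(f"r<={q}: empirical {emp:.4f}, theoretical {theory:.4f}")
```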

  19. Positive phase error from parallel conductance in tetrapolar bio-impedance measurements and its compensation

    Directory of Open Access Journals (Sweden)

    Ivan M Roitt

    2010-01-01

    Full Text Available Bioimpedance measurements are of great use and can provide considerable insight into biological processes. However, there are a number of possible sources of measurement error that must be considered. The most dominant source of error is found in bipolar measurements, where electrode polarisation effects are superimposed on the true impedance of the sample. Even with the tetrapolar approach that is commonly used to circumvent this issue, other errors can persist. Here we characterise the positive phase and the rise in impedance magnitude with frequency that can result from the presence of any parallel conductive pathways in the measurement set-up. It is shown that fitting experimental data to an equivalent electrical circuit model allows for accurate determination of the true sample impedance, as validated through finite element modelling (FEM) of the measurement chamber. Finally, the model is used to extract dispersion information from cell cultures to characterise their growth.

  20. Correcting for Measurement Error in Time-Varying Covariates in Marginal Structural Models.

    Science.gov (United States)

    Kyle, Ryan P; Moodie, Erica E M; Klein, Marina B; Abrahamowicz, Michał

    2016-08-01

    Unbiased estimation of causal parameters from marginal structural models (MSMs) requires a fundamental assumption of no unmeasured confounding. Unfortunately, the time-varying covariates used to obtain inverse probability weights are often error-prone. Although substantial measurement error in important confounders is known to undermine control of confounders in conventional unweighted regression models, this issue has received comparatively limited attention in the MSM literature. Here we propose a novel application of the simulation-extrapolation (SIMEX) procedure to address measurement error in time-varying covariates, and we compare 2 approaches. The direct approach to SIMEX-based correction targets outcome model parameters, while the indirect approach corrects the weights estimated using the exposure model. We assess the performance of the proposed methods in simulations under different clinically plausible assumptions. The simulations demonstrate that measurement errors in time-dependent covariates may induce substantial bias in MSM estimators of causal effects of time-varying exposures, and that both proposed SIMEX approaches yield practically unbiased estimates in scenarios featuring low-to-moderate degrees of error. We illustrate the proposed approach in a simple analysis of the relationship between sustained virological response and liver fibrosis progression among persons infected with hepatitis C virus, while accounting for measurement error in γ-glutamyltransferase, using data collected in the Canadian Co-infection Cohort Study from 2003 to 2014.
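
    SIMEX itself is generic; the sketch below applies it to a simple linear regression slope (not to the marginal structural model setting of the paper), assuming the measurement error variance sigma_u^2 is known, with quadratic extrapolation back to lambda = -1:

```python
import numpy as np

rng = np.random.default_rng(21)
n, beta, sigma_u = 2000, 1.0, 0.6
x = rng.standard_normal(n)
y = beta * x + 0.2 * rng.standard_normal(n)
w = x + sigma_u * rng.standard_normal(n)      # error-prone covariate

def ols_slope(cov, resp):
    return np.cov(cov, resp)[0, 1] / np.var(cov, ddof=1)

# SIMEX: add extra noise of variance lambda*sigma_u^2, track the slope,
# then extrapolate the trend back to lambda = -1 (the error-free case).
lambdas = np.array([0.0, 0.5, 1.0, 1.5, 2.0])
slopes = []
for lam in lambdas:
    sims = [ols_slope(w + np.sqrt(lam) * sigma_u * rng.standard_normal(n), y)
            for _ in range(50)]
    slopes.append(np.mean(sims))
quad = np.polyfit(lambdas, slopes, 2)          # quadratic extrapolant
beta_simex = np.polyval(quad, -1.0)
print(f"naive {slopes[0]:.3f}, SIMEX {beta_simex:.3f}, true {beta}")
```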

  1. Error reduction by combining strapdown inertial measurement units in a baseball stitch

    Science.gov (United States)

    Tracy, Leah

    A poor musical performance is rarely due to an inferior instrument. When a device is under performing, the temptation is to find a better device or a new technology to achieve performance objectives; however, another solution may be improving how existing technology is used through a better understanding of device characteristics, i.e., learning to play the instrument better. This thesis explores improving position and attitude estimates of inertial navigation systems (INS) through an understanding of inertial sensor errors, manipulating inertial measurement units (IMUs) to reduce that error and multisensor fusion of multiple IMUs to reduce error in a GPS denied environment.

  2. Active and passive compensation of APPLE II-introduced multipole errors through beam-based measurement

    Energy Technology Data Exchange (ETDEWEB)

    Chung, Ting-Yi; Huang, Szu-Jung; Fu, Huang-Wen; Chang, Ho-Ping; Chang, Cheng-Hsiang [National Synchrotron Radiation Research Center, Hsinchu Science Park, Hsinchu 30076, Taiwan (China); Hwang, Ching-Shiang [National Synchrotron Radiation Research Center, Hsinchu Science Park, Hsinchu 30076, Taiwan (China); Department of Electrophysics, National Chiao Tung University, Hsinchu 30050, Taiwan (China)

    2016-08-01

    The effect of an APPLE II-type elliptically polarized undulator (EPU) on the beam dynamics were investigated using active and passive methods. To reduce the tune shift and improve the injection efficiency, dynamic multipole errors were compensated using L-shaped iron shims, which resulted in stable top-up operation for a minimum gap. The skew quadrupole error was compensated using a multipole corrector, which was located downstream of the EPU for minimizing betatron coupling, and it ensured the enhancement of the synchrotron radiation brightness. The investigation methods, a numerical simulation algorithm, a multipole error correction method, and the beam-based measurement results are discussed.

  3. Bayesian Semiparametric Density Deconvolution in the Presence of Conditionally Heteroscedastic Measurement Errors

    KAUST Repository

    Sarkar, Abhra

    2014-10-02

    We consider the problem of estimating the density of a random variable when precise measurements on the variable are not available, but replicated proxies contaminated with measurement error are available for sufficiently many subjects. Under the assumption of additive measurement errors this reduces to a problem of deconvolution of densities. Deconvolution methods often make restrictive and unrealistic assumptions about the density of interest and the distribution of measurement errors, e.g., normality and homoscedasticity and thus independence from the variable of interest. This article relaxes these assumptions and introduces novel Bayesian semiparametric methodology based on Dirichlet process mixture models for robust deconvolution of densities in the presence of conditionally heteroscedastic measurement errors. In particular, the models can adapt to asymmetry, heavy tails and multimodality. In simulation experiments, we show that our methods vastly outperform a recent Bayesian approach based on estimating the densities via mixtures of splines. We apply our methods to data from nutritional epidemiology. Even in the special case when the measurement errors are homoscedastic, our methodology is novel and dominates other methods that have been proposed previously. Additional simulation results, instructions on getting access to the data set and R programs implementing our methods are included as part of online supplemental materials.

  4. Determination of error measurement by means of the basic magnetization curve

    Science.gov (United States)

    Lankin, M. V.; Lankin, A. M.

    2016-04-01

    The article describes the implementation of a methodology for fault detection in electric cutting machines by means of the basic magnetization curve. The basic magnetization curve, as an integral characteristic of the machine's electrical operation, allows one to identify the fault type. In this process, estimating the measurement error of the basic magnetization curve plays a major role, since inaccuracies in this characteristic can have a deleterious effect.

  5. Thresholds for Correcting Errors, Erasures, and Faulty Syndrome Measurements in Degenerate Quantum Codes.

    Science.gov (United States)

    Dumer, Ilya; Kovalev, Alexey A; Pryadko, Leonid P

    2015-07-31

    We suggest a technique for constructing lower (existence) bounds for the fault-tolerant threshold to scalable quantum computation applicable to degenerate quantum codes with sublinear distance scaling. We give explicit analytic expressions combining probabilities of erasures, depolarizing errors, and phenomenological syndrome measurement errors for quantum low-density parity-check codes with logarithmic or larger distances. These threshold estimates are parametrically better than the existing analytical bound based on percolation.

  6. Measurement-device-independent quantum key distribution with source state errors and statistical fluctuation

    Science.gov (United States)

    Jiang, Cong; Yu, Zong-Wen; Wang, Xiang-Bin

    2017-03-01

    We show how to calculate the secure final key rate in the four-intensity decoy-state measurement-device-independent quantum key distribution protocol with both source errors and statistical fluctuations with a certain failure probability. Our results rely only on the range of a few parameters in the source state. All imperfections in this protocol have been taken into consideration without assuming any specific error patterns of the source.

  7. A newly conceived cylinder measuring machine and methods that eliminate the spindle errors

    Science.gov (United States)

    Vissiere, A.; Nouira, H.; Damak, M.; Gibaru, O.; David, J.-M.

    2012-09-01

    Advanced manufacturing processes require improving dimensional metrology applications to reach a nanometric accuracy level. Such measurements may be carried out using conventional highly accurate roundness measuring machines. On these machines, the metrology loop goes through the probing and the mechanical guiding elements. Hence, external forces, strain and thermal expansion are transmitted to the metrological structure through the supporting structure, thereby reducing measurement quality. The obtained measurement also combines both the motion error of the guiding system and the form error of the artifact. Detailed uncertainty budgeting can be improved using error separation methods (multi-step, reversal and multi-probe error separation methods, etc), which enable identification of the systematic (synchronous or repeatable) motion errors of the guiding system as well as the form error of the artifact. Nevertheless, the performance of this kind of machine is limited by the repeatability level of the mechanical guiding elements, which usually exceeds 25 nm (in the case of an air bearing spindle and a linear bearing). In order to guarantee a 5 nm measurement uncertainty level, LNE is currently developing an original machine dedicated to form measurement on cylindrical and spherical artifacts with an ultra-high level of accuracy. The architecture of this machine is based on the 'dissociated metrological technique' principle and contains reference probes and a reference cylinder. The form errors of both the cylindrical artifact and the reference cylinder are obtained by mathematically combining the information given by the probe sensing the artifact and the information given by the probe sensing the reference cylinder, applying the modified multi-step separation method.
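
    Among the error separation methods mentioned, the reversal method is the simplest to show: flipping the part (and probe) between two runs changes the sign of the part's form error but not of the spindle error, so the two can be recovered by adding and subtracting the runs (sign conventions vary by setup; this is a toy NumPy check):

```python
import numpy as np

theta = np.linspace(0, 2 * np.pi, 360, endpoint=False)
part = 0.3 * np.cos(3 * theta)              # artifact form error (unknown)
spindle = 0.1 * np.sin(2 * theta + 0.4)     # spindle motion error (unknown)

# Reversal: the part term changes sign between runs, the spindle term does not.
m1 = spindle + part
m2 = spindle - part
part_est = (m1 - m2) / 2.0
spindle_est = (m1 + m2) / 2.0
print(np.max(np.abs(part_est - part)), np.max(np.abs(spindle_est - spindle)))
```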

  8. Measurement of electromagnetic tracking error in a navigated breast surgery setup

    Science.gov (United States)

    Harish, Vinyas; Baksh, Aidan; Ungi, Tamas; Lasso, Andras; Baum, Zachary; Gauvin, Gabrielle; Engel, Jay; Rudan, John; Fichtinger, Gabor

    2016-03-01

    PURPOSE: The measurement of tracking error is crucial to ensure the safety and feasibility of electromagnetically tracked, image-guided procedures. Measurement should occur in a clinical environment because electromagnetic field distortion depends on positioning relative to the field generator and metal objects. However, we could not find an accessible and open-source system for calibration, error measurement, and visualization. We developed such a system and tested it in a navigated breast surgery setup. METHODS: A pointer tool was designed for concurrent electromagnetic and optical tracking. Software modules were developed for automatic calibration of the measurement system, real-time error visualization, and analysis. The system was taken to an operating room to test for field distortion in a navigated breast surgery setup. Positional and rotational electromagnetic tracking errors were then calculated using optical tracking as a ground truth. RESULTS: Our system is quick to set up and can be rapidly deployed. The process from calibration to visualization also only takes a few minutes. Field distortion was measured in the presence of various surgical equipment. Positional and rotational error in a clean field was approximately 0.90 mm and 0.31°. The presence of a surgical table, an electrosurgical cautery, and anesthesia machine increased the error by up to a few tenths of a millimeter and tenth of a degree. CONCLUSION: In a navigated breast surgery setup, measurement and visualization of tracking error defines a safe working area in the presence of surgical equipment. Our system is available as an extension for the open-source 3D Slicer platform.

  9. Position determination and measurement error analysis for the spherical proof mass with optical shadow sensing

    Science.gov (United States)

    Hou, Zhendong; Wang, Zhaokui; Zhang, Yulin

    2016-09-01

    To meet the very demanding requirements for space gravity detection, the gravitational reference sensor (GRS), as the key payload, needs to provide the relative position of the proof mass with extraordinarily high precision and low disturbance. The position determination and error analysis for a GRS with a spherical proof mass are addressed. First, the concept of measuring the freely falling proof mass with optical shadow sensors is presented. Then, based on the optical signal model, the general formula for position determination is derived. Two types of measurement system are proposed, for which an analytical solution to the three-dimensional position can be attained. Third, under the assumption of Gaussian beams, error propagation models are given for the variation of spot size and optical power, the effect of beam divergence, the chattering of the beam center, and the deviation of the beam direction. Finally, numerical simulations taking into account the model uncertainty of beam divergence, spherical edge and beam diffraction are carried out to validate the performance of the error propagation models. The results show that these models can be used to estimate the effect of each error source with an acceptable accuracy, better than 20%. Moreover, the simulation of three-dimensional position determination with one of the proposed measurement systems shows that the position error is comparable to the error of the output of each sensor.

  10. A Robust Skin Colour Segmentation Using Bivariate Pearson Type II (Bivariate Beta) Mixture Model

    OpenAIRE

    B. N. Jagadesh; Srinivasa Rao, K.; Ch. Satyanarayana

    2012-01-01

    Probability distributions formulate the basic framework for developing several segmentation algorithms. Among the various segmentation algorithms, skin colour segmentation is one of the most important for human-computer interaction. Due to various random factors influencing the colour space, there is no single algorithm that serves the purpose for all images. In this paper a novel skin colour segmentation algorithm is proposed based on bivariate Pearson type I...

  11. MEASUREMENT ERROR EFFECT ON THE POWER OF CONTROL CHART FOR ZERO-TRUNCATED POISSON DISTRIBUTION

    Directory of Open Access Journals (Sweden)

    Ashit Chakraborty

    2013-09-01

    Measurement error, the difference between the true value and the measured value of a quantity, exists in practice and may considerably affect the performance of control charts in some cases. Measurement error variability has uncertainty which can come from several sources. In this paper, we study the effect of these sources of variability on the power characteristics of control charts and obtain values of the average run length (ARL) for the zero-truncated Poisson distribution (ZTPD). An expression for the power of the control chart for variable sample size under a standardized normal variate for the ZTPD is also derived.
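
    As a rough illustration of the kind of effect studied here (a sketch of our own, not the paper's derivation; the rates, the 3-sigma chart design and all parameter values below are hypothetical), one can simulate a control chart for zero-truncated Poisson counts observed with additive Gaussian measurement error and estimate the average run length as the reciprocal of the signal probability:

```python
import numpy as np

rng = np.random.default_rng(0)

def zt_poisson(lam, size, rng):
    """Draw from a zero-truncated Poisson by simple rejection."""
    out = np.empty(size, dtype=float)
    filled = 0
    while filled < size:
        draw = rng.poisson(lam, size=2 * (size - filled))
        draw = draw[draw > 0][: size - filled]
        out[filled:filled + draw.size] = draw
        filled += draw.size
    return out

def arl(lam_true, lam0, sigma_me, n=200_000):
    """ARL = 1 / P(signal) for 3-sigma limits built around the in-control ZTPD."""
    mean0 = lam0 / (1 - np.exp(-lam0))              # mean of ZTPD(lam0)
    var0 = mean0 * (1 + lam0 - mean0)               # variance of ZTPD(lam0)
    ucl = mean0 + 3 * np.sqrt(var0 + sigma_me**2)   # widen limits by ME variance
    lcl = max(mean0 - 3 * np.sqrt(var0 + sigma_me**2), 0.0)
    x = zt_poisson(lam_true, n, rng) + rng.normal(0, sigma_me, n)
    p_signal = np.mean((x > ucl) | (x < lcl))
    return np.inf if p_signal == 0 else 1 / p_signal

print(arl(4.0, 4.0, 0.0))   # in-control ARL, no measurement error
print(arl(4.0, 4.0, 1.0))   # in-control ARL with measurement error
print(arl(6.0, 4.0, 1.0))   # out-of-control ARL: the shift is detected more slowly
```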

  12. Estimation of bias errors in measured airplane responses using maximum likelihood method

    Science.gov (United States)

    Klein, Vladislav; Morgan, Dan R.

    1987-01-01

    A maximum likelihood method is used for estimation of unknown bias errors in measured airplane responses. The mathematical model of an airplane is represented by six-degrees-of-freedom kinematic equations. In these equations the input variables are replaced by their measured values, which are assumed to be without random errors. The resulting algorithm is verified with a simulation and with flight test data. The maximum likelihood estimates from in-flight measured data are compared with those obtained by using a nonlinear fixed-interval smoother and an extended Kalman filter.

  13. Dynamic Modeling Accuracy Dependence on Errors in Sensor Measurements, Mass Properties, and Aircraft Geometry

    Science.gov (United States)

    Grauer, Jared A.; Morelli, Eugene A.

    2013-01-01

    A nonlinear simulation of the NASA Generic Transport Model was used to investigate the effects of errors in sensor measurements, mass properties, and aircraft geometry on the accuracy of dynamic models identified from flight data. Measurements from a typical system identification maneuver were systematically and progressively deteriorated and then used to estimate stability and control derivatives within a Monte Carlo analysis. Based on the results, recommendations were provided for maximum allowable errors in sensor measurements, mass properties, and aircraft geometry to achieve desired levels of dynamic modeling accuracy. Results using other flight conditions, parameter estimation methods, and a full-scale F-16 nonlinear aircraft simulation were compared with these recommendations.

  14. Research of measurement errors caused by salt solution temperature drift in surface plasmon resonance sensors

    Institute of Scientific and Technical Information of China (English)

    Yingcai Wu; Zhengtian Gu; Yifang Yuan

    2006-01-01

    The influence of temperature on the measurement of a surface plasmon resonance (SPR) sensor was investigated. Samples with various concentrations of NaCl were tested at different temperatures. It was shown that if the effect of temperature could be neglected, the measurement precision for salt solution was 0.028 wt.-%. But the measurement error of salinity caused by temperature was 0.53 wt.-% on average when the temperature drift was 1 °C. To reduce the error, a double-cell SPR sensor, with salt solution and distilled water flowing respectively at the same temperature, was implemented.

  15. Assessment of measurement errors and dynamic calibration methods for three different tipping bucket rain gauges

    Science.gov (United States)

    Shedekar, Vinayak S.; King, Kevin W.; Fausey, Norman R.; Soboyejo, Alfred B. O.; Harmel, R. Daren; Brown, Larry C.

    2016-09-01

    Three different models of tipping bucket rain gauges (TBRs), viz. HS-TB3 (Hydrological Services Pty Ltd.), ISCO-674 (Isco, Inc.) and TR-525 (Texas Electronics, Inc.), were calibrated in the lab to quantify measurement errors across a range of rainfall intensities (5 mm·h⁻¹ to 250 mm·h⁻¹) and three different volumetric settings. Instantaneous and cumulative values of simulated rainfall were recorded at 1, 2, 5, 10 and 20-min intervals. All three TBR models showed a substantial deviation (α = 0.05) in measurements from actual rainfall depths, with increasing underestimation errors at greater rainfall intensities. Simple linear regression equations were developed for each TBR to correct the TBR readings based on measured intensities (R² > 0.98). Additionally, two dynamic calibration techniques, viz. a quadratic model (R² > 0.7) and a T vs. 1/Q model (R² > 0.98), were tested and found to be useful in situations when the volumetric settings of TBRs are unknown. The correction models were successfully applied to correct field-collected rainfall data from the respective TBR models. The calibration parameters of the correction models were found to be highly sensitive to changes in the volumetric calibration of TBRs. Overall, the HS-TB3 model (with a better protected tipping bucket mechanism, and consistent measurement errors across a range of rainfall intensities) was found to be the most reliable and consistent for rainfall measurements, followed by the ISCO-674 (with susceptibility to clogging and relatively smaller measurement errors across a range of rainfall intensities) and the TR-525 (with high susceptibility to clogging and frequent changes in volumetric calibration, and highly intensity-dependent measurement errors). The study demonstrated that corrections based on dynamic and volumetric calibration can only help minimize, but not completely eliminate, the measurement errors. The findings from this study will be useful for correcting field data from TBRs; and may have major
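
    The simple linear correction described above is straightforward to reproduce with an ordinary least-squares fit. The calibration numbers below are invented for illustration and are not the paper's data:

```python
import numpy as np

# Hypothetical lab calibration: simulated intensities (mm/h) vs. TBR readings,
# with underestimation growing at higher intensities, as the study reports.
intensity = np.array([5, 25, 50, 100, 150, 200, 250], dtype=float)
tbr_reading = np.array([5.0, 24.4, 48.1, 94.2, 139.0, 182.5, 225.8])

# Simple linear regression correction: actual ~ a * reading + b
a, b = np.polyfit(tbr_reading, intensity, deg=1)
print(f"correction: actual ≈ {a:.4f} * reading + {b:.3f}")

# Apply the correction to field data from the same gauge model.
field = np.array([12.3, 47.0, 180.2])
print("corrected:", a * field + b)
```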

  16. THE JOINT DISTRIBUTION OF BIVARIATE EXPONENTIAL UNDER LINEARLY RELATED MODEL

    Directory of Open Access Journals (Sweden)

    Norou Diawara

    2010-09-01

    In this paper, fundamental results on the joint distribution of bivariate exponential distributions are established. Positive-support multivariate distribution theory is important in reliability and survival analysis, and we apply it to the case where more than one failure or survival is observed in a given study. Usually, the multivariate distribution is restricted to those with marginal distributions of a specified and familiar lifetime family. The family of exponential distributions contains the absolutely continuous and discrete case models with a nonzero probability on a set of measure zero. Examples are given, and estimators are developed and applied to simulated data. Our findings substantially generalize known results in the literature and provide a flexible and novel approach for modeling related events that can occur simultaneously from one base event.

  17. The relative performance of bivariate causality tests in small samples

    NARCIS (Netherlands)

    Bult, J.R.; Leeflang, P.S.H.; Wittink, D.R.

    1997-01-01

    Causality tests have been applied to establish directional effects and to reduce the set of potential predictors. For the latter type of application only bivariate tests can be used. In this study we compare bivariate causality tests. Although the problem addressed is general and could benefit resea…

  18. Bivariate Recursive Equations on Excess-of-loss Reinsurance

    Institute of Scientific and Technical Information of China (English)

    Jing Ping YANG; Shi Hong CHENG; Xiao Qian WANG

    2007-01-01

    This paper investigates bivariate recursive equations on excess-of-loss reinsurance. For an insurance portfolio, under the assumptions that the individual claim severity distribution has a bounded continuous density and the number of claims belongs to the R1(a, b) family, bivariate recursive equations for the joint distribution of the cedent's aggregate claims and the reinsurer's aggregate claims are obtained.

  19. Three-dimensional patient setup errors at different treatment sites measured by the Tomotherapy megavoltage CT

    Energy Technology Data Exchange (ETDEWEB)

    Hui, S.K.; Lusczek, E.; Dusenbery, K. [Univ. of Minnesota Medical School, Minneapolis, MN (United States). Dept. of Therapeutic Radiology - Radiation Oncology; DeFor, T. [Univ. of Minnesota Medical School, Minneapolis, MN (United States). Biostatistics and Informatics Core; Levitt, S. [Univ. of Minnesota Medical School, Minneapolis, MN (United States). Dept. of Therapeutic Radiology - Radiation Oncology; Karolinska Institutet, Stockholm (Sweden). Dept. of Onkol-Patol

    2012-04-15

    Reduction of interfraction setup uncertainty is vital for assuring the accuracy of conformal radiotherapy. We report a systematic study of setup error to assess patients' three-dimensional (3D) localization at various treatment sites. Tomotherapy megavoltage CT (MVCT) images were scanned daily in 259 patients from 2005-2008. We analyzed 6,465 MVCT images to measure setup error for head and neck (H and N), chest/thorax, abdomen, prostate, legs, and total marrow irradiation (TMI). Statistical comparisons of the absolute displacements across sites and time were performed in rotation (R), lateral (x), craniocaudal (y), and vertical (z) directions. The global systematic errors were measured to be less than 3 mm in each direction with increasing order of errors for different sites: H and N, prostate, chest, pelvis, spine, legs, and TMI. The differences in displacements in the x, y, and z directions, and 3D average displacement between treatment sites were significant (p < 0.01). Overall improvement in patient localization with time (after 3-4 treatment fractions) was observed. Large displacement (> 5 mm) was observed in the 75th percentile of the patient groups for chest, pelvis, legs, and spine in the x and y direction in the second week of the treatment. MVCT imaging is essential for determining 3D setup error and to reduce uncertainty in localization at all anatomical locations. Setup error evaluation should be performed daily for all treatment regions, preferably for all treatment fractions. (orig.)

  20. The effect of genotyping errors on the robustness of composite linkage disequilibrium measures

    Indian Academy of Sciences (India)

    Yu Mei Li; Yang Xiang

    2011-12-01

    We conclude that composite linkage disequilibrium (LD) measures should be adopted in population-based LD mapping or association mapping studies, since they are unaffected by Hardy–Weinberg disequilibrium (HWD). Although some properties of composite LD measures have recently been studied, the effects of genotyping errors on composite LD measures have not been examined. In this report, we derive deterministic formulas to evaluate the impact of genotyping errors on the composite LD measures $\Delta'_{AB}$ and $r_{AB}$, and compare their robustness in the presence of genotyping errors. The results show that $\Delta'_{AB}$ and $r_{AB}$ depend on the allele frequencies and the assumed error model, and show varying degrees of robustness in the presence of errors. In general, whether there is HWD or not, $r_{AB}$ is more robust than $\Delta'_{AB}$ except in some special cases, and the difference in robustness between $\Delta'_{AB}$ and $r_{AB}$ becomes less severe as the difference between the frequencies of the two SNP alleles becomes smaller.

  1. Exact sampling of the unobserved covariates in Bayesian spline models for measurement error problems.

    Science.gov (United States)

    Bhadra, Anindya; Carroll, Raymond J

    2016-07-01

    In truncated polynomial spline or B-spline models where the covariates are measured with error, a fully Bayesian approach to model fitting requires the covariates and model parameters to be sampled at every Markov chain Monte Carlo iteration. Sampling the unobserved covariates poses a major computational problem and usually Gibbs sampling is not possible. This forces the practitioner to use a Metropolis-Hastings step, which might suffer from unacceptable performance due to poor mixing and might require careful tuning. In this article we show that, for truncated polynomial spline or B-spline models of degree equal to one, the complete conditional distribution of the covariates measured with error is available explicitly as a mixture of double-truncated normals, thereby enabling a Gibbs sampling scheme. We demonstrate via a simulation study that our technique performs favorably in terms of computational efficiency and statistical performance. Our results indicate up to 62% and 54% increases in mean integrated squared error efficiency when compared to existing alternatives while using truncated polynomial splines and B-splines, respectively. Furthermore, there is evidence that the gain in efficiency increases with the measurement error variance, indicating that the proposed method is a particularly valuable tool for challenging applications that present high measurement error. We conclude with a demonstration on a nutritional epidemiology data set from the NIH-AARP study and by pointing out some possible extensions of the current work.
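
    To give a concrete feel for the key computational ingredient, here is a minimal sketch of drawing from a mixture of double-truncated normals with scipy, the type of complete conditional the authors exploit. The weights, means, scales and truncation bounds are made-up values, and this is not the authors' full sampler:

```python
import numpy as np
from scipy.stats import truncnorm

rng = np.random.default_rng(1)

def draw_mixture_truncnorm(w, mu, sd, lo, hi, size, rng):
    """Sample from sum_k w[k] * TruncNormal(mu[k], sd[k], [lo[k], hi[k]])."""
    w = np.asarray(w) / np.sum(w)
    comp = rng.choice(len(w), p=w, size=size)      # pick a mixture component
    a = (np.asarray(lo) - mu) / sd                 # standardized lower bounds
    b = (np.asarray(hi) - mu) / sd                 # standardized upper bounds
    return truncnorm.rvs(a[comp], b[comp], loc=np.asarray(mu)[comp],
                         scale=np.asarray(sd)[comp], size=size, random_state=rng)

# Hypothetical complete conditional for one unobserved covariate:
x = draw_mixture_truncnorm(w=[0.3, 0.7], mu=[0.0, 2.0], sd=[1.0, 0.5],
                           lo=[-1.0, 1.0], hi=[1.0, 3.0], size=10_000, rng=rng)
print(x.mean(), x.std())
```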

  2. Covariate measurement error correction methods in mediation analysis with failure time data.

    Science.gov (United States)

    Zhao, Shanshan; Prentice, Ross L

    2014-12-01

    Mediation analysis is important for understanding the mechanisms whereby one variable causes changes in another. Measurement error could obscure the ability of the potential mediator to explain such changes. This article focuses on developing correction methods for measurement error in the mediator with failure time outcomes. We consider a broad definition of measurement error, including technical error, and error associated with temporal variation. The underlying model with the "true" mediator is assumed to be of the Cox proportional hazards model form. The induced hazard ratio for the observed mediator no longer has a simple form independent of the baseline hazard function, due to the conditioning event. We propose a mean-variance regression calibration approach and a follow-up time regression calibration approach, to approximate the partial likelihood for the induced hazard function. Both methods demonstrate value in assessing mediation effects in simulation studies. These methods are generalized to multiple biomarkers and to both case-cohort and nested case-control sampling designs. We apply these correction methods to the Women's Health Initiative hormone therapy trials to understand the mediation effect of several serum sex hormone measures on the relationship between postmenopausal hormone therapy and breast cancer risk.

  3. The impact of experimental measurement errors on long-term viscoelastic predictions. [of structural materials

    Science.gov (United States)

    Tuttle, M. E.; Brinson, H. F.

    1986-01-01

    The impact of errors in measured viscoelastic parameters on subsequent long-term viscoelastic predictions is numerically evaluated using the Schapery nonlinear viscoelastic model. Of the seven Schapery parameters, the results indicated that long-term predictions were most sensitive to errors in the power law parameter n. Although errors in the other parameters were significant as well, errors in n dominated all other factors at long times. The process of selecting an appropriate short-term test cycle so as to ensure an accurate long-term prediction was considered, and a short-term test cycle was selected using material properties typical for T300/5208 graphite-epoxy at 149 °C. The process of selection is described, and its individual steps are itemized.

  4. Theory confronts experiment in the Casimir force measurements: quantification of errors and precision

    CERN Document Server

    Chen, F; Mohideen, U; Mostepanenko, V M

    2004-01-01

    We compare theory and experiment in the Casimir force measurement between gold surfaces performed with the atomic force microscope. Both random and systematic experimental errors are found leading to a total absolute error equal to 8.5 pN at 95% confidence. In terms of the relative errors, experimental precision of 1.75% is obtained at the shortest separation of 62 nm at 95% confidence level (at 60% confidence the experimental precision of 1% is confirmed at the shortest separation). An independent determination of the accuracy of the theoretical calculations of the Casimir force and its application to the experimental configuration is carefully made. Special attention is paid to the sample-dependent variations of the optical tabulated data due to the presence of grains, contribution of surface plasmons, and errors introduced by the use of the proximity force theorem. Nonmultiplicative and diffraction-type contributions to the surface roughness corrections are examined. The electric forces due to patch potent...

  5. Measurement Error in Proportional Hazards Models for Survival Data with Long-term Survivors

    Institute of Scientific and Technical Information of China (English)

    Xiao-bing ZHAO; Xian ZHOU

    2012-01-01

    This work studies a proportional hazards model for survival data with "long-term survivors", in which covariates are subject to linear measurement error. It is well known that the naïve estimators from both partial and full likelihood methods are inconsistent under this measurement error model. For measurement error models, methods of unbiased estimating functions and corrected likelihood have been proposed in the literature. In this paper, we apply the corrected partial and full likelihood approaches to estimate the model and obtain statistical inference from survival data with long-term survivors. The asymptotic properties of the estimators are established. Simulation results illustrate that the proposed approaches provide useful tools for the models considered.

  6. Real time remaining useful life prediction based on nonlinear Wiener based degradation processes with measurement errors

    Institute of Scientific and Technical Information of China (English)

    唐圣金; 郭晓松; 于传强; 周志杰; 周召发; 张邦成

    2014-01-01

    Real-time remaining useful life (RUL) prediction based on condition monitoring is an essential part of condition based maintenance (CBM). Current methods for real-time RUL prediction of nonlinear degradation processes do not consider the measurement error, and the forecasting uncertainty is large. Therefore, a closed-form approximate analytical RUL distribution for a nonlinear Wiener-based degradation process with measurement errors is proposed. The maximum likelihood estimation approach is used to estimate the unknown fixed parameters in the proposed model. When newly observed data become available, the random parameter is updated by the Bayesian method to adapt the estimation to the item's individual characteristics and reduce the uncertainty of the estimation. The simulation results show that considering measurement errors in the degradation process can significantly improve the accuracy of real-time RUL prediction.
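
    A toy Monte-Carlo version of the setting (ours, not the authors' closed-form result; the drift, diffusion, noise level, threshold and the ad-hoc smoothing are all made up) shows how a noisy last reading propagates into the RUL estimate for a drifted Wiener degradation path:

```python
import numpy as np

rng = np.random.default_rng(2)

mu, sigma_b, sigma_me = 0.5, 0.3, 0.2   # drift, diffusion, measurement noise (assumed)
threshold, dt = 10.0, 0.1

# Observed (noisy) degradation history up to t = 5.
t_hist = np.arange(0, 5 + dt, dt)
steps = mu * dt + sigma_b * np.sqrt(dt) * rng.standard_normal(len(t_hist) - 1)
true_path = np.cumsum(np.r_[0.0, steps])
obs = true_path + rng.normal(0, sigma_me, len(t_hist))

x_naive = obs[-1]                                   # take the last reading at face value
x_filtered = 0.7 * obs[-1] + 0.3 * obs[-5:].mean()  # ad-hoc smoothing, for illustration only

def mc_rul(x0, n=20_000, horizon=50.0):
    """Monte-Carlo first-passage time of a drifted Wiener process above threshold."""
    n_steps = int(horizon / dt)
    x = np.full(n, float(x0))
    alive = np.full(n, True)
    rul = np.full(n, horizon)
    for k in range(n_steps):
        x[alive] += mu * dt + sigma_b * np.sqrt(dt) * rng.standard_normal(alive.sum())
        hit = alive & (x >= threshold)
        rul[hit] = (k + 1) * dt
        alive &= ~hit
    return rul

print("RUL mean (naive state):   ", mc_rul(x_naive).mean())
print("RUL mean (filtered state):", mc_rul(x_filtered).mean())
```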

  7. Normal contour error measurement on-machine and compensation method for polishing complex surface by MRF

    Science.gov (United States)

    Chen, Hua; Chen, Jihong; Wang, Baorui; Zheng, Yongcheng

    2016-10-01

    The magnetorheological finishing (MRF) process, based on the dwell-time method with constant normal spacing for flexible polishing, introduces normal contour errors when fine-polishing complex surfaces such as aspheric surfaces. The normal contour error changes the ribbon's shape and the consistency of the removal characteristics in MRF. Based on continuously scanning the normal spacing between the workpiece and the laser range finder, a novel method is put forward to measure the normal contour errors along the machining track while polishing a complex surface. The normal contour errors are measured dynamically, by which the workpiece's clamping precision, the multi-axis machining NC program and the dynamic performance of the MRF machine are verified as a security check of the MRF process. A unit for on-machine measurement of the normal contour errors of a complex surface was designed. Using the measurement unit's results as feedback to adjust the parameters of the feed-forward control and the multi-axis machining, an optimized servo control method is presented to compensate for the normal contour errors. An experiment polishing a 180 mm × 180 mm aspheric workpiece of fused silica by MRF was set up to validate the method. The results show that the normal contour error was controlled to less than 10 µm, and the PV value of the polished surface accuracy was improved from 0.95λ to 0.09λ under the same process parameters. The technology described in this paper has been applied in the PKC600-Q1 MRF machine developed by the China Academy of Engineering Physics since 2014, where it is used in national large-scale optical engineering for processing ultra-precision optical parts.

  8. Stress-strength reliability for general bivariate distributions

    Directory of Open Access Journals (Sweden)

    Alaa H. Abdel-Hamid

    2016-10-01

    An expression for the stress-strength reliability R = P(X1 < X2) is obtained when (X1, X2) follows a general bivariate distribution. Such distributions include the bivariate compound Weibull, bivariate compound Gompertz and bivariate compound Pareto, among others. In the parametric case, the maximum likelihood estimates of the parameters and the reliability function R are obtained. In the non-parametric case, point and interval estimates of R are developed using Govindarajulu's asymptotic distribution-free method when X1 and X2 are dependent. An example is given when the population distribution is bivariate compound Weibull. Simulation is performed, based on different sample sizes, to study the performance of the estimates.

  9. Impact of measurement error on testing genetic association with quantitative traits.

    Directory of Open Access Journals (Sweden)

    Jiemin Liao

    Measurement error of a phenotypic trait reduces the power to detect genetic associations. We examined the impact of sample size, allele frequency and effect size in the presence of measurement error for quantitative traits. The statistical power to detect genetic association with phenotype mean and variability was investigated analytically. The non-centrality parameter for a non-central F distribution was derived and verified using computer simulations. We obtained equivalent formulas for the cost of phenotype measurement error. Effects of differences in measurements were examined in a genome-wide association study (GWAS) of two grading scales for cataract and a replication study of genetic variants influencing blood pressure. The mean absolute difference between the analytic power and simulation power for comparison of phenotypic means and variances was less than 0.005, and the absolute difference did not exceed 0.02. To maintain the same power, a one standard deviation (SD) increase in the measurement error of a standard normally distributed trait required a one-fold increase in sample size for comparison of means, and a three-fold increase in sample size for comparison of variances. GWAS results revealed almost no overlap in the significant SNPs (p < 10⁻⁵) for the two cataract grading scales, while replication results for genetic variants of blood pressure displayed no significant differences between averaged blood pressure measurements and single blood pressure measurements. We have developed a framework for researchers to quantify power in the presence of measurement error, which will be applicable to studies of phenotypes in which the measurement is highly variable.

  10. Simulation study on heterogeneous variance adjustment for observations with different measurement error variance

    DEFF Research Database (Denmark)

    Pitkänen, Timo; Mäntysaari, Esa A; Nielsen, Ulrik Sander

    2013-01-01

    …of variance correction is developed for the same observations. As automated milking systems become more popular, the current evaluation model needs to be enhanced to account for the different measurement error variances of observations from automated milking systems. In this simulation study, different models and different approaches to account for heterogeneous variance when observations have different measurement error variances were investigated. Based on the results we propose to upgrade the currently applied models and to calibrate the heterogeneous variance adjustment method to yield the same genetic…

  11. Influence of video compression on the measurement error of the television system

    Science.gov (United States)

    Sotnik, A. V.; Yarishev, S. N.; Korotaev, V. V.

    2015-05-01

    Video data require a very large memory capacity. Finding the optimal quality/volume ratio in video encoding is a pressing problem due to the urgent need to transfer large amounts of video over various networks. Digital TV signal compression reduces the amount of data used to represent the video stream. Video compression effectively reduces the stream required for transmission and storage. It is important to take into account the uncertainties caused by compression of the video signal when television measuring systems are used. There are many digital compression methods. The aim of the proposed work is to study the influence of video compression on the measurement error in television systems. The measurement error of an object parameter is the main characteristic of television measuring systems. Accuracy characterizes the difference between the measured value and the actual parameter value. Both the optical system and the method of processing the received video signal are sources of error in television system measurements. The presence of errors leads to large distortions when compressing at a constant data-stream rate, and increases the amount of data required to transmit or record an image frame at constant quality. The purpose of intra-coding is to reduce the spatial redundancy within a frame (or field) of a television image. This redundancy is caused by the strong correlation between elements of the image. An array of image samples can be converted into a matrix of coefficients that are not correlated with each other if a corresponding orthogonal transformation can be found. Entropy coding can then be applied to these uncorrelated coefficients to achieve a reduction in the digital stream. A transformation can be selected such that most of the matrix coefficients will be almost zero for typical images. Excluding these zero coefficients also…
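
    The decorrelating orthogonal transform alluded to above is, in practice, usually the two-dimensional discrete cosine transform. The generic sketch below (not tied to this paper) shows that most DCT coefficients of a smooth image block are near zero, which is what makes the subsequent entropy coding effective:

```python
import numpy as np
from scipy.fft import dctn, idctn

# A smooth 8x8 "image block" with strong inter-pixel correlation.
x, y = np.meshgrid(np.arange(8), np.arange(8))
block = 128 + 20 * np.sin(x / 4.0) + 10 * (y / 7.0)

coeffs = dctn(block, norm="ortho")
small = np.abs(coeffs) < 1.0
print(f"{small.sum()} of 64 DCT coefficients are below 1.0 in magnitude")

# Zeroing the small coefficients changes the block only slightly:
# compression at the price of a small, quantifiable error.
approx = idctn(np.where(small, 0.0, coeffs), norm="ortho")
print("max reconstruction error:", np.abs(approx - block).max())
```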

  12. Quantitative evaluation of statistical errors in small-angle X-ray scattering measurements

    Energy Technology Data Exchange (ETDEWEB)

    Sedlak, Steffen M.; Bruetzel, Linda K.; Lipfert, Jan

    2017-03-29

    A new model is proposed for the measurement errors incurred in typical small-angle X-ray scattering (SAXS) experiments, which takes into account the setup geometry and physics of the measurement process. The model accurately captures the experimentally determined errors from a large range of synchrotron and in-house anode-based measurements. Its most general formulation gives for the variance of the buffer-subtracted SAXS intensity σ²(q) = [I(q) + const.]/(kq), where I(q) is the scattering intensity as a function of the momentum transfer q; k and const. are fitting parameters that are characteristic of the experimental setup. The model gives a concrete procedure for calculating realistic measurement errors for simulated SAXS profiles. In addition, the results provide guidelines for optimizing SAXS measurements, which are in line with established procedures for SAXS experiments, and enable a quantitative evaluation of measurement errors.
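
    The quoted error formula is easy to turn into a routine for attaching realistic noise to a simulated profile. The values of k and const. below are placeholders one would fit to a specific setup, and the profile itself is a toy example:

```python
import numpy as np

def saxs_sigma(q, intensity, k=1.0e3, const=0.05):
    """Standard deviation from the model var[I(q)] = (I(q) + const) / (k * q)."""
    return np.sqrt((intensity + const) / (k * q))

rng = np.random.default_rng(3)
q = np.linspace(0.01, 0.5, 200)          # momentum transfer grid (arbitrary units)
ideal = np.exp(-(q * 30) ** 2 / 3)       # toy Guinier-like profile, not real data
noisy = ideal + rng.normal(0, saxs_sigma(q, ideal))
print(noisy[:5])
```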

  13. A MEASURING SYSTEM WITH AN ADDITIONAL CHANNEL FOR ELIMINATING THE DYNAMIC ERROR

    Directory of Open Access Journals (Sweden)

    Dichev Dimitar

    2014-03-01

    The present article describes a measuring system for determining the parameters of vessels. The system has high measurement accuracy when operating in both static and dynamic modes. It is designed on a gyro-free principle for plotting a vertical. High measurement accuracy is achieved by using a simplified design of the mechanical module as well as by minimizing the instrumental error. A new solution for improving the measurement accuracy in dynamic mode is offered. The presented approach is based on a method in which the dynamic error is eliminated in real time, unlike existing measurement methods and tools in which stabilization of the vertical in inertial space is used. The results obtained from theoretical experiments, performed on the basis of the developed mathematical model, demonstrate the effectiveness of the suggested measurement approach.

  14. The bivariate Rogers-Szegö polynomials

    Science.gov (United States)

    Chen, William Y. C.; Saad, Husam L.; Sun, Lisa H.

    2007-06-01

    We present an operator approach to deriving Mehler's formula and the Rogers formula for the bivariate Rogers-Szegö polynomials h_n(x, y|q). The proof of Mehler's formula can be considered as a new approach to the nonsymmetric Poisson kernel formula for the continuous big q-Hermite polynomials H_n(x; a|q) due to Askey, Rahman and Suslov. Mehler's formula for h_n(x, y|q) involves a ₃φ₂ sum and the Rogers formula involves a ₂φ₁ sum. The proofs of these results are based on parameter augmentation with respect to the q-exponential operator and the homogeneous q-shift operator in two variables. By extending recent results on the Rogers-Szegö polynomials h_n(x|q) due to Hou, Lascoux and Mu, we obtain another Rogers-type formula for h_n(x, y|q). Finally, we give a change of base formula for H_n(x; a|q) which can be used to evaluate some integrals by using the Askey-Wilson integral.

  15. The bivariate Rogers-Szegö polynomials

    Energy Technology Data Exchange (ETDEWEB)

    Chen, William Y C [Center for Combinatorics, LPMC, Nankai University, Tianjin 300071 (China); Saad, Husam L [Center for Combinatorics, LPMC, Nankai University, Tianjin 300071 (China); Sun, Lisa H [Center for Combinatorics, LPMC, Nankai University, Tianjin 300071 (China)

    2007-06-08

    We present an operator approach to deriving Mehler's formula and the Rogers formula for the bivariate Rogers-Szegö polynomials h_n(x, y|q). The proof of Mehler's formula can be considered as a new approach to the nonsymmetric Poisson kernel formula for the continuous big q-Hermite polynomials H_n(x; a|q) due to Askey, Rahman and Suslov. Mehler's formula for h_n(x, y|q) involves a ₃φ₂ sum and the Rogers formula involves a ₂φ₁ sum. The proofs of these results are based on parameter augmentation with respect to the q-exponential operator and the homogeneous q-shift operator in two variables. By extending recent results on the Rogers-Szegö polynomials h_n(x|q) due to Hou, Lascoux and Mu, we obtain another Rogers-type formula for h_n(x, y|q). Finally, we give a change of base formula for H_n(x; a|q) which can be used to evaluate some integrals by using the Askey-Wilson integral.

  16. Simulation of Current Measurement Using Magnetic Sensor Arrays and Its Error Model

    Institute of Scientific and Technical Information of China (English)

    WANG Jing; YAO Jian-jun; WANG Jian-hua

    2004-01-01

    Magnetic sensor arrays are proposed to measure electric current in a non-contact way. In order to achieve higher accuracy, signal processing techniques for magnetic sensor arrays are utilized. Simulation techniques are necessary to study the factors influencing the accuracy of current measurement. This paper presents a simulation method to estimate the impact of the sensing area and position of sensors on the accuracy of current measurement. Several error models are built to support computer-aided design of magnetic sensor arrays.
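
    One simple error model of this kind (a generic sketch of ours, not the paper's models): sensors on a circle around a straight conductor read the tangential field B = μ0·I/(2π·r), the current is recovered by least squares under the nominal geometry, and perturbing the actual sensor radii mimics position error. The sensor count, radius and error level are assumed values:

```python
import numpy as np

MU0 = 4e-7 * np.pi
rng = np.random.default_rng(4)

def fields(current, radii):
    """Tangential field of an infinite straight wire at each sensor radius."""
    return MU0 * current / (2 * np.pi * radii)

r_nominal = np.full(8, 0.05)                     # 8 sensors at 50 mm (assumed)
r_actual = r_nominal + rng.normal(0, 0.5e-3, 8)  # 0.5 mm placement error

b_meas = fields(100.0, r_actual)                 # true current: 100 A

# Estimate the current assuming the nominal geometry (least squares, 1 unknown):
# b = g * I  =>  I_hat = (g.b) / (g.g)
g = MU0 / (2 * np.pi * r_nominal)
i_hat = g @ b_meas / (g @ g)
print(f"estimated current: {i_hat:.2f} A (error {i_hat - 100:+.2f} A)")
```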

  17. Computational fluid dynamics analysis and experimental study of a low measurement error temperature sensor used in climate observation.

    Science.gov (United States)

    Yang, Jie; Liu, Qingquan; Dai, Wei

    2017-02-01

    To improve the air temperature observation accuracy, a low measurement error temperature sensor is proposed. A computational fluid dynamics (CFD) method is implemented to obtain temperature errors under various environmental conditions. Then, a temperature error correction equation is obtained by fitting the CFD results using a genetic algorithm method. The low measurement error temperature sensor, a naturally ventilated radiation shield, a thermometer screen, and an aspirated temperature measurement platform are characterized in the same environment to conduct the intercomparison. The aspirated platform served as an air temperature reference. The mean temperature errors of the naturally ventilated radiation shield and the thermometer screen are 0.74 °C and 0.37 °C, respectively. In contrast, the mean temperature error of the low measurement error temperature sensor is 0.11 °C. The mean absolute error and the root mean square error between the corrected results and the measured results are 0.008 °C and 0.01 °C, respectively. The correction equation allows the temperature error of the low measurement error temperature sensor to be reduced by approximately 93.8%. The low measurement error temperature sensor proposed in this research may be helpful to provide a relatively accurate air temperature result.

  19. Improved modeling of multivariate measurement errors based on the Wishart distribution.

    Science.gov (United States)

    Wentzell, Peter D; Cleary, Cody S; Kompany-Zareh, M

    2017-03-22

    The error covariance matrix (ECM) is an important tool for characterizing the errors from multivariate measurements, representing both the variance and covariance in the errors across multiple channels. Such information is useful in understanding and minimizing sources of experimental error and in the selection of optimal data analysis procedures. Experimental ECMs, normally obtained through replication, are inherently noisy, inconvenient to obtain, and offer limited interpretability. Significant advantages can be realized by building a model for the ECM based on established error types. Such models are less noisy, reduce the need for replication, mitigate mathematical complications such as matrix singularity, and provide greater insights. While the fitting of ECM models using least squares has been previously proposed, the present work establishes that fitting based on the Wishart distribution offers a much better approach. Simulation studies show that the Wishart method results in parameter estimates with a smaller variance and also facilitates the statistical testing of alternative models using a parameterized bootstrap method. The new approach is applied to fluorescence emission data to establish the acceptability of various models containing error terms related to offset, multiplicative offset, shot noise and uniform independent noise. The implications of the number of replicates, as well as single vs. multiple replicate sets are also described.
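
    A minimal sketch of the central idea, under our own simplifications (a two-term error model and simulated replicates; the paper's catalogue of error types is richer): with m replicate spectra, the scaled sample covariance follows a Wishart distribution around the true ECM, so model parameters can be estimated by maximizing the Wishart log-likelihood:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(5)

# Replicate measurements: m spectra over p channels (simulated here).
m, p = 50, 20
mean_spec = np.sin(np.linspace(0, np.pi, p)) + 1.0
true_cov = 0.01 * np.eye(p) + 0.05 * np.outer(mean_spec, mean_spec)
data = rng.multivariate_normal(mean_spec, true_cov, size=m)
S = np.cov(data, rowvar=False)

def model_cov(theta):
    """ECM model: iid noise + multiplicative offset error (illustrative choice)."""
    s_iid, s_mult = np.exp(theta)        # log-parametrization keeps variances > 0
    return s_iid * np.eye(p) + s_mult * np.outer(mean_spec, mean_spec)

def neg_wishart_loglik(theta):
    n = m - 1                            # (m - 1) * S ~ Wishart(Sigma, m - 1)
    sigma = model_cov(theta)
    _, logdet = np.linalg.slogdet(sigma)
    return 0.5 * n * logdet + 0.5 * n * np.trace(np.linalg.solve(sigma, S))

fit = minimize(neg_wishart_loglik, x0=np.log([0.1, 0.1]), method="Nelder-Mead")
print("fitted variance parameters:", np.exp(fit.x))   # should approach (0.01, 0.05)
```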

  20. Objective Error Criterion for Evaluation of Mapping Accuracy Based on Sensor Time-of-Flight Measurements

    Directory of Open Access Journals (Sweden)

    Billur Barshan

    2008-12-01

    An objective error criterion is proposed for evaluating the accuracy of maps of unknown environments acquired by making range measurements with different sensing modalities and processing them with different techniques. The criterion can also be used to assess the goodness of fit of curves or shapes fitted to map points. A demonstrative example from ultrasonic mapping is given, based on experimentally acquired time-of-flight measurements and compared with a very accurate laser map, considered as the absolute reference. The results of the proposed criterion are compared with the Hausdorff metric and the median error criterion results. The error criterion is sufficiently general and flexible that it can be applied to discrete point maps acquired with other mapping techniques and sensing modalities as well.
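
    For comparison, the two benchmark criteria mentioned above are easy to compute for point maps with scipy; the point sets here are arbitrary stand-ins, not the ultrasonic data of the paper:

```python
import numpy as np
from scipy.spatial import cKDTree
from scipy.spatial.distance import directed_hausdorff

rng = np.random.default_rng(6)
reference = rng.uniform(0, 10, size=(500, 2))               # e.g. laser map points
acquired = reference[:300] + rng.normal(0, 0.05, (300, 2))  # noisy acquired map

# Symmetric Hausdorff metric between the two point sets.
d = max(directed_hausdorff(acquired, reference)[0],
        directed_hausdorff(reference, acquired)[0])
print(f"Hausdorff distance: {d:.3f}")

# Median error criterion: median distance from each acquired point
# to its nearest reference point.
dist, _ = cKDTree(reference).query(acquired)
print(f"median error: {np.median(dist):.3f}")
```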

  1. Nano-metrology: The art of measuring X-ray mirrors with slope errors <100 nrad.

    Science.gov (United States)

    Alcock, Simon G; Nistea, Ioana; Sawhney, Kawal

    2016-05-01

    We present a comprehensive investigation of the systematic and random errors of the nano-metrology instruments used to characterize synchrotron X-ray optics at Diamond Light Source. With experimental skill and careful analysis, we show that these instruments used in combination are capable of measuring state-of-the-art X-ray mirrors. Examples are provided of how Diamond metrology data have helped to achieve slope errors of <100 nrad for optical systems installed on synchrotron beamlines, including: iterative correction of substrates using ion beam figuring and optimal clamping of monochromator grating blanks in their holders. Simulations demonstrate how random noise from the Diamond-NOM's autocollimator adds into the overall measured value of the mirror's slope error, and thus predict how many averaged scans are required to accurately characterize different grades of mirror.
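
    The role of scan averaging follows from the usual 1/√N reduction of uncorrelated noise. A back-of-the-envelope sketch (the per-scan noise figure and the target are assumed placeholders, not Diamond's measured values):

```python
import numpy as np

sigma_scan = 50e-9   # assumed per-scan autocollimator noise, rad (hypothetical)
target = 20e-9       # desired random contribution to the slope error, rad

# Noise of the mean of N scans is sigma_scan / sqrt(N); solve for N.
n_required = int(np.ceil((sigma_scan / target) ** 2))
print(f"averaged scans required: {n_required}")
```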

  2. Wide-aperture laser beam measurement using transmission diffuser: errors modeling

    Science.gov (United States)

    Matsak, Ivan S.

    2015-06-01

    Instrumental errors in the measurement of wide-aperture laser beam diameter were modeled in order to build a measurement setup and justify its metrological characteristics. The modeled setup is based on a CCD camera and a transmission diffuser. This method is appropriate for precision measurement of large laser beam widths from 10 mm up to 1000 mm. It is impossible to measure such beams with other methods based on a slit, pinhole, knife edge or direct CCD camera measurement. The method is suitable for continuous and pulsed laser irradiation. However, the transmission diffuser method lacks the metrological justification required in the field of wide-aperture beam-forming system verification. Because no standard for wide-aperture flat-top beams is available, modelling is the preferred way to provide basic reference points for developing the measurement system. Modelling was conducted in MathCAD. A super-Lorentz distribution with shape parameter 6-12 was used as a model of the beam. Theoretical evaluation showed that the key parameters influencing the error are: relative beam size, spatial non-uniformity of the diffuser, lens distortion, physical vignetting, CCD spatial resolution and effective camera ADC resolution. Errors were modeled for the 90%-of-power beam diameter criterion. A 12th-order super-Lorentz distribution was the primary model, because it precisely matches the experimental distribution at the output of the test beam-forming system, although other orders were also used. Analytic expressions were obtained by analyzing the modelling results for each influencing factor. It was shown that an error below 1% is attainable through a suitable choice of parameters in these expressions, based on the parameters of commercially available components of the setup. The method can provide down to 0.1% error when calibration procedures and multiple measurements are used.
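
    The 90%-of-power criterion is simple to evaluate numerically for a super-Lorentz profile. The convention used for the profile below (order p entering as exponent 2p) and all sizes are our assumptions for illustration:

```python
import numpy as np

def super_lorentz(r, w=100.0, p=12):
    """Radially symmetric super-Lorentz profile of order p (assumed convention)."""
    return 1.0 / (1.0 + (r / w) ** (2 * p))

r = np.linspace(0, 300, 30_000)   # radius grid, mm
dr = r[1] - r[0]

# Cumulative encircled power via the area element 2*pi*r*dr.
power = np.cumsum(2 * np.pi * r * super_lorentz(r) * dr)
power /= power[-1]

d90 = 2 * r[np.searchsorted(power, 0.90)]
print(f"90%-of-power beam diameter: {d90:.1f} mm")
```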

  3. Analysis of Hardened Depth Variability, Process Potential, and Measurement Error in Case Carburized Components

    Science.gov (United States)

    Rowan, Olga K.; Keil, Gary D.; Clements, Tom E.

    2014-12-01

    Hardened depth (effective case depth) measurement is one of the most commonly used methods for carburizing performance evaluation. Variation in direct hardened depth measurements is routinely assumed to represent the heat treat process variation without properly correcting for the large uncertainty frequently observed in industrial laboratory measurements. These measurement uncertainties may also invalidate application of statistical control requirements on hardened depth. Gage R&R studies were conducted at three different laboratories on shallow and deep case carburized components. The primary objectives were to understand the magnitude of the measurement uncertainty and heat treat process variability, and to evaluate practical applicability of statistical control methods to metallurgical quality assessment. It was found that ~75% of the overall hardened depth variation is attributed to the measurement error resulting from the accuracy limitation of microhardness equipment and the linear interpolation technique. The measurement error was found to be proportional to the hardened depth magnitude and may reach ~0.2 mm uncertainty at 1.3 mm nominal depth and ~0.8 mm uncertainty at 3.2 mm depth. A case study was discussed to explain a methodology for analyzing a large body of hardened depth information, determination of the measurement error, and calculation of the true heat treat process variation.
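
    The linear interpolation step that drives much of the reported uncertainty is easy to picture: effective case depth is the depth at which a hardness traverse crosses a threshold (550 HV is a common choice; the traverse values below are invented):

```python
import numpy as np

depth = np.array([0.5, 1.0, 1.5, 2.0, 2.5])    # mm below surface
hardness = np.array([720, 660, 580, 510, 450])  # HV readings (hypothetical)
threshold = 550.0

# Linear interpolation between the two readings that bracket the threshold
# (hardness decreases with depth, hence the sign flip for searchsorted).
i = np.searchsorted(-hardness, -threshold)
frac = (hardness[i - 1] - threshold) / (hardness[i - 1] - hardness[i])
case_depth = depth[i - 1] + frac * (depth[i] - depth[i - 1])
print(f"effective case depth: {case_depth:.2f} mm")
```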

  4. Measurement error of self-reported physical activity levels in New York City: assessment and correction.

    Science.gov (United States)

    Lim, Sungwoo; Wyker, Brett; Bartley, Katherine; Eisenhower, Donna

    2015-05-01

    Because it is difficult to objectively measure population-level physical activity levels, self-reported measures have been used as a surveillance tool. However, little is known about their validity in populations living in dense urban areas. We aimed to assess the validity of self-reported physical activity data against accelerometer-based measurements among adults living in New York City and to apply a practical tool to adjust for measurement error in complex sample data using a regression calibration method. We used 2 components of data: 1) dual-frame random digit dialing telephone survey data from 3,806 adults in 2010-2011 and 2) accelerometer data from a subsample of 679 survey participants. Self-reported physical activity levels were measured using a version of the Global Physical Activity Questionnaire, whereas data on weekly moderate-equivalent minutes of activity were collected using accelerometers. Two self-reported health measures (obesity and diabetes) were included as outcomes. Participants with higher accelerometer values were more likely to underreport the actual levels. (Accelerometer values were considered to be the reference values.) After correcting for measurement errors, we found that associations between outcomes and physical activity levels were substantially deattenuated. Despite difficulties in accurately monitoring physical activity levels in dense urban areas using self-reported data, our findings show the importance of performing a well-designed validation study because it allows for understanding and correcting measurement errors.
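
    In outline, regression calibration replaces the error-prone self-report by its expected value given a validation subsample with objective measurements. A bare-bones sketch on synthetic data (plain logistic regression, ignoring the survey weighting and design features the authors had to handle):

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(7)

n = 3000
true_pa = rng.gamma(4, 50, n)                        # "accelerometer" minutes/week
self_report = 0.6 * true_pa + rng.normal(0, 60, n)   # biased, noisy self-report
p = 1 / (1 + np.exp(-(1.5 - 0.004 * true_pa)))       # outcome depends on the truth
obese = rng.binomial(1, p)

# Calibration substudy: regress truth on self-report in a validation subsample.
idx = rng.choice(n, 600, replace=False)
cal = sm.OLS(true_pa[idx], sm.add_constant(self_report[idx])).fit()
pa_calibrated = cal.predict(sm.add_constant(self_report))

naive = sm.Logit(obese, sm.add_constant(self_report)).fit(disp=0)
corrected = sm.Logit(obese, sm.add_constant(pa_calibrated)).fit(disp=0)
print("naive slope:    ", naive.params[1])       # attenuated toward zero
print("corrected slope:", corrected.params[1])   # closer to the true -0.004
```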

  5. Results of error correction techniques applied on two high accuracy coordinate measuring machines

    Energy Technology Data Exchange (ETDEWEB)

    Pace, C.; Doiron, T.; Stieren, D.; Borchardt, B.; Veale, R. (Sandia National Labs., Albuquerque, NM (USA); National Inst. of Standards and Technology, Gaithersburg, MD (USA))

    1990-01-01

    The Primary Standards Laboratory at Sandia National Laboratories (SNL) and the Precision Engineering Division at the National Institute of Standards and Technology (NIST) are in the process of implementing software error correction on two nearly identical high-accuracy coordinate measuring machines (CMMs). Both machines are Moore Special Tool Company M-48 CMMs which are fitted with laser positioning transducers. Although both machines were manufactured to high tolerance levels, the overall volumetric accuracy was insufficient for calibrating standards to the levels both laboratories require. The error mapping procedure was developed at NIST in the mid-1970s on an earlier but similar model. The error mapping procedure was originally very complicated and did not make any assumptions about the rigidity of the machine as it moved; each of the possible error motions was measured at each point of the error map independently. A simpler mapping procedure, developed during the early 1980s, assumed rigid-body motion of the machine. This method has been used to calibrate lower-accuracy machines with a high degree of success, and similar software correction schemes have been implemented by many CMM manufacturers. The rigid-body model has not yet been used on highly repeatable CMMs such as the M48. In this report we present early mapping data for the two M48 CMMs. The SNL CMM was manufactured in 1985 and has been in service for approximately four years, whereas the NIST CMM was delivered in early 1989. 4 refs., 5 figs.

  6. Characterization of positional errors and their influence on micro four-point probe measurements on a 100 nm Ru film

    DEFF Research Database (Denmark)

    Kjær, Daniel; Hansen, Ole; Østerberg, Frederik Westergaard;

    2015-01-01

    Thin-film sheet resistance measurements at high spatial resolution and on small pads are important and can be realized with micrometer-scale four-point probes. As a result of the small scale, the measurements are affected by electrode position errors. We have characterized the electrode position errors in measurements on a Ru thin film using an Au-coated 12-point probe. We show that the standard deviation of the static electrode position error is on the order of 5 nm, which significantly affects the results of single-configuration measurements. Position-error-corrected dual-configuration measurements, however, are shown to eliminate the effect of position errors to a level limited either by electrical measurement noise or dynamic position errors. We show that the probe contact points remain almost static on the surface during the measurements (measured on an atomic scale) with a standard...

  7. Consequences of exposure measurement error for confounder identification in environmental epidemiology

    DEFF Research Database (Denmark)

    Budtz-Jørgensen, Esben; Keiding, Niels; Grandjean, Philippe

    2003-01-01

    …exposure given the other independent variables. In addition, confounder effects may also be affected by the exposure measurement error. These difficulties in statistical model development are illustrated by examples from an epidemiological study performed in the Faroe Islands to investigate the adverse...

  8. Bias Errors in Measurement of Vibratory Power and Implication for Active Control of Structural Vibration

    DEFF Research Database (Denmark)

    Ohlrich, Mogens; Henriksen, Eigil; Laugesen, Søren

    1997-01-01

    …control of vibratory power transmission into structures. This is demonstrated by computer simulations using a theoretical model of a beam structure which is driven by one primary source and two control sources. These simulations reveal the influence of residual errors on power measurements, and the limitations imposed in active control of structural vibration based upon a strategy of power minimisation.

  9. The Impact of Measurement Error on the Accuracy of Individual and Aggregate SGP

    Science.gov (United States)

    McCaffrey, Daniel F.; Castellano, Katherine E.; Lockwood, J. R.

    2015-01-01

    Student growth percentiles (SGPs) express students' current observed scores as percentile ranks in the distribution of scores among students with the same prior-year scores. A common concern about SGPs at the student level, and mean or median SGPs (MGPs) at the aggregate level, is potential bias due to test measurement error (ME). Shang,…

  10. Bayesian modeling of measurement error in predictor variables using item response theory

    NARCIS (Netherlands)

    Fox, Jean-Paul; Glas, Cees A.W.

    2003-01-01

    It is shown that measurement error in predictor variables can be modeled using item response theory (IRT). The predictor variables, that may be defined at any level of an hierarchical regression model, are treated as latent variables. The normal ogive model is used to describe the relation between t

  11. BIAS ERRORS INDUCED BY CONCENTRATION GRADIENT IN SEDIMENT-LADEN FLOW MEASUREMENT WITH PTV

    Institute of Scientific and Technical Information of China (English)

    LI Dan-xun; LIN Qiu-sheng; ZHONG Qiang; WANG Xing-kui

    2012-01-01

    Sediment-laden flow measurement with Particle Tracking Velocimetry (PTV) introduces a series of finite-sized sampling bins along the vertical of the flow. Instantaneous velocities are collected at each bin and a significantly large sample is established to evaluate mean and root mean square (rms) velocities of the flow. Due to the presence of a concentration gradient, the established sample for the solid phase involves more data from the lower part of the sampling bin than from the upper part. The concentration effect causes bias errors in the measured mean and rms velocities when velocity varies across the bin. These bias errors are analytically quantified in this study based on simplified linear velocity and concentration distributions. Typical bulk flow characteristics from sediment-laden flow measurements are used to demonstrate rough estimation of the error magnitude. Results indicate that the mean velocity is underestimated while the rms velocity is overestimated in the ensemble-averaged measurement. The extent of deviation is commensurate with the bin size and the rate of concentration gradient. Procedures are proposed to assist in determining an appropriate sampling bin size within certain error limits.
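
    The sign of the mean-velocity bias can be checked with a one-line integral: for a linear velocity profile u(y) and a linear concentration profile c(y) across a bin, the ensemble mean is the concentration-weighted velocity, which differs from the bin-centre velocity. The bin size and gradients below are illustrative, not from the paper:

```python
import numpy as np

h = 0.01                             # bin height (m), assumed
y = np.linspace(-h / 2, h / 2, 10_001)
u = 1.0 + 20.0 * y                   # linear velocity profile across the bin
c = 1.0 - 80.0 * y                   # concentration, higher near the bin bottom

# Ensemble mean = concentration-weighted velocity over the bin.
u_measured = np.trapz(c * u, y) / np.trapz(c, y)
u_center = 1.0
print(f"bias: {u_measured - u_center:+.5f} m/s")   # negative: mean is underestimated
```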

  12. Empirical Bayes Test for the Parameter of Rayleigh Distribution with Error of Measurement

    Institute of Scientific and Technical Information of China (English)

    HUANG JUAN

    2011-01-01

    For data with measurement error in historical samples, an empirical Bayes test rule for the parameter of the Rayleigh distribution is constructed, and its asymptotically optimal property is obtained. It is shown that the convergence rate of the proposed EB test rule can be arbitrarily close to O(n^{-1/2}) under suitable conditions.

  13. Using Computation Curriculum-Based Measurement Probes for Error Pattern Analysis

    Science.gov (United States)

    Dennis, Minyi Shih; Calhoon, Mary Beth; Olson, Christopher L.; Williams, Cara

    2014-01-01

    This article describes how "curriculum-based measurement--computation" (CBM-C) mathematics probes can be used in combination with "error pattern analysis" (EPA) to pinpoint difficulties in basic computation skills for students who struggle with learning mathematics. Both assessment procedures provide ongoing assessment data…

  14. Measurement error in earnings data : Using a mixture model approach to combine survey and register data

    NARCIS (Netherlands)

    Meijer, E.; Rohwedder, S.; Wansbeek, T.J.

    2012-01-01

    Survey data on earnings tend to contain measurement error. Administrative data are superior in principle, but are worthless in case of a mismatch. We develop methods for prediction in mixture factor analysis models that combine both data sources to arrive at a single earnings figure. We apply the me

  15. Reduction of truncation errors in planar, cylindrical, and partial spherical near-field antenna measurements

    DEFF Research Database (Denmark)

    Cano-Fácila, Francisco José; Pivnenko, Sergey; Sierra-Castaner, Manuel

    2012-01-01

    A method to reduce truncation errors in near-field antenna measurements is presented. The method is based on the Gerchberg-Papoulis iterative algorithm used to extrapolate band-limited functions and it is able to extend the valid region of the calculated far-field pattern up to the whole forward...

  16. Reduction of truncation errors in partial spherical near-field antenna measurements

    DEFF Research Database (Denmark)

    Pivnenko, Sergey; Cano Facila, Francisco J.

    2010-01-01

    In this report, a new and effective method for reduction of truncation errors in partial spherical near-field (SNF) antenna measurements is proposed. This method is based on the Gerchberg-Papoulis algorithm used to extrapolate functions and it is able to extend the valid region of the far...

  17. Exploring Type I and Type II Errors Using Rhizopus Sporangia Diameter Measurements.

    Science.gov (United States)

    Smith, Robert A.; Burns, Gerard; Freud, Brian; Fenning, Stacy; Hoffman, Rosemary; Sabapathi, Durai

    2000-01-01

    Presents exercises in which students can explore Type I and Type II errors using sporangia diameter measurements as a means of differentiating between two species. Examines the influence of sample size and significance level on the outcome of the analysis. (SAH)

  18. High dimensional linear regression models under long memory dependence and measurement error

    Science.gov (United States)

    Kaul, Abhishek

    This dissertation consists of three chapters. The first chapter introduces the models under consideration and motivates problems of interest. A brief literature review is also provided in this chapter. The second chapter investigates the properties of Lasso under long range dependent model errors. Lasso is a computationally efficient approach to model selection and estimation, and its properties are well studied when the regression errors are independent and identically distributed. We study the case where the regression errors form a long memory moving average process. We establish a finite sample oracle inequality for the Lasso solution. We then show the asymptotic sign consistency in this setup. These results are established in the high dimensional setup (p > n) where p can be increasing exponentially with n. Finally, we show the consistency and n^{1/2-d}-consistency of Lasso, along with the oracle property of adaptive Lasso, in the case where p is fixed. Here d is the memory parameter of the stationary error sequence. The performance of Lasso is also analysed in the present setup with a simulation study. The third chapter proposes and investigates the properties of a penalized quantile based estimator for measurement error models. Standard formulations of prediction problems in high dimension regression models assume the availability of fully observed covariates and sub-Gaussian and homogeneous model errors. This makes these methods inapplicable to measurement error models where covariates are unobservable and observations are possibly non sub-Gaussian and heterogeneous. We propose weighted penalized corrected quantile estimators for the regression parameter vector in linear regression models with additive measurement errors, where unobservable covariates are nonrandom. The proposed estimators forgo the need for the above mentioned model assumptions. We study these estimators in both the fixed dimension and high dimensional sparse setups; in the latter setup, the

  19. Error analysis for the ground-based microwave ozone measurements during STOIC

    Science.gov (United States)

    Connor, Brian J.; Parrish, Alan; Tsou, Jung-Jung; McCormick, M. Patrick

    1995-01-01

We present a formal error analysis and characterization of the microwave measurements made during the Stratospheric Ozone Intercomparison Campaign (STOIC). The most important error sources are found to be determination of the tropospheric opacity, the pressure-broadening coefficient of the observed line, and systematic variations in instrument response as a function of frequency ('baseline'). Net precision is 4-6% between 55 and 0.2 mbar, while accuracy is 6-10%. Resolution is 8-10 km below 3 mbar and increases to 17 km at 0.2 mbar. We show the 'blind' microwave measurements from STOIC and make limited comparisons to other measurements. We use the averaging kernels of the microwave measurement to eliminate resolution and a priori effects from a comparison to SAGE 2. The STOIC results and comparisons are broadly consistent with the formal analysis.

  20. Influenza infection rates, measurement errors and the interpretation of paired serology.

    Directory of Open Access Journals (Sweden)

    Simon Cauchemez

Full Text Available Serological studies are the gold standard method to estimate influenza infection attack rates (ARs) in human populations. In a common protocol, blood samples are collected before and after the epidemic in a cohort of individuals, and a rise in haemagglutination-inhibition (HI) antibody titers during the epidemic is considered as a marker of infection. Because of inherent measurement errors, a 2-fold rise is usually considered as insufficient evidence for infection and seroconversion is therefore typically defined as a 4-fold rise or more. Here, we revisit this widely accepted 70-year-old criterion. We develop a Markov chain Monte Carlo data augmentation model to quantify measurement errors and reconstruct the distribution of latent true serological status in a Vietnamese 3-year serological cohort, in which replicate measurements were available. We estimate that the 1-sided probability of a 2-fold error is 9.3% (95% Credible Interval, CI: 3.3%, 17.6%) when antibody titer is below 10 but is 20.2% (95% CI: 15.9%, 24.0%) otherwise. After correction for measurement errors, we find that the proportion of individuals with 2-fold rises in antibody titers was too large to be explained by measurement errors alone. Estimates of ARs vary greatly depending on whether those individuals are included in the definition of the infected population. A simulation study shows that our method is unbiased. The 4-fold rise case definition is relevant when aiming at a specific diagnostic for individual cases, but the justification is less obvious when the objective is to estimate ARs. In particular, it may lead to large underestimates of ARs. Determining which biological phenomenon contributes most to 2-fold rises in antibody titers is essential to assess bias with the traditional case definition and offer improved estimates of influenza ARs.
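
    To make the role of the 2-fold error probabilities concrete, here is a small illustrative simulation (not the authors' MCMC data-augmentation model): paired titers are measured with one-dilution-step errors at the rate quoted in the abstract for titers of 10 or above, and the fraction of spurious 2-fold and 4-fold "rises" in uninfected individuals is tallied. All numbers other than the 20.2% error rate are hypothetical.

```python
# Illustrative sketch: how 2-fold titer measurement errors alone can create
# apparent rises in paired serology from uninfected individuals.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
p_err = 0.202          # 1-sided probability of a 2-fold error (abstract, titer >= 10)

# True log2 titers, unchanged between paired samples (no infection occurred).
true = rng.integers(4, 8, size=n)     # titers 16..128 on a log2 scale (hypothetical)

def measure(t):
    """Observed log2 titer: true value +/- 1 dilution step with prob p_err each."""
    shift = rng.choice([-1, 0, 1], size=t.shape, p=[p_err, 1 - 2 * p_err, p_err])
    return t + shift

rise = measure(true) - measure(true)                     # paired pre/post measurements
print("apparent >=2-fold rises:", np.mean(rise >= 1))    # inflated by error alone
print("apparent >=4-fold rises:", np.mean(rise >= 2))    # why the 4-fold rule is used
```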

  1. Measuring and detecting molecular adaptation in codon usage against nonsense errors during protein translation.

    Science.gov (United States)

    Gilchrist, Michael A; Shah, Premal; Zaretzki, Russell

    2009-12-01

    Codon usage bias (CUB) has been documented across a wide range of taxa and is the subject of numerous studies. While most explanations of CUB invoke some type of natural selection, most measures of CUB adaptation are heuristically defined. In contrast, we present a novel and mechanistic method for defining and contextualizing CUB adaptation to reduce the cost of nonsense errors during protein translation. Using a model of protein translation, we develop a general approach for measuring the protein production cost in the face of nonsense errors of a given allele as well as the mean and variance of these costs across its coding synonyms. We then use these results to define the nonsense error adaptation index (NAI) of the allele or a contiguous subset thereof. Conceptually, the NAI value of an allele is a relative measure of its elevation on a specific and well-defined adaptive landscape. To illustrate its utility, we calculate NAI values for the entire coding sequence and across a set of nonoverlapping windows for each gene in the Saccharomyces cerevisiae S288c genome. Our results provide clear evidence of adaptation to reduce the cost of nonsense errors and increasing adaptation with codon position and expression. The magnitude and nature of this adaptation are also largely consistent with simulation results in which nonsense errors are the only selective force driving CUB evolution. Because NAI is derived from mechanistic models, it is both easier to interpret and more amenable to future refinement than other commonly used measures of codon bias. Further, our approach can also be used as a starting point for developing other mechanistically derived measures of adaptation such as for translational accuracy.

  2. A Bivariate Analogue to the Composed Product of Polynomials

    Institute of Scientific and Technical Information of China (English)

    Donald Mills; Kent M. Neuerburg

    2003-01-01

The concept of a composed product for univariate polynomials has been explored extensively by Brawley, Brown, Carlitz, Gao, Mills, et al. Starting with these fundamental ideas and utilizing fractional power series representation (in particular, the Puiseux expansion) of bivariate polynomials, we generalize the univariate results. We define a bivariate composed sum, composed multiplication, and composed product (based on function composition). Further, we investigate the algebraic structure of certain classes of bivariate polynomials under these operations. We also generalize a result of Brawley and Carlitz concerning the decomposition of polynomials into irreducibles.

  3. Compensating sampling errors in stabilizing helmet-mounted displays using auxiliary acceleration measurements

    Science.gov (United States)

    Merhav, S.; Velger, M.

    1991-01-01

    A method based on complementary filtering is shown to be effective in compensating for the image stabilization error due to sampling delays of HMD position and orientation measurements. These delays would otherwise have prevented the stabilization of the image in HMDs. The method is also shown to improve the resolution of the head orientation measurement, particularly at low frequencies, thus providing smoother head control commands, which are essential for precise head pointing and teleoperation.

  4. Entanglement Purification with Higher Error Threshold for Imperfection of Local Operations and Bell State Measurements

    Institute of Scientific and Technical Information of China (English)

    SONG Xin-Guo; FENG Xun-Li

    2005-01-01

We analyse further the entanglement purification protocol proposed by Feng et al. [Phys. Lett. A 271 (2000) 44] in the case of imperfect local operations and measurements. It is found that this protocol allows a higher error threshold. Compared with the standard entanglement purification proposed by Bennett et al. [Phys. Rev. Lett. 76 (1996) 722], it turns out that this protocol is remarkably robust against the influences of imperfect local operations and measurements.

  5. Measurement errors in the use of smartphones as low-cost forestry hypsometers

    OpenAIRE

    Fernández López, Cristina; Villasante Plágaro, Antonio M.

    2014-01-01

Various applications currently available for Android allow the estimation of tree heights by using the 3D accelerometer on smartphones. Some make the estimation using the image on the screen, while others do so by pointing with the edges of the terminal. The present study establishes the measurement errors obtained with the HTC Desire and Samsung Galaxy Note compared to those from the Blume Leiss and Vertex IV. Six series of 12 measurements each were made with each hypsometer (for heights of 6 m, 8 ...

  6. Numerical Research of the Measurement Error of Temperature Thermocouples with the Isolated Seal

    OpenAIRE

    Atroshenko Yuliana K.; Bychkova Alena A.; Strizhak Pavel A.

    2015-01-01

Mathematical models of heat transfer are developed for an assessment of the errors in temperature measurement by thermocouples with an isolated and an uninsulated seal. Dependences of the heating time required for reliable measurement are established for thermocouples with an isolated seal manufactured from different materials. It is shown that for thermocouples with an isolated seal the minimum necessary heating duration only slightly exceeds that for thermocouples with an uninsulated seal.

  7. High-accuracy current measurement with low-cost shunts by means of dynamic error correction

    OpenAIRE

    Weßkamp, Patrick; Melbert, Joachim

    2016-01-01

    Measurement of electrical current is often performed by using shunt resistors. Thermal effects due to self-heating and ambient temperature variation limit the achievable accuracy, especially if low-cost shunt resistors with increased temperature coefficients are utilized. In this work, a compensation method is presented which takes static and dynamic temperature drift effects into account and provides a significant reduction of measurement error. A thermal model of the shunt...

  8. Effects of holding time and measurement error on culturing Legionella in environmental water samples.

    Science.gov (United States)

    Flanders, W Dana; Kirkland, Kimberly H; Shelton, Brian G

    2014-10-01

Outbreaks of Legionnaires' disease require environmental testing of water samples from potentially implicated building water systems to identify the source of exposure. A previous study reports a large impact on Legionella sample results due to shipping and delays in sample processing. Specifically, this same study, without accounting for measurement error, reports that more than half of shipped samples tested had Legionella levels that arbitrarily changed up or down by one or more logs, and the authors attribute this result to shipping time. Accordingly, we conducted a study to determine the effects of sample holding/shipping time on Legionella sample results while taking into account measurement error, which has previously not been addressed. We analyzed 159 samples, each split into 16 aliquots, of which one-half (8) were processed promptly after collection. The remaining half (8) were processed the following day to assess the impact of holding/shipping time. A total of 2544 samples were analyzed including replicates. After accounting for inherent measurement error, we found that the effect of holding time on observed Legionella counts was small and should have no practical impact on interpretation of results. Holding samples increased the root mean squared error by only about 3-8%. Notably, for only one of 159 samples did the average of the 8 replicate counts change by 1 log. Thus, our findings do not support the hypothesis of frequent, significant (≥1 log10 unit) Legionella colony count changes due to holding.

  9. Experimental study of error sources in skin-friction balance measurements

    Science.gov (United States)

    Allen, J. M.

    1977-01-01

An experimental study has been performed to determine potential error sources in skin-friction balance measurements. A floating-element balance, large enough to contain the instrumentation needed to systematically investigate these error sources, has been constructed and tested in the thick turbulent boundary layer on the sidewall of a large supersonic wind tunnel. Test variables include element-to-case misalignment, gap size, and Reynolds number. The effects of these variables on the friction, lip, and normal forces have been analyzed. It was found that larger gap sizes were preferable to smaller ones; that small element recession below the surrounding test surface produced errors comparable to the same amount of protrusion above the test surface; and that normal forces on the element were, in some cases, large compared to the friction force.

  10. Direct measurement of the poliovirus RNA polymerase error frequency in vitro

    Energy Technology Data Exchange (ETDEWEB)

    Ward, C.D.; Stokes, M.A.M.; Flanegan, J.B. (Univ. of Florida College of Medicine, Gainesville (USA))

    1988-02-01

The fidelity of RNA replication by the poliovirus-RNA-dependent RNA polymerase was examined by copying homopolymeric RNA templates in vitro. The poliovirus RNA polymerase was extensively purified and used to copy poly(A), poly(C), or poly(I) templates with equimolar concentrations of noncomplementary and complementary ribonucleotides. The error frequency was expressed as the amount of a noncomplementary nucleotide incorporated divided by the total amount of complementary and noncomplementary nucleotide incorporated. The polymerase error frequencies were very high and depended on the specific reaction conditions. The activity of the polymerase on poly(U) and poly(G) was too low to measure error frequencies on these templates. A fivefold increase in the error frequency was observed when the reaction conditions were changed from 3.0 mM Mg²⁺ (pH 7.0) to 7.0 mM Mg²⁺ (pH 8.0). This increase in the error frequency correlates with an eightfold increase in the elongation rate that was observed under the same conditions in a previous study.
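
    The error-frequency definition in this abstract is simple arithmetic; a minimal worked example follows, with made-up incorporation amounts, since the paper's measured values are not given here.

```python
# Worked example of the error-frequency definition used in the abstract:
# noncomplementary incorporation divided by total incorporation.
# The pmol values below are hypothetical, for illustration only.
noncomplementary_pmol = 0.7
complementary_pmol = 4900.0

error_frequency = noncomplementary_pmol / (complementary_pmol + noncomplementary_pmol)
print(f"error frequency ~ 1 in {1 / error_frequency:.0f} nucleotides")
```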

  11. SHAPE-PRESERVING BIVARIATE POLYNOMIAL APPROXIMATION IN C([-1,1]×[-1,1])

    Institute of Scientific and Technical Information of China (English)

    Sorin G. Gal

    2002-01-01

In this paper we construct bivariate polynomials attached to a bivariate function that approximate it with a Jackson-type rate involving a bivariate Ditzian-Totik ω₂-modulus of smoothness and preserve some natural kinds of bivariate monotonicity and convexity of the function. The result extends that in the univariate case of D. Leviatan in [5-6], improves that in the bivariate case of the author in [3] and, in some special cases, that in the bivariate case of G. Anastassiou in [1].

  12. A simple powerful bivariate test for two sample location problems in experimental and observational studies

    Directory of Open Access Journals (Sweden)

    Ayatollahi S MT

    2010-05-01

Full Text Available Abstract Background In many areas of medical research, a bivariate analysis is desirable because it simultaneously tests two response variables that are of equal interest and importance in two populations. Several parametric and nonparametric bivariate procedures are available for the location problem, but each of them requires a series of stringent assumptions such as a specific distribution, affine-invariance or elliptical symmetry. The aim of this study is to propose a powerful test statistic that requires none of the aforementioned assumptions. We have reduced the bivariate problem to the univariate problem of the sum or difference of measurements. A simple bivariate test for the difference in location between two populations is proposed. Method In this study the proposed test is compared with Hotelling's T² test, the two-sample Rank test, the Cramer test for the multivariate two-sample problem and Mathur's test using Monte Carlo simulation techniques. The power study shows that the proposed test performs better than any of its competitors for most of the populations considered and is equivalent to the Rank test for specific distributions. Conclusions Using simulation studies, we show that the proposed test performs well under different conditions of the underlying population distribution, such as normality or non-normality, skewed or symmetric, medium tailed or heavy tailed. The test is therefore recommended for practical applications because it is more powerful than any of the alternatives compared in this paper for almost all the shifts in location and in any direction.
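
    A minimal sketch of the reduction described above: combine the two response variables into their sum and difference and apply a univariate rank test to each. The rule for combining the two univariate results into the authors' single statistic may differ; the data here are synthetic.

```python
# Sketch of reducing a bivariate two-sample location problem to univariate
# problems on the sum and difference of the two responses, then applying a
# univariate rank test (Mann-Whitney) to each combination.
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(1)
x = rng.normal([0.0, 0.0], 1.0, size=(40, 2))      # population 1
y = rng.normal([0.8, 0.5], 1.0, size=(50, 2))      # population 2, location-shifted

for label, combine in [("sum", lambda a: a[:, 0] + a[:, 1]),
                       ("difference", lambda a: a[:, 0] - a[:, 1])]:
    stat, p = mannwhitneyu(combine(x), combine(y))
    print(f"{label:10s} U={stat:.1f}  p={p:.4f}")
```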

  13. Nonlinear analysis of cylindrical capacitive sensor used for measuring high precision spindle rotation errors

    Science.gov (United States)

    Xiang, Kui; Wang, Wen; Zhang, Min; Lu, Keqing; Fan, Zongwei; Chen, Zichen

    2015-02-01

A novel cylindrical capacitive sensor (CCS) with a differential, symmetrical and integrated structure was proposed to measure multi-degree-of-freedom rotation errors of a high precision spindle simultaneously and to reduce the impact of multiple sensor installation errors on the measurement accuracy. The nonlinear relationship between the output capacitance of the CCS and the radial gap was derived using the capacitance formula and was quantitatively analyzed. It was found through analysis that the thickness of the curved electrode plates led to the existence of a fringe effect. The influence of the fringe effect on the output capacitance was investigated through FEM simulation. It was found through analysis and simulation that the CCS could be optimized to improve the measurement accuracy.

  14. Long-term continuous acoustical suspended-sediment measurements in rivers - Theory, application, bias, and error

    Science.gov (United States)

    Topping, David J.; Wright, Scott A.

    2016-05-04

…these sites. In addition, detailed, step-by-step procedures are presented for the general river application of the method. Quantification of errors in sediment-transport measurements made using this acoustical method is essential if the measurements are to be used effectively, for example, to evaluate uncertainty in long-term sediment loads and budgets. Several types of error analyses are presented to evaluate (1) the stability of acoustical calibrations over time, (2) the effect of neglecting backscatter from silt and clay, (3) the bias arising from changes in sand grain size, (4) the time-varying error in the method, and (5) the influence of nonrandom processes on error. Results indicate that (1) acoustical calibrations can be stable for long durations (multiple years), (2) neglecting backscatter from silt and clay can result in unacceptably high bias, (3) two frequencies are likely required to obtain sand-concentration measurements that are unbiased by changes in grain size, depending on site-specific conditions and acoustic frequency, (4) relative errors in silt-and-clay- and sand-concentration measurements decrease substantially as concentration increases, and (5) nonrandom errors may arise from slow changes in the spatial structure of suspended sediment that affect the relations between concentration in the acoustically ensonified part of the cross section and concentration in the entire river cross section. Taken together, the error analyses indicate that the two-frequency method produces unbiased measurements of suspended-silt-and-clay and sand concentration, with errors that are similar to, or larger than, those associated with conventional sampling methods.

  15. Recursive Numerical Evaluation of the Cumulative Bivariate Normal Distribution

    CERN Document Server

    Meyer, Christian

    2010-01-01

    We propose an algorithm for evaluation of the cumulative bivariate normal distribution, building upon Marsaglia's ideas for evaluation of the cumulative univariate normal distribution. The algorithm is mathematically transparent, delivers competitive performance and can easily be extended to arbitrary precision.
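
    For context, the quantity being computed can be written as a one-dimensional integral, P(X ≤ x, Y ≤ y) = ∫_{-∞}^{x} φ(t) Φ((y − ρt)/√(1 − ρ²)) dt, which the sketch below evaluates by quadrature. This illustrates the target function only; it is not Meyer's Marsaglia-style recursive algorithm.

```python
# Transparent (if not the fastest) evaluation of the cumulative bivariate
# normal via the conditional-CDF identity and one-dimensional quadrature.
import numpy as np
from scipy.stats import norm
from scipy.integrate import quad

def bvn_cdf(x, y, rho):
    integrand = lambda t: norm.pdf(t) * norm.cdf((y - rho * t) / np.sqrt(1 - rho**2))
    val, _ = quad(integrand, -np.inf, x)
    return val

# Check against the closed form P(X<=0, Y<=0) = 1/4 + arcsin(rho)/(2*pi).
print(bvn_cdf(0.0, 0.0, 0.5))   # ~ 0.3333
```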

  16. Cumulative Incidence Association Models for Bivariate Competing Risks Data.

    Science.gov (United States)

    Cheng, Yu; Fine, Jason P

    2012-03-01

    Association models, like frailty and copula models, are frequently used to analyze clustered survival data and evaluate within-cluster associations. The assumption of noninformative censoring is commonly applied to these models, though it may not be true in many situations. In this paper, we consider bivariate competing risk data and focus on association models specified for the bivariate cumulative incidence function (CIF), a nonparametrically identifiable quantity. Copula models are proposed which relate the bivariate CIF to its corresponding univariate CIFs, similarly to independently right censored data, and accommodate frailty models for the bivariate CIF. Two estimating equations are developed to estimate the association parameter, permitting the univariate CIFs to be estimated either parametrically or nonparametrically. Goodness-of-fit tests are presented for formally evaluating the parametric models. Both estimators perform well with moderate sample sizes in simulation studies. The practical use of the methodology is illustrated in an analysis of dementia associations.

  17. Cost-Sensitive Feature Selection of Numeric Data with Measurement Errors

    Directory of Open Access Journals (Sweden)

    Hong Zhao

    2013-01-01

    Full Text Available Feature selection is an essential process in data mining applications since it reduces a model’s complexity. However, feature selection with various types of costs is still a new research topic. In this paper, we study the cost-sensitive feature selection problem of numeric data with measurement errors. The major contributions of this paper are fourfold. First, a new data model is built to address test costs and misclassification costs as well as error boundaries. It is distinguished from the existing models mainly on the error boundaries. Second, a covering-based rough set model with normal distribution measurement errors is constructed. With this model, coverings are constructed from data rather than assigned by users. Third, a new cost-sensitive feature selection problem is defined on this model. It is more realistic than the existing feature selection problems. Fourth, both backtracking and heuristic algorithms are proposed to deal with the new problem. Experimental results show the efficiency of the pruning techniques for the backtracking algorithm and the effectiveness of the heuristic algorithm. This study is a step toward realistic applications of the cost-sensitive learning.

  18. Invited Review Article: Error and uncertainty in Raman thermal conductivity measurements

    Science.gov (United States)

    Beechem, Thomas; Yates, Luke; Graham, Samuel

    2015-04-01

    Error and uncertainty in Raman thermal conductivity measurements are investigated via finite element based numerical simulation of two geometries often employed—Joule-heating of a wire and laser-heating of a suspended wafer. Using this methodology, the accuracy and precision of the Raman-derived thermal conductivity are shown to depend on (1) assumptions within the analytical model used in the deduction of thermal conductivity, (2) uncertainty in the quantification of heat flux and temperature, and (3) the evolution of thermomechanical stress during testing. Apart from the influence of stress, errors of 5% coupled with uncertainties of ±15% are achievable for most materials under conditions typical of Raman thermometry experiments. Error can increase to >20%, however, for materials having highly temperature dependent thermal conductivities or, in some materials, when thermomechanical stress develops concurrent with the heating. A dimensionless parameter—termed the Raman stress factor—is derived to identify when stress effects will induce large levels of error. Taken together, the results compare the utility of Raman based conductivity measurements relative to more established techniques while at the same time identifying situations where its use is most efficacious.

  20. Estimation methods with ordered exposure subject to measurement error and missingness in semi-ecological design

    Directory of Open Access Journals (Sweden)

    Kim Hyang-Mi

    2012-09-01

Full Text Available Abstract Background In epidemiological studies, it is often not possible to measure accurately the exposures of participants even if their response variable can be measured without error. When there are several groups of subjects, occupational epidemiologists employ a group-based strategy (GBS) for exposure assessment to reduce bias due to measurement errors: individuals of a group/job within the study sample are commonly assigned the sample mean of exposure measurements from their group in evaluating the effect of exposure on the response. Therefore, exposure is estimated on an ecological level while health outcomes are ascertained for each subject. Such a study design leads to negligible bias in risk estimates when group means are estimated from 'large' samples. However, in many cases, only a small number of observations are available to estimate the group means, and this causes bias in the observed exposure-disease association. Also, the analysis in a semi-ecological design may involve exposure data with the majority missing and the rest observed with measurement errors, and complete response data collected with ascertainment. Methods In workplaces, groups/jobs are naturally ordered and this could be incorporated in the estimation procedure by constrained estimation methods together with the expectation and maximization (EM) algorithms for regression models having measurement error and missing values. Four methods were compared by a simulation study: naive complete-case analysis, GBS, the constrained GBS (CGBS), and the constrained expectation and maximization (CEM). We illustrated the methods in the analysis of decline in lung function due to exposures to carbon black. Results Naive and GBS approaches were shown to be inadequate when the number of exposure measurements is too small to accurately estimate group means. The CEM method appears to be best among them when within each exposure group at least a 'moderate' number of individuals have their
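
    The ordering constraint at the heart of the CGBS/CEM methods can be illustrated with the pool-adjacent-violators algorithm, which replaces raw group means by order-constrained ones. The sketch below is a generic weighted PAVA on hypothetical group means, not the authors' estimation procedure.

```python
# Weighted pool-adjacent-violators: nondecreasing fitted group means for
# exposure groups that are ordered from least to most exposed.
def pava(means, weights):
    blocks = [[m, w, 1] for m, w in zip(means, weights)]   # mean, weight, n groups
    i = 0
    while i < len(blocks) - 1:
        if blocks[i][0] > blocks[i + 1][0]:                # ordering violated: pool
            m1, w1, n1 = blocks[i]
            m2, w2, n2 = blocks[i + 1]
            blocks[i] = [(m1 * w1 + m2 * w2) / (w1 + w2), w1 + w2, n1 + n2]
            del blocks[i + 1]
            i = max(i - 1, 0)                              # re-check to the left
        else:
            i += 1
    fitted = []
    for m, _, n in blocks:
        fitted.extend([m] * n)
    return fitted

# Small-sample raw means need not respect the exposure ordering;
# the constrained means do. (Values are hypothetical.)
print(pava([1.2, 0.9, 1.8, 1.6, 2.4], [3, 2, 4, 3, 5]))
```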

  1. Error correction algorithm for high accuracy bio-impedance measurement in wearable healthcare applications.

    Science.gov (United States)

    Kubendran, Rajkumar; Lee, Seulki; Mitra, Srinjoy; Yazicioglu, Refet Firat

    2014-04-01

Implantable and ambulatory measurement of physiological signals such as bio-impedance using miniature biomedical devices needs a careful tradeoff between the limited power budget, measurement accuracy and complexity of implementation. This paper addresses this tradeoff through an extensive analysis of different stimulation and demodulation techniques for accurate bio-impedance measurement. Three cases are considered for rigorous analysis of a generic impedance model with multiple poles, which is stimulated using a square/sinusoidal current and demodulated using a square/sinusoidal clock. For each case, the error in determining the pole parameters (resistance and capacitance) is derived and compared. An error correction algorithm is proposed for square wave demodulation which reduces the peak estimation error from 9.3% to 1.3% for a simple tissue model. Simulation results in MATLAB using ideal RC values show an average accuracy of for single pole and for two pole RC networks. Measurements using ideal components for a single pole model gives an overall and readings from saline phantom solution (primarily resistive) gives an . A Figure of Merit is derived based on the ability to accurately resolve multiple poles in an unknown impedance with minimal measurement points per decade, for a given frequency range and supply current budget. This analysis is used to arrive at an optimal tradeoff between accuracy and power. Results indicate that the algorithm is generic and can be used for any application that involves resolving the poles of an unknown impedance. It can be implemented as a post-processing technique for error correction or even incorporated into wearable signal monitoring ICs.

  2. Error analysis and measurement uncertainty for a fiber grating strain-temperature sensor.

    Science.gov (United States)

    Tang, Jaw-Luen; Wang, Jian-Neng

    2010-01-01

A fiber grating sensor capable of distinguishing between temperature and strain, using a reference and a dual-wavelength fiber Bragg grating, is presented. Error analysis and measurement uncertainty for this sensor are studied theoretically and experimentally. The measured root mean squared errors for temperature T and strain ε were estimated to be 0.13 °C and 6 με, respectively. The maximum errors for temperature and strain were calculated as 0.00155 T + 2.90 × 10⁻⁶ ε and 3.59 × 10⁻⁵ ε + 0.01887 T, respectively. Using the estimation of expanded uncertainty at the 95% confidence level with a coverage factor of k = 2.205, temperature and strain measurement uncertainties were evaluated as 2.60 °C and 32.05 με, respectively. For the first time, to our knowledge, we have demonstrated the feasibility of estimating the measurement uncertainty for simultaneous strain-temperature sensing with such a fiber grating sensor.
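
    The expanded-uncertainty figures quoted above follow from U = k·u_c; a trivial check of that arithmetic (the combined standard uncertainties are back-calculated from the quoted results, so they are approximate):

```python
# Worked check of the expanded-uncertainty arithmetic: U = k * u_c with
# coverage factor k = 2.205 at the 95% confidence level (from the abstract).
k = 2.205
u_temperature = 2.60 / k    # combined standard uncertainty, deg C (back-calculated)
u_strain = 32.05 / k        # combined standard uncertainty, microstrain

print(f"U(T) = {k * u_temperature:.2f} degC, U(strain) = {k * u_strain:.2f} microstrain")
```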

  3. DISTANCE MEASURING MODELING AND ERROR ANALYSIS OF DUAL CCD VISION SYSTEM SIMULATING HUMAN EYES AND NECK

    Institute of Scientific and Technical Information of China (English)

    Wang Xuanyin; Xiao Baoping; Pan Feng

    2003-01-01

A dual-CCD simulating human eyes and neck (DSHEN) vision system is put forward. Its structure and principle are introduced. The DSHEN vision system can perform movements simulating those of human eyes and neck by means of four rotating joints, and realize precise object recognition and distance measurement in all orientations. The mathematical model of the DSHEN vision system is built, and its movement equation is solved. The coordinate error and measurement precision as affected by the movement parameters are analyzed by means of an intersection measuring method. A theoretical foundation is thus provided for further research on automatic object recognition and precise target tracking.

  4. THE ASYMPTOTIC DISTRIBUTIONS OF EMPIRICAL LIKELIHOOD RATIO STATISTICS IN THE PRESENCE OF MEASUREMENT ERROR

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

Suppose that several different imperfect instruments and one perfect instrument are independently used to measure some characteristics of a population. Thus, measurements of two or more sets of samples with varying accuracies are obtained. Statistical inference should be based on the pooled samples. In this article, the authors also assume that all the imperfect instruments are unbiased. They consider the problem of combining this information to make statistical tests for parameters more relevant. They define the empirical likelihood ratio functions and obtain their asymptotic distributions in the presence of measurement error.

  5. Performance measure of image and video quality assessment algorithms: subjective root-mean-square error

    Science.gov (United States)

    Nuutinen, Mikko; Virtanen, Toni; Häkkinen, Jukka

    2016-03-01

Evaluating algorithms used to assess image and video quality requires performance measures. Traditional performance measures (e.g., Pearson's linear correlation coefficient, Spearman's rank-order correlation coefficient, and root mean square error) compare quality predictions of algorithms to subjective mean opinion scores (mean opinion score/differential mean opinion score). We propose a subjective root-mean-square error (SRMSE) performance measure for evaluating the accuracy of algorithms used to assess image and video quality. The SRMSE performance measure takes into account dispersion between observers. The other important property of the SRMSE performance measure is its measurement scale, which is calibrated to units of the number of average observers. The results of the SRMSE performance measure indicate the extent to which the algorithm can replace the subjective experiment (as the number of observers). Furthermore, we have presented the concept of target values, which define the performance level of the ideal algorithm. We have calculated the target values for all sample sets of the CID2013, CVD2014, and LIVE multiply distorted image quality databases. The target values and MATLAB implementation of the SRMSE performance measure are available on the project page of this study.
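
    One plausible reading of the SRMSE idea, sketched with hypothetical numbers: scale each prediction residual by the dispersion of the observers for that stimulus, so the result is expressed relative to observer variability. The exact normalization used by Nuutinen et al. may differ.

```python
# Hedged sketch of a dispersion-aware RMSE in the spirit of SRMSE: residuals
# between algorithm predictions and MOS are scaled by each stimulus's
# observer standard deviation. All values below are hypothetical.
import numpy as np

mos = np.array([3.2, 4.1, 2.5, 3.8])        # mean opinion scores
sd_obs = np.array([0.8, 0.6, 0.9, 0.7])     # per-stimulus observer std devs
pred = np.array([3.0, 4.4, 2.9, 3.6])       # algorithm quality predictions

srmse = np.sqrt(np.mean(((mos - pred) / sd_obs) ** 2))
print(f"dispersion-scaled RMSE: {srmse:.3f} (in units of observer spread)")
```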

  6. Nonparametric Signal Extraction and Measurement Error in the Analysis of Electroencephalographic Activity During Sleep.

    Science.gov (United States)

    Crainiceanu, Ciprian M; Caffo, Brian S; Di, Chong-Zhi; Punjabi, Naresh M

    2009-06-01

    We introduce methods for signal and associated variability estimation based on hierarchical nonparametric smoothing with application to the Sleep Heart Health Study (SHHS). SHHS is the largest electroencephalographic (EEG) collection of sleep-related data, which contains, at each visit, two quasi-continuous EEG signals for each subject. The signal features extracted from EEG data are then used in second level analyses to investigate the relation between health, behavioral, or biometric outcomes and sleep. Using subject specific signals estimated with known variability in a second level regression becomes a nonstandard measurement error problem. We propose and implement methods that take into account cross-sectional and longitudinal measurement error. The research presented here forms the basis for EEG signal processing for the SHHS.

  7. Correction for Measurement Error from Genotyping-by-Sequencing in Genomic Variance and Genomic Prediction Models

    DEFF Research Database (Denmark)

    Ashraf, Bilal; Janss, Luc; Jensen, Just

Genotyping-by-sequencing (GBSeq) is becoming a cost-effective genotyping platform for species without available SNP arrays. GBSeq sequences short reads from restriction sites covering a limited part of the genome (e.g., 5-10%) with low sequencing depth per individual (e.g., 5-10X per sample). The GBSeq data can be used directly in genomic models in the form of individual SNP allele-frequency estimates (e.g., reference reads/total reads per polymorphic site per individual), but are subject to measurement error due to the low sequencing depth per individual. Due to technical reasons... In the current work we show how the correction for measurement error in GBSeq can also be applied in whole genome genomic variance and genomic prediction models. Bayesian whole-genome random regression models are proposed to allow implementation of large-scale SNP-based models with a per-SNP correction...

  8. Testing and Estimating Shape-Constrained Nonparametric Density and Regression in the Presence of Measurement Error

    KAUST Repository

    Carroll, Raymond J.

    2011-03-01

    In many applications we can expect that, or are interested to know if, a density function or a regression curve satisfies some specific shape constraints. For example, when the explanatory variable, X, represents the value taken by a treatment or dosage, the conditional mean of the response, Y , is often anticipated to be a monotone function of X. Indeed, if this regression mean is not monotone (in the appropriate direction) then the medical or commercial value of the treatment is likely to be significantly curtailed, at least for values of X that lie beyond the point at which monotonicity fails. In the case of a density, common shape constraints include log-concavity and unimodality. If we can correctly guess the shape of a curve, then nonparametric estimators can be improved by taking this information into account. Addressing such problems requires a method for testing the hypothesis that the curve of interest satisfies a shape constraint, and, if the conclusion of the test is positive, a technique for estimating the curve subject to the constraint. Nonparametric methodology for solving these problems already exists, but only in cases where the covariates are observed precisely. However in many problems, data can only be observed with measurement errors, and the methods employed in the error-free case typically do not carry over to this error context. In this paper we develop a novel approach to hypothesis testing and function estimation under shape constraints, which is valid in the context of measurement errors. Our method is based on tilting an estimator of the density or the regression mean until it satisfies the shape constraint, and we take as our test statistic the distance through which it is tilted. Bootstrap methods are used to calibrate the test. The constrained curve estimators that we develop are also based on tilting, and in that context our work has points of contact with methodology in the error-free case.

  9. Measurement error affects risk estimates for recruitment to the Hudson River stock of striped bass.

    Science.gov (United States)

    Dunning, Dennis J; Ross, Quentin E; Munch, Stephan B; Ginzburg, Lev R

    2002-06-07

    We examined the consequences of ignoring the distinction between measurement error and natural variability in an assessment of risk to the Hudson River stock of striped bass posed by entrainment at the Bowline Point, Indian Point, and Roseton power plants. Risk was defined as the probability that recruitment of age-1+ striped bass would decline by 80% or more, relative to the equilibrium value, at least once during the time periods examined (1, 5, 10, and 15 years). Measurement error, estimated using two abundance indices from independent beach seine surveys conducted on the Hudson River, accounted for 50% of the variability in one index and 56% of the variability in the other. If a measurement error of 50% was ignored and all of the variability in abundance was attributed to natural causes, the risk that recruitment of age-1+ striped bass would decline by 80% or more after 15 years was 0.308 at the current level of entrainment mortality (11%). However, the risk decreased almost tenfold (0.032) if a measurement error of 50% was considered. The change in risk attributable to decreasing the entrainment mortality rate from 11 to 0% was very small (0.009) and similar in magnitude to the change in risk associated with an action proposed in Amendment #5 to the Interstate Fishery Management Plan for Atlantic striped bass (0.006)--an increase in the instantaneous fishing mortality rate from 0.33 to 0.4. The proposed increase in fishing mortality was not considered an adverse environmental impact, which suggests that potentially costly efforts to reduce entrainment mortality on the Hudson River stock of striped bass are not warranted.
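
    The core of the argument is a variance partition: if half of the observed variability in the abundance index is measurement error, only the remaining half should drive the risk projection. The toy simulation below (a random walk in log recruitment, with illustrative numbers apart from the 50% split) shows how removing the measurement-error share deflates an extinction-style risk estimate; it is not the authors' population model.

```python
# Illustrative variance-partition simulation: risk = P(log recruitment ever
# drops >= 80% below equilibrium within 15 years), with and without
# attributing half of the observed variance to measurement error.
import numpy as np

rng = np.random.default_rng(2)
sigma2_obs = 0.5 ** 2      # observed variance of the log-abundance index (illustrative)
m = 0.50                   # share attributed to measurement error (from the abstract)

for label, sigma2 in [("all variance natural", sigma2_obs),
                      ("measurement error removed", sigma2_obs * (1 - m))]:
    paths = np.cumsum(rng.normal(0, np.sqrt(sigma2), size=(20_000, 15)), axis=1)
    risk = np.mean((paths <= np.log(0.2)).any(axis=1))
    print(f"{label:28s} risk = {risk:.3f}")
```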

  10. Measurement Error Affects Risk Estimates for Recruitment to the Hudson River Stock of Striped Bass

    Directory of Open Access Journals (Sweden)

    Dennis J. Dunning

    2002-01-01

Full Text Available We examined the consequences of ignoring the distinction between measurement error and natural variability in an assessment of risk to the Hudson River stock of striped bass posed by entrainment at the Bowline Point, Indian Point, and Roseton power plants. Risk was defined as the probability that recruitment of age-1+ striped bass would decline by 80% or more, relative to the equilibrium value, at least once during the time periods examined (1, 5, 10, and 15 years). Measurement error, estimated using two abundance indices from independent beach seine surveys conducted on the Hudson River, accounted for 50% of the variability in one index and 56% of the variability in the other. If a measurement error of 50% was ignored and all of the variability in abundance was attributed to natural causes, the risk that recruitment of age-1+ striped bass would decline by 80% or more after 15 years was 0.308 at the current level of entrainment mortality (11%). However, the risk decreased almost tenfold (0.032) if a measurement error of 50% was considered. The change in risk attributable to decreasing the entrainment mortality rate from 11 to 0% was very small (0.009) and similar in magnitude to the change in risk associated with an action proposed in Amendment #5 to the Interstate Fishery Management Plan for Atlantic striped bass (0.006), an increase in the instantaneous fishing mortality rate from 0.33 to 0.4. The proposed increase in fishing mortality was not considered an adverse environmental impact, which suggests that potentially costly efforts to reduce entrainment mortality on the Hudson River stock of striped bass are not warranted.

  11. Investigating the error budget of tropical rainfall accumulations derived from combined passive microwave and infrared satellite measurements

    Science.gov (United States)

    Roca, R.; Chambon, P.; jobard, I.; Viltard, N.

    2012-04-01

Measuring rainfall requires a high density of observations, which, over the whole tropical belt, can only be provided from space. For several decades, the availability of satellite observations has greatly increased; thanks to newly implemented missions like the Megha-Tropiques mission and the forthcoming GPM constellation, measurements from space are becoming available from a whole set of observing systems. In this work, we focus on rainfall error estimation at the 1°/1-day accumulated scale, a key scale for meteorological and hydrological studies. A novel methodology for quantitative precipitation estimation is introduced, named TAPEER (Tropical Amount of Precipitation with an Estimate of ERrors); it aims to provide 1°/1-day rain accumulations and associated errors over the whole tropical belt. This approach is based on a combination of infrared imagery from a fleet of geostationary satellites and passive microwave derived rain rates from a constellation of low earth orbiting satellites. A three-stage disaggregation of error into sampling, algorithmic and calibration errors is performed; the magnitudes of the three terms are then estimated separately. A dedicated error model is used to evaluate sampling errors and a forward error propagation approach is used to estimate algorithmic and calibration errors. One of the main findings of this study is the large contribution of the sampling errors and of the algorithmic errors of BRAIN on medium rain rates (2 mm h⁻¹ to 10 mm h⁻¹) to the total error budget.

  12. Stochastic thermodynamics based on incomplete information: generalized Jarzynski equality with measurement errors with or without feedback

    Science.gov (United States)

    Wächtler, Christopher W.; Strasberg, Philipp; Brandes, Tobias

    2016-11-01

In the derivation of fluctuation relations, and in stochastic thermodynamics in general, it is tacitly assumed that we can measure the system perfectly, i.e., without measurement errors. We here demonstrate, for a driven system immersed in a single heat bath for which the classic Jarzynski equality ⟨e^{−β(W−ΔF)}⟩ = 1 holds, how to relax this assumption. Based on a general measurement model akin to Bayesian inference, we derive a general expression for the fluctuation relation of the measured work and we study the cases of an overdamped Brownian particle and of a two-level system in particular. We then generalize our results further and incorporate feedback in our description. We show and argue that, if measurement errors are fully taken into account by the agent who controls and observes the system, the standard Jarzynski-Sagawa-Ueda relation should be formulated differently. We again explicitly demonstrate this for an overdamped Brownian particle and a two-level system, where the fluctuation relation of the measured work differs significantly from the efficacy parameter introduced by Sagawa and Ueda. Instead, the generalized fluctuation relation under feedback control, ⟨e^{−β(W−ΔF)−I}⟩ = 1, holds only for a superobserver having perfect access to both the system and detector degrees of freedom, independently of whether or not the detector yields a noisy measurement record and whether or not we perform feedback.
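
    For reference, the two relations quoted in the abstract written out in full (the angle brackets were evidently lost when the record was scraped); the feedback version is given in the standard Sagawa-Ueda form with I the mutual information, which may differ in detail from the paper's superobserver statement:

```latex
% Classic Jarzynski equality (driven system, single heat bath):
\left\langle e^{-\beta\,(W - \Delta F)} \right\rangle = 1
% Generalized relation under feedback control (standard Sagawa-Ueda form),
% with I the mutual information acquired by the measurement:
\left\langle e^{-\beta\,(W - \Delta F) - I} \right\rangle = 1
```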

  13. Measurements of Gun Tube Motion and Muzzle Pointing Error of Main Battle Tanks

    Directory of Open Access Journals (Sweden)

    Peter L. McCall

    2001-01-01

Full Text Available Beginning in 1990, the US Army Aberdeen Test Center (ATC) began testing a prototype cannon mounted in a non-armored turret fitted to an M1A1 Abrams tank chassis. The cannon design incorporated a longer gun tube as a means to increase projectile velocity. A significant increase in projectile impact dispersion was measured early in the test program. Through investigative efforts, the cause of the error was linked to the increased dynamic bending or flexure of the longer tube observed while the vehicle was moving. Research and investigative work was conducted through a collaborative effort with the US Army Research Laboratory, Benet Laboratory, Project Manager – Tank Main Armament Systems, US Army Research and Engineering Center, and Cadillac Gage Textron Inc. New test methods, instrumentation, data analysis procedures, and stabilization control design resulted from this series of investigations into the dynamic tube flexure error source. Through this joint research, improvements in tank fire control design have been developed to improve delivery accuracy. This paper discusses the instrumentation implemented, the methods applied, and the analysis procedures used to characterize the tube flexure during dynamic tests of a main battle tank, and the relationship between gun pointing error and muzzle pointing error.

  14. Mapping of error cells in clinical measure to symmetric power space.

    Science.gov (United States)

    Abelman, H; Abelman, S

    2007-09-01

    During the refraction procedure, the power of the nearest equivalent sphere lens, known as the scalar power, is conserved within upper and lower bounds in the sphere (and cylinder) lens powers. Bounds are brought closer together while keeping the circle of least confusion on the retina. The sphere and cylinder powers and changes in these powers are thus dependent. Changes are depicted in the cylinder-sphere plane by error cells with one pair of parallel sides of negative gradient and the other pair aligned with the graph axis of cylinder power. Scalar power constitutes a vector space, is a meaningful ophthalmic quantity and is represented by the semi-trace of the dioptric power matrix. The purpose of this article is to map to error cells for the following: coordinates of the dioptric power matrix, its principal powers and meridians and its entries from error cells surrounding powers in sphere, cylinder and axis. Error cells in clinical measure for conserved scalar power now contain more compensatory lens powers. Such cells and their respective mappings in terms of most scientific and alternate clinical quantities now image consistently not only to the cells from where they originate but also to each other.
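
    The clinical-to-matrix mapping that underlies these error cells is standard: a sphere/cylinder/axis prescription determines a symmetric dioptric power matrix whose semi-trace is the scalar power S + C/2. A small sketch of that mapping (the prescription values are arbitrary):

```python
# Sphere/cylinder/axis prescription -> dioptric power matrix; the scalar
# power is the semi-trace, S + C/2.
import numpy as np

def power_matrix(sphere, cylinder, axis_deg):
    a = np.radians(axis_deg)
    return np.array([
        [sphere + cylinder * np.sin(a) ** 2, -cylinder * np.sin(a) * np.cos(a)],
        [-cylinder * np.sin(a) * np.cos(a), sphere + cylinder * np.cos(a) ** 2],
    ])

P = power_matrix(sphere=-2.00, cylinder=-1.50, axis_deg=30)
print("scalar power (semi-trace):", np.trace(P) / 2)   # equals sphere + cylinder/2
```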

  15. Error analysis and corrections to pupil diameter measurements with Langley Research Center's oculometer

    Science.gov (United States)

    Fulton, C. L.; Harris, R. L., Jr.

    1980-01-01

Factors that can affect oculometer measurements of pupil diameter are: the horizontal (azimuth) and vertical (elevation) viewing angle of the pilot; refraction of the eye and cornea; changes in the distance of the eye to the camera; the illumination intensity of light on the eye; and the counting sensitivity of the scan lines used to measure diameter, and output voltage. To estimate the accuracy of the measurements, an artificial eye was designed and a series of runs performed with the oculometer system. When refraction effects are included, results show that pupil diameter is a parabolic function of the azimuth angle, similar to the cosine function predicted by theory; this error can be accounted for by using a correction equation, reducing the error from 6% to 1.5% of the actual diameter. Elevation angle and illumination effects were found to be negligible. The effects of counting sensitivity and output voltage can be calculated directly from the system documentation. The overall accuracy of the unmodified system is about 6%. After correcting for the azimuth angle errors, the overall accuracy is approximately 2%.

  16. Optics measurement algorithms and error analysis for the proton energy frontier

    CERN Document Server

    Langner, A

    2015-01-01

Optics measurement algorithms have been improved in preparation for the commissioning of the LHC at higher energy, i.e., with an increased damage potential. Due to machine protection considerations, the higher energy sets tighter limits on the maximum excitation amplitude and the total beam charge, reducing the signal to noise ratio of optics measurements. Furthermore, the precision in 2012 (4 TeV) was insufficient to understand beam size measurements and determine interaction point (IP) β-functions (β*). A new, more sophisticated algorithm has been developed which takes into account both the statistical and systematic errors involved in this measurement. This makes it possible to combine more beam position monitor measurements for deriving the optical parameters and is demonstrated to significantly improve the accuracy and precision. Measurements from the 2012 run have been reanalyzed which, due to the improved algorithms, result in a significantly higher precision of the derived optical parameters and decreased...

  17. A statistical model for measurement error that incorporates variation over time in the target measure, with application to nutritional epidemiology.

    Science.gov (United States)

    Freedman, Laurence S; Midthune, Douglas; Dodd, Kevin W; Carroll, Raymond J; Kipnis, Victor

    2015-11-30

    Most statistical methods that adjust analyses for measurement error assume that the target exposure T is a fixed quantity for each individual. However, in many applications, the value of T for an individual varies with time. We develop a model that accounts for such variation, describing the model within the framework of a meta-analysis of validation studies of dietary self-report instruments, where the reference instruments are biomarkers. We demonstrate that in this application, the estimates of the attenuation factor and correlation with true intake, key parameters quantifying the accuracy of the self-report instrument, are sometimes substantially modified under the time-varying exposure model compared with estimates obtained under a traditional fixed-exposure model. We conclude that accounting for the time element in measurement error problems is potentially important.

  18. Some Improved Estimators of Co-efficient of Variation from Bi-variate normal distribution: A Monte Carlo Comparison

    Directory of Open Access Journals (Sweden)

    Archana V

    2014-05-01

Full Text Available Co-efficient of variation is a unitless measure of dispersion and is very frequently used in scientific investigations. This has motivated several researchers to propose estimators and tests concerning the co-efficient of variation of normal distribution(s). While proposing a class of estimators for the co-efficient of variation of a finite population, Tripathi et al. (2002) suggested that the estimator of the co-efficient of variation of a finite population can also be used as an estimator of the C.V for any distribution when the sampling design is SRSWR. This has motivated us to propose 28 estimators of the finite population co-efficient of variation as estimators of the co-efficient of variation of one component of a bivariate normal distribution when prior information is available regarding the second component. A Cramer-Rao type lower bound is derived for the mean square error of these estimators. Extensive simulation is carried out to compare these estimators. The results indicate that out of these 28 estimators, eight estimators have larger relative efficiency compared to the sample co-efficient of variation. The asymptotic mean square errors of the best estimators are derived to the order of  for the benefit of users of the co-efficient of variation.
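
    To fix ideas, a sketch of the estimation setting: the naive sample CV of one component of a bivariate normal, next to a simple ratio-type adjustment exploiting a known CV of the correlated second component. The adjustment is purely illustrative and is not one of the paper's 28 estimators; all parameter values are hypothetical.

```python
# Naive sample CV of X versus a ratio-type adjustment that uses the known CV
# of the correlated component Y. Data are simulated from a bivariate normal.
import numpy as np

rng = np.random.default_rng(3)
mu = np.array([10.0, 20.0])                 # means of (X, Y)
cv_x, cv_y, rho, n = 0.2, 0.3, 0.7, 30
sd = np.array([cv_x * mu[0], cv_y * mu[1]])
cov = np.array([[sd[0] ** 2, rho * sd[0] * sd[1]],
                [rho * sd[0] * sd[1], sd[1] ** 2]])
x, y = rng.multivariate_normal(mu, cov, size=n).T

cv_hat = x.std(ddof=1) / x.mean()                      # naive sample CV of X
cv_adj = cv_hat * cv_y / (y.std(ddof=1) / y.mean())    # rescale by the known CV of Y
print(f"naive CV: {cv_hat:.3f}  ratio-adjusted: {cv_adj:.3f}  true: {cv_x}")
```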

  19. Precision Measurements of the Cluster Red Sequence using an Error Corrected Gaussian Mixture Model

    Energy Technology Data Exchange (ETDEWEB)

    Hao, Jiangang; /Fermilab /Michigan U.; Koester, Benjamin P.; /Chicago U.; Mckay, Timothy A.; /Michigan U.; Rykoff, Eli S.; /UC, Santa Barbara; Rozo, Eduardo; /Ohio State U.; Evrard, August; /Michigan U.; Annis, James; /Fermilab; Becker, Matthew; /Chicago U.; Busha, Michael; /KIPAC, Menlo Park /SLAC; Gerdes, David; /Michigan U.; Johnston, David E.; /Northwestern U. /Brookhaven

    2009-07-01

    The red sequence is an important feature of galaxy clusters and plays a crucial role in optical cluster detection. Measurement of the slope and scatter of the red sequence are affected both by selection of red sequence galaxies and measurement errors. In this paper, we describe a new error corrected Gaussian Mixture Model for red sequence galaxy identification. Using this technique, we can remove the effects of measurement error and extract unbiased information about the intrinsic properties of the red sequence. We use this method to select red sequence galaxies in each of the 13,823 clusters in the maxBCG catalog, and measure the red sequence ridgeline location and scatter of each. These measurements provide precise constraints on the variation of the average red galaxy populations in the observed frame with redshift. We find that the scatter of the red sequence ridgeline increases mildly with redshift, and that the slope decreases with redshift. We also observe that the slope does not strongly depend on cluster richness. Using similar methods, we show that this behavior is mirrored in a spectroscopic sample of field galaxies, further emphasizing that ridgeline properties are independent of environment. These precise measurements serve as an important observational check on simulations and mock galaxy catalogs. The observed trends in the slope and scatter of the red sequence ridgeline with redshift are clues to possible intrinsic evolution of the cluster red-sequence itself. Most importantly, the methods presented in this work lay the groundwork for further improvements in optically-based cluster cosmology.
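
    A minimal one-dimensional sketch of the error-correction idea: each observation carries a known measurement-error variance that is added to the intrinsic component variance inside the EM updates, so the fitted scatter is the intrinsic one. This illustrates the principle only, with synthetic data and an approximate M-step, and is not the authors' ECGMM implementation.

```python
# Two-component 1-D Gaussian mixture with per-observation measurement errors:
# the E-step uses total variance = intrinsic + measurement-error variance.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(4)
n = 2000
member = rng.random(n) < 0.7                          # red-sequence membership
color = np.where(member, rng.normal(1.0, 0.05, n), rng.normal(0.6, 0.3, n))
err = rng.uniform(0.02, 0.15, n)                      # known per-galaxy errors
obs = color + rng.normal(0, err)

w, mu, sig = np.array([0.5, 0.5]), np.array([0.8, 1.1]), np.array([0.3, 0.1])
for _ in range(200):                                  # EM iterations
    # E-step: responsibilities using the error-broadened densities
    dens = np.array([wk * norm.pdf(obs, mk, np.sqrt(sk**2 + err**2))
                     for wk, mk, sk in zip(w, mu, sig)])
    r = dens / dens.sum(axis=0)
    # M-step (approximate): inverse-variance weighted mean, deconvolved scatter
    for k in range(2):
        tot = sig[k]**2 + err**2
        mu[k] = np.sum(r[k] * obs / tot) / np.sum(r[k] / tot)
        sig[k] = np.sqrt(max(np.average((obs - mu[k])**2 - err**2, weights=r[k]), 1e-6))
    w = r.mean(axis=1)

print("weights", w.round(2), "means", mu.round(3), "intrinsic sigmas", sig.round(3))
```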

  20. Visual acuity measures do not reliably detect childhood refractive error--an epidemiological study.

    Directory of Open Access Journals (Sweden)

    Lisa O'Donoghue

Full Text Available PURPOSE: To investigate the utility of uncorrected visual acuity measures in screening for refractive error in white school children aged 6-7 years and 12-13 years. METHODS: The Northern Ireland Childhood Errors of Refraction (NICER) study used a stratified random cluster design to recruit children from schools in Northern Ireland. Detailed eye examinations included assessment of logMAR visual acuity and cycloplegic autorefraction. Spherical equivalent refractive data from the right eye were used to classify significant refractive error as myopia of at least 1DS, hyperopia as greater than +3.50DS, and astigmatism as greater than 1.50DC, whether it occurred in isolation or in association with myopia or hyperopia. RESULTS: Results are presented from 661 white 12-13-year-old and 392 white 6-7-year-old school-children. Using a cut-off of uncorrected visual acuity poorer than 0.20 logMAR to detect significant refractive error gave a sensitivity of 50% and specificity of 92% in 6-7-year-olds, and 73% and 93% respectively in 12-13-year-olds. In 12-13-year-old children a cut-off of poorer than 0.20 logMAR had a sensitivity of 92% and a specificity of 91% in detecting myopia, and a sensitivity of 41% and a specificity of 84% in detecting hyperopia. CONCLUSIONS: Vision screening using logMAR acuity can reliably detect myopia, but not hyperopia or astigmatism, in school-age children. Providers of vision screening programs should be cognisant that where detection of uncorrected hyperopic and/or astigmatic refractive error is an aspiration, current UK protocols will not effectively deliver.

  1. Re-Assessing Poverty Dynamics and State Protections in Britain and the US: The Role of Measurement Error

    Science.gov (United States)

    Worts, Diana; Sacker, Amanda; McDonough, Peggy

    2010-01-01

    This paper addresses a key methodological challenge in the modeling of individual poverty dynamics--the influence of measurement error. Taking the US and Britain as case studies and building on recent research that uses latent Markov models to reduce bias, we examine how measurement error can affect a range of important poverty estimates. Our data…

  2. Mechanistically-informed damage detection using dynamic measurements: Extended constitutive relation error

    Science.gov (United States)

    Hu, X.; Prabhu, S.; Atamturktur, S.; Cogan, S.

    2017-02-01

    Model-based damage detection entails the calibration of damage-indicative parameters in a physics-based computer model of an undamaged structural system against measurements collected from its damaged counterpart. The approach relies on the premise that changes identified in the damage-indicative parameters during calibration reveal the structural damage in the system. In model-based damage detection, model calibration has traditionally been treated as a process operating solely on the model output, without incorporating available knowledge regarding the underlying mechanistic behavior of the structural system. In this paper, the authors propose a novel approach for model-based damage detection by implementing the Extended Constitutive Relation Error (ECRE), a method developed for error localization in finite element models. The ECRE method was originally conceived to identify discrepancies between experimental measurements and model predictions for a structure in a given healthy state. Implementing ECRE for damage detection leads to the evaluation of a structure in varying healthy states and determination of the discrepancy between model predictions and experiments due to damage. The authors developed an ECRE-based damage detection procedure in which the model error and structural damage are identified in two distinct steps, and demonstrate the feasibility of the procedure in identifying the presence, location and relative severity of damage on a scaled two-story steel frame for damage scenarios of varying type and severity.

  3. Analysis of errors in the measurement of energy dissipation with two-point LDA

    Energy Technology Data Exchange (ETDEWEB)

    Ducci, A.; Yianneskis, M. [Department of Mechanical Engineering, King's College London, Experimental and Computational Laboratory for the Analysis of Turbulence (ECLAT), London (United Kingdom)

    2005-04-01

    In the present study, an attempt has been made to identify and quantify, with a rigorous analytical approach, all possible sources of error involved in the estimation of the fluctuating velocity gradients (∂u_i/∂x_j)² when a two-point laser Doppler velocimetry (LDV) technique is employed. Measurements were carried out in a grid-generated turbulence flow where the local dissipation rate can be calculated from the decay of kinetic energy. An assessment of the cumulative error determined through the analysis has been made by comparing the values of the spatial gradients directly measured with the gradient estimated from the decay of kinetic energy. The main sources of error were found to be related to the length of the two control volumes and to the fitting range, as well as to the function used to interpolate the correlation coefficient when the Taylor length scale (or (∂u_i/∂x_j)²) is estimated. (orig.)
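
    One common route by which such gradient estimates enter a dissipation measurement is the Taylor length scale obtained by fitting the short-separation behavior of the correlation coefficient, which is exactly where the abstract locates the dominant errors (fitting range and interpolating function). A minimal sketch, with hypothetical correlation data and the classical isotropic relation ε = 30ν u'²/λ² for the longitudinal Taylor scale:

```python
import numpy as np

# Hypothetical two-point LDA data: correlation coefficient f(r) of the
# streamwise velocity fluctuation at a few probe separations r (m).
r = np.array([0.5e-3, 1.0e-3, 1.5e-3, 2.0e-3])
f = np.array([0.995, 0.980, 0.956, 0.923])

u_rms = 0.12   # fluctuating velocity, m/s (assumed measured)
nu = 1.0e-6    # kinematic viscosity of water, m^2/s (assumed)

# Near r = 0 the longitudinal correlation behaves as f(r) ~ 1 - r^2/lambda^2,
# so regress (1 - f) on r^2 through the origin to estimate the Taylor scale.
slope = np.sum(r**2 * (1.0 - f)) / np.sum(r**4)
lam = 1.0 / np.sqrt(slope)

# Isotropic estimate of the dissipation rate from the longitudinal scale.
eps = 30.0 * nu * u_rms**2 / lam**2
print(f"Taylor microscale = {lam * 1e3:.2f} mm, dissipation = {eps:.2e} W/kg")
```

    The choice of fitting range (here, which separations enter the regression) and of the interpolating function directly moves λ and hence the dissipation estimate, which is the error mechanism the paper quantifies.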

  4. Evaluating Procedures for Reducing Measurement Error in Math Curriculum-Based Measurement Probes

    Science.gov (United States)

    Methe, Scott A.; Briesch, Amy M.; Hulac, David

    2015-01-01

    At present, it is unclear whether math curriculum-based measurement (M-CBM) procedures provide a dependable measure of student progress in math computation because support for its technical properties is based largely upon a body of correlational research. Recent investigations into the dependability of M-CBM scores have found that evaluating…

  5. Systematic error investigation of the spin tune analysis for an EDM measurement at COSY

    Energy Technology Data Exchange (ETDEWEB)

    Trinkel, Fabian [Institut fuer Kernphysik, Forschungszentrum Juelich, Wilhelm-Johnen-Strasse 52428 Juelich (Germany); Collaboration: JEDI-Collaboration

    2015-07-01

    So far there have been no direct Electric Dipole Moment (EDM) measurements for charged hadrons. The goal of the JEDI collaboration (Juelich Electric Dipole moment Investigations) is to measure the EDM of charged particles (p, d and ³He). A first step toward an EDM measurement is the investigation of systematic errors at the storage ring COSY (COoler SYnchrotron). One part of these studies examines the spin tune ν_s of a horizontally polarized deuteron beam. The spin tune is defined as the number of spin rotations in the horizontal plane per particle turn. To first approximation it is given by |ν_s| ≈ γG, where γ is the Lorentz factor and G is the anomalous magnetic moment of the particle. The spin precession is observed using elastic deuteron-carbon scattering. A measurement of the spin tune has been performed for a polarized deuteron beam with a precision of 10⁻¹⁰ at COSY. The measurement and possible systematic errors due to acceptance and polarization variation are discussed.
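
    The quoted approximation |ν_s| ≈ γG is easy to evaluate. A small sketch for a deuteron at a typical COSY momentum (the beam momentum and constants below are illustrative assumptions, not values taken from the abstract):

```python
import math

# Deuteron constants and a typical COSY momentum (values are illustrative).
m_d = 1.875613      # deuteron mass, GeV/c^2
G_d = -0.142987     # deuteron anomalous magnetic moment (approximate)
p = 0.970           # beam momentum, GeV/c

gamma = math.sqrt(1.0 + (p / m_d) ** 2)
nu_s = gamma * G_d
print(f"gamma = {gamma:.4f}, spin tune nu_s = {nu_s:.5f}")
# About -0.16: roughly 0.16 spin rotations per particle turn, the quantity
# measured to the 1e-10 level in the study.
```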

  6. Synchrotron radiation measurement of multiphase fluid saturations in porous media: Experimental technique and error analysis

    Science.gov (United States)

    Tuck, David M.; Bierck, Barnes R.; Jaffé, Peter R.

    1998-06-01

    Multiphase flow in porous media is an important research topic. In situ, nondestructive experimental methods for studying multiphase flow are important for improving our understanding and the theory. Rapid changes in fluid saturation, characteristic of immiscible displacement, are difficult to measure accurately using gamma rays due to practical restrictions on source strength. Our objective is to describe a synchrotron radiation technique for rapid, nondestructive saturation measurements of multiple fluids in porous media, and to present a precision and accuracy analysis of the technique. Synchrotron radiation provides a high-intensity, inherently collimated photon beam of tunable energy which can yield accurate measurements of fluid saturation in just one second. Measurements were obtained with a precision of ±0.01 or better for tetrachloroethylene (PCE) in a 2.5 cm thick glass-bead porous medium using a counting time of 1 s. The normal distribution was shown to provide acceptable confidence limits for PCE saturation changes. Sources of error include the heat load on the monochromator, periodic movement of the source beam, and errors in the stepping-motor positioning system. Hypodermic needles pushed into the medium to inject PCE changed porosity in a region within approximately ±1 mm of the injection point. Improved mass balance between the known and measured PCE injection volumes was obtained when appropriate corrections were applied to calibration values near the injection point.
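
    The counting-statistics part of such a precision analysis can be sketched directly. For Poisson-distributed counts, the absorbance A = ln(I₀/I) has variance approximately 1/I₀ + 1/I, and a saturation change maps onto an absorbance change through the attenuation coefficient, porosity and thickness. All values below are hypothetical, not the paper's beamline parameters:

```python
import math

# Counting-statistics precision of an absorbance A = ln(I0/I) from
# Poisson-distributed photon counts (all values hypothetical).
I0 = 2.0e6   # incident counts in a 1 s window
I = 8.0e5    # transmitted counts in the same window

A = math.log(I0 / I)
sigma_A = math.sqrt(1.0 / I0 + 1.0 / I)   # first-order Poisson propagation

# A saturation change maps to the absorbance change through the attenuation
# coefficient mu (1/cm), porosity phi and thickness L (cm) - assumed values.
mu, phi, L = 30.0, 0.38, 2.5
sigma_S = sigma_A / (mu * phi * L)
print(f"A = {A:.3f} +/- {sigma_A:.1e}  ->  saturation precision ~ {sigma_S:.1e}")
```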

  7. Influence of measurement errors on temperature-based death time determination.

    Science.gov (United States)

    Hubig, Michael; Muggenthaler, Holger; Mall, Gita

    2011-07-01

    Temperature-based methods represent essential tools in forensic death time determination. Empirical double exponential models have gained wide acceptance because they are highly flexible and simple to handle. The most established model commonly used in forensic practice was developed by Henssge. It contains three independent variables: the body mass, the environmental temperature, and the initial body core temperature. The present study investigates the influence of variations in the input data (environmental temperature, initial body core temperature, core temperature, time) on the standard deviation of the model-based estimates of the time since death. Two different approaches were used for calculating the standard deviation: the law of error propagation and the Monte Carlo method. Errors in environmental temperature measurements as well as deviations of the initial rectal temperature were identified as major sources of inaccuracies in model-based death time estimation.
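
    The Monte Carlo approach described here is simple to reproduce. The sketch below inverts the commonly cited Henssge parameterization (Q = 1.25e^{Bt} − 0.25e^{5Bt} with B = −1.2815 m^{−0.625} + 0.0284, for ambient temperatures up to about 23 °C) and propagates assumed measurement errors in the input temperatures; the error magnitudes are illustrative, not the study's values:

```python
import numpy as np
from scipy.optimize import brentq

def henssge_time(t_rect, t_amb, mass, t0=37.2):
    """Invert the Henssge double-exponential cooling model for t (hours).

    Commonly cited parameterization for ambient temperatures <= 23 C:
    Q = 1.25*exp(B*t) - 0.25*exp(5*B*t),  B = -1.2815*mass**-0.625 + 0.0284.
    """
    q = (t_rect - t_amb) / (t0 - t_amb)
    b = -1.2815 * mass ** -0.625 + 0.0284
    f = lambda t: 1.25 * np.exp(b * t) - 0.25 * np.exp(5 * b * t) - q
    return brentq(f, 0.01, 120.0)

# Monte Carlo propagation of assumed measurement errors (SDs are illustrative).
rng = np.random.default_rng(1)
n = 10_000
t_amb = rng.normal(18.0, 1.0, n)    # environmental temperature error
t_rect = rng.normal(30.0, 0.2, n)   # rectal temperature error
t_init = rng.normal(37.2, 0.5, n)   # initial core temperature error

times = np.array([henssge_time(tr, ta, mass=75.0, t0=ti)
                  for tr, ta, ti in zip(t_rect, t_amb, t_init)])
print(f"time since death = {times.mean():.1f} h +/- {times.std():.1f} h (SD)")
```

    Repeating the run with only one input perturbed at a time shows which measurement dominates the spread, mirroring the study's finding that ambient and initial temperature errors matter most.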

  8. Possible systematic errors in measurements of skin colour using the EEL reflectance spectrophotometer.

    Science.gov (United States)

    Beerens, E G

    1980-01-01

    Reflectance readings with the EEL Reflectance Spectrophotometer, used in many studies of human skin colour, depend on the spatial orientation of the applicator head of the instrument. Variations of over 15% of the calibration value have been observed. The present paper shows that this orientation dependence is due to the influence of gravity on the glowing spiral of the light bulb. The effect has an electrical and an optical component. It is concluded that the orientation effect will manifest as a systematic error in skin colour studies when calibration and measurement are performed at different orientations of the applicator head. The magnitude of this systematic error may be in the order of 10% and comparisons between different studies may be inaccurate.

  9. Integration of rain gauge measurement errors with the overall rainfall uncertainty estimation using kriging methods

    Science.gov (United States)

    Cecinati, Francesca; Moreno Ródenas, Antonio Manuel; Rico-Ramirez, Miguel Angel; ten Veldhuis, Marie-claire; Han, Dawei

    2016-04-01

    In many research studies rain gauges are used as a reference point measurement for rainfall, because they can reach very good accuracy, especially compared to radar or microwave links, and their use is very widespread. In some applications rain gauge uncertainty is assumed to be small enough to be neglected. This can be done when rain gauges are accurate and their data are correctly managed. Unfortunately, in many operational networks the importance of accurate rainfall data and of data quality control can be underestimated; budget and best-practice knowledge can be limiting factors in correct rain gauge network management. In these cases, the accuracy of rain gauges can drop drastically and the uncertainty associated with the measurements cannot be neglected. This work proposes an approach based on three different kriging methods to integrate rain gauge measurement errors in the overall rainfall uncertainty estimation. In particular, rainfall products of different complexity are derived through 1) block kriging on a single rain gauge, 2) ordinary kriging on a network of different rain gauges, and 3) kriging with external drift to integrate all the available rain gauges with radar rainfall information. The study area is the Eindhoven catchment, contributing to the river Dommel, in the southern part of the Netherlands. The area, 590 km2, is covered by high-quality rain gauge measurements by the Royal Netherlands Meteorological Institute (KNMI), which has one rain gauge inside the study area and six around it, and by lower-quality rain gauge measurements by the Dommel Water Board and by the Eindhoven Municipality (six rain gauges in total). The integration of the rain gauge measurement error is accomplished, in all cases, by increasing the nugget of the semivariogram proportionally to the estimated error. Using different semivariogram models for the different networks allows for the separate characterisation of higher- and lower-quality rain gauges. For the kriging with
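
    The key device in this approach, inflating the nugget (equivalently, the diagonal of the data covariance) by each gauge's error variance, can be sketched compactly. The following ordinary-kriging toy uses an assumed exponential covariance and hypothetical gauge locations, values and error variances, not the Eindhoven network:

```python
import numpy as np

def ordinary_kriging(xy, z, err_var, xy0, sill=1.0, corr_len=10.0):
    """Ordinary kriging with gauge-specific measurement-error variance.

    The error variance is added to the diagonal of the data covariance,
    i.e. the nugget is inflated gauge by gauge, as in the abstract.
    An exponential covariance C(h) = sill*exp(-h/corr_len) is assumed.
    """
    h = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=-1)
    K = sill * np.exp(-h / corr_len) + np.diag(err_var)
    k0 = sill * np.exp(-np.linalg.norm(xy - xy0, axis=1) / corr_len)

    n = len(z)
    A = np.zeros((n + 1, n + 1))
    A[:n, :n] = K
    A[:n, n] = A[n, :n] = 1.0           # unbiasedness constraint
    sol = np.linalg.solve(A, np.append(k0, 1.0))
    w, mu = sol[:n], sol[n]
    return w @ z, sill - w @ k0 - mu    # estimate and kriging variance

# Hypothetical network: two high-quality and two lower-quality gauges
# (rainfall in mm/h, coordinates in km, error variances assumed).
xy = np.array([[0.0, 0.0], [8.0, 2.0], [3.0, 7.0], [6.0, 6.0]])
z = np.array([2.1, 2.8, 2.4, 3.0])
err_var = np.array([0.01, 0.01, 0.25, 0.25])
est, var = ordinary_kriging(xy, z, err_var, xy0=np.array([4.0, 4.0]))
print(f"estimate = {est:.2f} mm/h, kriging variance = {var:.3f}")
```

    Increasing a gauge's error variance shifts kriging weight toward its more trustworthy neighbours and enlarges the predicted uncertainty, which is the intended behaviour.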

  10. Measurement of five-degrees-of-freedom error motions for a micro high-speed spindle using an optical technique

    Science.gov (United States)

    Murakami, Hiroshi

    2011-05-01

    We present an optical technique to measure five-degree-of-freedom error motions of a high-speed microspindle. The measurement system consists of a rod lens, a ball lens, four divided laser beams, and multiple divided photodiodes. When the spindle rotates with its concomitant rotation errors, the rod and ball lenses, which are mounted to the chuck of the spindle, are displaced, and this displacement is measured using an optical technique. In this study, the measuring system is manufactured for trial and is experimentally evaluated. The results clarify that the measuring system has a resolution of 5 nm and can be used to evaluate micro spindle rotation errors.

  11. Random errors for the measurement of central positions in white-light interferometry with the least-squares method.

    Science.gov (United States)

    Wang, Qi

    2015-08-01

    This paper analyzes the effect of random noise on the measurement of central positions of white-light correlograms with the least-squares method. Measurements of two types of central positions, the central position of the envelope (CPE) and the central position of the central fringe (CPCF), are investigated. Two types of random noise, intensity noise and position noise, are considered. Analytic expressions for random error due to intensity noise (REIN) and random error due to position noise (REPN) are derived. The theoretical results are compared with the random errors estimated from computer simulations. Random errors of CPE measurement are compared with those of CPCF measurement. Relationships are investigated between the random errors and the wavelength of the light source. The REPN of CPCF measurement has been found to be independent of the wavelength of the light source and the amplitude of the central fringe.
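
    As a concrete illustration of least-squares estimation of a central position from a noisy correlogram, the sketch below fits a Gaussian-envelope fringe model to synthetic data with additive intensity noise. The model form, the 600 nm source and the noise level are assumptions for illustration, not the paper's setup:

```python
import numpy as np
from scipy.optimize import curve_fit

def correlogram(z, z0, width, lam, amp):
    """White-light correlogram: Gaussian envelope times a carrier fringe."""
    return amp * np.exp(-((z - z0) / width) ** 2) * np.cos(4 * np.pi * (z - z0) / lam)

rng = np.random.default_rng(5)
z = np.linspace(-3.0, 3.0, 601)                       # scan positions, um
truth = dict(z0=0.35, width=1.1, lam=0.6, amp=1.0)    # 600 nm source assumed
signal = correlogram(z, **truth) + rng.normal(0.0, 0.02, z.size)  # intensity noise

# Start the fit near the strongest fringe to avoid the fringe-order ambiguity.
p0 = [z[np.argmax(np.abs(signal))], 1.0, 0.6, 1.0]
popt, pcov = curve_fit(correlogram, z, signal, p0=p0)
z0_hat, z0_err = popt[0], np.sqrt(pcov[0, 0])
print(f"central position = {z0_hat * 1e3:.1f} +/- {z0_err * 1e3:.2f} nm")
```

    Repeating the fit over many noise realizations, or over perturbed sample positions, gives Monte Carlo estimates comparable to the paper's REIN and REPN quantities.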

  12. Interpolation by Bivariate Polynomials Based on Multivariate F-truncated Powers

    Institute of Scientific and Technical Information of China (English)

    Yuan Xue-mei

    2014-01-01

    The solvability of the interpolation by bivariate polynomials based on multivariate F-truncated powers is considered in this short note. It unifies, in some sense, the point-wise Lagrange interpolation by bivariate polynomials and the interpolation by bivariate polynomials based on linear integrals over segments.

  13. Research on the influence and correction method of depth scanning error to the underwater acoustic image measurement

    Institute of Scientific and Technical Information of China (English)

    MEI Jidan; ZHAI Chunpin; WANG Yilin; HUI Junying

    2011-01-01

    Underwater acoustic image measurement is a passive localization method with high precision in the near field. To improve its precision, the influence of the depth scanning error was analyzed and the correcti

  14. Decreasing range resolution of a SAR image to permit correction of motion measurement errors beyond the SAR range resolution

    Science.gov (United States)

    Doerry, Armin W.; Heard, Freddie E.; Cordaro, J. Thomas

    2010-07-20

    Motion measurement errors that extend beyond the range resolution of a synthetic aperture radar (SAR) can be corrected by effectively decreasing the range resolution of the SAR in order to permit measurement of the error. Range profiles can be compared across the slow-time dimension of the input data in order to estimate the error. Once the error has been determined, appropriate frequency and phase correction can be applied to the uncompressed input data, after which range and azimuth compression can be performed to produce a desired SAR image.

  15. Minimization of errors in narrowband laser phase noise measurements based on reference measurement channels

    CERN Document Server

    Pnev, A B; Dvoretskiy, D A; Zhirnov, A A; Nesterov, E T; Sazonkin, S G; Chernutsky, A O; Shelestov, D A; Fedorov, A K; Svelto, C; Karasik, V E

    2016-01-01

    We propose a novel scheme for laser phase noise measurements with minimized sensitivity to external fluctuations, including interferometer vibration, temperature instability, other low-frequency noise, and relative intensity noise. In order to minimize the effect of these external fluctuations, we employ simultaneous measurement of two spectrally separated channels in the scheme. We present an algorithm for selecting the desired signal to extract the phase noise. Experimental results demonstrate the potential of the suggested scheme for a wide range of technological applications.

  16. Measurement of Transmission Error Including Backlash in Angle Transmission Mechanisms for Mechatronic Systems

    Science.gov (United States)

    Ming, Aiguo; Kajitani, Makoto; Kanamori, Chisato; Ishikawa, Jiro

    The characteristics of angle transmission mechanisms exert a great influence on servo performance in robotic and mechatronic systems. In particular, a small backlash in the angle transmission mechanism is preferable. Recently, some new types of backlash-free gear reducers have been developed for robots. However, methods for measuring and evaluating the backlash of gear trains have been limited to older methods that can statically measure at only a few meshing points of the gears. This paper proposes an overall performance testing method for angle transmission mechanisms in mechatronic systems. The method can measure the angle transmission error both clockwise and counterclockwise. In addition, the backlash can be measured continuously and automatically at all meshing positions. The system has been applied to the testing process in the production line of gear reducers for robots, and it has been effective in reducing the backlash of the gear trains.

  17. Measured and predicted root-mean-square errors in square and triangular antenna mesh facets

    Science.gov (United States)

    Fichter, W. B.

    1989-01-01

    Deflection shapes of square and equilateral triangular facets of two tricot-knit, gold plated molybdenum wire mesh antenna materials were measured and compared, on the basis of root mean square (rms) differences, with deflection shapes predicted by linear membrane theory, for several cases of biaxial mesh tension. The two mesh materials contained approximately 10 and 16 holes per linear inch, measured diagonally with respect to the course and wale directions. The deflection measurement system employed a non-contact eddy current proximity probe and an electromagnetic distance sensing probe in conjunction with a precision optical level. Despite experimental uncertainties, rms differences between measured and predicted deflection shapes suggest the following conclusions: that replacing flat antenna facets with facets conforming to parabolically curved structural members yields smaller rms surface error; that potential accuracy gains are greater for equilateral triangular facets than for square facets; and that linear membrane theory can be a useful tool in the design of tricot knit wire mesh antennas.

  18. Partial continuation model and its application in mitigating systematic errors of double-differenced GPS measurements

    Institute of Scientific and Technical Information of China (English)

    GUO Jianfeng; OU Jikun; REN Chao

    2005-01-01

    Based on the so-called partial continuation model with exact finite measurements, a new stochastic assessment procedure is introduced. For every satellite pair, the temporal correlation coefficient is estimated using the original double-differenced (DD) GPS measurements. The Durbin-Watson test is then applied to test a specific hypothesis on the temporal correlation coefficient. If the test is significant at the chosen level, a data transformation is required; the transformed measurements are free of time correlations. For purposes of illustration, two static GPS baseline data sets are analyzed in detail. The experimental results demonstrate that the proposed procedure can effectively mitigate the impact of systematic errors on DD GPS measurements.
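
    The Durbin-Watson statistic and an AR(1)-style decorrelating transformation are both one-liners. A minimal sketch on a synthetic residual series follows; the transform shown is the textbook prewhitening step, offered as an illustration of the idea rather than the authors' exact transformation:

```python
import numpy as np

def durbin_watson(e):
    """DW statistic of a residual series; values near 2 mean no lag-1 correlation."""
    return np.sum(np.diff(e) ** 2) / np.sum(e ** 2)

def prewhiten(y, rho):
    """AR(1) decorrelating transform y_t - rho*y_(t-1), first value rescaled."""
    out = np.empty_like(y, dtype=float)
    out[0] = y[0] * np.sqrt(1.0 - rho ** 2)
    out[1:] = y[1:] - rho * y[:-1]
    return out

# Hypothetical DD residual series with strong temporal correlation.
rng = np.random.default_rng(2)
n, rho_true = 600, 0.8
dd = np.empty(n)
dd[0] = rng.normal()
for t in range(1, n):
    dd[t] = rho_true * dd[t - 1] + rng.normal()

rho_hat = np.corrcoef(dd[:-1], dd[1:])[0, 1]   # lag-1 correlation estimate
print(f"DW before transform = {durbin_watson(dd):.2f}")                      # << 2
print(f"DW after transform  = {durbin_watson(prewhiten(dd, rho_hat)):.2f}")  # ~ 2
```

    Since DW ≈ 2(1 − ρ), the first value falls well below 2 for positively correlated residuals and returns to about 2 after the transform.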

  19. Low-error and broadband microwave frequency measurement in a silicon chip

    CERN Document Server

    Pagani, Mattia; Zhang, Yanbing; Casas-Bedoya, Alvaro; Aalto, Timo; Harjanne, Mikko; Kapulainen, Markku; Eggleton, Benjamin J; Marpaung, David

    2015-01-01

    Instantaneous frequency measurement (IFM) of microwave signals is a fundamental functionality for applications ranging from electronic warfare to biomedical technology. Photonic techniques, and nonlinear optical interactions in particular, have the potential to broaden the frequency measurement range beyond the limits of electronic IFM systems. The key lies in efficiently harnessing optical mixing in an integrated nonlinear platform, with low losses. In this work, we exploit the low loss of a 35 cm long, thick silicon waveguide, to efficiently harness Kerr nonlinearity, and demonstrate the first on-chip four-wave mixing (FWM) based IFM system. We achieve a large 40 GHz measurement bandwidth and record-low measurement error. Finally, we discuss the future prospect of integrating the whole IFM system on a silicon chip to enable the first reconfigurable, broadband IFM receiver with low-latency.

  20. A Study on Measurement Error during Alternating Current Induced Voltage Tests on Large Transformers

    Institute of Scientific and Technical Information of China (English)

    WANG Xuan; LI Yun-ge; CAO Xiao-long; LIU Ying

    2006-01-01

    The large transformer is pivotal equipment in an electric power supply system. Partial discharge tests and induced voltage withstand tests on large transformers are carried out at a frequency about twice the working frequency. If the magnetizing inductance cannot compensate for the stray capacitance, the test sample becomes a capacitive load and a capacitive voltage rise appears in the testing circuit. For self-restoring insulation, IEC 60-1 recommends that an unapproved measuring system be calibrated against an approved system at a voltage not less than 50% of the rated testing voltage, with the result then extrapolated linearly. It has been found that this method leads to large errors due to the capacitive rise if it is not used correctly during a withstand voltage test under certain testing conditions, especially for tests on high-voltage transformers with large capacity. Since the withstand voltage test is the most important means of examining the operational reliability of a transformer, and it can be destructive to the insulation, precise measurement must be guaranteed. In this paper a factor, named the capacitive rise factor, is introduced to assess the rise. The voltage measurement error during calibration is determined by the parameters of the test sample and the testing facilities, as well as by the measuring point. Based on the theoretical analysis in this paper, a novel method is suggested and demonstrated for estimating the error by using the capacitive rise factor and other known parameters of the testing circuit.

  1. Noise and measurement errors in a practical two-state quantum bit commitment protocol

    Science.gov (United States)

    Loura, Ricardo; Almeida, Álvaro J.; André, Paulo S.; Pinto, Armando N.; Mateus, Paulo; Paunković, Nikola

    2014-05-01

    We present a two-state practical quantum bit commitment protocol, the security of which is based on the current technological limitations, namely the nonexistence of either stable long-term quantum memories or nondemolition measurements. For an optical realization of the protocol, we model the errors, which occur due to the noise and equipment (source, fibers, and detectors) imperfections, accumulated during emission, transmission, and measurement of photons. The optical part is modeled as a combination of a depolarizing channel (white noise), unitary evolution (e.g., systematic rotation of the polarization axis of photons), and two other basis-dependent channels, namely the phase- and bit-flip channels. We analyze quantitatively the effects of noise using two common information-theoretic measures of probability distribution distinguishability: the fidelity and the relative entropy. In particular, we discuss the optimal cheating strategy and show that it is always advantageous for a cheating agent to add some amount of white noise—the particular effect not being present in standard quantum security protocols. We also analyze the protocol's security when the use of (im)perfect nondemolition measurements and noisy or bounded quantum memories is allowed. Finally, we discuss errors occurring due to a finite detector efficiency, dark counts, and imperfect single-photon sources, and we show that the effects are the same as those of standard quantum cryptography.

  2. Compensation of errors due to incident beam drift in a 3 DOF measurement system for linear guide motion.

    Science.gov (United States)

    Hu, Pengcheng; Mao, Shuai; Tan, Jiu-Bin

    2015-11-01

    A measurement system with three degrees of freedom (3 DOF) that compensates for errors caused by incident beam drift is proposed. The system's measurement model (i.e. its mathematical foundation) is analyzed, and a measurement module (i.e. the designed orientation measurement unit) is developed and adopted to measure simultaneously straightness errors and the incident beam direction; thus, the errors due to incident beam drift can be compensated. The experimental results show that the proposed system has a deviation of 1 μm in the range of 200 mm for distance measurements, and a deviation of 1.3 μm in the range of 2 mm for straightness error measurements.

  3. Development of a simple system for simultaneously measuring 6DOF geometric motion errors of a linear guide.

    Science.gov (United States)

    Qibo, Feng; Bin, Zhang; Cunxing, Cui; Cuifang, Kuang; Yusheng, Zhai; Fenglin, You

    2013-11-01

    A simple method for simultaneously measuring the 6DOF geometric motion errors of a linear guide was proposed. The mechanisms for measuring straightness and angular errors and for enhancing their resolution are described in detail. A common-path method for measuring laser beam drift was proposed and used to compensate for the errors produced by the beam drift in the 6DOF geometric error measurements. A compact 6DOF system was built. Calibration experiments against standard measurement instruments showed that our system has a standard deviation of 0.5 µm in a range of ±100 µm for the straightness measurements, and standard deviations of 0.5", 0.5", and 1.0" in a range of ±100" for pitch, yaw, and roll measurements, respectively.

  4. Reliability, technical error of measurements and validity of length and weight measurements for children under two years old in Malaysia.

    Science.gov (United States)

    Jamaiyah, H; Geeta, A; Safiza, M N; Khor, G L; Wong, N F; Kee, C C; Rahmah, R; Ahmad, A Z; Suzana, S; Chen, W S; Rajaah, M; Adam, B

    2010-06-01

    The National Health and Morbidity Survey III 2006 planned to perform anthropometric measurements (length and weight) for children in its survey. However, there is limited literature on the reliability, technical error of measurement (TEM) and validity of these two measurements. This study assessed the above properties of length (LT) and weight (WT) measurements in 130 children aged below two years, from the Hospital Universiti Kebangsaan Malaysia (HUKM) paediatric outpatient clinics, during the period of December 2005 to January 2006. Two trained nurses measured WT using a Tanita digital infant scale model 1583, Japan (0.01 kg) and a Seca beam scale, Germany (0.01 kg), and LT using a Seca measuring mat, Germany (0.1 cm) and a Sensormedics stadiometer model 2130 (0.1 cm). Findings showed high inter- and intra-examiner reliability using 'change in the mean' and 'intraclass correlation' (ICC) for WT and LT. However, LT was found to be less reliable using the Bland and Altman plot. This was also true using relative TEMs, where the TEM value of LT was slightly above the acceptable limit. The test instruments were highly valid for WT using 'change in the mean' and 'ICC', but were less valid for LT measurement. In spite of this, we concluded that WT and LT measurements in children below two years old using the test instruments were reliable and valid for a community survey such as NHMS III, within the limits of their error. We recommend that LT measurements be given special attention to improve their reliability and validity.
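
    For reference, the TEM for paired repeat measurements is usually computed as TEM = sqrt(Σdᵢ²/2n), with the relative TEM expressed as a percentage of the grand mean. A short sketch with hypothetical duplicate readings, not the HUKM data:

```python
import numpy as np

def tem(x1, x2):
    """Technical error of measurement for paired repeat measurements."""
    d = np.asarray(x1, float) - np.asarray(x2, float)
    return np.sqrt(np.sum(d ** 2) / (2 * len(d)))

def relative_tem(x1, x2):
    """TEM as a percentage of the grand mean (%TEM)."""
    grand_mean = (np.mean(x1) + np.mean(x2)) / 2.0
    return 100.0 * tem(x1, x2) / grand_mean

# Hypothetical duplicate length measurements (cm) by two nurses.
nurse1 = [62.3, 58.1, 70.4, 65.0, 60.2]
nurse2 = [62.9, 57.8, 71.1, 64.6, 60.7]
print(f"TEM = {tem(nurse1, nurse2):.2f} cm, "
      f"%TEM = {relative_tem(nurse1, nurse2):.2f}%")
```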

  5. Dynamic error research and application of an angular measuring system belonging to a high precision excursion test turntable

    Institute of Scientific and Technical Information of China (English)

    DENG Hui-yu; WANG Xin-li; MA Pei-sun

    2006-01-01

    The angular measuring system is the most important component of a servo turntable in an inertial test apparatus: its function and precision determine those of the turntable, which makes it an important subject of research on inertial test equipment. This paper introduces the principle of an angular measuring system based on the amplitude-discrimination mode. The dynamic errors are analyzed with respect to the inductosyn, the amplitude and function errors of the double-phase voltage, and waveform distortion. Detailed calculation provides a theoretical basis for practical application; the system errors are allocated, and the angular measuring system meets the accuracy requirement. As a result, the scheme of the angular measuring system can be used in practice.

  6. The Thirty Gigahertz Instrument Receiver for the QUIJOTE Experiment: Preliminary Polarization Measurements and Systematic-Error Analysis.

    Science.gov (United States)

    Casas, Francisco J; Ortiz, David; Villa, Enrique; Cano, Juan L; Cagigas, Jaime; Pérez, Ana R; Aja, Beatriz; Terán, J Vicente; de la Fuente, Luisa; Artal, Eduardo; Hoyland, Roger; Génova-Santos, Ricardo

    2015-08-05

    This paper presents preliminary polarization measurements and systematic-error characterization of the Thirty Gigahertz Instrument receiver developed for the QUIJOTE experiment. The instrument has been designed to measure the polarization of Cosmic Microwave Background radiation from the sky, obtaining the Q, U, and I Stokes parameters of the incoming signal simultaneously. Two kinds of linearly polarized input signals have been used as excitations in the polarimeter measurement tests in the laboratory; these show consistent results in terms of the Stokes parameters obtained. A measurement-based systematic-error characterization technique has been used in order to determine the possible sources of instrumental errors and to assist in the polarimeter calibration process.

  7. Thermocouple error correction for measuring the flame temperature with determination of emissivity and heat transfer coefficient

    Science.gov (United States)

    Hindasageri, V.; Vedula, R. P.; Prabhu, S. V.

    2013-02-01

    Temperature measurement by thermocouples is prone to errors due to conduction and radiation losses and therefore has to be corrected for precise measurement. The temperature dependent emissivity of the thermocouple wires is measured by the use of thermal infrared camera. The measured emissivities are found to be 20%-40% lower than the theoretical values predicted from theory of electromagnetism. A transient technique is employed for finding the heat transfer coefficients for the lead wire and the bead of the thermocouple. This method does not require the data of thermal properties and velocity of the burnt gases. The heat transfer coefficients obtained from the present method have an average deviation of 20% from the available heat transfer correlations in literature for non-reacting convective flow over cylinders and spheres. The parametric study of thermocouple error using the numerical code confirmed the existence of a minimum wire length beyond which the conduction loss is a constant minimal. Temperature of premixed methane-air flames stabilised on 16 mm diameter tube burner is measured by three B-type thermocouples of wire diameters: 0.15 mm, 0.30 mm, and 0.60 mm. The measurements are made at three distances from the burner tip (thermocouple tip to burner tip/burner diameter = 2, 4, and 6) at an equivalence ratio of 1 for the tube Reynolds number varying from 1000 to 2200. These measured flame temperatures are corrected by the present numerical procedure, the multi-element method, and the extrapolation method. The flame temperatures estimated by the two-element method and extrapolation method deviate from numerical results within 2.5% and 4%, respectively.

  9. Measurement errors related to contact angle analysis of hydrogel and silicone hydrogel contact lenses.

    Science.gov (United States)

    Read, Michael L; Morgan, Philip B; Maldonado-Codina, Carole

    2009-11-01

    This work sought to undertake a comprehensive investigation of the measurement errors associated with contact angle assessment of curved hydrogel contact lens surfaces. The contact angle coefficient of repeatability (COR) associated with three measurement conditions (image analysis COR, intralens COR, and interlens COR) was determined by measuring the contact angles (using both sessile drop and captive bubble methods) for three silicone hydrogel lenses (senofilcon A, balafilcon A, lotrafilcon A) and one conventional hydrogel lens (etafilcon A). Image analysis COR values were about 2°, whereas intralens COR values (95% confidence intervals) ranged from 4.0° (3.3°, 4.7°) (lotrafilcon A, captive bubble) to 10.2° (8.4°, 12.1°) (senofilcon A, sessile drop). Interlens COR values ranged from 4.5° (3.7°, 5.2°) (lotrafilcon A, captive bubble) to 16.5° (13.6°, 19.4°) (senofilcon A, sessile drop). Measurement error associated with image analysis was shown to be small as an absolute measure, although proportionally more significant for lenses with low contact angle. Sessile drop contact angles were typically less repeatable than captive bubble contact angles. For sessile drop measures, repeatability was poorer with the silicone hydrogel lenses when compared with the conventional hydrogel lens; this phenomenon was not observed for the captive bubble method, suggesting that methodological factors related to the sessile drop technique (such as surface dehydration and blotting) may play a role in the increased variability of contact angle measurements observed with silicone hydrogel contact lenses.
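
    A coefficient of repeatability of the Bland-Altman type can be computed from paired repeat measurements as 1.96 times the standard deviation of the differences. The sketch below uses hypothetical contact angles, not the study's data:

```python
import numpy as np

def coefficient_of_repeatability(x1, x2):
    """Bland-Altman style repeatability coefficient from paired repeats.

    COR = 1.96 * SD(differences): 95% of repeated results on the same
    specimen are expected to differ by less than this value.
    """
    d = np.asarray(x1, float) - np.asarray(x2, float)
    return 1.96 * np.std(d, ddof=1)

# Hypothetical repeated sessile-drop contact angles (degrees) on one lens.
first = [101.2, 98.7, 104.5, 99.9, 102.8, 97.5]
second = [95.4, 103.1, 99.0, 106.2, 98.3, 101.7]
print(f"intralens COR = {coefficient_of_repeatability(first, second):.1f} degrees")
```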

  10. Overview of Measuring Effect Sizes: The Effect of Measurement Error. Brief 2

    Science.gov (United States)

    Boyd, Don; Grossman, Pam; Lankford, Hamp; Loeb, Susanna; Wyckoff, Jim

    2008-01-01

    The use of value-added models in education research has expanded rapidly. These models allow researchers to explore how a wide variety of policies and measured school inputs affect the academic performance of students. An important question is whether such effects are sufficiently large to achieve various policy goals. Judging whether a change in…

  11. Measuring Effect Sizes: The Effect of Measurement Error. Working Paper 19

    Science.gov (United States)

    Boyd, Donald; Grossman, Pamela; Lankford, Hamilton; Loeb, Susanna; Wyckoff, James

    2008-01-01

    Value-added models in education research allow researchers to explore how a wide variety of policies and measured school inputs affect the academic performance of students. Researchers typically quantify the impacts of such interventions in terms of "effect sizes", i.e., the estimated effect of a one standard deviation change in the variable…

  12. Simultaneous estimation of parameters in the bivariate Emax model.

    Science.gov (United States)

    Magnusdottir, Bergrun T; Nyquist, Hans

    2015-12-10

    In this paper, we explore inference in multi-response, nonlinear models. By multi-response, we mean models with m > 1 response variables and accordingly m relations. Each parameter/explanatory variable may appear in one or more of the relations. We study a system estimation approach for simultaneous computation and inference of the model and (co)variance parameters. For illustration, we fit a bivariate Emax model to diabetes dose-response data. Further, the bivariate Emax model is used in a simulation study that compares the system estimation approach to equation-by-equation estimation. We conclude that overall, the system estimation approach performs better for the bivariate Emax model when there are dependencies among relations. The stronger the dependencies, the more we gain in precision by using system estimation rather than equation-by-equation estimation.
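
    System (simultaneous) estimation for a bivariate Emax model amounts to maximizing one joint likelihood over both relations and the residual covariance, rather than fitting each equation separately. A minimal sketch with simulated data; the parameter values, error covariance and the tanh/log reparameterizations are illustrative choices, not the authors' setup:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import multivariate_normal

def emax(dose, e0, em, ed50):
    """Standard Emax dose-response curve."""
    return e0 + em * dose / (ed50 + dose)

def joint_nll(theta, dose, y):
    """Negative log-likelihood for two Emax relations with correlated errors."""
    e01, em1, ed1, e02, em2, ed2, ls1, ls2, atr = theta
    if ed1 <= 0 or ed2 <= 0:
        return np.inf                      # keep ED50s in a valid region
    s1, s2 = np.exp(ls1), np.exp(ls2)      # residual SDs, kept positive
    rho = np.tanh(atr)                     # residual correlation in (-1, 1)
    mu = np.column_stack([emax(dose, e01, em1, ed1),
                          emax(dose, e02, em2, ed2)])
    cov = np.array([[s1**2, rho * s1 * s2], [rho * s1 * s2, s2**2]])
    return -multivariate_normal([0.0, 0.0], cov).logpdf(y - mu).sum()

# Simulated bivariate dose-response data with correlated residuals.
rng = np.random.default_rng(3)
dose = np.repeat([0.0, 5.0, 10.0, 25.0, 50.0, 100.0], 8)
mu = np.column_stack([emax(dose, 1.0, 8.0, 20.0), emax(dose, 0.5, 5.0, 30.0)])
y = mu + rng.multivariate_normal([0, 0], [[0.25, 0.18], [0.18, 0.25]], dose.size)

theta0 = [0.0, 5.0, 10.0, 0.0, 5.0, 10.0, 0.0, 0.0, 0.0]
fit = minimize(joint_nll, theta0, args=(dose, y), method="Nelder-Mead",
               options={"maxiter": 20000, "fatol": 1e-8, "xatol": 1e-8})
print(f"ED50 estimates: {fit.x[2]:.1f}, {fit.x[5]:.1f}; "
      f"residual correlation: {np.tanh(fit.x[8]):.2f}")
```

    When the residual correlation is strong, the joint fit borrows strength across the two relations, which is the precision gain the simulation study in the abstract reports over equation-by-equation estimation.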

  13. First measurements of error fields on W7-X using flux surface mapping

    Science.gov (United States)

    Lazerson, Samuel A.; Otte, Matthias; Bozhenkov, Sergey; Biedermann, Christoph; Pedersen, Thomas Sunn; the W7-X Team

    2016-10-01

    Error fields have been detected and quantified using the flux surface mapping diagnostic system on Wendelstein 7-X (W7-X). A low-field ῑ = 1/2 magnetic configuration (ῑ = ι/2π), sensitive to error fields, was developed in order to detect their presence using the flux surface mapping diagnostic. In this configuration, a vacuum flux surface with rotational transform of n/m = 1/2 is created at the mid-radius of the vacuum flux surfaces. If no error fields are present, a vanishingly small n/m = 5/10 island chain should be present. Modeling indicates that if an n = 1 perturbing field is applied by the trim coils, a large n/m = 1/2 island chain will be opened. This island chain is used to create a perturbation large enough to be imaged by the diagnostic. Phase and amplitude scans of the applied field allow the measurement of a small (~0.04 m) intrinsic island chain with a 130° phase relative to the first module of the W7-X experiment. These error fields are determined to be small and easily correctable by the trim coil system.

  14. Research on Proximity Magnetic Field Influence in Measuring Error of Active Electronic Current Transformers

    Directory of Open Access Journals (Sweden)

    Wu Weijiang

    2016-01-01

    Full Text Available The principles of the active electronic current transformer (ECT) are introduced, and the mechanism by which a proximity magnetic field can influence the measurement error is analyzed from the perspective of the sensor section of the ECT. The impact on active ECTs of a three-phase proximity magnetic field, at both fixed and varying distance, is simulated and analyzed. The theory and simulation analysis indicate that active ECTs are sensitive to a proximity magnetic field under certain conditions. Based on the simulation analysis, suggestions are made to manufacturers on product structural design and to power supply administrations on the placement of transformers at substation sites.

  15. Estimating the acute health effects of coarse particulate matter accounting for exposure measurement error.

    Science.gov (United States)

    Chang, Howard H; Peng, Roger D; Dominici, Francesca

    2011-10-01

    In air pollution epidemiology, there is a growing interest in estimating the health effects of coarse particulate matter (PM) with aerodynamic diameter between 2.5 and 10 μm. Coarse PM concentrations can exhibit considerable spatial heterogeneity because the particles travel shorter distances and do not remain suspended in the atmosphere for an extended period of time. In this paper, we develop a modeling approach for estimating the short-term effects of air pollution in time series analysis when the ambient concentrations vary spatially within the study region. Specifically, our approach quantifies the error in the exposure variable by characterizing, on any given day, the disagreement in ambient concentrations measured across monitoring stations. This is accomplished by viewing monitor-level measurements as error-prone repeated measurements of the unobserved population average exposure. Inference is carried out in a Bayesian framework to fully account for uncertainty in the estimation of model parameters. Finally, by using different exposure indicators, we investigate the sensitivity of the association between coarse PM and daily hospital admissions based on a recent national multisite time series analysis. Among Medicare enrollees from 59 US counties between the period 1999 and 2005, we find a consistent positive association between coarse PM and same-day admission for cardiovascular diseases.

  16. Accounting for baseline differences and measurement error in the analysis of change over time.

    Science.gov (United States)

    Braun, Julia; Held, Leonhard; Ledergerber, Bruno

    2014-01-15

    If change over time is compared in several groups, it is important to take into account baseline values so that the comparison is carried out under the same preconditions. As the observed baseline measurements are distorted by measurement error, it may not be sufficient to include them as covariate. By fitting a longitudinal mixed-effects model to all data including the baseline observations and subsequently calculating the expected change conditional on the underlying baseline value, a solution to this problem has been provided recently so that groups with the same baseline characteristics can be compared. In this article, we present an extended approach where a broader set of models can be used. Specifically, it is possible to include any desired set of interactions between the time variable and the other covariates, and also, time-dependent covariates can be included. Additionally, we extend the method to adjust for baseline measurement error of other time-varying covariates. We apply the methodology to data from the Swiss HIV Cohort Study to address the question if a joint infection with HIV-1 and hepatitis C virus leads to a slower increase of CD4 lymphocyte counts over time after the start of antiretroviral therapy.

  17. Emission Flux Measurement Error with a Mobile DOAS System and Application to NOx Flux Observations.

    Science.gov (United States)

    Wu, Fengcheng; Li, Ang; Xie, Pinhua; Chen, Hao; Hu, Zhaokun; Zhang, Qiong; Liu, Jianguo; Liu, Wenqing

    2017-01-25

    Mobile differential optical absorption spectroscopy (mobile DOAS) is an optical remote sensing method that can rapidly measure trace gas emission flux from air pollution sources (such as power plants, industrial areas, and cities) in real time. Generally, mobile DOAS is influenced by wind, drive velocity, and other factors, especially the wind field used when the emission flux is derived from mobile DOAS observations. This paper presents a detailed error analysis and a NOx emission measurement with a mobile DOAS system at a power plant in Shijiazhuang city, China. Comparison of the SO₂ emission flux from mobile DOAS observations with a continuous emission monitoring system (CEMS) under different drive speeds and wind fields revealed that the optimal drive velocity is 30-40 km/h, and that the wind field at plume height should be selected when mobile DOAS observations are performed. In addition, the total errors of the SO₂ and NO₂ emissions measured with mobile DOAS are 32% and 30%, respectively, combining the uncertainties of column density, wind field, and drive velocity. Furthermore, a NOx emission of 0.15 ± 0.06 kg/s from the power plant is estimated, which is in good agreement with the CEMS observation of 0.17 ± 0.07 kg/s. This study contributes significantly to mobile DOAS measurements of emissions from air pollution sources, improving estimation accuracy.
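
    The flux computation and the quoted error combination are simple to sketch. Below, a transect flux is formed from column densities times segment lengths, scaled by the wind component normal to the drive direction, and independent relative errors are combined in quadrature. All numbers are hypothetical, and the simplified mass-balance form is an assumption rather than the authors' exact formula:

```python
import numpy as np

def transect_flux(vcd, seg_len, wind_speed, angle_deg):
    """Emission flux from one plume transect (simplified mass-balance form).

    flux = wind_speed * sin(angle) * sum(VCD_i * segment_length_i), with the
    angle taken between wind and driving directions. With VCD in g/m^2,
    lengths in m and wind in m/s, the flux comes out in g/s.
    """
    return wind_speed * np.sin(np.radians(angle_deg)) * np.sum(vcd * seg_len)

def total_relative_error(rel_column, rel_wind, rel_speed):
    """First-cut combination of independent relative errors in quadrature."""
    return np.sqrt(rel_column**2 + rel_wind**2 + rel_speed**2)

# Hypothetical transect: 40 spectra recorded 25 m apart, NO2 columns in g/m^2.
rng = np.random.default_rng(4)
vcd = np.abs(rng.normal(2e-4, 8e-5, 40))
flux = transect_flux(vcd, seg_len=25.0, wind_speed=4.0, angle_deg=75.0)
err = total_relative_error(0.15, 0.20, 0.10)   # assumed component errors
print(f"flux = {flux:.2f} g/s +/- {100 * err:.0f}%")
```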

  18. Emission Flux Measurement Error with a Mobile DOAS System and Application to NOx Flux Observations

    Directory of Open Access Journals (Sweden)

    Fengcheng Wu

    2017-01-01

    Full Text Available Mobile differential optical absorption spectroscopy (mobile DOAS) is an optical remote sensing method that can rapidly measure trace gas emission flux from air pollution sources (such as power plants, industrial areas, and cities) in real time. Generally, mobile DOAS is influenced by wind, drive velocity, and other factors, especially the wind field used when the emission flux is derived from mobile DOAS observations. This paper presents a detailed error analysis and a NOx emission measurement with a mobile DOAS system at a power plant in Shijiazhuang city, China. Comparison of the SO2 emission flux from mobile DOAS observations with a continuous emission monitoring system (CEMS) under different drive speeds and wind fields revealed that the optimal drive velocity is 30–40 km/h, and that the wind field at plume height should be selected when mobile DOAS observations are performed. In addition, the total errors of the SO2 and NO2 emissions measured with mobile DOAS are 32% and 30%, respectively, combining the uncertainties of column density, wind field, and drive velocity. Furthermore, a NOx emission of 0.15 ± 0.06 kg/s from the power plant is estimated, which is in good agreement with the CEMS observation of 0.17 ± 0.07 kg/s. This study contributes significantly to mobile DOAS measurements of emissions from air pollution sources, improving estimation accuracy.

  19. Low-Frequency Error Extraction and Compensation for Attitude Measurements from STECE Star Tracker.

    Science.gov (United States)

    Lai, Yuwang; Gu, Defeng; Liu, Junhong; Li, Wenping; Yi, Dongyun

    2016-10-12

    The low-frequency errors (LFE) of star trackers are the most penalizing errors for high-accuracy satellite attitude determination. Two test star trackers have been mounted on the Space Technology Experiment and Climate Exploration (STECE) satellite, a small satellite mission developed by China. To extract and compensate the LFE of the attitude measurements for the two test star trackers, a new approach, called Fourier analysis combined with the Vondrak filter method (FAVF), is proposed in this paper. Firstly, the LFE of the two test star trackers' attitude measurements are analyzed and extracted by the FAVF method. Remarkable orbital reproducibility features are found in both test star trackers' attitude measurements. Then, by using the reproducibility feature of the LFE, the two star trackers' LFE patterns are estimated effectively. Finally, based on the actual LFE pattern results, this paper presents a new LFE compensation strategy. The validity and effectiveness of the proposed LFE compensation algorithm are demonstrated by the significant improvement in the consistency between the two test star trackers. The root mean square (RMS) of the relative Euler angle residuals is reduced from [27.95'', 25.14'', 82.43''], 3σ to [16.12'', 15.89'', 53.27''], 3σ.

  20. Semiparametric Bayesian Analysis of Nutritional Epidemiology Data in the Presence of Measurement Error

    KAUST Repository

    Sinha, Samiran

    2009-08-10

    We propose a semiparametric Bayesian method for handling measurement error in nutritional epidemiological data. Our goal is to estimate nonparametrically the form of association between a disease and exposure variable while the true values of the exposure are never observed. Motivated by nutritional epidemiological data, we consider the setting where a surrogate covariate is recorded in the primary data, and a calibration data set contains information on the surrogate variable and repeated measurements of an unbiased instrumental variable of the true exposure. We develop a flexible Bayesian method where not only is the relationship between the disease and exposure variable treated semiparametrically, but also the relationship between the surrogate and the true exposure is modeled semiparametrically. The two nonparametric functions are modeled simultaneously via B-splines. In addition, we model the distribution of the exposure variable as a Dirichlet process mixture of normal distributions, thus making its modeling essentially nonparametric and placing this work into the context of functional measurement error modeling. We apply our method to the NIH-AARP Diet and Health Study and examine its performance in a simulation study.

  1. Emission Flux Measurement Error with a Mobile DOAS System and Application to NOx Flux Observations

    Science.gov (United States)

    Wu, Fengcheng; Li, Ang; Xie, Pinhua; Chen, Hao; Hu, Zhaokun; Zhang, Qiong; Liu, Jianguo; Liu, Wenqing

    2017-01-01

    Mobile differential optical absorption spectroscopy (mobile DOAS) is an optical remote sensing method that can rapidly measure trace gas emission flux from air pollution sources (such as power plants, industrial areas, and cities) in real time. Generally, mobile DOAS is influenced by wind, drive velocity, and other factors, especially the wind field used when the emission flux is derived from mobile DOAS observations. This paper presents a detailed error analysis and a NOx emission measurement with a mobile DOAS system at a power plant in Shijiazhuang city, China. Comparison of the SO2 emission flux from mobile DOAS observations with a continuous emission monitoring system (CEMS) under different drive speeds and wind fields revealed that the optimal drive velocity is 30–40 km/h, and that the wind field at plume height should be selected when mobile DOAS observations are performed. In addition, the total errors of the SO2 and NO2 emissions measured with mobile DOAS are 32% and 30%, respectively, combining the uncertainties of column density, wind field, and drive velocity. Furthermore, a NOx emission of 0.15 ± 0.06 kg/s from the power plant is estimated, which is in good agreement with the CEMS observation of 0.17 ± 0.07 kg/s. This study contributes significantly to mobile DOAS measurements of emissions from air pollution sources, improving estimation accuracy. PMID:28125054

  2. Quantum Steering Inequality with Tolerance for Measurement-Setting Errors: Experimentally Feasible Signature of Unbounded Violation

    Science.gov (United States)

    Rutkowski, Adam; Buraczewski, Adam; Horodecki, Paweł; Stobińska, Magdalena

    2017-01-01

    Quantum steering is a relatively simple test for proving that the values of quantum-mechanical measurement outcomes come into being only in the act of measurement. By exploiting quantum correlations, Alice can influence—steer—Bob's physical system in a way that is impossible in classical mechanics, as shown by the violation of steering inequalities. Demonstrating this and similar quantum effects for systems of increasing size, approaching even the classical limit, is a long-standing challenging problem. Here, we prove an experimentally feasible unbounded violation of a steering inequality. We derive its universal form where tolerance for measurement-setting errors is explicitly built in by means of the Deutsch-Maassen-Uffink entropic uncertainty relation. Then, generalizing the mutual unbiasedness, we apply the inequality to the multisinglet and multiparticle bipartite Bell state. However, the method is general and opens the possibility of employing multiparticle bipartite steering for randomness certification and development of quantum technologies, e.g., random access codes.

  3. Constraining uncertainty in the prediction of pollutant transport in rivers allowing for measurement error.

    Science.gov (United States)

    Smith, P.; Beven, K.; Blazkova, S.; Merta, L.

    2003-04-01

    This poster outlines a methodology for the estimation of parameters in an Aggregated Dead Zone (ADZ) model of pollutant transport, by use of an example reach of the River Elbe. Both tracer and continuous water quality measurements are analysed to investigate the relationship between discharge and advective time delay. This includes a study of the effects of different error distributions being applied to the measurement of both variables using Monte-Carlo Markov Chain (MCMC) techniques. The derived relationships between discharge and advective time delay can then be incorporated into the formulation of the ADZ model to allow prediction of pollutant transport given uncertainty in the parameter values. The calibration is demonstrated in a hierarchical framework, giving the potential for the selection of appropriate model structures for the change in transport characteristics with discharge in the river. The value of different types and numbers of measurements are assessed within this framework.

  4. Estimating the Persistence and the Autocorrelation Function of a Time Series that is Measured with Error

    DEFF Research Database (Denmark)

    Hansen, Peter Reinhard; Lunde, Asger

    An economic time series can often be viewed as a noisy proxy for an underlying economic variable. Measurement errors will influence the dynamic properties of the observed process and may conceal the persistence of the underlying time series. In this paper we develop instrumental variable (IV) methods for extracting information about the latent process. Our framework can be used to estimate the autocorrelation function of the latent volatility process and a key persistence parameter. Our analysis is motivated by the recent literature on realized (volatility) measures, such as the realized variance, that are imperfect estimates of actual volatility. In an empirical analysis using realized measures for the DJIA stocks we find the underlying volatility to be near unit root in all cases. Although standard unit root tests are asymptotically justified, we find them to be misleading in our...

  5. Estimating the Persistence and the Autocorrelation Function of a Time Series that is Measured with Error

    DEFF Research Database (Denmark)

    Hansen, Peter Reinhard; Lunde, Asger

    2014-01-01

    An economic time series can often be viewed as a noisy proxy for an underlying economic variable. Measurement errors will influence the dynamic properties of the observed process and may conceal the persistence of the underlying time series. In this paper we develop instrumental variable (IV) methods for extracting information about the latent process. Our framework can be used to estimate the autocorrelation function of the latent volatility process and a key persistence parameter. Our analysis is motivated by the recent literature on realized volatility measures that are imperfect estimates of actual volatility. In an empirical analysis using realized measures for the Dow Jones industrial average stocks, we find the underlying volatility to be near unit root in all cases. Although standard unit root tests are asymptotically justified, we find them to be misleading in our application despite...

  6. Minimizing errors in phase change correction measurements for gauge blocks using a spherical contact technique

    Science.gov (United States)

    Stoup, John R.; Faust, Bryon S.; Doiron, Theodore D.

    1998-09-01

    One of the most elusive measurement elements in gage block interferometry is the correction for the phase change on reflection. Techniques used to quantify this correction have improved over the years, but the measurement uncertainty has remained relatively constant because some error sources have proven historically difficult to reduce. The Precision Engineering Division at the National Institute of Standards and Technology has recently developed a measurement technique that can quantify the phase change on reflection correction directly for individual gage blocks and eliminates some of the fundamental problems with historical measurement methods. Since only the top surface of the gage block is used in the measurement, wringing film inconsistencies are eliminated with this technique, thereby drastically reducing the measurement uncertainty for the correction. However, block geometry and thermal issues still exist. This paper will describe the methods used to minimize the measurement uncertainty of the phase change on reflection evaluation using a spherical contact technique. The work focuses on gage block surface topography and drift-eliminating algorithms for the data collection. The extrapolation of the data to an undeformed condition and the failure of these curves to follow theoretical estimates are also discussed. The wavelength dependence of the correction was directly measured for different gage block materials and manufacturers, and the data will be presented.

  7. BIVARIATE LAGRANGE-TYPE VECTOR VALUED RATIONAL INTERPOLANTS

    Institute of Scientific and Technical Information of China (English)

    Chuan-qing Gu; Gong-qing Zhu

    2002-01-01

    An axiomatic definition of bivariate vector-valued rational interpolation on distinct plane interpolation points is first presented in this paper. A two-variable vector-valued rational interpolation formula is explicitly constructed in the following form: determinantal formulas for the denominator scalar polynomials and for the numerator vector polynomials, which possess Lagrange-type basic function expressions. A practical criterion for the existence and uniqueness of the interpolation is obtained. In contrast to the underlying method, the method of bivariate Thiele-type vector-valued rational interpolation is reviewed.

  8. A method for sensitivity analysis to assess the effects of measurement error in multiple exposure variables using external validation data

    Directory of Open Access Journals (Sweden)

    George O. Agogo

    2016-10-01

    Background: Measurement error in self-reported dietary intakes is known to bias the association between dietary intake and a health outcome of interest, such as risk of a disease. The association can be distorted further by mismeasured confounders, leading to invalid results and conclusions. It is, however, difficult to adjust for the bias in the association when there is no internal validation data. Methods: We proposed a method to adjust for the bias in the diet-disease association (hereafter, association) due to measurement error in dietary intake and a mismeasured confounder, when there is no internal validation data. The method combines prior information on the validity of the self-report instrument with the observed data to adjust for the bias in the association. We compared the proposed method with the method that ignores the confounder effect, and with the method that ignores measurement errors completely. We assessed the sensitivity of the estimates to various magnitudes of measurement error, error correlations and uncertainty in the literature-reported validation data. We applied the methods to fruits and vegetables (FV) intakes, cigarette smoking (confounder) and all-cause mortality data from the European Prospective Investigation into Cancer and Nutrition study. Results: Using the proposed method resulted in about a four-fold increase in the strength of association between FV intake and mortality. For weakly correlated errors, measurement error in the confounder minimally affected the hazard ratio estimate for FV intake. The effect was more pronounced for strong error correlations. Conclusions: The proposed method permits sensitivity analysis on measurement error structures and accounts for uncertainties in the reported validity coefficients. The method is useful in assessing the direction and quantifying the magnitude of bias in the association due to measurement errors in the confounders.
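
    A minimal sketch of the kind of correction involved, assuming a linear regression-calibration approximation with an attenuation (calibration) matrix assembled from literature-reported validity coefficients; the matrix and all numbers are hypothetical, and the paper's actual method works on the hazard scale with uncertainty in the coefficients:

        import numpy as np

        # Hypothetical attenuation (calibration) matrix Lambda relating true
        # (FV intake, smoking) to their error-prone self-reports; in the paper's
        # setting it would be assembled from literature-reported validity
        # coefficients and error correlations, not internal validation data.
        Lam = np.array([[0.50, 0.05],     # rows: calibration slopes E[true | observed]
                        [0.10, 0.80]])

        beta_obs = np.array([-0.04, 0.30])   # naive coefficients (illustrative)

        # Multivariate regression-calibration correction: beta_obs = Lam' beta_true,
        # so beta_true = (Lam')^{-1} beta_obs.
        beta_adj = np.linalg.solve(Lam.T, beta_obs)
        print("adjusted coefficients:", beta_adj)

    In the univariate case this reduces to dividing the naive coefficient by the attenuation factor; correlated errors between exposure and confounder enter through the off-diagonal elements.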

  9. Sieve estimation of constant and time-varying coefficients in nonlinear ordinary differential equation models by considering both numerical error and measurement error

    CERN Document Server

    Xue, Hongqi; Wu, Hulin; 10.1214/09-AOS784

    2010-01-01

    This article considers estimation of constant and time-varying coefficients in nonlinear ordinary differential equation (ODE) models where analytic closed-form solutions are not available. The numerical solution-based nonlinear least squares (NLS) estimator is investigated in this study. A numerical algorithm such as the Runge-Kutta method is used to approximate the ODE solution. The asymptotic properties are established for the proposed estimators considering both numerical error and measurement error. The B-spline is used to approximate the time-varying coefficients, and the corresponding asymptotic theories in this case are investigated under the framework of the sieve approach. Our results show that if the maximum step size of the $p$-order numerical algorithm goes to zero at a rate faster than $n^{-1/(p\wedge 4)}$, the numerical error is negligible compared to the measurement error. This result provides a theoretical guidance in selection of the step size for numerical evaluations of ODEs. Moreover, we h...
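
    A minimal sketch of the numerical-solution-based NLS estimator for the constant-coefficient case, using a fixed-step classical Runge-Kutta (p = 4) solver inside the residual; the logistic ODE and all values are stand-ins, not the paper's examples:

        import numpy as np
        from scipy.optimize import least_squares

        def rk4(f, y0, ts, theta):
            """Classical 4th-order Runge-Kutta on the fixed grid ts."""
            ys = [np.atleast_1d(y0)]
            for t0, t1 in zip(ts[:-1], ts[1:]):
                h, y = t1 - t0, ys[-1]
                k1 = f(t0, y, theta)
                k2 = f(t0 + h / 2, y + h / 2 * k1, theta)
                k3 = f(t0 + h / 2, y + h / 2 * k2, theta)
                k4 = f(t0 + h, y + h * k3, theta)
                ys.append(y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4))
            return np.array(ys)

        # Logistic growth as a stand-in ODE with constant coefficients (r, K).
        f = lambda t, y, th: th[0] * y * (1 - y / th[1])

        ts = np.linspace(0, 10, 51)
        rng = np.random.default_rng(2)
        y_obs = rk4(f, 0.1, ts, (0.9, 2.0)).ravel() + rng.normal(0, 0.02, len(ts))

        # Numerical-solution-based NLS: residuals use the RK4 solution, not a
        # closed form; the step size h is set by the grid, cf. the rate condition.
        fit = least_squares(lambda th: rk4(f, 0.1, ts, th).ravel() - y_obs,
                            x0=[0.5, 1.0])
        print("estimated (r, K):", fit.x)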

  10. Voltage- and space-clamp errors associated with the measurement of electrotonically remote synaptic events.

    Science.gov (United States)

    Spruston, N; Jaffe, D B; Williams, S H; Johnston, D

    1993-08-01

    1. The voltage- and space-clamp errors associated with the use of a somatic electrode to measure current from dendritic synapses are evaluated using both equivalent-cylinder and morphologically realistic models of neuronal dendritic trees. 2. As a first step toward understanding the properties of synaptic current distortion under voltage-clamp conditions, the attenuation of step and sinusoidal voltage changes are evaluated in equivalent cylinder models. Demonstration of the frequency-dependent attenuation of voltage in the cable is then used as a framework for understanding the distortion of synaptic currents generated at sites remote from the somatic recording electrode and measured in the voltage-clamp recording configuration. 3. Increases in specific membrane resistivity (Rm) are shown to reduce steady-state voltage attenuation, while producing only minimal reduction in attenuation of transient voltage changes. Experimental manipulations that increase Rm therefore improve the accuracy of estimates of reversal potential for electrotonically remote synapses, but do not significantly reduce the attenuation of peak current. In addition, increases in Rm have the effect of slowing the kinetics of poorly clamped synaptic currents. 4. The effects of the magnitude of the synaptic conductance and its kinetics on the measured synaptic currents are also examined and discussed. The error in estimating parameters from measured synaptic currents is greatest for synapses with fast kinetics and large conductances. 5. A morphologically realistic model of a CA3 pyramidal neuron is used to demonstrate the generality of the conclusions derived from equivalent cylinder models. The realistic model is also used to fit synaptic currents generated by stimulation of mossy fiber (MF) and commissural/associational (C/A) inputs to CA3 neurons and to estimate the amount of distortion of these measured currents. 6. Anatomic data from the CA3 pyramidal neuron model are used to construct a

  11. Exposure Measurement Error in PM2.5 Health Effects Studies: A Pooled Analysis of Eight Personal Exposure Validation Studies

    Science.gov (United States)

    Background: Exposure measurement error is a concern in long-term PM2.5 health studies using ambient concentrations as exposures. We assessed error magnitude by estimating calibration coefficients as the association between personal PM2.5 exposures from validation studies and typ...

  12. A vector of quarters representation for bivariate time series

    NARCIS (Netherlands)

    Ph.H.B.F. Franses (Philip Hans)

    1995-01-01

    In this paper it is shown that several models for a bivariate nonstationary quarterly time series are nested in a vector autoregression with cointegration restrictions for the eight annual series of quarterly observations. Or, the Granger Representation Theorem is extended to incorporate

  13. Bivariate support of forward libor and swap rates

    NARCIS (Netherlands)

    Jamshidian, Farshid

    2008-01-01

    Based on a certain notion of "prolific process," we find an explicit expression for the bivariate (topological) support of the solution to a particular class of 2 × 2 stochastic differential equations that includes those of the three-period "lognormal" Libor and swap market models. This yields that

  14. Quantifying the sampling error in tree census measurements by volunteers and its effect on carbon stock estimates.

    Science.gov (United States)

    Butt, Nathalie; Slade, Eleanor; Thompson, Jill; Malhi, Yadvinder; Riutta, Terhi

    2013-06-01

    A typical way to quantify aboveground carbon in forests is to measure tree diameters and use species-specific allometric equations to estimate biomass and carbon stocks. Using "citizen scientists" to collect data that are usually time-consuming and labor-intensive can play a valuable role in ecological research. However, data validation, such as establishing the sampling error in volunteer measurements, is a crucial, but little studied, part of utilizing citizen science data. The aims of this study were to (1) evaluate the quality of tree diameter and height measurements carried out by volunteers compared to expert scientists and (2) estimate how sensitive carbon stock estimates are to these measurement sampling errors. Using all diameter data measured with a diameter tape, the volunteer mean sampling error (difference between repeated measurements of the same stem) was 9.9 mm, and the expert sampling error was 1.8 mm. Excluding those sampling errors > 1 cm, the mean sampling errors were 2.3 mm (volunteers) and 1.4 mm (experts) (this excluded 14% [volunteer] and 3% [expert] of the data). The sampling error in diameter measurements had a small effect on the biomass estimates of the plots: a volunteer (expert) diameter sampling error of 2.3 mm (1.4 mm) translated into 1.7% (0.9%) change in the biomass estimates calculated from species-specific allometric equations based upon diameter. Height sampling error had a dependent relationship with tree height. Including height measurements in biomass calculations compounded the sampling error markedly; the impact of volunteer sampling error on biomass estimates was ±15%, and the expert range was ±9%. Using dendrometer bands, used to measure growth rates, we calculated that the volunteer (vs. expert) sampling error was 0.6 mm (vs. 0.3 mm), which is equivalent to a difference in carbon storage of ±0.011 kg C/yr (vs. ±0.002 kg C/yr) per stem. Using a citizen science model for monitoring carbon stocks not only has
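
    The diameter-to-biomass sensitivity quoted above follows from the power-law form of allometric equations: to first order, the relative biomass error is b times the relative diameter error. A small sketch with hypothetical coefficients a and b (real coefficients are species-specific):

        import numpy as np

        # Hypothetical power-law allometry: biomass = a * D**b.
        a, b = 0.05, 2.5

        def biomass(d_cm):
            return a * d_cm ** b

        d = 30.0                     # stem diameter, cm
        for err_mm, who in [(2.3, "volunteer"), (1.4, "expert")]:
            dd = err_mm / 10.0       # mm -> cm
            rel = (biomass(d + dd) - biomass(d)) / biomass(d)
            # First-order check: relative biomass error ~ b * (dD / D).
            print(f"{who}: {100*rel:.2f}% vs linearized {100*b*dd/d:.2f}%")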

  15. Detection of microcalcifications in mammograms using error of prediction and statistical measures

    Science.gov (United States)

    Acha, Begoña; Serrano, Carmen; Rangayyan, Rangaraj M.; Leo Desautels, J. E.

    2009-01-01

    A two-stage method for detecting microcalcifications in mammograms is presented. In the first stage, the determination of the candidates for microcalcifications is performed. For this purpose, a 2-D linear prediction error filter is applied, and for those pixels where the prediction error is larger than a threshold, a statistical measure is calculated to determine whether they are candidates for microcalcifications or not. In the second stage, a feature vector is derived for each candidate, and after a classification step using a support vector machine, the final detection is performed. The algorithm is tested with 40 mammographic images, from Screen Test: The Alberta Program for the Early Detection of Breast Cancer, with 50-μm resolution, and the results are evaluated using a free-response receiver operating characteristics curve. Two different analyses are performed: an individual microcalcification detection analysis and a cluster analysis. In the analysis of individual microcalcifications, detection sensitivity values of 0.75 and 0.81 are obtained at 2.6 and 6.2 false positives per image, on the average, respectively. The best performance is characterized by a sensitivity of 0.89, a specificity of 0.99, and a positive predictive value of 0.79. In cluster analysis, a sensitivity value of 0.97 is obtained at 1.77 false positives per image, and a value of 0.90 is achieved at 0.94 false positives per image.
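
    A rough sketch of the first stage under simplifying assumptions: a linear predictor over the neighbors above and to the left (illustrative coefficients, not the paper's optimized 2-D prediction filter), with candidates flagged where the prediction error exceeds a threshold tied to a local statistical measure:

        import numpy as np
        from scipy.ndimage import correlate, generic_filter

        rng = np.random.default_rng(3)
        img = rng.normal(100, 2, (256, 256))
        img[120:123, 120:123] += 40      # bright speck standing in for a calcification

        # Predict each pixel as the mean of four previously scanned neighbors
        # (above and to the left); coefficients are an illustrative choice.
        pred_kernel = np.array([[0.25, 0.25, 0.25],
                                [0.25, 0.00, 0.00],
                                [0.00, 0.00, 0.00]])
        error = img - correlate(img, pred_kernel, mode="nearest")

        # Stage-1 candidates: prediction error above a threshold tied to a local
        # statistical measure (here, the local standard deviation of the error).
        local_std = generic_filter(error, np.std, size=9)
        candidates = error > 4 * local_std
        print("candidate pixels:", int(candidates.sum()))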

  16. Measuring errors and violations on the road: a bifactor modeling approach to the Driver Behavior Questionnaire.

    Science.gov (United States)

    Rowe, Richard; Roman, Gabriela D; McKenna, Frank P; Barker, Edward; Poulter, Damian

    2015-01-01

    The Driver Behavior Questionnaire (DBQ) is a self-report measure of driving behavior that has been widely used over more than 20 years. Despite this wealth of evidence, a number of questions remain, including understanding the correlation between its violations and errors sub-components, identifying how these components are related to crash involvement, and testing whether a DBQ based on a reduced number of items can be effective. We address these issues using a bifactor modeling approach to data drawn from the UK Cohort II longitudinal study of novice drivers. This dataset provides observations on 12,012 drivers with DBQ data collected at 0.5, 1, 2, and 3 years after passing their test. A bifactor model, including a general factor onto which all items loaded, and specific factors for ordinary violations, aggressive violations, slips, and errors, fitted the data better than correlated-factors and second-order factor structures. A model based on only 12 items replicated this structure and produced factor scores that were highly correlated with the full model. The ordinary violations factor and the general factor were significant independent predictors of crash involvement at 6 months after starting independent driving. The discussion considers the role of the general and specific factors in crash involvement.

  17. Estimating Usual Dietary Intake Distributions: Adjusting for Measurement Error and Nonnormality in 24-Hour Food Intake Data

    OpenAIRE

    Nusser, Sarah M.; Fuller, Wayne A.; Guenther, Patricia M.

    1995-01-01

    The authors have developed a method for estimating the distribution of an unobservable random variable from data that are subject to considerable measurement error and that arise from a mixture of two populations, one having a single-valued distribution and the other having a continuous unimodal distribution. The method requires that at least two positive intakes be recorded for a subset of the subjects in order to estimate the variance components for the measurement error model. Published in...
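
    The variance-components step at the heart of such methods can be sketched as follows for repeated 24-hour recalls, ignoring the transformation and zero-inflation machinery of the full method; with at least two recalls per person, the within-person (day-to-day) variance is estimable and can be subtracted out:

        import numpy as np

        rng = np.random.default_rng(4)
        n_persons, k = 500, 2                        # at least two recalls per person
        usual = rng.lognormal(3.0, 0.4, n_persons)   # hypothetical true usual intakes
        recalls = usual[:, None] * rng.lognormal(0, 0.5, (n_persons, k))

        means = recalls.mean(axis=1)
        s2_within = recalls.var(axis=1, ddof=1).mean()
        s2_between = means.var(ddof=1) - s2_within / k   # method-of-moments split

        # The usual-intake distribution is much narrower than single recalls suggest.
        print(f"between-person var {s2_between:.1f}, within-person var {s2_within:.1f}")
        print(f"naive var of single recalls {recalls[:, 0].var(ddof=1):.1f}")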

  18. Suspended sediment fluxes in a tidal wetland: Measurement, controlling factors, and error analysis

    Science.gov (United States)

    Ganju, N.K.; Schoellhamer, D.H.; Bergamaschi, B.A.

    2005-01-01

    Suspended sediment fluxes to and from tidal wetlands are of increasing concern because of habitat restoration efforts, wetland sustainability as sea level rises, and potential contaminant accumulation. We measured water and sediment fluxes through two channels on Browns Island, at the landward end of San Francisco Bay, United States, to determine the factors that control sediment fluxes on and off the island. In situ instrumentation was deployed between October 10 and November 13, 2003. Acoustic Doppler current profilers and the index velocity method were employed to calculate water fluxes. Suspended sediment concentrations (SSC) were determined with optical sensors and cross-sectional water sampling. All procedures were analyzed for their contribution to total error in the flux measurement. The inability to close the water balance and determination of constituent concentration were identified as the main sources of error; total error was 27% for net sediment flux. The water budget for the island was computed with an unaccounted input of 0.20 m³ s⁻¹ (22% of mean inflow), after considering channel flow, change in water storage, evapotranspiration, and precipitation. The net imbalance may be a combination of groundwater seepage, overland flow, and flow through minor channels. Change of island water storage, caused by local variations in water surface elevation, dominated the tidally averaged water flux. These variations were mainly caused by wind and barometric pressure change, which alter regional water levels throughout the Sacramento-San Joaquin River Delta. Peak instantaneous ebb flow was 35% greater than peak flood flow, indicating an ebb-dominant system, though dominance varied with the spring-neap cycle. SSC were controlled by wind-wave resuspension adjacent to the island and local tidal currents that mobilized sediment from the channel bed. During neap tides sediment was imported onto the island but during spring tides sediment was exported because the main

  19. Effects of measurement errors on psychometric measurements in ergonomics studies: Implications for correlations, ANOVA, linear regression, factor analysis, and linear discriminant analysis.

    Science.gov (United States)

    Liu, Yan; Salvendy, Gavriel

    2009-05-01

    This paper aims to demonstrate the effects of measurement errors on psychometric measurements in ergonomics studies. A variety of sources can cause random measurement errors in ergonomics studies, and these errors can distort virtually every statistic computed and lead investigators to erroneous conclusions. The effects of measurement errors on the five most widely used statistical analysis tools have been discussed and illustrated: correlation; ANOVA; linear regression; factor analysis; linear discriminant analysis. It has been shown that measurement errors can greatly attenuate correlations between variables, reduce the statistical power of ANOVA, distort (overestimate, underestimate or even change the sign of) regression coefficients, underrate the explanatory contributions of the most important factors in factor analysis, and depreciate the significance of the discriminant function and the discrimination abilities of individual variables in discriminant analysis. The discussions will be restricted to subjective scales and survey methods and their reliability estimates. Other methods applied in ergonomics research, such as physical and electrophysiological measurements and chemical and biomedical analysis methods, also have issues of measurement errors, but they are beyond the scope of this paper. As there has been increasing interest in the development and testing of theories in ergonomics research, it has become very important for ergonomics researchers to understand the effects of measurement errors on their experiment results, which the authors believe is very critical to research progress in theory development and cumulative knowledge in the ergonomics field.
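
    The attenuation effect on correlations is easy to reproduce: with classical error, the observed correlation is the true correlation scaled by the square root of the product of the two reliabilities. A quick Monte Carlo check (all values illustrative):

        import numpy as np

        rng = np.random.default_rng(5)
        n, r_true = 200_000, 0.6

        # Latent correlated traits with unit variance.
        x = rng.normal(size=n)
        y = r_true * x + np.sqrt(1 - r_true**2) * rng.normal(size=n)

        for rel_x, rel_y in [(0.9, 0.9), (0.7, 0.7), (0.5, 0.7)]:
            # Add error so each observed score has the stated reliability.
            xo = x + rng.normal(0, np.sqrt(1 / rel_x - 1), n)
            yo = y + rng.normal(0, np.sqrt(1 / rel_y - 1), n)
            r_obs = np.corrcoef(xo, yo)[0, 1]
            print(f"rel=({rel_x},{rel_y}): observed r {r_obs:.3f}, "
                  f"predicted {r_true * np.sqrt(rel_x * rel_y):.3f}")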

  20. On limit relations between some families of bivariate hypergeometric orthogonal polynomials

    Science.gov (United States)

    Area, I.; Godoy, E.

    2013-01-01

    In this paper we deal with limit relations between bivariate hypergeometric polynomials. We analyze the limit relation from trinomial distribution to bivariate Gaussian distribution, obtaining the limit transition from the second-order partial difference equation satisfied by bivariate hypergeometric Kravchuk polynomials to the second-order partial differential equation verified by bivariate hypergeometric Hermite polynomials. As a consequence the limit relation between both families of orthogonal polynomials is established. A similar analysis between bivariate Hahn and bivariate Appell orthogonal polynomials is also presented.

  1. Measurements on pointing error and field of view of Cimel-318 Sun photometers in the scope of AERONET

    Directory of Open Access Journals (Sweden)

    B. Torres

    2013-08-01

    Sensitivity studies indicate that, among the diverse error sources of ground-based sky radiometer observations, the pointing error plays an important role in the correct retrieval of aerosol properties. Accurate pointing is especially critical for the characterization of desert dust aerosol. The present work relies on the analysis of two new measurement procedures (cross and matrix) specifically designed for the evaluation of the pointing error in the standard instrument of the Aerosol Robotic Network (AERONET), the Cimel CE-318 Sun photometer. The first part of the analysis contains a preliminary study whose results conclude on the need of a Sun-movement correction for an accurate evaluation of the pointing error from both new measurements. Once this correction is applied, both measurements show equivalent results, with differences under 0.01° in the pointing error estimations. The second part of the analysis includes the incorporation of the cross procedure into the AERONET routine measurement protocol in order to monitor the pointing error in field instruments. The pointing error was evaluated using the data collected for more than a year in 7 Sun photometers belonging to AERONET sites. The registered pointing error values were generally smaller than 0.1°, though in some instruments values up to 0.3° have been observed. Moreover, the pointing error analysis shows that this measurement can be useful to detect mechanical problems in the robots or dirtiness in the 4-quadrant detector used to track the Sun. Specifically, these mechanical faults can be detected due to the stable behavior of the values over time and vs. the solar zenith angle. Finally, the matrix procedure can be used to derive the value of the solid view angle of the instruments. The methodology has been implemented and applied for the characterization of 5 Sun photometers. To validate the method, a comparison with solid angles obtained from the vicarious calibration method was

  2. Impact of instrumental systematic errors on fine-structure constant measurements with quasar spectra

    CERN Document Server

    Whitmore, J B

    2014-01-01

    We present a new `supercalibration' technique for measuring systematic distortions in the wavelength scales of high resolution spectrographs. By comparing spectra of `solar twin' stars or asteroids with a reference laboratory solar spectrum, distortions in the standard thorium-argon calibration can be tracked with $\sim$10 m s$^{-1}$ precision over the entire optical wavelength range on scales of both echelle orders ($\sim$50--100 \AA) and entire spectrograph arms ($\sim$1000--3000 \AA). Using archival spectra from the past 20 years we have probed the supercalibration history of the VLT--UVES and Keck--HIRES spectrographs. We find that systematic errors in their wavelength scales are ubiquitous and substantial, with long-range distortions varying between typically $\pm$200 m s$^{-1}$ per 1000 \AA. We apply a simple model of these distortions to simulated spectra which characterize the large UVES and HIRES quasar samples which previously indicated possible evidence for cosmological variations in the ...

  3. Measurement error robustness of a closed-loop minimal sampling method for HIV therapy switching.

    Science.gov (United States)

    Cardozo, E Fabian; Zurakowski, Ryan

    2011-01-01

    We test the robustness of a closed-loop treatment scheduling method to realistic HIV viral load measurement error. The purpose of the algorithm is to allow the accurate detection of an induced viral load minimum with a reduced number of samples. Therapy must be switched at or near the viral-load minimum to achieve optimal therapeutic benefit; therapeutic benefit decreases logarithmically with increased viral load at the switching time. The performance of the algorithm is characterized using a number of metrics. These include the number of samples saved vs. fixed-rate sampling, the risk-reduction achieved vs. the risk-reduction possible with frequent sampling, and the difference between the switching time vs. the theoretical optimal switching time. The algorithm is applied to simulated patient data generated from a family of data-driven patient models and corrupted by experimentally confirmed levels of log-normal noise.

  4. A Reanalysis of Toomela (2003: Spurious measurement error as cause for common variance between personality factors

    Directory of Open Access Journals (Sweden)

    MATTHIAS ZIEGLER

    2009-03-01

    The present article reanalyzed data collected by Toomela (2003). The data contain personality self-ratings and cognitive ability test results from n = 912 men with military background. In his original article, Toomela showed that in the group with the highest cognitive ability, Big-Five Neuroticism and Conscientiousness were substantially correlated and could no longer be clearly separated using exploratory factor analysis. The present reanalysis was based on the hypothesis that a spurious measurement error caused by situational demand was responsible; that is, people distorted their answers. Furthermore, it was hypothesized that this situational demand was felt due to a person's military rank but not due to his intelligence. Using a multigroup structural equation model, our hypothesis could be confirmed. Moreover, the results indicate that an uncorrelated trait model might represent personalities better when situational demand is partialed out. Practical and theoretical implications are discussed.

  5. Distance Measurement Error in Time-of-Flight Sensors Due to Shot Noise

    Directory of Open Access Journals (Sweden)

    Julio Illade-Quinteiro

    2015-02-01

    Unlike other noise sources, which can be reduced or eliminated by different signal processing techniques, shot noise is an ever-present noise component in any imaging system. In this paper, we present an in-depth study of the impact of shot noise on time-of-flight sensors in terms of the error introduced in the distance estimation. The paper addresses the effect of parameters such as the size of the photosensor, the background and signal power, and the integration time, and the resulting design trade-offs. The study is demonstrated with different numerical examples, which show that, in general, the phase-shift determination technique with two background measurements is the most suitable approach for pixel arrays of large resolution.
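
    A Monte Carlo sketch of the shot-noise-limited distance error for a four-phase (four-bucket) demodulation scheme, with Poisson-distributed counts per bucket; the modulation frequency, signal, and background levels are illustrative assumptions rather than the paper's cases:

        import numpy as np

        rng = np.random.default_rng(6)
        c, f_mod = 3e8, 20e6                  # modulation frequency 20 MHz
        d_true = 5.0                          # metres (within the 7.5 m ambiguity range)
        phi = 4 * np.pi * f_mod * d_true / c  # true phase shift

        signal, background, trials = 2000.0, 10000.0, 20000  # electrons per bucket

        # Four-phase demodulation: mean counts in the four sampling buckets.
        offsets = np.array([0, 1, 2, 3]) * np.pi / 2
        mean_counts = background + signal * (1 + np.cos(phi - offsets)) / 2
        C = rng.poisson(mean_counts, (trials, 4))             # shot noise is Poisson

        # Standard four-bucket phase estimate, then phase -> distance.
        phi_hat = np.arctan2(C[:, 1] - C[:, 3], C[:, 0] - C[:, 2]) % (2 * np.pi)
        d_hat = c * phi_hat / (4 * np.pi * f_mod)

        print(f"distance std from shot noise: {d_hat.std() * 1000:.1f} mm")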

  6. Assessment of error in aerosol optical depth measured by AERONET due to aerosol forward scattering

    Science.gov (United States)

    Sinyuk, Alexander; Holben, Brent N.; Smirnov, Alexander; Eck, Thomas F.; Slutsker, Ilya; Schafer, Joel S.; Giles, David M.; Sorokin, Mikhail

    2012-12-01

    We present an analysis of the effect of aerosol forward scattering on the accuracy of aerosol optical depth (AOD) measured by CIMEL Sun photometers. The effect is quantified in terms of AOD and solar zenith angle using radiative transfer modeling. The analysis is based on aerosol size distributions derived from multi-year climatologies of AERONET aerosol retrievals. The study shows that the modeled error is lower than the AOD calibration uncertainty (0.01) for the vast majority of AERONET level 2 observations, ∼99.53%. Only ∼0.47% of the AERONET database, corresponding mostly to dust aerosol with high AOD and low solar elevations, has larger biases. We also show that observations with extreme reductions in direct solar irradiance do not contribute to level 2 AOD due to low Sun photometer digital counts below a quality control cutoff threshold.

  7. Accelerating inference for diffusions observed with measurement error and large sample sizes using approximate Bayesian computation

    DEFF Research Database (Denmark)

    Picchini, Umberto; Forman, Julie Lyng

    2016-01-01

    In recent years, dynamical modelling has been provided with a range of breakthrough methods to perform exact Bayesian inference. However, it is often computationally unfeasible to apply exact statistical methodologies in the context of large data sets and complex models. This paper considers a nonlinear stochastic differential equation model observed with correlated measurement errors and an application to protein folding modelling. An approximate Bayesian computation (ABC)-MCMC algorithm is suggested to allow inference for model parameters within reasonable time constraints. The ABC algorithm ... applications. A simulation study is conducted to compare our strategy with exact Bayesian inference, the latter being two orders of magnitude slower than ABC-MCMC for the considered set-up. Finally, the ABC algorithm is applied to a large protein dataset. The suggested methodology is fairly general...
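
    A toy version of the idea, assuming a simple ABC rejection scheme (rather than the paper's ABC-MCMC) on an Ornstein-Uhlenbeck process simulated by Euler-Maruyama and observed with iid noise; the summaries, prior, and tolerance are arbitrary choices made for illustration:

        import numpy as np

        rng = np.random.default_rng(7)

        def simulate(theta, n=200, dt=0.05, x0=0.0, obs_sd=0.2):
            """Euler-Maruyama for dX = -theta*X dt + dW, observed with iid noise."""
            x = np.empty(n)
            x[0] = x0
            for t in range(1, n):
                x[t] = x[t - 1] - theta * x[t - 1] * dt + np.sqrt(dt) * rng.normal()
            return x + rng.normal(0, obs_sd, n)

        def summaries(y):
            return np.array([y.var(), np.corrcoef(y[:-1], y[1:])[0, 1]])

        y_obs = simulate(theta=1.5)
        s_obs = summaries(y_obs)

        # ABC rejection: keep prior draws whose simulated summaries land close
        # to the observed ones.
        draws = rng.uniform(0.1, 5.0, 3000)                 # prior on theta
        dist = np.array([np.linalg.norm(summaries(simulate(th)) - s_obs)
                         for th in draws])
        accepted = draws[dist < np.quantile(dist, 0.02)]
        print(f"ABC posterior mean {accepted.mean():.2f} (true 1.5)")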

  8. Surface curvature of pelvic joints from three laser scanners: separating anatomy from measurement error.

    Science.gov (United States)

    Villa, Chiara; Gaudio, Daniel; Cattaneo, Cristina; Buckberry, Jo; Wilson, Andrew S; Lynnerup, Niels

    2015-03-01

    Recent studies have reported that quantifying symphyseal and auricular surface curvature changes on 3D models acquired by laser scanners has potential for age estimation. However, no tests have been carried out to evaluate the repeatability of the results between different laser scanners. 3D models of the two pelvic joints were generated using three laser scanners (Custom, Faro, and Minolta). The surface curvature, the surface area, and the distance between co-registered meshes were investigated. Close results were found for surface areas (differences between 0.3% and 2.4%) and for distance deviations between co-registered meshes. Curvature values differed between laser scanners, but still showed similar trends with increasing phases/scores. Applying a smoothing factor to the 3D models, it was possible to separate anatomy from the measurement error of each instrument, so that similar curvature values could be obtained regardless of the laser scanner.

  9. High Accuracy On-line Measurement Method of Motion Error on Machine Tools Straight-going Parts

    Institute of Scientific and Technical Information of China (English)

    苏恒; 洪迈生; 魏元雷; 李自军

    2003-01-01

    Harmonic suppression and the non-periodic, non-closing nature of the straightness profile error, which bring about harmonic-component distortion in the measurement result, are analyzed. As a countermeasure, a novel accurate two-probe time-domain method is put forward to measure the straight-going component of motion error in machine tools, based on the frequency-domain 3-point method, after symmetrical continuation of the probes' primitive signals. Both the straight-going component of the machine tool's motion error and the profile error of a workpiece manufactured on the machine can be measured at the same time. This information can be used to diagnose the fault origin of machine tools. The analysis result is proved to be correct by the experiment.

  10. Measurement of Fracture Aperture Fields Using Transmitted Light: An Evaluation of Measurement Errors and their Influence on Simulations of Flow and Transport through a Single Fracture

    Energy Technology Data Exchange (ETDEWEB)

    Detwiler, Russell L.; Glass, Robert J.; Pringle, Scott E.

    1999-05-06

    Understanding of single and multi-phase flow and transport in fractures can be greatly enhanced through experimentation in transparent systems (analogs or replicas) where light transmission techniques yield quantitative measurements of aperture, solute concentration, and phase saturation fields. Here we quantify aperture field measurement error and demonstrate the influence of this error on the results of flow and transport simulations (hypothesized experimental results) through saturated and partially saturated fractures. We find that precision and accuracy can be balanced to greatly improve the technique, and we present a measurement protocol to obtain a minimum-error field. Simulation results show an increased sensitivity to error as we move from flow to transport and from saturated to partially saturated conditions. Significant sensitivity under partially saturated conditions results in differences in channeling and multiple-peaked breakthrough curves. These results emphasize the critical importance of defining and minimizing error for studies of flow and transport in single fractures.

  11. Error Analysis of Three Degree-of-Freedom Changeable Parallel Measuring Mechanism

    Institute of Scientific and Technical Information of China (English)

    CHENG Gang; GE Shi-rong; WANG Yong

    2007-01-01

    A three degree-of-freedom (DOF) planar changeable parallel mechanism is designed by means of control of different drive parameters. This mechanism possesses the characteristics of two kinds of parallel mechanism. Based on its topologic structure, a coordinate system for position analysis is set up and the forward kinematic solutions are analyzed. It was found that the parallel mechanism is partially decoupled. The relationship between original errors and the position-stance error of the moving platform is built according to the complete differential-coefficient theory. Then we present a special example with theoretical values and errors to evaluate the error model, and numerical error solutions are obtained. The investigations, concentrating on mechanism errors and actuator errors, show that the mechanism errors have more influence on the position-stance of the moving platform. It is demonstrated that improving manufacturing and assembly techniques can greatly reduce the moving platform error. The small change in position-stance error in different kinematic positions proves that error compensation in software can considerably improve the precision of the parallel mechanism.

  12. Systematic and Statistical Errors Associated with Nuclear Decay Constant Measurements Using the Counting Technique

    Science.gov (United States)

    Koltick, David; Wang, Haoyu; Liu, Shih-Chieh; Heim, Jordan; Nistor, Jonathan

    2016-03-01

    Typical nuclear decay constants are measured at the accuracy level of 10⁻². There are numerous applications, such as tests of unconventional theories, dating of materials, and long-term inventory evolution, which require decay-constant accuracy at a level of 10⁻⁴ to 10⁻⁵. The statistical and systematic errors associated with precision measurements of decays using the counting technique are presented. Precision requires high count rates, which introduces time-dependent dead-time and pile-up corrections. An approach to overcome these issues is presented by continuous recording of the detector current. Other systematic corrections include the time-dependent dead time due to background radiation, control of target motion and radiation flight-path variation due to environmental conditions, and the time-dependent effects caused by scattered events. The incorporation of blind experimental techniques can help make a measurement independent of past results. A spectrometer design and data analysis that can accomplish these goals are reviewed. The author would like to thank TechSource, Inc. and Advanced Physics Technologies, LLC. for their support in this work.
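
    The dead-time issue mentioned above can be sketched with the standard non-paralyzable counter model, under which the measured rate saturates and must be corrected upward; the dead time tau and the rates below are illustrative, and this is not the authors' current-recording approach:

        import numpy as np

        # Non-paralyzable dead-time model: measured rate m relates to true rate n
        # via m = n / (1 + n*tau), so the correction is n = m / (1 - m*tau).
        tau = 2e-6                      # detector dead time, seconds (illustrative)

        for n_true in [1e3, 1e4, 1e5]:                  # true count rates (1/s)
            m = n_true / (1 + n_true * tau)             # what the counter reports
            n_rec = m / (1 - m * tau)                   # dead-time-corrected rate
            print(f"true {n_true:.0f}/s  measured {m:.0f}/s  "
                  f"corrected {n_rec:.0f}/s  loss {100 * (1 - m / n_true):.2f}%")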

  13. Reduction of Truncation Errors in Planar, Cylindrical, and Partial Spherical Near-Field Antenna Measurements

    Directory of Open Access Journals (Sweden)

    Francisco José Cano-Fácila

    2012-01-01

    A method to reduce truncation errors in near-field antenna measurements is presented. The method is based on the Gerchberg-Papoulis iterative algorithm used to extrapolate band-limited functions and it is able to extend the valid region of the calculated far-field pattern up to the whole forward hemisphere. The extension of the valid region is achieved by the iterative application of a transformation between two different domains. After each transformation, a filtering process that is based on known information at each domain is applied. The first domain is the spectral domain in which the plane wave spectrum (PWS) is reliable only within a known region. The second domain is the field distribution over the antenna under test (AUT) plane in which the desired field is assumed to be concentrated on the antenna aperture. The method can be applied to any scanning geometry, but in this paper, only the planar, cylindrical, and partial spherical near-field measurements are considered. Several simulation and measurement examples are presented to verify the effectiveness of the method.
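
    A 1-D sketch of the Gerchberg-Papoulis iteration under stated assumptions: alternately enforcing the band limitation in the spectral domain and restoring the known (measured) samples extrapolates the signal outside the truncated region. A generic band-limited signal stands in here for the PWS/aperture-field pair of the actual method:

        import numpy as np

        rng = np.random.default_rng(8)
        N = 512

        # Band-limited "pattern": spectrum confined to a few low-frequency bins.
        idx = np.r_[0:9, N - 8:N]
        spec = np.zeros(N, complex)
        spec[idx] = rng.normal(size=17) + 1j * rng.normal(size=17)
        f_true = np.fft.ifft(spec).real          # real, band-limited signal

        known = slice(100, 412)                  # truncated measurement region
        f_est = np.zeros(N)
        f_est[known] = f_true[known]

        band = np.zeros(N, bool)
        band[idx] = True
        for _ in range(200):
            F = np.fft.fft(f_est)
            F[~band] = 0.0                       # enforce band limitation
            f_est = np.fft.ifft(F).real
            f_est[known] = f_true[known]         # restore known (measured) samples

        mask = np.ones(N, bool)
        mask[known] = False
        err = np.abs(f_est - f_true)[mask].max()
        print(f"max extrapolation error outside measured region: {err:.2e}")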

  14. Modeling the probability distribution of positional errors incurred by residential address geocoding

    Directory of Open Access Journals (Sweden)

    Mazumdar Soumya

    2007-01-01

    Background: The assignment of a point-level geocode to subjects' residences is an important data assimilation component of many geographic public health studies. Often, these assignments are made by a method known as automated geocoding, which attempts to match each subject's address to an address-ranged street segment georeferenced within a streetline database and then interpolate the position of the address along that segment. Unfortunately, this process results in positional errors. Our study sought to model the probability distribution of positional errors associated with automated geocoding and E911 geocoding. Results: Positional errors were determined for 1423 rural addresses in Carroll County, Iowa, as the vector difference between each 100%-matched automated geocode and its true location as determined by orthophoto and parcel information. Errors were also determined for 1449 60%-matched geocodes and 2354 E911 geocodes. Huge (>15 km) outliers occurred among the 60%-matched geocoding errors; outliers occurred for the other two types of geocoding errors also but were much smaller. E911 geocoding was more accurate (median error length = 44 m) than 100%-matched automated geocoding (median error length = 168 m). The empirical distributions of positional errors associated with 100%-matched automated geocoding and E911 geocoding exhibited a distinctive Greek-cross shape and had many other interesting features that were not capable of being fitted adequately by a single bivariate normal or t distribution. However, mixtures of t distributions with two or three components fit the errors very well. Conclusion: Mixtures of bivariate t distributions with few components appear to be flexible enough to fit many positional error datasets associated with geocoding, yet parsimonious enough to be feasible for nascent applications of measurement-error methodology to spatial epidemiology.

  15. A measurement error model for physical activity level as measured by a questionnaire with application to the 1999-2006 NHANES questionnaire.

    Science.gov (United States)

    Tooze, Janet A; Troiano, Richard P; Carroll, Raymond J; Moshfegh, Alanna J; Freedman, Laurence S

    2013-06-01

    Systematic investigations into the structure of measurement error of physical activity questionnaires are lacking. We propose a measurement error model for a physical activity questionnaire that uses physical activity level (the ratio of total energy expenditure to basal energy expenditure) to relate questionnaire-based reports of physical activity level to true physical activity levels. The 1999-2006 National Health and Nutrition Examination Survey physical activity questionnaire was administered to 433 participants aged 40-69 years in the Observing Protein and Energy Nutrition (OPEN) Study (Maryland, 1999-2000). Valid estimates of participants' total energy expenditure were also available from doubly labeled water, and basal energy expenditure was estimated from an equation; the ratio of those measures estimated true physical activity level ("truth"). We present a measurement error model that accommodates the mixture of errors that arise from assuming a classical measurement error model for doubly labeled water and a Berkson error model for the equation used to estimate basal energy expenditure. The method was then applied to the OPEN Study. Correlations between the questionnaire-based physical activity level and truth were modest (r = 0.32-0.41); attenuation factors (0.43-0.73) indicate that the use of questionnaire-based physical activity level would lead to attenuated estimates of effect size. Results suggest that sample sizes for estimating relationships between physical activity level and disease should be inflated, and that regression calibration can be used to provide measurement error-adjusted estimates of relationships between physical activity and disease.

  16. THE INVERSE PROBLEM OF OPTIMAL ONE-STEP AND MULTI-STEP FILTERING OF MEASUREMENT ERRORS IN THE VECTOR OF DEMAND

    Directory of Open Access Journals (Sweden)

    Laipanova Z. M.

    2015-12-01

    In practice, we often encounter the problem of determining a system state based on the results of various measurements. Measurements are usually accompanied by random errors; therefore, we should not talk about determining the system state but about estimating it through stochastic processing of measurement results. In the monograph by E. A. Semenchina and M. Z. Laipanova [1], one-step filtering of the measurement errors of the vector of demand in Leontiev's balance model was investigated, as well as multi-step optimal filtering of the measurement errors of the vector of demand. In this article, we pose and investigate the inverse problem for optimal one-step and multi-step filtering of the measurement errors of the vector of demand. For its solution, the authors propose a conditional-optimization method: for a given and known disturbance, the matrix elements are determined (estimated) for one-step filtering of measurement errors, and likewise, for given variables and a known disturbance, for multi-step filtering. The solution of the inverse problem is reduced to constrained optimization problems, which are easily solved in MS Excel. The results of the research, outlined in this article, are of considerable interest in applied research. The article also formulates and solves the inverse problem in a dynamic Leontiev model using the proposed method.

  17. Prediction of rainfall intensity measurement errors using commercial microwave communication links

    Directory of Open Access Journals (Sweden)

    A. Zinevich

    2010-10-01

    Commercial microwave radio links forming cellular communication networks are known to be a valuable instrument for measuring near-surface rainfall. However, operational communication links are more uncertain relative to dedicated installations, since their geometry and frequencies are optimized for high communication performance rather than observing rainfall. Quantification of the uncertainties for measurements that are non-optimal in the first place is essential to assure usability of the data.

    In this work we address modeling of instrumental impairments, i.e., signal variability due to antenna wetting, baseline attenuation uncertainty and digital quantization, as well as environmental ones, i.e., variability of the drop size distribution along a link, affecting the accuracy of path-averaged rainfall measurement, and spatial variability of rainfall in the link's neighborhood, affecting the accuracy of rainfall estimation away from the link path. Expressions for the root mean squared error (RMSE) of estimates of path-averaged and point rainfall have been derived. To verify the RMSE expressions quantitatively, path-averaged measurements from 21 operational communication links in 12 different locations have been compared to records of five nearby rain gauges over three rainstorm events.

    The experiments show that the prediction accuracy is above 90% for temporal accumulation less than 30 min and decreases for longer accumulation intervals. Spatial variability in the vicinity of the link, baseline attenuation uncertainty and, possibly, suboptimality of the wet antenna attenuation model are the major sources of link-gauge discrepancies. In addition, the dependence of the optimal coefficients of a conventional wet antenna attenuation model on spatial rainfall variability and, accordingly, link length has been shown.

    The expressions for RMSE of the path-averaged rainfall estimates can be useful for integration of measurements from multiple

  18. THEOREMS OF PEANO'S TYPE FOR BIVARIATE FUNCTIONS AND OPTIMAL RECOVERY OF LINEAR FUNCTIONALS

    Institute of Scientific and Technical Information of China (English)

    N.K. Dicheva

    2001-01-01

    The best recovery of a linear functional Lf, f = f(x, y), on the basis of given linear functionals Ljf, j = 1, 2, …, N, in the sense of Sard has been investigated, using an analogue of Peano's theorem. The best recovery of a bivariate function from given scattered data has been obtained in a simple analytical form as a special case.

  19. Method for correction of errors in observation angles for limb thermal emission measurements. [for satellite sounding of atmosphere

    Science.gov (United States)

    Abbas, M. M.; Shapiro, G. L.; Conrath, B. J.; Kunde, V. G.; Maguire, W. C.

    1984-01-01

    Thermal emission measurements of the earth's stratospheric limb from space platforms require an accurate knowledge of the observation angles for retrieval of temperature and constituent distributions. Without the use of expensive stabilizing systems, however, most observational instruments do not meet the required pointing accuracies, thus leading to large errors in the retrieval of atmospheric data. This paper describes a self-consistent method of correcting errors in pointing angles by using information contained in the observed spectrum. Numerical results based on temperature inversions of synthetic thermal emission spectra with assumed random errors in pointing angles are presented.

  20. Minimizing the Error Associated With Measurements of Migration-Related Sediment Exchange on Meandering Rivers

    Science.gov (United States)

    Lauer, J. W.; Parker, G.

    2005-05-01

    The floodplains of meandering rivers represent reservoirs that both store and release sediment. Bed material is generally released from cut banks and replaced in nearby point bars wherever migration occurs. Measuring the associated bed material flux is important for tracing the movement of contaminants that may be mixed with the bed material. Approximations of this flux can be made using a representative channel depth and sequences of aerial photography to estimate average absolute migration rates (or reworked areas) between photographs. Error in the aerial photographs leads to a positive bias in computed release rates. A method for removing this bias is introduced that uses the apparent offset of fixed linear features such as roads (along smaller rivers) or abandoned channel courses (along larger rivers). Measuring the rate of release of fine sediment is important both for predicting the long term morphodynamic evolution of the channel/floodplain system and for tracing the movement of contaminants that may be adsorbed to the fine sediment. While fine sediment can be mixed throughout the depth of the floodplain, it is most concentrated in the upper portion of older parts of the floodplain where it has had time to accumulate through overbank deposition. Its release rate can be estimated using migration rates computed from aerial photography in combination with local measurements of bank topography, both of which are highly variable even within a given reach. Where detailed bank topography is available for an entire reach, estimating the release of fine sediment is relatively straightforward. However, detailed topography is often unavailable along the banks of large lowland rivers, forcing estimates of the fine material flux to be made using a relatively small number of physically surveyed cross-sections. It is not immediately clear how many cross sections are required for a good estimate. This study performs Monte Carlo simulations on a detailed topographic dataset

  1. Univariate and Bivariate Empirical Mode Decomposition for Postural Stability Analysis

    Directory of Open Access Journals (Sweden)

    Jacques Duchêne

    2008-05-01

    The aim of this paper was to compare empirical mode decomposition (EMD) and two new extended methods of EMD, named complex empirical mode decomposition (complex-EMD) and bivariate empirical mode decomposition (bivariate-EMD). All methods were used to analyze stabilogram center of pressure (COP) time series. The two new methods are suitable for application to complex time series to extract complex intrinsic mode functions (IMFs) before the Hilbert transform is subsequently applied to the IMFs. The trace of an analytic IMF in the complex plane has a circular form, with each IMF having its own rotation frequency. The area of the circle and the average rotation frequency of the IMFs represent efficient indicators of the postural stability status of subjects. Experimental results show the effectiveness of these indicators to identify differences in standing posture between groups.
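
    The two indicators can be sketched directly once a complex IMF is available. Here a synthetic rotating complex signal stands in for a bivariate-EMD output (a real analysis would obtain the IMFs from a decomposition library), and the "circle area" is approximated, as an assumption, by pi times the mean squared radius of the trace:

        import numpy as np

        fs = 100.0                                   # sampling rate, Hz
        t = np.arange(0, 20, 1 / fs)
        rng = np.random.default_rng(9)

        # Stand-in for one complex IMF of a COP trace (x + i*y): a noisy
        # rotation at ~0.4 Hz.
        z = (5 + rng.normal(0, 0.3, t.size)) * np.exp(2j * np.pi * 0.4 * t)

        # Average rotation frequency from the unwrapped phase of the trace.
        phase = np.unwrap(np.angle(z))
        rot_freq = (phase[-1] - phase[0]) / (2 * np.pi * (t[-1] - t[0]))

        # "Area of the circle" indicator: pi * mean squared radius of the trace.
        area = np.pi * np.mean(np.abs(z) ** 2)

        print(f"rotation frequency {rot_freq:.2f} Hz, circle area {area:.1f}")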

  2. Design, calibration and error analysis of instrumentation for heat transfer measurements in internal combustion engines

    Science.gov (United States)

    Ferguson, C. R.; Tree, D. R.; Dewitt, D. P.; Wahiduzzaman, S. A. H.

    1987-01-01

    The paper reports the methodology and uncertainty analyses of instrumentation for heat transfer measurements in internal combustion engines. Results are presented for determining the local wall heat flux in an internal combustion engine (using a surface thermocouple-type heat flux gage) and the apparent flame temperature and soot volume fraction-path length product in a diesel engine (using two-color pyrometry). It is shown that a surface thermocouple heat transfer gage suitably constructed and calibrated will have an accuracy of 5 to 10 percent. It is also shown that, when applying two-color pyrometry to measure the apparent flame temperature and soot volume fraction-path length, it is important to choose at least one of the two wavelengths to lie in the range of 1.3 to 2.3 micrometers. A carefully calibrated two-color pyrometer can ensure that random errors in the apparent flame temperature and in the soot volume fraction-path length will remain small (within about 1 percent and 10 percent, respectively).
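
    A sketch of the two-color (ratio) temperature computation under the Wien approximation with gray (wavelength-independent) emissivity, using two wavelengths inside the recommended 1.3-2.3 micrometer band; apart from the radiation constant, all values are illustrative:

        import numpy as np

        C2 = 1.4388e-2            # second radiation constant, m*K

        def ratio_temperature(L1, L2, lam1, lam2):
            """Two-color (ratio) temperature under the Wien approximation,
            assuming emissivity is equal at both wavelengths (gray soot)."""
            return C2 * (1 / lam2 - 1 / lam1) / (np.log(L1 / L2) - 5 * np.log(lam2 / lam1))

        lam1, lam2 = 1.6e-6, 2.2e-6        # both inside the 1.3-2.3 um band
        T_true = 2200.0                    # K

        # Synthesize Wien-approximation radiances, then invert for temperature.
        wien = lambda lam, T: lam ** -5 * np.exp(-C2 / (lam * T))
        L1, L2 = wien(lam1, T_true), wien(lam2, T_true)
        print(f"recovered T = {ratio_temperature(L1, L2, lam1, lam2):.1f} K")

        # Sensitivity: a 1% radiance error on one channel, by direct perturbation.
        print(f"with +1% error on L1: {ratio_temperature(1.01 * L1, L2, lam1, lam2):.1f} K")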

  3. Software Tool for Analysis of Breathing-Related Errors in Transthoracic Electrical Bioimpedance Spectroscopy Measurements

    Science.gov (United States)

    Abtahi, F.; Gyllensten, I. C.; Lindecrantz, K.; Seoane, F.

    2012-12-01

    During the last decades, Electrical Bioimpedance Spectroscopy (EBIS) has been applied in a range of different applications, mainly using the frequency-sweep technique. Traditionally, the tissue under study is considered to be time-invariant, and dynamic changes of tissue activity are ignored and instead treated as a noise source. This assumption has not been adequately tested and could have a negative impact and limit the accuracy of impedance monitoring systems. In order to successfully use frequency-sweep EBIS for monitoring time-variant systems, it is paramount to study the effect of the frequency-sweep delay on Cole-model-based analysis. In this work, we present a software tool that can be used to simulate the influence of respiration activity in frequency-sweep EBIS measurements of the human thorax and to analyse the effects of the different error sources. Preliminary results indicate that the deviation of the EBIS measurement might be significant at any frequency, and especially in the impedance plane. Therefore the impact on Cole-model analysis might be different depending on the method applied for Cole parameter estimation.
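
    For context, a minimal sketch of the Cole-model fit that a frequency sweep is meant to support, with synthetic data generated from illustrative parameter values; breathing-induced time variation during the sweep would perturb exactly this kind of fit:

        import numpy as np
        from scipy.optimize import least_squares

        def cole(f, R0, Rinf, tau, alpha):
            """Cole impedance model Z(f) = Rinf + (R0 - Rinf)/(1 + (j*2*pi*f*tau)^alpha)."""
            return Rinf + (R0 - Rinf) / (1 + (2j * np.pi * f * tau) ** alpha)

        freqs = np.logspace(3, 6, 50)                     # 1 kHz - 1 MHz sweep
        rng = np.random.default_rng(10)
        truth = (450.0, 250.0, 1e-6, 0.75)
        z_meas = (cole(freqs, *truth)
                  + rng.normal(0, 0.5, freqs.size)
                  + 1j * rng.normal(0, 0.5, freqs.size))

        def resid(p):
            z = cole(freqs, *p)
            return np.r_[(z - z_meas).real, (z - z_meas).imag]

        fit = least_squares(resid, x0=[400, 200, 5e-7, 0.8],
                            bounds=([0, 0, 1e-9, 0.3], [1e3, 1e3, 1e-3, 1.0]))
        print("R0, Rinf, tau, alpha =", fit.x)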

  4. Non-parametric causal inference for bivariate time series

    CERN Document Server

    McCracken, James M

    2015-01-01

    We introduce new quantities for exploratory causal inference between bivariate time series. The quantities, called penchants and leanings, are computationally straightforward to apply, follow directly from assumptions of probabilistic causality, do not depend on any assumed models for the time series generating process, and do not rely on any embedding procedures; these features may provide a clearer interpretation of the results than those from existing time series causality tools. The penchant and leaning are computed based on a structured method for computing probabilities.

  5. Analysis of influence on back-EMF based sensorless control of PMSM due to parameter variations and measurement errors

    DEFF Research Database (Denmark)

    Wang, Z.; Lu, K.; Ye, Y.;

    2011-01-01

    To achieve better performance of sensorless control of PMSM, a precise and stable estimation of rotor position and speed is required. Several parameter uncertainties and variable measurement errors may lead to estimation error, such as resistance and inductance variations due to temperature and flux saturation, current and voltage errors due to measurement uncertainties, and signal delay caused by hardware. This paper reveals some inherent principles for the performance of the back-EMF based sensorless algorithm embedded in a surface-mounted PMSM system adopting a vector control strategy, gives mathematical analysis and experimental results to support the principles, and quantifies the effects of each. It may serve as guidance for designers to minimize the estimation error and make proper on-line parameter estimations.

  6. [Evaluation and improvement of a measure of drug name similarity, vwhtfrag, in relation to subjective similarities and experimental error rates].

    Science.gov (United States)

    Tamaki, Hirofumi; Satoh, Hiroki; Hori, Satoko; Sawada, Yasufumi

    2012-01-01

    Confusion of drug names is one of the most common causes of drug-related medical errors. A similarity measure of drug names, "vwhtfrag", was developed to discriminate whether drug name pairs are likely to cause confusion errors, and to provide information that would be helpful to avoid errors. The aim of the present study was to evaluate and improve vwhtfrag. Firstly, we evaluated the correlation of vwhtfrag with subjective similarity or error rate of drug name pairs in psychological experiments. Vwhtfrag showed a higher correlation to subjective similarity (college students: r=0.84) or error rate than did other conventional similarity measures (htco, cos1, edit). Moreover, among name pairs with the same vwhtfrag, those whose initial character strings coincided had a higher subjective similarity than those whose end character strings coincided. Therefore, we developed a new similarity measure (vwhtfrag+), in which coincidence of initial character strings in name pairs is weighted 1.53 times more than coincidence of end character strings. Vwhtfrag+ showed a higher correlation to subjective similarity than did unmodified vwhtfrag. Further studies appear warranted to examine in detail whether vwhtfrag+ has superior ability to discriminate drug name pairs likely to cause confusion errors.
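
    The abstract does not give the vwhtfrag formula itself; the following toy function only illustrates the weighting idea, giving agreement at the start of two names 1.53 times the weight of agreement at the end. It is NOT the published vwhtfrag or vwhtfrag+ measure.

    ```python
    def weighted_name_similarity(a, b, head_weight=1.53):
        """Toy similarity that, like vwhtfrag+, weights coincidence of initial
        character strings 1.53 times more than coincidence of end strings.
        Purely illustrative; not the published formula."""
        def common_prefix(x, y):
            n = 0
            while n < min(len(x), len(y)) and x[n] == y[n]:
                n += 1
            return n
        head = common_prefix(a, b)              # shared initial characters
        tail = common_prefix(a[::-1], b[::-1])  # shared final characters
        denom = max(len(a), len(b))
        return (head_weight * head + tail) / ((head_weight + 1.0) * denom)

    print(weighted_name_similarity("amlodipine", "amlodipast"))  # shared head
    print(weighted_name_similarity("cefotiam", "cefsulodin"))    # weaker match
    ```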

  7. The Thirty Gigahertz Instrument Receiver for the QUIJOTE Experiment: Preliminary Polarization Measurements and Systematic-Error Analysis

    Directory of Open Access Journals (Sweden)

    Francisco J. Casas

    2015-08-01

    Full Text Available This paper presents preliminary polarization measurements and systematic-error characterization of the Thirty Gigahertz Instrument receiver developed for the QUIJOTE experiment. The instrument has been designed to measure the polarization of Cosmic Microwave Background radiation from the sky, obtaining the Q, U, and I Stokes parameters of the incoming signal simultaneously. Two kinds of linearly polarized input signals have been used as excitations in the polarimeter measurement tests in the laboratory; these show consistent results in terms of the Stokes parameters obtained. A measurement-based systematic-error characterization technique has been used in order to determine the possible sources of instrumental errors and to assist in the polarimeter calibration process.

  8. The Thirty Gigahertz Instrument Receiver for the QUIJOTE Experiment: Preliminary Polarization Measurements and Systematic-Error Analysis

    Science.gov (United States)

    Casas, Francisco J.; Ortiz, David; Villa, Enrique; Cano, Juan L.; Cagigas, Jaime; Pérez, Ana R.; Aja, Beatriz; Terán, J. Vicente; de la Fuente, Luisa; Artal, Eduardo; Hoyland, Roger; Génova-Santos, Ricardo

    2015-01-01

    This paper presents preliminary polarization measurements and systematic-error characterization of the Thirty Gigahertz Instrument receiver developed for the QUIJOTE experiment. The instrument has been designed to measure the polarization of Cosmic Microwave Background radiation from the sky, obtaining the Q, U, and I Stokes parameters of the incoming signal simultaneously. Two kinds of linearly polarized input signals have been used as excitations in the polarimeter measurement tests in the laboratory; these show consistent results in terms of the Stokes parameters obtained. A measurement-based systematic-error characterization technique has been used in order to determine the possible sources of instrumental errors and to assist in the polarimeter calibration process. PMID:26251906

  9. Measuring coverage in MNCH: total survey error and the interpretation of intervention coverage estimates from household surveys.

    Directory of Open Access Journals (Sweden)

    Thomas P Eisele

    Full Text Available Nationally representative household surveys are increasingly relied upon to measure maternal, newborn, and child health (MNCH) intervention coverage at the population level in low- and middle-income countries. Surveys are the best tool we have for this purpose and are central to national and global decision making. However, all survey point estimates have a certain level of error (total survey error), comprising sampling and non-sampling error, both of which must be considered when interpreting survey results for decision making. In this review, we discuss the importance of considering these errors when interpreting MNCH intervention coverage estimates derived from household surveys, using relevant examples from national surveys to provide context. Sampling error is usually thought of as the precision of a point estimate and is represented by 95% confidence intervals, which are measurable. Confidence intervals can inform judgments about whether estimated parameters are likely to be different from the real value of a parameter. We recommend, therefore, that confidence intervals for key coverage indicators should always be provided in survey reports. By contrast, the direction and magnitude of non-sampling error is almost always unmeasurable, and therefore unknown. Information error and bias are the most common sources of non-sampling error in household survey estimates and we recommend that they should always be carefully considered when interpreting MNCH intervention coverage based on survey data. Overall, we recommend that future research on measuring MNCH intervention coverage should focus on refining and improving survey-based coverage estimates to develop a better understanding of how results should be interpreted and used.
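
    A minimal sketch of the measurable, sampling-error side of this recommendation: a 95% confidence interval for an estimated coverage proportion, inflated by a design effect to reflect the cluster design of typical household surveys. The design effect of 2 and the coverage figures are hypothetical.

    ```python
    import math

    def coverage_ci(p_hat, n, deff=2.0, z=1.96):
        """95% CI for an estimated coverage proportion from a cluster survey.
        deff inflates the simple-random-sampling variance to account for the
        survey design (a value around 2 is a common, but hypothetical, choice)."""
        se = math.sqrt(deff * p_hat * (1.0 - p_hat) / n)
        return max(0.0, p_hat - z * se), min(1.0, p_hat + z * se)

    # e.g. 62% coverage of some intervention estimated from 800 households
    print(coverage_ci(0.62, 800))   # roughly (0.57, 0.67)
    ```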

  10. Analysis of the sources of error in the determination of sound power based on sound intensity measurements

    DEFF Research Database (Denmark)

    Santillan, Arturo Orozco; Jacobsen, Finn

    2010-01-01

    ...the resulting measurement uncertainty. The purpose of this paper is to analyze the effect of the most common sources of error in sound power determination based on sound intensity measurements. In particular, the influence of the scanning procedure used in approximating the surface integral of the intensity...

  11. Bias analysis and the simulation-extrapolation method for survival data with covariate measurement error under parametric proportional odds models.

    Science.gov (United States)

    Yi, Grace Y; He, Wenqing

    2012-05-01

    It has been well known that ignoring measurement error may result in substantially biased estimates in many contexts including linear and nonlinear regressions. For survival data with measurement error in covariates, there has been extensive discussion in the literature with the focus on proportional hazards (PH) models. Recently, research interest has extended to accelerated failure time (AFT) and additive hazards (AH) models. However, the impact of measurement error on other models, such as the proportional odds model, has received relatively little attention, although these models are important alternatives when PH, AFT, or AH models are not appropriate to fit data. In this paper, we investigate this important problem and study the bias induced by the naive approach of ignoring covariate measurement error. To adjust for the induced bias, we describe the simulation-extrapolation method. The proposed method enjoys a number of appealing features. Its implementation is straightforward and can be accomplished with minor modifications of existing software. More importantly, the proposed method does not require modeling the covariate process, which is quite attractive in practice. As the precise values of error-prone covariates are often not observable, any modeling assumption on such covariates has the risk of model misspecification, hence yielding invalid inferences if this happens. The proposed method is carefully assessed both theoretically and empirically. Theoretically, we establish the asymptotic normality for resulting estimators. Numerically, simulation studies are carried out to evaluate the performance of the estimators as well as the impact of ignoring measurement error, along with an application to a data set arising from the Busselton Health Study. Sensitivity of the proposed method to misspecification of the error model is studied as well.
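
    A minimal sketch of the simulation-extrapolation idea for the simplest case, a linear regression slope with known measurement error variance: add extra noise at several multiples lambda of the error variance, trace how the naive estimate degrades, and extrapolate the trend back to lambda = -1 (no error). Grid, simulation counts and data are arbitrary illustrative choices, not the authors' survival-model implementation.

    ```python
    import numpy as np

    def simex_slope(w, y, sigma_u, lambdas=(0.5, 1.0, 1.5, 2.0), n_sim=200, seed=1):
        """Simulation-extrapolation for a simple linear regression slope when
        the covariate w = x + u is measured with known error std sigma_u."""
        rng = np.random.default_rng(seed)
        lam_grid = [0.0] + list(lambdas)
        mean_slopes = []
        for lam in lam_grid:
            slopes = []
            for _ in range(n_sim if lam > 0 else 1):
                w_star = w + np.sqrt(lam) * sigma_u * rng.standard_normal(w.size)
                slopes.append(np.polyfit(w_star, y, 1)[0])   # naive fit
            mean_slopes.append(np.mean(slopes))
        # quadratic extrapolation of the naive slope back to lambda = -1
        coef = np.polyfit(lam_grid, mean_slopes, 2)
        return np.polyval(coef, -1.0)

    rng = np.random.default_rng(2)
    x = rng.standard_normal(2000)
    y = 1.0 + 2.0 * x + 0.5 * rng.standard_normal(2000)
    w = x + 0.8 * rng.standard_normal(2000)       # error-prone covariate
    print(np.polyfit(w, y, 1)[0])                 # attenuated, near 2/1.64 = 1.22
    print(simex_slope(w, y, sigma_u=0.8))         # closer to the true slope 2.0
    ```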

  12. A Bivariate Chebyshev Spectral Collocation Quasilinearization Method for Nonlinear Evolution Parabolic Equations

    Directory of Open Access Journals (Sweden)

    S. S. Motsa

    2014-01-01

    Full Text Available This paper presents a new method for solving higher order nonlinear evolution partial differential equations (NPDEs). The method combines quasilinearisation, the Chebyshev spectral collocation method, and bivariate Lagrange interpolation. In this paper, we use the method to solve several nonlinear evolution equations, such as the modified KdV-Burgers equation, highly nonlinear modified KdV equation, Fisher's equation, Burgers-Fisher equation, Burgers-Huxley equation, and the Fitzhugh-Nagumo equation. The results are compared with known exact analytical solutions from literature to confirm accuracy, convergence, and effectiveness of the method. There is congruence between the numerical results and the exact solutions to a high order of accuracy. Tables were generated to present the order of accuracy of the method; convergence graphs to verify convergence of the method and error graphs are presented to show the excellent agreement between the results from this study and the known results from literature.
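
    One building block of such schemes is the Chebyshev collocation differentiation matrix, which turns spatial derivatives into matrix-vector products. The sketch below follows Trefethen's standard construction; it is only one ingredient, not the authors' full bivariate quasilinearization method.

    ```python
    import numpy as np

    def cheb(n):
        """Chebyshev differentiation matrix D and Gauss-Lobatto points x
        (Trefethen, Spectral Methods in MATLAB, adapted to numpy)."""
        if n == 0:
            return np.zeros((1, 1)), np.array([1.0])
        x = np.cos(np.pi * np.arange(n + 1) / n)
        c = np.hstack([2.0, np.ones(n - 1), 2.0]) * (-1.0) ** np.arange(n + 1)
        X = np.tile(x, (n + 1, 1)).T
        dX = X - X.T
        D = np.outer(c, 1.0 / c) / (dX + np.eye(n + 1))
        D -= np.diag(D.sum(axis=1))   # diagonal from the row-sum identity
        return D, x

    D, x = cheb(16)
    u = np.sin(np.pi * x)
    err = np.max(np.abs(D @ u - np.pi * np.cos(np.pi * x)))
    print("max derivative error: %.2e" % err)   # spectrally small
    ```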

  13. Noise Removal From Microarray Images Using Maximum a Posteriori Based Bivariate Estimator

    Directory of Open Access Journals (Sweden)

    A.Sharmila Agnal

    2013-01-01

    Full Text Available Microarray images contain information about thousands of genes in an organism, and these images are affected by several types of noise. The noise affects the circular edges of spots and thus degrades image quality. Hence noise removal is the first step of cDNA microarray image analysis for obtaining gene expression levels and identifying infected cells. The Dual Tree Complex Wavelet Transform (DT-CWT) is preferred for denoising microarray images due to its properties like improved directional selectivity and near shift-invariance. In this paper, bivariate estimators, namely Linear Minimum Mean Squared Error (LMMSE) and Maximum A Posteriori (MAP), derived by applying the DT-CWT, are used for denoising microarray images. Experimental results show that the MAP-based denoising method outperforms existing denoising techniques for microarray images.
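
    Bivariate MAP estimators in this family typically shrink a wavelet coefficient jointly with its parent. The sketch below implements the well-known bivariate shrinkage rule of Sendur and Selesnick, which this kind of estimator resembles; the paper's exact estimator and the DT-CWT machinery may differ, and the data here are synthetic coefficients, not microarray subbands.

    ```python
    import numpy as np

    def bivariate_map_shrink(y1, y2, sigma_n, sigma):
        """Bivariate MAP shrinkage (Sendur & Selesnick style): shrink a child
        coefficient y1 jointly with its parent y2, given noise std sigma_n and
        signal std sigma."""
        r = np.sqrt(y1 ** 2 + y2 ** 2)
        gain = np.maximum(r - np.sqrt(3.0) * sigma_n ** 2 / sigma, 0.0)
        return gain / np.maximum(r, 1e-12) * y1

    # toy demonstration on synthetic coefficient pairs
    rng = np.random.default_rng(3)
    clean = rng.laplace(scale=2.0, size=10000)
    parent = clean + 0.2 * rng.standard_normal(10000)
    noisy = clean + 1.0 * rng.standard_normal(10000)
    den = bivariate_map_shrink(noisy, parent, sigma_n=1.0, sigma=np.std(clean))
    print(np.mean((noisy - clean) ** 2), np.mean((den - clean) ** 2))
    ```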

  14. A bivariate Chebyshev spectral collocation quasilinearization method for nonlinear evolution parabolic equations.

    Science.gov (United States)

    Motsa, S S; Magagula, V M; Sibanda, P

    2014-01-01

    This paper presents a new method for solving higher order nonlinear evolution partial differential equations (NPDEs). The method combines quasilinearisation, the Chebyshev spectral collocation method, and bivariate Lagrange interpolation. In this paper, we use the method to solve several nonlinear evolution equations, such as the modified KdV-Burgers equation, highly nonlinear modified KdV equation, Fisher's equation, Burgers-Fisher equation, Burgers-Huxley equation, and the Fitzhugh-Nagumo equation. The results are compared with known exact analytical solutions from literature to confirm accuracy, convergence, and effectiveness of the method. There is congruence between the numerical results and the exact solutions to a high order of accuracy. Tables were generated to present the order of accuracy of the method; convergence graphs to verify convergence of the method and error graphs are presented to show the excellent agreement between the results from this study and the known results from literature.

  15. Investigation of errors by radiological technologists and evaluation of preventive measures: general and mobile X-ray examinations.

    Science.gov (United States)

    Igarashi, Hiroshi; Fukushi, Masahiro; Shinoda, Naoki; Miyamoto, Akira; Hirata, Masaharu; Ishidate, Miyako; Kuraishi, Masahiko; Doi, Kunio

    2010-07-01

    The first objective of this study was to identify the errors behind incidents and accidents that occurred in general and mobile X-ray examinations. Based on the analysis of the results, the second purpose was to propose useful measures to prevent such errors. A total of 553 radiological technologists in the Gunma Prefecture were surveyed on their experience with errors related to general and mobile X-ray examinations. The questionnaire asked for descriptions of errors experienced during examinations and the responses given (multiple answers possible), and for evaluations of the degree of busyness on a five-point scale. A total of 115 questionnaires were returned. Analysis revealed that there was no significant relationship between errors and degree of busyness for either general or mobile examinations. The most frequent error in both general and mobile examinations was X-raying the wrong patient, the cause of which was cited as failure to confirm the patient's name. After using the solution priority number to evaluate the proposed preventive measures, we conclude that finger-pointing and calling, independent double-checks, and verbal self-confirmation would be the simplest and most easily implemented countermeasures.

  16. Local measurement of error field using naturally rotating tearing mode dynamics in EXTRAP T2R

    CERN Document Server

    Sweeney, R M; Brunsell, P; Fridström, R; Volpe, F A

    2016-01-01

    An error field (EF) detection technique using the amplitude modulation of a naturally rotating tearing mode (TM) is developed and validated in the EXTRAP T2R reversed field pinch. The technique was used to identify intrinsic EFs of $m/n = 1/-12$, where $m$ and $n$ are the poloidal and toroidal mode numbers. The effect of the EF and of a resonant magnetic perturbation (RMP) on the TM, in particular on amplitude modulation, is modeled with a first-order solution of the modified Rutherford equation. In the experiment, the TM amplitude is measured as a function of the toroidal angle as the TM rotates rapidly in the presence of an unknown EF and a known, deliberately applied RMP. The RMP amplitude is fixed while the toroidal phase is varied from one discharge to the other, completing a full toroidal scan. Using three such scans with different RMP amplitudes, the EF amplitude and phase are inferred from the phases at which the TM amplitude maximizes. The estimated EF amplitude is consistent with other estimates (e.g. based on the best EF-cancelling RMP, resulting in the fastest TM rotation).

  17. Statistical theory for estimating sampling errors of regional radiation averages based on satellite measurements

    Science.gov (United States)

    Smith, G. L.; Bess, T. D.; Minnis, P.

    1983-01-01

    The processes which determine the weather and climate are driven by the radiation received by the earth and the radiation subsequently emitted. A knowledge of the absorbed and emitted components of radiation is thus fundamental for the study of these processes. In connection with the desire to improve the quality of long-range forecasting, NASA is developing the Earth Radiation Budget Experiment (ERBE), consisting of a three-channel scanning radiometer and a package of nonscanning radiometers. A set of these instruments is to be flown on both the NOAA-F and NOAA-G spacecraft, in sun-synchronous orbits, and on an Earth Radiation Budget Satellite. The purpose of the scanning radiometer is to obtain measurements from which the average reflected solar radiant exitance and the average earth-emitted radiant exitance at a reference level can be established. The estimate of regional average exitance obtained will not exactly equal the true value of the regional average exitance, but will differ due to spatial sampling. A method is presented for evaluating this spatial sampling error.

  18. Local measurement of error field using naturally rotating tearing mode dynamics in EXTRAP T2R

    Science.gov (United States)

    Sweeney, R. M.; Frassinetti, L.; Brunsell, P.; Fridström, R.; Volpe, F. A.

    2016-12-01

    An error field (EF) detection technique using the amplitude modulation of a naturally rotating tearing mode (TM) is developed and validated in the EXTRAP T2R reversed field pinch. The technique was used to identify intrinsic EFs of m/n  =  1/-12, where m and n are the poloidal and toroidal mode numbers. The effect of the EF and of a resonant magnetic perturbation (RMP) on the TM, in particular on amplitude modulation, is modeled with a first-order solution of the modified Rutherford equation. In the experiment, the TM amplitude is measured as a function of the toroidal angle as the TM rotates rapidly in the presence of an unknown EF and a known, deliberately applied RMP. The RMP amplitude is fixed while the toroidal phase is varied from one discharge to the other, completing a full toroidal scan. Using three such scans with different RMP amplitudes, the EF amplitude and phase are inferred from the phases at which the TM amplitude maximizes. The estimated EF amplitude is consistent with other estimates (e.g. based on the best EF-cancelling RMP, resulting in the fastest TM rotation). A passive variant of this technique is also presented, where no RMPs are applied, and the EF phase is deduced.

  19. Identification of Error Sources in High Precision Weight Measurements of Gyroscopes

    CERN Document Server

    Lőrincz, I

    2015-01-01

    A number of weight anomalies have been reported in the past with respect to gyroscopes. A paper in Physical Review Letters gained much attention when Japanese scientists announced that a gyroscope loses up to $0.005\%$ of its weight when spinning in the clockwise direction only, with the gyroscope's axis vertical. Immediately afterwards, a number of other teams tried to replicate the effect, obtaining a null result. It was suggested that the effect reported by the Japanese was probably due to a vibration artifact; however, no final conclusion on the real cause has been obtained. We decided to build a dedicated high-precision setup to test weight anomalies of spinning gyroscopes in various configurations. A number of error sources, like precession and vibration, and the nature of their influence on the measurements have been clearly identified, which led to the conclusive explanation of the conflicting reports. We found no anomaly within $\Delta m/m < 2.6 \times 10^{-6}$, valid for both horizon...

  20. A new analysis of fine-structure constant measurements and modelling errors from quasar absorption lines

    CERN Document Server

    Wilczynska, Michael R; King, Julian A; Murphy, Michael T; Bainbridge, Matthew B; Flambaum, Victor V

    2015-01-01

    We present an analysis of 23 absorption systems along the lines of sight towards 18 quasars in the redshift range $0.4 \leq z_{abs} \leq 2.3$, observed on the Very Large Telescope (VLT) using the Ultraviolet and Visual Echelle Spectrograph (UVES). Considering both statistical and systematic error contributions we find a robust estimate of the weighted mean deviation of the fine-structure constant from its current laboratory value of $\Delta\alpha/\alpha = (0.22 \pm 0.23) \times 10^{-5}$, consistent with the dipole variation reported in Webb et al. and King et al. This paper also examines modelling methodologies and systematic effects. In particular we focus on the consequences of fitting quasar absorption systems with too few absorbing components and of selectively fitting only the stronger components in an absorption complex. We show that using insufficient continuum regions around an absorption complex causes a significant increase in the scatter of a sample of $\Delta\alpha/\alpha$ measurements, th...

  1. Perceived vs. measured effects of advanced cockpit systems on pilot workload and error: are pilots' beliefs misaligned with reality?

    Science.gov (United States)

    Casner, Stephen M

    2009-05-01

    Four types of advanced cockpit systems were tested in an in-flight experiment for their effect on pilot workload and error. Twelve experienced pilots flew conventional cockpit and advanced cockpit versions of the same make and model airplane. In both airplanes, the experimenter dictated selected combinations of cockpit systems for each pilot to use while soliciting subjective workload measures and recording any errors that pilots made. The results indicate that the use of a GPS navigation computer helped reduce workload and errors during some phases of flight but raised them in others. Autopilots helped reduce some aspects of workload in the advanced cockpit airplane but did not appear to reduce workload in the conventional cockpit. Electronic flight and navigation instruments appeared to have no effect on workload or error. Despite this modest showing for advanced cockpit systems, pilots stated an overwhelming preference for using them during all phases of flight.

  2. Comparison of Error Estimations by DERs in One-Port S and SLO Calibrated VNA Measurements and Application

    CERN Document Server

    Yannopoulou, Nikolitsa

    2011-01-01

    In order to demonstrate the usefulness of the only existing method for systematic error estimation in VNA (Vector Network Analyzer) measurements using complex DERs (Differential Error Regions), we compare one-port VNA measurements after the two well-known calibration techniques: the quick reflection response, which uses only a single S (Short circuit) standard, and the time-consuming full one-port, which uses a triple of SLO standards (Short circuit, matching Load, Open circuit). For both calibration techniques, the comparison concerns: (a) a 3D geometric representation of the difference between VNA readings and measurements, and (b) a number of presentation figures for the DERs and their polar DEIs (Differential Error Intervals) of the reflection coefficient, as well as the DERs and their rectangular DEIs of the corresponding input impedance. In this paper, we present the application of this method to an AUT (Antenna Under Test) selected to highlight the existence of practical cases in which the time ...

  3. Linear time-dependent reference intervals where there is measurement error in the time variable: a parametric approach.

    Science.gov (United States)

    Gillard, Jonathan

    2015-12-01

    This article re-examines parametric methods for the calculation of time-specific reference intervals where there is measurement error present in the time covariate. Previously published work has commonly been based on the standard ordinary least squares approach, weighted where appropriate. In fact, this is an incorrect method when measurement errors are present, and in this article, we show that the use of this approach may, in certain cases, lead to referral patterns that vary with different values of the covariate. Thus, it would not be the case that all patients are treated equally; some subjects would be more likely to be referred than others, hence violating the principle of equal treatment required by the International Federation for Clinical Chemistry. We show, by using measurement error models, that reference intervals are produced that satisfy the requirement for equal treatment for all subjects.
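
    A standard errors-in-variables alternative to ordinary least squares is Deming regression, which corrects the slope attenuation caused by error in the covariate. The sketch below is a generic illustration with hypothetical data and a known error-variance ratio; the article's own parametric approach may differ in detail.

    ```python
    import numpy as np

    def deming_slope(x, y, delta=1.0):
        """Deming regression slope for y = a + b*x when both variables carry
        measurement error; delta is the ratio of error variances var_y/var_x."""
        sxx = np.var(x, ddof=1)
        syy = np.var(y, ddof=1)
        sxy = np.cov(x, y, ddof=1)[0, 1]
        return (syy - delta * sxx
                + np.sqrt((syy - delta * sxx) ** 2
                          + 4.0 * delta * sxy ** 2)) / (2.0 * sxy)

    rng = np.random.default_rng(6)
    age_true = rng.uniform(20, 80, 500)
    analyte = 5.0 + 0.1 * age_true + rng.standard_normal(500)
    age_obs = age_true + 10.0 * rng.standard_normal(500)   # error in the time variable
    print(np.polyfit(age_obs, analyte, 1)[0])              # attenuated OLS slope
    print(deming_slope(age_obs, analyte, delta=1.0 / 100.0))  # close to true 0.1
    ```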

  4. Analysis of the influence of measurement methods on measurement error

    Institute of Scientific and Technical Information of China (English)

    陈华清

    2012-01-01

    Based on work practice, this paper analyses the causes of error in measurement results. Besides errors due to the apparatus, the environment, and personnel, the influence of the measurement method itself on the result is also evident. The paper then focuses on four measurement methods for reducing measurement error: the difference method, the substitution method, the compensation method, and the symmetric observation method. It is hoped that practitioners will draw on these ideas and carry out further research on measurement errors and measurement methods, exploring effective schemes to reduce errors arising in the measurement process, thereby improving the accuracy of the whole measurement, guaranteeing the scientific rigour of measurement work, and further promoting the continuous and healthy development of the measurement industry.

  5. SU-E-J-88: The Study of Setup Error Measured by CBCT in Postoperative Radiotherapy for Cervical Carcinoma

    Energy Technology Data Exchange (ETDEWEB)

    Runxiao, L; Aikun, W; Xiaomei, F; Jing, W [The Fourth Hospital of Hebei Medical University, Shijiazhuang, Hebei (China)

    2015-06-15

    Purpose: To compare two registration methods in CBCT-guided radiotherapy for cervical carcinoma, analyze the setup errors and registration methods, and determine the margin required for extending the clinical target volume (CTV) to the planning target volume (PTV). Methods: Twenty patients with cervical carcinoma were enrolled. All patients underwent CT simulation in the supine position. The CT images were transferred to the treatment planning system, the CTV, PTV and organs at risk (OAR) were defined, and the data were transmitted to the XVI workstation. CBCT scans were performed before radiotherapy and registered to the planning CT images using bone and grey-value registration methods. The two methods were compared to obtain left-right (X), superior-inferior (Y) and anterior-posterior (Z) setup errors, and the margins required for CTV to PTV were calculated. Results: Setup errors were unavoidable in postoperative cervical carcinoma irradiation. The setup errors measured by the bone method (systematic ± random) in the X (left-right), Y (superior-inferior) and Z (anterior-posterior) directions were (0.24±3.62), (0.77±5.05) and (0.13±3.89) mm, respectively; those measured by the grey-value method were (0.31±3.93), (0.85±5.16) and (0.21±4.12) mm, respectively. The spatial distribution of setup error was largest in the Y direction. The margins were 4 mm on the X axis, 6 mm on the Y axis and 4 mm on the Z axis, respectively. The two registration methods gave similar results and both are highly recommended. Conclusion: Both bone and grey-value registration methods offer an accurate measure of setup error. PTV margins of 4 mm, 6 mm and 4 mm in the X, Y and Z directions are suggested for postoperative radiotherapy for cervical carcinoma.
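
    The abstract does not state which margin recipe was used; a widely used choice is the van Herk formula M = 2.5*Sigma + 0.7*sigma, which approximately reproduces the quoted margins if the reported "systematic ± random" values are taken as Sigma and sigma. A minimal sketch under that assumption:

    ```python
    def van_herk_margin(Sigma, sigma):
        """Common CTV-to-PTV margin recipe (van Herk): M = 2.5*Sigma + 0.7*sigma,
        with Sigma the systematic and sigma the random setup error, in mm."""
        return 2.5 * Sigma + 0.7 * sigma

    # bone-registration setup errors from the abstract (systematic, random), mm
    for axis, (Sigma, sigma) in {"X": (0.24, 3.62),
                                 "Y": (0.77, 5.05),
                                 "Z": (0.13, 3.89)}.items():
        print(axis, round(van_herk_margin(abs(Sigma), sigma), 1), "mm")
    # prints roughly X 3.1, Y 5.5, Z 3.0 -- consistent with the rounded
    # 4/6/4 mm margins quoted in the abstract
    ```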

  6. Measurement-based analysis of error latency. [in computer operating system

    Science.gov (United States)

    Chillarege, Ram; Iyer, Ravishankar K.

    1987-01-01

    This paper demonstrates a practical methodology for the study of error latency under a real workload. The method is illustrated with sampled data on the physical memory activity, gathered by hardware instrumentation on a VAX 11/780 during the normal workload cycle of the installation. These data are used to simulate fault occurrence and to reconstruct the error discovery process in the system. The technique provides a means to study the system under different workloads and for multiple days. An approach to determine the percentage of undiscovered errors is also developed and a verification of the entire methodology is performed. This study finds that the mean error latency, in the memory containing the operating system, varies by a factor of 10 to 1 (in hours) between the low and high workloads. It is found that of all errors occurring within a day, 70 percent are detected in the same day, 82 percent within the following day, and 91 percent within the third day. The increase in failure rate due to latency is not so much a function of remaining errors but is dependent on whether or not there is a latent error.

  7. Impact of shrinking measurement error budgets on qualification metrology sampling and cost

    Science.gov (United States)

    Sendelbach, Matthew; Sarig, Niv; Wakamoto, Koichi; Kim, Hyang Kyun (Helen); Isbester, Paul; Asano, Masafumi; Matsuki, Kazuto; Vaid, Alok; Osorio, Carmen; Archie, Chas

    2014-04-01

    When designing an experiment to assess the accuracy of a tool as compared to a reference tool, semiconductor metrologists are often confronted with the situation that they must decide on the sampling strategy before the measurements begin. This decision is usually based largely on the previous experience of the metrologist and the available resources, and not on the statistics that are needed to achieve acceptable confidence limits on the final result. This paper shows a solution to this problem, called inverse TMU analysis, by presenting statistically-based equations that allow the user to estimate the needed sampling after appropriate inputs are provided, enabling important "risk vs. reward" sampling, cost, and equipment decisions. Application examples using experimental data from scatterometry and critical dimension scanning electron microscope (CD-SEM) tools are used first to demonstrate how the inverse TMU analysis methodology can be used to make intelligent sampling decisions before the start of the experiment, and then to reveal why low sampling can lead to unstable and misleading results. A model is developed that can help an experimenter minimize the costs associated both with increased sampling and with making wrong decisions caused by insufficient sampling. A second cost model is described that reveals the inadequacy of current TEM (Transmission Electron Microscopy) sampling practices and the enormous costs associated with TEM sampling that is needed to provide reasonable levels of certainty in the result. These high costs reach into the tens of millions of dollars for TEM reference metrology as the measurement error budgets reach angstrom levels. The paper concludes with strategies on how to manage and mitigate these costs.

  8. Temperature measurement errors with thermocouples inside 27 MHz current source interstitial hyperthermia applicators.

    Science.gov (United States)

    Kaatee, R S; Crezee, H; Visser, A G

    1999-06-01

    The multielectrode current source (MECS) interstitial hyperthermia (IHT) system uses thermocouple thermometry. To obtain a homogeneous temperature distribution and to limit the number of traumas due to the implanted catheters, most catheters are used for both heating and thermometry. Implications of temperature measurement inside applicators are discussed. In particular, the impact of self-heating of both the applicator and the afterloading catheter were investigated. A one-dimensional cylindrical model was used to compute the difference between the temperature rise inside the applicators (deltaTin) and in the tissue just outside the afterloading catheter (deltaTout) as a function of power absorption in the afterloading catheter, self-heating of the applicator and the effective thermal conductivity of the surrounding tissue. Furthermore, the relative artefact (ERR), i.e., (deltaTin - deltaTout)/deltaTin, was measured in a muscle-equivalent agar phantom at different positions in a dual-electrode applicator and for different catheter materials. A method to estimate the tissue temperature by power-off temperature decay measurement inside the applicator was investigated. Using clinical dual-electrode applicators in standard brachytherapy catheters in a muscle-equivalent phantom, deltaTin is typically twice as high as deltaTout. The main reason for this difference is self-heating of the thin feeder wires in the centre of the applicator. The measurement error caused by energy absorption in the afterloading catheter is small, i.e., even for materials with a high dielectric loss factor it is less than 5%. About 5 s after power has been switched off, Tin in the electrodes represents the maximum tissue temperature just before power-off. This delay time (t(delay)) and ERR are independent of Tin. However, they do depend on the thermal properties of the tissue. Therefore, ERR and t(delay) and their stability in perfused tissues have to be investigated to enable a reliable...

  9. Array processing——a new method to detect and correct errors on array resistivity logging tool measurements

    Institute of Scientific and Technical Information of China (English)

    Philip D.RABINOWITZ; Zhiqiang ZHOU

    2007-01-01

    In recent years more and more multi-array logging tools, such as the array induction and the array laterolog, are applied in place of conventional logging tools, resulting in increased resolution, better radial and vertical sounding capability, and other features. Multi-array logging tools acquire several times more individual measurements than conventional logging tools. In addition to the new information contained in these data, there is a certain redundancy among the measurements. The sum of the measurements actually composes a large matrix. Provided the measurements are error-free, the elements of this matrix show certain consistencies. Taking advantage of these consistencies, an innovative method is developed to detect and correct errors in the raw measurements of array resistivity logging tools, and to evaluate the quality of the data. The method can be described in several steps. First, data consistency patterns are identified based on the physics of the measurements. Second, the measurements are compared against the consistency patterns for error and bad-data detection. Third, the erroneous data are eliminated and the measurements are re-constructed according to the consistency patterns. Finally, the data quality is evaluated by comparing the raw measurements with the re-constructed measurements. The method can be applied to all array-type logging tools, such as the array induction tool and the array resistivity tool. This paper describes the method and illustrates its application with the High Definition Lateral Log (HDLL, Baker Atlas) instrument. To demonstrate the efficiency of the method, several field examples are shown and discussed.

  10. Simulation on measurement of five-DOF motion errors of high precision spindle with cylindrical capacitive sensor

    Science.gov (United States)

    Zhang, Min; Wang, Wen; Xiang, Kui; Lu, Keqing; Fan, Zongwei

    2015-02-01

    This paper describes a novel cylindrical capacitive sensor (CCS) to measure five degree-of-freedom (DOF) spindle motion errors. The operating principle and mathematical models of the CCS are presented. Using Ansoft Maxwell software to calculate the capacitances in different configurations, structural parameters of the end-face electrode are then investigated. Radial, axial and tilt motions are also simulated by making comparisons between the given displacements and the simulation values. It is found that the proposed CCS has a high accuracy for measuring radial motion error when the average eccentricity is about 15 μm. In addition, the maximum relative error of axial displacement is 1.3% when the axial motion is within [0.7, 1.3] mm, and the maximum relative error of tilt displacement is 1.6% as the rotor tilts around a single axis within [-0.6, 0.6]°. Finally, the feasibility of the CCS for measuring five-DOF motion errors is verified through simulation and analysis.

  11. Use of Two-Part Regression Calibration Model to Correct for Measurement Error in Episodically Consumed Foods in a Single-Replicate Study Design: EPIC Case Study

    NARCIS (Netherlands)

    Agogo, G.O.; Voet, van der H.; Veer, van 't P.; Ferrari, P.; Leenders, M.; Muller, D.C.; Sánchez-Cantalejo, E.; Bamia, C.; Braaten, T.; Knüppel, S.; Johansson, I.; Eeuwijk, van F.A.; Boshuizen, H.C.

    2014-01-01

    In epidemiologic studies, measurement error in dietary variables often attenuates association between dietary intake and disease occurrence. To adjust for the attenuation caused by error in dietary intake, regression calibration is commonly used. To apply regression calibration, unbiased reference m...

  12. Use of Two-Part Regression Calibration Model to Correct for Measurement Error in Episodically Consumed Foods in a Single-Replicate Study Design : EPIC Case Study

    NARCIS (Netherlands)

    Agogo, George O; der Voet, Hilko van; Veer, Pieter Van't; Ferrari, Pietro; Leenders, Max; Muller, David C; Sánchez-Cantalejo, Emilio; Bamia, Christina; Braaten, Tonje; Knüppel, Sven; Johansson, Ingegerd; van Eeuwijk, Fred A; Boshuizen, Hendriek

    2014-01-01

    In epidemiologic studies, measurement error in dietary variables often attenuates association between dietary intake and disease occurrence. To adjust for the attenuation caused by error in dietary intake, regression calibration is commonly used. To apply regression calibration, unbiased reference m...

  13. Solving Inverse Radiation Transport Problems with Multi-Sensor Data in the Presence of Correlated Measurement and Modeling Errors

    Energy Technology Data Exchange (ETDEWEB)

    Thomas, Edward V. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Stork, Christopher L. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Mattingly, John K. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2015-07-01

    Inverse radiation transport focuses on identifying the configuration of an unknown radiation source given its observed radiation signatures. The inverse problem is traditionally solved by finding the set of transport model parameter values that minimizes a weighted sum of the squared differences by channel between the observed signature and the signature predicted by the hypothesized model parameters. The weights are inversely proportional to the sum of the variances of the measurement and model errors at a given channel. The traditional implicit (often inaccurate) assumption is that the errors (differences between the modeled and observed radiation signatures) are independent across channels. Here, an alternative method that accounts for correlated errors between channels is described and illustrated using an inverse problem based on the combination of gamma and neutron multiplicity counting measurements.
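
    In essence, accounting for correlated errors amounts to replacing the per-channel weights with the inverse of a full error covariance matrix. A minimal sketch of that misfit computation (toy numbers, not the authors' implementation):

    ```python
    import numpy as np

    def correlated_chi2(observed, modeled, cov):
        """Weighted misfit r^T C^{-1} r with a full error covariance C; this
        reduces to the usual per-channel weighted sum when C is diagonal."""
        r = observed - modeled
        return float(r @ np.linalg.solve(cov, r))

    # toy 3-channel example with strongly correlated errors in channels 1-2
    cov = np.array([[1.0, 0.8, 0.0],
                    [0.8, 1.0, 0.0],
                    [0.0, 0.0, 2.0]])
    obs = np.array([10.2, 9.7, 5.1])
    mod = np.array([10.0, 10.0, 5.0])
    print(correlated_chi2(obs, mod, cov))
    print(correlated_chi2(obs, mod, np.diag(np.diag(cov))))  # independence assumption
    ```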

  14. Improved error estimates of a discharge algorithm for remotely sensed river measurements: Test cases on Sacramento and Garonne Rivers

    Science.gov (United States)

    Yoon, Yeosang; Garambois, Pierre-André; Paiva, Rodrigo C. D.; Durand, Michael; Roux, Hélène; Beighley, Edward

    2016-01-01

    We present an improvement to a previously presented algorithm that used a Bayesian Markov Chain Monte Carlo method for estimating river discharge from remotely sensed observations of river height, width, and slope. We also present an error budget for discharge calculations from the algorithm. The algorithm may be utilized by the upcoming Surface Water and Ocean Topography (SWOT) mission. We present a detailed evaluation of the method using synthetic SWOT-like observations (i.e., SWOT and AirSWOT, an airborne version of SWOT). The algorithm is evaluated using simulated AirSWOT observations over the Sacramento and Garonne Rivers, which have differing hydraulic characteristics. The algorithm is also explored using SWOT observations over the Sacramento River. SWOT and AirSWOT height, width, and slope observations are simulated by corrupting the "true" hydraulic modeling results with instrument error. Algorithm discharge root mean square error (RMSE) was 9% for the Sacramento River and 15% for the Garonne River for the AirSWOT case using expected observation error. The discharge uncertainty calculated from Manning's equation was 16.2% and 17.1%, respectively. For the SWOT scenario, the RMSE and uncertainty of the discharge estimate for the Sacramento River were 15% and 16.2%, respectively. A method based on the Kalman filter to correct errors of discharge estimates was shown to improve algorithm performance. From the error budget, the primary source of uncertainty was the a priori uncertainty of bathymetry and roughness parameters. Sensitivity to measurement errors was found to be a function of river characteristics. For example, the steeper Garonne River is less sensitive to slope errors than the flatter Sacramento River.
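
    The Manning's-equation step that anchors the discharge uncertainty can be sketched for the simplest case of a wide rectangular channel; roughness and geometry below are hypothetical, and the last line illustrates why slope errors matter less on steep rivers (discharge depends only on the square root of slope).

    ```python
    import numpy as np

    def manning_discharge(width, depth, slope, n=0.03):
        """Manning's equation Q = (1/n) A R^(2/3) S^(1/2) for a rectangular
        channel; n is a hypothetical roughness coefficient."""
        area = width * depth
        radius = area / (width + 2.0 * depth)   # hydraulic radius
        return area * radius ** (2.0 / 3.0) * np.sqrt(slope) / n

    q = manning_discharge(width=150.0, depth=3.0, slope=1e-4)
    # crude sensitivity: a 10% error in slope changes Q by about 5%
    print(q, manning_discharge(150.0, 3.0, 1.1e-4) / q)
    ```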

  15. A study on fatigue measurement of operators for human error prevention in NPPs

    Energy Technology Data Exchange (ETDEWEB)

    Ju, Oh Yeon; Il, Jang Tong; Meiling, Luo; Hee, Lee Young [KAERI, Daejeon (Korea, Republic of)

    2012-10-15

    The identification and analysis of individual factors of operators, which are among the various causes of degraded human performance, is not easy in NPPs. Individual factors for operators include work type (including shift work), environment, personality, qualification, training, education, cognition, fatigue, job stress, workload, etc. Research at the Finnish Institute of Occupational Health (FIOH) reported that 'burnout (extreme fatigue)' is related to alcohol-dependent habits and must be dealt with using a stress management program. The USNRC (U.S. Nuclear Regulatory Commission) developed FFD (Fitness for Duty) requirements to improve task efficiency and prevent human errors. 'Managing Fatigue' in 10 CFR 26 is presented as a set of requirements to control operator fatigue in NPPs. The committee explained that excessive fatigue is due to stressful work environments, working hours, shifts, sleep disorders, and unstable circadian rhythms. In addition, the International Labour Organization (ILO) developed and suggested a checklist to manage fatigue and job stress. In Korea, a systematic evaluation approach is presented in the Final Safety Analysis Report (FSAR) chapter 18, Human Factors, in the licensing process. However, it is focused almost entirely on interface design such as the HMI (Human Machine Interface), not on individual factors. In particular, because Korea is in the process of exporting NPPs to the UAE, the development and establishment of fatigue management techniques is important and urgent in order to present the technical standards and FFD criteria to the UAE. It is also anticipated that the domestic regulatory body will apply the FFD program as a regulatory requirement, so preparation for that situation is required. In this paper, previous research is surveyed to find fatigue measurement and evaluation methods for operators in high-reliability industries. This study also reviews the NRC report and discusses the causal factors and...

  16. Variations in the Bivariate Brightness Distribution with different galaxy types

    CERN Document Server

    Cross, N; Lemon, D; Liske, J; Cross, Nicholas; Driver, Simon; Lemon, David; Liske, Jochen

    2002-01-01

    We present Bivariate Brightness Distributions (BBDs) for four spectral types discriminated by the 2dFGRS. We discuss the photometry and completeness of the 2dFGRS using a deep, wide-field CCD imaging survey. We find that there is a strong luminosity-surface brightness correlation amongst galaxies with medium to strong emission features, with gradient $\beta_{\mu}=0.25\pm0.05$ and width $\sigma_{\mu}=0.56\pm0.01$. Strong absorption-line galaxies show a bimodal distribution, with no correlation between luminosity and surface brightness.

  17. Analysis of Factors Influencing Vibration Measurement Error

    Institute of Scientific and Technical Information of China (English)

    鲍耀翔; 陈武

    2016-01-01

    With the progress of science and technology, increasing attention is being paid to vibration measurement errors in practical operation. In order to meet the accuracy requirements of vibration measurement and avoid the occurrence of errors, this paper analyses, from the standpoint of the vibration measurement method, the factors that influence vibration measurement error and the ways they can be mitigated, with the aim of avoiding errors and improving accuracy.

  18. Analysis of the Largest Normalized Residual Test Robustness for Measurements Gross Errors Processing in the WLS State Estimator

    Directory of Open Access Journals (Sweden)

    Breno Carvalho

    2013-10-01

    Full Text Available The purpose of this paper is to implement a computational program to estimate the states (complex nodal voltages) of a power system and to show that the largest normalized residual (LNR) test fails in many cases. The chosen solution method was Weighted Least Squares (WLS). Once the states are estimated, a gross error analysis is made with the purpose of detecting and identifying the measurements that may contain gross errors (GEs), which can interfere with the estimated states, leading the process to an erroneous state estimation. If a measurement is identified as having error, it is discarded from the measurement set and the whole process is repeated until all measurements are within an acceptable error threshold. To validate the implemented software, several computer simulations were carried out on the IEEE 6-bus and 14-bus systems, where satisfactory results were obtained. Another purpose is to show that even a method as widespread as the LNR test is subject to serious conceptual flaws, probably due to weak mathematical foundations in the methodology. The paper highlights the need for continuous improvement of the employed techniques and a critical view, on the part of researchers, of these types of failures.
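
    The core of one LNR pass can be sketched on a generic linear measurement model: compute the WLS estimate, form normalized residuals r_i / sqrt(Omega_ii) with Omega = R - H (H^T R^-1 H)^-1 H^T, and flag the largest one against the usual threshold of 3. Power-system specifics (measurement Jacobians, per-unit handling) are omitted; the toy data are hypothetical.

    ```python
    import numpy as np

    def largest_normalized_residual(H, z, R):
        """One pass of the LNR test for a linear(ized) model z = H x + e.
        Returns the WLS estimate, normalized residuals and the suspect index."""
        Rinv = np.linalg.inv(R)
        G = H.T @ Rinv @ H                        # gain matrix
        x_hat = np.linalg.solve(G, H.T @ Rinv @ z)
        r = z - H @ x_hat                         # residuals
        omega = R - H @ np.linalg.solve(G, H.T)   # residual covariance
        rn = np.abs(r) / np.sqrt(np.diag(omega))  # normalized residuals
        return x_hat, rn, int(np.argmax(rn))

    rng = np.random.default_rng(4)
    H = rng.standard_normal((8, 3))
    x_true = np.array([1.0, -2.0, 0.5])
    R = 0.01 * np.eye(8)
    z = H @ x_true + 0.1 * rng.standard_normal(8)
    z[5] += 2.0                                   # inject a gross error
    x_hat, rn, suspect = largest_normalized_residual(H, z, R)
    print(suspect, rn[suspect] > 3.0)             # measurement 5 is flagged
    ```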

  19. Asymptotic and Sampling-Based Standard Errors for Two Population Invariance Measures in the Linear Equating Case

    Science.gov (United States)

    Rijmen, Frank; Manalo, Jonathan R.; von Davier, Alina A.

    2009-01-01

    This article describes two methods for obtaining the standard errors of two commonly used population invariance measures of equating functions: the root mean square difference of the subpopulation equating functions from the overall equating function and the root expected mean square difference. The delta method relies on an analytical…

  20. Doubly-Latent Models of School Contextual Effects: Integrating Multilevel and Structural Equation Approaches to Control Measurement and Sampling Error

    Science.gov (United States)

    Marsh, Herbert W.; Ludtke, Oliver; Robitzsch, Alexander; Trautwein, Ulrich; Asparouhov, Tihomir; Muthen, Bengt; Nagengast, Benjamin

    2009-01-01

    This article is a methodological-substantive synergy. Methodologically, we demonstrate latent-variable contextual models that integrate structural equation models (with multiple indicators) and multilevel models. These models simultaneously control for and unconfound measurement error due to sampling of items at the individual (L1) and group (L2)…

  1. Absorbed in the task : Personality measures predict engagement during task performance as tracked by error negativity and asymmetrical frontal activity

    NARCIS (Netherlands)

    Tops, Mattie; Boksem, Maarten A. S.

    2010-01-01

    We hypothesized that interactions between traits and context predict task engagement, as measured by the amplitude of the error-related negativity (ERN), performance, and relative frontal activity asymmetry (RFA). In Study 1, we found that drive for reward, absorption, and constraint independently p...

  2. Reduction of Truncation Errors in Planar Near-Field Aperture Antenna Measurements Using the Gerchberg-Papoulis Algorithm

    DEFF Research Database (Denmark)

    Martini, Enrica; Breinbjerg, Olav; Maci, Stefano

    2008-01-01

    A simple and effective procedure for the reduction of truncation errors in planar near-field measurements of aperture antennas is presented. The procedure relies on the consideration that, due to the scan plane truncation, the calculated plane wave spectrum of the field radiated by the antenna is...
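
    The Gerchberg-Papoulis iteration underlying such procedures alternates between re-imposing the measured samples in the observation region and enforcing the known support of the spectrum. A 1-D analogue (not the antenna-specific implementation; signal, window and band are all hypothetical):

    ```python
    import numpy as np

    def gerchberg_papoulis(measured, known_mask, band_mask, n_iter=200):
        """1-D Gerchberg-Papoulis extrapolation: alternately enforce the known
        samples in space and the band limit in the Fourier domain."""
        g = measured * known_mask
        for _ in range(n_iter):
            G = np.fft.fft(g)
            G *= band_mask                        # enforce band limitation
            g = np.fft.ifft(G).real
            g[known_mask] = measured[known_mask]  # re-impose measured samples
        return g

    n = 256
    t = np.arange(n)
    signal = np.cos(2 * np.pi * 5 * t / n) + 0.5 * np.sin(2 * np.pi * 8 * t / n)
    known = np.zeros(n, dtype=bool)
    known[64:192] = True                          # truncated observation window
    band = np.zeros(n)
    band[:12] = 1.0                               # keep |k| <= 11 (low-pass)
    band[-11:] = 1.0
    rec = gerchberg_papoulis(signal, known, band)
    print(np.max(np.abs(rec - signal)[~known]))   # extrapolation error shrinks
    ```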

  3. Measuring and Detecting Errors in Occupational Coding: an Analysis of SHARE Data

    Directory of Open Access Journals (Sweden)

    Belloni Michele

    2016-12-01

    Full Text Available This article studies coding errors in occupational data, as the quality of these data is important but often neglected. In particular, we recoded open-ended questions on occupation for last and current job in the Dutch sample of the "Survey of Health, Ageing and Retirement in Europe" (SHARE) using a high-quality software program for ex-post coding (the CASCOT software). Taking CASCOT coding as our benchmark, our results suggest that the incidence of coding errors in SHARE is high, even when the comparison is made at the level of one-digit occupational codes (28% for last job and 30% for current job). This finding highlights the complexity of occupational coding and suggests that processing errors due to miscoding should be taken into account when undertaking statistical analyses or writing econometric models. Our analysis suggests strategies to alleviate such coding errors, and we propose a set of equations that can predict error. These equations may complement coding software and improve the quality of occupational coding.

  4. First measurement and correction of nonlinear errors in the experimental insertions of the CERN Large Hadron Collider

    Science.gov (United States)

    Maclean, E. H.; Tomás, R.; Giovannozzi, M.; Persson, T. H. B.

    2015-12-01

    Nonlinear magnetic errors in low-β insertions can contribute significantly to detuning with amplitude, linear and nonlinear chromaticity, and lead to degradation of dynamic aperture and beam lifetime. As such, the correction of nonlinear errors in the experimental insertions of colliders can be of critical significance for successful operation. This is expected to be of particular relevance to the LHC's second run and its high luminosity upgrade, as well as to future colliders such as the Future Circular Collider. Current correction strategies envisioned for these colliders assume it will be possible to calculate optimized local corrections through the insertions, using a magnetic model of the errors. This paper shows, however, that reliance purely upon magnetic measurements of the nonlinear errors of insertion elements is insufficient to guarantee a good correction quality in the relevant low-β* regime. It is possible to perform beam-based examination of nonlinear magnetic errors via the feed-down to readily observed beam properties upon application of closed orbit bumps, and methods based upon feed-down to tune have been utilized at RHIC, SIS18, and SPS. This paper demonstrates the extension of such methodology to include direct observation of feed-down to linear coupling in the LHC. It is further shown that such beam-based studies can be used to complement magnetic measurements performed during LHC construction, in order to validate and refine the magnetic model of the collider. Results from first attempts at the measurement and correction of nonlinear errors in the LHC experimental insertions are presented. Several discrepancies of beam-based studies with respect to the LHC magnetic model are reported.

  5. On the importance of Task 1 and error performance measures in PRP dual-task studies

    Directory of Open Access Journals (Sweden)

    Tilo eStrobach

    2015-04-01

    Full Text Available The Psychological Refractory Period (PRP) paradigm is a dominant research tool in the literature on dual-task performance. In this paradigm a first and a second component task (i.e., Task 1 and Task 2) are presented with variable stimulus onset asynchronies (SOAs) and priority given to performing Task 1. The main indicator of dual-task impairment in PRP situations is an increasing Task 2 RT with decreasing SOAs. This impairment is typically explained by some task components being processed strictly sequentially, in the context of the prominent central bottleneck theory. This assumption could implicitly suggest that processes of Task 1 are unaffected by Task 2 and bottleneck processing, i.e., decreasing SOAs do not increase RTs and error rates of the first task. The aim of the present review is to assess whether PRP dual-task studies included both RT and error data presentations and statistical analyses, and whether studies including both data types (i.e., RTs and error rates) show data consistent with this assumption (i.e., decreasing SOAs and unaffected RTs and/or error rates in Task 1). This review demonstrates that, in contrast to RT presentations and analyses, error data are underrepresented in a substantial number of studies. Furthermore, a substantial number of studies with RT and error data showed a statistically significant impairment of Task 1 performance with decreasing SOA. Thus, these studies produced data that are not consistent with the strong assumption that processes of Task 1 are unaffected by Task 2 and bottleneck processing in PRP dual-task situations; this calls for more careful reporting and analysis of Task 1 performance in PRP studies, and for more careful consideration of theories proposing additions to the bottleneck assumption that are sufficiently general to explain both Task 1 and Task 2 effects.

  6. Effect of measurement error on tests of density dependence of catchability for walleyes in northern Wisconsin angling and spearing fisheries

    Science.gov (United States)

    Hansen, M.J.; Beard, T.D.; Hewett, S.W.

    2005-01-01

    We sought to determine how much measurement errors affected tests of density dependence of spearing and angling catchability for walleye Sander vitreus by quantifying relationships between spearing and angling catch rates (catch/h) and walleye population density (number/acre) in northern Wisconsin lakes. The mean measurement error of spearing catch rates was 43.5 times greater than the mean measurement error of adult walleye population densities, whereas the mean measurement error of angling catch rates was only 5.6 times greater than the mean measurement error of adult walleye population densities. The bias-corrected estimate of the relationship between spearing catch rate and adult walleye population density was similar to the ordinary-least-squares regression estimate but differed significantly from the geometric mean (GM) functional regression estimate. In contrast, the bias-corrected estimate of the relationship between angling catch rate and total walleye population density was intermediate between ordinary-least-squares and GM functional regression estimates. Catch rates of walleyes in both spearing and angling fisheries were not linearly related to walleye population density, which indicated that catch rates in both fisheries were hyperstable in relation to walleye population density. For both fisheries, GM functional regression overestimated the degree of hyperdepletion in catch rates and ordinary-least-squares regression overestimated the degree of hyperstability in catch rates. However, ordinary-least-squares regression induced significantly less bias in tests of density dependence than GM functional regression, so it may be suitable for testing the degree of density dependence in fisheries for which fish population density is estimated with mark-recapture methods similar to those used in our study. © Copyright by the American Fisheries Society 2005.
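
    Density dependence of catchability is commonly expressed through the power model C/E = qN^beta, with beta < 1 indicating hyperstability and beta > 1 hyperdepletion. A toy log-log OLS fit is sketched below with synthetic data; as the abstract stresses, measurement error in N biases such fits, so the simple estimate is only illustrative.

    ```python
    import numpy as np

    rng = np.random.default_rng(7)
    density = rng.uniform(2.0, 30.0, 60)    # walleyes per acre (synthetic)
    # hyperstable catch rates: C/E = q * N^beta with beta = 0.4 plus noise
    catch_rate = 0.8 * density ** 0.4 * np.exp(0.2 * rng.standard_normal(60))

    beta = np.polyfit(np.log(density), np.log(catch_rate), 1)[0]
    print("beta = %.2f (beta < 1 indicates hyperstability)" % beta)
    ```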

  7. Demonstration of a quantum error correction for enhanced sensitivity of photonic measurements

    Science.gov (United States)

    Cohen, L.; Pilnyak, Y.; Istrati, D.; Retzker, A.; Eisenberg, H. S.

    2016-07-01

    The sensitivity of classical and quantum sensing is impaired in a noisy environment. Thus, one of the main challenges facing sensing protocols is to reduce the noise while preserving the signal. State-of-the-art quantum sensing protocols that rely on dynamical decoupling achieve this goal under the restriction of long noise correlation times. We implement a proof-of-principle experiment of a protocol that recovers sensitivity by using error correction for photonic systems and does not have this restriction. The protocol uses a protected entangled qubit to correct a single error. Our results show a recovery of about 87% of the sensitivity, independent of the noise probability.

  8. Family-based bivariate association tests for quantitative traits.

    Directory of Open Access Journals (Sweden)

    Lei Zhang

    Full Text Available The availability of a large number of dense SNPs, high-throughput genotyping and computation methods promotes the application of family-based association tests. While most of the current family-based analyses focus only on individual traits, joint analyses of correlated traits can extract more information and potentially improve the statistical power. However, current TDT-based methods are low-powered. Here, we develop a method for tests of association for bivariate quantitative traits in families. In particular, we correct for population stratification by the use of an integration of principal component analysis and TDT. A score test statistic in the variance-components model is proposed. Extensive simulation studies indicate that the proposed method not only outperforms approaches limited to individual traits when pleiotropic effect is present, but also surpasses the power of two popular bivariate association tests termed FBAT-GEE and FBAT-PC, respectively, while correcting for population stratification. When applied to the GAW16 datasets, the proposed method successfully identifies at the genome-wide level the two SNPs that present pleiotropic effects to HDL and TG traits.

  9. A Preliminary Study on the Measures to Assess the Organizational Safety: The Cultural Impact on Human Error Potential

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Yong Hee; Lee, Yong Hee [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2011-10-15

    The Fukushima I nuclear accident, which followed the Tohoku earthquake and tsunami of 11 March 2011, occurred twelve years after the JCO accident, which was caused by an error made by JCO employees. These accidents, along with the Chernobyl accident, were associated with characteristic problems of various organizations; they caused severe social and economic disruption and had significant environmental and health impacts. Cultural problems with human error arise for various reasons, and different actions are needed to prevent different errors. Unfortunately, much of the research on organizations and human error has produced widely varying results, which call for different approaches. In other words, we have to draw more practical solutions for nuclear safety from these studies and take a systematic approach to the organizational deficiencies that cause human error. This paper reviews Hofstede's criteria, the IAEA safety culture framework, the safety areas of the periodic safety review (PSR), teamwork and performance, and an evaluation of HANARO safety culture to verify the measures used to assess organizational safety.

  10. The reconstructed residual error: a novel segmentation evaluation measure for reconstructed images in tomography

    NARCIS (Netherlands)

    Roelandts, T.; Batenburg, K.J.; Dekker, A.J. den; Sijbers, J.

    2014-01-01

    In this paper, we present the reconstructed residual error, which evaluates the quality of a given segmentation of a reconstructed image in tomography. This novel evaluation method, which is independent of the methods that were used to reconstruct and segment the image, is applicable to segmentation

  11. Food Stamps and Food Insecurity: What Can Be Learned in the Presence of Nonclassical Measurement Error?

    Science.gov (United States)

    Gundersen, Craig; Kreider, Brent

    2008-01-01

    Policymakers have been puzzled to observe that food stamp households appear more likely to be food insecure than observationally similar eligible nonparticipating households. We reexamine this issue allowing for nonclassical reporting errors in food stamp participation and food insecurity. Extending the literature on partially identified…

  12. A new method to reduce truncation errors in partial spherical near-field measurements

    DEFF Research Database (Denmark)

    Cano-Facila, F J; Pivnenko, Sergey

    2011-01-01

    angular sector as well as a truncation error is present in the calculated far-field pattern within this sector. The method is based on the Gerchberg-Papoulis algorithm used to extrapolate functions and it is able to extend the valid region of the calculated far-field pattern up to the whole forward...
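
    The record names the Gerchberg-Papoulis algorithm for extending the valid region of the pattern. A minimal one-dimensional sketch of the underlying alternating-projection idea follows; the band limit, grid, and known region are hypothetical stand-ins for the truncated measurement geometry, not the authors' implementation.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical 1-D analogue: a band-limited pattern known only on a
    # truncated region, extrapolated by alternating constraints.
    N = 256
    k = np.fft.fftfreq(N)
    band = np.abs(k) < 0.08                     # assumed band limit

    spectrum = np.zeros(N, complex)
    idx = np.where(band)[0]
    spectrum[idx] = rng.normal(size=idx.size) + 1j * rng.normal(size=idx.size)
    truth = np.fft.ifft(spectrum).real          # band-limited "pattern"

    known = np.zeros(N, bool)
    known[64:192] = True                        # valid (measured) sector

    est = np.where(known, truth, 0.0)
    for _ in range(200):
        spec = np.fft.fft(est)
        spec[~band] = 0.0                       # project onto band-limited signals
        est = np.fft.ifft(spec).real
        est[known] = truth[known]               # re-impose the known samples

    err = np.linalg.norm((est - truth)[~known]) / np.linalg.norm(truth[~known])
    print(f"relative error outside the known region: {err:.3f}")
    ```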

  13. Observer Error when Measuring Safety-Related Behavior: Momentary Time Sampling versus Whole-Interval Recording

    Science.gov (United States)

    Taylor, Matthew A.; Skourides, Andreas; Alvero, Alicia M.

    2012-01-01

    Interval recording procedures are used by persons who collect data through observation to estimate the cumulative occurrence and nonoccurrence of behavior/events. Although interval recording procedures can increase the efficiency of observational data collection, they can also induce error from the observer. In the present study, 50 observers were…
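
    The two interval procedures are easy to contrast in simulation. The sketch below scores the same hypothetical second-by-second behavior stream with momentary time sampling (MTS) and whole-interval recording (WIR); the session length, interval size, and occurrence rate are invented for illustration.

    ```python
    import numpy as np

    rng = np.random.default_rng(42)

    # Hypothetical 30-min session scored per second: 1 = behavior occurring.
    session = (rng.random(1800) < 0.35).astype(int)

    interval = 10                               # 10-s observation intervals
    chunks = session.reshape(-1, interval)

    true_pct = 100 * session.mean()
    mts_pct = 100 * chunks[:, -1].mean()        # MTS: score the final moment only
    wir_pct = 100 * chunks.all(axis=1).mean()   # WIR: score only full-interval runs

    print(f"true {true_pct:.1f}%  MTS {mts_pct:.1f}%  WIR {wir_pct:.1f}%")
    ```

    With independent seconds, MTS is approximately unbiased while WIR systematically underestimates occurrence; observer error of the kind studied above then adds to this procedural bias.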

  14. Correcting for binomial measurement error in predictors in regression with application to analysis of DNA methylation rates by bisulfite sequencing.

    Science.gov (United States)

    Buonaccorsi, John; Prochenka, Agnieszka; Thoresen, Magne; Ploski, Rafal

    2016-09-30

    Motivated by a genetic application, this paper addresses the problem of fitting regression models when the predictor is a proportion measured with error. While the problem of dealing with additive measurement error in fitting regression models has been extensively studied, the problem where the additive error is of a binomial nature has not been addressed. The measurement errors here are heteroscedastic for two reasons; dependence on the underlying true value and changing sampling effort over observations. While some of the previously developed methods for treating additive measurement error with heteroscedasticity can be used in this setting, other methods need modification. A new version of simulation extrapolation is developed, and we also explore a variation on the standard regression calibration method that uses a beta-binomial model based on the fact that the true value is a proportion. Although most of the methods introduced here can be used for fitting non-linear models, this paper will focus primarily on their use in fitting a linear model. While previous work has focused mainly on estimation of the coefficients, we will, with motivation from our example, also examine estimation of the variance around the regression line. In addressing these problems, we also discuss the appropriate manner in which to bootstrap for both inferences and bias assessment. The various methods are compared via simulation, and the results are illustrated using our motivating data, for which the goal is to relate the methylation rate of a blood sample to the age of the individual providing the sample. Copyright © 2016 John Wiley & Sons, Ltd.
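
    For readers unfamiliar with simulation extrapolation, the sketch below illustrates the generic SIMEX recipe on a binomial-fraction predictor. It treats the binomial error as approximately additive with estimated per-observation variance W(1-W)/m; this is a simplification for illustration and not the authors' modified estimator.

    ```python
    import numpy as np

    rng = np.random.default_rng(7)

    # Hypothetical data: true proportion p observed as a binomial fraction W.
    n, m = 300, 30                        # subjects, reads per subject
    p = rng.beta(2, 5, n)                 # true methylation-like proportions
    y = 1.0 + 2.0 * p + rng.normal(0, 0.5, n)
    w = rng.binomial(m, p) / m            # error-prone predictor

    def ols_slope(x, y):
        X = np.column_stack([np.ones_like(x), x])
        return np.linalg.lstsq(X, y, rcond=None)[0][1]

    # SIMEX: inflate the (heteroscedastic) noise, then extrapolate back.
    sigma2 = w * (1 - w) / m              # estimated error variance per subject
    lambdas = np.array([0.0, 0.5, 1.0, 1.5, 2.0])
    slopes = [np.mean([ols_slope(w + rng.normal(0, np.sqrt(l * sigma2)), y)
                       for _ in range(200)]) for l in lambdas]

    coef = np.polyfit(lambdas, slopes, 2)     # quadratic extrapolant
    simex = np.polyval(coef, -1.0)            # evaluate at lambda = -1
    print(f"naive {slopes[0]:.3f}  SIMEX {simex:.3f}  (true 2.0)")
    ```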

  15. An integrated user-friendly ArcMAP tool for bivariate statistical modeling in geoscience applications

    Science.gov (United States)

    Jebur, M. N.; Pradhan, B.; Shafri, H. Z. M.; Yusof, Z.; Tehrany, M. S.

    2014-10-01

    Modeling and classification difficulties are fundamental issues in natural hazard assessment. A geographic information system (GIS) is a domain that requires users to use various tools to perform different types of spatial modeling. Bivariate statistical analysis (BSA) assists in hazard modeling. To perform this analysis, several calculations are required and the user has to transfer data from one format to another. Most researchers perform these calculations manually by using Microsoft Excel or other programs. This process is time consuming and carries a degree of uncertainty. The lack of proper tools to implement BSA in a GIS environment prompted this study. In this paper, a user-friendly tool for the BSA technique, BSM (bivariate statistical modeler), is proposed. Three popular BSA techniques, namely the frequency ratio, weights-of-evidence, and evidential belief function models, are implemented in the newly proposed ArcMAP tool. The tool is programmed in Python with a simple graphical user interface, which facilitates the improvement of model performance. The proposed tool implements BSA automatically, thus allowing numerous variables to be examined. To validate the capability and accuracy of this program, a pilot test area in Malaysia is selected and all three models are tested by using the proposed program. The area under the curve is used to measure the success rate and prediction rate. Results demonstrate that the proposed program executes BSA with reasonable accuracy. The proposed BSA tool can be used in numerous applications, such as natural hazard, mineral potential, hydrological, and other engineering and environmental applications.
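
    Of the three BSA techniques the tool implements, the frequency ratio is the simplest. A minimal sketch follows, with a hypothetical pixel table standing in for the rasterized GIS layers.

    ```python
    import numpy as np
    import pandas as pd

    rng = np.random.default_rng(3)

    # Hypothetical raster flattened to a table: one row per pixel, with a
    # categorical factor class (e.g. slope class) and a 0/1 landslide flag.
    pixels = pd.DataFrame({
        "slope_class": rng.integers(1, 5, 10_000),
        "landslide":   rng.random(10_000) < 0.02,
    })

    grp = pixels.groupby("slope_class")
    fr = ((grp["landslide"].sum() / pixels["landslide"].sum()) /
          (grp.size() / len(pixels)))     # frequency ratio per class
    print(fr)   # FR > 1: class over-represented among occurrences
    ```

    In the standard frequency ratio workflow, summing the FR values of the classes present at a pixel, across all conditioning factors, gives that pixel's susceptibility index.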

  16. Reconciliation of size-density bivariate distributions over a separating node

    Institute of Scientific and Technical Information of China (English)

    Bidarahalli Venkoba Rao; Vivek Ganvir; Sirigeri Jois Gopalakrishna

    2008-01-01

    Data reconciliation restores mass balance among noise-prone measured data by way of component adjustments for the various particle size or particle density classes or assays over the separating node. In this paper, the method of Lagrange multipliers has been extended to balance bivariate feed and product size-density distributions of coal particles split from a settling column. The settling suspension in the column was split into two product fractions at 40% of the height from the bottom, after one minute of settling of the initially homogenized suspension. Reconciliation of the data helps estimate the solid flow split of particles to the settled stream, as well as the profiles of the partition curves of the marginal particle size or particle density distributions. In general, the Lagrange multiplier method with uniform weighting of its components may not guarantee a smooth partition surface, so the reconciled data need further refinement to establish the nature of the surface. To overcome this difficulty, a simple alternative method of reconciling bivariate size-density data using the partition surface concept is explored in this paper.
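
    For a single mass-balance constraint, the Lagrange-multiplier adjustment has a closed form. The sketch below balances invented size-density cell masses over one separating node with assumed measurement variances; uniform per-cell weighting of this kind is precisely the simplification the record cautions may yield a rough partition surface.

    ```python
    import numpy as np

    # Hypothetical measured cell masses (kg): feed should equal the sum of the
    # two products in every size-density cell, but raw measurements do not balance.
    feed = np.array([[4.1, 2.0], [3.2, 1.1]])   # rows: size; cols: density classes
    prod1 = np.array([[2.6, 0.9], [1.6, 0.4]])
    prod2 = np.array([[1.2, 0.9], [1.9, 0.6]])

    var_f, var_1, var_2 = 0.04, 0.02, 0.02      # assumed measurement variances

    # Lagrange-multiplier solution: minimize weighted squared adjustments
    # subject to feed - prod1 - prod2 = 0, cell by cell (closed form).
    r = feed - prod1 - prod2                    # imbalance in each cell
    lam = r / (var_f + var_1 + var_2)           # multiplier per cell
    feed_hat = feed - var_f * lam
    prod1_hat = prod1 + var_1 * lam
    prod2_hat = prod2 + var_2 * lam

    assert np.allclose(feed_hat, prod1_hat + prod2_hat)
    print("solid split to product 1:", prod1_hat.sum() / feed_hat.sum())
    ```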

  17. A Model of the Dynamic Error as a Measurement Result of Instruments Defining the Parameters of Moving Objects

    Science.gov (United States)

    Dichev, D.; Koev, H.; Bakalova, T.; Louda, P.

    2014-08-01

    The present paper considers a new model for the formation of the inertial component of dynamic error. It is very effective in the analysis and synthesis of measuring instruments that are positioned on moving objects and measure their movement parameters. The block diagram developed within this paper is used as a basis for defining the mathematical model. The block diagram is based on a set-theoretic description of the measuring system, its input and output quantities, and the process of dynamic error formation. The model reflects the specific nature of the formation of the inertial component of dynamic error and follows the logical interrelation and sequence of the physical processes that form it. The effectiveness, usefulness and advantages of the proposed model are rooted in the wide range of possibilities it provides for the analysis and synthesis of such measuring instruments, the formulation of algorithms and optimization criteria, and the development of new intelligent measuring systems with improved accuracy characteristics in dynamic mode.

  18. Accounting for measurement error in biomarker data and misclassification of subtypes in the analysis of tumor data.

    Science.gov (United States)

    Nevo, Daniel; Zucker, David M; Tamimi, Rulla M; Wang, Molin

    2016-12-30

    A common paradigm in dealing with heterogeneity across tumors in cancer analysis is to cluster the tumors into subtypes using marker data on the tumor, and then to analyze each of the clusters separately. A more specific target is to investigate the association between risk factors and specific subtypes and to use the results for personalized preventive treatment. This task is usually carried out in two steps-clustering and risk factor assessment. However, two sources of measurement error arise in these problems. The first is the measurement error in the biomarker values. The second is the misclassification error when assigning observations to clusters. We consider the case with a specified set of relevant markers and propose a unified single-likelihood approach for normally distributed biomarkers. As an alternative, we consider a two-step procedure with the tumor type misclassification error taken into account in the second-step risk factor analysis. We describe our method for binary data and also for survival analysis data using a modified version of the Cox model. We present asymptotic theory for the proposed estimators. Simulation results indicate that our methods significantly lower the bias with a small price being paid in terms of variance. We present an analysis of breast cancer data from the Nurses' Health Study to demonstrate the utility of our method. Copyright © 2016 John Wiley & Sons, Ltd.

  19. Measurements and their uncertainties a practical guide to modern error analysis

    CERN Document Server

    Hughes, Ifan G

    2010-01-01

    This hands-on guide is primarily intended to be used in undergraduate laboratories in the physical sciences and engineering. It assumes no prior knowledge of statistics. It introduces the necessary concepts where needed, with key points illustrated with worked examples and graphic illustrations. In contrast to traditional mathematical treatments it uses a combination of spreadsheet and calculus-based approaches, suitable as a quick and easy on-the-spot reference. The emphasis throughout is on practical strategies to be adopted in the laboratory. Error analysis is introduced at a level accessible to school leavers, and carried through to research level. Error calculation and propagation are presented through a series of rules-of-thumb, look-up tables and approaches amenable to computer analysis. The general approach uses the chi-square statistic extensively. Particular attention is given to hypothesis testing and extraction of parameters and their uncertainties by fitting mathematical models to experimental data...

  20. Modified likelihood ratio tests in heteroskedastic multivariate regression models with measurement error

    CERN Document Server

    Melo, Tatiane F N; Patriota, Alexandre G

    2012-01-01

    In this paper, we develop a modified version of the likelihood ratio test for multivariate heteroskedastic errors-in-variables regression models. The error terms are allowed to follow a multivariate distribution in the elliptical class of distributions, which has the normal distribution as a special case. We derive the Skovgaard adjusted likelihood ratio statistic, which follows a chi-squared distribution with a high degree of accuracy. We conduct a simulation study and show that the proposed test displays superior finite sample behavior as compared to the standard likelihood ratio test. We illustrate the usefulness of our results in applied settings using a data set from the WHO MONICA Project on cardiovascular disease.

  1. Retrieval of relative humidity profiles and its associated error from Megha-Tropiques measurements

    Science.gov (United States)

    Sivira, R.; Brogniez, H.; Mallet, C.; Oussar, Y.

    2013-05-01

    The combination of the two microwave radiometers, SAPHIR and MADRAS, on board the Megha-Tropiques platform is explored to define a retrieval method that estimates not only the relative humidity profile but also the associated confidence intervals. A comparison of three retrieval models was performed, under equal conditions of input and output data sets, through their statistical values (error variance, correlation coefficient and error mean), yielding a relative humidity profile over seven layers. The three models show the same behavior with respect to the layers, with the mid-tropospheric layers reaching the best statistical values, suggesting a model-independent problem. Finally, the study of the probability density function of the relative humidity at a given atmospheric pressure gives further insight into the confidence intervals.

  2. Bias and spread in extreme value theory measurements of probability of error

    Science.gov (United States)

    Smith, J. G.

    1972-01-01

    Extreme value theory is examined to explain the cause of the bias and spread in performance of communications systems characterized by low bit rates and high data reliability requirements, for cases in which underlying noise is Gaussian or perturbed Gaussian. Experimental verification is presented and procedures that minimize these effects are suggested. Even under these conditions, however, extreme value theory test results are not particularly more significant than bit error rate tests.

  3. THE INSTABILITY DEGREE IN THE DIMENSION OF SPACES OF BIVARIATE SPLINE

    Institute of Scientific and Technical Information of China (English)

    Zhiqiang Xu; Renhong Wang

    2002-01-01

    In this paper, the dimension of the spaces of bivariate splines with degree less than 2r and smoothness order r on the Morgan-Scott triangulation is considered. The concept of the instability degree in the dimension of spaces of bivariate splines is presented. The results in the paper lead us to conjecture that the instability degree in the dimension of spaces of bivariate splines is infinite.

  4. THE NORMAL BIVARIATE DENSITY FUNCTION AND ITS APPLICATIONS TO WEAPON SYSTEMS ANALYSIS, A REVIEW

    Science.gov (United States)

    The normal bivariate density function is derived from a priori considerations. It is discussed in terms of probability area in a plane, and as a...correlation surface. Several numerical methods of solving the normal bivariate distribution double integral are presented, and a curve is included for...given specific mathematical treatment. An Appendix examines the elliptical properties of normally correlated distributions. The investigation has resulted in a reference paper for the normal bivariate density function.
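
    For reference, the function under discussion is the standard normal bivariate density, which in the usual notation is

    f(x,y) = \frac{1}{2\pi\sigma_X\sigma_Y\sqrt{1-\rho^2}} \exp\left\{-\frac{1}{2(1-\rho^2)}\left[\frac{(x-\mu_X)^2}{\sigma_X^2} - \frac{2\rho(x-\mu_X)(y-\mu_Y)}{\sigma_X\sigma_Y} + \frac{(y-\mu_Y)^2}{\sigma_Y^2}\right]\right\}

    where \rho is the correlation; the elliptical properties examined in the record's Appendix are the level sets of this surface.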

  5. Robustness of SOC Estimation Algorithms for EV Lithium-Ion Batteries against Modeling Errors and Measurement Noise

    Directory of Open Access Journals (Sweden)

    Xue Li

    2015-01-01

    Full Text Available State of charge (SOC) is one of the most important parameters in a battery management system (BMS). There are numerous algorithms for SOC estimation, mostly of model-based observer/filter types such as Kalman filters, closed-loop observers, and robust observers. Modeling errors and measurement noises have a critical impact on the accuracy of SOC estimation in these algorithms. This paper is a comparative study of the robustness of SOC estimation algorithms against modeling errors and measurement noises. Using a typical battery platform for vehicle applications with sensor noise and battery aging characterization, three popular and representative SOC estimation methods (extended Kalman filter, PI-controlled observer, and H∞ observer) are compared on such robustness. The simulation and experimental results demonstrate the deterioration of SOC estimation accuracy under modeling errors resulting from aging and under larger measurement noise, which is quantitatively characterized. The findings of this paper provide useful information on the following aspects: (1) how SOC estimation accuracy depends on modeling reliability and voltage measurement accuracy; (2) pros and cons of typical SOC estimators in their robustness and reliability; (3) guidelines for requirements on battery system identification and sensor selection.
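
    As a concrete example of the model-based observer/filter class discussed here, below is a minimal sketch of a one-state extended Kalman filter for SOC with a deliberately simple OCV-R cell model; the capacity, resistance, OCV polynomial, and noise levels are invented and are not the paper's battery platform.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical one-state EKF for SOC with an OCV-R cell model.
    C = 2.0 * 3600                          # capacity [A*s] (2 Ah)
    R0 = 0.05                               # ohmic resistance [ohm]
    ocv = lambda s: 3.0 + 0.7 * s + 0.4 * s**2    # open-circuit voltage [V]
    docv = lambda s: 0.7 + 0.8 * s                # dOCV/dSOC for linearization
    dt = 1.0

    Q, Rm = 1e-7, 1e-3                      # process / measurement noise variances
    s_true, s_hat, P = 0.9, 0.5, 0.1        # start from a deliberately bad estimate

    for k in range(3600):
        i = 1.0 + 0.5 * np.sin(k / 200)           # discharge current [A]
        s_true -= dt * i / C                      # coulomb-counting truth
        v = ocv(s_true) - R0 * i + rng.normal(0, np.sqrt(Rm))

        s_hat -= dt * i / C                       # EKF predict
        P += Q
        H = docv(s_hat)                           # EKF update (linearized OCV)
        K = P * H / (H * P * H + Rm)
        s_hat += K * (v - (ocv(s_hat) - R0 * i))
        P *= 1 - K * H

    print(f"true SOC {s_true:.3f}, EKF estimate {s_hat:.3f}")
    ```

    Perturbing C or R0 relative to the simulated cell probes model mismatch, which loosely parallels the aging robustness test described above.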

  6. Mediation analysis when a continuous mediator is measured with error and the outcome follows a generalized linear model.

    Science.gov (United States)

    Valeri, Linda; Lin, Xihong; VanderWeele, Tyler J

    2014-12-10

    Mediation analysis is a popular approach to examine the extent to which the effect of an exposure on an outcome is through an intermediate variable (mediator) and the extent to which the effect is direct. When the mediator is mis-measured, the validity of mediation analysis can be severely undermined. In this paper, we first study the bias of classical, non-differential measurement error on a continuous mediator in the estimation of direct and indirect causal effects in generalized linear models when the outcome is either continuous or discrete and exposure-mediator interaction may be present. Our theoretical results as well as a numerical study demonstrate that in the presence of non-linearities, the bias of naive estimators for direct and indirect effects that ignore measurement error can take unintuitive directions. We then develop methods to correct for measurement error. Three correction approaches using method of moments, regression calibration, and SIMEX are compared. We apply the proposed method to the Massachusetts General Hospital lung cancer study to evaluate the effect of genetic variants mediated through smoking on lung cancer risk.
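
    Of the three corrections compared (method of moments, regression calibration, SIMEX), regression calibration is the simplest to sketch. The toy example below assumes a continuous outcome, no exposure-mediator interaction, and two error-prone replicates of the mediator; all coefficients and variances are invented.

    ```python
    import numpy as np

    rng = np.random.default_rng(5)

    # Hypothetical continuous-outcome mediation with a mis-measured mediator.
    n = 2000
    a = rng.binomial(1, 0.5, n)                  # exposure
    m = 0.8 * a + rng.normal(0, 1, n)            # true mediator
    y = 0.5 * a + 1.2 * m + rng.normal(0, 1, n)  # outcome (no interaction)
    w1 = m + rng.normal(0, 0.7, n)               # two error-prone replicates
    w2 = m + rng.normal(0, 0.7, n)

    # Regression calibration: estimate the error variance from replicates,
    # then replace W with E[M | W, A] before fitting the outcome model.
    w = (w1 + w2) / 2
    var_u = np.var(w1 - w2, ddof=1) / 2 / 2      # error variance of the average

    m_hat = np.empty(n)
    for g in (0, 1):                             # calibrate within exposure strata
        s = a == g
        lam = (np.var(w[s], ddof=1) - var_u) / np.var(w[s], ddof=1)
        m_hat[s] = w[s].mean() + lam * (w[s] - w[s].mean())

    X = np.column_stack([np.ones(n), a, m_hat])
    beta = np.linalg.lstsq(X, y, rcond=None)[0]
    print(f"calibrated mediator effect {beta[2]:.2f} (true 1.2)")
    ```

    A naive fit of y on the error-prone average w would attenuate the mediator coefficient and push part of the mediated effect into the direct effect, which is the bias pattern the paper analyzes.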

  7. Assessment of measurement error due to sampling perspective in the space-based Doppler lidar wind profiler

    Science.gov (United States)

    Houston, S. H.; Emmitt, G. D.

    1986-01-01

    A Multipair Algorithm (MPA) has been developed to minimize the contribution of the sampling error in the simulated Doppler lidar wind profiler measurements (due to angular and spatial separation between shots in a shot pair) to the total measurement uncertainty. Idealized wind fields are used as input to the profiling model, and radial wind estimates are passed through the MPA to yield a wind measurement for 300 x 300 sq km areas. The derived divergence fields illustrate the gradient patterns that are particular to the Doppler lidar sampling strategy and perspective.

  8. Counterfactual Distributions in Bivariate Models—A Conditional Quantile Approach

    Directory of Open Access Journals (Sweden)

    Javier Alejo

    2015-11-01

    Full Text Available This paper proposes a methodology to incorporate bivariate models in numerical computations of counterfactual distributions. The proposal is to extend the works of Machado and Mata (2005) and Melly (2005) using the grid method to generate pairs of random variables. This contribution allows incorporating the effect of intra-household decision making in counterfactual decompositions of changes in income distribution. An application using data from five Latin American countries shows that this approach substantially improves the goodness of fit to the empirical distribution. However, the decomposition exercise is less conclusive about the performance of the method, which essentially depends on the sample size and the accuracy of the regression model.
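
    The grid method builds on the Machado-Mata machinery; its basic univariate step is sketched below with statsmodels, using an invented one-covariate wage equation. The bivariate pair-generation step that constitutes this paper's contribution is not shown.

    ```python
    import numpy as np
    import statsmodels.api as sm
    from statsmodels.regression.quantile_regression import QuantReg

    rng = np.random.default_rng(11)

    # Hypothetical one-covariate wage equation in two periods (all invented).
    n = 1000
    x0 = rng.normal(12, 2, n)                      # schooling, period 0
    x1 = rng.normal(13, 2, n)                      # schooling, period 1
    y0 = 1.0 + 0.08 * x0 + rng.normal(0, 0.4, n)   # log wage, period 0

    X0 = sm.add_constant(x0)
    taus = rng.uniform(0.01, 0.99, 100)            # random grid of quantiles

    # Counterfactual draws: period-0 conditional quantile coefficients
    # combined with covariates drawn from the period-1 distribution.
    draws = []
    for tau in taus:
        b = QuantReg(y0, X0).fit(q=tau).params
        draws.append(b[0] + b[1] * x1[rng.integers(n)])

    print(f"counterfactual mean log wage: {np.mean(draws):.3f}")
    ```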

  9. Efficient estimation of semiparametric copula models for bivariate survival data

    KAUST Repository

    Cheng, Guang

    2014-01-01

    A semiparametric copula model for bivariate survival data is characterized by a parametric copula model of dependence and nonparametric models of two marginal survival functions. Efficient estimation for the semiparametric copula model has been recently studied for the complete data case. When the survival data are censored, semiparametric efficient estimation has only been considered for some specific copula models such as the Gaussian copulas. In this paper, we obtain the semiparametric efficiency bound and efficient estimation for general semiparametric copula models for possibly censored data. We construct an approximate maximum likelihood estimator by approximating the log baseline hazard functions with spline functions. We show that our estimates of the copula dependence parameter and the survival functions are asymptotically normal and efficient. Simple consistent covariance estimators are also provided. Numerical results are used to illustrate the finite sample performance of the proposed estimators. © 2013 Elsevier Inc.

  10. Study on Laser Visual Measurement Method for Seamless Steel Pipe Straightness Error by Multiple Line-structured Laser Sensors

    Institute of Scientific and Technical Information of China (English)

    陈长水; 谢建平; 王佩琳

    2001-01-01

    An original non-contact method for measuring seamless steel pipe straightness error using multiple line-structured laser sensors is introduced in this paper. An arc appears on the surface of the measured seamless steel pipe when it is illuminated by a line-structured laser source. After the image of the arc is captured by a CCD camera, the coordinates of the center of the pipe cross-section circle containing the arc can be worked out through an appropriate algorithm. In the same way, multiple line-structured laser sensors are mounted parallel to the pipe. The straightness error of the seamless steel pipe can therefore be inferred from the coordinates of the multiple cross-section centers obtained from the individual line-structured laser sensors.
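
    The record does not specify the center-finding algorithm; one common choice is an algebraic least-squares circle fit, sketched below on hypothetical arc points. Repeating the fit for each sensor yields the chain of cross-section centers from which the straightness error is computed.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    # Hypothetical arc points as a line-structured laser sensor might see them:
    # a short arc of the pipe cross-section circle, with a little image noise.
    cx, cy, R = 10.0, -3.0, 57.0                 # true center and radius [mm]
    t = np.linspace(1.2, 1.9, 80)                # limited angular view of the arc
    x = cx + R * np.cos(t) + rng.normal(0, 0.05, t.size)
    y = cy + R * np.sin(t) + rng.normal(0, 0.05, t.size)

    # Kasa circle fit: x^2 + y^2 = A x + B y + C is linear in (A, B, C).
    M = np.column_stack([x, y, np.ones_like(x)])
    A, B, C = np.linalg.lstsq(M, x**2 + y**2, rcond=None)[0]
    center = np.array([A / 2, B / 2])
    radius = np.sqrt(C + center @ center)
    print("fitted center:", center, "radius:", radius)
    ```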

  11. Regions of constrained maximum likelihood parameter identifiability. [of discrete-time nonlinear dynamic systems with white measurement errors

    Science.gov (United States)

    Lee, C.-H.; Herget, C. J.

    1976-01-01

    This short paper considers the parameter-identification problem for general discrete-time, nonlinear, multiple-input/multiple-output dynamic systems with white Gaussian measurement errors. Knowledge of the system parameterization is assumed to be available. Regions of constrained maximum likelihood (CML) parameter identifiability are established. A computation procedure employing interval arithmetic is proposed for finding explicit regions of parameter identifiability for the case of linear systems.

  12. Can the Misinterpretation Amendment Rate Be Used as a Measure of Interpretive Error in Anatomic Pathology?: Implications of a Survey of the Directors of Anatomic and Surgical Pathology.

    Science.gov (United States)

    Parkash, Vinita; Fadare, Oluwole; Dewar, Rajan; Nakhleh, Raouf; Cooper, Kumarasen

    2017-03-01

    A repeat survey of the Association of the Directors of Anatomic and Surgical Pathology, conducted 10 years after the original, was used to assess trends and variability in classifying scenarios as errors, and the post-signout report modification preferred by the membership for correcting errors. The results were analyzed to determine whether interpretive amendment rates might act as surrogate measures of interpretive error in pathology. An analysis of the responses indicated that primary-level misinterpretations (benign to malignant and vice versa) were universally qualified as error; secondary-level misinterpretations or misclassifications were inconsistently labeled as error. There was added variability in the preferred post-signout report modification used to correct report alterations. The classification of a scenario as error appeared to correlate with the severity of potential harm of the missed call, the perceived subjectivity of the diagnosis, and the ambiguity of reporting terminology. Substantial differences in policies for error detection and optimal reporting format were documented between departments. In conclusion, the inconsistency in labeling scenarios as error, disagreement about the optimal post-signout report modification for correcting errors, and variability in error detection policies preclude the use of the misinterpretation amendment rate as a surrogate measure for error in anatomic pathology. There has been little change in the uniformity of definitions, attitudes, and perceptions of interpretive error in anatomic pathology over the last 10 years.

  13. The Measure of Human Error: Direct and Indirect Performance Shaping Factors

    Energy Technology Data Exchange (ETDEWEB)

    Ronald L. Boring; Candice D. Griffith; Jeffrey C. Joe

    2007-08-01

    The goal of performance shaping factors (PSFs) is to provide measures to account for human performance. PSFs fall into two categories—direct and indirect measures of human performance. While some PSFs such as “time to complete a task” are directly measurable, other PSFs, such as “fitness for duty,” can only be measured indirectly through other measures and PSFs, such as through fatigue measures. This paper explores the role of direct and indirect measures in human reliability analysis (HRA) and the implications that measurement theory has on analyses and applications using PSFs. The paper concludes with suggestions for maximizing the reliability and validity of PSFs.

  14. A Simulation Study of Categorizing Continuous Exposure Variables Measured with Error in Autism Research: Small Changes with Large Effects.

    Science.gov (United States)

    Heavner, Karyn; Burstyn, Igor

    2015-08-24

    Variation in the odds ratio (OR) resulting from selection of cutoffs for categorizing continuous variables is rarely discussed. We present results for the effect of varying cutoffs used to categorize a mismeasured exposure in a simulated population in the context of autism spectrum disorders research. Simulated cohorts were created with three distinct exposure-outcome curves and three measurement error variances for the exposure. ORs were calculated using logistic regression for 61 cutoffs (mean ± 3 standard deviations) used to dichotomize the observed exposure. ORs were calculated for five categories with a wide range for the cutoffs. For each scenario and cutoff, the OR, sensitivity, and specificity were calculated. The three exposure-outcome relationships had distinctly shaped OR (versus cutoff) curves, but increasing measurement error obscured the shape. At extreme cutoffs, there was non-monotonic oscillation in the ORs that cannot be attributed to "small numbers." Exposure misclassification following categorization of the mismeasured exposure was differential, as predicted by theory. Sensitivity was higher among cases and specificity among controls. Cutoffs chosen for categorizing continuous variables can have profound effects on study results. When measurement error is not too great, the shape of the OR curve may provide insight into the true shape of the exposure-disease relationship.
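
    The cutoff sensitivity described here is easy to reproduce. The sketch below, with an invented effect size and error variance rather than the study's simulation design, fits a logistic regression at several cutoffs of a mismeasured exposure and shows the OR drifting with the cutoff.

    ```python
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(9)

    # Hypothetical cohort: logistic outcome in a mismeasured continuous exposure.
    n = 5000
    x = rng.normal(0, 1, n)                          # true exposure
    p = 1 / (1 + np.exp(-(-2.0 + 0.8 * x)))          # linear-logit truth
    case = rng.binomial(1, p)
    w = x + rng.normal(0, 0.5, n)                    # exposure measured with error

    for cut in np.quantile(w, [0.1, 0.3, 0.5, 0.7, 0.9]):
        d = (w > cut).astype(float)                  # dichotomize at this cutoff
        fit = sm.Logit(case, sm.add_constant(d)).fit(disp=0)
        print(f"cutoff {cut:+.2f}  OR {np.exp(fit.params[1]):.2f}")
    ```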

  16. FITTING CURVES TO DESCRIBE ERRORS OF INDICATIONS IN USE OF MEASURING INSTRUMENTS

    OpenAIRE

    2013-01-01

    Measuring instruments are usually calibrated at discrete values, but it is very useful for users to have formulas that describe the errors of indication (and their uncertainties) as a function of the readings over the whole range of the calibrated instrument. In this document, different methods have been analyzed for calibrating weighing instruments; however, these methods can also be used for other measuring instruments.

  17. Retrievals from GOMOS stellar occultation measurements using characterization of modeling errors

    Directory of Open Access Journals (Sweden)

    V. F. Sofieva

    2010-02-01

    Full Text Available In this paper, we discuss the development of the inversion algorithm for the GOMOS (Global Ozone Monitoring by Occultation of Stars) instrument on board the Envisat satellite. The proposed algorithm accurately takes into account the wavelength-dependent modeling errors, which are mainly due to the incomplete scintillation correction in the stratosphere. Special attention is paid to the numerical efficiency of the algorithm. The developed method is tested on a large data set and its advantages are demonstrated. Its main advantage is a proper characterization of the uncertainties of the retrieved profiles of atmospheric constituents, which is of high importance for data assimilation, trend analyses and validation.

  19. Hand-held dynamometry in patients with haematological malignancies: Measurement error in the clinical assessment of knee extension strength

    Directory of Open Access Journals (Sweden)

    Uebelhart Daniel

    2009-03-01

    Full Text Available Abstract Background Hand-held dynamometry is a portable and inexpensive method to quantify muscle strength. To determine if muscle strength has changed, an examiner must know what part of the difference between a patient's pre-treatment and post-treatment measurements is attributable to real change, and what part is due to measurement error. This study aimed to determine the relative and absolute reliability of intra- and inter-observer strength measurements with a hand-held dynamometer (HHD). Methods Two observers performed maximum voluntary peak torque measurements (MVPT) for isometric knee extension in 24 patients with haematological malignancies. For each patient, the measurements were carried out on the same day. The main outcome measures were the intraclass correlation coefficient (ICC ± 95%CI), the standard error of measurement (SEM), the smallest detectable difference (SDD), the relative values as % of the grand mean of the SEM and SDD, and the limits of agreement for the intra- and inter-observer '3 repetition average' and 'highest value of 3 MVPT' knee extension strength measures. Results The intra-observer ICCs were 0.94 for the average of 3 MVPT (95%CI: 0.86–0.97) and 0.86 for the highest value of 3 MVPT (95%CI: 0.71–0.94). The ICCs for the inter-observer measurements were 0.89 for the average of 3 MVPT (95%CI: 0.75–0.95) and 0.77 for the highest value of 3 MVPT (95%CI: 0.54–0.90). The SEMs for the intra-observer measurements were 6.22 Nm (3.98% of the grand mean (GM)) and 9.83 Nm (5.88% of GM). For the inter-observer measurements, the SEMs were 9.65 Nm (6.65% of GM) and 11.41 Nm (6.73% of GM). The SDDs for the generated parameters varied from 17.23 Nm (11.04% of GM) to 27.26 Nm (17.09% of GM) for intra-observer measurements, and 26.76 Nm (16.77% of GM) to 31.62 Nm (18.66% of GM) for inter-observer measurements, with similar results for the limits of agreement. Conclusion The results indicate that there is acceptable relative reliability
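
    The SEM and SDD values quoted above are mutually consistent under the standard reliability formulas (for example, 1.96·√2·6.22 Nm ≈ 17.2 Nm, matching the 17.23 Nm reported). In the usual notation, with s the pooled standard deviation of the measurements:

    \mathrm{SEM} = s\,\sqrt{1-\mathrm{ICC}}, \qquad \mathrm{SDD} = 1.96\,\sqrt{2}\;\mathrm{SEM} \approx 2.77\,\mathrm{SEM}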

  20. On Measurement of Efficiency of Cobb-Douglas Production Function with Additive and Multiplicative Errors

    Directory of Open Access Journals (Sweden)

    Md. Moyazzem Hossain

    2015-02-01

    Full Text Available In developing countries, the efficiency of economic development is assessed through the analysis of industrial production. An examination of the characteristics of the industrial sector is an essential aspect of growth studies. Most developed countries are highly industrialized, in line with the maxim "the more industrialization, the more development". For proper industrialization and industrial development we have to study the industrial input-output relationship, which leads to production analysis. Econometricians believe that industrial production is the most important component of economic development for several reasons: if domestic industrial production increases, GDP will increase; if the elasticity of labor is higher, employment rates will increase; and investment will increase if the elasticity of capital is higher. In this regard, this paper should be helpful in suggesting the most suitable Cobb-Douglas production function to forecast the production process for some selected manufacturing industries of developing countries like Bangladesh. This paper chooses the appropriate Cobb-Douglas function that gives the optimal combination of inputs, that is, the combination that enables production of the desired level of output with minimum cost and hence with maximum profitability, for some selected manufacturing industries of Bangladesh over the period 1978-79 to 2011-2012. The estimated results show that the estimates of both the capital and labor elasticities in the Cobb-Douglas production function with additive errors are more efficient than those of the Cobb-Douglas production function with multiplicative errors.
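
    The two specifications under comparison can be written down directly: the multiplicative-error model log Q = log A + a log K + b log L + e is linear in logs and is typically fit by OLS, while the additive-error model Q = A K^a L^b + e calls for nonlinear least squares. A minimal sketch with invented data follows.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    rng = np.random.default_rng(4)

    # Hypothetical industry data: output Q from capital K and labor L.
    n = 120
    K = rng.lognormal(3, 0.4, n)
    L = rng.lognormal(4, 0.3, n)
    Q = 1.5 * K**0.4 * L**0.6 + rng.normal(0, 5, n)   # additive disturbance

    # Multiplicative-error specification: OLS on logs.
    X = np.column_stack([np.ones(n), np.log(K), np.log(L)])
    bl = np.linalg.lstsq(X, np.log(Q), rcond=None)[0]

    # Additive-error specification: nonlinear least squares.
    f = lambda X, A, a, b: A * X[0]**a * X[1]**b
    ba, _ = curve_fit(f, (K, L), Q, p0=[1.0, 0.5, 0.5])

    print("OLS on logs (multiplicative):", bl)
    print("NLS (additive):", ba)
    ```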