Fitting a Bivariate Measurement Error Model for Episodically Consumed Dietary Components
Zhang, Saijuan
2011-01-06
There has been great public health interest in estimating usual, i.e., long-term average, intake of episodically consumed dietary components that are not consumed daily by everyone, e.g., fish, red meat and whole grains. Short-term measurements of episodically consumed dietary components have zero-inflated skewed distributions. So-called two-part models have been developed for such data in order to correct for measurement error due to within-person variation and to estimate the distribution of usual intake of the dietary component in the univariate case. However, there is arguably much greater public health interest in the usual intake of an episodically consumed dietary component adjusted for energy (caloric) intake, e.g., ounces of whole grains per 1000 kilocalories, which reflects usual dietary composition and adjusts for different total amounts of caloric intake. Because of this public health interest, it is important to have models to fit such data, and it is important that the model-fitting methods can be applied to all episodically consumed dietary components. We have recently developed a nonlinear mixed effects model (Kipnis et al., 2010) and have fit it by maximum likelihood using nonlinear mixed effects programs and methodology (the SAS NLMIXED procedure). Maximum likelihood fitting of such a nonlinear mixed model is generally slow because of 3-dimensional adaptive Gaussian quadrature, and there are times when the programs either fail to converge or converge to models with a singular covariance matrix. For these reasons, we develop a Markov chain Monte Carlo (MCMC) computation for fitting this model, which allows for both frequentist and Bayesian inference. There are technical challenges to developing this solution because one of the covariance matrices in the model is patterned. Our main application is to the National Institutes of Health (NIH)-AARP Diet and Health Study, where we illustrate our methods for modeling the energy-adjusted usual intake of fish and whole grains.
On errors of a unified family of approximation forms of bivariate continuous functions
Chung-Siung Kao
2003-01-01
Approximation forms for regular bivariate functions $f(x,y)$ were obtained by taking expectations of a convergent bivariate stochastic sequence. Proper error bounds are derived herein to evaluate how well these approximation forms perform when actually applied to approximate regular bivariate functions.
Dependence Measures in Bivariate Gamma Frailty Models
van den Berg, Gerard J.; Effraimidis, Georgios
2014-01-01
Bivariate duration data frequently arise in economics, biostatistics and other areas. In "bivariate frailty models", dependence between the frailties (i.e., unobserved determinants) induces dependence between the durations. Using notions of quadrant dependence, we study restrictions that this imposes on the implied dependence of the durations, if the frailty terms act multiplicatively on the corresponding hazard rates. Marginal frailty distributions are often taken to be gamma distributions. ...
A New Measure Of Bivariate Asymmetry And Its Evaluation
In this paper we propose a new measure of bivariate asymmetry, based on conditional correlation coefficients. A decomposition of the Pearson correlation coefficient in terms of its conditional versions is studied and an example of application of the proposed measure is given.
Payment Error Rate Measurement (PERM)
U.S. Department of Health & Human Services — The PERM program measures improper payments in Medicaid and CHIP and produces error rates for each program. The error rates are based on reviews of the...
Wen-Jen Tsay; Peng-Hsuan Ke
2009-01-01
A simple approximation for the bivariate normal cumulative distribution function (BNCDF) based on the error function is derived. The worst-case error of our method is accurate to four decimal places under the various configurations considered in this paper's Table 1. This is much better than Table 1 of Cox and Wermuth (1991) and Table 1 of Lin (1995), where the worst-case error is accurate only to three decimal places. We also apply the proposed method to approximate the likelihood function ...
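The record above does not reproduce Tsay and Ke's formula, so the sketch below is a generic illustration rather than their method: the BNCDF is reduced to a one-dimensional integral over the univariate normal CDF, which is itself expressed through the error function. Function names and the quadrature scheme are assumptions of this sketch.

```python
import math

def phi(x):
    # Standard normal density
    return math.exp(-0.5 * x * x) / math.sqrt(2.0 * math.pi)

def Phi(x):
    # Univariate standard normal CDF via the error function
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def Phi2(h, k, rho, n=4000):
    # P(X <= h, Y <= k) for a standard bivariate normal with correlation |rho| < 1,
    # using Phi2(h,k,rho) = Int_{-inf}^{h} phi(x) * Phi((k - rho*x)/sqrt(1 - rho^2)) dx,
    # approximated by the trapezoid rule on [-8, h].
    lo = -8.0
    s = math.sqrt(1.0 - rho * rho)
    dx = (h - lo) / n
    total = 0.0
    for i in range(n + 1):
        x = lo + i * dx
        w = 0.5 if i in (0, n) else 1.0  # trapezoid end weights
        total += w * phi(x) * Phi((k - rho * x) / s)
    return total * dx
```

A convenient sanity check is the closed form Phi2(0, 0, rho) = 1/4 + arcsin(rho)/(2*pi).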
Job Mobility and Measurement Error
Bergin, Adele
2011-01-01
This thesis consists of essays investigating job mobility and measurement error. Job mobility, captured here as a change of employer, is a striking feature of the labour market. In empirical work on job mobility, researchers often depend on self-reported tenure data to identify job changes. There may be measurement error in these responses and consequently observations may be misclassified as job changes when truly no change has taken place and vice versa. These observations serve as a starti...
On bivariate geometric distribution
Jayakumar, K.; Davis Antony Mundassery
2013-01-01
Characterizations of the bivariate geometric distribution using univariate and bivariate geometric compounding are obtained. Autoregressive models with bivariate geometric marginals are developed. Various bivariate geometric distributions analogous to important bivariate exponential distributions, such as Marshall-Olkin’s, Downton’s and Hawkes’ bivariate exponentials, are presented.
Errors in Chemical Sensor Measurements
Artur Dybko
2001-06-01
Various types of errors in measurements with ion-selective electrodes, ion-sensitive field effect transistors, and fibre optic chemical sensors are described. The errors are divided according to their nature and place of origin into chemical, instrumental and non-chemical. The influence of interfering ions, leakage of membrane components, liquid junction potential, as well as sensor wiring, ambient light and temperature, is presented.
EWMA Chart and Measurement Error
Maravelakis, Petros; Panaretos, John; Psarakis, Stelios
2004-01-01
Measurement error is a distortion factor commonly met in real-world applications that influences the outcome of a process. In this paper, we examine the effect of measurement error on the ability of the EWMA control chart to detect out-of-control situations. The model used is the one involving linear covariates. We investigate the ability of the EWMA chart in the case of a shift in mean. The effect of taking multiple measurements on each sampled unit and the case of linearly increasing varianc...
Measuring verification device error rates
A verification device generates a Type I (II) error when it recommends to reject (accept) a valid (false) identity claim. For a given identity, the rates or probabilities of these errors quantify random variations of the device from claim to claim. These are intra-identity variations. To some degree, these rates depend on the particular identity being challenged, and there exists a distribution of error rates characterizing inter-identity variations. However, for most security system applications we only need to know averages of this distribution. These averages are called the pooled error rates. In this paper the authors present the statistical underpinnings for the measurement of pooled Type I and Type II error rates. The authors consider a conceptual experiment, "a crate of biased coins". This model illustrates the effects of sampling both within trials of the same individual and among trials from different individuals. Application of this simple model to verification devices yields pooled error rate estimates and confidence limits for these estimates. A sample certification procedure for verification devices is given in the appendix.
Measurement error in geometric morphometrics.
Fruciano, Carmelo
2016-06-01
Geometric morphometrics, a set of methods for the statistical analysis of shape once hailed as a revolutionary advancement in the analysis of morphology, is now mature and routinely used in ecology and evolution. However, a factor often disregarded in empirical studies is the presence and extent of measurement error. This is potentially a very serious issue because random measurement error can inflate the amount of variance and, since many statistical analyses are based on the amount of "explained" relative to "residual" variance, can result in loss of statistical power. On the other hand, systematic bias can affect statistical analyses by biasing the results (i.e., variation due to bias is incorporated in the analysis and treated as biologically meaningful variation). Here, I briefly review common sources of error in geometric morphometrics. I then review the most commonly used methods to measure and account for both random and non-random measurement error, providing a worked example using a real dataset. PMID:27038025
Correction of errors in power measurements
Pedersen, Knud Ole Helgesen
1998-01-01
Small errors in voltage and current measuring transformers cause inaccuracies in power measurements. In this report, correction factors are derived to compensate for such errors.
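As a hedged sketch of the kind of correction involved (the symbols and the formula here are illustrative assumptions, not taken from the report): if the voltage and current transformers have ratio errors eU and eI and a combined phase-displacement error delta, the measured active power relates to the true power U*I*cos(phi) through a multiplicative correction factor.

```python
import math

def power_correction_factor(ratio_err_u, ratio_err_i, phase_err_rad, phi_rad):
    # Measured power: P_m = U*(1 + eU) * I*(1 + eI) * cos(phi + delta)
    # True power:     P   = U * I * cos(phi)
    # so P = k * P_m with the correction factor k returned below.
    return math.cos(phi_rad) / (
        (1.0 + ratio_err_u) * (1.0 + ratio_err_i) * math.cos(phi_rad + phase_err_rad)
    )
```

With all transformer errors zero the factor is exactly 1; a pure 0.1% voltage-ratio error at unity power factor gives k = 1/1.001.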
Better Stability with Measurement Errors
Argun, Aykut; Volpe, Giovanni
2016-06-01
Often it is desirable to stabilize a system around an optimal state. This can be effectively accomplished using feedback control, where the system deviation from the desired state is measured in order to determine the magnitude of the restoring force to be applied. Contrary to conventional wisdom, i.e. that a more precise measurement is expected to improve the system stability, here we demonstrate that a certain degree of measurement error can improve the system stability. We exemplify the implications of this finding with numerical examples drawn from various fields, such as the operation of a temperature controller, the confinement of a microscopic particle, the localization of a target by a microswimmer, and the control of a population.
POTASSIUM MEASUREMENT: CAUSES OF ERRORS IN MEASUREMENT
Kavitha; Omprakash
2014-01-01
Recognizing errors in potassium measurement in the laboratory is not an easy task. If falsely elevated potassium levels go unrecognized by the laboratory and the clinician, it is difficult to treat the masked hypokalemic state, which is again a medical emergency. Such cases require proper monitoring by the clinician, so that cases with a history of pseudohyperkalemia, which cannot be easily identified in the laboratory, do not go unrecognized. The aim of this article is t...
Measurement error in a single regressor
Meijer, H.J.; Wansbeek, T.J.
2000-01-01
For the setting of multiple regression with measurement error in a single regressor, we present some very simple formulas to assess the result that one may expect when correcting for measurement error. It is shown where the corrected estimated regression coefficients and the error variance may lie,
Impact of Measurement Error on Synchrophasor Applications
Liu, Yilu [Univ. of Tennessee, Knoxville, TN (United States); Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Gracia, Jose R. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Ewing, Paul D. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Zhao, Jiecheng [Univ. of Tennessee, Knoxville, TN (United States); Tan, Jin [Univ. of Tennessee, Knoxville, TN (United States); Wu, Ling [Univ. of Tennessee, Knoxville, TN (United States); Zhan, Lingwei [Univ. of Tennessee, Knoxville, TN (United States)
2015-07-01
Phasor measurement units (PMUs), a type of synchrophasor, are powerful diagnostic tools that can help avert catastrophic failures in the power grid. Because of this, PMU measurement errors are particularly worrisome. This report examines the internal and external factors contributing to PMU phase angle and frequency measurement errors and gives a reasonable explanation for them. It also analyzes the impact of those measurement errors on several synchrophasor applications: event location detection, oscillation detection, islanding detection, and dynamic line rating. The primary finding is that dynamic line rating is more likely to be influenced by measurement error. Other findings include the possibility of reporting nonoscillatory activity as an oscillation as the result of error, failing to detect oscillations submerged by error, and the unlikely impact of error on event location and islanding detection.
Measurement error models, methods, and applications
Buonaccorsi, John P
2010-01-01
Over the last 20 years, comprehensive strategies for treating measurement error in complex models and accounting for the use of extra data to estimate measurement error parameters have emerged. Focusing on both established and novel approaches, "Measurement Error: Models, Methods, and Applications" provides an overview of the main techniques and illustrates their application in various models. It describes the impacts of measurement errors on naive analyses that ignore them and presents ways to correct for them across a variety of statistical models, from simple one-sample problems to regres
Error calculations statistics in radioactive measurements
Basic approaches and procedures frequently used in the practice of radioactive measurements. The statistical principles applied are part of Good Radiopharmaceutical Practices and quality assurance. Concept of error; classification into systematic and random errors. Statistical fundamentals: probability theory, population distributions, Bernoulli, Poisson, Gauss and t-test distributions, χ² test, and error propagation based on analysis of variance. Bibliography. z table, t-test table, Poisson index, χ² test.
KMRR thermal power measurement error estimation
The thermal power measurement error of the Korea Multi-purpose Research Reactor has been estimated by a statistical Monte Carlo method, and compared with those obtained by the other methods including deterministic and statistical approaches. The results show that the specified thermal power measurement error of 5% cannot be achieved if the commercial RTDs are used to measure the coolant temperatures of the secondary cooling system and the error can be reduced below the requirement if the commercial RTDs are replaced by the precision RTDs. The possible range of the thermal power control operation has been identified to be from 100% to 20% of full power
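A minimal illustration of the statistical Monte Carlo idea, with hypothetical numbers (the flow rate, temperatures and RTD error standard deviations below are assumptions, not values from the study): perturb the inlet and outlet temperature readings with random RTD errors and observe the spread of the computed thermal power P = m_dot * cp * (T_out - T_in).

```python
import math
import random

def thermal_power_error(sigma_T, m_dot=300.0, cp=4.18, T_in=35.0, T_out=45.0,
                        n=200_000, seed=1):
    # Monte Carlo: perturb each temperature reading with N(0, sigma_T) RTD error,
    # recompute P = m_dot * cp * (T_out - T_in), and return the relative
    # standard deviation of P as a percentage of the true power.
    rng = random.Random(seed)
    P0 = m_dot * cp * (T_out - T_in)
    acc = 0.0
    for _ in range(n):
        dT = (T_out + rng.gauss(0.0, sigma_T)) - (T_in + rng.gauss(0.0, sigma_T))
        acc += (m_dot * cp * dT - P0) ** 2
    return 100.0 * math.sqrt(acc / n) / P0
```

The Monte Carlo estimate should approach the analytic value 100*sqrt(2)*sigma_T/dT percent; with the assumed 10 K temperature rise, a 0.3 K RTD error gives roughly 4.2% while a 0.03 K precision RTD gives roughly 0.42%.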
Generalized bivariate Fibonacci polynomials
Catalani, Mario
2002-01-01
We define generalized bivariate polynomials, from which upon specification of initial conditions the bivariate Fibonacci and Lucas polynomials are obtained. Using essentially a matrix approach we derive identities and inequalities that in most cases generalize known results.
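Assuming the standard convention for these polynomials (an assumption of this sketch, not a quotation from the paper): H_n(x, y) = x*H_{n-1}(x, y) + y*H_{n-2}(x, y), where the initial conditions select the family, (H_0, H_1) = (0, 1) for the bivariate Fibonacci polynomials and (2, x) for the bivariate Lucas polynomials.

```python
def bivariate_fib(n, x, y, h0=0, h1=1):
    # Generalized bivariate Fibonacci recurrence:
    #   H_n = x * H_{n-1} + y * H_{n-2}
    # h0, h1 choose the family: (0, 1) -> Fibonacci, (2, x) -> Lucas.
    a, b = h0, h1
    if n == 0:
        return a
    for _ in range(n - 1):
        a, b = b, x * b + y * a
    return b
```

At x = y = 1 the Fibonacci case reduces to the ordinary Fibonacci numbers and the Lucas case (h0=2, h1=1) to the Lucas numbers.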
Assessing Measurement Error in Medicare Coverage
U.S. Department of Health & Human Services — Assessing Measurement Error in Medicare Coverage From the National Health Interview Survey Using linked administrative data, to validate Medicare coverage estimates...
Nonclassical measurements errors in nonlinear models
Madsen, Edith; Mulalic, Ismir
Discrete choice models, and in particular logit type models, play an important role in understanding and quantifying individual or household behavior in relation to transport demand. An example is the choice of travel mode for a given trip under the budget and time restrictions that the individuals face... We find that the classical measurement error model (for the logarithm of income) is valid except in the tails of the income distribution, where those with low (high) income tend to over (under) report. In addition, we find that the marginal distribution of the measurement errors is symmetric and leptokurtic, and that if the distribution of measurement errors is symmetric and the distribution of the underlying true income is skewed, then there are valid technical instruments. We investigate how this IV estimation approach works in theory and illustrate it by simulation studies using the findings about the measurement error model for income...
Bivariate discrete Linnik distribution
Davis Antony Mundassery; Jayakumar, K.
2014-10-01
Christoph and Schreiber (1998a) studied the discrete analogue of the positive Linnik distribution and obtained its characterizations using the survival function. In this paper, we introduce a bivariate form of the discrete Linnik distribution and study its distributional properties. Characterizations of the bivariate distribution are obtained using compounding schemes. Autoregressive processes are developed whose marginals follow the bivariate discrete Linnik distribution.
Measuring Cyclic Error in Laser Heterodyne Interferometers
Ryan, Daniel; Abramovici, Alexander; Zhao, Feng; Dekens, Frank; An, Xin; Azizi, Alireza; Chapsky, Jacob; Halverson, Peter
2010-01-01
An improved method and apparatus have been devised for measuring cyclic errors in the readouts of laser heterodyne interferometers that are configured and operated as displacement gauges. The cyclic errors arise as a consequence of mixing of spurious optical and electrical signals in beam launchers that are subsystems of such interferometers. The conventional approach to measurement of cyclic error involves phase measurements and yields values precise to within about 10 pm over air optical paths at laser wavelengths in the visible and near infrared. The present approach, which involves amplitude measurements instead of phase measurements, yields values precise to about 0.1 pm, about 100 times the precision of the conventional approach. In a displacement gauge of the type of interest here, the laser heterodyne interferometer is used to measure any change in distance along an optical axis between two corner-cube retroreflectors. One of the corner-cube retroreflectors is mounted on a piezoelectric transducer (see figure), which is used to introduce a low-frequency periodic displacement that can be measured by the gauges. The transducer is excited at a frequency of 9 Hz by a triangular waveform to generate a 9-Hz triangular-wave displacement having an amplitude of 25 microns. The displacement gives rise to both amplitude and phase modulation of the heterodyne signals in the gauges. The modulation includes cyclic error components, and the magnitude of the cyclic-error component of the phase modulation is what one needs to measure in order to determine the magnitude of the cyclic displacement error. The precision attainable in the conventional (phase measurement) approach to measuring cyclic error is limited because the phase measurements are af-
Measurement error in longitudinal film badge data
Marsh, J L
2002-01-01
Initial logistic regressions turned up some surprising contradictory results which led to a re-sampling of Sellafield mortality controls without the date of employment matching factor. It is suggested that over matching is the cause of the contradictory results. Comparisons of the two measurements of radiation exposure suggest a strongly linear relationship with non-Normal errors. A method has been developed using the technique of Regression Calibration to deal with these in a case-control study context, and applied to this Sellafield study. The classical measurement error model is that of a simple linear regression with unobservable variables. Information about the covariates is available only through error-prone measurements, usually with an additive structure. Ignoring errors has been shown to result in biased regression coefficients, reduced power of hypothesis tests and increased variability of parameter estimates. Radiation is known to be a causal factor for certain types of leukaemia. This link is main...
Methodological errors in radioisotope flux measurements
Egnor, R.W.; Vaccarezza, S.G.; Charney, A.N. (New York Univ. School of Medicine, New York (USA))
1988-11-01
The authors examined several sources of error in isotopic flux measurements in a commonly used experimental model: the study of ²²Na and ³⁶Cl fluxes across rat ileal tissue mounted in the Ussing flux chamber. The experiment revealed three important sources of error: the absolute counts per minute, the difference in counts per minute between serial samples, and averaging of serial samples. By computer manipulation, they then applied hypothetical changes in the experimental protocol to generalize these findings and assess the effect and interaction of the absolute counts per minute, the sampling interval, and the counting time on the magnitude of the error. They found that the error of a flux measurement will vary inversely with the counting time and the difference between the consecutive sample counts per minute used in the flux calculations and will vary directly with the absolute counts per minute of each sample. Alteration of the hot side specific activity, the surface area of the tissue across which flux is measured and the sample volume have a smaller impact on measurement error. Experimental protocols should be designed with these methodological considerations in mind to minimize the error inherent in measuring isotope flux.
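A simplified view of why counting time matters, using standard Poisson counting statistics rather than the authors' full error model: the accumulated count N = rate * time has standard deviation sqrt(N), so the relative one-sigma error of a count falls as 1/sqrt(rate * time).

```python
import math

def relative_count_error(count_rate_cpm, counting_time_min):
    # For Poisson counting, N = rate * time and sigma_N = sqrt(N),
    # so the relative (1-sigma) counting error is 1 / sqrt(N).
    n = count_rate_cpm * counting_time_min
    return 1.0 / math.sqrt(n)
```

For example, 10,000 cpm counted for one minute gives a 1% counting error; quadrupling the counting time halves it.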
Measurement error analysis of taxi meter
He, Hong; Li, Dan; Li, Hang; Zhang, Da-Jian; Hou, Ming-Feng; Zhang, Shi-pu
2011-12-01
Verification of a taximeter covers two aspects: (1) testing the time error of the taximeter and (2) testing the distance error of the machine in use. The paper first gives the working principle of the meter and the principle of the error verification device. Based on JJG517-2009, "Taximeter Verification Regulation", the paper focuses on analyzing the machine error and test error of the taximeter, and discusses the detection methods for time error and distance error. Under the same conditions, Type A standard uncertainty components are evaluated, while under different conditions, Type B standard uncertainty components are evaluated and measured repeatedly. Comparison and analysis of the results show that the meter conforms to JJG517-2009, "Taximeter Verification Regulation", which improves accuracy and efficiency considerably. In practice, the meter not only compensates for limited accuracy but also ensures that the transaction between drivers and passengers is fair, enriching the value of the taxi as a mode of transportation.
Statistical error analysis of reactivity measurement
Thammaluckan, Sithisak; Hah, Chang Joo [KEPCO International Nuclear Graduate School, Ulsan (Korea, Republic of)
2013-10-15
The goal of this research is to investigate the effect of core size and magnitude of control rod worth on the error of reactivity measurement using statistics. The point kinetics model had been used to measure control rod worth without 3D spatial information of the neutron flux or power distribution, which causes inaccurate results. Dynamic Control rod Reactivity Measurement (DCRM) was employed to take the 3D spatial information of the flux into account in the point kinetics model. The measured bank worth probably contains some uncertainties, such as methodology uncertainty and measurement uncertainty, and those uncertainties may vary with the size of the core and the magnitude of the reactivity. After statistical analysis, it was confirmed that each group was sampled from the same population. It is observed in Table 7 that the mean error decreases as core size increases. Application of the bias factor obtained from this research reduces the mean error further.
Systematic Errors in Black Hole Mass Measurements
McConnell, Nicholas J.
2014-01-01
Compilations of stellar- and gas-dynamical measurements of supermassive black holes are often assembled without quantifying systematic errors from various assumptions in the dynamical modeling processes. Using a simple Monte-Carlo approach, I will discuss the level to which different systematic effects could bias scaling relations between black holes and their host galaxies. Given that systematic errors will not be eradicated in the near future, how wrong can we afford to be?
Quantifying and handling errors in instrumental measurements using the measurement error theory
Andersen, Charlotte Møller; Bro, R.; Brockhoff, P.B.
2003-01-01
Measurement error modelling is used for investigating the influence of measurement/sampling error on univariate predictions of water content and water-holding capacity (reference measurements) from nuclear magnetic resonance (NMR) relaxations (instrumental measurements) measured on two gadoid fish species. This is a new way of using the measurement error theory. Reliability ratios illustrate that the models for the two fish species are influenced differently by the error. However, the error seems to influence the predictions of the two reference measures in the same way. A new general formula is given for how to correct the least squares regression coefficient when a different number of replicated x-measurements is used for prediction than for calibration. It is shown that the correction should be applied when the number of replicates in prediction... The effect of using replicated x-measurements...
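The correction mentioned in the record can be illustrated with the classical attenuation/reliability-ratio formula for replicated x-measurements (a textbook sketch with assumed notation, not necessarily the paper's general formula): averaging k replicates shrinks the error variance to var_err/k, and the naive least squares slope is attenuated by the resulting reliability ratio.

```python
def reliability_ratio(var_x, var_err, k):
    # Reliability of the mean of k replicate measurements of x:
    # lambda_k = var_x / (var_x + var_err / k); it approaches 1 as k grows.
    return var_x / (var_x + var_err / k)

def corrected_slope(naive_slope, var_x, var_err, k):
    # Classical attenuation correction: the naive OLS slope is biased toward
    # zero by the factor lambda_k, so divide by it to de-attenuate.
    return naive_slope / reliability_ratio(var_x, var_err, k)
```

With equal signal and error variances, a single measurement has reliability 0.5 (the naive slope is halved), while averaging four replicates raises it to 0.8.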
Sills, E Scott
2011-12-02
Background: To report on relationships among baseline serum anti-Müllerian hormone (AMH) measurements, blastocyst development and other selected embryology parameters observed in non-donor oocyte IVF cycles. Methods: Pre-treatment AMH was measured in patients undergoing IVF (n = 79) and retrospectively correlated to in vitro embryo development noted during culture. Results: Mean (± SD) age for study patients in this study group was 36.3 ± 4.0 (range = 28-45) yrs, and mean (± SD) terminal serum estradiol during IVF was 5929 ± 4056 pmol/l. A moderate positive correlation (0.49; 95% CI 0.31 to 0.65) was noted between basal serum AMH and number of MII oocytes retrieved. Similarly, a moderate positive correlation (0.44) was observed between serum AMH and number of early cleavage-stage embryos (95% CI 0.24 to 0.61), suggesting a relationship between serum AMH and embryo development in IVF. Of note, serum AMH levels at baseline were significantly different for patients who did and did not undergo blastocyst transfer (15.6 vs. 10.9 pmol/l; p = 0.029). Conclusions: While serum AMH has found increasing application as a predictor of ovarian reserve for patients prior to IVF, its roles to estimate in vitro embryo morphology and potential to advance to blastocyst stage have not been extensively investigated. These data suggest that baseline serum AMH determinations can help forecast blastocyst development during IVF. Serum AMH measured before treatment may assist patients, clinicians and embryologists as scheduling of embryo transfer is outlined. Additional studies are needed to confirm these correlations and to better define the role of baseline serum AMH level in the prediction of blastocyst formation.
Bivariate Uniform Deconvolution
Benešová, Martina; van Es, Bert; Tegelaar, Peter
2011-01-01
We construct a density estimator in the bivariate uniform deconvolution model. For this model we derive four inversion formulas to express the bivariate density that we want to estimate in terms of the bivariate density of the observations. By substituting a kernel density estimator of the density of the observations we then get four different estimators. Next we construct an asymptotically optimal convex combination of these four estimators. Expansions for the bias, variance, as well as asym...
Measurement Error in Access to Markets
Javier Escobal; Sonia Laszlo
2005-01-01
Studies in the microeconometric literature increasingly utilize distance to or time to reach markets or social services as determinants of economic issues. These studies typically use self-reported measures from survey data, often characterized by non-classical measurement error. This paper is the first validation study of access to markets data. New and unique data from Peru allow comparison of self-reported variables with scientifically calculated variables. We investigate the determinants ...
Multiple Indicators, Multiple Causes Measurement Error Models
Tekwe, Carmen D.; Carter, Randy L.; Cullings, Harry M.; Carroll, Raymond J.
2014-01-01
Multiple Indicators, Multiple Causes Models (MIMIC) are often employed by researchers studying the effects of an unobservable latent variable on a set of outcomes, when causes of the latent variable are observed. There are times however when the causes of the latent variable are not observed because measurements of the causal variable are contaminated by measurement error. The objectives of this paper are: (1) to develop a novel model by extending the classical linear MIMIC model to allow bot...
Zhan Zhiqiang
2016-01-01
In the traceability of digital modulation quality parameters, the Error Vector Magnitude, Magnitude Error and Phase Error must be traced, and the measurement uncertainty of these parameters needs to be assessed. Although the calibration specification JJF1128-2004, Calibration Specification for Vector Signal Analyzers, has been published domestically, its measurement uncertainty evaluation is unreasonable: the parameters selected are incorrect, and not all error terms are included in the evaluation. This article gives the formulas for magnitude error and phase error, then presents the measurement uncertainty evaluation processes for magnitude and phase errors.
Errors in practical measurement in surveying, engineering, and technology
This book discusses statistical measurement, error theory, and statistical error analysis. Topics include an introduction to measurement, measurement errors, the reliability of measurements, probability theory of errors, measures of reliability, reliability of repeated measurements, propagation of errors in computing, errors and weights, practical application of the theory of errors in measurement, and two-dimensional errors; a bibliography is included. Appendices address significant figures in measurement; basic concepts of probability and the normal probability curve; writing a sample specification for a procedure; classification, standards of accuracy, and general specifications of geodetic control surveys; the geoid; the frequency distribution curve; and the computer and calculator solution of problems.
Christian Meyer
2009-01-01
We collect well known and less known facts about the bivariate normal distribution and translate them into copula language. In addition, we prove a very general formula for the bivariate normal copula, we compute Gini's gamma, and we provide improved bounds and approximations on the diagonal.
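The bivariate normal (Gaussian) copula itself is C_rho(u, v) = Phi2(Phi^-1(u), Phi^-1(v); rho). The sketch below is a self-contained numerical illustration; bisection for the quantile and one-dimensional quadrature for Phi2 are simplifications of this sketch, not Meyer's formulas or bounds.

```python
import math

def Phi(x):
    # Standard normal CDF via the error function
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def Phi_inv(p, lo=-8.0, hi=8.0):
    # Standard normal quantile by bisection (sufficient for a sketch)
    for _ in range(80):
        mid = 0.5 * (lo + hi)
        if Phi(mid) < p:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def gaussian_copula(u, v, rho, n=2000):
    # C_rho(u, v) = Phi2(Phi^-1(u), Phi^-1(v); rho), with Phi2 computed by
    # trapezoid quadrature of phi(x) * Phi((k - rho*x)/sqrt(1 - rho^2)) over [-8, h].
    h, k = Phi_inv(u), Phi_inv(v)
    s = math.sqrt(1.0 - rho * rho)
    lo = -8.0
    dx = (h - lo) / n
    total = 0.0
    for i in range(n + 1):
        x = lo + i * dx
        w = 0.5 if i in (0, n) else 1.0
        pdf = math.exp(-0.5 * x * x) / math.sqrt(2.0 * math.pi)
        total += w * pdf * Phi((k - rho * x) / s)
    return total * dx
```

Two checks: at rho = 0 the copula reduces to the independence copula u*v, and on the diagonal C(1/2, 1/2) = 1/4 + arcsin(rho)/(2*pi).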
Morgenstern type bivariate Lindley Distribution
V S Vaidyanathan
2016-06-01
In this paper, a bivariate Lindley distribution using the Morgenstern approach is proposed, which can be used for modeling bivariate lifetime data. Some characteristics of the distribution, such as the moment generating function, joint moments, Pearson correlation coefficient, survival function, hazard rate function, mean residual life function, vitality function and stress-strength parameter R = Pr(Y < X), are derived.
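Under the standard Morgenstern (Farlie-Gumbel-Morgenstern, FGM) construction, which I assume is the one intended here, the joint CDF is H(x, y) = F(x)G(y)[1 + alpha(1 - F(x))(1 - G(y))] with |alpha| <= 1, and the Lindley(theta) marginal CDF is F(x) = 1 - (1 + theta + theta*x)/(1 + theta) * exp(-theta*x). A sketch:

```python
import math

def lindley_cdf(x, theta):
    # Lindley(theta) CDF: F(x) = 1 - (1 + theta + theta*x)/(1 + theta) * exp(-theta*x)
    if x < 0:
        return 0.0
    return 1.0 - (1.0 + theta + theta * x) / (1.0 + theta) * math.exp(-theta * x)

def fgm_lindley_cdf(x, y, theta1, theta2, alpha):
    # Morgenstern (FGM) bivariate CDF with Lindley marginals:
    # H(x, y) = F(x) G(y) [1 + alpha (1 - F(x)) (1 - G(y))], |alpha| <= 1
    fx = lindley_cdf(x, theta1)
    gy = lindley_cdf(y, theta2)
    return fx * gy * (1.0 + alpha * (1.0 - fx) * (1.0 - gy))
```

Setting alpha = 0 recovers independent Lindley marginals, a convenient correctness check.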
REGIONAL DISTRIBUTION OF MEASUREMENT ERROR IN DTI
Marenco, Stefano; Rawlings, Robert; Rohde, Gustavo K.; Barnett, Alan S.; Honea, Robyn A.; Pierpaoli, Carlo; Weinberger, Daniel R.
2006-01-01
The characterization of measurement error is critical in assessing the significance of diffusion tensor imaging (DTI) findings in longitudinal and cohort studies of psychiatric disorders. We studied 20 healthy volunteers, each scanned twice (average interval between scans of 51 ± 46.8 days) with a single-shot echo planar DTI technique. Inter-session variability for fractional anisotropy (FA) and Trace (D) was represented as absolute variation (standard deviation within subjects: SDw), perc...
Bivariate extreme value distributions
Elshamy, M.
1992-01-01
In certain engineering applications, such as those occurring in the analyses of ascent structural loads for the Space Transportation System (STS), some of the load variables have a lower bound of zero. Thus, the need for practical models of bivariate extreme value probability distribution functions with lower limits was identified. We discuss the Gumbel models and present practical forms of bivariate extreme probability distributions of Weibull and Frechet types with two parameters. Bivariate extreme value probability distribution functions can be expressed in terms of the marginal extremal distributions and a 'dependence' function subject to certain analytical conditions. Properties of such bivariate extreme distributions, sums and differences of paired extremals, as well as the corresponding forms of conditional distributions, are discussed. Practical estimation techniques are also given.
Orthogonality of inductosyn angle-measuring system error and error-separating technology
任顺清; 曾庆双; 王常虹
2003-01-01
Round inductosyn is widely used in inertial navigation test equipment, and its accuracy has a significant effect on the general accuracy of the equipment. Four main errors of round inductosyn, i.e., the first-order long-period (360°) harmonic error, the second-order long-period harmonic error, the first-order short-period harmonic error and the second-order short-period harmonic error, are described, and the orthogonality of these four kinds of errors is studied. An error-separating technology is proposed to separate these four kinds of errors, and in the process of separating the short-period harmonic errors, the arrangement in the order of the decimal part of the angle pitch number can be omitted. The effectiveness of the proposed technology is proved through measuring and adjusting the angular errors.
Radiation risk estimation based on measurement error models
Masiuk, Sergii; Shklyar, Sergiy; Chepurny, Mykola; Likhtarov, Illya
2016-01-01
This monograph discusses statistics and risk estimates applied to radiation damage under the presence of measurement errors. The first part covers nonlinear measurement error models, with a particular emphasis on efficiency of regression parameter estimators. In the second part, risk estimation in models with measurement errors is considered. Efficiency of the methods presented is verified using data from radio-epidemiological studies.
Giuseppe Arbia
2007-10-01
In this paper we extend the concept of Value-at-Risk (VaR) to bivariate return distributions in order to obtain measures of the market risk of an asset that take into account additional features linked to downside risk exposure. We first present a general definition of risk as the probability of an adverse event over a random distribution, and we then introduce a measure of market risk (b-VaR) that admits the traditional beta of an asset in portfolio management as a special case when asset returns are normally distributed. Empirical evidence is provided using Italian stock market data.
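For the normally distributed special case mentioned above, the univariate VaR that b-VaR generalizes is just a tail quantile of the return distribution: VaR_α = −(μ + σ·z_α) with Φ(z_α) = α. A minimal sketch (the return parameters are made up; the quantile is found by bisection, not the paper's method):

```python
from math import erf, sqrt

def Phi(t):
    """Standard normal CDF."""
    return 0.5 * (1.0 + erf(t / sqrt(2.0)))

def normal_var(mu, sigma, alpha=0.05):
    """VaR at level alpha for N(mu, sigma^2) returns: the loss threshold
    exceeded with probability alpha, VaR = -(mu + sigma * z_alpha)."""
    lo, hi = -10.0, 10.0
    for _ in range(200):           # bisection for the alpha-quantile z_alpha
        mid = 0.5 * (lo + hi)
        if Phi(mid) < alpha:
            lo = mid
        else:
            hi = mid
    z = 0.5 * (lo + hi)
    return -(mu + sigma * z)

# e.g. daily mean return 0.05%, volatility 2%: the 5% VaR is about a 3.24% loss
print(round(normal_var(0.0005, 0.02, 0.05), 4))  # ~0.0324
```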
On positive bivariate quartic forms
Sharipov, Ruslan
2015-01-01
A bivariate quartic form is a homogeneous bivariate polynomial of degree four. A criterion of positivity for such a form is known. In the present paper this criterion is reformulated in terms of pseudotensorial invariants of the form.
Error analysis and data reduction for interferometric surface measurements
Zhou, Ping
High-precision optical systems are generally tested using interferometry, since it often is the only way to achieve the desired measurement precision and accuracy. Interferometers can generally measure a surface to an accuracy of one hundredth of a wave. In order to achieve an accuracy to the next order of magnitude, one thousandth of a wave, each error source in the measurement must be characterized and calibrated. Errors in interferometric measurements are classified into random errors and systematic errors. An approach to estimate random errors in the measurement is provided, based on the variation in the data. Systematic errors, such as retrace error, imaging distortion, and error due to diffraction effects, are also studied in this dissertation. Methods to estimate the first order geometric error and errors due to diffraction effects are presented. Interferometer phase modulation transfer function (MTF) is another intrinsic error. The phase MTF of an infrared interferometer is measured with a phase Siemens star, and a Wiener filter is designed to recover the middle spatial frequency information. Map registration is required when there are two maps tested in different systems and one of these two maps needs to be subtracted from the other. Incorrect mapping causes wavefront errors. A smoothing filter method is presented which can reduce the sensitivity to registration error and improve the overall measurement accuracy. Interferometric optical testing with computer-generated holograms (CGH) is widely used for measuring aspheric surfaces. The accuracy of the drawn pattern on a hologram decides the accuracy of the measurement. Uncertainties in the CGH manufacturing process introduce errors in holograms and then the generated wavefront. An optimal design of the CGH is provided which can reduce the sensitivity to fabrication errors and give good diffraction efficiency for both chrome-on-glass and phase etched CGHs.
Modeling Errors in Daily Precipitation Measurements: Additive or Multiplicative?
Tian, Yudong; Huffman, George J.; Adler, Robert F.; Tang, Ling; Sapiano, Matthew; Maggioni, Viviana; Wu, Huan
2013-01-01
The definition and quantification of uncertainty depend on the error model used. For uncertainties in precipitation measurements, two types of error models have been widely adopted: the additive error model and the multiplicative error model. This leads to incompatible specifications of uncertainties and impedes intercomparison and application. In this letter, we assess the suitability of both models for satellite-based daily precipitation measurements in an effort to clarify the uncertainty representation. Three criteria were employed to evaluate the applicability of either model: (1) better separation of the systematic and random errors; (2) applicability to the large range of variability in daily precipitation; and (3) better predictive skills. It is found that the multiplicative error model is a much better choice under all three criteria. It extracted the systematic errors more cleanly, was more consistent with the large variability of precipitation measurements, and produced superior predictions of the error characteristics. The additive error model had several weaknesses, such as non-constant variance resulting from systematic errors leaking into random errors, and the lack of prediction capability. Therefore, the multiplicative error model is a better choice.
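The non-constant-variance weakness of the additive model can be seen on synthetic data. The sketch below generates a multiplicative error process y = x·exp(ε) and compares residual spreads in the original and log domains; all values and thresholds are illustrative, not the satellite analysis of the letter:

```python
import math
import random

random.seed(7)

# Synthetic "truth" x and "measurement" y under a multiplicative error
# process y = x * exp(eps), eps ~ N(0, 0.3).  Illustrative values only.
x = [random.uniform(0.5, 50.0) for _ in range(2000)]
y = [xi * math.exp(random.gauss(0.0, 0.3)) for xi in x]

def spread(values):
    """Standard deviation (population form) of a list."""
    m = sum(values) / len(values)
    return math.sqrt(sum((v - m) ** 2 for v in values) / len(values))

# Additive-model residuals y - x: their spread grows with the magnitude
# of x, so systematic behaviour leaks into the "random" component.
small = [yi - xi for xi, yi in zip(x, y) if xi < 10.0]
large = [yi - xi for xi, yi in zip(x, y) if xi >= 40.0]

# Multiplicative-model residuals log(y/x): spread is stable across magnitudes.
log_small = [math.log(yi / xi) for xi, yi in zip(x, y) if xi < 10.0]
log_large = [math.log(yi / xi) for xi, yi in zip(x, y) if xi >= 40.0]

print(spread(small) < spread(large))                       # additive spread grows with x
print(abs(spread(log_small) - spread(log_large)) < 0.08)   # log-domain spread is stable
```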
Sonne-Schmidt, Christoffer Scavenius; Tarp, Finn; Østerdal, Lars Peter Raahave
2016-01-01
This paper introduces a concept of inequality comparisons with ordinal bivariate categorical data. In our model, one population is more unequal than another when they have common arithmetic median outcomes and the first can be obtained from the second by correlation-increasing switches and/or median-preserving spreads. For the canonical 2 × 2 case (with two binary indicators), we derive a simple operational procedure for checking ordinal inequality relations in practice. As an illustration, we apply the model to childhood deprivation in Mozambique.
Slope Error Measurement Tool for Solar Parabolic Trough Collectors: Preprint
Stynes, J. K.; Ihas, B.
2012-04-01
The National Renewable Energy Laboratory (NREL) has developed an optical measurement tool for parabolic solar collectors that measures the combined errors due to absorber misalignment and reflector slope error. The combined absorber alignment and reflector slope errors are measured using a digital camera to photograph the reflected image of the absorber in the collector. Previous work using the image of the reflection of the absorber finds the reflector slope errors from the reflection of the absorber and an independent measurement of the absorber location. The accuracy of the reflector slope error measurement is thus dependent on the accuracy of the absorber location measurement. By measuring the combined reflector-absorber errors, the uncertainty in the absorber location measurement is eliminated. The related performance merit, the intercept factor, depends on the combined effects of the absorber alignment and reflector slope errors. Measuring the combined effect provides a simpler measurement and a more accurate input to the intercept factor estimate. The minimal equipment and setup required for this measurement technique make it ideal for field measurements.
朱复康; 王德军
2007-01-01
In this paper, we consider median unbiased estimation of bivariate predictive regression models with non-normal, heavy-tailed or heteroscedastic errors. We construct confidence intervals and a median unbiased estimator for the parameter of interest. We show via simulation that the proposed estimator has better predictive potential than the usual least squares estimator. An empirical application to finance is given, and a possible extension of the estimation procedure to cointegration models is also described.
Triphasic MRI of pelvic organ descent: sources of measurement error
Morren, Geert L. [Bowel and Digestion Centre, The Oxford Clinic, 38 Oxford Terrace, Christchurch (New Zealand)]. E-mail: geert_morren@hotmail.com; Balasingam, Adrian G. [Christchurch Radiology Group, P.O. Box 21107, 4th Floor, Leicester House, 291 Madras Street, Christchurch (New Zealand); Wells, J. Elisabeth [Department of Public Health and General Medicine, Christchurch School of Medicine, St. Elmo Courts, Christchurch (New Zealand); Hunter, Anne M. [Christchurch Radiology Group, P.O. Box 21107, 4th Floor, Leicester House, 291 Madras Street, Christchurch (New Zealand); Coates, Richard H. [Christchurch Radiology Group, P.O. Box 21107, 4th Floor, Leicester House, 291 Madras Street, Christchurch (New Zealand); Perry, Richard E. [Bowel and Digestion Centre, The Oxford Clinic, 38 Oxford Terrace, Christchurch (New Zealand)
2005-05-01
Purpose: To identify sources of error when measuring pelvic organ displacement during straining using triphasic dynamic magnetic resonance imaging (MRI). Materials and methods: Ten healthy nulliparous women underwent triphasic dynamic 1.5 T pelvic MRI twice, with 1 week between studies. The bladder was filled with 200 ml of a saline solution; the vagina and rectum were opacified with ultrasound gel. T2-weighted images in the sagittal plane were analysed twice by each of the two observers in a blinded fashion. Horizontal and vertical displacement of the bladder neck, bladder base, introitus vaginae, posterior fornix, cul-de-sac, pouch of Douglas, anterior rectal wall, anorectal junction and change of the vaginal axis were measured eight times in each volunteer (two images, each read twice by two observers). Variance components were calculated for subject, observer, week, interactions of these three factors, and pure error. An overall standard error of measurement was calculated for a single observation by one observer on a film from one woman at one visit. Results: For the majority of anatomical reference points, the range of displacements measured was wide and the overall measurement error was large. Intra-observer error and week-to-week variation within a subject were important sources of measurement error. Conclusion: Important sources of measurement error when using triphasic dynamic MRI to measure pelvic organ displacement during straining were identified. Recommendations to minimize those errors are made.
Quantum Estimation Theory of Error and Disturbance in Quantum Measurement
Watanabe, Yu; Ueda, Masahito
2011-01-01
We formulate the error and disturbance in quantum measurement by invoking quantum estimation theory. The disturbance formulated here characterizes the non-unitary state change caused by the measurement. We prove that the product of the error and disturbance is bounded from below by the commutator of the observables. We also find the attainable bound of the product.
A measurement error model for microarray data analysis
ZHOU Yiming; CHENG Jing
2005-01-01
Microarray technology has been widely used to analyze gene expression levels by detecting fluorescence intensity in a high-throughput fashion. However, since the measurement error produced by various sources in microarray experiments is heterogeneous and too large to be ignored, we propose here a measurement error model for microarray data processing, by which the standard deviation of the measurement error is demonstrated to increase linearly with fluorescence intensity. A robust algorithm, which estimates the parameters of the measurement error model from a single microarray without replicated spots, is provided. The model and the algorithm for estimating the parameters from a given data set are tested on both a real data set and a simulated data set, and the results have proven satisfactory. Combining the measurement error model with the traditional Z-test method, a full statistical model has been developed. It can significantly improve the statistical inference for identifying differentially expressed genes.
Bivariate Exponentiated Modified Weibull Extension
El-Gohary, A.; El-Morshedy, M.
2015-01-01
In this paper, we introduce a new bivariate distribution, called the bivariate exponentiated modified Weibull extension distribution (BEMWE). The model introduced here is of Marshall-Olkin type. The marginals of the new bivariate distribution have the exponentiated modified Weibull extension distribution proposed by Sarhan et al. (2013). The joint probability density function and the joint cumulative distribution function are in closed forms. Several properties of this distribution have...
The error analysis and online measurement of linear slide motion error in machine tools
Su, H.; Hong, M. S.; Li, Z. J.; Wei, Y. L.; Xiong, S. B.
2002-06-01
A new accurate two-probe time domain method is put forward to measure the straight-going component motion error in machine tools. The characteristics of non-periodic and non-closing in the straightness profile error are liable to bring about higher-order harmonic component distortion in the measurement results. However, this distortion can be avoided by the new accurate two-probe time domain method through the symmetry continuation algorithm, uniformity and least squares method. The harmonic suppression is analysed in detail through modern control theory. Both the straight-going component motion error in machine tools and the profile error in a workpiece that is manufactured on this machine can be measured at the same time. All of this information is available to diagnose the origin of faults in machine tools. The analysis result is proved to be correct through experiment.
Improvement of a method for estimating measurement errors
NMCC estimates random and systematic measurement errors based on the operator's and the inspector's data in order to evaluate the operator's measurement performance. We use these estimated measurement errors in other work as well, including the evaluation of MUF (material unaccounted for), significance tests of operator-inspector differences, and so on. Accurate estimation of measurement errors is therefore important for evaluating the operator's declared data. The proposed method always provides positive error variances and yields probability density functions of the error variances, whereas the current method provides error variances as point estimators that are sometimes negative. The method also allows operators to evaluate measurement errors using their own data sets for comparison with the International Target Values (ITVs). We tested the performance of the method by simulation, in order to select the best estimate (such as the median or expected value of the probability density function) to use in the evaluation of the operator's measurement errors, and to check whether the provided confidence interval was valid. In addition, we confirmed the practical performance of the method by checking its consistency with the current method. (author)
Error compensation of coordinate measuring machines with low stiffness
Anonymous
2000-01-01
A technique for compensating the errors of coordinate measuring machines (CMMs) with low stiffness is proposed. Some additional items related to the force deformation are introduced into the error compensation equations. The research was carried out on a moving-column horizontal-arm CMM. Experimental results show that the effects of both the systematic components of error motions and the force deformations are greatly reduced, which demonstrates the effectiveness of the proposed technique.
Sampling errors in rainfall measurements by weather radar
Piccolo, F.; G. B. Chirico
2005-01-01
Radar rainfall data are affected by several types of error. Besides the error in the measurement of the rainfall reflectivity and its transformation into rainfall intensity, random errors can be generated by the temporal spacing of the radar scans. The aim of this work is to analyze the sensitivity of the estimated rainfall maps to the radar sampling interval, i.e. the time interval between two consecutive radar scans. This analysis has been performed employing data c...
Testing for a Single-Factor Stochastic Volatility in Bivariate Series
Masaru Chiba
2013-12-01
This paper proposes the Lagrange multiplier test for the null hypothesis that the bivariate time series has only a single common stochastic volatility factor and no idiosyncratic volatility factor. The test statistic is derived by representing the model in a linear state-space form under the assumption that the log of the squared measurement error is normally distributed. The empirical size and power of the test are examined in Monte Carlo experiments. We apply the test to Asian stock market indices.
A Bayesian semiparametric model for bivariate sparse longitudinal data.
Das, Kiranmoy; Li, Runze; Sengupta, Subhajit; Wu, Rongling
2013-09-30
Mixed-effects models have recently become popular for analyzing sparse longitudinal data that arise naturally in biological, agricultural and biomedical studies. Traditional approaches assume independent residuals over time and explain the longitudinal dependence by random effects. However, when bivariate or multivariate traits are measured longitudinally, this fundamental assumption is likely to be violated because of intertrait dependence over time. We provide a more general framework where the dependence of the observations from the same subject over time is not assumed to be explained completely by the random effects of the model. We propose a novel, mixed model-based approach and estimate the error-covariance structure nonparametrically under a generalized linear model framework. We use penalized splines to model the general effect of time, and we consider a Dirichlet process mixture of normal prior for the random-effects distribution. We analyze blood pressure data from the Framingham Heart Study where body mass index, gender and time are treated as covariates. We compare our method with traditional methods including parametric modeling of the random effects and independent residual errors over time. We conduct extensive simulation studies to investigate the practical usefulness of the proposed method. The current approach is very helpful in analyzing bivariate irregular longitudinal traits. PMID:23553747
Valuation Biases, Error Measures, and the Conglomerate Discount
I. Dittmann (Ingolf); E.G. Maug (Ernst)
2006-01-01
We document the importance of the choice of error measure (percentage vs. logarithmic errors) for the comparison of alternative valuation procedures. We demonstrate for several multiple valuation methods (averaging with the arithmetic mean, harmonic mean, median, geometric mean) that the
Method for online measurement of optical current transformer onsite errors
This paper describes a method for the online measurement of the on-site errors of an optical current transformer (OCT), using a conventional electromagnetic current transformer (CT) as the reference transformer. The OCT under measurement is connected in series with the reference electromagnetic CT in the same line bay. The secondary output signals of the OCT and the electromagnetic CT are simultaneously collected and processed using a digital signal processing technique. Tests on a prototype clearly indicate that the method is very suitable for measuring the errors of the OCT on site without an interruption in service. The on-site error characteristics of the OCT are analyzed, as well as their stability and repeatability. (paper)
Claudia Lamina
BACKGROUND: Statistically reconstructing haplotypes from single nucleotide polymorphism (SNP) genotypes can lead to falsely classified haplotypes. This can be an issue when interpreting haplotype association results or when selecting subjects with certain haplotypes for subsequent functional studies. It was our aim to quantify haplotype reconstruction error and to provide tools for it. METHODS AND RESULTS: In numerous simulation scenarios, we systematically investigated several error measures, including discrepancy, error rate, and R², and introduced sensitivity and specificity to this context. We exemplified several measures in the KORA study, a large population-based study from Southern Germany. We find that the specificity is slightly reduced only for common haplotypes, while the sensitivity was decreased for some, but not all, rare haplotypes. The overall error rate generally increased with an increasing number of loci, increasing minor allele frequency of the SNPs, decreasing correlation between the alleles, and increasing ambiguity. CONCLUSIONS: We conclude that, with the analytical approach presented here, haplotype-specific error measures can be computed to gain insight into the haplotype uncertainty. This method provides information on whether a specific risk haplotype can be expected to be reconstructed with essentially no or high misclassification, and thus on the magnitude of the expected bias in association estimates. We also illustrate that sensitivity and specificity separate two dimensions of the haplotype reconstruction error, which completely describe the misclassification matrix and thus provide the prerequisite for methods accounting for misclassification.
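The sensitivity and specificity referred to above are the standard functions of a 2×2 misclassification matrix (true vs. reconstructed haplotype status). A tiny sketch with hypothetical counts, not KORA data:

```python
# Hypothetical misclassification counts for one haplotype:
# tp = truly carried and reconstructed, fn = carried but missed,
# fp = not carried but reconstructed, tn = correctly called absent.
tp, fn, fp, tn = 90, 10, 5, 895

sensitivity = tp / (tp + fn)            # fraction of true carriers recovered
specificity = tn / (tn + fp)            # fraction of non-carriers called absent
error_rate = (fn + fp) / (tp + fn + fp + tn)  # overall misclassification

print(sensitivity, round(specificity, 4), error_rate)  # 0.9 0.9944 0.015
```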
Measurement errors in cirrus cloud microphysical properties
H. Larsen
The limited accuracy of current cloud microphysics sensors used in cirrus cloud studies imposes limitations on the use of the data to examine the cloud's broadband radiative behaviour, an important element of the global energy balance. We review the limitations of the instruments (PMS probes) most widely used for measuring the microphysical structure of cirrus clouds and show the effect of these limitations on descriptions of the cloud radiative properties. The analysis is applied to measurements made as part of the European Cloud and Radiation Experiment (EUCREX) to determine mid-latitude cirrus microphysical and radiative properties.
Key words. Atmospheric composition and structure (cloud physics and chemistry) · Meteorology and atmospheric dynamics · Radiative processes · Instruments and techniques
ON GENERALIZED SARMANOV BIVARIATE DISTRIBUTIONS
G. Jay Kerns
2011-01-01
A class of bivariate distributions which generalizes the Sarmanov class is introduced. This class possesses a simple analytical form and desirable dependence properties. The admissible range of the association parameter for given bivariate distributions is derived, and the range of the correlation coefficients is also presented.
Application of Peano Kernels Theorem to Bivariate Product Cubature
We fully utilize the Peano kernel theorem for the error estimates of bivariate self-validating integration based on product cubature rules. This application enables adaptive local error estimates. We demonstrate the characteristics and effectiveness of our method by comparing it with a conventional integrator.
Reduction of statistic error in Mihalczo subcriticality measurement
Hazama, Taira [Power Reactor and Nuclear Fuel Development Corp., Oarai, Ibaraki (Japan). Oarai Engineering Center
1998-08-01
A theoretical formula for the statistical error estimation in the Mihalczo method was derived, and the dependence of the error on the facility to be measured and on the parameters of the data analysis was investigated. The formula was derived from reactor noise theory and the error theory for frequency analysis; the error was found to depend on such parameters as the prompt neutron decay constant, detector efficiencies, and the frequency bandwidth. Statistical errors estimated with the formula were compared with experimental values and verified to be reasonable. Through parameter surveys, it was found that there is an optimum combination of the parameters to reduce the magnitude of the errors. In the experiment performed in the DCA subcriticality measurement facility, it was estimated that the measurement requires 20 minutes to obtain a statistical error of 1% for keff = 0.9. According to the error theory, this might be reduced to 3 seconds in the aqueous fuel system typical of a fuel reprocessing plant. (J.P.N.)
ASSESSING THE DYNAMIC ERRORS OF COORDINATE MEASURING MACHINES
1998-01-01
The main factors affecting the dynamic errors of coordinate measuring machines are analyzed. It is pointed out that there are two main contributors to the dynamic errors: one is the rotation of the elements around the joints connected with air bearings, and the other is the bending of the elements caused by the dynamic inertial forces. A method for obtaining the displacement errors at the probe position from the dynamic rotational errors is presented. The dynamic rotational errors are measured with inductive position sensors and a laser interferometer. The theoretical and experimental results both show that during fast probing, due to the dynamic inertial forces, there is not only large rotation of the elements around the joints connected with air bearings but also large bending of the weak elements themselves.
Error tolerance of topological codes with independent bit-flip and measurement errors
Andrist, Ruben S.; Katzgraber, Helmut G.; Bombin, H.; Martin-Delgado, M. A.
2016-07-01
Topological quantum error correction codes are currently among the most promising candidates for efficiently dealing with the decoherence effects inherently present in quantum devices. Numerically, their theoretical error threshold can be calculated by mapping the underlying quantum problem to a related classical statistical-mechanical spin system with quenched disorder. Here, we present results for the general fault-tolerant regime, where we consider both qubit and measurement errors. However, unlike in previous studies, here we vary the strength of the different error sources independently. Our results highlight peculiar differences between toric and color codes. This study complements previous results published in New J. Phys. 13, 083006 (2011), 10.1088/1367-2630/13/8/083006.
An introduction to the measurement errors and data handling
Some common methods to estimate and correlate measurement errors are presented. An introduction to the theory of parameter determination and the goodness of the estimates is also given. Some examples are discussed. (author)
Ionospheric error analysis in GPS measurements
G. Pugliano
2008-06-01
The results of an experiment aimed at evaluating the effects of the ionosphere on GPS positioning applications are presented in this paper. Specifically, the study, based upon a differential approach, was conducted utilizing GPS measurements acquired by various receivers located at increasing inter-distances. The experimental research was developed upon the basis of two groups of baselines: the first group is comprised of "short" baselines (less than 10 km); the second group is characterized by greater distances (up to 90 km). The obtained results were compared either upon the basis of the geometric characteristics, for six different baseline lengths, using 24 hours of data, or upon temporal variations, by examining two periods of differing intensity in ionospheric activity, respectively coinciding with the maximum of the 23rd solar cycle and with conditions of low ionospheric activity. The analysis revealed variations in terms of inter-distance as well as different performances primarily owing to temporal modifications in the state of the ionosphere.
Measuring worst-case errors in a robot workcell
Errors in model parameters, sensing, and control are inevitably present in real robot systems. These errors must be considered in order to automatically plan robust solutions to many manipulation tasks. Lozano-Perez, Mason, and Taylor proposed a formal method for synthesizing robust actions in the presence of uncertainty; this method has been extended by several subsequent researchers. All of these results presume the existence of worst-case error bounds that describe the maximum possible deviation between the robot's model of the world and reality. This paper examines the problem of measuring these error bounds for a real robot workcell. These measurements are difficult, because of the desire to completely contain all possible deviations while avoiding bounds that are overly conservative. The authors present a detailed description of a series of experiments that characterize and quantify the possible errors in visual sensing and motion control for a robot workcell equipped with standard industrial robot hardware. In addition to providing a means for measuring these specific errors, these experiments shed light on the general problem of measuring worst-case errors
Income convergence in South Africa: Fact or measurement error?
Lechtenfeld, Tobias; Zoch, Asmus
2014-01-01
This paper asks whether income mobility in South Africa over the last decade has indeed been as impressive as currently thought. Using new national panel data (NIDS), substantial measurement error in reported income data is found, which is further corroborated by a provincial income data panel (KIDS). By employing an instrumental variables approach using two different instruments, measurement error can be quantified. Specifically, self-reported income in the survey data is shown to suffer fro...
Sample size and power calculations for correlations between bivariate longitudinal data
Comulada, W. Scott; Weiss, Robert E.
2010-01-01
The analysis of a baseline predictor with a longitudinally measured outcome is well established and sample size calculations are reasonably well understood. Analysis of bivariate longitudinally measured outcomes is gaining in popularity and methods to address design issues are required. The focus in a random effects model for bivariate longitudinal outcomes is on the correlations that arise between the random effects and between the bivariate residuals. In the bivariate random effects model, ...
Comparing Measurement Error between Two Different Methods of Measurement of Various Magnitudes
Zavorsky, Gerald S.
2010-01-01
Measurement error is a common problem in several fields of research such as medicine, physiology, and exercise science. The standard deviation of repeated measurements on the same person is the measurement error. One way of presenting measurement error is the repeatability, which is 2.77 multiplied by the within-subject standard deviation.…
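As a minimal illustration of the quantities in this abstract, the within-subject standard deviation and the repeatability coefficient could be computed as below. The data values and function names are hypothetical; the 2.77 factor is 1.96·√2, giving a 95% limit on the difference between two repeated measurements.

```python
import math

def within_subject_sd(repeats):
    """Within-subject SD from repeated measurements on several subjects:
    square root of the mean of the per-subject variances."""
    variances = []
    for values in repeats:
        n = len(values)
        mean = sum(values) / n
        variances.append(sum((v - mean) ** 2 for v in values) / (n - 1))
    return math.sqrt(sum(variances) / len(variances))

def repeatability(repeats):
    """Repeatability coefficient: 2.77 (= 1.96 * sqrt(2)) times the
    within-subject SD; two repeats on the same person are expected to
    differ by less than this for 95% of observations."""
    return 2.77 * within_subject_sd(repeats)

# Two repeated measurements per subject (hypothetical data)
data = [(10.1, 10.5), (12.0, 11.6), (9.8, 10.2)]
print(round(repeatability(data), 3))
```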
Measurement error caused by spatial misalignment in environmental epidemiology
Gryparis, Alexandros; Paciorek, Christopher J.; Zeka, Ariana; Schwartz, Joel; Coull, Brent A.
2009-01-01
In many environmental epidemiology studies, the locations and/or times of exposure measurements and health assessments do not match. In such settings, health effects analyses often use the predictions from an exposure model as a covariate in a regression model. Such exposure predictions contain some measurement error as the predicted values do not equal the true exposures. We provide a framework for spatial measurement error modeling, showing that smoothing induces a Berkson-type measurement error with nondiagonal error structure. From this viewpoint, we review the existing approaches to estimation in a linear regression health model, including direct use of the spatial predictions and exposure simulation, and explore some modified approaches, including Bayesian models and out-of-sample regression calibration, motivated by measurement error principles. We then extend this work to the generalized linear model framework for health outcomes. Based on analytical considerations and simulation results, we compare the performance of all these approaches under several spatial models for exposure. Our comparisons underscore several important points. First, exposure simulation can perform very poorly under certain realistic scenarios. Second, the relative performance of the different methods depends on the nature of the underlying exposure surface. Third, traditional measurement error concepts can help to explain the relative practical performance of the different methods. We apply the methods to data on the association between levels of particulate matter and birth weight in the greater Boston area. PMID:18927119
Detection and Classification of Measurement Errors in Bioimpedance Spectroscopy.
Ayllón, David; Gil-Pita, Roberto; Seoane, Fernando
2016-01-01
Bioimpedance spectroscopy (BIS) measurement errors may be caused by parasitic stray capacitance, impedance mismatch, cross-talking or their very likely combination. An accurate detection and identification is of extreme importance for further analysis because in some cases and for some applications, certain measurement artifacts can be corrected, minimized or even avoided. In this paper we present a robust method to detect the presence of measurement artifacts and identify what kind of measurement error is present in BIS measurements. The method is based on supervised machine learning and uses a novel set of generalist features for measurement characterization in different immittance planes. Experimental validation has been carried out using a database of complex spectra BIS measurements obtained from different BIS applications and containing six different types of errors, as well as error-free measurements. The method obtained a low classification error (0.33%) and has shown good generalization. Since both the features and the classification schema are relatively simple, the implementation of this pre-processing task in the current hardware of bioimpedance spectrometers is possible. PMID:27362862
Measurement uncertainty evaluation of conicity error inspected on CMM
Wang, Dongxia; Song, Aiguo; Wen, Xiulan; Xu, Youxiong; Qiao, Guifang
2016-01-01
The cone is widely used in mechanical design for rotation, centering and fixing. Whether the conicity error can be measured and evaluated accurately will directly influence assembly accuracy and working performance. According to the new-generation geometrical product specification (GPS), the error and its measurement uncertainty should be evaluated together. The mathematical model of the minimum zone conicity error is established and an improved immune evolutionary algorithm (IIEA) is proposed to search for the conicity error. In the IIEA, initial antibodies are first generated by using quasi-random sequences and two kinds of affinities are calculated. Then, each antibody clone is generated and self-adaptively mutated so as to maintain diversity; similar antibodies are suppressed and new random antibodies are generated. Because the mathematical model of conicity error is strongly nonlinear and the input quantities are not independent, it is difficult to use the Guide to the Expression of Uncertainty in Measurement (GUM) method to evaluate measurement uncertainty. An adaptive Monte Carlo method (AMCM) is proposed to estimate measurement uncertainty, in which the number of Monte Carlo trials is selected adaptively and the quality of the numerical results is directly controlled. The cone part was machined on a CK6140 lathe and measured on a Miracle NC 454 Coordinate Measuring Machine (CMM). The experimental results confirm that the proposed method not only can search for the approximate solution of the minimum zone conicity error (MZCE) rapidly and precisely, but also can evaluate measurement uncertainty and give control variables with an expected numerical tolerance. The conicity errors computed by the proposed method are 20%-40% less than those computed by the NC454 CMM software, and the evaluation accuracy improves significantly.
Errors in Measurement of Microwave Interferograms Using Antenna Matrix
P. Hudec; Hoffmann, K; Zela, J.
2008-01-01
New antenna matrices for both scalar and vector measurement of microwave interferograms for the frequency 2.45 GHz were developed and used for an analysis of sources of measurement errors. Influence of mutual coupling between individual antennas in an antenna matrix on a measurement of microwave interferograms, particularly on a measurement of interferogram minimum values, was studied. Simulations and measurements of interferograms, proposal of a new calibration procedure and correction metho...
Persistent Leverage in Portfolio Sorts: An Artifact of Measurement Error?
Mueller, Michael
2014-01-01
Studies such as Lemmon, Roberts and Zender (2008) demonstrate how stable firms' capital structures are over time, and raise the question of whether new theories of capital structure are needed to explain these phenomena. In this paper, I show that trade-off theory-based empirical proxies that are observed with error offer an alternative explanation for the persistence in portfolio-leverage levels. Measurement error noise equal to 80% of the cross-sectional variation in the market to book rati...
Data indicate that about one half of all errors are skill based. Yet most of the emphasis is focused on correcting rule- and knowledge-based errors, leading to more programs, supervision, and training. None of this corrective action applies to the 'mental lapse' error. Skill-based errors are usually committed while performing a routine and familiar task: workers go to the wrong unit or component, or get some other detail wrong. Too often these errors result in reactor scrams, turbine trips, or other unwanted actuations. The workers do not need more programs, supervision, or training; they need to know when they are vulnerable, and they need to know how to think. Self-checking can prevent errors, but only if it is practiced intellectually and with commitment. Skill-based errors are usually the result of relying on habit and the senses instead of the intellect. Even human factors can play a role in causing an error on a routine task. Personal injury, too, is usually the result of an error. Such events are sometimes called accidents, but most accidents are the result of inappropriate actions; whether we can explain it or not, cause and effect were there. A proper attitude toward risk, and a proper attitude toward danger, is requisite to avoiding injury; many personal injuries can be avoided by attitude alone. 'Errors', based on personal experience and interviews, examines the reasons for 'mental lapse' errors, and why some of us become injured. The paper offers corrective action without more programs, supervision, and training. It does ask you to think differently. (author)
Beam induced vacuum measurement error in BEPC II
[No author listed]
2011-01-01
When the beam in the BEPCII storage ring aborts suddenly, the measured pressure from cold-cathode gauges and ion pumps drops suddenly and then decreases gradually to the base pressure. This shows that there is a beam-induced positive error in the pressure measurement during beam operation, the error being the difference between the measured and real pressures. Right after the beam aborts, the error disappears immediately and the measured pressure equals the real pressure. For a given gauge, we fit a nonlinear pressure-time curve to its measured pressure data for the 20 seconds after a sudden beam abort. From this negative-exponential pumping-down curve, the real pressure at the moment the beam started to abort is extrapolated. With data from several sudden beam aborts, we obtained the errors of that gauge at different beam currents and found that the error is directly proportional to the beam current, as expected. A linear data fit then gives the proportionality coefficient of the equation we derived, which can be used to evaluate the real pressure at any time while a beam of varying current is stored.
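The extrapolation step described above can be sketched as follows. Since the decay is a negative exponential toward a known base pressure, a log-linear least-squares fit of ln(P − P_base) against time recovers the amplitude and time constant; extrapolating to t = 0 gives the real pressure at the moment of the abort. All numbers and names here are hypothetical illustrations, not BEPCII data.

```python
import math

def fit_decay(times, pressures, p_base):
    """Fit P(t) = p_base + A * exp(-t/tau) by a log-linear least-squares
    fit of ln(P - p_base) against t; returns (A, tau)."""
    ys = [math.log(p - p_base) for p in pressures]
    n = len(times)
    sx, sy = sum(times), sum(ys)
    sxx = sum(x * x for x in times)
    sxy = sum(x * y for x, y in zip(times, ys))
    slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    intercept = (sy - slope * sx) / n
    return math.exp(intercept), -1.0 / slope

# Hypothetical gauge readings (arbitrary units) after a sudden beam abort
p_base, a_true, tau_true = 1.0, 4.0, 8.0
times = list(range(0, 21, 2))  # seconds after the abort
readings = [p_base + a_true * math.exp(-t / tau_true) for t in times]

amp, tau = fit_decay(times, readings, p_base)
real_pressure = p_base + amp          # extrapolated P at t = 0
measured_with_beam = 6.5              # hypothetical reading while beam was on
error = measured_with_beam - real_pressure
print(round(real_pressure, 3), round(tau, 2), round(error, 3))
```

With errors collected at several beam currents, a final linear fit of error against current would give the proportionality coefficient the abstract mentions.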
Measurement error of waist circumference: gaps in knowledge.
Verweij, L.M.; Terwee, C.B.; Proper, K.I.; Hulshof, C.T.J.; Mechelen, W. van
2013-01-01
Objective: It is not clear whether measuring waist circumference in clinical practice is problematic because the measurement error is unclear, as well as what constitutes a clinically relevant change. The present study aimed to summarize what is known from state-of-the-art research. Design: To ident
ALGORITHM FOR SPHERICITY ERROR AND THE NUMBER OF MEASURED POINTS
HE Gaiyun; WANG Taiyong; ZHAO Jian; YU Baoqin; LI Guoqin
2006-01-01
The data processing technique and the method for determining the optimal number of measured points are studied for the sphericity error measured on a coordinate measuring machine (CMM). The consummate criterion for the minimum zone of a spherical surface is analyzed first, and then an approximation technique searching for the minimum sphericity error from the form data is studied. In order to obtain the minimum zone of the spherical surface, the radial separation is reduced gradually by moving the center of the concentric spheres along certain directions with certain steps; the algorithm is therefore precise and efficient. After the appropriate mathematical model for the approximation technique is created, a data processing program is developed accordingly. By processing the measured data with the developed program, the sphericity errors are evaluated when different numbers of measured points are taken from the same sample, and the corresponding scatter diagram and fitted curve for the sample are represented graphically. The optimal number of measured points is determined through regression analysis. Experiments show that both the data processing technique and the method for determining the optimal number of measured points are effective. On average, the obtained sphericity error is 5.78 μm smaller than the least-squares solution, an accuracy improvement of 8.63%; the obtained optimal number of measured points is half of the number usually measured.
Morgenstern type bivariate Lindley Distribution
V S Vaidyanathan; Sharon Varghese, A
2016-01-01
In this paper, a bivariate Lindley distribution using Morgenstern approach is proposed which can be used for modeling bivariate life time data. Some characteristics of the distribution like moment generating function, joint moments, Pearson correlation coefficient, survival function, hazard rate function, mean residual life function, vitality function and stress-strength parameter R=Pr(Y
Analysis of Bivariate Extreme Values
Egeland Busuttil, Chris
2015-01-01
Results show that there is high agreement between the distribution of the bivariate ACER functions and the distribution of the copula models with ACER marginals for all time series. The distribution of the copula models with Gumbel marginals displays great discrepancies from the distribution of the bivariate ACER functions. These disagreements are greatest for short time series and decrease as the time series become longer.
Systematic errors in VVER-440 coolant temperature measurement
Stable operation of current nuclear power stations requires on-line temperature monitoring within the reactor. Experience with VVER power reactors suggests that a necessary condition for safe operation of a station containing VVER-440 is that the coolant temperature should be monitored at the outlet from the fuel-pin assemblies and in the main circulation loop. It is possible to reduce the error in the reactor temperature measurements to determine the heat production nonuniformity coefficients over the core more accurately, as well as the underheating of the coolant relative to the saturation temperature at the exit from a fuel-pin assembly, together with other thermophysical parameters important for safe and effective power station operation. Measurements within reactors may be accompanied by systematic deviations in the thermocouple readings that are comparable in magnitude with the limiting permissible errors. This paper discusses the most important components of the systematic deviations: errors due to calibration drift during use, errors due to radiation heating, and dynamic measurement error. The authors consider the basic features in the method of determining and balancing out the first of these components. 17 refs., 3 tabs
Earnings mobility and measurement error : a pseudo-panel approach
Antman, Francisca; McKenzie, David J.
2005-01-01
The degree of mobility in incomes is often seen as an important measure of the equality of opportunity in a society and of the flexibility and freedom of its labor market. However, estimation of mobility using panel data is biased by the presence of measurement error and nonrandom attrition from the panel. This study shows that dynamic pseudo-panel methods can be used to consistently estimate measures of absolute and conditional mobility when genuine panels are not available and in the presen...
Optimal measurement strategies for effective suppression of drift errors
Drifting of experimental setups with change in temperature or other environmental conditions is the limiting factor of many, if not all, precision measurements. The measurement error due to a drift is, in some sense, in-between random noise and systematic error. In the general case, the error contribution of a drift cannot be averaged out using a number of measurements identically carried out over a reasonable time. In contrast to systematic errors, drifts are usually not stable enough for a precise calibration. Here a rather general method for effective suppression of the spurious effects caused by slow drifts in a large variety of instruments and experimental setups is described. An analytical derivation of an identity, describing the optimal measurement strategies suitable for suppressing the contribution of a slow drift described with a certain order polynomial function, is presented. A recursion rule as well as a general mathematical proof of the identity is given. The effectiveness of the discussed method is illustrated with an application of the derived optimal scanning strategies to precise surface slope measurements with a surface profiler.
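As a hedged illustration of the idea (not the paper's derivation), the classic ABBA measurement sequence is the lowest-order case of such a strategy: the signed combination (A1 − B1 − B2 + A2)/2 annihilates any drift that is linear in time when estimating a difference A − B. All numbers below are hypothetical.

```python
def drift(t):
    # slow linear drift added to every reading (hypothetical rate)
    return 0.3 * t

A_true, B_true = 5.0, 2.0

# Naive strategy: measure A then B once; linear drift biases A - B
naive = (A_true + drift(0)) - (B_true + drift(1))

# ABBA strategy: the signed combination (A1 - B1 - B2 + A2)/2
# cancels any drift component that is linear in time
a1 = A_true + drift(0)
b1 = B_true + drift(1)
b2 = B_true + drift(2)
a2 = A_true + drift(3)
abba = (a1 - b1 - b2 + a2) / 2

print(naive, abba)
```

Higher-order polynomial drifts require longer sequences whose signs follow the same recursive pattern, which is what the derived identity in the abstract generalizes.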
Estimation of discretization errors in contact pressure measurements.
Fregly, Benjamin J; Sawyer, W Gregory
2003-04-01
Contact pressure measurements in total knee replacements are often made using a discrete sensor such as the Tekscan K-Scan sensor. However, no method currently exists for predicting the magnitude of sensor discretization errors in contact force, peak pressure, average pressure, and contact area, making it difficult to evaluate the accuracy of such measurements. This study identifies a non-dimensional area variable, defined as the ratio of the number of perimeter elements to the total number of elements with pressure, which can be used to predict these errors. The variable was evaluated by simulating discrete pressure sensors subjected to Hertzian and uniform pressure distributions with two different calibration procedures. The simulations systematically varied the size of the sensor elements, the contact ellipse aspect ratio, and the ellipse's location on the sensor grid. In addition, contact pressure measurements made with a K-Scan sensor on four different total knee designs were used to evaluate the magnitude of discretization errors under practical conditions. The simulations predicted a strong power law relationship (r(2)>0.89) between worst-case discretization errors and the proposed non-dimensional area variable. In the total knee experiments, predicted discretization errors were on the order of 1-4% for contact force and peak pressure and 3-9% for average pressure and contact area. These errors are comparable to those arising from inserting a sensor into the joint space or truncating pressures with pressure sensitive film. The reported power law regression coefficients provide a simple way to estimate the accuracy of experimental measurements made with discrete pressure sensors when the contact patch is approximately elliptical. PMID:12600352
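The non-dimensional area variable described above, the ratio of perimeter elements to total elements with pressure, could be computed for a discrete sensor grid roughly as follows (a sketch with hypothetical data, not the authors' code).

```python
def area_variable(grid):
    """Ratio of perimeter elements to total elements with pressure.
    grid is a 2-D list of pressures; an element counts as perimeter if
    any 4-neighbour is outside the grid or carries zero pressure."""
    rows, cols = len(grid), len(grid[0])
    loaded = perimeter = 0
    for i in range(rows):
        for j in range(cols):
            if grid[i][j] <= 0:
                continue
            loaded += 1
            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ni, nj = i + di, j + dj
                if not (0 <= ni < rows and 0 <= nj < cols) or grid[ni][nj] <= 0:
                    perimeter += 1
                    break
    return perimeter / loaded

# 5x5 sensor patch: a 3x3 loaded block has 8 perimeter elements out of 9
patch = [[0.0] * 5 for _ in range(5)]
for i in range(1, 4):
    for j in range(1, 4):
        patch[i][j] = 1.0
print(area_variable(patch))
```

Per the abstract, worst-case discretization errors follow a power law in this ratio, so a finer sensor (smaller ratio) should give smaller errors.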
The effect of measurement error on surveillance metrics
Weaver, Brian Phillip [Los Alamos National Laboratory; Hamada, Michael S. [Los Alamos National Laboratory
2012-04-24
The purpose of this manuscript is to describe different simulation studies that CCS-6 has performed for the purpose of understanding the effects of measurement error on the surveillance metrics. We assume that the measured items come from a larger population of items. We denote the random variable associated with an item's value of an attribute of interest as X, with X ~ N(μ, σ²). This distribution represents the variability in the population of interest, and we wish to make inference on the parameters μ and σ, or on some function of these parameters. When an item X is selected from the larger population, a measurement is made of some attribute of it. This measurement is made with error, so the true value of X is not observed. The rest of this section presents simulation results for the different measurement cases encountered.
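A minimal sketch of the simulation setup described above: items X ~ N(μ, σ²) are observed with additive measurement error, so the naive sample variance of the observed values overestimates σ². The error model (independent Gaussian noise with SD τ) and all numbers are assumptions for illustration, not the CCS-6 study's cases.

```python
import random

random.seed(0)

mu, sigma, tau = 10.0, 2.0, 1.5   # population SD and measurement-error SD (hypothetical)
n = 200_000

true_vals = [random.gauss(mu, sigma) for _ in range(n)]
observed = [x + random.gauss(0.0, tau) for x in true_vals]

def var(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

# The variance of the observed values converges to sigma^2 + tau^2
# (here 4.0 + 2.25 = 6.25), not to sigma^2: inference on sigma that
# ignores measurement error is biased upward.
print(round(var(true_vals), 2), round(var(observed), 2))
```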
Dyadic Bivariate Wavelet Multipliers in L2(R2)
Zhong Yan LI; Xian Liang SHI
2011-01-01
The single 2-dilation wavelet multipliers in the one-dimensional case, and the single A-dilation wavelet multipliers (where A is any expansive matrix with integer entries and |det A| = 2) in the two-dimensional case, were completely characterized by the Wutam Consortium (1998) and Li Z. et al. (2010). But there exist no results on multivariate wavelet multipliers corresponding to an integer expansive dilation matrix whose determinant has absolute value other than 2 in L2(R2). In this paper, we choose 2I2 = (2 0; 0 2) as the dilation matrix and consider multipliers of the 2I2-dilation multivariate wavelet Ψ = {ψ1, ψ2, ψ3} (which is called a dyadic bivariate wavelet). Here we call a measurable function family f = {f1, f2, f3} a dyadic bivariate wavelet multiplier if Ψ1 = {F−1(f1ψ̂1), F−1(f2ψ̂2), F−1(f3ψ̂3)} is a dyadic bivariate wavelet for any dyadic bivariate wavelet Ψ = {ψ1, ψ2, ψ3}, where ψ̂ and F−1 denote the Fourier transform and the inverse Fourier transform, respectively. We study dyadic bivariate wavelet multipliers, give some conditions for dyadic bivariate wavelet multipliers, and give concrete forms for the linear phases of dyadic MRA bivariate wavelets.
Bayesian conformity assessment in presence of systematic measurement errors
Carobbi, Carlo; Pennecchi, Francesca
2016-04-01
Conformity assessment of the distribution of the values of a quantity is investigated using a Bayesian approach. The effect of systematic, non-negligible measurement errors is taken into account. The analysis is general in the sense that the probability distribution of the quantity can be of any kind, i.e., possibly different from the ubiquitous normal distribution, and the measurement model function, linking the measurand with the observable and non-observable influence quantities, can be nonlinear. Further, any joint probability density function can be used to model the available knowledge about the systematic errors. It is demonstrated that the result of the Bayesian analysis developed here reduces to the standard result (obtained through a frequentist approach) when the systematic measurement errors are negligible. A consolidated frequentist extension of that standard result, aimed at including the effect of a systematic measurement error, is directly compared with the Bayesian result, whose superiority is demonstrated. Application of the results obtained here to the derivation of the operating characteristic curves used for sampling plans for inspection by variables is also introduced.
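A rough Monte Carlo illustration of why systematic measurement error matters for conformity assessment (a sketch under assumed normal distributions and hypothetical spec limits, not the authors' Bayesian derivation): the conformance fraction judged from error-contaminated measurements differs from the true one.

```python
import random

random.seed(1)

L, U = 9.0, 11.0              # specification limits (hypothetical)
mu_y, sd_y = 10.0, 0.6        # assumed distribution of the quantity
mu_e, sd_e = 0.2, 0.1         # assumed knowledge about the systematic error

n = 100_000

# Conformance probability of the true quantity Y
inside = sum(1 for _ in range(n) if L <= random.gauss(mu_y, sd_y) <= U)
p_true = inside / n

# Judging conformity from the measured value X = Y + E instead:
# the systematic offset shifts mass toward the upper limit
inside = sum(
    1 for _ in range(n)
    if L <= random.gauss(mu_y, sd_y) + random.gauss(mu_e, sd_e) <= U
)
p_measured = inside / n
print(round(p_true, 3), round(p_measured, 3))
```

In the paper's framework this discrepancy is handled by integrating the systematic-error distribution into the Bayesian conformity calculation rather than ignoring it.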
GY SAMPLING THEORY IN ENVIRONMENTAL STUDIES 2: SUBSAMPLING ERROR MEASUREMENTS
Sampling can be a significant source of error in the measurement process. The characterization and cleanup of hazardous waste sites require data that meet site-specific levels of acceptable quality if scientifically supportable decisions are to be made. In support of this effort,...
Error Analysis for Interferometric SAR Measurements of Ice Sheet Flow
Mohr, Johan Jacob; Madsen, Søren Nørvang
1999-01-01
This article concerns satellite interferometric radar measurements of ice elevation and three-dimensional flow vectors. It describes sensitivity to (1) atmospheric path length changes and other phase distortions, (2) violations of the stationary flow assumption, and (3) unknown vertical velocities and slope errors in conjunction with a surface-parallel flow assumption. The most surprising result is that, assuming a stationary flow, the east component of the three-dimensional flow derived from ascending and descending orbit data is independent of slope errors and of the vertical flow.
Effects of measurement errors on microwave antenna holography
Rochblatt, David J.; Rahmat-Samii, Yahya
1991-01-01
The effects of measurement errors appearing during the implementation of the microwave holographic technique are investigated in detail, and many representative results are presented based on computer simulations. The numerical results are tailored for cases applicable to the utilization of the holographic technique for the NASA's Deep Space Network antennas, although the methodology of analysis is applicable to any antenna. Many system measurement topics are presented and summarized.
Estimation of coherent error sources from stabilizer measurements
Orsucci, Davide; Tiersch, Markus; Briegel, Hans J.
2016-04-01
In the context of measurement-based quantum computation a way of maintaining the coherence of a graph state is to measure its stabilizer operators. Aside from performing quantum error correction, it is possible to exploit the information gained from these measurements to characterize and then counteract a coherent source of errors; that is, to determine all the parameters of an error channel that applies a fixed but unknown unitary operation to the physical qubits. Such a channel is generated, e.g., by local stray fields that act on the qubits. We study the case in which each qubit of a given graph state may see a different error channel, and we focus on channels given by a rotation on the Bloch sphere around either the x, y, or z axis, for which analytical results can be given in compact form. The possibility of reconstructing the channels at all qubits depends nontrivially on the topology of the graph state. We prove via perturbation methods that the reconstruction process is robust and supplement the analytic results with numerical evidence.
Quantification and handling of sampling errors in instrumental measurements: a case study
Andersen, Charlotte Møller; Bro, R.
2004-01-01
Instrumental measurements are often used to represent a whole object even though only a small part of the object is actually measured. This can introduce an error due to the inhomogeneity of the product. Together with other errors resulting from the measuring process, such errors may have a serious impact on the results when the instrumental measurements are used for multivariate regression and prediction. This paper gives examples of how errors influencing the predictions obtained by a multivariate regression model can be quantified and handled. Only random errors are considered here, while in certain situations the effect of systematic errors is also considerable. The relevant errors contributing to the prediction error are: error in instrumental measurements (x-error), error in reference measurements (y-error), error in the estimated calibration model (regression coefficient error) and model error.
Time variance effects and measurement error indications for MLS measurements
Liu, Jiyuan
1999-01-01
Mathematical characteristics of maximum-length sequences are discussed, and the effects of measuring slightly time-varying systems with the MLS method are examined through computer simulations in MATLAB. A new coherence measure is suggested for the indication of time-variance effects. The results of the simulations show that the proposed MLS coherence can give an indication of time-variance effects.
Improving GDP measurement: a measurement-error perspective
S. Boragan Aruoba; Francis X. Diebold; Jeremy J. Nalewaik; Frank Schorfheide; Dongho Song
2013-01-01
We provide a new and superior measure of U.S. GDP, obtained by applying optimal signal-extraction techniques to the (noisy) expenditure-side and income-side estimates. Its properties -- particularly as regards serial correlation -- differ markedly from those of the standard expenditure-side measure and lead to substantially-revised views regarding the properties of GDP.
Reducing systematic errors in measurements made by a SQUID magnetometer
Kiss, L. F.; Kaptás, D.; Balogh, J.
2014-11-01
A simple method is described which reduces those systematic errors of a superconducting quantum interference device (SQUID) magnetometer that arise from possible radial displacements of the sample in the second-order gradiometer superconducting pickup coil. By rotating the sample rod (and hence the sample) around its axis into a position where the best fit is obtained to the output voltage of the SQUID as the sample is moved through the pickup coil, the accuracy of measuring magnetic moments can be increased significantly. In the cases of an examined Co1.9Fe1.1Si Heusler alloy, pure iron and nickel samples, the accuracy could be increased over the value given in the specification of the device. The suggested method is only meaningful if the measurement uncertainty is dominated by systematic errors - radial displacement in particular - and not by instrumental or environmental noise.
Profit Maximization, Returns to Scale, and Measurement Error.
Lim, Hongil; Shumway, C. Richard
1992-01-01
A nonparametric analysis of agricultural production behavior was conducted for each of the contiguous forty-eight states for the period 1956-82 under the joint hypothesis of profit maximization, convex technology, and nonregressive technical change. Tests were conducted in each state for profit maximization and for constant returns to scale. Although considerable variability was observed among states, measurement errors of magnitudes common in secondary data yielded test results fully consist...
Correcting for measurement error in latent variables used as predictors
Schofield, Lynne Steuerle
2015-01-01
This paper represents a methodological-substantive synergy. A new model, the Mixed Effects Structural Equations (MESE) model which combines structural equations modeling and item response theory, is introduced to attend to measurement error bias when using several latent variables as predictors in generalized linear models. The paper investigates racial and gender disparities in STEM retention in higher education. Using the MESE model with 1997 National Longitudinal Survey of Youth data, I fi...
Confounding and exposure measurement error in air pollution epidemiology
Sheppard, L.; Burnett, R T; Szpiro, A.A.; Kim, J.Y.; Jerrett, M; Pope, C; Brunekreef, B
2012-01-01
Studies in air pollution epidemiology may suffer from some specific forms of confounding and exposure measurement error. This contribution discusses these, mostly in the framework of cohort studies. Evaluation of potential confounding is critical in studies of the health effects of air pollution. The association between long-term exposure to ambient air pollution and mortality has been investigated using cohort studies in which subjects are followed over time with respect to their vital statu...
The error budget of the Dark Flow measurement
Atrio-Barandela, F.; Kashlinsky, A.; Ebeling, H.; Kocevski, D.; Edge, A.
2010-01-01
We analyze the uncertainties and possible systematics associated with the "Dark Flow" measurements using the cumulative Sunyaev-Zeldovich (SZ) effect combined with all-sky catalogs of clusters of galaxies. Filtering of all-sky cosmic microwave background maps is required to remove the intrinsic cosmological signal down to the limit imposed by cosmic variance. Contributions to the errors come from the remaining cosmological signal, which integrates down with the number of clusters, and the ins...
Lidar Uncertainty Measurement Experiment (LUMEX) - Understanding Sampling Errors
Choukulkar, A.; Brewer, W. A.; Banta, R. M.; Hardesty, M.; Pichugina, Y.; Senff, Christoph; Sandberg, S.; Weickmann, A.; Carroll, B.; Delgado, R.; Muschinski, A.
2016-06-01
Coherent Doppler LIDAR (Light Detection and Ranging) has been widely used to provide measurements of several boundary layer parameters such as profiles of wind speed, wind direction, vertical velocity statistics, mixing layer heights and turbulent kinetic energy (TKE). An important aspect of providing this wide range of meteorological data is to properly characterize the uncertainty associated with these measurements. With the above intent in mind, the Lidar Uncertainty Measurement Experiment (LUMEX) was conducted at Erie, Colorado during the period June 23rd to July 13th, 2014. The major goals of this experiment were the following: (i) characterize sampling error for vertical velocity statistics; (ii) analyze sensitivities of different Doppler lidar systems; (iii) compare various single and dual Doppler retrieval techniques; (iv) characterize the error of spatial representativeness for separation distances up to 3 km; and (v) validate turbulence analysis techniques and retrievals from Doppler lidars. This experiment brought together 5 Doppler lidars, both commercial and research grade, for a period of three weeks for a comprehensive intercomparison study. The Doppler lidars were deployed at the Boulder Atmospheric Observatory (BAO) site in Erie, site of a 300 m meteorological tower. This tower was instrumented with six sonic anemometers at levels from 50 m to 300 m with 50 m vertical spacing. A brief overview of the experiment outline and deployment will be presented. Results from the sampling error analysis and its implications on scanning strategy will be discussed.
The effect of posture on errors in gastric emptying measurements
Scintigraphic gastric emptying measurements were made with subjects supine and upright using a dual-detector rectilinear scanner. Previously reported variations of the depth of activity during the course of a study were again found with both postures. Although there was no significant mean depth change in the group when upright, some individual variations were substantial. Measurements with a gamma camera demonstrated similar changes of depth of stomach contents with seated subjects. The resulting variations of attenuation of the emergent radiation lead to appreciable errors in the emptying rates determined by unilateral detection. In about half the cases the mean movement of a 99mTc-labelled solid phase marker exceeded 1 cm; such a movement led to an average 20% error in emptying rate determination by an anterior detector. Depth changes of a liquid marker were less marked, exceeding 0.5 cm in half the subjects; this movement gave rise to an average 6% error when 113mIn was used as the tracer. (author)
Propagation of radiosonde pressure sensor errors to ozonesonde measurements
Stauffer, R. M.; Morris, G. A.; Thompson, A. M.; Joseph, E.; Coetzee, G. J. R.; Nalli, N. R.
2014-01-01
Several previous studies highlight pressure (or equivalently, pressure altitude) discrepancies between the radiosonde pressure sensor and that derived from a GPS flown with the radiosonde. The offsets vary during the ascent both in absolute and percent pressure differences. To investigate this problem further, a total of 731 radiosonde/ozonesonde launches from the Southern Hemisphere subtropics to northern mid-latitudes are considered, with launches between 2005 and 2013 from both longer term and campaign-based intensive stations. Five series of radiosondes from two manufacturers (International Met Systems: iMet, iMet-P, iMet-S, and Vaisala: RS80-15N and RS92-SGP) are analyzed to determine the magnitude of the pressure offset. Additionally, electrochemical concentration cell (ECC) ozonesondes from three manufacturers (Science Pump Corporation; SPC and ENSCI/Droplet Measurement Technologies; DMT) are analyzed to quantify the effects these offsets have on the calculation of ECC ozone (O3) mixing ratio profiles (O3MR) from the ozonesonde-measured partial pressure. Approximately half of all offsets are > ±0.6 hPa in the free troposphere, with nearly a third > ±1.0 hPa at 26 km, where the 1.0 hPa error represents ~ 5% of the total atmospheric pressure. Pressure offsets have negligible effects on O3MR below 20 km (96% of launches lie within ±5% O3MR error at 20 km). Ozone mixing ratio errors above 10 hPa (~ 30 km) can approach greater than ±10% (> 25% of launches that reach 30 km exceed this threshold). These errors cause disagreement between the integrated ozonesonde-only column O3 from the GPS and radiosonde pressure profile by an average of +6.5 DU. Comparisons of total column O3 between the GPS and radiosonde pressure profiles yield average differences of +1.1 DU when the O3 is integrated to burst with addition of the McPeters and Labow (2012) above-burst O3 column climatology. Total column differences are reduced to an average of -0.5 DU when the O3 profile is
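The arithmetic behind these offset effects is simple: the ozonesonde reports an O3 partial pressure, and the mixing ratio is that partial pressure divided by the radiosonde's total pressure. A minimal sketch (illustrative values, not the study's data) of how a 1.0 hPa offset near 26 km, where total pressure is about 20 hPa, biases the computed mixing ratio:

```python
# Sketch (not the study's code): propagation of a radiosonde pressure offset
# into the ozone mixing ratio computed from an ozonesonde partial pressure.
# Mixing ratio X = p_O3 / P, so a pressure error dP shifts X by about -dP/P.

def o3_mixing_ratio(p_o3_mpa, p_hpa):
    """Ozone mixing ratio (ppmv) from O3 partial pressure (mPa) and total pressure (hPa)."""
    return 10.0 * p_o3_mpa / p_hpa  # unit factor: mPa / hPa = 1e-5, times 1e6 for ppmv

p_o3 = 12.0               # mPa, an illustrative stratospheric O3 partial pressure
p_true, offset = 20.0, 1.0  # hPa: ~26 km, where 1 hPa is ~5% of total pressure
x_true = o3_mixing_ratio(p_o3, p_true)
x_biased = o3_mixing_ratio(p_o3, p_true + offset)
rel_err = (x_biased - x_true) / x_true
print(round(100 * rel_err, 2))  # ~ -4.76: a +1 hPa offset biases O3MR low by ~5%
```

The first-order relative error is -dP/(P + dP), which is why the offsets matter in the low-pressure stratosphere but are negligible in the troposphere.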
Propagation of Radiosonde Pressure Sensor Errors to Ozonesonde Measurements
Stauffer, R. M.; Morris, G.A.; Thompson, A. M.; Joseph, E.; Coetzee, G. J. R.; Nalli, N. R.
2014-01-01
Several previous studies highlight pressure (or equivalently, pressure altitude) discrepancies between the radiosonde pressure sensor and that derived from a GPS flown with the radiosonde. The offsets vary during the ascent both in absolute and percent pressure differences. To investigate this problem further, a total of 731 radiosonde-ozonesonde launches from the Southern Hemisphere subtropics to Northern mid-latitudes are considered, with launches between 2005 and 2013 from both longer-term and campaign-based intensive stations. Five series of radiosondes from two manufacturers (International Met Systems: iMet, iMet-P, iMet-S, and Vaisala: RS80-15N and RS92-SGP) are analyzed to determine the magnitude of the pressure offset. Additionally, electrochemical concentration cell (ECC) ozonesondes from three manufacturers (Science Pump Corporation; SPC and ENSCI-Droplet Measurement Technologies; DMT) are analyzed to quantify the effects these offsets have on the calculation of ECC ozone (O3) mixing ratio profiles (O3MR) from the ozonesonde-measured partial pressure. Approximately half of all offsets are > ±0.6 hPa in the free troposphere, with nearly a third > ±1.0 hPa at 26 km, where the 1.0 hPa error represents ~5 percent of the total atmospheric pressure. Pressure offsets have negligible effects on O3MR below 20 km (96 percent of launches lie within ±5 percent O3MR error at 20 km). Ozone mixing ratio errors above 10 hPa (~30 km) can approach greater than ±10 percent (>25 percent of launches that reach 30 km exceed this threshold). These errors cause disagreement between the integrated ozonesonde-only column O3 from the GPS and radiosonde pressure profile by an average of +6.5 DU. Comparisons of total column O3 between the GPS and radiosonde pressure profiles yield average differences of +1.1 DU when the O3 is integrated to burst with addition of the McPeters and Labow (2012) above-burst O3 column climatology. Total column differences are reduced to an average of -0.5 DU when
Ramalingam, R.; G. Anitha; J. Shanmugam
2009-01-01
This paper presents error modelling and error analysis of a microelectromechanical systems (MEMS) inertial measurement unit (IMU) for a low-cost strapdown inertial navigation system (INS). The INS consists of the IMU and a navigation processor. The IMU provides the acceleration and angular rate of the vehicle in all three axes. In this paper, errors that affect the MEMS IMU, which is of low cost and small volume, are stochastically modelled and analysed using Allan variance. Wavelet decomposition has b...
Comparison of Neural Network Error Measures for Simulation of Slender Marine Structures
Christiansen, Niels H.; Voie, Per Erlend Torbergsen; Winther, Ole;
2014-01-01
Training of an artificial neural network (ANN) adjusts the internal weights of the network in order to minimize a predefined error measure. This error measure is given by an error function. Several different error functions are suggested in the literature. However, by far the most common measure for...
Propagation of radiosonde pressure sensor errors to ozonesonde measurements
R. M. Stauffer
2013-08-01
Several previous studies highlight pressure (or equivalently, pressure altitude) discrepancies between the radiosonde pressure sensor and that derived from a GPS flown with the radiosonde. The offsets vary during the ascent both in absolute and percent pressure differences. To investigate this, a total of 501 radiosonde/ozonesonde launches from the Southern Hemisphere subtropics to northern mid-latitudes are considered, with launches between 2006 and 2013 from both historical and campaign-based intensive stations. Three types of electrochemical concentration cell (ECC) ozonesonde manufacturers (Science Pump Corporation; SPC and ENSCI/Droplet Measurement Technologies; DMT) and five series of radiosondes from two manufacturers (International Met Systems: iMet, iMet-P, iMet-S, and Vaisala: RS80 and RS92) are analyzed to determine the magnitude of the pressure offset and the effects these offsets have on the calculation of ECC ozone (O3) mixing ratio profiles (O3MR) from the ozonesonde-measured partial pressure. Approximately half of all offsets are > ±0.7 hPa in the free troposphere, with nearly a quarter > ±1.0 hPa at 26 km, where the 1.0 hPa error represents ~5% of the total atmospheric pressure. Pressure offsets have negligible effects on O3MR below 20 km (98% of launches lie within ±5% O3MR error at 20 km). Ozone mixing ratio errors in the 7–15 hPa layer (29–32 km), a region critical for detection of long-term O3 trends, can approach greater than ±10% (>25% of launches that reach 30 km exceed this threshold). Comparisons of total column O3 yield average differences of +1.6 DU (−1.1 to +4.9 DU, 10th to 90th percentiles) when the O3 is integrated to burst with addition of the McPeters and Labow (2012) above-burst O3 column climatology. Total column differences are reduced to an average of +0.1 DU (−1.1 to +2.2 DU) when the O3 profile is integrated to 10 hPa with subsequent addition of the O3 climatology above 10 hPa. The RS92 radiosondes are clearly
Development of an Abbe Error Free Micro Coordinate Measuring Machine
Qiangxian Huang
2016-04-01
A micro Coordinate Measuring Machine (CMM) with a measurement volume of 50 mm × 50 mm × 50 mm and a measuring accuracy of about 100 nm (2σ) has been developed. In this new micro CMM, an XYZ stage, which is driven by three piezo-motors in the X, Y and Z directions, can achieve a drive resolution of about 1 nm and a stroke of more than 50 mm. In order to reduce the crosstalk among the X-, Y- and Z-stages, a special mechanical structure, called a co-planar stage, is introduced. The movement of the stage in each direction is detected by a laser interferometer. A contact type of probe is adopted for measurement. The center of the probe ball coincides with the intersection point of the measuring axes of the three laser interferometers. Therefore, the metrological system of the CMM obeys the Abbe principle in three directions and is free from Abbe error. The CMM is placed in an anti-vibration and thermostatic chamber to avoid the influence of vibration and temperature fluctuation. A series of experimental results shows that the measurement uncertainty within 40 mm along the X, Y and Z directions is about 100 nm (2σ). The flatness of the measuring face of a gauge block was also measured, verifying the performance of the developed micro CMM.
The bivariate current status model
Groeneboom, P.
2013-01-01
For the univariate current status and, more generally, the interval censoring model, distribution theory has been developed for the maximum likelihood estimator (MLE) and smoothed maximum likelihood estimator (SMLE) of the unknown distribution function, see, e.g., [12], [7], [4], [5], [6], [10], [11] and [8]. For the bivariate current status and interval censoring models distribution theory of this type is still absent and even the rate at which we can expect reasonable estimators to converge...
Error sources in atomic force microscopy for dimensional measurements: Taxonomy and modeling
Marinello, F.; Voltan, A.; Savio, E.; Carmignato, S.; De Chiffre, Leonardo
2010-01-01
This paper aimed at identifying the error sources that occur in dimensional measurements performed using atomic force microscopy. In particular, a set of characterization techniques for errors quantification is presented. The discussion on error sources is organized in four main categories: scanning system, tip-surface interaction, environment, and data processing. The discussed errors include scaling effects, squareness errors, hysteresis, creep, tip convolution, and thermal drift. A mathematical model of the measurement system is eventually described, as a reference basis for errors...
Quantitative texton sequences for legible bivariate maps.
Ware, Colin
2009-01-01
Representing bivariate scalar maps is a common but difficult visualization problem. One solution has been to use two dimensional color schemes, but the results are often hard to interpret and inaccurately read. An alternative is to use a color sequence for one variable and a texture sequence for another. This has been used, for example, in geology, but is much less studied than the two dimensional color scheme, although theory suggests that it should lead to easier perceptual separation of information relating to the two variables. To make a texture sequence more clearly readable, the concept of the quantitative texton sequence (QTonS) is introduced. A QTonS is defined as a sequence of small graphical elements, called textons, where each texton represents a different numerical value and sets of textons can be densely displayed to produce visually differentiable textures. An experiment was carried out to compare two bivariate color coding schemes with two schemes using QTonS for one bivariate map component and a color sequence for the other. Two different key designs were investigated (a key being a sequence of colors or textures used in obtaining quantitative values from a map). The first design used two separate keys, one for each dimension, in order to measure how accurately subjects could independently estimate the underlying scalar variables. The second key design was two dimensional and intended to measure the overall integral accuracy that could be obtained. The results show that the accuracy is substantially higher for the QTonS/color sequence schemes. A hypothesis that texture/color sequence combinations are better for independent judgments of mapped quantities was supported. A second experiment probed the limits of spatial resolution for QTonSs.
Aerogel Antennas Communications Study Using Error Vector Magnitude Measurements
Miranda, Felix A.; Mueller, Carl H.; Meador, Mary Ann B.
2014-01-01
This presentation discusses an aerogel antennas communication study using error vector magnitude (EVM) measurements. The study was performed using 2x4 element polyimide (PI) aerogel-based phased arrays designed for operation at 5 GHz as transmit (Tx) and receive (Rx) antennas separated by a line of sight (LOS) distance of 8.5 meters. The results of the EVM measurements demonstrate that polyimide aerogel antennas work appropriately to support digital communication links with typically used modulation schemes such as QPSK and π/4 DQPSK. As such, PI aerogel antennas with higher gain, larger bandwidth and lower mass than typically used microwave laminates could be suitable to enable aerospace-to-ground communication links with enough channel capacity to support voice, data and video links from CubeSats, unmanned air vehicles (UAV), and commercial aircraft.
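EVM itself has a standard definition: the rms magnitude of the error vector between received and ideal constellation points, normalized by the rms ideal-symbol magnitude (normalization conventions vary between instruments). A minimal sketch with made-up QPSK symbols, not data from the study:

```python
# Sketch of an error vector magnitude (EVM) computation: rms distance between
# received and ideal constellation symbols, normalized by rms ideal magnitude.
import math

def evm_percent(received, ideal):
    err = sum(abs(r - i) ** 2 for r, i in zip(received, ideal))
    ref = sum(abs(i) ** 2 for i in ideal)
    return 100.0 * math.sqrt(err / ref)

# Hypothetical QPSK example: four ideal symbols, received with small distortions
ideal = [1 + 1j, -1 + 1j, -1 - 1j, 1 - 1j]
received = [1.05 + 0.95j, -0.9 + 1.1j, -1.02 - 1.0j, 0.98 - 1.04j]
print(round(evm_percent(received, ideal), 2))  # a few percent EVM
```

A perfect link gives 0% EVM; larger values indicate noise, distortion, or antenna impairments degrading the constellation.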
Characterization of measurement error sources in Doppler global velocimetry
Meyers, James F.; Lee, Joseph W.; Schwartz, Richard J.
2001-04-01
Doppler global velocimetry uses the absorption characteristics of iodine vapour to provide instantaneous three-component measurements of flow velocity within a plane defined by a laser light sheet. Although the technology is straightforward, its utilization as a flow diagnostics tool requires hardening of the optical system and careful attention to detail during data acquisition and processing if routine use in wind tunnel applications is to be achieved. A development programme that reaches these goals is presented. Theoretical and experimental investigations were conducted on each technology element to determine methods that increase measurement accuracy and repeatability. Enhancements resulting from these investigations included methods to ensure iodine vapour calibration stability, single frequency operation of the laser and image alignment to sub-pixel accuracies. Methods were also developed to improve system calibration, and eliminate spatial variations of optical frequency in the laser output, spatial variations in optical transmissivity and perspective and optical distortions in the data images. Each of these enhancements is described and experimental examples given to illustrate the improved measurement performance obtained by the enhancement. The culmination of this investigation was the measured velocity profile of a rotating wheel resulting in a 1.75% error in the mean with a standard deviation of 0.5 m s-1. Comparing measurements of a jet flow with corresponding Pitot measurements validated the use of these methods for flow field applications.
Covariate Measurement Error Correction Methods in Mediation Analysis with Failure Time Data
Zhao, Shanshan; Prentice, Ross L.
2014-01-01
Mediation analysis is important for understanding the mechanisms whereby one variable causes changes in another. Measurement error could obscure the ability of the potential mediator to explain such changes. This paper focuses on developing correction methods for measurement error in the mediator with failure time outcomes. We consider a broad definition of measurement error, including technical error and error associated with temporal variation. The underlying model with the ‘true’ mediator ...
On the Measurement of Privacy as an Attacker's Estimation Error
Rebollo-Monedero, David; Diaz, Claudia; Forné, Jordi
2011-01-01
A wide variety of privacy metrics have been proposed in the literature to evaluate the level of protection offered by privacy enhancing-technologies. Most of these metrics are specific to concrete systems and adversarial models, and are difficult to generalize or translate to other contexts. Furthermore, a better understanding of the relationships between the different privacy metrics is needed to enable a more grounded and systematic approach to measuring privacy, as well as to assist systems designers in selecting the most appropriate metric for a given application. In this work we propose a theoretical framework for privacy-preserving systems, endowed with a general definition of privacy in terms of the estimation error incurred by an attacker who aims to disclose the private information that the system is designed to conceal. We show that our framework permits interpreting and comparing a number of well-known metrics under a common perspective. The arguments behind these interpretations are based on fundame...
Statistical Inference for Partially Linear Regression Models with Measurement Errors
Jinhong YOU; Qinfeng XU; Bin ZHOU
2008-01-01
In this paper, the authors investigate three aspects of statistical inference for partially linear regression models where some covariates are measured with errors. Firstly, a bandwidth selection procedure is proposed, which is a combination of the difference-based technique and the GCV method. Secondly, a goodness-of-fit test procedure is proposed, which is an extension of the generalized likelihood technique. Thirdly, a variable selection procedure for the parametric part is provided based on nonconcave penalization and corrected profile least squares. As in "Variable selection via nonconcave penalized likelihood and its oracle properties" (J. Amer. Statist. Assoc., 96, 2001, 1348-1360), it is shown that the resulting estimator has an oracle property with a proper choice of regularization parameters and penalty function. Simulation studies are conducted to illustrate the finite sample performances of the proposed procedures.
Bivariate control chart with copula
Lestari, Tika; Syuhada, Khreshna; Mukhaiyar, Utriweni
2015-12-01
The control chart is the main and most powerful tool in statistical process control for detecting and classifying data as either in control or out of control. Its concept basically refers to the theory of prediction intervals. Accordingly, in this paper, we aim at constructing what are called predictive bivariate control charts, both classical and copula-based ones. We argue that an appropriate joint distribution function may be well estimated by employing a copula. A numerical analysis is carried out to illustrate that a copula-based control chart outperforms the others.
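To illustrate why the joint (copula) structure matters for such charts, here is a minimal sketch, not the paper's construction: under a Gaussian copula with positive dependence, a control box built from two marginal 95% limits has joint coverage above the 0.9025 that independence would give, so independence-based limits misstate the false-alarm rate. All parameters below are illustrative.

```python
# Sketch: joint coverage of a naive bivariate control box under a Gaussian
# copula. With positive dependence, exceedances co-occur, so the joint
# coverage exceeds the 0.95**2 = 0.9025 implied by independence.
import math
import random

def std_normal_cdf(t):
    return 0.5 * (1.0 + math.erf(t / math.sqrt(2.0)))

def gaussian_copula_sample(rho, n, rng):
    """Pairs (u, v) on [0,1]^2 with Gaussian-copula dependence parameter rho."""
    out = []
    for _ in range(n):
        z1 = rng.gauss(0, 1)
        z2 = rho * z1 + math.sqrt(1.0 - rho * rho) * rng.gauss(0, 1)
        out.append((std_normal_cdf(z1), std_normal_cdf(z2)))
    return out

rng = random.Random(42)
sample = gaussian_copula_sample(0.7, 20000, rng)
# Box formed by two marginal 95% prediction intervals, i.e. [0.025, 0.975]^2:
inside = sum(1 for u, v in sample if 0.025 <= u <= 0.975 and 0.025 <= v <= 0.975)
coverage = inside / len(sample)
print(round(coverage, 3))  # above 0.9025 because the tails co-occur
```

A copula-based chart would instead calibrate the control region against this estimated joint distribution rather than the product of the marginals.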
Francesca Hughes: Architecture of Error: Matter, Measure and the Misadventure of Precision
Foote, Jonathan
Review of "Architecture of Error: Matter, Measure and the Misadventure of Precision" by Francesca Hughes (MIT Press, 2014).
Braun, Danielle
2013-01-01
The first part of this dissertation focuses on methods to adjust for measurement error in risk prediction models. In chapter one, we propose a nonparametric adjustment for measurement error in time to event data. Measurement error in time to event data used as a predictor will lead to inaccurate predictions. This arises in the context of self-reported family history, a time to event covariate often measured with error, used in Mendelian risk prediction models. Using validation data, we propos...
Cognitive error in the measurement of investment returns
Hayley, S.
2015-01-01
This thesis identifies and quantifies the impact of cognitive errors in certain aspects of investor decision-making. One error is that investors are unaware that the Internal Rate of Return (IRR) is a biased indicator of expected terminal wealth for any dynamic strategy where the amount invested is systematically related to the returns made to date. This error leads investors to use Value Averaging (VA). This thesis demonstrates that this is an inefficient strategy, since alternative strategi...
A Simulation Analysis of Bivariate Availability Models
Caruso, Elise M.
2000-01-01
Equipment behavior is often discussed in terms of age and use. For example, an automobile is frequently referred to as 3 years old with 30,000 miles. Bivariate failure modeling provides a framework for studying system behavior as a function of two variables. This is meaningful when studying the reliability/availability of systems and equipment. This thesis extends work done in the area of bivariate failure modeling. Four bivariate failure models are selected for analysis. The study in...
Some R graphics for bivariate distributions
Klein, Ingo
2008-01-01
There is no package in R to plot bivariate distributions for discrete variables or variables given by classes. Therefore, with the help of the already implemented R routine persp, R functions will be proposed for 3-D plots of the bivariate distribution of discrete variables: the so-called stereogram, which generalizes the well-known histogram for cross-classified data, and the approximative bivariate distribution function for cross-classified data.
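What such a stereogram renders is simply the joint relative-frequency table of two discrete variables; the sketch below computes that table in Python (the data are illustrative, and R's persp would draw the corresponding matrix as a 3-D surface):

```python
# Sketch: the joint relative-frequency table underlying a stereogram
# (the 3-D generalization of the histogram for cross-classified data).
from collections import Counter

# Illustrative cross-classified data: a discrete variable and a class label
x = [1, 1, 2, 2, 2, 3, 3, 1, 2, 3]
y = ['a', 'b', 'a', 'a', 'b', 'b', 'a', 'a', 'b', 'b']

joint = Counter(zip(x, y))              # cell counts of the joint distribution
n = len(x)
table = {cell: count / n for cell, count in joint.items()}
print(table[(2, 'a')])                  # relative frequency of cell x=2, y='a'
```

The heights plotted over the (x, y) grid are exactly these relative frequencies; accumulating them gives the approximative bivariate distribution function mentioned in the abstract.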
Nonlinear analysis of bivariate data with cross recurrence plots
Marwan, N
2002-01-01
We extend the method of recurrence plots to cross recurrence plots (CRP), which enables a nonlinear analysis of bivariate data. To quantify CRPs, we introduce three measures of complexity based mainly on diagonal structures in CRPs. The CRP analysis of prototypical model systems with nonlinear interactions demonstrates that this technique can uncover these nonlinear interrelations from bivariate time series, whereas linear correlation tests cannot. Applying the CRP analysis to climatological data, we find a complex relationship between rainfall and El Niño data.
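The CRP itself is easy to state: entry (i, j) records whether the i-th state of one series lies within a threshold eps of the j-th state of the other. A minimal sketch for scalar series (the complexity measures based on diagonal structures would then be computed on this matrix; the values here are illustrative):

```python
# Sketch of a cross recurrence plot for scalar series:
# CR(i, j) = 1 when |x_i - y_j| <= eps, else 0.
def cross_recurrence(x, y, eps):
    return [[1 if abs(xi - yj) <= eps else 0 for yj in y] for xi in x]

x = [0.0, 0.5, 1.0, 0.5]   # illustrative series
y = [0.1, 0.9, 0.4]
crp = cross_recurrence(x, y, eps=0.15)
for row in crp:
    print(row)
```

Diagonal lines of 1s in this matrix indicate stretches where the two trajectories evolve similarly, which is what the quantification measures exploit.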
The consequences of measurement error when estimating the impact of obesity on income
O'Neill, Donal; Sweetman, Olive
2013-01-01
This paper examines the consequences of using self-reported measures of BMI when estimating the effect of BMI on income for women using both Irish and US data. We find that self-reported BMI is subject to substantial measurement error and that this error deviates from classical measurement error. These errors cause the traditional least squares estimator to overestimate the relationship between BMI and income. We show that neither the conditional expectation estimator nor the instrumental var...
Comparative temperature measurement errors in high thermal gradient fields
Accurate measurement of temperature in tumor and surrounding host tissue remains one of the major difficulties in clinical hyperthermia. The need for nonperturbable probes that can operate in electromagnetic and ultrasonic fields has been well established. Less attention has been given to the need for nonperturbing probes: temperature probes that do not alter the thermal environments they are sensing. This is important in situations where the probe traverses relatively high temperature gradients such as those resulting from significant differentials in local SAR, blood flow, and thermal properties. Errors are reduced when the thermal properties of the probe and tumor tissue are matched. The ideal transducer would also have low thermal mass and microwave and/or ultrasonic absorption characteristics matched to tissue. Perturbations induced in the temperature gradient field by virtue of axial conduction along the probe shaft were compared for several of the available multisensor temperature probes as well as several prototype multisensor temperature transducers. Well calibrated thermal gradients ranging from 0 to 100C/cm were produced with a stability of 2 millidegrees per minute. The probes compared were: the three sensor YSI thermocouple probe, a 14 sensor thermistor needle probe, a 10 sensor ion-implanted silicon substrate resistance probe, and a multisensor resistance probe fabricated using microelectronic techniques
Large-scale spatial angle measurement and the pointing error analysis
Xiao, Wen-jian; Chen, Zhi-bin; Ma, Dong-xi; Zhang, Yong; Liu, Xian-hong; Qin, Meng-ze
2016-05-01
A large-scale spatial angle measurement method is proposed based on an inertial reference. A common measurement reference is established in inertial space, and the spatial vector coordinates of each measured axis in inertial space are measured by using autocollimation tracking and inertial measurement technology. According to the spatial coordinates of each test vector axis, the measurement of large-scale spatial angles is easily realized. The pointing error of the tracking device based on the two mirrors in the measurement system is studied, and the influence of different installation errors on the pointing error is analyzed. This research can lay a foundation for error allocation, calibration and compensation for the measurement system.
Statistical Test for Bivariate Uniformity
Zhenmin Chen
2014-01-01
The purpose of the multidimensional uniformity test is to check whether the underlying probability distribution of a multidimensional population differs from the multidimensional uniform distribution. The multidimensional uniformity test has applications in various fields such as biology, astronomy, and computer science. Such a test, however, has received less attention in the literature compared with the univariate case. A new test statistic for checking multidimensional uniformity is proposed in this paper. Some important properties of the proposed test statistic are discussed. As a special case, the bivariate test statistic is discussed in detail in this paper. The Monte Carlo simulation is used to compare the power of the newly proposed test with the distance-to-boundary test, which is a recently published statistical test for multidimensional uniformity. It has been shown that the test proposed in this paper is more powerful than the distance-to-boundary test in some cases.
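The Monte Carlo power comparison mentioned above rests on a generic recipe: simulate the statistic's null distribution under bivariate uniformity and read off an empirical critical value. The sketch below uses a plain grid chi-square statistic as a stand-in, not the statistic proposed in the paper:

```python
# Hedged sketch of the Monte Carlo recipe for a bivariate uniformity test.
# The statistic is a simple grid chi-square (a stand-in, not the paper's):
# simulate its null distribution under uniformity on [0,1]^2, take the
# empirical 95th percentile as the critical value, then test a sample.
import random

def grid_chisq(points, k=4):
    """Chi-square-style statistic over a k x k partition of the unit square."""
    n = len(points)
    expected = n / (k * k)
    counts = [[0] * k for _ in range(k)]
    for u, v in points:
        counts[min(int(u * k), k - 1)][min(int(v * k), k - 1)] += 1
    return sum((c - expected) ** 2 / expected for row in counts for c in row)

rng = random.Random(1)
n, reps = 200, 500
null_stats = sorted(grid_chisq([(rng.random(), rng.random()) for _ in range(n)])
                    for _ in range(reps))
crit = null_stats[int(0.95 * reps)]  # empirical 5% critical value

# A clustered (clearly non-uniform) sample should exceed the critical value:
clustered = [(rng.random() * 0.5, rng.random() * 0.5) for _ in range(n)]
print(grid_chisq(clustered) > crit)
```

Power comparisons then repeat this under various alternatives and count how often each statistic exceeds its own critical value.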
Approximation of bivariate copulas by patched bivariate Fréchet copulas
Zheng, Yanting
2011-03-01
Bivariate Fréchet (BF) copulas characterize dependence as a mixture of three simple structures: comonotonicity, independence and countermonotonicity. They are easily interpretable but have limitations when used as approximations to general dependence structures. To improve the approximation property of the BF copulas and keep the advantage of easy interpretation, we develop a new copula approximation scheme by using BF copulas locally and patching the local pieces together. Error bounds and a probabilistic interpretation of this approximation scheme are developed. The new approximation scheme is compared with several existing copula approximations, including shuffle of min, checkmin, checkerboard and Bernstein approximations and exhibits better performance, especially in characterizing the local dependence. The utility of the new approximation scheme in insurance and finance is illustrated in the computation of the rainbow option prices and stop-loss premiums.
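For reference, the three simple structures named in the abstract are the two Fréchet bounds plus the independence copula, so a BF copula is their convex combination:

```latex
C_{\alpha,\beta}(u,v) = \alpha\,\min(u,v) + (1-\alpha-\beta)\,uv + \beta\,\max(u+v-1,0),
\qquad \alpha,\beta \ge 0,\quad \alpha+\beta \le 1.
```

Here min(u, v) is the comonotonic (upper Fréchet) copula, uv the independence copula, and max(u+v-1, 0) the countermonotonic (lower Fréchet) copula; the paper's scheme applies such mixtures locally on patches rather than globally.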
Wilson, M.D.; Durand, M.; H. C. Jung; D. Alsdorf
2014-01-01
The Surface Water and Ocean Topography (SWOT) mission, scheduled for launch in 2020, will provide a step-change improvement in the measurement of terrestrial surface water storage and dynamics. In particular, it will provide the first, routine two-dimensional measurements of water surface elevations. In this paper, we aimed to (i) characterize and illustrate in two-dimensions the errors which may be found in SWOT swath measurements of terrestrial surface water, (ii) simula...
Wilson, M.D.; Durand, M.; H. C. Jung; D. Alsdorf
2015-01-01
The Surface Water and Ocean Topography (SWOT) mission, scheduled for launch in 2020, will provide a step-change improvement in the measurement of terrestrial surface-water storage and dynamics. In particular, it will provide the first, routine two-dimensional measurements of water-surface elevations. In this paper, we aimed to (i) characterise and illustrate in two dimensions the errors which may be found in SWOT swath measurements of terrestrial surface water, (ii) simulate...
ERROR PROCESSING METHOD OF CYCLOIDAL GEAR MEASUREMENT USING 3D COORDINATES MEASURING MACHINE
1998-01-01
An error processing method is presented, based on optimization theory and microcomputer techniques, which can be successfully used in cycloidal gear measurement on a three-dimensional coordinate measuring machine (CMM). In the procedure, the minimum of the quadratic sum of the normal deviations is used as the objective function, and the equidistant curve is dealt with instead of the tooth profile. The CMM is a highly accurate measuring machine which provides a way to evaluate the accuracy of the cycloidal gear completely.
A simple bivariate count data regression model
Shiferaw Gurmu; John Elder
2007-01-01
This paper develops a simple bivariate count data regression model in which dependence between count variables is introduced by means of stochastically related unobserved heterogeneity components. Unlike existing commonly used bivariate models, we obtain a computationally simple closed form of the model with an unrestricted correlation pattern.
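As a rough illustration of the mechanism described in the abstract (a sketch of the general idea, not the Gurmu-Elder specification itself), two Poisson counts whose rates share a gamma-distributed heterogeneity term become positively correlated:

```python
import math
import random


def poisson(rng, lam):
    """Draw a Poisson variate via Knuth's method (fine for small lambda)."""
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1


def simulate_bivariate_counts(n, beta1=0.5, beta2=0.3, shape=2.0, seed=1):
    """Two Poisson counts whose rates exp(beta*x) are scaled by a common
    gamma heterogeneity term nu (mean 1), inducing positive correlation.
    Parameter names and values here are illustrative assumptions."""
    rng = random.Random(seed)
    pairs = []
    for _ in range(n):
        x = rng.gauss(0.0, 1.0)                     # a single covariate
        nu = rng.gammavariate(shape, 1.0 / shape)   # shared heterogeneity
        lam1 = math.exp(beta1 * x) * nu
        lam2 = math.exp(beta2 * x) * nu
        pairs.append((poisson(rng, lam1), poisson(rng, lam2)))
    return pairs
```

The closed-form likelihood of the actual model is what makes it computationally simple; the simulation above only demonstrates how stochastically related heterogeneity components create dependence between the counts.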
Extreme behavior of bivariate elliptical distributions
Asimit, A. V.; Jones, B
2007-01-01
This paper exploits a stochastic representation of bivariate elliptical distributions in order to obtain asymptotic results which are determined by the tail behavior of the generator. Under certain specified assumptions, we present the limiting distribution of componentwise maxima, the limiting upper copula, and a bivariate version of the classical peaks over threshold result.
Stochastic ordering of bivariate elliptical distributions
Landsman, Z; Tsanakas, A.
2006-01-01
It is shown that for elliptically distributed bivariate random vectors, the riskiness and dependence strength of random portfolios, in the sense of the univariate convex and bivariate concordance stochastic orders respectively, can be simply characterised in terms of the vector's Σ-matrix.
Angle measurement error and compensation for decentration rotation of circular gratings
CHEN Xi-jun; WANG Zhen-huan; ZENG Qing-shuang
2010-01-01
Because the geometric center of a circular grating does not coincide with the rotation center, the angle measurement error of the circular grating is analyzed. Based on the moiré fringe equations under decentration, a mathematical model of the angle measurement error is derived. It is concluded that the decentration between the centre of the circular grating and the center of the revolving shaft leads to a first-harmonic error in the angle measurement. The correctness of this result is proved by experimental data. A method of error compensation is presented, and the angle measurement accuracy of the circular grating is effectively improved by the compensation.
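The first-harmonic conclusion suggests a simple compensation recipe: fit the first Fourier harmonic of the measured angle error and subtract it. A hedged sketch (the error model and its amplitude/phase convention are illustrative, not the paper's exact formulation):

```python
import math


def eccentricity_error(theta, e_over_r, phi):
    """Illustrative first-harmonic angle error caused by decentration of the
    grating centre relative to the rotation axis (small-eccentricity form)."""
    return e_over_r * math.sin(theta + phi)


def fit_first_harmonic(samples):
    """Estimate amplitude and phase of the first harmonic from (theta, error)
    samples via discrete Fourier coefficients; subtracting the fitted
    harmonic is the compensation step. Assumes uniformly spaced angles."""
    n = len(samples)
    a = 2.0 / n * sum(err * math.cos(t) for t, err in samples)  # ~ A*sin(phi)
    b = 2.0 / n * sum(err * math.sin(t) for t, err in samples)  # ~ A*cos(phi)
    return math.hypot(a, b), math.atan2(a, b)  # amplitude A, phase phi
```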
M. D. Wilson
2014-08-01
The Surface Water and Ocean Topography (SWOT) mission, scheduled for launch in 2020, will provide a step-change improvement in the measurement of terrestrial surface water storage and dynamics. In particular, it will provide the first, routine two-dimensional measurements of water surface elevations. In this paper, we aimed to (i) characterize and illustrate in two dimensions the errors which may be found in SWOT swath measurements of terrestrial surface water, (ii) simulate the spatio-temporal sampling scheme of SWOT for the Amazon, and (iii) assess the impact of each of these on estimates of water surface slope and river discharge which may be obtained from SWOT imagery. We based our analysis on a "virtual mission" for a 300 km reach of the central Amazon (Solimões) River at its confluence with the Purus River, using a hydraulic model to provide water surface elevations according to SWOT spatio-temporal sampling to which errors were added based on a two-dimension height error spectrum derived from the SWOT design requirements. We thereby obtained water surface elevation measurements for the Amazon mainstem as may be observed by SWOT. Using these measurements, we derived estimates of river slope and discharge and compared them to those obtained directly from the hydraulic model. We found that cross-channel and along-reach averaging of SWOT measurements using reach lengths of greater than 4 km for the Solimões and 7.5 km for Purus reduced the effect of systematic height errors, enabling discharge to be reproduced accurately from the water height, assuming known bathymetry and friction. Using cross-section averaging and 20 km reach lengths, results show Nash–Sutcliffe model efficiency values of 0.99 for the Solimões and 0.88 for the Purus, with 2.6 and 19.1% average overall error in discharge, respectively.
Wilson, M. D.; Durand, M.; Jung, H. C.; Alsdorf, D.
2014-08-01
The Surface Water and Ocean Topography (SWOT) mission, scheduled for launch in 2020, will provide a step-change improvement in the measurement of terrestrial surface water storage and dynamics. In particular, it will provide the first, routine two-dimensional measurements of water surface elevations. In this paper, we aimed to (i) characterize and illustrate in two-dimensions the errors which may be found in SWOT swath measurements of terrestrial surface water, (ii) simulate the spatio-temporal sampling scheme of SWOT for the Amazon, and (iii) assess the impact of each of these on estimates of water surface slope and river discharge which may be obtained from SWOT imagery. We based our analysis on a "virtual mission" for a 300 km reach of the central Amazon (Solimões) River at its confluence with the Purus River, using a hydraulic model to provide water surface elevations according to SWOT spatio-temporal sampling to which errors were added based on a two-dimension height error spectrum derived from the SWOT design requirements. We thereby obtained water surface elevation measurements for the Amazon mainstem as may be observed by SWOT. Using these measurements, we derived estimates of river slope and discharge and compared them to those obtained directly from the hydraulic model. We found that cross-channel and along-reach averaging of SWOT measurements using reach lengths of greater than 4 km for the Solimões and 7.5 km for Purus reduced the effect of systematic height errors, enabling discharge to be reproduced accurately from the water height, assuming known bathymetry and friction. Using cross-section averaging and 20 km reach lengths, results show Nash-Sutcliffe model efficiency values of 0.99 for the Solimões and 0.88 for the Purus, with 2.6 and 19.1% average overall error in discharge, respectively.
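The Nash-Sutcliffe model efficiency quoted above (0.99 and 0.88) is straightforward to compute from observed and modelled discharge series; a minimal sketch:

```python
def nash_sutcliffe(observed, modelled):
    """Nash-Sutcliffe model efficiency: 1 minus the ratio of the modelled
    squared error to the variance of the observations about their mean.
    1.0 is a perfect fit; 0.0 means no better than the observed mean."""
    mean_obs = sum(observed) / len(observed)
    num = sum((o - m) ** 2 for o, m in zip(observed, modelled))
    den = sum((o - mean_obs) ** 2 for o in observed)
    return 1.0 - num / den
```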
Baran, Sándor; Möller, Annette
2016-06-01
Forecast ensembles are typically employed to account for prediction uncertainties in numerical weather prediction models. However, ensembles often exhibit biases and dispersion errors, so they require statistical post-processing to improve their predictive performance. Two popular univariate post-processing models are Bayesian model averaging (BMA) and ensemble model output statistics (EMOS). In the last few years, increased interest has emerged in developing multivariate post-processing models incorporating dependencies between weather quantities, such as a bivariate distribution for wind vectors or an even more general setting that allows any types of weather variables to be combined. In line with a recently proposed approach to model temperature and wind speed jointly by a bivariate BMA model, this paper introduces an EMOS model for these weather quantities based on a bivariate truncated normal distribution. The bivariate EMOS model is applied to temperature and wind speed forecasts of the 8-member University of Washington mesoscale ensemble and the 11-member ALADIN-HUNEPS ensemble of the Hungarian Meteorological Service, and its predictive performance is compared to the performance of the bivariate BMA model and a multivariate Gaussian copula approach, post-processing the margins with univariate EMOS. While the predictive skills of the compared methods are similar, the bivariate EMOS model requires considerably lower computation times than the bivariate BMA method.
Du, Zhengchun; Wu, Zhaoyong; Yang, Jianguo
2016-01-01
The use of three-dimensional (3D) data in the industrial measurement field is becoming increasingly popular because of the rapid development of laser scanning techniques based on the time-of-flight principle. However, the accuracy and uncertainty of these types of measurement methods are seldom investigated. In this study, a mathematical uncertainty evaluation model for the diameter measurement of standard cylindroid components has been proposed and applied to a 3D laser radar measurement system (LRMS). First, a single-point error ellipsoid analysis for the LRMS was established. An error ellipsoid model and algorithm for diameter measurement of cylindroid components was then proposed based on the single-point error ellipsoid. Finally, four experiments were conducted using the LRMS to measure the diameter of a standard cylinder in the laboratory. The experimental results of the uncertainty evaluation consistently matched well with the predictions. The proposed uncertainty evaluation model for cylindrical diameters can provide a reliable method for actual measurements and support further accuracy improvement of the LRMS. PMID:27213385
Measurement of four-degree-of-freedom error motions based on non-diffracting beam
Zhai, Zhongsheng; Lv, Qinghua; Wang, Xuanze; Shang, Yiyuan; Yang, Liangen; Kuang, Zheng; Bennett, Peter
2016-05-01
A measuring method for the determination of error motions of linear stages based on non-diffracting beams (NDB) is presented. A right-angle prism and a beam splitter are adopted as the measuring head, which is fixed on the moving stage in order to sense the straightness and angular errors. Two CCDs are used to capture the NDB patterns that carry the errors. Four different types of errors, namely the vertical straightness error and three rotational errors (the pitch, roll and yaw errors), can be separated and distinguished through theoretical analysis of the shift in the centre positions in the two cameras. Simulation results show that the proposed method using NDB can measure four degrees of freedom of error for the linear stage.
Pivot and cluster strategy: a preventive measure against diagnostic errors
Shimizu T
2012-11-01
Taro Shimizu,1 Yasuharu Tokuda2; 1Rollins School of Public Health, Emory University, Atlanta, GA, USA; 2Institute of Clinical Medicine, Graduate School of Comprehensive Human Sciences, University of Tsukuba, Ibaraki, Japan. Abstract: Diagnostic errors constitute a substantial portion of preventable medical errors. The accumulation of evidence shows that most errors result from one or more cognitive biases, and a variety of debiasing strategies have been introduced. In this article, we introduce a new diagnostic strategy, the pivot and cluster strategy (PCS), encompassing both of the two mental processes in making a diagnosis, referred to as the intuitive process (System 1) and the analytical process (System 2), in one strategy. With PCS, physicians can recall a set of the most likely differential diagnoses (System 2) of an initial diagnosis made by the physician's intuitive process (System 1), thereby enabling physicians to double-check their diagnosis with two consecutive diagnostic processes. PCS is expected to reduce cognitive errors and enhance diagnostic accuracy and validity, thereby realizing better patient outcomes and cost- and time-effective health care management. Keywords: diagnosis, diagnostic errors, debiasing
Pyy-Martikainen, Marjo; Rendtel, Ulrich
2009-01-01
"It is well known that retrospective survey reports of event histories are affected by measurement errors. Yet little is known about the determinants of measurement errors in event history data or their effects on event history analysis. Making use of longitudinal register data linked at person-level with longitudinal survey data, we provide novel evidence about 1. type and magnitude of measurement errors in survey reports of event histories, 2. validity of classical assumptions about measure...
Neutron-induced soft error rate measurements in semiconductor memories
Soft error rate (SER) testing of devices has been performed using the neutron beam at the Radiation Science and Engineering Center at Penn State University. The soft error susceptibility for different memory chips working at different technology nodes and operating voltages is determined. The effect of 10B on SER as an in situ excess charge source is observed. The effect of higher-energy neutrons on circuit operation will be published later. The Penn State Breazeale Nuclear Reactor was used as the neutron source in the experiments. The high neutron flux allows for accelerated testing of the SER phenomenon. The experiments and analyses have been performed only on soft errors due to thermal neutrons. Various memory chips manufactured by different vendors were tested at various supply voltages and reactor power levels. The effect of the 10B reaction caused by thermal neutron absorption on SER is discussed
R. Ramalingam
2009-11-01
This paper presents error modelling and error analysis of a microelectromechanical systems (MEMS) inertial measurement unit (IMU) for a low-cost strapdown inertial navigation system (INS). The INS consists of the IMU and a navigation processor. The IMU provides the acceleration and angular rate of the vehicle in all three axes. In this paper, errors that affect the MEMS IMU, which is of low cost and small volume, are stochastically modelled and analysed using Allan variance. Wavelet decomposition has been introduced to remove the high-frequency noise that affects the sensors, in order to obtain the original values of angular rates and accelerations with less noise. This increases the accuracy of the strapdown INS. The results show the effect of errors in the output of the sensors, the easy interpretation of random errors by Allan variance, and the increase in accuracy when wavelet decomposition is used to denoise inertial sensor raw data. Defence Science Journal, 2009, 59(6), pp. 650-658, DOI: http://dx.doi.org/10.14429/dsj.59.1571
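The Allan variance used above to characterize random sensor errors can be sketched in a few lines (non-overlapped form, shown for illustration only; production implementations typically use the overlapped estimator):

```python
def allan_variance(data, m):
    """Non-overlapped Allan variance at cluster size m: average the data in
    consecutive blocks of m samples, then take half the mean squared
    difference of successive block averages."""
    n = len(data) // m                              # number of full clusters
    means = [sum(data[i * m:(i + 1) * m]) / m for i in range(n)]
    diffs = [(means[i + 1] - means[i]) ** 2 for i in range(n - 1)]
    return 0.5 * sum(diffs) / len(diffs)
```

Plotting the square root of this quantity against the cluster time on log-log axes is how the different noise terms (quantization, angle random walk, bias instability) are read off in IMU error analysis.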
Wilson, M. D.; Durand, M.; Jung, H. C.; Alsdorf, D.
2015-04-01
The Surface Water and Ocean Topography (SWOT) mission, scheduled for launch in 2020, will provide a step-change improvement in the measurement of terrestrial surface-water storage and dynamics. In particular, it will provide the first, routine two-dimensional measurements of water-surface elevations. In this paper, we aimed to (i) characterise and illustrate in two dimensions the errors which may be found in SWOT swath measurements of terrestrial surface water, (ii) simulate the spatio-temporal sampling scheme of SWOT for the Amazon, and (iii) assess the impact of each of these on estimates of water-surface slope and river discharge which may be obtained from SWOT imagery. We based our analysis on a virtual mission for a ~260 km reach of the central Amazon (Solimões) River, using a hydraulic model to provide water-surface elevations according to SWOT spatio-temporal sampling to which errors were added based on a two-dimensional height error spectrum derived from the SWOT design requirements. We thereby obtained water-surface elevation measurements for the Amazon main stem as may be observed by SWOT. Using these measurements, we derived estimates of river slope and discharge and compared them to those obtained directly from the hydraulic model. We found that cross-channel and along-reach averaging of SWOT measurements using reach lengths greater than 4 km for the Solimões and 7.5 km for Purus reduced the effect of systematic height errors, enabling discharge to be reproduced accurately from the water height, assuming known bathymetry and friction. Using cross-sectional averaging and 20 km reach lengths, results show Nash-Sutcliffe model efficiency values of 0.99 for the Solimões and 0.88 for the Purus, with 2.6 and 19.1 % average overall error in discharge, respectively. We extend the results to other rivers worldwide and infer that SWOT-derived discharge estimates may be more accurate for rivers with larger channel widths (permitting a greater level of cross
Integrated Geometric Errors Simulation, Measurement and Compensation of Vertical Machining Centres
Gohel, C.K.; Makwana, A.H.
2014-01-01
This paper presents research on geometric errors of simulated geometry which are measured and compensated in vertical machining centres. There are many errors in CNC machine tools that affect the accuracy and repeatability of manufacture. Most of these errors are based on specific parameters such as the strength and the stress, the dimensional deviations of the structure of the machine tool, thermal variations, cutting-force-induced errors and tool wear. In this paper a machining system that ...
Information-theoretic approach to quantum error correction and reversible measurement
Nielsen, M A; Schumacher, B; Barnum, H N; Caves, Carlton M.; Schumacher, Benjamin; Barnum, Howard
1997-01-01
Quantum operations provide a general description of the state changes allowed by quantum mechanics. The reversal of quantum operations is important for quantum error-correcting codes, teleportation, and reversing quantum measurements. We derive information-theoretic conditions and equivalent algebraic conditions that are necessary and sufficient for a general quantum operation to be reversible. We analyze the thermodynamic cost of error correction and show that error correction can be regarded as a kind of ``Maxwell demon,'' for which there is an entropy cost associated with information obtained from measurements performed during error correction. A prescription for thermodynamically efficient error correction is given.
Ivan M Roitt; Torben Lund; Callaghan, Martina F.; Richard H Bayford
2010-01-01
Bioimpedance measurements are of great use and can provide considerable insight into biological processes. However, there are a number of possible sources of measurement error that must be considered. The dominant source of error is found in bipolar measurements, where electrode polarisation effects are superimposed on the true impedance of the sample. Even with the tetrapolar approach that is commonly used to circumvent this issue, other errors can persist. ...
Sensor Interaction as a Source of the Electromagnetic Field Measurement Error
Hartansky R.
2014-12-01
The article deals with the analytical calculation and numerical simulation of the interactive influence of electromagnetic sensors. The sensors are components of a field probe, and their mutual influence causes measurement error. An electromagnetic field probe contains three mutually perpendicular sensors in order to measure the electric field vector. The error of the sensors is evaluated in dependence on their relative positions. Based on this, recommendations are proposed for electromagnetic field probe construction that minimize sensor interaction and measurement error.
Mønness, Erik Neslein
2015-01-01
A bivariate diameter and height distribution yields a unified model of a forest stand. The bivariate Johnson's System bounded distribution and the bivariate power-normal distribution are explored. The power-normal originates from the well-known Box-Cox transformation. As evaluated by the bivariate Kolmogorov-Smirnov distance, the bivariate power-normal distribution seems to be superior to the bivariate Johnson's System bounded distribution. The conditional median height given the diameter is...
Study of systematic errors in the luminosity measurement
The experimental systematic error in the barrel region was estimated to be 0.44 %. This value is derived considering the systematic uncertainties from the dominant sources but does not include uncertainties which are being studied. In the end cap region, the study of shower behavior and clustering effect is under way in order to determine the angular resolution at the low angle edge of the Liquid Argon Calorimeter. We also expect that the systematic error in this region will be less than 1 %. The technical precision of theoretical uncertainty is better than 0.1 % comparing the Tobimatsu-Shimizu program and BABAMC modified by ALEPH. To estimate the physical uncertainty we will use the ALIBABA [9] which includes O(α²) QED correction in leading-log approximation. (J.P.N.)
Study of systematic errors in the luminosity measurement
Arima, Tatsumi [Tsukuba Univ., Ibaraki (Japan). Inst. of Applied Physics
1993-04-01
The experimental systematic error in the barrel region was estimated to be 0.44 %. This value is derived considering the systematic uncertainties from the dominant sources but does not include uncertainties which are being studied. In the end cap region, the study of shower behavior and clustering effect is under way in order to determine the angular resolution at the low angle edge of the Liquid Argon Calorimeter. We also expect that the systematic error in this region will be less than 1 %. The technical precision of theoretical uncertainty is better than 0.1 % comparing the Tobimatsu-Shimizu program and BABAMC modified by ALEPH. To estimate the physical uncertainty we will use the ALIBABA [9] which includes O(α²) QED correction in leading-log approximation. (J.P.N.)
Systematic errors in cosmic microwave background polarization measurements
O'Dea, D; Johnson, B R; Dea, Daniel O'; Challinor, Anthony
2006-01-01
We investigate the impact of instrumental systematic errors on the potential of cosmic microwave background polarization experiments targeting primordial B-modes. To do so, we introduce spin-weighted Mueller matrix-valued fields describing the linear response of the imperfect optical system and receiver, and give a careful discussion of the behaviour of the induced systematic effects under rotation of the instrument. We give the correspondence between the matrix components and known optical and receiver imperfections, and compare the likely performance of pseudo-correlation receivers and those that modulate the polarization with a half-wave plate. The latter is shown to have the significant advantage of not coupling the total intensity into polarization for perfect optics, but potential effects like optical distortions that may be introduced by the quasi-optical wave plate warrant further investigation. A fast method for tolerancing time-invariant systematic effects is presented, which propagates errors throug...
Methodical errors of measurement of the human body tissues electrical parameters
Antoniuk, O.; Pokhodylo, Y.
2015-01-01
Sources of methodical measurement errors of the immittance parameters of biological tissues are described. The modelling of measurement errors of the RC-parameters of biological-tissue equivalent circuits over the frequency range is analyzed. Recommendations on the choice of test-signal frequency for the measurement of these elements are provided.
Error analysis of rigid body posture measurement system based on circular feature points
Huo, Ju; Cui, Jishan; Yang, Ning
2015-02-01
For the problem of determining pose parameters in monocular vision, based on planar quadrilateral target feature points, an improved two-stage iterative algorithm is proposed to optimize the calculation model of rigid-body posture measurement. A monocular-vision rigid-body posture measurement system is designed; a unified method is established experimentally to express the measured coordinates of each feature point in a common coordinate system; and the sources of error in the rigid-body posture measurement system are analysed theoretically. Combined with simulation experiments on the pose measurement accuracy of the system under error conditions, the comprehensive error of the measurement system is given, which has theoretical significance for guiding improvements in measurement precision.
A new bivariate negative binomial regression model
Faroughi, Pouya; Ismail, Noriszura
2014-12-01
This paper introduces a new form of bivariate negative binomial (BNB-1) regression which can be fitted to bivariate and correlated count data with covariates. The BNB regression discussed in this study can be fitted to bivariate and overdispersed count data with positive, zero or negative correlations. The joint p.m.f. of the BNB-1 distribution is derived from the product of two negative binomial marginals with a multiplicative factor parameter. Several testing methods were used to check overdispersion and goodness-of-fit of the model. Application of BNB-1 regression is illustrated on a Malaysian motor insurance dataset. The results indicated that BNB-1 regression has a better fit than the bivariate Poisson and BNB-2 models with regard to the Akaike information criterion.
Constructions for a bivariate beta distribution
Olkin, Ingram; Trikalinos, Thomas A.
2014-01-01
The beta distribution is a basic distribution serving several purposes. It is used to model data, and also, as a more flexible version of the uniform distribution, it serves as a prior distribution for a binomial probability. The bivariate beta distribution plays a similar role for two probabilities that have a bivariate binomial distribution. We provide a new multivariate distribution with beta marginal distributions, positive probability over the unit square, and correlations over the full ...
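One classical way to obtain correlated beta marginals, in the spirit of (though not necessarily identical to) the construction discussed above, is to build both variables from three independent gamma variates that share one component:

```python
import random


def bivariate_beta_sample(a1, a2, a3, rng):
    """Sketch of a shared-gamma construction: with independent gammas
    G1 ~ Gamma(a1), G2 ~ Gamma(a2), G3 ~ Gamma(a3),
    X = G1/(G1+G3) ~ Beta(a1, a3) and Y = G2/(G2+G3) ~ Beta(a2, a3).
    Sharing G3 induces positive dependence between X and Y.
    (Illustrative; not necessarily the Olkin-Trikalinos distribution.)"""
    g1 = rng.gammavariate(a1, 1.0)
    g2 = rng.gammavariate(a2, 1.0)
    g3 = rng.gammavariate(a3, 1.0)
    return g1 / (g1 + g3), g2 / (g2 + g3)
```

By construction the pair has beta marginals and positive probability over the whole unit square, the two properties the abstract highlights.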
Stationarity of Bivariate Dynamic Contagion Processes
Angelos Dassios; Xin Dong
2014-01-01
The Bivariate Dynamic Contagion Processes (BDCP) are a broad class of bivariate point processes characterized by the intensities as a general class of piecewise deterministic Markov processes. The BDCP describes a rich dynamic structure where the system is under the influence of both external and internal factors modelled by a shot-noise Cox process and a generalized Hawkes process respectively. In this paper we mainly address the stationarity issue for the BDCP, which is important in applica...
Bivariate Interpolation by Splines and Approximation Order
Nürnberger, Günther
1996-01-01
We construct Hermite interpolation sets for bivariate spline spaces of arbitrary degree and smoothness one on non-rectangular domains with uniform type triangulations. This is done by applying a general method for constructing Lagrange interpolation sets for bivariate spline spaces of arbitrary degree and smoothness. It is shown that Hermite interpolation yields (nearly) optimal approximation order. Applications to data fitting problems and numerical examples are given.
Some theory of bivariate risk attitude
Marta Cardin; Paola Ferretti
2004-01-01
In past years the study of the impact of risk attitude among risks has become a major topic, in particular in Decision Sciences. Subsequently the attention was devoted to the more general case of bivariate random variables. The first approach to multivariate risk aversion was proposed by de Finetti (1952) and Richard (1975) and it is related to the bivariate case. More recently, multivariate risk aversion has been studied by Scarsini (1985, 1988, 1999). Nevertheless even if decision problems ...
Pollack, A. Z.; Perkins, N.J.; Mumford, S. L.; A. Ye; Schisterman, E.F.
2012-01-01
Utilizing multiple biomarkers is increasingly common in epidemiology. However, the combined impact of correlated exposure measurement error, unmeasured confounding, interaction, and limits of detection (LODs) on inference for multiple biomarkers is unknown. We conducted data-driven simulations evaluating bias from correlated measurement error with varying reliability coefficients (R), odds ratios (ORs), levels of correlation between exposures and error, LODs, and interactions. Blood cadmium a...
Compensation method for the alignment angle error of a gear axis in profile deviation measurement
In the precision measurement of involute helical gears, the alignment angle error of a gear axis, which was caused by the assembly error of a gear measuring machine, will affect the measurement accuracy of profile deviation. A model of the involute helical gear is established under the condition that the alignment angle error of the gear axis exists. Based on the measurement theory of profile deviation, without changing the initial measurement method and data process of the gear measuring machine, a compensation method is proposed for the alignment angle error of the gear axis that is included in profile deviation measurement results. Using this method, the alignment angle error of the gear axis can be compensated for precisely. Some experiments that compare the residual alignment angle error of a gear axis after compensation for the initial alignment angle error were performed to verify the accuracy and feasibility of this method. Experimental results show that the residual alignment angle error of a gear axis included in the profile deviation measurement results is decreased by more than 85% after compensation, and this compensation method significantly improves the measurement accuracy of the profile deviation of involute helical gear. (paper)
Sun, Chuanzhi; Wang, Lei; Tan, Jiubin; Zhao, Bo; Tang, Yangchao
2016-02-01
The paper designs a roundness measurement model with multi-systematic error, which takes eccentricity, probe offset, radius of tip head of probe, and tilt error into account for roundness measurement of cylindrical components. The effects of the systematic errors and radius of components are analysed in the roundness measurement. The proposed method is built on the instrument with a high precision rotating spindle. The effectiveness of the proposed method is verified by experiment with the standard cylindrical component, which is measured on a roundness measuring machine. Compared to the traditional limacon measurement model, the accuracy of roundness measurement can be increased by about 2.2 μm using the proposed roundness measurement model for the object with a large radius of around 37 mm. The proposed method can improve the accuracy of roundness measurement and can be used for error separation, calibration, and comparison, especially for cylindrical components with a large radius.
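The traditional limacon model mentioned above fits r(θ) = R + a·cos θ + b·sin θ to the measured profile; the (a, b) term captures the eccentricity, and the residuals give the roundness profile. A hedged sketch assuming uniformly spaced angles over a full revolution (the multi-systematic-error model of the paper adds further terms not shown here):

```python
import math


def fit_limacon(thetas, radii):
    """Least-squares fit of the limacon model r = R + a*cos(t) + b*sin(t)
    using discrete Fourier coefficients, valid for uniformly spaced angles
    over one revolution. Returns radius, eccentricity terms and residuals."""
    n = len(radii)
    R = sum(radii) / n
    a = 2.0 / n * sum(r * math.cos(t) for t, r in zip(thetas, radii))
    b = 2.0 / n * sum(r * math.sin(t) for t, r in zip(thetas, radii))
    residuals = [r - (R + a * math.cos(t) + b * math.sin(t))
                 for t, r in zip(thetas, radii)]
    return R, a, b, residuals
```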
Schöberl, Iris; Kortekaas, Kim; Schöberl, Franz F; Kotrschal, Kurt
2015-12-01
Dog heart rate (HR) is characterized by a respiratory sinus arrhythmia, which makes an automatic algorithm for error correction of HR measurements hard to apply. Here, we present a new method of error correction for HR data collected with the Polar system, including (1) visual inspection of the data, (2) a standardized way to decide with the aid of an algorithm whether or not a value is an outlier (i.e., "error"), and (3) the subsequent removal of this error from the data set. We applied our new error correction method to the HR data of 24 dogs and compared the uncorrected and corrected data, as well as the algorithm-supported visual error correction (AVEC) with the Polar error correction. The results showed that fewer values were identified as errors after AVEC than after the Polar error correction, and that HR strings with deleted values seemed to be closer to the original data than were those with inserted means. We concluded that our method of error correction is more suitable for dog HR and HR variability than is the customized Polar error correction, especially because AVEC decreases the likelihood of Type I errors, preserves the natural variability in HR, and does not lead to a time shift in the data. PMID:25540125
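Step (2) of the method, the algorithm-aided outlier decision, might be sketched as a local-median rule followed by deletion (rather than mean insertion, which the study found distorted the data). The window size and relative threshold below are illustrative placeholders, not the published values:

```python
def correct_hr_errors(hr, window=5, threshold=0.25):
    """Flag a beat as an outlier when it deviates from the local median by
    more than a relative threshold, then delete it. A sketch in the spirit
    of AVEC; window and threshold are assumed values for illustration."""
    half = window // 2
    cleaned = []
    for i, v in enumerate(hr):
        lo, hi = max(0, i - half), min(len(hr), i + half + 1)
        neighbours = sorted(hr[lo:hi])
        med = neighbours[len(neighbours) // 2]   # local median
        if abs(v - med) <= threshold * med:
            cleaned.append(v)                    # keep plausible beats
    return cleaned                               # outliers deleted outright
```

Deleting (rather than replacing with a mean) preserves the natural HR variability, which is the property the study emphasizes.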
A measurement methodology for dynamic angle of sight errors in hardware-in-the-loop simulation
Zhang, Wen-pan; Wu, Jun-hui; Gan, Lin; Zhao, Hong-peng; Liang, Wei-wei
2015-10-01
In order to precisely measure the dynamic angle of sight for hardware-in-the-loop simulation, a dynamic measurement methodology was established and a measurement system was built. The errors and drifts, such as synchronization delay, CCD measurement error and drift, laser-spot error on the diffuse reflection plane and optical-axis drift of the laser, were measured and analyzed. First, by analyzing and measuring the synchronization time between the laser and the control data, an error control method was devised that lowered the synchronization delay to 21 μs. Then, the relationship between the CCD device and the laser spot position was calibrated precisely and fitted by two-dimensional surface fitting. CCD measurement error and drift were controlled below 0.26 mrad. Next, the angular resolution was calculated, and the laser-spot error on the diffuse reflection plane was estimated to be 0.065 mrad. Finally, the optical-axis drift of the laser was analyzed and measured and did not exceed 0.06 mrad. The measurement results indicate that the maximum of the errors and drifts of the measurement methodology is less than 0.275 mrad. The methodology can satisfy the measurement of dynamic angle of sight at higher precision and larger scale.
Lacroix, B.; Martella, T.; Pras, M.; Masson-Fauchier, M. [CEA/DEN/CAD/DEC/SA3C/Legend (France); Fayette, L. [CEA/DEN/CAD/DEC/SA3C/LEMCI (France)
2011-07-01
Non-destructive examinations (NDE) on irradiated PWR fuel rods have been performed since 1992 at the CEA/Cadarache Research Centre. Among the different controls performed, measurement of the zirconia thickness provides useful information on the axial and angular distribution of corrosion down the rod. This is necessary to compare the sensitivity of different cladding types with the creation of zirconia, as well as to detect and measure effects such as local corrosion. A dedicated apparatus based on eddy currents was used to measure the zirconia thicknesses. To verify the accuracy of our measurements, we compared the measurement results with the metallographic measurements of 39 samples. It was observed that the non-destructive measurements always overestimated the thickness of zirconia. The mean value of this systematic error was about 4 μm. We therefore tried to identify the origin of this error. We first observed that the sensor position was crucial. It must be in the exact same position for both the standard (tube section) and the rods. A poorly-positioned sensor on the rod produces overestimated measurement values. Other sources of uncertainty may also explain the difference with the exact values: first, the cladding of the standard was not irradiated. We know that some physical characteristics of cladding change during irradiation, in particular electrical conductivity. We do not know how this affects our measurement. Secondly, the rods still contained some decay heat. Thus, the temperature of the rod cladding could differ from the temperature of the standard. The electrical conductivity of the cladding and thus the eddy current response could be different. The sensor itself could also be affected by the temperature. We have performed several experiments on both heated cladding (not irradiated) and irradiated PWR fuel rods inside the hot cell. Based on the results of these tests and in agreement with our feedback, it was found that the device used in the
Konings, A. G.; Gruber, A.; Mccoll, K. A.; Alemohammad, S. H.; Entekhabi, D.
2015-12-01
Validating large-scale estimates of geophysical variables by comparing them to in situ measurements neglects the fact that these in situ measurements are not generally representative of the larger area. That is, in situ measurements contain some `representativeness error'. They also have their own sensor errors. The naïve approach of characterizing the errors of a remote sensing or modeling dataset by comparison to in situ measurements thus leads to error estimates that are spuriously inflated by the representativeness and other errors in the in situ measurements. Nevertheless, this naïve approach is still very common in the literature. In this work, we introduce an alternative estimator of the large-scale dataset error that explicitly takes into account the fact that the in situ measurements have some unknown error. The performance of the two estimators is then compared in the context of soil moisture datasets under different conditions for the true soil moisture climatology and dataset biases. The new estimator is shown to lead to a more accurate characterization of the dataset errors under the most common conditions. If a third dataset is available, the principles of the triple collocation method can be used to determine the errors of both the large-scale estimates and in situ measurements. However, triple collocation requires that the errors in all datasets are uncorrelated with each other and with the truth. We show that even when the assumptions of triple collocation are violated, a triple collocation-based validation approach may still be more accurate than a naïve comparison to in situ measurements that neglects representativeness errors.
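The triple collocation estimator referred to above can be sketched in a few lines: assuming the three collocated datasets carry additive errors that are mutually uncorrelated and uncorrelated with the truth, each dataset's error variance follows from the pairwise sample covariances. The variable names and noise levels below are illustrative, not from the study:

```python
import numpy as np

def triple_collocation_error_var(x, y, z):
    """Error variances of three collocated datasets, assuming additive
    errors that are mutually uncorrelated and uncorrelated with the truth."""
    q = np.cov(np.vstack([x, y, z]))  # 3x3 sample covariance matrix
    var_x = q[0, 0] - q[0, 1] * q[0, 2] / q[1, 2]
    var_y = q[1, 1] - q[0, 1] * q[1, 2] / q[0, 2]
    var_z = q[2, 2] - q[0, 2] * q[1, 2] / q[0, 1]
    return var_x, var_y, var_z

# Synthetic check: a common "true" signal plus independent noise of known
# standard deviation (0.02, 0.04, 0.03) for the three datasets.
rng = np.random.default_rng(0)
t = rng.normal(0.3, 0.1, 100_000)        # true soil moisture (illustrative)
x = t + rng.normal(0.0, 0.02, t.size)    # e.g. satellite retrieval
y = t + rng.normal(0.0, 0.04, t.size)    # e.g. model estimate
z = t + rng.normal(0.0, 0.03, t.size)    # e.g. in situ sensor
vx, vy, vz = triple_collocation_error_var(x, y, z)
# vx, vy, vz recover approximately 0.02**2, 0.04**2, 0.03**2
```

A naive comparison of x against z would instead attribute both error sources to x, which is exactly the inflation the abstract warns about.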
Compensation method for the alignment angle error in pitch deviation measurement
Liu, Yongsheng; Fang, Suping; Wang, Huiyi; Taguchi, Tetsuya; Takeda, Ryohei
2016-05-01
When measuring the tooth flank of an involute helical gear by gear measuring center (GMC), the alignment angle error of a gear axis, which was caused by the assembly error and manufacturing error of the GMC, will affect the measurement accuracy of pitch deviation of the gear tooth flank. Based on the model of the involute helical gear and the tooth flank measurement theory, a method is proposed to compensate the alignment angle error that is included in the measurement results of pitch deviation, without changing the initial measurement method of the GMC. Simulation experiments are done to verify the compensation method and the results show that after compensation, the alignment angle error of the gear axis included in measurement results of pitch deviation declines significantly, more than 90% of the alignment angle errors are compensated, and the residual alignment angle errors in pitch deviation measurement results are less than 0.1 μm. It shows that the proposed method can improve the measurement accuracy of the GMC when measuring the pitch deviation of involute helical gear.
Correction of motion measurement errors beyond the range resolution of a synthetic aperture radar
Doerry, Armin W.; Heard, Freddie E.; Cordaro, J. Thomas
2008-06-24
Motion measurement errors that extend beyond the range resolution of a synthetic aperture radar (SAR) can be corrected by effectively decreasing the range resolution of the SAR in order to permit measurement of the error. Range profiles can be compared across the slow-time dimension of the input data in order to estimate the error. Once the error has been determined, appropriate frequency and phase correction can be applied to the uncompressed input data, after which range and azimuth compression can be performed to produce a desired SAR image.
Measurement error analysis of Brillouin lidar system using F-P etalon and ICCD
Yao, Yuan; Niu, Qunjie; Liang, Kun
2016-09-01
A Brillouin lidar system using a Fabry-Pérot (F-P) etalon and an Intensified Charge Coupled Device (ICCD) is capable of real-time remote measurement of seawater properties such as temperature. The measurement accuracy is determined by two key parameters, the Brillouin frequency shift and the Brillouin linewidth. Three major errors, namely the laser frequency instability, the calibration error of the F-P etalon and the random shot noise, are discussed. Theoretical analysis combined with simulation results showed that the laser and F-P etalon will cause about 4 MHz of error in both the Brillouin shift and linewidth, and random noise brings more error to the linewidth than to the frequency shift. A comprehensive and comparative analysis of the overall errors under various conditions proved that a colder ocean (10 °C) is more accurately measured with the Brillouin linewidth, and a warmer ocean (30 °C) is better measured with the Brillouin shift.
Errors in anthropometric measurements in neonates and infants
D Harrison
2001-09-01
The accuracy of methods used in Cape Town hospitals and clinics for the measurement of weight, length and age in neonates and infants became suspect during a survey of 12 local authority and 5 private sector clinics in 1994-1995 (Harrison et al., 1998). A descriptive prospective study was carried out to determine the accuracy of these methods in neonates at four maternity hospitals [2 public and 2 private] and in infants at four child health clinics of the Cape Town City Council. The main outcome measures were an assessment of three currently used methods of measuring crown-heel length, namely with a measuring board, a mat and a tape measure; a comparison of weight differences when an infant is fully clothed, naked and in napkin only; and the differences in age estimated by calendar dates and by a specially designed electronic calculator. The results showed that the current methods used to measure infants in Cape Town vary widely from one institution to another. Many measurements are inaccurate and there is a real need for uniformity and accuracy. This can only be implemented by an effective education program to ensure that accurate measurements are used in monitoring the health of young children in Cape Town and elsewhere.
Quantifying Error in Survey Measures of School and Classroom Environments
Schweig, Jonathan David
2014-01-01
Developing indicators that reflect important aspects of school and classroom environments has become central in a nationwide effort to develop comprehensive programs that measure teacher quality and effectiveness. Formulating teacher evaluation policy necessitates accurate and reliable methods for measuring these environmental variables. This…
The effect of measurement error on the dose-response curve.
Yoshimura, I
1990-01-01
In epidemiological studies for an environmental risk assessment, doses are often observed with errors. However, they have received little attention in data analysis. This paper studies the effect of measurement errors on the observed dose-response curve. Under the assumptions of the monotone likelihood ratio on errors and a monotone increasing dose-response curve, it is verified that the slope of the observed dose-response curve is likely to be gentler than the true one. The observed variance...
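The attenuation described here is easy to reproduce numerically. A minimal illustration with a linear true dose-response and normally distributed classical dose error (a simplified setting, not the paper's exact model):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200_000
true_dose = rng.normal(5.0, 1.0, n)                   # true exposure
observed_dose = true_dose + rng.normal(0.0, 1.0, n)   # classical error
response = 2.0 * true_dose + rng.normal(0.0, 0.5, n)  # true slope = 2

true_slope = np.polyfit(true_dose, response, 1)[0]
observed_slope = np.polyfit(observed_dose, response, 1)[0]
# With equal signal and error variance the attenuation factor is
# 1 / (1 + 1) = 0.5, so the observed slope is about 1.0 versus a true 2.0.
```

The observed curve is "gentler than the true one" in exactly the sense the abstract describes: the fitted slope against the error-contaminated dose is roughly half the true slope.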
Design and application of location error teaching aids in measuring and visualization
Yu Fengning; Li Lei; Guo Jian; Mai Songman; Shi Jiashun
2015-01-01
As an abstract concept, 'location error' is considered an important element that is difficult to understand and apply. The paper designs and develops an instrument to measure location error. Location error is affected by the positioning method and the reference selection, so we choose the positioning element by rotating the disk. The tiny movement is transferred by a grating ruler and, through PLC programming, the error is shown on a text display, which also helps students understand the positi...
Shi Qiang Liu; Rong Zhu
2016-01-01
Error compensation of micromachined inertial measurement units (MIMU) is essential in practical applications. This paper presents a new compensation method using neural-network-based identification for MIMU, which solves the universal problems of cross-coupling, misalignment, eccentricity, and other deterministic errors existing in a three-dimensional integrated system. Using a neural network to model a complex multivariate and nonlinear coupling system, the errors could be readily...
Ackerman-Alexeeff, Stacey Elizabeth
2013-01-01
Measurement error is an important issue in studies of environmental epidemiology. We considered the effects of measurement error in environmental covariates in several important settings affecting current public health research. Throughout this dissertation, we investigate the impacts of measurement error and consider statistical methodology to fix that error.
Tilt error in cryospheric surface radiation measurements at high latitudes: a model study
Bogren, Wiley Steven; Faulkner Burkhart, John; Kylling, Arve
2016-03-01
We have evaluated the magnitude and makeup of error in cryospheric radiation observations due to small sensor misalignment in in situ measurements of solar irradiance. This error is examined through simulation of diffuse and direct irradiance arriving at a detector with a cosine-response fore optic. Emphasis is placed on assessing total error over the solar shortwave spectrum from 250 to 4500 nm, as well as supporting investigation over other relevant shortwave spectral ranges. The total measurement error introduced by sensor tilt is dominated by the direct component. For a typical high-latitude albedo measurement with a solar zenith angle of 60°, a sensor tilted by 1, 3, and 5° can, respectively, introduce up to 2.7, 8.1, and 13.5% error into the measured irradiance and similar errors in the derived albedo. Depending on the daily range of solar azimuth and zenith angles, significant measurement error can also persist in integrated daily irradiance and albedo. Simulations including a cloud layer demonstrate decreasing tilt error with increasing cloud optical depth.
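For the direct-beam component, the leading-order tilt error follows from the cosine response alone: a sensor tilted by δ toward the sun at solar zenith angle θ reads cos(θ − δ) instead of cos(θ). A sketch of that geometry (direct beam only; the ~2.7% figure quoted above also folds in the tilt-insensitive diffuse component):

```python
import math

def direct_tilt_error(sza_deg, tilt_deg):
    """Relative error in the direct-beam irradiance when the sensor is
    tilted by tilt_deg toward the sun at solar zenith angle sza_deg."""
    true = math.cos(math.radians(sza_deg))
    measured = math.cos(math.radians(sza_deg - tilt_deg))
    return measured / true - 1.0

# At a 60 deg solar zenith angle, a 1 deg tilt toward the sun inflates
# the direct component by roughly 3%.
err = direct_tilt_error(60.0, 1.0)
```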
A simple approximation to the bivariate normal distribution with large correlation coefficient
Albers, Willem; Kallenberg, Wilbert C.M.
1994-01-01
The bivariate normal distribution function is approximated with emphasis on situations where the correlation coefficient is large. The high accuracy of the approximation is illustrated by numerical examples. Moreover, exact upper and lower bounds are presented as well as asymptotic results on the error terms.
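One standard route to such approximations is Plackett's identity, which reduces the bivariate normal distribution function to a one-dimensional integral over the correlation; at h = k = 0 it yields the closed form 1/4 + arcsin(ρ)/(2π), convenient for checking accuracy when ρ is large. A numerical sketch of that identity (not the authors' specific approximation):

```python
import numpy as np

def bvn_cdf(h, k, rho, n=20_001):
    """P(X <= h, Y <= k) for a standard bivariate normal via Plackett's
    identity: Phi2(h, k, rho) = Phi(h)*Phi(k) + (1/2pi) * int_0^rho
    exp(-(h^2 - 2*r*h*k + k^2) / (2*(1 - r^2))) / sqrt(1 - r^2) dr."""
    from math import erf, sqrt, pi
    phi = lambda x: 0.5 * (1.0 + erf(x / sqrt(2.0)))
    r = np.linspace(0.0, rho, n)
    f = np.exp(-(h * h - 2.0 * r * h * k + k * k) / (2.0 * (1.0 - r ** 2)))
    f /= np.sqrt(1.0 - r ** 2)
    # Trapezoidal rule over the correlation variable.
    integral = float(np.sum((f[1:] + f[:-1]) * np.diff(r)) / 2.0)
    return phi(h) * phi(k) + integral / (2.0 * pi)

# Closed-form check at h = k = 0: Phi2 = 1/4 + arcsin(rho)/(2*pi).
val = bvn_cdf(0.0, 0.0, 0.95)
exact = 0.25 + np.arcsin(0.95) / (2.0 * np.pi)
```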
The traditional method for measuring the velocity and the angular vibration in the shaft of rotating machines using incremental encoders is based on counting the pulses at given time intervals. This method is generically called the time interval measurement system (TIMS). A variant of this method that we have developed in this work consists of measuring the corresponding time of each pulse from the encoder and sampling the signal by means of an A/D converter as if it were an analog signal, that is to say, in discrete time. For this reason, we have denominated this method as the discrete time interval measurement system (DTIMS). This measurement system provides a substantial improvement in the precision and frequency resolution compared with the traditional method of counting pulses. In addition, this method permits modification of the width of some pulses in order to obtain a mark-phase on every lap. This paper explains the theoretical fundamentals of the DTIMS and its application for measuring the angular vibrations of rotating machines. It also displays the required relationship between the sampling rate of the signal, the number of pulses of the encoder and the rotating velocity in order to obtain the required resolution and to delimit the methodological errors in the measurement
Measurement and analysis of typical motion error traces from a circular test
2008-01-01
The circular test provides a rapid and efficient way of measuring the contouring accuracy of a machine tool. To get the actual point coordinate in the work plane, an improved measurement instrument, a new ball bar test system, is presented in this paper to identify both the radial error and the rotation angle error when the machine is manipulated to move in circular traces. Based on the measured circular error, a combination of Fourier components is chosen to represent the systematic form error that fluctuates in the radial direction. The typical motion errors represented by the corresponding Fourier components can thus be identified. The values for machine compensation can be calculated and adjusted until the desired results are achieved.
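The Fourier representation described above amounts to a discrete Fourier transform of the radial deviation sampled against spindle angle, with individual harmonics mapping to specific error sources (for instance, the first harmonic to eccentricity and the second to squareness error). A sketch with synthetic data; the harmonic-to-error mapping shown is illustrative:

```python
import numpy as np

# Radial deviation sampled at n equally spaced angles around the circle.
n = 360
theta = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)

# Synthetic trace, in micrometres: 5 um eccentricity-like 1st harmonic,
# 2 um squareness-like 2nd harmonic, plus measurement noise.
rng = np.random.default_rng(2)
radial_error = (5.0 * np.cos(theta) + 2.0 * np.sin(2.0 * theta)
                + rng.normal(0.0, 0.1, n))

spectrum = np.fft.rfft(radial_error)
amplitudes = 2.0 * np.abs(spectrum) / n  # one-sided harmonic amplitudes
# amplitudes[1] ~ 5.0 and amplitudes[2] ~ 2.0 identify the two components
```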
Comparing objective and subjective error measures for color constancy
M.P. Lucassen; A. Gijsenij; T. Gevers
2008-01-01
We compare an objective and a subjective performance measure for color constancy algorithms. Eight hyper-spectral images were rendered under a neutral reference illuminant and four chromatic illuminants (Red, Green, Yellow, Blue). The scenes rendered under the chromatic illuminants were color correc
P. Zimourtopoulos
2007-06-01
The objective was to study uncertainty in antenna input impedance resulting from full one-port Vector Network Analyzer (VNA) measurements. The VNA process equation in the reflection coefficient Γ of a load, its measurement m and three errors Es, determinable from three standard loads and their measurements, was considered. Differentials were selected to represent measurement inaccuracies and load uncertainties (Differential Errors). The differential operator was applied on the process equation and the total differential error dΓ for any unknown load (Device Under Test, DUT) was expressed in terms of dEs and dm, without any simplification. Consequently, the differential error of input impedance Z (or any other physical quantity differentiably dependent on Γ) is expressible. Furthermore, to express precisely a comparison relation between complex differential errors, the geometric Differential Error Region and its Differential Error Intervals were defined. Practical results are presented for an indoor UHF ground-plane antenna in contrast with a common 50 Ω DC resistor inside an aluminum box. These two built, unshielded and shielded, DUTs were tested against frequency under different system configurations and measurement considerations. Intermediate results for Es and dEs characterize the measurement system itself. A number of calculations and illustrations demonstrate the application of the method.
Andrew Phiri
2013-01-01
Is the SARB's inflation target of 3-6% compatible with the 6% economic growth objective set by ASGISA? Estimations of inflation-growth bivariate Threshold Vector Autoregressive and corresponding bivariate Threshold Vector Error Correction (BTVEC-BTVAR) econometric models for sub-periods, coupled with the South African inflation-growth experience between 1960 and 2010, suggest optimal inflation-growth combinations for South African data presenting a two-fold proposition. Firstly, for the pe...
Low-frequency Periodic Error Identification and Compensation for Star Tracker Attitude Measurement
WANG Jiongqi; XIONG Kai; ZHOU Haiyin
2012-01-01
The low-frequency periodic error of a star tracker is one of the most critical problems for high-accuracy satellite attitude determination. In this paper an approach is proposed to identify and compensate the low-frequency periodic error of a star tracker in attitude measurement. The analytical expression between the estimated gyro drift and the low-frequency periodic error of the star tracker is derived first. Then the low-frequency periodic error, which can be expressed by a Fourier series, is identified from the frequency spectrum of the estimated gyro drift according to the solution of the first step. Furthermore, a compensation model for the low-frequency periodic error is established based on the identified parameters to improve the attitude determination accuracy. Finally, promising simulated experimental results demonstrate the validity and effectiveness of the proposed method. The periodic error for attitude determination is essentially eliminated and the estimation precision is greatly improved.
Conservative error measures for classical and quantum metrology
Tsang, Mankei
2016-01-01
The classical and quantum Cramér-Rao bounds have become standard measures of parameter-estimation uncertainty for a variety of sensing and imaging applications in recent years, but their assumption of unbiased estimators potentially undermines their significance as fundamental limits. In this note we advocate a Bayesian approach with Van Trees inequalities and worst-case priors to overcome the problem. Applications to superlocalization and gravitational-wave parameter estimation are discussed.
Some Physical Errors of X-Ray Texture Measurements
Perlovich, Yu.
1996-01-01
Typical experimental situations of texture measurements are demonstrated involving failure to take into account some physical factors responsible for an inadequate texture description or imaginary texture changes. Among these factors are inevitable texture inhomogeneities, the inhomogeneous distribution of defects in deformed metal materials and the resulting inhomogeneous lattice perfection after heat treatment. It is shown that a formal approach to texture analysis does not allow one to reveal...
Buscha, Franz; Conte, Anna
2010-01-01
This paper investigates the relationship between educational attainment and truancy. Using data from the Youth Cohort Study of England and Wales, we estimate the causal impact that truancy has on educational attainment at age 16. A complication is that both truancy and attainment are measured as ordered responses, requiring a bivariate ordered probit model to account for the potential endogeneity of truancy. Furthermore, we extend the 'naïve' bivariate ordered probit estimator to include mixed ef...
Incomplete Bivariate Fibonacci and Lucas -Polynomials
Dursun Tasci
2012-01-01
We define the incomplete bivariate Fibonacci and Lucas -polynomials. In the case =1, =1, we obtain the incomplete Fibonacci and Lucas -numbers. If =2, =1, we have the incomplete Pell and Pell-Lucas -numbers. On choosing =1, =2, we get the incomplete generalized Jacobsthal number and besides for =1 the incomplete generalized Jacobsthal-Lucas numbers. In the case =1, =1, =1, we have the incomplete Fibonacci and Lucas numbers. If =1, =1, =1, =⌊(−1/(+1⌋, we obtain the Fibonacci and Lucas numbers. Also generating function and properties of the incomplete bivariate Fibonacci and Lucas -polynomials are given.
A simulation study of lognormal measurement error effect: Discrimination problem of radon and thoron
Several case-control studies have indicated an increased risk of lung cancer linked to indoor radon exposure. For a precise evaluation of radon-related lung cancer risk, however, the contribution of thoron should be considered. There are many studies in which passive-type radon detectors are used without thoron discrimination techniques. These passive-type radon detectors may therefore be strongly affected by the presence of thoron if they are installed near a wall or floor as potential thoron sources. The thoron effect we consider here is an increase of the radon signal in radon detectors without the discrimination technique. This problem is classified as part of the measurement error problem in the statistical literature, and the thoron is considered a possible source of measurement error. In general, concentrations of radon and thoron follow a lognormal distribution. The effects of measurement error following a normal distribution have been well studied, but there are few studies on measurement error following a non-normal distribution. In order to evaluate the effect of measurement error due to thoron, we conducted a simulation study. We assumed a case-control study of lung cancer and indoor radon with hypothetical data where radon concentrations were measured with and without discrimination of thoron concentrations. We also assumed that logistic regression was used, and that the concentrations of radon and thoron were correlated, following a lognormal distribution. The thoron disturbance in radon measurement resulted in an approximately 90% downward bias in the effect of radon, and this bias was almost constant when the parameters were varied. The downward bias was consistent with results from previous studies taking measurement error following a normal distribution into account. It was confirmed that the effect of lognormal measurement error is concordant with the normal measurement error in this case. (author)
From Measurements Errors to a New Strain Gauge Design
Mikkelsen, Lars Pilgaard; Zike, Sanita; Salviato, Marco;
2015-01-01
Significant over-prediction of the material stiffness, in the order of 1-10% for polymer-based composites, has been observed experimentally and determined numerically when using strain gauges for strain measurements instead of non-contact methods such as digital image correlation or less stiff methods such as clip-on extensometers. In the present work, this has been quantified through a numerical study for three different strain gauges. In addition, a significant effect of a thin polymer coating or biaxial layer on the error when using strain gauges has been observed. An error which can be...
Bateson, Thomas F; Wright, J Michael
2010-08-01
Environmental epidemiologic studies are often hierarchical in nature if they estimate individuals' personal exposures using ambient metrics. Local samples are indirect surrogate measures of true local pollutant concentrations, which estimate true personal exposures. These ambient metrics include classical-type nondifferential measurement error. The authors simulated subjects' true exposures and their corresponding surrogate exposures as the mean of local samples and assessed the amount of bias attributable to classical and Berkson measurement error on odds ratios, assuming that the logit of risk depends on true individual-level exposure. The authors calibrated surrogate exposures using scalar transformation functions based on observed within- and between-locality variances and compared regression-calibrated results with naive results using surrogate exposures. The authors further assessed the performance of regression calibration in the presence of Berkson-type error. Following calibration, bias due to classical-type measurement error, resulting in as much as 50% attenuation in naive regression estimates, was eliminated. Berkson-type error appeared to attenuate logistic regression results by less than 1%. This regression calibration method reduces the effects of classical measurement error that are typical of epidemiologic studies using multiple local samples as indirect surrogates for unobserved individual exposures. Berkson-type error did not alter the performance of regression calibration. This regression calibration method does not require a supplemental validation study to compute an attenuation factor. PMID:20573838
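The scalar calibration based on within- and between-locality variances can be sketched as follows: with n local samples per subject, the reliability factor is λ = σ²_between / (σ²_between + σ²_within / n), and dividing the naive estimate by λ undoes the classical-error attenuation. A linear-model sketch of the same idea (the linear outcome and all names are illustrative, not the authors' logistic setting):

```python
import numpy as np

rng = np.random.default_rng(3)
n_subjects, n_samples = 50_000, 4
sigma_between, sigma_within = 1.0, 2.0

true_exposure = rng.normal(10.0, sigma_between, n_subjects)
# Surrogate exposure: mean of a few noisy local samples per subject.
samples = true_exposure[:, None] + rng.normal(
    0.0, sigma_within, (n_subjects, n_samples))
surrogate = samples.mean(axis=1)
outcome = 0.5 * true_exposure + rng.normal(0.0, 1.0, n_subjects)

naive = np.polyfit(surrogate, outcome, 1)[0]
lam = sigma_between**2 / (sigma_between**2 + sigma_within**2 / n_samples)
calibrated = naive / lam
# Here lam = 0.5, so naive ~ 0.25 while calibrated recovers the true 0.5.
```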
Statistical estimation of flaw size measurement errors for steam generator inspection tools
Accurate sizing of flaws in steam generator tubes is a critical component of a maintenance and inspection program. Knowledge of the measurement error of an inspection tool is important for determining flaw severity, assessing the tool vendor's claimed accuracy, as a component of the tool's development program, as a reference for other inspection tools or candidates, and for probability of detection assessments. Often, tool reporting of flaw sizes is compared with data obtained from a reference tool or from a destructive test. The reference tool or destructive test data are generally assumed free from measurement errors, but in a true sense they may not be so. It is therefore an advantageous and useful exercise to determine individually the measurement errors of each measuring system used. Statistical procedures commonly used to assess the quality of inspection tools estimate the total scatter between any two instruments, neglecting repeated measurements or situations with more than two tools. One possible advancement is the statistical decomposition of the total scatter given by the root mean square error/differential into measurement error components to be associated with each of the instruments used. Important recent developments, such as Bayesian estimators of measurement errors, are included in this expository article to familiarize researchers working in the nuclear industry with the state of the art. (author)
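One classical instance of the decomposition described here is Grubbs' estimator: when two instruments measure the same set of flaws with independent additive errors, each instrument's error variance is its own sample variance minus the between-instrument covariance. A sketch of that approach (Grubbs' method is one standard decomposition, not necessarily the article's Bayesian estimator; the data are synthetic):

```python
import numpy as np

def grubbs_error_vars(x1, x2):
    """Per-instrument error variances for two instruments measuring the
    same items, assuming independent additive errors, so that
    var(x_i) = var(truth) + var(e_i) and cov(x1, x2) = var(truth)."""
    s11 = np.var(x1, ddof=1)
    s22 = np.var(x2, ddof=1)
    s12 = np.cov(x1, x2)[0, 1]
    return s11 - s12, s22 - s12

rng = np.random.default_rng(4)
true_depth = rng.uniform(0.2, 1.0, 100_000)  # flaw sizes (illustrative units)
tool = true_depth + rng.normal(0.0, 0.05, true_depth.size)
metallography = true_depth + rng.normal(0.0, 0.02, true_depth.size)
v_tool, v_ref = grubbs_error_vars(tool, metallography)
# v_tool ~ 0.05**2 and v_ref ~ 0.02**2: even the "reference" has error
```

This recovers separate error variances without assuming either system is error-free, which is exactly the point the abstract makes about destructive-test references.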
Local and omnibus goodness-of-fit tests in classical measurement error models
Ma, Yanyuan
2010-09-14
We consider functional measurement error models, i.e. models where covariates are measured with error and yet no distributional assumptions are made about the mismeasured variable. We propose and study a score-type local test and an orthogonal series-based, omnibus goodness-of-fit test in this context, where no likelihood function is available or calculated; i.e., all the tests are proposed in the semiparametric model framework. We demonstrate that our tests have optimality properties and computational advantages that are similar to those of the classical score tests in the parametric model framework. The test procedures are applicable to several semiparametric extensions of measurement error models, including when the measurement error distribution is estimated non-parametrically as well as for generalized partially linear models. The performance of the local score-type and omnibus goodness-of-fit tests is demonstrated through simulation studies and analysis of a nutrition data set.
The influence of temperature and concentration measurement errors on the experimental determination of mass and heat transfer coefficients is analysed. The calculation model for the coefficients and for the measurement errors, the experimental data obtained on the water isotopic distillation plant, and the results of the determinations are presented. The experimental distillation column, with an inner diameter of 108 mm, has been equipped with B7 structured packing over a height of 14 m. This column offers the possibility to measure vapour temperature and isotopic concentration at 12 locations. For error propagation analysis, the parameters measured for each packing bed, namely the temperature and isotopic concentration of the vapour, were used. A relation for the calculation of the maximum error of the experimental determinations of mass and heat transport coefficients is given. The experimental data emphasize the 'ending effects' and regions with bad thermal insulation. (author)
Identification in a Generalization of Bivariate Probit Models with Endogenous Regressors
Sukjin Han; Edward J. Vytlacil
2013-01-01
This paper provides identification results for a class of models specified by a triangular system of two equations with binary endogenous variables. The joint distribution of the latent error terms is specified through a parametric copula structure, including the normal copula as a special case, while the marginal distributions of the latent error terms are allowed to be arbitrary but known. This class of models includes bivariate probit models as a special case. The paper demonstrates that a...
New measurements of coil-related magnetic field errors on DIII-D
Non-axisymmetric (error) fields in tokamaks lead to a number of instabilities including so-called locked modes [J.T. Scoville, R.J. La Haye, Nucl. Fusion 43 (4) (2003) 250-257] and resistive wall modes (RWM) [A.M. Garofab, R.J. La Haye, J.T. Scoville, Nucl. Fusion 42 (11) (2002) 1335-1339] and subsequent loss of confinement. They can also cause errors in magnetic measurements made by point probes near the plasma edge, error in measurements made by magnetic field sensitive diagnostics, and they violate the assumption of axisymmetry in the analysis of data. Most notably, the sources of these error fields include shifts and tilts in the coil positions from ideal, coil leads, and nearby ferromagnetic materials excited by the coils. New measurements have been made of the n=1 coil-related field errors in the DIII-D plasma chamber. These measurements indicate that the errors due to the plasma shaping coil system are smaller than previously reported and no additional sources of anomalous fields were identified. Thus they fail to support the suggestion of an additional significant error field suggested by locked mode and RWM experiments
Steven D. Levitt
1995-01-01
A strong, negative empirical correlation exists between arrest rates and reported crime rates. While this relationship has often been interpreted as support for the deterrence hypothesis, it is equally consistent with incapacitation effects, and/or a spurious correlation that would be induced by measurement error in reported crime rates. This paper attempts to discriminate between deterrence, incapacitation, and measurement error as explanations for the empirical relationship between arrest r...
Does adjustment for measurement error induce positive bias if there is no true association?
Burstyn, Igor
2009-01-01
This article is a response to an off-the-record discussion that I had at an international meeting of epidemiologists. It centered on a concern, perhaps widely spread, that measurement error adjustment methods can induce positive bias in results of epidemiological studies when there is no true association. I trace the possible history of this supposition and test it in a simulation study of both continuous and binary health outcomes under a classical multiplicative measurement error model. A B...
The impact of measurement errors in the identification of regulatory networks
Sato João R
2009-12-01
Background: There are several studies in the literature depicting measurement error in gene expression data, and also several others about regulatory network models. However, only a small fraction describes the combination of measurement error with mathematical regulatory network models and shows how to identify these networks under different rates of noise. Results: This article investigates the effects of measurement error on the estimation of the parameters in regulatory networks. Simulation studies indicate that, in both time series (dependent) and non-time series (independent) data, measurement error strongly affects the estimated parameters of the regulatory network models, biasing them as predicted by the theory. Moreover, when testing the parameters of the regulatory network models, p-values computed by ignoring the measurement error are not reliable, since the rate of false positives is not controlled under the null hypothesis. In order to overcome these problems, we present an improved version of the Ordinary Least Squares estimator for independent (regression) and dependent (autoregressive) models whose variables are subject to noise. Measurement error estimation procedures for microarrays are also described. Simulation results also show that both corrected methods perform better than the standard ones (i.e., ignoring measurement error). The proposed methodologies are illustrated using microarray data from lung cancer patients and mouse liver time series data. Conclusions: Measurement error dangerously affects the identification of regulatory network models, so it must be reduced or taken into account in order to avoid erroneous conclusions. This could be one of the reasons for the high biological false positive rates identified in actual regulatory network models.
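The bias-and-correction idea behind such adjusted estimators can be illustrated with the classical attenuation result: under additive, independent measurement error, the ordinary least squares slope shrinks toward zero by the reliability ratio λ = σ²ₓ/(σ²ₓ + σ²ᵤ), and dividing by an estimate of λ recovers the true coefficient. The sketch below is a minimal illustration of that principle, not the authors' estimator; the error variance is assumed known.

```python
import numpy as np

rng = np.random.default_rng(0)
n, beta = 100_000, 2.0
x = rng.normal(0, 1, n)            # true regressor, variance 1
w = x + rng.normal(0, 0.5, n)      # observed with classical error, var_u = 0.25
y = beta * x + rng.normal(0, 1, n)

# Naive OLS on the error-prone regressor is attenuated toward zero.
b_naive = np.cov(w, y)[0, 1] / np.var(w)

# Correct using the known (or separately estimated) error variance.
var_u = 0.25
lam = (np.var(w) - var_u) / np.var(w)   # reliability ratio estimate, ~0.8 here
b_corrected = b_naive / lam

print(round(b_naive, 2), round(b_corrected, 2))  # ~1.6 vs ~2.0
```

With var(x) = 1 and var_u = 0.25 the theoretical attenuation factor is 0.8, so the naive slope settles near 1.6 while the corrected slope recovers the true value of 2.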
Error Analysis and Measurement Uncertainty for a Fiber Grating Strain-Temperature Sensor
Jian-Neng Wang; Jaw-Luen Tang
2010-01-01
A fiber grating sensor capable of distinguishing between temperature and strain, using a reference and a dual-wavelength fiber Bragg grating, is presented. Error analysis and measurement uncertainty for this sensor are studied theoretically and experimentally. The measured root mean squared errors for temperature T and strain ε were estimated to be 0.13 °C and 6 με, respectively. The maximum errors for temperature and strain were calculated as 0.00155 T + 2.90 × 10−6 ε and 3.59 × 10−5 ε + 0.0...
Carroll, Raymond J.
2010-05-01
This paper considers identification and estimation of a general nonlinear Errors-in-Variables (EIV) model using two samples. Both samples consist of a dependent variable, some error-free covariates, and an error-prone covariate, for which the measurement error has unknown distribution and could be arbitrarily correlated with the latent true values; and neither sample contains an accurate measurement of the corresponding true variable. We assume that the regression model of interest - the conditional distribution of the dependent variable given the latent true covariate and the error-free covariates - is the same in both samples, but the distributions of the latent true covariates vary with observed error-free discrete covariates. We first show that the general latent nonlinear model is nonparametrically identified using the two samples when both could have nonclassical errors, without either instrumental variables or independence between the two samples. When the two samples are independent and the nonlinear regression model is parameterized, we propose sieve Quasi Maximum Likelihood Estimation (Q-MLE) for the parameter of interest, and establish its root-n consistency and asymptotic normality under possible misspecification, and its semiparametric efficiency under correct specification, with easily estimated standard errors. A Monte Carlo simulation and a data application are presented to show the power of the approach.
COMPENSATION OF MEASUREMENT ERRORS WHEN REDUCING LINEAR DIMENSIONS OF THE KELVIN PROBE
A. K. Tyavlovsky
2013-01-01
The study is based on results of modeling of a measurement circuit containing a vibrating-plate capacitor, using a complex-harmonic analysis technique. The low value of the normalized frequency of a small-sized scanning Kelvin probe leads to a high distortion factor of the probe's measurement signal, which in turn leads to high measurement errors. The way to lower the measurement errors is to register the measurement signal at its second harmonic and to control the probe-to-sample gap by monitoring the ratio between the second and first harmonics' amplitudes.
COMPENSATION OF MEASUREMENT ERRORS WHEN REDUCING LINEAR DIMENSIONS OF THE KELVIN PROBE
Tyavlovsky, A. K.; A. L. Zharin
2015-01-01
The study is based on results of modeling of a measurement circuit containing a vibrating-plate capacitor, using a complex-harmonic analysis technique. The low value of the normalized frequency of a small-sized scanning Kelvin probe leads to a high distortion factor of the probe's measurement signal, which in turn leads to high measurement errors. The way to lower the measurement errors is to register the measurement signal at its second harmonic and to control the probe-to-sample gap by monitoring the ratio between the se...
Bivariate dynamic probit models for panel data
Alfonso Miranda
2010-01-01
In this talk, I will discuss the main methodological features of the bivariate dynamic probit model for panel data. I will present an example using simulated data, giving special emphasis to the initial conditions problem in dynamic models and the difference between true and spurious state dependence. The model is fit by maximum simulated likelihood.
A truncated bivariate inverted Dirichlet distribution
Saralees Nadarajah
2013-05-01
A truncated version of the bivariate inverted Dirichlet distribution is introduced. Unlike the inverted Dirichlet distribution, it possesses finite moments of all orders and could therefore be a better model for certain practical situations. One such situation is discussed. The moments, maximum likelihood estimators and the Fisher information matrix for the truncated distribution are derived.
The Resultant of an Unmixed Bivariate System
Khetan, Amit
2002-01-01
This paper gives an explicit method for computing the resultant of any sparse unmixed bivariate system with given support. We construct square matrices whose determinant is exactly the resultant. The matrices constructed are of hybrid Sylvester and Bézout type. We make use of the exterior algebra techniques of Eisenbud, Fløystad, and Schreyer.
Five-Parameter Bivariate Probability Distribution
Tubbs, J.; Brewer, D.; Smith, O. W.
1986-01-01
This NASA technical memorandum presents four papers about the five-parameter bivariate gamma class of probability distributions. With some overlap of subject matter, the papers address different aspects of the theory of these distributions and their use in forming statistical models of such phenomena as wind gusts. The class provides acceptable results for defining constraints in problems of designing aircraft and spacecraft to withstand large wind-gust loads.
BIVARIATE REAL-VALUED ORTHOGONAL PERIODIC WAVELETS
Qiang Li; Xuezhang Liang
2005-01-01
In this paper, we construct a kind of bivariate real-valued orthogonal periodic wavelet. The corresponding decomposition and reconstruction algorithms involve only 8 terms each, which makes them very simple in practical computation. Moreover, the relation between periodic wavelets and Fourier series is also discussed.
LU Xiaoxu; ZHONG Liyun; ZHANG Yimo
2007-01-01
Phase-shifting measurement and its error estimation method were studied according to the holographic principle. A function of synchronous superposition of the object complex amplitude reconstructed from N-step phase-shifting through one integral period (the N-step phase-shifting function for short) was proposed. In N-step phase-shifting measurement, the interferograms are seen as a series of in-line holograms and the reference beam is an ideal parallel plane wave, so the N-step phase-shifting function can be obtained by multiplying the interferogram by the original reference wave. In ideal conditions, the proposed method is a kind of synchronous superposition algorithm in which the complex amplitude is separated, measured and superposed. When error exists in the measurement, the result of the N-step phase-shifting function is the optimal expected value of the least-squares fitting method. In the above method, the (N+1)-step phase-shifting function can be obtained from the N-step phase-shifting function. It shows that the N-step phase-shifting function can be separated into two parts: the ideal N-step phase-shifting function and its errors. The phase-shifting errors in N-step phase-shifting phase measurement can be treated the same as the relative errors of amplitude and intensity in terms of the (N+1)-step phase-shifting function. The difficulties of error estimation in phase-shifting phase measurement are reduced by this error estimation method. Meanwhile, the maximum error estimation method of phase-shifting phase measurement and its formula are proposed.
Design and application of location error teaching aids in measuring and visualization
Yu Fengning
2015-01-01
As an abstract concept, 'location error' is considered an important element that is very difficult to understand and apply. This paper designs and develops an instrument to measure location error. Location error is affected by the positioning method and the reference selection, so we choose the positioning element by rotating a disk. The tiny movement is transferred by a grating ruler and, through PLC programming, the error is shown on a text display, which also helps students understand the positioning principle and related concepts of location error. After comparing measurement results with theoretical calculations and analyzing the measurement accuracy, the paper concludes that the teaching aid is reliable and of high value for promotion.
An AFM-based methodology for measuring axial and radial error motions of spindles
This paper presents a novel atomic force microscopy (AFM)-based methodology for measurement of axial and radial error motions of a high precision spindle. Based on a modified commercial AFM system, the AFM tip is employed as a cutting tool by which nano-grooves are scratched on a flat surface with the rotation of the spindle. By extracting the radial motion data of the spindle from the scratched nano-grooves, the radial error motion of the spindle can be calculated after subtracting the tilting errors from the original measurement data. Through recording the variation of the PZT displacement in the Z direction in AFM tapping mode during the spindle rotation, the axial error motion of the spindle can be obtained. Moreover the effects of the nano-scratching parameters on the scratched grooves, the tilting error removal method for both conditions and the method of data extraction from the scratched groove depth are studied in detail. The axial error motion of 124 nm and the radial error motion of 279 nm of a commercial high precision air bearing spindle are achieved by this novel method, which are comparable with the values provided by the manufacturer, verifying this method. This approach does not need an expensive standard part as in most conventional measurement approaches. Moreover, the axial and radial error motions of the spindle can both be obtained, indicating that this is a potential means of measuring the error motions of the high precision moving parts of ultra-precision machine tools in the future. (paper)
Shi Lijuan; Chen Jinying; Li Zhaokun; Bian Huamei
2016-01-01
This paper measures the twenty-one geometric errors of a numerical control machining center, identified parametrically with a laser interferometer. The main contents illustrate the measurement system, the measurement model and some testing results combined with specific experimental conditions, and at the same time provide a particular reference value for the detection of the geometric precision of numerical control machine tools (NC machine tools).
Shi Lijuan
2016-01-01
This paper measures the twenty-one geometric errors of a numerical control machining center, identified parametrically with a laser interferometer. The main contents illustrate the measurement system, the measurement model and some testing results combined with specific experimental conditions, and at the same time provide a particular reference value for the detection of the geometric precision of numerical control machine tools (NC machine tools).
Bivariate Poisson and Diagonal Inflated Bivariate Poisson Regression Models in R
Dimitris Karlis; Ioannis Ntzoufras
2005-01-01
In this paper we present an R package called bivpois for maximum likelihood estimation of the parameters of bivariate and diagonal inflated bivariate Poisson regression models. An Expectation-Maximization (EM) algorithm is implemented. Inflated models allow for modelling both over-dispersion (or under-dispersion) and negative correlation and thus they are appropriate for a wide range of applications. Extensions of the algorithms for several other models are also discussed. Detailed guidance a...
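The bivariate Poisson model that bivpois fits is commonly constructed by trivariate reduction: with independent Poisson variables X₀ ~ Poisson(λ₀), X₁ ~ Poisson(λ₁), X₂ ~ Poisson(λ₂), one sets X = X₁ + X₀ and Y = X₂ + X₀, so that Cov(X, Y) = λ₀ (hence only non-negative correlation, which is what the inflated extensions relax). A simulation sketch of that construction (not the package's internals):

```python
import numpy as np

rng = np.random.default_rng(1)
lam1, lam2, lam0 = 2.0, 3.0, 1.5
n = 200_000

z0 = rng.poisson(lam0, n)        # shared component, induces the covariance
x = rng.poisson(lam1, n) + z0    # X ~ Poisson(lam1 + lam0) marginally
y = rng.poisson(lam2, n) + z0    # Y ~ Poisson(lam2 + lam0) marginally

print(x.mean(), y.mean())        # ~3.5 and ~4.5 (lam1 + lam0, lam2 + lam0)
print(np.cov(x, y)[0, 1])        # ~1.5 (= lam0)
```

The shared term z0 is what the EM algorithm treats as latent data when maximizing the bivariate Poisson likelihood.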
When we try to obtain information about a quantum system, we need to perform a measurement on the system. The measurement process causes an unavoidable state change. Heisenberg discussed a thought experiment on the position measurement of a particle using a gamma-ray microscope, and found a trade-off relation between the error of the measured position and the disturbance in the momentum caused by the measurement process. The trade-off relation epitomizes the complementarity in quantum measurements: we cannot perform a measurement of an observable without causing disturbance in its canonically conjugate observable. However, at the time Heisenberg found the complementarity, quantum measurement theory was not yet established, and Kennard and Robertson's inequality was erroneously interpreted as a mathematical formulation of the complementarity. Kennard and Robertson's inequality actually implies the indeterminacy of the quantum state: non-commuting observables cannot have definite values simultaneously. However, it reflects the inherent nature of a quantum state alone, and does not concern any trade-off relation between the error and disturbance in the measurement process. In this talk, we report a resolution to the complementarity in quantum measurements. First, we find that it is necessary to involve the estimation process from the outcome of the measurement in quantifying the error and disturbance in the quantum measurement. We clarify the implicitly involved estimation process in Heisenberg's gamma-ray microscope and other measurement schemes, and formulate the error and disturbance for an arbitrary quantum measurement using quantum estimation theory. The error and disturbance are defined in terms of the Fisher information, which gives the upper bound of the accuracy of the estimation. Second, we obtain uncertainty relations between the measurement errors of two observables [1], and between the error and disturbance in the
The Minkowski dimension of the bivariate fractal interpolation surfaces
We present a new construction of continuous bivariate fractal interpolation surface for every set of data. Furthermore, we generalize this construction to higher dimensions. Exact values for the Minkowski dimension of the bivariate fractal interpolation surfaces are obtained
On some properties on bivariate Fibonacci and Lucas polynomials
Belbachir, Hacène; Bencherif, Farid
2007-01-01
In this paper we generalize to bivariate polynomials of Fibonacci and Lucas, properties obtained for Chebyshev polynomials. We prove that the coordinates of the bivariate polynomials over appropriate basis are families of integers satisfying remarkable recurrence relations.
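One common convention for these polynomials (an assumption here, since the paper's exact normalization is not quoted) is F₀ = 0, F₁ = 1, Fₙ(x,y) = x·Fₙ₋₁(x,y) + y·Fₙ₋₂(x,y) for the bivariate Fibonacci family, and L₀ = 2, L₁ = x with the same recurrence for the Lucas family; at x = y = 1 they reduce to the ordinary Fibonacci and Lucas numbers. A sketch evaluating them at a point:

```python
def fib_poly(n, x, y):
    """Bivariate Fibonacci polynomial value: F_0 = 0, F_1 = 1,
    F_n = x*F_{n-1} + y*F_{n-2}."""
    a, b = 0, 1
    for _ in range(n):
        a, b = b, x * b + y * a
    return a

def lucas_poly(n, x, y):
    """Bivariate Lucas polynomial value: L_0 = 2, L_1 = x,
    same recurrence as the Fibonacci case."""
    a, b = 2, x
    if n == 0:
        return a
    for _ in range(n - 1):
        a, b = b, x * b + y * a
    return b

# At x = y = 1 these reduce to the Fibonacci and Lucas numbers.
print([fib_poly(n, 1, 1) for n in range(8)])    # [0, 1, 1, 2, 3, 5, 8, 13]
print([lucas_poly(n, 1, 1) for n in range(6)])  # [2, 1, 3, 4, 7, 11]
```

The Chebyshev connection the abstract mentions comes from specializations of (x, y) such as y = -1, under which the same recurrence generates Chebyshev-type families.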
Measurement error in environmental epidemiology and the shape of exposure-response curves.
Rhomberg, Lorenz R; Chandalia, Juhi K; Long, Christopher M; Goodman, Julie E
2011-09-01
Both classical and Berkson exposure measurement errors as encountered in environmental epidemiology data can result in biases in fitted exposure-response relationships that are large enough to affect the interpretation and use of the apparent exposure-response shapes in risk assessment applications. A variety of sources of potential measurement error exist in the process of estimating individual exposures to environmental contaminants, and the authors review the evaluation in the literature of the magnitudes and patterns of exposure measurement errors that prevail in actual practice. It is well known among statisticians that random errors in the values of independent variables (such as exposure in exposure-response curves) may tend to bias regression results. For increasing curves, this effect tends to flatten and apparently linearize what is in truth a steeper and perhaps more curvilinear or even threshold-bearing relationship. The degree of bias is tied to the magnitude of the measurement error in the independent variables. It has been shown that the degree of bias known to apply to actual studies is sufficient to produce a false linear result, and that although nonparametric smoothing and other error-mitigating techniques may assist in identifying a threshold, they do not guarantee detection of a threshold. The consequences of this could be great, as it could lead to a misallocation of resources towards regulations that do not offer any benefit to public health. PMID:21823979
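The flattening-and-linearizing effect the authors describe is easy to reproduce in simulation: a hard-threshold true exposure-response curve looks smooth and rises well below the true threshold once the exposure is measured with classical error. This sketch (illustrative assumptions: uniform exposure, Gaussian error) bins the same outcomes by true and by mismeasured exposure:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 200_000
x = rng.uniform(0, 10, n)              # true exposure
y = np.where(x > 5, x - 5, 0.0)        # true response: threshold at x = 5
w = x + rng.normal(0, 2, n)            # classical measurement error

bins = np.linspace(0, 10, 11)
tidx = np.digitize(x, bins)
widx = np.digitize(w, bins)
true_curve = [y[tidx == k].mean() for k in range(1, 11)]
apparent = [y[widx == k].mean() for k in range(1, 11)]

# Binned on true exposure the threshold is exact: zero response below x = 5.
print([round(v, 2) for v in true_curve[:5]])   # [0.0, 0.0, 0.0, 0.0, 0.0]
# Binned on the mismeasured exposure the curve is positive and rising early,
# masking the threshold.
print([round(v, 2) for v in apparent[:5]])
```

The apparent curve rises in bins below the true threshold because observations with low measured exposure can still have true exposure above it, which is exactly the mechanism by which a threshold relationship is misread as low-dose-linear.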
Ivan M Roitt
2010-01-01
Bioimpedance measurements are of great use and can provide considerable insight into biological processes. However, there are a number of possible sources of measurement error that must be considered. The most dominant source of error is found in bipolar measurements, where electrode polarisation effects are superimposed on the true impedance of the sample. Even with the tetrapolar approach that is commonly used to circumvent this issue, other errors can persist. Here we characterise the positive phase and rise in impedance magnitude with frequency that can result from the presence of any parallel conductive pathways in the measurement set-up. It is shown that fitting experimental data to an equivalent electrical circuit model allows for accurate determination of the true sample impedance, as validated through finite element modelling (FEM) of the measurement chamber. Finally, the model is used to extract dispersion information from cell cultures to characterise their growth.
Dyadic Bivariate Fourier Multipliers for Multi-Wavelets in L2(R2)
Zhongyan Li; Xiaodi Xu
2015-01-01
The single 2-dilation orthogonal wavelet multipliers in the one-dimensional case, and the single A-dilation (where A is any expansive matrix with integer entries and |det A| = 2) wavelet multipliers in the high-dimensional case, were completely characterized by the Wutam Consortium (1998) and Z. Y. Li, et al. (2010). But there exist no further results on orthogonal multivariate wavelet matrix multipliers corresponding to an integer expansive dilation matrix whose determinant has absolute value other than 2 in L2(R2). In this paper, we choose 2I2 as the dilation matrix and consider the 2I2-dilation orthogonal multivariate wavelet Ψ = {ψ1, ψ2, ψ3} (which is called a dyadic bivariate wavelet) and its multipliers. We call the 3×3 matrix-valued function A(s) = [fi,j(s)]3×3, where the fi,j are measurable functions, a dyadic bivariate matrix Fourier wavelet multiplier if the inverse Fourier transform of A(s)(ψ̂1(s), ψ̂2(s), ψ̂3(s))⊤ = (ĝ1(s), ĝ2(s), ĝ3(s))⊤ is a dyadic bivariate wavelet whenever (ψ1, ψ2, ψ3) is any dyadic bivariate wavelet. We give some conditions for dyadic bivariate matrix wavelet multipliers. The results extend those of Z. Y. Li and X. L. Shi (2011). As an application, we construct some useful dyadic bivariate wavelets by using dyadic Fourier matrix wavelet multipliers, and use them for image denoising.
An in-process form error measurement system for precision machining
In-process form error measurement for precision machining is studied. Due to two key problems, opaque barrier and vibration, the study of in-process form error optical measurement for precision machining has been a hard topic and so far very few existing research works can be found. In this project, an in-process form error measurement device is proposed to deal with the two key problems. Based on our existing studies, a prototype system has been developed. It is the first one of the kind that overcomes the two key problems. The prototype is based on a single laser sensor design of 50 nm resolution together with two techniques, a damping technique and a moving average technique, proposed for use with the device. The proposed damping technique is able to improve vibration attenuation by up to 21 times compared to the case of natural attenuation. The proposed moving average technique is able to reduce errors by seven to ten times without distortion to the form profile results. The two proposed techniques are simple but they are especially useful for the proposed device. For a workpiece sample, the measurement result under coolant condition is only 2.5% larger compared with the one under no coolant condition. For a certified Wyko test sample, the overall system measurement error can be as low as 0.3 µm. The measurement repeatability error can be as low as 2.2%. The experimental results give confidence in using the proposed in-process form error measurement device. For better results, further improvement in design and tests are necessary
Error reduction by combining strapdown inertial measurement units in a baseball stitch
Tracy, Leah
A poor musical performance is rarely due to an inferior instrument. When a device is under performing, the temptation is to find a better device or a new technology to achieve performance objectives; however, another solution may be improving how existing technology is used through a better understanding of device characteristics, i.e., learning to play the instrument better. This thesis explores improving position and attitude estimates of inertial navigation systems (INS) through an understanding of inertial sensor errors, manipulating inertial measurement units (IMUs) to reduce that error and multisensor fusion of multiple IMUs to reduce error in a GPS denied environment.
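The statistical principle behind fusing multiple IMUs is that averaging N sensors with independent noise reduces the noise standard deviation by a factor of √N (systematic biases, which arrangements like the baseball stitch target through geometry, do not average out this way). A minimal sketch assuming independent white gyro noise:

```python
import numpy as np

rng = np.random.default_rng(3)
n_imus, n_samples = 9, 100_000
true_rate = 0.1     # constant angular rate, deg/s
noise_sd = 0.05     # per-sensor white noise, deg/s

# Each row holds one IMU's gyro samples with independent white noise.
readings = true_rate + rng.normal(0, noise_sd, (n_imus, n_samples))
fused = readings.mean(axis=0)   # simple average of the 9 sensors

print(readings[0].std())  # ~0.05: single-sensor noise level
print(fused.std())        # ~0.05 / sqrt(9) = ~0.0167
```

With 9 IMUs the fused noise floor drops by a factor of 3; correlated errors (shared temperature drift, common vibration) are not reduced, which is why sensor orientation and error modeling still matter.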
Ohlrich, Mogens; Henriksen, Eigil; Laugesen, Søren
1997-01-01
Uncertainties in power measurements performed with piezoelectric accelerometers and force transducers are investigated. It is shown that the inherent structural damping of the transducers is responsible for a bias phase error, which typically is in the order of one degree. Fortunately, such bias errors can be largely compensated for by an absolute calibration of the transducers and inverse filtering that results in very small residual errors. Experimental results of this study indicate that these uncertainties will be in the order of one percent with respect to amplitude and two tenths of a...
Techniques for reducing error in the calorimetric measurement of low wattage items
Sedlacek, W.A.; Hildner, S.S.; Camp, K.L.; Cremers, T.L.
1993-08-01
The increased need for the measurement of low wattage items with production calorimeters has required the development of techniques to maximize the precision and accuracy of the calorimeter measurements. An error model for calorimetry measurements is presented. This model is used as a basis for optimizing calorimetry measurements through baseline interpolation. The method was applied to the heat measurement of over 100 items and the results compared to chemistry assay and mass spectroscopy.
Effect of patient positions on measurement errors of the knee-joint space on radiographs
Gilewska, Grazyna
2001-08-01
Osteoarthritis (OA) is one of the most important health problems these days. It is one of the most frequent causes of pain and disability in middle-aged and old people. Nowadays the radiograph is the most economical and available tool to evaluate changes in OA. Errors in the performance of knee-joint radiographs are the basic problem in their evaluation for clinical research. The purpose of evaluating such radiographs in my study was measuring the knee-joint space on several radiographs performed at defined intervals. An attempt at evaluating errors caused by the radiologist or the patient is presented in this study. These errors result mainly either from incorrect conditions of performance or from the patient's fault. Once we have information about the size of the errors, we will be able to assess which of these elements have the greatest influence on the accuracy and repeatability of measurements of the knee-joint space, and consequently we will be able to minimize their sources.
On Bivariate Generalized Exponential-Power Series Class of Distributions
Jafari, Ali Akbar; Roozegar, Rasool
2015-01-01
In this paper, we introduce a new class of bivariate distributions by compounding the bivariate generalized exponential and power-series distributions. This new class contains some new sub-models such as the bivariate generalized exponential distribution, the bivariate generalized exponential-poisson, -logarithmic, -binomial and -negative binomial distributions. We derive different properties of the new class of distributions. The EM algorithm is used to determine the maximum likelihood estim...
Characterizations of some bivariate models using reciprocal coordinate subtangents
Sreenarayanapurath Madhavan Sunoj; Sreejith Thoppil Bhargavan; Jorge Navarro
2014-01-01
In the present paper, we consider the bivariate version of reciprocal coordinate subtangent (RCST) and study its usefulness in characterizing some important bivariate models. In particular, characterization results are proved for a general bivariate model whose conditional distributions are proportional hazard rate models (see Navarro and Sarabia, 2011), Sarmanov family and Ali-Mikhail-Haq family of bivariate distributions. We also study the relationship between local dependence function an...
On Bivariate Exponentiated Extended Weibull Family of Distributions
Roozegar, Rasool; Jafari, Ali Akbar
2015-01-01
In this paper, we introduce a new class of bivariate distributions called the bivariate exponentiated extended Weibull distributions. The model introduced here is of Marshall-Olkin type. This new class of bivariate distributions contains several bivariate lifetime models. Some mathematical properties of the new class of distributions are studied. We provide the joint and conditional density functions, the joint cumulative distribution function and the joint survival function. Special bivariat...
Sarkar, Abhra
2014-10-02
We consider the problem of estimating the density of a random variable when precise measurements on the variable are not available, but replicated proxies contaminated with measurement error are available for sufficiently many subjects. Under the assumption of additive measurement errors this reduces to a problem of deconvolution of densities. Deconvolution methods often make restrictive and unrealistic assumptions about the density of interest and the distribution of measurement errors, e.g., normality and homoscedasticity and thus independence from the variable of interest. This article relaxes these assumptions and introduces novel Bayesian semiparametric methodology based on Dirichlet process mixture models for robust deconvolution of densities in the presence of conditionally heteroscedastic measurement errors. In particular, the models can adapt to asymmetry, heavy tails and multimodality. In simulation experiments, we show that our methods vastly outperform a recent Bayesian approach based on estimating the densities via mixtures of splines. We apply our methods to data from nutritional epidemiology. Even in the special case when the measurement errors are homoscedastic, our methodology is novel and dominates other methods that have been proposed previously. Additional simulation results, instructions on getting access to the data set and R programs implementing our methods are included as part of online supplemental materials.
Determination of error measurement by means of the basic magnetization curve
Lankin, M. V.; Lankin, A. M.
2016-04-01
The article describes the implementation of a methodology for fault detection by means of the basic magnetization curve of electric cutting machines. The basic magnetization curve, as an integrated operating characteristic of the electric machine, allows one to identify a fault type. In the measurement process, the calculation of the error of the basic magnetization curve plays a major role, since inaccuracies in a particular characteristic can have a deleterious effect.
Farm Level Nonparametric Analysis of Profit Maximization Behavior with Measurement Error
Zereyesus, Yacob Abrehe; Allen M. Featherstone; Langemeier, Michael R.
2009-01-01
This paper tests the farm-level profit maximization hypothesis using a nonparametric production analysis approach allowing for measurement error in the input and output variables. All farms violated Varian's deterministic Weak Axiom of Profit Maximization (WAPM). The magnitude of the minimum critical standard errors required for consistency with profit maximization under a convex production technology was smaller after allowing for technological change during the sample period. Results indicate strong suppo...
Adam Gąska
2013-12-01
LaserTracer (LT) systems are the most sophisticated and accurate laser tracking devices. They are mainly used for the correction of geometric errors of machine tools and coordinate measuring machines. This process is about four times faster than standard methods based on the use of laser interferometers. The methodology of using the LaserTracer for the correction of geometric errors, including a presentation of this system, the multilateration method and the software that was used, is described in detail in this paper.
A newly conceived cylinder measuring machine and methods that eliminate the spindle errors
Advanced manufacturing processes require improving dimensional metrology applications to reach a nanometric accuracy level. Such measurements may be carried out using conventional highly accurate roundness measuring machines. On these machines, the metrology loop goes through the probing and the mechanical guiding elements. Hence, external forces, strain and thermal expansion are transmitted to the metrological structure through the supporting structure, thereby reducing measurement quality. The obtained measurement also combines both the motion error of the guiding system and the form error of the artifact. Detailed uncertainty budgeting might be improved, using error separation methods (multi-step, reversal and multi-probe error separation methods, etc), enabling identification of the systematic (synchronous or repeatable) guiding system motion errors as well as form error of the artifact. Nevertheless, the performance of this kind of machine is limited by the repeatability level of the mechanical guiding elements, which usually exceeds 25 nm (in the case of an air bearing spindle and a linear bearing). In order to guarantee a 5 nm measurement uncertainty level, LNE is currently developing an original machine dedicated to form measurement on cylindrical and spherical artifacts with an ultra-high level of accuracy. The architecture of this machine is based on the ‘dissociated metrological technique’ principle and contains reference probes and cylinder. The form errors of both cylindrical artifact and reference cylinder are obtained after a mathematical combination between the information given by the probe sensing the artifact and the information given by the probe sensing the reference cylinder by applying the modified multi-step separation method. (paper)
Bounds for Trivariate Copulas with Given Bivariate Marginals
Quesada-Molina, José Juan; Durante, Fabrizio; Klement, Erich Peter
2008-01-01
We determine two constructions that, starting with two bivariate copulas, give rise to new bivariate and trivariate copulas, respectively. These constructions are used to determine pointwise upper and lower bounds for the class of all trivariate copulas with given bivariate marginals.
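For context, the classical bivariate analogue of such bounds is the Fréchet–Hoeffding result (standard material, not part of this abstract): every bivariate copula $C$ satisfies

```latex
\[
  W(u,v) \;=\; \max(u+v-1,\,0) \;\le\; C(u,v) \;\le\; \min(u,v) \;=\; M(u,v),
  \qquad (u,v)\in[0,1]^2 .
\]
```

The paper's contribution is the trivariate analogue: pointwise upper and lower bounds over all trivariate copulas whose bivariate marginals are prescribed.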
Hou, Zhendong; Wang, Zhaokui; Zhang, Yulin
2016-09-01
To meet the very demanding requirements of space gravity detection, the gravitational reference sensor (GRS), as the key payload, needs to provide the relative position of the proof mass with extraordinarily high precision and low disturbance. Position determination and error analysis for a GRS with a spherical proof mass are addressed. First, the concept of measuring the freely falling proof mass with optical shadow sensors is presented. Then, based on the optical signal model, the general formula for position determination is derived, and two types of measurement system are proposed for which an analytical solution to the three-dimensional position can be attained. Third, under the assumption of Gaussian beams, error propagation models are given for the variation of spot size and optical power, the effect of beam divergence, the chattering of the beam center, and the deviation of beam direction. Finally, numerical simulations taking into account the model uncertainty of beam divergence, spherical edge and beam diffraction are carried out to validate the performance of the error propagation models. The results show that these models can estimate the effect of each error source with an acceptable accuracy, better than 20%. Moreover, a simulation of three-dimensional position determination with one of the proposed measurement systems shows that the position error is comparable to the output error of each individual sensor.
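None of the matrices or noise levels below come from the paper; this is only a generic sketch of how an analytic linear error-propagation model (Cov_x = M Cov_s Mᵀ) can be validated by Monte-Carlo simulation, in the spirit of the validation described above:

```python
import numpy as np

rng = np.random.default_rng(0)
# Assumed linear map from four sensor outputs to a 3-D position.
M = np.array([[1.0, 0.0, 0.0, -1.0],
              [0.0, 1.0, -1.0, 0.0],
              [0.5, 0.5, 0.5, 0.5]])
sigma = 1e-3                      # assumed per-sensor noise level
cov_s = sigma**2 * np.eye(4)

# Analytic propagation of the sensor covariance to the position.
cov_analytic = M @ cov_s @ M.T

# Monte-Carlo check: propagate simulated sensor noise directly.
s = rng.normal(0.0, sigma, size=(200_000, 4))
x = s @ M.T
cov_mc = np.cov(x, rowvar=False)

# Relative disagreement on the variances, far inside a 20% bound.
rel = np.max(np.abs(np.diag(cov_mc) - np.diag(cov_analytic))
             / np.diag(cov_analytic))
print(rel)
```

The paper's models are nonlinear in the beam parameters, so its validation compares analytic predictions against full optical simulations rather than a linear map; the comparison logic, however, is the same.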
Evaluation of TRMM Ground-Validation Radar-Rain Errors Using Rain Gauge Measurements
Wang, Jianxin; Wolff, David B.
2009-01-01
Ground-validation (GV) radar-rain products are often utilized to validate the Tropical Rainfall Measuring Mission (TRMM) space-based rain estimates; hence, quantitative evaluation of the error characteristics of GV radar-rain products is vital. This study compares quality-controlled gauge data with TRMM GV radar rain rates in an effort to provide such error characteristics. The results show that significant differences between concurrent radar and gauge rain rates exist at time scales ranging from 5 min to 1 day, despite a low overall long-term bias. However, the differences between radar area-averaged rain rates and gauge point rain rates cannot be attributed to radar error alone. The error variance separation method is adapted to partition the variance of radar-gauge differences into the gauge area-point error variance and the radar rain estimation error variance. The results provide a relatively reliable quantitative evaluation of the uncertainty of TRMM GV radar rain estimates at various time scales, and help to better understand the differences between measured radar and gauge rain rates. It is envisaged that this study will contribute to better utilization of GV radar rain products in validating space-based rain estimates from TRMM, as well as from the proposed Global Precipitation Measurement mission and other satellites.
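Under the assumption that radar estimation error and gauge area-point sampling error are independent, the error variance separation idea reduces to subtracting variances. A toy simulation (all distributions and magnitudes are invented for illustration, not taken from the study):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000
true_area_rain = rng.gamma(2.0, 2.0, n)           # area-average rain rate
radar = true_area_rain + rng.normal(0, 1.0, n)    # radar estimation error
gauge = true_area_rain + rng.normal(0, 0.5, n)    # area-point sampling error

# Variance of the differences combines both independent error sources.
var_diff = np.var(radar - gauge)

# Given an independent estimate of the area-point variance (obtained in
# practice from gauge-pair correlations), the radar error variance follows:
var_radar_err = var_diff - 0.5**2
print(var_radar_err)  # close to the simulated radar error variance of 1.0
```

The practical difficulty, which the study addresses, is obtaining the area-point variance term reliably; once it is known, the radar estimation error variance falls out by subtraction as above.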
Estimation of bias errors in angle-of-arrival measurements using platform motion
Grindlay, A.
1981-08-01
An algorithm has been developed to estimate the bias errors in angle-of-arrival measurements made by electromagnetic detection devices on board a pitching and rolling platform. The algorithm assumes that continuous, exact measurements of the platform's roll and pitch are available. When the roll and pitch conditions are used to transform deck-plane angular measurements of a nearly fixed target's position into a stabilized coordinate system, the resulting stabilized coordinates (azimuth and elevation) should not vary with changes in roll and pitch. If changes do occur, they are the result of bias errors in the measurement system, and the algorithm uses these changes to estimate the sense and magnitude of the angular bias errors.
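The stabilization step described here can be sketched as a pair of rotations applied to the deck-plane line of sight. The rotation order and sign conventions below are assumptions for illustration, not taken from the report:

```python
import numpy as np

def deck_to_stabilized(az, el, roll, pitch):
    """Rotate deck-plane azimuth/elevation (radians) through the
    platform's roll and pitch into a stabilized frame."""
    # Deck-plane line-of-sight unit vector.
    v = np.array([np.cos(el) * np.cos(az),
                  np.cos(el) * np.sin(az),
                  np.sin(el)])
    cr, sr = np.cos(roll), np.sin(roll)
    cp, sp = np.cos(pitch), np.sin(pitch)
    Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])  # roll about x
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])  # pitch about y
    w = Ry @ Rx @ v
    az_s = np.arctan2(w[1], w[0])
    el_s = np.arcsin(np.clip(w[2], -1.0, 1.0))
    return az_s, el_s

# With zero roll and pitch the transform is the identity:
print(deck_to_stabilized(0.3, 0.1, 0.0, 0.0))  # ≈ (0.3, 0.1)
```

For an unbiased sensor, sweeping roll and pitch leaves the stabilized (az_s, el_s) of a fixed target constant; a deck-plane bias added to az or el makes the stabilized output oscillate with the platform motion, which is the signature the algorithm exploits.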