WorldWideScience

Sample records for bivariate measurement error

  1. Fitting a Bivariate Measurement Error Model for Episodically Consumed Dietary Components

    KAUST Repository

    Zhang, Saijuan

    2011-01-06

    There has been great public health interest in estimating usual, i.e., long-term average, intake of episodically consumed dietary components that are not consumed daily by everyone, e.g., fish, red meat and whole grains. Short-term measurements of episodically consumed dietary components have zero-inflated skewed distributions. So-called two-part models have been developed for such data in order to correct for measurement error due to within-person variation and to estimate the distribution of usual intake of the dietary component in the univariate case. However, there is arguably much greater public health interest in the usual intake of an episodically consumed dietary component adjusted for energy (caloric) intake, e.g., ounces of whole grains per 1000 kilocalories, which reflects usual dietary composition and adjusts for different total amounts of caloric intake. Because of this public health interest, it is important to have models to fit such data, and it is important that the model-fitting methods can be applied to all episodically consumed dietary components. We have recently developed a nonlinear mixed effects model (Kipnis et al., 2010) and have fit it by maximum likelihood using nonlinear mixed effects programs and methodology (the SAS NLMIXED procedure). Maximum likelihood fitting of such a nonlinear mixed model is generally slow because of the 3-dimensional adaptive Gaussian quadrature, and there are times when the programs either fail to converge or converge to models with a singular covariance matrix. For these reasons, we develop a Markov chain Monte Carlo (MCMC) approach to fitting this model, which allows for both frequentist and Bayesian inference. There are technical challenges to developing this solution because one of the covariance matrices in the model is patterned. Our main application is to the National Institutes of Health (NIH)-AARP Diet and Health Study, where we illustrate our methods for modeling the energy-adjusted usual intake of fish and whole
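
    To make the two-part idea concrete, the following is a minimal Python simulation sketch of an episodically consumed component: a person-level effect drives both the probability of consuming on a given day and the consumption-day amount, so single-day observations are zero-inflated, skewed and far more variable than the usual (long-term average) intake. All parameter values are invented for illustration; this is not the Kipnis et al. (2010) model or the NIH-AARP data.

```python
import numpy as np

rng = np.random.default_rng(0)
n_people, n_days = 5000, 2

# Person-level random effects (assumed values, illustration only).
u1 = rng.normal(0.0, 0.7, n_people)   # shifts the probability of consumption
u2 = rng.normal(0.0, 0.4, n_people)   # shifts the consumption-day amount

p = 1.0 / (1.0 + np.exp(-(-0.5 + u1)))   # Part 1: P(consume on a given day)
sigma_day = 0.5                          # within-person (day-to-day) error SD

# Observed short-term intakes: zero-inflated and right-skewed.
consumed = rng.random((n_days, n_people)) < p
amounts = np.exp(1.0 + u2 + rng.normal(0.0, sigma_day, (n_days, n_people)))
observed = consumed * amounts

# True usual (long-term average) intake of each person.
usual = p * np.exp(1.0 + u2 + 0.5 * sigma_day**2)

print(f"share of zero days:       {np.mean(observed == 0):.2f}")
print(f"variance, one-day intake: {observed[0].var():.2f}")
print(f"variance, usual intake:   {usual.var():.2f}")  # much smaller
```

    Estimating the distribution of `usual` from a couple of noisy days per person is exactly the measurement error problem the record describes; the two-part model recovers it by modeling both parts jointly.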

  2. On errors of a unified family of approximation forms of bivariate continuous functions

    OpenAIRE

    Chung-Siung Kao

    2003-01-01

    Approximation forms for regular bivariate functions $f(x,y)$ were obtained by taking the expectation of a convergent bivariate stochastic sequence. Proper error bounds for these forms are derived herein to evaluate their applicability when they are actually used to approximate regular bivariate functions.

  3. Dependence Measures in Bivariate Gamma Frailty Models

    OpenAIRE

    van den Berg, Gerard J.; Effraimidis, Georgios

    2014-01-01

    Bivariate duration data frequently arise in economics, biostatistics and other areas. In "bivariate frailty models", dependence between the frailties (i.e., unobserved determinants) induces dependence between the durations. Using notions of quadrant dependence, we study restrictions that this imposes on the implied dependence of the durations, if the frailty terms act multiplicatively on the corresponding hazard rates. Marginal frailty distributions are often taken to be gamma distributions. ...

  4. A New Measure Of Bivariate Asymmetry And Its Evaluation

    International Nuclear Information System (INIS)

    In this paper we propose a new measure of bivariate asymmetry, based on conditional correlation coefficients. A decomposition of the Pearson correlation coefficient in terms of its conditional versions is studied and an example of application of the proposed measure is given.

  5. Payment Error Rate Measurement (PERM)

    Data.gov (United States)

    U.S. Department of Health & Human Services — The PERM program measures improper payments in Medicaid and CHIP and produces error rates for each program. The error rates are based on reviews of the...

  6. A Simple Approximation for Bivariate Normal Integral Based on Error Function and its Application on Probit Model with Binary Endogenous Regressor

    OpenAIRE

    Wen-Jen Tsay; Peng-Hsuan Ke

    2009-01-01

    A simple approximation for the bivariate normal cumulative distribution function (BNCDF) based on the error function is derived. The worst error of our method is found to be at the fourth decimal place under the various configurations considered in this paper's Table 1. This is much better than the results in Table 1 of Cox and Wermuth (1991) and in Table 1 of Lin (1995), where the worst error reaches the third decimal place. We also apply the proposed method to approximate the likelihood function ...
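
    The record's recipe - replace the bivariate normal CDF with products of univariate normal CDFs (hence error functions) evaluated at conditional arguments - can be sketched as below. This version plugs in the conditional mean of X given X <= h, in the spirit of Cox and Wermuth (1991); it is a generic illustration, coarser than Tsay and Ke's formula.

```python
import numpy as np
from scipy.stats import norm, multivariate_normal

def bncdf_approx(h, k, rho):
    """Crude error-function-based approximation to P(X <= h, Y <= k) for a
    standard bivariate normal with correlation rho. norm.cdf is itself
    computed from the error function: Phi(z) = (1 + erf(z / sqrt(2))) / 2."""
    ex = -norm.pdf(h) / norm.cdf(h)   # E[X | X <= h]
    return norm.cdf(h) * norm.cdf((k - rho * ex) / np.sqrt(1.0 - rho**2))

h, k, rho = 0.5, -0.3, 0.6
exact = multivariate_normal(cov=[[1.0, rho], [rho, 1.0]]).cdf([h, k])
print(f"approx = {bncdf_approx(h, k, rho):.6f}, exact = {exact:.6f}")
```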

  7. Job Mobility and Measurement Error

    OpenAIRE

    Bergin, Adele

    2011-01-01

    This thesis consists of essays investigating job mobility and measurement error. Job mobility, captured here as a change of employer, is a striking feature of the labour market. In empirical work on job mobility, researchers often depend on self-reported tenure data to identify job changes. There may be measurement error in these responses and consequently observations may be misclassified as job changes when truly no change has taken place and vice versa. These observations serve as a starti...

  8. On bivariate geometric distribution

    OpenAIRE

    Jayakumar, K.; Davis Antony Mundassery

    2013-01-01

    Characterizations of bivariate geometric distribution using univariate and bivariate geometric compounding are obtained. Autoregressive models with marginals following a bivariate geometric distribution are developed. Various bivariate geometric distributions analogous to important bivariate exponential distributions, like Marshall-Olkin's bivariate exponential, Downton's bivariate exponential and Hawkes' bivariate exponential, are presented.
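
    One classical construction in this family is the Marshall-Olkin-type shock model transplanted to geometric variables, sketched below with illustrative parameters; the paper's own compounding constructions may differ in detail.

```python
import numpy as np

rng = np.random.default_rng(1)
n, p1, p2, p3 = 200_000, 0.3, 0.4, 0.2   # shock probabilities (illustrative)

# Three independent geometric "shock" times; the common shock W couples X and Y.
U, V, W = rng.geometric(p1, n), rng.geometric(p2, n), rng.geometric(p3, n)
X, Y = np.minimum(U, W), np.minimum(V, W)

# Marginals stay geometric: min(Geom(p), Geom(q)) ~ Geom(1 - (1-p)(1-q)).
px = 1 - (1 - p1) * (1 - p3)
print(f"E[X]: simulated {X.mean():.3f}, theoretical {1 / px:.3f}")
print(f"corr(X, Y) = {np.corrcoef(X, Y)[0, 1]:.3f}")  # positive via shared W
```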

  9. Errors in Chemical Sensor Measurements

    Directory of Open Access Journals (Sweden)

    Artur Dybko

    2001-06-01

    Various types of errors during measurements with ion-selective electrodes, ion-sensitive field effect transistors, and fibre optic chemical sensors are described. The errors were divided according to their nature and place of origin into chemical, instrumental and non-chemical. The influence of interfering ions, leakage of the membrane components, and liquid junction potential, as well as sensor wiring, ambient light and temperature, is presented.

  10. EWMA Chart and Measurement Error

    OpenAIRE

    Maravelakis, Petros; Panaretos, John; Psarakis, Stelios

    2004-01-01

    Measurement error is a commonly encountered distortion factor in real-world applications that influences the outcome of a process. In this paper, we examine the effect of measurement error on the ability of the EWMA control chart to detect out-of-control situations. The model used is the one involving linear covariates. We investigate the ability of the EWMA chart in the case of a shift in the mean. The effect of taking multiple measurements on each sampled unit and the case of linearly increasing varianc...

  11. Measuring verification device error rates

    International Nuclear Information System (INIS)

    A verification device generates a Type I (II) error when it recommends to reject (accept) a valid (false) identity claim. For a given identity, the rates or probabilities of these errors quantify random variations of the device from claim to claim. These are intra-identity variations. To some degree, these rates depend on the particular identity being challenged, and there exists a distribution of error rates characterizing inter-identity variations. However, for most security system applications we only need to know averages of this distribution. These averages are called the pooled error rates. In this paper the authors present the statistical underpinnings for the measurement of pooled Type I and Type II error rates. The authors consider a conceptual experiment, "a crate of biased coins". This model illustrates the effects of sampling both within trials of the same individual and among trials from different individuals. Application of this simple model to verification devices yields pooled error rate estimates and confidence limits for these estimates. A sample certification procedure for verification devices is given in the appendix
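
    The "crate of biased coins" model is easy to simulate: each identity has its own error rate drawn from an inter-identity distribution, and the pooled rate is the average over that distribution. The sketch below assumes a Beta distribution for the rates and uses a cluster-level standard error; it does not reproduce the paper's certification procedure.

```python
import numpy as np

rng = np.random.default_rng(2)
n_ids, trials_per_id = 50, 40          # identities sampled, claims per identity

# Inter-identity variation: each identity has its own Type I error rate
# (Beta shape parameters are assumed, for illustration; mean rate 0.04).
rates = rng.beta(2.0, 48.0, n_ids)
errors = rng.binomial(trials_per_id, rates)   # Type I errors per identity

pooled = errors.sum() / (n_ids * trials_per_id)   # pooled Type I error rate

# A naive binomial interval ignores between-identity variation; a
# cluster-level standard error accounts for both sampling levels.
per_id = errors / trials_per_id
se_cluster = per_id.std(ddof=1) / np.sqrt(n_ids)
print(f"pooled rate {pooled:.4f}, 95% CI half-width {1.96 * se_cluster:.4f}")
```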

  12. Measurement error in geometric morphometrics.

    Science.gov (United States)

    Fruciano, Carmelo

    2016-06-01

    Geometric morphometrics - a set of methods for the statistical analysis of shape, once hailed as a revolutionary advancement in the analysis of morphology - is now mature and routinely used in ecology and evolution. However, a factor often disregarded in empirical studies is the presence and extent of measurement error. This is potentially a very serious issue because random measurement error can inflate the amount of variance and, since many statistical analyses are based on the amount of "explained" relative to "residual" variance, can result in loss of statistical power. On the other hand, systematic bias can affect statistical analyses by biasing the results (i.e. variation due to bias is incorporated into the analysis and treated as biologically meaningful variation). Here, I briefly review common sources of error in geometric morphometrics. I then review the most commonly used methods to measure and account for both random and non-random measurement error, providing a worked example using a real dataset. PMID:27038025

  13. Correction of errors in power measurements

    DEFF Research Database (Denmark)

    Pedersen, Knud Ole Helgesen

    1998-01-01

    Small errors in voltage and current measuring transformers cause inaccuracies in power measurements. In this report correction factors are derived to compensate for such errors.

  14. Better Stability with Measurement Errors

    Science.gov (United States)

    Argun, Aykut; Volpe, Giovanni

    2016-06-01

    Often it is desirable to stabilize a system around an optimal state. This can be effectively accomplished using feedback control, where the system deviation from the desired state is measured in order to determine the magnitude of the restoring force to be applied. Contrary to conventional wisdom, i.e. that a more precise measurement is expected to improve the system stability, here we demonstrate that a certain degree of measurement error can improve the system stability. We exemplify the implications of this finding with numerical examples drawn from various fields, such as the operation of a temperature controller, the confinement of a microscopic particle, the localization of a target by a microswimmer, and the control of a population.

  15. POTASSIUM MEASUREMENT: CAUSES OF ERRORS IN MEASUREMENT

    OpenAIRE

    Kavitha; Omprakash

    2014-01-01

    It is not an easy task to recognize errors in potassium measurement in the lab. If falsely elevated potassium levels go unrecognized by the lab and the clinician, a masked hypokalemic state, which is a medical emergency, is difficult to treat. Such cases require proper monitoring by the clinician, so that cases with a history of pseudohyperkalemia, which cannot be easily identified in the laboratory, do not go unrecognized. The aim of this article is t...

  16. Measurement error in a single regressor

    NARCIS (Netherlands)

    Meijer, H.J.; Wansbeek, T.J.

    2000-01-01

    For the setting of multiple regression with measurement error in a single regressor, we present some very simple formulas to assess the result that one may expect when correcting for measurement error. It is shown where the corrected estimated regression coefficients and the error variance may lie,
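
    For the classical additive-error case, the expected attenuation and its correction can be checked in a few lines. The sketch below assumes the measurement error variance is known, so the reliability ratio is available exactly; in practice it must be estimated from validation or replicate data.

```python
import numpy as np

rng = np.random.default_rng(3)
n, beta, sigma_x, sigma_e = 10_000, 2.0, 1.0, 0.5

x = rng.normal(0.0, sigma_x, n)        # true regressor (unobserved)
w = x + rng.normal(0.0, sigma_e, n)    # error-prone measurement
y = beta * x + rng.normal(0.0, 1.0, n)

cov = np.cov(w, y)
beta_naive = cov[0, 1] / cov[0, 0]                # attenuated OLS slope
lam = sigma_x**2 / (sigma_x**2 + sigma_e**2)      # reliability ratio
print(f"naive {beta_naive:.3f}  ~ beta*lambda = {beta * lam:.3f}")
print(f"corrected {beta_naive / lam:.3f}  ~ beta = {beta:.3f}")
```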

  17. Impact of Measurement Error on Synchrophasor Applications

    Energy Technology Data Exchange (ETDEWEB)

    Liu, Yilu [Univ. of Tennessee, Knoxville, TN (United States); Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Gracia, Jose R. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Ewing, Paul D. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Zhao, Jiecheng [Univ. of Tennessee, Knoxville, TN (United States); Tan, Jin [Univ. of Tennessee, Knoxville, TN (United States); Wu, Ling [Univ. of Tennessee, Knoxville, TN (United States); Zhan, Lingwei [Univ. of Tennessee, Knoxville, TN (United States)

    2015-07-01

    Phasor measurement units (PMUs), which measure synchrophasors, are powerful diagnostic tools that can help avert catastrophic failures in the power grid. Because of this, PMU measurement errors are particularly worrisome. This report examines the internal and external factors contributing to PMU phase angle and frequency measurement errors and gives a reasonable explanation for them. It also analyzes the impact of those measurement errors on several synchrophasor applications: event location detection, oscillation detection, islanding detection, and dynamic line rating. The primary finding is that dynamic line rating is the application most likely to be influenced by measurement error. Other findings include the possibility of reporting nonoscillatory activity as an oscillation as a result of error, the possibility of failing to detect oscillations submerged by error, and the unlikely impact of error on event location and islanding detection.

  18. Measurement error models, methods, and applications

    CERN Document Server

    Buonaccorsi, John P

    2010-01-01

    Over the last 20 years, comprehensive strategies for treating measurement error in complex models and accounting for the use of extra data to estimate measurement error parameters have emerged. Focusing on both established and novel approaches, "Measurement Error: Models, Methods, and Applications" provides an overview of the main techniques and illustrates their application in various models. It describes the impacts of measurement errors on naive analyses that ignore them and presents ways to correct for them across a variety of statistical models, from simple one-sample problems to regres

  19. Error calculations and statistics in radioactive measurements

    International Nuclear Information System (INIS)

    Basic approaches and procedures frequently used in the practice of radioactive measurements. The statistical principles applied are part of Good Radiopharmaceutical Practices and quality assurance. Concept of error; classification as systematic and random errors. Statistical fundamentals: probability theories, population distributions, Bernoulli, Poisson, Gauss, t-test distribution, χ² test, error propagation based on analysis of variance. Bibliography. z table, t-test table, Poisson index, χ² test

  20. KMRR thermal power measurement error estimation

    International Nuclear Information System (INIS)

    The thermal power measurement error of the Korea Multi-purpose Research Reactor has been estimated by a statistical Monte Carlo method, and compared with those obtained by the other methods including deterministic and statistical approaches. The results show that the specified thermal power measurement error of 5% cannot be achieved if the commercial RTDs are used to measure the coolant temperatures of the secondary cooling system and the error can be reduced below the requirement if the commercial RTDs are replaced by the precision RTDs. The possible range of the thermal power control operation has been identified to be from 100% to 20% of full power
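
    The statistical Monte Carlo idea can be illustrated by propagating assumed RTD temperature errors through the thermal power balance P = m_dot * cp * dT. All numbers below are invented for illustration and are not KMRR design values.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 100_000
m_dot, cp, dT = 100.0, 4.18, 5.0   # kg/s, kJ/(kg K), K -- illustrative only

def power_error_95(rtd_sigma):
    """95th percentile of the relative thermal power error when both
    coolant temperatures carry independent RTD errors of SD rtd_sigma."""
    t_err = rng.normal(0.0, rtd_sigma, (n, 2))
    p = m_dot * cp * (dT + t_err[:, 0] - t_err[:, 1])
    return np.percentile(np.abs(p / (m_dot * cp * dT) - 1.0), 95) * 100

print(f"commercial RTDs (sigma = 0.25 K): {power_error_95(0.25):.1f}%")
print(f"precision RTDs  (sigma = 0.05 K): {power_error_95(0.05):.1f}%")
```

    With the assumed 5 K temperature rise, only the precision RTDs keep the 95th-percentile error below a 5% requirement, mirroring the record's conclusion.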

  1. Generalized bivariate Fibonacci polynomials

    OpenAIRE

    Catalani, Mario

    2002-01-01

    We define generalized bivariate polynomials from which, upon specification of initial conditions, the bivariate Fibonacci and Lucas polynomials are obtained. Using an essentially matrix-based approach, we derive identities and inequalities that in most cases generalize known results.
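
    The matrix approach can be reproduced for the simplest bivariate Fibonacci recurrence F_n(x, y) = x F_{n-1}(x, y) + y F_{n-2}(x, y) (one common convention; the paper's generalized initial conditions are not assumed here): powers of the companion matrix [[x, y], [1, 0]] carry the polynomials as entries.

```python
import sympy as sp

x, y = sp.symbols('x y')

def bivariate_fib(n):
    """F_n(x, y) with F_0 = 0, F_1 = 1, F_n = x*F_{n-1} + y*F_{n-2},
    read off from the n-th power of the companion matrix [[x, y], [1, 0]]."""
    M = sp.Matrix([[x, y], [1, 0]])
    return sp.expand((M ** n)[1, 0])   # lower-left entry is F_n

for n in range(1, 6):
    print(n, bivariate_fib(n))
# 1 -> 1 | 2 -> x | 3 -> x**2 + y | 4 -> x**3 + 2*x*y | 5 -> x**4 + 3*x**2*y + y**2
# Setting y = 1 recovers the Fibonacci polynomials; x = y = 1 gives 1, 1, 2, 3, 5.
```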

  2. Assessing Measurement Error in Medicare Coverage

    Data.gov (United States)

    U.S. Department of Health & Human Services — Assessing Measurement Error in Medicare Coverage From the National Health Interview Survey Using linked administrative data, to validate Medicare coverage estimates...

  3. Nonclassical measurement errors in nonlinear models

    DEFF Research Database (Denmark)

    Madsen, Edith; Mulalic, Ismir

    Discrete choice models, and in particular logit-type models, play an important role in understanding and quantifying individual or household behavior in relation to transport demand. An example is the choice of travel mode for a given trip under the budget and time restrictions that the individuals face... We find that the classical measurement error model (for the logarithm of income) is valid except in the tails of the income distribution, where those with low (high) income tend to over- (under-)report. In addition, we find that the marginal distribution of the measurement errors is symmetric and leptokurtic, and... when the distribution of the measurement errors is symmetric and the distribution of the underlying true income is skewed, there are valid technical instruments. We investigate how this IV estimation approach works in theory and illustrate it by simulation studies using the findings about the measurement error model for income.

  4. Bivariate discrete Linnik distribution

    Directory of Open Access Journals (Sweden)

    Davis Antony Mundassery

    2014-10-01

    Christoph and Schreiber (1998a) studied the discrete analogue of the positive Linnik distribution and obtained its characterizations using the survival function. In this paper, we introduce a bivariate form of the discrete Linnik distribution and study its distributional properties. Characterizations of the bivariate distribution are obtained using compounding schemes. Autoregressive processes are developed whose marginals follow the bivariate discrete Linnik distribution.

  5. Bivariate discrete Linnik distribution

    OpenAIRE

    Davis Antony Mundassery; Jayakumar, K.

    2014-01-01

    Christoph and Schreiber (1998a) studied the discrete analogue of the positive Linnik distribution and obtained its characterizations using the survival function. In this paper, we introduce a bivariate form of the discrete Linnik distribution and study its distributional properties. Characterizations of the bivariate distribution are obtained using compounding schemes. Autoregressive processes are developed whose marginals follow the bivariate discrete Linnik distribution.

  6. Measuring Cyclic Error in Laser Heterodyne Interferometers

    Science.gov (United States)

    Ryan, Daniel; Abramovici, Alexander; Zhao, Feng; Dekens, Frank; An, Xin; Azizi, Alireza; Chapsky, Jacob; Halverson, Peter

    2010-01-01

    An improved method and apparatus have been devised for measuring cyclic errors in the readouts of laser heterodyne interferometers that are configured and operated as displacement gauges. The cyclic errors arise as a consequence of mixing of spurious optical and electrical signals in beam launchers that are subsystems of such interferometers. The conventional approach to measurement of cyclic error involves phase measurements and yields values precise to within about 10 pm over air optical paths at laser wavelengths in the visible and near infrared. The present approach, which involves amplitude measurements instead of phase measurements, yields values precise to about 0.1 pm - about 100 times the precision of the conventional approach. In a displacement gauge of the type of interest here, the laser heterodyne interferometer is used to measure any change in distance along an optical axis between two corner-cube retroreflectors. One of the corner-cube retroreflectors is mounted on a piezoelectric transducer, which is used to introduce a low-frequency periodic displacement that can be measured by the gauges. The transducer is excited at a frequency of 9 Hz by a triangular waveform to generate a 9-Hz triangular-wave displacement having an amplitude of 25 microns. The displacement gives rise to both amplitude and phase modulation of the heterodyne signals in the gauges. The modulation includes cyclic error components, and the magnitude of the cyclic-error component of the phase modulation is what one needs to measure in order to determine the magnitude of the cyclic displacement error. The precision attainable in the conventional (phase measurement) approach to measuring cyclic error is limited because the phase measurements are af-

  7. Measurement error in longitudinal film badge data

    CERN Document Server

    Marsh, J L

    2002-01-01

    Initial logistic regressions turned up some surprising contradictory results, which led to a re-sampling of Sellafield mortality controls without the date-of-employment matching factor. It is suggested that overmatching is the cause of the contradictory results. Comparisons of the two measurements of radiation exposure suggest a strongly linear relationship with non-Normal errors. A method has been developed, using the technique of regression calibration, to deal with these in a case-control study context, and applied to this Sellafield study. The classical measurement error model is that of a simple linear regression with unobservable variables. Information about the covariates is available only through error-prone measurements, usually with an additive structure. Ignoring errors has been shown to result in biased regression coefficients, reduced power of hypothesis tests and increased variability of parameter estimates. Radiation is known to be a causal factor for certain types of leukaemia. This link is main...

  8. Methodological errors in radioisotope flux measurements

    Energy Technology Data Exchange (ETDEWEB)

    Egnor, R.W.; Vaccarezza, S.G.; Charney, A.N. (New York Univ. School of Medicine, New York (USA))

    1988-11-01

    The authors examined several sources of error in isotopic flux measurements in a commonly used experimental model: the study of ²²Na and ³⁶Cl fluxes across rat ileal tissue mounted in the Ussing flux chamber. The experiment revealed three important sources of error: the absolute counts per minute, the difference in counts per minute between serial samples, and averaging of serial samples. By computer manipulation, they then applied hypothetical changes in the experimental protocol to generalize these findings and assess the effect and interaction of the absolute counts per minute, the sampling interval, and the counting time on the magnitude of the error. They found that the error of a flux measurement will vary inversely with the counting time and the difference between the consecutive sample counts per minute used in the flux calculations and will vary directly with the absolute counts per minute of each sample. Alteration of the hot side specific activity, the surface area of the tissue across which flux is measured and the sample volume have a smaller impact on measurement error. Experimental protocols should be designed with these methodological considerations in mind to minimize the error inherent in measuring isotope flux.

  9. Measurement error analysis of taxi meter

    Science.gov (United States)

    He, Hong; Li, Dan; Li, Hang; Zhang, Da-Jian; Hou, Ming-Feng; Zhang, Shi-pu

    2011-12-01

    The error test of the taximeter covers two aspects: (1) the time error of the taximeter and (2) the distance error of the machine in use. The paper first gives the working principle of the meter and the principle of the error verification device. Based on JJG517-2009, "Taximeter Verification Regulation", the paper focuses on analyzing the machine error and the test error of the taxi meter, and the detection methods for time error and distance error are discussed as well. Under the same conditions, standard uncertainty components (Class A) are evaluated, while under different conditions, standard uncertainty components (Class B) are also evaluated and measured repeatedly. Comparison and analysis of the results show that the meter accords with JJG517-2009, "Taximeter Verification Regulation", thereby improving accuracy and efficiency considerably. In practice, the meter not only makes up for the lack of accuracy but also ensures that the transaction between drivers and passengers is fair, enhancing the value of the taxi as a mode of transportation.

  10. Statistical error analysis of reactivity measurement

    Energy Technology Data Exchange (ETDEWEB)

    Thammaluckan, Sithisak; Hah, Chang Joo [KEPCO International Nuclear Graduate School, Ulsan (Korea, Republic of)

    2013-10-15

    After statistical analysis, it was confirmed that each group was sampled from the same population. It is observed in Table 7 that the mean error decreases as core size increases, and application of the bias factor obtained from this research reduces the mean error further. The point kinetics model had been used to measure control rod worth without 3D spatial information about the neutron flux or power distribution, which causes inaccurate results. Dynamic Control rod Reactivity Measurement (DCRM) was employed to take the 3D spatial information of the flux into account in the point kinetics model. The measured bank worth probably contains some uncertainty, such as methodology uncertainty and measurement uncertainty, and those uncertainties may vary with the size of the core and the magnitude of the reactivity. The goal of this research is to investigate the effect of core size and control rod worth magnitude on the error of reactivity measurement using statistics.

  11. Systematic Errors in Black Hole Mass Measurements

    Science.gov (United States)

    McConnell, Nicholas J.

    2014-01-01

    Compilations of stellar- and gas-dynamical measurements of supermassive black holes are often assembled without quantifying the systematic errors arising from the various assumptions in the dynamical modeling processes. Using a simple Monte Carlo approach, I will discuss the level to which different systematic effects could bias scaling relations between black holes and their host galaxies. Given that systematic errors will not be eradicated in the near future, how wrong can we afford to be?

  12. Quantifying and handling errors in instrumental measurements using the measurement error theory

    DEFF Research Database (Denmark)

    Andersen, Charlotte Møller; Bro, R.; Brockhoff, P.B.

    2003-01-01

    Measurement error modelling is used for investigating the influence of measurement/sampling error on univariate predictions of water content and water-holding capacity (reference measurements) from nuclear magnetic resonance (NMR) relaxations (instrumental) measured on two gadoid fish species. This is a new way of using the measurement error theory. Reliability ratios illustrate that the models for the two fish species are influenced differently by the error. However, the error seems to influence the predictions of the two reference measures in the same way. The effect of using replicated x-measurements... instrumental measurements. A new general formula is given for how to correct the least squares regression coefficient when a different number of replicated x-measurements is used for prediction than for calibration. It is shown that the correction should be applied when the number of replicates in prediction...

  13. Bivariate analysis of basal serum anti-Mullerian hormone measurements and human blastocyst development after IVF

    LENUS (Irish Health Repository)

    Sills, E Scott

    2011-12-02

    Background: To report on relationships among baseline serum anti-Müllerian hormone (AMH) measurements, blastocyst development and other selected embryology parameters observed in non-donor oocyte IVF cycles. Methods: Pre-treatment AMH was measured in patients undergoing IVF (n = 79) and retrospectively correlated to in vitro embryo development noted during culture. Results: Mean (± SD) age for patients in this study group was 36.3 ± 4.0 (range = 28-45) yrs, and mean (± SD) terminal serum estradiol during IVF was 5929 ± 4056 pmol/l. A moderate positive correlation (0.49; 95% CI 0.31 to 0.65) was noted between basal serum AMH and the number of MII oocytes retrieved. Similarly, a moderate positive correlation (0.44) was observed between serum AMH and the number of early cleavage-stage embryos (95% CI 0.24 to 0.61), suggesting a relationship between serum AMH and embryo development in IVF. Of note, baseline serum AMH levels were significantly different for patients who did and did not undergo blastocyst transfer (15.6 vs. 10.9 pmol/l; p = 0.029). Conclusions: While serum AMH has found increasing application as a predictor of ovarian reserve for patients prior to IVF, its role in estimating in vitro embryo morphology and the potential to advance to the blastocyst stage has not been extensively investigated. These data suggest that baseline serum AMH determinations can help forecast blastocyst development during IVF. Serum AMH measured before treatment may assist patients, clinicians and embryologists as scheduling of embryo transfer is outlined. Additional studies are needed to confirm these correlations and to better define the role of baseline serum AMH level in the prediction of blastocyst formation.

  14. Bivariate Uniform Deconvolution

    OpenAIRE

    Benešová, Martina; van Es, Bert; Tegelaar, Peter

    2011-01-01

    We construct a density estimator in the bivariate uniform deconvolution model. For this model we derive four inversion formulas to express the bivariate density that we want to estimate in terms of the bivariate density of the observations. By substituting a kernel density estimator of the density of the observations we then get four different estimators. Next we construct an asymptotically optimal convex combination of these four estimators. Expansions for the bias, variance, as well as asym...

  15. Measurement Error in Access to Markets

    OpenAIRE

    Javier Escobal; Sonia Laszlo

    2005-01-01

    Studies in the microeconometric literature increasingly utilize distance to or time to reach markets or social services as determinants of economic issues. These studies typically use self-reported measures from survey data, often characterized by non-classical measurement error. This paper is the first validation study of access to markets data. New and unique data from Peru allow comparison of self-reported variables with scientifically calculated variables. We investigate the determinants ...

  16. Multiple Indicators, Multiple Causes Measurement Error Models

    OpenAIRE

    Tekwe, Carmen D.; Carter, Randy L.; Cullings, Harry M.; Carroll, Raymond J.

    2014-01-01

    Multiple Indicators, Multiple Causes Models (MIMIC) are often employed by researchers studying the effects of an unobservable latent variable on a set of outcomes, when causes of the latent variable are observed. There are times however when the causes of the latent variable are not observed because measurements of the causal variable are contaminated by measurement error. The objectives of this paper are: (1) to develop a novel model by extending the classical linear MIMIC model to allow bot...

  17. Measurement Uncertainty Evaluation of Digital Modulation Quality Parameters: Magnitude Error and Phase Error

    Directory of Open Access Journals (Sweden)

    Zhan Zhiqiang

    2016-01-01

    In the traceability of digital modulation quality parameters, the error vector magnitude, magnitude error and phase error must be traced, and the measurement uncertainty of these parameters needs to be assessed. Although the calibration specification JJF1128-2004, Calibration Specification for Vector Signal Analyzers, has been published domestically, its measurement uncertainty evaluation is unreasonable: the parameters selected are incorrect, and not all error terms are included in the evaluation. This article lists the formulas for magnitude error and phase error, then presents the measurement uncertainty evaluation processes for both.
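
    The underlying quantities are straightforward to compute from paired reference and measured constellation points. The sketch below uses textbook rms definitions of EVM, magnitude error and phase error; the exact JJF1128-2004 formulas may differ in normalization.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 500

# Ideal QPSK reference symbols and a distorted "measured" version
# (gain, phase offset and noise values are invented for illustration).
ref = (rng.choice([-1, 1], n) + 1j * rng.choice([-1, 1], n)) / np.sqrt(2)
meas = (ref * 1.02 * np.exp(1j * 0.03)
        + 0.02 * (rng.normal(size=n) + 1j * rng.normal(size=n)))

p_ref = np.mean(np.abs(ref) ** 2)
evm = np.sqrt(np.mean(np.abs(meas - ref) ** 2) / p_ref) * 100      # % rms
mag_err = np.sqrt(np.mean((np.abs(meas) - np.abs(ref)) ** 2) / p_ref) * 100
phase_err = np.degrees(np.sqrt(np.mean(np.angle(meas * np.conj(ref)) ** 2)))

print(f"EVM {evm:.2f}% rms, magnitude error {mag_err:.2f}% rms, "
      f"phase error {phase_err:.2f} deg rms")
```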

  18. Errors in practical measurement in surveying, engineering, and technology

    International Nuclear Information System (INIS)

    This book discusses statistical measurement, error theory, and statistical error analysis. The topics of the book include an introduction to measurement, measurement errors, the reliability of measurements, probability theory of errors, measures of reliability, reliability of repeated measurements, propagation of errors in computing, errors and weights, practical application of the theory of errors in measurement, and two-dimensional errors, and the book includes a bibliography. Appendices address significant figures in measurement; basic concepts of probability and the normal probability curve; writing a sample specification for a procedure; classification, standards of accuracy, and general specifications of geodetic control surveys; the geoid; the frequency distribution curve; and the computer and calculator solution of problems

  19. The Bivariate Normal Copula

    OpenAIRE

    Christian Meyer

    2009-01-01

    We collect well known and less known facts about the bivariate normal distribution and translate them into copula language. In addition, we prove a very general formula for the bivariate normal copula, we compute Gini's gamma, and we provide improved bounds and approximations on the diagonal.
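
    In copula language, the bivariate normal distribution yields C(u, v) = Phi2(Phi^-1(u), Phi^-1(v); rho). A short numerical sketch, together with two standard concordance identities for this copula:

```python
import numpy as np
from scipy.stats import norm, multivariate_normal

def gauss_copula_cdf(u, v, rho):
    """Bivariate normal copula: C(u, v) = Phi2(Phi^-1(u), Phi^-1(v); rho)."""
    mvn = multivariate_normal(cov=[[1.0, rho], [rho, 1.0]])
    return mvn.cdf([norm.ppf(u), norm.ppf(v)])

rho = 0.7
print(f"C(0.3, 0.6) = {gauss_copula_cdf(0.3, 0.6, rho):.4f}")
# Well-known closed forms for this copula's concordance measures:
print(f"Kendall's tau  = {2.0 / np.pi * np.arcsin(rho):.4f}")
print(f"Spearman's rho = {6.0 / np.pi * np.arcsin(rho / 2.0):.4f}")
```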

  20. Morgenstern type bivariate Lindley Distribution

    Directory of Open Access Journals (Sweden)

    V S Vaidyanathan

    2016-06-01

    In this paper, a bivariate Lindley distribution using the Morgenstern approach is proposed, which can be used for modeling bivariate lifetime data. Some characteristics of the distribution, like the moment generating function, joint moments, Pearson correlation coefficient, survival function, hazard rate function, mean residual life function, vitality function and stress-strength parameter R = Pr(Y < X), are derived. ... error and relative absolute bias.
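
    Sampling from a Morgenstern (FGM) construction is simple because the conditional copula is quadratic in v; pairing that with numerically inverted Lindley marginals gives the sketch below. Parameter values are illustrative, and this is not the paper's derivation or estimation procedure.

```python
import numpy as np
from scipy.optimize import brentq

rng = np.random.default_rng(6)

def lindley_cdf(x, theta):
    return 1.0 - (1.0 + theta * x / (theta + 1.0)) * np.exp(-theta * x)

def lindley_ppf(u, theta):
    # Numerical inverse CDF; simple and safe for a sketch.
    return brentq(lambda x: lindley_cdf(x, theta) - u, 0.0, 200.0)

def sample_fgm(n, alpha):
    """(U, V) from the Morgenstern (FGM) copula
    C(u, v) = u v [1 + alpha (1 - u)(1 - v)], |alpha| <= 1."""
    u, w = rng.random(n), rng.random(n)
    a = alpha * (1.0 - 2.0 * u)
    b = 1.0 + a
    v = 2.0 * w / (b + np.sqrt(b * b - 4.0 * a * w))  # inverse conditional CDF
    return u, v

alpha, t1, t2 = 0.8, 1.0, 2.0   # illustrative parameters
u, v = sample_fgm(5_000, alpha)
x = np.array([lindley_ppf(ui, t1) for ui in u])
y = np.array([lindley_ppf(vi, t2) for vi in v])
print(f"sample corr(X, Y) = {np.corrcoef(x, y)[0, 1]:.3f}")  # modest, as FGM allows
```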

  1. REGIONAL DISTRIBUTION OF MEASUREMENT ERROR IN DTI

    OpenAIRE

    Marenco, Stefano; Rawlings, Robert; Rohde, Gustavo K.; Barnett, Alan S.; Honea, Robyn A.; Pierpaoli, Carlo; Weinberger, Daniel R.

    2006-01-01

    The characterization of measurement error is critical in assessing the significance of diffusion tensor imaging (DTI) findings in longitudinal and cohort studies of psychiatric disorders. We studied 20 healthy volunteers each one scanned twice (average interval between scans of 51 ± 46.8 days) with a single shot echo planar DTI technique. Inter-session variability for fractional anisotropy (FA) and Trace (D) was represented as absolute variation (standard deviation within subjects: SDw), perc...

  2. Bivariate extreme value distributions

    Science.gov (United States)

    Elshamy, M.

    1992-01-01

    In certain engineering applications, such as those occurring in the analyses of ascent structural loads for the Space Transportation System (STS), some of the load variables have a lower bound of zero. Thus, the need for practical models of bivariate extreme value probability distribution functions with lower limits was identified. We discuss the Gumbel models and present practical forms of bivariate extreme probability distributions of Weibull and Frechet types with two parameters. Bivariate extreme value probability distribution functions can be expressed in terms of the marginal extremal distributions and a 'dependence' function subject to certain analytical conditions. Properties of such bivariate extreme distributions, sums and differences of paired extremals, as well as the corresponding forms of conditional distributions, are discussed. Practical estimation techniques are also given.

  3. Orthogonality of inductosyn angle-measuring system error and error-separating technology

    Institute of Scientific and Technical Information of China (English)

    任顺清; 曾庆双; 王常虹

    2003-01-01

    Round inductosyn is widely used in inertial navigation test equipment, and its accuracy has a significant effect on the overall accuracy of the equipment. Four main errors of the round inductosyn, i.e. the first-order long-period (360°) harmonic error, the second-order long-period harmonic error, the first-order short-period harmonic error and the second-order short-period harmonic error, are described, and the orthogonality of these four kinds of errors is studied. An error-separating technology is proposed to separate these four kinds of errors, and in the process of separating the short-period harmonic errors, the arrangement in the order of the decimal part of the angle pitch number can be omitted. The effectiveness of the proposed technology is proved through measuring and adjusting the angular errors.

  4. Radiation risk estimation based on measurement error models

    CERN Document Server

    Masiuk, Sergii; Shklyar, Sergiy; Chepurny, Mykola; Likhtarov, Illya

    2016-01-01

    This monograph discusses statistics and risk estimates applied to radiation damage in the presence of measurement errors. The first part covers nonlinear measurement error models, with a particular emphasis on the efficiency of regression parameter estimators. In the second part, risk estimation in models with measurement errors is considered. The efficiency of the methods presented is verified using data from radio-epidemiological studies.

  5. Ordinal bivariate inequality

    DEFF Research Database (Denmark)

    Sonne-Schmidt, Christoffer Scavenius; Tarp, Finn; Østerdal, Lars Peter Raahave

    This paper introduces a concept of inequality comparisons with ordinal bivariate categorical data. In our model, one population is more unequal than another when they have common arithmetic median outcomes and the first can be obtained from the second by correlation-increasing switches and/or median

  6. Bivariate value-at-risk

    Directory of Open Access Journals (Sweden)

    Giuseppe Arbia

    2007-10-01

    In this paper we extend the concept of value-at-risk (VaR) to bivariate return distributions in order to obtain measures of the market risk of an asset that take into account additional features linked to downside risk exposure. We first present a general definition of risk as the probability of an adverse event over a random distribution, and we then introduce a measure of market risk (β-VaR) that admits the traditional β of an asset in portfolio management as a special case when asset returns are normally distributed. Empirical evidence is provided using Italian stock market data.

  7. On positive bivariate quartic forms

    OpenAIRE

    Sharipov, Ruslan

    2015-01-01

    A bivariate quartic form is a homogeneous bivariate polynomial of degree four. A criterion of positivity for such a form is known. In the present paper this criterion is reformulated in terms of pseudotensorial invariants of the form.

  8. Error analysis and data reduction for interferometric surface measurements

    Science.gov (United States)

    Zhou, Ping

    High-precision optical systems are generally tested using interferometry, since it often is the only way to achieve the desired measurement precision and accuracy. Interferometers can generally measure a surface to an accuracy of one hundredth of a wave. In order to achieve an accuracy to the next order of magnitude, one thousandth of a wave, each error source in the measurement must be characterized and calibrated. Errors in interferometric measurements are classified into random errors and systematic errors. An approach to estimate random errors in the measurement is provided, based on the variation in the data. Systematic errors, such as retrace error, imaging distortion, and error due to diffraction effects, are also studied in this dissertation. Methods to estimate the first order geometric error and errors due to diffraction effects are presented. Interferometer phase modulation transfer function (MTF) is another intrinsic error. The phase MTF of an infrared interferometer is measured with a phase Siemens star, and a Wiener filter is designed to recover the middle spatial frequency information. Map registration is required when there are two maps tested in different systems and one of these two maps needs to be subtracted from the other. Incorrect mapping causes wavefront errors. A smoothing filter method is presented which can reduce the sensitivity to registration error and improve the overall measurement accuracy. Interferometric optical testing with computer-generated holograms (CGH) is widely used for measuring aspheric surfaces. The accuracy of the drawn pattern on a hologram decides the accuracy of the measurement. Uncertainties in the CGH manufacturing process introduce errors in holograms and then the generated wavefront. An optimal design of the CGH is provided which can reduce the sensitivity to fabrication errors and give good diffraction efficiency for both chrome-on-glass and phase etched CGHs.

  9. Modeling Errors in Daily Precipitation Measurements: Additive or Multiplicative?

    Science.gov (United States)

    Tian, Yudong; Huffman, George J.; Adler, Robert F.; Tang, Ling; Sapiano, Matthew; Maggioni, Viviana; Wu, Huan

    2013-01-01

    The definition and quantification of uncertainty depend on the error model used. For uncertainties in precipitation measurements, two types of error models have been widely adopted: the additive error model and the multiplicative error model. This leads to incompatible specifications of uncertainties and impedes intercomparison and application. In this letter, we assess the suitability of both models for satellite-based daily precipitation measurements in an effort to clarify the uncertainty representation. Three criteria were employed to evaluate the applicability of either model: (1) better separation of the systematic and random errors; (2) applicability to the large range of variability in daily precipitation; and (3) better predictive skill. It is found that the multiplicative error model is a much better choice under all three criteria. It extracted the systematic errors more cleanly, was more consistent with the large variability of precipitation measurements, and produced superior predictions of the error characteristics. The additive error model had several weaknesses, such as nonconstant variance resulting from systematic errors leaking into random errors, and a lack of predictive capability. Therefore, the multiplicative error model is the better choice.
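
    The heteroscedasticity criterion is easy to visualize in simulation: when data are generated with multiplicative errors, additive residuals grow with the truth while log residuals do not. A sketch with invented parameters:

```python
import numpy as np

rng = np.random.default_rng(7)
n = 50_000

# Synthetic "truth" spanning the wide dynamic range of daily precipitation.
truth = rng.lognormal(mean=1.0, sigma=1.2, size=n)

# Measurements with a multiplicative error structure: Y = beta * X * exp(eps).
y = 0.9 * truth * np.exp(rng.normal(0.0, 0.4, n))

resid_add = y - truth                    # additive-model residuals
resid_mul = np.log(y) - np.log(truth)    # multiplicative-model residuals

lo, hi = truth < np.median(truth), truth >= np.median(truth)
print(f"additive residual SD: low half {resid_add[lo].std():.2f}, "
      f"high half {resid_add[hi].std():.2f}")   # grows with intensity
print(f"log residual SD:      low half {resid_mul[lo].std():.2f}, "
      f"high half {resid_mul[hi].std():.2f}")   # roughly constant
```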

  10. Ordinal Bivariate Inequality

    DEFF Research Database (Denmark)

    Sonne-Schmidt, Christoffer Scavenius; Tarp, Finn; Østerdal, Lars Peter Raahave

    2016-01-01

    This paper introduces a concept of inequality comparisons with ordinal bivariate categorical data. In our model, one population is more unequal than another when they have common arithmetic median outcomes and the first can be obtained from the second by correlation-increasing switches and/or median-preserving spreads. For the canonical 2 × 2 case (with two binary indicators), we derive a simple operational procedure for checking ordinal inequality relations in practice. As an illustration, we apply the model to childhood deprivation in Mozambique.

  11. Slope Error Measurement Tool for Solar Parabolic Trough Collectors: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Stynes, J. K.; Ihas, B.

    2012-04-01

    The National Renewable Energy Laboratory (NREL) has developed an optical measurement tool for parabolic solar collectors that measures the combined errors due to absorber misalignment and reflector slope error. The combined absorber alignment and reflector slope errors are measured using a digital camera to photograph the reflected image of the absorber in the collector. Previous work using the image of the reflection of the absorber finds the reflector slope errors from the reflection of the absorber and an independent measurement of the absorber location. The accuracy of the reflector slope error measurement is thus dependent on the accuracy of the absorber location measurement. By measuring the combined reflector-absorber errors, the uncertainty in the absorber location measurement is eliminated. The related performance merit, the intercept factor, depends on the combined effects of the absorber alignment and reflector slope errors. Measuring the combined effect provides a simpler measurement and a more accurate input to the intercept factor estimate. The minimal equipment and setup required for this measurement technique make it ideal for field measurements.

  12. Median Unbiased Estimation of Bivariate Predictive Regression Models with Heavy-tailed or Heteroscedastic Errors%具有重尾或异方差误差的双变量预测回归模型的中位无偏估计

    Institute of Scientific and Technical Information of China (English)

    朱复康; 王德军

    2007-01-01

    In this paper, we consider median unbiased estimation of bivariate predictive regression models with non-normal, heavy-tailed or heteroscedastic errors. We construct confidence intervals and a median unbiased estimator for the parameter of interest. We show via simulation that the proposed estimator has better predictive potential than the usual least squares estimator. An empirical application to finance is given, and a possible extension of the estimation procedure to cointegration models is also described.

  13. Triphasic MRI of pelvic organ descent: sources of measurement error

    Energy Technology Data Exchange (ETDEWEB)

    Morren, Geert L. [Bowel and Digestion Centre, The Oxford Clinic, 38 Oxford Terrace, Christchurch (New Zealand)]. E-mail: geert_morren@hotmail.com; Balasingam, Adrian G. [Christchurch Radiology Group, P.O. Box 21107, 4th Floor, Leicester House, 291 Madras Street, Christchurch (New Zealand); Wells, J. Elisabeth [Department of Public Health and General Medicine, Christchurch School of Medicine, St. Elmo Courts, Christchurch (New Zealand); Hunter, Anne M. [Christchurch Radiology Group, P.O. Box 21107, 4th Floor, Leicester House, 291 Madras Street, Christchurch (New Zealand); Coates, Richard H. [Christchurch Radiology Group, P.O. Box 21107, 4th Floor, Leicester House, 291 Madras Street, Christchurch (New Zealand); Perry, Richard E. [Bowel and Digestion Centre, The Oxford Clinic, 38 Oxford Terrace, Christchurch (New Zealand)

    2005-05-01

    Purpose: To identify sources of error when measuring pelvic organ displacement during straining using triphasic dynamic magnetic resonance imaging (MRI). Materials and methods: Ten healthy nulliparous women underwent triphasic dynamic 1.5 T pelvic MRI twice, with 1 week between studies. The bladder was filled with 200 ml of a saline solution, and the vagina and rectum were opacified with ultrasound gel. T2-weighted images in the sagittal plane were analysed twice by each of the two observers in a blinded fashion. Horizontal and vertical displacement of the bladder neck, bladder base, introitus vaginae, posterior fornix, cul-de-sac, pouch of Douglas, anterior rectal wall and anorectal junction, and the change of the vaginal axis, were measured eight times in each volunteer (two images, each read twice by two observers). Variance components were calculated for subject, observer, week, interactions of these three factors, and pure error. An overall standard error of measurement was calculated for a single observation by one observer on a film from one woman at one visit. Results: For the majority of anatomical reference points, the range of displacements measured was wide and the overall measurement error was large. Intra-observer error and week-to-week variation within a subject were important sources of measurement error. Conclusion: Important sources of measurement error when using triphasic dynamic MRI to measure pelvic organ displacement during straining were identified. Recommendations to minimize those errors are made.

  14. Quantum Estimation Theory of Error and Disturbance in Quantum Measurement

    OpenAIRE

    Watanabe, Yu; Ueda, Masahito

    2011-01-01

    We formulate the error and disturbance in quantum measurement by invoking quantum estimation theory. The disturbance formulated here characterizes the non-unitary state change caused by the measurement. We prove that the product of the error and disturbance is bounded from below by the commutator of the observables. We also find the attainable bound of the product.

  15. A measurement error model for microarray data analysis

    Institute of Scientific and Technical Information of China (English)

    ZHOU Yiming; CHENG Jing

    2005-01-01

    Microarray technology has been widely used to analyze gene expression levels by detecting fluorescence intensity in a high-throughput fashion. However, since the measurement error produced by various sources in microarray experiments is heterogeneous and too large to be ignored, we propose here a measurement error model for microarray data processing, in which the standard deviation of the measurement error is demonstrated to increase linearly with fluorescence intensity. A robust algorithm, which estimates the parameters of the measurement error model from a single microarray without replicated spots, is provided. The model and the algorithm for estimating its parameters from a given data set are tested on both real and simulated data sets, and the results have proven satisfactory. Combining the measurement error model with the traditional Z-test method, a full statistical model has been developed that can significantly improve the statistical inference for identifying differentially expressed genes.
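
    A self-self two-channel simulation shows how an error SD of the form sigma(I) = a + b*I can be recovered without replicated spots; the binned-MAD fit below is only a stand-in for the paper's robust algorithm, and all numbers are invented.

```python
import numpy as np

rng = np.random.default_rng(8)
n, a, b = 20_000, 20.0, 0.1                # sigma(I) = a + b*I (invented)

intensity = rng.uniform(50.0, 5000.0, n)   # true spot intensities
sigma = a + b * intensity
ch1 = intensity + rng.normal(0.0, sigma)   # two channels share the truth,
ch2 = intensity + rng.normal(0.0, sigma)   # so their difference is pure error

avg, diff = 0.5 * (ch1 + ch2), ch1 - ch2
bins = np.array_split(np.argsort(avg), 100)   # bin spots by average intensity
centers = np.array([avg[i].mean() for i in bins])
# Robust local spread of the channel difference; SD(diff) = sqrt(2) * sigma(I).
spreads = np.array([1.4826 * np.median(np.abs(diff[i] - np.median(diff[i])))
                    for i in bins]) / np.sqrt(2)
b_hat, a_hat = np.polyfit(centers, spreads, 1)
print(f"estimated sigma(I) = {a_hat:.1f} + {b_hat:.3f}*I (truth: 20.0 + 0.100*I)")
```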

  16. Bivariate Exponentiated Modified Weibull Extension

    OpenAIRE

    El-Gohary, A.; El-Morshedy, M.

    2015-01-01

    In this paper, we introduce a new bivariate distribution, which we call the bivariate exponentiated modified Weibull extension distribution (BEMWE). The model introduced here is of Marshall-Olkin type. The marginals of the new bivariate distribution have the exponentiated modified Weibull extension distribution proposed by Sarhan et al. (2013). The joint probability density function and the joint cumulative distribution function are in closed forms. Several properties of this distribution have...

  17. The error analysis and online measurement of linear slide motion error in machine tools

    Science.gov (United States)

    Su, H.; Hong, M. S.; Li, Z. J.; Wei, Y. L.; Xiong, S. B.

    2002-06-01

    A new accurate two-probe time-domain method is put forward to measure the straight-going component of motion error in machine tools. The non-periodic and non-closing characteristics of the straightness profile error are liable to bring about higher-order harmonic distortion in the measurement results. However, this distortion can be avoided by the new accurate two-probe time-domain method through a symmetry continuation algorithm, uniformity, and the least squares method. The harmonic suppression is analysed in detail using modern control theory. Both the straight-going component of motion error in machine tools and the profile error of a workpiece manufactured on the machine can be measured at the same time. All of this information is available to diagnose the origin of faults in machine tools. The analysis result is proved correct through experiment.

  18. Measurement error caused by spatial misalignment in environmental epidemiology

    OpenAIRE

    Gryparis, A; Paciorek, CJ; Zeka, A; Schwartz, J; Coull, BA

    2008-01-01

    In many environmental epidemiology studies, the locations and/or times of exposure measurements and health assessments do not match. In such settings, health effects analyses often use the predictions from an exposure model as a covariate in a regression model. Such exposure predictions contain some measurement error as the predicted values do not equal the true exposures. We provide a framework for spatial measurement error modeling, showing that smoothing induces a Berkson-type measurement ...

  19. Improvement of method of estimating the measurement errors

    International Nuclear Information System (INIS)

    NMCC estimates random and systematic measurement errors based on the operator's and the inspector's data in order to evaluate the operator's measurement performance, as one part of our work. We use these estimated measurement errors in other work, including the evaluation of MUF (material unaccounted for), significance tests of operator-inspector differences, and so on. Therefore, accurate estimation of measurement errors is important for evaluating the operator's declared data. The proposed method always provides positive error variances and yields probability density functions of the error variances, whereas the current method provides error variances as point estimates that are sometimes negative. The method allows operators to evaluate measurement errors using their own data sets for comparison with International Target Values (ITVs). We tested the performance of the method by simulation, in order to select the best estimate of the operator's measurement errors from the values provided by the probability density function (such as the median or the expected value) and to check whether the provided confidence interval was valid. In addition, we tested the practical performance of the method by confirming its consistency with the current method. (author)

  20. ERROR COMPENSATION OF COORDINATE MEASURING MACHINES WITH LOW STIFFNESS

    Institute of Scientific and Technical Information of China (English)

    2000-01-01

    A technique for compensating the errors of coordinate measuring machines (CMMs) with low stiffness is proposed. Some additional items related to the force deformation are introduced into the error compensation equations. The research was carried out on a moving-column horizontal-arm CMM. Experimental results show that the effects of both systematic components of error motions and force deformations are greatly reduced, which shows the effectiveness of the proposed technique.

  1. Sampling errors in rainfall measurements by weather radar

    OpenAIRE

    Piccolo, F.; G. B. Chirico

    2005-01-01

    Radar rainfall data are affected by several types of error. Besides the error in the measurement of the rainfall reflectivity and its transformation into rainfall intensity, random errors can be generated by the temporal spacing of the radar scans. The aim of this work is to analyze the sensitivity of the estimated rainfall maps to the radar sampling interval, i.e. the time interval between two consecutive radar scans. This analysis has been performed employing data c...

  2. Testing for a Single-Factor Stochastic Volatility in Bivariate Series

    Directory of Open Access Journals (Sweden)

    Masaru Chiba

    2013-12-01

    This paper proposes the Lagrange multiplier test for the null hypothesis that the bivariate time series has only a single common stochastic volatility factor and no idiosyncratic volatility factor. The test statistic is derived by representing the model in a linear state-space form under the assumption that the log of squared measurement error is normally distributed. The empirical size and power of the test are examined in Monte Carlo experiments. We apply the test to the Asian stock market indices.

  3. A Bayesian semiparametric model for bivariate sparse longitudinal data.

    Science.gov (United States)

    Das, Kiranmoy; Li, Runze; Sengupta, Subhajit; Wu, Rongling

    2013-09-30

    Mixed-effects models have recently become popular for analyzing sparse longitudinal data that arise naturally in biological, agricultural and biomedical studies. Traditional approaches assume independent residuals over time and explain the longitudinal dependence by random effects. However, when bivariate or multivariate traits are measured longitudinally, this fundamental assumption is likely to be violated because of intertrait dependence over time. We provide a more general framework where the dependence of the observations from the same subject over time is not assumed to be explained completely by the random effects of the model. We propose a novel, mixed model-based approach and estimate the error-covariance structure nonparametrically under a generalized linear model framework. We use penalized splines to model the general effect of time, and we consider a Dirichlet process mixture of normal prior for the random-effects distribution. We analyze blood pressure data from the Framingham Heart Study where body mass index, gender and time are treated as covariates. We compare our method with traditional methods including parametric modeling of the random effects and independent residual errors over time. We conduct extensive simulation studies to investigate the practical usefulness of the proposed method. The current approach is very helpful in analyzing bivariate irregular longitudinal traits. PMID:23553747

  4. Valuation Biases, Error Measures, and the Conglomerate Discount

    NARCIS (Netherlands)

    I. Dittmann (Ingolf); E.G. Maug (Ernst)

    2006-01-01

    We document the importance of the choice of error measure (percentage vs. logarithmic errors) for the comparison of alternative valuation procedures. We demonstrate for several multiple valuation methods (averaging with the arithmetic mean, harmonic mean, median, geometric mean) that the

  5. Method for online measurement of optical current transformer onsite errors

    International Nuclear Information System (INIS)

    This paper describes a method for the online measurement of the onsite errors of an optical current transformer (OCT), using a conventional electromagnetic current transformer (CT) as the reference transformer. The OCT under measurement is connected in series with the reference electromagnetic CT in the same line bay. The secondary output signals of the OCT and the electromagnetic CT are simultaneously collected and processed using a digital signal processing technique. Tests on a prototype clearly indicate that the method is very suitable for measuring the errors of the OCT onsite without interrupting service. The onsite error characteristics of the OCT are analyzed, as well as their stability and repeatability. (paper)

  6. Haplotype reconstruction error as a classical misclassification problem: introducing sensitivity and specificity as error measures.

    Directory of Open Access Journals (Sweden)

    Claudia Lamina

    BACKGROUND: Statistically reconstructing haplotypes from single nucleotide polymorphism (SNP) genotypes can lead to falsely classified haplotypes. This can be an issue when interpreting haplotype association results or when selecting subjects with certain haplotypes for subsequent functional studies. It was our aim to quantify haplotype reconstruction error and to provide tools for it. METHODS AND RESULTS: By numerous simulation scenarios, we systematically investigated several error measures, including discrepancy, error rate, and R², and introduced sensitivity and specificity to this context. We exemplified several measures in the KORA study, a large population-based study from Southern Germany. We find that the specificity is slightly reduced only for common haplotypes, while the sensitivity was decreased for some, but not all, rare haplotypes. The overall error rate generally increased with increasing number of loci, increasing minor allele frequency of SNPs, decreasing correlation between the alleles and increasing ambiguity. CONCLUSIONS: We conclude that, with the analytical approach presented here, haplotype-specific error measures can be computed to gain insight into the haplotype uncertainty. This method provides the information on whether a specific risk haplotype can be expected to be reconstructed with essentially no or with high misclassification, and thus on the magnitude of expected bias in association estimates. We also illustrate that sensitivity and specificity separate two dimensions of the haplotype reconstruction error, which completely describe the misclassification matrix and thus provide the prerequisite for methods accounting for misclassification.
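
    As a concrete illustration of sensitivity and specificity in this setting, the sketch below treats "carries a given haplotype" as the positive class; the function name and toy data are hypothetical, not taken from the KORA analysis.

```python
# Sketch: haplotype-specific sensitivity and specificity, treating
# "carries `haplotype`" as the positive class. Toy data, illustrative only.

def haplotype_sens_spec(true_haps, reconstructed_haps, haplotype):
    """Compare statistically reconstructed haplotypes against the truth."""
    tp = fp = tn = fn = 0
    for truth, recon in zip(true_haps, reconstructed_haps):
        is_true, is_recon = truth == haplotype, recon == haplotype
        if is_true and is_recon:
            tp += 1          # correctly reconstructed carrier
        elif is_recon:
            fp += 1          # falsely assigned the haplotype
        elif is_true:
            fn += 1          # carrier missed by the reconstruction
        else:
            tn += 1          # correctly reconstructed non-carrier
    sensitivity = tp / (tp + fn) if tp + fn else float("nan")
    specificity = tn / (tn + fp) if tn + fp else float("nan")
    return sensitivity, specificity

true_haps = ["AC", "AC", "AG", "GC", "AG", "AC"]
recon_haps = ["AC", "AG", "AG", "GC", "AC", "AC"]
print(haplotype_sens_spec(true_haps, recon_haps, "AC"))  # (0.667, 0.667)
```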

  7. Measurement errors in cirrus cloud microphysical properties

    Directory of Open Access Journals (Sweden)

    H. Larsen

    The limited accuracy of current cloud microphysics sensors used in cirrus cloud studies imposes limitations on the use of the data to examine the cloud's broadband radiative behaviour, an important element of the global energy balance. We review the limitations of the instruments (PMS probes) most widely used for measuring the microphysical structure of cirrus clouds and show the effect of these limitations on descriptions of the cloud radiative properties. The analysis is applied to measurements made as part of the European Cloud and Radiation Experiment (EUCREX) to determine mid-latitude cirrus microphysical and radiative properties.

    Key words. Atmospheric composition and structure (cloud physics and chemistry) · Meteorology and atmospheric dynamics · Radiative processes · Instruments and techniques

  8. ON GENERALIZED SARMANOV BIVARIATE DISTRIBUTIONS

    OpenAIRE

    G. Jay Kerns

    2011-01-01

    A class of bivariate distributions which generalizes the Sarmanov class is introduced. This class possesses a simple analytical form and desirable dependence properties. The admissible range of the association parameter for given bivariate distributions is derived, and the range of correlation coefficients is also presented.

  9. Application of Peano Kernels Theorem to Bivariate Product Cubature

    International Nuclear Information System (INIS)

    We apply the Peano kernel theorem to derive error estimates for bivariate self-validating integration based on product cubature rules. This application enables adaptive local error estimates. We demonstrate the characteristics and effectiveness of our method by comparing it with a conventional integrator.

  10. Reduction of statistic error in Mihalczo subcriticality measurement

    Energy Technology Data Exchange (ETDEWEB)

    Hazama, Taira [Power Reactor and Nuclear Fuel Development Corp., Oarai, Ibaraki (Japan). Oarai Engineering Center

    1998-08-01

    The theoretical formula for the statistical error estimation in the Mihalczo method was derived, and the dependence of the error on the facility to be measured and on the parameters of the data analysis was investigated. The formula was derived based on reactor noise theory and the error theory for frequency analysis; the error was found to depend on such parameters as the prompt neutron decay constant, detector efficiencies, and the frequency bandwidth. Statistical errors estimated with the formula were compared with experimental values and verified to be reasonable. Through parameter surveys, it was found that there is an optimum combination of parameters to reduce the magnitude of the errors. In the experiment performed in the DCA subcriticality measurement facility, it was estimated that the measurement requires 20 minutes to obtain a statistical error of 1% for keff = 0.9. According to the error theory, this might be reduced to 3 seconds in the aqueous fuel system typical of a fuel reprocessing plant. (J.P.N.)

  11. ASSESSING THE DYNAMIC ERRORS OF COORDINATE MEASURING MACHINES

    Institute of Scientific and Technical Information of China (English)

    1998-01-01

    The main factors affecting the dynamic errors of coordinate measuring machines are analyzed. It is pointed out that there are two main contributors to the dynamic errors: one is the rotation of the elements around the joints connected with air bearings, and the other is the bending of the elements caused by dynamic inertial forces. A method for obtaining the displacement errors at the probe position from the dynamic rotational errors is presented. The dynamic rotational errors are measured with inductive position sensors and a laser interferometer. The theoretical and experimental results both show that during fast probing, due to the dynamic inertial forces, there is not only large rotation of the elements around the joints connected with air bearings but also large bending of the weak elements themselves.

  12. Error tolerance of topological codes with independent bit-flip and measurement errors

    Science.gov (United States)

    Andrist, Ruben S.; Katzgraber, Helmut G.; Bombin, H.; Martin-Delgado, M. A.

    2016-07-01

    Topological quantum error correction codes are currently among the most promising candidates for efficiently dealing with the decoherence effects inherently present in quantum devices. Numerically, their theoretical error threshold can be calculated by mapping the underlying quantum problem to a related classical statistical-mechanical spin system with quenched disorder. Here, we present results for the general fault-tolerant regime, where we consider both qubit and measurement errors. However, unlike in previous studies, here we vary the strength of the different error sources independently. Our results highlight peculiar differences between toric and color codes. This study complements previous results published in New J. Phys. 13, 083006 (2011), 10.1088/1367-2630/13/8/083006.

  13. An introduction to the measurement errors and data handling

    International Nuclear Information System (INIS)

    Some common methods for estimating and correlating measurement errors are presented, together with an introduction to the theory of parameter determination and the goodness of the estimates. Some examples are discussed. (author)

  14. Ionospheric error analysis in gps measurements

    Directory of Open Access Journals (Sweden)

    G. Pugliano

    2008-06-01

    The results of an experiment aimed at evaluating the effects of the ionosphere on GPS positioning applications are presented in this paper. Specifically, the study, based upon a differential approach, was conducted utilizing GPS measurements acquired by various receivers located at increasing inter-distances. The experimental research was developed upon the basis of two groups of baselines: the first group is comprised of "short" baselines (less than 10 km); the second group is characterized by greater distances (up to 90 km). The obtained results were compared either upon the basis of the geometric characteristics, for six different baseline lengths, using 24 hours of data, or upon temporal variations, by examining two periods of differing intensity in ionospheric activity, respectively coinciding with the maximum of the 23rd solar cycle and with conditions of low ionospheric activity. The analysis revealed variations in terms of inter-distance as well as different performances primarily owing to temporal modifications in the state of the ionosphere.

  15. Measuring worst-case errors in a robot workcell

    International Nuclear Information System (INIS)

    Errors in model parameters, sensing, and control are inevitably present in real robot systems. These errors must be considered in order to automatically plan robust solutions to many manipulation tasks. Lozano-Perez, Mason, and Taylor proposed a formal method for synthesizing robust actions in the presence of uncertainty; this method has been extended by several subsequent researchers. All of these results presume the existence of worst-case error bounds that describe the maximum possible deviation between the robot's model of the world and reality. This paper examines the problem of measuring these error bounds for a real robot workcell. These measurements are difficult, because of the desire to completely contain all possible deviations while avoiding bounds that are overly conservative. The authors present a detailed description of a series of experiments that characterize and quantify the possible errors in visual sensing and motion control for a robot workcell equipped with standard industrial robot hardware. In addition to providing a means for measuring these specific errors, these experiments shed light on the general problem of measuring worst-case errors

  16. Income convergence in South Africa: Fact or measurement error?

    OpenAIRE

    Lechtenfeld, Tobias; Zoch, Asmus

    2014-01-01

    This paper asks whether income mobility in South Africa over the last decade has indeed been as impressive as currently thought. Using new national panel data (NIDS), substantial measurement error in reported income data is found, which is further corroborated by a provincial income data panel (KIDS). By employing an instrumental variables approach using two different instruments, measurement error can be quantified. Specifically, self-reported income in the survey data is shown to suffer fro...

  17. Sample size and power calculations for correlations between bivariate longitudinal data

    OpenAIRE

    Comulada, W. Scott; Weiss, Robert E.

    2010-01-01

    The analysis of a baseline predictor with a longitudinally measured outcome is well established and sample size calculations are reasonably well understood. Analysis of bivariate longitudinally measured outcomes is gaining in popularity and methods to address design issues are required. The focus in a random effects model for bivariate longitudinal outcomes is on the correlations that arise between the random effects and between the bivariate residuals. In the bivariate random effects model, ...

  18. Comparing Measurement Error between Two Different Methods of Measurement of Various Magnitudes

    Science.gov (United States)

    Zavorsky, Gerald S.

    2010-01-01

    Measurement error is a common problem in several fields of research such as medicine, physiology, and exercise science. The standard deviation of repeated measurements on the same person is the measurement error. One way of presenting measurement error is called the repeatability, which is 2.77 multiplied by the within subject standard deviation.…
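
    A minimal sketch of the definition quoted above, assuming balanced repeated trials per subject; the data are invented for illustration. The factor 2.77 is approximately 1.96·√2, the 95% limit for the difference between two repeated measurements.

```python
# Sketch: within-subject SD (Sw) and repeatability = 2.77 * Sw.
# Toy data: two repeated trials per subject; values are illustrative.
import math

def within_subject_sd(repeated):
    """repeated: list of per-subject lists of repeated measurements."""
    variances = []
    for vals in repeated:
        m = sum(vals) / len(vals)
        variances.append(sum((v - m) ** 2 for v in vals) / (len(vals) - 1))
    return math.sqrt(sum(variances) / len(variances))  # root mean within-var

data = [[12.1, 12.4], [11.8, 11.5], [13.0, 13.3]]
sw = within_subject_sd(data)
print(f"Sw = {sw:.3f}, repeatability = {2.77 * sw:.3f}")
```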

  19. Measurement error caused by spatial misalignment in environmental epidemiology

    Science.gov (United States)

    Gryparis, Alexandros; Paciorek, Christopher J.; Zeka, Ariana; Schwartz, Joel; Coull, Brent A.

    2009-01-01

    In many environmental epidemiology studies, the locations and/or times of exposure measurements and health assessments do not match. In such settings, health effects analyses often use the predictions from an exposure model as a covariate in a regression model. Such exposure predictions contain some measurement error as the predicted values do not equal the true exposures. We provide a framework for spatial measurement error modeling, showing that smoothing induces a Berkson-type measurement error with nondiagonal error structure. From this viewpoint, we review the existing approaches to estimation in a linear regression health model, including direct use of the spatial predictions and exposure simulation, and explore some modified approaches, including Bayesian models and out-of-sample regression calibration, motivated by measurement error principles. We then extend this work to the generalized linear model framework for health outcomes. Based on analytical considerations and simulation results, we compare the performance of all these approaches under several spatial models for exposure. Our comparisons underscore several important points. First, exposure simulation can perform very poorly under certain realistic scenarios. Second, the relative performance of the different methods depends on the nature of the underlying exposure surface. Third, traditional measurement error concepts can help to explain the relative practical performance of the different methods. We apply the methods to data on the association between levels of particulate matter and birth weight in the greater Boston area. PMID:18927119
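
    The Berkson-versus-classical contrast that drives much of this analysis shows up in a few lines of simulation: classical error in a covariate attenuates a linear-regression slope, while Berkson error leaves it unbiased. This is a textbook illustration under assumed models, not the paper's spatial analysis.

```python
# Sketch: classical vs Berkson covariate measurement error in OLS.
# All distributions and sample sizes are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
n, beta = 100_000, 2.0

def ols_slope(x, y):
    return np.cov(x, y, bias=True)[0, 1] / np.var(x)

# Classical error: we observe w = x + u and regress y on w.
x = rng.normal(0.0, 1.0, n)
y = beta * x + rng.normal(0.0, 1.0, n)
w = x + rng.normal(0.0, 1.0, n)

# Berkson error: the truth scatters around the assigned exposure z
# (e.g., a smoothed exposure surface), x_b = z + u, and we regress on z.
z = rng.normal(0.0, 1.0, n)
x_b = z + rng.normal(0.0, 0.5, n)
y_b = beta * x_b + rng.normal(0.0, 1.0, n)

print(ols_slope(w, y))    # ~1.0: attenuated by var(x)/(var(x)+var(u)) = 0.5
print(ols_slope(z, y_b))  # ~2.0: Berkson error does not bias the slope
```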

  20. Detection and Classification of Measurement Errors in Bioimpedance Spectroscopy.

    Science.gov (United States)

    Ayllón, David; Gil-Pita, Roberto; Seoane, Fernando

    2016-01-01

    Bioimpedance spectroscopy (BIS) measurement errors may be caused by parasitic stray capacitance, impedance mismatch, cross-talking or their very likely combination. An accurate detection and identification is of extreme importance for further analysis because in some cases and for some applications, certain measurement artifacts can be corrected, minimized or even avoided. In this paper we present a robust method to detect the presence of measurement artifacts and identify what kind of measurement error is present in BIS measurements. The method is based on supervised machine learning and uses a novel set of generalist features for measurement characterization in different immittance planes. Experimental validation has been carried out using a database of complex spectra BIS measurements obtained from different BIS applications and containing six different types of errors, as well as error-free measurements. The method obtained a low classification error (0.33%) and has shown good generalization. Since both the features and the classification schema are relatively simple, the implementation of this pre-processing task in the current hardware of bioimpedance spectrometers is possible. PMID:27362862

  1. Measurement uncertainty evaluation of conicity error inspected on CMM

    Science.gov (United States)

    Wang, Dongxia; Song, Aiguo; Wen, Xiulan; Xu, Youxiong; Qiao, Guifang

    2016-01-01

    The cone is widely used in mechanical design for rotation, centering and fixing. Whether the conicity error can be measured and evaluated accurately will directly influence its assembly accuracy and working performance. According to the new-generation geometrical product specification (GPS), the error and its measurement uncertainty should be evaluated together. The mathematical model of the minimum zone conicity error is established, and an improved immune evolutionary algorithm (IIEA) is proposed to search for the conicity error. In the IIEA, initial antibodies are first generated by using quasi-random sequences and two kinds of affinities are calculated. Then, each antibody clone is generated and self-adaptively mutated so as to maintain diversity. Similar antibodies are suppressed and new random antibodies are generated. Because the mathematical model of the conicity error is strongly nonlinear and the input quantities are not independent, it is difficult to use the Guide to the Expression of Uncertainty in Measurement (GUM) method to evaluate measurement uncertainty. An adaptive Monte Carlo method (AMCM) is proposed to estimate measurement uncertainty, in which the number of Monte Carlo trials is selected adaptively and the quality of the numerical results is directly controlled. The cone part was machined on a CK6140 lathe and measured on a Miracle NC 454 coordinate measuring machine (CMM). The experimental results confirm that the proposed method not only can search for the approximate solution of the minimum zone conicity error (MZCE) rapidly and precisely, but also can evaluate measurement uncertainty and give control variables with an expected numerical tolerance. The conicity errors computed by the proposed method are 20%-40% smaller than those computed by the NC454 CMM software, and the evaluation accuracy improves significantly.
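
    The adaptive Monte Carlo idea, running batches of trials until the numerical uncertainty stabilizes, can be sketched as below in the spirit of GUM Supplement 1; the model function, tolerance and batch size are illustrative assumptions, not the authors' AMCM implementation.

```python
# Sketch of an adaptive Monte Carlo uncertainty evaluation: accumulate
# batches of trials until the standard uncertainty settles within `tol`.
# Model, tolerance and batch size are assumptions for illustration.
import numpy as np

def adaptive_mc(model, draw_inputs, tol=1e-4, batch=10_000, max_batches=100):
    rng = np.random.default_rng(1)
    samples = np.array([])
    prev_u = None
    for _ in range(max_batches):
        samples = np.concatenate([samples, model(*draw_inputs(rng, batch))])
        u = samples.std(ddof=1)               # standard uncertainty estimate
        if prev_u is not None and abs(u - prev_u) < tol:
            break                             # numerical result has stabilized
        prev_u = u
    return samples.mean(), u, samples.size

# Toy nonlinear measurement model with two normally distributed inputs.
model = lambda a, b: np.sqrt(a**2 + b**2)
draw = lambda rng, n: (rng.normal(10.0, 0.02, n), rng.normal(5.0, 0.03, n))
y, u_y, n_trials = adaptive_mc(model, draw)
print(f"y = {y:.4f}, u(y) = {u_y:.5f}, trials = {n_trials}")
```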

  2. Errors in Measurement of Microwave Interferograms Using Antenna Matrix

    OpenAIRE

    P. Hudec; Hoffmann, K; Zela, J.

    2008-01-01

    New antenna matrices for both scalar and vector measurement of microwave interferograms at 2.45 GHz were developed and used for an analysis of the sources of measurement errors. The influence of mutual coupling between individual antennas in an antenna matrix on the measurement of microwave interferograms, particularly on the measurement of interferogram minimum values, was studied. Simulations and measurements of interferograms, proposal of a new calibration procedure and correction metho...

  3. Persistent Leverage in Portfolio Sorts: An Artifact of Measurement Error?

    OpenAIRE

    Mueller, Michael

    2014-01-01

    Studies such as Lemmon, Roberts and Zender (2008) demonstrate how stable firms' capital structures are over time, and raise the question of whether new theories of capital structure are needed to explain these phenomena. In this paper, I show that trade-off theory-based empirical proxies that are observed with error offer an alternative explanation for the persistence in portfolio-leverage levels. Measurement error noise equal to 80% of the cross-sectional variation in the market to book rati...

  4. Errors

    International Nuclear Information System (INIS)

    Data indicate that about one half of all errors are skill based. Yet most of the emphasis is focused on correcting rule- and knowledge-based errors, leading to more programs, supervision, and training. None of this corrective action applies to the 'mental lapse' error. Skill-based errors are usually committed in performing a routine and familiar task: workers went to the wrong unit or component, or got some other detail wrong. Too often some of these errors result in reactor scrams, turbine trips, or other unwanted actuations. The workers do not need more programs, supervision, or training; they need to know when they are vulnerable, and they need to know how to think. Self-checking can prevent errors, but only if it is practiced intellectually and with commitment. Skill-based errors are usually the result of relying on habits and senses instead of using our intellect. Even human factors can play a role in the cause of an error on a routine task. Personal injury, too, is usually the result of an error. Sometimes such events are called accidents, but most accidents are the result of inappropriate actions; whether we can explain it or not, cause and effect were there. A proper attitude toward risk and toward danger is requisite to avoiding injury, and many personal injuries can be avoided by attitude alone. Errors, based on personal experience and interviews, examines the reasons for 'mental lapse' errors and why some of us become injured. The paper offers corrective action without more programs, supervision, and training. It does ask you to think differently. (author)

  5. Beam induced vacuum measurement error in BEPC II

    Institute of Scientific and Technical Information of China (English)

    2011-01-01

    When the beam in the BEPCII storage ring aborts suddenly, the measured pressure of the cold cathode gauges and ion pumps drops suddenly and then decreases gradually to the base pressure. This shows that there is a beam-induced positive error in the pressure measurement during beam operation, the error being the difference between the measured and real pressures. Right after the beam aborts, the error disappears immediately and the measured pressure then equals the real pressure. For one gauge, we can fit a non-linear pressure-time curve to its measured pressure data starting 20 seconds after a sudden beam abort. From this negative-exponential pumping-down curve, the real pressure at the time the beam started to abort is extrapolated. With the data of several sudden beam aborts we obtained the errors of that gauge at different beam currents and found that the error is directly proportional to the beam current, as expected. A linear data fit gives the proportionality coefficient of the equation, which we derived to evaluate the real pressure over the whole time that the beam, with varying currents, is on.
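
    A sketch of the fit-and-extrapolate step just described; the pump-down model, units and synthetic data are illustrative assumptions, not BEPCII values.

```python
# Sketch: fit a negative-exponential pumping-down curve to gauge readings
# recorded after a sudden beam abort, then extrapolate back to the abort
# time to recover the real pressure. Data and units are illustrative
# (pressures in units of 1e-8 Pa).
import numpy as np
from scipy.optimize import curve_fit

def pump_down(t, p_real, amplitude, tau):
    return p_real + amplitude * np.exp(-t / tau)

t = np.linspace(20.0, 200.0, 50)          # readings start 20 s after abort
rng = np.random.default_rng(2)
p_meas = pump_down(t, 1.0, 4.0, 60.0) * (1 + 0.01 * rng.normal(size=t.size))

popt, _ = curve_fit(pump_down, t, p_meas, p0=(1.0, 1.0, 50.0))
p_at_abort = pump_down(0.0, *popt)        # extrapolated real pressure at t=0
print(f"base pressure {popt[0]:.3f}, P(abort) {p_at_abort:.3f} (1e-8 Pa)")
```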

  6. Measurement error of waist circumference: gaps in knowledge.

    NARCIS (Netherlands)

    Verweij, L.M.; Terwee, C.B.; Proper, K.I.; Hulshof, C.T.J.; Mechelen, W. van

    2013-01-01

    Objective: It is not clear whether measuring waist circumference in clinical practice is problematic because the measurement error is unclear, as well as what constitutes a clinically relevant change. The present study aimed to summarize what is known from state-of-the-art research. Design: To ident

  7. Measurement error of waist circumference: Gaps in knowledge

    NARCIS (Netherlands)

    Verweij, L.M.; Terwee, C.B.; Proper, K.I.; Hulshof, C.T.; Mechelen, W.V. van

    2013-01-01

    Objective It is not clear whether measuring waist circumference in clinical practice is problematic because the measurement error is unclear, as well as what constitutes a clinically relevant change. The present study aimed to summarize what is known from state-of-the-art research. Design To identif

  8. ALGORITHM FOR SPHERICITY ERROR AND THE NUMBER OF MEASURED POINTS

    Institute of Scientific and Technical Information of China (English)

    HE Gaiyun; WANG Taiyong; ZHAO Jian; YU Baoqin; LI Guoqin

    2006-01-01

    The data processing technique and the method for determining the optimal number of measured points are studied, aiming at the sphericity error measured on a coordinate measuring machine (CMM). The consummate criterion for the minimum zone of a spherical surface is analyzed first, and then an approximation technique searching for the minimum sphericity error from the form data is studied. In order to obtain the minimum zone of the spherical surface, the radial separation is reduced gradually by moving the center of the concentric spheres along certain directions with certain steps, so the algorithm is precise and efficient. After the appropriate mathematical model for the approximation technique is created, a data processing program is developed accordingly. By processing the metrical data with the developed program, the sphericity errors are evaluated when different numbers of measured points are taken from the same sample, and the corresponding scatter diagram and fit curve for the sample are graphically represented. The optimal number of measured points is determined through regression analysis. Experiments show that both the data processing technique and the method for determining the optimal number of measured points are effective. On average, the obtained sphericity error is 5.78 μm smaller than the least-squares solution, whose accuracy is thereby increased by 8.63%; the obtained optimal number of measured points is half the number usually measured.
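
    A naive coordinate-descent version of the center-moving idea is sketched below; it is a simplified stand-in for the paper's approximation technique, with step sizes and test data invented for illustration.

```python
# Sketch: shrink the radial separation (max radius - min radius) by moving
# the centre of the concentric spheres in small steps along coordinate
# directions. Simplified illustration, not the paper's algorithm.
import numpy as np

def radial_separation(points, center):
    r = np.linalg.norm(points - center, axis=1)
    return r.max() - r.min()

def minimum_zone_center(points, step=0.1, shrink=0.5, tol=1e-9):
    center = points.mean(axis=0)                  # crude starting centre
    best = radial_separation(points, center)
    directions = np.vstack([np.eye(3), -np.eye(3)])
    while step > tol:
        improved = False
        for d in directions:
            trial = center + step * d
            sep = radial_separation(points, trial)
            if sep < best:
                center, best, improved = trial, sep, True
        if not improved:
            step *= shrink                        # no gain: refine the step
    return center, best

rng = np.random.default_rng(3)
pts = rng.normal(size=(200, 3))
pts = 10.0 * pts / np.linalg.norm(pts, axis=1, keepdims=True)  # sphere r=10
pts += rng.normal(0.0, 0.01, size=pts.shape)                   # form error
center, mz = minimum_zone_center(pts)
print(f"minimum-zone sphericity error = {mz:.4f}")
```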

  9. Morgenstern type bivariate Lindley Distribution

    OpenAIRE

    V S Vaidyanathan; Sharon Varghese, A

    2016-01-01

    In this paper, a bivariate Lindley distribution based on the Morgenstern approach is proposed, which can be used for modeling bivariate lifetime data. Some characteristics of the distribution, such as the moment generating function, joint moments, Pearson correlation coefficient, survival function, hazard rate function, mean residual life function, vitality function and stress-strength parameter R=Pr(Y

  10. Analysis of Bivariate Extreme Values

    OpenAIRE

    Egeland Busuttil, Chris

    2015-01-01

    Results show that there is high agreement between the distribution of the bivariate ACER functions and the distribution of the copula models with ACER marginals for all time series. The distribution of the copula models with Gumbel marginals display great discrepancies to the distribution of the bivariate ACER functions. These disagreements are greatest for short time series, and decrease as the time series become longer.

  11. Systematic errors in VVER-440 coolant temperature measurement

    International Nuclear Information System (INIS)

    Stable operation of current nuclear power stations requires on-line temperature monitoring within the reactor. Experience with VVER power reactors suggests that a necessary condition for safe operation of a station containing VVER-440 is that the coolant temperature should be monitored at the outlet from the fuel-pin assemblies and in the main circulation loop. It is possible to reduce the error in the reactor temperature measurements to determine the heat production nonuniformity coefficients over the core more accurately, as well as the underheating of the coolant relative to the saturation temperature at the exit from a fuel-pin assembly, together with other thermophysical parameters important for safe and effective power station operation. Measurements within reactors may be accompanied by systematic deviations in the thermocouple readings that are comparable in magnitude with the limiting permissible errors. This paper discusses the most important components of the systematic deviations: errors due to calibration drift during use, errors due to radiation heating, and dynamic measurement error. The authors consider the basic features in the method of determining and balancing out the first of these components. 17 refs., 3 tabs

  12. Earnings mobility and measurement error : a pseudo-panel approach

    OpenAIRE

    Antman, Francisca; McKenzie, David J.

    2005-01-01

    The degree of mobility in incomes is often seen as an important measure of the equality of opportunity in a society and of the flexibility and freedom of its labor market. However, estimation of mobility using panel data is biased by the presence of measurement error and nonrandom attrition from the panel. This study shows that dynamic pseudo-panel methods can be used to consistently estimate measures of absolute and conditional mobility when genuine panels are not available and in the presen...

  13. Optimal measurement strategies for effective suppression of drift errors

    International Nuclear Information System (INIS)

    Drifting of experimental setups with change in temperature or other environmental conditions is the limiting factor of many, if not all, precision measurements. The measurement error due to a drift is, in some sense, in-between random noise and systematic error. In the general case, the error contribution of a drift cannot be averaged out using a number of measurements identically carried out over a reasonable time. In contrast to systematic errors, drifts are usually not stable enough for a precise calibration. Here a rather general method for effective suppression of the spurious effects caused by slow drifts in a large variety of instruments and experimental setups is described. An analytical derivation of an identity, describing the optimal measurement strategies suitable for suppressing the contribution of a slow drift described with a certain order polynomial function, is presented. A recursion rule as well as a general mathematical proof of the identity is given. The effectiveness of the discussed method is illustrated with an application of the derived optimal scanning strategies to precise surface slope measurements with a surface profiler.
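
    One classical construction with exactly this flavor is the Prouhet-Thue-Morse ordering: by Prouhet's theorem, the signs (-1)^(s2(k)), where s2(k) is the binary digit sum of k, satisfy sum_k (-1)^(s2(k)) k^m = 0 over k = 0..2^n - 1 for every m < n, so a signed combination of 2^n equally spaced measurements cancels any polynomial drift of order below n. The sketch below demonstrates this well-known identity; it is not necessarily the specific identity derived in the paper.

```python
# Sketch: cancelling polynomial drift with a Prouhet-Thue-Morse measurement
# order. Measure state A on "+" slots and state B on "-" slots; the signed
# average recovers A - B exactly despite a quadratic drift. Illustrative.
import numpy as np

def ptm_signs(n):
    k = np.arange(2**n)
    digit_sums = np.array([bin(v).count("1") for v in k])
    return np.where(digit_sums % 2 == 0, 1.0, -1.0)

n = 3                                  # cancels drift up to quadratic order
signs = ptm_signs(n)                   # + - - + - + + -  (ABBA BAAB)
t = np.arange(2**n, dtype=float)

A, B = 5.0, 3.0
drift = 0.7 + 0.05 * t - 0.002 * t**2          # slow quadratic drift
readings = np.where(signs > 0, A, B) + drift   # each slot reads state+drift

estimate = signs @ readings / 2**(n - 1)       # drift-free estimate of A - B
print(estimate)                                # 2.0 up to floating-point error
```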

  14. Estimation of discretization errors in contact pressure measurements.

    Science.gov (United States)

    Fregly, Benjamin J; Sawyer, W Gregory

    2003-04-01

    Contact pressure measurements in total knee replacements are often made using a discrete sensor such as the Tekscan K-Scan sensor. However, no method currently exists for predicting the magnitude of sensor discretization errors in contact force, peak pressure, average pressure, and contact area, making it difficult to evaluate the accuracy of such measurements. This study identifies a non-dimensional area variable, defined as the ratio of the number of perimeter elements to the total number of elements with pressure, which can be used to predict these errors. The variable was evaluated by simulating discrete pressure sensors subjected to Hertzian and uniform pressure distributions with two different calibration procedures. The simulations systematically varied the size of the sensor elements, the contact ellipse aspect ratio, and the ellipse's location on the sensor grid. In addition, contact pressure measurements made with a K-Scan sensor on four different total knee designs were used to evaluate the magnitude of discretization errors under practical conditions. The simulations predicted a strong power law relationship (r(2)>0.89) between worst-case discretization errors and the proposed non-dimensional area variable. In the total knee experiments, predicted discretization errors were on the order of 1-4% for contact force and peak pressure and 3-9% for average pressure and contact area. These errors are comparable to those arising from inserting a sensor into the joint space or truncating pressures with pressure sensitive film. The reported power law regression coefficients provide a simple way to estimate the accuracy of experimental measurements made with discrete pressure sensors when the contact patch is approximately elliptical. PMID:12600352
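
    The proposed non-dimensional area variable is straightforward to compute from a discrete pressure map; the sketch below uses a 4-neighbour definition of "perimeter element", which is our assumption rather than the paper's exact convention.

```python
# Sketch: ratio of perimeter elements to all elements registering pressure
# on a discrete sensor grid (the non-dimensional area variable above).
import numpy as np

def perimeter_ratio(active):
    """active: 2-D boolean array, True where a sensel registers pressure."""
    padded = np.pad(active, 1, constant_values=False)
    interior = (padded[:-2, 1:-1] & padded[2:, 1:-1] &
                padded[1:-1, :-2] & padded[1:-1, 2:])   # all 4 neighbours on
    perimeter = active & ~interior
    return perimeter.sum() / active.sum()

# Elliptical contact patch; coarser grids give a larger ratio and hence,
# per the reported power law, larger worst-case discretization errors.
y, x = np.mgrid[0:44, 0:44]
patch = ((x - 22) / 18.0) ** 2 + ((y - 22) / 9.0) ** 2 <= 1.0
print(f"perimeter/total = {perimeter_ratio(patch):.3f}")
```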

  15. The effect of measurement error on surveillance metrics

    Energy Technology Data Exchange (ETDEWEB)

    Weaver, Brian Phillip [Los Alamos National Laboratory; Hamada, Michael S. [Los Alamos National Laboratory

    2012-04-24

    The purpose of this manuscript is to describe different simulation studies that CCS-6 has performed for the purpose of understanding the effects of measurement error on the surveillance metrics. We assume that the measured items come from a larger population of items. We denote the random variable associated with an item's value of an attribute of interest as X, with X ~ N(μ, σ²). This distribution represents the variability in the population of interest, and we wish to make inference on the parameters μ and σ or on some function of these parameters. When an item X is selected from the larger population, a measurement is made on some attribute of it. This measurement is made with error, and the true value of X is not observed. The rest of this section presents simulation results for the different measurement cases encountered.
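
    A minimal simulation of this setup, assuming additive normal measurement error: the sample mean still targets mu, but the naive spread estimate targets sqrt(sigma^2 + sigma_e^2) rather than sigma. Numbers are illustrative.

```python
# Sketch: measurement error inflates the apparent population spread.
# Observed = X + e, X ~ N(mu, sigma^2), e ~ N(0, sigma_e^2). Illustrative.
import numpy as np

rng = np.random.default_rng(4)
mu, sigma, sigma_e, n = 100.0, 2.0, 1.0, 50_000

x = rng.normal(mu, sigma, n)                # true attribute values
observed = x + rng.normal(0.0, sigma_e, n)  # values actually recorded

print(observed.mean())       # ~100.0: the mean is unaffected on average
print(observed.std(ddof=1))  # ~2.236 = sqrt(2**2 + 1**2), not 2.0
```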

  16. Dyadic Bivariate Wavelet Multipliers in L2(R2)

    Institute of Scientific and Technical Information of China (English)

    Zhong Yan LI; Xian Liang SHI

    2011-01-01

    The single 2-dilation wavelet multipliers in the one-dimensional case and the single A-dilation (where A is any expansive matrix with integer entries and |det A| = 2) wavelet multipliers in the two-dimensional case were completely characterized by the Wutam Consortium (1998) and Li Z., et al. (2010). But there exist no results on multivariate wavelet multipliers corresponding to an integer expansive dilation matrix with the absolute value of the determinant not equal to 2 in L2(R2). In this paper, we choose 2I2 = (2 0; 0 2) as the dilation matrix and consider the 2I2-dilation multivariate wavelet Ψ = {ψ1, ψ2, ψ3} (which is called a dyadic bivariate wavelet) multipliers. Here we call a measurable function family f = {f1, f2, f3} a dyadic bivariate wavelet multiplier if Ψ1 = {F−1(f1ψ̂1), F−1(f2ψ̂2), F−1(f3ψ̂3)} is a dyadic bivariate wavelet for any dyadic bivariate wavelet Ψ = {ψ1, ψ2, ψ3}, where ĝ and F−1 denote the Fourier transform and the inverse Fourier transform of a function g, respectively. We study dyadic bivariate wavelet multipliers, give some conditions for dyadic bivariate wavelet multipliers, and give concrete forms of linear phases of dyadic MRA bivariate wavelets.

  17. Bayesian conformity assessment in presence of systematic measurement errors

    Science.gov (United States)

    Carobbi, Carlo; Pennecchi, Francesca

    2016-04-01

    Conformity assessment of the distribution of the values of a quantity is investigated by using a Bayesian approach. The effect of systematic, non-negligible measurement errors is taken into account. The analysis is general, in the sense that the probability distribution of the quantity can be of any kind, that is even different from the ubiquitous normal distribution, and the measurement model function, linking the measurand with the observable and non-observable influence quantities, can be non-linear. Further, any joint probability density function can be used to model the available knowledge about the systematic errors. It is demonstrated that the result of the Bayesian analysis here developed reduces to the standard result (obtained through a frequentistic approach) when the systematic measurement errors are negligible. A consolidated frequentistic extension of such standard result, aimed at including the effect of a systematic measurement error, is directly compared with the Bayesian result, whose superiority is demonstrated. Application of the results here obtained to the derivation of the operating characteristic curves used for sampling plans for inspection by variables is also introduced.

  18. GY SAMPLING THEORY IN ENVIRONMENTAL STUDIES 2: SUBSAMPLING ERROR MEASUREMENTS

    Science.gov (United States)

    Sampling can be a significant source of error in the measurement process. The characterization and cleanup of hazardous waste sites require data that meet site-specific levels of acceptable quality if scientifically supportable decisions are to be made. In support of this effort,...

  19. Error Analysis for Interferometric SAR Measurements of Ice Sheet Flow

    DEFF Research Database (Denmark)

    Mohr, Johan Jacob; Madsen, Søren Nørvang

    1999-01-01

    This article concerns satellite interferometric radar measurements of ice elevation and three-dimensional flow vectors. It describes sensitivity to (1) atmospheric path length changes and other phase distortions, (2) violations of the stationary flow assumption, and (3) unknown vertical velocities and slope errors in conjunction with a surface-parallel flow assumption. The most surprising result is that, assuming a stationary flow, the east component of the three-dimensional flow derived from ascending and descending orbit data is independent of slope errors and of the vertical flow.

  20. Effects of measurement errors on microwave antenna holography

    Science.gov (United States)

    Rochblatt, David J.; Rahmat-Samii, Yahya

    1991-01-01

    The effects of measurement errors appearing during the implementation of the microwave holographic technique are investigated in detail, and many representative results are presented based on computer simulations. The numerical results are tailored for cases applicable to the utilization of the holographic technique for the NASA's Deep Space Network antennas, although the methodology of analysis is applicable to any antenna. Many system measurement topics are presented and summarized.

  1. Estimation of coherent error sources from stabilizer measurements

    Science.gov (United States)

    Orsucci, Davide; Tiersch, Markus; Briegel, Hans J.

    2016-04-01

    In the context of measurement-based quantum computation a way of maintaining the coherence of a graph state is to measure its stabilizer operators. Aside from performing quantum error correction, it is possible to exploit the information gained from these measurements to characterize and then counteract a coherent source of errors; that is, to determine all the parameters of an error channel that applies a fixed, but unknown, unitary operation to the physical qubits. Such a channel is generated, e.g., by local stray fields that act on the qubits. We study the case in which each qubit of a given graph state may see a different error channel and we focus on channels given by a rotation on the Bloch sphere around either the x̂, ŷ, or ẑ axis, for which analytical results can be given in a compact form. The possibility of reconstructing the channels at all qubits depends nontrivially on the topology of the graph state. We prove via perturbation methods that the reconstruction process is robust and supplement the analytic results with numerical evidence.

  2. Reducing systematic errors in measurements made by a SQUID magnetometer

    International Nuclear Information System (INIS)

    A simple method is described which reduces those systematic errors of a superconducting quantum interference device (SQUID) magnetometer that arise from possible radial displacements of the sample in the second-order gradiometer superconducting pickup coil. By rotating the sample rod (and hence the sample) around its axis into a position where the best fit is obtained to the output voltage of the SQUID as the sample is moved through the pickup coil, the accuracy of measuring magnetic moments can be increased significantly. In the cases of an examined Co1.9Fe1.1Si Heusler alloy, pure iron and nickel samples, the accuracy could be increased over the value given in the specification of the device. The suggested method is only meaningful if the measurement uncertainty is dominated by systematic errors – radial displacement in particular – and not by instrumental or environmental noise. - Highlights: • A simple method is described which reduces systematic errors of a SQUID. • The errors arise from a radial displacement of the sample in the gradiometer coil. • The procedure is to rotate the sample rod (with the sample) around its axis. • The best fit to the SQUID voltage has to be attained moving the sample through the coil. • The accuracy of measuring magnetic moment can be increased significantly

  3. Reducing systematic errors in measurements made by a SQUID magnetometer

    Energy Technology Data Exchange (ETDEWEB)

    Kiss, L.F., E-mail: kissl@szfki.hu; Kaptás, D.; Balogh, J.

    2014-11-15

    A simple method is described which reduces those systematic errors of a superconducting quantum interference device (SQUID) magnetometer that arise from possible radial displacements of the sample in the second-order gradiometer superconducting pickup coil. By rotating the sample rod (and hence the sample) around its axis into a position where the best fit is obtained to the output voltage of the SQUID as the sample is moved through the pickup coil, the accuracy of measuring magnetic moments can be increased significantly. In the cases of an examined Co1.9Fe1.1Si Heusler alloy, pure iron and nickel samples, the accuracy could be increased over the value given in the specification of the device. The suggested method is only meaningful if the measurement uncertainty is dominated by systematic errors – radial displacement in particular – and not by instrumental or environmental noise. - Highlights: • A simple method is described which reduces systematic errors of a SQUID. • The errors arise from a radial displacement of the sample in the gradiometer coil. • The procedure is to rotate the sample rod (with the sample) around its axis. • The best fit to the SQUID voltage has to be attained moving the sample through the coil. • The accuracy of measuring magnetic moment can be increased significantly.

  4. Quantification and handling of sampling errors in instrumental measurements: a case study

    DEFF Research Database (Denmark)

    Andersen, Charlotte Møller; Bro, R.

    2004-01-01

    Instrumental measurements are often used to represent a whole object even though only a small part of the object is actually measured. This can introduce an error due to the inhomogeneity of the product. Together with other errors resulting from the measuring process, such errors may have a serious impact on the results when the instrumental measurements are used for multivariate regression and prediction. This paper gives examples of how errors influencing the predictions obtained by a multivariate regression model can be quantified and handled. Only random errors are considered here, while in certain situations, the effect of systematic errors is also considerable. The relevant errors contributing to the prediction error are: error in instrumental measurements (x-error), error in reference measurements (y-error), error in the estimated calibration model (regression coefficient error) and model...

  5. Time variance effects and measurement error indications for MLS measurements

    DEFF Research Database (Denmark)

    Liu, Jiyuan

    1999-01-01

    Mathematical characteristics of Maximum-Length-Sequences are discussed, and effects of measuring on slightly time-varying systems with the MLS method are examined with computer simulations in MATLAB. A new coherence measure is suggested for the indication of time-variance effects. The results of the simulations show that the proposed MLS coherence can give an indication of time-variance effects.

  6. Improving GDP measurement: a measurement-error perspective

    OpenAIRE

    S. Boragan Aruoba; Francis X. Diebold; Jeremy J. Nalewaik; Frank Schorfheide; Dongho Song

    2013-01-01

    We provide a new and superior measure of U.S. GDP, obtained by applying optimal signal-extraction techniques to the (noisy) expenditure-side and income-side estimates. Its properties -- particularly as regards serial correlation -- differ markedly from those of the standard expenditure-side measure and lead to substantially-revised views regarding the properties of GDP.

  7. Reducing systematic errors in measurements made by a SQUID magnetometer

    Science.gov (United States)

    Kiss, L. F.; Kaptás, D.; Balogh, J.

    2014-11-01

    A simple method is described which reduces those systematic errors of a superconducting quantum interference device (SQUID) magnetometer that arise from possible radial displacements of the sample in the second-order gradiometer superconducting pickup coil. By rotating the sample rod (and hence the sample) around its axis into a position where the best fit is obtained to the output voltage of the SQUID as the sample is moved through the pickup coil, the accuracy of measuring magnetic moments can be increased significantly. In the cases of an examined Co1.9Fe1.1Si Heusler alloy, pure iron and nickel samples, the accuracy could be increased over the value given in the specification of the device. The suggested method is only meaningful if the measurement uncertainty is dominated by systematic errors - radial displacement in particular - and not by instrumental or environmental noise.

  8. Profit Maximization, Returns to Scale, and Measurement Error.

    OpenAIRE

    Lim, Hongil; Shumway, C. Richard

    1992-01-01

    A nonparametric analysis of agricultural production behavior was conducted for each of the contiguous forty-eight states for the period 1956-82 under the joint hypothesis of profit maximization, convex technology, and nonregressive technical change. Tests were conducted in each state for profit maximization and for constant returns to scale. Although considerable variability was observed among states, measurement errors of magnitudes common in secondary data yielded test results fully consist...

  9. Correcting for measurement error in latent variables used as predictors

    OpenAIRE

    Schofield, Lynne Steuerle

    2015-01-01

    This paper represents a methodological-substantive synergy. A new model, the Mixed Effects Structural Equations (MESE) model which combines structural equations modeling and item response theory, is introduced to attend to measurement error bias when using several latent variables as predictors in generalized linear models. The paper investigates racial and gender disparities in STEM retention in higher education. Using the MESE model with 1997 National Longitudinal Survey of Youth data, I fi...

  10. Confounding and exposure measurement error in air pollution epidemiology

    OpenAIRE

    2011-01-01

    Studies in air pollution epidemiology may suffer from some specific forms of confounding and exposure measurement error. This contribution discusses these, mostly in the framework of cohort studies. Evaluation of potential confounding is critical in studies of the health effects of air pollution. The association between long-term exposure to ambient air pollution and mortality has been investigated using cohort studies in which subjects are followed over time with respect to their vital statu...

  11. Confounding and exposure measurement error in air pollution epidemiology

    OpenAIRE

    Sheppard, L.; Burnett, R T; Szpiro, A.A.; Kim, J.Y.; Jerrett, M; Pope, C; Brunekreef, B

    2012-01-01

    Studies in air pollution epidemiology may suffer from some specific forms of confounding and exposure measurement error. This contribution discusses these, mostly in the framework of cohort studies. Evaluation of potential confounding is critical in studies of the health effects of air pollution. The association between long-term exposure to ambient air pollution and mortality has been investigated using cohort studies in which subjects are followed over time with respect to their vital statu...

  12. The error budget of the Dark Flow measurement

    OpenAIRE

    Atrio-Barandela, F.; Kashlinsky, A.; Ebeling, H.; Kocevski, D.; Edge, A.

    2010-01-01

    We analyze the uncertainties and possible systematics associated with the "Dark Flow" measurements using the cumulative Sunyaev-Zeldovich (SZ) effect combined with all-sky catalogs of clusters of galaxies. Filtering of all-sky cosmic microwave background maps is required to remove the intrinsic cosmological signal down to the limit imposed by cosmic variance. Contributions to the errors come from the remaining cosmological signal, which integrates down with the number of clusters, and the ins...

  13. Lidar Uncertainty Measurement Experiment (LUMEX) - Understanding Sampling Errors

    Science.gov (United States)

    Choukulkar, A.; Brewer, W. A.; Banta, R. M.; Hardesty, M.; Pichugina, Y.; Senff, Christoph; Sandberg, S.; Weickmann, A.; Carroll, B.; Delgado, R.; Muschinski, A.

    2016-06-01

    Coherent Doppler LIDAR (Light Detection and Ranging) has been widely used to provide measurements of several boundary layer parameters such as profiles of wind speed, wind direction, vertical velocity statistics, mixing layer heights and turbulent kinetic energy (TKE). An important aspect of providing this wide range of meteorological data is to properly characterize the uncertainty associated with these measurements. With the above intent in mind, the Lidar Uncertainty Measurement Experiment (LUMEX) was conducted at Erie, Colorado during the period June 23rd to July 13th, 2014. The major goals of this experiment were the following: (1) characterize sampling error for vertical velocity statistics; (2) analyze sensitivities of different Doppler lidar systems; (3) compare various single and dual Doppler retrieval techniques; (4) characterize the error of spatial representativeness for separation distances up to 3 km; and (5) validate turbulence analysis techniques and retrievals from Doppler lidars. This experiment brought together five Doppler lidars, both commercial and research grade, for a period of three weeks for a comprehensive intercomparison study. The Doppler lidars were deployed at the Boulder Atmospheric Observatory (BAO) site in Erie, site of a 300 m meteorological tower. This tower was instrumented with six sonic anemometers at levels from 50 m to 300 m with 50 m vertical spacing. A brief overview of the experiment outline and deployment will be presented. Results from the sampling error analysis and its implications on scanning strategy will be discussed.

  14. The effect of posture on errors in gastric emptying measurements

    International Nuclear Information System (INIS)

    Scintigraphic gastric emptying measurements were made with subjects supine and upright using a dual-detector rectilinear scanner. Previously reported variations of the depth of activity during the course of a study were again found with both postures. Although there was no significant mean depth change in the group when upright, some individual variations were substantial. Measurements with a gamma camera demonstrated similar changes of depth of stomach contents with seated subjects. The resulting variations of attenuation of the emergent radiation lead to appreciable errors in the emptying rates determined by unilateral detection. In about half the cases the mean movement of a 99mTc-labelled solid-phase marker exceeded 1 cm; such a movement led to an average 20% error in emptying rate determination by an anterior detector. Depth changes of a liquid marker were less marked, exceeding 0.5 cm in half the subjects; this movement gave rise to an average 6% error when 113mIn was used as the tracer. (author)

  15. Propagation of radiosonde pressure sensor errors to ozonesonde measurements

    Science.gov (United States)

    Stauffer, R. M.; Morris, G. A.; Thompson, A. M.; Joseph, E.; Coetzee, G. J. R.; Nalli, N. R.

    2014-01-01

    Several previous studies highlight pressure (or equivalently, pressure altitude) discrepancies between the radiosonde pressure sensor and that derived from a GPS flown with the radiosonde. The offsets vary during the ascent both in absolute and percent pressure differences. To investigate this problem further, a total of 731 radiosonde/ozonesonde launches from the Southern Hemisphere subtropics to northern mid-latitudes are considered, with launches between 2005 and 2013 from both longer term and campaign-based intensive stations. Five series of radiosondes from two manufacturers (International Met Systems: iMet, iMet-P, iMet-S, and Vaisala: RS80-15N and RS92-SGP) are analyzed to determine the magnitude of the pressure offset. Additionally, electrochemical concentration cell (ECC) ozonesondes from three manufacturers (Science Pump Corporation; SPC and ENSCI/Droplet Measurement Technologies; DMT) are analyzed to quantify the effects these offsets have on the calculation of ECC ozone (O3) mixing ratio profiles (O3MR) from the ozonesonde-measured partial pressure. Approximately half of all offsets are > ±0.6 hPa in the free troposphere, with nearly a third > ±1.0 hPa at 26 km, where the 1.0 hPa error represents ~ 5% of the total atmospheric pressure. Pressure offsets have negligible effects on O3MR below 20 km (96% of launches lie within ±5% O3MR error at 20 km). Ozone mixing ratio errors above 10 hPa (~ 30 km), can approach greater than ±10% (> 25% of launches that reach 30 km exceed this threshold). These errors cause disagreement between the integrated ozonesonde-only column O3 from the GPS and radiosonde pressure profile by an average of +6.5 DU. Comparisons of total column O3 between the GPS and radiosonde pressure profiles yield average differences of +1.1 DU when the O3 is integrated to burst with addition of the McPeters and Labow (2012) above-burst O3 column climatology. Total column differences are reduced to an average of -0.5 DU when the O3 profile is
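
    The leverage of a fixed pressure offset grows as ambient pressure falls, because the mixing ratio is the measured O3 partial pressure divided by the ambient (radiosonde) pressure: a constant offset dP gives a relative O3MR error of roughly -dP/P. A back-of-envelope sketch with an assumed 1 hPa offset and illustrative pressure levels:

```python
# Sketch: relative O3 mixing ratio error from a constant pressure offset.
# O3MR = P_O3 / P, so reporting P + dP instead of P scales O3MR by about
# 1 - dP/P. Offset and pressure levels are illustrative.
pressures_hpa = [500.0, 100.0, 50.0, 26.0, 10.0, 7.0]
offset_hpa = 1.0

for p in pressures_hpa:
    rel_err = -offset_hpa / p * 100.0      # percent error in O3 mixing ratio
    print(f"P = {p:6.1f} hPa -> O3MR error ~ {rel_err:+.1f}%")
```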

  16. Propagation of Radiosonde Pressure Sensor Errors to Ozonesonde Measurements

    Science.gov (United States)

    Stauffer, R. M.; Morris, G.A.; Thompson, A. M.; Joseph, E.; Coetzee, G. J. R.; Nalli, N. R.

    2014-01-01

    Several previous studies highlight pressure (or equivalently, pressure altitude) discrepancies between the radiosonde pressure sensor and that derived from a GPS flown with the radiosonde. The offsets vary during the ascent both in absolute and percent pressure differences. To investigate this problem further, a total of 731 radiosonde-ozonesonde launches from the Southern Hemisphere subtropics to Northern mid-latitudes are considered, with launches between 2005-2013 from both longer-term and campaign-based intensive stations. Five series of radiosondes from two manufacturers (International Met Systems: iMet, iMet-P, iMet-S, and Vaisala: RS80-15N and RS92-SGP) are analyzed to determine the magnitude of the pressure offset. Additionally, electrochemical concentration cell (ECC) ozonesondes from three manufacturers (Science Pump Corporation; SPC and ENSCI-Droplet Measurement Technologies; DMT) are analyzed to quantify the effects these offsets have on the calculation of ECC ozone (O3) mixing ratio profiles (O3MR) from the ozonesonde-measured partial pressure. Approximately half of all offsets are > ±0.6 hPa in the free troposphere, with nearly a third > ±1.0 hPa at 26 km, where the 1.0 hPa error represents ~5% of the total atmospheric pressure. Pressure offsets have negligible effects on O3MR below 20 km (96% of launches lie within ±5% O3MR error at 20 km). Ozone mixing ratio errors above 10 hPa (~30 km) can approach greater than ±10% (>25% of launches that reach 30 km exceed this threshold). These errors cause disagreement between the integrated ozonesonde-only column O3 from the GPS and radiosonde pressure profile by an average of +6.5 DU. Comparisons of total column O3 between the GPS and radiosonde pressure profiles yield average differences of +1.1 DU when the O3 is integrated to burst with addition of the McPeters and Labow (2012) above-burst O3 column climatology. Total column differences are reduced to an average of -0.5 DU when

  17. Microelectromechnical Systems Inertial Measurement Unit Error Modelling and Error Analysis for Low-cost Strapdown Inertial Navigation System

    OpenAIRE

    Ramalingam, R.; G. Anitha; J. Shanmugam

    2009-01-01

    This paper presents error modelling and error analysis of a microelectromechanical systems (MEMS) inertial measurement unit (IMU) for a low-cost strapdown inertial navigation system (INS). The INS consists of the IMU and a navigation processor. The IMU provides acceleration and angular rate of the vehicle in all three axes. In this paper, errors that affect the MEMS IMU, which is low-cost and small in volume, are stochastically modelled and analysed using the Allan variance. Wavelet decomposition has b...
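
    A minimal non-overlapping Allan deviation computation is sketched below; the synthetic gyro record (white noise plus a random-walk bias) and all parameters are illustrative assumptions.

```python
# Sketch: non-overlapping Allan deviation of a gyro rate record, the tool
# named above for stochastic IMU error modelling. Synthetic, illustrative data.
import numpy as np

def allan_deviation(rate, fs, taus):
    """rate: 1-D rate samples; fs: sample rate (Hz); taus: cluster times (s)."""
    out = []
    for tau in taus:
        m = int(tau * fs)                            # samples per cluster
        k = rate.size // m                           # number of clusters
        means = rate[: k * m].reshape(k, m).mean(axis=1)
        out.append(np.sqrt(0.5 * np.mean(np.diff(means) ** 2)))
    return np.array(out)

fs = 100.0                                           # Hz
rng = np.random.default_rng(5)
white = rng.normal(0.0, 0.05, 200_000)               # angle random walk source
bias = np.cumsum(rng.normal(0.0, 1e-5, white.size))  # rate random walk
gyro = white + bias

for tau, ad in zip([0.1, 1.0, 10.0, 100.0],
                   allan_deviation(gyro, fs, [0.1, 1.0, 10.0, 100.0])):
    print(f"tau = {tau:6.1f} s, Allan deviation = {ad:.5f}")
```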

  18. Comparison of Neural Network Error Measures for Simulation of Slender Marine Structures

    DEFF Research Database (Denmark)

    Christiansen, Niels H.; Voie, Per Erlend Torbergsen; Winther, Ole;

    2014-01-01

    Training of an artificial neural network (ANN) adjusts the internal weights of the network in order to minimize a predefined error measure. This error measure is given by an error function. Several different error functions are suggested in the literature. However, by far the most common measure for...

  19. Propagation of radiosonde pressure sensor errors to ozonesonde measurements

    Directory of Open Access Journals (Sweden)

    R. M. Stauffer

    2013-08-01

    Full Text Available Several previous studies highlight pressure (or equivalently, pressure altitude) discrepancies between the radiosonde pressure sensor and that derived from a GPS flown with the radiosonde. The offsets vary during the ascent both in absolute and percent pressure differences. To investigate this, a total of 501 radiosonde/ozonesonde launches from the Southern Hemisphere subtropics to northern mid-latitudes are considered, with launches between 2006 and 2013 from both historical and campaign-based intensive stations. Three types of electrochemical concentration cell (ECC) ozonesonde manufacturers (Science Pump Corporation; SPC and ENSCI/Droplet Measurement Technologies; DMT) and five series of radiosondes from two manufacturers (International Met Systems: iMet, iMet-P, iMet-S, and Vaisala: RS80 and RS92) are analyzed to determine the magnitude of the pressure offset and the effects these offsets have on the calculation of ECC ozone (O3) mixing ratio profiles (O3MR) from the ozonesonde-measured partial pressure. Approximately half of all offsets are > ±0.7 hPa in the free troposphere, with nearly a quarter > ±1.0 hPa at 26 km, where the 1.0 hPa error represents ~5% of the total atmospheric pressure. Pressure offsets have negligible effects on O3MR below 20 km (98% of launches lie within ±5% O3MR error at 20 km). Ozone mixing ratio errors in the 7–15 hPa layer (29–32 km), a region critical for detection of long-term O3 trends, can approach greater than ±10% (>25% of launches that reach 30 km exceed this threshold). Comparisons of total column O3 yield average differences of +1.6 DU (−1.1 to +4.9 DU, 10th to 90th percentiles) when the O3 is integrated to burst with addition of the McPeters and Labow (2012) above-burst O3 column climatology. Total column differences are reduced to an average of +0.1 DU (−1.1 to +2.2 DU) when the O3 profile is integrated to 10 hPa with subsequent addition of the O3 climatology above 10 hPa. The RS92 radiosondes are clearly

  20. Development of an Abbe Error Free Micro Coordinate Measuring Machine

    Directory of Open Access Journals (Sweden)

    Qiangxian Huang

    2016-04-01

    Full Text Available A micro Coordinate Measuring Machine (CMM) with a measurement volume of 50 mm × 50 mm × 50 mm and a measuring accuracy of about 100 nm (2σ) has been developed. In this new micro CMM, an XYZ stage, which is driven by three piezo-motors in the X, Y and Z directions, can achieve a drive resolution of about 1 nm and a stroke of more than 50 mm. In order to reduce the crosstalk among the X-, Y- and Z-stages, a special mechanical structure, called a co-planar stage, is introduced. The movement of the stage in each direction is detected by a laser interferometer. A contact probe is adopted for measurement. The center of the probe ball coincides with the intersection point of the measuring axes of the three laser interferometers. Therefore, the metrological system of the CMM obeys the Abbe principle in all three directions and is free from Abbe error. The CMM is placed in an anti-vibration, thermostatic chamber to avoid the influence of vibration and temperature fluctuation. A series of experimental results show that the measurement uncertainty within 40 mm along the X, Y and Z directions is about 100 nm (2σ). The flatness of the measuring face of a gauge block was also measured, verifying the performance of the developed micro CMM.
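
    The Abbe error referred to here grows as e ≈ L·tan(θ), with L the offset between the measurement axis and the motion axis and θ the angular (e.g. pitch) error of the stage; placing the probe-ball centre at the intersection of the three interferometer axes drives L to zero. A back-of-envelope sketch (illustrative values, not this machine's error budget):

        import math

        # Abbe error e ~ L * tan(theta) for Abbe offset L and angular error theta.
        def abbe_error_nm(offset_mm, angle_arcsec):
            theta = math.radians(angle_arcsec / 3600.0)
            return offset_mm * 1e6 * math.tan(theta)  # mm -> nm

        print(abbe_error_nm(10.0, 2.0))  # 10 mm offset, 2 arcsec pitch -> ~97 nm
        print(abbe_error_nm(0.0, 2.0))   # zero offset (Abbe principle) -> 0 nm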

  1. The bivariate current status model

    OpenAIRE

    Groeneboom, P.

    2013-01-01

    For the univariate current status and, more generally, the interval censoring model, distribution theory has been developed for the maximum likelihood estimator (MLE) and smoothed maximum likelihood estimator (SMLE) of the unknown distribution function, see, e.g., [12], [7], [4], [5], [6], [10], [11] and [8]. For the bivariate current status and interval censoring models distribution theory of this type is still absent and even the rate at which we can expect reasonable estimators to converge...

  2. Error sources in atomic force microscopy for dimensional measurements: Taxonomy and modeling

    DEFF Research Database (Denmark)

    Marinello, F.; Voltan, A.; Savio, E.; Carmignato, S.; De Chiffre, Leonardo

    2010-01-01

    This paper aimed at identifying the error sources that occur in dimensional measurements performed using atomic force microscopy. In particular, a set of characterization techniques for error quantification is presented. The discussion on error sources is organized in four main categories: scanning system, tip-surface interaction, environment, and data processing. The discussed errors include scaling effects, squareness errors, hysteresis, creep, tip convolution, and thermal drift. A mathematical model of the measurement system is eventually described, as a reference basis for errors...

  3. Quantitative texton sequences for legible bivariate maps.

    Science.gov (United States)

    Ware, Colin

    2009-01-01

    Representing bivariate scalar maps is a common but difficult visualization problem. One solution has been to use two-dimensional color schemes, but the results are often hard to interpret and are read inaccurately. An alternative is to use a color sequence for one variable and a texture sequence for the other. This has been used, for example, in geology, but it has been much less studied than the two-dimensional color scheme, although theory suggests that it should lead to easier perceptual separation of information relating to the two variables. To make a texture sequence more clearly readable, the concept of the quantitative texton sequence (QTonS) is introduced. A QTonS is defined as a sequence of small graphical elements, called textons, where each texton represents a different numerical value and sets of textons can be densely displayed to produce visually differentiable textures. An experiment was carried out to compare two bivariate color coding schemes with two schemes using a QTonS for one bivariate map component and a color sequence for the other. Two different key designs were investigated (a key being a sequence of colors or textures used in obtaining quantitative values from a map). The first design used two separate keys, one for each dimension, in order to measure how accurately subjects could independently estimate the underlying scalar variables. The second key design was two-dimensional and intended to measure the overall integral accuracy that could be obtained. The results show that the accuracy is substantially higher for the QTonS/color sequence schemes. The hypothesis that texture/color sequence combinations are better for independent judgments of mapped quantities was supported. A second experiment probed the limits of spatial resolution for QTonSs. PMID:19834229

  4. Aerogel Antennas Communications Study Using Error Vector Magnitude Measurements

    Science.gov (United States)

    Miranda, Felix A.; Mueller, Carl H.; Meador, Mary Ann B.

    2014-01-01

    This presentation discusses an aerogel antenna communications study using error vector magnitude (EVM) measurements. The study was performed using 2 × 4-element polyimide (PI) aerogel-based phased arrays designed for operation at 5 GHz as transmit (Tx) and receive (Rx) antennas separated by a line-of-sight (LOS) distance of 8.5 meters. The results of the EVM measurements demonstrate that polyimide aerogel antennas work appropriately to support digital communication links with typically used modulation schemes such as QPSK and π/4 DQPSK. As such, PI aerogel antennas with higher gain, larger bandwidth and lower mass than typically used microwave laminates could be suitable to enable aerospace-to-ground communication links with enough channel capacity to support voice, data and video links from CubeSats, unmanned air vehicles (UAVs), and commercial aircraft.
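
    EVM compares the received constellation symbols with their ideal positions, normalized by the reference constellation power. A minimal sketch of the computation under an assumed additive-noise channel (parameters illustrative, not the measured link):

        import numpy as np

        # EVM(%) = 100 * sqrt( mean|r - s|^2 / mean|s|^2 ) over symbols s, r.
        def evm_percent(received, ideal):
            err = np.mean(np.abs(received - ideal) ** 2)
            ref = np.mean(np.abs(ideal) ** 2)
            return 100.0 * np.sqrt(err / ref)

        rng = np.random.default_rng(0)
        qpsk = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]) / np.sqrt(2)
        ideal = rng.choice(qpsk, size=10_000)
        received = ideal + (rng.normal(0, 0.05, ideal.shape)
                            + 1j * rng.normal(0, 0.05, ideal.shape))
        print(f"EVM = {evm_percent(received, ideal):.1f}%")  # ~7% here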

  5. Characterization of measurement error sources in Doppler global velocimetry

    Science.gov (United States)

    Meyers, James F.; Lee, Joseph W.; Schwartz, Richard J.

    2001-04-01

    Doppler global velocimetry uses the absorption characteristics of iodine vapour to provide instantaneous three-component measurements of flow velocity within a plane defined by a laser light sheet. Although the technology is straightforward, its utilization as a flow diagnostics tool requires hardening of the optical system and careful attention to detail during data acquisition and processing if routine use in wind tunnel applications is to be achieved. A development programme that reaches these goals is presented. Theoretical and experimental investigations were conducted on each technology element to determine methods that increase measurement accuracy and repeatability. Enhancements resulting from these investigations included methods to ensure iodine vapour calibration stability, single-frequency operation of the laser, and image alignment to sub-pixel accuracies. Methods were also developed to improve system calibration and to eliminate spatial variations of optical frequency in the laser output, spatial variations in optical transmissivity, and perspective and optical distortions in the data images. Each of these enhancements is described, and experimental examples are given to illustrate the improved measurement performance obtained by the enhancement. The culmination of this investigation was the measurement of the velocity profile of a rotating wheel, which yielded a 1.75% error in the mean with a standard deviation of 0.5 m/s. Comparing measurements of a jet flow with corresponding Pitot measurements validated the use of these methods for flow field applications.

  6. Covariate Measurement Error Correction Methods in Mediation Analysis with Failure Time Data

    OpenAIRE

    Zhao, Shanshan; Prentice, Ross L.

    2014-01-01

    Mediation analysis is important for understanding the mechanisms whereby one variable causes changes in another. Measurement error could obscure the ability of the potential mediator to explain such changes. This paper focuses on developing correction methods for measurement error in the mediator with failure time outcomes. We consider a broad definition of measurement error, including technical error and error associated with temporal variation. The underlying model with the ‘true’ mediator ...

  7. On the Measurement of Privacy as an Attacker's Estimation Error

    CERN Document Server

    Rebollo-Monedero, David; Diaz, Claudia; Forné, Jordi

    2011-01-01

    A wide variety of privacy metrics have been proposed in the literature to evaluate the level of protection offered by privacy-enhancing technologies. Most of these metrics are specific to concrete systems and adversarial models, and are difficult to generalize or translate to other contexts. Furthermore, a better understanding of the relationships between the different privacy metrics is needed to enable a more grounded and systematic approach to measuring privacy, as well as to assist system designers in selecting the most appropriate metric for a given application. In this work we propose a theoretical framework for privacy-preserving systems, endowed with a general definition of privacy in terms of the estimation error incurred by an attacker who aims to disclose the private information that the system is designed to conceal. We show that our framework permits interpreting and comparing a number of well-known metrics under a common perspective. The arguments behind these interpretations are based on fundame...

  8. Statistical Inference for Partially Linear Regression Models with Measurement Errors

    Institute of Scientific and Technical Information of China (English)

    Jinhong YOU; Qinfeng XU; Bin ZHOU

    2008-01-01

    In this paper, the authors investigate three aspects of statistical inference for partially linear regression models where some covariates are measured with errors. Firstly, a bandwidth selection procedure is proposed which combines the difference-based technique and the GCV method. Secondly, a goodness-of-fit test procedure is proposed which extends the generalized likelihood technique. Thirdly, a variable selection procedure for the parametric part is provided based on nonconcave penalization and corrected profile least squares. As in "Variable selection via nonconcave penalized likelihood and its oracle properties" (J. Amer. Statist. Assoc., 96, 2001, 1348-1360), it is shown that the resulting estimator has an oracle property with a proper choice of regularization parameters and penalty function. Simulation studies are conducted to illustrate the finite sample performances of the proposed procedures.

  9. Bivariate control chart with copula

    Science.gov (United States)

    Lestari, Tika; Syuhada, Khreshna; Mukhaiyar, Utriweni

    2015-12-01

    The control chart is a central and powerful tool in statistical process control for detecting and classifying data as either in control or out of control. Conceptually, it rests on the theory of prediction intervals. Accordingly, in this paper we aim to construct what are called predictive bivariate control charts, both classical and copula-based. We argue that an appropriate joint distribution function may be well estimated by employing a copula. A numerical analysis is carried out to illustrate that a copula-based control chart outperforms the others.
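
    As a rough sketch of the copula idea (a Gaussian-copula variant, not necessarily the construction of the paper): map each margin to normal scores through its empirical CDF, then flag points that fall outside a chi-square prediction region in the score space:

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(1)
        x = rng.gamma(2.0, 1.0, 500)                  # skewed in-control data
        y = 0.5 * x + rng.gamma(2.0, 1.0, 500)

        def normal_scores(u):
            ranks = stats.rankdata(u) / (len(u) + 1)  # empirical CDF in (0, 1)
            return stats.norm.ppf(ranks)

        # Gaussian copula assumption: the normal scores are jointly normal.
        z = np.column_stack([normal_scores(x), normal_scores(y)])
        cov = np.cov(z, rowvar=False)
        limit = stats.chi2.ppf(0.9973, df=2)          # ~3-sigma control limit
        d2 = np.einsum('ij,jk,ik->i', z, np.linalg.inv(cov), z)
        print(f"{np.mean(d2 > limit):.2%} of points signal out of control")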

  10. Francesca Hughes: Architecture of Error: Matter, Measure and the Misadventure of Precision

    DEFF Research Database (Denmark)

    Foote, Jonathan

    Review of "Architecture of Error: Matter, Measure and the Misadventure of Precision" by Francesca Hughes (MIT Press, 2014)......Review of "Architecture of Error: Matter, Measure and the Misadventure of Precision" by Francesca Hughes (MIT Press, 2014)...

  11. Statistical Methods to Adjust for Measurement Error in Risk Prediction Models and Observational Studies

    OpenAIRE

    Braun, Danielle

    2013-01-01

    The first part of this dissertation focuses on methods to adjust for measurement error in risk prediction models. In chapter one, we propose a nonparametric adjustment for measurement error in time to event data. Measurement error in time to event data used as a predictor will lead to inaccurate predictions. This arises in the context of self-reported family history, a time to event covariate often measured with error, used in Mendelian risk prediction models. Using validation data, we propos...

  12. Cognitive error in the measurement of investment returns

    OpenAIRE

    Hayley, S.

    2015-01-01

    This thesis identifies and quantifies the impact of cognitive errors in certain aspects of investor decision-making. One error is that investors are unaware that the Internal Rate of Return (IRR) is a biased indicator of expected terminal wealth for any dynamic strategy where the amount invested is systematically related to the returns made to date. This error leads investors to use Value Averaging (VA). This thesis demonstrates that this is an inefficient strategy, since alternative strategi...
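
    The IRR bias is easy to see in a toy two-path example (hypothetical numbers, not the thesis's analysis): a strategy that invests a fixed cash amount each period earns a high IRR on a dip-then-recovery price path and a low IRR on the mirror-image path, even though buy-and-hold returns 0% on both, so the IRR reflects the contribution path rather than expected terminal wealth:

        import numpy as np

        def irr(cashflows):
            # Roots of the NPV polynomial in g = 1 + irr (coefficients are the
            # cash flows at t = 0, 1, 2, earliest first).
            g = np.roots(cashflows)
            g = g[np.isreal(g) & (g.real > 0)].real
            return g[0] - 1.0

        for prices in ([100, 50, 100], [100, 200, 100]):
            shares = 100 / prices[0] + 100 / prices[1]  # invest 100 at t0 and t1
            final = shares * prices[2]
            print(prices, f"IRR = {irr([-100, -100, final]):+.1%}")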

  13. A Simulation Analysis of Bivariate Availability Models

    OpenAIRE

    Caruso, Elise M.

    2000-01-01

    Equipment behavior is often discussed in terms of age and use. For example, an automobile is frequently referred to as 3 years old with 30,000 miles. Bivariate failure modeling provides a framework for studying system behavior as a function of two variables. This is meaningful when studying the reliability/availability of systems and equipment. This thesis extends work done in the area of bivariate failure modeling. Four bivariate failure models are selected for analysis. The study in...

  14. Some R graphics for bivariate distributions

    OpenAIRE

    Klein, Ingo

    2008-01-01

    There is no package in R to plot bivariate distributions of discrete variables or variables given by classes. Therefore, with the help of the already implemented R routine persp, R functions are proposed for 3-D plots of the bivariate distribution of discrete variables: the so-called stereogram, which generalizes the well-known histogram to cross-classified data, and the approximate bivariate distribution function for cross-classified data.

  15. Nonlinear analysis of bivariate data with cross recurrence plots

    CERN Document Server

    Marwan, N

    2002-01-01

    We extend the method of recurrence plots to cross recurrence plots (CRP), which enable a nonlinear analysis of bivariate data. To quantify CRPs, we introduce three measures of complexity based mainly on diagonal structures in CRPs. The CRP analysis of prototypical model systems with nonlinear interactions demonstrates that this technique can find nonlinear interrelations in bivariate time series that linear correlation tests do not detect. Applying the CRP analysis to climatological data, we find a complex relationship between rainfall and El Niño data.
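
    The core object is simple to compute: a cross recurrence plot marks every pair of times at which the two series come closer than a threshold, and the diagonal structures in that binary matrix carry the complexity measures. A minimal sketch:

        import numpy as np

        # CR[i, j] = 1 when |x_i - y_j| < eps (scalar series for simplicity).
        def cross_recurrence(x, y, eps):
            dist = np.abs(x[:, None] - y[None, :])  # pairwise distances
            return (dist < eps).astype(int)

        t = np.linspace(0, 8 * np.pi, 400)
        x = np.sin(t)
        y = np.sin(t + 0.5)                         # phase-shifted partner
        crp = cross_recurrence(x, y, eps=0.1)
        # Long diagonal lines mark stretches where the two series trace
        # similar trajectories; their length statistics quantify the CRP.
        print(crp.shape, crp.mean())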

  16. The consequences of measurement error when estimating the impact of obesity on income

    OpenAIRE

    O'Neill, Donal; Sweetman, Olive

    2013-01-01

    This paper examines the consequences of using self-reported measures of BMI when estimating the effect of BMI on income for women using both Irish and US data. We find that self-reported BMI is subject to substantial measurement error and that this error deviates from classical measurement error. These errors cause the traditional least squares estimator to overestimate the relationship between BMI and income. We show that neither the conditional expectation estimator nor the instrumental var...

  17. Comparative temperature measurement errors in high thermal gradient fields

    International Nuclear Information System (INIS)

    Accurate measurement of temperature in tumor and surrounding host tissue remains one of the major difficulties in clinical hyperthermia. The need for nonperturbable probes that can operate in electromagnetic and ultrasonic fields has been well established. Less attention has been given to the need for nonperturbing probes, i.e., temperature probes that do not alter the thermal environments they are sensing. This is important in situations where the probe traverses relatively high temperature gradients, such as those resulting from significant differentials in local SAR, blood flow, and thermal properties. Errors are reduced when the thermal properties of the probe and tumor tissue are matched. The ideal transducer would also have low thermal mass and microwave and/or ultrasonic absorption characteristics matched to tissue. Perturbations induced in the temperature gradient field by axial conduction along the probe shaft were compared for several of the available multisensor temperature probes as well as several prototype multisensor temperature transducers. Well-calibrated thermal gradients ranging from 0 to 10 °C/cm were produced with a stability of 2 millidegrees per minute. The probes compared were the three-sensor YSI thermocouple probe, a 14-sensor thermistor needle probe, a 10-sensor ion-implanted silicon substrate resistance probe, and a multisensor resistance probe fabricated using microelectronic techniques

  18. Large-scale spatial angle measurement and the pointing error analysis

    Science.gov (United States)

    Xiao, Wen-jian; Chen, Zhi-bin; Ma, Dong-xi; Zhang, Yong; Liu, Xian-hong; Qin, Meng-ze

    2016-05-01

    A large-scale spatial angle measurement method is proposed based on an inertial reference. A common measurement reference is established in inertial space, and the spatial vector coordinates of each measured axis in inertial space are obtained by using autocollimation tracking and inertial measurement technology. From the spatial coordinates of each test vector axis, the measurement of large-scale spatial angles is easily realized. The pointing error of the tracking device, which is based on two mirrors in the measurement system, is studied, and the influence of different installation errors on the pointing error is analyzed. This research lays a foundation for error allocation, calibration and compensation in the measurement system.

  19. Statistical Test for Bivariate Uniformity

    Directory of Open Access Journals (Sweden)

    Zhenmin Chen

    2014-01-01

    Full Text Available The purpose of a multidimensional uniformity test is to check whether the underlying probability distribution of a multidimensional population differs from the multidimensional uniform distribution. The multidimensional uniformity test has applications in various fields such as biology, astronomy, and computer science. Such a test, however, has received less attention in the literature compared with the univariate case. A new test statistic for checking multidimensional uniformity is proposed in this paper. Some important properties of the proposed test statistic are discussed. As a special case, the bivariate test statistic is discussed in detail. Monte Carlo simulation is used to compare the power of the newly proposed test with the distance-to-boundary test, which is a recently published statistical test for multidimensional uniformity. It is shown that the test proposed in this paper is more powerful than the distance-to-boundary test in some cases.

  20. Approximation of bivariate copulas by patched bivariate Fréchet copulas

    KAUST Repository

    Zheng, Yanting

    2011-03-01

    Bivariate Fréchet (BF) copulas characterize dependence as a mixture of three simple structures: comonotonicity, independence and countermonotonicity. They are easily interpretable but have limitations when used as approximations to general dependence structures. To improve the approximation property of the BF copulas and keep the advantage of easy interpretation, we develop a new copula approximation scheme by using BF copulas locally and patching the local pieces together. Error bounds and a probabilistic interpretation of this approximation scheme are developed. The new approximation scheme is compared with several existing copula approximations, including shuffle of min, checkmin, checkerboard and Bernstein approximations and exhibits better performance, especially in characterizing the local dependence. The utility of the new approximation scheme in insurance and finance is illustrated in the computation of the rainbow option prices and stop-loss premiums. © 2010 Elsevier B.V.
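
    The BF copula itself has a simple closed form, a convex mixture of the three Fréchet structures, and the patched scheme applies such mixtures locally on a grid of sub-rectangles. A minimal sketch of the global mixture:

        import numpy as np

        # BF copula: mixture of comonotonicity M(u,v) = min(u,v), independence
        # P(u,v) = u*v and countermonotonicity W(u,v) = max(u+v-1, 0), with
        # nonnegative weights that sum to one.
        def bf_copula(u, v, p_m, p_i, p_w):
            assert abs(p_m + p_i + p_w - 1.0) < 1e-12
            return (p_m * np.minimum(u, v)
                    + p_i * u * v
                    + p_w * np.maximum(u + v - 1.0, 0.0))

        print(bf_copula(0.3, 0.7, 0.2, 0.7, 0.1))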

  1. Swath altimetry measurements of the mainstem Amazon River: measurement errors and hydraulic implications

    OpenAIRE

    Wilson, M.D.; Durand, M.; H. C. Jung; D. Alsdorf

    2014-01-01

    The Surface Water and Ocean Topography (SWOT) mission, scheduled for launch in 2020, will provide a step-change improvement in the measurement of terrestrial surface water storage and dynamics. In particular, it will provide the first, routine two-dimensional measurements of water surface elevations. In this paper, we aimed to (i) characterize and illustrate in two-dimensions the errors which may be found in SWOT swath measurements of terrestrial surface water, (ii) simula...

  2. Swath-altimetry measurements of the main stem Amazon River: measurement errors and hydraulic implications

    OpenAIRE

    Wilson, M.D.; Durand, M.; H. C. Jung; D. Alsdorf

    2015-01-01

    The Surface Water and Ocean Topography (SWOT) mission, scheduled for launch in 2020, will provide a step-change improvement in the measurement of terrestrial surface-water storage and dynamics. In particular, it will provide the first, routine two-dimensional measurements of water-surface elevations. In this paper, we aimed to (i) characterise and illustrate in two dimensions the errors which may be found in SWOT swath measurements of terrestrial surface water, (ii) simulate...

  3. ERROR PROCESSING METHOD OF CYCLOIDAL GEAR MEASUREMENT USING 3D COORDINATES MEASURING MACHINE

    Institute of Scientific and Technical Information of China (English)

    1998-01-01

    An error processing method is presented, based on optimization theory and microcomputer techniques, which can be successfully used in cycloidal gear measurement on a three-dimensional coordinate measuring machine (CMM). In the procedure, the minimum of the quadratic sum of the normal deviations is used as the objective function, and the equidistant curve is dealt with instead of the tooth profile. The CMM is a highly accurate measuring machine which provides a way to evaluate the accuracy of the cycloidal gear completely.

  4. A simple bivariate count data regression model

    OpenAIRE

    Shiferaw Gurmu; John Elder

    2007-01-01

    This paper develops a simple bivariate count data regression model in which dependence between count variables is introduced by means of stochastically related unobserved heterogeneity components. Unlike existing commonly used bivariate models, we obtain a computationally simple closed form of the model with an unrestricted correlation pattern.

  5. Extreme behavior of bivariate elliptical distributions

    OpenAIRE

    Asimit, A. V.; Jones, B

    2007-01-01

    This paper exploits a stochastic representation of bivariate elliptical distributions in order to obtain asymptotic results which are determined by the tail behavior of the generator. Under certain specified assumptions, we present the limiting distribution of componentwise maxima, the limiting upper copula, and a bivariate version of the classical peaks over threshold result.

  6. Stochastic ordering of bivariate elliptical distributions

    OpenAIRE

    Landsman, Z; Tsanakas, A.

    2006-01-01

    It is shown that for elliptically distributed bivariate random vectors, the riskiness and dependence strength of random portfolios, in the sense of the univariate convex and bivariate concordance stochastic orders respectively, can be simply characterised in terms of the vector's Σ-matrix.

  7. Angle measurement error and compensation for decentration rotation of circular gratings

    Institute of Scientific and Technical Information of China (English)

    CHEN Xi-jun; WANG Zhen-huan; ZENG Qing-shuang

    2010-01-01

    As the geometric center of a circular grating does not coincide with the rotation center, the angle measurement error of the circular grating is analyzed. Based on the moiré fringe equations under decentration, a mathematical model of the angle measurement error is derived. It is concluded that the decentration between the center of the circular grating and the center of the revolving shaft leads to a first-harmonic error in the angle measurement. The correctness of this result is proved by experimental data. A method of error compensation is presented, and the angle measurement accuracy of the circular grating is effectively improved by the compensation.

  8. Swath altimetry measurements of the mainstem Amazon River: measurement errors and hydraulic implications

    Directory of Open Access Journals (Sweden)

    M. D. Wilson

    2014-08-01

    Full Text Available The Surface Water and Ocean Topography (SWOT) mission, scheduled for launch in 2020, will provide a step-change improvement in the measurement of terrestrial surface water storage and dynamics. In particular, it will provide the first, routine two-dimensional measurements of water surface elevations. In this paper, we aimed to (i) characterize and illustrate in two dimensions the errors which may be found in SWOT swath measurements of terrestrial surface water, (ii) simulate the spatio-temporal sampling scheme of SWOT for the Amazon, and (iii) assess the impact of each of these on estimates of water surface slope and river discharge which may be obtained from SWOT imagery. We based our analysis on a "virtual mission" for a 300 km reach of the central Amazon (Solimões) River at its confluence with the Purus River, using a hydraulic model to provide water surface elevations according to SWOT spatio-temporal sampling, to which errors were added based on a two-dimensional height error spectrum derived from the SWOT design requirements. We thereby obtained water surface elevation measurements for the Amazon mainstem as may be observed by SWOT. Using these measurements, we derived estimates of river slope and discharge and compared them to those obtained directly from the hydraulic model. We found that cross-channel and along-reach averaging of SWOT measurements using reach lengths of greater than 4 km for the Solimões and 7.5 km for the Purus reduced the effect of systematic height errors, enabling discharge to be reproduced accurately from the water height, assuming known bathymetry and friction. Using cross-section averaging and 20 km reach lengths, results show Nash–Sutcliffe model efficiency values of 0.99 for the Solimões and 0.88 for the Purus, with 2.6 and 19.1% average overall error in discharge, respectively.
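
    The benefit of reach averaging can be sketched with a toy model: average noisy node heights within reaches and fit the slope through the reach means, so that longer reaches suppress the height noise (white noise here for simplicity, whereas SWOT errors follow a spatially correlated spectrum; all numbers illustrative):

        import numpy as np

        rng = np.random.default_rng(2)
        dx = 0.2                                       # node spacing, km
        xs = np.arange(0, 40, dx)                      # 40 km of river nodes
        h_true = -1.5e-5 * xs * 1e3                    # 1.5 cm/km slope, heights in m
        h_obs = h_true + rng.normal(0, 0.10, xs.size)  # ~10 cm height noise

        for reach_km in (1, 4, 20):
            n = int(reach_km / dx)
            nreach = xs.size // n
            h_bar = h_obs[:nreach * n].reshape(nreach, n).mean(axis=1)
            x_bar = xs[:nreach * n].reshape(nreach, n).mean(axis=1)
            slope = -np.polyfit(x_bar * 1e3, h_bar, 1)[0] * 1e5  # cm/km
            print(f"{reach_km:>2} km reaches -> slope {slope:5.2f} cm/km")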

  9. Swath altimetry measurements of the mainstem Amazon River: measurement errors and hydraulic implications

    Science.gov (United States)

    Wilson, M. D.; Durand, M.; Jung, H. C.; Alsdorf, D.

    2014-08-01

    The Surface Water and Ocean Topography (SWOT) mission, scheduled for launch in 2020, will provide a step-change improvement in the measurement of terrestrial surface water storage and dynamics. In particular, it will provide the first, routine two-dimensional measurements of water surface elevations. In this paper, we aimed to (i) characterize and illustrate in two dimensions the errors which may be found in SWOT swath measurements of terrestrial surface water, (ii) simulate the spatio-temporal sampling scheme of SWOT for the Amazon, and (iii) assess the impact of each of these on estimates of water surface slope and river discharge which may be obtained from SWOT imagery. We based our analysis on a "virtual mission" for a 300 km reach of the central Amazon (Solimões) River at its confluence with the Purus River, using a hydraulic model to provide water surface elevations according to SWOT spatio-temporal sampling to which errors were added based on a two-dimensional height error spectrum derived from the SWOT design requirements. We thereby obtained water surface elevation measurements for the Amazon mainstem as may be observed by SWOT. Using these measurements, we derived estimates of river slope and discharge and compared them to those obtained directly from the hydraulic model. We found that cross-channel and along-reach averaging of SWOT measurements using reach lengths of greater than 4 km for the Solimões and 7.5 km for the Purus reduced the effect of systematic height errors, enabling discharge to be reproduced accurately from the water height, assuming known bathymetry and friction. Using cross-section averaging and 20 km reach lengths, results show Nash-Sutcliffe model efficiency values of 0.99 for the Solimões and 0.88 for the Purus, with 2.6 and 19.1% average overall error in discharge, respectively.

  10. Bivariate ensemble model output statistics approach for joint forecasting of wind speed and temperature

    Science.gov (United States)

    Baran, Sándor; Möller, Annette

    2016-06-01

    Forecast ensembles are typically employed to account for prediction uncertainties in numerical weather prediction models. However, ensembles often exhibit biases and dispersion errors, so they require statistical post-processing to improve their predictive performance. Two popular univariate post-processing models are Bayesian model averaging (BMA) and ensemble model output statistics (EMOS). In the last few years, increased interest has emerged in developing multivariate post-processing models that incorporate dependencies between weather quantities, such as a bivariate distribution for wind vectors or a more general setting allowing any types of weather variables to be combined. In line with a recently proposed approach to model temperature and wind speed jointly by a bivariate BMA model, this paper introduces an EMOS model for these weather quantities based on a bivariate truncated normal distribution. The bivariate EMOS model is applied to temperature and wind speed forecasts of the 8-member University of Washington mesoscale ensemble and the 11-member ALADIN-HUNEPS ensemble of the Hungarian Meteorological Service, and its predictive performance is compared to that of the bivariate BMA model and a multivariate Gaussian copula approach post-processing the margins with univariate EMOS. While the predictive skills of the compared methods are similar, the bivariate EMOS model requires considerably lower computation times than the bivariate BMA method.

  11. Error Ellipsoid Analysis for the Diameter Measurement of Cylindroid Components Using a Laser Radar Measurement System.

    Science.gov (United States)

    Du, Zhengchun; Wu, Zhaoyong; Yang, Jianguo

    2016-01-01

    The use of three-dimensional (3D) data in the industrial measurement field is becoming increasingly popular because of the rapid development of laser scanning techniques based on the time-of-flight principle. However, the accuracy and uncertainty of these types of measurement methods are seldom investigated. In this study, a mathematical uncertainty evaluation model for the diameter measurement of standard cylindroid components is proposed and applied to a 3D laser radar measurement system (LRMS). First, a single-point error ellipsoid analysis for the LRMS was established. An error ellipsoid model and algorithm for diameter measurement of cylindroid components was then proposed based on the single-point error ellipsoid. Finally, four experiments were conducted using the LRMS to measure the diameter of a standard cylinder in the laboratory. The experimental results of the uncertainty evaluation matched the predictions well. The proposed uncertainty evaluation model for cylindrical diameters can provide a reliable method for actual measurements and support further accuracy improvement of the LRMS. PMID:27213385
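
    The single-point ellipsoid can be sketched by first-order propagation: the Jacobian of the spherical-to-Cartesian transform maps the instrument's range and angle variances into a 3 × 3 Cartesian covariance, whose eigen-decomposition gives the ellipsoid semi-axes (illustrative numbers, not the LRMS model):

        import numpy as np

        # x = r*cos(el)*cos(az), y = r*cos(el)*sin(az), z = r*sin(el)
        def point_covariance(r, az, el, sig_r, sig_az, sig_el):
            ca, sa, ce, se = np.cos(az), np.sin(az), np.cos(el), np.sin(el)
            J = np.array([[ce * ca, -r * ce * sa, -r * se * ca],
                          [ce * sa,  r * ce * ca, -r * se * sa],
                          [se,       0.0,          r * ce]])
            S = np.diag([sig_r ** 2, sig_az ** 2, sig_el ** 2])
            return J @ S @ J.T

        cov = point_covariance(5.0, 0.3, 0.2, 1e-4, 2e-5, 2e-5)  # m and rad
        semi_axes_mm = 1e3 * np.sqrt(np.linalg.eigvalsh(cov))
        print("error ellipsoid semi-axes (mm):", semi_axes_mm)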

  12. Measurement of four-degree-of-freedom error motions based on non-diffracting beam

    Science.gov (United States)

    Zhai, Zhongsheng; Lv, Qinghua; Wang, Xuanze; Shang, Yiyuan; Yang, Liangen; Kuang, Zheng; Bennett, Peter

    2016-05-01

    A measuring method for the determination of error motions of linear stages based on non-diffracting beams (NDB) is presented. A right-angle prism and a beam splitter are adopted as the measuring head, which is fixed on the moving stage in order to sense the straightness and angular errors. Two CCDs are used to capture the NDB patterns that carry the errors. Four different types of errors, namely the vertical straightness error and three rotational errors (the pitch, roll and yaw errors), can be separated and distinguished through theoretical analysis of the shifts in the centre positions on the two cameras. Simulation results show that the proposed method using NDB can measure four-degree-of-freedom errors of a linear stage.

  13. Pivot and cluster strategy: a preventive measure against diagnostic errors

    Directory of Open Access Journals (Sweden)

    Shimizu T

    2012-11-01

    Full Text Available Diagnostic errors constitute a substantial portion of preventable medical errors. The accumulation of evidence shows that most errors result from one or more cognitive biases, and a variety of debiasing strategies have been introduced. In this article, we introduce a new diagnostic strategy, the pivot and cluster strategy (PCS), encompassing both of the two mental processes in making a diagnosis, referred to as the intuitive process (System 1) and the analytical process (System 2), in one strategy. With PCS, physicians can recall a set of the most likely differential diagnoses (System 2) of an initial diagnosis made by the physicians' intuitive process (System 1), thereby enabling physicians to double-check their diagnosis with two consecutive diagnostic processes. PCS is expected to reduce cognitive errors and enhance diagnostic accuracy and validity, thereby realizing better patient outcomes and cost- and time-effective health care management.

  14. Measurement errors in retrospective reports of event histories : a validation study with Finnish register data

    OpenAIRE

    Pyy-Martikainen, Marjo; Rendtel, Ulrich

    2009-01-01

    "It is well known that retrospective survey reports of event histories are affected by measurement errors. Yet little is known about the determinants of measurement errors in event history data or their effects on event history analysis. Making use of longitudinal register data linked at person-level with longitudinal survey data, we provide novel evidence about 1. type and magnitude of measurement errors in survey reports of event histories, 2. validity of classical assumptions about measure...

  15. Neutron-induced soft error rate measurements in semiconductor memories

    International Nuclear Information System (INIS)

    Soft error rate (SER) testing of devices has been performed using the neutron beam at the Radiation Science and Engineering Center at Penn State University. The soft error susceptibility of different memory chips working at different technology nodes and operating voltages is determined. The effect of 10B as an in situ excess charge source on SER is observed. The effect of higher-energy neutrons on circuit operation will be published later. The Penn State Breazeale Nuclear Reactor was used as the neutron source in the experiments. The high neutron flux allows for accelerated testing of the SER phenomenon. The experiments and analyses were performed only on soft errors due to thermal neutrons. Various memory chips manufactured by different vendors were tested at various supply voltages and reactor power levels. The effect on SER of the 10B reaction caused by thermal neutron absorption is discussed.

  16. Microelectromechanical Systems Inertial Measurement Unit Error Modelling and Error Analysis for Low-cost Strapdown Inertial Navigation System

    Directory of Open Access Journals (Sweden)

    R. Ramalingam

    2009-11-01

    Full Text Available This paper presents error modelling and error analysis of a microelectromechanical systems (MEMS) inertial measurement unit (IMU) for a low-cost strapdown inertial navigation system (INS). The INS consists of the IMU and a navigation processor. The IMU provides acceleration and angular rate of the vehicle in all three axes. In this paper, errors that affect the MEMS IMU, which is of low cost and small volume, are stochastically modelled and analysed using Allan variance. Wavelet decomposition has been introduced to remove the high-frequency noise that affects the sensors, in order to obtain the original values of angular rates and accelerations with less noise. This increases the accuracy of the strapdown INS. The results show the effect of errors in the output of the sensors, the easy interpretation of random errors by Allan variance, and the increase in accuracy when wavelet decomposition is used for denoising inertial sensor raw data. Defence Science Journal, 2009, 59(6), pp. 650-658, DOI: http://dx.doi.org/10.14429/dsj.59.1571
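
    Allan variance separates the random error terms by how the variance of cluster averages scales with the cluster time tau; white rate noise, for instance, shows up as a -1/2 slope on a log-log Allan deviation plot. A minimal overlapping-estimator sketch on a synthetic white-noise gyro (illustrative parameters):

        import numpy as np

        # Overlapping Allan deviation of a rate signal sampled at fs (Hz).
        def allan_deviation(rate, fs, taus):
            theta = np.cumsum(rate) / fs                # integrated angle
            out = []
            for tau in taus:
                m = int(tau * fs)
                d = theta[2 * m:] - 2 * theta[m:-m] + theta[:-2 * m]
                out.append(np.sqrt(0.5 * np.mean(d ** 2)) / tau)
            return np.array(out)

        rng = np.random.default_rng(3)
        fs = 100.0
        gyro = 0.05 * rng.normal(size=200_000)          # white noise only, deg/s
        taus = np.logspace(-1, 2, 20)                   # 0.1 s to 100 s
        print(allan_deviation(gyro, fs, taus)[:3])      # falls as tau**-0.5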

  17. Swath-altimetry measurements of the main stem Amazon River: measurement errors and hydraulic implications

    Science.gov (United States)

    Wilson, M. D.; Durand, M.; Jung, H. C.; Alsdorf, D.

    2015-04-01

    The Surface Water and Ocean Topography (SWOT) mission, scheduled for launch in 2020, will provide a step-change improvement in the measurement of terrestrial surface-water storage and dynamics. In particular, it will provide the first, routine two-dimensional measurements of water-surface elevations. In this paper, we aimed to (i) characterise and illustrate in two dimensions the errors which may be found in SWOT swath measurements of terrestrial surface water, (ii) simulate the spatio-temporal sampling scheme of SWOT for the Amazon, and (iii) assess the impact of each of these on estimates of water-surface slope and river discharge which may be obtained from SWOT imagery. We based our analysis on a virtual mission for a ~260 km reach of the central Amazon (Solimões) River, using a hydraulic model to provide water-surface elevations according to SWOT spatio-temporal sampling to which errors were added based on a two-dimensional height error spectrum derived from the SWOT design requirements. We thereby obtained water-surface elevation measurements for the Amazon main stem as may be observed by SWOT. Using these measurements, we derived estimates of river slope and discharge and compared them to those obtained directly from the hydraulic model. We found that cross-channel and along-reach averaging of SWOT measurements using reach lengths greater than 4 km for the Solimões and 7.5 km for the Purus reduced the effect of systematic height errors, enabling discharge to be reproduced accurately from the water height, assuming known bathymetry and friction. Using cross-sectional averaging and 20 km reach lengths, results show Nash-Sutcliffe model efficiency values of 0.99 for the Solimões and 0.88 for the Purus, with 2.6 and 19.1 % average overall error in discharge, respectively. We extend the results to other rivers worldwide and infer that SWOT-derived discharge estimates may be more accurate for rivers with larger channel widths (permitting a greater level of cross

  18. Integrated Geometric Errors Simulation, Measurement and Compensation of Vertical Machining Centres

    OpenAIRE

    Gohel, C.K.; Makwana, A.H.

    2014-01-01

    This paper presents research on geometric errors of simulated geometry which are measured and compensated in vertical machining centres. Many errors in CNC machine tools affect the accuracy and repeatability of manufacture. Most of these errors depend on specific parameters, such as strength and stress, dimensional deviations of the machine tool structure, thermal variations, cutting-force-induced errors and tool wear. In this paper a machining system that ...

  19. Information-theoretic approach to quantum error correction and reversible measurement

    CERN Document Server

    Nielsen, M A; Schumacher, B; Barnum, H N; Caves, Carlton M.; Schumacher, Benjamin; Barnum, Howard

    1997-01-01

    Quantum operations provide a general description of the state changes allowed by quantum mechanics. The reversal of quantum operations is important for quantum error-correcting codes, teleportation, and reversing quantum measurements. We derive information-theoretic conditions and equivalent algebraic conditions that are necessary and sufficient for a general quantum operation to be reversible. We analyze the thermodynamic cost of error correction and show that error correction can be regarded as a kind of "Maxwell demon," for which there is an entropy cost associated with information obtained from measurements performed during error correction. A prescription for thermodynamically efficient error correction is given.

  20. Positive phase error from parallel conductance in tetrapolar bio-impedance measurements and its compensation

    OpenAIRE

    Ivan M Roitt; Torben Lund; Callaghan, Martina F.; Richard H Bayford

    2010-01-01

    Bioimpedance measurements are of great use and can provide considerable insight into biological processes. However, there are a number of possible sources of measurement error that must be considered. The dominant source of error is found in bipolar measurements, where electrode polarisation effects are superimposed on the true impedance of the sample. Even with the tetrapolar approach that is commonly used to circumvent this issue, other errors can persist. ...

  1. Sensor Interaction as a Source of the Electromagnetic Field Measurement Error

    Directory of Open Access Journals (Sweden)

    Hartansky R.

    2014-12-01

    Full Text Available The article deals with analytical calculation and numerical simulation of the mutual interaction of electromagnetic sensors. The sensors are components of a field probe, and their interaction causes measurement error. The electromagnetic field probe contains three mutually perpendicular sensors in order to measure the electric field vector. The sensor error is evaluated as a function of the relative positions of the sensors. On this basis, recommendations are proposed for the construction of electromagnetic field probes that minimize sensor interaction and measurement error.

  2. The bivariate Power-Normal and the bivariate Johnson’s System bounded distribution in forestry, including height curves

    OpenAIRE

    Mønness, Erik Neslein

    2015-01-01

    A bivariate diameter and height distribution yields a unified model of a forest stand. The bivariate Johnson's System bounded distribution and the bivariate power-normal distribution are explored. The power-normal originates from the well-known Box-Cox transformation. As evaluated by the bivariate Kolmogorov-Smirnov distance, the bivariate power-normal distribution seems to be superior to the bivariate Johnson's System bounded distribution. The conditional median height given the diameter is...

  3. Study of systematic errors in the luminosity measurement

    International Nuclear Information System (INIS)

    The experimental systematic error in the barrel region was estimated to be 0.44%. This value is derived considering the systematic uncertainties from the dominant sources, but does not include uncertainties which are still being studied. In the end cap region, the study of shower behavior and the clustering effect is under way in order to determine the angular resolution at the low-angle edge of the Liquid Argon Calorimeter. We also expect that the systematic error in this region will be less than 1%. The technical precision of the theoretical uncertainty is better than 0.1%, comparing the Tobimatsu-Shimizu program and BABAMC as modified by ALEPH. To estimate the physical uncertainty we will use ALIBABA [9], which includes the O(α²) QED correction in leading-log approximation. (J.P.N.)

  4. Study of systematic errors in the luminosity measurement

    Energy Technology Data Exchange (ETDEWEB)

    Arima, Tatsumi [Tsukuba Univ., Ibaraki (Japan). Inst. of Applied Physics

    1993-04-01

    The experimental systematic error in the barrel region was estimated to be 0.44%. This value is derived considering the systematic uncertainties from the dominant sources, but does not include uncertainties which are still being studied. In the end cap region, the study of shower behavior and the clustering effect is under way in order to determine the angular resolution at the low-angle edge of the Liquid Argon Calorimeter. We also expect that the systematic error in this region will be less than 1%. The technical precision of the theoretical uncertainty is better than 0.1%, comparing the Tobimatsu-Shimizu program and BABAMC as modified by ALEPH. To estimate the physical uncertainty we will use ALIBABA [9], which includes the O(α²) QED correction in leading-log approximation. (J.P.N.).

  5. Systematic errors in cosmic microwave background polarization measurements

    CERN Document Server

    O'Dea, D; Johnson, B R; Dea, Daniel O'; Challinor, Anthony

    2006-01-01

    We investigate the impact of instrumental systematic errors on the potential of cosmic microwave background polarization experiments targeting primordial B-modes. To do so, we introduce spin-weighted, Mueller-matrix-valued fields describing the linear response of the imperfect optical system and receiver, and give a careful discussion of the behaviour of the induced systematic effects under rotation of the instrument. We give the correspondence between the matrix components and known optical and receiver imperfections, and compare the likely performance of pseudo-correlation receivers and those that modulate the polarization with a half-wave plate. The latter is shown to have the significant advantage of not coupling the total intensity into polarization for perfect optics, but potential effects like optical distortions that may be introduced by the quasi-optical wave plate warrant further investigation. A fast method for tolerancing time-invariant systematic effects is presented, which propagates errors throug...

  6. Methodical errors of measurement of the human body tissues electrical parameters

    OpenAIRE

    Antoniuk, O.; Pokhodylo, Y.

    2015-01-01

    Sources of methodical measurement error in the immittance parameters of biological tissues are described. The modelling of measurement errors of the RC-parameters of biological tissue equivalent circuits over the frequency range is analysed. Recommendations on the choice of test-signal frequency for the measurement of these elements are provided.

  7. Error analysis of rigid body posture measurement system based on circular feature points

    Science.gov (United States)

    Huo, Ju; Cui, Jishan; Yang, Ning

    2015-02-01

    For the problem of determining pose parameters with monocular vision, using target feature points on a planar quadrilateral, an improved two-stage iterative algorithm is proposed to optimize the rigid-body posture measurement model. A monocular-vision rigid-body posture measurement system is designed; the coordinates of each feature point, measured in its own coordinate system, are unified experimentally into a single coordinate system; and the sources of error in the rigid-body posture measurement system are analysed theoretically and through simulation experiments. Combined with experimental analysis of the pose measurement accuracy under simulated error conditions, the comprehensive error of the measurement system is given, which has some theoretical significance for improving measurement precision.

  8. A new bivariate negative binomial regression model

    Science.gov (United States)

    Faroughi, Pouya; Ismail, Noriszura

    2014-12-01

    This paper introduces a new form of bivariate negative binomial (BNB-1) regression which can be fitted to bivariate and correlated count data with covariates. The BNB regression discussed in this study can be fitted to bivariate and overdispersed count data with positive, zero or negative correlations. The joint p.m.f. of the BNB-1 distribution is derived from the product of two negative binomial marginals with a multiplicative factor parameter. Several testing methods were used to check the overdispersion and goodness-of-fit of the model. An application of BNB-1 regression is illustrated on a Malaysian motor insurance dataset. The results indicate that BNB-1 regression fits better than the bivariate Poisson and BNB-2 models with regard to the Akaike information criterion.

  9. Constructions for a bivariate beta distribution

    OpenAIRE

    Olkin, Ingram; Trikalinos, Thomas A.

    2014-01-01

    The beta distribution is a basic distribution serving several purposes. It is used to model data, and also, as a more flexible version of the uniform distribution, it serves as a prior distribution for a binomial probability. The bivariate beta distribution plays a similar role for two probabilities that have a bivariate binomial distribution. We provide a new multivariate distribution with beta marginal distributions, positive probability over the unit square, and correlations over the full ...
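
    The distribution proposed here is new, but the shared-component idea behind many bivariate beta constructions is easy to sketch: dividing two independent gamma variates by a common third one gives beta marginals with positive dependence (a classical construction, shown only for orientation):

        import numpy as np

        # X = G1/(G1+G3) ~ Beta(a1, a3), Y = G2/(G2+G3) ~ Beta(a2, a3).
        rng = np.random.default_rng(4)
        a1, a2, a3 = 2.0, 3.0, 1.5
        g1 = rng.gamma(a1, size=100_000)
        g2 = rng.gamma(a2, size=100_000)
        g3 = rng.gamma(a3, size=100_000)
        x, y = g1 / (g1 + g3), g2 / (g2 + g3)
        print("means:", x.mean(), y.mean())         # a1/(a1+a3), a2/(a2+a3)
        print("corr:", np.corrcoef(x, y)[0, 1])     # positive via shared g3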

  10. Stationarity of Bivariate Dynamic Contagion Processes

    OpenAIRE

    Angelos Dassios; Xin Dong

    2014-01-01

    The Bivariate Dynamic Contagion Processes (BDCP) are a broad class of bivariate point processes characterized by the intensities as a general class of piecewise deterministic Markov processes. The BDCP describes a rich dynamic structure where the system is under the influence of both external and internal factors modelled by a shot-noise Cox process and a generalized Hawkes process respectively. In this paper we mainly address the stationarity issue for the BDCP, which is important in applica...

  11. Bivariate Interpolation by Splines and Approximation Order

    OpenAIRE

    Nürnberger, Günther

    1996-01-01

    We construct Hermite interpolation sets for bivariate spline spaces of arbitrary degree and smoothness one on non-rectangular domains with uniform-type triangulations. This is done by applying a general method for constructing Lagrange interpolation sets for bivariate spline spaces of arbitrary degree and smoothness. It is shown that Hermite interpolation yields (nearly) optimal approximation order. Applications to data fitting problems and numerical examples are given.

  12. Some theory of bivariate risk attitude

    OpenAIRE

    Marta Cardin; Paola Ferretti

    2004-01-01

    In recent years the study of the impact of risk attitude among risks has become a major topic, in particular in the decision sciences. Attention was subsequently devoted to the more general case of bivariate random variables. The first approach to multivariate risk aversion was proposed by de Finetti (1952) and Richard (1975), and it is related to the bivariate case. More recently, multivariate risk aversion has been studied by Scarsini (1985, 1988, 1999). Nevertheless, even if decision problems ...

  13. Correlated Biomarker Measurement Error: An Important Threat to Inference in Environmental Epidemiology

    OpenAIRE

    Pollack, A. Z.; Perkins, N.J.; Mumford, S. L.; A. Ye; Schisterman, E.F.

    2012-01-01

    Utilizing multiple biomarkers is increasingly common in epidemiology. However, the combined impact of correlated exposure measurement error, unmeasured confounding, interaction, and limits of detection (LODs) on inference for multiple biomarkers is unknown. We conducted data-driven simulations evaluating bias from correlated measurement error with varying reliability coefficients (R), odds ratios (ORs), levels of correlation between exposures and error, LODs, and interactions. Blood cadmium a...
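
    The flavour of such simulations is easy to sketch: regress an outcome on two error-prone surrogates whose errors are mutually correlated, and the estimated coefficients both attenuate and contaminate each other (all parameters illustrative, not the paper's design):

        import numpy as np

        rng = np.random.default_rng(5)
        n = 200_000
        x1 = rng.normal(size=n)
        x2 = 0.4 * x1 + rng.normal(size=n)        # correlated true exposures
        y = 1.0 * x1 + 0.5 * x2 + rng.normal(size=n)
        # Measurement errors correlated across the two biomarkers:
        e = rng.multivariate_normal([0, 0], [[0.5, 0.3], [0.3, 0.5]], size=n)
        w = np.column_stack([x1, x2]) + e
        W = np.column_stack([np.ones(n), w])
        beta = np.linalg.lstsq(W, y, rcond=None)[0]
        print("true (1.0, 0.5), estimated:", beta[1:])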

  14. Compensation method for the alignment angle error of a gear axis in profile deviation measurement

    International Nuclear Information System (INIS)

    In the precision measurement of involute helical gears, the alignment angle error of the gear axis, caused by the assembly error of the gear measuring machine, affects the measurement accuracy of profile deviation. A model of the involute helical gear is established under the condition that an alignment angle error of the gear axis exists. Based on the measurement theory of profile deviation, and without changing the initial measurement method or data processing of the gear measuring machine, a compensation method is proposed for the alignment angle error of the gear axis that is included in profile deviation measurement results. Using this method, the alignment angle error of the gear axis can be compensated for precisely. Experiments comparing the residual alignment angle error after compensation with the initial alignment angle error were performed to verify the accuracy and feasibility of the method. Experimental results show that the residual alignment angle error included in the profile deviation measurement results is decreased by more than 85% after compensation, and this compensation method significantly improves the measurement accuracy of the profile deviation of involute helical gears. (paper)

  15. Design of roundness measurement model with multi-systematic error for cylindrical components with large radius

    Science.gov (United States)

    Sun, Chuanzhi; Wang, Lei; Tan, Jiubin; Zhao, Bo; Tang, Yangchao

    2016-02-01

    This paper designs a roundness measurement model with multiple systematic errors, which takes eccentricity, probe offset, the radius of the probe tip, and tilt error into account for roundness measurement of cylindrical components. The effects of the systematic errors and of the component radius on roundness measurement are analysed. The proposed method is built on an instrument with a high-precision rotating spindle, and its effectiveness is verified by experiment with a standard cylindrical component measured on a roundness measuring machine. Compared to the traditional limaçon measurement model, the accuracy of roundness measurement can be increased by about 2.2 μm using the proposed model for an object with a large radius of around 37 mm. The proposed method can improve the accuracy of roundness measurement and can be used for error separation, calibration, and comparison, especially for cylindrical components with a large radius.
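
    In the classical limaçon model the first harmonic of the measured radius absorbs the eccentricity, so fitting and removing it isolates the form error; terms neglected by that approximation grow with the part radius, which is what the proposed model accounts for. A least-squares sketch of the classical step (illustrative data, not the paper's model):

        import numpy as np

        rng = np.random.default_rng(6)
        t = np.linspace(0, 2 * np.pi, 720, endpoint=False)
        r = (37_000.0                  # nominal radius in um (~37 mm part)
             + 5.0 * np.cos(t - 0.7)   # eccentricity: first harmonic
             + 0.8 * np.cos(3 * t)     # genuine 3-lobe form error
             + rng.normal(0, 0.05, t.size))
        # r(t) ~ R + a*cos(t) + b*sin(t): fit and subtract the limacon terms.
        A = np.column_stack([np.ones_like(t), np.cos(t), np.sin(t)])
        coef, *_ = np.linalg.lstsq(A, r, rcond=None)
        residual = r - A @ coef
        print("roundness, peak-to-valley (um):", round(float(np.ptp(residual)), 2))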

  16. Algorithm-supported visual error correction (AVEC) of heart rate measurements in dogs, Canis lupus familiaris.

    Science.gov (United States)

    Schöberl, Iris; Kortekaas, Kim; Schöberl, Franz F; Kotrschal, Kurt

    2015-12-01

    Dog heart rate (HR) is characterized by a respiratory sinus arrhythmia, which makes automatic algorithms for error correction of HR measurements hard to apply. Here, we present a new method of error correction for HR data collected with the Polar system, including (1) visual inspection of the data, (2) a standardized way to decide with the aid of an algorithm whether or not a value is an outlier (i.e., "error"), and (3) the subsequent removal of this error from the data set. We applied our new error correction method to the HR data of 24 dogs and compared the uncorrected and corrected data, as well as the algorithm-supported visual error correction (AVEC) with the Polar error correction. The results showed that fewer values were identified as errors after AVEC than after the Polar error correction (p < 0.001), and HR strings with deleted values seemed to be closer to the original data than were those with inserted means. We concluded that our method of error correction is more suitable for dog HR and HR variability than is the customized Polar error correction, especially because AVEC decreases the likelihood of Type I errors, preserves the natural variability in HR, and does not lead to a time shift in the data. PMID:25540125
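
    The abstract gives AVEC only at a high level: visual inspection, an algorithm-aided outlier decision, then deletion of the flagged value. Below is a sketch of the deletion step under an assumed local-median outlier rule; the window size, the 30% threshold and the function name are illustrative choices, not the published algorithm, and the visual-inspection step is not automated here.

    ```python
    import numpy as np

    def avec_like_correction(rr_ms, window=5, max_dev=0.30):
        """Delete an inter-beat interval when it deviates from the local
        median by more than max_dev; deletion (rather than inserting a
        mean) is what the study found preserved natural HR variability."""
        rr = np.asarray(rr_ms, dtype=float)
        keep = np.ones(rr.size, dtype=bool)
        for i in range(rr.size):
            lo, hi = max(0, i - window), min(rr.size, i + window + 1)
            local_median = np.median(np.delete(rr[lo:hi], i - lo))
            if abs(rr[i] - local_median) > max_dev * local_median:
                keep[i] = False  # flagged as an 'error' value
        return rr[keep]

    print(avec_like_correction([500, 510, 495, 1400, 505, 498]))  # 1400 ms removed
    ```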

  17. A measurement methodology for dynamic angle of sight errors in hardware-in-the-loop simulation

    Science.gov (United States)

    Zhang, Wen-pan; Wu, Jun-hui; Gan, Lin; Zhao, Hong-peng; Liang, Wei-wei

    2015-10-01

    In order to precisely measure the dynamic angle of sight for hardware-in-the-loop simulation, a dynamic measurement methodology was established and a measurement system was built. The errors and drifts, such as synchronization delay, CCD measurement error and drift, laser spot error on the diffuse reflection plane, and optics axis drift of the laser, were measured and analyzed. First, by analyzing and measuring the synchronization time between the laser and the timing of the control data, an error control method was devised that lowered the synchronization delay to 21 μs. Then, the relationship between the CCD device and the laser spot position was calibrated precisely and fitted by two-dimensional surface fitting; CCD measurement error and drift were controlled below 0.26 mrad. Next, the angular resolution was calculated, and the laser spot error on the diffuse reflection plane was estimated to be 0.065 mrad. Finally, the optics axis drift of the laser was analyzed and measured and did not exceed 0.06 mrad. The measurement results indicate that the maximum of the errors and drifts of the measurement methodology is less than 0.275 mrad. The methodology can satisfy dynamic angle-of-sight measurements of higher precision and larger scale.
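
    The quoted bound of 0.275 mrad is consistent with a root-sum-square combination of the three angular components, which suggests (though the abstract does not say explicitly) that the components were treated as independent errors:

    ```python
    import math

    # Angular error components quoted in the abstract, in mrad (the 21 us
    # synchronization delay is a timing error and is handled separately):
    components = {
        "CCD measurement error and drift": 0.26,
        "laser spot error on diffuse reflection plane": 0.065,
        "laser optics axis drift": 0.06,
    }

    # Root-sum-square combination -- our assumption about how the quoted
    # total was formed, treating the components as independent:
    total = math.sqrt(sum(v ** 2 for v in components.values()))
    print(f"combined angular error = {total:.3f} mrad")  # -> 0.275 mrad
    ```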

  18. Zirconia thickness measurements for irradiated fuel rods: an approach to better understanding measurement error

    International Nuclear Information System (INIS)

    Non-destructive examinations (NDE) on irradiated PWR fuel rods have been performed since 1992 at the CEA/Cadarache Research Centre. Among the different controls performed, measurement of the zirconia thickness provides useful information on the axial and angular distribution of corrosion down the rod. This is necessary to compare the sensitivity of different cladding types with the creation of zirconia, as well as to detect and measure effects such as local corrosion. A dedicated apparatus based on eddy currents was used to measure the zirconia thicknesses. To verify the accuracy of our measurements, we compared the measurement results with the metallographic measurements of 39 samples. It was observed that the non-destructive measurements always overestimated the thickness of zirconia. The mean value of this systematic error was about 4 μm. We therefore tried to identify the origin of this error. We first observed that the sensor position was crucial. It must be in the exact same position for both the standard (tube section) and the rods. A poorly-positioned sensor on the rod produces overestimated measurement values. Other sources of uncertainty may also explain the difference with the exact values: first, the cladding of the standard was not irradiated. We know that some physical characteristics of cladding change during irradiation, in particular electrical conductivity. We do not know how this affects our measurement. Secondly, the rods still contained some decay heat. Thus, the temperature of the rod cladding could differ from the temperature of the standard. The electrical conductivity of the cladding and thus the eddy current response could be different. The sensor itself could also be affected by the temperature. We have performed several experiments on both heated cladding (not irradiated) and irradiated PWR fuel rods inside the hot cell. Based on the results of these tests and in agreement with our feedback, it was found that the device used in the

  19. Zirconia thickness measurements for irradiated fuel rods: an approach to better understanding measurement error

    Energy Technology Data Exchange (ETDEWEB)

    Lacroix, B.; Martella, T.; Pras, M.; Masson-Fauchier, M. [CEA/DEN/CAD/DEC/SA3C/Legend (France); Fayette, L. [CEA/DEN/CAD/DEC/SA3C/LEMCI (France)

    2011-07-01

    Non-destructive examinations (NDE) on irradiated PWR fuel rods have been performed since 1992 at the CEA/Cadarache Research Centre. Among the different controls performed, measurement of the zirconia thickness provides useful information on the axial and angular distribution of corrosion down the rod. This is necessary to compare the sensitivity of different cladding types with the creation of zirconia, as well as to detect and measure effects such as local corrosion. A dedicated apparatus based on eddy currents was used to measure the zirconia thicknesses. To verify the accuracy of our measurements, we compared the measurement results with the metallographic measurements of 39 samples. It was observed that the non-destructive measurements always overestimated the thickness of zirconia. The mean value of this systematic error was about 4 μm. We therefore tried to identify the origin of this error. We first observed that the sensor position was crucial. It must be in the exact same position for both the standard (tube section) and the rods. A poorly-positioned sensor on the rod produces overestimated measurement values. Other sources of uncertainty may also explain the difference with the exact values: first, the cladding of the standard was not irradiated. We know that some physical characteristics of cladding change during irradiation, in particular electrical conductivity. We do not know how this affects our measurement. Secondly, the rods still contained some decay heat. Thus, the temperature of the rod cladding could differ from the temperature of the standard. The electrical conductivity of the cladding and thus the eddy current response could be different. The sensor itself could also be affected by the temperature. We have performed several experiments on both heated cladding (not irradiated) and irradiated PWR fuel rods inside the hot cell. Based on the results of these tests and in agreement with our feedback, it was found that the device used in the

  20. Validation of Large-Scale Geophysical Estimates Using In Situ Measurements with Representativeness Error

    Science.gov (United States)

    Konings, A. G.; Gruber, A.; Mccoll, K. A.; Alemohammad, S. H.; Entekhabi, D.

    2015-12-01

    Validating large-scale estimates of geophysical variables by comparing them to in situ measurements neglects the fact that these in situ measurements are not generally representative of the larger area. That is, in situ measurements contain some 'representativeness error'. They also have their own sensor errors. The naïve approach of characterizing the errors of a remote sensing or modeling dataset by comparison to in situ measurements thus leads to error estimates that are spuriously inflated by the representativeness and other errors in the in situ measurements. Nevertheless, this naïve approach is still very common in the literature. In this work, we introduce an alternative estimator of the large-scale dataset error that explicitly takes into account the fact that the in situ measurements have some unknown error. The performance of the two estimators is then compared in the context of soil moisture datasets under different conditions for the true soil moisture climatology and dataset biases. The new estimator is shown to lead to a more accurate characterization of the dataset errors under the most common conditions. If a third dataset is available, the principles of the triple collocation method can be used to determine the errors of both the large-scale estimates and in situ measurements. However, triple collocation requires that the errors in all datasets are uncorrelated with each other and with the truth. We show that even when the assumptions of triple collocation are violated, a triple collocation-based validation approach may still be more accurate than a naïve comparison to in situ measurements that neglects representativeness errors.
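
    For reference, the covariance-based form of triple collocation can be written in a few lines. This is the textbook estimator under the assumptions the abstract names (errors mutually uncorrelated and uncorrelated with the truth), not the authors' extended estimator, and the soil moisture numbers are synthetic.

    ```python
    import numpy as np

    def triple_collocation(x, y, z):
        """Estimate the error variance of each of three collocated datasets
        measuring the same truth, assuming their errors are mutually
        uncorrelated and uncorrelated with the truth."""
        C = np.cov(np.vstack([x, y, z]))
        ex2 = C[0, 0] - C[0, 1] * C[0, 2] / C[1, 2]
        ey2 = C[1, 1] - C[0, 1] * C[1, 2] / C[0, 2]
        ez2 = C[2, 2] - C[0, 2] * C[1, 2] / C[0, 1]
        return ex2, ey2, ez2

    rng = np.random.default_rng(0)
    truth = rng.normal(0.25, 0.08, 5000)        # synthetic soil moisture truth
    sat    = truth + rng.normal(0, 0.04, 5000)  # large-scale estimate
    model  = truth + rng.normal(0, 0.03, 5000)  # model output
    insitu = truth + rng.normal(0, 0.02, 5000)  # in situ, incl. representativeness error
    print(triple_collocation(sat, model, insitu))
    # -> approx (0.0016, 0.0009, 0.0004): each dataset's own error variance,
    #    instead of the inflated values a naive pairwise comparison would give
    ```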

  1. Compensation method for the alignment angle error in pitch deviation measurement

    Science.gov (United States)

    Liu, Yongsheng; Fang, Suping; Wang, Huiyi; Taguchi, Tetsuya; Takeda, Ryohei

    2016-05-01

    When measuring the tooth flank of an involute helical gear on a gear measuring center (GMC), the alignment angle error of the gear axis, which is caused by assembly and manufacturing errors of the GMC, affects the measurement accuracy of the pitch deviation of the gear tooth flank. Based on the model of the involute helical gear and the tooth flank measurement theory, a method is proposed to compensate for the alignment angle error included in the measurement results of pitch deviation, without changing the initial measurement method of the GMC. Simulation experiments were performed to verify the compensation method, and the results show that after compensation the alignment angle error of the gear axis included in the pitch deviation measurement results declines significantly: more than 90% of the alignment angle error is compensated, and the residual alignment angle errors in the pitch deviation measurement results are less than 0.1 μm. This shows that the proposed method can improve the measurement accuracy of the GMC when measuring the pitch deviation of involute helical gears.

  2. Correction of motion measurement errors beyond the range resolution of a synthetic aperture radar

    Science.gov (United States)

    Doerry, Armin W.; Heard, Freddie E.; Cordaro, J. Thomas

    2008-06-24

    Motion measurement errors that extend beyond the range resolution of a synthetic aperture radar (SAR) can be corrected by effectively decreasing the range resolution of the SAR in order to permit measurement of the error. Range profiles can be compared across the slow-time dimension of the input data in order to estimate the error. Once the error has been determined, appropriate frequency and phase correction can be applied to the uncompressed input data, after which range and azimuth compression can be performed to produce a desired SAR image.

  3. Measurement error analysis of Brillouin lidar system using F-P etalon and ICCD

    Science.gov (United States)

    Yao, Yuan; Niu, Qunjie; Liang, Kun

    2016-09-01

    A Brillouin lidar system using a Fabry-Pérot (F-P) etalon and an Intensified Charge Coupled Device (ICCD) is capable of real-time remote measurement of seawater properties such as temperature. The measurement accuracy is determined by two key parameters, the Brillouin frequency shift and the Brillouin linewidth. Three major errors, namely the laser frequency instability, the calibration error of the F-P etalon, and the random shot noise, are discussed. Theoretical analysis combined with simulation results showed that the laser and F-P etalon cause about 4 MHz of error in both the Brillouin shift and linewidth, and that random noise brings more error to the linewidth than to the frequency shift. A comprehensive and comparative analysis of the overall errors under various conditions proved that colder ocean water (10 °C) is more accurately measured with the Brillouin linewidth, and warmer ocean water (30 °C) is better measured with the Brillouin shift.

  4. Errors in anthropometric measurements in neonates and infants

    Directory of Open Access Journals (Sweden)

    D Harrison

    2001-09-01

    Full Text Available The accuracy of methods used in Cape Town hospitals and clinics for the measurement of weight, length and age in neonates and infants became suspect during a survey of 12 local authority and 5 private sector clinics in 1994-1995 (Harrison et al., 1998). A descriptive prospective study was carried out to determine the accuracy of these methods in neonates at four maternity hospitals (2 public and 2 private) and in infants at four child health clinics of the Cape Town City Council. The main outcome measures were an assessment of three currently used methods of measuring crown-heel length (with a measuring board, a mat and a tape measure); a comparison of weight differences when an infant is fully clothed, naked and in napkin only; and the differences in age estimated by calendar dates and by a specially designed electronic calculator. The results showed that the methods currently used to measure infants in Cape Town vary widely from one institution to another. Many measurements are inaccurate and there is a real need for uniformity and accuracy. This can only be implemented by an effective education program to ensure that accurate measurements are used in monitoring the health of young children in Cape Town and elsewhere.

  5. Quantifying Error in Survey Measures of School and Classroom Environments

    Science.gov (United States)

    Schweig, Jonathan David

    2014-01-01

    Developing indicators that reflect important aspects of school and classroom environments has become central in a nationwide effort to develop comprehensive programs that measure teacher quality and effectiveness. Formulating teacher evaluation policy necessitates accurate and reliable methods for measuring these environmental variables. This…

  6. The effect of measurement error on the dose-response curve.

    OpenAIRE

    Yoshimura, I

    1990-01-01

    In epidemiological studies for environmental risk assessment, doses are often observed with error, yet such errors have received little attention in data analysis. This paper studies the effect of measurement errors on the observed dose-response curve. Under the assumptions of a monotone likelihood ratio for the errors and a monotone increasing dose-response curve, it is verified that the slope of the observed dose-response curve is likely to be gentler than the true one. The observed variance...

  7. Design and application of location error teaching aids in measuring and visualization

    OpenAIRE

    Yu Fengning; Li Lei; Guo Jian; Mai Songman; Shi Jiashun

    2015-01-01

    As an abstract concept, 'location error' is considered an important element that is very difficult to understand and apply. The paper designs and develops an instrument to measure location error. The location error is affected by different positioning methods and reference selection, so we choose the positioning element by rotating the disk. The tiny movement is transferred by a grating ruler and, with PLC programming, the error can be shown on a text display, which also helps students understand the positi...

  8. System Error Compensation Methodology Based on a Neural Network for a Micromachined Inertial Measurement Unit

    OpenAIRE

    Shi Qiang Liu; Rong Zhu

    2016-01-01

    Error compensation of micromachined inertial measurement units (MIMU) is essential in practical applications. This paper presents a new compensation method using neural-network-based identification for MIMU, which capably solves the universal problems of cross-coupling, misalignment, eccentricity, and other deterministic errors existing in a three-dimensional integrated system. Using a neural network to model a complex multivariate and nonlinear coupling system, the errors could be readily...

  9. Measurement error in environmental exposures: Statistical implications for spatial air pollution models and gene environment interaction tests

    OpenAIRE

    Ackerman-Alexeeff, Stacey Elizabeth

    2013-01-01

    Measurement error is an important issue in studies of environmental epidemiology. We considered the effects of measurement error in environmental covariates in several important settings affecting current public health research. Throughout this dissertation, we investigate the impacts of measurement error and consider statistical methodology to correct for that error.

  10. Tilt error in cryospheric surface radiation measurements at high latitudes: a model study

    Science.gov (United States)

    Bogren, Wiley Steven; Faulkner Burkhart, John; Kylling, Arve

    2016-03-01

    We have evaluated the magnitude and makeup of error in cryospheric radiation observations due to small sensor misalignment in in situ measurements of solar irradiance. This error is examined through simulation of diffuse and direct irradiance arriving at a detector with a cosine-response fore optic. Emphasis is placed on assessing total error over the solar shortwave spectrum from 250 to 4500 nm, as well as supporting investigation over other relevant shortwave spectral ranges. The total measurement error introduced by sensor tilt is dominated by the direct component. For a typical high-latitude albedo measurement with a solar zenith angle of 60°, a sensor tilted by 1, 3, and 5° can introduce up to 2.7, 8.1, and 13.5% error, respectively, into the measured irradiance and similar errors into the derived albedo. Depending on the daily range of solar azimuth and zenith angles, significant measurement error can also persist in integrated daily irradiance and albedo. Simulations including a cloud layer demonstrate decreasing tilt error with increasing cloud optical depth.
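
    A direct-beam-only calculation already reproduces the order of magnitude of the quoted tilt errors; the paper's simulations additionally include the diffuse component and the full 250-4500 nm spectral treatment, which lowers the numbers somewhat. A sketch, assuming the worst case of a sensor tilted toward the sun:

    ```python
    import numpy as np

    def direct_beam_tilt_error(sza_deg, tilt_deg):
        """Relative error in measured direct irradiance when the sensor is
        tilted by tilt_deg toward the sun: the beam's incidence angle on
        the sensor changes from the solar zenith angle to (sza - tilt)."""
        sza, tilt = np.radians(sza_deg), np.radians(tilt_deg)
        return np.cos(sza - tilt) / np.cos(sza) - 1.0

    for tilt in (1, 3, 5):
        err = 100 * direct_beam_tilt_error(60, tilt)
        print(f"tilt {tilt} deg at SZA 60 deg: {err:.1f} % error")
    # -> 3.0 %, 8.9 %, 14.7 %; close to the quoted 2.7, 8.1 and 13.5 %, which
    #    also account for the (tilt-insensitive) diffuse fraction
    ```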

  11. A simple approximation to the bivariate normal distribution with large correlation coefficient

    OpenAIRE

    Albers, Willem; Kallenberg, Wilbert C.M.

    1994-01-01

    The bivariate normal distribution function is approximated with emphasis on situations where the correlation coefficient is large. The high accuracy of the approximation is illustrated by numerical examples. Moreover, exact upper and lower bounds are presented as well as asymptotic results on the error terms.

  12. Discrete time interval measurement system: fundamentals, resolution and errors in the measurement of angular vibrations

    International Nuclear Information System (INIS)

    The traditional method for measuring the velocity and the angular vibration of the shaft of rotating machines using incremental encoders is based on counting the pulses at given time intervals. This method is generically called the time interval measurement system (TIMS). A variant of this method, developed in this work, consists of measuring the corresponding time of each pulse from the encoder and sampling the signal by means of an A/D converter as if it were an analog signal, that is to say, in discrete time. For this reason, we have named this method the discrete time interval measurement system (DTIMS). This measurement system provides a substantial improvement in precision and frequency resolution compared with the traditional method of counting pulses. In addition, this method permits modification of the width of some pulses in order to obtain a mark-phase on every revolution. This paper explains the theoretical fundamentals of the DTIMS and its application for measuring the angular vibrations of rotating machines. It also presents the required relationship between the sampling rate of the signal, the number of pulses of the encoder and the rotating velocity in order to obtain the required resolution and to delimit the methodological errors in the measurement.
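
    A minimal sketch of the DTIMS idea: record the arrival time of every encoder pulse, then convert the known angular step between pulses and the measured time differences into instantaneous angular velocity. The encoder line count, speed and vibration amplitude below are illustrative, and the A/D-based time capture described in the record is abstracted into a timestamp array.

    ```python
    import numpy as np

    def velocity_from_pulse_times(t, pulses_per_rev):
        """One velocity sample per encoder pulse: the angular step between
        consecutive pulses is exact, so omega = dtheta / dt."""
        dtheta = 2 * np.pi / pulses_per_rev
        dt = np.diff(t)
        omega = dtheta / dt
        t_mid = 0.5 * (t[1:] + t[:-1])
        return t_mid, omega

    # 1024-line encoder at a nominal 3000 rpm with a 50 Hz angular vibration
    ppr, n = 1024, 4096
    nominal_period = 60 / 3000 / ppr                       # s between pulses
    t_ideal = np.cumsum(np.full(n, nominal_period))
    t = t_ideal + 2e-7 * np.sin(2 * np.pi * 50 * t_ideal)  # vibration-induced jitter
    t_mid, omega = velocity_from_pulse_times(t, ppr)
    print(omega.mean() * 60 / (2 * np.pi))                 # ~3000 rpm; the 50 Hz
    # vibration component appears directly in the omega samples
    ```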

  13. Measurement and analysis of typical motion error traces from a circular test

    Institute of Scientific and Technical Information of China (English)

    2008-01-01

    The circular test provides a rapid and efficient way of measuring the contouring accuracy of a machine tool. To get the actual point coordinates in the work plane, an improved measurement instrument - a new ball bar test system - is presented in this paper to identify both the radial error and the rotation angle error when the machine is manipulated to move in circular traces. Based on the measured circular error, a combination of Fourier components is chosen to represent the systematic form error that fluctuates in the radial direction. The typical motion errors represented by the corresponding Fourier components can thus be identified. The values for machine compensation can be calculated and adjusted until the desired results are achieved.
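
    A small sketch of the Fourier-decomposition step: given a measured radial deviation trace from the ball bar, extract the amplitudes of the low-order harmonics that stand in for systematic error sources. The harmonic-to-error-source mapping in the comment is the usual textbook assignment and may differ in detail from the paper's.

    ```python
    import numpy as np

    def radial_error_harmonics(theta, dr, max_order=4):
        """Amplitude of each low-order Fourier component of the radial
        deviation; e.g. the 1st harmonic reflects eccentricity/centring
        error and the 2nd ovality such as axis squareness or scale
        mismatch (an illustrative assignment)."""
        amplitudes = {}
        for k in range(1, max_order + 1):
            a = 2 * np.mean(dr * np.cos(k * theta))
            b = 2 * np.mean(dr * np.sin(k * theta))
            amplitudes[k] = np.hypot(a, b)
        return amplitudes

    theta = np.linspace(0, 2 * np.pi, 3600, endpoint=False)
    dr = 5e-3 * np.cos(theta) + 2e-3 * np.sin(2 * theta + 0.4)  # mm, synthetic trace
    print(radial_error_harmonics(theta, dr))  # ~{1: 0.005, 2: 0.002, 3: ~0, 4: ~0}
    ```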

  14. Comparing objective and subjective error measures for color constancy

    NARCIS (Netherlands)

    M.P. Lucassen; A. Gijsenij; T. Gevers

    2008-01-01

    We compare an objective and a subjective performance measure for color constancy algorithms. Eight hyper-spectral images were rendered under a neutral reference illuminant and four chromatic illuminants (Red, Green, Yellow, Blue). The scenes rendered under the chromatic illuminants were color correc

  15. Total Differential Errors in One-Port Network Analyzer Measurements with Application to Antenna Impedance

    Directory of Open Access Journals (Sweden)

    P. Zimourtopoulos

    2007-06-01

    Full Text Available The objective was to study the uncertainty in antenna input impedance resulting from full one-port Vector Network Analyzer (VNA) measurements. The VNA process equation in the reflection coefficient ρ of a load, its measurement m and three errors Es, determinable from three standard loads and their measurements, was considered. Differentials were selected to represent measurement inaccuracies and load uncertainties (Differential Errors). The differential operator was applied to the process equation and the total differential error dρ for any unknown load (Device Under Test, DUT) was expressed in terms of dEs and dm, without any simplification. Consequently, the differential error of the input impedance Z - or any other physical quantity differentiably dependent on ρ - is expressible. Furthermore, to express precisely a comparison relation between complex differential errors, the geometric Differential Error Region and its Differential Error Intervals were defined. Practical results are presented for an indoor UHF ground-plane antenna in contrast with a common 50 Ω DC resistor inside an aluminum box. These two built, unshielded and shielded, DUTs were tested against frequency under different system configurations and measurement considerations. Intermediate results for Es and dEs characterize the measurement system itself. A number of calculations and illustrations demonstrate the application of the method.
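
    The process equation referred to above is the standard one-port error model m = Ed + Er·ρ/(1 − Es·ρ), with directivity Ed, reflection tracking Er and source match Es (our naming; the abstract simply writes "three errors Es"). The sketch below solves for the three error terms from three standards and corrects a DUT measurement; the paper's actual contribution, applying the differential operator to this equation, is not reproduced.

    ```python
    import numpy as np

    def one_port_cal(rho_std, m_std):
        """Solve m = Ed + Er*rho/(1 - Es*rho) for the three error terms,
        given three standards with known rho and measured m. Rearranged,
        m = Ed + (Er - Ed*Es)*rho + Es*rho*m, which is linear in the
        unknowns (Ed, Er - Ed*Es, Es)."""
        rho = np.asarray(rho_std, dtype=complex)
        m = np.asarray(m_std, dtype=complex)
        A = np.column_stack([np.ones(3), rho, rho * m])
        Ed, u, Es = np.linalg.solve(A, m)
        return Ed, u + Ed * Es, Es          # (Ed, Er, Es)

    def correct_dut(m, Ed, Er, Es):
        """Invert the process equation for an unknown load's measurement."""
        return (m - Ed) / (Er + Es * (m - Ed))

    # synthetic check: pick error terms, 'measure' ideal open/short/load standards
    Ed, Er, Es = 0.02 + 0.01j, 0.98 - 0.03j, 0.05 + 0.02j
    measure = lambda rho: Ed + Er * rho / (1 - Es * rho)
    cal = one_port_cal([1, -1, 0], [measure(1), measure(-1), measure(0)])
    print(correct_dut(measure(0.3 - 0.2j), *cal))   # -> (0.3-0.2j) recovered
    ```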

  16. An inquisition into bivariate threshold effects in the inflation-growth correlation: Evaluating South Africa’s macroeconomic objectives

    OpenAIRE

    Andrew Phiri

    2013-01-01

    Is the SARB's inflation target of 3-6% compatible with the 6% economic growth objective set by ASGISA? Estimates of bivariate Threshold Vector Autoregressive and corresponding bivariate Threshold Vector Error Correction (BTVEC-BTVAR) econometric models for sub-periods of the South African inflation-growth experience between 1960 and 2010 suggest optimal inflation-growth combinations for South African data, presenting a two-fold proposition. Firstly, for the pe...

  17. Low-frequency Periodic Error Identification and Compensation for Star Tracker Attitude Measurement

    Institute of Scientific and Technical Information of China (English)

    WANG Jiongqi; XIONG Kai; ZHOU Haiyin

    2012-01-01

    The low-frequency periodic error of a star tracker is one of the most critical problems for high-accuracy satellite attitude determination. In this paper an approach is proposed to identify and compensate the low-frequency periodic error of a star tracker in attitude measurement. The analytical expression between the estimated gyro drift and the low-frequency periodic error of the star tracker is derived first, and the low-frequency periodic error, which can be expressed by a Fourier series, is then identified from the frequency spectrum of the estimated gyro drift according to the solution of the first step. Furthermore, a compensation model of the low-frequency periodic error is established based on the identified parameters to improve the attitude determination accuracy. Finally, promising simulated experimental results demonstrate the validity and effectiveness of the proposed method. The periodic error for attitude determination is essentially eliminated and the estimation precision is greatly improved.

  18. Conservative error measures for classical and quantum metrology

    CERN Document Server

    Tsang, Mankei

    2016-01-01

    The classical and quantum Cramér-Rao bounds have become standard measures of parameter-estimation uncertainty for a variety of sensing and imaging applications in recent years, but their assumption of unbiased estimators potentially undermines their significance as fundamental limits. In this note we advocate a Bayesian approach with Van Trees inequalities and worst-case priors to overcome the problem. Applications to superlocalization and gravitational-wave parameter estimation are discussed.

  19. Some Physical Errors of X-Ray Texture Measurements

    OpenAIRE

    Perlovich, Yu.

    1996-01-01

    Typical experimental situations in texture measurements are demonstrated that involve a failure to take into account certain physical factors responsible for an inadequate texture description or imaginary texture changes. Among these factors are inevitable texture inhomogeneities, the inhomogeneous distribution of defects in deformed metal materials and the resulting inhomogeneous lattice perfection after their heat treatment. It is shown that a formal approach to texture analysis does not allow one to reveal...

  20. The Impact of Truancy on Educational Attainment : A Bivariate Ordered Probit Estimator with Mixed Effects

    OpenAIRE

    Buscha, Franz; Conte, Anna

    2010-01-01

    This paper investigates the relationship between educational attainment and truancy. Using data from the Youth Cohort Study of England and Wales, we estimate the causal impact that truancy has on educational attainment at age 16. A complication is that both truancy and attainment are measured as ordered responses, requiring a bivariate ordered probit model to account for the potential endogeneity of truancy. Furthermore, we extend the 'naïve' bivariate ordered probit estimator to include mixed ef...

  1. Incomplete Bivariate Fibonacci and Lucas p-Polynomials

    Directory of Open Access Journals (Sweden)

    Dursun Tasci

    2012-01-01

    Full Text Available We define the incomplete bivariate Fibonacci and Lucas p-polynomials. In the case x=1, y=1, we obtain the incomplete Fibonacci and Lucas p-numbers. If x=2, y=1, we have the incomplete Pell and Pell-Lucas p-numbers. On choosing x=1, y=2, we get the incomplete generalized Jacobsthal numbers and, besides, for p=1 the incomplete generalized Jacobsthal-Lucas numbers. In the case x=1, y=1, p=1, we have the incomplete Fibonacci and Lucas numbers. If x=1, y=1, p=1, k=⌊(n−1)/(p+1)⌋, we obtain the Fibonacci and Lucas numbers. Also the generating function and properties of the incomplete bivariate Fibonacci and Lucas p-polynomials are given.

  2. A simulation study of lognormal measurement error effect: Discrimination problem of radon and thoron

    International Nuclear Information System (INIS)

    Several case-control studies have indicated an increased risk of lung cancer linked to indoor radon exposure. For a precise evaluation of radon-related lung cancer risk, however, the contribution of thoron should be considered. Many studies use passive-type radon detectors without thoron discrimination techniques. These passive radon detectors may therefore be strongly affected by the presence of thoron when they are installed near a wall or floor as potential thoron sources. The thoron effect we consider here is an increase of the radon signal in radon detectors lacking the discrimination technique. In the statistical literature this problem is classified as a measurement error problem, with thoron considered a possible source of measurement error. In general, concentrations of radon and thoron follow a lognormal distribution. The effects of measurement error following a normal distribution have been studied well, but there are few studies on measurement error following non-normal distributions. In order to evaluate the effect of measurement error due to thoron, we conducted a simulation study. We assumed a case-control study of lung cancer and indoor radon with hypothetical data where radon concentrations were measured with and without discrimination of thoron concentrations. We also assumed that logistic regression was used, and that the concentrations of radon and thoron were correlated, following a lognormal distribution. The thoron disturbance in the radon measurement resulted in an approximately 90% downward bias in the effect of radon, and this bias was almost constant when the parameters were varied. The downward bias was consistent with results from previous studies taking measurement error following a normal distribution into account. It was confirmed that the effect of lognormal measurement error is concordant with normal measurement error in this case. (author)
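
    A compact version of this kind of simulation, assuming correlated lognormal radon and thoron, a non-discriminating detector that reads their sum, and a logistic disease model driven by true radon only. All parameter values are illustrative, so the size of the attenuation will not match the study's roughly 90% figure exactly.

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(42)
    n = 20_000
    # correlated lognormal radon and thoron concentrations
    z1, z2 = rng.multivariate_normal([0, 0], [[1, 0.5], [0.5, 1]], n).T
    radon, thoron = np.exp(z1), np.exp(z2)
    observed = radon + thoron          # detector without thoron discrimination

    beta = 0.3                         # true log-odds effect of radon
    p = 1 / (1 + np.exp(-(-2 + beta * radon)))
    case = rng.random(n) < p

    def slope(x):
        # near-unpenalized logistic fit (large C effectively disables regularization)
        return LogisticRegression(C=1e8).fit(x.reshape(-1, 1), case).coef_[0, 0]

    print("true exposure  :", round(slope(radon), 3))     # ~0.30
    print("radon + thoron :", round(slope(observed), 3))  # biased toward zero
    ```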

  3. From Measurements Errors to a New Strain Gauge Design

    DEFF Research Database (Denmark)

    Mikkelsen, Lars Pilgaard; Zike, Sanita; Salviato, Marco;

    2015-01-01

    Significant over-prediction of the material stiffness, in the order of 1-10% for polymer based composites, has been experimentally observed and numerically determined when using strain gauges for strain measurements instead of non-contact methods such as digital image correlation or less stiff methods such as clip-on extensometers. In the present work, this has been quantified through a numerical study for three different strain gauges. In addition, a significant effect of a thin polymer coating or biaxial layer on the error of strain gauges has been observed. An error which can be...

  4. Regression calibration for classical exposure measurement error in environmental epidemiology studies using multiple local surrogate exposures.

    Science.gov (United States)

    Bateson, Thomas F; Wright, J Michael

    2010-08-01

    Environmental epidemiologic studies are often hierarchical in nature if they estimate individuals' personal exposures using ambient metrics. Local samples are indirect surrogate measures of true local pollutant concentrations which estimate true personal exposures. These ambient metrics include classical-type nondifferential measurement error. The authors simulated subjects' true exposures and their corresponding surrogate exposures as the mean of local samples and assessed the amount of bias attributable to classical and Berkson measurement error on odds ratios, assuming that the logit of risk depends on true individual-level exposure. The authors calibrated surrogate exposures using scalar transformation functions based on observed within- and between-locality variances and compared regression-calibrated results with naive results using surrogate exposures. The authors further assessed the performance of regression calibration in the presence of Berkson-type error. Following calibration, bias due to classical-type measurement error, resulting in as much as 50% attenuation in naive regression estimates, was eliminated. Berkson-type error appeared to attenuate logistic regression results less than 1%. This regression calibration method reduces effects of classical measurement error that are typical of epidemiologic studies using multiple local surrogate exposures as indirect surrogate exposures for unobserved individual exposures. Berkson-type error did not alter the performance of regression calibration. This regression calibration method does not require a supplemental validation study to compute an attenuation factor. PMID:20573838
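
    In the classical-error setting described here, the scalar calibration amounts to an attenuation-factor correction computable directly from the observed within- and between-locality variances; the exact transformation the authors use may differ. A sketch under that assumption, with made-up numbers:

    ```python
    def attenuation_factor(between_var, within_var, n_samples):
        """Naive slope ~ lambda * true slope, with
        lambda = sigma_between^2 / (sigma_between^2 + sigma_within^2 / n)
        when the surrogate exposure is the mean of n local samples.
        Dividing the naive estimate by lambda removes the attenuation,
        with no supplemental validation study required."""
        return between_var / (between_var + within_var / n_samples)

    lam = attenuation_factor(between_var=1.0, within_var=2.0, n_samples=4)
    naive_log_or = 0.35                 # hypothetical naive logistic slope
    print(lam, naive_log_or / lam)      # lambda = 0.667, calibrated slope = 0.525
    ```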

  5. Statistical estimation of flaw size measurement errors for steam generator inspection tools

    International Nuclear Information System (INIS)

    Accurate sizing of flaws in steam generator tubes is a critical component of a maintenance and inspection program. Knowledge of the measurement error of an inspection tool is important for determining flaw severity, for assessing the tool vendor's claimed accuracy, as a component of the tool's development program, as a reference for other inspection tools or candidates, and for probability of detection assessments. Often, tool reporting of flaw sizes is compared with data obtained from a reference tool or from a destructive test. The reference tool or destructive test data are generally assumed free from measurement errors, but in reality they may not be. It is therefore an advantageous and useful exercise to determine individually the measurement errors of each measuring system used. Statistical procedures commonly used to assess the quality of inspection tools estimate the total scatter between any two instruments, neglecting repeated measurements or situations with more than two tools. One possible advancement is the statistical decomposition of the total scatter, given by the root mean square error/differential, into measurement error components associated with each of the instruments used. Important recent developments, such as Bayesian estimators of measurement errors, are included in this expository article to familiarize researchers working in the nuclear industry with the state of the art. (author)
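
    One classical decomposition of the total scatter between two instruments into per-instrument error variances is the Grubbs-type estimator sketched below (the Bayesian estimators the article surveys are more involved and are not reproduced; the flaw data are synthetic).

    ```python
    import numpy as np

    def grubbs_error_variances(x1, x2):
        """For two instruments measuring the same items with independent
        errors, Cov(x1, x2) estimates the variance of the true sizes, so
        each instrument's error variance is its total variance minus that
        covariance -- neither instrument is assumed error-free."""
        C = np.cov(x1, x2)
        true_var = C[0, 1]
        return C[0, 0] - true_var, C[1, 1] - true_var

    rng = np.random.default_rng(7)
    truth = rng.normal(40, 8, 500)             # true flaw depths (% through-wall)
    eddy = truth + rng.normal(0, 4.0, 500)     # eddy-current inspection tool
    met  = truth + rng.normal(0, 1.5, 500)     # destructive 'reference' exam
    print(grubbs_error_variances(eddy, met))   # ~(16.0, 2.25)
    ```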

  6. Local and omnibus goodness-of-fit tests in classical measurement error models

    KAUST Repository

    Ma, Yanyuan

    2010-09-14

    We consider functional measurement error models, i.e. models where covariates are measured with error and yet no distributional assumptions are made about the mismeasured variable. We propose and study a score-type local test and an orthogonal series-based, omnibus goodness-of-fit test in this context, where no likelihood function is available or calculated, i.e. all the tests are proposed in the semiparametric model framework. We demonstrate that our tests have optimality properties and computational advantages that are similar to those of the classical score tests in the parametric model framework. The test procedures are applicable to several semiparametric extensions of measurement error models, including when the measurement error distribution is estimated non-parametrically as well as for generalized partially linear models. The performance of the local score-type and omnibus goodness-of-fit tests is demonstrated through simulation studies and analysis of a nutrition data set.

  7. Analysis of measurement errors influence on experimental determination of mass and heat transfer coefficient

    International Nuclear Information System (INIS)

    The influence of temperature and concentration measurement errors on the experimental determination of mass and heat transfer coefficients is analysed. The calculation model for the coefficients and the measurement errors, the experimental data obtained on the water isotopic distillation plant, and the results of the determinations are presented. The experimental distillation column, with an inner diameter of 108 mm, was equipped with B7 structured packing over a height of 14 m. This column offers the possibility of measuring vapour temperature and isotopic concentration at 12 locations. For the error propagation analysis, the parameters measured for each packing bed, namely the temperature and isotopic concentration of the vapour, were used. A relation for calculating the maximum error of experimental determinations of mass and heat transport coefficients is given. The experimental data emphasize the 'ending effects' and regions with poor thermal insulation. (author)

  8. Identification in a Generalization of Bivariate Probit Models with Endogenous Regressors

    OpenAIRE

    Sukjin Han; Edward J. Vytlacil

    2013-01-01

    This paper provides identification results for a class of models specified by a triangular system of two equations with binary endogenous variables. The joint distribution of the latent error terms is specified through a parametric copula structure, including the normal copula as a special case, while the marginal distributions of the latent error terms are allowed to be arbitrary but known. This class of models includes bivariate probit models as a special case. The paper demonstrates that a...

  9. New measurements of coil-related magnetic field errors on DIII-D

    International Nuclear Information System (INIS)

    Non-axisymmetric (error) fields in tokamaks lead to a number of instabilities, including so-called locked modes [J.T. Scoville, R.J. La Haye, Nucl. Fusion 43 (4) (2003) 250-257] and resistive wall modes (RWM) [A.M. Garofalo, R.J. La Haye, J.T. Scoville, Nucl. Fusion 42 (11) (2002) 1335-1339], and subsequent loss of confinement. They can also cause errors in magnetic measurements made by point probes near the plasma edge, errors in measurements made by magnetic-field-sensitive diagnostics, and they violate the assumption of axisymmetry in the analysis of data. Most notably, the sources of these error fields include shifts and tilts of the coil positions from ideal, coil leads, and nearby ferromagnetic materials excited by the coils. New measurements have been made of the n=1 coil-related field errors in the DIII-D plasma chamber. These measurements indicate that the errors due to the plasma shaping coil system are smaller than previously reported, and no additional sources of anomalous fields were identified. Thus they fail to support the suggestion of an additional significant error field raised by locked mode and RWM experiments.

  10. Why Do Increased Arrest Rates Appear to Reduce Crime: Deterrence, Incapacitation, or Measurement Error?

    OpenAIRE

    Steven D. Levitt

    1995-01-01

    A strong, negative empirical correlation exists between arrest rates and reported crime rates. While this relationship has often been interpreted as support for the deterrence hypothesis, it is equally consistent with incapacitation effects, and/or a spurious correlation that would be induced by measurement error in reported crime rates. This paper attempts to discriminate between deterrence, incapacitation, and measurement error as explanations for the empirical relationship between arrest r...

  11. Does adjustment for measurement error induce positive bias if there is no true association?

    OpenAIRE

    Burstyn, Igor

    2009-01-01

    This article is a response to an off-the-record discussion that I had at an international meeting of epidemiologists. It centered on a concern, perhaps widely spread, that measurement error adjustment methods can induce positive bias in results of epidemiological studies when there is no true association. I trace the possible history of this supposition and test it in a simulation study of both continuous and binary health outcomes under a classical multiplicative measurement error model. A B...

  12. The impact of measurement errors in the identification of regulatory networks

    Directory of Open Access Journals (Sweden)

    Sato João R

    2009-12-01

    Full Text Available Abstract Background There are several studies in the literature depicting measurement error in gene expression data and also several others about regulatory network models. However, only a small fraction describes a combination of measurement error in mathematical regulatory networks and shows how to identify these networks under different rates of noise. Results This article investigates the effects of measurement error on the estimation of the parameters in regulatory networks. Simulation studies indicate that, in both time series (dependent) and non-time series (independent) data, the measurement error strongly affects the estimated parameters of the regulatory network models, biasing them as predicted by the theory. Moreover, when testing the parameters of the regulatory network models, p-values computed by ignoring the measurement error are not reliable, since the rate of false positives is not controlled under the null hypothesis. In order to overcome these problems, we present an improved version of the Ordinary Least Squares estimator in independent (regression) models and dependent (autoregressive) models when the variables are subject to noise. Moreover, measurement error estimation procedures for microarrays are also described. Simulation results also show that both corrected methods perform better than the standard ones (i.e., ignoring measurement error). The proposed methodologies are illustrated using microarray data from lung cancer patients and mouse liver time series data. Conclusions Measurement error dangerously affects the identification of regulatory network models, and thus it must be reduced or taken into account in order to avoid erroneous conclusions. This could be one of the reasons for the high biological false positive rates identified in actual regulatory network models.
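
    The flavor of the corrected estimator can be seen in the simplest univariate case: with classical noise of known (or separately estimated) variance on the covariate, subtracting that variance from the observed covariate variance undoes the attenuation. The authors' versions for autoregressive models, and their microarray error-estimation procedures, are more general; this is only a sketch.

    ```python
    import numpy as np

    def ols_slopes(x_obs, y, sigma_u2):
        """Naive and measurement-error-corrected OLS slopes of y on a
        noisily observed covariate with error variance sigma_u2."""
        sxy = np.cov(x_obs, y)[0, 1]
        sxx = np.var(x_obs, ddof=1)
        return sxy / sxx, sxy / (sxx - sigma_u2)

    rng = np.random.default_rng(3)
    x_true = rng.normal(0, 1, 2000)              # true regulator expression
    x_obs = x_true + rng.normal(0, 0.7, 2000)    # microarray measurement noise
    y = 0.8 * x_true + rng.normal(0, 0.5, 2000)  # target gene expression
    print(ols_slopes(x_obs, y, sigma_u2=0.49))   # naive ~0.54, corrected ~0.80
    ```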

  13. Error Analysis and Measurement Uncertainty for a Fiber Grating Strain-Temperature Sensor

    OpenAIRE

    Jian-Neng Wang; Jaw-Luen Tang

    2010-01-01

    A fiber grating sensor capable of distinguishing between temperature and strain, using a reference and a dual-wavelength fiber Bragg grating, is presented. Error analysis and measurement uncertainty for this sensor are studied theoretically and experimentally. The measured root mean squared errors for temperature T and strain ε were estimated to be 0.13 °C and 6 με, respectively. The maximum errors for temperature and strain were calculated as 0.00155 T + 2.90 × 10−6 ε and 3.59 × 10−5 ε + 0.0...

  14. Identification and estimation of nonlinear models using two samples with nonclassical measurement errors

    KAUST Repository

    Carroll, Raymond J.

    2010-05-01

    This paper considers identification and estimation of a general nonlinear Errors-in-Variables (EIV) model using two samples. Both samples consist of a dependent variable, some error-free covariates, and an error-prone covariate, for which the measurement error has unknown distribution and could be arbitrarily correlated with the latent true values; and neither sample contains an accurate measurement of the corresponding true variable. We assume that the regression model of interest - the conditional distribution of the dependent variable given the latent true covariate and the error-free covariates - is the same in both samples, but the distributions of the latent true covariates vary with observed error-free discrete covariates. We first show that the general latent nonlinear model is nonparametrically identified using the two samples when both could have nonclassical errors, without either instrumental variables or independence between the two samples. When the two samples are independent and the nonlinear regression model is parameterized, we propose sieve Quasi Maximum Likelihood Estimation (Q-MLE) for the parameter of interest, and establish its root-n consistency and asymptotic normality under possible misspecification, and its semiparametric efficiency under correct specification, with easily estimated standard errors. A Monte Carlo simulation and a data application are presented to show the power of the approach.

  15. COMPENSATION OF MEASUREMENT ERRORS WHEN REDUCING LINEAR DIMENSIONS OF THE KELVIN PROBE

    Directory of Open Access Journals (Sweden)

    A. K. Tyavlovsky

    2013-01-01

    Full Text Available The study is based on the results of modeling a measurement circuit containing a vibrating-plate capacitor using a complex-harmonic analysis technique. The low value of the normalized frequency of a small-sized scanning Kelvin probe leads to a high distortion factor of the probe's measurement signal, which in turn leads to large measurement errors. The way to lower measurement errors is to register the measurement signal at its second harmonic and to control the probe-to-sample gap by monitoring the ratio between the second and the first harmonics' amplitudes.

  16. COMPENSATION OF MEASUREMENT ERRORS WHEN REDUCING LINEAR DIMENSIONS OF THE KELVIN PROBE

    OpenAIRE

    Tyavlovsky, A. K.; A. L. Zharin

    2015-01-01

    The study is based on results of modeling of measurement circuit containing vibrating-plate capacitor using a complex-harmonic analysis technique. Low value of normalized frequency of small-sized scanning Kelvin probe leads to high distortion factor of probe’s measurement signal that in turn leads to high measurement errors. The way to lower measurement errors is to register measurement signal on its second harmonic and to control the probe-to-sample gap by monitoring the ratio between the se...

  17. Bivariate dynamic probit models for panel data

    OpenAIRE

    Alfonso Miranda

    2010-01-01

    In this talk, I will discuss the main methodological features of the bivariate dynamic probit model for panel data. I will present an example using simulated data, giving special emphasis to the initial conditions problem in dynamic models and the difference between true and spurious state dependence. The model is fit by maximum simulated likelihood.

  18. A truncated bivariate inverted Dirichlet distribution

    Directory of Open Access Journals (Sweden)

    Saralees Nadarajah

    2013-05-01

    Full Text Available A truncated version of the bivariate inverted Dirichlet distribution is introduced. Unlike the inverted Dirichlet distribution, it possesses finite moments of all orders and could therefore be a better model for certain practical situations. One such situation is discussed. The moments, maximum likelihood estimators and the Fisher information matrix for the truncated distribution are derived.

  19. The Resultant of an Unmixed Bivariate System

    OpenAIRE

    Khetan, Amit

    2002-01-01

    This paper gives an explicit method for computing the resultant of any sparse unmixed bivariate system with given support. We construct square matrices whose determinant is exactly the resultant. The matrices constructed are of hybrid Sylvester and Bézout type. We make use of the exterior algebra techniques of Eisenbud, Fløystad, and Schreyer.

  20. Five-Parameter Bivariate Probability Distribution

    Science.gov (United States)

    Tubbs, J.; Brewer, D.; Smith, O. W.

    1986-01-01

    This NASA technical memorandum presents four papers about the five-parameter bivariate gamma class of probability distributions. With some overlap of subject matter, the papers address different aspects of the theory of these distributions and their use in forming statistical models of such phenomena as wind gusts. They provide acceptable results for defining constraints in problems of designing aircraft and spacecraft to withstand large wind-gust loads.

  1. BIVARIATE REAL-VALUED ORTHOGONAL PERIODIC WAVELETS

    Institute of Scientific and Technical Information of China (English)

    Qiang Li; Xuezhang Liang

    2005-01-01

    In this paper, we construct a kind of bivariate real-valued orthogonal periodic wavelets. The corresponding decomposition and reconstruction algorithms involve only 8 terms respectively which are very simple in practical computation. Moreover, the relation between periodic wavelets and Fourier series is also discussed.

  2. The holographic reconstructing algorithm and its error analysis about phase-shifting phase measurement

    Institute of Scientific and Technical Information of China (English)

    LU Xiaoxu; ZHONG Liyun; ZHANG Yimo

    2007-01-01

    Phase-shifting measurement and its error estimation method were studied according to the holographic principle. A function of synchronous superposition of the object complex amplitude reconstructed from N-step phase-shifting through one integral period (the N-step phase-shifting function, for short) is proposed. In N-step phase-shifting measurement, the interferograms are seen as a series of in-line holograms and the reference beam is an ideal parallel plane wave, so the N-step phase-shifting function can be obtained by multiplying the interferogram by the original reference wave. In ideal conditions, the proposed method is a kind of synchronous superposition algorithm in which the complex amplitude is separated, measured and superposed. When error exists in the measurement, the result of the N-step phase-shifting function is the optimal expected value of the least-squares fitting method. In the above method, the (N+1)-step phase-shifting function can be obtained from the N-step phase-shifting function. It shows that the N-step phase-shifting function can be separated into two parts: the ideal N-step phase-shifting function and its errors. The phase-shifting errors in N-step phase-shifting phase measurement can be treated the same as the relative errors of amplitude and intensity under the understanding of the (N+1)-step phase-shifting function. The difficulties of error estimation in phase-shifting phase measurement are reduced by this method. Meanwhile, a maximum error estimation method for phase-shifting phase measurement and its formula are proposed.

  3. Design and application of location error teaching aids in measuring and visualization

    Directory of Open Access Journals (Sweden)

    Yu Fengning

    2015-01-01

    Full Text Available As an abstract concept, 'location error' is considered an important element that is very difficult to understand and apply. The paper designs and develops an instrument to measure location error. The location error is affected by different positioning methods and reference selection, so we choose the positioning element by rotating the disk. The tiny movement is transferred by a grating ruler and, with PLC programming, the error can be shown on a text display, which also helps students understand the positioning principle and related concepts of location error. After comparing measurement results with theoretical calculations and analyzing the measurement accuracy, the paper concludes that the teaching aid is reliable and of high value for promotion.

  4. An AFM-based methodology for measuring axial and radial error motions of spindles

    International Nuclear Information System (INIS)

    This paper presents a novel atomic force microscopy (AFM)-based methodology for measurement of axial and radial error motions of a high precision spindle. Based on a modified commercial AFM system, the AFM tip is employed as a cutting tool by which nano-grooves are scratched on a flat surface with the rotation of the spindle. By extracting the radial motion data of the spindle from the scratched nano-grooves, the radial error motion of the spindle can be calculated after subtracting the tilting errors from the original measurement data. Through recording the variation of the PZT displacement in the Z direction in AFM tapping mode during the spindle rotation, the axial error motion of the spindle can be obtained. Moreover the effects of the nano-scratching parameters on the scratched grooves, the tilting error removal method for both conditions and the method of data extraction from the scratched groove depth are studied in detail. The axial error motion of 124 nm and the radial error motion of 279 nm of a commercial high precision air bearing spindle are achieved by this novel method, which are comparable with the values provided by the manufacturer, verifying this method. This approach does not need an expensive standard part as in most conventional measurement approaches. Moreover, the axial and radial error motions of the spindle can both be obtained, indicating that this is a potential means of measuring the error motions of the high precision moving parts of ultra-precision machine tools in the future. (paper)

  5. Research on geometric error measurement system of machining center BV75 based on laser interferometer

    OpenAIRE

    Shi Lijuan; Chen Jinying; Li Zhaokun; Bian Huamei

    2016-01-01

    This paper measures the twenty-one geometric errors of a numerical control machining center via parameter identification with a laser interferometer. The main contents cover the measurement system, the measurement model and some test results under specific experimental conditions, and at the same time provide a useful reference for geometric precision testing of numerical control (NC) machine tools.

  6. Research on geometric error measurement system of machining center BV75 based on laser interferometer

    Directory of Open Access Journals (Sweden)

    Shi Lijuan

    2016-01-01

    Full Text Available This paper measures the twenty-one geometric errors of a numerical control machining center via parameter identification with a laser interferometer. The main contents cover the measurement system, the measurement model and some test results under specific experimental conditions, and at the same time provide a useful reference for geometric precision testing of numerical control (NC) machine tools.

  7. Bivariate Poisson and Diagonal Inflated Bivariate Poisson Regression Models in R

    OpenAIRE

    Dimitris Karlis; Ioannis Ntzoufras

    2005-01-01

    In this paper we present an R package called bivpois for maximum likelihood estimation of the parameters of bivariate and diagonal inflated bivariate Poisson regression models. An Expectation-Maximization (EM) algorithm is implemented. Inflated models allow for modelling both over-dispersion (or under-dispersion) and negative correlation and thus they are appropriate for a wide range of applications. Extensions of the algorithms for several other models are also discussed. Detailed guidance a...
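
    The model behind bivpois is the trivariate-reduction bivariate Poisson: X = Z1 + Z3 and Y = Z2 + Z3 with independent Poisson components, so the shared rate λ3 equals Cov(X, Y). The package itself is R and fits the model by EM; below is only a quick Python simulation of the underlying construction.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def rbivpois(n, l1, l2, l3):
        """Draw from the bivariate Poisson via trivariate reduction:
        X = Z1 + Z3, Y = Z2 + Z3 with Zi ~ Poisson(li) independent,
        giving marginals Poisson(l1 + l3) and Poisson(l2 + l3) and
        Cov(X, Y) = l3 (hence only non-negative correlation in the
        base model; the inflated variants relax this)."""
        z1, z2, z3 = (rng.poisson(l, n) for l in (l1, l2, l3))
        return z1 + z3, z2 + z3

    x, y = rbivpois(100_000, l1=1.2, l2=0.8, l3=0.5)
    print(x.mean(), y.mean())      # ~1.7 and ~1.3
    print(np.cov(x, y)[0, 1])      # ~0.5, i.e. l3
    ```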

  8. Formulation of uncertainty relation of error and disturbance in quantum measurement by using quantum estimation theory

    International Nuclear Information System (INIS)

    Full text: When we try to obtain information about a quantum system, we need to perform a measurement on the system. The measurement process causes an unavoidable state change. Heisenberg discussed a thought experiment on the position measurement of a particle by using a gamma-ray microscope, and found a trade-off relation between the error of the measured position and the disturbance in the momentum caused by the measurement process. The trade-off relation epitomizes the complementarity in quantum measurements: we cannot perform a measurement of an observable without causing disturbance in its canonically conjugate observable. However, at the time Heisenberg found the complementarity, quantum measurement theory was not established yet, and Kennard and Robertson's inequality was erroneously interpreted as a mathematical formulation of the complementarity. Kennard and Robertson's inequality actually implies the indeterminacy of the quantum state: non-commuting observables cannot have definite values simultaneously. However, Kennard and Robertson's inequality reflects the inherent nature of a quantum state alone, and does not concern any trade-off relation between the error and disturbance in the measurement process. In this talk, we report a resolution to the complementarity in quantum measurements. First, we find that it is necessary to involve the estimation process from the outcome of the measurement for quantifying the error and disturbance in the quantum measurement. We clarify the implicitly involved estimation process in Heisenberg's gamma-ray microscope and other measurement schemes, and formulate the error and disturbance for an arbitrary quantum measurement by using quantum estimation theory. The error and disturbance are defined in terms of the Fisher information, which gives the upper bound of the accuracy of the estimation. Second, we obtain uncertainty relations between the measurement errors of two observables [1], and between the error and disturbance in the

  9. The Minkowski dimension of the bivariate fractal interpolation surfaces

    International Nuclear Information System (INIS)

    We present a new construction of continuous bivariate fractal interpolation surfaces for every set of data. Furthermore, we generalize this construction to higher dimensions. Exact values for the Minkowski dimension of the bivariate fractal interpolation surfaces are obtained.

  10. On some properties on bivariate Fibonacci and Lucas polynomials

    OpenAIRE

    Belbachir, Hacéne; Bencherif, Farid

    2007-01-01

    In this paper we generalize to the bivariate Fibonacci and Lucas polynomials properties previously obtained for Chebyshev polynomials. We prove that the coordinates of the bivariate polynomials over an appropriate basis are families of integers satisfying remarkable recurrence relations.
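    For context, a commonly used convention (assumed here; the paper's exact definition may differ) defines these bivariate polynomials by the recurrences

      $$ F_0(x,y) = 0,\quad F_1(x,y) = 1,\quad F_{n+1}(x,y) = x\,F_n(x,y) + y\,F_{n-1}(x,y), $$
      $$ L_0(x,y) = 2,\quad L_1(x,y) = x,\quad L_{n+1}(x,y) = x\,L_n(x,y) + y\,L_{n-1}(x,y), $$

    which reduce to the classical Fibonacci and Lucas numbers at x = y = 1 and are linked to Chebyshev polynomials by suitable substitutions for x and y.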

  11. Measurement error in environmental epidemiology and the shape of exposure-response curves.

    Science.gov (United States)

    Rhomberg, Lorenz R; Chandalia, Juhi K; Long, Christopher M; Goodman, Julie E

    2011-09-01

    Both classical and Berkson exposure measurement errors as encountered in environmental epidemiology data can result in biases in fitted exposure-response relationships that are large enough to affect the interpretation and use of the apparent exposure-response shapes in risk assessment applications. A variety of sources of potential measurement error exist in the process of estimating individual exposures to environmental contaminants, and the authors review the evaluation in the literature of the magnitudes and patterns of exposure measurement errors that prevail in actual practice. It is well known among statisticians that random errors in the values of independent variables (such as exposure in exposure-response curves) may tend to bias regression results. For increasing curves, this effect tends to flatten and apparently linearize what is in truth a steeper and perhaps more curvilinear or even threshold-bearing relationship. The degree of bias is tied to the magnitude of the measurement error in the independent variables. It has been shown that the degree of bias known to apply to actual studies is sufficient to produce a false linear result, and that although nonparametric smoothing and other error-mitigating techniques may assist in identifying a threshold, they do not guarantee detection of a threshold. The consequences of this could be great, as it could lead to a misallocation of resources towards regulations that do not offer any benefit to public health. PMID:21823979
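    The attenuation mechanism described above is easy to reproduce. In the R sketch below (all values invented for illustration), a quadratic exposure-response is fitted once with the true exposure and once with an error-prone measurement; the curvature coefficient shrinks markedly in the second fit, pushing the fitted curve toward a straight line:

      # Classical measurement error flattens a curvilinear exposure-response.
      set.seed(42)
      n <- 5000
      x <- runif(n, 0, 10)                  # true exposure
      y <- 0.05 * x^2 + rnorm(n, sd = 0.5)  # curvilinear true relationship
      w <- x + rnorm(n, sd = 3)             # error-prone measured exposure
      coef(lm(y ~ x + I(x^2)))              # recovers the curvature (~0.05)
      coef(lm(y ~ w + I(w^2)))              # curvature attenuated toward zero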

  12. Positive phase error from parallel conductance in tetrapolar bio-impedance measurements and its compensation

    Directory of Open Access Journals (Sweden)

    Ivan M Roitt

    2010-01-01

    Full Text Available Bioimpedance measurements are of great use and can provide considerable insight into biological processes.  However, there are a number of possible sources of measurement error that must be considered.  The most dominant source of error is found in bipolar measurements where electrode polarisation effects are superimposed on the true impedance of the sample.  Even with the tetrapolar approach that is commonly used to circumvent this issue, other errors can persist. Here we characterise the positive phase and rise in impedance magnitude with frequency that can result from the presence of any parallel conductive pathways in the measurement set-up.  It is shown that fitting experimental data to an equivalent electrical circuit model allows for accurate determination of the true sample impedance as validated through finite element modelling (FEM of the measurement chamber.  Finally, the model is used to extract dispersion information from cell cultures to characterise their growth.

  13. Dyadic Bivariate Fourier Multipliers for Multi-Wavelets in L2(R2)

    Institute of Scientific and Technical Information of China (English)

    Zhongyan Li; Xiaodi Xu

    2015-01-01

    The single 2-dilation orthogonal wavelet multipliers in the one-dimensional case and the single A-dilation (where A is any expansive matrix with integer entries and |det A| = 2) wavelet multipliers in the high-dimensional case were completely characterized by the Wutam Consortium (1998) and Z. Y. Li, et al. (2010). But there exist no further results on orthogonal multivariate wavelet matrix multipliers corresponding to an integer expansive dilation matrix with the absolute value of the determinant not equal to 2 in $L^2(\mathbb{R}^2)$. In this paper, we choose $2I_2$ as the dilation matrix and consider the $2I_2$-dilation orthogonal multivariate wavelet $\Psi = \{\psi_1, \psi_2, \psi_3\}$ (which is called a dyadic bivariate wavelet) and its multipliers. We call the $3\times 3$ matrix-valued function $A(s) = [f_{i,j}(s)]_{3\times 3}$, where the $f_{i,j}$ are measurable functions, a dyadic bivariate matrix Fourier wavelet multiplier if the inverse Fourier transform of $A(s)\,(\hat{\psi}_1(s), \hat{\psi}_2(s), \hat{\psi}_3(s))^{\top} = (\hat{g}_1(s), \hat{g}_2(s), \hat{g}_3(s))^{\top}$ is a dyadic bivariate wavelet whenever $(\psi_1, \psi_2, \psi_3)$ is any dyadic bivariate wavelet. We give some conditions for dyadic matrix bivariate wavelet multipliers. The results extend those of Z. Y. Li and X. L. Shi (2011). As an application, we construct some useful dyadic bivariate wavelets by using dyadic Fourier matrix wavelet multipliers and use them for image denoising.

  14. An in-process form error measurement system for precision machining

    International Nuclear Information System (INIS)

    In-process form error measurement for precision machining is studied. Owing to two key problems, the opaque barrier and vibration, in-process optical measurement of form error for precision machining has been a hard topic, and so far very few existing research works can be found. In this project, an in-process form error measurement device is proposed to deal with the two key problems. Based on our existing studies, a prototype system has been developed; it is the first of its kind to overcome the two key problems. The prototype is based on a single laser sensor design of 50 nm resolution together with two techniques, a damping technique and a moving average technique, proposed for use with the device. The proposed damping technique is able to improve vibration attenuation by up to 21 times compared to the case of natural attenuation. The proposed moving average technique is able to reduce errors by seven to ten times without distortion to the form profile results. The two proposed techniques are simple but especially useful for the proposed device. For a workpiece sample, the measurement result under coolant conditions is only 2.5% larger compared with that obtained without coolant. For a certified Wyko test sample, the overall system measurement error can be as low as 0.3 µm, and the measurement repeatability error can be as low as 2.2%. The experimental results give confidence in using the proposed in-process form error measurement device. For better results, further improvements in design and further tests are necessary.
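    The abstract does not detail the moving average technique, so the R sketch below only illustrates the generic principle with invented values: a centered moving average suppresses high-frequency vibration-like noise while leaving a slowly varying form profile nearly undistorted.

      # Centered moving average; the window length k is assumed, not the paper's.
      ma <- function(v, k = 9) as.numeric(stats::filter(v, rep(1 / k, k), sides = 2))
      form  <- sin(seq(0, 2 * pi, length.out = 500))   # slowly varying form profile
      noisy <- form + rnorm(500, sd = 0.05)            # vibration-like noise
      sd(noisy - form)                                 # error before smoothing
      sd(ma(noisy) - form, na.rm = TRUE)               # error after smoothing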

  15. Error reduction by combining strapdown inertial measurement units in a baseball stitch

    Science.gov (United States)

    Tracy, Leah

    A poor musical performance is rarely due to an inferior instrument. When a device is underperforming, the temptation is to find a better device or a new technology to achieve performance objectives; however, another solution may be improving how existing technology is used through a better understanding of device characteristics, i.e., learning to play the instrument better. This thesis explores improving position and attitude estimates of inertial navigation systems (INS) through an understanding of inertial sensor errors, manipulating inertial measurement units (IMUs) to reduce that error, and multisensor fusion of multiple IMUs to reduce error in a GPS-denied environment.

  16. Bias Errors in Measurement of Vibratory Power and Implication for Active Control of Structural Vibration

    DEFF Research Database (Denmark)

    Ohlrich, Mogens; Henriksen, Eigil; Laugesen, Søren

    1997-01-01

    Uncertainties in power measurements performed with piezoelectric accelerometers and force transducers are investigated. It is shown that the inherent structural damping of the transducers is responsible for a bias phase error, which typically is of the order of one degree. Fortunately, such bias errors can be largely compensated for by an absolute calibration of the transducers and inverse filtering, which results in very small residual errors. Experimental results of this study indicate that these uncertainties will be of the order of one percent with respect to amplitude and two tenths of a...

  17. Techniques for reducing error in the calorimetric measurement of low wattage items

    Energy Technology Data Exchange (ETDEWEB)

    Sedlacek, W.A.; Hildner, S.S.; Camp, K.L.; Cremers, T.L.

    1993-08-01

    The increased need for the measurement of low wattage items with production calorimeters has required the development of techniques to maximize the precision and accuracy of the calorimeter measurements. An error model for calorimetry measurements is presented. This model is used as a basis for optimizing calorimetry measurements through baseline interpolation. The method was applied to the heat measurement of over 100 items and the results compared to chemistry assay and mass spectroscopy.

  18. Effect of patient positions on measurement errors of the knee-joint space on radiographs

    Science.gov (United States)

    Gilewska, Grazyna

    2001-08-01

    Osteoarthritis (OA) is one of the most important health problems today and one of the most frequent causes of pain and disability in middle-aged and old people. The radiograph is currently the most economical and widely available tool for evaluating changes in OA. Errors in how knee-joint radiographs are performed are the basic problem in their evaluation for clinical research. The purpose of evaluating such radiographs in this study was to measure the knee-joint space on several radiographs performed at defined intervals. The study presents an attempt at evaluating errors caused by the radiologist or by the patient, resulting mainly from incorrect conditions of performance or from the patient's fault. Once we have information about the size of these errors, we will be able to assess which elements have the greatest influence on the accuracy and repeatability of knee-joint space measurements, and consequently to minimize their sources.

  19. On Bivariate Generalized Exponential-Power Series Class of Distributions

    OpenAIRE

    Jafari, Ali Akbar; Roozegar, Rasool

    2015-01-01

    In this paper, we introduce a new class of bivariate distributions by compounding the bivariate generalized exponential and power-series distributions. This new class contains some new sub-models such as the bivariate generalized exponential distribution and the bivariate generalized exponential-Poisson, -logarithmic, -binomial and -negative binomial distributions. We derive different properties of the new class of distributions. The EM algorithm is used to determine the maximum likelihood estim...

  20. Characterizations of some bivariate models using reciprocal coordinate subtangents

    OpenAIRE

    Sreenarayanapurath Madhavan Sunoj; Sreejith Thoppil Bhargavan; Jorge Navarro

    2014-01-01

    In the present paper, we consider the bivariate version of the reciprocal coordinate subtangent (RCST) and study its usefulness in characterizing some important bivariate models. In particular, characterization results are proved for a general bivariate model whose conditional distributions are proportional hazard rate models (see Navarro and Sarabia, 2011), for the Sarmanov family, and for the Ali-Mikhail-Haq family of bivariate distributions. We also study the relationship between the local dependence function an...

  1. On Bivariate Exponentiated Extended Weibull Family of Distributions

    OpenAIRE

    Roozegar, Rasool; Jafari, Ali Akbar

    2015-01-01

    In this paper, we introduce a new class of bivariate distributions called the bivariate exponentiated extended Weibull distributions. The model introduced here is of Marshall-Olkin type. This new class of bivariate distributions contains several bivariate lifetime models. Some mathematical properties of the new class of distributions are studied. We provide the joint and conditional density functions, the joint cumulative distribution function and the joint survival function. Special bivariat...

  2. Bayesian Semiparametric Density Deconvolution in the Presence of Conditionally Heteroscedastic Measurement Errors

    KAUST Repository

    Sarkar, Abhra

    2014-10-02

    We consider the problem of estimating the density of a random variable when precise measurements on the variable are not available, but replicated proxies contaminated with measurement error are available for sufficiently many subjects. Under the assumption of additive measurement errors this reduces to a problem of deconvolution of densities. Deconvolution methods often make restrictive and unrealistic assumptions about the density of interest and the distribution of measurement errors, e.g., normality and homoscedasticity and thus independence from the variable of interest. This article relaxes these assumptions and introduces novel Bayesian semiparametric methodology based on Dirichlet process mixture models for robust deconvolution of densities in the presence of conditionally heteroscedastic measurement errors. In particular, the models can adapt to asymmetry, heavy tails and multimodality. In simulation experiments, we show that our methods vastly outperform a recent Bayesian approach based on estimating the densities via mixtures of splines. We apply our methods to data from nutritional epidemiology. Even in the special case when the measurement errors are homoscedastic, our methodology is novel and dominates other methods that have been proposed previously. Additional simulation results, instructions on getting access to the data set and R programs implementing our methods are included as part of online supplemental materials.

  3. Determination of error measurement by means of the basic magnetization curve

    Science.gov (United States)

    Lankin, M. V.; Lankin, A. M.

    2016-04-01

    The article describes the implementation of a methodology for determining measurement errors by means of the basic magnetization curve of electric cutting machines. The basic magnetization curve, an integral operating characteristic of the electric machine, allows one to identify a fault type. In the measurement process, the evaluation of the error of the basic magnetization curve plays a major role, as inaccuracies in this particular characteristic can have a deleterious effect.

  4. Farm Level Nonparametric Analysis of Profit Maximization Behavior with Measurement Error

    OpenAIRE

    Zereyesus, Yacob Abrehe; Allen M. Featherstone; Langemeier, Michael R.

    2009-01-01

    This paper tests the farm level profit maximization hypothesis using a nonparametric production analysis approach allowing for measurement error in the input and output variables. All farms violated Varian's deterministic Weak Axiom of Profit Maximization (WAPM). The magnitude of the minimum critical standard errors required for consistency with profit maximization under a convex production technology was smaller after allowing for technological change during the sample period. Results indicate strong suppo...

  5. IDENTIFICATION AND CORRECTION OF COORDINATE MEASURING MACHINE GEOMETRICAL ERRORS USING LASERTRACER SYSTEMS

    Directory of Open Access Journals (Sweden)

    Adam Gąska

    2013-12-01

    Full Text Available LaserTracer (LT) systems are the most sophisticated and accurate laser tracking devices. They are mainly used for the correction of geometrical errors of machine tools and coordinate measuring machines. This process is about four times faster than standard methods based on the use of laser interferometers. The methodology of using the LaserTracer to correct geometrical errors, including a presentation of the system, the multilateration method and the software that was used, is described in detail in this paper.

  6. A newly conceived cylinder measuring machine and methods that eliminate the spindle errors

    International Nuclear Information System (INIS)

    Advanced manufacturing processes require improving dimensional metrology applications to reach a nanometric accuracy level. Such measurements may be carried out using conventional highly accurate roundness measuring machines. On these machines, the metrology loop goes through the probing and the mechanical guiding elements. Hence, external forces, strain and thermal expansion are transmitted to the metrological structure through the supporting structure, thereby reducing measurement quality. The obtained measurement also combines both the motion error of the guiding system and the form error of the artifact. Detailed uncertainty budgeting might be improved, using error separation methods (multi-step, reversal and multi-probe error separation methods, etc), enabling identification of the systematic (synchronous or repeatable) guiding system motion errors as well as form error of the artifact. Nevertheless, the performance of this kind of machine is limited by the repeatability level of the mechanical guiding elements, which usually exceeds 25 nm (in the case of an air bearing spindle and a linear bearing). In order to guarantee a 5 nm measurement uncertainty level, LNE is currently developing an original machine dedicated to form measurement on cylindrical and spherical artifacts with an ultra-high level of accuracy. The architecture of this machine is based on the ‘dissociated metrological technique’ principle and contains reference probes and cylinder. The form errors of both cylindrical artifact and reference cylinder are obtained after a mathematical combination between the information given by the probe sensing the artifact and the information given by the probe sensing the reference cylinder by applying the modified multi-step separation method. (paper)
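    For orientation, the simplest of the error separation methods listed above, the reversal method, combines two roundness traces taken before and after reversing the artifact by 180° with respect to the spindle (a textbook statement, sketched here for context):

      $$ m_1(\theta) = S(\theta) + P(\theta), \qquad m_2(\theta) = S(\theta) - P(\theta), $$
      $$ S(\theta) = \tfrac{1}{2}\,[\,m_1(\theta) + m_2(\theta)\,], \qquad P(\theta) = \tfrac{1}{2}\,[\,m_1(\theta) - m_2(\theta)\,], $$

    where S is the artifact form error and P the repeatable spindle motion error. Only the repeatable part separates this way, which is why the repeatability of the guiding elements bounds the achievable uncertainty, as noted above.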

  7. Bounds for Trivariate Copulas with Given Bivariate Marginals

    OpenAIRE

    Quesada-Molina JoséJuan; Durante Fabrizio; Klement ErichPeter

    2008-01-01

    We determine two constructions that, starting with two bivariate copulas, give rise to new bivariate and trivariate copulas, respectively. These constructions are used to determine pointwise upper and lower bounds for the class of all trivariate copulas with given bivariate marginals.

  8. Position determination and measurement error analysis for the spherical proof mass with optical shadow sensing

    Science.gov (United States)

    Hou, Zhendong; Wang, Zhaokui; Zhang, Yulin

    2016-09-01

    To meet the very demanding requirements for space gravity detection, the gravitational reference sensor (GRS) as the key payload needs to offer the relative position of the proof mass with extraordinarily high precision and low disturbance. The position determination and error analysis for the GRS with a spherical proof mass is addressed. Firstly the concept of measuring the freely falling proof mass with optical shadow sensors is presented. Then, based on the optical signal model, the general formula for position determination is derived. Two types of measurement system are proposed, for which the analytical solution to the three-dimensional position can be attained. Thirdly, with the assumption of Gaussian beams, the error propagation models for the variation of spot size and optical power, the effect of beam divergence, the chattering of beam center, and the deviation of beam direction are given respectively. Finally, the numerical simulations taken into account of the model uncertainty of beam divergence, spherical edge and beam diffraction are carried out to validate the performance of the error propagation models. The results show that these models can be used to estimate the effect of error source with an acceptable accuracy which is better than 20%. Moreover, the simulation for the three-dimensional position determination with one of the proposed measurement system shows that the position error is just comparable to the error of the output of each sensor.

  9. Evaluation of TRMM Ground-Validation Radar-Rain Errors Using Rain Gauge Measurements

    Science.gov (United States)

    Wang, Jianxin; Wolff, David B.

    2009-01-01

    Ground-validation (GV) radar-rain products are often utilized for validation of the Tropical Rainfall Measuring Mission (TRMM) space-based rain estimates, and hence, quantitative evaluation of the GV radar-rain product error characteristics is vital. This study uses quality-controlled gauge data to compare with TRMM GV radar rain rates in an effort to provide such error characteristics. The results show that significant differences of concurrent radar-gauge rain rates exist at various time scales ranging from 5 min to 1 day, despite lower overall long-term bias. However, the differences between the radar area-averaged rain rates and gauge point rain rates cannot be explained as due to radar error only. The error variance separation method is adapted to partition the variance of radar-gauge differences into the gauge area-point error variance and the radar rain estimation error variance. The results provide relatively reliable quantitative uncertainty evaluation of TRMM GV radar rain estimates at various time scales, and are helpful to better understand the differences between measured radar and gauge rain rates. It is envisaged that this study will contribute to better utilization of GV radar rain products to validate versatile space-based rain estimates from TRMM, as well as the proposed Global Precipitation Measurement, and other satellites.
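    The separation rests on a simple variance identity. Writing the radar-gauge difference as the sum of the radar estimation error $e_R$ and the gauge area-point error $e_{AP}$, and assuming the two are uncorrelated (a sketch of the principle; the adapted method in the study is more elaborate):

      $$ \operatorname{Var}(R - G) = \operatorname{Var}(e_R) + \operatorname{Var}(e_{AP}) \quad\Longrightarrow\quad \operatorname{Var}(e_R) = \operatorname{Var}(R - G) - \operatorname{Var}(e_{AP}), $$

    so the radar error variance follows once the area-point error variance has been estimated, typically from the spatial correlation structure of the gauges.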

  10. Estimation of bias errors in angle-of-arrival measurements using platform motion

    Science.gov (United States)

    Grindlay, A.

    1981-08-01

    An algorithm has been developed to estimate the bias errors in angle-of-arrival measurements made by electromagnetic detection devices on-board a pitching and rolling platform. The algorithm assumes that continuous exact measurements of the platform's roll and pitch conditions are available. When the roll and pitch conditions are used to transform deck-plane angular measurements of a nearly fixed target's position to a stabilized coordinate system, the resulting stabilized coordinates (azimuth and elevation) should not vary with changes in the roll and pitch conditions. If changes do occur they are a result of bias errors in the measurement system and the algorithm which has been developed uses these changes to estimate the sense and magnitude of angular bias errors.

  11. [The experimental errors in measuring the size of bones in mice].

    Science.gov (United States)

    Shimizu, H; Awata, T

    1982-10-01

    An experiment was conducted with 236 mice (3-9 weeks of age) to determine the bias in taking soft-X-ray photographs, and the systematic and random errors of the values measured with a picture analyzer. The sites measured are the length of the scapula (SCAL), humerus (HUML), ulna (ULNL), coxae (COXL), femur (FEML), tibia (TIBL), thoracic vertebrae (VTL), lumbar vertebrae (VLL) and sacral vertebrae (VSL), and the width of the scapula (SCAW) and coxae (COXW). Systematic bias peculiar to the procedure was found in the measured bone values. The X-ray photograph caused a downward bias in the length of the sacral vertebrae alone, but not in the others. The standard errors of measurement (square root of the error variance) with the picture analyzer ranged between 0.12 and 0.24 mm and had no apparent relationship to bone size. PMID:7169086

  12. Research of measurement errors caused by salt solution temperature drift in surface plasmon resonance sensors

    Institute of Scientific and Technical Information of China (English)

    Yingcai Wu; Zhengtian Gu; Yifang Yuan

    2006-01-01

    The influence of temperature on measurements with a surface plasmon resonance (SPR) sensor was investigated. Samples with various concentrations of NaCl were tested at different temperatures. It was shown that if the effect of temperature could be neglected, the measurement precision for the salt solution was 0.028 wt.-%, but the measurement error in salinity caused by temperature was 0.53 wt.-% on average for a temperature drift of 1 °C. To reduce the error, a double-cell SPR sensor, with salt solution and distilled water flowing separately and at the same temperature, was implemented.

  13. MEASUREMENT ERROR EFFECT ON THE POWER OF CONTROL CHART FOR ZERO-TRUNCATED POISSON DISTRIBUTION

    Directory of Open Access Journals (Sweden)

    Ashit Chakraborty

    2013-09-01

    Full Text Available Measurement error is the difference between the true value and the measured value of a quantity; it exists in practice and may considerably affect the performance of control charts in some cases. Measurement error variability has uncertainty which can come from several sources. In this paper, we have studied the effect of these sources of variability on the power characteristics of the control chart and obtained the values of the average run length (ARL) for the zero-truncated Poisson distribution (ZTPD). An expression for the power of the control chart for variable sample size under a standardized normal variate for the ZTPD is also derived.
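    For reference, the zero-truncated Poisson probability mass function underlying the chart, and the standard relation between a chart's signal probability p and its average run length, are

      $$ \Pr(X = k) = \frac{\lambda^{k}}{(e^{\lambda} - 1)\,k!}, \quad k = 1, 2, \ldots, \qquad \mathrm{ARL} = \frac{1}{p}. $$

    On this reading, measurement error perturbs the effective λ, and therefore p, which is one way to see how it degrades the chart's power and shifts the ARL values reported.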

  14. Correlated biomarker measurement error: an important threat to inference in environmental epidemiology.

    Science.gov (United States)

    Pollack, A Z; Perkins, N J; Mumford, S L; Ye, A; Schisterman, E F

    2013-01-01

    Utilizing multiple biomarkers is increasingly common in epidemiology. However, the combined impact of correlated exposure measurement error, unmeasured confounding, interaction, and limits of detection (LODs) on inference for multiple biomarkers is unknown. We conducted data-driven simulations evaluating bias from correlated measurement error with varying reliability coefficients (R), odds ratios (ORs), levels of correlation between exposures and error, LODs, and interactions. Blood cadmium and lead levels in relation to anovulation served as the motivating example, based on findings from the BioCycle Study (2005-2007). For most scenarios, main-effect estimates for cadmium and lead with increasing levels of positively correlated measurement error created increasing downward or upward bias for OR > 1.00 and OR < 1.00, respectively, that was also a function of effect size. Some scenarios showed bias for cadmium away from the null. Results subject to LODs were similar. Bias for main and interaction effects ranged from -130% to 36% and from -144% to 84%, respectively. A closed-form continuous outcome case solution provides a useful tool for estimating the bias in logistic regression. Investigators should consider how measurement error and LODs may bias findings when examining biomarkers measured in the same medium, prepared with the same process, or analyzed using the same method. PMID:23221725

  15. Measurement Error and the Analysis of Panel Data. Studies of Educative Processes Report No. 5.

    Science.gov (United States)

    Wiley, David E.; Hornik, Robert

    Early procedures for the analysis of multivariate panel data do not rest on well-specified statistical models. Recent approaches based on path analysis suffer from the defects of variable standardization and lack of attention to measurement error. The paper formulates a measurement model for quantitatively scaled multivariate panel data. The model…

  16. Bivariate copula in fitting rainfall data

    Science.gov (United States)

    Yee, Kong Ching; Suhaila, Jamaludin; Yusof, Fadhilah; Mean, Foo Hui

    2014-07-01

    The use of copulas to determine the joint distribution between two variables is widespread in various areas. The joint distribution of rainfall characteristics obtained using a copula model is preferable to standard bivariate modelling, since copulas are believed to overcome some of its limitations. Six copula models are applied to obtain the most suitable bivariate distribution between two rain gauge stations. The copula models are Ali-Mikhail-Haq (AMH), Clayton, Frank, Galambos, Gumbel-Hougaard (GH) and Plackett. The rainfall data used in the study are selected from rain gauge stations located in the southern part of Peninsular Malaysia, during the period from 1980 to 2011. The goodness-of-fit comparison in this study is based on the Akaike information criterion (AIC).
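    The abstract does not say which software was used; the R sketch below uses the CRAN copula package, with synthetic data standing in for the two rain gauge stations, to illustrate fitting several of the listed families and ranking them by AIC:

      # Hypothetical illustration with the CRAN 'copula' package.
      library(copula)
      set.seed(7)
      xy <- rCopula(500, claytonCopula(2))     # synthetic stand-in for station data
      u  <- pobs(xy)                           # pseudo-observations (ranks)
      fits <- list(
        clayton = fitCopula(claytonCopula(), u, method = "mpl"),
        frank   = fitCopula(frankCopula(),   u, method = "mpl"),
        gumbel  = fitCopula(gumbelCopula(),  u, method = "mpl")
      )
      sapply(fits, function(f) AIC(logLik(f))) # smallest AIC = preferred family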

  17. Reliability for some bivariate beta distributions

    Directory of Open Access Journals (Sweden)

    Nadarajah Saralees

    2005-01-01

    Full Text Available In the area of stress-strength models there has been a large amount of work as regards estimation of the reliability R = Pr(X < Y) when X and Y follow a bivariate distribution with dependence between X and Y. In particular, we derive explicit expressions for R when the joint distribution is bivariate beta. The calculations involve the use of special functions.
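    Written out, the quantity being derived is

      $$ R = \Pr(X < Y) = \iint_{\{x < y\}} f_{X,Y}(x, y)\, \mathrm{d}x\, \mathrm{d}y, $$

    with the integral taken over the unit square for bivariate beta models; the special functions mentioned arise from evaluating this integral in closed form.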

  18. Characterization of positional errors and their influence on micro four-point probe measurements on a 100 nm Ru film

    DEFF Research Database (Denmark)

    Kjær, Daniel; Hansen, Ole; Østerberg, Frederik Westergaard;

    2015-01-01

    Thin-film sheet resistance measurements at high spatial resolution and on small pads are important and can be realized with micrometer-scale four-point probes. As a result of the small scale, the measurements are affected by electrode position errors. We have characterized the electrode position errors ... Dual-configuration measurements, however, are shown to eliminate the effect of position errors to a level limited either by electrical measurement noise or by dynamic position errors. We show that the probe contact points remain almost static on the surface during the measurements (measured on an atomic scale), with a standard deviation of the dynamic position errors of 3 Å. We demonstrate how to experimentally distinguish between different sources of measurement error, e.g. electrical measurement noise, probe geometry error, as well as static and dynamic electrode position errors.

  19. A note on finding peakedness in bivariate normal distribution using Mathematica

    Directory of Open Access Journals (Sweden)

    Anwer Khurshid

    2007-07-01

    Full Text Available Peakedness measures the concentration around the central value. A classical standard measure of peakedness is kurtosis, the degree of peakedness of a probability distribution. In view of the inconsistency of kurtosis in measuring the peakedness of a distribution, Horn (1983) proposed a measure of peakedness for symmetrically unimodal distributions. The objective of this paper is two-fold. First, Horn's method is extended to the bivariate normal distribution. Second, it is shown that the computer algebra system Mathematica can be an extremely useful tool for all sorts of computations related to the bivariate normal distribution. Mathematica programs are also provided.

  20. Copula-based bivariate binary response models

    OpenAIRE

    Winkelmann, Rainer

    2009-01-01

    The bivariate probit model is frequently used for estimating the effect of an endogenous binary regressor on a binary outcome variable. This paper discusses simple modifications that maintain the probit assumption for the marginal distributions while introducing non-normal dependence among the two variables using copulas. Simulation results and evidence from two applications, one on the effect of insurance status on ambulatory expenditure and one on the effect of completing high school on sub...

  1. Stability of Bivariate GWAS Biomarker Detection

    OpenAIRE

    Bedő, Justin; Rawlinson, David; Goudey, Benjamin; Ong, Cheng Soon

    2014-01-01

    Given the difficulty and effort required to confirm candidate causal SNPs detected in genome-wide association studies (GWAS), there is no practical way to definitively filter false positives. Recent advances in algorithmics and statistics have enabled repeated exhaustive search for bivariate features in a practical amount of time using standard computational resources, allowing us to use cross-validation to evaluate the stability. We performed 10 trials of 2-fold cross-validation of exhaustiv...

  2. Covariate analysis of bivariate survival data

    Energy Technology Data Exchange (ETDEWEB)

    Bennett, L.E.

    1992-01-01

    The methods developed are used to analyze the effects of covariates on bivariate survival data when censoring and ties are present. The proposed method provides models for bivariate survival data that include differential covariate effects and censored observations. The proposed models are based on an extension of the univariate Buckley-James estimators which replace censored data points by their expected values, conditional on the censoring time and the covariates. For the bivariate situation, it is necessary to determine the expectation of the failure times for one component conditional on the failure or censoring time of the other component. Two different methods have been developed to estimate these expectations. In the semiparametric approach these expectations are determined from a modification of Burke's estimate of the bivariate empirical survival function. In the parametric approach censored data points are also replaced by their conditional expected values where the expected values are determined from a specified parametric distribution. The model estimation will be based on the revised data set, comprised of uncensored components and expected values for the censored components. The variance-covariance matrix for the estimated covariate parameters has also been derived for both the semiparametric and parametric methods. Data from the Demographic and Health Survey was analyzed by these methods. The two outcome variables are post-partum amenorrhea and breastfeeding; education and parity were used as the covariates. Both the covariate parameter estimates and the variance-covariance estimates for the semiparametric and parametric models will be compared. In addition, a multivariate test statistic was used in the semiparametric model to examine contrasts. The significance of the statistic was determined from a bootstrap distribution of the test statistic.
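    For background on the estimator being extended (a standard statement of the univariate Buckley-James step, not quoted from this work): each censored response is replaced by its conditional expectation, giving the working response

      $$ y_i^{*} = \delta_i\, y_i + (1 - \delta_i)\, E\bigl[\,Y_i \mid Y_i > c_i,\ x_i\,\bigr], $$

    where $\delta_i$ is the censoring indicator and $c_i$ the censoring time. The bivariate extension described above additionally conditions each component on the failure or censoring time of the other component, with that expectation computed either semiparametrically or from an assumed parametric distribution.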

  3. Objective priors for the bivariate normal model

    OpenAIRE

    Berger, James O.; Sun, Dongchu

    2008-01-01

    Study of the bivariate normal distribution raises the full range of issues involving objective Bayesian inference, including the different types of objective priors (e.g., Jeffreys, invariant, reference, matching), the different modes of inference (e.g., Bayesian, frequentist, fiducial) and the criteria involved in deciding on optimal objective priors (e.g., ease of computation, frequentist performance, marginalization paradoxes). Summary recommendations as to optimal objective priors are mad...

  4. Results on Differential and Dependent Measurement Error of the Exposure and the Outcome Using Signed Directed Acyclic Graphs

    OpenAIRE

    VanderWeele, Tyler J; Hernán, Miguel A.

    2012-01-01

    Measurement error in both the exposure and the outcome is a common problem in epidemiologic studies. Measurement errors in the exposure and the outcome are said to be independent of each other if the measured exposure and the measured outcome are statistically independent conditional on the true exposure and true outcome (and dependent otherwise). Measurement error is said to be nondifferential if measurement of the exposure does not depend on the true outcome conditional on the true exposure...

  5. Three-dimensional patient setup errors at different treatment sites measured by the Tomotherapy megavoltage CT

    Energy Technology Data Exchange (ETDEWEB)

    Hui, S.K.; Lusczek, E.; Dusenbery, K. [Univ. of Minnesota Medical School, Minneapolis, MN (United States). Dept. of Therapeutic Radiology - Radiation Oncology; DeFor, T. [Univ. of Minnesota Medical School, Minneapolis, MN (United States). Biostatistics and Informatics Core; Levitt, S. [Univ. of Minnesota Medical School, Minneapolis, MN (United States). Dept. of Therapeutic Radiology - Radiation Oncology; Karolinska Institutet, Stockholm (Sweden). Dept. of Onkol-Patol

    2012-04-15

    Reduction of interfraction setup uncertainty is vital for assuring the accuracy of conformal radiotherapy. We report a systematic study of setup error to assess patients' three-dimensional (3D) localization at various treatment sites. Tomotherapy megavoltage CT (MVCT) images were scanned daily in 259 patients from 2005-2008. We analyzed 6,465 MVCT images to measure setup error for head and neck (H and N), chest/thorax, abdomen, prostate, legs, and total marrow irradiation (TMI). Statistical comparisons of the absolute displacements across sites and time were performed in rotation (R), lateral (x), craniocaudal (y), and vertical (z) directions. The global systematic errors were measured to be less than 3 mm in each direction with increasing order of errors for different sites: H and N, prostate, chest, pelvis, spine, legs, and TMI. The differences in displacements in the x, y, and z directions, and 3D average displacement between treatment sites were significant (p < 0.01). Overall improvement in patient localization with time (after 3-4 treatment fractions) was observed. Large displacement (> 5 mm) was observed in the 75th percentile of the patient groups for chest, pelvis, legs, and spine in the x and y direction in the second week of the treatment. MVCT imaging is essential for determining 3D setup error and to reduce uncertainty in localization at all anatomical locations. Setup error evaluation should be performed daily for all treatment regions, preferably for all treatment fractions. (orig.)

  6. ac driving amplitude dependent systematic error in scanning Kelvin probe microscope measurements: Detection and correction

    International Nuclear Information System (INIS)

    The dependence of the contact potential difference (CPD) reading on the ac driving amplitude in scanning Kelvin probe microscope (SKPM) hinders researchers from quantifying true material properties. We show theoretically and demonstrate experimentally that an ac driving amplitude dependence in the SKPM measurement can come from a systematic error, and it is common for all tip sample systems as long as there is a nonzero tracking error in the feedback control loop of the instrument. We further propose a methodology to detect and to correct the ac driving amplitude dependent systematic error in SKPM measurements. The true contact potential difference can be found by applying a linear regression to the measured CPD versus one over ac driving amplitude data. Two scenarios are studied: (a) when the surface being scanned by SKPM is not semiconducting and there is an ac driving amplitude dependent systematic error; (b) when a semiconductor surface is probed and asymmetric band bending occurs when the systematic error is present. Experiments are conducted using a commercial SKPM and CPD measurement results of two systems: platinum-iridium/gap/gold and platinum-iridium/gap/thermal oxide/silicon are discussed
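    The proposed correction amounts to a straight-line extrapolation in the reciprocal amplitude. A minimal R sketch, with invented numbers, assuming the linear model in 1/V_ac stated above:

      # Extrapolate measured CPD to 1/V_ac -> 0; all values are hypothetical.
      set.seed(2)
      vac <- c(0.5, 1, 2, 4, 8)                        # ac driving amplitudes (V)
      cpd <- 0.30 + 0.12 / vac + rnorm(5, sd = 1e-3)   # simulated CPD readings (V)
      fit <- lm(cpd ~ I(1 / vac))
      coef(fit)[1]   # intercept estimates the true CPD (about 0.30 V here)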

  7. The effect of genotyping errors on the robustness of composite linkage disequilibrium measures

    Indian Academy of Sciences (India)

    Yu Mei Li; Yang Xiang

    2011-12-01

    We conclude that composite linkage disequilibrium (LD) measures should be adopted in population-based LD mapping or association mapping studies, since they are unaffected by Hardy–Weinberg disequilibrium. Although some properties of composite LD measures have been studied recently, the effects of genotyping errors on them have not been examined. In this report, we derived deterministic formulas to evaluate the impact of genotyping errors on the composite LD measures $\Delta'_{AB}$ and $r_{AB}$, and compared the robustness of $\Delta'_{AB}$ and $r_{AB}$ in the presence of genotyping errors. The results showed that $\Delta'_{AB}$ and $r_{AB}$ depend on the allele frequencies and the assumed error model, and show varying degrees of robustness in the presence of errors. In general, whether there is HWD or not, $r_{AB}$ is more robust than $\Delta'_{AB}$ except in some special cases, and the difference in robustness between $\Delta'_{AB}$ and $r_{AB}$ becomes less severe as the difference between the frequencies of the two SNP alleles becomes smaller.

  8. Covariate measurement error correction methods in mediation analysis with failure time data.

    Science.gov (United States)

    Zhao, Shanshan; Prentice, Ross L

    2014-12-01

    Mediation analysis is important for understanding the mechanisms whereby one variable causes changes in another. Measurement error could obscure the ability of the potential mediator to explain such changes. This article focuses on developing correction methods for measurement error in the mediator with failure time outcomes. We consider a broad definition of measurement error, including technical error, and error associated with temporal variation. The underlying model with the "true" mediator is assumed to be of the Cox proportional hazards model form. The induced hazard ratio for the observed mediator no longer has a simple form independent of the baseline hazard function, due to the conditioning event. We propose a mean-variance regression calibration approach and a follow-up time regression calibration approach, to approximate the partial likelihood for the induced hazard function. Both methods demonstrate value in assessing mediation effects in simulation studies. These methods are generalized to multiple biomarkers and to both case-cohort and nested case-control sampling designs. We apply these correction methods to the Women's Health Initiative hormone therapy trials to understand the mediation effect of several serum sex hormone measures on the relationship between postmenopausal hormone therapy and breast cancer risk. PMID:25139469
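    A stripped-down illustration of the general idea, using plain regression calibration rather than the authors' mean-variance and follow-up time refinements (all data simulated; the survival package is assumed available):

      # Regression calibration for an error-prone mediator in a Cox model.
      library(survival)
      set.seed(3)
      n  <- 2000
      x  <- rbinom(n, 1, 0.5)                   # exposure (e.g., hormone therapy)
      m  <- 0.5 * x + rnorm(n)                  # true mediator (unobserved)
      w1 <- m + rnorm(n); w2 <- m + rnorm(n)    # two error-prone replicates
      t  <- rexp(n, rate = exp(-2 + 0.4 * x + 0.6 * m))
      s  <- Surv(pmin(t, 3), t < 3)             # administrative censoring at t = 3
      mhat <- fitted(lm(w2 ~ x + w1))           # calibrated mediator E[M | X, W1]
      coxph(s ~ x + mhat)                       # compare with the naive s ~ x + w1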

  9. The Influence of Training Phase on Error of Measurement in Jump Performance.

    Science.gov (United States)

    Taylor, Kristie-Lee; Hopkins, Will G; Chapman, Dale W; Cronin, John B

    2016-03-01

    The purpose of this study was to calculate the coefficients of variation in jump performance for individual participants in multiple trials over time to determine the extent to which there are real differences in the error of measurement between participants. The effect of training phase on measurement error was also investigated. Six subjects participated in a resistance-training intervention for 12 wk with mean power from a countermovement jump measured 6 d/wk. Using a mixed-model meta-analysis, differences between subjects, within-subject changes between training phases, and the mean error values during different phases of training were examined. Small, substantial factor differences of 1.11 were observed between subjects; however, the finding was unclear based on the width of the confidence limits. The mean error was clearly higher during overload training than baseline training, by a factor of ×/÷ 1.3 (confidence limits 1.0-1.6). The random factor representing the interaction between subjects and training phases revealed further substantial differences of ×/÷ 1.2 (1.1-1.3), indicating that on average, the error of measurement in some subjects changes more than in others when overload training is introduced. The results from this study provide the first indication that within-subject variability in performance is substantially different between training phases and, possibly, different between individuals. The implications of these findings for monitoring individuals and estimating sample size are discussed. PMID:26181005

  10. Phantom Effects in School Composition Research: Consequences of Failure to Control Biases Due to Measurement Error in Traditional Multilevel Models

    Science.gov (United States)

    Televantou, Ioulia; Marsh, Herbert W.; Kyriakides, Leonidas; Nagengast, Benjamin; Fletcher, John; Malmberg, Lars-Erik

    2015-01-01

    The main objective of this study was to quantify the impact of failing to account for measurement error on school compositional effects. Multilevel structural equation models were incorporated to control for measurement error and/or sampling error. Study 1, a large sample of English primary students in Years 1 and 4, revealed a significantly…

  11. What does variation on household survey methods reveal about the nature of measurement errors in consumption estimates?

    OpenAIRE

    Gibson, John; BEEGLE, KATHLEEN; Weerdt, Joachim De; Friedman, Jed

    2015-01-01

    We randomly assigned eight different consumption surveys to obtain evidence on the nature of measurement errors in estimates of household consumption. Regressions using data from more error-prone designs are compared with results from a “gold standard” survey. Measurement errors appear to have a mean-reverting negative correlation with true consumption, especially for food and especially for rural households.

  12. Influence of video compression on the measurement error of the television system

    Science.gov (United States)

    Sotnik, A. V.; Yarishev, S. N.; Korotaev, V. V.

    2015-05-01

    Video data require a very large memory capacity. The optimal quality/volume trade-off of a video encoding method is one of the most pressing problems, owing to the urgent need to transfer large amounts of video over various networks. Digital TV signal compression technology reduces the amount of data used to represent the video stream; video compression effectively reduces the stream required for transmission and storage. It is important to take into account the uncertainties caused by compression of the video signal when television measuring systems are used. There are many digital compression methods. The aim of the proposed work is to research the influence of video compression on the measurement error in television systems. The measurement error of an object parameter is the main characteristic of television measuring systems: accuracy characterizes the difference between the measured value and the actual parameter value. Errors caused by the optical system can be identified as one source of error in television-system measurements; the method of processing the received video signal is another. The presence of errors leads to large distortions in the case of compression with a constant data-stream rate, and increases the amount of data required to transmit or record an image frame in the case of constant quality. The purpose of intra-coding is to reduce the spatial redundancy within a frame (or field) of the television image; this redundancy is caused by the strong correlation between the elements of the image. It is possible to convert an array of image samples into a matrix of coefficients that are not correlated with each other, if one can find a corresponding orthogonal transformation. Entropy coding can then be applied to these uncorrelated coefficients to achieve a reduction in the digital stream. One can select a transformation such that most of the matrix coefficients will be almost zero for typical images. Excluding these zero coefficients also

  13. A MEASURING SYSTEM WITH AN ADDITIONAL CHANNEL FOR ELIMINATING THE DYNAMIC ERROR

    Directory of Open Access Journals (Sweden)

    Dichev Dimitar

    2014-03-01

    Full Text Available The present article describes a measuring system for determining the parameters of vessels. The system has high measurement accuracy when operating in both static and dynamic modes. It is designed on a gyro-free principle for plotting a vertical. High measurement accuracy is achieved by using a simplified design of the mechanical module as well as by minimizing the instrumental error. A new solution for improving the measurement accuracy in dynamic mode is offered. The approach presented is based on a method where the dynamic error is eliminated in real time, unlike existing measurement methods and tools, which rely on stabilization of the vertical in inertial space. The results obtained from the theoretical experiments, performed on the basis of the developed mathematical model, demonstrate the effectiveness of the suggested measurement approach.

  14. Unaccounted source of systematic errors in measurements of the Newtonian gravitational constant G

    Energy Technology Data Exchange (ETDEWEB)

    DeSalvo, Riccardo, E-mail: Riccardo.desalvo@gmail.com [California State University, Northridge, 18111 Nordhoff Street, Northridge, CA 91330-8332 (United States); University of Sannio, Corso Garibaldi 107, Benevento 82100 (Italy)

    2015-06-26

    Many precision measurements of G have produced a spread of results incompatible with measurement errors. Clearly an unknown source of systematic errors is at work. It is proposed here that most of the discrepancies derive from subtle deviations from Hooke's law, caused by avalanches of entangled dislocations. The idea is supported by deviations from linearity reported by experimenters measuring G, similar to what is observed, on a larger scale, in low-frequency spring oscillators. Some mitigating experimental apparatus modifications are suggested. - Highlights: • Source of discrepancies in universal gravitational constant G measurements. • Collective motion of dislocations results in a breakdown of Hooke's law. • Self-organized criticality produces non-predictive shifts of the equilibrium point. • A new dissipation mechanism, different from loss-angle and viscous models, is necessary. • The mitigation measures proposed may bring coherence to the measurements of G.

  15. Bivariate mass-size relation as a function of morphology as determined by Galaxy Zoo 2 crowdsourced visual classifications

    Science.gov (United States)

    Beck, Melanie; Scarlata, Claudia; Fortson, Lucy; Willett, Kyle; Galloway, Melanie

    2016-01-01

    It is well known that the mass-size distribution evolves as a function of cosmic time and that this evolution is different between passive and star-forming galaxy populations. However, the devil is in the details and the precise evolution is still a matter of debate, since this requires careful comparison between similar galaxy populations over cosmic time while simultaneously taking into account changes in image resolution, rest-frame wavelength, and surface brightness dimming, in addition to properly selecting representative morphological samples. Here we present the first step in an ambitious undertaking to calculate the bivariate mass-size distribution as a function of time and morphology. We begin with a large sample (~3 × 10^5) of SDSS galaxies at z ~ 0.1. Morphologies for this sample have been determined by Galaxy Zoo crowdsourced visual classifications, and we split the sample not only into disk- and bulge-dominated galaxies but also into finer morphology bins such as bulge strength. Bivariate distribution functions are the only way to properly account for biases and selection effects. In particular, we quantify the mass-size distribution with a version of the parametric maximum likelihood estimator which has been modified to account for measurement errors as well as upper limits on galaxy sizes.

  16. The Analysis of Measurement Errors as Outlined in GUM and in the IAEA Statistical Methodologies for Safeguards: a Comparison

    International Nuclear Information System (INIS)

    We compare the definitions and propagation of measurement errors as outlined in GUM (Guide to the expression of Uncertainty in Measurement) and in the IAEA statistical methodologies for safeguards. Measurement errors are not observable. Based on a correct mode of error propagation, we can estimate the variances of measurement errors. In order to do so, we have to first define a mathematical measurement error model. Based on this model, we can then carry out propagation of errors with the aim to determine realistic estimates of the variance of measurement errors. For illustration purposes, we use the mathematical error model describing the measurement errors associated with a linear calibration. We can demonstrate that the mathematical error model for any calibration, which always consists of a random and systematic component, is subsumed in the mathematical error model used in the IAEA statistical methodology for safeguards. The goal of this paper is to describe the mode of propagation of measurement errors as outlined in GUM and in the IAEA statistical methodology for safeguards and to compare the mathematical error model used for a linear calibration with the model used for the evaluation of paired data. Paired data are obtained by measuring the same item with two different measurement methods and are used by the IAEA to estimate the measurement error variances of plant operators and inspectors in order to inform the material balance evaluation (MBE) process. Adequate methods of error propagation are of paramount importance to draw soundly based conclusions from material balance evaluation at bulk-handling facilities. (author)

  17. Error rates of Belavkin weighted quantum measurements and a converse to Holevo's asymptotic optimality theorem

    OpenAIRE

    Tyson, Jon

    2009-01-01

    We compare several instances of pure-state Belavkin weighted square-root measurements from the standpoint of minimum-error discrimination of quantum states. The quadratically weighted measurement is proven superior to the so-called "pretty good measurement" (PGM) in a number of respects: (1) Holevo's quadratic weighting unconditionally outperforms the PGM in the case of two-state ensembles, with equality only in trivial cases. (2) A converse of a theorem of Holevo is proven, showing that a we...

  18. Measurement Rounding Errors in an Assessment Model of Project Led Engineering Education

    Directory of Open Access Journals (Sweden)

    Francisco Moreira

    2009-11-01

    Full Text Available This paper analyzes the rounding errors that occur in the assessment of an interdisciplinary Project-Led Education (PLE) process implemented in the Integrated Master degree in Industrial Management and Engineering (IME) at the University of Minho. PLE is an innovative educational methodology which makes use of active learning, promoting higher levels of motivation and student autonomy. The assessment model is based on multiple evaluation components with different weights, and each component can be evaluated by several teachers involved in different Project Supporting Courses (PSC). This model can be affected by different types of errors, namely: (1) rounding errors, and (2) non-uniform criteria for rounding the grades. A rigorous analysis of the assessment model was made, and the rounding errors involved in each project component were characterized and measured. This resulted in a global maximum error of 0.308 on the individual student project grade, on a 0 to 100 scale. This analysis is intended to improve not only the reliability of the assessment results, but also teachers' awareness of this problem. Recommendations are also made in order to improve the assessment model and reduce the rounding errors as much as possible.
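    A toy R calculation (weights and grades invented) shows the mechanism under study: rounding each weighted component separately generally differs from rounding only the final grade.

      # Per-component rounding vs. a single final rounding, 0-100 scale.
      w <- c(0.4, 0.3, 0.2, 0.1)            # component weights (hypothetical)
      g <- c(67.46, 71.51, 58.49, 80.52)    # raw component grades (hypothetical)
      sum(w * round(g))                     # each component rounded first
      round(sum(w * g))                     # only the total rounded
      sum(w * round(g)) - sum(w * g)        # error introduced by early rounding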

  19. Analysis of error in measurement and resolution of electronic speckle photography in material testing

    International Nuclear Information System (INIS)

    Causes and magnitude of error in measurement and resolution are investigated for electronic speckle photography (ESP), which is used like a strain gauge in material testing. For this purpose a model of the rough surface which allows the description of cross correlation of speckle images under the influence of material strain is developed. The process through which material strain leads to decorrelation of speckle images is shown. The error in measurement which is caused by defocused imaging and statistical errors in the displacement estimation of speckle images is investigated theoretically. The results are supported by simulations and experiments. Moreover the resolution of ESP can be improved through increased optical magnification as well as adjusted aperture. Resolutions which are usually considered to be accessible only to interferometric techniques are achieved. (author)

  20. Analysis of error in measurement and resolution of electronic speckle photography in material testing

    CERN Document Server

    Feiel, R

    1999-01-01

    Causes and magnitude of error in measurement and resolution are investigated for electronic speckle photography (ESP), which is used like a strain gauge in material testing. For this purpose a model of the rough surface which allows the description of cross correlation of speckle images under the influence of material strain is developed. The process through which material strain leads to decorrelation of speckle images is shown. The error in measurement which is caused by defocused imaging and statistical errors in the displacement estimation of speckle images is investigated theoretically. The results are supported by simulations and experiments. Moreover the resolution of ESP can be improved through increased optical magnification as well as adjusted aperture. Resolutions which are usually considered to be accessible only to interferometric techniques are achieved.

  1. Wide-aperture laser beam measurement using transmission diffuser: errors modeling

    Science.gov (United States)

    Matsak, Ivan S.

    2015-06-01

    Instrumental errors in the measurement of wide-aperture laser beam diameter were modeled in order to build a measurement setup and justify its metrological characteristics. The modeled setup is based on a CCD camera and a transmission diffuser. This method is appropriate for precision measurement of large laser beam widths from 10 mm up to 1000 mm. It is impossible to measure such beams with other methods based on a slit, pinhole, knife edge or direct CCD camera measurement. The method is suitable for continuous and pulsed laser irradiation. However, the transmission diffuser method has poor metrological justification, which is required in the field of wide-aperture beam forming system verification. Considering the non-availability of a standard for wide-aperture flat-top beams, modelling is the preferred way to provide basic reference points for developing the measurement system. Modelling was conducted in MathCAD. A super-Lorentz distribution with shape parameter 6-12 was used as a model of the beam. From theoretical evaluations it was found that the key parameters influencing the error are: relative beam size, spatial non-uniformity of the diffuser, lens distortion, physical vignetting, CCD spatial resolution and effective camera ADC resolution. Errors were modeled for the 90%-of-power beam diameter criterion. The 12-order super-Lorentz distribution was the primary model, because it precisely matches the experimental distribution at the output of the test beam forming system, although other orders were also used. Analytic expressions were obtained by analyzing the modelling results for each influencing factor. The attainability of <1% error based on the choice of parameters was shown. The choice was based on the parameters of commercially available components of the setup. The method can provide down to 0.1% error if calibration procedures and multiple measurements are used.
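
    The 90%-of-power diameter criterion can be reproduced numerically for a super-Lorentzian beam. The sketch below assumes the common parameterization I(r) = 1/(1 + (r/w)^p); the paper's exact beam model and setup parameters are not reproduced.

```python
import numpy as np

# Diameter enclosing 90% of the power of a super-Lorentzian beam,
# assuming I(r) = 1 / (1 + (r/w)**p) with shape order p.
def d90(w=1.0, p=12, n=200_000, r_max=20.0):
    r = np.linspace(0.0, r_max, n)
    intensity = 1.0 / (1.0 + (r / w) ** p)
    ring_power = 2.0 * np.pi * r * intensity     # power per annulus (rectangle rule)
    cum = np.cumsum(ring_power)
    cum /= cum[-1]                               # normalized enclosed power
    return 2.0 * np.interp(0.90, cum, r)         # radius -> diameter

for p in (6, 12):
    print(f"order {p}: D90 = {d90(p=p):.3f} (in units of w)")
```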

  2. The glazing temperature measurement in solar stills - Errors and implications on performance evaluation

    International Nuclear Information System (INIS)

    Highlights: → We investigate parameters affecting glazing surface temperature measurement accuracy. → Its degradation is mainly attributed to the radiation absorption at the sensor bead. → It is also attributed to poor sensor bond conductance phenomena. → An order of magnitude error is derived from a series of field measurements. → Conditions are specified to minimize error implications on performance evaluations. -- Abstract: Although the brine temperature measurement in solar stills would be more or less conventional, the precise condensing surface temperature measurement, appears to present a significant difficulty to measure. In the present investigation, an analysis is developed aiming to underline the parameters affecting the glazing surface temperature measurement and a series of field measurements are presented, aiming to identify and evaluate the errors associated to the measurement of this crucial physical quantity. It is derived that among other reasons the surface temperature measurement accuracy may strongly be degraded owing to the sensor tip overheating due to the radiation absorption, as well as to the development of poor bond conductance between sensor bead and glazing surface. It may also be degraded owing to the temperature drop across the glazing thickness and the non-uniform temperature distribution over the entire condensing surface area, something that makes the selection of the appropriate location of the particular temperature transducer necessary. Based on the derived measurements, an order of magnitude analysis is employed for the approximate evaluation of error range in the condensing surface temperature measurement. This, which depending on specific conditions was found to vary between about 1 and 2 oC, was employed to demonstrate the implications and approximate conditions under which its effect could become excessively high in ordinary solar still investigations.

  3. [Errors in medicine. Causes, impact and improvement measures to improve patient safety].

    Science.gov (United States)

    Waeschle, R M; Bauer, M; Schmidt, C E

    2015-09-01

    The guarantee of quality of care and patient safety is of major importance in hospitals even though increased economic pressure and work intensification are ubiquitously present. Nevertheless, adverse events still occur in 3-4 % of hospital stays and of these 25-50 % are estimated to be avoidable. The identification of possible causes of error and the development of measures for the prevention of medical errors are essential for patient safety. The implementation and continuous development of a constructive culture of error tolerance are fundamental.The origins of errors can be differentiated into systemic latent and individual active causes and components of both categories are typically involved when an error occurs. Systemic causes are, for example out of date structural environments, lack of clinical standards and low personnel density. These causes arise far away from the patient, e.g. management decisions and can remain unrecognized for a long time. Individual causes involve, e.g. confirmation bias, error of fixation and prospective memory failure. These causes have a direct impact on patient care and can result in immediate injury to patients. Stress, unclear information, complex systems and a lack of professional experience can promote individual causes. Awareness of possible causes of error is a fundamental precondition to establishing appropriate countermeasures.Error prevention should include actions directly affecting the causes of error and includes checklists and standard operating procedures (SOP) to avoid fixation and prospective memory failure and team resource management to improve communication and the generation of collective mental models. Critical incident reporting systems (CIRS) provide the opportunity to learn from previous incidents without resulting in injury to patients. Information technology (IT) support systems, such as the computerized physician order entry system, assist in the prevention of medication errors by providing

  4. Errors in the measurement of non-Gaussian noise spectra using rf spectrum analyzers

    International Nuclear Information System (INIS)

    We discuss the nature of errors which may occur when the spectra of random signals not obeying Gaussian statistics are measured with typical rf spectrum analyzers. These errors depend on both the noise statistics and the process used to detect the random signal after it has been passed through a narrow bandpass filter within the spectrum analyzer. In general, for random signals not obeying Gaussian statistics, the output of the bandpass filter must be measured with a square-law detector if the resulting measurement is to be strictly proportional to the power spectrum of the input signal. We compare measurements of the power spectra of non-Gaussian noise using a commercial spectrum analyzer with its resident envelope detector, with measurements by the same analyzer fitted with a square-law detector. Differences of about 5% were observed.
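
    The square-law versus envelope-detector discrepancy is easy to demonstrate in simulation. A minimal sketch, assuming independent in-phase/quadrature noise samples and an envelope detector calibrated for Rayleigh (Gaussian-input) statistics; the 5% figure above is specific to the paper's instrument and noise source.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000

def envelope_detector_bias(i, q):
    """Compare a square-law detector (true power) with an envelope
    detector calibrated assuming Rayleigh envelope statistics."""
    power_true = np.mean(i**2 + q**2)
    env = np.sqrt(i**2 + q**2)
    power_env = np.mean(env)**2 * 4.0 / np.pi    # Rayleigh-based conversion
    return power_env / power_true - 1.0

# Gaussian noise: the calibrated envelope detector is unbiased.
i, q = rng.normal(size=n), rng.normal(size=n)
print(f"Gaussian input:  {envelope_detector_bias(i, q):+.2%}")

# Heavier-tailed (non-Gaussian) noise: the envelope reading is biased.
i, q = rng.standard_t(df=4, size=n), rng.standard_t(df=4, size=n)
print(f"Student-t input: {envelope_detector_bias(i, q):+.2%}")
```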

  5. Error Vector Magnitude (EVM) Measurement to Characterize Tracking and Data Relay Satellite (TDRS) Channel Impairment

    Science.gov (United States)

    Mebratu, Derssie; Kegege, Obadiah; Shaw, Harry

    2016-01-01

    A digital signal is transmitted via a carrier wave, demodulated at a receiver, and mapped to an ideal constellation position. However, noise distortion, carrier leakage and phase noise divert the actual constellation position of a signal to a new position. In order to assess sources of noise and carrier leakage, the Bit Error Rate (BER) measurement technique is commonly used to evaluate the number of erroneous bits per transmitted bit. In addition, we present Error Vector Magnitude (EVM), which measures the deviation between the ideal and actual positions, assesses sources of signal distortion, and evaluates a wireless communication system's performance with a single metric. Applying the EVM technique, we also measure the performance of a User Services Subsystem Component Replacement (USSCR) modem. Furthermore, we propose the EVM measurement technique for the Tracking and Data Relay Satellite (TDRS) system, to measure and evaluate the channel impairment between a ground transmitter and a terminal receiver at the White Sands Complex.
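
    As a sketch of the single-metric idea, EVM can be computed directly from ideal and measured constellation points. This is a generic, hedged illustration (toy QPSK with assumed noise and leakage levels), not the USSCR or TDRS measurement chain.

```python
import numpy as np

def evm_percent(measured, ideal):
    """RMS error vector magnitude, normalized by the RMS power of the
    ideal constellation (one common convention among several)."""
    err = measured - ideal
    return 100.0 * np.sqrt(np.mean(np.abs(err)**2) / np.mean(np.abs(ideal)**2))

# Toy QPSK with additive noise and a small carrier-leakage offset.
rng = np.random.default_rng(1)
symbols = rng.integers(0, 4, size=10_000)
ideal = np.exp(1j * (np.pi / 4 + np.pi / 2 * symbols))   # unit-power QPSK
noise = 0.05 * (rng.normal(size=symbols.size) + 1j * rng.normal(size=symbols.size))
measured = ideal + noise + 0.02                           # 0.02 models carrier leakage
print(f"EVM = {evm_percent(measured, ideal):.2f}%")
```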

  6. Error analysis in reactor-core neutron beam density measurements by gold-foil activation

    Energy Technology Data Exchange (ETDEWEB)

    Prokof'ev, Y.A.; Bondarenko, L.N.; Rogok, E.V.; Spivak, P.E.

    1986-09-01

    The most accurate method for neutron density measurements, where the spectrum cut-off energy is appreciably lower than the gold cross-section resonance energy, is by gold-foil activation. The authors show that this method also makes it possible to measure core-beam neutron densities with high accuracy, even though this requires taking into account the gold-activation contribution of epithermal neutrons from the 3×10^4-b neutron capture resonance at 4.8 eV and inserting the appropriate corrections in the measurement results. The activation method was experimentally used for precision measurement of the reactor-core beam density in the study of the beam neutron half-life. Data are presented which show that the additive error is within the ±0.5 measurement error.

  7. Point cloud uncertainty analysis for laser radar measurement system based on error ellipsoid model

    Science.gov (United States)

    Zhengchun, Du; Zhaoyong, Wu; Jianguo, Yang

    2016-04-01

    Three-dimensional laser scanning has become an increasingly popular measurement method in industrial fields as it provides a non-contact means of measuring large objects, whereas the conventional methods are contact-based. However, the data acquisition process is subject to many interference factors, which inevitably cause errors. Therefore, it is necessary to precisely evaluate the accuracy of the measurement results. In this study, an error-ellipsoid-based uncertainty model was applied to 3D laser radar measurement system (LRMS) data. First, a spatial point uncertainty distribution map was constructed according to the error ellipsoid attributes. The single-point uncertainty ellipsoid model was then extended to point-point, point-plane, and plane-plane situations, and the corresponding distance uncertainty models were derived. Finally, verification experiments were performed by using an LRMS to measure the height of a cubic object, and the measurement accuracies were evaluated. The results show that the plane-plane distance uncertainties determined based on the ellipsoid model are comparable to those obtained by actual distance measurements. Thus, this model offers solid theoretical support to enable further LRMS measurement accuracy improvement.
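
    A generic version of the single-point construction is sketched below: given a 3×3 covariance for a measured point, the confidence ellipsoid's semi-axes follow from the eigendecomposition. The covariance values are assumed, and the paper's spherical-coordinate error model and its point-plane/plane-plane extensions are not reproduced.

```python
import numpy as np
from scipy.stats import chi2

def error_ellipsoid(cov, confidence=0.95):
    """Semi-axis lengths and directions of the confidence ellipsoid
    for a 3D point with covariance `cov`."""
    variances, directions = np.linalg.eigh(cov)   # principal variances/axes
    k = np.sqrt(chi2.ppf(confidence, df=3))       # scale for the confidence level
    return k * np.sqrt(variances), directions

# Toy covariance (mm^2): range error larger than the two angular errors.
cov = np.diag([0.20**2, 0.05**2, 0.05**2])
axes, dirs = error_ellipsoid(cov)
print("semi-axes (mm):", np.round(axes, 3))
```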

  8. Consequences of exposure measurement error for confounder identification in environmental epidemiology

    DEFF Research Database (Denmark)

    Budtz-Jørgensen, Esben; Keiding, Niels; Grandjean, Philippe;

    2003-01-01

    exposure given the other independent variables. In addition, confounder effects may also be affected by the exposure measurement error. These difficulties in statistical model development are illustrated by examples from a epidemiological study performed in the Faroe Islands to investigate the adverse...

  9. Estimating the Persistence and the Autocorrelation Function of a Time Series that is Measured with Error

    DEFF Research Database (Denmark)

    Hansen, Peter Reinhard; Lunde, Asger

    2014-01-01

    An economic time series can often be viewed as a noisy proxy for an underlying economic variable. Measurement errors will influence the dynamic properties of the observed process and may conceal the persistence of the underlying time series. In this paper we develop instrumental variable (IV...

  10. Correlation Attenuation Due to Measurement Error: A New Approach Using the Bootstrap Procedure

    Science.gov (United States)

    Padilla, Miguel A.; Veprinsky, Anna

    2012-01-01

    Issues with correlation attenuation due to measurement error are well documented. More than a century ago, Spearman proposed a correction for attenuation. However, this correction has seen very little use since it can potentially inflate the true correlation beyond one. In addition, very little confidence interval (CI) research has been done for…

  11. Fast error simulation of optical 3D measurements at translucent objects

    Science.gov (United States)

    Lutzke, P.; Kühmstedt, P.; Notni, G.

    2012-09-01

    The scan results of optical 3D measurements of translucent objects deviate from the real object's surface. This error is caused by the fact that light is scattered in the object's volume and is not exclusively reflected at its surface. A few approaches have been made to separate the surface-reflected light from the volume-scattered light. For smooth objects the surface-reflected light is dominantly concentrated in the specular direction and can only be observed from a point in this direction. Thus the separation either leads to measurement results that only create data for near-specular directions or provides data from areas that are not well separated. To ensure the flexibility and precision of optical 3D measurement systems for translucent materials it is necessary to enhance the understanding of the error-forming process. For this purpose a technique for simulating the 3D measurement of translucent objects is presented. A simple error model is briefly outlined and extended to an efficient simulation environment based upon ordinary raytracing methods. For comparison, the results of a Monte-Carlo simulation are presented. Only a few material and object parameters are needed for the raytracing simulation approach. The attempt at in-system collection of these material- and object-specific parameters is illustrated. The main concept of developing an error-compensation method based on the simulation environment and the collected parameters is described. The complete procedure uses both the surface-reflected and the volume-scattered light for further processing.

  12. Simulation study on heterogeneous variance adjustment for observations with different measurement error variance

    DEFF Research Database (Denmark)

    Pitkänen, Timo; Mäntysaari, Esa A; Nielsen, Ulrik Sander;

    2013-01-01

    variance correction is developed for the same observations. As automated milking systems are becoming more popular the current evaluation model needs to be enhanced to account for the different measurement error variances of observations from automated milking systems. In this simulation study different...

  13. Investigation of nonlinearity as an error source in strain gauge measurements of high elongations, and comparison with other measuring methods

    International Nuclear Information System (INIS)

    High elongation measurement using strain gauges presents problems with regard to the accuracy of results, emanating on the one hand from the measuring technique applied (bridge linearity at constant current or constant voltage), and on the other from the strain gauge itself (k factor). Error correction has to take into account all parameters influencing the electric signal, as certain effects are opposite in sign. The maximum deviations of the elongations measured by the various measuring devices from the true elongation vary with the measuring technique applied, and within the elongation range investigated (0-0.1 m/m) may reach a maximum between 1 p.c. and 11 p.c. Measurements with equipment using constant current or constant voltage supply have been shown to be appropriate also in the high elongation range, if their specific errors within ≤ p.c. are duly corrected. (orig.)

  14. Correcting for multivariate measurement error by regression calibration in meta-analyses of epidemiological studies

    DEFF Research Database (Denmark)

    Tybjærg-Hansen, Anne

    2009-01-01

    Within-person variability in measured values of multiple risk factors can bias their associations with disease. The multivariate regression calibration (RC) approach can correct for such measurement error and has been applied to studies in which true values or independent repeat measurements of the risk factors are observed on a subsample. We extend the multivariate RC techniques to a meta-analysis framework where multiple studies provide independent repeat measurements and information on disease outcome. We consider the cases where some or all studies have repeat measurements, and compare study

  15. Thin film thickness measurement error reduction by wavelength selection in spectrophotometry

    International Nuclear Information System (INIS)

    Fast and accurate volumetric profilometry of thin film structures is an important problem in the electronic visual display industry. We propose using spectrophotometry with a limited number of working wavelengths to achieve high-speed control, together with an approach to selecting the optimal working wavelengths to reduce the thickness measurement error. A simple expression for error estimation is presented and tested using a Monte Carlo simulation. The experimental setup is designed to confirm the stability of film thickness determination using a limited number of wavelengths.
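
    The wavelength-selection idea can be prototyped with a Monte Carlo loop: simulate noisy reflectances at candidate wavelengths, invert them for thickness, and compare the scatter. The two-beam interference reflectance model, film index, and noise level below are assumptions, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(2)

def reflectance(d_nm, lam_nm, n_film=1.46):
    # Idealized two-beam interference model (an assumption); real films
    # require full thin-film matrix calculations.
    return 0.5 + 0.4 * np.cos(4.0 * np.pi * n_film * d_nm / lam_nm)

def mc_thickness_error(wavelengths, d_true=500.0, noise=0.005, trials=1000):
    """Monte Carlo scatter of grid-search thickness estimates for a
    given set of working wavelengths."""
    grid = np.linspace(400.0, 600.0, 2001)               # candidate thicknesses (nm)
    model = reflectance(grid[:, None], wavelengths[None, :])
    est = np.empty(trials)
    for t in range(trials):
        meas = reflectance(d_true, wavelengths) + rng.normal(0.0, noise, wavelengths.size)
        est[t] = grid[np.argmin(np.sum((model - meas)**2, axis=1))]
    return est.std()

for wl in (np.array([450.0, 550.0]), np.array([450.0, 500.0, 550.0, 600.0])):
    print(f"{wl.size} wavelengths -> sigma_d = {mc_thickness_error(wl):.3f} nm")
```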

  16. Improved error separation technique for on-machine optical lens measurement

    Science.gov (United States)

    Fu, Xingyu; Bing, Guo; Zhao, Qingliang; Rao, Zhimin; Cheng, Kai; Mulenga, Kabwe

    2016-04-01

    This paper describes an improved error separation technique (EST) for on-machine surface profile measurement which can be applied to optical lenses on precision and ultra-precision machine tools. With only one precise probe and a linear stage, improved EST not only reduces measurement costs, but also shortens the sampling interval, which implies that this method can be used to measure the profile of small-bore lenses. The improved EST with stitching method can be applied to measure the profile of high-height lenses as well. Since the improvement is simple, most of the traditional EST can be modified by this method. The theoretical analysis and experimental results in this paper show that the improved EST eliminates the slide error successfully and generates an accurate lens profile.

  17. Estimation of random errors in respiratory resistance and reactance measured by the forced oscillation technique.

    Science.gov (United States)

    Farré, R; Rotger, M; Navajas, D

    1997-03-01

    The forced oscillation technique (FOT) allows the measurement of respiratory resistance (Rrs) and reactance (Xrs) and their associated coherence (gamma2). To avoid unreliable data, it is usual to reject Rrs and Xrs measurements with a low gamma2. To this end, we developed theoretical equations for the variances and covariances of the pressure and flow auto- and cross-spectra used to compute Rrs and Xrs. Random errors of Rrs and Xrs were found to depend on the values of Rrs and Xrs, and to be proportional to ((1 - gamma2)/(2·N·gamma2))^(1/2). Reliable Rrs and Xrs data can be obtained in measurements with low gamma2 by enlarging the data recording (i.e. increasing N). Therefore, the error equations derived may be useful for extending the frequency band of the forced oscillation technique to frequencies lower than usual, characterized by low coherence. PMID:9073006
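
    The quoted error term directly answers a practical question: how much more data must be recorded at low coherence to keep the scatter fixed? A minimal sketch, assuming N counts the averaged data blocks and taking gamma^2 = 0.95 with N = 16 as an arbitrary reference point.

```python
import numpy as np

def relative_error(gamma2, n_blocks):
    """Scatter of Rrs/Xrs estimates, proportional to
    sqrt((1 - gamma^2) / (2 * N * gamma^2)) per the abstract."""
    return np.sqrt((1.0 - gamma2) / (2.0 * n_blocks * gamma2))

target = relative_error(0.95, 16)          # reference scatter (assumed case)
for g2 in (0.9, 0.7, 0.5):
    n_needed = int(np.ceil((1.0 - g2) / (2.0 * g2 * target**2)))
    print(f"gamma^2 = {g2:.1f}: need N >= {n_needed} blocks for the same scatter")
```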

  18. Test-Retest Reliability of the Adaptive Chemistry Assessment Survey for Teachers: Measurement Error and Alternatives to Correlation

    Science.gov (United States)

    Harshman, Jordan; Yezierski, Ellen

    2016-01-01

    Determining the error of measurement is a necessity for researchers engaged in bench chemistry, chemistry education research (CER), and a multitude of other fields. Discussions regarding what constitutes measurement error and how best to measure it have occurred, but critiques of traditional measures have yielded few alternatives.…

  19. Reliability for some bivariate beta distributions

    OpenAIRE

    Nadarajah Saralees

    2005-01-01

    In the area of stress-strength models there has been a large amount of work as regards estimation of the reliability R = Pr(X < Y) when X and Y follow a bivariate distribution with dependence between X and ...

  20. On colorings of bivariate random sequences

    Czech Academy of Sciences Publication Activity Database

    Matúš, František; Kupsa, Michal

    Piscataway: IEEE, 2010, s. 1272-1276. ISBN 978-1-4244-7892-7. [IEEE International Symposium on Information Theory 2010. Austin (US), 13.06.2010-18.06.2010] R&D Projects: GA AV ČR IAA100750603; GA AV ČR KJB100750901; GA ČR GA201/08/0539 Institutional research plan: CEZ:AV0Z10750506 Keywords : colorings * ergodic sequences * entropy rate * asymptotic equipartition property Subject RIV: BD - Theory of Information http://library.utia.cas.cz/separaty/2010/MTR/matus-on colorings of bivariate random sequences.pdf

  1. Effects of Spectral Error in Efficiency Measurements of GaInAs-Based Concentrator Solar Cells

    Energy Technology Data Exchange (ETDEWEB)

    Osterwald, C. R.; Wanlass, M. W.; Moriarty, T.; Steiner, M. A.; Emery, K. A.

    2014-03-01

    This technical report documents a particular error in efficiency measurements of triple-absorber concentrator solar cells caused by incorrect spectral irradiance -- specifically, one that occurs when the irradiance from unfiltered, pulsed xenon solar simulators into the GaInAs bottom subcell is too high. For cells designed so that the light-generated photocurrents in the three subcells are nearly equal, this condition can cause a large increase in the measured fill factor, which, in turn, causes a significant artificial increase in the efficiency. The error is readily apparent when the data under concentration are compared to measurements with correctly balanced photocurrents, and manifests itself as discontinuities in plots of fill factor and efficiency versus concentration ratio. In this work, we simulate the magnitudes and effects of this error with a device-level model of two concentrator cell designs, and demonstrate how a new Spectrolab, Inc., Model 460 Tunable-High Intensity Pulsed Solar Simulator (T-HIPSS) can mitigate the error.

  2. Measurement and simulation of clock errors from resource-constrained embedded systems

    International Nuclear Information System (INIS)

    Resource-constrained embedded systems such as wireless sensor networks are becoming increasingly sought-after in a range of critical sensing applications. Hardware for such systems is typically developed as a general tool, intended for research and flexibility. These systems often have unexpected limitations and sources of error when being implemented for specific applications. We investigate via measurement and simulation the output of the onboard clock of a Crossbow MICAz testbed, comprising a quartz oscillator accessed via a combination of hardware and software. We show that the clock output available to the user suffers a number of instabilities and errors. Using a simple software simulation of the system based on a series of nested loops, we identify the source of each component of the error, finding that there is a 7.5 × 10⁻⁶ probability that a given oscillation from the governing crystal will be miscounted, resulting in frequency jitter over a 60 µHz range.
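
    The miscount mechanism lends itself to a very small simulation: drop each crystal oscillation with the quoted probability and watch the apparent frequency wander. Only the 7.5 × 10⁻⁶ figure comes from the abstract; the nominal crystal frequency and run length below are assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)

f_nominal = 7_372_800        # Hz, assumed nominal crystal frequency
p_miss = 7.5e-6              # per-oscillation miscount probability (from abstract)
seconds = 1000

missed = rng.binomial(f_nominal, p_miss, size=seconds)   # miscounts in each second
f_apparent = f_nominal - missed
offset_ppm = (f_apparent - f_nominal) / f_nominal * 1e6
print(f"mean offset: {offset_ppm.mean():.2f} ppm, "
      f"second-to-second jitter: {offset_ppm.std() * 1e3:.2f} ppb")
```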

  3. Transient heat transfer measurements using thermochromic liquid crystal: lateral-conduction error

    International Nuclear Information System (INIS)

    Thermochromic liquid crystal (TLC) can be used to measure the surface temperature in transient heat transfer experiments. Knowing the time at which the TLC changes colour, hence knowing the surface temperature at that time, it is possible to calculate the heat transfer coefficient, h, and the analytical one-dimensional solution of Fourier's conduction equation for a semi-infinite wall is often used for this purpose. However, the 1D solution disregards lateral variations of the surface temperature (that is, those variations parallel to the surface), which can cause a bias, or lateral-conduction error, in the calculated value of h. This paper shows how the 1D analytical solution can be used to estimate, and to provide a correction for, the error. An approximate two-dimensional analysis (which could be readily extended to three dimensions) is used to calculate the error, and a 2D finite-difference solution of Fourier's equation is used to validate the method
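
    For reference, the 1D semi-infinite solution mentioned above can be inverted numerically for h once the TLC colour-change time is known: theta = 1 - exp(beta^2)·erfc(beta) with beta = h·sqrt(t/(rho·c·k)). A minimal sketch; the wall properties and temperatures below are assumed illustration values, not the paper's experiment.

```python
import numpy as np
from scipy.optimize import brentq
from scipy.special import erfcx   # erfcx(x) = exp(x**2)*erfc(x), overflow-safe

def h_from_tlc(t_event, T_surface, T_initial, T_gas, rho_c_k):
    """Invert the 1D semi-infinite-wall solution for h, given the time
    t_event at which the TLC shows the surface reaching T_surface."""
    theta = (T_surface - T_initial) / (T_gas - T_initial)
    beta = brentq(lambda b: 1.0 - erfcx(b) - theta, 1e-9, 1e3)
    return beta * np.sqrt(rho_c_k / t_event)

# Assumed values: perspex-like wall (rho*c*k), colour change at 12 s.
h = h_from_tlc(t_event=12.0, T_surface=40.0, T_initial=20.0,
               T_gas=80.0, rho_c_k=1190 * 1470 * 0.19)
print(f"h = {h:.1f} W/(m^2 K)")
```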

  4. A correction for emittance-measurement errors caused by finite slit and collector widths

    International Nuclear Information System (INIS)

    One method of measuring the transverse phase-space distribution of a particle beam is to intercept the beam with a slit and measure the angular distribution of the beam passing through the slit using a parallel-strip collector. Together the finite widths of the slit and each collector strip form an acceptance window in phase space whose size and orientation are determined by the slit width, the strip width, and the slit-collector distance. If a beam is measured using a detector with a finite-size phase-space window, the measured distribution is different from the true distribution. The calculated emittance is larger than the true emittance, and the error depends both on the dimensions of the detector and on the Courant-Snyder parameters of the beam. Specifically, the error gets larger as the beam drifts farther from a waist. This can be important for measurements made on high-brightness beams, since power density considerations require that the beam be intercepted far from a waist. In this paper we calculate the measurement error and we show how the calculated emittance and Courant-Snyder parameters can be corrected for the effects of finite sizes of slit and collector. (Author) 5 figs., 3 refs

  5. The Total Errors In Measuring Epeak for Gamma-Ray Bursts

    CERN Document Server

    Collazzi, Andrew C; Moree, Jeremy A

    2011-01-01

    While Epeak has been extensively used in the past, for example with luminosity indicators, it has not been thoroughly examined for possible sources of scatter. In the literature, the reported error bars for Epeak are the simple Poisson statistical errors. Additional uncertainties arise due to the choices made by analysts in determining Epeak (e.g., the start and stop times of integration), imperfect knowledge of the response of the detector, different energy ranges for various detectors, and differences in models used to fit the spectra. We examine the size of these individual sources of scatter by comparing many independent pairs of published Epeak values for the same bursts. Indeed, the observed scatter in multiple reports of the same burst (often with the same data) is greatly larger than the published statistical error bars. We measure that the one-sigma uncertainty associated with the analyst's choices is 28%, i.e., 0.12 in Log10(Epeak), with the resultant errors always being present. The errors associat...

  6. The technical method of geometric error measurement for multi-axis NC machine tool by laser tracker

    International Nuclear Information System (INIS)

    The machining accuracy is an important index to evaluate machine tool characteristics, and error measurement and compensation are effective methods to improve machining accuracy at low cost. Error measurement is a prerequisite and basis for error compensation, so how to quickly and accurately measure the machine error is particularly important. To quickly and accurately detect the geometric error of a multi-axis NC machine tool, a new method with a laser tracker based on the sequential multi-lateration measurement principle is proposed in the paper. The laser tracker is used to measure the same motion trajectory of the linear axis and rotary axis of the machine tool at different base stations. Based on the GPS principle, the space coordinates of each measuring point can be determined from a large amount of measured data. Then, according to the error model of the linear axis and rotary axis, each error can be separated. In the paper, the principles of sequential multi-lateration measurement are analyzed in depth. By establishing the mathematical model of sequential multi-lateration measurement for the linear axis and rotary axis, the algorithms for linear axis measurement and rotary axis measurement are derived and proved feasible by simulations, respectively. Meanwhile, the error separation algorithm is also deduced. The results of experiments show that this method can achieve quick and accurate detection of errors in multi-axis NC machine tools. (paper)
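
    The core of the multi-lateration step can be sketched as a small least-squares problem: recover a point from range measurements taken at several base stations. Station positions, the test point, and the noise level are assumptions; the paper's full error-separation model is not reproduced.

```python
import numpy as np
from scipy.optimize import least_squares

stations = np.array([[0.0, 0.0, 0.0], [2.0, 0.0, 0.0],
                     [0.0, 2.0, 0.0], [0.0, 0.0, 1.5]])   # assumed positions (m)
p_true = np.array([0.8, 1.1, 0.4])

rng = np.random.default_rng(4)
ranges = np.linalg.norm(stations - p_true, axis=1) + rng.normal(0.0, 1e-5, 4)

# Nonlinear least squares on the range residuals (4 ranges, 3 unknowns).
residual = lambda p: np.linalg.norm(stations - p, axis=1) - ranges
p_est = least_squares(residual, x0=np.array([1.0, 1.0, 1.0])).x
print("recovered point:", np.round(p_est, 6))
```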

  7. A bivariate luminosity model for GRB pulses and flares

    International Nuclear Information System (INIS)

    We have fitted the complete Swift BAT and XRT light curves of 88 GRBs for which we have a redshift with a total of 331 pulses. For each GRB we also include an afterglow component to fit the plateau phase and the late decay seen in the XRT data. The combination of pulses and afterglow model all the emission detected, prompt plus afterglow, including late X-ray flares detected only in the XRT. Each pulse is described by a simple physical model which includes the spectrum at peak and the temporal characteristics of the pulse. We find that the pulse peak luminosity is correlated with both the mean photon energy in the bolometric band of the pulse spectrum at the peak, referred to as Ezbol, and the temporal parameter Tzf which is a measure of the pulse width. An empirical bivariate luminosity model set up with these parameters provides a good fit to the pulse luminosity. The analysis indicates that prompt pulses and X-ray flares are one and the same and arise from the same physical process and this physical process is responsible for the bivariate nature of the luminosity.

  8. Long-term continuous acoustical suspended-sediment measurements in rivers - Theory, application, bias, and error

    Science.gov (United States)

    Topping, David J.; Wright, Scott A.

    2016-01-01

    these sites. In addition, detailed, step-by-step procedures are presented for the general river application of the method.Quantification of errors in sediment-transport measurements made using this acoustical method is essential if the measurements are to be used effectively, for example, to evaluate uncertainty in long-term sediment loads and budgets. Several types of error analyses are presented to evaluate (1) the stability of acoustical calibrations over time, (2) the effect of neglecting backscatter from silt and clay, (3) the bias arising from changes in sand grain size, (4) the time-varying error in the method, and (5) the influence of nonrandom processes on error. Results indicate that (1) acoustical calibrations can be stable for long durations (multiple years), (2) neglecting backscatter from silt and clay can result in unacceptably high bias, (3) two frequencies are likely required to obtain sand-concentration measurements that are unbiased by changes in grain size, depending on site-specific conditions and acoustic frequency, (4) relative errors in silt-and-clay- and sand-concentration measurements decrease substantially as concentration increases, and (5) nonrandom errors may arise from slow changes in the spatial structure of suspended sediment that affect the relations between concentration in the acoustically ensonified part of the cross section and concentration in the entire river cross section. Taken together, the error analyses indicate that the two-frequency method produces unbiased measurements of suspended-silt-and-clay and sand concentration, with errors that are similar to, or larger than, those associated with conventional sampling methods.

  9. An Assessment of Errors and Their Reduction in Terrestrial Laser Scanner Measurements in Marmorean Surfaces

    Science.gov (United States)

    Garcia-Fernandez, Jorge

    2016-03-01

    The need for accurate documentation for the preservation of cultural heritage has prompted the use of the terrestrial laser scanner (TLS) in this discipline. Its study in the heritage context has been focused on opaque surfaces with lambertian reflectance, while translucent and anisotropic materials remain a major challenge. The use of TLS for the mentioned materials is subject to significant distortion in the measurements due to the optical properties of these materials under laser stimulation. The distortion makes range measurements unsuitable for digital modelling in a wide range of cases. The purpose of this paper is to illustrate and discuss the deficiencies and their resulting errors in the documentation of marmorean surfaces using TLS based on time-of-flight and phase-shift. Also proposed in this paper is the reduction of the error in depth measurement by adjustment of the incident laser beam. The analysis is conducted by controlled experiments.

  10. Measurement error of spiral CT volumetry: influence of low dose CT technique

    International Nuclear Information System (INIS)

    To examine the possible measurement errors of lung nodule volumetry at various scan parameters by using a small nodule phantom. We obtained images of a nodule phantom using a spiral CT scanner. The nodule phantom was made of paraffin and urethane and its real volume was known. For the CT scanning experiments, we used three different values for both the pitch of the table feed, i.e. 1:1, 1:1.5 and 1:2, and the tube current, i.e. 40 mA, 80 mA and 120 mA. All of the images acquired through CT scanning were reconstructed three-dimensionally and measured with volumetry software. We tested the correlation between the true volume and the measured volume for each set of parameters using linear regression analysis. For the pitches of table feed of 1:1, 1:1.5 and 1:2, the mean relative errors were 23.3%, 22.8% and 22.6%, respectively. There were perfect correlations among the three sets of measurements (Pearson's coefficient = 1.000, p < 0.001). For the tube currents of 40 mA, 80 mA and 120 mA, the mean relative errors were 22.6%, 22.6% and 22.9%, respectively. There were perfect correlations among them (Pearson's coefficient = 1.000, p < 0.001). In the measurement of the volume of the lung nodule using spiral CT, the measurement error did not increase in spite of the tube current being decreased or the pitch of table feed being increased.

  11. Cost-Sensitive Feature Selection of Numeric Data with Measurement Errors

    Directory of Open Access Journals (Sweden)

    Hong Zhao

    2013-01-01

    Full Text Available Feature selection is an essential process in data mining applications since it reduces a model’s complexity. However, feature selection with various types of costs is still a new research topic. In this paper, we study the cost-sensitive feature selection problem of numeric data with measurement errors. The major contributions of this paper are fourfold. First, a new data model is built to address test costs and misclassification costs as well as error boundaries. It is distinguished from the existing models mainly by the error boundaries. Second, a covering-based rough set model with normal-distribution measurement errors is constructed. With this model, coverings are constructed from data rather than assigned by users. Third, a new cost-sensitive feature selection problem is defined on this model. It is more realistic than the existing feature selection problems. Fourth, both backtracking and heuristic algorithms are proposed to deal with the new problem. Experimental results show the efficiency of the pruning techniques for the backtracking algorithm and the effectiveness of the heuristic algorithm. This study is a step toward realistic applications of cost-sensitive learning.

  12. Influence of sky radiance measurement errors on inversion-retrieved aerosol properties

    Energy Technology Data Exchange (ETDEWEB)

    Torres, B.; Toledano, C.; Cachorro, V. E.; Bennouna, Y. S.; Fuertes, D.; Gonzalez, R.; Frutos, A. M. de [Atmospheric Optics Group (GOA), University of Valladolid, Valladolid (Spain); Berjon, A. J. [Izana Atmospheric Research Center, Meteorological State Agency of Spain (AEMET), Sta. Cruz de Tenerife (Spain); Dubovik, O.; Goloub, P.; Podvin, T.; Blarel, L. [Laboratory of Atmospheric Optics, Universite Lille 1, Villeneuve d'Ascq (France)

    2013-05-10

    Remote sensing of the atmospheric aerosol is a well-established technique that is currently used for routine monitoring of this atmospheric component, both from the ground and from satellites. The AERONET program, initiated in the 1990s, is the most extended network, and the data provided are currently used by a wide community of users for aerosol characterization, satellite and model validation, and synergetic use with other instrumentation (lidar, in-situ, etc.). Aerosol properties are derived within the network from measurements made by ground-based Sun-sky scanning radiometers. Sky radiances are acquired in two geometries: almucantar and principal plane. Discrepancies in the products obtained following both geometries have been observed, and the main aim of this work is to determine whether they could be justified by measurement errors. Three systematic errors have been analyzed in order to quantify the effects on the inversion-derived aerosol properties: calibration, pointing accuracy and finite field of view. Simulations have shown that typical uncertainty in the analyzed quantities (5% in calibration, 0.2° in pointing and 1.2° field of view) leads to errors in the retrieved parameters that vary depending on the aerosol type and geometry. While calibration and pointing errors have a relevant impact on the products, the finite field of view does not produce notable differences.

  13. SANG-a kernel density estimator incorporating information about the measurement error

    Science.gov (United States)

    Hayes, Robert

    A novel technique is evaluated for analyzing nominally large data sets in which each entry has its own measurement error. This work begins with a review of modern analytical methodologies such as histogramming data, ANOVA, regression (weighted and unweighted) along with various error propagation and estimation techniques. It is shown that by assuming the errors obey a functional distribution (such as normal or Poisson), a superposition of the assumed forms then provides the most comprehensive and informative graphical depiction of the data set's statistical information. The resultant approach is evaluated only for normally distributed errors, so that the method is effectively a Superposition Analysis of Normalized Gaussians (SANG). SANG is shown to be easily calculated and highly informative in a single graph from what would otherwise require multiple analyses and figures to accomplish the same result. The work is demonstrated using historical radiochemistry measurements from a transuranic waste geological repository's environmental monitoring program. This work was paid for under NRC-HQ-84-14-G-0059.
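
    The superposition itself is a one-liner: each measurement contributes a Gaussian centred on its value with its own reported standard deviation, and the curves are averaged. A minimal sketch with made-up data; function and variable names are illustrative, not from the paper.

```python
import numpy as np

def sang_density(x_grid, values, sigmas):
    """Average of per-measurement normalized Gaussians, each with its
    own measurement error (standard deviation)."""
    x = x_grid[:, None]
    g = np.exp(-0.5 * ((x - values) / sigmas) ** 2) / (sigmas * np.sqrt(2.0 * np.pi))
    return g.mean(axis=1)

# Toy data: five measurements with individual uncertainties.
values = np.array([1.2, 1.5, 1.4, 2.1, 1.9])
sigmas = np.array([0.10, 0.40, 0.20, 0.30, 0.15])
x = np.linspace(0.0, 3.0, 301)
density = sang_density(x, values, sigmas)
print("integrates to ~1:", round(float(density.sum() * (x[1] - x[0])), 3))
```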

  14. Influence of sky radiance measurement errors on inversion-retrieved aerosol properties

    International Nuclear Information System (INIS)

    Remote sensing of the atmospheric aerosol is a well-established technique that is currently used for routine monitoring of this atmospheric component, both from the ground and from satellites. The AERONET program, initiated in the 1990s, is the most extended network, and the data provided are currently used by a wide community of users for aerosol characterization, satellite and model validation, and synergetic use with other instrumentation (lidar, in-situ, etc.). Aerosol properties are derived within the network from measurements made by ground-based Sun-sky scanning radiometers. Sky radiances are acquired in two geometries: almucantar and principal plane. Discrepancies in the products obtained following both geometries have been observed, and the main aim of this work is to determine whether they could be justified by measurement errors. Three systematic errors have been analyzed in order to quantify the effects on the inversion-derived aerosol properties: calibration, pointing accuracy and finite field of view. Simulations have shown that typical uncertainty in the analyzed quantities (5% in calibration, 0.2° in pointing and 1.2° field of view) leads to errors in the retrieved parameters that vary depending on the aerosol type and geometry. While calibration and pointing errors have a relevant impact on the products, the finite field of view does not produce notable differences.

  15. Uncertainty in Power Law Analysis: Influences of Sample Size, Measurement Error, and Analysis Methods

    Science.gov (United States)

    Hui, D.; Luo, Y.; Jackson, R. B.

    2005-12-01

    A power function, Y = Y0·M^β, can be used to describe the relationship of physiological variables with body size over a wide range of scales, typically many orders of magnitude. One of the key issues in the renewed power law debate is whether the allometric scaling exponent β equals 3/4 or 2/3. The analysis can be markedly affected by sample size, measurement error, and analysis methods, but these effects have not been explored systematically. We investigated the influences of these three factors based on a data set of 626 pairs of basal metabolic rate and mass in mammals with the calculated β = 0.711. The influence of sampling error was tested by re-sampling with different sample sizes using a Monte Carlo approach. Results showed that the estimated parameter b varied considerably from sample to sample. For example, when the sample size was n = 63, b varied from 0.582 to 0.776. Even though the original data set did not support either β = 3/4 or β = 2/3, we found that 39.0% of the samples supported β = 2/3 and 35.4% of the samples supported β = 3/4. The influence of measurement error on parameter estimation was also tested using Bayesian theory. Virtual data sets were created using the mass in the above-mentioned data set, with given parameters α and β (β = 2/3 or β = 3/4) and a certain measurement error in basal metabolic rate and/or mass. Results showed that as measurement error increased, more estimated values of b were found to be significantly different from the parameter β. When the measurement error (i.e., standard deviation) was 20% and 40% of the measured mass and basal metabolic rate, 15.4% and 14.6% of the virtual data sets were found to be significantly different from the parameter β = 3/4 and β = 2/3, respectively. The influence of different analysis methods on parameter estimation was also demonstrated using the original data set, and the pros and cons of these methods were further discussed. We urge caution in interpreting power law analyses, especially from a small data sample, and
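
    The re-sampling experiment is straightforward to mimic: draw subsamples of size 63 and refit the log-log slope. The synthetic data below merely stand in for the 626-pair data set (the slope 0.711 is taken from the abstract; the mass range and scatter are assumed), so the numbers will not match the paper's.

```python
import numpy as np

rng = np.random.default_rng(5)

n_full, beta_true = 626, 0.711
log_mass = rng.uniform(0.0, 5.0, n_full)                 # ~5 decades of mass (assumed)
log_rate = beta_true * log_mass + rng.normal(0.0, 0.2, n_full)

def fitted_slope(idx):
    # Ordinary least squares in log-log space; slope estimates beta.
    return np.polyfit(log_mass[idx], log_rate[idx], 1)[0]

slopes = np.array([fitted_slope(rng.integers(0, n_full, size=63))
                   for _ in range(5000)])
print(f"n = 63 subsamples: slope ranges {slopes.min():.3f} to {slopes.max():.3f}")
```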

  16. A new family of bivariate discrete distributions on Z^2

    OpenAIRE

    Chesneau, Christophe; Kachour, Maher

    2015-01-01

    In this paper we introduce a new family of bivariate discrete distributions on Z^2, called the Rademacher(α1, α2)−N2 class. Its main feature is to generate bivariate random variables with possible negative or positive values for the covariance. This new family can also be considered as an extension (on Z^2) of some standard bivariate discrete (non-negative valued) distributions.

  17. Bivariate Count Data Regression Using Series Expansions: With Applications

    OpenAIRE

    A. Colin Cameron; Per Johansson

    2004-01-01

    Most research on count data regression models, i.e. models where the dependent variable takes only non-negative integer or count values, has focused on the univariate case. Very little attention has been given to joint modeling of two or more counts. We propose parametric regression models for bivariate counts based on squared polynomial expansions around a baseline density. The models are more flexible than the current leading bivariate count model, the bivariate Poisson. The mode...

  18. Estimation methods with ordered exposure subject to measurement error and missingness in semi-ecological design

    Directory of Open Access Journals (Sweden)

    Kim Hyang-Mi

    2012-09-01

    Full Text Available Abstract Background In epidemiological studies, it is often not possible to measure accurately the exposures of participants, even if their response variable can be measured without error. When there are several groups of subjects, occupational epidemiologists employ a group-based strategy (GBS) for exposure assessment to reduce bias due to measurement errors: individuals of a group/job within the study sample are commonly assigned the sample mean of exposure measurements from their group in evaluating the effect of exposure on the response. Therefore, exposure is estimated on an ecological level while health outcomes are ascertained for each subject. Such a study design leads to negligible bias in risk estimates when group means are estimated from ‘large’ samples. However, in many cases, only a small number of observations are available to estimate the group means, and this causes bias in the observed exposure-disease association. Also, the analysis in a semi-ecological design may involve exposure data with the majority missing and the rest observed with measurement errors, and complete response data collected with ascertainment. Methods In workplaces, groups/jobs are naturally ordered, and this could be incorporated in the estimation procedure by constrained estimation methods together with the expectation and maximization (EM) algorithms for regression models having measurement error and missing values. Four methods were compared by a simulation study: naive complete-case analysis, GBS, the constrained GBS (CGBS), and the constrained expectation and maximization (CEM) method. We illustrated the methods in the analysis of decline in lung function due to exposure to carbon black. Results Naive and GBS approaches were shown to be inadequate when the number of exposure measurements is too small to accurately estimate group means. The CEM method appears to be the best among them when within each exposure group at least a ’moderate’ number of individuals have their

  19. An experimental five-sensor system for measuring straightness and yawing motion errors of a linear slide

    International Nuclear Information System (INIS)

    In this study, a measurement system consisting of five distance sensors is designed and built to determine, on-machine, the straightness and yawing motion errors of a linear slide. The procedure to implement the proposed error separation method [1] is first discussed. The method can separate the yawing motion error from the straightness motion without knowing the slide profile beforehand. The experimental set-up and the testing procedure are then described. To verify the capability of this measurement system, a series of computer simulations and measurement experiments are undertaken. A comparison of the results with those obtained by a laser interferometer is also made to evaluate the system performance. Two error sources, i.e. sensor gain error and sensor placement error, which cause measurement uncertainty, are also studied by computer simulations. Testing results confirm that the measurement system can be used to determine the straightness motion with accuracy down to 1 µm. Good reproducibility of the profile results is also obtained

  20. An analysis of temperature-induced errors for an ultrasound distance measuring system. M. S. Thesis

    Science.gov (United States)

    Wenger, David Paul

    1991-01-01

    The presentation of research is provided in the following five chapters. Chapter 2 presents the necessary background information and definitions for general work with ultrasound and acoustics. It also discusses the basis for errors in the slant range measurements. Chapter 3 presents a method of problem solution and an analysis of the sensitivity of the equations to slant range measurement errors. It also presents various methods by which the error in the slant range measurements can be reduced to improve overall measurement accuracy. Chapter 4 provides a description of a type of experiment used to test the analytical solution and provides a discussion of its results. Chapter 5 discusses the setup of a prototype collision avoidance system, discusses its accuracy, and demonstrates various methods of improving the accuracy along with the improvements' ramifications. Finally, Chapter 6 provides a summary of the work and a discussion of conclusions drawn from it. Additionally, suggestions for further research are made to improve upon what has been presented here.

  1. Error Correction Method for Wind Speed Measured with Doppler Wind LIDAR at Low Altitude

    Science.gov (United States)

    Liu, Bingyi; Feng, Changzhong; Liu, Zhishen

    2014-11-01

    For the purpose of obtaining global vertical wind profiles, the Atmospheric Dynamics Mission Aeolus of European Space Agency (ESA), carrying the first spaceborne Doppler lidar ALADIN (Atmospheric LAser Doppler INstrument), is going to be launched in 2015. DLR (German Aerospace Center) developed the A2D (ALADIN Airborne Demonstrator) for the prelaunch validation. A ground-based wind lidar for wind profile and wind field scanning measurement developed by Ocean University of China is going to be used for the ground-based validation after the launch of Aeolus. In order to provide validation data with higher accuracy, an error correction method is investigated to improve the accuracy of low altitude wind data measured with Doppler lidar based on iodine absorption filter. The error due to nonlinear wind sensitivity is corrected, and the method for merging atmospheric return signal is improved. The correction method is validated by synchronous wind measurements with lidar and radiosonde. The results show that the accuracy of wind data measured with Doppler lidar at low altitude can be improved by the proposed error correction method.

  2. Minimum-Energy Bivariate Wavelet Frame with Arbitrary Dilation Matrix

    Directory of Open Access Journals (Sweden)

    Fengjuan Zhu

    2013-01-01

    Full Text Available In order to characterize bivariate signals, minimum-energy bivariate wavelet frames with arbitrary dilation matrix are studied, based on the superiority of the minimum-energy frame and the significant properties of bivariate wavelets. Firstly, the concept of a minimum-energy bivariate wavelet frame is defined, and its equivalent characterizations and a necessary condition are presented. Secondly, based on the polyphase form of the symbol functions of the scaling function and wavelet function, two sufficient conditions and an explicit construction method are given. Finally, the decomposition algorithm, reconstruction algorithm, and numerical examples are designed.

  3. The asymptotic distribution of maxima in bivariate samples

    Science.gov (United States)

    Campbell, J. W.; Tsokos, C. P.

    1973-01-01

    The joint distribution (as n tends to infinity) of the maxima of a sample of n independent observations of a bivariate random variable (X,Y) is studied. A method is developed for deriving the asymptotic distribution of the maxima, assuming that X and Y possess asymptotic extreme-value distributions and that the probability element dF(x,y) can be expanded in a canonical series. Applied both to the bivariate normal distribution and to the bivariate gamma and compound correlated bivariate Poisson distributions, the method shows that maxima from all these distributions are asymptotically uncorrelated.

  4. Experimental validation of error in temperature measurements in thin walled ductile iron castings

    DEFF Research Database (Denmark)

    Pedersen, Karl Martin; Tiedje, Niels Skat

    2007-01-01

    thicknesses between 2 and 4.3 mm. The thermocouples were accurately placed at the same distance from the surface of the casting for different plate thicknesses. It is shown that when measuring the temperature in plates with thickness between 2 and 4.3 mm the measured temperature will be parallel shifted to a level about 20 °C lower than the actual temperature in the casting. Factors affecting the measurement error (oxide layer on the thermocouple wire, penetration into the ceramic tube and variation in placement of the thermocouple) are discussed. Finally, it is shown how a useful cooling curve may be obtained in...

  5. DISTANCE MEASURING MODELING AND ERROR ANALYSIS OF DUAL CCD VISION SYSTEM SIMULATING HUMAN EYES AND NECK

    Institute of Scientific and Technical Information of China (English)

    Wang Xuanyin; Xiao Baoping; Pan Feng

    2003-01-01

    A dual-CCD simulating human eyes and neck (DSHEN) vision system is put forward. Its structure and principle are introduced. The DSHEN vision system can perform movements simulating the human eyes and neck by means of four rotating joints, and realize precise object recognition and distance measurement in all orientations. The mathematical model of the DSHEN vision system is built, and its movement equation is solved. The coordinate error and measurement precision affected by the movement parameters are analyzed by means of an intersection measuring method. This provides a theoretical foundation for further research on automatic object recognition and precise target tracking.

  6. Performance measure of image and video quality assessment algorithms: subjective root-mean-square error

    Science.gov (United States)

    Nuutinen, Mikko; Virtanen, Toni; Häkkinen, Jukka

    2016-03-01

    Evaluating algorithms used to assess image and video quality requires performance measures. Traditional performance measures (e.g., Pearson's linear correlation coefficient, Spearman's rank-order correlation coefficient, and root-mean-square error) compare quality predictions of algorithms to subjective mean opinion scores (mean opinion score/differential mean opinion score). We propose a subjective root-mean-square error (SRMSE) performance measure for evaluating the accuracy of algorithms used to assess image and video quality. The SRMSE performance measure takes into account dispersion between observers. The other important property of the SRMSE performance measure is its measurement scale, which is calibrated to units of the number of average observers. The results of the SRMSE performance measure indicate the extent to which the algorithm can replace the subjective experiment (as the number of observers). Furthermore, we have presented the concept of target values, which define the performance level of the ideal algorithm. We have calculated the target values for all sample sets of the CID2013, CVD2014, and LIVE multiply distorted image quality databases. The target values and a MATLAB implementation of the SRMSE performance measure are available on the project page of this study.
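
    One plausible reading of a dispersion-aware RMSE (not necessarily the authors' exact formulation) is to express each prediction error in units of the per-image standard deviation of observer opinions before averaging; the sketch below implements that reading with toy scores.

```python
import numpy as np

def srmse(pred, mos, mos_std):
    """Prediction errors expressed in units of the per-image observer
    standard deviation, then RMS-averaged (an assumed formulation)."""
    z = (np.asarray(pred) - np.asarray(mos)) / np.asarray(mos_std)
    return float(np.sqrt(np.mean(z ** 2)))

# Toy scores: predictions within ~1 observer-sigma of the MOS on average.
print(srmse(pred=[3.1, 2.4, 4.0], mos=[3.0, 2.8, 4.2], mos_std=[0.5, 0.6, 0.4]))
```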

  7. Theory confronts experiment in the Casimir force measurements: Quantification of errors and precision

    International Nuclear Information System (INIS)

    We compare theory and experiment in the Casimir force measurement between gold surfaces performed with the atomic force microscope. Both random and systematic experimental errors are found, leading to a total absolute error equal to 8.5 pN at 95% confidence. In terms of the relative errors, an experimental precision of 1.75% is obtained at the shortest separation of 62 nm at the 95% confidence level (at 60% confidence an experimental precision of 1% is confirmed at the shortest separation). An independent determination of the accuracy of the theoretical calculations of the Casimir force and their application to the experimental configuration is carefully made. Special attention is paid to the sample-dependent variations of the optical tabulated data due to the presence of grains, the contribution of surface plasmons, and errors introduced by the use of the proximity force theorem. Nonmultiplicative and diffraction-type contributions to the surface roughness corrections are examined. The electric forces due to patch potentials resulting from the polycrystalline nature of the gold films are estimated. The finite size and thermal effects are found to be negligible. Theoretical accuracies of about 1.69% and 1.1% are found at separations of 62 nm and 200 nm, respectively. Within the limits of experimental and theoretical errors, very good agreement between experiment and theory is confirmed, characterized by a root-mean-square deviation of about 3.5 pN over the whole measurement range. The conclusion is made that the Casimir force is stable relative to variations of the sample-dependent optical and electric properties, which opens new opportunities to use the Casimir effect for diagnostic purposes.

  8. Testing and Estimating Shape-Constrained Nonparametric Density and Regression in the Presence of Measurement Error

    KAUST Repository

    Carroll, Raymond J.

    2011-03-01

    In many applications we can expect that, or are interested to know whether, a density function or a regression curve satisfies some specific shape constraint. For example, when the explanatory variable, X, represents the value taken by a treatment or dosage, the conditional mean of the response, Y, is often anticipated to be a monotone function of X. Indeed, if this regression mean is not monotone (in the appropriate direction), then the medical or commercial value of the treatment is likely to be significantly curtailed, at least for values of X that lie beyond the point at which monotonicity fails. In the case of a density, common shape constraints include log-concavity and unimodality. If we can correctly guess the shape of a curve, then nonparametric estimators can be improved by taking this information into account. Addressing such problems requires a method for testing the hypothesis that the curve of interest satisfies a shape constraint, and, if the conclusion of the test is positive, a technique for estimating the curve subject to the constraint. Nonparametric methodology for solving these problems already exists, but only in cases where the covariates are observed precisely. However, in many problems data can only be observed with measurement error, and the methods employed in the error-free case typically do not carry over to this error context. In this paper we develop a novel approach to hypothesis testing and function estimation under shape constraints, which is valid in the context of measurement errors. Our method is based on tilting an estimator of the density or the regression mean until it satisfies the shape constraint, and we take as our test statistic the distance through which it is tilted. Bootstrap methods are used to calibrate the test. The constrained curve estimators that we develop are also based on tilting, and in that context our work has points of contact with methodology in the error-free case.
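
    The tilting machinery for error-contaminated covariates is the paper's contribution and is not reproduced here; the sketch below only illustrates the generic test logic it builds on — the distance from the data to the constrained fit as the test statistic, calibrated by a residual bootstrap under the constrained fit — in the error-free setting, with isotonic regression as the constrained (monotone) estimator.

```python
# Error-free analogue of a shape-constraint test: statistic = distance to
# the best monotone fit; residual bootstrap under the constrained fit
# calibrates the test. Not the paper's tilting method.
import numpy as np
from sklearn.isotonic import IsotonicRegression

rng = np.random.default_rng(1)
x = np.sort(rng.uniform(0, 1, 200))
y = x + rng.normal(0, 0.1, 200)          # monotone truth plus noise

def mono_stat(x, y):
    fit = IsotonicRegression(out_of_bounds="clip").fit(x, y).predict(x)
    return np.mean((y - fit) ** 2), fit  # distance to the constrained fit

t_obs, fit0 = mono_stat(x, y)
resid = y - fit0
t_boot = [mono_stat(x, fit0 + rng.choice(resid, len(y)))[0]
          for _ in range(500)]           # simulate under the monotone null
p_value = np.mean(np.array(t_boot) >= t_obs)
print(f"T = {t_obs:.4f}, bootstrap p-value = {p_value:.3f}")
```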

  9. Measurement error analysis of the 3D four-wheel aligner

    Science.gov (United States)

    Zhao, Qiancheng; Yang, Tianlong; Huang, Dongzhao; Ding, Xun

    2013-10-01

    The positioning parameters of the four wheels have significant effects on the maneuverability, safety, and energy efficiency of automobiles. Addressing this issue, the error factors of the 3D four-wheel aligner, which arise in extracting image feature points, calibrating the internal and external parameters of cameras, calculating positional parameters, and measuring target pose, are analyzed respectively, based on an elaboration of the structure and measurement principle of the 3D four-wheel aligner, as well as the toe-in and camber of the four wheels, kingpin inclination and caster, and other major positional parameters. After that, technical solutions are proposed for reducing the above error factors, and on this basis a new type of aligner has been developed and marketed; it is highly regarded by customers because its technical indicators meet requirements well.

  10. Poverty traps and nonlinear income dynamics with measurement error and individual heterogeneity

    OpenAIRE

    Antman, Francisca; McKenzie, David J.

    2005-01-01

    Theories of poverty traps stand in sharp contrast to the view that anybody can make it through hard work and thrift. However, empirical detection of poverty traps is complicated by the lack of long panels, measurement error, and attrition. This paper shows how dynamic pseudo-panel methods can overcome these difficulties, allowing estimation of non-linear income dynamics and testing for the presence of poverty traps. The paper explicitly allows for individual heterogeneity in income dynamics t...

  11. Measurement errors in multifrequency bioelectrical impedance analyzers with and without impedance electrode mismatch.

    OpenAIRE

    Bogónez Franco, Francisco; Nescolarde Selva, Lexa Digna; Bragós Bardia, Ramon; Rosell Ferrer, Francisco Javier; Yandiola, Iñigo

    2009-01-01

    The purpose of this study is to compare measurement errors in two commercially available multi-frequency bioimpedance analyzers, a Xitron 4000B and an ImpediMed SFB7, including electrode impedance mismatch. The comparison was made using resistive electrical models and in ten human volunteers. We used three different electrical models simulating three different body segments: the right-side, leg and thorax. In the electrical models, we tested the effect of the capacitive coupling of the ...

  12. Optimal sparse volatility matrix estimation for high-dimensional Itô processes with measurement errors

    OpenAIRE

    Tao, Minjing; Wang, Yazhen; Zhou, Harrison H.

    2013-01-01

    Stochastic processes are often used to model complex scientific problems in fields ranging from biology and finance to engineering and physical science. This paper investigates rate-optimal estimation of the volatility matrix of a high-dimensional Itô process observed with measurement errors at discrete time points. The minimax rate of convergence is established for estimating sparse volatility matrices. By combining the multi-scale and threshold approaches we construct a volatility matri...
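
    As a toy illustration of the thresholding idea named in the abstract (the paper's multi-scale treatment of microstructure noise is omitted, and the threshold choice below is a heuristic, so this is not the paper's estimator):

```python
# Hard-threshold a realized covariance matrix computed from noisy
# high-frequency increments; small off-diagonal entries are zeroed.
import numpy as np

def hard_threshold(cov, lam):
    out = cov.copy()
    off = ~np.eye(cov.shape[0], dtype=bool)
    out[off & (np.abs(cov) < lam)] = 0.0   # keep diagonal, kill small entries
    return out

rng = np.random.default_rng(2)
p, n = 30, 390                              # dimension, observations
true_vol = 0.1 * np.eye(p)                  # sparse (diagonal) target
r = rng.multivariate_normal(np.zeros(p), true_vol / n, size=n)
r += rng.normal(0, 0.001, size=(n, p))      # additive measurement error
rc = r.T @ r                                # realized covariance
lam = 2.0 * np.median(np.abs(rc)) * np.sqrt(np.log(p))   # heuristic level
est = hard_threshold(rc, lam)
print(f"fraction of nonzero entries kept: {np.mean(est != 0):.3f}")
```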

  13. Tax evasion and measurement error: An econometric analysis of survey data linked with tax records

    OpenAIRE

    Paulus, Alari

    2015-01-01

    We use income survey data linked with tax records at the individual level for Estonia to estimate the determinants and extent of income tax compliance in a novel way. Unlike earlier studies attributing income discrepancies between such data sources either to tax evasion or survey measurement error, we model these processes jointly. Focussing on employment income, the key identifying assumption made is that people working in public sector cannot evade taxes. The results indicate a number of so...

  14. Height curves based on the bivariate Power-Normal and the bivariate Johnson’s System bounded distribution

    OpenAIRE

    Mønness, Erik Neslein

    2013-01-01

    Often, a forest stand is modeled with a diameter distribution and a height curve as more or less separate tasks. A bivariate height and diameter distribution yields a unified model of a forest stand. The conditional median height given the diameter is a possible height curve. Here the bivariate Johnson’s System bounded distribution and the bivariate power-normal distribution are evaluated and compared with a simple hyperbolic height curve. Evaluated by the deviance, the hyperbo...

  15. Measurements of Gun Tube Motion and Muzzle Pointing Error of Main Battle Tanks

    Directory of Open Access Journals (Sweden)

    Peter L. McCall

    2001-01-01

    Beginning in 1990, the US Army Aberdeen Test Center (ATC) began testing a prototype cannon mounted in a non-armored turret fitted to an M1A1 Abrams tank chassis. The cannon design incorporated a longer gun tube as a means to increase projectile velocity. A significant increase in projectile impact dispersion was measured early in the test program. Through investigative efforts, the cause of the error was linked to the increased dynamic bending or flexure of the longer tube observed while the vehicle was moving. Research and investigative work was conducted through a collaborative effort with the US Army Research Laboratory, Benet Laboratory, Project Manager – Tank Main Armament Systems, US Army Research and Engineering Center, and Cadillac Gage Textron Inc. New test methods, instrumentation, data analysis procedures, and stabilization control design resulted through this series of investigations into the dynamic tube flexure error source. Through this joint research, improvements in tank fire control design have been developed to improve delivery accuracy. This paper discusses the instrumentation implemented, methods applied, and analysis procedures used to characterize the tube flexure during dynamic tests of a main battle tank, and the relationship between gun pointing error and muzzle pointing error.

  16. Systematic Continuum Errors in the Lyman-Alpha Forest and The Measured Temperature-Density Relation

    CERN Document Server

    Lee, Khee-Gan

    2011-01-01

    Continuum fitting uncertainties are a major source of error in estimates of the temperature-density relation (usually parametrized as a power law, T ∝ Δ^(γ−1)) of the intergalactic medium (IGM) through the flux probability distribution function (PDF) of the Lyman-α forest. Using a simple order-of-magnitude calculation, we show that few-percent-level systematic errors in the placement of the quasar continuum, due to, e.g., a uniform low-absorption Gunn-Peterson component, could lead to errors in γ of order unity. This is quantified further using a simple semi-analytic model of the Lyα forest flux PDF. We find that under-(over-)estimates in the continuum level can lead to a lower (higher) measured value of γ. At current observational limits, continuum biases significantly increase the error in γ from σ_γ = 0.1 to σ_γ = 0.3 within our model. We argue that steps need to be taken to directly estimate the level of continuum bias in order to make recent cla...

  17. Error analysis for retrieval of Venus' IR surface emissivity from VIRTIS/VEX measurements

    Science.gov (United States)

    Kappel, David; Haus, Rainer; Arnold, Gabriele

    2015-08-01

    Venus' surface emissivity data in the infrared can serve to explore the planet's geology. The only global data with high spectral, spatial, and temporal resolution and coverage at present is supplied by nightside emission measurements acquired by the Visible and InfraRed Thermal Imaging Spectrometer VIRTIS-M-IR (1.0 - 5.1 μm) aboard ESA's Venus Express. A radiative transfer simulation and a retrieval algorithm can be used to determine surface emissivity in the nightside spectral transparency windows located at 1.02, 1.10, and 1.18 μm. To obtain satisfactory fits to measured spectra, the retrieval pipeline also determines auxiliary parameters describing cloud properties from a certain spectral range. But spectral information content is limited, and emissivity is difficult to retrieve due to strong interferences from other parameters. Based on a selection of representative synthetic VIRTIS-M-IR spectra in the range 1.0 - 2.3 μm, this paper investigates emissivity retrieval errors that can be caused by interferences of atmospheric and surface parameters, by measurement noise, and by a priori data, and which retrieval pipeline leads to minimal errors. Retrieval of emissivity from a single spectrum is shown to fail due to extremely large errors, although the fits to the reference spectra are very good. Neglecting geologic activity, it is suggested to apply a multi-spectrum retrieval technique to retrieve emissivity relative to an initial value as a parameter that is common to several measured spectra that cover the same surface bin. Retrieved emissivity maps of targets with limited extension (a few thousand km) are then additively renormalized to remove spatially large scale deviations from the true emissivity map that are due to spatially slowly varying interfering parameters. Corresponding multi-spectrum retrieval errors are estimated by a statistical scaling of the single-spectrum retrieval errors and are listed for 25 measurement repetitions. For the best of the

  18. An efficient algorithm for generating random number pairs drawn from a bivariate normal distribution

    Science.gov (United States)

    Campbell, C. W.

    1983-01-01

    An efficient algorithm for generating random number pairs from a bivariate normal distribution was developed. Any desired value of the two means, two standard deviations, and correlation coefficient can be selected. Theoretically the technique is exact, and in practice its accuracy is limited only by the quality of the uniform-distribution random number generator, inaccuracies in computer function evaluation, and arithmetic. A FORTRAN routine was written to check the algorithm and good accuracy was obtained. Some small errors in the correlation coefficient were observed to vary in a surprisingly regular manner. A simple model was developed which explained the qualitative aspects of the errors.
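
    The construction such routines typically rely on is the standard conditional decomposition of the bivariate normal; a minimal Python sketch of that construction (not the paper's FORTRAN routine):

```python
# Combine two independent standard normals so the pair has the requested
# means, standard deviations, and correlation coefficient.
import numpy as np

def bivariate_normal_pairs(n, mu1, mu2, sd1, sd2, rho, rng=None):
    rng = rng or np.random.default_rng()
    z1 = rng.standard_normal(n)
    z2 = rng.standard_normal(n)
    x = mu1 + sd1 * z1
    y = mu2 + sd2 * (rho * z1 + np.sqrt(1.0 - rho**2) * z2)
    return x, y

x, y = bivariate_normal_pairs(100_000, 0.0, 1.0, 1.0, 2.0, 0.7)
print(np.corrcoef(x, y)[0, 1])   # close to the requested 0.7
```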

  19. A new method to reduce truncation errors in partial spherical near-field measurements

    DEFF Research Database (Denmark)

    Cano-Facila, F J; Pivnenko, Sergey

    A new and effective method for reduction of truncation errors in partial spherical near-field (SNF) measurements is proposed. The method is useful when measuring electrically large antennas, where the measurement time with the classical SNF technique is prohibitively long and an acquisition over the whole spherical surface is not practical. Therefore, to reduce the data acquisition time, partial sphere measurement is usually made, taking samples over a portion of the spherical surface in the direction of the main beam. But in this case, the radiation pattern is not known outside the measured region. To verify the effectiveness of the method, several examples are presented using both simulated and measured truncated near-field data.

  20. Optics measurement algorithms and error analysis for the proton energy frontier

    CERN Document Server

    Langner, A

    2015-01-01

    Optics measurement algorithms have been improved in preparation for the commissioning of the LHC at higher energy, i.e., with an increased damage potential. Due to machine protection considerations, the higher energy sets tighter limits on the maximum excitation amplitude and the total beam charge, reducing the signal-to-noise ratio of optics measurements. Furthermore, the precision in 2012 (4 TeV) was insufficient to understand beam size measurements and determine interaction point (IP) β-functions (β). A new, more sophisticated algorithm has been developed which takes into account both the statistical and systematic errors involved in this measurement. This makes it possible to combine more beam position monitor measurements for deriving the optical parameters, and it is demonstrated to significantly improve the accuracy and precision. Measurements from the 2012 run have been reanalyzed which, due to the improved algorithms, result in a significantly higher precision of the derived optical parameters and decreased...

  1. Solving Bivariate Polynomial Systems on a GPU

    International Nuclear Information System (INIS)

    We present a CUDA implementation of dense multivariate polynomial arithmetic based on Fast Fourier Transforms over finite fields. Our core routine computes on the device (GPU) the subresultant chain of two polynomials with respect to a given variable. This subresultant chain is encoded by values on a FFT grid and is manipulated from the host (CPU) in higher-level procedures. We have realized a bivariate polynomial system solver supported by our GPU code. Our experimental results (including detailed profiling information and benchmarks against a serial polynomial system solver implementing the same algorithm) demonstrate that our strategy is well suited for GPU implementation and provides large speedup factors with respect to pure CPU code.
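
    The elimination idea behind resultant-based solving can be shown on the CPU with SymPy on a toy system; the paper's contribution is the FFT-based GPU evaluation of the subresultant chain, which this sketch does not reproduce.

```python
# Resultant-based elimination for a toy bivariate system.
from sympy import symbols, resultant, solve

x, y = symbols("x y")
f = x**2 + y**2 - 4          # toy system: circle ...
g = x * y - 1                # ... and hyperbola

rx = resultant(f, g, y)      # eliminate y -> univariate polynomial in x
solutions = []
for x0 in solve(rx, x):                  # x-coordinates of candidates
    for y0 in solve(g.subs(x, x0), y):   # back-substitute for y
        if f.subs({x: x0, y: y0}).simplify() == 0:
            solutions.append((x0, y0))
print(solutions)                         # the four intersection points
```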

  2. Measurement error analysis of three dimensional coordinates of tomatoes acquired using the binocular stereo vision

    Science.gov (United States)

    Xiang, Rong

    2014-09-01

    This study analyzes the measurement errors of the three-dimensional coordinates obtained by binocular stereo vision for tomatoes, based on three stereo matching methods (centroid-based matching, area-based matching, and combination matching), in order to improve the localization accuracy of the binocular stereo vision system of tomato harvesting robots. Centroid-based matching was realized through the matching of the feature points of the centroids of tomato regions. Area-based matching was realized based on the gray similarity between the two neighborhoods of the two pixels to be matched in the stereo images. Combination matching used the rough disparity acquired through centroid-based matching as the center of the dynamic disparity range employed in area-based matching. After stereo matching, the three-dimensional coordinates of the tomatoes were acquired using the triangle range-finding principle. Test results based on 225 stereo images of 3 tomatoes captured at distances from 300 to 1000 mm showed that the measurement errors of the x coordinates were small and can meet the needs of harvesting robots. However, the measurement biases of the y coordinates and depth values were large, and the measurement variation of the depth values was also large. Therefore, the measurement biases of the y coordinates and depth values, and the measurement variation of the depth values, should be corrected in future research.
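
    For rectified cameras the triangle range-finding step reduces to the standard disparity-to-depth relations; a minimal sketch with hypothetical calibration values (f, B, cx, cy are placeholders, not the system's calibration):

```python
# Parallel-stereo triangulation from pixel position and disparity.
def triangulate(u, v, d, f=800.0, B=60.0, cx=320.0, cy=240.0):
    """u, v: pixel in the left image; d: disparity in pixels (d > 0);
    f: focal length in pixels; B: baseline in mm."""
    Z = f * B / d              # depth; a 1-pixel disparity error shifts
    X = (u - cx) * Z / f       # Z by roughly Z**2 / (f * B), which is why
    Y = (v - cy) * Z / f       # depth variation grows with distance
    return X, Y, Z

print(triangulate(400.0, 260.0, 12.0))   # example values, all hypothetical
```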

  3. Precision Measurements of the Cluster Red Sequence using an Error Corrected Gaussian Mixture Model

    Energy Technology Data Exchange (ETDEWEB)

    Hao, Jiangang; /Fermilab /Michigan U.; Koester, Benjamin P.; /Chicago U.; Mckay, Timothy A.; /Michigan U.; Rykoff, Eli S.; /UC, Santa Barbara; Rozo, Eduardo; /Ohio State U.; Evrard, August; /Michigan U.; Annis, James; /Fermilab; Becker, Matthew; /Chicago U.; Busha, Michael; /KIPAC, Menlo Park /SLAC; Gerdes, David; /Michigan U.; Johnston, David E.; /Northwestern U. /Brookhaven

    2009-07-01

    The red sequence is an important feature of galaxy clusters and plays a crucial role in optical cluster detection. Measurements of the slope and scatter of the red sequence are affected both by the selection of red sequence galaxies and by measurement errors. In this paper, we describe a new error-corrected Gaussian Mixture Model for red sequence galaxy identification. Using this technique, we can remove the effects of measurement error and extract unbiased information about the intrinsic properties of the red sequence. We use this method to select red sequence galaxies in each of the 13,823 clusters in the maxBCG catalog, and measure the red sequence ridgeline location and scatter of each. These measurements provide precise constraints on the variation of the average red galaxy populations in the observed frame with redshift. We find that the scatter of the red sequence ridgeline increases mildly with redshift, and that the slope decreases with redshift. We also observe that the slope does not strongly depend on cluster richness. Using similar methods, we show that this behavior is mirrored in a spectroscopic sample of field galaxies, further emphasizing that ridgeline properties are independent of environment. These precise measurements serve as an important observational check on simulations and mock galaxy catalogs. The observed trends in the slope and scatter of the red sequence ridgeline with redshift are clues to possible intrinsic evolution of the cluster red sequence itself. Most importantly, the methods presented in this work lay the groundwork for further improvements in optically-based cluster cosmology.

  4. Bivariate Rayleigh Distribution and its Properties

    Directory of Open Access Journals (Sweden)

    Ahmad Saeed Akhter

    2007-01-01

    Rayleigh (1880) observed that sea waves follow no law because of the complexities of the sea, but it has been seen that the probability distributions of wave heights, wave length, wave-induced pitch, and the wave and heave motions of ships follow the Rayleigh distribution. At present, several different quantities are in use for describing the state of the sea; for example, the mean height of the waves, the root mean square height, the height of the “significant waves” (the mean height of the highest one-third of all the waves), the maximum height over a given interval of time, and so on. At present, the shipbuilding industry knows less than any other construction industry about the service conditions under which it must operate. Only small efforts have been made to establish the stresses and motions and to incorporate the results of such studies into design. This is due to the complexity of the problem caused by the extensive variability of the sea and the corresponding response of ships. Nevertheless, it is possible to predict service conditions for ships in an orderly and relatively simple manner. Rayleigh (1880) derived the distribution from the amplitude of sound resulting from many independent sources; it is also connected with one or two dimensions and is sometimes referred to as the “random walk” frequency distribution. The Rayleigh distribution can be derived from the bivariate normal distribution when the variates are independent and random with equal variances. We try to construct a bivariate Rayleigh distribution with Rayleigh marginal distribution functions and discuss its fundamental properties.
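
    The derivation mentioned in the final sentences is standard and short; writing it out:

```latex
% X, Y independent N(0, \sigma^2); pass to polar coordinates and
% integrate out the angle to get the Rayleigh density of R = \sqrt{X^2+Y^2}:
f_{X,Y}(x,y) = \frac{1}{2\pi\sigma^2}\exp\!\Big(-\frac{x^2+y^2}{2\sigma^2}\Big)
\quad\Longrightarrow\quad
f_R(r) = \int_0^{2\pi} \frac{r}{2\pi\sigma^2}\, e^{-r^2/(2\sigma^2)}\,d\theta
       = \frac{r}{\sigma^2}\, e^{-r^2/(2\sigma^2)}, \qquad r \ge 0 .
```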

  5. Development of a simulation program to study error propagation in the reprocessing input accountancy measurements

    International Nuclear Information System (INIS)

    A physical model and a computer program have been developed to simulate all the measurement operations involved with the Isotopic Dilution Analysis technique currently applied in the Volume-Concentration method for the Reprocessing Input Accountancy, together with their errors or uncertainties. The simulator can readily solve a number of problems related to the measurement activities of the plant operator and the inspector. The program, written in Fortran 77, is based on a particular Monte Carlo technique named ''random sampling''; a full description of the code is reported.
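
    A generic random-sampling propagation of this kind is easy to sketch; the quantities and error magnitudes below are hypothetical placeholders, not the program's actual inputs.

```python
# Monte Carlo ("random sampling") propagation sketch: the material mass in
# the input tank is volume x concentration; each virtual experiment draws
# the measurements with their random and systematic errors, and the spread
# of the results gives the combined uncertainty.
import numpy as np

rng = np.random.default_rng(3)
N = 100_000
vol_true, conc_true = 5000.0, 2.5              # litres, g/l (hypothetical)
vol = vol_true * (1 + rng.normal(0, 0.002, N)  # systematic (one draw per run)
                    + rng.normal(0, 0.001, N)) # random part
conc = conc_true * (1 + rng.normal(0, 0.005, N))
mass = vol * conc                              # accounted material per run
print(f"mass = {mass.mean():.0f} g +/- {mass.std():.0f} g (1 sigma)")
```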

  6. An examination of errors in characteristic curve measurements of radiographic screen/film systems.

    Science.gov (United States)

    Wagner, L K; Barnes, G T; Bencomo, J A; Haus, A G

    1983-01-01

    The precision and accuracy achieved in the measurement of characteristic curves for radiographic screen/film systems is quantitatively investigated for three techniques: inverse square, kVp bootstrap, and step-wedge bootstrap. Precision of all techniques is generally better than +/- 1.5% while the agreement among all intensity-scale techniques is better than 2% over the useful exposure latitude. However, the accuracy of the sensitometry will depend on several factors, including linearity and energy dependence of the calibration instrument, that may introduce larger errors. Comparisons of time-scale and intensity-scale methods are made and a means of measuring reciprocity law failure is demonstrated. PMID:6877185

  7. Error rates of Belavkin weighted quantum measurements and a converse to Holevo's asymptotic optimality theorem

    CERN Document Server

    Tyson, Jon

    2009-01-01

    We compare several instances of pure-state Belavkin weighted square-root measurements from the standpoint of minimum-error discrimination of quantum states. The quadratically weighted measurement is proven superior to the so-called "pretty good measurement" (PGM) in a number of respects: (1) Holevo's quadratic weighting unconditionally outperforms the PGM in the case of two-state ensembles, with equality only in trivial cases. (2) A converse of a theorem of Holevo is proven, showing that a weighted measurement is asymptotically optimal only if it is quadratically weighted. Counterexamples for three states are constructed. The cube-weighted measurement of Ballester, Wehner, and Winter is also considered. Sufficient optimality conditions for various weights are compared.
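
    For reference, the pretty good measurement named in the abstract has the standard form below; the role of the weight w is stated as described in the abstract's weighted-measurement family.

```latex
% Pretty good measurement for the ensemble \{(p_i, \rho_i)\}:
M_i \;=\; S^{-1/2}\, p_i \rho_i\, S^{-1/2}, \qquad S = \sum_j p_j \rho_j ,
% and a weight w replaces p_i\rho_i by (p_i\rho_i)^w (with S redefined
% accordingly): w = 1 gives the PGM, w = 2 the quadratically weighted
% measurement discussed above.
```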

  8. Error motion compensating tracking interferometer for the position measurement of objects with rotational degree of freedom

    Science.gov (United States)

    Holler, Mirko; Raabe, Jörg

    2015-05-01

    The nonaxial interferometric position measurement of rotating objects can be performed by imaging the laser beam of the interferometer to a rotating mirror which can be a sphere or a cylinder. This, however, requires such rotating mirrors to be centered on the axis of rotation as a wobble would result in loss of the interference signal. We present a tracking-type interferometer that performs such measurement in a general case where the rotating mirror may wobble on the axis of rotation, or even where the axis of rotation may be translating in space. Aside from tracking, meaning to measure and follow the position of the rotating mirror, the interferometric measurement errors induced by the tracking motion of the interferometer itself are optically compensated, preserving nanometric measurement accuracy. As an example, we show the application of this interferometer in a scanning x-ray tomography instrument.

  9. The effect of clock, media, and station location errors on Doppler measurement accuracy

    Science.gov (United States)

    Miller, J. K.

    1993-01-01

    Doppler tracking by the Deep Space Network (DSN) is the primary radio metric data type used by navigation to determine the orbit of a spacecraft. The accuracy normally attributed to orbits determined exclusively with Doppler data is about 0.5 microradians in geocentric angle. Recently, the Doppler measurement system has evolved to a high degree of precision primarily because of tracking at X-band frequencies (7.2 to 8.5 GHz). However, the orbit determination system has not been able to fully utilize this improved measurement accuracy because of calibration errors associated with transmission media, the location of tracking stations on the Earth's surface, the orientation of the Earth as an observing platform, and timekeeping. With the introduction of Global Positioning System (GPS) data, it may be possible to remove a significant error associated with the troposphere. In this article, the effect of various calibration errors associated with transmission media, Earth platform parameters, and clocks are examined. With the introduction of GPS calibrations, it is predicted that a Doppler tracking accuracy of 0.05 microradians is achievable.

  10. Potential sources of errors in cation-exchange chromatographic measurement of plasma taurine.

    Science.gov (United States)

    Connolly, B M; Goodman, H O

    1980-03-01

    We examined the potential sources of error in automated cation-exchange chromatographic quantitation of plasma taurine, both in sample preparation and in the analysis. Principal sources of error include: use of serum instead of plasma, which produces gross overestimates; use of tripotassium ethylenediaminetetraacetate (EDTA) as anticoagulant in systems involving ninhydrin detection (a ninhydrin-positive contaminant of EDTA emerges coincident with taurine); contamination with platelets; and placing volumes exceeding 20 microL on the cartridge used in the Technicon TSM Amino Acid Analyzer. We arrived at a simple technique in which we use EDTA as anticoagulant, micropore filtration to produce platelet-free plasma, and o-phthalaldehyde as the detection reagent for the sensitivity required to measure accurately the low concentration of taurine in plasma. PMID:6767571

  11. Indirect measurement of machine tool motion axis error with single laser tracker

    Science.gov (United States)

    Wu, Zhaoyong; Li, Liangliang; Du, Zhengchun

    2015-02-01

    For high-precision machining, a convenient and accurate detection of motion error for machine tools is significant. Among common detection methods such as the ball-bar method, the laser tracker approach has received much more attention. As a high-accuracy measurement device, laser tracker is capable of long-distance and dynamic measurement, which increases much flexibility during the measurement process. However, existing methods are not so satisfactory in measurement cost, operability or applicability. Currently, a plausible method is called the single-station and time-sharing method, but it needs a large working area all around the machine tool, thus leaving itself not suitable for the machine tools surrounded by a protective cover. In this paper, a novel and convenient positioning error measurement approach by utilizing a single laser tracker is proposed, followed by two corresponding mathematical models including a laser-tracker base-point-coordinate model and a target-mirror-coordinates model. Also, an auxiliary apparatus for target mirrors to be placed on is designed, for which sensitivity analysis and Monte-Carlo simulation are conducted to optimize the dimension. Based on the method proposed, a real experiment using single API TRACKER 3 assisted by the auxiliary apparatus is carried out and a verification experiment using a traditional RENISHAW XL-80 interferometer is conducted under the same condition for comparison. Both results demonstrate a great increase in the Y-axis positioning error of machine tool. Theoretical and experimental studies together verify the feasibility of this method which has a more convenient operation and wider application in various kinds of machine tools.

  12. A nonparametric copula density estimator incorporating information on bivariate marginals

    OpenAIRE

    Cheng, Yu-Hsiang; Huang, Tzee-Ming

    2016-01-01

    We propose a copula density estimator that can include information on bivariate marginals when the information is available. We use B-splines for copula density approximation and include information on bivariate marginals via a penalty term. Our estimator satisfies the constraints for a copula density. Under mild conditions, the proposed estimator is consistent.

  13. Estimation of limiting availability for a stationary bivariate process

    OpenAIRE

    Abraham, B.; Balakrishna, N

    2000-01-01

    We estimate the limiting availability of a system when the operating and repair times form a stationary bivariate sequence. These estimators are shown to be consistent and asymptotically normal under certain conditions. In particular, we estimate the limiting availability for a bivariate exponential autoregressive process.
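
    For orientation, the quantity being estimated is the standard limiting availability; under stationarity and ergodicity of the operating times X_i and repair times Y_i it takes the ratio form below, with the natural plug-in estimator alongside.

```latex
A \;=\; \lim_{t \to \infty} A(t) \;=\; \frac{E[X]}{E[X] + E[Y]},
\qquad
\hat{A}_n \;=\; \frac{\bar{X}_n}{\bar{X}_n + \bar{Y}_n}.
```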

  14. Efficient Simulation of a Bivariate Exponential Conditionals Distribution

    OpenAIRE

    Yu, Yaming

    2009-01-01

    The bivariate distribution with exponential conditionals (BEC) was introduced by Arnold and Strauss [Bivariate distributions with exponential conditionals, J. Amer. Statist. Assoc. 83 (1988) 522–527]. This work presents a simple and fast algorithm for simulating random variates from this density.
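
    Because the conditionals of the BEC density are exponential by construction, a Gibbs sampler is immediate and makes a convenient baseline; the sketch below is that baseline, not the paper's faster direct algorithm.

```python
# BEC density f(x, y) ∝ exp(-(a x + b y + c x y)), a, b > 0, c >= 0, has
# conditionals X|Y=y ~ Exp(a + c y) and Y|X=x ~ Exp(b + c x).
import numpy as np

def bec_gibbs(a, b, c, n, burn=500, rng=None):
    rng = rng or np.random.default_rng()
    x, y = 1.0 / a, 1.0 / b           # arbitrary starting point
    out = np.empty((n, 2))
    for i in range(n + burn):
        x = rng.exponential(1.0 / (a + c * y))   # draw from X | Y = y
        y = rng.exponential(1.0 / (b + c * x))   # draw from Y | X = x
        if i >= burn:
            out[i - burn] = x, y
    return out

s = bec_gibbs(1.0, 2.0, 0.5, 20_000)
print(np.corrcoef(s.T)[0, 1])   # negative: c > 0 induces negative dependence
```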

  15. Climatologies from satellite measurements: the impact of orbital sampling on the standard error of the mean

    Directory of Open Access Journals (Sweden)

    M. Toohey

    2013-04-01

    Climatologies of atmospheric observations are often produced by binning measurements according to latitude and calculating zonal means. The uncertainty in these climatological means is characterised by the standard error of the mean (SEM). However, the usual estimator of the SEM, i.e., the sample standard deviation divided by the square root of the sample size, holds only for uncorrelated randomly sampled measurements. Measurements of the atmospheric state along a satellite orbit cannot always be considered as independent because (a) the time-space interval between two nearest observations is often smaller than the typical scale of variations in the atmospheric state, and (b) the regular time-space sampling pattern of a satellite instrument strongly deviates from random sampling. We have developed a numerical experiment where global chemical fields from a chemistry climate model are sampled according to real sampling patterns of satellite-borne instruments. As case studies, the model fields are sampled using sampling patterns of the Michelson Interferometer for Passive Atmospheric Sounding (MIPAS) and Atmospheric Chemistry Experiment Fourier-Transform Spectrometer (ACE-FTS) satellite instruments. Through an iterative subsampling technique, and by incorporating information on the random errors of the MIPAS and ACE-FTS measurements, we produce empirical estimates of the standard error of monthly mean zonal mean model O3 in 5° latitude bins. We find that generally the classic SEM estimator is a conservative estimate of the SEM, i.e., the empirical SEM is often less than or approximately equal to the classic estimate. Exceptions occur only when natural variability is larger than the random measurement error, and specifically in instances where the zonal sampling distribution shows non-uniformity with a similar zonal structure as variations in the sampled field, leading to maximum sensitivity to arbitrary phase shifts between the sample distribution and
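
    The core point — that s/√n misstates the standard error once samples are correlated — is easy to reproduce numerically. The AR(1) toy below is only illustrative: for this simple positively correlated case the classic SEM underestimates, whereas the structured orbital sampling studied in the paper tends to make it conservative.

```python
# Compare the classic SEM (s/sqrt(n)) with the empirical standard error of
# the mean obtained from many independent realizations of a correlated series.
import numpy as np

rng = np.random.default_rng(4)
n, reps, phi = 200, 2000, 0.8            # AR(1) with strong correlation
means, classic = [], []
for _ in range(reps):
    x = np.empty(n)
    x[0] = rng.standard_normal()
    for t in range(1, n):
        x[t] = phi * x[t - 1] + rng.standard_normal()   # correlated samples
    means.append(x.mean())
    classic.append(x.std(ddof=1) / np.sqrt(n))
print(f"classic SEM ~ {np.mean(classic):.3f}, empirical SEM = {np.std(means):.3f}")
```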

  16. Integration of rain gauge measurement errors with the overall rainfall uncertainty estimation using kriging methods

    Science.gov (United States)

    Cecinati, Francesca; Moreno Ródenas, Antonio Manuel; Rico-Ramirez, Miguel Angel; ten Veldhuis, Marie-claire; Han, Dawei

    2016-04-01

    In many research studies, rain gauges are used as the reference point measurement for rainfall, because they can reach very good accuracy, especially compared to radar or microwave links, and their use is very widespread. In some applications rain gauge uncertainty is assumed to be small enough to be neglected. This can be done when rain gauges are accurate and their data are correctly managed. Unfortunately, in many operational networks the importance of accurate rainfall data and of data quality control can be underestimated; budget and best-practice knowledge can be limiting factors in correct rain gauge network management. In these cases, the accuracy of rain gauges can drop drastically and the uncertainty associated with the measurements cannot be neglected. This work proposes an approach based on three different kriging methods to integrate rain gauge measurement errors into the overall rainfall uncertainty estimation. In particular, rainfall products of different complexity are derived through 1) block kriging on a single rain gauge, 2) ordinary kriging on a network of different rain gauges, and 3) kriging with external drift to integrate all the available rain gauges with radar rainfall information. The study area is the Eindhoven catchment, contributing to the river Dommel, in the southern part of the Netherlands. The area, 590 km2, is covered by high-quality rain gauge measurements by the Royal Netherlands Meteorological Institute (KNMI), which has one rain gauge inside the study area and six around it, and by lower-quality rain gauge measurements by the Dommel Water Board and by the Eindhoven Municipality (six rain gauges in total). In all cases, the integration of the rain gauge measurement error is accomplished by increasing the nugget of the semivariogram proportionally to the estimated error. Using different semivariogram models for the different networks allows for the separate characterisation of higher- and lower-quality rain gauges. For the kriging with
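
    The error-integration step described — raising the semivariogram nugget in proportion to the estimated gauge error — can be sketched as follows; the exponential variogram form and the proportionality constant k are assumptions for illustration, not the study's fitted model.

```python
# Inflate the nugget of a semivariogram model by the estimated
# measurement-error variance of a gauge network.
import numpy as np

def exp_semivariogram(h, nugget, sill, range_km):
    return nugget + sill * (1.0 - np.exp(-h / range_km))

def nugget_with_gauge_error(base_nugget, error_var, k=1.0):
    return base_nugget + k * error_var   # noisier gauges -> larger nugget

h = np.linspace(0.0, 50.0, 6)            # lag distances in km
print(exp_semivariogram(h, nugget_with_gauge_error(0.1, 0.3), 1.0, 15.0))
```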

  17. Decreasing range resolution of a SAR image to permit correction of motion measurement errors beyond the SAR range resolution

    Science.gov (United States)

    Doerry, Armin W.; Heard, Freddie E.; Cordaro, J. Thomas

    2010-07-20

    Motion measurement errors that extend beyond the range resolution of a synthetic aperture radar (SAR) can be corrected by effectively decreasing the range resolution of the SAR in order to permit measurement of the error. Range profiles can be compared across the slow-time dimension of the input data in order to estimate the error. Once the error has been determined, appropriate frequency and phase correction can be applied to the uncompressed input data, after which range and azimuth compression can be performed to produce a desired SAR image.

  18. Random error analysis of marine xCO2 measurements in a coastal upwelling region

    Science.gov (United States)

    Reimer, Janet J.; Cueva, Alejandro; Gaxiola-Castro, Gilberto; Lara-Lara, Ruben; Vargas, Rodrigo

    2016-04-01

    Quantifying and identifying measurement error is an ongoing challenge for carbon cycle science in constraining the measurable uncertainty related to the sources and sinks of CO2. One source of uncertainty in measurements derives from random errors (ε); thus, it is important to quantify their magnitude and their relationship to environmental variability in order to constrain local-to-global carbon budgets. We applied a paired-observation method to determine ε associated with marine xCO2 in a coastal upwelling zone of an eastern boundary current. Continuous data (3-h resolution) from a mooring platform during upwelling and non-upwelling seasons were analyzed off northern Baja California in the California Current. To test the rigor of the algorithm used to calculate ε, we propose a method for determining daily mean time series values that may be affected by ε. To do this we used either two or three variables in the function, but no significant differences in mean ε values were found, owing to the large variability in ε (-0.088 ± 27 ppm for two variables and -0.057 ± 28 ppm for three variables). Mean ε values were centered on zero, with low values of ε more frequent than greater values, following a double exponential distribution. Random error variability increased with higher magnitudes of xCO2, and, in general, ε variability increased in relation to upwelling conditions (up to ∼9% of measurements). Increased ε during upwelling suggests the importance of mesoscale processes for ε variability and could have a large influence on seasonal to annual CO2 estimates. This approach could be extended and modified to other marine carbonate system variables as part of data quality assurance/quality control and to quantify uncertainty (due to ε) from a wide variety of continuous oceanographic monitoring platforms.
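
    A paired-observation estimate of ε is commonly computed as the scaled difference of matched measurements; a sketch under that assumption (the paper's exact pairing criteria are not reproduced here, and all numbers are hypothetical):

```python
# Under matched conditions two observations share the true signal and
# differ only through random error, so (x1 - x2)/sqrt(2) has the scale of
# a single observation's random error.
import numpy as np

def random_error_from_pairs(x1, x2):
    """x1, x2: paired xCO2 observations taken under matched conditions."""
    return (x1 - x2) / np.sqrt(2.0)

rng = np.random.default_rng(5)
truth = 400 + 30 * np.sin(np.linspace(0, 6, 500))   # hypothetical signal
x1 = truth + rng.normal(0, 3.0, 500)
x2 = truth + rng.normal(0, 3.0, 500)
eps = random_error_from_pairs(x1, x2)
print(f"mean = {eps.mean():.2f}, sd = {eps.std():.2f} ppm")   # sd ~ 3 ppm
```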

  19. A Study on Measurement Error during Alternating Current Induced Voltage Tests on Large Transformers

    Institute of Scientific and Technical Information of China (English)

    WANG Xuan; LI Yun-ge; CAO Xiao-long; LIU Ying

    2006-01-01

    The large transformer is pivotal equipment in an electric power supply system; its partial discharge test and induced voltage withstand test are carried out at a frequency of about twice the working frequency. If the magnetizing inductance cannot compensate for the stray capacitance, the test sample turns into a capacitive load and a capacitive voltage rise appears in the testing circuit. For self-restoring insulation, a method is recommended in IEC 60-1 whereby an unapproved measuring system is calibrated against an approved system at a voltage not less than 50% of the rated testing voltage, and the result is then extrapolated linearly. It has been found that this method leads to a large error due to the capacitive rise if it is not correctly used during a withstand voltage test under certain testing conditions, especially in tests on high-voltage transformers of large capacity. Since the withstand voltage test is the most important means of examining the operational reliability of a transformer, and since it can be destructive to the insulation, precise measurement must be guaranteed. In this paper a factor, named the capacitive rise factor, is introduced to assess the rise. The voltage measurement error during calibration is determined by the parameters of the test sample and the testing facilities, as well as by the measuring point. Based on the theoretical analysis in this paper, a novel method is suggested and demonstrated to estimate the error by using the capacitive rise factor and other known parameters of the testing circuit.

  20. Reduction of truncation errors in partial spherical near-field antenna measurements

    DEFF Research Database (Denmark)

    Pivnenko, Sergey; Cano Facila, Francisco J.

    2010-01-01

    In this report, a new and effective method for reduction of truncation errors in partial spherical near-field (SNF) antenna measurements is proposed. This method is based on the Gerchberg-Papoulis algorithm used to extrapolate functions and it is able to extend the valid region of the far-field pattern calculated from a truncated SNF measurement up to the whole forward hemisphere. The method is useful when measuring electrically large antennas, where the measurement over the whole sphere is very time consuming. Therefore, a solution is considered to take samples over a portion of the spherical surface and then to apply the above method to reconstruct the far-field pattern. The work described in this report was carried out within the external stay of Francisco J. Cano at the Technical University of Denmark (DTU) from September 6th to December 18th in 2010.

  1. De Novo Correction of Mass Measurement Error in Low Resolution Tandem MS Spectra for Shotgun Proteomics

    Science.gov (United States)

    Egertson, Jarrett D.; Eng, Jimmy K.; Bereman, Michael S.; Hsieh, Edward J.; Merrihew, Gennifer E.; MacCoss, Michael J.

    2012-12-01

    We report an algorithm designed for the calibration of low resolution peptide mass spectra. Our algorithm is implemented in a program called FineTune, which corrects systematic mass measurement error in 1 min, with no input required besides the mass spectra themselves. The mass measurement accuracy for a set of spectra collected on an LTQ-Velos improved 20-fold from -0.1776 ± 0.0010 m/z to 0.0078 ± 0.0006 m/z after calibration (avg ± 95 % confidence interval). The precision in mass measurement was improved due to the correction of non-linear variation in mass measurement accuracy across the m/z range.
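
    A generic recalibration of this kind (not FineTune itself) fits a smooth model of the observed mass error versus m/z and subtracts it; the hypothetical numbers below reuse the pre-calibration offset quoted in the abstract, while the m/z dependence is invented for illustration.

```python
# Fit a low-order polynomial to observed mass errors and apply it as a
# correction; models the non-linear variation across the m/z range.
import numpy as np

rng = np.random.default_rng(6)
mz = rng.uniform(300, 1500, 2000)
err = -0.1776 + 5e-5 * (mz - 900) + rng.normal(0, 0.02, 2000)
coef = np.polyfit(mz, err, 2)            # smooth model of the bias
corrected = err - np.polyval(coef, mz)   # apply the correction
print(f"before: {err.mean():+.4f} m/z, after: {corrected.mean():+.4f} m/z")
```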

  2. Low-error and broadband microwave frequency measurement in a silicon chip

    CERN Document Server

    Pagani, Mattia; Zhang, Yanbing; Casas-Bedoya, Alvaro; Aalto, Timo; Harjanne, Mikko; Kapulainen, Markku; Eggleton, Benjamin J; Marpaung, David

    2015-01-01

    Instantaneous frequency measurement (IFM) of microwave signals is a fundamental functionality for applications ranging from electronic warfare to biomedical technology. Photonic techniques, and nonlinear optical interactions in particular, have the potential to broaden the frequency measurement range beyond the limits of electronic IFM systems. The key lies in efficiently harnessing optical mixing in an integrated nonlinear platform, with low losses. In this work, we exploit the low loss of a 35 cm long, thick silicon waveguide, to efficiently harness Kerr nonlinearity, and demonstrate the first on-chip four-wave mixing (FWM) based IFM system. We achieve a large 40 GHz measurement bandwidth and record-low measurement error. Finally, we discuss the future prospect of integrating the whole IFM system on a silicon chip to enable the first reconfigurable, broadband IFM receiver with low-latency.

  3. Home endotoxin exposure and wheeze in infants: correction for bias due to exposure measurement error.

    Science.gov (United States)

    Horick, Nora; Weller, Edie; Milton, Donald K; Gold, Diane R; Li, Ruifeng; Spiegelman, Donna

    2006-01-01

    Exposure to elevated levels of endotoxin in family-room dust was previously observed to be significantly associated with increased wheeze in the first year of life among a cohort of 404 children in the Boston, Massachusetts, metropolitan area. However, it is likely that family-room dust endotoxin was a surrogate for airborne endotoxin exposure. Therefore, a related substudy characterized the relationship between levels of airborne household endotoxin and the level of endotoxin present in house dust, in addition to identifying other significant predictors of airborne endotoxin in the home. We now reexamine the relationship between endotoxin exposure and wheeze under the assumption that the level of airborne endotoxin in the home is the exposure of interest and that the amount of endotoxin in household dust is a surrogate for this exposure. We applied a measurement error correction technique, using all available data to estimate the effect of endotoxin exposure in terms of airborne concentration and accounting for the measurement error induced by using house-dust endotoxin as a surrogate measure in the portion of the data in which airborne endotoxin could not be directly measured. After adjusting for confounding by lower respiratory infection status and race/ethnicity, endotoxin exposure was found to be significantly associated with a nearly 6-fold increase in prevalence of wheeze for a one interquartile range increase in airborne endotoxin (95% confidence interval, 1.2-26) among the 360 children in households with dust endotoxin levels between the 5th and 95th percentiles. PMID:16393671
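
    One standard correction of this type is regression calibration; the sketch below shows that generic approach on simulated data (the paper's estimator may differ in detail, and all variable names and magnitudes are hypothetical).

```python
# Regression calibration: in a validation subsample where both the dust
# surrogate W and the airborne exposure X are measured, model E[X|W]; in
# the main data, replace W by the imputed exposure before fitting the
# outcome model, removing (to first order) measurement-error attenuation.
import numpy as np

rng = np.random.default_rng(7)
n_val, n_main = 80, 320
X_val = rng.lognormal(0.0, 0.5, n_val)             # airborne (validation only)
W_val = X_val * rng.lognormal(0.0, 0.4, n_val)     # dust surrogate
a, b = np.polyfit(np.log(W_val), np.log(X_val), 1) # calibration: slope, intercept

W_main = rng.lognormal(0.0, 0.7, n_main)
X_hat = np.exp(b + a * np.log(W_main))             # imputed exposure
# X_hat would then enter the wheeze regression in place of W_main.
print(X_hat[:5])
```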

  4. Measurement errors in multifrequency bioelectrical impedance analyzers with and without impedance electrode mismatch

    International Nuclear Information System (INIS)

    The purpose of this study is to compare measurement errors in two commercially available multi-frequency bioimpedance analyzers, a Xitron 4000B and an ImpediMed SFB7, including electrode impedance mismatch. The comparison was made using resistive electrical models and in ten human volunteers. We used three different electrical models simulating three different body segments: the right-side, leg and thorax. In the electrical models, we tested the effect of the capacitive coupling of the patient to ground and the skin–electrode impedance mismatch. Results showed that both sets of equipment are optimized for right-side measurements and for moderate skin–electrode impedance mismatch. In right-side measurements with mismatch electrode, 4000B is more accurate than SFB7. When an electrode impedance mismatch was simulated, errors increased in both bioimpedance analyzers and the effect of the mismatch in the voltage detection leads was greater than that in current injection leads. For segments with lower impedance as the leg and thorax, SFB7 is more accurate than 4000B and also shows less dependence on electrode mismatch. In both devices, impedance measurements were not significantly affected (p > 0.05) by the capacitive coupling to ground

  5. On systematic error considering at measuring in the system of accounting for and control of nuclear materials

    International Nuclear Information System (INIS)

    The problems of metrological support in nuclear material accounting and control, which must assure the required measurement accuracy at all stages of the technological process, are discussed. The role of the so-called uneliminated component of systematic error, which characterizes measurement uncertainty, is estimated. It is shown that the uneliminated component of systematic error should be regarded as a deterministic value when estimating the errors of specific measuring means. The conclusion is made that treating the uneliminated component of the systematic error of a specific measuring device as a random value creates, in some cases connected with nuclear material safekeeping, the potential for an increase in the volume of unaccounted-for material if the true value of this component becomes known.

  6. Transient heat transfer measurements using thermochromic liquid crystal: lateral-conduction error

    Energy Technology Data Exchange (ETDEWEB)

    Kingsley-Rowe, James R. [Department of Mechanical Engineering, University of Bath, Bath BA2 7AY (United Kingdom); Lock, Gary D. [Department of Mechanical Engineering, University of Bath, Bath BA2 7AY (United Kingdom); Michael Owen, J. [Department of Mechanical Engineering, University of Bath, Bath BA2 7AY (United Kingdom)]. E-mail: j.m.owen@bath.ac.uk

    2005-04-01

    Thermochromic liquid crystal (TLC) can be used to measure the surface temperature in transient heat transfer experiments. Knowing the time at which the TLC changes colour, hence knowing the surface temperature at that time, it is possible to calculate the heat transfer coefficient, h, and the analytical one-dimensional solution of Fourier's conduction equation for a semi-infinite wall is often used for this purpose. However, the 1D solution disregards lateral variations of the surface temperature (that is, those variations parallel to the surface), which can cause a bias, or lateral-conduction error, in the calculated value of h. This paper shows how the 1D analytical solution can be used to estimate, and to provide a correction for, the error. An approximate two-dimensional analysis (which could be readily extended to three dimensions) is used to calculate the error, and a 2D finite-difference solution of Fourier's equation is used to validate the method.
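
    The 1D analytical solution referred to above gives the surface-temperature response of a semi-infinite wall to a step change in gas temperature, from which h is obtained by a root solve at the TLC colour-change time; a minimal sketch (the wall property value is a hypothetical placeholder):

```python
# 1D semi-infinite solution: (Ts-Ti)/(Tg-Ti) = 1 - exp(b^2)*erfc(b) with
# b = h*sqrt(t/(rho*c*k)); erfcx computes exp(b^2)*erfc(b) without overflow.
import numpy as np
from scipy.special import erfcx
from scipy.optimize import brentq

def h_from_tlc(t, Ts, Ti, Tg, rho_ck):
    """t: colour-change time (s); rho_ck: rho*c*k of the wall (SI units)."""
    theta = (Ts - Ti) / (Tg - Ti)
    def f(h):
        beta = h * np.sqrt(t / rho_ck)
        return 1.0 - erfcx(beta) - theta
    return brentq(f, 1e-3, 1e5)          # h in W m^-2 K^-1

# hypothetical perspex-like wall (rho*c*k ~ 2.6e6) and temperatures in C
print(h_from_tlc(t=12.0, Ts=40.0, Ti=20.0, Tg=80.0, rho_ck=2.6e6))
```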

  7. Errors in Measuring Transverse and Energy Jitter by Beam Position Monitors

    CERN Document Server

    Balandin, V; Golubeva, N

    2010-01-01

    The problem of errors arising due to finite BPM resolution in the difference orbit parameters, which are found as a least-squares fit to the BPM data, is one of the standard and important problems of accelerator physics. Even though, for the case of transversely uncoupled motion, the covariance matrix of reconstruction errors can be calculated "by hand", the direct use of the obtained solution as a tool for designing a "good measurement system" is not entirely straightforward. It seems that a better understanding of the nature of the problem is still desirable. We make a step in this direction by introducing dynamics into this problem, which at first glance seems to be static. We consider a virtual beam consisting of virtual particles obtained as a result of applying the reconstruction procedure to "all possible values" of the BPM reading errors. This beam propagates along the beam line according to the same rules as any real beam and has all beam dynamical characteristics, such as emittances, energy ...

  8. Measuring Coverage in MNCH: Total Survey Error and the Interpretation of Intervention Coverage Estimates from Household Surveys

    OpenAIRE

    Eisele, Thomas P; Rhoda, Dale A.; Cutts, Felicity T.; Keating, Joseph; Ren, Ruilin; Aluisio J. D. Barros; Arnold, Fred

    2013-01-01

    Nationally representative household surveys are increasingly relied upon to measure maternal, newborn, and child health (MNCH) intervention coverage at the population level in low- and middle-income countries. Surveys are the best tool we have for this purpose and are central to national and global decision making. However, all survey point estimates have a certain level of error (total survey error) comprising sampling and non-sampling error, both of which must be considered when interpretin...

  9. Measuring The Influence of TAsk COMplexity on Human Error Probability: An Empirical Evaluation

    Energy Technology Data Exchange (ETDEWEB)

    Podofillini, Luca; Dang, Vinh N. [Paul Scherrer Institute, Villigen (Switzerland)

    2013-04-15

    A key input for the assessment of Human Error Probabilities (HEPs) with Human Reliability Analysis (HRA) methods is the evaluation of the factors influencing the human performance (often referred to as Performance Shaping Factors, PSFs). In general, the definition of these factors and the supporting guidance are such that their evaluation involves significant subjectivity. This affects the repeatability of HRA results as well as the collection of HRA data for model construction and verification. In this context, the present paper considers the TAsk COMplexity (TACOM) measure, developed by one of the authors to quantify the complexity of procedure-guided tasks (by the operating crew of nuclear power plants in emergency situations), and evaluates its use to represent (objectively and quantitatively) task complexity issues relevant to HRA methods. In particular, TACOM scores are calculated for five Human Failure Events (HFEs) for which empirical evidence on the HEPs (albeit with large uncertainty) and influencing factors are available from the International HRA Empirical Study. The empirical evaluation has shown promising results. The TACOM score increases as the empirical HEP of the selected HFEs increases. Except for one case, TACOM scores are well distinguished if related to different difficulty categories (e.g., 'easy' vs. 'somewhat difficult'), while values corresponding to tasks within the same category are very close. Despite some important limitations related to the small number of HFEs investigated and the large uncertainty in their HEPs, this paper presents one of few attempts to empirically study the effect of a performance shaping factor on the human error probability. This type of study is important to enhance the empirical basis of HRA methods, to make sure that 1) the definitions of the PSFs cover the influences important for HRA (i.e., influencing the error probability), and 2) the quantitative relationships among PSFs and error

  10. Measuring The Influence of TAsk COMplexity on Human Error Probability: An Empirical Evaluation

    International Nuclear Information System (INIS)

    A key input for the assessment of Human Error Probabilities (HEPs) with Human Reliability Analysis (HRA) methods is the evaluation of the factors influencing the human performance (often referred to as Performance Shaping Factors, PSFs). In general, the definition of these factors and the supporting guidance are such that their evaluation involves significant subjectivity. This affects the repeatability of HRA results as well as the collection of HRA data for model construction and verification. In this context, the present paper considers the TAsk COMplexity (TACOM) measure, developed by one of the authors to quantify the complexity of procedure-guided tasks (by the operating crew of nuclear power plants in emergency situations), and evaluates its use to represent (objectively and quantitatively) task complexity issues relevant to HRA methods. In particular, TACOM scores are calculated for five Human Failure Events (HFEs) for which empirical evidence on the HEPs (albeit with large uncertainty) and influencing factors are available from the International HRA Empirical Study. The empirical evaluation has shown promising results. The TACOM score increases as the empirical HEP of the selected HFEs increases. Except for one case, TACOM scores are well distinguished if related to different difficulty categories (e.g., 'easy' vs. 'somewhat difficult'), while values corresponding to tasks within the same category are very close. Despite some important limitations related to the small number of HFEs investigated and the large uncertainty in their HEPs, this paper presents one of few attempts to empirically study the effect of a performance shaping factor on the human error probability. This type of study is important to enhance the empirical basis of HRA methods, to make sure that 1) the definitions of the PSFs cover the influences important for HRA (i.e., influencing the error probability), and 2) the quantitative relationships among PSFs and error probability are

  11. Thermocouple error correction for measuring the flame temperature with determination of emissivity and heat transfer coefficient.

    Science.gov (United States)

    Hindasageri, V; Vedula, R P; Prabhu, S V

    2013-02-01

    Temperature measurement by thermocouples is prone to errors due to conduction and radiation losses and therefore has to be corrected for precise measurement. The temperature dependent emissivity of the thermocouple wires is measured by the use of thermal infrared camera. The measured emissivities are found to be 20%-40% lower than the theoretical values predicted from theory of electromagnetism. A transient technique is employed for finding the heat transfer coefficients for the lead wire and the bead of the thermocouple. This method does not require the data of thermal properties and velocity of the burnt gases. The heat transfer coefficients obtained from the present method have an average deviation of 20% from the available heat transfer correlations in literature for non-reacting convective flow over cylinders and spheres. The parametric study of thermocouple error using the numerical code confirmed the existence of a minimum wire length beyond which the conduction loss is a constant minimal. Temperature of premixed methane-air flames stabilised on 16 mm diameter tube burner is measured by three B-type thermocouples of wire diameters: 0.15 mm, 0.30 mm, and 0.60 mm. The measurements are made at three distances from the burner tip (thermocouple tip to burner tip/burner diameter = 2, 4, and 6) at an equivalence ratio of 1 for the tube Reynolds number varying from 1000 to 2200. These measured flame temperatures are corrected by the present numerical procedure, the multi-element method, and the extrapolation method. The flame temperatures estimated by the two-element method and extrapolation method deviate from numerical results within 2.5% and 4%, respectively. PMID:23464237
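
    The leading-order correction implied by the measured emissivity and heat transfer coefficient is the steady-state bead energy balance; a minimal sketch that neglects lead-wire conduction (which the paper's numerical procedure includes), with hypothetical flame values:

```python
# Steady-state bead energy balance: convection from the gas to the bead
# equals radiation from the bead to the surroundings, so
#   T_gas = T_tc + eps * sigma * (T_tc^4 - T_surr^4) / h.
SIGMA = 5.670e-8   # Stefan-Boltzmann constant, W m^-2 K^-4

def gas_temperature(T_tc, T_surr, eps, h):
    """T_tc: thermocouple reading (K); T_surr: surroundings (K);
    eps: bead emissivity; h: bead heat-transfer coeff (W m^-2 K^-1)."""
    return T_tc + eps * SIGMA * (T_tc**4 - T_surr**4) / h

print(gas_temperature(T_tc=1800.0, T_surr=500.0, eps=0.25, h=450.0))
```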

  13. THE TOTAL ERRORS IN MEASURING Epeak FOR GAMMA-RAY BURSTS

    International Nuclear Information System (INIS)

    Of all the observable quantities for gamma-ray bursts (GRBs), one of the most important is Epeak, defined as the peak of the νFν spectrum of the prompt emission. While Epeak has been extensively used in the past, for example with luminosity indicators, it has not been thoroughly examined for possible sources of scatter. In the literature, the reported error bars for Epeak are the simple Poisson statistical errors. Additional uncertainties arise from the choices made by analysts in determining Epeak (e.g., the start and stop times of integration), imperfect knowledge of the response of the detector, different energy ranges for various detectors, and differences in the models used to fit the spectra. We examine the size of these individual sources of scatter by comparing many independent pairs of published Epeak values for the same bursts. Indeed, the observed scatter in multiple reports of the same burst (often with the same data) is much larger than the published statistical error bars. We measure that the 1σ uncertainty associated with the analyst's choices is 28%, i.e., 0.12 in log10(Epeak), and these errors are always present. The errors associated with the detector response are negligibly small. The variations caused by commonly used alternative definitions of Epeak (such as those present in all papers and in all compiled burst lists) are typically 23%-46%, although this varies substantially with the application. The implications of this are: (1) Even the very best measured Epeak values will have systematic uncertainties of 28%. (2) Thus, the accuracy of Epeak for a single event is limited, although this can be reduced by averaging many bursts. (3) The typical 1σ total uncertainty for collections of bursts is 55%. (4) We also find that the width of the distribution for Epeak in the burst frame must be near zero, implying that some mechanism must exist to thermostat GRBs. (5) Our community can only improve on this situation by using
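    A minimal sketch of how such error components combine, assuming (as is standard for independent contributions) addition in quadrature in log10(Epeak). The 0.12 dex analyst scatter is the value quoted above; the function name and inputs are illustrative.

        import math

        def total_epeak_sigma_dex(stat_sigma_dex, analyst_sigma_dex=0.12):
            # Independent error sources add in quadrature in log10(Epeak);
            # 0.12 dex (~28%) is the analyst-choice scatter quoted above.
            return math.hypot(stat_sigma_dex, analyst_sigma_dex)

        # Even a burst with a 10% (0.041 dex) statistical error stays ~34% uncertain.
        print(total_epeak_sigma_dex(0.041))  # ~0.127 dex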

  14. Neuroimaging measures of error-processing: Extracting reliable signals from event-related potentials and functional magnetic resonance imaging.

    Science.gov (United States)

    Steele, Vaughn R; Anderson, Nathaniel E; Claus, Eric D; Bernat, Edward M; Rao, Vikram; Assaf, Michal; Pearlson, Godfrey D; Calhoun, Vince D; Kiehl, Kent A

    2016-05-15

    Error-related brain activity has become an increasingly important focus of cognitive neuroscience research utilizing both event-related potentials (ERPs) and functional magnetic resonance imaging (fMRI). Given the significant time and resources required to collect these data, it is important for researchers to plan their experiments such that stable estimates of error-related processes can be achieved efficiently. Reliability of error-related brain measures will vary as a function of the number of error trials and the number of participants included in the averages. Unfortunately, systematic investigations of the number of events and participants required to achieve stability in error-related processing are sparse, and none have addressed variability in sample size. Our goal here is to provide data compiled from a large sample of healthy participants (n=180) performing a Go/NoGo task, resampled iteratively to demonstrate the relative stability of measures of error-related brain activity given a range of sample sizes and event numbers included in the averages. We examine ERP measures of error-related negativity (ERN/Ne) and error positivity (Pe), as well as event-related fMRI measures locked to False Alarms. We find that achieving stable estimates of ERP measures required four to six error trials and approximately 30 participants; fMRI measures required six to eight trials and approximately 40 participants. Fewer trials and participants were required for measures where additional data reduction techniques (i.e., principal component analysis and independent component analysis) were implemented. Ranges of reliability statistics for various sample sizes and numbers of trials are provided. We intend this to be a useful resource for those planning or evaluating ERP or fMRI investigations with tasks designed to measure error-processing. PMID:26908319
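    The resampling logic can be sketched as follows: repeatedly draw subsets of error trials of a given size, average them per participant, and correlate those averages with the full-data averages across the sample. The data here are simulated, and the amplitudes, sample size, and trial counts are placeholders rather than the study's values.

        import numpy as np

        rng = np.random.default_rng(0)
        # Hypothetical per-trial ERN amplitudes (uV): 180 subjects x 20 error trials.
        amps = rng.normal(loc=-6.0, scale=4.0, size=(180, 20))

        def subsample_reliability(amps, n_trials, n_iter=500):
            """Correlate subsample-based means with full-data means across subjects."""
            full = amps.mean(axis=1)
            rs = []
            for _ in range(n_iter):
                cols = rng.choice(amps.shape[1], size=n_trials, replace=False)
                rs.append(np.corrcoef(amps[:, cols].mean(axis=1), full)[0, 1])
            return float(np.mean(rs))

        for n in (2, 4, 6, 8):
            print(n, round(subsample_reliability(amps, n), 3))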

  15. Sieve estimation of constant and time-varying coefficients in nonlinear ordinary differential equation models by considering both numerical error and measurement error

    CERN Document Server

    Xue, Hongqi; Wu, Hulin; 10.1214/09-AOS784

    2010-01-01

    This article considers estimation of constant and time-varying coefficients in nonlinear ordinary differential equation (ODE) models where analytic closed-form solutions are not available. The numerical solution-based nonlinear least squares (NLS) estimator is investigated in this study. A numerical algorithm such as the Runge-Kutta method is used to approximate the ODE solution. The asymptotic properties are established for the proposed estimators, considering both numerical error and measurement error. The B-spline is used to approximate the time-varying coefficients, and the corresponding asymptotic theories in this case are investigated under the framework of the sieve approach. Our results show that if the maximum step size of the $p$-order numerical algorithm goes to zero at a rate faster than $n^{-1/(p\wedge4)}$, the numerical error is negligible compared to the measurement error. This result provides theoretical guidance on selecting the step size for numerical evaluation of ODEs. Moreover, we h...
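    A hedged reading of the step-size rule: for a p-order solver, the numerical error is asymptotically negligible if the maximum step size shrinks faster than n^(-1/(p∧4)), where ∧ denotes the minimum. The helper below simply evaluates that bound; the safety factor is an arbitrary illustrative choice, not part of the theorem.

        def max_step_size(n_obs, order_p, safety=0.5):
            # Step size should shrink faster than n**(-1/min(p, 4)).
            return safety * n_obs ** (-1.0 / min(order_p, 4))

        print(max_step_size(400, order_p=4))  # RK4 with 400 observations -> ~0.11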

  16. Correction of Measurement Error in Monthly USDA Pig Crop: Generating Alternative Data Series

    OpenAIRE

    Kim, In Seck; Plain, Ronald L.; Bullock, J. Bruce; Jei, Sang Young

    2008-01-01

    The imputed pig death loss contained in the reported monthly U.S. Department of Agriculture (USDA) pig crop data over the December 1995–June 2006 period ranged from 24.93% to 12.75%. Clearly, there are substantial measurement errors in the USDA monthly pig crop data. In this paper, we present alternative monthly U.S. pig crop data using the biological production process, which is compatible with prior knowledge of the U.S. hog industry. Alternative pig crop data are applied to a slaughter h...

  17. Errors in short circuit measurements due to spectral mismatch between sunlight and solar simulators

    Science.gov (United States)

    Curtis, H. B.

    1976-01-01

    Errors in short circuit current measurement were calculated for a variety of spectral mismatch conditions. The differences in spectral irradiance between terrestrial sunlight and three types of solar simulator were studied, as well as the differences in spectral response between three types of reference solar cell and various test cells. The simulators considered were a short-arc xenon lamp AM0 sunlight simulator, an ordinary quartz-halogen lamp, and an ELH-type quartz-halogen lamp. The three types of solar cells studied were a silicon cell, a cadmium sulfide cell, and a gallium arsenide cell.
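    The short-circuit-current error studied here is conventionally summarized by the spectral mismatch factor M, which compares the simulator and sunlight spectra weighted by the reference-cell and test-cell responses; M = 1 means no mismatch error. The sketch below assumes all four spectra are tabulated on a common wavelength grid; the arrays are toy placeholders, not measured spectra.

        import numpy as np

        def mismatch_factor(wl, e_sun, e_sim, s_ref, s_test):
            """Spectral mismatch factor M; the measured-to-true short-circuit
            current ratio deviates from 1 by the same proportion."""
            num = np.trapz(e_sim * s_test, wl) * np.trapz(e_sun * s_ref, wl)
            den = np.trapz(e_sim * s_ref, wl) * np.trapz(e_sun * s_test, wl)
            return num / den

        wl = np.linspace(300.0, 1100.0, 161)               # nm, toy grid
        e_sun = np.ones_like(wl)                           # flat "sunlight"
        e_sim = 1.0 + 0.5 * (wl - 300.0) / 800.0           # red-heavy "simulator"
        s_ref = np.exp(-((wl - 600.0) / 150.0) ** 2)       # reference-cell response
        s_test = np.exp(-((wl - 800.0) / 150.0) ** 2)      # test-cell response
        print(mismatch_factor(wl, e_sun, e_sim, s_ref, s_test))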

  18. Measurements of Gun Tube Motion and Muzzle Pointing Error of Main Battle Tanks

    OpenAIRE

    McCall, Peter L.

    2001-01-01

    Beginning in 1990, the US Army Aberdeen Test Center (ATC) began testing a prototype cannon mounted in a non-armored turret fitted to an M1A1 Abrams tank chassis. The cannon design incorporated a longer gun tube as a means to increase projectile velocity. A significant increase in projectile impact dispersion was measured early in the test program. Through investigative efforts, the cause of the error was linked to the increased dynamic bending or flexure of the longer tube observed while the ...

  19. A Discriminant Function Approach to Adjust for Processing and Measurement Error When a Biomarker is Assayed in Pooled Samples

    Directory of Open Access Journals (Sweden)

    Robert H. Lyles

    2015-11-01

    Full Text Available Pooling biological specimens prior to performing expensive laboratory assays has been shown to be a cost effective approach for estimating parameters of interest. In addition to requiring specialized statistical techniques, however, the pooling of samples can introduce assay errors due to processing, possibly in addition to measurement error that may be present when the assay is applied to individual samples. Failure to account for these sources of error can result in biased parameter estimates and ultimately faulty inference. Prior research addressing biomarker mean and variance estimation advocates hybrid designs consisting of individual as well as pooled samples to account for measurement and processing (or pooling) error. We consider adapting this approach to the problem of estimating a covariate-adjusted odds ratio (OR) relating a binary outcome to a continuous exposure or biomarker level assessed in pools. In particular, we explore the applicability of a discriminant function-based analysis that assumes normal residual, processing, and measurement errors. A potential advantage of this method is that maximum likelihood estimation of the desired adjusted log OR is straightforward and computationally convenient. Moreover, in the absence of measurement and processing error, the method yields an efficient unbiased estimator for the parameter of interest assuming normal residual errors. We illustrate the approach using real data from an ancillary study of the Collaborative Perinatal Project, and we use simulations to demonstrate the ability of the proposed estimators to alleviate bias due to measurement and processing error.

  1. Accounting for baseline differences and measurement error in the analysis of change over time.

    Science.gov (United States)

    Braun, Julia; Held, Leonhard; Ledergerber, Bruno

    2014-01-15

    If change over time is compared in several groups, it is important to take into account baseline values so that the comparison is carried out under the same preconditions. As the observed baseline measurements are distorted by measurement error, it may not be sufficient to include them as a covariate. By fitting a longitudinal mixed-effects model to all data including the baseline observations and subsequently calculating the expected change conditional on the underlying baseline value, a solution to this problem has been provided recently so that groups with the same baseline characteristics can be compared. In this article, we present an extended approach where a broader set of models can be used. Specifically, it is possible to include any desired set of interactions between the time variable and the other covariates, and time-dependent covariates can also be included. Additionally, we extend the method to adjust for baseline measurement error of other time-varying covariates. We apply the methodology to data from the Swiss HIV Cohort Study to address the question of whether a joint infection with HIV-1 and hepatitis C virus leads to a slower increase of CD4 lymphocyte counts over time after the start of antiretroviral therapy. PMID:23900718

  2. Operationally-motivated uncertainty relations for joint measurability and the error-disturbance tradeoff

    International Nuclear Information System (INIS)

    We derive uncertainty relations for both joint measurability and the error-disturbance tradeoff in terms of the probability of distinguishing the actual operation of a device from its hypothetical ideal. Our relations provide a clear operational interpretation of two main aspects of the uncertainty principle, as originally formulated by Heisenberg. The first restricts the joint measurability of observables, stating that noncommuting observables can only be simultaneously determined with a characteristic amount of indeterminacy. The second describes an error-disturbance tradeoff, noting that the more precise a measurement of one observable is made, the greater the disturbance to noncommuting observables. Our relations are explicitly state-independent and valid for arbitrary observables of discrete quantum systems, and are also applicable to the case of position and momentum observables. They may be directly applied in information processing settings, for example to infer that devices which can faithfully transmit information regarding one observable do not leak information about conjugate observables to the environment. Though intuitively apparent from Heisenberg's original arguments, only limited versions of this statement have previously been formalized.

  3. Semiparametric Bayesian Analysis of Nutritional Epidemiology Data in the Presence of Measurement Error

    KAUST Repository

    Sinha, Samiran

    2009-08-10

    We propose a semiparametric Bayesian method for handling measurement error in nutritional epidemiological data. Our goal is to estimate nonparametrically the form of association between a disease and exposure variable while the true values of the exposure are never observed. Motivated by nutritional epidemiological data, we consider the setting where a surrogate covariate is recorded in the primary data, and a calibration data set contains information on the surrogate variable and repeated measurements of an unbiased instrumental variable of the true exposure. We develop a flexible Bayesian method where not only is the relationship between the disease and exposure variable treated semiparametrically, but also the relationship between the surrogate and the true exposure is modeled semiparametrically. The two nonparametric functions are modeled simultaneously via B-splines. In addition, we model the distribution of the exposure variable as a Dirichlet process mixture of normal distributions, thus making its modeling essentially nonparametric and placing this work into the context of functional measurement error modeling. We apply our method to the NIH-AARP Diet and Health Study and examine its performance in a simulation study.

  4. Nonparametric Estimation of a Bivariate Survival Function in the Presence of Censoring

    OpenAIRE

    Tsai, Wei-Yann; Leurgans, Sue; Crowley, John

    1986-01-01

    A new family of estimators of a bivariate survival function based on censored vectors is obtained from a decomposition of the bivariate survival function. These estimators are uniformly consistent under bivariate censoring and are self-consistent under univariate censoring.

  5. Reduction of truncation errors in planar, cylindrical, and partial spherical near-field antenna measurements

    DEFF Research Database (Denmark)

    Cano-Fácila, Francisco José; Pivnenko, Sergey; Sierra-Castaner, Manuel

    2012-01-01

    A method to reduce truncation errors in near-field antenna measurements is presented. The method is based on the Gerchberg-Papoulis iterative algorithm used to extrapolate band-limited functions, and it is able to extend the valid region of the calculated far-field pattern up to the whole forward hemisphere. The extension of the valid region is achieved by the iterative application of a transformation between two different domains. After each transformation, a filtering process that is based on known information at each domain is applied. The first domain is the spectral domain in which the plane wave ... Planar, cylindrical, and partial spherical near-field measurements are considered. Several simulation and measurement examples are presented to verify the effectiveness of the method.
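    A one-dimensional sketch of the Gerchberg-Papoulis iteration named above: alternate between enforcing the band limit in the spectral domain and restoring the measured samples in the signal domain. This is the generic algorithm, not the authors' antenna-specific implementation; the masks, signal, and iteration count are illustrative.

        import numpy as np

        def gerchberg_papoulis(measured, known_mask, band_mask, n_iter=50):
            """Extrapolate a band-limited signal beyond a truncated window.
            measured: signal, zero outside the known region
            known_mask: True where samples were actually measured
            band_mask: True on the spectral support, in FFT ordering"""
            x = measured.copy()
            for _ in range(n_iter):
                spectrum = np.fft.fft(x)
                spectrum[~band_mask] = 0.0            # filter in the spectral domain
                x = np.fft.ifft(spectrum).real
                x[known_mask] = measured[known_mask]  # restore known data each pass
            return x

        # Toy demo: recover a truncated low-pass signal.
        n = 256
        t = np.arange(n)
        truth = np.sin(2 * np.pi * 3 * t / n) + 0.5 * np.cos(2 * np.pi * 5 * t / n)
        known = (t > 60) & (t < 200)                  # truncated observation window
        band = np.zeros(n, dtype=bool)
        band[:8] = band[-8:] = True                   # low-pass support, FFT order
        measured = np.where(known, truth, 0.0)
        print(np.abs(gerchberg_papoulis(measured, known, band) - truth).max())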

  6. Measuring of Block Error Rates in High-Speed Digital Networks

    Directory of Open Access Journals (Sweden)

    Petr Ivaniga

    2006-01-01

    Full Text Available Error characteristics are a decisive factor in defining the transmission quality of digital networks. The ITU-T G.826 and G.828 recommendations identify error parameters for high-speed digital networks in relation to the G.821 recommendation. The paper describes the relations between the individual error parameters and the error rate, assuming that these are invariant in time.
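    One elementary relation between the parameters discussed here: if bit errors were independent, a block of N bits fails whenever at least one bit fails, giving the conversion below. Real networks show error bursts, so this is only a first-order approximation; the numbers are illustrative.

        def block_error_rate(bit_error_rate, block_bits):
            # A block fails if any of its bits fails, assuming independent errors.
            return 1.0 - (1.0 - bit_error_rate) ** block_bits

        print(block_error_rate(1e-9, 8000))  # ~8e-6 for an 8000-bit block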

  7. A Model of the Dynamic Error as a Measurement Result of Instruments Defining the Parameters of Moving Objects

    OpenAIRE

    Dichev D.; Koev H.; Bakalova T.; Louda P.

    2014-01-01

    The present paper considers a new model for the formation of the inertial component of the dynamic error. It is very effective in the analysis and synthesis of measuring instruments that are positioned on moving objects and measure their movement parameters. The block diagram developed within this paper is used as a basis for defining the mathematical model. The block diagram is based on the set-theoretic description of the measuring system, its input and output quantities and the process of dynamic error ...

  8. Analysis of influence on back-EMF based sensorless control of PMSM due to parameter variations and measurement errors

    DEFF Research Database (Denmark)

    Wang, Z.; Lu, K.; Ye, Y.; Jin, Y.; Hong, W.

    2011-01-01

    To achieve better performance of sensorless control of PMSM, a precise and stable estimation of rotor position and speed is required. Several parameter uncertainties and variable measurement errors may lead to estimation error, such as resistance and inductance variations due to temperature and f...

  9. The bivariate Rogers-Szegő polynomials

    International Nuclear Information System (INIS)

    We present an operator approach to deriving Mehler's formula and the Rogers formula for the bivariate Rogers-Szegő polynomials h_n(x, y|q). The proof of Mehler's formula can be considered as a new approach to the nonsymmetric Poisson kernel formula for the continuous big q-Hermite polynomials H_n(x; a|q) due to Askey, Rahman and Suslov. Mehler's formula for h_n(x, y|q) involves a ₃φ₂ sum and the Rogers formula involves a ₂φ₁ sum. The proofs of these results are based on parameter augmentation with respect to the q-exponential operator and the homogeneous q-shift operator in two variables. By extending recent results on the Rogers-Szegő polynomials h_n(x|q) due to Hou, Lascoux and Mu, we obtain another Rogers-type formula for h_n(x, y|q). Finally, we give a change of base formula for H_n(x; a|q) which can be used to evaluate some integrals by using the Askey-Wilson integral

  10. Research Into the Collimation and Horizontal Axis Errors Influence on the Z+F Laser Scanner Accuracy of Verticality Measurement

    Science.gov (United States)

    Sawicki, J.; Kowalczyk, M.

    2016-06-01

    The aim of this study was to determine the values of the collimation and horizontal axis errors of the Z+F 5006h laser scanner owned by the Department of Geodesy and Cartography, Warsaw University of Technology, and then to determine the effect of those errors on the results of measurements. An experiment was performed, involving measurement of a test field established in the Main Hall of the Main Building of the Warsaw University of Technology, during which the values of the instrumental errors of interest were determined. Then, a universal computer program was developed that automates the proposed algorithm and is capable of applying corrections to measured target coordinates or even entire point clouds from individual stations.

  11. Studying the measurement errors for the density of neutron beam from a reactor core by the gold foil activation method

    International Nuclear Information System (INIS)

    Applicability of the gold foil activation method for precise measurements of the density of a neutron beam extracted from the reactor core is investigated experimentally. A comparison of the density ratios of cold and hot beams is carried out to determine the error of measurements conducted with the use of gold foils and a detector with a 6LiF target. Based on the analysis of the data obtained, it is concluded that the total error of measurements using the activation method, comprising the error of determining the gold activation cross section Δσ = ±0.3% and the absolute value of the foil activity (also ±0.3%), makes up ±0.7%

  12. A new analysis of fine-structure constant measurements and modelling errors from quasar absorption lines

    Science.gov (United States)

    Wilczynska, Michael R.; Webb, John K.; King, Julian A.; Murphy, Michael T.; Bainbridge, Matthew B.; Flambaum, Victor V.

    2015-12-01

    We present an analysis of 23 absorption systems along the lines of sight towards 18 quasars in the redshift range of 0.4 ≤ z_abs ≤ 2.3 observed on the Very Large Telescope (VLT) using the Ultraviolet and Visual Echelle Spectrograph (UVES). Considering both statistical and systematic error contributions we find a robust estimate of the weighted mean deviation of the fine-structure constant from its current, laboratory value of Δα/α = (0.22 ± 0.23) × 10⁻⁵, consistent with the dipole variation reported in Webb et al. and King et al. This paper also examines modelling methodologies and systematic effects. In particular, we focus on the consequences of fitting quasar absorption systems with too few absorbing components and of selectively fitting only the stronger components in an absorption complex. We show that using insufficient continuum regions around an absorption complex causes a significant increase in the scatter of a sample of Δα/α measurements, thus unnecessarily reducing the overall precision. We further show that fitting absorption systems with too few velocity components also results in a significant increase in the scatter of Δα/α measurements, and in addition causes Δα/α error estimates to be systematically underestimated. These results thus identify some of the potential pitfalls in analysis techniques and provide a guide for future analyses.
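    The quoted weighted mean is the standard inverse-variance combination; a minimal sketch follows. With systematic terms, one would first add them to each measurement's statistical error in quadrature. The values below are toy numbers, not the paper's data.

        import numpy as np

        def weighted_mean(values, sigmas):
            """Inverse-variance weighted mean and its 1-sigma uncertainty."""
            w = 1.0 / np.asarray(sigmas, dtype=float) ** 2
            mean = np.sum(w * np.asarray(values, dtype=float)) / np.sum(w)
            return mean, np.sqrt(1.0 / np.sum(w))

        # Toy delta-alpha/alpha values in units of 1e-5.
        print(weighted_mean([0.1, 0.4, -0.2], [0.3, 0.5, 0.4]))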

  13. Copula bivariate probit models: with an application to medical expenditures.

    Science.gov (United States)

    Winkelmann, Rainer

    2012-12-01

    The bivariate probit model is frequently used for estimating the effect of an endogenous binary regressor (the 'treatment') on a binary health outcome variable. This paper discusses simple modifications that maintain the probit assumption for the marginal distributions while introducing non-normal dependence using copulas. In an application of the copula bivariate probit model to the effect of insurance status on the absence of ambulatory health care expenditure, a model based on the Frank copula outperforms the standard bivariate probit model. PMID:22025413
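    A sketch of the key ingredient: probit marginals Φ(x'β) coupled through a Frank copula give the joint outcome probabilities that enter the likelihood. The Frank copula formula is standard; the coefficients below are illustrative, and a full estimator would maximize the four-cell likelihood these probabilities imply.

        from math import exp, log
        from scipy.stats import norm

        def frank_copula(u, v, theta):
            # C(u,v) = -(1/theta)*ln(1 + (e^{-theta u}-1)(e^{-theta v}-1)/(e^{-theta}-1))
            if abs(theta) < 1e-9:
                return u * v  # independence limit
            num = (exp(-theta * u) - 1.0) * (exp(-theta * v) - 1.0)
            return -log(1.0 + num / (exp(-theta) - 1.0)) / theta

        def p_both_one(xb1, xb2, theta):
            """P(y1=1, y2=1) with probit marginals coupled by a Frank copula."""
            return frank_copula(norm.cdf(xb1), norm.cdf(xb2), theta)

        print(p_both_one(0.3, -0.2, theta=2.5))  # toy linear indices and dependence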

  14. A Bivariate Analogue to the Composed Product of Polynomials

    Institute of Scientific and Technical Information of China (English)

    Donald Mills; Kent M. Neuerburg

    2003-01-01

    The concept of a composed product for univariate polynomials has been explored extensively by Brawley, Brown, Carlitz, Gao, Mills, et al. Starting with these fundamental ideas and utilizing fractional power series representation (in particular, the Puiseux expansion) of bivariate polynomials, we generalize the univariate results. We define a bivariate composed sum, composed multiplication, and composed product (based on function composition). Further, we investigate the algebraic structure of certain classes of bivariate polynomials under these operations. We also generalize a result of Brawley and Carlitz concerning the decomposition of polynomials into irreducibles.

  15. Three-dimensional shape optical measurement using constant gap control and error compensation

    International Nuclear Information System (INIS)

    The optical laser displacement sensor is widely used for noncontact measurement of the three-dimensional (3D) shape profile of an object surface. When the surface of an object has a slope variation, the sensor gain varies proportionally with the slope of the object surface. In order to solve this sensor gain variation problem, the constant gap control method is applied to adjust the gap to the nominal distance. Control error compensation is also proposed to cope with situations in which the gap is not perfectly controlled to the nominal distance, using an additional sensor attached to the actuator. 3D shape measurement applying the proposed constant gap control method shows better performance than the constant sensor height method

  16. Accuracy and uncertainty in radiochemical measurements. Learning from errors in nuclear analytical chemistry

    International Nuclear Information System (INIS)

    A characteristic that sets radioactivity measurements apart from most spectrometries is that the precision of a single determination can be estimated from Poisson statistics. This easily calculated counting uncertainty permits the detection of other sources of uncertainty by comparing observed with a priori precision. A good way to test the many underlying assumptions in radiochemical measurements is to strive for high accuracy. For example, a measurement by instrumental neutron activation analysis (INAA) of gold film thickness in our laboratory revealed the need for pulse pileup correction even at modest dead times. Recently, the International Organization for Standardization (ISO) and other international bodies have formalized the quantitative determination and statement of uncertainty so that the weaknesses of each measurement are exposed for improvement. In the INAA certification measurement of ion-implanted arsenic in silicon (Standard Reference Material 2134), we recently achieved an expanded (95% confidence) relative uncertainty of 0.38% for 90 ng of arsenic per sample. A complete quantitative error analysis was performed. This measurement meets the CCQM definition of a primary ratio method. (author)
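    The a priori counting precision referred to above is a one-line computation, which is what makes the observed-versus-expected comparison so convenient in radiochemistry. The count value below is illustrative.

        import math

        def counting_rel_sigma(counts):
            # Poisson statistics: sigma = sqrt(N), so the relative error is 1/sqrt(N).
            return math.sqrt(counts) / counts

        print(counting_rel_sigma(40000))  # 0.5% from counting statistics alone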

  17. Image Segmentation Method Based On Finite Doubly Truncated Bivariate Gaussian Mixture Model with Hierarchical Clustering

    Directory of Open Access Journals (Sweden)

    G V S Rajkumar

    2011-07-01

    Full Text Available Image segmentation is one of the most important areas of image retrieval. In colour image segmentation, the feature vector of each image region is n-dimensional, unlike that of a grey-level image. In this paper a new image segmentation algorithm is developed and analyzed using a finite mixture of doubly truncated bivariate Gaussian distributions integrated with hierarchical clustering. The number of image regions in the whole image is determined using the hierarchical clustering algorithm. Assuming that the bivariate feature vector (consisting of the hue angle and saturation of each pixel) in an image region follows a doubly truncated bivariate Gaussian distribution, the segmentation algorithm is developed. The model parameters are estimated using the EM algorithm; the updated equations of the EM algorithm for a finite mixture of doubly truncated Gaussian distributions are derived. A segmentation algorithm for colour images is proposed by using component maximum likelihood. The performance of the developed algorithm is evaluated by carrying out experimentation with five images taken from the Berkeley image dataset and computing image segmentation metrics: Global Consistency Error (GCE), Variation of Information (VOI), and Probability Rand Index (PRI). The experimental results show that this algorithm outperforms existing image segmentation algorithms.

  18. Quantifying the sampling error in tree census measurements by volunteers and its effect on carbon stock estimates.

    Science.gov (United States)

    Butt, Nathalie; Slade, Eleanor; Thompson, Jill; Malhi, Yadvinder; Riutta, Terhi

    2013-06-01

    A typical way to quantify aboveground carbon in forests is to measure tree diameters and use species-specific allometric equations to estimate biomass and carbon stocks. Using "citizen scientists" to collect data that are usually time-consuming and labor-intensive can play a valuable role in ecological research. However, data validation, such as establishing the sampling error in volunteer measurements, is a crucial, but little studied, part of utilizing citizen science data. The aims of this study were to (1) evaluate the quality of tree diameter and height measurements carried out by volunteers compared to expert scientists and (2) estimate how sensitive carbon stock estimates are to these measurement sampling errors. Using all diameter data measured with a diameter tape, the volunteer mean sampling error (difference between repeated measurements of the same stem) was 9.9 mm, and the expert sampling error was 1.8 mm. Excluding those sampling errors > 1 cm, the mean sampling errors were 2.3 mm (volunteers) and 1.4 mm (experts) (this excluded 14% [volunteer] and 3% [expert] of the data). The sampling error in diameter measurements had a small effect on the biomass estimates of the plots: a volunteer (expert) diameter sampling error of 2.3 mm (1.4 mm) translated into a 1.7% (0.9%) change in the biomass estimates calculated from species-specific allometric equations based upon diameter. Height sampling error had a dependent relationship with tree height. Including height measurements in biomass calculations compounded the sampling error markedly; the impact of volunteer sampling error on biomass estimates was ±15%, and the expert range was ±9%. Using dendrometer bands, used to measure growth rates, we calculated that the volunteer (vs. expert) sampling error was 0.6 mm (vs. 0.3 mm), which is equivalent to a difference in carbon storage of ±0.011 kg C/yr (vs. ±0.002 kg C/yr) per stem. Using a citizen science model for monitoring carbon stocks not only has
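    How a diameter sampling error propagates into biomass can be sketched with first-order error propagation through a generic allometric model M = a·D^b, for which the relative biomass error is roughly b times the relative diameter error. The exponent below is a typical textbook value rather than one from the study, and this is a per-stem illustration, not the plot-level figures quoted above.

        def biomass_relative_error(diameter_mm, diameter_sigma_mm, exponent_b=2.5):
            """First-order propagation through M = a * D**b: dM/M ~ b * dD/D."""
            return exponent_b * diameter_sigma_mm / diameter_mm

        # A 2.3 mm sampling error on a 150 mm stem -> ~3.8% biomass error.
        print(biomass_relative_error(150.0, 2.3))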

  19. Digital holographic particle image velocimetry: eliminating a sign-ambiguity error and a bias error from the measured particle field displacement

    International Nuclear Information System (INIS)

    In a typical digital holographic PIV recording set-up, the reference beam and the object beam propagate towards the recording device along parallel axes. Consequently, in a reconstructed volume, the real image of the recorded particle field and the speckle pattern that originates from the virtual image of the recorded particle field overlap. If the recorded particle field experiences a longitudinal displacement between two recordings and if the two reconstructed complex-amplitude fields are analysed with a 3D correlation analysis, two separate peaks appear in the resulting correlation-coefficient volume. The two peaks are located at opposite longitudinal positions. One peak is related to the displacement of the real image and the other peak is related to the displacement of the speckle pattern that originates from the virtual image. Because both peaks have a comparable height, a sign ambiguity appears in the longitudinal component of the measured particle field displacement. Additionally, the measured longitudinal particle field displacement suffers from a bias error. The sign ambiguity and the bias error can be suppressed by applying a threshold operation to the reconstructed amplitude. The sign ambiguity, characterized by Γ, is suppressed by more than a factor of 60. The dimensionless bias error is reduced by a factor of 5

  20. A multi-year methane inversion using SCIAMACHY, accounting for systematic errors using TCCON measurements

    Directory of Open Access Journals (Sweden)

    S. Houweling

    2013-10-01

    Full Text Available This study investigates the use of total column CH4 (XCH4 retrievals from the SCIAMACHY satellite instrument for quantifying large scale emissions of methane. A unique data set from SCIAMACHY is available spanning almost a decade of measurements, covering a period when the global CH4 growth rate showed a marked transition from stable to increasing mixing ratios. The TM5 4DVAR inverse modelling system has been used to infer CH4 emissions from a combination of satellite and surface measurements for the period 2003–2010. In contrast to earlier inverse modelling studies, the SCIAMACHY retrievals have been corrected for systematic errors using the TCCON network of ground based Fourier transform spectrometers. The aim is to further investigate the role of bias correction of satellite data in inversions. Methods for bias correction are discussed, and the sensitivity of the optimized emissions to alternative bias correction functions is quantified. It is found that the use of SCIAMACHY retrievals in TM5 4DVAR increases the estimated inter-annual variability of large-scale fluxes by 22% compared with the use of only surface observations. The difference in global methane emissions between two year periods before and after July 2006 is estimated at 27–35 Tg yr−1. The use of SCIAMACHY retrievals causes a shift in the emissions from the extra-tropics to the tropics of 50 ± 25 Tg yr−1. The large uncertainty in this value arises from the uncertainty in the bias correction functions. Using measurements from the HIPPO and BARCA aircraft campaigns, we show that systematic errors are a main factor limiting the performance of the inversions. To further constrain tropical emissions of methane using current and future satellite missions, extended validation capabilities in the tropics are of critical importance.

  1. Measuring errors and violations on the road: a bifactor modeling approach to the Driver Behavior Questionnaire.

    Science.gov (United States)

    Rowe, Richard; Roman, Gabriela D; McKenna, Frank P; Barker, Edward; Poulter, Damian

    2015-01-01

    The Driver Behavior Questionnaire (DBQ) is a self-report measure of driving behavior that has been widely used over more than 20 years. Despite this wealth of evidence, a number of questions remain, including understanding the correlation between its violations and errors sub-components, identifying how these components are related to crash involvement, and testing whether a DBQ based on a reduced number of items can be effective. We address these issues using a bifactor modeling approach to data drawn from the UK Cohort II longitudinal study of novice drivers. This dataset provides observations on 12,012 drivers with DBQ data collected at 0.5, 1, 2 and 3 years after passing their test. A bifactor model, including a general factor onto which all items loaded, and specific factors for ordinary violations, aggressive violations, slips and errors, fitted the data better than correlated-factors and second-order factor structures. A model based on only 12 items replicated this structure and produced factor scores that were highly correlated with the full model. The ordinary violations and general factor were significant independent predictors of crash involvement at 6 months after starting independent driving. The discussion considers the role of the general and specific factors in crash involvement. PMID:25463951

  2. Analysis of measurement errors for Thomson diagnostics of non-Maxwellian plasmas in tokamak reactors

    Science.gov (United States)

    Sdvizhenskii, P. A.; Kukushkin, A. B.; Kurskiev, G. S.; Mukhin, E. E.; Bassan, M.

    2016-01-01

    The study is stimulated by the expected noticeable deviation of the electron velocity distribution function (eVDF) from a Maxwellian under conditions of strong auxiliary heating of the electron plasma in tokamak reactors. The key principles of accuracy estimation for Thomson scattering diagnostics of non-Maxwellian plasmas in tokamak reactors are presented. The algorithm extends the conventional approach to the assessment of measurement errors for non-Maxwellian plasmas to a broad class of deviations of the eVDF from a Maxwellian. The algorithm is based on solving the inverse problem many times to determine the main parameters of the eVDF, with allowance for all possible sources of error and statistical variation of the input parameters of the problem. The method is applied to a preliminary analysis of the advantages of the formerly suggested use of various wavelengths of probing laser radiation in Thomson diagnostics of non-Maxwellian plasma, using the example of the core plasma Thomson scattering diagnostic system under design for the ITER tokamak. The results obtained confirm the relevance of diversifying the probing laser radiation wavelength.

  3. Robustness of SOC Estimation Algorithms for EV Lithium-Ion Batteries against Modeling Errors and Measurement Noise

    OpenAIRE

    Xue Li; Jiuchun Jiang; Caiping Zhang; Le Yi Wang; Linfeng Zheng

    2015-01-01

    State of charge (SOC) is one of the most important parameters in battery management system (BMS). There are numerous algorithms for SOC estimation, mostly of model-based observer/filter types such as Kalman filters, closed-loop observers, and robust observers. Modeling errors and measurement noises have critical impact on accuracy of SOC estimation in these algorithms. This paper is a comparative study of robustness of SOC estimation algorithms against modeling errors and measurement noises. ...

  4. Effects of Measurement Errors on Population Estimates from Samples Generated from a Stratified Population through Systematic Sampling Technique

    OpenAIRE

    Abel OUKO; Cheruiyot W. KIPKOECH; Emily KIRIMI

    2014-01-01

    In various surveys, presence of measurement errors has led to misleading results in estimation of various population parameters. This study indicates the effects of measurement errors on estimates of population total and population variance when samples are drawn using systematic sampling technique from a stratified population. A finite population was generated through simulation. The population was then stratified into four strata followed by generation of ten samples in each of them using s...

  5. Estimating Usual Dietary Intake Distributions: Adjusting for Measurement Error and Nonnormality in 24-Hour Food Intake Data

    OpenAIRE

    Nusser, Sarah M; Fuller, Wayne A.; Guenther, Patricia M.

    1995-01-01

    The authors have developed a method for estimating the distribution of an unobservable random variable from data that are subject to considerable measurement error and that arise from a mixture of two populations, one having a single-valued distribution and the other having a continuous unimodal distribution. The method requires that at least two positive intakes be recorded for a subset of the subjects in order to estimate the variance components for the measurement error model. Published in...

  6. Recursive Numerical Evaluation of the Cumulative Bivariate Normal Distribution

    OpenAIRE

    Christian Meyer

    2013-01-01

    We propose an algorithm for evaluation of the cumulative bivariate normal distribution, building upon Marsaglia's ideas for evaluation of the cumulative univariate normal distribution. The algorithm is mathematically transparent, delivers competitive performance and can easily be extended to arbitrary precision.
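    For comparison with any such algorithm, a reference value of the cumulative bivariate normal is easy to obtain numerically. The sketch below uses SciPy's generic multivariate normal CDF rather than the authors' recursive scheme, and checks it against a known closed-form point.

        import numpy as np
        from scipy.stats import multivariate_normal

        def bvn_cdf(h, k, rho):
            """P(X <= h, Y <= k) for a standard bivariate normal, correlation rho."""
            cov = np.array([[1.0, rho], [rho, 1.0]])
            return multivariate_normal(mean=[0.0, 0.0], cov=cov).cdf([h, k])

        # Sanity check: P(X<=0, Y<=0) = 1/4 + arcsin(rho)/(2*pi) = 1/3 for rho = 0.5.
        print(bvn_cdf(0.0, 0.0, 0.5))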

  7. Suspended sediment fluxes in a tidal wetland: Measurement, controlling factors, and error analysis

    Science.gov (United States)

    Ganju, N.K.; Schoellhamer, D.H.; Bergamaschi, B.A.

    2005-01-01

    Suspended sediment fluxes to and from tidal wetlands are of increasing concern because of habitat restoration efforts, wetland sustainability as sea level rises, and potential contaminant accumulation. We measured water and sediment fluxes through two channels on Browns Island, at the landward end of San Francisco Bay, United States, to determine the factors that control sediment fluxes on and off the island. In situ instrumentation was deployed between October 10 and November 13, 2003. Acoustic Doppler current profilers and the index velocity method were employed to calculate water fluxes. Suspended sediment concentrations (SSC) were determined with optical sensors and cross-sectional water sampling. All procedures were analyzed for their contribution to total error in the flux measurement. The inability to close the water balance and determination of constituent concentration were identified as the main sources of error; total error was 27% for net sediment flux. The water budget for the island was computed with an unaccounted input of 0.20 m³ s⁻¹ (22% of mean inflow), after considering channel flow, change in water storage, evapotranspiration, and precipitation. The net imbalance may be a combination of groundwater seepage, overland flow, and flow through minor channels. Change of island water storage, caused by local variations in water surface elevation, dominated the tidally averaged water flux. These variations were mainly caused by wind and barometric pressure change, which alter regional water levels throughout the Sacramento-San Joaquin River Delta. Peak instantaneous ebb flow was 35% greater than peak flood flow, indicating an ebb-dominant system, though dominance varied with the spring-neap cycle. SSC were controlled by wind-wave resuspension adjacent to the island and local tidal currents that mobilized sediment from the channel bed. During neap tides sediment was imported onto the island but during spring tides sediment was exported because the main

  8. Analysis of the influence of measuring errors in experimental determinations of the mass and heat transfer coefficients

    International Nuclear Information System (INIS)

    The paper analyses the influence of measuring errors of the operation parameters (flows, temperatures, pressures, and concentrations) on the experimental determination of the mass and heat transfer coefficients. Data obtained on experimental plants for hydrogen isotope separation, by hydrogen distillation and water distillation, and the calculation model for error propagation are presented. The results are tabulated. The variation intervals of the transfer coefficients are marked graphically. The study of measuring errors is an extremely important intermediate stage in the experimental determination of criterial relation coefficients, specific relations for B7 structured packing. (authors)

  9. THE IMPACT OF THE BEFORE-AFTER ERROR TERM CORRELATION ON WELFARE MEASUREMENT IN LOGIT

    OpenAIRE

    Paolo Delle Site; Marco Valerio Salucci

    2012-01-01

    We consider random utility models with independent and identical type I extreme value distribution of the error terms. To compute the expectation of the compensating variation it is necessary to consider the correlation of the error terms between the state before the price and quality change and the state after. We investigate the impact of the before-after correlation of the error terms on the expectation of the compensating variation. We consider each error term to be correlated between the...

  10. Measurements on pointing error and field of view of Cimel-318 Sun photometers in the scope of AERONET

    Directory of Open Access Journals (Sweden)

    B. Torres

    2013-08-01

    Full Text Available Sensitivity studies indicate that, among the diverse error sources of ground-based sky radiometer observations, the pointing error plays an important role in the correct retrieval of aerosol properties. Accurate pointing is especially critical for the characterization of desert dust aerosol. The present work relies on the analysis of two new measurement procedures (cross and matrix) specifically designed for the evaluation of the pointing error in the standard instrument of the Aerosol Robotic Network (AERONET), the Cimel CE-318 Sun photometer. The first part of the analysis contains a preliminary study whose results indicate the need for a correction for the Sun's movement for an accurate evaluation of the pointing error from both new measurements. Once this correction is applied, both measurements show equivalent results, with differences under 0.01° in the pointing error estimations. The second part of the analysis covers the incorporation of the cross procedure into the AERONET routine measurement protocol in order to monitor the pointing error in field instruments. The pointing error was evaluated using the data collected for more than a year in 7 Sun photometers belonging to AERONET sites. The registered pointing error values were generally smaller than 0.1°, though in some instruments values up to 0.3° have been observed. Moreover, the pointing error analysis shows that this measurement can be useful to detect mechanical problems in the robots or dirtiness in the 4-quadrant detector used to track the Sun. Specifically, these mechanical faults can be detected due to the stable behavior of the values over time and vs. the solar zenith angle. Finally, the matrix procedure can be used to derive the value of the solid view angle of the instruments. The methodology has been implemented and applied for the characterization of 5 Sun photometers. To validate the method, a comparison with solid angles obtained from the vicarious calibration method was

  11. Copula bivariate probit models: with an application to medical expenditures

    OpenAIRE

    Winkelmann, Rainer

    2011-01-01

    The bivariate probit model is frequently used for estimating the effect of an endogenous binary regressor (the "treatment") on a binary health outcome variable. This paper discusses simple modifications that maintain the probit assumption for the marginal distributions while introducing non-normal dependence using copulas. In an application of the copula bivariate probit model to the effect of insurance status on the absence of ambulatory health care expenditure, a model based on the Frank ...

  12. Robustness of the sample correlation - the bivariate lognormal case

    OpenAIRE

    J. C. W. Rayner; C. D. Lai; Hutchinson, T P

    1999-01-01

    The sample correlation coefficient R is almost universally used to estimate the population correlation coefficient ρ. If the pair (X,Y) has a bivariate normal distribution, this would not cause any trouble. However, if the marginals are nonnormal, particularly if they have high skewness and kurtosis, the estimated value from a sample may be quite different from the population correlation coefficient ρ. The bivariate lognormal is chosen as our case study for this robustness study. Two approache...

  13. Correcting the error in neutron moisture probe measurements caused by a water density gradient

    International Nuclear Information System (INIS)

    If a neutron probe lies in or near a water density gradient, the probe may register a water density different to that at the measuring point. The effect of a thin stratum of soil containing an excess or depletion of water at various distances from a probe in an otherwise homogeneous system has been calculated, producing an 'importance' curve. The effect of these strata can be integrated over the soil region in close proximity to the probe resulting in the net effect of the presence of a water density gradient. In practice, the probe is scanned through the point of interest and the count rate at that point is corrected for the influence of the water density on each side of it. An example shows that the technique can reduce an error of 10 per cent to about 2 per cent
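    The correction strategy described here can be mimicked by treating the probe reading as an importance-weighted average of the water density around the measuring point; a scan through the gradient can then be corrected using the same kernel. The kernel shape below is a made-up stand-in for the paper's measured importance curve, and all arrays are toy data.

        import numpy as np

        def apparent_density(true_profile, importance):
            """Model the probe reading as the true water-density profile
            smoothed by a normalized 'importance' kernel."""
            k = np.asarray(importance, dtype=float)
            k = k / k.sum()
            return np.convolve(true_profile, k, mode="same")

        depth = np.arange(100)                               # arbitrary depth steps
        profile = np.where(depth < 50, 0.10, 0.30)           # sharp wetting front
        kernel = np.exp(-np.abs(np.arange(-15, 16)) / 5.0)   # toy importance curve
        print(apparent_density(profile, kernel)[45:55])      # smeared transition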

  14. Impact of instrumental systematic errors on fine-structure constant measurements with quasar spectra

    CERN Document Server

    Whitmore, J B

    2014-01-01

    We present a new 'supercalibration' technique for measuring systematic distortions in the wavelength scales of high resolution spectrographs. By comparing spectra of 'solar twin' stars or asteroids with a reference laboratory solar spectrum, distortions in the standard thorium-argon calibration can be tracked with ~10 m s⁻¹ precision over the entire optical wavelength range on scales of both echelle orders (~50-100 Å) and entire spectrograph arms (~1000-3000 Å). Using archival spectra from the past 20 years we have probed the supercalibration history of the VLT-UVES and Keck-HIRES spectrographs. We find that systematic errors in their wavelength scales are ubiquitous and substantial, with long-range distortions varying between typically ±200 m s⁻¹ per 1000 Å. We apply a simple model of these distortions to simulated spectra which characterize the large UVES and HIRES quasar samples which previously indicated possible evidence for cosmological variations in the ...

  15. A Reanalysis of Toomela (2003): Spurious measurement error as a cause of common variance between personality factors

    Directory of Open Access Journals (Sweden)

    MATTHIAS ZIEGLER

    2009-03-01

    Full Text Available The present article reanalyzed data collected by Toomela (2003). The data contain personality self-ratings and cognitive ability test results from n = 912 men with a military background. In his original article, Toomela showed that in the group with the highest cognitive ability, Big-Five Neuroticism and Conscientiousness were substantially correlated and could no longer be clearly separated using exploratory factor analysis. The present reanalysis was based on the hypothesis that a spurious measurement error caused by situational demand was responsible; that is, people distorted their answers. Furthermore, it was hypothesized that this situational demand was felt due to a person's military rank but not due to his intelligence. Using a multigroup structural equation model, our hypothesis could be confirmed. Moreover, the results indicate that an uncorrelated trait model might represent personality better when situational demand is partialed out. Practical and theoretical implications are discussed.

  16. Thermal neutron induced soft error rate measurement in semiconductor memories and circuits

    International Nuclear Information System (INIS)

    Soft error rate (SER) testing and measurement of semiconductor circuits with different operating voltages and operating conditions have been performed using the thermal neutron beam at the Radiation Science and Engineering Center (RSEC) at Penn State University. The high neutron flux allows accelerated SER testing by increasing the reaction rate density inside the tested device, which yields more precise experimental data with a shorter run time. The effect of different operating voltages and operating conditions on the INTEL PXA270 processor has been experimentally determined. Experimental results showed that the main failure mechanism was segmentation faults in the system. The failure response of the system to the operating conditions was in agreement with the general behavior of SERs. (author)

  17. Errors in second moments estimated from monostatic Doppler sodar winds. II. Application to field measurements

    DEFF Research Database (Denmark)

    Gaynor, J. E.; Kristensen, Leif

    1986-01-01

    For pt. I see ibid., vol. 3, no. 3, p. 523-8 (1986). The authors use the theoretical results presented in part I to correct turbulence parameters derived from monostatic sodar wind measurements, in an attempt to improve the statistical comparisons with the sonic anemometers on the Boulder Atmospheric Observatory tower. The approximate magnitude of the error due to spatial and temporal pulse volume separation is presented as a function of mean wind angle relative to the sodar configuration and for several antenna pulsing orders. Sodar-derived standard deviations of the lateral wind component, before and after the application of the spatial and temporal volume separation correction, are presented. The improvement appears to be significant. The effects of correcting for pulse volume averaging derived in part I are also discussed.

  18. High Accuracy On-line Measurement Method of Motion Error on Machine Tools Straight-going Parts

    Institute of Scientific and Technical Information of China (English)

    苏恒; 洪迈生; 魏元雷; 李自军

    2003-01-01

    Harmonic suppression, non-periodicity, and non-closure in the straightness profile error, which cause harmonic component distortion in the measurement result, are analyzed. As a countermeasure, a novel accurate two-probe method in the time domain is put forward to measure the straight-going component of the motion error in machine tools, based on the frequency-domain 3-point method, after symmetrical continuation of the probes' primitive signals. Both the straight-going component of the machine tool motion error and the profile error of a workpiece manufactured on that machine can be measured at the same time. This information can be used to diagnose the fault origin of machine tools. The analysis is proved correct by experiment.

  19. Systematic errors in the measurement of the permanent electric dipole moment (EDM) of the 199Hg atom

    Science.gov (United States)

    Chen, Yi; Graner, Brent; Heckel, Blayne; Lindahl, Eric

    2016-05-01

    This talk provides a discussion of the systematic errors that were encountered in the 199Hg experiment described earlier in this session. The dominant systematic error, unseen in previous 199Hg EDM experiments, arose from small motions of the Hg vapor cells due to forces exerted by the applied electric field. Methods used to understand this effect, as well as the anticipated sources of systematic errors such as leakage currents, parameter correlations, and E2 and v × E/c effects, will be presented. The total systematic error was found to be 72% as large as the statistical error of the EDM measurement. This work was supported by NSF Grant 1306743 and by DOE Grant DE-FG02-97ER41020.

  20. Error Correction and Calibration of a Sun Protection Measurement System for Textile Fabrics

    International Nuclear Information System (INIS)

    Clothing is increasingly being labelled with a Sun Protection Factor number which indicates the protection against sunburn provided by the textile fabric. This factor is obtained by measuring the transmittance of samples of the fabric in the ultraviolet region (290-400 nm). The accuracy, and hence the reliability, of the label depends on the accuracy of the measurement. Some sun protection measurement systems quote a transmittance accuracy at 2%T of ± 1.5%T. This means a fabric classified under the Australian standard (AS/NZ 4399:1996) with an Ultraviolet Protection Factor (UPF) of 40 would have an uncertainty of +15 or -10. This would not allow classification to the nearest 5, and a UVR protection category of 'excellent protection' might in fact be only 'very good protection'. An accuracy of ±0.1%T is required to give a UPF uncertainty of ±2.5. The measurement system then does not contribute significantly to the error, and the remaining problems are limited to sample conditioning, position and consistency. A commercial sun protection measurement system has been developed by Camspec Ltd which uses traceable neutral density filters and appropriate design to ensure high accuracy. The effects of small zero offsets are corrected, and the effect of the reflectivity of the sample fabric on the integrating sphere efficiency is measured and corrected. Fabric orientation relative to the light patch is considered. Signal stability is ensured by means of a reference beam. Traceable filters also allow wavelength accuracy to be conveniently checked. (author)

  1. Measurement of Fracture Aperture Fields Using Ttransmitted Light: An Evaluation of Measurement Errors and their Influence on Simulations of Flow and Transport through a Single Fracture

    Energy Technology Data Exchange (ETDEWEB)

    Detwiler, Russell L.; Glass, Robert J.; Pringle, Scott E.

    1999-05-06

    Understanding of single- and multi-phase flow and transport in fractures can be greatly enhanced through experimentation in transparent systems (analogs or replicas) where light transmission techniques yield quantitative measurements of aperture, solute concentration, and phase saturation fields. Here we quantify aperture field measurement error and demonstrate the influence of this error on the results of flow and transport simulations (hypothesized experimental results) through saturated and partially saturated fractures. We find that precision and accuracy can be balanced to greatly improve the technique, and we present a measurement protocol to obtain a minimum error field. Simulation results show an increased sensitivity to error as we move from flow to transport and from saturated to partially saturated conditions. Significant sensitivity under partially saturated conditions results in differences in channeling and multiple-peaked breakthrough curves. These results emphasize the critical importance of defining and minimizing error for studies of flow and transport in single fractures.

  2. Calibration of a camera–projector measurement system and error impact analysis

    International Nuclear Information System (INIS)

    In the camera–projector measurement system, calibration is key to measurement accuracy; in particular, it is more difficult to obtain the same calibration accuracy for the projector as for the camera, owing to the inaccurate correspondence between the projector's calibration points and imaging points. Thus, based on stereo vision measurement models of the camera and the projector, a calibration method combining direct linear transformation (DLT) and bundle adjustment (BA) is introduced in this paper to adjust the corresponding relationships for better optimization, minimizing the effect of inaccurate calibration points. An integral method is also presented to improve the precision of the projection patterns and compensate for the projector's limited resolution. Moreover, the impacts of errors in the system parameters and calibration points are evaluated as the calibration point positions change, which not only provides theoretical guidance for the rational layout of the calibration points, but can also be used to optimize the system structure. Finally, the calibration of the system is carried out, and the experimental results show that better precision can be achieved with these processes. (paper)

  3. Systematic and Statistical Errors Associated with Nuclear Decay Constant Measurements Using the Counting Technique

    Science.gov (United States)

    Koltick, David; Wang, Haoyu; Liu, Shih-Chieh; Heim, Jordan; Nistor, Jonathan

    2016-03-01

    Typical nuclear decay constants are measured at an accuracy level of 10^-2. Numerous applications, such as tests of unconventional theories, dating of materials, and long-term inventory evolution, require decay constant accuracies at the level of 10^-4 to 10^-5. The statistical and systematic errors associated with precision measurements of decays using the counting technique are presented. Precision requires high count rates, which introduces time-dependent dead-time and pile-up corrections. An approach to overcome these issues, based on continuous recording of the detector current, is presented. Other systematic corrections are also discussed, including the time-dependent dead time due to background radiation, control of target motion and radiation flight path variation due to environmental conditions, and time-dependent effects caused by scattered events. The incorporation of blind experimental techniques can help make the measurement independent of past results. A spectrometer design and data analysis that can accomplish these goals are reviewed. The author would like to thank TechSource, Inc. and Advanced Physics Technologies, LLC. for their support in this work.
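
    As background to the dead-time issue described above, the sketch below shows the standard non-paralyzable dead-time correction commonly used in high-rate counting work. It is an illustration of the general technique only, not the authors' method; the rates and dead time are hypothetical.

```python
# Illustrative sketch only (not the authors' code): the standard
# non-paralyzable dead-time correction used in high-rate counting.

def true_rate(measured_rate, dead_time):
    """Recover the true event rate n from the measured rate m for a
    non-paralyzable detector: m = n / (1 + n*tau)  =>  n = m / (1 - m*tau)."""
    if measured_rate * dead_time >= 1.0:
        raise ValueError("measured rate saturates the detector")
    return measured_rate / (1.0 - measured_rate * dead_time)

# Example: 1e5 counts/s measured with a 1 microsecond dead time.
print(true_rate(1e5, 1e-6))  # ~1.11e5 counts/s, i.e. an ~11% correction
```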

  4. Demonstrating the Error Budget for the Climate Absolute Radiance and Refractivity Observatory Through Solar Irradiance Measurements

    Science.gov (United States)

    Thome, Kurtis; McCorkel, Joel; McAndrew, Brendan

    2016-01-01

    The Climate Absolute Radiance and Refractivity Observatory (CLARREO) mission addresses the need to observe high-accuracy, long-term climate change trends and to use decadal change observations as a method to determine the accuracy of climate projections. A CLARREO objective is to improve the accuracy of SI-traceable, absolute calibration at infrared and reflected solar wavelengths to reach the on-orbit accuracies required to allow climate change observations to survive data gaps and to observe climate change at the limit of natural variability. Such an effort will also demonstrate National Institute of Standards and Technology (NIST) approaches for use in future spaceborne instruments. The current work describes the results of laboratory and field measurements with the Solar, Lunar for Absolute Reflectance Imaging Spectroradiometer (SOLARIS), which is the calibration demonstration system (CDS) for the reflected solar portion of CLARREO. SOLARIS allows testing and evaluation of calibration approaches, alternate design and/or implementation approaches, and components for the CLARREO mission. SOLARIS also provides a test-bed for detector technologies, non-linearity determination and uncertainties, and application of future technology developments and suggested spacecraft instrument design modifications. Results of laboratory calibration measurements are provided to demonstrate key assumptions about instrument behavior that are needed to achieve CLARREO's climate measurement requirements. Absolute radiometric response is determined using laser-based calibration sources and applied to direct solar views for comparison with accepted solar irradiance models to demonstrate accuracy values giving confidence in the error budget for the CLARREO reflectance retrieval.

  5. Determinantal and permanental representation of generalized bivariate Fibonacci p-polynomials

    OpenAIRE

    Kaygisiz, Kenan; Sahin, Adem

    2011-01-01

    In this paper, we give some determinantal and permanental representations of generalized bivariate Fibonacci p-polynomials by using various Hessenberg matrices. The results we obtain are important since generalized bivariate Fibonacci p-polynomials are the general form of, for example, bivariate Fibonacci and Pell p-polynomials, Chebyshev polynomials of the second kind, bivariate Jacobsthal polynomials, etc.

  6. Random Walks with Bivariate Levy-Stable Jumps in Comparison with Levy Flights

    International Nuclear Information System (INIS)

    In this paper we compare the Levy flight model on a plane with the random walk resulting from bivariate Levy-stable random jumps with a uniform spectral measure. We show that, in general, both processes exhibit similar properties, i.e., they are characterized by the presence of jumps with extremely large lengths and uniformly distributed directions (reflecting the same heavy-tail behavior and spherical symmetry of the jump distributions), connecting characteristic clusters of short steps. The bivariate Levy-stable random walks, belonging to the well-investigated class of stable processes, can enlarge the class of random-walk models for transport phenomena if spectral measures other than the uniform one are considered. (author)
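
    A minimal simulation can make the described behavior concrete. The sketch below is not from the paper: it generates an isotropic planar walk with uniformly distributed jump directions and power-law-tailed jump lengths (a Pareto tail standing in for the exact Levy-stable law), producing the characteristic clusters of short steps connected by rare, very long jumps.

```python
import numpy as np

rng = np.random.default_rng(0)

def heavy_tailed_walk(n_steps, alpha=1.5):
    # Jump lengths with a power-law tail P(L > l) ~ l**(-alpha);
    # directions drawn uniformly on the circle (uniform spectral measure).
    lengths = rng.pareto(alpha, n_steps) + 1.0
    angles = rng.uniform(0.0, 2.0 * np.pi, n_steps)
    steps = np.column_stack((lengths * np.cos(angles),
                             lengths * np.sin(angles)))
    return lengths, np.cumsum(steps, axis=0)

lengths, path = heavy_tailed_walk(10_000)
# Rare, extremely long jumps connect clusters of short steps:
print("median jump:", np.median(lengths), "largest jump:", lengths.max())
```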

  7. Prediction of rainfall intensity measurement errors using commercial microwave communication links

    Directory of Open Access Journals (Sweden)

    A. Zinevich

    2010-10-01

    Full Text Available Commercial microwave radio links forming cellular communication networks are known to be a valuable instrument for measuring near-surface rainfall. However, operational communication links are more uncertain than dedicated installations, since their geometry and frequencies are optimized for high communication performance rather than for observing rainfall. Quantification of the uncertainties for measurements that are non-optimal in the first place is essential to assure the usability of the data.

    In this work we address the modeling of instrumental impairments, i.e. signal variability due to antenna wetting, baseline attenuation uncertainty and digital quantization, as well as environmental ones, i.e. variability of the drop size distribution along a link, affecting the accuracy of the path-averaged rainfall measurement, and spatial variability of rainfall in the link's neighborhood, affecting the accuracy of rainfall estimation away from the link path. Expressions for the root mean squared error (RMSE) of estimates of path-averaged and point rainfall have been derived. To verify the RMSE expressions quantitatively, path-averaged measurements from 21 operational communication links in 12 different locations have been compared to records of five nearby rain gauges over three rainstorm events.

    The experiments show that the prediction accuracy is above 90% for temporal accumulations of less than 30 min and decreases for longer accumulation intervals. Spatial variability in the vicinity of the link, baseline attenuation uncertainty and, possibly, suboptimality of the wet antenna attenuation model are the major sources of link-gauge discrepancies. In addition, the dependence of the optimal coefficients of a conventional wet antenna attenuation model on spatial rainfall variability and, accordingly, on link length has been shown.

    The expressions for RMSE of the path-averaged rainfall estimates can be useful for integration of measurements from multiple

  8. Effects of Measurement Errors on Population Estimates from Samples Generated from a Stratified Population through Systematic Sampling Technique

    Directory of Open Access Journals (Sweden)

    Abel OUKO

    2014-11-01

    Full Text Available In various surveys, the presence of measurement errors has led to misleading results in the estimation of various population parameters. This study examines the effects of measurement errors on estimates of the population total and population variance when samples are drawn from a stratified population using the systematic sampling technique. A finite population was generated through simulation. The population was then stratified into four strata, followed by the generation of ten samples in each stratum using the systematic sampling technique. In each stratum a sample was picked at random. The findings of this work indicate that systematic errors affected the accuracy of the estimates by overestimating both the population total and the population variance. Random errors only added variability to the data; their effect on the estimates of the population total and population variance was not as profound.
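
    The qualitative findings can be illustrated with a small simulation. The following sketch, with entirely hypothetical numbers, draws a systematic sample from one simulated stratum and contaminates it with a systematic bias plus random noise; the bias shifts the estimated total while the noise only adds variability.

```python
import numpy as np

# Entirely hypothetical numbers: systematic sample from one simulated
# stratum, contaminated with systematic bias plus random noise.
rng = np.random.default_rng(1)

population = rng.gamma(shape=2.0, scale=50.0, size=10_000)  # one stratum
true_total = population.sum()

k = 20                                   # sampling interval
start = rng.integers(k)
sample = population[start::k]            # systematic sample

bias, noise_sd = 5.0, 3.0                # systematic and random error
measured = sample + bias + rng.normal(0.0, noise_sd, sample.size)

est_clean = sample.mean() * population.size
est_noisy = measured.mean() * population.size
print(true_total, est_clean, est_noisy)
# The systematic component shifts the total by roughly bias * N, while the
# random component only inflates the variance of the estimate.
```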

  9. Measuring the relationship between interruptions, multitasking and prescribing errors in an emergency department: a study protocol

    OpenAIRE

    Raban, Magdalena Z; Scott R Walter; Douglas, Heather E.; Strumpman, Dana; Mackenzie, John; Westbrook, Johanna I

    2015-01-01

    Introduction Interruptions and multitasking are frequent in clinical settings, and have been shown in the cognitive psychology literature to affect performance, increasing the risk of error. However, comparatively less is known about their impact on errors in clinical work. This study will assess the relationship between prescribing errors, interruptions and multitasking in an emergency department (ED) using direct observations and chart review. Methods and analysis The study will be conducte...

  10. Coordinate-Momentum Intermediate Representation and Marginal Distributions of Quantum Mechanical Bivariate Normal Distribution

    International Nuclear Information System (INIS)

    We introduce the bivariate normal distribution operator for a state vector |ψ⟩ and find that its marginal distribution leads to a one-dimensional normal distribution corresponding to the measurement probability |_{λ,ν}⟨x|ψ⟩|², where |x⟩_{λ,ν} is the coordinate-momentum intermediate representation. As a by-product, the one-dimensional normal distribution in statistics can be explained as a Radon transform of a two-dimensional Gaussian function.
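
    The statistical statement in the last sentence can be written out explicitly. The display below is our illustration, using a zero-mean bivariate normal in variables x and p to echo the coordinate-momentum setting: integrating out one variable of the joint Gaussian density leaves a one-dimensional normal density.

```latex
\[
\int_{-\infty}^{\infty}
\frac{1}{2\pi\sigma_x\sigma_p\sqrt{1-\rho^{2}}}
\exp\!\left[-\frac{1}{2(1-\rho^{2})}
\left(\frac{x^{2}}{\sigma_x^{2}}
-\frac{2\rho\,x\,p}{\sigma_x\sigma_p}
+\frac{p^{2}}{\sigma_p^{2}}\right)\right]\,dp
=\frac{1}{\sqrt{2\pi}\,\sigma_x}
\exp\!\left(-\frac{x^{2}}{2\sigma_x^{2}}\right)
\]
```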

  11. Interpretation of Falling-Head Tests in Presence of Random Measurement Error

    OpenAIRE

    Chiasson, Paul

    2012-01-01

    Field data are tainted by random and several types of systematic errors. The paper presents a review of interpretation methods for falling-head tests. The statistical robustness of each method is then evaluated through the use of synthetic data tainted by random error. Six synthetic datasets are used for this evaluation. Each dataset has an average relative error for water elevation Z, respectively, of 0.04%, 0.11%, 0.22%, 0.34%, 0.45%, and 0.90% (absolute errors on elevation are, respectivel...

  12. Software Tool for Analysis of Breathing-Related Errors in Transthoracic Electrical Bioimpedance Spectroscopy Measurements

    Science.gov (United States)

    Abtahi, F.; Gyllensten, I. C.; Lindecrantz, K.; Seoane, F.

    2012-12-01

    During the last decades, Electrical Bioimpedance Spectroscopy (EBIS) has been applied in a range of different applications, mainly using the frequency-sweep technique. Traditionally, the tissue under study is considered to be time-invariant, and dynamic changes of tissue activity are ignored and instead treated as a noise source. This assumption has not been adequately tested, and it could have a negative impact on, and limit the accuracy of, impedance monitoring systems. In order to successfully use frequency-sweeping EBIS for monitoring time-variant systems, it is paramount to study the effect of frequency-sweep delay on Cole model-based analysis. In this work, we present a software tool that can be used to simulate the influence of respiration activity in frequency-sweep EBIS measurements of the human thorax and to analyse the effects of the different error sources. Preliminary results indicate that the deviation in the EBIS measurement might be significant at any frequency, especially in the impedance plane. Therefore the impact on Cole-model analysis might differ depending on the method applied for Cole parameter estimation.

  13. Software Tool for Analysis of Breathing-Related Errors in Transthoracic Electrical Bioimpedance Spectroscopy Measurements

    International Nuclear Information System (INIS)

    During the last decades, Electrical Bioimpedance Spectroscopy (EBIS) has been applied in a range of different applications, mainly using the frequency-sweep technique. Traditionally, the tissue under study is considered to be time-invariant, and dynamic changes of tissue activity are ignored and instead treated as a noise source. This assumption has not been adequately tested, and it could have a negative impact on, and limit the accuracy of, impedance monitoring systems. In order to successfully use frequency-sweeping EBIS for monitoring time-variant systems, it is paramount to study the effect of frequency-sweep delay on Cole model-based analysis. In this work, we present a software tool that can be used to simulate the influence of respiration activity in frequency-sweep EBIS measurements of the human thorax and to analyse the effects of the different error sources. Preliminary results indicate that the deviation in the EBIS measurement might be significant at any frequency, especially in the impedance plane. Therefore the impact on Cole-model analysis might differ depending on the method applied for Cole parameter estimation.

  14. Design, calibration and error analysis of instrumentation for heat transfer measurements in internal combustion engines

    Science.gov (United States)

    Ferguson, C. R.; Tree, D. R.; Dewitt, D. P.; Wahiduzzaman, S. A. H.

    1987-01-01

    The paper reports the methodology and uncertainty analyses of instrumentation for heat transfer measurements in internal combustion engines. Results are presented for determining the local wall heat flux in an internal combustion engine (using a surface thermocouple-type heat flux gage) and the apparent flame temperature and soot volume fraction-path length product in a diesel engine (using two-color pyrometry). It is shown that a surface thermocouple heat transfer gage, suitably constructed and calibrated, will have an accuracy of 5 to 10 percent. It is also shown that, when applying two-color pyrometry to measure the apparent flame temperature and soot volume fraction-path length, it is important to choose at least one of the two wavelengths in the range of 1.3 to 2.3 micrometers. A carefully calibrated two-color pyrometer can ensure that random errors in the apparent flame temperature and in the soot volume fraction-path length remain small (within about 1 percent and 10 percent, respectively).

  15. Quantification of LiDAR measurement uncertainty through propagation of errors due to sensor sub-systems and terrain morphology

    Science.gov (United States)

    Goulden, T.; Hopkinson, C.

    2013-12-01

    The quantification of LiDAR sensor measurement uncertainty is important for evaluating the quality of derived DEM products, compiling risk assessments of management decisions based on LiDAR information, and enhancing LiDAR mission planning capabilities. Current quality assurance estimates of LiDAR measurement uncertainty are limited to post-survey empirical assessments or vendor estimates from commercial literature. Empirical evidence can provide valuable information on the performance of the sensor in validated areas; however, it cannot characterize the spatial distribution of measurement uncertainty throughout the extensive coverage of typical LiDAR surveys. Vendor-advertised error estimates are often restricted to strict and optimal survey conditions, resulting in idealized values. Numerical modeling of individual pulse uncertainty provides an alternative method for estimating LiDAR measurement uncertainty. LiDAR measurement uncertainty is theoretically assumed to fall into three distinct categories: 1) sensor sub-system errors, 2) terrain influences, and 3) vegetative influences. This research details the procedures for numerical modeling of measurement uncertainty from the sensor sub-system (GPS, IMU, laser scanner, laser ranger) and terrain influences. Results show that errors tend to increase as the laser scan angle, altitude or laser beam incidence angle increases. An experimental survey over a flat, paved runway site, performed with an Optech ALTM 3100 sensor, showed an increase in modeled vertical errors from 5 cm at a nadir scan orientation to 8 cm at the scan edges, for an aircraft altitude of 1200 m and a half scan angle of 15°. In a survey with the same sensor at a highly sloped glacial basin site devoid of vegetation, modeled vertical errors reached over 2 m. Validation of the error models within the glacial environment, over three separate flight lines, showed that 100%, 85%, and 75% of elevation residuals, respectively, fell below the error predictions. Future

  16. The Thirty Gigahertz Instrument Receiver for the QUIJOTE Experiment: Preliminary Polarization Measurements and Systematic-Error Analysis

    Directory of Open Access Journals (Sweden)

    Francisco J. Casas

    2015-08-01

    Full Text Available This paper presents preliminary polarization measurements and systematic-error characterization of the Thirty Gigahertz Instrument receiver developed for the QUIJOTE experiment. The instrument has been designed to measure the polarization of Cosmic Microwave Background radiation from the sky, obtaining the Q, U, and I Stokes parameters of the incoming signal simultaneously. Two kinds of linearly polarized input signals have been used as excitations in the polarimeter measurement tests in the laboratory; these show consistent results in terms of the Stokes parameters obtained. A measurement-based systematic-error characterization technique has been used in order to determine the possible sources of instrumental errors and to assist in the polarimeter calibration process.

  17. Chromosome heteromorphism quantified by high-resolution bivariate flow karyotyping.

    Science.gov (United States)

    Trask, B; van den Engh, G; Mayall, B; Gray, J W

    1989-11-01

    Maternal and paternal homologues of many chromosome types can be differentiated on the basis of their peak position in Hoechst 33258 versus chromomycin A3 bivariate flow karyotypes. We demonstrate here the magnitude of DNA content differences among normal chromosomes of the same type. Significant peak-position differences between homologues were observed for an average of four chromosome types in each of the karyotypes of 98 different individuals. The frequency of individuals with differences in homologue peak positions varied among chromosome types: e.g., chromosome 15, 61%; chromosome 3, 4%. Flow karyotypes of 33 unrelated individuals were compared to determine the range of peak position among normal chromosomes. Chromosomes Y, 21, 22, 15, 16, 13, 14, and 19 were most heteromorphic, and chromosomes 2-8 and X were least heteromorphic. The largest chromosome 21 was 45% larger than the smallest 21 chromosome observed. The base composition of the variable regions differed among chromosome types. DNA contents of chromosome variants determined from flow karyotypes were closely correlated to measurements of DNA content made of gallocyanin chrome alum-stained metaphase chromosomes on slides. Fluorescence in situ hybridization with chromosome-specific repetitive sequences indicated that variability in their copy number is partly responsible for peak-position variability in some chromosomes. Heteromorphic chromosomes are identified for which parental flow karyotype information will be essential if de novo rearrangements resulting in small DNA content changes are to be detected with flow karyotyping. PMID:2479266

  18. Measuring coverage in MNCH: total survey error and the interpretation of intervention coverage estimates from household surveys.

    Directory of Open Access Journals (Sweden)

    Thomas P Eisele

    Full Text Available Nationally representative household surveys are increasingly relied upon to measure maternal, newborn, and child health (MNCH) intervention coverage at the population level in low- and middle-income countries. Surveys are the best tool we have for this purpose and are central to national and global decision making. However, all survey point estimates have a certain level of error (total survey error), comprising sampling and non-sampling error, both of which must be considered when interpreting survey results for decision making. In this review, we discuss the importance of considering these errors when interpreting MNCH intervention coverage estimates derived from household surveys, using relevant examples from national surveys to provide context. Sampling error is usually thought of as the precision of a point estimate and is represented by 95% confidence intervals, which are measurable. Confidence intervals can inform judgments about whether estimated parameters are likely to be different from the real value of a parameter. We recommend, therefore, that confidence intervals for key coverage indicators should always be provided in survey reports. By contrast, the direction and magnitude of non-sampling error is almost always unmeasurable, and therefore unknown. Information error and bias are the most common sources of non-sampling error in household survey estimates, and we recommend that they should always be carefully considered when interpreting MNCH intervention coverage based on survey data. Overall, we recommend that future research on measuring MNCH intervention coverage should focus on refining and improving survey-based coverage estimates to develop a better understanding of how results should be interpreted and used.
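
    To make the sampling-error point concrete, the sketch below (hypothetical numbers, not from the review) computes a 95% confidence interval for a coverage proportion, including a design effect of the kind used for cluster-sampled household surveys; as the abstract notes, such an interval reflects sampling error only.

```python
import math

# Hypothetical numbers: 95% confidence interval for an intervention
# coverage proportion, with a design effect for cluster sampling.
def coverage_ci(covered, sampled, deff=1.0, z=1.96):
    p = covered / sampled
    se = math.sqrt(deff * p * (1.0 - p) / sampled)
    return p, max(0.0, p - z * se), min(1.0, p + z * se)

p, lo, hi = coverage_ci(covered=612, sampled=900, deff=2.0)
print(f"coverage = {p:.3f}, 95% CI = ({lo:.3f}, {hi:.3f})")
# The interval reflects sampling error only; non-sampling error such as
# recall bias shifts the point estimate and is invisible here.
```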

  19. ANALYSIS OF CASE-CONTROL DATA WITH COVARIATE MEASUREMENT ERROR: APPLICATION TO DIET AND COLON CANCER

    Science.gov (United States)

    We propose a method for estimating odds ratios from case-control data in which covariates are subject to measurement error. The measurement error may contain both a random component and a systematic difference between cases and controls (recall bias). Multivariate normal discriminant ...

  20. Errors of first-order probe correction for higher-order probes in spherical near-field antenna measurements

    DEFF Research Database (Denmark)

    Laitinen, Tommi; Nielsen, Jeppe Majlund; Pivnenko, Sergiy; Breinbjerg, Olav

    An investigation is performed to study the error of the far-field pattern determined from a spherical near-field antenna measurement in the case where a first-order (μ = ±1) probe correction scheme is applied to the near-field signal measured by a higher-order probe....

  1. Identification of Error Sources in High Precision Weight Measurements of Gyroscopes

    CERN Document Server

    Lőrincz, I

    2015-01-01

    A number of weight anomalies have been reported in the past with respect to gyroscopes. Much attention was gained from a paper in Physical Review Letters, when Japanese scientists announced that a gyroscope loses weight up to $0.005\\%$ when spinning only in the clockwise rotation with the gyroscope's axis in the vertical direction. Immediately afterwards, a number of other teams tried to replicate the effect, obtaining a null result. It was suggested that the reported effect by the Japanese was probably due to a vibration artifact, however, no final conclusion on the real cause has been obtained. We decided to build a dedicated high precision setup to test weight anomalies of spinning gyroscopes in various configurations. A number of error sources like precession and vibration and the nature of their influence on the measurements have been clearly identified, which led to the conclusive explanation of the conflicting reports. We found no anomaly within $\\Delta m/m<2.6 \\times 10^{-6}$ valid for both horizon...

  2. Error analysis for intrinsic quality factor measurement in superconducting radio frequency resonators

    Science.gov (United States)

    Melnychuk, O.; Grassellino, A.; Romanenko, A.

    2014-12-01

    In this paper, we discuss error analysis for intrinsic quality factor (Q0) and accelerating gradient (Eacc) measurements in superconducting radio frequency (SRF) resonators. The analysis is applicable for cavity performance tests that are routinely performed at SRF facilities worldwide. We review the sources of uncertainties along with the assumptions on their correlations and present uncertainty calculations with a more complete procedure for treatment of correlations than in previous publications [T. Powers, in Proceedings of the 12th Workshop on RF Superconductivity, SuP02 (Elsevier, 2005), pp. 24-27]. Applying this approach to cavity data collected at Vertical Test Stand facility at Fermilab, we estimated total uncertainty for both Q0 and Eacc to be at the level of approximately 4% for input coupler coupling parameter β1 in the [0.5, 2.5] range. Above 2.5 (below 0.5) Q0 uncertainty increases (decreases) with β1 whereas Eacc uncertainty, in contrast with results in Powers [in Proceedings of the 12th Workshop on RF Superconductivity, SuP02 (Elsevier, 2005), pp. 24-27], is independent of β1. Overall, our estimated Q0 uncertainty is approximately half as large as that in Powers [in Proceedings of the 12th Workshop on RF Superconductivity, SuP02 (Elsevier, 2005), pp. 24-27].

  3. Impact of instrumental systematic errors on fine-structure constant measurements with quasar spectra

    Science.gov (United States)

    Whitmore, Jonathan B.; Murphy, Michael T.

    2015-02-01

    We present a new `supercalibration' technique for measuring systematic distortions in the wavelength scales of high-resolution spectrographs. By comparing spectra of `solar twin' stars or asteroids with a reference laboratory solar spectrum, distortions in the standard thorium-argon calibration can be tracked with ˜10 m s-1 precision over the entire optical wavelength range on scales of both echelle orders (˜50-100 Å) and entire spectrographs arms (˜1000-3000 Å). Using archival spectra from the past 20 yr, we have probed the supercalibration history of the Very Large Telescope-Ultraviolet and Visible Echelle Spectrograph (VLT-UVES) and Keck-High Resolution Echelle Spectrograph (HIRES) spectrographs. We find that systematic errors in their wavelength scales are ubiquitous and substantial, with long-range distortions varying between typically ±200 m s-1 per 1000 Å. We apply a simple model of these distortions to simulated spectra that characterize the large UVES and HIRES quasar samples which previously indicated possible evidence for cosmological variations in the fine-structure constant, α. The spurious deviations in α produced by the model closely match important aspects of the VLT-UVES quasar results at all redshifts and partially explain the HIRES results, though not self-consistently at all redshifts. That is, the apparent ubiquity, size and general characteristics of the distortions are capable of significantly weakening the evidence for variations in α from quasar absorption lines.

  4. A new analysis of fine-structure constant measurements and modelling errors from quasar absorption lines

    CERN Document Server

    Wilczynska, Michael R; King, Julian A; Murphy, Michael T; Bainbridge, Matthew B; Flambaum, Victor V

    2015-01-01

    We present an analysis of 23 absorption systems along the lines of sight towards 18 quasars in the redshift range of $0.4 \\leq z_{abs} \\leq 2.3$ observed on the Very Large Telescope (VLT) using the Ultraviolet and Visual Echelle Spectrograph (UVES). Considering both statistical and systematic error contributions we find a robust estimate of the weighted mean deviation of the fine-structure constant from its current, laboratory value of $\\Delta\\alpha/\\alpha=\\left(0.22\\pm0.23\\right)\\times10^{-5}$, consistent with the dipole variation reported in Webb et al. and King et al. This paper also examines modelling methodologies and systematic effects. In particular we focus on the consequences of fitting quasar absorption systems with too few absorbing components and of selectively fitting only the stronger components in an absorption complex. We show that using insufficient continuum regions around an absorption complex causes a significant increase in the scatter of a sample of $\\Delta\\alpha/\\alpha$ measurements, th...

  5. Statistical Inference for Regression Models with Covariate Measurement Error and Auxiliary Information.

    Science.gov (United States)

    You, Jinhong; Zhou, Haibo

    2009-01-01

    We consider statistical inference on a regression model in which some covariables are measured with errors together with an auxiliary variable. The proposed estimation of the regression coefficients is based on estimating equations. This new method alleviates some drawbacks of previously proposed estimations, including the requirement of undersmoothing the regressor functions over the auxiliary variable and the restriction that the other covariables be observed exactly, among others. The large sample properties of the proposed estimator are established. We further propose a jackknife estimation, which consists of deleting one estimating equation (instead of one observation) at a time. We show that the jackknife estimator of the regression coefficients and the estimating-equations-based estimator are asymptotically equivalent. Simulations show that the jackknife estimator has smaller biases when the sample size is small or moderate. In addition, the jackknife estimation can also provide a consistent estimator of the asymptotic covariance matrix, which is robust to heteroscedasticity. We illustrate these methods by applying them to a real data set from marketing science.
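
    The delete-one jackknife idea is easy to demonstrate. The sketch below is a generic illustration, not the authors' estimator: plain least squares stands in for the estimating-equations estimator, and each observation's estimating equation is deleted in turn to obtain jackknife replicates and a covariance estimate.

```python
import numpy as np

# Generic illustration (plain OLS stands in for the authors' estimator):
# delete one estimating equation (one observation's contribution) at a time
# to obtain jackknife replicates and a covariance estimate.
rng = np.random.default_rng(2)
n = 200
X = np.column_stack((np.ones(n), rng.normal(size=n)))
y = X @ np.array([1.0, 2.0]) + rng.normal(scale=0.5, size=n)

def fit(X, y):
    return np.linalg.lstsq(X, y, rcond=None)[0]

beta_full = fit(X, y)
jack = np.array([fit(np.delete(X, i, axis=0), np.delete(y, i))
                 for i in range(n)])
dev = jack - jack.mean(axis=0)
cov_jack = (n - 1) / n * dev.T @ dev        # jackknife covariance
print(beta_full, np.sqrt(np.diag(cov_jack)))
```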

  6. Error in interpreting field chlorophyll fluorescence measurements: heat gain from solar radiation

    International Nuclear Information System (INIS)

    Temperature and chlorophyll fluorescence characteristics were determined on leaves of various horticultural species following a dark-adaptation period during which the dark-adaptation cuvettes were either shielded from or exposed to solar radiation. In one study, the temperature of Swietenia mahagoni (L.) Jacq. leaflets within cuvettes increased from approximately 36°C to approximately 50°C during a 30-minute exposure to solar radiation. Alternatively, when the leaflets and cuvettes were shielded from solar radiation, leaflet temperature declined to 33°C in 10 to 15 minutes. In a second study, 16 horticultural species exhibited a lower variable-to-maximum fluorescence ratio (Fv:Fm) when cuvettes were exposed to solar radiation during the 30-minute dark adaptation than when cuvettes were shielded. In a third study with S. mahagoni, the influence of self-shielding the cuvettes by wrapping them with white tape, white paper, or aluminum foil on temperature and fluorescence was compared with exposing or shielding the entire leaflet and cuvette. All of the shielding methods reduced leaflet temperature and increased the Fv:Fm ratio compared to leaving cuvettes exposed. These results indicate that heat stress from direct exposure to solar radiation is a potential source of error when interpreting chlorophyll fluorescence measurements on intact leaves. Methods for moderating or minimizing radiation interception during dark adaptation are recommended. (author)

  7. Statistical theory for estimating sampling errors of regional radiation averages based on satellite measurements

    Science.gov (United States)

    Smith, G. L.; Bess, T. D.; Minnis, P.

    1983-01-01

    The processes which determine the weather and climate are driven by the radiation received by the earth and the radiation subsequently emitted. A knowledge of the absorbed and emitted components of radiation is thus fundamental for the study of these processes. In connection with the desire to improve the quality of long-range forecasting, NASA is developing the Earth Radiation Budget Experiment (ERBE), consisting of a three-channel scanning radiometer and a package of nonscanning radiometers. A set of these instruments is to be flown on both the NOAA-F and NOAA-G spacecraft, in sun-synchronous orbits, and on an Earth Radiation Budget Satellite. The purpose of the scanning radiometer is to obtain measurements from which the average reflected solar radiant exitance and the average earth-emitted radiant exitance at a reference level can be established. The estimate of regional average exitance obtained will not exactly equal the true value of the regional average exitance, but will differ due to spatial sampling. A method is presented for evaluating this spatial sampling error.

  8. Local measurement of error field using naturally rotating tearing mode dynamics in EXTRAP T2R

    CERN Document Server

    Sweeney, R M; Brunsell, P; Fridström, R; Volpe, F A

    2016-01-01

    An error field (EF) detection technique using the amplitude modulation of a naturally rotating tearing mode (TM) is developed and validated in the EXTRAP T2R reversed field pinch. The technique was used to identify intrinsic EFs of $m/n = 1/-12$, where $m$ and $n$ are the poloidal and toroidal mode numbers. The effect of the EF and of a resonant magnetic perturbation (RMP) on the TM, in particular on amplitude modulation, is modeled with a first-order solution of the Modified Rutherford Equation. In the experiment, the TM amplitude is measured as a function of the toroidal angle as the TM rotates rapidly in the presence of an unknown EF and a known, deliberately applied RMP. The RMP amplitude is fixed while the toroidal phase is varied from one discharge to the other, completing a full toroidal scan. Using three such scans with different RMP amplitudes, the EF amplitude and phase are inferred from the phases at which the TM amplitude maximizes. The estimated EF amplitude is consistent with other estimates (e....

  9. A comparison of techniques to optimize measurement of voltage changes in electrical impedance tomography by minimizing phase shift errors.

    Science.gov (United States)

    Fitzgerald, A J; Holder, D S; Eadie, L; Hare, C; Bayford, R H

    2002-06-01

    In electrical impedance tomography, errors due to stray capacitance may be reduced by optimization of the reference phase of the demodulator. Two possible methods, maximization of the demodulator output and minimization of the reciprocity error, have been assessed, applied either to each electrode combination individually or to all combinations as a whole. Using an EIT system with a single impedance measuring circuit and a multiplexer to address the 16 electrodes, the methods were tested on resistor-capacitor networks, saline-filled tanks and humans during variation of the saline concentration of a constant fluid volume in the stomach. Optimization of each channel individually gave less error, particularly on humans, and maximization of the output of the demodulator was more robust. This method is, therefore, recommended to optimize systems and reduce systematic errors with similar EIT systems. PMID:12166864

  10. Comparison of Error Estimations by DERs in One-Port S and SLO Calibrated VNA Measurements and Application

    CERN Document Server

    Yannopoulou, Nikolitsa

    2011-01-01

    In order to demonstrate the usefulness of the only one existing method for systematic error estimations in VNA (Vector Network Analyzer) measurements by using complex DERs (Differential Error Regions), we compare one-port VNA measurements after the two well-known calibration techniques: the quick reflection response, that uses only a single S (Short circuit) standard, and the time-consuming full one-port, that uses a triple of SLO standards (Short circuit, matching Load, Open circuit). For both calibration techniques, the comparison concerns: (a) a 3D geometric representation of the difference between VNA readings and measurements, and (b) a number of presentation figures for the DERs and their polar DEIs (Differential Error Intervals) of the reflection coefficient, as well as, the DERs and their rectangular DEIs of the corresponding input impedance. In this paper, we present the application of this method to an AUT (Antenna Under Test) selected to highlight the existence of practical cases in which the time ...

  11. Measurement-based analysis of error latency. [in computer operating system

    Science.gov (United States)

    Chillarege, Ram; Iyer, Ravishankar K.

    1987-01-01

    This paper demonstrates a practical methodology for the study of error latency under a real workload. The method is illustrated with sampled data on the physical memory activity, gathered by hardware instrumentation on a VAX 11/780 during the normal workload cycle of the installation. These data are used to simulate fault occurrence and to reconstruct the error discovery process in the system. The technique provides a means to study the system under different workloads and for multiple days. An approach to determine the percentage of undiscovered errors is also developed and a verification of the entire methodology is performed. This study finds that the mean error latency, in the memory containing the operating system, varies by a factor of 10 to 1 (in hours) between the low and high workloads. It is found that of all errors occurring within a day, 70 percent are detected in the same day, 82 percent within the following day, and 91 percent within the third day. The increase in failure rate due to latency is not so much a function of remaining errors but is dependent on whether or not there is a latent error.

  12. Spectral characteristics of time-dependent orbit errors in altimeter height measurements

    Science.gov (United States)

    Chelton, Dudley B.; Schlax, Michael G.

    1993-01-01

    A mean reference surface and time-dependent orbit errors are estimated simultaneously for each exact-repeat ground track from the first two years of Geosat sea level estimates based on the Goddard Earth Model (GEM)-T2 orbits. Motivated by orbit theory and empirical analysis of Geosat data, the time-dependent orbit errors are modeled as 1 cycle per revolution (cpr) sinusoids with slowly varying amplitude and phase. The method recovers the known 'bow tie effect' introduced by the existence of force model errors within the precision orbit determination (POD) procedure used to generate the GEM-T2 orbits. The bow tie pattern of 1-cpr orbit errors is characterized by small amplitudes near the middle and larger amplitudes (up to 160 cm in the 2 yr of data considered here) near the ends of each 5- to 6-day orbit arc over which the POD force model is integrated. A detailed examination of these bow tie patterns reveals the existence of daily modulations of the amplitudes of the 1-cpr sinusoid orbit errors, with typical and maximum peak-to-peak ranges of about 14 cm and 30 cm, respectively. The method also identifies a daily variation in the mean orbit error, with typical and maximum peak-to-peak ranges of about 6 and 30 cm, respectively, that is unrelated to the predominant 1-cpr orbit error. Application of the simultaneous solution method to the much less accurate Geosat height estimates based on the Naval Astronautics Group orbits leads to the conclusion that the accuracy of POD is not important for collinear altimetric studies of time-dependent mesoscale variability (wavelengths shorter than 1000 km), as long as the time-dependent orbit errors are dominated by 1-cpr variability and a long-arc (several orbital periods) orbit error estimation scheme, such as that presented here, is used.
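
    The core of the orbit-error model, estimating a 1-cpr sinusoid from along-track residuals, reduces to linear least squares on sine and cosine terms. The sketch below is our illustration with synthetic numbers; in practice the fit would be repeated per orbit arc so the amplitude and phase can vary slowly, as in the abstract's model.

```python
import numpy as np

# Synthetic example: estimate a 1-cycle-per-revolution orbit-error sinusoid
# from along-track height residuals by linear least squares on sin/cos terms.
rng = np.random.default_rng(3)
t = np.linspace(0.0, 4.0, 400)              # time in orbital revolutions
truth = 0.8 * np.sin(2 * np.pi * t + 0.6)   # 1-cpr error, 0.8 m amplitude
resid = truth + rng.normal(scale=0.1, size=t.size)

A = np.column_stack((np.ones_like(t),       # mean offset
                     np.sin(2 * np.pi * t),
                     np.cos(2 * np.pi * t)))
c0, cs, cc = np.linalg.lstsq(A, resid, rcond=None)[0]
print(f"amplitude = {np.hypot(cs, cc):.2f} m, phase = {np.arctan2(cc, cs):.2f} rad")
```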

  13. Impact of shrinking measurement error budgets on qualification metrology sampling and cost

    Science.gov (United States)

    Sendelbach, Matthew; Sarig, Niv; Wakamoto, Koichi; Kim, Hyang Kyun (Helen); Isbester, Paul; Asano, Masafumi; Matsuki, Kazuto; Vaid, Alok; Osorio, Carmen; Archie, Chas

    2014-04-01

    When designing an experiment to assess the accuracy of a tool as compared to a reference tool, semiconductor metrologists are often confronted with the situation that they must decide on the sampling strategy before the measurements begin. This decision is usually based largely on the previous experience of the metrologist and the available resources, and not on the statistics that are needed to achieve acceptable confidence limits on the final result. This paper shows a solution to this problem, called inverse TMU analysis, by presenting statistically-based equations that allow the user to estimate the needed sampling after providing appropriate inputs, allowing him to make important "risk vs. reward" sampling, cost, and equipment decisions. Application examples using experimental data from scatterometry and critical dimension scanning electron microscope (CD-SEM) tools are used first to demonstrate how the inverse TMU analysis methodology can be used to make intelligent sampling decisions before the start of the experiment, and then to reveal why low sampling can lead to unstable and misleading results. A model is developed that can help an experimenter minimize the costs associated both with increased sampling and with making wrong decisions caused by insufficient sampling. A second cost model is described that reveals the inadequacy of current TEM (Transmission Electron Microscopy) sampling practices and the enormous costs associated with TEM sampling that is needed to provide reasonable levels of certainty in the result. These high costs reach into the tens of millions of dollars for TEM reference metrology as the measurement error budgets reach angstrom levels. The paper concludes with strategies on how to manage and mitigate these costs.

  14. Simultaneous estimation of parameters in the bivariate Emax model.

    Science.gov (United States)

    Magnusdottir, Bergrun T; Nyquist, Hans

    2015-12-10

    In this paper, we explore inference in multi-response, nonlinear models. By multi-response, we mean models with m > 1 response variables and accordingly m relations. Each parameter/explanatory variable may appear in one or more of the relations. We study a system estimation approach for simultaneous computation and inference of the model and (co)variance parameters. For illustration, we fit a bivariate Emax model to diabetes dose-response data. Further, the bivariate Emax model is used in a simulation study that compares the system estimation approach to equation-by-equation estimation. We conclude that overall, the system estimation approach performs better for the bivariate Emax model when there are dependencies among relations. The stronger the dependencies, the more we gain in precision by using system estimation rather than equation-by-equation estimation. PMID:26190048
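
    For reference, the Emax relation itself is easy to fit in the univariate case. The sketch below uses hypothetical dose-response data, not the paper's diabetes data, and fits a single Emax curve by nonlinear least squares; the paper's bivariate model couples two such relations through correlated errors, which this equation-by-equation sketch deliberately omits.

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical dose-response data: fit a single Emax curve by nonlinear
# least squares. E(d) = E0 + Emax * d / (ED50 + d).
def emax(dose, e0, emax_, ed50):
    return e0 + emax_ * dose / (ed50 + dose)

rng = np.random.default_rng(4)
dose = np.array([0.0, 5.0, 10.0, 25.0, 50.0, 100.0])
resp = emax(dose, 1.0, 10.0, 20.0) + rng.normal(scale=0.3, size=dose.size)

params, cov = curve_fit(emax, dose, resp, p0=[1.0, 8.0, 15.0])
print("E0, Emax, ED50 =", params)
print("standard errors =", np.sqrt(np.diag(cov)))
```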

  15. Z-boson-exchange contributions to the luminosity measurements at LEP and c.m.s.-energy-dependent theoretical errors

    International Nuclear Information System (INIS)

    The precision of the calculation of Z-boson-exchange contributions to the luminosity measurements at LEP is studied for both the first and second generation of LEP luminosity detectors. It is shown that the theoretical errors associated with these contributions are sufficiently small so that the high-precision measurements at LEP, based on the second generation of luminosity detectors, are not limited. The same is true for the c.m.s.-energy-dependent theoretical errors of the Z line-shape formulae. (author) 19 refs.; 3 figs.; 7 tabs

  16. Measurement Error Effect on the Power of Control Chart for Doubly Truncated Normal Distribution under Standardization Procedure

    Directory of Open Access Journals (Sweden)

    Ashit B. Chakraborty

    2015-09-01

    Full Text Available Researchers in various quality control procedures consider the possible effect of measurement error on the power of control charts to be an important issue. In this paper, the effect of measurement errors on the power curve of the standardization procedure is studied for the doubly truncated normal distribution. A method of obtaining the expression for the power of the control chart for the doubly truncated normal distribution is proposed, and the effect of truncation is shown accordingly. To study the sensitivity of the monitoring procedure, the average run length (ARL) is also considered.
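
    A Monte Carlo sketch can illustrate the qualitative effect being studied. The code below is our illustration with hypothetical parameters, not the paper's derivations: it estimates the power of a standardized X-bar chart when the process follows a doubly truncated normal distribution and each observation carries additive measurement error.

```python
import numpy as np
from scipy.stats import truncnorm

# Hypothetical parameters: Monte Carlo power of a standardized X-bar chart
# under a doubly truncated normal process with additive measurement error.
rng = np.random.default_rng(5)

def power(shift, n=5, m_sd=0.5, trunc=(-2.0, 2.0), n_sim=20_000, L=3.0):
    a, b = trunc  # truncation points (z-units around the process mean)
    hits = 0
    for _ in range(n_sim):
        true_vals = truncnorm.rvs(a, b, loc=shift, size=n, random_state=rng)
        observed = true_vals + rng.normal(0.0, m_sd, n)
        z = observed.mean() * np.sqrt(n)  # standardized, assuming unit variance
        hits += abs(z) > L
    return hits / n_sim

for shift in (0.0, 1.0, 2.0):
    print(shift, power(shift))
# Measurement error inflates the variance of the observed mean, flattening
# the power curve relative to the error-free chart.
```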

  17. Array processing——a new method to detect and correct errors on array resistivity logging tool measurements

    Institute of Scientific and Technical Information of China (English)

    Philip D.RABINOWITZ; Zhiqiang ZHOU

    2007-01-01

    In recent years more and more multi-array logging tools, such as the array induction and the array laterolog, are applied in place of conventional logging tools, resulting in increased resolution, better radial and vertical sounding capability and other features. Multi-array logging tools acquire several times more individual measurements than conventional logging tools. In addition to the new information contained in these data, there is a certain redundancy among the measurements. Together, the measurements compose a large matrix. Provided the measurements are error-free, the elements of this matrix show certain consistencies. Taking advantage of these consistencies, an innovative method is developed to detect and correct errors in the raw measurements of array resistivity logging tools and to evaluate the quality of the data. The method can be described in several steps. First, data consistency patterns are identified based on the physics of the measurements. Second, the measurements are compared against the consistency patterns to detect errors and bad data. Third, the erroneous data are eliminated and the measurements are re-constructed according to the consistency patterns. Finally, the data quality is evaluated by comparing the raw measurements with the re-constructed measurements. The method can be applied to all array-type logging tools, such as the array induction tool and the array resistivity tool. This paper describes the method and illustrates its application with the High Definition Lateral Log (HDLL, Baker Atlas) instrument. To demonstrate the efficiency of the method, several field examples are shown and discussed.
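
    The consistency-based strategy can be illustrated generically. The sketch below is not the HDLL algorithm: it uses a low-rank SVD reconstruction as a stand-in for the physics-based consistency patterns, flags entries that deviate strongly from the reconstruction, and repairs them from the consistent part.

```python
import numpy as np

# Generic illustration (not the HDLL algorithm): exploit redundancy among
# array measurements by comparing each reading with a low-rank SVD
# reconstruction of the measurement matrix.
rng = np.random.default_rng(6)

depths, channels = 200, 6
trend = np.linspace(1.0, 3.0, depths)[:, None]
M = trend * np.linspace(1.0, 1.5, channels)[None, :]  # consistent, rank-1 data
M_noisy = M + rng.normal(scale=0.01, size=M.shape)
M_noisy[50, 2] += 2.0                                 # inject one bad reading

U, s, Vt = np.linalg.svd(M_noisy, full_matrices=False)
M_hat = (U[:, :1] * s[:1]) @ Vt[:1, :]                # rank-1 reconstruction

resid = np.abs(M_noisy - M_hat)
bad = np.argwhere(resid > 5.0 * resid.std())
print("flagged entries:", bad)                        # expect [[50, 2]]

M_clean = M_noisy.copy()
M_clean[tuple(bad.T)] = M_hat[tuple(bad.T)]           # repair from consistency
```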

  18. An empirical assessment of exposure measurement errors and effect attenuation in bi-pollutant epidemiologic models

    Science.gov (United States)

    Using multipollutant models to understand the combined health effects of exposure to multiple pollutants is becoming more common. However, the complex relationships between pollutants and differing degrees of exposure error across pollutants can make health effect estimates from ...

  19. Low relative error in consumer-grade GPS units make them ideal for measuring small-scale animal movement patterns.

    Science.gov (United States)

    Breed, Greg A; Severns, Paul M

    2015-01-01

    Consumer-grade GPS units are a staple of modern field ecology, but the relatively large error radii reported by manufacturers (up to 10 m) ostensibly preclude their utility in measuring fine-scale movement of small animals such as insects. Here we demonstrate that for data collected at fine spatio-temporal scales, these devices can produce exceptionally accurate data on step length and movement patterns of small animals. With an understanding of the properties of GPS error and how it arises, it is possible, using a simple field protocol, to use consumer-grade GPS units to collect step-length data for the movement of small animals that introduces a median error as small as 11 cm. These small error rates were measured in controlled observations of real butterfly movement. Similar conclusions were reached using a ground-truth test track prepared with a field tape and compass and subsequently measured 20 times using the same methodology as the butterfly tracking. Median error in the ground-truth track was slightly higher than in the field data, mostly between 20 and 30 cm, but even for the smallest ground-truth step (70 cm), this is still a signal-to-noise ratio of 3:1, and for steps of 3 m or more, the ratio is greater than 10:1. Such small errors relative to the movements being measured make these inexpensive units useful for measuring insect and other small animal movements on small to intermediate scales with budgets orders of magnitude lower than the survey-grade units used in past studies. As an additional advantage, these units are simpler to operate, and insect or other small animal trackways can be collected more quickly than with either survey-grade units or more traditional ruler/grid approaches. PMID:26312190

  20. BIVARIATE LAGRANGE-TYPE VECTOR VALUED RATIONAL INTERPOLANTS

    Institute of Scientific and Technical Information of China (English)

    Chuan-qing Gu; Gong-qing Zhu

    2002-01-01

    An axiomatic definition of bivariate vector-valued rational interpolation on distinct plane interpolation points is first presented in this paper. A two-variable vector-valued rational interpolation formula is explicitly constructed in the following form: determinantal formulas for the denominator scalar polynomials and the numerator vector polynomials, which possess Lagrange-type basic function expressions. A practical criterion for the existence and uniqueness of the interpolation is obtained. In contrast to the underlying method, the method of bivariate Thiele-type vector-valued rational interpolation is reviewed.

  1. Error Analysis in Measured Conductivity under Low Induction Number Approximation for Electromagnetic Methods

    OpenAIRE

    George Caminha-Maciel; Irineu Figueiredo

    2013-01-01

    We present an analysis of the error involved in the so-called low induction number approximation in the electromagnetic methods. In particular, we focus on the EM34 equipment settings and field configurations, widely used for geophysical prospecting of laterally electrical conductivity anomalies and shallow targets. We show the theoretical error for the conductivity in both vertical and horizontal dipole coil configurations within the low induction number regime and up to the maximum measurin...

  2. A new multivariate measurement error model with zero-inflated dietary data, and its application to dietary assessment

    OpenAIRE

    Zhang, Saijuan; Midthune, Douglas; Guenther, Patricia M.; Krebs-Smith, Susan M; Kipnis, Victor; Dodd, Kevin W; Buckman, Dennis W.; Tooze, Janet A.; Freedman, Laurence; Carroll, Raymond J.

    2011-01-01

    In the United States the preferred method of obtaining dietary intake data is the 24-hour dietary recall, yet the measure of most interest is usual or long-term average daily intake, which is impossible to measure. Thus, usual dietary intake is assessed with considerable measurement error. Also, diet represents numerous foods, nutrients and other components, each of which have distinctive attributes. Sometimes, it is useful to examine intake of these components separately, but increasingly nu...

  3. Quantitative estimation of the influence of external vibrations on the measurement error of a coriolis mass-flow meter

    OpenAIRE

    de Ridder, J.; Hakvoort, W.B.J.; van Dijk; Lötters, J.C.; de Boer, J.W.; Dimitrovova, Z.; De Almeida

    2013-01-01

    In this paper the quantitative influence of external vibrations on the measurement value of a Coriolis Mass-Flow Meter for low flows is investigated, with the eventual goal of reducing the influence of vibrations. Model results are compared with experimental results to improve the knowledge of how external vibrations affect the measurement error. A Coriolis Mass-Flow Meter (CMFM) is an active device based on the Coriolis force principle for direct mass-flow measurements, independent of fluid pr...

  4. Solving Inverse Radiation Transport Problems with Multi-Sensor Data in the Presence of Correlated Measurement and Modeling Errors

    Energy Technology Data Exchange (ETDEWEB)

    Thomas, Edward V. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Stork, Christopher L. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Mattingly, John K. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2015-07-01

    Inverse radiation transport focuses on identifying the configuration of an unknown radiation source given its observed radiation signatures. The inverse problem is traditionally solved by finding the set of transport model parameter values that minimizes a weighted sum of the squared differences, by channel, between the observed signature and the signature predicted by the hypothesized model parameters. The weights are inversely proportional to the sum of the variances of the measurement and model errors at a given channel. The traditional implicit (often inaccurate) assumption is that the errors (differences between the modeled and observed radiation signatures) are independent across channels. Here, an alternative method that accounts for correlated errors between channels is described and illustrated using an inverse problem based on the combination of gamma and neutron multiplicity counting measurements.
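
    The difference between the traditional weighting and the alternative can be shown in a few lines. The sketch below uses a generic linear model with made-up numbers, not the report's transport model: it contrasts weighted least squares, which uses only per-channel variances, with generalized least squares, which uses the full error covariance and thereby accounts for correlations between channels.

```python
import numpy as np

# Made-up linear signature model y = X @ theta + e with correlated
# channel errors; compare WLS (variances only) against GLS (full covariance).
rng = np.random.default_rng(7)

n_chan, n_par = 30, 3
X = rng.normal(size=(n_chan, n_par))
theta_true = np.array([2.0, -1.0, 0.5])

# AR(1)-style covariance of correlated channel errors (hypothetical).
rho, sd = 0.7, 0.2
idx = np.arange(n_chan)
C = sd**2 * rho ** np.abs(np.subtract.outer(idx, idx))
y = X @ theta_true + rng.multivariate_normal(np.zeros(n_chan), C)

W = np.diag(1.0 / np.diag(C))              # WLS weights: variances only
theta_wls = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)

Cinv = np.linalg.inv(C)                    # GLS: full covariance
theta_gls = np.linalg.solve(X.T @ Cinv @ X, X.T @ Cinv @ y)

print("WLS estimate:", theta_wls)
print("GLS estimate:", theta_gls)          # minimum-variance linear estimator
```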

  5. Improved error estimates of a discharge algorithm for remotely sensed river measurements: Test cases on Sacramento and Garonne Rivers

    Science.gov (United States)

    Yoon, Yeosang; Garambois, Pierre-André; Paiva, Rodrigo C. D.; Durand, Michael; Roux, Hélène; Beighley, Edward

    2016-01-01

    We present an improvement to a previously presented algorithm that used a Bayesian Markov Chain Monte Carlo method for estimating river discharge from remotely sensed observations of river height, width, and slope. We also present an error budget for discharge calculations from the algorithm. The algorithm may be utilized by the upcoming Surface Water and Ocean Topography (SWOT) mission. We present a detailed evaluation of the method using synthetic SWOT-like observations (i.e., SWOT and AirSWOT, an airborne version of SWOT). The algorithm is evaluated using simulated AirSWOT observations over the Sacramento and Garonne Rivers that have differing hydraulic characteristics. The algorithm is also explored using SWOT observations over the Sacramento River. SWOT and AirSWOT height, width, and slope observations are simulated by corrupting the "true" hydraulic modeling results with instrument error. Algorithm discharge root mean square error (RMSE) was 9% for the Sacramento River and 15% for the Garonne River for the AirSWOT case using expected observation error. The discharge uncertainty calculated from Manning's equation was 16.2% and 17.1%, respectively. For the SWOT scenario, the RMSE and uncertainty of the discharge estimate for the Sacramento River were 15% and 16.2%, respectively. A method based on the Kalman filter to correct errors of discharge estimates was shown to improve algorithm performance. From the error budget, the primary source of uncertainty was the a priori uncertainty of bathymetry and roughness parameters. Sensitivity to measurement errors was found to be a function of river characteristics. For example, the steeper Garonne River is less sensitive to slope errors than the flatter Sacramento River.
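
    Since the discharge uncertainty quoted above is referenced to Manning's equation, a minimal sketch of that calculation follows; the width, depth, slope, and roughness values are illustrative, not Sacramento or Garonne data, and a wide-channel approximation (hydraulic radius ~ depth) is assumed.

      def manning_discharge(width_m, depth_m, slope, n_roughness):
          """Q = (1/n) * A * R**(2/3) * S**0.5, with R ~ depth for a wide river."""
          area = width_m * depth_m
          hydraulic_radius = depth_m  # wide-channel approximation
          return area * hydraulic_radius**(2 / 3) * slope**0.5 / n_roughness

      q_nominal = manning_discharge(width_m=120.0, depth_m=3.0, slope=2e-4,
                                    n_roughness=0.03)
      # The error budget points at a priori roughness/bathymetry uncertainty:
      # a 20% error in n maps into a comparable relative discharge error.
      q_biased = manning_discharge(120.0, 3.0, 2e-4, 0.03 * 1.2)
      print(q_nominal, q_biased, (q_biased - q_nominal) / q_nominal)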

  6. A study on fatigue measurement of operators for human error prevention in NPPs

    International Nuclear Information System (INIS)

    The identification and analysis of individual factors of operators, one of the various causes of adverse effects on human performance, is not easy in NPPs. Individual factors for operators include work type (including shift work), environment, personality, qualification, training, education, cognition, fatigue, job stress, and workload. Research at the Finnish Institute of Occupational Health (FIOH) reported that 'burnout' (extreme fatigue) is related to alcohol-dependent habits and must be dealt with using a stress management program. The USNRC (U.S. Nuclear Regulatory Commission) developed FFD (Fitness for Duty) requirements for improving task efficiency and preventing human errors. 'Managing Fatigue' in 10 CFR 26 presents requirements to control operator fatigue in NPPs. The committee explained that excessive fatigue is due to stressful work environments, working hours, shifts, sleep disorders, and unstable circadian rhythms. In addition, the International Labour Organization (ILO) developed and suggested a checklist to manage fatigue and job stress. Domestically, a systematic evaluation is prescribed by chapter 18, Human Factors, of the Final Safety Analysis Report (FSAR) in the licensing process; however, it focuses mostly on interface design, such as the HMI (Human Machine Interface), rather than on individual factors. In particular, because Korea is in the process of exporting NPPs to the UAE, developing and establishing fatigue management techniques is important and urgent in order to present technical standards and FFD criteria to the UAE. It is also anticipated that the domestic regulatory body will adopt the FFD program as a regulatory requirement, so preparation for that situation is required. In this paper, previous research is surveyed to identify fatigue measurement and evaluation methods for operators in high-reliability industries. This study also reviews the NRC report and discusses the causal factors and management

  7. Response of residential electricity demand to price: The effect of measurement error

    International Nuclear Information System (INIS)

    In this paper we present an empirical analysis of the residential demand for electricity using annual aggregate data at the state level for 48 US states from 1995 to 2007. Earlier literature has examined residential energy consumption at the state level using annual or monthly data, focusing on the variation in price elasticities of demand across states or regions, but has failed to recognize or address two major issues. The first is that, when fitting dynamic panel models, the lagged consumption term on the right-hand side of the demand equation is endogenous. This has resulted in potentially inconsistent estimates of the long-run price elasticity of demand. The second is that energy price is likely mismeasured. To address these issues, we estimate a dynamic partial adjustment model using the Kiviet (1995) bias-corrected Least Squares Dummy Variables (LSDV) estimator and the Blundell-Bond (1998) system GMM estimator. We find that the long-run elasticities produced by the Blundell-Bond system GMM method are the largest, and that the estimate from the bias-corrected LSDV is greater than that from the conventional LSDV. From an energy policy point of view, the results obtained using the Blundell-Bond estimator, where we instrument for price, imply that a carbon tax or other price-based policy may be effective in discouraging residential electricity consumption and hence curbing greenhouse gas emissions in an electricity system mainly based on coal and gas power plants. - Research Highlights: → Updated information on price elasticities for US energy policy. → Taking into account measurement error in the price variable increases the estimated price elasticity. → Room for discouraging residential electricity consumption using price increases.
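
    The mechanics behind the long-run elasticity in a partial adjustment model fit in a few lines; the coefficients below are illustrative assumptions, not the paper's estimates.

      # Partial adjustment: ln(q_t) = a + lam*ln(q_{t-1}) + b*ln(p_t) + e_t.
      # The short-run price elasticity is b; the long-run elasticity is b/(1-lam).
      def long_run_elasticity(b_short_run, lam_lagged):
          return b_short_run / (1.0 - lam_lagged)

      # Classical measurement error in price attenuates the estimated b toward
      # zero, so correcting for it (e.g., by instrumenting price) yields a
      # larger elasticity in magnitude.
      print(long_run_elasticity(-0.15, 0.7))  # naive, attenuated: -0.5
      print(long_run_elasticity(-0.24, 0.7))  # error-corrected:   -0.8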

  8. A study on fatigue measurement of operators for human error prevention in NPPs

    Energy Technology Data Exchange (ETDEWEB)

    Oh, Yeon Ju; Jang, Tong Il; Luo, Meiling; Lee, Young Hee [KAERI, Daejeon (Korea, Republic of)

    2012-10-15

    The identification and analysis of individual factors of operators, one of the various causes of adverse effects on human performance, is not easy in NPPs. Individual factors for operators include work type (including shift work), environment, personality, qualification, training, education, cognition, fatigue, job stress, and workload. Research at the Finnish Institute of Occupational Health (FIOH) reported that 'burnout' (extreme fatigue) is related to alcohol-dependent habits and must be dealt with using a stress management program. The USNRC (U.S. Nuclear Regulatory Commission) developed FFD (Fitness for Duty) requirements for improving task efficiency and preventing human errors. 'Managing Fatigue' in 10 CFR 26 presents requirements to control operator fatigue in NPPs. The committee explained that excessive fatigue is due to stressful work environments, working hours, shifts, sleep disorders, and unstable circadian rhythms. In addition, the International Labour Organization (ILO) developed and suggested a checklist to manage fatigue and job stress. Domestically, a systematic evaluation is prescribed by chapter 18, Human Factors, of the Final Safety Analysis Report (FSAR) in the licensing process; however, it focuses mostly on interface design, such as the HMI (Human Machine Interface), rather than on individual factors. In particular, because Korea is in the process of exporting NPPs to the UAE, developing and establishing fatigue management techniques is important and urgent in order to present technical standards and FFD criteria to the UAE. It is also anticipated that the domestic regulatory body will adopt the FFD program as a regulatory requirement, so preparation for that situation is required. In this paper, previous research is surveyed to identify fatigue measurement and evaluation methods for operators in high-reliability industries. This study also reviews the NRC report and discusses the causal factors and

  9. An improved estimator for the hydration of fat-free mass from in vivo measurements subject to additive technical errors

    International Nuclear Information System (INIS)

    The hydration of fat-free mass, or hydration fraction (HF), is often defined as a constant body composition parameter in a two-compartment model and then estimated from in vivo measurements. We showed that the widely used estimator for the HF parameter in this model, the mean of the ratios of measured total body water (TBW) to fat-free mass (FFM) in individual subjects, can be inaccurate in the presence of additive technical errors. We then proposed a new instrumental variables estimator that accurately estimates the HF parameter in the presence of such errors. In Monte Carlo simulations, the mean of the ratios of TBW to FFM was an inaccurate estimator of the HF parameter, and inferences based on it had actual type I error rates more than 13 times the nominal 0.05 level under certain conditions. The instrumental variables estimator was accurate and maintained an actual type I error rate close to the nominal level in all simulations. When estimating and performing inference on the HF parameter, the proposed instrumental variables estimator should yield accurate estimates and correct inferences in the presence of additive technical errors, but the mean of the ratios of TBW to FFM in individual subjects may not.
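
    A small simulation contrasts the two estimators. The instrument used here is a hypothetical independent replicate measurement of FFM; the actual instrumental variables construction in the record may differ, and all parameter values are assumptions.

      import numpy as np

      rng = np.random.default_rng(1)
      n, hf_true = 5000, 0.73

      ffm_true = rng.normal(55.0, 8.0, n)                 # kg fat-free mass
      tbw = hf_true * ffm_true + rng.normal(0.0, 1.0, n)  # additive technical error
      ffm = ffm_true + rng.normal(0.0, 3.0, n)            # additive technical error
      ffm2 = ffm_true + rng.normal(0.0, 3.0, n)           # replicate as instrument

      mean_of_ratios = np.mean(tbw / ffm)                 # the widely used estimator
      iv_slope = np.cov(ffm2, tbw)[0, 1] / np.cov(ffm2, ffm)[0, 1]

      print(f"true HF {hf_true}, mean of ratios {mean_of_ratios:.4f}, IV {iv_slope:.4f}")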

  10. On limit relations between some families of bivariate hypergeometric orthogonal polynomials

    International Nuclear Information System (INIS)

    In this paper we deal with limit relations between bivariate hypergeometric polynomials. We analyze the limit relation from trinomial distribution to bivariate Gaussian distribution, obtaining the limit transition from the second-order partial difference equation satisfied by bivariate hypergeometric Kravchuk polynomials to the second-order partial differential equation verified by bivariate hypergeometric Hermite polynomials. As a consequence the limit relation between both families of orthogonal polynomials is established. A similar analysis between bivariate Hahn and bivariate Appell orthogonal polynomials is also presented. (paper)

  11. Measurement errors in tipping bucket rain gauges under different rainfall intensities and their implication to hydrologic models

    Science.gov (United States)

    Measurements from tipping bucket rain gauges (TBRs) consist of systematic and random errors as an effect of external factors, such as mechanical limitations, wind effects, evaporation losses, and rainfall intensity. Two different models of TBRs, viz. ISCO-674 and TR-525 (Texas Instr., Inc.), being us...

  12. Characterization of positional errors and their influence on micro four-point probe measurements on a 100 nm Ru film

    International Nuclear Information System (INIS)

    Thin-film sheet resistance measurements at high spatial resolution and on small pads are important and can be realized with micrometer-scale four-point probes. As a result of the small scale the measurements are affected by electrode position errors. We have characterized the electrode position errors in measurements on Ru thin film using an Au-coated 12-point probe. We show that the standard deviation of the static electrode position error is on the order of 5 nm, which significantly affects the results of single configuration measurements. Position-error-corrected dual-configuration measurements, however, are shown to eliminate the effect of position errors to a level limited either by electrical measurement noise or dynamic position errors. We show that the probe contact points remain almost static on the surface during the measurements (measured on an atomic scale) with a standard deviation of the dynamic position errors of 3 Å. We demonstrate how to experimentally distinguish between different sources of measurement errors, e.g. electrical measurement noise, probe geometry error as well as static and dynamic electrode position errors. (paper)

  13. First measurement and correction of nonlinear errors in the experimental insertions of the CERN Large Hadron Collider

    Science.gov (United States)

    Maclean, E. H.; Tomás, R.; Giovannozzi, M.; Persson, T. H. B.

    2015-12-01

    Nonlinear magnetic errors in low-β insertions can contribute significantly to detuning with amplitude and to linear and nonlinear chromaticity, and can lead to degradation of dynamic aperture and beam lifetime. As such, the correction of nonlinear errors in the experimental insertions of colliders can be of critical significance for successful operation. This is expected to be of particular relevance to the LHC's second run and its high luminosity upgrade, as well as to future colliders such as the Future Circular Collider. Current correction strategies envisioned for these colliders assume it will be possible to calculate optimized local corrections through the insertions, using a magnetic model of the errors. This paper shows, however, that reliance purely upon magnetic measurements of the nonlinear errors of insertion elements is insufficient to guarantee a good correction quality in the relevant low-β* regime. It is possible to perform beam-based examination of nonlinear magnetic errors via the feed-down to readily observed beam properties upon application of closed orbit bumps, and methods based upon feed-down to tune have been utilized at RHIC, SIS18, and SPS. This paper demonstrates the extension of such methodology to include direct observation of feed-down to linear coupling in the LHC. It is further shown that such beam-based studies can be used to complement magnetic measurements performed during LHC construction, in order to validate and refine the magnetic model of the collider. Results from first attempts of the measurement and correction of nonlinear errors in the LHC experimental insertions are presented. Several discrepancies of beam-based studies with respect to the LHC magnetic model are reported.
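
    The feed-down mechanism exploited by such beam-based methods amounts to re-expanding the multipole series about a shifted orbit: in the expansion B_y + iB_x = sum_n c_n (x + iy)^n, a horizontal offset dx feeds an order-n multipole down into every lower order with binomial weights, which is what makes tune and coupling shifts under closed orbit bumps observable. A minimal sketch (coefficients and offset are illustrative, with no particular units):

      import numpy as np
      from math import comb

      def feed_down(multipoles, dx):
          """Coefficients seen by the beam after a horizontal orbit offset dx:
          sum_n c_n (z + dx)^n is re-expanded in powers of z = x + iy."""
          shifted = np.zeros(len(multipoles), dtype=complex)
          for n, c in enumerate(multipoles):
              for k in range(n + 1):
                  shifted[k] += c * comb(n, k) * dx ** (n - k)
          return shifted

      c = np.zeros(4, dtype=complex)
      c[3] = 1.0  # a pure normal octupole in this expansion
      print(feed_down(c, dx=0.01))  # feeds down to sextupole, quadrupole
                                    # (tune shift) and dipole (orbit) terms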

  14. Score Tests of Normality in Bivariate Probit Models

    OpenAIRE

    Anthony Murphy

    2005-01-01

    A relatively simple and convenient score test of normality in the bivariate probit model is derived. Monte Carlo simulations show that the small sample performance of the bootstrapped test is quite good. The test may be readily extended to testing normality in related models.

  15. The Statistical Relationship between Bivariate and Multinomial Choice Models

    OpenAIRE

    Weeks, Melvyn; Orne, Chris

    1999-01-01

    The authors demonstrate the conditions under which the bivariate probit model can be considered a special case of the more general multinomial probit model. Since the attendant parameter restrictions produce a singular covariance matrix, the subsequent problems of testing on the boundary of the parameter space are circumvented by the construction of a score test.

  16. Bivariate luminosity function of E and SO galaxies

    International Nuclear Information System (INIS)

    A function which describes the joint distribution of luminosity and radius of galaxies - the bivariate luminosity function (BLF) - is defined. A simple analytical formula for the shape of the BLF is proposed and fitted to the data for E and SO galaxies from the sample of a previous author. (author)

  17. A Bivariate Extension to Traditional Empirical Orthogonal Function Analysis

    DEFF Research Database (Denmark)

    Nielsen, Allan Aasbjerg; Hilger, Klaus Baggesen; Andersen, Ole Baltazar;

    2002-01-01

    with 24 variables. This type of analysis can be considered as an extension of traditional empirical orthogonal function (EOF) analysis which provides a marginal analysis of one variable over time. The motivation for using a bivariate extension stems from the fact that the two fields are interrelated as...

  18. Univariate and Bivariate Loglinear Models for Discrete Test Score Distributions.

    Science.gov (United States)

    Holland, Paul W.; Thayer, Dorothy T.

    2000-01-01

    Applied the theory of exponential families of distributions to the problem of fitting the univariate histograms and discrete bivariate frequency distributions that often arise in the analysis of test scores. Considers efficient computation of the maximum likelihood estimates of the parameters using Newton's Method and computationally efficient…
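
    A minimal version of such a fit, assuming a Poisson loglinear model for the histogram counts with a polynomial basis and Newton's method (Fisher scoring), might look as follows; the synthetic score distribution is an assumption for demonstration.

      import numpy as np

      def loglinear_fit(counts, degree, n_iter=25):
          """Fit log mu_j = sum_k beta_k * x_j**k to histogram counts by
          Poisson maximum likelihood using Newton's method."""
          scores = np.arange(len(counts), dtype=float)
          x = (scores - scores.mean()) / scores.std()    # stabilized basis
          X = np.vander(x, degree + 1, increasing=True)  # 1, x, x^2, ...
          beta = np.zeros(degree + 1)
          beta[0] = np.log(counts.mean() + 0.5)
          for _ in range(n_iter):
              mu = np.exp(X @ beta)
              grad = X.T @ (counts - mu)        # score vector
              hess = X.T @ (mu[:, None] * X)    # Fisher information
              beta += np.linalg.solve(hess, grad)
          return beta, np.exp(X @ beta)

      rng = np.random.default_rng(7)
      raw = rng.poisson(np.exp(4 - 0.002 * (np.arange(41) - 20.0) ** 2))
      beta, fitted = loglinear_fit(raw.astype(float), degree=2)
      print(fitted.round(1))  # smoothed version of the raw score histogram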

  19. On the importance of Task 1 and error performance measures in PRP dual-task studies.

    Science.gov (United States)

    Strobach, Tilo; Schütz, Anja; Schubert, Torsten

    2015-01-01

    The psychological refractory period (PRP) paradigm is a dominant research tool in the literature on dual-task performance. In this paradigm a first and second component task (i.e., Task 1 and Task 2) are presented with variable stimulus onset asynchronies (SOAs) and priority to perform Task 1. The main indicator of dual-task impairment in PRP situations is an increasing Task 2-RT with decreasing SOAs. This impairment is typically explained with some task components being processed strictly sequentially in the context of the prominent central bottleneck theory. This assumption could implicitly suggest that processes of Task 1 are unaffected by Task 2 and bottleneck processing, i.e., decreasing SOAs do not increase reaction times (RTs) and error rates of the first task. The aim of the present review is to assess whether PRP dual-task studies included both RT and error data presentations and statistical analyses and whether studies including both data types (i.e., RTs and error rates) show data consistent with this assumption (i.e., decreasing SOAs and unaffected RTs and/or error rates in Task 1). This review demonstrates that, in contrast to RT presentations and analyses, error data is underrepresented in a substantial number of studies. Furthermore, a substantial number of studies with RT and error data showed a statistically significant impairment of Task 1 performance with decreasing SOA. Thus, these studies produced data that is not primarily consistent with the strong assumption that processes of Task 1 are unaffected by Task 2 and bottleneck processing in the context of PRP dual-task situations; this calls for a more careful report and analysis of Task 1 performance in PRP studies and for a more careful consideration of theories proposing additions to the bottleneck assumption, which are sufficiently general to explain Task 1 and Task 2 effects. PMID:25904890

  20. On the importance of Task 1 and error performance measures in PRP dual-task studies

    Directory of Open Access Journals (Sweden)

    Tilo eStrobach

    2015-04-01

    Full Text Available The Psychological Refractory Period (PRP) paradigm is a dominant research tool in the literature on dual-task performance. In this paradigm a first and second component task (i.e., Task 1 and Task 2) are presented with variable stimulus onset asynchronies (SOAs) and priority to perform Task 1. The main indicator of dual-task impairment in PRP situations is an increasing Task 2-RT with decreasing SOAs. This impairment is typically explained with some task components being processed strictly sequentially in the context of the prominent central bottleneck theory. This assumption could implicitly suggest that processes of Task 1 are unaffected by Task 2 and bottleneck processing, i.e., decreasing SOAs do not increase RTs and error rates of the first task. The aim of the present review is to assess whether PRP dual-task studies included both RT and error data presentations and statistical analyses and whether studies including both data types (i.e., RTs and error rates) show data consistent with this assumption (i.e., decreasing SOAs and unaffected RTs and/or error rates in Task 1). This review demonstrates that, in contrast to RT presentations and analyses, error data is underrepresented in a substantial number of studies. Furthermore, a substantial number of studies with RT and error data showed a statistically significant impairment of Task 1 performance with decreasing SOA. Thus, these studies produced data that is not primarily consistent with the strong assumption that processes of Task 1 are unaffected by Task 2 and bottleneck processing in the context of PRP dual-task situations; this calls for a more careful report and analysis of Task 1 performance in PRP studies and for a more careful consideration of theories proposing additions to the bottleneck assumption, which are sufficiently general to explain Task 1 and Task 2 effects.

  1. Reduction of truncation errors in planar near-field aperture antenna measurements using the method of alternating orthogonal projections

    DEFF Research Database (Denmark)

    Martini, Enrica; Breinbjerg, Olav; Maci, Stefano

    A simple and effective procedure for the reduction of truncation error in planar near-field to far-field transformations is presented. The starting point is the consideration that the actual scan plane truncation implies that the reconstructed plane wave spectrum of the field radiated by the antenna is reliable only within a certain region inside the visible range. Then, the truncation error is reduced by a Maxwellian continuation of the reliable portion of the spectrum: after back propagating the measured field to the antenna plane, a condition of spatial concentration of the primary field...

  2. Reduction of Truncation Errors in Planar Near-Field Aperture Antenna Measurements Using the Gerchberg-Papoulis Algorithm

    DEFF Research Database (Denmark)

    Martini, Enrica; Breinbjerg, Olav; Maci, Stefano

    2008-01-01

    A simple and effective procedure for the reduction of truncation errors in planar near-field measurements of aperture antennas is presented. The procedure relies on the consideration that, due to the scan plane truncation, the calculated plane wave spectrum of the field radiated by the antenna is reliable only within a certain portion of the visible region. Accordingly, the truncation error is reduced by extrapolating the remaining portion of the visible region by the Gerchberg-Papoulis iterative algorithm, exploiting a condition of spatial concentration of the fields on the antenna aperture plane...
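
    The iteration described in this record (and the closely related alternating-projections procedure in the previous one) can be sketched in one dimension: alternate between enforcing the spatial concentration constraint and restoring the reliable portion of the spectrum. The masks and test signal below are synthetic assumptions.

      import numpy as np

      def gerchberg_papoulis(reliable_spec, reliable_mask, aperture_mask, n_iter=200):
          """Extrapolate a spectrum known only on reliable_mask by alternating
          a spatial support constraint with restoration of the known part."""
          spec = np.where(reliable_mask, reliable_spec, 0.0)
          for _ in range(n_iter):
              field = np.fft.ifft(spec)
              field = np.where(aperture_mask, field, 0.0)  # spatial concentration
              spec = np.fft.fft(field)
              spec[reliable_mask] = reliable_spec[reliable_mask]  # keep known part
          return spec

      n = 256
      idx = np.arange(n)
      aperture = (idx > 112) & (idx < 144)          # field confined to an aperture
      field_true = np.zeros(n)
      field_true[aperture] = np.hanning(aperture.sum())
      spec_true = np.fft.fft(field_true)
      reliable = np.abs(np.fft.fftfreq(n)) < 0.15   # trusted part of the spectrum

      rec = gerchberg_papoulis(spec_true, reliable, aperture)
      err0 = np.linalg.norm(np.where(reliable, spec_true, 0.0) - spec_true)
      err1 = np.linalg.norm(rec - spec_true)
      print(f"spectrum error, truncated: {err0:.3f}   after extrapolation: {err1:.3f}")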

  3. MEASUREMENT ERROR EFFECT ON THE POWER OF THE CONTROL CHART FOR ZERO-TRUNCATED BINOMIAL DISTRIBUTION UNDER STANDARDIZATION PROCEDURE

    Directory of Open Access Journals (Sweden)

    Anwer Khurshid

    2014-12-01

    Full Text Available Measurement error effects on the power of control charts for the zero-truncated Poisson distribution and the ratio of two Poisson distributions were recently studied by Chakraborty and Khurshid (2013a) and Chakraborty and Khurshid (2013b), respectively. In this paper, an expression for the power of the control chart for the ZTBD based on a standardized normal variate is obtained, and numerical calculations are presented to show the effect of errors on the power curve. To study the sensitivity of the monitoring procedure, the average run length (ARL) is also considered.
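
    The qualitative effect can be reproduced with a normal approximation: measurement error inflates the variance of the standardized variate, dilutes the standardized shift, and stretches the average run length. This is a generic Shewhart-style illustration under assumed parameter values, not the paper's ZTBD derivation.

      from scipy.stats import norm

      def power_and_arl(shift_sigma, me_ratio, L=3.0):
          """Per-sample detection power and ARL for a chart with +-L limits.
          me_ratio is measurement error variance over process variance."""
          inflation = (1.0 + me_ratio) ** 0.5
          delta = shift_sigma / inflation  # effective standardized shift
          power = norm.sf(L - delta) + norm.cdf(-L - delta)
          return power, 1.0 / power        # out-of-control ARL = 1 / power

      for m in (0.0, 0.5, 1.0):
          p, arl = power_and_arl(shift_sigma=2.0, me_ratio=m)
          print(f"error/process variance ratio {m:.1f}: power={p:.3f}  ARL={arl:.1f}")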

  4. Estimating the U.S. Demand for Sugar in the Presence of Measurement Error in the Data

    OpenAIRE

    Uri, Noel D

    1994-01-01

    Inaccuracy in the measurement of the price data for the substitute sweeteners for sugar is a problem encountered in the estimation of the demand for sugar. Two diagnostics are introduced to assess the effect that this measurement error has on the estimated coefficients of the sugar demand relationship. The regression coefficient bounds diagnostic is used to indicate a range in which the true price responsiveness of consumers to changes in the price of sugar substitutes lies. The bias correcti...
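
    The regression coefficient bounds diagnostic mentioned above rests on a classical errors-in-variables result: with a mismeasured regressor, the direct regression slope is attenuated toward zero while the reverse regression implies a slope that is too large in magnitude, so the true coefficient lies between the two. A small simulation (all values assumed):

      import numpy as np

      rng = np.random.default_rng(3)
      n, beta_true = 2000, -0.8
      x_true = rng.normal(0.0, 1.0, n)               # latent "true" price variable
      y = beta_true * x_true + rng.normal(0.0, 0.5, n)
      x = x_true + rng.normal(0.0, 0.6, n)           # price measured with error

      b_direct = np.cov(x, y)[0, 1] / np.var(x, ddof=1)   # attenuated toward zero
      b_reverse = np.var(y, ddof=1) / np.cov(x, y)[0, 1]  # from regressing x on y
      print(sorted([b_direct, b_reverse]))  # true slope (-0.8) lies between them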

  5. Computational method for the astral survey and the effect of measurement errors on the closed orbit distortion

    International Nuclear Information System (INIS)

    A computational method has been developed for the astral survey procedure of the primary monuments, which consists of measurements of short chords and perpendicular distances. The method can be applied to any astral polygon with lengths of chords and vertical angles that differ from each other. We study the propagation of measurement errors for the KEK-PF storage ring and examine their effect on the closed orbit distortion. (author)
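
    The propagation step can be illustrated with the standard closed-orbit formula: a survey/alignment error dx_i at a quadrupole of integrated strength kl acts as a dipole kick theta_i = kl*dx_i, and the closed orbit at s is sqrt(beta(s))/(2*sin(pi*nu)) * sum_i sqrt(beta_i)*theta_i*cos(|psi(s)-psi_i| - pi*nu). The toy lattice below (uniform beta, equal phase advances) is an illustrative assumption, not the KEK-PF optics.

      import numpy as np

      rng = np.random.default_rng(5)
      n_quads, nu = 40, 13.22               # number of quads and betatron tune
      beta = np.full(n_quads, 10.0)         # beta function at the quads (m)
      kl = 0.5                              # integrated quad strength (1/m)
      psi = np.cumsum(np.full(n_quads, 2 * np.pi * nu / n_quads))  # phases

      def cod_at(i_obs, misalign):
          """Closed orbit from dipole kicks theta_i = kl * dx_i at each quad."""
          theta = kl * misalign
          amp = np.sqrt(beta[i_obs]) / (2.0 * np.sin(np.pi * nu))
          return amp * np.sum(np.sqrt(beta) * theta
                              * np.cos(np.abs(psi[i_obs] - psi) - np.pi * nu))

      # Propagate a 0.2 mm rms alignment error through 500 random seeds.
      cods = [cod_at(0, rng.normal(0.0, 2e-4, n_quads)) for _ in range(500)]
      print("rms closed orbit distortion: %.3f mm" % (1e3 * np.std(cods)))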

  6. Measurements of setup error and physiological movement of liver by using electronic portal imaging device in patients with hepatocellular carcinoma

    International Nuclear Information System (INIS)

    The goal of this study was to improve the accuracy of three-dimensional conformal radiotherapy (3-D CRT) by measuring the treatment setup error and the physiological movement of the liver, based on the analysis of images obtained by an Electronic Portal Imaging Device (EPID). For 10 patients with hepatocellular carcinoma, 4-7 portal images were obtained daily from each patient by using the EPID during radiotherapy. We analyzed the setup error and physiological movement of the liver based on the verification data. We also determined the safety margin of the tumor in 3-D CRT through the analysis of the physiological movement. The setup errors were measured as 3 mm (standard deviation 1.70 mm) in the x direction and 3.7 mm (standard deviation 1.88 mm) in the y direction; hence, deviations were smaller than 5 mm from the center of each axis. The measured range of liver movement due to physiological motion was 8.63 mm on average. Considering the motion of the liver and the setup error, the safety margin of the tumor was at least 15 mm. The EPID is a very useful device for determining the optimal margin of the tumor, and it thus enhances the accuracy and stability of 3-D CRT in patients with hepatocellular carcinoma.

  7. Correction of error in two-dimensional wear measurements of cemented hip arthroplasties

    NARCIS (Netherlands)

    The, Bertram; Mol, Linda; Diercks, Ron L.; van Ooijen, Peter M. A.; Verdonschot, Nico

    2006-01-01

    The irregularity of individual wear patterns of total hip prostheses seen during patient followup may result partially from differences in radiographic projection of the components between radiographs. A method to adjust for this source of error would increase the value of individual wear curves. We

  8. Measuring Syntactic Growth: Errors and Expectations in Sentence-Combining Practice with College Freshmen.

    Science.gov (United States)

    Maimon, Elaine P.; Nodine, Barbara F.

    1978-01-01

    Demonstrated effectiveness in eliciting growth in number of words per T-unit through the use of sentence-combining exercises in a college composition course and related frequency of embedding errors to data on length of T-unit. (DD)

  9. Errors of Measurement, Theory, and Public Policy. William H. Angoff Memorial Lecture Series

    Science.gov (United States)

    Kane, Michael

    2010-01-01

    The 12th annual William H. Angoff Memorial Lecture was presented by Dr. Michael T. Kane, ETS's (Educational Testing Service) Samuel J. Messick Chair in Test Validity and the former Director of Research at the National Conference of Bar Examiners. Dr. Kane argues that it is important for policymakers to recognize the impact of errors of measurement…

  10. Measurement Error Variance of Test-Day Observations from Automatic Milking Systems

    DEFF Research Database (Denmark)

    Pitkänen, Timo; Mäntysaari, Esa A; Nielsen, Ulrik S;

    2012-01-01

    Automated milking systems (AMS) are becoming more popular in dairy farms. In this paper we present an approach for estimation of residual error covariance matrices for AMS and conventional milking system (CMS) observations. The variances for other random effects are kept as defined in the...

  11. A Preliminary Study on the Measures to Assess the Organizational Safety: The Cultural Impact on Human Error Potential

    International Nuclear Information System (INIS)

    The Fukushima I nuclear accident, following the Tohoku earthquake and tsunami on 11 March 2011, occurred twelve years after the JCO accident, which was caused by an error made by JCO employees. These accidents, along with the Chernobyl accident, were associated with characteristic problems of various organizations, caused severe social and economic disruption, and have had significant environmental and health impacts. Cultural problems with human errors occur for various reasons, and different actions are needed to prevent different errors. Unfortunately, much of the research on organizations and human error has shown widely varying results, which call for different approaches. In other words, we have to find more practical solutions from various research for nuclear safety and take a systematic approach to the organizational deficiencies causing human error. This paper reviews Hofstede's criteria, the IAEA safety culture, the safety areas of the periodic safety review (PSR), teamwork and performance, and an evaluation of the HANARO safety culture, in order to verify the measures used to assess organizational safety.

  12. Measuring and Correcting Wind-Induced Pointing Errors of the Green Bank Telescope Using an Optical Quadrant Detector

    CERN Document Server

    Ries, Paul; Constantikes, Kim T.; Brandt, Joseph J.; Ghigo, Frank D.; Mason, Brian S.; Prestage, Richard M.; Ray, Jason; Schwab, Frederic R.

    2011-01-01

    Wind-induced pointing errors are a serious concern for large-aperture high-frequency radio telescopes. In this paper, we describe the implementation of an optical quadrant detector instrument that can detect and provide a correction signal for wind-induced pointing errors on the 100 m diameter Green Bank Telescope (GBT). The instrument was calibrated using a combination of astronomical measurements and metrology. We find that the main wind-induced pointing errors on time scales of minutes are caused by the feedarm being blown along the direction of the wind vector. We also find that wind-induced structural excitation is virtually non-existent. We have implemented offline software to apply pointing corrections to the data from imaging instruments such as the MUSTANG 3.3 mm bolometer array, which can recover ~70% of sensitivity lost due to wind-induced pointing errors. We have also performed preliminary tests that show great promise for correcting these pointing errors in real-time using the telescope's subrefle...

  13. THEOREMS OF PEANO'S TYPE FOR BIVARIATE FUNCTIONS AND OPTIMAL RECOVERY OF LINEAR FUNCTIONALS

    Institute of Scientific and Technical Information of China (English)

    N.K. Dicheva

    2001-01-01

    The best recovery of a linear functional Lf, f = f(x,y), on the basis of given linear functionals Lj f, j = 1, 2, ..., N, in the sense of Sard has been investigated, using an analogue of Peano's theorem. The best recovery of a bivariate function from given scattered data has been obtained in a simple analytical form as a special case.

  14. A conditional bivariate reference curve with an application to human growth

    DEFF Research Database (Denmark)

    Petersen, Jørgen Holm

    conditional bivariate distribution; reference curves; percentile; non-parametric; quantile regression; non-parametric estimation

  15. Markov Chain Beam Randomization: a study of the impact of PLANCK beam measurement errors on cosmological parameter estimation

    CERN Document Server

    Rocha, G.; Górski, K. M.; Huffenberger, K. M.; Lawrence, C. R.; Lange, A. E.

    2009-01-01

    We introduce a new method to propagate uncertainties in the beam shapes used to measure the cosmic microwave background to cosmological parameters determined from those measurements. The method, which we call Markov Chain Beam Randomization, MCBR, randomly samples from a set of templates or functions that describe the beam uncertainties. The method is much faster than direct numerical integration over systematic 'nuisance' parameters, and is not restricted to simple, idealized cases as is analytic marginalization. It does not assume the data are normally distributed, and does not require Gaussian priors on the specific systematic uncertainties. We show that MCBR properly accounts for and provides the marginalized errors of the parameters. The method can be generalized and used to propagate any systematic uncertainties for which a set of templates is available. We apply the method to the Planck satellite, and consider future experiments. Beam measurement errors should have a small effect on cosmological parame...
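
    Schematically, MCBR is a Metropolis chain in which each likelihood evaluation uses a beam template drawn at random from the available set, so the chain marginalizes over beam uncertainty while it samples. The toy likelihood, signal, and templates below are assumptions standing in for the real CMB pipeline.

      import numpy as np

      rng = np.random.default_rng(11)
      signal = np.exp(-0.5 * ((np.arange(64) - 32) / 5.0) ** 2)
      templates = [np.hanning(k) / np.hanning(k).sum() for k in (7, 9, 11)]
      data = (np.convolve(1.3 * signal, templates[1], mode="same")
              + rng.normal(0.0, 0.05, 64))

      def log_like(theta, beam):
          """Toy Gaussian likelihood of an amplitude given a sampled beam."""
          model = np.convolve(theta * signal, beam, mode="same")
          return -0.5 * np.sum((data - model) ** 2) / 0.05 ** 2

      theta, chain = 1.0, []
      for _ in range(5000):
          prop = theta + rng.normal(0.0, 0.02)
          beam = templates[rng.integers(len(templates))]  # beam randomization step
          if np.log(rng.random()) < log_like(prop, beam) - log_like(theta, beam):
              theta = prop
          chain.append(theta)
      # Posterior mean/sd with beam uncertainty folded into the spread.
      print(np.mean(chain[1000:]), np.std(chain[1000:]))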

  16. A Model of the Dynamic Error as a Measurement Result of Instruments Defining the Parameters of Moving Objects

    Science.gov (United States)

    Dichev, D.; Koev, H.; Bakalova, T.; Louda, P.

    2014-08-01

    The present paper considers a new model for the formation of the dynamic error inertial component. It is very effective in the analysis and synthesis of measuring instruments positioned on moving objects and measuring their movement parameters. The block diagram developed within this paper is used as a basis for defining the mathematical model. The block diagram is based on the set-theoretic description of the measuring system, its input and output quantities and the process of dynamic error formation. The model reflects the specific nature of the formation of the dynamic error inertial component. In addition, the model conforms to the logical interrelation and sequence of the physical processes that form it. The effectiveness, usefulness and advantages of the model proposed are rooted in the wide range of possibilities it provides in relation to the analysis and synthesis of those measuring instruments, the formulation of algorithms and optimization criteria, as well as the development of new intelligent measuring systems with improved accuracy characteristics in dynamic mode.

  17. A Model of the Dynamic Error as a Measurement Result of Instruments Defining the Parameters of Moving Objects

    Directory of Open Access Journals (Sweden)

    Dichev D.

    2014-08-01

    Full Text Available The present paper considers a new model for the formation of the dynamic error inertial component. It is very effective in the analysis and synthesis of measuring instruments positioned on moving objects and measuring their movement parameters. The block diagram developed within this paper is used as a basis for defining the mathematical model. The block diagram is based on the set-theoretic description of the measuring system, its input and output quantities and the process of dynamic error formation. The model reflects the specific nature of the formation of the dynamic error inertial component. In addition, the model conforms to the logical interrelation and sequence of the physical processes that form it. The effectiveness, usefulness and advantages of the model proposed are rooted in the wide range of possibilities it provides in relation to the analysis and synthesis of those measuring instruments, the formulation of algorithms and optimization criteria, as well as the development of new intelligent measuring systems with improved accuracy characteristics in dynamic mode.

  18. Modeling Elicitation effects in contingent valuation studies: a Monte Carlo Analysis of the bivariate approach

    OpenAIRE

    Genius, Margarita; Strazzera, Elisabetta

    2005-01-01

    A Monte Carlo analysis is conducted to assess the validity of the bivariate modeling approach for detection and correction of different forms of elicitation effects in Double Bound Contingent Valuation data. Alternative univariate and bivariate models are applied to several simulated data sets, each one characterized by a specific elicitation effect, and their performance is assessed using standard selection criteria. The bivariate models include the standard Bivariate Probit model, and an al...

  19. Measurement error in a burrow index to monitor relative population size in the common vole

    Czech Academy of Sciences Publication Activity Database

    Lisická, L.; Losík, J.; Zejda, Jan; Heroldová, Marta; Nesvadbová, Jiřina; Tkadlec, Emil

    2007-01-01

    Vol. 56, No. 2 (2007), pp. 169-176. ISSN 0139-7893 R&D Projects: GA ČR GA206/04/2003 Institutional research plan: CEZ:AV0Z60930519 Keywords: bias * colonisation * dispersion * Microtus arvalis * precision * sampling error Subject RIV: EH - Ecology, Behaviour Impact factor: 0.376, year: 2007 http://www.ivb.cz/folia/56/2/169-176_MS1293.pdf

  20. Bias and spread in extreme value theory measurements of probability of error

    Science.gov (United States)

    Smith, J. G.

    1972-01-01

    Extreme value theory is examined to explain the cause of the bias and spread in performance of communications systems characterized by low bit rates and high data reliability requirements, for cases in which underlying noise is Gaussian or perturbed Gaussian. Experimental verification is presented and procedures that minimize these effects are suggested. Even under these conditions, however, extreme value theory test results are not particularly more significant than bit error rate tests.
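
    The basic EVT procedure examined in the record can be sketched as follows: fit a Gumbel distribution to block maxima of the noise statistic and extrapolate the per-bit error probability far below what direct error counting could measure. For a Gaussian parent the convergence to the Gumbel limit is slow, and the gap between the EVT estimate and the exact tail printed at the end illustrates exactly the kind of bias the record analyzes; the block size and threshold are assumptions.

      import numpy as np
      from scipy.stats import gumbel_r, norm

      rng = np.random.default_rng(2)
      block, n_blocks, threshold = 100_000, 200, 6.0

      # A bit error occurs when Gaussian noise exceeds the decision threshold;
      # instead of counting errors directly, record the maximum of each block.
      maxima = np.array([rng.normal(size=block).max() for _ in range(n_blocks)])

      loc, scale = gumbel_r.fit(maxima)
      p_block = gumbel_r.sf(threshold, loc, scale)  # P(block maximum > threshold)
      p_bit = p_block / block                       # approx per-bit error probability

      print(f"EVT estimate: {p_bit:.2e}   exact Gaussian tail: {norm.sf(threshold):.2e}")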