WorldWideScience

Sample records for bivariate measurement error

  1. Fitting a Bivariate Measurement Error Model for Episodically Consumed Dietary Components

    KAUST Repository

    Zhang, Saijuan

    2011-01-06

There has been great public health interest in estimating usual, i.e., long-term average, intake of episodically consumed dietary components that are not consumed daily by everyone, e.g., fish, red meat and whole grains. Short-term measurements of episodically consumed dietary components have zero-inflated skewed distributions. So-called two-part models have been developed for such data in order to correct for measurement error due to within-person variation and to estimate the distribution of usual intake of the dietary component in the univariate case. However, there is arguably much greater public health interest in the usual intake of an episodically consumed dietary component adjusted for energy (caloric) intake, e.g., ounces of whole grains per 1000 kilo-calories, which reflects usual dietary composition and adjusts for different total amounts of caloric intake. Because of this public health interest, it is important to have models to fit such data, and it is important that the model-fitting methods can be applied to all episodically consumed dietary components. We have recently developed a nonlinear mixed effects model (Kipnis et al., 2010) and have fit it by maximum likelihood using nonlinear mixed effects programs and methodology (the SAS NLMIXED procedure). Maximum likelihood fitting of such a nonlinear mixed model is generally slow because of 3-dimensional adaptive Gaussian quadrature, and there are times when the programs either fail to converge or converge to models with a singular covariance matrix. For these reasons, we develop a Markov chain Monte Carlo (MCMC) method for fitting this model, which allows for both frequentist and Bayesian inference. There are technical challenges to developing this solution because one of the covariance matrices in the model is patterned. Our main application is to the National Institutes of Health (NIH)-AARP Diet and Health Study, where we illustrate our methods for modeling the energy-adjusted usual intake of fish and whole
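    The two-part structure described above lends itself to a quick simulation. The following Python sketch (illustrative parameter values only; this is not the Kipnis et al. model or the NIH-AARP data) generates zero-inflated, skewed daily intakes from correlated person-level random effects, and shows how within-person variation degrades a naive mean of a few recall days as an estimate of usual intake.

```python
import numpy as np

rng = np.random.default_rng(0)
n_people, n_days = 1000, 4

# Correlated person-level random effects: u1 drives the probability of
# consumption, u2 the consumed amount (assumed values, for illustration only).
u = rng.multivariate_normal([0.0, 0.0], [[1.0, 0.5], [0.5, 1.0]], size=n_people)
p_consume = 1.0 / (1.0 + np.exp(-(-1.0 + u[:, 0])))   # part 1: consume today?
mu_log_amt = 1.0 + u[:, 1]                            # part 2: log amount if so
sigma_day = 0.8                                       # within-person variation

# Short-term (24-hour-recall-like) measurements: zero-inflated and skewed
eaten = rng.random((n_people, n_days)) < p_consume[:, None]
amount = np.where(
    eaten,
    rng.lognormal(mu_log_amt[:, None], sigma_day, size=(n_people, n_days)),
    0.0,
)

# Usual (long-term average) intake vs. the naive mean of a few recall days;
# day-to-day variation acts as measurement error and weakens the agreement.
usual = p_consume * np.exp(mu_log_amt + sigma_day**2 / 2)
naive = amount.mean(axis=1)
print("corr(usual, naive mean):", np.corrcoef(usual, naive)[0, 1])
```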

  2. Test and asymptotic normality for mixed bivariate measure

    Directory of Open Access Journals (Sweden)

    Rachid Sabre

    2013-05-01

Full Text Available Consider a pair of random variables whose joint probability measure is the sum of an absolutely continuous measure, a discrete measure and a finite number of absolutely continuous measures on some lines, called jump lines. The central limit theorem for the density estimates is studied and its rate of convergence is given. A statistical test is developed to locate the jump points. An application to real data is presented.

  3. A Simple Approximation for Bivariate Normal Integral Based on Error Function and its Application on Probit Model with Binary Endogenous Regressor

    OpenAIRE

    Wen-Jen Tsay; Peng-Hsuan Ke

    2009-01-01

A simple approximation for the bivariate normal cumulative distribution function (BNCDF) based on the error function is derived. The worst error of our method appears only at the fourth decimal place under the various configurations considered in this paper's Table 1. This is much better than Table 1 of Cox and Wermuth (1991) and Table 1 of Lin (1995), where the worst error reaches the third decimal place. We also apply the proposed method to approximate the likelihood function ...
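    The Tsay-Ke closed form itself is not reproduced in this record, but its building block, the error function, and reference values for the BNCDF against which any approximation can be checked are easy to set up. A minimal Python sketch (the test point h, k, rho is arbitrary):

```python
import numpy as np
from math import erf, sqrt
from scipy.stats import multivariate_normal

def phi(x):
    """Univariate standard normal CDF written via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def bncdf_mc(h, k, rho, n=1_000_000, seed=0):
    """Monte Carlo reference for P(X <= h, Y <= k) with corr(X, Y) = rho."""
    rng = np.random.default_rng(seed)
    z = rng.standard_normal((n, 2))
    y = rho * z[:, 0] + sqrt(1.0 - rho**2) * z[:, 1]
    return np.mean((z[:, 0] <= h) & (y <= k))

h, k, rho = 0.5, -0.3, 0.6
exact = multivariate_normal(mean=[0, 0], cov=[[1, rho], [rho, 1]]).cdf([h, k])
print(phi(h), bncdf_mc(h, k, rho), exact)
```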

  4. Job Mobility and Measurement Error

    OpenAIRE

    Bergin, Adele

    2011-01-01

    This thesis consists of essays investigating job mobility and measurement error. Job mobility, captured here as a change of employer, is a striking feature of the labour market. In empirical work on job mobility, researchers often depend on self-reported tenure data to identify job changes. There may be measurement error in these responses and consequently observations may be misclassified as job changes when truly no change has taken place and vice versa. These observations serve as a starti...

  5. Errors in Chemical Sensor Measurements

    Directory of Open Access Journals (Sweden)

    Artur Dybko

    2001-06-01

Full Text Available Various types of errors during the measurements of ion-selective electrodes, ion-sensitive field effect transistors, and fibre optic chemical sensors are described. The errors were divided according to their nature and place of origin into chemical, instrumental and non-chemical. The influence of interfering ions, leakage of the membrane components, liquid junction potential as well as sensor wiring, ambient light and temperature is presented.

  6. Measuring verification device error rates

    International Nuclear Information System (INIS)

A verification device generates a Type I (II) error when it recommends rejecting (accepting) a valid (false) identity claim. For a given identity, the rates or probabilities of these errors quantify random variations of the device from claim to claim. These are intra-identity variations. To some degree, these rates depend on the particular identity being challenged, and there exists a distribution of error rates characterizing inter-identity variations. However, for most security system applications we only need to know averages of this distribution. These averages are called the pooled error rates. In this paper the authors present the statistical underpinnings for the measurement of pooled Type I and Type II error rates. The authors consider a conceptual experiment, "a crate of biased coins". This model illustrates the effects of sampling both within trials of the same individual and among trials from different individuals. Application of this simple model to verification devices yields pooled error rate estimates and confidence limits for these estimates. A sample certification procedure for verification devices is given in the appendix.
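    A hedged sketch of the "crate of biased coins" idea in Python: per-identity error rates drawn from an assumed Beta distribution (the paper does not specify one), binomial sampling within identities, and a pooled rate with rough confidence limits.

```python
import numpy as np

rng = np.random.default_rng(1)

# "Crate of biased coins": each identity has its own Type I error rate.
# The Beta distribution below is an assumed inter-identity distribution.
n_ids, trials = 50, 200
p = rng.beta(2.0, 98.0, size=n_ids)        # inter-identity variation (mean 0.02)
errors = rng.binomial(trials, p)           # intra-identity sampling variation

pooled = errors.sum() / (n_ids * trials)   # pooled Type I error rate estimate

# Simple confidence limits treating the per-identity rates as i.i.d. draws
rates = errors / trials
half_width = 1.96 * rates.std(ddof=1) / np.sqrt(n_ids)
print(f"pooled rate = {pooled:.4f} +/- {half_width:.4f}")
```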

  7. Measurement error in geometric morphometrics.

    Science.gov (United States)

    Fruciano, Carmelo

    2016-06-01

Geometric morphometrics, a set of methods for the statistical analysis of shape once hailed as a revolutionary advancement in the analysis of morphology, is now mature and routinely used in ecology and evolution. However, a factor often disregarded in empirical studies is the presence and extent of measurement error. This is potentially a very serious issue because random measurement error can inflate the amount of variance and, since many statistical analyses are based on the amount of "explained" relative to "residual" variance, can result in loss of statistical power. On the other hand, systematic measurement error can bias the results (i.e. variation due to bias is incorporated in the analysis and treated as biologically meaningful variation). Here, I briefly review common sources of error in geometric morphometrics. I then review the most commonly used methods to measure and account for both random and non-random measurement error, providing a worked example using a real dataset. PMID:27038025
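    As a concrete illustration of quantifying random measurement error, the sketch below computes the one-way ANOVA repeatability (intraclass correlation), one of the commonly used indices in this literature, from simulated replicate measurements; the variances are made up for the example.

```python
import numpy as np

rng = np.random.default_rng(2)
n, r = 30, 2   # specimens, replicate measurements per specimen

true = rng.normal(0.0, 1.0, n)                       # biological variation
meas = true[:, None] + rng.normal(0.0, 0.3, (n, r))  # + random measurement error

# One-way ANOVA repeatability: among- vs. within-specimen mean squares
ms_among = r * meas.mean(axis=1).var(ddof=1)
ms_within = ((meas - meas.mean(axis=1, keepdims=True))**2).sum() / (n * (r - 1))
s2_among = (ms_among - ms_within) / r
print("repeatability:", s2_among / (s2_among + ms_within))
```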

  8. Errors in airborne flux measurements

    Science.gov (United States)

    Mann, Jakob; Lenschow, Donald H.

    1994-07-01

We present a general approach for estimating systematic and random errors in eddy correlation fluxes and flux gradients measured by aircraft in the convective boundary layer, as a function of the length of the flight leg or of the cutoff wavelength of a high-pass filter. The estimates are obtained from empirical expressions for various length scales in the convective boundary layer, and they are experimentally verified using data from the First ISLSCP (International Satellite Land Surface Climatology Project) Field Experiment (FIFE), the Air Mass Transformation Experiment (AMTEX), and the Electra Radome Experiment (ELDOME). We show that the systematic flux and flux gradient errors can be important if fluxes are calculated from a set of several short flight legs or if the vertical velocity and scalar time series are high-pass filtered. While the systematic error of the flux is usually negative, that of the flux gradient can change sign. For example, for temperature flux divergence the systematic error changes from negative to positive about a quarter of the way up in the convective boundary layer.
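    The systematic flux loss from high-pass filtering can be reproduced qualitatively with synthetic series. In the Python sketch below (made-up data, not FIFE, AMTEX or ELDOME measurements), the vertical velocity and scalar share a large-scale coherent component, so removing low wavenumbers removes part of the covariance and biases the flux low.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 2**14

# Vertical velocity w and scalar c sharing a large-scale coherent component
big = np.cumsum(rng.standard_normal(n))
big = (big - big.mean()) / big.std()
w = 0.3 * big + rng.standard_normal(n)
c = 0.3 * big + rng.standard_normal(n)

def highpass(x, k_cut):
    """Crude high-pass filter: zero the lowest k_cut Fourier modes."""
    spec = np.fft.rfft(x - x.mean())
    spec[:k_cut] = 0.0
    return np.fft.irfft(spec, n=x.size)

flux_full = np.mean((w - w.mean()) * (c - c.mean()))
flux_hp = np.mean(highpass(w, 8) * highpass(c, 8))
print(flux_full, flux_hp)   # filtering removes large-scale covariance: low bias
```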

  9. Measuring Test Measurement Error: A General Approach

    Science.gov (United States)

    Boyd, Donald; Lankford, Hamilton; Loeb, Susanna; Wyckoff, James

    2013-01-01

Test-based accountability as well as value-added assessments and much experimental and quasi-experimental research in education rely on achievement tests to measure student skills and knowledge. Yet, we know little regarding fundamental properties of these tests, an important example being the extent of measurement error and its implications for…

  10. POTASSIUM MEASUREMENT: CAUSES OF ERRORS IN MEASUREMENT

    OpenAIRE

    Kavitha; Omprakash

    2014-01-01

It is not an easy task to recognize errors in potassium measurement in the lab. If falsely elevated potassium levels go unrecognized by the lab and the clinician, the masked hypokalemic state, which is again a medical emergency, is difficult to treat. Such cases require proper monitoring by the clinician, so that cases with a history of pseudohyperkalemia, which cannot be easily identified in the laboratory, do not go unrecognized. The aim of this article is t...

  11. Measurement Error and Equating Error in Power Analysis

    Science.gov (United States)

    Phillips, Gary W.; Jiang, Tao

    2016-01-01

    Power analysis is a fundamental prerequisite for conducting scientific research. Without power analysis the researcher has no way of knowing whether the sample size is large enough to detect the effect he or she is looking for. This paper demonstrates how psychometric factors such as measurement error and equating error affect the power of…

  12. Measurement error in a single regressor

    NARCIS (Netherlands)

    Meijer, H.J.; Wansbeek, T.J.

    2000-01-01

    For the setting of multiple regression with measurement error in a single regressor, we present some very simple formulas to assess the result that one may expect when correcting for measurement error. It is shown where the corrected estimated regression coefficients and the error variance may lie,
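    The classical single-regressor result behind such corrections is the attenuation of the naive slope by the reliability ratio. A minimal Python sketch (not the authors' formulas, just the textbook identity):

```python
import numpy as np

rng = np.random.default_rng(4)
n, beta, s2_x, s2_u = 100_000, 2.0, 1.0, 0.5

x = rng.normal(0.0, np.sqrt(s2_x), n)        # true regressor
w = x + rng.normal(0.0, np.sqrt(s2_u), n)    # error-prone measurement
y = beta * x + rng.normal(0.0, 1.0, n)

b_naive = np.cov(w, y)[0, 1] / w.var(ddof=1)
lam = s2_x / (s2_x + s2_u)                   # reliability ratio
print(b_naive, b_naive / lam, beta)          # attenuated, corrected, true
```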

  13. Impact of Measurement Error on Synchrophasor Applications

    Energy Technology Data Exchange (ETDEWEB)

    Liu, Yilu [Univ. of Tennessee, Knoxville, TN (United States); Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Gracia, Jose R. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Ewing, Paul D. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Zhao, Jiecheng [Univ. of Tennessee, Knoxville, TN (United States); Tan, Jin [Univ. of Tennessee, Knoxville, TN (United States); Wu, Ling [Univ. of Tennessee, Knoxville, TN (United States); Zhan, Lingwei [Univ. of Tennessee, Knoxville, TN (United States)

    2015-07-01

Phasor measurement units (PMUs), which provide synchrophasor measurements, are powerful diagnostic tools that can help avert catastrophic failures in the power grid. Because of this, PMU measurement errors are particularly worrisome. This report examines the internal and external factors contributing to PMU phase angle and frequency measurement errors and gives a reasonable explanation for them. It also analyzes the impact of those measurement errors on several synchrophasor applications: event location detection, oscillation detection, islanding detection, and dynamic line rating. The primary finding is that dynamic line rating is the application most likely to be influenced by measurement error. Other findings include the possibility of reporting nonoscillatory activity as an oscillation as the result of error, failing to detect oscillations submerged by error, and the unlikely impact of error on event location and islanding detection.

  14. Quantifying and handling errors in instrumental measurements using the measurement error theory

    DEFF Research Database (Denmark)

    Andersen, Charlotte Møller; Bro, R.; Brockhoff, P.B.

    2003-01-01

    Measurement error modelling is used for investigating the influence of measurement/sampling error on univariate predictions of water content and water-holding capacity (reference measurement) from nuclear magnetic resonance (NMR) relaxations (instrumental) measured on two gadoid fish species...

  15. Errors and Uncertainty in Physics Measurement.

    Science.gov (United States)

    Blasiak, Wladyslaw

    1983-01-01

    Classifies errors as either systematic or blunder and uncertainties as either systematic or random. Discusses use of error/uncertainty analysis in direct/indirect measurement, describing the process of planning experiments to ensure lowest possible uncertainty. Also considers appropriate level of error analysis for high school physics students'…

  16. Measurement error models, methods, and applications

    CERN Document Server

    Buonaccorsi, John P

    2010-01-01

Over the last 20 years, comprehensive strategies for treating measurement error in complex models and accounting for the use of extra data to estimate measurement error parameters have emerged. Focusing on both established and novel approaches, "Measurement Error: Models, Methods, and Applications" provides an overview of the main techniques and illustrates their application in various models. It describes the impacts of measurement errors on naive analyses that ignore them and presents ways to correct for them across a variety of statistical models, from simple one-sample problems to regres

  17. Bivariate discrete Linnik distribution

    Directory of Open Access Journals (Sweden)

    Davis Antony Mundassery

    2014-10-01

Full Text Available Christoph and Schreiber (1998a) studied the discrete analogue of the positive Linnik distribution and obtained its characterizations using the survival function. In this paper, we introduce a bivariate form of the discrete Linnik distribution and study its distributional properties. Characterizations of the bivariate distribution are obtained using compounding schemes. Autoregressive processes whose marginals follow the bivariate discrete Linnik distribution are developed.

  18. Measurement error in education and growth regressions

    NARCIS (Netherlands)

    Portela, Miguel; Teulings, Coen; Alessie, R.

    2004-01-01

    The perpetual inventory method used for the construction of education data per country leads to systematic measurement error. This paper analyses the effect of this measurement error on GDP regressions. There is a systematic difference in the education level between census data and observations cons

  19. Measurement Error in Education and Growth Regressions

    NARCIS (Netherlands)

    Portela, M.; Teulings, C.N.; Alessie, R.

    2004-01-01

    The perpetual inventory method used for the construction of education data per country leads to systematic measurement error. This paper analyses the effect of this measurement error on GDP regressions. There is a systematic difference in the education level between census data and observations cons

  20. Bivariate analysis of basal serum anti-Mullerian hormone measurements and human blastocyst development after IVF

    LENUS (Irish Health Repository)

    Sills, E Scott

    2011-12-02

Abstract Background To report on relationships among baseline serum anti-Müllerian hormone (AMH) measurements, blastocyst development and other selected embryology parameters observed in non-donor oocyte IVF cycles. Methods Pre-treatment AMH was measured in patients undergoing IVF (n = 79) and retrospectively correlated to in vitro embryo development noted during culture. Results Mean (± SD) age for study patients in this study group was 36.3 ± 4.0 (range = 28-45) yrs, and mean (± SD) terminal serum estradiol during IVF was 5929 ± 4056 pmol/l. A moderate positive correlation (0.49; 95% CI 0.31 to 0.65) was noted between basal serum AMH and number of MII oocytes retrieved. Similarly, a moderate positive correlation (0.44) was observed between serum AMH and number of early cleavage-stage embryos (95% CI 0.24 to 0.61), suggesting a relationship between serum AMH and embryo development in IVF. Of note, serum AMH levels at baseline were significantly different for patients who did and did not undergo blastocyst transfer (15.6 vs. 10.9 pmol/l; p = 0.029). Conclusions While serum AMH has found increasing application as a predictor of ovarian reserve for patients prior to IVF, its role in estimating in vitro embryo morphology and the potential to advance to blastocyst stage has not been extensively investigated. These data suggest that baseline serum AMH determinations can help forecast blastocyst development during IVF. Serum AMH measured before treatment may assist patients, clinicians and embryologists as scheduling of embryo transfer is outlined. Additional studies are needed to confirm these correlations and to better define the role of baseline serum AMH level in the prediction of blastocyst formation.

  1. Prediction with measurement errors in finite populations.

    Science.gov (United States)

    Singer, Julio M; Stanek, Edward J; Lencina, Viviana B; González, Luz Mery; Li, Wenjun; Martino, Silvina San

    2012-02-01

We address the problem of selecting the best linear unbiased predictor (BLUP) of the latent value (e.g., serum glucose fasting level) of sample subjects with heteroskedastic measurement errors. Using a simple example, we compare the usual mixed model BLUP to a similar predictor based on a mixed model framed in a finite population (FPMM) setup with two sources of variability, the first of which corresponds to simple random sampling and the second, to heteroskedastic measurement errors. Under this last approach, we show that when measurement errors are subject-specific, the BLUP shrinkage constants are based on a pooled measurement error variance, as opposed to the individual ones generally considered for the usual mixed model BLUP. In contrast, when the heteroskedastic measurement errors are measurement condition-specific, the FPMM BLUP involves different shrinkage constants. We also show that in this setup, when measurement errors are subject-specific, the usual mixed model predictor is biased but has a smaller mean squared error than the FPMM BLUP, which points to some difficulties in the interpretation of such predictors.
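    A toy contrast between the two kinds of shrinkage constants mentioned in the abstract can be simulated directly; the Python sketch below (assumed variances, not the authors' FPMM derivation) compares subject-specific and pooled-variance shrinkage.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 8
s2_b = 4.0                               # variance of the latent values
s2_e = rng.uniform(0.5, 3.0, n)          # subject-specific error variances

latent = rng.normal(10.0, np.sqrt(s2_b), n)
y = latent + rng.normal(0.0, np.sqrt(s2_e))

mu = y.mean()
k_subject = s2_b / (s2_b + s2_e)         # subject-specific shrinkage constants
k_pooled = s2_b / (s2_b + s2_e.mean())   # pooled-variance alternative

blup_subject = mu + k_subject * (y - mu)
blup_pooled = mu + k_pooled * (y - mu)
print(np.mean((blup_subject - latent)**2), np.mean((blup_pooled - latent)**2))
```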

  2. Assessing Measurement Error in Medicare Coverage

    Data.gov (United States)

    U.S. Department of Health & Human Services — Assessing Measurement Error in Medicare Coverage From the National Health Interview Survey Using linked administrative data, to validate Medicare coverage estimates...

  3. Measurement error in longitudinal film badge data

    CERN Document Server

    Marsh, J L

    2002-01-01

    Initial logistic regressions turned up some surprising contradictory results which led to a re-sampling of Sellafield mortality controls without the date of employment matching factor. It is suggested that over matching is the cause of the contradictory results. Comparisons of the two measurements of radiation exposure suggest a strongly linear relationship with non-Normal errors. A method has been developed using the technique of Regression Calibration to deal with these in a case-control study context, and applied to this Sellafield study. The classical measurement error model is that of a simple linear regression with unobservable variables. Information about the covariates is available only through error-prone measurements, usually with an additive structure. Ignoring errors has been shown to result in biased regression coefficients, reduced power of hypothesis tests and increased variability of parameter estimates. Radiation is known to be a causal factor for certain types of leukaemia. This link is main...

  4. Errors of measurement by laser goniometer

    Science.gov (United States)

    Agapov, Mikhail Y.; Bournashev, Milhail N.

    2000-11-01

The report is dedicated to research on systematic errors of angle measurement by a dynamic laser goniometer (DLG) based on a ring laser (RL), intended for certification of optical angle encoders (OE), and to the development of methods for separating errors of different types and compensating for them algorithmically. The OE was an absolute photoelectric angle encoder with an informational capacity of 14 bits. Kinematic connection with a rotary platform was made through a mechanical connection unit (CU). The measurement and separation of the systematic error into components was carried out by applying a method of cross-calibration, with mutual rotations of the OE relative to the DLG base and of the CU relative to the OE rotor. A Fourier analysis of the observed data was then performed. The dynamic errors of angle measurement were investigated using the dependence on angular rate of rotation of the measured angle between a reference direction, assigned by an interference null-indicator (NI) with an 8-faced optical polygon (OP), and the direction defined by means of the OE. The obtained results allow algorithmic compensation of the systematic error and thereby a considerable reduction of the total measurement error.

  5. Methodological errors in radioisotope flux measurements

    Energy Technology Data Exchange (ETDEWEB)

    Egnor, R.W.; Vaccarezza, S.G.; Charney, A.N. (New York Univ. School of Medicine, New York (USA))

    1988-11-01

The authors examined several sources of error in isotopic flux measurements in a commonly used experimental model: the study of ²²Na and ³⁶Cl fluxes across rat ileal tissue mounted in the Ussing flux chamber. The experiment revealed three important sources of error: the absolute counts per minute, the difference in counts per minute between serial samples, and averaging of serial samples. By computer manipulation, they then applied hypothetical changes in the experimental protocol to generalize these findings and assess the effect and interaction of the absolute counts per minute, the sampling interval, and the counting time on the magnitude of the error. They found that the error of a flux measurement will vary inversely with the counting time and the difference between the consecutive sample counts per minute used in the flux calculations, and will vary directly with the absolute counts per minute of each sample. Alteration of the hot side specific activity, the surface area of the tissue across which flux is measured and the sample volume have a smaller impact on measurement error. Experimental protocols should be designed with these methodological considerations in mind to minimize the error inherent in measuring isotope flux.

  6. Statistical error analysis of reactivity measurement

    Energy Technology Data Exchange (ETDEWEB)

    Thammaluckan, Sithisak; Hah, Chang Joo [KEPCO International Nuclear Graduate School, Ulsan (Korea, Republic of)

    2013-10-15

After statistical analysis, it was confirmed that each group was sampled from the same population. It is observed in Table 7 that the mean error decreases as core size increases. Application of the bias factor obtained from this research reduces the mean error further. The point kinetics model had been used to measure control rod worth without 3D spatial information of the neutron flux or power distribution, which causes inaccurate results. Dynamic Control rod Reactivity Measurement (DCRM) was employed to take the 3D spatial information of the flux into account in the point kinetics model. The measured bank worth probably contains some uncertainty, such as methodology uncertainty and measurement uncertainty. Those uncertainties may vary with the size of the core and the magnitude of the reactivity. The goal of this research is to investigate the effect of core size and magnitude of control rod worth on the error of reactivity measurement using statistics.

  7. Neutron multiplication error in TRU waste measurements

    Energy Technology Data Exchange (ETDEWEB)

    Veilleux, John [Los Alamos National Laboratory; Stanfield, Sean B [CCP; Wachter, Joe [CCP; Ceo, Bob [CCP

    2009-01-01

Total Measurement Uncertainty (TMU) in neutron assays of transuranic (TRU) waste comprises several components, including counting statistics, matrix and source distribution, calibration inaccuracy, background effects, and neutron multiplication error. While a minor component for low plutonium masses, neutron multiplication error is often the major contributor to the TMU for items containing more than 140 g of weapons-grade plutonium. Neutron multiplication arises when neutrons from spontaneous fission and other nuclear events induce fissions in other fissile isotopes in the waste, thereby multiplying the overall coincidence neutron response in passive neutron measurements. Since passive neutron counters cannot differentiate between spontaneous and induced fission neutrons, multiplication can lead to positive bias in the measurements. Although neutron multiplication can only result in a positive bias, it has, for the purpose of mathematical simplicity, generally been treated as an error that can lead to either a positive or negative result in the TMU. The factors that contribute to neutron multiplication include the total mass of fissile nuclides, the presence of moderating material in the matrix, the concentration and geometry of the fissile sources, and other factors; however, measurement uncertainty is generally determined as a function of the fissile mass in most TMU software calculations because this is the only quantity determined by the passive neutron measurement. Neutron multiplication error has a particularly pernicious consequence for TRU waste analysis because the measured Fissile Gram Equivalent (FGE) plus twice the TMU error must be less than 200 for TRU waste packaged in 55-gal drums and less than 325 for boxed waste. For this reason, large errors due to neutron multiplication can lead to increased rejections of TRU waste containers. This report will attempt to better define the error term due to neutron multiplication and arrive at values that are

  8. Conditional Density Estimation in Measurement Error Problems.

    Science.gov (United States)

    Wang, Xiao-Feng; Ye, Deping

    2015-01-01

    This paper is motivated by a wide range of background correction problems in gene array data analysis, where the raw gene expression intensities are measured with error. Estimating a conditional density function from the contaminated expression data is a key aspect of statistical inference and visualization in these studies. We propose re-weighted deconvolution kernel methods to estimate the conditional density function in an additive error model, when the error distribution is known as well as when it is unknown. Theoretical properties of the proposed estimators are investigated with respect to the mean absolute error from a "double asymptotic" view. Practical rules are developed for the selection of smoothing-parameters. Simulated examples and an application to an Illumina bead microarray study are presented to illustrate the viability of the methods. PMID:25284902

  9. Nonclassical measurements errors in nonlinear models

    DEFF Research Database (Denmark)

    Madsen, Edith; Mulalic, Ismir

… estimates of the income effect, it is of interest to investigate the magnitude of the estimation bias and, if possible, to use estimation techniques that take the measurement error problem into account. We use data from the Danish National Travel Survey (NTS) and merge it with administrative register data … around zero and thicker tails than a normal distribution. In a linear regression model where the explanatory variable is measured with error, it is well known that this gives a downward bias in the absolute value of the corresponding regression parameter (attenuation), Friedman (1957). In non-linear models it is more difficult to obtain an expression for the bias, as it depends on the distribution of the true underlying variable as well as the error distribution. Chesher (1991) gives some approximations for very general non-linear models, and Stefanski & Carroll (1985) for the logistic regression model …

  10. Improving Localization Accuracy: Successive Measurements Error Modeling

    Directory of Open Access Journals (Sweden)

    Najah Abu Ali

    2015-07-01

Full Text Available Vehicle self-localization is an essential requirement for many of the safety applications envisioned for vehicular networks. The mathematical models used in current vehicular localization schemes focus on modeling the localization error itself, and overlook the potential correlation between successive localization measurement errors. In this paper, we first investigate the existence of correlation between successive positioning measurements, and then incorporate this correlation into the modeling of positioning error. We use the Yule-Walker equations to determine the degree of correlation between a vehicle's future position and its past positions, and then propose a -order Gauss-Markov model to predict the future position of a vehicle from its past positions. We investigate the existence of correlation for two datasets representing the mobility traces of two vehicles over a period of time. We prove the existence of correlation between successive measurements in the two datasets, and show that the time correlation between measurements can have a value up to four minutes. Through simulations, we validate the robustness of our model and show that it is possible to use the first-order Gauss-Markov model, which has the least complexity, and still maintain an accurate estimation of a vehicle's future location over time using only its current position. Our model can assist in providing better modeling of positioning errors and can be used as a prediction tool to improve the performance of classical localization algorithms such as the Kalman filter.

  11. Improving Localization Accuracy: Successive Measurements Error Modeling.

    Science.gov (United States)

    Ali, Najah Abu; Abu-Elkheir, Mervat

    2015-01-01

Vehicle self-localization is an essential requirement for many of the safety applications envisioned for vehicular networks. The mathematical models used in current vehicular localization schemes focus on modeling the localization error itself, and overlook the potential correlation between successive localization measurement errors. In this paper, we first investigate the existence of correlation between successive positioning measurements, and then incorporate this correlation into the modeling of positioning error. We use the Yule-Walker equations to determine the degree of correlation between a vehicle's future position and its past positions, and then propose a -order Gauss-Markov model to predict the future position of a vehicle from its past positions. We investigate the existence of correlation for two datasets representing the mobility traces of two vehicles over a period of time. We prove the existence of correlation between successive measurements in the two datasets, and show that the time correlation between measurements can have a value up to four minutes. Through simulations, we validate the robustness of our model and show that it is possible to use the first-order Gauss-Markov model, which has the least complexity, and still maintain an accurate estimation of a vehicle's future location over time using only its current position. Our model can assist in providing better modeling of positioning errors and can be used as a prediction tool to improve the performance of classical localization algorithms such as the Kalman filter. PMID:26140345
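    The Yule-Walker step described in both records is straightforward to sketch. The Python example below fits an AR(1) (first-order Gauss-Markov) model to a synthetic serially correlated error series, standing in for the vehicle traces used in the paper, and makes a one-step prediction.

```python
import numpy as np

rng = np.random.default_rng(6)

# Synthetic positioning-error series with built-in serial correlation
e = np.zeros(1000)
for k in range(1, e.size):
    e[k] = 0.8 * e[k - 1] + rng.standard_normal()

def yule_walker(x, order):
    """Solve the Yule-Walker equations for AR(order) coefficients."""
    x = x - x.mean()
    acov = np.correlate(x, x, "full")[x.size - 1:] / x.size
    R = np.array([[acov[abs(i - j)] for j in range(order)] for i in range(order)])
    return np.linalg.solve(R, acov[1:order + 1])

phi = yule_walker(e, 1)[0]     # estimate close to the true 0.8
pred_next = phi * e[-1]        # first-order Gauss-Markov one-step prediction
print(phi, pred_next)
```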

  12. Multiple Indicators, Multiple Causes Measurement Error Models

    OpenAIRE

    Tekwe, Carmen D.; Carter, Randy L.; Cullings, Harry M.; Carroll, Raymond J.

    2014-01-01

Multiple Indicators, Multiple Causes (MIMIC) models are often employed by researchers studying the effects of an unobservable latent variable on a set of outcomes, when causes of the latent variable are observed. There are times, however, when the causes of the latent variable are not observed because measurements of the causal variable are contaminated by measurement error. The objectives of this paper are: (1) to develop a novel model by extending the classical linear MIMIC model to allow bot...

  13. Measurement Error in Access to Markets

    OpenAIRE

    Javier Escobal; Sonia Laszlo

    2005-01-01

    Studies in the microeconometric literature increasingly utilize distance to or time to reach markets or social services as determinants of economic issues. These studies typically use self-reported measures from survey data, often characterized by non-classical measurement error. This paper is the first validation study of access to markets data. New and unique data from Peru allow comparison of self-reported variables with scientifically calculated variables. We investigate the determinants ...

  14. Measurement Uncertainty Evaluation of Digital Modulation Quality Parameters: Magnitude Error and Phase Error

    Directory of Open Access Journals (Sweden)

    Zhan Zhiqiang

    2016-01-01

Full Text Available In digital modulation quality parameter traceability, the Error Vector Magnitude, Magnitude Error and Phase Error must be traced, and the measurement uncertainty of these parameters needs to be assessed. Although the calibration specification JJF1128-2004, Calibration Specification for Vector Signal Analyzers, has been published domestically, its measurement uncertainty evaluation is unreasonable: the parameters selected are incorrect, and not all error terms are included in the evaluation. This article gives the formulas for magnitude error and phase error, and then presents the measurement uncertainty evaluation process for these two parameters.
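    For reference, the three quantities named in the title can be computed from paired reference and measured symbol vectors as follows; the QPSK constellation and impairment levels in this Python sketch are arbitrary illustration values, not those of JJF1128-2004.

```python
import numpy as np

rng = np.random.default_rng(7)

# Reference QPSK symbols and a distorted measurement of them
ref = (rng.choice([-1, 1], 500) + 1j * rng.choice([-1, 1], 500)) / np.sqrt(2)
meas = ref * (1 + 0.05 * rng.standard_normal(500)) \
           * np.exp(1j * 0.02 * rng.standard_normal(500))

evm = np.sqrt(np.mean(np.abs(meas - ref)**2) / np.mean(np.abs(ref)**2))
mag_err = np.mean(np.abs(np.abs(meas) - np.abs(ref)))      # magnitude error
phase_err = np.mean(np.abs(np.angle(meas / ref)))          # phase error (rad)
print(f"EVM = {evm:.3%}, magnitude error = {mag_err:.4f}, "
      f"phase error = {phase_err:.4f} rad")
```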

  15. Measurement error in longitudinal film badge data

    Energy Technology Data Exchange (ETDEWEB)

    Marsh, J.L

    2002-04-01

    The classical measurement error model is that of a simple linear regression with unobservable variables. Information about the covariates is available only through error-prone measurements, usually with an additive structure. Ignoring errors has been shown to result in biased regression coefficients, reduced power of hypothesis tests and increased variability of parameter estimates. Radiation is known to be a causal factor for certain types of leukaemia. This link is mainly substantiated by the Atomic Bomb Survivor study, the Ankylosing Spondylitis Patients study, and studies of various other patients irradiated for therapeutic purposes. The carcinogenic relationship is believed to be a linear or quadratic function of dose but the risk estimates differ widely for the different studies. Previous cohort studies of the Sellafield workforce have used the cumulative annual exposure data for their risk estimates. The current 1:4 matched case-control study also uses the individual worker's film badge data, the majority of which has been unavailable in computerised form. The results from the 1:4 matched (on dates of birth and employment, sex and industrial status) case-control study are compared and contrasted with those for a 1:4 nested (within the worker cohort and matched on the same factors) case-control study using annual doses. The data consist of 186 cases and 744 controls from the work forces of four BNFL sites: Springfields, Sellafield, Capenhurst and Chapelcross. Initial logistic regressions turned up some surprising contradictory results which led to a re-sampling of Sellafield mortality controls without the date of employment matching factor. It is suggested that over matching is the cause of the contradictory results. Comparisons of the two measurements of radiation exposure suggest a strongly linear relationship with non-Normal errors. A method has been developed using the technique of Regression Calibration to deal with these in a case-control study

  16. Measurement Error and Misclassification in Statistics and Epidemiology

    CERN Document Server

    Gustafson, Paul

    2003-01-01

This book addresses statistical challenges posed by inaccurately measuring explanatory variables, a common problem in biostatistics and epidemiology. The author explores both measurement error in continuous variables and misclassification in categorical variables. He also describes the circumstances in which it is necessary to explicitly adjust for imprecise covariates using the Bayesian approach and a Markov chain Monte Carlo algorithm. The book offers a mix of basic and more specialized topics such as "wrong-model" fitting. Mathematical details are featured in the final sections of each

  17. Ordinal Bivariate Inequality

    DEFF Research Database (Denmark)

    Sonne-Schmidt, Christoffer Scavenius; Tarp, Finn; Østerdal, Lars Peter Raahave

    2016-01-01

    This paper introduces a concept of inequality comparisons with ordinal bivariate categorical data. In our model, one population is more unequal than another when they have common arithmetic median outcomes and the first can be obtained from the second by correlation-increasing switches and...

  18. Laser measurement and analysis of reposition error in polishing systems

    Science.gov (United States)

    Liu, Weisen; Wang, Junhua; Xu, Min; He, Xiaoying

    2015-10-01

In this paper, a robotic reposition error measurement method based on laser interference remote positioning is presented, the geometric error of the robot-based polishing system is analyzed, and a mathematical model of the tilt error is given. Studies show that errors below 1 mm are mainly caused by tilt error at small incident angles. Marking the spot position with an interference fringe greatly enhances the error measurement precision; the measurement precision of the tilt error can reach 5 µm. Measurement results show that the reposition error of the polishing system comes mainly from the tilt error caused by motor A; repositioning precision is greatly increased after improvement of the polishing system. The measurement method has important applications in actual error measurement, with low cost and simple operation.

  19. Bivariate value-at-risk

    Directory of Open Access Journals (Sweden)

    Giuseppe Arbia

    2007-10-01

Full Text Available In this paper we extend the concept of Value-at-Risk (VaR) to bivariate return distributions in order to obtain measures of the market risk of an asset that take into account additional features linked to downside risk exposure. We first present a general definition of risk as the probability of an adverse event over a random distribution, and we then introduce a measure of market risk (β-VaR) that admits the traditional β of an asset in portfolio management as a special case when asset returns are normally distributed. Empirical evidence is provided using Italian stock market data.

  20. Orthogonality of inductosyn angle-measuring system error and error-separating technology

    Institute of Scientific and Technical Information of China (English)

    任顺清; 曾庆双; 王常虹

    2003-01-01

Round inductosyn is widely used in inertial navigation test equipment, and its accuracy has a significant effect on the overall accuracy of the equipment. Four main errors of the round inductosyn, i.e. the first-order long-period (360°) harmonic error, the second-order long-period harmonic error, the first-order short-period harmonic error and the second-order short-period harmonic error, are described, and the orthogonality of these four kinds of errors is studied. An error-separating technology is proposed to separate these four kinds of errors, and in the process of separating the short-period harmonic errors, the arrangement in the order of the decimal part of the angle pitch number can be omitted. The effectiveness of the proposed technology is proved through measuring and adjusting the angular errors.

  1. Radiation risk estimation based on measurement error models

    CERN Document Server

    Masiuk, Sergii; Shklyar, Sergiy; Chepurny, Mykola; Likhtarov, Illya

    2016-01-01

    This monograph discusses statistics and risk estimates applied to radiation damage under the presence of measurement errors. The first part covers nonlinear measurement error models, with a particular emphasis on efficiency of regression parameter estimators. In the second part, risk estimation in models with measurement errors is considered. Efficiency of the methods presented is verified using data from radio-epidemiological studies.

  2. Median Unbiased Estimation of Bivariate Predictive Regression Models with Heavy-tailed or Heteroscedastic Errors

    Institute of Scientific and Technical Information of China (English)

    朱复康; 王德军

    2007-01-01

In this paper, we consider median unbiased estimation of bivariate predictive regression models with non-normal, heavy-tailed or heteroscedastic errors. We construct confidence intervals and a median unbiased estimator for the parameter of interest. We show via simulation that the proposed estimator has better predictive potential than the usual least squares estimator. An empirical application to finance is given, and a possible extension of the estimation procedure to cointegration models is also described.

  3. Bivariate analysis of basal serum anti-Müllerian hormone measurements and human blastocyst development after IVF

    Directory of Open Access Journals (Sweden)

    Sills E Scott

    2011-12-01

Full Text Available Abstract Background To report on relationships among baseline serum anti-Müllerian hormone (AMH) measurements, blastocyst development and other selected embryology parameters observed in non-donor oocyte IVF cycles. Methods Pre-treatment AMH was measured in patients undergoing IVF (n = 79) and retrospectively correlated to in vitro embryo development noted during culture. Results Mean (± SD) age for study patients in this study group was 36.3 ± 4.0 (range = 28-45) yrs, and mean (± SD) terminal serum estradiol during IVF was 5929 ± 4056 pmol/l. A moderate positive correlation (0.49; 95% CI 0.31 to 0.65) was noted between basal serum AMH and number of MII oocytes retrieved. Similarly, a moderate positive correlation (0.44) was observed between serum AMH and number of early cleavage-stage embryos (95% CI 0.24 to 0.61), suggesting a relationship between serum AMH and embryo development in IVF. Of note, serum AMH levels at baseline were significantly different for patients who did and did not undergo blastocyst transfer (15.6 vs. 10.9 pmol/l; p = 0.029). Conclusions While serum AMH has found increasing application as a predictor of ovarian reserve for patients prior to IVF, its role in estimating in vitro embryo morphology and the potential to advance to blastocyst stage has not been extensively investigated. These data suggest that baseline serum AMH determinations can help forecast blastocyst development during IVF. Serum AMH measured before treatment may assist patients, clinicians and embryologists as scheduling of embryo transfer is outlined. Additional studies are needed to confirm these correlations and to better define the role of baseline serum AMH level in the prediction of blastocyst formation.

  4. Error measurement and assemble error correction of a 3D-step-gauge

    Institute of Scientific and Technical Information of China (English)

    MAO Xinyong; LI Bin; SHI Hanmin; LIU Hongqi; LI Xi; LI Peigen

    2007-01-01

A new artifact, called the 3D-step-gauge and consisting of a pyramid array and a compound, is proposed to calculate the geometrical errors of machine tools. Only one point on each profile of a pyramid in the array is probed, and its center coordinate can be calculated. The intervals of the pyramids can then be transferred to a length standard to measure errors. Considering the differences in the structural parameters and the locations of the pyramids, a volumetric error measurement method for discrete points is presented. Furthermore, the location errors between the calibration state and the measurement state are studied, and their influence on the accuracy of the position measurement is investigated on an actual machine tool. The 3D-step-gauge was tested on an actual machine tool, and the measurement results show quick assembly, convenient measurement, and high accuracy.

  5. A Bayesian semiparametric model for bivariate sparse longitudinal data.

    Science.gov (United States)

    Das, Kiranmoy; Li, Runze; Sengupta, Subhajit; Wu, Rongling

    2013-09-30

    Mixed-effects models have recently become popular for analyzing sparse longitudinal data that arise naturally in biological, agricultural and biomedical studies. Traditional approaches assume independent residuals over time and explain the longitudinal dependence by random effects. However, when bivariate or multivariate traits are measured longitudinally, this fundamental assumption is likely to be violated because of intertrait dependence over time. We provide a more general framework where the dependence of the observations from the same subject over time is not assumed to be explained completely by the random effects of the model. We propose a novel, mixed model-based approach and estimate the error-covariance structure nonparametrically under a generalized linear model framework. We use penalized splines to model the general effect of time, and we consider a Dirichlet process mixture of normal prior for the random-effects distribution. We analyze blood pressure data from the Framingham Heart Study where body mass index, gender and time are treated as covariates. We compare our method with traditional methods including parametric modeling of the random effects and independent residual errors over time. We conduct extensive simulation studies to investigate the practical usefulness of the proposed method. The current approach is very helpful in analyzing bivariate irregular longitudinal traits. PMID:23553747

  6. Slope Error Measurement Tool for Solar Parabolic Trough Collectors: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Stynes, J. K.; Ihas, B.

    2012-04-01

    The National Renewable Energy Laboratory (NREL) has developed an optical measurement tool for parabolic solar collectors that measures the combined errors due to absorber misalignment and reflector slope error. The combined absorber alignment and reflector slope errors are measured using a digital camera to photograph the reflected image of the absorber in the collector. Previous work using the image of the reflection of the absorber finds the reflector slope errors from the reflection of the absorber and an independent measurement of the absorber location. The accuracy of the reflector slope error measurement is thus dependent on the accuracy of the absorber location measurement. By measuring the combined reflector-absorber errors, the uncertainty in the absorber location measurement is eliminated. The related performance merit, the intercept factor, depends on the combined effects of the absorber alignment and reflector slope errors. Measuring the combined effect provides a simpler measurement and a more accurate input to the intercept factor estimate. The minimal equipment and setup required for this measurement technique make it ideal for field measurements.

  7. Triphasic MRI of pelvic organ descent: sources of measurement error

    Energy Technology Data Exchange (ETDEWEB)

    Morren, Geert L. [Bowel and Digestion Centre, The Oxford Clinic, 38 Oxford Terrace, Christchurch (New Zealand)]. E-mail: geert_morren@hotmail.com; Balasingam, Adrian G. [Christchurch Radiology Group, P.O. Box 21107, 4th Floor, Leicester House, 291 Madras Street, Christchurch (New Zealand); Wells, J. Elisabeth [Department of Public Health and General Medicine, Christchurch School of Medicine, St. Elmo Courts, Christchurch (New Zealand); Hunter, Anne M. [Christchurch Radiology Group, P.O. Box 21107, 4th Floor, Leicester House, 291 Madras Street, Christchurch (New Zealand); Coates, Richard H. [Christchurch Radiology Group, P.O. Box 21107, 4th Floor, Leicester House, 291 Madras Street, Christchurch (New Zealand); Perry, Richard E. [Bowel and Digestion Centre, The Oxford Clinic, 38 Oxford Terrace, Christchurch (New Zealand)

    2005-05-01

Purpose: To identify sources of error when measuring pelvic organ displacement during straining using triphasic dynamic magnetic resonance imaging (MRI). Materials and methods: Ten healthy nulliparous women underwent triphasic dynamic 1.5 T pelvic MRI twice, with 1 week between studies. The bladder was filled with 200 ml of a saline solution; the vagina and rectum were opacified with ultrasound gel. T2-weighted images in the sagittal plane were analysed twice by each of the two observers in a blinded fashion. Horizontal and vertical displacement of the bladder neck, bladder base, introitus vaginae, posterior fornix, cul-de-sac, pouch of Douglas, anterior rectal wall and anorectal junction, and the change of the vaginal axis, were measured eight times in each volunteer (two images, each read twice by two observers). Variance components were calculated for subject, observer, week, interactions of these three factors, and pure error. An overall standard error of measurement was calculated for a single observation by one observer on a film from one woman at one visit. Results: For the majority of anatomical reference points, the range of displacements measured was wide and the overall measurement error was large. Intra-observer error and week-to-week variation within a subject were important sources of measurement error. Conclusion: Important sources of measurement error when using triphasic dynamic MRI to measure pelvic organ displacement during straining were identified. Recommendations to minimize those errors are made.

  8. Quantum Estimation Theory of Error and Disturbance in Quantum Measurement

    OpenAIRE

    Watanabe, Yu; Ueda, Masahito

    2011-01-01

    We formulate the error and disturbance in quantum measurement by invoking quantum estimation theory. The disturbance formulated here characterizes the non-unitary state change caused by the measurement. We prove that the product of the error and disturbance is bounded from below by the commutator of the observables. We also find the attainable bound of the product.

  9. Deconvolution Estimation in Measurement Error Models: The R Package decon

    Directory of Open Access Journals (Sweden)

    Xiao-Feng Wang

    2011-03-01

    Full Text Available Data from many scientific areas often come with measurement error. Density or distribution function estimation from contaminated data and nonparametric regression with errors in variables are two important topics in measurement error models. In this paper, we present a new software package decon for R, which contains a collection of functions that use the deconvolution kernel methods to deal with the measurement error problems. The functions allow the errors to be either homoscedastic or heteroscedastic. To make the deconvolution estimators computationally more efficient in R, we adapt the fast Fourier transform algorithm for density estimation with error-free data to the deconvolution kernel estimation. We discuss the practical selection of the smoothing parameter in deconvolution methods and illustrate the use of the package through both simulated and real examples.
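    decon itself is an R package; as a language-neutral sketch of the underlying idea, the Python code below implements a deconvoluting kernel density estimate for homoscedastic Gaussian error with a Fourier-cutoff kernel. The bandwidth h is hand-picked here, whereas the package provides data-driven selection rules.

```python
import numpy as np

rng = np.random.default_rng(8)
n, sigma = 2000, 0.4
x = rng.normal(0.0, 1.0, n)            # unobserved true values
y = x + rng.normal(0.0, sigma, n)      # contaminated observations

def decon_density(y, sigma, grid, h):
    """Deconvoluting KDE with a Fourier-cutoff (sinc) kernel, Gaussian error."""
    t = np.linspace(-1.0 / h, 1.0 / h, 801)
    ecf = np.exp(1j * np.outer(t, y)).mean(axis=1)   # empirical char. fn of Y
    ratio = ecf / np.exp(-0.5 * sigma**2 * t**2)     # divide out the error part
    integrand = np.exp(-1j * np.outer(grid, t)) * ratio
    dt = t[1] - t[0]
    return np.clip(integrand.sum(axis=1).real * dt / (2 * np.pi), 0.0, None)

grid = np.linspace(-4.0, 4.0, 161)
f_hat = decon_density(y, sigma, grid, h=0.3)   # estimate of the density of x
```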

  10. A measurement error model for microarray data analysis

    Institute of Scientific and Technical Information of China (English)

    ZHOU Yiming; CHENG Jing

    2005-01-01

Microarray technology has been widely used to analyze gene expression levels by detecting fluorescence intensity in a high-throughput fashion. However, since the measurement error produced from various sources in microarray experiments is heterogeneous and too large to be ignored, we propose here a measurement error model for microarray data processing, in which the standard deviation of the measurement error is shown to increase linearly with fluorescence intensity. A robust algorithm, which estimates the parameters of the measurement error model from a single microarray without replicated spots, is provided. The model and the algorithm for estimating its parameters from a given data set are tested on both a real data set and a simulated data set, and the results have proven satisfactory. Combining the measurement error model with the traditional Z-test method, a full statistical model has been developed that can significantly improve the statistical inference for identifying differentially expressed genes.
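    The intensity-dependent error model can be illustrated with a small simulation: generate spots whose error standard deviation is linear in intensity, then recover the line from binned spreads. This Python sketch shows the variance structure only, not the paper's single-array estimation algorithm (here the true intensities are known by construction).

```python
import numpy as np

rng = np.random.default_rng(9)

# Spot intensities whose measurement-error sd grows linearly: sd = a + b * mu
mu = rng.uniform(100.0, 10_000.0, 5000)
a, b = 20.0, 0.1
obs = mu + rng.normal(0.0, a + b * mu)

# Recover (a, b) by regressing binned spread estimates on intensity
edges = np.quantile(mu, np.linspace(0.0, 1.0, 21))
idx = np.clip(np.digitize(mu, edges[1:-1]), 0, 19)
centers = np.array([mu[idx == k].mean() for k in range(20)])
sds = np.array([np.abs(obs - mu)[idx == k].mean() for k in range(20)])
sds *= np.sqrt(np.pi / 2.0)               # mean |N(0,s)| equals s * sqrt(2/pi)
b_hat, a_hat = np.polyfit(centers, sds, 1)
print(a_hat, b_hat)
```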

  11. The error analysis and online measurement of linear slide motion error in machine tools

    Science.gov (United States)

    Su, H.; Hong, M. S.; Li, Z. J.; Wei, Y. L.; Xiong, S. B.

    2002-06-01

A new accurate two-probe time-domain method is put forward to measure the straight-going component of motion error in machine tools. The non-periodic and non-closing characteristics of the straightness profile error are liable to bring about higher-order harmonic distortion in the measurement results. However, this distortion can be avoided by the new accurate two-probe time-domain method through the symmetry continuation algorithm, uniformity and the least squares method. The harmonic suppression is analysed in detail using modern control theory. Both the straight-going component of motion error in the machine tool and the profile error of a workpiece manufactured on this machine can be measured at the same time. All of this information is available for diagnosing the origin of faults in machine tools. The analysis is proved correct through experiment.

  12. Non-Gaussian Error Distributions of LMC Distance Moduli Measurements

    CERN Document Server

    Crandall, Sara

    2015-01-01

We construct error distributions for a compilation of 232 Large Magellanic Cloud (LMC) distance moduli values from de Grijs et al. 2014 that give an LMC distance modulus of (m-M)_0 = 18.49 ± 0.13 (median and 1σ symmetrized error). Central estimates found from weighted mean and median statistics are used to construct the error distributions. The weighted mean error distribution is non-Gaussian, flatter and broader than Gaussian, with more (less) probability in the tails (center) than is predicted by a Gaussian distribution; this could be the consequence of unaccounted-for systematic uncertainties. The median statistics error distribution, which does not make use of the individual measurement errors, is also non-Gaussian, more peaked than Gaussian, with less (more) probability in the tails (center) than is predicted by a Gaussian distribution; this could be the consequence of publication bias and/or the non-independence of the measurements.

  13. Sampling errors in rainfall measurements by weather radar

    OpenAIRE

    Piccolo, F.; G. B. Chirico

    2005-01-01

Radar rainfall data are affected by several types of error. Besides the error in the measurement of rainfall reflectivity and its transformation into rainfall intensity, random errors can be generated by the temporal spacing of the radar scans. The aim of this work is to analyze the sensitivity of the estimated rainfall maps to the radar sampling interval, i.e. the time interval between two consecutive radar scans. This analysis has been performed employing data c...

  14. ERROR COMPENSATION OF COORDINATE MEASURING MACHINES WITH LOW STIFFNESS

    Institute of Scientific and Technical Information of China (English)

    2000-01-01

A technique for compensating the errors of coordinate measuring machines (CMMs) with low stiffness is proposed. Some additional items related to the force deformation are introduced into the error compensation equations. The research was carried out on a moving-column horizontal-arm CMM. Experimental results show that the effects of both systematic components of error motions and force deformations are greatly reduced, which shows the effectiveness of the proposed technique.

  15. System Measures Errors Between Time-Code Signals

    Science.gov (United States)

    Cree, David; Venkatesh, C. N.

    1993-01-01

    System measures timing errors between signals produced by three asynchronous time-code generators. Errors between 1-second clock pulses resolved to 2 microseconds. Basic principle of computation of timing errors as follows: central processing unit in microcontroller constantly monitors time data received from time-code generators for changes in 1-second time-code intervals. In response to any such change, microprocessor buffers count of 16-bit internal timer.

  16. Valuation Biases, Error Measures, and the Conglomerate Discount

    NARCIS (Netherlands)

    I. Dittmann (Ingolf); E.G. Maug (Ernst)

    2006-01-01

    textabstractWe document the importance of the choice of error measure (percentage vs. logarithmic errors) for the comparison of alternative valuation procedures. We demonstrate for several multiple valuation methods (averaging with the arithmetic mean, harmonic mean, median, geometric mean) that the

  17. Conditional Standard Errors of Measurement for Composite Scores Using IRT

    Science.gov (United States)

    Kolen, Michael J.; Wang, Tianyou; Lee, Won-Chan

    2012-01-01

    Composite scores are often formed from test scores on educational achievement test batteries to provide a single index of achievement over two or more content areas or two or more item types on that test. Composite scores are subject to measurement error, and as with scores on individual tests, the amount of error variability typically depends on…

  18. Method for online measurement of optical current transformer onsite errors

    International Nuclear Information System (INIS)

    This paper describes a method for the online measurement of the onsite errors of an optical current transformer (OCT), using a conventional electromagnetic current transformer (CT) as the reference transformer. The OCT under measurement is connected in series with the reference electromagnetic CT in the same line bay. The secondary output signals of the OCT and the electromagnetic CT are simultaneously collected and processed using a digital signal processing technique. The tests performed on a prototype clearly indicate that the method is very suitable for measuring errors of the OCT onsite without an interruption in service. The onsite error characteristics of the OCT are analyzed, as well as their stability and repeatability. (paper)

  19. Measurement errors in cirrus cloud microphysical properties

    Directory of Open Access Journals (Sweden)

    H. Larsen

    Full Text Available The limited accuracy of current cloud microphysics sensors used in cirrus cloud studies imposes limitations on the use of the data to examine the cloud's broadband radiative behaviour, an important element of the global energy balance. We review the limitations of the instruments (PMS probes) most widely used for measuring the microphysical structure of cirrus clouds and show the effect of these limitations on descriptions of the cloud radiative properties. The analysis is applied to measurements made as part of the European Cloud and Radiation Experiment (EUCREX) to determine mid-latitude cirrus microphysical and radiative properties.

    Key words. Atmospheric composition and structure (cloud physics and chemistry · Meteorology and atmospheric dynamics · Radiative processes · Instruments and techniques

  20. Haplotype reconstruction error as a classical misclassification problem: introducing sensitivity and specificity as error measures.

    Directory of Open Access Journals (Sweden)

    Claudia Lamina

    Full Text Available BACKGROUND: Statistically reconstructing haplotypes from single nucleotide polymorphism (SNP) genotypes can lead to falsely classified haplotypes. This can be an issue when interpreting haplotype association results or when selecting subjects with certain haplotypes for subsequent functional studies. It was our aim to quantify haplotype reconstruction error and to provide tools for it. METHODS AND RESULTS: In numerous simulation scenarios, we systematically investigated several error measures, including discrepancy, error rate, and R², and introduced sensitivity and specificity to this context. We exemplified several measures in the KORA study, a large population-based study from Southern Germany. We find that the specificity is slightly reduced only for common haplotypes, while the sensitivity is decreased for some, but not all, rare haplotypes. The overall error rate generally increases with increasing number of loci, increasing minor allele frequency of SNPs, decreasing correlation between the alleles, and increasing ambiguity. CONCLUSIONS: We conclude that, with the analytical approach presented here, haplotype-specific error measures can be computed to gain insight into the haplotype uncertainty. This method provides the information on whether a specific risk haplotype can be expected to be reconstructed with essentially no or high misclassification, and thus on the magnitude of expected bias in association estimates. We also illustrate that sensitivity and specificity separate two dimensions of the haplotype reconstruction error, which completely describe the misclassification matrix and thus provide the prerequisite for methods accounting for misclassification.
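
    As a hedged illustration of the sensitivity/specificity idea in this misclassification setting (the haplotype labels and counts below are hypothetical, not the KORA data), each haplotype is treated as the "positive" class in turn, and the true-versus-reconstructed cross-tabulation yields the two error dimensions:

```python
# Toy (hypothetical) pairs of (true haplotype, reconstructed haplotype).
pairs = [("AC", "AC"), ("AC", "AG"), ("AG", "AG"), ("AG", "AG"),
         ("GC", "AC"), ("GC", "GC"), ("AC", "AC"), ("AG", "AC")]

def sens_spec(pairs, hap):
    # Sensitivity: P(reconstructed == hap | true == hap)
    # Specificity: P(reconstructed != hap | true != hap)
    tp = sum(1 for t, r in pairs if t == hap and r == hap)
    fn = sum(1 for t, r in pairs if t == hap and r != hap)
    tn = sum(1 for t, r in pairs if t != hap and r != hap)
    fp = sum(1 for t, r in pairs if t != hap and r == hap)
    return tp / (tp + fn), tn / (tn + fp)

for hap in sorted({t for t, _ in pairs}):
    se, sp = sens_spec(pairs, hap)
    print(f"haplotype {hap}: sensitivity={se:.2f} specificity={sp:.2f}")
```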

  1. Reduction of statistic error in Mihalczo subcriticality measurement

    Energy Technology Data Exchange (ETDEWEB)

    Hazama, Taira [Power Reactor and Nuclear Fuel Development Corp., Oarai, Ibaraki (Japan). Oarai Engineering Center

    1998-08-01

    A theoretical formula for the statistical error estimation in the Mihalczo method was derived, and the dependence of the error on the facility to be measured and on the parameters of the data analysis was investigated. The formula was derived based on reactor noise theory and the error theory for frequency analysis; the error was found to depend on such parameters as the prompt neutron decay constant, detector efficiencies, and the frequency bandwidth. Statistical errors estimated with the formula were compared with experimental values and verified to be reasonable. Through parameter surveys, it was found that there is an optimum combination of the parameters that reduces the magnitude of the errors. In the experiment performed in the DCA subcriticality measurement facility, it was estimated experimentally that the measurement requires 20 minutes to obtain a statistical error of 1% for keff = 0.9. According to the error theory, this might be reduced to 3 seconds in the aqueous fuel system typical of a fuel reprocessing plant. (J.P.N.)

  2. ASSESSING THE DYNAMIC ERRORS OF COORDINATE MEASURING MACHINES

    Institute of Scientific and Technical Information of China (English)

    1998-01-01

    The main factors affecting the dynamic errors of coordinate measuring machines are analyzed. It is pointed out that there are two main contributors to the dynamic errors: one is the rotation of the elements around the joints connected with air bearings, and the other is the bending of the elements caused by the dynamic inertial forces. A method for obtaining the displacement errors at the probe position from dynamic rotational errors is presented. The dynamic rotational errors are measured with inductive position sensors and a laser interferometer. The theoretical and experimental results both show that during fast probing, due to the dynamic inertial forces, there is not only large rotation of the elements around the joints connected with air bearings but also large bending of the weak elements themselves.

  3. An introduction to the measurement errors and data handling

    International Nuclear Information System (INIS)

    Some usual methods to estimate and correlate measurement errors are presented. An introduction to the theory of parameter determination and goodness of the estimates is also presented. Some examples are discussed. (author)

  4. Comparing Measurement Error between Two Different Methods of Measurement of Various Magnitudes

    Science.gov (United States)

    Zavorsky, Gerald S.

    2010-01-01

    Measurement error is a common problem in several fields of research such as medicine, physiology, and exercise science. The standard deviation of repeated measurements on the same person is the measurement error. One way of presenting measurement error is called the repeatability, which is 2.77 multiplied by the within subject standard deviation.…

  5. Error tolerance of topological codes with independent bit-flip and measurement errors

    Science.gov (United States)

    Andrist, Ruben S.; Katzgraber, Helmut G.; Bombin, H.; Martin-Delgado, M. A.

    2016-07-01

    Topological quantum error correction codes are currently among the most promising candidates for efficiently dealing with the decoherence effects inherently present in quantum devices. Numerically, their theoretical error threshold can be calculated by mapping the underlying quantum problem to a related classical statistical-mechanical spin system with quenched disorder. Here, we present results for the general fault-tolerant regime, where we consider both qubit and measurement errors. However, unlike in previous studies, here we vary the strength of the different error sources independently. Our results highlight peculiar differences between toric and color codes. This study complements previous results published in New J. Phys. 13, 083006 (2011), 10.1088/1367-2630/13/8/083006.

  6. Ionospheric error analysis in gps measurements

    Directory of Open Access Journals (Sweden)

    G. Pugliano

    2008-06-01

    Full Text Available The results of an experiment aimed at evaluating the effects of the ionosphere on GPS positioning applications are presented in this paper. Specifically, the study, based upon a differential approach, was conducted utilizing GPS measurements acquired by various receivers located at increasing inter-distances. The experimental research was developed upon the basis of two groups of baselines: the first group is comprised of "short" baselines (less than 10 km); the second group is characterized by greater distances (up to 90 km). The obtained results were compared either upon the basis of the geometric characteristics, for six different baseline lengths, using 24 hours of data, or upon temporal variations, by examining two periods of differing intensity in ionospheric activity, respectively coinciding with the maximum of solar cycle 23 and with conditions of low ionospheric activity. The analysis revealed variations in terms of inter-distance as well as different performances primarily owing to temporal modifications in the state of the ionosphere.

  7. Measuring worst-case errors in a robot workcell

    Energy Technology Data Exchange (ETDEWEB)

    Simon, R.W.; Brost, R.C.; Kholwadwala, D.K. [Sandia National Labs., Albuquerque, NM (United States). Intelligent Systems and Robotics Center

    1997-10-01

    Errors in model parameters, sensing, and control are inevitably present in real robot systems. These errors must be considered in order to automatically plan robust solutions to many manipulation tasks. Lozano-Perez, Mason, and Taylor proposed a formal method for synthesizing robust actions in the presence of uncertainty; this method has been extended by several subsequent researchers. All of these results presume the existence of worst-case error bounds that describe the maximum possible deviation between the robot's model of the world and reality. This paper examines the problem of measuring these error bounds for a real robot workcell. These measurements are difficult, because of the desire to completely contain all possible deviations while avoiding bounds that are overly conservative. The authors present a detailed description of a series of experiments that characterize and quantify the possible errors in visual sensing and motion control for a robot workcell equipped with standard industrial robot hardware. In addition to providing a means for measuring these specific errors, these experiments shed light on the general problem of measuring worst-case errors.

  8. Measuring worst-case errors in a robot workcell

    International Nuclear Information System (INIS)

    Errors in model parameters, sensing, and control are inevitably present in real robot systems. These errors must be considered in order to automatically plan robust solutions to many manipulation tasks. Lozano-Perez, Mason, and Taylor proposed a formal method for synthesizing robust actions in the presence of uncertainty; this method has been extended by several subsequent researchers. All of these results presume the existence of worst-case error bounds that describe the maximum possible deviation between the robot's model of the world and reality. This paper examines the problem of measuring these error bounds for a real robot workcell. These measurements are difficult, because of the desire to completely contain all possible deviations while avoiding bounds that are overly conservative. The authors present a detailed description of a series of experiments that characterize and quantify the possible errors in visual sensing and motion control for a robot workcell equipped with standard industrial robot hardware. In addition to providing a means for measuring these specific errors, these experiments shed light on the general problem of measuring worst-case errors

  9. Correcting systematic errors in high-sensitivity deuteron polarization measurements

    Energy Technology Data Exchange (ETDEWEB)

    Brantjes, N.P.M. [Kernfysisch Versneller Instituut, University of Groningen, NL-9747AA Groningen (Netherlands); Dzordzhadze, V. [Brookhaven National Laboratory, Upton, NY 11973 (United States); Gebel, R. [Institut fuer Kernphysik, Juelich Center for Hadron Physics, Forschungszentrum Juelich, D-52425 Juelich (Germany); Gonnella, F. [Physica Department of ' Tor Vergata' University, Rome (Italy); INFN-Sez. ' Roma tor Vergata,' Rome (Italy); Gray, F.E. [Regis University, Denver, CO 80221 (United States); Hoek, D.J. van der [Kernfysisch Versneller Instituut, University of Groningen, NL-9747AA Groningen (Netherlands); Imig, A. [Brookhaven National Laboratory, Upton, NY 11973 (United States); Kruithof, W.L. [Kernfysisch Versneller Instituut, University of Groningen, NL-9747AA Groningen (Netherlands); Lazarus, D.M. [Brookhaven National Laboratory, Upton, NY 11973 (United States); Lehrach, A.; Lorentz, B. [Institut fuer Kernphysik, Juelich Center for Hadron Physics, Forschungszentrum Juelich, D-52425 Juelich (Germany); Messi, R. [Physica Department of ' Tor Vergata' University, Rome (Italy); INFN-Sez. ' Roma tor Vergata,' Rome (Italy); Moricciani, D. [INFN-Sez. ' Roma tor Vergata,' Rome (Italy); Morse, W.M. [Brookhaven National Laboratory, Upton, NY 11973 (United States); Noid, G.A. [Indiana University Cyclotron Facility, Bloomington, IN 47408 (United States); and others

    2012-02-01

    This paper reports deuteron vector and tensor beam polarization measurements taken to investigate the systematic variations due to geometric beam misalignments and high data rates. The experiments used the In-Beam Polarimeter at the KVI-Groningen and the EDDA detector at the Cooler Synchrotron COSY at Juelich. By measuring with very high statistical precision, the contributions that are second-order in the systematic errors become apparent. By calibrating the sensitivity of the polarimeter to such errors, it becomes possible to obtain information from the raw count rate values on the size of the errors and to use this information to correct the polarization measurements. During the experiment, it was possible to demonstrate that corrections were satisfactory at the level of 10⁻⁵ for deliberately large errors. This may facilitate the real-time observation of vector polarization changes smaller than 10⁻⁶ in a search for an electric dipole moment using a storage ring.

  10. Detection and Classification of Measurement Errors in Bioimpedance Spectroscopy.

    Science.gov (United States)

    Ayllón, David; Gil-Pita, Roberto; Seoane, Fernando

    2016-01-01

    Bioimpedance spectroscopy (BIS) measurement errors may be caused by parasitic stray capacitance, impedance mismatch, cross-talking or their very likely combination. An accurate detection and identification is of extreme importance for further analysis because in some cases and for some applications, certain measurement artifacts can be corrected, minimized or even avoided. In this paper we present a robust method to detect the presence of measurement artifacts and identify what kind of measurement error is present in BIS measurements. The method is based on supervised machine learning and uses a novel set of generalist features for measurement characterization in different immittance planes. Experimental validation has been carried out using a database of complex spectra BIS measurements obtained from different BIS applications and containing six different types of errors, as well as error-free measurements. The method obtained a low classification error (0.33%) and has shown good generalization. Since both the features and the classification schema are relatively simple, the implementation of this pre-processing task in the current hardware of bioimpedance spectrometers is possible. PMID:27362862

  11. Measurement uncertainty evaluation of conicity error inspected on CMM

    Science.gov (United States)

    Wang, Dongxia; Song, Aiguo; Wen, Xiulan; Xu, Youxiong; Qiao, Guifang

    2016-01-01

    The cone is widely used in mechanical design for rotation, centering and fixing. Whether the conicity error can be measured and evaluated accurately will directly influence assembly accuracy and working performance. According to the new-generation geometrical product specification (GPS), the error and its measurement uncertainty should be evaluated together. The mathematical model of the minimum zone conicity error is established, and an improved immune evolutionary algorithm (IIEA) is proposed to search for the conicity error. In the IIEA, initial antibodies are first generated by using quasi-random sequences and two kinds of affinities are calculated. Then, antibody clones are generated and self-adaptively mutated so as to maintain diversity; similar antibodies are suppressed and new random antibodies are generated. Because the mathematical model of conicity error is strongly nonlinear and the input quantities are not independent, it is difficult to use the Guide to the expression of uncertainty in measurement (GUM) method to evaluate measurement uncertainty. An adaptive Monte Carlo method (AMCM) is proposed to estimate measurement uncertainty, in which the number of Monte Carlo trials is selected adaptively and the quality of the numerical results is directly controlled. Cone parts were machined on a CK6140 lathe and measured on a Miracle NC 454 coordinate measuring machine (CMM). The experimental results confirm that the proposed method not only can search for the approximate solution of the minimum zone conicity error (MZCE) rapidly and precisely, but also can evaluate measurement uncertainty and give control variables with an expected numerical tolerance. The conicity errors computed by the proposed method are 20%-40% smaller than those computed by the NC454 CMM software, and the evaluation accuracy improves significantly.
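
    A minimal sketch of the adaptive Monte Carlo idea (in the spirit of GUM Supplement 1, with a hypothetical two-input measurement model standing in for the conicity evaluation): batches of trials are accumulated until the standard-uncertainty estimate is stable to a chosen numerical tolerance.

```python
import numpy as np

rng = np.random.default_rng(1)

def model(x1, x2):
    # Hypothetical nonlinear measurement model standing in for the
    # minimum-zone conicity evaluation (there, the inputs are not independent).
    return np.hypot(x1, x2)

def adaptive_mc(batch=10_000, tol=1e-4, max_batches=100):
    vals = np.empty(0)
    prev_u = None
    for _ in range(max_batches):
        x1 = rng.normal(10.0, 0.01, batch)   # input quantity 1 (assumed pdf)
        x2 = rng.normal(5.0, 0.02, batch)    # input quantity 2 (assumed pdf)
        vals = np.concatenate([vals, model(x1, x2)])
        u = vals.std(ddof=1)                 # standard uncertainty estimate
        if prev_u is not None and abs(u - prev_u) < tol:
            break                            # numerical tolerance reached
        prev_u = u
    return vals.mean(), u, len(vals)

y, u, n = adaptive_mc()
print(f"y = {y:.4f}, u(y) = {u:.4f} after {n} trials")
```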

  12. Dyadic Bivariate Wavelet Multipliers in L2(R2)

    Institute of Scientific and Technical Information of China (English)

    Zhong Yan LI; Xian Liang SHI

    2011-01-01

    The single 2-dilation wavelet multipliers in the one-dimensional case and the single A-dilation (where A is any expansive matrix with integer entries and |det A| = 2) wavelet multipliers in the two-dimensional case were completely characterized by the Wutam Consortium (1998) and Li Z. et al. (2010). But there exist no results on multivariate wavelet multipliers corresponding to an integer expansive dilation matrix whose determinant does not have absolute value 2 in L²(R²). In this paper, we choose 2I₂ = (2 0; 0 2) as the dilation matrix and consider 2I₂-dilation multivariate wavelet Ψ = {ψ₁, ψ₂, ψ₃} (called dyadic bivariate wavelet) multipliers. Here we call a measurable function family f = {f₁, f₂, f₃} a dyadic bivariate wavelet multiplier if Ψ₁ = {F⁻¹(f₁Fψ₁), F⁻¹(f₂Fψ₂), F⁻¹(f₃Fψ₃)} is a dyadic bivariate wavelet for any dyadic bivariate wavelet Ψ = {ψ₁, ψ₂, ψ₃}, where F and F⁻¹ denote the Fourier transform and the inverse Fourier transform, respectively. We study dyadic bivariate wavelet multipliers, give some conditions for dyadic bivariate wavelet multipliers, and give concrete forms of linear phases of dyadic MRA bivariate wavelets.

  13. Multiscale measurement error models for aggregated small area health data.

    Science.gov (United States)

    Aregay, Mehreteab; Lawson, Andrew B; Faes, Christel; Kirby, Russell S; Carroll, Rachel; Watjou, Kevin

    2016-08-01

    Spatial data are often aggregated from a finer (smaller) to a coarser (larger) geographical level. The process of data aggregation induces a scaling effect which smoothes the variation in the data. To address the scaling problem, multiscale models that link the convolution models at different scale levels via a shared random effect have been proposed. One of the main goals in aggregated health data is to investigate the relationship between predictors and an outcome at different geographical levels. In this paper, we extend multiscale models to examine whether a predictor effect at a finer level holds true at a coarser level. To adjust for predictor uncertainty due to aggregation, we applied measurement error models in the framework of the multiscale approach. To assess the benefit of using multiscale measurement error models, we compare the performance of multiscale models with and without measurement error in both real and simulated data. We found that ignoring the measurement error in multiscale models underestimates the regression coefficient, while it overestimates the variance of the spatially structured random effect. On the other hand, accounting for the measurement error in multiscale models provides a better model fit and unbiased parameter estimates.

  14. Multiscale measurement error models for aggregated small area health data.

    Science.gov (United States)

    Aregay, Mehreteab; Lawson, Andrew B; Faes, Christel; Kirby, Russell S; Carroll, Rachel; Watjou, Kevin

    2016-08-01

    Spatial data are often aggregated from a finer (smaller) to a coarser (larger) geographical level. The process of data aggregation induces a scaling effect which smoothes the variation in the data. To address the scaling problem, multiscale models that link the convolution models at different scale levels via a shared random effect have been proposed. One of the main goals in aggregated health data is to investigate the relationship between predictors and an outcome at different geographical levels. In this paper, we extend multiscale models to examine whether a predictor effect at a finer level holds true at a coarser level. To adjust for predictor uncertainty due to aggregation, we applied measurement error models in the framework of the multiscale approach. To assess the benefit of using multiscale measurement error models, we compare the performance of multiscale models with and without measurement error in both real and simulated data. We found that ignoring the measurement error in multiscale models underestimates the regression coefficient, while it overestimates the variance of the spatially structured random effect. On the other hand, accounting for the measurement error in multiscale models provides a better model fit and unbiased parameter estimates. PMID:27566773

  15. Beam induced vacuum measurement error in BEPC II

    Institute of Scientific and Technical Information of China (English)

    2011-01-01

    When the beam in the BEPCII storage ring aborts suddenly, the measured pressure of cold cathode gauges and ion pumps drops suddenly and then decreases gradually to the base pressure. This shows that there is a beam-induced positive error in the pressure measurement during beam operation, the error being the difference between the measured and real pressures. Right after the beam aborts, the error disappears immediately and the measured pressure then equals the real pressure. For one gauge, we can fit a non-linear pressure-time curve to its measured pressure data starting 20 seconds after a sudden beam abort. From this negative-exponential pumping-down curve, the real pressure at the moment the beam starts aborting is extrapolated. With the data of several sudden beam abortions we obtained the errors of that gauge at different beam currents and found that the error is directly proportional to the beam current, as expected. A linear data fit gives the proportionality coefficient of the equation, which we derived to evaluate the real pressure at all times when the beam is on at varying currents.
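
    A short sketch of the extrapolation step under assumed numbers (synthetic readings, not BEPCII data): fit a negative-exponential pump-down curve to gauge readings taken after a beam abort and evaluate it at the abort time to recover the real pressure; the beam-induced error is then the measured-minus-real difference, expected to scale linearly with beam current.

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical gauge readings (Pa) at times t (s) after a sudden beam abort.
t = np.arange(20, 120, 5, dtype=float)
p_meas = 2.0e-7 + 8.0e-7 * np.exp(-t / 40.0)   # synthetic pump-down data

def pumpdown(t, p_base, dp, tau):
    # Negative-exponential decay toward the base pressure.
    return p_base + dp * np.exp(-t / tau)

popt, _ = curve_fit(pumpdown, t, p_meas, p0=(1e-7, 1e-6, 30.0))
p_real_at_abort = pumpdown(0.0, *popt)         # extrapolate to abort time
print(f"extrapolated real pressure at abort: {p_real_at_abort:.3e} Pa")
# With the pressure measured just before the abort, the beam-induced error is
# (measured - p_real_at_abort); repeating at several currents gives the
# linear coefficient in error = k * I_beam.
```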

  16. Phase measurement error in summation of electron holography series

    Energy Technology Data Exchange (ETDEWEB)

    McLeod, Robert A., E-mail: robbmcleod@gmail.com [Department of Physics, University of Alberta, Edmonton, AB, Canada T6G 2E1 (Canada); National Institute for Nanotechnology, 11421 Saskatchewan Dr., Edmonton, AB, Canada T6G 2M9 (Canada); Bergen, Michael [National Institute for Nanotechnology, 11421 Saskatchewan Dr., Edmonton, AB, Canada T6G 2M9 (Canada); Malac, Marek [National Institute for Nanotechnology, 11421 Saskatchewan Dr., Edmonton, AB, Canada T6G 2M9 (Canada); Department of Physics, University of Alberta, Edmonton, AB, Canada T6G 2E1 (Canada)

    2014-06-01

    Off-axis electron holography is a method for the transmission electron microscope (TEM) that measures the electric and magnetic properties of a specimen. The electrostatic and magnetic potentials modulate the electron wavefront phase. The error in measurement of the phase therefore determines the smallest observable changes in electric and magnetic properties. Here we explore the summation of a hologram series to reduce the phase error and thereby improve the sensitivity of electron holography. Summation of hologram series requires independent registration and correction of image drift and phase wavefront drift, the consequences of which are discussed. Optimization of the electro-optical configuration of the TEM for the double biprism configuration is examined. An analytical model of image and phase drift, composed of a combination of linear drift and Brownian random walk, is derived and experimentally verified. The accuracy of image registration via cross-correlation and phase registration is characterized by simulated hologram series. The model of series summation errors allows the optimization of phase error as a function of exposure time and fringe carrier frequency for a target spatial resolution. An experimental example of hologram series summation is provided on WS₂ fullerenes. A metric is provided to measure the object phase error from experimental results and compared to analytical predictions. The ultimate experimental object root-mean-square phase error is 0.006 rad (2π/1050) at a spatial resolution less than 0.615 nm and a total exposure time of 900 s. The ultimate phase error in vacuum adjacent to the specimen is 0.0037 rad (2π/1700). The analytical prediction of phase error differs from the experimental metrics by +7% inside the object and −5% in the vacuum, indicating that the model can provide reliable quantitative predictions. - Highlights: • Optimization of electro-optical configuration for double biprism holography. • Model of drift

  17. Measurement error of waist circumference: Gaps in knowledge

    NARCIS (Netherlands)

    Verweij, L.M.; Terwee, C.B.; Proper, K.I.; Hulshof, C.T.; Mechelen, W.V. van

    2013-01-01

    Objective It is not clear whether measuring waist circumference in clinical practice is problematic because the measurement error is unclear, as well as what constitutes a clinically relevant change. The present study aimed to summarize what is known from state-of-the-art research. Design To identif

  18. ALGORITHM FOR SPHERICITY ERROR AND THE NUMBER OF MEASURED POINTS

    Institute of Scientific and Technical Information of China (English)

    HE Gaiyun; WANG Taiyong; ZHAO Jian; YU Baoqin; LI Guoqin

    2006-01-01

    The data processing technique and the method for determining the optimal number of measured points are studied for sphericity error measured on a coordinate measuring machine (CMM). The criterion for the minimum zone of a spherical surface is analyzed first, and then an approximation technique searching for the minimum sphericity error from the form data is studied. In order to obtain the minimum zone of the spherical surface, the radial separation is reduced gradually by moving the center of the concentric spheres along certain directions with certain steps, making the algorithm precise and efficient. After the appropriate mathematical model for the approximation technique is created, a data processing program is developed accordingly. By processing the metrical data with the developed program, the sphericity errors are evaluated when different numbers of measured points are taken from the same sample, and the corresponding scatter diagram and fit curve for the sample are graphically represented. The optimal number of measured points is determined through regression analysis. Experiments show that both the data processing technique and the method for determining the optimal number of measured points are effective. On average, the obtained sphericity error is 5.78 μm smaller than the least-squares solution, an accuracy increase of 8.63%; the obtained optimal number of measured points is half the number usually measured.
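
    A compact sketch of the approximation idea (illustrative, not the authors' program): starting near the least-squares center, move the center of the concentric spheres along coordinate directions with shrinking steps, keeping any move that reduces the radial separation max(r) − min(r).

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical measured points near a sphere of radius 10 centered at origin.
n = 200
u = rng.normal(size=(n, 3))
pts = 10.0 * u / np.linalg.norm(u, axis=1, keepdims=True)
pts += rng.normal(scale=0.005, size=pts.shape)   # small form error

def radial_separation(center):
    r = np.linalg.norm(pts - center, axis=1)
    return r.max() - r.min()

center = pts.mean(axis=0)                        # initial (near-LS) center
step = 0.01
while step > 1e-7:
    improved = False
    for d in np.vstack([np.eye(3), -np.eye(3)]): # +/- x, y, z directions
        trial = center + step * d
        if radial_separation(trial) < radial_separation(center):
            center, improved = trial, True
    if not improved:
        step *= 0.5                              # refine the step size
print(f"minimum-zone sphericity error ~ {radial_separation(center):.6f}")
```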

  19. Measurement error of waist circumference: gaps in knowledge.

    NARCIS (Netherlands)

    Verweij, L.M.; Terwee, C.B.; Proper, K.I.; Hulshof, C.T.J.; Mechelen, W. van

    2013-01-01

    Objective: It is not clear whether measuring waist circumference in clinical practice is problematic because the measurement error is unclear, as well as what constitutes a clinically relevant change. The present study aimed to summarize what is known from state-of-the-art research. Design: To ident

  20. QUALITATIVE DATA AND ERROR MEASUREMENT IN INPUT-OUTPUT-ANALYSIS

    NARCIS (Netherlands)

    NIJKAMP, P; OOSTERHAVEN, J; OUWERSLOOT, H; RIETVELD, P

    1992-01-01

    This paper is a contribution to the rapidly emerging field of qualitative data analysis in economics. Ordinal data techniques and error measurement in input-output analysis are here combined in order to test the reliability of a low level of measurement and precision of data by means of a stochastic

  1. Automated High Resolution Measurement of Heliostat Slope Errors

    OpenAIRE

    Ulmer, Steffen; März, Tobias; Reinalter, Wolfgang; Belhomme, Boris

    2010-01-01

    A new optical measurement method that simplifies and optimizes the mounting and canting of heliostats and helps to assure their optical quality before commissioning of the solar field was developed. This method is based on the reflection of regular patterns in the mirror surface and their distortions due to mirror surface errors. The measurement has a resolution of about one million points per heliostat with a measurement uncertainty of less than 0.2 mrad and a measurement time of about one m...

  2. AUTOMATED HIGH RESOLUTION MEASUREMENT OF HELIOSTAT SLOPE ERRORS

    OpenAIRE

    Ulmer, Steffen; März, Tobias; Prahl, Christoph; Reinalter, Wolfgang; Belhomme, Boris

    2009-01-01

    A new optical measurement method that simplifies and optimizes the mounting and canting of heliostats and helps to assure their optical quality before commissioning of the solar field was developed. This method is based on the reflection of regular patterns in the mirror surface and their distortions due to mirror surface errors. The measurement has a resolution of about one million points per heliostat with a measurement uncertainty of less than 0.2 mrad and a measurement time of about one m...

  3. Errors

    International Nuclear Information System (INIS)

    Data indicates that about one half of all errors are skill based. Yet, most of the emphasis is focused on correcting rule and knowledge based errors leading to more programs, supervision, and training. None of this corrective action applies to the 'mental lapse' error. Skill based errors are usually committed in performing a routine and familiar task. Workers went to the wrong unit or component, or wrong something. Too often some of these errors result in reactor scrams, turbine trips, or other unwanted actuation. The workers do not need more programs, supervision, or training. They need to know when they are vulnerable and they need to know how to think. Self check can prevent errors, but only if it is practiced intellectually, and with commitment. Skill based errors are usually the result of using habits and senses instead of using our intellect. Even human factors can play a role in the cause of an error on a routine task. Personal injury also, is usually an error. Sometimes they are called accidents, but most accidents are the result of inappropriate actions. Whether we can explain it or not, cause and effect were there. A proper attitude toward risk, and a proper attitude toward danger is requisite to avoiding injury. Many personal injuries can be avoided just by attitude. Errors, based on personal experience and interviews, examines the reasons for the 'mental lapse' errors, and why some of us become injured. The paper offers corrective action without more programs, supervision, and training. It does ask you to think differently. (author)

  4. Estimation of discretization errors in contact pressure measurements.

    Science.gov (United States)

    Fregly, Benjamin J; Sawyer, W Gregory

    2003-04-01

    Contact pressure measurements in total knee replacements are often made using a discrete sensor such as the Tekscan K-Scan sensor. However, no method currently exists for predicting the magnitude of sensor discretization errors in contact force, peak pressure, average pressure, and contact area, making it difficult to evaluate the accuracy of such measurements. This study identifies a non-dimensional area variable, defined as the ratio of the number of perimeter elements to the total number of elements with pressure, which can be used to predict these errors. The variable was evaluated by simulating discrete pressure sensors subjected to Hertzian and uniform pressure distributions with two different calibration procedures. The simulations systematically varied the size of the sensor elements, the contact ellipse aspect ratio, and the ellipse's location on the sensor grid. In addition, contact pressure measurements made with a K-Scan sensor on four different total knee designs were used to evaluate the magnitude of discretization errors under practical conditions. The simulations predicted a strong power law relationship (r² > 0.89) between worst-case discretization errors and the proposed non-dimensional area variable. In the total knee experiments, predicted discretization errors were on the order of 1-4% for contact force and peak pressure and 3-9% for average pressure and contact area. These errors are comparable to those arising from inserting a sensor into the joint space or truncating pressures with pressure sensitive film. The reported power law regression coefficients provide a simple way to estimate the accuracy of experimental measurements made with discrete pressure sensors when the contact patch is approximately elliptical. PMID:12600352
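
    A short illustration (hypothetical sensor grid, not the K-Scan data) of the proposed non-dimensional area variable: count the elements carrying pressure, find those on the perimeter of the contact patch, and form the perimeter-to-total ratio used in the power-law error prediction.

```python
import numpy as np

# Hypothetical discrete sensor: elements with pressure form an ellipse.
ny, nx = 40, 40
y, x = np.mgrid[0:ny, 0:nx]
loaded = ((x - 20) / 12.0) ** 2 + ((y - 20) / 7.0) ** 2 <= 1.0

# Perimeter elements: loaded elements with at least one unloaded 4-neighbor.
padded = np.pad(loaded, 1, constant_values=False)
interior = (padded[:-2, 1:-1] & padded[2:, 1:-1] &
            padded[1:-1, :-2] & padded[1:-1, 2:])
perimeter = loaded & ~interior

ratio = perimeter.sum() / loaded.sum()   # non-dimensional area variable
print(f"perimeter/total element ratio: {ratio:.3f}")
```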

  5. The effect of measurement error on surveillance metrics

    Energy Technology Data Exchange (ETDEWEB)

    Weaver, Brian Phillip [Los Alamos National Laboratory; Hamada, Michael S. [Los Alamos National Laboratory

    2012-04-24

    The purpose of this manuscript is to describe different simulation studies that CCS-6 has performed for the purpose of understanding the effects of measurement error on the surveillance metrics. We assume that the measured items come from a larger population of items. We denote the random variable associated with an item's value of an attribute of interest as X, with X ~ N(μ, σ²). This distribution represents the variability in the population of interest, and we wish to make inference on the parameters μ and σ or on some function of these parameters. When an item X is selected from the larger population, a measurement is made on some attribute of it. This measurement is made with error, and the true value of X is not observed. The rest of this section presents simulation results for the different measurement cases encountered.
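
    A minimal version of such a simulation study, with toy parameter values assumed: draw true item values X ~ N(μ, σ²), add independent measurement error, and compare the inferred population spread with and without that error.

```python
import numpy as np

rng = np.random.default_rng(3)
mu, sigma = 100.0, 2.0        # population parameters of interest
sigma_me = 1.0                # measurement-error standard deviation (assumed)
n, reps = 30, 5_000

sd_true, sd_meas = [], []
for _ in range(reps):
    x = rng.normal(mu, sigma, n)            # true item values
    y = x + rng.normal(0.0, sigma_me, n)    # observed = true + error
    sd_true.append(x.std(ddof=1))
    sd_meas.append(y.std(ddof=1))

print(f"mean sd without error: {np.mean(sd_true):.3f}")
print(f"mean sd with error:    {np.mean(sd_meas):.3f}")
# The second estimate is inflated toward sqrt(sigma^2 + sigma_me^2) ~ 2.236,
# biasing any surveillance metric built on the observed spread.
```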

  6. GY SAMPLING THEORY IN ENVIRONMENTAL STUDIES 2: SUBSAMPLING ERROR MEASUREMENTS

    Science.gov (United States)

    Sampling can be a significant source of error in the measurement process. The characterization and cleanup of hazardous waste sites require data that meet site-specific levels of acceptable quality if scientifically supportable decisions are to be made. In support of this effort,...

  7. Bayesian conformity assessment in presence of systematic measurement errors

    Science.gov (United States)

    Carobbi, Carlo; Pennecchi, Francesca

    2016-04-01

    Conformity assessment of the distribution of the values of a quantity is investigated by using a Bayesian approach. The effect of systematic, non-negligible measurement errors is taken into account. The analysis is general, in the sense that the probability distribution of the quantity can be of any kind, that is, even different from the ubiquitous normal distribution, and the measurement model function, linking the measurand with the observable and non-observable influence quantities, can be non-linear. Further, any joint probability density function can be used to model the available knowledge about the systematic errors. It is demonstrated that the result of the Bayesian analysis developed here reduces to the standard result (obtained through a frequentist approach) when the systematic measurement errors are negligible. A consolidated frequentist extension of that standard result, aimed at including the effect of a systematic measurement error, is directly compared with the Bayesian result, whose superiority is demonstrated. Application of the results obtained here to the derivation of the operating characteristic curves used in sampling plans for inspection by variables is also introduced.
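
    A toy Monte Carlo rendering of the underlying computation (assumed normal quantity, tolerance interval, and prior on the systematic error; not the paper's general derivation): marginalize the conformity probability over the distribution describing the systematic offset.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(6)

# Quantity of interest: assumed N(mu, sigma) across items; tolerance [L, U].
mu, sigma = 10.00, 0.05
L, U = 9.90, 10.10

# Knowledge about the systematic measurement error: assumed N(0, 0.02).
delta = rng.normal(0.0, 0.02, 100_000)

# Conformity probability of the true values, for each possible offset delta
# shifting the measured location mu away from the true one.
p_conform = (norm.cdf(U, loc=mu - delta, scale=sigma)
             - norm.cdf(L, loc=mu - delta, scale=sigma))

print(f"conformity prob., systematic error marginalized: {p_conform.mean():.4f}")
print(f"ignoring systematic error: {norm.cdf(U, mu, sigma) - norm.cdf(L, mu, sigma):.4f}")
```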

  8. Error Analysis for Interferometric SAR Measurements of Ice Sheet Flow

    DEFF Research Database (Denmark)

    Mohr, Johan Jacob; Madsen, Søren Nørvang

    1999-01-01

    This article concerns satellite interferometric radar measurements of ice elevation and three-dimensional flow vectors. It describes sensitivity to (1) atmospheric path length changes and other phase distortions, (2) violations of the stationary flow assumption, and (3) unknown vertical velocities and slope errors in conjunction with a surface-parallel flow assumption. The most surprising result is that, assuming a stationary flow, the east component of the three-dimensional flow derived from ascending and descending orbit data is independent of slope errors and of the vertical flow.

  9. The bivariate current status model

    OpenAIRE

    Groeneboom, P.

    2013-01-01

    For the univariate current status and, more generally, the interval censoring model, distribution theory has been developed for the maximum likelihood estimator (MLE) and smoothed maximum likelihood estimator (SMLE) of the unknown distribution function, see, e.g., [12], [7], [4], [5], [6], [10], [11] and [8]. For the bivariate current status and interval censoring models distribution theory of this type is still absent and even the rate at which we can expect reasonable estimators to converge...

  10. Improving GDP measurement: a measurement-error perspective

    OpenAIRE

    S. Boragan Aruoba; Francis X. Diebold; Jeremy J. Nalewaik; Frank Schorfheide; Dongho Song

    2013-01-01

    We provide a new and superior measure of U.S. GDP, obtained by applying optimal signal-extraction techniques to the (noisy) expenditure-side and income-side estimates. Its properties -- particularly as regards serial correlation -- differ markedly from those of the standard expenditure-side measure and lead to substantially-revised views regarding the properties of GDP.

  11. Time variance effects and measurement error indications for MLS measurements

    DEFF Research Database (Denmark)

    Liu, Jiyuan

    1999-01-01

    Mathematical characteristics of Maximum-Length-Sequences are discussed, and effects of measuring on slightly time-varying systems with the MLS method are examined with computer simulations in MATLAB. A new coherence measure is suggested for the indication of time-variance effects. The results of the simulations show that the proposed MLS coherence can give an indication of time-variance effects.

  12. Quantification and handling of sampling errors in instrumental measurements: a case study

    DEFF Research Database (Denmark)

    Andersen, Charlotte Møller; Bro, R.

    2004-01-01

    Instrumental measurements are often used to represent a whole object even though only a small part of the object is actually measured. This can introduce an error due to the inhomogeneity of the product. Together with other errors resulting from the measuring process, such errors may have a serious impact on the results when the instrumental measurements are used for multivariate regression and prediction. This paper gives examples of how errors influencing the predictions obtained by a multivariate regression model can be quantified and handled. Only random errors are considered here, while in certain situations, the effect of systematic errors is also considerable. The relevant errors contributing to the prediction error are: error in instrumental measurements (x-error), error in reference measurements (y-error), error in the estimated calibration model (regression coefficient error) and model

  13. Generalized Symmetric Divergence Measures and the Probability of Error

    CERN Document Server

    Taneja, Inder Jeet

    2011-01-01

    Three classical divergence measures exist in the literature on information theory and statistics: the Jeffreys-Kullback-Leibler J-divergence, the Sibson-Burbea-Rao Jensen-Shannon divergence, and the Taneja arithmetic-geometric divergence. These three measures bear an interesting relationship among each other. Divergence measures like the Hellinger discrimination, the symmetric chi-square divergence, and the triangular discrimination are also known in the literature. In this paper our aim is to give connections of generalized divergences of the J-divergence and the Jensen-Shannon divergence with the probability of error.
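
    For concreteness, a tiny sketch of two of the symmetric divergences named above, using their standard textbook definitions rather than the paper's generalized families:

```python
import numpy as np

def kl(p, q):
    # Kullback-Leibler divergence (natural log), assuming p, q > 0 everywhere.
    return float(np.sum(p * np.log(p / q)))

def j_divergence(p, q):
    # Jeffreys J-divergence: the symmetrized KL divergence.
    return kl(p, q) + kl(q, p)

def js_divergence(p, q):
    # Jensen-Shannon divergence: KL to the midpoint distribution.
    m = 0.5 * (p + q)
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

p = np.array([0.5, 0.3, 0.2])
q = np.array([0.2, 0.5, 0.3])
print(f"J(p,q)  = {j_divergence(p, q):.4f}")
print(f"JS(p,q) = {js_divergence(p, q):.4f}")
```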

  14. Correcting for measurement error in latent variables used as predictors

    OpenAIRE

    Schofield, Lynne Steuerle

    2015-01-01

    This paper represents a methodological-substantive synergy. A new model, the Mixed Effects Structural Equations (MESE) model which combines structural equations modeling and item response theory, is introduced to attend to measurement error bias when using several latent variables as predictors in generalized linear models. The paper investigates racial and gender disparities in STEM retention in higher education. Using the MESE model with 1997 National Longitudinal Survey of Youth data, I fi...

  15. The error budget of the Dark Flow measurement

    OpenAIRE

    Atrio-Barandela, F.; Kashlinsky, A.; Ebeling, H.; Kocevski, D.; Edge, A.

    2010-01-01

    We analyze the uncertainties and possible systematics associated with the "Dark Flow" measurements using the cumulative Sunyaev-Zeldovich (SZ) effect combined with all-sky catalogs of clusters of galaxies. Filtering of all-sky cosmic microwave background maps is required to remove the intrinsic cosmological signal down to the limit imposed by cosmic variance. Contributions to the errors come from the remaining cosmological signal, which integrates down with the number of clusters, and the ins...

  16. Distance Measurement Error Reduction Analysis for the Indoor Positioning System

    Directory of Open Access Journals (Sweden)

    Tariq Jamil SaifullahKhanzada

    2012-10-01

    Full Text Available This paper presents the DME (Distance Measurement Error) estimation analysis for the wireless indoor positioning channel. The channel model for indoor positioning is derived and implemented using an 8-antenna WLAN (Wireless Local Area Network) system compliant with the IEEE 802.11 a/b/g standard. Channel impairments are derived for the TDOA (Time Difference of Arrival) range estimation. DME calculation is performed over distinct experiments in the TDOA channel profiles using systems deployed with 1, 2, 4 and 8 antennas. Analysis of the DME for the different antennas is presented. The spiral antenna achieves a minimum DME in the range of 1 m. Data demographics scattering for the error spread in the TDOA channel profile is analyzed to show the error behavior. The effect of an increase in the number of recordings on DME is shown by the results. The transmitter antennas' behavior for DME and their standard deviations are depicted through the results, which minimize the error floor to less than 1 m. This reduction has not been achieved in the literature to the best of our knowledge.
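
    A small numeric illustration, under an assumed geometry and timing jitter, of how TDOA timing error maps into distance measurement error, the quantity averaged in a DME analysis:

```python
import numpy as np

rng = np.random.default_rng(7)
c = 3.0e8                                  # propagation speed (m/s)

d_true = 12.0                              # assumed true transmitter range (m)
t_true = d_true / c
jitter = rng.normal(0.0, 2e-9, 10_000)     # assumed 2 ns timing error (1 sigma)

d_est = c * (t_true + jitter)              # range from measured arrival time
dme = np.abs(d_est - d_true)
print(f"mean DME: {dme.mean():.3f} m, 95th percentile: {np.percentile(dme, 95):.3f} m")
```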

  17. Lidar Uncertainty Measurement Experiment (LUMEX) - Understanding Sampling Errors

    Science.gov (United States)

    Choukulkar, A.; Brewer, W. A.; Banta, R. M.; Hardesty, M.; Pichugina, Y.; Senff, Christoph; Sandberg, S.; Weickmann, A.; Carroll, B.; Delgado, R.; Muschinski, A.

    2016-06-01

    Coherent Doppler LIDAR (Light Detection and Ranging) has been widely used to provide measurements of several boundary layer parameters such as profiles of wind speed, wind direction, vertical velocity statistics, mixing layer heights and turbulent kinetic energy (TKE). An important aspect of providing this wide range of meteorological data is to properly characterize the uncertainty associated with these measurements. With the above intent in mind, the Lidar Uncertainty Measurement Experiment (LUMEX) was conducted at Erie, Colorado during the period June 23rd to July 13th, 2014. The major goals of this experiment were the following: (1) characterize sampling error for vertical velocity statistics; (2) analyze sensitivities of different Doppler lidar systems; (3) compare various single and dual Doppler retrieval techniques; (4) characterize the error of spatial representativeness for separation distances up to 3 km; and (5) validate turbulence analysis techniques and retrievals from Doppler lidars. This experiment brought together 5 Doppler lidars, both commercial and research grade, for a period of three weeks for a comprehensive intercomparison study. The Doppler lidars were deployed at the Boulder Atmospheric Observatory (BAO) site in Erie, site of a 300 m meteorological tower. This tower was instrumented with six sonic anemometers at levels from 50 m to 300 m with 50 m vertical spacing. A brief overview of the experiment outline and deployment will be presented. Results from the sampling error analysis and its implications on scanning strategy will be discussed.

  18. Propagation of Radiosonde Pressure Sensor Errors to Ozonesonde Measurements

    Science.gov (United States)

    Stauffer, R. M.; Morris, G.A.; Thompson, A. M.; Joseph, E.; Coetzee, G. J. R.; Nalli, N. R.

    2014-01-01

    Several previous studies highlight pressure (or equivalently, pressure altitude) discrepancies between the radiosonde pressure sensor and that derived from a GPS flown with the radiosonde. The offsets vary during the ascent both in absolute and percent pressure differences. To investigate this problem further, a total of 731 radiosonde-ozonesonde launches from the Southern Hemisphere subtropics to Northern mid-latitudes are considered, with launches between 2005-2013 from both longer-term and campaign-based intensive stations. Five series of radiosondes from two manufacturers (International Met Systems: iMet, iMet-P, iMet-S, and Vaisala: RS80-15N and RS92-SGP) are analyzed to determine the magnitude of the pressure offset. Additionally, electrochemical concentration cell (ECC) ozonesondes from three manufacturers (Science Pump Corporation; SPC and ENSCI-Droplet Measurement Technologies; DMT) are analyzed to quantify the effects these offsets have on the calculation of ECC ozone (O3) mixing ratio profiles (O3MR) from the ozonesonde-measured partial pressure. Approximately half of all offsets are > ±0.6 hPa in the free troposphere, with nearly a third > ±1.0 hPa at 26 km, where the 1.0 hPa error represents ~5% of the total atmospheric pressure. Pressure offsets have negligible effects on O3MR below 20 km (96% of launches lie within ±5% O3MR error at 20 km). Ozone mixing ratio errors above 10 hPa (~30 km) can approach greater than ±10% (>25% of launches that reach 30 km exceed this threshold). These errors cause disagreement between the integrated ozonesonde-only column O3 from the GPS and radiosonde pressure profile by an average of +6.5 DU. Comparisons of total column O3 between the GPS and radiosonde pressure profiles yield average differences of +1.1 DU when the O3 is integrated to burst with addition of the McPeters and Labow (2012) above-burst O3 column climatology. Total column differences are reduced to an average of -0.5 DU when
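
    Since the sonde reports the O3 partial pressure, the mixing ratio is O3MR = p_O3 / P, so a radiosonde pressure offset ΔP propagates into a relative O3MR error of roughly −ΔP/P, which is why the effect grows toward low pressures. A small sketch with assumed values:

```python
# Effect of a radiosonde pressure offset on the derived O3 mixing ratio.
# O3MR = p_O3 / P, so for an offset dP: relative error ~ -dP / P.
def o3mr_rel_error(p_true_hpa, dp_hpa):
    p_reported = p_true_hpa + dp_hpa
    return p_true_hpa / p_reported - 1.0   # (O3MR_reported / O3MR_true) - 1

for p in (700.0, 100.0, 20.0, 10.0):       # pressure levels during ascent (hPa)
    err = o3mr_rel_error(p, 1.0)           # assumed +1.0 hPa sensor offset
    print(f"P = {p:6.1f} hPa -> O3MR error {100 * err:+.2f}%")
```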

  19. Propagation of radiosonde pressure sensor errors to ozonesonde measurements

    Science.gov (United States)

    Stauffer, R. M.; Morris, G. A.; Thompson, A. M.; Joseph, E.; Coetzee, G. J. R.; Nalli, N. R.

    2014-01-01

    Several previous studies highlight pressure (or equivalently, pressure altitude) discrepancies between the radiosonde pressure sensor and that derived from a GPS flown with the radiosonde. The offsets vary during the ascent both in absolute and percent pressure differences. To investigate this problem further, a total of 731 radiosonde/ozonesonde launches from the Southern Hemisphere subtropics to northern mid-latitudes are considered, with launches between 2005 and 2013 from both longer term and campaign-based intensive stations. Five series of radiosondes from two manufacturers (International Met Systems: iMet, iMet-P, iMet-S, and Vaisala: RS80-15N and RS92-SGP) are analyzed to determine the magnitude of the pressure offset. Additionally, electrochemical concentration cell (ECC) ozonesondes from three manufacturers (Science Pump Corporation; SPC and ENSCI/Droplet Measurement Technologies; DMT) are analyzed to quantify the effects these offsets have on the calculation of ECC ozone (O3) mixing ratio profiles (O3MR) from the ozonesonde-measured partial pressure. Approximately half of all offsets are > ±0.6 hPa in the free troposphere, with nearly a third > ±1.0 hPa at 26 km, where the 1.0 hPa error represents ~ 5% of the total atmospheric pressure. Pressure offsets have negligible effects on O3MR below 20 km (96% of launches lie within ±5% O3MR error at 20 km). Ozone mixing ratio errors above 10 hPa (~ 30 km) can approach greater than ±10% (> 25% of launches that reach 30 km exceed this threshold). These errors cause disagreement between the integrated ozonesonde-only column O3 from the GPS and radiosonde pressure profile by an average of +6.5 DU. Comparisons of total column O3 between the GPS and radiosonde pressure profiles yield average differences of +1.1 DU when the O3 is integrated to burst with addition of the McPeters and Labow (2012) above-burst O3 column climatology. Total column differences are reduced to an average of -0.5 DU when the O3 profile is

  20. Patient motion tracking in the presence of measurement errors.

    Science.gov (United States)

    Haidegger, Tamás; Benyó, Zoltán; Kazanzides, Peter

    2009-01-01

    The primary aim of computer-integrated surgical systems is to provide physicians with superior surgical tools for better patient outcomes. Robotic technology is capable of both minimally invasive surgery and microsurgery, offering remarkable advantages for the surgeon and the patient. Current systems allow for sub-millimeter intraoperative spatial positioning; however, certain limitations still remain. Measurement noise and unintended changes in the operating room environment can result in major errors, and positioning errors are a significant danger to patients in procedures involving robots and other automated devices. We have developed a new robotic system at the Johns Hopkins University to support cranial drilling in neurosurgery procedures. The robot provides advanced visualization and safety features. The generic algorithm described in this paper allows for automated compensation of patient motion through optical tracking and Kalman filtering. When applied to the neurosurgery setup, preliminary results show that it is possible to identify patient motion within 700 ms and apply the appropriate compensation with an average positioning error of 1.24 mm after 2 s of setup time.
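
    A minimal 1-D sketch of the filtering idea (a constant-position state model with assumed noise levels, far simpler than the actual tracker): the Kalman filter smooths noisy tracker readings so that a genuine patient shift can be separated from measurement noise.

```python
import numpy as np

rng = np.random.default_rng(4)

# Simulated 1-D marker position (mm): stationary, then a 5 mm patient shift.
true_pos = np.concatenate([np.zeros(100), np.full(100, 5.0)])
meas = true_pos + rng.normal(0.0, 0.5, true_pos.size)   # tracker noise

q, r = 1e-3, 0.5**2            # process and measurement noise variances
x, p = 0.0, 1.0                # state estimate and its variance
est = []
for z in meas:
    p = p + q                  # predict (random-walk position model)
    k = p / (p + r)            # Kalman gain
    x = x + k * (z - x)        # update with the new measurement
    p = (1.0 - k) * p
    est.append(x)

print(f"estimate before shift: {est[99]:.2f} mm, after: {est[-1]:.2f} mm")
```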

  1. Numerical Integration Based on Bivariate Quartic Quasi-Interpolation Operators

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    In this paper, we propose a method for numerical integration using two kinds of C² quasi-interpolation operators on the bivariate spline space, and we also discuss the convergence properties and error estimates. Moreover, the proposed method is applied to the numerical evaluation of 2-D singular integrals. Numerical experiments are carried out and the results are compared with some previously published results.

  2. Integration of Error Compensation of Coordinate Measuring Machines into Feature Measurement: Part I—Model Development

    Directory of Open Access Journals (Sweden)

    Roque Calvo

    2016-09-01

    Full Text Available The development of an error compensation model for coordinate measuring machines (CMMs) and its integration into feature measurement is presented. CMMs are widespread and dependable instruments in industry and laboratories for dimensional measurement. From the tip probe sensor to the machine display, there is a complex transformation of probed point coordinates through the geometrical feature model that makes the assessment of accuracy and uncertainty of measurement results difficult. Therefore, error compensation is not standardized, in contrast to other, simpler instruments. Detailed coordinate error compensation models are generally based on the CMM as a rigid body, which requires a detailed mapping of the CMM's behavior. In this paper a new type of error compensation model is proposed. It evaluates the error from the vectorial composition of length error by axis and its integration into the geometrical measurement model. The variability not explained by the model is incorporated into the uncertainty budget. Model parameters are analyzed and linked to the geometrical errors and uncertainty of the CMM response. Next, the measurement models of flatness, angle, and roundness are developed. The proposed models are useful for measurement improvement with easy integration into CMM signal processing, in particular in industrial environments where built-in solutions are sought. A battery of implementation tests is presented in Part II, where the experimental endorsement of the model is included.

  3. Comparison of Neural Network Error Measures for Simulation of Slender Marine Structures

    DEFF Research Database (Denmark)

    Christiansen, Niels H.; Voie, Per Erlend Torbergsen; Winther, Ole;

    2014-01-01

    Training of an artificial neural network (ANN) adjusts the internal weights of the network in order to minimize a predefined error measure. This error measure is given by an error function. Several different error functions are suggested in the literature. However, by far the most common measure for...

  4. Propagation of radiosonde pressure sensor errors to ozonesonde measurements

    Directory of Open Access Journals (Sweden)

    R. M. Stauffer

    2013-08-01

    Full Text Available Several previous studies highlight pressure (or equivalently, pressure altitude) discrepancies between the radiosonde pressure sensor and that derived from a GPS flown with the radiosonde. The offsets vary during the ascent both in absolute and percent pressure differences. To investigate this, a total of 501 radiosonde/ozonesonde launches from the Southern Hemisphere subtropics to northern mid-latitudes are considered, with launches between 2006-2013 from both historical and campaign-based intensive stations. Three types of electrochemical concentration cell (ECC) ozonesonde manufacturers (Science Pump Corporation; SPC and ENSCI/Droplet Measurement Technologies; DMT) and five series of radiosondes from two manufacturers (International Met Systems: iMet, iMet-P, iMet-S, and Vaisala: RS80 and RS92) are analyzed to determine the magnitude of the pressure offset and the effects these offsets have on the calculation of ECC ozone (O3) mixing ratio profiles (O3MR) from the ozonesonde-measured partial pressure. Approximately half of all offsets are > ±0.7 hPa in the free troposphere, with nearly a quarter > ±1.0 hPa at 26 km, where the 1.0 hPa error represents ~5% of the total atmospheric pressure. Pressure offsets have negligible effects on O3MR below 20 km (98% of launches lie within ±5% O3MR error at 20 km). Ozone mixing ratio errors in the 7-15 hPa layer (29-32 km), a region critical for detection of long-term O3 trends, can approach greater than ±10% (>25% of launches that reach 30 km exceed this threshold). Comparisons of total column O3 yield average differences of +1.6 DU (−1.1 to +4.9 DU, 10th to 90th percentiles) when the O3 is integrated to burst with addition of the McPeters and Labow (2012) above-burst O3 column climatology. Total column differences are reduced to an average of +0.1 DU (−1.1 to +2.2 DU) when the O3 profile is integrated to 10 hPa with subsequent addition of the O3 climatology above 10 hPa. The RS92 radiosondes are clearly

  5. Microelectromechanical Systems Inertial Measurement Unit Error Modelling and Error Analysis for Low-cost Strapdown Inertial Navigation System

    OpenAIRE

    Ramalingam, R.; G. Anitha; J. Shanmugam

    2009-01-01

    This paper presents error modelling and error analysis of microelectromechanical systems (MEMS) inertial measurement unit (IMU) for a low-cost strapdown inertial navigation system (INS). The INS consists of IMU and navigation processor. The IMU provides acceleration and angular rate of the vehicle in all the three axes. In this paper, errors that affect the MEMS IMU, which is of low cost and less volume, are stochastically modelled and analysed using Allan variance. Wavelet decomposition has b...

  6. Examiner error in curriculum-based measurement of oral reading.

    Science.gov (United States)

    Cummings, Kelli D; Biancarosa, Gina; Schaper, Andrew; Reed, Deborah K

    2014-08-01

    Although curriculum-based measures of oral reading (CBM-R) have strong technical adequacy, there is still reason to believe that student performance may be influenced by factors of the testing situation, such as errors examiners make in administering and scoring the test. This study examined the construct-irrelevant variance introduced by examiners using a cross-classified multilevel model. We sought to determine the extent of variance in student CBM-R scores attributable to examiners and, if present, the extent to which it was moderated by students' grade level and English learner (EL) status. Fit indices indicated that a cross-classified random effects model (CCREM) best fit the data, with measures nested within students, students nested within schools, and examiners crossing schools. Intraclass correlations of the CCREM revealed that roughly 16% of the variance in student CBM-R scores was associated with examiners. The remaining variance was associated with the measurement level, 3.59%; between students, 75.23%; and between schools, 5.21%. Results were moderated by grade level but not by EL status. The discussion addresses the implications of this error for low-stakes and high-stakes decisions about students, teacher evaluation systems, and hypothesis testing in reading intervention research.

  7. Development of an Abbe Error Free Micro Coordinate Measuring Machine

    Directory of Open Access Journals (Sweden)

    Qiangxian Huang

    2016-04-01

    Full Text Available A micro Coordinate Measuring Machine (CMM) with a measurement volume of 50 mm × 50 mm × 50 mm and a measuring accuracy of about 100 nm (2σ) has been developed. In this new micro CMM, an XYZ stage, which is driven by three piezo-motors in the X, Y and Z directions, can achieve a drive resolution of about 1 nm and a stroke of more than 50 mm. In order to reduce the crosstalk among the X-, Y- and Z-stages, a special mechanical structure, called a co-planar stage, is introduced. The movement of the stage in each direction is detected by a laser interferometer. A contact-type probe is adopted for measurement. The center of the probe ball coincides with the intersection point of the measuring axes of the three laser interferometers. Therefore, the metrological system of the CMM obeys the Abbe principle in three directions and is free from Abbe error. The CMM is placed in an anti-vibration and thermostatic chamber to avoid the influence of vibration and temperature fluctuation. A series of experimental results show that the measurement uncertainty within 40 mm along the X, Y and Z directions is about 100 nm (2σ). The flatness of the measuring face of a gauge block is also measured, verifying the performance of the developed micro CMM.

  8. Statistical Test for Bivariate Uniformity

    Directory of Open Access Journals (Sweden)

    Zhenmin Chen

    2014-01-01

    Full Text Available The purpose of the multidimensional uniformity test is to check whether the underlying probability distribution of a multidimensional population differs from the multidimensional uniform distribution. The multidimensional uniformity test has applications in various fields such as biology, astronomy, and computer science. Such a test, however, has received less attention in the literature compared with the univariate case. A new test statistic for checking multidimensional uniformity is proposed in this paper. Some important properties of the proposed test statistic are discussed. As a special case, the bivariate test statistic is discussed in detail in this paper. Monte Carlo simulation is used to compare the power of the newly proposed test with that of the distance-to-boundary test, which is a recently published statistical test for multidimensional uniformity. It has been shown that the test proposed in this paper is more powerful than the distance-to-boundary test in some cases.

  9. Aerogel Antennas Communications Study Using Error Vector Magnitude Measurements

    Science.gov (United States)

    Miranda, Felix A.; Mueller, Carl H.; Meador, Mary Ann B.

    2014-01-01

    This presentation discusses an aerogel antenna communications study using error vector magnitude (EVM) measurements. The study was performed using 2x4-element polyimide (PI) aerogel-based phased arrays, designed for operation at 5 GHz, as transmit (Tx) and receive (Rx) antennas separated by a line-of-sight (LOS) distance of 8.5 meters. The results of the EVM measurements demonstrate that polyimide aerogel antennas work appropriately to support digital communication links with typically used modulation schemes such as QPSK and π/4 DQPSK. As such, PI aerogel antennas with higher gain, larger bandwidth and lower mass than typically used microwave laminates could be suitable to enable aerospace-to-ground communication links with enough channel capacity to support voice, data and video links from CubeSats, unmanned air vehicles (UAVs), and commercial aircraft.

  10. Characterization of measurement error sources in Doppler global velocimetry

    Science.gov (United States)

    Meyers, James F.; Lee, Joseph W.; Schwartz, Richard J.

    2001-04-01

    Doppler global velocimetry uses the absorption characteristics of iodine vapour to provide instantaneous three-component measurements of flow velocity within a plane defined by a laser light sheet. Although the technology is straightforward, its utilization as a flow diagnostics tool requires hardening of the optical system and careful attention to detail during data acquisition and processing if routine use in wind tunnel applications is to be achieved. A development programme that reaches these goals is presented. Theoretical and experimental investigations were conducted on each technology element to determine methods that increase measurement accuracy and repeatability. Enhancements resulting from these investigations included methods to ensure iodine vapour calibration stability, single frequency operation of the laser and image alignment to sub-pixel accuracies. Methods were also developed to improve system calibration, and eliminate spatial variations of optical frequency in the laser output, spatial variations in optical transmissivity and perspective and optical distortions in the data images. Each of these enhancements is described and experimental examples given to illustrate the improved measurement performance obtained by the enhancement. The culmination of this investigation was the measured velocity profile of a rotating wheel resulting in a 1.75% error in the mean with a standard deviation of 0.5 m s-1. Comparing measurements of a jet flow with corresponding Pitot measurements validated the use of these methods for flow field applications.

  11. Covariate Measurement Error Correction Methods in Mediation Analysis with Failure Time Data

    OpenAIRE

    Zhao, Shanshan; Prentice, Ross L.

    2014-01-01

    Mediation analysis is important for understanding the mechanisms whereby one variable causes changes in another. Measurement error could obscure the ability of the potential mediator to explain such changes. This paper focuses on developing correction methods for measurement error in the mediator with failure time outcomes. We consider a broad definition of measurement error, including technical error and error associated with temporal variation. The underlying model with the ‘true’ mediator ...

  12. Approximation of bivariate copulas by patched bivariate Fréchet copulas

    KAUST Repository

    Zheng, Yanting

    2011-03-01

    Bivariate Fréchet (BF) copulas characterize dependence as a mixture of three simple structures: comonotonicity, independence and countermonotonicity. They are easily interpretable but have limitations when used as approximations to general dependence structures. To improve the approximation property of the BF copulas and keep the advantage of easy interpretation, we develop a new copula approximation scheme by using BF copulas locally and patching the local pieces together. Error bounds and a probabilistic interpretation of this approximation scheme are developed. The new approximation scheme is compared with several existing copula approximations, including shuffle of min, checkmin, checkerboard and Bernstein approximations and exhibits better performance, especially in characterizing the local dependence. The utility of the new approximation scheme in insurance and finance is illustrated in the computation of the rainbow option prices and stop-loss premiums. © 2010 Elsevier B.V.
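
    For reference, a bivariate Fréchet copula is simply a convex combination of the three simple structures named above. The sketch below evaluates such a copula pointwise; the local "patching" step of the approximation scheme in the record is not shown.

        import numpy as np

        def bf_copula(u, v, a, b):
            """Bivariate Frechet copula: mixture of the comonotonic (M),
            independence (Pi) and countermonotonic (W) copulas with
            weights a, b and 1 - a - b."""
            assert a >= 0.0 and b >= 0.0 and a + b <= 1.0
            m = np.minimum(u, v)              # M(u, v), comonotonicity
            p = u * v                         # Pi(u, v), independence
            w = np.maximum(u + v - 1.0, 0.0)  # W(u, v), countermonotonicity
            return a * m + b * p + (1.0 - a - b) * w

        print(bf_copula(0.3, 0.7, a=0.5, b=0.3))  # 0.5*0.3 + 0.3*0.21 + 0.2*0.0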

  13. Statistical Inference for Partially Linear Regression Models with Measurement Errors

    Institute of Scientific and Technical Information of China (English)

    Jinhong YOU; Qinfeng XU; Bin ZHOU

    2008-01-01

    In this paper, the authors investigate three aspects of statistical inference for partially linear regression models where some covariates are measured with errors. Firstly, a bandwidth selection procedure is proposed, which is a combination of the difference-based technique and the GCV method. Secondly, a goodness-of-fit test procedure is proposed, which is an extension of the generalized likelihood technique. Thirdly, a variable selection procedure for the parametric part is provided based on nonconcave penalization and corrected profile least squares. As in "Variable selection via nonconcave penalized likelihood and its oracle properties" (J. Amer. Statist. Assoc., 96, 2001, 1348-1360), it is shown that the resulting estimator has an oracle property with a proper choice of regularization parameters and penalty function. Simulation studies are conducted to illustrate the finite sample performances of the proposed procedures.

  14. On the Measurement of Privacy as an Attacker's Estimation Error

    CERN Document Server

    Rebollo-Monedero, David; Diaz, Claudia; Forné, Jordi

    2011-01-01

    A wide variety of privacy metrics have been proposed in the literature to evaluate the level of protection offered by privacy-enhancing technologies. Most of these metrics are specific to concrete systems and adversarial models, and are difficult to generalize or translate to other contexts. Furthermore, a better understanding of the relationships between the different privacy metrics is needed to enable a more grounded and systematic approach to measuring privacy, as well as to assist system designers in selecting the most appropriate metric for a given application. In this work we propose a theoretical framework for privacy-preserving systems, endowed with a general definition of privacy in terms of the estimation error incurred by an attacker who aims to disclose the private information that the system is designed to conceal. We show that our framework permits interpreting and comparing a number of well-known metrics under a common perspective. The arguments behind these interpretations are based on fundame...

  15. Francesca Hughes: Architecture of Error: Matter, Measure and the Misadventure of Precision

    DEFF Research Database (Denmark)

    Foote, Jonathan

    2016-01-01

    Review of "Architecture of Error: Matter, Measure and the Misadventure of Precision" by Francesca Hughes (MIT Press, 2014)......Review of "Architecture of Error: Matter, Measure and the Misadventure of Precision" by Francesca Hughes (MIT Press, 2014)...

  16. Statistical Methods to Adjust for Measurement Error in Risk Prediction Models and Observational Studies

    OpenAIRE

    Braun, Danielle

    2013-01-01

    The first part of this dissertation focuses on methods to adjust for measurement error in risk prediction models. In chapter one, we propose a nonparametric adjustment for measurement error in time to event data. Measurement error in time to event data used as a predictor will lead to inaccurate predictions. This arises in the context of self-reported family history, a time to event covariate often measured with error, used in Mendelian risk prediction models. Using validation data, we propos...

  17. Bivariate ensemble model output statistics approach for joint forecasting of wind speed and temperature

    Science.gov (United States)

    Baran, Sándor; Möller, Annette

    2016-06-01

    Forecast ensembles are typically employed to account for prediction uncertainties in numerical weather prediction models. However, ensembles often exhibit biases and dispersion errors, thus they require statistical post-processing to improve their predictive performance. Two popular univariate post-processing models are Bayesian model averaging (BMA) and ensemble model output statistics (EMOS). In the last few years, increased interest has emerged in developing multivariate post-processing models that incorporate dependencies between weather quantities, such as a bivariate distribution for wind vectors, or even a more general setting that allows any types of weather variables to be combined. In line with a recently proposed approach to model temperature and wind speed jointly by a bivariate BMA model, this paper introduces an EMOS model for these weather quantities based on a bivariate truncated normal distribution. The bivariate EMOS model is applied to temperature and wind speed forecasts of the 8-member University of Washington mesoscale ensemble and the 11-member ALADIN-HUNEPS ensemble of the Hungarian Meteorological Service, and its predictive performance is compared to that of the bivariate BMA model and a multivariate Gaussian copula approach post-processing the margins with univariate EMOS. While the predictive skills of the compared methods are similar, the bivariate EMOS model requires considerably lower computation times than the bivariate BMA method.
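
    As a small illustration of the distribution underlying the bivariate EMOS model, the sketch below draws from a bivariate normal with the wind speed margin truncated at zero, using simple rejection sampling. The parameter values are invented, and the EMOS estimation step (fitting parameters to training forecasts) is not shown.

        import numpy as np

        rng = np.random.default_rng(0)

        def sample_bivariate_truncnorm(mu, cov, n, lower=(None, 0.0)):
            """Rejection sampler for a bivariate normal with margins optionally
            truncated from below (here: temperature unbounded, wind speed >= 0)."""
            out = np.empty((0, 2))
            while len(out) < n:
                x = rng.multivariate_normal(mu, cov, size=n)
                keep = np.ones(len(x), dtype=bool)
                for j, lo in enumerate(lower):
                    if lo is not None:
                        keep &= x[:, j] >= lo
                out = np.vstack([out, x[keep]])
            return out[:n]

        # Invented parameters: mean temperature 12 C, mean wind speed 1 m/s.
        print(sample_bivariate_truncnorm([12.0, 1.0], [[4.0, 1.2], [1.2, 2.25]], 5))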

  18. Comparative temperature measurement errors in high thermal gradient fields

    International Nuclear Information System (INIS)

    Accurate measurement of temperature in tumor and surrounding host tissue remains one of the major difficulties in clinical hyperthermia. The need for nonperturbable probes that can operate in electromagnetic and ultrasonic fields has been well established. Less attention has been given to the need for nonperturbing probes, i.e., temperature probes that do not alter the thermal environments they are sensing. This is important in situations where the probe traverses relatively high temperature gradients, such as those resulting from significant differentials in local SAR, blood flow, and thermal properties. Errors are reduced when the thermal properties of the probe and tumor tissue are matched. The ideal transducer would also have low thermal mass and microwave and/or ultrasonic absorption characteristics matched to tissue. Perturbations induced in the temperature gradient field by virtue of axial conduction along the probe shaft were compared for several of the available multisensor temperature probes as well as several prototype multisensor temperature transducers. Well-calibrated thermal gradients ranging from 0 to 100 °C/cm were produced with a stability of 2 millidegrees per minute. The probes compared were: the three-sensor YSI thermocouple probe, the 14-sensor thermistor needle probe, the 10-sensor ion-implanted silicon substrate resistance probe, and the multisensor resistance probe fabricated using microelectronic techniques

  19. Swath altimetry measurements of the mainstem Amazon River: measurement errors and hydraulic implications

    OpenAIRE

    Wilson, M.D.; Durand, M.; H. C. Jung; D. Alsdorf

    2014-01-01

    The Surface Water and Ocean Topography (SWOT) mission, scheduled for launch in 2020, will provide a step-change improvement in the measurement of terrestrial surface water storage and dynamics. In particular, it will provide the first, routine two-dimensional measurements of water surface elevations. In this paper, we aimed to (i) characterize and illustrate in two dimensions the errors which may be found in SWOT swath measurements of terrestrial surface water, (ii) simulate the spatio-temporal sampling scheme of SWOT for the Amazon, and (iii) assess the impact of each of these on estimates of water surface slope and river discharge which may be obtained from SWOT imagery.

  20. Swath-altimetry measurements of the main stem Amazon River: measurement errors and hydraulic implications

    OpenAIRE

    Wilson, M.D.; Durand, M.; H. C. Jung; D. Alsdorf

    2015-01-01

    The Surface Water and Ocean Topography (SWOT) mission, scheduled for launch in 2020, will provide a step-change improvement in the measurement of terrestrial surface-water storage and dynamics. In particular, it will provide the first, routine two-dimensional measurements of water-surface elevations. In this paper, we aimed to (i) characterise and illustrate in two dimensions the errors which may be found in SWOT swath measurements of terrestrial surface water, (ii) simulate the spatio-temporal sampling scheme of SWOT for the Amazon, and (iii) assess the impact of each of these on estimates of water-surface slope and river discharge which may be obtained from SWOT imagery.

  1. ERROR PROCESSING METHOD OF CYCLOIDAL GEAR MEASUREMENT USING 3D COORDINATES MEASURING MACHINE

    Institute of Scientific and Technical Information of China (English)

    1998-01-01

    An error processing method is presented, based on optimization theory and microcomputer techniques, which can be successfully used in cycloidal gear measurement on a three-dimensional coordinate measuring machine (CMM). In the procedure, the minimum quadratic sum of the normal deviations is used as the objective function, and the equidistant curve is dealt with instead of the tooth profile. The CMM is a highly accurate measuring machine that provides a way to evaluate the accuracy of the cycloidal gear completely.

  2. A new bivariate negative binomial regression model

    Science.gov (United States)

    Faroughi, Pouya; Ismail, Noriszura

    2014-12-01

    This paper introduces a new form of bivariate negative binomial (BNB-1) regression which can be fitted to bivariate and correlated count data with covariates. The BNB regression discussed in this study can be fitted to bivariate and overdispersed count data with positive, zero or negative correlations. The joint p.m.f. of the BNB-1 distribution is derived from the product of two negative binomial marginals with a multiplicative factor parameter. Several testing methods were used to check overdispersion and goodness-of-fit of the model. Application of BNB-1 regression is illustrated on a Malaysian motor insurance dataset. The results indicated that BNB-1 regression has a better fit than the bivariate Poisson and BNB-2 models with regard to the Akaike information criterion.

  3. A family of bivariate Pareto distributions

    Directory of Open Access Journals (Sweden)

    P. G. Sankaran

    2014-06-01

    Full Text Available Pareto distributions have been extensively used in the literature for modelling and analysis of income and lifetime data. In the present paper, we introduce a family of bivariate Pareto distributions using a generalized version of the dullness property. Some important bivariate Pareto distributions are derived as special cases. Distributional properties of the family are studied. The dependency structure of the family is investigated. Finally, the family of distributions is applied to two real-life data situations.

  4. Constructions for a bivariate beta distribution

    OpenAIRE

    Olkin, Ingram; Trikalinos, Thomas A.

    2014-01-01

    The beta distribution is a basic distribution serving several purposes. It is used to model data, and also, as a more flexible version of the uniform distribution, it serves as a prior distribution for a binomial probability. The bivariate beta distribution plays a similar role for two probabilities that have a bivariate binomial distribution. We provide a new multivariate distribution with beta marginal distributions, positive probability over the unit square, and correlations over the full ...

  5. Specification and Measurement of Mid-Frequency Wavefront Errors

    Institute of Scientific and Technical Information of China (English)

    XUAN Bin; XIE Jing-jiang

    2006-01-01

    Mid-frequency wavefront errors can be of the greatest importance for some optical components, but they are not explicitly covered by the corresponding international standards such as ISO 10110. The testing methods for these errors also have many aspects to be improved. This paper gives an overview of the specifications, especially of the power spectral density (PSD). The NIF, developed in the USA, and XMM, developed in Europe, have both introduced new testing methods.

  6. Angle measurement error and compensation for decentration rotation of circular gratings

    Institute of Scientific and Technical Information of China (English)

    CHEN Xi-jun; WANG Zhen-huan; ZENG Qing-shuang

    2010-01-01

    As the geometric center of a circular grating does not coincide with its rotation center, the angle measurement error of the circular grating is analyzed. Based on the moiré fringe equations under decentration, a mathematical model of the angle measurement error is derived. It is concluded that the decentration between the center of the circular grating and the center of the revolving shaft leads to a first-harmonic error in the angle measurement. The correctness of this result is proved by experimental data. A method of error compensation is presented, and the angle measurement accuracy of the circular grating is effectively improved by the compensation.
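
    Because decentration produces a first-harmonic error, the compensation can be as simple as fitting and subtracting one sine wave over a full revolution. The Python sketch below demonstrates this on synthetic data; the amplitude, phase and noise values are invented, not taken from the record.

        import numpy as np

        rng = np.random.default_rng(1)

        # Synthetic angle error of a decentred grating: e(theta) = A*sin(theta + phi)
        theta = np.linspace(0.0, 2.0 * np.pi, 360, endpoint=False)
        err = 8.0 * np.sin(theta + 0.6) + rng.normal(0.0, 0.3, theta.size)  # arcsec

        # Least-squares fit of the first harmonic: err ~ a*sin(theta) + b*cos(theta)
        A = np.column_stack([np.sin(theta), np.cos(theta)])
        a, b = np.linalg.lstsq(A, err, rcond=None)[0]

        compensated = err - (a * np.sin(theta) + b * np.cos(theta))
        print(f"RMS before: {err.std():.2f} arcsec, after: {compensated.std():.2f} arcsec")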

  7. Swath altimetry measurements of the mainstem Amazon River: measurement errors and hydraulic implications

    Science.gov (United States)

    Wilson, M. D.; Durand, M.; Jung, H. C.; Alsdorf, D.

    2014-08-01

    The Surface Water and Ocean Topography (SWOT) mission, scheduled for launch in 2020, will provide a step-change improvement in the measurement of terrestrial surface water storage and dynamics. In particular, it will provide the first, routine two-dimensional measurements of water surface elevations. In this paper, we aimed to (i) characterize and illustrate in two dimensions the errors which may be found in SWOT swath measurements of terrestrial surface water, (ii) simulate the spatio-temporal sampling scheme of SWOT for the Amazon, and (iii) assess the impact of each of these on estimates of water surface slope and river discharge which may be obtained from SWOT imagery. We based our analysis on a "virtual mission" for a 300 km reach of the central Amazon (Solimões) River at its confluence with the Purus River, using a hydraulic model to provide water surface elevations according to SWOT spatio-temporal sampling, to which errors were added based on a two-dimensional height error spectrum derived from the SWOT design requirements. We thereby obtained water surface elevation measurements for the Amazon mainstem as may be observed by SWOT. Using these measurements, we derived estimates of river slope and discharge and compared them to those obtained directly from the hydraulic model. We found that cross-channel and along-reach averaging of SWOT measurements using reach lengths of greater than 4 km for the Solimões and 7.5 km for the Purus reduced the effect of systematic height errors, enabling discharge to be reproduced accurately from the water height, assuming known bathymetry and friction. Using cross-section averaging and 20 km reach lengths, results show Nash-Sutcliffe model efficiency values of 0.99 for the Solimões and 0.88 for the Purus, with 2.6 and 19.1% average overall error in discharge, respectively.
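
    The benefit of along-reach averaging can be illustrated with a toy calculation: fitting a line to noisy node heights over a reach suppresses random height errors when estimating slope. The numbers below (node spacing, noise level, true slope) are illustrative only and are not taken from the SWOT requirements.

        import numpy as np

        rng = np.random.default_rng(2)

        # Synthetic 20 km reach sampled every 200 m, true slope 2 cm/km,
        # with illustrative 25 cm random height noise per node.
        x = np.arange(0.0, 20_000.0, 200.0)        # along-stream distance, m
        h_true = 10.0 - 2.0e-5 * x                 # water surface height, m
        h_obs = h_true + rng.normal(0.0, 0.25, x.size)

        # Reach-averaged slope from a linear fit over the whole reach.
        slope = np.polyfit(x, h_obs, 1)[0]
        print(f"estimated slope: {slope * 1e5:+.2f} cm/km (true: -2.00 cm/km)")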

  8. Swath altimetry measurements of the mainstem Amazon River: measurement errors and hydraulic implications

    Directory of Open Access Journals (Sweden)

    M. D. Wilson

    2014-08-01

    Full Text Available The Surface Water and Ocean Topography (SWOT) mission, scheduled for launch in 2020, will provide a step-change improvement in the measurement of terrestrial surface water storage and dynamics. In particular, it will provide the first, routine two-dimensional measurements of water surface elevations. In this paper, we aimed to (i) characterize and illustrate in two dimensions the errors which may be found in SWOT swath measurements of terrestrial surface water, (ii) simulate the spatio-temporal sampling scheme of SWOT for the Amazon, and (iii) assess the impact of each of these on estimates of water surface slope and river discharge which may be obtained from SWOT imagery. We based our analysis on a "virtual mission" for a 300 km reach of the central Amazon (Solimões) River at its confluence with the Purus River, using a hydraulic model to provide water surface elevations according to SWOT spatio-temporal sampling, to which errors were added based on a two-dimensional height error spectrum derived from the SWOT design requirements. We thereby obtained water surface elevation measurements for the Amazon mainstem as may be observed by SWOT. Using these measurements, we derived estimates of river slope and discharge and compared them to those obtained directly from the hydraulic model. We found that cross-channel and along-reach averaging of SWOT measurements using reach lengths of greater than 4 km for the Solimões and 7.5 km for the Purus reduced the effect of systematic height errors, enabling discharge to be reproduced accurately from the water height, assuming known bathymetry and friction. Using cross-section averaging and 20 km reach lengths, results show Nash–Sutcliffe model efficiency values of 0.99 for the Solimões and 0.88 for the Purus, with 2.6 and 19.1% average overall error in discharge, respectively.

  9. Pivot and cluster strategy: a preventive measure against diagnostic errors

    Directory of Open Access Journals (Sweden)

    Shimizu T

    2012-11-01

    Full Text Available Taro Shimizu,1 Yasuharu Tokuda2; 1Rollins School of Public Health, Emory University, Atlanta, GA, USA; 2Institute of Clinical Medicine, Graduate School of Comprehensive Human Sciences, University of Tsukuba, Ibaraki, Japan. Abstract: Diagnostic errors constitute a substantial portion of preventable medical errors. The accumulation of evidence shows that most errors result from one or more cognitive biases, and a variety of debiasing strategies have been introduced. In this article, we introduce a new diagnostic strategy, the pivot and cluster strategy (PCS), encompassing the two mental processes of making a diagnosis, referred to as the intuitive process (System 1) and the analytical process (System 2), in one strategy. With PCS, physicians can recall a set of the most likely differential diagnoses (System 2) of an initial diagnosis made by the physician's intuitive process (System 1), thereby enabling physicians to double-check their diagnosis with two consecutive diagnostic processes. PCS is expected to reduce cognitive errors and enhance diagnostic accuracy and validity, thereby realizing better patient outcomes and cost- and time-effective health care management. Keywords: diagnosis, diagnostic errors, debiasing

  10. Measurement errors in retrospective reports of event histories: a validation study with Finnish register data

    OpenAIRE

    Pyy-Martikainen, Marjo; Rendtel, Ulrich

    2009-01-01

    "It is well known that retrospective survey reports of event histories are affected by measurement errors. Yet little is known about the determinants of measurement errors in event history data or their effects on event history analysis. Making use of longitudinal register data linked at person-level with longitudinal survey data, we provide novel evidence about 1. type and magnitude of measurement errors in survey reports of event histories, 2. validity of classical assumptions about measure...

  11. Total error vs. measurement uncertainty: revolution or evolution?

    Science.gov (United States)

    Oosterhuis, Wytze P; Theodorsson, Elvar

    2016-02-01

    The first strategic EFLM conference "Defining analytical performance goals, 15 years after the Stockholm Conference" was held in the autumn of 2014 in Milan. It maintained the Stockholm 1999 hierarchy of performance goals but rearranged them and established five task and finish groups to work on topics related to analytical performance goals, including one on the "total error" theory. Jim Westgard recently wrote a comprehensive overview of performance goals and of the total error theory that is critical of the results and intentions of the Milan 2014 conference. The "total error" theory, originated by Jim Westgard and co-workers, has a dominating influence on the theory and practice of clinical chemistry but is not accepted in other fields of metrology. The generally accepted uncertainty theory, however, suffers from complex mathematics and perceived impracticability in clinical chemistry. The pros and cons of the total error theory need to be debated, making way for methods that can incorporate all relevant causes of uncertainty when making medical diagnoses and monitoring treatment effects. This development should preferably proceed not as a revolution but as an evolution.

  12. Non-differential measurement error does not always bias diagnostic likelihood ratios towards the null

    Directory of Open Access Journals (Sweden)

    Fosgate GT

    2006-07-01

    Full Text Available Diagnostic test evaluations are susceptible to random and systematic error. Simulated non-differential random error for six different error distributions was evaluated for its effect on measures of diagnostic accuracy for a brucellosis competitive ELISA. Test results were divided into four categories:

  13. Error sources in atomic force microscopy for dimensional measurements: Taxonomy and modeling

    DEFF Research Database (Denmark)

    Marinello, F.; Voltan, A.; Savio, E.;

    2010-01-01

    The error sources are classified into four main groups: scanning system, tip-surface interaction, environment, and data processing. The discussed errors include scaling effects, squareness errors, hysteresis, creep, tip convolution, and thermal drift. A mathematical model of the measurement system is eventually described, as a reference basis for errors...

  14. Swath-altimetry measurements of the main stem Amazon River: measurement errors and hydraulic implications

    Science.gov (United States)

    Wilson, M. D.; Durand, M.; Jung, H. C.; Alsdorf, D.

    2015-04-01

    The Surface Water and Ocean Topography (SWOT) mission, scheduled for launch in 2020, will provide a step-change improvement in the measurement of terrestrial surface-water storage and dynamics. In particular, it will provide the first, routine two-dimensional measurements of water-surface elevations. In this paper, we aimed to (i) characterise and illustrate in two dimensions the errors which may be found in SWOT swath measurements of terrestrial surface water, (ii) simulate the spatio-temporal sampling scheme of SWOT for the Amazon, and (iii) assess the impact of each of these on estimates of water-surface slope and river discharge which may be obtained from SWOT imagery. We based our analysis on a virtual mission for a ~260 km reach of the central Amazon (Solimões) River, using a hydraulic model to provide water-surface elevations according to SWOT spatio-temporal sampling, to which errors were added based on a two-dimensional height error spectrum derived from the SWOT design requirements. We thereby obtained water-surface elevation measurements for the Amazon main stem as may be observed by SWOT. Using these measurements, we derived estimates of river slope and discharge and compared them to those obtained directly from the hydraulic model. We found that cross-channel and along-reach averaging of SWOT measurements using reach lengths greater than 4 km for the Solimões and 7.5 km for the Purus reduced the effect of systematic height errors, enabling discharge to be reproduced accurately from the water height, assuming known bathymetry and friction. Using cross-sectional averaging and 20 km reach lengths, results show Nash-Sutcliffe model efficiency values of 0.99 for the Solimões and 0.88 for the Purus, with 2.6 and 19.1 % average overall error in discharge, respectively. We extend the results to other rivers worldwide and infer that SWOT-derived discharge estimates may be more accurate for rivers with larger channel widths (permitting a greater level of cross

  15. Microelectromechanical Systems Inertial Measurement Unit Error Modelling and Error Analysis for Low-cost Strapdown Inertial Navigation System

    Directory of Open Access Journals (Sweden)

    R. Ramalingam

    2009-11-01

    Full Text Available This paper presents error modelling and error analysis of a microelectromechanical systems (MEMS) inertial measurement unit (IMU) for a low-cost strapdown inertial navigation system (INS). The INS consists of the IMU and a navigation processor. The IMU provides the acceleration and angular rate of the vehicle in all three axes. In this paper, errors that affect the MEMS IMU, which is of low cost and small volume, are stochastically modelled and analysed using Allan variance. Wavelet decomposition has been introduced to remove the high-frequency noise that affects the sensors, in order to obtain the original values of angular rates and accelerations with less noise. This increases the accuracy of the strapdown INS. The results show the effect of errors in the output of the sensors, the easy interpretation of random errors by Allan variance, and the increase in accuracy when wavelet decomposition is used for denoising inertial sensor raw data. Defence Science Journal, 2009, 59(6), pp. 650-658, DOI: http://dx.doi.org/10.14429/dsj.59.1571
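
    The Allan-variance analysis mentioned above can be reproduced with a few lines of numerical code. The sketch below computes the overlapping Allan deviation of a rate signal; the white-noise test input and sampling rate are invented, and this is not the authors' implementation.

        import numpy as np

        def allan_deviation(omega, fs, taus):
            """Overlapping Allan deviation of a rate signal sampled at fs (Hz)
            for a set of averaging times taus (s)."""
            t0 = 1.0 / fs
            theta = np.cumsum(omega) * t0          # integrate rate to angle
            out = []
            for tau in taus:
                m = int(round(tau / t0))           # samples per cluster
                if m < 1 or 2 * m >= len(theta):
                    out.append(np.nan)
                    continue
                d = theta[2 * m:] - 2.0 * theta[m:-m] + theta[:-2 * m]
                out.append(np.sqrt(0.5 * np.mean(d ** 2)) / tau)
            return np.array(out)

        rng = np.random.default_rng(3)
        rate = 0.01 * rng.standard_normal(100_000)   # white-noise gyro at 100 Hz
        taus = np.logspace(-1, 2, 10)                # 0.1 s to 100 s
        print(allan_deviation(rate, 100.0, taus))    # slope -1/2: angle random walk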

  16. Integrated Geometric Errors Simulation, Measurement and Compensation of Vertical Machining Centres

    OpenAIRE

    Gohel, C.K.; Makwana, A.H.

    2014-01-01

    This paper presents research on the geometric errors of vertical machining centres, which are simulated, measured, and compensated. Many errors in CNC machine tools affect the accuracy and repeatability of manufacture. Most of these errors depend on specific parameters such as strength and stress, the dimensional deviations of the machine tool structure, thermal variations, cutting-force-induced errors, and tool wear. In this paper a machining system that ...

  17. Information-theoretic approach to quantum error correction and reversible measurement

    CERN Document Server

    Nielsen, M A; Schumacher, B; Barnum, H N; Caves, Carlton M.; Schumacher, Benjamin; Barnum, Howard

    1997-01-01

    Quantum operations provide a general description of the state changes allowed by quantum mechanics. The reversal of quantum operations is important for quantum error-correcting codes, teleportation, and reversing quantum measurements. We derive information-theoretic conditions and equivalent algebraic conditions that are necessary and sufficient for a general quantum operation to be reversible. We analyze the thermodynamic cost of error correction and show that error correction can be regarded as a kind of "Maxwell demon," for which there is an entropy cost associated with information obtained from measurements performed during error correction. A prescription for thermodynamically efficient error correction is given.

  18. Positive phase error from parallel conductance in tetrapolar bio-impedance measurements and its compensation

    OpenAIRE

    Ivan M Roitt; Torben Lund; Callaghan, Martina F.; Richard H Bayford

    2010-01-01

    Bioimpedance measurements are of great use and can provide considerable insight into biological processes. However, there are a number of possible sources of measurement error that must be considered. The most dominant source of error is found in bipolar measurements, where electrode polarisation effects are superimposed on the true impedance of the sample. Even with the tetrapolar approach that is commonly used to circumvent this issue, other errors can persist. ...

  19. Sensor Interaction as a Source of the Electromagnetic Field Measurement Error

    Directory of Open Access Journals (Sweden)

    Hartansky R.

    2014-12-01

    Full Text Available The article deals with the analytical calculation and numerical simulation of the mutual interaction of electromagnetic sensors. The sensors are components of a field probe, and their mutual interaction causes measurement error. An electromagnetic field probe contains three mutually perpendicular sensors in order to measure the electric field vector. The sensor error is evaluated as a function of the relative position of the sensors. Based on this, recommendations are proposed for the construction of electromagnetic field probes so as to minimize sensor interaction and measurement error.

  20. Study of systematic errors in the luminosity measurement

    International Nuclear Information System (INIS)

    The experimental systematic error in the barrel region was estimated to be 0.44 %. This value is derived considering the systematic uncertainties from the dominant sources but does not include uncertainties which are being studied. In the end cap region, the study of shower behavior and clustering effect is under way in order to determine the angular resolution at the low angle edge of the Liquid Argon Calorimeter. We also expect that the systematic error in this region will be less than 1 %. The technical precision of theoretical uncertainty is better than 0.1 % comparing the Tobimatsu-Shimizu program and BABAMC modified by ALEPH. To estimate the physical uncertainty we will use the ALIBABA [9] which includes O(α²) QED correction in leading-log approximation. (J.P.N.)

  1. Study of systematic errors in the luminosity measurement

    Energy Technology Data Exchange (ETDEWEB)

    Arima, Tatsumi [Tsukuba Univ., Ibaraki (Japan). Inst. of Applied Physics

    1993-04-01

    The experimental systematic error in the barrel region was estimated to be 0.44 %. This value is derived considering the systematic uncertainties from the dominant sources but does not include uncertainties which are being studied. In the end cap region, the study of shower behavior and clustering effect is under way in order to determine the angular resolution at the low angle edge of the Liquid Argon Calorimeter. We also expect that the systematic error in this region will be less than 1 %. The technical precision of theoretical uncertainty is better than 0.1 % comparing the Tobimatsu-Shimizu program and BABAMC modified by ALEPH. To estimate the physical uncertainty we will use the ALIBABA [9] which includes O(α²) QED correction in leading-log approximation. (J.P.N.).

  2. Systematic errors in cosmic microwave background polarization measurements

    CERN Document Server

    O'Dea, D; Johnson, B R; Dea, Daniel O'; Challinor, Anthony

    2006-01-01

    We investigate the impact of instrumental systematic errors on the potential of cosmic microwave background polarization experiments targeting primordial B-modes. To do so, we introduce spin-weighted Mueller matrix-valued fields describing the linear response of the imperfect optical system and receiver, and give a careful discussion of the behaviour of the induced systematic effects under rotation of the instrument. We give the correspondence between the matrix components and known optical and receiver imperfections, and compare the likely performance of pseudo-correlation receivers and those that modulate the polarization with a half-wave plate. The latter is shown to have the significant advantage of not coupling the total intensity into polarization for perfect optics, but potential effects like optical distortions that may be introduced by the quasi-optical wave plate warrant further investigation. A fast method for tolerancing time-invariant systematic effects is presented, which propagates errors throug...

  3. Measurement errors and scaling relations in astrophysics: a review

    CERN Document Server

    Andreon, S

    2012-01-01

    This review article considers some of the most common methods used in astronomy for regressing one quantity against another in order to estimate the model parameters or to predict an observationally expensive quantity using trends between object values. These methods have to tackle some of the awkward features prevalent in astronomical data, namely heteroscedastic (point-dependent) errors, intrinsic scatter, non-ignorable data collection and selection effects, data structure and non-uniform population (often called Malmquist bias), non-Gaussian data, outliers and mixtures of regressions. We outline how least square fits, weighted least squares methods, Maximum Likelihood, survival analysis, and Bayesian methods have been applied in the astrophysics literature when one or more of these features is present. In particular we concentrate on errors-in-variables regression and we advocate Bayesian techniques.

  4. Methodical errors of measurement of the human body tissues electrical parameters

    OpenAIRE

    Antoniuk, O.; Pokhodylo, Y.

    2015-01-01

    Sources of methodological errors in measuring the immittance parameters of biological tissues are described. Measurement errors of the RC parameters of biological tissue equivalent circuits are modelled and analysed over the frequency range. Recommendations on the choice of the test signal frequency for measuring these elements are provided.

  5. Measurement errors in dietary assessment using duplicate portions as reference method

    NARCIS (Netherlands)

    Trijsburg, L.E.

    2016-01-01

    Background: As Food Frequency Questionnaires (FFQs) are subject to measurement error, associations between self-reported intake by FFQ and outcome measures should be...

  6. Incomplete Bivariate Fibonacci and Lucas p-Polynomials

    Directory of Open Access Journals (Sweden)

    Dursun Tasci

    2012-01-01

    Full Text Available We define the incomplete bivariate Fibonacci and Lucas p-polynomials. In the case x=1, y=1, we obtain the incomplete Fibonacci and Lucas p-numbers. If x=2, y=1, we have the incomplete Pell and Pell-Lucas p-numbers. On choosing x=1, y=2, we get the incomplete generalized Jacobsthal numbers and, besides, for p=1 the incomplete generalized Jacobsthal-Lucas numbers. In the case x=1, y=1, p=1, we have the incomplete Fibonacci and Lucas numbers. If x=1, y=1, p=1, k=⌊(n−1)/(p+1)⌋, we obtain the Fibonacci and Lucas numbers. Also generating functions and properties of the incomplete bivariate Fibonacci and Lucas p-polynomials are given.

  7. Integration of Error Compensation of Coordinate Measuring Machines into Feature Measurement: Part II—Experimental Implementation

    Directory of Open Access Journals (Sweden)

    Roque Calvo

    2016-10-01

    Full Text Available Coordinate measuring machines (CMMs) are main instruments of measurement in laboratories and in industrial quality control. A compensation error model was formulated in Part I. It integrates error and uncertainty in the feature measurement model. Experimental implementation for the verification of this model is carried out based on direct testing on a moving-bridge CMM. The regression results by axis are quantified and compared to the CMM indication with respect to the assigned values of the measurand. Next, testing of selected measurements of length, flatness, dihedral angle, and roundness features is accomplished. The measurement of calibrated gauge blocks for length or angle, flatness verification of the CMM granite table, and roundness of a precision glass hemisphere are presented under a setup of repeatability conditions. The results are analysed and compared with alternative methods of estimation. The overall performance of the model is endorsed through experimental verification, as well as its practical use and capability to contribute to the improvement of current standard CMM measuring capabilities.

  8. Analysis of measured data of human body based on error correcting frequency

    Science.gov (United States)

    Jin, Aiyan; Peipei, Gao; Shang, Xiaomei

    2014-04-01

    Anthropometry measures all parts of the human body surface, and the measured data are the basis for analysis and study of the human body, for the establishment and modification of garment sizes, and for the design and operation of online clothing stores. In this paper, several groups of measured data are obtained, and the data error is analysed by examining the error frequency and by using the analysis-of-variance method of mathematical statistics. The paper also covers determination of the accuracy of the measured data and of the difficulty of measuring particular parts of the human body, further study of the causes of data errors, and a summary of the key points for minimizing errors as far as possible. This paper analyses the measured data based on error frequency and, in a way, provides reference elements to promote the development of the garment industry.

  9. Do Survey Data Estimate Earnings Inequality Correctly? Measurement Errors among Black and White Male Workers

    Science.gov (United States)

    Kim, ChangHwan; Tamborini, Christopher R.

    2012-01-01

    Few studies have considered how earnings inequality estimates may be affected by measurement error in self-reported earnings in surveys. Utilizing restricted-use data that links workers in the Survey of Income and Program Participation with their W-2 earnings records, we examine the effect of measurement error on estimates of racial earnings…

  10. Comparing Graphical and Verbal Representations of Measurement Error in Test Score Reports

    Science.gov (United States)

    Zwick, Rebecca; Zapata-Rivera, Diego; Hegarty, Mary

    2014-01-01

    Research has shown that many educators do not understand the terminology or displays used in test score reports and that measurement error is a particularly challenging concept. We investigated graphical and verbal methods of representing measurement error associated with individual student scores. We created four alternative score reports, each…

  11. Exploring the Effectiveness of a Measurement Error Tutorial in Helping Teachers Understand Score Report Results

    Science.gov (United States)

    Zapata-Rivera, Diego; Zwick, Rebecca; Vezzu, Margaret

    2016-01-01

    The goal of this study was to explore the effectiveness of a short web-based tutorial in helping teachers to better understand the portrayal of measurement error in test score reports. The short video tutorial included both verbal and graphical representations of measurement error. Results showed a significant difference in comprehension scores…

  12. Detecting bit-flip errors in a logical qubit using stabilizer measurements.

    Science.gov (United States)

    Ristè, D; Poletto, S; Huang, M-Z; Bruno, A; Vesterinen, V; Saira, O-P; DiCarlo, L

    2015-04-29

    Quantum data are susceptible to decoherence induced by the environment and to errors in the hardware processing it. A future fault-tolerant quantum computer will use quantum error correction to actively protect against both. In the smallest error correction codes, the information in one logical qubit is encoded in a two-dimensional subspace of a larger Hilbert space of multiple physical qubits. For each code, a set of non-demolition multi-qubit measurements, termed stabilizers, can discretize and signal physical qubit errors without collapsing the encoded information. Here using a five-qubit superconducting processor, we realize the two parity measurements comprising the stabilizers of the three-qubit repetition code protecting one logical qubit from physical bit-flip errors. While increased physical qubit coherence times and shorter quantum error correction blocks are required to actively safeguard the quantum information, this demonstration is a critical step towards larger codes based on multiple parity measurements.
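
    The essence of the bit-flip repetition code can be conveyed with a purely classical Monte Carlo sketch: two parity checks (Z1Z2 and Z2Z3) signal which qubit flipped without revealing the encoded bit. The error rate below is invented, and this is of course not a simulation of the superconducting processor itself.

        import numpy as np

        rng = np.random.default_rng(4)

        # syndrome (parity of qubits 0,1 and 1,2) -> which qubit to flip back
        LOOKUP = {(0, 0): None, (1, 0): 0, (1, 1): 1, (0, 1): 2}

        def run_round(p_flip):
            bits = np.zeros(3, dtype=int)            # logical 0 encoded as 000
            bits ^= (rng.random(3) < p_flip).astype(int)
            syndrome = (bits[0] ^ bits[1], bits[1] ^ bits[2])
            qubit = LOOKUP[syndrome]
            if qubit is not None:
                bits[qubit] ^= 1                     # correct the signalled flip
            return bits[0] == 0                      # all bits agree after correction

        trials = 100_000
        ok = sum(run_round(0.05) for _ in range(trials))
        print(f"logical error rate: {1.0 - ok / trials:.4f} (physical rate 0.05)")
        # Expect roughly 3p^2 - 2p^3, i.e. ~0.007: double flips defeat the code.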

  13. A measurement methodology for dynamic angle of sight errors in hardware-in-the-loop simulation

    Science.gov (United States)

    Zhang, Wen-pan; Wu, Jun-hui; Gan, Lin; Zhao, Hong-peng; Liang, Wei-wei

    2015-10-01

    In order to precisely measure the dynamic angle of sight for hardware-in-the-loop simulation, a dynamic measurement methodology was established and a measurement system was built. The errors and drifts, such as synchronization delay, CCD measurement error and drift, laser spot error on the diffuse reflection plane, and optical axis drift of the laser, were measured and analyzed. First, by analyzing and measuring the synchronization delay between the laser and the timing of the control data, an error control method was devised that lowered the synchronization delay to 21 μs. Then, the relationship between the CCD device and the laser spot position was calibrated precisely and fitted by two-dimensional surface fitting; CCD measurement error and drift were controlled below 0.26 mrad. Next, the angular resolution was calculated, and the laser spot error on the diffuse reflection plane was estimated to be 0.065 mrad. Finally, the optical axis drift of the laser was analyzed and measured and did not exceed 0.06 mrad. The measurement results indicate that the maximum of the errors and drifts of the measurement methodology is less than 0.275 mrad. The methodology can support the measurement of dynamic angle of sight at higher precision and larger scale.

  14. BIVARIATE ESTIMATION WITH LEFT-TRUNCATED DATA

    Institute of Scientific and Technical Information of China (English)

    孙六全; 任浩波

    2001-01-01

    In this article we construct a kernel estimate of the probability density function from bivariate data when one component is subject to random left-truncation. We establish consistency and asymptotic normality of the proposed estimator using a strong approximation result. Simulation studies show that the proposed procedure gives a good estimate of the true density function even when the sample size is moderate.

  15. Bivariate dynamic probit models for panel data

    OpenAIRE

    Alfonso Miranda

    2010-01-01

    In this talk, I will discuss the main methodological features of the bivariate dynamic probit model for panel data. I will present an example using simulated data, giving special emphasis to the initial conditions problem in dynamic models and the difference between true and spurious state dependence. The model is fit by maximum simulated likelihood.

  16. A truncated bivariate inverted Dirichlet distribution

    Directory of Open Access Journals (Sweden)

    Saralees Nadarajah

    2013-05-01

    Full Text Available A truncated version of the bivariate inverted Dirichlet distribution is introduced. Unlike the inverted Dirichlet distribution, this possesses finite moments of all orders and could therefore be a better model for certain practical situations. One such situation is discussed. The moments, maximum likelihood estimators and the Fisher information matrix for the truncated distribution are derived.

  17. BIVARIATE REAL-VALUED ORTHOGONAL PERIODIC WAVELETS

    Institute of Scientific and Technical Information of China (English)

    Qiang Li; Xuezhang Liang

    2005-01-01

    In this paper, we construct a kind of bivariate real-valued orthogonal periodic wavelets. The corresponding decomposition and reconstruction algorithms each involve only 8 terms, which is very simple in practical computation. Moreover, the relation between periodic wavelets and Fourier series is also discussed.

  18. BIVARIATE FRACTAL INTERPOLATION FUNCTIONS ON RECTANGULAR DOMAINS

    Institute of Scientific and Technical Information of China (English)

    Xiao-yuan Qian

    2002-01-01

    Non-tensor product bivariate fractal interpolation functions defined on gridded rectangular domains are constructed. Linear spaces consisting of these functions are introduced.The relevant Lagrange interpolation problem is discussed. A negative result about the existence of affine fractal interpolation functions defined on such domains is obtained.

  19. Working with Error and Uncertainty to Increase Measurement Validity

    Science.gov (United States)

    Amrein-Beardsley, Audrey; Barnett, Joshua H.

    2012-01-01

    Over the previous two decades, the era of accountability has amplified efforts to measure educational effectiveness more than Edward Thorndike, the father of educational measurement, likely would have imagined. Expressly, the measurement structure for evaluating educational effectiveness continues to rely increasingly on one sole…

  20. Period, epoch and prediction errors of ephemeris from continuous sets of timing measurements

    CERN Document Server

    Deeg, Hans J

    2015-01-01

    Space missions such as Kepler and CoRoT have led to large numbers of eclipse or transit measurements in nearly continuous time series. This paper shows how to obtain the period error in such measurements from a basic linear least-squares fit, and how to correctly derive the timing error in the prediction of future transit or eclipse events. Assuming strict periodicity, a formula for the period error of such time series is derived: sigma_P = sigma_T * (12/(N^3 - N))^0.5, where sigma_P is the period error, sigma_T the timing error of a single measurement, and N the number of measurements. Relative to the iterative method for period error estimation by Mighell & Plavchan (2013), this much simpler formula leads to smaller period errors, whose correctness has been verified through simulations. For the prediction of times of future periodic events, the usual linear ephemeris, where epoch errors are quoted for the first time measurement, is prone to overestimation of the error of that prediction. This may be avoided...
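
    The quoted period-error formula is easy to verify numerically: simulate evenly spaced timings with Gaussian noise, fit a line, and compare the scatter of the fitted slopes with the formula. The sketch below uses invented values for N, sigma_T and the period.

        import numpy as np

        rng = np.random.default_rng(5)

        N, sigma_T, P_true = 200, 0.001, 3.4567   # timings, days
        epochs = np.arange(N)

        fitted = []
        for _ in range(5000):
            times = epochs * P_true + rng.normal(0.0, sigma_T, N)
            fitted.append(np.polyfit(epochs, times, 1)[0])  # slope = period

        print(f"simulated sigma_P: {np.std(fitted):.3e} d")
        print(f"formula   sigma_P: {sigma_T * np.sqrt(12.0 / (N**3 - N)):.3e} d")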

  1. Measurement of implicit associations between emotional states and computer errors using the implicit association test

    Directory of Open Access Journals (Sweden)

    Maricutoiu, Laurentiu P.

    2011-12-01

    Full Text Available Previous research identified two main emotional outcomes of computer error: anxiety and frustration. These emotions have been associated with low levels of performance in using a computer. The present research used innovative methodology for studying the relations between computer error messages, user anxiety and user frustration. We used the Implicit Association Test (IAT) to measure automated associations between error messages and these two emotional outcomes. A sample of 80 participants completed two questionnaires and two IAT designs. Results indicated that user error messages are more strongly associated with anxiety than with frustration. Personal characteristics such as emotional stability and English proficiency were significantly associated with the implicit anxiety measure, but not with the frustration measure. No significant relations were found between two measures of computer experience and the emotional measures. These results indicated that error-related anxiety is associated with personal characteristics.

  2. Validation of Large-Scale Geophysical Estimates Using In Situ Measurements with Representativeness Error

    Science.gov (United States)

    Konings, A. G.; Gruber, A.; Mccoll, K. A.; Alemohammad, S. H.; Entekhabi, D.

    2015-12-01

    Validating large-scale estimates of geophysical variables by comparing them to in situ measurements neglects the fact that these in situ measurements are not generally representative of the larger area. That is, in situ measurements contain some 'representativeness error'. They also have their own sensor errors. The naïve approach of characterizing the errors of a remote sensing or modeling dataset by comparison to in situ measurements thus leads to error estimates that are spuriously inflated by the representativeness and other errors in the in situ measurements. Nevertheless, this naïve approach is still very common in the literature. In this work, we introduce an alternative estimator of the large-scale dataset error that explicitly takes into account the fact that the in situ measurements have some unknown error. The performance of the two estimators is then compared in the context of soil moisture datasets under different conditions for the true soil moisture climatology and dataset biases. The new estimator is shown to lead to a more accurate characterization of the dataset errors under the most common conditions. If a third dataset is available, the principles of the triple collocation method can be used to determine the errors of both the large-scale estimates and in situ measurements. However, triple collocation requires that the errors in all datasets are uncorrelated with each other and with the truth. We show that even when the assumptions of triple collocation are violated, a triple collocation-based validation approach may still be more accurate than a naïve comparison to in situ measurements that neglects representativeness errors.
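
    A standard covariance-based triple collocation estimator, of the kind the discussion above builds on, can be written in a few lines. The synthetic soil-moisture numbers below are invented; the estimator assumes the three datasets have mutually uncorrelated errors that are also uncorrelated with the truth.

        import numpy as np

        rng = np.random.default_rng(6)

        n = 100_000
        truth = rng.normal(0.25, 0.08, n)             # e.g. soil moisture
        x = truth + rng.normal(0.0, 0.02, n)          # large-scale product
        y = 0.9 * truth + rng.normal(0.0, 0.03, n)    # model, with gain bias
        z = 1.1 * truth + rng.normal(0.0, 0.04, n)    # in situ + representativeness

        c = np.cov(np.vstack([x, y, z]))
        ex2 = c[0, 0] - c[0, 1] * c[0, 2] / c[1, 2]   # error variance of x
        ey2 = c[1, 1] - c[0, 1] * c[1, 2] / c[0, 2]   # error variance of y
        ez2 = c[2, 2] - c[0, 2] * c[1, 2] / c[0, 1]   # error variance of z
        print(np.sqrt([ex2, ey2, ez2]))               # approx [0.02, 0.03, 0.04]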

  3. The Impact of Truancy on Educational Attainment : A Bivariate Ordered Probit Estimator with Mixed Effects

    OpenAIRE

    Buscha, Franz; Conte, Anna

    2010-01-01

    This paper investigates the relationship between educational attainment and truancy. Using data from the Youth Cohort Study of England and Wales, we estimate the causal impact that truancy has on educational attainment at age 16. A complication is that both truancy and attainment are measured as ordered responses, requiring a bivariate ordered probit model to account for the potential endogeneity of truancy. Furthermore, we extend the 'naïve' bivariate ordered probit estimator to include mixed ef...

  4. An inquisition into bivariate threshold effects in the inflation-growth correlation: Evaluating South Africa’s macroeconomic objectives

    OpenAIRE

    Andrew Phiri

    2013-01-01

    Is the SARB’s inflation target of 3-6% compatible with the 6% economic growth objective set by ASGISA? Estimates of bivariate Threshold Vector Autoregressive and corresponding bivariate Threshold Vector Error Correction (BTVEC-BTVAR) econometric models for sub-periods of the South African inflation-growth experience between 1960 and 2010 suggest optimal inflation-growth combinations that present a two-fold proposition. Firstly, for the pe...

  5. Generalized Symmetric Divergence Measures and the Probability of Error

    OpenAIRE

    Taneja, Inder Jeet

    2011-01-01

    Three classical divergence measures exist in the literature on information theory and statistics: the Jeffreys-Kullback-Leibler J-divergence, the Sibson-Burbea-Rao Jensen-Shannon divergence, and the Taneja arithmetic-geometric divergence. These three measures bear an interesting relationship among each other. The divergence measures like Hellinger discrimination, symmetric chi-square divergence, and triangular discrimination are also known in the literature. In this paper, we ha...
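
    Two of the named measures are straightforward to compute for discrete distributions; a minimal Python sketch (assuming q > 0 wherever p > 0):

        import numpy as np

        def kl(p, q):
            # Kullback-Leibler divergence for discrete distributions
            p, q = np.asarray(p, float), np.asarray(q, float)
            m = p > 0
            return float(np.sum(p[m] * np.log(p[m] / q[m])))

        def j_divergence(p, q):
            # Jeffreys' symmetrized KL: J(p, q) = KL(p||q) + KL(q||p)
            return kl(p, q) + kl(q, p)

        def js_divergence(p, q):
            # Jensen-Shannon: average KL to the midpoint distribution
            mid = 0.5 * (np.asarray(p, float) + np.asarray(q, float))
            return 0.5 * kl(p, mid) + 0.5 * kl(q, mid)

        p, q = [0.5, 0.3, 0.2], [0.2, 0.3, 0.5]
        print(j_divergence(p, q), js_divergence(p, q))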

  6. Quantifying Error in Survey Measures of School and Classroom Environments

    Science.gov (United States)

    Schweig, Jonathan David

    2014-01-01

    Developing indicators that reflect important aspects of school and classroom environments has become central in a nationwide effort to develop comprehensive programs that measure teacher quality and effectiveness. Formulating teacher evaluation policy necessitates accurate and reliable methods for measuring these environmental variables. This…

  7. Errors in anthropometric measurements in neonates and infants

    Directory of Open Access Journals (Sweden)

    D Harrison

    2001-09-01

    Full Text Available The accuracy of methods used in Cape Town hospitals and clinics for the measurement of weight, length and age in neonates and infants became suspect during a survey of 12 local authority and 5 private sector clinics in 1994-1995 (Harrison et al. 1998). A descriptive prospective study to determine the accuracy of these methods in neonates at four maternity hospitals [2 public and 2 private] and infants at four child health clinics of the Cape Town City Council was carried out. The main outcome measures were an assessment of three currently used methods of measuring crown-heel length, namely with a measuring board, a mat and a tape measure; a comparison of weight differences when an infant is weighed fully clothed, naked and in a napkin only; and the differences in age estimated by calendar dates and by a specially designed electronic calculator. The results showed that the current methods used to measure infants in Cape Town vary widely from one institution to another. Many measurements are inaccurate and there is a real need for uniformity and accuracy. This can only be implemented by an effective education program so as to ensure that accurate measurements are used in monitoring the health of young children in Cape Town and elsewhere.

  8. Measurement error analysis of Brillouin lidar system using F-P etalon and ICCD

    Science.gov (United States)

    Yao, Yuan; Niu, Qunjie; Liang, Kun

    2016-09-01

    A Brillouin lidar system using a Fabry-Pérot (F-P) etalon and an Intensified Charge Coupled Device (ICCD) is capable of real-time remote measurement of seawater properties such as temperature. The measurement accuracy is determined by two key parameters, the Brillouin frequency shift and the Brillouin linewidth. Three major errors, namely the laser frequency instability, the calibration error of the F-P etalon and the random shot noise, are discussed. Theoretical analysis combined with simulation results showed that the laser and F-P etalon cause about 4 MHz of error in both the Brillouin shift and linewidth, and that random noise brings more error to the linewidth than to the frequency shift. A comprehensive and comparative analysis of the overall errors under various conditions proved that a colder ocean (10 °C) is more accurately measured with the Brillouin linewidth, and a warmer ocean (30 °C) is better measured with the Brillouin shift.

  9. Identification in a Generalization of Bivariate Probit Models with Endogenous Regressors

    OpenAIRE

    Sukjin Han; Edward J. Vytlacil

    2013-01-01

    This paper provides identification results for a class of models specified by a triangular system of two equations with binary endogenous variables. The joint distribution of the latent error terms is specified through a parametric copula structure, including the normal copula as a special case, while the marginal distributions of the latent error terms are allowed to be arbitrary but known. This class of models includes bivariate probit models as a special case. The paper demonstrates that a...

  10. Inference for the Bivariate and Multivariate Hidden Truncated Pareto(type II) and Pareto(type IV) Distribution and Some Measures of Divergence Related to Incompatibility of Probability Distribution

    Science.gov (United States)

    Ghosh, Indranil

    2011-01-01

    Consider a discrete bivariate random variable (X, Y) with possible values x_1, x_2, ..., x_I for X and y_1, y_2, ..., y_J for Y. Further suppose that the corresponding families of conditional distributions, for X given values of Y and for Y given values of X, are available. We…

  11. Design and application of location error teaching aids in measuring and visualization

    OpenAIRE

    Yu Fengning; Li Lei; Guo Jian; Mai Songman; Shi Jiashun

    2015-01-01

    As an abstract concept, ‘location error’ is considered to be an important element that is greatly difficult to understand and apply. The paper designs and develops an instrument to measure location error. Location error is affected by the positioning method and the choice of reference, so we select the positioning element by rotating the disk. The tiny movement is transferred by a grating ruler, and PLC programming shows the error on a text display, which also helps students understand the positi...

  12. The effect of measurement error on the dose-response curve.

    OpenAIRE

    Yoshimura, I

    1990-01-01

    In epidemiological studies for environmental risk assessment, doses are often observed with errors. However, these errors have received little attention in data analysis. This paper studies the effect of measurement errors on the observed dose-response curve. Under the assumptions of a monotone likelihood ratio on the errors and a monotone increasing dose-response curve, it is verified that the slope of the observed dose-response curve is likely to be gentler than the true one. The observed variance...

  13. Discrete time interval measurement system: fundamentals, resolution and errors in the measurement of angular vibrations

    International Nuclear Information System (INIS)

    The traditional method for measuring the velocity and the angular vibration in the shaft of rotating machines using incremental encoders is based on counting the pulses at given time intervals. This method is generically called the time interval measurement system (TIMS). A variant of this method that we have developed in this work consists of measuring the corresponding time of each pulse from the encoder and sampling the signal by means of an A/D converter as if it were an analog signal, that is to say, in discrete time. For this reason, we have named this method the discrete time interval measurement system (DTIMS). This measurement system provides a substantial improvement in precision and frequency resolution compared with the traditional method of counting pulses. In addition, this method permits modification of the width of some pulses in order to obtain a mark-phase on every lap. This paper explains the theoretical fundamentals of the DTIMS and its application for measuring the angular vibrations of rotating machines. It also presents the required relationship between the sampling rate of the signal, the number of pulses of the encoder and the rotating velocity in order to obtain the required resolution and to delimit the methodological errors in the measurement.

  14. Measurement and analysis of typical motion error traces from a circular test

    Institute of Scientific and Technical Information of China (English)

    2008-01-01

    The circular test provides a rapid and efficient way of measuring the contouring accuracy of a machine tool. To get the actual point coordinate in the work plane, an improved measurement instrument - a new ball bar test system - is presented in this paper to identify both the radial error and the rotation angle error when the machine is manipulated to move in circular traces. Based on the measured circular error, a combination of Fourier components is chosen to represent the systematic form error that fluctuates in the radial direction. The typical motion errors represented by the corresponding Fourier components can thus be identified. The values for machine compensation can be calculated and adjusted until the desired results are achieved.
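
    The decomposition into Fourier components is a plain least-squares fit. A minimal Python sketch (a generic fit, not the paper's implementation; the mapping of particular orders to particular machine errors follows the paper):

        import numpy as np

        def radial_fourier(theta, r, orders=(1, 2, 3, 4)):
            # Fit r(theta) ~ c0 + sum_k [a_k cos(k*theta) + b_k sin(k*theta)]
            cols = [np.ones_like(theta)]
            for k in orders:
                cols += [np.cos(k * theta), np.sin(k * theta)]
            A = np.column_stack(cols)
            coef, *_ = np.linalg.lstsq(A, r, rcond=None)
            return coef   # [c0, a1, b1, a2, b2, ...]

        theta = np.linspace(0, 2 * np.pi, 360, endpoint=False)
        r = 0.002 * np.cos(2 * theta) + 0.0005 * np.sin(3 * theta)  # mm
        print(radial_fourier(theta, r).round(5))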

  15. A measuring and correcting method about locus errors in robot welding

    Institute of Scientific and Technical Information of China (English)

    2001-01-01

    When regularly arranged tubules are welded onto a bobbin by a robot, the position and orientation of some tubules may be changed by such factors as thermal deformation and positioning errors, which makes it very difficult to weld automatically and continuously by the teach-and-play method. In this paper, an error measuring system is presented, by which the position and orientation errors of tubules relative to the taught one can be measured. A method to correct the locus errors is also proposed, by which the moving locus planned via teaching points can be corrected in real time according to the measured error parameters, so that, by teaching just one tubule, all tubules on a bobbin can be welded automatically.

  16. Comparing objective and subjective error measures for color constancy

    NARCIS (Netherlands)

    M.P. Lucassen; A. Gijsenij; T. Gevers

    2008-01-01

    We compare an objective and a subjective performance measure for color constancy algorithms. Eight hyper-spectral images were rendered under a neutral reference illuminant and four chromatic illuminants (Red, Green, Yellow, Blue). The scenes rendered under the chromatic illuminants were color correc

  17. From Measurements Errors to a New Strain Gauge Design

    DEFF Research Database (Denmark)

    Mikkelsen, Lars Pilgaard; Zike, Sanita; Salviato, Marco;

    2015-01-01

    Significant over-prediction of the material stiffness, in the order of 1-10% for polymer-based composites, has been experimentally observed and numerically determined when using strain gauges for strain measurements instead of non-contact methods such as digital image correlation or less stiff method...

  18. Measurement of Root-Mean-Square Phase Errors in Arrayed Waveguide Gratings

    Institute of Scientific and Technical Information of China (English)

    ZHENG Xiao-Ping; CHU Yuan-Liang; ZHAO Wei; ZHANG Han-Yi; GUO Yi-Li

    2004-01-01

    The interference-based method to measure the root-mean-square phase errors in SiO2-based arrayed waveguide gratings (AWGs) is presented. The experimental results show that the rms phase error of the tested AWG is 0.72 rad.

  19. Variability in Reliability Coefficients and the Standard Error of Measurement from School District to District.

    Science.gov (United States)

    Feldt, Leonard S.; Qualls, Audrey L.

    1999-01-01

    Examined the stability of the standard error of measurement and the relationship between the reliability coefficient and the variance of both true scores and error scores for 170 school districts in a state. As expected, reliability coefficients varied as a function of group variability, but the variation in split-half coefficients from school to…
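
    The relationship underlying the study is the classical-test-theory identity SEM = SD * sqrt(1 - r_xx). A minimal sketch with made-up numbers, showing how districts with different score spreads (and hence different reliability coefficients) can share the same SEM:

        import math

        def sem(sd, reliability):
            # Standard error of measurement from observed-score SD
            # and reliability coefficient: SEM = SD * sqrt(1 - r_xx)
            return sd * math.sqrt(1.0 - reliability)

        print(sem(12.0, 0.91))  # wider district:    12 * 0.3 = 3.6
        print(sem(6.0, 0.64))   # narrower district:  6 * 0.6 = 3.6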

  20. Measuring Articulatory Error Consistency in Children with Developmental Apraxia of Speech

    Science.gov (United States)

    Betz, Stacy K.; Stoel-Gammon, Carol

    2005-01-01

    Error inconsistency is often cited as a characteristic of children with speech disorders, particularly developmental apraxia of speech (DAS); however, few researchers operationally define error inconsistency and the definitions that do exist are not standardized across studies. This study proposes three formulas for measuring various aspects of…

  1. On Bivariate Generalized Exponential-Power Series Class of Distributions

    OpenAIRE

    Jafari, Ali Akbar; Roozegar, Rasool

    2015-01-01

    In this paper, we introduce a new class of bivariate distributions by compounding the bivariate generalized exponential and power-series distributions. This new class contains some new sub-models such as the bivariate generalized exponential distribution, the bivariate generalized exponential-poisson, -logarithmic, -binomial and -negative binomial distributions. We derive different properties of the new class of distributions. The EM algorithm is used to determine the maximum likelihood estim...

  2. Low-frequency Periodic Error Identification and Compensation for Star Tracker Attitude Measurement

    Institute of Scientific and Technical Information of China (English)

    WANG Jiongqi; XIONG Kai; ZHOU Haiyin

    2012-01-01

    The low-frequency periodic error of a star tracker is one of the most critical problems for high-accuracy satellite attitude determination. In this paper an approach is proposed to identify and compensate the low-frequency periodic error of a star tracker in attitude measurement. The analytical expression between the estimated gyro drift and the low-frequency periodic error of the star tracker is derived first. Then the low-frequency periodic error, which can be expressed by a Fourier series, is identified from the frequency spectrum of the estimated gyro drift according to the solution of the first step. Furthermore, a compensation model of the low-frequency periodic error is established based on the identified parameters to improve the attitude determination accuracy. Finally, promising simulated experimental results demonstrate the validity and effectiveness of the proposed method. The periodic error for attitude determination is basically eliminated and the estimation precision is greatly improved.

  3. Reliability for some bivariate beta distributions

    Directory of Open Access Journals (Sweden)

    Nadarajah Saralees

    2005-01-01

    Full Text Available In the area of stress-strength models there has been a large amount of work as regards estimation of the reliability R = Pr(X < Y) when X and Y arise from a bivariate distribution with dependence between X and Y. In particular, we derive explicit expressions for R when the joint distribution is bivariate beta. The calculations involve the use of special functions.

  4. Reliability for some bivariate gamma distributions

    Directory of Open Access Journals (Sweden)

    Nadarajah Saralees

    2005-01-01

    Full Text Available In the area of stress-strength models, there has been a large amount of work as regards estimation of the reliability R = Pr(X < Y) when X and Y arise from a bivariate distribution with dependence between X and Y. In particular, we derive explicit expressions for R when the joint distribution is bivariate gamma. The calculations involve the use of special functions.
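
    The explicit expressions in these two records involve special functions, but R = Pr(X < Y) is easy to cross-check by Monte Carlo. A sketch using a Gaussian copula with gamma marginals as a stand-in dependent bivariate model (illustrative only; not the papers' bivariate distributions):

        import numpy as np
        from scipy import stats

        def reliability_mc(a1, a2, rho, n=200_000, seed=0):
            # R = Pr(X < Y) for dependent gamma marginals joined
            # by a Gaussian copula with correlation rho
            rng = np.random.default_rng(seed)
            z = rng.multivariate_normal([0, 0], [[1, rho], [rho, 1]], size=n)
            u = stats.norm.cdf(z)                # uniform marginals
            x = stats.gamma.ppf(u[:, 0], a1)     # dependent gamma X
            y = stats.gamma.ppf(u[:, 1], a2)     # dependent gamma Y
            return float(np.mean(x < y))

        print(reliability_mc(2.0, 3.0, rho=0.5))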

  5. Bivariate copula in fitting rainfall data

    Science.gov (United States)

    Yee, Kong Ching; Suhaila, Jamaludin; Yusof, Fadhilah; Mean, Foo Hui

    2014-07-01

    Copulas are widely used in various areas to determine the joint distribution of two variables. The joint distribution of rainfall characteristics obtained using a copula model is preferable to standard bivariate modelling, as copulas are believed to overcome some of its limitations. Six copula models are applied to obtain the most suitable bivariate distribution between two rain gauge stations. The copula models are Ali-Mikhail-Haq (AMH), Clayton, Frank, Galambos, Gumbel-Hougaard (GH) and Plackett. The rainfall data used in the study are selected from rain gauge stations located in the southern part of Peninsular Malaysia, during the period from 1980 to 2011. The goodness-of-fit test in this study is based on the Akaike information criterion (AIC).

  6. Bivariate phase-rectified signal averaging

    CERN Document Server

    Schumann, Aicko Y; Bauer, Axel; Schmidt, Georg

    2008-01-01

    Phase-Rectified Signal Averaging (PRSA) was shown to be a powerful tool for the study of quasi-periodic oscillations and nonlinear effects in non-stationary signals. Here we present a bivariate PRSA technique for the study of the inter-relationship between two simultaneous data recordings. Its performance is compared with traditional cross-correlation analysis, which, however, does not work well for non-stationary data and cannot distinguish the coupling directions in complex nonlinear situations. We show that bivariate PRSA allows the analysis of events in one signal at times where the other signal is in a certain phase or state; it is stable in the presence of noise and robust against non-stationarities.
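
    A minimal reading of the idea in Python (an illustrative sketch, not the authors' implementation): pick anchor points where a trigger signal increases, then average windows of the second signal around those anchors.

        import numpy as np

        def bivariate_prsa(trigger, target, L=50):
            # Anchors: samples where the trigger signal increases
            anchors = np.where(trigger[1:] > trigger[:-1])[0] + 1
            anchors = anchors[(anchors >= L) & (anchors < len(target) - L)]
            # Average 2L-sample windows of the target around each anchor
            windows = np.stack([target[a - L:a + L] for a in anchors])
            return windows.mean(axis=0)

        rng = np.random.default_rng(0)
        hr = rng.standard_normal(5000).cumsum()            # trigger signal
        bp = np.roll(hr, 3) + rng.standard_normal(5000)    # lagged response
        print(bivariate_prsa(hr, bp).shape)                # (100,)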

  7. Copula-based bivariate binary response models

    OpenAIRE

    Winkelmann, Rainer

    2009-01-01

    The bivariate probit model is frequently used for estimating the effect of an endogenous binary regressor on a binary outcome variable. This paper discusses simple modifications that maintain the probit assumption for the marginal distributions while introducing non-normal dependence among the two variables using copulas. Simulation results and evidence from two applications, one on the effect of insurance status on ambulatory expenditure and one on the effect of completing high school on sub...

  8. Covariate analysis of bivariate survival data

    Energy Technology Data Exchange (ETDEWEB)

    Bennett, L.E.

    1992-01-01

    The methods developed are used to analyze the effects of covariates on bivariate survival data when censoring and ties are present. The proposed method provides models for bivariate survival data that include differential covariate effects and censored observations. The proposed models are based on an extension of the univariate Buckley-James estimators which replace censored data points by their expected values, conditional on the censoring time and the covariates. For the bivariate situation, it is necessary to determine the expectation of the failure times for one component conditional on the failure or censoring time of the other component. Two different methods have been developed to estimate these expectations. In the semiparametric approach these expectations are determined from a modification of Burke's estimate of the bivariate empirical survival function. In the parametric approach censored data points are also replaced by their conditional expected values where the expected values are determined from a specified parametric distribution. The model estimation will be based on the revised data set, comprised of uncensored components and expected values for the censored components. The variance-covariance matrix for the estimated covariate parameters has also been derived for both the semiparametric and parametric methods. Data from the Demographic and Health Survey was analyzed by these methods. The two outcome variables are post-partum amenorrhea and breastfeeding; education and parity were used as the covariates. Both the covariate parameter estimates and the variance-covariance estimates for the semiparametric and parametric models will be compared. In addition, a multivariate test statistic was used in the semiparametric model to examine contrasts. The significance of the statistic was determined from a bootstrap distribution of the test statistic.

  9. Sources of errors in the measurements of underwater profiling radiometer

    Digital Repository Service at National Institute of Oceanography (India)

    Silveira, N.; Suresh, T.; Talaulikar, M.; Desa, E.; Matondkar, S.G.P.; Lotlikar, A.

    cloud patches during measurements, depth offset adjustment to determine the exact surface depth, mounting of the reference solar irradiance sensor, tilt of the instrument with respect to the vertical, and corrections due to temperature variations and dark... The radiometer often surfaced far away (~50 m) from the ship due to the currents. After the cast, when it is brought up with the help of the tethered cable, the turbulence at the surface due to waves does not allow the radiometer to be in a vertical position even when...

  10. Dyadic Bivariate Fourier Multipliers for Multi-Wavelets in L2(R2)

    Institute of Scientific and Technical Information of China (English)

    Zhongyan Li; Xiaodi Xu

    2015-01-01

    The single 2-dilation orthogonal wavelet multipliers in the one-dimensional case and the single A-dilation (where A is any expansive matrix with integer entries and |det A| = 2) wavelet multipliers in the high-dimensional case were completely characterized by the Wutam Consortium (1998) and Z. Y. Li, et al. (2010). But there exist no further results on orthogonal multivariate wavelet matrix multipliers corresponding to an integer expansive dilation matrix with the absolute value of the determinant not equal to 2 in L2(R2). In this paper, we choose 2I2 as the dilation matrix and consider the 2I2-dilation orthogonal multivariate wavelet Y = {y1, y2, y3} (which is called a dyadic bivariate wavelet) and its multipliers. We call the 3×3 matrix-valued function A(s) = [f_ij(s)]_{3×3}, where the f_ij are measurable functions, a dyadic bivariate matrix Fourier wavelet multiplier if the inverse Fourier transform of A(s)(ŷ1(s), ŷ2(s), ŷ3(s))⊤ = (ĝ1(s), ĝ2(s), ĝ3(s))⊤ is a dyadic bivariate wavelet whenever (y1, y2, y3) is any dyadic bivariate wavelet. We give some conditions for dyadic matrix bivariate wavelet multipliers. The results extend those of Z. Y. Li and X. L. Shi (2011). As an application, we construct some useful dyadic bivariate wavelets by using dyadic Fourier matrix wavelet multipliers and use them for image denoising.

  11. Characterization of positional errors and their influence on micro four-point probe measurements on a 100 nm Ru film

    DEFF Research Database (Denmark)

    Kjær, Daniel; Hansen, Ole; Østerberg, Frederik Westergaard;

    2015-01-01

    We characterize the electrode position errors in measurements on a 100 nm Ru thin film using an Au-coated 12-point probe. We show that the standard deviation of the static electrode position error is on the order of 5 nm, which significantly affects the results of single-configuration measurements. Position-error-corrected dual-configuration measurements, however, are shown to eliminate the effect of position errors to a level limited either by electrical measurement noise or dynamic position errors. We show that the probe contact points remain almost static on the surface during the measurements (measured on an atomic scale), with a standard deviation of the dynamic position errors of 3 Å. We demonstrate how to experimentally distinguish between different sources of measurement errors, e.g. electrical measurement noise, probe geometry error, as well as static and dynamic electrode position errors.

  12. A Nonlinear Consensus Protocol of Multiagent Systems Considering Measuring Errors

    Directory of Open Access Journals (Sweden)

    Xiaochu Wang

    2013-01-01

    Full Text Available In order to avoid a potential waste of energy during consensus control in the case where there exist measurement uncertainties, a nonlinear protocol is proposed for multiagent systems under a fixed connected undirected communication topology, and extended to both the case with full access and the case with partial access to a reference. Distributed estimators are utilized to help all agents agree on their understanding of the reference, even though some agents cannot access the reference directly. An additional condition is also considered, where self-known configuration offsets are desired. Theoretical analyses of stability are given. Finally, simulations are performed, and the results show that the proposed protocols can lead agents to achieve loose consensus and work effectively with less energy cost to keep the formation, which illustrates the theoretical results.

  13. Local and omnibus goodness-of-fit tests in classical measurement error models

    KAUST Repository

    Ma, Yanyuan

    2010-09-14

    We consider functional measurement error models, i.e. models where covariates are measured with error and yet no distributional assumptions are made about the mismeasured variable. We propose and study a score-type local test and an orthogonal series-based, omnibus goodness-of-fit test in this context, where no likelihood function is available or calculated; i.e., all the tests are proposed in the semiparametric model framework. We demonstrate that our tests have optimality properties and computational advantages that are similar to those of the classical score tests in the parametric model framework. The test procedures are applicable to several semiparametric extensions of measurement error models, including when the measurement error distribution is estimated non-parametrically as well as for generalized partially linear models. The performance of the local score-type and omnibus goodness-of-fit tests is demonstrated through simulation studies and analysis of a nutrition data set.

  14. Effect of Measurement Errors on Predicted Cosmological Constraints from Shear Peak Statistics with LSST

    CERN Document Server

    Bard, D; Chang, C; May, M; Kahn, S M; AlSayyad, Y; Ahmad, Z; Bankert, J; Connolly, A; Gibson, R R; Gilmore, K; Grace, E; Haiman, Z; Hannel, M; Huffenberger, K M; Jernigan, J G; Jones, L; Krughoff, S; Lorenz, S; Marshall, S; Meert, A; Nagarajan, S; Peng, E; Peterson, J; Rasmussen, A P; Shmakova, M; Sylvestre, N; Todd, N; Young, M

    2013-01-01

    The statistics of peak counts in reconstructed shear maps contain information beyond the power spectrum, and can improve cosmological constraints from measurements of the power spectrum alone if systematic errors can be controlled. We study the effect of galaxy shape measurement errors on predicted cosmological constraints from the statistics of shear peak counts with the Large Synoptic Survey Telescope (LSST). We use the LSST image simulator in combination with cosmological N-body simulations to model realistic shear maps for different cosmological models. We include both galaxy shape noise and, for the first time, measurement errors on galaxy shapes. We find that the measurement errors considered have relatively little impact on the constraining power of shear peak counts for LSST.

  15. Analysis of measurement errors influence on experimental determination of mass and heat transfer coefficient

    International Nuclear Information System (INIS)

    The influence of temperature and concentration measurement errors on the experimental determination of mass and heat transfer coefficients is analysed. The calculation model for the coefficients and the measurement errors, the experimental data obtained on the water isotopic distillation plant, and the results of the determinations are presented. The experimental distillation column, with an inner diameter of 108 mm, has been equipped with B7 structured packing over a height of 14 m. This column offers the possibility to measure vapour temperature and isotopic concentration in 12 locations. For the error propagation analysis, the parameters measured for each packing bed, namely the temperature and isotopic concentration of the vapour, were used. A relation for the calculation of the maximum error of experimental determinations of the mass and heat transport coefficients is given. The experimental data emphasize the 'ending effects' and regions with bad thermal insulation. (author)

  16. Particle Filter with Novel Nonlinear Error Model for Miniature Gyroscope-Based Measurement While Drilling Navigation.

    Science.gov (United States)

    Li, Tao; Yuan, Gannan; Li, Wang

    2016-01-01

    The derivation of a conventional error model for the miniature gyroscope-based measurement while drilling (MGWD) system is based on the assumption that the attitude errors are small enough that the direction cosine matrix (DCM) can be approximated or simplified by the errors of small-angle attitude. However, the simplification of the DCM introduces errors into the navigation solutions of the MGWD system if the initial alignment cannot provide precise attitude, especially for the low-cost microelectromechanical system (MEMS) sensors operated in harsh multilateral horizontal downhole drilling environments. This paper proposes a novel nonlinear error model (NNEM) by introducing the error of the DCM, and the NNEM can reduce the propagated errors under large-angle attitude error conditions. Zero velocity and zero position serve as the reference points and the innovations in the state estimation of the particle filter (PF) and Kalman filter (KF). The experimental results illustrate that the performance of the PF is better than that of the KF, and that the PF with the NNEM can effectively restrain the errors of the system states, especially for azimuth, velocity, and height in the quasi-stationary condition.

  17. Why Do Increased Arrest Rates Appear to Reduce Crime: Deterrence, Incapacitation, or Measurement Error?

    OpenAIRE

    Steven D. Levitt

    1995-01-01

    A strong, negative empirical correlation exists between arrest rates and reported crime rates. While this relationship has often been interpreted as support for the deterrence hypothesis, it is equally consistent with incapacitation effects, and/or a spurious correlation that would be induced by measurement error in reported crime rates. This paper attempts to discriminate between deterrence, incapacitation, and measurement error as explanations for the empirical relationship between arrest r...

  18. Does adjustment for measurement error induce positive bias if there is no true association?

    OpenAIRE

    Burstyn, Igor

    2009-01-01

    This article is a response to an off-the-record discussion that I had at an international meeting of epidemiologists. It centered on a concern, perhaps widely spread, that measurement error adjustment methods can induce positive bias in results of epidemiological studies when there is no true association. I trace the possible history of this supposition and test it in a simulation study of both continuous and binary health outcomes under a classical multiplicative measurement error model. A B...

  19. The impact of measurement errors in the identification of regulatory networks

    Directory of Open Access Journals (Sweden)

    Sato João R

    2009-12-01

    Full Text Available Abstract Background There are several studies in the literature depicting measurement error in gene expression data and also several others about regulatory network models. However, only a small fraction describes the combination of measurement error and mathematical regulatory networks, and shows how to identify these networks under different rates of noise. Results This article investigates the effects of measurement error on the estimation of the parameters in regulatory networks. Simulation studies indicate that, in both time series (dependent) and non-time series (independent) data, the measurement error strongly affects the estimated parameters of the regulatory network models, biasing them as predicted by the theory. Moreover, when testing the parameters of the regulatory network models, p-values computed by ignoring the measurement error are not reliable, since the rate of false positives is not controlled under the null hypothesis. In order to overcome these problems, we present an improved version of the Ordinary Least Squares estimator for independent (regression) and dependent (autoregressive) models when the variables are subject to noise. Moreover, measurement error estimation procedures for microarrays are also described. Simulation results also show that both corrected methods perform better than the standard ones (i.e., ignoring measurement error). The proposed methodologies are illustrated using microarray data from lung cancer patients and mouse liver time series data. Conclusions Measurement error dangerously affects the identification of regulatory network models; thus, it must be reduced or taken into account in order to avoid erroneous conclusions. This could be one of the reasons for the high biological false positive rates identified in actual regulatory network models.
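
    The bias referred to here is the classical attenuation of regression slopes. The simplest correction (a stand-in for the improved estimator of the paper) divides the naive OLS slope by the reliability ratio when the noise variance is known, e.g. from replicated measurements:

        import numpy as np

        def ols_attenuation_corrected(x_obs, y, var_u):
            # Naive slope of y on the noisy regressor x_obs = x + u,
            # then correction by the reliability ratio
            #   lambda = Var(x) / Var(x_obs) = 1 - var_u / Var(x_obs)
            beta_naive = np.cov(x_obs, y)[0, 1] / np.var(x_obs, ddof=1)
            lam = 1.0 - var_u / np.var(x_obs, ddof=1)
            return beta_naive / lam

        rng = np.random.default_rng(1)
        x = rng.standard_normal(5000)
        y = 2.0 * x + 0.1 * rng.standard_normal(5000)
        x_obs = x + rng.normal(0.0, 0.5, 5000)             # var_u = 0.25
        print(ols_attenuation_corrected(x_obs, y, 0.25))   # ~ 2.0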

  20. Error Analysis and Measurement Uncertainty for a Fiber Grating Strain-Temperature Sensor

    OpenAIRE

    Jian-Neng Wang; Jaw-Luen Tang

    2010-01-01

    A fiber grating sensor capable of distinguishing between temperature and strain, using a reference and a dual-wavelength fiber Bragg grating, is presented. Error analysis and measurement uncertainty for this sensor are studied theoretically and experimentally. The measured root mean squared errors for temperature T and strain ε were estimated to be 0.13 °C and 6 με, respectively. The maximum errors for temperature and strain were calculated as 0.00155 T + 2.90 × 10−6 ε and 3.59 × 10−5 ε + 0.0...

  1. Identification and estimation of nonlinear models using two samples with nonclassical measurement errors

    KAUST Repository

    Carroll, Raymond J.

    2010-05-01

    This paper considers identification and estimation of a general nonlinear Errors-in-Variables (EIV) model using two samples. Both samples consist of a dependent variable, some error-free covariates, and an error-prone covariate, for which the measurement error has unknown distribution and could be arbitrarily correlated with the latent true values; and neither sample contains an accurate measurement of the corresponding true variable. We assume that the regression model of interest - the conditional distribution of the dependent variable given the latent true covariate and the error-free covariates - is the same in both samples, but the distributions of the latent true covariates vary with observed error-free discrete covariates. We first show that the general latent nonlinear model is nonparametrically identified using the two samples when both could have nonclassical errors, without either instrumental variables or independence between the two samples. When the two samples are independent and the nonlinear regression model is parameterized, we propose sieve Quasi Maximum Likelihood Estimation (Q-MLE) for the parameter of interest, and establish its root-n consistency and asymptotic normality under possible misspecification, and its semiparametric efficiency under correct specification, with easily estimated standard errors. A Monte Carlo simulation and a data application are presented to show the power of the approach.

  2. COMPENSATION OF MEASUREMENT ERRORS WHEN REDUCING LINEAR DIMENSIONS OF THE KELVIN PROBE

    Directory of Open Access Journals (Sweden)

    A. K. Tyavlovsky

    2013-01-01

    Full Text Available The study is based on results of modeling a measurement circuit containing a vibrating-plate capacitor, using a complex-harmonic analysis technique. The low normalized frequency of a small-sized scanning Kelvin probe leads to a high distortion factor in the probe's measurement signal, which in turn leads to high measurement errors. The way to lower the measurement errors is to register the measurement signal at its second harmonic and to control the probe-to-sample gap by monitoring the ratio between the second and first harmonic amplitudes.
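
    The harmonic-ratio idea is easy to demonstrate numerically. A sketch that extracts the first and second harmonic amplitudes of a synthetic probe signal with an FFT (illustrative parameters, not the authors' setup):

        import numpy as np

        def harmonic_amplitudes(signal, fs, f0, harmonics=(1, 2)):
            # Amplitudes at integer multiples of the vibration frequency
            # f0, from an FFT over an integer number of periods
            n = len(signal)
            spec = np.abs(np.fft.rfft(signal)) * 2.0 / n
            freqs = np.fft.rfftfreq(n, d=1.0 / fs)
            return [spec[np.argmin(np.abs(freqs - k * f0))] for k in harmonics]

        fs, f0 = 100e3, 1e3                      # Hz
        t = np.arange(0, 0.01, 1 / fs)           # 10 vibration periods
        sig = np.sin(2*np.pi*f0*t) + 0.3*np.sin(2*np.pi*2*f0*t)
        a1, a2 = harmonic_amplitudes(sig, fs, f0)
        print(a1, a2, a2 / a1)   # the ratio a2/a1 tracks the gap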

  3. The holographic reconstructing algorithm and its error analysis about phase-shifting phase measurement

    Institute of Scientific and Technical Information of China (English)

    LU Xiaoxu; ZHONG Liyun; ZHANG Yimo

    2007-01-01

    Phase-shifting measurement and its error estimation method were studied according to the holographic principle. A function of synchronous superposition of the object complex amplitude reconstructed from N-step phase-shifting through one integral period (the N-step phase-shifting function for short) was proposed. In N-step phase-shifting measurement, the interferograms are seen as a series of in-line holograms and the reference beam is an ideal parallel plane wave. So the N-step phase-shifting function can be obtained by multiplying the interferogram by the original reference wave. In ideal conditions, the proposed method is a kind of synchronous superposition algorithm in which the complex amplitude is separated, measured and superposed. When error exists in measurement, the result of the N-step phase-shifting function is the optimal expected value of the least-squares fitting method. In the above method, the N+1-step phase-shifting function can be obtained from the N-step phase-shifting function. It shows that the N-step phase-shifting function can be separated into two parts: the ideal N-step phase-shifting function and its errors. The phase-shifting errors in N-step phase-shifting phase measurement can be treated the same as the relative errors of amplitude and intensity under the understanding of the N+1-step phase-shifting function. The difficulties of error estimation in phase-shifting phase measurement are reduced by this error estimation method. Meanwhile, the maximum error estimation method of phase-shifting phase measurement and its formula were proposed.
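
    For concreteness, the standard synchronous (least-squares) N-step phase retrieval that this kind of analysis builds on can be sketched as follows (a generic textbook algorithm, not the authors' N-step phase-shifting function itself):

        import numpy as np

        def n_step_phase(frames):
            # frames: (N, H, W) interferograms with phase shifts 2*pi*k/N;
            # least-squares estimate of the wrapped phase map
            N = frames.shape[0]
            delta = 2.0 * np.pi * np.arange(N) / N
            num = -np.tensordot(np.sin(delta), frames, axes=(0, 0))
            den = np.tensordot(np.cos(delta), frames, axes=(0, 0))
            return np.arctan2(num, den)

        # 4 synthetic 64x64 interferograms of a tilted wavefront
        N = 4
        phi = np.fromfunction(lambda i, j: 0.04 * j, (64, 64))
        frames = np.stack([3 + 2*np.cos(phi + 2*np.pi*k/N) for k in range(N)])
        print(np.allclose(n_step_phase(frames), phi, atol=1e-8))  # True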

  4. Design and application of location error teaching aids in measuring and visualization

    Directory of Open Access Journals (Sweden)

    Yu Fengning

    2015-01-01

    Full Text Available As an abstract concept, ‘location error’ is considered to be an important element that is greatly difficult to understand and apply. The paper designs and develops an instrument to measure location error. Location error is affected by the positioning method and the choice of reference, so we select the positioning element by rotating the disk. The tiny movement is transferred by a grating ruler, and PLC programming shows the error on a text display, which also helps students understand the positioning principle and related concepts of location error. After comparing measurement results with theoretical calculations and analyzing the measurement accuracy, the paper draws the conclusion that the teaching aid is reliable and has high promotional value.

  5. Errors in Thermographic Camera Measurement Caused by Known Heat Sources and Depth Based Correction

    Directory of Open Access Journals (Sweden)

    Mark Christian E. Manuel

    2016-03-01

    Full Text Available Thermal imaging has been shown to be a better tool for the quantitative measurement of temperature than single-spot infrared thermometers. However, thermographic cameras can encounter errors in acquiring accurate temperature measurements in the presence of other environmental heat sources. Some of these errors arise due to the inability of the thermal camera to detect objects and features in the infrared domain. In this paper, the thermal image is registered to a stereo image from a Kinect system prior to depth-based correction. Experiments demonstrating the error are presented, together with the determination of the measurement errors under prior knowledge of the thermographed scene. The proposed correction scheme improves the accuracy of the thermal image through augmentation using the Kinect system.

  6. A note on finding peakedness in bivariate normal distribution using Mathematica

    Directory of Open Access Journals (Sweden)

    Anwer Khurshid

    2007-07-01

    Full Text Available Peakedness measures the concentration around the central value. A classical standard measure of peakedness is kurtosis, the degree of peakedness of a probability distribution. In view of the inconsistency of kurtosis in measuring the peakedness of a distribution, Horn (1983) proposed a measure of peakedness for symmetrically unimodal distributions. The objective of this paper is two-fold. First, Horn’s method is extended to the bivariate normal distribution. Second, it is shown that the computer algebra system Mathematica can be an extremely useful tool for all sorts of computations related to the bivariate normal distribution. Mathematica programs are also provided.

  7. An AFM-based methodology for measuring axial and radial error motions of spindles

    International Nuclear Information System (INIS)

    This paper presents a novel atomic force microscopy (AFM)-based methodology for measurement of axial and radial error motions of a high precision spindle. Based on a modified commercial AFM system, the AFM tip is employed as a cutting tool by which nano-grooves are scratched on a flat surface with the rotation of the spindle. By extracting the radial motion data of the spindle from the scratched nano-grooves, the radial error motion of the spindle can be calculated after subtracting the tilting errors from the original measurement data. Through recording the variation of the PZT displacement in the Z direction in AFM tapping mode during the spindle rotation, the axial error motion of the spindle can be obtained. Moreover the effects of the nano-scratching parameters on the scratched grooves, the tilting error removal method for both conditions and the method of data extraction from the scratched groove depth are studied in detail. The axial error motion of 124 nm and the radial error motion of 279 nm of a commercial high precision air bearing spindle are achieved by this novel method, which are comparable with the values provided by the manufacturer, verifying this method. This approach does not need an expensive standard part as in most conventional measurement approaches. Moreover, the axial and radial error motions of the spindle can both be obtained, indicating that this is a potential means of measuring the error motions of the high precision moving parts of ultra-precision machine tools in the future. (paper)

  8. Correcting for Measurement Error in Time-Varying Covariates in Marginal Structural Models.

    Science.gov (United States)

    Kyle, Ryan P; Moodie, Erica E M; Klein, Marina B; Abrahamowicz, Michał

    2016-08-01

    Unbiased estimation of causal parameters from marginal structural models (MSMs) requires a fundamental assumption of no unmeasured confounding. Unfortunately, the time-varying covariates used to obtain inverse probability weights are often error-prone. Although substantial measurement error in important confounders is known to undermine control of confounders in conventional unweighted regression models, this issue has received comparatively limited attention in the MSM literature. Here we propose a novel application of the simulation-extrapolation (SIMEX) procedure to address measurement error in time-varying covariates, and we compare 2 approaches. The direct approach to SIMEX-based correction targets outcome model parameters, while the indirect approach corrects the weights estimated using the exposure model. We assess the performance of the proposed methods in simulations under different clinically plausible assumptions. The simulations demonstrate that measurement errors in time-dependent covariates may induce substantial bias in MSM estimators of causal effects of time-varying exposures, and that both proposed SIMEX approaches yield practically unbiased estimates in scenarios featuring low-to-moderate degrees of error. We illustrate the proposed approach in a simple analysis of the relationship between sustained virological response and liver fibrosis progression among persons infected with hepatitis C virus, while accounting for measurement error in γ-glutamyltransferase, using data collected in the Canadian Co-infection Cohort Study from 2003 to 2014.
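
    The core SIMEX recipe is compact enough to sketch for a simple linear slope (an illustrative reduction, not the marginal-structural-model implementation of the paper): add extra simulated error at increasing levels lambda, refit, and extrapolate the fitted coefficient back to lambda = -1.

        import numpy as np

        def simex_slope(w, y, var_u, lambdas=(0.5, 1.0, 1.5, 2.0), B=50, seed=0):
            rng = np.random.default_rng(seed)
            lam = np.concatenate(([0.0], lambdas))
            slopes = []
            for l in lam:
                fits = []
                for _ in range(B):   # simulation step: inflate the error
                    w_l = w + np.sqrt(l * var_u) * rng.standard_normal(len(w))
                    fits.append(np.polyfit(w_l, y, 1)[0])
                slopes.append(np.mean(fits))
            coef = np.polyfit(lam, slopes, 2)   # quadratic extrapolant
            return np.polyval(coef, -1.0)       # extrapolation step

        rng = np.random.default_rng(1)
        x = rng.standard_normal(2000)
        y = 1.0 * x + 0.2 * rng.standard_normal(2000)
        w = x + rng.normal(0.0, 0.6, 2000)      # measurement error, var 0.36
        print(simex_slope(w, y, 0.36))          # ~ 0.94 vs naive ~ 0.74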

  9. Positive phase error from parallel conductance in tetrapolar bio-impedance measurements and its compensation

    Directory of Open Access Journals (Sweden)

    Ivan M Roitt

    2010-01-01

    Full Text Available Bioimpedance measurements are of great use and can provide considerable insight into biological processes.  However, there are a number of possible sources of measurement error that must be considered.  The most dominant source of error is found in bipolar measurements where electrode polarisation effects are superimposed on the true impedance of the sample.  Even with the tetrapolar approach that is commonly used to circumvent this issue, other errors can persist. Here we characterise the positive phase and rise in impedance magnitude with frequency that can result from the presence of any parallel conductive pathways in the measurement set-up.  It is shown that fitting experimental data to an equivalent electrical circuit model allows for accurate determination of the true sample impedance as validated through finite element modelling (FEM of the measurement chamber.  Finally, the model is used to extract dispersion information from cell cultures to characterise their growth.

  10. An in-process form error measurement system for precision machining

    International Nuclear Information System (INIS)

    In-process form error measurement for precision machining is studied. Due to two key problems, the opaque barrier and vibration, in-process optical measurement of form error for precision machining has been a hard topic, and so far very few existing research works can be found. In this project, an in-process form error measurement device is proposed to deal with these two key problems. Based on our existing studies, a prototype system has been developed. It is the first of its kind that overcomes the two key problems. The prototype is based on a single laser sensor design of 50 nm resolution, together with two techniques proposed for use with the device: a damping technique and a moving average technique. The proposed damping technique is able to improve vibration attenuation by up to 21 times compared with natural attenuation. The proposed moving average technique is able to reduce errors by seven to ten times without distortion of the form profile results. The two techniques are simple but especially useful for the proposed device. For a workpiece sample, the measurement result under the coolant condition is only 2.5% larger than the one under the no-coolant condition. For a certified Wyko test sample, the overall system measurement error can be as low as 0.3 µm. The measurement repeatability error can be as low as 2.2%. The experimental results give confidence in using the proposed in-process form error measurement device. For better results, further improvements in design and tests are necessary.

  11. Techniques for reducing error in the calorimetric measurement of low wattage items

    Energy Technology Data Exchange (ETDEWEB)

    Sedlacek, W.A.; Hildner, S.S.; Camp, K.L.; Cremers, T.L.

    1993-08-01

    The increased need for the measurement of low wattage items with production calorimeters has required the development of techniques to maximize the precision and accuracy of the calorimeter measurements. An error model for calorimetry measurements is presented. This model is used as a basis for optimizing calorimetry measurements through baseline interpolation. The method was applied to the heat measurement of over 100 items and the results compared to chemistry assay and mass spectroscopy.

  12. Nano-metrology: The art of measuring X-ray mirrors with slope errors <100 nrad.

    Science.gov (United States)

    Alcock, Simon G; Nistea, Ioana; Sawhney, Kawal

    2016-05-01

    We present a comprehensive investigation of the systematic and random errors of the nano-metrology instruments used to characterize synchrotron X-ray optics at Diamond Light Source. With experimental skill and careful analysis, we show that these instruments used in combination are capable of measuring state-of-the-art X-ray mirrors. Examples are provided of how Diamond metrology data have helped to achieve slope errors of <100 nrad. We also show how the random noise of the instruments influences the measured mirror slope error, and thus predict how many averaged scans are required to accurately characterize different grades of mirror.

  13. Error reduction by combining strapdown inertial measurement units in a baseball stitch

    Science.gov (United States)

    Tracy, Leah

    A poor musical performance is rarely due to an inferior instrument. When a device is under performing, the temptation is to find a better device or a new technology to achieve performance objectives; however, another solution may be improving how existing technology is used through a better understanding of device characteristics, i.e., learning to play the instrument better. This thesis explores improving position and attitude estimates of inertial navigation systems (INS) through an understanding of inertial sensor errors, manipulating inertial measurement units (IMUs) to reduce that error and multisensor fusion of multiple IMUs to reduce error in a GPS denied environment.

  14. Active and passive compensation of APPLE II-introduced multipole errors through beam-based measurement

    Science.gov (United States)

    Chung, Ting-Yi; Huang, Szu-Jung; Fu, Huang-Wen; Chang, Ho-Ping; Chang, Cheng-Hsiang; Hwang, Ching-Shiang

    2016-08-01

    The effect of an APPLE II-type elliptically polarized undulator (EPU) on the beam dynamics was investigated using active and passive methods. To reduce the tune shift and improve the injection efficiency, dynamic multipole errors were compensated using L-shaped iron shims, which resulted in stable top-up operation for a minimum gap. The skew quadrupole error was compensated using a multipole corrector, which was located downstream of the EPU for minimizing betatron coupling, and it ensured the enhancement of the synchrotron radiation brightness. The investigation methods, a numerical simulation algorithm, a multipole error correction method, and the beam-based measurement results are discussed.

  15. Spectral density regression for bivariate extremes

    KAUST Repository

    Castro Camilo, Daniela

    2016-05-11

    We introduce a density regression model for the spectral density of a bivariate extreme value distribution, that allows us to assess how extremal dependence can change over a covariate. Inference is performed through a double kernel estimator, which can be seen as an extension of the Nadaraya–Watson estimator where the usual scalar responses are replaced by mean constrained densities on the unit interval. Numerical experiments with the methods illustrate their resilience in a variety of contexts of practical interest. An extreme temperature dataset is used to illustrate our methods. © 2016 Springer-Verlag Berlin Heidelberg
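
    As background, the classical Nadaraya–Watson estimator that the double kernel estimator extends can be sketched in a few lines (a generic illustration; in the paper the scalar responses are replaced by mean-constrained densities on the unit interval):

        import numpy as np

        def nadaraya_watson(x_train, y_train, x_eval, h):
            # Gaussian-kernel weighted average of the responses
            d = (x_eval[:, None] - x_train[None, :]) / h
            w = np.exp(-0.5 * d**2)
            return (w * y_train).sum(axis=1) / w.sum(axis=1)

        rng = np.random.default_rng(0)
        x = np.linspace(0.0, 1.0, 200)
        y = np.sin(2 * np.pi * x) + 0.1 * rng.standard_normal(200)
        print(nadaraya_watson(x, y, np.array([0.25, 0.5]), h=0.05))
        # ~ [1.0, 0.0]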

  16. Reliability for some bivariate exponential distributions

    Directory of Open Access Journals (Sweden)

    Saralees Nadarajah

    2006-01-01

    The reliability R = Pr(X < Y) is considered for bivariate distributions with dependence between X and Y. In particular, explicit expressions for R are derived when the joint distribution is bivariate exponential. The calculations involve the use of special functions. An application of the results is also provided.

  17. APPROXIMATE SAMPLING THEOREM FOR BIVARIATE CONTINUOUS FUNCTION

    Institute of Scientific and Technical Information of China (English)

    杨守志; 程正兴; 唐远炎

    2003-01-01

    An approximate solution of the refinement equation was given by its mask, and the approximate sampling theorem for bivariate continuous functions was proved by applying the approximate solution. The approximate sampling function, defined uniquely by the mask of the refinement equation, is the approximate solution of the equation, a piecewise linear function, and possesses an explicit computation formula. Therefore the mask of the refinement equation can be selected according to one's requirement, so that one may control the decay speed of the approximate sampling function.

  18. Effect of patient positions on measurement errors of the knee-joint space on radiographs

    Science.gov (United States)

    Gilewska, Grazyna

    2001-08-01

    Osteoarthritis (OA) is one of the most important health problems these days. It is one of the most frequent causes of pain and disability in middle-aged and old people. Nowadays the radiograph is the most economical and available tool to evaluate changes in OA. Errors in the performance of knee-joint radiographs are the basic problem in their evaluation for clinical research. The purpose of evaluating such radiographs in my study was to measure the knee-joint space on several radiographs performed at defined intervals. An attempt at evaluating errors caused by the radiologist or the patient is presented in this study. These errors resulted mainly from incorrect conditions of performance or from the patient's fault. Once we have information about the size of the errors, we will be able to assess which of these elements have the greatest influence on the accuracy and repeatability of measurements of the knee-joint space. And consequently we will be able to minimize their sources.

  19. Bayesian Semiparametric Density Deconvolution in the Presence of Conditionally Heteroscedastic Measurement Errors

    KAUST Repository

    Sarkar, Abhra

    2014-10-02

    We consider the problem of estimating the density of a random variable when precise measurements on the variable are not available, but replicated proxies contaminated with measurement error are available for sufficiently many subjects. Under the assumption of additive measurement errors this reduces to a problem of deconvolution of densities. Deconvolution methods often make restrictive and unrealistic assumptions about the density of interest and the distribution of measurement errors, e.g., normality and homoscedasticity and thus independence from the variable of interest. This article relaxes these assumptions and introduces novel Bayesian semiparametric methodology based on Dirichlet process mixture models for robust deconvolution of densities in the presence of conditionally heteroscedastic measurement errors. In particular, the models can adapt to asymmetry, heavy tails and multimodality. In simulation experiments, we show that our methods vastly outperform a recent Bayesian approach based on estimating the densities via mixtures of splines. We apply our methods to data from nutritional epidemiology. Even in the special case when the measurement errors are homoscedastic, our methodology is novel and dominates other methods that have been proposed previously. Additional simulation results, instructions on getting access to the data set and R programs implementing our methods are included as part of online supplemental materials.

  20. IDENTIFICATION AND CORRECTION OF COORDINATE MEASURING MACHINE GEOMETRICAL ERRORS USING LASERTRACER SYSTEMS

    Directory of Open Access Journals (Sweden)

    Adam Gąska

    2013-12-01

    Full Text Available LaserTracer (LT) systems are the most sophisticated and accurate laser tracking devices. They are mainly used for the correction of geometrical errors of machine tools and coordinate measuring machines. This process is about four times faster than standard methods based on the usage of laser interferometers. The methodology of using the LaserTracer for the correction of geometrical errors, including a presentation of the system, the multilateration method and the software that was used, is described in detail in this paper.

  1. Determination of error measurement by means of the basic magnetization curve

    Science.gov (United States)

    Lankin, M. V.; Lankin, A. M.

    2016-04-01

    The article describes the implementation of a methodology for fault finding in electric cutting machines by means of the basic magnetization curve. The basic magnetization curve, as an integral characteristic of the machine's electrical operation, allows one to identify the fault type. In the measurement process, the estimation of the error of the basic magnetization curve plays a major role, since inaccuracies in this particular characteristic can have a deleterious effect.

  2. Bivariate mass-size relation as a function of morphology as determined by Galaxy Zoo 2 crowdsourced visual classifications

    Science.gov (United States)

    Beck, Melanie; Scarlata, Claudia; Fortson, Lucy; Willett, Kyle; Galloway, Melanie

    2016-01-01

    It is well known that the mass-size distribution evolves as a function of cosmic time and that this evolution is different between passive and star-forming galaxy populations. However, the devil is in the details and the precise evolution is still a matter of debate, since this requires careful comparison between similar galaxy populations over cosmic time while simultaneously taking into account changes in image resolution, rest-frame wavelength, and surface brightness dimming, in addition to properly selecting representative morphological samples. Here we present the first step in an ambitious undertaking to calculate the bivariate mass-size distribution as a function of time and morphology. We begin with a large sample (~3 × 10^5) of SDSS galaxies at z ~ 0.1. Morphologies for this sample have been determined by Galaxy Zoo crowdsourced visual classifications, and we split the sample not only into disk- and bulge-dominated galaxies but also into finer morphology bins such as bulge strength. Bivariate distribution functions are the only way to properly account for biases and selection effects. In particular, we quantify the mass-size distribution with a version of the parametric maximum likelihood estimator which has been modified to account for measurement errors as well as upper limits on galaxy sizes.

  3. Position determination and measurement error analysis for the spherical proof mass with optical shadow sensing

    Science.gov (United States)

    Hou, Zhendong; Wang, Zhaokui; Zhang, Yulin

    2016-09-01

    To meet the very demanding requirements for space gravity detection, the gravitational reference sensor (GRS) as the key payload needs to provide the relative position of the proof mass with extraordinarily high precision and low disturbance. The position determination and error analysis for a GRS with a spherical proof mass are addressed. Firstly, the concept of measuring the freely falling proof mass with optical shadow sensors is presented. Secondly, based on the optical signal model, the general formula for position determination is derived, and two types of measurement system are proposed for which the analytical solution to the three-dimensional position can be attained. Thirdly, with the assumption of Gaussian beams, error propagation models are given for the variation of spot size and optical power, the effect of beam divergence, the chattering of the beam center, and the deviation of beam direction. Finally, numerical simulations taking into account the model uncertainty of beam divergence, spherical edge and beam diffraction are carried out to validate the performance of the error propagation models. The results show that these models can be used to estimate the effect of each error source with an acceptable accuracy, better than 20%. Moreover, the simulation of three-dimensional position determination with one of the proposed measurement systems shows that the position error is comparable to the error of the output of each sensor.

  4. Estimation of bias errors in angle-of-arrival measurements using platform motion

    Science.gov (United States)

    Grindlay, A.

    1981-08-01

    An algorithm has been developed to estimate the bias errors in angle-of-arrival measurements made by electromagnetic detection devices on board a pitching and rolling platform. The algorithm assumes that continuous exact measurements of the platform's roll and pitch conditions are available. When the roll and pitch conditions are used to transform deck-plane angular measurements of a nearly fixed target's position to a stabilized coordinate system, the resulting stabilized coordinates (azimuth and elevation) should not vary with changes in the roll and pitch conditions. If changes do occur, they are a result of bias errors in the measurement system, and the algorithm uses these changes to estimate the sense and magnitude of the angular bias errors. The transformation at the core of this procedure is sketched below.
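    This is a minimal sketch assuming one particular axis convention and rotation order (roll about the longitudinal axis, then pitch); the abstract does not fix these details, so treat the function as illustrative rather than the original algorithm.

    ```python
    # Hedged sketch: transform deck-plane azimuth/elevation to stabilized
    # coordinates. Axis conventions and rotation order are assumptions.
    import numpy as np

    def stabilize(az_deck, el_deck, roll, pitch):
        """Return stabilized (azimuth, elevation) in radians."""
        # Unit line-of-sight vector in deck coordinates.
        v = np.array([np.cos(el_deck) * np.cos(az_deck),
                      np.cos(el_deck) * np.sin(az_deck),
                      np.sin(el_deck)])
        cr, sr = np.cos(roll), np.sin(roll)
        cp, sp = np.cos(pitch), np.sin(pitch)
        R_roll = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])    # about x
        R_pitch = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])   # about y
        s = R_pitch @ R_roll @ v
        return np.arctan2(s[1], s[0]), np.arcsin(np.clip(s[2], -1.0, 1.0))

    # A fixed target should give the same output for any roll/pitch; residual
    # variation across roll/pitch samples indicates angular bias errors.
    print(stabilize(np.radians(30), np.radians(5), np.radians(3), np.radians(-2)))
    ```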

  5. MEASUREMENT ERROR EFFECT ON THE POWER OF CONTROL CHART FOR ZERO-TRUNCATED POISSON DISTRIBUTION

    Directory of Open Access Journals (Sweden)

    Ashit Chakraborty

    2013-09-01

    Full Text Available Measurement error is the difference between the true value and the measured value of a quantity; it exists in practice and may considerably affect the performance of control charts in some cases. Measurement error variability has uncertainty which can arise from several sources. In this paper, we have studied the effect of these sources of variability on the power characteristics of the control chart and obtained values of the average run length (ARL) for the zero-truncated Poisson distribution (ZTPD). An expression for the power of the control chart for variable sample size under a standardized normal variate for the ZTPD is also derived.
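    As a hedged illustration of the ARL idea (not the authors' derivation): the in-control ARL is the reciprocal of the probability that a plotted count falls outside the control limits, which can be computed directly for a zero-truncated Poisson count. The limits and parameter below are invented.

    ```python
    # Sketch: average run length (ARL) for a count chart under a zero-truncated
    # Poisson model. Control limits and parameter values are illustrative only.
    from scipy.stats import poisson

    def ztp_pmf(k, lam):
        """Zero-truncated Poisson pmf: P(X = k | X > 0)."""
        if k < 1:
            return 0.0
        return poisson.pmf(k, lam) / (1.0 - poisson.pmf(0, lam))

    def arl(lam, lcl, ucl):
        """ARL = 1 / P(signal), where a signal is a count outside [lcl, ucl]."""
        p_in = sum(ztp_pmf(k, lam) for k in range(max(lcl, 1), ucl + 1))
        p_signal = 1.0 - p_in
        return float("inf") if p_signal == 0 else 1.0 / p_signal

    print(arl(lam=4.0, lcl=1, ucl=10))   # in-control ARL for illustrative limits
    ```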

  6. Research of measurement errors caused by salt solution temperature drift in surface plasmon resonance sensors

    Institute of Scientific and Technical Information of China (English)

    Yingcai Wu; Zhengtian Gu; Yifang Yuan

    2006-01-01

    The influence of temperature on the measurement of a surface plasmon resonance (SPR) sensor was investigated. Samples with various concentrations of NaCl were tested at different temperatures. It was shown that if the effect of temperature could be neglected, the measurement precision for the salt solution was 0.028 wt.-%, but the salinity measurement error caused by temperature was 0.53 wt.-% on average for a temperature drift of 1 °C. To reduce the error, a double-cell SPR sensor was implemented, with salt solution and distilled water flowing separately at the same temperature.

  7. Assessment of measurement errors and dynamic calibration methods for three different tipping bucket rain gauges

    Science.gov (United States)

    Shedekar, Vinayak S.; King, Kevin W.; Fausey, Norman R.; Soboyejo, Alfred B. O.; Harmel, R. Daren; Brown, Larry C.

    2016-09-01

    Three different models of tipping bucket rain gauges (TBRs), viz. HS-TB3 (Hydrological Services Pty Ltd.), ISCO-674 (Isco, Inc.) and TR-525 (Texas Electronics, Inc.), were calibrated in the lab to quantify measurement errors across a range of rainfall intensities (5 mm·h⁻¹ to 250 mm·h⁻¹) and three different volumetric settings. Instantaneous and cumulative values of simulated rainfall were recorded at 1, 2, 5, 10 and 20-min intervals. All three TBR models showed a substantial deviation (α = 0.05) in measurements from actual rainfall depths, with increasing underestimation errors at greater rainfall intensities. Simple linear regression equations were developed for each TBR to correct the TBR readings based on measured intensities (R² > 0.98). Additionally, two dynamic calibration techniques, viz. a quadratic model (R² > 0.7) and a T vs. 1/Q model (R² > 0.98), were tested and found to be useful in situations when the volumetric settings of TBRs are unknown. The correction models were successfully applied to correct field-collected rainfall data from the respective TBR models. The calibration parameters of the correction models were found to be highly sensitive to changes in the volumetric calibration of TBRs. Overall, the HS-TB3 model (with a better protected tipping bucket mechanism and consistent measurement errors across a range of rainfall intensities) was found to be the most reliable and consistent for rainfall measurements, followed by the ISCO-674 (with susceptibility to clogging and relatively smaller measurement errors across a range of rainfall intensities) and the TR-525 (with high susceptibility to clogging, frequent changes in volumetric calibration, and highly intensity-dependent measurement errors). The study demonstrated that corrections based on dynamic and volumetric calibration can only help minimize, but not completely eliminate, the measurement errors. The findings from this study will be useful for correcting field data from TBRs; and may have major
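    The linear correction step can be sketched as follows; the calibration data and fitted coefficients below are invented, not the paper's values.

    ```python
    # Sketch: calibrate a tipping bucket rain gauge with a linear model and apply
    # the correction to a field reading. All numbers are made up.
    import numpy as np

    ref = np.array([5, 25, 50, 100, 150, 250], dtype=float)   # simulator intensity, mm/h
    tbr = np.array([4.9, 24.2, 47.5, 92.0, 135.0, 220.0])     # gauge reading, mm/h

    slope, intercept = np.polyfit(tbr, ref, deg=1)   # regress truth on reading
    print(f"corrected = {slope:.3f} * reading + {intercept:.3f}")

    field_reading = 80.0
    print("corrected field intensity:", slope * field_reading + intercept)
    ```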

  8. Three-dimensional patient setup errors at different treatment sites measured by the Tomotherapy megavoltage CT

    Energy Technology Data Exchange (ETDEWEB)

    Hui, S.K.; Lusczek, E.; Dusenbery, K. [Univ. of Minnesota Medical School, Minneapolis, MN (United States). Dept. of Therapeutic Radiology - Radiation Oncology; DeFor, T. [Univ. of Minnesota Medical School, Minneapolis, MN (United States). Biostatistics and Informatics Core; Levitt, S. [Univ. of Minnesota Medical School, Minneapolis, MN (United States). Dept. of Therapeutic Radiology - Radiation Oncology; Karolinska Institutet, Stockholm (Sweden). Dept. of Onkol-Patol

    2012-04-15

    Reduction of interfraction setup uncertainty is vital for assuring the accuracy of conformal radiotherapy. We report a systematic study of setup error to assess patients' three-dimensional (3D) localization at various treatment sites. Tomotherapy megavoltage CT (MVCT) images were scanned daily in 259 patients from 2005-2008. We analyzed 6,465 MVCT images to measure setup error for head and neck (H and N), chest/thorax, abdomen, prostate, legs, and total marrow irradiation (TMI). Statistical comparisons of the absolute displacements across sites and time were performed in rotation (R), lateral (x), craniocaudal (y), and vertical (z) directions. The global systematic errors were measured to be less than 3 mm in each direction with increasing order of errors for different sites: H and N, prostate, chest, pelvis, spine, legs, and TMI. The differences in displacements in the x, y, and z directions, and 3D average displacement between treatment sites were significant (p < 0.01). Overall improvement in patient localization with time (after 3-4 treatment fractions) was observed. Large displacement (> 5 mm) was observed in the 75th percentile of the patient groups for chest, pelvis, legs, and spine in the x and y direction in the second week of the treatment. MVCT imaging is essential for determining 3D setup error and to reduce uncertainty in localization at all anatomical locations. Setup error evaluation should be performed daily for all treatment regions, preferably for all treatment fractions. (orig.)

  9. The effect of genotyping errors on the robustness of composite linkage disequilibrium measures

    Indian Academy of Sciences (India)

    Yu Mei Li; Yang Xiang

    2011-12-01

    We conclude that composite linkage disequilibrium (LD) measures should be adopted in population-based LD mapping or association mapping studies, since they are unaffected by Hardy-Weinberg disequilibrium. Although some properties of composite LD measures have been studied recently, the effects of genotyping errors on them have not been examined. In this report, we derive deterministic formulas to evaluate the impact of genotyping errors on the composite LD measures $\Delta'_{AB}$ and $r_{AB}$, and compare their robustness in the presence of genotyping errors. The results show that $\Delta'_{AB}$ and $r_{AB}$ depend on the allele frequencies and on the assumed error model, and exhibit varying degrees of robustness in the presence of errors. In general, whether there is HWD or not, $r_{AB}$ is more robust than $\Delta'_{AB}$ except in some special cases, and the difference in robustness between $\Delta'_{AB}$ and $r_{AB}$ becomes less severe as the difference between the frequencies of the two SNP alleles becomes smaller.

  10. Covariate measurement error correction methods in mediation analysis with failure time data.

    Science.gov (United States)

    Zhao, Shanshan; Prentice, Ross L

    2014-12-01

    Mediation analysis is important for understanding the mechanisms whereby one variable causes changes in another. Measurement error could obscure the ability of the potential mediator to explain such changes. This article focuses on developing correction methods for measurement error in the mediator with failure time outcomes. We consider a broad definition of measurement error, including technical error, and error associated with temporal variation. The underlying model with the "true" mediator is assumed to be of the Cox proportional hazards model form. The induced hazard ratio for the observed mediator no longer has a simple form independent of the baseline hazard function, due to the conditioning event. We propose a mean-variance regression calibration approach and a follow-up time regression calibration approach, to approximate the partial likelihood for the induced hazard function. Both methods demonstrate value in assessing mediation effects in simulation studies. These methods are generalized to multiple biomarkers and to both case-cohort and nested case-control sampling designs. We apply these correction methods to the Women's Health Initiative hormone therapy trials to understand the mediation effect of several serum sex hormone measures on the relationship between postmenopausal hormone therapy and breast cancer risk. PMID:25139469

  12. Measurement Error in Proportional Hazards Models for Survival Data with Long-term Survivors

    Institute of Scientific and Technical Information of China (English)

    Xiao-bing ZHAO; Xian ZHOU

    2012-01-01

    This work studies a proportional hazards model for survival data with "long-term survivors", in which covariates are subject to linear measurement error. It is well known that the naïve estimators from both partial and full likelihood methods are inconsistent under this measurement error model. For measurement error models, methods of unbiased estimating functions and corrected likelihood have been proposed in the literature. In this paper, we apply the corrected partial and full likelihood approaches to estimate the model and obtain statistical inference from survival data with long-term survivors. The asymptotic properties of the estimators are established. Simulation results illustrate that the proposed approaches provide useful tools for the models considered.

  13. A Universal Generator for Bivariate Log-Concave Distributions

    OpenAIRE

    Hörmann, Wolfgang

    1995-01-01

    Different universal (also called automatic or black-box) methods have been suggested to sample from univariate log-concave distributions. The description of a universal generator for bivariate distributions has not been published up to now. The new algorithm for bivariate log-concave distributions is based on the method of transformed density rejection. In order to construct a hat function for a rejection algorithm the bivariate density is transformed by the logarithm into a concave function…

  14. Influence of video compression on the measurement error of the television system

    Science.gov (United States)

    Sotnik, A. V.; Yarishev, S. N.; Korotaev, V. V.

    2015-05-01

    Video data require a very large memory capacity, and finding the optimal quality/volume ratio for a video encoding method is a pressing problem due to the urgent need to transfer large amounts of video over various networks. Digital TV signal compression reduces the amount of data used to represent the video stream, effectively reducing the bandwidth required for transmission and storage. It is important to take into account the uncertainties caused by compression of the video signal when television measuring systems are used. There are many digital compression methods, and the aim of the proposed work is to study the influence of video compression on the measurement error in television systems. The measurement error of an object parameter is the main characteristic of a television measuring system: accuracy characterizes the difference between the measured value and the actual parameter value. Both the optical system and the method used to process the received video signal are sources of error in television-system measurements. The presence of error leads to large distortions in the case of compression with a constant data stream rate, and increases the amount of data required to transmit or record an image frame in the case of constant quality. The purpose of intra-coding is to reduce the spatial redundancy within a frame (or field) of the television image; this redundancy is caused by the strong correlation between the elements of the image. It is possible to convert an array of image samples into a matrix of coefficients that are not correlated with each other if one can find a corresponding orthogonal transformation, and entropy coding can then be applied to these uncorrelated coefficients to achieve a reduction in the digital stream. One can select a transformation such that most of the matrix coefficients will be almost zero for typical images. Excluding these zero coefficients also
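    The orthogonal-transform argument can be illustrated with a generic 2-D DCT sketch. This is a demonstration under invented data, not the codec or television system studied in the paper.

    ```python
    # Sketch: a 2-D discrete cosine transform decorrelates image samples; most
    # energy concentrates in a few coefficients, so small ones can be dropped.
    import numpy as np
    from scipy.fft import dctn, idctn

    rng = np.random.default_rng(1)
    # A smooth synthetic 8x8 block (strongly correlated neighboring samples).
    x, y = np.meshgrid(np.arange(8), np.arange(8))
    block = 128 + 40 * np.sin(x / 4.0) + 30 * np.cos(y / 5.0) + rng.normal(0, 2, (8, 8))

    coeffs = dctn(block, norm="ortho")
    kept = np.where(np.abs(coeffs) > 10.0, coeffs, 0.0)   # crude thresholding
    recon = idctn(kept, norm="ortho")

    print("coefficients kept:", int(np.count_nonzero(kept)), "of 64")
    print("max reconstruction error:", float(np.abs(recon - block).max()))
    ```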

  15. Minimum-Energy Bivariate Wavelet Frame with Arbitrary Dilation Matrix

    Directory of Open Access Journals (Sweden)

    Fengjuan Zhu

    2013-01-01

    Full Text Available In order to characterize bivariate signals, minimum-energy bivariate wavelet frames with an arbitrary dilation matrix are studied, based on the superiority of the minimum-energy frame and the significant properties of bivariate wavelets. Firstly, the concept of a minimum-energy bivariate wavelet frame is defined, and its equivalent characterizations and a necessary condition are presented. Secondly, based on the polyphase form of the symbol functions of the scaling function and the wavelet function, two sufficient conditions and an explicit construction method are given. Finally, the decomposition algorithm, reconstruction algorithm, and numerical examples are designed.

  16. Error rates of Belavkin weighted quantum measurements and a converse to Holevo's asymptotic optimality theorem

    OpenAIRE

    Tyson, Jon

    2009-01-01

    We compare several instances of pure-state Belavkin weighted square-root measurements from the standpoint of minimum-error discrimination of quantum states. The quadratically weighted measurement is proven superior to the so-called "pretty good measurement" (PGM) in a number of respects: (1) Holevo's quadratic weighting unconditionally outperforms the PGM in the case of two-state ensembles, with equality only in trivial cases. (2) A converse of a theorem of Holevo is proven, showing that a we...

  17. Simulation of Current Measurement Using Magnetic Sensor Arrays and Its Error Model

    Institute of Scientific and Technical Information of China (English)

    WANG Jing; YAO Jian-jun; WANG Jian-hua

    2004-01-01

    Magnetic sensor arrays are proposed to measure electric current in a non-contact way. In order to achieve higher accuracy, signal processing techniques for magnetic sensor arrays are utilized, and simulation techniques are necessary to study the factors influencing the accuracy of current measurement. This paper presents a simulation method to estimate the impact of the sensing area and the position of sensors on the accuracy of current measurement. Several error models are built to support computer-aided design of magnetic sensor arrays.

  18. Measurement Rounding Errors in an Assessment Model of Project Led Engineering Education

    Directory of Open Access Journals (Sweden)

    Francisco Moreira

    2009-11-01

    Full Text Available This paper analyzes the rounding errors that occur in the assessment of an interdisciplinary Project-Led Education (PLE) process implemented in the Integrated Master degree on Industrial Management and Engineering (IME) at University of Minho. PLE is an innovative educational methodology which makes use of active learning, promoting higher levels of motivation and students' autonomy. The assessment model is based on multiple evaluation components with different weights, and each component can be evaluated by several teachers involved in different Project Supporting Courses (PSC). This model can be affected by different types of errors, namely: (1) rounding errors, and (2) non-uniform rounding criteria for grades. A rigorous analysis of the assessment model was made, and the rounding errors involved in each project component were characterized and measured. This resulted in a global maximum error of 0.308 on the individual student project grade, on a 0 to 100 scale. This analysis was intended to improve not only the reliability of the assessment results, but also teachers' awareness of this problem. Recommendations are also made in order to improve the assessment model and reduce the rounding errors as much as possible.
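    A hedged sketch of this kind of worst-case rounding analysis follows, with invented weights rather than the assessment model's actual components.

    ```python
    # Sketch: worst-case rounding error of a weighted final grade when each
    # component is rounded to the nearest integer before weighting.
    weights = [0.4, 0.3, 0.2, 0.1]      # illustrative component weights
    per_component_bound = 0.5           # rounding to nearest integer: |error| <= 0.5

    worst_case = sum(w * per_component_bound for w in weights)
    print(f"worst-case error on the final grade: {worst_case:.3f} points (0-100 scale)")
    ```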

  19. Reliability for some bivariate exponential distributions

    Directory of Open Access Journals (Sweden)

    2006-01-01

    Full Text Available In the area of stress-strength models, there has been a large amount of work on estimation of the reliability R = Pr(X < Y). This paper considers bivariate distributions with dependence between X and Y. In particular, explicit expressions for R are derived when the joint distribution is bivariate exponential. The calculations involve the use of special functions. An application of the results is also provided.
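    As a hedged numerical companion, here is the independent-exponential case, where R has a simple closed form; the paper's dependent bivariate-exponential expressions involve special functions and are not reproduced here.

    ```python
    # Sketch: Monte Carlo estimate of R = Pr(X < Y) for independent exponentials,
    # checked against the closed form R = lx / (lx + ly) for rates lx, ly.
    import numpy as np

    rng = np.random.default_rng(42)
    lx, ly = 2.0, 3.0                        # illustrative rates for X and Y
    x = rng.exponential(1.0 / lx, size=1_000_000)
    y = rng.exponential(1.0 / ly, size=1_000_000)

    print("Monte Carlo R:", (x < y).mean())
    print("closed form R:", lx / (lx + ly))
    ```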

  20. Analysis of error in measurement and resolution of electronic speckle photography in material testing

    CERN Document Server

    Feiel, R

    1999-01-01

    Causes and magnitude of error in measurement and resolution are investigated for electronic speckle photography (ESP), which is used like a strain gauge in material testing. For this purpose a model of the rough surface is developed which allows the description of the cross-correlation of speckle images under the influence of material strain, and the process through which material strain leads to decorrelation of speckle images is shown. The measurement error caused by defocused imaging and by statistical errors in the displacement estimation of speckle images is investigated theoretically; the results are supported by simulations and experiments. Moreover, the resolution of ESP can be improved through increased optical magnification as well as an adjusted aperture. Resolutions which are usually considered accessible only to interferometric techniques are achieved.

  1. Wide-aperture laser beam measurement using transmission diffuser: errors modeling

    Science.gov (United States)

    Matsak, Ivan S.

    2015-06-01

    Instrumental errors in measuring wide-aperture laser beam diameter were modeled in order to build a measurement setup and justify its metrological characteristics. The modeled setup is based on a CCD camera and a transmission diffuser. This method is appropriate for precision measurement of large laser beam widths from 10 mm up to 1000 mm; it is impossible to measure such beams with other methods based on a slit, pinhole, knife edge or direct CCD camera measurement. The method is suitable for continuous and pulsed laser irradiation. However, the transmission diffuser method has poor metrological justification, which is required in the field of wide-aperture beam forming system verification. Considering the non-availability of a standard for wide-aperture flat-top beams, modeling is the preferred way to provide basic reference points for developing the measurement system. Modeling was conducted in MathCAD. A super-Lorentz distribution with shape parameter 6-12 was used as the beam model. Theoretical evaluation identified the key parameters influencing the error: relative beam size, spatial non-uniformity of the diffuser, lens distortion, physical vignetting, CCD spatial resolution and effective camera ADC resolution. Errors were modeled for the 90%-of-power beam diameter criterion. The 12th-order super-Lorentz distribution was the primary model, because it precisely matches the experimental distribution at the output of the test beam forming system, although other orders were also used. Analytic expressions were obtained by analyzing the modeling results for each influencing factor, and an error of <1% was shown to be attainable through a suitable choice of parameters, based on commercially available components of the setup. The method can provide as low as 0.1% error when calibration procedures and multiple measurements are used.
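    A small numerical sketch of the 90%-of-power diameter criterion for a super-Lorentzian profile follows. The radial form below is an assumption for illustration; the abstract does not spell out its exact parameterization.

    ```python
    # Sketch: compute the 90%-of-power diameter for an assumed radially symmetric
    # super-Lorentzian profile I(r) = 1 / (1 + (r/w)**(2*p)).
    import numpy as np

    w, p = 10.0, 12          # width parameter and shape order (illustrative)
    r = np.linspace(0.0, 5.0 * w, 20000)
    intensity = 1.0 / (1.0 + (r / w) ** (2 * p))

    power = np.cumsum(intensity * r)          # cumulative power ~ integral of I(r)*r dr
    power /= power[-1]
    d90 = 2.0 * r[np.searchsorted(power, 0.90)]
    print(f"90%-of-power diameter: {d90:.2f} (same units as w)")
    ```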

  2. Bivariate Rayleigh Distribution and its Properties

    Directory of Open Access Journals (Sweden)

    Ahmad Saeed Akhter

    2007-01-01

    Full Text Available Rayleigh (1880) observed that sea waves follow no law because of the complexities of the sea, but it has been seen that the probability distributions of wave heights, wave length, wave-induced pitch, and the wave and heave motions of ships follow the Rayleigh distribution. At present, several different quantities are in use for describing the state of the sea; for example, the mean height of the waves, the root mean square height, the height of the "significant waves" (the mean height of the highest one-third of all the waves), the maximum height over a given interval of time, and so on. The shipbuilding industry currently knows less than any other construction industry about the service conditions under which it must operate; only small efforts have been made to establish the stresses and motions and to incorporate the results of such studies into design. This is due to the complexity of the problem, caused by the extensive variability of the sea and the corresponding response of ships; nevertheless, it is possible to predict service conditions for ships in an orderly and relatively simple manner. Rayleigh (1880) derived the distribution from the amplitude of sound resulting from many independent sources. This distribution is also connected with one or two dimensions and is sometimes referred to as the "random walk" frequency distribution. The Rayleigh distribution can be derived from the bivariate normal distribution when the variates are independent and random with equal variances. We construct a bivariate Rayleigh distribution with marginal Rayleigh distribution functions and discuss its fundamental properties.
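    A hedged numerical check of that last derivation (the standard univariate result, not the paper's bivariate construction):

    ```python
    # Sketch: the magnitude of two independent, equal-variance zero-mean normals
    # is Rayleigh distributed; its mean is sigma * sqrt(pi / 2).
    import numpy as np

    rng = np.random.default_rng(7)
    sigma = 2.0
    x = rng.normal(0.0, sigma, size=1_000_000)
    y = rng.normal(0.0, sigma, size=1_000_000)
    r = np.hypot(x, y)

    print("sample mean:", r.mean().round(4))
    print("theory     :", (sigma * np.sqrt(np.pi / 2)).round(4))
    ```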

  3. Height curves based on the bivariate Power-Normal and the bivariate Johnson’s System bounded distribution

    OpenAIRE

    Mønness, Erik Neslein

    2013-01-01

    Often, a forest stand is modeled with a diameter distribution and a height curve as somewhat separate tasks. A bivariate height and diameter distribution yields a unified model of a forest stand, and the conditional median height given the diameter is a possible height curve. Here the bivariate Johnson's System bounded distribution and the bivariate power-normal distribution are evaluated and compared with a simple hyperbolic height curve. Evaluated by the deviance, the hyperbo...

  4. Errors in the measurement of non-Gaussian noise spectra using rf spectrum analyzers

    International Nuclear Information System (INIS)

    We discuss the nature of errors which may occur when the spectra of random signals not obeying Gaussian statistics are measured with typical rf spectrum analyzers. These errors depend on both the noise statistics and the process used to detect the random signal after it has been passed through a narrow bandpass filter within the spectrum analyzer. In general, for random signals not obeying Gaussian statistics, the output of the bandpass filter must be measured with a square law detector if the resulting measurement is to be strictly proportional to the power spectrum of the input signal. We compare measurements of the power spectra of non-Gaussian noise using a commercial spectrum analyzer with its resident envelope detector, with measurements by the same analyzer fitted with a square law detector. Differences of about 5% were observed

  5. [Errors in medicine. Causes, impact and improvement measures to improve patient safety].

    Science.gov (United States)

    Waeschle, R M; Bauer, M; Schmidt, C E

    2015-09-01

    The guarantee of quality of care and patient safety is of major importance in hospitals, even though increased economic pressure and work intensification are ubiquitous. Nevertheless, adverse events still occur in 3-4 % of hospital stays, and of these 25-50 % are estimated to be avoidable. The identification of possible causes of error and the development of measures for the prevention of medical errors are essential for patient safety, and the implementation and continuous development of a constructive culture of error tolerance are fundamental. The origins of errors can be differentiated into systemic latent and individual active causes, and components of both categories are typically involved when an error occurs. Systemic causes are, for example, outdated structural environments, lack of clinical standards and low personnel density. These causes arise far away from the patient, e.g. in management decisions, and can remain unrecognized for a long time. Individual causes involve, e.g., confirmation bias, fixation error and prospective memory failure. These causes have a direct impact on patient care and can result in immediate injury to patients. Stress, unclear information, complex systems and a lack of professional experience can promote individual causes. Awareness of possible causes of error is a fundamental precondition for establishing appropriate countermeasures. Error prevention should include actions directly affecting the causes of error, including checklists and standard operating procedures (SOP) to avoid fixation and prospective memory failure, and team resource management to improve communication and the generation of collective mental models. Critical incident reporting systems (CIRS) provide the opportunity to learn from previous incidents without injury to patients. Information technology (IT) support systems, such as computerized physician order entry systems, assist in the prevention of medication errors by providing

  6. Measurement error of self-reported physical activity levels in New York City: assessment and correction.

    Science.gov (United States)

    Lim, Sungwoo; Wyker, Brett; Bartley, Katherine; Eisenhower, Donna

    2015-05-01

    Because it is difficult to objectively measure population-level physical activity levels, self-reported measures have been used as a surveillance tool. However, little is known about their validity in populations living in dense urban areas. We aimed to assess the validity of self-reported physical activity data against accelerometer-based measurements among adults living in New York City and to apply a practical tool to adjust for measurement error in complex sample data using a regression calibration method. We used 2 components of data: 1) dual-frame random digit dialing telephone survey data from 3,806 adults in 2010-2011 and 2) accelerometer data from a subsample of 679 survey participants. Self-reported physical activity levels were measured using a version of the Global Physical Activity Questionnaire, whereas data on weekly moderate-equivalent minutes of activity were collected using accelerometers. Two self-reported health measures (obesity and diabetes) were included as outcomes. Participants with higher accelerometer values were more likely to underreport the actual levels. (Accelerometer values were considered to be the reference values.) After correcting for measurement errors, we found that associations between outcomes and physical activity levels were substantially deattenuated. Despite difficulties in accurately monitoring physical activity levels in dense urban areas using self-reported data, our findings show the importance of performing a well-designed validation study because it allows for understanding and correcting measurement errors.
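    A hedged sketch of the regression calibration idea used here follows: a generic two-stage version in which the objective measure is regressed on the self-report in the validation subsample, and the calibrated values then replace the self-reports. Survey weights and the complex sample design are omitted, and all numbers are invented.

    ```python
    # Sketch: regression calibration with a validation subsample.
    import numpy as np

    rng = np.random.default_rng(3)
    n, n_val = 3800, 680
    true_pa = rng.gamma(2.0, 150.0, size=n)                  # weekly active minutes
    self_report = true_pa + rng.normal(60, 120, size=n)      # error-prone report
    accel = true_pa[:n_val] + rng.normal(0, 20, size=n_val)  # validation measure

    # Stage 1: calibration model E[accel | self_report] on the validation subsample.
    b1, b0 = np.polyfit(self_report[:n_val], accel, deg=1)
    calibrated = b0 + b1 * self_report                       # for the full sample

    # Stage 2: use `calibrated` instead of `self_report` in the outcome regression.
    print(f"calibration: accel ~ {b0:.1f} + {b1:.2f} * self_report")
    ```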

  7. Investigation of nonlinearity as an error source in strain gauge measurements of high elongations, and comparison with other measuring methods

    International Nuclear Information System (INIS)

    High elongation measurement using strain gauges presents problems with regard to the accuracy of results, emanating on the one hand from the measuring technique applied (bridge linearity at constant current or constant voltage), and on the other from the strain gauge itself (k factor). Error correction has to take into account all parameters influencing the electric signal, as certain effects are opposite in sign. The maximum deviations of the elongations measured by the various measuring devices, in comparison with the true elongation, vary with the measuring technique applied, and within the elongation range investigated (0-0.1 m/m) may reach a maximum between 1 p.c. and 11 p.c. Measurements with equipment using constant current or constant voltage supply have been shown to be appropriate also in the high elongation range, if their specific errors within ≤ p.c. are duly corrected. (orig.)

  8. Simulation study on heterogeneous variance adjustment for observations with different measurement error variance

    DEFF Research Database (Denmark)

    Pitkänen, Timo; Mäntysaari, Esa A; Nielsen, Ulrik Sander;

    2013-01-01

    variance correction is developed for the same observations. As automated milking systems are becoming more popular, the current evaluation model needs to be enhanced to account for the different measurement error variances of observations from automated milking systems. In this simulation study different

  9. Correlation Attenuation Due to Measurement Error: A New Approach Using the Bootstrap Procedure

    Science.gov (United States)

    Padilla, Miguel A.; Veprinsky, Anna

    2012-01-01

    Issues with correlation attenuation due to measurement error are well documented. More than a century ago, Spearman proposed a correction for attenuation. However, this correction has seen very little use since it can potentially inflate the true correlation beyond one. In addition, very little confidence interval (CI) research has been done for…
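    A hedged sketch of Spearman's disattenuation formula with a bootstrap confidence interval follows: a generic illustration, not the authors' exact procedure. The reliabilities below are assumed known and match the simulated noise levels.

    ```python
    # Sketch: correct an observed correlation for attenuation,
    # r_true = r_xy / sqrt(rel_x * rel_y), and bootstrap a CI for it.
    import numpy as np

    rng = np.random.default_rng(11)
    n = 300
    t = rng.normal(size=n)                       # shared true score
    x = t + rng.normal(0, 0.8, size=n)           # two error-prone measures
    y = 0.6 * t + rng.normal(0, 1.0, size=n)
    rel_x, rel_y = 0.61, 0.26                    # assumed reliabilities (illustrative)

    def disattenuate(x, y):
        r = np.corrcoef(x, y)[0, 1]
        return min(r / np.sqrt(rel_x * rel_y), 1.0)   # cap at 1, as the text warns

    boot = [disattenuate(x[idx], y[idx])
            for idx in (rng.integers(0, n, n) for _ in range(2000))]
    print("corrected r:", round(disattenuate(x, y), 3),
          "95% CI:", np.percentile(boot, [2.5, 97.5]).round(3))
    ```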

  10. Using Computation Curriculum-Based Measurement Probes for Error Pattern Analysis

    Science.gov (United States)

    Dennis, Minyi Shih; Calhoon, Mary Beth; Olson, Christopher L.; Williams, Cara

    2014-01-01

    This article describes how "curriculum-based measurement--computation" (CBM-C) mathematics probes can be used in combination with "error pattern analysis" (EPA) to pinpoint difficulties in basic computation skills for students who struggle with learning mathematics. Both assessment procedures provide ongoing assessment data…

  11. Correcting for multivariate measurement error by regression calibration in meta-analyses of epidemiological studies

    DEFF Research Database (Denmark)

    Tybjærg-Hansen, Anne

    2009-01-01

    -specific, averaged and empirical Bayes estimates of RC parameters. Additionally, we allow for binary covariates (e.g. smoking status) and for uncertainty and time trends in the measurement error corrections. Our methods are illustrated using a subset of individual participant data from prospective long-term studies...

  12. Measurement error in earnings data : Using a mixture model approach to combine survey and register data

    NARCIS (Netherlands)

    Meijer, E.; Rohwedder, S.; Wansbeek, T.J.

    2012-01-01

    Survey data on earnings tend to contain measurement error. Administrative data are superior in principle, but are worthless in case of a mismatch. We develop methods for prediction in mixture factor analysis models that combine both data sources to arrive at a single earnings figure. We apply the me

  13. Bias Errors in Measurement of Vibratory Power and Implication for Active Control of Structural Vibration

    DEFF Research Database (Denmark)

    Ohlrich, Mogens; Henriksen, Eigil; Laugesen, Søren

    1997-01-01

    Uncertainties in power measurements performed with piezoelectric accelerometers and force transducers are investigated. It is shown that the inherent structural damping of the transducers is responsible for a bias phase error, which typically is of the order of one degree. Fortunately, such bias …, and the limitations imposed in active control of structural vibration based upon a strategy of power minimisation.

  14. Improved error separation technique for on-machine optical lens measurement

    Science.gov (United States)

    Fu, Xingyu; Bing, Guo; Zhao, Qingliang; Rao, Zhimin; Cheng, Kai; Mulenga, Kabwe

    2016-04-01

    This paper describes an improved error separation technique (EST) for on-machine surface profile measurement which can be applied to optical lenses on precision and ultra-precision machine tools. With only one precise probe and a linear stage, improved EST not only reduces measurement costs, but also shortens the sampling interval, which implies that this method can be used to measure the profile of small-bore lenses. The improved EST with stitching method can be applied to measure the profile of high-height lenses as well. Since the improvement is simple, most of the traditional EST can be modified by this method. The theoretical analysis and experimental results in this paper show that the improved EST eliminates the slide error successfully and generates an accurate lens profile.

  15. A Robust Skin Colour Segmentation Using Bivariate Pearson Type II (Bivariate Beta) Mixture Model

    Directory of Open Access Journals (Sweden)

    B.N.Jagadesh

    2012-10-01

    Full Text Available Probability distributions form the basic framework for developing several segmentation algorithms. Among the various segmentation algorithms, skin colour segmentation is one of the most important for human-computer interaction. Due to the various random factors influencing the colour space, there is no unique algorithm which serves the purpose for all images. In this paper a new skin colour segmentation algorithm is proposed based on a bivariate Pearson type II mixture model, since the hue and saturation values always lie between 0 and 1. The bivariate feature vector of the human image is modeled with a Pearson type II (bivariate Beta) mixture model, and the model parameters are estimated using the EM algorithm. The segmentation algorithm is developed within a Bayesian framework. Experiments show that the proposed skin colour segmentation algorithm performs better with respect to segmentation quality metrics such as PRI, VOI and GCE. The ROC curves plotted for the system also reveal that the proposed algorithm can segment skin colour more effectively than an algorithm based on a Gaussian mixture model for some images.

  16. Effects of Spectral Error in Efficiency Measurements of GaInAs-Based Concentrator Solar Cells

    Energy Technology Data Exchange (ETDEWEB)

    Osterwald, C. R.; Wanlass, M. W.; Moriarty, T.; Steiner, M. A.; Emery, K. A.

    2014-03-01

    This technical report documents a particular error in efficiency measurements of triple-absorber concentrator solar cells caused by incorrect spectral irradiance -- specifically, one that occurs when the irradiance from unfiltered, pulsed xenon solar simulators into the GaInAs bottom subcell is too high. For cells designed so that the light-generated photocurrents in the three subcells are nearly equal, this condition can cause a large increase in the measured fill factor, which, in turn, causes a significant artificial increase in the efficiency. The error is readily apparent when the data under concentration are compared to measurements with correctly balanced photocurrents, and manifests itself as discontinuities in plots of fill factor and efficiency versus concentration ratio. In this work, we simulate the magnitudes and effects of this error with a device-level model of two concentrator cell designs, and demonstrate how a new Spectrolab, Inc., Model 460 Tunable-High Intensity Pulsed Solar Simulator (T-HIPSS) can mitigate the error.

  17. Bivariate Recursive Equations on Excess-of-loss Reinsurance

    Institute of Scientific and Technical Information of China (English)

    Jing Ping YANG; Shi Hong CHENG; Xiao Qian WANG

    2007-01-01

    This paper investigates bivariate recursive equations on excess-of-loss reinsurance. For an insurance portfolio, under the assumptions that the individual claim severity distribution has a bounded continuous density and the number of claims belongs to the R1(a, b) family, bivariate recursive equations for the joint distribution of the cedent's aggregate claims and the reinsurer's aggregate claims are obtained.

  18. Measurement and simulation of clock errors from resource-constrained embedded systems

    International Nuclear Information System (INIS)

    Resource-constrained embedded systems such as wireless sensor networks are becoming increasingly sought-after in a range of critical sensing applications. Hardware for such systems is typically developed as a general tool, intended for research and flexibility. These systems often have unexpected limitations and sources of error when being implemented for specific applications. We investigate via measurement and simulation the output of the onboard clock of a Crossbow MICAz testbed, comprising a quartz oscillator accessed via a combination of hardware and software. We show that the clock output available to the user suffers a number of instabilities and errors. Using a simple software simulation of the system based on a series of nested loops, we identify the source of each component of the error, finding that there is a 7.5 × 10−6 probability that a given oscillation from the governing crystal will be miscounted, resulting in frequency jitter over a 60 µHz range
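    A hedged sketch of the miscount mechanism described above: a Bernoulli-miscount toy model, not the authors' nested-loop simulator. The nominal crystal frequency is an assumption for a MICAz-class device; the per-oscillation miscount probability is the value quoted in the text.

    ```python
    # Sketch: simulate a counter that misses each crystal oscillation with small
    # probability p, and report the resulting relative frequency error.
    import numpy as np

    rng = np.random.default_rng(5)
    f_nominal = 7_372_800          # Hz, assumed MICAz-class crystal frequency
    p_miss = 7.5e-6                # per-oscillation miscount probability (from text)

    seconds = 10
    ticks = rng.binomial(f_nominal, 1.0 - p_miss, size=seconds).sum()
    rel_error = ticks / (f_nominal * seconds) - 1.0
    print(f"relative clock error: {rel_error:.2e} (expected ~ -{p_miss:.1e})")
    ```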

  19. Long-term continuous acoustical suspended-sediment measurements in rivers - Theory, application, bias, and error

    Science.gov (United States)

    Topping, David J.; Wright, Scott A.

    2016-05-04

    these sites. In addition, detailed, step-by-step procedures are presented for the general river application of the method.Quantification of errors in sediment-transport measurements made using this acoustical method is essential if the measurements are to be used effectively, for example, to evaluate uncertainty in long-term sediment loads and budgets. Several types of error analyses are presented to evaluate (1) the stability of acoustical calibrations over time, (2) the effect of neglecting backscatter from silt and clay, (3) the bias arising from changes in sand grain size, (4) the time-varying error in the method, and (5) the influence of nonrandom processes on error. Results indicate that (1) acoustical calibrations can be stable for long durations (multiple years), (2) neglecting backscatter from silt and clay can result in unacceptably high bias, (3) two frequencies are likely required to obtain sand-concentration measurements that are unbiased by changes in grain size, depending on site-specific conditions and acoustic frequency, (4) relative errors in silt-and-clay- and sand-concentration measurements decrease substantially as concentration increases, and (5) nonrandom errors may arise from slow changes in the spatial structure of suspended sediment that affect the relations between concentration in the acoustically ensonified part of the cross section and concentration in the entire river cross section. Taken together, the error analyses indicate that the two-frequency method produces unbiased measurements of suspended-silt-and-clay and sand concentration, with errors that are similar to, or larger than, those associated with conventional sampling methods.

  20. An Assessment of Errors and Their Reduction in Terrestrial Laser Scanner Measurements in Marmorean Surfaces

    Science.gov (United States)

    Garcia-Fernandez, Jorge

    2016-03-01

    The need for accurate documentation for the preservation of cultural heritage has prompted the use of the terrestrial laser scanner (TLS) in this discipline. Its study in the heritage context has focused on opaque surfaces with Lambertian reflectance, while translucent and anisotropic materials remain a major challenge. The use of TLS for such materials is subject to significant measurement distortion due to their optical properties under laser stimulation; this distortion makes range measurement unsuitable for digital modelling in a wide range of cases. The purpose of this paper is to illustrate and discuss these deficiencies and the resulting errors in the documentation of marmorean surfaces using TLS based on time-of-flight and phase-shift. Also proposed in this paper is the reduction of error in depth measurement by adjusting the incidence of the laser beam. The analysis is conducted by controlled experiments.

  2. Evaluating measurement error in readings of blood pressure for adolescents and young adults.

    Science.gov (United States)

    Bauldry, Shawn; Bollen, Kenneth A; Adair, Linda S

    2015-04-01

    Readings of blood pressure are known to be subject to measurement error, but the optimal method for combining multiple readings is unknown. This study assesses different sources of measurement error in blood pressure readings and assesses methods for combining multiple readings using data from a sample of adolescents/young adults who were part of a longitudinal epidemiological study based in Cebu, Philippines. Three sets of blood pressure readings were collected at 2-year intervals for 2127 adolescents and young adults as part of the Cebu National Longitudinal Health and Nutrition Study. Multi-trait, multi-method (MTMM) structural equation models in different groups were used to decompose measurement error in the blood pressure readings into systematic and random components and to examine patterns in the measurement across males and females and over time. The results reveal differences in the measurement properties of blood pressure readings by sex and over time that suggest the combination of multiple readings should be handled separately for these groups at different time points. The results indicate that an average (mean) of the blood pressure readings has high validity relative to a more complicated factor-score-based linear combination of the readings. PMID:25548966

  3. SANG-a kernel density estimator incorporating information about the measurement error

    Science.gov (United States)

    Hayes, Robert

    Analyzing nominally large data sets in which each entry has its own unique measurement error is evaluated with a novel technique. This work begins with a review of modern analytical methodologies such as histogramming data, ANOVA, and regression (weighted and unweighted), along with various error propagation and estimation techniques. It is shown that by assuming the errors obey a functional distribution (such as normal or Poisson), a superposition of the assumed forms provides the most comprehensive and informative graphical depiction of the data set's statistical information. The resultant approach is evaluated only for normally distributed errors, so that the method is effectively a Superposition Analysis of Normalized Gaussians (SANG). SANG is shown to be easily calculated and highly informative in a single graph, where the same result would otherwise require multiple analyses and figures. The work is demonstrated using historical radiochemistry measurements from a transuranic waste geological repository's environmental monitoring program. This work was paid for under NRC-HQ-84-14-G-0059.
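    A hedged sketch of the superposition idea: each measurement contributes one normalized Gaussian with its own reported sigma, and the curves are averaged into a density-like summary. The data below are made up, and plotting details are omitted.

    ```python
    # Sketch: superpose one unit-area Gaussian per measurement, using each
    # entry's own uncertainty, to summarize the data set in a single curve.
    import numpy as np
    from scipy.stats import norm

    values = np.array([4.2, 5.1, 4.8, 6.0, 5.5])     # measured values (made up)
    sigmas = np.array([0.3, 0.8, 0.2, 1.1, 0.4])     # per-measurement uncertainties

    grid = np.linspace(2, 9, 500)
    density = norm.pdf(grid[:, None], loc=values, scale=sigmas).mean(axis=1)

    area = (density * (grid[1] - grid[0])).sum()     # should be ~1 on a wide grid
    print("integrates to ~1:", round(float(area), 3))
    ```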

  4. Invited Review Article: Error and uncertainty in Raman thermal conductivity measurements.

    Science.gov (United States)

    Beechem, Thomas; Yates, Luke; Graham, Samuel

    2015-04-01

    Error and uncertainty in Raman thermal conductivity measurements are investigated via finite element based numerical simulation of two geometries often employed—Joule-heating of a wire and laser-heating of a suspended wafer. Using this methodology, the accuracy and precision of the Raman-derived thermal conductivity are shown to depend on (1) assumptions within the analytical model used in the deduction of thermal conductivity, (2) uncertainty in the quantification of heat flux and temperature, and (3) the evolution of thermomechanical stress during testing. Apart from the influence of stress, errors of 5% coupled with uncertainties of ±15% are achievable for most materials under conditions typical of Raman thermometry experiments. Error can increase to >20%, however, for materials having highly temperature dependent thermal conductivities or, in some materials, when thermomechanical stress develops concurrent with the heating. A dimensionless parameter—termed the Raman stress factor—is derived to identify when stress effects will induce large levels of error. Taken together, the results compare the utility of Raman based conductivity measurements relative to more established techniques while at the same time identifying situations where its use is most efficacious.

  5. Influence of sky radiance measurement errors on inversion-retrieved aerosol properties

    Energy Technology Data Exchange (ETDEWEB)

    Torres, B.; Toledano, C.; Cachorro, V. E.; Bennouna, Y. S.; Fuertes, D.; Gonzalez, R.; Frutos, A. M. de [Atmospheric Optics Group (GOA), University of Valladolid, Valladolid (Spain); Berjon, A. J. [Izana Atmospheric Research Center, Meteorological State Agency of Spain (AEMET), Sta. Cruz de Tenerife (Spain); Dubovik, O.; Goloub, P.; Podvin, T.; Blarel, L. [Laboratory of Atmospheric Optics, Universite Lille 1, Villeneuve d' Ascq (France)

    2013-05-10

    Remote sensing of atmospheric aerosol is a well-established technique that is currently used for routine monitoring of this atmospheric component, both from the ground and from satellites. The AERONET program, initiated in the 1990s, is the most extended network, and the data provided are currently used by a wide community of users for aerosol characterization, satellite and model validation, and synergetic use with other instrumentation (lidar, in-situ, etc.). Aerosol properties are derived within the network from measurements made by ground-based Sun-sky scanning radiometers. Sky radiances are acquired in two geometries: almucantar and principal plane. Discrepancies in the products obtained following the two geometries have been observed, and the main aim of this work is to determine whether they can be justified by measurement errors. Three systematic errors have been analyzed in order to quantify their effects on the inversion-derived aerosol properties: calibration, pointing accuracy and finite field of view. Simulations have shown that typical uncertainty in the analyzed quantities (5% in calibration, 0.2° in pointing and 1.2° field of view) leads to errors in the retrieved parameters that vary depending on the aerosol type and geometry. While calibration and pointing errors have a relevant impact on the products, the finite field of view does not produce notable differences.

  6. Cost-Sensitive Feature Selection of Numeric Data with Measurement Errors

    Directory of Open Access Journals (Sweden)

    Hong Zhao

    2013-01-01

    Full Text Available Feature selection is an essential process in data mining applications since it reduces a model's complexity. However, feature selection with various types of costs is still a new research topic. In this paper, we study the cost-sensitive feature selection problem for numeric data with measurement errors. The major contributions of this paper are fourfold. First, a new data model is built to address test costs and misclassification costs as well as error boundaries; it is distinguished from existing models mainly by the error boundaries. Second, a covering-based rough set model with normal-distribution measurement errors is constructed; with this model, coverings are constructed from data rather than assigned by users. Third, a new cost-sensitive feature selection problem is defined on this model, which is more realistic than existing feature selection problems. Fourth, both backtracking and heuristic algorithms are proposed to deal with the new problem. Experimental results show the efficiency of the pruning techniques for the backtracking algorithm and the effectiveness of the heuristic algorithm. This study is a step toward realistic applications of cost-sensitive learning.

  7. Estimation methods with ordered exposure subject to measurement error and missingness in semi-ecological design

    Directory of Open Access Journals (Sweden)

    Kim Hyang-Mi

    2012-09-01

    Full Text Available Abstract Background In epidemiological studies, it is often not possible to measure the exposures of participants accurately, even if their response variable can be measured without error. When there are several groups of subjects, occupational epidemiologists employ a group-based strategy (GBS) for exposure assessment to reduce bias due to measurement errors: individuals of a group/job within the study sample are all assigned the sample mean of exposure measurements from their group when evaluating the effect of exposure on the response. Exposure is therefore estimated on an ecological level while health outcomes are ascertained for each subject. Such a study design leads to negligible bias in risk estimates when group means are estimated from 'large' samples. However, in many cases, only a small number of observations are available to estimate the group means, and this causes bias in the observed exposure-disease association. Also, the analysis in a semi-ecological design may involve exposure data with the majority missing and the rest observed with measurement errors, together with complete response data collected with ascertainment. Methods In workplaces, groups/jobs are naturally ordered, and this can be incorporated into the estimation procedure by constrained estimation methods together with the expectation and maximization (EM) algorithms for regression models having measurement error and missing values. Four methods were compared by a simulation study: naive complete-case analysis, GBS, the constrained GBS (CGBS), and the constrained expectation and maximization (CEM). We illustrated the methods in the analysis of decline in lung function due to exposures to carbon black. Results Naive and GBS approaches were shown to be inadequate when the number of exposure measurements is too small to accurately estimate group means. The CEM method appears to be the best among them when within each exposure group at least a 'moderate' number of individuals have their

  8. The bivariate Rogers Szegö polynomials

    Science.gov (United States)

    Chen, William Y. C.; Saad, Husam L.; Sun, Lisa H.

    2007-06-01

    We present an operator approach to deriving Mehler's formula and the Rogers formula for the bivariate Rogers-Szegö polynomials h_n(x, y|q). The proof of Mehler's formula can be considered as a new approach to the nonsymmetric Poisson kernel formula for the continuous big q-Hermite polynomials H_n(x; a|q) due to Askey, Rahman and Suslov. Mehler's formula for h_n(x, y|q) involves a 3φ2 sum and the Rogers formula involves a 2φ1 sum. The proofs of these results are based on parameter augmentation with respect to the q-exponential operator and the homogeneous q-shift operator in two variables. By extending recent results on the Rogers-Szegö polynomials h_n(x|q) due to Hou, Lascoux and Mu, we obtain another Rogers-type formula for h_n(x, y|q). Finally, we give a change of base formula for H_n(x; a|q) which can be used to evaluate some integrals by using the Askey-Wilson integral.

  9. The bivariate Rogers-Szegö polynomials

    Energy Technology Data Exchange (ETDEWEB)

    Chen, William Y C [Center for Combinatorics, LPMC, Nankai University, Tianjin 300071 (China); Saad, Husam L [Center for Combinatorics, LPMC, Nankai University, Tianjin 300071 (China); Sun, Lisa H [Center for Combinatorics, LPMC, Nankai University, Tianjin 300071 (China)

    2007-06-08

    We present an operator approach to deriving Mehler's formula and the Rogers formula for the bivariate Rogers-Szegö polynomials h_n(x, y|q). The proof of Mehler's formula can be considered as a new approach to the nonsymmetric Poisson kernel formula for the continuous big q-Hermite polynomials H_n(x; a|q) due to Askey, Rahman and Suslov. Mehler's formula for h_n(x, y|q) involves a 3φ2 sum and the Rogers formula involves a 2φ1 sum. The proofs of these results are based on parameter augmentation with respect to the q-exponential operator and the homogeneous q-shift operator in two variables. By extending recent results on the Rogers-Szegö polynomials h_n(x|q) due to Hou, Lascoux and Mu, we obtain another Rogers-type formula for h_n(x, y|q). Finally, we give a change of base formula for H_n(x; a|q) which can be used to evaluate some integrals by using the Askey-Wilson integral.

  10. Error analysis and measurement uncertainty for a fiber grating strain-temperature sensor.

    Science.gov (United States)

    Tang, Jaw-Luen; Wang, Jian-Neng

    2010-01-01

    A fiber grating sensor capable of distinguishing between temperature and strain, using a reference and a dual-wavelength fiber Bragg grating, is presented. Error analysis and measurement uncertainty for this sensor are studied theoretically and experimentally. The measured root mean squared errors for temperature T and strain ε were estimated to be 0.13 °C and 6 με, respectively. The maximum errors for temperature and strain were calculated as 0.00155 T + 2.90 × 10^-6 ε and 3.59 × 10^-5 ε + 0.01887 T, respectively. Using the estimation of expanded uncertainty at the 95% confidence level with a coverage factor of k = 2.205, temperature and strain measurement uncertainties were evaluated as 2.60 °C and 32.05 με, respectively. For the first time, to our knowledge, we have demonstrated the feasibility of estimating the measurement uncertainty for simultaneous strain-temperature sensing with such a fiber grating sensor.

  11. Error Correction Method for Wind Speed Measured with Doppler Wind LIDAR at Low Altitude

    Science.gov (United States)

    Liu, Bingyi; Feng, Changzhong; Liu, Zhishen

    2014-11-01

    For the purpose of obtaining global vertical wind profiles, the Atmospheric Dynamics Mission Aeolus of the European Space Agency (ESA), carrying the first spaceborne Doppler lidar, ALADIN (Atmospheric LAser Doppler INstrument), is going to be launched in 2015. DLR (German Aerospace Center) developed the A2D (ALADIN Airborne Demonstrator) for prelaunch validation. A ground-based wind lidar for wind profile and wind field scanning measurement, developed by the Ocean University of China, is going to be used for ground-based validation after the launch of Aeolus. In order to provide validation data with higher accuracy, an error correction method is investigated to improve the accuracy of low-altitude wind data measured with a Doppler lidar based on an iodine absorption filter. The error due to nonlinear wind sensitivity is corrected, and the method for merging the atmospheric return signal is improved. The correction method is validated by synchronous wind measurements with lidar and radiosonde. The results show that the accuracy of wind data measured with Doppler lidar at low altitude can be improved by the proposed error correction method.

  12. THE ASYMPTOTIC DISTRIBUTIONS OF EMPIRICAL LIKELIHOOD RATIO STATISTICS IN THE PRESENCE OF MEASUREMENT ERROR

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    Suppose that several different imperfect instruments and one perfect instrument are independently used to measure some characteristics of a population. Thus, measurements of two or more sets of samples with varying accuracies are obtained. Statistical inference should be based on the pooled samples. In this article, the authors also assume that all the imperfect instruments are unbiased. They consider the problem of combining this information to make statistical tests for parameters more relevant. They define the empirical likelihood ratio functions and obtain their asymptotic distributions in the presence of measurement error.

  13. DISTANCE MEASURING MODELING AND ERROR ANALYSIS OF DUAL CCD VISION SYSTEM SIMULATING HUMAN EYES AND NECK

    Institute of Scientific and Technical Information of China (English)

    Wang Xuanyin; Xiao Baoping; Pan Feng

    2003-01-01

    A dual-CCD simulating human eyes and neck (DSHEN) vision system is put forward, and its structure and principle are introduced. The DSHEN vision system can perform movements simulating those of human eyes and neck by means of four rotating joints, and realize precise object recognition and distance measurement in all orientations. The mathematical model of the DSHEN vision system is built, and its movement equation is solved. The coordinate error and measurement precision affected by the movement parameters are analyzed by means of the intersection measuring method. A theoretical foundation for further research on automatic object recognition and precise target tracking is thus provided.

  14. Experimental validation of error in temperature measurements in thin walled ductile iron castings

    DEFF Research Database (Denmark)

    Pedersen, Karl Martin; Tiedje, Niels Skat

    2007-01-01

    …thicknesses between 2 and 4.3 mm. The thermocouples were accurately placed at the same distance from the surface of the casting for different plate thicknesses. It is shown that when measuring the temperature in plates with thickness between 2 and 4.3 mm, the measured temperature will be parallel-shifted to a level about 20 °C lower than the actual temperature in the casting. Factors affecting the measurement error (oxide layer on the thermocouple wire, penetration into the ceramic tube and variation in placement of the thermocouple) are discussed. Finally, it is shown how useful cooling curves may be obtained in…

  15. Performance measure of image and video quality assessment algorithms: subjective root-mean-square error

    Science.gov (United States)

    Nuutinen, Mikko; Virtanen, Toni; Häkkinen, Jukka

    2016-03-01

    Evaluating algorithms used to assess image and video quality requires performance measures. Traditional performance measures (e.g., Pearson's linear correlation coefficient, Spearman's rank-order correlation coefficient, and root mean square error) compare quality predictions of algorithms to subjective mean opinion scores (mean opinion score/differential mean opinion score). We propose a subjective root-mean-square error (SRMSE) performance measure for evaluating the accuracy of algorithms used to assess image and video quality. The SRMSE performance measure takes into account dispersion between observers. The other important property of the SRMSE performance measure is its measurement scale, which is calibrated to units of the number of average observers. The results of the SRMSE performance measure indicate the extent to which the algorithm can replace the subjective experiment (as the number of observers). Furthermore, we have presented the concept of target values, which define the performance level of the ideal algorithm. We have calculated the target values for all sample sets of the CID2013, CVD2014, and LIVE multiply distorted image quality databases. The target values and MATLAB implementation of the SRMSE performance measure are available on the project page of this study.
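    The abstract does not reproduce the SRMSE formula, so the following Python sketch should be read as one plausible form of the idea (squared prediction errors scaled by the per-image dispersion of observer scores), stated here as an assumption rather than the authors' published definition.

      # SRMSE-style measure: a sketch under an assumed normalization, not
      # the published definition. pred are algorithm scores, mos the mean
      # opinion scores, obs_std the per-image observer standard deviations.
      import numpy as np

      def srmse(pred, mos, obs_std, n_obs):
          se = obs_std / np.sqrt(n_obs)   # standard error of each image's MOS
          return np.sqrt(np.mean(((pred - mos) / se) ** 2))

      pred = np.array([3.1, 4.2, 2.5])
      mos = np.array([3.0, 4.5, 2.2])
      print(srmse(pred, mos, obs_std=np.array([0.8, 0.7, 0.9]), n_obs=20))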

  16. Testing and Estimating Shape-Constrained Nonparametric Density and Regression in the Presence of Measurement Error

    KAUST Repository

    Carroll, Raymond J.

    2011-03-01

    In many applications we can expect that, or are interested to know if, a density function or a regression curve satisfies some specific shape constraints. For example, when the explanatory variable, X, represents the value taken by a treatment or dosage, the conditional mean of the response, Y , is often anticipated to be a monotone function of X. Indeed, if this regression mean is not monotone (in the appropriate direction) then the medical or commercial value of the treatment is likely to be significantly curtailed, at least for values of X that lie beyond the point at which monotonicity fails. In the case of a density, common shape constraints include log-concavity and unimodality. If we can correctly guess the shape of a curve, then nonparametric estimators can be improved by taking this information into account. Addressing such problems requires a method for testing the hypothesis that the curve of interest satisfies a shape constraint, and, if the conclusion of the test is positive, a technique for estimating the curve subject to the constraint. Nonparametric methodology for solving these problems already exists, but only in cases where the covariates are observed precisely. However in many problems, data can only be observed with measurement errors, and the methods employed in the error-free case typically do not carry over to this error context. In this paper we develop a novel approach to hypothesis testing and function estimation under shape constraints, which is valid in the context of measurement errors. Our method is based on tilting an estimator of the density or the regression mean until it satisfies the shape constraint, and we take as our test statistic the distance through which it is tilted. Bootstrap methods are used to calibrate the test. The constrained curve estimators that we develop are also based on tilting, and in that context our work has points of contact with methodology in the error-free case.

  17. Nonparametric Signal Extraction and Measurement Error in the Analysis of Electroencephalographic Activity During Sleep.

    Science.gov (United States)

    Crainiceanu, Ciprian M; Caffo, Brian S; Di, Chong-Zhi; Punjabi, Naresh M

    2009-06-01

    We introduce methods for signal and associated variability estimation based on hierarchical nonparametric smoothing with application to the Sleep Heart Health Study (SHHS). SHHS is the largest electroencephalographic (EEG) collection of sleep-related data, which contains, at each visit, two quasi-continuous EEG signals for each subject. The signal features extracted from EEG data are then used in second level analyses to investigate the relation between health, behavioral, or biometric outcomes and sleep. Using subject specific signals estimated with known variability in a second level regression becomes a nonstandard measurement error problem. We propose and implement methods that take into account cross-sectional and longitudinal measurement error. The research presented here forms the basis for EEG signal processing for the SHHS.

  18. Optimal sparse volatility matrix estimation for high-dimensional Itô processes with measurement errors

    OpenAIRE

    Tao, Minjing; Wang, Yazhen; Harrison H. Zhou

    2013-01-01

    Stochastic processes are often used to model complex scientific problems in fields ranging from biology and finance to engineering and physical science. This paper investigates rate-optimal estimation of the volatility matrix of a high-dimensional Itô process observed with measurement errors at discrete time points. The minimax rate of convergence is established for estimating sparse volatility matrices. By combining the multi-scale and threshold approaches we construct a volatility matrix…

  19. Measurement errors in multifrequency bioelectrical impedance analyzers with and without impedance electrode mismatch.

    OpenAIRE

    Bogónez Franco, Francisco; Nescolarde Selva, Lexa Digna; Bragós Bardia, Ramon; Rosell Ferrer, Francisco Javier; Yandiola, Iñigo

    2009-01-01

    The purpose of this study is to compare measurement errors in two commercially available multi-frequency bioimpedance analyzers, a Xitron 4000B and an ImpediMed SFB7, including electrode impedance mismatch. The comparison was made using resistive electrical models and in ten human volunteers. We used three different electrical models simulating three different body segments: the right side, leg and thorax. In the electrical models, we tested the effect of the capacitive coupling of the patient to ground and the skin-electrode impedance mismatch.

  20. Error-control and processes optimization of (223/224)Ra measurement using Delayed Coincidence Counter (RaDeCC).

    Science.gov (United States)

    Xiaoqing, Cheng; Lixin, Yi; Lingling, Liu; Guoqiang, Tang; Zhidong, Wang

    2015-11-01

    RaDeCC has proved to be a precise and standard way to measure (224)Ra and (223)Ra in water samples and has successfully made radium a tracer of several environmental processes. In this paper, the relative errors of (224)Ra and (223)Ra measurement in water samples via a Radium Delayed Coincidence Count system are analyzed by performing coincidence correction calculations and error propagation. The calculated relative errors range from 2.6% to 10.6% for (224)Ra and from 9.6% to 14.2% for (223)Ra. For different radium activities, the effects of decay days and counting time on the final radium relative errors are evaluated, and the results show that these relative errors can be decreased by adjusting the two measurement factors. Finally, to minimize the propagated errors in radium activity, a set of optimized RaDeCC measurement parameters is proposed. PMID:26233651
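    As a generic illustration of the kind of counting-statistics error propagation involved (not the RaDeCC-specific coincidence corrections), the Poisson error on a net count rate can be propagated as follows; all numbers are hypothetical.

      # Relative error of a net count rate under Poisson statistics:
      # a generic sketch, not the RaDeCC correction chain.
      import numpy as np

      def net_rate_rel_error(gross_counts, bkg_counts, t):
          net = (gross_counts - bkg_counts) / t
          # Poisson: var(counts) = counts; rates divide by counting time t
          sigma = np.sqrt(gross_counts + bkg_counts) / t
          return sigma / net

      print(net_rate_rel_error(gross_counts=500, bkg_counts=60, t=600.0))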

  1. Systematic Continuum Errors in the Lyman-Alpha Forest and The Measured Temperature-Density Relation

    CERN Document Server

    Lee, Khee-Gan

    2011-01-01

    Continuum fitting uncertainties are a major source of error in estimates of the temperature-density relation (usually parametrized as a power law, T ∝ Δ^(γ−1)) of the intergalactic medium (IGM) through the flux probability distribution function (PDF) of the Lyman-α forest. Using a simple order-of-magnitude calculation, we show that few-percent-level systematic errors in the placement of the quasar continuum, due to e.g. a uniform low-absorption Gunn-Peterson component, could lead to errors in γ of order unity. This is quantified further using a simple semi-analytic model of the Lyman-α forest flux PDF. We find that under-(over-)estimates in the continuum level can lead to a lower (higher) measured value of γ. At current observational limits, continuum biases significantly increase the error in γ from σ_γ = 0.1 to σ_γ = 0.3 within our model. We argue that steps need to be taken to directly estimate the level of continuum bias in order to make recent claims…

  2. Measurements of Gun Tube Motion and Muzzle Pointing Error of Main Battle Tanks

    Directory of Open Access Journals (Sweden)

    Peter L. McCall

    2001-01-01

    Beginning in 1990, the US Army Aberdeen Test Center (ATC) began testing a prototype cannon mounted in a non-armored turret fitted to an M1A1 Abrams tank chassis. The cannon design incorporated a longer gun tube as a means to increase projectile velocity. A significant increase in projectile impact dispersion was measured early in the test program. Through investigative efforts, the cause of the error was linked to the increased dynamic bending or flexure of the longer tube observed while the vehicle was moving. Research and investigative work was conducted through a collaborative effort with the US Army Research Laboratory, Benet Laboratory, Project Manager – Tank Main Armament Systems, US Army Research and Engineering Center, and Cadillac Gage Textron Inc. New test methods, instrumentation, data analysis procedures, and stabilization control design resulted from this series of investigations into the dynamic tube flexure error source. Through this joint research, improvements in tank fire control design have been developed to improve delivery accuracy. This paper discusses the instrumentation implemented, methods applied, and analysis procedures used to characterize the tube flexure during dynamic tests of a main battle tank, and the relationship between gun pointing error and muzzle pointing error.

  3. Error analysis for retrieval of Venus' IR surface emissivity from VIRTIS/VEX measurements

    Science.gov (United States)

    Kappel, David; Haus, Rainer; Arnold, Gabriele

    2015-08-01

    Venus' surface emissivity data in the infrared can serve to explore the planet's geology. The only global data with high spectral, spatial, and temporal resolution and coverage at present is supplied by nightside emission measurements acquired by the Visible and InfraRed Thermal Imaging Spectrometer VIRTIS-M-IR (1.0 - 5.1 μm) aboard ESA's Venus Express. A radiative transfer simulation and a retrieval algorithm can be used to determine surface emissivity in the nightside spectral transparency windows located at 1.02, 1.10, and 1.18 μm. To obtain satisfactory fits to measured spectra, the retrieval pipeline also determines auxiliary parameters describing cloud properties from a certain spectral range. But spectral information content is limited, and emissivity is difficult to retrieve due to strong interferences from other parameters. Based on a selection of representative synthetic VIRTIS-M-IR spectra in the range 1.0 - 2.3 μm, this paper investigates emissivity retrieval errors that can be caused by interferences of atmospheric and surface parameters, by measurement noise, and by a priori data, and which retrieval pipeline leads to minimal errors. Retrieval of emissivity from a single spectrum is shown to fail due to extremely large errors, although the fits to the reference spectra are very good. Neglecting geologic activity, it is suggested to apply a multi-spectrum retrieval technique to retrieve emissivity relative to an initial value as a parameter that is common to several measured spectra that cover the same surface bin. Retrieved emissivity maps of targets with limited extension (a few thousand km) are then additively renormalized to remove spatially large scale deviations from the true emissivity map that are due to spatially slowly varying interfering parameters. Corresponding multi-spectrum retrieval errors are estimated by a statistical scaling of the single-spectrum retrieval errors and are listed for 25 measurement repetitions. For the best of the

  4. Optics measurement algorithms and error analysis for the proton energy frontier

    CERN Document Server

    Langner, A

    2015-01-01

    Optics measurement algorithms have been improved in preparation for the commissioning of the LHC at higher energy, i.e., with an increased damage potential. Due to machine protection considerations, the higher energy sets tighter limits on the maximum excitation amplitude and the total beam charge, reducing the signal-to-noise ratio of optics measurements. Furthermore, the precision in 2012 (4 TeV) was insufficient to understand beam size measurements and determine interaction point (IP) β-functions (β). A new, more sophisticated algorithm has been developed which takes into account both the statistical and systematic errors involved in this measurement. This makes it possible to combine more beam position monitor measurements for deriving the optical parameters, and it is demonstrated to significantly improve the accuracy and precision. Measurements from the 2012 run have been reanalyzed and, due to the improved algorithms, result in a significantly higher precision of the derived optical parameters and decreased…

  5. Copula bivariate probit models: with an application to medical expenditures.

    Science.gov (United States)

    Winkelmann, Rainer

    2012-12-01

    The bivariate probit model is frequently used for estimating the effect of an endogenous binary regressor (the 'treatment') on a binary health outcome variable. This paper discusses simple modifications that maintain the probit assumption for the marginal distributions while introducing non-normal dependence using copulas. In an application of the copula bivariate probit model to the effect of insurance status on the absence of ambulatory health care expenditure, a model based on the Frank copula outperforms the standard bivariate probit model. PMID:22025413

  6. A Bivariate Analogue to the Composed Product of Polynomials

    Institute of Scientific and Technical Information of China (English)

    Donald Mills; Kent M. Neuerburg

    2003-01-01

    The concept of a composed product for univariate polynomials has been explored extensively by Brawley, Brown, Carlitz, Gao, Mills, et al. Starting with these fundamental ideas and utilizing fractional power series representation (in particular, the Puiseux expansion) of bivariate polynomials, we generalize the univariate results. We define a bivariate composed sum, composed multiplication, and composed product (based on function composition). Further, we investigate the algebraic structure of certain classes of bivariate polynomials under these operations. We also generalize a result of Brawley and Carlitz concerning the decomposition of polynomials into irreducibles.
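    For the univariate case that this paper generalizes, the composed product of f and g (the polynomial whose roots are the pairwise products of the roots of f and g) can be computed with a resultant. A small SymPy sketch of that classical construction:

      # Univariate composed product via a resultant: the roots of the result
      # are {alpha * beta : f(alpha) = 0, g(beta) = 0}.
      from sympy import symbols, resultant, Poly, expand

      x, y = symbols("x y")

      def composed_product(f, g):
          m = Poly(g, x).degree()
          return expand(resultant(f.subs(x, y), expand(y**m * g.subs(x, x / y)), y))

      f = x**2 - 2   # roots +-sqrt(2)
      g = x**2 - 3   # roots +-sqrt(3)
      # prints x**4 - 12*x**2 + 36, i.e. (x**2 - 6)**2, roots +-sqrt(6)
      print(composed_product(f, g))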

  7. Measurement error analysis of three dimensional coordinates of tomatoes acquired using the binocular stereo vision

    Science.gov (United States)

    Xiang, Rong

    2014-09-01

    This study analyzes the measurement errors of the three-dimensional coordinates obtained by binocular stereo vision for tomatoes, based on three stereo matching methods (centroid-based matching, area-based matching, and combination matching), to improve the localization accuracy of the binocular stereo vision system of tomato harvesting robots. Centroid-based matching was realized through the matching of the feature points of the centroids of tomato regions. Area-based matching was realized based on the gray similarity between the two neighborhoods of the two pixels to be matched in the stereo images. Combination matching was realized using the rough disparity acquired through centroid-based matching as the center of the dynamic disparity range used in area-based matching. After stereo matching, the three-dimensional coordinates of the tomatoes were acquired using the triangle range finding principle. Test results based on 225 stereo images of 3 tomatoes, captured at distances from 300 to 1000 mm, showed that the measurement errors of the x coordinates were small and can meet the needs of harvesting robots. However, the measurement biases of the y coordinates and depth values were large, and the measurement variation of the depth values was also large. Therefore, the measurement biases of the y coordinates and depth values, and the measurement variation of the depth values, should be corrected in future research.

  8. Precision Measurements of the Cluster Red Sequence using an Error Corrected Gaussian Mixture Model

    Energy Technology Data Exchange (ETDEWEB)

    Hao, Jiangang; /Fermilab /Michigan U.; Koester, Benjamin P.; /Chicago U.; Mckay, Timothy A.; /Michigan U.; Rykoff, Eli S.; /UC, Santa Barbara; Rozo, Eduardo; /Ohio State U.; Evrard, August; /Michigan U.; Annis, James; /Fermilab; Becker, Matthew; /Chicago U.; Busha, Michael; /KIPAC, Menlo Park /SLAC; Gerdes, David; /Michigan U.; Johnston, David E.; /Northwestern U. /Brookhaven

    2009-07-01

    The red sequence is an important feature of galaxy clusters and plays a crucial role in optical cluster detection. Measurement of the slope and scatter of the red sequence are affected both by selection of red sequence galaxies and measurement errors. In this paper, we describe a new error corrected Gaussian Mixture Model for red sequence galaxy identification. Using this technique, we can remove the effects of measurement error and extract unbiased information about the intrinsic properties of the red sequence. We use this method to select red sequence galaxies in each of the 13,823 clusters in the maxBCG catalog, and measure the red sequence ridgeline location and scatter of each. These measurements provide precise constraints on the variation of the average red galaxy populations in the observed frame with redshift. We find that the scatter of the red sequence ridgeline increases mildly with redshift, and that the slope decreases with redshift. We also observe that the slope does not strongly depend on cluster richness. Using similar methods, we show that this behavior is mirrored in a spectroscopic sample of field galaxies, further emphasizing that ridgeline properties are independent of environment. These precise measurements serve as an important observational check on simulations and mock galaxy catalogs. The observed trends in the slope and scatter of the red sequence ridgeline with redshift are clues to possible intrinsic evolution of the cluster red-sequence itself. Most importantly, the methods presented in this work lay the groundwork for further improvements in optically-based cluster cosmology.
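    The core deconvolution idea can be illustrated apart from the full mixture machinery: the observed color scatter is the intrinsic scatter broadened by per-galaxy measurement errors, so an intrinsic variance estimate is the observed variance minus the mean error variance. A single-Gaussian Python sketch on synthetic data (not the ECGMM itself):

      # Intrinsic-scatter estimation by error deconvolution: a single-Gaussian
      # sketch of the idea behind the error-corrected mixture model.
      import numpy as np

      rng = np.random.default_rng(0)
      intrinsic = rng.normal(1.0, 0.05, size=2000)     # true colors
      err = rng.uniform(0.02, 0.08, size=2000)         # reported measurement errors
      observed = intrinsic + rng.normal(0.0, err)      # error-broadened colors

      var_intrinsic = observed.var() - np.mean(err**2) # deconvolution step
      print(np.sqrt(var_intrinsic))                    # recovers roughly 0.05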

  9. A statistical model for measurement error that incorporates variation over time in the target measure, with application to nutritional epidemiology.

    Science.gov (United States)

    Freedman, Laurence S; Midthune, Douglas; Dodd, Kevin W; Carroll, Raymond J; Kipnis, Victor

    2015-11-30

    Most statistical methods that adjust analyses for measurement error assume that the target exposure T is a fixed quantity for each individual. However, in many applications, the value of T for an individual varies with time. We develop a model that accounts for such variation, describing the model within the framework of a meta-analysis of validation studies of dietary self-report instruments, where the reference instruments are biomarkers. We demonstrate that in this application, the estimates of the attenuation factor and correlation with true intake, key parameters quantifying the accuracy of the self-report instrument, are sometimes substantially modified under the time-varying exposure model compared with estimates obtained under a traditional fixed-exposure model. We conclude that accounting for the time element in measurement error problems is potentially important.
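    The attenuation factor mentioned here has a simple form under the classical fixed-exposure model Q = T + e with e independent of T: a regression on the error-prone Q is shrunk by lambda = var(T)/(var(T) + var(e)). A quick simulated check in Python, with toy numbers and the fixed-exposure case only (the paper's time-varying model modifies exactly this quantity):

      # Attenuation factor under the classical measurement-error model:
      # a minimal simulated check with illustrative numbers.
      import numpy as np

      rng = np.random.default_rng(1)
      T = rng.normal(10, 2, size=5000)        # usual (true) intake
      Q = T + rng.normal(0, 3, size=5000)     # self-reported intake, var(e) = 9
      lam = np.cov(T, Q)[0, 1] / np.var(Q, ddof=1)
      print(lam, T.var() / (T.var() + 9))     # the two should roughly agree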

  10. Development of a simulation program to study error propagation in the reprocessing input accountancy measurements

    International Nuclear Information System (INIS)

    A physical model and a computer program have been developed to simulate all the measurement operations involved in the Isotopic Dilution Analysis technique currently applied in the Volume-Concentration method for Reprocessing Input Accountancy, together with their errors or uncertainties. The simulator readily solves a number of problems related to the measurement activities of the plant operator and the inspector. The program, written in Fortran 77, is based on a particular Monte Carlo technique named 'Random Sampling'; a full description of the code is reported.
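    A generic Monte Carlo (random sampling) error propagation of this kind can be sketched in a few lines: draw each measured input from its assumed error distribution, push the draws through the measurement equation, and read off the spread of the result. The equation and uncertainties below are placeholders, not the plant's isotopic dilution model:

      # Monte-Carlo error propagation: a generic sketch with placeholder
      # inputs, not the simulator's isotopic dilution equations.
      import numpy as np

      rng = np.random.default_rng(2)
      n = 100_000
      spike = rng.normal(1.00, 0.005, n)     # spike amount, 0.5% relative error
      ratio = rng.normal(0.80, 0.008, n)     # measured isotope ratio, 1%
      volume = rng.normal(250.0, 1.0, n)     # tank volume estimate

      mass = spike * ratio * volume          # stand-in for the IDA equation
      print(mass.mean(), mass.std() / mass.mean())  # propagated relative error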

  11. An examination of errors in characteristic curve measurements of radiographic screen/film systems.

    Science.gov (United States)

    Wagner, L K; Barnes, G T; Bencomo, J A; Haus, A G

    1983-01-01

    The precision and accuracy achieved in the measurement of characteristic curves for radiographic screen/film systems is quantitatively investigated for three techniques: inverse square, kVp bootstrap, and step-wedge bootstrap. Precision of all techniques is generally better than +/- 1.5% while the agreement among all intensity-scale techniques is better than 2% over the useful exposure latitude. However, the accuracy of the sensitometry will depend on several factors, including linearity and energy dependence of the calibration instrument, that may introduce larger errors. Comparisons of time-scale and intensity-scale methods are made and a means of measuring reciprocity law failure is demonstrated. PMID:6877185

  12. Error rates of Belavkin weighted quantum measurements and a converse to Holevo's asymptotic optimality theorem

    CERN Document Server

    Tyson, Jon

    2009-01-01

    We compare several instances of pure-state Belavkin weighted square-root measurements from the standpoint of minimum-error discrimination of quantum states. The quadratically weighted measurement is proven superior to the so-called "pretty good measurement" (PGM) in a number of respects: (1) Holevo's quadratic weighting unconditionally outperforms the PGM in the case of two-state ensembles, with equality only in trivial cases. (2) A converse of a theorem of Holevo is proven, showing that a weighted measurement is asymptotically optimal only if it is quadratically weighted. Counterexamples for three states are constructed. The cube-weighted measurement of Ballester, Wehner, and Winter is also considered. Sufficient optimality conditions for various weights are compared.

  13. The effect of clock, media, and station location errors on Doppler measurement accuracy

    Science.gov (United States)

    Miller, J. K.

    1993-01-01

    Doppler tracking by the Deep Space Network (DSN) is the primary radio metric data type used by navigation to determine the orbit of a spacecraft. The accuracy normally attributed to orbits determined exclusively with Doppler data is about 0.5 microradians in geocentric angle. Recently, the Doppler measurement system has evolved to a high degree of precision primarily because of tracking at X-band frequencies (7.2 to 8.5 GHz). However, the orbit determination system has not been able to fully utilize this improved measurement accuracy because of calibration errors associated with transmission media, the location of tracking stations on the Earth's surface, the orientation of the Earth as an observing platform, and timekeeping. With the introduction of Global Positioning System (GPS) data, it may be possible to remove a significant error associated with the troposphere. In this article, the effect of various calibration errors associated with transmission media, Earth platform parameters, and clocks are examined. With the introduction of GPS calibrations, it is predicted that a Doppler tracking accuracy of 0.05 microradians is achievable.

  14. Analysis of errors in the measurement of energy dissipation with two-point LDA

    Energy Technology Data Exchange (ETDEWEB)

    Ducci, A.; Yianneskis, M. [Department of Mechanical Engineering, King's College London, Experimental and Computational Laboratory for the Analysis of Turbulence (ECLAT), London (United Kingdom)]

    2005-04-01

    In the present study, an attempt has been made to identify and quantify, with a rigorous analytical approach, all possible sources of error involved in the estimation of the fluctuating velocity gradients (∂u_i/∂x_j)² when a two-point laser Doppler velocimetry (LDV) technique is employed. Measurements were carried out in a grid-generated turbulence flow where the local dissipation rate can be calculated from the decay of kinetic energy. An assessment of the cumulative error determined through the analysis has been made by comparing the values of the spatial gradients directly measured with the gradient estimated from the decay of kinetic energy. The main sources of error were found to be related to the length of the two control volumes and to the fitting range, as well as the function used to interpolate the correlation coefficient when the Taylor length scale (or (∂u_i/∂x_j)²) is estimated.

  15. Influence of measurement errors on temperature-based death time determination.

    Science.gov (United States)

    Hubig, Michael; Muggenthaler, Holger; Mall, Gita

    2011-07-01

    Temperature-based methods represent essential tools in forensic death time determination. Empirical double exponential models have gained wide acceptance because they are highly flexible and simple to handle. The most established model commonly used in forensic practice was developed by Henssge. It contains three independent variables: the body mass, the environmental temperature, and the initial body core temperature. The present study investigates the influence of variations in the input data (environmental temperature, initial body core temperature, core temperature, time) on the standard deviation of the model-based estimates of the time since death. Two different approaches were used for calculating the standard deviation: the law of error propagation and the Monte Carlo method. Errors in environmental temperature measurements as well as deviations of the initial rectal temperature were identified as major sources of inaccuracy in model-based death time estimation.
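    The Monte Carlo approach mentioned above can be illustrated on Henssge's model. The sketch below propagates assumed input uncertainties through the commonly cited double-exponential cooling formula for ambient temperatures up to about 23 °C; the input standard deviations are illustrative only.

      # Monte-Carlo propagation through Henssge's double-exponential cooling
      # model (commonly cited constants for ambient <= 23 C); uncertainties
      # on the inputs are illustrative.
      import numpy as np
      from scipy.optimize import brentq

      def henssge_time(T_rectal, T_ambient, mass_kg, T0=37.2):
          Q = (T_rectal - T_ambient) / (T0 - T_ambient)
          B = -1.2815 * mass_kg ** -0.625 + 0.0284
          f = lambda t: 1.25 * np.exp(B * t) - 0.25 * np.exp(5 * B * t) - Q
          return brentq(f, 0.01, 100.0)      # hours since death

      rng = np.random.default_rng(3)
      times = [henssge_time(rng.normal(30.0, 0.1),   # rectal temp +- 0.1 C
                            rng.normal(18.0, 0.5),   # ambient temp +- 0.5 C
                            rng.normal(75.0, 2.0))   # body mass +- 2 kg
               for _ in range(2000)]
      print(np.mean(times), np.std(times))           # estimate and its spread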

  16. Integration of rain gauge measurement errors with the overall rainfall uncertainty estimation using kriging methods

    Science.gov (United States)

    Cecinati, Francesca; Moreno Ródenas, Antonio Manuel; Rico-Ramirez, Miguel Angel; ten Veldhuis, Marie-claire; Han, Dawei

    2016-04-01

    In many research studies rain gauges are used as a reference point measurement for rainfall, because they can reach very good accuracy, especially compared to radar or microwave links, and their use is very widespread. In some applications rain gauge uncertainty is assumed to be small enough to be neglected. This can be done when rain gauges are accurate and their data is correctly managed. Unfortunately, in many operational networks the importance of accurate rainfall data and of data quality control can be underestimated; budget and best practice knowledge can be limiting factors in a correct rain gauge network management. In these cases, the accuracy of rain gauges can drastically drop and the uncertainty associated with the measurements cannot be neglected. This work proposes an approach based on three different kriging methods to integrate rain gauge measurement errors in the overall rainfall uncertainty estimation. In particular, rainfall products of different complexity are derived through 1) block kriging on a single rain gauge, 2) ordinary kriging on a network of different rain gauges, and 3) kriging with external drift to integrate all the available rain gauges with radar rainfall information. The study area is the Eindhoven catchment, contributing to the river Dommel, in the southern part of the Netherlands. The area, 590 km², is covered by high quality rain gauge measurements by the Royal Netherlands Meteorological Institute (KNMI), which has one rain gauge inside the study area and six around it, and by lower quality rain gauge measurements by the Dommel Water Board and by the Eindhoven Municipality (six rain gauges in total). The integration of the rain gauge measurement error is accomplished in all cases by increasing the nugget of the semivariogram in proportion to the estimated error. Using different semivariogram models for the different networks allows for the separate characterisation of higher and lower quality rain gauges. For the kriging with…
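    The mechanics of the proposed integration are easy to sketch: the estimated measurement-error variance of each gauge enters the kriging system as an extra, gauge-specific nugget, so noisier gauges receive smaller weights. A toy example in Python, written in the covariance form of simple kriging with an assumed exponential covariance and made-up distances (the study's fitted semivariogram models will differ):

      # Simple kriging with per-gauge error variances added to the diagonal
      # (the covariance-form analogue of a gauge-specific nugget).
      # All numbers are illustrative.
      import numpy as np

      def cov(h, sill=1.0, range_km=15.0):
          return sill * np.exp(-np.asarray(h, float) / range_km)

      dists = np.array([[0, 8, 20],
                        [8, 0, 12],
                        [20, 12, 0]], float)    # gauge-to-gauge distances (km)
      d_target = np.array([5.0, 7.0, 16.0])     # gauge-to-target distances (km)
      gauge_err = np.array([0.0, 0.05, 0.30])   # per-gauge error variances

      K = cov(dists) + np.diag(gauge_err)       # noisy-data kriging system
      weights = np.linalg.solve(K, cov(d_target))
      print(weights)                            # the noisiest gauge is down-weighted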

  17. Evaluating Procedures for Reducing Measurement Error in Math Curriculum-Based Measurement Probes

    Science.gov (United States)

    Methe, Scott A.; Briesch, Amy M.; Hulac, David

    2015-01-01

    At present, it is unclear whether math curriculum-based measurement (M-CBM) procedures provide a dependable measure of student progress in math computation because support for its technical properties is based largely upon a body of correlational research. Recent investigations into the dependability of M-CBM scores have found that evaluating…

  18. Research on the influence and correction method of depth scanning error to the underwater acoustic image measurement

    Institute of Scientific and Technical Information of China (English)

    MEI Jidan; ZHAI Chunpin; WANG Yilin; HUI Junying

    2011-01-01

    The technology of underwater acoustic image measurement is a passive locating method with high precision in the near field. To improve the precision of underwater acoustic image measurement, the influence of the depth scanning error was analyzed and the correction…

  19. A Study on Measurement Error during Alternating Current Induced Voltage Tests on Large Transformers

    Institute of Scientific and Technical Information of China (English)

    WANG Xuan; LI Yun-ge; CAO Xiao-long; LIU Ying

    2006-01-01

    The large transformer is pivotal equipment in an electric power supply system; its partial discharge test and induced voltage withstand test are carried out at a frequency of about twice the working frequency. If the magnetizing inductance cannot compensate for the stray capacitance, the test sample turns into a capacitive load and a capacitive rise appears in the testing circuit. For self-restoring insulation, a method has been recommended in IEC 60-1 that an unapproved measuring system be calibrated by an approved system at a voltage not less than 50% of the rated testing voltage, and the result then be extrapolated linearly. It has been found that this method leads to great error due to the capacitive rise if it is not correctly used during a withstand voltage test under certain testing conditions, especially for a test on high-voltage transformers with large capacity. Since the withstand voltage test is the most important means of examining the operational reliability of a transformer, and it can be destructive to the insulation, a precise measurement must be guaranteed. In this paper a factor, named the capacitive rise factor, is introduced to assess the rise. The voltage measurement error during the calibration is determined by the parameters of the test sample and the testing facilities, as well as the measuring point. Based on theoretical analysis in this paper, a novel method is suggested and demonstrated to estimate the error by using the capacitive rise factor and other known parameters of the testing circuit.

  20. Measured and predicted root-mean-square errors in square and triangular antenna mesh facets

    Science.gov (United States)

    Fichter, W. B.

    1989-01-01

    Deflection shapes of square and equilateral triangular facets of two tricot-knit, gold plated molybdenum wire mesh antenna materials were measured and compared, on the basis of root mean square (rms) differences, with deflection shapes predicted by linear membrane theory, for several cases of biaxial mesh tension. The two mesh materials contained approximately 10 and 16 holes per linear inch, measured diagonally with respect to the course and wale directions. The deflection measurement system employed a non-contact eddy current proximity probe and an electromagnetic distance sensing probe in conjunction with a precision optical level. Despite experimental uncertainties, rms differences between measured and predicted deflection shapes suggest the following conclusions: that replacing flat antenna facets with facets conforming to parabolically curved structural members yields smaller rms surface error; that potential accuracy gains are greater for equilateral triangular facets than for square facets; and that linear membrane theory can be a useful tool in the design of tricot knit wire mesh antennas.

  1. De Novo Correction of Mass Measurement Error in Low Resolution Tandem MS Spectra for Shotgun Proteomics

    Science.gov (United States)

    Egertson, Jarrett D.; Eng, Jimmy K.; Bereman, Michael S.; Hsieh, Edward J.; Merrihew, Gennifer E.; MacCoss, Michael J.

    2012-12-01

    We report an algorithm designed for the calibration of low resolution peptide mass spectra. Our algorithm is implemented in a program called FineTune, which corrects systematic mass measurement error in 1 min, with no input required besides the mass spectra themselves. The mass measurement accuracy for a set of spectra collected on an LTQ-Velos improved 20-fold from -0.1776 ± 0.0010 m/z to 0.0078 ± 0.0006 m/z after calibration (avg ± 95% confidence interval). The precision in mass measurement was improved due to the correction of non-linear variation in mass measurement accuracy across the m/z range.
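    FineTune's de novo procedure is not spelled out in the abstract, but the correction step it implies (estimate the systematic m/z error as a smooth function of m/z, then subtract it) can be sketched as follows; the peak lists and the quadratic error model are assumptions for illustration:

      # Recalibration sketch: fit the systematic m/z error as a smooth
      # function of m/z and subtract it. Peak values and the quadratic
      # model are illustrative, not FineTune's actual procedure.
      import numpy as np

      observed = np.array([300.21, 500.38, 700.52, 900.70, 1100.91])
      theoretical = np.array([300.20, 500.30, 700.40, 900.55, 1100.70])

      err = observed - theoretical
      coef = np.polyfit(observed, err, deg=2)       # error as a function of m/z
      calibrated = observed - np.polyval(coef, observed)
      print(np.round(calibrated - theoretical, 4))  # residual error after correction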

  2. Low-error and broadband microwave frequency measurement in a silicon chip

    CERN Document Server

    Pagani, Mattia; Zhang, Yanbing; Casas-Bedoya, Alvaro; Aalto, Timo; Harjanne, Mikko; Kapulainen, Markku; Eggleton, Benjamin J; Marpaung, David

    2015-01-01

    Instantaneous frequency measurement (IFM) of microwave signals is a fundamental functionality for applications ranging from electronic warfare to biomedical technology. Photonic techniques, and nonlinear optical interactions in particular, have the potential to broaden the frequency measurement range beyond the limits of electronic IFM systems. The key lies in efficiently harnessing optical mixing in an integrated nonlinear platform, with low losses. In this work, we exploit the low loss of a 35 cm long, thick silicon waveguide, to efficiently harness Kerr nonlinearity, and demonstrate the first on-chip four-wave mixing (FWM) based IFM system. We achieve a large 40 GHz measurement bandwidth and record-low measurement error. Finally, we discuss the future prospect of integrating the whole IFM system on a silicon chip to enable the first reconfigurable, broadband IFM receiver with low-latency.

  3. Copula bivariate probit models: with an application to medical expenditures

    OpenAIRE

    Winkelmann, Rainer

    2011-01-01

    The bivariate probit model is frequently used for estimating the effect of an endogenous binary regressor (the "treatment") on a binary health outcome variable. This paper discusses simple modifications that maintain the probit assumption for the marginal distributions while introducing non-normal dependence using copulas. In an application of the copula bivariate probit model to the effect of insurance status on the absence of ambulatory health care expenditure, a model based on the Frank copula outperforms the standard bivariate probit model.

  4. Measurement errors in multifrequency bioelectrical impedance analyzers with and without impedance electrode mismatch

    International Nuclear Information System (INIS)

    The purpose of this study is to compare measurement errors in two commercially available multi-frequency bioimpedance analyzers, a Xitron 4000B and an ImpediMed SFB7, including electrode impedance mismatch. The comparison was made using resistive electrical models and in ten human volunteers. We used three different electrical models simulating three different body segments: the right side, leg and thorax. In the electrical models, we tested the effect of the capacitive coupling of the patient to ground and the skin–electrode impedance mismatch. Results showed that both sets of equipment are optimized for right-side measurements and for moderate skin–electrode impedance mismatch. In right-side measurements with electrode mismatch, the 4000B is more accurate than the SFB7. When an electrode impedance mismatch was simulated, errors increased in both bioimpedance analyzers, and the effect of the mismatch in the voltage detection leads was greater than that in the current injection leads. For segments with lower impedance, such as the leg and thorax, the SFB7 is more accurate than the 4000B and also shows less dependence on electrode mismatch. In both devices, impedance measurements were not significantly affected (p > 0.05) by the capacitive coupling to ground

  5. Compensation of errors due to incident beam drift in a 3 DOF measurement system for linear guide motion.

    Science.gov (United States)

    Hu, Pengcheng; Mao, Shuai; Tan, Jiu-Bin

    2015-11-01

    A measurement system with three degrees of freedom (3 DOF) that compensates for errors caused by incident beam drift is proposed. The system's measurement model (i.e. its mathematical foundation) is analyzed, and a measurement module (i.e. the designed orientation measurement unit) is developed and adopted to measure simultaneously straightness errors and the incident beam direction; thus, the errors due to incident beam drift can be compensated. The experimental results show that the proposed system has a deviation of 1 μm in the range of 200 mm for distance measurements, and a deviation of 1.3 μm in the range of 2 mm for straightness error measurements.

  6. Development of a simple system for simultaneously measuring 6DOF geometric motion errors of a linear guide.

    Science.gov (United States)

    Qibo, Feng; Bin, Zhang; Cunxing, Cui; Cuifang, Kuang; Yusheng, Zhai; Fenglin, You

    2013-11-01

    A simple method for simultaneously measuring the 6DOF geometric motion errors of the linear guide was proposed. The mechanisms for measuring straightness and angular errors and for enhancing their resolution are described in detail. A common-path method for measuring the laser beam drift was proposed and it was used to compensate the errors produced by the laser beam drift in the 6DOF geometric error measurements. A compact 6DOF system was built. Calibration experiments with certain standard measurement meters showed that our system has a standard deviation of 0.5 µm in a range of ± 100 µm for the straightness measurements, and standard deviations of 0.5", 0.5", and 1.0" in the range of ± 100" for pitch, yaw, and roll measurements, respectively.

  7. The Thirty Gigahertz Instrument Receiver for the QUIJOTE Experiment: Preliminary Polarization Measurements and Systematic-Error Analysis.

    Science.gov (United States)

    Casas, Francisco J; Ortiz, David; Villa, Enrique; Cano, Juan L; Cagigas, Jaime; Pérez, Ana R; Aja, Beatriz; Terán, J Vicente; de la Fuente, Luisa; Artal, Eduardo; Hoyland, Roger; Génova-Santos, Ricardo

    2015-08-05

    This paper presents preliminary polarization measurements and systematic-error characterization of the Thirty Gigahertz Instrument receiver developed for the QUIJOTE experiment. The instrument has been designed to measure the polarization of Cosmic Microwave Background radiation from the sky, obtaining the Q, U, and I Stokes parameters of the incoming signal simultaneously. Two kinds of linearly polarized input signals have been used as excitations in the polarimeter measurement tests in the laboratory; these show consistent results in terms of the Stokes parameters obtained. A measurement-based systematic-error characterization technique has been used in order to determine the possible sources of instrumental errors and to assist in the polarimeter calibration process.

  8. First measurements of error fields on W7-X using flux surface mapping

    Science.gov (United States)

    Lazerson, Samuel A.; Otte, Matthias; Bozhenkov, Sergey; Biedermann, Christoph; Pedersen, Thomas Sunn; the W7-X Team

    2016-10-01

    Error fields have been detected and quantified using the flux surface mapping diagnostic system on Wendelstein 7-X (W7-X). A low-field magnetic configuration with rotational transform ι/2π = 1/2, sensitive to error fields, was developed in order to detect their presence using the flux surface mapping diagnostic. In this configuration, a vacuum flux surface with rotational transform of n/m = 1/2 is created at the mid-radius of the vacuum flux surfaces. If no error fields are present, a vanishingly small n/m = 5/10 island chain should be present. Modeling indicates that if an n = 1 perturbing field is applied by the trim coils, a large n/m = 1/2 island chain will be opened. This island chain is used to create a perturbation large enough to be imaged by the diagnostic. Phase and amplitude scans of the applied field allow the measurement of a small ~0.04 m intrinsic island chain with a 130° phase relative to the first module of the W7-X experiment. These error fields are determined to be small and easily correctable by the trim coil system.

  9. Measuring The Influence of TAsk COMplexity on Human Error Probability: An Empirical Evaluation

    Energy Technology Data Exchange (ETDEWEB)

    Podofillini, Luca; Dang, Vinh N. [Paul Scherrer Institute, Villigen (Switzerland)

    2013-04-15

    A key input for the assessment of Human Error Probabilities (HEPs) with Human Reliability Analysis (HRA) methods is the evaluation of the factors influencing human performance (often referred to as Performance Shaping Factors, PSFs). In general, the definition of these factors and the supporting guidance are such that their evaluation involves significant subjectivity. This affects the repeatability of HRA results as well as the collection of HRA data for model construction and verification. In this context, the present paper considers the TAsk COMplexity (TACOM) measure, developed by one of the authors to quantify the complexity of procedure-guided tasks (by the operating crew of nuclear power plants in emergency situations), and evaluates its use to represent, objectively and quantitatively, task complexity issues relevant to HRA methods. In particular, TACOM scores are calculated for five Human Failure Events (HFEs) for which empirical evidence on the HEPs (albeit with large uncertainty) and influencing factors is available from the International HRA Empirical Study. The empirical evaluation has shown promising results. The TACOM score increases as the empirical HEP of the selected HFEs increases. Except for one case, TACOM scores are well distinguished when related to different difficulty categories (e.g., 'easy' vs. 'somewhat difficult'), while values corresponding to tasks within the same category are very close. Despite some important limitations related to the small number of HFEs investigated and the large uncertainty in their HEPs, this paper presents one of few attempts to empirically study the effect of a performance shaping factor on the human error probability. This type of study is important to enhance the empirical basis of HRA methods, to make sure that 1) the definitions of the PSFs cover the influences important for HRA (i.e., influencing the error probability), and 2) the quantitative relationships among PSFs and error probabilities…

  10. Analysis of influence on back-EMF based sensorless control of PMSM due to parameter variations and measurement errors

    DEFF Research Database (Denmark)

    Wang, Z.; Lu, K.; Ye, Y.;

    2011-01-01

    To achieve better performance of sensorless control of PMSM, a precise and stable estimation of rotor position and speed is required. Several parameter uncertainties and variable measurement errors may lead to estimation error, such as resistance and inductance variations due to temperature and f… This paper gives mathematical analysis and experimental results to support the principles and to quantify the effects of each. It may serve as guidance for designers to minimize the estimation error and make proper on-line parameter estimations…

  11. Thermocouple error correction for measuring the flame temperature with determination of emissivity and heat transfer coefficient.

    Science.gov (United States)

    Hindasageri, V; Vedula, R P; Prabhu, S V

    2013-02-01

    Temperature measurement by thermocouples is prone to errors due to conduction and radiation losses and therefore has to be corrected for precise measurement. The temperature-dependent emissivity of the thermocouple wires is measured by the use of a thermal infrared camera. The measured emissivities are found to be 20%-40% lower than the theoretical values predicted from the theory of electromagnetism. A transient technique is employed for finding the heat transfer coefficients for the lead wire and the bead of the thermocouple. This method does not require data on the thermal properties and velocity of the burnt gases. The heat transfer coefficients obtained from the present method have an average deviation of 20% from the available heat transfer correlations in the literature for non-reacting convective flow over cylinders and spheres. The parametric study of thermocouple error using the numerical code confirmed the existence of a minimum wire length beyond which the conduction loss is constant and minimal. The temperature of premixed methane-air flames stabilised on a 16 mm diameter tube burner is measured by three B-type thermocouples of wire diameters 0.15 mm, 0.30 mm, and 0.60 mm. The measurements are made at three distances from the burner tip (thermocouple tip to burner tip/burner diameter = 2, 4, and 6) at an equivalence ratio of 1 for the tube Reynolds number varying from 1000 to 2200. These measured flame temperatures are corrected by the present numerical procedure, the multi-element method, and the extrapolation method. The flame temperatures estimated by the two-element method and extrapolation method deviate from numerical results within 2.5% and 4%, respectively.
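    The radiation part of such a correction follows from a steady-state energy balance on the bead, in which convective heat gain balances radiative loss. A minimal Python sketch of that textbook relation; conduction along the wires, which the paper treats numerically, is neglected here, and the coefficient values are illustrative:

      # Radiation correction from a bead energy balance:
      # h*(T_gas - T_tc) = eps*sigma*(T_tc**4 - T_surr**4).
      # Conduction along the wires is neglected; values are illustrative.
      SIGMA = 5.670e-8   # Stefan-Boltzmann constant, W m^-2 K^-4

      def gas_temperature(T_tc, T_surr, emissivity, h):
          """All temperatures in kelvin; h in W m^-2 K^-1."""
          return T_tc + emissivity * SIGMA * (T_tc**4 - T_surr**4) / h

      # e.g. a bead reading 1700 K in 300 K surroundings
      print(gas_temperature(T_tc=1700.0, T_surr=300.0, emissivity=0.3, h=450.0))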

  13. Reliability, technical error of measurements and validity of length and weight measurements for children under two years old in Malaysia.

    Science.gov (United States)

    Jamaiyah, H; Geeta, A; Safiza, M N; Khor, G L; Wong, N F; Kee, C C; Rahmah, R; Ahmad, A Z; Suzana, S; Chen, W S; Rajaah, M; Adam, B

    2010-06-01

    The National Health and Morbidity Survey III 2006 wanted to perform anthropometric measurements (length and weight) for children in their survey. However, there is limited literature on the reliability, technical error of measurement (TEM) and validity of these two measurements. This study assessed the above properties of length (LT) and weight (WT) measurements in 130 children aged below two years, from the Hospital Universiti Kebangsaan Malaysia (HUKM) paediatric outpatient clinics, during the period of December 2005 to January 2006. Two trained nurses measured WT using a Tanita digital infant scale model 1583, Japan (0.01 kg) and a Seca beam scale, Germany (0.01 kg), and LT using a Seca measuring mat, Germany (0.1 cm) and a Sensormedics stadiometer model 2130 (0.1 cm). Findings showed high inter- and intra-examiner reliability using 'change in the mean' and the 'intraclass correlation' (ICC) for WT and LT. However, LT was found to be less reliable using the 'Bland and Altman plot'. This was also true using relative TEMs, where the TEM value of LT was slightly more than the acceptable limit. The test instruments were highly valid for WT using 'change in the mean' and 'ICC' but were less valid for LT measurement. In spite of this, we concluded that WT and LT measurements in children below two years old using the test instruments were reliable and valid for a community survey such as NHMS III within the limits of their error. We recommend that LT measurements be given special attention to improve their reliability and validity. PMID:21488474

  14. Job Changes and Wage Changes: Estimation with Measurement Error in a Binary Variable*

    OpenAIRE

    Bergin, Adele

    2013-01-01

    Many studies of labour market dynamics use survey data, so it is valuable to know about the quality of the data collected. This paper investigates job transitions in Ireland over the period 1995 to 2001, using the Living in Ireland Survey, the Irish component of the European Community Household Panel. In applied work on job mobility, researchers often have to rely on self-reported accounts of tenure to determine whether or not a job change has taken place. There may be measurement error in the...

  15. Research on Proximity Magnetic Field Influence in Measuring Error of Active Electronic Current Transformers

    Directory of Open Access Journals (Sweden)

    Wu Weijiang

    2016-01-01

    The principles of the active electronic current transformer (ECT) are introduced, and the mechanism by which a proximity magnetic field influences measurement error is analyzed from the perspective of the sensor section of the ECT. The impact on active ECTs of a three-phase proximity magnetic field at fixed and at varying distance is simulated and analyzed. Theory and simulation indicate that active ECTs are sensitive to a proximity magnetic field under certain conditions. Based on the simulated analysis, suggestions are made on product structural design for manufacturers and on the placement of transformers at substation sites for power supply administrators.

  16. The Inversion of NMR Log Data Sets with Different Measurement Errors

    Science.gov (United States)

    Dunn, Keh-Jim; LaTorraca, Gerald A.

    1999-09-01

    We present a composite-data processing method which simultaneously processes two or more data sets with different measurement errors. We examine the role of the noise level of the data in the singular value decomposition inversion process, the criteria for a proper cutoff, and its effect on the uncertainty of the solution. Examples of processed logs using the composite-data processing method are presented and discussed. The possible usefulness of the apparent T1/T2 ratio extracted from the logs is illustrated.
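
    A minimal sketch of the underlying idea, assuming a simple multi-exponential relaxation kernel: the SVD of the kernel is truncated at a cutoff tied to the noise level, so that singular components dominated by noise are discarded. The kernel, noise level, and cutoff rule here are illustrative assumptions, not the authors' implementation.

        import numpy as np

        def truncated_svd_solve(K, d, sigma):
            """Solve K @ a = d by SVD, keeping only singular components whose
            projected data exceed the noise (heuristic cutoff -- an assumption)."""
            U, s, Vt = np.linalg.svd(K, full_matrices=False)
            keep = s > sigma * np.sqrt(d.size)
            return Vt[keep].T @ ((U[:, keep].T @ d) / s[keep]), int(keep.sum())

        # Hypothetical CPMG-style kernel: echo times t, relaxation times T2.
        t = np.linspace(0.001, 1.0, 200)
        T2 = np.logspace(-3, 0, 30)
        K = np.exp(-t[:, None] / T2[None, :])
        a_true = np.exp(-0.5 * ((np.log10(T2) + 1.5) / 0.3)**2)
        sigma = 0.01
        d = K @ a_true + np.random.default_rng(0).normal(0, sigma, t.size)
        a_est, rank = truncated_svd_solve(K, d, sigma)
        print(rank)  # noisier data -> lower cutoff rank -> smoother solution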

  17. Measurements of Gun Tube Motion and Muzzle Pointing Error of Main Battle Tanks

    OpenAIRE

    McCall, Peter L.

    2001-01-01

    In 1990, the US Army Aberdeen Test Center (ATC) began testing a prototype cannon mounted in a non-armored turret fitted to an M1A1 Abrams tank chassis. The cannon design incorporated a longer gun tube as a means to increase projectile velocity. A significant increase in projectile impact dispersion was measured early in the test program. Through investigative efforts, the cause of the error was linked to the increased dynamic bending or flexure of the longer tube observed while the ...

  18. A Discriminant Function Approach to Adjust for Processing and Measurement Error When a Biomarker is Assayed in Pooled Samples.

    Science.gov (United States)

    Lyles, Robert H; Van Domelen, Dane; Mitchell, Emily M; Schisterman, Enrique F

    2015-11-01

    Pooling biological specimens prior to performing expensive laboratory assays has been shown to be a cost-effective approach for estimating parameters of interest. In addition to requiring specialized statistical techniques, however, the pooling of samples can introduce assay errors due to processing, possibly in addition to measurement error that may be present when the assay is applied to individual samples. Failure to account for these sources of error can result in biased parameter estimates and ultimately faulty inference. Prior research addressing biomarker mean and variance estimation advocates hybrid designs consisting of individual as well as pooled samples to account for measurement and processing (or pooling) error. We consider adapting this approach to the problem of estimating a covariate-adjusted odds ratio (OR) relating a binary outcome to a continuous exposure or biomarker level assessed in pools. In particular, we explore the applicability of a discriminant function-based analysis that assumes normal residual, processing, and measurement errors. A potential advantage of this method is that maximum likelihood estimation of the desired adjusted log OR is straightforward and computationally convenient. Moreover, in the absence of measurement and processing error, the method yields an efficient unbiased estimator for the parameter of interest assuming normal residual errors. We illustrate the approach using real data from an ancillary study of the Collaborative Perinatal Project, and we use simulations to demonstrate the ability of the proposed estimators to alleviate bias due to measurement and processing error. PMID:26593934

  19. Semiparametric Bayesian Analysis of Nutritional Epidemiology Data in the Presence of Measurement Error

    KAUST Repository

    Sinha, Samiran

    2009-08-10

    We propose a semiparametric Bayesian method for handling measurement error in nutritional epidemiological data. Our goal is to estimate nonparametrically the form of association between a disease and exposure variable while the true values of the exposure are never observed. Motivated by nutritional epidemiological data, we consider the setting where a surrogate covariate is recorded in the primary data, and a calibration data set contains information on the surrogate variable and repeated measurements of an unbiased instrumental variable of the true exposure. We develop a flexible Bayesian method where not only is the relationship between the disease and exposure variable treated semiparametrically, but also the relationship between the surrogate and the true exposure is modeled semiparametrically. The two nonparametric functions are modeled simultaneously via B-splines. In addition, we model the distribution of the exposure variable as a Dirichlet process mixture of normal distributions, thus making its modeling essentially nonparametric and placing this work into the context of functional measurement error modeling. We apply our method to the NIH-AARP Diet and Health Study and examine its performance in a simulation study.

  20. Accounting for baseline differences and measurement error in the analysis of change over time.

    Science.gov (United States)

    Braun, Julia; Held, Leonhard; Ledergerber, Bruno

    2014-01-15

    If change over time is compared in several groups, it is important to take baseline values into account so that the comparison is carried out under the same preconditions. As the observed baseline measurements are distorted by measurement error, it may not be sufficient to include them as a covariate. By fitting a longitudinal mixed-effects model to all data including the baseline observations and subsequently calculating the expected change conditional on the underlying baseline value, a solution to this problem has been provided recently, so that groups with the same baseline characteristics can be compared. In this article, we present an extended approach where a broader set of models can be used. Specifically, it is possible to include any desired set of interactions between the time variable and the other covariates, and time-dependent covariates can also be included. Additionally, we extend the method to adjust for baseline measurement error of other time-varying covariates. We apply the methodology to data from the Swiss HIV Cohort Study to address the question of whether a joint infection with HIV-1 and hepatitis C virus leads to a slower increase in CD4 lymphocyte counts over time after the start of antiretroviral therapy. PMID:23900718

  1. Overview of Measuring Effect Sizes: The Effect of Measurement Error. Brief 2

    Science.gov (United States)

    Boyd, Don; Grossman, Pam; Lankford, Hamp; Loeb, Susanna; Wyckoff, Jim

    2008-01-01

    The use of value-added models in education research has expanded rapidly. These models allow researchers to explore how a wide variety of policies and measured school inputs affect the academic performance of students. An important question is whether such effects are sufficiently large to achieve various policy goals. Judging whether a change in…

  2. Measuring Effect Sizes: The Effect of Measurement Error. Working Paper 19

    Science.gov (United States)

    Boyd, Donald; Grossman, Pamela; Lankford, Hamilton; Loeb, Susanna; Wyckoff, James

    2008-01-01

    Value-added models in education research allow researchers to explore how a wide variety of policies and measured school inputs affect the academic performance of students. Researchers typically quantify the impacts of such interventions in terms of "effect sizes", i.e., the estimated effect of a one standard deviation change in the variable…

  3. Sieve estimation of constant and time-varying coefficients in nonlinear ordinary differential equation models by considering both numerical error and measurement error

    CERN Document Server

    Xue, Hongqi; Wu, Hulin; 10.1214/09-AOS784

    2010-01-01

    This article considers estimation of constant and time-varying coefficients in nonlinear ordinary differential equation (ODE) models where analytic closed-form solutions are not available. The numerical solution-based nonlinear least squares (NLS) estimator is investigated in this study. A numerical algorithm such as the Runge-Kutta method is used to approximate the ODE solution. The asymptotic properties are established for the proposed estimators considering both numerical error and measurement error. The B-spline is used to approximate the time-varying coefficients, and the corresponding asymptotic theories in this case are investigated under the framework of the sieve approach. Our results show that if the maximum step size of the $p$-order numerical algorithm goes to zero at a rate faster than $n^{-1/(p\wedge4)}$, the numerical error is negligible compared to the measurement error. This result provides theoretical guidance in selection of the step size for numerical evaluations of ODEs. Moreover, we h...
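
    A much-simplified sketch of the numerical-solution-based NLS idea for a constant coefficient, using a hypothetical one-parameter ODE dy/dt = -theta*y: the ODE is solved numerically with a Runge-Kutta scheme inside the least-squares objective, and tight solver tolerances keep the numerical error negligible relative to the measurement error, in the spirit of the paper's step-size result.

        import numpy as np
        from scipy.integrate import solve_ivp
        from scipy.optimize import least_squares

        def residuals(params, t_obs, y_obs):
            theta, y0 = params
            # Runge-Kutta (RK45) numerical solution of dy/dt = -theta * y.
            sol = solve_ivp(lambda t, y: -theta * y, (t_obs[0], t_obs[-1]), [y0],
                            t_eval=t_obs, rtol=1e-10, atol=1e-12)
            return sol.y[0] - y_obs

        rng = np.random.default_rng(1)
        t_obs = np.linspace(0.0, 5.0, 25)
        y_obs = 2.0 * np.exp(-0.7 * t_obs) + rng.normal(0, 0.05, t_obs.size)
        fit = least_squares(residuals, x0=[0.3, 1.0], args=(t_obs, y_obs))
        print(fit.x)  # estimates of (theta, y0); true values are (0.7, 2.0)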

  4. Reduction of truncation errors in planar, cylindrical, and partial spherical near-field antenna measurements

    DEFF Research Database (Denmark)

    Cano-Fácila, Francisco José; Pivnenko, Sergey; Sierra-Castaner, Manuel

    2012-01-01

    A method to reduce truncation errors in near-field antenna measurements is presented. The method is based on the Gerchberg-Papoulis iterative algorithm used to extrapolate band-limited functions, and it is able to extend the valid region of the calculated far-field pattern up to the whole forward hemisphere. The extension of the valid region is achieved by the iterative application of a transformation between two different domains. After each transformation, a filtering process based on known information in each domain is applied. The first domain is the spectral domain in which the plane wave... Planar, cylindrical, and partial spherical near-field measurements are considered. Several simulation and measurement examples are presented to verify the effectiveness of the method.
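
    The Gerchberg-Papoulis iteration the method builds on can be sketched in a few lines: alternately enforce the measured data in one domain and the band (or validity) limit in the other. The 1-D toy example below illustrates the general algorithm only, not the authors' near-field-specific transformation and filters.

        import numpy as np

        def gerchberg_papoulis(measured, known_mask, band_mask, n_iter=500):
            """Extrapolate a band-limited signal beyond the measured region by
            iterating between the signal and spectral domains."""
            x = np.where(known_mask, measured, 0.0)
            for _ in range(n_iter):
                X = np.fft.fft(x)
                X[~band_mask] = 0.0                   # filter with known band limit
                x = np.fft.ifft(X).real
                x[known_mask] = measured[known_mask]  # restore measured samples
            return x

        # Hypothetical band-limited signal observed on only 60% of the samples.
        n = 256
        band_mask = np.abs(np.fft.fftfreq(n)) < 0.05
        rng = np.random.default_rng(0)
        S = np.zeros(n, complex)
        S[band_mask] = rng.normal(size=band_mask.sum()) + 1j * rng.normal(size=band_mask.sum())
        truth = np.fft.ifft(S).real
        known = np.zeros(n, bool)
        known[:154] = True
        recovered = gerchberg_papoulis(truth, known, band_mask)
        print(np.max(np.abs(recovered - truth)))  # small after enough iterations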

  5. Measuring of block error rates in high-speed digital networks

    OpenAIRE

    Petr Ivaniga; Ludovit Mikus

    2006-01-01

    Error characteristics are a decisive factor in defining the transmission quality of digital networks. The ITU-T G.826 and G.828 recommendations identify error parameters for high-speed digital networks in relation to recommendation G.821. The paper describes the relations between the individual error parameters and the error rate, assuming that these are time-invariant.

  6. Measuring of Block Error Rates in High-Speed Digital Networks

    Directory of Open Access Journals (Sweden)

    Petr Ivaniga

    2006-01-01

    Error characteristics are a decisive factor in defining the transmission quality of digital networks. The ITU-T G.826 and G.828 recommendations identify error parameters for high-speed digital networks in relation to recommendation G.821. The paper describes the relations between the individual error parameters and the error rate, assuming that these are time-invariant.

  7. A Model of the Dynamic Error as a Measurement Result of Instruments Defining the Parameters of Moving Objects

    OpenAIRE

    Dichev D.; Koev H.; Bakalova T.; Louda P.

    2014-01-01

    The present paper considers a new model for the formation of the inertial component of dynamic error. It is very effective in the analysis and synthesis of measuring instruments that are positioned on moving objects and measure their movement parameters. The block diagram developed within this paper is used as a basis for defining the mathematical model. The block diagram is based on a set-theoretic description of the measuring system, its input and output quantities, and the process of dynamic error ...

  8. Research Into the Collimation and Horizontal Axis Errors Influence on the Z+F Laser Scanner Accuracy of Verticality Measurement

    Science.gov (United States)

    Sawicki, J.; Kowalczyk, M.

    2016-06-01

    The aim of this study was to determine values of the collimation and horizontal axis errors of the ZF 5006h laser scanner owned by the Department of Geodesy and Cartography, Warsaw University of Technology, and then to determine the effect of those errors on the results of measurements. An experiment was performed involving measurement of a test field established in the Main Hall of the Main Building of the Warsaw University of Technology, during which the values of the instrumental errors of interest were determined. A universal computer program was then developed that automates the proposed algorithm and is capable of applying corrections to measured target coordinates, or even entire point clouds, from individual stations.

  9. A new analysis of fine-structure constant measurements and modelling errors from quasar absorption lines

    Science.gov (United States)

    Wilczynska, Michael R.; Webb, John K.; King, Julian A.; Murphy, Michael T.; Bainbridge, Matthew B.; Flambaum, Victor V.

    2015-12-01

    We present an analysis of 23 absorption systems along the lines of sight towards 18 quasars in the redshift range of 0.4 ≤ z_abs ≤ 2.3 observed on the Very Large Telescope (VLT) using the Ultraviolet and Visual Echelle Spectrograph (UVES). Considering both statistical and systematic error contributions we find a robust estimate of the weighted mean deviation of the fine-structure constant from its current, laboratory value of Δα/α = (0.22 ± 0.23) × 10^-5, consistent with the dipole variation reported in Webb et al. and King et al. This paper also examines modelling methodologies and systematic effects. In particular, we focus on the consequences of fitting quasar absorption systems with too few absorbing components and of selectively fitting only the stronger components in an absorption complex. We show that using insufficient continuum regions around an absorption complex causes a significant increase in the scatter of a sample of Δα/α measurements, thus unnecessarily reducing the overall precision. We further show that fitting absorption systems with too few velocity components also results in a significant increase in the scatter of Δα/α measurements, and in addition causes Δα/α error estimates to be systematically underestimated. These results thus identify some of the potential pitfalls in analysis techniques and provide a guide for future analyses.

  10. Interpolation by Bivariate Polynomials Based on Multivariate F-truncated Powers

    Institute of Scientific and Technical Information of China (English)

    Yuan Xue-mei

    2014-01-01

    The solvability of the interpolation by bivariate polynomials based on multivariate F-truncated powers is considered in this short note. It unifies, in some sense, the point-wise Lagrange interpolation by bivariate polynomials and the interpolation by bivariate polynomials based on linear integrals over segments.

  11. Accuracy and uncertainty in radiochemical measurements. Learning from errors in nuclear analytical chemistry

    International Nuclear Information System (INIS)

    A characteristic that sets radioactivity measurements apart from most spectrometries is that the precision of a single determination can be estimated from Poisson statistics. This easily calculated counting uncertainty permits the detection of other sources of uncertainty by comparing observed with a priori precision. A good way to test the many underlying assumptions in radiochemical measurements is to strive for high accuracy. For example, a measurement by instrumental neutron activation analysis (INAA) of gold film thickness in our laboratory revealed the need for pulse pileup correction even at modest dead times. Recently, the International Organization for Standardization (ISO) and other international bodies have formalized the quantitative determination and statement of uncertainty so that the weaknesses of each measurement are exposed for improvement. In the INAA certification measurement of ion-implanted arsenic in silicon (Standard Reference Material 2134), we recently achieved an expanded (95% confidence) relative uncertainty of 0.38% for 90 ng of arsenic per sample. A complete quantitative error analysis was performed. This measurement meets the CCQM definition of a primary ratio method. (author)
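
    The comparison of observed with a priori precision mentioned above is simple to carry out: for a gross count N, Poisson statistics predict a relative standard uncertainty of 1/sqrt(N), which can be checked against the scatter of replicate counts. The replicate values below are hypothetical.

        import numpy as np

        # Hypothetical replicate counts of the same source under fixed geometry.
        replicates = np.array([10123, 10310, 10045, 10498, 10262], dtype=float)

        a_priori = 1.0 / np.sqrt(replicates.mean())            # Poisson prediction
        observed = replicates.std(ddof=1) / replicates.mean()  # observed scatter
        print(a_priori, observed)
        # observed clearly exceeding a_priori flags extra uncertainty sources,
        # e.g. the pulse pile-up effect noted in the abstract.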

  12. Quantifying the sampling error in tree census measurements by volunteers and its effect on carbon stock estimates.

    Science.gov (United States)

    Butt, Nathalie; Slade, Eleanor; Thompson, Jill; Malhi, Yadvinder; Riutta, Terhi

    2013-06-01

    A typical way to quantify aboveground carbon in forests is to measure tree diameters and use species-specific allometric equations to estimate biomass and carbon stocks. Using "citizen scientists" to collect data that would otherwise be time-consuming and labor-intensive to gather can play a valuable role in ecological research. However, data validation, such as establishing the sampling error in volunteer measurements, is a crucial, but little studied, part of utilizing citizen science data. The aims of this study were to (1) evaluate the quality of tree diameter and height measurements carried out by volunteers compared to expert scientists and (2) estimate how sensitive carbon stock estimates are to these measurement sampling errors. Using all diameter data measured with a diameter tape, the volunteer mean sampling error (difference between repeated measurements of the same stem) was 9.9 mm, and the expert sampling error was 1.8 mm. Excluding those sampling errors > 1 cm, the mean sampling errors were 2.3 mm (volunteers) and 1.4 mm (experts) (this excluded 14% [volunteer] and 3% [expert] of the data). The sampling error in diameter measurements had a small effect on the biomass estimates of the plots: a volunteer (expert) diameter sampling error of 2.3 mm (1.4 mm) translated into a 1.7% (0.9%) change in the biomass estimates calculated from species-specific allometric equations based upon diameter. Height sampling error had a dependent relationship with tree height. Including height measurements in biomass calculations compounded the sampling error markedly; the impact of volunteer sampling error on biomass estimates was +/- 15%, and the expert range was +/- 9%. Using dendrometer bands, used to measure growth rates, we calculated that the volunteer (vs. expert) sampling error was 0.6 mm (vs. 0.3 mm), which is equivalent to a difference in carbon storage of +/- 0.011 kg C/yr (vs. +/- 0.002 kg C/yr) per stem. Using a citizen science model for monitoring carbon stocks not only has ...
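
    The diameter-to-biomass error translation reported here follows from first-order propagation through a power-law allometry B = a * D**b: the prefactor cancels, so the relative biomass error is roughly b times the relative diameter error. The exponent and measurements below are hypothetical, not the study's species-specific equations.

        # First-order propagation of diameter sampling error through B = a * D**b.
        b = 2.5                      # assumed allometric exponent
        D_mm, err_mm = 250.0, 2.3    # stem diameter and volunteer sampling error
        print(f"{100 * b * err_mm / D_mm:.1f}% relative biomass error")  # -> 2.3%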

  13. A multi-year methane inversion using SCIAMACHY, accounting for systematic errors using TCCON measurements

    Directory of Open Access Journals (Sweden)

    S. Houweling

    2013-10-01

    This study investigates the use of total column CH4 (XCH4) retrievals from the SCIAMACHY satellite instrument for quantifying large-scale emissions of methane. A unique data set from SCIAMACHY is available spanning almost a decade of measurements, covering a period when the global CH4 growth rate showed a marked transition from stable to increasing mixing ratios. The TM5 4DVAR inverse modelling system has been used to infer CH4 emissions from a combination of satellite and surface measurements for the period 2003–2010. In contrast to earlier inverse modelling studies, the SCIAMACHY retrievals have been corrected for systematic errors using the TCCON network of ground-based Fourier transform spectrometers. The aim is to further investigate the role of bias correction of satellite data in inversions. Methods for bias correction are discussed, and the sensitivity of the optimized emissions to alternative bias correction functions is quantified. It is found that the use of SCIAMACHY retrievals in TM5 4DVAR increases the estimated inter-annual variability of large-scale fluxes by 22% compared with the use of only surface observations. The difference in global methane emissions between the two-year periods before and after July 2006 is estimated at 27–35 Tg yr−1. The use of SCIAMACHY retrievals causes a shift in the emissions from the extra-tropics to the tropics of 50 ± 25 Tg yr−1. The large uncertainty in this value arises from the uncertainty in the bias correction functions. Using measurements from the HIPPO and BARCA aircraft campaigns, we show that systematic errors are a main factor limiting the performance of the inversions. To further constrain tropical emissions of methane using current and future satellite missions, extended validation capabilities in the tropics are of critical importance.

  14. Detection of microcalcifications in mammograms using error of prediction and statistical measures

    Science.gov (United States)

    Acha, Begoña; Serrano, Carmen; Rangayyan, Rangaraj M.; Leo Desautels, J. E.

    2009-01-01

    A two-stage method for detecting microcalcifications in mammograms is presented. In the first stage, candidates for microcalcifications are determined. For this purpose, a 2-D linear prediction error filter is applied, and for those pixels where the prediction error is larger than a threshold, a statistical measure is calculated to determine whether they are candidates for microcalcifications or not. In the second stage, a feature vector is derived for each candidate, and after a classification step using a support vector machine, the final detection is performed. The algorithm is tested with 40 mammographic images with 50-μm resolution, from Screen Test: The Alberta Program for the Early Detection of Breast Cancer, and the results are evaluated using a free-response receiver operating characteristics curve. Two different analyses are performed: an individual microcalcification detection analysis and a cluster analysis. In the analysis of individual microcalcifications, detection sensitivity values of 0.75 and 0.81 are obtained at 2.6 and 6.2 false positives per image, on average, respectively. The best performance is characterized by a sensitivity of 0.89, a specificity of 0.99, and a positive predictive value of 0.79. In cluster analysis, a sensitivity value of 0.97 is obtained at 1.77 false positives per image, and a value of 0.90 is achieved at 0.94 false positives per image.
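
    A much-simplified sketch of the first-stage idea, assuming one global set of prediction coefficients fitted by least squares over causal neighbours (the published method also adds the statistical measure and the SVM stage, which are omitted here):

        import numpy as np

        def prediction_error_candidates(img, k=3.5):
            """Flag pixels whose 2-D linear prediction error is large."""
            rows, cols = img.shape
            offsets = [(0, 1), (1, 1), (1, 0), (1, -1)]  # W, NW, N, NE neighbours
            target = img[1:-1, 1:-1].ravel()
            preds = np.column_stack([
                img[1 - dr:rows - 1 - dr, 1 - dc:cols - 1 - dc].ravel()
                for dr, dc in offsets])
            coef, *_ = np.linalg.lstsq(preds, target, rcond=None)
            error = target - preds @ coef
            mask = np.abs(error) > k * error.std()   # threshold on prediction error
            return mask.reshape(rows - 2, cols - 2)

        # Hypothetical image: smooth noisy background with two bright spots.
        rng = np.random.default_rng(0)
        img = rng.normal(100.0, 2.0, (64, 64))
        img[20, 20] += 40.0
        img[40, 41] += 40.0
        print(prediction_error_candidates(img).sum())  # candidate pixel count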

  15. Measuring errors and violations on the road: a bifactor modeling approach to the Driver Behavior Questionnaire.

    Science.gov (United States)

    Rowe, Richard; Roman, Gabriela D; McKenna, Frank P; Barker, Edward; Poulter, Damian

    2015-01-01

    The Driver Behavior Questionnaire (DBQ) is a self-report measure of driving behavior that has been widely used over more than 20 years. Despite this wealth of evidence, a number of questions remain, including understanding the correlation between its violations and errors sub-components, identifying how these components are related to crash involvement, and testing whether a DBQ based on a reduced number of items can be effective. We address these issues using a bifactor modeling approach to data drawn from the UK Cohort II longitudinal study of novice drivers. This dataset provides observations on 12,012 drivers with DBQ data collected at 0.5, 1, 2, and 3 years after passing their test. A bifactor model, including a general factor onto which all items loaded, and specific factors for ordinary violations, aggressive violations, slips, and errors, fitted the data better than correlated-factors and second-order factor structures. A model based on only 12 items replicated this structure and produced factor scores that were highly correlated with those from the full model. The ordinary violations factor and the general factor were significant independent predictors of crash involvement at 6 months after starting independent driving. The discussion considers the role of the general and specific factors in crash involvement. PMID:25463951

  16. Suspended sediment fluxes in a tidal wetland: Measurement, controlling factors, and error analysis

    Science.gov (United States)

    Ganju, N.K.; Schoellhamer, D.H.; Bergamaschi, B.A.

    2005-01-01

    Suspended sediment fluxes to and from tidal wetlands are of increasing concern because of habitat restoration efforts, wetland sustainability as sea level rises, and potential contaminant accumulation. We measured water and sediment fluxes through two channels on Browns Island, at the landward end of San Francisco Bay, United States, to determine the factors that control sediment fluxes on and off the island. In situ instrumentation was deployed between October 10 and November 13, 2003. Acoustic Doppler current profilers and the index velocity method were employed to calculate water fluxes. Suspended sediment concentrations (SSC) were determined with optical sensors and cross-sectional water sampling. All procedures were analyzed for their contribution to total error in the flux measurement. The inability to close the water balance and determination of constituent concentration were identified as the main sources of error; total error was 27% for net sediment flux. The water budget for the island was computed with an unaccounted input of 0.20 m³ s⁻¹ (22% of mean inflow), after considering channel flow, change in water storage, evapotranspiration, and precipitation. The net imbalance may be a combination of groundwater seepage, overland flow, and flow through minor channels. Change in island water storage, caused by local variations in water surface elevation, dominated the tidally averaged water flux. These variations were mainly caused by wind and barometric pressure change, which alter regional water levels throughout the Sacramento-San Joaquin River Delta. Peak instantaneous ebb flow was 35% greater than peak flood flow, indicating an ebb-dominant system, though dominance varied with the spring-neap cycle. SSC were controlled by wind-wave resuspension adjacent to the island and local tidal currents that mobilized sediment from the channel bed. During neap tides sediment was imported onto the island, but during spring tides sediment was exported because the main ...

  17. Estimating Usual Dietary Intake Distributions: Adjusting for Measurement Error and Nonnormality in 24-Hour Food Intake Data

    OpenAIRE

    Nusser, Sarah M; Fuller, Wayne A.; Guenther, Patricia M.

    1995-01-01

    The authors have developed a method for estimating the distribution of an unobservable random variable from data that are subject to considerable measurement error and that arise from a mixture of two populations, one having a single-valued distribution and the other having a continuous unimodal distribution. The method requires that at least two positive intakes be recorded for a subset of the subjects in order to estimate the variance components for the measurement error model. Published in...

  18. Effects of Measurement Errors on Population Estimates from Samples Generated from a Stratified Population through Systematic Sampling Technique

    OpenAIRE

    Abel OUKO; Cheruiyot W. KIPKOECH; Emily KIRIMI

    2014-01-01

    In various surveys, the presence of measurement errors has led to misleading results in the estimation of various population parameters. This study indicates the effects of measurement errors on estimates of the population total and population variance when samples are drawn using the systematic sampling technique from a stratified population. A finite population was generated through simulation. The population was then stratified into four strata, followed by generation of ten samples in each of them using s...

  19. Analysis of the influence of measuring errors in experimental determinations of the mass and heat transfer coefficients

    International Nuclear Information System (INIS)

    The paper analyses the influence of measurement errors in the operating parameters (flows, temperatures, pressures, and concentrations) on the experimental determination of the mass and heat transfer coefficients. Data obtained on experimental plants for hydrogen isotope separation by hydrogen distillation and water distillation, together with a calculation model for error propagation, are presented. The results are tabulated. The variation intervals of the transfer coefficients are marked graphically. The study of the measurement errors is an extremely important intermediate stage in the experimental determination of criterion relation coefficients, i.e., relations specific to B7 structured packing. (authors)

  20. Measurements on pointing error and field of view of Cimel-318 Sun photometers in the scope of AERONET

    Directory of Open Access Journals (Sweden)

    B. Torres

    2013-08-01

    Sensitivity studies indicate that among the diverse error sources of ground-based sky radiometer observations, the pointing error plays an important role in the correct retrieval of aerosol properties. Accurate pointing is especially critical for the characterization of desert dust aerosol. The present work relies on the analysis of two new measurement procedures (cross and matrix) specifically designed for the evaluation of the pointing error in the standard instrument of the Aerosol Robotic Network (AERONET), the Cimel CE-318 Sun photometer. The first part of the analysis contains a preliminary study whose results establish the need for a Sun-movement correction for an accurate evaluation of the pointing error from both new measurements. Once this correction is applied, both measurements show equivalent results, with differences under 0.01° in the pointing error estimations. The second part of the analysis includes the incorporation of the cross procedure into the AERONET routine measurement protocol in order to monitor the pointing error in field instruments. The pointing error was evaluated using the data collected over more than a year in 7 Sun photometers belonging to AERONET sites. The registered pointing error values were generally smaller than 0.1°, though in some instruments values up to 0.3° have been observed. Moreover, the pointing error analysis shows that this measurement can be useful to detect mechanical problems in the robots or dirtiness in the 4-quadrant detector used to track the Sun. Specifically, these mechanical faults can be detected thanks to the stable behavior of the values over time and vs. the solar zenith angle. Finally, the matrix procedure can be used to derive the value of the solid view angle of the instruments. The methodology has been implemented and applied for the characterization of 5 Sun photometers. To validate the method, a comparison with solid angles obtained from the vicarious calibration method was

  1. Simultaneous estimation of parameters in the bivariate Emax model.

    Science.gov (United States)

    Magnusdottir, Bergrun T; Nyquist, Hans

    2015-12-10

    In this paper, we explore inference in multi-response, nonlinear models. By multi-response, we mean models with m > 1 response variables and accordingly m relations. Each parameter/explanatory variable may appear in one or more of the relations. We study a system estimation approach for simultaneous computation and inference of the model and (co)variance parameters. For illustration, we fit a bivariate Emax model to diabetes dose-response data. Further, the bivariate Emax model is used in a simulation study that compares the system estimation approach to equation-by-equation estimation. We conclude that overall, the system estimation approach performs better for the bivariate Emax model when there are dependencies among relations. The stronger the dependencies, the more we gain in precision by using system estimation rather than equation-by-equation estimation. PMID:26190048
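
    For reference, a single Emax relation has the form E = E0 + Emax*C/(EC50 + C). The sketch below fits one such relation equation-by-equation to hypothetical dose-response data; the paper's point is that, when the m relations are dependent, jointly estimating them with their error covariance (system estimation) is more precise than fitting each relation separately like this.

        import numpy as np
        from scipy.optimize import curve_fit

        def emax_model(dose, e0, emax, ec50):
            """Standard Emax dose-response relation."""
            return e0 + emax * dose / (ec50 + dose)

        # Hypothetical dose-response observations for one of the m relations.
        dose = np.array([0.0, 5.0, 10.0, 25.0, 50.0, 100.0, 150.0])
        resp = np.array([1.2, 2.8, 3.9, 5.6, 6.8, 7.4, 7.7])
        est, cov = curve_fit(emax_model, dose, resp, p0=[1.0, 7.0, 20.0])
        print(est)                    # E0, Emax, EC50 estimates
        print(np.sqrt(np.diag(cov)))  # their approximate standard errors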

  2. Simultaneous estimation of parameters in the bivariate Emax model.

    Science.gov (United States)

    Magnusdottir, Bergrun T; Nyquist, Hans

    2015-12-10

    In this paper, we explore inference in multi-response, nonlinear models. By multi-response, we mean models with m > 1 response variables and accordingly m relations. Each parameter/explanatory variable may appear in one or more of the relations. We study a system estimation approach for simultaneous computation and inference of the model and (co)variance parameters. For illustration, we fit a bivariate Emax model to diabetes dose-response data. Further, the bivariate Emax model is used in a simulation study that compares the system estimation approach to equation-by-equation estimation. We conclude that overall, the system estimation approach performs better for the bivariate Emax model when there are dependencies among relations. The stronger the dependencies, the more we gain in precision by using system estimation rather than equation-by-equation estimation.

  3. Effects of measurement errors on psychometric measurements in ergonomics studies: Implications for correlations, ANOVA, linear regression, factor analysis, and linear discriminant analysis.

    Science.gov (United States)

    Liu, Yan; Salvendy, Gavriel

    2009-05-01

    This paper aims to demonstrate the effects of measurement errors on psychometric measurements in ergonomics studies. A variety of sources can cause random measurement errors in ergonomics studies, and these errors can distort virtually every statistic computed and lead investigators to erroneous conclusions. The effects of measurement errors on the five most widely used statistical analysis tools are discussed and illustrated: correlation; ANOVA; linear regression; factor analysis; linear discriminant analysis. It is shown that measurement errors can greatly attenuate correlations between variables, reduce the statistical power of ANOVA, distort (overestimate, underestimate, or even change the sign of) regression coefficients, underrate the explanatory contributions of the most important factors in factor analysis, and understate the significance of the discriminant function and the discrimination abilities of individual variables in discriminant analysis. The discussion is restricted to subjective scales and survey methods and their reliability estimates. Other methods applied in ergonomics research, such as physical and electrophysiological measurements and chemical and biomedical analysis methods, also have issues of measurement error, but they are beyond the scope of this paper.
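
    The attenuation effect on correlations is easy to reproduce: with reliabilities r_x and r_y, the observed correlation is approximately r_true * sqrt(r_x * r_y). A small simulation, with all values hypothetical:

        import numpy as np

        rng = np.random.default_rng(1)
        n, r_true, rel = 100_000, 0.6, 0.7   # sample size, true r, reliability

        # Latent true scores with correlation r_true.
        x = rng.normal(size=n)
        y = r_true * x + np.sqrt(1 - r_true**2) * rng.normal(size=n)

        # Add measurement error so each observed score has reliability 0.7.
        sd_e = np.sqrt(1 / rel - 1)
        x_obs = x + sd_e * rng.normal(size=n)
        y_obs = y + sd_e * rng.normal(size=n)

        print(np.corrcoef(x_obs, y_obs)[0, 1], r_true * rel)  # both ~0.42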

  4. A Reanalysis of Toomela (2003: Spurious measurement error as cause for common variance between personality factors

    Directory of Open Access Journals (Sweden)

    MATTHIAS ZIEGLER

    2009-03-01

    The present article reanalyzed data collected by Toomela (2003). The data contain personality self-ratings and cognitive ability test results from n = 912 men with a military background. In his original article, Toomela showed that in the group with the highest cognitive ability, Big-Five Neuroticism and Conscientiousness were substantially correlated and could no longer be clearly separated using exploratory factor analysis. The present reanalysis was based on the hypothesis that a spurious measurement error caused by situational demand was responsible; that is, people distorted their answers. Furthermore, it was hypothesized that this situational demand was felt due to a person's military rank but not due to his intelligence. Using a multigroup structural equation model, our hypothesis could be confirmed. Moreover, the results indicate that an uncorrelated trait model might represent personalities better when situational demand is partialled out. Practical and theoretical implications are discussed.

  5. Impact of instrumental systematic errors on fine-structure constant measurements with quasar spectra

    CERN Document Server

    Whitmore, J B

    2014-01-01

    We present a new `supercalibration' technique for measuring systematic distortions in the wavelength scales of high resolution spectrographs. By comparing spectra of `solar twin' stars or asteroids with a reference laboratory solar spectrum, distortions in the standard thorium--argon calibration can be tracked with $\sim$10\,m\,s$^{-1}$ precision over the entire optical wavelength range on scales of both echelle orders ($\sim$50--100\,\AA) and entire spectrograph arms ($\sim$1000--3000\,\AA). Using archival spectra from the past 20 years we have probed the supercalibration history of the VLT--UVES and Keck--HIRES spectrographs. We find that systematic errors in their wavelength scales are ubiquitous and substantial, with long-range distortions varying between typically $\pm$200\,m\,s$^{-1}$\,per 1000\,\AA. We apply a simple model of these distortions to simulated spectra which characterize the large UVES and HIRES quasar samples which previously indicated possible evidence for cosmological variations in the ...

  6. High Accuracy On-line Measurement Method of Motion Error on Machine Tools Straight-going Parts

    Institute of Scientific and Technical Information of China (English)

    苏恒; 洪迈生; 魏元雷; 李自军

    2003-01-01

    Harmonic suppression and the non-periodic, non-closing nature of the straightness profile error, which bring about harmonic component distortion in the measurement result, are analyzed. As a countermeasure, a novel, accurate two-probe method in the time domain is put forward to measure the straight-going component of motion error in machine tools, based on the frequency-domain 3-point method, after symmetrical continuation of the probes' primitive signals. Both the straight-going component of motion error in the machine tool and the profile error of a workpiece manufactured on this machine can be measured at the same time. This information can be used to diagnose the origin of machine tool faults. The analysis is proved correct by experiment.

  7. Error Analysis of Three Degree-of-Freedom Changeable Parallel Measuring Mechanism

    Institute of Scientific and Technical Information of China (English)

    CHENG Gang; GE Shi-rong; WANG Yong

    2007-01-01

    A three degree-of-freedom (DOF) planar changeable parallel mechanism is designed by means of control of different drive parameters. This mechanism possesses the characteristics of two kinds of parallel mechanism. Based on its topological structure, a coordinate system for position analysis is set up and the forward kinematic solutions are analyzed. It was found that the parallel mechanism is partially decoupled. The relationship between the original errors and the pose (position and orientation) error of the moving platform is built according to the complete differential-coefficient theory. We then present a special example with theoretical values and errors to evaluate the error model, and numerical error solutions are obtained. The investigations, concentrating on mechanism errors and actuator errors, show that the mechanism errors have more influence on the pose of the moving platform. It is demonstrated that improving manufacturing and assembly techniques can greatly reduce the moving platform error. The small change in pose error across different kinematic positions proves that software error-compensation can considerably improve the precision of the parallel mechanism.

  8. BIVARIATE LAGRANGE-TYPE VECTOR VALUED RATIONAL INTERPOLANTS

    Institute of Scientific and Technical Information of China (English)

    Chuan-qing Gu; Gong-qing Zhu

    2002-01-01

    An axiomatic definition of bivariate vector-valued rational interpolation on distinct plane interpolation points is first presented in this paper. A two-variable vector-valued rational interpolation formula is explicitly constructed in the following form: determinantal formulas for the denominator scalar polynomials and for the numerator vector polynomials, which possess Lagrange-type basis function expressions. A practical criterion for the existence and uniqueness of the interpolation is obtained. In contrast to the underlying method, the method of bivariate Thiele-type vector-valued rational interpolation is reviewed.

  9. Systematic errors in the measurement of the permanent electric dipole moment (EDM) of the 199 Hg atom

    Science.gov (United States)

    Chen, Yi; Graner, Brent; Heckel, Blayne; Lindahl, Eric

    2016-05-01

    This talk provides a discussion of the systematic errors that were encountered in the 199 Hg experiment described earlier in this session. The dominant systematic error, unseen in previous 199 Hg EDM experiments, arose from small motions of the Hg vapor cells due to forces exerted by the applied electric field. Methods used to understand this effect, as well as the anticipated sources of systematic errors such as leakage currents, parameter correlations, and E^2 and v × E/c effects, will be presented. The total systematic error was found to be 72% as large as the statistical error of the EDM measurement. This work was supported by NSF Grant 1306743 and by DOE Grant DE-FG02-97ER41020.

  10. Measurement of Fracture Aperture Fields Using Transmitted Light: An Evaluation of Measurement Errors and their Influence on Simulations of Flow and Transport through a Single Fracture

    Energy Technology Data Exchange (ETDEWEB)

    Detwiler, Russell L.; Glass, Robert J.; Pringle, Scott E.

    1999-05-06

    Understanding of single and multi-phase flow and transport in fractures can be greatly enhanced through experimentation in transparent systems (analogs or replicas) where light transmission techniques yield quantitative measurements of aperture, solute concentration, and phase saturation fields. Here we quantify aperture field measurement error and demonstrate the influence of this error on the results of flow and transport simulations (hypothesized experimental results) through saturated and partially saturated fractures. We find that precision and accuracy can be balanced to greatly improve the technique, and we present a measurement protocol to obtain a minimum error field. Simulation results show an increased sensitivity to error as we move from flow to transport and from saturated to partially saturated conditions. Significant sensitivity under partially saturated conditions results in differences in channeling and multiple-peaked breakthrough curves. These results emphasize the critical importance of defining and minimizing error for studies of flow and transport in single fractures.

  11. Demonstrating the Error Budget for the Climate Absolute Radiance and Refractivity Observatory Through Solar Irradiance Measurements

    Science.gov (United States)

    Thome, Kurtis; McCorkel, Joel; McAndrew, Brendan

    2016-01-01

    The Climate Absolute Radiance and Refractivity Observatory (CLARREO) mission addresses the need to observe high-accuracy, long-term climate change trends and to use decadal change observations as a method to determine the accuracy of climate change projections. A CLARREO objective is to improve the accuracy of SI-traceable, absolute calibration at infrared and reflected solar wavelengths to reach the on-orbit accuracies required to allow climate change observations to survive data gaps and to observe climate change at the limit of natural variability. Such an effort will also demonstrate National Institute of Standards and Technology (NIST) approaches for use in future spaceborne instruments. The current work describes the results of laboratory and field measurements with the Solar, Lunar for Absolute Reflectance Imaging Spectroradiometer (SOLARIS), which is the calibration demonstration system (CDS) for the reflected solar portion of CLARREO. SOLARIS allows testing and evaluation of calibration approaches, alternate design and/or implementation approaches, and components for the CLARREO mission. SOLARIS also provides a test-bed for detector technologies, non-linearity determination and uncertainties, and application of future technology developments and suggested spacecraft instrument design modifications. Results of laboratory calibration measurements are provided to demonstrate key assumptions about instrument behavior that are needed to achieve CLARREO's climate measurement requirements. Absolute radiometric response is determined using laser-based calibration sources and applied to direct solar views for comparison with accepted solar irradiance models to demonstrate accuracy values giving confidence in the error budget for the CLARREO reflectance retrieval.

  12. Systematic and Statistical Errors Associated with Nuclear Decay Constant Measurements Using the Counting Technique

    Science.gov (United States)

    Koltick, David; Wang, Haoyu; Liu, Shih-Chieh; Heim, Jordan; Nistor, Jonathan

    2016-03-01

    Typical nuclear decay constants are measured at the accuracy level of 10^-2. There are numerous applications, such as tests of unconventional theories, dating of materials, and long-term inventory evolution, which require decay constant accuracies at the level of 10^-4 to 10^-5. The statistical and systematic errors associated with precision measurements of decays using the counting technique are presented. Precision requires high count rates, which introduces time-dependent dead-time and pile-up corrections. An approach to overcoming these issues by continuous recording of the detector current is presented. Other systematic corrections include the time-dependent dead time due to background radiation, control of target motion and radiation flight-path variation due to environmental conditions, and time-dependent effects caused by scattered events. The incorporation of blind experimental techniques can help make measurements independent of past results. A spectrometer design and data analysis that can accomplish these goals are reviewed. The author would like to thank TechSource, Inc. and Advanced Physics Technologies, LLC. for their support in this work.
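
    As one concrete example of the corrections involved, the sketch below applies the standard non-paralyzable dead-time correction n = m / (1 - m*tau); the rates and dead time are hypothetical, and the abstract's point is precisely that at the 10^-4 to 10^-5 level such corrections (and their time dependence) become the limiting systematics.

        def dead_time_corrected_rate(m, tau):
            """Non-paralyzable dead-time correction: true rate n = m / (1 - m*tau)."""
            return m / (1.0 - m * tau)

        # Hypothetical: 1e5 counts/s measured with a 1 microsecond dead time.
        m, tau = 1.0e5, 1.0e-6
        print(dead_time_corrected_rate(m, tau))  # ~1.111e5 counts/s, an 11% correction
        # A 10% error in tau would shift the recovered rate by about 1%,
        # orders of magnitude above a 10^-4 accuracy goal.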

  13. THE INVERSE PROBLEM OF OPTIMAL ONE-STEP AND MULTI-STEP FILTERING OF MEASUREMENT ERRORS IN THE VECTOR

    Directory of Open Access Journals (Sweden)

    Laipanova Z. M.

    2015-12-01

    In practice, we often encounter the problem of determining a system state based on the results of various measurements. Measurements are usually accompanied by random errors; therefore, we should not talk about determining the system state but about estimating it through stochastic processing of measurement results. The monograph by E. A. Semenchina and M. Z. Laipanova [1] investigated one-step filtering of the measurement errors of the demand vector in the Leontiev balance model, as well as multi-step optimal filtering of those errors. In this article, we have formulated and investigated the inverse problem for optimal one-step and multi-step filtering of the measurement errors of the demand vector. For its solution, the authors propose a constrained-optimization method: for given variables and a known disturbance, the matrix elements are determined (estimated), both for one-step filtering of measurement errors and for multi-step filtering. The solution of the inverse problem is thus reduced to constrained optimization problems, which are easily solved in MS Excel. The results of the research, outlined in this article, are of considerable interest for applied research. The article also formulates the inverse problem in a dynamic Leontiev model and proposes a method for its solution.

  14. Measuring the relationship between interruptions, multitasking and prescribing errors in an emergency department: a study protocol

    OpenAIRE

    Raban, Magdalena Z; Scott R Walter; Douglas, Heather E.; Strumpman, Dana; Mackenzie, John; Westbrook, Johanna I

    2015-01-01

    Introduction Interruptions and multitasking are frequent in clinical settings, and have been shown in the cognitive psychology literature to affect performance, increasing the risk of error. However, comparatively less is known about their impact on errors in clinical work. This study will assess the relationship between prescribing errors, interruptions and multitasking in an emergency department (ED) using direct observations and chart review. Methods and analysis The study will be conducte...

  15. Software Tool for Analysis of Breathing-Related Errors in Transthoracic Electrical Bioimpedance Spectroscopy Measurements

    Science.gov (United States)

    Abtahi, F.; Gyllensten, I. C.; Lindecrantz, K.; Seoane, F.

    2012-12-01

    During the last decades, Electrical Bioimpedance Spectroscopy (EBIS) has been applied in a range of different applications, mainly using the frequency-sweep technique. Traditionally, the tissue under study is considered to be time-invariant, and dynamic changes of tissue activity are ignored and instead treated as a noise source. This assumption has not been adequately tested and could have a negative impact and limit the accuracy of impedance monitoring systems. In order to successfully use frequency-sweep EBIS for monitoring time-variant systems, it is paramount to study the effect of frequency-sweep delay on Cole-model-based analysis. In this work, we present a software tool that can be used to simulate the influence of respiration activity in frequency-sweep EBIS measurements of the human thorax and to analyse the effects of the different error sources. Preliminary results indicate that the deviation in the EBIS measurement might be significant at any frequency, especially in the impedance plane. Therefore, the impact on Cole-model analysis might differ depending on the method applied for Cole parameter estimation.
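
    A minimal sketch of the issue, assuming the single-dispersion Cole model Z(f) = Rinf + (R0 - Rinf) / (1 + (j*2*pi*f*tau)^alpha) with hypothetical thoracic parameters: if R0 is modulated by breathing while the sweep is in progress, each frequency point is acquired at a different respiratory phase, which is the sweep-delay artefact the tool is meant to analyse.

        import numpy as np

        def cole_impedance(f, r0, rinf, tau, alpha):
            """Single-dispersion Cole impedance model."""
            return rinf + (r0 - rinf) / (1 + (1j * 2 * np.pi * f * tau)**alpha)

        # Hypothetical parameters; R0 modulated at 0.25 Hz to mimic breathing.
        freqs = np.logspace(3, 6, 50)          # 1 kHz .. 1 MHz sweep
        t = np.linspace(0.0, 0.5, freqs.size)  # sweep duration: 0.5 s
        r0 = 50.0 + 1.5 * np.sin(2 * np.pi * 0.25 * t)
        z_swept = cole_impedance(freqs, r0, 20.0, 1e-6, 0.8)
        z_static = cole_impedance(freqs, 50.0, 20.0, 1e-6, 0.8)
        print(np.max(np.abs(z_swept - z_static)))  # deviation due to sweep delay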

  16. Software Tool for Analysis of Breathing-Related Errors in Transthoracic Electrical Bioimpedance Spectroscopy Measurements

    International Nuclear Information System (INIS)

    During the last decades, Electrical Bioimpedance Spectroscopy (EBIS) has been applied in a range of different applications, mainly using the frequency-sweep technique. Traditionally, the tissue under study is considered to be time-invariant, and dynamic changes of tissue activity are ignored and instead treated as a noise source. This assumption has not been adequately tested and could have a negative impact and limit the accuracy of impedance monitoring systems. In order to successfully use frequency-sweep EBIS for monitoring time-variant systems, it is paramount to study the effect of frequency-sweep delay on Cole-model-based analysis. In this work, we present a software tool that can be used to simulate the influence of respiration activity in frequency-sweep EBIS measurements of the human thorax and to analyse the effects of the different error sources. Preliminary results indicate that the deviation in the EBIS measurement might be significant at any frequency, especially in the impedance plane. Therefore, the impact on Cole-model analysis might differ depending on the method applied for Cole parameter estimation.

  17. Design, calibration and error analysis of instrumentation for heat transfer measurements in internal combustion engines

    Science.gov (United States)

    Ferguson, C. R.; Tree, D. R.; Dewitt, D. P.; Wahiduzzaman, S. A. H.

    1987-01-01

    The paper reports the methodology and uncertainty analyses of instrumentation for heat transfer measurements in internal combustion engines. Results are presented for determining the local wall heat flux in an internal combustion engine (using a surface thermocouple-type heat flux gage) and the apparent flame temperature and soot volume fraction path length product in a diesel engine (using two-color pyrometry). It is shown that a surface thermocouple heat transfer gage, suitably constructed and calibrated, will have an accuracy of 5 to 10 percent. It is also shown that, when applying two-color pyrometry to measure the apparent flame temperature and soot volume fraction-path length, it is important to choose at least one of the two wavelengths to lie in the range of 1.3 to 2.3 micrometers. A carefully calibrated two-color pyrometer can ensure that random errors in the apparent flame temperature and in the soot volume fraction path length remain small (within about 1 percent and 10 percent, respectively).
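
    To make the two-color idea concrete, the sketch below recovers temperature from the ratio of two spectral intensities under the Wien approximation, assuming equal (gray) soot emissivity at both wavelengths; the wavelengths and signals are hypothetical but lie in the 1.3-2.3 micrometer range recommended above.

        import numpy as np

        C2 = 1.4388e-2  # second radiation constant, m*K

        def ratio_temperature(i1, i2, lam1, lam2):
            """Two-color temperature from Wien's law with equal emissivities:
            T = C2*(1/lam2 - 1/lam1) / (ln(i1/i2) - 5*ln(lam2/lam1))."""
            return (C2 * (1.0 / lam2 - 1.0 / lam1)
                    / (np.log(i1 / i2) - 5.0 * np.log(lam2 / lam1)))

        # Hypothetical signals synthesized at 2000 K for 1.6 um and 2.2 um.
        lam1, lam2, T = 1.6e-6, 2.2e-6, 2000.0
        i1 = lam1**-5 * np.exp(-C2 / (lam1 * T))
        i2 = lam2**-5 * np.exp(-C2 / (lam2 * T))
        print(ratio_temperature(i1, i2, lam1, lam2))  # recovers ~2000 K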

  18. Score Tests of Normality in Bivariate Probit Models

    OpenAIRE

    Anthony Murphy

    2005-01-01

    A relatively simple and convenient score test of normality in the bivariate probit model is derived. Monte Carlo simulations show that the small sample performance of the bootstrapped test is quite good. The test may be readily extended to testing normality in related models.

  19. The Statistical Relationship between Bivariate and Multinomial Choice Models

    OpenAIRE

    Weeks, Melvyn; Orne, Chris

    1999-01-01

    The authors demonstrate the conditions under which the bivariate probit model can be considered a special case of the more general multinomial probit model. Since the attendant parameter restrictions produce a singular covariance matrix, the subsequent problems of testing on the boundary of the parameter space are circumvented by the construction of a score test.

  20. A BIVARIATE EXTENSION OF BLEIMANN-BUTZER-HAHN OPERATOR

    Institute of Scientific and Technical Information of China (English)

    Rasul A. Khan

    2002-01-01

    Let C(R^2_+) be a class of continuous functions f on R^2_+. A bivariate extension L_n(f; x, y) of the Bleimann-Butzer-Hahn operator is defined and its standard convergence properties are given. Moreover, a local analogue of the Voronovskaja theorem is also given for a subclass of C(R^2_+).

  1. A vector of quarters representation for bivariate time series

    NARCIS (Netherlands)

    Ph.H.B.F. Franses (Philip Hans)

    1995-01-01

    In this paper it is shown that several models for a bivariate nonstationary quarterly time series are nested in a vector autoregression with cointegration restrictions for the eight annual series of quarterly observations. In other words, the Granger Representation Theorem is extended to incorporate

  2. Univariate and Bivariate Loglinear Models for Discrete Test Score Distributions.

    Science.gov (United States)

    Holland, Paul W.; Thayer, Dorothy T.

    2000-01-01

    Applied the theory of exponential families of distributions to the problem of fitting the univariate histograms and discrete bivariate frequency distributions that often arise in the analysis of test scores. Considers efficient computation of the maximum likelihood estimates of the parameters using Newton's Method and computationally efficient…

  3. The Thirty Gigahertz Instrument Receiver for the QUIJOTE Experiment: Preliminary Polarization Measurements and Systematic-Error Analysis

    Directory of Open Access Journals (Sweden)

    Francisco J. Casas

    2015-08-01

    This paper presents preliminary polarization measurements and systematic-error characterization of the Thirty Gigahertz Instrument receiver developed for the QUIJOTE experiment. The instrument has been designed to measure the polarization of Cosmic Microwave Background radiation from the sky, obtaining the Q, U, and I Stokes parameters of the incoming signal simultaneously. Two kinds of linearly polarized input signals have been used as excitations in the polarimeter measurement tests in the laboratory; these show consistent results in terms of the Stokes parameters obtained. A measurement-based systematic-error characterization technique has been used in order to determine the possible sources of instrumental errors and to assist in the polarimeter calibration process.

  4. On limit relations between some families of bivariate hypergeometric orthogonal polynomials

    Science.gov (United States)

    Area, I.; Godoy, E.

    2013-01-01

    In this paper we deal with limit relations between bivariate hypergeometric polynomials. We analyze the limit relation from trinomial distribution to bivariate Gaussian distribution, obtaining the limit transition from the second-order partial difference equation satisfied by bivariate hypergeometric Kravchuk polynomials to the second-order partial differential equation verified by bivariate hypergeometric Hermite polynomials. As a consequence the limit relation between both families of orthogonal polynomials is established. A similar analysis between bivariate Hahn and bivariate Appell orthogonal polynomials is also presented.

  5. Measuring coverage in MNCH: total survey error and the interpretation of intervention coverage estimates from household surveys.

    Directory of Open Access Journals (Sweden)

    Thomas P Eisele

    Nationally representative household surveys are increasingly relied upon to measure maternal, newborn, and child health (MNCH) intervention coverage at the population level in low- and middle-income countries. Surveys are the best tool we have for this purpose and are central to national and global decision making. However, all survey point estimates have a certain level of error (total survey error), comprising sampling and non-sampling error, both of which must be considered when interpreting survey results for decision making. In this review, we discuss the importance of considering these errors when interpreting MNCH intervention coverage estimates derived from household surveys, using relevant examples from national surveys to provide context. Sampling error is usually thought of as the precision of a point estimate and is represented by 95% confidence intervals, which are measurable. Confidence intervals can inform judgments about whether estimated parameters are likely to be different from the real value of a parameter. We recommend, therefore, that confidence intervals for key coverage indicators should always be provided in survey reports. By contrast, the direction and magnitude of non-sampling error is almost always unmeasurable, and therefore unknown. Information error and bias are the most common sources of non-sampling error in household survey estimates and we recommend that they should always be carefully considered when interpreting MNCH intervention coverage based on survey data. Overall, we recommend that future research on measuring MNCH intervention coverage should focus on refining and improving survey-based coverage estimates to develop a better understanding of how results should be interpreted and used.
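
    As a minimal illustration of the sampling-error component, the sketch below computes a 95% confidence interval for a survey coverage proportion. The design effect is a hypothetical placeholder; the appropriate value depends on the survey's cluster design.

    ```python
    import math

    def coverage_ci(p_hat, n, deff=2.0, z=1.96):
        """95% CI for an estimated coverage proportion; deff is an assumed
        design effect accounting for cluster sampling (hypothetical value)."""
        se = math.sqrt(deff * p_hat * (1.0 - p_hat) / n)
        return max(0.0, p_hat - z * se), min(1.0, p_hat + z * se)

    # e.g. 62% coverage estimated from 1200 households
    print(coverage_ci(0.62, 1200))   # ~ (0.58, 0.66)
    ```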

  6. A new analysis of fine-structure constant measurements and modelling errors from quasar absorption lines

    CERN Document Server

    Wilczynska, Michael R; King, Julian A; Murphy, Michael T; Bainbridge, Matthew B; Flambaum, Victor V

    2015-01-01

    We present an analysis of 23 absorption systems along the lines of sight towards 18 quasars in the redshift range of $0.4 \\leq z_{abs} \\leq 2.3$ observed on the Very Large Telescope (VLT) using the Ultraviolet and Visual Echelle Spectrograph (UVES). Considering both statistical and systematic error contributions we find a robust estimate of the weighted mean deviation of the fine-structure constant from its current, laboratory value of $\\Delta\\alpha/\\alpha=\\left(0.22\\pm0.23\\right)\\times10^{-5}$, consistent with the dipole variation reported in Webb et al. and King et al. This paper also examines modelling methodologies and systematic effects. In particular we focus on the consequences of fitting quasar absorption systems with too few absorbing components and of selectively fitting only the stronger components in an absorption complex. We show that using insufficient continuum regions around an absorption complex causes a significant increase in the scatter of a sample of $\\Delta\\alpha/\\alpha$ measurements, th...

  7. Statistical Inference for Regression Models with Covariate Measurement Error and Auxiliary Information.

    Science.gov (United States)

    You, Jinhong; Zhou, Haibo

    2009-01-01

    We consider statistical inference on a regression model in which some covariables are measured with errors together with an auxiliary variable. The proposed estimation for the regression coefficients is based on some estimating equations. This new method alleviates some drawbacks of previously proposed estimations, including the requirement of undersmoothing the regressor functions over the auxiliary variable and the restriction that other covariables be observed exactly, among others. The large sample properties of the proposed estimator are established. We further propose a jackknife estimation, which consists of deleting one estimating equation (instead of one observation) at a time. We show that the jackknife estimator of the regression coefficients and the estimating-equations-based estimator are asymptotically equivalent. Simulations show that the jackknife estimator has smaller biases when the sample size is small or moderate. In addition, the jackknife estimation can also provide a consistent estimator of the asymptotic covariance matrix, which is robust to heteroscedasticity. We illustrate these methods by applying them to a real data set from marketing science. PMID:22199460
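
    A minimal sketch of the delete-one-estimating-equation jackknife for a scalar regression coefficient; with one estimating equation per observation, as here, deleting an equation coincides with deleting an observation. The data-generating process is invented for illustration.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n = 200
    x = rng.normal(size=n)
    y = 1.5 * x + rng.normal(size=n)

    def solve_beta(keep):
        """Solve the estimating equation sum_i x_i (y_i - beta x_i) = 0 over kept i."""
        xs, ys = x[keep], y[keep]
        return xs @ ys / (xs @ xs)

    beta_full = solve_beta(np.ones(n, dtype=bool))
    # delete one estimating equation (here: one observation) at a time
    jack = np.array([solve_beta(np.arange(n) != i) for i in range(n)])
    se_jack = np.sqrt((n - 1) / n * np.sum((jack - jack.mean()) ** 2))
    print(beta_full, se_jack)   # slope estimate and its jackknife standard error
    ```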

  8. Impact of instrumental systematic errors on fine-structure constant measurements with quasar spectra

    Science.gov (United States)

    Whitmore, Jonathan B.; Murphy, Michael T.

    2015-02-01

    We present a new `supercalibration' technique for measuring systematic distortions in the wavelength scales of high-resolution spectrographs. By comparing spectra of `solar twin' stars or asteroids with a reference laboratory solar spectrum, distortions in the standard thorium-argon calibration can be tracked with ˜10 m s-1 precision over the entire optical wavelength range on scales of both echelle orders (˜50-100 Å) and entire spectrograph arms (˜1000-3000 Å). Using archival spectra from the past 20 yr, we have probed the supercalibration history of the Very Large Telescope-Ultraviolet and Visible Echelle Spectrograph (VLT-UVES) and Keck-High Resolution Echelle Spectrograph (HIRES) spectrographs. We find that systematic errors in their wavelength scales are ubiquitous and substantial, with long-range distortions varying between typically ±200 m s-1 per 1000 Å. We apply a simple model of these distortions to simulated spectra that characterize the large UVES and HIRES quasar samples which previously indicated possible evidence for cosmological variations in the fine-structure constant, α. The spurious deviations in α produced by the model closely match important aspects of the VLT-UVES quasar results at all redshifts and partially explain the HIRES results, though not self-consistently at all redshifts. That is, the apparent ubiquity, size and general characteristics of the distortions are capable of significantly weakening the evidence for variations in α from quasar absorption lines.

  9. Identification of Error Sources in High Precision Weight Measurements of Gyroscopes

    CERN Document Server

    Lőrincz, I

    2015-01-01

    A number of weight anomalies have been reported in the past with respect to gyroscopes. Much attention was gained from a paper in Physical Review Letters, when Japanese scientists announced that a gyroscope loses weight up to $0.005\\%$ when spinning only in the clockwise rotation with the gyroscope's axis in the vertical direction. Immediately afterwards, a number of other teams tried to replicate the effect, obtaining a null result. It was suggested that the reported effect by the Japanese was probably due to a vibration artifact, however, no final conclusion on the real cause has been obtained. We decided to build a dedicated high precision setup to test weight anomalies of spinning gyroscopes in various configurations. A number of error sources like precession and vibration and the nature of their influence on the measurements have been clearly identified, which led to the conclusive explanation of the conflicting reports. We found no anomaly within $\\Delta m/m<2.6 \\times 10^{-6}$ valid for both horizon...

  10. Local measurement of error field using naturally rotating tearing mode dynamics in EXTRAP T2R

    CERN Document Server

    Sweeney, R M; Brunsell, P; Fridström, R; Volpe, F A

    2016-01-01

    An error field (EF) detection technique using the amplitude modulation of a naturally rotating tearing mode (TM) is developed and validated in the EXTRAP T2R reversed field pinch. The technique was used to identify intrinsic EFs of $m/n = 1/-12$, where $m$ and $n$ are the poloidal and toroidal mode numbers. The effect of the EF and of a resonant magnetic perturbation (RMP) on the TM, in particular on amplitude modulation, is modeled with a first-order solution of the Modified Rutherford Equation. In the experiment, the TM amplitude is measured as a function of the toroidal angle as the TM rotates rapidly in the presence of an unknown EF and a known, deliberately applied RMP. The RMP amplitude is fixed while the toroidal phase is varied from one discharge to the other, completing a full toroidal scan. Using three such scans with different RMP amplitudes, the EF amplitude and phase are inferred from the phases at which the TM amplitude maximizes. The estimated EF amplitude is consistent with other estimates (e....

  11. Statistical theory for estimating sampling errors of regional radiation averages based on satellite measurements

    Science.gov (United States)

    Smith, G. L.; Bess, T. D.; Minnis, P.

    1983-01-01

    The processes which determine the weather and climate are driven by the radiation received by the earth and the radiation subsequently emitted. A knowledge of the absorbed and emitted components of radiation is thus fundamental for the study of these processes. In connection with the desire to improve the quality of long-range forecasting, NASA is developing the Earth Radiation Budget Experiment (ERBE), consisting of a three-channel scanning radiometer and a package of nonscanning radiometers. A set of these instruments is to be flown on both the NOAA-F and NOAA-G spacecraft, in sun-synchronous orbits, and on an Earth Radiation Budget Satellite. The purpose of the scanning radiometer is to obtain measurements from which the average reflected solar radiant exitance and the average earth-emitted radiant exitance at a reference level can be established. The estimate of regional average exitance obtained will not exactly equal the true value of the regional average exitance, but will differ due to spatial sampling. A method is presented for evaluating this spatial sampling error.

  12. Error in interpreting field chlorophyll fluorescence measurements: heat gain from solar radiation

    International Nuclear Information System (INIS)

    Temperature and chlorophyll fluorescence characteristics were determined on leaves of various horticultural species following a dark adaptation period where dark adaptation cuvettes were shielded from or exposed to solar radiation. In one study, temperature of Swietenia mahagoni (L.) Jacq. leaflets within cuvettes increased from approximately 36C to approximately 50C during a 30-minute exposure to solar radiation. Alternatively, when the leaflets and cuvettes were shielded from solar radiation, leaflet temperature declined to 33C in 10 to 15 minutes. In a second study, 16 horticultural species exhibited a lower variable to maximum fluorescence ratio (Fv:Fm) when cuvettes were exposed to solar radiation during the 30-minute dark adaptation than when cuvettes were shielded. In a third study with S. mahagoni, the influence of self-shielding the cuvettes by wrapping them with white tape, white paper, or aluminum foil on temperature and fluorescence was compared to exposing or shielding the entire leaflet and cuvette. All of the shielding methods reduced leaflet temperature and increased the Fv:Fm ratio compared to leaving cuvettes exposed. These results indicate that heat stress from direct exposure to solar radiation is a potential source of error when interpreting chlorophyll fluorescence measurements on intact leaves. Methods for moderating or minimizing radiation interception during dark adaptation are recommended. (author)

  13. Perceived vs. measured effects of advanced cockpit systems on pilot workload and error: are pilots' beliefs misaligned with reality?

    Science.gov (United States)

    Casner, Stephen M

    2009-05-01

    Four types of advanced cockpit systems were tested in an in-flight experiment for their effect on pilot workload and error. Twelve experienced pilots flew conventional cockpit and advanced cockpit versions of the same make and model airplane. In both airplanes, the experimenter dictated selected combinations of cockpit systems for each pilot to use while soliciting subjective workload measures and recording any errors that pilots made. The results indicate that the use of a GPS navigation computer helped reduce workload and errors during some phases of flight but raised them in others. Autopilots helped reduce some aspects of workload in the advanced cockpit airplane but did not appear to reduce workload in the conventional cockpit. Electronic flight and navigation instruments appeared to have no effect on workload or error. Despite this modest showing for advanced cockpit systems, pilots stated an overwhelming preference for using them during all phases of flight.

  14. Comparison of Error Estimations by DERs in One-Port S and SLO Calibrated VNA Measurements and Application

    CERN Document Server

    Yannopoulou, Nikolitsa

    2011-01-01

    In order to demonstrate the usefulness of the only existing method for systematic error estimation in VNA (Vector Network Analyzer) measurements by using complex DERs (Differential Error Regions), we compare one-port VNA measurements after the two well-known calibration techniques: the quick reflection response, which uses only a single S (Short circuit) standard, and the time-consuming full one-port, which uses a triple of SLO standards (Short circuit, matching Load, Open circuit). For both calibration techniques, the comparison concerns: (a) a 3D geometric representation of the difference between VNA readings and measurements, and (b) a number of presentation figures for the DERs and their polar DEIs (Differential Error Intervals) of the reflection coefficient, as well as the DERs and their rectangular DEIs of the corresponding input impedance. In this paper, we present the application of this method to an AUT (Antenna Under Test) selected to highlight the existence of practical cases in which the time ...

  15. Linear time-dependent reference intervals where there is measurement error in the time variable-a parametric approach.

    Science.gov (United States)

    Gillard, Jonathan

    2015-12-01

    This article re-examines parametric methods for the calculation of time specific reference intervals where there is measurement error present in the time covariate. Previous published work has commonly been based on the standard ordinary least squares approach, weighted where appropriate. In fact, this is an incorrect method when there are measurement errors present, and in this article, we show that the use of this approach may, in certain cases, lead to referral patterns that may vary with different values of the covariate. Thus, it would not be the case that all patients are treated equally; some subjects would be more likely to be referred than others, hence violating the principle of equal treatment required by the International Federation for Clinical Chemistry. We show, by using measurement error models, that reference intervals are produced that satisfy the requirement for equal treatment for all subjects.
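
    The article's point can be illustrated with a classical errors-in-variables fit: when the time covariate is noisy, ordinary least squares attenuates the slope, while a measurement error model such as Deming regression (one classical instance, shown below under an assumed known error-variance ratio) recovers it. The numbers are simulated, not from the article.

    ```python
    import numpy as np

    def deming(x, y, delta=1.0):
        """Deming regression slope and intercept; delta = var(err_y)/var(err_x).
        Unlike OLS, this accounts for measurement error in the covariate."""
        mx, my = x.mean(), y.mean()
        sxx = ((x - mx) ** 2).mean()
        syy = ((y - my) ** 2).mean()
        sxy = ((x - mx) * (y - my)).mean()
        b = (syy - delta * sxx
             + np.sqrt((syy - delta * sxx) ** 2 + 4 * delta * sxy ** 2)) / (2 * sxy)
        return b, my - b * mx

    rng = np.random.default_rng(6)
    t_true = rng.uniform(20, 40, 300)                 # true time covariate
    y = 5.0 + 2.0 * t_true + rng.normal(scale=4, size=300)
    t_obs = t_true + rng.normal(scale=2, size=300)    # error in the time variable
    ols_slope = np.polyfit(t_obs, y, 1)[0]
    dem_slope = deming(t_obs, y, delta=(4 / 2) ** 2)[0]
    print(ols_slope, dem_slope)   # OLS attenuated below 2; Deming close to 2
    ```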

  16. Measurement-based analysis of error latency. [in computer operating system

    Science.gov (United States)

    Chillarege, Ram; Iyer, Ravishankar K.

    1987-01-01

    This paper demonstrates a practical methodology for the study of error latency under a real workload. The method is illustrated with sampled data on the physical memory activity, gathered by hardware instrumentation on a VAX 11/780 during the normal workload cycle of the installation. These data are used to simulate fault occurrence and to reconstruct the error discovery process in the system. The technique provides a means to study the system under different workloads and for multiple days. An approach to determine the percentage of undiscovered errors is also developed and a verification of the entire methodology is performed. This study finds that the mean error latency, in the memory containing the operating system, varies by a factor of 10 to 1 (in hours) between the low and high workloads. It is found that of all errors occurring within a day, 70 percent are detected in the same day, 82 percent within the following day, and 91 percent within the third day. The increase in failure rate due to latency is not so much a function of remaining errors but is dependent on whether or not there is a latent error.

  17. Measuring early or late dependence for bivariate lifetimes of twins

    DEFF Research Database (Denmark)

    Scheike, Thomas; Holst, Klaus K; Hjelmborg, Jacob B

    2015-01-01

    -Oakes model. This model can be extended in several directions. One extension is to allow the dependence parameter to depend on covariates. Another extension is to model dependence via piecewise constant cross-hazard ratio models. We show how both these models can be implemented for large sample data...

  18. Spectral characteristics of time-dependent orbit errors in altimeter height measurements

    Science.gov (United States)

    Chelton, Dudley B.; Schlax, Michael G.

    1993-01-01

    A mean reference surface and time-dependent orbit errors are estimated simultaneously for each exact-repeat ground track from the first two years of Geosat sea level estimates based on the Goddard Earth model (GEM)-T2 orbits. Motivated by orbit theory and empirical analysis of Geosat data, the time-dependent orbit errors are modeled as 1 cycle per revolution (cpr) sinusoids with slowly varying amplitude and phase. The method recovers the known 'bow tie effect' introduced by the existence of force model errors within the precision orbit determination (POD) procedure used to generate the GEM-T2 orbits. The bow tie pattern of 1-cpr orbit errors is characterized by small amplitudes near the middle and larger amplitudes (up to 160 cm in the 2 yr of data considered here) near the ends of each 5- to 6-day orbit arc over which the POD force model is integrated. A detailed examination of these bow tie patterns reveals the existence of daily modulations of the amplitudes of the 1-cpr sinusoid orbit errors with typical and maximum peak-to-peak ranges of about 14 cm and 30 cm, respectively. The method also identifies a daily variation in the mean orbit error with typical and maximum peak-to-peak ranges of about 6 and 30 cm, respectively, that is unrelated to the predominant 1-cpr orbit error. Application of the simultaneous solution method to the much less accurate Geosat height estimates based on the Naval Astronautics Group orbits leads to the conclusion that the accuracy of POD is not important for collinear altimetric studies of time-dependent mesoscale variability (wavelengths shorter than 1000 km), as long as the time-dependent orbit errors are dominated by 1-cpr variability and a long-arc (several orbital periods) orbit error estimation scheme such as that presented here is used.
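
    A toy version of the estimation idea: fit a 1-cpr sinusoid whose amplitude and phase drift slowly over the arc by letting the sine and cosine coefficients vary linearly in time. The time units, noise level, and drift below are illustrative assumptions.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    t = np.linspace(0.0, 80.0, 2000)       # time in orbital revolutions, one arc
    omega = 2.0 * np.pi                    # 1 cycle per revolution (1 cpr)
    amp_sin = 0.8 + 0.004 * t              # slowly growing amplitude ("bow tie")
    h = (amp_sin * np.sin(omega * t) - 0.5 * np.cos(omega * t)
         + 0.05 * rng.normal(size=t.size)) # height residuals in metres (invented)

    # least-squares fit of a 1-cpr sinusoid with linearly drifting coefficients
    A = np.column_stack([np.sin(omega * t), t * np.sin(omega * t),
                         np.cos(omega * t), t * np.cos(omega * t)])
    coef, *_ = np.linalg.lstsq(A, h, rcond=None)
    print(coef)   # approximately [0.8, 0.004, -0.5, 0.0]
    ```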

  19. THEOREMS OF PEANO'S TYPE FOR BIVARIATE FUNCTIONS AND OPTIMAL RECOVERY OF LINEAR FUNCTIONALS

    Institute of Scientific and Technical Information of China (English)

    N.K. Dicheva

    2001-01-01

    The best recovery of a linear functional Lf, f = f(x,y), on the basis of given linear functionals Ljf, j = 1,2,…,N, in the sense of Sard has been investigated, using an analogue of Peano's theorem. The best recovery of a bivariate function by given scattered data has been obtained in a simple analytical form as a special case.

  20. Z-boson-exchange contributions to the luminosity measurements at LEP and c.m.s.-energy-dependent theoretical errors

    International Nuclear Information System (INIS)

    The precision of the calculation of Z-boson-exchange contributions to the luminosity measurements at LEP is studied for both the first and second generation of LEP luminosity detectors. It is shown that the theoretical errors associated with these contributions are sufficiently small so that the high-precision measurements at LEP, based on the second generation of luminosity detectors, are not limited. The same is true for the c.m.s.-energy-dependent theoretical errors of the Z line-shape formulae. (author) 19 refs.; 3 figs.; 7 tabs

  1. Measurement Error Effect on the Power of Control Chart for Doubly Truncated Normal Distribution under Standardization Procedure

    Directory of Open Access Journals (Sweden)

    Ashit B. Chakraborty

    2015-09-01

    Researchers in various quality control procedures consider the possible effect of measurement error on the power of control charts an important issue. In this paper the effect of measurement errors on the power curve of the standardization procedure is studied for the doubly truncated normal distribution. A method of obtaining the expression for the power of the control chart for the doubly truncated normal distribution is proposed, and the effect of truncation is shown accordingly. To study the sensitivity of the monitoring procedure, the average run length (ARL) is also considered.
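
    For intuition, the sketch below computes the power of a generic L-sigma chart for a normally distributed variable when the observed value is the true value plus independent measurement error (the standard additive-error setup). The truncated-distribution case treated in the paper follows the same pattern with truncated-normal probabilities.

    ```python
    from scipy.stats import norm

    def chart_power(delta, error_ratio, n=1, L=3.0):
        """Power of an L-sigma chart against a delta-sigma mean shift when the
        observed value is the true value plus independent measurement error
        with std dev error_ratio times the process std dev."""
        inflation = (1.0 + error_ratio ** 2) ** 0.5
        z = delta * n ** 0.5 / inflation
        return norm.sf(L - z) + norm.cdf(-L - z)

    for r in (0.0, 0.5, 1.0):
        p = chart_power(2.0, r)
        print(r, round(p, 4), round(1.0 / p, 1))   # power and ARL = 1/power
    ```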

  2. Evoked potential correlates of intelligence: some problems with Hendrickson's string measure of evoked potential complexity and error theory of intelligence.

    Science.gov (United States)

    Vetterli, C F; Furedy, J J

    1985-07-01

    The string measure of evoked potential (EP) complexity is based on a new error theory of intelligence, which differs from the older speed-based formulations which focus on EP latency rather than complexity. In this note we first raise a methodological problem of arbitrariness with respect to one version of the string measure. We then provide a comparative empirical assessment of EP-IQ correlations with respect to a revised string measure (which does not suffer from the methodological problem), a latency measure, and another measure of EP complexity: average voltage. This assessment indicates that the string measure, in particular, yields quite disorderly results, and that, in general, the results favor the speed over the error formulation.

  3. Array processing——a new method to detect and correct errors on array resistivity logging tool measurements

    Institute of Scientific and Technical Information of China (English)

    Philip D.RABINOWITZ; Zhiqiang ZHOU

    2007-01-01

    In recent years more and more multi-array logging tools, such as the array induction and the array laterolog, have been applied in place of conventional logging tools, resulting in increased resolution, better radial and vertical sounding capability, and other features. Multi-array logging tools acquire several times more individual measurements than conventional logging tools. In addition to the new information contained in these data, there is a certain redundancy among the measurements; together, the measurements form a large matrix. Provided the measurements are error-free, the elements of this matrix show certain consistencies. Taking advantage of these consistencies, an innovative method is developed to detect and correct errors in the raw measurements of array resistivity logging tools and to evaluate the quality of the data. The method can be described in several steps. First, data consistency patterns are identified based on the physics of the measurements. Second, the measurements are compared against the consistency patterns for error and bad-data detection. Third, the erroneous data are eliminated and the measurements are reconstructed according to the consistency patterns. Finally, the data quality is evaluated by comparing the raw measurements with the reconstructed measurements. The method can be applied to all array-type logging tools, such as the array induction tool and the array resistivity tool. This paper describes the method and illustrates its application with the High Definition Lateral Log (HDLL, Baker Atlas) instrument. To demonstrate the efficiency of the method, several field examples are shown and discussed.
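
    The paper's consistency patterns are specific to the tool physics, but the flavor of the approach can be shown with a generic redundancy argument: if error-free channel readings form a (nearly) rank-one matrix, a robust reconstruction of that structure flags and repairs isolated bad readings. Everything below (the rank-one structure, noise levels, and threshold) is an invented stand-in for the actual patterns.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    depth, channels = 300, 8
    profile = rng.uniform(5.0, 15.0, depth)              # "true" formation response
    scale = np.linspace(1.0, 2.0, channels)              # per-channel sensitivity
    data = np.outer(profile, scale) + 0.01 * rng.normal(size=(depth, channels))
    data[50, 3] += 25.0                                  # inject one bad reading

    # consistency: every channel should be a scaled copy of the same profile
    col_scale = np.median(data / data[:, :1], axis=0)    # robust channel scales
    prof_est = np.median(data / col_scale, axis=1)       # robust common profile
    recon = np.outer(prof_est, col_scale)
    resid = data - recon
    sigma = 1.4826 * np.median(np.abs(resid))            # robust noise scale (MAD)
    print(np.argwhere(np.abs(resid) > 6 * sigma))        # flags [[50, 3]]
    data[50, 3] = recon[50, 3]                           # repair from the pattern
    ```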

  4. A conditional bivariate reference curve with an application to human growth

    DEFF Research Database (Denmark)

    Petersen, Jørgen Holm

    conditional bivariate distribution; reference curves; percentile; non-parametric; quantile regression; non-parametric estimation

  5. An empirical assessment of exposure measurement errors and effect attenuation in bi-pollutant epidemiologic models

    Science.gov (United States)

    Using multipollutant models to understand the combined health effects of exposure to multiple pollutants is becoming more common. However, the complex relationships between pollutants and differing degrees of exposure error across pollutants can make health effect estimates from ...

  6. Quantitative estimation of the influence of external vibrations on the measurement error of a coriolis mass-flow meter

    OpenAIRE

    de Ridder, J.; Hakvoort, W.B.J.; van Dijk; Lötters, J.C.; de Boer, J.W.; Dimitrovova, Z.; de Almeida

    2013-01-01

    In this paper the quantitative influence of external vibrations on the measurement value of a Coriolis Mass-Flow Meter for low flows is investigated, with the eventual goal to reduce the influence of vibrations. Model results are compared with experimental results to improve the knowledge on how external vibrations affect the measurement error. A Coriolis Mass-Flow Meter (CMFM) is an active device based on the Coriolis force principle for direct mass-flow measurements, independent of fluid pr...

  7. A new multivariate measurement error model with zero-inflated dietary data, and its application to dietary assessment

    OpenAIRE

    Zhang, Saijuan; Midthune, Douglas; Guenther, Patricia M.; Krebs-Smith, Susan M; Kipnis, Victor; Dodd, Kevin W; Buckman, Dennis W.; Tooze, Janet A.; Freedman, Laurence; Carroll, Raymond J.

    2011-01-01

    In the United States the preferred method of obtaining dietary intake data is the 24-hour dietary recall, yet the measure of most interest is usual or long-term average daily intake, which is impossible to measure. Thus, usual dietary intake is assessed with considerable measurement error. Also, diet represents numerous foods, nutrients and other components, each of which have distinctive attributes. Sometimes, it is useful to examine intake of these components separately, but increasingly nu...

  8. Modeling Elicitation effects in contingent valuation studies: a Monte Carlo Analysis of the bivariate approach

    OpenAIRE

    Genius, Margarita; Strazzera, Elisabetta

    2005-01-01

    A Monte Carlo analysis is conducted to assess the validity of the bivariate modeling approach for detection and correction of different forms of elicitation effects in Double Bound Contingent Valuation data. Alternative univariate and bivariate models are applied to several simulated data sets, each one characterized by a specific elicitation effect, and their performance is assessed using standard selection criteria. The bivariate models include the standard Bivariate Probit model, and an al...

  9. Solving Inverse Radiation Transport Problems with Multi-Sensor Data in the Presence of Correlated Measurement and Modeling Errors

    Energy Technology Data Exchange (ETDEWEB)

    Thomas, Edward V. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Stork, Christopher L. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Mattingly, John K. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2015-07-01

    Inverse radiation transport focuses on identifying the configuration of an unknown radiation source given its observed radiation signatures. The inverse problem is traditionally solved by finding the set of transport model parameter values that minimizes a weighted sum of the squared differences by channel between the observed signature and the signature predicted by the hypothesized model parameters. The weights are inversely proportional to the sum of the variances of the measurement and model errors at a given channel. The traditional implicit (often inaccurate) assumption is that the errors (differences between the modeled and observed radiation signatures) are independent across channels. Here, an alternative method that accounts for correlated errors between channels is described and illustrated using an inverse problem based on the combination of gamma and neutron multiplicity counting measurements.
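
    The generalization being described is from a diagonal weighted sum of squares to a full generalized least squares (Mahalanobis) misfit. A minimal sketch, with an invented two-channel covariance; note how correlated errors change how surprising a given residual pattern is:

    ```python
    import numpy as np

    def misfit(y_obs, y_model, Sigma):
        """Generalized least squares misfit r' Sigma^-1 r; with a diagonal Sigma
        this reduces to the usual inverse-variance weighted sum of squares."""
        r = y_obs - y_model
        return r @ np.linalg.solve(Sigma, r)

    y_obs = np.array([10.2, 9.7])
    y_model = np.array([10.0, 10.0])
    diag = np.diag([0.04, 0.04])                       # independent channel errors
    corr = np.array([[0.04, 0.036], [0.036, 0.04]])    # strongly correlated errors
    # opposite-sign residuals are far less likely under correlated errors,
    # so the same residuals yield a much larger misfit:
    print(misfit(y_obs, y_model, diag), misfit(y_obs, y_model, corr))
    ```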

  10. Improved error estimates of a discharge algorithm for remotely sensed river measurements: Test cases on Sacramento and Garonne Rivers

    Science.gov (United States)

    Yoon, Yeosang; Garambois, Pierre-André; Paiva, Rodrigo C. D.; Durand, Michael; Roux, Hélène; Beighley, Edward

    2016-01-01

    We present an improvement to a previously presented algorithm that used a Bayesian Markov Chain Monte Carlo method for estimating river discharge from remotely sensed observations of river height, width, and slope. We also present an error budget for discharge calculations from the algorithm. The algorithm may be utilized by the upcoming Surface Water and Ocean Topography (SWOT) mission. We present a detailed evaluation of the method using synthetic SWOT-like observations (i.e., SWOT and AirSWOT, an airborne version of SWOT). The algorithm is evaluated using simulated AirSWOT observations over the Sacramento and Garonne Rivers that have differing hydraulic characteristics. The algorithm is also explored using SWOT observations over the Sacramento River. SWOT and AirSWOT height, width, and slope observations are simulated by corrupting the "true" hydraulic modeling results with instrument error. Algorithm discharge root mean square error (RMSE) was 9% for the Sacramento River and 15% for the Garonne River for the AirSWOT case using expected observation error. The discharge uncertainty calculated from Manning's equation was 16.2% and 17.1%, respectively. For the SWOT scenario, the RMSE and uncertainty of the discharge estimate for the Sacramento River were 15% and 16.2%, respectively. A method based on the Kalman filter to correct errors of discharge estimates was shown to improve algorithm performance. From the error budget, the primary source of uncertainty was the a priori uncertainty of bathymetry and roughness parameters. Sensitivity to measurement errors was found to be a function of river characteristics. For example, the steeper Garonne River is less sensitive to slope errors than the flatter Sacramento River.
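
    The quoted Manning-equation uncertainties can be reproduced to first order with standard error propagation. The sketch below assumes a wide rectangular channel and invented relative errors for roughness, width, depth, and slope; because slope enters with exponent 1/2, a fixed absolute slope error translates into a larger relative error on flat rivers than on steep ones, consistent with the Sacramento/Garonne contrast.

    ```python
    import math

    def manning_discharge(n, width, depth, slope):
        """Wide rectangular channel: Q = width * depth^(5/3) * sqrt(slope) / n."""
        return width * depth ** (5.0 / 3.0) * math.sqrt(slope) / n

    def relative_q_error(rn, rw, rd, rs):
        """First-order propagation of independent relative 1-sigma errors;
        depth enters with exponent 5/3 and slope with exponent 1/2."""
        return math.sqrt(rn ** 2 + rw ** 2
                         + (5.0 / 3.0 * rd) ** 2 + (0.5 * rs) ** 2)

    print(manning_discharge(0.03, 80.0, 3.0, 1e-4))   # m^3/s, invented geometry
    print(relative_q_error(0.15, 0.02, 0.05, 0.10))   # ~0.18, roughness-dominated
    ```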

  11. A study on fatigue measurement of operators for human error prevention in NPPs

    Energy Technology Data Exchange (ETDEWEB)

    Ju, Oh Yeon; Il, Jang Tong; Meiling, Luo; Hee, Lee Young [KAERI, Daejeon (Korea, Republic of)

    2012-10-15

    The identification and analysis of individual factors of operators, one of the various causes of adverse effects on human performance, is not easy in NPPs. Individual factors for operators include work type (including shift work), environment, personality, qualification, training, education, cognition, fatigue, job stress, and workload. Research at the Finnish Institute of Occupational Health (FIOH) reported that 'burn out' (extreme fatigue) is related to alcohol-dependent habits and must be dealt with using a stress management program. The USNRC (U.S. Nuclear Regulatory Commission) developed FFD (Fitness for Duty) for improving task efficiency and preventing human errors. 'Managing Fatigue' of 10CFR26 presents requirements to control operator fatigue in NPPs. The committee explained that excessive fatigue is due to stressful work environments, working hours, shifts, sleep disorders, and unstable circadian rhythms. In addition, the International Labour Organization (ILO) developed and suggested a checklist to manage fatigue and job stress. Domestically, a systematic evaluation method is presented in the Final Safety Analysis Report (FSAR) chapter 18, Human Factors, in the licensing process; however, it focuses mostly on interface design such as HMI (Human Machine Interface), not on individual factors. In particular, because Korea is in the process of exporting an NPP to the UAE, the development and establishment of a fatigue management technique is important and urgent in order to present technical standards and FFD criteria to the UAE. It is also anticipated that the domestic regulatory body will apply the FFD program as a regulatory requirement, so preparation for that situation is required. In this paper, previous research is surveyed to find fatigue measurement and evaluation methods for operators in high-reliability industries. This study also reviews the NRC report and discusses the causal factors and

  12. A study on fatigue measurement of operators for human error prevention in NPPs

    International Nuclear Information System (INIS)

    The identification and analysis of individual factors of operators, one of the various causes of adverse effects on human performance, is not easy in NPPs. Individual factors for operators include work type (including shift work), environment, personality, qualification, training, education, cognition, fatigue, job stress, and workload. Research at the Finnish Institute of Occupational Health (FIOH) reported that 'burn out' (extreme fatigue) is related to alcohol-dependent habits and must be dealt with using a stress management program. The USNRC (U.S. Nuclear Regulatory Commission) developed FFD (Fitness for Duty) for improving task efficiency and preventing human errors. 'Managing Fatigue' of 10CFR26 presents requirements to control operator fatigue in NPPs. The committee explained that excessive fatigue is due to stressful work environments, working hours, shifts, sleep disorders, and unstable circadian rhythms. In addition, the International Labour Organization (ILO) developed and suggested a checklist to manage fatigue and job stress. Domestically, a systematic evaluation method is presented in the Final Safety Analysis Report (FSAR) chapter 18, Human Factors, in the licensing process; however, it focuses mostly on interface design such as HMI (Human Machine Interface), not on individual factors. In particular, because Korea is in the process of exporting an NPP to the UAE, the development and establishment of a fatigue management technique is important and urgent in order to present technical standards and FFD criteria to the UAE. It is also anticipated that the domestic regulatory body will apply the FFD program as a regulatory requirement, so preparation for that situation is required. In this paper, previous research is surveyed to find fatigue measurement and evaluation methods for operators in high-reliability industries. This study also reviews the NRC report and discusses the causal factors and management

  13. Response of residential electricity demand to price: The effect of measurement error

    International Nuclear Information System (INIS)

    In this paper we present an empirical analysis of the residential demand for electricity using annual aggregate data at the state level for 48 US states from 1995 to 2007. Earlier literature has examined residential energy consumption at the state level using annual or monthly data, focusing on the variation in price elasticities of demand across states or regions, but has failed to recognize or address two major issues. The first is that, when fitting dynamic panel models, the lagged consumption term in the right-hand side of the demand equation is endogenous. This has resulted in potentially inconsistent estimates of the long-run price elasticity of demand. The second is that energy price is likely mismeasured. To address these issues, we estimate a dynamic partial adjustment model using the Kiviet-corrected Least Squares Dummy Variables (LSDV, 1995) and the Blundell-Bond (1998) estimators. We find that the long-term elasticities produced by the Blundell-Bond system GMM method are the largest, and those from the bias-corrected LSDV are greater than those from the conventional LSDV. From an energy policy point of view, the results obtained using the Blundell-Bond estimator where we instrument for price imply that a carbon tax or other price-based policy may be effective in discouraging residential electricity consumption and hence curbing greenhouse gas emissions in an electricity system mainly based on coal and gas power plants. - Research Highlights: → Updated information on price elasticities for US energy policy. → Taking into account measurement error in the price variable increases the estimated price elasticity. → There is room for discouraging residential electricity consumption using price increases.
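
    The mechanism behind the last highlight is classical attenuation bias: noise in the price variable shrinks the OLS elasticity toward zero, so estimators that address the measurement error (e.g., by instrumenting) recover larger elasticities. A simulated sketch with an assumed true elasticity of -0.8:

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    n = 5000
    log_p = rng.normal(size=n)                               # true log price
    log_q = -0.8 * log_p + rng.normal(scale=0.5, size=n)     # true elasticity -0.8
    for noise in (0.0, 0.3, 0.6):
        p_obs = log_p + rng.normal(scale=noise, size=n)      # mismeasured price
        b = np.cov(p_obs, log_q)[0, 1] / np.var(p_obs, ddof=1)
        print(noise, round(b, 3))   # OLS slope shrinks toward zero as noise grows
    ```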

  14. Univariate and Bivariate Empirical Mode Decomposition for Postural Stability Analysis

    Directory of Open Access Journals (Sweden)

    Jacques Duchêne

    2008-05-01

    The aim of this paper was to compare empirical mode decomposition (EMD) and two new extended methods of EMD, named complex empirical mode decomposition (complex-EMD) and bivariate empirical mode decomposition (bivariate-EMD). All methods were used to analyze stabilogram center of pressure (COP) time series. The two new methods are suitable to be applied to complex time series to extract complex intrinsic mode functions (IMFs) before the Hilbert transform is subsequently applied on the IMFs. The trace of the analytic IMF in the complex plane has a circular form, with each IMF having its own rotation frequency. The area of the circle and the average rotation frequency of IMFs represent efficient indicators of the postural stability status of subjects. Experimental results show the effectiveness of these indicators to identify differences in standing posture between groups.
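
    One plausible reading of the two indicators, sketched for a single real-valued IMF: form the analytic signal via the Hilbert transform, then take the mean rotation frequency from the unwrapped phase and the area of the circle traced at the mean radius. The test signal and the exact definitions of "area" and "average rotation frequency" are assumptions.

    ```python
    import numpy as np
    from scipy.signal import hilbert

    def rotation_indicators(imf, fs):
        """Area of the mean-radius circle and mean rotation frequency of one IMF."""
        z = hilbert(imf)                           # analytic signal of the IMF
        phase = np.unwrap(np.angle(z))
        duration = len(imf) / fs
        mean_freq = (phase[-1] - phase[0]) / (2.0 * np.pi * duration)
        area = np.pi * np.mean(np.abs(z)) ** 2     # circle in the complex plane
        return area, mean_freq

    fs = 100.0
    t = np.arange(0.0, 10.0, 1.0 / fs)
    imf = np.sin(2 * np.pi * 1.5 * t) * (1.0 + 0.1 * np.sin(2 * np.pi * 0.1 * t))
    print(rotation_indicators(imf, fs))            # area ~ pi, frequency ~ 1.5 Hz
    ```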

  15. Tumor volume measurement error using computed tomography imaging in a phase II clinical trial in lung cancer.

    Science.gov (United States)

    Henschke, Claudia I; Yankelevitz, David F; Yip, Rowena; Archer, Venice; Zahlmann, Gudrun; Krishnan, Karthik; Helba, Brian; Avila, Ricardo

    2016-07-01

    To address the error introduced by computed tomography (CT) scanners when assessing volume and unidimensional measurement of solid tumors, we scanned a precision manufactured pocket phantom simultaneously with patients enrolled in a lung cancer clinical trial. Dedicated software quantified bias and random error in the x, y, and z dimensions of a Teflon sphere and also quantified Response Evaluation Criteria in Solid Tumors (RECIST) and volume measurements using both constant and adaptive thresholding. We found that underestimation bias was essentially the same for the x, y, and z dimensions using constant thresholding and had similar values for adaptive thresholding. The random error of these length measurements, as measured by the standard deviation and coefficient of variation, was 0.10 mm (0.65), 0.11 mm (0.71), and 0.59 mm (3.75) for constant thresholding and 0.08 mm (0.51), 0.09 mm (0.56), and 0.58 mm (3.68) for adaptive thresholding, respectively. For random error, however, z lengths had at least a fivefold higher standard deviation and coefficient of variation than x and y. Observed z-dimension error was especially high for some 8- and 16-slice CT models. Error in CT image formation, in particular for models with low numbers of detector rows, may be large enough to be misinterpreted as representing either treatment response or disease progression. PMID:27660808

  16. A bivariate ordered probit estimator with mixed effects

    OpenAIRE

    Buscha, Franz; Conte, Anna

    2010-01-01

    In this paper, we discuss the derivation and application of a bivariate ordered probit model with mixed effects. Our approach allows one to estimate the distribution of the effect (gamma) of an endogenous ordered variable on an ordered explanatory variable. By allowing gamma to vary over the population, our estimator offers a more flexible parametric setting to recover the causal effect of an endogenous variable in an ordered choice setting. We use Monte Carlo simulations to examine the perfo...

  17. Estimation of Multivariate Probit Models via Bivariate Probit

    OpenAIRE

    John Mullahy

    2015-01-01

    Models having multivariate probit and related structures arise often in applied health economics. When the outcome dimensions of such models are large, however, estimation can be challenging owing to numerical computation constraints and/or speed. This paper suggests the utility of estimating multivariate probit (MVP) models using a chain of bivariate probit estimators. The proposed approach offers two potential advantages over standard multivariate probit estimation procedures: significant r...
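
    At the core of such a chained approach is the bivariate probit log-likelihood for each outcome pair, P(y1, y2) = Phi2(q1 x'b1, q2 x'b2, q1 q2 rho) with q = 2y - 1. A minimal sketch; common regressors across equations and a tanh-parameterized rho are simplifying assumptions, and the loop is slow but simple.

    ```python
    import numpy as np
    from scipy.optimize import minimize
    from scipy.stats import multivariate_normal

    def bivariate_probit_loglik(params, X, y1, y2):
        """params = [beta1 (k), beta2 (k), atanh(rho)] for X of shape (n, k)."""
        k = X.shape[1]
        b1, b2, rho = params[:k], params[k:2 * k], np.tanh(params[-1])
        q1, q2 = 2 * y1 - 1, 2 * y2 - 1
        eta1, eta2 = X @ b1, X @ b2
        ll = 0.0
        for i in range(len(y1)):
            r = q1[i] * q2[i] * rho
            p = multivariate_normal.cdf([q1[i] * eta1[i], q2[i] * eta2[i]],
                                        mean=[0.0, 0.0], cov=[[1.0, r], [r, 1.0]])
            ll += np.log(max(p, 1e-300))
        return ll

    rng = np.random.default_rng(7)
    n = 200
    X = np.column_stack([np.ones(n), rng.normal(size=n)])
    u = rng.multivariate_normal([0.0, 0.0], [[1.0, 0.5], [0.5, 1.0]], size=n)
    y1 = (X @ [0.2, 1.0] + u[:, 0] > 0).astype(int)
    y2 = (X @ [-0.3, 0.7] + u[:, 1] > 0).astype(int)

    res = minimize(lambda p: -bivariate_probit_loglik(p, X, y1, y2),
                   np.zeros(5), method="Nelder-Mead",
                   options={"maxiter": 3000, "xatol": 1e-4, "fatol": 1e-4})
    print(res.x[:4], np.tanh(res.x[-1]))   # betas and rho, rho near 0.5
    ```

    Maximizing this for every pair of outcomes and then combining the pairwise estimates is the flavor of the chained estimator described above.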

  18. Innovation Performance of Chilean SMEs: A Bivariate Probit Analysis

    OpenAIRE

    Rehman, Naqeeb Ur

    2016-01-01

    The purpose of this paper is to investigate the innovation activities of Chilean firms by using micro level data. Previous studies showed a research gap related to micro level analysis of Chilean SMEs. For the first time, multiple proxies are used as dependent variables (product/process innovations and patent application/spending), which has been neglected by past studies. Micro level data on 696 Chilean SMEs were obtained from the World Bank Enterprise Survey. Bivariate probit...

  19. Non-parametric causal inference for bivariate time series

    CERN Document Server

    McCracken, James M

    2015-01-01

    We introduce new quantities for exploratory causal inference between bivariate time series. The quantities, called penchants and leanings, are computationally straightforward to apply, follow directly from assumptions of probabilistic causality, do not depend on any assumed models for the time series generating process, and do not rely on any embedding procedures; these features may provide a clearer interpretation of the results than those from existing time series causality tools. The penchant and leaning are computed based on a structured method for computing probabilities.

  20. The Novel Properties and Construction of Multi-scale Matrix-valued Bivariate Wavelet Packets

    Science.gov (United States)

    Zhang, Hai-mo

    In this paper, we introduce a matrix-valued multiresolution structure and matrix-valued bivariate wavelet packets. A constructive method for semi-orthogonal matrix-valued bivariate wavelet packets is presented. Their properties are characterized by using the time-frequency analysis method, the unitary extension principle, and operator theory. The direct decomposition relation is obtained.

  1. Measurement errors in tipping bucket rain gauges under different rainfall intensities and their implication to hydrologic models

    Science.gov (United States)

    Measurements from tipping bucket rain gauges (TBRs) consist of systematic and random errors as an effect of external factors, such as mechanical limitations, wind effects, evaporation losses, and rainfall intensity. Two different models of TBRs, viz. ISCO-674 and TR-525 (Texas Instr., Inc.), being us...

  2. Analysis of Factors Influencing Vibration Measurement Error

    Institute of Scientific and Technical Information of China (English)

    鲍耀翔; 陈武

    2016-01-01

    With the progress of science and technology, increasing attention is being paid to vibration measurement error in practical operation. To meet the accuracy requirements of vibration measurement and avoid the occurrence of errors, this paper analyzes the factors that influence vibration measurement error and ways to improve the vibration measurement method, with the aim of avoiding error and improving accuracy.

  3. Absorbed in the task : Personality measures predict engagement during task performance as tracked by error negativity and asymmetrical frontal activity

    NARCIS (Netherlands)

    Tops, Mattie; Boksem, Maarten A. S.

    2010-01-01

    We hypothesized that interactions between traits and context predict task engagement, as measured by the amplitude of the error-related negativity (ERN), performance, and relative frontal activity asymmetry (RFA). In Study 1, we found that drive for reward, absorption, and constraint independently p

  4. A Bivariate Chebyshev Spectral Collocation Quasilinearization Method for Nonlinear Evolution Parabolic Equations

    Directory of Open Access Journals (Sweden)

    S. S. Motsa

    2014-01-01

    This paper presents a new method for solving higher order nonlinear evolution partial differential equations (NPDEs). The method combines quasilinearization, the Chebyshev spectral collocation method, and bivariate Lagrange interpolation. In this paper, we use the method to solve several nonlinear evolution equations, such as the modified KdV-Burgers equation, the highly nonlinear modified KdV equation, Fisher's equation, the Burgers-Fisher equation, the Burgers-Huxley equation, and the Fitzhugh-Nagumo equation. The results are compared with known exact analytical solutions from the literature to confirm accuracy, convergence, and effectiveness of the method. There is congruence between the numerical results and the exact solutions to a high order of accuracy. Tables are provided to present the order of accuracy of the method, convergence graphs to verify convergence, and error graphs to show the excellent agreement between the results from this study and the known results from the literature.
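
    The building block of such Chebyshev spectral collocation schemes is the differentiation matrix on Gauss-Lobatto points; the bivariate method applies it along each coordinate. A standard construction (after Trefethen's cheb.m), checked here on a known derivative:

    ```python
    import numpy as np

    def cheb(N):
        """Chebyshev differentiation matrix and Gauss-Lobatto points on [-1, 1]
        (a standard construction, after Trefethen's cheb.m)."""
        if N == 0:
            return np.zeros((1, 1)), np.array([1.0])
        x = np.cos(np.pi * np.arange(N + 1) / N)
        c = np.hstack([2.0, np.ones(N - 1), 2.0]) * (-1.0) ** np.arange(N + 1)
        X = np.tile(x, (N + 1, 1)).T
        dX = X - X.T
        D = np.outer(c, 1.0 / c) / (dX + np.eye(N + 1))
        D -= np.diag(D.sum(axis=1))
        return D, x

    D, x = cheb(16)
    err = np.max(np.abs(D @ np.sin(np.pi * x) - np.pi * np.cos(np.pi * x)))
    print(err)   # spectral accuracy: the error drops exponentially as N grows
    ```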

  5. Noise Removal From Microarray Images Using Maximum a Posteriori Based Bivariate Estimator

    Directory of Open Access Journals (Sweden)

    A.Sharmila Agnal

    2013-01-01

    Microarray images contain information about thousands of genes in an organism and these images are affected by several types of noise, which affect the circular edges of spots and thus degrade image quality. Hence noise removal is the first step of cDNA microarray image analysis for obtaining gene expression levels and identifying infected cells. The Dual Tree Complex Wavelet Transform (DT-CWT) is preferred for denoising microarray images due to its properties such as improved directional selectivity and near shift-invariance. In this paper, bivariate estimators, namely Linear Minimum Mean Squared Error (LMMSE) and Maximum A Posteriori (MAP), derived by applying the DT-CWT, are used for denoising microarray images. Experimental results show that the MAP-based denoising method outperforms existing denoising techniques for microarray images.

  6. Characterization of positional errors and their influence on micro four-point probe measurements on a 100 nm Ru film

    International Nuclear Information System (INIS)

    Thin-film sheet resistance measurements at high spatial resolution and on small pads are important and can be realized with micrometer-scale four-point probes. As a result of the small scale, the measurements are affected by electrode position errors. We have characterized the electrode position errors in measurements on a Ru thin film using an Au-coated 12-point probe. We show that the standard deviation of the static electrode position error is on the order of 5 nm, which significantly affects the results of single-configuration measurements. Position-error-corrected dual-configuration measurements, however, are shown to eliminate the effect of position errors to a level limited either by electrical measurement noise or by dynamic position errors. We show that the probe contact points remain almost static on the surface during the measurements (measured on an atomic scale), with a standard deviation of the dynamic position errors of 3 Å. We demonstrate how to experimentally distinguish between different sources of measurement errors, e.g. electrical measurement noise, probe geometry error, as well as static and dynamic electrode position errors. (paper)
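
    The sensitivity to static position errors can be illustrated with the textbook expression for the four-point resistance on an infinite sheet. The sheet resistance and pitch below are round illustrative numbers; the 5 nm error scale matches the order reported in the paper.

    ```python
    import numpy as np

    RS = 120.0         # sheet resistance in ohm/sq (illustrative)
    PITCH = 1.5e-6     # electrode pitch in m (illustrative micro-probe scale)

    def fpp_resistance(rs, x_in, x_out, xv1, xv2):
        """V/I for collinear point probes on an infinite conducting sheet."""
        return rs / (2 * np.pi) * np.log(abs(xv1 - x_out) * abs(xv2 - x_in)
                                         / (abs(xv1 - x_in) * abs(xv2 - x_out)))

    rng = np.random.default_rng(4)
    x0 = np.array([0.0, 3.0, 1.0, 2.0]) * PITCH  # current pins 1 & 4, sense 2 & 3
    r = np.array([fpp_resistance(RS, *(x0 + rng.normal(scale=5e-9, size=4)))
                  for _ in range(10000)])
    # mean ~ RS*ln(4)/(2*pi); relative scatter from 5 nm position errors alone
    print(r.mean(), r.std() / r.mean())
    ```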

  7. First measurement and correction of nonlinear errors in the experimental insertions of the CERN Large Hadron Collider

    Science.gov (United States)

    Maclean, E. H.; Tomás, R.; Giovannozzi, M.; Persson, T. H. B.

    2015-12-01

    Nonlinear magnetic errors in low-β insertions can contribute significantly to detuning with amplitude, linear and nonlinear chromaticity, and lead to degradation of dynamic aperture and beam lifetime. As such, the correction of nonlinear errors in the experimental insertions of colliders can be of critical significance for successful operation. This is expected to be of particular relevance to the LHC's second run and its high luminosity upgrade, as well as to future colliders such as the Future Circular Collider. Current correction strategies envisioned for these colliders assume it will be possible to calculate optimized local corrections through the insertions, using a magnetic model of the errors. This paper shows however, that reliance purely upon magnetic measurements of the nonlinear errors of insertion elements is insufficient to guarantee a good correction quality in the relevant low-β* regime. It is possible to perform beam-based examination of nonlinear magnetic errors via the feed-down to readily observed beam properties upon application of closed orbit bumps, and methods based upon feed-down to tune have been utilized at RHIC, SIS18, and SPS. This paper demonstrates the extension of such methodology to include direct observation of feed-down to linear coupling in the LHC. It is further shown that such beam-based studies can be used to complement magnetic measurements performed during LHC construction, in order to validate and refine the magnetic model of the collider. Results from first attempts of the measurement and correction of nonlinear errors in the LHC experimental insertions are presented. Several discrepancies of beam-based studies with respect to the LHC magnetic model are reported.
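
    The feed-down exploited by the beam-based technique follows from shifting the multipole expansion by the orbit offset: a multipole of order n displaced by dz = x + iy generates all lower orders with binomial coefficients, the induced normal quadrupole moving the tune and the skew part driving linear coupling. A minimal sketch with normalized coefficients and an invented offset:

    ```python
    from math import comb

    def feed_down(cn, n, dz):
        """Lower-order multipole coefficients generated by a multipole of order n
        (complex field ~ cn * z^(n-1)) under a complex orbit offset dz = x + i*y."""
        return {k: cn * comb(n - 1, k - 1) * dz ** (n - k) for k in range(1, n + 1)}

    # a normal sextupole (n = 3) under a 2 mm horizontal, 1 mm vertical bump:
    for order, c in feed_down(1.0, 3, 2e-3 + 1e-3j).items():
        print(order, c)   # k=2 is the quadrupole term; its imaginary part is skew
    ```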

  8. On the importance of Task 1 and error performance measures in PRP dual-task studies.

    Science.gov (United States)

    Strobach, Tilo; Schütz, Anja; Schubert, Torsten

    2015-01-01

    The psychological refractory period (PRP) paradigm is a dominant research tool in the literature on dual-task performance. In this paradigm a first and second component task (i.e., Task 1 and Task 2) are presented with variable stimulus onset asynchronies (SOAs) and priority to perform Task 1. The main indicator of dual-task impairment in PRP situations is an increasing Task 2-RT with decreasing SOAs. This impairment is typically explained with some task components being processed strictly sequentially in the context of the prominent central bottleneck theory. This assumption could implicitly suggest that processes of Task 1 are unaffected by Task 2 and bottleneck processing, i.e., decreasing SOAs do not increase reaction times (RTs) and error rates of the first task. The aim of the present review is to assess whether PRP dual-task studies included both RT and error data presentations and statistical analyses and whether studies including both data types (i.e., RTs and error rates) show data consistent with this assumption (i.e., decreasing SOAs and unaffected RTs and/or error rates in Task 1). This review demonstrates that, in contrast to RT presentations and analyses, error data is underrepresented in a substantial number of studies. Furthermore, a substantial number of studies with RT and error data showed a statistically significant impairment of Task 1 performance with decreasing SOA. Thus, these studies produced data that is not primarily consistent with the strong assumption that processes of Task 1 are unaffected by Task 2 and bottleneck processing in the context of PRP dual-task situations; this calls for a more careful report and analysis of Task 1 performance in PRP studies and for a more careful consideration of theories proposing additions to the bottleneck assumption, which are sufficiently general to explain Task 1 and Task 2 effects. PMID:25904890

  9. On the importance of Task 1 and error performance measures in PRP dual-task studies

    Directory of Open Access Journals (Sweden)

    Tilo eStrobach

    2015-04-01

    The Psychological Refractory Period (PRP) paradigm is a dominant research tool in the literature on dual-task performance. In this paradigm a first and second component task (i.e., Task 1 and Task 2) are presented with variable stimulus onset asynchronies (SOAs) and priority to perform Task 1. The main indicator of dual-task impairment in PRP situations is an increasing Task 2-RT with decreasing SOAs. This impairment is typically explained with some task components being processed strictly sequentially in the context of the prominent central bottleneck theory. This assumption could implicitly suggest that processes of Task 1 are unaffected by Task 2 and bottleneck processing, i.e. decreasing SOAs do not increase RTs and error rates of the first task. The aim of the present review is to assess whether PRP dual-task studies included both RT and error data presentations and statistical analyses and whether studies including both data types (i.e., RTs and error rates) show data consistent with this assumption (i.e., decreasing SOAs and unaffected RTs and/or error rates in Task 1). This review demonstrates that, in contrast to RT presentations and analyses, error data is underrepresented in a substantial number of studies. Furthermore, a substantial number of studies with RT and error data showed a statistically significant impairment of Task 1 performance with decreasing SOA. Thus, these studies produced data that is not primarily consistent with the strong assumption that processes of Task 1 are unaffected by Task 2 and bottleneck processing in the context of PRP dual-task situations; this calls for a more careful report and analysis of Task 1 performance in PRP studies and for a more careful consideration of theories proposing additions to the bottleneck assumption, which are sufficiently general to explain Task 1 and Task 2 effects.

  10. MEASUREMENT ERROR EFFECT ON THE POWER OF THE CONTROL CHART FOR ZERO-TRUNCATED BINOMIAL DISTRIBUTION UNDER STANDARDIZATION PROCEDURE

    Directory of Open Access Journals (Sweden)

    Anwer Khurshid

    2014-12-01

    Full Text Available Measurement error effects on the power of control charts for the zero-truncated Poisson distribution and the ratio of two Poisson distributions were recently studied by Chakraborty and Khurshid (2013a) and Chakraborty and Khurshid (2013b), respectively. In this paper, in addition to obtaining the expression for the power of a control chart for the zero-truncated binomial distribution (ZTBD) based on a standardized normal variate, numerical calculations are presented to show the effect of errors on the power curve. To study the sensitivity of the monitoring procedure, the average run length (ARL) is also considered.
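
    For intuition only, a minimal numeric sketch (Python) of the quantity this record studies: the normal-approximation power of a ±k-sigma chart for a zero-truncated binomial, with an assumed additive measurement-error variance inflating the plotted statistic. All parameter values are illustrative, not taken from the paper.

        import numpy as np
        from scipy.stats import norm

        def ztb_moments(n, p):
            # mean and variance of Binomial(n, p) truncated at zero
            t = 1.0 - (1.0 - p) ** n                  # P(X >= 1)
            m1 = n * p / t
            m2 = (n * p * (1.0 - p) + (n * p) ** 2) / t
            return m1, m2 - m1 ** 2

        def chart_power(n, p0, p1, k=3.0, err_var=0.0):
            # +-k sigma limits set from the in-control ZTBD; err_var is an
            # assumed additive measurement-error variance on the statistic
            mu0, v0 = ztb_moments(n, p0)
            mu1, v1 = ztb_moments(n, p1)
            s0, s1 = np.sqrt(v0 + err_var), np.sqrt(v1 + err_var)
            upper = (mu0 + k * s0 - mu1) / s1
            lower = (mu0 - k * s0 - mu1) / s1
            return norm.sf(upper) + norm.cdf(lower)

        for ev in (0.0, 0.5, 1.0):                    # growing error variance
            print(ev, round(chart_power(50, 0.05, 0.10, err_var=ev), 4))

    Running the loop shows the out-of-control detection probability falling as the assumed error variance grows, which is the qualitative effect the record quantifies.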

  11. Estimating the U.S. Demand for Sugar in the Presence of Measurement Error in the Data

    OpenAIRE

    Uri, Noel D

    1994-01-01

    Inaccuracy in the measurement of the price data for the substitute sweeteners for sugar is a problem encountered in the estimation of the demand for sugar. Two diagnostics are introduced to assess the effect that this measurement error has on the estimated coefficients of the sugar demand relationship. The regression coefficient bounds diagnostic is used to indicate a range in which the true price responsiveness of consumers to changes in the price of sugar substitutes lies. The bias correcti...

  12. The reconstructed residual error: a novel segmentation evaluation measure for reconstructed images in tomography

    NARCIS (Netherlands)

    Roelandts, T.; Batenburg, K.J.; Dekker, A.J. den; Sijbers, J.

    2014-01-01

    In this paper, we present the reconstructed residual error, which evaluates the quality of a given segmentation of a reconstructed image in tomography. This novel evaluation method, which is independent of the methods that were used to reconstruct and segment the image, is applicable to segmentation

  13. Correction of error in two-dimensional wear measurements of cemented hip arthroplasties

    NARCIS (Netherlands)

    The, Bertram; Mol, Linda; Diercks, Ron L.; van Ooijen, Peter M. A.; Verdonschot, Nico

    2006-01-01

    The irregularity of individual wear patterns of total hip prostheses seen during patient followup may result partially from differences in radiographic projection of the components between radiographs. A method to adjust for this source of error would increase the value of individual wear curves. We

  14. Errors of Measurement, Theory, and Public Policy. William H. Angoff Memorial Lecture Series

    Science.gov (United States)

    Kane, Michael

    2010-01-01

    The 12th annual William H. Angoff Memorial Lecture was presented by Dr. Michael T. Kane, ETS's (Educational Testing Service) Samuel J. Messick Chair in Test Validity and the former Director of Research at the National Conference of Bar Examiners. Dr. Kane argues that it is important for policymakers to recognize the impact of errors of measurement…

  15. A Preliminary Study on the Measures to Assess the Organizational Safety: The Cultural Impact on Human Error Potential

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Yong Hee; Lee, Yong Hee [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2011-10-15

    The Fukushima I nuclear accident following the Tohoku earthquake and tsunami on 11 March 2011 occurred after twelve years had passed since the JCO accident, which was caused by an error made by JCO employees. These accidents, along with the Chernobyl accident, are associated with characteristic problems of various organizations; they caused severe social and economic disruption and have had significant environmental and health impacts. The cultural problems with human errors occur for various reasons, and different actions are needed to prevent different errors. Unfortunately, much of the research on organization and human error has produced widely varying results which call for different approaches. In other words, we have to find more practical solutions from various researches for nuclear safety and lead a systematic approach to organizational deficiency causing human error. This paper reviews Hofstede's criteria, IAEA safety culture, safety areas of the periodic safety review (PSR), teamwork and performance, and an evaluation of HANARO safety culture to verify the measures used to assess organizational safety.

  16. Measuring and Correcting Wind-Induced Pointing Errors of the Green Bank Telescope Using an Optical Quadrant Detector

    CERN Document Server

    Ries, Paul; Constantikes, Kim T; Brandt, Joseph J; Ghigo, Frank D; Mason, Brian S; Prestage, Richard M; Ray, Jason; Schwab, Frederic R

    2011-01-01

    Wind-induced pointing errors are a serious concern for large-aperture high-frequency radio telescopes. In this paper, we describe the implementation of an optical quadrant detector instrument that can detect and provide a correction signal for wind-induced pointing errors on the 100m diameter Green Bank Telescope (GBT). The instrument was calibrated using a combination of astronomical measurements and metrology. We find that the main wind-induced pointing errors on time scales of minutes are caused by the feedarm being blown along the direction of the wind vector. We also find that wind-induced structural excitation is virtually non-existent. We have implemented offline software to apply pointing corrections to the data from imaging instruments such as the MUSTANG 3.3 mm bolometer array, which can recover ~70% of sensitivity lost due to wind-induced pointing errors. We have also performed preliminary tests that show great promise for correcting these pointing errors in real-time using the telescope's subrefle...

  17. Comparison of Model Reliabilities from Single-Step and Bivariate Blending Methods

    DEFF Research Database (Denmark)

    Taskinen, Matti; Mäntysaari, Esa; Lidauer, Martin;

    2013-01-01

    Model based reliabilities in genetic evaluation are compared between three methods: animal model BLUP, single-step BLUP, and bivariate blending after genomic BLUP. The original bivariate blending is revised in this work to better accommodate animal models. The study data is extracted from the production data ... Model reliabilities by the single-step and the bivariate blending methods were higher than by the animal model due to genomic information. Compared to the single-step method, the bivariate blending method reliability estimates were, in general, lower. Computationally, the bivariate blending method was, on the other hand, lighter than the single-step method.

  18. A Model of the Dynamic Error as a Measurement Result of Instruments Defining the Parameters of Moving Objects

    Science.gov (United States)

    Dichev, D.; Koev, H.; Bakalova, T.; Louda, P.

    2014-08-01

    The present paper considers a new model for the formation of the dynamic error inertial component. It is very effective in the analysis and synthesis of measuring instruments positioned on moving objects and measuring their movement parameters. The block diagram developed within this paper is used as a basis for defining the mathematical model. The block diagram is based on the set-theoretic description of the measuring system, its input and output quantities and the process of dynamic error formation. The model reflects the specific nature of the formation of the dynamic error inertial component. In addition, the model conforms to the logical interrelation and sequence of the physical processes that form it. The effectiveness, usefulness and advantages of the model proposed are rooted in the wide range of possibilities it provides in relation to the analysis and synthesis of those measuring instruments, the formulation of algorithms and optimization criteria, as well as the development of new intelligent measuring systems with improved accuracy characteristics in dynamic mode.

  19. A Model of the Dynamic Error as a Measurement Result of Instruments Defining the Parameters of Moving Objects

    Directory of Open Access Journals (Sweden)

    Dichev D.

    2014-08-01

    Full Text Available The present paper considers a new model for the formation of the dynamic error inertial component. It is very effective in the analysis and synthesis of measuring instruments positioned on moving objects and measuring their movement parameters. The block diagram developed within this paper is used as a basis for defining the mathematical model. The block diagram is based on the set-theoretic description of the measuring system, its input and output quantities and the process of dynamic error formation. The model reflects the specific nature of the formation of the dynamic error inertial component. In addition, the model conforms to the logical interrelation and sequence of the physical processes that form it. The effectiveness, usefulness and advantages of the model proposed are rooted in the wide range of possibilities it provides in relation to the analysis and synthesis of those measuring instruments, the formulation of algorithms and optimization criteria, as well as the development of new intelligent measuring systems with improved accuracy characteristics in dynamic mode.

  20. Probabilistic floodplain hazard mapping: managing uncertainty by using a bivariate approach for flood frequency analysis

    Science.gov (United States)

    Candela, Angela; Tito Aronica, Giuseppe

    2014-05-01

    Floods are a global problem and are considered the most frequent natural disaster worldwide. Many studies show that the severity and frequency of floods have increased in recent years and underline the difficulty of separating the effects of natural climatic changes from human influences such as land management practices, urbanization, etc. Flood risk analysis and assessment are required to provide information on current or future flood hazard and risks in order to accomplish flood risk mitigation and to propose, evaluate and select measures to reduce it. Both components of risk can be mapped individually and are affected by multiple uncertainties, as is the joint estimate of flood risk. Major sources of uncertainty include the statistical analysis of extreme events, the definition of the hydrological input, the representation of channel and floodplain topography, and the choice of effective hydraulic roughness coefficients. The classical procedure to estimate flood discharge for a chosen probability of exceedance is to use a rainfall-runoff model, associating with the risk the same return period as the original rainfall, in accordance with the iso-frequency criterion. Alternatively, a flood frequency analysis is applied to a given record of discharge data, but again a single probability is associated with flood discharges and the respective risk. Moreover, since flood peaks and corresponding flood volumes are variables of the same phenomenon, they should be directly correlated and, consequently, multivariate statistical analyses must be applied. This study presents an innovative approach to obtain flood hazard maps where the hydrological input (synthetic flood design event) to a 2D hydraulic model has been defined by generating flood peak discharges and volumes from: a) a classical univariate approach, b) a bivariate statistical analysis, through the use of copulas. The univariate approach considers flood hydrograph generation by an indirect approach (rainfall-runoff transformation using input rainfall
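
    To make the bivariate step concrete, a hedged sketch of sampling correlated peak/volume pairs through a copula; the Clayton family, its parameter theta, and the Gumbel margins below are illustrative assumptions, not the study's fitted model.

        import numpy as np
        from scipy.stats import gumbel_r

        rng = np.random.default_rng(7)
        theta = 2.0                          # Clayton dependence (illustrative)
        u1 = rng.uniform(size=10_000)
        v = rng.uniform(size=10_000)
        # conditional-inverse sampling of the Clayton copula
        u2 = (u1 ** (-theta) * (v ** (-theta / (1 + theta)) - 1) + 1) ** (-1 / theta)

        # illustrative Gumbel (EV1) margins for peak [m^3/s] and volume [hm^3]
        peak = gumbel_r.ppf(u1, loc=150.0, scale=60.0)
        volume = gumbel_r.ppf(u2, loc=12.0, scale=5.0)
        print(np.corrcoef(peak, volume)[0, 1])   # positive, copula-induced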

  1. Upper Bounds on the Probability of Error in terms of Mean Divergence Measures

    CERN Document Server

    Taneja, Inder Jeet

    2011-01-01

    In this paper we shall consider some famous means such as the arithmetic, harmonic, geometric, root square mean, etc. Considering the differences of these means, we can establish some inequalities among them. Interestingly, the differences of the means considered are convex functions. Applying some properties, upper bounds on the probability of error are established in this paper. It is also shown that the results obtained are sharper than those obtained by directly applying known inequalities.

  2. Measurement Error Variance of Test-Day Obervations from Automatic Milking Systems

    DEFF Research Database (Denmark)

    Pitkänen, Timo; Mäntysaari, Esa A; Nielsen, Ulrik S;

    2012-01-01

    Automated milking systems (AMS) are becoming more popular in dairy farms. In this paper we present an approach for estimation of residual error covariance matrices for AMS and conventional milking system (CMS) observations. The variances for other random effects are kept as defined in the evaluation model. AMS residual variances were found to be 16 to 37 percent smaller for milk and protein yield and 42 to 47 percent larger for fat yield compared to CMS.

  3. Bias and spread in extreme value theory measurements of probability of error

    Science.gov (United States)

    Smith, J. G.

    1972-01-01

    Extreme value theory is examined to explain the cause of the bias and spread in performance of communications systems characterized by low bit rates and high data reliability requirements, for cases in which underlying noise is Gaussian or perturbed Gaussian. Experimental verification is presented and procedures that minimize these effects are suggested. Even under these conditions, however, extreme value theory test results are not particularly more significant than bit error rate tests.

  4. Variation of haemoglobin extinction coefficients can cause errors in the determination of haemoglobin concentration measured by near-infrared spectroscopy

    Science.gov (United States)

    Kim, J. G.; Liu, H.

    2007-10-01

    Near-infrared spectroscopy or imaging has been extensively applied to various biomedical applications since it can detect the concentrations of oxyhaemoglobin (HbO2), deoxyhaemoglobin (Hb) and total haemoglobin (Hbtotal) from deep tissues. To quantify concentrations of these haemoglobin derivatives, the extinction coefficient values of HbO2 and Hb have to be employed. However, it was not well recognized among researchers that small differences in extinction coefficients could cause significant errors in quantifying the concentrations of haemoglobin derivatives. In this study, we derived equations to estimate the errors of haemoglobin derivatives caused by the variation of haemoglobin extinction coefficients. To prove our error analysis, we performed experiments using liquid-tissue phantoms containing 1% Intralipid in a phosphate-buffered saline solution. A gas intervention of pure oxygen was given in the solution to examine the oxygenation changes in the phantom, and 3 mL of human blood was added twice to show the changes in [Hbtotal]. The error calculation has shown that even a small variation (0.01 cm^-1 mM^-1) in extinction coefficients can produce appreciable relative errors in the quantification of Δ[HbO2], Δ[Hb] and Δ[Hbtotal]. We have also observed that the error of Δ[Hbtotal] is not always larger than those of Δ[HbO2] and Δ[Hb]. This study concludes that we need to be aware of any variation in haemoglobin extinction coefficients, which could result from changes in temperature, and to utilize the corresponding animal's haemoglobin extinction coefficients in animal experiments, in order to obtain more accurate values of Δ[HbO2], Δ[Hb] and Δ[Hbtotal] from in vivo tissue measurements.
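
    A small sketch of the sensitivity described here: solving the two-wavelength modified Beer-Lambert system for the concentration changes, then re-solving with one extinction coefficient perturbed by 0.01 cm^-1 mM^-1. The coefficient table, path length and concentration changes are illustrative stand-ins, not values from the study.

        import numpy as np

        # illustrative extinction coefficients [cm^-1 mM^-1], rows = wavelengths
        E = np.array([[0.52, 1.55],     # 750 nm: [HbO2, Hb]
                      [1.06, 0.78]])    # 830 nm: [HbO2, Hb]
        L = 1.0                         # effective optical path length [cm]

        true_dc = np.array([0.020, -0.010])   # Delta[HbO2], Delta[Hb] in mM
        dod = (E @ true_dc) * L               # simulated optical density changes

        E_wrong = E.copy()
        E_wrong[0, 0] += 0.01                 # a 0.01 cm^-1 mM^-1 table error
        est_dc = np.linalg.solve(E_wrong * L, dod)
        print((est_dc - true_dc) / np.abs(true_dc) * 100)  # relative error, %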

  5. Robustness of SOC Estimation Algorithms for EV Lithium-Ion Batteries against Modeling Errors and Measurement Noise

    Directory of Open Access Journals (Sweden)

    Xue Li

    2015-01-01

    Full Text Available State of charge (SOC) is one of the most important parameters in a battery management system (BMS). There are numerous algorithms for SOC estimation, mostly of model-based observer/filter types such as Kalman filters, closed-loop observers, and robust observers. Modeling errors and measurement noises have a critical impact on the accuracy of SOC estimation in these algorithms. This paper is a comparative study of the robustness of SOC estimation algorithms against modeling errors and measurement noises. By using a typical battery platform for vehicle applications with sensor noise and battery aging characterization, three popular and representative SOC estimation methods (extended Kalman filter, PI-controlled observer, and H∞ observer) are compared on such robustness. The simulation and experimental results demonstrate the deterioration of SOC estimation accuracy under modeling errors resulting from aging and larger measurement noise, which is quantitatively characterized. The findings of this paper provide useful information on the following aspects: (1) how SOC estimation accuracy depends on modeling reliability and voltage measurement accuracy; (2) pros and cons of typical SOC estimators in their robustness and reliability; (3) guidelines for requirements on battery system identification and sensor selections.

  6. Interactive analysis of gappy bivariate time series using AGSS

    OpenAIRE

    Lewis, Peter A.W.; Ray, Bonnie K.

    1992-01-01

    Bivariate time series which display nonstationary behavior, such as cycles or long-term trends, are common in fields such as oceanography and meteorology. These are usually very large-scale data sets and often may contain long gaps of missing values in one or both series, with the gaps perhaps occurring at different time periods in the two series. We present a simplified but effective method of interactively examining and filling in the missing values in such series using extensions of the me...

  7. Study on Laser Visual Measurement Method for Seamless Steel Pipe Straightness Error by Multiple Line-structured Laser Sensors

    Institute of Scientific and Technical Information of China (English)

    陈长水; 谢建平; 王佩琳

    2001-01-01

    An original non-contact measurement method using multiple line-structured laser sensors is introduced in this paper for measuring the straightness error of seamless steel pipe. An arc appears on the surface of the measured seamless steel pipe against a line-structured laser source. After the image of the arc is captured by a CCD camera, the coordinates of the center of the pipe cross-section circle containing the arc can be worked out through a certain algorithm. Similarly, multiple line-structured laser sensors are mounted parallel to the pipe. The straightness error of the seamless steel pipe, therefore, can be inferred from the coordinates of the multiple cross-section centers obtained from every line-structured laser sensor.

  8. Effect of the quantum nature of detecting low-intensity radiation on the distance measurement error in pulsed laser ranging

    International Nuclear Information System (INIS)

    The dispersion of estimates of the time position of low-intensity radiation pulses is studied as a function of their duration and detection parameters. Simulations showed that the error of distance measurement to an object in one ranging cycle can be 0.05-0.10 m. A method for obtaining precision estimates of the distance to objects without corner reflectors is proposed. (laser applications and other topics in quantum electronics)
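
    A hedged Monte-Carlo sketch of the effect studied here: the dispersion of a centroid estimate of pulse time position under photon-counting detection, converted to a one-way range error. Pulse width, jitter and photon counts are assumed values chosen only to land in the same 0.05-0.10 m regime; they are not the paper's simulation settings.

        import numpy as np

        rng = np.random.default_rng(1)
        c = 3.0e8                            # speed of light [m/s]
        t0 = 1.0e-6                          # true pulse epoch [s]
        pulse_sigma = 2.0e-9                 # pulse width scale [s]
        jitter = 0.3e-9                      # detector timing jitter [s]

        def one_cycle(mean_photons):
            k = rng.poisson(mean_photons)    # quantum nature: random photon count
            if k == 0:
                return None
            t = rng.normal(t0, pulse_sigma, size=k) + rng.normal(0.0, jitter, size=k)
            return t.mean()                  # centroid estimate of pulse position

        for n_ph in (2, 5, 20):
            est = np.array([e for e in (one_cycle(n_ph) for _ in range(5000))
                            if e is not None])
            print(n_ph, c * est.std() / 2.0) # one-way range error [m]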

  9. From transmission error measurement to Pulley-Belt slip determination in serpentine belt drives: influence of tensioner and belt characteristics

    OpenAIRE

    Manin, Lionel; Michon, Guilhem; Rémond, Didier; Dufour, Regis

    2007-01-01

    Serpentine belt drives are often used in the front end accessory drive of automotive engines. The accessories' resistant torques are getting higher with new technological innovations such as the starter-alternator, and belt transmissions are always asked for higher capacity. Two kinds of tensioners are used to maintain the minimum tension that ensures power transmission and minimizes slip: dry friction or hydraulic tensioners. An experimental device and a specific transmission error measurement method have been u...

  10. Prevalence, Impact, and Adjustments of Measurement Error in Retrospective Reports of Unemployment: An Analysis Using Swedish Administrative Data.

    OpenAIRE

    Pina Sanchez, Jose Maria

    2014-01-01

    In this thesis I carry out an encompassing analysis of the problem of measurement error in retrospectively collected work histories using data from the “Longitudinal Study of the Unemployed”. This dataset has the unique feature of linking survey responses to a retrospective question on work status to administrative data from the Swedish Register of Unemployment. Under the assumption that the register data is a gold standard I explore three research questions: i) what is the prevalence of and ...

  11. Reduction of truncation errors in planar near-field aperture antenna measurements using the method of alternating orthogonal projections

    DEFF Research Database (Denmark)

    Martini, Enrica; Breinbjerg, Olav; Maci, Stefano

    2006-01-01

    A simple and effective procedure for the reduction of truncation error in planar near-field to far-field transformations is presented. The starting point is the consideration that the actual scan plane truncation implies a reliability of the reconstructed plane wave spectrum of the field radiated ... to define a convergent iterative process which is also stable against moderately noisy data. Far-field patterns reconstructed from both simulated and measured near-field data demonstrate the effectiveness of the proposed procedure.
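
    The alternating-projections idea can be illustrated in one dimension by its classical Gerchberg-Papoulis analogue: alternately enforcing the band limitation and the measured (truncated) samples. This is a toy stand-in under assumed sizes, not the authors' planar near-field procedure.

        import numpy as np

        rng = np.random.default_rng(0)
        N, B = 256, 12                                  # samples, band limit (bins)
        spec = rng.normal(size=B) + 1j * rng.normal(size=B)
        f = np.fft.irfft(np.pad(spec, (0, N // 2 + 1 - B)), n=N)  # band-limited truth
        known = np.zeros(N, dtype=bool)
        known[64:192] = True                            # the truncated "scan"

        g = np.where(known, f, 0.0)
        for _ in range(200):
            G = np.fft.rfft(g)
            G[B:] = 0.0                                 # projection 1: band limit
            g = np.fft.irfft(G, n=N)
            g[known] = f[known]                         # projection 2: measured data
        print(np.max(np.abs(g - f)))                    # residual outside the scan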

  12. Mixed modes and measurement error: using cognitive interviewing to explore the results of a mixed modes experiment

    OpenAIRE

    Campanelli, Pamela C.; Blake, Margaret; Mackie, Michelle; Hope, Steven

    2015-01-01

    This paper explores the use of cognitive interviewing as a pre-planned follow-up to a quantitative mixed modes experiment. It describes both the quantitative and cognitive interview phases and results. The goal for both was to explore measurement error differences between (computer-assisted personal interviewing - CAPI, computer-assisted telephone interviewing - CATI and computer-assisted web interviewing - CAWI). The cognitive interviewing produced evidence that in particular circumstances, ...

  13. Mixed modes and measurement error: Using cognitive interviewing to explore the results of a mixed modes experiment

    OpenAIRE

    Campanelli, Pamela; Blake, Margaret; Mackie, Michelle; Hope, Steven

    2015-01-01

    This paper explores the use of cognitive interviewing as a pre-planned follow-up to a quantitative mixed modes experiment. It describes both the quantitative and cognitive interview phases and results. The goal for both was to explore measurement error differences between (computer-assisted personal interviewing - CAPI, computer-assisted telephone interviewing - CATI and computer-assisted web interviewing - CAWI). The cognitive interviewing produced evidence that in particular circumstances, ...

  14. A Simulation Study of Categorizing Continuous Exposure Variables Measured with Error in Autism Research: Small Changes with Large Effects

    Directory of Open Access Journals (Sweden)

    Karyn Heavner

    2015-08-01

    Full Text Available Variation in the odds ratio (OR) resulting from the selection of cutoffs for categorizing continuous variables is rarely discussed. We present results for the effect of varying cutoffs used to categorize a mismeasured exposure in a simulated population in the context of autism spectrum disorders research. Simulated cohorts were created with three distinct exposure-outcome curves and three measurement error variances for the exposure. ORs were calculated using logistic regression for 61 cutoffs (mean ± 3 standard deviations) used to dichotomize the observed exposure. ORs were calculated for five categories with a wide range for the cutoffs. For each scenario and cutoff, the OR, sensitivity, and specificity were calculated. The three exposure-outcome relationships had distinctly shaped OR (versus cutoff) curves, but increasing measurement error obscured the shape. At extreme cutoffs, there was non-monotonic oscillation in the ORs that cannot be attributed to “small numbers.” Exposure misclassification following categorization of the mismeasured exposure was differential, as predicted by theory. Sensitivity was higher among cases and specificity among controls. Cutoffs chosen for categorizing continuous variables can have profound effects on study results. When measurement error is not too great, the shape of the OR curve may provide insight into the true shape of the exposure-disease relationship.
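
    A compact simulation in the spirit of this record, assuming a logistic exposure-outcome model and additive normal measurement error; it shows how the 2x2-table OR drifts with the cutoff applied to the mismeasured exposure. All parameters are illustrative.

        import numpy as np

        rng = np.random.default_rng(42)
        n = 200_000
        x = rng.normal(size=n)                       # true exposure
        w = x + rng.normal(scale=0.8, size=n)        # mismeasured exposure
        p = 1.0 / (1.0 + np.exp(-(-3.0 + 0.5 * x))) # logistic outcome model
        y = rng.uniform(size=n) < p

        def odds_ratio(exposed, case):
            a = np.sum(exposed & case);   b = np.sum(exposed & ~case)
            c = np.sum(~exposed & case);  d = np.sum(~exposed & ~case)
            return (a * d) / (b * c)

        for cut in (-2.0, -1.0, 0.0, 1.0, 2.0):      # cutoffs on the observed w
            print(cut, round(odds_ratio(w > cut, y), 2))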

  15. A new analysis of fine-structure constant measurements and modelling errors from quasar absorption lines

    OpenAIRE

    Wilczynska, Michael R.; Webb, John K.; King, Julian A.; Murphy, Michael T.; Bainbridge, Matthew B.; Flambaum, Victor V.

    2015-01-01

    We present an analysis of 23 absorption systems along the lines of sight towards 18 quasars in the redshift range $0.4 \leq z_{abs} \leq 2.3$ observed on the Very Large Telescope (VLT) using the Ultraviolet and Visual Echelle Spectrograph (UVES). Considering both statistical and systematic error contributions we find a robust estimate of the weighted mean deviation of the fine-structure constant from its current, laboratory value of $\Delta\alpha/\alpha=\left(0.22\pm0.23\right)\times10^{-5}$...

  16. The Measure of Human Error: Direct and Indirect Performance Shaping Factors

    Energy Technology Data Exchange (ETDEWEB)

    Ronald L. Boring; Candice D. Griffith; Jeffrey C. Joe

    2007-08-01

    The goal of performance shaping factors (PSFs) is to provide measures to account for human performance. PSFs fall into two categories—direct and indirect measures of human performance. While some PSFs such as “time to complete a task” are directly measurable, other PSFs, such as “fitness for duty,” can only be measured indirectly through other measures and PSFs, such as through fatigue measures. This paper explores the role of direct and indirect measures in human reliability analysis (HRA) and the implications that measurement theory has on analyses and applications using PSFs. The paper concludes with suggestions for maximizing the reliability and validity of PSFs.

  17. Error analysis of Abel retrieved electron density profiles from radio occultation measurements

    Directory of Open Access Journals (Sweden)

    X. Yue

    2010-01-01

    Full Text Available This letter reports for the first time the simulated error distribution of radio occultation (RO) electron density profiles (EDPs) from the Abel inversion in a systematic way. Occultation events observed by the COSMIC satellites are simulated during the spring equinox of 2008 by calculating the integrated total electron content (TEC) along the COSMIC occultation paths with the "true" electron density from an empirical model. The retrieval errors are computed by comparing the retrieved EDPs with the "true" EDPs. The results show that the retrieved NmF2 and hmF2 are generally in good agreement with the true values, but the reliability of the retrieved electron density degrades in low latitude regions and at low altitudes. Specifically, the Abel retrieval method overestimates electron density to the north and south of the crests of the equatorial ionization anomaly (EIA), and introduces artificial plasma caves underneath the EIA crests. At lower altitudes (E- and F1-regions), it results in three pseudo peaks in daytime electron densities along the magnetic latitude and a pseudo trough in nighttime equatorial electron densities.

  18. On Measurement of Efficiency of Cobb-Douglas Production Function with Additive and Multiplicative Errors

    Directory of Open Access Journals (Sweden)

    Md. Moyazzem Hossain

    2015-02-01

    Full Text Available In developing countries, the efficiency of economic development is determined by the analysis of industrial production. An examination of the characteristics of the industrial sector is an essential aspect of growth studies. Most developed countries are highly industrialized, as captured by the brief "The more industrialization, the more development". For proper industrialization and industrial development we have to study the industrial input-output relationship, which leads to production analysis. For a number of reasons econometricians believe that industrial production is the most important component of economic development because, if domestic industrial production increases, GDP will increase; if the elasticity of labor is higher, employment rates will increase; and investment will increase if the elasticity of capital is higher. In this regard, this paper should be helpful in suggesting the most suitable Cobb-Douglas production function to forecast the production process for some selected manufacturing industries of developing countries like Bangladesh. This paper chooses the appropriate Cobb-Douglas function which gives the optimal combination of inputs, that is, the combination that enables it to produce the desired level of output with minimum cost and hence with maximum profitability, for some selected manufacturing industries of Bangladesh over the period 1978-79 to 2011-2012. The estimated results show that the estimates of both capital and labor elasticity of the Cobb-Douglas production function with additive errors are more efficient than those of the Cobb-Douglas production function with multiplicative errors.

  19. Household Poverty Dynamics in Malawi: A Bivariate Probit Analysis

    Science.gov (United States)

    Kenala Bokosi, Fanwell

    The aim of this study is to identify the sources of expenditure and poverty dynamics among Malawian households between 1998 and 2002 and to model poverty transitions in Malawi using a bivariate probit model with endogenous selection to address the initial conditions problem. The exogeneity of the initial state is strongly rejected; ignoring this would result in considerable overstatement of the effects of the explanatory factors. The results of the bivariate probit model do indicate that education of the household head, per capita acreage cultivated and changes in household size are significantly related to the probability of being poor in 2002 irrespective of the poverty status in 1998. For those households who were poor in 1998, the probability of being poor in 2002 was significantly influenced by household size, value of livestock owned and mean time to services, while residence in the Northern region was a significant variable in determining the probability of being poor in 2002 for households that were not poor in 1998.
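
    A hedged sketch of the plain bivariate probit likelihood (without the endogenous-selection extension used in this study), built from bivariate normal orthant probabilities; the simulated data and all variable names are illustrative.

        import numpy as np
        from scipy.stats import multivariate_normal
        from scipy.optimize import minimize

        def neg_loglik(params, X, y1, y2):
            k = X.shape[1]
            b1, b2 = params[:k], params[k:2 * k]
            rho = np.tanh(params[-1])              # keeps correlation in (-1, 1)
            s1, s2 = 2 * y1 - 1, 2 * y2 - 1        # map {0, 1} -> {-1, +1}
            ll = 0.0
            for a, b, s in zip(s1 * (X @ b1), s2 * (X @ b2), s1 * s2):
                # P(y1, y2) is a bivariate normal orthant probability
                p = multivariate_normal.cdf([a, b], cov=[[1.0, s * rho],
                                                         [s * rho, 1.0]])
                ll += np.log(max(p, 1e-300))
            return -ll

        rng = np.random.default_rng(9)
        n = 300                                    # small: the loop above is slow
        X = np.column_stack([np.ones(n), rng.normal(size=n)])
        e = rng.multivariate_normal([0, 0], [[1, 0.5], [0.5, 1]], size=n)
        y1 = (X @ np.array([0.2, 1.0]) + e[:, 0] > 0).astype(int)
        y2 = (X @ np.array([-0.3, 0.8]) + e[:, 1] > 0).astype(int)
        fit = minimize(neg_loglik, np.zeros(5), args=(X, y1, y2))
        print(np.tanh(fit.x[-1]))                  # recovered error correlation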

  20. Multiple imputation methods for bivariate outcomes in cluster randomised trials.

    Science.gov (United States)

    DiazOrdaz, K; Kenward, M G; Gomes, M; Grieve, R

    2016-09-10

    Missing observations are common in cluster randomised trials. The problem is exacerbated when modelling bivariate outcomes jointly, as the proportion of complete cases is often considerably smaller than the proportion having either of the outcomes fully observed. Approaches taken to handling such missing data include the following: complete case analysis, single-level multiple imputation that ignores the clustering, multiple imputation with a fixed effect for each cluster and multilevel multiple imputation. We contrasted the alternative approaches to handling missing data in a cost-effectiveness analysis that uses data from a cluster randomised trial to evaluate an exercise intervention for care home residents. We then conducted a simulation study to assess the performance of these approaches on bivariate continuous outcomes, in terms of confidence interval coverage and empirical bias in the estimated treatment effects. Missing-at-random clustered data scenarios were simulated following a full-factorial design. Across all the missing data mechanisms considered, the multiple imputation methods provided estimators with negligible bias, while complete case analysis resulted in biased treatment effect estimates in scenarios where the randomised treatment arm was associated with missingness. Confidence interval coverage was generally in excess of nominal levels (up to 99.8%) following fixed-effects multiple imputation and too low following single-level multiple imputation. Multilevel multiple imputation led to coverage levels of approximately 95% throughout. © 2016 The Authors. Statistics in Medicine Published by John Wiley & Sons Ltd. PMID:26990655

  1. Hand-held dynamometry in patients with haematological malignancies: Measurement error in the clinical assessment of knee extension strength

    Directory of Open Access Journals (Sweden)

    Uebelhart Daniel

    2009-03-01

    Full Text Available Abstract Background Hand-held dynamometry is a portable and inexpensive method to quantify muscle strength. To determine if muscle strength has changed, an examiner must know what part of the difference between a patient's pre-treatment and post-treatment measurements is attributable to real change, and what part is due to measurement error. This study aimed to determine the relative and absolute reliability of intra- and inter-observer strength measurements with a hand-held dynamometer (HHD). Methods Two observers performed maximum voluntary peak torque measurements (MVPT) for isometric knee extension in 24 patients with haematological malignancies. For each patient, the measurements were carried out on the same day. The main outcome measures were the intraclass correlation coefficient (ICC ± 95% CI), the standard error of measurement (SEM), the smallest detectable difference (SDD), the relative values as % of the grand mean of the SEM and SDD, and the limits of agreement for the intra- and inter-observer '3 repetition average' and the 'highest value of 3 MVPT' knee extension strength measures. Results The intra-observer ICCs were 0.94 for the average of 3 MVPT (95% CI: 0.86–0.97) and 0.86 for the highest value of 3 MVPT (95% CI: 0.71–0.94). The ICCs for the inter-observer measurements were 0.89 for the average of 3 MVPT (95% CI: 0.75–0.95) and 0.77 for the highest value of 3 MVPT (95% CI: 0.54–0.90). The SEMs for the intra-observer measurements were 6.22 Nm (3.98% of the grand mean (GM)) and 9.83 Nm (5.88% of GM). For the inter-observer measurements, the SEMs were 9.65 Nm (6.65% of GM) and 11.41 Nm (6.73% of GM). The SDDs for the generated parameters varied from 17.23 Nm (11.04% of GM) to 27.26 Nm (17.09% of GM) for intra-observer measurements, and 26.76 Nm (16.77% of GM) to 31.62 Nm (18.66% of GM) for inter-observer measurements, with similar results for the limits of agreement. Conclusion The results indicate that there is acceptable relative reliability
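
    The absolute-reliability arithmetic behind such results follows from the standard formulas SEM = SD * sqrt(1 - ICC) and SDD = 1.96 * sqrt(2) * SEM. In the sketch below, the SD inputs are back-calculated guesses that approximately reproduce the record's intra-observer numbers (6.22/17.23 Nm and 9.83/27.26 Nm); they are not data from the study.

        import numpy as np

        def sem_sdd(sd, icc):
            sem = sd * np.sqrt(1.0 - icc)          # SEM = SD * sqrt(1 - ICC)
            return sem, 1.96 * np.sqrt(2.0) * sem  # SDD = 1.96 * sqrt(2) * SEM

        for icc, sd in ((0.94, 25.4), (0.86, 26.3)):   # assumed sample SDs [Nm]
            sem, sdd = sem_sdd(sd, icc)
            print(f"ICC={icc}: SEM={sem:.2f} Nm, SDD={sdd:.2f} Nm")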

  2. A semiparametric approach to simultaneous covariance estimation for bivariate sparse longitudinal data.

    Science.gov (United States)

    Das, Kiranmoy; Daniels, Michael J

    2014-03-01

    Estimation of the covariance structure for irregular sparse longitudinal data has been studied by many authors in recent years but typically using fully parametric specifications. In addition, when data are collected from several groups over time, it is known that assuming the same or completely different covariance matrices over groups can lead to loss of efficiency and/or bias. Nonparametric approaches have been proposed for estimating the covariance matrix for regular univariate longitudinal data by sharing information across the groups under study. For the irregular case, with longitudinal measurements that are bivariate or multivariate, modeling becomes more difficult. In this article, to model bivariate sparse longitudinal data from several groups, we propose a flexible covariance structure via a novel matrix stick-breaking process for the residual covariance structure and a Dirichlet process mixture of normals for the random effects. Simulation studies are performed to investigate the effectiveness of the proposed approach over more traditional approaches. We also analyze a subset of Framingham Heart Study data to examine how the blood pressure trajectories and covariance structures differ for the patients from different BMI groups (high, medium, and low) at baseline. PMID:24400941

  3. An integrated user-friendly ArcMAP tool for bivariate statistical modeling in geoscience applications

    Science.gov (United States)

    Jebur, M. N.; Pradhan, B.; Shafri, H. Z. M.; Yusof, Z.; Tehrany, M. S.

    2014-10-01

    Modeling and classification difficulties are fundamental issues in natural hazard assessment. A geographic information system (GIS) is a domain that requires users to use various tools to perform different types of spatial modeling. Bivariate statistical analysis (BSA) assists in hazard modeling. To perform this analysis, several calculations are required and the user has to transfer data from one format to another. Most researchers perform these calculations manually by using Microsoft Excel or other programs. This process is time consuming and carries a degree of uncertainty. The lack of proper tools to implement BSA in a GIS environment prompted this study. In this paper, a user-friendly tool, BSM (bivariate statistical modeler), for BSA technique is proposed. Three popular BSA techniques such as frequency ratio, weights-of-evidence, and evidential belief function models are applied in the newly proposed ArcMAP tool. This tool is programmed in Python and is created by a simple graphical user interface, which facilitates the improvement of model performance. The proposed tool implements BSA automatically, thus allowing numerous variables to be examined. To validate the capability and accuracy of this program, a pilot test area in Malaysia is selected and all three models are tested by using the proposed program. Area under curve is used to measure the success rate and prediction rate. Results demonstrate that the proposed program executes BSA with reasonable accuracy. The proposed BSA tool can be used in numerous applications, such as natural hazard, mineral potential, hydrological, and other engineering and environmental applications.
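
    Of the three BSA techniques named, the frequency ratio is the simplest to state in code; the sketch below computes it on synthetic rasters and is only a schematic of what the proposed ArcMAP tool automates, with all inputs invented.

        import numpy as np

        def frequency_ratio(class_map, hazard_mask):
            # FR per class = (% of hazard pixels in class) / (% of pixels in class)
            total, haz_total = class_map.size, hazard_mask.sum()
            fr = {}
            for c in np.unique(class_map):
                in_c = class_map == c
                fr[int(c)] = (hazard_mask[in_c].sum() / haz_total) / (in_c.sum() / total)
            return fr

        rng = np.random.default_rng(3)
        slope_class = rng.integers(1, 4, size=(100, 100))       # 3 factor classes
        landslides = rng.uniform(size=(100, 100)) < 0.02 * slope_class
        print(frequency_ratio(slope_class, landslides))         # FR rises with class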

  4. A semiparametric approach to simultaneous covariance estimation for bivariate sparse longitudinal data.

    Science.gov (United States)

    Das, Kiranmoy; Daniels, Michael J

    2014-03-01

    Estimation of the covariance structure for irregular sparse longitudinal data has been studied by many authors in recent years but typically using fully parametric specifications. In addition, when data are collected from several groups over time, it is known that assuming the same or completely different covariance matrices over groups can lead to loss of efficiency and/or bias. Nonparametric approaches have been proposed for estimating the covariance matrix for regular univariate longitudinal data by sharing information across the groups under study. For the irregular case, with longitudinal measurements that are bivariate or multivariate, modeling becomes more difficult. In this article, to model bivariate sparse longitudinal data from several groups, we propose a flexible covariance structure via a novel matrix stick-breaking process for the residual covariance structure and a Dirichlet process mixture of normals for the random effects. Simulation studies are performed to investigate the effectiveness of the proposed approach over more traditional approaches. We also analyze a subset of Framingham Heart Study data to examine how the blood pressure trajectories and covariance structures differ for the patients from different BMI groups (high, medium, and low) at baseline.

  5. Patient-specific errors in agreement between International Normalized Ratios measured by a whole blood coagulometer and by a routine plasma-based method.

    Science.gov (United States)

    Attermann, Jørn; Andersen, Niels T; Korsgaard, Helle; Maegaard, Marianne; Hasenkam, J Michael

    2004-04-01

    We applied a new statistical method to improve comparisons between systems measuring prothrombin time (PT) by splitting disagreement into systematic errors, which can be eliminated, and random errors, which can not. We found that the disagreement between International Normalized Ratio (INR) measurements based on plasma and whole blood was significantly patient-dependent.

  6. A Statistical Method and Tool to Account for Indirect Calorimetry Differential Measurement Error in a Single-Subject Analysis.

    Science.gov (United States)

    Tenan, Matthew S

    2016-01-01

    Indirect calorimetry and oxygen consumption (VO2) are accepted tools in human physiology research. It has been shown that indirect calorimetry systems exhibit differential measurement error, where the error of a device is systematically different depending on the volume of gas flow. Moreover, systems commonly report multiple decimal places of precision, giving the clinician a false sense of device accuracy. The purpose of this manuscript is to demonstrate the use of a novel statistical tool which models the reliability of two specific indirect calorimetry systems, Douglas bag and Parvomedics 2400 TrueOne, as univariate normal distributions and implements the distribution overlapping coefficient to determine the likelihood that two VO2 measures are the same. A command line implementation of the tool is available for the R programming language as well as a web-based graphical user interface (GUI). This tool is valuable for clinicians performing a single-subject analysis as well as researchers interested in determining if their observed differences exceed the error of the device.
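
    The distribution overlapping coefficient this record relies on can be evaluated by direct numerical integration of the minimum of two normal densities; the means and device SDs below are illustrative placeholders, not the calibrated Douglas-bag or Parvomedics values.

        import numpy as np
        from scipy.stats import norm

        def overlap_coefficient(mu1, sd1, mu2, sd2, n=200_001):
            lo = min(mu1 - 6 * sd1, mu2 - 6 * sd2)
            hi = max(mu1 + 6 * sd1, mu2 + 6 * sd2)
            x = np.linspace(lo, hi, n)
            m = np.minimum(norm.pdf(x, mu1, sd1), norm.pdf(x, mu2, sd2))
            return np.sum(m) * (x[1] - x[0])    # Riemann sum of the shared area

        # two VO2 readings (L/min) with an assumed common device SD
        print(overlap_coefficient(2.95, 0.08, 3.10, 0.08))

    A value near 1 says the two readings are indistinguishable given device error; a value near 0 says the observed difference exceeds it.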

  7. A Statistical Method and Tool to Account for Indirect Calorimetry Differential Measurement Error in a Single-Subject Analysis

    Directory of Open Access Journals (Sweden)

    Matthew S Tenan

    2016-05-01

    Full Text Available Indirect calorimetry and oxygen consumption (VO2) are accepted tools in human physiology research. It has been shown that indirect calorimetry systems exhibit differential measurement error, where the error of a device is systematically different depending on the volume of gas flow. Moreover, systems commonly report multiple decimal places of precision, giving the clinician a false sense of device accuracy. The purpose of this manuscript is to demonstrate the use of a novel statistical tool which models the reliability of two specific indirect calorimetry systems, Douglas bag and Parvomedics 2400 TrueOne, as univariate normal distributions and implements the distribution overlapping coefficient to determine the likelihood that two VO2 measures are the same. A command line implementation of the tool is available for the R programming language as well as a web-based graphical user interface. This tool is valuable for clinicians performing a single-subject analysis as well as researchers interested in determining if their observed differences exceed the error of the device.

  8. A Statistical Method and Tool to Account for Indirect Calorimetry Differential Measurement Error in a Single-Subject Analysis.

    Science.gov (United States)

    Tenan, Matthew S

    2016-01-01

    Indirect calorimetry and oxygen consumption (VO2) are accepted tools in human physiology research. It has been shown that indirect calorimetry systems exhibit differential measurement error, where the error of a device is systematically different depending on the volume of gas flow. Moreover, systems commonly report multiple decimal places of precision, giving the clinician a false sense of device accuracy. The purpose of this manuscript is to demonstrate the use of a novel statistical tool which models the reliability of two specific indirect calorimetry systems, Douglas bag and Parvomedics 2400 TrueOne, as univariate normal distributions and implements the distribution overlapping coefficient to determine the likelihood that two VO2 measures are the same. A command line implementation of the tool is available for the R programming language as well as a web-based graphical user interface (GUI). This tool is valuable for clinicians performing a single-subject analysis as well as researchers interested in determining if their observed differences exceed the error of the device. PMID:27242546

  9. Error analysis of subpixel lava temperature measurements using infrared remotely sensed data

    Science.gov (United States)

    Lombardo, V.; Musacchio, M.; Buongiorno, M. F.

    2012-10-01

    When remote sensing users are asked to define their requirements for a new sensor, the big question that always arises is: will the technical specifications meet the scientific requirements? Herein, we discuss quantitative relationships between instrumental spectral and radiometric characteristics and data exploitable for lava flow subpixel temperature analysis. This study was funded within the framework of ESA activities for the IR GMES (Global Monitoring for Environment and Security) element mission requirements in 2005. Subpixel temperature retrieval from satellite infrared data is a well-established method that is well documented in the remote sensing literature. However, little attention is paid to the error analysis of estimated parameters due to atmospheric correction and the radiometric accuracy of the sensor. In this study, we suggest the best spectral band combination to estimate subpixel temperature parameters. We also demonstrate that poor atmospheric corrections may negate the effectiveness of the most radiometrically accurate instrument.

  10. A new method for identifying bivariate differential expression in high dimensional microarray data using quadratic discriminant analysis

    Directory of Open Access Journals (Sweden)

    Arevalillo Jorge M

    2011-11-01

    Full Text Available Abstract Background One of the drawbacks we face when analyzing gene to phenotype associations in genomic data is the poor performance of the designed classifier due to the small sample-high dimensional data structures (n ≪ p) at hand. This is known as the peaking phenomenon, a common situation in the analysis of gene expression data. Highly predictive bivariate gene interactions whose marginals are useless for discrimination are also affected by such a phenomenon, so they are commonly discarded by state-of-the-art sequential search algorithms. Such patterns are known as weak/marginal strong bivariate interactions. This paper addresses the problem of uncovering them in high dimensional settings. Results We propose a new approach which uses quadratic discriminant analysis (QDA) as a search engine in order to detect such signals. The choice of QDA is justified by a simulation study for a benchmark of classifiers which reveals its appealing properties. The procedure rests on an exhaustive search which explores the feature space in a blockwise manner by dividing it in blocks and by assessing the accuracy of the QDA for the predictors within each pair of blocks; the block size is determined by the resistance of the QDA to peaking. This search highlights chunks of features which are expected to contain the type of subtle interactions we are concerned with; a closer look at this smaller subset of features by means of an exhaustive search guided by the QDA error rate for all the pairwise input combinations within this subset will enable their final detection. The proposed method is applied both to synthetic data and to a public domain microarray data set. When applied to gene expression data, it leads to pairs of genes which are not univariately differentially expressed but exhibit subtle patterns of bivariate differential expression. Conclusions We have proposed a novel approach for identifying weak marginal/strong bivariate interactions. Unlike

  11. #2 - An Empirical Assessment of Exposure Measurement Error and Effect Attenuation in Bi-Pollutant Epidemiologic Models

    Science.gov (United States)

    Background: • Differing degrees of exposure error across pollutants. • Previous focus on quantifying and accounting for exposure error in single-pollutant models. • Examine exposure errors for multiple pollutants and provide insights on the potential for bias and attenuation...

  12. THE INSTABILITY DEGREE IN THE DIMENSION OF SPACES OF BIVARIATE SPLINE

    Institute of Scientific and Technical Information of China (English)

    Zhiqiang Xu; Renhong Wang

    2002-01-01

    In this paper, the dimension of the spaces of bivariate splines with degree less than 2r and smoothness order r on the Morgan-Scott triangulation is considered. The concept of the instability degree in the dimension of spaces of bivariate splines is presented. The results in the paper lead us to conjecture that the instability degree in the dimension of spaces of bivariate splines is infinity.

  13. Bivariate gamma-geometric law and its induced Lévy process

    OpenAIRE

    Barreto-Souza, Wagner

    2013-01-01

    In this article we introduce a three-parameter extension of the bivariate exponential-geometric (BEG) law (Kozubowski and Panorska, 2005). We refer to this new distribution as the bivariate gamma-geometric (BGG) law. A bivariate random vector $(X,N)$ follows the BGG law if $N$ has a geometric distribution and $X$ may be represented (in law) as a sum of $N$ independent and identically distributed gamma variables, where these variables are independent of $N$. Statistical properties such as moment generat...
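
    A sampling sketch of the BGG construction as described: N geometric, X a sum of N iid gamma variables, drawn in one shot via the gamma convolution property. The parameter values are arbitrary.

        import numpy as np

        rng = np.random.default_rng(5)
        p, alpha, beta = 0.3, 2.0, 1.5          # geometric prob., gamma shape/scale

        N = rng.geometric(p, size=100_000)      # N on {1, 2, ...}
        # a sum of N iid Gamma(alpha, beta) variables is Gamma(alpha * N, beta)
        X = rng.gamma(shape=alpha * N, scale=beta)
        print(X.mean(), alpha * beta / p)       # E[X] = alpha * beta * E[N]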

  14. Measurement of Hubble constant: non-Gaussian errors in HST Key Project data

    Science.gov (United States)

    Singh, Meghendra; Gupta, Shashikant; Pandey, Ashwini; Sharma, Satendra

    2016-08-01

    Assuming the Central Limit Theorem, experimental uncertainties in any data set are expected to follow the Gaussian distribution with zero mean. We propose an elegant method based on the Kolmogorov-Smirnov statistic to test the above, and apply it to the measurement of the Hubble constant, which determines the expansion rate of the Universe. The measurements were made using the Hubble Space Telescope. Our analysis shows that the uncertainties in the above measurement are non-Gaussian.
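
    The test idea reduces to a one-sample Kolmogorov-Smirnov test of standardized uncertainties against the standard normal; the sketch below uses synthetic heavy-tailed stand-ins, not the HST Key Project data.

        import numpy as np
        from scipy.stats import kstest

        rng = np.random.default_rng(11)
        # stand-in for (measurement - consensus) / quoted uncertainty
        z = rng.standard_t(df=3, size=70)       # deliberately heavy-tailed

        stat, pval = kstest(z, 'norm')          # H0: errors are standard normal
        print(f"KS statistic = {stat:.3f}, p = {pval:.4f}")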

  15. Automatic Extraction of Optimal Endmembers from Airborne Hyperspectral Imagery Using Iterative Error Analysis (IEA) and Spectral Discrimination Measurements

    Directory of Open Access Journals (Sweden)

    Ahram Song

    2015-01-01

    Full Text Available Pure surface materials denoted by endmembers play an important role in hyperspectral processing in various fields. Many endmember extraction algorithms (EEAs) have been proposed to find appropriate endmember sets. Most studies involving the automatic extraction of appropriate endmembers without a priori information have focused on N-FINDR. Although there are many different versions of N-FINDR algorithms, computational complexity issues still remain and these algorithms cannot consider the case where spectrally mixed materials are extracted as final endmembers. A sequential endmember extraction-based algorithm may be more effective when the number of endmembers to be extracted is unknown. In this study, we propose a simple but accurate method to automatically determine the optimal endmembers using such a method. The proposed method consists of three steps for determining the proper number of endmembers and for removing endmembers that are repeated or contain mixed signatures, using the Root Mean Square Error (RMSE) images obtained from Iterative Error Analysis (IEA) and spectral discrimination measurements. A synthetic hyperspectral image and two different airborne images such as Airborne Imaging Spectrometer for Application (AISA) and Compact Airborne Spectrographic Imager (CASI) data were tested using the proposed method, and our experimental results indicate that the final endmember set contained all of the distinct signatures without redundant endmembers and errors from mixed materials.

  16. Spatial measurement error and correction by spatial SIMEX in linear regression models when using predicted air pollution exposures.

    Science.gov (United States)

    Alexeeff, Stacey E; Carroll, Raymond J; Coull, Brent

    2016-04-01

    Spatial modeling of air pollution exposures is widespread in air pollution epidemiology research as a way to improve exposure assessment. However, there are key sources of exposure model uncertainty when air pollution is modeled, including estimation error and model misspecification. We examine the use of predicted air pollution levels in linear health effect models under a measurement error framework. For the prediction of air pollution exposures, we consider a universal Kriging framework, which may include land-use regression terms in the mean function and a spatial covariance structure for the residuals. We derive the bias induced by estimation error and by model misspecification in the exposure model, and we find that a misspecified exposure model can induce asymptotic bias in the effect estimate of air pollution on health. We propose a new spatial simulation extrapolation (SIMEX) procedure, and we demonstrate that the procedure has good performance in correcting this asymptotic bias. We illustrate spatial SIMEX in a study of air pollution and birthweight in Massachusetts.
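
    A minimal sketch of the (non-spatial) SIMEX idea underlying this proposal: re-fit the regression under added noise at several multiples of the known error variance, then extrapolate the coefficient back to zero total error (lambda = -1). This is the classical procedure with assumed parameters, not the authors' spatial variant.

        import numpy as np

        rng = np.random.default_rng(2)
        n, beta, sig_u = 2000, 1.0, 0.6
        x = rng.normal(size=n)
        w = x + rng.normal(scale=sig_u, size=n)       # error-prone exposure
        y = beta * x + rng.normal(scale=0.5, size=n)

        lambdas = np.array([0.0, 0.5, 1.0, 1.5, 2.0])
        slopes = []
        for lam in lambdas:                           # simulation step
            b = [np.polyfit(w + rng.normal(scale=np.sqrt(lam) * sig_u, size=n),
                            y, 1)[0] for _ in range(50)]
            slopes.append(np.mean(b))

        quad = np.polyfit(lambdas, slopes, 2)         # extrapolation step
        print(np.polyval(quad, -1.0))                 # SIMEX estimate, ~ beta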

  17. Efficient estimation of semiparametric copula models for bivariate survival data

    KAUST Repository

    Cheng, Guang

    2014-01-01

    A semiparametric copula model for bivariate survival data is characterized by a parametric copula model of dependence and nonparametric models of two marginal survival functions. Efficient estimation for the semiparametric copula model has been recently studied for the complete data case. When the survival data are censored, semiparametric efficient estimation has only been considered for some specific copula models such as the Gaussian copulas. In this paper, we obtain the semiparametric efficiency bound and efficient estimation for general semiparametric copula models for possibly censored data. We construct an approximate maximum likelihood estimator by approximating the log baseline hazard functions with spline functions. We show that our estimates of the copula dependence parameter and the survival functions are asymptotically normal and efficient. Simple consistent covariance estimators are also provided. Numerical results are used to illustrate the finite sample performance of the proposed estimators. © 2013 Elsevier Inc.

  18. Computing the Distance between Piecewise-Linear Bivariate Functions

    CERN Document Server

    Moroz, Guillaume

    2011-01-01

    We consider the problem of computing the distance between two piecewise-linear bivariate functions $f$ and $g$ defined over a common domain $M$. We focus on the distance induced by the $L_2$-norm, that is $\|f-g\|_2=\sqrt{\iint_M (f-g)^2}$. If $f$ is defined by linear interpolation over a triangulation of $M$ with $n$ triangles, while $g$ is defined over another such triangulation, the obvious naïve algorithm requires $\Theta(n^2)$ arithmetic operations to compute this distance. We show that it is possible to compute it in $O(n\log^4 n)$ arithmetic operations, by reducing the problem to multi-point evaluation of a certain type of polynomials. We also present an application to terrain matching.
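
    For the easy special case where f and g live on the same triangulation, the squared L2 distance is exactly computable triangle by triangle, since (f-g)^2 is quadratic on each triangle and the three-edge-midpoint rule integrates quadratics exactly. The overlay of two different triangulations, which the paper actually addresses, is not handled in this sketch.

        import numpy as np

        def l2_distance_sq(tri, f_vals, g_vals):
            # tri: (m, 3, 2) corner coordinates; f_vals, g_vals: (m, 3) nodal values
            d = f_vals - g_vals
            mids = (d + np.roll(d, -1, axis=1)) / 2.0   # edge-midpoint values
            ab = tri[:, 1] - tri[:, 0]
            ac = tri[:, 2] - tri[:, 0]
            area = 0.5 * np.abs(ab[:, 0] * ac[:, 1] - ab[:, 1] * ac[:, 0])
            # the 3-point edge-midpoint rule is exact for quadratics like (f-g)^2
            return float(np.sum(area / 3.0 * np.sum(mids ** 2, axis=1)))

        T = np.array([[[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]]])
        print(l2_distance_sq(T, np.zeros((1, 3)), np.ones((1, 3))))  # 0.5 = area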

  19. The bivariate combined model for spatial data analysis.

    Science.gov (United States)

    Neyens, Thomas; Lawson, Andrew B; Kirby, Russell S; Faes, Christel

    2016-08-15

    To describe the spatial distribution of diseases, a number of methods have been proposed to model relative risks within areas. Most models use Bayesian hierarchical methods, in which one models both spatially structured and unstructured extra-Poisson variance present in the data. For modelling a single disease, the conditional autoregressive (CAR) convolution model has been very popular. More recently, a combined model was proposed that 'combines' ideas from the CAR convolution model and the well-known Poisson-gamma model. The combined model was shown to be a good alternative to the CAR convolution model when there was a large amount of uncorrelated extra-variance in the data. Fewer solutions exist for modelling two diseases simultaneously or modelling a disease in two sub-populations simultaneously. Furthermore, existing models are typically based on the CAR convolution model. In this paper, a bivariate version of the combined model is proposed in which the unstructured heterogeneity term is split up into terms that are shared and terms that are specific to the disease or subpopulation, while spatial dependency is introduced via a univariate or multivariate Markov random field. The proposed method is illustrated by analysis of disease data in Georgia (USA) and Limburg (Belgium) and in a simulation study. We conclude that the bivariate combined model constitutes an interesting model when two diseases are possibly correlated. As the choice of the preferred model differs between data sets, we suggest to use the new and existing modelling approaches together and to choose the best model via goodness-of-fit statistics. Copyright © 2016 John Wiley & Sons, Ltd. PMID:26928309

  20. Seasonal performance of air conditioners - an analysis of the DOE test procedures: the thermostat and measurement errors. Report No. 2

    Energy Technology Data Exchange (ETDEWEB)

    Lamb, G.D.; Tree, D.R.

    1981-01-01

    Two aspects of the DOE test procedures are analyzed. First, the role of the thermostat in controlling the cycling of conditioning equipment is investigated. The test procedures call for a cycling scheme of 6 minutes on, 24 minutes off for Test D. To justify this cycling scheme as being representative of cycling in the field, it is assumed that the thermostat is the major factor in controlling the cycle rate. This assumption is examined by studying a closed-loop feedback model consisting of a thermostat, a heating/cooling plant and a conditioned space. Important parameters of this model are individually studied to determine their influence on the system. It is found that the switch differential and the anticipator gain are the major parameters in controlling the cycle rate. This confirms the thermostat's dominant role in the cycling of a system. The second aspect of the test procedures concerns transient errors or differences in the measurement of cyclic capacity. In particular, errors due to thermocouple response, thermocouple grid placement, dampers and nonuniform velocity and temperature distributions are considered. Problems in these four areas are mathematically modeled and the basic assumptions are stated. Results from these models help to clarify the problem areas and give an indication of the magnitude of the errors involved. It is found that major disagreement in measured capacity can arise in these four areas and can be mainly attributed to test set-up differences even though such differences are allowable in the test procedures. An understanding of such differences will aid in minimizing many problems in the measurement of cyclic capacity.
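
    The closed-loop behaviour described above can be reproduced with a toy simulation: a first-order conditioned space, a relay with a switch differential, and an anticipator that biases the sensed temperature while the plant runs. All parameter values below are illustrative assumptions, not the report's figures.

    ```python
    def cycle_rate(hours=4.0, dt=1.0, set_point=22.0, differential=1.0,
                   anticipator_gain=0.5, tau=1800.0, plant_gain=15.0,
                   t_out=10.0):
        """Relay thermostat on a first-order room (heating case for
        simplicity). dt in seconds, tau = room time constant (s),
        plant_gain = steady-state temperature rise (K) with the plant
        always on. Returns plant starts per hour."""
        n = int(hours * 3600 / dt)
        temp, on, starts = set_point, False, 0
        for _ in range(n):
            # the anticipator warms the sensor while the plant is on,
            # cutting the plant out early and raising the cycle rate
            sensed = temp + (anticipator_gain if on else 0.0)
            if on and sensed > set_point + differential / 2:
                on = False
            elif not on and sensed < set_point - differential / 2:
                on, starts = True, starts + 1
            # dT/dt = (t_out - T + plant_gain * on) / tau
            temp += dt * ((t_out - temp + (plant_gain if on else 0.0)) / tau)
        return starts / hours

    # larger differential or smaller anticipator gain -> fewer cycles per hour
    print(cycle_rate(), cycle_rate(anticipator_gain=0.0))
    ```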

  1. Measurement of the screw-home motion of the knee is sensitive to errors in axis alignment.

    Science.gov (United States)

    Piazza, S J; Cavanagh, P R

    2000-08-01

    Measurements of joint angles during motion analysis are subject to error caused by kinematic crosstalk, that is, one joint rotation (e.g., flexion) being interpreted as another (e.g., abduction). Kinematic crosstalk results from the chosen joint coordinate system being misaligned with the axes about which rotations are assumed to occur. The aim of this paper is to demonstrate that measurement of the so-called "screw-home" motion of the human knee, in which axial rotation and extension are coupled, is especially prone to errors due to crosstalk. The motions of two different two-segment mechanical linkages were examined to study the effects of crosstalk. The segments of the first linkage (NSH) were connected by a revolute joint, but the second linkage (SH) incorporated gearing that caused 15 degrees of screw-home rotation to occur with 90 degrees knee flexion. It was found that rotating the flexion axis (inducing crosstalk) could make linkage NSH appear to exhibit a screw-home motion and that a different rotation of the flexion axis could make linkage SH apparently exhibit pure flexion. These findings suggest that the measurement of screw-home rotation may be strongly influenced by errors in the location of the flexion axis. The magnitudes of these displacements of the flexion axis were consistent with the inter-observer variability seen when five experienced observers defined the flexion axis by palpating the medial and lateral femoral epicondyles. Care should be taken when interpreting small internal-external rotations and abduction-adduction angles to ensure that they are not the products of kinematic crosstalk.
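
    Crosstalk itself is easy to demonstrate numerically under an assumed geometry: pure rotation about an axis tilted a few degrees from the assumed flexion axis, decomposed in the assumed frame, produces spurious "abduction" and "axial rotation" that grow with flexion, mimicking a screw-home pattern. A sketch (axis conventions are illustrative):

    ```python
    import numpy as np
    from scipy.spatial.transform import Rotation as R

    def apparent_angles(flexion_deg, tilt_deg):
        """Pure rotation about an axis tilted `tilt_deg` away from the
        assumed medio-lateral (z) axis, decomposed as z-x-y Euler angles in
        the assumed frame: the x and y components are pure crosstalk."""
        axis = R.from_euler('y', tilt_deg, degrees=True).apply([0.0, 0.0, 1.0])
        rot = R.from_rotvec(np.deg2rad(flexion_deg) * axis)
        flex, abd, axial = rot.as_euler('zxy', degrees=True)
        return flex, abd, axial

    for flex in (30, 60, 90):
        # with a 10 degree axis error, nonzero "abduction" and "axial
        # rotation" appear even though the true motion is pure flexion
        print(flex, apparent_angles(flex, tilt_deg=10.0))
    ```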

  2. Joint Modeling of Failure Time Data with Transformation Model and Longitudinal Data When Covariates are Measured with Errors

    Institute of Scientific and Technical Information of China (English)

    Xi-ming CHENG; Qi GONG

    2012-01-01

    Semiparametric transformation models provide a class of flexible models for regression analysis of failure time data. Several authors have discussed them under different situations when covariates are time-independent (Chen et al., 2002; Cheng et al., 1995; Fine et al., 1998). In this paper, we consider fitting these models to right-censored data when covariates are time-dependent longitudinal variables and, furthermore, may suffer measurement errors. For estimation, we investigate the maximum likelihood approach, and an EM algorithm is developed. Simulation results show that the proposed method is appropriate for practical application, and an illustrative example is provided.

  3. Study protocol: the empirical investigation of methods to correct for measurement error in biobanks with dietary assessment

    Directory of Open Access Journals (Sweden)

    Masson Lindsey F

    2011-10-01

    Full Text Available Abstract Background The Public Population Project in Genomics (P3G) is an organisation that aims to promote collaboration between researchers in the field of population-based genomics. The main objectives of P3G are to encourage collaboration between researchers and biobankers, optimize study design, promote the harmonization of information use in biobanks, and facilitate transfer of knowledge between interested parties. The importance of calibration and harmonisation of methods for environmental exposure assessment to allow pooling of data across studies in the evaluation of gene-environment interactions has been recognised by P3G, which has set up a methodological group on calibration with the aim of: (1) reviewing the published methodological literature on measurement error correction methods, their assumptions and methods of implementation; (2) reviewing the evidence available from published nutritional epidemiological studies that have used a calibration approach; (3) disseminating information in the form of a comparison chart on approaches to perform calibration studies and how to obtain correction factors, in order to support research groups collaborating within the P3G network that are unfamiliar with the methods employed; (4) with application to the field of nutritional epidemiology, including gene-diet interactions, ultimately developing an inventory of the typical correction factors for various nutrients. Methods/Design Systematic review of (a) the methodological literature on methods to correct for measurement error in epidemiological studies; and (b) studies that have been designed primarily to investigate the association between diet and disease and have also corrected for measurement error in dietary intake. Discussion The conduct of a systematic review of the methodological literature on calibration will facilitate the evaluation of methods to correct for measurement error and the design of calibration studies for the prospective pooling of

  4. Error analysis of linear optics measurements via turn-by-turn beam position data in circular accelerators

    CERN Document Server

    Franchi, Andrea

    2016-01-01

    Many advanced techniques have been developed, tested and implemented in the last decades in almost all circular accelerators across the world to measure the linear optics. However, the greater availability and accuracy of beam diagnostics and the ever better correction of linear magnetic lattice imperfections (beta beating at 1% level and coupling at 0.1%) are reaching what seems to be the intrinsic accuracy and precision of different measurement techniques. This paper aims to highlight and quantify, when possible, the limitations of one standard method, the harmonic analysis of turn-by-turn beam position data. To this end, new analytic formulas for the evaluation of lattice parameters modified by focusing errors are derived. The unexpected conclusion of this study is that for the ESRF storage ring (and possibly for any third generation light source operating at ultra-low coupling and with similar diagnostics), measurement and correction of linear optics via orbit beam position data are to be preferred to the...
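
    As an illustration of the harmonic-analysis step itself (not of the paper's new formulas), the fractional betatron tune can be recovered from simulated turn-by-turn data by a windowed FFT with parabolic peak interpolation:

    ```python
    import numpy as np

    # simulated turn-by-turn BPM reading: betatron oscillation at fractional
    # tune q_true plus noise (amplitudes and noise level are illustrative)
    n_turns, q_true = 1024, 0.31
    turns = np.arange(n_turns)
    x = 0.5 * np.cos(2 * np.pi * q_true * turns + 0.7) \
        + 0.02 * np.random.randn(n_turns)

    spec = np.abs(np.fft.rfft(x * np.hanning(n_turns)))
    k = np.argmax(spec[1:]) + 1            # skip the DC bin
    # parabolic interpolation around the peak refines the tune beyond the
    # raw 1/n_turns resolution of the FFT grid
    num = spec[k - 1] - spec[k + 1]
    den = spec[k - 1] - 2 * spec[k] + spec[k + 1]
    q_est = (k + 0.5 * num / den) / n_turns
    print(q_true, q_est)
    ```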

  5. Signal optimization, noise reduction, and systematic error compensation methods in long-path DOAS measurements

    Science.gov (United States)

    Simeone, Emilio; Donati, Alessandro

    1998-12-01

    Extending the exploitable optical path is one of the most important goals in improving differential optical absorption spectroscopy (DOAS) instruments. This paper presents and discusses methods that allow long-path measurements in the UV region. These methods have been tested in the new Italian DOAS instrument - SPOT - developed and manufactured by Kayser Italia. The system was equipped with a tele-controlled optical shuttle on the light source unit, allowing background radiation to be measured. Absolute wavelength calibration of the spectra was performed by means of a collimated UV beam from a mercury lamp integrated in the telescope. In addition, possible thermal effects on the dispersion coefficients of the holographic grating were automatically compensated by means of a general non-linear fit during the spectral analysis session. Measurements in bistatic configuration have been performed in urban areas at 1300 m and 2200 m in three spectral windows from 245 to 380 nm. Measurements with these features are expected in the other spectral windows on path lengths ranging from about 5 to 10 km in urban areas. The DOAS technique can be used in the field for very fast measurements in the 245-275 nm spectral range, on path lengths up to about 2500 m.
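
    The spectral analysis in any DOAS instrument rests on a retrieval that is linear once the optical depth is formed; a generic sketch (not the SPOT instrument's actual fitting code, which the abstract says uses a non-linear fit to absorb grating drifts):

    ```python
    import numpy as np

    def doas_fit(wavelength, I, I0, cross_sections, poly_order=3):
        """Linear DOAS retrieval: the optical depth ln(I0/I) is modelled as a
        sum of absorber column densities times their differential cross
        sections plus a low-order polynomial for broadband extinction, and
        solved by least squares. cross_sections: (n_species, n_wavelengths)
        array. Returns the fitted column densities."""
        od = np.log(I0 / I)
        # design matrix: cross sections + polynomial in scaled wavelength
        w = (wavelength - wavelength.mean()) / wavelength.ptp()
        A = np.vstack([cross_sections] + [w**p for p in range(poly_order + 1)]).T
        coef, *_ = np.linalg.lstsq(A, od, rcond=None)
        return coef[:len(cross_sections)]
    ```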

  6. Comparing Johnson’s SBB, Weibull and Logit-Logistic bivariate distributions for modeling tree diameters and heights using copulas

    Directory of Open Access Journals (Sweden)

    Jose Javier Gorgoso-Varela

    2016-04-01

    Full Text Available Aim of study: In this study we compare the accuracy of three bivariate distributions: Johnson's SBB, Weibull-2P and LL-2P functions for characterizing the joint distribution of tree diameters and heights. Area of study: North-West of Spain. Material and methods: Diameter and height measurements of 128 plots of pure and even-aged Tasmanian blue gum (Eucalyptus globulus Labill.) stands located in the North-west of Spain were considered in the present study. The SBB bivariate distribution was obtained from SB marginal distributions using a Normal Copula based on a four-parameter logistic transformation. The Plackett Copula was used to obtain the bivariate models from the Weibull and Logit-logistic univariate marginal distributions. The negative logarithm of the maximum likelihood function was used to compare the results and the Wilcoxon signed-rank test was used to compare the related samples of these logarithms calculated for each sample plot and each distribution. Main results: The best results were obtained by using the Plackett copula and the best marginal distribution was the Logit-logistic. Research highlights: The copulas used in this study have shown a good performance for modeling the joint distribution of tree diameters and heights. They could be easily extended for modelling multivariate distributions involving other tree variables, such as tree volume or biomass.
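
    A sketch of the Plackett-copula fitting step, using rank-based pseudo-observations instead of the paper's fitted Weibull or logit-logistic margins (a simplification to keep the example short); the density formula is the standard Plackett one:

    ```python
    import numpy as np
    from scipy.optimize import minimize_scalar

    def plackett_logpdf(u, v, th):
        """Log-density of the Plackett copula with odds-ratio parameter th."""
        s = 1.0 + (th - 1.0) * (u + v)
        d = s * s - 4.0 * th * (th - 1.0) * u * v
        return np.log(th) + np.log1p((th - 1.0) * (u + v - 2 * u * v)) \
               - 1.5 * np.log(d)

    def fit_plackett(x, y):
        """Maximum pseudo-likelihood fit of th on rank-transformed data
        (diameters x and heights y, for instance)."""
        n = len(x)
        u = (np.argsort(np.argsort(x)) + 1) / (n + 1.0)
        v = (np.argsort(np.argsort(y)) + 1) / (n + 1.0)
        nll = lambda log_th: -np.sum(plackett_logpdf(u, v, np.exp(log_th)))
        res = minimize_scalar(nll, bounds=(-5.0, 5.0), method='bounded')
        return np.exp(res.x)
    ```

    The same negative log-likelihood evaluated per plot is what the abstract's Wilcoxon signed-rank comparison operates on.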

  7. A multi-year methane inversion using SCIAMACHY, accounting for systematic errors using TCCON measurements

    NARCIS (Netherlands)

    Houweling, S.; Krol, M.C.; Bergamaschi, P.; Frankenberg, C.; Dlugokencky, E.J.; Morino, I.

    2014-01-01

    This study investigates the use of total column CH4 (XCH4) retrievals from the SCIAMACHY satellite instrument for quantifying large-scale emissions of methane. A unique data set from SCIAMACHY is available spanning almost a decade of measurements, covering a period when the global CH4 growth rate sh

  8. From measurements errors to a new strain gauge design for composite materials

    DEFF Research Database (Denmark)

    Mikkelsen, Lars Pilgaard; Salviato, Marco; Gili, Jacopo

    2015-01-01

    Significant over-prediction of the material stiffness in the order of 1-10% for polymer based composites has been experimentally observed and numerically determined when using strain gauges for strain measurements instead of non-contact methods such as digital image correlation or less stiff methods such as clip-on extensometers.

  9. ADC non-linear error corrections for low-noise temperature measurements in the LISA band

    Energy Technology Data Exchange (ETDEWEB)

    Sanjuan, J; Lobo, A; Mateos, N [Institut de Ciencies de l' Espai, CSIC, Fac. de Ciencies, Torre C5, 08193 Bellaterra (Spain); Ramos-Castro, J [Dep. Eng. Electronica, UPC, Campus Nord, Ed. C4, J Girona 1-3, 08034 Barcelona (Spain); Diaz-Aguilo, M, E-mail: sanjuan@ieec.fcr.e [Dep. Fisica Aplicada, UPC, Campus Nord, Ed. B4/B5, J Girona 1-3, 08034 Barcelona (Spain)

    2010-05-01

    Temperature fluctuations degrade the performance of different subsystems in the LISA mission. For instance, they can exert stray forces on the test masses and thus hamper the required drag-free accuracy. Also, the interferometric system performance depends on the stability of the temperature in the optical elements. Therefore, monitoring the temperature in specific points of the LISA subsystems is required. These measurements will be useful to identify the sources of excess noise caused by temperature fluctuations. The required temperature stability is still to be defined, but a figure around 10 μK Hz^{-1/2} from 0.1 mHz to 0.1 Hz can be a good rough guess. The temperature measurement subsystem on board the LISA Pathfinder mission exhibits noise levels of 10 μK Hz^{-1/2} for f > 0.1 mHz. For LISA, based on the above hypothesis, the measurement system should overcome limitations related to the analog-to-digital conversion stage, which degrades the performance of the measurement when temperature drifts. Investigations on the mitigation of such noise are presented here.

  10. Estimating the Persistence and the Autocorrelation Function of a Time Series that is Measured with Error

    DEFF Research Database (Denmark)

    Hansen, Peter Reinhard; Lunde, Asger

    2014-01-01

    of actual volatility. In an empirical analysis using realized measures for the Dow Jones industrial average stocks, we find the underlying volatility to be near unit root in all cases. Although standard unit root tests are asymptotically justified, we find them to be misleading in our application despite...

  11. Estimating the Persistence and the Autocorrelation Function of a Time Series that is Measured with Error

    DEFF Research Database (Denmark)

    Hansen, Peter Reinhard; Lunde, Asger

    variance, that are imperfect estimates of actual volatility. In an empirical analysis using realized measures for the DJIA stocks we find the underlying volatility to be near unit root in all cases. Although standard unit root tests are asymptotically justified, we find them to be misleading in our...

  12. Reducing the error in terrestrial laser scanning by optimizing the measurement set-up

    NARCIS (Netherlands)

    Soudarissanane, S.S.; Lindenbergh, R.C.; Gorte, B.G.H.

    2008-01-01

    High spatial resolution and fast capturing possibilities make 3D terrestrial laser scanners widely used in engineering applications and cultural heritage recording. Phase based laser scanners can measure distances to object surfaces with a precision in the order of a few millimeters at ranges betwee

  13. Errors in second moments estimated from monostatic Doppler sodar winds. II. Application to field measurements

    DEFF Research Database (Denmark)

    Gaynor, J. E.; Kristensen, Leif

    1986-01-01

    For pt.I see ibid., vol.3, no.3, p.523-8 (1986). The authors use the theoretical results presented in part I to correct turbulence parameters derived from monostatic sodar wind measurements in an attempt to improve the statistical comparisons with the sonic anemometers on the Boulder Atmospheric Observatory tower.

  14. Evaluating EIV, OLS, and SEM Estimators of Group Slope Differences in the Presence of Measurement Error: The Single-Indicator Case

    Science.gov (United States)

    Culpepper, Steven Andrew

    2012-01-01

    Measurement error significantly biases interaction effects and distorts researchers' inferences regarding interactive hypotheses. This article focuses on the single-indicator case and shows how to accurately estimate group slope differences by disattenuating interaction effects with errors-in-variables (EIV) regression. New analytic findings were…
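
    The core of the single-indicator EIV correction is a one-line disattenuation: the OLS slope is biased toward zero by exactly the reliability of x, so dividing by the (assumed known) reliability restores the slope. A sketch with a simulated example:

    ```python
    import numpy as np

    def slopes(x, y, reliability):
        """OLS slope and its errors-in-variables correction when x is
        measured with error. reliability = var(true x) / var(observed x);
        the OLS slope is attenuated by exactly this factor."""
        b_ols = np.cov(x, y, bias=True)[0, 1] / np.var(x)
        return b_ols, b_ols / reliability

    # true slope is 1.0; x carries measurement error with variance 0.25,
    # so reliability = 1 / (1 + 0.25) = 0.8 and OLS recovers only ~0.8
    rng = np.random.default_rng(1)
    t = rng.normal(size=2000)
    x = t + rng.normal(scale=0.5, size=2000)
    y = t + rng.normal(scale=0.3, size=2000)
    print(slopes(x, y, reliability=0.8))   # (~0.8, ~1.0)
    ```

    Applied per group, the corrected slopes give an undistorted estimate of the group slope difference that the article studies.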

  15. Detailed calculation of spectral noise caused by measurement errors of Mach-Zehnder interferometer optical path phases in a spatial heterodyne spectrometer with a phase shift scheme.

    Science.gov (United States)

    Takada, Kazumasa; Seino, Mitsuyoshi; Chiba, Akito; Okamoto, Katsunari

    2013-04-20

    We calculate the root mean square (rms) value of the spectral noise caused by optical path phase measurement errors in a spatial heterodyne spectrometer (SHS) featuring a complex Fourier transformation. In our calculation the deviated phases of each Mach-Zehnder interferometer in the in-phase and quadrature states are treated as statistically independent random variables. We show that the rms value is proportional to the rms error of the phase measurement and that the proportionality coefficient is given analytically. The relationship enables us to estimate the potential performance of the SHS such as the sidelobe suppression ratio for a given measurement error. PMID:23669661

  16. Effect of sagittal plane positioning errors on measurement of the angle of inclination in dogs

    International Nuclear Information System (INIS)

    Angles of inclination were calculated from ventrodorsal (VD) and caudocranial horizontal beam (CaCrHB) radiographs of 17 anesthetized dogs, and from radiographs of left femurs of the same dogs positioned 0 degree, 10 degrees, 15 degrees, and 20 degrees from the cassette in the sagittal plane. Angles of inclination also were measured directly from radiographs of the bones rotated to correct for anteversion. Calculated angles of inclination from the bones at 10 degrees, 15 degrees, and 20 degrees from the cassette were significantly different from the 0 degree values obtained by calculation and direct measurement. Inclination angles from live dogs were consistently larger than those from 0 degree bones. Differences between angles of inclination calculated from VD and CaCrHB radiographs of live dogs were not significant

  17. Aerodynamical errors on tower mounted wind speed measurements due to the presence of the tower

    Energy Technology Data Exchange (ETDEWEB)

    Bergstroem, H. [Uppsala Univ. (Sweden). Dept. of Meteorology; Dahlberg, J.Aa. [Aeronautical Research Inst. of Sweden, Bromma (Sweden)

    1996-12-01

    Field measurements of wind speed from two lattice towers showed large differences for wind directions where the anemometers of both towers should be unaffected by any upstream obstacle. The wind speed was measured by cup anemometers mounted on booms along the side of the tower. A simple wind tunnel test indicates that the boom, for the studied conditions, could cause minor flow disturbances. A theoretical study, by means of simple 2D flow modelling of the flow around the mast, demonstrates that the tower itself could cause large wind flow disturbances. A theoretical study, based on simple treatment of the physics of motion of a cup anemometer, demonstrates that a cup anemometer is sensitive to velocity gradients across the cups and responds clearly to velocity gradients in the vicinity of the tower. Comparison of the results from the theoretical study and field tests show promising agreement. 2 refs, 8 figs

  18. [Discussion of errors and measuring strategies in morphometry using analysis of variance].

    Science.gov (United States)

    Rother, P; Jahn, W; Fitzl, G; Wallmann, T; Walter, U

    1986-01-01

    Statistical techniques known as the analysis of variance make it possible for the morphologist to plan work in such a way as to obtain quantitative data with the greatest possible economy of effort. This paper explains how to decide how many measurements to make per micrograph, how many micrographs per tissue block or organ, and how many organs or individuals are necessary to obtain results of sufficient precision. The examples given are taken from measuring volume densities of mitochondria in heart muscle cells and from cell counting in lymph nodes. Finally, we show how to determine sample sizes when the aim is to demonstrate significant differences between mean values. PMID:3569811
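
    The allocation question (how many measurements per micrograph, micrographs per organ, and so on) has a classical closed form in two-stage sampling (see, e.g., Cochran's treatment of optimum allocation): for a fixed budget, the variance of the grand mean is minimized by a subsample size that balances the cost and the variance contributed at each level. A sketch with illustrative numbers:

    ```python
    import math

    def measurements_per_block(var_within, var_between, cost_block, cost_meas):
        """Classical two-stage sampling optimum: the number of measurements
        per block (micrograph, organ, ...) minimizing the variance of the
        grand mean for a fixed budget is
            m = sqrt((cost_block / cost_meas) * (var_within / var_between))."""
        return math.sqrt((cost_block / cost_meas) * (var_within / var_between))

    # expensive blocks, cheap measurements -> measure more per block
    print(measurements_per_block(var_within=4.0, var_between=1.0,
                                 cost_block=10.0, cost_meas=1.0))  # ~6.3
    ```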

  19. From measurements errors to a new strain gauge design for composite materials

    OpenAIRE

    Mikkelsen, Lars Pilgaard; Salviato, Marco; Gili, Jacopo

    2015-01-01

    Significant over-prediction of the material stiffness in the order of 1-10% for polymer based composites has been experimentally observed and numerically determined when using strain gauges for strain measurements instead of non-contact methods such as digital image correlation or less stiff methods such as clip-on extensometers. In the present work, this has been quantified through a numerical study for three different strain gauges. In addition, a significant effect of a thin polymer coating ...

  20. The Magnitude of Errors Associated in Measuring the Loads Applied on an Assistive Device While Walking

    OpenAIRE

    Karimi, Mohammad Taghi; Jamshidi, Nima

    2012-01-01

    Measurement of the loads exerted on the limb is a fundamental part of the design of an assistive device and has typically been done using strain gauges or a transducer. Although calculating the loads applied on an orthosis from coefficients obtained by calibration is the standard approach, most researchers have determined the loads from available equations. The aim of this research is therefore to determine the accuracy of this method with respect to calibration. Some strain gauges were attached on the lateral...

  1. Error analysis for retrieval of Venus' IR surface emissivity from VIRTIS/VEX measurements

    OpenAIRE

    Kappel, David; Haus, Rainer; Arnold, Gabriele

    2015-01-01

    Venus' surface emissivity data in the infrared can serve to explore the planet's geology. The only global data with high spectral, spatial, and temporal resolution and coverage at present is supplied by nightside emission measurements acquired by the Visible and InfraRed Thermal Imaging Spectrometer VIRTIS-M-IR (1.0-5.1 μm) aboard ESA's Venus Express. A radiative transfer simulation and a retrieval algorithm can be used to determine surface emissivity in the nightside spectral transparency wi...

  2. Sensitivity of SWOT discharge algorithm to measurement errors: Testing on the Sacramento River

    Science.gov (United States)

    Durand, Michael; Andreadis, Konstantinos; Yoon, Yeosang; Rodriguez, Ernesto

    2013-04-01

    Scheduled for launch in 2019, the Surface Water and Ocean Topography (SWOT) satellite mission will utilize a Ka-band radar interferometer to measure river heights, widths, and slopes, globally, as well as characterize storage change in lakes and ocean surface dynamics with a spatial resolution ranging from 10 - 70 m, with temporal revisits on the order of a week. A discharge algorithm has been formulated to solve the inverse problem of characterizing river bathymetry and the roughness coefficient from SWOT observations. The algorithm uses a Bayesian Markov Chain estimation approach, treats rivers as sets of interconnected reaches (typically 5 km - 10 km in length), and produces best estimates of river bathymetry, roughness coefficient, and discharge, given SWOT observables. AirSWOT (the airborne version of SWOT) consists of a radar interferometer similar to SWOT, but mounted aboard an aircraft. AirSWOT spatial resolution will range from 1 - 35 m. In early 2013, AirSWOT will perform several flights over the Sacramento River, capturing river height, width, and slope at several different flow conditions. The Sacramento River presents an excellent target given that the river includes some stretches heavily affected by management (diversions, bypasses, etc.). AirSWOT measurements will be used to validate SWOT observation performance, but are also a unique opportunity for testing and demonstrating the capabilities and limitations of the discharge algorithm. This study uses HEC-RAS simulations of the Sacramento River to, first, characterize expected discharge algorithm accuracy on the Sacramento River and, second, to explore the AirSWOT measurement accuracy required to perform a successful inversion with the discharge algorithm. We focus on the sensitivity of the algorithm accuracy to the uncertainty in AirSWOT measurements of height, width, and slope.
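
    A sketch of the kind of height/width/slope-to-discharge relation such algorithms build on: the wide-channel Manning equation, wrapped in a small Monte Carlo to mimic the sensitivity analysis the abstract describes. Noise levels below are illustrative assumptions, not AirSWOT specifications.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def manning_q(A, W, S, n=0.03):
        """Wide-channel Manning's equation, Q = (1/n) A^(5/3) W^(-2/3) S^(1/2),
        a common starting point for height/width/slope-based discharge
        (hydraulic radius approximated by A/W)."""
        return (1.0 / n) * A**(5.0 / 3.0) * W**(-2.0 / 3.0) * np.sqrt(S)

    # "true" reach: area 500 m^2, width 100 m, slope 1e-4
    q_true = manning_q(500.0, 100.0, 1e-4)
    # perturb with measurement-like noise (sigmas are illustrative)
    A = 500.0 + 10.0 * rng.standard_normal(10000)   # ~10 cm height error x width
    W = 100.0 * (1 + 0.05 * rng.standard_normal(10000))
    S = 1e-4 + 1e-5 * rng.standard_normal(10000)
    q = manning_q(A, W, S)
    print(q_true, q.mean(), q.std() / q_true)       # relative discharge error
    ```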

  3. Correcting Positional Errors in Shore-Based Theodolite Measurements of Animals at Sea

    Directory of Open Access Journals (Sweden)

    Ophélie Sagnol

    2014-01-01

    Full Text Available Determining the position of animals at sea can be particularly difficult and yet, accurate range and position of animals at sea are essential to answer a wide range of biological questions. Shore-based theodolite techniques have been used in a number of studies to examine marine mammal movement patterns and habitat use, offering reliable position measurements. In this study we explored the accuracy of theodolite measurements by comparing positional information of the same objects using two independent techniques: a shore-based theodolite station and an onboard GPS over a range of 25 km from the shore-based station. The technique was developed to study the habitat use of sperm whales (Physeter macrocephalus) off Kaikoura, New Zealand. We observed that the position accuracy fell rapidly with an increase in range from the shore-based station. Results showed that the horizontal angle was accurately determined, but this was not the case for the vertical angle. We calibrated the position of objects at sea with a regression-based correction to fit the difference in distance between simultaneously recorded theodolite fixes and GPS positions. This approach revealed the necessity to calibrate theodolite measurements with objects at sea of known position.
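
    The basic range computation behind such stations, in a minimal form that ignores refraction and tidal height changes (both matter in practice): flat-earth range from the depression angle, plus a first-order earth-curvature term that becomes substantial at the 25 km ranges studied here. This is a sketch under those assumptions, not the study's calibrated model.

    ```python
    import math

    def range_from_station(station_height_m, depression_rad,
                           earth_radius_m=6.371e6):
        """Distance to an object at sea level, given the depression angle
        below the horizontal measured at a station of known height.
        Flat-earth: d = h / tan(a). The earth's surface drops by roughly
        d^2 / (2R) at distance d, which acts like extra station height;
        one iteration of that correction is applied here."""
        d_flat = station_height_m / math.tan(depression_rad)
        drop = d_flat**2 / (2.0 * earth_radius_m)
        return d_flat + drop / math.tan(depression_rad)

    # 100 m station, 0.5 degree depression: curvature adds about 10%
    print(range_from_station(100.0, math.radians(0.5)))
    ```

    The rapid growth of the curvature term with range is one reason the vertical angle dominates the position error at long distances, consistent with the abstract's finding.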

  4. A Practical Comparison of the Bivariate Probit and Linear IV Estimators

    OpenAIRE

    Chiburis, Richard C.; Das, Jishnu; Lokshin, Michael

    2011-01-01

    This paper presents asymptotic theory and Monte-Carlo simulations comparing maximum-likelihood bivariate probit and linear instrumental variables estimators of treatment effects in models with a binary endogenous treatment and binary outcome. The three main contributions of the paper are (a) clarifying the relationship between the Average Treatment Effect obtained in the bivariate probit m...

  5. Behavior of bivariate interpolation operators at points of discontinuity of the first kind

    CERN Document Server

    Campiti, Michele; Tacelli, Cristian

    2011-01-01

    We introduce an index of convergence for double sequences of real numbers. This index is used to describe the behaviour of some bivariate interpolation sequences at points of discontinuity of the first kind. We consider in particular the case of bivariate Lagrange and Shepard operators.

  6. On the construction of bivariate linear exponential distribution with FGM family

    OpenAIRE

    El-Damcese, M. A.; Ramadan, Dina. A.

    2015-01-01

    In this paper we propose a Farlie-Gumbel-Morgenstern (FGM) family of bivariate linear exponential distributions generated from given marginals. The properties of the FGM family therefore carry over to the proposed bivariate distribution. We study some important statistical properties and results for the new distribution.
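
    A sketch of the construction: the FGM joint CDF with linear failure-rate ("linear exponential") margins, plus closed-form sampling from the FGM copula by conditional inversion. The parameterization is the standard FGM form; parameter names are illustrative.

    ```python
    import numpy as np

    def fgm_joint_cdf(x, y, a_x, b_x, a_y, b_y, alpha):
        """FGM bivariate CDF H(x,y) = F(x)G(y)[1 + alpha(1-F(x))(1-G(y))],
        |alpha| <= 1, with linear failure-rate marginals
        S(t) = exp(-(a t + b t^2 / 2))."""
        F = 1.0 - np.exp(-(a_x * x + 0.5 * b_x * x**2))
        G = 1.0 - np.exp(-(a_y * y + 0.5 * b_y * y**2))
        return F * G * (1.0 + alpha * (1.0 - F) * (1.0 - G))

    def fgm_sample(n, alpha, rng=None):
        """Sample (U,V) from the FGM copula: the conditional CDF in v,
        C(v|u) = v + alpha(1-2u) v(1-v), is quadratic and inverts exactly."""
        rng = rng or np.random.default_rng()
        u, w = rng.uniform(size=n), rng.uniform(size=n)
        a = alpha * (1.0 - 2.0 * u)
        b = 1.0 + a
        v = np.where(np.abs(a) < 1e-12, w,
                     (b - np.sqrt(b * b - 4.0 * a * w)) / (2.0 * a))
        return u, v
    ```

    Feeding the sampled (U,V) through the inverse marginal CDFs gives bivariate linear exponential draws.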

  7. Predicting the Size of Sunspot Cycle 24 on the Basis of Single- and Bi-Variate Geomagnetic Precursor Methods

    Science.gov (United States)

    Wilson, Robert M.; Hathaway, David H.

    2009-01-01

    Examined are single- and bi-variate geomagnetic precursors for predicting the maximum amplitude (RM) of a sunspot cycle several years in advance. The best single-variate fit is one based on the average of the ap index 36 mo prior to cycle minimum occurrence (E(Rm)), having a coefficient of correlation (r) equal to 0.97 and a standard error of estimate (se) equal to 9.3. Presuming cycle 24 not to be a statistical outlier and its minimum in March 2008, the fit suggests cycle 24's RM to be about 69 +/- 20 (the 90% prediction interval). The weighted mean prediction of 11 statistically important single-variate fits is 116 +/- 34. The best bi-variate fit is one based on the maximum and minimum values of the 12-mma of the ap index; i.e., APM# and APm*, where # means the value post-E(RM) for the preceding cycle and * means the value in the vicinity of cycle minimum, having r = 0.98 and se = 8.2. It predicts cycle 24's RM to be about 92 +/- 27. The weighted mean prediction of 22 statistically important bi-variate fits is 112 +/- 32. Thus, cycle 24's RM is expected to lie somewhere within the range of about 82 to 144. Also examined are the late-cycle 23 behaviors of geomagnetic indices and solar wind velocity in comparison to the mean behaviors of cycles 20-23 and the geomagnetic indices of cycle 14 (RM = 64.2), the weakest sunspot cycle of the modern era.
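
    A generic sketch of the machinery behind statements like "92 +/- 27": a two-predictor least-squares fit with the classical prediction interval for a new point. This reproduces the method only; the data and coefficients of the paper are not used.

    ```python
    import numpy as np
    from scipy import stats

    def fit_predict_interval(X, y, x_new, level=0.90):
        """OLS with two precursor variables (e.g., an APM#-like and an
        APm*-like index) and the classical prediction interval for a new
        observation."""
        n = len(y)
        A = np.column_stack([np.ones(n), X])
        beta, res, rank, _ = np.linalg.lstsq(A, y, rcond=None)
        dof = n - A.shape[1]
        rss = float(res[0]) if res.size else float(np.sum((y - A @ beta)**2))
        s2 = rss / dof
        a = np.concatenate([[1.0], np.asarray(x_new, float)])
        var_pred = s2 * (1.0 + a @ np.linalg.inv(A.T @ A) @ a)
        t = stats.t.ppf(0.5 + level / 2.0, dof)
        yhat = float(a @ beta)
        half = t * float(np.sqrt(var_pred))
        return yhat, yhat - half, yhat + half
    ```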

  8. The focus-to-detector distance as a source of systematical errors in the measurement of Chaoul therapy units.

    Science.gov (United States)

    Zaránd, P

    1980-09-01

    The skin exposure rates measured on 22 Chaoul units in two consecutive years were compared and their variance was analysed. The statistical fluctuation of the ionization method, 3.1%, was smaller by a factor of about 2 to 2.5 than the variations due to the lack of reproducibility of the Chaoul units. The authors observed systematic errors among exposure rate measurements performed at different focus-to-detector distances. The effective source-to-detector distance differs between ionization chambers: it is the sum of the nominal focus-to-detector distance and a geometrical constant. For a particular chamber, the geometrical constant depends only weakly on the front wall thickness and on the focus-to-detector distance. Sufficient standardization of both the calibration procedure and the construction of ionization chambers may help to avoid this effect.

  9. Focus-to-detector distance as a source of systematical errors in the measurement of Chaoul therapy units

    Energy Technology Data Exchange (ETDEWEB)

    Zarand, P. (Orszagos Roentgen es Sugarfizikai Intezet, Budapest (Hungary); Foevarosi Onkoradiologiai Intezet Weil Emil Korhaz, Budapest (Hungary). Municipal Inst. of Oncoradiology)

    1980-09-01

    The skin exposure rates measured on 22 Chaoul units in two consecutive years were compared and their variance was analysed. The statistical fluctuation of the ionization method, 3.1%, was smaller by a factor of about 2 to 2.5 than the variations due to the lack of reproducibility of the Chaoul units. The author observed systematic errors among exposure rate measurements performed at different focus-to-detector distances. The effective source-to-detector distance differs between ionization chambers: it is the sum of the nominal focus-to-detector distance and a geometrical constant. For a particular chamber, the geometrical constant depends only weakly on the front wall thickness and on the focus-to-detector distance. Sufficient standardization of both the calibration procedure and the construction of ionization chambers may help to avoid this effect.

  10. Effect of hygroscopic growth on the aerosol light-scattering coefficient: A review of measurements, techniques and error sources

    Science.gov (United States)

    Titos, G.; Cazorla, A.; Zieger, P.; Andrews, E.; Lyamani, H.; Granados-Muñoz, M. J.; Olmo, F. J.; Alados-Arboledas, L.

    2016-09-01

    Knowledge of the scattering enhancement factor, f(RH), is important for an accurate description of direct aerosol radiative forcing. This factor is defined as the ratio between the scattering coefficient at enhanced relative humidity, RH, to a reference (dry) scattering coefficient. Here, we review the different experimental designs used to measure the scattering coefficient at dry and humidified conditions as well as the procedures followed to analyze the measurements. Several empirical parameterizations for the relationship between f(RH) and RH have been proposed in the literature. These parameterizations have been reviewed and tested using experimental data representative of different hygroscopic growth behavior and a new parameterization is presented. The potential sources of error in f(RH) are discussed. A Monte Carlo method is used to investigate the overall measurement uncertainty, which is found to be around 20-40% for moderately hygroscopic aerosols. The main factors contributing to this uncertainty are the uncertainty in RH measurement, the dry reference state and the nephelometer uncertainty. A literature survey of nephelometry-based f(RH) measurements is presented as a function of aerosol type. In general, the highest f(RH) values were measured in clean marine environments, with pollution having a major influence on f(RH). Dust aerosol tended to have the lowest reported hygroscopicity of any of the aerosol types studied. Major open questions and suggestions for future research priorities are outlined.
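
    One widely used single-parameter f(RH) parameterization reviewed in this literature is the gamma power law; a sketch fitting it to humidograph-style points (the data values below are invented for the example):

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def f_rh(rh, gamma, rh_dry=0.0):
        """Gamma power-law scattering enhancement,
        f(RH) = ((1 - RH) / (1 - RH_dry))**(-gamma), with RH as a fraction;
        larger gamma means more hygroscopic aerosol."""
        return ((1.0 - rh) / (1.0 - rh_dry))**(-gamma)

    # illustrative humidograph: scattering ratio vs relative humidity
    rh = np.array([0.40, 0.55, 0.70, 0.80, 0.85, 0.90])
    ratio = np.array([1.10, 1.25, 1.55, 1.95, 2.30, 2.90])
    popt, pcov = curve_fit(lambda r, g: f_rh(r, g), rh, ratio, p0=[0.5])
    print(popt)   # fitted gamma
    ```

    Because f(RH) diverges as RH approaches 1, the uncertainty in the RH measurement itself dominates the error budget at high humidity, which is consistent with the 20-40% overall uncertainty the review reports.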

  11. Roll-over test--errors in interpretation, due to inaccurate blood pressure measurements.

    Science.gov (United States)

    Schoenfeld, A; Ziv, I; Tzeel, A; Ovadia, J

    1985-01-01

    In order to make the earliest possible prediction of the type of woman likely to develop pregnancy-induced hypertension (PIH), one hundred and ninety-six primigravidas underwent a roll-over test (ROT) during wk 28-32 of their pregnancy. Blood pressure (BP) readings were taken with a standard 12 cm cuff as well as with cuffs adapted to various arm circumferences. We found that the prediction rate of ROT readings with a standard 12 cm cuff was relatively low (38.5%) as compared with Gant's study (94%) (Amer. J. Obstet. Gynec., 120 (1974) 1). When a suitably sized cuff was used, the prediction rate dropped (to 14.7%). Data analysis at term for the whole population of this study shows that, by measuring with a standard 12 cm cuff, 10.2% of the women were found to have PIH, whereas measuring with a suitable cuff showed PIH in only 2.55% of the cases (1:4 ratio). We suggest that the low prediction rates in this and other studies demonstrate that the ROT test is not sufficiently reliable as a tool for predicting which women are liable to develop PIH, but there is definitely enough in it to predict which group will not develop PIH (in this study 89-93%). It has been recommended that ROT be considered only as a test of possible reliability. It should be done according to proper criteria for BP measuring, and a repeat ROT should be considered after several days before starting any kind of treatment. PMID:3979651

  12. Forensic photo/videogrammetry: Monte Carlo simulation of pixel and measurement errors

    Science.gov (United States)

    Bijhold, Jurrien; Geradts, Zeno J.

    1999-02-01

    In this paper, we present some results from a study in progress on methods for the measurement of the length of a robber in a surveillance video image. A calibration tool was constructed for the calibration of the camera. Standard procedures for computing the lens distortion, image projection parameters and the length of the robber have been implemented in Mathematica. These procedures are based on the use of pixel coordinates of markers on the calibration tool, the robber's head (and, optionally, his feet) and an estimation of his position in the coordinate system that is defined by the calibration tool.
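
    A stripped-down version of such a Monte Carlo: propagate Gaussian pixel-marking errors through a similar-triangles height estimate. The real procedure uses full lens-distortion and projection models; all numbers below are illustrative assumptions.

    ```python
    import numpy as np

    rng = np.random.default_rng(42)

    # single-view estimate: the subject's image height is compared with a
    # calibration marker of known height at (assumed) equal depth
    H_REF = 2.000          # marker height in metres
    PIX_REF = 400.0        # marker image height in pixels
    PIX_HEAD = 356.0       # subject image height in pixels
    SIGMA_PIX = 2.0        # marking error per measurement, in pixels

    n = 100_000
    ref = PIX_REF + SIGMA_PIX * rng.standard_normal(n)
    head = PIX_HEAD + SIGMA_PIX * rng.standard_normal(n)
    h = H_REF * head / ref
    print(f"height = {h.mean():.3f} m +/- {h.std():.3f} m")
    ```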

  13. Phase shift errors in the theory and practice of surface intensity measurements

    Science.gov (United States)

    Mcgary, M. C.; Crocker, M. J.

    1982-01-01

    The surface acoustical intensity method (sometimes known as the microphone-accelerometer cross-spectral method) is a relatively new noise source/path identification tool. Several researchers have had difficulties implementing this method because of instrumentation phase mis-match. A simple technique for measuring and correcting instrumentation phase mis-match has been developed. This new technique has been tested recently on a noise source identification problem of practical interest. The results of the experiments indicate that the surface acoustic intensity method produces reliable data and can be applied to a variety of noise source/path problems.

  14. Forensic comparison and matching of fingerprints: using quantitative image measures for estimating error rates through understanding and predicting difficulty.

    Science.gov (United States)

    Kellman, Philip J; Mnookin, Jennifer L; Erlikhman, Gennady; Garrigan, Patrick; Ghose, Tandra; Mettler, Everett; Charlton, David; Dror, Itiel E

    2014-01-01

    Latent fingerprint examination is a complex task that, despite advances in image processing, still fundamentally depends on the visual judgments of highly trained human examiners. Fingerprints collected from crime scenes typically contain less information than fingerprints collected under controlled conditions. Specifically, they are often noisy and distorted and may contain only a portion of the total fingerprint area. Expertise in fingerprint comparison, like other forms of perceptual expertise, such as face recognition or aircraft identification, depends on perceptual learning processes that lead to the discovery of features and relations that matter in comparing prints. Relatively little is known about the perceptual processes involved in making comparisons, and even less is known about what characteristics of fingerprint pairs make particular comparisons easy or difficult. We measured expert examiner performance and judgments of difficulty and confidence on a new fingerprint database. We developed a number of quantitative measures of image characteristics and used multiple regression techniques to discover objective predictors of error as well as perceived difficulty and confidence. A number of useful predictors emerged, and these included variables related to image quality metrics, such as intensity and contrast information, as well as measures of information quantity, such as the total fingerprint area. Also included were configural features that fingerprint experts have noted, such as the presence and clarity of global features and fingerprint ridges. Within the constraints of the overall low error rates of experts, a regression model incorporating the derived predictors demonstrated reasonable success in predicting objective difficulty for print pairs, as shown both in goodness of fit measures to the original data set and in a cross validation test. The results indicate the plausibility of using objective image metrics to predict expert performance and
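
    A sketch of the regression step described above: predicting judged difficulty from image metrics by least squares (with a small ridge term for numerical stability) and reporting R². The feature names in the docstring are placeholders, not the study's exact predictor set.

    ```python
    import numpy as np

    def fit_difficulty(features, difficulty, ridge=1e-3):
        """Linear regression of judged difficulty on image metrics (e.g.,
        contrast, intensity, total print area; placeholder predictors).
        features: (n_pairs, n_metrics); difficulty: (n_pairs,).
        Returns (coefficients incl. intercept, R^2)."""
        X = np.column_stack([np.ones(len(difficulty)), features])
        beta = np.linalg.solve(X.T @ X + ridge * np.eye(X.shape[1]),
                               X.T @ difficulty)
        pred = X @ beta
        ss_res = np.sum((difficulty - pred)**2)
        ss_tot = np.sum((difficulty - difficulty.mean())**2)
        return beta, 1.0 - ss_res / ss_tot
    ```

    The cross-validation the authors report would simply refit this model on a training split and evaluate R² on held-out print pairs.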

  15. Forensic comparison and matching of fingerprints: using quantitative image measures for estimating error rates through understanding and predicting difficulty.

    Directory of Open Access Journals (Sweden)

    Philip J Kellman

    Full Text Available Latent fingerprint examination is a complex task that, despite advances in image processing, still fundamentally depends on the visual judgments of highly trained human examiners. Fingerprints collected from crime scenes typically contain less information than fingerprints collected under controlled conditions. Specifically, they are often noisy and distorted and may contain only a portion of the total fingerprint area. Expertise in fingerprint comparison, like other forms of perceptual expertise, such as face recognition or aircraft identification, depends on perceptual learning processes that lead to the discovery of features and relations that matter in comparing prints. Relatively little is known about the perceptual processes involved in making comparisons, and even less is known about what characteristics of fingerprint pairs make particular comparisons easy or difficult. We measured expert examiner performance and judgments of difficulty and confidence on a new fingerprint database. We developed a number of quantitative measures of image characteristics and used multiple regression techniques to discover objective predictors of error as well as perceived difficulty and confidence. A number of useful predictors emerged, and these included variables related to image quality metrics, such as intensity and contrast information, as well as measures of information quantity, such as the total fingerprint area. Also included were configural features that fingerprint experts have noted, such as the presence and clarity of global features and fingerprint ridges. Within the constraints of the overall low error rates of experts, a regression model incorporating the derived predictors demonstrated reasonable success in predicting objective difficulty for print pairs, as shown both in goodness of fit measures to the original data set and in a cross validation test. The results indicate the plausibility of using objective image metrics to predict expert

  16. Measuring software timing errors in the presentation of visual stimuli in cognitive neuroscience experiments.

    Directory of Open Access Journals (Sweden)

    Pablo Garaizar

    Full Text Available Because of the features provided by an abundance of specialized experimental software packages, personal computers have become prominent and powerful tools in cognitive research. Most of these programs have mechanisms to control the precision and accuracy with which visual stimuli are presented as well as the response times. However, external factors, often related to the technology used to display the visual information, can have a noticeable impact on the actual performance and may be easily overlooked by researchers. The aim of this study is to measure the precision and accuracy of the timing mechanisms of some of the most popular software packages used in a typical laboratory scenario in order to assess whether presentation times configured by researchers do not differ from measured times more than what is expected due to the hardware limitations. Despite the apparent precision and accuracy of the results, important issues related to timing setups in the presentation of visual stimuli were found, and they should be taken into account by researchers in their experiments.
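
    A minimal sketch of measuring timing precision from software alone, assuming a busy-wait scheduler and no synchronization to the display's vertical refresh (which real stimulus-presentation software does use; this is not any package's actual mechanism):

    ```python
    import time
    import statistics

    def frame_jitter(target_ms=16.7, frames=300):
        """Measure how precisely a busy-wait loop reproduces a nominal frame
        interval; returns (mean, stdev) of the achieved intervals in ms."""
        intervals, last = [], time.perf_counter()
        for _ in range(frames):
            while (time.perf_counter() - last) * 1000.0 < target_ms:
                pass                     # busy-wait: high CPU, best precision
            now = time.perf_counter()
            intervals.append((now - last) * 1000.0)
            last = now
        return statistics.mean(intervals), statistics.stdev(intervals)

    print(frame_jitter())
    ```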

  17. Comparison and error analysis of remotely measured waveheight by high frequency ground wave radar

    Institute of Scientific and Technical Information of China (English)

    2006-01-01

    High frequency ground wave radar (HFGWR) has unique advantages in the survey of dynamical factors such as sea surface currents, waves and sea surface wind in coastal sea areas. Compared with marine satellite remote sensing, it involves lower cost and offers higher measuring accuracy, spatial resolution and sampling frequency. High frequency ground wave radar is a new land-based remote sensing instrument with superior coverage and great application potential. This paper reviews the development history and application status of high frequency ground wave radar and introduces its remote-sensing principle and the methods used to retrieve offshore current, wave and wind fields. Based on the authors' "863 Project" work, this paper recounts the comparison and verification of radar remote-sensing values, the physical calibration of radar-measured data and methods to control the quality of radar-sensing data. The authors discuss the precision of the radar retrievals of the offshore current field and the application of the radar data in data assimilation.

  18. Galaxy And Mass Assembly (GAMA): autoz spectral redshift measurements, confidence and errors

    CERN Document Server

    Baldry, I K; Bauer, A E; Bland-Hawthorn, J; Brough, S; Cluver, M E; Croom, S M; Davies, L J M; Driver, S P; Gunawardhana, M L P; Holwerda, B W; Hopkins, A M; Kelvin, L S; Liske, J; Lopez-Sanchez, A R; Loveday, J; Norberg, P; Peacock, J; Robotham, A S G; Taylor, E N

    2014-01-01

    The Galaxy And Mass Assembly (GAMA) survey has obtained spectra of over 230000 targets using the Anglo-Australian Telescope. To homogenise the redshift measurements and improve the reliability, a fully automatic redshift code was developed (autoz). The measurements were made using a cross-correlation method for both absorption-line and emission-line spectra. Large deviations in the high-pass filtered spectra are partially clipped in order to be robust against uncorrected artefacts and to reduce the weight given to single-line matches. A single figure of merit (FOM) was developed that puts all template matches onto a similar confidence scale. The redshift confidence as a function of the FOM was fitted with a tanh function using a maximum likelihood method applied to repeat observations of targets. The method could be adapted to provide robust automatic redshifts for other large galaxy redshift surveys. For the GAMA survey, there was a substantial improvement in the reliability of assigned redshifts and in the ...
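
    The tanh confidence calibration described in the abstract can be fit by Bernoulli maximum likelihood on repeat observations; a sketch under the assumed parameterization p = (1 + tanh((FOM - c)/w)) / 2 (parameter names are illustrative):

    ```python
    import numpy as np
    from scipy.optimize import minimize

    def fit_confidence(fom, correct):
        """Fit p(correct | FOM) = (1 + tanh((FOM - c)/w)) / 2 by maximizing
        the Bernoulli likelihood. fom: figure-of-merit values; correct: 1 if
        the repeat observations yielded the same redshift, else 0."""
        def nll(par):
            c, w = par
            p = 0.5 * (1.0 + np.tanh((fom - c) / w))
            p = np.clip(p, 1e-9, 1.0 - 1e-9)
            return -np.sum(correct * np.log(p) + (1 - correct) * np.log(1 - p))
        res = minimize(nll, x0=[np.median(fom), 1.0], method='Nelder-Mead')
        return res.x   # (centre c, width w)
    ```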

  19. Effect of the Measured Pulses Count on the Methodical Error of the Air Radio Altimeter

    Directory of Open Access Journals (Sweden)

    Ján Labun

    2010-04-01

    Full Text Available Radio altimeters are based on the principle of radio location of the earth's surface using a frequency-modulated standing wave. The relatively simple method of measurement consists in the evaluation of the number of pulses generated as a result of the mixing of the transmitted and received signals. Such a change in the number of modulated pulses within a certain altitude interval, however, is not so simple and is a determinant issue in defining the precision of the radio altimeter. Being knowledgeable of this law in a wider context enables us to enter into discussion on the possibilities of further increasing the precision of measuring low altitudes. The article deals with the law underlying the change in the number of radio altimeter pulses with the changing altitude measured.
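
    For a classical FMCW altimeter (assuming the textbook geometry, not necessarily this paper's specific scheme), the "law" in question is the integer pulse count, which quantizes the altitude reading:

    ```python
    # Over one modulation period the mixer output contains
    #     N = 4 * dF * H / c
    # zero crossings, where dF is the frequency deviation and H the
    # altitude. Since N is counted as an integer, the reading changes in
    # steps of dH = c / (4 * dF) -- the methodical (quantization) error.
    C = 3.0e8   # speed of light, m/s

    def pulse_count(height_m, deviation_hz):
        return int(4.0 * deviation_hz * height_m / C)

    def step_error_m(deviation_hz):
        return C / (4.0 * deviation_hz)

    # 100 MHz deviation: ~133 pulses at 100 m, altitude steps of 0.75 m
    print(pulse_count(100.0, 100e6), step_error_m(100e6))
    ```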

  20. Effects of three heavy metals on the bacteria growth kinetics. A bivariate model for toxicological assessment

    Energy Technology Data Exchange (ETDEWEB)

    Rial, Diego; Vazquez, Jose Antonio; Murado, Miguel Anxo [Instituto de Investigacions Marinas (CSIC), Vigo (ES). Grupo de Reciclado y Valorizacion de Materiales Residuales (REVAL)

    2011-05-15

    The effects of three heavy metals (Co, Ni and Cd) on the growth kinetics of five bacterial strains with different characteristics (Pseudomonas sp., Phaeobacter sp. strain 27-4, Listonella anguillarum, Carnobacterium piscicola and Leuconostoc mesenteroides subsp. lysis) were studied in a batch system. A bivariate model, a function of time and dose, is proposed to describe simultaneously all the kinetic profiles obtained by incubating a microorganism at increasing concentrations of individual metals. This model combines the logistic equation for describing growth with a modification of the cumulative Weibull function for describing the dose-dependent variations of the growth parameters. The comprehensive model thus obtained - which minimizes the effects of the experimental error - was statistically significant in all the studied cases, and it raises doubts about toxicological evaluations that are based on a single growth parameter, especially if it is not obtained from a kinetic equation. In lactic acid bacteria cultures (C. piscicola and L. mesenteroides), Cd induced remarkable differences in the yield and time course of characteristic metabolites. A global parameter is defined (ED_{50,τ}: the dose of toxic chemical that reduces the biomass of a culture by 50% compared with that produced by the control at the time corresponding to its semi-maximum biomass) that allows toxic effects on growth kinetics to be compared using a single value. (orig.)
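
    A sketch of a bivariate dose-time surface in the spirit of the abstract: logistic growth in time with the asymptote scaled down by a cumulative-Weibull dose term. The actual model lets several growth parameters vary with dose; here only the asymptote does, and all parameter values are illustrative.

    ```python
    import numpy as np

    def growth_surface(t, dose, K=1.0, r=0.5, t_half=10.0,
                       m=1.0, tau=5.0, shape=2.0):
        """Illustrative logistic-by-Weibull surface:
            B(t, D) = K * [1 - m(1 - exp(-(D/tau)^shape))]
                        / (1 + exp(r * (t_half - t)))
        t = time, dose = toxicant concentration; tau, shape set the Weibull
        dose response, m its maximum fractional inhibition."""
        k_dose = K * (1.0 - m * (1.0 - np.exp(-(dose / tau)**shape)))
        return k_dose / (1.0 + np.exp(r * (t_half - t)))

    t = np.linspace(0.0, 30.0, 7)
    for d in (0.0, 2.5, 5.0, 10.0):
        print(d, np.round(growth_surface(t, d), 3))
    ```

    Fitting all dose profiles jointly to such a surface, rather than fitting each curve separately, is what lets the model average out experimental error and yields a single global summary such as ED_{50,τ}.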