WorldWideScience

Sample records for error estimates double

  1. Error estimation for pattern recognition

    CERN Document Server

    Braga-Neto, U.

    2015-01-01

    This book is the first of its kind to discuss error estimation with a model-based approach. From the basics of classifiers and error estimators to more specialized classifiers, it covers important topics and essential issues pertaining to the scientific validity of pattern classification. Additional features of the book include:
    * The latest results on the accuracy of error estimation
    * Performance analysis of resubstitution, cross-validation, and bootstrap error estimators using analytical and simulation approaches
    * Highly interactive computer-based exercises and end-of-chapter problems

  2. Frequentist Standard Errors of Bayes Estimators.

    Science.gov (United States)

    Lee, DongHyuk; Carroll, Raymond J; Sinha, Samiran

    2017-09-01

    Frequentist standard errors are a measure of uncertainty of an estimator, and the basis for statistical inferences. Frequentist standard errors can also be derived for Bayes estimators. However, except in special cases, the computation of the standard error of Bayesian estimators requires bootstrapping, which in combination with Markov chain Monte Carlo (MCMC) can be highly time consuming. We discuss an alternative approach for computing frequentist standard errors of Bayesian estimators, including importance sampling. Through several numerical examples we show that our approach can be much more computationally efficient than the standard bootstrap.
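
    A minimal Python sketch of the bootstrap baseline that the paper improves upon: the Bayes estimator is the posterior mean under a conjugate normal-normal toy model (closed-form, so no MCMC is needed), and its frequentist standard error is the spread of the estimator over resampled datasets. All numbers are illustrative, and the paper's faster importance-sampling alternative is not shown.

        import numpy as np

        rng = np.random.default_rng(0)

        # Simulated data: n observations from N(theta, sigma^2), sigma known.
        n, sigma, theta_true = 50, 2.0, 1.0
        y = rng.normal(theta_true, sigma, n)

        # Conjugate normal prior N(mu0, tau0^2): the posterior mean is
        # available in closed form for this toy model.
        mu0, tau0 = 0.0, 10.0

        def bayes_estimate(data):
            """Posterior mean of theta under the normal-normal model."""
            prec = 1.0 / tau0**2 + len(data) / sigma**2
            return (mu0 / tau0**2 + data.sum() / sigma**2) / prec

        # Frequentist standard error via the nonparametric bootstrap:
        # resample the data and recompute the Bayes estimator each time.
        B = 2000
        boot = np.array([bayes_estimate(rng.choice(y, size=n, replace=True))
                         for _ in range(B)])
        print("Bayes estimate:", bayes_estimate(y))
        print("bootstrap standard error:", boot.std(ddof=1))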

  3. Estimation of analysis and forecast error variances

    Directory of Open Access Journals (Sweden)

    Malaquias Peña

    2014-11-01

    Accurate estimates of error variances in numerical analyses and forecasts (i.e. differences between analysis or forecast fields and nature on the resolved scales) are critical for the evaluation of forecasting systems, the tuning of data assimilation (DA) systems and the proper initialisation of ensemble forecasts. The problem is made difficult by errors in observations and the difficulty in their estimation, by the fact that estimates of analysis errors derived via DA schemes are influenced by the same assumptions as those used to create the analysis fields themselves, and by the presumed but unknown correlation between analysis and forecast errors. In this paper, an approach is introduced for the unbiased estimation of analysis and forecast errors. The method is independent of any assumption or tuning parameter used in DA schemes. It combines information from differences between forecast and analysis fields (‘perceived forecast errors’) with prior knowledge regarding the time evolution of (1) forecast error variance and (2) correlation between errors in analyses and forecasts. The quality of the error estimates, given the validity of the prior relationships, depends on the sample size of independent measurements of perceived errors. In a simulated forecast environment, the method is demonstrated to reproduce the true analysis and forecast error within predicted error bounds. The method is then applied to forecasts from four leading numerical weather prediction centres to assess the performance of their corresponding DA and modelling systems. Error variance estimates are qualitatively consistent with earlier studies regarding the performance of the forecast systems compared. The estimated correlation between forecast and analysis errors is found to be a useful diagnostic of the performance of observing and DA systems. In the case of significant model-related errors, a methodology to decompose initial value and model-related forecast errors is also proposed.

  4. Statistical errors in Monte Carlo estimates of systematic errors

    International Nuclear Information System (INIS)

    Roe, Byron P.

    2007-01-01

    For estimating the effects of a number of systematic errors on a data sample, one can generate Monte Carlo (MC) runs with systematic parameters varied and examine the change in the desired observed result. Two methods are often used. In the unisim method, the systematic parameters are varied one at a time by one standard deviation, each parameter corresponding to an MC run. In the multisim method, each MC run has all of the parameters varied; the amount of variation is chosen from the expected distribution of each systematic parameter, usually assumed to be a normal distribution. The variance of the overall systematic error determination is derived for each of the two methods and comparisons are made between them. If one focuses not on the error in the prediction of an individual systematic error, but on the overall error due to all systematic errors in the error matrix element in data bin m, the number of events needed is strongly reduced because of the averaging effect over all of the errors. For the simple models presented here, the multisim method was far better if the statistical error in the MC samples was larger than an individual systematic error, while for the reverse case, the unisim method was better. Exact formulas and formulas for the simple toy models are presented so that realistic calculations can be made. The calculations in the present note are valid if the errors are in a linear region. If that region extends sufficiently far, one can have the unisims or multisims correspond to k standard deviations instead of one. This reduces the number of events required by a factor of k²

  5. Statistical errors in Monte Carlo estimates of systematic errors

    Science.gov (United States)

    Roe, Byron P.

    2007-01-01

    For estimating the effects of a number of systematic errors on a data sample, one can generate Monte Carlo (MC) runs with systematic parameters varied and examine the change in the desired observed result. Two methods are often used. In the unisim method, the systematic parameters are varied one at a time by one standard deviation, each parameter corresponding to an MC run. In the multisim method, each MC run has all of the parameters varied; the amount of variation is chosen from the expected distribution of each systematic parameter, usually assumed to be a normal distribution. The variance of the overall systematic error determination is derived for each of the two methods and comparisons are made between them. If one focuses not on the error in the prediction of an individual systematic error, but on the overall error due to all systematic errors in the error matrix element in data bin m, the number of events needed is strongly reduced because of the averaging effect over all of the errors. For the simple models presented here, the multisim method was far better if the statistical error in the MC samples was larger than an individual systematic error, while for the reverse case, the unisim method was better. Exact formulas and formulas for the simple toy models are presented so that realistic calculations can be made. The calculations in the present note are valid if the errors are in a linear region. If that region extends sufficiently far, one can have the unisims or multisims correspond to k standard deviations instead of one. This reduces the number of events required by a factor of k². The specific terms unisim and multisim were coined by Peter Meyers and Steve Brice, respectively, for the MiniBooNE experiment. However, the concepts have been developed over time and have been in general use for some time.

  6. Statistical errors in Monte Carlo estimates of systematic errors

    Energy Technology Data Exchange (ETDEWEB)

    Roe, Byron P. [Department of Physics, University of Michigan, Ann Arbor, MI 48109 (United States)]. E-mail: byronroe@umich.edu

    2007-01-01

    For estimating the effects of a number of systematic errors on a data sample, one can generate Monte Carlo (MC) runs with systematic parameters varied and examine the change in the desired observed result. Two methods are often used. In the unisim method, the systematic parameters are varied one at a time by one standard deviation, each parameter corresponding to an MC run. In the multisim method, each MC run has all of the parameters varied; the amount of variation is chosen from the expected distribution of each systematic parameter, usually assumed to be a normal distribution. The variance of the overall systematic error determination is derived for each of the two methods and comparisons are made between them. If one focuses not on the error in the prediction of an individual systematic error, but on the overall error due to all systematic errors in the error matrix element in data bin m, the number of events needed is strongly reduced because of the averaging effect over all of the errors. For the simple models presented here, the multisim method was far better if the statistical error in the MC samples was larger than an individual systematic error, while for the reverse case, the unisim method was better. Exact formulas and formulas for the simple toy models are presented so that realistic calculations can be made. The calculations in the present note are valid if the errors are in a linear region. If that region extends sufficiently far, one can have the unisims or multisims correspond to k standard deviations instead of one. This reduces the number of events required by a factor of k².
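
    The unisim/multisim contrast can be made concrete with a toy observable that depends linearly on a few systematic parameters. The Python sketch below (illustrative sensitivities, run counts and noise levels, not the paper's models) compares both variance estimates against the known truth.

        import numpy as np

        rng = np.random.default_rng(1)

        # Toy model: an observable depends linearly on k systematic
        # parameters, each with unit prior standard deviation.
        k = 5
        a = rng.normal(size=k)              # sensitivities of the observable

        def observable(s, n_mc=10_000):
            """MC estimate of the observable at systematic offsets s;
            the finite MC sample adds statistical noise ~ 1/sqrt(n_mc)."""
            return a @ s + rng.normal(0.0, 1.0) / np.sqrt(n_mc)

        # Unisim: vary one parameter at a time by +1 sigma and add the
        # resulting shifts in quadrature.
        nominal = observable(np.zeros(k))
        shifts = np.array([observable(np.eye(k)[i]) - nominal for i in range(k)])
        var_unisim = np.sum(shifts**2)

        # Multisim: vary all parameters at once, drawn from their priors,
        # and take the variance across runs.
        M = 500
        runs = np.array([observable(rng.normal(size=k)) for _ in range(M)])
        var_multisim = runs.var(ddof=1)

        print("true systematic variance:", np.sum(a**2))
        print("unisim estimate:         ", var_unisim)
        print("multisim estimate:       ", var_multisim)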

  7. Wind power error estimation in resource assessments.

    Science.gov (United States)

    Rodríguez, Osvaldo; Del Río, Jesús A; Jaramillo, Oscar A; Martínez, Manuel

    2015-01-01

    Estimating the power output is one of the elements that determine the techno-economic feasibility of a renewable project. At present, there is a need to develop reliable methods that achieve this goal, thereby contributing to wind power penetration. In this study, we propose a method for wind power error estimation based on the wind speed measurement error, the probability density function, and wind turbine power curves. The method uses actual wind speed data, without prior statistical treatment, together with 28 wind turbine power curves fitted by Lagrange's method, to calculate the estimated wind power output and the corresponding error propagation. We found that wind speed percentage errors of 10% were propagated into the power output estimates, yielding an error of 5%. The proposed error propagation complements traditional power resource assessments. The wind power estimation error also allows us to estimate intervals for the levelized cost of power production or the return time of the investment. The implementation of this method increases the reliability of techno-economic resource assessment studies.
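
    A rough Python sketch of the propagation step: a hypothetical tabulated power curve is interpolated (linearly here, rather than by the Lagrange fits used in the paper), and a first-order estimate sigma_P ~ |dP/dv| * sigma_v converts a 10% speed error into a power error. Note that the resulting relative power error depends strongly on the local slope of the curve.

        import numpy as np

        # Hypothetical turbine power curve (speed in m/s, power in kW).
        v_tab = np.array([3, 5, 7, 9, 11, 13, 15], dtype=float)
        p_tab = np.array([0, 100, 400, 900, 1500, 1900, 2000], dtype=float)

        def power(v):
            return np.interp(v, v_tab, p_tab)

        def power_error(v, rel_speed_err):
            """First-order propagation: sigma_P ~ |dP/dv| * sigma_v."""
            sigma_v = rel_speed_err * v
            h = 1e-3
            dP_dv = (power(v + h) - power(v - h)) / (2 * h)
            return abs(dP_dv) * sigma_v

        v = 8.0                          # measured wind speed, m/s
        sig = power_error(v, 0.10)       # 10% speed measurement error
        print(f"P({v}) = {power(v):.0f} kW, "
              f"propagated error = {sig:.0f} kW ({100 * sig / power(v):.1f}%)")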

  8. Error estimation and adaptivity for incompressible hyperelasticity

    KAUST Repository

    Whiteley, J.P.

    2014-04-30

    A Galerkin FEM is developed for nonlinear, incompressible (hyper)elasticity that takes account of nonlinearities in both the strain tensor and the relationship between the strain tensor and the stress tensor. By using suitably defined linearised dual problems with appropriate boundary conditions, a posteriori error estimates are then derived for both linear functionals of the solution and linear functionals of the stress on a boundary, where Dirichlet boundary conditions are applied. A second, higher order method for calculating a linear functional of the stress on a Dirichlet boundary is also presented together with an a posteriori error estimator for this approach. An implementation for a 2D model problem with known solution, where the entries of the strain tensor exhibit large, rapid variations, demonstrates the accuracy and sharpness of the error estimators. Finally, using a selection of model problems, the a posteriori error estimate is shown to provide a basis for effective mesh adaptivity. © 2014 John Wiley & Sons, Ltd.

  9. Error estimation and adaptive chemical transport modeling

    Directory of Open Access Journals (Sweden)

    Malte Braack

    2014-09-01

    We present a numerical method to use several chemical transport models of increasing accuracy and complexity in an adaptive way. Over most of the domain, a simplified chemical model may be used, whereas in certain regions a more complex model is needed for accuracy reasons. A mathematically derived error estimator measures the modeling error and indicates where to use more accurate models. The error is measured in terms of output functionals; therefore, one has to consider adjoint problems which carry sensitivity information. The concept is demonstrated by means of ozone formation and pollution emission.

  10. Error estimation in plant growth analysis

    Directory of Open Access Journals (Sweden)

    Andrzej Gregorczyk

    2014-01-01

    A scheme is presented for calculating the errors of dry matter values which occur during the approximation of data with growth curves, determined by the analytical method (logistic function) and by the numerical method (Richards function). Formulae are also given which describe the absolute errors of the growth characteristics: growth rate (GR), relative growth rate (RGR), unit leaf rate (ULR) and leaf area ratio (LAR). Calculation examples concerning the growth course of oat and maize plants are given. A critical analysis of the obtained estimates has been carried out, and the value of jointly applying statistical methods and error calculus in plant growth analysis has been established.
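
    As a small illustration of the error calculus involved, the Python sketch below propagates dry-matter measurement errors into the relative growth rate to first order; the weights, times and errors are hypothetical, not values from the paper.

        import numpy as np

        def rgr_with_error(w1, w2, t1, t2, sig_w1, sig_w2):
            """RGR = (ln W2 - ln W1) / (t2 - t1), with its absolute error
            from first-order propagation of the dry-matter errors."""
            rgr = (np.log(w2) - np.log(w1)) / (t2 - t1)
            sig = np.hypot(sig_w1 / w1, sig_w2 / w2) / (t2 - t1)
            return rgr, sig

        # Hypothetical oat dry-matter values (g) at days 10 and 20.
        rgr, sig = rgr_with_error(w1=1.2, w2=3.5, t1=10, t2=20,
                                  sig_w1=0.1, sig_w2=0.2)
        print(f"RGR = {rgr:.4f} +/- {sig:.4f} per day")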

  11. KMRR thermal power measurement error estimation

    International Nuclear Information System (INIS)

    Rhee, B.W.; Sim, B.S.; Lim, I.C.; Oh, S.K.

    1990-01-01

    The thermal power measurement error of the Korea Multi-purpose Research Reactor has been estimated by a statistical Monte Carlo method, and compared with those obtained by the other methods including deterministic and statistical approaches. The results show that the specified thermal power measurement error of 5% cannot be achieved if the commercial RTDs are used to measure the coolant temperatures of the secondary cooling system and the error can be reduced below the requirement if the commercial RTDs are replaced by the precision RTDs. The possible range of the thermal power control operation has been identified to be from 100% to 20% of full power

  12. Efficient error estimation in quantum key distribution

    Science.gov (United States)

    Li, Mo; Treeviriyanupab, Patcharapong; Zhang, Chun-Mei; Yin, Zhen-Qiang; Chen, Wei; Han, Zheng-Fu

    2015-01-01

    In a quantum key distribution (QKD) system, the error rate needs to be estimated for determining the joint probability distribution between legitimate parties, and for improving the performance of key reconciliation. We propose an efficient error estimation scheme for QKD, called the parity comparison method (PCM). In the proposed method, the parity of a group of sifted key bits is analysed to estimate the quantum bit error rate instead of using the traditional key sampling. Simulation results show that the proposed method improves the accuracy and decreases the information revealed in most realistic application situations. Project supported by the National Basic Research Program of China (Grant Nos. 2011CBA00200 and 2011CB921200) and the National Natural Science Foundation of China (Grant Nos. 61101137, 61201239, and 61205118).
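
    A hedged Python sketch of the parity-comparison idea under a simple independent bit-flip channel (block size and error rate are illustrative; the actual PCM protocol differs in detail): for a block of n bits with QBER p, the block parities of the two parties disagree with probability (1 - (1 - 2p)^n)/2, which can be inverted to estimate p without revealing individual key bits.

        import numpy as np

        rng = np.random.default_rng(2)

        def estimate_qber_parity(alice, bob, block=8):
            """Estimate the QBER from block-parity mismatches."""
            n_blocks = len(alice) // block
            a = alice[:n_blocks * block].reshape(n_blocks, block)
            b = bob[:n_blocks * block].reshape(n_blocks, block)
            f = np.mean(a.sum(axis=1) % 2 != b.sum(axis=1) % 2)
            f = min(f, 0.5 - 1e-12)           # keep the inversion valid
            return (1 - (1 - 2 * f) ** (1.0 / block)) / 2

        p_true, n_bits = 0.03, 100_000
        alice = rng.integers(0, 2, n_bits)
        bob = alice ^ (rng.random(n_bits) < p_true)   # independent flips
        print("estimated QBER:", estimate_qber_parity(alice, bob))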

  13. Double Checking for Two Error Types

    NARCIS (Netherlands)

    Raats, V.M.; Moors, J.J.A.

    2000-01-01

    Auditing a large population of recorded values is usually done by means of sampling. Based on the number of incorrect records that is detected in the sample, a point estimate and a confidence limit for the population fraction of incorrect values can be determined. In general it is (implicitly) assumed

  14. Ultraspectral Sounding Retrieval Error Budget and Estimation

    Science.gov (United States)

    Zhou, Daniel K.; Larar, Allen M.; Liu, Xu; Smith, William L.; Strow, L. Larrabee; Yang, Ping

    2011-01-01

    The ultraspectral infrared radiances obtained from satellite observations provide atmospheric, surface, and/or cloud information. These measurements of the thermodynamic state are intended for the initialization of weather and climate models. Great effort has been given to retrieving and validating these atmospheric, surface, and/or cloud properties. The Error Consistency Analysis Scheme (ECAS), through fast radiative transfer model (RTM) forward and inverse calculations, has been developed to estimate the error budget in terms of absolute and standard deviation of differences in both the spectral radiance and retrieved geophysical parameter domains. The retrieval error is assessed through ECAS without the assistance of other independent measurements such as radiosonde data. ECAS re-evaluates instrument random noise and establishes the link between radiometric accuracy and retrieved geophysical parameter accuracy. ECAS can be applied to measurements of any ultraspectral instrument and any retrieval scheme with an associated RTM. In this paper, ECAS is described and demonstrated with measurements from the METOP-A satellite Infrared Atmospheric Sounding Interferometer (IASI).

  15. Determination of Parameter Estimation Errors Due to Noise and Undermodelling

    DEFF Research Database (Denmark)

    Knudsen, Morten

    1996-01-01

    A simple method for determination of the estimation error of physical parameters due to noise and undermodelling is developed.

  16. Rigorous Error Estimates for Reynolds' Lubrication Approximation

    Science.gov (United States)

    Wilkening, Jon

    2006-11-01

    Reynolds' lubrication equation is used extensively in engineering calculations to study flows between moving machine parts, e.g. in journal bearings or computer disk drives. It is also used extensively in micro- and bio-fluid mechanics to model creeping flows through narrow channels and in thin films. To date, the only rigorous justification of this equation (due to Bayada and Chambat in 1986 and to Nazarov in 1987) states that the solution of the Navier-Stokes equations converges to the solution of Reynolds' equation in the limit as the aspect ratio ɛ approaches zero. In this talk, I will show how the constants in these error bounds depend on the geometry. More specifically, I will show how to compute expansion solutions of the Stokes equations in a 2-d periodic geometry to arbitrary order and exhibit error estimates with constants which are either (1) given in the problem statement or easily computable from h(x), or (2) difficult to compute but universal (independent of h(x)). Studying the constants in the latter category, we find that the effective radius of convergence actually increases through 10th order, but then begins to decrease as the inverse of the order, indicating that the expansion solution is probably an asymptotic series rather than a convergent series.

  17. Learning time-dependent noise to reduce logical errors: real time error rate estimation in quantum error correction

    Science.gov (United States)

    Huo, Ming-Xia; Li, Ying

    2017-12-01

    Quantum error correction is important to quantum information processing, as it allows us to reliably process information encoded in quantum error correction codes. Efficient quantum error correction benefits from knowledge of the error rates. We propose a protocol for monitoring error rates in real time without interrupting the quantum error correction. No adaptation of the quantum error correction code or its implementation circuit is required. The protocol can be directly applied to the most advanced quantum error correction techniques, e.g. the surface code. A Gaussian process algorithm is used to estimate and predict error rates based on past error correction data. We find that using these estimated error rates, the probability of error correction failures can be significantly reduced by a factor increasing with the code distance.
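
    As a sketch of the regression step only, the following Python fits a Gaussian process to a synthetic, slowly drifting error-rate series and extrapolates it with an uncertainty band. scikit-learn's regressor stands in for the paper's algorithm, and extracting rates from syndrome data without interrupting correction is not reproduced; all numbers are illustrative.

        import numpy as np
        from sklearn.gaussian_process import GaussianProcessRegressor
        from sklearn.gaussian_process.kernels import RBF, WhiteKernel

        rng = np.random.default_rng(5)

        # Synthetic drifting error rate observed with noise.
        t = np.linspace(0, 10, 40)[:, None]
        p_true = 0.01 + 0.004 * np.sin(0.5 * t.ravel())
        p_obs = p_true + rng.normal(0, 5e-4, t.shape[0])

        gp = GaussianProcessRegressor(
            kernel=RBF(length_scale=2.0) + WhiteKernel(noise_level=2.5e-7),
            normalize_y=True).fit(t, p_obs)

        # Predict (and extrapolate) the error rate with its uncertainty.
        mean, std = gp.predict(np.array([[10.5]]), return_std=True)
        print(f"predicted error rate at t=10.5: {mean[0]:.4f} +/- {std[0]:.4f}")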

  18. Radiation risk estimation based on measurement error models

    CERN Document Server

    Masiuk, Sergii; Shklyar, Sergiy; Chepurny, Mykola; Likhtarov, Illya

    2017-01-01

    This monograph discusses statistics and risk estimates applied to radiation damage under the presence of measurement errors. The first part covers nonlinear measurement error models, with a particular emphasis on efficiency of regression parameter estimators. In the second part, risk estimation in models with measurement errors is considered. Efficiency of the methods presented is verified using data from radio-epidemiological studies.

  19. A posteriori pointwise error estimates for the boundary element method

    Energy Technology Data Exchange (ETDEWEB)

    Paulino, G.H. [Cornell Univ., Ithaca, NY (United States). School of Civil and Environmental Engineering; Gray, L.J. [Oak Ridge National Lab., TN (United States); Zarikian, V. [Univ. of Central Florida, Orlando, FL (United States). Dept. of Mathematics

    1995-01-01

    This report presents a new approach for a posteriori pointwise error estimation in the boundary element method. The estimator relies upon the evaluation of hypersingular integral equations, and is therefore intrinsic to the boundary integral equation approach. This property allows some theoretical justification by mathematically correlating the exact and estimated errors. A methodology is developed for approximating the error on the boundary as well as in the interior of the domain. In the interior, error estimates for both the function and its derivatives (e.g. potential and interior gradients for potential problems, displacements and stresses for elasticity problems) are presented. Extensive computational experiments have been performed for the two dimensional Laplace equation on interior domains, employing Dirichlet and mixed boundary conditions. The results indicate that the error estimates successfully track the form of the exact error curve. Moreover, a reasonable estimate of the magnitude of the actual error is also obtained.

  20. Estimating IMU heading error from SAR images.

    Energy Technology Data Exchange (ETDEWEB)

    Doerry, Armin Walter

    2009-03-01

    Angular orientation errors of the real antenna for Synthetic Aperture Radar (SAR) will manifest as undesired illumination gradients in SAR images. These gradients can be measured, and the pointing error can be calculated. This can be done for single images, but is done more robustly using multi-image methods. Several methods are provided in this report. The pointing error can then be fed back to the navigation Kalman filter to correct for problematic heading (yaw) error drift. This can mitigate the need for uncomfortable and undesired IMU alignment maneuvers such as S-turns.

  1. Data error effects on net radiation and evapotranspiration estimation

    International Nuclear Information System (INIS)

    Llasat, M.C.; Snyder, R.L.

    1998-01-01

    The objective of this paper is to evaluate the potential error in estimating the net radiation and reference evapotranspiration resulting from errors in the measurement or estimation of weather parameters. A methodology for estimating the net radiation using hourly weather variables measured at a typical agrometeorological station (e.g., solar radiation, temperature and relative humidity) is presented. An error propagation analysis is then made for net radiation and for reference evapotranspiration. Data from the Raimat weather station, located in the Catalonia region of Spain, are used to illustrate the error relationships. The results show that temperature, relative humidity and cloud cover errors have little effect on the net radiation or reference evapotranspiration. A 5°C error in estimating surface temperature leads to errors as big as 30 W m⁻² at high temperature. A 4% solar radiation (Rs) error can cause a net radiation error as big as 26 W m⁻² when Rs ≈ 1000 W m⁻². However, the error is less when cloud cover is calculated as a function of the solar radiation. The absolute error in reference evapotranspiration (ETo) equals the product of the net radiation error and the radiation term weighting factor [W = Δ/(Δ + γ)] in the ETo equation. Therefore, the ETo error varies between 65 and 85% of the Rn error as air temperature increases from about 20° to 40°C. (author)
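
    A short Python sketch of the weighting-factor relation quoted above, using the Tetens saturation vapour-pressure curve and a nominal sea-level psychrometric constant (both assumptions of this sketch, not values from the paper): the ETo error is W times the Rn error, and W grows with temperature.

        import numpy as np

        def radiation_weighting_factor(T):
            """W = Delta / (Delta + gamma) at air temperature T (deg C).
            Delta from the Tetens curve; gamma ~ 0.066 kPa/degC assumed."""
            es = 0.6108 * np.exp(17.27 * T / (T + 237.3))   # kPa
            delta = 4098.0 * es / (T + 237.3) ** 2          # kPa per degC
            return delta / (delta + 0.066)

        for T in (20.0, 30.0, 40.0):
            W = radiation_weighting_factor(T)
            print(f"T = {T:.0f} C: ETo error = {W:.2f} x Rn error")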

  2. Estimating errors in least-squares fitting

    Science.gov (United States)

    Richter, P. H.

    1995-01-01

    While least-squares fitting procedures are commonly used in data analysis and are extensively discussed in the literature devoted to this subject, the proper assessment of errors resulting from such fits has received relatively little attention. The present work considers statistical errors in the fitted parameters, as well as in the values of the fitted function itself, resulting from random errors in the data. Expressions are derived for the standard error of the fit, as a function of the independent variable, for the general nonlinear and linear fitting problems. Additionally, closed-form expressions are derived for some examples commonly encountered in the scientific and engineering fields, namely ordinary polynomial and Gaussian fitting functions. These results have direct application to the assessment of the antenna gain and system temperature characteristics, in addition to a broad range of problems in data analysis. The effects of the nature of the data and the choice of fitting function on the ability to accurately model the system under study are discussed, and some general rules are deduced to assist workers intent on maximizing the amount of information obtained from a given set of measurements.
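
    For the linear case, those expressions reduce to a few lines of Python. This sketch (an ordinary polynomial fit on synthetic data) computes the parameter covariance sigma^2 (X^T X)^(-1) and from it both the parameter standard errors and the standard error of the fitted function at a chosen point.

        import numpy as np

        rng = np.random.default_rng(3)

        # Noisy observations of a quadratic trend.
        x = np.linspace(0, 10, 30)
        y = 1.0 + 0.5 * x - 0.05 * x**2 + rng.normal(0, 0.3, x.size)

        X = np.vander(x, 3, increasing=True)       # columns 1, x, x^2
        beta, res, *_ = np.linalg.lstsq(X, y, rcond=None)
        sigma2 = res[0] / (x.size - X.shape[1])    # residual variance
        cov = sigma2 * np.linalg.inv(X.T @ X)      # parameter covariance

        print("parameter standard errors:", np.sqrt(np.diag(cov)))

        # Standard error of the fitted function at x0 = 5.
        x0 = np.array([1.0, 5.0, 25.0])
        print("SE of fit at x = 5:", np.sqrt(x0 @ cov @ x0))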

  3. Deconvolution Estimation in Measurement Error Models: The R Package decon

    Science.gov (United States)

    Wang, Xiao-Feng; Wang, Bin

    2011-01-01

    Data from many scientific areas often come with measurement error. Density or distribution function estimation from contaminated data and nonparametric regression with errors-in-variables are two important topics in measurement error models. In this paper, we present a new software package decon for R, which contains a collection of functions that use the deconvolution kernel methods to deal with the measurement error problems. The functions allow the errors to be either homoscedastic or heteroscedastic. To make the deconvolution estimators computationally more efficient in R, we adapt the fast Fourier transform algorithm for density estimation with error-free data to the deconvolution kernel estimation. We discuss the practical selection of the smoothing parameter in deconvolution methods and illustrate the use of the package through both simulated and real examples. PMID:21614139

  4. On efficiency of some ratio estimators in double sampling design ...

    African Journals Online (AJOL)

    In this paper, three ratio estimators in double sampling design are proposed with the intention of finding an alternative double sampling design estimator to the conventional ratio estimator in double sampling design discussed by Cochran (1997), Okafor (2002), Raj (1972) and Raj and Chandhok (1999).

  5. Error Estimation and Accuracy Improvements in Nodal Transport Methods

    International Nuclear Information System (INIS)

    Zamonsky, O.M.

    2000-01-01

    The accuracy of the solutions produced by the Discrete Ordinates neutron transport nodal methods is analyzed. The new numerical methodologies obtained increase the accuracy of the analyzed schemes and give a posteriori error estimators. The accuracy improvement is obtained with new equations that make the numerical procedure free of truncation errors, and by proposing spatial reconstructions of the angular fluxes that are more accurate than those used until now. An a posteriori error estimator is rigorously obtained for one-dimensional systems that, in certain types of problems, allows the accuracy of the solutions to be quantified. From comparisons with the one-dimensional results, an a posteriori error estimator is also obtained for multidimensional systems. Local indicators, which quantify the spatial distribution of the errors, are obtained by decomposition of the mentioned estimators. This makes the proposed methodology suitable for performing adaptive calculations. Some numerical examples are presented to validate the theoretical developments and to illustrate the ranges where the proposed approximations are valid.

  6. Error Estimation for Indoor 802.11 Location Fingerprinting

    DEFF Research Database (Denmark)

    Lemelson, Hendrik; Kjærgaard, Mikkel Baun; Hansen, Rene

    2009-01-01

    802.11-based indoor positioning systems have been under research for quite some time now. However, despite the large attention this topic has gained, most of the research focused on the calculation of position estimates. In this paper, we go a step further and investigate how the position error that is inherent to 802.11-based positioning systems can be estimated. Knowing the position error is crucial for many applications that rely on position information: end users could be informed about the estimated position error to avoid frustration in case the system gives faulty position information. Service providers could adapt their delivered services based on the estimated position error to achieve a higher service quality. Finally, system operators could use the information to inspect whether a location system provides satisfactory positioning accuracy throughout the covered area. For position error...

  7. Unbiased bootstrap error estimation for linear discriminant analysis.

    Science.gov (United States)

    Vu, Thang; Sima, Chao; Braga-Neto, Ulisses M; Dougherty, Edward R

    2014-12-01

    Convex bootstrap error estimation is a popular tool for classifier error estimation in gene expression studies. A basic question is how to determine the weight for the convex combination between the basic bootstrap estimator and the resubstitution estimator such that the resulting estimator is unbiased at finite sample sizes. The well-known 0.632 bootstrap error estimator uses asymptotic arguments to propose a fixed 0.632 weight, whereas the more recent 0.632+ bootstrap error estimator attempts to set the weight adaptively. In this paper, we study the finite sample problem in the case of linear discriminant analysis under Gaussian populations. We derive exact expressions for the weight that guarantee unbiasedness of the convex bootstrap error estimator in the univariate and multivariate cases, without making asymptotic simplifications. Using exact computation in the univariate case and an accurate approximation in the multivariate case, we obtain the required weight and show that it can deviate significantly from the constant 0.632 weight, depending on the sample size and Bayes error for the problem. The methodology is illustrated by application on data from a well-known cancer classification study.
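
    A minimal Python sketch of the convex bootstrap estimator with the classical fixed 0.632 weight, on synthetic Gaussian classes with scikit-learn's LDA; the paper's contribution, exact sample-size-dependent weights, is not implemented here and all parameters are illustrative.

        import numpy as np
        from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

        rng = np.random.default_rng(4)

        # Two Gaussian classes, small sample.
        n, d = 30, 2
        X = np.vstack([rng.normal(0, 1, (n, d)), rng.normal(1, 1, (n, d))])
        y = np.repeat([0, 1], n)

        clf = LinearDiscriminantAnalysis().fit(X, y)
        err_resub = np.mean(clf.predict(X) != y)   # optimistically biased

        # Zero bootstrap: test each replicate on its out-of-bag points.
        oob_errs = []
        for _ in range(200):
            idx = rng.integers(0, len(y), len(y))
            oob = np.setdiff1d(np.arange(len(y)), idx)
            if oob.size == 0 or len(np.unique(y[idx])) < 2:
                continue
            b = LinearDiscriminantAnalysis().fit(X[idx], y[idx])
            oob_errs.append(np.mean(b.predict(X[oob]) != y[oob]))
        err_boot = np.mean(oob_errs)               # pessimistically biased

        # Convex combination with the classical fixed 0.632 weight.
        err_632 = 0.368 * err_resub + 0.632 * err_boot
        print(f"resub {err_resub:.3f}, bootstrap {err_boot:.3f}, "
              f"0.632 estimate {err_632:.3f}")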

  8. Bootstrap Estimates of Standard Errors in Generalizability Theory

    Science.gov (United States)

    Tong, Ye; Brennan, Robert L.

    2007-01-01

    Estimating standard errors of estimated variance components has long been a challenging task in generalizability theory. Researchers have speculated about the potential applicability of the bootstrap for obtaining such estimates, but they have identified problems (especially bias) in using the bootstrap. Using Brennan's bias-correcting procedures…

  9. Nonparametric Item Response Curve Estimation with Correction for Measurement Error

    Science.gov (United States)

    Guo, Hongwen; Sinharay, Sandip

    2011-01-01

    Nonparametric or kernel regression estimation of item response curves (IRCs) is often used in item analysis in testing programs. These estimates are biased when the observed scores are used as the regressor because the observed scores are contaminated by measurement error. Accuracy of this estimation is a concern theoretically and operationally…

  10. On the dipole approximation with error estimates

    Science.gov (United States)

    Boßmann, Lea; Grummt, Robert; Kolb, Martin

    2018-01-01

    The dipole approximation is employed to describe interactions between atoms and radiation. It essentially consists of neglecting the spatial variation of the external field over the atom. Heuristically, this is justified by arguing that the wavelength is considerably larger than the atomic length scale, which holds under usual experimental conditions. We prove the dipole approximation in the limit of infinite wavelengths compared to the atomic length scale and estimate the rate of convergence. Our results include N-body Coulomb potentials and experimentally relevant electromagnetic fields such as plane waves and laser pulses.

  11. Estimation of Total Error in DWPF Reported Radionuclide Inventories

    International Nuclear Information System (INIS)

    Edwards, T.B.

    1995-01-01

    This report investigates the impact of random errors due to measurement and sampling on the reported concentrations of radionuclides in DWPF's filled canister inventory resulting from each macro-batch. The objective of this investigation is to estimate the variance of the total error in reporting these radionuclide concentrations

  12. Laser Doppler anemometer measurements using nonorthogonal velocity components: error estimates.

    Science.gov (United States)

    Orloff, K L; Snyder, P K

    1982-01-15

    Laser Doppler anemometers (LDAs) that are arranged to measure nonorthogonal velocity components (from which orthogonal components are computed through transformation equations) are more susceptible to calibration and sampling errors than are systems with uncoupled channels. In this paper uncertainty methods and estimation theory are used to evaluate, respectively, the systematic and statistical errors that are present when such devices are applied to the measurement of mean velocities in turbulent flows. Statistical errors are estimated for two-channel LDA data that are either correlated or uncorrelated. For uncorrelated data the directional uncertainty of the measured velocity vector is considered for applications where mean streamline patterns are desired.
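
    The transformation and its error amplification can be sketched in a few lines of Python (beam angles and channel errors are illustrative): each channel measures the velocity component along its own direction, and inverting that relation maps the channel covariance into the covariance of the orthogonal components.

        import numpy as np

        # Two LDA channels 30 degrees apart (not orthogonal).
        th1, th2 = np.deg2rad(20.0), np.deg2rad(50.0)

        # Channel i measures m_i = u*cos(th_i) + v*sin(th_i), i.e. m = M @ (u, v).
        M = np.array([[np.cos(th1), np.sin(th1)],
                      [np.cos(th2), np.sin(th2)]])
        A = np.linalg.inv(M)          # transformation to orthogonal (u, v)

        m = np.array([2.1, 2.9])      # measured mean components (m/s)
        print("orthogonal components u, v:", A @ m)

        # Propagate channel errors: cov(u, v) = A @ cov(m) @ A.T.
        # Uncorrelated channels give a diagonal cov(m); correlated data
        # would put nonzero off-diagonal terms here.
        cov_m = np.diag([0.05**2, 0.05**2])
        cov_uv = A @ cov_m @ A.T
        print("std errors of u, v:", np.sqrt(np.diag(cov_uv)))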

  13. An Empirical State Error Covariance Matrix for Batch State Estimation

    Science.gov (United States)

    Frisbee, Joseph H., Jr.

    2011-01-01

    State estimation techniques serve effectively to provide mean state estimates. However, the state error covariance matrices provided as part of these techniques suffer from some degree of lack of confidence in their ability to adequately describe the uncertainty in the estimated states. A specific problem with the traditional form of state error covariance matrices is that they represent only a mapping of the assumed observation error characteristics into the state space. Any errors that arise from other sources (environment modeling, precision, etc.) are not directly represented in a traditional, theoretical state error covariance matrix. Consider that an actual observation contains only measurement error and that an estimated observation contains all other errors, known and unknown. It then follows that a measurement residual (the difference between expected and observed measurements) contains all errors for that measurement. Therefore, a direct and appropriate inclusion of the actual measurement residuals in the state error covariance matrix will result in an empirical state error covariance matrix. This empirical state error covariance matrix will fully account for the error in the state estimate. By way of a literal reinterpretation of the equations involved in the weighted least squares estimation algorithm, it is possible to arrive at an appropriate, and formally correct, empirical state error covariance matrix. The first specific step of the method is to use the average form of the weighted measurement residual variance performance index rather than its usual total weighted residual form. Next it is helpful to interpret the solution to the normal equations as the average of a collection of sample vectors drawn from a hypothetical parent population. From here, using a standard statistical analysis approach, it follows directly how to determine the standard empirical state error covariance matrix. This matrix will contain the total uncertainty in the state estimate.

  14. A posteriori error estimates in voice source recovery

    Science.gov (United States)

    Leonov, A. S.; Sorokin, V. N.

    2017-12-01

    The inverse problem of voice source pulse recovery from a segment of a speech signal is under consideration. A special mathematical model relating these quantities is used for the solution. A variational method for solving the inverse problem of voice source recovery for a new parametric class of sources, piecewise-linear sources (PWL-sources), is proposed. A technique for a posteriori numerical error estimation of the obtained solutions is also presented. A computer study of the adequacy of the adopted speech production model with PWL-sources is performed by solving the inverse problems for various types of voice signals, together with a corresponding study of the a posteriori error estimates. Numerical experiments on speech signals show satisfactory properties of the proposed a posteriori error estimates, which represent upper bounds on the possible errors in solving the inverse problem. The estimate of the most probable error in determining the source-pulse shapes is about 7-8% for the investigated speech material. A posteriori error estimates can also be used as a quality criterion for the obtained voice source pulses in application to speaker recognition.

  15. Consistent estimation of linear panel data models with measurement error

    NARCIS (Netherlands)

    Meijer, Erik; Spierdijk, Laura; Wansbeek, Thomas

    2017-01-01

    Measurement error causes a bias towards zero when estimating a panel data linear regression model. The panel data context offers various opportunities to derive instrumental variables allowing for consistent estimation. We consider three sources of moment conditions: (i) restrictions on the

  16. Reference-free error estimation for multiple measurement methods.

    Science.gov (United States)

    Madan, Hennadii; Pernuš, Franjo; Špiclin, Žiga

    2018-01-01

    We present a computational framework to select the most accurate and precise method of measurement of a certain quantity, when there is no access to the true value of the measurand. A typical use case is when several image analysis methods are applied to measure the value of a particular quantitative imaging biomarker from the same images. The accuracy of each measurement method is characterized by systematic error (bias), which is modeled as a polynomial in true values of the measurand, and the precision as random error modeled with a Gaussian random variable. In contrast to previous works, the random errors are modeled jointly across all methods, thereby enabling the framework to analyze measurement methods based on similar principles, which may have correlated random errors. Furthermore, the posterior distribution of the error model parameters is estimated from samples obtained by Markov chain Monte Carlo and analyzed to estimate the parameter values and the unknown true values of the measurand. The framework was validated on six synthetic and one clinical dataset containing measurements of total lesion load, a biomarker of neurodegenerative diseases, which was obtained with four automatic methods by analyzing brain magnetic resonance images. The estimates of bias and random error were in good agreement with the corresponding least squares regression estimates against a reference.

  17. Estimation of subcriticality of TCA using 'indirect estimation method for calculation error'

    International Nuclear Information System (INIS)

    Naito, Yoshitaka; Yamamoto, Toshihiro; Arakawa, Takuya; Sakurai, Kiyoshi

    1996-01-01

    To estimate the subcriticality of the neutron multiplication factor in a fissile system, an 'Indirect Estimation Method for Calculation Error' is proposed. This method obtains the calculational error of the neutron multiplication factor by correlating measured values with the corresponding calculated ones. The method was applied to the source multiplication and pulsed neutron experiments conducted at TCA, and the calculation error of MCNP 4A was estimated. In the source multiplication method, the deviation of the measured neutron count rate distributions from the calculated ones estimates the accuracy of the calculated k_eff. In the pulsed neutron method, the calculation errors of the prompt neutron decay constants give the accuracy of the calculated k_eff. (author)

  18. Robust estimation of errors-in-variables models using M-estimators

    Science.gov (United States)

    Guo, Cuiping; Peng, Junhuan

    2017-07-01

    Traditional errors-in-variables (EIV) models are widely adopted in the applied sciences. EIV model estimators, however, can be highly biased by gross errors. This paper focuses on robust estimation in EIV models. A new class of robust estimators, called robust weighted total least squares (RWTLS) estimators, is introduced. Robust estimators of the parameters of the EIV models are derived from M-estimators and the Lagrange multiplier method. A simulated example is carried out to demonstrate the performance of the presented RWTLS. The result shows that the RWTLS algorithm can indeed resist gross errors and achieve a reliable solution.

  19. Estimation of rod scale errors in geodetic leveling

    Science.gov (United States)

    Craymer, Michael R.; Vaníček, Petr; Castle, Robert O.

    1995-01-01

    Comparisons among repeated geodetic levelings have often been used for detecting and estimating residual rod scale errors in leveled heights. Individual rod-pair scale errors are estimated by a two-step procedure using a model based on either differences in heights, differences in section height differences, or differences in section tilts. It is shown that the estimated rod-pair scale errors derived from each model are identical only when the data are correctly weighted, and the mathematical correlations are accounted for in the model based on heights. Analyses based on simple regressions of changes in height versus height can easily lead to incorrect conclusions. We also show that the statistically estimated scale errors are not a simple function of height, height difference, or tilt. The models are valid only when terrain slope is constant over adjacent pairs of setups (i.e., smoothly varying terrain). In order to discriminate between rod scale errors and vertical displacements due to crustal motion, the individual rod-pairs should be used in more than one leveling, preferably in areas of contrasting tectonic activity. From an analysis of 37 separately calibrated rod-pairs used in 55 levelings in southern California, we found eight statistically significant coefficients that could be reasonably attributed to rod scale errors, only one of which was larger than the expected random error in the applied calibration-based scale correction. However, significant differences with other independent checks indicate that caution should be exercised before accepting these results as evidence of scale error. Further refinements of the technique are clearly needed if the results are to be routinely applied in practice.

  20. Complementarity based a posteriori error estimates and their properties

    Czech Academy of Sciences Publication Activity Database

    Vejchodský, Tomáš

    2012-01-01

    Vol. 82, No. 10 (2012), pp. 2033-2046. ISSN 0378-4754. R&D Projects: GA ČR(CZ) GA102/07/0496; GA AV ČR IAA100760702. Institutional research plan: CEZ:AV0Z10190503. Keywords: error majorant; a posteriori error estimates; method of hypercircle. Subject RIV: BA - General Mathematics. Impact factor: 0.836, year: 2012. http://www.sciencedirect.com/science/article/pii/S0378475411001509

  1. CME Velocity and Acceleration Error Estimates Using the Bootstrap Method

    Science.gov (United States)

    Michalek, Grzegorz; Gopalswamy, Nat; Yashiro, Seiji

    2017-08-01

    The bootstrap method is used to determine errors of basic attributes of coronal mass ejections (CMEs) visually identified in images obtained by the Solar and Heliospheric Observatory (SOHO) mission's Large Angle and Spectrometric Coronagraph (LASCO) instruments. The basic parameters of CMEs are stored, among others, in a database known as the SOHO/LASCO CME catalog and are widely employed for many research studies. The basic attributes of CMEs (e.g. velocity and acceleration) are obtained from manually generated height-time plots. The subjective nature of manual measurements introduces random errors that are difficult to quantify. In many studies the impact of such measurement errors is overlooked. In this study we present a new possibility to estimate measurement errors in the basic attributes of CMEs. This approach is a computer-intensive method because it requires repeating the original data analysis procedure several times using replicate datasets. This is also commonly called the bootstrap method in the literature. We show that the bootstrap approach can be used to estimate the errors of the basic attributes of CMEs having moderately large numbers of height-time measurements. The velocity errors are in the vast majority of cases small and depend mostly on the number of height-time points measured for a particular event. In the case of acceleration, the errors are significant, and for more than half of all CMEs, they are larger than the acceleration itself.

  2. CME Velocity and Acceleration Error Estimates Using the Bootstrap Method

    Science.gov (United States)

    Michalek, Grzegorz; Gopalswamy, Nat; Yashiro, Seiji

    2017-01-01

    The bootstrap method is used to determine errors of basic attributes of coronal mass ejections (CMEs) visually identified in images obtained by the Solar and Heliospheric Observatory (SOHO) mission's Large Angle and Spectrometric Coronagraph (LASCO) instruments. The basic parameters of CMEs are stored, among others, in a database known as the SOHO/LASCO CME catalog and are widely employed for many research studies. The basic attributes of CMEs (e.g. velocity and acceleration) are obtained from manually generated height-time plots. The subjective nature of manual measurements introduces random errors that are difficult to quantify. In many studies the impact of such measurement errors is overlooked. In this study we present a new possibility to estimate measurement errors in the basic attributes of CMEs. This approach is a computer-intensive method because it requires repeating the original data analysis procedure several times using replicate datasets. This is also commonly called the bootstrap method in the literature. We show that the bootstrap approach can be used to estimate the errors of the basic attributes of CMEs having moderately large numbers of height-time measurements. The velocity errors are in the vast majority of cases small and depend mostly on the number of height-time points measured for a particular event. In the case of acceleration, the errors are significant, and for more than half of all CMEs, they are larger than the acceleration itself.
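
    A compact Python sketch of the idea on hypothetical height-time data: fit a line, then bootstrap the residuals (so every replicate keeps a valid time axis) and take the spread of the refitted slopes as the velocity error. The data values are invented for illustration, not taken from the catalog.

        import numpy as np

        rng = np.random.default_rng(6)

        # Hypothetical height-time points for one CME (solar radii, hours).
        t = np.array([0.0, 0.5, 1.0, 1.5, 2.0, 2.5, 3.0])
        h = np.array([3.1, 4.4, 5.3, 6.7, 7.6, 9.0, 10.1])

        # Linear height-time fit: the slope is the plane-of-sky speed.
        slope, intercept = np.polyfit(t, h, 1)
        resid = h - (slope * t + intercept)

        # Residual bootstrap: rebuild replicate profiles from the fit plus
        # resampled residuals, refit, and collect the slopes.
        speeds = np.array([
            np.polyfit(t, slope * t + intercept +
                       rng.choice(resid, size=t.size, replace=True), 1)[0]
            for _ in range(2000)])

        print(f"velocity = {slope:.2f} +/- {speeds.std(ddof=1):.2f} Rs/hr")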

  3. Verification of unfold error estimates in the unfold operator code

    International Nuclear Information System (INIS)

    Fehl, D.L.; Biggs, F.

    1997-01-01

    Spectral unfolding is an inverse mathematical operation that attempts to obtain spectral source information from a set of response functions and data measurements. Several unfold algorithms have appeared over the past 30 years; among them is the unfold operator (UFO) code written at Sandia National Laboratories. In addition to an unfolded spectrum, the UFO code also estimates the unfold uncertainty (error) induced by estimated random uncertainties in the data. In UFO the unfold uncertainty is obtained from the error matrix. This built-in estimate has now been compared to error estimates obtained by running the code in a Monte Carlo fashion with prescribed data distributions (Gaussian deviates). In the test problem studied, data were simulated from an arbitrarily chosen blackbody spectrum (10 keV) and a set of overlapping response functions. The data were assumed to have an imprecision of 5% (standard deviation). One hundred random data sets were generated. The built-in estimate of unfold uncertainty agreed with the Monte Carlo estimate to within the statistical resolution of this relatively small sample size (95% confidence level). A possible 10% bias between the two methods was unresolved. The Monte Carlo technique is also useful in underdetermined problems, for which the error matrix method does not apply. UFO has been applied to the diagnosis of low energy x rays emitted by Z-pinch and ion-beam driven hohlraums. copyright 1997 American Institute of Physics

  4. Standard errors: A review and evaluation of standard error estimators using Monte Carlo simulations

    Directory of Open Access Journals (Sweden)

    Bradley Harding

    2014-09-01

    Characteristics of a population are often unknown. To estimate such characteristics, random sampling must be used. Sampling is the process by which a subgroup of a population is examined in order to infer the values of the population's true characteristics. Estimates based on samples are approximations of the population's true values; therefore, it is often useful to know the reliability of such estimates. Standard errors are measures of the reliability of a given sample's descriptive statistics with respect to the population's true values. This article reviews some widely used descriptive statistics as well as their standard error estimators and their confidence intervals. The statistics discussed are: the arithmetic mean, the median, the geometric mean, the harmonic mean, the variance, the standard deviation, the median absolute deviation, the quantile, the interquartile range, the skewness, and the kurtosis. Evaluations using Monte Carlo simulations show that standard error estimators, assuming a normally distributed population, are almost always reliable. In addition, as expected, smaller sample sizes lead to less reliable results. The only exception is the estimate of the confidence interval for kurtosis, which shows evidence of unreliability. We therefore propose an alternative measure of confidence interval based on the lognormal distribution. This review provides easy-to-find information about many descriptive statistics which can be used, for example, to plot error bars or confidence intervals.
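
    As a small Python example of this kind of Monte Carlo evaluation, the sketch below checks the classical large-sample standard-error formula for the median of a normal sample against the true sampling spread; the sample size and replication count are arbitrary choices for this sketch.

        import numpy as np

        rng = np.random.default_rng(7)

        # Many samples of size n from a standard normal population.
        n, reps = 30, 20_000
        samples = rng.normal(0, 1, (reps, n))
        medians = np.median(samples, axis=1)

        # Large-sample SE of the median of a normal sample:
        # sqrt(pi / 2) * s / sqrt(n).
        est_se = np.sqrt(np.pi / 2) * samples.std(axis=1, ddof=1) / np.sqrt(n)

        print("true SD of the median:", medians.std(ddof=1))
        print("mean estimated SE:    ", est_se.mean())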

  5. Error estimates for CCMP ocean surface wind data sets

    Science.gov (United States)

    Atlas, R. M.; Hoffman, R. N.; Ardizzone, J.; Leidner, S.; Jusem, J.; Smith, D. K.; Gombos, D.

    2011-12-01

    The cross-calibrated, multi-platform (CCMP) ocean surface wind data sets are now available at the Physical Oceanography Distributed Active Archive Center from July 1987 through December 2010. These data support wide-ranging air-sea research and applications. The main Level 3.0 data set has global ocean coverage (within 78S-78N) with 25-kilometer resolution every 6 hours. An enhanced variational analysis method (VAM) quality controls and optimally combines multiple input data sources to create the Level 3.0 data set. Data included are all available RSS DISCOVER wind observations, in situ buoys and ships, and ECMWF analyses. The VAM is set up to use the ECMWF analyses to fill in areas of no data and to provide an initial estimate of wind direction. As described in an article in the Feb. 2011 BAMS, when compared to conventional analyses and reanalyses, the CCMP winds are significantly different in some synoptic cases, result in different storm statistics, and provide enhanced high-spatial resolution time averages of ocean surface wind. We plan enhancements to produce estimated uncertainties for the CCMP data. We will apply the method of Desroziers et al. for the diagnosis of error statistics in observation space to the VAM O-B, O-A, and B-A increments. To isolate particular error statistics we will stratify the results by which individual instruments were used to create the increments. Then we will use cross-validation studies to estimate other error statistics. For example, comparisons in regions of overlap for VAM analyses based on SSMI and QuikSCAT separately and together will enable estimating the VAM directional error when using SSMI alone. Level 3.0 error estimates will enable construction of error estimates for the time averaged data sets.

  6. Macroscopic Traffic State Estimation: Understanding Traffic Sensing Data-Based Estimation Errors

    Directory of Open Access Journals (Sweden)

    Paul B. C. van Erp

    2017-01-01

    Traffic state estimation is a crucial element in traffic management systems and in providing traffic information to road users. In this article, we evaluate traffic sensing data-based estimation error characteristics in macroscopic traffic state estimation. We consider two types of sensing data, that is, loop-detector data and probe speed data. These data are used to estimate the mean speed in a discrete space-time mesh. We assume that there are no errors in the sensing data. This allows us to study the errors resulting from the differences in characteristics between the sensing data and desired estimate together with the incomplete description of the relation between the two. The aim of the study is to evaluate the dependency of this estimation error on the traffic conditions and sensing data characteristics. For this purpose, we use microscopic traffic simulation, where we compare the estimates with the ground truth using Edie’s definitions. The study exposes a relation between the error distribution characteristics and traffic conditions. Furthermore, we find that it is important to account for the correlation between individual probe data-based estimation errors. Knowledge related to these estimation errors contributes to making better use of the available sensing data in traffic state estimation.

  7. Error Estimation for the Linearized Auto-Localization Algorithm

    Directory of Open Access Journals (Sweden)

    Fernando Seco

    2012-02-01

    The Linearized Auto-Localization (LAL) algorithm estimates the position of beacon nodes in Local Positioning Systems (LPSs), using only the distance measurements to a mobile node whose position is also unknown. The LAL algorithm calculates the inter-beacon distances, used for the estimation of the beacons’ positions, from the linearized trilateration equations. In this paper we propose a method to estimate the propagation of the errors of the inter-beacon distances obtained with the LAL algorithm, based on a first order Taylor approximation of the equations. Since the method depends on such approximation, a confidence parameter τ is defined to measure the reliability of the estimated error. Field evaluations showed that by applying this information to an improved weighted-based auto-localization algorithm (WLAL), the standard deviation of the inter-beacon distances can be improved by more than 30% on average with respect to the original LAL method.

  8. A precise error bound for quantum phase estimation.

    Directory of Open Access Journals (Sweden)

    James M Chappell

    Quantum phase estimation is one of the key algorithms in the field of quantum computing, but up until now, only approximate expressions have been derived for the probability of error. We revisit these derivations, and find that by ensuring symmetry in the error definitions, an exact formula can be found. This new approach may also have value in solving other related problems in quantum computing, where an expected error is calculated. Expressions for two special cases of the formula are also developed, in the limit as the number of qubits in the quantum computer approaches infinity and in the limit as the number of extra qubits added to improve reliability goes to infinity. It is found that this formula is useful in validating computer simulations of the phase estimation procedure and in avoiding the overestimation of the number of qubits required to achieve a given reliability. This formula thus brings improved precision in the design of quantum computers.

  9. Development of an integrated system for estimating human error probabilities

    Energy Technology Data Exchange (ETDEWEB)

    Auflick, J.L.; Hahn, H.A.; Morzinski, J.A.

    1998-12-01

    This is the final report of a three-year, Laboratory Directed Research and Development (LDRD) project at the Los Alamos National Laboratory (LANL). This project had as its main objective the development of a Human Reliability Analysis (HRA), knowledge-based expert system that would provide probabilistic estimates for potential human errors within various risk assessments, safety analysis reports, and hazard assessments. HRA identifies where human errors are most likely, estimates the error rate for individual tasks, and highlights the most beneficial areas for system improvements. This project accomplished three major tasks. First, several prominent HRA techniques and associated databases were collected and translated into an electronic format. Next, the project started a knowledge engineering phase where the expertise, i.e., the procedural rules and data, were extracted from those techniques and compiled into various modules. Finally, these modules, rules, and data were combined into a nearly complete HRA expert system.

  10. Computational Error Estimate for the Power Series Solution of Odes ...

    African Journals Online (AJOL)

    This paper compares the error estimation of the power series solution with the recursive Tau method for solving ordinary differential equations. From the computational viewpoint, the power series method using the zeros of the Chebyshev polynomial is effective, accurate and easy to use. Keywords: Lanczos Tau method, Chebyshev polynomial, ...

  11. Error estimates in horocycle averages asymptotics: challenges from string theory

    NARCIS (Netherlands)

    Cardella, M.A.

    2010-01-01

    For modular functions of rapid decay, a classical result connects the error estimate in their long horocycle average asymptotic to the Riemann hypothesis. We study similar asymptotics for modular functions with less mild growth conditions, such as polynomial growth and exponential growth.

  12. Measurement variability error for estimates of volume change

    Science.gov (United States)

    James A. Westfall; Paul L. Patterson

    2007-01-01

    Using quality assurance data, measurement variability distributions were developed for attributes that affect tree volume prediction. Random deviations from the measurement variability distributions were applied to 19381 remeasured sample trees in Maine. The additional error due to measurement variation and measurement bias was estimated via a simulation study for...

  13. Bayesian error estimation in density-functional theory

    DEFF Research Database (Denmark)

    Mortensen, Jens Jørgen; Kaasbjerg, Kristen; Frederiksen, Søren Lund

    2005-01-01

    We present a practical scheme for performing error estimates for density-functional theory calculations. The approach, which is based on ideas from Bayesian statistics, involves creating an ensemble of exchange-correlation functionals by comparing with an experimental database of binding energies...

  14. Bootstrap Standard Error Estimates in Dynamic Factor Analysis

    Science.gov (United States)

    Zhang, Guangjian; Browne, Michael W.

    2010-01-01

    Dynamic factor analysis summarizes changes in scores on a battery of manifest variables over repeated measurements in terms of a time series in a substantially smaller number of latent factors. Algebraic formulae for standard errors of parameter estimates are more difficult to obtain than in the usual intersubject factor analysis because of the…

  15. A Prediction Error Estimator for Nonlinear Stochastic Systems

    OpenAIRE

    Leontaritis, I.J.; Billings, S.A.

    1986-01-01

    A prediction error estimation algorithm incorporating model selection and validation techniques is developed for a class of multivariable discrete-time stochastic nonlinear systems which can be represented by the NARMAX (Nonlinear AutoRegressive Moving Average with eXogenous inputs) model.

  16. A posteriori error estimates for axisymmetric and nonlinear problems

    Czech Academy of Sciences Publication Activity Database

    Křížek, Michal; Němec, J.; Vejchodský, Tomáš

    2001-01-01

    Roč. 15, - (2001), s. 219-236 ISSN 1019-7168 R&D Projects: GA ČR GA201/01/1200; GA MŠk ME 148 Keywords: weighted Sobolev spaces * a posteriori error estimates * finite elements Subject RIV: BA - General Mathematics Impact factor: 0.886, year: 2001

  17. GMM estimation in panel data models with measurement error

    NARCIS (Netherlands)

    Wansbeek, T.J.

    Griliches and Hausman (J. Econom. 32 (1986) 93) have introduced GMM estimation in panel data models with measurement error. We present a simple, systematic approach to derive moment conditions for such models under a variety of assumptions. (C) 2001 Elsevier Science S.A. All rights reserved.

  18. Double symbol error rates for differential detection of narrow-band FM

    Science.gov (United States)

    Simon, M. K.

    1985-01-01

    This paper evaluates the double symbol error rate (average probability of two consecutive symbol errors) in differentially detected narrow-band FM. Numerical results are presented for the special case of MSK with a Gaussian IF receive filter. It is shown that, not unlike similar results previously obtained for the single error probability of such systems, large inaccuracies in predicted performance can occur when intersymbol interference is ignored.

  19. Background error covariance estimation for atmospheric CO2 data assimilation

    Science.gov (United States)

    Chatterjee, Abhishek; Engelen, Richard J.; Kawa, Stephan R.; Sweeney, Colm; Michalak, Anna M.

    2013-09-01

    In any data assimilation framework, the background error covariance statistics play the critical role of filtering the observed information and determining the quality of the analysis. For atmospheric CO2 data assimilation, however, the background errors cannot be prescribed via traditional forecast or ensemble-based techniques as these fail to account for the uncertainties in the carbon emissions and uptake, or for the errors associated with the CO2 transport model. We propose an approach where the differences between two modeled CO2 concentration fields, based on different but plausible CO2 flux distributions and atmospheric transport models, are used as a proxy for the statistics of the background errors. The resulting error statistics: (1) vary regionally and seasonally to better capture the uncertainty in the background CO2 field, and (2) have a positive impact on the analysis estimates by allowing observations to adjust predictions over large areas. A state-of-the-art four-dimensional variational (4D-VAR) system developed at the European Centre for Medium-Range Weather Forecasts (ECMWF) is used to illustrate the impact of the proposed approach for characterizing background error statistics on atmospheric CO2 concentration estimates. Observations from the Greenhouse gases Observing SATellite "IBUKI" (GOSAT) are assimilated into the ECMWF 4D-VAR system along with meteorological variables, using both the new error statistics and those based on a traditional forecast-based technique. Evaluation of the four-dimensional CO2 fields against independent CO2 observations confirms that the performance of the data assimilation system improves substantially in the summer, when significant variability and uncertainty in the fluxes are present.

  20. Human error probability estimation using licensee event reports

    International Nuclear Information System (INIS)

    Voska, K.J.; O'Brien, J.N.

    1984-07-01

    The objective of this report is to present a method for using field data from nuclear power plants to estimate human error probabilities (HEPs). These HEPs are then used in probabilistic risk activities. This method of estimating HEPs is one of four being pursued in NRC-sponsored research. The other three are structured expert judgment, analysis of training simulator data, and performance modeling. The type of field data analyzed in this report is from Licensee Event Reports (LERs), which are analyzed using a method specifically developed for that purpose. However, any type of field data or human errors could be analyzed using this method with minor adjustments. This report assesses the practicality, acceptability, and usefulness of estimating HEPs from LERs and comprehensively presents the method for use.
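
    The core quantity such an analysis produces is a frequency-based HEP: an error count divided by an estimated number of opportunities for that error. A minimal illustration with a Jeffreys interval for the rare-event count (the counts and the interval method are illustrative choices, not taken from the report):

    ```python
    from scipy import stats

    def hep_estimate(n_errors, n_opportunities, conf=0.90):
        """Point estimate and Jeffreys interval for a human error probability
        derived from event-report counts."""
        p = n_errors / n_opportunities
        a, b = n_errors + 0.5, n_opportunities - n_errors + 0.5
        lo = stats.beta.ppf((1 - conf) / 2, a, b)
        hi = stats.beta.ppf(1 - (1 - conf) / 2, a, b)
        return p, (lo, hi)

    # e.g., 3 errors observed in 1200 documented task opportunities
    print(hep_estimate(3, 1200))
    ```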

  1. Optimizing Neural Network Architectures Using Generalization Error Estimators

    DEFF Research Database (Denmark)

    Larsen, Jan

    1994-01-01

    This paper addresses the optimization of neural network architectures. It is suggested to optimize the architecture by selecting the model with minimal estimated averaged generalization error. We consider a least-squares (LS) criterion for estimating neural network models, i.e., the associated … In neural network applications it is impossible to suggest a perfect model, and consequently the ability to handle incomplete models is urgent. A concise derivation of the GEN-estimator is provided, and its qualities are demonstrated by comparative numerical studies …

  2. Evaluation of human error estimation for nuclear power plants

    International Nuclear Information System (INIS)

    Haney, L.N.; Blackman, H.S.

    1987-01-01

    The dominant risk for severe accident occurrence in nuclear power plants (NPPs) is human error. The US Nuclear Regulatory Commission (NRC) sponsored an evaluation of Human Reliability Analysis (HRA) techniques for estimation of human error in NPPs. Twenty HRA techniques identified by a literature search were evaluated with criteria sets designed for that purpose and categorized. Data were collected at a commercial NPP with operators responding in walkthroughs of four severe accident scenarios and full scope simulator runs. Results suggest a need for refinement and validation of the techniques. 19 refs

  3. Parameter Estimation for GRACE-FO Geometric Ranging Errors

    Science.gov (United States)

    Wegener, H.; Mueller, V.; Darbeheshti, N.; Naeimi, M.; Heinzel, G.

    2017-12-01

    Onboard GRACE-FO, the novel Laser Ranging Instrument (LRI) serves as a technology demonstrator, but it is a fully functional instrument to provide an additional high-precision measurement of the primary mission observable: the biased range between the two spacecraft. Its (expectedly) two largest error sources are laser frequency noise and tilt-to-length (TTL) coupling. While not much can be done about laser frequency noise, the mechanics of the TTL error are widely understood. They depend, however, on unknown parameters. In order to improve the quality of the ranging data, it is hence essential to accurately estimate these parameters and remove the resulting TTL error from the data. Means to do so will be discussed. In particular, the possibility of using calibration maneuvers, the utility of the attitude information provided by the LRI via Differential Wavefront Sensing (DWS), and the benefit from combining ranging data from LRI with ranging data from the established microwave ranging, will be mentioned.

  4. Error Estimation and Uncertainty Propagation in Computational Fluid Mechanics

    Science.gov (United States)

    Zhu, J. Z.; He, Guowei; Bushnell, Dennis M. (Technical Monitor)

    2002-01-01

    Numerical simulation has now become an integral part of engineering design process. Critical design decisions are routinely made based on the simulation results and conclusions. Verification and validation of the reliability of the numerical simulation is therefore vitally important in the engineering design processes. We propose to develop theories and methodologies that can automatically provide quantitative information about the reliability of the numerical simulation by estimating numerical approximation error, computational model induced errors and the uncertainties contained in the mathematical models so that the reliability of the numerical simulation can be verified and validated. We also propose to develop and implement methodologies and techniques that can control the error and uncertainty during the numerical simulation so that the reliability of the numerical simulation can be improved.

  5. Error Consistency Analysis Scheme for Infrared Ultraspectral Sounding Retrieval Error Budget Estimation

    Science.gov (United States)

    Zhou, Daniel K.; Larar, Allen M.; Liu, Xu; Smith, William L.; Strow, Larry, L.

    2013-01-01

    Great effort has been devoted towards validating geophysical parameters retrieved from ultraspectral infrared radiances obtained from satellite remote sensors. An error consistency analysis scheme (ECAS), utilizing fast radiative transfer model (RTM) forward and inverse calculations, has been developed to estimate the error budget in terms of mean difference and standard deviation of error in both spectral radiance and retrieval domains. The retrieval error is assessed through ECAS without relying on other independent measurements such as radiosonde data. ECAS establishes a link between the accuracies of radiances and retrieved geophysical parameters. ECAS can be applied to measurements from any ultraspectral instrument and any retrieval scheme with its associated RTM. In this manuscript, ECAS is described and demonstrated with measurements from the MetOp-A satellite Infrared Atmospheric Sounding Interferometer (IASI). This scheme can be used together with other validation methodologies to give a more definitive characterization of the error and/or uncertainty of geophysical parameters retrieved from ultraspectral radiances observed from current and future satellite remote sensors such as IASI, the Atmospheric Infrared Sounder (AIRS), and the Cross-track Infrared Sounder (CrIS).

  6. GPS/DR Error Estimation for Autonomous Vehicle Localization

    Directory of Open Access Journals (Sweden)

    Byung-Hyun Lee

    2015-08-01

    Full Text Available Autonomous vehicles require highly reliable navigation capabilities. For example, a lane-following method cannot be applied in an intersection without lanes, and since typical lane detection is performed using a straight-line model, errors can occur when the lateral distance is estimated in curved sections due to a model mismatch. Therefore, this paper proposes a localization method that uses GPS/DR error estimation based on a lane detection method with curved lane models, stop line detection, and curve matching in order to improve the performance during waypoint following procedures. The advantage of using the proposed method is that position information can be provided for autonomous driving through intersections, in sections with sharp curves, and in curved sections following a straight section. The proposed method was applied in autonomous vehicles at an experimental site to evaluate its performance, and the results indicate that the positioning achieved accuracy at the sub-meter level.

  7. GPS/DR Error Estimation for Autonomous Vehicle Localization.

    Science.gov (United States)

    Lee, Byung-Hyun; Song, Jong-Hwa; Im, Jun-Hyuck; Im, Sung-Hyuck; Heo, Moon-Beom; Jee, Gyu-In

    2015-08-21

    Autonomous vehicles require highly reliable navigation capabilities. For example, a lane-following method cannot be applied in an intersection without lanes, and since typical lane detection is performed using a straight-line model, errors can occur when the lateral distance is estimated in curved sections due to a model mismatch. Therefore, this paper proposes a localization method that uses GPS/DR error estimation based on a lane detection method with curved lane models, stop line detection, and curve matching in order to improve the performance during waypoint following procedures. The advantage of using the proposed method is that position information can be provided for autonomous driving through intersections, in sections with sharp curves, and in curved sections following a straight section. The proposed method was applied in autonomous vehicles at an experimental site to evaluate its performance, and the results indicate that the positioning achieved accuracy at the sub-meter level.

  8. A posteriori error estimates for two-phase obstacle problem

    Czech Academy of Sciences Publication Activity Database

    Repin, S.; Valdman, Jan

    2015-01-01

    Roč. 107, č. 2 (2015), s. 324-335 ISSN 1072-3374 R&D Projects: GA ČR GA13-18652S Institutional support: RVO:67985556 Keywords : two-phase obstacle problem * a posteriori error estimate * finite element method * variational inequalities Subject RIV: BA - General Mathematics http://library.utia.cas.cz/separaty/2015/MTR/valdman-0444082.pdf

  9. Estimation of GRACE observation error covariance in wavelet domain

    Science.gov (United States)

    Behzadpour, Saniya; Mayer-Gürr, Torsten; Flury, Jakob; Goswami, Sujata

    2017-04-01

    We present a wavelet-based error covariance estimator in the GRACE gravity parameter estimation procedure and study its impact on the recovered gravity field solutions based on the ITSG-Grace2016 scheme. So far, stationarity was the main assumption in modelling the noise in range rate observations and a stationary covariance function was used in the observation whitening (decorrelation) step performed before the least-squares adjustment. We have shown this assumption is violated as the noise has time-variable behaviour and should be modelled in the framework of non-stationary stochastic processes. The Discrete Wavelet Transform (DWT) is of particular interest for analysis of non-stationary and transient time series. This transform operates unconditional of the input process type and tends to achieve the desirable decorrelating property for a large class of stochastic processes, including stationary random processes and some non-stationary random processes such as fractional Brownian motions and fractionally differenced processes. In order to perform the gravity parameter estimation in wavelet domain, both observation and design matrices are transformed by a discrete wavelet transform. In this case, the dense variance-covariance matrix of the noise is diagonalized by exploiting the decorrelation property of the transform. Implementation of gravity parameter estimation in wavelet domain, estimation of the empirical error covariance matrix using the residual coefficients, and comparison of the results with the ITSG-Grace2016 solution will be discussed.

  10. Error bounds for surface area estimators based on Crofton's formula

    DEFF Research Database (Denmark)

    Kiderlen, Markus; Meschenmoser, Daniel

    2009-01-01

    According to Crofton’s formula, the surface area S(A) of a sufficiently regular compact set A in R^d is proportional to the mean of all total projections pA(u) on a linear hyperplane with normal u, uniformly averaged over all unit vectors u. In applications, pA(u) is only measured in k directions and the mean is approximated by a finite weighted sum Ŝ(A) of the total projections in these directions. The choice of the weights depends on the selected quadrature rule. We define an associated zonotope Z (depending only on the projection directions and the quadrature rule), and show that the relative error Ŝ(A)/S(A) is bounded from below by the inradius of Z and from above by the circumradius of Z. Applying a strengthened isoperimetric inequality due to Bonnesen, we show that the rectangular quadrature rule does not give the best possible error bounds for d = 2. In addition, we derive asymptotic results, in the sense that the relative error of the surface area estimator is very close to the minimal error.

  11. Close-range radar rainfall estimation and error analysis

    Science.gov (United States)

    van de Beek, C. Z.; Leijnse, H.; Hazenberg, P.; Uijlenhoet, R.

    2016-08-01

    Quantitative precipitation estimation (QPE) using ground-based weather radar is affected by many sources of error. The most important of these are (1) radar calibration, (2) ground clutter, (3) wet-radome attenuation, (4) rain-induced attenuation, (5) vertical variability in rain drop size distribution (DSD), (6) non-uniform beam filling and (7) variations in DSD. This study presents an attempt to separate and quantify these sources of error in flat terrain very close to the radar (1-2 km), where (4), (5) and (6) only play a minor role. Other important errors exist, such as beam blockage, WLAN interference and hail contamination; these are briefly mentioned but not considered in the analysis. A 3-day rainfall event (25-27 August 2010) that produced more than 50 mm of precipitation in De Bilt, the Netherlands, is analyzed using radar, rain gauge and disdrometer data. Without any correction, it is found that the radar severely underestimates the total rain amount (by more than 50 %). The calibration of the radar receiver is operationally monitored by analyzing the received power from the sun. This turns out to cause a 1 dB underestimation. The operational clutter filter applied by KNMI is found to incorrectly identify precipitation as clutter, especially at near-zero Doppler velocities. An alternative simple clutter removal scheme using a clear sky clutter map improves the rainfall estimation slightly. To investigate the effect of wet-radome attenuation, stable returns from buildings close to the radar are analyzed. It is shown that this may have caused an underestimation of up to 4 dB. Finally, a disdrometer is used to derive event-specific and intra-event Z-R relations due to variations in the observed DSDs. Such variations may result in errors when applying the operational Marshall-Palmer Z-R relation. Correcting for all of these effects has a large positive impact on the radar-derived precipitation estimates and yields a good match between radar QPE and gauge measurements.
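
    For reference, the operational Marshall-Palmer relation referred to here is Z = 200 R^1.6, with Z the radar reflectivity factor (mm^6 m^-3) and R the rain rate (mm h^-1). A sketch of the standard inversion from measured reflectivity; the event-specific coefficients fitted from disdrometer data would replace a and b:

    ```python
    def rain_rate_from_dbz(dbz, a=200.0, b=1.6):
        """Invert a Z-R power law Z = a * R**b.
        dbz: reflectivity in dBZ; returns rain rate in mm/h."""
        z_linear = 10.0 ** (dbz / 10.0)   # dBZ -> mm^6 m^-3
        return (z_linear / a) ** (1.0 / b)

    print(rain_rate_from_dbz(30.0))  # ~2.7 mm/h with Marshall-Palmer
    ```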

  12. Normalized Minimum Error Entropy Algorithm with Recursive Power Estimation

    Directory of Open Access Journals (Sweden)

    Namyong Kim

    2016-06-01

    Full Text Available The minimum error entropy (MEE) algorithm is known to be superior in signal processing applications under impulsive noise. In this paper, based on an analysis of the behavior of the optimum weight and the properties of robustness against impulsive noise, a normalized version of the MEE algorithm is proposed. The step size of the MEE algorithm is normalized with the power of the input entropy, which is estimated recursively to reduce the computational complexity. In the equalization simulation, the proposed algorithm simultaneously yields a lower minimum MSE (mean squared error) and faster convergence speed than the original MEE algorithm. For the same convergence speed, its steady-state MSE improves by more than 3 dB.
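
    The normalization idea can be sketched generically: divide the step size by a recursively (exponentially weighted) estimated power, in the spirit of NLMS. This is a schematic of the recursion only; the quantity the paper actually normalizes by, and its exact update, follow the MEE criterion rather than this plain input power:

    ```python
    import numpy as np

    def normalized_step(mu, x, p_prev, beta=0.99, eps=1e-8):
        """Recursive power estimate p and the resulting normalized step size.
        beta controls the memory of the exponentially weighted average."""
        p = beta * p_prev + (1.0 - beta) * float(np.dot(x, x))
        return mu / (p + eps), p
    ```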

  13. Error Estimation of An Ensemble Statistical Seasonal Precipitation Prediction Model

    Science.gov (United States)

    Shen, Samuel S. P.; Lau, William K. M.; Kim, Kyu-Myong; Li, Gui-Long

    2001-01-01

    This NASA Technical Memorandum describes an optimal ensemble canonical correlation forecasting model for seasonal precipitation. Each individual forecast is based on canonical correlation analysis (CCA) in spectral spaces whose bases are empirical orthogonal functions (EOFs). The optimal weights in the ensemble forecasting crucially depend on the mean square error of each individual forecast. An estimate of the mean square error of a CCA prediction is also made using the spectral method. The error is decomposed onto the EOFs of the predictand and decreases linearly according to the correlation between the predictor and predictand. Since the new CCA scheme is derived for continuous fields of predictor and predictand, an area factor is automatically included. Thus our model is an improvement of the spectral CCA scheme of Barnett and Preisendorfer. The improvements include (1) the use of the area factor, (2) the estimation of prediction error, and (3) the optimal ensemble of multiple forecasts. The new CCA model is applied to seasonal forecasting of the United States (US) precipitation field. The predictor is the sea surface temperature (SST). The US Climate Prediction Center's reconstructed SST is used as the predictor's historical data. The US National Center for Environmental Prediction's optimally interpolated precipitation (1951-2000) is used as the predictand's historical data. Our forecast experiments show that the new ensemble canonical correlation scheme yields reasonable forecasting skill. For example, when using September-October-November SST to predict the next season's December-January-February precipitation, the spatial pattern correlation between the observed and predicted fields is positive in 46 of the 50 years of experiments. The positive correlations are close to or greater than 0.4 in 29 years, which indicates excellent performance of the forecasting model. The forecasting skill can be further enhanced when several predictors are used.
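
    The stated dependence of the optimal weights on each member's mean square error can be illustrated with the standard inverse-MSE combination, which is optimal for independent, unbiased members (a generic sketch, not the memorandum's spectral implementation):

    ```python
    import numpy as np

    def inverse_mse_ensemble(forecasts, mse):
        """Combine member forecasts with weights proportional to 1/MSE,
        so less uncertain members contribute more."""
        w = 1.0 / np.asarray(mse)
        w /= w.sum()
        return np.tensordot(w, np.asarray(forecasts), axes=1)

    # three member forecasts of the same field, with estimated MSEs
    members = [np.array([1.0, 2.0]), np.array([1.2, 1.8]), np.array([0.8, 2.4])]
    print(inverse_mse_ensemble(members, mse=[0.5, 1.0, 2.0]))
    ```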

  14. A Novel Nonparametric Distance Estimator for Densities with Error Bounds

    Directory of Open Access Journals (Sweden)

    Alexandre R.F. Carvalho

    2013-05-01

    Full Text Available The use of a metric to assess distance between probability densities is an important practical problem. In this work, a particular metric induced by an α-divergence is studied. The Hellinger metric can be interpreted as a particular case within the framework of generalized Tsallis divergences and entropies. The nonparametric Parzen’s density estimator emerges as a natural candidate to estimate the underlying probability density function, since it may account for data from different groups, or experiments with distinct instrumental precisions, i.e., non-independent and identically distributed (non-i.i.d.) data. However, the information-theoretic metric derived from the nonparametric Parzen’s density estimator displays infinite variance, limiting the direct use of resampling estimators. Based on measure theory, we present a change of measure to build a finite-variance density allowing the use of resampling estimators. In order to counteract the poor scaling with dimension, we propose a new nonparametric two-stage robust resampling estimator of Hellinger’s metric error bounds for heteroscedastic data. The approach presents very promising results, allowing the use of different covariances for different clusters with impact on the distance evaluation.

  15. Erasing errors due to alignment ambiguity when estimating positive selection.

    Science.gov (United States)

    Redelings, Benjamin

    2014-08-01

    Current estimates of diversifying positive selection rely on first having an accurate multiple sequence alignment. Simulation studies have shown that under biologically plausible conditions, relying on a single estimate of the alignment from commonly used alignment software can lead to unacceptably high false-positive rates in detecting diversifying positive selection. We present a novel statistical method that eliminates excess false positives resulting from alignment error by jointly estimating the degree of positive selection and the alignment under an evolutionary model. Our model treats both substitutions and insertions/deletions as sequence changes on a tree and allows site heterogeneity in the substitution process. We conduct inference starting from unaligned sequence data by integrating over all alignments. This approach naturally accounts for ambiguous alignments without requiring ambiguously aligned sites to be identified and removed prior to analysis. We take a Bayesian approach and conduct inference using Markov chain Monte Carlo to integrate over all alignments on a fixed evolutionary tree topology. We introduce a Bayesian version of the branch-site test and assess the evidence for positive selection using Bayes factors. We compare two models of differing dimensionality using a simple alternative to reversible-jump methods. We also describe a more accurate method of estimating the Bayes factor using Rao-Blackwellization. We then show using simulated data that jointly estimating the alignment and the presence of positive selection solves the problem with excessive false positives from erroneous alignments and has nearly the same power to detect positive selection as when the true alignment is known. We also show that samples taken from the posterior alignment distribution using the software BAli-Phy have substantially lower alignment error compared with MUSCLE, MAFFT, PRANK, and FSA alignments.

  16. A Posteriori Error Estimates Including Algebraic Error and Stopping Criteria for Iterative Solvers

    Czech Academy of Sciences Publication Activity Database

    Jiránek, P.; Strakoš, Zdeněk; Vohralík, M.

    2010-01-01

    Roč. 32, č. 3 (2010), s. 1567-1590 ISSN 1064-8275 R&D Projects: GA AV ČR IAA100300802 Grant - others:GA ČR(CZ) GP201/09/P464 Institutional research plan: CEZ:AV0Z10300504 Keywords : second-order elliptic partial differential equation * finite volume method * a posteriori error estimates * iterative methods for linear algebraic systems * conjugate gradient method * stopping criteria Subject RIV: BA - General Mathematics Impact factor: 3.016, year: 2010

  17. Smoothed Spectra, Ogives, and Error Estimates for Atmospheric Turbulence Data

    Science.gov (United States)

    Dias, Nelson Luís

    2018-01-01

    A systematic evaluation is conducted of the smoothed spectrum, which is a spectral estimate obtained by averaging over a window of contiguous frequencies. The technique is extended to the ogive, as well as to the cross-spectrum. It is shown that, combined with existing variance estimates for the periodogram, the variance—and therefore the random error—associated with these estimates can be calculated in a straightforward way. The smoothed spectra and ogives are biased estimates; with simple power-law analytical models, correction procedures are devised, as well as a global constraint that enforces Parseval's identity. Several new results are thus obtained: (1) The analytical variance estimates compare well with the sample variance calculated for the Bartlett spectrum and the variance of the inertial subrange of the cospectrum is shown to be relatively much larger than that of the spectrum. (2) Ogives and spectra estimates with reduced bias are calculated. (3) The bias of the smoothed spectrum and ogive is shown to be negligible at the higher frequencies. (4) The ogives and spectra thus calculated have better frequency resolution than the Bartlett spectrum, with (5) gradually increasing variance and relative error towards the low frequencies. (6) Power-law identification and extraction of the rate of dissipation of turbulence kinetic energy are possible directly from the ogive. (7) The smoothed cross-spectrum is a valid inner product and therefore an acceptable candidate for coherence and spectral correlation coefficient estimation by means of the Cauchy-Schwarz inequality. The quadrature, phase function, coherence function and spectral correlation function obtained from the smoothed spectral estimates compare well with the classical ones derived from the Bartlett spectrum.
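
    The frequency-averaging construction is the classical Daniell-type smoother: averaging the raw periodogram over 2m+1 contiguous frequencies cuts the variance by roughly that factor at the cost of bias and frequency resolution. A minimal numpy sketch of the estimator (windowing choices and normalization are simplified relative to the paper):

    ```python
    import numpy as np

    def smoothed_spectrum(x, m, fs=1.0):
        """Periodogram averaged over windows of 2m+1 contiguous frequencies
        (Daniell smoothing); variance shrinks roughly as 1/(2m+1)."""
        n = len(x)
        freqs = np.fft.rfftfreq(n, d=1.0 / fs)
        pxx = np.abs(np.fft.rfft(x - np.mean(x))) ** 2 / (fs * n)  # raw periodogram
        kernel = np.ones(2 * m + 1) / (2 * m + 1)
        return freqs, np.convolve(pxx, kernel, mode="same")
    ```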

  18. Are Low-order Covariance Estimates Useful in Error Analyses?

    Science.gov (United States)

    Baker, D. F.; Schimel, D.

    2005-12-01

    Atmospheric trace gas inversions, using modeled atmospheric transport to infer surface sources and sinks from measured concentrations, are most commonly done using least-squares techniques that return not only an estimate of the state (the surface fluxes) but also the covariance matrix describing the uncertainty in that estimate. Besides allowing one to place error bars around the estimate, the covariance matrix may be used in simulation studies to learn what uncertainties would be expected from various hypothetical observing strategies. This error analysis capability is routinely used in designing instrumentation, measurement campaigns, and satellite observing strategies. For example, Rayner et al. (2002) examined the ability of satellite-based column-integrated CO2 measurements to constrain monthly-average CO2 fluxes for about 100 emission regions using this approach. Exact solutions for both state vector and covariance matrix become computationally infeasible, however, when the surface fluxes are solved at finer resolution (e.g., daily in time, under 500 km in space). It is precisely at these finer scales, however, that one would hope to be able to estimate fluxes using high-density satellite measurements. Non-exact estimation methods such as variational data assimilation or the ensemble Kalman filter could be used, but they achieve their computational savings by obtaining an only approximate state estimate and a low-order approximation of the true covariance. One would like to be able to use this covariance matrix to do the same sort of error analyses as are done with the full-rank covariance, but is it correct to do so? Here we compare uncertainties and 'information content' derived from full-rank covariance matrices obtained from a direct, batch least squares inversion to those from the incomplete-rank covariance matrices given by a variational data assimilation approach solved with a variable metric minimization technique (the Broyden-Fletcher-Goldfarb-Shanno method).

  19. Estimation of error components in a multi-error linear regression model, with an application to track fitting

    International Nuclear Information System (INIS)

    Fruehwirth, R.

    1993-01-01

    We present an estimation procedure of the error components in a linear regression model with multiple independent stochastic error contributions. After solving the general problem we apply the results to the estimation of the actual trajectory in track fitting with multiple scattering. (orig.)

  20. A Frequency-Domain Adaptive Filter (FDAF) Prediction Error Method (PEM) Framework for Double-Talk-Robust Acoustic Echo Cancellation

    DEFF Research Database (Denmark)

    Gil-Cacho, Jose M.; van Waterschoot, Toon; Moonen, Marc

    2014-01-01

    In this paper, we propose a new framework to tackle the double-talk (DT) problem in acoustic echo cancellation (AEC). It is based on a frequency-domain adaptive filter (FDAF) implementation of the so-called prediction error method adaptive filtering using row operations (PEM-AFROW), leading to the FDAF-PEM-AFROW algorithm. We show that FDAF-PEM-AFROW is by construction related to the best linear unbiased estimate (BLUE) of the echo path. We depart from this framework to show an improvement in performance with respect to other adaptive filters minimizing the BLUE criterion, namely the PEM …

  1. Significance of life table estimates for small populations: Simulation-based study of estimation errors

    Directory of Open Access Journals (Sweden)

    Sergei Scherbov

    2011-03-01

    Full Text Available We study bias, standard errors, and distributions of characteristics of life tables for small populations. Theoretical considerations and simulations show that statistical efficiency of different methods is, above all, affected by the population size. Yet it is also significantly affected by the life table construction method and by a population's age composition. Study results are presented in the form of ready-to-use tables and relations, which may be useful in assessing the significance of estimates and differences in life expectancy across time and space for the territories with a small population size, when standard errors of life expectancy estimates may be high.

  2. Influence of binary mask estimation errors on robust speaker identification

    DEFF Research Database (Denmark)

    May, Tobias

    2017-01-01

    Missing-data strategies have been developed to improve the noise-robustness of automatic speech recognition systems in adverse acoustic conditions. This is achieved by classifying time-frequency (T-F) units into reliable and unreliable components, as indicated by a so-called binary mask. Different approaches have been proposed to handle unreliable feature components, each with distinct advantages. The direct masking (DM) approach attenuates unreliable T-F units in the spectral domain, which allows the extraction of conventionally used mel-frequency cepstral coefficients (MFCCs). Instead of attenuating … Since each of these approaches utilizes the knowledge about reliable and unreliable feature components in a different way, they will respond differently to estimation errors in the binary mask. The goal of this study was to identify the most effective strategy to exploit knowledge about reliable …

  3. Detection of overlay error in double patterning gratings using phase-structured illumination.

    Science.gov (United States)

    Peterhänsel, Sandy; Gödecke, Maria Laura; Paz, Valeriano Ferreras; Frenner, Karsten; Osten, Wolfgang

    2015-09-21

    With the help of simulations we study the benefits of using coherent, phase-structured illumination to detect the overlay error in resist gratings fabricated by double patterning. Evaluating the intensity and phase distribution along the focused spot of a high numerical aperture microscope, the capability of detecting magnitude and direction of overlay errors in the range of a few nanometers is investigated for a wide range of gratings. Furthermore, two measurement approaches are presented and tested for their reliability in the presence of white Gaussian noise.

  4. Estimation for small domains in double sampling for stratification ...

    African Journals Online (AJOL)

    In this article, we investigate the effect of randomness of the size of a small domain on the precision of an estimator of mean for the domain under double sampling for stratification. The result shows that for a small domain that cuts across various strata with unknown weights, the sampling variance depends on the within ...

  5. Double-Layer Compressive Sensing Based Efficient DOA Estimation in WSAN with Block Data Loss.

    Science.gov (United States)

    Sun, Peng; Wu, Liantao; Yu, Kai; Shao, Huajie; Wang, Zhi

    2017-07-22

    Accurate information acquisition is of vital importance for wireless sensor array network (WSAN) direction of arrival (DOA) estimation. However, due to the lossy nature of low-power wireless links, data loss, especially block data loss induced by adopting a large packet size, has a catastrophic effect on DOA estimation performance in WSAN. In this paper, we propose a double-layer compressive sensing (CS) framework to eliminate the hazards of block data loss and to achieve accurate and efficient DOA estimation. In addition to modeling the random packet loss during transmission as a passive CS process, an active CS procedure is introduced at each array sensor to further enhance the robustness of transmission. Furthermore, to avoid the error propagation from signal recovery to DOA estimation in conventional methods, we propose a direct DOA estimation technique under the double-layer CS framework. Leveraging a joint frequency and spatial domain sparse representation of the sensor array data, the fusion center (FC) can directly obtain the DOA estimation results according to the received data packets, skipping the phase of signal recovery. Extensive simulations demonstrate that the double-layer CS framework can eliminate the adverse effects induced by block data loss and yield a superior DOA estimation performance in WSAN.

  6. Double Ballbar Measurement for Identifying Kinematic Errors of Rotary Axes on Five-Axis Machine Tools

    Directory of Open Access Journals (Sweden)

    Wei Wang

    2013-01-01

    Full Text Available This paper proposes a novel measuring method which uses a double ballbar (DBB) to inspect the kinematic errors of the rotary axes of a five-axis machine tool. In this study, a mathematical model of the kinematic errors is first established based on an analysis of the rotary axis errors originating in five-axis machine tools. In the simulation, working conditions considering different error origins are simulated to find the relationship between the DBB measuring patterns and the kinematic errors. In the measuring experiment, the machine rotary axes move simultaneously along a specified circular path while all the linear axes are kept stationary. The original DBB measuring data are processed to draw the measuring patterns in polar plots, which can be employed to observe and identify the kinematic errors. Rotary error compensation is implemented based on the function of external machine origin shift. Both the simulation and the experiment results show the convenience and effectiveness of the proposed measuring method as well as its operability as a calibration method for five-axis machine tools.

  7. On GPS Water Vapour estimation and related errors

    Science.gov (United States)

    Antonini, Andrea; Ortolani, Alberto; Rovai, Luca; Benedetti, Riccardo; Melani, Samantha

    2010-05-01

    Water vapour (WV) is one of the most important constituents of the atmosphere: it plays a crucial role in the earth's radiation budget in the absorption processes of both the incoming shortwave and the outgoing longwave radiation, and it is one of the main greenhouse gases of the atmosphere, by far the one with the highest concentration. In addition, moisture and latent heat are transported through the WV phase, which is one of the driving factors of weather dynamics, feeding the evolution of cloud systems. An accurate, dense and frequent sampling of WV at different scales is consequently of great importance for climatology and meteorology research as well as operational weather forecasting. Since the development of satellite positioning systems, it has been clear that the troposphere and its WV content were a source of delay in the positioning signal, in other words a source of error in the positioning process or, in turn, a source of information in meteorology. The use of the GPS (Global Positioning System) signal for WV estimation has increased in recent years, starting from measurements collected by a ground-fixed dual-frequency GPS geodetic station. This technique for processing the GPS data is based on measuring the signal travel time in the satellite-receiver path and then processing the signal to filter out all delay contributions except the tropospheric one. Once the tropospheric delay is computed, the wet and dry parts are decoupled under some hypotheses on the tropospheric structure and/or through ancillary information on pressure and temperature. The processing chain normally aims at producing a vertical Integrated Water Vapour (IWV) value. The other non-tropospheric delays are due to ionospheric free electrons, relativistic effects, multipath effects, transmitter and receiver instrumental biases, and signal bending. The total effect is a delay in the signal travel time with respect to the geometrical straight path. The GPS signal has the advantage of being nearly

  8. Sensorless SPMSM Position Estimation Using Position Estimation Error Suppression Control and EKF in Wide Speed Range

    Directory of Open Access Journals (Sweden)

    Zhanshan Wang

    2014-01-01

    Full Text Available The control of a high-performance alternating current (AC) motor drive under sensorless operation needs accurate estimation of the rotor position. In this paper, a method for accurately estimating the rotor position of a surface permanent magnet synchronous motor (SPMSM) under sensorless control is proposed, combining position estimation based on a complex-number motor model with a position-estimation-error suppression proportional-integral (PI) controller. In order to guarantee the accuracy of rotor position estimation in the flux-weakening region, a scheme for identifying the permanent magnet flux of the SPMSM by an extended Kalman filter (EKF) is also proposed, which forms an effective combined method for realizing sensorless control of the SPMSM with high accuracy. The simulation results demonstrated the validity and feasibility of the proposed position/speed estimation system.

  9. Black hole spectroscopy: Systematic errors and ringdown energy estimates

    Science.gov (United States)

    Baibhav, Vishal; Berti, Emanuele; Cardoso, Vitor; Khanna, Gaurav

    2018-02-01

    The relaxation of a distorted black hole to its final state provides important tests of general relativity within the reach of current and upcoming gravitational wave facilities. In black hole perturbation theory, this phase consists of a simple linear superposition of exponentially damped sinusoids (the quasinormal modes) and of a power-law tail. How many quasinormal modes are necessary to describe waveforms with a prescribed precision? What error do we incur by only including quasinormal modes, and not tails? What other systematic effects are present in current state-of-the-art numerical waveforms? These issues, which are basic to testing fundamental physics with distorted black holes, have hardly been addressed in the literature. We use numerical relativity waveforms and accurate evolutions within black hole perturbation theory to provide some answers. We show that (i) a determination of the fundamental l = m = 2 quasinormal frequencies and damping times to within 1% or better requires the inclusion of at least the first overtone, and preferably of the first two or three overtones; (ii) a determination of the black hole mass and spin with precision better than 1% requires the inclusion of at least two quasinormal modes for any given angular harmonic mode (ℓ, m). We also improve on previous estimates and fits for the ringdown energy radiated in the various multipoles. These results are important to quantify theoretical (as opposed to instrumental) limits in parameter estimation accuracy and tests of general relativity allowed by ringdown measurements with high signal-to-noise ratio gravitational wave detectors.
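
    The ringdown model referred to here is a superposition of exponentially damped sinusoids, one per quasinormal mode (the power-law tail is omitted). A sketch of that waveform model; the amplitudes, phases, frequencies and damping times below are placeholders, not physical values:

    ```python
    import numpy as np

    def ringdown(t, modes):
        """Sum of quasinormal modes: each mode is (amplitude, phase,
        frequency f in Hz, damping time tau in s)."""
        h = np.zeros_like(t)
        for amp, phi, f, tau in modes:
            h += amp * np.exp(-t / tau) * np.cos(2 * np.pi * f * t + phi)
        return h

    t = np.linspace(0.0, 0.05, 2000)
    h = ringdown(t, [(1.0, 0.0, 250.0, 4e-3), (0.3, 1.0, 240.0, 1.3e-3)])
    ```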

  10. Detecting Positioning Errors and Estimating Correct Positions by Moving Window

    Science.gov (United States)

    Song, Ha Yoon; Lee, Jun Seok

    2015-01-01

    In recent times, improvements in smart mobile devices have led to new functionalities related to their embedded positioning abilities. Many related applications that use positioning data have been introduced and are widely being used. However, the positioning data acquired by such devices are prone to erroneous values caused by environmental factors. In this research, a detection algorithm is implemented to detect erroneous data over a continuous positioning data set with several options. Our algorithm is based on a moving window for speed values derived by consecutive positioning data. Both the moving average of the speed and standard deviation in a moving window compose a moving significant interval at a given time, which is utilized to detect erroneous positioning data along with other parameters by checking the newly obtained speed value. In order to fulfill the designated operation, we need to examine the physical parameters and also determine the parameters for the moving windows. Along with the detection of erroneous speed data, estimations of correct positioning are presented. The proposed algorithm first estimates the speed, and then the correct positions. In addition, it removes the effect of errors on the moving window statistics in order to maintain accuracy. Experimental verifications based on our algorithm are presented in various ways. We hope that our approach can help other researchers with regard to positioning applications and human mobility research. PMID:26624282
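
    The detection rule described, flagging a new speed value that falls outside the significant interval built from the moving window's mean and standard deviation, while keeping flagged samples out of the window statistics, can be sketched as follows (the window length and multiplier k are tuning parameters, not the paper's values):

    ```python
    import numpy as np
    from collections import deque

    def detect_outliers(speeds, window=10, k=3.0):
        """Flag speed samples lying outside mean +/- k*std of a moving window;
        only accepted samples enter the window, so errors do not
        contaminate its statistics."""
        buf, flags = deque(maxlen=window), []
        for v in speeds:
            if len(buf) >= 3:
                mu, sd = np.mean(buf), np.std(buf)
                bad = abs(v - mu) > k * max(sd, 1e-9)
            else:
                bad = False  # not enough history yet to judge
            flags.append(bad)
            if not bad:
                buf.append(v)
        return flags
    ```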

  11. CO2 flux estimation errors associated with moist atmospheric processes

    Science.gov (United States)

    Parazoo, N. C.; Denning, A. S.; Kawa, S. R.; Pawson, S.; Lokupitiya, R.

    2012-07-01

    Vertical transport by moist sub-grid scale processes such as deep convection is a well-known source of uncertainty in CO2 source/sink inversion. However, a dynamical link between vertical transport, satellite based retrievals of column mole fractions of CO2, and source/sink inversion has not yet been established. By using the same offline transport model with meteorological fields from slightly different data assimilation systems, we examine sensitivity of frontal CO2 transport and retrieved fluxes to different parameterizations of sub-grid vertical transport. We find that frontal transport feeds off background vertical CO2 gradients, which are modulated by sub-grid vertical transport. The implication for source/sink estimation is two-fold. First, CO2 variations contained in moist poleward moving air masses are systematically different from variations in dry equatorward moving air. Moist poleward transport is hidden from orbital sensors on satellites, causing a sampling bias, which leads directly to small but systematic flux retrieval errors in northern mid-latitudes. Second, differences in the representation of moist sub-grid vertical transport in GEOS-4 and GEOS-5 meteorological fields cause differences in vertical gradients of CO2, which leads to systematic differences in moist poleward and dry equatorward CO2 transport and therefore the fraction of CO2 variations hidden in moist air from satellites. As a result, sampling biases are amplified and regional scale flux errors enhanced, most notably in Europe (0.43 ± 0.35 PgC yr-1). These results, cast from the perspective of moist frontal transport processes, support previous arguments that the vertical gradient of CO2 is a major source of uncertainty in source/sink inversion.

  12. An error estimation procedure for plate bending elements

    Science.gov (United States)

    Dow, John O.; Byrd, Doyle E.

    1988-01-01

    Procedures for identifying and eliminating errors inherent in individual finite elements and those due to the discretization of the continuum are presented. The elemental errors are identified through the use of an element formulation procedure based on physically interpretable strain gradient interpolation functions. The use of physically interpretable notation allows these errors to be eliminated using rational arguments. The discretization errors are identified by comparing the finite-element solution with a smoothed superconvergent solution. The errors thus identified are used to guide an adaptive mesh refinement procedure which produces improved results.

  13. Bootstrap Standard Errors for Maximum Likelihood Ability Estimates When Item Parameters Are Unknown

    Science.gov (United States)

    Patton, Jeffrey M.; Cheng, Ying; Yuan, Ke-Hai; Diao, Qi

    2014-01-01

    When item parameter estimates are used to estimate the ability parameter in item response models, the standard error (SE) of the ability estimate must be corrected to reflect the error carried over from item calibration. For maximum likelihood (ML) ability estimates, a corrected asymptotic SE is available, but it requires a long test and the…

  14. Measurement Error Estimation for Capacitive Voltage Transformer by Insulation Parameters

    Directory of Open Access Journals (Sweden)

    Bin Chen

    2017-03-01

    Full Text Available Measurement errors of a capacitive voltage transformer (CVT) are governed by its equivalent parameters, to which its capacitive divider contributes the most. In daily operation, dielectric aging, moisture, dielectric breakdown and similar effects act together on the capacitive divider's insulation characteristics, causing its equivalent parameters to fluctuate and thereby producing measurement errors. This paper proposes an equivalent circuit model to represent a CVT which incorporates the insulation characteristics of the capacitive divider. After software simulation and laboratory experiments, the relationship between measurement errors and insulation parameters is obtained. It indicates that variation of insulation parameters in a CVT will cause an appreciable measurement error. From field tests and calculation, equivalent capacitance mainly affects magnitude error, while dielectric loss mainly affects phase error. As capacitance changes 0.2%, magnitude error can reach −0.2%. As dielectric loss factor changes 0.2%, phase error can reach 5′. An increase of equivalent capacitance and dielectric loss factor in the high-voltage capacitor will cause a positive real power measurement error. An increase of equivalent capacitance and dielectric loss factor in the low-voltage capacitor will cause a negative real power measurement error.
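
    The dominant mechanism can be illustrated with an ideal capacitive divider: with high-voltage capacitance C1 and low-voltage capacitance C2, the ratio is K = (C1 + C2)/C1, so a fractional drift of C1 maps almost one-to-one into a magnitude (ratio) error, consistent with the 0.2% figure quoted above. A toy check (component values are illustrative only):

    ```python
    def ratio_error(c1, c2, dc1=0.0, dc2=0.0):
        """Relative change of the divider ratio K = (C1 + C2) / C1
        when C1 and C2 drift by fractions dc1, dc2."""
        k0 = (c1 + c2) / c1
        k1 = (c1 * (1 + dc1) + c2 * (1 + dc2)) / (c1 * (1 + dc1))
        return (k1 - k0) / k0

    # a 0.2 % drift of the high-voltage capacitance -> roughly -0.2 % ratio error
    print(ratio_error(c1=1e-9, c2=100e-9, dc1=0.002))
    ```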

  15. Radiographer and radiologist perception error in reporting double contrast barium enemas: A pilot study

    International Nuclear Information System (INIS)

    Booth, Alison M.; Mannion, Richard A.J.

    2005-01-01

    Purpose: The practice of radiographers performing double contrast barium enemas (DCBE) is now widespread and in many centres the radiographer's opinion is, at least, contributing to a dual reporting system [Bewell J, Chapman AH. Radiographer performed barium enemas - results of a survey to assess progress. Radiography 1996;2:199-205; Leslie A, Virjee JP. Detection of colorectal carcinoma on double contrast barium enema when double reporting is routinely performed: an audit of current practice. Clin Radiol 2001;57:184-7; Culpan DG, Mitchell AJ, Hughes S, Nutman M, Chapman AH. Double contrast barium enema sensitivity: a comparison of studies by radiographers and radiologists. Clin Radiol 2002;57:604-7]. To ensure this change in practice does not lead to an increase in reporting errors, this study aimed to compare the perception abilities of radiographers with those of radiologists. Methods: Three gastro-intestinal (GI) radiographers and three consultant radiologists independently reported on a selection of 50 DCBE examinations, including the level of certainty in their comments for each examination. A blinded comparison of the results with an independent 'standard report' was recorded. Results: The results demonstrate there was no significant difference in perception error for any of the levels of certainty, whether for single reporting, for double reading by a radiographer/radiologist or by two radiologists. Conclusions: The study shows that radiographers can perceive abnormalities on DCBE with sensitivities and specificities similar to those of radiologists. While the participants in the study may be typical of a district general hospital, the nature of the study gives it limited external validity. As a pilot, the results demonstrate that, with slight modification, the methodology could be used for a larger study.

  16. Robust double gain unscented Kalman filter for small satellite attitude estimation

    Science.gov (United States)

    Cao, Lu; Yang, Weiwei; Li, Hengnian; Zhang, Zhidong; Shi, Jianjun

    2017-08-01

    Limited by the low precision of small satellite sensors, high-performance estimation theory remains a highly active research topic for attitude estimation. The Kalman filter (KF) and its extensions have been widely applied to satellite attitude estimation and have achieved plenty of results. However, most of the existing methods use only the current time-step's a priori measurement residuals to complete the measurement update and state estimation, ignoring the extraction and utilization of the previous time-step's a posteriori measurement residuals. In addition, uncertain model errors always exist in the attitude dynamic system, which places higher performance demands on the classical KF in the attitude estimation problem. Therefore, a novel robust double gain unscented Kalman filter (RDG-UKF) is presented in this paper to satisfy the above requirements for small satellite attitude estimation with low-precision sensors. It is assumed that the system state estimation errors are exhibited in the measurement residual; therefore, the new method derives a second Kalman gain Kk2 to make full use of the previous time-step's measurement residual and improve the utilization efficiency of the measurement data. Moreover, the sequence orthogonal principle and the unscented transform (UT) strategy are introduced to enhance the robustness and performance of the filter and to reduce the influence of existing uncertain model errors. Numerical simulations show that the proposed RDG-UKF is more effective and robust in dealing with model errors and low-precision sensors for small satellite attitude estimation than the classical unscented Kalman filter (UKF).
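
    All UKF variants, including the double-gain filter proposed here, build on the unscented transform: 2n+1 sigma points drawn from the state mean and covariance are pushed through the nonlinearity and re-averaged. A standard sketch of that step (the scaling parameters are conventional defaults, not the paper's choices):

    ```python
    import numpy as np

    def unscented_transform(mean, cov, f, alpha=1e-3, beta=2.0, kappa=0.0):
        """Propagate (mean, cov) through a nonlinear f with 2n+1 sigma points."""
        n = mean.size
        lam = alpha**2 * (n + kappa) - n
        S = np.linalg.cholesky((n + lam) * cov)          # S @ S.T = (n+lam)*cov
        sigma = np.vstack([mean, mean + S.T, mean - S.T])  # shape (2n+1, n)
        wm = np.full(2 * n + 1, 0.5 / (n + lam))
        wc = wm.copy()
        wm[0] = lam / (n + lam)
        wc[0] = wm[0] + (1 - alpha**2 + beta)
        y = np.array([f(s) for s in sigma])
        y_mean = wm @ y
        d = y - y_mean
        y_cov = (wc[:, None] * d).T @ d
        return y_mean, y_cov
    ```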

  17. A Posteriori Error Estimation for Finite Element Methods and Iterative Linear Solvers

    Energy Technology Data Exchange (ETDEWEB)

    Melboe, Hallgeir

    2001-10-01

    This thesis addresses a posteriori error estimation for finite element methods and iterative linear solvers. Adaptive finite element methods have gained a lot of popularity over the last decades due to their ability to produce accurate results with limited computer power. In these methods a posteriori error estimates play an essential role. Not only do they give information about how large the total error is, they also indicate which parts of the computational domain should be given a more sophisticated treatment in order to reduce the error. A posteriori error estimates are traditionally aimed at estimating the global error, but more recently so-called goal-oriented error estimators have attracted a lot of interest. The name reflects the fact that they estimate the error in user-defined local quantities. In this thesis the main focus is on global error estimators for highly stretched grids and goal-oriented error estimators for flow problems on regular grids. Numerical methods for partial differential equations, such as finite element methods and other similar techniques, typically result in a linear system of equations that needs to be solved. Usually such systems are solved using some iterative procedure which, due to a finite number of iterations, introduces an additional error. Most such algorithms apply the residual in the stopping criterion, whereas the control of the actual error may be rather poor. A secondary focus in this thesis is on estimating the errors that are introduced during this last part of the solution procedure. The thesis contains new theoretical results regarding the behaviour of some well known, and a few new, a posteriori error estimators for finite element methods on anisotropic grids. Further, a goal-oriented strategy for the computation of forces in flow problems is devised and investigated. Finally, an approach for estimating the actual errors associated with the iterative solution of linear systems of equations is suggested. (author)
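
    As a concrete illustration of residual-based a posteriori estimation driving adaptivity, the following sketch solves -u'' = f on (0,1) with P1 finite elements and refines where the element indicators are largest. It is a minimal 1D example with one standard indicator scaling, not the estimators analyzed in the thesis.

    ```python
    import numpy as np

    def solve_p1_poisson(nodes, f):
        """P1 finite elements for -u'' = f on (0,1), u(0) = u(1) = 0."""
        n = len(nodes)
        h = np.diff(nodes)
        A = np.zeros((n, n))
        b = np.zeros(n)
        for k in range(n - 1):
            A[k:k+2, k:k+2] += np.array([[1.0, -1.0], [-1.0, 1.0]]) / h[k]
            b[k:k+2] += 0.5 * h[k] * f(0.5 * (nodes[k] + nodes[k+1]))  # midpoint rule
        A[0, :], A[-1, :] = 0.0, 0.0
        A[0, 0] = A[-1, -1] = 1.0
        b[0] = b[-1] = 0.0
        return np.linalg.solve(A, b)

    def indicators(nodes, u, f):
        """eta_K^2 = h_K^2 ||f||_K^2 + h_K * (flux-jump terms); one common 1D scaling."""
        h = np.diff(nodes)
        mids = 0.5 * (nodes[:-1] + nodes[1:])
        eta2 = h**2 * (f(mids) ** 2 * h)         # h_K^2 * ||f||_{L2(K)}^2 (midpoint rule)
        jump2 = np.diff(np.diff(u) / h) ** 2     # squared jumps of u_h' at interior nodes
        eta2[:-1] += 0.5 * h[:-1] * jump2
        eta2[1:] += 0.5 * h[1:] * jump2
        return np.sqrt(eta2)

    f = lambda x: 100.0 * np.exp(-100.0 * (x - 0.5) ** 2)   # sharply peaked source
    nodes = np.linspace(0.0, 1.0, 11)
    for _ in range(5):                                      # refine the worst elements
        u = solve_p1_poisson(nodes, f)
        eta = indicators(nodes, u, f)
        refine = 0.5 * (nodes[:-1] + nodes[1:])[eta >= 0.7 * eta.max()]
        nodes = np.sort(np.concatenate([nodes, refine]))
    print(f"{len(nodes)} nodes after adaptive refinement")
    ```

    The refinement concentrates nodes around the peak of the source term, which is exactly the behaviour the abstract describes: the indicators tell us where the mesh needs a more sophisticated treatment.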

  18. Mask contribution on CD and OVL errors budgets for double patterning lithography

    Science.gov (United States)

    Servin, I.; Lapeyre, C.; Barnola, S.; Connolly, B.; Ploss, R.; Nakagawa, K.; Buck, P.; McCallum, M.

    2009-01-01

    Double Patterning Technology (DPT) is now considered the mainstream technology for 32 nm node lithography. The main DPT processes have been developed according to the targeted applications: spacer, and pitch splitting either by dual-line or dual-trench approaches. However, the successful implementation of DPT requires overcoming certain technical challenges in terms of exposure tool capability, process integration, mask performance and, finally, metrology (1, 2). For the pitch splitting process, mask performance becomes critical as the technique requires a set of two masks (3). This paper focuses on the mask contribution to the global critical dimension (CD) and overlay (OVL) errors for DPT. The mask long-distance and local off-target CD variation and image placement were determined on DP features at 180 nm and 128 nm pitches, dedicated to the 45 nm and 32 nm nodes respectively. The mask data were then compared to the wafer CD and OVL results achieved on the same DP patterns. Edge placement errors were programmed on DP-like structures on the reticle in order to investigate the impact of the offsets on CD and image placement. Line CDs increase with asymmetric spaces adjacent to the drawn lines for offsets higher than 12 nm; these results were then compared with the corresponding density induced by individual dense and sparse symmetric edges and correlated with simulated predictions. The single-reticle trans-X offsets were then compared with the CD impact of OVL errors in the double patterning strategy. Finally, the impact of pellicle-induced reticle distortions on image placement errors was investigated (4). The mechanical impact of the pellicle was assessed by mask registration measurements before and after pellicle removal. The reticle contributions to the overall wafer CD and OVL error budgets were addressed to meet the ITRS requirements.

  19. On the mean squared error of the ridge estimator of the covariance and precision matrix

    NARCIS (Netherlands)

    van Wieringen, Wessel N.

    2017-01-01

    For a suitably chosen ridge penalty parameter, the ridge regression estimator uniformly dominates the maximum likelihood regression estimator in terms of the mean squared error. Analogous results for the ridge maximum likelihood estimators of covariance and precision matrix are presented.
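
    A quick Monte Carlo check of this kind of dominance can be done with a simple ridge-type shrinkage of the sample covariance toward a scaled identity. The sketch below uses the convex-combination form (1 - lam)*S + lam*mu*I with a fixed illustrative penalty; this is a generic shrinkage estimator, not necessarily the penalized-likelihood estimator analyzed in the paper.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    p, n, reps = 20, 25, 200
    # True covariance: AR(1)-type structure
    Sigma = 0.6 ** np.abs(np.subtract.outer(np.arange(p), np.arange(p)))

    def mse(estimates):
        """Monte Carlo mean squared (Frobenius) error against the true Sigma."""
        return np.mean([np.sum((E - Sigma) ** 2) for E in estimates])

    ml, ridge = [], []
    lam = 0.3                                   # illustrative penalty; tuned in practice
    for _ in range(reps):
        X = rng.multivariate_normal(np.zeros(p), Sigma, size=n)
        S = np.cov(X, rowvar=False, bias=True)  # ML (sample) covariance estimator
        target = np.mean(np.diag(S)) * np.eye(p)
        ml.append(S)
        ridge.append((1 - lam) * S + lam * target)

    print("MSE(ML)   =", round(mse(ml), 3))
    print("MSE(ridge)=", round(mse(ridge), 3))  # typically smaller for suitable lam
    ```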

  20. Tensor estimation for double-pulsed diffusional kurtosis imaging.

    Science.gov (United States)

    Shaw, Calvin B; Hui, Edward S; Helpern, Joseph A; Jensen, Jens H

    2017-07-01

    Double-pulsed diffusional kurtosis imaging (DP-DKI) represents the double diffusion encoding (DDE) MRI signal in terms of six-dimensional (6D) diffusion and kurtosis tensors. Here a method for estimating these tensors from experimental data is described. A standard numerical algorithm for tensor estimation from conventional (i.e. single diffusion encoding) diffusional kurtosis imaging (DKI) data is generalized to DP-DKI. This algorithm is based on a weighted least squares (WLS) fit of the signal model to the data, combined with constraints designed to minimize unphysical parameter estimates. The numerical algorithm then takes the form of a quadratic programming problem. The principal change required to adapt the conventional DKI fitting algorithm to DP-DKI is replacing the three-dimensional diffusion and kurtosis tensors with the 6D tensors needed for DP-DKI. In this way, the 6D diffusion and kurtosis tensors for DP-DKI can be conveniently estimated from DDE data by using constrained WLS, providing a practical means for condensing DDE measurements into well-defined mathematical constructs that may be useful for interpreting and applying DDE MRI. Brain data from healthy volunteers are used to demonstrate the DP-DKI tensor estimation algorithm. In particular, representative parametric maps of selected tensor-derived rotational invariants are presented. Copyright © 2017 John Wiley & Sons, Ltd.
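
    The WLS idea can be sketched in the simplest single-direction case, where the standard DKI log-signal model ln S(b) = ln S0 - b*D + (1/6)*b^2*D^2*K is linear in (ln S0, D, D^2*K). The code below is a hedged toy version: one direction, no 6D tensors, no quadratic-programming constraints, and invented parameter values.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    # Single-direction DKI signal model: ln S(b) = ln S0 - b*D + (1/6) b^2 D^2 K
    S0, D, K = 1.0, 1.0e-3, 0.8                      # D in mm^2/s; illustrative values
    b = np.array([0, 500, 1000, 1500, 2000, 2500])   # b-values in s/mm^2
    signal = S0 * np.exp(-b * D + b**2 * D**2 * K / 6)
    noisy = signal + rng.normal(0, 0.005, b.size)

    # Linear model in beta = (ln S0, D, D^2 K): y = X @ beta
    y = np.log(noisy)
    X = np.column_stack([np.ones_like(b, dtype=float), -b, b**2 / 6.0])

    # WLS with weights ~ signal^2 (accounts for noise scaling under the log transform);
    # one iteration using the OLS-predicted signal to form the weights.
    beta_ols = np.linalg.lstsq(X, y, rcond=None)[0]
    w = np.sqrt(np.exp(X @ beta_ols) ** 2)
    beta = np.linalg.lstsq(X * w[:, None], y * w, rcond=None)[0]

    D_hat = beta[1]
    K_hat = beta[2] / D_hat**2
    print(f"D_hat = {D_hat:.2e}, K_hat = {K_hat:.2f}")
    ```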

  1. Prediction of position estimation errors for 3D target trajetories estimated from cone-beam CT projections

    DEFF Research Database (Denmark)

    Poulsen, Per Rugaard; Cho, Byungchul; Keall, Paul

    2010-01-01

    The mathematical formalism of the method includes an individualized measure of the position estimation error in terms of an estimated 1D Gaussian distribution for the unresolved target position [2]. The present study investigates how well this 1D Gaussian predicts the actual distribution of position estimation errors. Over 5000 CBCT acquisitions were simulated from a 46-patient thoracic/abdominal and a 17-patient prostate tumor motion database. The 1D Gaussian predicted the actual root-mean-square and 95th percentile of the position estimation error with mean errors ≤0.04 mm and maximum errors ≤0.48 mm. This finding indicates that individualized root-mean-square errors and 95% confidence intervals can be applied reliably to the estimated target trajectories.

  2. Error estimates for extrapolations with matrix-product states

    Science.gov (United States)

    Hubig, C.; Haegeman, J.; Schollwöck, U.

    2018-01-01

    We introduce an error measure for matrix-product states without requiring the relatively costly two-site density-matrix renormalization group (2DMRG). This error measure is based on an approximation of the full variance ⟨ψ|(Ĥ − E)²|ψ⟩. When applied to a series of matrix-product states at different bond dimensions obtained from a single-site density-matrix renormalization group (1DMRG) calculation, it allows for the extrapolation of observables towards the zero-error case representing the exact ground state of the system. The calculation of the error measure is split into a sequential part of cost equivalent to two calculations of ⟨ψ|Ĥ|ψ⟩ and a trivially parallelized part scaling like a single operator application in 2DMRG. The reliability of this error measure is demonstrated by four examples: the L = 30, S = 1/2 Heisenberg chain; the L = 50 Hubbard chain; an electronic model with long-range Coulomb-like interactions; and the Hubbard model on a cylinder of size 10 × 4. Extrapolation in this error measure is shown to be on par with extrapolation in the 2DMRG truncation error or in the full variance ⟨ψ|(Ĥ − E)²|ψ⟩ at a fraction of the computational effort.

  3. A posteriori error estimates for finite volume approximations of elliptic equations on general surfaces

    Energy Technology Data Exchange (ETDEWEB)

    Ju, Lili; Tian, Li; Wang, Desheng

    2008-10-31

    In this paper, we present a residual-based a posteriori error estimate for the finite volume discretization of steady convection-diffusion-reaction equations defined on surfaces in R³, which are often implicitly represented as level sets of smooth functions. Reliability and efficiency of the proposed a posteriori error estimator are rigorously proved. Numerical experiments are also conducted to verify the theoretical results and demonstrate the robustness of the error estimator.

  4. On the a priori estimation of collocation error covariance functions: a feasibility study

    DEFF Research Database (Denmark)

    Arabelos, D.N.; Forsberg, René; Tscherning, C.C.

    2007-01-01

    Error covariance estimates are necessary information for the combination of solutions resulting from different kinds of data or methods, or for the assimilation of new results in already existing solutions. Such a combination or assimilation process demands proper weighting of the data, in order for the combination to be optimal and the error estimates of the results realistic. Using numerical experiments aiming at revealing dependencies between error covariance estimates and given features of the input data, we investigate the possibility of a straightforward estimation of error covariance functions exploiting known characteristics of the observations. The experiments using gravity anomalies for the computation of geoid heights …

  5. Adjustment of Measurements with Multiplicative Errors: Error Analysis, Estimates of the Variance of Unit Weight, and Effect on Volume Estimation from LiDAR-Type Digital Elevation Models

    Directory of Open Access Journals (Sweden)

    Yun Shi

    2014-01-01

    Modern observation technology has verified that measurement errors can be proportional to the true values of measurements, as in GPS and VLBI baselines and LiDAR. Observational models of this type are called multiplicative error models. This paper extends the work of Xu and Shimada, published in 2000, on multiplicative error models to the analytical error analysis of quantities of practical interest and to estimates of the variance of unit weight. We analytically derive the variance-covariance matrices of the three least squares (LS) adjustments, the adjusted measurements and the corrections of measurements in multiplicative error models. For quality evaluation, we construct five estimators for the variance of unit weight in association with the three LS adjustment methods. Although LiDAR measurements are contaminated with multiplicative random errors, LiDAR-based digital elevation models (DEM) have been constructed as if the errors were additive. We simulate a model landslide, assumed to be surveyed with LiDAR, and investigate the effect of LiDAR-type multiplicative measurement errors on DEM construction and on the estimate of landslide mass volume from the constructed DEM.

  6. On the Performance of Principal Component Liu-Type Estimator under the Mean Square Error Criterion

    Directory of Open Access Journals (Sweden)

    Jibo Wu

    2013-01-01

    Wu (2013) proposed the principal component Liu-type estimator to overcome multicollinearity. This is a general estimator which includes the ordinary least squares estimator, the principal component regression estimator, the ridge estimator, the Liu estimator, the Liu-type estimator, the r-k class estimator, and the r-d class estimator. In this paper, we first use a new method to derive the principal component Liu-type estimator; we then study the superiority of the new estimator under the scalar mean squared error criterion. Finally, we give a numerical example to illustrate the theoretical results.

  7. Subroutine library for error estimation of matrix computation (Ver. 1.0)

    International Nuclear Information System (INIS)

    Ichihara, Kiyoshi; Shizawa, Yoshihisa; Kishida, Norio

    1999-03-01

    'Subroutine Library for Error Estimation of Matrix Computation' is a subroutine library which helps users obtain the error ranges of linear systems' solutions or Hermitian matrices' eigenvalues. The library contains routines for both sequential and parallel computers. The subroutines for linear system error estimation calculate norms of residual vectors, matrix condition numbers, error bounds of solutions, and so on. The subroutines for error estimation of Hermitian matrix eigenvalues derive the error ranges of the eigenvalues according to the Korn-Kato formula. The test matrix generators supply matrices that appear in mathematical research, randomly generated matrices, and matrices that appear in application programs. This user's manual contains a brief mathematical background on error analysis in linear algebra and the usage of the subroutines. (author)
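
    The flavor of such routines can be reproduced with a few lines of linear algebra: compute the residual of a computed solution and combine it with the condition number in the classical bound ||x - x̂||/||x|| ≤ κ(A)·||r||/||b||. This is a generic sketch of the underlying mathematics, not the library's actual interface.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    n = 200
    A = rng.normal(size=(n, n)) + 5 * np.eye(n)   # reasonably conditioned test matrix
    x_true = rng.normal(size=n)
    b = A @ x_true

    x_hat = np.linalg.solve(A, b)                 # computed solution
    r = b - A @ x_hat                             # residual vector
    kappa = np.linalg.cond(A)                     # condition number (2-norm)

    # Classical residual-based bound: ||x - x_hat|| / ||x|| <= kappa * ||r|| / ||b||
    bound = kappa * np.linalg.norm(r) / np.linalg.norm(b)
    actual = np.linalg.norm(x_true - x_hat) / np.linalg.norm(x_true)
    print(f"residual norm = {np.linalg.norm(r):.2e}")
    print(f"error bound   = {bound:.2e}  (actual relative error = {actual:.2e})")
    ```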

  8. A posteriori error estimation of H1 mixed finite element method for the Benjamin-Bona-Mahony equation

    Science.gov (United States)

    Shafie, Sabarina; Tran, Thanh

    2017-08-01

    Error estimates for the H1 mixed finite element method applied to the Benjamin-Bona-Mahony equation are considered. The problem is reformulated into a system of first-order partial differential equations, which allows approximation of both the unknown function and its derivative. Local parabolic error estimates are introduced to approximate the true errors from the computed solutions, the so-called a posteriori error estimates. Numerical experiments show that the a posteriori error estimates converge to the true errors of the problem.

  9. Estimating and localizing the algebraic and total numerical errors using flux reconstructions

    Czech Academy of Sciences Publication Activity Database

    Papež, Jan; Strakoš, Z.; Vohralík, M.

    2018-01-01

    Vol. 138, No. 3 (2018), pp. 681-721 ISSN 0029-599X R&D Projects: GA ČR GA13-06684S Grant - others: GA MŠk(CZ) LL1202 Institutional support: RVO:67985807 Keywords: numerical solution of partial differential equations * finite element method * a posteriori error estimation * algebraic error * discretization error * stopping criteria * spatial distribution of the error Subject RIV: BA - General Mathematics Impact factor: 2.152, year: 2016

  10. Residual-based a posteriori error estimation for multipoint flux mixed finite element methods

    KAUST Repository

    Du, Shaohong

    2015-10-26

    A novel residual-type a posteriori error analysis technique is developed for multipoint flux mixed finite element methods for flow in porous media in two or three space dimensions. The derived a posteriori error estimator for the velocity and pressure error in L-norm consists of discretization and quadrature indicators, and is shown to be reliable and efficient. The main tools of analysis are a locally postprocessed approximation to the pressure solution of an auxiliary problem and a quadrature error estimate. Numerical experiments are presented to illustrate the competitive behavior of the estimator.

  11. The effect of retrospective sampling on estimates of prediction error for multifactor dimensionality reduction.

    Science.gov (United States)

    Winham, Stacey J; Motsinger-Reif, Alison A

    2011-01-01

    The standard in genetic association studies of complex diseases is replication and validation of positive results, with an emphasis on assessing the predictive value of associations. In response to this need, a number of analytical approaches have been developed to identify predictive models that account for complex genetic etiologies. Multifactor Dimensionality Reduction (MDR) is a commonly used, highly successful method designed to evaluate potential gene-gene interactions. MDR relies on classification error in a cross-validation framework to rank and evaluate potentially predictive models. Previous work has demonstrated the high power of MDR, but has not considered the accuracy and variance of the MDR prediction error estimate. Here, we evaluate the bias and variance of the MDR error estimate as both a retrospective and prospective estimator and show that MDR can both underestimate and overestimate error. We argue that a prospective error estimate is necessary if MDR models are used for prediction, and propose a bootstrap resampling estimate, integrating population prevalence, to accurately estimate prospective error. We demonstrate that this bootstrap estimate is preferable for prediction to the error estimate currently produced by MDR. While demonstrated with MDR, the proposed estimation is applicable to all data-mining methods that use similar estimates. © 2010 The Authors Annals of Human Genetics © 2010 Blackwell Publishing Ltd/University College London.
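
    A minimal version of a prevalence-weighted bootstrap error estimate might look like the sketch below. The classifier is a trivial threshold rule standing in for an MDR model, and the data, prevalence, and weighting scheme are illustrative assumptions rather than the authors' exact procedure.

    ```python
    import numpy as np

    rng = np.random.default_rng(4)

    # Retrospective (case-control) sample: equal numbers of cases and controls,
    # although the population prevalence is much lower.
    n_each, prevalence = 150, 0.1
    cases    = rng.normal(1.0, 1.0, n_each)   # 1D risk score, cases shifted up
    controls = rng.normal(0.0, 1.0, n_each)
    x = np.concatenate([cases, controls])
    y = np.concatenate([np.ones(n_each), np.zeros(n_each)])

    def fit_threshold(x, y):
        """Trivial classifier: midpoint of class means (stand-in for an MDR model)."""
        return 0.5 * (x[y == 1].mean() + x[y == 0].mean())

    boot_err = []
    for _ in range(500):
        idx = rng.integers(0, len(x), len(x))          # bootstrap resample
        oob = np.setdiff1d(np.arange(len(x)), idx)     # out-of-bag samples for testing
        t = fit_threshold(x[idx], y[idx])
        miss = np.mean(x[oob][y[oob] == 1] <= t)       # P(classified healthy | case)
        fa   = np.mean(x[oob][y[oob] == 0] >  t)       # P(classified case | control)
        # Prospective error: class-conditional errors weighted by population prevalence
        boot_err.append(prevalence * miss + (1 - prevalence) * fa)

    print(f"bootstrap prospective error estimate: {np.mean(boot_err):.3f}")
    ```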

  12. Extent of error in estimating nutrient intakes from food tables versus laboratory estimates of cooked foods.

    Science.gov (United States)

    Chiplonkar, Shashi Ajit; Agte, Vaishali Vilas

    2007-01-01

    Individual cooked foods (104) and composite meals (92) were examined for agreement between the nutritive value estimated by indirect analysis (E) (the Indian National database of nutrient composition of raw foods, adjusted for observed moisture contents of cooked recipes) and by chemical analysis in our laboratory (M). The extent of error incurred in using food table values with moisture correction for estimating macro- as well as micronutrients, at the food level and at the daily intake level, was quantified. Food samples were analyzed for contents of iron, zinc, copper, beta-carotene, riboflavin, thiamine, ascorbic acid and folic acid, and also for macronutrients, phytate and dietary fiber. The mean percent difference between E and M was 3.07+/-0.6% for energy, 5.3+/-2.0% for protein, 2.6+/-1.8% for fat and 5.1+/-0.9% for carbohydrates. The mean percent difference in vitamin contents between E and M ranged from 32% (vitamin C) to 45.5% (beta-carotene), and that for minerals from 5.6% (copper) to 19.8% (zinc). Percent E/M values were computed for the daily nutrient intakes of 264 apparently healthy adults. These were observed to be 108, 112, 127 and 97 for energy, protein, fat and carbohydrates, respectively. Percent E/M values for intakes of copper (102) and beta-carotene (114) were close to 100, but were very high in the case of zinc (186), iron (202), and vitamins C (170), thiamine (190), riboflavin (181) and folic acid (165). Estimates based on food composition table values with moisture correction place macronutrients in cooked foods within +/-5%, whereas at daily intake levels the error increased up to 27%. The lack of good agreement for several micronutrients indicates that the use of Indian food tables for micronutrient intakes would be inappropriate.

  13. Effects of Measurement Errors on Individual Tree Stem Volume Estimates for the Austrian National Forest Inventory

    Science.gov (United States)

    Ambros Berger; Thomas Gschwantner; Ronald E. McRoberts; Klemens. Schadauer

    2014-01-01

    National forest inventories typically estimate individual tree volumes using models that rely on measurements of predictor variables such as tree height and diameter, both of which are subject to measurement error. The aim of this study was to quantify the impacts of these measurement errors on the uncertainty of the model-based tree stem volume estimates. The impacts...

  14. Do Survey Data Estimate Earnings Inequality Correctly? Measurement Errors among Black and White Male Workers

    Science.gov (United States)

    Kim, ChangHwan; Tamborini, Christopher R.

    2012-01-01

    Few studies have considered how earnings inequality estimates may be affected by measurement error in self-reported earnings in surveys. Utilizing restricted-use data that links workers in the Survey of Income and Program Participation with their W-2 earnings records, we examine the effect of measurement error on estimates of racial earnings…

  15. LiDAR error estimation with WAsP engineering

    DEFF Research Database (Denmark)

    Bingöl, Ferhat; Mann, Jakob; Foussekis, D.

    2008-01-01

    The LiDAR measurements, a vertical wind profile at any height between 10 and 150 m, are based on the assumption that the measured wind is the product of a homogeneous flow. In reality there are many factors affecting the wind at each measurement point, with the terrain playing the main role. To model LiDAR measurements and predict possible errors in different wind directions for a certain terrain, we have analyzed two experimental data sets from Greece. At both sites LiDAR and met mast data have been collected, and the same conditions are simulated with the Risø/DTU software WAsP Engineering 2.0. Finally, the measurement data are compared with the model results. The model results are acceptable and very close for one site, while the more complex one returns higher errors at higher positions and in some wind directions.

  16. Performances of estimators of linear auto-correlated error model ...

    African Journals Online (AJOL)

    The performances of five estimators of linear models with autocorrelated disturbance terms are compared when the independent variable is exponential. The results reveal that for both small and large samples, the Ordinary Least Squares (OLS) estimator compares favourably with the Generalized Least Squares (GLS) estimators in ...

  17. A Fortran IV Program for Estimating Parameters through Multiple Matrix Sampling with Standard Errors of Estimate Approximated by the Jackknife.

    Science.gov (United States)

    Shoemaker, David M.

    Described and listed herein, with concomitant sample input and output, is a Fortran IV program which estimates parameters, and standard errors of estimate for those parameters, through multiple matrix sampling. The program is an improved and expanded version of an earlier one. (Author/BJG)

  18. Use of doubling doses for the estimation of genetic risks

    International Nuclear Information System (INIS)

    Searle, A.G.

    1977-01-01

    Doubling dose estimates derived from radiation experiments in mice are proving of great value for the assessment of genetic hazards to man from extra radiation exposure, because they allow the latest information on mutation frequencies and the incidence of genetic disease in man to be used in the assessment process. The similarity in the spectra of 'spontaneous' and induced mutations increases confidence in the validity of this approach. Data on rates of induction of dominant and recessive mutations, translocations and X-chromosome loss are used to derive doubling doses for chronic exposures to both low- and high-LET radiations. Values for γ and X-rays, derived from both male and female germ-cells, fall inside a fairly small range, and it is felt that the use of an overall figure of 100 rads is justifiable for protection purposes. Values for neutrons and α-particles, obtained from male germ-cells, varied according to neutron energy etc. but clustered around a value of 5 rads for fission neutrons.

  19. Error Estimates Derived from the Data for Least-Squares Spline Fitting

    Energy Technology Data Exchange (ETDEWEB)

    Jerome Blair

    2007-06-25

    The use of least-squares fitting by cubic splines for the purpose of noise reduction in measured data is studied. Splines with variable mesh size are considered. The error, the difference between the input signal and its estimate, is divided into two sources: the R-error, which depends only on the noise and increases with decreasing mesh size, and the F-error, which depends only on the signal and decreases with decreasing mesh size. The estimation of both errors as a function of time is demonstrated. The R-error estimation requires knowledge of the statistics of the noise and uses well-known methods. The primary contribution of the paper is a method for estimating the F-error that requires no prior knowledge of the signal except that it has four derivatives. It is calculated from the difference between two different spline fits to the data and is illustrated with Monte Carlo simulations and with an example.
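
    The two-fits idea can be imitated with off-the-shelf least-squares splines: since the signal-dependent error of a cubic spline fit scales roughly like h^4, the difference between fits at two mesh sizes is dominated by the coarser fit's F-error. The sketch below (scipy's LSQUnivariateSpline on synthetic data) ignores the noise-driven R-error contribution to the difference, so it is only a rough illustration of the paper's approach.

    ```python
    import numpy as np
    from scipy.interpolate import LSQUnivariateSpline

    rng = np.random.default_rng(5)
    t = np.linspace(0, 1, 400)
    signal = np.sin(2 * np.pi * t) + 0.5 * np.sin(6 * np.pi * t)
    data = signal + rng.normal(0, 0.1, t.size)

    def fit(n_interior):
        """Least-squares cubic spline with n_interior equally spaced interior knots."""
        knots = np.linspace(0, 1, n_interior + 2)[1:-1]
        return LSQUnivariateSpline(t, data, knots, k=3)(t)

    coarse, fine = fit(8), fit(16)

    # For cubic splines the F-error scales like h^4, so halving h cuts it ~16x and
    # the difference of the two fits is mostly the coarse fit's F-error:
    #   coarse - fine ~ F_coarse - F_fine ~ (1 - 1/16) * F_coarse
    f_error_coarse = (coarse - fine) / (1 - 1 / 16)

    print("max |estimated F-error| on coarse mesh:", np.abs(f_error_coarse).max())
    print("max |true error| of coarse fit        :", np.abs(coarse - signal).max())
    ```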

  20. Maximum Likelihood Approach for RFID Tag Set Cardinality Estimation with Detection Errors

    DEFF Research Database (Denmark)

    Nguyen, Chuyen T.; Hayashi, Kazunori; Kaneko, Megumi

    2013-01-01

    Estimation schemes for Radio Frequency IDentification (RFID) tag set cardinality are studied in this paper using a Maximum Likelihood (ML) approach. We consider the estimation problem under the model of multiple independent reader sessions with detection errors due to unreliable radio … The estimation performance is evaluated under different system parameters and compared with that of the conventional method via computer simulations assuming flat Rayleigh fading environments and a framed-slotted ALOHA based protocol. Keywords: RFID, tag cardinality estimation, maximum likelihood, detection error.

  1. Automatic Estimation of Verified Floating-Point Round-Off Errors via Static Analysis

    Science.gov (United States)

    Moscato, Mariano; Titolo, Laura; Dutle, Aaron; Munoz, Cesar A.

    2017-01-01

    This paper introduces a static analysis technique for computing formally verified round-off error bounds of floating-point functional expressions. The technique is based on a denotational semantics that computes a symbolic estimation of floating-point round-off errors along with a proof certificate that ensures its correctness. The symbolic estimation can be evaluated on concrete inputs using rigorous enclosure methods to produce formally verified numerical error bounds. The proposed technique is implemented in the prototype research tool PRECiSA (Program Round-off Error Certifier via Static Analysis) and used in the verification of floating-point programs of interest to NASA.

  2. Measurement Error in Income and Schooling and the Bias of Linear Estimators

    DEFF Research Database (Denmark)

    Bingley, Paul; Martinello, Alessandro

    2017-01-01

    We propose a general framework for determining the extent of measurement error bias in ordinary least squares and instrumental variable (IV) estimators of linear models while allowing for measurement error in the validation source. We apply this method by validating Survey of Health, Ageing and Retirement in Europe data with Danish administrative registers. Contrary to most validation studies, we find that measurement error in income is classical once we account for imperfect validation data. We find nonclassical measurement error in schooling, causing a 38% amplification bias in IV estimators.

  3. Computable error estimates for Monte Carlo finite element approximation of elliptic PDE with lognormal diffusion coefficients

    KAUST Repository

    Hall, Eric

    2016-01-09

    The Monte Carlo (and Multi-level Monte Carlo) finite element method can be used to approximate observables of solutions to diffusion equations with lognormal distributed diffusion coefficients, e.g. modeling ground water flow. Typical models use lognormal diffusion coefficients with Hölder regularity of order up to 1/2 a.s. This low regularity implies that the high frequency finite element approximation error (i.e. the error from frequencies larger than the mesh frequency) is not negligible and can be larger than the computable low frequency error. We address how the total error can be estimated by the computable error.

  4. Computational error estimates for Monte Carlo finite element approximation with log normal diffusion coefficients

    KAUST Repository

    Sandberg, Mattias

    2015-01-07

    The Monte Carlo (and Multi-level Monte Carlo) finite element method can be used to approximate observables of solutions to diffusion equations with log normal distributed diffusion coefficients, e.g. modelling ground water flow. Typical models use log normal diffusion coefficients with Hölder regularity of order up to 1/2 a.s. This low regularity implies that the high frequency finite element approximation error (i.e. the error from frequencies larger than the mesh frequency) is not negligible and can be larger than the computable low frequency error. This talk will address how the total error can be estimated by the computable error.

  5. Improved estimates of coordinate error for molecular replacement

    International Nuclear Information System (INIS)

    Oeffner, Robert D.; Bunkóczi, Gábor; McCoy, Airlie J.; Read, Randy J.

    2013-01-01

    A function for estimating the effective root-mean-square deviation in coordinates between two proteins has been developed that depends on both the sequence identity and the size of the protein, and is optimized for use with molecular replacement in Phaser. A top peak translation-function Z-score of over 8 is found to be a reliable metric of when molecular replacement has succeeded. The estimate of the root-mean-square deviation (r.m.s.d.) in coordinates between the model and the target is an essential parameter for calibrating likelihood functions for molecular replacement (MR). Good estimates of the r.m.s.d. lead to good estimates of the variance term in the likelihood functions, which increases signal to noise and hence success rates in the MR search. Phaser has hitherto used an estimate of the r.m.s.d. that only depends on the sequence identity between the model and target and which was not optimized for the MR likelihood functions. Variance-refinement functionality was added to Phaser to enable determination of the effective r.m.s.d. that optimized the log-likelihood gain (LLG) for a correct MR solution. Variance refinement was subsequently performed on a database of over 21 000 MR problems that sampled a range of sequence identities, protein sizes and protein fold classes. Success was monitored using the translation-function Z-score (TFZ), where a TFZ of 8 or over for the top peak was found to be a reliable indicator that MR had succeeded for these cases with one molecule in the asymmetric unit. Good estimates of the r.m.s.d. are correlated with the sequence identity and the protein size. A new estimate of the r.m.s.d. that uses these two parameters in a function optimized to fit the mean of the refined variance is implemented in Phaser and improves MR outcomes. Perturbing the initial estimate of the r.m.s.d. from the mean of the distribution in steps of standard deviations of the distribution further increases MR success rates.

  6. On Kolmogorov asymptotics of estimators of the misclassification error rate in linear discriminant analysis

    KAUST Repository

    Zollanvari, Amin

    2013-05-24

    We provide a fundamental theorem that can be used in conjunction with Kolmogorov asymptotic conditions to derive the first moments of well-known estimators of the actual error rate in linear discriminant analysis of a multivariate Gaussian model under the assumption of a common known covariance matrix. The estimators studied in this paper are plug-in and smoothed resubstitution error estimators, both of which have not been studied before under Kolmogorov asymptotic conditions. As a result of this work, we present an optimal smoothing parameter that makes the smoothed resubstitution an unbiased estimator of the true error. For the sake of completeness, we further show how to utilize the presented fundamental theorem to achieve several previously reported results, namely the first moment of the resubstitution estimator and the actual error rate. We provide numerical examples to show the accuracy of the succeeding finite sample approximations in situations where the number of dimensions is comparable or even larger than the sample size.

  7. Demonstration Integrated Knowledge-Based System for Estimating Human Error Probabilities

    Energy Technology Data Exchange (ETDEWEB)

    Auflick, Jack L.

    1999-04-21

    Human Reliability Analysis (HRA) currently comprises at least 40 different methods that are used to analyze, predict, and evaluate human performance in probabilistic terms. Systematic HRAs allow analysts to examine human-machine relationships, identify error-likely situations, and provide estimates of relative frequencies for human errors on critical tasks, highlighting the most beneficial areas for system improvements. Unfortunately, each HRA method has a different philosophical approach, thereby producing estimates of human error probabilities (HEPs) that are a better or worse match to the error-likely situation of interest. Poor selection of methodology, or the improper application of techniques, can produce invalid HEP estimates, and such erroneous estimation of potential human failure could have severe consequences in terms of the estimated occurrence of injury, death, and/or property damage.

  8. On Kolmogorov Asymptotics of Estimators of the Misclassification Error Rate in Linear Discriminant Analysis.

    Science.gov (United States)

    Zollanvari, Amin; Genton, Marc G

    2013-08-01

    We provide a fundamental theorem that can be used in conjunction with Kolmogorov asymptotic conditions to derive the first moments of well-known estimators of the actual error rate in linear discriminant analysis of a multivariate Gaussian model under the assumption of a common known covariance matrix. The estimators studied in this paper are plug-in and smoothed resubstitution error estimators, both of which have not been studied before under Kolmogorov asymptotic conditions. As a result of this work, we present an optimal smoothing parameter that makes the smoothed resubstitution an unbiased estimator of the true error. For the sake of completeness, we further show how to utilize the presented fundamental theorem to achieve several previously reported results, namely the first moment of the resubstitution estimator and the actual error rate. We provide numerical examples to show the accuracy of the succeeding finite sample approximations in situations where the number of dimensions is comparable or even larger than the sample size.

  9. Estimation of the human error probabilities in the human reliability analysis

    International Nuclear Information System (INIS)

    Liu Haibin; He Xuhong; Tong Jiejuan; Shen Shifei

    2006-01-01

    Human error data are an important issue in human reliability analysis (HRA). Bayesian parameter estimation, which can combine multiple sources of information, such as the historical data of nuclear power plants (NPPs) and expert judgment, can modify the human error data so that they reflect the real situation of an NPP more faithfully. Using a numerical computation program developed by the authors, this paper presents some typical examples to illustrate the process of Bayesian parameter estimation in HRA and discusses the effect of different modification data on the estimation. (authors)
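
    For a single task, the conjugate Beta-binomial form of such a Bayesian update is easy to sketch: an expert-judgment prior on the HEP is combined with observed plant experience to give a posterior. All numbers below are invented for illustration.

    ```python
    from scipy import stats

    # Expert judgment encoded as a Beta prior on the human error probability (HEP).
    # Say experts put the HEP near 1e-2 with substantial uncertainty:
    a0, b0 = 1.5, 150.0                     # prior mean = a0/(a0+b0) ~ 0.0099

    # Plant operating experience: 2 errors observed in 600 task opportunities.
    errors, trials = 2, 600

    # Beta prior + binomial likelihood -> Beta posterior (conjugate update)
    a1, b1 = a0 + errors, b0 + (trials - errors)

    post = stats.beta(a1, b1)
    print(f"posterior mean HEP   : {post.mean():.4f}")
    print(f"90% credible interval: ({post.ppf(0.05):.4f}, {post.ppf(0.95):.4f})")
    ```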

  10. Measurement error in income and schooling, and the bias for linear estimators

    DEFF Research Database (Denmark)

    Bingley, Paul; Martinello, Alessandro

    The characteristics of measurement error determine the bias of linear estimators. We propose a method for validating economic survey data allowing for measurement error in the validation source, and we apply this method by validating Survey of Health, Ageing and Retirement in Europe (SHARE) data with Danish administrative registers. We find that measurement error in surveys is classical for annual gross income but non-classical for years of schooling, causing a 21% amplification bias in IV estimators of returns to schooling. Using a 1958 Danish schooling reform, we contextualize our result with an estimate of the income returns to schooling.

  11. Measurement error in income and schooling, and the bias of linear estimators

    DEFF Research Database (Denmark)

    Bingley, Paul; Martinello, Alessandro

    The characteristics of measurement error determine the bias of linear estimators. We propose a method for validating economic survey data allowing for measurement error in the validation source, and we apply this method by validating Survey of Health, Ageing and Retirement in Europe (SHARE) data with Danish administrative registers. We find that measurement error in surveys is classical for annual gross income but non-classical for years of schooling, causing a 21% amplification bias in IV estimators of returns to schooling. Using a 1958 Danish schooling reform, we contextualize our result with an estimate of the income returns to schooling.

  12. A Design-Adaptive Local Polynomial Estimator for the Errors-in-Variables Problem

    KAUST Repository

    Delaigle, Aurore

    2009-03-01

    Local polynomial estimators are popular techniques for nonparametric regression estimation and have received great attention in the literature. Their simplest version, the local constant estimator, can be easily extended to the errors-in-variables context by exploiting its similarity with the deconvolution kernel density estimator. The generalization of the higher-order versions of the estimator, however, is not straightforward and has remained an open problem for the last 15 years. We propose an innovative local polynomial estimator of any order in the errors-in-variables context, derive its design-adaptive asymptotic properties and study its finite sample performance on simulated examples. We not only provide a solution to a long-standing open problem, but also make methodological contributions to errors-in-variables regression, including local polynomial estimation of derivative functions.

  13. Effect of Numerical Error on Gravity Field Estimation for GRACE and Future Gravity Missions

    Science.gov (United States)

    McCullough, Christopher; Bettadpur, Srinivas

    2015-04-01

    In recent decades, gravity field determination from low Earth orbiting satellites, such as the Gravity Recovery and Climate Experiment (GRACE), has become increasingly effective due to the incorporation of high accuracy measurement devices. Since instrumentation quality will only increase in the near future and the gravity field determination process is computationally and numerically intensive, numerical error from the use of double precision arithmetic will eventually become a prominent error source. While using double-extended or quadruple precision arithmetic would reduce these errors, the numerical limitations of current orbit determination algorithms and processes must be accurately identified and quantified in order to adequately inform the science data processing techniques of future gravity missions. The most obvious numerical limitation in the orbit determination process appears in the comparison of measured observables with computed values derived from mathematical models relating the satellites' numerically integrated state to the observable. Significant error in the computed trajectory will corrupt this comparison and induce error in the least squares solution of the gravitational field. In addition, errors in the numerically computed trajectory propagate into the evaluation of the mathematical measurement model's partial derivatives. These errors combine in the least squares mapping matrix with numerical error from the computation of the state transition matrix, which is computed using the variational equations of motion. Finally, the solution of the linearized least squares system, computed using a QR factorization, is also susceptible to numerical error. Certain interesting combinations of these numerical errors are examined in the framework of GRACE gravity field determination to analyze and quantify their effects on gravity field recovery.

  14. On systematic and statistic errors in radionuclide mass activity estimation procedure

    International Nuclear Information System (INIS)

    Smelcerovic, M.; Djuric, G.; Popovic, D.

    1989-01-01

    One of the most important requirements during nuclear accidents is the fast estimation of the mass activity of radionuclides that suddenly, and without control, reach the environment. The paper points to systematic errors in the procedures of sampling, sample preparation and the measurement itself, which contribute to a high degree to the total mass activity evaluation error. Statistical errors in gamma spectrometry, as well as in total mass alpha and beta activity evaluation, are also discussed. Besides, some possible sources of errors in the partial mass activity evaluation for some of the radionuclides are presented. The contribution of these errors to the total mass activity evaluation error is estimated, and procedures that could possibly reduce it are discussed. (author)

  15. Minimum Mean-Square Error Single-Channel Signal Estimation

    DEFF Research Database (Denmark)

    Beierholm, Thomas

    2008-01-01

    Hearing-impaired persons in some noisy situations need a higher signal-to-noise ratio for speech to be intelligible than normal-hearing persons. In this thesis two different methods of approaching the MMSE signal estimation problem are examined. The methods differ in the way that models for the signal and noise are expressed and in the way the estimator is approximated. The starting point of the first method is prior probability density functions for both signal and noise, and it is assumed that their Laplace transforms (moment generating functions) are available. The corresponding posterior mean integral that defines … In the second method, inference is performed by particle filtering. The speech model is a time-varying auto-regressive model reparameterized by formant frequencies and bandwidths. The noise is assumed non-stationary and white. Compared to the case of using the AR coefficients directly, it is found very beneficial to perform …

  16. An Enhanced Non-Coherent Pre-Filter Design for Tracking Error Estimation in GNSS Receivers.

    Science.gov (United States)

    Luo, Zhibin; Ding, Jicheng; Zhao, Lin; Wu, Mouyan

    2017-11-18

    Tracking error estimation is of great importance in global navigation satellite system (GNSS) receivers. Any inaccurate estimation of tracking error will decrease the signal tracking ability of the tracking loops and the accuracy of position fixing, velocity determination, and timing. Tracking error estimation can be done by a traditional discriminator or by a Kalman filter-based pre-filter. Pre-filters can be divided into two categories: coherent and non-coherent. This paper focuses on performance improvements of the non-coherent pre-filter. Firstly, the signal characteristics of coherent and non-coherent integration, which are the basis of tracking error estimation, are analyzed in detail. After that, the probability distribution of the estimation noise of the four-quadrant arctangent (ATAN2) discriminator is derived according to the mathematical model of coherent integration. Secondly, the statistical property of the observation noise of the non-coherent pre-filter is studied through Monte Carlo simulation in order to set the observation noise variance matrix correctly. Thirdly, a simple fault detection and exclusion (FDE) structure is introduced into the non-coherent pre-filter design, and thus its effective working range for carrier phase error estimation extends from (-0.25 cycle, 0.25 cycle) to (-0.5 cycle, 0.5 cycle). Finally, the estimation accuracies of the discriminator, the coherent pre-filter, and the enhanced non-coherent pre-filter are evaluated comprehensively through a carefully designed experiment scenario. The pre-filter outperforms the traditional discriminator in estimation accuracy. In a highly dynamic scenario, the enhanced non-coherent pre-filter provides accuracy improvements of 41.6%, 46.4%, and 50.36% for carrier phase error, carrier frequency error, and code phase error estimation, respectively, when compared with the coherent pre-filter. The enhanced non-coherent pre-filter outperforms the coherent pre-filter in code phase error estimation when the carrier-to-noise density ratio …
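
    The noise behavior of the ATAN2 discriminator mentioned above can be probed with a small Monte Carlo experiment on the prompt I/Q correlator outputs. The sketch below assumes a standard post-correlation signal model (amplitude set by C/N0 and the coherent integration time, unit-variance noise per component) and is not the paper's derivation.

    ```python
    import numpy as np

    rng = np.random.default_rng(6)

    def atan2_phase_noise(cn0_dbhz, t_coh=0.001, n_trials=200_000):
        """Monte Carlo std of the ATAN2 phase discriminator output (true error = 0)."""
        snr = 10 ** (cn0_dbhz / 10) * t_coh          # post-correlation SNR = (C/N0) * T
        amp = np.sqrt(2 * snr)                       # I/Q amplitude for unit noise variance
        i = amp + rng.normal(0, 1, n_trials)         # prompt in-phase correlator
        q = rng.normal(0, 1, n_trials)               # prompt quadrature correlator
        phase_err = np.arctan2(q, i) / (2 * np.pi)   # phase estimate in cycles
        return phase_err.std()

    for cn0 in (45, 35, 25):
        print(f"C/N0 = {cn0} dB-Hz -> phase error std ~ {atan2_phase_noise(cn0):.4f} cycles")
    ```

    As expected, the discriminator noise grows sharply as the carrier-to-noise density ratio drops, which is why the pre-filter's advantage matters most for weak signals.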

  17. Adaptive finite element techniques for the Maxwell equations using implicit a posteriori error estimates

    NARCIS (Netherlands)

    Harutyunyan, D.; Izsak, F.; van der Vegt, Jacobus J.W.; Bochev, Mikhail A.

    For the adaptive solution of the Maxwell equations on three-dimensional domains with Nédélec edge finite element methods, we consider an implicit a posteriori error estimation technique. On each element of the tessellation an equation for the error is formulated and solved with a properly chosen …

  18. Estimation of Error Components in Cohort Studies: A Cross-Cohort Analysis of Dutch Mathematics Achievement

    Science.gov (United States)

    Keuning, Jos; Hemker, Bas

    2014-01-01

    The data collection of a cohort study requires making many decisions. Each decision may introduce error in the statistical analyses conducted later on. In the present study, a procedure was developed for estimation of the error made due to the composition of the sample, the item selection procedure, and the test equating process. The math results…

  19. Computable Error Estimates for Finite Element Approximations of Elliptic Partial Differential Equations with Rough Stochastic Data

    KAUST Repository

    Hall, Eric Joseph

    2016-12-08

    We derive computable error estimates for finite element approximations of linear elliptic partial differential equations with rough stochastic coefficients. In this setting, the exact solutions contain high frequency content that standard a posteriori error estimates fail to capture. We propose goal-oriented estimates, based on local error indicators, for the pathwise Galerkin and expected quadrature errors committed in standard, continuous, piecewise linear finite element approximations. Derived using easily validated assumptions, these novel estimates can be computed at a relatively low cost and have applications to subsurface flow problems in geophysics where the conductivities are assumed to have lognormal distributions with low regularity. Our theory is supported by numerical experiments on test problems in one and two dimensions.

  20. Type I Error Rates and Power Estimates of Selected Parametric and Nonparametric Tests of Scale.

    Science.gov (United States)

    Olejnik, Stephen F.; Algina, James

    1987-01-01

    Estimated Type I Error rates and power are reported for the Brown-Forsythe, O'Brien, Klotz, and Siegel-Tukey procedures. The effect of aligning the data using deviations from group means or group medians is investigated. (RB)

  1. On the a priori estimation of collocation error covariance functions: a feasibility study

    DEFF Research Database (Denmark)

    Arabelos, D.N.; Forsberg, René; Tscherning, C.C.

    2007-01-01

    Error covariance estimates are necessary information for the combination of solutions resulting from different kinds of data or methods, or for the assimilation of new results in already existing solutions. Such a combination or assimilation process demands proper weighting of the data, in order for the combination to be optimal and the error estimates of the results realistic. One flexible method for the gravity field approximation is least-squares collocation, leading to optimal solutions for the predicted quantities and their error covariance estimates. The drawback of this method is related to the current ability of computers in handling very large systems of linear equations produced by an equally large amount of available input data. This problem becomes more serious when error covariance estimates have to be simultaneously computed. Using numerical experiments aiming at revealing dependencies between error covariance estimates and given features of the input data, we investigate the possibility of a straightforward estimation of error covariance functions exploiting known characteristics of the observations.

  2. Error estimation in the neural network solution of ordinary differential equations.

    Science.gov (United States)

    Filici, Cristian

    2010-06-01

    In this article a method of error estimation for the neural approximation of the solution of an Ordinary Differential Equation is presented. Some examples of the application of the method support the theory presented. Copyright 2010. Published by Elsevier Ltd.

  3. A New Formulation of the Filter-Error Method for Aerodynamic Parameter Estimation in Turbulence

    Science.gov (United States)

    Grauer, Jared A.; Morelli, Eugene A.

    2015-01-01

    A new formulation of the filter-error method for estimating aerodynamic parameters in nonlinear aircraft dynamic models during turbulence was developed and demonstrated. The approach uses an estimate of the measurement noise covariance to identify the model parameters, their uncertainties, and the process noise covariance, in a relaxation method analogous to the output-error method. Prior information on the model parameters and uncertainties can be supplied, and a post-estimation correction to the uncertainty was included to account for colored residuals not considered in the theory. No tuning parameters needing adjustment by the analyst are used in the estimation. The method was demonstrated in simulation using the NASA Generic Transport Model, then applied to flight data from the subscale T-2 jet-engine transport aircraft. Modeling results at different levels of turbulence were compared with results from time-domain output-error and frequency-domain equation-error methods to demonstrate the effectiveness of the approach.

  4. A Prediction Error and Stepwise Regression Estimation Algorithm for Nonlinear Systems

    OpenAIRE

    Billings, S.A.; Voon, W.S.F.

    1985-01-01

    The identification of nonlinear systems based on a NARMAX (Nonlinear Autoregressive Moving Average model with exogenous inputs) model representation is considered, and a combined stepwise regression/prediction error estimation algorithm is derived.

  5. Estimating SEE Error Rates for Complex SoCs With ASERT

    Science.gov (United States)

    Cabanas-Holmen, Manuel; Cannon, Ethan H.; Amort, Tony; Ballast, Jon; Brees, Roger

    2015-08-01

    This paper describes the ASIC Single Event Effects (SEE) Error Rate Tool (ASERT) methodology to estimate the error rates of complex System-on-Chip (SoC) devices. ASERT consists of a top-down analysis to divide the SoC into sensitive cell groups. The SEE error rate is estimated with a bottom-up calculation summing the contribution of all sensitive cell groups, including derating and utilization factors to account for the probability that a cell-level error has a SoC-level impact. The sensitive cell SEE rates are evaluated using test data from specially designed test structures. Standard rate estimation tools are augmented with novel rate estimation approaches for direct proton upsets and for spatial redundancy.
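
    The bottom-up summation over sensitive cell groups can be written out directly: each group contributes cells × per-bit rate × derating × utilization, and the SoC-level rate is the sum. The numbers in the sketch below are entirely made up for the example; only the roll-up structure follows the methodology described above.

    ```python
    # Illustrative bottom-up SEE rate roll-up in the spirit of ASERT: the device is
    # divided into sensitive cell groups, and each group's upset rate is derated by
    # the probability that a cell-level error matters at the SoC level.
    cell_groups = [
        # (name, cells, upsets per bit-day, derating, utilization)
        ("flip-flops",  2.0e6, 1.0e-10, 0.10, 0.80),
        ("SRAM (ECC)",  8.0e6, 3.0e-10, 0.01, 0.95),
        ("config regs", 5.0e4, 1.0e-10, 0.50, 1.00),
    ]

    total = 0.0
    for name, n_cells, rate, derate, util in cell_groups:
        contribution = n_cells * rate * derate * util
        total += contribution
        print(f"{name:12s}: {contribution:.3e} errors/day")

    print(f"{'total':12s}: {total:.3e} SoC-level errors/day")
    ```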

  6. Systematic error mitigation in multi-GNSS positioning based on semiparametric estimation

    Science.gov (United States)

    Yu, Wenkun; Ding, Xiaoli; Dai, Wujiao; Chen, Wu

    2017-12-01

    Joint use of observations from multiple global navigation satellite systems (GNSS) is advantageous in high-accuracy positioning. However, systematic errors in the observations can significantly impact on the positioning accuracy if such errors cannot be properly mitigated. The errors can distort least squares estimations and also affect the results of variance component estimation that is frequently used to determine the stochastic model when observations from multiple GNSS are used. We present an approach that is based on the concept of semiparametric estimation for mitigating the effects of the systematic errors. Experimental results based on both simulated and real GNSS datasets show that the approach is effective, especially when applied before carrying out variance component estimation.

  7. Solution-verified reliability analysis and design of bistable MEMS using error estimation and adaptivity.

    Energy Technology Data Exchange (ETDEWEB)

    Eldred, Michael Scott; Subia, Samuel Ramirez; Neckels, David; Hopkins, Matthew Morgan; Notz, Patrick K.; Adams, Brian M.; Carnes, Brian; Wittwer, Jonathan W.; Bichon, Barron J.; Copps, Kevin D.

    2006-10-01

    This report documents the results for an FY06 ASC Algorithms Level 2 milestone combining error estimation and adaptivity, uncertainty quantification, and probabilistic design capabilities applied to the analysis and design of bistable MEMS. Through the use of error estimation and adaptive mesh refinement, solution verification can be performed in an automated and parameter-adaptive manner. The resulting uncertainty analysis and probabilistic design studies are shown to be more accurate, efficient, reliable, and convenient.

  8. Estimation of Mechanical Signals in Induction Motors using the Recursive Prediction Error Method

    DEFF Research Database (Denmark)

    Børsting, H.; Knudsen, Morten; Rasmussen, Henrik

    1993-01-01

    Sensor feedback of mechanical quantities for control applications in induction motors is troublesome and relatively expensive. In this paper a recursive prediction error (RPE) method has successfully been used to estimate the angular rotor speed …

  9. Joint nonparametric correction estimator for excess relative risk regression in survival analysis with exposure measurement error.

    Science.gov (United States)

    Wang, Ching-Yun; Cullings, Harry; Song, Xiao; Kopecky, Kenneth J

    2017-11-01

    Observational epidemiological studies often confront the problem of estimating exposure-disease relationships when the exposure is not measured exactly. In the paper, we investigate exposure measurement error in excess relative risk regression, which is a widely used model in radiation exposure effect research. In the study cohort, a surrogate variable is available for the true unobserved exposure variable. The surrogate variable satisfies a generalized version of the classical additive measurement error model, but it may or may not have repeated measurements. In addition, an instrumental variable is available for individuals in a subset of the whole cohort. We develop a nonparametric correction (NPC) estimator using data from the subcohort, and further propose a joint nonparametric correction (JNPC) estimator using all observed data to adjust for exposure measurement error. An optimal linear combination estimator of JNPC and NPC is further developed. The proposed estimators are nonparametric, which are consistent without imposing a covariate or error distribution, and are robust to heteroscedastic errors. Finite sample performance is examined via a simulation study. We apply the developed methods to data from the Radiation Effects Research Foundation, in which chromosome aberration is used to adjust for the effects of radiation dose measurement error on the estimation of radiation dose responses.

  10. Identification and estimation of nonlinear models using two samples with nonclassical measurement errors

    KAUST Repository

    Carroll, Raymond J.

    2010-05-01

    This paper considers identification and estimation of a general nonlinear Errors-in-Variables (EIV) model using two samples. Both samples consist of a dependent variable, some error-free covariates, and an error-prone covariate, for which the measurement error has unknown distribution and could be arbitrarily correlated with the latent true values; and neither sample contains an accurate measurement of the corresponding true variable. We assume that the regression model of interest - the conditional distribution of the dependent variable given the latent true covariate and the error-free covariates - is the same in both samples, but the distributions of the latent true covariates vary with observed error-free discrete covariates. We first show that the general latent nonlinear model is nonparametrically identified using the two samples when both could have nonclassical errors, without either instrumental variables or independence between the two samples. When the two samples are independent and the nonlinear regression model is parameterized, we propose sieve Quasi Maximum Likelihood Estimation (Q-MLE) for the parameter of interest, and establish its root-n consistency and asymptotic normality under possible misspecification, and its semiparametric efficiency under correct specification, with easily estimated standard errors. A Monte Carlo simulation and a data application are presented to show the power of the approach.

  11. Data driven estimation of imputation error-a strategy for imputation with a reject option

    DEFF Research Database (Denmark)

    Bak, Nikolaj; Hansen, Lars Kai

    2016-01-01

    …indiscriminately. We note that the effects of imputation can be strongly dependent on what is missing. To help make decisions about which records should be imputed, we propose to use a machine learning approach to estimate the imputation error for each case with missing data. The method is thought to be a practical approach to help users apply imputation after the informed choice to impute the missing data has been made. To do this, all patterns of missing values are simulated in all complete cases, enabling calculation of the "true error" in each of these new cases. The error is then estimated for each case with missing values by weighting the "true errors" by similarity. The method can also be used to test the performance of different imputation methods. A universal numerical threshold of acceptable error cannot be set since this will differ according to the data, research question, and analysis method…

  12. How Well Can We Estimate Error Variance of Satellite Precipitation Data Around the World?

    Science.gov (United States)

    Gebregiorgis, A. S.; Hossain, F.

    2014-12-01

    The traditional approach to measuring precipitation by placing a probe on the ground will likely never be adequate or affordable in most parts of the world. Fortunately, satellites today provide a continuous global bird's-eye view (above ground) at any given location. However, the usefulness of such precipitation products for hydrological applications depends on their error characteristics. Thus, providing error information associated with existing satellite precipitation estimates is crucial to advancing applications in hydrologic modeling. In this study, we present a method of estimating satellite precipitation error variance using a regression model for three satellite precipitation products (3B42RT, CMORPH, and PERSIANN-CCS), based on easily available geophysical features and the satellite precipitation rate. The goal of this work is to explore how well the method works around the world in diverse geophysical settings. Topography, climate, and season are considered as the governing factors used to segregate the satellite precipitation uncertainty, and a nonlinear regression equation is fitted as a function of the satellite precipitation rate. The error variance models were tested over the USA, Asia, the Middle East, and the Mediterranean region. A rain-gauge-based precipitation product was used to validate the error variance of the satellite precipitation products. Our study attests that transferability of model estimators (which help to estimate the error variance) from one region to another is practically possible by leveraging the similarity in geophysical features. Therefore, a quantitative picture of satellite precipitation error over ungauged regions can be discerned even in the absence of ground truth data.
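
    The core of such a method is a nonlinear least-squares fit of error variance against satellite rain rate within each topography/climate/season class. Below is a minimal sketch of that single step; the power-law functional form, the synthetic data, and the use of SciPy's curve_fit are illustrative assumptions, not the paper's exact model.

```python
import numpy as np
from scipy.optimize import curve_fit

def error_variance_model(rate, a, b):
    # Hypothetical power-law form: error variance grows with rain rate
    return a * rate ** b

rng = np.random.default_rng(0)
rate = np.linspace(0.5, 50.0, 200)                   # mm/h, synthetic
true_var = 0.8 * rate ** 1.3                         # synthetic "truth"
observed_var = true_var * rng.lognormal(0.0, 0.2, rate.size)

# One fit per topography/climate/season class; a single class shown here
params, _ = curve_fit(error_variance_model, rate, observed_var, p0=(1.0, 1.0))
print("fitted a, b:", params)
```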

  13. Missing texture reconstruction method based on error reduction algorithm using Fourier transform magnitude estimation scheme.

    Science.gov (United States)

    Ogawa, Takahiro; Haseyama, Miki

    2013-03-01

    A missing texture reconstruction method based on an error reduction (ER) algorithm, including a novel estimation scheme for Fourier transform magnitudes, is presented in this brief. In our method, the Fourier transform magnitude is estimated for a target patch including missing areas, and the missing intensities are estimated by retrieving its phase based on the ER algorithm. Specifically, by monitoring errors converged in the ER algorithm, known patches whose Fourier transform magnitudes are similar to that of the target patch are selected from the target image. The Fourier transform magnitude of the target patch is then estimated from those of the selected known patches and their corresponding errors. Consequently, by using the ER algorithm, we can estimate both the Fourier transform magnitudes and phases to reconstruct the missing areas.
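
    For readers unfamiliar with the underlying ER iteration, the sketch below shows the generic alternating projection it builds on: impose a (here, given) Fourier magnitude in the frequency domain, then re-impose the known pixels in the image domain. The paper's actual contribution, estimating that magnitude from similar known patches, is not reproduced; the function names and the mean-fill initialization are illustrative.

```python
import numpy as np

def error_reduction(patch, known_mask, magnitude, n_iter=200):
    """Generic ER iteration: alternate between a Fourier-magnitude
    constraint and the known-pixel constraint in the image domain."""
    estimate = np.where(known_mask, patch, patch[known_mask].mean())
    for _ in range(n_iter):
        spectrum = np.fft.fft2(estimate)
        # Keep the current phase, impose the (estimated) magnitude
        spectrum = magnitude * np.exp(1j * np.angle(spectrum))
        estimate = np.real(np.fft.ifft2(spectrum))
        # Missing pixels stay free; known pixels are re-imposed
        estimate = np.where(known_mask, patch, estimate)
    return estimate
```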

  14. Error estimation for goal-oriented spatial adaptivity for the SN equations on triangular meshes

    International Nuclear Information System (INIS)

    Lathouwers, D.

    2011-01-01

    In this paper we investigate different error estimation procedures for use within a goal-oriented adaptive algorithm for the SN equations on unstructured meshes. The method is based on a dual-weighted residual approach, where an appropriate adjoint problem is formulated and solved in order to obtain the importance of residual errors in the forward problem for the specific goal of interest. The forward residuals and the adjoint function are combined to obtain economical finite element meshes tailored to the solution of the target functional, as well as to provide error estimates. Various approximations made to render the calculation of the adjoint angular flux more economically attractive are evaluated by comparing the performance of the resulting adaptive algorithm and the quality of the error estimators when applied to two shielding-type test problems. (author)

  15. Corrected-loss estimation for quantile regression with covariate measurement errors.

    Science.gov (United States)

    Wang, Huixia Judy; Stefanski, Leonard A; Zhu, Zhongyi

    2012-06-01

    We study estimation in quantile regression when covariates are measured with errors. Existing methods require stringent assumptions, such as spherically symmetric joint distribution of the regression and measurement error variables, or linearity of all quantile functions, which restrict model flexibility and complicate computation. In this paper, we develop a new estimation approach based on corrected scores to account for a class of covariate measurement errors in quantile regression. The proposed method is simple to implement. Its validity requires only linearity of the particular quantile function of interest, and it requires no parametric assumptions on the regression error distributions. Finite-sample results demonstrate that the proposed estimators are more efficient than the existing methods in various models considered.

  16. Uncertainty quantification for radiation measurements: Bottom-up error variance estimation using calibration information

    International Nuclear Information System (INIS)

    Burr, T.; Croft, S.; Krieger, T.; Martin, K.; Norman, C.; Walsh, S.

    2016-01-01

    One example of top-down uncertainty quantification (UQ) involves comparing two or more measurements on each of multiple items. One example of bottom-up UQ expresses a measurement result as a function of one or more input variables that have associated errors, such as a measured count rate, which individually (or collectively) can be evaluated for impact on the uncertainty in the resulting measured value. In practice, it is often found that top-down UQ exhibits larger error variances than bottom-up UQ, because some error sources are present in the fielded assay methods used in top-down UQ that are not present (or not recognized) in the assay studies used in bottom-up UQ. One would like better consistency between the two approaches in order to claim understanding of the measurement process. The purpose of this paper is to refine bottom-up uncertainty estimation by using calibration information so that if there are no unknown error sources, the refined bottom-up uncertainty estimate will agree with the top-down uncertainty estimate to within a specified tolerance. Then, in practice, if the top-down uncertainty estimate is larger than the refined bottom-up uncertainty estimate by more than the specified tolerance, there must be omitted sources of error beyond those predicted from calibration uncertainty. The paper develops a refined bottom-up uncertainty approach for four cases of simple linear calibration: (1) inverse regression with negligible error in predictors, (2) inverse regression with non-negligible error in predictors, (3) classical regression followed by inversion with negligible error in predictors, and (4) classical regression followed by inversion with non-negligible errors in predictors. Our illustrations are of general interest, but are drawn from our experience with nuclear material assay by non-destructive assay. The main example we use is gamma spectroscopy that applies the enrichment meter principle. Previous papers that ignore error in predictors

  17. Measurement Error in Nonparametric Item Response Curve Estimation. Research Report. ETS RR-11-28

    Science.gov (United States)

    Guo, Hongwen; Sinharay, Sandip

    2011-01-01

    Nonparametric, or kernel, estimation of item response curve (IRC) is a concern theoretically and operationally. Accuracy of this estimation, often used in item analysis in testing programs, is biased when the observed scores are used as the regressor because the observed scores are contaminated by measurement error. In this study, we investigate…

  18. Improved Margin of Error Estimates for Proportions in Business: An Educational Example

    Science.gov (United States)

    Arzumanyan, George; Halcoussis, Dennis; Phillips, G. Michael

    2015-01-01

    This paper presents the Agresti & Coull "Adjusted Wald" method for computing confidence intervals and margins of error for common proportion estimates. The presented method is easily implementable by business students and practitioners and provides more accurate estimates of proportions particularly in extreme samples and small…

  19. Research on the Method of Noise Error Estimation of Atomic Clocks

    Science.gov (United States)

    Song, H. J.; Dong, S. W.; Li, W.; Zhang, J. H.; Jing, Y. J.

    2017-05-01

    Simulation methods for the different noise types of atomic clocks are given. The frequency flicker noise of an atomic clock is studied using Markov process theory. The method for estimating the maximum interval error of the frequency white noise is studied using Wiener process theory. Based on the operation of 9 cesium atomic clocks in the time and frequency reference laboratory of NTSC (National Time Service Center), the noise coefficients of the power-law spectrum model are estimated, and simulations are carried out according to the noise models. Finally, the maximum interval error estimates of the frequency white noise generated by the 9 cesium atomic clocks have been acquired.
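
    As a rough illustration of the Wiener-process view of frequency white noise, the sketch below simulates one day of white frequency noise, integrates it into a time (phase) error, and evaluates a maximum interval error over hourly windows. The noise level, window length, and the use of non-overlapping windows (the strict definition uses sliding windows) are illustrative assumptions, not the paper's procedure.

```python
import numpy as np

rng = np.random.default_rng(0)
tau = 1.0                  # sampling interval [s]
n = 86_400                 # one day of samples
sigma_y = 1e-13            # assumed white-FM noise level (fractional frequency)

# White frequency noise integrates to a Wiener process in the time error
y = rng.normal(0.0, sigma_y, n)     # fractional frequency deviations
x = np.cumsum(y) * tau              # accumulated time (phase) error [s]

# Maximum interval error: largest peak-to-peak time error in a window
window = 3_600                      # 1 h observation window, arbitrary
mie = max(np.ptp(x[i:i + window]) for i in range(0, n - window + 1, window))
print(f"max interval error over 1 h windows: {mie:.3e} s")
```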

  20. Investigation of systematic errors and estimation of $\\pi K$ atom lifetime

    CERN Document Server

    Yazkov, Valeriy

    2013-01-01

    This note describes details of the analysis of the data sample collected by the DIRAC experiment on a Ni target in 2008-2010 in order to estimate the lifetime of $\pi K$ atoms. The experimental results consist of six distinct data samples: both charge combinations ($\pi^+ K^-$ and $K^+ \pi^-$ atoms) obtained in different experimental conditions corresponding to each year of data-taking. Sources of systematic errors are analyzed, and estimations of the systematic errors are presented. Taking into account both statistical and systematic uncertainties, the lifetime of $\pi K$ atoms is estimated by the maximum likelihood method.

  1. Estimation of total error in DWPF reported radionuclide inventories. Revision 1

    International Nuclear Information System (INIS)

    Edwards, T.B.

    1995-01-01

    The Defense Waste Processing Facility (DWPF) at the Savannah River Site is required to determine and report the radionuclide inventory of its glass product. For each macro-batch, the DWPF will report both the total amount (in curies) of each reportable radionuclide and the average concentration (in curies/gram of glass) of each reportable radionuclide. The DWPF is to provide the estimated error of these reported values of its radionuclide inventory as well. The objective of this document is to provide a framework for determining the estimated error in DWPF's reporting of these radionuclide inventories. This report investigates the impact of random errors due to measurement and sampling on the total amount of each reportable radionuclide in a given macro-batch. In addition, the impact of these measurement and sampling errors and process variation are evaluated to determine the uncertainty in the reported average concentrations of radionuclides in DWPF's filled canister inventory resulting from each macro-batch

  2. Hemoglobin-Dilution Method: Effect of Measurement Errors on Vascular Volume Estimation

    Directory of Open Access Journals (Sweden)

    Matthew B. Wolf

    2017-01-01

    The hemoglobin-dilution method (HDM) has been used to estimate changes in vascular volumes in patients because direct measurements with radioisotopes are time-consuming and not practical in many facilities. The HDM requires an assumption of initial blood volume, repeated measurements of plasma hemoglobin concentration, and the calculation of the ratio of hemoglobin measurements. The statistics of these ratio distributions resulting from measurement error are ill-defined even when the errors are normally distributed. This study uses a "Monte Carlo" approach to determine the distribution of these errors. The finding was that these errors could be closely approximated with a log-normal distribution that can be parameterized by a geometric mean (X) and a dispersion factor (S). When the ratio of successive Hb concentrations is used to estimate blood volume, normally distributed hemoglobin measurement errors tend to produce exponentially higher values of X and S as the SD of the measurement error increases. The longer tail of the distribution to the right could produce much greater overestimations than would be expected from the SD values of the measurement error; however, it was found that averaging duplicate and triplicate hemoglobin measurements on a blood sample greatly improved the accuracy.
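
    A minimal Monte Carlo sketch of the effect described: normally distributed errors in two hemoglobin measurements produce an approximately log-normal ratio, and averaging replicate measurements tightens it. All numerical values below are hypothetical, chosen only for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
hb0, hb1 = 14.0, 12.0      # hypothetical true Hb concentrations [g/dL]
sd = 0.5                   # assumed measurement SD [g/dL]
n = 100_000

for replicates in (1, 2, 3):
    # Average `replicates` measurements per blood sample, then take the ratio
    m0 = hb0 + rng.normal(0.0, sd, (n, replicates)).mean(axis=1)
    m1 = hb1 + rng.normal(0.0, sd, (n, replicates)).mean(axis=1)
    log_ratio = np.log(m0 / m1)
    X = np.exp(log_ratio.mean())        # geometric mean of the ratio
    S = np.exp(log_ratio.std())         # dispersion factor
    print(f"{replicates} replicate(s): X = {X:.4f}, S = {S:.4f}")
```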

  3. An estimate and evaluation of design error effects on nuclear power plant design adequacy

    International Nuclear Information System (INIS)

    Stevenson, J.D.

    1984-01-01

    An area of considerable concern in evaluating Design Control Quality Assurance procedures applied to the design and analysis of nuclear power plants is the level of design error expected or encountered. There is very little published data [1] on the level of error typically found in nuclear power plant design calculations, and even less on the impact such errors would be expected to have on the overall design adequacy of the plant. This paper is concerned with design error associated with civil and mechanical structural design and analysis found in calculations which form part of the Design or Stress reports. These reports are meant to document the design basis and adequacy of the plant. The estimates contained in this paper are based on the personal experience of the author. Table 1 gives a partial listing of the design documentation reviews performed by the author, on which the observations contained in this paper are based. In the preparation of any design calculations, it is a utopian dream to presume such calculations can be made error free. The intent of this paper is to define error levels which might be expected in a competent engineering organization employing currently technically qualified engineers and accepted methods of Design Control. In addition, the effects of these errors on the probability of failure to meet applicable design code requirements are also estimated.

  4. Estimation of heading gyrocompass error using a GPS 3DF system: Impact on ADCP measurements

    Directory of Open Access Journals (Sweden)

    Simón Ruiz

    2002-12-01

    Traditionally the horizontal orientation of a ship (heading) has been obtained from a gyrocompass. This instrument is still used on research vessels but has an estimated error of about 2-3 degrees, inducing a systematic error in the cross-track velocity measured by an Acoustic Doppler Current Profiler (ADCP). The three-dimensional positioning system (GPS 3DF) provides an independent heading measurement with accuracy better than 0.1 degree. The Spanish research vessel BIO Hespérides has been operating with this new system since 1996. For the first time on this vessel, the data from this new instrument are used to estimate the gyrocompass error. The methodology we use follows the scheme developed by Griffiths (1994), which compares data from the gyrocompass and the GPS system in order to obtain an interpolated error function. In the present work we apply this methodology to mesoscale surveys performed during the observational phase of the OMEGA project, in the Alboran Sea. The heading-dependent gyrocompass error dominated. Errors in gyrocompass heading of 1.4-3.4 degrees have been found, which give a maximum error in measured cross-track ADCP velocity of 24 cm s⁻¹.

  5. Global Warming Estimation from MSU: Correction for Drift and Calibration Errors

    Science.gov (United States)

    Prabhakara, C.; Iacovazzi, R., Jr.; Yoo, J.-M.; Einaudi, Franco (Technical Monitor)

    2000-01-01

    Microwave Sounding Unit (MSU) radiometer observations in Ch 2 (53.74 GHz), made in the nadir direction from sequential, sun-synchronous, polar-orbiting NOAA morning satellites (NOAA 6, 10 and 12, with about 7am/7pm orbital geometry) and afternoon satellites (NOAA 7, 9, 11 and 14, with about 2am/2pm orbital geometry), are analyzed in this study to derive the global temperature trend from 1980 to 1998. In order to remove the discontinuities between the data of successive satellites and to get a continuous time series, we first used the shortest possible time record of each satellite. In this way we get a preliminary estimate of the global temperature trend of 0.21 K/decade. However, this estimate is affected by systematic time-dependent errors. One such error is the instrument calibration error, which can be inferred whenever there are overlapping measurements made by two satellites over an extended period of time. From the available successive satellite data we have taken the longest possible time record of each satellite to form the time series during the period 1980 to 1998 and to account for this error; in this way we can decrease the global temperature trend by about 0.07 K/decade. In addition, there are systematic time-dependent errors present in the data that are introduced by the drift in the satellite orbital geometry; these arise from the diurnal cycle in temperature and from the drift-related change in the calibration of the MSU. In order to analyze the nature of these drift-related errors, the multi-satellite Ch 2 data set is partitioned into am and pm subsets to create two independent time series. The error can be assessed in the am and pm data of Ch 2 on land and can be eliminated. Observations made in MSU Ch 1 (50.3 GHz) support this approach. The error is obvious only in the difference between the pm and am observations of Ch 2 over the ocean. We have followed two different paths to assess the impact of the errors on the global temperature trend. In one path the…

  6. Impact of transport model errors on the global and regional methane emissions estimated by inverse modelling

    Science.gov (United States)

    Locatelli, R.; Bousquet, P.; Chevallier, F.; Fortems-Cheney, A.; Szopa, S.; Saunois, M.; Agusti-Panareda, A.; Bergmann, D.; Bian, H.; Cameron-Smith, P.; Chipperfield, M. P.; Gloor, E.; Houweling, S.; Kawa, S. R.; Krol, M.; Patra, P. K.; Prinn, R. G.; Rigby, M.; Saito, R.; Wilson, C.

    2013-10-01

    A modelling experiment has been conceived to assess the impact of transport model errors on methane emissions estimated in an atmospheric inversion system. Synthetic methane observations, obtained from 10 different model outputs from the international TransCom-CH4 model inter-comparison exercise, are combined with a prior scenario of methane emissions and sinks, and integrated into the three-component PYVAR-LMDZ-SACS (PYthon VARiational-Laboratoire de Météorologie Dynamique model with Zooming capability-Simplified Atmospheric Chemistry System) inversion system to produce 10 different methane emission estimates at the global scale for the year 2005. The same methane sinks, emissions and initial conditions have been applied to produce the 10 synthetic observation datasets. The same inversion set-up (statistical errors, prior emissions, inverse procedure) is then applied to derive flux estimates by inverse modelling. Consequently, only differences in the modelling of atmospheric transport may cause differences in the estimated fluxes. In our framework, we show that transport model errors lead to a discrepancy of 27 Tg yr-1 at the global scale, representing 5% of total methane emissions. At continental and annual scales, transport model errors are proportionally larger than at the global scale, with errors ranging from 36 Tg yr-1 in North America to 7 Tg yr-1 in Boreal Eurasia (from 23 to 48%, respectively). At the model grid-scale, the spread of inverse estimates can reach 150% of the prior flux. Therefore, transport model errors contribute significantly to overall uncertainties in emission estimates by inverse modelling, especially when small spatial scales are examined. Sensitivity tests have been carried out to estimate the impact of the measurement network and the advantage of higher horizontal resolution in transport models. The large differences found between methane flux estimates inferred in these different configurations highly question the consistency of

  7. Impact of transport model errors on the global and regional methane emissions estimated by inverse modelling

    Directory of Open Access Journals (Sweden)

    R. Locatelli

    2013-10-01

    A modelling experiment has been conceived to assess the impact of transport model errors on methane emissions estimated in an atmospheric inversion system. Synthetic methane observations, obtained from 10 different model outputs from the international TransCom-CH4 model inter-comparison exercise, are combined with a prior scenario of methane emissions and sinks, and integrated into the three-component PYVAR-LMDZ-SACS (PYthon VARiational-Laboratoire de Météorologie Dynamique model with Zooming capability-Simplified Atmospheric Chemistry System) inversion system to produce 10 different methane emission estimates at the global scale for the year 2005. The same methane sinks, emissions and initial conditions have been applied to produce the 10 synthetic observation datasets. The same inversion set-up (statistical errors, prior emissions, inverse procedure) is then applied to derive flux estimates by inverse modelling. Consequently, only differences in the modelling of atmospheric transport may cause differences in the estimated fluxes. In our framework, we show that transport model errors lead to a discrepancy of 27 Tg yr−1 at the global scale, representing 5% of total methane emissions. At continental and annual scales, transport model errors are proportionally larger than at the global scale, with errors ranging from 36 Tg yr−1 in North America to 7 Tg yr−1 in Boreal Eurasia (from 23 to 48%, respectively). At the model grid-scale, the spread of inverse estimates can reach 150% of the prior flux. Therefore, transport model errors contribute significantly to overall uncertainties in emission estimates by inverse modelling, especially when small spatial scales are examined. Sensitivity tests have been carried out to estimate the impact of the measurement network and the advantage of higher horizontal resolution in transport models. The large differences found between methane flux estimates inferred in these different configurations highly…

  8. Vector velocity volume flow estimation: Sources of error and corrections applied for arteriovenous fistulas

    DEFF Research Database (Denmark)

    Jensen, Jonas; Olesen, Jacob Bjerring; Stuart, Matthias Bo

    2016-01-01

    A method for vector velocity volume flow estimation is presented, along with an investigation of its sources of error and correction of actual volume flow measurements. Volume flow errors are quantified theoretically by numerical modeling, through flow phantom measurements, and studied in vivo. This paper investigates errors from estimating volumetric flow using a commercial ultrasound scanner and the common assumptions made in the literature. The theoretical model shows, e.g., that volume flow is underestimated by 15% when the scan plane is off-axis with the vessel center by 28% of the vessel… According to cross-sectional scans of the fistulas, the major axis was on average 10.2 mm, which is 8.6% larger than the minor axis. The ultrasound beam was on average 1.5 mm from the vessel center, corresponding to 28% of the semi-major axis in an average fistula. Estimating volume flow with an elliptical, rather…

  9. Double-Capon and double-MUSICAL for arrival separation and observable estimation in an acoustic waveguide

    Science.gov (United States)

    Touzé, Grégoire Le; Nicolas, Barbara; Mars, Jérôme I.; Roux, Philippe; Oudompheng, Benoit

    2012-12-01

    Recent developments in shallow water ocean acoustic tomography propose the use of an original configuration composed of two source-receiver vertical arrays and wideband sources. The recording space thus has three dimensions, with two spatial dimensions and the frequency dimension. Using this recording space, it is possible to build a three-dimensional (3D) estimation space that gives access to the three observables associated with the acoustic arrivals: the direction of departure, the direction of arrival, and the time of arrival. The main interest of this 3D estimation space is its capability to separate acoustic arrivals that usually interfere in the recording space due to multipath propagation. A 3D estimator called double beamforming has already been developed, although it has limited resolution. In this study, the new 3D high-resolution estimators double Capon and double MUSICAL are proposed to achieve this task. The ocean acoustic tomography configuration provides only a single recording realization from which to estimate the cross-spectral data matrix that high-resolution estimators require; 3D smoothing techniques are thus proposed to increase the rank of the matrix. The estimators developed are validated on real data recorded in an ultrasonic tank, and their detection performances are compared to existing 2D and 3D methods.

  10. Goal-oriented error estimation for Cahn-Hilliard models of binary phase transition

    KAUST Repository

    van der Zee, Kristoffer G.

    2010-10-27

    A posteriori estimates of errors in quantities of interest are developed for the nonlinear system of evolution equations embodied in the Cahn-Hilliard model of binary phase transition. These involve the analysis of wellposedness of dual backward-in-time problems and the calculation of residuals. Mixed finite element approximations are developed and used to deliver numerical solutions of representative problems in one- and two-dimensional domains. Estimated errors are shown to be quite accurate in these numerical examples. © 2010 Wiley Periodicals, Inc.

  11. A review of some a posteriori error estimates for adaptive finite element methods

    Czech Academy of Sciences Publication Activity Database

    Segeth, Karel

    2010-01-01

    Roč. 80, č. 8 (2010), s. 1589-1600 ISSN 0378-4754. [European Seminar on Coupled Problems. Jetřichovice, 08.06.2008-13.06.2008] R&D Projects: GA AV ČR(CZ) IAA100190803 Institutional research plan: CEZ:AV0Z10190503 Keywords : hp-adaptive finite element method * a posteriori error estimators * computational error estimates Subject RIV: BA - General Mathematics Impact factor: 0.812, year: 2010 http://www.sciencedirect.com/science/article/pii/S0378475408004230

  12. Estimation of Species Identification Error: Implications for Raptor Migration Counts and Trend Estimation

    Science.gov (United States)

    J.M. Hull; A.M. Fish; J.J. Keane; S.R. Mori; B.J Sacks; A.C. Hull

    2010-01-01

    One of the primary assumptions associated with many wildlife and population trend studies is that target species are correctly identified. This assumption may not always be valid, particularly for species similar in appearance to co-occurring species. We examined size overlap and identification error rates among Cooper's (Accipiter cooperii...

  13. Estimation of Separation Buffers for Wind-Prediction Error in an Airborne Separation Assistance System

    Science.gov (United States)

    Consiglio, Maria C.; Hoadley, Sherwood T.; Allen, B. Danette

    2009-01-01

    Wind prediction errors are known to affect the performance of automated air traffic management tools that rely on aircraft trajectory predictions. In particular, automated separation assurance tools, planned as part of the NextGen concept of operations, must be designed to account and compensate for the impact of wind prediction errors and other system uncertainties. In this paper we describe a high fidelity batch simulation study designed to estimate the separation distance required to compensate for the effects of wind-prediction errors under increasing traffic density in an airborne separation assistance system. These experimental runs are part of the Safety Performance of Airborne Separation experiment suite that examines the safety implications of prediction errors and system uncertainties on airborne separation assurance systems. In this experiment, wind-prediction errors were varied between zero and forty knots while traffic density was increased to several times current traffic levels. In order to accurately measure the full unmitigated impact of wind-prediction errors, no uncertainty buffers were added to the separation minima. The goal of the study was to measure the impact of wind-prediction errors in order to estimate the additional separation buffers necessary to preserve separation and to provide a baseline for future analyses. Buffer estimations from this study will be used and verified in upcoming safety evaluation experiments under similar simulation conditions. Results suggest that the strategic airborne separation functions exercised in this experiment can sustain wind-prediction errors of up to 40 kt at current-day air traffic density with no additional separation distance buffer, and at eight times current-day density with no more than a 60% increase in the separation distance buffer.

  14. Does semantic impairment explain surface dyslexia? VLSM evidence for a double dissociation between regularization errors in reading and semantic errors in picture naming

    Directory of Open Access Journals (Sweden)

    Sara Pillay

    2014-04-01

    …adjacent posterior inferior temporal gyrus (blue in Figure 1). In contrast, semantic errors during picture naming (red and pink in Figure 1) and impaired performance on the semantic matching task (yellow and pink in Figure 1) were correlated with more anterior temporal lobe damage and with inferior frontal gyrus involvement. There was substantial overlap between the lesion correlates for the two explicit semantic tasks (pink in Figure 1), but none between these areas and those correlated with regularization errors. This double dissociation is difficult to accommodate in terms of a common impairment underlying semantic deficits and regularization errors. Lesions in relatively anterior temporal regions appear to produce semantic deficits but not regularization errors, whereas more posterior temporal lesions produce regularization errors but not explicit semantic errors. One possibility is that this posterior temporal region stores whole-word representations that do not include semantic information. Alternatively, these representations may include highly abstract and word-specific semantic information useful for computing phonology but not for more complex semantic tasks.

  15. A more realistic estimate of the variances and systematic errors in spherical harmonic geomagnetic field models

    DEFF Research Database (Denmark)

    Lowes, F.J.; Olsen, Nils

    2004-01-01

    Most modern spherical harmonic geomagnetic models based on satellite data include estimates of the variances of the spherical harmonic coefficients of the model; these estimates are based on the geometry of the data and the fitting functions, and on the magnitude of the residuals. However, … led to quite inaccurate variance estimates. We estimate correction factors which range from 1/4 to 20, with the largest increases being for the zonal, m = 0, and sectorial, m = n, terms. With no correction, the OSVM variances give a mean-square vector field error of prediction over the Earth's surface…

  16. Effect of unrepresented model errors on estimated soil hydraulic material properties

    Directory of Open Access Journals (Sweden)

    S. Jaumann

    2017-09-01

    Unrepresented model errors influence the estimation of effective soil hydraulic material properties. As the required model complexity for a consistent description of the measurement data is application dependent and unknown a priori, we implemented a structural error analysis based on the inversion of increasingly complex models. We show that the method can indicate unrepresented model errors and quantify their effects on the resulting material properties. To this end, a complicated 2-D subsurface architecture (ASSESS) was forced with a fluctuating groundwater table while time domain reflectometry (TDR) and hydraulic potential measurement devices monitored the hydraulic state. In this work, we analyze the quantitative effect of unrepresented (i) sensor position uncertainty, (ii) small-scale heterogeneity, and (iii) 2-D flow phenomena on estimated soil hydraulic material properties with a 1-D and a 2-D study. The results of these studies demonstrate three main points: (i) the fewer sensors are available per material, the larger is the effect of unrepresented model errors on the resulting material properties. (ii) The 1-D study yields biased parameters due to unrepresented lateral flow. (iii) Representing and estimating sensor positions as well as small-scale heterogeneity decreased the mean absolute error of the volumetric water content data by more than a factor of 2, to 0.004.

  17. Effect of unrepresented model errors on estimated soil hydraulic material properties

    Science.gov (United States)

    Jaumann, Stefan; Roth, Kurt

    2017-09-01

    Unrepresented model errors influence the estimation of effective soil hydraulic material properties. As the required model complexity for a consistent description of the measurement data is application dependent and unknown a priori, we implemented a structural error analysis based on the inversion of increasingly complex models. We show that the method can indicate unrepresented model errors and quantify their effects on the resulting material properties. To this end, a complicated 2-D subsurface architecture (ASSESS) was forced with a fluctuating groundwater table while time domain reflectometry (TDR) and hydraulic potential measurement devices monitored the hydraulic state. In this work, we analyze the quantitative effect of unrepresented (i) sensor position uncertainty, (ii) small-scale heterogeneity, and (iii) 2-D flow phenomena on estimated soil hydraulic material properties with a 1-D and a 2-D study. The results of these studies demonstrate three main points: (i) the fewer sensors are available per material, the larger is the effect of unrepresented model errors on the resulting material properties. (ii) The 1-D study yields biased parameters due to unrepresented lateral flow. (iii) Representing and estimating sensor positions as well as small-scale heterogeneity decreased the mean absolute error of the volumetric water content data by more than a factor of 2, to 0.004.

  18. Detecting Topological Errors with Pre-Estimation Filtering of Bad Data in Wide-Area Measurements

    DEFF Research Database (Denmark)

    Møller, Jakob Glarbo; Sørensen, Mads; Jóhannsson, Hjörtur

    2017-01-01

    It is expected that bad data and missing topology information will become an issue of growing concern when power system state estimators are to exploit the high measurement reporting rates from phasor measurement units. This paper suggests designing state estimators with enhanced resilience against those issues. The work presented here includes a review of a pre-estimation filter for bad data. A method for detecting branch status errors, which may also be applied before the state estimation, is then proposed. Both methods are evaluated through simulation on a novel test platform for wide-area measurement applications. It is found that topology errors may be detected even under the influence of the large dynamics following the loss of a heavily loaded branch.

  19. A novel multitemporal InSAR model for joint estimation of deformation rates and orbital errors

    KAUST Repository

    Zhang, Lei

    2014-06-01

    Orbital errors, characterized typically as long-wavelength artifacts, commonly exist in interferometric synthetic aperture radar (InSAR) imagery as a result of inaccurate determination of the sensor state vector. Orbital errors degrade the precision of multitemporal InSAR products (i.e., ground deformation). Although research on orbital error reduction has been ongoing for nearly two decades and several algorithms for reducing the effect of the errors are already in existence, the errors cannot always be corrected efficiently and reliably. We propose a novel model that is able to jointly estimate deformation rates and orbital errors based on the different spatiotemporal characteristics of the two types of signals. The proposed model is able to isolate a long-wavelength ground motion signal from the orbital error even when the two types of signals exhibit similar spatial patterns. The proposed algorithm is efficient and requires no ground control points. In addition, the method is built upon the wrapped phases of interferograms, eliminating the need for phase unwrapping. The performance of the proposed model is validated using both simulated and real data sets. The demo codes of the proposed model are also provided for reference. © 2013 IEEE.

  20. Demonstrating the robustness of population surveillance data: implications of error rates on demographic and mortality estimates

    Directory of Open Access Journals (Sweden)

    Berhane Yemane

    2008-03-01

    …estimates and regression analyses to significant amounts of randomly introduced errors indicates a high level of robustness of the dataset. This apparent inertia of population parameter estimates to simulated errors is largely due to the size of the dataset. Tolerable margins of random error in DSS data may exceed 20%. While this is not an argument in favour of poor quality data, reducing the time and valuable resources spent on detecting and correcting random errors in routine DSS operations may be justifiable as the returns from such procedures diminish with increasing overall accuracy. The money and effort currently spent on endlessly correcting DSS datasets would perhaps be better spent on increasing the surveillance population size and geographic spread of DSSs and analysing and disseminating research findings.

  1. Estimation of Dynamic Errors in Laser Optoelectronic Dimension Gauges for Geometric Measurement of Details

    Directory of Open Access Journals (Sweden)

    Khasanov Zimfir

    2018-01-01

    The article reviews the capabilities and particularities of an approach to improving the metrological characteristics of fiber-optic pressure sensors (FOPS) based on the estimation of dynamic errors in laser optoelectronic dimension gauges for geometric measurement of details. It is shown that the proposed criteria yield new methods for conjugation of optoelectronic converters in the dimension gauge for geometric measurements, in order to reduce the speed and volume requirements for the Random Access Memory (RAM) of the video controller which processes the signal. It is found that the lower the relative error, the higher the interrogation speed of the CCD array. Thus, the maximum achievable dynamic accuracy characteristics of the optoelectronic gauge are determined by the following conditions: the parameter stability of the electronic circuits in the CCD array and the microprocessor calculator; the linearity of characteristics; and the error dynamics and noise in all electronic circuits of the CCD array and microprocessor calculator.

  2. A Refined Algorithm On The Estimation Of Residual Motion Errors In Airborne SAR Images

    Science.gov (United States)

    Zhong, Xuelian; Xiang, Maosheng; Yue, Huanyin; Guo, Huadong

    2010-10-01

    Due to the lack of accuracy in the navigation system, residual motion errors (RMEs) frequently appear in airborne SAR images. For very high resolution SAR imaging and repeat-pass SAR interferometry, the residual motion errors must be estimated and compensated. We previously proposed an algorithm to estimate the residual motion errors of an individual SAR image. It exploits point-like targets distributed along the azimuth direction, and not only corrects the phase but also improves the azimuth focusing. However, the required point targets are selected by hand, which is time- and labor-consuming; in addition, the algorithm is sensitive to noise. In this paper, a refined algorithm is proposed to address these two shortcomings. With real X-band airborne SAR data, the feasibility and accuracy of the refined algorithm are demonstrated.

  3. On the BER and capacity analysis of MIMO MRC systems with channel estimation error

    KAUST Repository

    Yang, Liang

    2011-10-01

    In this paper, we investigate the effect of channel estimation error on the capacity and bit-error rate (BER) of multiple-input multiple-output (MIMO) systems with transmit maximal ratio transmission (MRT) and receive maximal ratio combining (MRC) over uncorrelated Rayleigh fading channels. We first derive the ergodic (average) capacity expressions for such systems when power adaptation is applied at the transmitter. The exact capacity expression for the uniform power allocation case is also presented. Furthermore, to investigate the diversity order of the MIMO MRT-MRC scheme, we derive the BER performance under a uniform power allocation policy. We also present an asymptotic BER performance analysis for the MIMO MRT-MRC system with multiuser diversity. Numerical results are given to illustrate the sensitivity of the main performance to the channel estimation error and the tightness of the approximate cutoff value. © 2011 IEEE.

  4. Estimating the anomalous diffusion exponent for single particle tracking data with measurement errors - An alternative approach.

    Science.gov (United States)

    Burnecki, Krzysztof; Kepten, Eldad; Garini, Yuval; Sikora, Grzegorz; Weron, Aleksander

    2015-06-11

    Accurately characterizing the anomalous diffusion of a tracer particle has become a central issue in biophysics. However, measurement errors raise difficulties in the characterization of single trajectories, which is usually performed through the time-averaged mean square displacement (TAMSD). In this paper, we study a fractionally integrated moving average (FIMA) process as an appropriate model for anomalous diffusion data with measurement errors. We compare FIMA and traditional TAMSD estimators for the anomalous diffusion exponent. The ability of the FIMA framework to characterize dynamics in a wide range of anomalous exponents and noise levels is discussed through the simulation of a toy model (fractional Brownian motion disturbed by Gaussian white noise). Comparison to the TAMSD technique shows that FIMA estimation is superior in many scenarios. This is expected to enable new measurement regimes for single particle tracking (SPT) experiments even in the presence of high measurement errors.
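
    The TAMSD baseline the paper compares against is easy to state in code. The sketch below estimates the anomalous exponent from the log-log slope of the TAMSD of a noisy toy trajectory; ordinary Brownian motion plus Gaussian noise stands in for the fractional Brownian motion of the paper, and the FIMA estimator itself (which requires specialized time-series fitting) is not shown.

```python
import numpy as np

def tamsd(x, lag):
    """Time-averaged mean square displacement of a 1-D track at one lag."""
    return np.mean((x[lag:] - x[:-lag]) ** 2)

rng = np.random.default_rng(1)
n, noise_sd = 4096, 0.5
# Toy trajectory: Brownian motion (true exponent 1) plus measurement noise
x = np.cumsum(rng.normal(0.0, 1.0, n)) + rng.normal(0.0, noise_sd, n)

lags = np.arange(1, 64)
msd = np.array([tamsd(x, lag) for lag in lags])
# Log-log slope estimates the exponent; noise biases it at short lags
alpha_hat = np.polyfit(np.log(lags), np.log(msd), 1)[0]
print(f"estimated anomalous exponent: {alpha_hat:.2f}")
```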

  5. Measurement errors when estimating the vertical jump height with flight time using photocell devices: the example of Optojump.

    Science.gov (United States)

    Attia, A; Dhahbi, W; Chaouachi, A; Padulo, J; Wong, D P; Chamari, K

    2017-03-01

    Common methods to estimate vertical jump height (VJH) are based on measurements of flight time (FT) or vertical reaction force. This study aimed to assess the measurement errors when estimating VJH from flight time using photocell devices, in comparison with the gold-standard jump height measured by a force plate (FP). The second purpose was to determine the intrinsic reliability of the Optojump photoelectric cells in estimating VJH. To this end, 20 subjects (age: 22.50±1.24 years) performed maximal vertical jumps in three modalities in randomized order: the squat jump (SJ), counter-movement jump (CMJ), and CMJ with arm swing (CMJarm). Each trial was simultaneously recorded by the FP and Optojump devices. High intra-class correlation coefficients (ICCs) for validity (0.98-0.99) and low limits of agreement (less than 1.4 cm) were found, even though a systematic difference in jump height was consistently observed between the FT and double-integration-of-force methods (-31% to -27%; p1.2). Intra-session reliability of Optojump was excellent, with ICCs ranging from 0.98 to 0.99, low coefficients of variation (3.98%), and low standard errors of measurement (0.8 cm). It was concluded that there is a high correlation between the two methods of estimating vertical jump height, but the FT method cannot replace the gold standard, due to the large systematic bias. Based on our results, equations for each of the three jump modalities are presented in order to obtain a better estimation of the jump height.

  6. Some error estimates for the lumped mass finite element method for a parabolic problem

    KAUST Repository

    Chatzipantelidis, P.

    2012-01-01

    We study the spatially semidiscrete lumped mass method for the model homogeneous heat equation with homogeneous Dirichlet boundary conditions. Improving earlier results we show that known optimal order smooth initial data error estimates for the standard Galerkin method carry over to the lumped mass method whereas nonsmooth initial data estimates require special assumptions on the triangulation. We also discuss the application to time discretization by the backward Euler and Crank-Nicolson methods. © 2011 American Mathematical Society.

  7. Impact of Channel Estimation Errors on Multiuser Detection via the Replica Method

    Directory of Open Access Journals (Sweden)

    Li Husheng

    2005-01-01

    For practical wireless DS-CDMA systems, channel estimation is imperfect due to noise and interference. In this paper, the impact of channel estimation errors on multiuser detection (MUD) is analyzed under the framework of the replica method. System performance is obtained in the large system limit for optimal MUD, linear MUD, and turbo MUD, and is validated by numerical results for finite systems.

  8. Statistical error estimation of the Feynman-α method using the bootstrap method

    International Nuclear Information System (INIS)

    Endo, Tomohiro; Yamamoto, Akio; Yagi, Takahiro; Pyeon, Cheol Ho

    2016-01-01

    The applicability of the bootstrap method is investigated to estimate the statistical error of the Feynman-α method, which is one of the subcritical measurement techniques based on reactor noise analysis. In the Feynman-α method, the statistical error can be simply estimated from multiple measurements of reactor noise; however, this requires additional measurement time. Using a resampling technique called the 'bootstrap method', the standard deviation and confidence interval of measurement results obtained by the Feynman-α method can be estimated as the statistical error using only a single measurement of reactor noise. In order to validate our proposed technique, we carried out a passive measurement of reactor noise without any external source, i.e., with only the inherent neutron source from spontaneous fission and (α,n) reactions in nuclear fuels, at the Kyoto University Criticality Assembly. Through the actual measurement, it is confirmed that the bootstrap method is applicable to approximately estimate the statistical error of measurement results obtained by the Feynman-α method. (author)
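
    The resampling idea itself is generic and fits in a few lines. The sketch below bootstraps the standard error of a statistic from a single set of values; the Feynman-α Y statistic computed from gated neutron counts is not reproduced here, and the data are synthetic stand-ins.

```python
import numpy as np

def bootstrap_se(samples, statistic, n_boot=2000, seed=0):
    """Bootstrap standard error of `statistic` from one measurement set."""
    rng = np.random.default_rng(seed)
    n = len(samples)
    resampled = np.array([
        statistic(samples[rng.integers(0, n, n)])   # resample w/ replacement
        for _ in range(n_boot)
    ])
    return resampled.std(ddof=1)

# Hypothetical per-gate values entering a Feynman-α-style analysis
values = np.random.default_rng(42).normal(0.8, 0.1, 500)
print(f"bootstrap SE of the mean: {bootstrap_se(values, np.mean):.4f}")
```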

  9. A Sandwich-Type Standard Error Estimator of SEM Models with Multivariate Time Series

    Science.gov (United States)

    Zhang, Guangjian; Chow, Sy-Miin; Ong, Anthony D.

    2011-01-01

    Structural equation models are increasingly used as a modeling tool for multivariate time series data in the social and behavioral sciences. Standard error estimators of SEM models, originally developed for independent data, require modifications to accommodate the fact that time series data are inherently dependent. In this article, we extend a…

  10. Discretization error estimation and exact solution generation using the method of nearby problems.

    Energy Technology Data Exchange (ETDEWEB)

    Sinclair, Andrew J. (Auburn University Auburn, AL); Raju, Anil (Auburn University Auburn, AL); Kurzen, Matthew J. (Virginia Tech Blacksburg, VA); Roy, Christopher John (Virginia Tech Blacksburg, VA); Phillips, Tyrone S. (Virginia Tech Blacksburg, VA)

    2011-10-01

    The Method of Nearby Problems (MNP), a form of defect correction, is examined as a method for generating exact solutions to partial differential equations and as a discretization error estimator. For generating exact solutions, four-dimensional spline fitting procedures were developed and implemented into a MATLAB code for generating spline fits on structured domains with arbitrary levels of continuity between spline zones. For discretization error estimation, MNP/defect correction only requires a single additional numerical solution on the same grid (as compared to Richardson extrapolation which requires additional numerical solutions on systematically-refined grids). When used for error estimation, it was found that continuity between spline zones was not required. A number of cases were examined including 1D and 2D Burgers equation, the 2D compressible Euler equations, and the 2D incompressible Navier-Stokes equations. The discretization error estimation results compared favorably to Richardson extrapolation and had the advantage of only requiring a single grid to be generated.
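
    For contrast, the Richardson-extrapolation error estimate that MNP is compared against reduces to a one-line formula once solutions on two systematically refined grids are available. A minimal sketch, with hypothetical numbers:

```python
def richardson_error(f_coarse, f_fine, r=2.0, p=2.0):
    """Estimate the discretization error remaining in the fine-grid value,
    given the refinement ratio r and the formal order of accuracy p."""
    return (f_coarse - f_fine) / (r ** p - 1.0)

# Hypothetical grid-converging quantity (e.g. a drag coefficient)
f_h, f_h2 = 1.0250, 1.0212          # coarse (h) and fine (h/2) solutions
err = richardson_error(f_h, f_h2)   # estimated error in f_h2
print(f"estimated fine-grid error: {err:+.5f}")
print(f"extrapolated value: {f_h2 - err:.5f}")
```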

  11. Impact of transport model errors on the global and regional methane emissions estimated by inverse modelling

    NARCIS (Netherlands)

    Locatelli, R.; Bousquet, P.; Chevallier, F.; Fortems-Cheney, A.; Szopa, S.; Saunois, M.; Agusti-Panareda, A.; Bergmann, D.; Bian, H.; Cameron-Smith, P.; Chipperfield, M.P.; Gloor, E.; Houweling, S.; Kawa, S.R.; Krol, M.C.; Patra, P.K.; Prinn, R.G.; Rigby, M.; Saito, R.; Wilson, C.

    2013-01-01

    A modelling experiment has been conceived to assess the impact of transport model errors on methane emissions estimated in an atmospheric inversion system. Synthetic methane observations, obtained from 10 different model outputs from the international TransCom-CH4 model inter-comparison exercise,

  12. Development and estimation of a semi-compensatory model with flexible error structure

    DEFF Research Database (Denmark)

    Kaplan, Sigal; Shiftan, Yoram; Bekhor, Shlomo

    -response model and the utility-based choice by alternatively (i) a nested-logit model and (ii) an error-component logit. In order to test the suggested methodology, the model was estimated for a sample of 1,893 ranked choices and respective threshold values from 631 students who participated in a web-based two...

  13. Precision and shortcomings of yaw error estimation using spinner-based light detection and ranging

    DEFF Research Database (Denmark)

    Kragh, Knud Abildgaard; Hansen, Morten Hartvig; Mikkelsen, Torben

    2013-01-01

    When extracting energy from the wind using horizontal axis wind turbines, the ability to align the rotor axis with the mean wind direction is crucial. In previous work, a method for estimating the yaw error based on measurements from a spinner-mounted light detection and ranging (LIDAR) device was…

  14. Discretization error estimates in maximum norm for convergent splittings of matrices with a monotone preconditioning part

    Czech Academy of Sciences Publication Activity Database

    Axelsson, Owe; Karátson, J.

    2017-01-01

    Roč. 210, January 2017 (2017), s. 155-164 ISSN 0377-0427 Institutional support: RVO:68145535 Keywords : finite difference method * error estimates * matrix splitting * preconditioning Subject RIV: BA - General Mathematics OBOR OECD: Applied mathematics Impact factor: 1.357, year: 2016 http://www.sciencedirect.com/science/article/pii/S0377042716301492?via%3Dihub

  16. Theoretical and Experimental Investigation of Force Estimation Errors Using Active Magnetic Bearings with Embedded Hall Sensors

    DEFF Research Database (Denmark)

    Voigt, Andreas Jauernik; Santos, Ilmar

    2012-01-01

    This paper gives an original theoretical and experimental contribution to the issue of reducing force estimation errors, which arise when applying Active Magnetic Bearings (AMBs) with pole-embedded Hall sensors for force quantification purposes. Motivated by the prospect of increasing the usability…

  17. Estimating root mean square errors in remotely sensed soil moisture over continental scale domains

    NARCIS (Netherlands)

    de Jeu, R.A.M.; Draper, C.; Reichle, R.; Naeimi, V.; Parinussa, R.M.; Wagner, W.W.

    2013-01-01

    Root Mean Square Errors (RMSEs) in the soil moisture anomaly time series obtained from the Advanced Scatterometer (ASCAT) and the Advanced Microwave Scanning Radiometer (AMSR-E; using the Land Parameter Retrieval Model) are estimated over a continental scale domain centered on North America, using

  18. A Possible Solution for the Problem of Estimating the Error Structure of Global Soil Moisture Datasets

    NARCIS (Netherlands)

    Scipal, K.; Holmes, T.R.H.; de Jeu, R.A.M.; Naeimi, V.; Wagner, W.W.

    2008-01-01

    In the last few years, research has made significant progress towards operational soil moisture remote sensing, which has led to the availability of several global data sets. For optimal use of these data, an accurate estimation of the error structure is an important condition. To solve for the…

  19. Standard Error Estimation of 3PL IRT True Score Equating with an MCMC Method

    Science.gov (United States)

    Liu, Yuming; Schulz, E. Matthew; Yu, Lei

    2008-01-01

    A Markov chain Monte Carlo (MCMC) method and a bootstrap method were compared in the estimation of standard errors of item response theory (IRT) true score equating. Three test form relationships were examined: parallel, tau-equivalent, and congeneric. Data were simulated based on Reading Comprehension and Vocabulary tests of the Iowa Tests of…

  20. L∞-error estimate for a system of elliptic quasivariational inequalities

    Directory of Open Access Journals (Sweden)

    M. Boulbrachene

    2003-01-01

    We deal with the numerical analysis of a system of elliptic quasivariational inequalities (QVIs). Under W2,p(Ω)-regularity of the continuous solution, a quasi-optimal L∞-convergence of a piecewise linear finite element method is established, involving a monotone algorithm of Bensoussan-Lions type and standard uniform error estimates known for elliptic variational inequalities (VIs).

  1. Adaptive Green-Kubo estimates of transport coefficients from molecular dynamics based on robust error analysis

    Science.gov (United States)

    Jones, Reese E.; Mandadapu, Kranthi K.

    2012-04-01

    We present a rigorous Green-Kubo methodology for calculating transport coefficients based on on-the-fly estimates of: (a) statistical stationarity of the relevant process, and (b) error in the resulting coefficient. The methodology uses time samples efficiently across an ensemble of parallel replicas to yield accurate estimates, which is particularly useful for estimating the thermal conductivity of semi-conductors near their Debye temperatures where the characteristic decay times of the heat flux correlation functions are large. Employing and extending the error analysis of Zwanzig and Ailawadi [Phys. Rev. 182, 280 (1969)], 10.1103/PhysRev.182.280 and Frenkel [in Proceedings of the International School of Physics "Enrico Fermi", Course LXXV (North-Holland Publishing Company, Amsterdam, 1980)] to the integral of correlation, we are able to provide tight theoretical bounds for the error in the estimate of the transport coefficient. To demonstrate the performance of the method, four test cases of increasing computational cost and complexity are presented: the viscosity of Ar and water, and the thermal conductivity of Si and GaN. In addition to producing accurate estimates of the transport coefficients for these materials, this work demonstrates precise agreement of the computed variances in the estimates of the correlation and the transport coefficient with the extended theory based on the assumption that fluctuations follow a Gaussian process. The proposed algorithm in conjunction with the extended theory enables the calculation of transport coefficients with the Green-Kubo method accurately and efficiently.
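
    The basic (non-adaptive) Green-Kubo estimator underlying this methodology is the time integral of a flux autocorrelation function. The sketch below computes it for a synthetic flux series; the paper's adaptive stationarity checks and error bounds are not reproduced, and the property-specific prefactor (e.g. V/(k_B T²) for thermal conductivity) is left as an input.

```python
import numpy as np

def green_kubo(flux, dt, prefactor, t_cut):
    """Transport coefficient from the Green-Kubo relation: prefactor times
    the time integral of the flux autocorrelation up to a cutoff t_cut."""
    n = len(flux)
    f = flux - flux.mean()
    # Unbiased autocorrelation estimate via FFT with zero padding
    spec = np.fft.rfft(f, 2 * n)
    acf = np.fft.irfft(spec * np.conj(spec))[:n] / (n - np.arange(n))
    k = int(t_cut / dt)
    # Trapezoidal rule, written out to avoid version-specific helpers
    return prefactor * dt * (acf[:k].sum() - 0.5 * (acf[0] + acf[k - 1]))

# Synthetic flux with exponentially decaying correlation (AR(1) process)
rng = np.random.default_rng(0)
phi, n, dt = 0.9, 50_000, 1.0
flux = np.empty(n)
flux[0] = 0.0
for i in range(1, n):
    flux[i] = phi * flux[i - 1] + rng.normal()
print(green_kubo(flux, dt, prefactor=1.0, t_cut=100.0))
```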

  2. Use of attribute association error probability estimates to evaluate quality of medical record geocodes.

    Science.gov (United States)

    Klaus, Christian A; Carrasco, Luis E; Goldberg, Daniel W; Henry, Kevin A; Sherman, Recinda L

    2015-09-15

    The utility of patient attributes associated with the spatiotemporal analysis of medical records lies not just in their values but also in the strength of association between them. Estimating the extent to which a hierarchy of conditional probability exists between patient attribute associations, such as patient identifying fields, patient and date of diagnosis, and patient and address at diagnosis, is fundamental to estimating the strength of association between patient and geocode, and patient and enumeration area. We propose a hierarchy for the attribute associations within medical records that enable spatiotemporal relationships. We also present a set of metrics that store attribute association error probability (AAEP), to estimate error probability for all attribute associations upon which certainty in a patient geocode depends. A series of experiments was undertaken to understand how error estimation could be operationalized within health data and what levels of AAEP reveal themselves in real data using these methods. Specifically, the goals of this evaluation were to (1) assess whether the concept of our error assessment techniques could be implemented by a population-based cancer registry; (2) apply the techniques to real data from a large health data agency and characterize the observed levels of AAEP; and (3) demonstrate how detected AAEP might impact spatiotemporal health research. We present an evaluation of AAEP metrics generated for cancer cases in a North Carolina county. We show examples of how we estimated AAEP for selected attribute associations and circumstances. We demonstrate the distribution of AAEP in our case sample across attribute associations, and demonstrate ways in which disease-registry-specific operations influence the prevalence of AAEP estimates for specific attribute associations. The effort to detect and store estimates of AAEP is worthwhile because of the increase in confidence fostered by the attribute association level approach to the

  3. Testing and Estimating Shape-Constrained Nonparametric Density and Regression in the Presence of Measurement Error

    KAUST Repository

    Carroll, Raymond J.

    2011-03-01

    In many applications we can expect that, or are interested to know if, a density function or a regression curve satisfies some specific shape constraints. For example, when the explanatory variable, X, represents the value taken by a treatment or dosage, the conditional mean of the response, Y , is often anticipated to be a monotone function of X. Indeed, if this regression mean is not monotone (in the appropriate direction) then the medical or commercial value of the treatment is likely to be significantly curtailed, at least for values of X that lie beyond the point at which monotonicity fails. In the case of a density, common shape constraints include log-concavity and unimodality. If we can correctly guess the shape of a curve, then nonparametric estimators can be improved by taking this information into account. Addressing such problems requires a method for testing the hypothesis that the curve of interest satisfies a shape constraint, and, if the conclusion of the test is positive, a technique for estimating the curve subject to the constraint. Nonparametric methodology for solving these problems already exists, but only in cases where the covariates are observed precisely. However in many problems, data can only be observed with measurement errors, and the methods employed in the error-free case typically do not carry over to this error context. In this paper we develop a novel approach to hypothesis testing and function estimation under shape constraints, which is valid in the context of measurement errors. Our method is based on tilting an estimator of the density or the regression mean until it satisfies the shape constraint, and we take as our test statistic the distance through which it is tilted. Bootstrap methods are used to calibrate the test. The constrained curve estimators that we develop are also based on tilting, and in that context our work has points of contact with methodology in the error-free case.

  4. Estimating tree biomass regressions and their error, proceedings of the workshop on tree biomass regression functions and their contribution to the error

    Science.gov (United States)

    Eric H. Wharton; Tiberius Cunia

    1987-01-01

    Proceedings of a workshop co-sponsored by the USDA Forest Service, the State University of New York, and the Society of American Foresters. Presented were papers on the methodology of sample tree selection, tree biomass measurement, construction of biomass tables and estimation of their error, and combining the error of biomass tables with that of the sample plots or...

  5. Procedures for using expert judgment to estimate human-error probabilities in nuclear power plant operations

    International Nuclear Information System (INIS)

    Seaver, D.A.; Stillwell, W.G.

    1983-03-01

    This report describes and evaluates several procedures for using expert judgment to estimate human-error probabilities (HEPs) in nuclear power plant operations. These HEPs are currently needed for several purposes, particularly for probabilistic risk assessments. Data do not exist for estimating these HEPs, so expert judgment can provide these estimates in a timely manner. Five judgmental procedures are described here: paired comparisons, ranking and rating, direct numerical estimation, indirect numerical estimation and multiattribute utility measurement. These procedures are evaluated in terms of several criteria: quality of judgments, difficulty of data collection, empirical support, acceptability, theoretical justification, and data processing. Situational constraints such as the number of experts available, the number of HEPs to be estimated, the time available, the location of the experts, and the resources available are discussed in regard to their implications for selecting a procedure for use

  6. Accurate and fast methods to estimate the population mutation rate from error prone sequences

    Directory of Open Access Journals (Sweden)

    Miyamoto Michael M

    2009-08-01

    Background: The population mutation rate (θ) remains one of the most fundamental parameters in genetics, ecology, and evolutionary biology. However, its accurate estimation can be seriously compromised when working with error-prone data such as expressed sequence tags, low-coverage draft sequences, and other such unfinished products. This study is premised on the simple idea that a random sequence error due to a chance accident during data collection or recording will be distributed within a population dataset as a singleton (i.e., as a polymorphic site where one sampled sequence exhibits a unique base relative to the common nucleotide of the others). Thus, one can avoid these random errors by ignoring the singletons within a dataset. Results: This strategy is implemented under an infinite sites model that focuses on only the internal branches of the sample genealogy, where a shared polymorphism can arise (i.e., a variable site where each alternative base is represented by at least two sequences). This approach is first used to derive independently the same new Watterson and Tajima estimators of θ, as recently reported by Achaz [1] for error-prone sequences. It is then used to modify the recent, full, maximum-likelihood model of Knudsen and Miyamoto [2], which incorporates various factors for experimental error and design with those for coalescence and mutation. These new methods are all accurate and fast according to evolutionary simulations and analyses of a real complex population dataset for the California sea hare. Conclusion: In light of these results, we recommend the use of these three new methods for the determination of θ from error-prone sequences. In particular, we advocate the new maximum-likelihood model as a starting point for the further development of more complex coalescent/mutation models that also account for experimental error and design.
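
    The singleton-free Watterson estimator referred to above follows from E[S] = θ·a_n together with an expected θ singletons per sample, so the expected number of non-singleton segregating sites is θ(a_n − 1). A minimal sketch (function name illustrative):

        def watterson_theta_no_singletons(non_singleton_sites, n_sequences):
            """Watterson-type estimator of theta that ignores singletons:
            E[non-singleton segregating sites] = theta * (a_n - 1),
            with a_n = sum_{i=1}^{n-1} 1/i."""
            a_n = sum(1.0 / i for i in range(1, n_sequences))
            return non_singleton_sites / (a_n - 1.0)

        # e.g. 42 shared polymorphic sites among 10 sampled sequences:
        # watterson_theta_no_singletons(42, 10) ≈ 22.96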

  7. Wrinkles in the rare biosphere: Pyrosequencing errors can lead to artificial inflation of diversity estimates

    Energy Technology Data Exchange (ETDEWEB)

    Kunin, Victor; Engelbrektson, Anna; Ochman, Howard; Hugenholtz, Philip

    2009-08-01

    Massively parallel pyrosequencing of the small subunit (16S) ribosomal RNA gene has revealed that the extent of rare microbial populations in several environments, the 'rare biosphere', is orders of magnitude higher than previously thought. One important caveat with this method is that sequencing error could artificially inflate diversity estimates. Although the per-base error of 16S rDNA amplicon pyrosequencing has been shown to be as good as or lower than Sanger sequencing, no direct assessments of pyrosequencing errors on diversity estimates have been reported. Using only Escherichia coli MG1655 as a reference template, we find that 16S rDNA diversity is grossly overestimated unless relatively stringent read quality filtering and low clustering thresholds are applied. In particular, the common practice of removing reads with unresolved bases and anomalous read lengths is insufficient to ensure accurate estimates of microbial diversity. Furthermore, common and reproducible homopolymer length errors can result in relatively abundant spurious phylotypes further confounding data interpretation. We suggest that stringent quality-based trimming of 16S pyrotags and clustering thresholds no greater than 97% identity should be used to avoid overestimates of the rare biosphere.

  8. Estimation of the optical errors on the luminescence imaging of water for proton beam

    Science.gov (United States)

    Yabe, Takuya; Komori, Masataka; Horita, Ryo; Toshito, Toshiyuki; Yamamoto, Seiichi

    2018-04-01

    Although luminescence imaging of water during proton-beam irradiation can be applied to range estimation, the height of the Bragg peak in the luminescence image was smaller than that measured with an ionization chamber. We hypothesized that the difference was attributable to two optical phenomena: parallax errors of the optical system and reflection of the luminescence from the water phantom. We estimated the errors caused by these optical phenomena affecting the luminescence image of water. To estimate the parallax error on the luminescence images, we measured the luminescence images during proton-beam irradiation using a cooled charge-coupled device camera while changing the height of the camera's optical axis relative to that of the Bragg peak. When the height of the optical axis matched the depth of the Bragg peak, the Bragg peak heights in the depth profiles were the highest. The reflection of the luminescence of water with a black-walled phantom was slightly smaller than that with a transparent phantom and changed the shapes of the depth profiles. We conclude that the parallax error significantly affects the heights of the Bragg peak and that reflection from the phantom affects the shapes of the depth profiles of the luminescence images of water.

  9. An Estimation of Human Error Probability of Filtered Containment Venting System Using Dynamic HRA Method

    Energy Technology Data Exchange (ETDEWEB)

    Jang, Seunghyun; Jae, Moosung [Hanyang University, Seoul (Korea, Republic of)

    2016-10-15

    Human failure events (HFEs) are considered in the development of system fault trees as well as accident sequence event trees as part of Probabilistic Safety Assessment (PSA). Several methods are used for analyzing human error, such as the Technique for Human Error Rate Prediction (THERP), Human Cognitive Reliability (HCR), and Standardized Plant Analysis Risk-Human Reliability Analysis (SPAR-H), and new methods for human reliability analysis (HRA) are currently under development. This paper presents a dynamic HRA method for assessing human failure events, and an estimation of the human error probability for the filtered containment venting system (FCVS) is performed. The action associated with implementation of containment venting during a station blackout sequence is used as an example. In this report, the dynamic HRA method was used to analyze the FCVS-related operator action. The distributions of the required time and the available time were developed by the MAAP code and LHS sampling. Though the numerical calculations given here are only for illustrative purposes, the dynamic HRA method can be a useful tool for estimating human error probabilities, and it can be applied to any kind of operator action, including the severe accident management strategy.
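
    The probabilistic core of this approach (the error probability as the chance that the required time exceeds the available time) can be illustrated with plain Monte Carlo; the lognormal parameters below are made-up placeholders, not values from the MAAP/LHS analysis:

        import numpy as np

        rng = np.random.default_rng(0)
        # placeholder time distributions for the venting action (minutes)
        t_required = rng.lognormal(mean=np.log(20.0), sigma=0.4, size=100_000)
        t_available = rng.lognormal(mean=np.log(35.0), sigma=0.3, size=100_000)

        hep = np.mean(t_required > t_available)   # P(required time > available time)
        print(f"estimated human error probability: {hep:.4f}")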

  10. FSO channel estimation for OOK modulation with APD receiver over atmospheric turbulence and pointing errors

    Science.gov (United States)

    Dabiri, Mohammad Taghi; Sadough, Seyed Mohammad Sajad; Khalighi, Mohammad Ali

    2017-11-01

    In free-space optical (FSO) links, atmospheric turbulence and pointing errors lead to scintillation in the received signal. Due to its ease of implementation, intensity modulation with direct detection (IM/DD) based on ON-OFF keying (OOK) is a popular signaling scheme in these systems. For long-haul FSO links, avalanche photodiodes (APDs) are commonly used, which provide an internal gain in photo-detection, allowing larger transmission ranges as compared with PIN photo-detector (PD) counterparts. Since optimal OOK detection at the receiver requires knowledge of the instantaneous channel fading coefficient, channel estimation is an important task that can considerably impact the link performance. In this paper, we investigate the channel estimation issue when using an APD at the receiver. Here, optimal signal detection is considerably more delicate than in the case of a PIN PD. In fact, given that APD-based receivers are usually shot-noise limited, the receiver noise has a different distribution depending on whether the transmitted bit is '0' or '1', and moreover, its statistics are further affected by the scintillation. To deal with this, we first consider minimum mean-square-error (MMSE), maximum a posteriori probability (MAP), and maximum likelihood (ML) channel estimation over an observation window encompassing several consecutive received OOK symbols. Due to the high computational complexity of these methods, in a second step we propose an ML channel estimator based on the expectation-maximization (EM) algorithm, which has a low implementation complexity, making it suitable for high-data-rate FSO communications. Numerical results show that for a sufficiently large observation window, by using the proposed EM channel estimator, we can achieve bit error rate performance very close to that with perfect channel state information. We also derive the Cramer-Rao lower bound (CRLB) on the MSE of the estimation errors and show that for a large enough observation

  11. Accuracy and Sources of Error for an Angle Independent Volume Flow Estimator

    DEFF Research Database (Denmark)

    Jensen, Jonas; Olesen, Jacob Bjerring; Hansen, Peter Møller

    2014-01-01

    This paper investigates sources of error for a vector velocity volume flow estimator. Quantification of the estimator's accuracy is performed theoretically and investigated in vivo. Womersley's model for pulsatile flow is used to simulate velocity profiles and calculate volume flow errors.... A BK Medical UltraView 800 ultrasound scanner with a 9 MHz linear array transducer is used to obtain Vector Flow Imaging sequences of a superficial part of the fistulas. Cross-sectional diameters of each fistula are measured on B-mode images by rotating the scan plane 90 degrees. The major axis

  12. Estimation of the wind turbine yaw error by support vector machines

    DEFF Research Database (Denmark)

    Sheibat-Othman, Nida; Othman, Sami; Tayari, Raoaa

    2015-01-01

    Wind turbine yaw error information is of high importance in controlling wind turbine power and structural load. Normally used wind vanes are imprecise. In this work, the estimation of yaw error in wind turbines is studied using support vector machines for regression (SVR). As the methodology...... is data-based, simulated data from a high fidelity aero-elastic model is used for learning. The model simulates a variable speed horizontal-axis wind turbine composed of three blades and a full converter. Both partial load (blade angles fixed at 0 deg) and full load zones (active pitch actuators...
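
    A hedged sketch of the data-driven regression step using scikit-learn's SVR; the synthetic features below merely stand in for the aero-elastic simulation outputs, and the hyperparameters are illustrative:

        import numpy as np
        from sklearn.svm import SVR

        rng = np.random.default_rng(1)
        # synthetic stand-ins for simulated operating signals (e.g. power, loads, speeds)
        X = rng.normal(size=(500, 4))
        yaw_error = X @ np.array([0.5, -0.2, 0.1, 0.3]) + 0.05 * rng.normal(size=500)

        model = SVR(kernel="rbf", C=10.0, epsilon=0.01).fit(X, yaw_error)
        print(model.predict(X[:3]))   # estimated yaw error for new operating points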

  13. Assumption-free estimation of the genetic contribution to refractive error across childhood.

    Science.gov (United States)

    Guggenheim, Jeremy A; St Pourcain, Beate; McMahon, George; Timpson, Nicholas J; Evans, David M; Williams, Cathy

    2015-01-01

    Studies in relatives have generally yielded high heritability estimates for refractive error: twins 75-90%, families 15-70%. However, because related individuals often share a common environment, these estimates are inflated (via misallocation of unique/common environment variance). We calculated a lower-bound heritability estimate for refractive error free from such bias. Between the ages of 7 and 15 years, participants in the Avon Longitudinal Study of Parents and Children (ALSPAC) underwent non-cycloplegic autorefraction at regular research clinics. At each age, an estimate of the variance in refractive error explained by single nucleotide polymorphism (SNP) genetic variants was calculated using genome-wide complex trait analysis (GCTA) with high-density genome-wide SNP genotype information (minimum N at each age = 3,404). The variance in refractive error explained by the SNPs ("SNP heritability") was stable over childhood, averaging 0.28 (SE = 0.08) across ages 7-15 years. Simulations suggested that lack of cycloplegia during autorefraction led to a small underestimation of SNP heritability (adjusted SNP heritability = 0.35; SE = 0.09). To put these results in context, the variance in refractive error explained (or predicted) by the time participants spent outdoors and by the time spent reading was below 1% throughout childhood. Notwithstanding the strong evidence of association between time outdoors and myopia, and time reading and myopia, less than 1% of the variance in myopia at age 15 was explained by crude measures of these two risk factors, indicating that their effects may be limited, at least when averaged over the whole population.

  14. Errors in the estimation method for the rejection of vibrations in adaptive optics systems

    Science.gov (United States)

    Kania, Dariusz

    2017-06-01

    In recent years the problem of the impact of mechanical vibrations in adaptive optics (AO) systems has attracted renewed attention. These vibrations are damped sinusoidal signals and have a deleterious effect on the system. One software solution for rejecting the vibrations is an adaptive method called AVC (Adaptive Vibration Cancellation), whose procedure has three steps: estimation of the perturbation parameters, estimation of the frequency response of the plant, and updating of the reference signal to reject/minimize the vibration. In the first step, the estimation method is a very important issue. A very accurate and fast (below 10 ms) method for estimating these three parameters has been presented in several publications in recent years. The method is based on spectrum interpolation and MSD time windows, and it can be used to estimate multifrequency signals. In this paper the estimation method is used in the AVC method to increase the system performance. Several parameters affect the accuracy of the obtained results, e.g., CiR - the number of signal periods in a measurement window, N - the number of samples in the FFT procedure, H - the time window order, SNR, b - the number of ADC bits, and γ - the damping ratio of the tested signal. Systematic errors increase when N, CiR, and H decrease and when γ increases. The systematic error is approximately 10^-10 Hz/Hz for N = 2048 and CiR = 0.1. This paper presents equations that can be used to estimate the maximum systematic errors for given values of H, CiR and N before the start of the estimation process.
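
    For orientation, a generic single-tone illustration of frequency estimation by spectral interpolation (parabolic interpolation around the FFT peak); this is not the paper's MSD-window estimator, only the common underlying idea:

        import numpy as np

        def interpolated_peak_frequency(x, fs):
            """Estimate a tone's frequency by parabolic interpolation of the
            log-magnitude spectrum around the largest FFT bin."""
            X = np.abs(np.fft.rfft(x * np.hanning(len(x))))
            k = int(np.argmax(X[1:-1])) + 1            # skip DC and Nyquist bins
            a, b, c = np.log(X[k - 1]), np.log(X[k]), np.log(X[k + 1])
            delta = 0.5 * (a - c) / (a - 2.0 * b + c)  # fractional-bin offset
            return (k + delta) * fs / len(x)

        # a damped 50 Hz tone sampled at 1 kHz:
        t = np.arange(4096) / 1000.0
        print(interpolated_peak_frequency(np.exp(-2.0 * t) * np.sin(2 * np.pi * 50 * t), 1000.0))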

  15. A residual-based a posteriori error estimator for single-phase Darcy flow in fractured porous media

    KAUST Repository

    Chen, Huangxin

    2016-12-09

    In this paper we develop an a posteriori error estimator for a mixed finite element method for single-phase Darcy flow in a two-dimensional fractured porous media. The discrete fracture model is applied to model the fractures by one-dimensional fractures in a two-dimensional domain. We consider Raviart–Thomas mixed finite element method for the approximation of the coupled Darcy flows in the fractures and the surrounding porous media. We derive a robust residual-based a posteriori error estimator for the problem with non-intersecting fractures. The reliability and efficiency of the a posteriori error estimator are established for the error measured in an energy norm. Numerical results verifying the robustness of the proposed a posteriori error estimator are given. Moreover, our numerical results indicate that the a posteriori error estimator also works well for the problem with intersecting fractures.

  16. Bootstrap-based methods for estimating standard errors in Cox's regression analyses of clustered event times.

    Science.gov (United States)

    Xiao, Yongling; Abrahamowicz, Michal

    2010-03-30

    We propose two bootstrap-based methods to correct the standard errors (SEs) from Cox's model for within-cluster correlation of right-censored event times. The cluster-bootstrap method resamples, with replacement, only the clusters, whereas the two-step bootstrap method resamples (i) the clusters, and (ii) individuals within each selected cluster, with replacement. In simulations, we evaluate both methods and compare them with the existing robust variance estimator and the shared gamma frailty model, which are available in statistical software packages. We simulate clustered event time data with latent cluster-level random effects, which are ignored in the conventional Cox's model. For cluster-level covariates, both proposed bootstrap methods yield accurate SEs, valid type I error rates, and acceptable coverage rates, regardless of the true random effects distribution, and avoid the serious variance under-estimation of conventional Cox-based standard errors. However, the two-step bootstrap method over-estimates the variance for individual-level covariates. We also apply the proposed bootstrap methods to obtain confidence bands around flexible estimates of time-dependent effects in a real-life analysis of clustered event times.
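
    A sketch of the cluster-bootstrap variant under stated assumptions (lifelines provides the Cox fit; the `time`, `event`, and cluster column names are placeholders, and this is illustrative rather than the authors' code):

        import numpy as np
        import pandas as pd
        from lifelines import CoxPHFitter

        def cluster_bootstrap_se(df, cluster_col, n_boot=200, seed=0):
            """Resample whole clusters with replacement, refit Cox's model,
            and take the SD of the coefficient estimates across refits."""
            rng = np.random.default_rng(seed)
            clusters = df[cluster_col].unique()
            coefs = []
            for _ in range(n_boot):
                chosen = rng.choice(clusters, size=len(clusters), replace=True)
                boot = pd.concat([df[df[cluster_col] == c] for c in chosen],
                                 ignore_index=True)
                fit = CoxPHFitter().fit(boot.drop(columns=[cluster_col]),
                                        duration_col="time", event_col="event")
                coefs.append(fit.params_.to_numpy())
            return np.std(coefs, axis=0, ddof=1)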

  17. A TOA-AOA-Based NLOS Error Mitigation Method for Location Estimation

    Directory of Open Access Journals (Sweden)

    Tianshuang Qiu

    2007-12-01

    This paper proposes a geometric method to locate a mobile station (MS) in a mobile cellular network when both the range and angle measurements are corrupted by non-line-of-sight (NLOS) errors. The MS location is restricted to an enclosed region by geometric constraints from the temporal-spatial characteristics of the radio propagation channel. A closed-form equation relating the MS position, time of arrival (TOA), angle of arrival (AOA), and angle spread is provided. The solution space of the equation is very large because the angle spreads are random variables in nature. A constrained objective function is constructed to further limit the MS position. A Lagrange-multiplier-based solution and a numerical solution are proposed to resolve the MS position. The estimation quality of the estimator, in terms of "biased" or "unbiased", is discussed. The scale factors, which may be used to evaluate the NLOS propagation level, can be estimated by the proposed method. The AOA seen at base stations may be corrected to some degree. The performance of the proposed method is compared with other hybrid location methods on different NLOS error models and with two scenarios of cell layout. It is found that the proposed method can deal with NLOS errors effectively, and it is attractive for location estimation in cellular networks.

  18. DTI quality control assessment via error estimation from Monte Carlo simulations

    Science.gov (United States)

    Farzinfar, Mahshid; Li, Yin; Verde, Audrey R.; Oguz, Ipek; Gerig, Guido; Styner, Martin A.

    2013-03-01

    Diffusion Tensor Imaging (DTI) is currently the state of the art method for characterizing the microscopic tissue structure of white matter in normal or diseased brain in vivo. DTI is estimated from a series of Diffusion Weighted Imaging (DWI) volumes. DWIs suffer from a number of artifacts which mandate stringent Quality Control (QC) schemes to eliminate lower quality images for optimal tensor estimation. Conventionally, QC procedures exclude artifact-affected DWIs from subsequent computations leading to a cleaned, reduced set of DWIs, called DWI-QC. Often, a rejection threshold is heuristically/empirically chosen above which the entire DWI-QC data is rendered unacceptable and thus no DTI is computed. In this work, we have devised a more sophisticated, Monte-Carlo (MC) simulation based method for the assessment of resulting tensor properties. This allows for a consistent, error-based threshold definition in order to reject/accept the DWI-QC data. Specifically, we propose the estimation of two error metrics related to directional distribution bias of Fractional Anisotropy (FA) and the Principal Direction (PD). The bias is modeled from the DWI-QC gradient information and a Rician noise model incorporating the loss of signal due to the DWI exclusions. Our simulations further show that the estimated bias can be substantially different with respect to magnitude and directional distribution depending on the degree of spatial clustering of the excluded DWIs. Thus, determination of diffusion properties with minimal error requires an evenly distributed sampling of the gradient directions before and after QC.

  19. An Error-Reduction Algorithm to Improve Lidar Turbulence Estimates for Wind Energy

    Energy Technology Data Exchange (ETDEWEB)

    Newman, Jennifer F.; Clifton, Andrew

    2016-08-01

    Currently, cup anemometers on meteorological (met) towers are used to measure wind speeds and turbulence intensity to make decisions about wind turbine class and site suitability. However, as modern turbine hub heights increase and wind energy expands to complex and remote sites, it becomes more difficult and costly to install met towers at potential sites. As a result, remote sensing devices (e.g., lidars) are now commonly used by wind farm managers and researchers to estimate the flow field at heights spanned by a turbine. While lidars can accurately estimate mean wind speeds and wind directions, there is still a large amount of uncertainty surrounding the measurement of turbulence with lidars. This uncertainty in lidar turbulence measurements is one of the key roadblocks that must be overcome in order to replace met towers with lidars for wind energy applications. In this talk, a model for reducing errors in lidar turbulence estimates is presented. Techniques for reducing errors from instrument noise, volume averaging, and variance contamination are combined in the model to produce a corrected value of the turbulence intensity (TI), a commonly used parameter in wind energy. In the next step of the model, machine learning techniques are used to further decrease the error in lidar TI estimates.

  20. Density-preserving sampling: robust and efficient alternative to cross-validation for error estimation.

    Science.gov (United States)

    Budka, Marcin; Gabrys, Bogdan

    2013-01-01

    Estimation of the generalization ability of a classification or regression model is an important issue, as it indicates the expected performance on previously unseen data and is also used for model selection. Currently used generalization error estimation procedures, such as cross-validation (CV) or bootstrap, are stochastic and, thus, require multiple repetitions in order to produce reliable results, which can be computationally expensive, if not prohibitive. The correntropy-inspired density-preserving sampling (DPS) procedure proposed in this paper eliminates the need for repeating the error estimation procedure by dividing the available data into subsets that are guaranteed to be representative of the input dataset. This allows the production of low-variance error estimates with an accuracy comparable to 10 times repeated CV at a fraction of the computations required by CV. This method can also be used for model ranking and selection. This paper derives the DPS procedure and investigates its usability and performance using a set of public benchmark datasets and standard classifiers.

  1. Robust Estimator for Non-Line-of-Sight Error Mitigation in Indoor Localization

    Science.gov (United States)

    Casas, R.; Marco, A.; Guerrero, J. J.; Falcó, J.

    2006-12-01

    Indoor localization systems are undoubtedly of interest in many application fields. Like outdoor systems, they suffer from non-line-of-sight (NLOS) errors, which hinder their robustness and accuracy. Though many ad hoc techniques have been developed to deal with this problem, unfortunately most of them are not applicable indoors due to the high variability of the environment (movement of furniture and of people, etc.). In this paper, we describe the use of robust regression techniques to detect and reject NLOS measures in location estimation using multilateration. We show how the least-median-of-squares technique can be used to overcome the effects of NLOS errors, even in environments with little infrastructure, and validate its suitability by comparing it to other methods described in the literature. We obtained remarkable results when using it in a real indoor positioning system that works with Bluetooth and ultrasound (BLUPS), even when nearly half the measures suffered from NLOS or other coarse errors.
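
    A compact sketch of the least-median-of-squares idea for multilateration (2-D, linearized range equations); the solver below is a generic illustration, not the BLUPS implementation:

        import numpy as np
        from itertools import combinations

        def _solve_position(anchors, ranges):
            """Linearized multilateration: subtract the first range equation."""
            a0, r0 = anchors[0], ranges[0]
            A = 2.0 * (anchors[1:] - a0)
            b = (r0**2 - ranges[1:]**2
                 + (anchors[1:]**2).sum(axis=1) - (a0**2).sum())
            return np.linalg.lstsq(A, b, rcond=None)[0]

        def lmeds_position(anchors, ranges, subset_size=3):
            """Fit on every minimal subset of measurements and keep the candidate
            with the smallest median squared residual over all measurements."""
            best, best_med = None, np.inf
            for idx in combinations(range(len(anchors)), subset_size):
                idx = list(idx)
                p = _solve_position(anchors[idx], ranges[idx])
                residuals = (np.linalg.norm(anchors - p, axis=1) - ranges) ** 2
                med = np.median(residuals)
                if med < best_med:
                    best, best_med = p, med
            return best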

  2. Estimating shipper/receiver measurement error variances by use of ANOVA

    International Nuclear Information System (INIS)

    Lanning, B.M.

    1993-01-01

    Every measurement made on nuclear material items is subject to measurement errors which are inherent variations in the measurement process that cause the measured value to differ from the true value. In practice, it is important to know the variance (or standard deviation) in these measurement errors, because this indicates the precision in reported results. If a nuclear material facility is generating paired data (e.g., shipper/receiver) where party 1 and party 2 each make independent measurements on the same items, the measurement error variance associated with both parties can be extracted. This paper presents a straightforward method for the use of standard statistical computer packages, with analysis of variance (ANOVA), to obtain valid estimates of measurement variances. Also, with the help of the P-value, significant biases between the two parties can be directly detected without reference to an F-table
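
    The extraction can be illustrated with an equivalent Grubbs-type moment computation on the paired data (the paper's ANOVA route recovers the same quantities in expectation; this sketch is illustrative, not the paper's code):

        import numpy as np

        def error_variances(x_shipper, x_receiver):
            """Model: x1 = item + e1, x2 = item + e2 with independent errors.
            Cov(x1, x2) estimates the item-to-item variance, so subtracting it
            from each party's total variance isolates that party's error variance."""
            c = np.cov(x_shipper, x_receiver)
            return c[0, 0] - c[0, 1], c[1, 1] - c[0, 1]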

  3. Estimation of automobile-driver describing function from highway tests using the double steering wheel

    Science.gov (United States)

    Delp, P.; Crossman, E. R. F. W.; Szostak, H.

    1972-01-01

    The automobile-driver describing function for lateral position control was estimated for three subjects from frequency response analysis of straight road test results. The measurement procedure employed an instrumented full size sedan with known steering response characteristics, and equipped with a lateral lane position measuring device based on video detection of white stripe lane markings. Forcing functions were inserted through a servo driven double steering wheel coupling the driver to the steering system proper. Random appearing, Gaussian, and transient time functions were used. The quasi-linear models fitted to the random appearing input frequency response characterized the driver as compensating for lateral position error in a proportional, derivative, and integral manner. Similar parameters were fitted to the Gabor transformed frequency response of the driver to transient functions. A fourth term corresponding to response to lateral acceleration was determined by matching the time response histories of the model to the experimental results. The time histories show evidence of pulse-like nonlinear behavior during extended response to step transients which appear as high frequency remnant power.

  4. Effects of error covariance structure on estimation of model averaging weights and predictive performance

    Science.gov (United States)

    Lu, Dan; Ye, Ming; Meyer, Philip D.; Curtis, Gary P.; Shi, Xiaoqing; Niu, Xu-Feng; Yabusaki, Steve B.

    2013-01-01

    When conducting model averaging for assessing groundwater conceptual model uncertainty, the averaging weights are often evaluated using model selection criteria such as AIC, AICc, BIC, and KIC (Akaike Information Criterion, Corrected Akaike Information Criterion, Bayesian Information Criterion, and Kashyap Information Criterion, respectively). However, this method often leads to an unrealistic situation in which the best model receives overwhelmingly large averaging weight (close to 100%), which cannot be justified by available data and knowledge. It was found in this study that this problem was caused by using the covariance matrix, CE, of measurement errors for estimating the negative log likelihood function common to all the model selection criteria. This problem can be resolved by using the covariance matrix, Cek, of total errors (including model errors and measurement errors) to account for the correlation between the total errors. An iterative two-stage method was developed in the context of maximum likelihood inverse modeling to iteratively infer the unknown Cek from the residuals during model calibration. The inferred Cek was then used in the evaluation of model selection criteria and model averaging weights. While this method was limited to serial data using time series techniques in this study, it can be extended to spatial data using geostatistical techniques. The method was first evaluated in a synthetic study and then applied to an experimental study, in which alternative surface complexation models were developed to simulate column experiments of uranium reactive transport. It was found that the total errors of the alternative models were temporally correlated due to the model errors. The iterative two-stage method using Cek resolved the problem that the best model receives 100% model averaging weight, and the resulting model averaging weights were supported by the calibration results and physical understanding of the alternative models. Using Cek
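
    For reference, model averaging weights are computed from any of these information criteria in the usual way; a minimal sketch:

        import numpy as np

        def model_averaging_weights(ic_values):
            """w_i proportional to exp(-delta_i / 2), where
            delta_i = IC_i - min(IC), for AIC, AICc, BIC, or KIC."""
            delta = np.asarray(ic_values, dtype=float)
            delta = delta - delta.min()
            w = np.exp(-0.5 * delta)
            return w / w.sum()

        # e.g. model_averaging_weights([1012.3, 1015.8, 1024.1])
        # -> weights heavily favouring the first model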

  5. Robust detection and verification of linear relationships to generate metabolic networks using estimates of technical errors

    Directory of Open Access Journals (Sweden)

    Holschneider Matthias

    2007-05-01

    Background: The size and magnitude of the metabolome, the ratio between individual metabolites, and the response of metabolic networks are controlled by multiple cellular factors. Tight control over metabolite ratios will be reflected by a linear relationship between pairs of metabolites, due to the flexibility of metabolic pathways. Hence, unbiased detection and validation of linear metabolic variance can be interpreted in terms of biological control. For robust analyses, criteria for rejecting or accepting linearities need to be developed despite technical measurement errors. The entirety of all pairwise linear metabolic relationships then yields insights into the network of cellular regulation. Results: Bayes' law was applied for detecting linearities that are validated by explaining the residuals by the degree of technical measurement error. Test statistics were developed and the algorithm was tested on simulated data using 3-150 samples and 0-100% technical error. Under the null hypothesis of the existence of a linear relationship, type I errors remained below 5% for data sets consisting of more than four samples, whereas the type II error rate rose quickly with increasing technical errors. Conversely, a filter was developed to balance the error rates in the opposite direction. A minimum of 20 biological replicates is recommended if technical errors remain below 20% relative standard deviation and if thresholds for false error rates are acceptable at less than 5%. The algorithm was proven to be robust against outliers, unlike Pearson's correlations. Conclusion: The algorithm facilitates finding linear relationships in complex datasets, which is radically different from estimating linearity parameters from given linear relationships. Without the filter, it provides high sensitivity and fair specificity. If the filter is activated, high specificity but only fair sensitivity is yielded. Total error rates are more favorable with

  6. Error Analysis on the Estimation of Cumulative Infiltration in Soil Using Green and AMPT Model

    Directory of Open Access Journals (Sweden)

    Muhamad Askari

    2006-08-01

    The Green and Ampt infiltration model is still useful for describing the infiltration process because of the clear physical basis of the model and the existence of model parameter values for a wide range of soils. The objective of this study was to analyze the error in the estimation of cumulative infiltration in soil using the Green and Ampt model and to design a laboratory experiment for measuring cumulative infiltration. The parameters of the model were determined from soil physical properties measured in the laboratory experiment. The Newton-Raphson method was used to estimate the wetting front during calculation, implemented in Visual Basic for Applications (VBA) in MS Word. The results showed that  contributed the highest error in the estimation of cumulative infiltration, followed by K, H0, H1, and t, respectively. They also showed that the calculated cumulative infiltration is always lower than both the measured cumulative infiltration and the volumetric soil water content.
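
    A self-contained sketch of the computation the study describes (the implicit Green-Ampt equation solved for cumulative infiltration by Newton-Raphson); variable names, tolerance, and the example parameters are illustrative assumptions:

        import math

        def green_ampt_F(t, K, psi, delta_theta, tol=1e-10, max_iter=100):
            """Solve F = K*t + psi*delta_theta*ln(1 + F/(psi*delta_theta)) for
            cumulative infiltration F. K: saturated hydraulic conductivity,
            psi: wetting-front suction head, delta_theta: moisture deficit."""
            s = psi * delta_theta
            F = K * t + s                  # starting guess
            for _ in range(max_iter):
                g = F - K * t - s * math.log(1.0 + F / s)
                dg = F / (F + s)           # derivative of g with respect to F
                F_next = F - g / dg
                if abs(F_next - F) < tol:
                    return F_next
                F = F_next
            return F

        # e.g. K = 0.65 cm/h, psi = 16.7 cm, delta_theta = 0.34, t = 1 h:
        # green_ampt_F(1.0, 0.65, 16.7, 0.34) ≈ 3.2 cm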

  7. ERROR ESTIMATION AND UNAMBIGUOUS RECONSTRUCTION FOR CHINESE FIRST DUAL-CHANNEL SPACEBORNE SAR IMAGING

    Directory of Open Access Journals (Sweden)

    T. Jin

    2017-09-01

    Multichannel synthetic aperture radar (SAR) is a significant breakthrough against the inherent trade-off between high resolution and wide swath (HRWS) faced by conventional SAR. Error estimation and unambiguous reconstruction are two crucial techniques for obtaining high-quality imagery. This paper demonstrates the experimental results of the two techniques for China's first dual-channel spaceborne SAR imaging. The model of the Chinese Gaofen-3 dual-channel mode is established and the mechanism of channel mismatches is first discussed. In particular, we propose a digital beamforming (DBF) process, composed of a subspace-based error estimation algorithm and a reconstruction algorithm, applied before imaging. The results exhibit effective suppression of azimuth ambiguities with the proposed DBF process, and indicate the feasibility of this technique for future HRWS SAR systems.

  8. Error Estimates for a Semidiscrete Finite Element Method for Fractional Order Parabolic Equations

    KAUST Repository

    Jin, Bangti

    2013-01-01

    We consider the initial boundary value problem for a homogeneous time-fractional diffusion equation with an initial condition ν(x) and a homogeneous Dirichlet boundary condition in a bounded convex polygonal domain Ω. We study two semidiscrete approximation schemes, i.e., the Galerkin finite element method (FEM) and the lumped mass Galerkin FEM, using piecewise linear functions. We establish error estimates that are almost optimal with respect to the data regularity, including the cases of smooth and nonsmooth initial data, i.e., ν ∈ H²(Ω) ∩ H₀¹(Ω) and ν ∈ L²(Ω). For the lumped mass method, the optimal L²-norm error estimate is valid only under an additional assumption on the mesh, which in two dimensions is known to be satisfied for symmetric meshes. Finally, we present some numerical results that give insight into the reliability of the theoretical study. © 2013 Society for Industrial and Applied Mathematics.

  9. Approximate damped oscillatory solutions and error estimates for the perturbed Klein–Gordon equation

    International Nuclear Information System (INIS)

    Ye, Caier; Zhang, Weiguo

    2015-01-01

    Highlights: • Analyze the dynamical behavior of the planar dynamical system corresponding to the perturbed Klein–Gordon equation. • Present the relations between the properties of traveling wave solutions and the perturbation coefficient. • Obtain all explicit expressions of approximate damped oscillatory solutions. • Investigate error estimates between exact damped oscillatory solutions and the approximate solutions and give some numerical simulations. - Abstract: The influence of perturbation on traveling wave solutions of the perturbed Klein–Gordon equation is studied by applying the bifurcation method and qualitative theory of dynamical systems. All possible approximate damped oscillatory solutions for this equation are obtained by using undetermined coefficient method. Error estimates indicate that the approximate solutions are meaningful. The results of numerical simulations also establish our analysis

  10. Minimum Mean-Square Error Estimation of Mel-Frequency Cepstral Features

    DEFF Research Database (Denmark)

    Jensen, Jesper; Tan, Zheng-Hua

    2015-01-01

    In this work we consider the problem of feature enhancement for noise-robust automatic speech recognition (ASR). We propose a method for minimum mean-square error (MMSE) estimation of mel-frequency cepstral features, which is based on a minimum number of well-established, theoretically consistent...... statistical assumptions. More specifically, the method belongs to the class of methods relying on the statistical framework proposed in Ephraim and Malah’s original work [1]. The method is general in that it allows MMSE estimation of mel-frequency cepstral coefficients (MFCC’s), cepstral-mean subtracted (CMS......, as measured by MFCC mean-square error, the proposed method shows performance, which is identical to or better than other state-of-the-art methods. In terms of ASR performance, no statistical difference could be found between the proposed method and the state-of-the-art methods. We conclude that existing state...

  11. An information-guided channel-hopping scheme for block-fading channels with estimation errors

    KAUST Repository

    Yang, Yuli

    2010-12-01

    The information-guided channel-hopping technique employing multiple transmit antennas was previously proposed for supporting high-data-rate transmission over fading channels. This scheme achieves higher data rates than some mature schemes, such as the well-known cyclic transmit antenna selection and space-time block coding, by exploiting the independence of multiple channels, which effectively results in an additional information-transmitting channel. Moreover, maximum likelihood decoding may be performed by simply decoupling the signals conveyed by the different mapping methods. In this paper, we investigate the achievable spectral efficiency of this scheme in the presence of channel estimation errors, with optimum pilot overhead for minimum mean-square error channel estimation, when transmitting over block-fading channels. Our numerical results further substantiate the robustness of the presented scheme, even with imperfect channel state information. ©2010 IEEE.

  12. Estimating model error covariances in nonlinear state-space models using Kalman smoothing and the expectation-maximisation algorithm

    KAUST Repository

    Dreano, Denis

    2017-04-05

    Specification and tuning of errors from dynamical models are important issues in data assimilation. In this work, we propose an iterative expectation-maximisation (EM) algorithm to estimate the model error covariances using classical extended and ensemble versions of the Kalman smoother. We show that, for additive model errors, the estimate of the error covariance converges. We also investigate other forms of model error, such as parametric or multiplicative errors. We show that additive Gaussian model error is able to compensate for non-additive sources of error in the algorithms we propose. We also demonstrate the limitations of the extended version of the algorithm and recommend the use of the more robust and flexible ensemble version. This article is a proof of concept of the methodology with the Lorenz-63 attractor. We developed an open-source Python library to enable future users to apply the algorithm to their own nonlinear dynamical models.

  13. Expected estimating equation using calibration data for generalized linear models with a mixture of Berkson and classical errors in covariates.

    Science.gov (United States)

    Tapsoba, Jean de Dieu; Lee, Shen-Ming; Wang, Ching-Yun

    2014-02-20

    Data collected in many epidemiological or clinical research studies are often contaminated with measurement errors that may be of classical or Berkson error type. The measurement error may also be a combination of both classical and Berkson errors and failure to account for both errors could lead to unreliable inference in many situations. We consider regression analysis in generalized linear models when some covariates are prone to a mixture of Berkson and classical errors, and calibration data are available only for some subjects in a subsample. We propose an expected estimating equation approach to accommodate both errors in generalized linear regression analyses. The proposed method can consistently estimate the classical and Berkson error variances based on the available data, without knowing the mixture percentage. We investigated its finite-sample performance numerically. Our method is illustrated by an application to real data from an HIV vaccine study. Copyright © 2013 John Wiley & Sons, Ltd.

  14. Rate estimation in partially observed Markov jump processes with measurement errors

    OpenAIRE

    Amrein, Michael; Kuensch, Hans R.

    2010-01-01

    We present a simulation methodology for Bayesian estimation of rate parameters in Markov jump processes arising for example in stochastic kinetic models. To handle the problem of missing components and measurement errors in observed data, we embed the Markov jump process into the framework of a general state space model. We do not use diffusion approximations. Markov chain Monte Carlo and particle filter type algorithms are introduced, which allow sampling from the posterior distribution of t...

  15. Unconditional convergence and error estimates for bounded numerical solutions of the barotropic Navier-Stokes system

    Czech Academy of Sciences Publication Activity Database

    Feireisl, Eduard; Hošek, Radim; Maltese, D.; Novotný, A.

    2017-01-01

    Roč. 33, č. 4 (2017), s. 1208-1223 ISSN 0749-159X EU Projects: European Commission(XE) 320078 - MATHEF Institutional support: RVO:67985840 Keywords: convergence * error estimates * mixed numerical method * Navier–Stokes system Subject RIV: BA - General Mathematics OBOR OECD: Pure mathematics Impact factor: 1.079, year: 2016 http://onlinelibrary.wiley.com/doi/10.1002/num.22140/abstract

  16. Goal Oriented Estimation of Errors due to Modal Reduction in Dynamics

    OpenAIRE

    Shetty, Sandeep; Okeke, Chukwudi Anthony

    2007-01-01

    The aim of this thesis is the estimation of errors due to reduction in the mode superposition method. Mode superposition methods are used to calculate the dynamic response of linear systems. In the mode superposition method it is unnecessary to consider all the modes of a particular system; generally, only a small number of modes contribute significantly to the solution. The main aim of the study is to identify the significant modes required for a good approximation. Finally, this study constit...

  17. Verification of functional a posteriori error estimates for obstacle problem in 2D

    Czech Academy of Sciences Publication Activity Database

    Harasim, P.; Valdman, Jan

    2014-01-01

    Roč. 50, č. 6 (2014), s. 978-1002 ISSN 0023-5954 R&D Projects: GA ČR GA13-18652S Institutional support: RVO:67985556 Keywords: obstacle problem * a posteriori error estimate * finite element method * variational inequalities Subject RIV: BA - General Mathematics Impact factor: 0.541, year: 2014 http://library.utia.cas.cz/separaty/2015/MTR/valdman-0441661.pdf

  19. Estimating the Standard Error of the Judging in a modified-Angoff Standards Setting Procedure

    Directory of Open Access Journals (Sweden)

    Robert G. MacCann

    2004-03-01

    For a modified Angoff standards setting procedure, two methods of calculating the standard error of the judging were compared. The Central Limit Theorem (CLT) method is easy to calculate and uses readily available data. It estimates the variance of mean cut scores as a function of the variance of cut scores within a judging group, based on the independent judgements at Stage 1 of the process. Its theoretical drawback is that it is unable to take account of the effects of collaboration among the judges at Stages 2 and 3. The second method, an application of equipercentile (EQP) equating, relies on the selection of very large stable candidatures and the standardisation of the raw score distributions to remove effects associated with test difficulty. The standard error estimates were then empirically obtained from the mean cut score variation observed over a five year period. For practical purposes, the two methods gave reasonable agreement, with the CLT method working well for the top band, the band that attracts most public attention. For some bands in English and Mathematics, the CLT standard error was smaller than the EQP estimate, suggesting the CLT method be used with caution as an approximate guide only.
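
    The CLT method reduces to a one-line computation on the independent Stage 1 judgements; a minimal sketch:

        import numpy as np

        def clt_standard_error(stage1_cut_scores):
            """SE of the mean cut score: SD of the judges' independent Stage 1
            cut scores divided by the square root of the number of judges."""
            s = np.asarray(stage1_cut_scores, dtype=float)
            return s.std(ddof=1) / np.sqrt(s.size)

        # e.g. clt_standard_error([62, 58, 65, 60, 61, 59]) ≈ 1.01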

  20. On Gait Analysis Estimation Errors Using Force Sensors on a Smart Rollator

    Directory of Open Access Journals (Sweden)

    Joaquin Ballesteros

    2016-11-01

    Gait analysis can provide valuable information on a person's condition and rehabilitation progress. Gait is typically captured using external equipment and/or wearable sensors. These tests are largely constrained to specific controlled environments. In addition, gait analysis often requires experts for calibration, operation and/or to place sensors on volunteers. Alternatively, mobility support devices like rollators can be equipped with onboard sensors to monitor gait parameters while users perform their Activities of Daily Living. Gait analysis in rollators may use odometry and force sensors in the handlebars. However, force-based estimation of gait parameters is less accurate than traditional methods, especially when rollators are not properly used. This paper presents an evaluation of force-based gait analysis using a smart rollator on different groups of users to determine when this methodology is applicable. In a second stage, the rollator is used in combination with two lab-based gait analysis systems to assess the rollator estimation error. Our results show that: (i) there is an inverse relation between the variance in the force difference between handlebars and the support on the handlebars (related to the user condition) and the estimation error; and (ii) this error is lower than 10% when the variation in the force difference is above 7 N. This lower limit was exceeded by 95.83% of our challenged volunteers. In conclusion, rollators are useful for gait characterization as long as users really need the device for ambulation.

  1. On Gait Analysis Estimation Errors Using Force Sensors on a Smart Rollator.

    Science.gov (United States)

    Ballesteros, Joaquin; Urdiales, Cristina; Martinez, Antonio B; van Dieën, Jaap H

    2016-11-10

    Gait analysis can provide valuable information on a person's condition and rehabilitation progress. Gait is typically captured using external equipment and/or wearable sensors. These tests are largely constrained to specific controlled environments. In addition, gait analysis often requires experts for calibration, operation and/or to place sensors on volunteers. Alternatively, mobility support devices like rollators can be equipped with onboard sensors to monitor gait parameters while users perform their Activities of Daily Living. Gait analysis in rollators may use odometry and force sensors in the handlebars. However, force-based estimation of gait parameters is less accurate than traditional methods, especially when rollators are not properly used. This paper presents an evaluation of force-based gait analysis using a smart rollator on different groups of users to determine when this methodology is applicable. In a second stage, the rollator is used in combination with two lab-based gait analysis systems to assess the rollator estimation error. Our results show that: (i) there is an inverse relation between the variance in the force difference between handlebars and the support on the handlebars (related to the user condition) and the estimation error; and (ii) this error is lower than 10% when the variation in the force difference is above 7 N. This lower limit was exceeded by 95.83% of our challenged volunteers. In conclusion, rollators are useful for gait characterization as long as users really need the device for ambulation.

  2. Error Estimates of the Ares I Computed Turbulent Ascent Longitudinal Aerodynamic Analysis

    Science.gov (United States)

    Abdol-Hamid, Khaled S.; Ghaffari, Farhad

    2012-01-01

    Numerical predictions of the longitudinal aerodynamic characteristics for the Ares I class of vehicles, along with the associated error estimate derived from an iterative convergence grid refinement, are presented. Computational results are based on an unstructured grid, Reynolds-averaged Navier-Stokes analysis. The validity of the approach to compute the associated error estimates, derived from a base grid to an extrapolated infinite-size grid, was first demonstrated on a sub-scaled wind tunnel model at representative ascent flow conditions for which the experimental data existed. Such analysis at the transonic flow conditions revealed a maximum deviation of about 23% between the computed longitudinal aerodynamic coefficients with the base grid and the measured data across the entire roll angles. This maximum deviation from the wind tunnel data was associated with the computed normal force coefficient at the transonic flow condition and was reduced to approximately 16% based on the infinite-size grid. However, all the computed aerodynamic coefficients with the base grid at the supersonic flow conditions showed a maximum deviation of only about 8% with that level being improved to approximately 5% for the infinite-size grid. The results and the error estimates based on the established procedure are also presented for the flight flow conditions.

  3. An error reduction algorithm to improve lidar turbulence estimates for wind energy

    Directory of Open Access Journals (Sweden)

    J. F. Newman

    2017-02-01

    Remote-sensing devices such as lidars are currently being investigated as alternatives to cup anemometers on meteorological towers for the measurement of wind speed and direction. Although lidars can measure mean wind speeds at heights spanning an entire turbine rotor disk and can be easily moved from one location to another, they measure different values of turbulence than an instrument on a tower. Current methods for improving lidar turbulence estimates include the use of analytical turbulence models and expensive scanning lidars. While these methods provide accurate results in a research setting, they cannot be easily applied to smaller, vertically profiling lidars in locations where high-resolution sonic anemometer data are not available. Thus, there is clearly a need for a turbulence error reduction model that is simpler and more easily applicable to lidars that are used in the wind energy industry. In this work, a new turbulence error reduction algorithm for lidars is described. The Lidar Turbulence Error Reduction Algorithm, L-TERRA, can be applied using only data from a stand-alone vertically profiling lidar and requires minimal training with meteorological tower data. The basis of L-TERRA is a series of physics-based corrections that are applied to the lidar data to mitigate errors from instrument noise, volume averaging, and variance contamination. These corrections are applied in conjunction with a trained machine-learning model to improve turbulence estimates from a vertically profiling WINDCUBE v2 lidar. The lessons learned from creating the L-TERRA model for a WINDCUBE v2 lidar can also be applied to other lidar devices. L-TERRA was tested on data from two sites in the Southern Plains region of the United States. The physics-based corrections in L-TERRA brought regression line slopes much closer to 1 at both sites and significantly reduced the sensitivity of lidar turbulence errors to atmospheric stability. The accuracy of machine

  4. Computational Package for Copolymerization Reactivity Ratio Estimation: Improved Access to the Error-in-Variables-Model

    Directory of Open Access Journals (Sweden)

    Alison J. Scott

    2018-01-01

    Full Text Available The error-in-variables model (EVM) is the most statistically correct non-linear parameter estimation technique for reactivity ratio estimation. However, many polymer researchers are unaware of the advantages of EVM and therefore still choose rather erroneous or approximate methods. The procedure is straightforward, but it is often avoided because it is seen as mathematically and computationally intensive. Therefore, the goal of this work is to make EVM more accessible to all researchers through a series of focused case studies. All analyses employ a MATLAB-based computational package for copolymerization reactivity ratio estimation. The package builds on previous work in our group over many years. This version is an improvement, as it ensures wider compatibility and enhanced flexibility with respect to the copolymerization parameter estimation scenarios that can be considered.
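
    A hedged sketch of error-in-variables fitting of reactivity ratios, using scipy's orthogonal distance regression as a stand-in fitter (not the authors' MATLAB package) and the Mayo-Lewis composition equation with invented feed/composition data and uncertainties:

        import numpy as np
        from scipy.odr import ODR, Model, RealData

        def mayo_lewis(beta, f1):
            """Instantaneous copolymer composition F1 versus monomer feed
            fraction f1, with reactivity ratios beta = (r1, r2)."""
            r1, r2 = beta
            f2 = 1.0 - f1
            return (r1 * f1**2 + f1 * f2) / (r1 * f1**2 + 2 * f1 * f2 + r2 * f2**2)

        # Illustrative data with errors in both variables:
        f1 = np.array([0.10, 0.25, 0.40, 0.55, 0.70, 0.85])
        F1 = mayo_lewis((0.45, 1.80), f1) + 0.01 * np.random.default_rng(1).standard_normal(6)
        data = RealData(f1, F1, sx=0.005, sy=0.01)   # uncertainty in x and y
        fit = ODR(data, Model(mayo_lewis), beta0=[1.0, 1.0]).run()
        print("r1, r2 =", fit.beta, "+/-", fit.sd_beta)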

  5. Error estimates for near-Real-Time Satellite Soil Moisture as Derived from the Land Parameter Retrieval Model

    NARCIS (Netherlands)

    Parinussa, R.M.; Meesters, A.G.C.A.; Liu, Y.Y.; Dorigo, W.; Wagner, W.; de Jeu, R.A.M.

    2011-01-01

    A time-efficient solution to estimate the error of satellite surface soil moisture from the land parameter retrieval model is presented. The errors are estimated using an analytical solution for soil moisture retrievals from this radiative-transfer-based model that derives soil moisture from

  6. Improved children's motor learning of the basketball free shooting pattern by associating subjective error estimation and extrinsic feedback.

    Science.gov (United States)

    Silva, Leandro de Carvalho da; Pereira-Monfredini, Carla Ferro; Teixeira, Luis Augusto

    2017-09-01

    This study aimed at assessing the interaction between subjective error estimation and frequency of extrinsic feedback in the learning of the basketball free shooting pattern by children. 10- to 12-year-olds were assigned to 1 of 4 groups crossing subjective error estimation (required or not) with relative frequency of extrinsic feedback (33% or 100%). Analysis of performance was based on quality of movement pattern. Analysis showed superior learning in the group combining error estimation and 100% feedback frequency; both groups receiving feedback on 33% of trials achieved intermediate results, and the group combining no requirement of error estimation with 100% feedback frequency had the poorest learning. Our results show the benefit of subjective error estimation in association with a high frequency of extrinsic feedback in children's motor learning of a sport motor pattern.

  7. In vivo estimation of target registration errors during augmented reality laparoscopic surgery.

    Science.gov (United States)

    Thompson, Stephen; Schneider, Crispin; Bosi, Michele; Gurusamy, Kurinchi; Ourselin, Sébastien; Davidson, Brian; Hawkes, David; Clarkson, Matthew J

    2018-04-16

    Successful use of augmented reality for laparoscopic surgery requires that the surgeon has a thorough understanding of the likely accuracy of any overlay. Whilst the accuracy of such systems can be estimated in the laboratory, it is difficult to extend such methods to the in vivo clinical setting. Herein we describe a novel method that enables the surgeon to estimate in vivo errors during use. We show that the method enables quantitative evaluation of in vivo data gathered with the SmartLiver image guidance system. The SmartLiver system utilises an intuitive display to enable the surgeon to compare the positions of landmarks visible in both a projected model and in the live video stream. From this the surgeon can estimate the system accuracy when using the system to locate subsurface targets not visible in the live video. Visible landmarks may be either point or line features. We test the validity of the algorithm using an anatomically representative liver phantom, applying simulated perturbations to achieve clinically realistic overlay errors. We then apply the algorithm to in vivo data. The phantom results show that using projected errors of surface features provides a reliable predictor of subsurface target registration error for a representative human liver shape. Applying the algorithm to in vivo data gathered with the SmartLiver image-guided surgery system shows that the system is capable of accuracies around 12 mm; however, achieving this reliably remains a significant challenge. We present an in vivo quantitative evaluation of the SmartLiver image-guided surgery system, together with a validation of the evaluation algorithm. This is the first quantitative in vivo analysis of an augmented reality system for laparoscopic surgery.

  8. On sharp estimates of the convergence of double Fourier-Bessel series

    Science.gov (United States)

    Abilov, V. A.; Abilova, F. V.; Kerimov, M. K.

    2017-11-01

    The problem of approximation of a differentiable function of two variables by partial sums of a double Fourier-Bessel series is considered. Sharp estimates of the rate of convergence of the double Fourier-Bessel series on the class of differentiable functions of two variables characterized by a generalized modulus of continuity are obtained. The proofs of four theorems on this issue, which can be directly applied to solving particular problems of mathematical physics, approximation theory, etc., are presented.

  9. Estimating the State of Aerodynamic Flows in the Presence of Modeling Errors

    Science.gov (United States)

    da Silva, Andre F. C.; Colonius, Tim

    2017-11-01

    The ensemble Kalman filter (EnKF) has proven successful in fields such as meteorology, in which high-dimensional nonlinear systems render classical estimation techniques impractical. When the model used to forecast state evolution misrepresents important aspects of the true dynamics, estimator performance may degrade. In this work, parametrization and state augmentation are used to track misspecified boundary conditions (e.g., free-stream perturbations). The resolution error is modeled as a Gaussian-distributed random variable with the mean (bias) and variance to be determined. The dynamics of the flow past a NACA 0009 airfoil at high angles of attack and moderate Reynolds number are represented by a Navier-Stokes solver with immersed-boundary capabilities. The pressure distribution on the airfoil or the velocity field in the wake, both randomized by synthetic noise, is sampled as measurement data and incorporated into the estimated state and bias following Kalman's analysis scheme. Insights are conveyed about how to specify the modeling error covariance matrix and its impact on estimator performance. This work has been supported in part by a Grant from AFOSR (FA9550-14-1-0328) with Dr. Douglas Smith as program manager, and by a Science without Borders scholarship from the Ministry of Education of Brazil (Capes Foundation - BEX 12966/13-4).
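
    A minimal sketch of the state-augmentation idea, assuming a linear toy observation model rather than the paper's Navier-Stokes setup: the unknown bias is appended to the state vector, so the same perturbed-observation EnKF analysis corrects both.

        import numpy as np

        rng = np.random.default_rng(2)

        # Invented dimensions/dynamics: state x (3 entries) plus one bias b.
        N, nx = 50, 3
        ens = rng.standard_normal((N, nx + 1))        # columns: x and bias b
        H = np.array([[1.0, 0.0, 0.0, 1.0]])          # observe x[0] plus bias
        R = np.array([[0.05]])
        y = np.array([1.2])                           # one noisy observation

        X = ens - ens.mean(axis=0)                    # ensemble anomalies
        P = X.T @ X / (N - 1)                         # sample covariance
        K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)  # Kalman gain
        for i in range(N):                            # perturbed-obs update
            y_pert = y + rng.normal(0.0, np.sqrt(R[0, 0]), size=1)
            ens[i] = ens[i] + K @ (y_pert - H @ ens[i])

        print("posterior bias estimate:", ens[:, -1].mean().round(3))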

  10. Energy dependent mesh adaptivity of discontinuous isogeometric discrete ordinate methods with dual weighted residual error estimators

    Science.gov (United States)

    Owens, A. R.; Kópházi, J.; Welch, J. A.; Eaton, M. D.

    2017-04-01

    In this paper a hanging-node, discontinuous Galerkin, isogeometric discretisation of the multigroup, discrete ordinates (SN) equations is presented in which each energy group has its own mesh. The equations are discretised using Non-Uniform Rational B-Splines (NURBS), which allows the coarsest mesh to exactly represent the geometry for a wide range of engineering problems of interest; this would not be the case using straight-sided finite elements. Information is transferred between meshes via the construction of a supermesh. This is a non-trivial task for two arbitrary meshes, but is significantly simplified here by deriving every mesh from a common coarsest initial mesh. In order to take full advantage of this flexible discretisation, goal-based error estimators are derived for the multigroup, discrete ordinates equations with both fixed (extraneous) and fission sources, and these estimators are used to drive an adaptive mesh refinement (AMR) procedure. The method is applied to a variety of test cases for both fixed and fission source problems. The error estimators are found to be extremely accurate for linear NURBS discretisations, with degraded performance for quadratic discretisations owing to a reduction in relative accuracy of the "exact" adjoint solution required to calculate the estimators. Nevertheless, the method seems to produce optimal meshes in the AMR process for both linear and quadratic discretisations, and is approximately 100 times more accurate than uniform refinement for the same amount of computational effort for a 67-group deep-penetration shielding problem.

  11. Estimating and comparing microbial diversity in the presence of sequencing errors

    Directory of Open Access Journals (Sweden)

    Chun-Huo Chiu

    2016-02-01

    Full Text Available Estimating and comparing microbial diversity are statistically challenging due to limited sampling and possible sequencing errors for low-frequency counts, which produce spurious singletons. The inflated singleton count seriously affects statistical analysis and inferences about microbial diversity. Previous statistical approaches to tackling sequencing errors generally require different parametric assumptions about the sampling model or about the functional form of frequency counts. Different parametric assumptions may lead to drastically different diversity estimates. We focus on nonparametric methods which are universally valid for all parametric assumptions and can be used to compare diversity across communities. We develop here a nonparametric estimator of the true singleton count to replace the spurious singleton count in all methods/approaches. Our estimator of the true singleton count is in terms of the frequency counts of doubletons, tripletons and quadrupletons, provided these three frequency counts are reliable. To quantify microbial alpha diversity for an individual community, we adopt the measure of Hill numbers (effective number of taxa) under a nonparametric framework. Hill numbers, parameterized by an order q that determines the measures' emphasis on rare or common species, include taxa richness (q = 0), Shannon diversity (q = 1, the exponential of Shannon entropy), and Simpson diversity (q = 2, the inverse of the Simpson index). A diversity profile, which depicts the Hill number as a function of order q, conveys all the information contained in a taxa abundance distribution. Based on the estimated singleton count and the original non-singleton frequency counts, two statistical approaches (non-asymptotic and asymptotic) are developed to compare microbial diversity for multiple communities. (1) A non-asymptotic approach refers to the comparison of estimated diversities of standardized samples with a common finite sample size or sample
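
    The Hill-number calculation itself is compact; the sketch below computes it from raw abundance counts (the paper's corrected singleton estimator, built from doubleton, tripleton and quadrupleton counts, is not reproduced here):

        import numpy as np

        def hill_number(counts, q):
            """Hill number of order q (effective number of taxa) from taxon
            abundances; q=0 richness, q=1 exp(Shannon), q=2 inverse Simpson."""
            p = np.asarray(counts, dtype=float)
            p = p[p > 0] / p.sum()
            if q == 1:
                return np.exp(-np.sum(p * np.log(p)))
            return np.sum(p**q) ** (1.0 / (1.0 - q))

        counts = [120, 55, 30, 8, 3, 1, 1, 1]   # illustrative abundances
        for q in (0, 1, 2):
            print(f"q={q}: {hill_number(counts, q):.2f}")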

  12. Characteristics and Error Modeling of GPM Satellite Rainfall Estimates over Different Regions of Brazil

    Science.gov (United States)

    Oliveira, R. A. J.; Vila, D. A.; Maggioni, V.; Morales, C. A.

    2015-12-01

    This study aims to investigate, over different regions of Brazil, the error characteristics and uncertainties (random and systematic error components) in satellite-based precipitation estimates, comparing the Goddard Profiling Algorithm (GPROF), applied to different sensors in the GPM database (GMI, TMI, SSMI/S, AMSR2, MHS, among others), and the Integrated Multi-satellitE Retrievals for GPM (IMERG) algorithm. The analyses use ground-based (S- and X-band dual-polarization weather radar) and space-based (e.g., the TRMM-PR and GPM-DPR Ku-band active radars) rainfall estimates as references at instantaneous timescales, respecting their temporal limitations. The Precipitation Uncertainties for Satellite Hydrology (PUSH) framework is used for uncertainty characterization and error modeling. In particular, this study focuses on specific regions of Brazil where the campaigns of the CHUVA project occurred (CHUVA/GoAmazon IOP1 and 2 in Amazonia) and on southern Brazil, where S-band dual-polarization radars (e.g., the FCTH radar) are located.

  13. Estimating Root Mean Square Errors in Remotely Sensed Soil Moisture over Continental Scale Domains

    Science.gov (United States)

    Draper, Clara S.; Reichle, Rolf; de Jeu, Richard; Naeimi, Vahid; Parinussa, Robert; Wagner, Wolfgang

    2013-01-01

    Root Mean Square Errors (RMSE) in the soil moisture anomaly time series obtained from the Advanced Scatterometer (ASCAT) and the Advanced Microwave Scanning Radiometer (AMSR-E; using the Land Parameter Retrieval Model) are estimated over a continental-scale domain centered on North America, using two methods: triple colocation (RMSE_TC) and error propagation through the soil moisture retrieval models (RMSE_EP). In the absence of an established consensus for the climatology of soil moisture over large domains, presenting an RMSE in soil moisture units requires that it be specified relative to a selected reference data set. To avoid the complications that arise from the use of a reference, the RMSE is presented as a fraction of the time series standard deviation (fRMSE). For both sensors, the fRMSE_TC and fRMSE_EP show similar spatial patterns of relatively high/low errors, and the mean fRMSE for each land cover class is consistent with expectations. Triple colocation is also shown to be surprisingly robust to representativity differences between the soil moisture data sets used, and it is believed to accurately estimate the fRMSE in the remotely sensed soil moisture anomaly time series. Comparing the ASCAT and AMSR-E fRMSE_TC shows that both data sets have very similar accuracy across a range of land cover classes, although the AMSR-E accuracy is more directly related to vegetation cover. In general, both data sets have good skill up to moderate vegetation conditions.

  14. Evaluation of the sources of error in the linepack estimation of a natural gas pipeline

    Energy Technology Data Exchange (ETDEWEB)

    Marco, Fabio Capelassi Gavazzi de [Transportadora Brasileira Gasoduto Bolivia-Brasil S.A. (TBG), Rio de Janeiro, RJ (Brazil)

    2012-07-01

    The intent of this work is to explore the behavior of the random error associated with the determination of linepack in a complex natural gas pipeline, based on the effect introduced by the uncertainty of the different variables involved. Many parameters enter the determination of the gas inventory in a transmission pipeline: geometrical (diameter, length and elevation profile), operational (pressure, temperature and gas composition), environmental (ambient/ground temperature) and those dependent on the modeling assumptions (compressibility factor and heat transfer coefficient). Due to the extent of a natural gas pipeline and the vast number of sensors involved, it is infeasible to determine analytically the magnitude of the resulting uncertainty in the linepack, so this problem has been addressed using the Monte Carlo method. The approach consists of introducing random errors into the values of pressure, temperature and gas gravity that are employed in the determination of the linepack and verifying their impact. Additionally, the errors associated with three different modeling assumptions used to estimate the linepack are explored. The results reveal that pressure is the most critical variable while temperature is the least critical. Regarding the different methods to estimate the linepack, deviations of around 1.6% were observed among the methods. (author)
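
    A minimal Monte Carlo sketch of this kind of propagation, assuming a deliberately simplified inventory model and invented sensor uncertainties (the study's pipeline model and error magnitudes are not reproduced):

        import numpy as np

        rng = np.random.default_rng(3)

        def linepack(p_bar, t_kelvin, z, volume_m3):
            """Gas inventory (standard m3) of a pipe segment from mean pressure,
            temperature and compressibility; a deliberately simplified model."""
            p_std, t_std = 1.01325, 288.15
            return volume_m3 * (p_bar / p_std) * (t_std / t_kelvin) / z

        n = 100_000
        p = rng.normal(60.0, 0.2, n)      # pressure [bar] with sensor error
        t = rng.normal(293.0, 1.0, n)     # temperature [K]
        z = rng.normal(0.88, 0.005, n)    # compressibility factor
        lp = linepack(p, t, z, volume_m3=5_000.0)
        print(f"linepack = {lp.mean():.0f} sm3 +/- {lp.std() / lp.mean():.2%}")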

  15. Improved estimations of low-degree coefficients using GPS displacements with reduced non-loading errors

    Science.gov (United States)

    Wei, Na; Shi, Chuang; Wang, Guangxing; Liu, Jingnan

    2018-02-01

    We investigate and try to reduce the impacts on low-degree estimates of non-loading errors, that is, aliasing of unmodeled loading and Global Positioning System (GPS) draconitic year errors, to improve the sensitivity of GPS observations to the loading mass. Three GPS data sets, ITRF2008-GPS residuals, ITRF2014-GPS residuals and Jet Propulsion Laboratory (JPL) residuals, are used and compared in this paper. Results show that aliasing signals in GPS displacements are an important error source, especially for inferring geocentre motion. The two International Terrestrial Reference Frame (ITRF)-GPS residuals, generated in a two-step combination based on Helmert transformation, show more complex aliasing errors than JPL's residuals produced in precise point positioning mode. The seasonal variations of geocentre motion derived from JPL thus perform the best among all three solutions, while the higher-degree coefficients from the two ITRF-GPS solutions do better. Compared with ITRF2008-GPS residuals, the aliasing errors are indeed reduced, and geocentre motion and ΔT_20^C (degree-2 zonal coefficients in terms of surface mass density) are also much improved for ITRF2014-GPS residuals produced with a six-parameter transformation without a scale parameter. Additional translation parameters should be included in ITRF2008-GPS residuals, or else ΔT_20^C cannot be correctly obtained. The draconitic errors pose another obstacle to accurately studying the seasonal variations of surface loading using GPS data. The draconitic harmonics (first, second and third) are well extracted from ITRF2014-derived ΔT_20^C and ΔT_21^S (degree-2 and order-1 sine coefficients), even if the time span is not long enough to independently separate the seasonal variations and draconitic harmonics. These errors account for an increase of about 10 per cent in the annual amplitude of ITRF2014-derived ΔT_20^C and ΔT_21^S. Removing the found draconitic errors

  16. Estimating the Population Mean by Using Stratified Double Extreme Ranked Set Sample

    OpenAIRE

    Mahmoud I. Syam; Kamarulzaman Ibrahim; Amer I. Al-Omari

    2015-01-01

    Stratified double extreme ranked set sampling (SDERSS) is introduced and considered for estimating the population mean. The SDERSS is compared with simple random sampling (SRS), stratified ranked set sampling (SRSS) and stratified simple random sampling (SSRS). It is shown that the SDERSS estimator is an unbiased estimator of the population mean and is more efficient than the estimators using SRS, SRSS and SSRS when the underlying distribution of the variable of interest is symmetric.

  17. On the population median estimation using quartile double ranked set sampling

    OpenAIRE

    Amer Ibrahim Al-Omari; Loai Mahmoud Al-Zubi; Ahmad Khazaleh

    2015-01-01

    In this article, quartile double ranked set sampling (QDRSS) method is considered for estimating the population median. The sample median based on QDRSS is suggested as an estimator of the population median. The QDRSS is compared with the simple random sampling (SRS), ranked set sampling (RSS) and quartile ranked set sampling (QRSS) methods. A real data set is used for illustration. It turns out that, for the symmetric distributions considered in this study, the QDRSS estimators are unbiased ...

  18. Estimates of Single Sensor Error Statistics for the MODIS Matchup Database Using Machine Learning

    Science.gov (United States)

    Kumar, C.; Podesta, G. P.; Minnett, P. J.; Kilpatrick, K. A.

    2017-12-01

    Sea surface temperature (SST) is a fundamental quantity for understanding weather and climate dynamics. Although sensors aboard satellites provide global and repeated SST coverage, a characterization of SST precision and bias is necessary for determining the suitability of SST retrievals in various applications. Guidance on how to derive meaningful error estimates is still being developed. Previous methods estimated retrieval uncertainty based on geophysical factors, e.g. season or "wet" and "dry" atmospheres, but the discrete nature of these bins led to spatial discontinuities in SST maps. Recently, a new approach clustered retrievals based on the terms (excluding offset) in the statistical algorithm used to estimate SST. This approach resulted in over 600 clusters - too many to understand the geophysical conditions that influence retrieval error. Using MODIS and buoy SST matchups (2002 - 2016), we use machine learning algorithms (recursive and conditional trees, random forests) to gain insight into geophysical conditions leading to the different signs and magnitudes of MODIS SST residuals (satellite SSTs minus buoy SSTs). MODIS retrievals were first split into three categories by residual, with boundaries at +/- 0.4 C. These categories are heavily unbalanced, with residuals > 0.4 C being much less frequent. Performance of classification algorithms is affected by imbalance, thus we tested various rebalancing algorithms (oversampling, undersampling, combinations of the two). We consider multiple features for the decision tree algorithms: regressors from the MODIS SST algorithm, proxies for temperature deficit, and spatial homogeneity of brightness temperatures (BTs), e.g., the range of 11 μm BTs inside a 25 km2 area centered on the buoy location. These features and a rebalancing of classes led to an 81.9% accuracy when classifying SST retrievals into the three categories. The spatial homogeneity of BTs consistently appears as a very important variable for classification, suggesting that unidentified cloud contamination still is one of the
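
    A hedged sketch of this classification-with-rebalancing setup using scikit-learn; the features, labels and class proportions are synthetic stand-ins for the MODIS matchup predictors, and class weighting stands in for the resampling schemes tested in the study:

        from sklearn.ensemble import RandomForestClassifier
        from sklearn.datasets import make_classification
        from sklearn.model_selection import train_test_split
        from sklearn.metrics import balanced_accuracy_score

        # Three heavily imbalanced residual categories (synthetic data):
        X, y = make_classification(n_samples=5000, n_features=8, n_informative=5,
                                   n_classes=3, weights=[0.75, 0.2, 0.05],
                                   random_state=0)
        X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
        clf = RandomForestClassifier(n_estimators=300, class_weight="balanced",
                                     random_state=0).fit(X_tr, y_tr)
        print("balanced accuracy:", balanced_accuracy_score(y_te, clf.predict(X_te)))
        print("feature importances:", clf.feature_importances_.round(3))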

  19. A modelling error approach for the estimation of optical absorption in the presence of anisotropies

    Energy Technology Data Exchange (ETDEWEB)

    Heino, Jenni [Helsinki University of Technology, Laboratory of Biomedical Engineering, PO Box 2200, FIN-02015 HUT (Finland); Somersalo, Erkki [Helsinki University of Technology, Institute of Mathematics, PO Box 1100, FIN-02015 HUT (Finland)

    2004-10-21

    Optical tomography is an emerging method for non-invasive imaging of human tissues using near-infrared light. Generally, the tissue is assumed isotropic, but this may not always be true. In this paper, we present a method for the estimation of optical absorption coefficient allowing the background to be anisotropic. To solve the forward problem, we model the light propagation in tissue using an anisotropic diffusion equation. The inverse problem consists of the estimation of the absorption coefficient based on boundary measurements. Generally, the background anisotropy cannot be assumed to be known. We treat the uncertainties in the background anisotropy parameter values as modelling error, and include this in our model and reconstruction. We present numerical examples based on simulated data. For reference, examples using an isotropic inversion scheme are also included. The estimates are qualitatively different for the two methods.

  20. Convergence, error estimation and adaptivity in non-elliptic coupled electro-mechanical problems

    Science.gov (United States)

    Zboiński, Grzegorz

    2018-01-01

    This paper presents the influence of the lack of the ellipticity property on solution convergence in coupled electro-mechanical problems. This influence manifests as non-monotonic convergence, which can hardly be described analytically. We recall our previous unpublished research, in which we demonstrate that the non-monotonicity depends strongly on the energy levels of the two component parts of the energy related to the coupled fields of mechanical and electric character. We further investigate the influence of this non-monotonic convergence on error estimation via the equilibrated residual method. We also assess the influence of such convergence on three-step error-controlled adaptive algorithms. We indicate practical methods for overcoming the mentioned problems related to the lack of ellipticity.

  1. Development and estimation of a semi-compensatory model with flexible error structure

    DEFF Research Database (Denmark)

    Kaplan, Sigal; Shiftan, Yoram; Bekhor, Shlomo

    2009-01-01

    that alleviates these simplifying assumptions concerning (i) the number of alternatives, (ii) the representation of choice set formation, and (iii) the error structure. The proposed semi-compensatory model represents a sequence of choice set formation based on the conjunctive heuristic with correlated thresholds…, and utility-based choice accommodating alternatively nested substitution patterns across the alternatives and random taste variation across the population. The proposed model is applied to off-campus rental apartment choice of students. Results show (i) the estimated model for a universal realm of 200… alternatives and 41 choice sets, (ii) the threshold representation as a function of individual characteristics, and (iii) the feasibility and importance of introducing a flexible error structure into semi-compensatory models…

  2. Adaptive finite element analysis of incompressible viscous flow using posteriori error estimation and control of node density distribution

    International Nuclear Information System (INIS)

    Yashiki, Taturou; Yagawa, Genki; Okuda, Hiroshi

    1995-01-01

    The adaptive finite element method based on a posteriori error estimation is known to be a powerful technique for analyzing practical engineering problems, since it removes the subjective aspect of mesh subdivision and gives high accuracy at relatively low computational cost. In the adaptive procedure, both the error estimation and the mesh generation according to the error estimator are essential. In this paper, the adaptive procedure is realized by automatic mesh generation based on the control of the node density distribution, which is decided according to the error estimator. The global percentage error, CPU time, degrees of freedom and solution accuracy of the adaptive procedure are compared with those of the conventional method using regular meshes. Numerical examples such as driven cavity flows at various Reynolds numbers and flows around a cylinder have shown the very high performance of the proposed adaptive procedure. (author)
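
    A toy one-dimensional analogue of error-driven node density control, assuming a slope-jump indicator in place of a real a posteriori estimator; it only illustrates the refine-where-the-indicator-is-large loop:

        import numpy as np

        def adapt(xs, f, tol=1e-3, rounds=5):
            """Insert nodes wherever a local error indicator (here the jump in
            the slope of a piecewise-linear interpolant) exceeds tol."""
            for _ in range(rounds):
                y = f(xs)
                slope = np.diff(y) / np.diff(xs)
                jump = np.abs(np.diff(slope))            # indicator per interior node
                new = 0.5 * (xs[1:-1][jump > tol] + xs[2:][jump > tol])
                if new.size == 0:
                    break
                xs = np.sort(np.concatenate([xs, new]))
            return xs

        f = lambda x: np.tanh(20 * (x - 0.5))            # sharp internal layer
        mesh = adapt(np.linspace(0.0, 1.0, 11), f)
        print(len(mesh), "nodes; densest spacing:", np.diff(mesh).min())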

  3. Estimating the acute health effects of coarse particulate matter accounting for exposure measurement error.

    Science.gov (United States)

    Chang, Howard H; Peng, Roger D; Dominici, Francesca

    2011-10-01

    In air pollution epidemiology, there is a growing interest in estimating the health effects of coarse particulate matter (PM) with aerodynamic diameter between 2.5 and 10 μm. Coarse PM concentrations can exhibit considerable spatial heterogeneity because the particles travel shorter distances and do not remain suspended in the atmosphere for an extended period of time. In this paper, we develop a modeling approach for estimating the short-term effects of air pollution in time series analysis when the ambient concentrations vary spatially within the study region. Specifically, our approach quantifies the error in the exposure variable by characterizing, on any given day, the disagreement in ambient concentrations measured across monitoring stations. This is accomplished by viewing monitor-level measurements as error-prone repeated measurements of the unobserved population average exposure. Inference is carried out in a Bayesian framework to fully account for uncertainty in the estimation of model parameters. Finally, by using different exposure indicators, we investigate the sensitivity of the association between coarse PM and daily hospital admissions based on a recent national multisite time series analysis. Among Medicare enrollees from 59 US counties between the period 1999 and 2005, we find a consistent positive association between coarse PM and same-day admission for cardiovascular diseases.

  4. Estimation of the combinatorial background in dimuon spectra using recombination of muons, and associated error

    International Nuclear Information System (INIS)

    Constantinescu, S.; Dita, S.; Jouan, D.

    1996-01-01

    True muon-pair production is experimentally accompanied by a background of random combinations originating mainly from uncorrelated decays of π and K mesons into muons. Several methods for determining this background have been proposed in past years. The nontrivial errors that have to be associated with the 'recombination method', which is the principal focus of this study, are estimated. For the sake of comparison, the other method currently used in the NA38 experiment, which derives from the same framework, is also considered. (K.A.)
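
    The uncorrelated background that such methods quantify is classically estimated from like-sign pairs as N_bg = 2R√(N⁺⁺N⁻⁻); the sketch below evaluates that textbook formula and its Poisson error on invented counts (the recombination method itself refines this framework and is not reproduced here):

        import math

        def combinatorial_background(n_pp, n_mm, r=1.0):
            """Like-sign estimate of the uncorrelated opposite-sign dimuon
            background, N_bg = 2 * R * sqrt(N++ * N--)."""
            return 2.0 * r * math.sqrt(n_pp * n_mm)

        n_pp, n_mm = 1450, 1320           # illustrative like-sign pair counts
        n_bg = combinatorial_background(n_pp, n_mm)
        # Poisson propagation: var(N_bg) = (N_bg / 2)**2 * (1/N++ + 1/N--)
        err = 0.5 * n_bg * math.sqrt(1.0 / n_pp + 1.0 / n_mm)
        print(f"combinatorial background = {n_bg:.0f} +/- {err:.0f}")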

  5. An Automatic Quadrature Schemes and Error Estimates for Semibounded Weighted Hadamard Type Hypersingular Integrals

    Directory of Open Access Journals (Sweden)

    Sirajo Lawan Bichi

    2014-01-01

    Full Text Available The approximate solutions of semibounded Hadamard-type hypersingular integrals (HSIs) for smooth density functions are investigated. The automatic quadrature schemes (AQSs) are constructed by approximating the density function using Chebyshev polynomials of the third and fourth kinds. Error estimates for the semibounded solutions are obtained in the class h(t) ∈ C^{N,α}[−1,1]. Numerical results for the obtained quadrature schemes reveal that the proposed methods are highly accurate when the density function h(t) is any polynomial or rational function. The results are in line with the theoretical findings.

  6. Use and Subtleties of Saddlepoint Approximation for Minimum Mean-Square Error Estimation

    DEFF Research Database (Denmark)

    Beierholm, Thomas; Nuttall, Albert H.; Hansen, Lars Kai

    2008-01-01

    An integral representation for the minimum mean-square error (MMSE) estimator for a random variable in an observation model consisting of a linear combination of two random variables is derived. The derivation is based on the moment-generating functions for the random variables in the observation...... integral representation. However, the examples also demonstrate that when two saddle points are close or coalesce, then saddle-point approximation based on isolated saddle points is not valid. A saddle-point approximation based on two close or coalesced saddle points is derived and in the examples...

  7. Recursive prediction error methods for online estimation in nonlinear state-space models

    Directory of Open Access Journals (Sweden)

    Dag Ljungquist

    1994-04-01

    Full Text Available Several recursive algorithms for online, combined state and parameter estimation in nonlinear state-space models are discussed in this paper. Well-known algorithms such as the extended Kalman filter and alternative formulations of the recursive prediction error method are included, as well as a new method based on a line-search strategy. A comparison of the algorithms illustrates that they are very similar although the differences can be important for the online tracking capabilities and robustness. Simulation experiments on a simple nonlinear process show that the performance under certain conditions can be improved by including a line-search strategy.
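
    A minimal recursive prediction error update of the recursive-least-squares type on an invented scalar nonlinear model; the algorithms compared in the paper (extended Kalman filter and line-search variants) elaborate this same recursion:

        import numpy as np

        rng = np.random.default_rng(4)

        # Toy system: y = a * x**2 + noise, with unknown parameter a.
        a_true, a_hat, P = 2.5, 0.0, 100.0
        for _ in range(200):
            x = rng.uniform(0.5, 1.5)
            y = a_true * x**2 + 0.05 * rng.standard_normal()
            psi = x**2                          # gradient of prediction wrt a
            eps = y - a_hat * x**2              # prediction error
            K = P * psi / (1.0 + psi * P * psi)
            a_hat += K * eps                    # parameter update
            P = (1.0 - K * psi) * P             # covariance update
        print("estimated a:", round(a_hat, 3))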

  8. Error analysis and new dual-cosine window for estimating the sensor frequency response function from the step response data

    Science.gov (United States)

    Yang, Shuang-Long; Liang, Li-Ping; Liu, Hou-De; Xu, Ke-Jun

    2018-03-01

    Aiming at reducing the estimation error of the sensor frequency response function (FRF) estimated by the commonly used window-based spectral estimation method, the error models of the interpolation and transient errors are derived in the form of non-parametric models. Window effects on these errors are then analyzed, revealing that the commonly used Hanning window leads to a smaller interpolation error, which can be further eliminated by cubic spline interpolation when estimating the FRF from step response data, and that a window with a smaller front-end value restrains more of the transient error. Thus, a new dual-cosine window, whose non-zero discrete Fourier transform bins lie at -3, -1, 0, 1, and 3, is constructed for FRF estimation. Compared with the Hanning window, the new dual-cosine window has equivalent interpolation error suppression capability and better transient error suppression capability when estimating the FRF from the step response; specifically, it reduces the asymptotic order of the transient error from O(N^-2) for the Hanning window method to O(N^-4) while only increasing the uncertainty slightly (about 0.4 dB). One axis of a wind tunnel strain gauge balance, which is a high-order, lightly damped, non-minimum-phase system, is then employed as an example to verify the new dual-cosine window-based spectral estimation method. The model simulation results show that the new dual-cosine window method is better than the Hanning window method for FRF estimation, and compared with the Gans method and the LPM method it has the advantages of simple computation, low time consumption, and short data requirements; the calculation results for actual balance FRF data are consistent with the simulation results. Thus, the new dual-cosine window is effective and practical for FRF estimation.
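
    A sketch of a window with non-zero DFT bins only at 0, ±1 and ±3, the defining property quoted above; the coefficients below are placeholders chosen so the window vanishes at its front end, not the optimized values derived in the paper:

        import numpy as np

        N = 1024
        n = np.arange(N)
        a0, a1, a3 = 0.6, 0.5, 0.1        # placeholder coefficients, a0 = a1 + a3
        w = a0 - a1 * np.cos(2 * np.pi * n / N) - a3 * np.cos(6 * np.pi * n / N)
        print("front-end value w[0]:", w[0])   # zero -> small transient error
        spec = np.abs(np.fft.rfft(w))
        # Only bins 0, 1 and 3 carry energy, matching the stated property:
        print("non-zero DFT bins:", np.nonzero(spec > 1e-6 * spec.max())[0])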

  9. Robust Estimator for Non-Line-of-Sight Error Mitigation in Indoor Localization

    Directory of Open Access Journals (Sweden)

    Marco A

    2006-01-01

    Full Text Available Indoor localization systems are undoubtedly of interest in many application fields. Like outdoor systems, they suffer from non-line-of-sight (NLOS) errors which hinder their robustness and accuracy. Though many ad hoc techniques have been developed to deal with this problem, unfortunately most of them are not applicable indoors due to the high variability of the environment (movement of furniture and of people, etc.). In this paper, we describe the use of robust regression techniques to detect and reject NLOS measures in location estimation using multilateration. We show how the least-median-of-squares technique can be used to overcome the effects of NLOS errors, even in environments with little infrastructure, and validate its suitability by comparing it to other methods described in the bibliography. We obtained remarkable results when using it in a real indoor positioning system that works with Bluetooth and ultrasound (BLUPS), even when nearly half the measures suffered from NLOS or other coarse errors.
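
    A compact sketch of least-median-of-squares multilateration: solve the position on every minimal subset of anchors and keep the solution with the smallest median squared range residual. The anchor layout, noise and NLOS bias below are invented:

        import itertools
        import numpy as np

        rng = np.random.default_rng(5)

        def trilaterate(anchors, d):
            """2-D position from 3 anchor/range pairs by linearising the range
            equations (subtracting the first equation from the others)."""
            A = 2.0 * (anchors[1:] - anchors[0])
            b = (d[0]**2 - d[1:]**2
                 + np.sum(anchors[1:]**2, axis=1) - np.sum(anchors[0]**2))
            return np.linalg.lstsq(A, b, rcond=None)[0]

        anchors = np.array([[0., 0.], [10., 0.], [0., 10.], [10., 10.], [5., 12.]])
        p_true = np.array([4.0, 6.0])
        d = np.linalg.norm(anchors - p_true, axis=1) + 0.05 * rng.standard_normal(5)
        d[2] += 3.0                                # NLOS-corrupted measurement

        best_pos, best_med = None, np.inf
        for s in itertools.combinations(range(len(anchors)), 3):
            s = list(s)
            pos = trilaterate(anchors[s], d[s])
            res = np.linalg.norm(anchors - pos, axis=1) - d
            med = np.median(res**2)                # LMedS criterion
            if med < best_med:
                best_pos, best_med = pos, med
        print("LMedS position:", best_pos.round(2), "true:", p_true)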

  10. Estimation of the sampling interval error for LED measurement with a goniophotometer

    Science.gov (United States)

    Zhao, Weiqiang; Liu, Hui; Liu, Jian

    2013-06-01

    When a goniophotometer is used to implement a total luminous flux measurement, an error arises from the finite sampling interval, especially in the case of LED measurement. In this work, we use computer calculations to estimate the effect of the sampling interval on measuring the total luminous flux for four typical kinds of LEDs, whose spatial distributions of luminous intensity are similar to those of the LEDs shown in the CIE 127 paper. Four basic kinds of mathematical functions are selected to simulate the distribution curves. Both axially symmetric and non-axially symmetric LEDs are taken into account. We consider polar angle sampling intervals of 0.5°, 1°, 2°, and 5° in one rotation for the axially symmetric type, and azimuth angle sampling intervals of 18°, 15°, 12°, 10° and 5° for the non-axially symmetric type. We note that the error is strongly related to the spatial distribution. For common LED light sources, however, the calculation results show that a polar angle sampling interval of 2° and an azimuth angle sampling interval of 15° are recommended. The systematic error of the sampling interval for a goniophotometer can then be controlled at the level of 0.3%. For higher precision, a polar angle sampling interval of 1° and an azimuth angle sampling interval of 10° should be used.
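
    The underlying computation is a Riemann sum of the luminous intensity over the sphere; the sketch below reproduces that idea for an assumed Lambertian-like axially symmetric pattern (the paper's four CIE 127-type distributions are not reproduced) and shows how the flux error shrinks with the polar sampling interval:

        import numpy as np

        def total_flux(intensity, d_theta_deg, d_phi_deg):
            """Riemann-sum estimate of total luminous flux (lm) from a luminous
            intensity distribution I(theta, phi) in cd, sampled at the given
            polar/azimuth intervals (midpoint rule in the polar angle)."""
            th = np.radians(np.arange(0.0, 180.0, d_theta_deg) + d_theta_deg / 2)
            ph = np.radians(np.arange(0.0, 360.0, d_phi_deg))
            TH, PH = np.meshgrid(th, ph, indexing="ij")
            dA = np.sin(TH) * np.radians(d_theta_deg) * np.radians(d_phi_deg)
            return np.sum(intensity(TH, PH) * dA)

        # Assumed axially symmetric pattern, I = I0 * cos(theta) on a hemisphere:
        lamb = lambda th, ph: np.where(th < np.pi / 2, 100.0 * np.cos(th), 0.0)
        exact = np.pi * 100.0                      # closed form for this pattern
        for step in (0.5, 1.0, 2.0, 5.0):
            est = total_flux(lamb, step, 15.0)
            print(f"polar step {step:>3}°: relative error {abs(est - exact) / exact:.4%}")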

  11. MI Double Feature: Multiple Imputation to Address Nonresponse and Rounding Errors in Income Questions

    Directory of Open Access Journals (Sweden)

    Joerg Drechsler

    2015-04-01

    Full Text Available Obtaining reliable income information in surveys is difficult for two reasons. On the one hand, many survey respondents consider income to be sensitive information and thus are reluctant to answer questions regarding their income. If those survey participants who do not provide information on their income are systematically different from the respondents - and there is ample research indicating that they are - results based only on the observed income values will be misleading. On the other hand, respondents tend to round their income. This second source of error, especially, is usually ignored when analyzing income information. In a recent paper, Drechsler and Kiesl (2014) illustrated that inferences based on the collected information can be biased if the rounding is ignored and suggested a multiple imputation strategy to account for the rounding in reported income. In this paper we extend their approach to also address the nonresponse problem. We illustrate the approach using the household income variable from the German panel study "Labor Market and Social Security".

  12. Estimating the Persistence and the Autocorrelation Function of a Time Series that is Measured with Error

    DEFF Research Database (Denmark)

    Hansen, Peter Reinhard; Lunde, Asger

    2014-01-01

    An economic time series can often be viewed as a noisy proxy for an underlying economic variable. Measurement errors will influence the dynamic properties of the observed process and may conceal the persistence of the underlying time series. In this paper we develop instrumental variable (IV) methods for extracting information about the latent process. Our framework can be used to estimate the autocorrelation function of the latent volatility process and a key persistence parameter. Our analysis is motivated by the recent literature on realized volatility measures that are imperfect estimates of actual volatility. In an empirical analysis using realized measures for the Dow Jones industrial average stocks, we find the underlying volatility to be near unit root in all cases. Although standard unit root tests are asymptotically justified, we find them to be misleading in our application despite
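
    The essence of the IV idea can be sketched on a toy AR(1) latent process observed with independent noise: ordinary least squares on the noisy proxy is attenuated, while using the second lag as an instrument recovers the persistence. The model and numbers below are invented, not the paper's realized-measure framework:

        import numpy as np

        rng = np.random.default_rng(6)

        T, rho = 20_000, 0.98
        x = np.zeros(T)                            # latent AR(1) process
        for t in range(1, T):
            x[t] = rho * x[t - 1] + rng.standard_normal()
        y = x + 2.0 * rng.standard_normal(T)       # noisy proxy of x
        y = y - y.mean()

        ols = np.dot(y[1:], y[:-1]) / np.dot(y[:-1], y[:-1])      # attenuated
        iv = np.dot(y[2:], y[:-2]) / np.dot(y[1:-1], y[:-2])      # instrumented
        print(f"OLS (biased): {ols:.3f}   IV: {iv:.3f}   true: {rho}")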

  13. Estimating the Persistence and the Autocorrelation Function of a Time Series that is Measured with Error

    DEFF Research Database (Denmark)

    Hansen, Peter Reinhard; Lunde, Asger

    An economic time series can often be viewed as a noisy proxy for an underlying economic variable. Measurement errors will influence the dynamic properties of the observed process and may conceal the persistence of the underlying time series. In this paper we develop instrumental variable (IV) methods for extracting information about the latent process. Our framework can be used to estimate the autocorrelation function of the latent volatility process and a key persistence parameter. Our analysis is motivated by the recent literature on realized (volatility) measures, such as the realized variance, that are imperfect estimates of actual volatility. In an empirical analysis using realized measures for the DJIA stocks we find the underlying volatility to be near unit root in all cases. Although standard unit root tests are asymptotically justified, we find them to be misleading in our

  14. Capacity estimation and verification of quantum channels with arbitrarily correlated errors.

    Science.gov (United States)

    Pfister, Corsin; Rol, M Adriaan; Mantri, Atul; Tomamichel, Marco; Wehner, Stephanie

    2018-01-02

    The central figure of merit for quantum memories and quantum communication devices is their capacity to store and transmit quantum information. Here, we present a protocol that estimates a lower bound on a channel's quantum capacity, even when there are arbitrarily correlated errors. One application of these protocols is to test the performance of quantum repeaters for transmitting quantum information. Our protocol is easy to implement and comes in two versions. The first estimates the one-shot quantum capacity by preparing and measuring in two different bases, where all involved qubits are used as test qubits. The second verifies on-the-fly that a channel's one-shot quantum capacity exceeds a minimal tolerated value while storing or communicating data. We discuss the performance using simple examples, such as the dephasing channel for which our method is asymptotically optimal. Finally, we apply our method to a superconducting qubit in experiment.

  15. Laboratory measurement error in external dose estimates and its effects on dose-response analyses of Hanford worker mortality data

    International Nuclear Information System (INIS)

    Gilbert, E.S.; Fix, J.J.

    1996-08-01

    This report addresses laboratory measurement error in estimates of external doses obtained from personnel dosimeters, and investigates the effects of these errors on linear dose-response analyses of data from epidemiologic studies of nuclear workers. These errors have the distinguishing feature that they are independent across time and across workers. Although the calculations made for this report were based on Hanford data, the overall conclusions are likely to be relevant for other epidemiologic studies of workers exposed to external radiation

  16. Errors in 'BED'-derived estimates of HIV incidence will vary by place, time and age.

    Directory of Open Access Journals (Sweden)

    Timothy B Hallett

    2009-05-01

    Full Text Available The BED Capture Enzyme Immunoassay, believed to distinguish recent HIV infections, is being used to estimate HIV incidence, although an important property of the test - how specificity changes with time since infection - has not been measured. We construct hypothetical scenarios for the performance of the BED test, consistent with current knowledge, and explore how this could influence errors in BED estimates of incidence using a mathematical model of six African countries. The model is also used to determine the conditions and the sample sizes required for the BED test to reliably detect trends in HIV incidence. If the chance of misclassification by BED increases with time since infection, the overall proportion of individuals misclassified could vary widely between countries, over time, and across age-groups, in a manner determined by the historic course of the epidemic and the age-pattern of incidence. Under some circumstances, changes in BED estimates over time can approximately track actual changes in incidence, but large sample sizes (50,000+) will be required for recorded changes to be statistically significant. The relationship between BED test specificity and time since infection has not been fully measured, but, if it decreases, errors in estimates of incidence could vary by place, time and age-group. This means that post-assay adjustment procedures using parameters from different populations or at different times may not be valid. Further research is urgently needed into the properties of the BED test, and the rate of misclassification in a wide range of populations.

  17. The Impact of Atmospheric Modeling Errors on GRACE Estimates of Mass Loss in Greenland and Antarctica

    Science.gov (United States)

    Hardy, Ryan A.; Nerem, R. Steven; Wiese, David N.

    2017-12-01

    Systematic errors in Gravity Recovery and Climate Experiment (GRACE) monthly mass estimates over the Greenland and Antarctic ice sheets can originate from low-frequency biases in the European Centre for Medium-Range Weather Forecasts (ECMWF) Operational Analysis model, the atmospheric component of the Atmosphere and Ocean De-aliasing Level-1B (AOD1B) product used to forward-model atmospheric and ocean gravity signals in GRACE processing. These biases are revealed by differences in surface pressure between the ECMWF Operational Analysis model, state-of-the-art reanalyses, and in situ surface pressure measurements. While some of these errors are attributable to well-understood discrete model changes and have published corrections, we examine errors these corrections do not address. We compare multiple models and in situ data in Antarctica and Greenland to determine which models have the most skill relative to monthly averages of the dealiasing model. We also evaluate linear combinations of these models and synthetic pressure fields generated from direct interpolation of pressure observations. These models consistently reveal drifts in the dealiasing model that cause the acceleration of Antarctica's mass loss between April 2002 and August 2016 to be underestimated by approximately 4 Gt yr^-2. We find similar results after attempting to solve the inverse problem, recovering pressure biases directly from the GRACE Jet Propulsion Laboratory RL05.1M mascon solutions. Over Greenland, we find a 2 Gt yr^-1 bias in mass trend. While our analysis focuses on errors in Release 05 of AOD1B, we also evaluate the new AOD1B RL06 product. We find that this new product mitigates some of the aforementioned biases.

  18. Practical error estimates for Reynolds' lubrication approximation and its higher order corrections

    Energy Technology Data Exchange (ETDEWEB)

    Wilkening, Jon

    2008-12-10

    Reynolds lubrication approximation is used extensively to study flows between moving machine parts, in narrow channels, and in thin films. The solution of Reynolds equation may be thought of as the zeroth-order term in an expansion of the solution of the Stokes equations in powers of the aspect ratio ε of the domain. In this paper, we show how to compute the terms in this expansion to arbitrary order on a two-dimensional, x-periodic domain and derive rigorous, a priori error bounds for the difference between the exact solution and the truncated expansion solution. Unlike previous studies of this sort, the constants in our error bounds are either independent of the function h(x) describing the geometry, or depend on h and its derivatives in an explicit, intuitive way. Specifically, if the expansion is truncated at order 2k, the error is O(ε^{2k+2}) and h enters the error bound only through its first and third inverse moments ∫_0^1 h(x)^{-m} dx for m = 1, 3, and through the max norms ‖(1/ℓ!) h^{ℓ-1} ∂_x^ℓ h‖_∞ for 1 ≤ ℓ ≤ 2k+2. We validate our estimates by comparing with finite element solutions and present numerical evidence suggesting that even when h is real analytic and periodic, the expansion solution forms an asymptotic series rather than a convergent series.

  19. Measurement Error Affects Risk Estimates for Recruitment to the Hudson River Stock of Striped Bass

    Directory of Open Access Journals (Sweden)

    Dennis J. Dunning

    2002-01-01

    Full Text Available We examined the consequences of ignoring the distinction between measurement error and natural variability in an assessment of risk to the Hudson River stock of striped bass posed by entrainment at the Bowline Point, Indian Point, and Roseton power plants. Risk was defined as the probability that recruitment of age-1+ striped bass would decline by 80% or more, relative to the equilibrium value, at least once during the time periods examined (1, 5, 10, and 15 years). Measurement error, estimated using two abundance indices from independent beach seine surveys conducted on the Hudson River, accounted for 50% of the variability in one index and 56% of the variability in the other. If a measurement error of 50% was ignored and all of the variability in abundance was attributed to natural causes, the risk that recruitment of age-1+ striped bass would decline by 80% or more after 15 years was 0.308 at the current level of entrainment mortality (11%). However, the risk decreased almost tenfold (to 0.032) if a measurement error of 50% was considered. The change in risk attributable to decreasing the entrainment mortality rate from 11 to 0% was very small (0.009) and similar in magnitude to the change in risk associated with an action proposed in Amendment #5 to the Interstate Fishery Management Plan for Atlantic striped bass (0.006): an increase in the instantaneous fishing mortality rate from 0.33 to 0.4. The proposed increase in fishing mortality was not considered an adverse environmental impact, which suggests that potentially costly efforts to reduce entrainment mortality on the Hudson River stock of striped bass are not warranted.

  20. Combined Uncertainty and A-Posteriori Error Bound Estimates for General CFD Calculations: Theory and Software Implementation

    Science.gov (United States)

    Barth, Timothy J.

    2014-01-01

    This workshop presentation discusses the design and implementation of numerical methods for the quantification of statistical uncertainty, including a posteriori error bounds, for output quantities computed using CFD methods. Hydrodynamic realizations often contain numerical error arising from finite-dimensional approximation (e.g. numerical methods using grids, basis functions, particles) and statistical uncertainty arising from incomplete information and/or statistical characterization of model parameters and random fields. The first task at hand is to derive formal error bounds for statistics given realizations containing finite-dimensional numerical error [1]. The error in computed output statistics contains contributions from both realization error and the error resulting from the calculation of statistics integrals using a numerical method. A second task is to devise computable a posteriori error bounds by numerically approximating all terms arising in the error bound estimates. For the same reason that CFD calculations including error bounds but omitting uncertainty modeling are only of limited value, CFD calculations including uncertainty modeling but omitting error bounds are also of limited value. To gain maximum value from CFD calculations, a general software package for uncertainty quantification with quantified error bounds has been developed at NASA. The package provides implementations for a suite of numerical methods used in uncertainty quantification: dense tensorization basis methods [3] and a subscale recovery variant [1] for non-smooth data, sparse tensorization methods [2] utilizing node-nested hierarchies, and sampling methods [4] for high-dimensional random variable spaces.

  1. A New Stratified Sampling Procedure which Decreases Error Estimation of Varroa Mite Number on Sticky Boards.

    Science.gov (United States)

    Kretzschmar, A; Durand, E; Maisonnasse, A; Vallon, J; Le Conte, Y

    2015-06-01

    A new procedure of stratified sampling is proposed in order to establish an accurate estimation of Varroa destructor populations on the sticky bottom boards of the hive. It is based on spatial sampling theory, which recommends regular grid stratification in the case of a spatially structured process. Since the distribution of varroa mites on the sticky board is observed to be spatially structured, we designed a sampling scheme based on a regular grid with circles centered on each grid element. This new procedure is then compared with a former method using partially random sampling. Relative error improvements are presented on the basis of a large sample of simulated sticky boards (n=20,000) which provides a complete range of spatial structures, from a random structure to a highly frame-driven structure. The improvement of varroa mite number estimation is then measured by the percentage of counts with an error greater than a given level.

  2. The estimation of calibration equations for variables with heteroscedastic measurement errors.

    Science.gov (United States)

    Tian, Lu; Durazo-Arvizu, Ramón A; Myers, Gary; Brooks, Steve; Sarafin, Kurtis; Sempos, Christopher T

    2014-11-10

    In clinical chemistry and medical research, there is often a need to calibrate the values obtained from an old or discontinued laboratory procedure to the values obtained from a new or currently used laboratory method. The objective of the calibration study is to identify a transformation that can be used to convert the test values of one laboratory measurement procedure into the values that would be obtained using another measurement procedure. However, in the presence of heteroscedastic measurement error, there is no good statistical method available for estimating the transformation. In this paper, we propose a set of statistical methods for a calibration study when the magnitude of the measurement error is proportional to the underlying true level. The corresponding sample size estimation method for conducting a calibration study is discussed as well. The proposed new method is theoretically justified and evaluated for its finite sample properties via an extensive numerical study. Two examples based on real data are used to illustrate the procedure.

  3. Density functionals for surface science: Exchange-correlation model development with Bayesian error estimation

    DEFF Research Database (Denmark)

    Wellendorff, Jess; Lundgård, Keld Troen; Møgelhøj, Andreas

    2012-01-01

    A methodology for semiempirical density functional optimization, using regularization and cross-validation methods from machine learning, is developed. We demonstrate that such methods enable well-behaved exchange-correlation approximations in very flexible model spaces, thus avoiding the overfit… the energetics of intramolecular and intermolecular, bulk solid, and surface chemical bonding, and the developed optimization method explicitly handles making the compromise based on the directions in model space favored by different materials properties. The approach is applied to designing the Bayesian error estimation functional with van der Waals correlation (BEEF-vdW), a semilocal approximation with an additional nonlocal correlation term. Furthermore, an ensemble of functionals around BEEF-vdW comes out naturally, offering an estimate of the computational error. An extensive assessment on a range of data…

  4. Analytical propagation of errors in dynamic SPECT: estimators, degrading factors, bias and noise

    International Nuclear Information System (INIS)

    Kadrmas, D.J.; Huesman, R.H.

    1999-01-01

    Dynamic SPECT is a relatively new technique that may potentially benefit many imaging applications. Though dynamic SPECT is similar to dynamic PET, the accuracy and precision of its parameter estimates are degraded by factors that differ from those encountered in PET. In this work we formulate a methodology for analytically studying the propagation of errors from dynamic projection data to kinetic parameter estimates. This methodology is used to study the relationships between reconstruction estimators, image-degrading factors, bias and statistical noise for the application of dynamic cardiac imaging with 99mTc-teboroxime. Dynamic data were simulated for a torso phantom, and the effects of attenuation, detector response and scatter were successively included to produce several data sets. The data were reconstructed to obtain both weighted and unweighted least squares solutions, and the kinetic rate parameters for a two-compartment model were estimated. The expected values and standard deviations describing the statistical distribution of parameters that would be estimated from noisy data were calculated analytically. The results of this analysis present several interesting implications for dynamic SPECT. Statistically weighted estimators performed only marginally better than unweighted ones, implying that more computationally efficient unweighted estimators may be appropriate. This also suggests that it may be beneficial to focus future research efforts upon regularization methods with beneficial bias-variance trade-offs. Other aspects of the study describe the fundamental limits of the bias-variance trade-off regarding physical degrading factors and their compensation. The results characterize the effects of attenuation, detector response and scatter, and they are intended to guide future research into dynamic SPECT reconstruction and compensation methods. (author)

  5. Decay estimate of global solutions to the generalized double dispersion model in Morrey spaces

    Science.gov (United States)

    Wang, Yu-Zhu; Gu, Liuxin; Wang, Yinxia

    2017-08-01

In this paper, we investigate the initial value problem for the generalized double dispersion model in Morrey spaces. Based on the decay properties of the solution operator in Morrey spaces, global existence and decay estimates of solutions are proved by the Banach fixed point theorem.

  6. An improved estimator for the hydration of fat-free mass from in vivo measurements subject to additive technical errors

    International Nuclear Information System (INIS)

    Kinnamon, Daniel D; Ludwig, David A; Lipshultz, Steven E; Miller, Tracie L; Lipsitz, Stuart R

    2010-01-01

The hydration of fat-free mass, or hydration fraction (HF), is often defined as a constant body composition parameter in a two-compartment model and then estimated from in vivo measurements. We showed that the widely used estimator for the HF parameter in this model, the mean of the ratios of measured total body water (TBW) to fat-free mass (FFM) in individual subjects, can be inaccurate in the presence of additive technical errors. We then proposed a new instrumental variables estimator that accurately estimates the HF parameter in the presence of such errors. In Monte Carlo simulations, the mean of the ratios of TBW to FFM was an inaccurate estimator of the HF parameter, and inferences based on it had actual type I error rates more than 13 times the nominal 0.05 level under certain conditions. The instrumental variables estimator was accurate and maintained an actual type I error rate close to the nominal level in all simulations. When estimating and performing inference on the HF parameter, the proposed instrumental variables estimator should yield accurate estimates and correct inferences in the presence of additive technical errors, but the mean of the ratios of TBW to FFM in individual subjects may not.
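
    A toy Monte Carlo makes the bias concrete. The sketch below (Python, invented numbers, deliberately exaggerated error sizes) contrasts the mean-of-ratios estimator with an instrumental-variables-style estimator using a constant instrument, which reduces to the ratio of means; the paper's actual estimator may differ in detail.

        import numpy as np

        rng = np.random.default_rng(1)
        n, hf_true = 200_000, 0.73

        ffm_true = rng.normal(50.0, 8.0, n)         # true fat-free mass (kg)
        tbw_true = hf_true * ffm_true               # true total body water (kg)
        ffm = ffm_true + rng.normal(0.0, 5.0, n)    # additive technical errors
        tbw = tbw_true + rng.normal(0.0, 5.0, n)

        mean_of_ratios = np.mean(tbw / ffm)         # conventional estimator (biased)
        iv_estimate = tbw.mean() / ffm.mean()       # IV with a constant instrument
        print(mean_of_ratios, iv_estimate)          # ~0.737 vs ~0.730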

  7. On the Liu and almost unbiased Liu estimators in the presence of multicollinearity with heteroscedastic or correlated errors

    Directory of Open Access Journals (Sweden)

    Mustafa I. Alheety

    2009-11-01

This paper introduces a new biased estimator, namely the almost unbiased Liu estimator (AULE) of β, for the multiple linear regression model with heteroscedastic and/or correlated errors that suffers from the problem of multicollinearity. The properties of the proposed estimator are discussed, and its performance relative to the generalized least squares (GLS) estimator, the ordinary ridge regression (ORR) estimator (Trenkler, 1984) and the Liu estimator (LE) (Kaçıranlar, 2003) is investigated in terms of the matrix mean square error criterion. The optimal values of d for the Liu and almost unbiased Liu estimators have been obtained. Finally, a simulation study has been conducted, which indicates that under certain conditions on d, the proposed estimator performs well compared to the GLS, ORR and LE estimators.
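
    For reference, the Liu estimator on which the AULE builds has a simple closed form; here is a small Python sketch of the OLS-based version (the GLS weighting for heteroscedastic or correlated errors is omitted for brevity).

        import numpy as np

        def liu_estimator(X, y, d):
            """Liu estimator: (X'X + I)^{-1} (X'y + d * beta_ols)."""
            XtX = X.T @ X
            beta_ols = np.linalg.solve(XtX, X.T @ y)
            return np.linalg.solve(XtX + np.eye(X.shape[1]), X.T @ y + d * beta_ols)

        # Toy collinear design: the second column nearly copies the first
        rng = np.random.default_rng(7)
        x1 = rng.normal(size=200)
        X = np.column_stack([x1, x1 + 0.01 * rng.normal(size=200)])
        y = X @ np.array([1.0, 2.0]) + rng.normal(size=200)
        print(liu_estimator(X, y, d=0.5))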

  8. A possible solution for the problem of estimating the error structure of global soil moisture data sets

    Science.gov (United States)

    Scipal, K.; Holmes, T.; de Jeu, R.; Naeimi, V.; Wagner, W.

    2008-12-01

In the last few years, research has made significant progress towards operational soil moisture remote sensing, which has led to the availability of several global data sets. For an optimal use of these data, an accurate estimation of the error structure is an important condition. To address the validation problem, we introduce the triple collocation error estimation technique. The triple collocation technique is a powerful tool to estimate the root mean square error while simultaneously solving for systematic differences in the climatologies of a set of three independent data sources. We evaluate the method by applying it to passive microwave (TRMM radiometer), active microwave (ERS-2 scatterometer) and modeled (ERA-Interim reanalysis) soil moisture data sets. The results suggest that the method provides realistic error estimates.
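
    In its covariance form (one common variant; the notation here is generic), triple collocation estimates each system's error variance from the three pairwise covariances, assuming mutually independent errors and a common, consistently scaled climatology:

        import numpy as np

        def triple_collocation_rmse(x, y, z):
            """RMSE of three collocated data sets with independent errors."""
            C = np.cov(np.vstack([x, y, z]))
            ex = C[0, 0] - C[0, 1] * C[0, 2] / C[1, 2]
            ey = C[1, 1] - C[0, 1] * C[1, 2] / C[0, 2]
            ez = C[2, 2] - C[0, 2] * C[1, 2] / C[0, 1]
            # Sampling noise can make these slightly negative in practice.
            return np.sqrt(np.clip([ex, ey, ez], 0.0, None))

        # Synthetic check: common truth plus independent errors
        rng = np.random.default_rng(2)
        truth = rng.normal(0.25, 0.08, 5000)              # "true" soil moisture
        x = truth + rng.normal(0.0, 0.03, truth.size)     # e.g. radiometer
        y = truth + rng.normal(0.0, 0.04, truth.size)     # e.g. scatterometer
        z = truth + rng.normal(0.0, 0.02, truth.size)     # e.g. reanalysis
        print(triple_collocation_rmse(x, y, z))           # ~ [0.03, 0.04, 0.02]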

  9. Errors in the estimation of the variance: implications for multiple-probability fluctuation analysis.

    Science.gov (United States)

    Saviane, Chiara; Silver, R Angus

    2006-06-15

    Synapses play a crucial role in information processing in the brain. Amplitude fluctuations of synaptic responses can be used to extract information about the mechanisms underlying synaptic transmission and its modulation. In particular, multiple-probability fluctuation analysis can be used to estimate the number of functional release sites, the mean probability of release and the amplitude of the mean quantal response from fits of the relationship between the variance and mean amplitude of postsynaptic responses, recorded at different probabilities. To determine these quantal parameters, calculate their uncertainties and the goodness-of-fit of the model, it is important to weight the contribution of each data point in the fitting procedure. We therefore investigated the errors associated with measuring the variance by determining the best estimators of the variance of the variance and have used simulations of synaptic transmission to test their accuracy and reliability under different experimental conditions. For central synapses, which generally have a low number of release sites, the amplitude distribution of synaptic responses is not normal, thus the use of a theoretical variance of the variance based on the normal assumption is not a good approximation. However, appropriate estimators can be derived for the population and for limited sample sizes using a more general expression that involves higher moments and introducing unbiased estimators based on the h-statistics. Our results are likely to be relevant for various applications of fluctuation analysis when few channels or release sites are present.
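
    As a sketch of the quantity at stake: for non-normal data, the variance of the sample variance involves the fourth central moment. A plug-in Python version of the standard formula is shown below; the h-statistics used in the paper give the fully unbiased analogue.

        import numpy as np

        def var_of_sample_variance(x):
            """Plug-in estimate of Var(s^2), valid beyond the normal case.

            Uses Var(s^2) = (mu4 - (n-3)/(n-1) * sigma^4) / n with sample
            moments substituted for mu4 and sigma^4.
            """
            x = np.asarray(x, dtype=float)
            n = x.size
            m4 = np.mean((x - x.mean()) ** 4)    # fourth central moment
            s2 = x.var(ddof=1)                   # unbiased sample variance
            return (m4 - (n - 3.0) / (n - 1.0) * s2**2) / n

        # Example: skewed (non-normal) response amplitudes
        amps = np.random.default_rng(8).gamma(2.0, 1.5, size=100)
        print(var_of_sample_variance(amps))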

  10. The Importance of Tree Height in Estimating Individual Tree Biomass While Considering Errors in Measurements and Allometric Models

    OpenAIRE

    Phalla, Thuch; Ota, Tetsuji; Mizoue, Nobuya; Kajisa, Tsuyoshi; Yoshida, Shigejiro; Vuthy, Ma; Heng, Sokh

    2018-01-01

    This study evaluated the uncertainty of individual tree biomass estimated by allometric models by both including and excluding tree height independently. Using two independent sets of measurements on the same trees, the errors in the measurement of diameter at breast height and tree height were quantified, and the uncertainty of individual tree biomass estimation caused by errors in measurement was calculated. For both allometric models, the uncertainties of the individual tree biomass estima...

  11. Impact of fitting algorithms on errors of parameter estimates in dynamic contrast-enhanced MRI

    Science.gov (United States)

    Debus, C.; Floca, R.; Nörenberg, D.; Abdollahi, A.; Ingrisch, M.

    2017-12-01

Parameter estimation in dynamic contrast-enhanced MRI (DCE MRI) is usually performed by non-linear least squares (NLLS) fitting of a pharmacokinetic model to a measured concentration-time curve. The two-compartment exchange model (2CXM) describes the compartments ‘plasma’ and ‘interstitial volume’ and their exchange in terms of plasma flow and capillary permeability. The model function can be defined by either a system of two coupled differential equations or a closed-form analytical solution. The aim of this study was to compare these two representations in terms of accuracy, robustness and computation speed, depending on parameter combination and temporal sampling. The impact on parameter estimation errors was investigated by fitting the 2CXM to simulated concentration-time curves. Parameter combinations representing five tissue types were used, together with two arterial input functions, a measured one and a theoretical population-based one, to generate 4D concentration images at three different temporal resolutions. Images were fitted by NLLS techniques, where the sum of squared residuals was calculated by either numeric integration with the Runge–Kutta method or convolution. Furthermore, two example cases, a prostate carcinoma and a glioblastoma multiforme patient, were analyzed in order to investigate the validity of our findings in real patient data. The convolution approach yields improved results in precision and robustness of determined parameters. Precision and stability are limited in curves with low blood flow. The model parameter ve shows great instability and little reliability in all cases. Decreased temporal resolution results in significant errors for the differential equation approach in several curve types. The convolution excelled in computational speed by three orders of magnitude. Uncertainties in parameter estimation at low temporal resolution cannot be compensated by usage of the differential equations. Fitting with the convolution ...
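
    The convolution representation is easy to sketch. Below, a Python toy evaluates a compartment model as the discrete convolution of an arterial input function with an exponential impulse response; a one-compartment (Tofts-type) response is used for brevity, whereas the 2CXM impulse response is a sum of two exponentials handled the same way. All numbers are invented.

        import numpy as np

        dt = 1.0                                  # temporal resolution (s)
        t = np.arange(0.0, 300.0, dt)
        aif = 5.0 * t * np.exp(-t / 12.0)         # toy arterial input function (mM)

        Ktrans, ve = 0.0025, 0.2                  # transfer constant (1/s), volume
        irf = Ktrans * np.exp(-Ktrans / ve * t)   # exponential impulse response
        c_tissue = np.convolve(aif, irf)[:t.size] * dt   # modeled tissue curve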

  12. Investigating the error sources of the online state of charge estimation methods for lithium-ion batteries in electric vehicles

    Science.gov (United States)

    Zheng, Yuejiu; Ouyang, Minggao; Han, Xuebing; Lu, Languang; Li, Jianqiu

    2018-02-01

State of charge (SOC) estimation is generally acknowledged as one of the most important functions in the battery management system for lithium-ion batteries in new energy vehicles. Although every effort is made in the various online SOC estimation methods to increase the estimation accuracy as much as possible within the limited on-chip resources, little literature discusses the error sources of those SOC estimation methods. This paper first reviews the commonly studied SOC estimation methods using a conventional classification. A novel perspective focusing on the error analysis of the SOC estimation methods is proposed. SOC estimation methods are analyzed from the views of the measured values, models, algorithms and state parameters. Subsequently, error flow charts are proposed to analyze the error sources, from signal measurement to the models and algorithms, for the widely used online SOC estimation methods in new energy vehicles. Finally, with consideration of the working conditions, the choice of more reliable and applicable SOC estimation methods is discussed, and the future development of promising online SOC estimation methods is suggested.
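
    One of the simplest error mechanisms is easy to demonstrate: in ampere-hour (Coulomb) counting, a small current-sensor bias integrates into a steadily growing SOC error. A minimal Python sketch with invented numbers:

        import numpy as np

        dt, hours = 1.0, 2.0                          # 1 s steps over 2 h
        t = np.arange(0.0, hours * 3600.0, dt)
        capacity_As = 60.0 * 3600.0                   # 60 Ah cell, in A*s

        true_current = 20.0 + 5.0 * np.sin(t / 60.0)  # discharge current (A)
        measured = true_current + 0.2                 # +0.2 A sensor bias

        soc_true = 1.0 - np.cumsum(true_current) * dt / capacity_As
        soc_est = 1.0 - np.cumsum(measured) * dt / capacity_As
        print(f"SOC error after {hours} h: {abs(soc_est[-1] - soc_true[-1]):.4f}")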

  13. Intermediate-mass-ratio inspirals in the Einstein Telescope. II. Parameter estimation errors

    International Nuclear Information System (INIS)

    Huerta, E. A.; Gair, Jonathan R.

    2011-01-01

We explore the precision with which the Einstein Telescope will be able to measure the parameters of intermediate-mass-ratio inspirals, i.e., the inspirals of stellar mass compact objects into intermediate-mass black holes (IMBHs). We calculate the parameter estimation errors using the Fisher Matrix formalism and present results of Monte Carlo simulations of these errors over choices for the extrinsic parameters of the source. These results are obtained using two different models for the gravitational waveform which were introduced in paper I of this series. These two waveform models include the inspiral, merger, and ringdown phases in a consistent way. One of the models, based on the transition scheme of Ori and Thorne [A. Ori and K. S. Thorne, Phys. Rev. D 62, 124022 (2000)], is valid for IMBHs of arbitrary spin, whereas the second model, based on the effective-one-body approach, has been developed to cross-check our results in the nonspinning limit. In paper I of this series, we demonstrated the excellent agreement in both phase and amplitude between these two models for nonspinning black holes, and that their predictions for signal-to-noise ratios are consistent to within 10%. We now use these waveform models to estimate parameter estimation errors for binary systems with masses 1.4 M⊙ + 100 M⊙, 10 M⊙ + 100 M⊙, 1.4 M⊙ + 500 M⊙, and 10 M⊙ + 500 M⊙ and various choices for the spin of the central IMBH. Assuming a detector network of three Einstein Telescopes, the analysis shows that for a 10 M⊙ compact object inspiralling into a 100 M⊙ IMBH with spin q = 0.3, detected with a signal-to-noise ratio of 30, we should be able to determine the compact object and IMBH masses, and the IMBH spin magnitude, to fractional accuracies of ∼10^-3, ∼10^-3.5, and ∼10^-3, respectively. We also expect to determine the location of the source in the sky and the luminosity distance to within ∼0.003 steradians and ∼10%, respectively. We also compute results for
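
    The Fisher matrix step reduces to a few lines once the waveform derivatives are available: invert the Fisher matrix and read 1-sigma errors off its diagonal. The Python sketch below uses a real-valued toy inner product; actual analyses use the noise-weighted complex inner product over frequency.

        import numpy as np

        def fisher_errors(dh, Sn_inv):
            """1-sigma errors from the Fisher matrix F_ij = (dh_i | dh_j).

            dh     : (p, m) waveform derivatives w.r.t. p parameters at m bins
            Sn_inv : (m,) inverse noise power spectral density
            """
            F = (dh * Sn_inv) @ dh.T
            return np.sqrt(np.diag(np.linalg.inv(F)))  # Cramer-Rao bound

        # Toy usage: 3 parameters, 256 frequency bins
        rng = np.random.default_rng(5)
        print(fisher_errors(rng.normal(size=(3, 256)), np.ones(256)))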

  14. Error estimation, validity and best practice guidelines for quantifying coalescence frequency during emulsification using the step-down technique

    Directory of Open Access Journals (Sweden)

    Andreas Håkansson

    2017-07-01

This contribution derives error estimates for three non-idealities present in every step-down experiment: (i) limited sampling rate, (ii) non-instantaneous step-down and (iii) residual fragmentation after the step. It is concluded that all three factors give rise to systematic errors in estimating the coalescence rate. However, by carefully choosing experimental settings, the errors can be kept small. The method thus remains suitable for many conditions. Best practice guidelines for applying the method are given, both generally and more specifically for stirred tank oil-in-water emulsification.

  15. Modeling Input Errors to Improve Uncertainty Estimates for Sediment Transport Model Predictions

    Science.gov (United States)

    Jung, J. Y.; Niemann, J. D.; Greimann, B. P.

    2016-12-01

    Bayesian methods using Markov chain Monte Carlo algorithms have recently been applied to sediment transport models to assess the uncertainty in the model predictions due to the parameter values. Unfortunately, the existing approaches can only attribute overall uncertainty to the parameters. This limitation is critical because no model can produce accurate forecasts if forced with inaccurate input data, even if the model is well founded in physical theory. In this research, an existing Bayesian method is modified to consider the potential errors in input data during the uncertainty evaluation process. The input error is modeled using Gaussian distributions, and the means and standard deviations are treated as uncertain parameters. The proposed approach is tested by coupling it to the Sedimentation and River Hydraulics - One Dimension (SRH-1D) model and simulating a 23-km reach of the Tachia River in Taiwan. The Wu equation in SRH-1D is used for computing the transport capacity for a bed material load of non-cohesive material. Three types of input data are considered uncertain: (1) the input flowrate at the upstream boundary, (2) the water surface elevation at the downstream boundary, and (3) the water surface elevation at a hydraulic structure in the middle of the reach. The benefits of modeling the input errors in the uncertainty analysis are evaluated by comparing the accuracy of the most likely forecast and the coverage of the observed data by the credible intervals to those of the existing method. The results indicate that the internal boundary condition has the largest uncertainty among those considered. Overall, the uncertainty estimates from the new method are notably different from those of the existing method for both the calibration and forecast periods.

  16. A classification scheme of erroneous behaviors for human error probability estimations based on simulator data

    International Nuclear Information System (INIS)

    Kim, Yochan; Park, Jinkyun; Jung, Wondea

    2017-01-01

    Because it has been indicated that empirical data supporting the estimates used in human reliability analysis (HRA) is insufficient, several databases have been constructed recently. To generate quantitative estimates from human reliability data, it is important to appropriately sort the erroneous behaviors found in the reliability data. Therefore, this paper proposes a scheme to classify the erroneous behaviors identified by the HuREX (Human Reliability data Extraction) framework through a review of the relevant literature. A case study of the human error probability (HEP) calculations is conducted to verify that the proposed scheme can be successfully implemented for the categorization of the erroneous behaviors and to assess whether the scheme is useful for the HEP quantification purposes. Although continuously accumulating and analyzing simulator data is desirable to secure more reliable HEPs, the resulting HEPs were insightful in several important ways with regard to human reliability in off-normal conditions. From the findings of the literature review and the case study, the potential and limitations of the proposed method are discussed. - Highlights: • A taxonomy of erroneous behaviors is proposed to estimate HEPs from a database. • The cognitive models, procedures, HRA methods, and HRA databases were reviewed. • HEPs for several types of erroneous behaviors are calculated as a case study.
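
    As a sketch of the quantification step (the exact priors and groupings used with HuREX may differ), a HEP point estimate with a credible interval can be obtained from error counts with a Jeffreys prior:

        from scipy.stats import beta

        errors, opportunities = 3, 412            # invented simulator counts

        hep = (errors + 0.5) / (opportunities + 1.0)      # Jeffreys point estimate
        lo, hi = beta.ppf([0.05, 0.95],
                          errors + 0.5, opportunities - errors + 0.5)
        print(f"HEP = {hep:.4f} (90% credible interval: {lo:.4f}-{hi:.4f})")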

  17. Threshold-based detection for amplify-and-forward cooperative communication systems with channel estimation error

    KAUST Repository

    Abuzaid, Abdulrahman I.

    2014-09-01

Efficient receiver designs for cooperative communication systems are becoming increasingly important. In previous work, cooperative networks communicated with the use of L relays. As the receiver is constrained, it can only process U out of L relays. Channel shortening and reduced-rank techniques were employed to design the preprocessing matrix. In this paper, a receiver structure is proposed which combines the joint iterative optimization (JIO) algorithm and our proposed threshold selection criteria. This receiver structure assists in determining the optimal U_opt. Furthermore, this receiver provides the freedom to choose U ≤ U_opt for each frame, depending upon the tolerable difference allowed for the mean square error (MSE). Our study and simulation results show that by choosing an appropriate threshold, it is possible to gain in terms of complexity savings without affecting the BER performance of the system. Furthermore, in this paper the effect of channel estimation errors on the MSE performance of the amplify-and-forward (AF) cooperative relaying system is investigated.

  18. Impact of mixed modes on measurement errors and estimates of change in panel data

    Directory of Open Access Journals (Sweden)

    Alexandru Cernat

    2015-07-01

Mixed mode designs are receiving increased interest as a possible solution for saving costs in panel surveys, although the lasting effects on data quality are unknown. To better understand the effects of mixed mode designs on panel data, we examine their impact on random and systematic error and on estimates of change. The SF-12 health scale in the Understanding Society Innovation Panel is used for the analysis. Results indicate that only one variable out of 12 has systematic differences due to the mixed mode design. Also, four of the 12 items overestimate the variance of change in time in the mixed mode design. We conclude that using a mixed mode approach leads to minor measurement differences, but it can result in the overestimation of individual change compared to a single mode design.

  19. Dynamic Programming and Error Estimates for Stochastic Control Problems with Maximum Cost

    Energy Technology Data Exchange (ETDEWEB)

    Bokanowski, Olivier, E-mail: boka@math.jussieu.fr [Laboratoire Jacques-Louis Lions, Université Paris-Diderot (Paris 7) UFR de Mathématiques - Bât. Sophie Germain (France); Picarelli, Athena, E-mail: athena.picarelli@inria.fr [Projet Commands, INRIA Saclay & ENSTA ParisTech (France); Zidani, Hasnaa, E-mail: hasnaa.zidani@ensta.fr [Unité de Mathématiques appliquées (UMA), ENSTA ParisTech (France)

    2015-02-15

This work is concerned with stochastic optimal control for a running maximum cost. A direct approach based on dynamic programming techniques is studied, leading to the characterization of the value function as the unique viscosity solution of a second order Hamilton–Jacobi–Bellman (HJB) equation with an oblique derivative boundary condition. A general numerical scheme is proposed and a convergence result is provided. Error estimates are obtained for the semi-Lagrangian scheme. These results can apply to the case of lookback options in finance. Moreover, optimal control problems with maximum cost arise in the characterization of the reachable sets for a system of controlled stochastic differential equations. Some numerical simulations on examples of reachability analysis are included to illustrate our approach.

  20. Estimating errors in fractional cloud cover obtained with infrared threshold methods

    Energy Technology Data Exchange (ETDEWEB)

    Chang, Fu-Lung; Coakley, J.A. Jr. (Oregon State Univ., Corvallis (United States))

    1993-05-20

The authors address the question of detecting cloud coverage from satellite imagery. The International Satellite Cloud Climatology Project (ISCCP) and NIMBUS-7 have constructed cloud climatologies, but these differ substantially in the global mean cloud cover. Here the authors address problems in the application of threshold methods to the infrared detection of cloud cover. They look in particular at single-layered cloud cover, and compare the threshold IR detection method with a spatial coherence method. One of the problems is that the pixel size of satellite imagery, namely 4-8 km on a side, is not particularly small compared to cloud features, or even breaks in cloud cover, and this can cause severe error in estimating cloud cover.
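
    The partial-fill problem can be reproduced in a few lines. In the Python toy below (invented numbers), each pixel is a linear radiance mixture of cloud and clear sky; a single brightness-temperature threshold then badly misstates the true area-mean cloud fraction.

        import numpy as np

        rng = np.random.default_rng(3)
        f = np.clip(rng.normal(0.3, 0.25, 100_000), 0.0, 1.0)  # pixel cloud fraction
        bt = f * 255.0 + (1.0 - f) * 290.0     # mixed brightness temperature (K)

        est = np.mean(bt < 285.0)              # threshold method: cloudy if BT < 285 K
        print(f"threshold estimate: {est:.2f}, true mean cover: {f.mean():.2f}")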

  1. Pressurized water reactor monitoring. Study of detection, diagnostic and estimation methods (least error squares and filtering)

    International Nuclear Information System (INIS)

    Gillet, M.

    1986-07-01

This thesis presents a study of the surveillance of the primary coolant circuit inventory of a pressurized water reactor. A reference model is developed with a view to an automatic system ensuring detection and diagnosis in real time. The methods used for the present application are statistical tests and a method related to pattern recognition. The estimation of detected failures, which is difficult owing to the non-linearity of the problem, is treated by the least-squares method of the predictor or corrector type, and by filtering. It is in this framework that a new optimized method with superlinear convergence is developed, and that a segmented linearization of the model is introduced, with a view to multiple filtering.

  2. Estimation of surgical tool-tip tracking error distribution in coordinate reference frame involving pivot calibration uncertainty.

    Science.gov (United States)

    Min, Zhe; Ren, Hongliang; Meng, Max Q-H

    2017-10-01

Accurate understanding of surgical tool-tip tracking error is important for decision making in image-guided surgery. In this Letter, the authors present a novel method to estimate/model surgical tool-tip tracking error in which they take pivot calibration uncertainty into consideration. First, a new type of error that is referred to as total target registration error (TTRE) is formally defined in a single-rigid registration. Target localisation error (TLE) in the two spaces to be registered is considered in the proposed TTRE formulation. With a first-order approximation in fiducial localisation error (FLE) or TLE magnitude, TTRE statistics (mean, covariance matrix and root-mean-square (RMS)) are then derived. Second, surgical tool-tip tracking error in the optical tracking system (OTS) frame is formulated using TTRE when pivot calibration uncertainty is considered. Finally, TTRE statistics of the tool-tip in the OTS frame are propagated relative to a coordinate reference frame (CRF) rigid-body. Monte Carlo simulations are conducted to validate the proposed error model. In more than 90% of the test cases, statistical tests find no difference between the simulated and theoretical mean and covariance matrix of the tool-tip tracking error in CRF space. The RMS percentage difference between the simulated and theoretical tool-tip tracking error in CRF space is within 5% in all test cases.

  3. Effect of the Absorbed Photosynthetically Active Radiation Estimation Error on Net Primary Production Estimation - A Study with MODIS FPAR and TOMS Ultraviolet Reflective Products

    International Nuclear Information System (INIS)

    Kobayashi, H.; Matsunaga, T.; Hoyano, A.

    2002-01-01

Absorbed photosynthetically active radiation (APAR), which is defined as the downward solar radiation in 400-700 nm absorbed by vegetation, is one of the significant variables for Net Primary Production (NPP) estimation from satellite data. Toward reducing the uncertainties in global NPP estimation, it is necessary to clarify the APAR accuracy. In this paper, we first propose an improved PAR estimation method based on Eck and Dye's method, in which ultraviolet (UV) reflectivity data derived from the Total Ozone Mapping Spectrometer (TOMS) at the top of the atmosphere are used for cloud transmittance estimation. The proposed method considers the variable effects of land surface UV reflectivity on the satellite-observed UV data. Monthly mean PAR comparisons between satellite-derived and ground-based data at various meteorological stations in Japan indicate that the improved PAR estimation method reduces the bias errors in the summer season. Assuming the relative error of the fraction of PAR (FPAR) derived from the Moderate Resolution Imaging Spectroradiometer (MODIS) to be 10%, we estimate the APAR relative error to be 10-15%. Annual NPP is calculated using APAR derived from MODIS FPAR and the improved PAR estimation method. It is shown that the random and bias errors of annual NPP in a 1 km resolution pixel are less than 4% and 6%, respectively. The APAR bias errors due to the PAR bias errors also affect the estimated total NPP. We estimated the most probable total annual NPP in Japan by subtracting the PAR bias errors; it amounts to about 248 MtC/yr. Using the improved PAR estimation method and Eck and Dye's method, the total annual NPP differs by 4% and 9%, respectively, from this most probable value. A previous intercomparison study among fifteen NPP models [4] showed that global NPP estimates range over 44.4-66.3 GtC/yr (coefficient of variation = 14%). Hence we conclude that the NPP estimation uncertainty due to the APAR estimation error is small.
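
    The quoted APAR error range is consistent with simple quadrature combination of independent relative errors, since APAR = PAR x FPAR. A short Python check (assuming roughly 10% for each factor, per the text):

        import numpy as np

        rel_err_par, rel_err_fpar = 0.10, 0.10
        rel_err_apar = np.hypot(rel_err_par, rel_err_fpar)   # add in quadrature
        print(f"APAR relative error ~ {rel_err_apar:.0%}")   # ~14%, inside 10-15%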

  4. On the Error State Selection for Stationary SINS Alignment and Calibration Kalman Filters-Part II: Observability/Estimability Analysis.

    Science.gov (United States)

    Silva, Felipe O; Hemerly, Elder M; Leite Filho, Waldemar C

    2017-02-23

    This paper presents the second part of a study aiming at the error state selection in Kalman filters applied to the stationary self-alignment and calibration (SSAC) problem of strapdown inertial navigation systems (SINS). The observability properties of the system are systematically investigated, and the number of unobservable modes is established. Through the analytical manipulation of the full SINS error model, the unobservable modes of the system are determined, and the SSAC error states (except the velocity errors) are proven to be individually unobservable. The estimability of the system is determined through the examination of the major diagonal terms of the covariance matrix and their eigenvalues/eigenvectors. Filter order reduction based on observability analysis is shown to be inadequate, and several misconceptions regarding SSAC observability and estimability deficiencies are removed. As the main contributions of this paper, we demonstrate that, except for the position errors, all error states can be minimally estimated in the SSAC problem and, hence, should not be removed from the filter. Corroborating the conclusions of the first part of this study, a 12-state Kalman filter is found to be the optimal error state selection for SSAC purposes. Results from simulated and experimental tests support the outlined conclusions.

  5. An error estimation for the implicit Euler method recommended for use in the RELAP4 family of codes

    International Nuclear Information System (INIS)

    Golos, S.

    1989-01-01

A simple estimation of the absolute value of the error as a function of the number of steps performed has been derived for the implicit (backward) Euler method, for the case of a single ordinary differential equation (ODE). This estimation distinctly shows the way, and the degree to which, the implicit Euler method (recommended in user guides for the RELAP4 family of codes) can give more inaccurate results than the explicit (forward) method. The short and simple reasoning presented should be treated as an indication of the problem. Error estimation for a general system of ODEs is an extremely difficult and complex task, and it is still not completely solved.
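
    The single-ODE setting is easy to reproduce numerically. The Python sketch below compares forward and backward Euler on y' = -a*y; both schemes are first order, and for these step sizes the implicit scheme is indeed the less accurate of the two, which is the record's point.

        import numpy as np

        a, T, y0 = 5.0, 2.0, 1.0
        exact = y0 * np.exp(-a * T)
        for n in (20, 40, 80, 160):
            h = T / n
            y_fwd = y0 * (1.0 - a * h) ** n      # explicit (forward) Euler
            y_bwd = y0 / (1.0 + a * h) ** n      # implicit (backward) Euler
            print(n, abs(y_fwd - exact), abs(y_bwd - exact))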

  6. Combining wrist age and third molars in forensic age estimation: how to calculate the joint age estimate and its error rate in age diagnostics.

    Science.gov (United States)

    Gelbrich, Bianca; Frerking, Carolin; Weiss, Sandra; Schwerdt, Sebastian; Stellzig-Eisenhauer, Angelika; Tausche, Eve; Gelbrich, Götz

    2015-01-01

Forensic age estimation in living adolescents is based on several methods, e.g. the assessment of skeletal and dental maturation. Combination of several methods is mandatory, since age estimates from a single method are too imprecise due to biological variability. The correlation of the errors of the methods being combined must be known to calculate the precision of combined age estimates. The aims were to examine the correlation of the errors of the hand and the third molar methods and to demonstrate how to calculate the combined age estimate. Clinical routine radiographs of the hand and dental panoramic images of 383 patients (aged 7.8-19.1 years, 56% female) were assessed. Lack of correlation (r = -0.024, 95% CI = -0.124 to +0.076, p = 0.64) allows calculating the combined age estimate as the weighted average of the estimates from hand bones and third molars. Combination improved the standard deviations of errors (hand = 0.97 years, teeth = 1.35 years) to 0.79 years. Uncorrelated errors of the age estimates obtained from both methods allow straightforward determination of the common estimate and its variance. This is also possible when reference data for the hand and the third molar method are established independently from each other, using different samples.
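
    With uncorrelated errors, the combination is plain inverse-variance weighting. A Python sketch reproducing the combined standard deviation reported above (0.97 and 1.35 years give about 0.79 years); the example ages are invented:

        import numpy as np

        sd_hand, sd_teeth = 0.97, 1.35
        w_hand, w_teeth = 1.0 / sd_hand**2, 1.0 / sd_teeth**2

        def combined_age(age_hand, age_teeth):
            """Inverse-variance weighted average of the two estimates (years)."""
            return (w_hand * age_hand + w_teeth * age_teeth) / (w_hand + w_teeth)

        sd_combined = np.sqrt(1.0 / (w_hand + w_teeth))
        print(f"combined estimate: {combined_age(16.0, 17.2):.1f} years")
        print(f"combined SD = {sd_combined:.2f} years")   # ~0.79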

  7. Using Satellite Error Modeling to Improve GPM-Level 3 Rainfall Estimates over the Central Amazon Region

    Directory of Open Access Journals (Sweden)

    Rômulo Oliveira

    2018-02-01

This study aims to assess the characteristics and uncertainty of Integrated Multi-satellitE Retrievals for Global Precipitation Measurement (GPM) (IMERG) Level 3 rainfall estimates and to improve those estimates using an error model over the central Amazon region. The S-band Amazon Protection National System (SIPAM) radar is used as reference, and the Precipitation Uncertainties for Satellite Hydrology (PUSH) framework is adopted to characterize uncertainties associated with the satellite precipitation product. PUSH is calibrated and validated for the study region and takes into account factors like seasonality and surface type (i.e., land and river). Results demonstrated that the PUSH model is suitable for characterizing errors in the IMERG algorithm when compared with S-band SIPAM radar estimates. PUSH could efficiently predict the satellite rainfall error distribution in terms of spatial and intensity distribution. However, an underestimation (overestimation) of light satellite rain rates was observed during the dry (wet) period, mainly over rivers. Although the estimated error showed a lower standard deviation than the observed error, the correlation between satellite and radar rainfall was high, and the systematic error was well captured along the Negro, Solimões, and Amazon rivers, especially during the wet season.

  8. Joint Estimation of Contamination, Error and Demography for Nuclear DNA from Ancient Humans

    Science.gov (United States)

    Slatkin, Montgomery

    2016-01-01

    When sequencing an ancient DNA sample from a hominin fossil, DNA from present-day humans involved in excavation and extraction will be sequenced along with the endogenous material. This type of contamination is problematic for downstream analyses as it will introduce a bias towards the population of the contaminating individual(s). Quantifying the extent of contamination is a crucial step as it allows researchers to account for possible biases that may arise in downstream genetic analyses. Here, we present an MCMC algorithm to co-estimate the contamination rate, sequencing error rate and demographic parameters—including drift times and admixture rates—for an ancient nuclear genome obtained from human remains, when the putative contaminating DNA comes from present-day humans. We assume we have a large panel representing the putative contaminant population (e.g. European, East Asian or African). The method is implemented in a C++ program called ‘Demographic Inference with Contamination and Error’ (DICE). We applied it to simulations and genome data from ancient Neanderthals and modern humans. With reasonable levels of genome sequence coverage (>3X), we find we can recover accurate estimates of all these parameters, even when the contamination rate is as high as 50%. PMID:27049965

  9. Evaluation of Error in IMERG Precipitation Estimates under Different Topographic Conditions and Temporal Scales over Mexico

    Directory of Open Access Journals (Sweden)

    Yandy G. Mayor

    2017-05-01

This study evaluates the precipitation product of the Integrated Multi-satellitE Retrievals for Global Precipitation Measurement (IMERG) over the Mexican region during the period between April 2014 and October 2015, using three different time scales for cumulative precipitation (hourly, daily and seasonal). The IMERG data have also been analyzed as a function of elevation, with rain gauges from the automatic meteorological station network located within the study area used as a reference. Continuous and categorical statistics are used to evaluate IMERG. It was found that IMERG showed better performance at the daily and seasonal time scales: while hourly precipitation estimates reached a mean correlation coefficient of 0.35, the daily and seasonal precipitation estimates achieved correlations over 0.51. In addition, the IMERG precipitation product was able to reproduce the diurnal and daily cycles of the average precipitation, with a tendency to overestimate relative to the rain gauges. However, extreme precipitation events were highly underestimated, as shown by relative biases of −61% and −46% for the hourly and daily precipitation analyses, respectively. It was also found that IMERG tends to improve precipitation detection and to decrease magnitude errors over the higher terrain elevations of Mexico.

  10. Contaminant point source localization error estimates as functions of data quantity and model quality.

    Science.gov (United States)

    Hansen, Scott K; Vesselinov, Velimir V

    2016-10-01

We develop empirically-grounded error envelopes for localization of a point contamination release event in the saturated zone of a previously uncharacterized heterogeneous aquifer into which a number of plume-intercepting wells have been drilled. We assume that flow direction in the aquifer is known exactly and velocity is known to within a factor of two of our best guess from well observations prior to source identification. Other aquifer and source parameters must be estimated by interpretation of well breakthrough data via the advection-dispersion equation. We employ high performance computing to generate numerous random realizations of aquifer parameters and well locations, simulate well breakthrough data, and then employ unsupervised machine optimization techniques to estimate the most likely spatial (or space-time) location of the source. Tabulating the accuracy of these estimates from the multiple realizations, we relate the size of 90% and 95% confidence envelopes to the data quantity (number of wells) and model quality (fidelity of ADE interpretation model to actual concentrations in a heterogeneous aquifer with channelized flow). We find that for purely spatial localization of the contaminant source, increased data quantities can make up for reduced model quality. For space-time localization, we find similar qualitative behavior, but significantly degraded spatial localization reliability and less improvement from extra data collection. Since the space-time source localization problem is much more challenging, we also tried a multiple-initial-guess optimization strategy. This greatly enhanced performance, but gains from additional data collection remained limited.

  11. GREAT3 results - I. Systematic errors in shear estimation and the impact of real galaxy morphology

    Energy Technology Data Exchange (ETDEWEB)

    Mandelbaum, R.; Rowe, B.; Armstrong, R.; Bard, D.; Bertin, E.; Bosch, J.; Boutigny, D.; Courbin, F.; Dawson, W. A.; Donnarumma, A.; Fenech Conti, I.; Gavazzi, R.; Gentile, M.; Gill, M. S. S.; Hogg, D. W.; Huff, E. M.; Jee, M. J.; Kacprzak, T.; Kilbinger, M.; Kuntzer, T.; Lang, D.; Luo, W.; March, M. C.; Marshall, P. J.; Meyers, J. E.; Miller, L.; Miyatake, H.; Nakajima, R.; Ngole Mboula, F. M.; Nurbaeva, G.; Okura, Y.; Paulin-Henriksson, S.; Rhodes, J.; Schneider, M. D.; Shan, H.; Sheldon, E. S.; Simet, M.; Starck, J. -L.; Sureau, F.; Tewes, M.; Zarb Adami, K.; Zhang, J.; Zuntz, J.

    2015-05-01

    We present first results from the third GRavitational lEnsing Accuracy Testing (GREAT3) challenge, the third in a sequence of challenges for testing methods of inferring weak gravitational lensing shear distortions from simulated galaxy images. GREAT3 was divided into experiments to test three specific questions, and included simulated space- and ground-based data with constant or cosmologically varying shear fields. The simplest (control) experiment included parametric galaxies with a realistic distribution of signal-to-noise, size, and ellipticity, and a complex point spread function (PSF). The other experiments tested the additional impact of realistic galaxy morphology, multiple exposure imaging, and the uncertainty about a spatially varying PSF; the last two questions will be explored in Paper II. The 24 participating teams competed to estimate lensing shears to within systematic error tolerances for upcoming Stage-IV dark energy surveys, making 1525 submissions overall. GREAT3 saw considerable variety and innovation in the types of methods applied. Several teams now meet or exceed the targets in many of the tests conducted (to within the statistical errors). We conclude that the presence of realistic galaxy morphology in simulations changes shear calibration biases by ~1 per cent for a wide range of methods. Other effects such as truncation biases due to finite galaxy postage stamps, and the impact of galaxy type as measured by the Sérsic index, are quantified for the first time. Our results generalize previous studies regarding sensitivities to galaxy size and signal-to-noise, and to PSF properties such as seeing and defocus. Almost all methods’ results support the simple model in which additive shear biases depend linearly on PSF ellipticity.

  12. Error estimate evaluation in numerical approximations of partial differential equations: A pilot study using data mining methods

    Science.gov (United States)

    Assous, Franck; Chaskalovic, Joël

    2013-03-01

In this Note, we propose a new methodology based on exploratory data mining techniques to evaluate the errors due to the description of a given real system. First, we decompose this description error into four types of sources. Then, we construct databases of the entire information produced by different numerical approximation methods, to assess and compare the significant differences between these methods, using techniques like decision trees, Kohonen maps, or neural networks. As an example, we characterize specific states of the real system for which we can locally appreciate the accuracy between two kinds of finite element methods. In this case, this allowed us to refine the classical Bramble-Hilbert theorem, which gives a global error estimate, whereas our approach gives a local error estimate.

  13. Pencil kernel correction and residual error estimation for quality-index-based dose calculations

    International Nuclear Information System (INIS)

    Nyholm, Tufve; Olofsson, Joergen; Ahnesjoe, Anders; Georg, Dietmar; Karlsson, Mikael

    2006-01-01

    Experimental data from 593 photon beams were used to quantify the errors in dose calculations using a previously published pencil kernel model. A correction of the kernel was derived in order to remove the observed systematic errors. The remaining residual error for individual beams was modelled through uncertainty associated with the kernel model. The methods were tested against an independent set of measurements. No significant systematic error was observed in the calculations using the derived correction of the kernel and the remaining random errors were found to be adequately predicted by the proposed method

  14. Computable error estimates of a finite difference scheme for option pricing in exponential Lévy models

    KAUST Repository

    Kiessling, Jonas

    2014-05-06

Option prices in exponential Lévy models solve certain partial integro-differential equations. This work focuses on developing novel, computable error approximations for a finite difference scheme that is suitable for solving such PIDEs. The scheme was introduced in (Cont and Voltchkova, SIAM J. Numer. Anal. 43(4):1596-1626, 2005). The main results of this work are new estimates of the dominating error terms, namely the time and space discretisation errors. In addition, the leading order terms of the error estimates are determined in a form that is more amenable to computations. The payoff is only assumed to satisfy an exponential growth condition; it is not assumed to be Lipschitz continuous, as in previous works. If the underlying Lévy process has infinite jump activity, then the jumps smaller than some cut-off size ε are approximated by diffusion. The resulting diffusion approximation error is also estimated, with the leading order term in computable form, as well as the dependence of the time and space discretisation errors on this approximation. Consequently, it is possible to determine how to jointly choose the space and time grid sizes and the cut-off parameter ε.

  15. The effect of TWD estimation error on the geometry of machined surfaces in micro-EDM milling

    DEFF Research Database (Denmark)

    Puthumana, Govindan; Bissacco, Giuliano; Hansen, Hans Nørgaard

In micro EDM (electrical discharge machining) milling, tool electrode wear must be effectively compensated in order to achieve high accuracy of machined features [1]. Tool wear compensation in micro-EDM milling can be based on off-line techniques with limited accuracy, such as estimation ... and statistical characterization of the discharge population [3]. The TWD-based approach permits the direct control of the position of the tool electrode front surface. However, TWD estimation errors will generate a self-amplifying error on the tool electrode axial depth during micro-EDM milling. ... The error propagation effect is demonstrated through a software simulation tool developed by the authors for determination of the correct TWD for subsequent use in compensation of electrode wear in EDM milling. The implemented model uses an initial arbitrary estimation of TWD and a single experiment ...

  16. Improved model predictive control of resistive wall modes by error field estimator in EXTRAP T2R

    Science.gov (United States)

    Setiadi, A. C.; Brunsell, P. R.; Frassinetti, L.

    2016-12-01

Many implementations of a model-based approach for toroidal plasmas have shown better control performance compared to the conventional type of feedback controller. One prerequisite of model-based control is the availability of a control-oriented model. This model can be obtained empirically through a systematic procedure called system identification. Such a model is used in this work to design a model predictive controller to stabilize multiple resistive wall modes in the EXTRAP T2R reversed-field pinch. Model predictive control is an advanced control method that can optimize the future behaviour of a system. Furthermore, this paper discusses an additional use of the empirical model, namely to estimate the error field in EXTRAP T2R. Two potential methods that can estimate the error field are discussed. The error field estimator is then combined with the model predictive control and yields better radial magnetic field suppression.

  17. Background Error Covariance Estimation Using Information from a Single Model Trajectory with Application to Ocean Data Assimilation

    Science.gov (United States)

    Keppenne, Christian L.; Rienecker, Michele; Kovach, Robin M.; Vernieres, Guillaume

    2014-01-01

An attractive property of ensemble data assimilation methods is that they provide flow dependent background error covariance estimates which can be used to update fields of observed variables as well as fields of unobserved model variables. Two methods to estimate background error covariances are introduced which share the above property with ensemble data assimilation methods but do not involve the integration of multiple model trajectories. Instead, all the necessary covariance information is obtained from a single model integration. The Space Adaptive Forecast error Estimation (SAFE) algorithm estimates error covariances from the spatial distribution of model variables within a single state vector. The Flow Adaptive error Statistics from a Time series (FAST) method constructs an ensemble sampled from a moving window along a model trajectory. SAFE and FAST are applied to the assimilation of Argo temperature profiles into version 4.1 of the Modular Ocean Model (MOM4.1) coupled to the GEOS-5 atmospheric model and to the CICE sea ice model. The results are validated against unassimilated Argo salinity data. They show that SAFE and FAST are competitive with the ensemble optimal interpolation (EnOI) used by the Global Modeling and Assimilation Office (GMAO) to produce its ocean analysis. Because of their reduced cost, SAFE and FAST hold promise for high-resolution data assimilation applications.
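
    A minimal reading of the FAST idea fits in a few lines: treat a moving window of states along one trajectory as the "ensemble" and compute its sample covariance. The Python sketch below uses a toy random-walk trajectory; the variable names are illustrative only.

        import numpy as np

        def fast_covariance(trajectory, window):
            """Sample covariance of the most recent `window` states.

            trajectory : (T, n) array of model states at T consecutive times
            """
            ens = trajectory[-window:]            # window acts as the ensemble
            anom = ens - ens.mean(axis=0)         # anomalies about the window mean
            return anom.T @ anom / (window - 1)

        rng = np.random.default_rng(4)
        traj = np.cumsum(rng.normal(size=(200, 3)), axis=0)   # toy trajectory
        B = fast_covariance(traj, window=30)                  # background covariance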

  18. A Posteriori Error Estimates with Computable Upper Bound for the Nonconforming Rotated Q1 Finite Element Approximation of the Eigenvalue Problems

    Directory of Open Access Journals (Sweden)

    Jie Liu

    2014-01-01

This paper discusses the nonconforming rotated Q1 finite element computable upper bound a posteriori error estimate of the boundary value problem established by M. Ainsworth and obtains efficient computable upper bound a posteriori error indicators for the eigenvalue problem associated with the boundary value problem. We extend the a posteriori error estimate to the Steklov eigenvalue problem and also derive efficient computable upper bound a posteriori error indicators. Finally, through numerical experiments, we verify the validity of the a posteriori error estimate of the boundary value problem; meanwhile, the numerical results show that the a posteriori error indicators of the eigenvalue problem and the Steklov eigenvalue problem are effective.

  19. Accounting for environmental variability, modeling errors, and parameter estimation uncertainties in structural identification

    Science.gov (United States)

    Behmanesh, Iman; Moaveni, Babak

    2016-07-01

This paper presents a Hierarchical Bayesian model updating framework to account for the effects of ambient temperature and excitation amplitude. The proposed approach is applied for model calibration, response prediction and damage identification of a footbridge under changing environmental/ambient conditions. The concrete Young's modulus of the footbridge deck is the considered updating structural parameter, with its mean and variance modeled as functions of temperature and excitation amplitude. The modal parameters identified over 27 months of continuous monitoring of the footbridge are used to calibrate the updating parameters. One of the objectives of this study is to show that by increasing the levels of information in the updating process, the posterior variation of the updating structural parameter (concrete Young's modulus) is reduced. To this end, the calibration is performed at three information levels using (1) the identified modal parameters, (2) modal parameters and ambient temperatures, and (3) modal parameters, ambient temperatures, and excitation amplitudes. The calibrated model is then validated by comparing the model-predicted natural frequencies with those identified from measured data after a deliberate change to the structural mass. It is shown that accounting for modeling error uncertainties is crucial for reliable response prediction, and that accounting for only the estimated variability of the updating structural parameter is not sufficient for accurate response predictions. Finally, the calibrated model is used for damage identification of the footbridge.

  20. Age estimation in forensic anthropology: quantification of observer error in phase versus component-based methods.

    Science.gov (United States)

    Shirley, Natalie R; Ramirez Montes, Paula Andrea

    2015-01-01

The purpose of this study was to assess observer error in phase versus component-based scoring systems used to develop age estimation methods in forensic anthropology. A method preferred by forensic anthropologists in the AAFS was selected for this evaluation (the Suchey-Brooks method for the pubic symphysis). The Suchey-Brooks descriptions were used to develop a corresponding component-based scoring system for comparison. Several commonly used reliability statistics (kappa, weighted kappa, and the intraclass correlation coefficient) were calculated to assess observer agreement between two observers and to evaluate the efficacy of each of these statistics for this study. The linear weighted kappa was determined to be the most suitable measure of observer agreement. The results show that a component-based system offers the possibility for more objective scoring than a phase system as long as the coding possibilities for each trait do not exceed three states of expression, each with as little overlap as possible.
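
    The linear weighted kappa preferred here is readily computed; a Python sketch with made-up phase scores for two observers (scikit-learn's cohen_kappa_score supports linear weights):

        from sklearn.metrics import cohen_kappa_score

        # Invented Suchey-Brooks phase scores (1-6) for the same specimens
        observer1 = [1, 2, 2, 3, 4, 4, 5, 6, 3, 2, 5, 6]
        observer2 = [1, 2, 3, 3, 4, 5, 5, 6, 3, 2, 4, 6]

        kappa_lw = cohen_kappa_score(observer1, observer2, weights="linear")
        print(f"linear weighted kappa = {kappa_lw:.2f}")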

  1. Ionospheric errors compensation for ground deformation estimation with new generation SAR

    Science.gov (United States)

    Gomba, Giorgio; De Zan, Francesco; Rodriguez Gonzalez, Fernando

    2017-04-01

Synthetic aperture radar (SAR) and interferometric SAR (InSAR) measurements are disturbed by the propagation velocity changes of microwaves that are caused by the high density of free electrons in the ionosphere. Most affected are low-frequency (L- or P-band) radars, such as the recently launched ALOS-2 and the future Tandem-L and NISAR, although higher-frequency (C- or X-band) systems, such as the recently launched Sentinel-1, are not immune. Since the ionosphere is an obstacle to achieving the precision that new-generation SAR systems need in order to remotely measure the Earth's dynamic processes, such as ground deformation, it is necessary to estimate and compensate ionospheric propagation delays in SAR signals. In this work we discuss the influence of the ionosphere on interferograms and the possible correction methods with their relative accuracies. Subsequently, the effect of ionosphere-induced errors on ground deformation measurements before and after ionosphere compensation is analyzed. Examples of corrupted measurements of earthquakes and fault motion are presented, along with the corrected results using different methods.

  2. An A Posteriori Error Estimate for Symplectic Euler Approximation of Optimal Control Problems

    KAUST Repository

    Karlsson, Peer Jesper

    2015-01-07

    This work focuses on numerical solutions of optimal control problems. A time discretization error representation is derived for the approximation of the associated value function. It concerns Symplectic Euler solutions of the Hamiltonian system connected with the optimal control problem. The error representation has a leading order term consisting of an error density that is computable from Symplectic Euler solutions. Under an assumption of the pathwise convergence of the approximate dual function as the maximum time step goes to zero, we prove that the remainder is of higher order than the leading error density part in the error representation. With the error representation, it is possible to perform adaptive time stepping. We apply an adaptive algorithm originally developed for ordinary differential equations.

  3. An Error Estimate for Symplectic Euler Approximation of Optimal Control Problems

    KAUST Repository

    Karlsson, Jesper

    2015-01-01

    This work focuses on numerical solutions of optimal control problems. A time discretization error representation is derived for the approximation of the associated value function. It concerns symplectic Euler solutions of the Hamiltonian system connected with the optimal control problem. The error representation has a leading-order term consisting of an error density that is computable from symplectic Euler solutions. Under an assumption of the pathwise convergence of the approximate dual function as the maximum time step goes to zero, we prove that the remainder is of higher order than the leading-error density part in the error representation. With the error representation, it is possible to perform adaptive time stepping. We apply an adaptive algorithm originally developed for ordinary differential equations. The performance is illustrated by numerical tests.

  4. On the population median estimation using quartile double ranked set sampling

    Directory of Open Access Journals (Sweden)

    Amer Ibrahim Al-Omari

    2015-12-01

In this article, the quartile double ranked set sampling (QDRSS) method is considered for estimating the population median. The sample median based on QDRSS is suggested as an estimator of the population median. QDRSS is compared with the simple random sampling (SRS), ranked set sampling (RSS) and quartile ranked set sampling (QRSS) methods. A real data set is used for illustration. It turns out that, for the symmetric distributions considered in this study, the QDRSS estimators are unbiased for the population median and are more efficient than their counterparts using SRS, RSS and QRSS based on the same number of measured units. For asymmetric distributions, QDRSS is biased, and it is more efficient than SRS and QRSS for all sample sizes m, while it is more efficient than RSS if m > 4.

  5. Robustness of SOC Estimation Algorithms for EV Lithium-Ion Batteries against Modeling Errors and Measurement Noise

    Directory of Open Access Journals (Sweden)

    Xue Li

    2015-01-01

State of charge (SOC) is one of the most important parameters in a battery management system (BMS). There are numerous algorithms for SOC estimation, mostly of model-based observer/filter types such as Kalman filters, closed-loop observers, and robust observers. Modeling errors and measurement noises have a critical impact on the accuracy of SOC estimation in these algorithms. This paper is a comparative study of the robustness of SOC estimation algorithms against modeling errors and measurement noises. Using a typical battery platform for vehicle applications with sensor noise and battery aging characterization, three popular and representative SOC estimation methods (extended Kalman filter, PI-controlled observer, and H∞ observer) are compared on such robustness. The simulation and experimental results demonstrate the deterioration of SOC estimation accuracy under modeling errors resulting from aging and under larger measurement noise, and this deterioration is quantitatively characterized. The findings of this paper provide useful information on the following aspects: (1) how SOC estimation accuracy depends on modeling reliability and voltage measurement accuracy; (2) the pros and cons of typical SOC estimators in their robustness and reliability; and (3) guidelines for requirements on battery system identification and sensor selection.

  6. Estimates of error introduced when one-dimensional inverse heat transfer techniques are applied to multi-dimensional problems

    International Nuclear Information System (INIS)

    Lopez, C.; Koski, J.A.; Razani, A.

    2000-01-01

    A study of the errors introduced when one-dimensional inverse heat conduction techniques are applied to problems involving two-dimensional heat transfer effects was performed. The geometry used for the study was a cylinder with dimensions similar to those of a typical container used for the transportation of radioactive materials. The finite element analysis code MSC P/Thermal was used to generate synthetic test data that were then used as input for an inverse heat conduction code. Four different problems were considered, including one with uniform flux around the outer surface of the cylinder and three with non-uniform flux applied over 360 deg, 180 deg, and 90 deg sections of the outer surface of the cylinder. The Sandia One-Dimensional Direct and Inverse Thermal (SODDIT) code was used to estimate the surface heat flux in all four cases. The error analysis was performed by comparing the results from SODDIT with the heat flux calculated from the temperature results obtained from P/Thermal. Results showed an increase in the error of the surface heat flux estimates as the applied heat became more localized. For the uniform case, SODDIT provided heat flux estimates with a maximum error of 0.5%, whereas for the non-uniform cases, the maximum errors were found to be about 3%, 7%, and 18% for the 360 deg, 180 deg, and 90 deg cases, respectively.

  7. Impact of transport and modelling errors on the estimation of methane sources and sinks by inverse modelling

    Science.gov (United States)

    Locatelli, Robin; Bousquet, Philippe; Chevallier, Frédéric

    2013-04-01

    Since the nineties, inverse modelling by assimilating atmospheric measurements into a chemical transport model (CTM) has been used to derive sources and sinks of atmospheric trace gases. More recently, the high global warming potential of methane (CH4) and unexplained variations of its atmospheric mixing ratio caught the attention of several research groups. Indeed, the diversity and variability of methane sources induce high uncertainty on the present and future evolution of the CH4 budget. With the increase of measurement data available to constrain inversions (satellite data, high-frequency surface and tall-tower observations, FTIR spectrometry, ...), the main limiting factor is about to become the representation of atmospheric transport in CTMs. Indeed, errors in transport modelling convert directly into flux changes when perfect transport is assumed in atmospheric inversions. Hence, we propose an inter-model comparison in order to quantify the impact of transport and modelling errors on the CH4 fluxes estimated in a variational inversion framework. Several inversion experiments are conducted using the same set-up (prior emissions, measurement and prior errors, OH field, initial conditions) of the variational system PYVAR, developed at LSCE (Laboratoire des Sciences du Climat et de l'Environnement, France). Nine different models (ACTM, IFS, IMPACT, IMPACT1x1, MOZART, PCTM, TM5, TM51x1 and TOMCAT) used in the TRANSCOM-CH4 experiment (Patra et al., 2011) provide synthetic measurement data at up to 280 surface sites to constrain the inversions performed using the PYVAR system. Only the CTM (and the meteorological drivers which drive it) used to create the pseudo-observations varies among inversions. Consequently, the comparison of the nine inverted methane fluxes obtained for 2005 gives a good order of magnitude of the impact of transport and modelling errors on the estimated fluxes with current and future networks. It is shown that transport and modelling errors

  8. National Suicide Rates a Century after Durkheim: Do We Know Enough to Estimate Error?

    Science.gov (United States)

    Claassen, Cynthia A.; Yip, Paul S.; Corcoran, Paul; Bossarte, Robert M.; Lawrence, Bruce A.; Currier, Glenn W.

    2010-01-01

    Durkheim's nineteenth-century analysis of national suicide rates dismissed prior concerns about mortality data fidelity. Over the intervening century, however, evidence documenting various types of error in suicide data has only mounted, and surprising levels of such error continue to be routinely uncovered. Yet the annual suicide rate remains the…

  9. A priori error estimates for an hp-version of the discontinuous Galerkin method for hyperbolic conservation laws

    Science.gov (United States)

    Bey, Kim S.; Oden, J. Tinsley

    1993-01-01

    A priori error estimates are derived for hp-versions of the finite element method for discontinuous Galerkin approximations of a model class of linear, scalar, first-order hyperbolic conservation laws. These estimates are derived in a mesh-dependent norm in which the coefficients depend upon both the local mesh size h_K and a number p_K which can be identified with the spectral order of the local approximations over each element.

  10. A functional-type a posteriori error estimate of approximate solutions for Reissner-Mindlin plates and its implementation

    Science.gov (United States)

    Frolov, Maxim; Chistiakova, Olga

    2017-06-01

    The paper is devoted to a numerical justification of a recent a posteriori error estimate for Reissner-Mindlin plates. This majorant provides a reliable control of the accuracy of any conforming approximate solution of the problem, including solutions obtained with commercial software for mechanical engineering. The estimate is developed on the basis of the functional approach and is applicable to several types of boundary conditions. To verify the approach, numerical examples with mesh refinements are provided.

  11. Adjoint-Based a Posteriori Error Estimation for Coupled Time-Dependent Systems

    KAUST Repository

    Asner, Liya

    2012-01-01

    We consider time-dependent parabolic problems coupled across a common interface which we formulate using a Lagrange multiplier construction and solve by applying a monolithic solution technique. We derive an adjoint-based a posteriori error representation for a quantity of interest given by a linear functional of the solution. We establish the accuracy of our error representation formula through numerical experimentation and investigate the effect of error in the adjoint solution. Crucially, the error representation affords a distinction between temporal and spatial errors and can be used as a basis for a blockwise time-space refinement strategy. Numerical tests illustrate the efficacy of the refinement strategy by capturing the distinctive behavior of a localized traveling wave solution. The saddle point systems considered here are equivalent to those arising in the mortar finite element technique for parabolic problems. © 2012 Society for Industrial and Applied Mathematics.

  12. Optimal Error Estimates of Two Mixed Finite Element Methods for Parabolic Integro-Differential Equations with Nonsmooth Initial Data

    KAUST Repository

    Goswami, Deepjyoti

    2013-05-01

    In the first part of this article, a new mixed method is proposed and analyzed for parabolic integro-differential equations (PIDE) with nonsmooth initial data. Compared to the standard mixed method for PIDE, the present method does not bank on a reformulation using a resolvent operator. Based on energy arguments combined with a repeated use of an integral operator, and without using a parabolic-type duality technique, optimal L2-error estimates are derived for semidiscrete approximations when the initial condition is in L2. Due to the presence of the integral term, it is further observed that a negative-norm estimate plays a crucial role in our error analysis. Moreover, the proposed analysis follows the spirit of the proof techniques used in deriving optimal error estimates for finite element approximations to PIDE with smooth data and therefore unifies both theories, i.e., the one for smooth data and the one for nonsmooth data. Finally, we extend the proposed analysis to the standard mixed method for PIDE with rough initial data and provide an optimal error estimate in L2, which improves upon the results available in the literature. © 2013 Springer Science+Business Media New York.

  13. Improvement of least-squares collocation error estimates using local GOCE Tzz signal standard deviations

    DEFF Research Database (Denmark)

    Tscherning, Carl Christian

    2015-01-01

    The method of Least-Squares Collocation (LSC) may be used for the modeling of the anomalous gravity potential (T) and for the computation (prediction) of quantities related to T by a linear functional. Errors may also be estimated. However, when using an isotropic covariance function or equivalent...

  14. Estimating Error in Using Ambient PM2.5 Concentrations as Proxies for Personal Exposures: A Review

    Science.gov (United States)

    Several methods have been used to account for measurement error inherent in using the ambient concentration of particulate matter (PM2.5, µg/m³) as a proxy for personal exposure. Common features of such methods are their reliance on the estimated ...

  15. An error bound estimate and convergence of the Nodal-LTS_N solution in a rectangle

    Energy Technology Data Exchange (ETDEWEB)

    Hauser, Eliete Biasotto (Faculty of Mathematics, PUCRS, Av Ipiranga 6681, Building 15, Porto Alegre - RS 90619-900, Brazil). E-mail: eliete@pucrs.br; Pazos, Ruben Panta (Department of Mathematics, UNISC, Av Independencia 2293, Room 1301, Santa Cruz do Sul - RS 96815-900, Brazil). E-mail: rpp@impa.br; Tullio de Vilhena, Marco (Graduate Program in Applied Mathematics, UFRGS, Av Bento Goncalves 9500, Building 43-111, Porto Alegre - RS 91509-900, Brazil). E-mail: vilhena@mat.ufrgs.br

    2005-07-15

    In this work, we report the mathematical analysis concerning the error bound estimate and convergence of the Nodal-LTS_N solution in a rectangle. To this end, we present an efficient algorithm, called the LTS_N 2D-Diag solution, for Cartesian geometry.

  16. An Optimal Error Estimates of H1-Galerkin Expanded Mixed Finite Element Methods for Nonlinear Viscoelasticity-Type Equation

    Directory of Open Access Journals (Sweden)

    Haitao Che

    2011-01-01

    We investigate an H1-Galerkin mixed finite element method for nonlinear viscoelasticity equations based on the H1-Galerkin method and the expanded mixed element method. The existence and uniqueness of solutions to the numerical scheme are proved. A priori error estimates are derived for the unknown function, the gradient function, and the flux.

  17. Error estimation and global fitting in transverse-relaxation dispersion experiments to determine chemical-exchange parameters

    International Nuclear Information System (INIS)

    Ishima, Rieko; Torchia, Dennis A.

    2005-01-01

    Off-resonance effects can introduce significant systematic errors in R2 measurements in constant-time Carr-Purcell-Meiboom-Gill (CPMG) transverse relaxation dispersion experiments. For an off-resonance chemical shift of 500 Hz, 15N relaxation dispersion profiles obtained from experiment and computer simulation indicated a systematic error of ca. 3%. This error is three- to five-fold larger than the random error in R2 caused by noise. Good estimates of the total R2 uncertainty are critical in order to obtain accurate estimates of optimized chemical-exchange parameters and their uncertainties derived from χ2 minimization of a target function. Here, we present a simple empirical approach that provides a good estimate of the total error (systematic + random) in 15N R2 values measured for the HIV protease. The advantage of this empirical error estimate is that it is applicable even when some of the factors that contribute to the off-resonance error are not known. These errors are incorporated into a χ2 minimization protocol, in which the Carver-Richards equation is used to fit the observed R2 dispersion profiles, that yields optimized chemical-exchange parameters and their confidence limits. Optimized parameters are also derived, using the same protein sample and data-fitting protocol, from 1H R2 measurements in which systematic errors are negligible. Although 1H and 15N relaxation profiles of individual residues were well fit, the optimized exchange parameters had large uncertainties (confidence limits). In contrast, when a single pair of exchange parameters (the exchange lifetime, τ_ex, and the fractional population, p_a) were constrained to globally fit all R2 profiles for residues in the dimer interface of the protein, confidence limits were less than 8% for all optimized exchange parameters. In addition, F-tests showed that the quality of the fits obtained using τ_ex and p_a as global parameters was not improved when these parameters were free to fit the R

  18. Quantitative estimation of the human error probability during soft control operations

    International Nuclear Information System (INIS)

    Lee, Seung Jun; Kim, Jaewhan; Jung, Wondea

    2013-01-01

    Highlights: ► An HRA method to evaluate execution HEP for soft control operations was proposed. ► The soft control tasks were analyzed and design-related influencing factors were identified. ► An application to evaluate the effects of soft controls was performed. - Abstract: In this work, a method was proposed for quantifying human errors that can occur during operation executions using soft controls. Soft controls of advanced main control rooms have totally different features from conventional controls, and thus they may have different human error modes and occurrence probabilities. It is important to identify the human error modes and quantify the error probability for evaluating the reliability of the system and preventing errors. This work suggests an evaluation framework for quantifying the execution error probability using soft controls. In the application result, it was observed that the human error probabilities of soft controls showed both positive and negative results compared to the conventional controls according to the design quality of advanced main control rooms

  19. Application of the error propagation theory in estimates of static formation temperatures in geothermal and petroleum boreholes

    International Nuclear Information System (INIS)

    Verma, Surendra P.; Andaverde, Jorge; Santoyo, E.

    2006-01-01

    We used the error propagation theory to calculate uncertainties in static formation temperature estimates in geothermal and petroleum wells from three widely used methods (line-source or Horner method; spherical and radial heat flow method; and cylindrical heat source method). Although these methods commonly use an ordinary least-squares linear regression model considered in this study, we also evaluated two variants of a weighted least-squares linear regression model for the actual relationship between the bottom-hole temperature and the corresponding time functions. Equations based on the error propagation theory were derived for estimating uncertainties in the time function of each analytical method. These uncertainties, in conjunction with those on bottom-hole temperatures, were used to estimate the individual weighting factors required for applying the two variants of the weighted least-squares regression model. Standard deviations and 95% confidence limits of the intercept were calculated for both types of linear regressions. Applications showed that static formation temperatures computed with the spherical and radial heat flow method were generally greater (at the 95% confidence level) than those from the other two methods under study. When typical measurement errors of 0.25 h in time and 5 deg. C in bottom-hole temperature were assumed for the weighted least-squares model, the uncertainties in the estimated static formation temperatures were greater than those for the ordinary least-squares model. However, if these errors were smaller (about 1% in time and 0.5% in temperature measurements), the weighted least-squares linear regression model would generally provide smaller uncertainties for the estimated temperatures than the ordinary least-squares linear regression model. Therefore, the weighted model would be statistically correct and more appropriate for such applications. We also suggest that at least 30 precise and accurate BHT and time measurements along with
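
    Of the three methods, the line-source (Horner) method lends itself to a compact illustration. The sketch below fits the ordinary least-squares variant discussed in the record and propagates the residual scatter into a standard error for the intercept, i.e. the static formation temperature; the shut-in times, temperatures, and circulation time are invented for the example.

    ```python
    import numpy as np

    def horner_sft(dt, bht, tc):
        """Ordinary least-squares Horner plot: BHT against ln((tc + dt)/dt).

        The intercept (x -> 0, infinite shut-in time) estimates the static
        formation temperature (SFT); its standard error follows from the
        standard OLS covariance formulas.
        """
        x = np.log((tc + dt) / dt)
        A = np.column_stack([np.ones_like(x), x])
        coef, res, *_ = np.linalg.lstsq(A, bht, rcond=None)
        n = len(x)
        s2 = res[0] / (n - 2)                     # residual variance
        cov = s2 * np.linalg.inv(A.T @ A)         # parameter covariance matrix
        return coef[0], np.sqrt(cov[0, 0])        # SFT estimate and its std. error

    # Hypothetical shut-in times [h] and bottom-hole temperatures [deg C]
    dt  = np.array([6.0, 12.0, 18.0, 24.0, 30.0])
    bht = np.array([110.2, 117.8, 121.5, 124.0, 125.9])
    sft, se = horner_sft(dt, bht, tc=4.0)         # tc: circulation time [h]
    print(f"static formation temperature: {sft:.1f} +/- {se:.1f} deg C")
    ```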

  20. Use of the breeding technique to estimate the structure of the analysis 'errors of the day'

    Directory of Open Access Journals (Sweden)

    M. Corazza

    2003-01-01

    A 3D-variational data assimilation scheme for a quasi-geostrophic channel model (Morss, 1998) is used to study the structure of the background error and its relationship to the corresponding bred vectors. The "true" evolution of the model atmosphere is defined by an integration of the model, and "rawinsonde observations" are simulated by randomly perturbing the true state at fixed locations. Case studies using different observational densities are considered to compare the evolution of the bred vectors to the spatial structure of the background error. In addition, the bred vector dimension (BV-dimension), defined by Patil et al. (2001), is applied to the bred vectors. It is found that after 3-5 days the bred vectors develop well organized structures which are very similar for the two different norms (enstrophy and streamfunction) considered in this paper. When 10 surrogate bred vectors (corresponding to different days from that of the background error) are used to describe the local patterns of the background error, the explained variance is quite high, about 85-88%, indicating that the statistical average properties of the bred vectors represent well those of the background error. However, a subspace of 10 bred vectors corresponding to the time of the background error increased the percentage of explained variance to 96-98%, with the largest percentage when the background errors are large. These results suggest that a statistical basis of bred vectors collected over time can be used to create an effective constant background error covariance for data assimilation with 3D-Var. Including the "errors of the day" through the use of bred vectors corresponding to the background forecast time can bring an additional significant improvement.
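
    The breeding cycle itself is simple to state in code: integrate a control run and a perturbed run, difference them, rescale the difference to a fixed amplitude, and restart. A minimal sketch on the Lorenz-63 system (a stand-in for the quasi-geostrophic channel model used in the paper) follows; the step sizes, cycle lengths, and amplitudes are illustrative.

    ```python
    import numpy as np

    def lorenz_step(x, dt=0.01, s=10.0, r=28.0, b=8.0 / 3.0):
        """One forward-Euler step of the Lorenz-63 system (toy 'model')."""
        dx = np.array([s * (x[1] - x[0]),
                       x[0] * (r - x[2]) - x[1],
                       x[0] * x[1] - b * x[2]])
        return x + dt * dx

    def breed(x0, delta0, n_cycles=50, steps_per_cycle=8, size=1e-3):
        """Breeding: run control and perturbed integrations side by side and
        rescale their difference to a fixed amplitude at the end of each
        cycle; the rescaled difference is the bred vector."""
        ctrl, pert = x0.copy(), x0 + delta0
        for _ in range(n_cycles):
            for _ in range(steps_per_cycle):
                ctrl = lorenz_step(ctrl)
                pert = lorenz_step(pert)
            bv = pert - ctrl
            bv *= size / np.linalg.norm(bv)   # rescale to the initial amplitude
            pert = ctrl + bv                  # restart perturbed run from control + BV
        return bv

    bv = breed(np.array([1.0, 1.0, 20.0]), delta0=np.array([1e-3, 0.0, 0.0]))
    print("bred vector direction:", bv / np.linalg.norm(bv))
    ```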

  1. DC Link Current Estimation in Wind-Double Feed Induction Generator Power Conditioning System

    Directory of Open Access Journals (Sweden)

    MARIAN GAICEANU

    2010-12-01

    In this paper, the implementation of a DC-link current estimator in the power conditioning system of a variable-speed wind turbine is shown. The wind turbine is connected to a doubly fed induction generator (DFIG). The variable electrical energy parameters delivered by the DFIG are matched to the electrical grid parameters through a back-to-back power converter. The bidirectional AC-AC power converter covers a wide speed range, from subsynchronous to supersynchronous speeds. The modern control of the back-to-back power converter involves the power balance concept; therefore, its load power should be known at any instant. By using power balance control, the DC-link voltage variation under load changes can be reduced. In this paper, the load power is estimated indirectly from the DC link through a second-order DC-link current estimator. The load current estimator is based on the DC-link voltage and on the DC-link input current of the rotor-side converter. This method has certain advantages over a measurement-based approach, which requires a low-pass filter: no time delay, a ripple-free feedforward current component, no additional hardware, and a faster control response. Numerical simulation demonstrates the performance of the proposed DC-link output current estimator.
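
    A second-order DC-link current estimator of the kind described can be sketched as a discrete Luenberger observer on the link dynamics C dv/dt = i_in - i_load, with the load current modeled as a slowly varying state. The capacitance, sampling period, and observer gains below are invented for illustration; the paper's estimator structure may differ in detail.

    ```python
    import numpy as np

    # Hypothetical DC-link parameters (not taken from the paper)
    C, TS = 1.1e-3, 1e-4           # link capacitance [F], sampling period [s]

    # Continuous model: C dv/dt = i_in - i_load, d(i_load)/dt ~ 0.
    # Discretised (forward Euler) with state x = [v_dc, i_load]:
    A  = np.array([[1.0, -TS / C],
                   [0.0,  1.0]])
    B  = np.array([TS / C, 0.0])   # input: measured rectifier-side current i_in
    Cm = np.array([1.0, 0.0])      # measurement: DC-link voltage

    L = np.array([0.4, -2.0])      # observer gains (chosen for stable error poles)

    def observer_step(x_hat, v_meas, i_in):
        """One step of the second-order Luenberger observer: predict the link
        state from the input current, correct with the measured link voltage.
        The second state is the estimated load current, obtained without a
        low-pass filter and hence without added delay."""
        x_pred = A @ x_hat + B * i_in
        return x_pred + L * (v_meas - Cm @ x_pred)

    # Usage per control period:
    #   x_hat = observer_step(x_hat, v_dc_measured, i_in_measured)
    #   i_load_hat = x_hat[1]   # feedforward term for power-balance control
    x_hat = np.zeros(2)
    ```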

  2. Multivariate analysis for the estimation of target localization errors in fiducial marker-based radiotherapy

    Energy Technology Data Exchange (ETDEWEB)

    Takamiya, Masanori [Department of Nuclear Engineering, Graduate School of Engineering, Kyoto University, Kyoto 606-8501, Japan and Department of Radiation Oncology and Image-applied Therapy, Graduate School of Medicine, Kyoto University, Kyoto 606-8507 (Japan); Nakamura, Mitsuhiro, E-mail: m-nkmr@kuhp.kyoto-u.ac.jp; Akimoto, Mami; Ueki, Nami; Yamada, Masahiro; Matsuo, Yukinori; Mizowaki, Takashi; Hiraoka, Masahiro [Department of Radiation Oncology and Image-applied Therapy, Graduate School of Medicine, Kyoto University, Kyoto 606-8507 (Japan); Tanabe, Hiroaki [Division of Radiation Oncology, Institute of Biomedical Research and Innovation, Kobe 650-0047 (Japan); Kokubo, Masaki [Division of Radiation Oncology, Institute of Biomedical Research and Innovation, Kobe 650-0047, Japan and Department of Radiation Oncology, Kobe City Medical Center General Hospital, Kobe 650-0047 (Japan); Itoh, Akio [Department of Nuclear Engineering, Graduate School of Engineering, Kyoto University, Kyoto 606-8501 (Japan)

    2016-04-15

    Purpose: To assess the target localization error (TLE) in terms of the distance between the target and the localization point estimated from the surrogates (|TMD|), the average respiratory motion of the surrogates and the target (|aRM|), and the number of fiducial markers used for estimating the target (n). Methods: This study enrolled 17 lung cancer patients who subsequently underwent four fractions of real-time tumor-tracking irradiation. Four or five fiducial markers were implanted around the lung tumor. The three-dimensional (3D) distance between the tumor and markers was at maximum 58.7 mm. One of the markers was used as the target (P_t), and those markers with a 3D |TMD_n| ≤ 58.7 mm at end-exhalation were then selected. The estimated target position (P_e) was calculated from a localization point consisting of one to three markers except P_t. Respiratory motion for P_t and P_e was defined as the root mean square of each displacement, and |aRM| was calculated from the mean value. TLE was defined as the root mean square of each difference between P_t and P_e during the monitoring of each fraction. These procedures were performed repeatedly using the remaining markers. To provide the best guidance on the answer with n and |TMD|, fiducial markers with a 3D |aRM| ≥ 10 mm were selected. Finally, a total of 205, 282, and 76 TLEs that fulfilled the 3D |TMD| and 3D |aRM| criteria were obtained for n = 1, 2, and 3, respectively. Multiple regression analysis (MRA) was used to evaluate TLE as a function of |TMD| and |aRM| for each n. Results: |TMD| for n = 1 was larger than that for n = 3. Moreover, |aRM| was almost constant for all n, indicating a similar scale for the markers' motion near the lung tumor. MRA showed that |aRM| in the left–right direction was the major cause of TLE; however, the contribution made little difference to the 3D TLE because of the small amount of motion in the left–right direction. The TLE

  3. Procedures for using expert judgment to estimate human-error probabilities in nuclear power plant operations. [PWR; BWR]

    Energy Technology Data Exchange (ETDEWEB)

    Seaver, D.A.; Stillwell, W.G.

    1983-03-01

    This report describes and evaluates several procedures for using expert judgment to estimate human-error probabilities (HEPs) in nuclear power plant operations. These HEPs are currently needed for several purposes, particularly for probabilistic risk assessments. Data do not exist for estimating these HEPs, so expert judgment can provide these estimates in a timely manner. Five judgmental procedures are described here: paired comparisons, ranking and rating, direct numerical estimation, indirect numerical estimation and multiattribute utility measurement. These procedures are evaluated in terms of several criteria: quality of judgments, difficulty of data collection, empirical support, acceptability, theoretical justification, and data processing. Situational constraints such as the number of experts available, the number of HEPs to be estimated, the time available, the location of the experts, and the resources available are discussed in regard to their implications for selecting a procedure for use.

  4. Systematic errors in temperature estimates from MODIS data covering the western Palearctic and their impact on a parasite development model

    Directory of Open Access Journals (Sweden)

    Jorge Alonso-Carné

    2013-11-01

    The modelling of habitat suitability for parasites is a growing area of research due to its association with climate change and the ensuing shifts in the distribution of infectious diseases. Such models depend on remote sensing data and require accurate, high-resolution temperature measurements. Temperature is critical for accurate estimation of development rates and potential habitat ranges for a given parasite. The MODIS sensors aboard the Aqua and Terra satellites provide high-resolution temperature data for remote sensing applications. This paper describes a comparative analysis of MODIS-derived temperatures relative to ground records of surface temperature in the western Palaearctic. The results show that MODIS overestimated maximum temperature values and underestimated minimum temperatures by up to 5-6 ºC. The combined use of both Aqua and Terra datasets provided the most accurate temperature estimates around latitude 35-44º N, with an overestimation during spring-summer months and an underestimation in autumn-winter. Errors in temperature estimation were associated with specific ecological regions within the target area as well as with technical limitations in the temporal and orbital coverage of the satellites (e.g. sensor limitations and satellite transit times). We estimated the propagation of temperature uncertainties into parasite habitat suitability models by comparing the outcomes of published models; the propagated errors reached 36% of the respective annual measurements, depending on the model used. Our analysis demonstrates the importance of adequate image processing and points out the limitations of MODIS temperature data as inputs into predictive models concerning parasite lifecycles.

  5. Estimating angle-dependent systematic error and measurement uncertainty for a conoscopic holography measurement system

    Science.gov (United States)

    Paviotti, Anna; Carmignato, Simone; Voltan, Alessandro; Laurenti, Nicola; Cortelazzo, Guido M.

    2009-01-01

    The aim of this study is to assess angle-dependent systematic errors and measurement uncertainties for a conoscopic holography laser sensor mounted on a Coordinate Measuring Machine (CMM). The main contribution of our work is the definition of a methodology for the derivation of point-sensitive systematic and random errors, which must be determined in order to evaluate the accuracy of the measuring system. An ad hoc three-dimensional artefact was built for the task. The experimental test was designed so as to isolate the effects of angular variations from those of other influence quantities that might affect the measurement result. We identified the best measurand for assessing angle-dependent errors and obtained preliminary results on the expression of the systematic error and measurement uncertainty as a function of the zenith angle for the chosen measurement system and sample material.

  6. Doubling inequalities for anisotropic plate equations and applications to size estimates of inclusions

    International Nuclear Information System (INIS)

    Di Cristo, M; Lin, C-L; Morassi, A; Rosset, E; Vessella, S; Wang, J-N

    2013-01-01

    We prove upper and lower estimates of the area of an unknown elastic inclusion in a thin plate by one boundary measurement. The plate is made of non-homogeneous linearly elastic material belonging to a general class of anisotropy, and the domain of the inclusion is a measurable subset of the plate. The size estimates are expressed in terms of the work exerted by a couple field applied at the boundary and of the induced transversal displacement and its normal derivative taken at the boundary of the plate. The main new mathematical tool is a doubling inequality for solutions to fourth-order elliptic equations whose principal part P(x, D) is the product of two second-order elliptic operators P1(x, D), P2(x, D) such that P1(0, D) = P2(0, D). The proof of the doubling inequality is based on the Carleman method, a sharp three-spheres inequality and a bootstrapping argument.

  7. Error vector magnitude based parameter estimation for digital filter back-propagation mitigating SOA distortions in 16-QAM.

    Science.gov (United States)

    Amiralizadeh, Siamak; Nguyen, An T; Rusch, Leslie A

    2013-08-26

    We investigate the performance of digital filter back-propagation (DFBP) using coarse parameter estimation for mitigating SOA nonlinearity in coherent communication systems. We introduce a simple, low-overhead method for parameter estimation for DFBP based on the error vector magnitude (EVM) as a figure of merit. The bit error rate (BER) penalty achieved with this method is negligible compared to DFBP with fine parameter estimation. We examine different bias currents for two commercial SOAs used as booster amplifiers in our experiments to find optimum operating points and experimentally validate our method. The coarse-parameter DFBP efficiently compensates SOA-induced nonlinearity for both SOA types over 80 km propagation of a 16-QAM signal at 22 Gbaud.

  8. Correction of Sampling Errors in Ocean Surface Cross-Sectional Estimates from Nadir-Looking Weather Radar

    Science.gov (United States)

    Caylor, I. Jeff; Meneghini, R.; Miller, L. S.; Heymsfield, G. M.

    1997-01-01

    The return from the ocean surface has a number of uses for airborne meteorological radar. The normalized surface cross section has been used for radar system calibration, estimation of surface winds, and in algorithms for estimating the path-integrated attenuation in rain. However, meteorological radars are normally optimized for observation of distributed targets that fill the resolution volume, and so a point target such as the surface can be poorly sampled, particularly at near-nadir look angles. Sampling the nadir surface return at an insufficient rate results in a negative bias of the estimated cross section. This error is found to be as large as 4 dB using observations from a high-altitude airborne radar. An algorithm for mitigating the error is developed that is based upon the shape of the surface echo and uses the returned signal at the three range gates nearest the peak surface echo.
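
    A common way to use "the returned signal at the three range gates nearest the peak" is three-point parabolic interpolation, sketched below. This generic scheme is a stand-in for the paper's algorithm, which is tuned to the shape of the surface echo; the gate spacing in the usage comment is invented.

    ```python
    import numpy as np

    def refine_surface_peak(power_db, gate_spacing):
        """Parabolic (three-point) interpolation around the strongest range gate.

        Fitting a parabola through the peak sample and its two neighbours gives
        a sub-gate estimate of the surface-echo position and amplitude, which
        mitigates the negative bias a coarsely sampled point target suffers when
        only the raw gate maximum is used. Assumes the peak is not at either end
        of the gate array.
        """
        k = int(np.argmax(power_db))
        y0, y1, y2 = power_db[k - 1], power_db[k], power_db[k + 1]
        # vertex offset (in gates) and value of the fitted parabola
        offset = 0.5 * (y0 - y2) / (y0 - 2.0 * y1 + y2)
        peak = y1 - 0.25 * (y0 - y2) * offset
        return (k + offset) * gate_spacing, peak

    gates = np.array([1.0, 3.0, 9.0, 7.0, 2.0])            # surface echo in dB
    print(refine_surface_peak(gates, gate_spacing=37.5))   # range [m], peak [dB]
    ```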

  9. Estimating recurrence and incidence of preterm birth subject to measurement error in gestational age: A hidden Markov modeling approach.

    Science.gov (United States)

    Albert, Paul S

    2018-02-21

    Prediction of preterm birth as well as characterizing the etiological factors affecting both the recurrence and incidence of preterm birth (defined as gestational age at birth ≤ 37 wk) are important problems in obstetrics. The National Institute of Child Health and Human Development (NICHD) consecutive pregnancy study recently examined this question by collecting data on a cohort of women with at least 2 pregnancies over a fixed time interval. Unfortunately, measurement error due to the dating of conception may induce sizable error in computing gestational age at birth. This article proposes a flexible approach that accounts for measurement error in gestational age when making inference. The proposed approach is a hidden Markov model that accounts for measurement error in gestational age by exploiting the relationship between gestational age at birth and birth weight. We initially model the measurement error as being normally distributed, followed by a mixture of normals that has been proposed on the basis of biological considerations. We examine the asymptotic bias of the proposed approach when measurement error is ignored and also compare the efficiency of this approach to a simpler hidden Markov model formulation where only gestational age and not birth weight is incorporated. The proposed model is compared with alternative models for estimating important covariate effects on the risk of subsequent preterm birth using a unique set of data from the NICHD consecutive pregnancy study. Published 2018. This article is a U.S. Government work and is in the public domain in the USA.

  10. On the use of robust estimators for standard errors in the presence of clustering when clustering membership is misspecified.

    Science.gov (United States)

    Desai, Manisha; Bryson, Susan W; Robinson, Thomas

    2013-03-01

    This paper examines the implications of using robust estimators (REs) of standard errors in the presence of clustering when cluster membership is unclear as may commonly occur in clustered randomized trials. For example, in such trials, cluster membership may not be recorded for one or more treatment arms and/or cluster membership may be dynamic. When clusters are well defined, REs have properties that are robust to misspecification of the correlation structure. To examine whether results were sensitive to assumptions about the clustering membership, we conducted simulation studies for a two-arm clinical trial, where the number of clusters, the intracluster correlation (ICC), and the sample size varied. REs of standard errors that incorrectly assumed clustering of data that were truly independent yielded type I error rates of up to 40%. Partial and complete misspecifications of membership (where some and no knowledge of true membership were incorporated into assumptions) for data generated from a large number of clusters (50) with a moderate ICC (0.20) yielded type I error rates that ranged from 7.2% to 9.1% and 10.5% to 45.6%, respectively; incorrectly assuming independence gave a type I error rate of 10.5%. REs of standard errors can be useful when the ICC and knowledge of cluster membership are high. When the ICC is weak, a number of factors must be considered. Our findings suggest guidelines for making sensible analytic choices in the presence of clustering. Copyright © 2012 Elsevier Inc. All rights reserved.
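
    For reference, the cluster-robust (sandwich) variance estimator the paper exercises can be written in a few lines; passing a wrong cluster_ids vector to this function reproduces the kind of membership misspecification studied. The demo data are synthetic and the function name is illustrative.

    ```python
    import numpy as np

    def cluster_robust_se(X, y, cluster_ids):
        """OLS with cluster-robust (sandwich) standard errors.

        The 'meat' sums outer products of score contributions within each
        assumed cluster; misspecifying cluster_ids changes this sum and hence
        the standard errors, which is the sensitivity examined in the paper.
        """
        n, p = X.shape
        beta = np.linalg.solve(X.T @ X, X.T @ y)
        resid = y - X @ beta
        bread = np.linalg.inv(X.T @ X)
        meat = np.zeros((p, p))
        for g in np.unique(cluster_ids):
            Xg, ug = X[cluster_ids == g], resid[cluster_ids == g]
            sg = Xg.T @ ug                    # cluster score contribution
            meat += np.outer(sg, sg)
        cov = bread @ meat @ bread
        return beta, np.sqrt(np.diag(cov))

    rng = np.random.default_rng(0)
    g = np.repeat(np.arange(30), 10)                   # 30 clusters of 10 units
    u = rng.normal(size=30)[g] + rng.normal(size=300)  # shared cluster effect + noise
    X = np.column_stack([np.ones(300), rng.normal(size=300)])
    y = X @ np.array([1.0, 0.5]) + u
    print(cluster_robust_se(X, y, g)[1])               # SEs accounting for clustering
    ```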

  11. Development and estimation of a semi-compensatory model with a flexible error structure

    DEFF Research Database (Denmark)

    Kaplan, Sigal; Shiftan, Yoram; Bekhor, Shlomo

    2012-01-01

    Semi-compensatory models typically assume independently and identically distributed error terms across alternatives at the choice stage. This study relaxes the assumption by introducing nested substitution patterns and, alternatively, random taste heterogeneity at the choice stage, thus equating the structural flexibility of semi-compensatory models to their compensatory counterparts. The proposed model is applied to off-campus rental apartment choice by students. Results show the feasibility and importance of introducing a flexible error structure into semi-compensatory models.

  12. Estimation of the Coefficient of Variation with Minimum Risk: A Sequential Method for Minimizing Sampling Error and Study Cost.

    Science.gov (United States)

    Chattopadhyay, Bhargab; Kelley, Ken

    2016-01-01

    The coefficient of variation is an effect size measure with many potential uses in psychology and related disciplines. We propose a general theory for a sequential estimation of the population coefficient of variation that considers both the sampling error and the study cost, importantly without specific distributional assumptions. Fixed sample size planning methods, commonly used in psychology and related fields, cannot simultaneously minimize both the sampling error and the study cost. The sequential procedure we develop is the first sequential sampling procedure developed for estimating the coefficient of variation. We first present a method of planning a pilot sample size after the research goals are specified by the researcher. Then, after collecting a sample size as large as the estimated pilot sample size, a check is performed to assess whether the conditions necessary to stop the data collection have been satisfied. If not an additional observation is collected and the check is performed again. This process continues, sequentially, until a stopping rule involving a risk function is satisfied. Our method ensures that the sampling error and the study costs are considered simultaneously so that the cost is not higher than necessary for the tolerable sampling error. We also demonstrate a variety of properties of the distribution of the final sample size for five different distributions under a variety of conditions with a Monte Carlo simulation study. In addition, we provide freely available functions via the MBESS package in R to implement the methods discussed.
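
    A minimal sketch of the sequential idea follows: collect a pilot sample, then add one observation at a time until a risk-based stopping rule fires. The stopping rule below is an invented stand-in with the right qualitative shape (more variability or cheaper observations imply a larger final sample), not the rule derived in the paper or implemented in the MBESS package.

    ```python
    import numpy as np

    def sequential_cv(sample_fn, pilot_n=20, cost=1e-5, a_weight=1.0, max_n=10**5):
        """Sequential point estimation of the coefficient of variation (CV).

        Collect a pilot sample, then keep adding one observation until the
        (assumed) rule fires: stop once n exceeds sqrt(a_weight / cost) times
        the current CV estimate, trading estimation risk against the
        per-observation cost.
        """
        data = [sample_fn() for _ in range(pilot_n)]
        while len(data) < max_n:
            x = np.asarray(data)
            cv = x.std(ddof=1) / x.mean()
            if len(data) >= np.sqrt(a_weight / cost) * cv:
                break                 # further sampling no longer worth the cost
            data.append(sample_fn())
        return cv, len(data)

    rng = np.random.default_rng(1)
    cv_hat, n_final = sequential_cv(lambda: rng.gamma(shape=4.0, scale=2.0))
    print(f"estimated CV = {cv_hat:.3f} after n = {n_final} observations")  # true CV = 0.5
    ```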

  13. Performance Analysis of Amplify-and-Forward Two-Way Relaying with Co-Channel Interference and Channel Estimation Error

    KAUST Repository

    Liang Yang,

    2013-06-01

    In this paper, we consider the performance of a two-way amplify-and-forward relaying network (AF TWRN) in the presence of unequal-power co-channel interferers (CCI). Specifically, we first consider an AF TWRN with an interference-limited relay and two noisy nodes with channel estimation errors and CCI. We derive approximate signal-to-interference-plus-noise ratio expressions and then use them to evaluate the outage probability, error probability, and achievable rate. Subsequently, to investigate the joint effects of the channel estimation error and CCI on the system performance, we extend our analysis to a multiple-relay network and derive several asymptotic performance expressions. For comparison purposes, we also provide the analysis for the relay selection scheme under a total power constraint at the relays. For AF TWRN with channel estimation error and CCI, numerical results show that the performance of the relay selection scheme is not always better than that of the all-relay participating case. In particular, the relay selection scheme can improve the system performance in the case of high power levels at the sources and small powers at the relays.

  14. Error-rate estimation in discriminant analysis of non-linear longitudinal data: A comparison of resampling methods.

    Science.gov (United States)

    de la Cruz, Rolando; Fuentes, Claudio; Meza, Cristian; Núñez-Antón, Vicente

    2018-04-01

    Consider longitudinal observations across different subjects such that the underlying distribution is determined by a non-linear mixed-effects model. In this context, we look at the misclassification error rate for allocating future subjects using cross-validation, bootstrap algorithms (parametric bootstrap, leave-one-out, .632 and .632+), and bootstrap cross-validation (which combines the first two approaches), and conduct a numerical study to compare the performance of the different methods. The simulation and comparisons in this study are motivated by real observations from a pregnancy study in which one of the main objectives is to predict normal versus abnormal pregnancy outcomes based on information gathered at early stages. Since in this type of study it is not uncommon to have insufficient data to simultaneously solve the classification problem and estimate the misclassification error rate, we pay special attention to situations where only a small sample size is available. We discuss how the misclassification error rate estimates may be affected by the sample size in terms of variability and bias, and examine conditions under which the misclassification error rate estimates perform reasonably well.
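
    Among the resampling estimators compared, the .632 bootstrap has a particularly compact definition: 0.368 times the apparent (resubstitution) error plus 0.632 times the average out-of-bag error. A generic sketch with a toy nearest-centroid classifier follows; both the classifier and the data are invented for illustration (the paper's setting uses non-linear mixed-effects models).

    ```python
    import numpy as np

    def err_632(X, y, fit, predict, n_boot=200, rng=None):
        """The .632 bootstrap estimate of misclassification error:
        0.368 * apparent error + 0.632 * out-of-bag error, where the
        out-of-bag term averages errors on observations left out of each
        bootstrap resample."""
        rng = rng or np.random.default_rng(0)
        n = len(y)
        apparent = np.mean(predict(fit(X, y), X) != y)
        oob_errs = []
        for _ in range(n_boot):
            idx = rng.integers(0, n, n)                   # bootstrap resample
            oob = np.setdiff1d(np.arange(n), idx)         # left-out observations
            if oob.size == 0:
                continue
            m = fit(X[idx], y[idx])
            oob_errs.append(np.mean(predict(m, X[oob]) != y[oob]))
        return 0.368 * apparent + 0.632 * np.mean(oob_errs)

    # Toy nearest-centroid classifier: class -> centroid dictionary.
    fit = lambda X, y: {c: X[y == c].mean(axis=0) for c in np.unique(y)}
    predict = lambda m, X: np.array(
        [min(m, key=lambda c: np.linalg.norm(x - m[c])) for x in X])

    rng = np.random.default_rng(0)
    X = np.vstack([rng.normal(0.0, 1.0, (30, 2)), rng.normal(1.5, 1.0, (30, 2))])
    y = np.repeat([0, 1], 30)
    print(f".632 error estimate: {err_632(X, y, fit, predict):.3f}")
    ```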

  15. Performance analysis of amplify-and-forward two-way relaying with co-channel interference and channel estimation error

    KAUST Repository

    Yang, Liang

    2013-04-01

    In this paper, we consider the performance of a two-way amplify-and-forward relaying network (AF TWRN) in the presence of unequal-power co-channel interferers (CCI). Specifically, we consider an AF TWRN with an interference-limited relay and two noisy nodes with channel estimation error and CCI. We derive approximate signal-to-interference-plus-noise ratio expressions and then use these expressions to evaluate the outage probability and error probability. Numerical results show that the approximate closed-form expressions are very close to the exact ones. © 2013 IEEE.

  16. Improvement of the estimate of the speaker-error microphone transfer function in an active noise controller

    Science.gov (United States)

    Minguez, Antonio; Recuero, M.

    A new method is proposed to improve the estimate of the transfer function between the loudspeaker and the error microphone (H(z)) with the help of an auxiliary white noise. This method makes the plant acoustic noise level independent of the auxiliary random noise, and thus the amplitude of this noise can be reduced, improving the total system performance. The idea is to remove the acoustic noise from the error signal with adaptive schemes, so that the plant acoustic noise does not affect the convergence of the LMS FIR algorithm used in a direct scheme of identification of H(z).
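
    The core of the scheme is ordinary LMS FIR identification of H(z) driven by the auxiliary white noise; a minimal sketch follows, with the plant-noise-removal stage only noted in a comment. The toy secondary path, filter length, and step size are illustrative, not values from the paper.

    ```python
    import numpy as np

    def lms_identify(x, d, n_taps=32, mu=0.01):
        """LMS FIR identification of the speaker-to-error-microphone path H(z).

        x is the auxiliary white noise fed to the loudspeaker and d the
        error-microphone signal; the adaptive filter converges toward the
        impulse response of H(z). In the scheme described above, the plant
        acoustic noise would first be removed from d by a separate adaptive
        stage, so a weaker auxiliary noise suffices.
        """
        w = np.zeros(n_taps)
        buf = np.zeros(n_taps)
        for n in range(len(x)):
            buf = np.roll(buf, 1)
            buf[0] = x[n]
            e = d[n] - w @ buf          # estimation error
            w += mu * e * buf           # LMS weight update
        return w

    rng = np.random.default_rng(0)
    h_true = rng.normal(size=16) * np.exp(-0.3 * np.arange(16))  # toy secondary path
    x = rng.normal(size=20000)                                    # auxiliary white noise
    d = np.convolve(x, h_true)[: len(x)] + 0.05 * rng.normal(size=len(x))
    w = lms_identify(x, d)
    print("tap estimation error:", np.linalg.norm(w[:16] - h_true))
    ```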

  17. Estimation of pore size distribution using concentric double pulsed-field gradient NMR.

    Science.gov (United States)

    Benjamini, Dan; Nevo, Uri

    2013-05-01

    Estimation of the pore size distribution of well calibrated phantoms using NMR is demonstrated here for the first time. Porous materials are a central constituent in fields as diverse as biology, geology, and oil drilling. Noninvasive characterization of monodisperse porous samples using conventional pulsed-field gradient (PFG) NMR is a well-established method. However, estimation of the pore size distribution of heterogeneous polydisperse systems, which comprise most of the materials found in nature, remains extremely challenging. Concentric double pulsed-field gradient (CDPFG) is a 2-D technique where both q (the amplitude of the diffusion gradient) and φ (the relative angle between the gradient pairs) are varied. A recent prediction indicates that this method should produce a more accurate and robust estimation of the pore size distribution than its conventional 1-D versions. Five well defined size distribution phantoms, consisting of 1-5 different pore sizes in the range of 5-25 μm, were used. The estimated pore size distributions were all in good agreement with the known theoretical size distributions, and were obtained without any a priori assumption on the size distribution model. These findings confirm that, in addition to its theoretical benefits, the CDPFG method is experimentally reliable. Furthermore, by adding the angle parameter, sensitivity to small compartment sizes is increased without the use of strong gradients, thus making CDPFG safe for biological applications. Copyright © 2013 Elsevier Inc. All rights reserved.

  18. An Extended Quadratic Frobenius Primality Test with Average- and Worst-Case Error Estimate

    DEFF Research Database (Denmark)

    Damgård, Ivan Bjerre; Frandsen, Gudmund Skovbjerg

    2006-01-01

    We present an Extended Quadratic Frobenius Primality Test (EQFT), which is related to and extends the Miller-Rabin test and the Quadratic Frobenius test (QFT) by Grantham. EQFT takes time about equivalent to 2 Miller-Rabin tests, but has much smaller error probability, namely 256/331776^t for t iterations. We give bounds for the error probability of this algorithm as well as a general closed expression bounding the error. For instance, it is at most 2^-143 for k = 500, t = 2. Compared to earlier similar results for the Miller-Rabin test, the results indicate that our test in the average case has the effect of 9 Miller-Rabin tests, while only taking time equivalent to about 2 such tests. We also give bounds for the error in case a prime is sought by incremental search from a random starting point.
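
    The EQFT itself works in a quadratic extension and is too long to sketch here, but the Miller-Rabin baseline against which its running time and error probability are calibrated fits in a few lines. The worst-case bound 4^-t in the comment is the classical Miller-Rabin bound; 256/331776^t is the EQFT bound quoted above.

    ```python
    import random

    def miller_rabin(n, t=2, rng=random.Random(0)):
        """Standard Miller-Rabin test with t rounds. A composite survives one
        round with probability at most 1/4, so t rounds give a worst-case
        error bound of 4^-t; the EQFT achieves roughly 256/331776^t in about
        the time of 2 rounds."""
        if n < 4:
            return n in (2, 3)
        if n % 2 == 0:
            return False
        d, s = n - 1, 0
        while d % 2 == 0:               # write n - 1 = d * 2^s with d odd
            d, s = d // 2, s + 1
        for _ in range(t):
            a = rng.randrange(2, n - 1)
            x = pow(a, d, n)
            if x in (1, n - 1):
                continue
            for _ in range(s - 1):
                x = pow(x, 2, n)
                if x == n - 1:
                    break
            else:
                return False            # witness found: definitely composite
        return True                     # probably prime

    print(miller_rabin(2**61 - 1))      # a known Mersenne prime -> True
    ```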

  19. An Extended Quadratic Frobenius Primality Test with Average Case Error Estimates

    DEFF Research Database (Denmark)

    Damgård, Ivan Bjerre; Frandsen, Gudmund Skovbjerg

    2001-01-01

    We present an Extended Quadratic Frobenius Primality Test (EQFT), which is related to and extends the Miller-Rabin test and the Quadratic Frobenius test (QFT) by Grantham. EQFT takes time about equivalent to 2 Miller-Rabin tests, but has much smaller error probability, namely 256/331776^t for t iterations. We give bounds for the error probability of this algorithm as well as a general closed expression bounding the error. For instance, it is at most 2^-143 for k = 500, t = 2. Compared to earlier similar results for the Miller-Rabin test, the results indicate that our test in the average case has the effect of 9 Miller-Rabin tests, while only taking time equivalent to about 2 such tests. We also give bounds for the error in case a prime is sought by incremental search from a random starting point.

  20. Error-Rate Estimation Based on Multi-Signal Flow Graph Model and Accelerated Radiation Tests.

    Directory of Open Access Journals (Sweden)

    Wei He

    A method of evaluating the single-event-effect soft-error vulnerability of space instruments before launch has been an active research topic in recent years. In this paper, a multi-signal flow graph model is introduced to analyze fault diagnosis and the mean time to failure (MTTF) of space instruments. A model for the system functional error rate (SFER) is proposed. In addition, an experimental method and an accelerated radiation testing system for a signal processing platform based on a field-programmable gate array (FPGA) are presented. Based on experimental results for different ions (O, Si, Cl, Ti) at the HI-13 Tandem Accelerator, the SFER of the signal processing platform is approximately 10^-3 (errors/particle/cm^2), while the MTTF is approximately 110.7 h.

  1. Dipole estimation errors due to not incorporating anisotropic conductivities in realistic head models for EEG source analysis

    International Nuclear Information System (INIS)

    Hallez, Hans; Staelens, Steven; Lemahieu, Ignace

    2009-01-01

    EEG source analysis is a valuable tool for brain functionality research and for diagnosing neurological disorders, such as epilepsy. It requires a geometrical representation of the human head or a head model, which is often modeled as an isotropic conductor. However, it is known that some brain tissues, such as the skull or white matter, have an anisotropic conductivity. Many studies reported that the anisotropic conductivities have an influence on the calculated electrode potentials. However, few studies have assessed the influence of anisotropic conductivities on the dipole estimations. In this study, we want to determine the dipole estimation errors due to not taking into account the anisotropic conductivities of the skull and/or brain tissues. Therefore, head models are constructed with the same geometry, but with an anisotropically conducting skull and/or brain tissue compartment. These head models are used in simulation studies where the dipole location and orientation error is calculated due to neglecting anisotropic conductivities of the skull and brain tissue. Results show that not taking into account the anisotropic conductivities of the skull yields a dipole location error between 2 and 25 mm, with an average of 10 mm. When the anisotropic conductivities of the brain tissues are neglected, the dipole location error ranges between 0 and 5 mm. In this case, the average dipole location error was 2.3 mm. In all simulations, the dipole orientation error was smaller than 10 deg. We can conclude that the anisotropic conductivities of the skull have to be incorporated to improve the accuracy of EEG source analysis. The results of the simulation, as presented here, also suggest that incorporation of the anisotropic conductivities of brain tissues is not necessary. However, more studies are needed to confirm these suggestions.

  2. An Extended Quadratic Frobenius Primality Test with Average and Worst Case Error Estimates

    DEFF Research Database (Denmark)

    Damgård, Ivan Bjerre; Frandsen, Gudmund Skovbjerg

    2003-01-01

    We present an Extended Quadratic Frobenius Primality Test (EQFT), which is related to and extends the Miller-Rabin test and the Quadratic Frobenius test (QFT) by Grantham. EQFT takes time about equivalent to 2 Miller-Rabin tests, but has much smaller error probability, namely 256/331776^t for t iterations. In the average case, our test has the effect of 9 Miller-Rabin tests, while only taking time equivalent to about 2 such tests. We also give bounds for the error in case a prime is sought by incremental search from a random starting point.

  3. The influence of digitisation and timing errors on the estimation of tidal components at Split (Adriatic Sea

    Directory of Open Access Journals (Sweden)

    I. Vilibic

    2006-07-01

    The paper presents calculations of the amplitudes and phases of tidal harmonic constituents, performed on hourly sea level data recorded at the Split tide gauge in the period 1957-2001. Interannual changes in all constituents have been detected, stronger in phases than in amplitudes. For example, the estimated changes in M2 amplitude and phase are 22% (1.31 cm) and 24.9° between the 1962–1978 and 1957–1961 periods, respectively. Some of the differences are generated artificially throughout the measurements (clock errors, positioning and stretching of a chart) and within the digitising procedure, rather than by natural processes and changes (e.g. changes in mean sea level). This is the reason why the M2 and K1 amplitudes were recomputed, with 3–4 mm larger values, using newer software, thereby decreasing their standard deviation by 60–70% in the 1986–1995 period. Artificial errors may be reduced by upgrading the digitising software; however, most of the errors still remain in the series. These errors may have repercussions when trying to explain some unusual findings: the energy of the de-tided sea level series at the M2 tidal period (12.4 h) has previously been assumed to be a result of nonlinear coupling, but it may be caused, at least partly, by timing errors in the time series.
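
    Harmonic constituents such as M2 and K1 are typically estimated by ordinary least squares on cosine/sine regressors, which also makes the paper's point about timing errors concrete: a clock error Δt shifts a constituent's phase by 360°·Δt/T, so a 10-minute error moves the M2 phase by about 4.8°. A sketch with two constituents and synthetic hourly data follows; the frequencies are the standard ones, everything else is invented.

    ```python
    import numpy as np

    # Constituent frequencies in cycles per hour (M2 and K1 shown).
    FREQS = {"M2": 1.0 / 12.4206012, "K1": 1.0 / 23.9344697}

    def tidal_fit(t_hours, eta):
        """OLS fit of cosine/sine pairs; amplitude and phase per constituent."""
        cols, names = [np.ones_like(t_hours)], []
        for name, f in FREQS.items():
            w = 2.0 * np.pi * f * t_hours
            cols += [np.cos(w), np.sin(w)]
            names.append(name)
        A = np.column_stack(cols)
        c, *_ = np.linalg.lstsq(A, eta, rcond=None)
        out = {}
        for i, name in enumerate(names):
            a, b = c[1 + 2 * i], c[2 + 2 * i]
            out[name] = (np.hypot(a, b), np.degrees(np.arctan2(b, a)))  # amp, phase
        return out

    t = np.arange(0.0, 24.0 * 365)                        # one year of hourly samples
    eta = (0.13 * np.cos(2 * np.pi * FREQS["M2"] * t - 0.5)
           + 0.05 * np.cos(2 * np.pi * FREQS["K1"] * t)
           + 0.01 * np.random.default_rng(0).normal(size=t.size))
    print(tidal_fit(t, eta))    # recovers ~0.13 m (M2) and ~0.05 m (K1)
    ```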

  4. Scoping a field experiment: error diagnostics of TRMM precipitation radar estimates in complex terrain as a basis for IPHEx2014

    Science.gov (United States)

    Duan, Y.; Wilson, A. M.; Barros, A. P.

    2015-03-01

    A diagnostic analysis of the space-time structure of error in quantitative precipitation estimates (QPEs) from the precipitation radar (PR) on the Tropical Rainfall Measurement Mission (TRMM) satellite is presented here in preparation for the Integrated Precipitation and Hydrology Experiment (IPHEx) in 2014. IPHEx is the first NASA ground-validation field campaign after the launch of the Global Precipitation Measurement (GPM) satellite. In anticipation of GPM, a science-grade high-density raingauge network has been deployed at mid to high elevations in the southern Appalachian Mountains, USA, since 2007. This network allows for direct comparison between ground-based measurements from raingauges and satellite-based QPE (specifically, PR 2A25 Version 7, using 5 years of data, 2008-2013). Case studies were conducted to characterize the vertical profiles of reflectivity and rain rate retrievals associated with large discrepancies with respect to ground measurements. The spatial and temporal distribution of detection errors (false alarm, FA; missed detection, MD) and magnitude errors (underestimation, UND; overestimation, OVR) for stratiform and convective precipitation are examined in detail toward elucidating the physical basis of retrieval error. The diagnostic error analysis reveals that detection errors are linked to persistent stratiform light rainfall in the southern Appalachians, which explains the high occurrence of FAs throughout the year, as well as the diurnal MD maximum at midday in the cold season (fall and winter) and especially in the inner region. Although UND dominates the error budget, underestimation of heavy rainfall conditions accounts for less than 20% of the total, consistent with regional hydrometeorology. The 2A25 V7 product underestimates low-level orographic enhancement of rainfall associated with fog, cap clouds and cloud-to-cloud feeder-seeder interactions over ridges, and overestimates light rainfall in the valleys by large amounts, though this

  5. Evapotranspiration estimates and consequences due to errors in the determination of the net radiation and advective effects

    International Nuclear Information System (INIS)

    Oliveira, G.M. de; Leitao, M. de M.V.B.R.

    2000-01-01

    The objective of this study was to analyze the consequences for evapotranspiration (ET) estimates during the growing cycle of a peanut crop of the errors committed in the determination of the radiation balance (Rn), as well as those caused by advective effects. This research was conducted at the Experimental Station of CODEVASF in an irrigated perimeter located in the city of Rodelas, BA, during the period of September to December 1996. The results showed that errors of the order of 2.2 MJ m^-2 d^-1 in the calculation of Rn, and consequently in the estimate of ET, can occur depending on the time considered for the daily total of Rn. It was verified that the areas surrounding the experimental field, as well as the areas of exposed soil within the field, contributed significantly to the generation of local advection of sensible heat, which resulted in an increase of the evapotranspiration.

  6. Errors in second moments estimated from monostatic Doppler sodar winds. I. Theoretical description

    DEFF Research Database (Denmark)

    Kristensen, Leif; Gaynor, J. E.

    1986-01-01

    Presents a theoretical derivation of the errors in calculated second moments arising from the temporal and spatial separation between individual wind measurements obtained from three-axis colocated monostatic Doppler sodar systems. The derived relations require as input the sodar monostatic...

  7. A Flexible Galerkin Finite Element Method with an A Posteriori Discontinuous Finite Element Error Estimation for Hyperbolic Problems

    OpenAIRE

    Massey, Thomas Christopher

    2002-01-01

    A Flexible Galerkin Finite Element Method (FGM) is a hybrid class of finite element methods that combine the usual continuous Galerkin method with the now popular discontinuous Galerkin method (DGM). A detailed description of the formulation of the FGM on a hyperbolic partial differential equation, as well as the data structures used in the FGM algorithm is presented. Some hp-convergence results and computational cost are included. Additionally, an a posteriori error estimate f...

  9. On the use of robust estimators for standard errors in the presence of clustering when clustering membership is misspecified

    OpenAIRE

    Desai, Manisha; Bryson, Susan W.; Robinson, Thomas

    2012-01-01

    This paper examines the implications of using robust estimators (REs) of standard errors in the presence of clustering when cluster membership is unclear as may commonly occur in clustered randomized trials. For example, in such trials, cluster membership may not be recorded for one or more treatment arms and/or cluster membership may be dynamic. When clusters are well defined, REs have properties that are robust to misspecification of the correlation structure. To examine whether results wer...

  10. Statistical error in a chord estimator of correlation dimension: The "rule of five"

    International Nuclear Information System (INIS)

    Theiler, J.; Lookman, T.

    1992-01-01

    The statistical precision of a chord method for estimating dimension from a correlation integral is derived. The optimal chord length is determined, and a comparison is made to other estimators. The simple chord estimator is only 25% less precise than the optimal estimator which uses the full resolution and full range of the correlation integral. The analytic calculations are based on the hypothesis that all pairwise distances between the points in the embedding space are statistically independent. The adequacy of this approximation is assessed numerically, and a surprising result is observed in which dimension estimators can be anomalously precise for sets with reasonably uniform (nonfractal) distributions
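
    A minimal sketch of a two-point chord estimate of dimension from the Grassberger-Procaccia correlation integral; the function names and the chord endpoints (0.02 and 0.1) are illustrative, not the optimal chord length derived in the paper.

        import numpy as np

        def correlation_integral(points, r):
            """Fraction of pairwise distances smaller than r."""
            d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
            iu = np.triu_indices(len(points), k=1)
            return np.mean(d[iu] < r)

        def chord_dimension(points, r1, r2):
            """Slope of log C(r) between two radii: the 'chord' estimate."""
            c1 = correlation_integral(points, r1)
            c2 = correlation_integral(points, r2)
            return (np.log(c2) - np.log(c1)) / (np.log(r2) - np.log(r1))

        rng = np.random.default_rng(0)
        pts = rng.random((1000, 2))               # uniform (nonfractal) planar set
        print(chord_dimension(pts, 0.02, 0.1))    # close to 2 for a 2-D uniform set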

  11. Real-Time PPP Based on the Coupling Estimation of Clock Bias and Orbit Error with Broadcast Ephemeris.

    Science.gov (United States)

    Pan, Shuguo; Chen, Weirong; Jin, Xiaodong; Shi, Xiaofei; He, Fan

    2015-07-22

    Satellite orbit error and clock bias are the keys to precise point positioning (PPP). The traditional PPP algorithm requires precise satellite products based on worldwide permanent reference stations. Such an algorithm requires considerable work and hardly achieves real-time performance. However, real-time positioning service will be the dominant mode in the future. IGS is providing such an operational service (RTS), and there are also commercial systems like Trimble RTX in operation. On the basis of the regional Continuous Operational Reference System (CORS), a real-time PPP algorithm is proposed that applies coupled estimation of clock bias and orbit error. The projection of the orbit error onto the satellite-receiver range has the same effect on positioning accuracy as clock bias. Therefore, in satellite clock estimation, part of the orbit error can be absorbed by the clock bias, and the effect of the residual orbit error on positioning accuracy can be weakened by an evenly distributed satellite geometry. In consideration of the simple structure of pseudorange equations and the high precision of carrier-phase equations, the clock bias estimation method coupled with orbit error is also improved. Rovers obtain PPP results by receiving broadcast ephemeris and real-time satellite clock bias coupled with orbit error. By applying the proposed algorithm, the precise orbit products provided by GNSS analysis centers are rendered no longer necessary. On the basis of previous theoretical analysis, a real-time PPP system was developed. Some experiments were then designed to verify this algorithm. Experimental results show that the newly proposed approach performs better than the traditional PPP based on International GNSS Service (IGS) real-time products. The positioning accuracies of the rovers inside and outside the network are improved by 38.8% and 36.1%, respectively. The PPP convergence speeds are improved by up to 61.4% and 65.9%. The new approach can change the

  12. Using Analysis Increments (AI) to Estimate and Correct Systematic Errors in the Global Forecast System (GFS) Online

    Science.gov (United States)

    Bhargava, K.; Kalnay, E.; Carton, J.; Yang, F.

    2017-12-01

    Systematic forecast errors, arising from model deficiencies, form a significant portion of the total forecast error in weather prediction models like the Global Forecast System (GFS). While much effort has been expended to improve models, substantial model error remains. The aim here is to (i) estimate the model deficiencies in the GFS that lead to systematic forecast errors, and (ii) implement an online correction (i.e., within the model) scheme to correct the GFS, following the methodology of Danforth et al. [2007] and Danforth and Kalnay [2008, GRL]. Analysis increments represent the corrections that new observations make to, in this case, the 6-hr forecast in the analysis cycle. Model bias corrections are estimated from the time average of the analysis increments divided by 6 hr, assuming that initial model errors grow linearly and, at first, ignoring the impact of observation bias. During 2012-2016, seasonal means of the 6-hr model bias are generally robust despite changes in model resolution and data assimilation systems, and their broad continental scales explain their insensitivity to model resolution. The daily bias dominates the sub-monthly analysis increments and consists primarily of diurnal and semidiurnal components, also requiring a low-dimensional correction. Analysis increments in 2015 and 2016 are reduced over oceans, which is attributed to improvements in the specification of the SSTs. These results encourage application of online correction, as suggested by Danforth and Kalnay, for mean, seasonal, diurnal and semidiurnal model biases in GFS to reduce both systematic and random errors. As the error growth in the short term is still linear, estimated model bias corrections can be added as a forcing term in the model tendency equation to correct online. Preliminary experiments with GFS, correcting temperature and specific humidity online, show a reduction in model bias in the 6-hr forecast. This approach can then be used to guide and optimize the design of sub
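
    A minimal sketch of the bias-estimation step, under the stated linear-growth assumption; array names and shapes are hypothetical.

        import numpy as np

        def increment_forcing(analysis_increments, window_hours=6.0):
            """Time-mean analysis increment (analysis minus 6-hr forecast)
            divided by the window length: a forcing that, added to the
            model tendency, corrects the estimated bias online."""
            return np.mean(analysis_increments, axis=0) / window_hours

        # toy usage: increments of shape (n_cycles, n_gridpoints)
        ai = np.random.default_rng(1).normal(0.3, 1.0, size=(1200, 10))
        forcing = increment_forcing(ai)    # roughly 0.05 per hour per point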

  13. Errors of Mean Dynamic Topography and Geostrophic Current Estimates in China's Marginal Seas from GOCE and Satellite Altimetry

    DEFF Research Database (Denmark)

    Jin, Shuanggen; Feng, Guiping; Andersen, Ole Baltazar

    2014-01-01

    The Gravity Field and Steady-State Ocean Circulation Explorer (GOCE) and satellite altimetry can provide very detailed and accurate estimates of the mean dynamic topography (MDT) and geostrophic currents in China's marginal seas, such as the newest high-resolution GOCE gravity field model GO...... and geostrophic current estimates from satellite gravimetry and altimetry are investigated and evaluated in China's marginal seas. The cumulative error in MDT from GOCE is reduced from 22.75 to 9.89 cm when compared to the Gravity Recovery and Climate Experiment (GRACE) gravity field model ITG-Grace2010 results...

  14. Modeling of the effect of tool wear per discharge estimation error on the depth of machined cavities in micro-EDM milling

    DEFF Research Database (Denmark)

    Puthumana, Govindan; Bissacco, Giuliano; Hansen, Hans Nørgaard

    2017-01-01

    In micro-EDM milling, real time electrode wear compensation based on tool wear per discharge (TWD) estimation permits the direct control of the position of the tool electrode frontal surface. However, TWD estimation errors will cause errors on the tool electrode axial depth. A simulation tool...... is developed to determine the effects of errors in the initial estimation of TWD and its propagation effect with respect to the error on the depth of the cavity generated. Simulations were applied to micro-EDM milling of a slot of 5000 μm length and 50 μm depth and validated through slot milling experiments...

  15. Expert estimation of human error probabilities in nuclear power plant operations: a review of probability assessment and scaling

    International Nuclear Information System (INIS)

    Stillwell, W.G.; Seaver, D.A.; Schwartz, J.P.

    1982-05-01

    This report reviews probability assessment and psychological scaling techniques that could be used to estimate human error probabilities (HEPs) in nuclear power plant operations. The techniques rely on expert opinion and can be used to estimate HEPs where data do not exist or are inadequate. These techniques have been used in various other contexts and have been shown to produce reasonably accurate probabilities. Some problems do exist, and limitations are discussed. Additional topics covered include methods for combining estimates from multiple experts, the effects of training on probability estimates, and some ideas on structuring the relationship between performance shaping factors and HEPs. Preliminary recommendations are provided along with cautions regarding the costs of implementing the recommendations. Additional research is required before definitive recommendations can be made

  16. Forest canopy height estimation using double-frequency repeat pass interferometry

    Science.gov (United States)

    Karamvasis, Kleanthis; Karathanassi, Vassilia

    2015-06-01

    In recent years, many efforts have been made to assess forest stand parameters from remote sensing data, as a means to estimate the above-ground carbon stock of forests in the context of the Kyoto protocol. Synthetic aperture radar interferometry (InSAR) techniques have gained traction in the last decade as a viable technology for vegetation parameter estimation. Many works have shown that forest canopy height, which is a critical parameter for quantifying the terrestrial carbon cycle, can be estimated with InSAR. However, research is still needed to further understand the interaction of SAR signals with the forest canopy and to develop an operational method for forestry applications. This work discusses the use of repeat-pass interferometry with ALOS PALSAR (L-band) HH-polarized and COSMO-SkyMed (X-band) HH-polarized acquisitions over the Taxiarchis forest (Chalkidiki, Greece), in order to produce accurate digital elevation models (DEMs) and estimate canopy height with interferometric processing. The effect of wavelength-dependent penetration depth into the canopy is known to be strong, and could potentially lead to forest canopy height mapping using dual-wavelength SAR interferometry at X- and L-band. The method is based on scattering phase center separation at different wavelengths. It involves the generation of a terrain elevation model underneath the forest canopy from repeat-pass L-band InSAR data as well as the generation of a canopy surface elevation model from repeat-pass X-band InSAR data. The terrain model is then used to remove the terrain component from the repeat-pass interferometric X-band elevation model, so as to enable the forest canopy height estimation. The canopy height results were compared to a field survey, yielding a 6.9 m root-mean-square error (RMSE). The effects of vegetation characteristics, SAR incidence angle and view geometry, and terrain slope on the accuracy of the results have also been studied in this work.
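
    The core of the dual-wavelength method reduces to a grid difference. A minimal sketch, assuming co-registered X-band (canopy surface) and L-band (terrain) InSAR elevation grids; names are hypothetical:

        import numpy as np

        def canopy_height(x_band_dem, l_band_dem):
            """Scattering-phase-center separation: the X-band surface model
            minus the L-band terrain model approximates canopy height.
            Negative differences are clipped as noise."""
            return np.clip(np.asarray(x_band_dem) - np.asarray(l_band_dem), 0.0, None)

        def rmse(estimate, reference):
            """Root-mean-square error against, e.g., a field survey."""
            e = np.asarray(estimate) - np.asarray(reference)
            return float(np.sqrt(np.mean(e ** 2)))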

  17. An Implementation of Error Minimization Position Estimate in Wireless Inertial Measurement Unit using Modification ZUPT

    Directory of Open Access Journals (Sweden)

    Adytia Darmawan

    2016-12-01

    Position estimation using a WIMU (Wireless Inertial Measurement Unit) is one of the emerging technologies in the field of indoor positioning systems. A WIMU can detect movement and does not depend on GPS signals. The position is then estimated using a modified ZUPT (Zero Velocity Update) method that uses Filter Magnitude Acceleration (FMA), Variance Magnitude Acceleration (VMA) and Angular Rate (AR) estimation. The performance of this method was validated on a six-legged robot navigation system. Experimental results show that the combination of VMA-AR gives the best position estimation.
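
    A hedged sketch of a zero-velocity detector combining the three cues named in the abstract (FMA, VMA, AR); all thresholds, units and window lengths are illustrative, not the paper's values.

        import numpy as np

        def zero_velocity_mask(accel, gyro, g=9.81, mag_tol=0.3,
                               var_tol=0.05, rate_tol=0.2, win=20):
            """accel, gyro: arrays of shape (T, 3) in m/s^2 and rad/s.
            Flags samples as stationary when the acceleration magnitude is
            near g (FMA), its sliding-window variance is small (VMA) and
            the angular rate is small (AR)."""
            a_mag = np.linalg.norm(accel, axis=1)
            w_mag = np.linalg.norm(gyro, axis=1)
            fma = np.abs(a_mag - g) < mag_tol
            var = np.array([a_mag[max(0, i - win):i + 1].var()
                            for i in range(len(a_mag))])
            vma = var < var_tol
            ar = w_mag < rate_tol
            return fma & vma & ar   # velocity is reset to zero where True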

  18. A human error probability estimate methodology based on fuzzy inference and expert judgment on nuclear plants

    International Nuclear Information System (INIS)

    Nascimento, C.S. do; Mesquita, R.N. de

    2009-01-01

    Recent studies point to human error as an important factor in many industrial and nuclear accidents: Three Mile Island (1979), Bhopal (1984), Chernobyl and Challenger (1986) are classical examples. Human contribution to these accidents may be better understood and analyzed by using Human Reliability Analysis (HRA), which has been taken as an essential part of Probabilistic Safety Analysis (PSA) of nuclear plants. Both HRA and PSA depend on Human Error Probabilities (HEPs) for a quantitative analysis. These probabilities are strongly affected by the Performance Shaping Factors (PSFs), which have a direct effect on human behavior and thus shape HEPs according to the specific environmental conditions and personal characteristics underlying these actions. This PSF dependence raises a serious data-availability problem, as it renders the scarce existing databases either too generic or too specific. Besides this, most nuclear plants do not keep historical records of human error occurrences. Therefore, in order to overcome this data shortage, a methodology based on fuzzy inference and expert judgment was employed in this paper to determine human error occurrence probabilities and to evaluate PSFs for actions performed by operators in a nuclear power plant (IEA-R1 nuclear reactor). The obtained HEP values were compared with reference tabulated data used in the current literature in order to show the coherence and validity of the approach. This comparison leads to the conclusion that the results of this work can be employed in both HRA and PSA, enabling efficient assessment of potential improvements in plant safety conditions, operational procedures and local working conditions. (author)

  19. Estimating the parameters of nonspinning binary black holes using ground-based gravitational-wave detectors: Statistical errors

    International Nuclear Information System (INIS)

    Ajith, P.; Bose, Sukanta

    2009-01-01

    We assess the statistical errors in estimating the parameters of nonspinning black hole binaries using ground-based gravitational-wave detectors. While past assessments were based on partial information provided by only the inspiral and/or ring-down pieces of the coalescence signal, the recent progress in analytical and numerical relativity enables us to make more accurate projections using complete inspiral-merger-ring-down waveforms. We employ the Fisher information-matrix formalism to estimate how accurately the source parameters will be measurable using a single interferometric detector as well as a network of interferometers. Those estimates are further vetted by full-fledged Monte Carlo simulations. We find that the parameter accuracies of the complete waveform are, in general, significantly better than those of just the inspiral waveform in the case of binaries with total mass M ≳ 20 M⊙. In particular, for the case of the Advanced LIGO detector, parameter estimation is the most accurate in the M = 100-200 M⊙ range. For an M = 100 M⊙ system, the errors in measuring the total mass and the symmetric mass-ratio are reduced by an order of magnitude or more compared to inspiral waveforms. Furthermore, for binaries located at a fixed luminosity distance d_L, and observed with the Advanced LIGO-Advanced Virgo network, the sky-position error is expected to vary widely across the sky: For M = 100 M⊙ systems at d_L = 1 Gpc, this variation ranges mostly from about a hundredth of a square degree to about a square degree, with an average value of nearly a tenth of a square degree. This is more than 40 times better than the average sky-position accuracy of inspiral waveforms at this mass range. For the mass parameters as well as the sky position, this improvement in accuracy is due partly to the increased signal-to-noise ratio and partly to the information about these parameters harnessed through the post-inspiral phases of the waveform. The error in estimating d
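
    A minimal sketch of the Fisher-matrix step: for a signal model in white Gaussian noise, 1-sigma statistical errors are the square roots of the diagonal of the inverse Fisher matrix. The white-noise inner product below is a simplification of the PSD-weighted product used with real detector noise, and the chirp-like model is purely illustrative.

        import numpy as np

        def fisher_matrix(model, theta, sigma, eps=1e-6):
            """F_ij = (1/sigma^2) * sum_t dh/dtheta_i * dh/dtheta_j,
            with derivatives by central finite differences."""
            theta = np.asarray(theta, dtype=float)
            derivs = []
            for i in range(len(theta)):
                dp, dm = theta.copy(), theta.copy()
                dp[i] += eps
                dm[i] -= eps
                derivs.append((model(dp) - model(dm)) / (2 * eps))
            D = np.array(derivs)
            return D @ D.T / sigma ** 2

        def parameter_errors(F):
            """1-sigma errors from the inverse Fisher matrix diagonal."""
            return np.sqrt(np.diag(np.linalg.inv(F)))

        t = np.linspace(0.0, 1.0, 1000)
        model = lambda p: p[0] * np.sin(2 * np.pi * p[1] * t)   # amplitude, frequency
        print(parameter_errors(fisher_matrix(model, [1.0, 10.0], sigma=0.1)))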

  20. Impacts of real-time satellite clock errors on GPS precise point positioning-based troposphere zenith delay estimation

    Science.gov (United States)

    Shi, Junbo; Xu, Chaoqian; Li, Yihe; Gao, Yang

    2015-08-01

    Global Positioning System (GPS) has become a cost-effective tool to determine troposphere zenith total delay (ZTD) with accuracy comparable to other atmospheric sensors such as radiosondes, water vapor radiometers, and radio occultation. However, the high accuracy of GPS troposphere ZTD estimates relies on precise satellite orbit and clock products available with various latencies. Although the International GNSS Service (IGS) can provide predicted orbit and clock products for real-time applications, the predicted clock accuracy of 3 ns cannot always guarantee the high accuracy of troposphere ZTD estimates. Such limitations could be overcome by the use of the newly launched IGS real-time service, which provides 5 cm orbit and 0.2-1.0 ns (an equivalent range error of 6-30 cm) clock products in real time. Considering that the clock error is of relatively larger magnitude than the orbit error, this paper investigates the effect of real-time satellite clock errors on the GPS precise point positioning (PPP)-based troposphere ZTD estimation. Meanwhile, how the real-time satellite clock errors impact the GPS PPP-based troposphere ZTD estimation has also been studied to obtain the most precise ZTD solutions. First, two types of real-time satellite clock products are assessed with respect to the IGS final clock product in terms of accuracy and precision. Second, the real-time GPS PPP-based troposphere ZTD estimation is conducted using data from 34 selected IGS stations over three independent weeks in April, July and October, 2013. Numerical results demonstrate that the precision, rather than the accuracy, of the real-time satellite clock products impacts the real-time PPP-based ZTD solutions more significantly. In other words, the real-time satellite clock product with better precision leads to more precise real-time PPP-based troposphere ZTD solutions. Therefore, it is suggested that users should select and apply real-time satellite products with

  1. Biased Estimators in Explanatory Research: An Empirical Investigation of Mean Error Properties of Ridge Regression.

    Science.gov (United States)

    Kennedy, Eugene

    1988-01-01

    Ridge estimates (REs) of population beta weights were compared to ordinary least squares (OLS) estimates through computer simulation to evaluate the use of REs in explanatory research. With fixed predictors, there was some question of the consistency of ridge regression, but with random predictors, REs were superior to OLS. (SLD)
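
    A small simulation in the spirit of the study, with illustrative design, noise level and penalty: with an ill-conditioned random design, ridge estimates of the population weights often have smaller mean squared error than OLS.

        import numpy as np

        rng = np.random.default_rng(42)
        n, p, alpha = 50, 10, 1.0
        beta = rng.normal(size=p)                          # population weights
        X = rng.normal(size=(n, p)) @ np.diag(np.linspace(1.0, 0.1, p))
        mse_ols, mse_ridge = [], []
        for _ in range(500):
            y = X @ beta + rng.normal(scale=2.0, size=n)
            b_ols = np.linalg.lstsq(X, y, rcond=None)[0]
            b_ridge = np.linalg.solve(X.T @ X + alpha * np.eye(p), X.T @ y)
            mse_ols.append(np.mean((b_ols - beta) ** 2))
            mse_ridge.append(np.mean((b_ridge - beta) ** 2))
        print(np.mean(mse_ols), np.mean(mse_ridge))        # ridge usually smaller here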

  2. ESTIMATING HIGH LEVEL WASTE MIXING PERFORMANCE IN HANFORD DOUBLE SHELL TANKS

    International Nuclear Information System (INIS)

    Thien, M.G.; Greer, D.A.; Townson, P.

    2011-01-01

    The ability to effectively mix, sample, certify, and deliver consistent batches of high level waste (HLW) feed from the Hanford double shell tanks (DSTs) to the Waste Treatment and Immobilization Plant (WTP) presents a significant mission risk with potential to impact mission length and the quantity of HLW glass produced. The Department of Energy's (DOE's) Tank Operations Contractor (TOC), Washington River Protection Solutions (WRPS), is currently demonstrating mixing, sampling, and batch transfer performance in two different sizes of small-scale DSTs. The results of these demonstrations will be used to estimate full-scale DST mixing performance and provide the key input to a programmatic decision on the need to build a dedicated feed certification facility. This paper discusses the results from initial mixing demonstration activities and presents data evaluation techniques that allow insight into the performance relationships of the two small tanks. The next steps of the small-scale demonstration activities, sampling and batch transfers, are introduced. A discussion of the integration of results from the mixing, sampling, and batch transfer tests to allow estimation of full-scale DST performance is presented.

  3. Estimation of error on the cross-correlation, phase and time lag between evenly sampled light curves

    Science.gov (United States)

    Misra, R.; Bora, A.; Dewangan, G.

    2018-04-01

    Temporal analysis of radiation from astrophysical sources like Active Galactic Nuclei, X-ray binaries and gamma-ray bursts provides information on the geometry and sizes of the emitting regions. Establishing that two light curves in different energy bands are correlated, and measuring the phase and time lag between them, is an important and frequently used temporal diagnostic. Generally the estimates are made by dividing the light curves into a large number of adjacent intervals to find the variance, or by using numerically expensive simulations. In this work we present alternative expressions for estimating the errors on the cross-correlation, phase and time lag between two shorter light curves when they cannot be divided into segments. Thus the estimates presented here allow for analysis of light curves with a relatively small number of points, as well as for obtaining information on the longest time scales available. The expressions have been tested using 200 light curves simulated from both white and 1/f stochastic processes with measurement errors. We also present an application to the XMM-Newton light curves of the Active Galactic Nucleus Akn 564. The example shows that the estimates presented here allow for analysis of light curves with a relatively small (~1000) number of points.
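
    A hedged stand-in for the paper's estimators: the zero-lag correlation coefficient of two evenly sampled light curves, with the textbook large-N approximation sigma_r ~ (1 - r^2)/sqrt(N) for its error; the paper's expressions additionally handle measurement errors, phases and lags.

        import numpy as np

        def correlation_with_error(x, y):
            """Zero-lag cross-correlation coefficient and an approximate
            1-sigma error for two evenly sampled series."""
            x = np.asarray(x, dtype=float)
            y = np.asarray(y, dtype=float)
            r = np.corrcoef(x, y)[0, 1]
            return r, (1.0 - r ** 2) / np.sqrt(len(x))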

  4. Analysis of a HP-refinement method for solving the neutron transport equation using two error estimators

    Energy Technology Data Exchange (ETDEWEB)

    Fournier, D.; Le Tellier, R.; Suteau, C., E-mail: damien.fournier@cea.fr, E-mail: romain.le-tellier@cea.fr, E-mail: christophe.suteau@cea.fr [CEA, DEN, DER/SPRC/LEPh, Cadarache, Saint Paul-lez-Durance (France); Herbin, R., E-mail: raphaele.herbin@cmi.univ-mrs.fr [Laboratoire d' Analyse et de Topologie de Marseille, Centre de Math´ematiques et Informatique (CMI), Universit´e de Provence, Marseille Cedex (France)

    2011-07-01

    The solution of the time-independent neutron transport equation in a deterministic way invariably consists in the successive discretization of the three variables: energy, angle and space. In the SNATCH solver used in this study, the energy and the angle are respectively discretized with a multigroup approach and the discrete ordinate method. A set of spatially coupled transport equations is obtained and solved using the Discontinuous Galerkin Finite Element Method (DGFEM). Within this method, the spatial domain is decomposed into elements and the solution is approximated by a hierarchical polynomial basis in each one. This approach is time and memory consuming when the mesh becomes fine or the basis order high. To improve the computational time and the memory footprint, adaptive algorithms are proposed. These algorithms are based on an error estimation in each cell. If the error is important in a given region, the mesh has to be refined (h-refinement) or the polynomial basis order increased (p-refinement). This paper is related to the choice between the two types of refinement. Two ways to estimate the error are compared on different benchmarks. Analyzing the differences, an hp-refinement method is proposed and tested. (author)

  5. Analysis of a HP-refinement method for solving the neutron transport equation using two error estimators

    International Nuclear Information System (INIS)

    Fournier, D.; Le Tellier, R.; Suteau, C.; Herbin, R.

    2011-01-01

    The solution of the time-independent neutron transport equation in a deterministic way invariably consists in the successive discretization of the three variables: energy, angle and space. In the SNATCH solver used in this study, the energy and the angle are respectively discretized with a multigroup approach and the discrete ordinate method. A set of spatially coupled transport equations is obtained and solved using the Discontinuous Galerkin Finite Element Method (DGFEM). Within this method, the spatial domain is decomposed into elements and the solution is approximated by a hierarchical polynomial basis in each one. This approach is time and memory consuming when the mesh becomes fine or the basis order high. To improve the computational time and the memory footprint, adaptive algorithms are proposed. These algorithms are based on an error estimation in each cell. If the error is important in a given region, the mesh has to be refined (h-refinement) or the polynomial basis order increased (p-refinement). This paper is related to the choice between the two types of refinement. Two ways to estimate the error are compared on different benchmarks. Analyzing the differences, an hp-refinement method is proposed and tested. (author)
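
    A toy decision loop for the h-versus-p choice discussed above; the error tolerance and the smoothness indicator, abstracted here as plain numbers, are illustrative and not those of the SNATCH solver.

        def choose_refinement(cells, tol, smoothness_tol=0.7):
            """cells maps cell id -> (error_estimate, smoothness in [0, 1]).
            Cells above the error tolerance are refined: smooth solutions
            get a higher polynomial order (p), irregular ones a finer
            mesh (h)."""
            actions = {}
            for cell_id, (error_estimate, smoothness) in cells.items():
                if error_estimate <= tol:
                    actions[cell_id] = "keep"
                elif smoothness >= smoothness_tol:
                    actions[cell_id] = "p-refine"
                else:
                    actions[cell_id] = "h-refine"
            return actions

        print(choose_refinement({1: (1e-2, 0.9), 2: (1e-2, 0.3), 3: (1e-8, 0.5)}, tol=1e-4))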

  6. mBEEF-vdW: Robust fitting of error estimation density functionals

    DEFF Research Database (Denmark)

    Lundgård, Keld Troen; Wellendorff, Jess; Voss, Johannes

    2016-01-01

    We propose a general-purpose semilocal/nonlocal exchange-correlation functional approximation, named mBEEF-vdW. The exchange is a meta generalized gradient approximation, and the correlation is a semilocal and nonlocal mixture, with the Rutgers-Chalmers approximation for van der Waals (vdW) forces......, reducing the sensitivity to outliers in the datasets. To more reliably determine the optimal model complexity, we furthermore introduce a generalization of the bootstrap 0.632 estimator with hierarchical bootstrap sampling and geometric mean estimator over the training datasets. Using this estimator, we...

  7. Bayesian estimation of observation error covariance matrix in the equatorial Pacific

    Science.gov (United States)

    Ueno, G.

    2016-02-01

    We develop a Bayesian technique for estimating the parameters in the observation noise covariance matrix R_t for ensemble data assimilation. We design a posterior distribution by using the ensemble-approximated likelihood and a Wishart prior distribution and present an iterative algorithm for parameter estimation. The present algorithm is identified as the expectation-maximization (EM) algorithm for a Gaussian mixture model and can estimate a number of parameters in R_t. The algorithm is an extension of that by Ueno and Nakamura (2014) for maximum-likelihood estimation. An advantage of the proposed method is that R_t can be estimated online, and more importantly, the temporal smoothness of R_t can be controlled by adequately choosing two parameters of the prior distribution, the covariance matrix S and the number of degrees of freedom ν. The parameters S and ν may vary with the time at which R_t is estimated. The ν parameter can be objectively estimated by maximizing the marginal likelihood. The present formalism can handle cases in which the number of data points or the data positions varies with time, the former case of which is exemplified in the experiments. We present an application to a coupled atmosphere-ocean model under each of the following assumptions: R_t is a scalar multiple of a fixed matrix (R_t = α_t Σ, where α_t is the scalar parameter and Σ is the fixed matrix), R_t is a diagonal matrix, R_t has fixed eigenvectors, or R_t has no specific structure. We verify that the proposed algorithm works well and that only a limited number of iterations are necessary. When R_t has one of the structures mentioned above, by assuming the prior covariance matrix to be the previous estimate, namely S = R̂_{t-1}, we obtain the Bayesian estimate of R_t that varies smoothly in time compared to the maximum-likelihood estimate at each time. When R_t has no specific structure, we need to regularize S = R̂_{t-1} to maintain the positive-definiteness of S. Through twin experiments
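
    A simplified conjugate-style stand-in for the algorithm (not the authors' EM iteration): blend the sample covariance of the observation-space innovations with the previous estimate S = R̂_{t-1}, with the prior degrees of freedom ν controlling the temporal smoothness.

        import numpy as np

        def smoothed_r_estimate(innovations, prior_mean, prior_dof):
            """innovations: array of shape (T, p). Returns a weighted blend
            of the prior mean (e.g. the previous estimate of R_t) and the
            sample covariance; larger prior_dof gives smoother estimates."""
            e = np.atleast_2d(np.asarray(innovations, dtype=float))
            T = e.shape[0]
            sample_cov = e.T @ e / T
            w = prior_dof / (prior_dof + T)
            return w * prior_mean + (1.0 - w) * sample_cov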

  8. Genetic Algorithm for Optimization: Preprocessing with n Dimensional Bisection and Error Estimation

    Science.gov (United States)

    Sen, S. K.; Shaykhian, Gholam Ali

    2006-01-01

    Knowledge of the appropriate values of the parameters of a genetic algorithm (GA), such as the population size, the shrunk search space containing the solution, and the crossover and mutation probabilities, is not available a priori for a general optimization problem. Recommended here is a polynomial-time preprocessing scheme that includes an n-dimensional bisection and that determines the foregoing parameters before deciding upon an appropriate GA for all problems of similar nature and type. Such preprocessing is not only fast but also enables us to obtain the global optimal solution and reasonably narrow error bounds on it with a high degree of confidence.

  9. A FEM approximation of a two-phase obstacle problem and its a posteriori error estimate

    Czech Academy of Sciences Publication Activity Database

    Bozorgnia, F.; Valdman, Jan

    2017-01-01

    Roč. 73, č. 3 (2017), s. 419-432 ISSN 0898-1221 R&D Projects: GA ČR(CZ) GF16-34894L; GA MŠk(CZ) 7AMB16AT015 Institutional support: RVO:67985556 Keywords : A free boundary problem * A posteriori error analysis * Finite element method Subject RIV: BA - General Mathematics OBOR OECD: Applied mathematics Impact factor: 1.531, year: 2016 http://library.utia.cas.cz/separaty/2017/MTR/valdman-0470507.pdf

  10. Shifted Legendre method with residual error estimation for delay linear Fredholm integro-differential equations

    Directory of Open Access Journals (Sweden)

    Şuayip Yüzbaşı

    2017-03-01

    In this paper, we suggest a matrix method for obtaining the approximate solutions of the delay linear Fredholm integro-differential equations with constant coefficients using the shifted Legendre polynomials. The problem is considered with mixed conditions. Using the required matrix operations, the delay linear Fredholm integro-differential equation is transformed into a matrix equation. Additionally, error analysis for the method is presented using the residual function. Illustrative examples are given to demonstrate the efficiency of the method. The results obtained in this study are compared with the known results.

  11. Errors in second moments estimated from monostatic Doppler sodar winds. II. Application to field measurements

    DEFF Research Database (Denmark)

    Gaynor, J. E.; Kristensen, Leif

    1986-01-01

    For pt.I see ibid., vol.3, no.3, p.523-8 (1986). The authors use the theoretical results presented in part I to correct turbulence parameters derived from monostatic sodar wind measurements in an attempt to improve the statistical comparisons with the sonic anemometers on the Boulder Atmospheric...... Observatory tower. The approximate magnitude of the error due to spatial and temporal pulse volume separation is presented as a function of mean wind angle relative to the sodar configuration and for several antenna pulsing orders. Sodar-derived standard deviations of the lateral wind component, before...

  12. Usual Dietary Intakes: SAS Macros for Fitting Multivariate Measurement Error Models & Estimating Multivariate Usual Intake Distributions

    Science.gov (United States)

    The following SAS macros can be used to create a multivariate usual intake distribution for multiple dietary components that are consumed nearly every day or episodically. A SAS macro for performing balanced repeated replication (BRR) variance estimation is also included.

  13. A Theoretically Consistent Method for Minimum Mean-Square Error Estimation of Mel-Frequency Cepstral Features

    DEFF Research Database (Denmark)

    Jensen, Jesper; Tan, Zheng-Hua

    2014-01-01

    We propose a method for minimum mean-square error (MMSE) estimation of mel-frequency cepstral features for noise robust automatic speech recognition (ASR). The method is based on a minimum number of well-established statistical assumptions; no assumptions are made which are inconsistent with others....... The strength of the proposed method is that it allows MMSE estimation of mel-frequency cepstral coefficients (MFCC's), cepstral mean-subtracted MFCC's (CMS-MFCC's), velocity, and acceleration coefficients. Furthermore, the method is easily modified to take into account other compressive non-linearities than...... the logarithmic which is usually used for MFCC computation. The proposed method shows estimation performance which is identical to or better than state-of-the-art methods. It further shows comparable ASR performance, where the advantage of being able to use mel-frequency speech features based on a power non...

  14. Estimated yield of double-strand breaks from internal exposure to tritium

    Energy Technology Data Exchange (ETDEWEB)

    Chen, Jing [Health Canada, Radiation Protection Bureau, Ottawa, ON (Canada)

    2012-08-15

    Internal exposure to tritium may result in DNA lesions. Of those, DNA double-strand breaks (DSBs) are believed to be important. However, experimental and computational data on DSB induction by tritium are very limited. In this study, microdosimetric characteristics of uniformly distributed tritium were determined in dimensions of critical significance in DNA DSBs. Those characteristics were used to identify other particles comparable to tritium in terms of microscopic energy deposition. The yield of DSBs could be strongly dependent on biological systems and cellular environments. After reviewing theoretically predicted and experimentally determined DSB yields available in the literature for low-energy electrons and high-energy protons of comparable microdosimetric characteristics to tritium in the dimensions relevant to DSBs, it is estimated that the average DSB yields of 2.7 × 10⁻¹¹, 0.93 × 10⁻¹¹, 2.4 × 10⁻¹¹ and 1.6 × 10⁻¹¹ DSBs Gy⁻¹ Da⁻¹ could be reasonable estimates for tritium in plasmid DNAs, yeast cells, Chinese hamster V79 cells and human fibroblasts, respectively. If a biological system is not specified, the DSB yield from tritium exposure can be estimated as (2.3 ± 0.7) × 10⁻¹¹ DSBs Gy⁻¹ Da⁻¹, which is a simple average over experimentally determined yields of DSBs for low-energy electrons in various biological systems, without consideration of variations caused by the different techniques used and obvious differences among the biological systems in which the DSB yield was measured. (orig.)
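
    Worked arithmetic with the system-unspecified average quoted above, assuming (hypothetically) a diploid human genome of about 6.4 × 10⁹ base pairs at roughly 650 Da per base pair; the per-cell number is only as good as the assumed genome mass and biological system.

        yield_per_gy_da = 2.3e-11      # DSBs per Gy per Da (abstract's average)
        genome_da = 6.4e9 * 650        # assumed diploid genome mass in daltons
        dose_gy = 1.0
        print(yield_per_gy_da * genome_da * dose_gy)   # ~96 DSBs per cell per gray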

  15. Estimated yield of double-strand breaks from internal exposure to tritium.

    Science.gov (United States)

    Chen, Jing

    2012-08-01

    Internal exposure to tritium may result in DNA lesions. Of those, DNA double-strand breaks (DSBs) are believed to be important. However, experimental and computational data on DSB induction by tritium are very limited. In this study, microdosimetric characteristics of uniformly distributed tritium were determined in dimensions of critical significance in DNA DSBs. Those characteristics were used to identify other particles comparable to tritium in terms of microscopic energy deposition. The yield of DSBs could be strongly dependent on biological systems and cellular environments. After reviewing theoretically predicted and experimentally determined DSB yields available in the literature for low-energy electrons and high-energy protons of comparable microdosimetric characteristics to tritium in the dimensions relevant to DSBs, it is estimated that the average DSB yields of 2.7 × 10⁻¹¹, 0.93 × 10⁻¹¹, 2.4 × 10⁻¹¹ and 1.6 × 10⁻¹¹ DSBs Gy⁻¹ Da⁻¹ could be reasonable estimates for tritium in plasmid DNAs, yeast cells, Chinese hamster V79 cells and human fibroblasts, respectively. If a biological system is not specified, the DSB yield from tritium exposure can be estimated as (2.3 ± 0.7) × 10⁻¹¹ DSBs Gy⁻¹ Da⁻¹, which is a simple average over experimentally determined yields of DSBs for low-energy electrons in various biological systems, without consideration of variations caused by the different techniques used and obvious differences among the biological systems in which the DSB yield was measured.

  16. Identifying grain-size dependent errors on global forest area estimates and carbon studies

    Science.gov (United States)

    Daolan Zheng; Linda S. Heath; Mark J. Ducey

    2008-01-01

    Satellite-derived coarse-resolution data are typically used for conducting global analyses. But the forest areas estimated from coarse-resolution maps (e.g., 1 km) inevitably differ from a corresponding fine-resolution map (such as a 30-m map) that would be closer to ground truth. A better understanding of changes in grain size on area estimation will improve our...

  17. Methods for estimation of radiation risk in epidemiological studies accounting for classical and Berkson errors in doses.

    Science.gov (United States)

    Kukush, Alexander; Shklyar, Sergiy; Masiuk, Sergii; Likhtarov, Illya; Kovgan, Lina; Carroll, Raymond J; Bouville, Andre

    2011-02-16

    With a binary response Y, the dose-response model under consideration is logistic in flavor, with pr(Y=1 | D) = R/(1+R), R = λ_0 + EAR·D, where λ_0 is the baseline incidence rate and EAR is the excess absolute risk per gray. The calculated thyroid dose of a person i is expressed as D_i^mes = f_i Q_i^mes / M_i^mes. Here, Q_i^mes is the measured content of radioiodine in the thyroid gland of person i at time t^mes, M_i^mes is the estimate of the thyroid mass, and f_i is the normalizing multiplier. The Q_i and M_i are measured with multiplicative errors V_i^Q and V_i^M, so that Q_i^mes = Q_i^tr V_i^Q (a classical measurement error model) and M_i^tr = M_i^mes V_i^M (a Berkson measurement error model). Here, Q_i^tr is the true content of radioactivity in the thyroid gland, and M_i^tr is the true value of the thyroid mass. The error in f_i is much smaller than the errors in (Q_i^mes, M_i^mes) and is ignored in the analysis. By means of Parametric Full Maximum Likelihood and Regression Calibration (under the assumption that the data set of true doses has a lognormal distribution), Nonparametric Full Maximum Likelihood, Nonparametric Regression Calibration, and a properly tuned SIMEX method, we study the influence of measurement errors in thyroid dose on the estimates of λ_0 and EAR. The simulation study is presented based on a real sample from the epidemiological studies. The doses were reconstructed in the framework of the Ukrainian-American project on the investigation of post-Chernobyl thyroid cancers in Ukraine, and the underlying subpopulation was artificially enlarged in order to increase the statistical power. The true risk parameters were given the values from earlier epidemiological studies, and then the binary response was simulated according to the dose-response model.
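
    A minimal simulation of the two error structures in the dose model (all distribution parameters are illustrative): the classical error scatters the measurement around the truth, while the Berkson error scatters the truth around the assigned value.

        import numpy as np

        rng = np.random.default_rng(7)
        n = 10_000
        f = 1.0                                      # normalizing multiplier
        q_tr = rng.lognormal(2.0, 0.5, n)            # true thyroid activity
        q_mes = q_tr * rng.lognormal(0.0, 0.3, n)    # classical: Q_mes = Q_tr * V_Q
        m_mes = np.exp(1.0)                          # assigned mass (e.g. group mean)
        m_tr = m_mes * rng.lognormal(0.0, 0.3, n)    # Berkson: M_tr = M_mes * V_M
        d_calc = f * q_mes / m_mes                   # calculated dose D_mes
        d_tr = f * q_tr / m_tr                       # true dose
        print(np.std(np.log(d_calc) - np.log(d_tr)))   # combined log-scale error ~0.42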

  18. Methods for Estimation of Radiation Risk in Epidemiological Studies Accounting for Classical and Berkson Errors in Doses

    KAUST Repository

    Kukush, Alexander

    2011-01-16

    With a binary response Y, the dose-response model under consideration is logistic in flavor, with pr(Y=1 | D) = R/(1+R), R = λ_0 + EAR·D, where λ_0 is the baseline incidence rate and EAR is the excess absolute risk per gray. The calculated thyroid dose of a person i is expressed as D_i^mes = f_i Q_i^mes / M_i^mes. Here, Q_i^mes is the measured content of radioiodine in the thyroid gland of person i at time t^mes, M_i^mes is the estimate of the thyroid mass, and f_i is the normalizing multiplier. The Q_i and M_i are measured with multiplicative errors V_i^Q and V_i^M, so that Q_i^mes = Q_i^tr V_i^Q (a classical measurement error model) and M_i^tr = M_i^mes V_i^M (a Berkson measurement error model). Here, Q_i^tr is the true content of radioactivity in the thyroid gland, and M_i^tr is the true value of the thyroid mass. The error in f_i is much smaller than the errors in (Q_i^mes, M_i^mes) and is ignored in the analysis. By means of Parametric Full Maximum Likelihood and Regression Calibration (under the assumption that the data set of true doses has a lognormal distribution), Nonparametric Full Maximum Likelihood, Nonparametric Regression Calibration, and a properly tuned SIMEX method, we study the influence of measurement errors in thyroid dose on the estimates of λ_0 and EAR. The simulation study is presented based on a real sample from the epidemiological studies. The doses were reconstructed in the framework of the Ukrainian-American project on the investigation of post-Chernobyl thyroid cancers in Ukraine, and the underlying subpopulation was artificially enlarged in order to increase the statistical power. The true risk parameters were given the values from earlier epidemiological studies, and then the binary response was simulated according to the dose-response model.

  19. Phase correction and error estimation in InSAR time series analysis

    Science.gov (United States)

    Zhang, Y.; Fattahi, H.; Amelung, F.

    2017-12-01

    During the last decade several InSAR time series approaches have been developed in response to the non-ideal acquisition strategies of SAR satellites, such as large spatial and temporal baselines with non-regular acquisitions. The small baseline tubes and regular acquisitions of new SAR satellites such as Sentinel-1 allow us to form fully connected networks of interferograms and simplify the time series analysis into a weighted least squares inversion of an over-determined system. Such a robust inversion allows us to focus more on the understanding of the different components in InSAR time series and their uncertainties. We present an open-source Python-based package for InSAR time series analysis, called PySAR (https://yunjunz.github.io/PySAR/), with unique functionalities for obtaining unbiased ground displacement time series, geometrical and atmospheric correction of InSAR data, and quantifying the InSAR uncertainty. Our implemented strategy contains several features including: 1) improved spatial coverage using a coherence-based network of interferograms, 2) unwrapping error correction using phase closure or bridging, 3) tropospheric delay correction using weather models and empirical approaches, 4) DEM error correction, 5) optimal selection of the reference date and automatic outlier detection, 6) InSAR uncertainty due to the residual tropospheric delay, decorrelation and residual DEM error, and 7) the variance-covariance matrix of final products for geodetic inversion. We demonstrate the performance using SAR datasets acquired by COSMO-SkyMed and TerraSAR-X, Sentinel-1 and ALOS/ALOS-2, with application to the highly non-linear volcanic deformation in Japan and Ecuador (figure 1). Our result shows precursory deformation before the 2015 eruptions of Cotopaxi volcano, with a maximum uplift of 3.4 cm on the western flank (fig. 1b), with a standard deviation of 0.9 cm (fig. 1a), supporting the finding by Morales-Rivera et al. (2017, GRL); and a post-eruptive subsidence on the same

  20. Soil Bulk Density by Soil Type, Land Use and Data Source: Putting the Error in SOC Estimates

    Science.gov (United States)

    Wills, S. A.; Rossi, A.; Loecke, T.; Ramcharan, A. M.; Roecker, S.; Mishra, U.; Waltman, S.; Nave, L. E.; Williams, C. O.; Beaudette, D.; Libohova, Z.; Vasilas, L.

    2017-12-01

    An important part of SOC stock and pool assessment is the estimation and application of bulk density. The concept of bulk density is relatively simple (the mass of soil in a given volume), but bulk density can be difficult to measure in soils due to logistical and methodological constraints. While many estimates of SOC pools use legacy data, few concerted efforts have been made to assess the process used to convert laboratory carbon concentration measurements and bulk density collection into volumetrically based SOC estimates. The methodologies used are particularly sensitive in wetlands and organic soils with high amounts of carbon and very low bulk densities. We will present an analysis across four databases: NCSS - the National Cooperative Soil Survey Characterization dataset, RaCA - the Rapid Carbon Assessment sample dataset, NWCA - the National Wetland Condition Assessment, and ISCN - the International Soil Carbon Network. The relationship between bulk density and soil organic carbon will be evaluated by dataset and land use/land cover information. Prediction methods (both regression and machine learning) will be compared and contrasted across datasets and available input information. The assessment and application of bulk density, including modeling, aggregation and error propagation, will be evaluated. Finally, recommendations will be made about both the use of new data in soil survey products (such as SSURGO) and the use of that information as legacy data in SOC pool estimates.
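
    The leverage of bulk density is visible in the stock equation itself: any relative error in bulk density passes straight through to the SOC stock. A one-layer example with illustrative values:

        def soc_stock(soc_percent, bulk_density, depth_m):
            """SOC stock in kg C per m^2 for one layer: mass fraction
            times bulk density (g/cm^3 = 1000 kg/m^3) times thickness."""
            return soc_percent / 100.0 * bulk_density * 1000.0 * depth_m

        print(soc_stock(3.0, 1.2, 0.3))   # 10.8 kg C per m^2 for a 30-cm layer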

  1. Hydraulic head estimation at unobserved locations: Approximating the distribution of the absolute error based on geologic interpretations

    Science.gov (United States)

    Langousis, Andreas; Kaleris, Vassilios; Xeygeni, Vagia; Magkou, Foteini

    2017-04-01

    Assessing the availability of groundwater reserves at a regional level requires accurate and robust hydraulic head estimation at multiple locations of an aquifer. To that extent, one needs groundwater observation networks that can provide sufficient information to estimate the hydraulic head at unobserved locations. The density of such networks is largely influenced by the spatial distribution of the hydraulic conductivity in the aquifer, and it is usually determined through trial-and-error, by solving the groundwater flow based on a properly selected set of alternative but physically plausible geologic structures. In this work, we use: 1) dimensional analysis, and 2) a pulse-based stochastic model for simulation of synthetic aquifer structures, to calculate the distribution of the absolute error in hydraulic head estimation as a function of the standardized distance from the nearest measuring locations. The resulting distributions are proved to encompass all possible small-scale structural dependencies, exhibiting characteristics (bounds, multi-modal features, etc.) that can be explained using simple geometric arguments. The obtained results are promising, pointing towards the direction of establishing design criteria based on large-scale geologic maps.

  2. Reducing Monte Carlo error in the Bayesian estimation of risk ratios using log-binomial regression models.

    Science.gov (United States)

    Salmerón, Diego; Cano, Juan A; Chirlaque, María D

    2015-08-30

    In cohort studies, binary outcomes are very often analyzed by logistic regression. However, it is well known that when the goal is to estimate a risk ratio, logistic regression is inappropriate if the outcome is common. In these cases, a log-binomial regression model is preferable. On the other hand, the estimation of the regression coefficients of the log-binomial model is difficult owing to the constraints that must be imposed on these coefficients. Bayesian methods allow a straightforward approach for log-binomial regression models and produce smaller mean squared errors in the estimation of risk ratios than the frequentist methods, and the posterior inferences can be obtained using the software WinBUGS. However, Markov chain Monte Carlo methods implemented in WinBUGS can lead to large Monte Carlo errors in the approximations to the posterior inferences because they produce correlated simulations, and the accuracy of the approximations is inversely related to this correlation. To reduce correlation and to improve accuracy, we propose a reparameterization based on a Poisson model and a sampling algorithm coded in R.
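
    The Poisson reparameterization idea can be seen in a frequentist miniature: fitting a Poisson GLM to a binary outcome makes exp(coefficient) a risk ratio, with robust (sandwich) standard errors compensating for the misspecified variance. This sketches the underlying trick, not the authors' Bayesian sampler; the data are simulated.

        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(3)
        n = 2000
        exposed = rng.integers(0, 2, n).astype(float)
        risk = 0.2 * np.where(exposed == 1.0, 1.8, 1.0)    # true risk ratio 1.8
        y = (rng.random(n) < risk).astype(float)           # common binary outcome
        X = sm.add_constant(exposed)
        fit = sm.GLM(y, X, family=sm.families.Poisson()).fit(cov_type="HC0")
        print(np.exp(fit.params[1]))                       # risk-ratio estimate ~1.8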

  3. A generalized adjoint framework for sensitivity and global error estimation in time-dependent nuclear reactor simulations

    International Nuclear Information System (INIS)

    Stripling, H.F.; Anitescu, M.; Adams, M.L.

    2013-01-01

    Highlights: We develop an abstract framework for computing the adjoint to the neutron/nuclide burnup equations posed as a system of differential algebraic equations. We validate the use of the adjoint both for computing sensitivity to uncertain inputs and for estimating global time discretization error. The flexibility of the framework is leveraged to add heat transfer physics and compute its adjoint without a reformulation of the adjoint system. Such flexibility is crucial for high performance computing applications. Abstract: We develop a general framework for computing the adjoint variable to nuclear engineering problems governed by a set of differential-algebraic equations (DAEs). The nuclear engineering community has a rich history of developing and applying adjoints for sensitivity calculations; many such formulations, however, are specific to a certain set of equations, variables, or solution techniques. Any change or addition to the physics model would require a reformulation of the adjoint problem and substantial difficulties in its software implementation. In this work we propose an abstract framework that allows for the modification and expansion of the governing equations, leverages the existing theory of adjoint formulation for DAEs, and results in adjoint equations that can be used to efficiently compute sensitivities for parametric uncertainty quantification. Moreover, as we justify theoretically and demonstrate numerically, the same framework can be used to estimate global time discretization error. We first motivate the framework and show that the coupled Bateman and transport equations, which govern the time-dependent neutronic behavior of a nuclear reactor, may be formulated as a DAE system with a power constraint. We then use a variational approach to develop the parameter-dependent adjoint framework and apply existing theory to give formulations for sensitivity and global time discretization error estimates using the adjoint

  4. Error analysis of ultrasonic tissue doppler velocity estimation techniques for quantification of velocity and strain.

    Science.gov (United States)

    Bennett, Michael J; McLaughlin, Steve; Anderson, Tom; McDicken, W Norman

    2007-01-01

    Recent work in the field of Doppler tissue imaging has focused mainly on the quantification of results through the techniques of strain and strain-rate imaging. These results are based on measuring a velocity gradient between two points, a known distance apart, in the region of interest. Although many recent publications have demonstrated the potential of this technique in clinical terms, the method still suffers from low repeatability. The work presented here demonstrates, through the use of a rotating phantom arrangement and a custom-developed single-element ultrasound system, that this is a consequence of the fundamental accuracy of the technique used to estimate the original velocities. Results are presented comparing the performance of the conventional Kasai autocorrelation velocity estimator with those obtained using time-domain cross-correlation and the complex cross-correlation model-based estimator. The results demonstrate that the complex cross-correlation model-based technique offers lower standard deviations of the velocity-gradient estimates than the Kasai algorithm.

  5. Minimum count sums for charcoal concentration estimates in pollen slides: accuracy and potential errors

    NARCIS (Netherlands)

    Finsinger, W.; Tinner, W.

    2005-01-01

    Charcoal particles in pollen slides are often abundant, and thus analysts are faced with the problem of setting the minimum counting sum as small as possible in order to save time. We analysed the reliability of charcoal-concentration estimates based on different counting sums, using simulated

  6. Correcting for Test Score Measurement Error in ANCOVA Models for Estimating Treatment Effects

    Science.gov (United States)

    Lockwood, J. R.; McCaffrey, Daniel F.

    2014-01-01

    A common strategy for estimating treatment effects in observational studies using individual student-level data is analysis of covariance (ANCOVA) or hierarchical variants of it, in which outcomes (often standardized test scores) are regressed on pretreatment test scores, other student characteristics, and treatment group indicators. Measurement…

  7. Soft error rate estimations of the Kintex-7 FPGA within the ATLAS Liquid Argon (LAr) Calorimeter

    International Nuclear Information System (INIS)

    Wirthlin, M J; Harding, A; Takai, H

    2014-01-01

    This paper summarizes the radiation testing performed on the Xilinx Kintex-7 FPGA in an effort to determine if the Kintex-7 can be used within the ATLAS Liquid Argon (LAr) Calorimeter. The Kintex-7 device was tested with wide-spectrum neutrons, protons, heavy ions, and mixed high-energy hadron environments. The results of these tests were used to estimate the configuration RAM and block RAM upset rates within the ATLAS LAr. These estimations suggest that the configuration memory will upset at a rate of 1.1 × 10⁻¹⁰ upsets/bit/s and the block RAM will upset at a rate of 9.06 × 10⁻¹¹ upsets/bit/s. For the Kintex 7K325 device, this translates to 6.85 × 10⁻³ upsets/device/s for configuration memory and 1.49 × 10⁻³ upsets/device/s for block memory

  8. Errors in estimation of the input signal for integrate-and-fire neuronal models

    Czech Academy of Sciences Publication Activity Database

    Bibbona, E.; Lánský, Petr; Sacerdote, L.; Sirovich, R.

    2008-01-01

    Roč. 78, č. 1 (2008), s. 1-10 ISSN 1539-3755 R&D Projects: GA MŠk(CZ) LC554; GA AV ČR(CZ) 1ET400110401 Grant - others:EC(XE) MIUR PRIN 2005 Institutional research plan: CEZ:AV0Z50110509 Keywords : parameter estimation * stochastic neuronal model Subject RIV: BO - Biophysics Impact factor: 2.508, year: 2008 http://link.aps.org/abstract/PRE/v78/e011918

  9. Modeling error and apparent isotope discrimination confound estimation of endogenous glucose production during euglycemic glucose clamps

    International Nuclear Information System (INIS)

    Finegood, D.T.; Bergman, R.N.; Vranic, M.

    1988-01-01

    We previously demonstrated that conventional tracer methods applied to euglycemic-hyperinsulinemic glucose clamps result in substantially negative estimates for the rate of endogenous glucose production, particularly during the first half of 180-min clamps. We also showed that addition of tracer to the exogenous glucose infusate resulted in nonnegative endogenous glucose production (Ra) estimates. In this study, we investigated the underlying cause of negative estimates of Ra from conventional clamp/tracer methods and the reason for the difference in estimates when tracer is added to the exogenous glucose infusate. We performed euglycemic-hyperinsulinemic (300-microU/ml) clamps in normal dogs without (cold GINF protocol, n = 6) or with (hot GINF protocol, n = 6) tracer (D-[3-3H]glucose) added to the exogenous glucose infusate. In the hot GINF protocol, sufficient tracer was added to the exogenous glucose infusate such that arterial plasma specific activity (SAa) did not change from basal through the clamp period (P > 0.05). In the cold GINF studies, plasma SAa fell 81 ± 2% from the basal level by the 3rd h of clamping. We observed a significant, transient, positive venous-arterial difference in specific activity (SAv-SAa difference) during the cold GINF studies. The SAv-SAa difference reached a peak of 27 ± 6% at 30 min and diminished to a plateau of 7 ± 1% between 70 and 180 min. We also observed a positive but constant SAv-SAa difference (4.6 ± 0.2% between 10 and 180 min) during the hot GINF studies

  10. On the angular error of intensity vector based direction of arrival estimation in reverberant sound fields.

    Science.gov (United States)

    Levin, Dovid; Habets, Emanuël A P; Gannot, Sharon

    2010-10-01

    An acoustic vector sensor provides measurements of both the pressure and particle velocity of a sound field in which it is placed. These measurements are vectorial in nature and can be used for the purpose of source localization. A straightforward approach towards determining the direction of arrival (DOA) utilizes the acoustic intensity vector, which is the product of pressure and particle velocity. The accuracy of an intensity vector based DOA estimator in the presence of noise has been analyzed previously. In this paper, the effects of reverberation upon the accuracy of such a DOA estimator are examined. It is shown that particular realizations of reverberation differ from an ideal isotropically diffuse field, and induce an estimation bias which is dependent upon the room impulse responses (RIRs). The limited knowledge available pertaining to the RIRs is expressed statistically by employing the diffuse qualities of reverberation to extend Polack's statistical RIR model. Expressions for evaluating the typical bias magnitude as well as its probability distribution are derived.
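
    A minimal sketch of the intensity-based estimator analyzed in the paper, for a two-dimensional sensor: average the instantaneous intensity p(t)·v(t) over a frame and take the azimuth of the mean vector. The sign convention (intensity points away from the source) is handled by negation; noise and reverberation bias this estimate as described above.

        import numpy as np

        def intensity_doa(pressure, velocity):
            """pressure: shape (T,); velocity: shape (T, 2) with x and y
            particle-velocity components. Returns the source azimuth in
            radians from the time-averaged acoustic intensity vector."""
            p = np.asarray(pressure, dtype=float)
            v = np.asarray(velocity, dtype=float)
            intensity = np.mean(p[:, None] * v, axis=0)
            return np.arctan2(-intensity[1], -intensity[0])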

  11. Global Sensitivity Analysis and Estimation of Model Error, Toward Uncertainty Quantification in Scramjet Computations

    Science.gov (United States)

    Huan, Xun; Safta, Cosmin; Sargsyan, Khachik; Geraci, Gianluca; Eldred, Michael S.; Vane, Zachary P.; Lacaze, Guilhem; Oefelein, Joseph C.; Najm, Habib N.

    2018-03-01

    The development of scramjet engines is an important research area for advancing hypersonic and orbital flights. Progress toward optimal engine designs requires accurate flow simulations together with uncertainty quantification. However, performing uncertainty quantification for scramjet simulations is challenging due to the large number of uncertain parameters involved and the high computational cost of flow simulations. These difficulties are addressed in this paper by developing practical uncertainty quantification algorithms and computational methods, and deploying them in the current study to large-eddy simulations of a jet in crossflow inside a simplified HIFiRE Direct Connect Rig scramjet combustor. First, global sensitivity analysis is conducted to identify influential uncertain input parameters, which can help reduce the system's stochastic dimension. Second, because models of different fidelity are used in the overall uncertainty quantification assessment, a framework for quantifying and propagating the uncertainty due to model error is presented. These methods are demonstrated on a nonreacting jet-in-crossflow test problem in a simplified scramjet geometry, with parameter space up to 24 dimensions, using static and dynamic treatments of the turbulence subgrid model, and with two-dimensional and three-dimensional geometries.
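
    The abstract does not name the sensitivity estimator used; a common choice for first-order Sobol' indices is the Saltelli (2010) pick-freeze Monte Carlo scheme, sketched below on the Ishigami test function as a cheap stand-in for an expensive flow simulation.

```python
import numpy as np

def sobol_first_order(f, sampler, dim, n=100_000, seed=0):
    """First-order Sobol' indices via the Saltelli (2010) pick-freeze estimator."""
    rng = np.random.default_rng(seed)
    A, B = sampler(rng, n, dim), sampler(rng, n, dim)
    fA, fB = f(A), f(B)
    var = np.concatenate([fA, fB]).var()
    S = np.empty(dim)
    for i in range(dim):
        ABi = A.copy()
        ABi[:, i] = B[:, i]                  # take input i from B, freeze the rest
        S[i] = np.mean(fB * (f(ABi) - fA)) / var
    return S

# Ishigami function: a standard sensitivity-analysis benchmark
def ishigami(x):
    return np.sin(x[:, 0]) + 7.0 * np.sin(x[:, 1]) ** 2 \
        + 0.1 * x[:, 2] ** 4 * np.sin(x[:, 0])

uniform = lambda rng, n, d: rng.uniform(-np.pi, np.pi, (n, d))
print(sobol_first_order(ishigami, uniform, dim=3))   # analytic values ~ (0.31, 0.44, 0.00)
```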

  12. Global Sensitivity Analysis and Estimation of Model Error, Toward Uncertainty Quantification in Scramjet Computations

    Energy Technology Data Exchange (ETDEWEB)

    Huan, Xun [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Safta, Cosmin [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Sargsyan, Khachik [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Geraci, Gianluca [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Eldred, Michael S. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Vane, Zachary P. [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Lacaze, Guilhem [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Oefelein, Joseph C. [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Najm, Habib N. [Sandia National Lab. (SNL-CA), Livermore, CA (United States)

    2018-02-09

    The development of scramjet engines is an important research area for advancing hypersonic and orbital flights. Progress toward optimal engine designs requires accurate flow simulations together with uncertainty quantification. However, performing uncertainty quantification for scramjet simulations is challenging due to the large number of uncertain parameters involved and the high computational cost of flow simulations. These difficulties are addressed in this paper by developing practical uncertainty quantification algorithms and computational methods, and deploying them in the current study to large-eddy simulations of a jet in crossflow inside a simplified HIFiRE Direct Connect Rig scramjet combustor. First, global sensitivity analysis is conducted to identify influential uncertain input parameters, which can help reduce the system’s stochastic dimension. Second, because models of different fidelity are used in the overall uncertainty quantification assessment, a framework for quantifying and propagating the uncertainty due to model error is presented. Finally, these methods are demonstrated on a nonreacting jet-in-crossflow test problem in a simplified scramjet geometry, with parameter space up to 24 dimensions, using static and dynamic treatments of the turbulence subgrid model, and with two-dimensional and three-dimensional geometries.

  13. Error analysis of the double-integral method for calculating brain blood perfusion from inert gas clearance data

    International Nuclear Information System (INIS)

    Smith, G.T.; Stokely, E.M.; Lewis, M.H.; Devous, M.D. Sr.; Bonte, F.J.

    1984-01-01

    A single-photon dynamic computer-assisted tomograph (DSPECT) has been built and is currently being used to evaluate regional cerebral blood perfusion in patients and volunteers. A computer simulation of the system was created to analyze the effects of data collection, Poisson noise, attenuation compensation, and the reconstruction technique now employed in the DSPECT. Several methods of attenuation compensation were used to generate perfusion images from both ideal and noisy data. The results indicate that the mean perfusion is calculated to within 10.4% accuracy for all perfusion rates in a region of interest if attenuation correction is used. Without attenuation correction, perfusions are underestimated by as much as 27%. The three correctors tested have different effects on the calculated perfusion value, depending on the location of the region of interest in the picture. The algorithm introduces random noise that is proportional to both the random error in the input data and the perfusion rate. Air-curve delay errors result in inaccuracies in the final perfusion picture that are proportional to perfusion rate. Physiological values (0.8-1.5) of the partition coefficient cause overestimation of both gray (0-34%) and white (7-67%) matter perfusion values. Compton scatter and collimator effects were not addressed in this study.

  14. Error analysis of the double-integral method for calculating brain blood perfusion from inert gas clearance data.

    Science.gov (United States)

    Smith, G T; Stokely, E M; Lewis, M H; Devous, M D; Bonte, F J

    1984-03-01

    A single-photon dynamic computer-assisted tomograph (DSPECT) has been built and is currently being used to evaluate regional cerebral blood perfusion in patients and volunteers. A computer simulation of the system was created to analyze the effects of data collection, Poisson noise, attenuation compensation, and the reconstruction technique now employed in the DSPECT. Several methods of attenuation compensation were used to generate perfusion images from both ideal and noisy data. The results indicate that the mean perfusion is calculated to within 10.4% accuracy for all perfusion rates in a region of interest if attenuation correction is used. Without attenuation correction, perfusions are underestimated by as much as 27%. The three correctors tested have different effects on the calculated perfusion value, depending on the location of the region of interest in the picture. The algorithm introduces random noise that is proportional to both the random error in the input data and the perfusion rate. Air-curve delay errors result in inaccuracies in the final perfusion picture that are proportional to perfusion rate. Physiological values (0.8-1.5) of the partition coefficient cause overestimation of both gray (0-34%) and white (7-67%) matter perfusion values. Compton scatter and collimator effects were not addressed in this study.

  15. A simulation study to quantify the impacts of exposure measurement error on air pollution health risk estimates in copollutant time-series models.

    Science.gov (United States)

    Background: Exposure measurement error in copollutant epidemiologic models has the potential to introduce bias in relative risk (RR) estimates. A simulation study was conducted using empirical data to quantify the impact of correlated measurement errors in time-series analyses of a...

  16. Evaluating EIV, OLS, and SEM Estimators of Group Slope Differences in the Presence of Measurement Error: The Single-Indicator Case

    Science.gov (United States)

    Culpepper, Steven Andrew

    2012-01-01

    Measurement error significantly biases interaction effects and distorts researchers' inferences regarding interactive hypotheses. This article focuses on the single-indicator case and shows how to accurately estimate group slope differences by disattenuating interaction effects with errors-in-variables (EIV) regression. New analytic findings were…
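
    For the single-indicator case, the heart of the EIV correction is disattenuation of the OLS slope by a known reliability ratio. A minimal sketch, assuming the reliability is supplied externally and using synthetic data:

```python
import numpy as np

def eiv_slope(x_obs, y, reliability):
    """OLS slope corrected for attenuation: beta_true ~ beta_ols / reliability."""
    beta_ols = np.cov(x_obs, y, bias=True)[0, 1] / np.var(x_obs)
    return beta_ols / reliability

rng = np.random.default_rng(1)
n, beta, err_var = 5000, 0.8, 0.5
t = rng.standard_normal(n)                            # true score, variance 1
x = t + np.sqrt(err_var) * rng.standard_normal(n)     # observed single indicator
y = beta * t + 0.3 * rng.standard_normal(n)
rel = 1.0 / (1.0 + err_var)                           # Var(T) / Var(X), assumed known
naive = np.cov(x, y, bias=True)[0, 1] / np.var(x)
print(naive, eiv_slope(x, y, rel))                    # ~0.53 (attenuated) vs ~0.8
```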

  17. Estimating the Error of an Analog Quantum Simulator by Additional Measurements

    Science.gov (United States)

    Schwenk, Iris; Zanker, Sebastian; Reiner, Jan-Michael; Leppäkangas, Juha; Marthaler, Michael

    2017-12-01

    We study an analog quantum simulator coupled to a reservoir with a known spectral density. The reservoir perturbs the quantum simulation by causing decoherence. The simulator is used to measure an operator average, which cannot be calculated using any classical means. Since we cannot predict the result, it is difficult to estimate the effect of the environment. In particular, it is difficult to resolve whether the perturbation is small or whether the actual result of the simulation is in fact very different from the ideal system we intend to study. Here, we show that in specific systems a measurement of additional correlators can be used to verify the reliability of the quantum simulation. The procedure only requires additional measurements on the quantum simulator itself. We demonstrate the method theoretically in the case of a single spin connected to a bosonic environment.

  18. A switch from high-fidelity to error-prone DNA double-strand break repair underlies stress-induced mutation.

    Science.gov (United States)

    Ponder, Rebecca G; Fonville, Natalie C; Rosenberg, Susan M

    2005-09-16

    Special mechanisms of mutation are induced in microbes under growth-limiting stress causing genetic instability, including occasional adaptive mutations that may speed evolution. Both the mutation mechanisms and their control by stress have remained elusive. We provide evidence that the molecular basis for stress-induced mutagenesis in an E. coli model is error-prone DNA double-strand break repair (DSBR). I-SceI-endonuclease-induced DSBs strongly activate stress-induced mutations near the DSB, but not globally. The same proteins are required as for cells without induced DSBs: DSBR proteins, DinB-error-prone polymerase, and the RpoS starvation-stress-response regulator. Mutation is promoted by homology between cut and uncut DNA molecules, supporting a homology-mediated DSBR mechanism. DSBs also promote gene amplification. Finally, DSBs activate mutation only during stationary phase/starvation, but will do so during exponential growth if RpoS is expressed. Our findings reveal an RpoS-controlled switch from high-fidelity to mutagenic DSBR under stress. This limits genetic instability both in time and to localized genome regions, potentially important evolutionary strategies.

  19. Estimating the agricultural demand for natural gas and liquefied petroleum gas in the presence of measurement error in the data

    International Nuclear Information System (INIS)

    Uri, N.D.

    1994-01-01

    The paper begins by discussing the importance of accurate estimates of the price elasticity of demand and some of the problems frequently encountered in obtaining these estimates. To these problems is added that associated with inaccuracy in the measurement of the dependent variable and one or more of the independent variables that affect the quantity demanded. Two diagnostics, i.e. the regression coefficient bounds and the bias correction factor, have been introduced to assess the effect that such measurement error has on the estimated coefficients of demand relationships. The regression coefficient bounds diagnostic was used to indicate a range over which the true price responsiveness of farmers to changes in energy prices lies. The results suggest that each 1% increase (decrease) in the price of energy will result in a decrease (increase) of between 0.41 and 0.17% in the quantity of natural gas demanded and a decrease (increase) of between 0.48 and 0.07% in the quantity of liquefied petroleum gas demanded. (author)
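
    The regression-coefficient-bounds diagnostic can be illustrated with the classical direct/reverse-regression bracket for a single mismeasured regressor; the sketch below uses synthetic data and an invented elasticity, not the paper's series:

```python
import numpy as np

rng = np.random.default_rng(2)
n, true_elasticity = 2000, -0.3
log_price_true = rng.standard_normal(n)
log_price_obs = log_price_true + 0.4 * rng.standard_normal(n)  # mismeasured price
log_qty = true_elasticity * log_price_true + 0.2 * rng.standard_normal(n)

# Direct OLS slope is attenuated toward zero; the reverse-regression slope
# overshoots, so together they bracket the true coefficient.
cov_xy = np.cov(log_price_obs, log_qty, bias=True)[0, 1]
b_direct = cov_xy / np.var(log_price_obs)
b_reverse = np.var(log_qty) / cov_xy
lo, hi = sorted((b_direct, b_reverse))
print(f"true elasticity {true_elasticity} lies in [{lo:.3f}, {hi:.3f}]")
```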

  20. Developing Calibration Weights and Standard-Error Estimates for a Survey of Drug-Related Emergency-Department Visits

    Directory of Open Access Journals (Sweden)

    Kott Phillip S.

    2014-09-01

    Full Text Available This article describes a two-step calibration-weighting scheme for a stratified simple random sample of hospital emergency departments. The first step adjusts for unit nonresponse. The second increases the statistical efficiency of most estimators of interest. Both use a measure of emergency-department size and other useful auxiliary variables contained in the sampling frame. Although many survey variables are roughly a linear function of the measure of size, response is better modeled as a function of the log of that measure. Consequently, the log of size is a calibration variable in the nonresponse-adjustment step, while the measure of size itself is a calibration variable in the second calibration step. Nonlinear calibration procedures are employed in both steps. We show with 2010 DAWN data that estimating variances as if a one-step calibration-weighting routine had been used when there were in fact two steps can, after appropriately adjusting the finite-population correction, produce standard-error estimates that tend to be slightly conservative.

  1. Detecting and estimating errors in 3D restoration methods using analog models.

    Science.gov (United States)

    José Ramón, Ma; Pueyo, Emilio L.; Briz, José Luis

    2015-04-01

    Some geological scenarios may be important for a number of socio-economic reasons, such as water or energy resources, but the available underground information is often limited, scarce and heterogeneous. A truly 3D reconstruction, which is still necessary during the decision-making process, may have important social and economic implications. For this reason, restoration methods were developed. By honoring some geometric or mechanical laws, they help build a reliable image of the subsurface. Pioneering methods were first applied in 2D (balanced and restored cross-sections) during the sixties and seventies. Later, owing to improvements in computational capabilities, they were extended to 3D. Currently, there are several academic and commercial restoration solutions: Unfold by the Université de Grenoble, Move by Midland Valley Exploration, Kine3D (on gOcad code) by Paradigm, and Dynel3D by igeoss-Schlumberger. We have developed our own restoration method, Pmag3Drest (IGME-Universidad de Zaragoza), which is designed to tackle complex geometrical scenarios using paleomagnetic vectors as a pseudo-3D indicator of deformation. However, all these methods have limitations arising from the assumptions they need to make. For this reason, detecting and estimating uncertainty in 3D restoration methods is of key importance for trusting the reconstructions. Checking the reliability and the internal consistency of every method, as well as comparing the results among restoration tools, is a critical issue never tackled so far because of the impossibility of testing the results against Nature. To overcome this problem we have developed a technique using analog models. We built complex geometric models inspired by real cases of superposed and/or conical folding at laboratory scale. The stratigraphic volumes were modeled using EVA (ethylene vinyl acetate) sheets. Their rheology (tensile and tear strength, elongation, density, etc.) and thickness can be chosen among a large number of values.

  2. Non Random Distribution of DMD Deletion Breakpoints and Implication of Double Strand Breaks Repair and Replication Error Repair Mechanisms.

    Science.gov (United States)

    Marey, Isabelle; Ben Yaou, Rabah; Deburgrave, Nathalie; Vasson, Aurélie; Nectoux, Juliette; Leturcq, France; Eymard, Bruno; Laforet, Pascal; Behin, Anthony; Stojkovic, Tanya; Mayer, Michèle; Tiffreau, Vincent; Desguerre, Isabelle; Boyer, François Constant; Nadaj-Pakleza, Aleksandra; Ferrer, Xavier; Wahbi, Karim; Becane, Henri-Marc; Claustres, Mireille; Chelly, Jamel; Cossee, Mireille

    2016-05-27

    Dystrophinopathies are mostly caused by copy number variations, especially deletions, in the dystrophin gene (DMD). Despite the large size of the gene, deletions do not occur randomly but mainly in two hot spots, the main one involving exons 45 to 55. The underlying processes are complex and implicate two main mechanisms: non-homologous end joining (NHEJ) and micro-homology-mediated replication-dependent recombination (MMRDR). Our goals were to assess the distribution of intronic breakpoints (BPs) in the genomic sequence of the main hot spot of deletions within the DMD gene and to search for specific sequences at or near BPs that might promote BP occurrence or be associated with DNA break repair. Using comparative genomic hybridization microarray, 57 deletions within the intron 44 to 55 region were mapped. Moreover, 21 junction fragments were sequenced to search for specific sequences. Non-randomly distributed BPs were found in introns 44, 47, 48, 49 and 53, and 50% of BPs clustered within genomic regions of less than 700 bp. Repeated elements (REs), known to promote gene rearrangement via several mechanisms, were present in the vicinity of 90% of clustered BPs and less frequently (72%) close to scattered BPs, illustrating the important role of such elements in the occurrence of DMD deletions. Palindromic and TTTAAA sequences, which also promote DNA instability, were identified at fragment junctions in 20% and 5% of cases, respectively. Micro-homologies (76%) and insertions or deletions of small sequences were frequently found at BP junctions. Our results illustrate, in a large series of patients, the important role of REs and other genomic features in DNA breaks, and the involvement of different mechanisms in DMD gene deletions: mainly replication error repair mechanisms, but also NHEJ and potentially aberrant firing of replication origins. A combination of these mechanisms may also be possible.

  3. Estimation and Testing Based on Data Subject to Measurement Errors: From Parametric to Non-Parametric Likelihood Methods

    Science.gov (United States)

    Vexler, Albert; Tsai, Wan-Min; Malinovsky, Yaakov

    2013-01-01

    Measurement error problems can cause bias or inconsistency of statistical inferences. When investigators are unable to obtain correct measurements of biological assays, special techniques to quantify measurement errors (ME) need to be applied. The sampling based on repeated measurements is a common strategy to allow for ME. This method has been well-addressed in the literature under parametric assumptions. The approach with repeated measures data may not be applicable when the replications are complicated due to cost and/or time concerns. Pooling designs have been proposed as cost-efficient sampling procedures that can assist to provide correct statistical operations based on data subject to ME. We demonstrate that a mixture of both pooled and unpooled data (a hybrid pooled-unpooled design) can support very efficient estimation and testing in the presence of ME. Nonparametric techniques have not been well investigated to analyze repeated measures data or pooled data subject to ME. We propose and examine both the parametric and empirical likelihood methodologies for data subject to ME. We conclude that the likelihood methods based on the hybrid samples are very efficient and powerful. The results of an extensive Monte Carlo study support our conclusions. Real data examples demonstrate the efficiency of the proposed methods in practice. PMID:21805485

  4. Modeling of the effect of tool wear per discharge estimation error on the depth of machined cavities in micro-EDM milling

    DEFF Research Database (Denmark)

    Puthumana, Govindan; Bissacco, Giuliano; Hansen, Hans Nørgaard

    2017-01-01

    In micro-EDM milling, real-time electrode wear compensation based on tool wear per discharge (TWD) estimation permits direct control of the position of the tool electrode frontal surface. However, TWD estimation errors will cause errors in the tool electrode axial depth. A simulation tool is developed to determine the effects of errors in the initial estimation of TWD and their propagation with respect to the error on the depth of the cavity generated. Simulations were applied to micro-EDM milling of a slot of 5000 μm length and 50 μm depth and validated through slot milling experiments performed on a micro-EDM machine. Simulations and experimental results were found to be in good agreement, showing the effect of error amplification through the cavity depth.

  5. A new version of the CADNA library for estimating round-off error propagation in Fortran programs

    Science.gov (United States)

    Jézéquel, Fabienne; Chesneaux, Jean-Marie; Lamotte, Jean-Luc

    2010-11-01

    The CADNA library enables one to estimate, using a probabilistic approach, round-off error propagation in any simulation program. CADNA provides new numerical types, the so-called stochastic types, on which round-off errors can be estimated. Furthermore, CADNA contains the definition of arithmetic and relational operators which are overloaded for stochastic variables and the definition of mathematical functions which can be used with stochastic arguments. On 64-bit processors, depending on the rounding mode chosen, the mathematical library associated with the GNU Fortran compiler may provide incorrect results or generate severe bugs. Therefore the CADNA library has been improved to enable the numerical validation of programs on 64-bit processors. New version program summary: Program title: CADNA; Catalogue identifier: AEAT_v1_1; Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEAT_v1_1.html; Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland; Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html; No. of lines in distributed program, including test data, etc.: 28 488; No. of bytes in distributed program, including test data, etc.: 463 778; Distribution format: tar.gz; Programming language: Fortran (a C++ version is available in the Library as AEGQ_v1_0); Computer: PC running LINUX with an i686 or an ia64 processor, UNIX workstations including SUN, IBM; Operating system: LINUX, UNIX; Classification: 6.5; Catalogue identifier of previous version: AEAT_v1_0; Journal reference of previous version: Comput. Phys. Commun. 178 (2008) 933; Does the new version supersede the previous version?: Yes. Nature of problem: A simulation program which uses floating-point arithmetic generates round-off errors, due to the rounding performed at each assignment and at each arithmetic operation. Round-off error propagation may invalidate the result of a program. The CADNA library enables one to estimate round
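
    CADNA itself is a Fortran library; the toy sketch below (in Python) only illustrates the CESTAC-style random-rounding idea it builds on — each value carries three samples whose arithmetic results are perturbed by one ulp at random, and their spread estimates the number of significant digits — not the CADNA API:

```python
import math
import random

class Stochastic:
    """Toy CESTAC value: three samples with randomly perturbed last bits."""
    def __init__(self, samples):
        self.s = list(samples)

    @classmethod
    def of(cls, x):
        return cls([float(x)] * 3)

    def _apply(self, other, op):
        other = other.s if isinstance(other, Stochastic) else [float(other)] * 3
        out = []
        for a, b in zip(self.s, other):
            r = op(a, b)
            out.append(r + random.choice((-1.0, 1.0)) * math.ulp(r))  # random rounding
        return Stochastic(out)

    def __add__(self, o): return self._apply(o, lambda a, b: a + b)
    def __sub__(self, o): return self._apply(o, lambda a, b: a - b)
    def __mul__(self, o): return self._apply(o, lambda a, b: a * b)

    def significant_digits(self):
        mean = sum(self.s) / 3.0
        spread = max(self.s) - min(self.s)
        if spread == 0.0:
            return 15.7                       # full double precision
        if mean == 0.0:
            return 0.0
        return max(0.0, math.log10(abs(mean) / spread))

# Catastrophic cancellation: mathematically (x + 1) - x - 1 == 0, but at
# x = 1e16 double precision loses the 1; the sample spread exposes this.
x = Stochastic.of(1e16)
r = (x + 1.0) - x - 1.0
print(r.s, "estimated significant digits:", r.significant_digits())
```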

  6. Error Estimate of the Ares I Vehicle Longitudinal Aerodynamic Characteristics Based on Turbulent Navier-Stokes Analysis

    Science.gov (United States)

    Abdol-Hamid, Khaled S.; Ghaffari, Farhad

    2011-01-01

    Numerical predictions of the longitudinal aerodynamic characteristics for the Ares I class of vehicles, along with the associated error estimate derived from an iterative convergence grid refinement, are presented. Computational results are based on the unstructured grid, Reynolds-averaged Navier-Stokes flow solver USM3D, with an assumption that the flow is fully turbulent over the entire vehicle. This effort was designed to complement the prior computational activities conducted over the past five years in support of the Ares I Project, with the emphasis on the vehicle's last design cycle, designated as the A106 configuration. Due to a lack of flight data for this particular design's outer mold line, the initial vehicle aerodynamic predictions and the associated error estimates were first assessed and validated against the available experimental data at representative wind tunnel flow conditions pertinent to the ascent phase of the trajectory, without including any propulsion effects. Subsequently, the established procedures were applied to obtain the longitudinal aerodynamic predictions at the selected flight flow conditions. Sample computed results and the correlations with the experimental measurements are presented. In addition, the present analysis includes the relevant data to highlight the balance between prediction accuracy and grid size and, thus, the corresponding computer resource requirements for the computations at both wind tunnel and flight flow conditions. NOTE: Some details have been removed from selected plots and figures in compliance with the sensitive but unclassified (SBU) restrictions. However, the content still conveys the merits of the technical approach and the relevant results.

  7. Estimating outcomes and cost effectiveness using a single-arm clinical trial: ofatumumab for double-refractory chronic lymphocytic leukemia

    OpenAIRE

    Hatswell, Anthony J.; Thompson, Gwilym J.; Maroudas, Penny A.; Sofrygin, Oleg; Delea, Thomas E.

    2017-01-01

    Background: Ofatumumab (Arzerra®, Novartis) is a treatment for chronic lymphocytic leukemia refractory to fludarabine and alemtuzumab [double refractory (DR-CLL)]. Ofatumumab was licensed on the basis of an uncontrolled Phase II study, Hx-CD20-406, in which patients receiving ofatumumab survived for a median of 13.9 months. However, the lack of an internal control arm presents an obstacle for the estimation of comparative effectiveness. Methods: The objective of the study was to present a metho...

  8. Understanding the Nature of Measurement Error When Estimating Energy Expenditure and Physical Activity via Physical Activity Recall.

    Science.gov (United States)

    Paul, David R; McGrath, Ryan; Vella, Chantal A; Kramer, Matthew; Baer, David J; Moshfegh, Alanna J

    2018-03-26

    The National Health and Nutrition Examination Survey physical activity questionnaire (PAQ) is used to estimate activity energy expenditure (AEE) and moderate to vigorous physical activity (MVPA). Bias and variance in estimates of AEE and MVPA from the PAQ have not been described, nor has the impact of measurement error when utilizing the PAQ to predict biomarkers and categorize individuals. The PAQ was administered to 385 adults to estimate AEE (AEE:PAQ) and MVPA (MVPA:PAQ), while simultaneously measuring AEE with doubly labeled water (DLW; AEE:DLW) and MVPA with an accelerometer (MVPA:A). Although AEE:PAQ [3.4 (2.2) MJ·d⁻¹] was not significantly different from AEE:DLW [3.6 (1.6) MJ·d⁻¹; P > .14], MVPA:PAQ [36.2 (24.4) min·d⁻¹] was significantly higher than MVPA:A [8.0 (10.4) min·d⁻¹]. AEE:PAQ regressed on AEE:DLW and MVPA:PAQ regressed on MVPA:A yielded not only significant positive relationships but also large residual variances. The relationships between AEE and MVPA, and 10 of the 12 biomarkers, were underestimated by the PAQ. When compared with accelerometers, the PAQ overestimated the number of participants who met the Physical Activity Guidelines for Americans. Group-level bias in AEE:PAQ was small, but large for MVPA:PAQ. Poor within-participant estimates of AEE:PAQ and MVPA:PAQ lead to attenuated relationships with biomarkers and misclassifications of participants who met or who did not meet the Physical Activity Guidelines for Americans.

  9. Bioelectrical impedance analysis to estimate body composition in children and adolescents: a systematic review and evidence appraisal of validity, responsiveness, reliability and measurement error

    NARCIS (Netherlands)

    Talma, H.; Chinapaw, M.J.M.; Bakker, B.; Hirasing, R.A.; Terwee, C.B.; Altenburg, T.M.

    2013-01-01

    Bioelectrical impedance analysis (BIA) is a practical method to estimate percentage body fat (%BF). In this systematic review, we aimed to assess validity, responsiveness, reliability and measurement error of BIA methods in estimating %BF in children and adolescents. We searched for relevant studies

  10. Estimating Error in SRTM Derived Planform of a River in Data-poor Region and Subsequent Impact on Inundation Modeling

    Science.gov (United States)

    Bhuyian, M. N. M.; Kalyanapu, A. J.

    2017-12-01

    Accurate representation of river planform is critical for hydrodynamic modeling. Digital elevation models (DEMs) often fall short in accurately representing river planform because they show the ground as it was during data acquisition, whereas water bodies (i.e., rivers) change their size and shape over time. River planforms are more dynamic in undisturbed riverine systems (mostly located in data-poor regions), where remote sensing is the most convenient source of data. For many such regions, the Shuttle Radar Topographic Mission (SRTM) is the best available source of DEMs. Therefore, the objective of this study is to estimate the error in the SRTM-derived planform of a river in a data-poor region and the subsequent impact on inundation modeling. Analysis of Landsat imagery, the SRTM DEM, and remotely sensed soil data was used to classify the planform activity in a 185 km stretch of the Kushiyara River in Bangladesh. In the last 15 years, the river eroded about 4.65 square km and deposited 7.55 square km. The current (year 2017) river planform is therefore significantly different from the SRTM water body data, which represent the time of SRTM data acquisition (the year 2000). The rate of planform shifting increased significantly as the river traveled downstream, so the study area was divided into three reaches (R1, R2, and R3) from upstream to downstream. Channel slope and meandering ratio changed from 2×10⁻⁷ and 1.64 in R1 to 1×10⁻⁴ and 1.45 in R3. However, more than 60% of the erosion-deposition occurred in R3, where a high percentage of Fluvisols (98%) and coarse particles (21%) were present in the vicinity of the river. This indicates that errors in SRTM water body data (due to planform shifting) could be correlated with the physical properties (i.e., slope, soil type, meandering ratio, etc.) of the riverine system. The correlations would help in zoning the activity of a riverine system and in determining a timeline to update the DEM for a given region. Additionally, to estimate the

  11. State-Space Analysis of Model Error: A Probabilistic Parameter Estimation Framework with Spatial Analysis of Variance

    Science.gov (United States)

    2012-09-30

    atmospheric models and the chaotic growth of initial-condition (IC) error. The aim of our work is to provide new methods that begin to systematically disentangle the model inadequacy signal from the initial condition error signal.

  12. An investigation into multi-dimensional prediction models to estimate the pose error of a quadcopter in a CSP plant setting

    Science.gov (United States)

    Lock, Jacobus C.; Smit, Willie J.; Treurnicht, Johann

    2016-05-01

    The Solar Thermal Energy Research Group (STERG) is investigating ways to make heliostats cheaper to reduce the total cost of a concentrating solar power (CSP) plant. One avenue of research is to use unmanned aerial vehicles (UAVs) to automate and assist with the heliostat calibration process. To do this, the pose estimation error of each UAV must be determined and integrated into a calibration procedure. A computer vision (CV) system is used to measure the pose of a quadcopter UAV. However, this CV system contains considerable measurement errors. Since this is a high-dimensional problem, a sophisticated prediction model must be used to estimate the measurement error of the CV system for any given pose measurement vector. This paper attempts to train and validate such a model with the aim of using it to determine the pose error of a quadcopter in a CSP plant setting.

  13. Multi Function Heat Pulse Probes (MFHPP) to Estimate Ground Heat Flux and Reduce Surface Energy Budget Errors

    Science.gov (United States)

    Ciocca, Francesco; Sharma, Varun; Lunati, Ivan; Parlange, Marc B.

    2013-04-01

    Ground heat flux plays a crucial role in the surface energy budget: energy storage and heat fluxes in soils are estimated incorrectly when probes such as heat flux plates are adopted, and these errors can account for up to 90% of the residual variance (Higgins, GRL, 2012). A promising alternative to heat flux plates is represented by Multi Function Heat Pulse Probes (MFHPP). They have proven to be accurate in estimating thermal properties and heat fluxes (e.g., Cobos, VZJ, 2003) and can be used to monitor and quantify subsurface evaporation in field experiments (Xiao et al., VZJ, 2011). We perform a laboratory experiment with controlled temperature in a small Plexiglas column (20 cm diameter and 40 cm height). The column is packed with homogeneously saturated sandy soil and equipped with three MFHPPs in the upper 4 cm and with thermocouples and dielectric soil moisture probes deeper down. This configuration allows for accurate and simultaneous measurements of ground heat flux, soil moisture, and subsurface evaporation. Total evaporation is monitored using a precision scale, while an infrared gun and a long-wave radiometer measure the soil skin temperature and the outgoing long- and short-wave radiation, respectively. A fan and a heat lamp placed above the column make it possible to mimic, on a smaller and more controlled scale, the field conditions induced by the diurnal cycle. Relative humidity, wind speed, and air temperature are collected at a reference height above the column. Results are interpreted by means of numerical simulations performed with an ad-hoc-developed numerical model that simulates coupled heat and moisture transfer in soils and is used to match and interpolate the temperature and soil moisture values obtained at finite depths within the column. Ground heat fluxes are then estimated by integrating over almost continuous, numerically simulated temperature profiles, which avoids errors due to the use of discrete data (Lunati et al., WRR, 2012) and leads to a more reliable estimate of

  14. Design of double gate vertical tunnel field effect transistor using HDB and its performance estimation

    Science.gov (United States)

    Seema; Chauhan, Sudakar Singh

    2018-05-01

    In this paper, we demonstrate a double gate vertical tunnel field-effect transistor using a homo/hetero dielectric buried oxide (HDB) to obtain optimized device characteristics. Here, the combination of the double gate, the HDB, and electrode work-function engineering enhances both DC and analog/RF performance. The use of electrostatic doping helps to achieve a higher on-current owing to the higher tunneling generation rate of charge carriers at the source/epitaxial interface. Further, a lightly doped drain region and a high-k dielectric below the channel and drain regions suppress the ambipolar current. Simulation results show that the proposed device achieves strong performance in terms of driving current capability, steeper subthreshold slope (SS), drain-induced barrier lowering (DIBL), hot carrier effects (HCEs), and high-frequency parameters for better device reliability.

  15. Background Error Covariance Estimation using Information from a Single Model Trajectory with Application to Ocean Data Assimilation into the GEOS-5 Coupled Model

    Science.gov (United States)

    Keppenne, Christian L.; Rienecker, Michele M.; Kovach, Robin M.; Vernieres, Guillaume; Koster, Randal D. (Editor)

    2014-01-01

    An attractive property of ensemble data assimilation methods is that they provide flow dependent background error covariance estimates which can be used to update fields of observed variables as well as fields of unobserved model variables. Two methods to estimate background error covariances are introduced which share the above property with ensemble data assimilation methods but do not involve the integration of multiple model trajectories. Instead, all the necessary covariance information is obtained from a single model integration. The Space Adaptive Forecast error Estimation (SAFE) algorithm estimates error covariances from the spatial distribution of model variables within a single state vector. The Flow Adaptive error Statistics from a Time series (FAST) method constructs an ensemble sampled from a moving window along a model trajectory. SAFE and FAST are applied to the assimilation of Argo temperature profiles into version 4.1 of the Modular Ocean Model (MOM4.1) coupled to the GEOS-5 atmospheric model and to the CICE sea ice model. The results are validated against unassimilated Argo salinity data. They show that SAFE and FAST are competitive with the ensemble optimal interpolation (EnOI) used by the Global Modeling and Assimilation Office (GMAO) to produce its ocean analysis. Because of their reduced cost, SAFE and FAST hold promise for high-resolution data assimilation applications.
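
    A minimal sketch of the FAST idea as described — a surrogate ensemble drawn from a moving window along a single trajectory, whose sample covariance spreads an observed-variable increment to an unobserved variable — on an invented two-variable toy series (not the GEOS-5/MOM4.1 implementation):

```python
import numpy as np

def fast_window_ensemble(trajectory, t, width):
    """States within +/- width steps of time t, returned as anomalies (members x vars)."""
    lo, hi = max(0, t - width), min(len(trajectory), t + width + 1)
    ens = trajectory[lo:hi]
    return ens - ens.mean(axis=0)

# Toy coupled trajectory: temperature and salinity varying coherently
steps = np.arange(500)
T = np.sin(0.05 * steps)
S = 0.5 * np.sin(0.05 * steps + 0.3)
X = np.column_stack([T, S])

anoms = fast_window_ensemble(X, t=250, width=20)
B = anoms.T @ anoms / (len(anoms) - 1)     # flow-dependent background covariance
# Scalar update: observe T with error variance r, update the unobserved S
innovation, r = 0.1, 0.02
gain_S = B[1, 0] / (B[0, 0] + r)           # Kalman gain for S given a T observation
print(B, gain_S * innovation)
```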

  16. Food photographs in nutritional surveillance: errors in portion size estimation using drawings of bread and photographs of margarine and beverages consumption.

    Science.gov (United States)

    De Keyzer, Willem; Huybrechts, Inge; De Maeyer, Mieke; Ocké, Marga; Slimani, Nadia; van 't Veer, Pieter; De Henauw, Stefaan

    2011-04-01

    Food photographs are widely used as instruments to estimate portion sizes of consumed foods. Several food atlases are available, all developed to be used in a specific context and for a given study population. Frequently, food photographs are adopted for use in other studies with a different context or another study population. In the present study, errors in portion size estimation of bread, margarine on bread and beverages by two-dimensional models used in the context of a Belgian food consumption survey are investigated. A sample of 111 men and women (age 45-65 years) were invited for breakfast; two test groups were created. One group was asked to estimate portion sizes of consumed foods using photographs 1-2 d after consumption, and a second group was asked the same after 4 d. Also, real-time assessment of portion sizes using photographs was performed. At the group level, large overestimation of margarine, acceptable underestimation of bread and only small estimation errors for beverages were found. Women tended to have smaller estimation errors for bread and margarine compared with men, while the opposite was found for beverages. Surprisingly, no major difference in estimation error was found after 4 d compared with 1-2 d. Individual estimation errors were large for all foods. The results from the present study suggest that the use of food photographs for portion size estimation of bread and beverages is acceptable for use in nutrition surveys. For photographs of margarine on bread, further validation using smaller amounts corresponding to actual consumption is recommended.

  17. Bit error rate estimation for galvanic-type intra-body communication using experimental eye-diagram and jitter characteristics.

    Science.gov (United States)

    Li, Jia Wen; Chen, Xi Mei; Pun, Sio Hang; Mak, Peng Un; Gao, Yue Ming; Vai, Mang I; Du, Min

    2013-01-01

    Bit error rate (BER), which indicates the reliability of a communication channel, is one of the most important figures of merit in any communication system, including intra-body communication (IBC). In order to characterize the IBC channel more fully, this paper presents a new method of BER estimation for galvanic-type IBC using experimental eye-diagram and jitter characteristics. To lay the foundation for our methodology, the fundamental relationships between eye-diagram, jitter, and BER are first reviewed. Then experiments based on human lower arm IBC are carried out using a quadrature phase shift keying (QPSK) modulation scheme and a 500 kHz carrier frequency. In our IBC experiments, the symbol rate ranges from 10 ksps to 100 ksps, with two transmitted power settings, 0 dBm and -5 dBm. Finally, the BER results were calculated from the experimental data through the relationships among eye-diagram, jitter, and BER. These results are then compared with theoretical values and show good agreement, especially when the SNR is between 6 dB and 11 dB. Additionally, these results demonstrate that modeling the noise of the galvanic-type IBC channel as additive white Gaussian noise (AWGN), as assumed in previous studies, is applicable.
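
    The standard amplitude-domain link between an eye diagram and BER, which estimation of this kind builds on, is the Gaussian Q-factor relation BER = 0.5·erfc(Q/√2). A hedged sketch with synthetic rail samples (the jitter-induced timing term is omitted):

```python
import numpy as np
from scipy.special import erfc

def ber_from_eye(samples_one, samples_zero):
    """Q-factor from the two rail distributions at the sampling instant -> BER."""
    q = (samples_one.mean() - samples_zero.mean()) \
        / (samples_one.std() + samples_zero.std())
    return 0.5 * erfc(q / np.sqrt(2.0))

rng = np.random.default_rng(3)
ones = 1.0 + 0.15 * rng.standard_normal(50_000)     # sampled upper eye rail
zeros = -1.0 + 0.15 * rng.standard_normal(50_000)   # sampled lower eye rail
print(ber_from_eye(ones, zeros))                    # Q ~ 6.7 -> BER ~ 1e-11
```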

  18. Anomalous Intraslab Seismicity Beneath Hokkaido, Japan, Estimated From Double-Difference Hypocenter Locations

    Science.gov (United States)

    Kita, S.; Okada, T.; Nakajima, J.; Matsuzawa, T.; Suganomata, J.; Hasegawa, A.; Kirby, S. H.

    2005-12-01

    1. Introduction: The generation process of intraslab earthquakes is one of the important problems to be solved in seismology. In many subduction zones, intraslab earthquakes at 50-200 km depth form a double seismic zone (e.g., Hasegawa et al., 1978). Dehydration embrittlement of the metamorphosed oceanic crust and oceanic mantle may be responsible for the occurrence of events in the double seismic zone (Kirby et al., 1996; Seno and Yamanaka, 1996; Peacock, 2001). Some recent large intraslab earthquakes ruptured off the usual seismic planes (e.g., the 1993 Kushiro-Oki earthquake (Ide and Takeo, 1996); the 2003 Miyagi-Oki earthquake (Sakoda et al., 2004); and the 2001 Geiyo earthquake (Suganomata et al., 2005b)). These large intraslab earthquakes may be caused by the reactivation of large hydrated faults within the subducting slabs. In the focal area of the 2003 Miyagi-Oki earthquake, anomalous seismicity between the upper and lower planes of the double seismic zone had also occurred before the earthquake (Sakoda et al., 2004; Suganomata et al., 2005a). This anomalous seismicity may also be caused by reactivation of faults distributed within the subducting slab. Therefore, seismicity in the regions off the seismic planes may be a clue to understanding the cause of intraslab earthquakes. In this study we relocated microearthquakes and detected anomalous seismicity within the Pacific plate slab beneath Hokkaido, Japan. 2. Data and method: We relocated events in the JMA earthquake catalog at depths of 20-300 km for the period from January 2002 to August 2005. Hypocenter parameters and arrival time data in the JMA catalog were used as the initial hypocenters and data for the relocation. We adopted the double-difference hypocenter location method developed by Waldhauser and Ellsworth (2000). 3. Results - Anomalous seismicity beneath Hokkaido: The relocation result is as follows. 1) The result shows some anomalous hypocenters distributed between the upper and lower planes of the

  19. Study of dosimetry errors in the framework of a concerted international study about the risk of cancer in nuclear industry workers. Study of the errors made on dose estimations of 100 to 3000 keV photons

    International Nuclear Information System (INIS)

    Thierry Chef, I.

    2000-01-01

    Ionizing radiation is an established cancer risk factor, and radioprotection standards are defined on the basis of epidemiological studies of persons exposed to high doses of radiation (atomic bombs and therapeutic medical exposures). An epidemiological study of cancer risk has been carried out on nuclear industry workers from 17 countries in order to check these standards and to directly evaluate the risk linked with long-duration exposures to low doses. The techniques used to measure the workers' doses have changed over time, and these changes have differed among the countries considered. The study of dosimetry errors aims at assessing the comparability of doses across time periods and countries, and at quantifying the errors that could have disturbed the dose measurements during the first years, so that they can be accounted for in the risk estimation. A compilation of the information available about dosimetry in the participating countries has been performed and the main sources of errors have been identified. Experiments have been carried out to test the response of the dosimeters used and to evaluate the conditions of exposure inside the companies. The biases and uncertainties have been estimated per company and per period of time; the largest correspond to the oldest measurements performed. This study also contributes to improving knowledge of the working conditions and of the precision of dose estimates. (J.S.)

  20. Estimation of breeding values for mean and dispersion, their variance and correlation using double hierarchical generalized linear models.

    Science.gov (United States)

    Felleki, M; Lee, D; Lee, Y; Gilmour, A R; Rönnegård, L

    2012-12-01

    The possibility of breeding for uniform individuals by selecting animals expressing a small response to environment has been studied extensively in animal breeding. Bayesian methods for fitting models with genetic components in the residual variance have been developed for this purpose, but have limitations due to the computational demands. We use the hierarchical (h)-likelihood from the theory of double hierarchical generalized linear models (DHGLM) to derive an estimation algorithm that is computationally feasible for large datasets. Random effects for both the mean and residual variance parts of the model are estimated together with their variance/covariance components. An important feature of the algorithm is that it can fit a correlation between the random effects for mean and variance. An h-likelihood estimator is implemented in the R software and an iterative reweighted least squares (IRWLS) approximation of the h-likelihood is implemented using ASReml. The difference in variance component estimates between the two implementations is investigated, as well as the potential bias of the methods, using simulations. IRWLS gives the same results as h-likelihood in simple cases with no severe indication of bias. For more complex cases, only IRWLS could be used, and bias did appear. The IRWLS is applied to the pig litter size data previously analysed by Sorensen & Waagepetersen (2003) using Bayesian methodology. The estimates we obtained by using IRWLS are similar to theirs, with the estimated correlation between the random genetic effects being -0.52 for IRWLS and -0.62 in Sorensen & Waagepetersen (2003).

  1. Errors in identification using natural markings : Rates, sources, and effects on capture-recapture estimates of abundance

    NARCIS (Netherlands)

    Stevick, PT; Palsboll, PJ; Smith, TD; Bravington, MV; Hammond, PS

    The results of a double-marking experiment using natural markings and microsatellite genetic markers to identify humpback whales (Megaptera novaeangliae) confirm that natural markings are a reliable means of identifying individuals on a large scale. Of 1410 instances of double tagging, there were

  2. Investigation of error estimation method of observational data and comparison method between numerical and observational results toward V and V of seismic simulation

    International Nuclear Information System (INIS)

    Suzuki, Yoshio; Kawakami, Yoshiaki; Nakajima, Norihiro

    2017-01-01

    The methods for estimating errors included in observational data and for comparing numerical results with observational results are investigated toward the verification and validation (V and V) of seismic simulation. For the error-estimation method, 144 publications from the past 5 years (2010 to 2014) in the structural and earthquake engineering fields, where descriptions of acceleration data are frequent, were surveyed. As a result, it is found that processes to remove components regarded as errors from observational data are used in about 30% of those publications. Errors are caused by resolution, linearity, the temperature coefficients for sensitivity and zero shift, transverse sensitivity, seismometer properties, aliasing, and so on. Those processes can be exploited to estimate errors individually. For the comparison method, public materials of the ASME V and V Symposium 2012-2015, their references, and the 144 publications above were surveyed. As a result, it is found that six methods have mainly been proposed in existing research. Evaluating those methods against nine criteria, their advantages and disadvantages are summarized. No method is yet well established, so it is necessary to employ the existing methods while compensating for their disadvantages and/or to search for a novel method. (author)

  3. Estimation of partial least squares regression prediction uncertainty when the reference values carry a sizeable measurement error

    NARCIS (Netherlands)

    Fernandez Pierna, J.A.; Lin, L.; Wahl, F.; Faber, N.M.; Massart, D.L.

    2003-01-01

    The prediction uncertainty is studied when using a multivariate partial least squares regression (PLSR) model constructed with reference values that contain a sizeable measurement error. Several approximate expressions for calculating a sample-specific standard error of prediction have been proposed

  4. Enabling Predictive Simulation and UQ of Complex Multiphysics PDE Systems by the Development of Goal-Oriented Variational Sensitivity Analysis and A Posteriori Error Estimation Methods

    Energy Technology Data Exchange (ETDEWEB)

    Ginting, Victor

    2014-03-15

    It was demonstrated that a posteriori analyses in general, and in particular those that use adjoint methods, can accurately and efficiently compute numerical error estimates and sensitivities for critical Quantities of Interest (QoIs) that depend on a large number of parameters. Activities include: analysis and implementation of several time integration techniques for solving systems of ODEs as typically obtained from spatial discretization of PDE systems; multirate integration methods for ordinary differential equations; formulation and analysis of an iterative multi-discretization Galerkin finite element method for multi-scale reaction-diffusion equations; investigation of an inexpensive postprocessing technique to estimate the error of the finite element solution of second-order quasi-linear elliptic problems measured in some global metrics; investigation of an application of residual-based a posteriori error estimates to the symmetric interior penalty discontinuous Galerkin method for solving a class of second-order quasi-linear elliptic problems; a posteriori analysis of explicit time integrations for systems of linear ordinary differential equations; derivation of accurate a posteriori goal-oriented error estimates for a user-defined quantity of interest for two classes of first- and second-order IMEX schemes for advection-diffusion-reaction problems; postprocessing of finite element solutions; and a Bayesian framework for uncertainty quantification of porous media flows.

  5. A conditional statistical shape model with integrated error estimation of the conditions; application to liver segmentation in non-contrast CT images

    NARCIS (Netherlands)

    Tomoshige, Sho; Oost, Elco; Shimizu, Akinobu; Watanabe, Hidefumi; Nawano, Shigeru

    2014-01-01

    This paper presents a novel conditional statistical shape model in which the condition can be relaxed instead of being treated as a hard constraint. The major contribution of this paper is the integration of an error model that estimates the reliability of the observed conditional features and

  6. A Comparison of Kernel Equating and Traditional Equipercentile Equating Methods and the Parametric Bootstrap Methods for Estimating Standard Errors in Equipercentile Equating

    Science.gov (United States)

    Choi, Sae Il

    2009-01-01

    This study used simulation (a) to compare the kernel equating method to traditional equipercentile equating methods under the equivalent-groups (EG) design and the nonequivalent-groups with anchor test (NEAT) design and (b) to apply the parametric bootstrap method for estimating standard errors of equating. A two-parameter logistic item response…

  7. On the precision of an estimator of Mean for Domains in Double ...

    African Journals Online (AJOL)

    The results show that there is a positive contribution to the variance of the estimator which varies from one stratum to another. This addition vanishes where the domain coincides with a stratum. The total sampling variance depends only on components of variance for the domain and is inversely related to the total sample ...

  8. Estimation of Neutral Density in Edge Plasma with Double Null Configuration in EAST

    International Nuclear Information System (INIS)

    Zhang Ling; Xu Guosheng; Ding Siye; Gao Wei; Wu Zhenwei; Chen Yingjie; Huang Juan; Liu Xiaoju; Zang Qing; Chang Jiafeng; Zhang Wei; Li Yingying; Qian Jinping

    2011-01-01

    In this work, population coefficients of hydrogen's n = 3 excited state from the hydrogen collisional-radiative (CR) model, from the data file of DEGAS 2, are used to calculate the photon emissivity coefficients (PECs) of hydrogen Balmer-α (n = 3 → n = 2) (Hα). The results are compared with the PECs from the Atomic Data and Analysis Structure (ADAS) database, and a good agreement is found. A magnetic surface-averaged neutral density profile of typical double-null (DN) plasma in EAST is obtained by using FRANTIC, the 1.5-D fluid transport code. It is found that the sum of the integral Dα and Hα emission intensity calculated via the neutral density agrees with the measured results obtained by using the absolutely calibrated multi-channel poloidal photodiode array systems viewing the lower divertor at the last closed flux surface (LCFS). It is revealed that the typical magnetic surface-averaged neutral density at the LCFS is about 3.5 × 10¹⁶ m⁻³. (magnetically confined plasma)

  9. On-board adaptive model for state of charge estimation of lithium-ion batteries based on Kalman filter with proportional integral-based error adjustment

    Science.gov (United States)

    Wei, Jingwen; Dong, Guangzhong; Chen, Zonghai

    2017-10-01

    With the rapid development of battery-powered electric vehicles, the lithium-ion battery plays a critical role in the reliability of the vehicle system. In order to provide timely management and protection for battery systems, it is necessary to develop a reliable battery model and accurate estimation of battery parameters to describe battery dynamic behaviors. Therefore, this paper focuses on an on-board adaptive model for state-of-charge (SOC) estimation of lithium-ion batteries. Firstly, a first-order equivalent circuit battery model is employed to describe the battery's dynamic characteristics. Then, the recursive least squares algorithm and an off-line identification method are used to provide good initial values of the model parameters, to ensure filter stability and reduce the convergence time. Thirdly, an extended Kalman filter (EKF) is applied to estimate the battery SOC and model parameters on-line. Because the EKF is essentially a first-order Taylor approximation of the battery model and therefore contains inevitable model errors, a proportional integral-based error adjustment technique is employed to improve the performance of the EKF method and correct the model parameters. Finally, the experimental results on lithium-ion batteries indicate that the proposed EKF with proportional integral-based error adjustment can provide a robust and accurate battery model and on-line parameter estimation.
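
    A compact sketch of an EKF on a first-order equivalent-circuit model of the kind described; the parameters and OCV polynomial are invented placeholders, and the paper's proportional integral-based correction is omitted for brevity:

```python
import numpy as np

# Illustrative cell parameters: series resistance, RC pair, capacity (A·s), step (s)
R0, R1, C1, Q_As, dt = 0.05, 0.015, 2400.0, 2.0 * 3600.0, 1.0
a = np.exp(-dt / (R1 * C1))

def ocv(soc):  return 3.2 + 0.7 * soc + 0.1 * soc ** 2   # made-up OCV curve
def docv(soc): return 0.7 + 0.2 * soc                     # its derivative

def ekf_step(x, P, i_k, v_meas, Qn=np.diag([1e-7, 1e-6]), Rn=1e-3):
    # Predict: coulomb counting for SOC, RC relaxation for polarization voltage
    x_pred = np.array([x[0] - i_k * dt / Q_As,
                       a * x[1] + R1 * (1.0 - a) * i_k])
    F = np.array([[1.0, 0.0], [0.0, a]])
    P = F @ P @ F.T + Qn
    # Update with measured terminal voltage v = ocv(soc) - v1 - R0 * i
    H = np.array([docv(x_pred[0]), -1.0])
    y = v_meas - (ocv(x_pred[0]) - x_pred[1] - R0 * i_k)  # innovation
    S = H @ P @ H + Rn
    K = P @ H / S
    x_new = x_pred + K * y
    P = (np.eye(2) - np.outer(K, H)) @ P
    return x_new, P

# One discharge step at 1 A starting from a wrong initial SOC guess
x, P = np.array([0.9, 0.0]), np.eye(2) * 0.1
v_meas = ocv(0.8) - R0 * 1.0                              # "measured" voltage at SOC 0.8
x, P = ekf_step(x, P, 1.0, v_meas)
print(x)                                                  # SOC pulled from 0.9 toward 0.8
```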

  10. Bayesian estimation of a proportion under an asymmetric observation error

    Directory of Open Access Journals (Sweden)

    Juan Carlos Correa Morales

    2012-06-01

    Full Text Available The process of estimating a proportion associated with a sensitive question can yield responses that do not necessarily accord with reality. To reduce the probability of false responses to this kind of sensitive question, some authors have proposed randomized-response techniques assuming an asymmetric observation error. In this paper we present a generalization of the case where a symmetric error is assumed, since this assumption could be unrealistic in practice. Under the assumption of an asymmetric error, the likelihood function is built. By doing this we intend that, in practice, the final user has an alternative method to reduce the probability of false responses. Assuming informative a priori distributions, an expression for the posterior distribution is found. Since this posterior distribution does not have a closed mathematical expression, it is necessary to use the Gibbs sampler to carry out the estimation process. This technique is illustrated using real data about drug consumption collected by the Oficina de Bienestar of the Universidad Nacional de Colombia at Medellín.
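
    The observation model discussed here can be written as P(observed "yes") = p(1 − e1) + (1 − p)e2, with e1 and e2 the two misreporting probabilities. For intuition, the sketch below computes the posterior on a grid with assumed known error rates, in place of the article's Gibbs sampler:

```python
import numpy as np
from scipy import stats

def posterior_grid(yes, n, e1, e2, a=1.0, b=1.0, m=2001):
    """Grid posterior for the true proportion p under asymmetric misreporting."""
    p = np.linspace(0.0, 1.0, m)
    theta = p * (1.0 - e1) + (1.0 - p) * e2      # prob. of an observed "yes"
    log_post = (stats.beta(a, b).logpdf(p)        # Beta prior on p
                + yes * np.log(theta) + (n - yes) * np.log(1.0 - theta))
    w = np.exp(log_post - log_post.max())
    w /= w.sum()
    return p, w

# 120 "yes" answers out of 500, assuming known error rates e1 = 0.10, e2 = 0.05
p, w = posterior_grid(yes=120, n=500, e1=0.10, e2=0.05)
print((p * w).sum())                              # posterior mean of the true proportion
```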

  11. Simultaneous estimation of cross-validation errors in least squares collocation applied for statistical testing and evaluation of the noise variance components

    Science.gov (United States)

    Behnabian, Behzad; Mashhadi Hossainali, Masoud; Malekzadeh, Ahad

    2018-02-01

    The cross-validation technique is a popular method to assess and improve the quality of prediction by least squares collocation (LSC). We present a formula for direct estimation of the vector of cross-validation errors (CVEs) in LSC which is much faster than element-wise CVE computation. We show that a quadratic form of the CVEs follows a chi-squared distribution, and an a posteriori noise variance factor is derived from this quadratic form. In order to detect blunders in the observations, the estimated standardized CVE is proposed as the test statistic, applicable whether the noise variances are known or unknown. We use LSC together with the methods proposed in this research for interpolation of crustal subsidence on the northern coast of the Gulf of Mexico. The results show that after detection and removal of outliers, the root mean square (RMS) of the CVEs and the estimated noise standard deviation are reduced by about 51% and 59%, respectively. In addition, the RMS of the LSC prediction error at the data points and the RMS of the estimated observation noise are decreased by 39% and 67%, respectively. However, the RMS of the LSC prediction error on a regular grid of interpolation points covering the area is reduced by only about 4%, a consequence of the sparse distribution of data points in this case study. The influence of gross errors on the LSC prediction results is also investigated through lower cutoff CVEs; after elimination of outliers, the RMS of this type of error is also reduced, by 19.5% for a 5 km radius of vicinity. We propose a method using standardized CVEs for classifying the dataset into three groups with presumed different noise variances. The noise variance components for each group are estimated by the restricted maximum-likelihood method via the Fisher scoring technique. Finally, LSC assessment measures were computed for the estimated heterogeneous noise variance model and compared with those of the homogeneous model. The advantage of the proposed method is the
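
    One closed form consistent with the abstract's claim, sketched under the assumption that the CVE at point i is the observation minus its LSC prediction from all remaining points, with C the full signal-plus-noise covariance of the observations: e_i = (C^-1 l)_i / (C^-1)_ii. The element-wise loop below serves only to verify the identity on synthetic data.

```python
import numpy as np

def loo_cve(C, l):
    """Direct leave-one-out cross-validation errors:
    e_i = (C^-1 l)_i / (C^-1)_ii, C = full (signal + noise) covariance."""
    Cinv = np.linalg.inv(C)
    return (Cinv @ l) / np.diag(Cinv)

# Verify against element-wise recomputation on a synthetic SPD covariance
rng = np.random.default_rng(1)
n = 6
M = rng.normal(size=(n, n))
C = M @ M.T + n * np.eye(n)
l = rng.normal(size=n)

e_fast = loo_cve(C, l)
for i in range(n):
    rest = [j for j in range(n) if j != i]
    pred = C[i, rest] @ np.linalg.solve(C[np.ix_(rest, rest)], l[rest])
    assert np.isclose(l[i] - pred, e_fast[i])
print("closed-form CVEs match element-wise recomputation")
```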

  12. Throughput Estimation Method in Burst ACK Scheme for Optimizing Frame Size and Burst Frame Number Appropriate to SNR-Related Error Rate

    Science.gov (United States)

    Ohteru, Shoko; Kishine, Keiji

    The Burst ACK scheme enhances effective throughput by reducing ACK overhead when a transmitter sends multiple data frames to a destination in sequence; IEEE 802.11e is one such example. The size of the data frame body and the number of burst data frames are important burst transmission parameters that affect throughput. The larger the burst transmission parameters, the better the throughput under error-free conditions. However, large data frames can reduce throughput under error-prone conditions caused by signal-to-noise ratio (SNR) deterioration. If the throughput can be calculated from the burst transmission parameters and the error rate, the appropriate ranges of the burst transmission parameters can be narrowed down, and the buffer size needed to store transmitted or received data temporarily can be estimated. In this paper, we present a method featuring a simple algorithm for estimating the effective throughput from the burst transmission parameters and the error rate. The calculated throughput values agree well with values measured on actual wireless boards running an original IEEE 802.11-based MAC protocol. We also calculate throughput values for burst transmission parameters beyond those assignable on the wireless boards and find the appropriate values of the burst transmission parameters.
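
    A simplified throughput estimate of this kind, sketched below. The timing constants, PHY rate, and the assumption of independent bit errors with no retransmission are illustrative; the paper's algorithm and IEEE 802.11 framing details are not reproduced.

```python
def frame_error_rate(ber, frame_bytes):
    """FER assuming independent bit errors over an uncoded frame."""
    return 1.0 - (1.0 - ber) ** (8 * frame_bytes)

def burst_throughput(ber, frame_bytes, n_frames, phy_rate=54e6,
                     t_overhead_per_frame=50e-6, t_ack=100e-6):
    """Expected goodput [bit/s]: surviving payload divided by burst airtime
    (data frames + per-frame overhead + one burst ACK exchange)."""
    fer = frame_error_rate(ber, frame_bytes)
    t_data = 8 * frame_bytes / phy_rate
    t_burst = n_frames * (t_data + t_overhead_per_frame) + t_ack
    return n_frames * 8 * frame_bytes * (1.0 - fer) / t_burst

# Narrow down the appropriate parameter ranges for a given error rate
best = max((burst_throughput(1e-5, fb, nf), fb, nf)
           for fb in (256, 512, 1024, 2048)
           for nf in (1, 4, 8, 16))
print("best: %.1f Mbit/s at frame = %d B, burst = %d frames"
      % (best[0] / 1e6, best[1], best[2]))
```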

  13. Effect of conductivity variations within the electric double layer on the streaming potential estimation in narrow fluidic confinements.

    Science.gov (United States)

    Das, Siddhartha; Chakraborty, Suman

    2010-07-06

    In this article, we investigate the implications of ionic conductivity variations within the electrical double layer (EDL) for the estimation of streaming potential in pressure-driven fluidic transport through narrow confinements. Unlike traditional treatments, we do not fix the ionic conductivities a priori by employing preset values of dimensionless parameters (such as the Dukhin number) to estimate the streaming potential. Rather, utilizing the Gouy-Chapman-Grahame model for the electric potential and charge density distribution within the Stern layer, we first quantify the Stern layer electrical conductivity as a function of the zeta potential and other pertinent parameters quantifying the interaction of the ionic species with the charged surface. Next, by invoking the Boltzmann model for the cationic and anionic distributions within the diffuse layer, we obtain the diffuse layer electrical conductivity. On the basis of these two conductivities, pertaining to the two portions of the EDL, as well as the bulk conductivity, we define two separate Dukhin numbers that turn out to be functions of the dimensionless zeta potential and the ratio of channel height to Debye length. Considering the above, we derive analytical expressions for the streaming potential as a function of the fundamental governing parameters. The results reveal interesting and significant deviations between the streaming potential predicted by the present considerations and the corresponding predictions of classical treatments, in which electrochemically consistent estimates of variable EDL conductivity are not traditionally accounted for. In particular, it is revealed that the variation of streaming potential with zeta potential is primarily determined by the competing effects of EDL electromigration and ionic advection. Over low and high zeta potential regimes, the Stern layer and diffuse layer conductivities predominantly dictate the streaming

  14. Estimating outcomes and cost effectiveness using a single-arm clinical trial: ofatumumab for double-refractory chronic lymphocytic leukemia.

    Science.gov (United States)

    Hatswell, Anthony J; Thompson, Gwilym J; Maroudas, Penny A; Sofrygin, Oleg; Delea, Thomas E

    2017-01-01

    Ofatumumab (Arzerra®, Novartis) is a treatment for chronic lymphocytic leukemia refractory to fludarabine and alemtuzumab [double refractory (DR-CLL)]. Ofatumumab was licensed on the basis of an uncontrolled Phase II study, Hx-CD20-406, in which patients receiving ofatumumab survived for a median of 13.9 months. However, the lack of an internal control arm presents an obstacle for the estimation of comparative effectiveness. The objective of the study was to present a method to estimate the cost effectiveness of ofatumumab in the treatment of DR-CLL. As no suitable historical control was available for modelling, the outcomes of non-responders to ofatumumab were used to model the effect of best supportive care (BSC), via a Cox regression to control for differences in baseline characteristics between groups. This analysis was included in a partitioned survival model built in Microsoft® Excel, with utilities and costs taken from published sources and with costs and quality-adjusted life years (QALYs) discounted at a rate of 3.5% per annum. Using the outcomes seen in non-responders, ofatumumab is expected to add approximately 0.62 life years (1.50 vs. 0.88). Using published utility values, this translates to an additional 0.30 QALYs (0.77 vs. 0.47). At the list price, ofatumumab had a cost per QALY of £130,563 and a cost per life year of £63,542. The model was sensitive to changes in assumptions regarding overall survival estimates and utility values. This study demonstrates the potential of using data for non-responders to model outcomes for BSC in cost-effectiveness evaluations based on single-arm trials. Further research is needed on the estimation of comparative effectiveness using uncontrolled clinical studies.
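
    A toy partitioned survival calculation showing the mechanics the abstract describes: state occupancy is read off survival curves, weighted by utilities, and discounted at 3.5% per annum. The exponential curves, the utilities, the assumed PFS median, and the BSC medians standing in for the non-responder arm are all illustrative, not the published model's inputs.

```python
import numpy as np

dt = 1.0 / 12.0                         # monthly grid over a 10-year horizon
t = np.arange(0.0, 10.0, dt)
disc = (1.0 + 0.035) ** (-t)            # 3.5% per-annum discounting

def qalys(os_median, pfs_median, u_pf=0.65, u_pd=0.45):
    """Discounted QALYs from exponential OS/PFS curves (medians in years)."""
    os = np.exp(-np.log(2.0) * t / os_median)      # overall survival
    pfs = np.exp(-np.log(2.0) * t / pfs_median)    # progression-free survival
    q = (pfs * u_pf + (os - pfs) * u_pd) * disc    # occupancy x utility
    return q.sum() * dt

q_ofa = qalys(13.9 / 12.0, 5.7 / 12.0)  # trial OS median; PFS median assumed
q_bsc = qalys(8.0 / 12.0, 3.0 / 12.0)   # assumed non-responder (BSC) proxy
print("incremental QALYs ~ %.2f" % (q_ofa - q_bsc))
```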

  15. Accuracy in Parameter Estimation for the Root Mean Square Error of Approximation: Sample Size Planning for Narrow Confidence Intervals

    Science.gov (United States)

    Kelley, Ken; Lai, Keke

    2011-01-01

    The root mean square error of approximation (RMSEA) is one of the most widely reported measures of misfit/fit in applications of structural equation modeling. When the RMSEA is of interest, so too should be the accompanying confidence interval. A narrow confidence interval reveals that the plausible parameter values are confined to a relatively…

  16. How much can we trust some moment tensors or an attempt of seismic moment error estimation - 2. data reinterpretation, methodology improvement

    Czech Academy of Sciences Publication Activity Database

    Kolář, Petr

    2008-01-01

    Roč. 5, 1 /149/ (2008), s. 31-39 ISSN 1214-9705 R&D Projects: GA AV ČR IAA300120502; GA AV ČR IAA200120701; GA AV ČR(CZ) IAA300120805 Institutional research plan: CEZ:AV0Z30120515 Keywords : seismic moment tensor inversion * error estimation * seismic moment tensor decomposition Subject RIV: DC - Seismology, Volcanology, Earth Structure

  17. Estimation of Errors: Mathematical Expressions of Temperature, Substrate Concentration and Enzyme Concentration based Formulas for obtaining intermediate values of the Rate of Enzymatic Reaction

    OpenAIRE

    Nizam Uddin

    2013-01-01

    This research paper addresses the estimation of errors in the formulas used to obtain intermediate values of the rate of enzymatic reaction. The rate of an enzymatic reaction is affected by substrate concentration, temperature, enzyme concentration, and other factors. A rise in temperature accelerates an enzyme reaction; at a certain temperature, known as the optimum temperature, the activity is maximal. The concentration of substrate is the limiting factor, as the substrate co...
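
    The interpolation-error question can be made concrete with a toy rate law: Michaelis-Menten in the substrate, linear in enzyme concentration, and a Gaussian temperature optimum. Every functional form and constant below is an illustrative assumption, not the paper's formulas.

```python
import math

def rate(substrate, enzyme, temp_c, vmax_per_e=10.0, km=0.5,
         t_opt=37.0, width=8.0):
    """Toy rate law: Michaelis-Menten in substrate, linear in enzyme,
    Gaussian optimum in temperature."""
    mm = substrate / (km + substrate)                      # substrate limitation
    t_factor = math.exp(-((temp_c - t_opt) / width) ** 2)  # optimum temperature
    return vmax_per_e * enzyme * mm * t_factor

# Error of a linear interpolation between measured conditions, relative to
# the model's own intermediate value:
v25, v45 = rate(1.0, 1.0, 25.0), rate(1.0, 1.0, 45.0)
v35_interp = 0.5 * (v25 + v45)
v35_model = rate(1.0, 1.0, 35.0)
print("interpolated %.2f vs model %.2f (error %.0f%%)"
      % (v35_interp, v35_model, 100 * abs(v35_interp - v35_model) / v35_model))
```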

  18. Using marginal structural measurement-error models to estimate the long-term effect of antiretroviral therapy on incident AIDS or death.

    Science.gov (United States)

    Cole, Stephen R; Jacobson, Lisa P; Tien, Phyllis C; Kingsley, Lawrence; Chmiel, Joan S; Anastos, Kathryn

    2010-01-01

    To estimate the net effect of imperfectly measured highly active antiretroviral therapy on incident acquired immunodeficiency syndrome or death, the authors combined inverse probability-of-treatment-and-censoring weighted estimation of a marginal structural Cox model with regression-calibration methods. Between 1995 and 2007, 950 human immunodeficiency virus-positive men and women were followed in 2 US cohort studies. During 4,054 person-years, 374 initiated highly active antiretroviral therapy, 211 developed acquired immunodeficiency syndrome or died, and 173 dropped out. Accounting for measured confounders and determinants of dropout, the weighted hazard ratio for acquired immunodeficiency syndrome or death comparing use of highly active antiretroviral therapy in the prior 2 years with no therapy was 0.36 (95% confidence limits: 0.21, 0.61). This association was relatively constant over follow-up (P = 0.19) and stronger than crude or adjusted hazard ratios of 0.75 and 0.95, respectively. Accounting for measurement error in reported exposure using external validation data on 331 men and women provided a hazard ratio of 0.17, with bias shifted from the hazard ratio to the estimate of precision as seen by the 2.5-fold wider confidence limits (95% confidence limits: 0.06, 0.43). Marginal structural measurement-error models can simultaneously account for 3 major sources of bias in epidemiologic research: validated exposure measurement error, measured selection bias, and measured time-fixed and time-varying confounding.
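
    A single-time-point sketch of the stabilized inverse-probability-of-treatment weights that underlie such a marginal structural model. The study's time-varying treatment and censoring weights and its regression-calibration correction for exposure measurement error are not reproduced; the data and the single confounder are synthetic.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000
confounder = rng.normal(size=(n, 1))     # stands in for covariates such as CD4
p_treat = 1.0 / (1.0 + np.exp(-(0.5 * confounder[:, 0] - 0.2)))
treated = rng.binomial(1, p_treat)

# Denominator model: P(A = 1 | confounders); numerator: marginal P(A = 1)
denom = LogisticRegression().fit(confounder, treated).predict_proba(confounder)[:, 1]
num = treated.mean()
weights = np.where(treated == 1, num / denom, (1.0 - num) / (1.0 - denom))
print("stabilized weights: mean %.3f (near 1), max %.2f"
      % (weights.mean(), weights.max()))
# The weights would then enter a weighted (marginal structural) Cox model
# of time to AIDS or death.
```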

  19. A Novel Degradation Estimation Method for a Hybrid Energy Storage System Consisting of Battery and Double-Layer Capacitor

    Directory of Open Access Journals (Sweden)

    Yuanbin Yu

    2016-01-01

    Full Text Available This paper presents a new method for battery degradation estimation using a power-energy (PE) function in a battery/ultracapacitor hybrid energy storage system (HESS), together with an integrated optimization that addresses both parameter matching and control of the HESS. A semiactive HESS topology, with the electric double-layer capacitor (EDLC) coupled directly to the DC-link, is adopted for a hybrid electric city bus (HECB). To quantify the relationship between system parameters and battery service life, data from a 37-minute driving cycle were first collected and decomposed into discharging/charging fragments; an optimal control strategy intended to make maximal use of the available EDLC energy is then presented to split the power demand between the battery and the EDLC. Furthermore, based on a battery degradation model, the power demand is converted through the PE function and PE matrix to evaluate the relationship between the energy available in the HESS and the service life of the battery pack. The approach thereby decouples parameter matching from optimal control of the HESS and summarizes the process of battery degradation and service-life estimation for the HESS.

  20. Improvement of Parameter Estimations in Tumor Growth Inhibition Models on Xenografted Animals: Handling Sacrifice Censoring and Error Caused by Experimental Measurement on Larger Tumor Sizes.

    Science.gov (United States)

    Pierrillas, Philippe B; Tod, Michel; Amiel, Magali; Chenel, Marylore; Henin, Emilie

    2016-09-01

    The purpose of this study was to explore the impact of censoring due to animal sacrifice on parameter estimates and tumor volume calculated from two diameters in larger tumors during tumor growth experiments in preclinical studies. The type of measurement error that can be expected was also investigated. Different scenarios were challenged using the stochastic simulation and estimation process. One thousand datasets were simulated under the design of a typical tumor growth study in xenografted mice, and then, eight approaches were used for parameter estimation with the simulated datasets. The distribution of estimates and simulation-based diagnostics were computed for comparison. The different approaches were robust regarding the choice of residual error and gave equivalent results. However, by not considering missing data induced by sacrificing the animal, parameter estimates were biased and led to false inferences in terms of compound potency; the threshold concentration for tumor eradication when ignoring censoring was 581 ng·ml⁻¹, but the true value was 240 ng·ml⁻¹.

  1. Sources of error inherent in species-tree estimation: impact of mutational and coalescent effects on accuracy and implications for choosing among different methods.

    Science.gov (United States)

    Huang, Huateng; He, Qixin; Kubatko, Laura S; Knowles, L Lacey

    2010-10-01

    Discord in the estimated gene trees among loci can be attributed to both the process of mutation and incomplete lineage sorting. Effectively modeling these two sources of variation (mutational and coalescent variance) presents two distinct challenges for phylogenetic studies. Despite extensive investigation of mutational models for gene-tree estimation over the past two decades and recent attention to modeling of the coalescent process for phylogenetic estimation, the effects of these two variances have yet to be evaluated simultaneously. Here, we partition the effects of mutational and coalescent processes on phylogenetic accuracy by comparing the accuracy of species trees estimated from gene trees (i.e., the actual coalescent genealogies) with that of species trees estimated from estimated gene trees (i.e., trees estimated from nucleotide sequences, which contain both coalescent and mutational variance). Not only do both mutational and coalescent variance contribute significantly to errors in species-tree estimates, but the relative magnitude of their effects on the accuracy of species-tree estimation also differs systematically depending on 1) the timing of divergence, 2) the sampling design, and 3) the method used for species-tree estimation. These findings explain why using more of the information contained in gene trees (e.g., topology and branch lengths as opposed to topology alone) does not necessarily translate into pronounced gains in accuracy, highlighting the strengths and limits of different methods for species-tree estimation. Differences in accuracy scores between methods for different sampling regimes also emphasize that it would be a mistake to assume that more computationally intensive species-tree estimation procedures will always provide better estimates of species trees. To the contrary, the performance of a method depends not only on the method per se but also on the compatibility between the input genetic data and the method as determined

  2. X-ray induced DNA double-strand breakage and rejoining in a radiosensitive human renal carcinoma cell line estimated by CHEF electrophoresis

    Energy Technology Data Exchange (ETDEWEB)

    Wei, K. (Univ. Clinic for Radiotherapy and Radiobiology, Vienna Univ. (Austria) Inst. of Radiation Medicine, Beijing, BJ (China)); Wandl, E. (Univ. Clinic for Radiotherapy and Radiobiology, Vienna Univ. (Austria)); Kaercher, K.H. (Univ. Clinic for Radiotherapy and Radiobiology, Vienna Univ. (Austria))

    1993-12-01

    Cell-intrinsic radiosensitivity is of great importance in radiation therapy, but its molecular basis is still uncertain. Since DNA double-strand breakage is considered the most important lesion related to cell death induced by ionizing radiation, the relationship between DNA double-strand breakage, repair, and cell survival was investigated in three cell lines: Chinese hamster cells (CHO-K1), human fibroblasts, and a human renal carcinoma (Tu 25). The D₀ values after X-irradiation were 1.73, 1.23, and 0.89 Gy, respectively, showing that Tu 25 was the most sensitive of the three. DNA double-strand breaks were measured by CHEF electrophoresis; the initial yield of double-strand breaks per dose was almost the same in the three cell lines, and no correlation with cell survival was found. However, the rejoining capacity for DNA double-strand breaks differed. After a dose of 20 Gy, the repair rate was markedly lower in Tu 25, with a half repair time of 40 min, compared with half repair times of 15 min in the other two cell lines. The results strongly support a correlation between the repair capacity for DNA double-strand breaks and cell survival. It was concluded that DNA repair capacity is one of the determinants of cell radiosensitivity. Estimation of DNA double-strand break rejoining by CHEF was suggested as a predictive assay for the radiosensitivity of human tumor cells. (orig.)

  3. Does the GPM mission improve the systematic error component in satellite rainfall estimates over TRMM? An evaluation at a pan-India scale

    Science.gov (United States)

    Beria, Harsh; Nanda, Trushnamayee; Singh Bisht, Deepak; Chatterjee, Chandranath

    2017-12-01

    The last couple of decades have seen the emergence of a number of satellite-based precipitation products, with the Tropical Rainfall Measuring Mission (TRMM) the most widely used for hydrologic applications. The transition of TRMM into the Global Precipitation Measurement (GPM) mission promises enhanced spatio-temporal resolution along with upgrades to sensors and rainfall estimation techniques. The dependence of systematic error components in rainfall estimates of the Integrated Multi-satellitE Retrievals for GPM (IMERG), and their variation with climatology and topography, was evaluated over 86 basins in India for the year 2014 and compared with the corresponding (2014) and retrospective (1998-2013) TRMM estimates. IMERG outperformed TRMM for all rainfall intensities across a majority of Indian basins, with significant improvement in low rainfall estimates showing smaller negative biases in 75 out of 86 basins. Low rainfall estimates in TRMM showed a systematic dependence on basin climatology, with significant overprediction in semi-arid basins, which gradually improved in the higher rainfall basins. Medium and high rainfall estimates of TRMM exhibited a strong dependence on basin topography, with declining skill in higher elevation basins. The systematic dependence of error components on basin climatology and topography was reduced in IMERG, especially in terms of topography. Rainfall-runoff modeling using the Variable Infiltration Capacity (VIC) model over two flood-prone basins (Mahanadi and Wainganga) revealed that improvement in rainfall estimates in IMERG did not translate into improvement in runoff simulations. More studies are required over basins in different hydroclimatic zones to evaluate the hydrologic significance of IMERG.

  4. Does the GPM mission improve the systematic error component in satellite rainfall estimates over TRMM? An evaluation at a pan-India scale

    Directory of Open Access Journals (Sweden)

    H. Beria

    2017-12-01

    Full Text Available The last couple of decades have seen the emergence of a number of satellite-based precipitation products, with the Tropical Rainfall Measuring Mission (TRMM) the most widely used for hydrologic applications. The transition of TRMM into the Global Precipitation Measurement (GPM) mission promises enhanced spatio-temporal resolution along with upgrades to sensors and rainfall estimation techniques. The dependence of systematic error components in rainfall estimates of the Integrated Multi-satellitE Retrievals for GPM (IMERG), and their variation with climatology and topography, was evaluated over 86 basins in India for the year 2014 and compared with the corresponding (2014) and retrospective (1998–2013) TRMM estimates. IMERG outperformed TRMM for all rainfall intensities across a majority of Indian basins, with significant improvement in low rainfall estimates showing smaller negative biases in 75 out of 86 basins. Low rainfall estimates in TRMM showed a systematic dependence on basin climatology, with significant overprediction in semi-arid basins, which gradually improved in the higher rainfall basins. Medium and high rainfall estimates of TRMM exhibited a strong dependence on basin topography, with declining skill in higher elevation basins. The systematic dependence of error components on basin climatology and topography was reduced in IMERG, especially in terms of topography. Rainfall-runoff modeling using the Variable Infiltration Capacity (VIC) model over two flood-prone basins (Mahanadi and Wainganga) revealed that improvement in rainfall estimates in IMERG did not translate into improvement in runoff simulations. More studies are required over basins in different hydroclimatic zones to evaluate the hydrologic significance of IMERG.

  5. Retrieval of ice cloud properties using an optimal estimation algorithm and MODIS infrared observations. Part I: Forward model, error analysis, and information content

    Science.gov (United States)

    Wang, Chenxi; Platnick, Steven; Zhang, Zhibo; Meyer, Kerry; Yang, Ping

    2018-01-01

    An optimal estimation (OE) retrieval method is developed to infer three ice cloud properties simultaneously: optical thickness (τ), effective radius (reff), and cloud-top height (h). This method is based on a fast radiative transfer (RT) model and infrared (IR) observations from the MODerate resolution Imaging Spectroradiometer (MODIS). This study conducts thorough error and information content analyses to understand the error propagation and performance of retrievals from various MODIS band combinations under different cloud/atmosphere states. Specifically, the algorithm takes into account four error sources: measurement uncertainty, fast RT model uncertainty, uncertainties in ancillary datasets (e.g., atmospheric state), and assumed ice crystal habit uncertainties. It is found that the ancillary and ice crystal habit error sources dominate the MODIS IR retrieval uncertainty and cannot be ignored. The information content analysis shows that, for a given ice cloud, the use of four MODIS IR observations is sufficient to retrieve the three cloud properties. However, the selection of MODIS IR bands that provide the most information and their order of importance varies with both the ice cloud properties and the ambient atmospheric and the surface states. As a result, this study suggests the inclusion of all MODIS IR bands in practice since little a priori information is available. PMID:29707470

  6. Retrieval of Ice Cloud Properties Using an Optimal Estimation Algorithm and MODIS Infrared Observations. Part I: Forward Model, Error Analysis, and Information Content

    Science.gov (United States)

    Wang, Chenxi; Platnick, Steven; Zhang, Zhibo; Meyer, Kerry; Yang, Ping

    2016-01-01

    An optimal estimation (OE) retrieval method is developed to infer three ice cloud properties simultaneously: optical thickness (τ), effective radius (r_eff), and cloud-top height (h). This method is based on a fast radiative transfer (RT) model and infrared (IR) observations from the MODerate resolution Imaging Spectroradiometer (MODIS). This study conducts thorough error and information content analyses to understand the error propagation and performance of retrievals from various MODIS band combinations under different cloud/atmosphere states. Specifically, the algorithm takes into account four error sources: measurement uncertainty, fast RT model uncertainty, uncertainties in ancillary data sets (e.g., atmospheric state), and assumed ice crystal habit uncertainties. It is found that the ancillary and ice crystal habit error sources dominate the MODIS IR retrieval uncertainty and cannot be ignored. The information content analysis shows that for a given ice cloud, the use of four MODIS IR observations is sufficient to retrieve the three cloud properties. However, the selection of MODIS IR bands that provide the most information and their order of importance varies with both the ice cloud properties and the ambient atmospheric and the surface states. As a result, this study suggests the inclusion of all MODIS IR bands in practice since little a priori information is available.

  7. Estimating the Probability of Human Error by Incorporating Component Failure Data from User-Induced Defects in the Development of Complex Electrical Systems.

    Science.gov (United States)

    Majewicz, Peter J; Blessner, Paul; Olson, Bill; Blackburn, Timothy

    2017-04-05

    This article proposes a methodology for incorporating electrical component failure data into the human error assessment and reduction technique (HEART) for estimating human error probabilities (HEPs). The existing HEART method contains factors known as error-producing conditions (EPCs) that adjust a generic HEP to a more specific situation being assessed. The selection and proportioning of these EPCs are at the discretion of an assessor, and are therefore subject to the assessor's experience and potential bias. This dependence on expert opinion is prevalent in similar HEP assessment techniques used in numerous industrial areas. The proposed method incorporates factors based on observed trends in electrical component failures to produce a revised HEP that can trigger risk mitigation actions more effectively based on the presence of component categories or other hazardous conditions that have a history of failure due to human error. The data used for the additional factors are a result of an analysis of failures of electronic components experienced during system integration and testing at NASA Goddard Space Flight Center. The analysis includes the determination of root failure mechanisms and trend analysis. The major causes of these defects were attributed to electrostatic damage, electrical overstress, mechanical overstress, or thermal overstress. These factors representing user-induced defects are quantified and incorporated into specific hardware factors based on the system's electrical parts list. This proposed methodology is demonstrated with an example comparing the original HEART method and the proposed modified technique. © 2017 Society for Risk Analysis.
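
    The underlying HEART adjustment is a product of weighted error-producing conditions applied to a nominal task HEP; a sketch is below, with one extra multiplicative hardware-history factor of the kind the article proposes. All numbers are illustrative, not values from the article.

```python
def heart_hep(nominal_hep, epcs):
    """Assessed HEP = nominal HEP * product over EPCs of
    ((EPC multiplier - 1) * assessed proportion of affect + 1)."""
    hep = nominal_hep
    for multiplier, proportion in epcs:
        hep *= (multiplier - 1.0) * proportion + 1.0
    return min(hep, 1.0)           # probabilities are capped at 1

# Generic task HEP with two conventional EPCs (numbers illustrative)
base = heart_hep(0.003, [(17, 0.4), (11, 0.2)])
# Hypothetical extra factor reflecting observed ESD/overstress failure trends
# for the component categories on the system's electrical parts list:
hardware_factor = 1.8
adjusted = min(base * hardware_factor, 1.0)
print("HEP: %.4f -> %.4f with hardware-history factor" % (base, adjusted))
```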

  8. Evaluation of errors in prior mean and variance in the estimation of integrated circuit failure rates using Bayesian methods

    Science.gov (United States)

    Fletcher, B. C.

    1972-01-01

    The critical point of any Bayesian analysis concerns the choice and quantification of the prior information. The effects of prior data on a Bayesian analysis are studied. Comparisons of the maximum likelihood estimator, the Bayesian estimator, and the known failure rate are presented. The results of the many simulated trials are then analyzed to show the region of criticality for prior information supplied to the Bayesian estimator. In particular, the effects of prior mean and variance are determined as a function of the amount of test data available.

  9. Effects of cane length and diameter and judgment type on the constant error ratio for estimated height in blindfolded, visually impaired, and sighted participants.

    Science.gov (United States)

    Huang, Kuo-Chen; Leung, Cherng-Yee; Wang, Hsiu-Feng

    2010-04-01

    The purpose of this study was to assess the ability of blindfolded, visually impaired, and sighted individuals to estimate object height as a function of cane length, cane diameter, and judgment type. 48 undergraduate students (ages 20 to 23 years) were recruited to participate in the study. Participants were divided into low-vision, severely myopic, and normal-vision groups. Five stimulus heights were explored with three cane lengths, varying cane diameters, and different judgment types. The participants were asked to estimate the stimulus height with or without reference to a standard block. Results showed that the constant error ratio for estimated height improved with decreasing cane length and with comparative judgment. The findings regarding the effect of cane length on haptic perception of height remained unclear. Implications were discussed for designing environments for visually impaired individuals, such as stair heights, chairs, and the size of apertures.

  10. The Derivation of the Stability Bound of the Feedback ANC System That Has an Error in the Estimated Secondary Path Model

    Directory of Open Access Journals (Sweden)

    Seong-Pil Moon

    2018-01-01

    Full Text Available This paper investigates the stability problem of the feedback active noise control (ANC) system, which can be caused by modeling error in the estimated electro-acoustic (secondary) path of its feedback mechanism. A stability analysis method is proposed to obtain the stability bound as a closed-form equation in terms of the delay error length of the secondary path, the ANC filter length, and the primary noise frequency. In the proposed method, the system's open-loop magnitude and phase response equations are separately exploited and approximated within the Nyquist stability criterion. The stability bound of the proposed method is verified by comparison against both the original Nyquist stability condition and simulation results.

  11. Estimating climate model systematic errors in a climate change impact study of the Okavango River basin, southwestern Africa using a mesoscale model

    Science.gov (United States)

    Raghavan, S. V.; Todd, M.

    2007-12-01

    Simulating the impact of future climate variability and change on hydrological systems requires estimates of climate at a high spatial resolution compatible with hydrological models. Here we present initial results of a project to simulate future climate over the Okavango River basin and delta in southwestern Africa. Given the significance of the delta for biodiversity and as a resource for the local population, there is considerable concern regarding the sensitivity of the system to future climate change. An important component of climate variability/change impact studies is an assessment of errors in the modeling suite. Here, we attempt to quantify the errors and uncertainties involved in regional climate modelling that will impact hydrological simulations. The study determines the ability of the MM5 regional climate model to simulate the present-day regional climate at the high resolution required by the hydrological models, and the effectiveness of the RCM in downscaling GCM outputs for studying regional climate change and impacts.

  12. Markov chain beam randomization: a study of the impact of PLANCK beam measurement errors on cosmological parameter estimation

    Science.gov (United States)

    Rocha, G.; Pagano, L.; Górski, K. M.; Huffenberger, K. M.; Lawrence, C. R.; Lange, A. E.

    2010-04-01

    We introduce a new method to propagate uncertainties in the beam shapes used to measure the cosmic microwave background to the cosmological parameters determined from those measurements. The method, called Markov chain beam randomization (MCBR), randomly samples from a set of templates or functions that describe the beam uncertainties. The method is much faster than direct numerical integration over systematic “nuisance” parameters, and is not restricted to simple, idealized cases as is analytic marginalization. It does not assume the data are normally distributed, and does not require Gaussian priors on the specific systematic uncertainties. We show that MCBR properly accounts for and provides the marginalized errors of the parameters. The method can be generalized and used to propagate any systematic uncertainties for which a set of templates is available. We apply the method to the Planck satellite, and consider future experiments. Beam measurement errors should have a small effect on cosmological parameters as long as the beam fitting is performed after removal of 1/f noise.

  13. Algorithm for Correcting the Keratometric Error in the Estimation of the Corneal Power in Keratoconus Eyes after Accelerated Corneal Collagen Crosslinking

    Directory of Open Access Journals (Sweden)

    David P. Piñero

    2017-01-01

    Full Text Available Purpose. To analyze the errors associated with corneal power calculation using the keratometric approach in keratoconus eyes after accelerated corneal collagen crosslinking (CXL) surgery, and to obtain a model for the estimation of an adjusted corneal refractive index (nkadj) minimizing such errors. Methods. Potential differences (ΔPc) among keratometric (Pk) and Gaussian corneal power (PcGauss) were simulated. Three algorithms based on the use of nkadj for the estimation of an adjusted keratometric corneal power (Pkadj) were developed. The agreement between Pk1.3375 (keratometric power using the keratometric index of 1.3375), PcGauss, and Pkadj was evaluated. The validity of the algorithm developed was investigated in 21 keratoconus eyes undergoing accelerated CXL. Results. Pk1.3375 overestimated corneal power by between 0.3 and 3.2 D in theoretical simulations and between 0.8 and 2.9 D in the clinical study (ΔPc). Three linear equations were defined for nkadj to be used for different ranges of r1c. In the clinical study, differences between Pkadj and PcGauss did not exceed ±0.8 D. No statistically significant differences were found between Pkadj and PcGauss (p > 0.05), whereas differences between Pk1.3375 and Pkadj were statistically significant (p < 0.001). Conclusions. The use of the keratometric approach in keratoconus eyes after accelerated CXL can lead to significant clinical errors. These errors can be minimized with an adjusted keratometric approach.
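
    The quantities involved reduce to simple paraxial optics, sketched below: the keratometric power is (nk − 1)/r1c, and the adjustment replaces the fixed nk = 1.3375 with an nkadj that varies with r1c. The linear coefficients used here are hypothetical placeholders, not the paper's fitted equations.

```python
def keratometric_power(r1c_mm, nk=1.3375):
    """Classical keratometric corneal power (D) from anterior radius r1c (mm)."""
    return (nk - 1.0) * 1000.0 / r1c_mm

def adjusted_power(r1c_mm, a=-0.0064, b=1.3824):
    """Same formula with a hypothetical linear nkadj(r1c) = a*r1c + b."""
    return (a * r1c_mm + b - 1.0) * 1000.0 / r1c_mm

r1c = 7.2   # anterior corneal radius in mm (illustrative post-CXL value)
print("Pk(1.3375) = %.2f D, Pkadj = %.2f D"
      % (keratometric_power(r1c), adjusted_power(r1c)))
```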

  14. Towards regional, error-bounded landscape carbon storage estimates for data-deficient areas of the world.

    Science.gov (United States)

    Willcock, Simon; Phillips, Oliver L; Platts, Philip J; Balmford, Andrew; Burgess, Neil D; Lovett, Jon C; Ahrends, Antje; Bayliss, Julian; Doggart, Nike; Doody, Kathryn; Fanning, Eibleis; Green, Jonathan; Hall, Jaclyn; Howell, Kim L; Marchant, Rob; Marshall, Andrew R; Mbilinyi, Boniface; Munishi, Pantaleon K T; Owen, Nisha; Swetnam, Ruth D; Topp-Jorgensen, Elmer J; Lewis, Simon L

    2012-01-01

    Monitoring landscape carbon storage is critical for supporting and validating climate change mitigation policies. These may be aimed at reducing deforestation and degradation, or at increasing terrestrial carbon storage at local, regional, and global levels. However, because of data deficiencies, default global carbon storage values for given land cover types such as 'lowland tropical forest' are often used, termed 'Tier 1 type' analyses by the Intergovernmental Panel on Climate Change (IPCC). Such estimates may be erroneous when used at regional scales. Furthermore, uncertainty assessments are rarely provided, leading to estimates of land cover change carbon fluxes of unknown precision, which may undermine efforts to properly evaluate land cover policies aimed at altering land cover dynamics. Here, we present a repeatable method to estimate carbon storage values and associated 95% confidence intervals (CI) for all five IPCC carbon pools (aboveground live carbon, litter, coarse woody debris, belowground live carbon, and soil carbon) for data-deficient regions, using a combination of existing inventory data and systematic literature searches, weighted to ensure the final values are regionally specific. The method meets the IPCC 'Tier 2' reporting standard. We use this method to estimate carbon storage over an area of 33.9 million hectares of eastern Tanzania, reporting values for 30 land cover types. We estimate that this area stored 6.33 (5.92-6.74) Pg C in the year 2000. Carbon storage estimates for the same study area extracted from five published Africa-wide or global studies show a mean carbon storage value of ∼50% of that reported using our regional values, with four of the five studies reporting lower carbon storage values. This suggests that carbon storage may have been underestimated for this region of Africa. Our study demonstrates the importance of obtaining regionally appropriate carbon storage estimates, and shows how such values can be produced for a relatively

  15. Equipment errors: A prevalent cause for fallacy in blood pressure recording - A point prevalence estimate from an Indian health university

    Directory of Open Access Journals (Sweden)

    Badrinarayan Mishra

    2013-01-01

    Full Text Available Background: Blood pressure (BP) recording is the most commonly measured clinical parameter, and the standing mercury sphygmomanometer is the most widely used equipment for recording it. However, recording by sphygmomanometer is subject to observer and instrumental error. The different sources of equipment error are faulty manometer tube calibration, baseline deviations, and improper arm bladder cuff dimensions. This is further compounded by a high prevalence of arm bladder mis-cuffing in the target population. Objectives: The study was designed to assess the presence of equipment miscalibration and cuff mismatching and their effect on BP recording. Materials and Methods: A cross-sectional check of all operational sphygmomanometers in a health university was carried out, covering the length of the manometer tube, the deviation of the resting mercury column from the "0" level, the width and length of the arm bladder cuff, and the extent of bladder cuff mismatch with respect to the outpatient population. Results: Of the 50 apparatus selected, 39 (78%) were from hospital setups and 11 (22%) from pre-clinical departments. A manometer height deficit of 13 mm was recorded in 36 (92.3%) of the hospital instruments and in all 11 (100%) from the pre-clinical departments. Instruments from both settings showed significant deviations from the recommended dimensions in cuff bladder length, width, and length-to-width ratio (P < 0.001). A significant number of apparatus from hospital setups showed mercury manometer baseline deviation either below or above 0 mmHg at the resting state (χ2 = 5.61, d.f. = 1, P = 0.02). A positive correlation was observed between manometer height deficit, baseline deviation, and width of the arm cuff bladder (Pearson correlation, P < 0.05). Bladder cuff mismatching with respect to the target population was found in 48.52% of males and 36.76% of females. The cumulative effect of these factors can lead to an error in the range of 10-12 mmHg. Conclusion: Faulty

  16. Combining empirical approaches and error modelling to enhance predictive uncertainty estimation in extrapolation for operational flood forecasting. Tests on flood events on the Loire basin, France.

    Science.gov (United States)

    Berthet, Lionel; Marty, Renaud; Bourgin, François; Viatgé, Julie; Piotte, Olivier; Perrin, Charles

    2017-04-01

    An increasing number of operational flood forecasting centres assess the predictive uncertainty associated with their forecasts and communicate it to end users. This information can match the end users' needs (i.e., prove useful for efficient crisis management) only if it is reliable: reliability is therefore a key quality of operational flood forecasts. In 2015, the French national and regional flood forecasting services (Vigicrues network; www.vigicrues.gouv.fr) implemented a framework to compute quantitative discharge and water level forecasts and to assess the predictive uncertainty. Among the possible technical options to achieve this goal, a statistical analysis of past forecasting errors of deterministic models was selected (QUOIQUE method, Bourgin, 2014). It is a data-based and non-parametric approach relying on as few assumptions as possible about the mathematical structure of the forecasting error. In particular, a very simple assumption is made regarding the predictive uncertainty distributions for large events outside the range of the calibration data: the multiplicative error distribution is assumed to be constant, whatever the magnitude of the flood. Indeed, the predictive distributions may not be reliable in extrapolation. However, estimating the predictive uncertainty for these rare events is crucial when major floods are of concern. In order to improve forecast reliability for major floods, we attempt to combine the operational strength of the empirical statistical analysis with simple error modelling. Since the heteroscedasticity of forecast errors can considerably weaken predictive reliability for large floods, this error modelling is based on the log-sinh transformation, which has been shown to significantly reduce the heteroscedasticity of the transformed error in a simulation context, even for flood peaks (Wang et al., 2012). Exploratory tests on some operational forecasts issued during the recent floods experienced in
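
    For reference, the log-sinh transformation of Wang et al. (2012) mentioned above: errors are modelled on z = log(sinh(a + b·y))/b, so a constant error spread in transformed space maps back to a flow-dependent band in discharge space. The parameter values below are illustrative, not calibrated.

```python
import numpy as np

def log_sinh(y, a=0.01, b=0.001):
    """Transform of Wang et al. (2012): z = log(sinh(a + b*y)) / b."""
    return np.log(np.sinh(a + b * y)) / b

def inv_log_sinh(z, a=0.01, b=0.001):
    return (np.arcsinh(np.exp(b * z)) - a) / b

q = np.array([50.0, 500.0, 5000.0])       # discharges spanning three magnitudes
z = log_sinh(q)
assert np.allclose(inv_log_sinh(z), q)    # exact round trip

# A constant +/-100 spread in z-space maps back to a band that widens with
# flow, which is how the transform absorbs heteroscedastic errors:
for qi, zi in zip(q, z):
    lo, hi = inv_log_sinh(zi - 100.0), inv_log_sinh(zi + 100.0)
    print("q = %6.0f  band = [%7.1f, %7.1f]" % (qi, lo, hi))
```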

  17. Space-borne remote sensing of CO2 by IPDA lidar with heterodyne detection: random error estimation

    Science.gov (United States)

    Matvienko, G. G.; Sukhanov, A. Y.

    2015-11-01

    Possibilities for measuring the CO2 column concentration with spaceborne integrated path differential absorption (IPDA) lidar signals in the near-IR absorption bands are investigated. It is shown that coherent detection principles applied in the near-infrared spectral region promise a high sensitivity for the measurement of the integrated dry-air column mixing ratio of CO2. The simulations indicate that for CO2 the target observational requirement (0.2%) for the relative random error can be met with a telescope aperture of 0.5 m, a detector bandwidth of 10 MHz, a laser energy of 0.3 mJ per pulse, and averaging over 7500 pulses. It should also be noted that the heterodyne technique allows a significant reduction in laser power and receiver dimensions compared with direct detection.
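
    The averaging arithmetic behind the stated requirement: if a single pulse yields relative random error sigma_1, averaging N independent pulses scales it by 1/sqrt(N). The single-shot value below is an assumption chosen only to show the scaling.

```python
import math

sigma_single = 0.173   # assumed single-pulse relative random error (17.3%)
target = 0.002         # 0.2% requirement on the CO2 column mixing ratio
n_pulses = math.ceil((sigma_single / target) ** 2)  # sigma_N = sigma_1/sqrt(N)
print(n_pulses)        # ~7.5e3, consistent with averaging about 7500 pulses
```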

  18. Forensic comparison and matching of fingerprints: using quantitative image measures for estimating error rates through understanding and predicting difficulty.

    Directory of Open Access Journals (Sweden)

    Philip J Kellman

    Full Text Available Latent fingerprint examination is a complex task that, despite advances in image processing, still fundamentally depends on the visual judgments of highly trained human examiners. Fingerprints collected from crime scenes typically contain less information than fingerprints collected under controlled conditions. Specifically, they are often noisy and distorted and may contain only a portion of the total fingerprint area. Expertise in fingerprint comparison, like other forms of perceptual expertise, such as face recognition or aircraft identification, depends on perceptual learning processes that lead to the discovery of features and relations that matter in comparing prints. Relatively little is known about the perceptual processes involved in making comparisons, and even less is known about what characteristics of fingerprint pairs make particular comparisons easy or difficult. We measured expert examiner performance and judgments of difficulty and confidence on a new fingerprint database. We developed a number of quantitative measures of image characteristics and used multiple regression techniques to discover objective predictors of error as well as perceived difficulty and confidence. A number of useful predictors emerged, and these included variables related to image quality metrics, such as intensity and contrast information, as well as measures of information quantity, such as the total fingerprint area. Also included were configural features that fingerprint experts have noted, such as the presence and clarity of global features and fingerprint ridges. Within the constraints of the overall low error rates of experts, a regression model incorporating the derived predictors demonstrated reasonable success in predicting objective difficulty for print pairs, as shown both in goodness of fit measures to the original data set and in a cross validation test. The results indicate the plausibility of using objective image metrics to predict expert

  19. The effects of rectification and Global Positioning System errors on satellite image-based estimates of forest area

    Science.gov (United States)

    Ronald E. McRoberts

    2010-01-01

    Satellite image-based maps of forest attributes are of considerable interest and are used for multiple purposes such as international reporting by countries that have no national forest inventory and small area estimation for all countries. Construction of the maps typically entails, in part, rectifying the satellite images to a geographic coordinate system, observing...

  20. Error Analysis of Clay-Rock Water Content Estimation with Broadband High-Frequency Electromagnetic Sensors--Air Gap Effect.

    Science.gov (United States)

    Bore, Thierry; Wagner, Norman; Lesoille, Sylvie Delepine; Taillade, Frederic; Six, Gonzague; Daout, Franck; Placko, Dominique

    2016-04-18

    Broadband electromagnetic frequency or time domain sensor techniques present high potential for quantitative water content monitoring in porous media. Prior to in situ application, an understanding of how the relationship between the broadband electromagnetic properties of the porous material (clay-rock) and the water content affects the frequency or time domain sensor response is required. For this purpose, dielectric properties of intact clay-rock samples, experimentally determined in the frequency range from 1 MHz to 10 GHz, were used as input data in 3-D numerical frequency domain finite element field calculations to model the one-port broadband frequency or time domain transfer function for a three-rod sensor embedded in the clay-rock. The sensor response, in terms of the reflection factor, was analyzed in the time domain with classical travel time analysis in combination with an empirical model following the Topp equation, as well as the theoretical Lichtenecker and Rother model (LRM), to estimate the volumetric water content. The mixture equation, considering the appropriate porosity of the investigated material, provides a practical and efficient approach for water content estimation based on classical travel time analysis with the onset method. The inflection method is not recommended for water content estimation in electrically dispersive and absorptive material. Moreover, the results clearly indicate that effects due to coupling of the sensor to the material cannot be neglected. Coupling problems caused by an air gap lead to dramatic effects on water content estimation, even for submillimeter gaps. Thus, the quantitative determination of the in situ water content requires careful sensor installation in order to achieve perfect probe-to-clay-rock coupling.
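
    The empirical chain referred to above (onset-method travel time, then apparent permittivity, then the Topp equation) in a few lines. The Topp coefficients are the published ones (Topp et al., 1980); the probe length and travel time are illustrative.

```python
C0 = 2.998e8   # speed of light in vacuum [m/s]

def apparent_permittivity(t_travel, probe_len):
    """Ka from the two-way travel time along a probe of length probe_len [m]."""
    return (C0 * t_travel / (2.0 * probe_len)) ** 2

def topp_water_content(ka):
    """Volumetric water content from Ka (Topp et al., 1980)."""
    return -5.3e-2 + 2.92e-2 * ka - 5.5e-4 * ka**2 + 4.3e-6 * ka**3

t_travel = 3.0e-9                 # assumed onset-method two-way travel time [s]
ka = apparent_permittivity(t_travel, probe_len=0.15)
print("Ka = %.1f, theta = %.3f m3/m3" % (ka, topp_water_content(ka)))
```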

  1. Error Analysis of Clay-Rock Water Content Estimation with Broadband High-Frequency Electromagnetic Sensors—Air Gap Effect

    Directory of Open Access Journals (Sweden)

    Thierry Bore

    2016-04-01

    Full Text Available Broadband electromagnetic frequency or time domain sensor techniques present high potential for quantitative water content monitoring in porous media. Prior to in situ application, an understanding of how the relationship between the broadband electromagnetic properties of the porous material (clay-rock) and the water content affects the frequency or time domain sensor response is required. For this purpose, dielectric properties of intact clay-rock samples, experimentally determined in the frequency range from 1 MHz to 10 GHz, were used as input data in 3-D numerical frequency domain finite element field calculations to model the one-port broadband frequency or time domain transfer function for a three-rod sensor embedded in the clay-rock. The sensor response, in terms of the reflection factor, was analyzed in the time domain with classical travel time analysis in combination with an empirical model following the Topp equation, as well as the theoretical Lichtenecker and Rother model (LRM), to estimate the volumetric water content. The mixture equation, considering the appropriate porosity of the investigated material, provides a practical and efficient approach for water content estimation based on classical travel time analysis with the onset method. The inflection method is not recommended for water content estimation in electrically dispersive and absorptive material. Moreover, the results clearly indicate that effects due to coupling of the sensor to the material cannot be neglected. Coupling problems caused by an air gap lead to dramatic effects on water content estimation, even for submillimeter gaps. Thus, the quantitative determination of the in situ water content requires careful sensor installation in order to achieve perfect probe-to-clay-rock coupling.

  2. GOCI Yonsei aerosol retrieval version 2 products: an improved algorithm and error analysis with uncertainty estimation from 5-year validation over East Asia

    Directory of Open Access Journals (Sweden)

    M. Choi

    2018-01-01

    Full Text Available The Geostationary Ocean Color Imager (GOCI) Yonsei aerosol retrieval (YAER) version 1 algorithm was developed to retrieve hourly aerosol optical depth at 550 nm (AOD) and other subsidiary aerosol optical properties over East Asia. The GOCI YAER AOD had accuracy comparable to ground-based and other satellite-based observations but still had errors because of uncertainties in surface reflectance and simple cloud masking. In addition, near-real-time (NRT) processing was not possible because a monthly database for each year encompassing the day of retrieval was required for the determination of surface reflectance. This study describes the improved GOCI YAER algorithm version 2 (V2) for NRT processing with improved accuracy based on updates to the cloud-masking and surface-reflectance calculations using a multi-year Rayleigh-corrected reflectance and wind speed database, and inversion channels for surface conditions. The improved GOCI AOD τG is closer to that of the Moderate Resolution Imaging Spectroradiometer (MODIS) and Visible Infrared Imaging Radiometer Suite (VIIRS) AOD than was the case for AOD from the YAER V1 algorithm. The V2 τG has a lower median bias and higher ratio within the MODIS expected error range (0.60 for land and 0.71 for ocean) compared with V1 (0.49 for land and 0.62 for ocean) in a validation test against Aerosol Robotic Network (AERONET) AOD τA from 2011 to 2016. A validation using the Sun-Sky Radiometer Observation Network (SONET) over China shows similar results. The bias of error (τG − τA) is within −0.1 and 0.1, and it is a function of AERONET AOD and Ångström exponent (AE), scattering angle, normalized difference vegetation index (NDVI), cloud fraction and homogeneity of retrieved AOD, and observation time, month, and year. In addition, the diagnostic and prognostic expected error (PEE) of τG are estimated. The estimated PEE of GOCI V2 AOD is well correlated with the actual error over East Asia, and the

  3. GOCI Yonsei aerosol retrieval version 2 aerosol products: improved algorithm description and error analysis with uncertainty estimation from 5-year validation over East Asia

    Science.gov (United States)

    Choi, M.; Kim, J.; Lee, J.; KIM, M.; Park, Y. J.; Holben, B. N.; Eck, T. F.; Li, Z.; Song, C. H.

    2017-12-01

    The Geostationary Ocean Color Imager (GOCI) Yonsei aerosol retrieval (YAER) version 1 algorithm was developed for retrieving hourly aerosol optical depth at 550 nm (AOD) and other subsidiary aerosol optical properties over East Asia. The GOCI YAER AOD showed accuracy comparable to ground-based and other satellite-based observations, but still had errors due to uncertainties in surface reflectance and simple cloud masking. Also, it was not capable of near-real-time (NRT) processing because it required a monthly database of each year encompassing the day of retrieval for the determination of surface reflectance. This study describes the improvement of the GOCI YAER algorithm to version 2 (V2) for NRT processing with improved accuracy, based on modification of the cloud masking, surface reflectance determination using a multi-year Rayleigh-corrected reflectance and wind speed database, and inversion channels per surface conditions. The improved GOCI AOD (τG) is therefore closer to the AOD of the Moderate Resolution Imaging Spectroradiometer (MODIS) and Visible Infrared Imaging Radiometer Suite (VIIRS) than that of V1 of the YAER algorithm. The τG shows reduced median bias and an increased ratio within range (i.e., the absolute expected error range of MODIS AOD) compared with V1 in the validation results using Aerosol Robotic Network (AERONET) AOD (τA) from 2011 to 2016. The validation using the Sun-Sky Radiometer Observation Network (SONET) over China also shows similar results. The bias of error (τG − τA) is within the −0.1 to 0.1 range as a function of AERONET AOD and AE, scattering angle, NDVI, cloud fraction and homogeneity of retrieved AOD, observation time, month, and year. Also, the diagnostic and prognostic expected error (DEE and PEE, respectively) of τG are estimated. The estimated multiple PEE of GOCI V2 AOD is well matched with the actual error over East Asia, and the GOCI V2 AOD over Korea shows a higher ratio within PEE compared with that over China and Japan. Hourly AOD products based on the

  4. GOCI Yonsei aerosol retrieval version 2 products: an improved algorithm and error analysis with uncertainty estimation from 5-year validation over East Asia

    Science.gov (United States)

    Choi, Myungje; Kim, Jhoon; Lee, Jaehwa; Kim, Mijin; Park, Young-Je; Holben, Brent; Eck, Thomas F.; Li, Zhengqiang; Song, Chul H.

    2018-01-01

    The Geostationary Ocean Color Imager (GOCI) Yonsei aerosol retrieval (YAER) version 1 algorithm was developed to retrieve hourly aerosol optical depth at 550 nm (AOD) and other subsidiary aerosol optical properties over East Asia. The GOCI YAER AOD had accuracy comparable to ground-based and other satellite-based observations but still had errors because of uncertainties in surface reflectance and simple cloud masking. In addition, near-real-time (NRT) processing was not possible because a monthly database for each year encompassing the day of retrieval was required for the determination of surface reflectance. This study describes the improved GOCI YAER algorithm version 2 (V2) for NRT processing with improved accuracy based on updates to the cloud-masking and surface-reflectance calculations using a multi-year Rayleigh-corrected reflectance and wind speed database, and inversion channels for surface conditions. The improved GOCI AOD τG is closer to that of the Moderate Resolution Imaging Spectroradiometer (MODIS) and Visible Infrared Imaging Radiometer Suite (VIIRS) AOD than was the case for AOD from the YAER V1 algorithm. The V2 τG has a lower median bias and higher ratio within the MODIS expected error range (0.60 for land and 0.71 for ocean) compared with V1 (0.49 for land and 0.62 for ocean) in a validation test against Aerosol Robotic Network (AERONET) AOD τA from 2011 to 2016. A validation using the Sun-Sky Radiometer Observation Network (SONET) over China shows similar results. The bias of error (τG - τA) is within -0.1 and 0.1, and it is a function of AERONET AOD and Ångström exponent (AE), scattering angle, normalized difference vegetation index (NDVI), cloud fraction and homogeneity of retrieved AOD, and observation time, month, and year. In addition, the diagnostic and prognostic expected error (PEE) of τG are estimated. The estimated PEE of GOCI V2 AOD is well correlated with the actual error over East Asia, and the GOCI V2 AOD over South

  5. Inherent errors in pollutant build-up estimation in considering urban land use as a lumped parameter.

    Science.gov (United States)

    Liu, An; Goonetilleke, Ashantha; Egodawatta, Prasanna

    2012-01-01

    Stormwater quality modeling results are subject to uncertainty. The variability of input parameters is an important source of overall model error. An in-depth understanding of the variability associated with input parameters clarifies the uncertainty attached to these parameters and can assist in uncertainty analysis of stormwater quality models and in decision making based on modeling outcomes. This paper discusses the outcomes of a research study undertaken to analyze the variability related to pollutant build-up parameters in stormwater quality modeling. The study was based on the analysis of pollutant build-up samples collected from 12 road surfaces in residential, commercial, and industrial land uses. It was found that build-up characteristics vary appreciably even within the same land use. Therefore, using land use as a lumped parameter would introduce significant uncertainty into stormwater quality modeling. Additionally, it was found that the variability in pollutant build-up can be significant depending on the pollutant type. This underlines the importance of taking into account specific land use characteristics and targeted pollutant species when undertaking uncertainty analysis of stormwater quality models or interpreting the modeling outcomes. Copyright © by the American Society of Agronomy, Crop Science Society of America, and Soil Science Society of America, Inc.

  6. Standard error of measurement of 5 health utility indexes across the range of health for use in estimating reliability and responsiveness.

    Science.gov (United States)

    Palta, Mari; Chen, Han-Yang; Kaplan, Robert M; Feeny, David; Cherepanov, Dasha; Fryback, Dennis G

    2011-01-01

    Standard errors of measurement (SEMs) of health-related quality of life (HRQoL) indexes are not well characterized. SEM is needed to estimate responsiveness statistics, and is a component of reliability. To estimate the SEM of 5 HRQoL indexes. The National Health Measurement Study (NHMS) was a population-based survey. The Clinical Outcomes and Measurement of Health Study (COMHS) provided repeated measures. A total of 3844 randomly selected adults from the noninstitutionalized population aged 35 to 89 y in the contiguous United States and 265 cataract patients. The SF-36v2™, QWB-SA, EQ-5D, HUI2, and HUI3 were included. An item-response theory approach captured joint variation in indexes into a composite construct of health (theta). The authors estimated 1) the test-retest standard deviation (SEM-TR) from COMHS, 2) the structural standard deviation (SEM-S) around theta from NHMS, and 3) reliability coefficients. SEM-TR was 0.068 (SF-6D), 0.087 (QWB-SA), 0.093 (EQ-5D), 0.100 (HUI2), and 0.134 (HUI3), whereas SEM-S was 0.071, 0.094, 0.084, 0.074, and 0.117, respectively. These yield reliability coefficients 0.66 (COMHS) and 0.71 (NHMS) for SF-6D, 0.59 and 0.64 for QWB-SA, 0.61 and 0.70 for EQ-5D, 0.64 and 0.80 for HUI2, and 0.75 and 0.77 for HUI3, respectively. The SEM varied across levels of health, especially for HUI2, HUI3, and EQ-5D, and was influenced by ceiling effects. Limitations: repeated measures were 5 mo apart, and the estimated theta contained measurement error. The 2 types of SEM are similar and substantial for all the indexes, and vary across health.
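
    The SEM and reliability figures above are linked by the classical test theory identity reliability = 1 - SEM²/SD², where SD is the total standard deviation of the index in the population. A minimal sketch with illustrative values (the population SD below is an assumption, not the NHMS/COMHS scale variance):

    def reliability_from_sem(sem, total_sd):
        """Classical test theory: reliability = 1 - SEM^2 / SD_total^2."""
        return 1.0 - (sem / total_sd) ** 2

    # SEMs quoted in the abstract, with an assumed population SD of 0.20
    for name, sem in [("SF-6D", 0.068), ("QWB-SA", 0.087), ("HUI3", 0.134)]:
        print(name, round(reliability_from_sem(sem, total_sd=0.20), 2))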

  7. Characterization of mixing errors in a coupled physical biogeochemical model of the North Atlantic: implications for nonlinear estimation using Gaussian anamorphosis

    Directory of Open Access Journals (Sweden)

    D. Béal

    2010-02-01

    In biogeochemical models coupled to ocean circulation models, vertical mixing is an important physical process that governs the nutrient supply and the plankton residence in the euphotic layer. However, vertical mixing is often poorly represented in numerical simulations because of approximate parameterizations of sub-grid scale turbulence, wind forcing errors and other mis-represented processes such as restratification by mesoscale eddies. Sufficient knowledge of the nature and structure of these errors is necessary to implement appropriate data assimilation methods and to evaluate whether they can be controlled by a given observation system.

    In this paper, Monte Carlo simulations are conducted to study mixing errors induced by approximate wind forcings in a three-dimensional coupled physical-biogeochemical model of the North Atlantic with a 1/4° horizontal resolution. An ensemble forecast involving 200 members is performed during the 1998 spring bloom, by prescribing perturbations of the wind forcing to generate mixing errors. The biogeochemical response is shown to be rather complex because of nonlinearities and threshold effects in the coupled model. The response of the surface phytoplankton depends on the region of interest and is particularly sensitive to the local stratification. In addition, the statistical relationships computed between the various physical and biogeochemical variables reflect the signature of the non-Gaussian behaviour of the system. It is shown that significant information on the ecosystem can be retrieved from observations of chlorophyll concentration or sea surface temperature if a simple nonlinear change of variables (anamorphosis) is performed by mapping separately and locally the ensemble percentiles of the distributions of each state variable onto the Gaussian percentiles. The results of idealized observational updates (performed with perfect observations and neglecting horizontal correlations
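
    The anamorphosis step described above reduces to mapping empirical ensemble percentiles onto standard-normal percentiles, separately for each variable and location. A minimal sketch under those assumptions (one variable at one grid point; the lognormal ensemble is an illustrative stand-in for a skewed field such as chlorophyll):

    import numpy as np
    from scipy.stats import norm

    def anamorphosis(ensemble):
        """Map ensemble values to Gaussian space by matching empirical
        percentiles to standard-normal percentiles."""
        ranks = np.argsort(np.argsort(ensemble))   # 0 .. n-1
        p = (ranks + 0.5) / len(ensemble)          # empirical percentiles in (0, 1)
        return norm.ppf(p)                         # Gaussian quantiles

    ens = np.random.lognormal(mean=0.0, sigma=1.0, size=200)
    g = anamorphosis(ens)
    print(g.mean(), g.std())                       # approximately 0 and 1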

  8. Pursuing atmospheric water vapor retrieval through NDSA measurements between two LEO satellites: evaluation of estimation errors in spectral sensitivity measurements

    Science.gov (United States)

    Facheris, L.; Cuccoli, F.; Argenti, F.

    2008-10-01

    NDSA (Normalized Differential Spectral Absorption) is a novel differential measurement method to estimate the total content of water vapor (IWV, Integrated Water Vapor) along a tropospheric propagation path between two Low Earth Orbit (LEO) satellites. A transmitter onboard the first LEO satellite and a receiver onboard the second are required. The NDSA approach is based on the simultaneous estimate of the total attenuations at two relatively close frequencies in the Ku/K bands and of a "spectral sensitivity parameter" that can be directly converted into IWV. The spectral sensitivity has the potential to emphasize the water vapor contribution, to cancel out all spectrally flat unwanted contributions, and to limit the impairments due to tropospheric scintillation. Building on a previous Monte Carlo simulation study, in which we analyzed the measurement accuracy of the spectral sensitivity parameter at three different and complementary frequencies, in this work we examine that accuracy for a particularly critical atmospheric state, as simulated through the pressure, temperature, and water vapor profiles measured by a high-resolution radiosonde. We confirm the validity of an approximate expression for the accuracy and discuss the problems that may arise when the tropospheric water vapor concentration is lower than expected.

  9. Transition Models with Measurement Errors

    OpenAIRE

    Magnac, Thierry; Visser, Michael

    1999-01-01

    In this paper, we estimate a transition model that allows for measurement errors in the data. The measurement errors arise because the survey design is partly retrospective, so that individuals sometimes forget or misclassify their past labor market transitions. The observed data are adjusted for errors via a measurement-error mechanism. The parameters of the distribution of the true data, and those of the measurement-error mechanism are estimated by a two-stage method. The results, based on ...

  10. Doubling the spectrum of time-domain induced polarization: removal of non-linear self-potential drift, harmonic noise and spikes, tapered gating, and uncertainty estimation

    DEFF Research Database (Denmark)

    Olsson, Per-Ivar; Fiandaca, Gianluca; Larsen, Jakob Juul

    …a logarithmic gate width distribution for optimizing IP data quality, and an estimate of gating uncertainty. Additional steps include modelling and cancelling of non-linear background drift and harmonic noise, and a technique for efficiently identifying and removing spikes. The cancelling of non-linear background … In total, this processing scheme achieves almost four decades in time and thus doubles the available spectral information content of the IP responses compared to the traditional processing.
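
    As an illustration of the logarithmic gate width distribution mentioned above, the sketch below builds log-spaced gate edges over a decay window and averages the signal within each gate, with a simple per-gate uncertainty. The window length, gate count, and synthetic decay are assumptions for illustration, not the authors' exact scheme.

    import numpy as np

    def log_gates(t_start, t_end, n_gates):
        """Gate edges spaced logarithmically in time, so gate width grows
        with time and late, low-SNR samples are averaged over more points."""
        return np.logspace(np.log10(t_start), np.log10(t_end), n_gates + 1)

    def gate_signal(t, v, edges):
        """Mean and standard error of the decay signal within each gate."""
        means, sems = [], []
        for lo, hi in zip(edges[:-1], edges[1:]):
            sel = (t >= lo) & (t < hi)
            n = sel.sum()
            means.append(v[sel].mean() if n else np.nan)
            sems.append(v[sel].std(ddof=1) / np.sqrt(n) if n > 1 else np.nan)
        return np.array(means), np.array(sems)

    # Synthetic IP decay sampled every 1 ms (illustrative, not field data)
    t = np.arange(0.001, 4.0, 0.001)
    v = 10.0 * t ** -0.5 + np.random.normal(0.0, 0.2, t.size)
    means, sems = gate_signal(t, v, log_gates(0.001, 4.0, 20))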

  11. Sampling errors associated with soil composites used to estimate mean Ra-226 concentrations at an UMTRA remedial-action site

    International Nuclear Information System (INIS)

    Gilbert, R.O.; Baker, K.R.; Nelson, R.A.; Miller, R.H.; Miller, M.L.

    1987-07-01

    The decision whether to take additional remedial action (removal of soil) from regions contaminated by uranium mill tailings involves collecting 20 plugs of soil from each 10-m by 10-m plot in the region and analyzing a 500-g portion of the mixed soil for 226Ra. A soil sampling study was conducted in the windblown mill-tailings flood plain area at Shiprock, New Mexico, to evaluate whether reducing the number of soil plugs to 9 would have any appreciable impact on remedial-action decisions. The results of the Shiprock study are described and used in this paper to develop a simple model of the standard deviation of 226Ra measurements on composite samples formed from 21 or fewer plugs. This model is used to predict, as a function of the number of soil plugs per composite, the percent accuracy with which the mean 226Ra concentration in surface soil can be estimated, and the probability of making incorrect remedial-action decisions on the basis of statistical tests. 8 refs., 15 figs., 9 tabs
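
    A variance model of the kind developed here can be sketched as plug-to-plug (spatial) variance averaged down by compositing, plus a measurement term that compositing cannot reduce. The variance components below are hypothetical placeholders, not the Shiprock estimates.

    import numpy as np

    def composite_sd(n_plugs, sigma_plug, sigma_meas):
        """SD of the mean 226Ra concentration from a composite of n plugs:
        spatial variance shrinks as 1/n; the lab measurement error on the
        mixed 500-g aliquot does not."""
        return np.sqrt(sigma_plug**2 / n_plugs + sigma_meas**2)

    for n in (9, 15, 21):  # candidate numbers of plugs per composite
        print(n, round(composite_sd(n, sigma_plug=3.0, sigma_meas=0.8), 2))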

  12. Rain gauge - radar rainfall reanalysis of operational and research data in the Cévennes-Vivarais region, France, estimation error analysis over a wide range of scales.

    Science.gov (United States)

    Wijbrans, Annette; Delrieu, Guy; Nord, Guillaume; Boudevillain, Brice; Berne, Alexis; Grazioli, Jacopo; Confoland, Audrey

    2014-05-01

    In the Cévennes-Vivarais region in France, flash-flood events can occur due to high-intensity precipitation events. These events are described by detailed quantitative precipitation estimates in order to better characterize the hydrological response in a number of small-scale nested watersheds, using an operational network as well as a research network in the same region covering a window of 15x30 km. The radar and rain gauge data of the operational network are collected from three organizations (Météo-France, Service de Prévision des Crues du Grand Delta and EdF/DTG). The research network provides high-resolution data from research rainfall observation systems deployed within the Enhanced Observation Period (autumn 2012-2015) of the HyMeX project (www.hymex.org). This project aims at studying the hydrological cycle in the Mediterranean, with emphasis on hydro-meteorological extremes and their evolution in the coming decades. Rain gauge-radar merging is performed using a kriging with external drift (KED) technique and compared, using a cross-validation technique, to ordinary kriging (OK) of the rain gauges and to the radar products at the same time scale. A method is also applied to quantify kriging estimation variances for both kriging techniques at the two spatial scales, in order to analyse the error characteristics of the interpolation methods over a scale range of 0.1-100 km² and 0.2-12 h. The combined reanalysis of the operational-network and research-network data gives a view of the error structure of rainfall estimation over several orders of magnitude in spatial scale. This allows understanding of the error structure of these rain events and their relation to the availability of data, and gives insight into the added value of detailed rainfall data for understanding the rainfall structure at very small, 'missing' scales (smaller than 1 km² and 1 h time steps).
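
    The cross-validation used to compare the merging techniques can be sketched generically: hold out each rain gauge in turn, re-interpolate from the remaining gauges, and score the residuals. In the sketch below, the callable named interpolate is a placeholder for an OK or KED implementation returning an estimate and its kriging variance; it is not code from the study.

    import numpy as np

    def loo_cross_validation(coords, values, interpolate):
        """Leave-one-out CV. interpolate(train_xy, train_vals, target_xy)
        must return (estimate, kriging_variance) at the held-out gauge."""
        res, norm_res = [], []
        for i in range(len(values)):
            mask = np.arange(len(values)) != i
            est, var = interpolate(coords[mask], values[mask], coords[i])
            res.append(est - values[i])
            norm_res.append((est - values[i]) / np.sqrt(var))
        res = np.array(res)
        # Bias and RMSE score the estimator; the normalized residuals should
        # have unit variance if the kriging variance is well calibrated.
        return res.mean(), np.sqrt((res**2).mean()), np.std(norm_res)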

  13. Rounding errors in weighing

    International Nuclear Information System (INIS)

    Jeach, J.L.

    1976-01-01

    When rounding error is large relative to weighing error, it cannot be ignored when estimating scale precision and bias from calibration data. Further, if the data grouping is coarse, rounding error is correlated with weighing error and may also have a mean quite different from zero. These facts are taken into account in a moment estimation method. A copy of the program listing for the MERDA program that provides moment estimates is available from the author. Experience suggests that if the data fall into four or more cells or groups, it is not necessary to apply the moment estimation method. Rather, the estimate given by equation (3) is valid in this instance. 5 tables
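
    A textbook reference point for this moment approach is Sheppard's correction: when the rounding width w is not too coarse relative to the weighing SD, the observed variance is approximately the weighing variance plus w²/12 (the variance of a uniform error over one rounding cell). Below is a minimal simulation with illustrative parameters; this is not the MERDA estimator, which also treats the coarse-grouping case where rounding and weighing errors become correlated.

    import numpy as np

    rng = np.random.default_rng(7)
    sigma_w, w = 0.5, 0.5                    # weighing SD and rounding width
    x = rng.normal(100.0, sigma_w, 200_000)  # true weighings
    y = np.round(x / w) * w                  # recorded (rounded) weighings

    print(y.var())                           # observed variance
    print(sigma_w**2 + w**2 / 12)            # Sheppard approximation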

  14. Nonlinear calibration transfer based on hierarchical Bayesian models and Lagrange Multipliers: Error bounds of estimates via Monte Carlo - Markov Chain sampling.

    Science.gov (United States)

    Seichter, Felicia; Vogt, Josef; Radermacher, Peter; Mizaikoff, Boris

    2017-01-25

    The calibration of analytical systems is time-consuming and the effort for daily calibration routines should therefore be minimized, while maintaining the analytical accuracy and precision. The 'calibration transfer' approach proposes to combine calibration data already recorded with actual calibration measurements. However, this strategy was developed for the multivariate, linear analysis of spectroscopic data, and thus cannot be applied to sensors with a single response channel and/or a non-linear relationship between signal and the desired analytical concentration. To fill this gap for a non-linear calibration equation, we assume that the coefficients for the equation, collected over several calibration runs, are normally distributed. Considering that the coefficients of an actual calibration are a sample from this distribution, only a few standards are needed for a complete calibration data set. The resulting calibration transfer approach is demonstrated for a fluorescence oxygen sensor and implemented as a hierarchical Bayesian model, combined with a Lagrange multipliers technique and Monte Carlo-Markov chain sampling. The latter provides realistic estimates for coefficients and prediction together with accurate error bounds by simulating known measurement errors and system fluctuations. Performance criteria for validation and optimal selection of a reduced set of calibration samples were developed and lead to a setup that maintains the analytical performance of a full calibration. Strategies for a rapid determination of problems occurring in a daily calibration routine are proposed, thereby opening the possibility of correcting the problem just in time. Copyright © 2016 Elsevier B.V. All rights reserved.
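
    The core idea, informative priors from past calibration runs plus a few fresh standards, can be sketched with a generic random-walk Metropolis sampler. The Stern-Volmer-type response, prior values, and noise level below are hypothetical stand-ins; this is not the authors' hierarchical model or their Lagrange multipliers step.

    import numpy as np

    rng = np.random.default_rng(1)

    # Hypothetical fluorescence O2 sensor: signal = i0 / (1 + ksv * conc).
    # Priors on (i0, ksv) summarize coefficients from past calibration runs.
    prior_mu = np.array([100.0, 0.05])
    prior_sd = np.array([5.0, 0.005])
    noise_sd = 1.0

    conc = np.array([0.0, 50.0, 200.0])      # only a few fresh standards
    signal = prior_mu[0] / (1 + prior_mu[1] * conc) + rng.normal(0, noise_sd, 3)

    def log_post(theta):
        i0, ksv = theta
        if i0 <= 0 or ksv <= 0:
            return -np.inf
        pred = i0 / (1 + ksv * conc)
        return (-0.5 * np.sum((signal - pred) ** 2) / noise_sd**2
                - 0.5 * np.sum(((theta - prior_mu) / prior_sd) ** 2))

    # Random-walk Metropolis; the posterior sample yields coefficient
    # estimates together with realistic error bounds.
    theta, samples = prior_mu.copy(), []
    for _ in range(20_000):
        prop = theta + rng.normal(0.0, [0.5, 0.0005])
        if np.log(rng.uniform()) < log_post(prop) - log_post(theta):
            theta = prop
        samples.append(theta)
    samples = np.array(samples[5_000:])
    print(samples.mean(axis=0), samples.std(axis=0))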

  15. A Generic Simulation Approach for the Fast and Accurate Estimation of the Outage Probability of Single Hop and Multihop FSO Links Subject to Generalized Pointing Errors

    KAUST Repository

    Ben Issaid, Chaouki

    2017-07-28

    When assessing the performance of the free space optical (FSO) communication systems, the outage probability encountered is generally very small, and thereby the use of naive Monte Carlo simulations becomes prohibitively expensive. To estimate these rare event probabilities, we propose in this work an importance sampling approach which is based on the exponential twisting technique to offer fast and accurate results. In fact, we consider a variety of turbulence regimes, and we investigate the outage probability of FSO communication systems, under a generalized pointing error model based on the Beckmann distribution, for both single and multihop scenarios. Selected numerical simulations are presented to show the accuracy and the efficiency of our approach compared to naive Monte Carlo.
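
    For Gaussian-driven fading, exponential twisting of the sampling density reduces to a mean shift, which is enough to illustrate the speed-up: sample around the rare region and correct with the likelihood ratio. The lognormal-turbulence setup below is an illustrative stand-in, not the paper's Beckmann pointing-error model.

    import numpy as np
    from scipy.stats import norm

    rng = np.random.default_rng(0)
    mu, sigma, thresh = 0.0, 1.0, -4.5   # log-irradiance stats, outage threshold
    n = 100_000

    # Naive Monte Carlo: essentially no samples hit the rare event.
    x = rng.normal(mu, sigma, n)
    print("naive MC:", np.mean(x < thresh))

    # Twisted (tilted) Gaussian = Gaussian shifted toward the rare region;
    # reweight each sample by the likelihood ratio f(x)/g(x).
    x_t = rng.normal(thresh, sigma, n)
    log_w = ((x_t - thresh) ** 2 - (x_t - mu) ** 2) / (2 * sigma**2)
    print("importance sampling:", np.mean(np.exp(log_w) * (x_t < thresh)))

    print("exact:", norm.cdf(thresh, mu, sigma))   # reference value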

  16. Automatic Error Analysis Using Intervals

    Science.gov (United States)

    Rothwell, E. J.; Cloud, M. J.

    2012-01-01

    A technique for automatic error analysis using interval mathematics is introduced. A comparison to standard error propagation methods shows that in cases involving complicated formulas, the interval approach gives comparable error estimates with much less effort. Several examples are considered, and numerical errors are computed using the INTLAB…
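
    The interval idea is easy to reproduce with a tiny interval type: carry [lo, hi] bounds through a formula, and the result bounds the numerical error automatically, with no derivative-based propagation algebra. A minimal sketch (this is not INTLAB, which is a MATLAB toolbox):

    from dataclasses import dataclass

    @dataclass
    class Interval:
        lo: float
        hi: float

        def __add__(self, o):
            return Interval(self.lo + o.lo, self.hi + o.hi)

        def __mul__(self, o):
            p = [self.lo * o.lo, self.lo * o.hi, self.hi * o.lo, self.hi * o.hi]
            return Interval(min(p), max(p))

    # A +/-1% resistance and +/-1% current propagate through P = I^2 * R
    r = Interval(0.99, 1.01)
    i = Interval(1.98, 2.02)
    print(i * i * r)   # guaranteed bounds on the power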

  17. Estimate of the Distribution of Solids Within Mixed Hanford Double-Shell Tank AZ-101: Implications for AY-102

    International Nuclear Information System (INIS)

    Wells, Beric E.; Ressler, Jennifer J.

    2009-01-01

    This paper describes the current level of understanding of the suspension of solids in Hanford double-shell waste tanks while being mixed with the baseline configuration of two 300-horsepower mixer pumps. A mixer pump test conducted in Tank AZ-101 during fiscal year 2000 provided the basis for this understanding. Information gaps must be filled to demonstrate the capability of the baseline feed delivery system to effectively mix, sample, and deliver double-shell tank waste to the Hanford Tank Waste Treatment and Immobilization Plant (WTP) for vitrification

  18. A voxel-based technique to estimate volume and volumetric error of terrestrial photogrammetry-derived digital terrain models (DTM) of topographic depressions

    Science.gov (United States)

    Székely, Balázs; Raveloson, Andrea; Rasztovits, Sascha; Molnár, Gábor; Dorninger, Peter

    2013-04-01

    It is a common task in geoscience to determine the volume of a topographic depression (e.g., a valley, a crater, a gully, etc.) based on a digital terrain model (DTM). For DTMs based on laser-scanned data this task can be fulfilled with relatively high accuracy. However, if the DTM is generated using terrestrial photogrammetric methods, the limitations of the technology often lead to geodetically inaccurate or biased models in forested or poorly visible areas, or if the landform has an ill-posed geometry (e.g. it is elongated). In these cases the inaccuracies may hamper the generation of a proper DTM. On the other hand, if we are interested in determining the volume of the feature to a certain accuracy, or intend to carry out an order-of-magnitude volumetric estimation, a DTM with larger inaccuracies is tolerable. In this case the volume calculation can still be done by setting realistic assumptions about the errors of the DTM. In our approach, two DTMs are generated to create top and bottom envelope surfaces that confine the "true" but unknown DTM. The varying accuracy of the photogrammetric DTM is captured by the varying deviation of these two surfaces: at problematic corners of the feature the deviation of the two surfaces is larger, whereas in well-renderable domains the deviation remains minimal. Since such topographic depressions may have a complicated geometry, the error-prone areas may complicate the geometry of the aforementioned envelopes even more, and the proper calculation of the volume may turn out to be difficult. To reduce this difficulty, a voxel-based approach is used. The volumetric error is calculated based on the gridded envelopes using an appropriate voxel resolution. The method is applied to gully features termed lavakas, which exist in large numbers in Madagascar. These landforms are typically characterised by a complex shape and steep walls; they are often elongated and have internal crests. All these
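
    A minimal sketch of the voxel (column) counting step, assuming the two envelope surfaces come as co-registered grids and the depression volume is measured below a reference rim elevation; the grids, rim elevation, and cell size are illustrative assumptions.

    import numpy as np

    def depression_volume(dtm, rim_z, cell_area):
        """Column-counted volume between a rim elevation and the DTM,
        counting only cells that lie below the rim."""
        depth = np.clip(rim_z - dtm, 0.0, None)
        return depth.sum() * cell_area

    # Hypothetical 1-m grids: bottom and top envelopes of the unknown DTM
    rng = np.random.default_rng(3)
    bottom = 100.0 - rng.uniform(5.0, 15.0, (200, 200))
    top = bottom + rng.uniform(0.0, 2.0, (200, 200))

    v_max = depression_volume(bottom, rim_z=100.0, cell_area=1.0)
    v_min = depression_volume(top, rim_z=100.0, cell_area=1.0)
    print(f"volume between {v_min:.0f} and {v_max:.0f} m^3")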

  19. Test-retest reproducibility of elbow goniometric measurements in a rigid double-blinded protocol: intervals for distinguishing between measurement error and clinical change.

    Science.gov (United States)

    Cleffken, Berry; van Breukelen, Gerard; van Mameren, Henk; Brink, Peter; Olde Damink, Steven

    2007-01-01

    Increasingly, goniometry of elbow motion is used to qualify research results, yet reliability is commonly expressed in parameters that are not suitable for comparing results. We modified Bland and Altman's method, resulting in smallest detectable differences (SDDs). Two raters measured elbow excursions in 42 individuals (144 ratings per test person) with an electronic digital inclinometer in a classical test-retest crossover study design. The SDDs were 0 ± 4.2° for active extension and 0 ± 8.2° for active flexion, both without upper arm fixation; 0 ± 6.3° for active extension, 0 ± 5.7° for active flexion, and 0 ± 7.4° for passive flexion with upper arm fixation; 0 ± 10.1° for active flexion with upper arm retroflexion; and 0 ± 8.5° and 0 ± 10.8° for active and passive range of motion. Differences smaller than these SDDs found in clinical or research settings are attributable to measurement error and do not indicate improvement.
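
    The smallest detectable difference has a compact Bland-Altman form: with test-retest differences d, changes inside mean(d) ± 1.96 · SD(d) cannot be distinguished from measurement error. A minimal sketch on hypothetical paired goniometer readings (not the study data):

    import numpy as np

    def smallest_detectable_difference(test, retest):
        """SDD from paired ratings: mean difference +/- 1.96 * SD of the
        differences; smaller changes are within measurement error."""
        d = np.asarray(retest, float) - np.asarray(test, float)
        return d.mean(), 1.96 * d.std(ddof=1)

    test =   [130, 142, 128, 135, 139, 131, 137]   # degrees, session 1
    retest = [133, 140, 130, 133, 142, 129, 138]   # degrees, session 2
    m, hw = smallest_detectable_difference(test, retest)
    print(f"SDD: {m:.1f} +/- {hw:.1f} degrees")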

  20. Double Chooz

    Energy Technology Data Exchange (ETDEWEB)

    Buck, Christian [Max-Planck-Institut fuer Kernphysik, Saupfercheckweg 1, D-69117 Heidelberg (Germany)

    2006-05-15

    The goal of the Double Chooz reactor neutrino experiment is to search for the neutrino mixing parameter θ13. Double Chooz will use two identical detectors at 150 m and 1.05 km distance from the reactor cores. The near detector is used to monitor the reactor ν̄e flux while the second is dedicated to the search for a deviation from the expected (1/distance)² behavior. This two-detector concept will allow a relative normalization systematic error of ca. 0.6%. The expected sensitivity for sin²2θ13 is then in the range 0.02-0.03 after three years of data taking. The antineutrinos will be detected in a liquid scintillator through capture on protons, followed by a gamma cascade produced by the neutron capture on Gd.