Sensor Analytics: Radioactive Gas Concentration Estimation and Error Propagation
Energy Technology Data Exchange (ETDEWEB)
Anderson, Dale N.; Fagan, Deborah K.; Suarez, Reynold; Hayes, James C.; McIntyre, Justin I.
2007-04-15
This paper develops the mathematical statistics of a radioactive gas quantity measurement and the associated error propagation. The probabilistic development is an alternative approach to deriving attenuation equations and extends readily, through simulation, to more complex gas analysis components. The mathematical development assumes a sequential process of three components: I) collection of an environmental sample, II) extraction of component gases from the sample through gas separation chemistry, and III) estimation of the radioactivity of the component gases.
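The three-stage measurement chain described above lends itself to a Monte Carlo sketch of the error propagation. The stage efficiencies, activity, and counting parameters below are illustrative assumptions, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(42)
N = 100_000  # Monte Carlo draws

# Illustrative stage uncertainties; these numbers are assumptions,
# not values from the paper.
collect_eff = rng.normal(0.90, 0.02, N)   # I) sample collection efficiency
extract_eff = rng.normal(0.80, 0.03, N)   # II) gas-separation yield
det_eff, live_time = 0.3, 1000.0          # III) counting efficiency, seconds
true_activity = 50.0                      # assumed true quantity (Bq)

# Simulate the measured counts for each draw of the stage efficiencies
mean_counts = true_activity * collect_eff * extract_eff * det_eff * live_time
counts = rng.poisson(mean_counts)

# The analyst inverts the measurement equation with *nominal* efficiencies,
# so the stage-efficiency uncertainty propagates into the activity estimate
activity_hat = counts / (0.90 * 0.80 * det_eff * live_time)

print("mean %.2f Bq, std %.2f Bq" % (activity_hat.mean(), activity_hat.std()))
```

The spread of `activity_hat` combines the counting (Poisson) statistics with the uncertainty of the chemistry stages, which is the simulation-based extension the abstract alludes to.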
Effect of geocoding errors on traffic-related air pollutant exposure and concentration estimates
Exposure to traffic-related air pollutants is highest very near roads, and thus exposure estimates are sensitive to positional errors. This study evaluates positional and PM2.5 concentration errors that result from the use of automated geocoding methods and from linearized approx...
Effect of assay measurement error on parameter estimation in concentration-QTc interval modeling.
Bonate, Peter L
2013-01-01
Linear mixed-effects models (LMEMs) of concentration-double-delta QTc intervals (QTc intervals corrected for placebo and baseline effects) assume that the concentration measurement error is negligible, which is an incorrect assumption. Previous studies have shown in linear models that independent-variable error can attenuate the slope estimate with a corresponding increase in the intercept. Monte Carlo simulation was used to examine the impact of assay measurement error (AME) on the parameter estimates of an LMEM and a nonlinear MEM (NMEM) concentration-ddQTc interval model from a 'typical' thorough QT study. For the LMEM, the type I error rate was unaffected by assay measurement error. Significant slope attenuation (>10%) occurred when the AME exceeded 40%, independent of the sample size. Increasing AME also decreased the between-subject variance of the slope, increased the residual variance, and had no effect on the between-subject variance of the intercept. For a typical analytical assay having an assay measurement error of less than 15%, the relative bias in the estimates of the model parameters and variance components was less than 15% in all cases. The NMEM appeared to be more robust, as most parameters were unaffected by measurement error. Monte Carlo simulation was then used to determine whether the simulation-extrapolation method of parameter bias correction could be applied to cases of large AME in LMEMs. For analytical assays with large AME (>30%), the simulation-extrapolation method could correct biased model parameter estimates to near-unbiased levels.
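The simulation-extrapolation (SIMEX) correction mentioned above can be sketched on a plain linear regression with independent-variable error; all numbers here are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000
x_true = rng.normal(0.0, 1.0, n)
y = 1.0 + 0.5 * x_true + rng.normal(0.0, 0.2, n)   # true slope 0.5
sigma_u = 0.6                                       # assumed measurement-error SD
x_obs = x_true + rng.normal(0.0, sigma_u, n)        # observed, error-prone x

def slope(x, y):
    return np.polyfit(x, y, 1)[0]

# Naive fit is attenuated, here by roughly 1 / (1 + sigma_u**2)
naive = slope(x_obs, y)

# SIMEX: add *extra* error at levels lambda, fit, then extrapolate to lambda = -1
lams = np.array([0.0, 0.5, 1.0, 1.5, 2.0])
mean_slopes = []
for lam in lams:
    fits = [slope(x_obs + rng.normal(0, np.sqrt(lam) * sigma_u, n), y)
            for _ in range(200)]
    mean_slopes.append(np.mean(fits))

# Quadratic extrapolation in lambda back to the error-free point lambda = -1
coef = np.polyfit(lams, mean_slopes, 2)
simex = np.polyval(coef, -1.0)
print("naive %.3f, SIMEX %.3f" % (naive, simex))
```

The naive slope sits well below 0.5, while the extrapolated estimate recovers most of the attenuation, which is the behaviour the abstract reports for large AME.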
Errors on errors - Estimating cosmological parameter covariance
Joachimi, Benjamin
2014-01-01
Current and forthcoming cosmological data analyses share the challenge of huge datasets alongside increasingly tight requirements on the precision and accuracy of extracted cosmological parameters. The community is becoming increasingly aware that these requirements not only apply to the central values of parameters but, equally important, also to the error bars. Due to non-linear effects in the astrophysics, the instrument, and the analysis pipeline, data covariance matrices are usually not well known a priori and need to be estimated from the data itself, or from suites of large simulations. In either case, the finite number of realisations available to determine data covariances introduces significant biases and additional variance in the errors on cosmological parameters in a standard likelihood analysis. Here, we review recent work on quantifying these biases and additional variances and discuss approaches to remedy these effects.
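The bias from estimating a covariance matrix out of a finite number of realisations can be demonstrated directly; the sketch below checks the inverse of the sample covariance against the well-known Hartlap, Simon & Schneider (2007) correction factor, using a hypothetical Gaussian toy setup rather than a cosmological likelihood:

```python
import numpy as np

rng = np.random.default_rng(1)
p, n_s, trials = 5, 20, 2000   # data-vector length, realisations, repetitions
true_cov = np.eye(p)

bias = []
for _ in range(trials):
    sims = rng.multivariate_normal(np.zeros(p), true_cov, n_s)
    c_hat = np.cov(sims, rowvar=False)            # sample covariance estimate
    bias.append(np.trace(np.linalg.inv(c_hat)) / p)

mean_inv = np.mean(bias)             # average diagonal of the inverse; truth is 1
hartlap = (n_s - p - 2) / (n_s - 1)  # Hartlap et al. (2007) de-biasing factor
print("E[inverse] %.3f, 1/hartlap %.3f" % (mean_inv, 1.0 / hartlap))
```

Even though `c_hat` itself is unbiased, its inverse (the precision matrix entering a likelihood) is biased high by the factor `(n_s - 1)/(n_s - p - 2)`, exactly the kind of finite-realisation effect the abstract reviews.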
Irwin, Peter L; Nguyen, Ly-Huong T; Chen, Chin-Yi
2010-09-01
For any analytical system, the population mean (μ) number of entities (e.g., cells or molecules) per tested volume, surface area, or mass also defines the population standard deviation (σ = √μ). For a preponderance of analytical methods, σ is very small relative to μ due to their large limit of detection (>10² per volume). However, in theory at least, DNA-based detection methods (real-time, quantitative or qPCR) can detect ≈1 DNA molecule per tested volume (i.e., μ ≈ 1), whereupon errors of random sampling can cause sample means (x̄) to deviate substantially from μ if the number of samplings (n), or "technical replicates", per observation is too small. In this work the behaviors of two measures of sampling error (each replicated fivefold) are examined under the influence of n. For all data (μ = 1.25, 2.5, 5, 7.5, 10, and 20) a large sample of individual analytical counts (x) was created and randomly assigned into N integer-valued sub-samples each containing between 2 and 50 repeats (n), whereupon N × n = 322 to 361. From these data the average μ-normalized deviation of σ from each sub-sample's standard deviation estimate (s(j), j = 1 to N; N = 7 [n = 50] to 180 [n = 2]) was calculated (Δ). Alternatively, the average μ-normalized deviation of μ from each sub-sample's mean estimate (x̄(j)) was also evaluated (Δ'). It was found that both of these empirical measures of sampling error were proportional to 1/√(n·μ). Derivative (∂/∂n of Δ or Δ') analyses of our results indicate that a large number of samplings (n ≈ 33 ± 3.1) is requisite to achieve a nominal sampling error for samples with μ ≈ 1. This result argues that pathogen detection is most economically performed, even using highly sensitive techniques such as qPCR, when some form of cultural enrichment of the organism is utilized, which results in a binomial response. Thus, using a specific gene PCR-based (+ or -) most probable number (MPN
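The 1/√(n·μ) behaviour of the sampling error is easy to reproduce with a small Poisson simulation (parameters chosen for illustration):

```python
import numpy as np

rng = np.random.default_rng(7)
mu = 1.25  # mean entities per tested volume, near the qPCR detection limit

def mean_error(n, reps=4000):
    """Average mu-normalized deviation |x̄ - mu| / mu over many sub-samples
    of n technical replicates each."""
    xbar = rng.poisson(mu, size=(reps, n)).mean(axis=1)
    return np.abs(xbar - mu).mean() / mu

errs = {n: mean_error(n) for n in (2, 8, 32)}
for n, e in errs.items():
    print(n, round(e, 3))
```

Quadrupling n roughly halves the normalized sampling error, consistent with the inverse-square-root dependence; around n ≈ 32 the error has dropped to a workable level for μ ≈ 1, in line with the abstract's n ≈ 33 result.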
Adjoint Error Estimation for Linear Advection
Energy Technology Data Exchange (ETDEWEB)
Connors, J M; Banks, J W; Hittinger, J A; Woodward, C S
2011-03-30
An a posteriori error formula is described when a statistical measurement of the solution to a hyperbolic conservation law in 1D is estimated by finite volume approximations. This is accomplished using adjoint error estimation. In contrast to previously studied methods, the adjoint problem is divorced from the finite volume method used to approximate the forward solution variables. An exact error formula and computable error estimate are derived based on an abstractly defined approximation of the adjoint solution. This framework allows the error to be computed to an arbitrary accuracy given a sufficiently well resolved approximation of the adjoint solution. The accuracy of the computable error estimate provably satisfies an a priori error bound for sufficiently smooth solutions of the forward and adjoint problems. The theory does not currently account for discontinuities. Computational examples are provided that show support of the theory for smooth solutions. The application to problems with discontinuities is also investigated computationally.
Model error estimation in ensemble data assimilation
Directory of Open Access Journals (Sweden)
S. Gillijns
2007-01-01
A new methodology is proposed to estimate and account for systematic model error in linear filtering as well as in nonlinear ensemble-based filtering. Our results extend the work of Dee and Todling (2000) on constant bias errors to time-varying model errors. In contrast to existing methodologies, the new filter can also deal with the case where no dynamical model for the systematic error is available. In the latter case, the applicability is limited by a matrix rank condition which has to be satisfied in order for the filter to exist. The performance of the filter developed in this paper is limited by the availability and the accuracy of observations and by the variance of the stochastic model error component. The effect of these aspects on the estimation accuracy is investigated in several numerical experiments using the Lorenz (1996) model. Experimental results indicate that the availability of a dynamical model for the systematic error significantly reduces the variance of the model error estimates, but has only a minor effect on the estimates of the system state. The filter is able to estimate additive model error of any type, provided that the rank condition is satisfied and that the stochastic errors and measurement errors are significantly smaller than the systematic errors. The results of this study are encouraging. However, it remains to be seen how the filter performs in more realistic applications.
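A minimal instance of estimating constant model error by state augmentation, one standard way to fold a bias into a Kalman filter; this is a scalar toy system, not the paper's filter or the Lorenz model:

```python
import numpy as np

rng = np.random.default_rng(3)
# Scalar truth x_{k+1} = a x_k + b + w_k with unknown constant bias b
a, b_true, q, r = 0.9, 0.5, 0.01, 0.04
steps = 500

# Augmented state z = [x, b]; b modelled as (nearly) constant
F = np.array([[a, 1.0], [0.0, 1.0]])
H = np.array([[1.0, 0.0]])
Q = np.diag([q, 1e-8])
R = np.array([[r]])

x, z, P = 0.0, np.zeros(2), np.eye(2)
for _ in range(steps):
    x = a * x + b_true + rng.normal(0, np.sqrt(q))   # truth
    y = x + rng.normal(0, np.sqrt(r))                # observation
    # predict
    z = F @ z
    P = F @ P @ F.T + Q
    # update
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    z = z + (K @ (y - H @ z)).ravel()
    P = (np.eye(2) - K @ H) @ P

print("estimated bias: %.3f (true %.3f)" % (z[1], b_true))
```

The augmented filter converges to the unknown bias because the pair (F, H) is observable; the rank condition in the abstract plays the analogous role when no dynamical model for the error is available.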
Wind power error estimation in resource assessments.
Rodríguez, Osvaldo; Del Río, Jesús A; Jaramillo, Oscar A; Martínez, Manuel
2015-01-01
Estimating the power output is one of the elements that determine the techno-economic feasibility of a renewable project. At present, there is a need to develop reliable methods that achieve this goal, thereby contributing to wind power penetration. In this study, we propose a method for wind power error estimation based on the wind speed measurement error, the probability density function, and wind turbine power curves. This method uses the actual wind speed data without prior statistical treatment, based on 28 wind turbine power curves fitted by Lagrange's method, to calculate the estimated wind power output and the corresponding error propagation. We found that wind speed percentage errors of 10% were propagated into the power output estimates, yielding an error of 5%. The proposed error propagation complements traditional power resource assessments. The wind power estimation error also allows us to estimate intervals for the levelized cost of power production or the investment return time. The implementation of this method increases the reliability of techno-economic resource assessment studies.
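A rough sketch of the idea, propagating a random 10% wind-speed error through a power curve; the curve and wind resource below are hypothetical, so the resulting percentage will not reproduce the paper's 5% figure:

```python
import numpy as np

rng = np.random.default_rng(5)

def power_curve(v):
    """Hypothetical turbine power curve (kW): cubic ramp between cut-in
    3 m/s and rated 12 m/s, flat at 2000 kW up to cut-out 25 m/s."""
    p = np.where((v >= 3) & (v < 12), 2000.0 * ((v - 3) / 9.0) ** 3, 0.0)
    return np.where((v >= 12) & (v < 25), 2000.0, p)

# Assumed Weibull-like wind resource and a 10% random speed measurement error
v = rng.weibull(2.0, 50_000) * 8.0
v_meas = v * (1.0 + rng.normal(0.0, 0.10, v.size))

p_true = power_curve(v).mean()
p_est = power_curve(v_meas).mean()
rel = (p_est - p_true) / p_true
print("relative power estimation error: %.1f%%" % (100 * rel))
```

Because the curve is cubic in its ramp but flat at rated power, speed errors are amplified in some regimes and suppressed in others, which is why the overall propagated error depends on the specific power curves and the speed distribution.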
Wind Power Error Estimation in Resource Assessments
Rodríguez, Osvaldo; del Río, Jesús A.; Jaramillo, Oscar A.; Martínez, Manuel
2015-01-01
PMID:26000444
Error Estimates of Theoretical Models: a Guide
Dobaczewski, J.; Reinhard, P.-G.
2014-01-01
This guide offers suggestions and insights on uncertainty quantification of nuclear structure models. We discuss a simple approach to statistical error estimates, strategies to assess systematic errors, and show how to uncover inter-dependencies by correlation analysis. The basic concepts are illustrated through simple examples. By providing theoretical error bars on predicted quantities and using statistical methods to study correlations between observables, theory can significantly enhance the feedback between experiment and nuclear modeling.
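The statistical part of such an analysis, namely fitting a model, forming the parameter covariance, and propagating it to an error bar on a prediction, can be sketched on a linear toy model (all values here are illustrative, not nuclear-physics data):

```python
import numpy as np

rng = np.random.default_rng(11)
# Toy "model calibration": fit y = p0 + p1*t to pseudo-data with known noise,
# then propagate the parameter covariance to a predicted quantity y(t = 2).
t = np.linspace(0, 1, 20)
sigma = 0.1
y = 1.0 + 2.0 * t + rng.normal(0, sigma, t.size)

J = np.column_stack([np.ones_like(t), t])   # Jacobian of the linear model
cov = sigma**2 * np.linalg.inv(J.T @ J)     # parameter covariance matrix
p = np.linalg.solve(J.T @ J, J.T @ y)       # least-squares parameter estimate

g = np.array([1.0, 2.0])                    # gradient of y(t=2) w.r.t. parameters
pred = g @ p
pred_err = np.sqrt(g @ cov @ g)             # propagated theoretical error bar
print("%.3f +/- %.3f" % (pred, pred_err))
```

The off-diagonal entries of `cov` carry exactly the parameter inter-dependencies the guide discusses; the propagated error bar on an extrapolated prediction is larger than on an interpolated one because of them.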
A generalization error estimate for nonlinear systems
DEFF Research Database (Denmark)
Larsen, Jan
1992-01-01
…models of linear and simple neural network systems. Within the linear system, GEN is compared to the final prediction error criterion and the leave-one-out cross-validation technique. It was found that the GEN estimate of the true generalization error is less biased on average. It is concluded...
Error estimation and adaptivity for incompressible hyperelasticity
Whiteley, J.P.
2014-04-30
A Galerkin FEM is developed for nonlinear, incompressible (hyper) elasticity that takes account of nonlinearities in both the strain tensor and the relationship between the strain tensor and the stress tensor. By using suitably defined linearised dual problems with appropriate boundary conditions, a posteriori error estimates are then derived for both linear functionals of the solution and linear functionals of the stress on a boundary, where Dirichlet boundary conditions are applied. A second, higher order method for calculating a linear functional of the stress on a Dirichlet boundary is also presented together with an a posteriori error estimator for this approach. An implementation for a 2D model problem with known solution, where the entries of the strain tensor exhibit large, rapid variations, demonstrates the accuracy and sharpness of the error estimators. Finally, using a selection of model problems, the a posteriori error estimate is shown to provide a basis for effective mesh adaptivity. © 2014 John Wiley & Sons, Ltd.
Error estimation in plant growth analysis
Directory of Open Access Journals (Sweden)
Andrzej Gregorczyk
2014-01-01
The scheme is presented for the calculation of errors of dry matter values which occur during the approximation of data with growth curves, determined by the analytical method (logistic function) and by the numerical method (Richards function). Formulae are also shown which describe the absolute errors of the growth characteristics: growth rate (GR), relative growth rate (RGR), unit leaf rate (ULR) and leaf area ratio (LAR). Calculation examples concerning the growth course of oat and maize plants are given. A critical analysis of the estimates obtained has been carried out. The usefulness of the joint application of statistical methods and error calculus in plant growth analysis has been established.
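For the logistic curve, the growth characteristics named above have closed forms through which fitted-parameter errors can be propagated; a sketch with assumed parameter values (not data from the paper):

```python
import numpy as np

# Logistic growth curve W(t) = A / (1 + exp(b - k*t)); illustrative parameters
A, b, k = 100.0, 4.0, 0.1

def W(t):
    """Dry matter at time t."""
    return A / (1.0 + np.exp(b - k * t))

def GR(t):
    """Growth rate dW/dt, analytic for the logistic curve."""
    w = W(t)
    return k * w * (1.0 - w / A)

def RGR(t):
    """Relative growth rate (1/W) dW/dt."""
    return k * (1.0 - W(t) / A)

t = 40.0  # inflection point for these parameters (b/k = 40)
print(W(t), GR(t), RGR(t))
```

Error calculus then consists of differentiating these expressions with respect to the fitted parameters A, b, and k and combining the parameter errors, which is the propagation the abstract describes.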
Error estimation and adaptive chemical transport modeling
Directory of Open Access Journals (Sweden)
Malte Braack
2014-09-01
We present a numerical method to use several chemical transport models of increasing accuracy and complexity in an adaptive way. In the largest parts of the domain, a simplified chemical model may be used, whereas in certain regions a more complex model is needed for accuracy reasons. A mathematically derived error estimator measures the modeling error and indicates where more accurate models should be used. The error is measured in terms of output functionals. Therefore, one has to consider adjoint problems which carry sensitivity information. This concept is demonstrated by means of ozone formation and pollution emission.
Error estimation in the direct state tomography
Sainz, I.; Klimov, A. B.
2016-10-01
We show that by reformulating the Direct State Tomography (DST) protocol in terms of projections onto a set of non-orthogonal bases, one can perform an accuracy analysis of DST in the same way as for standard projection-based reconstruction schemes, i.e., in terms of the Hilbert-Schmidt distance between the estimated and true states. This allows us to determine the estimation error for any measurement strength, including the weak-measurement case, and to obtain an explicit analytic form for the average minimum square errors.
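The Hilbert-Schmidt distance used as the error measure above is straightforward to compute; a small sketch for a qubit with an arbitrarily chosen state and perturbation:

```python
import numpy as np

def hs_distance(rho, sigma):
    """Hilbert-Schmidt distance sqrt(Tr[(rho - sigma)^dagger (rho - sigma)])."""
    d = rho - sigma
    return np.sqrt(np.real(np.trace(d.conj().T @ d)))

# A qubit density matrix and a slightly perturbed "estimate" (illustrative)
rho = np.array([[0.7, 0.2 - 0.1j], [0.2 + 0.1j, 0.3]])
eps = 0.02
sigma = rho + eps * np.array([[1.0, 0.0], [0.0, -1.0]])  # trace-preserving shift

print(hs_distance(rho, sigma))
```

Averaging this distance over many reconstructions of simulated measurement records is how the estimation error of a tomography protocol is quantified numerically.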
EXPLICIT ERROR ESTIMATE FOR THE NONCONFORMING WILSON'S ELEMENT
Institute of Scientific and Technical Information of China (English)
Jikun ZHAO; Shaochun CHEN
2013-01-01
In this article, we study the explicit expressions of the constants in the error estimate of the nonconforming finite element method. We explicitly obtain the approximation error estimate and the consistency error estimate for Wilson's element without the regularity assumption, which together imply the final finite element error estimate. Such explicit a priori error estimates can be used as computable error bounds.
Current error estimates for LISA spurious accelerations
Energy Technology Data Exchange (ETDEWEB)
Stebbins, R T [NASA Goddard Space Flight Center, Greenbelt, MD 20771 (United States); Bender, P L [JILA-University of Colorado, Boulder, CO (United States); Hanson, J [Stanford University, Stanford, CA (United States); Hoyle, C D [University of Trento, Trento (Italy); Schumaker, B L [Jet Propulsion Laboratory, Pasadena, CA (United States); Vitale, S [University of Trento, Trento (Italy)
2004-03-07
The performance of the LISA gravitational wave detector depends critically on limiting spurious accelerations of the fiducial masses. Consequently, the requirements on allowable acceleration levels must be carefully allocated based on estimates of the achievable limits on spurious accelerations from all disturbances. Changes in the allocation of requirements are being considered, and are proposed here. The total spurious acceleration error requirement would remain unchanged, but a few new error sources would be added, and the allocations for some specific error sources would be changed. In support of the recommended revisions in the requirements budget, estimates of plausible acceleration levels for 17 of the main error sources are discussed. In most cases, the formula for calculating the size of the effect is known, but there may be questions about the values of various parameters to use in the estimates. Different possible parameter values have been discussed, and a representative set is presented. Improvements in our knowledge of the various experimental parameters will come from planned experimental and modelling studies, supported by further theoretical work.
Tolerance for error and computational estimation ability.
Hogan, Thomas P; Wyckoff, Laurie A; Krebs, Paul; Jones, William; Fitzgerald, Mark P
2004-06-01
Previous investigators have suggested that the personality variable tolerance for error is related to success in computational estimation. However, this suggestion has not been tested directly. This study examined the relationship between performance on a computational estimation test and scores on the NEO-Five Factor Inventory, a measure of the Big Five personality traits, including Openness, an index of tolerance for ambiguity. Other variables included SAT-I Verbal and Mathematics scores and self-rated mathematics ability. Participants were 65 college students. There was no significant relationship between the tolerance variable and computational estimation performance. There was a modest negative relationship between Agreeableness and estimation performance. The skepticism associated with the negative pole of the Agreeableness dimension may be important to pursue in further understanding of estimation ability.
Hierarchical Boltzmann simulations and model error estimation
Torrilhon, Manuel; Sarna, Neeraj
2017-08-01
A hierarchical simulation approach for Boltzmann's equation should provide a single numerical framework in which a coarse representation can be used to compute gas flows as accurately and efficiently as in computational fluid dynamics, while a subsequent refinement allows one to successively improve the result towards the complete Boltzmann result. We use a Hermite discretization, or moment equations, for the steady linearized Boltzmann equation as a proof of concept for such a framework. All representations of the hierarchy are rotationally invariant, and the numerical method is formulated on fully unstructured triangular and quadrilateral meshes using an implicit discontinuous Galerkin formulation. We demonstrate the performance of the numerical method on model problems, which in particular highlight the relevance of stable boundary conditions on curved domains. The hierarchical nature of the method also allows us to provide model error estimates by comparing subsequent representations. We present various model errors for a flow through a curved channel with obstacles.
Systematic Error Estimation for Chemical Reaction Energies
Simm, Gregor N
2016-01-01
For the theoretical understanding of the reactivity of complex chemical systems, accurate relative energies between intermediates and transition states are required. Despite its popularity, density functional theory (DFT) often fails to provide sufficiently accurate data, especially for molecules containing transition metals. Yet, due to the huge number of intermediates that need to be studied for all but the simplest chemical processes, DFT is to date the only computationally feasible method. Here, we present a Bayesian framework for DFT that allows for error estimation of calculated properties. Since the optimal choice of parameters in present-day density functionals is strongly system dependent, we advocate for a system-focused re-parameterization. While, at first sight, this approach conflicts with the first-principles character of DFT that should make it in principle system independent, we deliberately introduce system dependence because we can then assign a stochastically meaningful error to the syste...
A MATHEMATICAL APPROACH TO ESTIMATE THE ERROR
Directory of Open Access Journals (Sweden)
Thomas MELCHER
2016-06-01
Engineering-based calculation procedures in fire safety science often involve unknown or uncertain input data that must be estimated by the engineer using appropriate and plausible assumptions. Errors in these data thereby enter the calculation and thus affect the results as well as their reliability. In this paper, a procedure is presented to directly quantify and account for unknown input properties during the calculation, using distribution functions and Monte Carlo simulations. A sensitivity analysis reveals the properties which have a major impact on the reliability of the calculation. Furthermore, the results are compared to the numerical models CFAST and FDS.
Factoring Algebraic Error for Relative Pose Estimation
Energy Technology Data Exchange (ETDEWEB)
Lindstrom, P; Duchaineau, M
2009-03-09
We address the problem of estimating the relative pose, i.e. translation and rotation, of two calibrated cameras from image point correspondences. Our approach is to factor the nonlinear algebraic pose error functional into translational and rotational components, and to optimize translation and rotation independently. This factorization admits subproblems that can be solved using direct methods with practical guarantees on global optimality. That is, for a given translation, the corresponding optimal rotation can be determined directly, and vice versa. We show that these subproblems are equivalent to computing the least eigenvector of second- and fourth-order symmetric tensors. When neither translation nor rotation is known, alternating translation and rotation optimization leads to a simple, efficient, and robust algorithm for pose estimation that improves on the well-known 5- and 8-point methods.
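The least-eigenvector subproblem referred to above is the standard direct solution of a homogeneous least-squares problem; a generic sketch (a synthetic constraint system, not the paper's pose functional):

```python
import numpy as np

rng = np.random.default_rng(9)
# Solve min_{||x|| = 1} ||A x|| via the least eigenvector of A^T A, the same
# building block used for the translational/rotational subproblems.
x_true = np.array([1.0, -2.0, 0.5])
x_true /= np.linalg.norm(x_true)

# Rows approximately orthogonal to x_true (noisy homogeneous constraints)
A = rng.normal(size=(100, 3))
A -= np.outer(A @ x_true, x_true)      # project rows onto x_true's complement
A += 0.01 * rng.normal(size=A.shape)   # add measurement noise

w, V = np.linalg.eigh(A.T @ A)         # eigenvalues in ascending order
x_hat = V[:, 0]                        # least eigenvector = minimizer
err = min(np.linalg.norm(x_hat - x_true), np.linalg.norm(x_hat + x_true))
print("recovery error:", err)
```

The sign ambiguity (x and -x minimize equally) is handled by taking the smaller of the two distances; direct eigen-solutions like this are what give the method its global-optimality guarantees per subproblem.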
GOMOS data characterization and error estimation
Directory of Open Access Journals (Sweden)
J. Tamminen
2010-03-01
The Global Ozone Monitoring by Occultation of Stars (GOMOS) instrument uses the stellar occultation technique for monitoring ozone and other trace gases in the stratosphere and mesosphere. The self-calibrating measurement principle of GOMOS, together with a relatively simple data retrieval that requires only minimal use of a priori data, provides excellent possibilities for long-term monitoring of atmospheric composition.
GOMOS uses about 180 of the brightest stars as its light sources. Depending on the individual spectral characteristics of the stars, the signal-to-noise ratio of GOMOS varies from star to star, resulting in varying accuracy of the retrieved profiles. We present an overview of the GOMOS data characterization and error estimation, including modeling errors, for ozone, NO2, NO3 and aerosol profiles. The retrieval error (precision) of the night-time measurements in the stratosphere is typically 0.5–4% for ozone, about 10–20% for NO2, 20–40% for NO3 and 2–50% for aerosols. Mesospheric O3, up to 100 km, can be measured with 2–10% precision. The main sources of the modeling error are the incompletely corrected atmospheric turbulence causing scintillation, inaccurate aerosol modeling, and uncertainties in the cross sections of the trace gases and in the atmospheric temperature. The sampling resolution of GOMOS varies depending on the measurement geometry. In the data inversion, a Tikhonov-type regularization with a pre-defined target resolution requirement is applied, leading to 2–3 km resolution for ozone and 4 km resolution for the other trace gases.
Radiation risk estimation based on measurement error models
Masiuk, Sergii; Shklyar, Sergiy; Chepurny, Mykola; Likhtarov, Illya
2017-01-01
This monograph discusses statistics and risk estimates applied to radiation damage under the presence of measurement errors. The first part covers nonlinear measurement error models, with a particular emphasis on efficiency of regression parameter estimators. In the second part, risk estimation in models with measurement errors is considered. Efficiency of the methods presented is verified using data from radio-epidemiological studies.
Floating-Point Numbers with Error Estimates (revised)
Masotti, Glauco
2012-01-01
The study addresses the problem of precision in floating-point (FP) computations. A method for estimating the errors which affect intermediate and final results is proposed and a summary of many software simulations is discussed. The basic idea consists of representing FP numbers by means of a data structure collecting value and estimated error information. Under certain constraints, the estimate of the absolute error is accurate and has a compact statistical distribution. By monitoring the estimated relative error during a computation (an ad-hoc definition of relative error has been used), the validity of results can be ensured. The error estimate enables the implementation of robust algorithms, and the detection of ill-conditioned problems. A dynamic extension of number precision, under the control of error estimates, is advocated, in order to compute results within given error bounds. A reduced time penalty could be achieved by a specialized FP processor. The realization of a hardwired processor incorporat...
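A toy version of the value-plus-error data structure can be sketched as follows; unlike the paper, which tracks signed errors so that cancellation is possible, this sketch combines error magnitudes in quadrature:

```python
import math

class EstFloat:
    """Float carrying an estimated absolute error: a sketch of a
    value-plus-error data structure (quadrature combination assumed)."""

    def __init__(self, value, err=0.0):
        self.value = value
        # add the generated roundoff of representing `value`, about 0.5 ulp
        self.err = abs(err) + 0.5 * math.ulp(value)

    def __add__(self, other):
        # propagated error of a sum: errors of the operands combined
        return EstFloat(self.value + other.value,
                        math.hypot(self.err, other.err))

    def __mul__(self, other):
        # propagated error of a product via first-order error propagation
        propagated = math.hypot(self.err * other.value,
                                other.err * self.value)
        return EstFloat(self.value * other.value, propagated)

    def rel_err(self):
        return self.err / abs(self.value) if self.value else float("inf")

a = EstFloat(1.0, 1e-6)
b = EstFloat(3.0, 2e-6)
c = a * b + EstFloat(2.0)
print(c.value, c.rel_err())
```

Monitoring `rel_err()` during a computation is the mechanism the abstract describes for validating results and detecting ill-conditioned problems; a hardware realization would carry the same pair through the FP datapath.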
Acoustic estimation of suspended sediment concentration
Institute of Scientific and Technical Information of China (English)
朱维庆; 朱敏; 周忠来; 潘锋; 霍其增; 张向军
2001-01-01
In this paper, the acoustic estimation of suspended sediment concentration is discussed and two estimation methods are presented. The first is a curve-fitting method in which, according to acoustic backscattering theory, the fitting factor K1(r) between the concentration M(r) obtained by acoustic observation and the concentration M0(r) obtained by water sampling is assumed to be a high-order power function of the distance r. Using a least-squares algorithm, the coefficients of this power function are determined by minimizing the difference between M(r) and M0(r) over the whole water profile. In the first method, no constraint is placed on the absorption coefficient of sound due to the suspension in the water. The second is a recursive fitting method, in which M0(r) provides the initialization and decision conditions and rational constraints are imposed on some parameters. The recursive process is stable. Both methods were analyzed with extensive experimental data. The analytical results show that the estimation error of the first method is less than that of the second, while the latter can not only estimate the suspended sediment concentration but also yield the absorption coefficient of sound. Good results have been obtained with both methods.
Error estimate for Doo-Sabin surfaces
Institute of Scientific and Technical Information of China (English)
2002-01-01
Based on a general bound on the distance error between a uniform Doo-Sabin surface and its control polyhedron, an exponential error bound independent of the subdivision process is presented in this paper. Using the exponential bound, one can predict the depth of recursive subdivision of the Doo-Sabin surface within any user-specified error tolerance.
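Given an exponential bound of the form M·λ^k on the distance between the level-k surface and its control polyhedron, the required subdivision depth for a user-specified tolerance follows directly (M and λ below are placeholders, not the actual Doo-Sabin constants):

```python
import math

def subdivision_depth(M, lam, eps):
    """Smallest k with M * lam**k <= eps, assuming an error bound of the
    form ||S_k - P|| <= M * lam**k with contraction factor 0 < lam < 1."""
    if M <= eps:
        return 0
    return math.ceil(math.log(eps / M) / math.log(lam))

# e.g. initial bound M = 1.0, per-step factor 1/4, tolerance 1e-3
print(subdivision_depth(1.0, 0.25, 1e-3))
```

This is exactly the "predict the depth of recursive subdivision" use-case the abstract mentions: the bound is evaluated once, before any subdivision is performed.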
A hardware error estimate for floating-point computations
Lang, Tomás; Bruguera, Javier D.
2008-08-01
We propose a hardware-computed estimate of the roundoff error in floating-point computations. The estimate is computed concurrently with the execution of the program and gives an estimation of the accuracy of the result. The intention is to have a qualitative indication when the accuracy of the result is low. We aim for a simple implementation and a negligible effect on the execution of the program. Large errors due to roundoff occur in some computations, producing inaccurate results. However, usually these large errors occur only for some values of the data, so that the result is accurate in most executions. As a consequence, the computation of an estimate of the error during execution would allow the use of algorithms that produce accurate results most of the time. In contrast, if an error estimate is not available, the solution is to perform an error analysis. However, this analysis is complex or impossible in some cases, and it produces a worst-case error bound. The proposed approach is to keep with each value an estimate of its error, which is computed when the value is produced. This error is the sum of a propagated error, due to the errors of the operands, plus the generated error due to roundoff during the operation. Since roundoff errors are signed values (when rounding to nearest is used), the computation of the error allows for compensation when errors are of different sign. However, since the error estimate is of finite precision, it suffers from similar accuracy problems as any floating-point computation. Moreover, it is not an error bound. Ideally, the estimate should be large when the error is large and small when the error is small. Since this cannot be achieved always with an inexact estimate, we aim at assuring the first property always, and the second most of the time. As a minimum, we aim to produce a qualitative indication of the error. To indicate the accuracy of the value, the most appropriate type of error is the relative error. However
Estimating IMU heading error from SAR images.
Energy Technology Data Exchange (ETDEWEB)
Doerry, Armin Walter
2009-03-01
Angular orientation errors of the real antenna for Synthetic Aperture Radar (SAR) will manifest as undesired illumination gradients in SAR images. These gradients can be measured, and the pointing error can be calculated. This can be done for single images, but done more robustly using multi-image methods. Several methods are provided in this report. The pointing error can then be fed back to the navigation Kalman filter to correct for problematic heading (yaw) error drift. This can mitigate the need for uncomfortable and undesired IMU alignment maneuvers such as S-turns.
Prediction and simulation errors in parameter estimation for nonlinear systems
Aguirre, Luis A.; Barbosa, Bruno H. G.; Braga, Antônio P.
2010-11-01
This article compares the pros and cons of using prediction error and simulation error to define cost functions for parameter estimation in the context of nonlinear system identification. To avoid being influenced by estimators of the least squares family (e.g. prediction error methods), and in order to be able to solve non-convex optimisation problems (e.g. minimisation of some norm of the free-run simulation error), evolutionary algorithms were used. Simulated examples, which include polynomial, rational and neural network models, are discussed. Our results, obtained using different model classes, show that in general the use of simulation error is preferable to prediction error. An interesting exception to this rule seems to be the equation error case when the model structure includes the true model. In the case of errors-in-variables, although parameter estimation is biased in both cases, the algorithm based on simulation error is more robust.
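The trade-off between the two cost functions can be illustrated on a first-order linear example. This is a sketch with invented noise levels; the article uses evolutionary algorithms and richer model classes, and a simple grid search stands in for the optimizer here.

```python
import numpy as np

rng = np.random.default_rng(0)
a_true, n = 0.8, 2000
u = rng.normal(size=n)                  # known input sequence
y = np.zeros(n)
for k in range(1, n):
    y[k] = a_true * y[k - 1] + u[k]     # true system
ym = y + 0.3 * rng.normal(size=n)       # noisy output measurements

def prediction_cost(a):
    # one-step-ahead prediction error, using measured past outputs
    e = ym[1:] - (a * ym[:-1] + u[1:])
    return float(np.sum(e ** 2))

def simulation_cost(a):
    # free-run simulation error: the model uses its own past outputs
    ys = np.zeros(n)
    for k in range(1, n):
        ys[k] = a * ys[k - 1] + u[k]
    return float(np.sum((ym - ys) ** 2))

grid = np.linspace(0.5, 0.99, 200)      # grid search stands in for the optimizer
a_pred = grid[np.argmin([prediction_cost(a) for a in grid])]
a_sim = grid[np.argmin([simulation_cost(a) for a in grid])]
# the prediction-error estimate tends to be attenuated by output noise
print(a_pred, a_sim)
```

Because the free-run simulation never feeds measured (noisy) outputs back into the model, its minimizer is less affected by the errors-in-variables bias that attenuates the prediction-error estimate.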
Application of an Error Statistics Estimation Method to the PSAS Forecast Error Covariance Model
Institute of Scientific and Technical Information of China (English)
无
2006-01-01
In atmospheric data assimilation systems, the forecast error covariance model is an important component. However, the parameters required by a forecast error covariance model are difficult to obtain due to the absence of the truth. This study applies an error statistics estimation method to the Physical-space Statistical Analysis System (PSAS) height-wind forecast error covariance model. This method consists of two components: the first component computes the error statistics by using the National Meteorological Center (NMC) method, which is a lagged-forecast difference approach, within the framework of the PSAS height-wind forecast error covariance model; the second obtains a calibration formula to rescale the error standard deviations provided by the NMC method. The calibration is against the error statistics estimated by using a maximum-likelihood estimation (MLE) with rawindsonde height observed-minus-forecast residuals. A complete set of formulas for estimating the error statistics and for the calibration is applied to a one-month-long dataset generated by a general circulation model of the Global Model and Assimilation Office (GMAO), NASA. There is a clear constant relationship between the error statistics estimates of the NMC-method and MLE. The final product provides a full set of 6-hour error statistics required by the PSAS height-wind forecast error covariance model over the globe. The features of these error statistics are examined and discussed.
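The two components can be sketched on synthetic data (all numbers hypothetical; the real method operates on gridded height-wind forecast fields): the NMC method takes the spread of lagged-forecast differences valid at the same time, and a calibration factor rescales it toward residual-based error statistics.

```python
import numpy as np

rng = np.random.default_rng(5)
days = 300
truth = rng.normal(size=days)                 # verifying "analysis" values
# hypothetical 24-h and 48-h forecasts valid at the same times
f24 = truth + 0.5 * rng.normal(size=days)
f48 = truth + 0.8 * rng.normal(size=days)

# NMC method: spread of lagged-forecast differences as an error proxy
nmc_std = (f48 - f24).std(ddof=1)

# reference error statistic from observed-minus-forecast residuals
# (here computed against the truth directly, for illustration; the paper
# uses a maximum-likelihood estimate from rawindsonde residuals)
omf_std = (f24 - truth).std(ddof=1)

calibration = omf_std / nmc_std               # rescaling factor
print(nmc_std, omf_std, calibration)
```

The roughly constant ratio between the two statistics is what the paper exploits: the NMC spread can be computed everywhere, then rescaled by a calibration formula fitted where residual statistics are available.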
Decentralized estimation of sensor systematic error and target state vector
Institute of Scientific and Technical Information of China (English)
贺明科; 王正明; 朱炬波
2003-01-01
An accurate estimate of the sensor systematic error is significant for improving the performance of a target tracking system. Existing methods usually append the bias states directly to the variable states to form augmented state vectors and apply a conventional Kalman estimator to the augmented system. This is computationally expensive, and much work has been devoted to decoupling the variable states from the systematic errors. However, decentralized estimation of systematic errors, reduction of the amount of computation, and decentralized track fusion have yet to be realized. This paper addresses the distributed track fusion problem in a multi-sensor tracking system in the presence of sensor bias. With the proposed method, the variable states and systematic errors are decoupled, and decentralized systematic error estimation and track fusion are achieved. Simulation results verify that the method yields accurate estimates of both the systematic error and the state vector.
ASYMPTOTICS OF MEAN TRANSFORMATION ESTIMATORS WITH ERRORS IN VARIABLES MODEL
Institute of Scientific and Technical Information of China (English)
CUI Hengjian
2005-01-01
This paper addresses the estimation, and its asymptotics, of the mean transformation θ = E[h(X)] of a random variable X based on n i.i.d. observations from the errors-in-variables model Y = X + v, where v is a measurement error with a known distribution and h(.) is a known smooth function. The asymptotics of the deconvolution kernel estimator are given for ordinary smooth error distributions, and those of the expectation extrapolation estimator for normal error distributions. Under some mild regularity conditions, consistency and asymptotic normality are obtained for both types of estimators. Simulations show that they perform well.
Optimal estimation of suspended-sediment concentrations in streams
Holtschlag, D.J.
2001-01-01
Optimal estimators are developed for computation of suspended-sediment concentrations in streams. The estimators are a function of parameters, computed by use of generalized least squares, which simultaneously account for effects of streamflow, seasonal variations in average sediment concentrations, a dynamic error component, and the uncertainty in concentration measurements. The parameters are used in a Kalman filter for on-line estimation and an associated smoother for off-line estimation of suspended-sediment concentrations. The accuracies of the optimal estimators are compared with alternative time-averaging interpolators and flow-weighting regression estimators by use of long-term daily-mean suspended-sediment concentration and streamflow data from 10 sites within the United States. For sampling intervals from 3 to 48 days, the standard errors of on-line and off-line optimal estimators ranged from 52.7 to 107%, and from 39.5 to 93.0%, respectively. The corresponding standard errors of linear and cubic-spline interpolators ranged from 48.8 to 158%, and from 50.6 to 176%, respectively. The standard errors of simple and multiple regression estimators, which did not vary with the sampling interval, were 124 and 105%, respectively. Thus, the optimal off-line estimator (Kalman smoother) had the lowest error characteristics of those evaluated. Because suspended-sediment concentrations are typically measured at less than 3-day intervals, use of optimal estimators will likely result in significant improvements in the accuracy of continuous suspended-sediment concentration records. Additional research on the integration of direct suspended-sediment concentration measurements and optimal estimators applied at hourly or shorter intervals is needed.
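The on-line/off-line pair described above can be sketched as a scalar Kalman filter with a Rauch-Tung-Striebel smoother on a random-walk log-concentration state. This is an illustrative toy model with invented noise levels, not the paper's generalized-least-squares parameterization.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 120
truth = 3.0 + np.cumsum(0.1 * rng.normal(size=n))   # daily log concentration
obs = np.full(n, np.nan)
idx = np.arange(0, n, 6)                            # sampled every 6th day
obs[idx] = truth[idx] + 0.2 * rng.normal(size=idx.size)

q, r = 0.1 ** 2, 0.2 ** 2                           # process / obs variances
x, P = 0.0, 10.0                                    # diffuse prior
xp = np.zeros(n); Pp = np.zeros(n)                  # predicted
xf = np.zeros(n); Pf = np.zeros(n)                  # filtered (on-line)
for k in range(n):
    P = P + q                                       # random-walk predict
    xp[k], Pp[k] = x, P
    if not np.isnan(obs[k]):                        # update on sampled days
        K = P / (P + r)
        x = x + K * (obs[k] - x)
        P = (1.0 - K) * P
    xf[k], Pf[k] = x, P

# Rauch-Tung-Striebel smoother: off-line estimate using all samples
xs = xf.copy(); Ps = Pf.copy()
for k in range(n - 2, -1, -1):
    C = Pf[k] / Pp[k + 1]
    xs[k] = xf[k] + C * (xs[k + 1] - xp[k + 1])
    Ps[k] = Pf[k] + C ** 2 * (Ps[k + 1] - Pp[k + 1])

rmse_f = np.sqrt(np.mean((xf - truth) ** 2))        # on-line error
rmse_s = np.sqrt(np.mean((xs - truth) ** 2))        # off-line error
print(rmse_f, rmse_s)
```

As in the abstract, the smoother (off-line) variance never exceeds the filter (on-line) variance at any interior point, which is why the off-line estimator had the lowest error characteristics in the study.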
Boundary Integral Equations and A Posteriori Error Estimates
Institute of Scientific and Technical Information of China (English)
YU Dehao; ZHAO Longhua
2005-01-01
Adaptive methods have been rapidly developed and applied in many fields of scientific and engineering computing. Reliable and efficient a posteriori error estimates play key roles for both adaptive finite element and boundary element methods. The aim of this paper is to develop a posteriori error estimates for boundary element methods. The standard a posteriori error estimates for boundary element methods are obtained from the classical boundary integral equations. This paper presents hyper-singular a posteriori error estimates based on the hyper-singular integral equations. Three kinds of residuals are used as the estimates for boundary element errors. The theoretical analysis and numerical examples show that the hyper-singular residuals are good a posteriori error indicators in many adaptive boundary element computations.
Reliable estimation of orbit errors in spaceborne SAR interferometry
Bähr, H.; Hanssen, R.F.
2012-01-01
An approach to improve orbital state vectors by orbit error estimates derived from residual phase patterns in synthetic aperture radar interferograms is presented. For individual interferograms, an error representation by two parameters is motivated: the baseline error in cross-range and the rate of
Estimation in the polynomial errors-in-variables model
Institute of Scientific and Technical Information of China (English)
无
2002-01-01
Estimators are presented for the coefficients of the polynomial errors-in-variables (EV) model when replicated observations are taken at some experimental points. These estimators are shown to be strongly consistent under mild conditions.
EXPLICIT ERROR ESTIMATES FOR MIXED AND NONCONFORMING FINITE ELEMENTS
Institute of Scientific and Technical Information of China (English)
Shipeng Mao; Zhong-Ci Shi
2009-01-01
In this paper, we study the explicit expressions of the constants in the error estimates of the lowest order mixed and nonconforming finite element methods. We start with an explicit relation between the error constant of the lowest order Raviart-Thomas interpolation and the geometric characters of the triangle. This gives an explicit error constant of the lowest order mixed finite element method. Furthermore, similar results can be extended to the nonconforming P1 scheme based on its close connection with the lowest order Raviart-Thomas method. Meanwhile, such explicit a priori error estimates can be used as computable error bounds, and they are also consistent with the maximal angle condition for the optimal error estimates of mixed and nonconforming finite element methods. Mathematics subject classification: 65N12, 65N15, 65N30, 65N50.
Deconvolution Estimation in Measurement Error Models: The R Package decon
Directory of Open Access Journals (Sweden)
Xiao-Feng Wang
2011-03-01
Data from many scientific areas often come with measurement error. Density or distribution function estimation from contaminated data and nonparametric regression with errors in variables are two important topics in measurement error models. In this paper, we present a new software package decon for R, which contains a collection of functions that use the deconvolution kernel methods to deal with the measurement error problems. The functions allow the errors to be either homoscedastic or heteroscedastic. To make the deconvolution estimators computationally more efficient in R, we adapt the fast Fourier transform algorithm for density estimation with error-free data to the deconvolution kernel estimation. We discuss the practical selection of the smoothing parameter in deconvolution methods and illustrate the use of the package through both simulated and real examples.
Estimating soil zinc concentrations using reflectance spectroscopy
Sun, Weichao; Zhang, Xia
2017-06-01
Soil contamination by heavy metals has become an increasingly severe threat to the natural environment and human health. Efficient investigation of contamination status is essential to soil protection and remediation. Visible and near-infrared reflectance spectroscopy (VNIRS) has been regarded as an alternative for monitoring soil contamination by heavy metals. Generally, the entire set of VNIR spectral bands is employed to estimate heavy metal concentration, which lacks interpretability and requires much calculation. In this study, 74 soil samples were collected from Hunan Province, China, and their reflectance spectra were used to estimate zinc (Zn) concentration in soil. Organic matter and clay minerals strongly adsorb Zn in soil, so the spectral bands associated with them were used for estimation with genetic algorithm based partial least squares regression (GA-PLSR). The entire VNIR spectral range, the bands associated with organic matter, and the bands associated with clay minerals were used as comparisons. The root mean square error of prediction, residual prediction deviation, and coefficient of determination (R2) for the model developed using the combined bands of organic matter and clay minerals were 329.65 mg kg-1, 1.96 and 0.73, better than 341.88 mg kg-1, 1.89 and 0.71 for the entire VNIR range, 492.65 mg kg-1, 1.31 and 0.40 for the organic matter bands, and 430.26 mg kg-1, 1.50 and 0.54 for the clay mineral bands. Additionally, in consideration of atmospheric water vapor absorption in field spectral measurements, the combined bands of organic matter and the absorption around 2200 nm were used for estimation and achieved high prediction accuracy, with R2 reaching 0.64. The results indicate the great potential of soil reflectance spectroscopy for estimating Zn concentrations in soil.
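The reported accuracy metrics are straightforward to compute. The sketch below uses hypothetical predicted and measured Zn concentrations to show how the root mean square error of prediction (RMSEP), residual prediction deviation (RPD) and R2 are obtained.

```python
import numpy as np

# hypothetical predicted vs. measured Zn concentrations (mg/kg)
measured = np.array([120.0, 450.0, 980.0, 300.0, 1500.0, 760.0, 210.0, 640.0])
predicted = np.array([150.0, 400.0, 900.0, 350.0, 1400.0, 800.0, 260.0, 600.0])

rmsep = np.sqrt(np.mean((predicted - measured) ** 2))   # RMSE of prediction
rpd = measured.std(ddof=1) / rmsep                      # residual prediction deviation
ss_res = np.sum((measured - predicted) ** 2)
ss_tot = np.sum((measured - measured.mean()) ** 2)
r2 = 1.0 - ss_res / ss_tot                              # coefficient of determination
print(rmsep, rpd, r2)
```

RPD normalizes the prediction error by the spread of the reference values, so a larger RPD (as for the combined-band model in the abstract) indicates a more useful calibration.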
Error Estimation for Indoor 802.11 Location Fingerprinting
DEFF Research Database (Denmark)
Lemelson, Hendrik; Kjærgaard, Mikkel Baun; Hansen, Rene
2009-01-01
that is inherent to 802.11-based positioning systems can be estimated. Knowing the position error is crucial for many applications that rely on position information: End users could be informed about the estimated position error to avoid frustration in case the system gives faulty position information. Service...
Sampling errors of quantile estimations from finite samples of data
Roy, Philippe; Gachon, Philippe
2016-01-01
Empirical relationships are derived for the expected sampling error of quantile estimations using Monte Carlo experiments for two frequency distributions frequently encountered in climate sciences. The relationships found are expressed as a scaling factor times the standard error of the mean; these give a quick tool to estimate the uncertainty of quantiles for a given finite sample size.
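The Monte Carlo procedure can be sketched for the 95th percentile of a standard normal sample (illustrative parameters; the paper covers other distributions and sample sizes).

```python
import numpy as np

rng = np.random.default_rng(1)
n, reps, q = 100, 2000, 0.95

# Monte Carlo sampling error of the 95th-percentile estimator, N(0, 1)
est = np.array([np.quantile(rng.normal(size=n), q) for _ in range(reps)])
se_quantile = est.std(ddof=1)

se_mean = 1.0 / np.sqrt(n)            # standard error of the mean (sigma = 1)
scale = se_quantile / se_mean         # scaling factor, as in the abstract
print(scale)
```

Expressing the quantile sampling error as a multiple of the standard error of the mean gives the quick uncertainty rule of thumb the abstract describes.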
Backward-gazing method for measuring solar concentrators shape errors.
Coquand, Mathieu; Henault, François; Caliot, Cyril
2017-03-01
This paper describes a backward-gazing method for measuring the optomechanical errors of solar concentrating surfaces. It makes use of four cameras placed near the solar receiver that simultaneously record images of the sun reflected by the optical surfaces. Simple data processing then allows reconstruction of the slope and shape errors of the surfaces. The originality of the method lies in the use of generalized quad-cell formulas and approximate mathematical relations between the slope errors of the mirrors and their reflected wavefront in the case of sun-tracking heliostats at high incidence angles. Numerical simulations demonstrate that the measurement accuracy is compliant with standard requirements of solar concentrating optics in the presence of noise or calibration errors. The method is suited to fine characterization of the optical and mechanical errors of heliostats and their facets, or to providing better control for real-time sun tracking.
Bias in parameter estimation of form errors
Zhang, Xiangchao; Zhang, Hao; He, Xiaoying; Xu, Min
2014-09-01
The surface form qualities of precision components are critical to their functionalities. In precision instruments, algebraic fitting is usually adopted and the form deviations are assessed in the z direction only, in which case the deviations at steep regions of curved surfaces are over-weighted, making the fitted results biased and unstable. In this paper, orthogonal distance fitting is performed for curved surfaces and the form errors are measured along the normal vectors of the fitted ideal surfaces. The relative bias of the form error parameters between the vertical assessment and the orthogonal assessment is calculated analytically and expressed as a function of the surface slopes. The parameter bias caused by the non-uniformity of data points can be corrected by weighting, i.e. each data point is weighted by the 3D area of the Voronoi cell around its projection point on the fitted surface. Finally, numerical experiments are given to compare different fitting methods and definitions of the form error parameters. The proposed definition is demonstrated to show great superiority in terms of stability and unbiasedness.
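The over-weighting at steep regions follows from geometry: a point offset a distance d along the surface normal of z = f(x) produces a z-direction residual of d * sqrt(1 + f'(x)**2). A minimal numeric check (example surface invented for illustration):

```python
import numpy as np

# a point offset by d along the normal of z = f(x) shows up as a
# z-direction residual of d * sqrt(1 + f'(x)**2)
x = np.linspace(0.0, 2.0, 5)
slope = 2.0 * x                  # f(x) = x**2, so f'(x) = 2x
d = 0.01                         # true orthogonal deviation
vertical = d * np.sqrt(1.0 + slope ** 2)
print(vertical / d)              # over-weighting factor, grows with slope
```

At slope 4 the vertical residual exceeds the true orthogonal deviation by a factor of sqrt(17), which is exactly the bias the orthogonal assessment removes.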
Parameter estimation and error analysis in environmental modeling and computation
Kalmaz, E. E.
1986-01-01
A method for the estimation of parameters and error analysis in the development of nonlinear modeling for environmental impact assessment studies is presented. The modular computer program can interactively fit different nonlinear models to the same set of data, dynamically changing the error structure associated with observed values. Parameter estimation techniques and sequential estimation algorithms employed in parameter identification and model selection are first discussed. Then, least-square parameter estimation procedures are formulated, utilizing differential or integrated equations, and are used to define a model for association of error with experimentally observed data.
Bayesian ensemble approach to error estimation of interatomic potentials
DEFF Research Database (Denmark)
Frederiksen, Søren Lund; Jacobsen, Karsten Wedel; Brown, K.S.;
2004-01-01
Using a Bayesian approach a general method is developed to assess error bars on predictions made by models fitted to data. The error bars are estimated from fluctuations in ensembles of models sampling the model-parameter space with a probability density set by the minimum cost. The method is applied to the development of interatomic potentials for molybdenum using various potential forms and databases based on atomic forces. The calculated error bars on elastic constants, gamma-surface energies, structural energies, and dislocation properties are shown to provide realistic estimates of the actual errors for the potentials.
A-posteriori error estimation for second order mechanical systems
Institute of Scientific and Technical Information of China (English)
Thomas Ruiner; Jörg Fehr; Bernard Haasdonk; Peter Eberhard
2012-01-01
One important issue for the simulation of flexible multibody systems is the reduction of the flexible bodies' degrees of freedom. As far as safety questions are concerned, knowledge about the error introduced by the reduction of the flexible degrees of freedom is helpful and very important. In this work, an a-posteriori error estimator for linear first order systems is extended for error estimation of mechanical second order systems. Due to the special second order structure of mechanical systems, an improvement of the a-posteriori error estimator is achieved. A major advantage of the a-posteriori error estimator is that it is independent of the reduction technique used. Therefore, it can be used for moment-matching based, Gramian matrices based or modal based model reduction techniques. The capability of the proposed technique is demonstrated by the a-posteriori error estimation of a mechanical system, and a sensitivity analysis of the parameters involved in the error estimation process is conducted.
MPDATA error estimator for mesh adaptivity
Szmelter, Joanna; Smolarkiewicz, Piotr K.
2006-04-01
In multidimensional positive definite advection transport algorithm (MPDATA) the leading error as well as the first- and second-order solutions are known explicitly by design. This property is employed to construct refinement indicators for mesh adaptivity. Recent progress with the edge-based formulation of MPDATA facilitates the use of the method in an unstructured-mesh environment. In particular, the edge-based data structure allows for flow solvers to operate on arbitrary hybrid meshes, thereby lending itself to implementations of various mesh adaptivity techniques. A novel unstructured-mesh nonoscillatory forward-in-time (NFT) solver for compressible Euler equations is used to illustrate the benefits of adaptive remeshing as well as mesh movement and enrichment for the efficacy of MPDATA-based flow solvers. Validation against benchmark test cases demonstrates robustness and accuracy of the approach.
Unbiased bootstrap error estimation for linear discriminant analysis.
Vu, Thang; Sima, Chao; Braga-Neto, Ulisses M; Dougherty, Edward R
2014-12-01
Convex bootstrap error estimation is a popular tool for classifier error estimation in gene expression studies. A basic question is how to determine the weight for the convex combination between the basic bootstrap estimator and the resubstitution estimator such that the resulting estimator is unbiased at finite sample sizes. The well-known 0.632 bootstrap error estimator uses asymptotic arguments to propose a fixed 0.632 weight, whereas the more recent 0.632+ bootstrap error estimator attempts to set the weight adaptively. In this paper, we study the finite sample problem in the case of linear discriminant analysis under Gaussian populations. We derive exact expressions for the weight that guarantee unbiasedness of the convex bootstrap error estimator in the univariate and multivariate cases, without making asymptotic simplifications. Using exact computation in the univariate case and an accurate approximation in the multivariate case, we obtain the required weight and show that it can deviate significantly from the constant 0.632 weight, depending on the sample size and Bayes error for the problem. The methodology is illustrated by application on data from a well-known cancer classification study.
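The convex-combination structure can be sketched with a numpy-only LDA on synthetic Gaussian classes. The fixed 0.632 weight shown here is the classical choice the paper improves upon; the data and sample sizes are invented.

```python
import numpy as np

rng = np.random.default_rng(2)

def lda_fit(X, y):
    # pooled-covariance linear discriminant (two classes, labels 0/1)
    m0, m1 = X[y == 0].mean(axis=0), X[y == 1].mean(axis=0)
    Sw = np.cov(X[y == 0].T) + np.cov(X[y == 1].T)
    w = np.linalg.solve(Sw, m1 - m0)
    b = -w @ (m0 + m1) / 2.0
    return w, b

def lda_err(w, b, X, y):
    return float(np.mean((X @ w + b > 0).astype(int) != y))

# invented two-class Gaussian sample
X = np.r_[rng.normal(0.0, 1.0, (25, 2)), rng.normal(1.0, 1.0, (25, 2))]
y = np.r_[np.zeros(25, dtype=int), np.ones(25, dtype=int)]

w, b = lda_fit(X, y)
resub = lda_err(w, b, X, y)                 # optimistic resubstitution

n, B = len(y), 100
zero_boot = []
for _ in range(B):
    idx = rng.integers(0, n, n)             # bootstrap sample (with replacement)
    oob = np.setdiff1d(np.arange(n), idx)   # points left out of the sample
    if oob.size == 0 or np.unique(y[idx]).size < 2:
        continue
    wb, bb = lda_fit(X[idx], y[idx])
    zero_boot.append(lda_err(wb, bb, X[oob], y[oob]))
b0 = float(np.mean(zero_boot))              # pessimistic zero bootstrap

e632 = 0.368 * resub + 0.632 * b0           # classical fixed-weight estimator
print(resub, b0, e632)
```

The paper's contribution is to replace the fixed 0.632 weight with an exact, sample-size- and Bayes-error-dependent weight that makes the convex combination unbiased for LDA under Gaussian populations.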
Ranking translations using error analysis and quality estimation
Fishel, Mark
2013-01-01
We describe TerrorCat, a submission to this year's metrics shared task. It is a machine learning-based metric that is trained on manual ranking data from the WMT shared tasks 2008-2012. Input features are generated by applying automatic translation error analysis to the translation hypotheses and calculating the error category frequency differences. We additionally experiment with adding quality estimation features alongside the error analysis-based ones. When evaluated against WMT'2012 rank...
Approaches to relativistic positioning around Earth and error estimations
Puchades, Neus
2016-01-01
In the context of relativistic positioning, the coordinates of a given user may be calculated by using suitable information broadcast by a 4-tuple of satellites. Our 4-tuples belong to the Galileo constellation. Recently, we estimated the positioning errors due to uncertainties in the satellite world lines (U-errors). A distribution of U-errors was obtained, at various times, in a set of points covering a large region surrounding Earth. Here, the positioning errors associated to the simplifying assumption that photons move in Minkowski space-time (S-errors) are estimated and compared with the U-errors. Both errors have been calculated for the same points and times to make comparisons possible. For a certain realistic modeling of the world line uncertainties, the estimated S-errors have proved to be smaller than the U-errors, which shows that the approach based on the assumption that the Earth's gravitational field produces negligible effects on photons may be used in a large region surrounding Earth. The appl...
Estimate of error bounds in the improved support vector regression
Institute of Scientific and Technical Information of China (English)
SUN Yanfeng; LIANG Yanchun; WU Chunguo; YANG Xiaowei; LEE Heow Pueh; LIN Wu Zhong
2004-01-01
An estimate of a generalization error bound of the improved support vector regression (SVR) is provided based on our previous work. The boundedness of the error of the improved SVR is proved when the algorithm is applied to function approximation.
Using doppler radar images to estimate aircraft navigational heading error
Doerry, Armin W [Albuquerque, NM; Jordan, Jay D [Albuquerque, NM; Kim, Theodore J [Albuquerque, NM
2012-07-03
A yaw angle error of a motion measurement system carried on an aircraft for navigation is estimated from Doppler radar images captured using the aircraft. At least two radar pulses aimed at respectively different physical locations in a targeted area are transmitted from a radar antenna carried on the aircraft. At least two Doppler radar images that respectively correspond to the at least two transmitted radar pulses are produced. These images are used to produce an estimate of the yaw angle error.
Electromagnetic simulations for salinity index error estimation
Wilczek, Andrzej; Szypłowska, Agnieszka; Kafarski, Marcin; Nakonieczna, Anna; Skierucha, Wojciech
2017-01-01
Soil salinity index (SI) is a measure of salt concentration in soil water. The salinity index is calculated as a partial derivative of the soil bulk electrical conductivity (EC) with respect to the bulk dielectric permittivity (DP). The paper focuses on the impact of different sensitivity zones of the measured EC and DP on the accuracy of salinity index determination. For this purpose, a set of finite difference time domain (FDTD) simulations was prepared. The simulations were carried out on the model of a reflectometric probe consisting of three parallel rods inserted into a modelled material of simulated DP and EC. Combinations of stratified distributions of DP and EC were tested. An experimental verification of the simulation results on selected cases was performed. The results showed that electromagnetic simulations can provide useful data to improve the accuracy of soil SI determination.
Small-Sample Error Estimation for Bagged Classification Rules
Vu, T. T.; Braga-Neto, U. M.
2010-12-01
Application of ensemble classification rules in genomics and proteomics has become increasingly common. However, the problem of error estimation for these classification rules, particularly for bagging under the small-sample settings prevalent in genomics and proteomics, is not well understood. Breiman proposed the "out-of-bag" method for estimating statistics of bagged classifiers, which was subsequently applied by other authors to estimate the classification error. In this paper, we give an explicit definition of the out-of-bag estimator that is intended to remove estimator bias, by formulating carefully how the error count is normalized. We also report the results of an extensive simulation study of bagging of common classification rules, including LDA, 3NN, and CART, applied on both synthetic and real patient data, corresponding to the use of common error estimators such as resubstitution, leave-one-out, cross-validation, basic bootstrap, bootstrap 632, bootstrap 632 plus, bolstering, semi-bolstering, in addition to the out-of-bag estimator. The results from the numerical experiments indicated that the performance of the out-of-bag estimator is very similar to that of leave-one-out; in particular, the out-of-bag estimator is slightly pessimistically biased. The performance of the other estimators is consistent with their performance with the corresponding single classifiers, as reported in other studies.
Bayesian error estimation in density-functional theory
DEFF Research Database (Denmark)
Mortensen, Jens Jørgen; Kaasbjerg, Kristen; Frederiksen, Søren Lund
2005-01-01
We present a practical scheme for performing error estimates for density-functional theory calculations. The approach, which is based on ideas from Bayesian statistics, involves creating an ensemble of exchange-correlation functionals by comparing with an experimental database of binding energies for molecules and solids. Fluctuations within the ensemble can then be used to estimate errors relative to experiment on calculated quantities such as binding energies, bond lengths, and vibrational frequencies. It is demonstrated that the error bars on energy differences may vary by orders of magnitude...
Laser Doppler anemometer measurements using nonorthogonal velocity components: error estimates.
Orloff, K L; Snyder, P K
1982-01-15
Laser Doppler anemometers (LDAs) that are arranged to measure nonorthogonal velocity components (from which orthogonal components are computed through transformation equations) are more susceptible to calibration and sampling errors than are systems with uncoupled channels. In this paper uncertainty methods and estimation theory are used to evaluate, respectively, the systematic and statistical errors that are present when such devices are applied to the measurement of mean velocities in turbulent flows. Statistical errors are estimated for two-channel LDA data that are either correlated or uncorrelated. For uncorrelated data the directional uncertainty of the measured velocity vector is considered for applications where mean streamline patterns are desired.
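The amplification of statistical error by nonorthogonal geometry can be illustrated directly: if two channels measure velocity components along directions separated by an angle theta, the orthogonal cross component w = (v2 - v1 cos theta)/sin theta inherits an inflated standard deviation. The sketch assumes independent channel errors and uses hypothetical numbers.

```python
import numpy as np

theta = np.deg2rad(30.0)       # angle between the two measured components
s1 = s2 = 0.01                 # per-channel standard deviations (hypothetical)

# orthogonal cross component: w = (v2 - v1*cos(theta)) / sin(theta)
# with independent channel errors, its standard deviation is
sw = np.sqrt(s2 ** 2 + (np.cos(theta) * s1) ** 2) / np.sin(theta)
print(sw / s1)                 # error amplification of the transformation
```

As theta shrinks toward coupled, nearly parallel channels, the 1/sin(theta) factor blows up, which is the coupling sensitivity the abstract refers to.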
An Empirical State Error Covariance Matrix for Batch State Estimation
Frisbee, Joseph H., Jr.
2011-01-01
State estimation techniques serve effectively to provide mean state estimates. However, the state error covariance matrices provided as part of these techniques cannot always be trusted to adequately describe the uncertainty in the estimated states. A specific problem with the traditional form of state error covariance matrices is that they represent only a mapping of the assumed observation error characteristics into the state space. Any errors that arise from other sources (environment modeling, precision, etc.) are not directly represented in a traditional, theoretical state error covariance matrix. Consider that an actual observation contains only measurement error and that an estimated observation contains all other errors, known and unknown. It then follows that a measurement residual (the difference between expected and observed measurements) contains all errors for that measurement. Therefore, a direct and appropriate inclusion of the actual measurement residuals in the state error covariance matrix will result in an empirical state error covariance matrix. This empirical state error covariance matrix will fully account for the error in the state estimate. By way of a literal reinterpretation of the equations involved in the weighted least squares estimation algorithm, it is possible to arrive at an appropriate, and formally correct, empirical state error covariance matrix. The first specific step of the method is to use the average form of the weighted measurement residual variance performance index rather than its usual total weighted residual form. Next it is helpful to interpret the solution to the normal equations as the average of a collection of sample vectors drawn from a hypothetical parent population. From here, using a standard statistical analysis approach, it directly follows how to determine the standard empirical state error covariance matrix. This matrix will contain the total uncertainty in the
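The idea of informing the covariance with actual residuals can be sketched for a linear weighted least squares problem. This is a simplified stand-in, in the spirit of the abstract rather than its exact algorithm: the theoretical covariance is rescaled by the average weighted residual variance, so mis-specified observation errors show up in the reported uncertainty.

```python
import numpy as np

rng = np.random.default_rng(3)
n, p = 200, 2
H = np.c_[np.ones(n), rng.normal(size=n)]     # observation matrix
x_true = np.array([1.0, -0.5])
sigma_assumed, sigma_actual = 0.1, 0.3        # mis-specified noise level
z = H @ x_true + sigma_actual * rng.normal(size=n)

W = np.eye(n) / sigma_assumed ** 2            # assumed observation weights
N = H.T @ W @ H                               # normal-equations matrix
x_hat = np.linalg.solve(N, H.T @ W @ z)

P_theory = np.linalg.inv(N)                   # maps assumed errors only
r = z - H @ x_hat                             # measurement residuals
s2 = (r @ W @ r) / (n - p)                    # average weighted residual variance
P_emp = s2 * P_theory                         # residual-informed covariance
print(np.sqrt(np.diag(P_theory)), np.sqrt(np.diag(P_emp)))
```

With the noise level understated by a factor of three, the theoretical standard deviations are three times too small, while the residual-scaled version recovers realistic uncertainty.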
Multiadaptive Galerkin Methods for ODEs III: A Priori Error Estimates
Logg, Anders
2012-01-01
The multiadaptive continuous/discontinuous Galerkin methods mcG(q) and mdG(q) for the numerical solution of initial value problems for ordinary differential equations are based on piecewise polynomial approximation of degree q on partitions in time with time steps which may vary for different components of the computed solution. In this paper, we prove general order a priori error estimates for the mcG(q) and mdG(q) methods. To prove the error estimates, we represent the error in terms of a discrete dual solution and the residual of an interpolant of the exact solution. The estimates then follow from interpolation estimates, together with stability estimates for the discrete dual solution.
National Research Council Canada - National Science Library
Shiraishi, Hiroshi
2010-01-01
.... Based on this we construct an estimator of the lower tail of the estimation error. Moreover, we introduce the Estimation Error Efficient Portfolio which considers the estimation error as the portfolio risk...
Adaptive Error Estimation in Linearized Ocean General Circulation Models
Chechelnitsky, Michael Y.
1999-01-01
Data assimilation methods are routinely used in oceanography. The statistics of the model and measurement errors need to be specified a priori. This study addresses the problem of estimating model and measurement error statistics from observations. We start by testing innovation-based methods of adaptive error estimation with low-dimensional models, applied in the North Pacific (5-60 deg N, 132-252 deg E) to TOPEX/POSEIDON (T/P) sea level anomaly data, acoustic tomography data from the ATOC project, and the MIT General Circulation Model (GCM). A reduced-state linear model that describes large-scale internal (baroclinic) error dynamics is used. The methods are shown to be sensitive to the initial guess for the error statistics and the type of observations. A new off-line approach is developed, the covariance matching approach (CMA), where covariance matrices of model-data residuals are "matched" to their theoretical expectations using familiar least squares methods. This method uses observations directly instead of the innovations sequence and is shown to be related to the MT method and the method of Fu et al. (1993). Twin experiments using the same linearized MIT GCM suggest that altimetric data are ill-suited to the estimation of internal GCM errors, but that such estimates can in theory be obtained using acoustic data. The CMA is then applied to T/P sea level anomaly data and a linearization of a global GFDL GCM which uses two vertical modes. We show that the CMA method can be used with a global model and a global data set, and that the estimates of the error statistics are robust. We show that the fraction of the GCM-T/P residual variance explained by the model error is larger than that derived in Fukumori et al. (1999) with the method of Fu et al. (1993). Most of the model error is explained by the barotropic mode. However, we find that the impact of the change in the error statistics on the data assimilation estimates is very small. This is explained by the large
A novel TOA estimation method with effective NLOS error reduction
Institute of Scientific and Technical Information of China (English)
ZHANG Yi-heng; CUI Qi-mei; LI Yu-xiang; ZHANG Ping
2008-01-01
It is well known that non-line-of-sight (NLOS) error has been the major factor impeding the enhancement of accuracy for time of arrival (TOA) estimation and wireless positioning. This article proposes a novel method of TOA estimation that effectively reduces the NLOS error by 60%, compared with the traditional timing and synchronization method. By constructing orthogonal training sequences, this method converts the traditional TOA estimation into the detection of the first arrival path (FAP) in the NLOS multipath environment, and then estimates the TOA by the round-trip transmission (RTT) technique. Both theoretical analysis and numerical simulations prove that the proposed method achieves better performance than the traditional methods.
Estimating the Count Error in the Australian Census
Directory of Open Access Journals (Sweden)
Chipperfield James
2017-03-01
Full Text Available In many countries, counts of people are a key factor in the allocation of government resources. However, it is well known that errors arise in Census counting of people (e.g., undercoverage due to missing people). Therefore, it is common for national statistical agencies to conduct one or more “audit” surveys that are designed to estimate and remove systematic errors in Census counting. For example, the Australian Bureau of Statistics (ABS) conducts a single audit sample, called the Post Enumeration Survey (PES), shortly after each Australian Population Census. This article describes the estimator used by the ABS to estimate the count of people in Australia. Key features of this estimator are that it is unbiased when there is systematic measurement error in Census counting and when nonresponse to the PES is nonignorable.
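The abstract does not spell out the ABS estimator, so as a hedged illustration of how an audit survey corrects a census count, here is the classic dual-system (Petersen) estimator with invented numbers; a simplified stand-in, not the actual ABS method:

```python
# Dual-system (Petersen) estimate: a simplified stand-in for the kind of
# census + audit-survey estimator the abstract describes (all numbers
# hypothetical, and this is not the actual ABS methodology).
census_count = 9_500_000      # people counted in the census
pes_sample = 50_000           # people found by the audit (PES) sample
matched = 47_500              # PES people who were also counted in the census

coverage_rate = matched / pes_sample          # estimated census coverage: 0.95
estimated_population = census_count / coverage_rate
print(round(estimated_population))            # → 10000000
```

The census count is divided by the estimated coverage rate, so systematic undercoverage is removed to the extent the audit sample measures it.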
Sampling errors in satellite estimates of tropical rain
Mcconnell, Alan; North, Gerald R.
1987-01-01
The GATE rainfall data set is used in a statistical study to estimate the sampling errors that might be expected for the type of snapshot sampling that a low earth-orbiting satellite makes. For averages over the entire 400-km square and for the duration of several weeks, strong evidence is found that sampling errors less than 10 percent can be expected in contributions from each of four rain rate categories which individually account for about one quarter of the total rain.
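The kind of snapshot-sampling error studied here can be illustrated with a small Monte Carlo sketch: a synthetic, intermittent rain-rate series stands in for the GATE data (all parameters below are invented), and a satellite "sees" the field only at fixed revisit intervals:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic area-averaged rain-rate time series (hypothetical stand-in for
# GATE data): hourly values over three weeks, intermittent and skewed.
hours = 24 * 21
rain = rng.gamma(shape=0.5, scale=2.0, size=hours)

true_mean = rain.mean()

# Snapshot sampling: an overpass sees the field once every 12 hours.
revisit = 12
errors = []
for offset in range(revisit):
    snap_mean = rain[offset::revisit].mean()
    errors.append(abs(snap_mean - true_mean) / true_mean)

print(f"worst relative sampling error: {max(errors):.1%}")
```

Averaging over longer periods or larger areas shrinks this error, which is the mechanism behind the abstract's sub-10-percent finding.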
Investigation of error sources in regional inverse estimates of greenhouse gas emissions in Canada
Directory of Open Access Journals (Sweden)
E. Chan
2015-08-01
Full Text Available Inversion models can use atmospheric concentration measurements to estimate surface fluxes. This study is an evaluation of the errors in a regional flux inversion model for different provinces of Canada: Alberta (AB), Saskatchewan (SK) and Ontario (ON). Using CarbonTracker model results as the target, the synthetic data experiment analyses examined the impacts of the errors from the Bayesian optimisation method, prior flux distribution and the atmospheric transport model, as well as their interactions. The scaling factors for different sub-regions were estimated by the Markov chain Monte Carlo (MCMC) simulation and cost function minimization (CFM) methods. The CFM method results are sensitive to the relative size of the assumed model-observation mismatch and prior flux error variances. Experiment results show that the estimation error increases with the number of sub-regions using the CFM method. For the region definitions that lead to realistic flux estimates, the numbers of sub-regions for the western region of AB/SK combined and the eastern region of ON are 11 and 4 respectively. The corresponding annual flux estimation errors for the western and eastern regions using the MCMC (CFM) method are -7 and -3 % (0 and 8 %) respectively, when there is only prior flux error. The estimation errors increase to 36 and 94 % (40 and 232 %) resulting from transport model error alone. When prior and transport model errors co-exist in the inversions, the estimation errors become 5 and 85 % (29 and 201 %). This result indicates that estimation errors are dominated by the transport model error, and that the errors can in fact cancel each other and propagate to the flux estimates non-linearly. In addition, it is possible for the posterior flux estimates to differ more from the target fluxes than the prior estimates do, and the posterior uncertainty estimates can be unrealistically small, failing to cover the target. The systematic evaluation of the different components of the
Investigation of error sources in regional inverse estimates of greenhouse gas emissions in Canada
Chan, E.; Chan, D.; Ishizawa, M.; Vogel, F.; Brioude, J.; Delcloo, A.; Wu, Y.; Jin, B.
2015-08-01
Inversion models can use atmospheric concentration measurements to estimate surface fluxes. This study is an evaluation of the errors in a regional flux inversion model for different provinces of Canada: Alberta (AB), Saskatchewan (SK) and Ontario (ON). Using CarbonTracker model results as the target, the synthetic data experiment analyses examined the impacts of the errors from the Bayesian optimisation method, prior flux distribution and the atmospheric transport model, as well as their interactions. The scaling factors for different sub-regions were estimated by the Markov chain Monte Carlo (MCMC) simulation and cost function minimization (CFM) methods. The CFM method results are sensitive to the relative size of the assumed model-observation mismatch and prior flux error variances. Experiment results show that the estimation error increases with the number of sub-regions using the CFM method. For the region definitions that lead to realistic flux estimates, the numbers of sub-regions for the western region of AB/SK combined and the eastern region of ON are 11 and 4 respectively. The corresponding annual flux estimation errors for the western and eastern regions using the MCMC (CFM) method are -7 and -3 % (0 and 8 %) respectively, when there is only prior flux error. The estimation errors increase to 36 and 94 % (40 and 232 %) resulting from transport model error alone. When prior and transport model errors co-exist in the inversions, the estimation errors become 5 and 85 % (29 and 201 %). This result indicates that estimation errors are dominated by the transport model error, and that the errors can in fact cancel each other and propagate to the flux estimates non-linearly. In addition, it is possible for the posterior flux estimates to differ more from the target fluxes than the prior estimates do, and the posterior uncertainty estimates can be unrealistically small, failing to cover the target. The systematic evaluation of the different components of the inversion
CME Velocity and Acceleration Error Estimates Using the Bootstrap Method
Michalek, Grzegorz; Gopalswamy, Nat; Yashiro, Seiji
2017-08-01
The bootstrap method is used to determine errors of basic attributes of coronal mass ejections (CMEs) visually identified in images obtained by the Solar and Heliospheric Observatory (SOHO) mission's Large Angle and Spectrometric Coronagraph (LASCO) instruments. The basic parameters of CMEs are stored, among others, in a database known as the SOHO/LASCO CME catalog and are widely employed for many research studies. The basic attributes of CMEs (e.g. velocity and acceleration) are obtained from manually generated height-time plots. The subjective nature of manual measurements introduces random errors that are difficult to quantify, and in many studies the impact of such measurement errors is overlooked. In this study we present a new way to estimate measurement errors in the basic attributes of CMEs. This approach is computer-intensive because it requires repeating the original data analysis procedure several times using replicate datasets; this is commonly called the bootstrap method in the literature. We show that the bootstrap approach can be used to estimate the errors of the basic attributes of CMEs having moderately large numbers of height-time measurements. The velocity errors are in the vast majority of cases small and depend mostly on the number of height-time points measured for a particular event. In the case of acceleration, the errors are significant, and for more than half of all CMEs they are larger than the acceleration itself.
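The procedure the abstract describes, refitting replicate height-time datasets to obtain an error on the fitted velocity, can be sketched as follows. The height-time points and units are invented, and a simple pairs bootstrap with a linear fit stands in for the catalog's actual measurement pipeline:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical CME height-time measurements (time in hours, height in
# solar radii); the true slope (velocity) here is 1.5 R_sun/h by design.
t = np.linspace(0.0, 3.0, 12)
h = 2.0 + 1.5 * t + rng.normal(scale=0.1, size=t.size)

v_fit = np.polyfit(t, h, 1)[0]

# Bootstrap: refit on resampled (t, h) pairs many times; the spread of
# the refitted slopes estimates the velocity error.
boot = []
for _ in range(2000):
    idx = rng.integers(0, t.size, t.size)
    boot.append(np.polyfit(t[idx], h[idx], 1)[0])
v_err = float(np.std(boot))

print(f"v = {v_fit:.2f} ± {v_err:.2f} R_sun/h")
```

Fewer height-time points widen the bootstrap spread, matching the abstract's observation that errors depend mostly on the number of measured points.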
Error-space estimate method for generalized synergic target tracking
Institute of Scientific and Technical Information of China (English)
Ming CEN; Chengyu FU; Ke CHEN; Xingfa LIU
2009-01-01
To improve the tracking accuracy and stability of an optic-electronic target tracking system, the concept of a generalized synergic target and an algorithm named the error-space estimate method are presented. In this algorithm, the motion of the target is described by guide data and guide errors, and the maneuver of the target is separated into guide data and guide errors to reduce the maneuver level. State estimation is then implemented in target state-space and error-space respectively, and the prediction of target position is acquired by synthesizing the filtering data from target state-space according to the kinematic model with the prediction data from error-space according to the guide error model. Differing from the typical multi-model method, the kinematic and guide error models work concurrently rather than switching between models. Experiment results show that the performance of the algorithm is better than the Kalman filter and the strong tracking filter at the same maneuver level.
Error estimation and adaptivity in Navier-Stokes incompressible flows
Wu, J.; Zhu, J. Z.; Szmelter, J.; Zienkiewicz, O. C.
1990-07-01
An adaptive remeshing procedure for solving Navier-Stokes incompressible fluid flow problems is presented in this paper. This procedure has been implemented using the error estimator developed by Zienkiewicz and Zhu (1987, 1989) and a semi-implicit time-marching scheme for Navier-Stokes flow problems (Zienkiewicz et al. 1990). Numerical examples are presented, showing that the error estimation and adaptive procedure are capable of monitoring the flow field, updating the mesh when necessary, and providing nearly optimal meshes throughout the calculation, thus making the solution reliable and the computation economical and efficient.
A TYPE OF NEW POSTERIORI ERROR ESTIMATORS FOR STOKES PROBLEMS
Institute of Scientific and Technical Information of China (English)
罗振东; 王烈衡; 李雅如
2001-01-01
In this paper, a new discrete formulation and a type of new a posteriori error estimators for the second-order element discretization of Stokes problems are presented, where pressure is approximated with piecewise first-degree polynomials and the velocity vector field with piecewise second-degree polynomials with an added cubic bubble function. The estimators provide globally upper and locally lower bounds for the error of the finite element discretization. It is shown that the bubble part of this second-order element approximation is substituted for the other parts of the approximate solution.
Minimum Mean Square Error Estimation Under Gaussian Mixture Statistics
Flam, John T; Kansanen, Kimmo; Ekman, Torbjorn
2011-01-01
This paper investigates the minimum mean square error (MMSE) estimation of x, given the observation y = Hx + n, when x and n are independent and Gaussian mixture (GM) distributed. The introduction of GM distributions represents a generalization of the more familiar and simpler Gaussian-signal, Gaussian-noise instance. We present the necessary theoretical foundation and derive the MMSE estimator for x in closed form. Furthermore, we provide upper and lower bounds for its mean square error (MSE). These bounds are validated through Monte Carlo simulations.
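For the scalar case with H = 1, the closed-form GM MMSE estimator reduces to Gaussian conditioning within each mixture component, reweighted by how well each component explains the observation. A minimal sketch with invented parameters (not the paper's general vector derivation):

```python
import numpy as np

def npdf(y, mu, var):
    """Gaussian density, vectorized over mixture components."""
    return np.exp(-(y - mu) ** 2 / (2 * var)) / np.sqrt(2 * np.pi * var)

# Scalar instance of y = x + n with x Gaussian-mixture distributed and n
# Gaussian (H = 1 for simplicity; all parameters illustrative).
w  = np.array([0.3, 0.7])      # mixture weights of x
mu = np.array([-2.0, 1.0])     # component means
vx = np.array([0.5, 0.5])      # component variances
vn = 0.25                      # noise variance

def mmse(y):
    # Posterior component weights: each component filtered through the noise.
    post = w * npdf(y, mu, vx + vn)
    post /= post.sum()
    # Per-component conditional means (standard Gaussian conditioning),
    # combined with posterior weights: the closed-form MMSE estimate.
    cond = mu + vx / (vx + vn) * (y - mu)
    return float(post @ cond)

print(mmse(0.9))   # pulled toward the component near +1
```

Unlike the pure-Gaussian case, the estimate is a nonlinear function of y: the posterior weights shift mass between components as y moves.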
Hybrid estimation technique for predicting butene concentration in polyethylene reactor
Mohd Ali, Jarinah; Hussain, M. A.
2016-03-01
A component of artificial intelligence (AI), fuzzy logic, is combined with the conventional sliding mode observer (SMO) to establish a hybrid estimator to predict the butene concentration in the polyethylene production reactor. Butene (co-monomer) concentration is another significant parameter in the polymerization process, since it affects the molecular weight distribution of the polymer produced. The hybrid estimator offers a straightforward formulation of the SMO and its combination with fuzzy logic rules. The error resulting from the SMO estimation is manipulated using the fuzzy rules to enhance performance, thus improving the convergence rate. This hybrid estimator is able to estimate the butene concentration satisfactorily despite the presence of noise in the process.
Application of variance components estimation to calibrate geoid error models.
Guo, Dong-Mei; Xu, Hou-Ze
2015-01-01
The method of using Global Positioning System-leveling data to obtain orthometric heights has been well studied. A simple formulation for the weighted least squares problem has been presented in an earlier work. This formulation allows one to directly employ errors-in-variables models which completely describe the covariance matrices of the observables. However, an important question, namely what accuracy level can be achieved, has yet to be satisfactorily answered by this traditional formulation. One of the main reasons for this is incorrectness of the stochastic models used in the adjustment, which in turn calls for improving the stochastic models of the measurement noise. Therefore, the determination of the stochastic models of the observables in the combined adjustment of heterogeneous height types is the main focus of this paper. Firstly, the well-known method of variance component estimation is employed to calibrate the errors of heterogeneous height data in a combined least squares adjustment of ellipsoidal, orthometric and gravimetric geoid heights. Specifically, the iterative algorithms of minimum norm quadratic unbiased estimation are used to estimate the variance components for each type of heterogeneous observation. Secondly, two different statistical models are presented to illustrate the theory. The first method directly uses the errors-in-variables as a priori covariance matrices, while the second method analyzes the biases of the variance components and then proposes bias-corrected variance component estimators. Several numerical test results show the capability and effectiveness of the variance component estimation procedure in combined adjustment for calibrating geoid error models.
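The variance-component idea can be sketched for two heterogeneous observation groups with unknown variances: residual sums of squares divided by redundancy numbers are iterated to convergence. This is a simplified Helmert-style scheme with invented data, not the paper's MINQUE algorithm:

```python
import numpy as np

rng = np.random.default_rng(3)

# Two observation groups sharing a 2-parameter model, with different
# (unknown) noise levels: true sigmas 0.2 and 1.0 (all values illustrative).
m1, m2 = 40, 40
A = np.vstack([np.column_stack([np.ones(m1), rng.normal(size=m1)]),
               np.column_stack([np.ones(m2), rng.normal(size=m2)])])
x_true = np.array([5.0, 2.0])
y = A @ x_true + np.concatenate([rng.normal(scale=0.2, size=m1),
                                 rng.normal(scale=1.0, size=m2)])
groups = [np.arange(m1), np.arange(m1, m1 + m2)]

s2 = np.array([1.0, 1.0])                 # initial variance components
for _ in range(20):
    w = np.concatenate([np.full(m1, 1 / s2[0]), np.full(m2, 1 / s2[1])])
    W = np.diag(w)
    N = A.T @ W @ A
    x_hat = np.linalg.solve(N, A.T @ W @ y)
    r = y - A @ x_hat
    H = A @ np.linalg.solve(N, A.T @ W)   # hat (influence) matrix
    for k, idx in enumerate(groups):
        # Redundancy number: group size minus its share of fitted parameters.
        redundancy = len(idx) - np.trace(H[np.ix_(idx, idx)])
        s2[k] = (r[idx] ** 2).sum() / redundancy

print(np.sqrt(s2))   # estimated group standard deviations
```

The recovered group standard deviations then serve as calibrated weights for the final combined adjustment.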
Influence of measurement errors and estimated parameters on combustion diagnosis
Energy Technology Data Exchange (ETDEWEB)
Payri, F.; Molina, S.; Martin, J. [CMT-Motores Termicos, Universidad Politecnica de Valencia, Camino de Vera s/n. 46022 Valencia (Spain); Armas, O. [Departamento de Mecanica Aplicada e Ingenieria de proyectos, Universidad de Castilla-La Mancha. Av. Camilo Jose Cela s/n 13071,Ciudad Real (Spain)
2006-02-01
Thermodynamic diagnosis models are valuable tools for the study of Diesel combustion. Inputs required by such models comprise measured mean and instantaneous variables, together with suitable values for adjustable parameters used in different submodels. In the case of measured variables, one may estimate the uncertainty associated with measurement errors; however, the influence of errors in model parameter estimation may not be so easily established on an experimental basis. In this paper, a simulated pressure cycle has been used along with known input parameters, so that any uncertainty in the inputs is avoided. Then, the influence of errors in measured variables and in geometric and heat transmission parameters on the results of a combustion diagnosis model for direct injection diesel engines has been studied. This procedure allowed us to establish the relative importance of these parameters and to set limits on the maximal errors of the model, accounting for both the maximal expected errors in the input parameters and the sensitivity of the model to those errors. (author)
Sensitivity to Estimation Errors in Mean-variance Models
Institute of Scientific and Technical Information of China (English)
Zhi-ping Chen; Cai-e Zhao
2003-01-01
In order to give a complete and accurate description of the sensitivity of efficient portfolios to changes in assets' expected returns, variances and covariances, the joint effect of estimation errors in means, variances and covariances on the efficient portfolio's weights is investigated in this paper. It is proved that the efficient portfolio's composition is a Lipschitz continuous, differentiable mapping of these parameters under suitable conditions. The change rate of the efficient portfolio's weights with respect to variations in risk-return estimations is derived by estimating the Lipschitz constant. Our general quantitative results show that the efficient portfolio's weights are normally not so sensitive to estimation errors about means and variances. Moreover, we point out those extreme cases which might cause stability problems and how to avoid them in practice. Preliminary numerical results are also provided as an illustration of our theoretical results.
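The sensitivity being bounded here is easy to probe numerically: compute mean-variance weights from estimated parameters, perturb one expected return, and observe the weight shift. All numbers below are illustrative:

```python
import numpy as np

# Illustrative 3-asset example (numbers hypothetical): tangency-style
# weights w proportional to inv(Sigma) @ mu, and their response to a
# small error in the estimated mean vector.
mu = np.array([0.08, 0.10, 0.12])
Sigma = np.array([[0.040, 0.006, 0.004],
                  [0.006, 0.090, 0.010],
                  [0.004, 0.010, 0.160]])

def weights(mu_hat):
    raw = np.linalg.solve(Sigma, mu_hat)
    return raw / raw.sum()              # normalize to fully invested

w0 = weights(mu)
w1 = weights(mu + np.array([0.01, 0.0, 0.0]))   # 1% error in one mean

print(np.max(np.abs(w1 - w0)))   # weight change from a small mean error
```

Shrinking an eigenvalue of `Sigma` toward zero blows this sensitivity up, which corresponds to the unstable extreme cases the paper warns about.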
Error Estimation for the Linearized Auto-Localization Algorithm
Directory of Open Access Journals (Sweden)
Fernando Seco
2012-02-01
Full Text Available The Linearized Auto-Localization (LAL) algorithm estimates the position of beacon nodes in Local Positioning Systems (LPSs), using only the distance measurements to a mobile node whose position is also unknown. The LAL algorithm calculates the inter-beacon distances, used for the estimation of the beacons’ positions, from the linearized trilateration equations. In this paper we propose a method to estimate the propagation of the errors of the inter-beacon distances obtained with the LAL algorithm, based on a first order Taylor approximation of the equations. Since the method depends on such approximation, a confidence parameter τ is defined to measure the reliability of the estimated error. Field evaluations showed that by applying this information to an improved weighted-based auto-localization algorithm (WLAL), the standard deviation of the inter-beacon distances can be improved by more than 30% on average with respect to the original LAL method.
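The first-order Taylor device used in the paper is ordinary error propagation, cov_f ≈ J Σ Jᵀ with J the Jacobian of the derived quantity. A generic sketch follows; the derived quantity (a distance between two estimated 2-D points) and the covariances are invented, not the LAL equations themselves:

```python
import numpy as np

# First-order (Taylor) error propagation: cov_f ≈ J @ cov_p @ J.T,
# demonstrated on a hypothetical derived quantity (all names illustrative).

def f(p):
    # Distance between two estimated 2-D points (x1, y1) and (x2, y2).
    x1, y1, x2, y2 = p
    return np.array([np.hypot(x2 - x1, y2 - y1)])

def jacobian(func, p, h=1e-6):
    """Central-difference numerical Jacobian of func at p."""
    J = np.empty((func(p).size, p.size))
    for j in range(p.size):
        dp = np.zeros_like(p)
        dp[j] = h
        J[:, j] = (func(p + dp) - func(p - dp)) / (2 * h)
    return J

p = np.array([0.0, 0.0, 3.0, 4.0])     # estimated positions
cov_p = np.eye(4) * 0.01               # their covariance (0.1 std each)

J = jacobian(f, p)
cov_d = J @ cov_p @ J.T
print(np.sqrt(cov_d[0, 0]))            # propagated std of the distance
```

Because the approximation is only first-order, its reliability degrades when the function is strongly curved over the error scale, which is exactly why the paper introduces the confidence parameter τ.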
Bernau, Christoph; Augustin, Thomas; Boulesteix, Anne-Laure
2013-09-01
High-dimensional binary classification tasks, for example, the classification of microarray samples into normal and cancer tissues, usually involve a tuning parameter. By reporting the performance of the best tuning parameter value only, over-optimistic prediction errors are obtained. For correcting this tuning bias, we develop a new method which is based on a decomposition of the unconditional error rate involving the tuning procedure, that is, we estimate the error rate of wrapper algorithms as introduced in the context of internal cross-validation (ICV) by Varma and Simon (2006, BMC Bioinformatics 7, 91). Our subsampling-based estimator can be written as a weighted mean of the errors obtained using the different tuning parameter values, and thus can be interpreted as a smooth version of ICV, which is the standard approach for avoiding tuning bias. In contrast to ICV, our method guarantees intuitive bounds for the corrected error. Additionally, we suggest to use bias correction methods also to address the conceptually similar method selection bias that results from the optimal choice of the classification method itself when evaluating several methods successively. We demonstrate the performance of our method on microarray and simulated data and compare it to ICV. This study suggests that our approach yields competitive estimates at a much lower computational price.
On the error estimate for cubature on Wiener space
Cass, Thomas
2011-01-01
It was pointed out in Crisan, Ghazali [2] that the error estimate for the cubature on Wiener space algorithm developed in Lyons, Victoir [11] requires an additional assumption on the drift. In this note we demonstrate that it is straightforward to adapt the analysis of Kusuoka [7] to obtain a general estimate without additional assumptions on the drift. In the process we slightly sharpen the bounds derived in [7].
A precise error bound for quantum phase estimation.
Directory of Open Access Journals (Sweden)
James M Chappell
Full Text Available Quantum phase estimation is one of the key algorithms in the field of quantum computing, but up until now only approximate expressions have been derived for the probability of error. We revisit these derivations, and find that by ensuring symmetry in the error definitions, an exact formula can be found. This new approach may also have value in solving other related problems in quantum computing where an expected error is calculated. Expressions for two special cases of the formula are also developed: in the limit as the number of qubits in the quantum computer approaches infinity, and in the limit as the number of extra qubits added to improve reliability goes to infinity. This formula is useful in validating computer simulations of the phase estimation procedure and in avoiding overestimation of the number of qubits required to achieve a given reliability. It thus brings improved precision to the design of quantum computers.
Error estimates in horocycle averages asymptotics: challenges from string theory
Cardella, M.A.
2010-01-01
For modular functions of rapid decay, a classical result connects the error estimate in their long horocycle average asymptotic to the Riemann hypothesis. We study similar asymptotics, for modular functions with not that mild growing conditions, such as of polynomial growth and of exponential growth
On global error estimation and control for initial value problems
J. Lang; J.G. Verwer
2007-01-01
Abstract. This paper addresses global error estimation and control for initial value problems for ordinary differential equations. The focus lies on a comparison between a novel approach based on the adjoint method combined with a small sample statistical initialization and the classical approach
Biosensor Arrays for Estimating Molecular Concentration in Fluid Flows
Abolfath-Beygi, Maryam
2011-01-01
This paper constructs dynamical models and estimation algorithms for the concentration of target molecules in a fluid flow using an array of novel biosensors. Each biosensor is constructed out of protein molecules embedded in a synthetic cell membrane. The concentration evolves according to an advection-diffusion partial differential equation which is coupled with chemical reaction equations on the biosensor surface. By using averaging theory methods and the divergence theorem, an approximate model is constructed that describes the asymptotic behaviour of the concentration as a system of ordinary differential equations. The estimate of the target molecule concentration is then obtained by solving a nonlinear least squares problem. It is shown that the estimator is strongly consistent and asymptotically normal. An explicit expression is obtained for the asymptotic variance of the estimation error. As an example, the results are illustrated for a novel biosensor built out of protein molecules.
Optimizing Neural Network Architectures Using Generalization Error Estimators
DEFF Research Database (Denmark)
Larsen, Jan
1994-01-01
This paper addresses the optimization of neural network architectures. It is suggested to optimize the architecture by selecting the model with minimal estimated averaged generalization error. We consider a least-squares (LS) criterion for estimating neural network models, i.e., the associated...... neural network applications, it is impossible to suggest a perfect model, and consequently the ability to handle incomplete models is urgent. A concise derivation of the GEN-estimator is provided, and its qualities are demonstrated by comparative numerical studies...
Errors in estimating volume increments of forest trees
Directory of Open Access Journals (Sweden)
Magnani F
2014-02-01
Full Text Available Periodic tree and stand increments are often estimated retrospectively from measurements of diameter and height growth of standing trees, through the application of various simplifications of the general formula for volume increment rates. In particular, the Hellrigl method and its various formulations have often been suggested in Italy. Like other retrospective approaches, the Hellrigl method is affected by a systematic error, resulting from the assumption as a reference term of conditions at one of the extremes of the period considered. The magnitude of the error introduced by different formulations has been assessed in the present study through their application to mensurational and increment measurements from the detailed growth analysis of 107 Picea abies trees. Results are compared with those obtained with a new equation, which makes reference to the interval mid-point. The newly proposed method makes it possible to drastically reduce the error in the estimate of periodic tree increments, and especially its systematic component. This appears particularly relevant for stand-level and national-level applications.
ESTIMATING ERROR BOUNDS FOR TERNARY SUBDIVISION CURVES/SURFACES
Institute of Scientific and Technical Information of China (English)
Ghulam Mustafa; Jiansong Deng
2007-01-01
We estimate error bounds between ternary subdivision curves/surfaces and their control polygons after k-fold subdivision in terms of the maximal differences of the initial control point sequences and constants that depend on the subdivision mask. The bound is independent of the process of subdivision and can be evaluated without recursive subdivision. Our technique is independent of parametrization, therefore it can be easily and efficiently implemented. This is useful and important for pre-computing the error bounds of subdivision curves/surfaces in advance in many engineering applications such as surface/surface intersection, mesh generation, NC machining, surface rendering and so on.
Precise Asymptotics of Error Variance Estimator in Partially Linear Models
Institute of Scientific and Technical Information of China (English)
Shao-jun Guo; Min Chen; Feng Liu
2008-01-01
In this paper, we focus our attention on the precise asymptotics of the error variance estimator in partially linear regression models, y_i = x_i^T β + g(t_i) + ε_i, 1 ≤ i ≤ n, where {ε_i, i = 1, ..., n} are i.i.d. random errors with mean 0 and positive finite variance σ². Following the ideas of Allan Gut and Aurel Spataru [7,8] and Zhang [21] on precise asymptotics in the Baum-Katz and Davis laws of large numbers and precise rates in laws of the iterated logarithm, respectively, and subject to some regularity conditions, we obtain the corresponding results in partially linear regression models.
Error bounds for surface area estimators based on Crofton's formula
DEFF Research Database (Denmark)
Kiderlen, Markus; Meschenmoser, Daniel
2009-01-01
According to Crofton’s formula, the surface area S(A) of a sufficiently regular compact set A in R^d is proportional to the mean of all total projections pA (u) on a linear hyperplane with normal u, uniformly averaged over all unit vectors u. In applications, pA (u) is only measured in k directions and the mean is approximated by a finite weighted sum of the total projections in these directions. The choice of the weights depends on the selected quadrature rule. We define an associated zonotope Z (depending only on the projection directions and the quadrature rule), and show that the relative error...... in the sense that the relative error of the surface area estimator is very close to the minimal error.
Error Estimation for Moments Analysis in Heavy Ion Collision Experiment
Luo, Xiaofeng
2011-01-01
Higher moments of conserved quantities are predicted to be sensitive to the correlation length and connected to the thermodynamic susceptibility. Thus, higher moments of net-baryon, net-charge and net-strangeness have been extensively studied theoretically and experimentally to explore the phase structure and bulk properties of the QCD matter created in heavy ion collision experiments. Because higher-moments analysis is statistically demanding, error estimation is crucial for extracting physics information from the limited experimental data. In this paper, we derive the limiting distributions and error formulas, based on the Delta theorem in statistics, for the various order moments used in experimental data analysis. Monte Carlo simulation is also applied to test the error formulas.
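As a concrete illustration of the Delta-theorem route (a generic sketch, not the paper's own derivation for net-baryon cumulants), the large-sample variance of the sample variance m2 is (μ4 − μ2²)/n, which a Monte Carlo experiment reproduces; the Gaussian toy data and sample sizes are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(0)
n, trials = 5000, 2000

def delta_var_of_m2(x):
    """Delta-theorem variance of the sample variance m2 = mean((x - xbar)^2):
    for large n, Var(m2) ~ (mu4 - mu2^2) / n, with mu_k the central moments."""
    mu2 = np.mean((x - x.mean()) ** 2)
    mu4 = np.mean((x - x.mean()) ** 4)
    return (mu4 - mu2 ** 2) / x.size

# Monte Carlo: spread of m2 over many repeated synthetic experiments
samples = rng.normal(size=(trials, n))
m2 = samples.var(axis=1)
mc_var = m2.var()

# analytic value for N(0,1): mu4 = 3, mu2 = 1  ->  Var(m2) ~ 2/n
print(mc_var, delta_var_of_m2(samples[0]), 2.0 / n)
```

The same recipe, with the appropriate gradient of the statistic in its argument moments, yields the error formulas for skewness and kurtosis ratios used in the moments analysis.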
Error Estimation and Uncertainty Propagation in Computational Fluid Mechanics
Zhu, J. Z.; He, Guowei; Bushnell, Dennis M. (Technical Monitor)
2002-01-01
Numerical simulation has now become an integral part of the engineering design process. Critical design decisions are routinely made based on simulation results and conclusions. Verification and validation of the reliability of the numerical simulation is therefore vitally important in the engineering design process. We propose to develop theories and methodologies that can automatically provide quantitative information about the reliability of the numerical simulation, by estimating the numerical approximation error, computational model induced errors, and the uncertainties contained in the mathematical models, so that the reliability of the numerical simulation can be verified and validated. We also propose to develop and implement methodologies and techniques that can control the error and uncertainty during the numerical simulation so that its reliability can be improved.
GPS/DR Error Estimation for Autonomous Vehicle Localization.
Lee, Byung-Hyun; Song, Jong-Hwa; Im, Jun-Hyuck; Im, Sung-Hyuck; Heo, Moon-Beom; Jee, Gyu-In
2015-08-21
Autonomous vehicles require highly reliable navigation capabilities. For example, a lane-following method cannot be applied in an intersection without lanes, and since typical lane detection is performed using a straight-line model, errors can occur when the lateral distance is estimated in curved sections due to a model mismatch. Therefore, this paper proposes a localization method that uses GPS/DR error estimation based on a lane detection method with curved lane models, stop line detection, and curve matching in order to improve the performance during waypoint following procedures. The advantage of the proposed method is that position information can be provided for autonomous driving through intersections, in sections with sharp curves, and in curved sections following a straight section. The proposed method was applied in autonomous vehicles at an experimental site to evaluate its performance, and the results indicate that sub-meter positioning accuracy was achieved.
Stress Recovery and Error Estimation for Shell Structures
Yazdani, A. A.; Riggs, H. R.; Tessler, A.
2000-01-01
The Penalized Discrete Least-Squares (PDLS) stress recovery (smoothing) technique developed for two-dimensional linear elliptic problems is adapted here to three-dimensional shell structures. The surfaces are restricted to those which have a 2-D parametric representation, or which can be built up from such surfaces. The proposed strategy involves mapping the finite element results to the 2-D parametric space which describes the geometry, and smoothing is carried out in the parametric space using the PDLS-based Smoothing Element Analysis (SEA). Numerical results for two well-known shell problems are presented to illustrate the performance of SEA/PDLS. The recovered stresses are used in the Zienkiewicz-Zhu a posteriori error estimator. The estimated errors are used to demonstrate the performance of SEA-recovered stresses in automated adaptive mesh refinement of shell structures. The numerical results are encouraging. Further testing involving more complex, practical structures is necessary.
Zhou, Daniel K.; Larar, Allen M.; Liu, Xu; Smith, William L.; Strow, L. Larry
2013-01-01
Great effort has been devoted towards validating geophysical parameters retrieved from ultraspectral infrared radiances obtained from satellite remote sensors. An error consistency analysis scheme (ECAS), utilizing fast radiative transfer model (RTM) forward and inverse calculations, has been developed to estimate the error budget in terms of mean difference and standard deviation of error in both spectral radiance and retrieval domains. The retrieval error is assessed through ECAS without relying on other independent measurements such as radiosonde data. ECAS establishes a link between the accuracies of radiances and retrieved geophysical parameters. ECAS can be applied to measurements from any ultraspectral instrument and any retrieval scheme with its associated RTM. In this manuscript, ECAS is described and demonstrated with measurements from the MetOp-A satellite Infrared Atmospheric Sounding Interferometer (IASI). This scheme can be used together with other validation methodologies to give a more definitive characterization of the error and/or uncertainty of geophysical parameters retrieved from ultraspectral radiances observed from current and future satellite remote sensors such as IASI, the Atmospheric Infrared Sounder (AIRS), and the Cross-track Infrared Sounder (CrIS).
Regularization and error estimates for nonhomogeneous backward heat problems
Directory of Open Access Journals (Sweden)
Duc Trong Dang
2006-01-01
Full Text Available In this article, we study the inverse time problem for the non-homogeneous heat equation, which is a severely ill-posed problem. We regularize this problem using the quasi-reversibility method and then obtain error estimates on the approximate solutions. Solutions are calculated by the contraction principle and illustrated in numerical experiments. We also obtain rates of convergence to the exact solution.
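To make the mechanism concrete, here is a minimal spectral sketch with an illustrative stabilized filter and invented parameters (not the paper's scheme or constants): for u_t = u_xx on (0, π) with zero Dirichlet data, the k-th sine mode of the final data must be amplified by exp(k²T) to recover the initial datum, and the regularization replaces this factor by the bounded filter exp(k²T)/(1 + ε·exp(k²T)):

```python
import numpy as np

rng = np.random.default_rng(1)
K, T = 20, 0.1            # number of sine modes, final time
delta, eps = 1e-4, 1e-2   # noise level and regularization parameter (assumed)
k = np.arange(1, K + 1)

a_true = 1.0 / k**2                           # smooth initial datum (sine coefficients)
g = a_true * np.exp(-k**2 * T)                # exact final data u(., T)
g_noisy = g + delta * rng.standard_normal(K)  # measured final data

a_naive = g_noisy * np.exp(k**2 * T)          # unstable: noise blown up by exp(k^2 T)
filt = np.exp(k**2 * T) / (1.0 + eps * np.exp(k**2 * T))  # bounded by 1/eps
a_reg = g_noisy * filt                        # stabilized reconstruction

err_naive = np.linalg.norm(a_naive - a_true)
err_reg = np.linalg.norm(a_reg - a_true)
```

The naive back-solution is destroyed by measurement noise in the high modes, while the filtered one stays close to the true initial coefficients; tuning ε against the noise level δ is what the error estimates in the paper make precise.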
Interpolation Error Estimates for Mean Value Coordinates over Convex Polygons.
Rand, Alexander; Gillette, Andrew; Bajaj, Chandrajit
2013-08-01
In a similar fashion to estimates shown for Harmonic, Wachspress, and Sibson coordinates in [Gillette et al., AiCM, to appear], we prove interpolation error estimates for the mean value coordinates on convex polygons suitable for standard finite element analysis. Our analysis is based on providing a uniform bound on the gradient of the mean value functions for all convex polygons of diameter one satisfying certain simple geometric restrictions. This work makes rigorous an observed practical advantage of the mean value coordinates: unlike Wachspress coordinates, the gradient of the mean value coordinates does not become large as interior angles of the polygon approach π.
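For readers who want to experiment, a small sketch of Floater's mean value coordinates on a convex polygon (the standard construction, not code from the paper); partition of unity and linear reproduction can be checked directly:

```python
import numpy as np

def mean_value_coords(p, verts):
    """Mean value coordinates of a point p strictly inside a convex polygon."""
    v = np.asarray(verts, float) - np.asarray(p, float)  # spokes from p to vertices
    r = np.linalg.norm(v, axis=1)
    n = len(v)

    def angle(a, b):  # signed angle at p from spoke a to spoke b
        return np.arctan2(a[0] * b[1] - a[1] * b[0], a @ b)

    w = np.empty(n)
    for i in range(n):
        a_prev = angle(v[i - 1], v[i])        # angle in triangle (p, v_{i-1}, v_i)
        a_next = angle(v[i], v[(i + 1) % n])  # angle in triangle (p, v_i, v_{i+1})
        w[i] = (np.tan(a_prev / 2) + np.tan(a_next / 2)) / r[i]
    return w / w.sum()

square = np.array([(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0)])
p = np.array([0.3, 0.6])
lam = mean_value_coords(p, square)
```

The coordinates sum to one, are positive inside a convex polygon, and reproduce linear functions (sum of lam[i] * verts[i] equals p); the gradient bound proved in the paper controls how these functions behave as the polygon degenerates.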
Huang, Weidong
2011-01-01
Surface slope error of the concentrator is one of the main factors influencing the performance of solar concentrating collectors: it deviates the reflected rays and reduces the intercepted radiation. This paper presents a general equation to calculate the standard deviation of the reflected-ray error from the slope error through geometric optics, applies the equation to five kinds of solar concentrating reflectors, and provides typical results. The results indicate that the slope error is transferred to the reflected ray amplified more than two-fold when the incidence angle is greater than zero. The equation for the reflected-ray error fits all reflection surfaces in general and can also be applied to control the error in designing an abaxial optical system.
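The geometric-optics transfer can be checked with a small Monte Carlo sketch (illustrative, with an assumed slope-error magnitude; it demonstrates only the classical two-fold amplification at normal incidence, the simplest case of the paper's general equation):

```python
import numpy as np

rng = np.random.default_rng(2)
sigma, n = 1e-3, 200_000      # slope-error std per tilt axis (rad), sample count

d = np.array([0.0, 0.0, -1.0])           # incident ray at normal incidence
eps = rng.normal(0.0, sigma, (n, 2))     # small random tilts of the mirror normal

# perturbed unit normals ~ (eps_x, eps_y, 1)
nrm = np.column_stack([eps[:, 0], eps[:, 1], np.ones(n)])
nrm /= np.linalg.norm(nrm, axis=1, keepdims=True)

# law of reflection: r = d - 2 (d . n) n
r = d - 2.0 * (nrm @ d)[:, None] * nrm
dev = np.arccos(np.clip(r[:, 2], -1.0, 1.0))   # angle from the ideal reflected ray +z
tilt = np.linalg.norm(eps, axis=1)             # total normal-tilt angle

ratio = np.sqrt(np.mean(dev**2) / np.mean(tilt**2))
```

The ratio comes out at 2: a normal tilt of ε rotates the reflected ray by 2ε at normal incidence. How the transfer factor changes at oblique incidence is what the paper's general equation quantifies.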
Zhanshan Wang; Longhu Quan; Xiuchong Liu
2014-01-01
The control of a high performance alternating current (AC) motor drive under sensorless operation needs the accurate estimation of rotor position. In this paper, one method of accurately estimating rotor position by using both motor complex number model based position estimation and position estimation error suppression proportional-integral (PI) controller is proposed for the sensorless control of the surface permanent magnet synchronous motor (SPMSM). In order to guarantee the accuracy of rotor position estimation in the flux-weakening region, one scheme of identifying the permanent magnet flux of SPMSM by extended Kalman filter (EKF) is also proposed, which forms an effective combination method to realize the sensorless control of SPMSM with high accuracy. The simulation results demonstrated the validity and feasibility of the proposed position/speed estimation system.
Kassabian, Nazelie; Lo Presti, Letizia; Rispoli, Francesco
2014-06-11
Railway signaling is a safety system that has evolved over the last couple of centuries towards autonomous functionality. Recently, great effort has been devoted in this field towards the use and exploitation of Global Navigation Satellite System (GNSS) signals and GNSS augmentation systems, in view of lowering railway track equipment and maintenance costs; this is a priority for sustaining the investments needed to modernize the local and regional lines, most of which lack automatic train protection systems and are still manually operated. The objective of this paper is to assess the sensitivity of the Linear Minimum Mean Square Error (LMMSE) algorithm to modeling errors in the spatial correlation function that characterizes true pseudorange Differential Corrections (DCs). This study is inspired by the railway application; however, it applies to all transportation systems, including the road sector, that need to be complemented by an augmentation system in order to deliver accurate and reliable positioning with integrity specifications. A vector of noisy pseudorange DC measurements is simulated, assuming a Gauss-Markov model with a decay rate parameter inversely proportional to the correlation distance that exists between two points of a certain environment. The LMMSE algorithm is applied to this vector to estimate the true DCs, and the estimation error is compared to the noise added during simulation. The results show that for large enough ratios of correlation distance to Reference Station (RS) separation distance, the LMMSE brings considerable advantage in terms of estimation error accuracy and precision. Conversely, the LMMSE algorithm may deteriorate the quality of the DC measurements whenever the ratio falls below a certain threshold.
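A toy version of the experiment can be sketched as follows (all values assumed for illustration): true DCs with exponential (Gauss-Markov) spatial correlation are observed in noise, and the LMMSE estimate C(C + R)^{-1} y is compared against the raw measurements:

```python
import numpy as np

rng = np.random.default_rng(3)
n_st, trials = 12, 400
x = np.linspace(0.0, 100.0, n_st)        # reference-station positions, km (assumed)
sig_c, sig_n, d_corr = 1.0, 0.8, 200.0   # DC std, noise std, correlation distance

# exponential (Gauss-Markov) spatial covariance of the true DCs
C = sig_c**2 * np.exp(-np.abs(x[:, None] - x[None, :]) / d_corr)
R = sig_n**2 * np.eye(n_st)
W = C @ np.linalg.inv(C + R)             # LMMSE gain

L = np.linalg.cholesky(C + 1e-10 * np.eye(n_st))
mse_raw = mse_lmmse = 0.0
for _ in range(trials):
    c = L @ rng.standard_normal(n_st)              # correlated true DCs
    y = c + sig_n * rng.standard_normal(n_st)      # noisy measurements
    mse_raw += np.mean((y - c)**2) / trials
    mse_lmmse += np.mean((W @ y - c)**2) / trials
```

With a correlation distance large relative to the station separation, the LMMSE estimate is markedly better than the raw DCs; shrinking d_corr toward the station spacing (or mismatching it in W) erodes, and can reverse, that advantage, which is the sensitivity the paper studies.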
BIAS ERRORS INDUCED BY CONCENTRATION GRADIENT IN SEDIMENT-LADEN FLOW MEASUREMENT WITH PTV
Institute of Scientific and Technical Information of China (English)
LI Dan-xun; LIN Qiu-sheng; ZHONG Qiang; WANG Xing-kui
2012-01-01
Sediment-laden flow measurement with Particle Tracking Velocimetry (PTV) introduces a series of finite-sized sampling bins along the vertical of the flow. Instantaneous velocities are collected at each bin and a significantly large sample is established to evaluate the mean and root mean square (rms) velocities of the flow. Due to the presence of a concentration gradient, the established sample for the solid phase involves more data from the lower part of the sampling bin than from the upper part. This concentration effect causes bias errors in the measured mean and rms velocities when the velocity varies across the bin. These bias errors are analytically quantified in this study based on simplified linear velocity and concentration distributions. Typical bulk flow characteristics from sediment-laden flow measurements are used to demonstrate a rough estimation of the error magnitude. Results indicate that the mean velocity is underestimated while the rms velocity is overestimated in the ensemble-averaged measurement. The extent of deviation is commensurate with the bin size and the rate of the concentration gradient. Procedures are proposed to assist in determining an appropriate sampling bin size within certain error limits.
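The mean-velocity bias for one bin can be reproduced in a few lines (linear profiles and all magnitudes are assumed for illustration): the concentration-weighted mean that PTV effectively samples differs from the bin-centre velocity by b*g*h^2/(12*c0):

```python
import numpy as np

h = 0.02                   # sampling-bin height (m), assumed
u0, b = 1.0, 10.0          # velocity at bin centre (m/s) and vertical gradient (1/s)
c0, g = 50.0, -1500.0      # concentration at centre and gradient (decreasing upward)

# midpoint grid across the bin, y measured from the bin centre
N = 100_000
y = -h / 2 + (np.arange(N) + 0.5) * (h / N)
u = u0 + b * y             # linear velocity profile
c = c0 + g * y             # linear concentration profile (stays positive here)

u_sampled = np.sum(u * c) / np.sum(c)      # concentration-weighted sample mean
bias_numeric = u_sampled - u0
bias_analytic = b * g * h**2 / (12.0 * c0)
```

With velocity increasing and concentration decreasing upward (b > 0, g < 0) the bias is negative, i.e. the mean velocity is underestimated, and it grows with the square of the bin size, consistent with the abstract.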
Normalized Minimum Error Entropy Algorithm with Recursive Power Estimation
Directory of Open Access Journals (Sweden)
Namyong Kim
2016-06-01
Full Text Available The minimum error entropy (MEE) algorithm is known to be superior in signal processing applications under impulsive noise. In this paper, based on an analysis of the behavior of the optimum weight and of the properties of robustness against impulsive noise, a normalized version of the MEE algorithm is proposed. The step size of the MEE algorithm is normalized with the power of the input entropy, which is estimated recursively to reduce its computational complexity. In equalization simulations, the proposed algorithm simultaneously yields a lower minimum MSE (mean squared error) and faster convergence than the original MEE algorithm. At the same convergence speed, its steady-state MSE improvement is above 3 dB.
Estimation in the polynomial errors-in-variables model
Institute of Scientific and Technical Information of China (English)
ZHANG; Sanguo
2002-01-01
[1] Kendall, M. G., Stuart, A., The Advanced Theory of Statistics, Vol. 2, New York: Charles Griffin, 1979. [2] Fuller, W. A., Measurement Error Models, New York: Wiley, 1987. [3] Carroll, R. J., Ruppert, D., Stefanski, L. A., Measurement Error in Nonlinear Models, London: Chapman & Hall, 1995. [4] Stout, W. F., Almost Sure Convergence, New York: Academic Press, 1974, 154. [5] Petrov, V. V., Sums of Independent Random Variables, New York: Springer-Verlag, 1975, 272. [6] Zhang, S. G., Chen, X. R., Consistency of modified MLE in EV model with replicated observation, Science in China, Ser. A, 2001, 44(3): 304-310. [7] Lai, T. L., Robbins, H., Wei, C. Z., Strong consistency of least squares estimates in multiple regression, J. Multivariate Anal., 1979, 9: 343-362.
Error estimation and adaptivity for transport problems with uncertain parameters
Sahni, Onkar; Li, Jason; Oberai, Assad
2016-11-01
Stochastic partial differential equations (PDEs) with uncertain parameters and source terms arise in many transport problems. In this study, we develop and apply an adaptive approach based on the variational multiscale (VMS) formulation for discretizing stochastic PDEs. In this approach we employ finite elements in the physical domain and a generalized polynomial chaos based spectral basis in the stochastic domain. We demonstrate our approach on non-trivial transport problems where the uncertain parameters are such that the advective and diffusive regimes are spanned in the stochastic domain. We show that the proposed method is effective as a local error estimator in quantifying the element-wise error and in driving adaptivity in the physical and stochastic domains. We also indicate how this approach may be extended to the Navier-Stokes equations. NSF Award 1350454 (CAREER).
Erasing errors due to alignment ambiguity when estimating positive selection.
Redelings, Benjamin
2014-08-01
Current estimates of diversifying positive selection rely on first having an accurate multiple sequence alignment. Simulation studies have shown that under biologically plausible conditions, relying on a single estimate of the alignment from commonly used alignment software can lead to unacceptably high false-positive rates in detecting diversifying positive selection. We present a novel statistical method that eliminates excess false positives resulting from alignment error by jointly estimating the degree of positive selection and the alignment under an evolutionary model. Our model treats both substitutions and insertions/deletions as sequence changes on a tree and allows site heterogeneity in the substitution process. We conduct inference starting from unaligned sequence data by integrating over all alignments. This approach naturally accounts for ambiguous alignments without requiring ambiguously aligned sites to be identified and removed prior to analysis. We take a Bayesian approach and conduct inference using Markov chain Monte Carlo to integrate over all alignments on a fixed evolutionary tree topology. We introduce a Bayesian version of the branch-site test and assess the evidence for positive selection using Bayes factors. We compare two models of differing dimensionality using a simple alternative to reversible-jump methods. We also describe a more accurate method of estimating the Bayes factor using Rao-Blackwellization. We then show using simulated data that jointly estimating the alignment and the presence of positive selection solves the problem with excessive false positives from erroneous alignments and has nearly the same power to detect positive selection as when the true alignment is known. We also show that samples taken from the posterior alignment distribution using the software BAli-Phy have substantially lower alignment error compared with MUSCLE, MAFFT, PRANK, and FSA alignments.
ERROR ESTIMATES FOR THE TIME DISCRETIZATION FOR NONLINEAR MAXWELL'S EQUATIONS
Institute of Scientific and Technical Information of China (English)
Marián Slodička; Ján Buša Jr.
2008-01-01
This paper is devoted to the study of a nonlinear evolution eddy current model of the type ∂_t B(H) + ∇×(∇×H) = 0, subject to the homogeneous Dirichlet boundary condition H×ν = 0 and a given initial datum. Here, the magnetic properties of a soft ferromagnet are linked by a nonlinear material law described by B(H). We apply the backward Euler method for the time discretization and we derive the error estimates in suitable function spaces. The results depend on the nonlinearity of B(H).
Real-Time Parameter Estimation Using Output Error
Grauer, Jared A.
2014-01-01
Output-error parameter estimation, normally a post-flight batch technique, was applied to real-time dynamic modeling problems. Variations on the traditional algorithm were investigated with the goal of making the method suitable for operation in real time. Implementation recommendations are given that are dependent on the modeling problem of interest. Application to flight test data showed that accurate parameter estimates and uncertainties for the short-period dynamics model were available every 2 s using time domain data, or every 3 s using frequency domain data. The data compatibility problem was also solved in real time, providing corrected sensor measurements every 4 s. If uncertainty corrections for colored residuals are omitted, this rate can be increased to every 0.5 s.
Biases in atmospheric CO2 estimates from correlated meteorology modeling errors
Miller, S. M.; Hayek, M. N.; Andrews, A. E.; Fung, I.; Liu, J.
2015-03-01
Estimates of CO2 fluxes that are based on atmospheric measurements rely upon a meteorology model to simulate atmospheric transport. These models provide a quantitative link between the surface fluxes and CO2 measurements taken downwind. Errors in the meteorology can therefore cause errors in the estimated CO2 fluxes. Meteorology errors that correlate or covary across time and/or space are particularly worrisome; they can cause biases in modeled atmospheric CO2 that are easily confused with the CO2 signal from surface fluxes, and they are difficult to characterize. In this paper, we leverage an ensemble of global meteorology model outputs combined with a data assimilation system to estimate these biases in modeled atmospheric CO2. In one case study, we estimate the magnitude of month-long CO2 biases relative to CO2 boundary layer enhancements and quantify how that answer changes if we either include or remove error correlations or covariances. In a second case study, we investigate which meteorological conditions are associated with these CO2 biases. In the first case study, we estimate uncertainties of 0.5-7 ppm in monthly-averaged CO2 concentrations, depending upon location (95% confidence interval). These uncertainties correspond to 13-150% of the mean afternoon CO2 boundary layer enhancement at individual observation sites. When we remove error covariances, however, this range drops to 2-22%. Top-down studies that ignore these covariances could therefore underestimate the uncertainties and/or propagate transport errors into the flux estimate. In the second case study, we find that these month-long errors in atmospheric transport are anti-correlated with temperature and planetary boundary layer (PBL) height over terrestrial regions. In marine environments, by contrast, these errors are more strongly associated with weak zonal winds. Many errors, however, are not correlated with a single meteorological parameter, suggesting that a single meteorological proxy is
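The core point, that ignoring temporal error correlations understates the uncertainty of time-averaged CO2, can be illustrated with an AR(1) toy model; the error std, correlation, and averaging window below are invented for the sketch:

```python
import numpy as np

rng = np.random.default_rng(5)
sigma, rho, n = 1.0, 0.6, 30      # daily error std (ppm), lag-1 correlation, days
trials = 50_000

# variance of the monthly-mean error if the daily errors were independent
var_iid = sigma**2 / n

# exact variance when the errors follow a stationary AR(1) process
k = np.arange(1, n)
var_corr = sigma**2 / n * (1.0 + 2.0 * np.sum((1.0 - k / n) * rho**k))

# Monte Carlo check of the analytic formula
e = np.empty((trials, n))
e[:, 0] = rng.standard_normal(trials)
for t in range(1, n):
    e[:, t] = rho * e[:, t - 1] + np.sqrt(1 - rho**2) * rng.standard_normal(trials)
var_mc = (sigma * e).mean(axis=1).var()
```

Here the correlated errors inflate the variance of the monthly mean by a factor of about 3.75 relative to the independent-error assumption, the same mechanism by which correlated transport errors bias monthly CO2 and the inferred flux uncertainties.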
CO2 flux estimation errors associated with moist atmospheric processes
Directory of Open Access Journals (Sweden)
S. Pawson
2012-04-01
Full Text Available Vertical transport by moist sub-grid scale processes such as deep convection is a well-known source of uncertainty in CO2 source/sink inversion. However, a dynamical link between moist transport, satellite CO2 retrievals, and source/sink inversion has not yet been established. Here we examine the effect of moist processes on (1) synoptic CO2 transport by Version-4 and Version-5 NASA Goddard Earth Observing System Data Assimilation System (NASA-DAS) meteorological analyses, and (2) source/sink inversion. We find that synoptic transport processes, such as fronts and dry/moist conveyors, feed off background vertical CO2 gradients, which are modulated by sub-grid vertical transport. The implication for source/sink estimation is two-fold. First, CO2 variations contained in moist poleward moving air masses are systematically different from variations in dry equatorward moving air. Moist poleward transport is hidden from orbital sensors on satellites, causing a sampling bias, which leads directly to continental scale source/sink estimation errors of up to 0.25 PgC yr−1 in northern mid-latitudes. Second, moist processes are represented differently in GEOS-4 and GEOS-5, leading to differences in vertical CO2 gradients, moist poleward and dry equatorward CO2 transport, and therefore the fraction of CO2 variations hidden in moist air from satellites. As a result, sampling biases are amplified, causing source/sink estimation errors of up to 0.55 PgC yr−1 in northern mid-latitudes. These results, cast from the perspective of moist frontal transport processes, support previous arguments that the vertical gradient of CO2 is a major source of uncertainty in source/sink inversion.
OPTIMAL ERROR ESTIMATES OF THE PARTITION OF UNITY METHOD WITH LOCAL POLYNOMIAL APPROXIMATION SPACES
Institute of Scientific and Technical Information of China (English)
Yun-qing Huang; Wei Li; Fang Su
2006-01-01
In this paper, we provide a theoretical analysis of the partition of unity finite element method (PUFEM), which belongs to the family of meshfree methods. The usual error analysis only shows the order of the error estimate to be the same as that of the local approximations [12]. Using standard linear finite element base functions as the partition of unity and polynomials as the local approximation space, in the 1-D case we derive optimal order error estimates for PUFEM interpolants. Our analysis shows that the error estimate is of one order higher than the local approximations. The interpolation error estimates yield optimal error estimates for PUFEM solutions of elliptic boundary value problems.
LOCAL A PRIORI AND A POSTERIORI ERROR ESTIMATE OF TQC9 ELEMENT FOR THE BIHARMONIC EQUATION
Institute of Scientific and Technical Information of China (English)
Ming Wang; Weimeng Zhang
2008-01-01
In this paper, local a priori, local a posteriori and global a posteriori error estimates are obtained for the TQC9 element for the biharmonic equation. An adaptive algorithm is given based on the a posteriori error estimates.
Estimating atmospheric mercury concentrations with lichens.
Vannini, Andrea; Nicolardi, Valentina; Bargagli, Roberto; Loppi, Stefano
2014-01-01
The uptake kinetics of elemental gaseous Hg (Hg(0)) in three species of epiphytic lichens (Pseudevernia furfuracea, Evernia prunastri, and Xanthoria parietina) were investigated under four different Hg concentrations (10, 15, 30, and 45 μg/m(3)) and three different temperatures (10, 20, and 30 °C) with the aim of evaluating the lichen efficiency for Hg(0) accumulation and their potential use in the estimate of atmospheric concentrations of this metal in the field. The results showed that under our experimental conditions the lichens accumulated Hg according to exposure time and that the metal is not released back to the atmosphere after Hg(0) was removed from the air (clearance). Pseudevernia furfuracea showed the highest Hg accumulation capacity and Evernia prunastri showed the lowest, but in these species the metal uptake kinetics was affected by temperature. Xanthoria parietina showed an intermediate metal accumulation capacity and a Hg accumulation rate independent of temperature (in the range 10-30 °C). The use of first-order kinetics equations for Hg uptake in X. parietina and available field data on Hg bioaccumulation in this species allowed reliable estimates of atmospheric Hg concentrations in the environment.
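The inversion step can be sketched with a generic first-order uptake law (the parameter values are invented placeholders, not the rate constants fitted in the paper):

```python
import numpy as np

k = 0.05       # uptake rate constant (1/day), assumed
alpha = 20.0   # equilibrium lichen burden per unit air concentration, assumed

def lichen_hg(c_air, t):
    """First-order uptake toward equilibrium: dM/dt = k*(alpha*c_air - M), M(0)=0."""
    return alpha * c_air * (1.0 - np.exp(-k * t))

def air_hg_from_lichen(m, t):
    """Invert the uptake curve: estimate atmospheric Hg from the lichen burden."""
    return m / (alpha * (1.0 - np.exp(-k * t)))

c_true = 0.015                     # ambient Hg(0) concentration (assumed units)
m_obs = lichen_hg(c_true, t=60.0)  # burden after 60 days of exposure
c_est = air_hg_from_lichen(m_obs, t=60.0)
```

In the field, m_obs comes from measured bioaccumulation and the exposure time must be known; k and alpha have to be calibrated per species, and, for species other than X. parietina, per temperature.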
Transition State Theory: Variational Formulation, Dynamical Corrections, and Error Estimates
vanden-Eijnden, Eric
2009-03-01
Transition state theory (TST) is discussed from an original viewpoint: it is shown how to compute exactly the mean frequency of transition between two predefined sets which either partition phase space (as in TST) or are taken to be well-separated metastable sets corresponding to long-lived conformation states (as necessary to obtain the actual transition rate constants between these states). Exact and approximate criteria for the optimal TST dividing surface with minimum recrossing rate are derived. Some issues about the definition and meaning of the free energy in the context of TST are also discussed. Finally, precise error estimates for the numerical procedure to evaluate the transmission coefficient κS of the TST dividing surface are given, and it is shown that the relative error on κS scales as 1/√κS when κS is small. This implies that dynamical corrections to the TST rate constant can be computed efficiently if and only if the TST dividing surface has a transmission coefficient κS which is not too small. In particular, the TST dividing surface must be optimized (for otherwise κS is generally very small), but this may not be sufficient to make the procedure numerically efficient (because the optimal dividing surface has maximum κS, but this coefficient may still be very small).
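The 1/√κS scaling can be seen in a minimal Bernoulli-trial model of the transmission-coefficient estimator (a schematic, not the paper's full dynamical procedure): treat each of N trajectories fired from the dividing surface as committing to the product state with probability κ:

```python
import numpy as np

rng = np.random.default_rng(6)
N, trials = 1000, 20_000   # trajectories per estimate, repeated estimates

def rel_error(kappa):
    """Empirical relative error of kappa_hat = (committed trajectories) / N."""
    est = rng.binomial(N, kappa, size=trials) / N
    return est.std() / kappa

r_small = rel_error(0.01)
r_large = rel_error(0.25)
# binomial prediction: rel. error = sqrt((1 - kappa) / (N * kappa)) ~ 1/sqrt(N*kappa)
```

At fixed cost N, r_small is roughly five times r_large, which is why a dividing surface with a very small transmission coefficient makes computing the dynamical correction prohibitively expensive.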
Institute of Scientific and Technical Information of China (English)
LEE Tien-hsu; WANG Jong-tzy; CHEN Jhih-bin; CHANG Pao-chi
2006-01-01
Although the H.264 video coding standard provides several error resilience tools, the damage caused by error propagation may still be tremendous. This work is aimed at developing a robust and standard-compliant error resilient coding scheme for H.264 and uses techniques of mode decision, data hiding, and error concealment to reduce the damage from error propagation. This paper proposes a system with two error resilience techniques that can improve the robustness of H.264 in noisy channels. The first technique is Nearest Neighbor motion compensated Error Concealment (NNEC), which chooses the nearest neighbors in the reference frames for error concealment. The second technique is Distortion Estimated Mode Decision (DEMD), which selects an optimal mode based on stochastically distorted frames. Simulation results showed that the rate-distortion performance of the proposed algorithms is better than that of the compared algorithms.
Error estimates for the Skyrme-Hartree-Fock model
Erler, J
2014-01-01
There are many complementary strategies for estimating the extrapolation errors of a model that was calibrated in least-squares fits. We consider the Skyrme-Hartree-Fock model for nuclear structure and dynamics and exemplify the following five strategies: uncertainties from statistical analysis, covariances between observables, trends of residuals, variation of fit data, and dedicated variation of model parameters. This gives useful insight into the impact of the key fit data as they are: binding energies, charge r.m.s. radii, and charge form factor. Amongst others, we check in particular the predictive value for observables in the stable nucleus $^{208}$Pb, the super-heavy element $^{266}$Hs, $r$-process nuclei, and neutron stars.
Sampling errors in rainfall estimates by multiple satellites
North, Gerald R.; Shen, Samuel S. P.; Upson, Robert
1993-01-01
This paper examines the sampling characteristics of combining data collected by several low-orbiting satellites attempting to estimate the space-time average of rain rates. The several satellites can have different orbital and swath-width parameters. The satellite overpasses are allowed to make partial coverage snapshots of the grid box with each overpass. Such partial visits are considered in an approximate way, letting each intersection area fraction of the grid box by a particular satellite swath be a random variable with mean and variance parameters computed from exact orbit calculations. The derivation procedure is based upon the spectral minimum mean-square error formalism introduced by North and Nakamoto. By using a simple parametric form for the spacetime spectral density, simple formulas are derived for a large number of examples, including the combination of the Tropical Rainfall Measuring Mission with an operational sun-synchronous orbiter. The approximations and results are discussed and directions for future research are summarized.
Discontinuous Galerkin error estimation for linear symmetric hyperbolic systems
Adjerid, Slimane; Weinhart, Thomas
2009-01-01
In this manuscript we present an error analysis for the discontinuous Galerkin discretization error of multi-dimensional first-order linear symmetric hyperbolic systems of partial differential equations. We perform a local error analysis by writing the local error as a series and showing that its le
Directory of Open Access Journals (Sweden)
Zhanshan Wang
2014-01-01
The control of a high-performance alternating current (AC) motor drive under sensorless operation needs accurate estimation of the rotor position. In this paper, a method of accurately estimating the rotor position by combining position estimation based on a complex-number motor model with a position estimation error suppression proportional-integral (PI) controller is proposed for the sensorless control of the surface permanent magnet synchronous motor (SPMSM). In order to guarantee the accuracy of rotor position estimation in the flux-weakening region, a scheme for identifying the permanent magnet flux of the SPMSM by an extended Kalman filter (EKF) is also proposed, forming an effective combined method for realizing sensorless control of the SPMSM with high accuracy. The simulation results demonstrate the validity and feasibility of the proposed position/speed estimation system.
Estimating Canopy Nitrogen Concentration in Sugarcane Using Field Imaging Spectroscopy
Directory of Open Access Journals (Sweden)
Marc Souris
2012-06-01
The retrieval of nutrient concentration in sugarcane through hyperspectral remote sensing is widely known to be affected by canopy architecture. The goal of this research was to develop an estimation model that could explain the nitrogen variations in sugarcane across combined cultivars. Reflectance spectra were measured over the sugarcane canopy using a field spectroradiometer. The models were calibrated by a vegetation index and multiple linear regression. The original reflectance was transformed into a First-Derivative Spectrum (FDS) and two absorption features. The results indicated that the spectral wavelengths sensitive for quantifying nitrogen content lie mainly in the visible, red edge, and far near-infrared regions of the electromagnetic spectrum. The Normalized Differential Index (NDI) based on FDS(750/700) and the Ratio Spectral Index (RVI) based on FDS(724/700) are best suited for characterizing the nitrogen concentration. The modified estimation model, generated by the Stepwise Multiple Linear Regression (SMLR) technique from FDS centered at 410, 426, 720, 754, and 1,216 nm, yielded the highest correlation coefficient of 0.86 and a Root Mean Square Error of the Estimate (RMSE) of 0.033%N (n = 90) with nitrogen concentration in sugarcane. The results demonstrated that the estimation model developed by SMLR yielded a higher correlation coefficient with nitrogen content than the model computed by narrow vegetation indices. The strong correlation between measured and estimated nitrogen concentration indicated that the methods proposed in this study could be used for the reliable diagnosis of nitrogen quantity in sugarcane. Finally, the success of the field spectroscopy used for estimating the nutrient quality of sugarcane allowed an additional experiment using the polar orbiting hyperspectral data for the timely determination of crop nutrient status in rangelands without any requirement of prior
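The two index forms named above are simple band arithmetic on the first-derivative spectrum. A minimal Python sketch with invented FDS values (the band pairs 750/700 and 724/700 nm are from the abstract; the numbers are not):

```python
# Hedged sketch of the two indices used in the abstract, computed from
# first-derivative spectrum (FDS) values at the stated wavelengths.
# The FDS values below are invented for illustration.
def ndi(a, b):
    """Normalized Differential Index: (a - b) / (a + b)."""
    return (a - b) / (a + b)

def rvi(a, b):
    """Ratio index: a / b."""
    return a / b

fds_750, fds_724, fds_700 = 0.0042, 0.0036, 0.0012   # invented FDS values
print(ndi(fds_750, fds_700))   # NDI based on FDS(750/700)
print(rvi(fds_724, fds_700))   # RVI based on FDS(724/700)
```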
Detecting Positioning Errors and Estimating Correct Positions by Moving Window.
Song, Ha Yoon; Lee, Jun Seok
2015-01-01
In recent times, improvements in smart mobile devices have led to new functionalities related to their embedded positioning abilities. Many related applications that use positioning data have been introduced and are widely being used. However, the positioning data acquired by such devices are prone to erroneous values caused by environmental factors. In this research, a detection algorithm is implemented to detect erroneous data over a continuous positioning data set with several options. Our algorithm is based on a moving window for speed values derived by consecutive positioning data. Both the moving average of the speed and standard deviation in a moving window compose a moving significant interval at a given time, which is utilized to detect erroneous positioning data along with other parameters by checking the newly obtained speed value. In order to fulfill the designated operation, we need to examine the physical parameters and also determine the parameters for the moving windows. Along with the detection of erroneous speed data, estimations of correct positioning are presented. The proposed algorithm first estimates the speed, and then the correct positions. In addition, it removes the effect of errors on the moving window statistics in order to maintain accuracy. Experimental verifications based on our algorithm are presented in various ways. We hope that our approach can help other researchers with regard to positioning applications and human mobility research.
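The moving-window rule described above can be sketched as follows; the window length and the multiplier k are illustrative assumptions, not the authors' tuned parameters:

```python
import numpy as np

# Hedged sketch of the paper's idea: flag a new speed value as erroneous
# when it falls outside mean +/- k * std of speeds in a trailing window.
def is_erroneous(speeds, new_speed, window=5, k=3.0):
    w = np.asarray(speeds[-window:], dtype=float)
    mu, sd = w.mean(), w.std(ddof=0)
    lo, hi = mu - k * sd, mu + k * sd   # moving significant interval
    return not (lo <= new_speed <= hi)

history = [1.2, 1.4, 1.3, 1.5, 1.4]     # m/s, plausible walking speeds
print(is_erroneous(history, 1.6))       # inside the interval
print(is_erroneous(history, 45.0))      # GPS jump, outside the interval
```

A flagged point would then be replaced by an estimated position (e.g., extrapolated from the accepted speeds), mirroring the paper's two-stage detect-then-correct design.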
Adaptive error covariances estimation methods for ensemble Kalman filters
Energy Technology Data Exchange (ETDEWEB)
Zhen, Yicun, E-mail: zhen@math.psu.edu [Department of Mathematics, The Pennsylvania State University, University Park, PA 16802 (United States); Harlim, John, E-mail: jharlim@psu.edu [Department of Mathematics and Department of Meteorology, The Pennsylvania State University, University Park, PA 16802 (United States)
2015-08-01
This paper presents a computationally fast algorithm for estimating both the system and observation noise covariances of nonlinear dynamics, which can be used in an ensemble Kalman filtering framework. The new method is a modification of Belanger's recursive method that avoids the expensive computation of inverting error covariance matrices of products of innovation processes of different lags when the number of observations becomes large. When we use only products of innovation processes up to one lag, the computational cost is indeed comparable to a recently proposed method of Berry and Sauer. However, our method is more flexible since it allows for using information from products of innovation processes of more than one lag. Extensive numerical comparisons between the proposed method and both the original Belanger and the Berry–Sauer schemes are shown in various examples, ranging from low-dimensional linear and nonlinear systems of SDEs to the 40-dimensional stochastically forced Lorenz-96 model. Our numerical results suggest that the proposed scheme is as accurate as the original Belanger scheme on low-dimensional problems and has a wider range of more accurate estimates compared to the Berry–Sauer method on the L-96 example.
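The core idea of recovering noise covariances from lagged products of innovation-like quantities can be illustrated on the simplest possible case, a random walk observed in noise. This is a method-of-moments sketch, not Belanger's recursive algorithm:

```python
import numpy as np

# Hedged sketch: for a random walk x_k = x_{k-1} + w_k observed as
# y_k = x_k + v_k, the differenced series d_k = y_k - y_{k-1} satisfies
# Var(d) = Q + 2R and Cov(d_k, d_{k-1}) = -R, so both noise covariances
# can be recovered from lagged products of d.
rng = np.random.default_rng(0)
Q_true, R_true, n = 0.5, 2.0, 200_000
w = rng.normal(0.0, np.sqrt(Q_true), n)       # system noise
v = rng.normal(0.0, np.sqrt(R_true), n)       # observation noise
y = np.cumsum(w) + v                          # noisy observations
d = np.diff(y)
c0 = d.var()                                  # lag-0 moment: Q + 2R
c1 = np.mean(d[1:] * d[:-1])                  # lag-1 moment: -R
R_hat = -c1
Q_hat = c0 - 2.0 * R_hat
print(round(Q_hat, 2), round(R_hat, 2))
```

Using products at more lags, as the paper's method allows, over-determines the same moments and improves robustness.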
Pollution concentration estimates in ecologically important zones
Energy Technology Data Exchange (ETDEWEB)
Skiba, Y.N. [Mexico City Univ. (Mexico). Center for Atmospheric Sciences
1995-12-31
A method based on the pollutant transport equation and the adjoint technique is described here for estimating the pollutant concentration level in ecologically important zones. The method directly relates the pollution level in such zones with the power of the pollution sources and the initial pollution field. Assuming that the wind or current velocities are known (from climatic data or a dynamic model), the main and adjoint pollutant transport equations can be considered in a limited area to solve such theoretically and practically important problems as: (1) optimal location of new industries in a given region with the aim of minimizing the pollution concentration in certain ecologically important zones, (2) optimization of emissions from operating industries, (3) detection of plants violating sanitary regulations, (4) analysis of the emissions coming from vehicle traffic (such emissions can be included in the model by means of linear pollution sources located along the main roadways), (5) estimation of oil pollution in various ecologically important oceanic (sea) zones in the case of an oil tanker accident, (6) evaluation of the sea water desalination level in estuary regions, and others. These equations, considered in a spherical shell domain, can also be applied to the problems of transporting pollutants from a huge industrial complex, or from the zone of an ecological catastrophe similar to the Chernobyl one.
CO2 Flux Estimation Errors Associated with Moist Atmospheric Processes
Parazoo, N. C.; Denning, A. S.; Kawa, S. R.; Pawson, S.; Lokupitiya, R.
2012-01-01
Vertical transport by moist sub-grid scale processes such as deep convection is a well-known source of uncertainty in CO2 source/sink inversion. However, a dynamical link between vertical transport, satellite based retrievals of column mole fractions of CO2, and source/sink inversion has not yet been established. By using the same offline transport model with meteorological fields from slightly different data assimilation systems, we examine sensitivity of frontal CO2 transport and retrieved fluxes to different parameterizations of sub-grid vertical transport. We find that frontal transport feeds off background vertical CO2 gradients, which are modulated by sub-grid vertical transport. The implication for source/sink estimation is two-fold. First, CO2 variations contained in moist poleward moving air masses are systematically different from variations in dry equatorward moving air. Moist poleward transport is hidden from orbital sensors on satellites, causing a sampling bias, which leads directly to small but systematic flux retrieval errors in northern mid-latitudes. Second, differences in the representation of moist sub-grid vertical transport in GEOS-4 and GEOS-5 meteorological fields cause differences in vertical gradients of CO2, which leads to systematic differences in moist poleward and dry equatorward CO2 transport and therefore the fraction of CO2 variations hidden in moist air from satellites. As a result, sampling biases are amplified and regional scale flux errors enhanced, most notably in Europe (0.43+/-0.35 PgC /yr). These results, cast from the perspective of moist frontal transport processes, support previous arguments that the vertical gradient of CO2 is a major source of uncertainty in source/sink inversion.
A Posteriori Error Estimation for Finite Element Methods and Iterative Linear Solvers
Energy Technology Data Exchange (ETDEWEB)
Melboe, Hallgeir
2001-10-01
This thesis addresses a posteriori error estimation for finite element methods and iterative linear solvers. Adaptive finite element methods have gained a lot of popularity over the last decades due to their ability to produce accurate results with limited computer power. In these methods a posteriori error estimates play an essential role. Not only do they give information about how large the total error is, they also indicate which parts of the computational domain should be given a more sophisticated treatment in order to reduce the error. A posteriori error estimates are traditionally aimed at estimating the global error, but more recently so-called goal oriented error estimators have attracted a lot of interest. The name reflects the fact that they estimate the error in user-defined local quantities. In this thesis the main focus is on global error estimators for highly stretched grids and goal oriented error estimators for flow problems on regular grids. Numerical methods for partial differential equations, such as finite element methods and other similar techniques, typically result in a linear system of equations that needs to be solved. Usually such systems are solved using some iterative procedure which due to a finite number of iterations introduces an additional error. Most such algorithms apply the residual in the stopping criterion, whereas the control of the actual error may be rather poor. A secondary focus in this thesis is on estimating the errors that are introduced during this last part of the solution procedure. The thesis contains new theoretical results regarding the behaviour of some well known, and a few new, a posteriori error estimators for finite element methods on anisotropic grids. Further, a goal oriented strategy for the computation of forces in flow problems is devised and investigated. Finally, an approach for estimating the actual errors associated with the iterative solution of linear systems of equations is suggested. (author)
Directory of Open Access Journals (Sweden)
S. Vathsal
1994-01-01
This paper provides an error model of the strapdown inertial navigation system in state-space format. A method to estimate the circular error probability is presented using time propagation of the error covariance matrix. Numerical results have been obtained for a typical flight trajectory. Sensitivity studies have also been conducted for variation of sensor noise covariances and initial state uncertainty. This methodology seems to work in all the practical cases considered so far. Software has been tested for both the local vertical frame and the inertial frame. The covariance propagation technique provides accurate estimation of dispersions of position at impact. This in turn enables the circular error probability (CEP) to be estimated very accurately.
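The final step has a compact numerical sketch: given a propagated 2x2 position-error covariance at impact, the CEP (the radius containing 50% of impacts) can be estimated by Monte Carlo and compared with the common approximation 0.589(sigma_x + sigma_y). The covariance values below are invented:

```python
import numpy as np

# Hedged sketch: CEP from an impact-point error covariance.
# The covariance numbers are illustrative only.
rng = np.random.default_rng(1)
P = np.array([[400.0,  50.0],
              [ 50.0, 225.0]])              # m^2, covariance at impact
pts = rng.multivariate_normal([0.0, 0.0], P, size=200_000)
r = np.hypot(pts[:, 0], pts[:, 1])          # miss distances
cep_mc = np.quantile(r, 0.5)                # median miss distance = CEP
sx, sy = np.sqrt(P[0, 0]), np.sqrt(P[1, 1])
cep_approx = 0.589 * (sx + sy)              # common closed-form estimate
print(round(cep_mc, 1), round(cep_approx, 1))
```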
Chang, Howard H; Peng, Roger D; Dominici, Francesca
2011-10-01
In air pollution epidemiology, there is a growing interest in estimating the health effects of coarse particulate matter (PM) with aerodynamic diameter between 2.5 and 10 μm. Coarse PM concentrations can exhibit considerable spatial heterogeneity because the particles travel shorter distances and do not remain suspended in the atmosphere for an extended period of time. In this paper, we develop a modeling approach for estimating the short-term effects of air pollution in time series analysis when the ambient concentrations vary spatially within the study region. Specifically, our approach quantifies the error in the exposure variable by characterizing, on any given day, the disagreement in ambient concentrations measured across monitoring stations. This is accomplished by viewing monitor-level measurements as error-prone repeated measurements of the unobserved population average exposure. Inference is carried out in a Bayesian framework to fully account for uncertainty in the estimation of model parameters. Finally, by using different exposure indicators, we investigate the sensitivity of the association between coarse PM and daily hospital admissions based on a recent national multisite time series analysis. Among Medicare enrollees from 59 US counties between the period 1999 and 2005, we find a consistent positive association between coarse PM and same-day admission for cardiovascular diseases.
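The attenuation caused by treating a single error-prone monitor as the population average exposure can be demonstrated in a few lines; all numbers are invented:

```python
import numpy as np

# Hedged sketch of the exposure-error idea: monitor readings are
# error-prone repeated measurements of an unobserved average exposure.
# A single monitor attenuates the estimated health-effect slope;
# averaging monitors reduces the attenuation.
rng = np.random.default_rng(2)
n_days, n_monitors, beta = 5000, 4, 0.8
x_true = rng.normal(10.0, 2.0, n_days)              # population exposure
monitors = x_true[:, None] + rng.normal(0.0, 2.0, (n_days, n_monitors))
y = beta * x_true + rng.normal(0.0, 1.0, n_days)    # health outcome

def slope(x, y):
    return np.cov(x, y)[0, 1] / np.var(x, ddof=1)

b_single = slope(monitors[:, 0], y)      # one monitor: attenuated
b_avg = slope(monitors.mean(axis=1), y)  # monitor average: less so
print(round(b_single, 2), round(b_avg, 2))
```

With these settings the single-monitor slope is attenuated toward 0.4 (half the true 0.8), while the four-monitor average recovers about 0.64; a Bayesian treatment, as in the paper, additionally propagates this uncertainty into the health-effect inference.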
Mannervik, B; Jakobson, I; Warholm, M
1986-01-01
Optimal design of experiments as well as proper analysis of data are dependent on knowledge of the experimental error. A detailed analysis of the error structure of kinetic data obtained with acetylcholinesterase showed conclusively that the classical assumptions of constant absolute or constant relative error are inadequate for the dependent variable (velocity). The best mathematical models for the experimental error involved the substrate and inhibitor concentrations and reflected the rate law for the initial velocity. Data obtained with other enzymes displayed similar relationships between experimental error and the independent variables. The new empirical error functions were shown to be superior to previously used models when utilized in weighted non-linear-regression analysis of kinetic data. The results suggest that, in the spectrophotometric assays used in the present study, the observed experimental variance is primarily due to errors in determination of the concentrations of substrate and inhibitor and not to error in measuring the velocity. PMID:3753447
MOTION ERROR ESTIMATION OF 5-AXIS MACHINING CENTER USING DBB METHOD
Institute of Scientific and Technical Information of China (English)
CHEN Huawei; ZHANG Dawei; TIAN Yanling; ICHIRO Hagiwara
2006-01-01
In order to estimate the motion errors of a 5-axis machining center, the double ball bar (DBB) method is adopted to realize the diagnosis procedure. The motion error sources of rotary axes in a 5-axis machining center comprise the alignment error of the rotary axes and the angular error due to various factors, e.g. the inclination of the rotary axes. From a sensitivity viewpoint, each motion error source can have a particular sensitive direction in which the deviation of the DBB error trace arises from only some specific error sources. The model of the DBB error trace is established according to spatial geometry theory. Accordingly, the sensitive direction of each motion error source is made clear through numerical simulation, which is used as the reference pattern for rotational error estimation. The estimation method is proposed to easily estimate the motion error sources of rotary axes in a quantitative manner. To verify the proposed DBB method for rotational error estimation, experimental tests are carried out on a 5-axis machining center M-400 (MORI SEIKI). The effect of the mismatch of the DBB is also studied to guarantee the estimation accuracy. From the experimental data, it is noted that the proposed estimation method for the 5-axis machining center is feasible and effective.
Linnet, K
1990-12-01
The linear relationship between the measurements of two methods is estimated on the basis of a weighted errors-in-variables regression model that takes into account a proportional relationship between the standard deviations of the error distributions and the true variable levels. Weights are estimated by an iterative procedure. As shown by simulations, the regression procedure yields practically unbiased slope estimates in realistic situations. Standard errors of the slope and location difference estimates are derived by the jackknife principle. For illustration, the linear relationship is estimated between the measurements of two albumin methods with proportional errors.
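The constant-variance-ratio special case of errors-in-variables regression (classical Deming regression, without the paper's iterative proportional weighting) can be sketched as:

```python
import numpy as np

# Hedged sketch: unweighted Deming regression with a constant error
# variance ratio lam = var(y errors) / var(x errors). The paper's
# procedure additionally iterates weights for proportional errors;
# this is the simpler constant-variance special case.
def deming(x, y, lam=1.0):
    x, y = np.asarray(x, float), np.asarray(y, float)
    sxx = np.var(x, ddof=1)
    syy = np.var(y, ddof=1)
    sxy = np.cov(x, y, ddof=1)[0, 1]
    b = (syy - lam * sxx + np.sqrt((syy - lam * sxx) ** 2
                                   + 4 * lam * sxy ** 2)) / (2 * sxy)
    a = y.mean() - b * x.mean()
    return a, b

# Two hypothetical albumin methods measuring the same samples (g/L):
x = [35.1, 40.2, 44.8, 50.3, 55.0, 60.1]
y = [36.0, 40.9, 45.2, 51.1, 55.8, 61.0]
a, b = deming(x, y)
print(round(a, 2), round(b, 3))
```

Unlike ordinary least squares, this fit acknowledges error in both methods, so the slope is not attenuated toward zero.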
Moderate Deviations for M-estimators in Linear Models with φ-mixing Errors
Institute of Scientific and Technical Information of China (English)
Jun FAN
2012-01-01
In this paper, the moderate deviations for the M-estimators of the regression parameter in a linear model are obtained when the errors form a strictly stationary φ-mixing sequence. The results are applied to study many different types of M-estimators, such as Huber's estimator, the Lp-regression estimator, the least squares estimator, and the least absolute deviation estimator.
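One member of the list, Huber's estimator, can be computed by iteratively reweighted least squares; the tuning constant and iteration count below are conventional choices, not from the paper:

```python
import numpy as np

# Hedged sketch: Huber M-estimation for y = X b + e via iteratively
# reweighted least squares (IRLS) with a MAD-based robust scale.
def huber_irls(X, y, c=1.345, n_iter=50):
    b = np.linalg.lstsq(X, y, rcond=None)[0]       # least-squares start
    for _ in range(n_iter):
        r = y - X @ b
        s = np.median(np.abs(r)) / 0.6745 + 1e-12  # robust scale (MAD)
        u = np.abs(r) / s
        w = np.where(u <= c, 1.0, c / (u + 1e-12)) # Huber weights
        W = np.diag(w)
        b = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)
    return b

rng = np.random.default_rng(3)
x = np.linspace(0, 10, 60)
y = 2.0 + 0.5 * x + rng.normal(0, 0.2, 60)
y[::10] += 8.0                                     # gross outliers
X = np.column_stack([np.ones_like(x), x])
b = huber_irls(X, y)
print(np.round(b, 2))
```

The outliers receive weights far below one, so the estimates stay close to the true intercept 2.0 and slope 0.5, unlike the initial least-squares fit.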
DEFF Research Database (Denmark)
Poulsen, Per Rugaard; Cho, Byungchul; Keall, Paul
2010-01-01
The mathematical formalism of the method includes an individualized measure of the position estimation error in terms of an estimated 1D Gaussian distribution for the unresolved target position [2]. The present study investigates how well this 1D Gaussian predicts the actual distribution of position estimation errors. […] This finding indicates that individualized root-mean-square errors and 95% confidence intervals can be applied reliably to the estimated target trajectories.
Energy Technology Data Exchange (ETDEWEB)
Ju, Lili; Tian, Li; Wang, Desheng
2008-10-31
In this paper, we present a residual-based a posteriori error estimate for the finite volume discretization of steady convection–diffusion–reaction equations defined on surfaces in R3, which are often implicitly represented as level sets of smooth functions. Reliability and efficiency of the proposed a posteriori error estimator are rigorously proved. Numerical experiments are also conducted to verify the theoretical results and demonstrate the robustness of the error estimator.
Institute of Scientific and Technical Information of China (English)
Ningning YAN; Zhaojie ZHOU
2008-01-01
In this paper, we study a posteriori error estimates of the edge stabilization Galerkin method for the constrained optimal control problem governed by convection-dominated diffusion equations. The residual-type a posteriori error estimators yield both upper and lower bounds for the control u measured in the L2-norm and for the state y and costate p measured in the energy norm. Two numerical examples are presented to illustrate the effectiveness of the error estimators provided in this paper.
Kukush, A.; Markovsky, I.; Van Huffel, S.
2002-01-01
Consistent estimators of the rank-deficient fundamental matrix yielding information on the relative orientation of two images in two-view motion analysis are derived. The estimators are derived by minimizing a corrected contrast function in a quadratic measurement error model. In addition, a consistent estimator for the measurement error variance is obtained. Simulation results show the improved accuracy of the newly proposed estimator compared to the ordinary total least-squares estimator.
Aerial measurement error with a dot planimeter: Some experimental estimates
Yuill, R. S.
1971-01-01
A shape analysis is presented which utilizes a computer to simulate a multiplicity of dot grids mathematically. Results indicate that the number of dots placed over an area to be measured accounts almost entirely for the accuracy of measurement, the indices of shape being of little significance. Equations and graphs are provided from which the average expected error, and the maximum range of error, for various numbers of dot points can be read.
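The simulation idea can be reproduced in miniature: overlay a square dot grid with a random offset on a known shape and estimate the area as (dots inside) times (cell area); a denser grid gives a smaller mean relative error:

```python
import math
import random

# Hedged sketch of the dot-planimeter simulation: a circle of radius 1
# in a 4x4 frame, measured by randomly offset square dot grids.
def dot_planimeter(spacing, trials=100, seed=4):
    rng = random.Random(seed)
    true_area = math.pi
    errs = []
    for _ in range(trials):
        ox, oy = rng.uniform(0, spacing), rng.uniform(0, spacing)
        count = 0
        x = -2.0 + ox
        while x <= 2.0:
            y = -2.0 + oy
            while y <= 2.0:
                if x * x + y * y <= 1.0:   # dot falls inside the circle
                    count += 1
                y += spacing
            x += spacing
        errs.append(abs(count * spacing ** 2 - true_area) / true_area)
    return sum(errs) / trials              # mean relative error

err_coarse = dot_planimeter(0.2)           # few dots over the area
err_fine = dot_planimeter(0.05)            # many dots over the area
print(err_coarse, err_fine)
```

Consistent with the abstract, accuracy here is governed by the dot count over the shape, not by the shape's outline details.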
Lang, Christapher G.; Bey, Kim S. (Technical Monitor)
2002-01-01
This research investigates residual-based a posteriori error estimates for finite element approximations of heat conduction in single-layer and multi-layered materials. The finite element approximation, based upon hierarchical modelling combined with p-version finite elements, is described with specific application to a two-dimensional, steady state, heat-conduction problem. Element error indicators are determined by solving an element equation for the error with the element residual as a source, and a global error estimate in the energy norm is computed by collecting the element contributions. Numerical results of the performance of the error estimate are presented by comparisons to the actual error. Two methods are discussed and compared for approximating the element boundary flux. The equilibrated flux method provides more accurate results for estimating the error than the average flux method. The error estimation is applied to multi-layered materials with a modification to the equilibrated flux method to approximate the discontinuous flux along a boundary at the material interfaces. A directional error indicator is developed which distinguishes between the hierarchical modeling error and the finite element error. Numerical results are presented for single-layered materials which show that the directional indicators accurately determine which contribution to the total error dominates.
On the Performance of Principal Component Liu-Type Estimator under the Mean Square Error Criterion
Directory of Open Access Journals (Sweden)
Jibo Wu
2013-01-01
Wu (2013) proposed the principal component Liu-type estimator to overcome multicollinearity. This estimator is a general estimator which includes the ordinary least squares estimator, principal component regression estimator, ridge estimator, Liu estimator, Liu-type estimator, r-k class estimator, and r-d class estimator. In this paper, we first use a new method to derive the principal component Liu-type estimator; then we study the superiority of the new estimator under the scalar mean squared error criterion. Finally, we give a numerical example to illustrate the theoretical results.
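The mean-squared-error comparison can be illustrated with plain ridge regression, the simplest member of the shrinkage family this estimator generalizes; under strong collinearity the shrunken estimator beats OLS in MSE:

```python
import numpy as np

# Hedged illustration (not the paper's estimator): with nearly collinear
# regressors, ridge shrinkage (k > 0) yields a smaller mean squared
# error of the coefficients than OLS (k = 0).
rng = np.random.default_rng(5)
n, beta = 50, np.array([1.0, 1.0])
x1 = rng.normal(0, 1, n)
x2 = x1 + rng.normal(0, 0.05, n)          # nearly collinear regressors
X = np.column_stack([x1, x2])

def coef_mse(k, reps=2000):
    err = 0.0
    for _ in range(reps):
        y = X @ beta + rng.normal(0, 1, n)
        b = np.linalg.solve(X.T @ X + k * np.eye(2), X.T @ y)
        err += np.sum((b - beta) ** 2)
    return err / reps

m_ols = coef_mse(0.0)
m_ridge = coef_mse(0.5)
print(round(m_ols, 2), round(m_ridge, 2))
```

OLS variance explodes along the near-singular direction of X'X, while the small ridge penalty suppresses it at the cost of a modest bias, lowering the total MSE.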
Shi, Yun; Xu, Peiliang; Peng, Junhuan; Shi, Chuang; Liu, Jingnan
2014-01-10
Modern observation technology has verified that measurement errors can be proportional to the true values of measurements, as in GPS, VLBI baselines, and LiDAR. Observational models of this type are called multiplicative error models. This paper extends the work of Xu and Shimada, published in 2000, on multiplicative error models to the analytical error analysis of quantities of practical interest and estimates of the variance of unit weight. We analytically derive the variance-covariance matrices of the three least squares (LS) adjustments, the adjusted measurements, and the corrections of measurements in multiplicative error models. For quality evaluation, we construct five estimators for the variance of unit weight in association with the three LS adjustment methods. Although LiDAR measurements are contaminated with multiplicative random errors, LiDAR-based digital elevation models (DEMs) have been constructed as if they were subject to additive random errors. We simulate a model landslide, which is assumed to be surveyed with LiDAR, and investigate the effect of LiDAR-type multiplicative error measurements on DEM construction and its effect on the estimate of landslide mass volume from the constructed DEM.
Analysis of the possible measurement errors for the PM10 concentration measurement at Gosan, Korea
Shin, S.; Kim, Y.; Jung, C.
2010-12-01
The reliability of the measurement of ambient trace species is an important issue, especially in a background area such as Gosan on Jeju Island, Korea. In a previous episodic study at Gosan (NIER, 2006), it was found that the PM10 concentration measured by the β-ray absorption method (BAM) was higher than that measured by the gravimetric method (GMM), and the correlation between them was low. Based on previous studies (Chang et al., 2001; Katsuyuki et al., 2008), two probable reasons for the discrepancy are identified: (1) negative measurement error from the evaporation of volatile ambient species, such as nitrate, chloride, and ammonium, at the filter in GMM, and (2) positive error from the absorption of water vapor during measurement in BAM. There was no heater at the inlet of the BAM at Gosan during the sampling period. In this study, we have analyzed the negative and positive errors quantitatively by using the gas/particle equilibrium model SCAPE (Simulating Composition of Atmospheric Particles at Equilibrium) for the data between May 2001 and June 2008 with the aerosol and gaseous composition data. We have estimated the degree of evaporation at the filter in GMM by comparing the volatile ionic species concentration calculated by SCAPE at the thermodynamic equilibrium state under the meteorological conditions during the sampling period with the mass concentration measured by ion chromatography. Also, based on the aerosol water content calculated by SCAPE, we have quantitatively estimated the effect of ambient humidity during measurement in BAM. Subsequently, this study examines whether the remaining discrepancy can be explained by other factors by applying multiple regression analyses. References: Chang, C. T., Tsai, C. J., Lee, C. T., Chang, S. Y., Cheng, M. T., Chein, H. M., 2001, Differences in PM10 concentrations measured by β-gauge monitor and hi-vol sampler, Atmospheric Environment, 35, 5741-5748. Katsuyuki, T. K., Hiroaki, M. R., and Kazuhiko, S. K., 2008, Examination of discrepancies between beta
Sensitivity of LIDAR Canopy Height Estimate to Geolocation Error
Tang, H.; Dubayah, R.
2010-12-01
Many factors affect the quality of canopy height structure data derived from space-based lidar such as DESDynI. Among these is geolocation accuracy. Inadequate geolocation information hinders subsequent analyses because a different portion of the canopy is observed relative to what is assumed. This is especially true in mountainous terrain where the effects of slope magnify geolocation errors. Mission engineering design must trade the expense of providing more accurate geolocation with the potential improvement in measurement accuracy. The objective of our work is to assess the effects of small errors in geolocation on subsequent retrievals of maximum canopy height for a varying set of canopy structures and terrains. Dense discrete lidar data from different forest sites (from La Selva Biological Station, Costa Rica, Sierra National Forest, California, and Hubbard Brook and Bartlett Experimental Forests in New Hampshire) are used to simulate DESDynI height retrievals using various geolocation accuracies. Results show that canopy height measurement errors generally increase as the geolocation error increases. Interestingly, most of the height errors are caused by variation of canopy height rather than topography (slope and aspect).
Energy Technology Data Exchange (ETDEWEB)
Kim, J. H. [Kyunghee Univ., Yongin (Korea, Republic of); Lee, W. K.; Jang, S. Y. [KAERI, Taejon (Korea, Republic of)
2002-10-01
A theoretical equation to estimate the indoor radon concentration can be derived by multi-regression analysis of the relationship between the indoor radon concentration and meteorological variables such as the temperature, pressure, and pressure difference in the same indoor space. The result of the multi-regression analysis showed that the indoor radon concentration is influenced mostly by the variation of the indoor temperature, and much less by the indoor pressure difference. The indoor radon concentration theoretically estimated in this study agreed well with that actually measured in the same indoor space, within the range of statistical error. Therefore, it is possible to estimate the indoor radon concentration by using the theoretical equation adopting the temperature and pressure difference of the same indoor space.
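The multi-regression step has a direct least-squares sketch; the coefficients and data below are invented for illustration, not the study's measurements:

```python
import numpy as np

# Hedged sketch of the multi-regression step: fit indoor radon
# concentration against indoor temperature and pressure difference,
# then predict from the fitted coefficients. All data are invented.
rng = np.random.default_rng(6)
n = 100
temp = rng.uniform(15, 30, n)              # deg C, indoor temperature
dp = rng.uniform(-5, 5, n)                 # Pa, indoor pressure difference
radon = 80.0 + 3.5 * temp + 0.4 * dp + rng.normal(0, 5, n)  # Bq/m^3

X = np.column_stack([np.ones(n), temp, dp])
coef, *_ = np.linalg.lstsq(X, radon, rcond=None)
pred = X @ coef                            # theoretical estimate
print(np.round(coef, 2))                   # intercept, temp, dp effects
```

In this synthetic setup the temperature coefficient dominates the pressure-difference coefficient, mirroring the abstract's finding.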
The Asymptotic Standard Errors of Some Estimates of Uncertainty in the Two-Way Contingency Table
Brown, Morton B.
1975-01-01
Estimates of conditional uncertainty, contingent uncertainty, and normed modifications of contingent uncertainty have been proposed for the two-way contingency table. The asymptotic standard errors of the estimates are derived. (Author)
Reid, Jane M; Keller, Lukas F; Marr, Amy B; Nietlisbach, Pirmin; Sardell, Rebecca J; Arcese, Peter
2014-03-01
Understanding the evolutionary dynamics of inbreeding and inbreeding depression requires unbiased estimation of inbreeding depression across diverse mating systems. However, studies estimating inbreeding depression often measure inbreeding with error, for example, based on pedigree data derived from observed parental behavior that ignore paternity error stemming from multiple mating. Such paternity error causes error in estimated coefficients of inbreeding (f) and reproductive success and could bias estimates of inbreeding depression. We used complete "apparent" pedigree data compiled from observed parental behavior and analogous "actual" pedigree data comprising genetic parentage to quantify effects of paternity error stemming from extra-pair reproduction on estimates of f, reproductive success, and inbreeding depression in free-living song sparrows (Melospiza melodia). Paternity error caused widespread error in estimates of f and male reproductive success, causing inbreeding depression in male and female annual and lifetime reproductive success and juvenile male survival to be substantially underestimated. Conversely, inbreeding depression in adult male survival tended to be overestimated when paternity error was ignored. Pedigree error stemming from extra-pair reproduction therefore caused substantial and divergent bias in estimates of inbreeding depression that could bias tests of evolutionary theories regarding inbreeding and inbreeding depression and their links to variation in mating system. © 2013 The Author(s). Evolution © 2013 The Society for the Study of Evolution.
Residual-based a posteriori error estimation for multipoint flux mixed finite element methods
Du, Shaohong
2015-10-26
A novel residual-type a posteriori error analysis technique is developed for multipoint flux mixed finite element methods for flow in porous media in two or three space dimensions. The derived a posteriori error estimator for the velocity and pressure error in L-norm consists of discretization and quadrature indicators, and is shown to be reliable and efficient. The main tools of analysis are a locally postprocessed approximation to the pressure solution of an auxiliary problem and a quadrature error estimate. Numerical experiments are presented to illustrate the competitive behavior of the estimator.
Winham, Stacey J; Motsinger-Reif, Alison A
2011-01-01
The standard in genetic association studies of complex diseases is replication and validation of positive results, with an emphasis on assessing the predictive value of associations. In response to this need, a number of analytical approaches have been developed to identify predictive models that account for complex genetic etiologies. Multifactor Dimensionality Reduction (MDR) is a commonly used, highly successful method designed to evaluate potential gene-gene interactions. MDR relies on classification error in a cross-validation framework to rank and evaluate potentially predictive models. Previous work has demonstrated the high power of MDR, but has not considered the accuracy and variance of the MDR prediction error estimate. Here, we evaluate the bias and variance of the MDR error estimate as both a retrospective and a prospective estimator and show that MDR can both underestimate and overestimate error. We argue that a prospective error estimate is necessary if MDR models are used for prediction, and propose a bootstrap resampling estimate, integrating population prevalence, to accurately estimate prospective error. We demonstrate that this bootstrap estimate is preferable for prediction to the error estimate currently produced by MDR. While demonstrated with MDR, the proposed estimation is applicable to all data-mining methods that use similar estimates.
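The prevalence-weighted bootstrap idea can be sketched in a few lines of Python. Everything here (the function name, the toy labels, the 1% prevalence) is a hypothetical illustration, not the authors' implementation: class-conditional error rates are estimated on bootstrap resamples of the case-control data and weighted by the population prevalence rather than the sample's case fraction.

```python
import random

def prospective_error(y_true, y_pred, prevalence, n_boot=500, seed=0):
    """Bootstrap estimate of prospective classification error: resample
    the case-control data, estimate class-conditional error rates, and
    weight them by population prevalence instead of the sample's case
    fraction (illustrative sketch, not the authors' code)."""
    rng = random.Random(seed)
    pairs = list(zip(y_true, y_pred))
    estimates = []
    for _ in range(n_boot):
        resample = [rng.choice(pairs) for _ in pairs]
        cases = [(t, p) for t, p in resample if t == 1]
        controls = [(t, p) for t, p in resample if t == 0]
        if not cases or not controls:
            continue  # degenerate resample: skip
        err_case = sum(t != p for t, p in cases) / len(cases)
        err_ctrl = sum(t != p for t, p in controls) / len(controls)
        estimates.append(prevalence * err_case + (1 - prevalence) * err_ctrl)
    return sum(estimates) / len(estimates)

# Balanced case-control sample where the classifier errs on 20% of cases
# and 10% of controls; at 1% prevalence the prospective error is ~0.101,
# far from the retrospective 15% error of the balanced sample.
y_true = [1] * 50 + [0] * 50
y_pred = [0] * 10 + [1] * 40 + [1] * 5 + [0] * 45
err = prospective_error(y_true, y_pred, prevalence=0.01)
```

The gap between the balanced-sample error (15%) and the prevalence-weighted error (~10%) is exactly why a prospective estimate matters for prediction.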
Száz, Dénes; Farkas, Alexandra; Barta, András; Kretzer, Balázs; Egri, Ádám; Horváth, Gábor
2016-07-01
The theory of sky-polarimetric Viking navigation has been widely accepted for decades without any information about the accuracy of this method. Previously, we have measured the accuracy of the first and second steps of this navigation method in psychophysical laboratory and planetarium experiments. Now, we have tested the accuracy of the third step in a planetarium experiment, assuming that the first and second steps are errorless. Using the fists of their outstretched arms, 10 test persons had to estimate the elevation angles (measured in numbers of fists and fingers) of black dots (representing the position of the occluded Sun) projected onto the planetarium dome. The test persons performed 2400 elevation estimations, 48% of which were more accurate than ±1°. We selected three test persons with the (i) largest and (ii) smallest elevation errors and (iii) highest standard deviation of the elevation error. From the errors of these three persons, we calculated their error function, from which the North errors (the angles with which they deviated from the geographical North) were determined for summer solstice and spring equinox, two specific dates of the Viking sailing period. The range of possible North errors ΔωN was the lowest and highest at low and high solar elevations, respectively. At high elevations, the maximal ΔωN was 35.6° and 73.7° at summer solstice and 23.8° and 43.9° at spring equinox for the best and worst test person (navigator), respectively. Thus, the best navigator was twice as good as the worst one. At solstice and equinox, high elevations occur the most frequently during the day, thus high North errors could occur more frequently than expected before. According to our findings, the ideal periods for sky-polarimetric Viking navigation are immediately after sunrise and before sunset, because the North errors are the lowest at low solar elevations.
A Bayesian Estimator for Linear Calibration Error Effects in Thermal Remote Sensing
Morgan, J A
2005-01-01
The Bayesian Land Surface Temperature estimator previously developed has been extended to include the effects of imperfectly known gain and offset calibration errors. It is possible to treat both gain and offset as nuisance parameters and, by integrating over an uninformative range for their magnitudes, eliminate the dependence of surface temperature and emissivity estimates upon the exact calibration error.
A POSTERIORI ERROR ESTIMATES IN ADINI FINITE ELEMENT FOR EIGENVALUE PROBLEMS
Institute of Scientific and Technical Information of China (English)
Yi-du Yang
2000-01-01
In this paper, we discuss a posteriori error estimates of the eigenvalue λh given by the Adini nonconforming finite element. We give an asymptotically exact error estimator for λh. We prove that the order of convergence of λh is just 2 and that λh converges from below for sufficiently small h.
Hickey, John M; Veerkamp, Roel F; Calus, Mario P L; Mulder, Han A; Thompson, Robin
2009-02-09
Calculation of the exact prediction error variance covariance matrix is often computationally too demanding, which limits its application in REML algorithms, the calculation of accuracies of estimated breeding values and the control of variance of response to selection. Alternatively, Monte Carlo sampling can be used to calculate approximations of the prediction error variance, which converge to the true values if enough samples are used. However, in practical situations the number of samples which are computationally feasible is limited. The objective of this study was to compare the convergence rate of different formulations of the prediction error variance calculated using Monte Carlo sampling. Four of these formulations were published, four were corresponding alternative versions, and two were derived as part of this study. The different formulations had different convergence rates and these were shown to depend on the number of samples and on the level of prediction error variance. Four formulations were competitive and these made use of information either on the variance of the estimated breeding value and the variance of the true breeding value minus the estimated breeding value, or on the covariance between the true and estimated breeding values.
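Two of the formulation families can be illustrated on the simplest possible model (one record per animal, known variances, BLUP shrinkage); the function, model, and parameter values are illustrative assumptions, not the formulations compared in the paper:

```python
import random, statistics

def mc_pev(sigma_g2=1.0, sigma_e2=1.0, n=20000, seed=1):
    """Monte Carlo approximations of prediction error variance (PEV) for
    the toy model y = g + e with BLUP shrinkage ghat = b*y. Returns two
    formulations: Var(g - ghat) and sigma_g2 - Cov(g, ghat); both
    converge to the exact PEV = sigma_g2*sigma_e2/(sigma_g2+sigma_e2)."""
    rng = random.Random(seed)
    b = sigma_g2 / (sigma_g2 + sigma_e2)       # BLUP regression coefficient
    g = [rng.gauss(0.0, sigma_g2 ** 0.5) for _ in range(n)]
    ghat = [b * (gi + rng.gauss(0.0, sigma_e2 ** 0.5)) for gi in g]
    # Formulation 1: variance of (true minus estimated) breeding value.
    pev_from_diff = statistics.variance([gi - gh for gi, gh in zip(g, ghat)])
    # Formulation 2: genetic variance minus Cov(true, estimated).
    mg, mh = statistics.fmean(g), statistics.fmean(ghat)
    cov = sum((gi - mg) * (gh - mh) for gi, gh in zip(g, ghat)) / (n - 1)
    pev_from_cov = sigma_g2 - cov
    return pev_from_diff, pev_from_cov

pev_a, pev_b = mc_pev()   # exact PEV is 0.5 for unit variances
```

With a finite number of samples the two formulations fluctuate differently around the exact value, which is the convergence-rate question the study addresses for realistic mixed models.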
Holmes, John B; Dodds, Ken G; Lee, Michael A
2017-03-02
An important issue in genetic evaluation is the comparability of random effects (breeding values), particularly between pairs of animals in different contemporary groups. This is usually referred to as genetic connectedness. While various measures of connectedness have been proposed in the literature, there is general agreement that the most appropriate measure is some function of the prediction error variance-covariance matrix. However, obtaining the prediction error variance-covariance matrix is computationally demanding for large-scale genetic evaluations. Many alternative statistics have been proposed that avoid the computational cost of obtaining the prediction error variance-covariance matrix, such as counts of genetic links between contemporary groups, gene flow matrices, and functions of the variance-covariance matrix of estimated contemporary group fixed effects. In this paper, we show that a correction to the variance-covariance matrix of estimated contemporary group fixed effects will produce the exact prediction error variance-covariance matrix averaged by contemporary group for univariate models in the presence of single or multiple fixed effects and one random effect. We demonstrate the correction for a series of models and show that approximations to the prediction error matrix based solely on the variance-covariance matrix of estimated contemporary group fixed effects are inappropriate in certain circumstances. Our method allows for the calculation of a connectedness measure based on the prediction error variance-covariance matrix by calculating only the variance-covariance matrix of estimated fixed effects. Since the number of fixed effects in genetic evaluation is usually orders of magnitude smaller than the number of random effect levels, the computational requirements for our method should be reduced.
Multi-satellite rainfall sampling error estimates – a comparative study
Directory of Open Access Journals (Sweden)
A. Loew
2012-10-01
This study focuses on quantifying sampling-related uncertainty in satellite rainfall estimates. We conduct an observing system simulation experiment to estimate the sampling error for various constellations of low-Earth-orbiting and geostationary satellites. Two types of microwave instruments are currently available: cross-track sounders and conical scanners. We evaluate the differences in sampling uncertainty for satellite constellations that carry instruments of a common type, as well as in combination with geostationary observations. A precise orbital model is used to simulate realistic satellite overpasses, with orbital shifts taken into account. With this model we resampled rain gauge time series to simulate satellite rainfall estimates free of retrieval and calibration errors. We concentrate on two regions, Germany and Benin, areas with different precipitation regimes. Our results show that the sampling uncertainty for all satellite constellations does not differ greatly between the two areas despite the differences in local precipitation patterns. The addition of 3-hourly geostationary observations provides an equal performance improvement in Germany and Benin, reducing rainfall undersampling by 20–25% of the total rainfall amount. We do not find a significant difference in rainfall sampling between conical imagers and cross-track sounders.
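The core of the resampling experiment is simple: observe a gauge series only at overpass times and compare the scaled-up total with the truth. This minimal sketch assumes idealized regular revisits and a synthetic rain series; the function name and settings are illustrative, not the authors' orbital model:

```python
import random

def sampling_error(rain, revisit):
    """Relative rainfall undersampling when an hourly gauge series is
    observed only every `revisit` hours (idealized overpasses, no
    retrieval or calibration error)."""
    sampled = rain[::revisit]
    estimated_total = sum(sampled) * revisit   # scale rate to full period
    true_total = sum(rain)
    return (true_total - estimated_total) / true_total

# Synthetic hourly series: mostly dry, occasional exponential showers.
rng = random.Random(42)
rain = [rng.expovariate(1.0) if rng.random() < 0.1 else 0.0
        for _ in range(24 * 365)]
err_3h = sampling_error(rain, 3)    # e.g. a 3-hourly geostationary view
err_12h = sampling_error(rain, 12)  # sparser LEO-like sampling
```

With a uniform rain series the sampling error vanishes exactly; intermittent rain is what makes sparse revisits miss or over-weight showers.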
An Unbiased Estimator of Peculiar Velocity with Gaussian Distributed Errors for Precision Cosmology
Watkins, Richard
2014-01-01
We introduce a new estimator of the peculiar velocity of a galaxy or group of galaxies from redshift and distance estimates. This estimator results in peculiar velocity estimates that are statistically unbiased and have errors that are Gaussian distributed, thus meeting the assumptions of analyses that rely on individual peculiar velocities. We apply this estimator to the SFI++ and Cosmicflows-2 catalogs of galaxy distances and, using the fact that peculiar velocity estimates of distant galaxies are error dominated, examine their error distributions. The adoption of the new estimator significantly improves the accuracy and validity of studies of the large-scale peculiar velocity field and eliminates potential systematic biases, thus helping to bring peculiar velocity analysis into the era of precision cosmology. In addition, our method of examining the distribution of velocity errors should provide a useful check of the statistics of large peculiar velocity catalogs, particularly those that are comp...
Institute of Scientific and Technical Information of China (English)
[No author listed]
2010-01-01
Subpixel centroid estimation is the most important star image location method for star trackers. This paper presents a theoretical analysis of the systematic error of the subpixel centroid estimation algorithm, utilizing frequency domain analysis under the consideration of sampling frequency limitation and sampling window limitation. An explicit expression for the systematic error of centroid estimation is obtained, and the dependence of the systematic error on the Gaussian width of the star image, the actual star centroid location, and the number of sampling pixels is derived. A systematic error compensation algorithm for star centroid estimation is proposed based on the result of the theoretical analysis. Simulation results show that after compensation, the residual systematic errors of 3-pixel and 5-pixel windows' centroid estimation are less than 2×10⁻³ pixels and 2×10⁻⁴ pixels, respectively.
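The systematic error being analyzed is easy to reproduce numerically. A minimal sketch, assuming a pixel-integrated 1-D Gaussian point-spread function and a plain center-of-mass estimator (the paper's frequency-domain analysis and compensation algorithm are not reproduced here):

```python
import math

def windowed_centroid(true_center, sigma=0.7, window=3):
    """Systematic error of a center-of-mass (centroid) estimate of a
    pixel-integrated 1-D Gaussian star image, using a `window`-pixel
    summation window centered on the brightest pixel."""
    k = window // 2
    center_pixel = round(true_center)
    pixels = range(center_pixel - k, center_pixel + k + 1)

    def pixel_value(i):
        # Flux collected by pixel i: Gaussian integrated over [i-.5, i+.5].
        lo = (i - 0.5 - true_center) / (sigma * math.sqrt(2.0))
        hi = (i + 0.5 - true_center) / (sigma * math.sqrt(2.0))
        return 0.5 * (math.erf(hi) - math.erf(lo))

    vals = [pixel_value(i) for i in pixels]
    centroid = sum(i * v for i, v in zip(pixels, vals)) / sum(vals)
    return centroid - true_center

# The error is an odd, periodic ("S-curve") function of subpixel phase.
errors = [windowed_centroid(10.0 + dx / 10.0) for dx in range(-5, 6)]
```

The error vanishes when the star sits exactly on a pixel center and is antisymmetric about it, which is the structure a compensation algorithm can exploit.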
Fast estimation of discretization error for FE problems solved by domain decomposition
Parret-Fréaud, Augustin; Gosselet, Pierre; Feyel, Frédéric; 10.1016/j.cma.2010.07.002
2012-01-01
This paper presents a strategy for a posteriori error estimation for substructured problems solved by non-overlapping domain decomposition methods. We focus on global estimates of the discretization error obtained through the error in the constitutive relation for linear mechanical problems. Our method makes it possible to compute the error estimate in a fully parallel way for both the primal (BDD) and dual (FETI) approaches of non-overlapping domain decomposition, whatever the state (converged or not) of the associated iterative solver. Results obtained on an academic problem show that the proposed strategy is efficient, in the sense that a correct estimation is obtained with fully parallel computations; they also indicate that the estimation of the discretization error reaches sufficient precision in very few iterations of the domain decomposition solver, which makes it possible to consider highly effective adaptive computational strategies.
Glaser, E M; Wilson, P D
1998-11-01
The optical fractionator is a design-based two-stage systematic sampling method that is used to estimate the number of cells in a specified region of an organ when the population is too large to count exhaustively. The fractionator counts the cells found in optical disectors that have been systematically sampled in serial sections. Heretofore, evaluations of optical fractionator performance have been made by performing tests on actual tissue sections, but it is difficult to evaluate the coefficient of error (CE), i.e. the precision of a population size estimate, by using biological tissue samples because they do not permit a comparison of an estimated CE with the true CE. However, computer simulation does permit making such comparisons while avoiding the observational biases inherent in working with biological tissue. This study is the first instance in which computer simulation has been applied to population size estimation by the optical fractionator. We used computer simulation to evaluate the performance of three CE estimators. The estimated CEs were evaluated in tests of three types of non-random cell population distribution and one random cell population distribution. The non-random population distributions varied by differences in 'intensity', i.e. the expected cell counts per disector, according to both section and disector location within the section. Two distributions were sinusoidal and one was linearly increasing; in all three there was a six-fold difference between the high and low intensities. The sinusoidal distributions produced either a peak or a depression of cell intensity at the centre of the simulated region. The linear cell intensity gradually increased from the beginning to the end of the region that contained the cells. The random population distribution had a constant intensity over the region. A 'test condition' was defined by its population distribution, the period between consecutive sampled sections and the spacing between consecutive
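The kind of comparison such a simulation enables can be sketched for a 1-D stand-in of the fractionator, assuming a linearly increasing cell intensity (one of the study's non-random test conditions); the section period, counts, and function names are illustrative:

```python
import random, statistics

def fractionator_estimate(cells, period, rng):
    """One fractionator-style estimate: count the cells in every
    period-th section (uniform random start) and scale the count by the
    inverse of the sampling fraction."""
    start = rng.randrange(period)
    return sum(cells[start::period]) * period

def simulated_ce(cells, period, n_rep=2000, seed=3):
    """'True' coefficient of error obtained by repeating the whole
    sampling scheme -- the comparison a simulation permits but a single
    run of tissue sections does not."""
    rng = random.Random(seed)
    est = [fractionator_estimate(cells, period, rng) for _ in range(n_rep)]
    return statistics.stdev(est) / statistics.fmean(est)

# Linearly increasing cell intensity along the region (cf. the linear
# test condition in the study).
cells = [10 + i for i in range(120)]
ce = simulated_ce(cells, period=10)
```

Because the population is known exactly, this "true" CE can be compared against any closed-form CE estimator, which is precisely the evaluation the tissue-based studies could not perform.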
Percent Errors in the Estimation of Demand for Secondary Items.
1985-11-01
percent errors, and the program change factor (PCF) to predict item demand during the procurement leadtime (PROLT) for the item. The PCF accounts for...type of demand it was. It may have been demanded over two years ago or it may have been a non-recurring demand. Since CC b only retains two years of...observed distributions could be compared with negative binomial distributions. For each item the computed ratio of actual demand to expected demand was
Nonparametric variance estimation in the analysis of microarray data: a measurement error approach.
Carroll, Raymond J; Wang, Yuedong
2008-01-01
This article investigates the effects of measurement error on the estimation of nonparametric variance functions. We show that either ignoring measurement error or direct application of the simulation extrapolation, SIMEX, method leads to inconsistent estimators. Nevertheless, the direct SIMEX method can reduce bias relative to a naive estimator. We further propose a permutation SIMEX method which leads to consistent estimators in theory. The performance of both SIMEX methods depends on approximations to the exact extrapolants. Simulations show that both SIMEX methods perform better than ignoring measurement error. The methodology is illustrated using microarray data from colon cancer patients.
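The plain (non-permutation) SIMEX recipe can be sketched for the simplest case, a regression slope: add extra measurement error at increasing levels λ, refit, and extrapolate the trend back to λ = −1. This is a generic textbook sketch under a known error standard deviation and a quadratic extrapolant through λ = 0, 1, 2, not the authors' variance-function method:

```python
import random, statistics

def ols_slope(x, y):
    mx, my = statistics.fmean(x), statistics.fmean(y)
    return (sum((a - mx) * (b - my) for a, b in zip(x, y))
            / sum((a - mx) ** 2 for a in x))

def simex_slope(w, y, sigma_u, n_rep=200, seed=0):
    """SIMEX slope correction: refit after adding extra measurement
    error at lambda = 1 and 2, then extrapolate the quadratic through
    lambda = 0, 1, 2 back to lambda = -1 (the error-free case)."""
    rng = random.Random(seed)
    s = [ols_slope(w, y)]                     # lambda = 0: naive fit
    for lam in (1.0, 2.0):
        reps = []
        for _ in range(n_rep):
            w_star = [wi + rng.gauss(0.0, (lam ** 0.5) * sigma_u)
                      for wi in w]
            reps.append(ols_slope(w_star, y))
        s.append(statistics.fmean(reps))
    s0, s1, s2 = s
    return 3.0 * s0 - 3.0 * s1 + s2           # quadratic value at -1

# True slope 2; x observed as w with error SD 0.5, so the naive fit
# attenuates toward 2/1.25 = 1.6.
rng = random.Random(1)
x = [rng.gauss(0.0, 1.0) for _ in range(2000)]
w = [xi + rng.gauss(0.0, 0.5) for xi in x]
y = [2.0 * xi + rng.gauss(0.0, 0.2) for xi in x]
naive = ols_slope(w, y)
corrected = simex_slope(w, y, sigma_u=0.5)
```

The quadratic extrapolant is only an approximation to the exact attenuation curve, which is one reason direct SIMEX reduces, rather than eliminates, bias.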
Institute of Scientific and Technical Information of China (English)
Xiaogu ZHENG
2009-01-01
An adaptive estimation of forecast error covariance matrices is proposed for Kalman filtering data assimilation. A forecast error covariance matrix is initially estimated using an ensemble of perturbation forecasts. This initially estimated matrix is then adjusted with scale parameters that are adaptively estimated by minimizing the −2 log-likelihood of the observed-minus-forecast residuals. The proposed approach can be applied to Kalman filtering data assimilation with imperfect models when the model error statistics are not known. A simple nonlinear model (the Burgers' equation model) is used to demonstrate the efficacy of the proposed approach.
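A scalar caricature of the adjustment step, assuming a single scale parameter and Gaussian residuals, with a grid search standing in for a proper optimizer; the variances and function name are illustrative assumptions:

```python
import math, random

def estimate_scale(residuals, pf, r):
    """Scale parameter on the forecast error variance chosen by
    minimizing -2 log-likelihood of observed-minus-forecast residuals;
    their model variance is s*pf + r (scalar analogue of the matrix
    adjustment in the paper)."""
    def neg2loglik(s):
        var = s * pf + r
        return sum(math.log(2.0 * math.pi * var) + d * d / var
                   for d in residuals)
    grid = [0.1 * k for k in range(1, 101)]   # s in 0.1 .. 10.0
    return min(grid, key=neg2loglik)

# Residuals generated with a forecast variance three times larger than
# the initial ensemble estimate pf suggests.
rng = random.Random(7)
pf, r, true_scale = 1.0, 0.5, 3.0
residuals = [rng.gauss(0.0, math.sqrt(true_scale * pf + r))
             for _ in range(5000)]
s_hat = estimate_scale(residuals, pf, r)   # should land near 3.0
```

The appeal of the approach is visible even in this caricature: the scale is recovered from the innovations alone, without knowing the model error statistics.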
Sonderegger, Derek L; Wang, Haonan; Huang, Yao; Clements, William H
2009-10-01
The effect that measurement error of predictor variables has on regression inference is well known in the statistical literature. However, the influence of measurement error on the ability to quantify relationships between chemical stressors and biological responses has received little attention in ecotoxicology. We present a common data-collection scenario and demonstrate that the relationship between explanatory and response variables is consistently underestimated when measurement error is ignored. A straightforward extension of the regression calibration method is to use a nonparametric method to smooth the predictor variable with respect to another covariate (e.g., time) and using the smoothed predictor to estimate the response variable. We conducted a simulation study to compare the effectiveness of the proposed method to the naive analysis that ignores measurement error. We conclude that the method satisfactorily addresses the problem when measurement error is moderate to large, and does not result in a noticeable loss of power in the case where measurement error is absent.
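The proposed correction can be caricatured with a moving average standing in for the nonparametric smoother over time; the simulation settings (slope 1.5, sinusoidal stressor, error variances) are illustrative assumptions, not the study's design:

```python
import math, random, statistics

def moving_average(x, k):
    """Moving-average smoother standing in for the nonparametric smooth
    of the stressor with respect to time (an illustrative assumption)."""
    half = k // 2
    return [statistics.fmean(x[max(0, i - half):i + half + 1])
            for i in range(len(x))]

def slope(x, y):
    mx, my = statistics.fmean(x), statistics.fmean(y)
    return (sum((a - mx) * (b - my) for a, b in zip(x, y))
            / sum((a - mx) ** 2 for a in x))

# Smooth chemical stressor over time, measured with error; the biological
# response depends on the true stressor with slope 1.5.
rng = random.Random(5)
n = 400
true_stressor = [2.0 * math.sin(2.0 * math.pi * i / 100.0) for i in range(n)]
measured = [s + rng.gauss(0.0, 1.0) for s in true_stressor]
response = [1.5 * s + rng.gauss(0.0, 0.3) for s in true_stressor]

naive = slope(measured, response)                   # attenuated toward zero
calibrated = slope(moving_average(measured, 15), response)
```

Smoothing the predictor over time removes most of the measurement noise before the regression, recovering much of the attenuated stressor-response slope.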
Institute of Scientific and Technical Information of China (English)
Danping Yang; Yanzhen Chang; Wenbin Liu
2008-01-01
In this paper, we investigate a priori error estimates and superconvergence properties for a model optimal control problem of bilinear type, which includes some parameter estimation applications. The state and co-state are discretized by piecewise linear functions and the control is approximated by piecewise constant functions. We derive a priori error estimates and a superconvergence analysis for both the control and the state approximations. We also give the optimal L2-norm error estimates and the almost optimal L∞-norm estimates for the state and co-state. The results can be readily used for constructing a posteriori error estimators in adaptive finite element approximation of such optimal control problems.
Eppenhof, Koen A. J.; Pluim, Josien P. W.
2017-02-01
Error estimation in medical image registration is valuable when validating, comparing, or combining registration methods. To validate a nonlinear image registration method, ideally the registration error should be known for the entire image domain. We propose a supervised method for the estimation of a registration error map for nonlinear image registration. The method is based on a convolutional neural network that estimates the norm of the residual deformation from patches around each pixel in two registered images. This norm is interpreted as the registration error, and is defined for every pixel in the image domain. The network is trained using a set of artificially deformed images. Each training example is a pair of images: the original image, and a random deformation of that image. No manually labeled ground truth error is required. At test time, only the two registered images are required as input. We train and validate the network on registrations in a set of 2D digital subtraction angiography sequences, such that errors up to eight pixels can be estimated. We show that for this range of errors the convolutional network is able to learn the registration error in pairs of 2D registered images at subpixel precision. Finally, we present a proof of principle for the extension to 3D registration problems in chest CTs, showing that the method has the potential to estimate errors in 3D registration problems.
Multiclass Bayes error estimation by a feature space sampling technique
Mobasseri, B. G.; Mcgillem, C. D.
1979-01-01
A general Gaussian M-class, N-feature classification problem is defined. An algorithm is developed that requires the class statistics as its only input and computes the minimum probability of error through a combined analytical and numerical integration over a sequence of simplifying transformations of the feature space. The results are compared with those obtained by conventional techniques applied to a 2-class, 4-feature discrimination problem with previously reported results, and to 4-class, 4-feature multispectral scanner Landsat data classified by training and testing on the available data.
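For intuition, the quantity being computed, the minimum (Bayes) probability of error, can be checked by brute-force Monte Carlo instead of the paper's transformed-space integration. This sketch assumes the simplest case: two equally likely 1-D Gaussian classes with a common variance, where a closed form exists for comparison:

```python
import math, random

def mc_bayes_error(mu0, mu1, sigma, n=100_000, seed=2):
    """Monte Carlo estimate of the Bayes error for two equally likely
    1-D Gaussian classes with common sigma: draw a labeled sample,
    classify by the higher likelihood, and count mistakes."""
    rng = random.Random(seed)
    errors = 0
    for _ in range(n):
        is_class1 = rng.random() < 0.5
        x = rng.gauss(mu1 if is_class1 else mu0, sigma)
        # Equal priors, equal sigma: pick the nearer mean.
        decide_class1 = abs(x - mu1) < abs(x - mu0)
        errors += decide_class1 != is_class1
    return errors / n

err = mc_bayes_error(0.0, 2.0, 1.0)
# Closed form: Phi(-d/2) with d = |mu1 - mu0| / sigma.
exact = 0.5 * math.erfc((2.0 / 2.0) / math.sqrt(2.0))
```

Monte Carlo scales poorly with the accuracy required, which is why analytical or semi-analytical integration schemes like the one in the paper are attractive for multiclass, multifeature problems.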
LiDAR error estimation with WAsP engineering
DEFF Research Database (Denmark)
Bingöl, Ferhat; Mann, Jakob; Foussekis, D.
2008-01-01
LiDAR measurements of the vertical wind profile at any height between 10 and 150 m are based on the assumption that the measured wind is homogeneous. In reality there are many factors affecting the wind at each measurement point, among which the terrain plays the main role. To model LiDAR measurements and predict the possible error in different wind directions for a certain terrain, we have analyzed two experimental data sets from Greece. At both sites LiDAR and met mast data have been collected and the same conditions simulated with the Riso/DTU software WAsP Engineering 2.0. Finally, measurement...
A posteriori error estimator and AMR for discrete ordinates nodal transport methods
Energy Technology Data Exchange (ETDEWEB)
Duo, Jose I. [The Pennsylvania State University, 138 Reber Bldg, University Park (United States); Azmy, Yousry Y. [The Pennsylvania State University, 229 Reber Bldg, University Park (United States); Zikatanov, Ludmil T. [The Pennsylvania State University, 218 McAllister Bldg, University Park (United States)
2008-07-01
In the development of high fidelity transport solvers, optimization of the use of available computational resources and access to a tool for assessing quality of the solution are key to the success of large-scale nuclear systems' simulation. Error control provides the analyst with a confidence level in the numerical solution and enables optimization of resources through Adaptive Mesh Refinement (AMR). In this paper, we derive an a posteriori error estimator based on the nodal solution of the Arbitrarily High Order Transport Method of the Nodal type (AHOT-N). Furthermore, by making assumptions on the regularity of the solution, we represent the error estimator as a function of computable volume and element-edge residuals. The global L{sub 2} error norm is proved to be bounded by the estimator. To lighten the computational load, we present a numerical approximation to the aforementioned residuals and split the global norm error estimator into local error indicators. These indicators are used to drive an AMR strategy for the spatial discretization. However, the indicators based on forward solution residuals alone do not bound the cell-wise error. The estimator and AMR strategy are tested in two problems featuring strong heterogeneity and a highly streaming transport regime with strong flux gradients. The results show that the error estimator indeed bounds the global error norms and that the error indicator follows the cell error's spatial distribution pattern closely. The AMR strategy proves beneficial for optimizing resources, primarily by reducing the number of unknowns solved for to achieve a prescribed solution accuracy in the global L{sub 2} error norm. Likewise, AMR achieves higher accuracy compared to uniform refinement when resolving sharp flux gradients, for the same number of unknowns. (authors)
A posteriori error estimator and AMR for discrete ordinates nodal transport methods
Energy Technology Data Exchange (ETDEWEB)
Duo, Jose I. [Westinghouse Electric Co., 4350 Northern Pike, Monroeville, PA 15146 (United States)], E-mail: duoji@westinghouse.com; Azmy, Yousry Y. [North Carolina State University, 1110 Burlington Lab., Raleigh, NC 27695-7909 (United States)], E-mail: yyazmy@ncsu.edu; Zikatanov, Ludmil T. [The Pennsylvania State University, 218 McAllister Bldg, University Park (United States)
2009-04-15
In the development of high fidelity transport solvers, optimization of the use of available computational resources and access to a tool for assessing quality of the solution are key to the success of large-scale nuclear systems' simulation. In this regard, error control provides the analyst with a confidence level in the numerical solution and enables optimization of resources through Adaptive Mesh Refinement (AMR). In this paper, we derive an a posteriori error estimator based on the nodal solution of the Arbitrarily High Order Transport Method of the Nodal type (AHOT-N). Furthermore, by making assumptions on the regularity of the solution, we represent the error estimator as a function of computable volume and element-edge residuals. The global L{sub 2} error norm is proved to be bounded by the estimator. To lighten the computational load, we present a numerical approximation to the aforementioned residuals and split the global norm error estimator into local error indicators. These indicators are used to drive an AMR strategy for the spatial discretization. However, the indicators based on forward solution residuals alone do not bound the cell-wise error. The estimator and AMR strategy are tested in two problems featuring strong heterogeneity and a highly streaming transport regime with strong flux gradients. The results show that the error estimator indeed bounds the global error norms and that the error indicator follows the cell error's spatial distribution pattern closely. The AMR strategy proves beneficial for optimizing resources, primarily by reducing the number of unknowns solved for to achieve prescribed solution accuracy in the global L{sub 2} error norm. Likewise, AMR achieves higher accuracy compared to uniform refinement when resolving sharp flux gradients, for the same number of unknowns.
Maximum Likelihood Approach for RFID Tag Set Cardinality Estimation with Detection Errors
DEFF Research Database (Denmark)
Nguyen, Chuyen T.; Hayashi, Kazunori; Kaneko, Megumi
2013-01-01
Estimation schemes for Radio Frequency IDentification (RFID) tag set cardinality are studied in this paper using a Maximum Likelihood (ML) approach. We consider the estimation problem under a model of multiple independent reader sessions with detection errors due to unreliable radio… is evaluated under different system parameters and compared with that of the conventional method via computer simulations, assuming flat Rayleigh fading environments and a framed-slotted ALOHA based protocol. Keywords: RFID; tag cardinality estimation; maximum likelihood; detection error…
Error Estimates Derived from the Data for Least-Squares Spline Fitting
Energy Technology Data Exchange (ETDEWEB)
Jerome Blair
2007-06-25
The use of least-squares fitting by cubic splines for the purpose of noise reduction in measured data is studied. Splines with variable mesh size are considered. The error, the difference between the input signal and its estimate, is divided into two sources: the R-error, which depends only on the noise and increases with decreasing mesh size, and the F-error, which depends only on the signal and decreases with decreasing mesh size. The estimation of both errors as a function of time is demonstrated. The R-error estimation requires knowledge of the statistics of the noise and uses well-known methods. The primary contribution of the paper is a method for estimating the F-error that requires no prior knowledge of the signal except that it has four derivatives. It is calculated from the difference between two different spline fits to the data and is illustrated with Monte Carlo simulations and with an example.
Sliding mode output feedback control based on tracking error observer with disturbance estimator.
Xiao, Lingfei; Zhu, Yue
2014-07-01
For a class of systems that suffer from disturbances, an original output feedback sliding mode control method is presented, based on a novel tracking error observer with a disturbance estimator. The mathematical models of the systems are not required to be highly accurate, and the disturbances can be vanishing or nonvanishing, while the bounds of the disturbances are unknown. By constructing a differential sliding surface and employing a reaching-law approach, a sliding mode controller is obtained. On the basis of an extended disturbance estimator, a new tracking error observer is constructed. By using the observation of the tracking error and the estimation of the disturbance, the sliding mode controller is implementable. It is proved that the disturbance estimation error and the tracking observation error are bounded, the sliding surface is reachable, and the closed-loop system is robustly stable. Simulations on a servomotor positioning system and a five-degree-of-freedom active magnetic bearing system verify the effectiveness of the proposed method.
An Adaptive Finite Element Method Based on Optimal Error Estimates for Linear Elliptic Problems
Institute of Scientific and Technical Information of China (English)
汤雁
2004-01-01
This paper is the third in a series on adaptive finite element methods based on optimal error estimates for linear elliptic problems on concave corner domains. In the preceding two papers (part 1: Adaptive finite element method based on optimal error estimate for linear elliptic problems on concave corner domain; part 2: Adaptive finite element method based on optimal error estimate for linear elliptic problems on nonconvex polygonal domains), we presented adaptive finite element methods based on the energy norm and the maximum norm. In this paper, an important result is presented and analyzed: the algorithms for error control in the energy norm and the maximum norm in parts 1 and 2 of this series are based on this result.
Approach for wideband direction-of-arrival estimation in the presence of array model errors
Institute of Scientific and Technical Information of China (English)
Chen Deli; Zhang Cong; Tao Huamin; Lu Huanzhang
2009-01-01
The presence of array imperfections and mutual coupling in sensor arrays poses several challenges for the development of effective algorithms for the direction-of-arrival (DOA) estimation problem in array processing. A correlation-domain wideband DOA estimation algorithm that requires no array calibration is proposed to deal with these array model errors, using an arbitrary antenna array of omnidirectional elements. By using matrix operators that have memory and oblivion characteristics, the algorithm can separate the incident signals effectively. Compared with other typical subspace-based wideband DOA estimation algorithms, this algorithm achieves robust DOA estimation with respect to position error, gain-phase error, and mutual coupling by utilizing a relaxation technique based on signal separation. The signal separation category and the robustness of the algorithm to array model errors are analyzed and proved. The validity and robustness of the algorithm in the presence of array model errors are confirmed by theoretical analysis and simulation results.
PERFORMANCE OF THE ZERO FORCING PRECODING MIMO BROADCAST SYSTEMS WITH CHANNEL ESTIMATION ERRORS
Institute of Scientific and Technical Information of China (English)
Wang Jing; Liu Zhanli; Wang Yan; You Xiaohu
2007-01-01
In this paper, the effect of channel estimation errors on Zero Forcing (ZF) precoding Multiple Input Multiple Output Broadcast (MIMO BC) systems is studied. Based on two kinds of Gaussian estimation error models, the performance analysis is conducted under different power allocation strategies. Analysis and simulation show that if the covariance of the channel estimation errors is independent of the received Signal to Noise Ratio (SNR), imperfect channel knowledge severely deteriorates the sum capacity and the Bit Error Rate (BER) performance. However, with orthogonal training and Minimum Mean Square Error (MMSE) channel estimation, the sum capacity and BER performance are consistent with those under perfect Channel State Information (CSI), with only a performance degradation.
Automatic Estimation of Verified Floating-Point Round-Off Errors via Static Analysis
Moscato, Mariano; Titolo, Laura; Dutle, Aaron; Munoz, Cesar A.
2017-01-01
This paper introduces a static analysis technique for computing formally verified round-off error bounds of floating-point functional expressions. The technique is based on a denotational semantics that computes a symbolic estimation of floating-point round-off errors along with a proof certificate that ensures its correctness. The symbolic estimation can be evaluated on concrete inputs using rigorous enclosure methods to produce formally verified numerical error bounds. The proposed technique is implemented in the prototype research tool PRECiSA (Program Round-off Error Certifier via Static Analysis) and used in the verification of floating-point programs of interest to NASA.
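For context, the simplest non-verified analogue of such round-off bounds is the standard-model forward-error bound for a dot product. The sketch below uses plain NumPy, not PRECiSA, and merely illustrates the kind of quantity a static analyzer certifies:

```python
import numpy as np

def dot_roundoff_bound(x, y):
    # Standard-model forward-error bound for a recursively summed dot
    # product: |fl(x.y) - x.y| <= gamma_n * |x|.|y|, gamma_n = n*u/(1 - n*u),
    # where u is the unit roundoff of IEEE double precision.
    u = np.finfo(np.float64).eps / 2
    n = x.size
    gamma_n = n * u / (1 - n * u)
    return gamma_n * float(np.dot(np.abs(x), np.abs(y)))

rng = np.random.default_rng(1)
x = rng.normal(size=1000)
y = rng.normal(size=1000)

# A higher-precision evaluation stands in for the exact value.
reference = float(np.dot(x.astype(np.longdouble), y.astype(np.longdouble)))
computed = float(np.dot(x, y))
bound = dot_roundoff_bound(x, y)

print(abs(computed - reference), bound)
```

The observed error is typically far below the bound, since the bound is a worst case over all rounding patterns; verified tools tighten and certify such bounds per expression.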
Influences of observation errors in eddy flux data on inverse model parameter estimation
Directory of Open Access Journals (Sweden)
G. Lasslop
2008-09-01
Eddy covariance data are increasingly used to estimate parameters of ecosystem models. For proper maximum likelihood parameter estimates the error structure in the observed data has to be fully characterized. In this study we propose a method to characterize the random error of the eddy covariance flux data, and analyse error distribution, standard deviation, cross- and autocorrelation of CO_{2} and H_{2}O flux errors at four different European eddy covariance flux sites. Moreover, we examine how the treatment of those errors and additional systematic errors influence statistical estimates of parameters and their associated uncertainties with three models of increasing complexity – a hyperbolic light response curve, a light response curve coupled to water fluxes and the SVAT scheme BETHY. In agreement with previous studies we find that the error standard deviation scales with the flux magnitude. The previously found strongly leptokurtic error distribution is revealed to be largely due to a superposition of almost Gaussian distributions with standard deviations varying by flux magnitude. The cross-correlations of CO_{2} and H_{2}O fluxes were in all cases negligible (R^{2} below 0.2), while the autocorrelation is usually below 0.6 at a lag of 0.5 h and decays rapidly at larger time lags. This implies that in these cases the weighted least squares criterion yields maximum likelihood estimates. To study the influence of the observation errors on model parameter estimates we used synthetic datasets, based on observations of two different sites. We first fitted the respective models to observations and then added the random error estimates described above and the systematic error, respectively, to the model output. This strategy enables us to compare the estimated parameters with true parameters. We illustrate that the correct implementation of the random error standard deviation scaling with flux
ErrorCheck: A New Method for Controlling the Accuracy of Pose Estimates
DEFF Research Database (Denmark)
Holm, Preben Hagh Strunge; Petersen, Henrik Gordon
2011-01-01
In this paper, we present ErrorCheck, a new method for controlling the accuracy of a computer-vision-based pose refinement method. ErrorCheck consists of a way of validating the robustness of a pose refinement method towards false correspondences and a way of controlling the accuracy...... of a validated pose refinement method. ErrorCheck uses a theoretical estimate of the pose error covariance both for validating robustness and for controlling the accuracy. We illustrate the first usage of ErrorCheck by applying it to state-of-the-art methods for pose refinement and some variations of these methods......
Hall, Eric
2016-01-09
The Monte Carlo (and Multi-level Monte Carlo) finite element method can be used to approximate observables of solutions to diffusion equations with lognormally distributed diffusion coefficients, e.g. modeling ground water flow. Typical models use lognormal diffusion coefficients with Hölder regularity of order up to 1/2 a.s. This low regularity implies that the high frequency finite element approximation error (i.e. the error from frequencies larger than the mesh frequency) is not negligible and can be larger than the computable low frequency error. We address how the total error can be estimated by the computable error.
Sandberg, Mattias
2015-01-07
The Monte Carlo (and Multi-level Monte Carlo) finite element method can be used to approximate observables of solutions to diffusion equations with lognormally distributed diffusion coefficients, e.g. modelling ground water flow. Typical models use lognormal diffusion coefficients with Hölder regularity of order up to 1/2 a.s. This low regularity implies that the high frequency finite element approximation error (i.e. the error from frequencies larger than the mesh frequency) is not negligible and can be larger than the computable low frequency error. This talk will address how the total error can be estimated by the computable error.
The effect of errors-in-variables on variance component estimation
Xu, Peiliang
2016-08-01
Although total least squares (TLS) has been widely applied, variance components in an errors-in-variables (EIV) model can be inestimable under certain conditions and unstable in the sense that small random errors can result in very large errors in the estimated variance components. We investigate the effect of the random design matrix on variance component (VC) estimation of MINQUE type by treating the design matrix as if it were error-free, derive the first-order bias of the VC estimate, and construct bias-corrected VC estimators. As a special case, we obtain a bias-corrected estimate for the variance of unit weight. Although TLS methods are statistically rigorous, they can be computationally too expensive. We directly Taylor-expand the nonlinear weighted LS estimate of parameters up to the second-order approximation in terms of the random errors of the design matrix, derive the bias of the estimate, and use it to construct a bias-corrected weighted LS estimate. Bearing in mind that the random errors of the design matrix will create a bias in the normal matrix of the weighted LS estimate, we propose to calibrate the normal matrix by computing and then removing the bias from the normal matrix. As a result, we can obtain a new parameter estimate, which is called the N-calibrated weighted LS estimate. The simulations have shown that (i) errors-in-variables have a significant effect on VC estimation, if they are large/significant but treated as non-random. The variance components can be incorrectly estimated by more than one order of magnitude, depending on the nature of problems and the sizes of EIV; (ii) the bias-corrected VC estimate can effectively remove the bias of the VC estimate. If the signal-to-noise ratio is small, higher order terms may be necessary. Nevertheless, since we construct the bias-corrected VC estimate by directly removing the estimated bias from the estimate itself, the simulation results have clearly indicated that there is a great risk to obtain
Error-Bars in Semi-Parametric Estimation
Van Ormondt, D.; Van der Veen, J.W.C.; Sima, D.M.; Graveron-Demilly, D.
In in vivo metabolite-quantitation with a magnetic resonance spectroscopy (MRS) scanner, the model function of the attendant MRS signal is often only partly known. This unfavourable condition requires semi-parametric estimation. In the present study the unknown part is the form of the decay function
Konings, A. G.; Gruber, A.; Mccoll, K. A.; Alemohammad, S. H.; Entekhabi, D.
2015-12-01
Validating large-scale estimates of geophysical variables by comparing them to in situ measurements neglects the fact that these in situ measurements are not generally representative of the larger area. That is, in situ measurements contain some "representativeness error". They also have their own sensor errors. The naïve approach of characterizing the errors of a remote sensing or modeling dataset by comparison to in situ measurements thus leads to error estimates that are spuriously inflated by the representativeness and other errors in the in situ measurements. Nevertheless, this naïve approach is still very common in the literature. In this work, we introduce an alternative estimator of the large-scale dataset error that explicitly takes into account the fact that the in situ measurements have some unknown error. The performance of the two estimators is then compared in the context of soil moisture datasets under different conditions for the true soil moisture climatology and dataset biases. The new estimator is shown to lead to a more accurate characterization of the dataset errors under the most common conditions. If a third dataset is available, the principles of the triple collocation method can be used to determine the errors of both the large-scale estimates and in situ measurements. However, triple collocation requires that the errors in all datasets are uncorrelated with each other and with the truth. We show that even when the assumptions of triple collocation are violated, a triple collocation-based validation approach may still be more accurate than a naïve comparison to in situ measurements that neglects representativeness errors.
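The contrast between naïve validation and triple collocation can be illustrated on synthetic data. The covariance form of triple collocation below is the standard one; the soil-moisture climatology and error standard deviations are assumed purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 100_000
truth = rng.normal(0.25, 0.05, n)          # synthetic soil-moisture "truth"

# Three collocated datasets with mutually independent zero-mean errors.
sat = truth + rng.normal(0.0, 0.04, n)     # large-scale (satellite) estimate
model = truth + rng.normal(0.0, 0.03, n)   # model estimate
insitu = truth + rng.normal(0.0, 0.02, n)  # in situ, incl. representativeness error

def tc_error_variance(x, y, z):
    # Covariance-based triple collocation: with independent errors,
    # Var(e_x) = Cov(x,x) - Cov(x,y) * Cov(x,z) / Cov(y,z).
    c = np.cov(np.vstack([x, y, z]))
    return c[0, 0] - c[0, 1] * c[0, 2] / c[1, 2]

naive = np.var(sat - insitu)               # inflated by the in situ errors
tc = tc_error_variance(sat, model, insitu)

print(naive)  # close to 0.04**2 + 0.02**2: sum of both error variances
print(tc)     # close to 0.04**2: the satellite error variance alone
```

The naïve comparison attributes the in situ error to the large-scale dataset, while triple collocation isolates each dataset's own error variance, provided the independence assumptions hold.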
Zollanvari, Amin
2013-05-24
We provide a fundamental theorem that can be used in conjunction with Kolmogorov asymptotic conditions to derive the first moments of well-known estimators of the actual error rate in linear discriminant analysis of a multivariate Gaussian model under the assumption of a common known covariance matrix. The estimators studied in this paper are plug-in and smoothed resubstitution error estimators, both of which have not been studied before under Kolmogorov asymptotic conditions. As a result of this work, we present an optimal smoothing parameter that makes the smoothed resubstitution an unbiased estimator of the true error. For the sake of completeness, we further show how to utilize the presented fundamental theorem to achieve several previously reported results, namely the first moment of the resubstitution estimator and the actual error rate. We provide numerical examples to show the accuracy of the succeeding finite sample approximations in situations where the number of dimensions is comparable or even larger than the sample size.
Modeling SMAP Spacecraft Attitude Control Estimation Error Using Signal Generation Model
Rizvi, Farheen
2016-01-01
Two ground simulation software packages are used to model the SMAP spacecraft dynamics. The CAST software uses a higher fidelity model than the ADAMS software. The ADAMS software models the spacecraft plant, controller, and actuator, and assumes a perfect sensor and estimator model. In this simulation study, the spacecraft dynamics results from the ADAMS software are used because the CAST software is unavailable. The main source of spacecraft dynamics error in the higher fidelity CAST software is the estimation error. A signal generation model is developed to capture the effect of this estimation error on the overall spacecraft dynamics. This signal generation model is then included in the ADAMS software spacecraft dynamics estimate so that the results are similar to CAST. The signal generation model has the same characteristics (mean, variance, and power spectral density) as the true CAST estimation error. In this way, the ADAMS software can still be used while capturing the higher fidelity spacecraft dynamics modeling of the CAST software.
Institute of Scientific and Technical Information of China (English)
Wei Gong; Ningning Yan
2009-01-01
In this paper, we discuss the a posteriori error estimate of the finite element approximation for boundary control problems governed by parabolic partial differential equations. Three different a posteriori error estimators are provided for the parabolic boundary control problems with observations of the distributed state, the boundary state, and the final state. It is proven that these estimators are reliable bounds of the finite element approximation errors, which can be used as indicators for mesh refinement in adaptive finite element methods.
A-Posteriori Error Estimation for Hyperbolic Conservation Laws with Constraint
Barth, Timothy
2004-01-01
This lecture considers a-posteriori error estimates for the numerical solution of conservation laws with time invariant constraints such as those arising in magnetohydrodynamics (MHD) and gravitational physics. Using standard duality arguments, a-posteriori error estimates for the discontinuous Galerkin finite element method are then presented for MHD with solenoidal constraint. From these estimates, a procedure for adaptive discretization is outlined. A taxonomy of Green's functions for the linearized MHD operator is given which characterizes the domain of dependence for pointwise errors. The extension to other constrained systems such as the Einstein equations of gravitational physics is then considered. Finally, future directions and open problems are discussed.
Error estimation and adaptive mesh refinement for parallel analysis of shell structures
Keating, Scott C.; Felippa, Carlos A.; Park, K. C.
1994-01-01
The formulation and application of element-level, element-independent error indicators is investigated. This research culminates in the development of an error indicator formulation derived from the projection of element deformation onto the intrinsic element displacement modes. The qualifier 'element-level' means that no information from adjacent elements is used for error estimation. This property is ideally suited for obtaining error values and driving adaptive mesh refinement on parallel computers, where access to neighboring elements residing on different processors may incur significant overhead. In addition, such estimators are insensitive to the presence of physical interfaces and junctures. An error indicator qualifies as 'element-independent' when only visible quantities such as element stiffness and nodal displacements are used to quantify error. Error evaluation at the element level and element independence are highly desired properties for computing error in production-level finite element codes. Four element-level error indicators have been constructed. Two of the indicators are based on a variational formulation of the element stiffness and are element-dependent; their derivations are retained for developmental purposes. The second two indicators match and exceed the first two in performance but require no special formulation of the element stiffness. The parallelization of substructures and adaptive mesh refinement is discussed, and the final error indicator is demonstrated on two-dimensional plane-stress and three-dimensional shell problems.
A Design-Adaptive Local Polynomial Estimator for the Errors-in-Variables Problem
Delaigle, Aurore
2009-03-01
Local polynomial estimators are popular techniques for nonparametric regression estimation and have received great attention in the literature. Their simplest version, the local constant estimator, can be easily extended to the errors-in-variables context by exploiting its similarity with the deconvolution kernel density estimator. The generalization of the higher order versions of the estimator, however, is not straightforward and has remained an open problem for the last 15 years. We propose an innovative local polynomial estimator of any order in the errors-in-variables context, derive its design-adaptive asymptotic properties and study its finite sample performance on simulated examples. We provide not only a solution to a long-standing open problem, but also methodological contributions to errors-in-variables regression, including local polynomial estimation of derivative functions.
Minimum Mean-Square Error Single-Channel Signal Estimation
DEFF Research Database (Denmark)
Beierholm, Thomas
2008-01-01
are expressed and in the way the estimator is approximated. The starting point of the first method is prior probability density functions for both signal and noise, and it is assumed that their Laplace transforms (moment generating functions) are available. The corresponding posterior mean integral that defines...... inference is performed by particle filtering. The speech model is a time-varying auto-regressive model reparameterized by formant frequencies and bandwidths. The noise is assumed non-stationary and white. Compared to the case of using the AR coefficients directly, it is found very beneficial to perform...... particle filtering using the reparameterized speech model, because it is relatively straightforward to exploit prior information about formant features. A modified MMSE estimator is introduced, and the performance of the particle filtering algorithm is compared to a state-of-the-art hearing aid noise reduction......
Numerical experiments on the efficiency of local grid refinement based on truncation error estimates
Syrakos, Alexandros; Bartzis, John G; Goulas, Apostolos
2015-01-01
Local grid refinement aims to optimise the relationship between accuracy of the results and number of grid nodes. In the context of the finite volume method no single local refinement criterion has been globally established as optimum for the selection of the control volumes to subdivide, since it is not easy to associate the discretisation error with an easily computable quantity in each control volume. Often the grid refinement criterion is based on an estimate of the truncation error in each control volume, because the truncation error is a natural measure of the discrepancy between the algebraic finite-volume equations and the original differential equations. However, it is not a straightforward task to associate the truncation error with the optimum grid density because of the complexity of the relationship between truncation and discretisation errors. In the present work several criteria based on a truncation error estimate are tested and compared on a regularised lid-driven cavity case at various Reyno...
DEFF Research Database (Denmark)
Jensen, Jonas; Olesen, Jacob Bjerring; Stuart, Matthias Bo
2016-01-01
A method for vector velocity volume flow estimation is presented, along with an investigation of its sources of error and correction of actual volume flow measurements. Volume flow errors are quantified theoretically by numerical modeling, through flow phantom measurements, and studied in vivo...... than circular, vessel area and correcting the ultrasound beam for being off-axis, gave a significant (p = 0.008) reduction in error from 31.2% to 24.3%. The error is relative to the Ultrasound Dilution Technique, which is considered the gold standard for volume flow estimation for dialysis patients....... This paper investigates errors from estimating volumetric flow using a commercial ultrasound scanner and the common assumptions made in the literature. The theoretical model shows, e.g. that volume flow is underestimated by 15%, when the scan plane is off-axis with the vessel center by 28% of the vessel...
A new anisotropic mesh adaptation method based upon hierarchical a posteriori error estimates
Huang, Weizhang; Kamenski, Lennard; Lang, Jens
2010-03-01
A new anisotropic mesh adaptation strategy for finite element solution of elliptic differential equations is presented. It generates anisotropic adaptive meshes as quasi-uniform ones in some metric space, with the metric tensor being computed based on hierarchical a posteriori error estimates. A global hierarchical error estimate is employed in this study to obtain reliable directional information of the solution. Instead of solving the global error problem exactly, which is costly in general, we solve it iteratively using the symmetric Gauß-Seidel method. Numerical results show that a few GS iterations are sufficient for obtaining a reasonably good approximation to the error for use in anisotropic mesh adaptation. The new method is compared with several strategies using local error estimators or recovered Hessians. Numerical results are presented for a selection of test examples and a mathematical model for heat conduction in a thermal battery with large orthotropic jumps in the material coefficients.
Analytic Estimation of Standard Error and Confidence Interval for Scale Reliability.
Raykov, Tenko
2002-01-01
Proposes an analytic approach to standard error and confidence interval estimation of scale reliability with fixed congeneric measures. The method is based on a generally applicable estimator stability evaluation procedure, the delta method. The approach, which combines widespread point estimation of composite reliability in behavioral scale…
Round-Robin Analysis of Social Interaction: Exact and Estimated Standard Errors.
Bond, Charles F., Jr.; Lashley, Brian R.
1996-01-01
The Social Relations model of D. A. Kenny estimates variances and covariances from a round-robin of two-person interactions. This paper presents a matrix formulation of the Social Relations model, using the formulation to derive exact and estimated standard errors for round-robin estimates of Social Relations parameters. (SLD)
Estimates of errors of a gyroscope stabilized platform
Zbrutskiy, A. V.; Balabanov, I. V.
1984-08-01
A gyrostabilized platform has a four-frame cardan suspension in which one of the dynamically adjusted gyroscopes placed on the stabilized platform measures the angle of its deviation in the plane of the platform, while the second such gyroscope measures the deviation relative to this plane. The redundant first gyro can be used to correct the system and may also be a closed system itself. This paper studies the errors in the gyro stabilized platform due to the nonperpendicularity of the axes of the cardan suspension of the platform, as well as the imbalance of the components and dynamically adjustable gyroscopes. The cumbersome equations of motion for the system are written, neglecting dry frictional forces in the shafts of the platform suspension, second-order nonlinearities relative to the angular coordinates and their derivatives, as well as terms with periodic coefficients, which can affect the dynamics of the platform only in narrow frequency ranges at parametric resonances.
Relative measurement error analysis in the process of the Nakagami-m fading parameter estimation
Directory of Open Access Journals (Sweden)
Milentijević Vladeta
2011-01-01
An approach to the relative measurement error analysis in the process of Nakagami-m fading signal moment estimation is presented in this paper. Relative error expressions are also derived for the cases when the MRC (Maximal Ratio Combining) diversity technique is performed at the receiver. Capitalizing on them, results are graphically presented and discussed to show the influence of various parameters, such as diversity order and fading severity, on the relative measurement error bounds.
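A minimal sketch of moment-based Nakagami-m parameter estimation and its relative error follows. It assumes the common inverse-normalized-variance estimator applied to the signal power, not the authors' exact MRC expressions:

```python
import numpy as np

rng = np.random.default_rng(3)
m_true, omega = 2.0, 1.0  # illustrative fading severity and mean power

def estimate_m(n):
    # For a Nakagami-m envelope X, the power P = X^2 is Gamma distributed
    # with shape m and scale omega/m; sample P directly.
    power = rng.gamma(m_true, omega / m_true, n)
    # Inverse-normalized-variance moment estimator: m_hat = E[P]^2 / Var(P).
    return np.mean(power) ** 2 / np.var(power)

# Average relative estimation error shrinks with the sample size n.
for n in (100, 1000, 10000):
    est = np.mean([estimate_m(n) for _ in range(200)])
    rel_err = abs(est - m_true) / m_true
    print(n, rel_err)
```

Monte Carlo loops of this kind show how the relative error bound tightens with sample size; the paper derives such bounds analytically, including the diversity-combining case.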
Development of a web-based simulator for estimating motion errors in linear motion stages
Khim, G.; Oh, J.-S.; Park, C.-H.
2017-08-01
This paper presents a web-based simulator for estimating 5-DOF motion errors in linear motion stages. The main calculation modules of the simulator are stored on the server computer. Clients use the client software to send input parameters to the server and receive the computed results from the server. By using the simulator, we can predict performances such as 5-DOF motion errors and bearing and table stiffness by entering the design parameters at the design step, before fabricating the stages. Motion errors are calculated using the transfer function method from the rail form errors, which are the most dominant factor in the motion errors. To verify the simulator, the predicted motion errors are compared to the actually measured motion errors in a linear motion stage.
Keuning, Jos; Hemker, Bas
2014-01-01
The data collection of a cohort study requires making many decisions. Each decision may introduce error in the statistical analyses conducted later on. In the present study, a procedure was developed for estimation of the error made due to the composition of the sample, the item selection procedure, and the test equating process. The math results…
Error threshold estimation by means of the [[7,1,3]] code
Salas, Pedro J.; Sanz, Angel L.
2004-01-01
The states needed in a quantum computation are extremely affected by decoherence. Several methods have been proposed to control error spreading. They use two main tools: fault-tolerant constructions and concatenated quantum error correcting codes. In this work, we estimate the threshold conditions necessary to make a long enough quantum computation. The [[7,1,3]]
Measurement Error in Income and Schooling and the Bias of Linear Estimators
DEFF Research Database (Denmark)
Bingley, Paul; Martinello, Alessandro
2017-01-01
We propose a general framework for determining the extent of measurement error bias in ordinary least squares and instrumental variable (IV) estimators of linear models while allowing for measurement error in the validation source. We apply this method by validating Survey of Health, Ageing and R...
Estimation of the wind turbine yaw error by support vector machines
DEFF Research Database (Denmark)
Sheibat-Othman, Nida; Othman, Sami; Tayari, Raoaa
2015-01-01
Wind turbine yaw error information is of high importance in controlling wind turbine power and structural load. Normally used wind vanes are imprecise. In this work, the estimation of yaw error in wind turbines is studied using support vector machines for regression (SVR). As the methodology...
Decorrelation of the True and Estimated Classifier Errors in High-Dimensional Settings
Directory of Open Access Journals (Sweden)
Hua Jianping
2007-01-01
The aim of many microarray experiments is to build discriminatory diagnosis and prognosis models. Given the huge number of features and the small number of examples, model validity, which refers to the precision of error estimation, is a critical issue. Previous studies have addressed this issue via the deviation distribution (estimated error minus true error), in particular, the deterioration of cross-validation precision in high-dimensional settings where feature selection is used to mitigate the peaking phenomenon (overfitting). Because classifier design is based upon random samples, both the true and estimated errors are sample-dependent random variables, and one would expect a loss of precision if the estimated and true errors are not well correlated, so that natural questions arise as to the degree of correlation and the manner in which lack of correlation impacts error estimation. We demonstrate the effect of correlation on error precision via a decomposition of the variance of the deviation distribution, observe that the correlation is often severely decreased in high-dimensional settings, and show that the effect of high dimensionality on error estimation tends to result more from its decorrelating effects than from its impact on the variance of the estimated error. We consider the correlation between the true and estimated errors under different experimental conditions using both synthetic and real data, several feature-selection methods, different classification rules, and three commonly used error estimators (leave-one-out cross-validation, k-fold cross-validation, and .632 bootstrap). Moreover, three scenarios are considered: (1) feature selection, (2) known feature set, and (3) all features. Only the first is of practical interest; however, the other two are needed for comparison purposes. We will observe that the true and estimated errors tend to be much more correlated in the case of a known feature set than with either feature selection
Type I Error Rates and Power Estimates of Selected Parametric and Nonparametric Tests of Scale.
Olejnik, Stephen F.; Algina, James
1987-01-01
Estimated Type I error rates and power are reported for the Brown-Forsythe, O'Brien, Klotz, and Siegel-Tukey procedures. The effect of aligning the data using deviations from group means or group medians is investigated. (RB)
Norberg, Peder; Gaztanaga, Enrique; Croton, Darren J
2008-01-01
We present a test of different error estimators for 2-point clustering statistics, appropriate for present and future large galaxy redshift surveys. Using an ensemble of very large dark matter LambdaCDM N-body simulations, we compare internal error estimators (jackknife and bootstrap) to external ones (Monte Carlo realizations). For 3-dimensional clustering statistics, we find that none of the internal error methods investigated is able to reproduce, either accurately or robustly, the errors of the external estimators on 1 to 25 Mpc/h scales. The standard bootstrap overestimates the variance of xi(s) by ~40% on all scales probed, but recovers, in a robust fashion, the principal eigenvectors of the underlying covariance matrix. The jackknife returns the correct variance on large scales, but significantly overestimates it on smaller scales. This scale dependence in the jackknife affects the recovered eigenvectors, which tend to disagree on small scales with the external estimates. Our results have important implic...
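For intuition, the internal estimators compared above can be sketched on the simplest statistic, the sample mean, where the delete-one jackknife has a known closed form. This is illustrative only; the paper applies these estimators to two-point clustering statistics, not means:

```python
import numpy as np

rng = np.random.default_rng(1)
data = rng.exponential(scale=2.0, size=500)     # one synthetic sample

def jackknife_var(x):
    """Delete-one jackknife variance of the sample mean."""
    n = len(x)
    loo_means = (x.sum() - x) / (n - 1)         # leave-one-out means, vectorized
    return (n - 1) / n * ((loo_means - loo_means.mean()) ** 2).sum()

def bootstrap_var(x, B=2000):
    """Bootstrap variance of the sample mean over B resamples."""
    reps = rng.choice(x, size=(B, len(x)), replace=True).mean(axis=1)
    return reps.var(ddof=1)

jv, bv = jackknife_var(data), bootstrap_var(data)
analytic = data.var(ddof=1) / len(data)         # textbook variance of the mean
print(jv, bv, analytic)                         # all three agree for the mean
```

For the sample mean the jackknife reproduces s²/n exactly; the interesting behaviour reported above only appears for statistics, like correlation functions, whose resampled versions are strongly dependent across scales.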
Hall, Eric Joseph
2016-12-08
We derive computable error estimates for finite element approximations of linear elliptic partial differential equations with rough stochastic coefficients. In this setting, the exact solutions contain high frequency content that standard a posteriori error estimates fail to capture. We propose goal-oriented estimates, based on local error indicators, for the pathwise Galerkin and expected quadrature errors committed in standard, continuous, piecewise linear finite element approximations. Derived using easily validated assumptions, these novel estimates can be computed at a relatively low cost and have applications to subsurface flow problems in geophysics where the conductivities are assumed to have lognormal distributions with low regularity. Our theory is supported by numerical experiments on test problems in one and two dimensions.
Directory of Open Access Journals (Sweden)
R. Locatelli
2013-04-01
A modelling experiment has been conceived to assess the impact of transport model errors on the methane emissions estimated by an atmospheric inversion system. Synthetic methane observations, given by 10 different model outputs from the international TransCom-CH4 model exercise, are combined with a prior scenario of methane emissions and sinks and integrated into the PYVAR-LMDZ-SACS inverse system to produce 10 different methane emission estimates at the global scale for the year 2005. The same set-up has been used to produce the synthetic observations and to compute flux estimates by inverse modelling, which means that only differences in the modelling of atmospheric transport may cause differences in the estimated fluxes. In our framework, we show that transport model errors lead to a discrepancy of 27 Tg CH4 per year at the global scale, representing 5% of the total methane emissions. At continental and yearly scales, transport model errors have bigger impacts depending on the region, ranging from 36 Tg CH4 in North America to 7 Tg CH4 in Boreal Eurasia (from 23% to 48%). At the model gridbox scale, the spread of inverse estimates can even reach 150% of the prior flux. Thus, transport model errors contribute significant uncertainties to the methane estimates obtained by inverse modelling, especially when small spatial scales are invoked. Sensitivity tests have been carried out to estimate the impact of the measurement network and the advantage of higher-resolution models. The analysis of estimated methane fluxes in these different configurations questions the consistency of transport model errors in current inverse systems. For future methane inversions, an improvement in the modelling of atmospheric transport would make the estimations more accurate. Likewise, errors in the observation covariance matrix should be more consistently prescribed in future inversions in order to limit the impact of transport model errors on estimated methane...
Estimation of Mechanical Signals in Induction Motors using the Recursive Prediction Error Method
DEFF Research Database (Denmark)
Børsting, H.; Knudsen, Morten; Rasmussen, Henrik;
1993-01-01
Sensor feedback of mechanical quantities for control applications in induction motors is troublesome and relatively expensive. In this paper a recursive prediction error (RPE) method has successfully been used to estimate the angular rotor speed...
Energy Technology Data Exchange (ETDEWEB)
Eldred, Michael Scott; Subia, Samuel Ramirez; Neckels, David; Hopkins, Matthew Morgan; Notz, Patrick K.; Adams, Brian M.; Carnes, Brian; Wittwer, Jonathan W.; Bichon, Barron J.; Copps, Kevin D.
2006-10-01
This report documents the results for an FY06 ASC Algorithms Level 2 milestone combining error estimation and adaptivity, uncertainty quantification, and probabilistic design capabilities applied to the analysis and design of bistable MEMS. Through the use of error estimation and adaptive mesh refinement, solution verification can be performed in an automated and parameter-adaptive manner. The resulting uncertainty analysis and probabilistic design studies are shown to be more accurate, efficient, reliable, and convenient.
Error estimates of H1-Galerkin mixed finite element method for Schrödinger equation
Institute of Scientific and Technical Information of China (English)
LIU Yang; LI Hong; WANG Jin-feng
2009-01-01
An H1-Galerkin mixed finite element method is discussed for a class of second-order Schrödinger equations. Optimal error estimates of semidiscrete schemes are derived for problems in one space dimension. At the same time, optimal error estimates are derived for fully discrete schemes. It is shown that the H1-Galerkin mixed finite element approximations have the same rate of convergence as the classical mixed finite element methods without requiring the LBB consistency condition.
Error covariance calculation for forecast bias estimation in hydrologic data assimilation
Pauwels, Valentijn R. N.; De Lannoy, Gabriëlle J. M.
2015-12-01
To date, an outstanding issue in hydrologic data assimilation is a proper way of dealing with forecast bias. A frequently used method to bypass this problem is to rescale the observations to the model climatology. While this approach improves the variability in the modeled soil wetness and discharge, it is not designed to correct the results for any bias. Alternatively, attempts have been made towards incorporating dynamic bias estimates into the assimilation algorithm. Persistent bias models are most often used to propagate the bias estimate, where the a priori forecast bias error covariance is calculated as a constant fraction of the unbiased a priori state error covariance. The latter approach is a simplification of the explicit propagation of the bias error covariance. The objective of this paper is to examine to what extent the choice for the propagation of the bias estimate and its error covariance influences the filter performance. An Observation System Simulation Experiment (OSSE) has been performed, in which groundwater storage observations are assimilated into a biased conceptual hydrologic model. The magnitudes of the forecast bias and state error covariances are calibrated by optimizing the innovation statistics of groundwater storage. The obtained bias propagation models are found to be identical to persistent bias models. After calibration, both approaches for the estimation of the forecast bias error covariance lead to similar results, with a realistic attribution of error variances to the bias and state estimate, and significant reductions of the bias in both the estimates of groundwater storage and discharge. Overall, the results in this paper justify the use of the traditional approach for online bias estimation with a persistent bias model and a simplified forecast bias error covariance estimation.
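A toy scalar version of online bias estimation with a persistent bias model can be sketched as follows. All numbers are hypothetical, the gains are fixed rather than derived from propagated covariances, and the bias error covariance is only implicit in the fixed bias gain; the paper's OSSE is far more complete:

```python
import numpy as np

rng = np.random.default_rng(5)
true_bias, n_steps = 0.8, 2000          # hypothetical persistent forecast bias

x_true, x_an, b_est = 10.0, 10.0, 0.0
K, Kb = 0.5, 0.1                        # fixed state gain and bias gain

history = []
for _ in range(n_steps):
    x_true = 0.9 * x_true + 1.0 + rng.normal(0, 0.5)   # "truth" with noise
    x_fc = 0.9 * x_an + 1.0 + true_bias                # biased model forecast
    y = x_true + rng.normal(0, 0.5)                    # noisy observation
    innov = y - (x_fc - b_est)          # innovation wrt bias-corrected forecast
    b_est = b_est - Kb * innov          # persistent bias model update
    x_an = (x_fc - b_est) + K * (y - (x_fc - b_est))   # state analysis
    history.append(b_est)

print(np.mean(history[-500:]))          # settles near true_bias
```

The persistent bias model is the line `b_est = b_est - Kb * innov`: the bias estimate is simply carried over between steps and nudged by each innovation.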
Directory of Open Access Journals (Sweden)
Islam M. Rafizul
2012-01-01
An important part of maintaining a solid waste landfill is managing the leachate through proper treatment to prevent pollution of the surrounding ground and surface water. Any assessment of the potential impact of a landfill on groundwater quality requires consideration of the components of leachate most likely to cause an environmental impact as well as the source concentrations of those components. The leachate pollution index (LPI) is an environmental index used to quantify and compare the leachate contamination potential of solid waste landfills. This index is based on the concentrations of 18 pollutants in leachate and their corresponding significance; that is, to calculate the LPI of a landfill, the concentrations of these 18 parameters must be known. However, sometimes the data for all 18 pollutants included in the LPI may not be available to calculate the LPI. In this study, the possible errors involved in calculating the LPI due to the non-availability of data are reported. Leachate characteristic data for the solid waste landfill at Chittagong in Bangladesh have been used to estimate these errors. Based on this study, it can be concluded that the errors may be high if the data for pollutants having significantly high or low concentrations are not available. However, the LPI can be reported with a marginal error if the concentrations of the non-available pollutants are not completely biased.
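The sensitivity described above can be sketched with a toy weighted-mean index. The weights and sub-index scores below are invented for illustration and are not the published LPI tables:

```python
def lpi(subindex, weight):
    """Weighted-mean pollution index over the pollutants actually available.
    subindex: {pollutant: sub-index score}, weight: {pollutant: significance}."""
    avail = [p for p in subindex if p in weight]
    wsum = sum(weight[p] for p in avail)
    return sum(weight[p] * subindex[p] for p in avail) / wsum

# Hypothetical weights and sub-index scores (illustrative only)
weights = {"pH": 0.055, "BOD": 0.061, "COD": 0.062, "Pb": 0.063, "Hg": 0.062}
scores = {"pH": 5, "BOD": 55, "COD": 60, "Pb": 80, "Hg": 90}

full = lpi(scores, weights)
missing_hg = lpi({p: s for p, s in scores.items() if p != "Hg"}, weights)
print(full, missing_hg)  # dropping a high-scoring pollutant lowers the index
```

Dropping a pollutant whose score sits near the middle of the range would barely move the index, which is the paper's point about when the error stays marginal.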
Energy Technology Data Exchange (ETDEWEB)
Lipnikov, Konstantin [Los Alamos National Laboratory; Agouzal, Abdellatif [UNIV DE LYON; Vassilevski, Yuri [Los Alamos National Laboratory
2009-01-01
We present a new technology for generating meshes minimizing the interpolation and discretization errors or their gradients. The key element of this methodology is the construction of a space metric from edge-based error estimates. For a mesh with N_h triangles, the error is proportional to N_h^{-1} and the gradient of the error is proportional to N_h^{-1/2}, which are the optimal asymptotics. The methodology is verified with numerical experiments.
Hansen, Scott K.; Vesselinov, Velimir V.
2016-10-01
We develop empirically-grounded error envelopes for localization of a point contamination release event in the saturated zone of a previously uncharacterized heterogeneous aquifer into which a number of plume-intercepting wells have been drilled. We assume that flow direction in the aquifer is known exactly and velocity is known to within a factor of two of our best guess from well observations prior to source identification. Other aquifer and source parameters must be estimated by interpretation of well breakthrough data via the advection-dispersion equation. We employ high performance computing to generate numerous random realizations of aquifer parameters and well locations, simulate well breakthrough data, and then employ unsupervised machine optimization techniques to estimate the most likely spatial (or space-time) location of the source. Tabulating the accuracy of these estimates from the multiple realizations, we relate the size of 90% and 95% confidence envelopes to the data quantity (number of wells) and model quality (fidelity of ADE interpretation model to actual concentrations in a heterogeneous aquifer with channelized flow). We find that for purely spatial localization of the contaminant source, increased data quantities can make up for reduced model quality. For space-time localization, we find similar qualitative behavior, but significantly degraded spatial localization reliability and less improvement from extra data collection. Since the space-time source localization problem is much more challenging, we also tried a multiple-initial-guess optimization strategy. This greatly enhanced performance, but gains from additional data collection remained limited.
Measurement error in income and schooling, and the bias of linear estimators
DEFF Research Database (Denmark)
Bingley, Paul; Martinello, Alessandro
The characteristics of measurement error determine the bias of linear estimators. We propose a method for validating economic survey data allowing for measurement error in the validation source, and we apply this method by validating Survey of Health, Ageing and Retirement in Europe (SHARE) data with Danish administrative registers. We find that measurement error in surveys is classical for annual gross income but non-classical for years of schooling, causing a 21% amplification bias in IV estimators of returns to schooling. Using a 1958 Danish schooling reform, we contextualize our result...
Carroll, Raymond J.
2010-05-01
This paper considers identification and estimation of a general nonlinear Errors-in-Variables (EIV) model using two samples. Both samples consist of a dependent variable, some error-free covariates, and an error-prone covariate, for which the measurement error has unknown distribution and could be arbitrarily correlated with the latent true values; and neither sample contains an accurate measurement of the corresponding true variable. We assume that the regression model of interest - the conditional distribution of the dependent variable given the latent true covariate and the error-free covariates - is the same in both samples, but the distributions of the latent true covariates vary with observed error-free discrete covariates. We first show that the general latent nonlinear model is nonparametrically identified using the two samples when both could have nonclassical errors, without either instrumental variables or independence between the two samples. When the two samples are independent and the nonlinear regression model is parameterized, we propose sieve Quasi Maximum Likelihood Estimation (Q-MLE) for the parameter of interest, and establish its root-n consistency and asymptotic normality under possible misspecification, and its semiparametric efficiency under correct specification, with easily estimated standard errors. A Monte Carlo simulation and a data application are presented to show the power of the approach.
A non-orthogonal SVD-based decomposition for phase invariant error-related potential estimation.
Phlypo, Ronald; Jrad, Nisrine; Rousseau, Sandra; Congedo, Marco
2011-01-01
The estimation of the Error Related Potential from a set of trials is a challenging problem. Indeed, the Error Related Potential is of low amplitude compared to the ongoing electroencephalographic activity. In addition, simply summing over the different trials is prone to errors, since the waveform does not appear at an exact latency with respect to the trigger. In this work, we propose a method to cope with the discrepancy of these latencies of the Error Related Potential waveform and offer a framework in which the estimation of the Error Related Potential waveform reduces to a simple Singular Value Decomposition of an analytic waveform representation of the observed signal. The proposed approach is promising, since we are able to explain a higher portion of the variance of the observed signal with fewer components in the expansion.
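The core step can be sketched as an SVD of analytic-signal trials. The jittered Gaussian waveform and noise level below are invented, and the analytic representation is built with a plain FFT; the authors' actual decomposition is non-orthogonal, so this is only the orthogonal-SVD baseline:

```python
import numpy as np

def analytic(x):
    """Analytic representation via FFT (positive frequencies doubled)."""
    n = x.shape[-1]
    X = np.fft.fft(x, axis=-1)
    h = np.zeros(n)
    h[0] = 1.0
    h[1:(n + 1) // 2] = 2.0
    if n % 2 == 0:
        h[n // 2] = 1.0
    return np.fft.ifft(X * h, axis=-1)

rng = np.random.default_rng(2)
n_trials, n_samples = 40, 256
t = np.arange(n_samples)

# Synthetic ERP: one waveform with random latency jitter per trial, plus noise
waveform = np.exp(-0.5 * ((t - 120) / 12.0) ** 2)
trials = np.stack([np.roll(waveform, int(rng.integers(-10, 11)))
                   for _ in range(n_trials)])
trials += 0.3 * rng.standard_normal(trials.shape)

X = analytic(trials)                       # complex trials-by-samples matrix
U, s, Vh = np.linalg.svd(X, full_matrices=False)
rank1 = s[0] * np.outer(U[:, 0], Vh[0])    # dominant phase-invariant component

# By Eckart-Young, the SVD gives the best rank-1 fit, beating the plain average
err_svd = np.linalg.norm(X - rank1)
err_avg = np.linalg.norm(X - np.outer(np.ones(n_trials), X.mean(axis=0)))
print(err_svd < err_avg)
```

Working on the analytic (complex) representation is what absorbs the latency jitter: a latency shift appears mostly as a phase rotation, which the magnitude of the leading component is insensitive to.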
Ogawa, Takahiro; Haseyama, Miki
2013-03-01
A missing texture reconstruction method based on an error reduction (ER) algorithm, including a novel estimation scheme for Fourier transform magnitudes, is presented in this brief. In our method, the Fourier transform magnitude is estimated for a target patch including missing areas, and the missing intensities are estimated by retrieving its phase based on the ER algorithm. Specifically, by monitoring errors converged in the ER algorithm, known patches whose Fourier transform magnitudes are similar to that of the target patch are selected from the target image. The Fourier transform magnitude of the target patch is then estimated from those of the selected known patches and their corresponding errors. Consequently, by using the ER algorithm, we can estimate both the Fourier transform magnitudes and phases to reconstruct the missing areas.
Energy Technology Data Exchange (ETDEWEB)
Menelaou, Evdokia; Paul, Latoya T. [Department of Biological Sciences, Louisiana State University, Baton Rouge, LA 70803 (United States); Perera, Surangi N. [Joseph J. Zilber School of Public Health, University of Wisconsin — Milwaukee, Milwaukee, WI 53205 (United States); Svoboda, Kurt R., E-mail: svobodak@uwm.edu [Department of Biological Sciences, Louisiana State University, Baton Rouge, LA 70803 (United States); Joseph J. Zilber School of Public Health, University of Wisconsin — Milwaukee, Milwaukee, WI 53205 (United States)
2015-04-01
Nicotine exposure during embryonic stages of development can affect many neurodevelopmental processes. In the developing zebrafish, exposure to nicotine was reported to cause axonal pathfinding errors in the later-born secondary motoneurons (SMNs). These alterations in SMN axon morphology coincided with muscle degeneration at high nicotine concentrations (15–30 μM). Previous work showed that the paralytic mutant zebrafish known as sofa potato exhibited nicotine-induced effects on SMN axons at these high concentrations but in the absence of any muscle deficits, indicating that pathfinding errors could occur independent of muscle effects. In this study, we used varying concentrations of nicotine at different developmental windows of exposure to specifically isolate its effects on subpopulations of motoneuron axons. We found that nicotine exposure can affect SMN axon morphology in a dose-dependent manner. At low concentrations of nicotine, SMN axons exhibited pathfinding errors in the absence of any nicotine-induced muscle abnormalities. Moreover, the nicotine exposure paradigms used affected the three subpopulations of SMN axons differently, but the dorsal-projecting SMN axons were primarily affected. We then identified morphologically distinct pathfinding errors that best described the nicotine-induced effects on dorsal-projecting SMN axons. To test whether SMN pathfinding was potentially influenced by alterations in the early-born primary motoneurons (PMNs), we performed dual-labeling studies, in which both PMN and SMN axons were simultaneously labeled with antibodies. We show that only a subset of the SMN axon pathfinding errors coincided with abnormal PMN axonal targeting in nicotine-exposed zebrafish. We conclude that nicotine exposure can exert differential effects depending on the levels of nicotine and the developmental exposure window. Highlights: Embryonic nicotine exposure can specifically affect secondary motoneuron axons in a dose-dependent manner.
How Well Can We Estimate Error Variance of Satellite Precipitation Data Around the World?
Gebregiorgis, A. S.; Hossain, F.
2014-12-01
The traditional approach to measuring precipitation by placing a probe on the ground will likely never be adequate or affordable in most parts of the world. Fortunately, satellites today provide a continuous global bird's-eye view (above ground) at any given location. However, the usefulness of such precipitation products for hydrological applications depends on their error characteristics. Thus, providing error information associated with existing satellite precipitation estimates is crucial to advancing applications in hydrologic modeling. In this study, we present a method of estimating satellite precipitation error variance using a regression model for three satellite precipitation products (3B42RT, CMORPH, and PERSIANN-CCS), based on easily available geophysical features and the satellite precipitation rate. The goal of this work is to explore how well the method works around the world in diverse geophysical settings. Topography, climate, and season are considered as the governing factors used to segregate the satellite precipitation uncertainty and fit a nonlinear regression equation as a function of the satellite precipitation rate. The error variance models were tested over the USA, Asia, the Middle East, and the Mediterranean region. A rain-gauge-based precipitation product was used to validate the error variance of the satellite precipitation products. Our study attests that the transferability of model estimators (which help to estimate the error variance) from one region to another is practically possible by leveraging the similarity in geophysical features. Therefore, a quantitative picture of satellite precipitation error over ungauged regions can be discerned even in the absence of ground truth data.
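The nonlinear regression step can be sketched with synthetic data, assuming a power-law relation between error variance and rain rate (the coefficients below are invented, not the study's fits):

```python
import numpy as np

rng = np.random.default_rng(3)
rate = rng.uniform(0.5, 30.0, size=1000)   # satellite rain rate, mm/h
true_a, true_b = 0.8, 1.3                  # hypothetical power-law coefficients
err_var = true_a * rate**true_b * np.exp(0.1 * rng.standard_normal(1000))

# Fit err_var ~ a * rate^b by ordinary least squares in log-log space
A = np.column_stack([np.ones_like(rate), np.log(rate)])
coef, *_ = np.linalg.lstsq(A, np.log(err_var), rcond=None)
a_hat, b_hat = np.exp(coef[0]), coef[1]
print(a_hat, b_hat)                        # recovers (true_a, true_b) closely
```

In the study, separate fits of this kind would be made per topography/climate/season class; transferability then amounts to reusing `(a_hat, b_hat)` from a geophysically similar region.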
Hubig, Michael; Muggenthaler, Holger; Mall, Gita
2014-05-01
Bayesian estimation applied to temperature-based death time estimation was recently introduced as the conditional probability distribution or CPD method by Biermann and Potente. The CPD method is useful if there is external information that sets the boundaries of the true death time interval (victim last seen alive and found dead). CPD allows computation of probabilities for small time intervals of interest (e.g. no-alibi intervals of suspects) within the large true death time interval. In light of the importance of the CPD for conviction or acquittal of suspects, the present study identifies a potential error source. Deviations in death time estimates will cause errors in the CPD-computed probabilities. We derive formulae to quantify the CPD error as a function of the input error. Moreover, we observed the paradox that in cases in which the small no-alibi time interval is located at the boundary of the true death time interval, adjacent to the erroneous death time estimate, the CPD-computed probabilities for that small no-alibi interval will increase with increasing input deviation; otherwise the CPD-computed probabilities will decrease. We therefore advise against using CPD if there is an indication of an error or a contra-empirical deviation in the death time estimates, especially if the death time estimates fall outside the true death time interval, even if the 95% confidence intervals of the estimate still overlap the true death time interval.
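A minimal sketch of a CPD-style computation, assuming a normal death-time estimate truncated to the "last seen alive / found dead" interval. The numbers are hypothetical and this is a simplification of the Biermann-Potente method, but it shows how an input error shifts the computed probability:

```python
from math import erf, sqrt

def phi(x):
    """Standard normal CDF."""
    return 0.5 * (1 + erf(x / sqrt(2)))

def cpd(t_est, sigma, bound_lo, bound_hi, a, b):
    """P(death in [a, b] | death in [bound_lo, bound_hi]) for a normal
    death-time estimate t_est with standard deviation sigma (hours)."""
    z = lambda t: (t - t_est) / sigma
    num = phi(z(b)) - phi(z(a))
    den = phi(z(bound_hi)) - phi(z(bound_lo))
    return num / den

# Victim last seen at t = 0 h, found at t = 12 h; estimate 5 h +/- 2 h
p = cpd(5.0, 2.0, 0.0, 12.0, 6.0, 8.0)          # no-alibi interval 6-8 h
p_shifted = cpd(6.0, 2.0, 0.0, 12.0, 6.0, 8.0)  # same data, +1 h input error
print(p, p_shifted)   # the input error changes the probability for 6-8 h
```

The difference between `p` and `p_shifted` is exactly the kind of CPD error the formulae in the study quantify.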
Sparse representation discretization errors in multi-sensor radar target motion estimation
Azodi, Hossein; Siart, Uwe; Eibert, Thomas F.
2017-09-01
In a multi-sensor radar for the estimation of target motion states, more than one transmitter and receiver module is utilized to estimate the positions and velocities of targets, jointly known as motion states. When applying compressed sensing (CS) reconstruction algorithms, the surveillance space needs to be discretized. The effect of the additive errors due to this discretization is studied in this paper. The errors are considered as additive noise in the well-known under-determined CS problem. By employing properties of these errors, analytical models for their average and variance are derived. Numerous simulations are carried out to verify the analytical model empirically. Furthermore, the probability density functions of the discretization errors are estimated. The analytical model is useful for optimizing the performance, efficiency, and success rate of CS reconstruction for radar as well as many other applications.
Precision and shortcomings of yaw error estimation using spinner-based light detection and ranging
DEFF Research Database (Denmark)
Kragh, Knud Abildgaard; Hansen, Morten Hartvig; Mikkelsen, Torben
2013-01-01
When extracting energy from the wind using horizontal axis wind turbines, the ability to align the rotor axis with the mean wind direction is crucial. In previous work, a method for estimating the yaw error based on measurements from a spinner mounted light detection and ranging (LIDAR) device was developed and tested. In this study, the simulation parameter space is extended to include higher levels of turbulence intensity. Furthermore, the method is applied to experimental data and compared with met-mast data corrected for a calibration error that was not discovered during previous work. Finally, the shortcomings of using a spinner mounted LIDAR for yaw error estimation are discussed. The extended simulation study shows that with the applied method, the yaw error can be estimated with a precision of a few degrees, even in highly turbulent flows. Applying the method to experimental data reveals an average...
Jenke, Dennis; Odufu, Alex
2012-03-01
Substances from packaging systems that are leached into packaged medical products may have a safety impact on patients to whom such medical products are administered. The potential safety impact depends on the identity and concentration of the leached substances. The concentration above which a leachable must be identified in order to assess its safety impact is frequently estimated using an internal standard to "calibrate" the analytical response of a chromatographic system. Such an estimate is accurate to the extent that the responses of the internal standard and leachables are similar. To establish the accuracy of the internal standard approach, a database of gas chromatography-flame ionization detection (GC-FID) and gas chromatography-mass spectrometry (GC-MS) responses was generated for thirty-eight leachables and eight internal standard candidates. Although the FID and MS responses of many of the leachables and internal standards fell within a narrow band, acidic and basic compounds produced responses that were discernibly different from those of neutral analytes. While most of the internal standards were suited for concentration estimation, three of the candidates, dimethylphthalate, triphenylphosphate and 4,4-dibromobiphenyl, produced the smallest mean error in estimated concentration for the analytes examined. As the FID and MS responses were linear, internal standards could be used to estimate leachables concentrations even when the difference in leachable versus internal standard concentrations was as great as a factor of 25. A multiplier may be appropriate to adjust an estimated concentration to its greatest possible value, and it is this value that is used to convert an estimated Analytical Evaluation Threshold (AET) into a working or final AET.
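The internal-standard estimate itself is a single-point response calibration, sketched below with invented peak areas, concentrations, and uncertainty factor (`rrf` is the assumed analyte/internal-standard relative response factor):

```python
def estimate_conc(area_analyte, area_is, conc_is, rrf=1.0):
    """Estimate a leachable's concentration against an internal standard (IS),
    assuming a known relative response factor (analyte response / IS response)."""
    return conc_is * (area_analyte / area_is) / rrf

# Hypothetical GC-FID peak areas; IS spiked at 10 ug/mL
c_est = estimate_conc(area_analyte=4.2e5, area_is=3.0e5, conc_is=10.0)

# A response-uncertainty multiplier (value chosen for illustration) converts
# an estimated AET into a more conservative final AET
estimated_aet, multiplier = 1.0, 2.0   # ug/mL; hypothetical values
final_aet = estimated_aet / multiplier
print(c_est, final_aet)
```

The study's finding that acidic and basic compounds respond differently from neutral analytes is precisely the case where the `rrf = 1.0` assumption breaks down and the multiplier earns its keep.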
Locatelli, Robin; Bousquet, Philippe; Chevallier, Frédéric
2013-04-01
Transport and modelling errors lead to a discrepancy of 27 Tg CH4 per year at the global scale, representing 5% of the total methane emissions for 2005. At the continental scale, transport and modelling errors have bigger impacts in proportion to the area of the regions, ranging from 36 Tg CH4 in North America to 7 Tg CH4 in Boreal Eurasia, with a percentage range from 23% to 48%. Thus, the contribution of transport and modelling errors to the mismatch between measurements and simulated methane concentrations is large considering the present questions on the methane budget. Moreover, diagnostics of the error statistics included in our inversions have been computed. They show that the errors contained in the measurement error covariance matrix are under-estimated in current inversions, suggesting that transport and modelling errors should be included more properly in future inversions.
Effect of random errors in planar PIV data on pressure estimation in vortex dominated flows
McClure, Jeffrey; Yarusevych, Serhiy
2015-11-01
The sensitivity of pressure estimation techniques from Particle Image Velocimetry (PIV) measurements to random errors in measured velocity data is investigated using the flow over a circular cylinder as a test case. Direct numerical simulations are performed for ReD = 100, 300 and 1575, spanning the laminar, transitional, and turbulent wake regimes, respectively. A range of random errors typical of PIV measurements is applied to synthetic PIV data extracted from the numerical results. A parametric study is then performed using a number of common pressure estimation techniques. Optimal temporal and spatial resolutions are derived based on the sensitivity of the estimated pressure fields to the simulated random error in velocity measurements, and the results are compared to an optimization model derived from error propagation theory. It is shown that the reductions in spatial and temporal scales at higher Reynolds numbers lead to notable changes in the optimal pressure evaluation parameters. The effect of smaller-scale wake structures is also quantified. The errors in the estimated pressure fields are shown to depend significantly on the pressure estimation technique employed. The results are used to provide recommendations for the use of pressure and force estimation techniques from experimental PIV measurements in vortex dominated laminar and turbulent wake flows.
Refined Error Estimates for the Riccati Equation with Applications to the Angular Teukolsky Equation
Finster, Felix
2013-01-01
We derive refined rigorous error estimates for approximate solutions of Sturm-Liouville and Riccati equations with real or complex potentials. The approximate solutions include WKB approximations, Airy and parabolic cylinder functions, and certain Bessel functions. Our estimates are applied to solutions of the angular Teukolsky equation with a complex aspherical parameter in a rotating black hole Kerr geometry.
On Optimal Multichannel Mean-Squared Error Estimators for Speech Enhancement
Hendriks, R.C.; Heusdens, R.; Kjems, U.; Jensen, J.
2009-01-01
In this letter we present discrete Fourier transform (DFT) domain minimum mean-squared error (MMSE) estimators for multichannel noise reduction. The estimators are derived assuming that the clean speech magnitude DFT coefficients are generalized-Gamma distributed. We show that for Gaussian
Improved Margin of Error Estimates for Proportions in Business: An Educational Example
Arzumanyan, George; Halcoussis, Dennis; Phillips, G. Michael
2015-01-01
This paper presents the Agresti & Coull "Adjusted Wald" method for computing confidence intervals and margins of error for common proportion estimates. The presented method is easily implementable by business students and practitioners and provides more accurate estimates of proportions particularly in extreme samples and small…
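The Agresti & Coull interval described above is easy to implement: add z²/2 pseudo-successes and z²/2 pseudo-failures, then apply the ordinary Wald formula. A sketch with an extreme small sample:

```python
from math import sqrt

def adjusted_wald(successes, n, z=1.96):
    """Agresti & Coull 'adjusted Wald' confidence interval for a proportion."""
    n_adj = n + z ** 2                        # add z^2 pseudo-observations
    p_adj = (successes + z ** 2 / 2) / n_adj  # half successes, half failures
    moe = z * sqrt(p_adj * (1 - p_adj) / n_adj)
    return p_adj - moe, p_adj + moe

lo, hi = adjusted_wald(2, 25)       # extreme sample: 2 successes in 25 trials
print(round(lo, 3), round(hi, 3))   # stays inside [0, 1]
```

For this sample the plain Wald interval dips below zero (0.08 ± 0.106), which is exactly the small-sample pathology the adjusted interval avoids.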
Error estimates for finite element solution for parabolic integro-differential equations
Directory of Open Access Journals (Sweden)
Hasan N. Ymeri
1993-05-01
In this paper we first study the stability of the Ritz-Volterra projection and its maximum norm estimates, and then we use these results to derive some L∞ error estimates for finite element methods for parabolic partial integro-differential equations.
A novel data-driven approach to model error estimation in Data Assimilation
Pathiraja, Sahani; Moradkhani, Hamid; Marshall, Lucy; Sharma, Ashish
2016-04-01
Error characterisation is a fundamental component of Data Assimilation (DA) studies. Effectively describing model error statistics has been a challenging area, with many traditional methods requiring some level of subjectivity (for instance in defining the error covariance structure). Recent advances have focused on removing the need for tuning of error parameters, although some outstanding issues remain. Many methods focus only on the first and second moments, and rely on assuming multivariate Gaussian statistics. We propose a non-parametric, data-driven framework to estimate the full distributional form of model error, i.e., the transition density p(x_t|x_{t-1}). All sources of uncertainty associated with the model simulations are considered, without needing to assign error characteristics or devise stochastic perturbations for individual components of model uncertainty (e.g., input, parameter and structural). A training period is used to derive the error distribution of observed variables, conditioned on (potentially hidden) states. Errors in hidden states are estimated from the conditional distribution of observed variables using non-linear optimization. The framework is discussed in detail, and an application to a hydrologic case study with hidden states for one-day-ahead streamflow prediction is presented. Results demonstrate improved predictions and more realistic uncertainty bounds compared to a standard tuning approach.
Estimation of standard error of the parameter of change using simulations
Directory of Open Access Journals (Sweden)
Djordje Petkovic
2015-06-01
Full Text Available The main objective of this paper is to present a procedure for estimating the standard error of the parameter of change (index of turnover) in the R software (R Core Team, 2014) when samples are coordinated. The problem of estimating the standard error is dealt with in the statistical literature by various types of approximations. In this paper I start from the method presented at the Consultation on Survey Methodology between Statistics Sweden and the Statistical Office of the Republic of Serbia (SERSTAT 2013:22), run simulations, and calculate an estimate of the correlation and the true value of the standard error of the change between turnovers from two years. I use two consecutive sampling frames of the quarterly Structural Business Survey (SBS). These frames are updated with turnover from the corresponding balance sheets. An important assumption is that annual turnover is highly correlated with quarterly turnover and that the computed correlation can be referred to when comparing methods of estimating the correlation from sample data.
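The simulation idea above can be sketched compactly: build a synthetic population of turnovers for two years, draw coordinated samples repeatedly, and take the spread of the resulting change indices as the standard error. The population model below (lognormal turnovers with correlation `rho` between years) is an illustrative assumption of mine, not the paper's SBS frames:

```python
import math
import random
import statistics

def simulate_se_of_change(pop_size=1000, sample_size=100, rho=0.9,
                          replicates=500, seed=1):
    """Monte Carlo standard error of the index of turnover R = sum(t2)/sum(t1)
    under coordinated sampling (the same units observed in both years).

    Returns (mean of R over replicates, empirical standard error of R).
    """
    rng = random.Random(seed)
    population = []
    for _ in range(pop_size):
        z1 = rng.gauss(0.0, 1.0)
        z2 = rho * z1 + math.sqrt(1.0 - rho ** 2) * rng.gauss(0.0, 1.0)
        # (year-1 turnover, year-2 turnover) for one unit
        population.append((math.exp(z1), math.exp(0.05 + z2)))
    ratios = []
    for _ in range(replicates):
        sample = rng.sample(population, sample_size)  # coordinated: same units
        ratios.append(sum(t2 for _, t2 in sample) / sum(t1 for t1, _ in sample))
    return statistics.mean(ratios), statistics.stdev(ratios)
```

Raising `rho` (stronger year-to-year correlation of the coordinated units) shrinks the simulated standard error of the change parameter, which is the effect the paper's correlation estimate is meant to capture.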
Lamb, Masen; Correia, Carlos; Sauvage, Jean-François; Véran, Jean-Pierre; Andersen, David; Vigan, Arthur; Wizinowich, Peter; van Dam, Marcos; Mugnier, Laurent; Bond, Charlotte
2016-07-01
We propose and apply two methods for estimating phase discontinuities for two realistic scenarios on VLT and Keck. The methods use both phase diversity and a form of image sharpening. For the case of VLT, we simulate the `low wind effect' (LWE), which is responsible for focal plane errors in low wind and good seeing conditions. We successfully estimate the LWE using both methods, and show that applying them independently and together yields promising results. We also apply single-image phase diversity to the LWE estimation, which likewise yields promising results. Finally, we simulate segmented piston effects on Keck/NIRC2 images and successfully recover the induced phase errors using single-image phase diversity. We also show that on Keck we can estimate both the segmented piston errors and any Zernike modes affiliated with the non-common path.
Estimating Model Prediction Error: Should You Treat Predictions as Fixed or Random?
Wallach, Daniel; Thorburn, Peter; Asseng, Senthold; Challinor, Andrew J.; Ewert, Frank; Jones, James W.; Rotter, Reimund; Ruane, Alexander
2016-01-01
Crop models are important tools for impact assessment of climate change, as well as for exploring management options under current climate. It is essential to evaluate the uncertainty associated with predictions of these models. We compare two criteria of prediction error: MSEP_fixed, which evaluates the mean squared error of prediction for a model with fixed structure, parameters and inputs, and MSEP_uncertain(X), which evaluates the mean squared error averaged over the distributions of model structure, inputs and parameters. Comparison of model outputs with data can be used to estimate the former. The latter has a squared bias term, which can be estimated using hindcasts, and a model variance term, which can be estimated from a simulation experiment. The separate contributions to MSEP_uncertain(X) can be estimated using a random-effects ANOVA. It is argued that MSEP_uncertain(X) is the more informative uncertainty criterion, because it is specific to each prediction situation.
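The distinction between the two criteria can be shown with a toy simulation: MSEP_fixed scores one fixed parameter value, while MSEP_uncertain(X) averages the squared error over the parameter distribution, adding a model-variance term. The linear "crop model" yhat = theta * x below is an illustrative stand-in of mine, far simpler than the models the paper discusses:

```python
import random

def msep_fixed_vs_uncertain(n=20000, seed=0):
    """Toy contrast of the two prediction-error criteria.

    True system: y = 2*x + noise. Predictor: yhat = theta * x.
    MSEP_fixed uses a single calibrated theta; MSEP_uncertain(X) draws theta
    from its uncertainty distribution for each prediction, so it picks up an
    extra model-variance contribution on top of squared bias and noise.
    """
    rng = random.Random(seed)
    theta_hat = 1.8                               # fixed calibrated slope
    theta_sd = 0.3                                # parameter uncertainty
    msep_fixed = 0.0
    msep_uncertain = 0.0
    for _ in range(n):
        x = rng.uniform(0.0, 1.0)
        y = 2.0 * x + rng.gauss(0.0, 0.1)         # "observed" outcome
        msep_fixed += (y - theta_hat * x) ** 2
        theta = rng.gauss(theta_hat, theta_sd)    # draw from parameter dist.
        msep_uncertain += (y - theta * x) ** 2
    return msep_fixed / n, msep_uncertain / n
```

In this setup the gap between the two criteria is approximately Var(theta) * E[x^2], i.e. the model-variance term that MSEP_fixed ignores.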
Shafran-Nathan, Rakefet; Yuval; Levy, Ilan; Broday, David M
2017-02-15
Accurate estimation of exposure to air pollution is necessary for assessing the impact of air pollution on public health. Most environmental epidemiology studies assign exposure at the home address of the study subjects. Here, we quantify the exposure estimation error at the population scale that results from assigning exposure solely at the residence. A cohort of most schoolchildren in Israel (~950,000), age 6-18, and a representative cohort of Israeli adults (~380,000), age 24-65, were used. For each subject the home and the work or school addresses were geocoded. Together, these two microenvironments account for the locations at which people are present during most of the weekdays. For each subject, we estimated ambient nitrogen oxide concentrations at the home and work or school addresses using two air quality models: a stationary land use regression model and a dynamic dispersion-like model. On average, accounting for the subjects' work or school address as well as for the daily pollutant variation reduced the estimation error of exposure to ambient NOx/NO2 by 5-10 ppb, since daytime concentrations at work/school and at home can differ significantly. These results were consistent regardless of which air quality model was used, and held even for subjects who work or study close to their home. Yet, given their usually short commute, assigning schoolchildren's exposure solely at their residence seems a reasonable approximation. In contrast, since adults commute over longer distances, exposure assigned to adults only at the residence correlates less well with the daily weighted exposure, resulting in larger exposure estimation errors. We show that exposure misclassification can result from not accounting for the subjects' time-location trajectories through the spatiotemporally varying pollutant concentration field. Copyright © 2016 Elsevier B.V. All rights reserved.
Methods of gas hydrate concentration estimation with field examples
Digital Repository Service at National Institute of Oceanography (India)
Kumar, D.; Dash, R.; Dewangan, P.
Different methods of gas hydrate concentration estimation that make use of data from measurements of seismic properties, electrical resistivity, chlorinity, porosity, density, and temperature are summarized in this paper. We demonstrate the methods…
Institute of Scientific and Technical Information of China (English)
BAI Xiufang; LI Siren; GONG Dejun; XU Yongping; JIANG Jingbo
2009-01-01
The objective of this study is to investigate the suitability of a pulse-coherent acoustic Doppler profiler (PCADP) for estimating suspended sediment concentration (SSC). The acoustic backscatter intensity was corrected for spreading and absorption losses, calibrated against OBS data, and finally converted to SSC. The results show a good correlation between SSC and backscatter intensity, with an R value of 0.74 and a mean relative error of 22.4%. A time span with little particle-size variation was then analyzed to exclude the influence of size variation; the correlation coefficient increased to 0.81 and the error decreased to 18.9%. Our results suggest that the PCADP can match other professional instruments in estimating SSC, with errors between 20% and 50%, and can satisfy the needs of suspended-particle dynamics studies.
Aquatic concentrations of chemical analytes compared to ecotoxicity estimates
Kostich, Mitchell S.; Flick, Robert W.; Angela L. Batt,; Mash, Heath E.; Boone, J. Scott; Furlong, Edward T.; Kolpin, Dana W.; Glassmeyer, Susan T.
2017-01-01
We describe screening level estimates of potential aquatic toxicity posed by 227 chemical analytes that were measured in 25 ambient water samples collected as part of a joint USGS/USEPA drinking water plant study. Measured concentrations were compared to biological effect concentration (EC) estimates, including USEPA aquatic life criteria, effective plasma concentrations of pharmaceuticals, published toxicity data summarized in the USEPA ECOTOX database, and chemical structure-based predictions. Potential dietary exposures were estimated using a generic 3-tiered food web accumulation scenario. For many analytes, few or no measured effect data were found, and for some analytes, reporting limits exceeded EC estimates, limiting the scope of conclusions. Results suggest occasional occurrence above ECs for copper, aluminum, strontium, lead, uranium, and nitrate. Sparse effect data for manganese, antimony, and vanadium suggest that these analytes may occur above ECs, but additional effect data would be desirable to corroborate EC estimates. These conclusions were not affected by bioaccumulation estimates. No organic analyte concentrations were found to exceed EC estimates, but ten analytes had concentrations in excess of 1/10th of their respective EC: triclocarban, norverapamil, progesterone, atrazine, metolachlor, triclosan, para-nonylphenol, ibuprofen, venlafaxine, and amitriptyline, suggesting that more detailed characterization of these analytes is warranted.
Error Estimates for Approximate Solutions of the Riccati Equation with Real or Complex Potentials
Finster, Felix
2008-01-01
A method is presented for obtaining rigorous error estimates for approximate solutions of the Riccati equation, with real or complex potentials. Our main tool is to derive invariant region estimates for complex solutions of the Riccati equation. We explain the general strategy for applying these estimates and illustrate the method in typical examples, where the approximate solutions are obtained by gluing together WKB and Airy solutions of corresponding one-dimensional Schrödinger equations.
Performance of cumulant-based rank reduction estimator in presence of unexpected modeling errors
Institute of Scientific and Technical Information of China (English)
王鼎
2015-01-01
Compared with the rank reduction estimator (RARE) based on second-order statistics (called SOS-RARE), the RARE based on fourth-order cumulants (referred to as FOC-RARE) can handle more sources and restrain the negative impacts of Gaussian colored noise. However, the unexpected modeling errors appearing in practice are known to significantly degrade the performance of the RARE. The direction-of-arrival (DOA) estimation performance of the FOC-RARE under such errors is therefore quantitatively derived. The explicit expression for the direction-finding (DF) error is obtained via first-order perturbation analysis, and the theoretical formula for the mean square error (MSE) is then given. Simulation results validate the theoretical analysis and reveal that the FOC-RARE is more robust to unexpected modeling errors than the SOS-RARE.
Relative measurement error analysis in the process of the Nakagami-m fading parameter estimation
Milentijević Vladeta; Denić Dragan; Stefanović Mihajlo; Panić Stefan R.; Radenković Dragan
2011-01-01
An approach to relative measurement error analysis in the process of Nakagami-m fading signal moment estimation is presented in this paper. Relative error expressions are also derived for the cases when the MRC (Maximal Ratio Combining) diversity technique is performed at the receiver. Capitalizing on them, results are graphically presented and discussed to show the influence of various parameters, such as diversity order and fading severity, on the relative measurement...
Jensen, Jonas; Olesen, Jacob Bjerring; Stuart, Matthias Bo; Hansen, Peter Møller; Nielsen, Michael Bachmann; Jensen, Jørgen Arendt
2016-08-01
A method for vector velocity volume flow estimation is presented, along with an investigation of its sources of error and correction of actual volume flow measurements. Volume flow errors are quantified theoretically by numerical modeling, through flow phantom measurements, and studied in vivo. This paper investigates errors from estimating volumetric flow using a commercial ultrasound scanner and the common assumptions made in the literature. The theoretical model shows, e.g., that volume flow is underestimated by 15% when the scan plane is off-axis from the vessel center by 28% of the vessel radius. The error sources were also studied in vivo under realistic clinical conditions, and the theoretical results were applied to correct the volume flow errors. Twenty dialysis patients with arteriovenous fistulas were scanned to obtain vector flow maps of the fistulas. When fitting an ellipse to cross-sectional scans of the fistulas, the major axis was on average 10.2 mm, which is 8.6% larger than the minor axis. The ultrasound beam was on average 1.5 mm from the vessel center, corresponding to 28% of the semi-major axis in an average fistula. Estimating volume flow with an elliptical, rather than circular, vessel area and correcting for the ultrasound beam being off-axis gave a significant (p = 0.008) reduction in error from 31.2% to 24.3%. The error is relative to the ultrasound dilution technique, which is considered the gold standard for volume flow estimation in dialysis patients. The study shows the importance of correcting for volume flow errors, which often arise in clinical practice.
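Two of the numbers above can be reproduced from simple geometry under an assumed model (my reconstruction from the stated figures, not the paper's derivation): a parabolic Poiseuille profile, a scan plane that mistakes the visible chord for a diameter, and the usual parabolic profile factor applied to the line-averaged velocity. Under those assumptions Q_est/Q_true = (1 - f^2)^2 for an off-axis fraction f of the radius:

```python
import math

def offaxis_flow_ratio(offset_frac):
    """Estimated/true volume flow for a Poiseuille profile when the scan plane
    misses the vessel centre by offset_frac * radius.

    Assumed model: velocity is averaged along the visible chord, the chord is
    mistaken for a diameter, and the standard parabolic profile factor is
    applied. Then Q_est / Q_true = (1 - offset_frac**2)**2.
    """
    return (1.0 - offset_frac ** 2) ** 2

def ellipse_area_error(semi_major, semi_minor):
    """Relative area error from assuming a circular cross-section with radius
    semi_major when the lumen is actually an ellipse with the given semi-axes."""
    circular = math.pi * semi_major ** 2
    elliptical = math.pi * semi_major * semi_minor
    return circular / elliptical - 1.0
```

At the paper's average off-axis fraction of 0.28, this model gives a flow underestimation of about 15%, matching the abstract; a major axis 8.6% longer than the minor axis gives an 8.6% area overestimate under the circular assumption.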
EXPLICIT ERROR ESTIMATES FOR COURANT,CROUZEIX-RAVIART AND RAVIART-THOMAS FINITE ELEMENT METHODS
Institute of Scientific and Technical Information of China (English)
Carsten Carstensen; Joscha Gedicke; Donsub Rim
2012-01-01
The elementary analysis of this paper presents explicit expressions for the constants in the a priori error estimates for the lowest-order Courant, Crouzeix-Raviart nonconforming, and Raviart-Thomas mixed finite element methods in the Poisson model problem. The three constants and their dependences on some maximal angle in the triangulation are indeed all comparable and allow accurate a priori error control.
Estimation of 3D reconstruction errors in a stereo-vision system
Belhaoua, A.; Kohler, S.; Hirsch, E.
2009-06-01
The paper presents an approach for error estimation for the various steps of an automated 3D vision-based reconstruction procedure of manufactured workpieces. The process is based on a priori planning of the task and built around a cognitive intelligent sensory system using so-called Situation Graph Trees (SGT) as a planning tool. Such an automated quality control system requires the coordination of a set of complex processes performing sequentially data acquisition, its quantitative evaluation and the comparison with a reference model (e.g., CAD object model) in order to evaluate quantitatively the object. To ensure efficient quality control, the aim is to be able to state if reconstruction results fulfill tolerance rules or not. Thus, the goal is to evaluate independently the error for each step of the stereo-vision based 3D reconstruction (e.g., for calibration, contour segmentation, matching and reconstruction) and then to estimate the error for the whole system. In this contribution, we analyze particularly the segmentation error due to localization errors for extracted edge points supposed to belong to lines and curves composing the outline of the workpiece under evaluation. The fitting parameters describing these geometric features are used as quality measure to determine confidence intervals and finally to estimate the segmentation errors. These errors are then propagated through the whole reconstruction procedure, enabling evaluation of their effect on the final 3D reconstruction result, specifically on position uncertainties. Lastly, analysis of these error estimates enables evaluation of the quality of the 3D reconstruction, as illustrated by the experimental results shown.
An a posteriori error estimator for shape optimization: application to EIT
Giacomini, M.; Pantz, O.; Trabelsi, K.
2015-11-01
In this paper we account for the numerical error introduced by the Finite Element approximation of the shape gradient to construct a guaranteed shape optimization method. We present a goal-oriented strategy inspired by the complementary energy principle to construct a constant-free, fully-computable a posteriori error estimator and to derive a certified upper bound of the error in the shape gradient. The resulting Adaptive Boundary Variation Algorithm (ABVA) is able to identify a genuine descent direction at each iteration and features a reliable stopping criterion for the optimization loop. Some preliminary numerical results for the inverse identification problem of Electrical Impedance Tomography are presented.
Development and estimation of a semi-compensatory model with flexible error structure
DEFF Research Database (Denmark)
Kaplan, Sigal; Shiftan, Yoram; Bekhor, Shlomo
…a disadvantage of current semi-compensatory models versus compensatory models is their behaviorally non-realistic assumption of an independent error structure. This study proposes a novel semi-compensatory model incorporating a flexible error structure. Specifically, the model represents a sequence… -response model and the utility-based choice by alternatively (i) a nested-logit model and (ii) an error-component logit. In order to test the suggested methodology, the model was estimated for a sample of 1,893 ranked choices and respective threshold values from 631 students who participated in a web-based two…
Multilevel Error Estimation and Adaptive h-Refinement for Cartesian Meshes with Embedded Boundaries
Aftosmis, M. J.; Berger, M. J.; Kwak, Dochan (Technical Monitor)
2002-01-01
This paper presents the development of a mesh adaptation module for a multilevel Cartesian solver. While the module allows mesh refinement to be driven by a variety of different refinement parameters, a central feature in its design is the incorporation of a multilevel error estimator based upon direct estimates of the local truncation error using tau-extrapolation. This error indicator exploits the fact that in regions of uniform Cartesian mesh, the spatial operator is exactly the same on the fine and coarse grids, and local truncation error estimates can be constructed by evaluating the residual on the coarse grid of the restricted solution from the fine grid. A new strategy for adaptive h-refinement is also developed to prevent errors in smooth regions of the flow from being masked by shocks and other discontinuous features. For certain classes of error histograms, this strategy is optimal for achieving equidistribution of the refinement parameters on hierarchical meshes, and therefore ensures grid-converged solutions will be achieved for appropriately chosen refinement parameters. The robustness and accuracy of the adaptation module is demonstrated using both simple model problems and complex three-dimensional examples using meshes with from 10^6 to 10^7 cells.
The effect of sampling on estimates of lexical specificity and error rates.
Rowland, Caroline F; Fletcher, Sarah L
2006-11-01
Studies based on naturalistic data are a core tool in the field of language acquisition research and have provided thorough descriptions of children's speech. However, these descriptions are inevitably confounded by differences in the relative frequency with which children use words and language structures. The purpose of the present work was to investigate the impact of sampling constraints on estimates of the productivity of children's utterances, and on the validity of error rates. Comparisons were made between five different sized samples of wh-question data produced by one child aged 2;8. First, we assessed whether sampling constraints undermined the claim (e.g. Tomasello, 2000) that the restricted nature of early child speech reflects a lack of adultlike grammatical knowledge. We demonstrated that small samples were equally likely to under- as to overestimate lexical specificity in children's speech, and that the reliability of estimates varies according to sample size. We argued that reliable analyses require a comparison with a control sample, such as that from an adult speaker. Second, we investigated the validity of estimates of error rates based on small samples. The results showed that overall error rates underestimate the incidence of error in some rarely produced parts of the system and that analyses on small samples were likely to substantially over- or underestimate error rates in infrequently produced constructions. We concluded that caution must be used when basing arguments about the scope and nature of errors in children's early multi-word productions on analyses of samples of spontaneous speech.
Park, J. Y.; Ramachandran, G.; Raynor, P. C.; Kim, S. W.
2011-10-01
Surface area was estimated by three different methods using number and/or mass concentrations obtained from either two or three instruments that are commonly used in the field. The estimated surface area concentrations were compared with reference surface area concentrations (SAREF) calculated from the particle size distributions obtained from a scanning mobility particle sizer and an optical particle counter (OPC). The first estimation method (SAPSD) used the particle size distribution measured by a condensation particle counter (CPC) and an OPC. The second method (SAINV1) used an inversion routine based on PM1.0, PM2.5, and number concentrations to reconstruct assumed lognormal size distributions by minimizing the difference between measurements and calculated values. The third method (SAINV2) utilized a simpler inversion method that used PM1.0 and number concentrations to construct a lognormal size distribution with an assumed value of the geometric standard deviation. All estimated surface area concentrations were calculated from the reconstructed size distributions. These methods were evaluated using particle measurements obtained in a restaurant, an aluminum die-casting factory, and a diesel engine laboratory. SAPSD was 0.7-1.8 times SAREF, and SAINV1 and SAINV2 were 2.2-8 times SAREF, in the restaurant and diesel engine laboratory. In the die-casting facility, all estimated surface area concentrations were lower than SAREF. However, the estimated surface area concentrations from all three methods showed qualitatively similar exposure trends and rankings to SAREF within a workplace. This study suggests that surface area concentration estimation based on particle size distribution (SAPSD) is a more accurate and convenient way to estimate surface area concentrations than estimation methods using inversion routines, and may be feasible for classifying exposure groups and identifying exposure trends.
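Once a lognormal size distribution has been fitted, the surface area concentration reduces to a moment of that distribution. A minimal sketch under the assumptions of spherical particles and the standard Hatch-Choate relations; the function names and units are my own, not the paper's notation:

```python
import math

def mean_particle_surface(cmd_um, gsd):
    """Hatch-Choate relation: mean surface area (um^2) of one spherical
    particle drawn from a lognormal number distribution with count median
    diameter cmd_um and geometric standard deviation gsd.

    E[pi * d^2] = pi * CMD^2 * exp(2 * ln(GSD)^2).
    """
    ln_sg = math.log(gsd)
    return math.pi * cmd_um ** 2 * math.exp(2.0 * ln_sg ** 2)

def surface_area_concentration(number_conc_per_cm3, cmd_um, gsd):
    """Surface area concentration (um^2/cm^3) from a fitted lognormal size
    distribution -- the quantity behind a size-distribution (SAPSD-style)
    estimate."""
    return number_conc_per_cm3 * mean_particle_surface(cmd_um, gsd)
```

The analytic moment agrees with a brute-force Monte Carlo average of pi*d^2 over lognormal draws, which is a quick way to check the formula.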
Directory of Open Access Journals (Sweden)
Kharchenko P. M.
2015-10-01
Full Text Available In the calculations, we used the following assumptions: 1. non-excluded systematic errors are distributed with equal probability; 2. random errors are normally distributed; 3. the total error is the composition of the non-excluded systematic and random errors. In calculating the measurement error of pressure, we proceeded from the working formula. The confidence interval of each variable is less than the instrumental error; therefore, to characterize the total error of the measured value P, we use the instrumental errors of all variables. In estimating the temperature measurement error, both the systematic and the random error were considered. To estimate the random error, we used measurement data of the specific volume of water on six isotherms. The obtained values were compared with published data. As an approximate estimate of the random error of our experimental data, we can take its total over all the isotherms of the specific volume in comparison with the published data. For the studied fractions, the confidence limit of the total error of the measurement results lies in the range 0.03-0.1%. At temperatures close to the critical point, the influence of reference errors and of the error associated with corrections for the thermal expansion of the piezometer increases. In the two-phase region, the confidence limit of the total error increases and lies between 0.08 and 0.15%. This is due to the sharp increase in this region of the reference error of pressure and of the error in determining the weight of the substance in the piezometer.
Directory of Open Access Journals (Sweden)
R. Locatelli
2013-10-01
Full Text Available A modelling experiment has been conceived to assess the impact of transport model errors on methane emissions estimated in an atmospheric inversion system. Synthetic methane observations, obtained from 10 different model outputs from the international TransCom-CH4 model inter-comparison exercise, are combined with a prior scenario of methane emissions and sinks, and integrated into the three-component PYVAR-LMDZ-SACS (PYthon VARiational-Laboratoire de Météorologie Dynamique model with Zooming capability-Simplified Atmospheric Chemistry System) inversion system to produce 10 different methane emission estimates at the global scale for the year 2005. The same methane sinks, emissions and initial conditions have been applied to produce the 10 synthetic observation datasets. The same inversion set-up (statistical errors, prior emissions, inverse procedure) is then applied to derive flux estimates by inverse modelling. Consequently, only differences in the modelling of atmospheric transport may cause differences in the estimated fluxes. In our framework, we show that transport model errors lead to a discrepancy of 27 Tg yr−1 at the global scale, representing 5% of total methane emissions. At continental and annual scales, transport model errors are proportionally larger than at the global scale, with errors ranging from 36 Tg yr−1 in North America to 7 Tg yr−1 in Boreal Eurasia (from 23 to 48%, respectively). At the model grid-scale, the spread of inverse estimates can reach 150% of the prior flux. Therefore, transport model errors contribute significantly to overall uncertainties in emission estimates by inverse modelling, especially when small spatial scales are examined. Sensitivity tests have been carried out to estimate the impact of the measurement network and the advantage of higher horizontal resolution in transport models. The large differences found between methane flux estimates inferred in these different configurations highly…
Assessing the effect of estimation error on risk-adjusted CUSUM chart performance.
2015-12-01
Mark A. Jones, Stefan H. Steiner. Assessing the effect of estimation error on risk-adjusted CUSUM chart performance. Int J Qual Health Care (2012) 24(2): 176–181 doi: 10.1093/intqhc/mzr082. The authors would like to correct an error identified in the above paper. Table 5 included incorrect information. The correct table has been reprinted below. Furthermore, in the discussion on p. 180 of this paper, one of the incorrect numbers in Table 5 was quoted. This section is reproduced below with the correct numbers. In the case of homogeneous patients where adverse event risk was assumed to be constant at 6.6% the estimated level of estimation error: SD (ARL0) = 85.9 was less than the equivalent risk-adjusted scenario where SD (ARL0) = 89.2 but only by around 4%.
Use of an OSSE to Evaluate Background Error Covariances Estimated by the 'NMC Method'
Errico, Ronald M.; Prive, Nikki C.; Gu, Wei
2014-01-01
The NMC method has proven utility for prescribing approximate background-error covariances required by variational data assimilation systems. Here, untuned NMC method estimates are compared with explicitly determined error covariances produced within an OSSE context by exploiting the availability of the true simulated states. Such a comparison provides insights into what kind of rescaling is required to render the NMC method estimates usable. It is shown that rescaling of variances and directional correlation lengths depends greatly on both pressure and latitude. In particular, some scaling coefficients appropriate in the Tropics are the reciprocal of those in the Extratropics. Also, the degree of dynamic balance is grossly overestimated by the NMC method. These results agree with previous examinations of the NMC method which used ensembles as an alternative for estimating background-error statistics.
The mean error estimation of TOPSIS method using a fuzzy reference models
Directory of Open Access Journals (Sweden)
Wojciech Sałabun
2013-04-01
Full Text Available The Technique for Order of Preference by Similarity to Ideal Solution (TOPSIS) is a commonly used multi-criteria decision-making method. A number of authors have proposed improvements, known as extensions, of the TOPSIS method, but these extensions have not been examined with respect to accuracy. Accuracy estimation is very difficult because reference values for the obtained results are not known; therefore, the results of each extension are compared to one another. In this paper, the author proposes a new method to estimate the mean error of TOPSIS with the use of a fuzzy reference model (FRM). This method provides reference values. In experiments involving 1,000 models, 28 million cases are simulated to estimate the mean error. The results of four commonly used normalization procedures are compared. Additionally, the author demonstrates the relationship between the value of the mean error and both the nonlinearity of the models and the number of alternatives.
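For reference, the baseline procedure that the extensions modify can be sketched in a few lines. This is plain TOPSIS with vector normalization (one of the normalization procedures the paper compares); the fuzzy reference model itself is not reproduced here:

```python
import math

def topsis(matrix, weights, benefit):
    """Plain TOPSIS with vector normalization: returns the closeness
    coefficient of each alternative (higher = closer to the ideal solution).

    matrix[i][j]: performance of alternative i on criterion j;
    weights[j]:   weight of criterion j;
    benefit[j]:   True if criterion j is to be maximized, False if minimized.
    """
    m, n = len(matrix), len(matrix[0])
    # vector normalization, then weighting
    norms = [math.sqrt(sum(matrix[i][j] ** 2 for i in range(m))) for j in range(n)]
    v = [[weights[j] * matrix[i][j] / norms[j] for j in range(n)] for i in range(m)]
    # ideal and anti-ideal points per criterion
    ideal = [max(v[i][j] for i in range(m)) if benefit[j]
             else min(v[i][j] for i in range(m)) for j in range(n)]
    anti = [min(v[i][j] for i in range(m)) if benefit[j]
            else max(v[i][j] for i in range(m)) for j in range(n)]
    scores = []
    for i in range(m):
        d_pos = math.sqrt(sum((v[i][j] - ideal[j]) ** 2 for j in range(n)))
        d_neg = math.sqrt(sum((v[i][j] - anti[j]) ** 2 for j in range(n)))
        scores.append(d_neg / (d_pos + d_neg))
    return scores
```

An alternative that is best on every criterion coincides with the ideal point and scores exactly 1; one that is worst on every criterion scores 0.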
Stenroos, Matti; Hauk, Olaf
2013-11-01
The conductivity profile of the head has a major effect on EEG signals, but unfortunately the conductivity for the most important compartment, skull, is only poorly known. In dipole modeling studies, errors in modeled skull conductivity have been considered to have a detrimental effect on EEG source estimation. However, as dipole models are very restrictive, those results cannot be generalized to other source estimation methods. In this work, we studied the sensitivity of EEG and combined MEG+EEG source estimation to errors in skull conductivity using a distributed source model and minimum-norm (MN) estimation. We used a MEG/EEG modeling set-up that reflected state-of-the-art practices of experimental research. Cortical surfaces were segmented and realistically-shaped three-layer anatomical head models were constructed, and forward models were built with the Galerkin boundary element method while varying the skull conductivity. Lead-field topographies and MN spatial filter vectors were compared across conductivities, and the localization and spatial spread of the MN estimators were assessed using intuitive resolution metrics. The results showed that the MN estimator is robust against errors in skull conductivity: the conductivity had a moderate effect on amplitudes of lead fields and spatial filter vectors, but the effect on corresponding morphologies was small. The localization performance of the EEG or combined MEG+EEG MN estimator was only minimally affected by the conductivity error, while the spread of the estimate varied slightly. Thus, the uncertainty with respect to skull conductivity should not prevent researchers from applying minimum norm estimation to EEG or combined MEG+EEG data. Comparing our results to those obtained earlier with dipole models shows that general judgment on the performance of an imaging modality should not be based on analysis with one source estimation method only.
Error estimate of Taylor's frozen-in flow hypothesis in the spectral domain
Narita, Yasuhito
2017-03-01
The quality of Taylor's frozen-in flow hypothesis can be measured by estimating the amount of the fluctuation energy mapped from the streamwise wavenumbers onto the Doppler-shifted frequencies in the spectral domain. For a random sweeping case with a Gaussian variation of the large-scale flow, the mapping quality is expressed by the error function which depends on the mean flow speed, the sweeping velocity, the frequency bin, and the frequency of interest. Both hydrodynamic and magnetohydrodynamic treatments are presented on the error estimate of Taylor's hypothesis with examples from the solar wind measurements.
Goal-oriented error estimation for Cahn-Hilliard models of binary phase transition
van der Zee, Kristoffer G.
2010-10-27
A posteriori estimates of errors in quantities of interest are developed for the nonlinear system of evolution equations embodied in the Cahn-Hilliard model of binary phase transition. These involve the analysis of wellposedness of dual backward-in-time problems and the calculation of residuals. Mixed finite element approximations are developed and used to deliver numerical solutions of representative problems in one- and two-dimensional domains. Estimated errors are shown to be quite accurate in these numerical examples. © 2010 Wiley Periodicals, Inc.
Estimation of bias errors in measured airplane responses using maximum likelihood method
Klein, Vladislav; Morgan, Dan R.
1987-01-01
A maximum likelihood method is used for the estimation of unknown bias errors in measured airplane responses. The mathematical model of an airplane is represented by six-degree-of-freedom kinematic equations, in which the input variables are replaced by their measured values, assumed to be free of random errors. The resulting algorithm is verified with simulation and flight test data. The maximum likelihood estimates from in-flight measured data are compared with those obtained using a nonlinear fixed-interval smoother and an extended Kalman filter.
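A minimal sketch of the underlying idea, under strong simplifying assumptions of my own (a single constant bias, iid Gaussian noise, and a known model output in place of the six-degree-of-freedom kinematics): with Gaussian noise, the maximum likelihood estimate of the bias reduces to least squares, i.e. the mean residual between measurement and model.

```python
import numpy as np

rng = np.random.default_rng(4)

# "Kinematic model" output stands in here for the airplane response model;
# the measured signal carries an unknown constant bias plus random noise.
t = np.linspace(0.0, 10.0, 201)
model = np.sin(t)
bias_true = 0.35
measured = model + bias_true + 0.05 * rng.standard_normal(t.size)

# Under iid Gaussian noise, the ML estimate of a constant bias is the
# mean residual between the measurements and the model prediction.
bias_hat = np.mean(measured - model)
print(bias_hat)
```

The actual method estimates several instrument biases jointly by maximizing the likelihood over the full kinematic state equations; this toy reduction only shows why the ML machinery recovers bias terms from residuals.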
Kukush, Alexander; Schneeweiss, Hans
2004-01-01
We compare the asymptotic covariance matrix of the ML estimator in a nonlinear measurement error model to the asymptotic covariance matrices of the CS and SQS estimators studied in Kukush et al. (2002). For small measurement error variances they are equal up to the order of the measurement error variance and thus nearly equally efficient.
An Extended Result on the Optimal Estimation Under the Minimum Error Entropy Criterion
Directory of Open Access Journals (Sweden)
Badong Chen
2014-04-01
Full Text Available The minimum error entropy (MEE) criterion has been successfully used in fields such as parameter estimation, system identification, and supervised machine learning. There is in general no explicit expression for the optimal MEE estimate unless some constraints on the conditional distribution are imposed. A recent paper proved that if the conditional density is conditionally symmetric and unimodal (CSUM), then the optimal MEE estimate (with Shannon entropy) equals the conditional median. In this study, we extend this result to generalized MEE estimation, where the optimality criterion is the Renyi entropy or, equivalently, the α-order information potential (IP).
DEFF Research Database (Denmark)
Lowes, F.J.; Olsen, Nils
2004-01-01
Most modern spherical harmonic geomagnetic models based on satellite data include estimates of the variances of the spherical harmonic coefficients of the model; these estimates are based on the geometry of the data and the fitting functions, and on the magnitude of the residuals. However, [...] led to quite inaccurate variance estimates. We estimate correction factors which range from 1/4 to 20, with the largest increases being for the zonal, m = 0, and sectorial, m = n, terms. With no correction, the OSVM variances give a mean-square vector field error of prediction over the Earth's surface [...]
Estimation of the minimum mRNA splicing error rate in vertebrates.
Skandalis, A
2016-01-01
The majority of protein coding genes in vertebrates contain several introns that are removed by the mRNA splicing machinery. Errors during splicing can generate aberrant transcripts and degrade the transmission of genetic information thus contributing to genomic instability and disease. However, estimating the error rate of constitutive splicing is complicated by the process of alternative splicing which can generate multiple alternative transcripts per locus and is particularly active in humans. In order to estimate the error frequency of constitutive mRNA splicing and avoid bias by alternative splicing we have characterized the frequency of splice variants at three loci, HPRT, POLB, and TRPV1 in multiple tissues of six vertebrate species. Our analysis revealed that the frequency of splice variants varied widely among loci, tissues, and species. However, the lowest observed frequency is quite constant among loci and approximately 0.1% aberrant transcripts per intron. Arguably this reflects the "irreducible" error rate of splicing, which consists primarily of the combination of replication errors by RNA polymerase II in splice consensus sequences and spliceosome errors in correctly pairing exons.
The estimation of analysis error characteristics using an observation systems simulation experiment
Energy Technology Data Exchange (ETDEWEB)
Errico, R.M. [Goddard Earth Sciences and Technology Center, Univ. of Maryland, Baltimore County (United States); Global Modeling and Assimilation Office, Goddard Space Flight Center, Greenbelt, MD (United States); Yang, R. [Science Systems and Applications Inc., Lanham, MD (United States); Global Modeling and Assimilation Office, Goddard Space Flight Center, Greenbelt, MD (United States); Masutani, M.; Woollen, J.S. [National Centers for Environmental Prediction, Camp Springs, MD (United States)
2007-12-15
Observation system simulation experiments (OSSEs) have been performed at the National Centers for Environmental Prediction primarily for the purpose of evaluating the forecast improvement potential of proposed new observation instruments. The simulations have been validated primarily by comparing results from corresponding data denial experiments in both simulated and real data assimilation contexts. Additional validation is presented here using comparisons of some statistics of analysis increments determined from a baseline simulation using the entire suite of observations utilized during a reanalysis for February 1993. By exploiting the availability of a data set representing ''truth'' in the simulations, the background and analysis errors produced for the baseline simulation are computed. Several statistics of these errors are then determined, including time means and variances as functions of location or spherical harmonic wave number, vertical correlations, Kalman gains, and balance as measured by projections onto normal modes. Use of an OSSE in this way is one of the few means of estimating analysis error characteristics. Although these simulation experiments are among the best calibrated ones existing, the additional validation here indicates that some unrealism remains present. With this caveat, several interesting characteristics of analysis error have been revealed. Among these are that: longitudinal variations of error variances in the Northern Hemisphere have a similar range as latitudinal variations; corresponding background and analysis error variances are very similar in most regions so that the Kalman gains are generally small, with the notable exception of regions and times well observed by rawinsondes; correlation lengths (both vertical and horizontal) are very similar for background and analysis errors; error variances at horizontal scales shorter than those corresponding to approximately spherical harmonic wavenumber 70 are [...]
Accuracy and Sources of Error for an Angle Independent Volume Flow Estimator
DEFF Research Database (Denmark)
Jensen, Jonas; Olesen, Jacob Bjerring; Hansen, Peter Møller
2014-01-01
This paper investigates sources of error for a vector velocity volume flow estimator. Quantification of the estimator's accuracy is performed theoretically and investigated in vivo. Womersley's model for pulsatile flow is used to simulate velocity profiles and calculate volume flow errors in [...]. A BK Medical UltraView 800 ultrasound scanner with a 9 MHz linear array transducer is used to obtain Vector Flow Imaging sequences of a superficial part of the fistulas. Cross-sectional diameters of each fistula are measured on B-mode images by rotating the scan plane 90 degrees. The major axis [...] % underestimated volume flow according to the simulation. Volume flow estimates were corrected for the beam being off-axis, but this was not able to significantly decrease the error relative to measurements with the reference method.
Finite Element Error Estimates for Critical Exponent Semilinear Problems without Angle Conditions
Bank, Randolph E; Szypowski, Ryan; Zhu, Yunrong
2011-01-01
In this article we consider a priori error estimates for semilinear problems with critical and subcritical polynomial nonlinearity in d space dimensions. When d=2 and d=3, it is well-understood how mesh geometry impacts finite element interpolant quality. However, much more restrictive conditions on angles are needed to derive basic a priori quasi-optimal error estimates as well as a priori pointwise estimates for Galerkin approximations. In this article, we show how to derive these types of a priori estimates without requiring the discrete maximum principle, hence eliminating the need for restrictive angle conditions that are difficult to satisfy in three dimensions or adaptive settings. We first describe a class of semilinear problems with critical exponents. The solution theory for this class of problems is then reviewed, including generalized maximum principles and the construction of a priori L-infinity bounds using cutoff functions and the De Giorgi iterative method (or Stampacchia truncation method). [...]
OPTIMAL ERROR ESTIMATES FOR NEDELEC EDGE ELEMENTS FOR TIME-HARMONIC MAXWELL'S EQUATIONS
Institute of Scientific and Technical Information of China (English)
Liuqiang Zhong; Shi Shu; Gabriel Wittum; Jinchao Xu
2009-01-01
In this paper, we obtain optimal error estimates in both L2-norm and H(curl)-norm for the Nedelec edge finite element approximation of the time-harmonic Maxwell's equations on a general Lipschitz domain discretized on quasi-uniform meshes. One key to our proof is to transform the L2 error estimates into the L2 estimate of a discrete divergence-free function which belongs to the edge finite element spaces, and then use the approximation of the discrete divergence-free function by the continuous divergence-free function and a duality argument for the continuous divergence-free function. For Nedelec's second type elements, we present an optimal convergence estimate which improves the best results available in the literature.
On Time/Space Aggregation of Fine-Scale Error Estimates (Invited)
Huffman, G. J.
2013-12-01
Estimating errors inherent in fine time/space-scale satellite precipitation data sets is still an on-going problem and a key area of active research. Complicating features of these data sets include the intrinsic intermittency of the precipitation in space and time and the resulting highly skewed distribution of precipitation rates. Additional issues arise from the subsampling errors that satellites introduce, the errors due to retrieval algorithms, and the correlated error that retrieval and merger algorithms sometimes introduce. Several interesting approaches have been developed recently that appear to make progress on these long-standing issues. At the same time, the monthly averages over 2.5°x2.5° grid boxes in the Global Precipitation Climatology Project (GPCP) Satellite-Gauge (SG) precipitation data set follow a very simple sampling-based error model (Huffman 1997) with coefficients that are set using coincident surface and GPCP SG data. This presentation outlines the unsolved problem of how to aggregate the fine-scale errors (discussed above) to an arbitrary time/space averaging volume for practical use in applications, reducing in the limit to simple Gaussian expressions at the monthly 2.5°x2.5° scale. Scatter diagrams with different time/space averaging show that the relationship between the satellite and validation data improves due to the reduction in random error. One of the key, and highly non-linear, issues is that fine-scale estimates tend to have large numbers of cases with points near the axes on the scatter diagram (one of the values is exactly or nearly zero, while the other value is higher). Averaging 'pulls' the points away from the axes and towards the 1:1 line, which usually happens for higher precipitation rates before lower rates. Given this qualitative observation of how aggregation affects error, we observe that existing aggregation rules, such as the Steiner et al. (2003) power law, only depend on the aggregated precipitation rate
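The central qualitative claim above, that time/space aggregation reduces the random error component and pulls scatter-plot points toward the 1:1 line, can be sketched numerically. This toy model is my own illustration (skewed synthetic "rain rates" plus additive Gaussian error), not the GPCP error model: for purely random error, averaging n fine-scale estimates shrinks the error roughly as 1/sqrt(n).

```python
import numpy as np

rng = np.random.default_rng(3)

# Skewed "truth" field (gamma-distributed rates, mimicking precipitation
# intermittency) observed with additive random error at the fine scale.
truth = rng.gamma(shape=0.5, scale=2.0, size=(10_000, 16))
sat = truth + rng.normal(scale=1.0, size=truth.shape)

rmse_fine = np.sqrt(np.mean((sat - truth) ** 2))
# Aggregate each row of 16 fine-scale values into one coarse estimate.
rmse_agg = np.sqrt(np.mean((sat.mean(axis=1) - truth.mean(axis=1)) ** 2))
print(rmse_fine, rmse_agg)  # aggregated RMSE is roughly rmse_fine / 4 for n=16
```

Correlated retrieval error, the hard part flagged in the abstract, would break the 1/sqrt(n) scaling, which is why a general aggregation rule for fine-scale errors remains an open problem.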
Audit of the global carbon budget: estimate errors and their impact on uptake uncertainty
Ballantyne, A. P.; Andres, R.; Houghton, R.; Stocker, B. D.; Wanninkhof, R.; Anderegg, W.; Cooper, L. A.; DeGrandpre, M.; Tans, P. P.; Miller, J. C.; Alden, C.; White, J. W. C.
2014-10-01
Over the last 5 decades monitoring systems have been developed to detect changes in the accumulation of C in the atmosphere, ocean, and land; however, our ability to detect changes in the behavior of the global C cycle is still hindered by measurement and estimate errors. Here we present a rigorous and flexible framework for assessing the temporal and spatial components of estimate error and their impact on uncertainty in net C uptake by the biosphere. We present a novel approach for incorporating temporally correlated random error into the error structure of emission estimates. Based on this approach, we conclude that the 2σ error of the atmospheric growth rate has decreased from 1.2 Pg C yr-1 in the 1960s to 0.3 Pg C yr-1 in the 2000s, leading to a ~20% reduction in the overall uncertainty of net global C uptake by the biosphere. While fossil fuel emissions have increased by a factor of 4 over the last 5 decades, 2σ errors in fossil fuel emissions due to national reporting errors and differences in energy reporting practices have increased from 0.3 Pg C yr-1 in the 1960s to almost 1.0 Pg C yr-1 during the 2000s. At the same time, land use emissions have declined slightly over the last 5 decades, but their relative errors remain high. Notably, errors associated with fossil fuel emissions have come to dominate uncertainty in the global C budget and are now comparable to the total emissions from land use, so efforts to reduce errors in fossil fuel emissions are necessary. Given all the major sources of error in the global C budget that we could identify, we are 93% confident that C uptake has increased and 97% confident that C uptake by the terrestrial biosphere has increased over the last 5 decades. Although the persistence of future C sinks remains unknown and some ecosystem services may be compromised by this continued C uptake (e.g. ocean acidification), it is clear that arguably the greatest ecosystem service currently provided by the biosphere is the [...]
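Two of the mechanics behind such a budget audit can be sketched in a few lines. The component magnitudes below are only roughly the ones quoted in the abstract (the land-use value is my own illustrative number), and independence of the components is an assumption; the AR(1) inflation factor is the standard large-sample result for the variance of a mean under lag-1 autocorrelation, standing in for the paper's treatment of temporally correlated error.

```python
import math

# Hypothetical 2σ component errors (Pg C / yr), roughly 2000s magnitudes
err_growth = 0.3   # atmospheric growth rate
err_fossil = 1.0   # fossil fuel emissions
err_landuse = 0.5  # land use (illustrative value, not from the abstract)

# If the components were independent, their errors would add in quadrature
total_2sigma = math.sqrt(err_growth**2 + err_fossil**2 + err_landuse**2)
print(total_2sigma)

def mean_error_ar1(sigma, n, rho):
    """2σ error of an n-year mean when annual errors follow an AR(1)
    process with lag-1 correlation rho (large-n effective-sample-size
    approximation); rho = 0 recovers the usual sigma / sqrt(n)."""
    neff = n * (1 - rho) / (1 + rho)
    return sigma / math.sqrt(neff)

# Temporal correlation in reporting errors inflates the decadal-mean error
print(mean_error_ar1(1.0, 10, 0.0), mean_error_ar1(1.0, 10, 0.5))
```

The second print shows why ignoring temporal correlation understates uncertainty: with ρ = 0.5 the decadal-mean error is about 1.7 times the uncorrelated value.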
Some error estimates for the lumped mass finite element method for a parabolic problem
Chatzipantelidis, P.
2012-01-01
We study the spatially semidiscrete lumped mass method for the model homogeneous heat equation with homogeneous Dirichlet boundary conditions. Improving earlier results we show that known optimal order smooth initial data error estimates for the standard Galerkin method carry over to the lumped mass method whereas nonsmooth initial data estimates require special assumptions on the triangulation. We also discuss the application to time discretization by the backward Euler and Crank-Nicolson methods. © 2011 American Mathematical Society.
Impact of Channel Estimation Errors on Multiuser Detection via the Replica Method
Directory of Open Access Journals (Sweden)
Li Husheng
2005-01-01
Full Text Available For practical wireless DS-CDMA systems, channel estimation is imperfect due to noise and interference. In this paper, the impact of channel estimation errors on multiuser detection (MUD) is analyzed under the framework of the replica method. System performance is obtained in the large system limit for optimal MUD, linear MUD, and turbo MUD, and is validated by numerical results for finite systems.
A novel multitemporal insar model for joint estimation of deformation rates and orbital errors
Zhang, Lei
2014-06-01
Orbital errors, characterized typically as long-wavelength artifacts, commonly exist in interferometric synthetic aperture radar (InSAR) imagery as a result of inaccurate determination of the sensor state vector. Orbital errors degrade the precision of multitemporal InSAR products (i.e., ground deformation). Although research on orbital error reduction has been ongoing for nearly two decades and several algorithms for reducing the effect of the errors are already in existence, the errors cannot always be corrected efficiently and reliably. We propose a novel model that is able to jointly estimate deformation rates and orbital errors based on the different spatiotemporal characteristics of the two types of signals. The proposed model is able to isolate a long-wavelength ground motion signal from the orbital error even when the two types of signals exhibit similar spatial patterns. The proposed algorithm is efficient and requires no ground control points. In addition, the method is built upon wrapped phases of interferograms, eliminating the need for phase unwrapping. The performance of the proposed model is validated using both simulated and real data sets. The demo codes of the proposed model are also provided for reference. © 2013 IEEE.
Institute of Scientific and Technical Information of China (English)
Anonymous
2007-01-01
This paper discusses the dependence of the phase error on the 50 GHz bandwidth oscilloscope's sampling circuitry. We give the definition of the phase error as the difference between the impulse responses of the NTN (nose-to-nose) estimate and the true response of the sampling circuit. We develop a method to predict the NTN phase response arising from the internal sampling circuitry of the oscilloscope. For the default sampling-circuit configuration that we examine, the phase error is approximately 7.03 at 50 GHz. We study the sensitivity of the oscilloscope's phase response to parametric changes in sampling-circuit component values. We develop procedures to quantify the sensitivity of the phase error to each component and to a combination of components, taking the fractional uncertainty in each of the model parameters to be the same value, 10%. We predict the upper and lower bounds of the phase error; that is, we vary all of the circuit parameters simultaneously in such a way as to increase the phase error, and then vary all of the circuit parameters to decrease the phase error. Based on a Type B evaluation, this method quantifies the influence of all parameters of the sampling circuit and gives a standard uncertainty value of 1.34. This result is derived for the first time and has important practical uses. It can be used for phase calibration in 50 GHz bandwidth large-signal network analyzers (LSNAs).
Directory of Open Access Journals (Sweden)
Berhane Yemane
2008-03-01
[...] estimates and regression analyses to significant amounts of randomly introduced errors indicates a high level of robustness of the dataset. This apparent inertia of population parameter estimates to simulated errors is largely due to the size of the dataset. Tolerable margins of random error in DSS data may exceed 20%. While this is not an argument in favour of poor-quality data, reducing the time and valuable resources spent on detecting and correcting random errors in routine DSS operations may be justifiable, as the returns from such procedures diminish with increasing overall accuracy. The money and effort currently spent on endlessly correcting DSS datasets would perhaps be better spent on increasing the surveillance population size and geographic spread of DSSs and on analysing and disseminating research findings.
Nuclear power plant fault-diagnosis using neural networks with error estimation
Energy Technology Data Exchange (ETDEWEB)
Kim, K.; Bartlett, E.B.
1994-12-31
The assurance of the diagnosis obtained from a nuclear power plant (NPP) fault-diagnostic advisor based on artificial neural networks (ANNs) is essential for the practical implementation of the advisor for fault detection and identification. The objectives of this study are to develop an error estimation technique (EET) for diagnosis validation and apply it to the NPP fault-diagnostic advisor. Diagnosis validation is realized by estimating error bounds on the advisor's diagnoses. The 22 transients obtained from the Duane Arnold Energy Center (DAEC) training simulator are used for this research. The results show that the NPP fault-diagnostic advisor is effective at producing proper diagnoses, on which errors are assessed for validation and verification purposes.
Data driven estimation of imputation error-a strategy for imputation with a reject option
DEFF Research Database (Denmark)
Bak, Nikolaj; Hansen, Lars Kai
2016-01-01
[...] indiscriminately. We note that the effects of imputation can be strongly dependent on what is missing. To help make decisions about which records should be imputed, we propose to use a machine learning approach to estimate the imputation error for each case with missing data. The method is thought [...] with missing values by weighing the "true errors" by similarity. The method can also be used to test the performance of different imputation methods. A universal numerical threshold of acceptable error cannot be set, since this will differ according to the data, research question, and analysis method [...]. The effect of the threshold can be estimated using the complete cases. The user can set an a priori relevant threshold for what is acceptable, or use cross-validation with the final analysis to choose the threshold. The choice can be presented along with argumentation for the choice rather than holding [...]
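The core trick, estimating per-case imputation error from the complete cases by hiding known values and imputing them, can be sketched in a bare-bones form. Everything below is my own simplification: a mean imputer stands in for whatever imputation method is under test, and the similarity weighting described in the abstract is omitted.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic "complete cases" (rows with no missing values); the last
# column is strongly predictable from the first three.
X = rng.normal(size=(500, 4))
X[:, 3] = X[:, :3].sum(axis=1) + 0.1 * rng.normal(size=500)

def impute_error_estimates(X, col):
    """Hide column `col` in the complete cases, impute it (here: column
    mean, the simplest possible imputer), and return per-case 'true
    errors' -- the quantity the proposed method learns to predict."""
    truth = X[:, col]
    imputed = np.full_like(truth, truth.mean())
    return np.abs(truth - imputed)

errors = impute_error_estimates(X, 3)

# Reject option: refuse to impute cases whose estimated error exceeds
# a user-chosen threshold (chosen a priori or by cross-validation).
threshold = 1.0
reject_rate = (errors > threshold).mean()
print(reject_rate)
```

With a better imputer (e.g. regressing column 3 on the other columns) the estimated errors, and hence the reject rate at the same threshold, would drop, which is exactly the comparison the abstract says the method supports.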
On the BER and capacity analysis of MIMO MRC systems with channel estimation error
Yang, Liang
2011-10-01
In this paper, we investigate the effect of channel estimation error on the capacity and bit-error rate (BER) of a multiple-input multiple-output (MIMO) transmit maximal ratio transmission (MRT) and receive maximal ratio combining (MRC) system over uncorrelated Rayleigh fading channels. We first derive the ergodic (average) capacity expressions for such systems when power adaptation is applied at the transmitter. The exact capacity expression for the uniform power allocation case is also presented. Furthermore, to investigate the diversity order of the MIMO MRT-MRC scheme, we derive the BER performance under a uniform power allocation policy. We also present an asymptotic BER performance analysis for the MIMO MRT-MRC system with multiuser diversity. Numerical results are given to illustrate the sensitivity of the main performance to the channel estimation error and the tightness of the approximate cutoff value. © 2011 IEEE.
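The qualitative effect analyzed above can be reproduced with a small Monte Carlo sketch. This is not the paper's analytical derivation: it simulates only receive-side MRC of BPSK over Rayleigh fading, with the channel estimation error modeled as additive complex Gaussian noise on the channel estimate, an assumption of my own chosen for simplicity.

```python
import numpy as np

rng = np.random.default_rng(2)

def ber_mrc(n_rx, snr_db, sigma_e2, n_bits=100_000):
    """Monte Carlo BER of BPSK with n_rx-branch MRC over Rayleigh fading,
    combining with a noisy channel estimate h_hat = h + e, e ~ CN(0, sigma_e2)."""
    snr = 10 ** (snr_db / 10)
    bits = rng.integers(0, 2, n_bits)
    s = 2.0 * bits - 1.0
    h = (rng.standard_normal((n_bits, n_rx)) +
         1j * rng.standard_normal((n_bits, n_rx))) / np.sqrt(2)
    noise = (rng.standard_normal((n_bits, n_rx)) +
             1j * rng.standard_normal((n_bits, n_rx))) / np.sqrt(2 * snr)
    r = h * s[:, None] + noise
    e = (rng.standard_normal((n_bits, n_rx)) +
         1j * rng.standard_normal((n_bits, n_rx))) * np.sqrt(sigma_e2 / 2)
    h_hat = h + e                              # imperfect CSI at the receiver
    stat = np.real(np.sum(np.conj(h_hat) * r, axis=1))  # MRC decision statistic
    return np.mean((stat > 0) != (bits == 1))

ber_perfect = ber_mrc(2, 10, 0.0)   # perfect CSI
ber_noisy = ber_mrc(2, 10, 0.1)     # noisy channel estimate
print(ber_perfect, ber_noisy)
```

Sweeping `sigma_e2` reproduces the sensitivity trend the numerical results in the paper illustrate: the BER floor rises with estimation error variance.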
Estimation of flood warning runoff thresholds in ungauged basins with asymmetric error functions
Directory of Open Access Journals (Sweden)
E. Toth
2015-06-01
Full Text Available In many real-world flood forecasting systems, the runoff thresholds for activating warnings or mitigation measures correspond to the flow peaks with a given return period (often the 2-year one), which may be associated with the bankfull discharge. At locations where the historical streamflow records are absent or very limited, the threshold can be estimated with regionally derived empirical relationships between catchment descriptors and the desired flood quantile. Whatever the functional form, such models are generally parameterised by minimising the mean square error, which assigns equal importance to overprediction and underprediction errors. Considering that the consequences of an overestimated warning threshold (leading to the risk of missed alarms) generally have a much lower level of acceptance than those of an underestimated threshold (leading to the issuance of false alarms), the present work proposes to parameterise the regression model through an asymmetric error function that penalises overpredictions more. The estimates by models (feedforward neural networks) with increasing degrees of asymmetry are compared with those of a traditional, symmetrically trained network in a rigorous cross-validation experiment referred to a database of catchments covering the Italian country. The analysis shows that the use of the asymmetric error function can substantially reduce the number and extent of overestimation errors compared to the use of traditional square errors. Such a reduction is, of course, at the expense of increased underestimation errors, but the overall accuracy is still acceptable, and the results illustrate the potential value of choosing an asymmetric error function when the consequences of missed alarms are more severe than those of false alarms.
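The asymmetric error function idea can be illustrated independently of the neural networks. The sketch below is my own minimal version: a weighted squared loss that penalises overpredictions `w_over` times more than underpredictions, fitted here by grid search for a single constant threshold (invented data); the paper applies the same principle to train feedforward networks on catchment descriptors.

```python
import numpy as np

def asymmetric_mse(y_true, y_pred, w_over=4.0):
    """Squared loss that penalises overpredictions (y_pred > y_true)
    w_over times more than underpredictions; w_over = 1 is ordinary MSE."""
    err = y_pred - y_true
    weights = np.where(err > 0, w_over, 1.0)
    return np.mean(weights * err**2)

# Fit a constant "threshold" to hypothetical flood quantiles by grid search
y = np.array([10.0, 12.0, 15.0, 20.0, 30.0])
grid = np.linspace(5, 35, 601)
best = min(grid, key=lambda c: asymmetric_mse(y, np.full_like(y, c)))
print(best)  # pulled below the plain mean (17.4) by the asymmetry
```

Increasing `w_over` drags the fitted value further down, trading a few more false alarms (underestimates) for fewer missed alarms (overestimates), which is exactly the trade-off the abstract describes.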
Jones, Reese E.; Mandadapu, Kranthi K.
2012-04-01
We present a rigorous Green-Kubo methodology for calculating transport coefficients based on on-the-fly estimates of: (a) statistical stationarity of the relevant process, and (b) error in the resulting coefficient. The methodology uses time samples efficiently across an ensemble of parallel replicas to yield accurate estimates, which is particularly useful for estimating the thermal conductivity of semi-conductors near their Debye temperatures where the characteristic decay times of the heat flux correlation functions are large. Employing and extending the error analysis of Zwanzig and Ailawadi [Phys. Rev. 182, 280 (1969)], 10.1103/PhysRev.182.280 and Frenkel [in Proceedings of the International School of Physics "Enrico Fermi", Course LXXV (North-Holland Publishing Company, Amsterdam, 1980)] to the integral of correlation, we are able to provide tight theoretical bounds for the error in the estimate of the transport coefficient. To demonstrate the performance of the method, four test cases of increasing computational cost and complexity are presented: the viscosity of Ar and water, and the thermal conductivity of Si and GaN. In addition to producing accurate estimates of the transport coefficients for these materials, this work demonstrates precise agreement of the computed variances in the estimates of the correlation and the transport coefficient with the extended theory based on the assumption that fluctuations follow a Gaussian process. The proposed algorithm in conjunction with the extended theory enables the calculation of transport coefficients with the Green-Kubo method accurately and efficiently.
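The basic Green-Kubo estimate, integrating the autocorrelation of a flux signal to get a transport coefficient, can be sketched on synthetic data. This is not the paper's on-the-fly, multi-replica method: I substitute an AR(1) process (whose Green-Kubo integral is known in closed form) for a molecular dynamics heat-flux series, so the estimate can be checked against the exact value.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "flux": an AR(1) process standing in for a heat-flux series
rho, n = 0.9, 200_000
noise = rng.standard_normal(n)
flux = np.empty(n)
flux[0] = noise[0]
for i in range(1, n):
    flux[i] = rho * flux[i - 1] + noise[i]

def acf(x, max_lag):
    """Biased-denominator-free sample autocovariance up to max_lag."""
    x = x - x.mean()
    m = len(x)
    return np.array([np.dot(x[:m - k], x[k:]) / (m - k) for k in range(max_lag)])

dt, max_lag = 1.0, 200
C = acf(flux, max_lag)
coeff = dt * (C[0] / 2.0 + C[1:].sum())   # trapezoid-rule Green-Kubo integral

# Exact discrete value for AR(1): variance 1/(1-rho^2), ACF(k) = rho^k * var
exact = (0.5 + rho / (1 - rho)) / (1 - rho**2)
print(coeff, exact)
```

The error-bound machinery in the paper addresses precisely the weakness visible here: the statistical error of `coeff` grows with the correlation time relative to the run length, so replicas and stationarity checks are needed for slowly decaying fluxes.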
DEFF Research Database (Denmark)
Voigt, Andreas Jauernik; Santos, Ilmar
2012-01-01
This paper gives an original theoretical and experimental contribution to the issue of reducing force estimation errors, which arise when applying Active Magnetic Bearings (AMBs) with pole embedded Hall sensors for force quantification purposes. Motivated by the prospect of increasing the usability...
A Sandwich-Type Standard Error Estimator of SEM Models with Multivariate Time Series
Zhang, Guangjian; Chow, Sy-Miin; Ong, Anthony D.
2011-01-01
Structural equation models are increasingly used as a modeling tool for multivariate time series data in the social and behavioral sciences. Standard error estimators of SEM models, originally developed for independent data, require modifications to accommodate the fact that time series data are inherently dependent. In this article, we extend a…
On the a priori estimation of collocation error covariance functions: a feasibility study
DEFF Research Database (Denmark)
Arabelos, D.N.; Forsberg, René; Tscherning, C.C.
2007-01-01
Error covariance estimates are necessary information for the combination of solutions resulting from different kinds of data or methods, or for the assimilation of new results in already existing solutions. Such a combination or assimilation process demands proper weighting of the data, in order ...
Institute of Scientific and Technical Information of China (English)
2008-01-01
In this paper, we investigate the Legendre Galerkin spectral approximation of quadratic optimal control problems governed by parabolic equations. A spectral approximation scheme for the parabolic optimal control problem is presented. We obtain a posteriori error estimates of the approximated solutions for both the state and the control.
A POSTERIORI ERROR ESTIMATE OF THE DSD METHOD FOR FIRST-ORDER HYPERBOLIC EQUATIONS
Institute of Scientific and Technical Information of China (English)
康彤; 余德浩
2002-01-01
A posteriori error estimates for the discontinuous-streamline diffusion method for first-order hyperbolic equations are presented, which can be used to adapt the spatial mesh appropriately. A numerical example is given to illustrate the accuracy and feasibility of this method.
Bond, William Glenn
2012-01-01
In this paper, I propose to demonstrate a means of error estimation preprocessing in the assembly of overlapping aerial image mosaics. The mosaic program automatically assembles several hundred aerial images from a data set by aligning them, via image registration using a pattern search method, onto a GIS grid. The method presented first locates…
SUPERCONVERGENCE AND A POSTERIORI ERROR ESTIMATES FOR BOUNDARY CONTROL GOVERNED BY STOKES EQUATIONS
Institute of Scientific and Technical Information of China (English)
Hui-po Liu; Ning-ning Yan
2006-01-01
In this paper, superconvergence results are derived for a class of boundary control problems governed by Stokes equations. We derive superconvergence results for both the control and the state approximation. Based on these superconvergence results, we obtain asymptotically exact a posteriori error estimates.
L∞-error estimate for a system of elliptic quasivariational inequalities
Directory of Open Access Journals (Sweden)
M. Boulbrachene
2003-01-01
Full Text Available We deal with the numerical analysis of a system of elliptic quasivariational inequalities (QVIs). Under W2,p(Ω)-regularity of the continuous solution, a quasi-optimal L∞-convergence of a piecewise linear finite element method is established, involving a monotone algorithm of Bensoussan-Lions type and standard uniform error estimates known for elliptic variational inequalities (VIs).
Error estimates for asymptotic solutions of dynamic equations on time scales
Directory of Open Access Journals (Sweden)
Gro Hovhannisyan
2007-02-01
Full Text Available We establish error estimates for first-order linear systems of equations and linear second-order dynamic equations on time scales by using calculus on time scales [1,4,5] and Birkhoff-Levinson's method of asymptotic solutions [3,6,8,9].
Discretization error estimation and exact solution generation using the method of nearby problems.
Energy Technology Data Exchange (ETDEWEB)
Sinclair, Andrew J. (Auburn University Auburn, AL); Raju, Anil (Auburn University Auburn, AL); Kurzen, Matthew J. (Virginia Tech Blacksburg, VA); Roy, Christopher John (Virginia Tech Blacksburg, VA); Phillips, Tyrone S. (Virginia Tech Blacksburg, VA)
2011-10-01
The Method of Nearby Problems (MNP), a form of defect correction, is examined as a method for generating exact solutions to partial differential equations and as a discretization error estimator. For generating exact solutions, four-dimensional spline fitting procedures were developed and implemented into a MATLAB code for generating spline fits on structured domains with arbitrary levels of continuity between spline zones. For discretization error estimation, MNP/defect correction only requires a single additional numerical solution on the same grid (as compared to Richardson extrapolation which requires additional numerical solutions on systematically-refined grids). When used for error estimation, it was found that continuity between spline zones was not required. A number of cases were examined including 1D and 2D Burgers equation, the 2D compressible Euler equations, and the 2D incompressible Navier-Stokes equations. The discretization error estimation results compared favorably to Richardson extrapolation and had the advantage of only requiring a single grid to be generated.
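The Richardson-extrapolation baseline that MNP is compared against can be sketched in a few lines. The example below is purely illustrative (the trapezoidal rule and the integrand are our own choices, not the authors' code): it estimates the discretization error of a fine-grid solution from coarse- and fine-grid solutions, using the known order of accuracy p = 2 and refinement ratio r = 2.

```python
import math

def trapezoid(f, a, b, n):
    """Composite trapezoidal rule with n subintervals."""
    h = (b - a) / n
    ys = [f(a + i * h) for i in range(n + 1)]
    return h * (sum(ys) - 0.5 * (ys[0] + ys[-1]))

exact = 2.0                                    # integral of sin over [0, pi]
f_h = trapezoid(math.sin, 0.0, math.pi, 64)    # fine-grid solution
f_2h = trapezoid(math.sin, 0.0, math.pi, 32)   # coarse-grid solution (r = 2)

# Richardson estimate of the fine-grid discretization error (order p = 2):
# err ~ (f_h - f_2h) / (r**p - 1)
est_error = (f_h - f_2h) / (2 ** 2 - 1)
true_error = exact - f_h
print(est_error, true_error)   # the two agree closely for smooth problems
```

Note the cost difference the abstract highlights: this estimator needs two grids, whereas MNP/defect correction needs only one additional solve on the same grid.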
Can I just check...? Effects of edit check questions on measurement error and survey estimates
Lugtig, Peter; Jäckle, Annette
2014-01-01
Household income is difficult to measure, since it requires the collection of information about all potential income sources for each member of a household. We assess the effects of two types of edit check questions on measurement error and survey estimates: within-wave edit checks use responses to
Optimal error estimates for Fourier spectral approximation of the generalized KdV equation
Institute of Scientific and Technical Information of China (English)
Zhen-guo DENG; He-ping MA
2009-01-01
A Fourier spectral method for the generalized Korteweg-de Vries equation with periodic boundary conditions is analyzed, and a corresponding optimal error estimate in the L2-norm is obtained. It improves the result presented by Maday and Quarteroni. A modified Fourier pseudospectral method is also presented, with the same convergence properties as the Fourier spectral method.
Locatelli, R.; Bousquet, P.; Chevallier, F.; Fortems-Cheney, A.; Szopa, S.; Saunois, M.; Agusti-Panareda, A.; Bergmann, D.; Bian, H.; Cameron-Smith, P.; Chipperfield, M.P.; Gloor, E.; Houweling, S.; Kawa, S.R.; Krol, M.C.; Patra, P.K.; Prinn, R.G.; Rigby, M.; Saito, R.; Wilson, C.
2013-01-01
A modelling experiment has been conceived to assess the impact of transport model errors on methane emissions estimated in an atmospheric inversion system. Synthetic methane observations, obtained from 10 different model outputs from the international TransCom-CH4 model inter-comparison exercise, ar
On the a priori estimation of collocation error covariance functions: a feasibility study
DEFF Research Database (Denmark)
Arabelos, D.N.; Forsberg, René; Tscherning, C.C.
2007-01-01
Error covariance estimates are necessary information for the combination of solutions resulting from different kinds of data or methods, or for the assimilation of new results in already existing solutions. Such a combination or assimilation process demands proper weighting of the data, in order ...
Huang, Weidong
2011-01-01
This paper presents a general equation for calculating the standard deviation of the reflected-ray error from the optical error through geometric optics, applies the equation to eight kinds of concentrated solar reflector, and provides typical results. The results indicate that, for solar trough and heliostat reflectors, the slope errors in the two directions are transferred to any one direction of the focused ray when the incidence angle is greater than 0; for the point-focus Fresnel lens, the point-focus parabolic glass mirror, and the line-focus parabolic glass mirror, the error-transfer coefficient from the optical error to the focused ray increases as the rim angle increases; for the TIR-R concentrator it decreases; and for the glass heliostat it depends on the incidence angle and the azimuth of the reflecting point. Keywords: optical error, standard deviation, reflected ray error, concentrated solar collector
Estimation of contaminant subslab concentration in petroleum vapor intrusion
Yao, Yijun; Yang, Fangxing; Suuberg, Eric M.; Provoost, Jeroen; Liu, Weiping
2014-01-01
In this study, the development and partial validation are presented for an analytical approximation method for predicting subslab contaminant concentrations in petroleum vapor intrusion (PVI). The method combines an analytic approximation to soil vapor transport with a piecewise first-order biodegradation model (together called the analytic approximation method including biodegradation, AAMB), the result of which provides an estimate of contaminant subslab concentrations, independent of buildin...
Carroll, Raymond J.
2011-03-01
In many applications we can expect that, or are interested to know if, a density function or a regression curve satisfies some specific shape constraints. For example, when the explanatory variable, X, represents the value taken by a treatment or dosage, the conditional mean of the response, Y , is often anticipated to be a monotone function of X. Indeed, if this regression mean is not monotone (in the appropriate direction) then the medical or commercial value of the treatment is likely to be significantly curtailed, at least for values of X that lie beyond the point at which monotonicity fails. In the case of a density, common shape constraints include log-concavity and unimodality. If we can correctly guess the shape of a curve, then nonparametric estimators can be improved by taking this information into account. Addressing such problems requires a method for testing the hypothesis that the curve of interest satisfies a shape constraint, and, if the conclusion of the test is positive, a technique for estimating the curve subject to the constraint. Nonparametric methodology for solving these problems already exists, but only in cases where the covariates are observed precisely. However in many problems, data can only be observed with measurement errors, and the methods employed in the error-free case typically do not carry over to this error context. In this paper we develop a novel approach to hypothesis testing and function estimation under shape constraints, which is valid in the context of measurement errors. Our method is based on tilting an estimator of the density or the regression mean until it satisfies the shape constraint, and we take as our test statistic the distance through which it is tilted. Bootstrap methods are used to calibrate the test. The constrained curve estimators that we develop are also based on tilting, and in that context our work has points of contact with methodology in the error-free case.
Institute of Scientific and Technical Information of China (English)
Zhi-jia LIN; Zhuo ZHUANG BU
2014-01-01
An enriched goal-oriented error estimation method with extended degrees of freedom is developed to estimate the error in the continuum-based shell extended finite element method. It leads to high-quality local error bounds in three-dimensional fracture mechanics simulation, which involves enrichments to resolve the singularity at the crack tip. This enriched goal-oriented error estimation provides a means to evaluate the continuum-based shell extended finite element simulation. By comparing the reliability of the stress intensity factor calculation in stretching and bending, the accuracy of the continuum-based shell extended finite element simulation is evaluated, and the sources of error are discussed.
Estimation of radon concentration in dwellings in and around Guwahati
Indian Academy of Sciences (India)
Gautam Kumar Dey; Projit Kumar Das
2012-02-01
It has been established that radon and its airborne decay products can present serious radiation hazards. A long term exposure to high concentration of radon causes lung cancer. Besides, it is also known that out of the total radiation dose received from natural and man-made sources, 60% of the dose is due to radon and its progeny. Taking this into account, an attempt has been made to estimate radon concentration in dwellings in and around Guwahati using aluminium dosimeter cups with CR-39 plastic detectors. Results of preliminary investigation presented in this paper show that the mean concentration is 21.31 Bq m−3.
Wang, Wentao
2012-03-01
Both theoretical analysis and nonlinear 2D numerical simulations are used to study the concentration difference and Peclet number effect on the measurement error of electroosmotic mobility in microchannels. We propose a compact analytical model for this error as a function of normalized concentration difference and Peclet number in micro electroosmotic flow. The analytical predictions of the errors are consistent with the numerical simulations. © 2012 IEEE.
Otterson, D. A.; Seng, G. T.
1985-01-01
A high-performance liquid chromatography (HPLC) method to estimate four aromatic classes in middistillate fuels is presented. Average refractive indices are used in a correlation to obtain the concentrations of each of the aromatic classes from HPLC data. The aromatic class concentrations can be obtained in about 15 min when the concentration of the aromatic group is known. Seven fuels with a wide range of compositions were used to test the method. Relative errors in the concentrations of the two major aromatic classes were not over 10 percent. Absolute errors for the minor classes were all less than 0.3 percent. The data show that errors in group-type analyses using sulfuric acid derived standards are greater for fuels containing high concentrations of polycyclic aromatics. Corrections are based on the change in refractive index of the aromatic fraction which can occur when sulfuric acid and the fuel react. These corrections improved both the precision and the accuracy of the group-type results.
Improving MIMO-OFDM decision-directed channel estimation by utilizing error-correcting codes
Directory of Open Access Journals (Sweden)
P. Beinschob
2009-05-01
Full Text Available In this paper a decision-directed Multiple-Input Multiple-Output (MIMO) channel tracking algorithm is enhanced to raise the channel estimate accuracy. While decision-directed channel estimation (DDCE) is prone to error propagation, the enhancement employs channel decoding in the tracking process. A quantized block of symbols is checked for consistency via the channel decoder, possibly corrected, and then used. This yields a more robust tracking of the channel in terms of bit error rate and improves the channel estimate under certain conditions.
Equalization is performed to prove the feasibility of the obtained channel estimate. To this end, a combined signal consisting of data and pilot symbols is sent. Adaptive filters are applied to exploit correlations in the time, frequency and spatial domains. By using good error-correcting coding schemes like Turbo codes or Low-Density Parity-Check (LDPC) codes, adequate channel estimates can be acquired even at low signal-to-noise ratios (SNR). The proposed algorithm, among two others, is applied for channel estimation and equalization, and the results are compared.
Higher Order Mean Squared Error of Generalized Method of Moments Estimators for Nonlinear Models
Directory of Open Access Journals (Sweden)
Yi Hu
2014-01-01
Full Text Available Generalized method of moments (GMM) has been widely applied for the estimation of nonlinear models in economics and finance. Although GMM has good asymptotic properties under fairly moderate regularity conditions, its finite sample performance is not very good. In order to improve the finite sample performance of GMM estimators, this paper studies the higher-order mean squared error of two-step efficient GMM estimators for nonlinear models. Specifically, we consider a general nonlinear regression model with endogeneity and derive the higher-order asymptotic mean squared error of the two-step efficient GMM estimator for this model using iterative techniques and higher-order asymptotic theories. Our theoretical results allow the number of moments to grow with sample size, and are suitable for general moment restriction models, which contain conditional moment restriction models as special cases. The higher-order mean squared error can be used to compare different estimators and to construct selection criteria for improving an estimator's finite sample performance.
Stochastic error whitening algorithm for linear filter estimation with noisy data.
Rao, Yadunandana N; Erdogmus, Deniz; Rao, Geetha Y; Principe, Jose C
2003-01-01
Mean squared error (MSE) has been the most widely used tool to solve the linear filter estimation or system identification problem. However, MSE gives biased results when the input signals are noisy. This paper presents a novel stochastic gradient algorithm based on the recently proposed error whitening criterion (EWC) to tackle the problem of linear filter estimation in the presence of additive white disturbances. We will briefly motivate the theory behind the new criterion and derive an online stochastic gradient algorithm. Convergence proof of the stochastic gradient algorithm is derived making mild assumptions. Further, we will propose some extensions to the stochastic gradient algorithm to ensure faster, step-size independent convergence. We will perform extensive simulations and compare the results with MSE as well as total-least squares in a parameter estimation problem. The stochastic EWC algorithm has many potential applications. We will use this in designing robust inverse controllers with noisy data.
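The bias of MSE with noisy inputs is easy to reproduce numerically. The sketch below illustrates the problem that EWC is designed to address (it is not the EWC algorithm itself, and the gain, noise level, and sample size are our own choices): fitting a scalar gain by least squares when the input is observed with additive white noise shrinks the estimate by the factor var(x)/(var(x)+var(noise)).

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
x = rng.standard_normal(n)              # clean input, unit variance
y = 2.0 * x                             # true system gain w = 2
x_obs = x + rng.normal(0.0, 0.5, n)     # observed input, noise variance 0.25

# Ordinary least-squares gain estimate from the noisy input
w_mse = (x_obs @ y) / (x_obs @ x_obs)
print(round(w_mse, 2))                  # attenuated toward 2 * 1/(1 + 0.25) = 1.6
```

Total least squares and the error whitening criterion are two ways of removing this attenuation; plain MSE cannot, no matter how many samples are used.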
A fast algorithm for the estimation of statistical error in DNS (or experimental) time averages
Russo, Serena; Luchini, Paolo
2017-10-01
Time- and space-averaging of the instantaneous results of DNS (or experimental measurements) represent a standard final step, necessary for the estimation of their means, correlations or other statistical properties. These averages are necessarily performed over a finite time and space window, and are therefore more correctly just estimates of the 'true' statistical averages. The choice of the appropriate window size is most often subjectively based on individual experience, but as subtler statistics enter the focus of investigation, an objective criterion becomes desirable. Here a modification of the classical estimator of the averaging error of finite time series, the 'batch means' algorithm, will be presented, which retains its speed while removing its biasing error. As a side benefit, an automatic determination of batch size is also included. Examples will be given involving both an artificial time series of known statistics and an actual DNS of turbulence.
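The unmodified classical batch-means estimator that serves as the paper's starting point fits in a few lines. The sketch below is our own minimal version (the batch count and the AR(1) test series are illustrative choices): it shows how the batch-means standard error correctly exceeds the naive i.i.d. formula on a correlated series.

```python
import numpy as np

def batch_means_stderr(x, n_batches=32):
    """Classical batch-means estimate of the standard error of mean(x)
    for a stationary, correlated time series."""
    m = len(x) // n_batches                            # batch length
    means = x[: m * n_batches].reshape(n_batches, m).mean(axis=1)
    return np.sqrt(means.var(ddof=1) / n_batches)

# Strongly correlated AR(1) test series with known mean 0
rng = np.random.default_rng(1)
x = np.empty(200_000)
x[0] = 0.0
for t in range(1, x.size):
    x[t] = 0.9 * x[t - 1] + rng.standard_normal()

naive = x.std(ddof=1) / np.sqrt(x.size)   # i.i.d. formula, too optimistic here
print(batch_means_stderr(x), naive)       # batch-means value is several times larger
```

The bias the paper removes appears when batches are short relative to the correlation time; here the batches are long, so the classical estimator is already close to correct.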
Admissibilities of linear estimator in a class of linear models with a multivariate t error variable
Institute of Scientific and Technical Information of China (English)
(no author listed)
2010-01-01
This paper discusses admissibilities of estimators in a class of linear models, which includes the following common models: the univariate and multivariate linear models, the growth curve model, the extended growth curve model, the seemingly unrelated regression equations, the variance components model, and so on. It is proved that admissible estimators of functions of the regression coefficient β in the class of linear models with multivariate t error terms, called Model II, are also admissible in the case that the error terms have a multivariate normal distribution, under a strictly convex loss function or a matrix loss function. It is also proved under Model II that the usual estimators of β are admissible for p ≤ 2 with a quadratic loss function, and are admissible for any p with a matrix loss function, where p is the dimension of β.
Error Estimates for Approximate Solutions of the Riccati Equation with Real or Complex Potentials
Finster, Felix; Smoller, Joel
2010-09-01
A method is presented for obtaining rigorous error estimates for approximate solutions of the Riccati equation, with real or complex potentials. Our main tool is to derive invariant region estimates for complex solutions of the Riccati equation. We explain the general strategy for applying these estimates and illustrate the method in typical examples, where the approximate solutions are obtained by gluing together WKB and Airy solutions of corresponding one-dimensional Schrödinger equations. Our method is motivated by, and has applications to, the analysis of linear wave equations in the geometry of a rotating black hole.
Accurate and fast methods to estimate the population mutation rate from error prone sequences
Directory of Open Access Journals (Sweden)
Miyamoto Michael M
2009-08-01
Full Text Available Abstract Background The population mutation rate (θ) remains one of the most fundamental parameters in genetics, ecology, and evolutionary biology. However, its accurate estimation can be seriously compromised when working with error prone data such as expressed sequence tags, low coverage draft sequences, and other such unfinished products. This study is premised on the simple idea that a random sequence error due to a chance accident during data collection or recording will be distributed within a population dataset as a singleton (i.e., as a polymorphic site where one sampled sequence exhibits a unique base relative to the common nucleotide of the others). Thus, one can avoid these random errors by ignoring the singletons within a dataset. Results This strategy is implemented under an infinite sites model that focuses on only the internal branches of the sample genealogy, where a shared polymorphism can arise (i.e., a variable site where each alternative base is represented by at least two sequences). This approach is first used to derive independently the same new Watterson and Tajima estimators of θ, as recently reported by Achaz [1] for error prone sequences. It is then used to modify the recent, full, maximum-likelihood model of Knudsen and Miyamoto [2], which incorporates various factors for experimental error and design with those for coalescence and mutation. These new methods are all accurate and fast according to evolutionary simulations and analyses of a real complex population dataset for the California sea hare. Conclusion In light of these results, we recommend the use of these three new methods for the determination of θ from error prone sequences. In particular, we advocate the new maximum likelihood model as a starting point for the further development of more complex coalescent/mutation models that also account for experimental error and design.
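The singleton-exclusion idea has a direct arithmetic reading under the infinite-sites model: a site with derived-allele count i among n sequences is expected θ/i times, so dropping the singleton class (i = 1) drops the leading term of the harmonic sum. The sketch below is our own rendering of a Watterson-type estimator along these lines (not the authors' code, and the sample numbers are hypothetical):

```python
def watterson_theta(n_seq, n_seg_sites, n_singletons=0, drop_singletons=False):
    """Watterson-type estimator of the population mutation rate theta.

    With drop_singletons=True, singleton sites (candidate sequencing
    errors) are excluded and the harmonic sum starts at i = 2.
    """
    if drop_singletons:
        a = sum(1.0 / i for i in range(2, n_seq))   # a_n minus the i = 1 term
        return (n_seg_sites - n_singletons) / a
    a = sum(1.0 / i for i in range(1, n_seq))       # classical a_n
    return n_seg_sites / a

# Hypothetical sample: 10 sequences, 50 segregating sites, 12 singletons
print(watterson_theta(10, 50, 12))                        # ordinary estimate
print(watterson_theta(10, 50, 12, drop_singletons=True))  # error-robust estimate
```

If the singleton excess is genuinely sequencing error, the singleton-free estimate is the less biased of the two; with error-free data the two should agree in expectation.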
Energy Technology Data Exchange (ETDEWEB)
Jang, Seunghyun; Jae, Moosung [Hanyang University, Seoul (Korea, Republic of)
2016-10-15
Human failure events (HFEs) are considered in the development of system fault trees as well as accident sequence event trees as part of Probabilistic Safety Assessment (PSA). Several methods for analyzing human error, such as the Technique for Human Error Rate Prediction (THERP), Human Cognitive Reliability (HCR), and Standardized Plant Analysis Risk-Human Reliability Analysis (SPAR-H), are in use, and new methods for human reliability analysis (HRA) are currently under development. This paper presents a dynamic HRA method for assessing human failure events, and estimation of the human error probability for the filtered containment venting system (FCVS) is performed. The action associated with implementation of containment venting during a station blackout sequence is used as an example. In this report, the dynamic HRA method was used to analyze the FCVS-related operator action. The distributions of the required time and the available time were developed by the MAAP code and LHS sampling. Though the numerical calculations given here are only for illustrative purposes, the dynamic HRA method can be a useful tool for estimating human error probabilities and can be applied to any kind of operator action, including the severe accident management strategy.
Estimates of Mode-S EHS aircraft-derived wind observation errors using triple collocation
de Haan, Siebren
2016-08-01
Information on the accuracy of meteorological observations is essential to assess the applicability of the measurements. In general, accuracy information is difficult to obtain in operational situations, since the truth is unknown. One method to determine this accuracy is by comparison with the model equivalent of the observation. The advantage of this method is that all measured parameters can be evaluated, from 2 m temperature observations to satellite radiances. The drawback is that these comparisons also contain the (unknown) model error. By applying the so-called triple-collocation method to two independent observations at the same location in space and time, combined with model output, and assuming uncorrelated errors, the three error variances can be estimated. This method is applied in this study to estimate wind observation errors from aircraft, obtained utilizing information from air traffic control surveillance radar with Selective Mode Enhanced Surveillance capabilities (Mode-S EHS). Radial wind measurements from Doppler weather radar and wind vector measurements from sodar, together with equivalents from a non-hydrostatic numerical weather prediction model, are used to assess the accuracy of the Mode-S EHS wind observations. The Mode-S EHS wind (zonal and meridional) observation error is estimated to be less than 1.4 ± 0.1 m s-1 near the surface and around 1.1 ± 0.3 m s-1 at 500 hPa.
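The core triple-collocation computation is compact: under the assumptions of additive, mutually uncorrelated errors and a common calibration, each system's error variance equals the covariance of the two difference series that share its error. The synthetic wind-like series below are our own illustration (not the study's data):

```python
import numpy as np

def triple_collocation(x, y, z):
    """Error variances of three collocated measurement series,
    assuming additive, mutually uncorrelated, zero-mean errors."""
    vx = np.cov(x - y, x - z)[0, 1]   # only x's error is common to both differences
    vy = np.cov(y - x, y - z)[0, 1]
    vz = np.cov(z - x, z - y)[0, 1]
    return vx, vy, vz

rng = np.random.default_rng(2)
truth = 3.0 * rng.standard_normal(100_000)       # common true signal
x = truth + rng.normal(0.0, 1.0, truth.size)     # system 1: error sd 1.0
y = truth + rng.normal(0.0, 1.4, truth.size)     # system 2: error sd 1.4
z = truth + rng.normal(0.0, 0.5, truth.size)     # system 3: error sd 0.5
print([round(v, 2) for v in triple_collocation(x, y, z)])
```

The truth series cancels from each covariance in expectation, which is why no reference "true wind" is ever needed; only the independence of the three error sources matters.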
Entropy-Based TOA Estimation and SVM-Based Ranging Error Mitigation in UWB Ranging Systems.
Yin, Zhendong; Cui, Kai; Wu, Zhilu; Yin, Liang
2015-05-21
The major challenges for Ultra-wide Band (UWB) indoor ranging systems are the dense multipath and non-line-of-sight (NLOS) problems of the indoor environment. To precisely estimate the time of arrival (TOA) of the first path (FP) in such a poor environment, a novel approach of entropy-based TOA estimation and support vector machine (SVM) regression-based ranging error mitigation is proposed in this paper. The proposed method can estimate the TOA precisely by measuring the randomness of the received signals and mitigate the ranging error without the recognition of the channel conditions. The entropy is used to measure the randomness of the received signals and the FP can be determined by the decision of the sample which is followed by a great entropy decrease. The SVM regression is employed to perform the ranging-error mitigation by the modeling of the regressor between the characteristics of received signals and the ranging error. The presented numerical simulation results show that the proposed approach achieves significant performance improvements in the CM1 to CM4 channels of the IEEE 802.15.4a standard, as compared to conventional approaches.
Energy Technology Data Exchange (ETDEWEB)
Kunin, Victor; Engelbrektson, Anna; Ochman, Howard; Hugenholtz, Philip
2009-08-01
Massively parallel pyrosequencing of the small subunit (16S) ribosomal RNA gene has revealed that the extent of rare microbial populations in several environments, the 'rare biosphere', is orders of magnitude higher than previously thought. One important caveat with this method is that sequencing error could artificially inflate diversity estimates. Although the per-base error of 16S rDNA amplicon pyrosequencing has been shown to be as good as or lower than Sanger sequencing, no direct assessments of pyrosequencing errors on diversity estimates have been reported. Using only Escherichia coli MG1655 as a reference template, we find that 16S rDNA diversity is grossly overestimated unless relatively stringent read quality filtering and low clustering thresholds are applied. In particular, the common practice of removing reads with unresolved bases and anomalous read lengths is insufficient to ensure accurate estimates of microbial diversity. Furthermore, common and reproducible homopolymer length errors can result in relatively abundant spurious phylotypes further confounding data interpretation. We suggest that stringent quality-based trimming of 16S pyrotags and clustering thresholds no greater than 97% identity should be used to avoid overestimates of the rare biosphere.
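The effect of the clustering threshold on diversity estimates can be shown with a toy greedy clusterer. This is purely illustrative (real pipelines use alignment-based clustering tools; the equal-length reads and error model below are synthetic): reads from a single template inflate the phylotype count at 100% identity but collapse to one cluster at a 97% threshold.

```python
import random

def greedy_clusters(reads, identity):
    """Count clusters among equal-length reads, assigning each read to the
    first representative it matches at or above the identity threshold."""
    reps = []
    for r in reads:
        for rep in reps:
            same = sum(a == b for a, b in zip(r, rep))
            if same / len(r) >= identity:
                break
        else:
            reps.append(r)          # no representative matched: new cluster
    return len(reps)

random.seed(3)
ref = "".join(random.choice("ACGT") for _ in range(250))
reads = []
for _ in range(200):                # one true template, error-bearing reads
    r = list(ref)
    for pos in random.sample(range(len(r)), random.randint(0, 3)):
        r[pos] = random.choice("ACGT")   # up to 3 random base substitutions
    reads.append("".join(r))

print(greedy_clusters(reads, identity=1.00))  # errors inflate the count
print(greedy_clusters(reads, identity=0.97))  # collapses to one phylotype
```

This mirrors the abstract's recommendation: without quality filtering, only a clustering threshold loose enough to absorb per-read errors keeps spurious phylotypes out of the diversity estimate.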
Estimating pole/zero errors in GSN-IRIS/USGS network calibration metadata
Ringler, A.T.; Hutt, C.R.; Aster, R.; Bolton, H.; Gee, L.S.; Storm, T.
2012-01-01
Mapping the digital record of a seismograph into true ground motion requires the correction of the data by some description of the instrument's response. For the Global Seismographic Network (Butler et al., 2004), as well as many other networks, this instrument response is represented as a Laplace domain pole–zero model and published in the Standard for the Exchange of Earthquake Data (SEED) format. This Laplace representation assumes that the seismometer behaves as a linear system, with any abrupt changes described adequately via multiple time-invariant epochs. The SEED format allows for published instrument response errors as well, but these typically have not been estimated or provided to users. We present an iterative three-step method to estimate the instrument response parameters (poles and zeros) and their associated errors using random calibration signals. First, we solve a coarse nonlinear inverse problem using a least-squares grid search to yield a first approximation to the solution. This approach reduces the likelihood of poorly estimated parameters (a local-minimum solution) caused by noise in the calibration records and enhances algorithm convergence. Second, we iteratively solve a nonlinear parameter estimation problem to obtain the least-squares best-fit Laplace pole–zero–gain model. Third, by applying the central limit theorem, we estimate the errors in this pole–zero model by solving the inverse problem at each frequency in a two-thirds octave band centered at each best-fit pole–zero frequency. This procedure yields error estimates of the 99% confidence interval. We demonstrate the method by applying it to a number of recent Incorporated Research Institutions in Seismology/United States Geological Survey (IRIS/USGS) network calibrations (network code IU).
Directory of Open Access Journals (Sweden)
Kim Hyang-Mi
2012-09-01
Full Text Available Abstract Background In epidemiological studies, it is often not possible to measure accurately the exposures of participants even if their response variable can be measured without error. When there are several groups of subjects, occupational epidemiologists employ a group-based strategy (GBS) for exposure assessment to reduce bias due to measurement errors: individuals of a group/job within the study sample are commonly assigned the sample mean of exposure measurements from their group in evaluating the effect of exposure on the response. Therefore, exposure is estimated on an ecological level while health outcomes are ascertained for each subject. Such a study design leads to negligible bias in risk estimates when group means are estimated from ‘large’ samples. However, in many cases, only a small number of observations are available to estimate the group means, and this causes bias in the observed exposure-disease association. Also, the analysis in a semi-ecological design may involve exposure data with the majority missing and the rest observed with measurement errors, and complete response data collected with ascertainment. Methods In workplaces, groups/jobs are naturally ordered, and this can be incorporated in the estimation procedure by constrained estimation methods together with expectation and maximization (EM) algorithms for regression models having measurement error and missing values. Four methods were compared by a simulation study: naive complete-case analysis, GBS, the constrained GBS (CGBS), and the constrained expectation and maximization (CEM) method. We illustrate the methods in the analysis of decline in lung function due to exposures to carbon black. Results Naive and GBS approaches were shown to be inadequate when the number of exposure measurements is too small to accurately estimate group means. The CEM method appears to be best among them when within each exposure group at least a ‘moderate’ number of individuals have their
LSTA, Rawane Samb
2010-01-01
This thesis deals with the nonparametric estimation of the density f of the regression error term E in the model Y=m(X)+E, assuming its independence of the covariate X. The difficulty of this study lies in the fact that the regression error E is not observed. In such a setup, it would be unwise, for estimating f, to use a conditional approach based upon the probability distribution function of Y given X. Indeed, this approach is affected by the curse of dimensionality, so that the resulting estimator of the residual term E would have a considerably slow rate of convergence if the dimension of X is very high. Two approaches are proposed in this thesis to avoid the curse of dimensionality. The first approach uses the estimated residuals, while the second integrates a nonparametric conditional density estimator of Y given X. Although proceeding this way can circumvent the curse of dimensionality, a challenging issue is to evaluate the impact of the estimated residuals on the final estimator of the density f. We will also at...
Lowenthal, Douglas H.; Hanumara, R. Choudary; Rahn, Kenneth A.; Currie, Lloyd A.
The Quail Roost II synthetic data set II was used to derive a comprehensive method of estimating uncertainties for chemical mass balance (CMB) apportionments. Collinearity-diagnostic procedures were applied to CMB apportionments of data set II to identify seriously collinear source profiles and evaluate the effects of the degree of collinearity on source-strength estimates and their uncertainties. Fractional uncertainties of CMB estimates were up to three times higher for collinear source profiles than for independent ones. A theoretical analysis of CMB results for synthetic data set II led to the following general conclusions about CMB methodology. Uncertainties for average estimated source strengths will be unrealistically low unless sources whose estimates are constrained to zero are included when calculating uncertainties. Covariance in source-strength estimates is caused by collinearity and systematic errors in source specification and composition. Propagated uncertainties may be underestimated unless covariances as well as variances of estimates are included. Apportioning the average aerosol will account for systematic errors only when the correct model is known, when measurement uncertainties in ambient and source-profile data are realistic, and when the source profiles are not collinear.
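At its core, a chemical mass balance apportionment is a least-squares solve of measured ambient species concentrations against a matrix of source profiles, and collinear profiles make that solve ill-conditioned, which is what inflates the uncertainties discussed above. The sketch below uses hypothetical two-source, three-species profiles (not data from the study) and a noise-free solve:

```python
import numpy as np

# Rows: chemical species; columns: source profiles (mass fractions)
F = np.array([[0.30, 0.02],
              [0.05, 0.25],
              [0.10, 0.10]])
s_true = np.array([4.0, 2.0])          # hypothetical source strengths
c = F @ s_true                         # noise-free ambient concentrations

s_hat, *_ = np.linalg.lstsq(F, c, rcond=None)
print(np.round(s_hat, 2))              # recovers [4. 2.]

# Collinearity diagnostic: condition number of the profile matrix.
# Near-parallel source profiles drive this up and inflate the
# variances (and covariances) of the estimated source strengths.
print(round(np.linalg.cond(F), 1))
```

With measurement noise added to c, the estimate variance scales with the square of this condition number along the worst direction, which is the mechanism behind the up-to-threefold uncertainty inflation reported for collinear profiles.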
A residual-based a posteriori error estimator for single-phase Darcy flow in fractured porous media
Chen, Huangxin
2016-12-09
In this paper we develop an a posteriori error estimator for a mixed finite element method for single-phase Darcy flow in two-dimensional fractured porous media. The discrete fracture model is applied to represent the fractures as one-dimensional fractures in a two-dimensional domain. We consider the Raviart–Thomas mixed finite element method for the approximation of the coupled Darcy flows in the fractures and the surrounding porous media. We derive a robust residual-based a posteriori error estimator for the problem with non-intersecting fractures. The reliability and efficiency of the a posteriori error estimator are established for the error measured in an energy norm. Numerical results verifying the robustness of the proposed a posteriori error estimator are given. Moreover, our numerical results indicate that the a posteriori error estimator also works well for the problem with intersecting fractures.
A TOA-AOA-Based NLOS Error Mitigation Method for Location Estimation
Directory of Open Access Journals (Sweden)
Tianshuang Qiu
2007-12-01
Full Text Available This paper proposes a geometric method to locate a mobile station (MS) in a mobile cellular network when both the range and angle measurements are corrupted by non-line-of-sight (NLOS) errors. The MS location is restricted to an enclosed region by geometric constraints from the temporal-spatial characteristics of the radio propagation channel. A closed-form equation of the MS position, time of arrival (TOA), angle of arrival (AOA), and angle spread is provided. The solution space of the equation is very large because the angle spreads are random variables in nature. A constrained objective function is constructed to further limit the MS position. A Lagrange multiplier-based solution and a numerical solution are proposed to resolve the MS position. The estimation quality of the estimator in terms of "biased" or "unbiased" is discussed. The scale factors, which may be used to evaluate the NLOS propagation level, can be estimated by the proposed method. The AOA seen at base stations may be corrected to some degree. The performance comparisons among the proposed method and other hybrid location methods are investigated on different NLOS error models and with two scenarios of cell layout. It is found that the proposed method can deal with NLOS errors effectively, and it is attractive for location estimation in cellular networks.
Bias Errors due to Leakage Effects When Estimating Frequency Response Functions
Directory of Open Access Journals (Sweden)
Andreas Josefsson
2012-01-01
Full Text Available Frequency response functions are often utilized to characterize a system's dynamic response. For a wide range of engineering applications, it is desirable to determine frequency response functions for a system under stochastic excitation. In practice, the measurement data are contaminated by noise, and some form of averaging is needed in order to obtain a consistent estimator. With Welch's method, the discrete Fourier transform is used and the data are segmented into smaller blocks so that averaging can be performed when estimating the spectrum. However, this segmentation introduces leakage effects. As a result, the estimated frequency response function suffers from both systematic (bias) and random errors due to leakage. In this paper the bias errors in the H1 and H2 estimates are studied, and a new method is proposed to derive an approximate expression for the relative bias error at the resonance frequency with different window functions. The method is based on using a sum of real exponentials to describe the window's deterministic autocorrelation function. Simple expressions are derived for a rectangular window and a Hanning window. The theoretical expressions are verified with numerical simulations, and very good agreement is found between the results from the proposed bias expressions and the empirical results.
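The leakage bias described above is easy to reproduce with standard tools. The sketch below (illustrative system and settings, not from the paper) estimates H1 with Welch's method for a lightly damped resonance and shows the peak magnitude being biased low when the block length, and hence the frequency resolution, is too coarse:

```python
import numpy as np
from scipy import signal

rng = np.random.default_rng(1)
fs = 1024.0
x = rng.standard_normal(200_000)             # stochastic excitation

# Lightly damped resonance: unit gain at 100 Hz, Q = 50 (bandwidth ~2 Hz).
b, a = signal.iirpeak(w0=100.0, Q=50.0, fs=fs)
y = signal.lfilter(b, a, x)

peaks = {}
for nperseg in (256, 4096):                  # short vs long Welch blocks
    f, Sxy = signal.csd(x, y, fs=fs, window="hann", nperseg=nperseg)
    _, Sxx = signal.welch(x, fs=fs, window="hann", nperseg=nperseg)
    H1 = Sxy / Sxx                           # H1 estimate of the FRF
    peaks[nperseg] = np.abs(H1[np.argmin(np.abs(f - 100.0))])
print(peaks)                                 # short blocks bias the peak low
```

With 256-point blocks the resolution (4 Hz) is wider than the resonance bandwidth, so leakage smears the peak; 4096-point blocks recover a peak magnitude much closer to the true value of 1.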
An Error-Reduction Algorithm to Improve Lidar Turbulence Estimates for Wind Energy
Energy Technology Data Exchange (ETDEWEB)
Newman, Jennifer F.; Clifton, Andrew
2016-08-01
Currently, cup anemometers on meteorological (met) towers are used to measure wind speeds and turbulence intensity to make decisions about wind turbine class and site suitability. However, as modern turbine hub heights increase and wind energy expands to complex and remote sites, it becomes more difficult and costly to install met towers at potential sites. As a result, remote sensing devices (e.g., lidars) are now commonly used by wind farm managers and researchers to estimate the flow field at heights spanned by a turbine. While lidars can accurately estimate mean wind speeds and wind directions, there is still a large amount of uncertainty surrounding the measurement of turbulence with lidars. This uncertainty in lidar turbulence measurements is one of the key roadblocks that must be overcome in order to replace met towers with lidars for wind energy applications. In this talk, a model for reducing errors in lidar turbulence estimates is presented. Techniques for reducing errors from instrument noise, volume averaging, and variance contamination are combined in the model to produce a corrected value of the turbulence intensity (TI), a commonly used parameter in wind energy. In the next step of the model, machine learning techniques are used to further decrease the error in lidar TI estimates.
A TOA-AOA-Based NLOS Error Mitigation Method for Location Estimation
Tang, Hong; Park, Yongwan; Qiu, Tianshuang
2007-12-01
This paper proposes a geometric method to locate a mobile station (MS) in a mobile cellular network when both the range and angle measurements are corrupted by non-line-of-sight (NLOS) errors. The MS location is restricted to an enclosed region by geometric constraints from the temporal-spatial characteristics of the radio propagation channel. A closed-form equation of the MS position, time of arrival (TOA), angle of arrival (AOA), and angle spread is provided. The solution space of the equation is very large because the angle spreads are random variables in nature. A constrained objective function is constructed to further limit the MS position. A Lagrange multiplier-based solution and a numerical solution are proposed to resolve the MS position. The estimation quality of the estimator in term of "biased" or "unbiased" is discussed. The scale factors, which may be used to evaluate NLOS propagation level, can be estimated by the proposed method. AOA seen at base stations may be corrected to some degree. The performance comparisons among the proposed method and other hybrid location methods are investigated on different NLOS error models and with two scenarios of cell layout. It is found that the proposed method can deal with NLOS error effectively, and it is attractive for location estimation in cellular networks.
Budka, Marcin; Gabrys, Bogdan
2013-01-01
Estimation of the generalization ability of a classification or regression model is an important issue, as it indicates the expected performance on previously unseen data and is also used for model selection. Currently used generalization error estimation procedures, such as cross-validation (CV) or bootstrap, are stochastic and, thus, require multiple repetitions in order to produce reliable results, which can be computationally expensive, if not prohibitive. The correntropy-inspired density-preserving sampling (DPS) procedure proposed in this paper eliminates the need for repeating the error estimation procedure by dividing the available data into subsets that are guaranteed to be representative of the input dataset. This allows the production of low-variance error estimates with an accuracy comparable to 10 times repeated CV at a fraction of the computations required by CV. This method can also be used for model ranking and selection. This paper derives the DPS procedure and investigates its usability and performance using a set of public benchmark datasets and standard classifiers.
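The variance argument behind such a deterministic, distribution-matched split can be illustrated with a toy comparison, where simple quantile stratification stands in for the DPS procedure (this sketches the principle only, not the published algorithm):

```python
import numpy as np

rng = np.random.default_rng(9)
x = rng.exponential(1.0, 10_000)             # skewed data set

def random_subset_mean(x, k):
    # plain random subsampling, as in a single CV fold
    return rng.choice(x, k, replace=False).mean()

def stratified_subset_mean(x, k):
    # One point drawn per quantile stratum, so the subset tracks the
    # distribution of the full data set by construction.
    xs = np.sort(x)
    stratum = len(x) // k
    idx = np.arange(k) * stratum + rng.integers(0, stratum, k)
    return xs[idx].mean()

rnd = np.std([random_subset_mean(x, 100) for _ in range(500)])
strat = np.std([stratified_subset_mean(x, 100) for _ in range(500)])
print(rnd, strat)                            # stratified subsets vary far less
```

Subsets that are representative by construction give much lower-variance statistics than random ones, which is the same reason a single DPS split can replace many repeated CV runs.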
Quantifying and controlling biases in dark matter halo concentration estimates
Poveda-Ruiz, C N; Muñoz-Cuartas, J C
2016-01-01
We use bootstrapping to estimate the bias of concentration estimates for N-body dark matter halos as a function of particle number. We find that algorithms based on the maximum radial velocity and radial particle binning tend to overestimate the concentration by 15%-20% for halos sampled with 200 particles and by 7%-10% for halos sampled with 500 particles. To control this bias at low particle numbers we propose a new algorithm that estimates halo concentrations based on the integrated mass profile. The method uses the full particle information without any binning, making it reliable in cases when low numerical resolution becomes a limitation for other methods. This method reduces the bias to less than 3% for halos sampled with 200-500 particles. The velocity and density methods have to use halos with at least 4000 particles in order to keep the biases down to the same low level. We also show that the mass-concentration relationship could be shallower than expected once the biases of the different concentrat...
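A minimal version of the binning-free idea reads off the concentration by matching the empirical cumulative mass profile of the particles to the analytic NFW enclosed-mass fraction. The sketch below uses a synthetic halo and assumes an NFW profile; it illustrates the principle, not the authors' code:

```python
import numpy as np
from scipy.optimize import minimize_scalar

def nfw_mass_frac(x, c):
    """Enclosed NFW mass fraction at radius x = r / R_vir for concentration c."""
    mu = lambda t: np.log(1.0 + t) - t / (1.0 + t)
    return mu(c * x) / mu(c)

# Draw particle radii from an NFW halo with true concentration c = 8
# by inverse-transform sampling on the cumulative mass profile.
rng = np.random.default_rng(2)
c_true, n_part = 8.0, 500
grid = np.linspace(1e-3, 1.0, 4000)
cdf = nfw_mass_frac(grid, c_true)
radii = np.interp(rng.uniform(size=n_part), cdf, grid)

# Fit without binning: compare the empirical cumulative mass (sorted radii)
# to the model enclosed-mass fraction.
r_sorted = np.sort(radii)
m_emp = np.arange(1, n_part + 1) / n_part

def cost(c):
    return np.sum((nfw_mass_frac(r_sorted, c) - m_emp) ** 2)

c_hat = minimize_scalar(cost, bounds=(1.0, 30.0), method="bounded").x
print(c_hat)                                 # close to 8 for this sampling
```

Because every particle contributes a point of the cumulative profile, no resolution is lost to radial bins, which is what keeps the estimator usable at 200-500 particles.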
Error Estimate and Adaptive Refinement in Mixed Discrete Least Squares Meshless Method
Directory of Open Access Journals (Sweden)
J. Amani
2014-01-01
Full Text Available The node-moving and multistage node-enrichment adaptive refinement procedures are extended to the mixed discrete least squares meshless (MDLSM) method for efficient analysis of elasticity problems. In the formulation of the MDLSM method, a mixed formulation is adopted to avoid second-order differentiation of shape functions and to obtain displacements and stresses simultaneously. In the refinement procedures, a robust error estimator is used that is based on the value of the least-squares residual functional of the governing differential equations and their boundary conditions at nodal points; it is inherently available from the MDLSM formulation and can efficiently identify the zones with higher numerical errors. The results are compared with the refinement procedures in the irreducible formulation of the discrete least squares meshless (DLSM) method and show the accuracy and efficiency of the proposed procedures. The comparison of the error norms and convergence rates also shows the fidelity of the proposed adaptive refinement procedures in the MDLSM method.
Stroberg, Wylie; Schnell, Santiago
2016-12-01
The conditions under which the Michaelis-Menten equation accurately captures the steady-state kinetics of a simple enzyme-catalyzed reaction are contrasted with the conditions under which the same equation can be used to estimate parameters, KM and V, from progress curve data. Validity of the underlying assumptions leading to the Michaelis-Menten equation is shown to be necessary, but not sufficient, to guarantee accurate estimation of KM and V. Detailed error analysis and numerical "experiments" show the required experimental conditions for the independent estimation of both KM and V from progress curves. A timescale, tQ, measuring the portion of the time course over which the progress curve exhibits substantial curvature provides a novel criterion for accurate estimation of KM and V from a progress curve experiment. It is found that, if the initial substrate concentration is of the same order of magnitude as KM, the estimated values of KM and V will correspond to their true values calculated from the microscopic rate constants of the corresponding mass-action system only so long as the initial enzyme concentration is less than KM. Copyright © 2016 Elsevier B.V. All rights reserved.
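The estimation problem can be sketched by fitting the integrated Michaelis-Menten rate law to a synthetic progress curve in the well-conditioned regime identified above (initial substrate of the same order as KM); all parameter values here are illustrative:

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import curve_fit

def progress(t, s0, V, KM):
    """Substrate concentration s(t) under ds/dt = -V*s/(KM + s)."""
    sol = solve_ivp(lambda _, s: -V * s / (KM + s), (0.0, t[-1]), [s0],
                    t_eval=t, rtol=1e-8, atol=1e-10)
    return sol.y[0]

V_true, KM_true, s0 = 1.0, 2.0, 2.0          # s0 ~ KM: good estimability regime
t = np.linspace(0.0, 10.0, 50)
rng = np.random.default_rng(3)
data = progress(t, s0, V_true, KM_true) + rng.normal(0.0, 0.01, t.size)

popt, _ = curve_fit(lambda t, V, KM: progress(t, s0, V, KM),
                    t, data, p0=[0.5, 1.0], bounds=(1e-6, np.inf))
print(popt)                                  # estimates of [V, KM]
```

With this choice of s0 the progress curve shows substantial curvature over the time course, which is the condition the paper identifies for KM and V to be independently estimable.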
Zhu, Fangqiang; Hummer, Gerhard
2012-02-05
The weighted histogram analysis method (WHAM) has become the standard technique for the analysis of umbrella sampling simulations. In this article, we address the challenges (1) of obtaining fast and accurate solutions of the coupled nonlinear WHAM equations, (2) of quantifying the statistical errors of the resulting free energies, (3) of diagnosing possible systematic errors, and (4) of optimally allocating the computational resources. Traditionally, the WHAM equations are solved by a fixed-point direct iteration method, despite poor convergence and possible numerical inaccuracies in the solutions. Here, we instead solve the mathematically equivalent problem of maximizing a target likelihood function, by using superlinear numerical optimization algorithms with a significantly faster convergence rate. To estimate the statistical errors in one-dimensional free energy profiles obtained from WHAM, we note that for densely spaced umbrella windows with harmonic biasing potentials, the WHAM free energy profile can be approximated by a coarse-grained free energy obtained by integrating the mean restraining forces. The statistical errors of the coarse-grained free energies can be estimated straightforwardly and then used for the WHAM results. A generalization to multidimensional WHAM is described. We also propose two simple statistical criteria to test the consistency between the histograms of adjacent umbrella windows, which help identify inadequate sampling and hysteresis in the degrees of freedom orthogonal to the reaction coordinate. Together, the estimates of the statistical errors and the diagnostics of inconsistencies in the potentials of mean force provide a basis for the efficient allocation of computational resources in free energy simulations.
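For reference, the traditional fixed-point (direct) iteration that the paper improves upon fits in a few lines. The sketch below runs WHAM on synthetic umbrella-sampling histograms for U(x)/kT = x²; everything here is illustrative, and the paper's point is precisely that superlinear optimizers converge much faster than this iteration:

```python
import numpy as np

rng = np.random.default_rng(4)
kT, k_spring = 1.0, 10.0
centers = np.linspace(-2.0, 2.0, 9)            # umbrella window centers
n = 5000                                       # samples per window

# For U(x)/kT = x^2 with a harmonic bias (kT = 1), each biased window is
# exactly Gaussian, so the biased distributions can be sampled directly.
means = k_spring * centers / (2.0 + k_spring)
std = np.sqrt(kT / (2.0 + k_spring))
samples = [rng.normal(m, std, n) for m in means]

edges = np.linspace(-2.2, 2.2, 89)
mids = 0.5 * (edges[:-1] + edges[1:])
N = np.array([np.histogram(s, edges)[0] for s in samples])     # (win, bin)
c = np.exp(-0.5 * k_spring * (mids[None, :] - centers[:, None])**2 / kT)

f = np.zeros(len(centers))                     # window free energies
for _ in range(2000):                          # fixed-point direct iteration
    denom = ((n * np.exp(f))[:, None] * c).sum(axis=0)
    P = N.sum(axis=0) / denom                  # unbiased bin probabilities
    f_new = -np.log(c @ P)
    f_new -= f_new[0]
    if np.max(np.abs(f_new - f)) < 1e-10:
        f = f_new
        break
    f = f_new

F = -kT * np.log(P)                            # free energy profile
F -= F.min()
print(F[np.argmin(np.abs(mids - 1.0))])        # ≈ 1.0 for U(x)/kT = x^2
```

Even on this tiny, well-overlapped problem the direct iteration needs many sweeps to converge, which motivates recasting WHAM as likelihood maximization with superlinear optimizers.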
Probability density function and estimation for error of digitized map coordinates in GIS
Institute of Scientific and Technical Information of China (English)
童小华; 刘大杰
2004-01-01
Traditionally, it is widely accepted that measurement error obeys the normal distribution. In this paper, however, a new idea is proposed: the error in digitized data, a major derived data source in GIS, does not obey the normal distribution but rather the p-norm distribution with a determinate parameter. Assuming that the error is random and has the same statistical properties, the probability density functions of the normal distribution, the Laplace distribution, and the p-norm distribution are derived based on the arithmetic-mean axiom, the median axiom, and the p-median axiom, which shows that the normal distribution is only one of these distributions and not the only candidate. Based on this idea, distribution goodness-of-fit tests, such as the skewness and kurtosis coefficient tests, the Pearson chi-square (χ²) test, and the Kolmogorov test, are conducted for digitized data. The results show that the error in map digitization obeys the p-norm distribution with a parameter close to 1.60. A least p-norm estimation and the least squares estimation of digitized data are further analyzed, showing that the least p-norm adjustment is better than the least squares adjustment for digitized data processing in GIS.
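A least p-norm location estimate of the kind compared above can be computed by direct minimization; the toy data and setup below are illustrative, with p = 1.6 taken from the fitted parameter reported in the abstract:

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(5)
data = 10.0 + rng.laplace(0.0, 1.0, 1000)    # heavier-tailed than Gaussian

def least_p_norm(x, p):
    """Location estimate minimizing the sum of |x - m|^p residuals."""
    res = minimize_scalar(lambda m: np.sum(np.abs(x - m) ** p),
                          bounds=(x.min(), x.max()), method="bounded")
    return res.x

mu_l2 = data.mean()                          # p = 2: least squares
mu_lp = least_p_norm(data, 1.6)              # p = 1.6: least p-norm
print(mu_l2, mu_lp)
```

For near-Laplace errors such as those reported for digitized coordinates, the p = 1.6 estimator down-weights large residuals relative to least squares, which is the sense in which the least p-norm adjustment is preferable.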
Hellander, Andreas; Lawson, Michael J.; Drawert, Brian; Petzold, Linda
2014-06-01
The efficiency of exact simulation methods for the reaction-diffusion master equation (RDME) is severely limited by the large number of diffusion events if the mesh is fine or if diffusion constants are large. Furthermore, inherent properties of exact kinetic-Monte Carlo simulation methods limit the efficiency of parallel implementations. Several approximate and hybrid methods have appeared that enable more efficient simulation of the RDME. A common feature to most of them is that they rely on splitting the system into its reaction and diffusion parts and updating them sequentially over a discrete timestep. This use of operator splitting enables more efficient simulation but it comes at the price of a temporal discretization error that depends on the size of the timestep. So far, existing methods have not attempted to estimate or control this error in a systematic manner. This makes the solvers hard to use for practitioners since they must guess an appropriate timestep. It also makes the solvers potentially less efficient than if the timesteps were adapted to control the error. Here, we derive estimates of the local error and propose a strategy to adaptively select the timestep when the RDME is simulated via a first order operator splitting. While the strategy is general and applicable to a wide range of approximate and hybrid methods, we exemplify it here by extending a previously published approximate method, the diffusive finite-state projection (DFSP) method, to incorporate temporal adaptivity.
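The local-error-controlled splitting strategy can be sketched generically: compare one full splitting step against two half steps, accept when the difference is below tolerance, and adapt the step size. The toy problem below uses matrix exponentials of made-up "reaction" and "diffusion" operators rather than the RDME itself:

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[-1.0, 0.2], [0.1, -0.5]])    # stand-in "reaction" operator
B = np.array([[-0.3, 0.3], [0.3, -0.3]])    # stand-in "diffusion" operator

def lie_step(u, dt):
    """One first-order Lie (operator) splitting step: B then A."""
    return expm(A * dt) @ (expm(B * dt) @ u)

def adaptive_split(u, t_end, tol=1e-5):
    t, dt = 0.0, 0.1
    while t < t_end - 1e-12:
        dt = min(dt, t_end - t)
        full = lie_step(u, dt)
        half = lie_step(lie_step(u, dt / 2.0), dt / 2.0)
        err = np.linalg.norm(full - half)    # local splitting-error estimate
        if err <= tol:                       # accept the (more accurate) step
            u, t = half, t + dt
        # local error of a first-order split scales as dt^2, hence the sqrt
        dt *= min(2.0, 0.9 * np.sqrt(tol / max(err, 1e-14)))
    return u

u0 = np.array([1.0, 0.0])
u = adaptive_split(u0, 1.0)
exact = expm((A + B) * 1.0) @ u0             # unsplit reference solution
print(np.linalg.norm(u - exact))
```

This is the step-doubling flavor of local error control; the paper derives sharper analytical estimates for the RDME, but the accept/adapt loop has the same structure.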
Hellander, Andreas; Lawson, Michael J; Drawert, Brian; Petzold, Linda
2015-01-01
The efficiency of exact simulation methods for the reaction-diffusion master equation (RDME) is severely limited by the large number of diffusion events if the mesh is fine or if diffusion constants are large. Furthermore, inherent properties of exact kinetic-Monte Carlo simulation methods limit the efficiency of parallel implementations. Several approximate and hybrid methods have appeared that enable more efficient simulation of the RDME. A common feature to most of them is that they rely on splitting the system into its reaction and diffusion parts and updating them sequentially over a discrete timestep. This use of operator splitting enables more efficient simulation but it comes at the price of a temporal discretization error that depends on the size of the timestep. So far, existing methods have not attempted to estimate or control this error in a systematic manner. This makes the solvers hard to use for practitioners since they must guess an appropriate timestep. It also makes the solvers potentially less efficient than if the timesteps are adapted to control the error. Here, we derive estimates of the local error and propose a strategy to adaptively select the timestep when the RDME is simulated via a first order operator splitting. While the strategy is general and applicable to a wide range of approximate and hybrid methods, we exemplify it here by extending a previously published approximate method, the Diffusive Finite-State Projection (DFSP) method, to incorporate temporal adaptivity. PMID:26865735
Lu, Dan; Ye, Ming; Meyer, Philip D.; Curtis, Gary P.; Shi, Xiaoqing; Niu, Xu-Feng; Yabusaki, Steve B.
2013-01-01
When conducting model averaging for assessing groundwater conceptual model uncertainty, the averaging weights are often evaluated using model selection criteria such as AIC, AICc, BIC, and KIC (Akaike Information Criterion, Corrected Akaike Information Criterion, Bayesian Information Criterion, and Kashyap Information Criterion, respectively). However, this method often leads to an unrealistic situation in which the best model receives overwhelmingly large averaging weight (close to 100%), which cannot be justified by available data and knowledge. It was found in this study that this problem was caused by using the covariance matrix, CE, of measurement errors for estimating the negative log likelihood function common to all the model selection criteria. This problem can be resolved by using the covariance matrix, Cek, of total errors (including model errors and measurement errors) to account for the correlation between the total errors. An iterative two-stage method was developed in the context of maximum likelihood inverse modeling to iteratively infer the unknown Cek from the residuals during model calibration. The inferred Cek was then used in the evaluation of model selection criteria and model averaging weights. While this method was limited to serial data using time series techniques in this study, it can be extended to spatial data using geostatistical techniques. The method was first evaluated in a synthetic study and then applied to an experimental study, in which alternative surface complexation models were developed to simulate column experiments of uranium reactive transport. It was found that the total errors of the alternative models were temporally correlated due to the model errors. The iterative two-stage method using Cek resolved the problem that the best model receives 100% model averaging weight, and the resulting model averaging weights were supported by the calibration results and physical understanding of the alternative models. Using Cek
Energy Technology Data Exchange (ETDEWEB)
Lu, Dan; Ye, Ming; Meyer, Philip D.; Curtis, Gary P.; Shi, Xiaoqing; Niu, Xu-Feng; Yabusaki, Steven B.
2013-07-23
When conducting model averaging for assessing groundwater conceptual model uncertainty, the averaging weights are often evaluated using model selection criteria such as AIC, AICc, BIC, and KIC (Akaike Information Criterion, Corrected Akaike Information Criterion, Bayesian Information Criterion, and Kashyap Information Criterion, respectively). However, this method often leads to an unrealistic situation in which the best model receives overwhelmingly large averaging weight (close to 100%), which cannot be justified by available data and knowledge. It was found in this study that this problem was caused by using the covariance matrix, CE, of measurement errors for estimating the negative log likelihood function common to all the model selection criteria. This problem can be resolved by using the covariance matrix, Cek, of total errors (including model errors and measurement errors) to account for the correlation between the total errors. An iterative two-stage method was developed in the context of maximum likelihood inverse modeling to iteratively infer the unknown Cek from the residuals during model calibration. The inferred Cek was then used in the evaluation of model selection criteria and model averaging weights. While this method was limited to serial data using time series techniques in this study, it can be extended to spatial data using geostatistical techniques. The method was first evaluated in a synthetic study and then applied to an experimental study, in which alternative surface complexation models were developed to simulate column experiments of uranium reactive transport. It was found that the total errors of the alternative models were temporally correlated due to the model errors. The iterative two-stage method using Cek resolved the problem that the best model receives 100% model averaging weight, and the resulting model averaging weights were supported by the calibration results and physical understanding of the alternative models. Using Cek
Lu, Dan; Ye, Ming; Meyer, Philip D.; Curtis, Gary P.; Shi, Xiaoqing; Niu, Xu-Feng; Yabusaki, Steve B.
2013-09-01
When conducting model averaging for assessing groundwater conceptual model uncertainty, the averaging weights are often evaluated using model selection criteria such as AIC, AICc, BIC, and KIC (Akaike Information Criterion, Corrected Akaike Information Criterion, Bayesian Information Criterion, and Kashyap Information Criterion, respectively). However, this method often leads to an unrealistic situation in which the best model receives overwhelmingly large averaging weight (close to 100%), which cannot be justified by available data and knowledge. It was found in this study that this problem was caused by using the covariance matrix, Cɛ, of measurement errors for estimating the negative log likelihood function common to all the model selection criteria. This problem can be resolved by using the covariance matrix, Cek, of total errors (including model errors and measurement errors) to account for the correlation between the total errors. An iterative two-stage method was developed in the context of maximum likelihood inverse modeling to iteratively infer the unknown Cek from the residuals during model calibration. The inferred Cek was then used in the evaluation of model selection criteria and model averaging weights. While this method was limited to serial data using time series techniques in this study, it can be extended to spatial data using geostatistical techniques. The method was first evaluated in a synthetic study and then applied to an experimental study, in which alternative surface complexation models were developed to simulate column experiments of uranium reactive transport. It was found that the total errors of the alternative models were temporally correlated due to the model errors. The iterative two-stage method using Cek resolved the problem that the best model receives 100% model averaging weight, and the resulting model averaging weights were supported by the calibration results and physical understanding of the alternative models. Using Cek
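The weight-concentration behaviour described above follows directly from how information-criterion averaging weights are computed; a small sketch with hypothetical criterion values:

```python
import numpy as np

def ic_weights(ic):
    """Model-averaging weights w_k proportional to exp(-ΔIC_k / 2)."""
    d = np.asarray(ic, dtype=float) - np.min(ic)
    w = np.exp(-0.5 * d)
    return w / w.sum()

# Hypothetical AIC values for four alternative conceptual models.
print(ic_weights([120.0, 130.0, 135.0, 150.0]))
```

Because the weights depend exponentially on the criterion differences, a ΔAIC of only 10 already hands the best model about 99% of the weight, which is the "overwhelmingly large averaging weight" problem the total-error covariance Cek is introduced to correct.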
Bogner, K.; Pappenberger, F.
2011-07-01
River discharge predictions often show errors that degrade the quality of forecasts. Three different methods of error correction are compared, namely, an autoregressive model with and without exogenous input (ARX and AR, respectively), and a method based on wavelet transforms. For the wavelet method, a Vector-Autoregressive model with exogenous input (VARX) is simultaneously fitted for the different levels of wavelet decomposition; after predicting the next time steps for each scale, a reconstruction formula is applied to transform the predictions in the wavelet domain back to the original time domain. The error correction methods are combined with the Hydrological Uncertainty Processor (HUP) in order to estimate the predictive conditional distribution. For three stations along the Danube catchment, and using output from the European Flood Alert System (EFAS), we demonstrate that the method based on wavelets outperforms simpler methods and uncorrected predictions with respect to mean absolute error, Nash-Sutcliffe efficiency coefficient (and its decomposed performance criteria), informativeness score, and in particular forecast reliability. The wavelet approach efficiently accounts for forecast errors with scale properties of unknown source and statistical structure.
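The simplest of the compared schemes, an AR model of the forecast errors, can be sketched as follows (synthetic discharge signal and error process; illustrative, not the EFAS setup):

```python
import numpy as np

rng = np.random.default_rng(6)
n = 500
true = np.sin(np.arange(n) / 20.0)           # stand-in "observed discharge"

err = np.zeros(n)                            # autocorrelated forecast errors
for t in range(1, n):
    err[t] = 0.8 * err[t - 1] + rng.normal(0.0, 0.1)
forecast = true + err

e = forecast - true                          # past errors, known in hindsight
phi = np.dot(e[1:], e[:-1]) / np.dot(e[:-1], e[:-1])   # AR(1) coefficient
corrected = forecast[1:] - phi * e[:-1]      # one-step-ahead error correction

mae_raw = np.mean(np.abs(forecast[1:] - true[1:]))
mae_cor = np.mean(np.abs(corrected - true[1:]))
print(mae_raw, mae_cor)                      # correction reduces the MAE
```

The ARX and wavelet-VARX variants generalize this idea by adding exogenous inputs and by fitting the correction separately on each wavelet scale.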
Influence of Error in Estimating Anisotropy Parameters on VTI Depth Imaging
Directory of Open Access Journals (Sweden)
S. Y. Moussavi Alashloo
2016-01-01
Full Text Available Thin layers in sedimentary rocks lead to seismic anisotropy, which makes the wave velocity dependent on the propagation angle. This causes errors in seismic imaging, such as mispositioning of migrated events, if anisotropy is not accounted for. One of the challenging issues in seismic imaging is the estimation of anisotropy parameters, which usually carries error owing to several factors such as sparse data acquisition and erroneous data with a low signal-to-noise ratio. In this study, isotropic and anelliptic VTI fast-marching eikonal solvers are employed to obtain the seismic traveltimes required for a Kirchhoff depth migration algorithm. The algorithm uses only the compressional wave. Another objective is to study the influence of anisotropy errors on the imaging. Comparing the isotropic and VTI traveltimes demonstrates a considerable lateral difference between wavefronts. After Kirchhoff imaging with true anisotropy, as a reference, and with a model including error, the results show that the VTI algorithm with error in the anisotropic models produces images with only minor mispositioning, whereas the mispositioning is considerable for the isotropic algorithm, specifically in deeper parts. Furthermore, over- or underestimating the anisotropy parameters by up to 30 percent is acceptable for imaging; beyond that, considerable mispositioning occurs.
Weiss-Weinstein Family of Error Bounds for Quantum Parameter Estimation
Lu, Xiao-Ming
2015-01-01
To approach the fundamental limits on the estimation precision for random parameters in quantum systems, we propose a quantum version of the Weiss-Weinstein family of lower bounds on estimation errors. The quantum Weiss-Weinstein bounds (QWWB) include the popular quantum Cramér-Rao bound (QCRB) as a special case, and do not require the differentiability of prior distributions and conditional quantum states as the QCRB does; thus, the QWWB is a superior alternative to the QCRB. We show that the QWWB well captures the insurmountable error caused by the ambiguity of the phase in quantum states, which cannot be revealed by the QCRB. Furthermore, we use the QWWB to expose the possible shortcomings of the QCRB when the number of independent and identically distributed systems is not sufficiently large.
Error Estimates for a Semidiscrete Finite Element Method for Fractional Order Parabolic Equations
Jin, Bangti
2013-01-01
We consider the initial boundary value problem for a homogeneous time-fractional diffusion equation with an initial condition ν(x) and a homogeneous Dirichlet boundary condition in a bounded convex polygonal domain Ω. We study two semidiscrete approximation schemes, i.e., the Galerkin finite element method (FEM) and the lumped mass Galerkin FEM, using piecewise linear functions. We establish error estimates that are almost optimal with respect to the data regularity, covering both smooth and nonsmooth initial data, i.e., ν ∈ H^2(Ω) ∩ H^1_0(Ω) and ν ∈ L^2(Ω). For the lumped mass method, the optimal L^2-norm error estimate is valid only under an additional assumption on the mesh, which in two dimensions is known to be satisfied for symmetric meshes. Finally, we present some numerical results that give insight into the reliability of the theoretical study. © 2013 Society for Industrial and Applied Mathematics.
Directory of Open Access Journals (Sweden)
Yong Huang
2017-01-01
Full Text Available Relationships between the radar reflectivity factor and rainfall differ among precipitation cloud systems. In this study, cloud systems are first classified into five categories using radar and satellite data to improve the radar quantitative precipitation estimation (QPE) algorithm. Second, the errors of multi-radar QPE algorithms are assumed to differ between convective and stratiform clouds. QPE data are then derived with the Z-R relationship, Kalman filter (KF), optimum interpolation (OI), Kalman filter plus optimum interpolation (KFOI), and average calibration (AC) methods, based on an error analysis over the Huaihe River Basin. For the flood case of early July 2007, the KFOI is applied to obtain the QPE product. Applications show that the KFOI can improve the precision of precipitation estimates for multiple precipitation types.
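As a flavour of the KF component among the methods listed, a scalar Kalman filter tracking a radar-to-gauge calibration bias might look like this (fully synthetic numbers; the operational algorithm is more elaborate):

```python
import numpy as np

rng = np.random.default_rng(10)
true_bias = 1.4                         # gauge/radar ratio to be tracked
n = 200
obs = true_bias + rng.normal(0.0, 0.3, n)   # noisy gauge/radar ratios

x, P = 1.0, 1.0                         # state estimate and its variance
Q, R = 1e-4, 0.3**2                     # process and observation noise
for z in obs:
    P += Q                              # predict: bias drifts slowly
    K = P / (P + R)                     # Kalman gain
    x += K * (z - x)                    # update with the new ratio
    P *= (1.0 - K)
print(x)                                # converges toward the true bias
```

The filtered bias is then applied as a multiplicative calibration to the raw radar rainfall field; the OI and KFOI variants additionally spread gauge information spatially.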
Common phase error estimation in coherent optical OFDM systems using best-fit bounding box.
Bo, Tianwai; Chan, Chun-Kit
2016-10-17
In this paper, we investigate and characterize a new approach that adopts the best-fit bounding box method for common phase error estimation in coherent optical OFDM systems. The method is based on the calculation of the 2-D convex hull of the received signal constellation, a technique generally adopted in image processing to correct the skew of images. We further perform detailed characterizations, including root-mean-square error analysis, laser linewidth tolerance, noise tolerance, and computational complexity analysis, via numerical simulations and experiments. The results show that the proposed method achieves much improved spectral efficiency and comparable system performance relative to the pilot-aided method, while exhibiting good estimation accuracy and reduced complexity compared with the blind phase-searching method.
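The core geometric step can be sketched as follows: compute the 2-D convex hull of the received constellation and search for the box orientation with minimum area; that rotation angle estimates the common phase error. The 16-QAM toy data and brute-force angle search below are assumptions of this sketch, not the paper's implementation:

```python
import numpy as np
from scipy.spatial import ConvexHull

rng = np.random.default_rng(7)
levels = np.array([-3.0, -1.0, 1.0, 3.0])
sym = np.array([x + 1j * y for x in levels for y in levels])   # 16-QAM
n_rep = 200
noise = rng.normal(0.0, 0.1, 16 * n_rep) + 1j * rng.normal(0.0, 0.1, 16 * n_rep)
rx = (np.repeat(sym, n_rep) + noise) * np.exp(1j * np.deg2rad(7.0))  # CPE = 7°

pts = np.column_stack([rx.real, rx.imag])
hull = pts[ConvexHull(pts).vertices]         # 2-D convex hull of the signal

best_angle, best_area = 0.0, np.inf
for a in np.deg2rad(np.arange(-45.0, 45.0, 0.1)):
    c, s = np.cos(a), np.sin(a)
    rot = hull @ np.array([[c, -s], [s, c]])  # rotate the hull by -a
    area = np.ptp(rot[:, 0]) * np.ptp(rot[:, 1])
    if area < best_area:                      # best-fit (minimum-area) box
        best_area, best_angle = area, a

print(np.rad2deg(best_angle))                 # ≈ 7 degrees (mod 90°)
```

Only the few hull vertices need to be rotated at each candidate angle, which is where the complexity advantage over symbol-wise blind phase searching comes from; the quadrant-symmetric constellation leaves a 90° ambiguity that is resolved as in ordinary QAM phase recovery.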
Jin, T.; Qiu, X.; Hu, D.; Ding, C.
2017-09-01
Multichannel synthetic aperture radar (SAR) is a significant breakthrough against the inherent trade-off between high resolution and wide swath (HRWS) faced by conventional SAR. Error estimation and unambiguous reconstruction are two crucial techniques for obtaining high-quality imagery. This paper demonstrates the experimental results of these two techniques for the first Chinese dual-channel spaceborne SAR imaging. The model of the Chinese Gaofen-3 dual-channel mode is established, and the mechanism of channel mismatches is first discussed. In particular, we propose a digital beamforming (DBF) process, composed of a subspace-based error estimation algorithm and a reconstruction algorithm, applied before imaging. The results exhibit effective suppression of azimuth ambiguities with the proposed DBF process and indicate the feasibility of this technique for future HRWS SAR systems.
Kretzschmar, A; Durand, E; Maisonnasse, A; Vallon, J; Le Conte, Y
2015-06-01
A new procedure of stratified sampling is proposed in order to establish an accurate estimation of Varroa destructor populations on sticky bottom boards of the hive. It is based on spatial sampling theory, which recommends regular grid stratification in the case of a spatially structured process. Since the distribution of varroa mites on the sticky board is observed to be spatially structured, we designed a sampling scheme based on a regular grid with circles centred on each grid element. This new procedure is then compared with a former method using partially random sampling. Relative error improvements are presented on the basis of a large sample of simulated sticky boards (n=20,000), which provides a complete range of spatial structures, from a random structure to a highly frame-driven structure. The improvement in varroa mite number estimation is then measured by the percentage of counts with an error greater than a given level.
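A minimal sketch of the idea, with an assumed board size, grid, and mite distribution (not the authors' simulation set-up): count mites inside circles centred on a regular grid and scale by the sampled area fraction.

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulated sticky board (50 x 35 cm) with spatially structured mites:
# density increases linearly toward one side of the board.
W, H, n_mites = 50.0, 35.0, 1200
x = W * np.sqrt(rng.uniform(0.0, 1.0, n_mites))   # linear density in x
y = rng.uniform(0.0, H, n_mites)

# Regular-grid stratified sampling: one circle of radius r centred in each
# cell of a 10 x 7 grid; count the mites falling inside the circles.
nx, ny, r = 10, 7, 1.5
cx = (np.arange(nx) + 0.5) * W / nx
cy = (np.arange(ny) + 0.5) * H / ny
counted = sum(
    int(np.sum((x - xc) ** 2 + (y - yc) ** 2 <= r ** 2))
    for xc in cx for yc in cy
)

# Scale the circle counts by the inverse of the sampled area fraction.
sampled_frac = nx * ny * np.pi * r ** 2 / (W * H)
estimate = counted / sampled_frac
print(round(estimate))
```

Because the circles are spread evenly over the board, the estimate stays close to the true total even when the density is spatially structured, which is the advantage over partially random sampling.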
Estimation of random errors for lidar based on noise scale factor
Wang, Huan-Xue; Liu, Jian-Guo; Zhang, Tian-Shu
2015-08-01
Estimation of random errors, which are due to shot noise of photomultiplier tube (PMT) or avalanche photodiode (APD) detectors, is essential in lidar observations. Because the incident photoelectrons follow a Poisson distribution, the standard deviation of the signal is proportional to the square root of its mean value. Based on this relationship, a noise scale factor (NSF) is introduced into the estimation, which needs only a single data sample. This method avoids the disturbance of atmospheric fluctuations in the calculation of random errors. The results show that this method is feasible and reliable. Project supported by the Strategic Priority Research Program of the Chinese Academy of Sciences (Grant No. XDB05040300) and the National Natural Science Foundation of China (Grant No. 41205119).
An information-guided channel-hopping scheme for block-fading channels with estimation errors
Yang, Yuli
2010-12-01
Information-guided channel-hopping technique employing multiple transmit antennas was previously proposed for supporting high data rate transmission over fading channels. This scheme achieves higher data rates than some mature schemes, such as the well-known cyclic transmit antenna selection and space-time block coding, by exploiting the independence of the multiple channels, which effectively results in an additional information-transmitting channel. Moreover, maximum likelihood decoding may be performed by simply decoupling the signals conveyed by the different mapping methods. In this paper, we investigate the achievable spectral efficiency of this scheme in the case of channel estimation errors, with optimum pilot overhead for minimum mean-square error channel estimation, when transmitting over block-fading channels. Our numerical results further substantiate the robustness of the presented scheme, even with imperfect channel state information. ©2010 IEEE.
Improving Multiyear Ice Concentration Estimates with Reanalysis Air Temperatures
Ye, Y.; Shokr, M.; Heygster, G.; Spreen, G.
2015-12-01
Multiyear ice (MYI) characteristics can be retrieved from passive or active microwave remote sensing observations. One algorithm that combines both types of observations to identify partial concentrations of ice types (including MYI) is Environment Canada's Ice Concentration Extractor (ECICE). However, cycles of warm/cold air temperature trigger wet/refreeze cycles of the snow cover on the MYI surface. Under wet snow conditions, anomalous brightness temperature and backscatter, similar to those of first-year ice (FYI), are observed. This leads to misidentification of MYI as FYI, causing sudden drops in the estimated MYI concentration. The purpose of this study is to introduce a correction scheme to restore the MYI concentration under this condition. The correction is based on air temperature records. It utilizes the fact that warm spells in autumn last for a short period of time (a few days). The correction is applied to MYI concentration results from ECICE using an input of combined QuikSCAT and AMSR-E data acquired over the Arctic region in a series of autumn seasons from 2003 to 2008. The correction works well, replacing anomalous MYI concentrations with interpolated ones. For September of the six years, it introduces over 0.1×10⁶ km² of MYI area except for 2005. Due to the regional effect of the warm air spells, the correction could be important in operational applications where small- and meso-scale ice concentrations are crucial.
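The correction step, replacing temperature-flagged anomalous retrievals by interpolation from neighbouring days, can be sketched as follows (the threshold, series, and magnitudes are assumed, not the paper's values):

```python
import numpy as np

# Hypothetical daily series for one Arctic grid cell in autumn:
# the retrieved MYI concentration drops spuriously during a warm spell.
days = np.arange(30)
t_air = np.full(30, -12.0)          # air temperature (deg C)
t_air[12:16] = 1.5                  # short warm spell (wet snow)
retrieved = np.full(30, 0.65)       # true MYI concentration ~ constant
retrieved[12:16] = 0.15             # anomalous drop: MYI seen as FYI

# Correction sketch: flag days with near-melt air temperature and replace
# flagged retrievals by linear interpolation from unflagged neighbours.
flag = t_air > -1.0                 # assumed threshold for illustration
corrected = retrieved.copy()
corrected[flag] = np.interp(days[flag], days[~flag], retrieved[~flag])

print(corrected[12:16])             # restored to ~0.65
```

The short duration of autumn warm spells is what makes the interpolation defensible: only a few consecutive days are ever bridged.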
Houle, D; Meyer, K
2015-08-01
We explore the estimation of uncertainty in evolutionary parameters using a recently devised approach for resampling entire additive genetic variance-covariance matrices (G). Large-sample theory shows that maximum-likelihood estimates (including restricted maximum likelihood, REML) asymptotically have a multivariate normal distribution, with covariance matrix derived from the inverse of the information matrix, and mean equal to the estimated G. This suggests that sampling estimates of G from this distribution can be used to assess the variability of estimates of G, and of functions of G. We refer to this as the REML-MVN method. This has been implemented in the mixed-model program WOMBAT. Estimates of sampling variances from REML-MVN were compared to those from the parametric bootstrap and from a Bayesian Markov chain Monte Carlo (MCMC) approach (implemented in the R package MCMCglmm). We apply each approach to evolvability statistics previously estimated for a large, 20-dimensional data set for Drosophila wings. REML-MVN and MCMC sampling variances are close to those estimated with the parametric bootstrap. Both slightly underestimate the error in the best-estimated aspects of the G matrix. REML analysis supports the previous conclusion that the G matrix for this population is full rank. REML-MVN is computationally very efficient, making it an attractive alternative to both data resampling and MCMC approaches to assessing confidence in parameters of evolutionary interest. © 2015 European Society For Evolutionary Biology.
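A minimal sketch of REML-MVN sampling, with an invented 3 x 3 G estimate and a diagonal sampling covariance standing in for the inverse information matrix: draw vech(G) from its asymptotic normal distribution and propagate each draw through a function of G.

```python
import numpy as np

rng = np.random.default_rng(3)

# Invented REML output for a 3 x 3 additive genetic covariance matrix G:
# the estimate of its distinct elements vech(G), in np.tril_indices order
# (g11, g21, g22, g31, g32, g33), and their sampling covariance, which
# large-sample theory takes from the inverse of the information matrix.
g_hat = np.array([1.0, 0.3, 0.8, 0.1, 0.2, 0.6])
samp_cov = 0.002 * np.eye(6)          # assumed, for illustration

def unvech(v, d=3):
    """Rebuild a symmetric matrix from its stacked lower triangle."""
    m = np.zeros((d, d))
    m[np.tril_indices(d)] = v
    return m + np.tril(m, -1).T

# REML-MVN: draw vech(G) from its asymptotic normal distribution and push
# each draw through any function of G, here mean evolvability trace(G)/d.
draws = rng.multivariate_normal(g_hat, samp_cov, size=5000)
evolvability = np.array([np.trace(unvech(v)) / 3.0 for v in draws])
print(round(evolvability.mean(), 3), round(evolvability.std(), 3))
```

The spread of `evolvability` across draws is the REML-MVN sampling uncertainty of that statistic, obtained without refitting the model, which is why the method is so cheap compared with the bootstrap or MCMC.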
Ram Upadhayay, Hari; Bodé, Samuel; Griepentrog, Marco; Bajracharya, Roshan Man; Blake, Will; Cornelis, Wim; Boeckx, Pascal
2017-04-01
The implementation of compound-specific stable isotope (CSSI) analyses of biotracers (e.g. fatty acids, FAs) as constraints on sediment-source contributions has become increasingly relevant for understanding the origin of sediments in catchments. CSSI fingerprinting of sediment uses the CSSI signature of a biotracer as input to an isotopic mixing model (IMM) to apportion source soil contributions. So far, source studies have relied on linear mixing assumptions of the CSSI signatures of the sources to the sediment, without accounting for potential effects of source biotracer concentration. Here we evaluated the effect of FA concentrations in sources on the accuracy of source contribution estimates in artificial soil mixtures of three well-separated land-use sources. Soil samples from the land-use sources were mixed to create three groups of artificial mixtures with known source contributions. Sources and artificial mixtures were analysed for δ13C of FAs using gas chromatography-combustion-isotope ratio mass spectrometry. The source contributions to the mixtures were estimated using MixSIAR, a Bayesian isotopic mixing model, with and without concentration dependence. The concentration-dependent MixSIAR provided the closest estimates to the known artificial mixture source contributions (mean absolute error, MAE = 10.9%, and standard error, SE = 1.4%). In contrast, the concentration-independent MixSIAR with post-mixing correction of tracer proportions, based on the aggregated FA concentration of the sources, biased the source contributions (MAE = 22.0%, SE = 3.4%). This study highlights the importance of accounting for the potential effect of source FA concentrations on isotopic mixing in sediments, which adds realism to the mixing model and allows more accurate estimates of source contributions to the mixture. The potential influence of FA concentration on the CSSI signature of sediments is an important underlying factor that determines whether the isotopic signature of a given source is observable.
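The concentration dependence can be illustrated with the standard concentration-weighted mixing equation (all numbers below are assumed): the tracer signature of a mixture is weighted by the source tracer concentrations, so ignoring them biases the apportionment.

```python
import numpy as np

# Hypothetical three land-use sources: d13C of a fatty-acid tracer (permil)
# and its concentration in each source soil (ug per g soil).
d13c = np.array([-30.0, -27.0, -24.0])   # assumed source signatures
conc = np.array([5.0, 1.0, 2.0])         # FA concentration differs by source
f = np.array([0.2, 0.5, 0.3])            # true soil-mass contributions

# Concentration-dependent mixing: each source contributes tracer in
# proportion to (soil-mass fraction x tracer concentration).
w = f * conc
d13c_mix = np.sum(w * d13c) / np.sum(w)

# A concentration-independent model assumes the tracer mixes like the soil:
d13c_naive = np.sum(f * d13c)
print(round(d13c_mix, 2), round(d13c_naive, 2))
```

The two predicted mixture signatures differ by almost 1 permil here; inverting the naive equation on real sediment data would therefore return distorted source fractions, which is the bias the concentration-dependent MixSIAR removes.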
Filtering Error Estimates and Order of Accuracy via the Peano Kernel Theorem
Energy Technology Data Exchange (ETDEWEB)
Jerome Blair
2011-02-01
The Peano Kernel Theorem is introduced and a frequency domain derivation is given. It is demonstrated that the application of this theorem yields simple and accurate formulas for estimating the error introduced into a signal by filtering it to reduce noise. The concept of the order of accuracy of a filter is introduced and used as an organizing principle to compare the accuracy of different filters.
Minimization and error estimates for a class of the nonlinear Schrodinger eigenvalue problems
Institute of Scientific and Technical Information of China (English)
Murong JIANG; Jiachang SUN
2000-01-01
It is shown that the nonlinear eigenvalue problem can be transformed into a constrained functional problem. The corresponding minimal function is a weak solution of this nonlinear problem. In this paper, one type of energy functional for a class of nonlinear Schrödinger eigenvalue problems is proposed, the existence of the minimizing solution is proved, and the error estimate is given.
The Solution Structure and Error Estimation for The Generalized Linear Complementarity Problem
Directory of Open Access Journals (Sweden)
Tingfa Yan
2014-07-01
Full Text Available In this paper, we consider the generalized linear complementarity problem (GLCP). First, we develop some equivalent reformulations of the problem under milder conditions and then characterize the solution of the GLCP. Second, we establish the global error estimation for the GLCP under weaker assumptions. The results obtained in this paper can be taken as an extension of those for classical linear complementarity problems.
Directory of Open Access Journals (Sweden)
Lee HyunYoung
2010-01-01
Full Text Available We analyze discontinuous Galerkin methods with penalty terms, namely, symmetric interior penalty Galerkin methods, to solve nonlinear Sobolev equations. We construct finite element spaces on which we develop fully discrete approximations using the extrapolated Crank-Nicolson method. We adopt an appropriate elliptic-type projection, which leads to optimal error estimates of the discontinuous Galerkin approximations in both the spatial and temporal directions.
Identification of Nonlinear Rational Systems Using A Prediction-Error Estimation Algorithm
1987-01-01
Identification of discrete-time nonlinear stochastic systems which can be represented by a rational input-output model is considered. A prediction-error parameter estimation algorithm is developed, and a criterion is derived using results from the theory of hypothesis testing to determine the correct model structure. The identification of a simulated system and a heat exchanger is included to illustrate the algorithms.
A flexible error estimate for the application of centre manifold theory
Li, Zhenquan; Roberts, A. J.
2000-01-01
In applications of centre manifold theory we need more flexible error estimates than those provided by, for example, Approximation Theorem 3 of Carr (1981, 1983). Here we extend the theory to cover the case where the order of approximation in parameters and that in dynamical variables may be completely different. This allows, for example, the effective evaluation of low-dimensional dynamical models at finite parameter values.
Dreano, D.
2017-04-05
Specification and tuning of errors from dynamical models are important issues in data assimilation. In this work, we propose an iterative expectation-maximisation (EM) algorithm to estimate the model error covariances using classical extended and ensemble versions of the Kalman smoother. We show that, for additive model errors, the estimate of the error covariance converges. We also investigate other forms of model error, such as parametric or multiplicative errors, and show that additive Gaussian model error is able to compensate for non-additive sources of error in the algorithms we propose. We also demonstrate the limitations of the extended version of the algorithm and recommend the use of the more robust and flexible ensemble version. This article is a proof of concept of the methodology with the Lorenz-63 attractor. We developed an open-source Python library to enable future users to apply the algorithm to their own nonlinear dynamical models.
Estimation of contaminant subslab concentration in vapor intrusion
Yao, Yijun; Pennell, Kelly G.; Suuberg, Eric M.
2012-01-01
This study is concerned with developing a method to estimate subslab perimeter crack contaminant concentration for structures built atop a vapor source. A simple alternative to the widely-used but restrictive one-dimensional (1-D) screening models is presented and justified by comparing to predictions from a three-dimensional (3-D) CFD model. A series of simulations were prepared for steady-state transport of a non-biodegradable contaminant in homogenous soil for different structure construct...
Bell, Thomas L.; Kundu, Prasun K.; Kummerow, Christian D.; Einaudi, Franco (Technical Monitor)
2000-01-01
Quantitative use of satellite-derived maps of monthly rainfall requires some measure of the accuracy of the satellite estimates. The rainfall estimate for a given map grid box is subject to both remote-sensing error and, in the case of low-orbiting satellites, sampling error due to the limited number of observations of the grid box provided by the satellite. A simple model of rain behavior predicts that root-mean-square (RMS) random error in grid-box averages should depend in a simple way on the local average rain rate, and the predicted behavior has been seen in simulations using surface rain-gauge and radar data. This relationship was examined using satellite SSM/I data obtained over the western equatorial Pacific during TOGA COARE. RMS error inferred directly from SSM/I rainfall estimates was found to be larger than predicted from surface data, and to depend less on local rain rate than was predicted. Preliminary examination of TRMM microwave estimates shows better agreement with surface data. A simple method of estimating RMS error in satellite rainfall estimates is suggested, based on quantities that can be directly computed from the satellite data.
Eaton, Jeffrey W; Bao, Le
2017-04-01
The aim of the study was to propose and demonstrate an approach to allow additional nonsampling uncertainty about HIV prevalence measured at antenatal clinic sentinel surveillance (ANC-SS) in model-based inferences about trends in HIV incidence and prevalence. Mathematical model fitted to surveillance data with Bayesian inference. We introduce a variance inflation parameter that accounts for the uncertainty of nonsampling errors in ANC-SS prevalence; it is additive to the sampling error variance. Three approaches are tested for estimating this parameter using ANC-SS and household survey data from 40 subnational regions in nine countries in sub-Saharan Africa, as defined in UNAIDS 2016 estimates. Methods were compared using in-sample fit and out-of-sample prediction of ANC-SS data, fit to household survey prevalence data, and the computational implications. Introducing the additional variance parameter increased the error variance around ANC-SS prevalence observations by a median of 2.7 times (interquartile range 1.9-3.8). Using only sampling error in ANC-SS prevalence, coverage of 95% prediction intervals was 69% in out-of-sample prediction tests; this increased to 90% after introducing the additional variance parameter. The revised probabilistic model improved model fit to household survey prevalence and increased epidemic uncertainty intervals most during the early epidemic period before 2005. Estimating the additional variance parameter did not increase the computational cost of model fitting. We recommend estimating nonsampling error in ANC-SS as an additional parameter in Bayesian inference using the Estimation and Projection Package model. This approach may prove useful for incorporating other data sources such as routine prevalence from Prevention of
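The additive variance-inflation idea can be sketched as follows, with assumed prevalence, clinic size, and nonsampling variance (none taken from the paper): adding the extra variance term to the binomial sampling variance restores the coverage of nominal 95% intervals.

```python
import numpy as np

rng = np.random.default_rng(4)

# Assumed setting: modelled HIV prevalence at one time point, women tested
# per clinic, and a nonsampling variance term (all invented).
model_prev, n, sigma2_extra = 0.15, 300, 0.002

# True site-level prevalences scatter around the modelled value
# (nonsampling error); observations add binomial sampling error on top.
sites = model_prev + rng.normal(0.0, np.sqrt(sigma2_extra), 2000)
obs = rng.binomial(n, np.clip(sites, 0.0, 1.0)) / n

samp_var = model_prev * (1 - model_prev) / n
total_var = samp_var + sigma2_extra       # additive variance inflation

# Coverage of nominal 95% intervals, without and with the inflation term:
covers = []
for var in (samp_var, total_var):
    half = 1.96 * np.sqrt(var)
    covers.append(float(np.mean(np.abs(obs - model_prev) <= half)))
print(covers)
```

With sampling error alone the intervals are far too narrow; the inflated variance brings coverage back near the nominal level, mirroring the 69% to 90% improvement reported in the abstract.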
Lock, Jacobus C.; Smit, Willie J.; Treurnicht, Johann
2016-05-01
The Solar Thermal Energy Research Group (STERG) is investigating ways to make heliostats cheaper to reduce the total cost of a concentrating solar power (CSP) plant. One avenue of research is to use unmanned aerial vehicles (UAVs) to automate and assist with the heliostat calibration process. To do this, the pose estimation error of each UAV must be determined and integrated into a calibration procedure. A computer vision (CV) system is used to measure the pose of a quadcopter UAV. However, this CV system contains considerable measurement errors. Since this is a high-dimensional problem, a sophisticated prediction model must be used to estimate the measurement error of the CV system for any given pose measurement vector. This paper attempts to train and validate such a model with the aim of using it to determine the pose error of a quadcopter in a CSP plant setting.
Error Estimates for Finite-Element Navier-Stokes Solvers without Standard Inf-Sup Conditions
Institute of Scientific and Technical Information of China (English)
JianGuo LIU; Jie LIU; Robert L. PEGO
2009-01-01
The authors establish error estimates for recently developed finite-element methods for incompressible viscous flow in domains with no-slip boundary conditions. The methods arise by discretization of a well-posed extended Navier-Stokes dynamics for which pressure is determined from current velocity and force fields. The methods use C1 elements for velocity and C0 elements for pressure. A stability estimate is proved for a related finite-element projection method close to classical time-splitting methods of Orszag, Israeli, DeVille and Karniadakis.
Impact of channel estimation error on channel capacity of multiple input multiple output system
Institute of Scientific and Technical Information of China (English)
Anonymous
2007-01-01
In order to investigate the impact of channel estimation error on the channel capacity of a multiple input multiple output (MIMO) system, a novel method is proposed to explore the channel capacity in a correlated Rayleigh fading environment. A system model is constructed based on the channel estimation error at the receiver side. Using the properties of the Wishart distribution, the lower bound of the channel capacity is derived when the MIMO channel is of full rank. Then a method is proposed to select the optimum set of transmit antennas based on the lower bound of the mean channel capacity. The novel method can be easily implemented with low computational complexity. The simulation results show that the channel capacity of the MIMO system is sensitive to channel estimation error, and is maximized when the signal-to-noise ratio increases to a certain point. Proper selection of transmit antennas can increase the channel capacity of the MIMO system by about 1 bit/s in a flat fading environment with a rank-deficient channel matrix.
Estimating the Standard Error of the Judging in a modified-Angoff Standards Setting Procedure
Directory of Open Access Journals (Sweden)
Robert G. MacCann
2004-03-01
Full Text Available For a modified Angoff standards setting procedure, two methods of calculating the standard error of the judging were compared. The Central Limit Theorem (CLT) method is easy to calculate and uses readily available data. It estimates the variance of mean cut scores as a function of the variance of cut scores within a judging group, based on the independent judgements at Stage 1 of the process. Its theoretical drawback is that it is unable to take account of the effects of collaboration among the judges at Stages 2 and 3. The second method, an application of equipercentile (EQP) equating, relies on the selection of very large stable candidatures and the standardisation of the raw score distributions to remove effects associated with test difficulty. The standard error estimates were then empirically obtained from the mean cut score variation observed over a five-year period. For practical purposes, the two methods gave reasonable agreement, with the CLT method working well for the top band, the band that attracts most public attention. For some bands in English and Mathematics, the CLT standard error was smaller than the EQP estimate, suggesting the CLT method be used with caution as an approximate guide only.
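The CLT method amounts to the usual standard error of a mean applied to the Stage 1 judgements; a minimal sketch with invented panel ratings:

```python
import statistics

# Hypothetical Stage 1 Angoff ratings: independent cut scores (out of 100)
# from a panel of 8 judges for one performance band.
cuts = [62.0, 58.5, 65.0, 60.0, 63.5, 59.0, 61.0, 64.0]

# CLT method: the mean cut score has standard error s / sqrt(n), using the
# between-judge spread of the independent Stage 1 judgements.
n = len(cuts)
mean_cut = statistics.fmean(cuts)
se = statistics.stdev(cuts) / n ** 0.5
print(round(mean_cut, 2), round(se, 2))
```

The theoretical caveat from the abstract applies directly: after Stages 2 and 3 the judgements are no longer independent, so this formula can understate the true variability of the panel's final cut score.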
Error Estimates of the Ares I Computed Turbulent Ascent Longitudinal Aerodynamic Analysis
Abdol-Hamid, Khaled S.; Ghaffari, Farhad
2012-01-01
Numerical predictions of the longitudinal aerodynamic characteristics for the Ares I class of vehicles, along with the associated error estimate derived from an iterative convergence grid refinement, are presented. Computational results are based on an unstructured grid, Reynolds-averaged Navier-Stokes analysis. The validity of the approach to compute the associated error estimates, derived from a base grid to an extrapolated infinite-size grid, was first demonstrated on a sub-scaled wind tunnel model at representative ascent flow conditions for which experimental data existed. Such analysis at the transonic flow conditions revealed a maximum deviation of about 23% between the computed longitudinal aerodynamic coefficients with the base grid and the measured data across the entire range of roll angles. This maximum deviation from the wind tunnel data was associated with the computed normal force coefficient at the transonic flow condition and was reduced to approximately 16% based on the infinite-size grid. However, all the computed aerodynamic coefficients with the base grid at the supersonic flow conditions showed a maximum deviation of only about 8%, with that level being improved to approximately 5% for the infinite-size grid. The results and the error estimates based on the established procedure are also presented for the flight flow conditions.
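The base-grid-to-infinite-grid step is, in form, a Richardson extrapolation from a grid-convergence study; a sketch with invented coefficient values (not the Ares I data):

```python
import math

# Invented grid-convergence values of one aerodynamic coefficient on
# coarse, medium, and fine grids with a constant refinement ratio r.
f3, f2, f1 = 0.5200, 0.5100, 0.5060      # coarse -> fine (assumed values)
r = 2.0

# Observed order of convergence and Richardson extrapolation to an
# "infinite-size" grid, the basis of such iterative-convergence estimates.
p = math.log((f3 - f2) / (f2 - f1)) / math.log(r)
f_inf = f1 + (f1 - f2) / (r ** p - 1.0)

# Relative discretization-error estimate for the fine-grid solution:
rel_err = abs((f1 - f_inf) / f_inf)
print(round(p, 3), round(f_inf, 5), round(rel_err, 4))
```

The gap between the fine-grid value and the extrapolated one is the discretization-error estimate that lets base-grid deviations (e.g. 23%) be restated against the infinite-size grid (e.g. 16%).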
On Gait Analysis Estimation Errors Using Force Sensors on a Smart Rollator
Directory of Open Access Journals (Sweden)
Joaquin Ballesteros
2016-11-01
Full Text Available Gait analysis can provide valuable information on a person's condition and rehabilitation progress. Gait is typically captured using external equipment and/or wearable sensors. These tests are largely constrained to specific controlled environments. In addition, gait analysis often requires experts for calibration, operation and/or to place sensors on volunteers. Alternatively, mobility support devices like rollators can be equipped with onboard sensors to monitor gait parameters while users perform their Activities of Daily Living. Gait analysis in rollators may use odometry and force sensors in the handlebars. However, force-based estimation of gait parameters is less accurate than traditional methods, especially when rollators are not properly used. This paper presents an evaluation of force-based gait analysis using a smart rollator on different groups of users to determine when this methodology is applicable. In a second stage, the rollator is used in combination with two lab-based gait analysis systems to assess the rollator estimation error. Our results show that: (i) there is an inverse relation between the variance in the force difference between handlebars and support on the handlebars (related to the user condition) and the estimation error; and (ii) this error is lower than 10% when the variation in the force difference is above 7 N. This lower limit was exceeded by 95.83% of our challenged volunteers. In conclusion, rollators are useful for gait characterization as long as users really need the device for ambulation.
A Homogeneous Linear Estimation Method for System Error in Data Assimilation
Institute of Scientific and Technical Information of China (English)
WU Wei; WU Zengmao; GAO Shanhong; ZHENG Yi
2013-01-01
In this paper, a new bias estimation method is proposed and applied in a regional ensemble Kalman filter (EnKF) based on the Weather Research and Forecasting (WRF) Model. The method is based on a homogeneous linear bias model, and the model bias is estimated using statistics at each assimilation cycle, which differs from the state augmentation methods proposed in the previous literature. The new method provides a good estimation of the model bias for some specific variables, such as sea level pressure (SLP). A series of numerical experiments with EnKF are performed to examine the new method under a severe weather condition. Results show the positive effect of the method on the forecasting of the circulation pattern and meso-scale systems, and the reduction of analysis errors. The background error covariance structures of surface variables and the effects of model system bias on EnKF are also studied, and a new concept, 'correlation scale', is introduced. However, the new method needs further evaluation with more assimilation cases.
Sampling error study for rainfall estimate by satellite using a stochastic model
Shin, Kyung-Sup; North, Gerald R.
1988-01-01
In a parameter study of satellite orbits, sampling errors of area-time averaged rain rate due to temporal sampling by satellites were estimated. The sampling characteristics were studied by accounting for the varying visiting intervals and varying fractions of averaging area on each visit as a function of the latitude of the grid box for a range of satellite orbital parameters. The sampling errors were estimated by a simple model based on a first-order Markov process for the time series of area-averaged rain rates. For a satellite in the nominal Tropical Rainfall Measuring Mission orbit (Thiele, 1987) carrying an ideal scanning microwave radiometer for precipitation measurements, it is found that the sampling error would be about 8 to 12 percent of the estimated monthly mean rates over a grid box of 5 x 5 degrees. It is suggested that an observation system based on a low-inclination satellite combined with a sun-synchronous satellite might be the best candidate for making precipitation measurements from space.
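The first-order Markov sampling model can be illustrated by simulating an AR(1) rain-rate proxy and comparing sparsely sampled monthly means with fully sampled ones; the decorrelation time and revisit interval below are assumed round numbers, not the study's orbital parameters.

```python
import numpy as np

rng = np.random.default_rng(5)

# Assumed first-order Markov (AR(1)) proxy for hourly area-averaged rain
# rate: decorrelation time ~ 12 h, one month (720 h) per trial.
tau, n, trials = 12.0, 720, 2000
phi = np.exp(-1.0 / tau)
sigma_eps = np.sqrt(1.0 - phi ** 2)     # keeps unit process variance

x = np.empty((trials, n))
x[:, 0] = rng.standard_normal(trials)
for t in range(1, n):
    x[:, t] = phi * x[:, t - 1] + sigma_eps * rng.standard_normal(trials)

# Satellite-like temporal sampling: one visit every 12 h.
true_means = x.mean(axis=1)             # "continuously observed" mean
sampled_means = x[:, ::12].mean(axis=1)
rms_sampling_error = np.std(sampled_means - true_means)
print(round(rms_sampling_error, 3))
```

Repeating the experiment while varying the revisit interval reproduces the qualitative result of the study: the sampling error grows as visits become sparser relative to the rain-rate decorrelation time.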
Energy Technology Data Exchange (ETDEWEB)
Sakurai, Kiyoshi; Arakawa, Takuya; Yamamoto, Toshihiro; Naito, Yoshitaka [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment
1996-08-01
Estimation accuracy for subcriticality in the 'Indirect Estimation Method for Calculation Error' is expressed in the form ρ_m − ρ_c = K(γ_zc² − γ_zm²). This expression means that the estimation accuracy for subcriticality is proportional to (γ_zc² − γ_zm²), the estimation accuracy of the axial buckling. The proportionality constant K is calculated, but the influence of the uncertainty in K on the estimation accuracy for subcriticality is smaller than in the case of comparing ρ_m = −K(γ_zm² + B_z²) with the calculated ρ_c. When the values of K were calculated, sufficient estimation accuracy was maintained. If γ_zc² equals γ_zm², then ρ_c equals ρ_m. The reliability of this method is demonstrated on the basis of results calculated using MCNP 4A for four subcritical cores of TCA. (author)
Institute of Scientific and Technical Information of China (English)
Xiaobing Feng; Haijun Wu
2008-01-01
This paper develops a posteriori error estimates of residual type for conforming and mixed finite element approximations of the fourth-order Cahn-Hilliard equation u_t + Δ(εΔu − ε⁻¹f(u)) = 0. It is shown that the a posteriori error bounds depend on ε⁻¹ only through some low polynomial order, instead of an exponential order. Using these a posteriori error estimates, we construct an adaptive algorithm for computing the solution of the Cahn-Hilliard equation and its sharp interface limit, the Hele-Shaw flow. Numerical experiments are presented to show the robustness and effectiveness of the new error estimators and the proposed adaptive algorithm.
Real-time total system error estimation:Modeling and application in required navigation performance
Institute of Scientific and Technical Information of China (English)
Fu Li; Zhang Jun; Li Rui
2014-01-01
In required navigation performance (RNP), total system error (TSE) is estimated to provide a timely warning in the presence of an excessive error. In this paper, by analyzing the underlying formation mechanism, the TSE estimation is modeled as the estimation fusion of a fixed bias and a Gaussian random variable. To address the challenge of the high computational load induced by the accurate numerical method, two efficient methods are proposed for real-time application, called the circle tangent ellipse method (CTEM) and the line tangent ellipse method (LTEM), respectively. Compared with the accurate numerical method and the traditional scalar quantity summation method (SQSM), the computational load and accuracy of these four methods are extensively analyzed. The theoretical and experimental results both show that the computing time of the LTEM is approximately equal to that of the SQSM, while it is only about 1/30 and 1/6 of that of the numerical method and the CTEM, respectively. Moreover, the estimation result of the LTEM is consistent with that of the numerical method, and is more accurate than those of the SQSM and the CTEM. It is illustrated that the LTEM is quite appropriate for real-time TSE estimation in RNP applications.
A semiempirical error estimation technique for PWV derived from atmospheric radiosonde data
Castro-Almazán, Julio A.; Pérez-Jordán, Gabriel; Muñoz-Tuñón, Casiana
2016-09-01
A semiempirical method for estimating the error and optimum number of sampled levels in precipitable water vapour (PWV) determinations from atmospheric radiosoundings is proposed. Two terms have been considered: the uncertainties in the measurements and the sampling error. Also, the uncertainty has been separated into its variance and covariance components. The sampling and covariance components have been modelled from an empirical dataset of 205 high-vertical-resolution radiosounding profiles, equipped with Vaisala RS80 and RS92 sondes at four different locations: Güímar (GUI) in Tenerife, at sea level, and the astronomical observatory at Roque de los Muchachos (ORM, 2300 m a.s.l.) on La Palma (both on the Canary Islands, Spain), Lindenberg (LIN) in continental Germany, and Ny-Ålesund (NYA) in the Svalbard Islands, within the Arctic Circle. The balloons at the ORM were launched during intensive and unique site-testing runs carried out in 1990 and 1995, while the data for the other sites were obtained from radiosounding stations operating for a period of 1 year (2013-2014). The PWV values ranged between ~0.9 and ~41 mm. The method sub-samples the profile for error minimization; the result is the minimum error and the optimum number of levels. The results obtained at the four sites studied showed that the ORM is the driest of the four locations and the one with the fastest vertical decay of PWV. The exponential autocorrelation pressure lags ranged from 175 hPa (ORM) to 500 hPa (LIN). The results show a coherent behaviour with no biases as a function of the profile. The final error is roughly proportional to PWV, whereas the optimum number of levels (N0) varies inversely. The value of N0 is less than 400 for 77% of the profiles and the absolute errors are always < 0.6 mm. The median relative error is 2.0 ± 0.7% and the 90th percentile P90 = 4.6%. Therefore, whereas a radiosounding samples at least N0 uniform vertical levels, depending on the water vapour
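The PWV determination itself is a pressure integral of specific humidity, and sub-sampling the levels introduces the sampling-error term the paper models; a sketch with an assumed humidity profile (not one of the study's soundings):

```python
import numpy as np

# Assumed radiosounding: pressure levels (hPa) and a specific-humidity
# profile (kg/kg) decaying with height; PWV is the pressure integral of q.
p = np.linspace(1000.0, 100.0, 400)
q = 0.012 * np.exp((p - 1000.0) / 300.0)

g, rho_w = 9.81, 1000.0                  # m s^-2, kg m^-3

def pwv_mm(p_hpa, q_kgkg):
    """Precipitable water vapour (mm) by trapezoidal integration over p."""
    pa = p_hpa * 100.0                   # hPa -> Pa; p decreases upward
    integral = -np.sum((q_kgkg[:-1] + q_kgkg[1:]) / 2.0 * np.diff(pa))
    return 1000.0 * integral / (g * rho_w)   # m of water -> mm

full = pwv_mm(p, q)                      # densely sampled profile
sub = pwv_mm(p[::21], q[::21])           # only 20 levels: sampling error
print(round(full, 2), round(abs(sub - full), 3))
```

Comparing the coarsely sampled integral against the dense one, as done here, is the mechanism by which the method finds the minimum error and the optimum number of levels N0.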
Institute of Scientific and Technical Information of China (English)
SUN Liuquan; ZHENG Zhongguo
1999-01-01
A central limit theorem for the integrated square error (ISE) of the kernel hazard rate estimators is obtained based on left-truncated and right-censored data. An asymptotic representation of the mean integrated square error (MISE) for the kernel hazard rate estimators is also presented.
DEFF Research Database (Denmark)
Puthumana, Govindan; Bissacco, Giuliano; Hansen, Hans Nørgaard
2017-01-01
In micro-EDM milling, real-time electrode wear compensation based on tool wear per discharge (TWD) estimation permits direct control of the position of the tool electrode frontal surface. However, TWD estimation errors will cause errors in the tool electrode axial depth. A simulation tool is ...
Directory of Open Access Journals (Sweden)
Orlov A. I.
2015-05-01
Estimates of the errors of the characteristics of financial flows of investment projects are needed to make adequate management decisions, particularly in the rocket and space industry. Organizational-economic approaches to assessing the feasibility of innovation-investment projects for creating rocket and space technologies make intensive use of the numerical characteristics of the financial flows of such long-term projects. Organizational-economic support for control problems in the aerospace industry must therefore provide estimates of the errors of the characteristics of financial flows. Such estimates are an integral part of the organizational-economic support of innovation activity in the aerospace industry. They can be compared with prediction intervals, i.e. confidence estimation of predicted values; half the length of the confidence interval is the prediction error estimate. In this article we give a new method for estimating the errors of the main characteristics of investment projects, focusing on the net present value (NPV). Our method of error estimation is based on results from the statistics of interval data, which is an integral part of the system of fuzzy interval mathematics. We construct an asymptotic theory corresponding to small deviations of the discount coefficients. Up to infinitesimals of higher order, the error of the NPV is a linear function of the maximum possible error of the discount coefficients.
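The first-order error bound described above can be sketched numerically. The cash flows, discount rate, and maximum relative coefficient error below are hypothetical illustrative values, not taken from the article; since the NPV is exactly linear in the discount coefficients, the linear bound can be checked by Monte Carlo perturbation.

```python
import numpy as np

# Cash flows of a hypothetical project (years 0..5) and discount
# coefficients q_t = 1/(1+r)^t for an assumed discount rate r = 10%.
cash_flows = np.array([-100.0, 30.0, 30.0, 30.0, 30.0, 30.0])
t = np.arange(len(cash_flows))
r = 0.10
q = (1.0 + r) ** (-t)

npv = float(cash_flows @ q)

# Linear error bound: if each coefficient q_t is known only up to
# |dq_t| <= eps * q_t, then |dNPV| <= sum_t |c_t| * q_t * eps.
eps = 0.02  # assumed 2% maximum relative error of the discount coefficients
npv_error_bound = float(np.abs(cash_flows) @ q * eps)

# Monte Carlo check: perturb the coefficients within the stated bound and
# verify that the observed NPV deviations never exceed the linear bound.
rng = np.random.default_rng(0)
perturbed = q * (1.0 + rng.uniform(-eps, eps, size=(10000, len(q))))
deviations = np.abs(perturbed @ cash_flows - npv)
max_observed = float(deviations.max())
```

Because NPV is linear in the coefficients, the bound holds exactly here; for characteristics that are nonlinear in the discount coefficients, it holds only up to higher-order terms.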
Ignatova, Irina; French, Andrew S; Immonen, Esa-Ville; Frolov, Roman; Weckström, Matti
2014-06-01
Shannon's seminal approach to estimating information capacity is widely used to quantify information processing by biological systems. However, the Shannon information theory, which is based on power spectrum estimation, necessarily contains two sources of error: time delay bias error and random error. These errors are particularly important for systems with relatively large time delay values and for responses of limited duration, as is often the case in experimental work. The window function type and size chosen, as well as the values of inherent delays, cause changes in both the delay bias and random errors, with a possibly strong effect on the estimates of system properties. Here, we investigated the properties of these errors using white-noise simulations and analysis of experimental photoreceptor responses to naturalistic and white-noise light contrasts. Photoreceptors were used from several insect species, each characterized by different visual performance, behavior, and ecology. We show that the effect of random error on the spectral estimates of photoreceptor performance (gain, coherence, signal-to-noise ratio, Shannon information rate) is opposite to that of the time delay bias error: the former overestimates information rate, while the latter underestimates it. We propose a new algorithm for reducing the impact of time delay bias error and random error, based on finding, and then using, the window size at which the absolute values of these errors are equal and opposite, thus cancelling each other and allowing minimally biased measurement of neural coding.
Sparks, Lawrence
2013-01-01
Current satellite-based augmentation systems estimate ionospheric delay using algorithms that assume the electron density of the ionosphere is non-negligible only in a thin shell located near the peak of the actual profile. In its initial operating capability, for example, the Wide Area Augmentation System incorporated the thin shell model into an estimation algorithm that calculates vertical delay using a planar fit. Under disturbed conditions or at low latitude where ionospheric structure is complex, however, the thin shell approximation can serve as a significant source of estimation error. A recent upgrade of the system replaced the planar fit algorithm with an algorithm based upon kriging. The upgrade owes its success, in part, to the ability of kriging to mitigate the error due to this approximation. Previously, alternative delay estimation algorithms have been proposed that eliminate the need for invoking the thin shell model altogether. Prior analyses have compared the accuracy achieved by these methods to the accuracy achieved by the planar fit algorithm. This paper extends these analyses to include a comparison with the accuracy achieved by kriging. It concludes by examining how a satellite-based augmentation system might be implemented without recourse to the thin shell approximation.
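The kriging step mentioned above can be illustrated with a minimal ordinary-kriging sketch. The exponential covariance model, its sill and range, and the sample delays below are invented for illustration; they are not WAAS parameters or the system's actual estimation algorithm.

```python
import numpy as np

def ordinary_kriging(xy, z, xy0, sill=1.0, rng_len=10.0):
    """Ordinary kriging prediction at xy0 from samples (xy, z), using an
    assumed exponential covariance C(h) = sill * exp(-h / rng_len)."""
    n = len(z)
    d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=-1)
    C = sill * np.exp(-d / rng_len)
    # Augmented system enforcing weights that sum to 1 (unbiasedness).
    A = np.ones((n + 1, n + 1))
    A[:n, :n] = C
    A[n, n] = 0.0
    c0 = sill * np.exp(-np.linalg.norm(xy - xy0, axis=1) / rng_len)
    b = np.append(c0, 1.0)
    w = np.linalg.solve(A, b)[:n]
    return float(w @ z), w

# Example: estimate vertical delay at a pierce point from four surrounding
# grid measurements (coordinates and delay values are illustrative only).
pts = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
vals = np.array([2.0, 2.5, 3.0, 3.6])
delay_est, weights = ordinary_kriging(pts, vals, np.array([4.0, 6.0]))
```

With a positive-definite covariance and no nugget, ordinary kriging is an exact interpolator: predicting at a sample location returns that sample.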
Taki, Hirofumi; Yamakawa, Makoto; Shiina, Tsuyoshi; Sato, Toru
2015-07-01
High-accuracy ultrasound motion estimation has become an essential technique in blood flow imaging, elastography, and motion imaging of the heart wall. Speckle tracking has been one of the best motion estimators; however, conventional speckle-tracking methods neglect the effect of out-of-plane motion and deformation. Our proposed method assumes that the cross-correlation between a reference signal and a comparison signal depends on the spatio-temporal distance between the two signals. The proposed method uses the decrease in the cross-correlation value in a reference frame to compensate for the intrinsic error caused by out-of-plane motion and deformation without a priori information. The root-mean-square error of the estimated lateral tissue motion velocity calculated by the proposed method ranged from 6.4 to 34% of that using a conventional speckle-tracking method. This study demonstrates the high potential of the proposed method for improving the estimation of tissue motion using an ultrasound speckle-tracking method in medical diagnosis.
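The conventional speckle-tracking baseline that such methods improve on can be sketched in one dimension: block matching by maximizing normalized cross-correlation over candidate lags. The synthetic speckle signal, window length, and search range are hypothetical; the proposed out-of-plane compensation itself is not reproduced here.

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation of two equal-length windows."""
    a = a - a.mean()
    b = b - b.mean()
    return float((a @ b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def track_displacement(ref, cmp_, start, length, max_lag):
    """Estimate the displacement of a speckle kernel between two frames by
    maximizing the normalized cross-correlation over candidate lags."""
    kernel = ref[start:start + length]
    best_lag, best_rho = 0, -2.0
    for lag in range(-max_lag, max_lag + 1):
        s = start + lag
        if s < 0 or s + length > len(cmp_):
            continue
        rho = ncc(kernel, cmp_[s:s + length])
        if rho > best_rho:
            best_lag, best_rho = lag, rho
    return best_lag, best_rho

rng = np.random.default_rng(1)
frame0 = rng.standard_normal(300)     # synthetic speckle pattern
true_shift = 7
frame1 = np.roll(frame0, true_shift)  # purely in-plane motion, no decorrelation
lag, rho = track_displacement(frame0, frame1, start=100, length=40, max_lag=15)
```

Out-of-plane motion would lower the peak correlation value; the cited method uses that decrease to compensate the intrinsic estimation error.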
Directory of Open Access Journals (Sweden)
L.M. Pereira
2001-09-01
In this article we compare two different techniques to measure the concentration of saline solutions for the identification of the apparent mass diffusion coefficient in soils saturated with distilled water. They are the radiation measurement technique and the electrical conductivity measurement technique. These techniques are compared in terms of measured quantities, sensitivity coefficients with respect to unknown parameters and the determinant of the information matrix. The apparent mass diffusion coefficient is estimated by utilizing simulated measurements containing random errors. The Levenberg-Marquardt method of minimization of the least-squares norm is used as the parameter estimation procedure. The effects of the volume of saline solution injected into the column devised for the experiments on the accuracy of the estimated parameters are also addressed in this article.
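The Levenberg-Marquardt estimation from simulated noisy measurements can be sketched as follows. The one-dimensional exponential-decay model, noise level, and starting values are illustrative stand-ins, not the article's soil diffusion model; the solver itself is a minimal textbook implementation.

```python
import numpy as np

def levenberg_marquardt(residual, p0, n_iter=50, lam=1e-2):
    """Minimal Levenberg-Marquardt least-squares solver with a
    forward-difference Jacobian (illustrative, not production code)."""
    p = np.asarray(p0, dtype=float)
    for _ in range(n_iter):
        r = residual(p)
        J = np.empty((len(r), len(p)))
        for j in range(len(p)):
            dp = np.zeros_like(p)
            dp[j] = 1e-6 * max(1.0, abs(p[j]))
            J[:, j] = (residual(p + dp) - r) / dp[j]
        A = J.T @ J + lam * np.eye(len(p))
        step = np.linalg.solve(A, -J.T @ r)
        if np.sum(residual(p + step) ** 2) < np.sum(r ** 2):
            p = p + step
            lam *= 0.5          # accept step, move toward Gauss-Newton
        else:
            lam *= 2.0          # reject step, move toward gradient descent
    return p

# Hypothetical model: concentration decays as c(t) = c0 * exp(-D * t).
rng = np.random.default_rng(2)
t = np.linspace(0.0, 10.0, 40)
true_p = np.array([1.0, 0.3])   # assumed [c0, D]
data = true_p[0] * np.exp(-true_p[1] * t) + 0.01 * rng.standard_normal(t.size)

def residual(p):
    return p[0] * np.exp(-p[1] * t) - data

p_hat = levenberg_marquardt(residual, p0=np.array([0.5, 0.1]))
```

The damping parameter lam interpolates between Gauss-Newton (small lam) and gradient descent (large lam), which is what makes the method robust to poor starting values.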
Estimation of contaminant subslab concentration in petroleum vapor intrusion.
Yao, Yijun; Yang, Fangxing; Suuberg, Eric M; Provoost, Jeroen; Liu, Weiping
2014-08-30
In this study, the development and partial validation are presented for an analytical approximation method for predicting subslab contaminant concentrations in petroleum vapor intrusion (PVI). The method combines an analytic approximation to soil vapor transport with a piecewise first-order biodegradation model (together called the Analytic Approximation Method including Biodegradation, AAMB), and the resulting calculation provides an estimate of contaminant subslab concentrations independent of building operation conditions. Comparisons with three-dimensional (3-D) simulations and another PVI screening tool, BioVapor, show that the AAMB is suitable for application in a scenario involving a building with an impermeable foundation surrounded by open ground surface, where the atmosphere is regarded as the primary oxygen source. Predictions from the AAMB can be used to determine the required vertical source-building separation, given a subslab screening concentration, allowing identification of buildings at risk for PVI. This equation shows that the "vertical screening distance" suggested by the U.S. EPA is sufficient in most cases, as long as the total petroleum hydrocarbon (TPH) soil gas concentration at the vapor source does not exceed 50-100 mg/L. When the TPH soil gas concentration of the vapor source approaches a typical limit, i.e. 400 mg/L, the required "vertical screening distance" would be much greater.
Frommer, A; Lippert, Th; Rittich, H
2012-01-01
The Lanczos process constructs a sequence of orthonormal vectors v_m spanning a nested sequence of Krylov subspaces generated by a Hermitian matrix A and some starting vector b. In this paper we show how to cheaply recover a secondary Lanczos process, starting at an arbitrary Lanczos vector v_m, and how to use this secondary process to efficiently obtain computable error estimates and error bounds for the Lanczos approximations to a solution of a linear system Ax = b as well as, more generally, for the Lanczos approximations to the action of a rational matrix function on a vector. Our approach uses the relation between the Lanczos process and quadrature as developed by Golub and Meurant. It differs from methods known so far in its use of the secondary Lanczos process. With our approach it is now in particular possible to efficiently obtain upper bounds for the error in the 2-norm, provided a lower bound on the smallest eigenvalue of A is known. This holds for the error of the CG iterates as well ...
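A simpler relative of these quadrature-based bounds can be sketched for CG on a symmetric positive definite system: the classical delayed estimate, in the spirit of Golub and Meurant, uses the identity ||x - x_k||_A^2 = sum over j >= k of alpha_j ||r_j||^2 and truncates the sum after a few delayed steps, giving a computable lower bound. This is not the paper's secondary-Lanczos method, only the standard estimate it builds on; the test matrix is synthetic.

```python
import numpy as np

def cg_with_error_estimate(A, b, n_iter, delay=4):
    """Conjugate gradients storing the terms gamma_j = alpha_j * ||r_j||^2.
    The delayed estimate sum_{j=k}^{k+delay-1} gamma_j is a lower bound
    for the squared A-norm error ||x - x_k||_A^2 at iterate k."""
    n = len(b)
    x = np.zeros(n)
    r = b.copy()
    p = r.copy()
    gammas = []
    xs = [x.copy()]
    for _ in range(n_iter):
        Ap = A @ p
        rr = r @ r
        alpha = rr / (p @ Ap)
        gammas.append(alpha * rr)
        x = x + alpha * p
        r = r - alpha * Ap
        p = r + ((r @ r) / rr) * p
        xs.append(x.copy())
    k = n_iter - delay - 1
    est = float(sum(gammas[k:k + delay]))
    return xs[k], est, xs[-1]

rng = np.random.default_rng(3)
M = rng.standard_normal((30, 30))
A = M @ M.T + 30 * np.eye(30)     # well-conditioned SPD test matrix
b = rng.standard_normal(30)
x_exact = np.linalg.solve(A, b)
x_k, est, _ = cg_with_error_estimate(A, b, n_iter=12, delay=4)
true_err2 = float((x_exact - x_k) @ A @ (x_exact - x_k))
```

For fast-converging iterations a small delay already captures nearly all of the remaining error, which is why the lower bound is tight in practice.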
Error Estimations in an Approximation on a Compact Interval with a Wavelet Bases
Directory of Open Access Journals (Sweden)
Dr. Marco Schuchmann
2013-11-01
In an approximation with a wavelet basis we have in practice not only an error when the function y is not in Vj; there is a second error because we do not use all basis functions. If the wavelet has compact support, using only a part of all basis functions introduces no additional error. If we need an approximation on a compact interval I (which is possible even if y is not square integrable on R, since in that case it need only be square integrable on I), calculating an orthogonal projection of 1_I y onto Vj leads to worse approximations. We can get much better approximations by applying a least-squares approximation with points in I. Here we will see that this approximation can be much better than an orthogonal projection of y or 1_I y onto Vj. With the Shannon wavelet, which has no compact support, we saw in many simulations that a least-squares approximation can lead to much better results than with well-known wavelets of compact support. In this article we therefore derive an error estimate for the Shannon wavelet when not all basis coefficients are used.
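The least-squares approach on a compact interval can be sketched with Shannon scaling functions phi_{j,k}(x) = 2^{j/2} sinc(2^j x - k). The scale, interval, number of translates, and test function below are arbitrary illustrative choices, not the article's setup.

```python
import numpy as np

def sinc_basis(x, j, ks):
    """Shannon scaling functions phi_{j,k}(x) = 2^{j/2} sinc(2^j x - k),
    with numpy's normalized sinc(t) = sin(pi t) / (pi t)."""
    return np.stack([2 ** (j / 2) * np.sinc(2 ** j * x - k) for k in ks],
                    axis=1)

j = 4
x = np.linspace(0.0, 1.0, 200)          # points in the compact interval I=[0,1]
y = np.exp(-x) * np.sin(4 * np.pi * x)  # smooth test function on I
ks = np.arange(-8, 2 ** j + 8)          # translates covering I plus a margin
B = sinc_basis(x, j, ks)

# Least-squares fit on I, instead of an orthogonal projection of 1_I * y,
# so the non-compactly-supported sinc tails are fitted only where it matters.
coef, *_ = np.linalg.lstsq(B, y, rcond=None)
resid = float(np.linalg.norm(B @ coef - y) / np.linalg.norm(y))
```

The margin translates outside I give the fit the edge freedom that the orthogonal projection of the truncated function lacks.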
PEET: a Matlab tool for estimating physical gate errors in quantum information processing systems
Hocker, David; Kosut, Robert; Rabitz, Herschel
2016-09-01
A Physical Error Estimation Tool (PEET) is introduced in Matlab for predicting physical gate errors of quantum information processing (QIP) operations by constructing and then simulating gate sequences for a wide variety of user-defined, Hamiltonian-based physical systems. PEET is designed to accommodate the interdisciplinary needs of quantum computing design by assessing gate performance for users familiar with the underlying physics of QIP, as well as those interested in higher-level computing operations. The structure of PEET separates the bulk of the physical details of a system into Gate objects, while the construction of quantum computing gate operations is contained in GateSequence objects. Gate errors are estimated by Monte Carlo sampling of noisy gate operations. The main utility of PEET, though, is the implementation of QuantumControl methods that act to generate and then test gate sequence and pulse-shaping techniques for QIP performance. This work details the structure of PEET and gives instructive examples for its operation.
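Monte Carlo sampling of noisy gate operations can be illustrated for a single-qubit X-rotation whose control angle carries Gaussian noise. This is a hypothetical noise model and a plain numpy sketch, not PEET's API or object structure.

```python
import numpy as np

def rx(theta):
    """Single-qubit rotation about X."""
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -1j * s], [-1j * s, c]])

def mc_gate_error(theta_target, angle_std, n_samples=20000, seed=4):
    """Monte Carlo estimate of the average gate infidelity of an X-rotation
    with Gaussian control-angle noise, using the standard formula
    F = (|Tr(U_target^dagger U)|^2 + d) / (d^2 + d) for dimension d = 2."""
    rng = np.random.default_rng(seed)
    U0 = rx(theta_target)
    infid = 0.0
    for _ in range(n_samples):
        U = rx(theta_target + rng.normal(0.0, angle_std))
        tr = np.trace(U0.conj().T @ U)
        infid += 1.0 - (abs(tr) ** 2 + 2) / 6.0
    return infid / n_samples

err = mc_gate_error(np.pi / 2, angle_std=0.05)
```

For small angle noise delta the infidelity reduces to roughly delta^2 / 6, so a 0.05 rad noise level gives an average infidelity near 4e-4, which the sampler reproduces.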
Evaluation of the sources of error in the linepack estimation of a natural gas pipeline
Energy Technology Data Exchange (ETDEWEB)
Marco, Fabio Capelassi Gavazzi de [Transportadora Brasileira Gasoduto Bolivia-Brasil S.A. (TBG), Rio de Janeiro, RJ (Brazil)
2012-07-01
The intent of this work is to explore the behavior of the random error associated with the determination of linepack in a complex natural gas pipeline, based on the effect introduced by the uncertainty of the different variables involved. Many parameters enter the determination of the gas inventory in a transmission pipeline: geometrical (diameter, length and elevation profile), operational (pressure, temperature and gas composition), environmental (ambient / ground temperature) and those dependent on the modeling assumptions (compressibility factor and heat transfer coefficient). Due to the extent of a natural gas pipeline and the vast number of sensors involved, it is infeasible to determine analytically the magnitude of the resulting uncertainty in the linepack, so this problem has been addressed using the Monte Carlo method. The approach consists of introducing random errors in the values of pressure, temperature and gas gravity employed in the determination of the linepack and verifying their impact. Additionally, the errors associated with three different modeling assumptions used to estimate the linepack are explored. The results reveal that pressure is the most critical variable while temperature is the least critical. Among the different methods to estimate the linepack, deviations of around 1.6% were verified. (author)
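The Monte Carlo propagation can be sketched for a single segment with a simplified real-gas inventory model. The segment volume, fixed compressibility, nominal conditions, and sensor uncertainties below are invented illustrative values chosen so that, as in the study, pressure dominates and temperature contributes least.

```python
import numpy as np

def linepack_mass(P, T, G, V=1.0e4, Z=0.9):
    """Gas inventory (kg) of a pipeline segment from pressure P [Pa],
    temperature T [K] and gas gravity G (molar mass = G * 28.96 g/mol),
    using the real-gas law with an assumed fixed compressibility Z."""
    R = 8.314          # J/(mol K)
    M = G * 28.96e-3   # kg/mol
    return P * V * M / (Z * R * T)

# Nominal conditions of a hypothetical segment and 1-sigma sensor errors.
P0, T0, G0 = 70e5, 288.0, 0.6
sig_P, sig_T, sig_G = 0.25e5, 0.5, 0.002

rng = np.random.default_rng(5)
n = 50000
m = linepack_mass(P0 + sig_P * rng.standard_normal(n),
                  T0 + sig_T * rng.standard_normal(n),
                  G0 + sig_G * rng.standard_normal(n))
m0 = linepack_mass(P0, T0, G0)
rel_std = float(m.std() / m0)

# First-order check: for a product/quotient model the relative
# variances of the inputs add.
rel_pred = float(np.sqrt((sig_P / P0) ** 2 +
                         (sig_T / T0) ** 2 +
                         (sig_G / G0) ** 2))
```

For a full pipeline with many sensors the same sampling is simply repeated over every instrumented segment, which is why Monte Carlo scales where the analytic propagation does not.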
Error estimates for density-functional theory predictions of surface energy and work function
De Waele, Sam; Lejaeghere, Kurt; Sluydts, Michael; Cottenier, Stefaan
2016-12-01
Density-functional theory (DFT) predictions of materials properties are becoming ever more widespread. With increased use comes the demand for estimates of the accuracy of DFT results. In view of the importance of reliable surface properties, this work calculates surface energies and work functions for a large and diverse test set of crystalline solids. They are compared to experimental values by performing a linear regression, which results in a measure of the predictable and material-specific error of the theoretical result. Two of the most prevalent functionals, the local density approximation (LDA) and the Perdew-Burke-Ernzerhof parametrization of the generalized gradient approximation (PBE-GGA), are evaluated and compared. Both LDA and PBE-GGA are found to yield accurate work functions with error bars below 0.3 eV, rivaling the experimental precision. LDA also provides satisfactory estimates for the surface energy with error bars smaller than 10%, but PBE-GGA significantly underestimates the surface energy for materials with a large correlation energy.
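The regression procedure used to attach error bars to DFT predictions can be sketched on synthetic data. The slope, offset, and scatter below are invented; the point is the recipe: regress experiment on theory, correct the prediction by the fitted line, and report the residual standard deviation as the error bar.

```python
import numpy as np

rng = np.random.default_rng(6)
# Hypothetical test set: DFT work functions (eV) and experimental values
# that follow a linear trend with scatter, mimicking the regression setup.
wf_dft = rng.uniform(2.0, 6.0, 40)
wf_exp = 0.95 * wf_dft + 0.1 + rng.normal(0.0, 0.2, 40)

# Linear regression exp = a * dft + b; the residual standard deviation is
# the error estimate attached to a corrected DFT prediction.
A = np.vstack([wf_dft, np.ones_like(wf_dft)]).T
(a, b), *_ = np.linalg.lstsq(A, wf_exp, rcond=None)
resid = wf_exp - (a * wf_dft + b)
error_bar = float(resid.std(ddof=2))   # ddof=2: two fitted parameters

def corrected_prediction(dft_value):
    """Regression-corrected prediction with its estimated error bar."""
    return a * dft_value + b, error_bar
```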
Estimating contributions to ambient concentrations in Fort McKay
Energy Technology Data Exchange (ETDEWEB)
NONE
2005-05-15
The Trace Metal and Air Contaminants (TMAC) Working Group of the Cumulative Effects Environmental Management Association (CEMA) conducts ongoing assessments of the effects of air emissions on people living in the oil sands region of Alberta. An air emissions inventory was recently conducted by the group to identify 41 substances within the region. The inventory was then used to conduct a dispersion modelling assessment that predicted concentrations of substances in the area. Results of the modelling assessment were then used in a health risk assessment for selected community and health receptors. However, a comparison of the dispersion modelling results with available monitoring data showed disagreement, which suggested that predictions of existing and future concentrations may need improvement. This report investigated possible explanations for the differences between dispersion model predictions and monitoring data, with a particular focus on the Fort McKay area. The modelling and monitoring data were compared, and modifications to the dispersion model were recommended. Methods for developing background concentrations for the community of Fort McKay were also discussed. It was noted that emission numbers in the report were consistent with the emission inventory with the exception of nitrogen oxides (NO{sub x}). Predicted concentrations of sulphur dioxide (SO{sub 2}) and nitrogen dioxide (NO{sub 2}) in Fort McKay were also accurate. However, the modelling did not include any community emissions of particulate matter and estimated poorly both the ambient particulate concentrations at Fort McKay and the hydrogen sulfide (H{sub 2}S) concentrations. It was suggested that changes in weather during the year and the effect of unusual or upset emissions may have contributed to the differences. It was concluded that the use of seasonally variable emissions for compounds released from fugitive sources in dispersion modelling reports should be reconsidered. It was also suggested that
National Research Council Canada - National Science Library
John B Holmes; Ken G Dodds; Michael A Lee
2017-01-01
.... While various measures of connectedness have been proposed in the literature, there is general agreement that the most appropriate measure is some function of the prediction error variance-covariance matrix...
Littenberg, Tyson B; Coughlin, Scott; Kalogera, Vicky
2016-01-01
Among the most eagerly anticipated opportunities made possible by Advanced LIGO/Virgo are multimessenger observations of compact mergers. Optical counterparts may be short lived, so rapid characterization of gravitational wave (GW) events is paramount for discovering electromagnetic signatures. One way to meet the demand for rapid GW parameter estimation is to trade off accuracy for speed, using waveform models with simplified treatment of the compact objects' spin. We report on the systematic errors in GW parameter estimation suffered when using different spin approximations to recover generic signals. Component mass measurements can be biased by $>5\sigma$ using simple-precession waveforms and in excess of $20\sigma$ when non-spinning templates are employed. This suggests that electromagnetic observing campaigns should not take a strict approach to selecting which LIGO/Virgo candidates warrant follow-up observations based on low-latency mass estimates. For sky localization, we find searched areas are up to a ...
Kalton, G.
1983-01-01
A number of surveys have been conducted to study the relationship between the level of aircraft or traffic noise exposure experienced by people living in a particular area and their annoyance with it. These surveys generally employ a clustered sample design, which affects the precision of the survey estimates. Regression analysis of annoyance on noise measures and other variables is often an important component of the survey analysis. Formulae are presented for estimating the standard errors of regression coefficients and ratios of regression coefficients that are applicable with a two- or three-stage clustered sample design. Using a simple cost function, the optimum allocation of the sample across the stages of the sample design is also determined for the estimation of a regression coefficient.
Energy Technology Data Exchange (ETDEWEB)
Siegrist, R.L.
1990-01-01
Due to their widespread use throughout commerce and industry, volatile hydrocarbons such as toluene, trichloroethene and 1,1,1-trichloroethane routinely appear as principal pollutants in contaminated sites throughout the US and abroad. As a result, quantitative determination of soil system hydrocarbons is necessary to confirm the presence of contamination and its nature and extent; to assess site risks and the need for cleanup; to evaluate remedial technologies; and to verify the performance of a selected alternative. Decisions regarding these issues have far-reaching impacts and ideally should be based on accurate measurements of soil hydrocarbon concentrations. Unfortunately, quantification of volatile hydrocarbons in soils is extremely difficult and there is normally little understanding of the accuracy and precision of these measurements. Rather, the assumption is often implicitly made that the hydrocarbon data are sufficiently accurate for the intended purpose. This paper presents a discussion of measurement error potential when quantifying volatile hydrocarbons in soils and outlines some methods for understanding and managing these errors. 11 refs., 1 fig., 4 tabs.
mBEEF-vdW: Robust fitting of error estimation density functionals
Lundgaard, Keld T.; Wellendorff, Jess; Voss, Johannes; Jacobsen, Karsten W.; Bligaard, Thomas
2016-06-01
We propose a general-purpose semilocal/nonlocal exchange-correlation functional approximation, named mBEEF-vdW. The exchange is a meta generalized gradient approximation, and the correlation is a semilocal and nonlocal mixture, with the Rutgers-Chalmers approximation for van der Waals (vdW) forces. The functional is fitted within the Bayesian error estimation functional (BEEF) framework [J. Wellendorff et al., Phys. Rev. B 85, 235149 (2012), 10.1103/PhysRevB.85.235149; J. Wellendorff et al., J. Chem. Phys. 140, 144107 (2014), 10.1063/1.4870397]. We improve the previously used fitting procedures by introducing a robust MM-estimator based loss function, reducing the sensitivity to outliers in the datasets. To more reliably determine the optimal model complexity, we furthermore introduce a generalization of the bootstrap 0.632 estimator with hierarchical bootstrap sampling and geometric mean estimator over the training datasets. Using this estimator, we show that the robust loss function leads to a 10 % improvement in the estimated prediction error over the previously used least-squares loss function. The mBEEF-vdW functional is benchmarked against popular density functional approximations over a wide range of datasets relevant for heterogeneous catalysis, including datasets that were not used for its training. Overall, we find that mBEEF-vdW has a higher general accuracy than competing popular functionals, and it is one of the best performing functionals on chemisorption systems, surface energies, lattice constants, and dispersion. We also show the potential-energy curve of graphene on the nickel(111) surface, where mBEEF-vdW matches the experimental binding length. mBEEF-vdW is currently available in gpaw and other density functional theory codes through Libxc, version 3.0.0.
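The bootstrap 0.632 idea underlying the fitting procedure can be sketched in its standard (non-hierarchical) form, with a simple 1-D linear learner as the model. The learner, data, and seeds are illustrative; the paper's hierarchical sampling and geometric-mean generalization are not reproduced.

```python
import numpy as np

def bootstrap_632(x, y, fit, predict, n_boot=200, seed=8):
    """Standard 0.632 bootstrap estimate of squared prediction error:
    err_632 = 0.368 * err_train + 0.632 * err_oob."""
    rng = np.random.default_rng(seed)
    n = len(y)
    model = fit(x, y)
    err_train = float(np.mean((predict(model, x) - y) ** 2))
    oob_errs = []
    for _ in range(n_boot):
        idx = rng.integers(0, n, n)               # bootstrap resample
        oob = np.setdiff1d(np.arange(n), idx)     # out-of-bag indices
        if len(oob) == 0:
            continue
        m = fit(x[idx], y[idx])
        oob_errs.append(np.mean((predict(m, x[oob]) - y[oob]) ** 2))
    err_oob = float(np.mean(oob_errs))
    return 0.368 * err_train + 0.632 * err_oob

# A 1-D linear least-squares learner stands in for the functional fit.
fit = lambda x, y: np.polyfit(x, y, 1)
predict = lambda m, x: np.polyval(m, x)

rng = np.random.default_rng(7)
x = rng.uniform(-1.0, 1.0, 60)
y = 2.0 * x + rng.normal(0.0, 0.3, 60)
err632 = bootstrap_632(x, y, fit, predict)
```

The 0.632 weighting blends the optimistic training error with the pessimistic out-of-bag error, since each bootstrap sample contains on average about 63.2% of the distinct data points.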
An analytic technique for statistically modeling random atomic clock errors in estimation
Fell, P. J.
1981-01-01
Minimum variance estimation requires that the statistics of random observation errors be modeled properly. If measurements are derived through the use of atomic frequency standards, then one source of error affecting the observable is random fluctuation in frequency. This is the case, for example, with range and integrated Doppler measurements from satellites of the Global Positioning System, and with baseline determination for geodynamic applications. An analytic method is presented which approximates the statistics of this random process. The procedure starts with a model of the Allan variance for a particular oscillator and develops the statistics of range and integrated Doppler measurements. A series of five first-order Markov processes is used to approximate the power spectral density obtained from the Allan variance.
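The starting point of the procedure, the Allan variance of an oscillator's fractional-frequency data, can be sketched as follows. The white-noise input is synthetic; for white frequency noise the Allan variance falls off as 1/m with the averaging factor m, which the test exploits.

```python
import numpy as np

def allan_variance(y, ms):
    """Non-overlapping Allan variance of fractional-frequency samples y
    for a list of averaging factors ms (in samples):
    AVAR(m) = 0.5 * mean((ybar_{k+1} - ybar_k)^2) over m-sample averages."""
    out = []
    for m in ms:
        nseg = len(y) // m
        means = y[:nseg * m].reshape(nseg, m).mean(axis=1)
        out.append(0.5 * float(np.mean(np.diff(means) ** 2)))
    return np.array(out)

rng = np.random.default_rng(10)
y = rng.standard_normal(200000)        # white frequency noise, sigma = 1
av = allan_variance(y, [1, 4, 16, 64])
```

Fitting a few first-order Markov (exponentially correlated) processes to the power spectral density implied by such an Allan variance curve is then a standard shaping-filter construction.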
Statistical uncertainties and systematic errors in weak lensing mass estimates of galaxy clusters
Köhlinger, F; Eriksen, M
2015-01-01
Upcoming and ongoing large area weak lensing surveys will also discover large samples of galaxy clusters. Accurate and precise masses of galaxy clusters are of major importance for cosmology, for example, in establishing well calibrated observational halo mass functions for comparison with cosmological predictions. We investigate the level of statistical uncertainties and sources of systematic errors expected for weak lensing mass estimates. Future surveys that will cover large areas on the sky, such as Euclid or LSST and to lesser extent DES, will provide the largest weak lensing cluster samples with the lowest level of statistical noise regarding ensembles of galaxy clusters. However, the expected low level of statistical uncertainties requires us to scrutinize various sources of systematic errors. In particular, we investigate the bias due to cluster member galaxies which are erroneously treated as background source galaxies due to wrongly assigned photometric redshifts. We find that this effect is signifi...
Zhao, Fei; Zhang, Chi; Yang, Guilin; Chen, Chinyin
2016-12-01
This paper presents an online method for estimating cutting error by analyzing internal sensor readings. The internal sensors of a numerical control (NC) machine tool are selected to avoid installation problems. A mathematical model of cutting error estimation was proposed to compute the relative position of the cutting point and the tool center point (TCP) from internal sensor readings, based on the cutting theory of gears. To verify the effectiveness of the proposed model, it was evaluated in simulations and experiments on a gear generating grinding process. The cutting error of the gear was estimated, and the factors that induce cutting error were analyzed. The simulations and experiments verify that the proposed approach is an efficient way to estimate the cutting error of the workpiece during the machining process.
Estimation of contaminant subslab concentration in vapor intrusion.
Yao, Yijun; Pennell, Kelly G; Suuberg, Eric M
2012-09-15
This study is concerned with developing a method to estimate subslab perimeter crack contaminant concentration for structures built atop a vapor source. A simple alternative to the widely-used but restrictive one-dimensional (1-D) screening models is presented and justified by comparing to predictions from a three-dimensional (3-D) CFD model. A series of simulations were prepared for steady-state transport of a non-biodegradable contaminant in homogenous soil for different structure construction features and site characteristics. The results showed that subslab concentration does not strongly depend on the soil diffusivity, indoor air pressure, or foundation footprint size. It is determined by the geometry of the domain, represented by a characteristic length which is the ratio of foundation depth to source depth. An extension of this analytical approximation was developed for multi-layer soil cases.
Wong, Chee-Woon; Chong, Kok-Keong; Tan, Ming-Hui
2015-07-27
This paper presents an approach to optimizing the electrical performance of a dense-array concentrator photovoltaic system comprising a non-imaging dish concentrator, considering the effects of circumsolar radiation and slope error. Based on the simulated flux distribution, a systematic methodology to optimize the layout configuration of the solar cell interconnection circuit in the dense-array concentrator photovoltaic module has been proposed, minimizing the current mismatch caused by the non-uniformity of concentrated sunlight. An optimized layout of the interconnected solar cell circuit with a minimum electrical power loss of 6.5% can be achieved by minimizing the effects of both circumsolar radiation and slope error.
Can I Just Check...? Effects of Edit Check Questions on Measurement Error and Survey Estimates
Directory of Open Access Journals (Sweden)
Lugtig Peter
2014-03-01
Household income is difficult to measure, since it requires the collection of information about all potential income sources for each member of a household. We assess the effects of two types of edit check questions on measurement error and survey estimates: within-wave edit checks use responses to questions earlier in the same interview to query apparent inconsistencies in responses; dependent interviewing uses responses from prior interviews to query apparent inconsistencies over time. We use data from three waves of the British Household Panel Survey (BHPS) to assess the effects of edit checks on estimates, and data from an experimental study carried out in the context of the BHPS, where survey responses were linked to individual administrative records, to assess the effects on measurement error. The findings suggest that interviewing methods without edit checks underestimate non-labour household income in the lower tail of the income distribution. The effects on estimates derived from total household income, such as poverty rates or transition rates into and out of poverty, are small.
Estimating random errors due to shot noise in backscatter lidar observations
Liu, Zhaoyan; Hunt, William; Vaughan, Mark; Hostetler, Chris; McGill, Matthew; Powell, Kathleen; Winker, David; Hu, Yongxiang
2006-06-01
We discuss the estimation of random errors due to shot noise in backscatter lidar observations that use either photomultiplier tube (PMT) or avalanche photodiode (APD) detectors. The statistical characteristics of photodetection are reviewed, and photon count distributions of solar background signals and laser backscatter signals are examined using airborne lidar observations at 532 nm using a photon-counting mode APD. Both distributions appear to be Poisson, indicating that the arrival at the photodetector of photons for these signals is a Poisson stochastic process. For Poisson-distributed signals, a proportional, one-to-one relationship is known to exist between the mean of a distribution and its variance. Although the multiplied photocurrent no longer follows a strict Poisson distribution in analog-mode APD and PMT detectors, the proportionality still exists between the mean and the variance of the multiplied photocurrent. We make use of this relationship by introducing the noise scale factor (NSF), which quantifies the constant of proportionality that exists between the root mean square of the random noise in a measurement and the square root of the mean signal. Using the NSF to estimate random errors in lidar measurements due to shot noise provides a significant advantage over the conventional error estimation techniques, in that with the NSF, uncertainties can be reliably calculated from or for a single data sample. Methods for evaluating the NSF are presented. Algorithms to compute the NSF are developed for the Cloud-Aerosol Lidar and Infrared Pathfinder Satellite Observations lidar and tested using data from the Lidar In-space Technology Experiment.
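The NSF idea, noise RMS proportional to the square root of the mean signal, can be sketched with a scaled-Poisson detector model. The gain and signal levels are hypothetical; for a Poisson count multiplied by a fixed gain, the NSF equals the square root of the gain at every signal level, so one constant characterizes the noise.

```python
import numpy as np

# Simulated analog-mode detector output: an underlying Poisson photon count
# scaled by a gain, so variance is proportional to (not equal to) the mean.
rng = np.random.default_rng(9)
gain = 5.0
means = np.array([10.0, 50.0, 200.0, 1000.0])   # mean photon numbers
samples = [gain * rng.poisson(mu, 200000) for mu in means]

# NSF estimated as RMS noise / sqrt(mean signal) at each signal level;
# it should be constant across levels for shot-noise-limited data.
nsf = np.array([s.std() / np.sqrt(s.mean()) for s in samples])
nsf_mean = float(nsf.mean())

# Single-sample error estimate: sigma_hat = NSF * sqrt(measured value),
# usable even when only one data sample is available.
sigma_single = nsf_mean * np.sqrt(samples[2].mean())
```

This is what makes the NSF useful in practice: once calibrated, the random error of any individual lidar sample follows from the sample value alone.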
García-Donas, Julieta G; Dyke, Jeffrey; Paine, Robert R; Nathena, Despoina; Kranioti, Elena F
2016-02-01
Most age estimation methods prove problematic when applied to highly fragmented skeletal remains. Rib histomorphometry is advantageous in such cases; yet it is vital to test and revise existing techniques, particularly when they are used in legal settings (Crowder and Rosella, 2007). This study tested the Stout & Paine (1992) and Stout et al. (1994) histological age estimation methods on a modern Greek sample using different sampling sites. Six left 4th ribs of known age and sex were selected from a modern skeletal collection. Each rib was cut into three equal segments, and two thin sections were acquired from each segment. A total of 36 thin sections were prepared and analysed. Four variables (cortical area, intact osteon density, fragmented osteon density and osteon population density) were calculated for each section, and age was estimated according to Stout & Paine (1992) and Stout et al. (1994). The results showed that both methods produced a systematic underestimation of the individuals' ages (by up to 43 years), although a general improvement in accuracy was observed when applying the Stout et al. (1994) formula. Error rates increased with age, with the oldest individual showing extreme differences between real and estimated age. Comparison of the different sampling sites showed small differences between the estimated ages, suggesting that any fragment of the rib could be used without introducing significant error. Yet, a larger sample should be used to confirm these results. Copyright © 2015 Elsevier Ltd and Faculty of Forensic and Legal Medicine. All rights reserved.
Gaines Wilson, J.; Zawar-Reza, Peyman
Epidemiological studies relating air pollution to health effects often estimate personal exposure to particulate matter using values from a central ambient monitoring site as a proxy. However, when there is a significant amount of variation in particulate concentrations across an urban area, the use of central sites may result in exposure misclassification that induces error in long-term cohort epidemiological study designs. When spatially dense monitoring data are not available, advanced dispersion models may offer one solution to the problem of accurately characterising intraurban particulate concentrations across an area. This study presents results from an intraurban assessment of The Air Pollution Model (TAPM), an Integrated Meteorological-Emission (IME) model. Particles less than 10 μm in aerodynamic diameter (PM10) were modelled and compared with a dense intraurban monitoring network in Christchurch, New Zealand, a city with high winter levels of particulate air pollution. Despite the area's high intraurban concentration variability, and meteorological and topographical complexity, the model performed satisfactorily overall, with mean observed and modelled concentrations of 42.9 and 43.4 μg m-3, respectively, while the mean Index of Agreement (IOA) between individual sites was 0.60 and the mean systematic RMSE was 16.9 μg m-3. Most of the systematic error in the model was due to coarse spatial resolution of the local emission inventory and complex meteorology attributed to localised convergence of drainage flows, especially on the western and southern fringes of the urban area. Given further improvements in site-specific estimates within urban areas, IME models such as TAPM may be a viable alternative to central sites for estimating personal exposure in longer-term (monthly or annual) cohort epidemiological studies.
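The skill scores quoted above can be reproduced with standard formulas; a minimal sketch, assuming Willmott's form of the Index of Agreement and a regression-based definition of systematic RMSE (the paper's exact conventions may differ, and the data below are invented):

```python
import numpy as np

def index_of_agreement(obs, mod):
    """Willmott's Index of Agreement: 1 = perfect agreement, 0 = no skill."""
    obs, mod = np.asarray(obs, float), np.asarray(mod, float)
    ob = obs.mean()
    return 1.0 - np.sum((obs - mod) ** 2) / np.sum(
        (np.abs(mod - ob) + np.abs(obs - ob)) ** 2)

def systematic_rmse(obs, mod):
    """RMSE of the linear regression of modelled on observed values;
    this captures the error the model could in principle remove by
    recalibration, as opposed to unsystematic scatter."""
    obs, mod = np.asarray(obs, float), np.asarray(mod, float)
    b, a = np.polyfit(obs, mod, 1)      # mod ~ a + b * obs
    fitted = a + b * obs
    return np.sqrt(np.mean((fitted - obs) ** 2))

# Hypothetical daily PM10 concentrations (ug/m3) at one site
obs = [40.0, 45.0, 50.0, 38.0, 60.0]
mod = [42.0, 44.0, 53.0, 35.0, 58.0]
ioa = index_of_agreement(obs, mod)
rmse_s = systematic_rmse(obs, mod)
```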
Recursive prediction error methods for online estimation in nonlinear state-space models
Directory of Open Access Journals (Sweden)
Dag Ljungquist
1994-04-01
Full Text Available Several recursive algorithms for online, combined state and parameter estimation in nonlinear state-space models are discussed in this paper. Well-known algorithms such as the extended Kalman filter and alternative formulations of the recursive prediction error method are included, as well as a new method based on a line-search strategy. A comparison of the algorithms illustrates that they are very similar although the differences can be important for the online tracking capabilities and robustness. Simulation experiments on a simple nonlinear process show that the performance under certain conditions can be improved by including a line-search strategy.
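For a linear-in-parameters model, the recursive prediction error method reduces to recursive least squares; a minimal sketch of the update step, with invented data (the paper's nonlinear state-space algorithms and line-search strategy are not reproduced here):

```python
import numpy as np

def rls_update(theta, P, phi, y, lam=1.0):
    """One recursive least-squares step: theta is the parameter estimate,
    P the covariance, phi the regressor, y the new measurement, and
    lam a forgetting factor (1.0 = no forgetting)."""
    phi = np.asarray(phi, float)
    e = y - phi @ theta                      # prediction error
    k = P @ phi / (lam + phi @ P @ phi)      # gain
    theta = theta + k * e
    P = (P - np.outer(k, phi @ P)) / lam
    return theta, P

# Identify y = 2*u1 - 3*u2 online from noiseless observations.
theta = np.zeros(2)
P = 1e3 * np.eye(2)                          # large P0 = vague prior
data = [([1.0, 0.0], 2.0), ([0.0, 1.0], -3.0), ([1.0, 1.0], -1.0)]
for phi, y in data:
    theta, P = rls_update(theta, P, phi, y)
```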
Institute of Scientific and Technical Information of China (English)
Yan-ping Chen; Yun-qing Huang
2001-01-01
Improved L2-error estimates are computed for mixed finite element methods for second order nonlinear hyperbolic equations. Results are given for the continuous-time case. The convergence of the values for both the scalar function and the flux is demonstrated. The technique used here covers the lowest-order Raviart-Thomas spaces, as well as the higher-order spaces. A second paper will present the analysis of a fully discrete scheme (Numer. Math. J. Chinese Univ., vol. 9, no. 2, 2000, 181-192).
On-line estimation of concentration parameters in fermentation processes
Institute of Scientific and Technical Information of China (English)
XIONG Zhi-hua; HUANG Guo-hong; SHAO Hui-he
2005-01-01
It has long been thought that bioprocesses, with their inherent measurement difficulties and complex dynamics, pose almost insurmountable problems to engineers. A novel software sensor is proposed to make more effective use of those measurements that are already available, enabling improved fermentation process control. The proposed method is based on mixtures of Gaussian processes (GP), with the expectation maximization (EM) algorithm employed for parameter estimation of the mixture of models. The mixture model alleviates the computational complexity of GP and also accommodates changes of operating conditions in fermentation processes, i.e., it examines which types of process knowledge are most relevant for local models at specific operating points of the process and then combines them into a global model. Demonstrated on the on-line estimation of yeast concentration in industrial fermentation, it is shown that soft-sensor-based state estimation is a powerful technique for both enhancing the automatic control performance of biological systems and implementing on-line monitoring and optimization.
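A single Gaussian process regressor, the building block of the mixture described above, can be sketched in a few lines; the kernel choice, hyperparameters, and data below are illustrative assumptions, and the paper's mixture-of-GP/EM machinery is not reproduced here.

```python
import numpy as np

def gp_predict(x, y, xs, length=1.0, sigma_f=1.0, sigma_n=1e-3):
    """Posterior mean of a GP with an RBF kernel over scalar inputs: a
    minimal soft-sensor sketch mapping an easy-to-measure input x to a
    hard-to-measure output y (e.g. a concentration)."""
    x, y, xs = (np.asarray(v, float) for v in (x, y, xs))
    k = lambda a, b: sigma_f**2 * np.exp(
        -0.5 * (a[:, None] - b[None, :])**2 / length**2)
    K = k(x, x) + sigma_n**2 * np.eye(len(x))   # kernel matrix + noise
    return k(xs, x) @ np.linalg.solve(K, y)     # posterior mean at xs

# Hypothetical calibration data: easy measurement -> yeast concentration
x_train = [0.0, 1.0, 2.0, 3.0]
y_train = [0.5, 1.8, 2.9, 3.2]
y_hat = gp_predict(x_train, y_train, [1.0])
```

A mixture of such local GPs, weighted by operating condition, is what the EM step in the paper fits.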
Improved Estimation of Subsurface Magnetic Properties using Minimum Mean-Square Error Methods
Energy Technology Data Exchange (ETDEWEB)
Saether, Bjoern
1997-12-31
This thesis proposes an inversion method for the interpretation of complicated geological susceptibility models. The method is based on constrained Minimum Mean-Square Error (MMSE) estimation. The MMSE method allows the incorporation of available prior information, i.e., the geometries of the rock bodies and their susceptibilities. Uncertainties may be included into the estimation process. The computation exploits the subtle information inherent in magnetic data sets in an optimal way in order to tune the initial susceptibility model. The MMSE method includes a statistical framework that allows the computation not only of the estimated susceptibilities, given by the magnetic measurements, but also of the associated reliabilities of these estimations. This allows the evaluation of the reliabilities in the estimates before any measurements are made, an option, which can be useful for survey planning. The MMSE method has been tested on a synthetic data set in order to compare the effects of various prior information. When more information is given as input to the estimation, the estimated models come closer to the true model, and the reliabilities in their estimates are increased. In addition, the method was evaluated using a real geological model from a North Sea oil field, based on seismic data and well information, including susceptibilities. Given that the geometrical model is correct, the observed mismatch between the forward calculated magnetic anomalies and the measured anomalies causes changes in the susceptibility model, which may show features of interesting geological significance to the explorationists. Such magnetic anomalies may be due to small fractures and faults not detectable on seismic, or local geochemical changes due to the upward migration of water or hydrocarbons. 76 refs., 42 figs., 18 tabs.
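The constrained MMSE machinery has a compact linear-Gaussian core; a sketch, assuming a linear forward model y = Ax + n with prior mean and covariance for the susceptibilities (the thesis's magnetic forward operator is of course more elaborate, and the numbers below are invented):

```python
import numpy as np

def mmse_estimate(y, A, mu_x, C_x, C_n):
    """Linear MMSE estimate of x from y = A x + n, given the prior mean
    mu_x, prior covariance C_x, and noise covariance C_n. Returns the
    estimate and its posterior covariance (the 'reliability' of each
    component)."""
    S = A @ C_x @ A.T + C_n
    K = C_x @ A.T @ np.linalg.inv(S)
    x_hat = mu_x + K @ (y - A @ mu_x)
    C_post = C_x - K @ A @ C_x
    return x_hat, C_post

# Two unknown susceptibilities observed only through their sum.
A = np.array([[1.0, 1.0]])
mu_x = np.zeros(2)
C_x = np.eye(2)
C_n = np.array([[1e-8]])
x_hat, C_post = mmse_estimate(np.array([2.0]), A, mu_x, C_x, C_n)
```

Note that C_post depends only on A, C_x and C_n, not on the data y, which is why the reliabilities can be evaluated before any measurements are made, the survey-planning option the abstract mentions.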
Institute of Scientific and Technical Information of China (English)
2008-01-01
Two residual-based a posteriori error estimators of the nonconforming Crouzeix-Raviart element are derived for elliptic problems with Dirac delta source terms. One estimator is shown to be reliable and efficient, which yields global upper and lower bounds for the error in the piecewise W1,p seminorm. The other is proved to give a global upper bound of the error in the Lp-norm. By taking the two estimators as refinement indicators, adaptive algorithms are suggested, which are experimentally shown to attain optimal convergence orders.
Institute of Scientific and Technical Information of China (English)
Xianmin Xu; Zhiping Li
2009-01-01
An a posteriori error estimator is obtained for a nonconforming finite element approximation of a linear elliptic problem, which is derived from a corresponding unbounded domain problem by applying a nonlocal approximate artificial boundary condition. Our method can be easily extended to obtain a class of a posteriori error estimators for various conforming and nonconforming finite element approximations of problems with different artificial boundary conditions. The reliability and efficiency of our a posteriori error estimator are rigorously proved and are verified by numerical examples.
DEFF Research Database (Denmark)
Hansen, Peter Reinhard; Lunde, Asger
An economic time series can often be viewed as a noisy proxy for an underlying economic variable. Measurement errors will influence the dynamic properties of the observed process and may conceal the persistence of the underlying time series. In this paper we develop instrumental variable (IV) methods for extracting information about the latent process. Our framework can be used to estimate the autocorrelation function of the latent volatility process and a key persistence parameter. Our analysis is motivated by the recent literature on realized (volatility) measures, such as the realized variance, that are imperfect estimates of actual volatility. In an empirical analysis using realized measures for the DJIA stocks we find the underlying volatility to be near unit root in all cases. Although standard unit root tests are asymptotically justified, we find them to be misleading in our...
DEFF Research Database (Denmark)
Hansen, Peter Reinhard; Lunde, Asger
2014-01-01
An economic time series can often be viewed as a noisy proxy for an underlying economic variable. Measurement errors will influence the dynamic properties of the observed process and may conceal the persistence of the underlying time series. In this paper we develop instrumental variable (IV) methods for extracting information about the latent process. Our framework can be used to estimate the autocorrelation function of the latent volatility process and a key persistence parameter. Our analysis is motivated by the recent literature on realized volatility measures that are imperfect estimates of actual volatility. In an empirical analysis using realized measures for the Dow Jones industrial average stocks, we find the underlying volatility to be near unit root in all cases. Although standard unit root tests are asymptotically justified, we find them to be misleading in our application despite...
Prediction and standard error estimation for a finite universe total when a stratum is not sampled
Energy Technology Data Exchange (ETDEWEB)
Wright, T.
1994-01-01
In the context of a universe of trucks operating in the United States in 1990, this paper presents statistical methodology for estimating a finite universe total on a second occasion when a part of the universe is sampled and the remainder of the universe is not sampled. Prediction is used to compensate for the lack of data from the unsampled portion of the universe. The sample is assumed to be a subsample of an earlier sample, where stratification is used on both occasions before sample selection. Accounting for births and deaths in the universe between the two points in time, the detailed sampling plan, estimator, standard error, and optimal sample allocation are presented with a focus on the second occasion. If prior auxiliary information is available, the methodology is also applicable to a first occasion.
Errors in 'BED'-derived estimates of HIV incidence will vary by place, time and age.
Directory of Open Access Journals (Sweden)
Timothy B Hallett
Full Text Available BACKGROUND: The BED Capture Enzyme Immunoassay, believed to distinguish recent HIV infections, is being used to estimate HIV incidence, although an important property of the test--how specificity changes with time since infection--has not been measured. METHODS: We construct hypothetical scenarios for the performance of the BED test, consistent with current knowledge, and explore how this could influence errors in BED estimates of incidence using a mathematical model of six African countries. The model is also used to determine the conditions and the sample sizes required for the BED test to reliably detect trends in HIV incidence. RESULTS: If the chance of misclassification by BED increases with time since infection, the overall proportion of individuals misclassified could vary widely between countries, over time, and across age-groups, in a manner determined by the historic course of the epidemic and the age-pattern of incidence. Under some circumstances, changes in BED estimates over time can approximately track actual changes in incidence, but large sample sizes (50,000+) will be required for recorded changes to be statistically significant. CONCLUSIONS: The relationship between BED test specificity and time since infection has not been fully measured, but, if specificity decreases, errors in estimates of incidence could vary by place, time and age-group. This means that post-assay adjustment procedures using parameters from different populations or at different times may not be valid. Further research is urgently needed into the properties of the BED test, and the rate of misclassification in a wide range of populations.
Cecinati, Francesca; Moreno Ródenas, Antonio Manuel; Rico-Ramirez, Miguel Angel; ten Veldhuis, Marie-claire; Han, Dawei
2016-04-01
In many research studies rain gauges are used as a reference point measurement for rainfall, because they can reach very good accuracy, especially compared to radar or microwave links, and their use is very widespread. In some applications rain gauge uncertainty is assumed to be small enough to be neglected. This can be done when rain gauges are accurate and their data are correctly managed. Unfortunately, in many operational networks the importance of accurate rainfall data and of data quality control can be underestimated; budget and best practice knowledge can be limiting factors in a correct rain gauge network management. In these cases, the accuracy of rain gauges can drastically drop and the uncertainty associated with the measurements cannot be neglected. This work proposes an approach based on three different kriging methods to integrate rain gauge measurement errors in the overall rainfall uncertainty estimation. In particular, rainfall products of different complexity are derived through 1) block kriging on a single rain gauge, 2) ordinary kriging on a network of different rain gauges, and 3) kriging with external drift to integrate all the available rain gauges with radar rainfall information. The study area is the Eindhoven catchment, contributing to the river Dommel, in the southern part of the Netherlands. The area, 590 km2, is covered by high quality rain gauge measurements by the Royal Netherlands Meteorological Institute (KNMI), which has one rain gauge inside the study area and six around it, and by lower quality rain gauge measurements by the Dommel Water Board and by the Eindhoven Municipality (six rain gauges in total). The integration of the rain gauge measurement error is accomplished in all cases by increasing the nugget of the semivariogram proportionally to the estimated error. Using different semivariogram models for the different networks allows for the separate characterisation of higher and lower quality rain gauges. For the kriging with
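The error-dependent nugget idea can be sketched with a small ordinary-kriging solver; the covariance model, range, and per-gauge error variances below are illustrative assumptions, not values from the study.

```python
import numpy as np

def ordinary_kriging_weights(coords, target, sill=1.0, rng=10.0, err_var=None):
    """Ordinary-kriging weights with a per-gauge nugget: each gauge's
    estimated measurement-error variance is added to the diagonal of the
    covariance matrix, down-weighting less reliable gauges."""
    coords = np.asarray(coords, float)
    n = len(coords)
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    C = sill * np.exp(-d / rng)                  # exponential covariance model
    if err_var is not None:
        C = C + np.diag(err_var)                 # error-proportional nugget
    c0 = sill * np.exp(-np.linalg.norm(coords - np.asarray(target), axis=1) / rng)
    # Ordinary-kriging system with a Lagrange multiplier for unbiasedness.
    A = np.ones((n + 1, n + 1))
    A[:n, :n] = C
    A[n, n] = 0.0
    b = np.append(c0, 1.0)
    return np.linalg.solve(A, b)[:n]

coords = [[0.0, 0.0], [5.0, 0.0], [0.0, 5.0]]    # hypothetical gauge locations
w_equal = ordinary_kriging_weights(coords, [1.0, 1.0])
w_noisy = ordinary_kriging_weights(coords, [1.0, 1.0], err_var=[0.0, 0.5, 0.5])
```

The Lagrange multiplier keeps the weights summing to one, while the added nugget shifts weight from the noisy gauges to the reliable one.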
Measurement error affects risk estimates for recruitment to the Hudson River stock of striped bass.
Dunning, Dennis J; Ross, Quentin E; Munch, Stephan B; Ginzburg, Lev R
2002-06-07
We examined the consequences of ignoring the distinction between measurement error and natural variability in an assessment of risk to the Hudson River stock of striped bass posed by entrainment at the Bowline Point, Indian Point, and Roseton power plants. Risk was defined as the probability that recruitment of age-1+ striped bass would decline by 80% or more, relative to the equilibrium value, at least once during the time periods examined (1, 5, 10, and 15 years). Measurement error, estimated using two abundance indices from independent beach seine surveys conducted on the Hudson River, accounted for 50% of the variability in one index and 56% of the variability in the other. If a measurement error of 50% was ignored and all of the variability in abundance was attributed to natural causes, the risk that recruitment of age-1+ striped bass would decline by 80% or more after 15 years was 0.308 at the current level of entrainment mortality (11%). However, the risk decreased almost tenfold (0.032) if a measurement error of 50% was considered. The change in risk attributable to decreasing the entrainment mortality rate from 11 to 0% was very small (0.009) and similar in magnitude to the change in risk associated with an action proposed in Amendment #5 to the Interstate Fishery Management Plan for Atlantic striped bass (0.006)--an increase in the instantaneous fishing mortality rate from 0.33 to 0.4. The proposed increase in fishing mortality was not considered an adverse environmental impact, which suggests that potentially costly efforts to reduce entrainment mortality on the Hudson River stock of striped bass are not warranted.
Measurement Error Affects Risk Estimates for Recruitment to the Hudson River Stock of Striped Bass
Directory of Open Access Journals (Sweden)
Dennis J. Dunning
2002-01-01
Full Text Available We examined the consequences of ignoring the distinction between measurement error and natural variability in an assessment of risk to the Hudson River stock of striped bass posed by entrainment at the Bowline Point, Indian Point, and Roseton power plants. Risk was defined as the probability that recruitment of age-1+ striped bass would decline by 80% or more, relative to the equilibrium value, at least once during the time periods examined (1, 5, 10, and 15 years). Measurement error, estimated using two abundance indices from independent beach seine surveys conducted on the Hudson River, accounted for 50% of the variability in one index and 56% of the variability in the other. If a measurement error of 50% was ignored and all of the variability in abundance was attributed to natural causes, the risk that recruitment of age-1+ striped bass would decline by 80% or more after 15 years was 0.308 at the current level of entrainment mortality (11%). However, the risk decreased almost tenfold (0.032) if a measurement error of 50% was considered. The change in risk attributable to decreasing the entrainment mortality rate from 11 to 0% was very small (0.009) and similar in magnitude to the change in risk associated with an action proposed in Amendment #5 to the Interstate Fishery Management Plan for Atlantic striped bass (0.006), an increase in the instantaneous fishing mortality rate from 0.33 to 0.4. The proposed increase in fishing mortality was not considered an adverse environmental impact, which suggests that potentially costly efforts to reduce entrainment mortality on the Hudson River stock of striped bass are not warranted.
Practical error estimates for Reynolds' lubrication approximation and its higher order corrections
Energy Technology Data Exchange (ETDEWEB)
Wilkening, Jon
2008-12-10
Reynolds lubrication approximation is used extensively to study flows between moving machine parts, in narrow channels, and in thin films. The solution of Reynolds equation may be thought of as the zeroth order term in an expansion of the solution of the Stokes equations in powers of the aspect ratio ε of the domain. In this paper, we show how to compute the terms in this expansion to arbitrary order on a two-dimensional, x-periodic domain and derive rigorous, a-priori error bounds for the difference between the exact solution and the truncated expansion solution. Unlike previous studies of this sort, the constants in our error bounds are either independent of the function h(x) describing the geometry, or depend on h and its derivatives in an explicit, intuitive way. Specifically, if the expansion is truncated at order 2k, the error is O(ε^(2k+2)) and h enters into the error bound only through its first and third inverse moments ∫₀¹ h(x)^(−m) dx, m = 1, 3, and via the max norms ‖(1/ℓ!) h^(ℓ−1) ∂ₓ^ℓ h‖_∞, 1 ≤ ℓ ≤ 2k + 2. We validate our estimates by comparing with finite element solutions and present numerical evidence that suggests that even when h is real analytic and periodic, the expansion solution forms an asymptotic series rather than a convergent series.
Joint Estimation of Contamination, Error and Demography for Nuclear DNA from Ancient Humans.
Directory of Open Access Journals (Sweden)
Fernando Racimo
2016-04-01
Full Text Available When sequencing an ancient DNA sample from a hominin fossil, DNA from present-day humans involved in excavation and extraction will be sequenced along with the endogenous material. This type of contamination is problematic for downstream analyses as it will introduce a bias towards the population of the contaminating individual(s). Quantifying the extent of contamination is a crucial step as it allows researchers to account for possible biases that may arise in downstream genetic analyses. Here, we present an MCMC algorithm to co-estimate the contamination rate, sequencing error rate and demographic parameters, including drift times and admixture rates, for an ancient nuclear genome obtained from human remains, when the putative contaminating DNA comes from present-day humans. We assume we have a large panel representing the putative contaminant population (e.g. European, East Asian or African). The method is implemented in a C++ program called 'Demographic Inference with Contamination and Error' (DICE). We applied it to simulations and genome data from ancient Neanderthals and modern humans. With reasonable levels of genome sequence coverage (>3X), we find we can recover accurate estimates of all these parameters, even when the contamination rate is as high as 50%.
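As a toy illustration of the MCMC idea (a drastic simplification: DICE jointly models contamination, error and demography, whereas this sketch estimates only a contamination proportion under an assumed binomial likelihood with invented read counts):

```python
import math
import random

def metropolis_contamination(k, n, steps=20000, seed=1):
    """Toy Metropolis sampler for a contamination rate c, assuming k of n
    reads carry a contaminant-diagnostic allele (binomial likelihood,
    uniform prior). Returns the posterior mean over the second half of
    the chain."""
    random.seed(seed)

    def loglik(c):
        if not 0.0 < c < 1.0:
            return -math.inf
        return k * math.log(c) + (n - k) * math.log(1.0 - c)

    c, ll = 0.5, loglik(0.5)
    samples = []
    for _ in range(steps):
        prop = c + random.gauss(0.0, 0.05)        # random-walk proposal
        ll_prop = loglik(prop)
        if math.log(random.random()) < ll_prop - ll:
            c, ll = prop, ll_prop                 # accept
        samples.append(c)
    burn = steps // 2
    return sum(samples[burn:]) / (steps - burn)

c_hat = metropolis_contamination(k=120, n=1000)   # data simulated at ~12%
```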
Hybrid Optimization Approach for the Design of Mechanisms Using a New Error Estimator
Directory of Open Access Journals (Sweden)
A. Sedano
2012-01-01
Full Text Available A hybrid optimization approach for the design of linkages is presented. The method is applied to the dimensional synthesis of mechanisms and combines the merits of both stochastic and deterministic optimization. The stochastic optimization approach is based on a real-valued evolutionary algorithm (EA) and is used for extensive exploration of the design variable space when searching for the best linkage. The deterministic approach uses a local optimization technique to improve the efficiency by reducing the high CPU time that EA techniques require in this kind of application. To that end, the deterministic approach is implemented in the evolutionary algorithm in two stages. The first stage is the fitness evaluation, where the deterministic approach is used to obtain an effective new error estimator. In the second stage the deterministic approach refines the solution provided by the evolutionary part of the algorithm. The new error estimator enables the evaluation of the different individuals in each generation, avoiding the removal of well-adapted linkages that other methods would not detect. The efficiency, robustness, and accuracy of the proposed method are tested for the design of a mechanism in two examples.
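The two-stage idea, stochastic exploration followed by deterministic refinement, can be sketched generically; the mutation scheme, coordinate search, and test function below are illustrative stand-ins for the paper's EA and local optimizer, not its actual algorithm.

```python
import random

def hybrid_minimize(f, bounds, pop=30, gens=40, seed=2):
    """Hybrid sketch: a crude evolutionary (mutation-only) search explores
    the design space, then a deterministic shrinking coordinate search
    polishes the best individual found."""
    random.seed(seed)
    lo, hi = zip(*bounds)

    def rand_ind():
        return [random.uniform(l, h) for l, h in bounds]

    # --- Stage 1: stochastic exploration ---
    best = min((rand_ind() for _ in range(pop)), key=f)
    for _ in range(gens):
        for _ in range(pop):
            cand = [min(max(x + random.gauss(0.0, 0.3), l), h)
                    for x, l, h in zip(best, lo, hi)]
            if f(cand) < f(best):
                best = cand

    # --- Stage 2: deterministic refinement (coordinate search) ---
    step = 0.1
    while step > 1e-9:
        improved = False
        for i in range(len(best)):
            for delta in (step, -step):
                cand = best[:]
                cand[i] += delta
                if f(cand) < f(best):
                    best, improved = cand, True
        if not improved:
            step *= 0.5
    return best

# Hypothetical "synthesis error" with minimum at (1, -2).
sol = hybrid_minimize(lambda v: (v[0] - 1.0)**2 + (v[1] + 2.0)**2,
                      bounds=[(-5.0, 5.0), (-5.0, 5.0)])
```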
mBEEF: An accurate semi-local Bayesian error estimation density functional
Wellendorff, Jess; Lundgaard, Keld T.; Jacobsen, Karsten W.; Bligaard, Thomas
2014-04-01
We present a general-purpose meta-generalized gradient approximation (MGGA) exchange-correlation functional generated within the Bayesian error estimation functional framework [J. Wellendorff, K. T. Lundgaard, A. Møgelhøj, V. Petzold, D. D. Landis, J. K. Nørskov, T. Bligaard, and K. W. Jacobsen, Phys. Rev. B 85, 235149 (2012)]. The functional is designed to give reasonably accurate density functional theory (DFT) predictions of a broad range of properties in materials physics and chemistry, while exhibiting a high degree of transferability. Particularly, it improves upon solid cohesive energies and lattice constants over the BEEF-vdW functional without compromising high performance on adsorption and reaction energies. We thus expect it to be particularly well-suited for studies in surface science and catalysis. An ensemble of functionals for error estimation in DFT is an intrinsic feature of exchange-correlation models designed this way, and we show how the Bayesian ensemble may provide a systematic analysis of the reliability of DFT based simulations.
Approximate Damped Oscillatory Solutions for Compound KdV-Burgers Equation and Their Error Estimates
Institute of Scientific and Technical Information of China (English)
Wei-guo ZHANG; Yan ZHAO; Xiao-yan TENG
2012-01-01
In this paper, we focus on studying approximate solutions of damped oscillatory solutions of the compound KdV-Burgers equation and their error estimates. We employ the theory of planar dynamical systems to study traveling wave solutions of the compound KdV-Burgers equation. We obtain some global phase portraits under different parameter conditions as well as the existence of bounded traveling wave solutions. Furthermore, we investigate the relations between the behavior of bounded traveling wave solutions and the dissipation coefficient r of the equation. We obtain two critical values of r, and find that a bounded traveling wave appears as a kink profile solitary wave if |r| is greater than or equal to some critical value, while it appears as a damped oscillatory wave if |r| is less than some critical value. By means of analysis and the undetermined coefficients method, we find that the compound KdV-Burgers equation only has three kinds of bell profile solitary wave solutions without dissipation. Based on the above discussions and according to the evolution relations of orbits in the global phase portraits, we obtain all approximate damped oscillatory solutions by using the undetermined coefficients method. Finally, using the homogenization principle, we establish the integral equations reflecting the relations between exact solutions and approximate solutions of damped oscillatory solutions. Moreover, we also give the error estimates for these approximate solutions.
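For reference, the compound KdV-Burgers equation combines quadratic and cubic nonlinearities with both a dissipative and a dispersive term; one common form (sign and coefficient conventions vary across the literature, so a, b and δ here are placeholders, with r the dissipation coefficient discussed above) is:

```latex
u_t + a\,u\,u_x + b\,u^2 u_x + r\,u_{xx} + \delta\,u_{xxx} = 0
```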
Emiyati; Manoppo, Anneke K. S.; Budhiman, Syarif
2017-01-01
Total Suspended Matter (TSM) are fine materials which suspended and floated in water column. Water column could be turbid due to TSM that reduces the depth of light penetration and causes low productivity in coastal waters. The objective of this study was to estimate TSM concentration using Landsat 8 OLI data in Lombok coastal waters Indonesia by using empirical and analytic approach between three visible bands of Landsat 8 OLI subsurface reflectance (OLI 2, OLI 3 and OLI 4) and field data. The accuracy of model was tested using error estimation and statistical analysis. Colour of waters, transparency and reflectance values showed, the clear water has high transparency and low reflectance while the turbid waters have low transparency and high reflectance. The estimation of TSM concentrations in Lombok coastal waters are 0.39 to 20.7 mg/l. TSM concentrations becoming high when it is on coast and low when it is far from the coast. The statistical analysis showed that TSM model from Landsat 8 OLI data could describe TSM from field measurement with correlation 91.8% and RMSE value 0.52. The t-test and f-test showed that the TSM derived from Landsat 8 OLI and TSM measured in field were not significantly different.
Directory of Open Access Journals (Sweden)
Mustafa I. Alheety
2009-11-01
Full Text Available This paper introduces a new biased estimator, namely the almost unbiased Liu estimator (AULE) of β, for the multiple linear regression model with heteroscedastic and/or correlated errors that suffers from the problem of multicollinearity. The properties of the proposed estimator are discussed, and its performance relative to the generalized least squares (GLS) estimator, the ordinary ridge regression (ORR) estimator (Trenkler, 1984) and the Liu estimator (LE) (Kaçıranlar, 2003) in terms of the matrix mean square error criterion is investigated. The optimal values of d for the Liu and almost unbiased Liu estimators have been obtained. Finally, a simulation study has been conducted, which indicated that under certain conditions on d, the proposed estimator performs well compared to the GLS, ORR and LE estimators.
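For concreteness, the Liu estimator itself is a one-line shrinkage of OLS; a sketch with an invented near-collinear design (in practice d would be chosen by one of the optimality rules discussed in the paper, and the AULE applies a further bias correction not shown here):

```python
import numpy as np

def liu_estimator(X, y, d):
    """Liu estimator: beta_d = (X'X + I)^{-1} (X'X + d*I) beta_OLS.
    d = 1 recovers OLS; d < 1 shrinks the estimate, trading bias for
    variance under multicollinearity."""
    X, y = np.asarray(X, float), np.asarray(y, float)
    XtX = X.T @ X
    I = np.eye(XtX.shape[0])
    beta_ols = np.linalg.solve(XtX, X.T @ y)
    return np.linalg.solve(XtX + I, (XtX + d * I) @ beta_ols)

# Hypothetical near-collinear design matrix.
X = np.array([[1.00, 0.99], [0.99, 1.00], [1.00, 1.01], [1.02, 1.00]])
y = np.array([2.0, 1.9, 2.1, 2.0])
beta_d = liu_estimator(X, y, d=0.5)
```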
[Comparison of chlorophyll a concentration estimation in Taihu Lake using different methods].
Li, Yun-Liang; Zhang, Yun-Lin; Li, Jun-Sheng; Liu, Ming-Liang
2009-03-15
Based on measured remote sensing reflectance and concurrent chlorophyll a (Chl-a) concentrations in Taihu Lake from January 7 to 9 and July 29 to August 1, 2006, this study comparatively analyzed the estimation precision of the three-band model, the two-band model, the reflectance peak position method and the first derivative method, and further discussed the feasibility of the four methods for estimating Chl-a from remote sensing imagery. The data set of the two samplings spanned widely variable total suspended matter (12.24-285.20 mg x L(-1)), Chl-a (4.83-155.11 microg x L(-1)) and chromophoric dissolved organic matter absorption coefficients at 440 nm (0.27-2.36 m(-1)). The four methods all achieved high precision for Chl-a concentration estimation in Taihu Lake, with determination coefficients (r2) of 0.813, 0.838, 0.872 and 0.819, respectively. The root mean square error (RMSE) between measured and estimated Chl-a concentrations using the four models was 13.04, 12.12, 13.41 and 12.13 microg x L(-1), respectively, and the relative error (RE) was 35.5%, 34.9%, 24.6% and 41.8%, respectively. Although the reflectance peak position method had the highest estimation precision, it is difficult to apply to remote sensing imagery due to the lack of a suitable spectral channel. The three-band model and two-band model had higher estimation precision than the first derivative method and good application prospects for Chl-a retrieval from remote sensing imagery. The r2, RMSE and RE of [R(-1)(665) - R(-1)(709)] x R(754) in the three-band model and R(709)/R(681) in the two-band model, based on simulated MERIS data, were 0.788, 13.87 microg x L(-1), 37.3%, and 0.815, 12.96 microg x L(-1), 34.8%, respectively. The results demonstrated that MERIS data could be applied to retrieve Chl-a concentration in turbid Case-II waters such as Taihu Lake.
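The two best-performing band combinations quoted above are simple algebraic indices; a sketch (reflectance values invented; the regression from index to concentration, whose coefficients the paper fits to field data, is omitted):

```python
import numpy as np

def three_band_index(r665, r709, r754):
    """Three-band index [R^-1(665) - R^-1(709)] * R(754); Chl-a is then
    estimated by regressing concentration on this index."""
    return (1.0 / np.asarray(r665) - 1.0 / np.asarray(r709)) * np.asarray(r754)

def two_band_index(r681, r709):
    """Two-band ratio R(709)/R(681) used in the study's second model."""
    return np.asarray(r709) / np.asarray(r681)

# Invented reflectance values at the listed wavelengths.
idx3 = three_band_index(0.02, 0.03, 0.025)
idx2 = two_band_index(0.02, 0.03)
```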
Paek, Insu; Cai, Li
2014-01-01
The present study was motivated by the recognition that standard errors (SEs) of item response theory (IRT) model parameters are often of immediate interest to practitioners and that there is currently a lack of comparative research on different SE (or error variance-covariance matrix) estimation procedures. The present study investigated item…
Kaye, Jason; Yang, Chao
2014-01-01
Kohn-Sham density functional theory is one of the most widely used electronic structure theories. The recently developed adaptive local basis functions form an accurate and systematically improvable basis set for solving Kohn-Sham density functional theory using discontinuous Galerkin methods, requiring a small number of basis functions per atom. In this paper we develop residual-based a posteriori error estimates for the adaptive local basis approach, which can be used to guide non-uniform basis refinement for highly inhomogeneous systems such as surfaces and large molecules. The adaptive local basis functions are non-polynomial basis functions, and standard a posteriori error estimates for $hp$-refinement using polynomial basis functions do not directly apply. We generalize the error estimates for $hp$-refinement to non-polynomial basis functions. We demonstrate the practical use of the a posteriori error estimator in performing three-dimensional Kohn-Sham density functional theory calculations for quasi-2D...
Saviane, Chiara; Silver, R Angus
2006-06-15
Synapses play a crucial role in information processing in the brain. Amplitude fluctuations of synaptic responses can be used to extract information about the mechanisms underlying synaptic transmission and its modulation. In particular, multiple-probability fluctuation analysis can be used to estimate the number of functional release sites, the mean probability of release and the amplitude of the mean quantal response from fits of the relationship between the variance and mean amplitude of postsynaptic responses, recorded at different probabilities. To determine these quantal parameters, calculate their uncertainties and the goodness-of-fit of the model, it is important to weight the contribution of each data point in the fitting procedure. We therefore investigated the errors associated with measuring the variance by determining the best estimators of the variance of the variance and have used simulations of synaptic transmission to test their accuracy and reliability under different experimental conditions. For central synapses, which generally have a low number of release sites, the amplitude distribution of synaptic responses is not normal, thus the use of a theoretical variance of the variance based on the normal assumption is not a good approximation. However, appropriate estimators can be derived for the population and for limited sample sizes using a more general expression that involves higher moments and introducing unbiased estimators based on the h-statistics. Our results are likely to be relevant for various applications of fluctuation analysis when few channels or release sites are present.
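The normal-theory variance of the sample variance underlying the discussion above, Var(s^2) = 2*sigma^4/(n-1), can be checked with a quick Monte Carlo sketch. The sample size, variance, and replication count below are arbitrary illustrations; the h-statistic estimators for non-normal amplitude distributions are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

def var_of_sample_variance_normal(sigma2, n):
    # Theoretical Var(s^2) for i.i.d. normal data: 2*sigma^4 / (n - 1)
    return 2.0 * sigma2**2 / (n - 1)

# Monte Carlo check: draw many samples, compute s^2 for each, and compare
# the empirical variance of s^2 with the normal-theory formula.
n, sigma2, reps = 20, 4.0, 20000
s2 = np.array([np.var(rng.normal(0.0, np.sqrt(sigma2), n), ddof=1)
               for _ in range(reps)])
print(s2.var(), var_of_sample_variance_normal(sigma2, n))
```

For non-normal data (the case emphasized in the abstract) the general expression involves the fourth central moment, and the normal-theory formula above is no longer a good approximation.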
Estimation of vapor concentration in a changing environment
Warren, Russell E.; Vanderbeek, Richard G.
2003-08-01
A key limitation on the use of two-wavelength DIAL or its multi-spectral generalization is the unknown spectral structure of the topographically backscattered lidar signals in the absence of the target materials. Although some of the factors responsible for the background spectral structure can be measured in advance, others, such as terrain differences, are highly variable and usually unknown. For applications to tactical reconnaissance and high-altitude surveillance where the background is continuously changing, the inability to account for the background can seriously degrade sensor performance. This study describes a method for estimating both the spectral dependence of the background and the path-integrated concentration, or CL, from the same data set using dual Kalman filtering. The idea is to run parallel filters, each estimating the background or CL using input from the other filter. The approach is illustrated on a variety of synthetic data sets and signal injections into background data collected by the U.S. Army WILDCAT sensor at Dugway Proving Ground.
Nusser, Sarah M.; Fuller, Wayne A.; Guenther, Patricia M.
1995-01-01
The authors have developed a method for estimating the distribution of an unobservable random variable from data that are subject to considerable measurement error and that arise from a mixture of two populations, one having a single-valued distribution and the other having a continuous unimodal distribution. The method requires that at least two positive intakes be recorded for a subset of the subjects in order to estimate the variance components for the measurement error model.
Xue, Hongqi; Wu, Hulin; 10.1214/09-AOS784
2010-01-01
This article considers estimation of constant and time-varying coefficients in nonlinear ordinary differential equation (ODE) models where analytic closed-form solutions are not available. The numerical solution-based nonlinear least squares (NLS) estimator is investigated in this study. A numerical algorithm such as the Runge-Kutta method is used to approximate the ODE solution. The asymptotic properties are established for the proposed estimators considering both numerical error and measurement error. The B-spline is used to approximate the time-varying coefficients, and the corresponding asymptotic theories in this case are investigated under the framework of the sieve approach. Our results show that if the maximum step size of the $p$-order numerical algorithm goes to zero at a rate faster than $n^{-1/(p\wedge 4)}$, the numerical error is negligible compared to the measurement error. This result provides theoretical guidance for selecting the step size in numerical evaluations of ODEs. Moreover, we h...
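A minimal sketch of the numerical-solution-based NLS estimator described above, under simplifying assumptions: a toy exponential-decay ODE y' = -theta*y with a known closed form (used only to generate data), a classical RK4 integrator standing in for the generic Runge-Kutta step, and a grid search in place of a full NLS optimizer.

```python
import numpy as np

def rk4(f, y0, ts):
    """Classical 4th-order Runge-Kutta on a fixed time grid."""
    ys = [y0]
    for t0, t1 in zip(ts[:-1], ts[1:]):
        h, y = t1 - t0, ys[-1]
        k1 = f(t0, y)
        k2 = f(t0 + h / 2, y + h / 2 * k1)
        k3 = f(t0 + h / 2, y + h / 2 * k2)
        k4 = f(t1, y + h * k3)
        ys.append(y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4))
    return np.array(ys)

rng = np.random.default_rng(1)
ts = np.linspace(0.0, 2.0, 21)
theta_true = 0.8
# Observations = true solution + measurement error
data = np.exp(-theta_true * ts) + rng.normal(0.0, 0.01, ts.size)

# NLS by grid search over theta: minimize the sum of squared residuals
# between the data and the RK4-approximated ODE solution.
grid = np.linspace(0.1, 2.0, 191)
sse = [np.sum((data - rk4(lambda t, y: -th * y, 1.0, ts))**2) for th in grid]
theta_hat = grid[int(np.argmin(sse))]
print(theta_hat)  # close to 0.8
```

With the step size h = 0.1 the 4th-order numerical error is far below the measurement noise, which is the regime the article's step-size condition characterizes.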
Directory of Open Access Journals (Sweden)
Jingyan Song
2011-07-01
The star centroid estimation is the most important operation, which directly affects the precision of attitude determination for star sensors. This paper presents a theoretical study of the systematic error introduced by the star centroid estimation algorithm. The systematic error is analyzed through a frequency domain approach and numerical simulations. It is shown that the systematic error consists of the approximation error and the truncation error, which result from the discretization approximation and sampling window limitations, respectively. A criterion for choosing the size of the sampling window to reduce the truncation error is given in this paper. The systematic error can be evaluated as a function of the actual star centroid positions under different Gaussian widths of the star intensity distribution. In order to eliminate the systematic error, a novel compensation algorithm based on least squares support vector regression (LSSVR) with a Radial Basis Function (RBF) kernel is proposed. Simulation results show that when the compensation algorithm is applied to the 5-pixel star sampling window, the accuracy of star centroid estimation is improved from 0.06 to 6 × 10−5 pixels.
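The center-of-gravity centroiding step analyzed above can be sketched as follows, with an assumed Gaussian star image inside a 5x5 sampling window; the residual (estimate minus true position) illustrates the truncation component of the systematic error. The LSSVR compensation itself is not reproduced here, and the sub-pixel position and Gaussian width are arbitrary choices.

```python
import numpy as np

def gaussian_star(cx, cy, sigma, size=5):
    """Noise-free Gaussian star intensity sampled on a size x size window."""
    y, x = np.mgrid[0:size, 0:size]
    return np.exp(-((x - cx)**2 + (y - cy)**2) / (2.0 * sigma**2))

def centroid_cog(img):
    """Center of gravity: intensity-weighted mean pixel coordinate."""
    y, x = np.mgrid[0:img.shape[0], 0:img.shape[1]]
    s = img.sum()
    return (x * img).sum() / s, (y * img).sum() / s

true_x, true_y = 2.3, 1.9
img = gaussian_star(true_x, true_y, sigma=0.8)
est_x, est_y = centroid_cog(img)
print(est_x - true_x, est_y - true_y)  # small systematic (truncation) error
```

The residuals vary with the true sub-pixel position, which is exactly the position-dependent systematic error the paper proposes to model and compensate.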
Littenberg, Tyson B.; Farr, Ben; Coughlin, Scott; Kalogera, Vicky
2016-03-01
Among the most eagerly anticipated opportunities made possible by Advanced LIGO/Virgo are multimessenger observations of compact mergers. Optical counterparts may be short-lived so rapid characterization of gravitational wave (GW) events is paramount for discovering electromagnetic signatures. One way to meet the demand for rapid GW parameter estimation is to trade off accuracy for speed, using waveform models with simplified treatment of the compact objects’ spin. We report on the systematic errors in GW parameter estimation suffered when using different spin approximations to recover generic signals. Component mass measurements can be biased by > 5σ using simple-precession waveforms and in excess of 20σ when non-spinning templates are employed. This suggests that electromagnetic observing campaigns should not take a strict approach to selecting which LIGO/Virgo candidates warrant follow-up observations based on low-latency mass estimates. For sky localization, we find that searched areas are up to a factor of ∼ 2 larger for non-spinning analyses, and are systematically larger for any of the simplified waveforms considered in our analysis. Distance biases for the non-precessing waveforms can be in excess of 100% and are largest when the spin angular momenta are in the orbital plane of the binary. We confirm that spin-aligned waveforms should be used for low-latency parameter estimation at the minimum. Including simple precession, though more computationally costly, mitigates biases except for signals with extreme precession effects. Our results shine a spotlight on the critical need for development of computationally inexpensive precessing waveforms and/or massively parallel algorithms for parameter estimation.
Robust estimation of error covariance functions in GRACE gravity field determination
Behzadpour, Saniya; Mayer-Gürr, Torsten; Flury, Jakob
2016-04-01
The accurate modelling of the stochastic behaviour of the GRACE mission observations is an important task in time-variable gravity field determination. After fitting a model in the least-squares sense, it is necessary to determine whether all the necessary model assumptions, i.e., independence, normality, and homoscedasticity of the residuals, are valid before performing inference. A check of the model assumptions for the range-rate residuals shows that one of the major problems in the range-rate observations is outliers in the data. One way to deal with this problem is to implement a robust estimation procedure to dampen the effect of observations that would be highly influential if least squares were used. In addition to insensitivity to outliers, such a procedure tends to leave the residuals associated with outliers large, thereby making the identification of outliers much easier. Implementation of this procedure using robust error covariance functions, comparison of different robust estimators, e.g., Huber's and Tukey's estimators, and assessment of the detected outliers with respect to temporal and spatial patterns are discussed.
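A robust estimator of the kind compared above can be sketched as iteratively reweighted least squares with Huber weights. The simple linear model, the injected outlier pattern, and the standard tuning constant c = 1.345 are textbook illustrations, not the GRACE range-rate setup.

```python
import numpy as np

def huber_weights(r, c=1.345):
    """Huber weights: 1 inside |r| <= c, c/|r| outside."""
    a = np.abs(r)
    w = np.ones_like(a)
    mask = a > c
    w[mask] = c / a[mask]
    return w

def robust_fit(X, y, iters=20):
    """IRLS with Huber weights; residuals scaled by the MAD."""
    beta = np.linalg.lstsq(X, y, rcond=None)[0]  # ordinary LS start
    for _ in range(iters):
        r = y - X @ beta
        s = np.median(np.abs(r - np.median(r))) / 0.6745  # robust scale
        w = huber_weights(r / s)
        sw = np.sqrt(w)
        beta = np.linalg.lstsq(sw[:, None] * X, sw * y, rcond=None)[0]
    return beta

rng = np.random.default_rng(2)
x = np.linspace(0.0, 1.0, 50)
y = 2.0 + 3.0 * x + rng.normal(0.0, 0.1, 50)
y[::10] += 5.0  # inject outliers
X = np.column_stack([np.ones_like(x), x])
print(robust_fit(X, y))  # near [2, 3] despite the outliers
```

The heavily downweighted points retain large residuals after the fit, which is the outlier-identification property the abstract highlights.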
Al-lela, Omer Qutaiba B; Bahari, Mohd Baidi; Al-abbassi, Mustafa G; Salih, Muhannad R M; Basher, Amena Y
2012-06-01
The immunization status of children is improved by interventions that increase community demand for compulsory and non-compulsory vaccines; among the most important are interventions related to immunization providers. The aim of this study is to evaluate the activities of immunization providers in terms of activity time and cost, to calculate the cost of immunization doses, and to determine the cost of immunization dose errors. A time-motion and cost analysis study design was used. Five public health clinics in Mosul, Iraq participated in the study. Fifty (50) vaccine doses were required to estimate activity time and cost. The micro-costing method was used; time and cost data were collected for each immunization-related activity performed by the clinic staff. A stopwatch was used to measure the duration of activity interactions between the parents and clinic staff. The immunization service cost was calculated by multiplying the average salary/min by the activity time in minutes. 528 immunization cards of Iraqi children were scanned to determine the number and cost of immunization dose errors (extra-immunization doses and invalid doses). The average time for child registration was 6.7 min per immunization dose, and the physician spent more than 10 min per dose. Nurses needed more than 5 min to complete child vaccination. The total cost of immunization activities was 1.67 US$ per immunization dose. The measles vaccine (fifth dose) has a lower price (0.42 US$) than all other immunization doses. The cost of a total of 288 invalid doses was 744.55 US$, and the cost of a total of 195 extra immunization doses was 503.85 US$. The time spent on physicians' activities was longer than that spent on registrars' and nurses' activities. The physician total cost was higher than the registrar cost and the nurse cost. The total immunization cost will increase by about 13.3% owing to dose errors.
Efficient Solar Scene Wavefront Estimation with Reduced Systematic and RMS Errors: Summary
Anugu, N.; Garcia, P.
2016-04-01
Wave front sensing for solar telescopes is commonly implemented with Shack-Hartmann sensors. Correlation algorithms are usually used to estimate the extended-scene Shack-Hartmann sub-aperture image shifts or slopes. The image shift is computed by correlating a reference sub-aperture image with the target distorted sub-aperture image. The pixel position where the maximum correlation is located gives the image shift in integer pixel coordinates. Sub-pixel precision image shifts are computed by applying a peak-finding algorithm to the correlation peak Poyneer (2003); Löfdahl (2010). However, the peak-finding algorithm results are usually biased towards the integer pixels; these errors are known as systematic bias errors Sjödahl (1994). They are caused by the low pixel sampling of the images. The amplitude of these errors depends on the type of correlation algorithm and the type of peak-finding algorithm being used. To study the systematic errors in detail, solar sub-aperture synthetic images are constructed using a Swedish Solar Telescope solar granulation image. The performance of the cross-correlation algorithm in combination with different peak-finding algorithms is investigated. The studied peak-finding algorithms are: parabola Poyneer (2003); quadratic polynomial Löfdahl (2010); threshold center of gravity Bailey (2003); Gaussian Nobach & Honkanen (2005); and pyramid Bailey (2003). The systematic error study reveals that the pyramid fit is the most robust to pixel-locking effects. The RMS error analysis reveals that the threshold center of gravity behaves better at low SNR, although the systematic errors in the measurement are large. It is found that no algorithm is best for both systematic and RMS error reduction. To overcome this problem, a new solution is proposed. In this solution, the image sampling is increased prior to the actual correlation matching. The method is realized in two steps to improve its
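The parabola peak-finding step can be sketched as follows: fit a parabola through the correlation maximum and its two neighbours along one axis and take the vertex as the sub-pixel shift. The synthetic correlation values below come from an exact quadratic, so recovery is exact here; real correlation surfaces deviate from a parabola, which is what produces the pixel-locking bias discussed above.

```python
def parabola_subpixel(c_m1, c_0, c_p1):
    """Vertex offset of a parabola through (-1, c_m1), (0, c_0), (1, c_p1)."""
    denom = c_m1 - 2.0 * c_0 + c_p1
    return 0.5 * (c_m1 - c_p1) / denom

# Correlation values around a peak whose true sub-pixel location is +0.25 px.
true_shift = 0.25
c = lambda x: 1.0 - (x - true_shift)**2  # exact quadratic surface
print(parabola_subpixel(c(-1), c(0), c(1)))  # 0.25
```

For a non-quadratic peak the same three-point fit returns an estimate pulled toward 0 (the integer pixel), which is the systematic bias the study quantifies per algorithm.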
Abuzaid, Abdulrahman I.
2014-09-01
Efficient receiver designs for cooperative communication systems are becoming increasingly important. In previous work, cooperative networks communicated with the use of $L$ relays. As the receiver is constrained, it can only process $U$ out of $L$ relays. Channel shortening and reduced-rank techniques were employed to design the preprocessing matrix. In this paper, a receiver structure is proposed which combines the joint iterative optimization (JIO) algorithm and our proposed threshold selection criteria. This receiver structure assists in determining the optimal $U_{opt}$. Furthermore, this receiver provides the freedom to choose $U \leq U_{opt}$ for each frame depending upon the tolerable difference allowed for the mean square error (MSE). Our study and simulation results show that by choosing an appropriate threshold, it is possible to gain in terms of complexity savings without affecting the BER performance of the system. Furthermore, the effect of channel estimation errors on the MSE performance of the amplify-and-forward (AF) cooperative relaying system is investigated.
Bahşı, Ayşe Kurt; Yalçınbaş, Salih
2016-01-01
In this study, the Fibonacci collocation method based on Fibonacci polynomials is presented to solve fractional diffusion equations with variable coefficients. The fractional derivatives are described in the Caputo sense. The method is derived by expanding the approximate solution in Fibonacci polynomials; with this expansion, the fractional equation can be reduced to a set of linear algebraic equations. Also, an error estimation algorithm based on the residual functions is presented for this method, and the approximate solutions are improved by using it. If the exact solution of the problem is not known, the absolute error function of the problem can be approximately computed by using the Fibonacci polynomial solution. By using this error estimation function, we can find improved solutions which are more efficient than direct numerical solutions. Numerical examples, figures, tables and comparisons are presented to show the efficiency and usability of the proposed method.
Random weighting error estimation for the inversion result of finite-fault rupture history
Ai, Yin-Shuang; Zheng, Tian-Yu; He, Yu-Mei
1999-07-01
Since non-unique solutions exist in the inversion for finite-fault rupture history, the random weighting method is used in this paper to estimate the error of the inversion results. The resolution distributions of slip amplitude, rake, rupture time and rise time on the finite fault were deduced quantitatively by model calculation. Using the random weighting method, the inversion results for the Taiwan Strait earthquake and the Myanmar-China boundary earthquake show that the parameters related to the rupture centers of the two events have the highest resolution and that these solutions are the most reliable, whereas the resolution of the slip amplitudes and rise times on the finite-fault boundary is low.
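The random weighting idea can be sketched on a toy linear inversion: re-solve the least-squares problem many times with random (Dirichlet) observation weights and read the error estimate off the spread of the solutions. The linear model below is a stand-in for illustration only, not the authors' finite-fault parameterization.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy "inversion": fit intercept and slope to noisy observations.
n = 40
x = np.linspace(0.0, 1.0, n)
X = np.column_stack([np.ones(n), x])
y = 1.0 + 0.5 * x + rng.normal(0.0, 0.05, n)

betas = []
for _ in range(500):
    w = rng.dirichlet(np.ones(n)) * n        # random weights with mean 1
    sw = np.sqrt(w)
    betas.append(np.linalg.lstsq(sw[:, None] * X, sw * y, rcond=None)[0])
betas = np.array(betas)

# Mean ~ the point estimate; std ~ the random-weighting error estimate.
print(betas.mean(axis=0), betas.std(axis=0))
```

Well-resolved parameters show a tight spread across the randomly weighted solutions, which mirrors how the paper reads resolution off the finite-fault parameters.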
GREAT3 results I: systematic errors in shear estimation and the impact of real galaxy morphology
Mandelbaum, Rachel; Armstrong, Robert; Bard, Deborah; Bertin, Emmanuel; Bosch, James; Boutigny, Dominique; Courbin, Frederic; Dawson, William A; Donnarumma, Annamaria; Conti, Ian Fenech; Gavazzi, Raphael; Gentile, Marc; Gill, Mandeep; Hogg, David W; Huff, Eric M; Jee, M James; Kacprzak, Tomasz; Kilbinger, Martin; Kuntzer, Thibault; Lang, Dustin; Luo, Wentao; March, Marisa C; Marshall, Philip J; Meyers, Joshua E; Miller, Lance; Miyatake, Hironao; Nakajima, Reiko; Mboula, Fred Maurice Ngole; Nurbaeva, Guldariya; Okura, Yuki; Paulin-Henriksson, Stephane; Rhodes, Jason; Schneider, Michael D; Shan, Huanyuan; Sheldon, Erin S; Simet, Melanie; Starck, Jean-Luc; Sureau, Florent; Tewes, Malte; Adami, Kristian Zarb; Zhang, Jun; Zuntz, Joe
2014-01-01
We present first results from the third GRavitational lEnsing Accuracy Testing (GREAT3) challenge, the third in a sequence of challenges for testing methods of inferring weak gravitational lensing shear distortions from simulated galaxy images. GREAT3 was divided into experiments to test three specific questions, and included simulated space- and ground-based data with constant or cosmologically-varying shear fields. The simplest (control) experiment included parametric galaxies with a realistic distribution of signal-to-noise, size, and ellipticity, and a complex point spread function (PSF). The other experiments tested the additional impact of realistic galaxy morphology, multiple exposure imaging, and the uncertainty about a spatially-varying PSF; the last two questions will be explored in Paper II. The 24 participating teams competed to estimate lensing shears to within systematic error tolerances for upcoming Stage-IV dark energy surveys, making 1525 submissions overall. GREAT3 saw considerable variety a...
Dynamic Programming and Error Estimates for Stochastic Control Problems with Maximum Cost
Energy Technology Data Exchange (ETDEWEB)
Bokanowski, Olivier, E-mail: boka@math.jussieu.fr [Laboratoire Jacques-Louis Lions, Université Paris-Diderot (Paris 7) UFR de Mathématiques - Bât. Sophie Germain (France); Picarelli, Athena, E-mail: athena.picarelli@inria.fr [Projet Commands, INRIA Saclay & ENSTA ParisTech (France); Zidani, Hasnaa, E-mail: hasnaa.zidani@ensta.fr [Unité de Mathématiques appliquées (UMA), ENSTA ParisTech (France)
2015-02-15
This work is concerned with stochastic optimal control for a running maximum cost. A direct approach based on dynamic programming techniques is studied, leading to the characterization of the value function as the unique viscosity solution of a second order Hamilton–Jacobi–Bellman (HJB) equation with an oblique derivative boundary condition. A general numerical scheme is proposed and a convergence result is provided. Error estimates are obtained for the semi-Lagrangian scheme. These results apply to the case of lookback options in finance. Moreover, optimal control problems with maximum cost arise in the characterization of the reachable sets for a system of controlled stochastic differential equations. Some numerical simulations on examples of reachability analysis are included to illustrate our approach.
Regularization and error estimates for asymmetric backward nonhomogeneous heat equations in a ball
Directory of Open Access Journals (Sweden)
Le Minh Triet
2016-09-01
The backward heat problem (BHP) has been researched by many authors in the last five decades; it consists in recovering the initial distribution from the final temperature data. Some articles [1,2,3] treat the axi-symmetric BHP in a disk, but studies in spherical coordinates are rare. We therefore study a backward problem for the nonhomogeneous heat equation associated with asymmetric final data in a ball. In this article, we modify the quasi-boundary value method to construct a stable approximate solution for this problem. As a result, we obtain a regularized solution and sharp estimates for its error. Finally, a numerical experiment is provided to illustrate our method.
Lateral velocity estimation bias due to beamforming delay errors (Conference Presentation)
Rodriguez-Molares, Alfonso; Fadnes, Solveig; Swillens, Abigail; Løvstakken, Lasse
2017-03-01
An artefact has recently been reported [1,2] in the estimation of the lateral blood velocity using speckle tracking. This artefact appears as a net velocity bias in the presence of strong spatial velocity gradients, such as those that occur at the edges of the filling jets in the heart. Even though the artefact has been found both in vitro and in simulated data, its causes remain undescribed. Here we demonstrate that a potential source of this artefact can be traced to small errors in the beamforming setup. By inserting a small offset in the beamforming delay, one can artificially create a net lateral movement of the speckle in areas of high velocity gradient. That offset does not have a strong impact on the image quality and can easily go undetected.
Yu, Guozhu; Carstensen, Carsten
2011-01-01
Assumed stress hybrid methods are known to improve the performance of standard displacement-based finite elements and are widely used in computational mechanics. The methods are based on the Hellinger-Reissner variational principle for the displacement and stress variables. This work analyzes two existing 4-node hybrid stress quadrilateral elements due to Pian and Sumihara [Int. J. Numer. Meth. Engng, 1984] and due to Xie and Zhou [Int. J. Numer. Meth. Engng, 2004], which behave robustly in numerical benchmark tests. For the finite elements, the isoparametric bilinear interpolation is used for the displacement approximation, while different piecewise-independent 5-parameter modes are employed for the stress approximation. We show that the two schemes are free from Poisson-locking, in the sense that the error bound in the a priori estimate is independent of the relevant Lame constant $\\lambda$. We also establish the equivalence of the methods to two assumed enhanced strain schemes. Finally, we derive reliable ...
Thermal hydraulic simulations, error estimation and parameter sensitivity studies in Drekar::CFD
Energy Technology Data Exchange (ETDEWEB)
Smith, Thomas Michael; Shadid, John N; Pawlowski, Roger P; Cyr, Eric C; Wildey, Timothy Michael
2014-01-01
This report describes work directed towards completion of the Thermal Hydraulics Methods (THM) CFD Level 3 Milestone THM.CFD.P7.05 for the Consortium for Advanced Simulation of Light Water Reactors (CASL) Nuclear Hub effort. The focus of this milestone was to demonstrate the thermal hydraulics and adjoint-based error estimation and parameter sensitivity capabilities in the CFD code Drekar::CFD. This milestone builds upon the capabilities demonstrated in three earlier milestones: THM.CFD.P4.02 [12], completed March 31, 2012; THM.CFD.P5.01 [15], completed June 30, 2012; and THM.CFD.P5.01 [11], completed October 31, 2012.
Todling, Ricardo
2015-01-01
Recently, this author studied an approach to the estimation of system error based on combining observation residuals derived from a sequential filter and fixed lag-1 smoother. While extending the methodology to a variational formulation, experimenting with simple models and making sure consistency was found between the sequential and variational formulations, the limitations of the residual-based approach came clearly to the surface. This note uses the sequential assimilation application to simple nonlinear dynamics to highlight the issue. Only when some of the underlying error statistics are assumed known is it possible to estimate the unknown component. In general, when considerable uncertainties exist in the underlying statistics as a whole, attempts to obtain separate estimates of the various error covariances are bound to lead to misrepresentation of errors. The conclusions are particularly relevant to present-day attempts to estimate observation-error correlations from observation residual statistics. A brief illustration of the issue is also provided by comparing estimates of error correlations derived from a quasi-operational assimilation system and a corresponding Observing System Simulation Experiments framework.
Butt, Nathalie; Slade, Eleanor; Thompson, Jill; Malhi, Yadvinder; Riutta, Terhi
2013-06-01
A typical way to quantify aboveground carbon in forests is to measure tree diameters and use species-specific allometric equations to estimate biomass and carbon stocks. Using "citizen scientists" to collect data that are usually time-consuming and labor-intensive can play a valuable role in ecological research. However, data validation, such as establishing the sampling error in volunteer measurements, is a crucial, but little studied, part of utilizing citizen science data. The aims of this study were to (1) evaluate the quality of tree diameter and height measurements carried out by volunteers compared to expert scientists and (2) estimate how sensitive carbon stock estimates are to these measurement sampling errors. Using all diameter data measured with a diameter tape, the volunteer mean sampling error (difference between repeated measurements of the same stem) was 9.9 mm, and the expert sampling error was 1.8 mm. Excluding those sampling errors > 1 cm, the mean sampling errors were 2.3 mm (volunteers) and 1.4 mm (experts) (this excluded 14% [volunteer] and 3% [expert] of the data). The sampling error in diameter measurements had a small effect on the biomass estimates of the plots: a volunteer (expert) diameter sampling error of 2.3 mm (1.4 mm) translated into 1.7% (0.9%) change in the biomass estimates calculated from species-specific allometric equations based upon diameter. Height sampling error had a dependent relationship with tree height. Including height measurements in biomass calculations compounded the sampling error markedly; the impact of volunteer sampling error on biomass estimates was +/- 15%, and the expert range was +/- 9%. Using dendrometer bands, used to measure growth rates, we calculated that the volunteer (vs. expert) sampling error was 0.6 mm (vs. 0.3 mm), which is equivalent to a difference in carbon storage of +/- 0.011 kg C/yr (vs. +/- 0.002 kg C/yr) per stem. Using a citizen science model for monitoring carbon stocks not only has
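The diameter-to-biomass error propagation above follows directly from the power-law form of allometric equations, B = a*D^b, for which the relative biomass change is roughly b times the relative diameter error. The coefficients below are generic illustrations, not the study's species-specific equations; the 2.3 mm sampling error is the volunteer figure from the abstract.

```python
def biomass(D_cm, a=0.0673, b=2.5):
    """Illustrative power-law allometry B = a * D^b (placeholder a, b)."""
    return a * D_cm**b

D = 30.0          # stem diameter in cm (assumed example tree)
dD_mm = 2.3       # volunteer mean diameter sampling error from the abstract
rel_change = (biomass(D + dD_mm / 10.0) - biomass(D)) / biomass(D)
print(rel_change)  # roughly b * (dD / D): about a 2% effect
```

This back-of-envelope number is the same order as the 1.7% biomass-estimate change reported for the volunteer diameter error, and it shows why the steeper height dependence compounds the error so much more.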
Asymptotically exact Discontinuous Galerkin error estimates for linear symmetric hyperbolic systems
Adjerid, S.; Weinhart, T.
2014-01-01
We present an a posteriori error analysis for the discontinuous Galerkin discretization error of first-order linear symmetric hyperbolic systems of partial differential equations with smooth solutions. We perform a local error analysis by writing the local error as a series and showing that its lead
Bayram, Adem; Kankal, Murat; Onsoy, Hizir
2012-07-01
Suspended sediment concentration (SSC) is generally determined from direct measurement of the sediment concentration of a river or from sediment transport equations. Direct measurement is very costly and cannot be conducted at all river gauge stations. Correct estimation of the suspended sediment amount carried by a river is therefore very important in terms of water pollution, channel navigability, reservoir filling, fish habitat, river aesthetics and scientific interest. This study investigates the feasibility of using turbidity as a surrogate for SSC, as in situ turbidity meters are being increasingly used to generate continuous records of SSC in rivers. To this end, regression analysis (RA) and artificial neural networks (ANNs) were employed to estimate SSC from in situ turbidity measurements. The SSC was first experimentally determined for surface water samples collected from six monitoring stations along the main branch of the stream Harsit, Eastern Black Sea Basin, Turkey. There were 144 data points for each variable, obtained on a fortnightly basis between March 2009 and February 2010. In the ANN method, 108, 24 and 12 of the 144 data points were used for the training, testing and validation sets, respectively. The smallest mean absolute error (MAE) and root mean square error (RMSE) values for the validation set, 11.40 and 17.87 respectively, were obtained with the ANN method, compared with 19.12 and 25.09 for RA. It was concluded that turbidity can be a surrogate for SSC in streams, and that the ANN method provided acceptable results for the estimation of SSC.
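The RA baseline above amounts to an ordinary least-squares fit of SSC against turbidity, scored with MAE and RMSE. The sketch below uses synthetic data of the same size (144 samples); the slope, intercept, noise level, and units are assumptions for illustration, not the Harsit measurements.

```python
import numpy as np

rng = np.random.default_rng(4)
turb = rng.uniform(5.0, 200.0, 144)                   # turbidity, synthetic
ssc = 1.1 * turb + 5.0 + rng.normal(0.0, 10.0, 144)   # SSC, synthetic

# Ordinary least squares: ssc ~ slope * turb + intercept
A = np.column_stack([turb, np.ones_like(turb)])
slope, intercept = np.linalg.lstsq(A, ssc, rcond=None)[0]

pred = slope * turb + intercept
mae = np.mean(np.abs(ssc - pred))
rmse = np.sqrt(np.mean((ssc - pred)**2))
print(slope, intercept, mae, rmse)
```

The ANN alternative in the study replaces the linear map with a trained network but is scored with exactly these two error measures on a held-out validation set.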
Estimating microalgae Synechococcus nidulans daily biomass concentration using neuro-fuzzy network
Directory of Open Access Journals (Sweden)
Vitor Badiale Furlong
2013-02-01
In this study, a neuro-fuzzy estimator was developed for the estimation of the biomass concentration of the microalga Synechococcus nidulans from initial batch concentrations, aiming to predict daily productivity. Nine replica experiments were performed. Growth was monitored daily through the optical density of the culture medium and kept constant up to the end of the exponential phase. The network training followed a full 3³ factorial design, in which the factors were the number of days in the entry vector (3, 5 and 7 days), the number of clusters (10, 30 and 50) and the internal weight softening parameter Sigma (0.30, 0.45 and 0.60). These factors were evaluated against the sum of the quadratic errors in the validations. The validations had 24 (A) and 18 (B) days of culture growth. The validations demonstrated that in long-term experiments (Validation A) the use of a few clusters and a high Sigma is necessary. However, in short-term experiments (Validation B), Sigma did not influence the result. The optimum point occurred with 3 days in the entry vector, 10 clusters and 0.60 Sigma, and the mean determination coefficient was 0.95. The neuro-fuzzy estimator proved to be a credible alternative for predicting microalgal growth.
Directory of Open Access Journals (Sweden)
Thomas P Eisele
Nationally representative household surveys are increasingly relied upon to measure maternal, newborn, and child health (MNCH) intervention coverage at the population level in low- and middle-income countries. Surveys are the best tool we have for this purpose and are central to national and global decision making. However, all survey point estimates have a certain level of error (total survey error), comprising sampling and non-sampling error, both of which must be considered when interpreting survey results for decision making. In this review, we discuss the importance of considering these errors when interpreting MNCH intervention coverage estimates derived from household surveys, using relevant examples from national surveys to provide context. Sampling error is usually thought of as the precision of a point estimate and is represented by 95% confidence intervals, which are measurable. Confidence intervals can inform judgments about whether estimated parameters are likely to be different from the real value of a parameter. We recommend, therefore, that confidence intervals for key coverage indicators should always be provided in survey reports. By contrast, the direction and magnitude of non-sampling error is almost always unmeasurable, and therefore unknown. Information error and bias are the most common sources of non-sampling error in household survey estimates, and we recommend that they always be carefully considered when interpreting MNCH intervention coverage based on survey data. Overall, we recommend that future research on measuring MNCH intervention coverage should focus on refining and improving survey-based coverage estimates to develop a better understanding of how results should be interpreted and used.
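The measurable sampling-error component discussed above can be sketched as a 95% confidence interval for a coverage proportion, optionally inflated by a design effect for cluster sampling. The coverage level, sample size, and design effect below are illustrative assumptions, not figures from any particular survey.

```python
import math

def coverage_ci(p, n, deff=1.0, z=1.96):
    """Wald 95% CI for a proportion, with a design-effect inflation."""
    se = math.sqrt(p * (1.0 - p) / n) * math.sqrt(deff)
    return max(0.0, p - z * se), min(1.0, p + z * se)

# Example: 62% estimated coverage from 1200 respondents, design effect 2.
lo, hi = coverage_ci(p=0.62, n=1200, deff=2.0)
print(lo, hi)
```

The non-sampling component (information error, bias) has no analogous formula, which is the review's point about why it must be judged qualitatively.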
Estimating the Concentration of Large Raindrops from Polarimetric Radar and Disdrometer Observations
Carey, Lawrence D.; Petersen, Walter A.; Gatlin, Patrick N.
2013-01-01
Estimation of rainfall integral parameters, including radar observables, and empirical relations between them are sensitive to the truncation of the drop size distribution (DSD), particularly at the large drop end. The sensitivity of rainfall integral parameters to the maximum drop diameter (D(sub max)) is exacerbated at C-band since resonance effects are pronounced for large drops in excess of 5 mm diameter (D). Due to sampling limitations, it is often difficult to reliably estimate D(sub max) with disdrometers. The resulting uncertainties in D(sub max) potentially increase errors in radar retrieval methods, particularly at C-band, that rely on disdrometer observations for DSD input to radar models. In fact, D(sub max) is typically an assumed DSD parameter in the development of radar retrieval methods. Because of these very uncertainties, it is difficult to independently confirm disdrometer estimates of D(sub max) with polarimetric radar observations. A couple of approaches can be taken to reduce uncertainty in large drop measurement. Longer integration times can be used for the collection of larger disdrometer samples. However, integration periods must be consistent with a radar resolution volume (RRV) and the temporal and spatial scales of the physical processes affecting the DSD therein. Multiple co-located disdrometers can be combined into a network to increase the sample size within a RRV. However, over a reasonable integration period, a single disdrometer sample volume is many orders of magnitude less than a RRV, so it is not practical to devise a network of disdrometers that has an equivalent volume to a typical RRV. Since knowledge of DSD heterogeneity and large drop occurrence in time and space is lacking, the specific accuracy or even general representativeness of disdrometer-based D(sub max) and large drop concentration estimates within a RRV are currently unknown. To address this complex issue, we begin with a simpler question. Is the frequency of
Zhang, Yi-bo; Zhang, Yun-lin; Zha, Yong; Shi, Kun; Zhou, Yong-qiang; Wang, Ming-zhu
2015-01-01
Total suspended matter (TSM) plays an important role in determining the underwater light climate, which in turn affects lake primary production. Therefore, TSM concentration is an important parameter for lake water quality and water environment assessment. This study developed an empirical estimation model and presented the spatial distribution of TSM concentration for the relatively clear Xin'anjiang Reservoir based on in situ ground data and matching Landsat 8 data. The results showed that Band 2, Band 3 and Band 8 of the Landsat 8 data were the sensitive bands for TSM estimation in Xin'anjiang Reservoir, with linear determination coefficients of 0.37, 0.51 and 0.42, respectively. However, the linear models using Band 2, Band 3 and Band 8 could not give a reasonable and satisfying estimation accuracy. Therefore, a three-band combination estimation model of TSM concentration using Band 2, Band 3 and Band 8 was calibrated and validated to improve the TSM concentration estimation accuracy. The determination coefficient, mean relative error and root mean square error were 0.92, 11% and 0.16 mg x L(-1), respectively, for the three-band combination model. Overall, the TSM concentration was relatively low in Xin'anjiang Reservoir, ranging from 0.04 to 24.54 mg x L(-1) with a mean value of 2.19 mg x L(-1). Higher TSM concentrations were distributed in the nearshore zones and small bays such as Fengshuling bay, Fenkou bay, Weiping bay, Anyang bay, Dashu bay and Linqi bay, which were affected by input rivers, rainfall and human dredging activity. Therefore, this study demonstrated that the combination of three bands of Landsat 8 data could be used to estimate the TSM concentration in the relatively clear Xin'anjiang Reservoir.
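The calibration workflow described above (fit a band combination against in situ TSM, then score it with the determination coefficient, mean relative error and RMSE) can be sketched as follows; the reflectance and TSM values and the linear combination form are illustrative assumptions, not the paper's data:

```python
import numpy as np

# Hypothetical in-situ TSM samples (mg/L) and matching Landsat 8
# reflectances for Band 2, Band 3 and Band 8 (values are illustrative).
refl = np.array([
    [0.021, 0.018, 0.015],
    [0.035, 0.030, 0.028],
    [0.049, 0.044, 0.041],
    [0.062, 0.058, 0.055],
    [0.080, 0.077, 0.074],
])
tsm = np.array([0.9, 2.1, 3.4, 4.6, 6.2])

# Calibrate a linear three-band combination TSM = a*B2 + b*B3 + c*B8 + d
# by ordinary least squares (the exact combination used in the paper is
# not given here, so this form is an assumption).
X = np.column_stack([refl, np.ones(len(tsm))])
coef, *_ = np.linalg.lstsq(X, tsm, rcond=None)
pred = X @ coef

# Report calibration skill with the same metrics as the abstract.
ss_res = np.sum((tsm - pred) ** 2)
ss_tot = np.sum((tsm - tsm.mean()) ** 2)
r2 = 1.0 - ss_res / ss_tot                     # determination coefficient
mre = np.mean(np.abs(pred - tsm) / tsm)        # mean relative error
rmse = np.sqrt(np.mean((pred - tsm) ** 2))     # root mean square error
print(r2, mre, rmse)
```

In practice the specific band combination (ratios, differences, etc.) would be chosen by testing candidates against the calibration set and validating on held-out samples.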
Granato, Gregory E.; Smith, Kirk P.
1999-01-01
Discrete or composite samples of highway runoff may not adequately represent in-storm water-quality fluctuations because continuous records of water stage, specific conductance, pH, and temperature of the runoff indicate that these properties fluctuate substantially during a storm. Continuous records of water-quality properties can be used to maximize the information obtained about the stormwater runoff system being studied and can provide the context needed to interpret analyses of water samples. Concentrations of the road-salt constituents calcium, sodium, and chloride in highway runoff were estimated from theoretical and empirical relations between specific conductance and the concentrations of these ions. These relations were examined using the analysis of 233 highway-runoff samples collected from August 1988 through March 1995 at four highway-drainage monitoring stations along State Route 25 in southeastern Massachusetts. Theoretically, the specific conductance of a water sample is the sum of the individual conductances attributed to each ionic species in solution, that is, the product of the concentration of each ion in milliequivalents per liter (meq/L) multiplied by its equivalent ionic conductance at infinite dilution, thereby establishing the principle of superposition. Superposition provides an estimate of actual specific conductance that is within measurement error throughout the conductance range of many natural waters, with errors of less than ±5 percent below 1,000 microsiemens per centimeter (µS/cm) and ±10 percent between 1,000 and 4,000 µS/cm if all major ionic constituents are accounted for. A semi-empirical method (adjusted superposition) was used to adjust for concentration effects, that is, superposition-method prediction errors at high and low concentrations, and to relate measured specific conductance to that calculated using superposition. The adjusted superposition method, which was developed to interpret the State Route 25 highway-runoff records, accounts for
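The superposition principle described above is straightforward to compute: sum, over the ions, concentration in meq/L times equivalent ionic conductance. A minimal sketch using standard 25 °C textbook conductance values for the road-salt ions; the sample concentrations are hypothetical:

```python
# Equivalent ionic conductances at infinite dilution, 25 degrees C
# (µS/cm per meq/L); standard textbook values.
LAMBDA = {"Ca": 59.5, "Na": 50.1, "Cl": 76.3}

def superposition_sc(conc_meq_per_l):
    """Estimate specific conductance (µS/cm) by superposition: the sum
    over ions of concentration (meq/L) times the equivalent ionic
    conductance at infinite dilution."""
    return sum(conc_meq_per_l[ion] * LAMBDA[ion] for ion in conc_meq_per_l)

# Hypothetical runoff sample dominated by road salt and calcium.
sample = {"Ca": 1.2, "Na": 4.0, "Cl": 5.1}
sc = superposition_sc(sample)
print(round(sc, 1))  # → 660.9
```

As the abstract notes, this simple sum stays within a few percent of measured conductance below about 1,000 µS/cm; the adjusted-superposition step corrects the residual concentration-dependent bias.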
Silva, Felipe O; Hemerly, Elder M; Leite Filho, Waldemar C
2017-02-23
This paper presents the second part of a study aiming at the error state selection in Kalman filters applied to the stationary self-alignment and calibration (SSAC) problem of strapdown inertial navigation systems (SINS). The observability properties of the system are systematically investigated, and the number of unobservable modes is established. Through the analytical manipulation of the full SINS error model, the unobservable modes of the system are determined, and the SSAC error states (except the velocity errors) are proven to be individually unobservable. The estimability of the system is determined through the examination of the major diagonal terms of the covariance matrix and their eigenvalues/eigenvectors. Filter order reduction based on observability analysis is shown to be inadequate, and several misconceptions regarding SSAC observability and estimability deficiencies are removed. As the main contributions of this paper, we demonstrate that, except for the position errors, all error states can be minimally estimated in the SSAC problem and, hence, should not be removed from the filter. Corroborating the conclusions of the first part of this study, a 12-state Kalman filter is found to be the optimal error state selection for SSAC purposes. Results from simulated and experimental tests support the outlined conclusions.
Macbeth, Gilbert M; Broderick, Damien; Ovenden, Jennifer R; Buckworth, Rik C
2011-11-01
Genotypes produced from samples collected non-invasively in harsh field conditions often lack the full complement of data from the selected microsatellite loci. The application of genetic mark-recapture methodology to wildlife species can therefore be prone to misidentifications, leading both to 'true non-recaptures' being falsely accepted as recaptures (Type I errors) and to 'true recaptures' going undetected (Type II errors). Here we present a new likelihood method that allows every pairwise genotype comparison to be evaluated independently. We apply this method to determine the total number of recaptures by estimating and optimising the balance between Type I errors and Type II errors. We show through simulation that the standard error of recapture estimates can be minimised through our algorithms. Interestingly, the precision of our recapture estimates actually improved when we included individuals with missing genotypes, as this increased the number of pairwise comparisons, potentially uncovering more recaptures. Simulations suggest that the method is tolerant to error rates of up to 5% per locus and can theoretically work in datasets with as few as 60% of loci genotyped. Our methods can be implemented in datasets where standard mismatch analyses fail to distinguish recaptures. Finally, we show that by assigning a low Type I error rate to our matching algorithms we can generate a dataset of individuals of known capture histories that is suitable for downstream analysis with traditional mark-recapture methods.
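A toy version of such a pairwise comparison, which skips missing loci and tolerates a fixed number of allelic mismatches, illustrates the idea; this is a simplified mismatch rule, not the authors' likelihood method, and the tolerance parameter stands in for their optimised Type I/Type II balance:

```python
def pairwise_match(g1, g2, max_mismatches=1):
    """Compare two multilocus genotypes, ignoring loci missing in either
    sample (None), and declare a recapture if the number of mismatching
    loci does not exceed a chosen tolerance. Raising the tolerance trades
    Type II errors (missed recaptures) for Type I errors (false matches)."""
    shared = [(a, b) for a, b in zip(g1, g2)
              if a is not None and b is not None]
    if not shared:
        return False  # no informative loci to compare
    mismatches = sum(1 for a, b in shared if a != b)
    return mismatches <= max_mismatches

# Two samples of the same individual: one allele-scoring error at the
# second locus, one missing locus in the second capture.
cap1 = [(101, 103), (98, 98), (120, 124), (87, 91)]
cap2 = [(101, 103), (98, 100), None, (87, 91)]
print(pairwise_match(cap1, cap2, max_mismatches=1))  # → True
```

A strict zero-mismatch rule would miss this recapture, which is exactly the failure mode of standard mismatch analyses that the likelihood approach addresses.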
RESIDUAL A POSTERIORI ERROR ESTIMATE OF A NEW TWO-LEVEL METHOD FOR STEADY NAVIER-STOKES EQUATIONS
Institute of Scientific and Technical Information of China (English)
Chunfeng REN; Yichen MA
2006-01-01
Residual-based a posteriori error estimates for conforming finite element solutions of the incompressible Navier-Stokes equations, computed with a new two-level method different from that of Volker John, are derived. The a posteriori error estimate contains additional terms in comparison to the estimate for the solution obtained by the standard finite element method. The importance of the additional terms in the error estimates is investigated by studying their asymptotic behavior. For optimally scaled meshes, these bounds are not of higher order than the convergence of the discrete solution. The two-level method aims to solve the nonlinear problem on a coarse grid with less computational work, and then to solve the linear problem on a fine grid, which is superior to the usual finite element method of solving a similar nonlinear problem on the fine grid.
On the error of estimating the sparsest solution of underdetermined linear systems
Babaie-Zadeh, Massoud; Mohimani, Hosein
2011-01-01
Let A be an n by m matrix with m > n, and suppose that the underdetermined linear system As = x admits a sparse solution s0 for which ||s0||_0 < 1/2 spark(A). Such a sparse solution is unique due to a well-known uniqueness theorem. Suppose now that we have somehow obtained a solution s_hat as an estimate of s0, and suppose that s_hat is only 'approximately sparse', that is, many of its components are very small and nearly zero, but not mathematically equal to zero. Is such a solution necessarily close to the true sparsest solution? More generally, is it possible to construct an upper bound on the estimation error ||s_hat - s0||_2 without knowing s0? The answer is positive, and in this paper we construct such a bound based on the minimal singular values of submatrices of A. We will also state a tight bound, which is more complicated but, besides being tight, enables us to study the case of random dictionaries and obtain probabilistic upper bounds. We will also study the noisy case, that is, where x = As + n. Moreover, we will s...
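A special case of such a bound is easy to verify numerically: if the error s_hat - s0 happens to be supported on a known index set T, then ||A(s_hat - s0)||_2 >= sigma_min(A_T) ||s_hat - s0||_2, so the observable residual divided by the minimal singular value of the support submatrix upper-bounds the unobservable error. The sketch below checks only this simplified case; the paper's bound covers the general setting without knowing the support:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, k = 8, 20, 3

# Random Gaussian dictionary and a k-sparse ground truth s0.
A = rng.standard_normal((n, m))
s0 = np.zeros(m)
support = rng.choice(m, size=k, replace=False)
s0[support] = rng.standard_normal(k)
x = A @ s0

# An 'approximately sparse' estimate: s0 plus a small perturbation
# confined to the same support (a simplifying assumption).
s_hat = s0.copy()
s_hat[support] += 1e-3 * rng.standard_normal(k)

# ||A(s_hat - s0)|| >= sigma_min(A_T) * ||s_hat - s0|| when the error
# is supported on T, so residual / sigma_min(A_T) bounds the error.
sigma_min = np.linalg.svd(A[:, support], compute_uv=False).min()
bound = np.linalg.norm(A @ s_hat - x) / sigma_min
err = np.linalg.norm(s_hat - s0)
print(err <= bound + 1e-12)  # → True
```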
Bounding the error on bottom estimation for multi-angle swath bathymetry sonar
Mullins, Geoff K.; Bird, John S.
2005-04-01
With the recent introduction of multi-angle swath bathymetry (MASB) sonar to the commercial marketplace (e.g., Benthos Inc., C3D sonar, 2004), additions must be made to the current sonar lexicon. The correct interpretation of measurements made with MASB sonar, which uses filled transducer arrays to compute angle-of-arrival (AOA) information from the backscattered signal, is essential not only for mapping, but for applications such as statistical bottom classification. In this paper it is shown that, aside from uncorrelated channel-to-channel noise, there exists a tradeoff between effects that govern the error bounds on bottom estimation for surfaces having shallow grazing angle and surfaces distributed along a radial arc centered at the transducer. In the first case, as the bottom aligns with the radial direction to the receiver, footprint shift and shallow grazing angle effects dominate the uncertainty in physical bottom position (the surface aligns along a single AOA). Alternatively, if signal arrives from a radial arc, a single AOA is usually estimated (not necessarily at the average location of the surface). Through theoretical treatment, simulation, and field measurements, the aforementioned factors affecting MASB bottom mapping are examined. [Work supported by NSERC.]
Effects of pointing errors on receiver performance for parabolic dish solar concentrators
Hughes, R. O.
1978-01-01
The effects of dynamic (moving) pointing errors on the performance of solar thermal receivers are investigated. Only point-focusing types of solar collectors are considered. The key element in the study is the analytical derivation of the intercept factor that relates pointing errors to captured energy at the receiver. A detailed example using typical parameter values is modeled on the digital computer and demonstrates the theory and the dynamic nature of the problem.
Luo, X.; Ou, J.; Yuan, Y.; Gao, J.; Jin, X.; Zhang, K.; Xu, H.
2008-08-01
It is well known that the key problem associated with network-based real-time kinematic (RTK) positioning is the estimation of systematic errors of GPS observations, such as residual ionospheric delays, tropospheric delays, and orbit errors, particularly for medium-long baselines. Existing methods dealing with these systematic errors are either not applicable for making estimations in real-time or require additional observations in the computation. In both cases, the result is a difficulty in performing rapid positioning. We have developed a new strategy for estimating the systematic errors for near real-time applications. In this approach, only two epochs of observations are used each time to estimate the parameters. In order to overcome the severely ill-conditioned normal equation, the Tikhonov regularization method is used. We suggest that the regularized matrix be constructed by combining the a priori information of the known coordinates of the reference stations, followed by the determination of the corresponding regularization parameter. A series of systematic error estimates can be obtained using a session of GPS observations, and the new process can assist in resolving the integer ambiguities of medium-long baselines and in constructing the virtual observations for the virtual reference station. A number of tests using three medium- to long-range baselines (from tens of kilometers to longer than 1000 kilometers) are used to validate the new approach. Test results indicate that the derived coordinates for the three baselines are accurate to several centimeters once the systematic errors are successfully removed. Our results demonstrate that the proposed method can effectively estimate systematic errors in near real-time for medium-long GPS baseline solutions.
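Tikhonov regularization of an ill-conditioned normal equation, biased towards a priori values (here standing in for the known reference-station coordinates), can be sketched as follows; the design matrix and numbers are illustrative, not a real two-epoch GPS model:

```python
import numpy as np

def tikhonov(A, y, x_prior, alpha):
    """Tikhonov-regularized least squares biased towards a priori values:
    minimize ||A x - y||^2 + alpha * ||x - x_prior||^2. The normal
    equations become (A^T A + alpha I) x = A^T y + alpha x_prior, which
    stays solvable even when A^T A is nearly singular."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + alpha * np.eye(n),
                           A.T @ y + alpha * x_prior)

# Ill-conditioned toy design matrix: two nearly collinear columns, so
# ordinary least squares is numerically unstable along one direction.
A = np.array([[1.0, 1.0],
              [1.0, 1.0001],
              [1.0, 0.9999]])
x_true = np.array([2.0, -1.0])
y = A @ x_true
x_prior = np.array([1.9, -0.9])   # approximate a priori knowledge

x_reg = tikhonov(A, y, x_prior, alpha=1e-6)
print(np.allclose(A @ x_reg, y, atol=1e-3))  # → True
```

The regularization resolves the nearly unobservable direction using the prior while leaving the well-observed direction essentially untouched; choosing alpha corresponds to the regularization-parameter determination mentioned in the abstract.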
Cao, Lu; Li, Hengnian
2016-10-01
For the satellite attitude estimation problem, serious model errors always exist and hinder the estimation performance of the attitude determination and control system (ADCS), especially for a small satellite with low-precision sensors. To deal with this problem, a new algorithm for attitude estimation, referred to as the unscented predictive variable structure filter (UPVSF), is presented. This strategy is proposed based on the variable structure control concept and the unscented transform (UT) sampling method. It can be implemented in real time with an ability to estimate the model errors on-line, in order to improve the state estimation precision. In addition, the model errors in this filter are not restricted to Gaussian noises; therefore, it has the advantage of dealing with various kinds of model errors or noises. It is anticipated that the UT sampling strategy can further enhance the robustness and accuracy of the novel UPVSF. Numerical simulations show that the proposed UPVSF is more effective and robust in dealing with model errors and low-precision sensors compared with the traditional unscented Kalman filter (UKF).
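The UT sampling step that underlies such filters can be sketched independently of the filter itself. The following generates the standard scaled sigma points and checks that their weighted sum reproduces the mean exactly (the toy mean and covariance are illustrative):

```python
import numpy as np

def unscented_points(mean, cov, alpha=1e-3, kappa=0.0):
    """Generate the 2n+1 sigma points and mean weights of the scaled
    unscented transform (standard Julier/Uhlmann form)."""
    n = len(mean)
    lam = alpha**2 * (n + kappa) - n
    S = np.linalg.cholesky((n + lam) * cov)   # matrix square root
    pts = [mean] \
        + [mean + S[:, i] for i in range(n)] \
        + [mean - S[:, i] for i in range(n)]
    wm = np.full(2 * n + 1, 0.5 / (n + lam))
    wm[0] = lam / (n + lam)
    return np.array(pts), wm

mean = np.array([1.0, -2.0])
cov = np.array([[0.2, 0.05],
                [0.05, 0.1]])
pts, wm = unscented_points(mean, cov)

# The weighted sigma points reproduce the mean (and, with the matching
# covariance weights, the covariance) of the input distribution.
print(np.allclose(wm @ pts, mean))  # → True
```

A filter such as the UPVSF pushes these points through the nonlinear dynamics and measurement models and recombines them, avoiding the Jacobians an extended Kalman filter would need.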
Real-time Adaptive Kinematic Model Estimation of Concentric Tube Robots.
Kim, Chunwoo; Ryu, Seok Chang; Dupont, Pierre E
2015-01-01
Kinematic models of concentric tube robots have matured from considering only tube bending to considering tube twisting as well as external loading. While these models have been demonstrated to approximate actual behavior, modeling error can be significant for medical applications that often call for positioning accuracy of 1-2 mm. As an alternative to moving to more complex models, this paper proposes using sensing to adaptively update model parameters during robot operation. An advantage of this method is that the model is constantly tuning itself to provide high accuracy in the region of the workspace where it is currently operating. It also adapts automatically to changes in robot shape and compliance associated with the insertion and removal of tools through its lumen. As an initial exploration of this approach, a recursive on-line estimator is proposed and evaluated experimentally.
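A recursive on-line estimator of the kind mentioned can be illustrated with a generic recursive least-squares update; this is a sketch of the general technique, not the paper's specific adaptive kinematic scheme:

```python
import numpy as np

class RecursiveLS:
    """Minimal recursive least-squares estimator: refines parameter
    estimates after every new (regressor, measurement) pair, so a model
    can be re-tuned on-line as data streams in."""
    def __init__(self, n, p0=1e3):
        self.theta = np.zeros(n)     # parameter estimate
        self.P = p0 * np.eye(n)      # estimate covariance (large prior)

    def update(self, phi, y):
        phi = np.asarray(phi, dtype=float)
        k = self.P @ phi / (1.0 + phi @ self.P @ phi)     # gain vector
        self.theta = self.theta + k * (y - phi @ self.theta)
        self.P = self.P - np.outer(k, phi @ self.P)
        return self.theta

# Recover theta_true = [0.5, -1.2] from noiseless streaming data.
rng = np.random.default_rng(1)
theta_true = np.array([0.5, -1.2])
est = RecursiveLS(2)
for _ in range(50):
    phi = rng.standard_normal(2)
    est.update(phi, phi @ theta_true)
print(np.allclose(est.theta, theta_true, atol=1e-3))  # → True
```

Because each update is a handful of vector operations, this style of estimator runs comfortably in real time alongside the robot controller.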
GREAT3 results - I. Systematic errors in shear estimation and the impact of real galaxy morphology
Energy Technology Data Exchange (ETDEWEB)
Mandelbaum, R.; Rowe, B.; Armstrong, R.; Bard, D.; Bertin, E.; Bosch, J.; Boutigny, D.; Courbin, F.; Dawson, W. A.; Donnarumma, A.; Fenech Conti, I.; Gavazzi, R.; Gentile, M.; Gill, M. S. S.; Hogg, D. W.; Huff, E. M.; Jee, M. J.; Kacprzak, T.; Kilbinger, M.; Kuntzer, T.; Lang, D.; Luo, W.; March, M. C.; Marshall, P. J.; Meyers, J. E.; Miller, L.; Miyatake, H.; Nakajima, R.; Ngole Mboula, F. M.; Nurbaeva, G.; Okura, Y.; Paulin-Henriksson, S.; Rhodes, J.; Schneider, M. D.; Shan, H.; Sheldon, E. S.; Simet, M.; Starck, J. -L.; Sureau, F.; Tewes, M.; Zarb Adami, K.; Zhang, J.; Zuntz, J.
2015-05-01
We present first results from the third GRavitational lEnsing Accuracy Testing (GREAT3) challenge, the third in a sequence of challenges for testing methods of inferring weak gravitational lensing shear distortions from simulated galaxy images. GREAT3 was divided into experiments to test three specific questions, and included simulated space- and ground-based data with constant or cosmologically varying shear fields. The simplest (control) experiment included parametric galaxies with a realistic distribution of signal-to-noise, size, and ellipticity, and a complex point spread function (PSF). The other experiments tested the additional impact of realistic galaxy morphology, multiple exposure imaging, and the uncertainty about a spatially varying PSF; the last two questions will be explored in Paper II. The 24 participating teams competed to estimate lensing shears to within systematic error tolerances for upcoming Stage-IV dark energy surveys, making 1525 submissions overall. GREAT3 saw considerable variety and innovation in the types of methods applied. Several teams now meet or exceed the targets in many of the tests conducted (to within the statistical errors). We conclude that the presence of realistic galaxy morphology in simulations changes shear calibration biases by ~1 per cent for a wide range of methods. Other effects such as truncation biases due to finite galaxy postage stamps, and the impact of galaxy type as measured by the Sérsic index, are quantified for the first time. Our results generalize previous studies regarding sensitivities to galaxy size and signal-to-noise, and to PSF properties such as seeing and defocus. Almost all methods’ results support the simple model in which additive shear biases depend linearly on PSF ellipticity.
Estimating the background covariance error for the Global Data Assimilation System of CPTEC/INPE
Bastarz, C. F.; Goncalves, L.
2013-05-01
The global data assimilation system at CPTEC/INPE, named G3Dvar, is based on the Gridpoint Statistical Interpolation (GSI/NCEP/GMAO) and on the general circulation model from that same center (GCM/CPTEC/INPE). G3Dvar is a three-dimensional variational data assimilation system that uses a fixed background error covariance matrix (BE); in its current implementation, it uses the matrix from the Global Forecast System (GFS/NCEP). The goal of this work is to present preliminary results of the calculation of a new BE based on the GCM/CPTEC/INPE, using a methodology similar to the one used for GSI/WRFDA, called gen_be. The calculation is done in 5 distinct steps in the analysis increment space: (a) the stream function and velocity potential are determined from the wind fields; (b) the means of the stream function and velocity potential are calculated in order to obtain the perturbation fields for the remaining variables (stream function, velocity potential, temperature, relative humidity and surface pressure); (c) the covariances of the perturbation fields, the regression coefficients and the balance between stream function, temperature and surface pressure are estimated. For this particular system, i.e. the GCM/CPTEC/INPE, the need for constraints enforcing statistical balance between stream function and velocity potential, temperature and surface pressure will be evaluated, as well as how this affects the BE matrix calculation. Hence, this work investigates the procedures necessary for calculating BE, shows how they differ from the standard calculation, and shows how the result is calibrated/adjusted for the GCM/CPTEC/INPE. Results comparing the main differences between the GFS BE and the newly calculated GCM/CPTEC/INPE BE are discussed, in addition to an impact study using the different background error covariance matrices.
Gurdak, Jason J.; Qi, Sharon L.; Geisler, Michael L.
2009-01-01
The U.S. Geological Survey Raster Error Propagation Tool (REPTool) is a custom tool for use with the Environmental Systems Research Institute (ESRI) ArcGIS Desktop application to estimate error propagation and prediction uncertainty in raster processing operations and geospatial modeling. REPTool is designed to introduce concepts of error and uncertainty in geospatial data and modeling and to provide users of ArcGIS Desktop a geoprocessing tool and methodology to consider how error affects geospatial model output. Similar to other geoprocessing tools available in ArcGIS Desktop, REPTool can be run from a dialog window, from the ArcMap command line, or from a Python script. REPTool consists of public-domain, Python-based packages that implement Latin Hypercube Sampling within a probabilistic framework to track error propagation in geospatial models and quantitatively estimate the uncertainty of the model output. Users may specify error for each input raster or model coefficient represented in the geospatial model. The error for the input rasters may be specified as either spatially invariant or spatially variable across the spatial domain. Users may specify model output as a distribution of uncertainty for each raster cell. REPTool uses the Relative Variance Contribution method to quantify the relative error contribution from the two primary components in the geospatial model: errors in the model input data and in the coefficients of the model variables. REPTool is appropriate for many types of geospatial processing operations, modeling applications, and related research questions, including applications that consider spatially invariant or spatially variable error in geospatial data.
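The Latin Hypercube approach to error propagation can be sketched in a few lines: draw stratified samples of each uncertain input, push them through the model, and summarize the output distribution. The toy model and error ranges below are illustrative stand-ins, not REPTool's actual interface:

```python
import random
import statistics

def latin_hypercube(n, rng):
    """One-dimensional Latin Hypercube sample of n uniform(0,1) draws:
    exactly one draw from each of n equal-probability strata, in
    shuffled order."""
    cells = list(range(n))
    rng.shuffle(cells)
    return [(c + rng.random()) / n for c in cells]

# Propagate input error through a toy raster-cell model z = a*x + b*y,
# where x and y carry independent uniform errors (a stand-in for
# per-raster error specifications).
rng = random.Random(42)
n = 1000
a, b = 2.0, -0.5
x = [10.0 + (u - 0.5) * 2.0 for u in latin_hypercube(n, rng)]  # 10 ± 1
y = [4.0 + (u - 0.5) * 1.0 for u in latin_hypercube(n, rng)]   # 4 ± 0.5
z = [a * xi + b * yi for xi, yi in zip(x, y)]

# The sampled output distribution characterizes prediction uncertainty;
# its mean sits near the nominal value 2*10 - 0.5*4 = 18.
print(round(statistics.mean(z), 2), round(statistics.stdev(z), 2))
```

Compared with plain Monte Carlo, the stratification makes the output statistics converge with far fewer model runs, which matters when each run is a full raster operation.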
Institute of Scientific and Technical Information of China (English)
Zheng Wei; Hsu Hou-Tse; Zhong Min; Yun Mei-Juan
2009-01-01
Firstly, the new combined error model of cumulative geoid height influenced by four error sources, including the inter-satellite range-rate of an interferometric laser (K-band) ranging system, the orbital position and velocity of a global positioning system (GPS) receiver and the non-conservative force of an accelerometer, is established from the perspective of the power spectrum principle in physics using the semi-analytical approach. Secondly, the accuracy of the global gravitational field is accurately and rapidly estimated based on the combined error model; the cumulative geoid height error is 1.985 × 10^-1 m at degree 120 based on GRACE Level 1B measured observation errors of the year 2007 published by the US Jet Propulsion Laboratory (JPL), and the cumulative geoid height error is 5.825 × 10^-2 m at degree 360 using a GRACE Follow-On orbital altitude of 250 km and an inter-satellite range of 50 km. The matching relationship of accuracy indexes from the GRACE Follow-On key payloads is brought forward, and the dependability of the combined error model is validated. Finally, the feasibility of high-accuracy and high-resolution global gravitational field estimation from GRACE Follow-On is demonstrated based on different satellite orbital altitudes.
Kiessling, Jonas
2014-05-06
Option prices in exponential Lévy models solve certain partial integro-differential equations. This work focuses on developing novel, computable error approximations for a finite difference scheme that is suitable for solving such PIDEs. The scheme was introduced in (Cont and Voltchkova, SIAM J. Numer. Anal. 43(4):1596-1626, 2005). The main results of this work are new estimates of the dominating error terms, namely the time and space discretisation errors. In addition, the leading order terms of the error estimates are determined in a form that is more amenable to computations. The payoff is only assumed to satisfy an exponential growth condition, it is not assumed to be Lipschitz continuous as in previous works. If the underlying Lévy process has infinite jump activity, then the jumps smaller than some (Formula presented.) are approximated by diffusion. The resulting diffusion approximation error is also estimated, with leading order term in computable form, as well as the dependence of the time and space discretisation errors on this approximation. Consequently, it is possible to determine how to jointly choose the space and time grid sizes and the cut off parameter (Formula presented.). © 2014 Springer Science+Business Media Dordrecht.
The effect of TWD estimation error on the geometry of machined surfaces in micro-EDM milling
DEFF Research Database (Denmark)
Puthumana, Govindan; Bissacco, Giuliano; Hansen, Hans Nørgaard
In micro EDM (electrical discharge machining) milling, tool electrode wear must be effectively compensated in order to achieve high accuracy of machined features [1]. Tool wear compensation in micro-EDM milling can be based on off-line techniques with limited accuracy such as estimation....... The error propagation effect is demonstrated through a software simulation tool developed by the authors for determination of the correct TWD for subsequent use in compensation of electrode wear in EDM milling. The implemented model uses an initial arbitrary estimation of TWD and a single experiment...... and statistical characterization of the discharge population [3]. The TWD based approach permits the direct control of the position of the tool electrode front surface. However, TWD estimation errors will generate a self-amplifying error on the tool electrode axial depth during micro-EDM milling. Therefore...
Improved model predictive control of resistive wall modes by error field estimator in EXTRAP T2R
Setiadi, A. C.; Brunsell, P. R.; Frassinetti, L.
2016-12-01
Many implementations of a model-based approach for toroidal plasma have shown better control performance compared to the conventional type of feedback controller. One prerequisite of model-based control is the availability of a control oriented model. This model can be obtained empirically through a systematic procedure called system identification. Such a model is used in this work to design a model predictive controller to stabilize multiple resistive wall modes in EXTRAP T2R reversed-field pinch. Model predictive control is an advanced control method that can optimize the future behaviour of a system. Furthermore, this paper will discuss an additional use of the empirical model which is to estimate the error field in EXTRAP T2R. Two potential methods are discussed that can estimate the error field. The error field estimator is then combined with the model predictive control and yields better radial magnetic field suppression.
Lin, Lin
2016-01-01
We present the first systematic work for deriving a posteriori error estimates for general non-polynomial basis functions in an interior penalty discontinuous Galerkin (DG) formulation for solving eigenvalue problems associated with second order linear operators. Eigenvalue problems of such types play important roles in scientific and engineering applications, particularly in theoretical chemistry, solid state physics and materials science. Based on the framework developed in [L. Lin, B. Stamm, http://dx.doi.org/10.1051/m2an/2015069] for second order PDEs, we develop residual-type upper and lower bound error estimates for measuring the a posteriori error for eigenvalue problems. The main merit of our method is that it is parameter-free, in the sense that all but one of the solution-dependent constants appearing in the upper and lower bound estimates are explicitly computable by solving local and independent eigenvalue problems, and the only non-computable constant can be reasonably approximated by a com...
Directory of Open Access Journals (Sweden)
Jie Liu
2014-01-01
This paper discusses the nonconforming rotated Q1 finite element computable upper bound a posteriori error estimate of the boundary value problem established by M. Ainsworth and obtains efficient computable upper bound a posteriori error indicators for the eigenvalue problem associated with the boundary value problem. We extend the a posteriori error estimate to the Steklov eigenvalue problem and also derive efficient computable upper bound a posteriori error indicators. Finally, through numerical experiments, we verify the validity of the a posteriori error estimate of the boundary value problem; meanwhile, the numerical results show that the a posteriori error indicators of the eigenvalue problem and the Steklov eigenvalue problem are effective.
Keppenne, Christian L.; Rienecker, Michele; Kovach, Robin M.; Vernieres, Guillaume
2014-01-01
An attractive property of ensemble data assimilation methods is that they provide flow-dependent background error covariance estimates which can be used to update fields of observed variables as well as fields of unobserved model variables. Two methods to estimate background error covariances are introduced which share the above property with ensemble data assimilation methods but do not involve the integration of multiple model trajectories. Instead, all the necessary covariance information is obtained from a single model integration. The Space Adaptive Forecast error Estimation (SAFE) algorithm estimates error covariances from the spatial distribution of model variables within a single state vector. The Flow Adaptive error Statistics from a Time series (FAST) method constructs an ensemble sampled from a moving window along a model trajectory. SAFE and FAST are applied to the assimilation of Argo temperature profiles into version 4.1 of the Modular Ocean Model (MOM4.1) coupled to the GEOS-5 atmospheric model and to the CICE sea ice model. The results are validated against unassimilated Argo salinity data. They show that SAFE and FAST are competitive with the ensemble optimal interpolation (EnOI) used by the Global Modeling and Assimilation Office (GMAO) to produce its ocean analysis. Because of their reduced cost, SAFE and FAST hold promise for high-resolution data assimilation applications.
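The FAST idea, building an "ensemble" from a moving window along a single trajectory rather than from multiple model integrations, reduces to a windowed sample covariance. A minimal sketch with a toy two-variable trajectory (the dynamics are illustrative):

```python
import numpy as np

def fast_covariance(trajectory, window):
    """FAST-style background covariance sketch: treat the last `window`
    states of a single model trajectory as an ensemble and compute the
    sample covariance of their deviations from the window mean."""
    ens = trajectory[-window:]            # (window, nstate) pseudo-ensemble
    anom = ens - ens.mean(axis=0)         # deviations from window mean
    return anom.T @ anom / (window - 1)   # unbiased sample covariance

# Toy trajectory: two correlated state variables evolving in time.
t = np.linspace(0.0, 6.0, 40)
traj = np.column_stack([np.sin(t),
                        0.5 * np.sin(t) + 0.1 * np.cos(3.0 * t)])

B = fast_covariance(traj, window=10)
print(B.shape)  # → (2, 2)
```

The resulting B is flow-dependent because it is recomputed as the window slides along the trajectory, at the cost of one model run instead of an ensemble of runs.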
Singh, S. K.; Kumar, P.; Turbelin, G.; Issartel, J. P.; Feiz, A. A.; Ngae, P.; Bekka, N.
2016-12-01
In accidental release scenarios, a reliable prediction of the origin and strength of unknown releases is essential for emergency response authorities in order to ensure the safety and security of human health and the environment. Accidental scenarios might involve one or more simultaneous releases emitting the same contaminant. In this case, the fields of the plumes may overlap significantly and the sampled concentrations may become a mixture of the concentrations originating from all the releases. The study addresses an inverse modelling procedure for identifying the origin and strength of a known number of simultaneous releases from the sampled mixture of concentrations. A two-step inversion algorithm is developed in conjunction with an adjoint representation of the source-receptor relationship. The computational efficiency is increased by deriving the distributed source information observable from the given monitoring design and number of measurements. The technique leads to an exact retrieval of the true release parameters when measurements are noise free and exactly described by the dispersion model. The inversion algorithm is evaluated using real data from the Fusion Field Trials, involving multiple-release experiments (two, three and four sources) emitting propylene, conducted in September 2007 at Dugway Proving Ground, Utah, USA. The release locations are retrieved, on average, to within 45 m of the true sources. The analysis of posterior uncertainties shows that the variations in location error and retrieved strength are within 10 m and 0.07%, respectively. Further, the inverse modelling is tested using 4-16 measurements in the retrieval of four releases and is found to work reasonably well (within 146±79 m). The sensitivity studies highlight that the covariance statistics, model representativeness errors, source-receptor distance, distance between localized sources, monitoring design and number of measurements play an important role in multiple-source estimation.
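The source-receptor relationship described above is linear in the release strengths, so the strength-retrieval step can be sketched as a least-squares problem. The sensitivity matrix, measurement count and release strengths below are synthetic assumptions for illustration; the paper's two-step algorithm also retrieves the release locations.

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.uniform(0.1, 1.0, size=(16, 4))   # adjoint-derived sensitivities:
                                          # 16 receptors x 4 releases (synthetic)
q_true = np.array([2.0, 5.0, 1.0, 3.0])   # true release strengths
c = A @ q_true                            # noise-free mixture of concentrations

# With noise-free data exactly described by the model, least squares
# retrieves the true strengths exactly, matching the statement above.
q_est, *_ = np.linalg.lstsq(A, c, rcond=None)
```

Noise in `c` or errors in `A` (model representativeness) degrade the retrieval, which is why the abstract's sensitivity studies on covariance statistics and monitoring design matter.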
Lan, Yung-Yao; Tsuang, Ben-Jei; Keenlyside, Noel; Wang, Shu-Lun; Arthur Chen, Chen-Tung; Wang, Bin-Jye; Liu, Tsun-Hsien
2010-07-01
It is well known that skin sea surface temperature (SSST) differs from bulk sea surface temperature (BSST) by a few tenths of a degree Celsius. However, the extent of the error associated with dry deposition (or uptake) estimation using BSST is not well known. This study conducts such an evaluation using on-board observation data over the South China Sea in the summers of 2004 and 2006. It was found that when a warm layer occurred, the deposition velocities using BSST were underestimated by 0.8-4.3%, and the absorbed sea surface heat flux was overestimated by 21 W m-2. In contrast, under cool-skin-only conditions, the deposition velocities using BSST were overestimated by 0.5-2.0%, varying with pollutants, and the absorbed sea surface heat flux was underestimated, also by 21 W m-2. Scale analysis shows that for a slightly soluble gas (e.g., NO2, NO and CO), the error in the solubility estimation using BSST is the major source of the error in dry deposition estimation. For a highly soluble gas (e.g., SO2), the error in the estimation of turbulent heat fluxes and, consequently, aerodynamic resistance and gas-phase film resistance using BSST is the major source of the total error. In contrast, for a gas of medium solubility (e.g., O3 and CO2), both the errors from the estimations of the solubility and the aerodynamic resistance are important. In addition, deposition estimations using various assumptions are discussed. The largest uncertainty is from the parameterizations for chemical enhancement factors. Other important areas of uncertainty include: (1) the various parameterizations for gas-transfer velocity; (2) the neutral-atmosphere assumption; (3) using BSST as SST; and (4) the constant-pH assumption.
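Dry deposition estimates of the kind discussed above are commonly built from a resistance-in-series model. The sketch below assumes the standard three-resistance form vd = 1/(ra + rb + rc), with illustrative resistance values rather than the study's parameterizations.

```python
def deposition_velocity(r_a, r_b, r_c):
    """Deposition velocity (m/s) from aerodynamic, quasi-laminar film and
    surface resistances in series (s/m): vd = 1 / (r_a + r_b + r_c)."""
    return 1.0 / (r_a + r_b + r_c)

# illustrative resistance values, not from the study
vd = deposition_velocity(r_a=50.0, r_b=30.0, r_c=120.0)   # 0.005 m/s
```

The series structure explains the abstract's error attribution: for highly soluble gases the aerodynamic and film resistances dominate the sum, so SST-driven errors in the turbulent fluxes dominate vd; for slightly soluble gases the surface (solubility) term dominates instead.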
Miller, S. M.; Fung, I.; Liu, J.; Hayek, M. N.; Andrews, A. E.
2014-09-01
Estimates of CO2 fluxes that are based on atmospheric data rely upon a meteorological model to simulate atmospheric CO2 transport. These models provide a quantitative link between surface fluxes of CO2 and atmospheric measurements taken downwind. Therefore, any errors in the meteorological model can propagate into atmospheric CO2 transport and ultimately bias the estimated CO2 fluxes. These errors, however, have traditionally been difficult to characterize. To examine the effects of CO2 transport errors on estimated CO2 fluxes, we use a global meteorological model-data assimilation system known as "CAM-LETKF" to quantify two aspects of the transport errors: error variances (standard deviations) and temporal error correlations. Furthermore, we develop two case studies. In the first case study, we examine the extent to which CO2 transport uncertainties can bias CO2 flux estimates. In particular, we use a common flux estimate known as CarbonTracker to discover the minimum hypothetical bias that can be detected above the CO2 transport uncertainties. In the second case study, we then investigate which meteorological conditions may contribute to month-long biases in modeled atmospheric transport. We estimate 6-hourly CO2 transport uncertainties in the model surface layer that range from 0.15 to 9.6 ppm (standard deviation), depending on location, and we estimate an average error decorrelation time of ∼2.3 days at existing CO2 observation sites. As a consequence of these uncertainties, we find that CarbonTracker CO2 fluxes would need to be biased by at least 29%, on average, before that bias would be detectable at existing non-marine atmospheric CO2 observation sites. Furthermore, we find that persistent, bias-type errors in atmospheric transport are associated with consistently low net radiation, low-energy boundary layer conditions. The meteorological model is not necessarily more uncertain in these conditions. Rather, the extent to which meteorological uncertainties
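An error decorrelation time like the ∼2.3 days quoted above can be estimated from an error time series. The sketch below uses the common 1/e-crossing convention on the sample autocorrelation (an assumption, since the abstract does not specify the estimator) applied to a synthetic AR(1) series sampled every 6 hours.

```python
import numpy as np

def decorrelation_time(errors, dt_hours=6.0):
    """First lag (in hours) at which the sample autocorrelation of the
    error series drops below 1/e -- one common convention, assumed here."""
    x = np.asarray(errors, dtype=float)
    x = x - x.mean()
    acf = np.correlate(x, x, mode="full")[x.size - 1:]   # lags 0, 1, 2, ...
    acf /= acf[0]                                        # normalize to acf[0] = 1
    below = np.nonzero(acf < 1.0 / np.e)[0]
    return below[0] * dt_hours if below.size else np.inf

# synthetic AR(1) transport-error series sampled every 6 h
rng = np.random.default_rng(2)
e = np.zeros(2000)
for t in range(1, e.size):
    e[t] = 0.9 * e[t - 1] + rng.normal()
tau = decorrelation_time(e)
```

A long decorrelation time means consecutive observations carry redundant information about transport error, which is exactly why persistent biases are hard to detect above the transport uncertainty.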
Guerdoux, Simon; Fourment, Lionel
2007-05-01
An Arbitrary Lagrangian Eulerian (ALE) formulation is developed to simulate the different stages of the Friction Stir Welding (FSW) process with the FORGE3® F.E. software. A splitting method is utilized: a) the material velocity/pressure and temperature fields are calculated; b) the mesh velocity is derived from the domain boundary evolution and an adaptive refinement criterion provided by error estimation; c) P1 and P0 variables are remapped. Different velocity computation and remap techniques have been investigated, providing significant improvement with respect to more standard approaches. The proposed ALE formulation is applied to FSW simulation. Both steady-state welding and transient phases are simulated, showing the good robustness and accuracy of the developed formulation. Friction parameters are identified for an Eulerian steady-state simulation by comparison with experimental results. Void formation can be simulated. Simulations of the transient plunge and welding phases help to better understand the deposition process that occurs at the trailing edge of the probe. The flexibility and robustness of the model finally allow investigating the influence of new tooling designs on the deposition process.
Behmanesh, Iman; Moaveni, Babak
2016-07-01
This paper presents a hierarchical Bayesian model updating framework to account for the effects of ambient temperature and excitation amplitude. The proposed approach is applied for model calibration, response prediction and damage identification of a footbridge under changing environmental/ambient conditions. The concrete Young's modulus of the footbridge deck is the updating structural parameter considered, with its mean and variance modeled as functions of temperature and excitation amplitude. The modal parameters identified over 27 months of continuous monitoring of the footbridge are used to calibrate the updating parameters. One of the objectives of this study is to show that by increasing the levels of information in the updating process, the posterior variation of the updating structural parameter (concrete Young's modulus) is reduced. To this end, the calibration is performed at three information levels, using (1) the identified modal parameters, (2) modal parameters and ambient temperatures, and (3) modal parameters, ambient temperatures, and excitation amplitudes. The calibrated model is then validated by comparing the model-predicted natural frequencies with those identified from measured data after a deliberate change to the structural mass. It is shown that accounting for modeling error uncertainties is crucial for reliable response prediction, and that accounting only for the estimated variability of the updating structural parameter is not sufficient for accurate response predictions. Finally, the calibrated model is used for damage identification of the footbridge.
Smith, G. L.; Bess, T. D.; Minnis, P.
1983-01-01
The processes which determine the weather and climate are driven by the radiation received by the earth and the radiation subsequently emitted. A knowledge of the absorbed and emitted components of radiation is thus fundamental for the study of these processes. In connection with the desire to improve the quality of long-range forecasting, NASA is developing the Earth Radiation Budget Experiment (ERBE), consisting of a three-channel scanning radiometer and a package of nonscanning radiometers. A set of these instruments is to be flown on both the NOAA-F and NOAA-G spacecraft, in sun-synchronous orbits, and on an Earth Radiation Budget Satellite. The purpose of the scanning radiometer is to obtain measurements from which the average reflected solar radiant exitance and the average earth-emitted radiant exitance at a reference level can be established. The estimate of regional average exitance obtained will not exactly equal the true value of the regional average exitance, but will differ due to spatial sampling. A method is presented for evaluating this spatial sampling error.
Yoon, Yeosang; Garambois, Pierre-André; Paiva, Rodrigo C. D.; Durand, Michael; Roux, Hélène; Beighley, Edward
2016-01-01
We present an improvement to a previously presented algorithm that used a Bayesian Markov Chain Monte Carlo method for estimating river discharge from remotely sensed observations of river height, width, and slope. We also present an error budget for discharge calculations from the algorithm. The algorithm may be utilized by the upcoming Surface Water and Ocean Topography (SWOT) mission. We present a detailed evaluation of the method using synthetic SWOT-like observations (i.e., SWOT and AirSWOT, an airborne version of SWOT). The algorithm is evaluated using simulated AirSWOT observations over the Sacramento and Garonne Rivers, which have differing hydraulic characteristics. The algorithm is also explored using SWOT observations over the Sacramento River. SWOT and AirSWOT height, width, and slope observations are simulated by corrupting the "true" hydraulic modeling results with instrument error. Algorithm discharge root mean square error (RMSE) was 9% for the Sacramento River and 15% for the Garonne River for the AirSWOT case using expected observation error. The discharge uncertainty calculated from Manning's equation was 16.2% and 17.1%, respectively. For the SWOT scenario, the RMSE and uncertainty of the discharge estimate for the Sacramento River were 15% and 16.2%, respectively. A method based on the Kalman filter to correct errors of discharge estimates was shown to improve algorithm performance. From the error budget, the primary source of uncertainty was the a priori uncertainty of bathymetry and roughness parameters. Sensitivity to measurement errors was found to be a function of river characteristics. For example, the steeper Garonne River is less sensitive to slope errors than the flatter Sacramento River.
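The discharge uncertainty figures above derive from Manning's equation. A minimal sketch, assuming a wide rectangular channel so that the hydraulic radius is approximated by depth (illustrative roughness and geometry, not the Sacramento or Garonne values):

```python
import math

def manning_discharge(n, width, depth, slope):
    """Manning's equation Q = (1/n) * A * R^(2/3) * S^(1/2) for a wide
    rectangular channel, approximating the hydraulic radius R by the
    depth (a common simplification, assumed here)."""
    area = width * depth
    return (1.0 / n) * area * depth ** (2.0 / 3.0) * math.sqrt(slope)

# illustrative values, not from the study's rivers
Q = manning_discharge(n=0.03, width=150.0, depth=3.0, slope=1e-4)
```

The square-root dependence on slope shows why slope errors matter less on a steep river: the relative error in Q is roughly half the relative error in S, and a given absolute slope error is a smaller relative error when S is large.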
Directory of Open Access Journals (Sweden)
Xue Li
2015-01-01
State of charge (SOC) is one of the most important parameters in a battery management system (BMS). There are numerous algorithms for SOC estimation, mostly of model-based observer/filter types such as Kalman filters, closed-loop observers, and robust observers. Modeling errors and measurement noises have a critical impact on the accuracy of SOC estimation in these algorithms. This paper is a comparative study of the robustness of SOC estimation algorithms against modeling errors and measurement noises. Using a typical battery platform for vehicle applications with sensor noise and battery aging characterization, three popular and representative SOC estimation methods (extended Kalman filter, PI-controlled observer, and H∞ observer) are compared on such robustness. The simulation and experimental results demonstrate the deterioration of SOC estimation accuracy under modeling errors resulting from aging and under larger measurement noise, and this deterioration is quantitatively characterized. The findings of this paper provide useful information on the following aspects: (1) how SOC estimation accuracy depends on modeling reliability and voltage measurement accuracy; (2) the pros and cons of typical SOC estimators in their robustness and reliability; and (3) guidelines for requirements on battery system identification and sensor selection.
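The value of the closed-loop observers compared above can be illustrated on a toy linear battery model. The linear OCV-SOC relation, gain, and noise level below are assumptions for illustration, not the paper's platform; the point is that a voltage-feedback observer corrects an initial SOC error, while open-loop Coulomb counting carries it indefinitely.

```python
import numpy as np

def final_soc_error(soc0_guess, gain, steps=600, dt=1.0, capacity_as=3600.0):
    """Toy battery: constant 1 A discharge, linear OCV(SOC) = 3.0 + 1.2*SOC.
    gain = 0 is open-loop Coulomb counting; gain > 0 adds voltage feedback."""
    rng = np.random.default_rng(3)
    soc_true, soc_est = 0.8, soc0_guess
    for _ in range(steps):
        soc_true -= 1.0 * dt / capacity_as                       # true discharge
        v_meas = 3.0 + 1.2 * soc_true + rng.normal(scale=0.005)  # noisy sensor
        soc_est -= 1.0 * dt / capacity_as                        # Coulomb counting
        soc_est += gain * (v_meas - (3.0 + 1.2 * soc_est))       # feedback correction
    return abs(soc_true - soc_est)

open_loop = final_soc_error(soc0_guess=0.6, gain=0.0)    # initial error persists
closed_loop = final_soc_error(soc0_guess=0.6, gain=0.1)  # error converges
```

The trade-off the paper quantifies is visible even here: a larger gain removes initialization and modeling error faster but couples the estimate more tightly to voltage measurement noise.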
Aquatic concentrations of chemical analytes compared to ecotoxicity estimates
U.S. Environmental Protection Agency — We describe screening level estimates of potential aquatic toxicity posed by 227 chemical analytes that were measured in 25 ambient water samples collected as part...
An Error Estimate for Symplectic Euler Approximation of Optimal Control Problems
Karlsson, Jesper
2015-01-01
This work focuses on numerical solutions of optimal control problems. A time discretization error representation is derived for the approximation of the associated value function. It concerns symplectic Euler solutions of the Hamiltonian system connected with the optimal control problem. The error representation has a leading-order term consisting of an error density that is computable from symplectic Euler solutions. Under an assumption of the pathwise convergence of the approximate dual function as the maximum time step goes to zero, we prove that the remainder is of higher order than the leading-error density part in the error representation. With the error representation, it is possible to perform adaptive time stepping. We apply an adaptive algorithm originally developed for ordinary differential equations. The performance is illustrated by numerical tests.
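A symplectic Euler step of the kind analyzed above updates momentum and position sequentially; on a separable Hamiltonian it exactly preserves a modified invariant, so the true energy stays bounded over long integrations. A minimal sketch on the harmonic oscillator H(p, q) = (p² + q²)/2 (a standard test problem, not taken from the paper):

```python
def symplectic_euler(p, q, dt, steps):
    """Symplectic Euler for H(p, q) = (p*p + q*q) / 2: update p using the
    old q, then q using the new p (one common variant of the scheme)."""
    for _ in range(steps):
        p -= dt * q        # p_{n+1} = p_n - dt * dH/dq(q_n)
        q += dt * p        # q_{n+1} = q_n + dt * dH/dp(p_{n+1})
    return p, q

p, q = symplectic_euler(p=0.0, q=1.0, dt=0.01, steps=10_000)
energy = 0.5 * (p * p + q * q)   # stays close to the initial value 0.5
```

Explicit Euler applied to the same system would see the energy grow without bound; the bounded energy error of the symplectic variant is what makes the computable error-density representation above useful for adaptive time stepping.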
An A Posteriori Error Estimate for Symplectic Euler Approximation of Optimal Control Problems
Karlsson, Jesper
2015-01-07
This work focuses on numerical solutions of optimal control problems. A time discretization error representation is derived for the approximation of the associated value function. It concerns Symplectic Euler solutions of the Hamiltonian system connected with the optimal control problem. The error representation has a leading order term consisting of an error density that is computable from Symplectic Euler solutions. Under an assumption of the pathwise convergence of the approximate dual function as the maximum time step goes to zero, we prove that the remainder is of higher order than the leading error density part in the error representation. With the error representation, it is possible to perform adaptive time stepping. We apply an adaptive algorithm originally developed for ordinary differential equations.
Road Invariant Extended Kalman Filter for an Enhanced Estimation of GPS Errors using Lane Markings
2015-01-01
International audience; Satellite positioning is a key technology for autonomous navigation in outdoor environments. When using standalone computation with mono-frequency receivers, positioning errors do not meet the required performance. Nevertheless, since the errors are strongly time-correlated, a GPS fix is quite informative if a shaping model of the positioning errors is carefully handled, which is made possible by exteroceptive sensors. When driving on a road with a camera detecti...
Han, Mira V; Thomas, Gregg W C; Lugo-Martinez, Jose; Hahn, Matthew W
2013-08-01
Current sequencing methods produce large amounts of data, but genome assemblies constructed from these data are often fragmented and incomplete. Incomplete and error-filled assemblies result in many annotation errors, especially in the number of genes present in a genome. This means that methods attempting to estimate rates of gene duplication and loss often will be misled by such errors and that rates of gene family evolution will be consistently overestimated. Here, we present a method that takes these errors into account, allowing one to accurately infer rates of gene gain and loss among genomes even with low assembly and annotation quality. The method is implemented in the newest version of the software package CAFE, along with several other novel features. We demonstrate the accuracy of the method with extensive simulations and reanalyze several previously published data sets. Our results show that errors in genome annotation do lead to higher inferred rates of gene gain and loss but that CAFE 3 sufficiently accounts for these errors to provide accurate estimates of important evolutionary parameters.
Kumar, Sudhir; Datta, D; Sharma, S D; Chourasiya, G; Babu, D A R; Sharma, D N
2014-04-01
Verification of the strength of high dose rate (HDR) (192)Ir brachytherapy sources on receipt from the vendor is an important component of an institutional quality assurance program. Either reference air-kerma rate (RAKR) or air-kerma strength (AKS) is the recommended quantity to specify the strength of gamma-emitting brachytherapy sources. The use of a Farmer-type cylindrical ionization chamber of sensitive volume 0.6 cm(3) is one of the recommended methods for measuring the RAKR of HDR (192)Ir brachytherapy sources. While using the cylindrical chamber method, it is required to determine the positioning error of the ionization chamber with respect to the source, which is called the distance error. An attempt has been made to apply fuzzy set theory to estimate the subjective uncertainty associated with the distance error. A simplified approach to applying fuzzy set theory is proposed for quantifying the uncertainty associated with the distance error. In order to express the uncertainty in the framework of fuzzy sets, the uncertainty index was estimated and was found to be within 2.5%, which further indicates that the possibility of error in measuring such a distance may be of this order. It is observed that the relative distances l_i estimated by the analytical method and the fuzzy set theoretic approach are consistent with each other. The crisp values of l_i estimated using the analytical method lie within the bounds computed using fuzzy set theory. This indicates that the l_i values estimated using analytical methods are within 2.5% uncertainty. This value of uncertainty in distance measurement should be incorporated in the uncertainty budget while estimating the expanded uncertainty in HDR (192)Ir source strength measurement.
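One simple way to represent a subjective ±2.5% uncertainty as a fuzzy set, in the spirit of the approach above, is a triangular fuzzy number queried through its alpha-cuts. The symmetric triangular membership function and the numbers below are illustrative assumptions, not the paper's formulation.

```python
def alpha_cut(peak, half_width, alpha):
    """Interval of a symmetric triangular fuzzy number at membership
    level alpha (alpha = 1 gives the peak, alpha = 0 the full support)."""
    spread = half_width * (1.0 - alpha)
    return peak - spread, peak + spread

# relative distance with a subjective +/-2.5% uncertainty (illustrative)
lo, hi = alpha_cut(peak=1.0, half_width=0.025, alpha=0.0)   # widest bounds
```

Checking that a crisp analytical value falls inside the alpha-cut interval at every membership level is a direct way to express the consistency claim made in the abstract.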
Frolov, Maxim; Chistiakova, Olga
2017-06-01
This paper is devoted to a numerical justification of a recent a posteriori error estimate for Reissner-Mindlin plates. The majorant provides reliable control of the accuracy of any conforming approximate solution of the problem, including solutions obtained with commercial software for mechanical engineering. The estimate is developed on the basis of the functional approach and is applicable to several types of boundary conditions. To verify the approach, numerical examples with mesh refinements are provided.
National Suicide Rates a Century after Durkheim: Do We Know Enough to Estimate Error?
Claassen, Cynthia A.; Yip, Paul S.; Corcoran, Paul; Bossarte, Robert M.; Lawrence, Bruce A.; Currier, Glenn W.
2010-01-01
Durkheim's nineteenth-century analysis of national suicide rates dismissed prior concerns about mortality data fidelity. Over the intervening century, however, evidence documenting various types of error in suicide data has only mounted, and surprising levels of such error continue to be routinely uncovered. Yet the annual suicide rate remains the…
Van der Zee, K.G.; Van Brummelen, E.H.; De Borst, R.
2010-01-01
We develop duality-based a posteriori error estimates for functional outputs of solutions of free-boundary problems via shape-linearization principles. To derive an appropriate dual (linearized adjoint) problem, we linearize the domain dependence of the very weak form and the goal functional of interest.
Directory of Open Access Journals (Sweden)
Breno Carvalho
2013-10-01
The purpose of this paper is to implement a computational program to estimate the states (complex nodal voltages) of a power system and to show that the largest normalized residual (LNR) test fails many times. The chosen solution method was Weighted Least Squares (WLS). Once the states are estimated, a gross error analysis is performed to detect and identify the measurements that may contain gross errors (GEs), which can interfere with the estimated states, leading the process to an erroneous state estimation. If a measurement is identified as containing an error, it is discarded from the measurement set and the whole process is repeated until all measurements are within an acceptable error threshold. To validate the implemented software, several computer simulations were performed on the IEEE 6-bus and 14-bus systems, where satisfactory results were obtained. Another purpose is to show that even a widespread method such as the LNR test is subject to serious conceptual flaws, probably due to a lack of attention to the mathematical foundations of the methodology. The paper highlights the need for continuous improvement of the employed techniques and for a critical view, on the part of researchers, of these types of failures.
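The WLS estimation and LNR test discussed above can be sketched for a generic linear(ized) measurement model z = Hx + e. The small 6-measurement, 3-state system and the injected gross error below are toy assumptions, not an IEEE test case; with a single non-critical bad measurement the largest normalized residual does point at it, which is the favorable case for the test.

```python
import numpy as np

# Measurement model z = H x + e; measurement index 2 gets a gross error.
H = np.array([[1, 0, 0], [0, 1, 0], [0, 0, 1],
              [1, 1, 0], [0, 1, 1], [1, 0, 1]], dtype=float)
x_true = np.array([1.0, -0.5, 0.25])
sigma = 0.01
rng = np.random.default_rng(4)
z = H @ x_true + rng.normal(scale=sigma, size=6)
z[2] += 0.5                                   # inject a gross error (50 sigma)

W = np.eye(6) / sigma**2                      # weights = inverse error variances
G = H.T @ W @ H                               # WLS gain matrix
x_hat = np.linalg.solve(G, H.T @ W @ z)       # WLS state estimate
r = z - H @ x_hat                             # measurement residuals
Omega = sigma**2 * np.eye(6) - H @ np.linalg.solve(G, H.T)  # residual covariance
r_norm = np.abs(r) / np.sqrt(np.diag(Omega))  # normalized residuals
suspect = int(np.argmax(r_norm))              # LNR flags this measurement
```

The failure modes the paper criticizes arise outside this favorable case, e.g. for multiple interacting gross errors or errors on critical or low-redundancy measurements, where the largest normalized residual can point at a healthy measurement.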
Goswami, Deepjyoti
2013-05-01
In the first part of this article, a new mixed method is proposed and analyzed for parabolic integro-differential equations (PIDE) with nonsmooth initial data. Compared to the standard mixed method for PIDE, the present method does not rely on a reformulation using a resolvent operator. Based on energy arguments combined with a repeated use of an integral operator, and without using a parabolic-type duality technique, optimal L2-error estimates are derived for semidiscrete approximations when the initial condition is in L2. Due to the presence of the integral term, it is further observed that a negative norm estimate plays a crucial role in our error analysis. Moreover, the proposed analysis follows the spirit of the proof techniques used in deriving optimal error estimates for finite element approximations to PIDE with smooth data and therefore unifies both theories, i.e., the one for smooth data and the one for nonsmooth data. Finally, we extend the proposed analysis to the standard mixed method for PIDE with rough initial data and provide an optimal error estimate in L2, which improves upon the results available in the literature. © 2013 Springer Science+Business Media New York.
DEFF Research Database (Denmark)
Jin, Shuanggen; Feng, Guiping; Andersen, Ole Baltazar
2014-01-01
and geostrophic current estimates from satellite gravimetry and altimetry are investigated and evaluated in China's marginal seas. The cumulative error in MDT from GOCE is reduced from 22.75 to 9.89 cm when compared to the Gravity Recovery and Climate Experiment (GRACE) gravity field model ITG-Grace2010 results...
Directory of Open Access Journals (Sweden)
Haitao Che
2011-01-01
We investigate an H1-Galerkin mixed finite element method for nonlinear viscoelasticity equations based on the H1-Galerkin method and the expanded mixed element method. The existence and uniqueness of solutions to the numerical scheme are proved. A priori error estimates are derived for the unknown function, the gradient function, and the flux.
Institute of Scientific and Technical Information of China (English)
Lie-heng Wang
2000-01-01
The abstract L2-norm error estimate of the nonconforming finite element method is established. The uniform L2-norm error estimate is obtained for the nonconforming finite element method for the second-order elliptic problem with the lowest regularity, i.e., in the case that the solution u ∈ H1(Ω) only. It is also shown that the L2-norm error bound we obtained is one order higher than the energy-norm error bound.
An empirical method of RH correction for satellite estimation of ground-level PM concentrations
Wang, Zifeng; Chen, Liangfu; Tao, Jinhua; Liu, Yang; Hu, Xuefei; Tao, Minghui
2014-10-01
A hygroscopic growth model suitable for local aerosol characteristics and their temporal variations is necessary for accurate satellite retrieval of ground-level particulate matter (PM). This study develops an empirical method to correct the relative humidity (RH) impact on the aerosol extinction coefficient and to further derive PM concentrations from satellite observations. Without relying on detailed information about aerosol chemical and microphysical properties, this method simply uses in-situ observations of visibility (VIS), RH and PM concentrations to characterize aerosol hygroscopicity, and thus makes the RH correction capable of supporting satellite PM estimations with large spatial and temporal coverage. In this method, the aerosol average mass extinction efficiency (αext) is used to describe the general hygroscopic growth behavior of the total aerosol population. The association between αext and RH is obtained through empirical model fitting and is then applied to carry out the RH correction. Nearly one year of in-situ measurements of VIS, RH and PM10 in the Beijing urban area are collected for this study, and the RH correction is made for each of the months with sufficient data samples. The correlations between aerosol extinction coefficients and PM10 concentrations are significantly improved, with the monthly correlation R2 increasing from 0.26-0.63 to 0.49-0.82, and the whole dataset's R2 increasing from 0.36 to 0.68. PM10 concentrations are retrieved through the RH correction and validated for each season individually. Good agreement between the retrieved and observed PM10 concentrations is found in all seasons, with R2 ranging from 0.54 in spring to 0.73 in fall, and mean relative errors ranging from -2.5% in winter to -10.8% in spring. Based on the satellite AOD and the model-simulated aerosol profiles, surface PM10 over the Beijing area is retrieved through the RH correction. The satellite-retrieved PM10 and those observed at ground sites agree well.
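The empirical fitting step above can be sketched with a commonly used hygroscopic growth form, αext(RH) = a(1 - RH/100)^(-b), fitted by linear regression in log space. The functional form, coefficients and synthetic data are assumptions for illustration; the paper fits its own monthly empirical models.

```python
import numpy as np

# Synthetic (RH, alpha_ext) samples following an assumed growth curve
rng = np.random.default_rng(5)
rh = rng.uniform(30.0, 90.0, size=200)
alpha = 3.0 * (1.0 - rh / 100.0) ** (-0.6) * np.exp(rng.normal(scale=0.02, size=200))

# Linearize: log(alpha) = log(a) - b * log(1 - RH/100), then fit a line
x = np.log(1.0 - rh / 100.0)
slope, intercept = np.polyfit(x, np.log(alpha), 1)
a_est, b_est = np.exp(intercept), -slope
```

Once a and b are fitted for a given month, dividing a humid extinction coefficient by the growth factor (1 - RH/100)^(-b) yields the "dry" extinction that correlates with PM10, which is how the R2 improvements quoted above arise.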
Adjoint-Based a Posteriori Error Estimation for Coupled Time-Dependent Systems
Asner, Liya
2012-01-01
We consider time-dependent parabolic problems coupled across a common interface, which we formulate using a Lagrange multiplier construction and solve by applying a monolithic solution technique. We derive an adjoint-based a posteriori error representation for a quantity of interest given by a linear functional of the solution. We establish the accuracy of our error representation formula through numerical experimentation and investigate the effect of error in the adjoint solution. Crucially, the error representation affords a distinction between temporal and spatial errors and can be used as a basis for a blockwise time-space refinement strategy. Numerical tests illustrate the efficacy of the refinement strategy by capturing the distinctive behavior of a localized traveling wave solution. The saddle point systems considered here are equivalent to those arising in the mortar finite element technique for parabolic problems. © 2012 Society for Industrial and Applied Mathematics.