WorldWideScience

Sample records for concentration estimation errors

  1. Optical losses due to tracking error estimation for a low concentrating solar collector

    International Nuclear Information System (INIS)

    Sallaberry, Fabienne; García de Jalón, Alberto; Torres, José-Luis; Pujol-Nadal, Ramón

    2015-01-01

    Highlights: • A solar thermal collector with low concentration and one-axis tracking was tested. • A quasi-dynamic testing procedure for the IAM was defined for a tracking collector. • The match between the concentrator optics and the tracking was checked. • The maximum and long-term optical losses due to tracking error were calculated. - Abstract: The determination of the accuracy of a solar tracker used in domestic hot water solar collectors is not yet standardized. However, when using optical concentration devices, it is important to use a solar tracker with adequate precision with regard to the specific optical concentration factor. Otherwise, the concentrator would sustain high optical losses due to inadequate focusing of the solar radiation onto its receiver, despite its good optical quality. This study focuses on the estimation of long-term optical losses due to the tracking error of a low-temperature collector using low-concentration optics. For this purpose, a testing procedure for the incidence angle modifier on the tracking plane is proposed to determine the acceptance angle of its concentrator, even with different longitudinal incidence angles along the focal line plane. Then, the impact of the maximum tracking error angle upon the optical efficiency is determined. Finally, the long-term optical error due to tracking errors is calculated using the design angular tracking error declared by the manufacturer. The maximum tracking error calculated for this collector implies an optical loss of about 8.5%, which is high, but the average long-term optical loss calculated for one year was about 1%, which is reasonable for such collectors used for domestic hot water.

  2. Estimation of Total Error in DWPF Reported Radionuclide Inventories

    International Nuclear Information System (INIS)

    Edwards, T.B.

    1995-01-01

    This report investigates the impact of random errors due to measurement and sampling on the reported concentrations of radionuclides in DWPF's filled canister inventory resulting from each macro-batch. The objective of this investigation is to estimate the variance of the total error in reporting these radionuclide concentrations

  3. Estimation of perspective errors in 2D2C-PIV measurements for 3D concentrated vortices

    Science.gov (United States)

    Ma, Bao-Feng; Jiang, Hong-Gang

    2018-06-01

    Two-dimensional planar PIV (2D2C) is still extensively employed in flow measurement owing to its availability and reliability, although more advanced PIVs have been developed. It has long been recognized that there exist perspective errors in velocity fields when employing the 2D2C PIV to measure three-dimensional (3D) flows, the magnitude of which depends on out-of-plane velocity and geometric layouts of the PIV. For a variety of vortex flows, however, the results are commonly represented by vorticity fields, instead of velocity fields. The present study indicates that the perspective error in vorticity fields relies on gradients of the out-of-plane velocity along a measurement plane, instead of the out-of-plane velocity itself. More importantly, an estimation approach to the perspective error in 3D vortex measurements was proposed based on a theoretical vortex model and an analysis on physical characteristics of the vortices, in which the gradient of out-of-plane velocity is uniquely determined by the ratio of the maximum out-of-plane velocity to maximum swirling velocity of the vortex; meanwhile, the ratio has upper limits for naturally formed vortices. Therefore, if the ratio is imposed with the upper limits, the perspective error will only rely on the geometric layouts of PIV that are known in practical measurements. Using this approach, the upper limits of perspective errors of a concentrated vortex can be estimated for vorticity and other characteristic quantities of the vortex. In addition, the study indicates that the perspective errors in vortex location, vortex strength, and vortex radius can be all zero for axisymmetric vortices if they are calculated by proper methods. The dynamic mode decomposition on an oscillatory vortex indicates that the perspective errors of each DMD mode are also only dependent on the gradient of out-of-plane velocity if the modes are represented by vorticity.

  4. Assessing concentration uncertainty estimates from passive microwave sea ice products

    Science.gov (United States)

    Meier, W.; Brucker, L.; Miller, J. A.

    2017-12-01

    Sea ice concentration is an essential climate variable and passive microwave derived estimates of concentration are one of the longest satellite-derived climate records. However, until recently uncertainty estimates were not provided. Numerous validation studies provided insight into general error characteristics, but the studies have found that concentration error varied greatly depending on sea ice conditions. Thus, an uncertainty estimate from each observation is desired, particularly for initialization, assimilation, and validation of models. Here we investigate three sea ice products that include an uncertainty for each concentration estimate: the NASA Team 2 algorithm product, the EUMETSAT Ocean and Sea Ice Satellite Application Facility (OSI-SAF) product, and the NOAA/NSIDC Climate Data Record (CDR) product. Each product estimates uncertainty with a completely different approach. The NASA Team 2 product derives uncertainty internally from the algorithm method itself. The OSI-SAF uses atmospheric reanalysis fields and a radiative transfer model. The CDR uses spatial variability from two algorithms. Each approach has merits and limitations. Here we evaluate the uncertainty estimates by comparing the passive microwave concentration products with fields derived from the NOAA VIIRS sensor. The results show that the relationship between the product uncertainty estimates and the concentration error (relative to VIIRS) is complex. This may be due to the sea ice conditions, the uncertainty methods, as well as the spatial and temporal variability of the passive microwave and VIIRS products.

  5. Effects of exposure estimation errors on estimated exposure-response relations for PM2.5.

    Science.gov (United States)

    Cox, Louis Anthony Tony

    2018-07-01

    Associations between fine particulate matter (PM2.5) exposure concentrations and a wide variety of undesirable outcomes, from autism and auto theft to elderly mortality, suicide, and violent crime, have been widely reported. Influential articles have argued that reducing National Ambient Air Quality Standards for PM2.5 is desirable to reduce these outcomes. Yet, other studies have found that reducing black smoke and other particulate matter by as much as 70% and dozens of micrograms per cubic meter has not detectably affected all-cause mortality rates even after decades, despite strong, statistically significant positive exposure concentration-response (C-R) associations between them. This paper examines whether this disconnect between association and causation might be explained in part by ignored estimation errors in estimated exposure concentrations. We use EPA air quality monitor data from the Los Angeles area of California to examine the shapes of estimated C-R functions for PM2.5 when the true C-R functions are assumed to be step functions with well-defined response thresholds. The estimated C-R functions mistakenly show risk as smoothly increasing with concentrations even well below the response thresholds, thus incorrectly predicting substantial risk reductions from reductions in concentrations that do not affect health risks. We conclude that ignored estimation errors obscure the shapes of true C-R functions, including possible thresholds, possibly leading to unrealistic predictions of the changes in risk caused by changing exposures. Instead of estimating improvements in public health per unit reduction (e.g., per 10 µg/m³ decrease) in average PM2.5 concentrations, it may be essential to consider how interventions change the distributions of exposure concentrations.
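
    To illustrate the smoothing effect described above, here is a minimal simulation sketch (not the paper's analysis): responses are generated from a hypothetical step-function C-R relation, exposures are observed with added estimation error, and the apparent C-R curve is tabulated against the error-prone concentrations. All parameter values (threshold, risks, error magnitude) are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical step-function concentration-response: no excess risk below a
# threshold of 12 ug/m3, constant excess risk above it (values are illustrative).
threshold, baseline, excess = 12.0, 0.01, 0.005

def true_risk(c):
    return baseline + excess * (c >= threshold)

# True exposure concentrations and error-prone estimates of them.
true_c = rng.uniform(2, 30, size=20000)
estimated_c = true_c + rng.normal(0, 4, size=true_c.size)   # assumed estimation error

# Responses are generated from the *true* concentrations.
response = rng.binomial(1, true_risk(true_c))

# Empirical C-R curve as seen through the error-prone estimated concentrations.
bins = np.arange(0, 32, 2.0)
idx = np.digitize(estimated_c, bins)
for b in range(1, len(bins)):
    mask = idx == b
    if mask.sum() > 200:
        print(f"{bins[b-1]:5.1f}-{bins[b]:5.1f} ug/m3: observed risk = {response[mask].mean():.4f}")
# The printed curve rises smoothly through the threshold instead of jumping at it,
# illustrating how exposure estimation error blurs a true step-function C-R relation.
```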

  6. Estimation of total error in DWPF reported radionuclide inventories. Revision 1

    International Nuclear Information System (INIS)

    Edwards, T.B.

    1995-01-01

    The Defense Waste Processing Facility (DWPF) at the Savannah River Site is required to determine and report the radionuclide inventory of its glass product. For each macro-batch, the DWPF will report both the total amount (in curies) of each reportable radionuclide and the average concentration (in curies/gram of glass) of each reportable radionuclide. The DWPF is to provide the estimated error of these reported values of its radionuclide inventory as well. The objective of this document is to provide a framework for determining the estimated error in DWPF's reporting of these radionuclide inventories. This report investigates the impact of random errors due to measurement and sampling on the total amount of each reportable radionuclide in a given macro-batch. In addition, the impact of these measurement and sampling errors and process variation are evaluated to determine the uncertainty in the reported average concentrations of radionuclides in DWPF's filled canister inventory resulting from each macro-batch

  7. The interaction of the flux errors and transport errors in modeled atmospheric carbon dioxide concentrations

    Science.gov (United States)

    Feng, S.; Lauvaux, T.; Butler, M. P.; Keller, K.; Davis, K. J.; Jacobson, A. R.; Schuh, A. E.; Basu, S.; Liu, J.; Baker, D.; Crowell, S.; Zhou, Y.; Williams, C. A.

    2017-12-01

    Regional estimates of biogenic carbon fluxes over North America from top-down atmospheric inversions and terrestrial biogeochemical (or bottom-up) models remain inconsistent at annual and sub-annual time scales. While top-down estimates are impacted by limited atmospheric data, uncertain prior flux estimates and errors in the atmospheric transport models, bottom-up fluxes are affected by uncertain driver data, uncertain model parameters and missing mechanisms across ecosystems. This study quantifies both flux errors and transport errors, and their interaction in the CO2 atmospheric simulation. These errors are assessed by an ensemble approach. The WRF-Chem model is set up with 17 biospheric fluxes from the Multiscale Synthesis and Terrestrial Model Intercomparison Project, CarbonTracker-Near Real Time, and the Simple Biosphere model. The spread of the flux ensemble members represents the flux uncertainty in the modeled CO2 concentrations. For the transport errors, WRF-Chem is run using three physical model configurations with three stochastic perturbations to sample the errors from both the physical parameterizations of the model and the initial conditions. Additionally, the uncertainties from boundary conditions are assessed using four CO2 global inversion models which have assimilated tower and satellite CO2 observations. The error structures are assessed in time and space. The flux ensemble members overall overestimate CO2 concentrations. They also show larger temporal variability than the observations. These results suggest that the flux ensemble is overdispersive. In contrast, the transport ensemble is underdispersive. The averaged spatial distribution of modeled CO2 shows strong positive biogenic signal in the southern US and strong negative signals along the eastern coast of Canada. We hypothesize that the former is caused by the 3-hourly downscaling algorithm from which the nighttime respiration dominates the daytime modeled CO2 signals and that the latter

  8. Wind power error estimation in resource assessments.

    Directory of Open Access Journals (Sweden)

    Osvaldo Rodríguez

    Full Text Available Estimating the power output is one of the elements that determine the techno-economic feasibility of a renewable project. At present, there is a need to develop reliable methods that achieve this goal, thereby contributing to wind power penetration. In this study, we propose a method for wind power error estimation based on the wind speed measurement error, probability density function, and wind turbine power curves. This method uses the actual wind speed data without prior statistical treatment based on 28 wind turbine power curves, which were fitted by Lagrange's method, to calculate the estimated wind power output and the corresponding error propagation. We found that wind speed percentage errors of 10% were propagated into the power output estimates, thereby yielding an error of 5%. The proposed error propagation complements the traditional power resource assessments. The wind power estimation error also allows us to estimate intervals for the levelized cost of power production or the investment return time. The implementation of this method increases the reliability of techno-economic resource assessment studies.
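
    A minimal sketch of the kind of first-order error propagation described above, using a single idealized cubic power curve rather than the 28 fitted curves from the study; the turbine parameters, wind speeds, and 10% relative error below are all assumptions for illustration.

```python
import numpy as np

# Illustrative turbine power curve (not one of the 28 curves used in the paper):
# cubic rise between cut-in and rated speed, constant at rated power, zero outside.
cut_in, rated_speed, cut_out, rated_power = 3.0, 12.0, 25.0, 2000.0  # m/s, m/s, m/s, kW

def power(v):
    v = np.asarray(v, dtype=float)
    p = rated_power * ((v - cut_in) / (rated_speed - cut_in)) ** 3
    p = np.where((v >= cut_in) & (v < rated_speed), p, 0.0)
    p = np.where((v >= rated_speed) & (v <= cut_out), rated_power, p)
    return p

# Measured wind speed series (assumed) and a 10% relative measurement error.
v = np.array([4.2, 6.8, 7.5, 9.1, 10.4, 11.8, 13.0, 5.5])
rel_err_v = 0.10

# First-order error propagation: sigma_P = |dP/dv| * sigma_v, with dP/dv taken
# from a central finite difference on the power curve.
dv = 1e-3
dPdv = (power(v + dv) - power(v - dv)) / (2 * dv)
sigma_P = np.abs(dPdv) * (rel_err_v * v)

mean_P = power(v).mean()
print(f"Mean estimated power        : {mean_P:.0f} kW")
print(f"Mean propagated power error : {sigma_P.mean():.0f} kW "
      f"({100 * sigma_P.mean() / mean_P:.1f}% of mean power)")
```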

  9. Wind power error estimation in resource assessments.

    Science.gov (United States)

    Rodríguez, Osvaldo; Del Río, Jesús A; Jaramillo, Oscar A; Martínez, Manuel

    2015-01-01

    Estimating the power output is one of the elements that determine the techno-economic feasibility of a renewable project. At present, there is a need to develop reliable methods that achieve this goal, thereby contributing to wind power penetration. In this study, we propose a method for wind power error estimation based on the wind speed measurement error, probability density function, and wind turbine power curves. This method uses the actual wind speed data without prior statistical treatment based on 28 wind turbine power curves, which were fitted by Lagrange's method, to calculate the estimated wind power output and the corresponding error propagation. We found that wind speed percentage errors of 10% were propagated into the power output estimates, thereby yielding an error of 5%. The proposed error propagation complements the traditional power resource assessments. The wind power estimation error also allows us to estimate intervals for the levelized cost of power production or the investment return time. The implementation of this method increases the reliability of techno-economic resource assessment studies.

  10. Stochastic goal-oriented error estimation with memory

    Science.gov (United States)

    Ackmann, Jan; Marotzke, Jochem; Korn, Peter

    2017-11-01

    We propose a stochastic dual-weighted error estimator for the viscous shallow-water equation with boundaries. For this purpose, previous work on memory-less stochastic dual-weighted error estimation is extended by incorporating memory effects. The memory is introduced by describing the local truncation error as a sum of time-correlated random variables. The random variables themselves represent the temporal fluctuations in local truncation errors and are estimated from high-resolution information at near-initial times. The resulting error estimator is evaluated experimentally in two classical ocean-type experiments, the Munk gyre and the flow around an island. In these experiments, the stochastic process is adapted locally to the respective dynamical flow regime. Our stochastic dual-weighted error estimator is shown to provide meaningful error bounds for a range of physically relevant goals. We prove, as well as show numerically, that our approach can be interpreted as a linearized stochastic-physics ensemble.

  11. Estimation of Branch Topology Errors in Power Networks by WLAV State Estimation

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Hong Rae [Soonchunhyang University (Korea)]; Song, Kyung Bin [Keimyung University (Korea)]

    2000-06-01

    The purpose of this paper is to detect and identify topological errors in order to maintain a reliable database for the state estimator. In this paper, a two-stage estimation procedure is used to identify the topology errors. At the first stage, the WLAV state estimator, which is able to remove bad data during the estimation procedure, is run to find the suspected branches at which topology errors take place. The resulting residuals are normalized and the measurements with significant normalized residuals are selected. A set of suspected branches is formed based on these selected measurements; if the selected measurement is a line flow, the corresponding branch is suspected; if it is an injection, then all the branches connecting the injection bus to its immediate neighbors are suspected. A new WLAV state estimator that adds the branch flow errors to the state vector is developed to identify the branch topology errors. Sample cases of a single topology error and of a topology error with a measurement error are applied to the IEEE 14-bus test system. (author). 24 refs., 1 fig., 9 tabs.

  12. Clock error models for simulation and estimation

    International Nuclear Information System (INIS)

    Meditch, J.S.

    1981-10-01

    Mathematical models for the simulation and estimation of errors in precision oscillators used as time references in satellite navigation systems are developed. The results, based on all currently known oscillator error sources, are directly implementable on a digital computer. The simulation formulation is sufficiently flexible to allow for the inclusion or exclusion of individual error sources as desired. The estimation algorithms, following from Kalman filter theory, provide directly for the error analysis of clock errors in both filtering and prediction

  13. Learning time-dependent noise to reduce logical errors: real time error rate estimation in quantum error correction

    Science.gov (United States)

    Huo, Ming-Xia; Li, Ying

    2017-12-01

    Quantum error correction is important to quantum information processing, as it allows us to reliably process information encoded in quantum error correction codes. Efficient quantum error correction benefits from knowledge of the error rates. We propose a protocol for monitoring error rates in real time without interrupting the quantum error correction. No adaptation of the quantum error correction code or its implementation circuit is required. The protocol can be directly applied to the most advanced quantum error correction techniques, e.g., the surface code. A Gaussian process algorithm is used to estimate and predict error rates based on past error correction data. We find that, using these estimated error rates, the probability of error correction failure can be reduced significantly, by a factor that increases with the code distance.

  14. Radiation risk estimation based on measurement error models

    CERN Document Server

    Masiuk, Sergii; Shklyar, Sergiy; Chepurny, Mykola; Likhtarov, Illya

    2017-01-01

    This monograph discusses statistics and risk estimates applied to radiation damage under the presence of measurement errors. The first part covers nonlinear measurement error models, with a particular emphasis on efficiency of regression parameter estimators. In the second part, risk estimation in models with measurement errors is considered. Efficiency of the methods presented is verified using data from radio-epidemiological studies.

  15. Estimation of subcriticality of TCA using 'indirect estimation method for calculation error'

    International Nuclear Information System (INIS)

    Naito, Yoshitaka; Yamamoto, Toshihiro; Arakawa, Takuya; Sakurai, Kiyoshi

    1996-01-01

    To estimate the subcriticality of the neutron multiplication factor in a fissile system, an 'Indirect Estimation Method for Calculation Error' is proposed. This method obtains the calculation error of the neutron multiplication factor by correlating measured values with the corresponding calculated ones. The method was applied to the source multiplication and pulse neutron experiments conducted at TCA, and the calculation error of MCNP 4A was estimated. In the source multiplication method, the deviation of the measured neutron count rate distributions from the calculated ones estimates the accuracy of the calculated keff. In the pulse neutron method, the calculation errors of the prompt neutron decay constants give the accuracy of the calculated keff. (author)

  16. Error estimation and adaptivity for incompressible hyperelasticity

    KAUST Repository

    Whiteley, J.P.

    2014-04-30

    SUMMARY: A Galerkin FEM is developed for nonlinear, incompressible (hyper) elasticity that takes account of nonlinearities in both the strain tensor and the relationship between the strain tensor and the stress tensor. By using suitably defined linearised dual problems with appropriate boundary conditions, a posteriori error estimates are then derived for both linear functionals of the solution and linear functionals of the stress on a boundary, where Dirichlet boundary conditions are applied. A second, higher order method for calculating a linear functional of the stress on a Dirichlet boundary is also presented together with an a posteriori error estimator for this approach. An implementation for a 2D model problem with known solution, where the entries of the strain tensor exhibit large, rapid variations, demonstrates the accuracy and sharpness of the error estimators. Finally, using a selection of model problems, the a posteriori error estimate is shown to provide a basis for effective mesh adaptivity. © 2014 John Wiley & Sons, Ltd.

  17. Comparing computing formulas for estimating concentration ratios

    International Nuclear Information System (INIS)

    Gilbert, R.O.; Simpson, J.C.

    1984-03-01

    This paper provides guidance on the choice of computing formulas (estimators) for estimating concentration ratios and other ratio-type measures of radionuclides and other environmental contaminant transfers between ecosystem components. Mathematical expressions for the expected value of three commonly used estimators (arithmetic mean of ratios, geometric mean of ratios, and the ratio of means) are obtained when the multivariate lognormal distribution is assumed. These expressions are used to explain why these estimators will not in general give the same estimate of the average concentration ratio. They illustrate that the magnitude of the discrepancies depends on the magnitude of measurement biases, and on the variances and correlations associated with spatial heterogeneity and measurement errors. This paper also reports on a computer simulation study that compares the accuracy of eight computing formulas for estimating a ratio relationship that is constant over time and/or space. Statistical models appropriate for both controlled spiking experiments and observational field studies for either normal or lognormal distributions are considered. 24 references, 15 figures, 7 tables
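
    As a rough illustration of why the three estimators disagree, the following sketch simulates correlated lognormal concentration pairs and compares the arithmetic mean of ratios, the geometric mean of ratios, and the ratio of means; the distribution parameters are arbitrary assumptions, not values from the report.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated paired concentrations (e.g., plant vs. soil) from a correlated
# bivariate lognormal distribution; parameters are illustrative only.
n = 5000
mu = np.array([1.0, 2.0])          # log-scale means
sigma = np.array([0.6, 0.8])       # log-scale standard deviations
rho = 0.5
cov = np.array([[sigma[0]**2, rho*sigma[0]*sigma[1]],
                [rho*sigma[0]*sigma[1], sigma[1]**2]])
logs = rng.multivariate_normal(mu, cov, size=n)
plant, soil = np.exp(logs[:, 0]), np.exp(logs[:, 1])

ratios = plant / soil
arith_mean_of_ratios = ratios.mean()
geo_mean_of_ratios = np.exp(np.log(ratios).mean())
ratio_of_means = plant.mean() / soil.mean()

print(f"arithmetic mean of ratios: {arith_mean_of_ratios:.3f}")
print(f"geometric mean of ratios : {geo_mean_of_ratios:.3f}")
print(f"ratio of means           : {ratio_of_means:.3f}")
# Under lognormality the three estimators converge to different quantities, which
# is the discrepancy that the expected-value expressions in the report explain.
```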

  18. Investigation of error sources in regional inverse estimates of greenhouse gas emissions in Canada

    Science.gov (United States)

    Chan, E.; Chan, D.; Ishizawa, M.; Vogel, F.; Brioude, J.; Delcloo, A.; Wu, Y.; Jin, B.

    2015-08-01

    Inversion models can use atmospheric concentration measurements to estimate surface fluxes. This study is an evaluation of the errors in a regional flux inversion model for different provinces of Canada: Alberta (AB), Saskatchewan (SK) and Ontario (ON). Using CarbonTracker model results as the target, the synthetic data experiment analyses examined the impacts of the errors from the Bayesian optimisation method, the prior flux distribution and the atmospheric transport model, as well as their interactions. The scaling factors for different sub-regions were estimated by the Markov chain Monte Carlo (MCMC) simulation and cost function minimization (CFM) methods. The CFM method results are sensitive to the relative size of the assumed model-observation mismatch and prior flux error variances. Experiment results show that the estimation error increases with the number of sub-regions using the CFM method. For the region definitions that lead to realistic flux estimates, the numbers of sub-regions for the western region of AB/SK combined and the eastern region of ON are 11 and 4 respectively. The corresponding annual flux estimation errors for the western and eastern regions using the MCMC (CFM) method are -7 and -3 % (0 and 8 %) respectively, when there is only prior flux error. The estimation errors increase to 36 and 94 % (40 and 232 %) resulting from transport model error alone. When prior and transport model errors co-exist in the inversions, the estimation errors become 5 and 85 % (29 and 201 %). This result indicates that estimation errors are dominated by the transport model error and can in fact cancel each other and propagate to the flux estimates non-linearly. In addition, the posterior flux estimates can differ more from the target fluxes than the prior estimates do, and the posterior uncertainty estimates can be unrealistically small, failing to cover the target. The systematic evaluation of the different components of the inversion

  19. A Simulation-Based Soft Error Estimation Methodology for Computer Systems

    OpenAIRE

    Sugihara, Makoto; Ishihara, Tohru; Hashimoto, Koji; Muroyama, Masanori

    2006-01-01

    This paper proposes a simulation-based soft error estimation methodology for computer systems. Accumulating soft error rates (SERs) of all memories in a computer system results in pessimistic soft error estimation. This is because memory cells are used spatially and temporally and not all soft errors in them make the computer system faulty. Our soft-error estimation methodology considers the locations and the timings of soft errors occurring at every level of memory hierarchy and estimates th...

  20. A posteriori error estimates in voice source recovery

    Science.gov (United States)

    Leonov, A. S.; Sorokin, V. N.

    2017-12-01

    The inverse problem of voice source pulse recovery from a segment of a speech signal is under consideration. A special mathematical model that relates these quantities is used for the solution. A variational method for solving the inverse problem of voice source recovery for a new parametric class of sources, namely piecewise-linear sources (PWL-sources), is proposed. Also, a technique for a posteriori numerical error estimation for the obtained solutions is presented. A computer study of the adequacy of the adopted speech production model with PWL-sources is performed by solving the inverse problem for various types of voice signals, along with a corresponding study of the a posteriori error estimates. Numerical experiments for speech signals show satisfactory properties of the proposed a posteriori error estimates, which represent upper bounds of the possible errors in solving the inverse problem. The estimate of the most probable error in determining the source-pulse shapes is about 7-8% for the investigated speech material. It is noted that a posteriori error estimates can be used as a quality criterion for the obtained voice source pulses in speaker recognition applications.

  1. Estimation of Uncertainty in Aerosol Concentration Measured by Aerosol Sampling System

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Jong Chan; Song, Yong Jae; Jung, Woo Young; Lee, Hyun Chul; Kim, Gyu Tae; Lee, Doo Yong [FNC Technology Co., Yongin (Korea, Republic of)]

    2016-10-15

    FNC Technology Co., Ltd has developed test facilities for aerosol generation, mixing, sampling and measurement under high pressure and high temperature conditions. The aerosol generation system is connected to the aerosol mixing system, which injects a SiO2/ethanol mixture. In the sampling system, a glass fiber membrane filter has been used to measure the average mass concentration. Based on experimental results using a steam-air mixture as the main carrier gas, the uncertainty of the sampled aerosol concentration was estimated by applying the Gaussian error propagation law. The purpose of the tests is to develop a commercial test module for aerosol generation, mixing and sampling applicable to the environmental industry and to safety-related systems in nuclear power plants. For the uncertainty calculation, the sampled aerosol concentration is not measured directly but must be calculated from other quantities. It is a function of the air and steam flow rates, the sampled mass, the sampling time and the condensed steam mass, and the absolute errors of these variables propagate through this function. Using the operating parameters and their individual errors from the aerosol test cases performed at FNC, the uncertainty of the aerosol concentration evaluated by the Gaussian error propagation law is less than 1%. The results of the uncertainty estimation in the aerosol sampling system will be utilized as system performance data.
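
    A minimal sketch of Gaussian (first-order) error propagation applied to a sampled concentration. The functional form of the concentration and all numerical values below are assumptions for illustration; the paper's exact expression is not given in the abstract.

```python
import numpy as np

def propagate(f, values, sigmas, eps=1e-6):
    """First-order (Gaussian) error propagation for independent inputs:
    sigma_f^2 = sum_i (df/dx_i)^2 * sigma_i^2, derivatives by central differences."""
    values = np.asarray(values, dtype=float)
    var = 0.0
    for i, s in enumerate(sigmas):
        step = eps * max(abs(values[i]), 1.0)
        up, dn = values.copy(), values.copy()
        up[i] += step
        dn[i] -= step
        var += ((f(up) - f(dn)) / (2 * step)) ** 2 * s ** 2
    return f(values), np.sqrt(var)

# Hypothetical form of the sampled concentration:
# C = (m_sampled - m_condensed) / [(Q_air + Q_steam) * t]
def concentration(x):
    m_sampled, m_condensed, q_air, q_steam, t = x
    return (m_sampled - m_condensed) / ((q_air + q_steam) * t)

vals   = [5.0e-3, 1.0e-3, 2.0e-4, 1.0e-4, 600.0]   # kg, kg, m3/s, m3/s, s (illustrative)
sigmas = [5.0e-5, 2.0e-5, 1.0e-6, 1.0e-6, 1.0]     # assumed single errors of each input

c, sigma_c = propagate(concentration, vals, sigmas)
print(f"C = {c:.3e} kg/m3  +/- {sigma_c:.3e}  ({100 * sigma_c / c:.2f}%)")
```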

  2. Error estimation for variational nodal calculations

    International Nuclear Information System (INIS)

    Zhang, H.; Lewis, E.E.

    1998-01-01

    Adaptive grid methods are widely employed in finite element solutions to both solid and fluid mechanics problems. Either the size of the element is reduced (h refinement) or the order of the trial function is increased (p refinement) locally to improve the accuracy of the solution without a commensurate increase in computational effort. Success of these methods requires effective local error estimates to determine those parts of the problem domain where the solution should be refined. Adaptive methods have recently been applied to the spatial variables of the discrete ordinates equations. As a first step in the development of adaptive methods that are compatible with the variational nodal method, the authors examine error estimates for use in conjunction with spatial variables. The variational nodal method lends itself well to p refinement because the space-angle trial functions are hierarchical. Here they examine an error estimator for use with spatial p refinement for the diffusion approximation. Eventually, angular refinement will also be considered using spherical harmonics approximations

  3. Estimating Climatological Bias Errors for the Global Precipitation Climatology Project (GPCP)

    Science.gov (United States)

    Adler, Robert; Gu, Guojun; Huffman, George

    2012-01-01

    A procedure is described to estimate bias errors for mean precipitation by using multiple estimates from different algorithms, satellite sources, and merged products. The Global Precipitation Climatology Project (GPCP) monthly product is used as a base precipitation estimate, with other input products included when they are within +/- 50% of the GPCP estimates on a zonal-mean basis (ocean and land separately). The standard deviation s of the included products is then taken to be the estimated systematic, or bias, error. The results allow one to examine monthly climatologies and the annual climatology, producing maps of estimated bias errors, zonal-mean errors, and estimated errors over large areas such as ocean and land for both the tropics and the globe. For ocean areas, where there is the largest question as to absolute magnitude of precipitation, the analysis shows spatial variations in the estimated bias errors, indicating areas where one should have more or less confidence in the mean precipitation estimates. In the tropics, relative bias error estimates (s/m, where m is the mean precipitation) over the eastern Pacific Ocean are as large as 20%, as compared with 10%-15% in the western Pacific part of the ITCZ. An examination of latitudinal differences over ocean clearly shows an increase in estimated bias error at higher latitudes, reaching up to 50%. Over land, the error estimates also locate regions of potential problems in the tropics and larger cold-season errors at high latitudes that are due to snow. An empirical technique to area average the gridded errors (s) is described that allows one to make error estimates for arbitrary areas and for the tropics and the globe (land and ocean separately, and combined). Over the tropics this calculation leads to a relative error estimate for tropical land and ocean combined of 7%, which is considered to be an upper bound because of the lack of sign-of-the-error canceling when integrating over different areas with a
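
    A toy sketch of the selection-and-spread procedure described above: products within ±50% of the base estimate are retained, and the standard deviation s of the retained values gives the estimated bias error and the relative error s/m. The product names and values are invented for illustration.

```python
import numpy as np

# Illustrative monthly-mean precipitation estimates (mm/day) for one zonal band
# from several hypothetical products; the first entry plays the role of GPCP.
products = {
    "GPCP":  2.80,
    "prodA": 3.10,
    "prodB": 2.45,
    "prodC": 4.60,   # outside +/-50% of the GPCP value, so it will be excluded
    "prodD": 2.95,
}

base = products["GPCP"]
included = [v for v in products.values() if 0.5 * base <= v <= 1.5 * base]

m = np.mean(included)            # mean of the included estimates
s = np.std(included, ddof=1)     # spread taken as the estimated systematic (bias) error
print(f"included products: {len(included)} of {len(products)}")
print(f"mean precipitation m = {m:.2f} mm/day")
print(f"estimated bias error s = {s:.2f} mm/day, relative error s/m = {100 * s / m:.0f}%")
```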

  4. Data error effects on net radiation and evapotranspiration estimation

    International Nuclear Information System (INIS)

    Llasat, M.C.; Snyder, R.L.

    1998-01-01

    The objective of this paper is to evaluate the potential error in estimating the net radiation and reference evapotranspiration resulting from errors in the measurement or estimation of weather parameters. A methodology for estimating the net radiation using hourly weather variables measured at a typical agrometeorological station (e.g., solar radiation, temperature and relative humidity) is presented. Then the error propagation analysis is made for net radiation and for reference evapotranspiration. Data from the Raimat weather station, which is located in the Catalonia region of Spain, are used to illustrate the error relationships. The results show that temperature, relative humidity and cloud cover errors have little effect on the net radiation or reference evapotranspiration. A 5°C error in estimating surface temperature leads to errors as big as 30 W m⁻² at high temperature. A 4% solar radiation (Rs) error can cause a net radiation error as big as 26 W m⁻² when Rs ≈ 1000 W m⁻². However, the error is less when cloud cover is calculated as a function of the solar radiation. The absolute error in reference evapotranspiration (ETo) equals the product of the net radiation error and the radiation term weighting factor [W = Δ/(Δ + γ)] in the ETo equation. Therefore, the ETo error varies between 65 and 85% of the Rn error as air temperature increases from about 20° to 40°C. (author)
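
    A small sketch of the radiation-term weighting factor and its effect on the ETo error, assuming the FAO-56 form for Δ and a typical value of γ; the 26 W m⁻² net-radiation error is the example quoted in the abstract, and the exact constants are assumptions.

```python
import numpy as np

# Radiation-term weighting factor W = Delta / (Delta + gamma) and the resulting
# ETo error for a given net-radiation error; constants are typical textbook values.
gamma = 0.066  # psychrometric constant, kPa/degC (approximate, near sea level)

def delta(t_air):
    """Slope of the saturation vapour pressure curve (kPa/degC), FAO-56 form."""
    es = 0.6108 * np.exp(17.27 * t_air / (t_air + 237.3))
    return 4098.0 * es / (t_air + 237.3) ** 2

rn_error = 26.0  # W/m2, e.g. from a 4% solar-radiation error near Rs ~ 1000 W/m2
for t in (20.0, 30.0, 40.0):
    w = delta(t) / (delta(t) + gamma)
    print(f"T = {t:4.1f} degC: W = {w:.2f}, ETo error ~ {w * rn_error:.0f} W/m2 equivalent")
# W rises from roughly 0.69 at 20 degC toward 0.86 at 40 degC, which is why the
# ETo error tracks about 65-85% of the net-radiation error as temperature increases.
```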

  5. Error Covariance Estimation of Mesoscale Data Assimilation

    National Research Council Canada - National Science Library

    Xu, Qin

    2005-01-01

    The goal of this project is to explore and develop new methods of error covariance estimation that will provide necessary statistical descriptions of prediction and observation errors for mesoscale data assimilation...

  6. Error exponents for entanglement concentration

    International Nuclear Information System (INIS)

    Hayashi, Masahito; Koashi, Masato; Matsumoto, Keiji; Morikoshi, Fumiaki; Winter, Andreas

    2003-01-01

    Consider entanglement concentration schemes that convert n identical copies of a pure state into a maximally entangled state of a desired size with success probability being close to one in the asymptotic limit. We give the distillable entanglement, the number of Bell pairs distilled per copy, as a function of an error exponent, which represents the rate of decrease in failure probability as n tends to infinity. The formula fills the gap between the least upper bound of distillable entanglement in probabilistic concentration, which is the well-known entropy of entanglement, and the maximum attained in deterministic concentration. The method of types in information theory enables the detailed analysis of the distillable entanglement in terms of the error rate. In addition to the probabilistic argument, we consider another type of entanglement concentration scheme, where the initial state is deterministically transformed into a (possibly mixed) final state whose fidelity to a maximally entangled state of a desired size converges to one in the asymptotic limit. We show that the same formula as in the probabilistic argument is valid for the argument on fidelity by replacing the success probability with the fidelity. Furthermore, we also discuss entanglement yield when optimal success probability or optimal fidelity converges to zero in the asymptotic limit (strong converse), and give the explicit formulae for those cases

  7. Estimation of the measurement error of eccentrically installed orifice plates

    Energy Technology Data Exchange (ETDEWEB)

    Barton, Neil; Hodgkinson, Edwin; Reader-Harris, Michael

    2005-07-01

    The presentation discusses methods for simulation and estimation of flow measurement errors. The main conclusions are: Computational Fluid Dynamics (CFD) simulation methods and published test measurements have been used to estimate the error of a metering system over a period when its orifice plates were eccentric and when leaking O-rings allowed some gas to bypass the meter. It was found that plate eccentricity effects would result in errors of between -2% and -3% for individual meters. Validation against test data suggests that these estimates of error should be within 1% of the actual error, but it is unclear whether the simulations over-estimate or under-estimate the error. Simulations were also run to assess how leakage at the periphery affects the metering error. Various alternative leakage scenarios were modelled and it was found that the leakage rate has an effect on the error, but that the leakage distribution does not. Correction factors, based on the CFD results, were then used to predict the system's mis-measurement over a three-year period.

  8. Statistical errors in Monte Carlo estimates of systematic errors

    Energy Technology Data Exchange (ETDEWEB)

    Roe, Byron P. [Department of Physics, University of Michigan, Ann Arbor, MI 48109 (United States)]. E-mail: byronroe@umich.edu

    2007-01-01

    For estimating the effects of a number of systematic errors on a data sample, one can generate Monte Carlo (MC) runs with systematic parameters varied and examine the change in the desired observed result. Two methods are often used. In the unisim method, the systematic parameters are varied one at a time by one standard deviation, each parameter corresponding to an MC run. In the multisim method, each MC run has all of the parameters varied; the amount of variation is chosen from the expected distribution of each systematic parameter, usually assumed to be a normal distribution. The variance of the overall systematic error determination is derived for each of the two methods and comparisons are made between them. If one focuses not on the error in the prediction of an individual systematic error, but on the overall error due to all systematic errors in the error matrix element in data bin m, the number of events needed is strongly reduced because of the averaging effect over all of the errors. For the simple models presented here the multisim model was far better if the statistical error in the MC samples was larger than an individual systematic error, while for the reverse case, the unisim model was better. Exact formulas and formulas for the simple toy models are presented so that realistic calculations can be made. The calculations in the present note are valid if the errors are in a linear region. If that region extends sufficiently far, one can have the unisims or multisims correspond to k standard deviations instead of one. This reduces the number of events required by a factor of k².
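
    A toy numerical comparison of the two methods, in the spirit of the simple models mentioned above but not the paper's exact formulas: one data bin responds linearly to each systematic parameter, every MC run carries statistical noise, and the spread of the unisim and multisim variance determinations is compared over many repeated trials. All numbers are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy linear model: the predicted count in one data bin shifts by a_j for a
# one-sigma change of systematic parameter j; the true total systematic
# variance is sum(a_j^2).  Every MC run carries statistical noise sigma_mc.
a = np.array([1.0, 0.8, 1.2, 0.5])       # per-parameter effects (illustrative)
true_var = np.sum(a ** 2)
sigma_mc = 2.0                            # statistical error of each MC run
n_multisim = 100                          # multisim runs per trial
n_trials = 2000                           # repeated trials to measure estimator spread

uni_est, multi_est = [], []
for _ in range(n_trials):
    # unisim: one +1-sigma run per parameter minus a common nominal run, then
    # subtract the known statistical contribution 2*sigma_mc^2 per parameter.
    nominal = rng.normal(0.0, sigma_mc)
    shifts = a + rng.normal(0.0, sigma_mc, size=a.size) - nominal
    uni_est.append(np.sum(shifts ** 2) - 2 * a.size * sigma_mc ** 2)

    # multisim: all parameters varied in every run; the sample variance of the
    # predictions minus the known statistical variance estimates the systematics.
    alphas = rng.normal(0.0, 1.0, size=(n_multisim, a.size))
    preds = alphas @ a + rng.normal(0.0, sigma_mc, size=n_multisim)
    multi_est.append(np.var(preds, ddof=1) - sigma_mc ** 2)

print(f"true systematic variance        : {true_var:.2f}")
print(f"unisim   mean (spread) estimate : {np.mean(uni_est):.2f} ({np.std(uni_est):.2f})")
print(f"multisim mean (spread) estimate : {np.mean(multi_est):.2f} ({np.std(multi_est):.2f})")
# With sigma_mc larger than the individual a_j (as here), the unisim determination
# fluctuates far more than the multisim one, matching the regime described above.
```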

  9. Statistical errors in Monte Carlo estimates of systematic errors

    International Nuclear Information System (INIS)

    Roe, Byron P.

    2007-01-01

    For estimating the effects of a number of systematic errors on a data sample, one can generate Monte Carlo (MC) runs with systematic parameters varied and examine the change in the desired observed result. Two methods are often used. In the unisim method, the systematic parameters are varied one at a time by one standard deviation, each parameter corresponding to an MC run. In the multisim method, each MC run has all of the parameters varied; the amount of variation is chosen from the expected distribution of each systematic parameter, usually assumed to be a normal distribution. The variance of the overall systematic error determination is derived for each of the two methods and comparisons are made between them. If one focuses not on the error in the prediction of an individual systematic error, but on the overall error due to all systematic errors in the error matrix element in data bin m, the number of events needed is strongly reduced because of the averaging effect over all of the errors. For the simple models presented here the multisim model was far better if the statistical error in the MC samples was larger than an individual systematic error, while for the reverse case, the unisim model was better. Exact formulas and formulas for the simple toy models are presented so that realistic calculations can be made. The calculations in the present note are valid if the errors are in a linear region. If that region extends sufficiently far, one can have the unisims or multisims correspond to k standard deviations instead of one. This reduces the number of events required by a factor of k²

  10. Using cell phone location to assess misclassification errors in air pollution exposure estimation.

    Science.gov (United States)

    Yu, Haofei; Russell, Armistead; Mulholland, James; Huang, Zhijiong

    2018-02-01

    Air pollution epidemiologic and health impact studies often rely on home addresses to estimate individual subjects' pollution exposure. In this study, we used detailed cell phone location data, the call detail record (CDR), to account for the impact of spatiotemporal subject mobility on estimates of ambient air pollutant exposure. This approach was applied to a sample of 9886 unique SIM card IDs in Shenzhen, China, on one mid-week day in October 2013. Hourly ambient concentrations of six chosen pollutants were simulated by the Community Multi-scale Air Quality model fused with observational data, and matched with detailed location data for these IDs. The results were compared with exposure estimates using home addresses to assess potential exposure misclassification errors. We found that the misclassification errors are likely to be substantial when home location alone is used. The CDR-based approach indicates that the home-based approach tends to overestimate exposures for subjects with higher exposure levels and underestimate exposures for those with lower exposure levels. Our results show that the cell phone location-based approach can be used to assess exposure misclassification error and has the potential for improving exposure estimates in air pollution epidemiology studies.
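
    A minimal sketch contrasting a home-based exposure estimate with a mobility-weighted one; the zone grid, concentrations, and the subject's hourly trajectory below are all invented stand-ins for the CDR and modeled concentration fields used in the study.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical hourly ambient concentrations (ug/m3) for a few zones, and one
# subject's hourly zone index from cell-phone locations vs. a fixed home zone.
n_zones, n_hours = 5, 24
conc = rng.uniform(20, 80, size=(n_zones, n_hours))   # modeled zone-hour concentrations

home_zone = 0
visited = np.array([0]*7 + [2]*2 + [3]*8 + [2]*2 + [0]*5)   # assumed daily trajectory

home_based = conc[home_zone, :].mean()                      # exposure using home only
mobility_based = conc[visited, np.arange(n_hours)].mean()   # exposure following the phone

print(f"home-based exposure estimate    : {home_based:.1f} ug/m3")
print(f"mobility-based exposure estimate: {mobility_based:.1f} ug/m3")
print(f"misclassification error         : {home_based - mobility_based:+.1f} ug/m3")
```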

  11. Fractional kalman filter to estimate the concentration of air pollution

    Science.gov (United States)

    Vita Oktaviana, Yessy; Apriliani, Erna; Khusnul Arif, Didik

    2018-04-01

    Air pollution has an important effect on environmental quality and on the quality of human life. Air pollution can be caused by natural sources or human activities. One example pollutant is ozone, a harmful gas formed from NOx and volatile organic compounds (VOCs) emitted from various sources. The air pollution problem can be modeled by TAPM-CTM (The Air Pollution Model with Chemical Transport Model), which gives the concentration of pollutants in the air. It is therefore important to estimate air pollutant concentrations; such estimates can be used to forecast future pollutant concentrations and to maintain air quality. In this research, an algorithm based on the fractional Kalman filter is developed to solve the air pollution model. The model is first discretized and then estimated by the method. The results show that the fractional Kalman filter estimate is more accurate than the standard Kalman filter estimate, with accuracy assessed by the root mean square error (RMSE).

  12. Error Estimation and Accuracy Improvements in Nodal Transport Methods

    International Nuclear Information System (INIS)

    Zamonsky, O.M.

    2000-01-01

    The accuracy of the solutions produced by the Discrete Ordinates neutron transport nodal methods is analyzed. The new numerical methodologies obtained increase the accuracy of the analyzed schemes and give a posteriori error estimators. The accuracy improvement is obtained with new equations that make the numerical procedure free of truncation errors and by proposing spatial reconstructions of the angular fluxes that are more accurate than those used until now. An a posteriori error estimator is rigorously obtained for one-dimensional systems that, for certain types of problems, allows the accuracy of the solutions to be quantified. From comparisons with the one-dimensional results, an a posteriori error estimator is also obtained for multidimensional systems. Local indicators, which quantify the spatial distribution of the errors, are obtained by decomposition of the mentioned estimators. This makes the proposed methodology suitable for adaptive calculations. Some numerical examples are presented to validate the theoretical developments and to illustrate the ranges where the proposed approximations are valid.

  13. Fisher classifier and its probability of error estimation

    Science.gov (United States)

    Chittineni, C. B.

    1979-01-01

    Computationally efficient expressions are derived for estimating the probability of error using the leave-one-out method. The optimal threshold for the classification of patterns projected onto Fisher's direction is derived. A simple generalization of the Fisher classifier to multiple classes is presented. Computational expressions are developed for estimating the probability of error of the multiclass Fisher classifier.

  14. Errors and parameter estimation in precipitation-runoff modeling: 1. Theory

    Science.gov (United States)

    Troutman, Brent M.

    1985-01-01

    Errors in complex conceptual precipitation-runoff models may be analyzed by placing them into a statistical framework. This amounts to treating the errors as random variables and defining the probabilistic structure of the errors. By using such a framework, a large array of techniques, many of which have been presented in the statistical literature, becomes available to the modeler for quantifying and analyzing the various sources of error. A number of these techniques are reviewed in this paper, with special attention to the peculiarities of hydrologic models. Known methodologies for parameter estimation (calibration) are particularly applicable for obtaining physically meaningful estimates and for explaining how bias in runoff prediction caused by model error and input error may contribute to bias in parameter estimation.

  15. Statistical errors in Monte Carlo estimates of systematic errors

    Science.gov (United States)

    Roe, Byron P.

    2007-01-01

    For estimating the effects of a number of systematic errors on a data sample, one can generate Monte Carlo (MC) runs with systematic parameters varied and examine the change in the desired observed result. Two methods are often used. In the unisim method, the systematic parameters are varied one at a time by one standard deviation, each parameter corresponding to an MC run. In the multisim method, each MC run has all of the parameters varied; the amount of variation is chosen from the expected distribution of each systematic parameter, usually assumed to be a normal distribution. The variance of the overall systematic error determination is derived for each of the two methods and comparisons are made between them. If one focuses not on the error in the prediction of an individual systematic error, but on the overall error due to all systematic errors in the error matrix element in data bin m, the number of events needed is strongly reduced because of the averaging effect over all of the errors. For the simple models presented here the multisim model was far better if the statistical error in the MC samples was larger than an individual systematic error, while for the reverse case, the unisim model was better. Exact formulas and formulas for the simple toy models are presented so that realistic calculations can be made. The calculations in the present note are valid if the errors are in a linear region. If that region extends sufficiently far, one can have the unisims or multisims correspond to k standard deviations instead of one. This reduces the number of events required by a factor of k². The specific terms unisim and multisim were coined by Peter Meyers and Steve Brice, respectively, for the MiniBooNE experiment. However, the concepts have been developed over time and have been in general use for some time.

  16. Error estimates for ice discharge calculated using the flux gate approach

    Science.gov (United States)

    Navarro, F. J.; Sánchez Gámez, P.

    2017-12-01

    Ice discharge to the ocean is usually estimated using the flux gate approach, in which ice flux is calculated through predefined flux gates close to the marine glacier front. However, published results usually lack a proper error estimate. In the flux calculation, both errors in cross-sectional area and errors in velocity are relevant. While there are well-established procedures for estimating the errors in velocity, the calculation of the error in the cross-sectional area requires the availability of ground penetrating radar (GPR) profiles transverse to the ice-flow direction. In this contribution, we use Operation IceBridge GPR profiles collected in Ellesmere and Devon Islands, Nunavut, Canada, to compare the cross-sectional areas estimated using various approaches with the cross-sections estimated from GPR ice-thickness data. These error estimates are combined with those for ice velocities calculated from Sentinel-1 SAR data to get the error in ice discharge. Our preliminary results suggest, regarding area, that the parabolic cross-section approaches perform better than the quartic ones, which tend to overestimate the cross-sectional area for flight lines close to the central flowline. Furthermore, the results show that regional ice-discharge estimates made using parabolic approaches provide reasonable results, but estimates for individual glaciers can have large errors, up to 20% in cross-sectional area.
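
    A short sketch of how velocity and cross-sectional-area errors combine in quadrature for a flux-gate discharge estimate; the glacier numbers below are assumptions for illustration, not values from the study.

```python
import numpy as np

# Ice discharge through a flux gate, D = rho_ice * v * A, with the relative error
# combined in quadrature from the velocity and cross-sectional-area errors.
rho_ice = 900.0                 # kg/m3
v, sigma_v = 150.0, 15.0        # m/yr gate-averaged velocity and its error (illustrative)
A, sigma_A = 2.0e5, 4.0e4       # m2 cross-sectional area and a ~20% area error

D = rho_ice * v * A / 1e12                    # Gt/yr
rel_err = np.hypot(sigma_v / v, sigma_A / A)  # first-order propagation for a product
print(f"discharge D = {D:.3f} Gt/yr +/- {rel_err * D:.3f} ({100 * rel_err:.0f}%)")
```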

  17. Estimating the acute health effects of coarse particulate matter accounting for exposure measurement error.

    Science.gov (United States)

    Chang, Howard H; Peng, Roger D; Dominici, Francesca

    2011-10-01

    In air pollution epidemiology, there is a growing interest in estimating the health effects of coarse particulate matter (PM) with aerodynamic diameter between 2.5 and 10 μm. Coarse PM concentrations can exhibit considerable spatial heterogeneity because the particles travel shorter distances and do not remain suspended in the atmosphere for an extended period of time. In this paper, we develop a modeling approach for estimating the short-term effects of air pollution in time series analysis when the ambient concentrations vary spatially within the study region. Specifically, our approach quantifies the error in the exposure variable by characterizing, on any given day, the disagreement in ambient concentrations measured across monitoring stations. This is accomplished by viewing monitor-level measurements as error-prone repeated measurements of the unobserved population average exposure. Inference is carried out in a Bayesian framework to fully account for uncertainty in the estimation of model parameters. Finally, by using different exposure indicators, we investigate the sensitivity of the association between coarse PM and daily hospital admissions based on a recent national multisite time series analysis. Among Medicare enrollees from 59 US counties between the period 1999 and 2005, we find a consistent positive association between coarse PM and same-day admission for cardiovascular diseases.

  18. Estimation error algorithm at analysis of beta-spectra

    International Nuclear Information System (INIS)

    Bakovets, N.V.; Zhukovskij, A.I.; Zubarev, V.N.; Khadzhinov, E.M.

    2005-01-01

    This work describes an error estimation algorithm for operations on beta spectra and compares the theoretical and experimental errors in the processing of beta-channel data. (authors)

  19. Demonstration Integrated Knowledge-Based System for Estimating Human Error Probabilities

    Energy Technology Data Exchange (ETDEWEB)

    Auflick, Jack L.

    1999-04-21

    Human Reliability Analysis (HRA) currently comprises at least 40 different methods that are used to analyze, predict, and evaluate human performance in probabilistic terms. Systematic HRAs allow analysts to examine human-machine relationships, identify error-likely situations, and provide estimates of relative frequencies for human errors on critical tasks, highlighting the most beneficial areas for system improvements. Unfortunately, each of HRA's methods has a different philosophical approach, thereby producing estimates of human error probabilities (HEPs) that are a better or worse match to the error-likely situation of interest. Poor selection of methodology, or improper application of techniques, can produce invalid HEP estimates, and such erroneous estimation of potential human failure can have severe consequences in terms of the estimated occurrence of injury, death, and/or property damage.

  20. A Posteriori Error Estimation for Finite Element Methods and Iterative Linear Solvers

    Energy Technology Data Exchange (ETDEWEB)

    Melboe, Hallgeir

    2001-10-01

    This thesis addresses a posteriori error estimation for finite element methods and iterative linear solvers. Adaptive finite element methods have gained a lot of popularity over the last decades due to their ability to produce accurate results with limited computer power. In these methods a posteriori error estimates play an essential role. Not only do they give information about how large the total error is, they also indicate which parts of the computational domain should be given a more sophisticated treatment in order to reduce the error. A posteriori error estimates are traditionally aimed at estimating the global error, but more recently so called goal oriented error estimators have attracted a lot of interest. The name reflects the fact that they estimate the error in user-defined local quantities. In this thesis the main focus is on global error estimators for highly stretched grids and goal oriented error estimators for flow problems on regular grids. Numerical methods for partial differential equations, such as finite element methods and other similar techniques, typically result in a linear system of equations that needs to be solved. Usually such systems are solved using some iterative procedure which, due to the finite number of iterations, introduces an additional error. Most such algorithms apply the residual in the stopping criterion, whereas the control of the actual error may be rather poor. A secondary focus in this thesis is on estimating the errors that are introduced during this last part of the solution procedure. The thesis contains new theoretical results regarding the behaviour of some well known, and a few new, a posteriori error estimators for finite element methods on anisotropic grids. Further, a goal oriented strategy for the computation of forces in flow problems is devised and investigated. Finally, an approach for estimating the actual errors associated with the iterative solution of linear systems of equations is suggested. (author)

  1. Approaches to relativistic positioning around Earth and error estimations

    Science.gov (United States)

    Puchades, Neus; Sáez, Diego

    2016-01-01

    In the context of relativistic positioning, the coordinates of a given user may be calculated by using suitable information broadcast by a 4-tuple of satellites. Our 4-tuples belong to the Galileo constellation. Recently, we estimated the positioning errors due to uncertainties in the satellite world lines (U-errors). A distribution of U-errors was obtained, at various times, in a set of points covering a large region surrounding Earth. Here, the positioning errors associated to the simplifying assumption that photons move in Minkowski space-time (S-errors) are estimated and compared with the U-errors. Both errors have been calculated for the same points and times to make comparisons possible. For a certain realistic modeling of the world line uncertainties, the estimated S-errors have proved to be smaller than the U-errors, which shows that the approach based on the assumption that the Earth's gravitational field produces negligible effects on photons may be used in a large region surrounding Earth. The applicability of this approach - which simplifies numerical calculations - to positioning problems, and the usefulness of our S-error maps, are pointed out. A better approach, based on the assumption that photons move in the Schwarzschild space-time governed by an idealized Earth, is also analyzed. More accurate descriptions of photon propagation involving non symmetric space-time structures are not necessary for ordinary positioning and spacecraft navigation around Earth.

  2. KMRR thermal power measurement error estimation

    International Nuclear Information System (INIS)

    Rhee, B.W.; Sim, B.S.; Lim, I.C.; Oh, S.K.

    1990-01-01

    The thermal power measurement error of the Korea Multi-purpose Research Reactor has been estimated by a statistical Monte Carlo method, and compared with the estimates obtained by other methods, including deterministic and statistical approaches. The results show that the specified thermal power measurement error of 5% cannot be achieved if commercial RTDs are used to measure the coolant temperatures of the secondary cooling system, and that the error can be reduced below the requirement if the commercial RTDs are replaced by precision RTDs. The possible range of the thermal power control operation has been identified to be from 100% to 20% of full power.
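
    The statistical Monte Carlo idea can be sketched as follows; the flow rate, temperatures, and RTD uncertainties below are hypothetical stand-ins chosen only to show how sensor accuracy drives the propagated error of the thermal power P = m_dot * cp * dT.

```python
import numpy as np

# Hedged sketch with hypothetical values (not KMRR data): Monte Carlo propagation
# of temperature-sensor uncertainties into the thermal power P = m_dot * cp * dT.
rng = np.random.default_rng(1)
N = 100_000

m_dot = 300.0               # kg/s, assumed coolant flow rate
cp = 4.18e3                 # J/(kg K), water
T_in, T_out = 35.0, 45.0    # deg C, assumed mean temperatures

# Assumed 1-sigma temperature uncertainties for two RTD grades
for name, sigma_T in [("commercial RTD", 0.5), ("precision RTD", 0.1)]:
    dT = (T_out + rng.normal(0, sigma_T, N)) - (T_in + rng.normal(0, sigma_T, N))
    P = m_dot * cp * dT
    print(f"{name}: relative power error ~ {100 * P.std() / P.mean():.2f} %")
```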

  3. Error estimation in plant growth analysis

    Directory of Open Access Journals (Sweden)

    Andrzej Gregorczyk

    2014-01-01

    Full Text Available A scheme is presented for calculating the errors of dry matter values which occur during approximation of data with growth curves, determined by the analytical method (logistic function) and by the numerical method (Richards function). Further formulae are shown which describe the absolute errors of growth characteristics: Growth rate (GR), Relative growth rate (RGR), Unit leaf rate (ULR) and Leaf area ratio (LAR). Calculation examples concerning the growth course of oats and maize plants are given. A critical analysis of the obtained estimates has been carried out. The expediency of the joint application of statistical methods and error calculus in plant growth analysis has been demonstrated.

  4. An Empirical State Error Covariance Matrix for Batch State Estimation

    Science.gov (United States)

    Frisbee, Joseph H., Jr.

    2011-01-01

    State estimation techniques serve effectively to provide mean state estimates. However, the state error covariance matrices provided as part of these techniques inspire only limited confidence in their ability to adequately describe the uncertainty in the estimated states. A specific problem with the traditional form of state error covariance matrices is that they represent only a mapping of the assumed observation error characteristics into the state space. Any errors that arise from other sources (environment modeling, precision, etc.) are not directly represented in a traditional, theoretical state error covariance matrix. Consider that an actual observation contains only measurement error and that an estimated observation contains all other errors, known and unknown. It then follows that a measurement residual (the difference between expected and observed measurements) contains all errors for that measurement. Therefore, a direct and appropriate inclusion of the actual measurement residuals in the state error covariance matrix will result in an empirical state error covariance matrix. This empirical state error covariance matrix will fully account for the error in the state estimate. By way of a literal reinterpretation of the equations involved in the weighted least squares estimation algorithm, it is possible to arrive at an appropriate, and formally correct, empirical state error covariance matrix. The first specific step of the method is to use the average form of the weighted measurement residual variance performance index rather than its usual total weighted residual form. Next it is helpful to interpret the solution to the normal equations as the average of a collection of sample vectors drawn from a hypothetical parent population. From here, using a standard statistical analysis approach, it follows directly how to determine the standard empirical state error covariance matrix. This matrix will contain the total uncertainty in the
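
    A loosely related sketch (not the paper's formal derivation): one simple way to let actual residuals inform the state covariance is to scale the theoretical weighted-least-squares covariance by the average weighted residual variance. All matrices and noise levels below are invented.

```python
import numpy as np

# Hedged sketch: residual-informed scaling of the theoretical WLS covariance.
rng = np.random.default_rng(2)

m, n = 200, 4
H = rng.normal(size=(m, n))              # measurement partials
x_true = np.array([1.0, -2.0, 0.5, 3.0])
sigma_assumed = 0.1                      # assumed observation noise
sigma_actual = 0.25                      # actual noise incl. unmodeled effects
y = H @ x_true + rng.normal(0, sigma_actual, m)

W = np.eye(m) / sigma_assumed**2
P_theory = np.linalg.inv(H.T @ W @ H)    # traditional state error covariance
x_hat = P_theory @ H.T @ W @ y

r = y - H @ x_hat                        # measurement residuals
s2 = (r @ W @ r) / (m - n)               # average weighted residual variance
P_empirical = s2 * P_theory              # covariance rescaled by the residuals

print("theoretical sigma(x0)      :", np.sqrt(P_theory[0, 0]))
print("residual-informed sigma(x0):", np.sqrt(P_empirical[0, 0]))
```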

  5. Selection of anchor values for human error probability estimation

    International Nuclear Information System (INIS)

    Buffardi, L.C.; Fleishman, E.A.; Allen, J.A.

    1989-01-01

    There is a need for more dependable information to assist in the prediction of human errors in nuclear power environments. The major objective of the current project is to establish guidelines for using error probabilities from other task settings to estimate errors in the nuclear environment. This involves: (1) identifying critical nuclear tasks, (2) discovering similar tasks in non-nuclear environments, (3) finding error data for non-nuclear tasks, and (4) establishing error-rate values for the nuclear tasks based on the non-nuclear data. A key feature is the application of a classification system to nuclear and non-nuclear tasks to evaluate their similarities and differences in order to provide a basis for generalizing human error estimates across tasks. During the first eight months of the project, several classification systems have been applied to a sample of nuclear tasks. They are discussed in terms of their potential for establishing task equivalence and transferability of human error rates across situations

  6. Aniseikonia quantification: error rate of rule of thumb estimation.

    Science.gov (United States)

    Lubkin, V; Shippman, S; Bennett, G; Meininger, D; Kramer, P; Poppinga, P

    1999-01-01

    To find the error rate in quantifying aniseikonia by using "Rule of Thumb" estimation in comparison with proven space eikonometry. Study 1: 24 adult pseudophakic individuals were measured for anisometropia, and astigmatic interocular difference. Rule of Thumb quantification for prescription was calculated and compared with aniseikonia measurement by the classical Essilor Projection Space Eikonometer. Study 2: parallel analysis was performed on 62 consecutive phakic patients from our strabismus clinic group. Frequency of error: For Group 1 (24 cases): 5 (or 21%) were equal (i.e., 1% or less difference); 16 (or 67%) were greater (more than 1% different); and 3 (13%) were less by Rule of Thumb calculation in comparison to aniseikonia determined on the Essilor eikonometer. For Group 2 (62 cases): 45 (or 73%) were equal (1% or less); 10 (or 16%) were greater; and 7 (or 11%) were lower in the Rule of Thumb calculations in comparison to Essilor eikonometry. Magnitude of error: In Group 1, in 10/24 (29%) aniseikonia by Rule of Thumb estimation was 100% or more greater than by space eikonometry, and in 6 of those ten by 200% or more. In Group 2, in 4/62 (6%) aniseikonia by Rule of Thumb estimation was 200% or more greater than by space eikonometry. The frequency and magnitude of apparent clinical errors of Rule of Thumb estimation are disturbingly large. This problem is greatly magnified by the time, effort and cost of prescribing and executing an aniseikonic correction for a patient. The higher the refractive error, the greater the anisometropia, and the worse the errors in Rule of Thumb estimation of aniseikonia. Accurate eikonometric methods and devices should be employed in all cases where such measurements can be made. Rule of Thumb estimations should be limited to cases where such subjective testing and measurement cannot be performed, as in infants after unilateral cataract surgery.

  7. Subroutine library for error estimation of matrix computation (Ver. 1.0)

    International Nuclear Information System (INIS)

    Ichihara, Kiyoshi; Shizawa, Yoshihisa; Kishida, Norio

    1999-03-01

    'Subroutine Library for Error Estimation of Matrix Computation' is a subroutine library which aids users in obtaining the error ranges of linear system solutions or of Hermitian matrix eigenvalues. This library contains routines for both sequential computers and parallel computers. The subroutines for linear system error estimation calculate norms of residual vectors, matrices' condition numbers, error bounds of solutions and so on. The subroutines for error estimation of Hermitian matrix eigenvalues derive the error ranges of the eigenvalues according to the Korn-Kato formula. The test matrix generators supply matrices that appear in mathematical research, randomly generated matrices, and matrices that appear in application programs. This user's manual contains a brief mathematical background on error analysis in linear algebra and the usage of the subroutines. (author)
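
    A minimal sketch of the kind of quantity such routines report, the classical residual-based forward-error bound for a linear system; here the norms and the condition number are computed with NumPy rather than with the library itself.

```python
import numpy as np

# Classical forward-error bound: ||x - x_hat|| / ||x|| <= cond(A) * ||r|| / ||b||
rng = np.random.default_rng(3)
n = 100
A = rng.normal(size=(n, n))
x_true = rng.normal(size=n)
b = A @ x_true

x_hat = np.linalg.solve(A, b)            # computed solution
r = b - A @ x_hat                        # residual vector
bound = np.linalg.cond(A) * np.linalg.norm(r) / np.linalg.norm(b)
actual = np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true)

print("error bound :", bound)
print("actual error:", actual)
```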

  8. Estimation of Alcohol Concentration of Red Wine Based on Cole-Cole Plot

    Science.gov (United States)

    Watanabe, Kota; Taka, Yoshinori; Fujiwara, Osamu

    To evaluate the quality of wine, we previously measured the complex relative permittivity of wine in the frequency range from 10 MHz to 6 GHz with a network analyzer, and suggested that the maturity and alcohol concentration of wine can be estimated simultaneously from the Cole-Cole plot. Although the absolute accuracy has not yet been examined, this method may enable the alcohol concentration of alcoholic beverages to be estimated simply, without any distillation equipment. In this study, to investigate the accuracy of estimating the alcohol concentration of wine from its Cole-Cole plots, we measured the complex relative permittivity of pure water and of diluted ethanol solutions from 100 MHz to 40 GHz, and obtained the dependence of the Cole-Cole plot parameters on alcohol concentration and temperature. By using these results as calibration data, we estimated the alcohol concentration of red wine from the Cole-Cole plots and compared it with the value measured by a distillation method. As a result, we have confirmed that the estimated alcohol concentration of red wine agrees with the measured value to within an absolute error of less than 1%.
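
    A minimal sketch of the Cole-Cole relaxation model that underlies such a calibration; the parameter values below are illustrative placeholders, not the paper's measured water or ethanol-solution data.

```python
import numpy as np

# Cole-Cole model for the complex relative permittivity of a polar liquid.
def cole_cole(freq_hz, eps_inf, d_eps, tau, alpha):
    w = 2 * np.pi * freq_hz
    return eps_inf + d_eps / (1 + (1j * w * tau) ** (1 - alpha))

freq = np.logspace(8, np.log10(40e9), 200)                # 100 MHz to 40 GHz
eps_a = cole_cole(freq, 5.2, 73.0, 8.3e-12, 0.00)         # roughly water-like
eps_b = cole_cole(freq, 4.5, 65.0, 12e-12, 0.05)          # hypothetical ethanol mix

# The Cole-Cole plot is the loss eps'' (= -Im eps) versus eps'; the size and
# center of the resulting arc shift with alcohol concentration, which is what
# the calibration curves exploit.
arc_a = np.c_[eps_a.real, -eps_a.imag]
arc_b = np.c_[eps_b.real, -eps_b.imag]
print("water-like arc, peak loss :", arc_a[:, 1].max())
print("ethanol-mix arc, peak loss:", arc_b[:, 1].max())
```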

  9. Estimation of error fields from ferromagnetic parts in ITER

    Energy Technology Data Exchange (ETDEWEB)

    Oliva, A. Bonito [Fusion for Energy (Spain); Chiariello, A.G.; Formisano, A.; Martone, R. [Ass. EURATOM/ENEA/CREATE, Dip. di Ing. Industriale e dell’Informazione, Seconda Università di Napoli, Via Roma 29, I-81031 Napoli (Italy); Portone, A., E-mail: alfredo.portone@f4e.europa.eu [Fusion for Energy (Spain); Testoni, P. [Fusion for Energy (Spain)

    2013-10-15

    Highlights: ► The paper deals with error fields generated in ITER by magnetic masses. ► Magnetization state is computed from simplified FEM models. ► Closed-form expressions adopted for the flux density of magnetized parts are given. ► Such expressions simplify the estimation of the effect of iron pieces (or their absence) on the error field. -- Abstract: Error fields in tokamaks are small departures from the exact axisymmetry of the ideal magnetic field configuration. Their reduction below a threshold value by the error field correction coils is essential, since sufficiently large static error fields lead to discharge disruption. The error fields originate not only from magnet fabrication and installation tolerances, from the joints and from the busbars, but also from the presence of ferromagnetic elements. It was shown that superconducting joints, feeders and busbars play a secondary role; however, in order to estimate the importance of each possible error field source, rough evaluations can be very useful because they provide an order of magnitude of the corresponding effect and, therefore, a ranking of the requests for in-depth analysis. The paper proposes a two-step procedure. The first step aims to get the approximate magnetization state of the ferromagnetic parts; the second aims to estimate the full 3D error field over the whole volume using equivalent sources for the magnetic masses and taking advantage of well-assessed approximate closed-form expressions, well suited for far-distance effects.

  10. Error Estimation for Indoor 802.11 Location Fingerprinting

    DEFF Research Database (Denmark)

    Lemelson, Hendrik; Kjærgaard, Mikkel Baun; Hansen, Rene

    2009-01-01

    providers could adapt their delivered services based on the estimated position error to achieve a higher service quality. Finally, system operators could use the information to inspect whether a location system provides satisfactory positioning accuracy throughout the covered area. For position error...

  11. Bayesian ensemble approach to error estimation of interatomic potentials

    DEFF Research Database (Denmark)

    Frederiksen, Søren Lund; Jacobsen, Karsten Wedel; Brown, K.S.

    2004-01-01

    Using a Bayesian approach a general method is developed to assess error bars on predictions made by models fitted to data. The error bars are estimated from fluctuations in ensembles of models sampling the model-parameter space with a probability density set by the minimum cost. The method...... is applied to the development of interatomic potentials for molybdenum using various potential forms and databases based on atomic forces. The calculated error bars on elastic constants, gamma-surface energies, structural energies, and dislocation properties are shown to provide realistic estimates...

  12. Parts of the Whole: Error Estimation for Science Students

    Directory of Open Access Journals (Sweden)

    Dorothy Wallace

    2017-01-01

    Full Text Available It is important for science students to understand not only how to estimate error sizes in measurement data, but also to see how these errors contribute to errors in conclusions they may make about the data. Relatively small errors in measurement, errors in assumptions, and roundoff errors in computation may result in large error bounds on computed quantities of interest. In this column, we look closely at a standard method for measuring the volume of cancer tumor xenografts to see how small errors in each of these three factors may contribute to relatively large observed errors in recorded tumor volumes.
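
    As a concrete illustration of the column's point, the sketch below propagates caliper uncertainties through a commonly used xenograft volume formula; both the formula variant and the numbers are assumptions for illustration, not taken from the column.

```python
# Hedged sketch: first-order propagation of measurement errors into a computed
# tumor volume, using the commonly quoted formula V = L * W**2 / 2 (assumed).
L_mm, W_mm = 12.0, 8.0          # hypothetical caliper readings (mm)
dL, dW = 0.5, 0.5               # assumed measurement uncertainties (mm)

V = L_mm * W_mm**2 / 2
# Worst-case first-order relative error: dV/V = dL/L + 2*dW/W
rel_err_V = dL / L_mm + 2 * dW / W_mm
print(f"volume = {V:.0f} mm^3, relative error ~ {100 * rel_err_V:.0f} %")
# ~4% and ~6% errors on length and width already give a ~17% volume error.
```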

  13. Are Low-order Covariance Estimates Useful in Error Analyses?

    Science.gov (United States)

    Baker, D. F.; Schimel, D.

    2005-12-01

    Atmospheric trace gas inversions, using modeled atmospheric transport to infer surface sources and sinks from measured concentrations, are most commonly done using least-squares techniques that return not only an estimate of the state (the surface fluxes) but also the covariance matrix describing the uncertainty in that estimate. Besides allowing one to place error bars around the estimate, the covariance matrix may be used in simulation studies to learn what uncertainties would be expected from various hypothetical observing strategies. This error analysis capability is routinely used in designing instrumentation, measurement campaigns, and satellite observing strategies. For example, Rayner, et al (2002) examined the ability of satellite-based column-integrated CO2 measurements to constrain monthly-average CO2 fluxes for about 100 emission regions using this approach. Exact solutions for both state vector and covariance matrix become computationally infeasible, however, when the surface fluxes are solved at finer resolution (e.g., daily in time, under 500 km in space). It is precisely at these finer scales, however, that one would hope to be able to estimate fluxes using high-density satellite measurements. Non-exact estimation methods such as variational data assimilation or the ensemble Kalman filter could be used, but they achieve their computational savings by obtaining an only approximate state estimate and a low-order approximation of the true covariance. One would like to be able to use this covariance matrix to do the same sort of error analyses as are done with the full-rank covariance, but is it correct to do so? Here we compare uncertainties and `information content' derived from full-rank covariance matrices obtained from a direct, batch least squares inversion to those from the incomplete-rank covariance matrices given by a variational data assimilation approach solved with a variable metric minimization technique (the Broyden-Fletcher- Goldfarb

  14. The Hurst Phenomenon in Error Estimates Related to Atmospheric Turbulence

    Science.gov (United States)

    Dias, Nelson Luís; Crivellaro, Bianca Luhm; Chamecki, Marcelo

    2018-05-01

    The Hurst phenomenon is a well-known feature of long-range persistence first observed in hydrological and geophysical time series by E. Hurst in the 1950s. It has also been found in several cases in turbulence time series measured in the wind tunnel, the atmosphere, and in rivers. Here, we conduct a systematic investigation of the value of the Hurst coefficient H in atmospheric surface-layer data, and its impact on the estimation of random errors. We show that usually H > 0.5, which implies the non-existence (in the statistical sense) of the integral time scale. Since the integral time scale is present in the Lumley-Panofsky equation for the estimation of random errors, this has important practical consequences. We estimated H in two principal ways: (1) with an extension of the recently proposed filtering method to estimate the random error (H_p), and (2) with the classical rescaled range introduced by Hurst (H_R). Other estimators were tried but were found less able to capture the statistical behaviour of the large scales of turbulence. Using data from three micrometeorological campaigns we found that both first- and second-order turbulence statistics display the Hurst phenomenon. Usually, H_R is larger than H_p for the same dataset, raising the possibility that one, or even both, of these estimators may be biased. For the relative error, we found that the errors estimated with the approach adopted by us, which we call the relaxed filtering method and which takes into account the occurrence of the Hurst phenomenon, are larger than both the filtering method and the classical Lumley-Panofsky estimates. Finally, we found that there is no apparent relationship between H and the Obukhov stability parameter. The relative errors, however, do show stability dependence, particularly in the case of the error of the kinematic momentum flux in unstable conditions, and that of the kinematic sensible heat flux in stable conditions.
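
    A minimal sketch of the classical rescaled-range estimate corresponding to H_R; the filtering-based estimator H_p and the relaxed filtering method are not reproduced here.

```python
import numpy as np

# Classical rescaled-range (R/S) estimate of the Hurst coefficient.
def rescaled_range(x):
    y = np.cumsum(x - x.mean())           # cumulative departures from the mean
    return (y.max() - y.min()) / x.std()  # range of cumulative series / std dev

def hurst_rs(x, min_block=16):
    x = np.asarray(x, float)
    sizes, rs = [], []
    size = min_block
    while size <= len(x) // 2:
        blocks = [x[i:i + size] for i in range(0, len(x) - size + 1, size)]
        rs.append(np.mean([rescaled_range(b) for b in blocks]))
        sizes.append(size)
        size *= 2
    H, _ = np.polyfit(np.log(sizes), np.log(rs), 1)   # log(R/S) ~ H log(size)
    return H

rng = np.random.default_rng(4)
# Uncorrelated noise should give H close to 0.5 (R/S has a known upward
# small-sample bias); persistent series give H > 0.5.
print("white noise H ~", round(hurst_rs(rng.normal(size=2**14)), 2))
```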

  15. NDE errors and their propagation in sizing and growth estimates

    International Nuclear Information System (INIS)

    Horn, D.; Obrutsky, L.; Lakhan, R.

    2009-01-01

    The accuracy attributed to eddy current flaw sizing determines the amount of conservativism required in setting tube-plugging limits. Several sources of error contribute to the uncertainty of the measurements, and the way in which these errors propagate and interact affects the overall accuracy of the flaw size and flaw growth estimates. An example of this calculation is the determination of an upper limit on flaw growth over one operating period, based on the difference between two measurements. Signal-to-signal comparison involves a variety of human, instrumental, and environmental error sources; of these, some propagate additively and some multiplicatively. In a difference calculation, specific errors in the first measurement may be correlated with the corresponding errors in the second; others may be independent. Each of the error sources needs to be identified and quantified individually, as does its distribution in the field data. A mathematical framework for the propagation of the errors can then be used to assess the sensitivity of the overall uncertainty to each individual error component. This paper quantifies error sources affecting eddy current sizing estimates and presents analytical expressions developed for their effect on depth estimates. A simple case study is used to model the analysis process. For each error source, the distribution of the field data was assessed and propagated through the analytical expressions. While the sizing error obtained was consistent with earlier estimates and with deviations from ultrasonic depth measurements, the error on growth was calculated as significantly smaller than that obtained assuming uncorrelated errors. An interesting result of the sensitivity analysis in the present case study is the quantification of the error reduction available from post-measurement compensation of magnetite effects. With the absolute and difference error equations, variance-covariance matrices, and partial derivatives developed in
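
    A generic illustration (not the paper's detailed eddy-current error budget) of why the correlation between the two sizing errors matters when they are propagated into a growth estimate, i.e. the difference between two inspections.

```python
import numpy as np

# Variance of a growth estimate d = x2 - x1 for correlated sizing errors.
sigma1 = sigma2 = 0.10      # assumed sizing uncertainties (e.g. fraction of wall)

for rho in (0.0, 0.5, 0.9):
    var_d = sigma1**2 + sigma2**2 - 2 * rho * sigma1 * sigma2
    print(f"error correlation {rho:.1f}: sigma(growth) = {np.sqrt(var_d):.3f}")
# Correlated errors largely cancel in the difference, so assuming independence
# overstates the uncertainty on flaw growth.
```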

  16. A method for estimating radioactive cesium concentrations in cattle blood using urine samples.

    Science.gov (United States)

    Sato, Itaru; Yamagishi, Ryoma; Sasaki, Jun; Satoh, Hiroshi; Miura, Kiyoshi; Kikuchi, Kaoru; Otani, Kumiko; Okada, Keiji

    2017-12-01

    In the region contaminated by the Fukushima nuclear accident, radioactive contamination of live cattle should be checked before slaughter. In this study, we establish a precise method for estimating radioactive cesium concentrations in cattle blood using urine samples. Blood and urine samples were collected from a total of 71 cattle on two farms in the 'difficult-to-return zone'. Urine 137Cs, specific gravity, electrical conductivity, pH, sodium, potassium, calcium, and creatinine were measured and various estimation methods for blood 137Cs were tested. The average error rate of the estimation was 54.2% without correction. Correcting for urine creatinine, specific gravity, electrical conductivity, or potassium improved the precision of the estimation. Correcting for specific gravity using the following formula gave the most precise estimate (average error rate = 16.9%): [blood 137Cs] = [urinary 137Cs]/([specific gravity] - 1)/329. Urine samples are faster to measure than blood samples because urine can be obtained in larger quantities and has a higher 137Cs concentration than blood. These advantages of urine and the estimation precision demonstrated in our study indicate that estimation of blood 137Cs using urine samples is a practical means of monitoring radioactive contamination in live cattle. © 2017 Japanese Society of Animal Science.
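
    The reported specific-gravity correction can be written as a one-line function; the sample values below are hypothetical and only illustrate the arithmetic, with the result in the same units as the urinary activity concentration.

```python
# [blood 137Cs] = [urinary 137Cs] / (specific gravity - 1) / 329  (from the study)
def blood_cs137_from_urine(urine_cs137, specific_gravity):
    """Estimate blood 137Cs concentration from a urine measurement."""
    return urine_cs137 / (specific_gravity - 1.0) / 329.0

# Hypothetical sample: 500 Bq/kg in urine with specific gravity 1.030
print(blood_cs137_from_urine(500.0, 1.030))   # ~50.7, same units as the input
```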

  17. Improving the Accuracy of Laplacian Estimation with Novel Variable Inter-Ring Distances Concentric Ring Electrodes

    Directory of Open Access Journals (Sweden)

    Oleksandr Makeyev

    2016-06-01

    Full Text Available Noninvasive concentric ring electrodes are a promising alternative to conventional disc electrodes. Currently, the superiority of tripolar concentric ring electrodes over disc electrodes, in particular, in accuracy of Laplacian estimation, has been demonstrated in a range of applications. In our recent work, we have shown that accuracy of Laplacian estimation can be improved with multipolar concentric ring electrodes using a general approach to estimation of the Laplacian for an (n + 1)-polar electrode with n rings using the (4n + 1)-point method for n ≥ 2. This paper takes the next step toward further improving the Laplacian estimate by proposing novel variable inter-ring distances concentric ring electrodes. Derived using a modified (4n + 1)-point method, linearly increasing and decreasing inter-ring distances tripolar (n = 2) and quadripolar (n = 3) electrode configurations are compared to their constant inter-ring distances counterparts. Finite element method modeling and analytic results are consistent and suggest that increasing inter-ring distances electrode configurations may decrease the truncation error resulting in more accurate Laplacian estimates compared to respective constant inter-ring distances configurations. For currently used tripolar electrode configuration, the truncation error may be decreased more than two-fold, while for the quadripolar configuration more than a six-fold decrease is expected.

  18. Improving the Accuracy of Laplacian Estimation with Novel Variable Inter-Ring Distances Concentric Ring Electrodes

    Science.gov (United States)

    Makeyev, Oleksandr; Besio, Walter G.

    2016-01-01

    Noninvasive concentric ring electrodes are a promising alternative to conventional disc electrodes. Currently, the superiority of tripolar concentric ring electrodes over disc electrodes, in particular, in accuracy of Laplacian estimation, has been demonstrated in a range of applications. In our recent work, we have shown that accuracy of Laplacian estimation can be improved with multipolar concentric ring electrodes using a general approach to estimation of the Laplacian for an (n + 1)-polar electrode with n rings using the (4n + 1)-point method for n ≥ 2. This paper takes the next step toward further improving the Laplacian estimate by proposing novel variable inter-ring distances concentric ring electrodes. Derived using a modified (4n + 1)-point method, linearly increasing and decreasing inter-ring distances tripolar (n = 2) and quadripolar (n = 3) electrode configurations are compared to their constant inter-ring distances counterparts. Finite element method modeling and analytic results are consistent and suggest that increasing inter-ring distances electrode configurations may decrease the truncation error resulting in more accurate Laplacian estimates compared to respective constant inter-ring distances configurations. For currently used tripolar electrode configuration, the truncation error may be decreased more than two-fold, while for the quadripolar configuration more than a six-fold decrease is expected. PMID:27294933

  19. Estimation of error components in a multi-error linear regression model, with an application to track fitting

    International Nuclear Information System (INIS)

    Fruehwirth, R.

    1993-01-01

    We present an estimation procedure of the error components in a linear regression model with multiple independent stochastic error contributions. After solving the general problem we apply the results to the estimation of the actual trajectory in track fitting with multiple scattering. (orig.)

  20. About Error in Measuring Oxygen Concentration by Solid-Electrolyte Sensors

    Directory of Open Access Journals (Sweden)

    V. I. Nazarov

    2008-01-01

    Full Text Available The paper evaluates additional errors arising while measuring the oxygen concentration in a gas mixture with a solid-electrolyte cell. Experimental dependences are presented for the additional errors caused by changes in the temperature of the sensor zone, in the discharge of the gas mixture supplied to the sensor zone, in the partial pressure of the gas mixture, and in fluctuations of the oxygen concentration in the air.

  1. Estimating misclassification error: a closer look at cross-validation based methods

    Directory of Open Access Journals (Sweden)

    Ounpraseuth Songthip

    2012-11-01

    Full Text Available Abstract Background To estimate a classifier’s error in predicting future observations, bootstrap methods have been proposed as reduced-variation alternatives to traditional cross-validation (CV) methods based on sampling without replacement. Monte Carlo (MC) simulation studies aimed at estimating the true misclassification error conditional on the training set are commonly used to compare CV methods. We conducted an MC simulation study to compare a new method of bootstrap CV (BCV) to k-fold CV for estimating classification error. Findings For the low-dimensional conditions simulated, the modest positive bias of k-fold CV contrasted sharply with the substantial negative bias of the new BCV method. This behavior was corroborated using a real-world dataset of prognostic gene-expression profiles in breast cancer patients. Our simulation results demonstrate some extreme characteristics of variance and bias that can occur due to a fault in the design of CV exercises aimed at estimating the true conditional error of a classifier, and that appear not to have been fully appreciated in previous studies. Although CV is a sound practice for estimating a classifier’s generalization error, using CV to estimate the fixed misclassification error of a trained classifier conditional on the training set is problematic. While MC simulation of this estimation exercise can correctly represent the average bias of a classifier, it will overstate the between-run variance of the bias. Conclusions We recommend k-fold CV over the new BCV method for estimating a classifier’s generalization error. The extreme negative bias of BCV is too high a price to pay for its reduced variance.
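
    A minimal sketch of the practice the authors recommend, k-fold CV for generalization error, using scikit-learn on synthetic data; the BCV variant examined in the paper is not reproduced.

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# 10-fold cross-validation estimate of a classifier's generalization error.
X, y = make_classification(n_samples=200, n_features=10, random_state=0)
clf = LogisticRegression(max_iter=1000)
accuracy = cross_val_score(clf, X, y, cv=10)
print("estimated misclassification error:", 1 - accuracy.mean())
```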

  2. Estimating soil zinc concentrations using reflectance spectroscopy

    Science.gov (United States)

    Sun, Weichao; Zhang, Xia

    2017-06-01

    Soil contamination by heavy metals has been an increasingly severe threat to the natural environment and human health. Efficient investigation of contamination status is essential to soil protection and remediation. Visible and near-infrared reflectance spectroscopy (VNIRS) has been regarded as an alternative for monitoring soil contamination by heavy metals. Generally, the entire VNIR spectral bands are employed to estimate heavy metal concentration, which lacks interpretability and requires much calculation. In this study, 74 soil samples were collected from Hunan Province, China and their reflectance spectra were used to estimate zinc (Zn) concentration in soil. Organic matter and clay minerals have strong adsorption for Zn in soil. Spectral bands associated with organic matter and clay minerals were used for estimation with genetic algorithm based partial least square regression (GA-PLSR). The entire VNIR spectral bands, the bands associated with organic matter and the bands associated with clay minerals were incorporated as comparisons. Root mean square error of prediction, residual prediction deviation, and coefficient of determination (R2) for the model developed using combined bands of organic matter and clay minerals were 329.65 mg kg-1, 1.96 and 0.73, which are better than 341.88 mg kg-1, 1.89 and 0.71 for the entire VNIR spectral bands, 492.65 mg kg-1, 1.31 and 0.40 for the organic matter, and 430.26 mg kg-1, 1.50 and 0.54 for the clay minerals. Additionally, in consideration of atmospheric water vapor absorption in field spectra measurement, combined bands of organic matter and the absorption around 2200 nm were used for estimation and achieved high prediction accuracy, with R2 reaching 0.640. The results indicate the great potential of soil reflectance spectroscopy for estimating Zn concentrations in soil.
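
    The regression step can be illustrated with plain PLS on a fixed subset of bands; this sketch uses synthetic spectra and hand-picked band windows in place of the genetic-algorithm band selection and the real Hunan samples.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict

# Synthetic stand-in for 74 soil spectra and Zn concentrations (mg/kg).
rng = np.random.default_rng(5)
n_samples, n_bands = 74, 200
spectra = rng.normal(size=(n_samples, n_bands))
zn = 800 + 300 * spectra[:, 50] + 150 * spectra[:, 120] + rng.normal(0, 50, n_samples)

feature_bands = np.r_[40:60, 110:130]   # assumed organic-matter / clay-mineral bands
pls = PLSRegression(n_components=5)
zn_pred = cross_val_predict(pls, spectra[:, feature_bands], zn, cv=10).ravel()

rmse = np.sqrt(np.mean((zn - zn_pred) ** 2))
r2 = np.corrcoef(zn, zn_pred)[0, 1] ** 2
print(f"RMSEP = {rmse:.1f} mg/kg, R^2 = {r2:.2f}")
```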

  3. BAYES-HEP: Bayesian belief networks for estimation of human error probability

    International Nuclear Information System (INIS)

    Karthick, M.; Senthil Kumar, C.; Paul, Robert T.

    2017-01-01

    Human errors contribute a significant portion of risk in safety critical applications and methods for estimation of human error probability have been a topic of research for over a decade. The scarce data available on human errors and large uncertainty involved in the prediction of human error probabilities make the task difficult. This paper presents a Bayesian belief network (BBN) model for human error probability estimation in safety critical functions of a nuclear power plant. The developed model using BBN would help to estimate HEP with limited human intervention. A step-by-step illustration of the application of the method and subsequent evaluation is provided with a relevant case study and the model is expected to provide useful insights into risk assessment studies

  4. Standard Errors of Estimated Latent Variable Scores with Estimated Structural Parameters

    Science.gov (United States)

    Hoshino, Takahiro; Shigemasu, Kazuo

    2008-01-01

    The authors propose a concise formula to evaluate the standard error of the estimated latent variable score when the true values of the structural parameters are not known and must be estimated. The formula can be applied to factor scores in factor analysis or ability parameters in item response theory, without bootstrap or Markov chain Monte…

  5. Human error probability estimation using licensee event reports

    International Nuclear Information System (INIS)

    Voska, K.J.; O'Brien, J.N.

    1984-07-01

    The objective of this report is to present a method for using field data from nuclear power plants to estimate human error probabilities (HEPs). These HEPs are then used in probabilistic risk activities. This method of estimating HEPs is one of four being pursued in NRC-sponsored research. The other three are structured expert judgment, analysis of training simulator data, and performance modeling. The type of field data analyzed in this report is from Licensee Event Reports (LERs), which are analyzed using a method specifically developed for that purpose. However, any type of field data or human errors could be analyzed using this method with minor adjustments. This report assesses the practicality, acceptability, and usefulness of estimating HEPs from LERs and comprehensively presents the method for use.

  6. A posteriori error estimator and AMR for discrete ordinates nodal transport methods

    International Nuclear Information System (INIS)

    Duo, Jose I.; Azmy, Yousry Y.; Zikatanov, Ludmil T.

    2009-01-01

    In the development of high fidelity transport solvers, optimization of the use of available computational resources and access to a tool for assessing quality of the solution are key to the success of large-scale nuclear systems' simulation. In this regard, error control provides the analyst with a confidence level in the numerical solution and enables optimization of resources through Adaptive Mesh Refinement (AMR). In this paper, we derive an a posteriori error estimator based on the nodal solution of the Arbitrarily High Order Transport Method of the Nodal type (AHOT-N). Furthermore, by making assumptions on the regularity of the solution, we represent the error estimator as a function of computable volume and element-edge residuals. The global L2 error norm is proved to be bounded by the estimator. To lighten the computational load, we present a numerical approximation to the aforementioned residuals and split the global norm error estimator into local error indicators. These indicators are used to drive an AMR strategy for the spatial discretization. However, the indicators based on forward solution residuals alone do not bound the cell-wise error. The estimator and AMR strategy are tested in two problems featuring strong heterogeneity and a highly streaming transport regime with strong flux gradients. The results show that the error estimator indeed bounds the global error norms and that the error indicator follows the cell-error's spatial distribution pattern closely. The AMR strategy proves beneficial to optimize resources, primarily by reducing the number of unknowns solved for to achieve prescribed solution accuracy in the global L2 error norm. Likewise, AMR achieves higher accuracy compared to uniform refinement when resolving sharp flux gradients, for the same number of unknowns

  7. Error Estimation for the Linearized Auto-Localization Algorithm

    Directory of Open Access Journals (Sweden)

    Fernando Seco

    2012-02-01

    Full Text Available The Linearized Auto-Localization (LAL) algorithm estimates the position of beacon nodes in Local Positioning Systems (LPSs), using only the distance measurements to a mobile node whose position is also unknown. The LAL algorithm calculates the inter-beacon distances, used for the estimation of the beacons’ positions, from the linearized trilateration equations. In this paper we propose a method to estimate the propagation of the errors of the inter-beacon distances obtained with the LAL algorithm, based on a first order Taylor approximation of the equations. Since the method depends on such approximation, a confidence parameter τ is defined to measure the reliability of the estimated error. Field evaluations showed that by applying this information to an improved weighted-based auto-localization algorithm (WLAL), the standard deviation of the inter-beacon distances can be improved by more than 30% on average with respect to the original LAL method.
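
    A generic sketch of the first-order (Taylor) propagation idea, applied here to a hypothetical inter-beacon distance; the actual LAL equations and the confidence parameter τ are not reproduced.

```python
import numpy as np

# First-order error propagation: Cov(f(x)) ~ J Cov(x) J^T, with a numerical Jacobian.
def propagate(f, x, cov_x, eps=1e-6):
    x = np.asarray(x, float)
    f0 = np.atleast_1d(f(x))
    J = np.zeros((f0.size, x.size))
    for j in range(x.size):
        dx = np.zeros_like(x)
        dx[j] = eps
        J[:, j] = (np.atleast_1d(f(x + dx)) - f0) / eps
    return J @ cov_x @ J.T

# Hypothetical example: distance between two estimated 2-D beacon positions
beacons = np.array([0.0, 0.0, 4.0, 3.0])            # (x1, y1, x2, y2) in metres
cov = np.diag([0.02**2] * 4)                        # 2 cm std on each coordinate
dist = lambda p: np.hypot(p[2] - p[0], p[3] - p[1])

var_d = propagate(dist, beacons, cov)[0, 0]
print("sigma of the inter-beacon distance ~", np.sqrt(var_d), "m")
```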

  8. Modeling SMAP Spacecraft Attitude Control Estimation Error Using Signal Generation Model

    Science.gov (United States)

    Rizvi, Farheen

    2016-01-01

    Two ground simulation software packages are used to model the SMAP spacecraft dynamics. The CAST software uses a higher fidelity model than the ADAMS software. The ADAMS software models the spacecraft plant, controller and actuators, and assumes a perfect sensor and estimator model. In this simulation study, the spacecraft dynamics results from the ADAMS software are used because the CAST software is unavailable. The main source of spacecraft dynamics error in the higher fidelity CAST software is the estimation error. A signal generation model is developed to capture the effect of this estimation error on the overall spacecraft dynamics. Then, this signal generation model is included in the ADAMS software spacecraft dynamics estimate such that the results are similar to CAST. This signal generation model has characteristics (mean, variance and power spectral density) similar to those of the true CAST estimation error. In this way, the ADAMS software can still be used while capturing the higher fidelity spacecraft dynamics modeling from the CAST software.

  9. Estimation of a beam centering error in the JAERI AVF cyclotron

    International Nuclear Information System (INIS)

    Fukuda, M.; Okumura, S.; Arakawa, K.; Ishibori, I.; Matsumura, A.; Nakamura, N.; Nara, T.; Agematsu, T.; Tamura, H.; Karasawa, T.

    1999-01-01

    A method for estimating a beam centering error from a beam density distribution obtained by a single radial probe has been developed. Estimation of the centering error is based on an analysis of radial beam positions in the direction of the radial probe. Radial motion of a particle is described as betatron oscillation around an accelerated equilibrium orbit. By fitting the radial beam positions of several consecutive turns to an equation of the radial motion, not only amplitude of the centering error but also frequency of the radial betatron oscillation and energy gain per turn can be evaluated simultaneously. The estimated centering error amplitude was consistent with a result of an orbit simulation. This method was exceedingly helpful for minimizing the centering error of a 10 MeV proton beam during the early stages of acceleration. A well-centered beam was obtained by correcting the magnetic field with a first harmonic produced by two pairs of harmonic coils. In order to push back an orbit center to a magnet center, currents of the harmonic coils were optimized on the basis of the estimated centering error amplitude. (authors)
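
    The fitting idea can be sketched on synthetic turn data: radial probe positions over consecutive turns are fitted to an accelerated equilibrium orbit plus a betatron oscillation whose amplitude plays the role of the centering error. All numbers below are hypothetical, not JAERI measurements.

```python
import numpy as np
from scipy.optimize import curve_fit

def radial_model(turn, r0, gain, amp, nu_r, phase):
    # r0 + gain * turn : accelerated equilibrium orbit (gain ~ radius gain per turn)
    # amp * cos(...)   : betatron oscillation, amp = centering-error amplitude
    return r0 + gain * turn + amp * np.cos(2 * np.pi * nu_r * turn + phase)

rng = np.random.default_rng(6)
turns = np.arange(30)
r_true = radial_model(turns, 200.0, 2.5, 1.2, 1.05, 0.4)   # mm, invented values
r_meas = r_true + rng.normal(0, 0.05, turns.size)          # simulated probe readings

popt, _ = curve_fit(radial_model, turns, r_meas, p0=[200.0, 2.5, 1.0, 1.05, 0.0])
print(f"centering-error amplitude ~ {popt[2]:.2f} mm, radial tune ~ {popt[3]:.3f}")
```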

  10. Estimation of the human error probabilities in the human reliability analysis

    International Nuclear Information System (INIS)

    Liu Haibin; He Xuhong; Tong Jiejuan; Shen Shifei

    2006-01-01

    Human error data is an important issue in human reliability analysis (HRA). Bayesian parameter estimation, which can combine multiple sources of information, such as the historical data of an NPP and expert judgment data, to update the human error data, can yield human error data that more truly reflect the real situation of the NPP. Using a numerical computation program developed by the authors, this paper presents some typical examples to illustrate the process of Bayesian parameter estimation in HRA and discusses the effect of different updating data on the Bayesian parameter estimation. (authors)
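
    One common way to realize such an update is a conjugate Beta-binomial calculation, sketched here with invented counts; the authors' own numerical program and data are not reproduced.

```python
from scipy import stats

# Generic prior (e.g. industry-wide data) encoded as a Beta distribution,
# then updated with plant-specific evidence; all counts are invented.
prior_errors, prior_demands = 2, 500
a0, b0 = prior_errors, prior_demands - prior_errors

plant_errors, plant_demands = 1, 800
a1, b1 = a0 + plant_errors, b0 + plant_demands - plant_errors

posterior = stats.beta(a1, b1)
print("posterior mean HEP :", posterior.mean())
print("90% credible range :", posterior.ppf([0.05, 0.95]))
```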

  11. Estimation of genetic connectedness diagnostics based on prediction errors without the prediction error variance-covariance matrix.

    Science.gov (United States)

    Holmes, John B; Dodds, Ken G; Lee, Michael A

    2017-03-02

    An important issue in genetic evaluation is the comparability of random effects (breeding values), particularly between pairs of animals in different contemporary groups. This is usually referred to as genetic connectedness. While various measures of connectedness have been proposed in the literature, there is general agreement that the most appropriate measure is some function of the prediction error variance-covariance matrix. However, obtaining the prediction error variance-covariance matrix is computationally demanding for large-scale genetic evaluations. Many alternative statistics have been proposed that avoid the computational cost of obtaining the prediction error variance-covariance matrix, such as counts of genetic links between contemporary groups, gene flow matrices, and functions of the variance-covariance matrix of estimated contemporary group fixed effects. In this paper, we show that a correction to the variance-covariance matrix of estimated contemporary group fixed effects will produce the exact prediction error variance-covariance matrix averaged by contemporary group for univariate models in the presence of single or multiple fixed effects and one random effect. We demonstrate the correction for a series of models and show that approximations to the prediction error matrix based solely on the variance-covariance matrix of estimated contemporary group fixed effects are inappropriate in certain circumstances. Our method allows for the calculation of a connectedness measure based on the prediction error variance-covariance matrix by calculating only the variance-covariance matrix of estimated fixed effects. Since the number of fixed effects in genetic evaluation is usually orders of magnitudes smaller than the number of random effect levels, the computational requirements for our method should be reduced.

  12. On systematic and statistic errors in radionuclide mass activity estimation procedure

    International Nuclear Information System (INIS)

    Smelcerovic, M.; Djuric, G.; Popovic, D.

    1989-01-01

    One of the most important requirements during nuclear accidents is the fast estimation of the mass activity of the radionuclides that suddenly and without control reach the environment. The paper points to systematic errors in the procedures of sampling, sample preparation and the measurement itself, which contribute to a high degree to the total mass activity evaluation error. Statistical errors in gamma spectrometry as well as in total mass alpha and beta activity evaluation are also discussed. Besides, some of the possible sources of errors in the partial mass activity evaluation for some of the radionuclides are presented. The contribution of these errors to the total mass activity evaluation error is estimated and procedures that could possibly reduce it are discussed (author)

  13. Constrained motion estimation-based error resilient coding for HEVC

    Science.gov (United States)

    Guo, Weihan; Zhang, Yongfei; Li, Bo

    2018-04-01

    Unreliable communication channels might lead to packet losses and bit errors in the videos transmitted through them, which will cause severe video quality degradation. This is even worse for HEVC, since more advanced and powerful motion estimation methods are introduced to further remove the inter-frame dependency and thus improve the coding efficiency. Once a Motion Vector (MV) is lost or corrupted, it will cause distortion in the decoded frame. More importantly, due to motion compensation, the error will propagate along the motion prediction path, accumulate over time, and significantly degrade the overall video presentation quality. To address this problem, we study the problem of encoder-side error resilient coding for HEVC and propose a constrained motion estimation scheme to mitigate the problem of error propagation to subsequent frames. The approach is achieved by cutting off MV dependencies and limiting the block regions that are predicted by temporal motion vectors. The experimental results show that the proposed method can effectively suppress the error propagation caused by bit errors in motion vectors and can improve the robustness of the stream in bit-error channels. When the bit error probability is 10^-5, an increase of the decoded video quality (PSNR) by up to 1.310 dB and on average 0.762 dB can be achieved, compared to the reference HEVC.

  14. Correcting the Standard Errors of 2-Stage Residual Inclusion Estimators for Mendelian Randomization Studies.

    Science.gov (United States)

    Palmer, Tom M; Holmes, Michael V; Keating, Brendan J; Sheehan, Nuala A

    2017-11-01

    Mendelian randomization studies use genotypes as instrumental variables to test for and estimate the causal effects of modifiable risk factors on outcomes. Two-stage residual inclusion (TSRI) estimators have been used when researchers are willing to make parametric assumptions. However, researchers are currently reporting uncorrected or heteroscedasticity-robust standard errors for these estimates. We compared several different forms of the standard error for linear and logistic TSRI estimates in simulations and in real-data examples. Among others, we consider standard errors modified from the approach of Newey (1987), Terza (2016), and bootstrapping. In our simulations Newey, Terza, bootstrap, and corrected 2-stage least squares (in the linear case) standard errors gave the best results in terms of coverage and type I error. In the real-data examples, the Newey standard errors were 0.5% and 2% larger than the unadjusted standard errors for the linear and logistic TSRI estimators, respectively. We show that TSRI estimators with modified standard errors have correct type I error under the null. Researchers should report TSRI estimates with modified standard errors instead of reporting unadjusted or heteroscedasticity-robust standard errors. © The Author(s) 2017. Published by Oxford University Press on behalf of the Johns Hopkins Bloomberg School of Public Health.

  15. Adjustment of Measurements with Multiplicative Errors: Error Analysis, Estimates of the Variance of Unit Weight, and Effect on Volume Estimation from LiDAR-Type Digital Elevation Models

    Directory of Open Access Journals (Sweden)

    Yun Shi

    2014-01-01

    Full Text Available Modern observation technology has verified that measurement errors can be proportional to the true values of measurements, as in GPS, VLBI baselines and LiDAR. Observational models of this type are called multiplicative error models. This paper extends the work of Xu and Shimada, published in 2000, on multiplicative error models to the analytical error analysis of quantities of practical interest and to estimates of the variance of unit weight. We analytically derive the variance-covariance matrices of the three least squares (LS) adjustments, the adjusted measurements and the corrections of measurements in multiplicative error models. For quality evaluation, we construct five estimators for the variance of unit weight in association with the three LS adjustment methods. Although LiDAR measurements are contaminated with multiplicative random errors, LiDAR-based digital elevation models (DEM) have been constructed as if the errors were additive random errors. We will simulate a model landslide, which is assumed to be surveyed with LiDAR, and investigate the effect of LiDAR-type multiplicative error measurements on DEM construction and its effect on the estimate of landslide mass volume from the constructed DEM.

  16. Estimation of formaldehyde concentration in working environment by using 14C-labeled formaldehyde

    International Nuclear Information System (INIS)

    Hamada, Masana; Kitano, Katsuhiro; Miketa, Daigo

    2010-01-01

    Since March 2009, checking the formaldehyde concentration has been required as part of the estimation of airborne concentrations in the working environment, and liquid chromatography and simple analytical methods, mainly the gas detector tube method, are specified as the official methods. When estimating the airborne formaldehyde concentration in a workplace where only formalin is used as the formaldehyde emission source, such as a room for pathological examination, it is not necessary to separate formaldehyde, and it is therefore not very reasonable to use liquid chromatography with its troublesome procedures. On the other hand, the gas detector tube method is convenient but can produce errors due to individual differences. Such errors might cause additional differences in the control classes and in the measures taken to prevent workers from being exposed to harmful materials. In order to solve these problems and to examine an alternative analytical method for formaldehyde, we tried to measure the concentration by using 14C-labelled formaldehyde. After a 14C-labelled formaldehyde solution is put into the formaldehyde emission source (formalin solution), the vaporized samples are collected quantitatively with silica gel tubes, as is usual in the estimation of airborne concentrations in the working environment. By counting the amount of radioactivity desorbed from the silica gel, it was found that the obtained formaldehyde concentrations correspond to both the calculated values and the values indicated on the gas detector tubes at various concentrations. In this study, we used an amount below the lower activity limit for radioactive material. Apart from users who have a radioisotope controlled area, one is allowed to use 14C-labeled materials below 10 MBq without being regulated under the Law Concerning Prevention of Radiation Hazards. When a small amount of 14C-formaldehyde is used in a formaldehyde-using area to check the concentration of vaporized formaldehyde, this method was found to

  17. Enhanced Pedestrian Navigation Based on Course Angle Error Estimation Using Cascaded Kalman Filters.

    Science.gov (United States)

    Song, Jin Woo; Park, Chan Gook

    2018-04-21

    An enhanced pedestrian dead reckoning (PDR) based navigation algorithm, which uses two cascaded Kalman filters (TCKF) for the estimation of course angle and navigation errors, is proposed. The proposed algorithm uses a foot-mounted inertial measurement unit (IMU), waist-mounted magnetic sensors, and a zero velocity update (ZUPT) based inertial navigation technique with TCKF. The first stage filter estimates the course angle error of a human, which is closely related to the heading error of the IMU. In order to obtain the course measurements, the filter uses magnetic sensors and a position-trace based course angle. For preventing magnetic disturbance from contaminating the estimation, the magnetic sensors are attached to the waistband. Because the course angle error is mainly due to the heading error of the IMU, and the characteristic error of the heading angle is highly dependent on that of the course angle, the estimated course angle error is used as a measurement for estimating the heading error in the second stage filter. At the second stage, an inertial navigation system-extended Kalman filter-ZUPT (INS-EKF-ZUPT) method is adopted. As the heading error is estimated directly by using course-angle error measurements, the estimation accuracy for the heading and yaw gyro bias can be enhanced, compared with the ZUPT-only case, which eventually enhances the position accuracy more efficiently. The performance enhancements are verified via experiments, and the way-point position error for the proposed method is compared with those for the ZUPT-only case and with other cases that use ZUPT and various types of magnetic heading measurements. The results show that the position errors are reduced by a maximum of 90% compared with the conventional ZUPT based PDR algorithms.

  18. Influence of measurement errors and estimated parameters on combustion diagnosis

    International Nuclear Information System (INIS)

    Payri, F.; Molina, S.; Martin, J.; Armas, O.

    2006-01-01

    Thermodynamic diagnosis models are valuable tools for the study of Diesel combustion. Inputs required by such models comprise measured mean and instantaneous variables, together with suitable values for adjustable parameters used in different submodels. In the case of measured variables, one may estimate the uncertainty associated with measurement errors; however, the influence of errors in model parameter estimation may not be so easily established on an experimental basis. In this paper, a simulated pressure cycle has been used along with known input parameters, so that any uncertainty in the inputs is avoided. Then, the influence of errors in measured variables and in geometric and heat transmission parameters on the results of a diagnosis combustion model for direct injection diesel engines has been studied. This procedure allowed us to establish the relative importance of these parameters and to set limits on the maximal errors of the model, accounting for both the maximal expected errors in the input parameters and the sensitivity of the model to those errors

  19. A precise error bound for quantum phase estimation.

    Directory of Open Access Journals (Sweden)

    James M Chappell

    Full Text Available Quantum phase estimation is one of the key algorithms in the field of quantum computing, but up until now, only approximate expressions have been derived for the probability of error. We revisit these derivations, and find that by ensuring symmetry in the error definitions, an exact formula can be found. This new approach may also have value in solving other related problems in quantum computing, where an expected error is calculated. Expressions for two special cases of the formula are also developed, in the limit as the number of qubits in the quantum computer approaches infinity and in the limit as the extra added qubits to improve reliability goes to infinity. It is found that this formula is useful in validating computer simulations of the phase estimation procedure and in avoiding the overestimation of the number of qubits required in order to achieve a given reliability. This formula thus brings improved precision in the design of quantum computers.

  20. Estimation of 3D reconstruction errors in a stereo-vision system

    Science.gov (United States)

    Belhaoua, A.; Kohler, S.; Hirsch, E.

    2009-06-01

    The paper presents an approach for error estimation for the various steps of an automated 3D vision-based reconstruction procedure of manufactured workpieces. The process is based on a priori planning of the task and built around a cognitive intelligent sensory system using so-called Situation Graph Trees (SGT) as a planning tool. Such an automated quality control system requires the coordination of a set of complex processes performing sequentially data acquisition, its quantitative evaluation and the comparison with a reference model (e.g., CAD object model) in order to evaluate quantitatively the object. To ensure efficient quality control, the aim is to be able to state if reconstruction results fulfill tolerance rules or not. Thus, the goal is to evaluate independently the error for each step of the stereo-vision based 3D reconstruction (e.g., for calibration, contour segmentation, matching and reconstruction) and then to estimate the error for the whole system. In this contribution, we analyze in particular the segmentation error due to localization errors for extracted edge points supposed to belong to lines and curves composing the outline of the workpiece under evaluation. The fitting parameters describing these geometric features are used as a quality measure to determine confidence intervals and finally to estimate the segmentation errors. These errors are then propagated through the whole reconstruction procedure, enabling evaluation of their effect on the final 3D reconstruction result, specifically on position uncertainties. Lastly, analysis of these error estimates enables evaluation of the quality of the 3D reconstruction, as illustrated by the shown experimental results.

  1. Verification of unfold error estimates in the unfold operator code

    International Nuclear Information System (INIS)

    Fehl, D.L.; Biggs, F.

    1997-01-01

    Spectral unfolding is an inverse mathematical operation that attempts to obtain spectral source information from a set of response functions and data measurements. Several unfold algorithms have appeared over the past 30 years; among them is the unfold operator (UFO) code written at Sandia National Laboratories. In addition to an unfolded spectrum, the UFO code also estimates the unfold uncertainty (error) induced by estimated random uncertainties in the data. In UFO the unfold uncertainty is obtained from the error matrix. This built-in estimate has now been compared to error estimates obtained by running the code in a Monte Carlo fashion with prescribed data distributions (Gaussian deviates). In the test problem studied, data were simulated from an arbitrarily chosen blackbody spectrum (10 keV) and a set of overlapping response functions. The data were assumed to have an imprecision of 5% (standard deviation). One hundred random data sets were generated. The built-in estimate of unfold uncertainty agreed with the Monte Carlo estimate to within the statistical resolution of this relatively small sample size (95% confidence level). A possible 10% bias between the two methods was unresolved. The Monte Carlo technique is also useful in underdetermined problems, for which the error matrix method does not apply. UFO has been applied to the diagnosis of low energy x rays emitted by Z-pinch and ion-beam driven hohlraums. copyright 1997 American Institute of Physics
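
    The verification idea generalizes beyond the UFO code; the sketch below compares a built-in (error-matrix) covariance with a 100-sample Monte Carlo estimate for a simple linear least-squares unfold with 5% Gaussian data uncertainties. The response matrix and spectrum are invented.

```python
import numpy as np

# Compare a built-in error-matrix estimate with a Monte Carlo estimate obtained
# by perturbing the data with Gaussian deviates and repeating the unfold.
rng = np.random.default_rng(7)
m, n = 12, 5
R = rng.uniform(0.1, 1.0, size=(m, n))       # hypothetical response matrix
spec_true = rng.uniform(1.0, 5.0, size=n)
data0 = R @ spec_true
sigma = 0.05 * data0                         # 5% data uncertainty

W = np.diag(1.0 / sigma**2)
cov_builtin = np.linalg.inv(R.T @ W @ R)     # built-in (error-matrix) estimate

unfolds = []
for _ in range(100):                         # 100 random data sets
    d = data0 + rng.normal(0, sigma)
    unfolds.append(np.linalg.solve(R.T @ W @ R, R.T @ W @ d))
cov_mc = np.cov(np.array(unfolds).T)         # Monte Carlo estimate

print("built-in sigma   :", np.sqrt(np.diag(cov_builtin)))
print("Monte Carlo sigma:", np.sqrt(np.diag(cov_mc)))
```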

  2. Bayesian error estimation in density-functional theory

    DEFF Research Database (Denmark)

    Mortensen, Jens Jørgen; Kaasbjerg, Kristen; Frederiksen, Søren Lund

    2005-01-01

    We present a practical scheme for performing error estimates for density-functional theory calculations. The approach, which is based on ideas from Bayesian statistics, involves creating an ensemble of exchange-correlation functionals by comparing with an experimental database of binding energies...

  3. Error estimation of deformable image registration of pulmonary CT scans using convolutional neural networks.

    Science.gov (United States)

    Eppenhof, Koen A J; Pluim, Josien P W

    2018-04-01

    Error estimation in nonlinear medical image registration is a nontrivial problem that is important for validation of registration methods. We propose a supervised method for estimation of registration errors in nonlinear registration of three-dimensional (3-D) images. The method is based on a 3-D convolutional neural network that learns to estimate registration errors from a pair of image patches. By applying the network to patches centered around every voxel, we construct registration error maps. The network is trained using a set of representative images that have been synthetically transformed to construct a set of image pairs with known deformations. The method is evaluated on deformable registrations of inhale-exhale pairs of thoracic CT scans. Using ground truth target registration errors on manually annotated landmarks, we evaluate the method's ability to estimate local registration errors. Estimation of full domain error maps is evaluated using a gold standard approach. The two evaluation approaches show that we can train the network to robustly estimate registration errors in a predetermined range, with subvoxel accuracy. We achieved a root-mean-square deviation of 0.51 mm from gold standard registration errors and of 0.66 mm from ground truth landmark registration errors.
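
    A minimal PyTorch sketch of the kind of 3-D patch-based CNN regressor described above; the architecture, patch size and training target are illustrative and are not the authors' network.

```python
import torch
import torch.nn as nn

class PatchErrorRegressor(nn.Module):
    """Illustrative 3-D CNN mapping a pair of image patches (2 channels)
    to a scalar registration-error estimate."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(2, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool3d(2),
            nn.Conv3d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),
        )
        self.head = nn.Linear(32, 1)

    def forward(self, x):                  # x: (batch, 2, D, H, W)
        z = self.features(x).flatten(1)
        return self.head(z).squeeze(1)     # predicted registration error (e.g., in mm)

model = PatchErrorRegressor()
patch_pair = torch.randn(4, 2, 32, 32, 32)               # fixed+moving patches, toy data
pred_err = model(patch_pair)
loss = nn.functional.mse_loss(pred_err, torch.rand(4))   # train against known synthetic errors
```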

  4. Tracking and shape errors measurement of concentrating heliostats

    Science.gov (United States)

    Coquand, Mathieu; Caliot, Cyril; Hénault, François

    2017-09-01

    In solar tower power plants, factors such as tracking accuracy, facet misalignment and surface shape errors of concentrating heliostats are of prime importance for the efficiency of the system. At industrial scale, one critical issue is the time and effort required to adjust the different mirrors of the faceted heliostats, which could take several months using current techniques. Thus, methods enabling quick adjustment of a field with a huge number of heliostats are essential for the rise of solar tower technology. This communication describes a new method for heliostat characterization that makes use of four cameras located near the solar receiver and simultaneously recording images of the sun reflected by the optical surfaces. From knowledge of a measured sun profile, data processing of the acquired images allows reconstruction of the slope and shape errors of the heliostats, including tracking and canting errors. The mathematical basis of this shape reconstruction process is explained comprehensively. Numerical simulations demonstrate that the measurement accuracy of this "backward-gazing method" is compliant with the requirements of solar concentrating optics. Finally, we present our first experimental results obtained at the THEMIS experimental solar tower plant in Targasonne, France.

  5. A Design-Adaptive Local Polynomial Estimator for the Errors-in-Variables Problem

    KAUST Repository

    Delaigle, Aurore

    2009-03-01

    Local polynomial estimators are popular techniques for nonparametric regression estimation and have received great attention in the literature. Their simplest version, the local constant estimator, can be easily extended to the errors-in-variables context by exploiting its similarity with the deconvolution kernel density estimator. The generalization of the higher order versions of the estimator, however, is not straightforward and has remained an open problem for the last 15 years. We propose an innovative local polynomial estimator of any order in the errors-in-variables context, derive its design-adaptive asymptotic properties and study its finite sample performance on simulated examples. We not only provide a solution to a long-standing open problem, but also make methodological contributions to errors-in-variables regression, including local polynomial estimation of derivative functions.

  6. PERFORMANCE OF THE ZERO FORCING PRECODING MIMO BROADCAST SYSTEMS WITH CHANNEL ESTIMATION ERRORS

    Institute of Scientific and Technical Information of China (English)

    Wang Jing; Liu Zhanli; Wang Yan; You Xiaohu

    2007-01-01

    In this paper, the effect of channel estimation errors upon Zero Forcing (ZF) precoding Multiple Input Multiple Output Broadcast (MIMO BC) systems is studied. Based on two kinds of Gaussian estimation error models, the performance analysis is conducted under different power allocation strategies. Analysis and simulation show that if the covariance of the channel estimation errors is independent of the received Signal to Noise Ratio (SNR), imperfect channel knowledge severely deteriorates the sum capacity and the Bit Error Rate (BER) performance. However, with orthogonal training and Minimum Mean Square Error (MMSE) channel estimation, the sum capacity and BER performance are consistent with those of the perfect Channel State Information (CSI) case, with only a performance degradation.
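
    A small numpy simulation in the spirit of this analysis: a ZF precoder is designed on an imperfect channel estimate and applied over the true channel. The error variance, equal power allocation and normalization convention are illustrative choices, not the paper's exact models.

```python
import numpy as np

rng = np.random.default_rng(2)
n_tx, n_users = 4, 4
sigma_e2 = 0.05                        # channel-estimation error variance (illustrative)

def zf_sum_rate(H_est, H_true, p_total):
    """Sum rate of a ZF precoder designed on H_est but applied over H_true."""
    W = H_est.conj().T @ np.linalg.inv(H_est @ H_est.conj().T)   # ZF precoder
    W = W / np.linalg.norm(W, axis=0, keepdims=True)             # per-stream normalization
    p = p_total / n_users                                         # equal power allocation
    G = H_true @ W                                                # effective channel
    rates = []
    for k in range(n_users):
        sig = p * abs(G[k, k])**2
        interf = p * (np.abs(G[k, :])**2).sum() - sig
        rates.append(np.log2(1 + sig / (interf + 1.0)))           # unit noise power
    return float(sum(rates))

for snr_db in (0, 5, 10, 15, 20):
    p_total = 10**(snr_db / 10)
    H = (rng.normal(size=(n_users, n_tx)) + 1j*rng.normal(size=(n_users, n_tx))) / np.sqrt(2)
    E = np.sqrt(sigma_e2 / 2) * (rng.normal(size=H.shape) + 1j*rng.normal(size=H.shape))
    print(f"{snr_db:2d} dB  perfect CSI: {zf_sum_rate(H, H, p_total):5.2f}"
          f"  imperfect CSI: {zf_sum_rate(H + E, H, p_total):5.2f} bit/s/Hz")
```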

  7. Residual-based a posteriori error estimation for multipoint flux mixed finite element methods

    KAUST Repository

    Du, Shaohong; Sun, Shuyu; Xie, Xiaoping

    2015-01-01

    A novel residual-type a posteriori error analysis technique is developed for multipoint flux mixed finite element methods for flow in porous media in two or three space dimensions. The derived a posteriori error estimator for the velocity and pressure error in L-norm consists of discretization and quadrature indicators, and is shown to be reliable and efficient. The main tools of analysis are a locally postprocessed approximation to the pressure solution of an auxiliary problem and a quadrature error estimate. Numerical experiments are presented to illustrate the competitive behavior of the estimator.

  9. An Enhanced Non-Coherent Pre-Filter Design for Tracking Error Estimation in GNSS Receivers.

    Science.gov (United States)

    Luo, Zhibin; Ding, Jicheng; Zhao, Lin; Wu, Mouyan

    2017-11-18

    Tracking error estimation is of great importance in global navigation satellite system (GNSS) receivers. Any inaccurate estimation of the tracking error will decrease the signal tracking ability of signal tracking loops and the accuracies of position fixing, velocity determination, and timing. Tracking error estimation can be done by a traditional discriminator or a Kalman filter-based pre-filter. The pre-filter can be divided into two categories: coherent and non-coherent. This paper focuses on the performance improvements of the non-coherent pre-filter. Firstly, the signal characteristics of coherent and non-coherent integration-which are the basis of tracking error estimation-are analyzed in detail. After that, the probability distribution of the estimation noise of the four-quadrant arctangent (ATAN2) discriminator is derived according to the mathematical model of coherent integration. Secondly, the statistical property of the observation noise of the non-coherent pre-filter is studied through Monte Carlo simulation to set the observation noise variance matrix correctly. Thirdly, a simple fault detection and exclusion (FDE) structure is introduced to the non-coherent pre-filter design, and thus its effective working range for carrier phase error estimation extends from (-0.25 cycle, 0.25 cycle) to (-0.5 cycle, 0.5 cycle). Finally, the estimation accuracies of the discriminator, the coherent pre-filter, and the enhanced non-coherent pre-filter are evaluated comprehensively through a carefully designed experiment scenario. The pre-filter outperforms the traditional discriminator in estimation accuracy. In a highly dynamic scenario, the enhanced non-coherent pre-filter provides accuracy improvements of 41.6%, 46.4%, and 50.36% for carrier phase error, carrier frequency error, and code phase error estimation, respectively, when compared with the coherent pre-filter. The enhanced non-coherent pre-filter outperforms the coherent pre-filter in code phase error estimation when carrier-to-noise density ratio
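
    For illustration, a sketch of the ATAN2 carrier-phase discriminator stage mentioned above; the correlator amplitude and noise level are made up, and the Kalman pre-filter itself is not shown.

```python
import numpy as np

def atan2_phase_discriminator(i_prompt: float, q_prompt: float) -> float:
    """Four-quadrant arctangent (ATAN2) carrier-phase discriminator.
    Returns the phase error estimate in cycles; its unambiguous range is
    (-0.5, 0.5) cycle."""
    return np.arctan2(q_prompt, i_prompt) / (2.0 * np.pi)

# Toy coherent-integration outputs for a true phase error of 0.15 cycle
true_err_cycles = 0.15
amp, noise = 100.0, 5.0
rng = np.random.default_rng(3)
i_p = amp * np.cos(2 * np.pi * true_err_cycles) + rng.normal(0, noise)
q_p = amp * np.sin(2 * np.pi * true_err_cycles) + rng.normal(0, noise)
print("estimated phase error [cycles]:", atan2_phase_discriminator(i_p, q_p))
```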

  10. A discontinuous Poisson-Boltzmann equation with interfacial jump: homogenisation and residual error estimate.

    Science.gov (United States)

    Fellner, Klemens; Kovtunenko, Victor A

    2016-01-01

    A nonlinear Poisson-Boltzmann equation with inhomogeneous Robin type boundary conditions at the interface between two materials is investigated. The model describes the electrostatic potential generated by a vector of ion concentrations in a periodic multiphase medium with dilute solid particles. The key issue stems from interfacial jumps, which necessitate discontinuous solutions to the problem. Based on variational techniques, we derive the homogenisation of the discontinuous problem and establish a rigorous residual error estimate up to the first-order correction.

  11. Estimating error rates for firearm evidence identifications in forensic science

    Science.gov (United States)

    Song, John; Vorburger, Theodore V.; Chu, Wei; Yen, James; Soons, Johannes A.; Ott, Daniel B.; Zhang, Nien Fan

    2018-01-01

    Estimating error rates for firearm evidence identification is a fundamental challenge in forensic science. This paper describes the recently developed congruent matching cells (CMC) method for image comparisons, its application to firearm evidence identification, and its usage and initial tests for error rate estimation. The CMC method divides compared topography images into correlation cells. Four identification parameters are defined for quantifying both the topography similarity of the correlated cell pairs and the pattern congruency of the registered cell locations. A declared match requires a significant number of CMCs, i.e., cell pairs that meet all similarity and congruency requirements. Initial testing on breech face impressions of a set of 40 cartridge cases fired with consecutively manufactured pistol slides showed wide separation between the distributions of CMC numbers observed for known matching and known non-matching image pairs. Another test on 95 cartridge cases from a different set of slides manufactured by the same process also yielded widely separated distributions. The test results were used to develop two statistical models for the probability mass function of CMC correlation scores. The models were applied to develop a framework for estimating cumulative false positive and false negative error rates and individual error rates of declared matches and non-matches for this population of breech face impressions. The prospect for applying the models to large populations and realistic case work is also discussed. The CMC method can provide a statistical foundation for estimating error rates in firearm evidence identifications, thus emulating methods used for forensic identification of DNA evidence. PMID:29331680
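
    A minimal sketch of the cell-correlation idea behind the CMC method, not the NIST implementation: the cell size and correlation threshold are illustrative, and the per-cell registration search is replaced by a placeholder.

```python
import numpy as np

def cmc_score(img_a, img_b, cell=64, t_corr=0.5):
    """Split both topography images into cells, correlate corresponding cells,
    and count cells passing the similarity threshold (simplified CMC count)."""
    n_cmc = 0
    h, w = img_a.shape
    for i in range(0, h - cell + 1, cell):
        for j in range(0, w - cell + 1, cell):
            a = img_a[i:i+cell, j:j+cell]
            b = img_b[i:i+cell, j:j+cell]
            a = (a - a.mean()) / (a.std() + 1e-12)
            b = (b - b.mean()) / (b.std() + 1e-12)
            corr = (a * b).mean()   # normalized cross-correlation at zero lag
            # The real method also searches each cell's best registration (x, y, theta)
            # and checks its congruency with the global registration; a zero-offset
            # placeholder stands in for that step here.
            if corr >= t_corr:
                n_cmc += 1
    return n_cmc

rng = np.random.default_rng(4)
base = rng.normal(size=(256, 256))
print("matching pair CMC count    :", cmc_score(base, base + 0.3 * rng.normal(size=base.shape)))
print("non-matching pair CMC count:", cmc_score(base, rng.normal(size=(256, 256))))
```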

  12. Supervised local error estimation for nonlinear image registration using convolutional neural networks

    NARCIS (Netherlands)

    Eppenhof, Koen A.J.; Pluim, Josien P.W.; Styner, M.A.; Angelini, E.D.

    2017-01-01

    Error estimation in medical image registration is valuable when validating, comparing, or combining registration methods. To validate a nonlinear image registration method, ideally the registration error should be known for the entire image domain. We propose a supervised method for the estimation

  13. Measurement Error in Income and Schooling and the Bias of Linear Estimators

    DEFF Research Database (Denmark)

    Bingley, Paul; Martinello, Alessandro

    2017-01-01

    We propose a general framework for determining the extent of measurement error bias in ordinary least squares and instrumental variable (IV) estimators of linear models while allowing for measurement error in the validation source. We apply this method by validating Survey of Health, Ageing and Retirement in Europe data with Danish administrative registers. Contrary to most validation studies, we find that measurement error in income is classical once we account for imperfect validation data. We find nonclassical measurement error in schooling, causing a 38% amplification bias in IV estimators.

  14. Component Analysis of Errors on PERSIANN Precipitation Estimates over Urmia Lake Basin, IRAN

    Science.gov (United States)

    Ghajarnia, N.; Daneshkar Arasteh, P.; Liaghat, A. M.; Araghinejad, S.

    2016-12-01

    In this study, the PERSIANN daily dataset is evaluated from 2000 to 2011 in 69 pixels over the Urmia Lake basin in northwest Iran. Different analytical approaches and indexes are used to examine PERSIANN precision in the detection and estimation of rainfall rate. The residuals are decomposed into Hit, Miss and False Alarm (FA) estimation biases, while the systematic and random error components are also analyzed seasonally and categorically. A new interpretation of estimation accuracy named "reliability on PERSIANN estimations" is introduced, while the behaviour of existing categorical/statistical measures and error components is also analyzed seasonally over different rainfall rate categories. This study yields new insights into the nature of PERSIANN errors over the Urmia Lake basin as a semi-arid region in the Middle East, including the following: - The analyzed contingency table indexes indicate better detection precision during spring and fall. - A relatively constant level of error is generally observed among different categories. The range of precipitation estimates at different rainfall rate categories is nearly invariant, a sign of the existence of systematic error. - A low level of reliability is observed in PERSIANN estimations at different categories, mostly associated with a high level of FA error. However, it is observed that as the rate of precipitation increases, the ability and precision of PERSIANN in rainfall detection also increase. - The systematic and random error decomposition in this area shows that PERSIANN has more difficulty in modeling the system and pattern of rainfall than in bias due to rainfall uncertainties. The level of systematic error also increases considerably in heavier rainfalls. It is also important to note that PERSIANN error characteristics in each season vary with the conditions and rainfall patterns of that season, which shows the necessity of a seasonally different approach for the calibration of
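
    One common way to split mean-squared error into systematic and random parts is a Willmott-type regression decomposition, sketched below; it is not necessarily the exact decomposition used in this study, and the rainfall data are synthetic.

```python
import numpy as np

def decompose_mse(obs, est):
    """Split MSE into systematic and random parts by regressing the estimates
    on the observations (Willmott-type decomposition; mse == mse_s + mse_r)."""
    slope, intercept = np.polyfit(obs, est, 1)
    est_hat = slope * obs + intercept            # "systematic" part of the estimate
    mse   = np.mean((est - obs) ** 2)
    mse_s = np.mean((est_hat - obs) ** 2)        # systematic component
    mse_r = np.mean((est - est_hat) ** 2)        # random component
    return mse, mse_s, mse_r

rng = np.random.default_rng(5)
gauge = rng.gamma(2.0, 3.0, 500)                           # "observed" daily rainfall
satellite = 0.7 * gauge + 1.0 + rng.normal(0, 2.0, 500)    # biased + noisy estimate
print("MSE, systematic, random:", decompose_mse(gauge, satellite))
```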

  15. Remote one-qubit information concentration and decoding of operator quantum error-correction codes

    International Nuclear Information System (INIS)

    Hsu Liyi

    2007-01-01

    We propose a general scheme of remote one-qubit information concentration. To achieve this task, Bell-correlated mixed states are exploited. In addition, the nonremote one-qubit information concentration is equivalent to the decoding of a quantum error-correction code. Here we propose how to decode the stabilizer codes. In particular, the proposed scheme can be used for the operator quantum error-correction codes. The encoded state can be recreated on the errorless qubit, regardless of how many bit-flip and phase-flip errors have occurred

  16. An Adaptive Estimation of Forecast Error Covariance Parameters for Kalman Filtering Data Assimilation

    Institute of Scientific and Technical Information of China (English)

    Xiaogu ZHENG

    2009-01-01

    An adaptive estimation of forecast error covariance matrices is proposed for Kalman filtering data assimilation. A forecast error covariance matrix is initially estimated using an ensemble of perturbation forecasts. This initially estimated matrix is then adjusted with scale parameters that are adaptively estimated by minimizing the -2 log-likelihood of observed-minus-forecast residuals. The proposed approach could be applied to Kalman filtering data assimilation with imperfect models when the model error statistics are not known. A simple nonlinear model (Burgers' equation model) is used to demonstrate the efficacy of the proposed approach.
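
    A minimal sketch of the underlying idea: a single multiplicative scale (inflation) parameter on the ensemble-based forecast error covariance is chosen to minimize the -2 log-likelihood of the observed-minus-forecast residuals. Dimensions, covariances and the single-parameter setup are illustrative, not the paper's full scheme.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def neg2_log_likelihood(scale, d, HPHt, R):
    """-2 log-likelihood (up to a constant) of residuals d ~ N(0, scale*HPHt + R)."""
    S = scale * HPHt + R
    _, logdet = np.linalg.slogdet(S)
    return logdet + d @ np.linalg.solve(S, d)

rng = np.random.default_rng(6)
m = 200                                          # number of observations
HPHt = np.diag(rng.uniform(0.5, 1.5, m))         # ensemble forecast error cov. in observation space
R = 0.2 * np.eye(m)                              # observation error covariance
true_scale = 2.5                                 # the ensemble underestimates the spread
d = rng.multivariate_normal(np.zeros(m), true_scale * HPHt + R)   # observed-minus-forecast residuals

res = minimize_scalar(neg2_log_likelihood, bounds=(0.1, 10.0),
                      args=(d, HPHt, R), method="bounded")
print("estimated inflation factor:", round(res.x, 2))
```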

  17. Estimating Canopy Nitrogen Concentration in Sugarcane Using Field Imaging Spectroscopy

    Directory of Open Access Journals (Sweden)

    Marc Souris

    2012-06-01

    Full Text Available The retrieval of nutrient concentration in sugarcane through hyperspectral remote sensing is widely known to be affected by canopy architecture. The goal of this research was to develop an estimation model that could explain the nitrogen variations in sugarcane with combined cultivars. Reflectance spectra were measured over the sugarcane canopy using a field spectroradiometer. The models were calibrated by a vegetation index and multiple linear regression. The original reflectance was transformed into a First-Derivative Spectrum (FDS) and two absorption features. The results indicated that the sensitive spectral wavelengths for quantifying nitrogen content existed mainly in the visible, red edge and far near-infrared regions of the electromagnetic spectrum. Normalized Differential Index (NDI) based on FDS (750/700) and Ratio Spectral Index (RVI) based on FDS (724/700) are best suited for characterizing the nitrogen concentration. The modified estimation model, generated by the Stepwise Multiple Linear Regression (SMLR) technique from FDS centered at 410, 426, 720, 754, and 1,216 nm, yielded the highest correlation coefficient value of 0.86 and Root Mean Square Error of the Estimate (RMSE) value of 0.033%N (n = 90) with nitrogen concentration in sugarcane. The results of this research demonstrated that the estimation model developed by SMLR yielded a higher correlation coefficient with nitrogen content than the model computed by narrow vegetation indices. The strong correlation between measured and estimated nitrogen concentration indicated that the methods proposed in this study could be used for the reliable diagnosis of nitrogen quantity in sugarcane. Finally, the success of the field spectroscopy used for estimating the nutrient quality of sugarcane allowed an additional experiment using the polar orbiting hyperspectral data for the timely determination of crop nutrient status in rangelands without any requirement of prior
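
    A short sketch of computing a first-derivative spectrum and the two indices named above from canopy reflectance; the synthetic spectrum and the band-selection helper are illustrative only.

```python
import numpy as np

def first_derivative(wavelengths, reflectance):
    """First-Derivative Spectrum (FDS) of canopy reflectance by finite differences."""
    return np.gradient(reflectance, wavelengths)

def band(wavelengths, values, nm):
    """Value of a spectrum at the band closest to `nm`."""
    return values[np.argmin(np.abs(wavelengths - nm))]

# Illustrative reflectance: 400-2500 nm at 1 nm resolution (synthetic red-edge shape)
wl = np.arange(400, 2501, 1.0)
rng = np.random.default_rng(7)
refl = 0.05 + 0.4 / (1 + np.exp(-(wl - 715) / 15)) + rng.normal(0, 0.002, wl.size)

fds = first_derivative(wl, refl)
fd750, fd724, fd700 = (band(wl, fds, nm) for nm in (750, 724, 700))

ndi = (fd750 - fd700) / (fd750 + fd700)   # Normalized Differential Index
rvi = fd724 / fd700                        # Ratio Spectral Index
print("NDI:", round(ndi, 3), " RVI:", round(rvi, 3))
```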

  18. Radon measurements-discussion of error estimates for selected methods

    International Nuclear Information System (INIS)

    Zhukovsky, Michael; Onischenko, Alexandra; Bastrikov, Vladislav

    2010-01-01

    The main sources of uncertainty for grab sampling, short-term (charcoal canisters) and long-term (track detectors) measurements are: systematic bias of the reference equipment; random Poisson and non-Poisson errors during calibration; and random Poisson and non-Poisson errors during measurements. The origins of non-Poisson random errors during calibration are different for different kinds of instrumental measurements. The main sources of uncertainty for retrospective measurements conducted by surface-trap techniques can be divided into two groups: errors of surface 210Pb (210Po) activity measurements and uncertainties of the transfer from 210Pb surface activity in glass objects to the average radon concentration during the object's exposure. It is shown that the total measurement error of the surface trap retrospective technique can be decreased to 35%.

  19. Estimating the annotation error rate of curated GO database sequence annotations

    Directory of Open Access Journals (Sweden)

    Brown Alfred L

    2007-05-01

    Full Text Available Abstract Background Annotations that describe the function of sequences are enormously important to researchers during laboratory investigations and when making computational inferences. However, there has been little investigation into the data quality of sequence function annotations. Here we have developed a new method of estimating the error rate of curated sequence annotations, and applied this to the Gene Ontology (GO) sequence database (GOSeqLite). This method involved artificially adding errors to sequence annotations at known rates, and used regression to model the impact on the precision of annotations based on BLAST matched sequences. Results We estimated the error rate of curated GO sequence annotations in the GOSeqLite database (March 2006) at between 28% and 30%. Annotations made without use of sequence similarity based methods (non-ISS) had an estimated error rate of between 13% and 18%. Annotations made with the use of sequence similarity methodology (ISS) had an estimated error rate of 49%. Conclusion While the overall error rate is reasonably low, it would be prudent to treat all ISS annotations with caution. Electronic annotators that use ISS annotations as the basis of predictions are likely to have higher false prediction rates, and for this reason designers of these systems should consider avoiding ISS annotations where possible. Electronic annotators that use ISS annotations to make predictions should be viewed sceptically. We recommend that curators thoroughly review ISS annotations before accepting them as valid. Overall, users of curated sequence annotations from the GO database should feel assured that they are using a comparatively high quality source of information.

  20. CTER—Rapid estimation of CTF parameters with error assessment

    Energy Technology Data Exchange (ETDEWEB)

    Penczek, Pawel A., E-mail: Pawel.A.Penczek@uth.tmc.edu [Department of Biochemistry and Molecular Biology, The University of Texas Medical School, 6431 Fannin MSB 6.220, Houston, TX 77054 (United States); Fang, Jia [Department of Biochemistry and Molecular Biology, The University of Texas Medical School, 6431 Fannin MSB 6.220, Houston, TX 77054 (United States); Li, Xueming; Cheng, Yifan [The Keck Advanced Microscopy Laboratory, Department of Biochemistry and Biophysics, University of California, San Francisco, CA 94158 (United States); Loerke, Justus; Spahn, Christian M.T. [Institut für Medizinische Physik und Biophysik, Charité – Universitätsmedizin Berlin, Charitéplatz 1, 10117 Berlin (Germany)

    2014-05-01

    In structural electron microscopy, the accurate estimation of the Contrast Transfer Function (CTF) parameters, particularly defocus and astigmatism, is of utmost importance for both initial evaluation of micrograph quality and for subsequent structure determination. Due to increases in the rate of data collection on modern microscopes equipped with new generation cameras, it is also important that the CTF estimation can be done rapidly and with minimal user intervention. Finally, in order to minimize the necessity for manual screening of the micrographs by a user it is necessary to provide an assessment of the errors of fitted parameters values. In this work we introduce CTER, a CTF parameters estimation method distinguished by its computational efficiency. The efficiency of the method makes it suitable for high-throughput EM data collection, and enables the use of a statistical resampling technique, bootstrap, that yields standard deviations of estimated defocus and astigmatism amplitude and angle, thus facilitating the automation of the process of screening out inferior micrograph data. Furthermore, CTER also outputs the spatial frequency limit imposed by reciprocal space aliasing of the discrete form of the CTF and the finite window size. We demonstrate the efficiency and accuracy of CTER using a data set collected on a 300 kV Tecnai Polara (FEI) using the K2 Summit DED camera in super-resolution counting mode. Using CTER we obtained a structure of the 80S ribosome whose large subunit had a resolution of 4.03 Å without, and 3.85 Å with, inclusion of astigmatism parameters. - Highlights: • We describe methodology for estimation of CTF parameters with error assessment. • Error estimates provide means for automated elimination of inferior micrographs. • High computational efficiency allows real-time monitoring of EM data quality. • Accurate CTF estimation yields structure of the 80S human ribosome at 3.85 Å.

  1. Exact error estimation for solutions of nuclide chain equations

    International Nuclear Information System (INIS)

    Tachihara, Hidekazu; Sekimoto, Hiroshi

    1999-01-01

    The exact solution of nuclide chain equations to an arbitrary number of significant figures is obtained for a linear chain by employing the Bateman method in multiple-precision arithmetic. The exact error estimation of major calculation methods for a nuclide chain equation is done by using this exact solution as a standard. The Bateman, finite difference, Runge-Kutta and matrix exponential methods are investigated. The present study confirms the following. The original Bateman method has very low accuracy in some cases, because of large-scale cancellations. The revised Bateman method by Siewers reduces the occurrence of cancellations and thereby shows high accuracy. In the time-difference methods, such as the finite difference and Runge-Kutta methods, the solutions are mainly affected by the truncation errors in the early decay time, and afterward by the round-off errors. Even though a variable time mesh is employed to suppress the accumulation of round-off errors, it appears to be impractical. Judging from these estimations, the matrix exponential method is the best among all the methods except the Bateman method, whose calculation process for a linear chain is not identical with that for a general one. (author)
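
    A sketch of the Bateman solution for a linear chain evaluated in multiple-precision arithmetic with mpmath, in the spirit of the reference solution described; the decay constants are illustrative and must all be distinct.

```python
from mpmath import mp, mpf, exp, fprod

mp.dps = 50   # 50 significant digits to suppress cancellation errors

def bateman_last_nuclide(n1_0, lambdas, t):
    """Number density of the last nuclide of a linear chain at time t, starting
    from n1_0 atoms of the first nuclide (Bateman solution, distinct constants)."""
    n = len(lambdas)
    total = mpf(0)
    for j in range(n):
        num = fprod(lambdas[k] for k in range(n - 1))                 # lambda_1 ... lambda_{n-1}
        den = fprod(lambdas[k] - lambdas[j] for k in range(n) if k != j)
        total += num / den * exp(-lambdas[j] * t)
    return n1_0 * total

lams = [mpf("1e-3"), mpf("2e-4"), mpf("5e-5")]   # illustrative decay constants [1/s]
print(bateman_last_nuclide(mpf(1e6), lams, mpf(3600)))
```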

  2. Complementarity based a posteriori error estimates and their properties

    Czech Academy of Sciences Publication Activity Database

    Vejchodský, Tomáš

    2012-01-01

    Roč. 82, č. 10 (2012), s. 2033-2046 ISSN 0378-4754 R&D Projects: GA ČR(CZ) GA102/07/0496; GA AV ČR IAA100760702 Institutional research plan: CEZ:AV0Z10190503 Keywords : error majorant * a posteriori error estimates * method of hypercircle Subject RIV: BA - General Mathematics Impact factor: 0.836, year: 2012 http://www.sciencedirect.com/science/article/pii/S0378475411001509

  3. A new procedure for estimating the cell temperature of a high concentrator photovoltaic grid connected system based on atmospheric parameters

    International Nuclear Information System (INIS)

    Fernández, Eduardo F.; Almonacid, Florencia

    2015-01-01

    Highlights: • Concentrating grid-connected systems are working at the maximum power point. • The operating cell temperature is inherently lower than at open circuit. • Two novel methods for estimating the cell temperature are proposed. • Both predict the operating cell temperature from atmospheric parameters. • Experimental results show that both methods perform effectively. - Abstract: The working cell temperature of high concentrator photovoltaic systems is a crucial parameter when analysing their performance and reliability. At the same time, due to the special features of this technology, the direct measurement of the cell temperature is very complex and is usually obtained by using different indirect methods. High concentrator photovoltaic modules in a system are operating at maximum power since they are connected to an inverter. As a result, their cell temperature is lower than the cell temperature of a module at open-circuit voltage, since an important part of the light power density is converted into electricity. In this paper, a procedure for indirectly estimating the cell temperature of a high concentrator photovoltaic system from atmospheric parameters is addressed. This new procedure therefore has the advantage that it is valid for estimating the cell temperature of a system at any location of interest if the atmospheric parameters are available. To achieve this goal, two different methods are proposed: one based on simple mathematical relationships and another based on artificial intelligence techniques. Results show that both methods predict the cell temperature of a module connected to an inverter with a low margin of error, with a normalised root mean square error lower than or equal to 3.3%, an absolute root mean square error lower than or equal to 2 °C, a mean absolute error lower than or equal to 1.5 °C, and a mean bias error and a mean relative error almost equal to 0%

  4. An investigation into multi-dimensional prediction models to estimate the pose error of a quadcopter in a CSP plant setting

    Science.gov (United States)

    Lock, Jacobus C.; Smit, Willie J.; Treurnicht, Johann

    2016-05-01

    The Solar Thermal Energy Research Group (STERG) is investigating ways to make heliostats cheaper to reduce the total cost of a concentrating solar power (CSP) plant. One avenue of research is to use unmanned aerial vehicles (UAVs) to automate and assist with the heliostat calibration process. To do this, the pose estimation error of each UAV must be determined and integrated into a calibration procedure. A computer vision (CV) system is used to measure the pose of a quadcopter UAV. However, this CV system contains considerable measurement errors. Since this is a high-dimensional problem, a sophisticated prediction model must be used to estimate the measurement error of the CV system for any given pose measurement vector. This paper attempts to train and validate such a model with the aim of using it to determine the pose error of a quadcopter in a CSP plant setting.

  5. Error Estimation and Accuracy Improvements in Nodal Transport Methods; Estimacion de Errores y Aumento de la Precision en Metodos Nodales de Transporte

    Energy Technology Data Exchange (ETDEWEB)

    Zamonsky, O M [Comision Nacional de Energia Atomica, Centro Atomico Bariloche (Argentina)

    2000-07-01

    The accuracy of the solutions produced by the Discrete Ordinates neutron transport nodal methods is analyzed. The new numerical methodologies obtained increase the accuracy of the analyzed schemes and provide a posteriori error estimators. The accuracy improvement is obtained with new equations that make the numerical procedure free of truncation errors and by proposing spatial reconstructions of the angular fluxes that are more accurate than those used to date. An a posteriori error estimator is rigorously obtained for one-dimensional systems that, for certain types of problems, allows the accuracy of the solutions to be quantified. From comparisons with the one-dimensional results, an a posteriori error estimator is also obtained for multidimensional systems. Local indicators, which quantify the spatial distribution of the errors, are obtained by decomposition of the mentioned estimators. This makes the proposed methodology suitable for performing adaptive calculations. Some numerical examples are presented to validate the theoretical developments and to illustrate the ranges where the proposed approximations are valid.

  6. Estimation of chromatic errors from broadband images for high contrast imaging

    Science.gov (United States)

    Sirbu, Dan; Belikov, Ruslan

    2015-09-01

    Usage of an internal coronagraph with an adaptive optical system for wavefront correction for direct imaging of exoplanets is currently being considered for many mission concepts, including as an instrument addition to the WFIRST-AFTA mission to follow the James Webb Space Telescope. The main technical challenge associated with direct imaging of exoplanets with an internal coronagraph is to effectively control both the diffraction and scattered light from the star so that the dim planetary companion can be seen. For the deformable mirror (DM) to recover a dark hole region with sufficiently high contrast in the image plane, wavefront errors are usually estimated using probes on the DM. To date, most broadband lab demonstrations use narrowband filters to estimate the chromaticity of the wavefront error, but this reduces the photon flux per filter and requires a filter system. Here, we propose a method to estimate the chromaticity of wavefront errors using only a broadband image. This is achieved by using special DM probes that have sufficient chromatic diversity. As a case example, we simulate the retrieval of the spectrum of the central wavelength from broadband images for a simple shaped-pupil coronagraph with a conjugate DM and compute the resulting estimation error.

  7. Uncertainty quantification for radiation measurements: Bottom-up error variance estimation using calibration information

    International Nuclear Information System (INIS)

    Burr, T.; Croft, S.; Krieger, T.; Martin, K.; Norman, C.; Walsh, S.

    2016-01-01

    One example of top-down uncertainty quantification (UQ) involves comparing two or more measurements on each of multiple items. One example of bottom-up UQ expresses a measurement result as a function of one or more input variables that have associated errors, such as a measured count rate, which individually (or collectively) can be evaluated for impact on the uncertainty in the resulting measured value. In practice, it is often found that top-down UQ exhibits larger error variances than bottom-up UQ, because some error sources are present in the fielded assay methods used in top-down UQ that are not present (or not recognized) in the assay studies used in bottom-up UQ. One would like better consistency between the two approaches in order to claim understanding of the measurement process. The purpose of this paper is to refine bottom-up uncertainty estimation by using calibration information so that if there are no unknown error sources, the refined bottom-up uncertainty estimate will agree with the top-down uncertainty estimate to within a specified tolerance. Then, in practice, if the top-down uncertainty estimate is larger than the refined bottom-up uncertainty estimate by more than the specified tolerance, there must be omitted sources of error beyond those predicted from calibration uncertainty. The paper develops a refined bottom-up uncertainty approach for four cases of simple linear calibration: (1) inverse regression with negligible error in predictors, (2) inverse regression with non-negligible error in predictors, (3) classical regression followed by inversion with negligible error in predictors, and (4) classical regression followed by inversion with non-negligible errors in predictors. Our illustrations are of general interest, but are drawn from our experience with nuclear material assay by non-destructive assay. The main example we use is gamma spectroscopy that applies the enrichment meter principle. Previous papers that ignore error in predictors
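
    As a minimal illustration of one of the calibration cases discussed (classical regression followed by inversion, with delta-method propagation of the calibration and measurement uncertainties), using made-up calibration data:

```python
import numpy as np

# Calibration standards: known quantity x, measured response y (illustrative data)
x = np.array([0.7, 1.5, 3.0, 4.5, 20.0, 36.0, 93.0])
rng = np.random.default_rng(8)
y = 5.0 + 2.0 * x + rng.normal(0, 1.5, x.size)

# Classical regression y = a + b*x with parameter covariance
A = np.column_stack([np.ones_like(x), x])
beta, *_ = np.linalg.lstsq(A, y, rcond=None)
resid = y - A @ beta
s2 = resid @ resid / (len(x) - 2)
cov_beta = s2 * np.linalg.inv(A.T @ A)
a, b = beta

# New item: measured response y0 with its own repeatability variance
y0, var_y0 = 60.0, 2.0**2
x_hat = (y0 - a) / b                               # inversion of the calibration line

# Delta-method variance of x_hat = (y0 - a)/b
g = np.array([-1.0 / b, -(y0 - a) / b**2])         # gradient of x_hat w.r.t. (a, b)
var_x = g @ cov_beta @ g + var_y0 / b**2
print("assay estimate:", round(x_hat, 2), "+/-", round(float(np.sqrt(var_x)), 2))
```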

  8. Statistical error estimation of the Feynman-α method using the bootstrap method

    International Nuclear Information System (INIS)

    Endo, Tomohiro; Yamamoto, Akio; Yagi, Takahiro; Pyeon, Cheol Ho

    2016-01-01

    Applicability of the bootstrap method is investigated to estimate the statistical error of the Feynman-α method, which is one of the subcritical measurement techniques based on reactor noise analysis. In the Feynman-α method, the statistical error can be simply estimated from multiple measurements of reactor noise; however, this requires additional measurement time to repeat the measurements multiple times. Using a resampling technique called the 'bootstrap method', the standard deviation and confidence interval of measurement results obtained by the Feynman-α method can be estimated as the statistical error using only a single measurement of reactor noise. In order to validate our proposed technique, we carried out a passive measurement of reactor noise without any external source, i.e. with only the inherent neutron source from spontaneous fission and (α,n) reactions in the nuclear fuel, at the Kyoto University Criticality Assembly. Through the actual measurement, it is confirmed that the bootstrap method is applicable to approximately estimate the statistical error of measurement results obtained by the Feynman-α method. (author)
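
    A minimal sketch of the bootstrap idea for a single gate width: resample the gated counts, recompute the Feynman-Y (variance-to-mean ratio minus one), and take the spread of the resampled values as the statistical error. The count data below are synthetic, not reactor measurements.

```python
import numpy as np

def feynman_y(counts):
    """Feynman-Y (variance-to-mean ratio minus one) of gated neutron counts."""
    return counts.var(ddof=1) / counts.mean() - 1.0

def bootstrap_error(counts, n_boot=2000, seed=0):
    """Bootstrap standard deviation and 95% percentile interval of Feynman-Y
    computed from a single sequence of gated counts."""
    rng = np.random.default_rng(seed)
    ys = np.array([feynman_y(rng.choice(counts, size=counts.size, replace=True))
                   for _ in range(n_boot)])
    return ys.std(ddof=1), np.percentile(ys, [2.5, 97.5])

# Illustrative correlated-count data (negative-binomial counts give Y > 0)
rng = np.random.default_rng(9)
counts = rng.negative_binomial(n=20, p=0.5, size=5000).astype(float)

print("Y            :", round(feynman_y(counts), 4))
std, ci = bootstrap_error(counts)
print("bootstrap std:", round(float(std), 4), " 95% CI:", np.round(ci, 4))
```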

  9. A posteriori error estimates for finite volume approximations of elliptic equations on general surfaces

    Energy Technology Data Exchange (ETDEWEB)

    Ju, Lili; Tian, Li; Wang, Desheng

    2008-10-31

    In this paper, we present a residual-based a posteriori error estimate for the finite volume discretization of steady convection–diffusion–reaction equations defined on surfaces in R3, which are often implicitly represented as level sets of smooth functions. Reliability and efficiency of the proposed a posteriori error estimator are rigorously proved. Numerical experiments are also conducted to verify the theoretical results and demonstrate the robustness of the error estimator.

  10. Augmented GNSS Differential Corrections Minimum Mean Square Error Estimation Sensitivity to Spatial Correlation Modeling Errors

    Directory of Open Access Journals (Sweden)

    Nazelie Kassabian

    2014-06-01

    Full Text Available Railway signaling is a safety system that has evolved over the last couple of centuries towards autonomous functionality. Recently, great effort is being devoted in this field to the use and exploitation of Global Navigation Satellite System (GNSS) signals and GNSS augmentation systems, in view of lowering railway track equipment and maintenance costs, which is a priority to sustain the investments for modernizing the local and regional lines, most of which lack automatic train protection systems and are still manually operated. The objective of this paper is to assess the sensitivity of the Linear Minimum Mean Square Error (LMMSE) algorithm to modeling errors in the spatial correlation function that characterizes true pseudorange Differential Corrections (DCs). This study is inspired by the railway application; however, it applies to all transportation systems, including the road sector, that need to be complemented by an augmentation system in order to deliver accurate and reliable positioning with integrity specifications. A vector of noisy pseudorange DC measurements is simulated, assuming a Gauss-Markov model with a decay rate parameter inversely proportional to the correlation distance that exists between two points of a certain environment. The LMMSE algorithm is applied on this vector to estimate the true DC, and the estimation error is compared to the noise added during simulation. The results show that for large enough values of the ratio of the correlation distance to the Reference Station (RS) separation distance, the LMMSE brings considerable advantage in terms of estimation error accuracy and precision. Conversely, the LMMSE algorithm may deteriorate the quality of the DC measurements whenever the ratio falls below a certain threshold.
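
    A small numpy sketch of the LMMSE estimator with an exponential (Gauss-Markov) spatial correlation model, including a mismatched correlation distance; the station geometry, correlation distance and noise levels are illustrative.

```python
import numpy as np

rng = np.random.default_rng(10)
n_rs = 12                                   # reference stations along a line [km]
pos = np.sort(rng.uniform(0, 100, n_rs))
d_corr_true = 40.0                          # true correlation distance [km]
sigma_dc, sigma_noise = 1.0, 0.3            # DC field std and measurement noise std [m]

def cov_matrix(positions, d_corr, sigma):
    d = np.abs(positions[:, None] - positions[None, :])
    return sigma**2 * np.exp(-d / d_corr)   # Gauss-Markov (exponential) correlation

# One realization of true DCs and their noisy measurements
C_true = cov_matrix(pos, d_corr_true, sigma_dc)
dc_true = rng.multivariate_normal(np.zeros(n_rs), C_true)
meas = dc_true + rng.normal(0, sigma_noise, n_rs)

def lmmse(meas, d_corr_model):
    """LMMSE estimate of the true DC vector, with a possibly mismatched model."""
    C = cov_matrix(pos, d_corr_model, sigma_dc)
    W = C @ np.linalg.inv(C + sigma_noise**2 * np.eye(n_rs))
    return W @ meas

for d_model in (40.0, 10.0, 160.0):         # matched and mismatched models
    err = lmmse(meas, d_model) - dc_true
    print(f"model d_corr={d_model:6.1f} km  RMS estimation error={np.sqrt(np.mean(err**2)):.3f} m")
```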

  11. GPS/DR Error Estimation for Autonomous Vehicle Localization.

    Science.gov (United States)

    Lee, Byung-Hyun; Song, Jong-Hwa; Im, Jun-Hyuck; Im, Sung-Hyuck; Heo, Moon-Beom; Jee, Gyu-In

    2015-08-21

    Autonomous vehicles require highly reliable navigation capabilities. For example, a lane-following method cannot be applied in an intersection without lanes, and since typical lane detection is performed using a straight-line model, errors can occur when the lateral distance is estimated in curved sections due to a model mismatch. Therefore, this paper proposes a localization method that uses GPS/DR error estimation based on a lane detection method with curved lane models, stop line detection, and curve matching in order to improve the performance during waypoint following procedures. The advantage of using the proposed method is that position information can be provided for autonomous driving through intersections, in sections with sharp curves, and in curved sections following a straight section. The proposed method was applied in autonomous vehicles at an experimental site to evaluate its performance, and the results indicate that the positioning achieved accuracy at the sub-meter level.

  12. Maximum Likelihood Approach for RFID Tag Set Cardinality Estimation with Detection Errors

    DEFF Research Database (Denmark)

    Nguyen, Chuyen T.; Hayashi, Kazunori; Kaneko, Megumi

    2013-01-01

    Abstract Estimation schemes of Radio Frequency IDentification (RFID) tag set cardinality are studied in this paper using a Maximum Likelihood (ML) approach. We consider the estimation problem under the model of multiple independent reader sessions with detection errors due to unreliable radio...... is evaluated under different system parameters and compared with that of the conventional method via computer simulations assuming flat Rayleigh fading environments and a framed-slotted ALOHA based protocol. Keywords: RFID, tag cardinality estimation, maximum likelihood, detection error...

  13. Facial motion parameter estimation and error criteria in model-based image coding

    Science.gov (United States)

    Liu, Yunhai; Yu, Lu; Yao, Qingdong

    2000-04-01

    Model-based image coding has been given extensive attention due to its high subjective image quality and low bit-rates. However, the estimation of object motion parameters is still a difficult problem, and there are no proper error criteria for quality assessment that are consistent with visual properties. This paper presents an algorithm for facial motion parameter estimation based on feature point correspondence and gives motion parameter error criteria. The facial motion model comprises three parts. The first part is the global 3-D rigid motion of the head, the second part is non-rigid translation motion in the jaw area, and the third part consists of local non-rigid expression motion in the eye and mouth areas. The feature points are automatically selected by a function of edges, brightness and end-nodes outside the blocks of the eyes and mouth. The number of feature points is adjusted adaptively. The jaw translation motion is tracked by the changes in the feature point positions of the jaw. The areas of non-rigid expression motion can be rebuilt by using a block-pasting method. An approach for estimating the motion parameter error based on the quality of the reconstructed image is suggested, and an area error function and an error function of the contour transition-turn rate are used as quality criteria. The criteria properly reflect the image geometric distortion caused by the error of the estimated motion parameters.

  14. Measurement error in income and schooling, and the bias of linear estimators

    DEFF Research Database (Denmark)

    Bingley, Paul; Martinello, Alessandro

    The characteristics of measurement error determine the bias of linear estimators. We propose a method for validating economic survey data allowing for measurement error in the validation source, and we apply this method by validating Survey of Health, Ageing and Retirement in Europe (SHARE) data...... with Danish administrative registers. We find that measurement error in surveys is classical for annual gross income but non-classical for years of schooling, causing a 21% amplification bias in IV estimators of returns to schooling. Using a 1958 Danish schooling reform, we contextualize our result...

  15. Identification and estimation of nonlinear models using two samples with nonclassical measurement errors

    KAUST Repository

    Carroll, Raymond J.

    2010-05-01

    This paper considers identification and estimation of a general nonlinear Errors-in-Variables (EIV) model using two samples. Both samples consist of a dependent variable, some error-free covariates, and an error-prone covariate, for which the measurement error has unknown distribution and could be arbitrarily correlated with the latent true values; and neither sample contains an accurate measurement of the corresponding true variable. We assume that the regression model of interest - the conditional distribution of the dependent variable given the latent true covariate and the error-free covariates - is the same in both samples, but the distributions of the latent true covariates vary with observed error-free discrete covariates. We first show that the general latent nonlinear model is nonparametrically identified using the two samples when both could have nonclassical errors, without either instrumental variables or independence between the two samples. When the two samples are independent and the nonlinear regression model is parameterized, we propose sieve Quasi Maximum Likelihood Estimation (Q-MLE) for the parameter of interest, and establish its root-n consistency and asymptotic normality under possible misspecification, and its semiparametric efficiency under correct specification, with easily estimated standard errors. A Monte Carlo simulation and a data application are presented to show the power of the approach.

  16. Estimating pole/zero errors in GSN-IRIS/USGS network calibration metadata

    Science.gov (United States)

    Ringler, A.T.; Hutt, C.R.; Aster, R.; Bolton, H.; Gee, L.S.; Storm, T.

    2012-01-01

    Mapping the digital record of a seismograph into true ground motion requires the correction of the data by some description of the instrument's response. For the Global Seismographic Network (Butler et al., 2004), as well as many other networks, this instrument response is represented as a Laplace domain pole–zero model and published in the Standard for the Exchange of Earthquake Data (SEED) format. This Laplace representation assumes that the seismometer behaves as a linear system, with any abrupt changes described adequately via multiple time-invariant epochs. The SEED format allows for published instrument response errors as well, but these typically have not been estimated or provided to users. We present an iterative three-step method to estimate the instrument response parameters (poles and zeros) and their associated errors using random calibration signals. First, we solve a coarse nonlinear inverse problem using a least-squares grid search to yield a first approximation to the solution. This approach reduces the likelihood of poorly estimated parameters (a local-minimum solution) caused by noise in the calibration records and enhances algorithm convergence. Second, we iteratively solve a nonlinear parameter estimation problem to obtain the least-squares best-fit Laplace pole–zero–gain model. Third, by applying the central limit theorem, we estimate the errors in this pole–zero model by solving the inverse problem at each frequency in a two-thirds octave band centered at each best-fit pole–zero frequency. This procedure yields error estimates of the 99% confidence interval. We demonstrate the method by applying it to a number of recent Incorporated Research Institutions for Seismology/United States Geological Survey (IRIS/USGS) network calibrations (network code IU).

  17. Prediction of Monte Carlo errors by a theory generalized to treat track-length estimators

    International Nuclear Information System (INIS)

    Booth, T.E.; Amster, H.J.

    1978-01-01

    Present theories for predicting expected Monte Carlo errors in neutron transport calculations apply to estimates of flux-weighted integrals sampled directly by scoring individual collisions. To treat track-length estimators, the recent theory of Amster and Djomehri is generalized to allow the score distribution functions to depend on the coordinates of two successive collisions. It has long been known that the expected track length in a region of phase space equals the expected flux integrated over that region, but that the expected statistical error of the Monte Carlo estimate of the track length is different from that of the flux integral obtained by sampling the sum of the reciprocals of the cross sections for all collisions in the region. These conclusions are shown to be implied by the generalized theory, which provides explicit equations for the expected values and errors of both types of estimators. Sampling expected contributions to the track-length estimator is also treated. Other general properties of the errors for both estimators are derived from the equations and physically interpreted. The actual values of these errors are then obtained and interpreted for a simple specific example

  18. GPS/DR Error Estimation for Autonomous Vehicle Localization

    Directory of Open Access Journals (Sweden)

    Byung-Hyun Lee

    2015-08-01

    Full Text Available Autonomous vehicles require highly reliable navigation capabilities. For example, a lane-following method cannot be applied in an intersection without lanes, and since typical lane detection is performed using a straight-line model, errors can occur when the lateral distance is estimated in curved sections due to a model mismatch. Therefore, this paper proposes a localization method that uses GPS/DR error estimation based on a lane detection method with curved lane models, stop line detection, and curve matching in order to improve the performance during waypoint following procedures. The advantage of using the proposed method is that position information can be provided for autonomous driving through intersections, in sections with sharp curves, and in curved sections following a straight section. The proposed method was applied in autonomous vehicles at an experimental site to evaluate its performance, and the results indicate that the positioning achieved accuracy at the sub-meter level.

  19. Research on the Method of Noise Error Estimation of Atomic Clocks

    Science.gov (United States)

    Song, H. J.; Dong, S. W.; Li, W.; Zhang, J. H.; Jing, Y. J.

    2017-05-01

    The simulation methods of different noises of atomic clocks are given. The frequency flicker noise of atomic clock is studied by using the Markov process theory. The method for estimating the maximum interval error of the frequency white noise is studied by using the Wiener process theory. Based on the operation of 9 cesium atomic clocks in the time frequency reference laboratory of NTSC (National Time Service Center), the noise coefficients of the power-law spectrum model are estimated, and the simulations are carried out according to the noise models. Finally, the maximum interval error estimates of the frequency white noises generated by the 9 cesium atomic clocks have been acquired.

  20. A new anisotropic mesh adaptation method based upon hierarchical a posteriori error estimates

    Science.gov (United States)

    Huang, Weizhang; Kamenski, Lennard; Lang, Jens

    2010-03-01

    A new anisotropic mesh adaptation strategy for finite element solution of elliptic differential equations is presented. It generates anisotropic adaptive meshes as quasi-uniform ones in some metric space, with the metric tensor being computed based on hierarchical a posteriori error estimates. A global hierarchical error estimate is employed in this study to obtain reliable directional information of the solution. Instead of solving the global error problem exactly, which is costly in general, we solve it iteratively using the symmetric Gauß-Seidel method. Numerical results show that a few GS iterations are sufficient for obtaining a reasonably good approximation to the error for use in anisotropic mesh adaptation. The new method is compared with several strategies using local error estimators or recovered Hessians. Numerical results are presented for a selection of test examples and a mathematical model for heat conduction in a thermal battery with large orthotropic jumps in the material coefficients.

  1. Nonlinear adaptive control system design with asymptotically stable parameter estimation error

    Science.gov (United States)

    Mishkov, Rumen; Darmonski, Stanislav

    2018-01-01

    The paper presents a new general method for nonlinear adaptive system design with asymptotic stability of the parameter estimation error. The advantages of the approach include asymptotic unknown parameter estimation without persistent excitation and the capability to directly control the transient response time of the estimates. The method proposed modifies the basic parameter estimation dynamics designed via a known nonlinear adaptive control approach. The modification is based on the generalised prediction error, a priori constraints with a hierarchical parameter projection algorithm, and the stable data accumulation concepts. The data accumulation principle is the main tool for achieving asymptotic unknown parameter estimation. It relies on the parametric identifiability system property introduced. Necessary and sufficient conditions for exponential stability of the data accumulation dynamics are derived. The approach is applied in a nonlinear adaptive speed tracking vector control of a three-phase induction motor.

  2. A framework to estimate probability of diagnosis error in NPP advanced MCR

    International Nuclear Information System (INIS)

    Kim, Ar Ryum; Kim, Jong Hyun; Jang, Inseok; Seong, Poong Hyun

    2018-01-01

    Highlights: •As a new type of MCR has been installed in NPPs, the work environment has changed considerably. •A new framework to estimate operators' diagnosis error probabilities should be proposed. •Diagnosis error data were extracted from the full-scope simulator of the advanced MCR. •Using Bayesian inference, a TRC model was updated for use in the advanced MCR. -- Abstract: Recently, a new type of main control room (MCR) has been adopted in nuclear power plants (NPPs). The new MCR, known as the advanced MCR, consists of digitalized human-system interfaces (HSIs), computer-based procedures (CPS), and soft controls, while the conventional MCR includes many alarm tiles, analog indicators, hard-wired control devices, and paper-based procedures. These changes significantly affect the generic activities of the MCR operators, in relation to diagnostic activities. The aim of this paper is to suggest a framework to estimate the probabilities of diagnosis errors in the advanced MCR by updating a time reliability correlation (TRC) model. Using Bayesian inference, the TRC model was updated with the probabilities of diagnosis errors. Here, the diagnosis error data were collected from a full-scope simulator of the advanced MCR. To do this, diagnosis errors were determined based on an information processing model and their probabilities were calculated. However, these calculated probabilities of diagnosis errors were largely affected by context factors such as procedures, HSI, training, and others, known as PSFs (Performance Shaping Factors). In order to obtain the nominal diagnosis error probabilities, the weightings of PSFs were also evaluated. Then, with the nominal diagnosis error probabilities, the TRC model was updated. This led to the proposal of a framework to estimate the nominal probabilities of diagnosis errors in the advanced MCR.
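
    Under strong simplifying assumptions, the sketch below illustrates how simulator diagnosis-error counts could update a lognormal time-reliability correlation on a parameter grid; the prior, the fixed log-standard deviation, and the counts are placeholders and do not represent the paper's data or its exact method.

```python
import numpy as np
from scipy.stats import lognorm

# TRC model: P(diagnosis not completed within t) = 1 - F(t),
# where F is lognormal with unknown median m (sigma fixed here for brevity).
sigma = 0.8
t_obs = 120.0                  # available diagnosis time in the simulator runs [s] (placeholder)
n_trials, n_errors = 40, 3     # crews observed / crews that failed to diagnose (placeholders)

# Grid prior over the lognormal median [s]
medians = np.linspace(10.0, 300.0, 600)
prior = np.ones_like(medians) / medians.size           # flat prior over the grid

# Likelihood: each failure occurs with probability p = 1 - F(t_obs | median)
p_fail = 1.0 - lognorm.cdf(t_obs, s=sigma, scale=medians)
likelihood = p_fail**n_errors * (1.0 - p_fail)**(n_trials - n_errors)

posterior = prior * likelihood
posterior /= posterior.sum()

post_median = medians[np.argmax(posterior)]
post_p_fail = float(np.sum(posterior * p_fail))        # posterior-mean error probability
print("MAP lognormal median [s]:", round(post_median, 1))
print("updated diagnosis-error probability:", round(post_p_fail, 4))
```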

  3. Development of an integrated system for estimating human error probabilities

    Energy Technology Data Exchange (ETDEWEB)

    Auflick, J.L.; Hahn, H.A.; Morzinski, J.A.

    1998-12-01

    This is the final report of a three-year, Laboratory Directed Research and Development (LDRD) project at the Los Alamos National Laboratory (LANL). This project had as its main objective the development of a Human Reliability Analysis (HRA), knowledge-based expert system that would provide probabilistic estimates for potential human errors within various risk assessments, safety analysis reports, and hazard assessments. HRA identifies where human errors are most likely, estimates the error rate for individual tasks, and highlights the most beneficial areas for system improvements. This project accomplished three major tasks. First, several prominent HRA techniques and associated databases were collected and translated into an electronic format. Next, the project started a knowledge engineering phase where the expertise, i.e., the procedural rules and data, were extracted from those techniques and compiled into various modules. Finally, these modules, rules, and data were combined into a nearly complete HRA expert system.

  4. A residual-based a posteriori error estimator for single-phase Darcy flow in fractured porous media

    KAUST Repository

    Chen, Huangxin

    2016-12-09

    In this paper we develop an a posteriori error estimator for a mixed finite element method for single-phase Darcy flow in two-dimensional fractured porous media. The discrete fracture model is applied to represent the fractures as one-dimensional fractures in a two-dimensional domain. We consider the Raviart–Thomas mixed finite element method for the approximation of the coupled Darcy flows in the fractures and the surrounding porous media. We derive a robust residual-based a posteriori error estimator for the problem with non-intersecting fractures. The reliability and efficiency of the a posteriori error estimator are established for the error measured in an energy norm. Numerical results verifying the robustness of the proposed a posteriori error estimator are given. Moreover, our numerical results indicate that the a posteriori error estimator also works well for the problem with intersecting fractures.

  5. A user's manual of Tools for Error Estimation of Complex Number Matrix Computation (Ver.1.0)

    International Nuclear Information System (INIS)

    Ichihara, Kiyoshi.

    1997-03-01

    'Tools for Error Estimation of Complex Number Matrix Computation' is a subroutine library which aids the users in obtaining the error ranges of complex-number linear systems' solutions or Hermitian matrices' eigenvalues. This library contains routines for both sequential computers and parallel computers. The subroutines for linear system error estimation calculate norms of residual vectors, matrices' condition numbers, error bounds of solutions and so on. The error estimation subroutines for Hermitian matrix eigenvalues derive the error ranges of the eigenvalues according to the Korn-Kato formula. This user's manual contains a brief mathematical background of error analysis on linear algebra and usage of the subroutines. (author)

  6. Estimating Model Prediction Error: Should You Treat Predictions as Fixed or Random?

    Science.gov (United States)

    Wallach, Daniel; Thorburn, Peter; Asseng, Senthold; Challinor, Andrew J.; Ewert, Frank; Jones, James W.; Rotter, Reimund; Ruane, Alexander

    2016-01-01

    Crop models are important tools for impact assessment of climate change, as well as for exploring management options under current climate. It is essential to evaluate the uncertainty associated with predictions of these models. We compare two criteria of prediction error: MSEP_fixed, which evaluates the mean squared error of prediction for a model with fixed structure, parameters and inputs, and MSEP_uncertain(X), which evaluates the mean squared error averaged over the distributions of model structure, inputs and parameters. Comparison of model outputs with data can be used to estimate the former. The latter has a squared bias term, which can be estimated using hindcasts, and a model variance term, which can be estimated from a simulation experiment. The separate contributions to MSEP_uncertain(X) can be estimated using a random effects ANOVA. It is argued that MSEP_uncertain(X) is the more informative uncertainty criterion, because it is specific to each prediction situation.

  7. Variation of haemoglobin extinction coefficients can cause errors in the determination of haemoglobin concentration measured by near-infrared spectroscopy

    International Nuclear Information System (INIS)

    Kim, J G; Liu, H

    2007-01-01

    Near-infrared spectroscopy or imaging has been extensively applied to various biomedical applications since it can detect the concentrations of oxyhaemoglobin (HbO2), deoxyhaemoglobin (Hb) and total haemoglobin (Hbtotal) from deep tissues. To quantify concentrations of these haemoglobin derivatives, the extinction coefficient values of HbO2 and Hb have to be employed. However, it was not well recognized among researchers that small differences in extinction coefficients could cause significant errors in quantifying the concentrations of haemoglobin derivatives. In this study, we derived equations to estimate errors of haemoglobin derivatives caused by the variation of haemoglobin extinction coefficients. To prove our error analysis, we performed experiments using liquid-tissue phantoms containing 1% Intralipid in a phosphate-buffered saline solution. The gas intervention of pure oxygen was given in the solution to examine the oxygenation changes in the phantom, and 3 mL of human blood was added twice to show the changes in [Hbtotal]. The error calculation has shown that even a small variation (0.01 cm-1 mM-1) in extinction coefficients can produce appreciable relative errors in quantification of Δ[HbO2], Δ[Hb] and Δ[Hbtotal]. We have also observed that the error of Δ[Hbtotal] is not always larger than those of Δ[HbO2] and Δ[Hb]. This study concludes that we need to be aware of any variation in haemoglobin extinction coefficients, which could result from changes in temperature, and to utilize corresponding animal's haemoglobin extinction coefficients for the animal experiments, in order to obtain more accurate values of Δ[HbO2], Δ[Hb] and Δ[Hbtotal] from in vivo tissue measurements.
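
    As a rough illustration of the sensitivity described above, the following sketch inverts a two-wavelength modified Beer-Lambert system and perturbs the assumed extinction coefficients by 0.01 cm-1 mM-1; the wavelengths, extinction values and optical-density changes are placeholders, not values from the paper.

```python
# Sketch: sensitivity of a two-wavelength Beer-Lambert inversion to small
# changes in the assumed extinction coefficients. All numbers are
# illustrative placeholders, not values from the paper.
import numpy as np

def invert(delta_od, eps, pathlength=1.0):
    """Solve delta_od = pathlength * eps @ [dHbO2, dHb] for the concentration changes."""
    return np.linalg.solve(pathlength * eps, delta_od)

delta_od = np.array([0.020, 0.015])            # measured optical-density changes
eps_nominal = np.array([[1.10, 0.40],          # rows: wavelengths,
                        [0.60, 1.00]])         # cols: [HbO2, Hb] (cm^-1 mM^-1)
eps_perturbed = eps_nominal + 0.01             # 0.01 cm^-1 mM^-1 variation

c_nom = invert(delta_od, eps_nominal)
c_pert = invert(delta_od, eps_perturbed)
rel_err = (c_pert - c_nom) / c_nom * 100.0
print("relative error in [dHbO2, dHb] (%):", rel_err)
```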

  8. Variation of haemoglobin extinction coefficients can cause errors in the determination of haemoglobin concentration measured by near-infrared spectroscopy

    Science.gov (United States)

    Kim, J. G.; Liu, H.

    2007-10-01

    Near-infrared spectroscopy or imaging has been extensively applied to various biomedical applications since it can detect the concentrations of oxyhaemoglobin (HbO2), deoxyhaemoglobin (Hb) and total haemoglobin (Hbtotal) from deep tissues. To quantify concentrations of these haemoglobin derivatives, the extinction coefficient values of HbO2 and Hb have to be employed. However, it was not well recognized among researchers that small differences in extinction coefficients could cause significant errors in quantifying the concentrations of haemoglobin derivatives. In this study, we derived equations to estimate errors of haemoglobin derivatives caused by the variation of haemoglobin extinction coefficients. To prove our error analysis, we performed experiments using liquid-tissue phantoms containing 1% Intralipid in a phosphate-buffered saline solution. The gas intervention of pure oxygen was given in the solution to examine the oxygenation changes in the phantom, and 3 mL of human blood was added twice to show the changes in [Hbtotal]. The error calculation has shown that even a small variation (0.01 cm-1 mM-1) in extinction coefficients can produce appreciable relative errors in quantification of Δ[HbO2], Δ[Hb] and Δ[Hbtotal]. We have also observed that the error of Δ[Hbtotal] is not always larger than those of Δ[HbO2] and Δ[Hb]. This study concludes that we need to be aware of any variation in haemoglobin extinction coefficients, which could result from changes in temperature, and to utilize corresponding animal's haemoglobin extinction coefficients for the animal experiments, in order to obtain more accurate values of Δ[HbO2], Δ[Hb] and Δ[Hbtotal] from in vivo tissue measurements.

  9. On Kolmogorov asymptotics of estimators of the misclassification error rate in linear discriminant analysis

    KAUST Repository

    Zollanvari, Amin

    2013-05-24

    We provide a fundamental theorem that can be used in conjunction with Kolmogorov asymptotic conditions to derive the first moments of well-known estimators of the actual error rate in linear discriminant analysis of a multivariate Gaussian model under the assumption of a common known covariance matrix. The estimators studied in this paper are plug-in and smoothed resubstitution error estimators, both of which have not been studied before under Kolmogorov asymptotic conditions. As a result of this work, we present an optimal smoothing parameter that makes the smoothed resubstitution an unbiased estimator of the true error. For the sake of completeness, we further show how to utilize the presented fundamental theorem to achieve several previously reported results, namely the first moment of the resubstitution estimator and the actual error rate. We provide numerical examples to show the accuracy of the succeeding finite sample approximations in situations where the number of dimensions is comparable or even larger than the sample size.

  10. On Kolmogorov asymptotics of estimators of the misclassification error rate in linear discriminant analysis

    KAUST Repository

    Zollanvari, Amin; Genton, Marc G.

    2013-01-01

    We provide a fundamental theorem that can be used in conjunction with Kolmogorov asymptotic conditions to derive the first moments of well-known estimators of the actual error rate in linear discriminant analysis of a multivariate Gaussian model under the assumption of a common known covariance matrix. The estimators studied in this paper are plug-in and smoothed resubstitution error estimators, both of which have not been studied before under Kolmogorov asymptotic conditions. As a result of this work, we present an optimal smoothing parameter that makes the smoothed resubstitution an unbiased estimator of the true error. For the sake of completeness, we further show how to utilize the presented fundamental theorem to achieve several previously reported results, namely the first moment of the resubstitution estimator and the actual error rate. We provide numerical examples to show the accuracy of the succeeding finite sample approximations in situations where the number of dimensions is comparable or even larger than the sample size.

  11. Estimating and localizing the algebraic and total numerical errors using flux reconstructions

    Czech Academy of Sciences Publication Activity Database

    Papež, Jan; Strakoš, Z.; Vohralík, M.

    2018-01-01

    Roč. 138, č. 3 (2018), s. 681-721 ISSN 0029-599X R&D Projects: GA ČR GA13-06684S Grant - others:GA MŠk(CZ) LL1202 Institutional support: RVO:67985807 Keywords : numerical solution of partial differential equations * finite element method * a posteriori error estimation * algebraic error * discretization error * stopping criteria * spatial distribution of the error Subject RIV: BA - General Mathematics Impact factor: 2.152, year: 2016

  12. Mean value estimates of the error terms of Lehmer problem

    Indian Academy of Sciences (India)

    Mean value estimates of the error terms of Lehmer problem. DONGMEI REN and YAMING ... For further properties of N(a, p) in [6], he studied the mean square value of the error term E(a, p) = N(a, p) − (1/2)(p − 1).

  13. Computable Error Estimates for Finite Element Approximations of Elliptic Partial Differential Equations with Rough Stochastic Data

    KAUST Repository

    Hall, Eric Joseph; Hoel, Hå kon; Sandberg, Mattias; Szepessy, Anders; Tempone, Raul

    2016-01-01

    posteriori error estimates fail to capture. We propose goal-oriented estimates, based on local error indicators, for the pathwise Galerkin and expected quadrature errors committed in standard, continuous, piecewise linear finite element approximations

  14. Global Warming Estimation from MSU: Correction for Drift and Calibration Errors

    Science.gov (United States)

    Prabhakara, C.; Iacovazzi, R., Jr.; Yoo, J.-M.; Einaudi, Franco (Technical Monitor)

    2000-01-01

    Microwave Sounding Unit (MSU) radiometer observations in Ch 2 (53.74 GHz), made in the nadir direction from sequential, sun-synchronous, polar-orbiting NOAA morning satellites (NOAA 6, 10 and 12, which have about 7am/7pm orbital geometry) and afternoon satellites (NOAA 7, 9, 11 and 14, which have about 2am/2pm orbital geometry), are analyzed in this study to derive the global temperature trend from 1980 to 1998. In order to remove the discontinuities between the data of the successive satellites and to get a continuous time series, we first used the shortest possible time record of each satellite. In this way we get a preliminary estimate of the global temperature trend of 0.21 K/decade. However, this estimate is affected by systematic time-dependent errors. One such error is the instrument calibration error. This error can be inferred whenever there are overlapping measurements made by two satellites over an extended period of time. From the available successive satellite data we have taken the longest possible time record of each satellite to form the time series during the period 1980 to 1998 and to assess this error. We find that accounting for it decreases the global temperature trend by about 0.07 K/decade. In addition, there are systematic time-dependent errors present in the data that are introduced by the drift in the satellite orbital geometry; these arise from the diurnal cycle in temperature and from the drift-related change in the calibration of the MSU. In order to analyze the nature of these drift-related errors, the multi-satellite Ch 2 data set is partitioned into am and pm subsets to create two independent time series. The error can be assessed in the am and pm data of Ch 2 on land and can be eliminated. Observations made in the MSU Ch 1 (50.3 GHz) support this approach. The error is obvious only in the difference between the pm and am observations of Ch 2 over the ocean. We have followed two different paths to assess the impact of the errors on the global temperature trend. In one path the

  15. Uncertainty quantification in a chemical system using error estimate-based mesh adaption

    International Nuclear Information System (INIS)

    Mathelin, Lionel; Le Maitre, Olivier P.

    2012-01-01

    This paper describes a rigorous a posteriori error analysis for the stochastic solution of non-linear uncertain chemical models. The dual-based a posteriori stochastic error analysis extends the methodology developed in the deterministic finite elements context to stochastic discretization frameworks. It requires the resolution of two additional (dual) problems to yield the local error estimate. The stochastic error estimate can then be used to adapt the stochastic discretization. Different anisotropic refinement strategies are proposed, leading to a cost-efficient tool suitable for multi-dimensional problems of moderate stochastic dimension. The adaptive strategies allow both for refinement and coarsening of the stochastic discretization, as needed to satisfy a prescribed error tolerance. The adaptive strategies were successfully tested on a model for the hydrogen oxidation in supercritical conditions having 8 random parameters. The proposed methodologies are however general enough to be also applicable for a wide class of models such as uncertain fluid flows. (authors)

  16. Effect of Tracking Error of Double-Axis Tracking Device on the Optical Performance of Solar Dish Concentrator

    Directory of Open Access Journals (Sweden)

    Jian Yan

    2018-01-01

    Full Text Available In this paper, a flux distribution model of the focal plane in a dish concentrator system has been established based on the ray tracing method. This model was adopted to study the influence of the mirror slope error, solar direct normal irradiance, and tracking error of the elevation-azimuth tracking device (EATD) on the focal spot characteristics (i.e., flux distribution, geometrical shape, centroid position, and intercept factor). The law governing how EATD tracking errors are transmitted to the dish concentrator was also studied. The results show that the azimuth tracking error of the concentrator decreases with the increase of the concentrator elevation angle, and it decreases to 0 mrad when the elevation angle is 90°. The centroid position of the focal spot along the x-axis and y-axis has a linear relationship with the azimuth and elevation tracking error of the EATD, respectively, which could be used to evaluate and calibrate the tracking error of the dish concentrator. Finally, the transmission law of the EATD azimuth tracking error in solar heliostats is analyzed, and a dish concentrator using a spin-elevation tracking device is proposed, which can reduce the effect of the spin tracking error on the dish concentrator. This work provides a basis for allocating the manufacturing precision of tracking devices and for developing a new type of tracking device.

  17. A simulation study to quantify the impacts of exposure measurement error on air pollution health risk estimates in copollutant time-series models.

    Science.gov (United States)

    Dionisio, Kathie L; Chang, Howard H; Baxter, Lisa K

    2016-11-25

    Exposure measurement error in copollutant epidemiologic models has the potential to introduce bias in relative risk (RR) estimates. A simulation study was conducted using empirical data to quantify the impact of correlated measurement errors in time-series analyses of air pollution and health. ZIP-code level estimates of exposure for six pollutants (CO, NOx, EC, PM2.5, SO4, O3) from 1999 to 2002 in the Atlanta metropolitan area were used to calculate spatial, population (i.e. ambient versus personal), and total exposure measurement error. Empirically determined covariance of pollutant concentration pairs and the associated measurement errors were used to simulate true exposure (exposure without error) from observed exposure. Daily emergency department visits for respiratory diseases were simulated using a Poisson time-series model with a main pollutant RR = 1.05 per interquartile range, and a null association for the copollutant (RR = 1). Monte Carlo experiments were used to evaluate the impacts of correlated exposure errors of different copollutant pairs. Substantial attenuation of RRs due to exposure error was evident in nearly all copollutant pairs studied, ranging from 10 to 40% attenuation for spatial error, 3-85% for population error, and 31-85% for total error. When CO, NOx or EC is the main pollutant, we demonstrated the possibility of false positives, specifically identifying significant, positive associations for copollutants based on the estimated type I error rate. The impact of exposure error must be considered when interpreting results of copollutant epidemiologic models, due to the possibility of attenuation of main pollutant RRs and the increased probability of false positives when measurement error is present.
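
    The attenuation mechanism reported above can be reproduced in miniature with a single-pollutant Monte Carlo draw: generate counts from a true exposure, fit the health model with an error-contaminated exposure, and compare the recovered relative risk with the true one. The sketch below assumes classical measurement error and uses illustrative parameters only, not the study's design.

```python
# Sketch of the attenuation mechanism only (not the study's full design):
# Poisson counts are generated from a "true" exposure, the health model is
# then fit with an error-contaminated version of that exposure, and the
# recovered risk estimate shrinks toward the null.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n_days = 1500
true_exposure = rng.normal(0.0, 1.0, n_days)
log_rr_per_unit = np.log(1.05)                             # assumed true effect
counts = rng.poisson(np.exp(3.0 + log_rr_per_unit * true_exposure))

observed = true_exposure + rng.normal(0.0, 1.0, n_days)    # classical measurement error
fit = sm.GLM(counts, sm.add_constant(observed),
             family=sm.families.Poisson()).fit()
print("true RR per unit exposure:", np.exp(log_rr_per_unit))
print("estimated RR per unit exposure:", np.exp(fit.params[1]))   # attenuated toward 1
```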

  18. Estimating the Autocorrelated Error Model with Trended Data: Further Results,

    Science.gov (United States)

    1979-11-01

    Perhaps the most serious deficiency of OLS in the presence of autocorrelation is not inefficiency but bias in its estimated standard errors--a bias ... k for all t has variance var(b) = σ²/(Tk²). This refutes Maeshiro’s (1976) conjecture that "an estimator utilizing relevant extraneous information

  19. Filtering Methods for Error Reduction in Spacecraft Attitude Estimation Using Quaternion Star Trackers

    Science.gov (United States)

    Calhoun, Philip C.; Sedlak, Joseph E.; Superfin, Emil

    2011-01-01

    Precision attitude determination for recent and planned space missions typically includes quaternion star trackers (ST) and a three-axis inertial reference unit (IRU). Sensor selection is based on estimates of knowledge accuracy attainable from a Kalman filter (KF), which provides the optimal solution for the case of linear dynamics with measurement and process errors characterized by random Gaussian noise with white spectrum. Non-Gaussian systematic errors in quaternion STs are often quite large and have an unpredictable time-varying nature, particularly when used in non-inertial pointing applications. Two filtering methods are proposed to reduce the attitude estimation error resulting from ST systematic errors, 1) extended Kalman filter (EKF) augmented with Markov states, 2) Unscented Kalman filter (UKF) with a periodic measurement model. Realistic assessments of the attitude estimation performance gains are demonstrated with both simulation and flight telemetry data from the Lunar Reconnaissance Orbiter.

  20. A bio-optical algorithm for the remote estimation of the chlorophyll-a concentration in case 2 waters

    International Nuclear Information System (INIS)

    Gitelson, Anatoly A; Gurlin, Daniela; Moses, Wesley J; Barrow, Tadd

    2009-01-01

    The objective of this work was to test the performance of a recently developed three-band model and its special case, a two-band model, for the remote estimation of the chlorophyll-a (chl-a) concentration in turbid productive case 2 waters. We specifically focused on (a) determining the ability of the models to estimate chl-a -3, typical for coastal and estuarine waters, and (b) assessing the potential of MODIS and MERIS to estimate chl-a concentrations in turbid productive waters, using red and near-infrared (NIR) bands. Reflectance spectra and water samples were collected in 89 stations over lakes in the United States with a wide variability in optical parameters (i.e. 2.1 -3; 0.5 -1). The three-band model, using wavebands around 670, 710 and 750 nm, explains more than 89% of the chl-a variation for chl-a ranging from 2 to 20 mg m-3 and can be used to estimate chlorophyll-a concentrations with a root mean square error (RMSE) of -3. MODIS (bands 13 and 15) and MERIS (bands 7, 9, and 10) red and NIR reflectances were simulated from the collected reflectance spectra and potential estimation errors were assessed. The MODIS two-band model is able to estimate chl-a concentrations with a RMSE of -3 for chl-a ranging from 2 to 50 mg m-3; however, the model loses its sensitivity for chl-a -3. Benefiting from the higher spectral resolution of the MERIS data, the MERIS three-band model accounts for 93% of chl-a variation and is able to estimate chl-a concentrations with a RMSE of -3 for chl-a ranging from 2 to 50 mg m-3, and a RMSE of -3 for chl-a ranging from 2 to 20 mg m-3. These findings imply that, provided that an atmospheric correction scheme specific to the red and NIR spectral region is available, the extensive database of MODIS and MERIS images could be used to quantitatively monitor chl-a in case 2 waters.
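
    The record refers to a three-band model using wavebands around 670, 710 and 750 nm and its two-band special case. A commonly used form of these indices is sketched below; the regression coefficients are hypothetical calibration placeholders, not the values fitted in the study.

```python
# Sketch of the band-ratio indices described above. The regression
# coefficients (a, b) are site-specific calibration values and are
# placeholders here, not the values fitted in the study.
def three_band_index(r670, r710, r750):
    """Three-band index: (1/R670 - 1/R710) * R750."""
    return (1.0 / r670 - 1.0 / r710) * r750

def two_band_index(r670, r710):
    """Two-band special case: R710 / R670."""
    return r710 / r670

r670, r710, r750 = 0.030, 0.045, 0.040     # example remote-sensing reflectances
a, b = 40.0, 2.0                            # hypothetical calibration coefficients
chl_a = a * three_band_index(r670, r710, r750) + b
print("estimated chl-a (mg m-3):", chl_a)
```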

  1. Error Estimation and Uncertainty Propagation in Computational Fluid Mechanics

    Science.gov (United States)

    Zhu, J. Z.; He, Guowei; Bushnell, Dennis M. (Technical Monitor)

    2002-01-01

    Numerical simulation has now become an integral part of engineering design process. Critical design decisions are routinely made based on the simulation results and conclusions. Verification and validation of the reliability of the numerical simulation is therefore vitally important in the engineering design processes. We propose to develop theories and methodologies that can automatically provide quantitative information about the reliability of the numerical simulation by estimating numerical approximation error, computational model induced errors and the uncertainties contained in the mathematical models so that the reliability of the numerical simulation can be verified and validated. We also propose to develop and implement methodologies and techniques that can control the error and uncertainty during the numerical simulation so that the reliability of the numerical simulation can be improved.

  2. Evaluation of human error estimation for nuclear power plants

    International Nuclear Information System (INIS)

    Haney, L.N.; Blackman, H.S.

    1987-01-01

    The dominant risk for severe accident occurrence in nuclear power plants (NPPs) is human error. The US Nuclear Regulatory Commission (NRC) sponsored an evaluation of Human Reliability Analysis (HRA) techniques for estimation of human error in NPPs. Twenty HRA techniques identified by a literature search were evaluated with criteria sets designed for that purpose and categorized. Data were collected at a commercial NPP with operators responding in walkthroughs of four severe accident scenarios and full scope simulator runs. Results suggest a need for refinement and validation of the techniques. 19 refs

  3. A residual-based a posteriori error estimator for single-phase Darcy flow in fractured porous media

    KAUST Repository

    Chen, Huangxin; Sun, Shuyu

    2016-01-01

    for the problem with non-intersecting fractures. The reliability and efficiency of the a posteriori error estimator are established for the error measured in an energy norm. Numerical results verifying the robustness of the proposed a posteriori error estimator

  4. Bias Errors due to Leakage Effects When Estimating Frequency Response Functions

    Directory of Open Access Journals (Sweden)

    Andreas Josefsson

    2012-01-01

    Full Text Available Frequency response functions are often utilized to characterize a system's dynamic response. For a wide range of engineering applications, it is desirable to determine frequency response functions for a system under stochastic excitation. In practice, the measurement data is contaminated by noise and some form of averaging is needed in order to obtain a consistent estimator. With Welch's method, the discrete Fourier transform is used and the data is segmented into smaller blocks so that averaging can be performed when estimating the spectrum. However, this segmentation introduces leakage effects. As a result, the estimated frequency response function suffers from both systematic (bias) and random errors due to leakage. In this paper the bias error in the H1 and H2 estimates is studied and a new method is proposed to derive an approximate expression for the relative bias error at the resonance frequency with different window functions. The method is based on using a sum of real exponentials to describe the window's deterministic autocorrelation function. Simple expressions are derived for a rectangular window and a Hanning window. The theoretical expressions are verified with numerical simulations and a very good agreement is found between the results from the proposed bias expressions and the empirical results.
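
    A minimal sketch of the H1 estimate with Welch-style averaging is given below to show where segmentation, windowing and hence leakage enter; the synthetic system, noise level and block length are illustrative and unrelated to the paper's derivations.

```python
# Sketch of an H1 frequency-response estimate with Welch-style averaging.
# System and noise are synthetic; scipy handles segmenting and windowing.
import numpy as np
from scipy import signal

fs = 1024.0
rng = np.random.default_rng(1)
x = rng.standard_normal(2**16)                        # stochastic excitation
# example system: a lightly damped resonance around 100 Hz
b, a = signal.iirpeak(w0=100.0, Q=30.0, fs=fs)
y = signal.lfilter(b, a, x) + 0.05 * rng.standard_normal(x.size)

nperseg = 2048                                        # block length for averaging
f, Pxx = signal.welch(x, fs=fs, window="hann", nperseg=nperseg)
_, Pxy = signal.csd(x, y, fs=fs, window="hann", nperseg=nperseg)
H1 = Pxy / Pxx                                        # H1 = Sxy / Sxx
print("peak of |H1| near (Hz):", f[np.argmax(np.abs(H1))])
```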

  5. Variation of haemoglobin extinction coefficients can cause errors in the determination of haemoglobin concentration measured by near-infrared spectroscopy

    Energy Technology Data Exchange (ETDEWEB)

    Kim, J G; Liu, H [Joint Graduate Program in Biomedical Engineering, University of Texas at Arlington/University of Texas Southwestern Medical Center at Dallas, Arlington, TX 76019 (United States)

    2007-10-21

    Near-infrared spectroscopy or imaging has been extensively applied to various biomedical applications since it can detect the concentrations of oxyhaemoglobin (HbO2), deoxyhaemoglobin (Hb) and total haemoglobin (Hbtotal) from deep tissues. To quantify concentrations of these haemoglobin derivatives, the extinction coefficient values of HbO2 and Hb have to be employed. However, it was not well recognized among researchers that small differences in extinction coefficients could cause significant errors in quantifying the concentrations of haemoglobin derivatives. In this study, we derived equations to estimate errors of haemoglobin derivatives caused by the variation of haemoglobin extinction coefficients. To prove our error analysis, we performed experiments using liquid-tissue phantoms containing 1% Intralipid in a phosphate-buffered saline solution. The gas intervention of pure oxygen was given in the solution to examine the oxygenation changes in the phantom, and 3 mL of human blood was added twice to show the changes in [Hbtotal]. The error calculation has shown that even a small variation (0.01 cm-1 mM-1) in extinction coefficients can produce appreciable relative errors in quantification of Δ[HbO2], Δ[Hb] and Δ[Hbtotal]. We have also observed that the error of Δ[Hbtotal] is not always larger than those of Δ[HbO2] and Δ[Hb]. This study concludes that we need to be aware of any variation in haemoglobin extinction coefficients, which could result from changes in temperature, and to utilize corresponding animal's haemoglobin extinction coefficients for the animal experiments, in order to obtain more accurate values of Δ[HbO2], Δ[Hb] and Δ[Hbtotal] from in vivo tissue measurements.

  6. Error estimation of deformable image registration of pulmonary CT scans using convolutional neural networks

    NARCIS (Netherlands)

    Eppenhof, K.A.J.; Pluim, J.P.W.

    2018-01-01

    Error estimation in nonlinear medical image registration is a nontrivial problem that is important for validation of registration methods. We propose a supervised method for estimation of registration errors in nonlinear registration of three-dimensional (3-D) images. The method is based on a 3-D

  7. Model methodology for estimating pesticide concentration extremes based on sparse monitoring data

    Science.gov (United States)

    Vecchia, Aldo V.

    2018-03-22

    This report describes a new methodology for using sparse (weekly or less frequent observations) and potentially highly censored pesticide monitoring data to simulate daily pesticide concentrations and associated quantities used for acute and chronic exposure assessments, such as the annual maximum daily concentration. The new methodology is based on a statistical model that expresses log-transformed daily pesticide concentration in terms of a seasonal wave, flow-related variability, long-term trend, and serially correlated errors. Methods are described for estimating the model parameters, generating conditional simulations of daily pesticide concentration given sparse (weekly or less frequent) and potentially highly censored observations, and estimating concentration extremes based on the conditional simulations. The model can be applied to datasets with as few as 3 years of record, as few as 30 total observations, and as few as 10 uncensored observations. The model was applied to atrazine, carbaryl, chlorpyrifos, and fipronil data for U.S. Geological Survey pesticide sampling sites with sufficient data for applying the model. A total of 112 sites were analyzed for atrazine, 38 for carbaryl, 34 for chlorpyrifos, and 33 for fipronil. The results are summarized in this report, and R functions described in this report and provided in an accompanying model archive can be used to fit the model parameters and generate conditional simulations of daily concentrations for use in investigations involving pesticide exposure risk and uncertainty.
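
    The model structure named above (seasonal wave, long-term trend and serially correlated errors) can be sketched with a simple simulation of daily log-concentrations; the parameter values below are illustrative only, and the flow-related variability term is omitted.

```python
# Sketch of the model structure described above, used here only to simulate
# daily log-concentrations. All parameter values are illustrative, not
# fitted to any of the sites analyzed in the report.
import numpy as np

rng = np.random.default_rng(2)
n_days = 3 * 365
t = np.arange(n_days)

seasonal = 0.8 * np.sin(2.0 * np.pi * (t / 365.25) + 1.0)    # seasonal wave
trend = -0.0002 * t                                           # slow long-term trend
rho, sigma = 0.95, 0.25                                       # AR(1) error parameters
errors = np.zeros(n_days)
for i in range(1, n_days):
    errors[i] = rho * errors[i - 1] + rng.normal(0.0, sigma)

log_conc = -2.0 + seasonal + trend + errors                   # log10 concentration
annual_max = 10.0 ** log_conc[:365].max()
print("simulated year-1 annual maximum daily concentration:", annual_max)
```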

  8. Enhancing adaptive sparse grid approximations and improving refinement strategies using adjoint-based a posteriori error estimates

    Science.gov (United States)

    Jakeman, J. D.; Wildey, T.

    2015-01-01

    In this paper we present an algorithm for adaptive sparse grid approximations of quantities of interest computed from discretized partial differential equations. We use adjoint-based a posteriori error estimates of the physical discretization error and the interpolation error in the sparse grid to enhance the sparse grid approximation and to drive adaptivity of the sparse grid. Utilizing these error estimates provides significantly more accurate functional values for random samples of the sparse grid approximation. We also demonstrate that alternative refinement strategies based upon a posteriori error estimates can lead to further increases in accuracy in the approximation over traditional hierarchical surplus based strategies. Throughout this paper we also provide and test a framework for balancing the physical discretization error with the stochastic interpolation error of the enhanced sparse grid approximation.

  9. Enhancing adaptive sparse grid approximations and improving refinement strategies using adjoint-based a posteriori error estimates

    International Nuclear Information System (INIS)

    Jakeman, J.D.; Wildey, T.

    2015-01-01

    In this paper we present an algorithm for adaptive sparse grid approximations of quantities of interest computed from discretized partial differential equations. We use adjoint-based a posteriori error estimates of the physical discretization error and the interpolation error in the sparse grid to enhance the sparse grid approximation and to drive adaptivity of the sparse grid. Utilizing these error estimates provides significantly more accurate functional values for random samples of the sparse grid approximation. We also demonstrate that alternative refinement strategies based upon a posteriori error estimates can lead to further increases in accuracy in the approximation over traditional hierarchical surplus based strategies. Throughout this paper we also provide and test a framework for balancing the physical discretization error with the stochastic interpolation error of the enhanced sparse grid approximation

  10. B-spline goal-oriented error estimators for geometrically nonlinear rods

    Science.gov (United States)

    2011-04-01

    respectively, for the output functionals q2–q4 (linear and nonlinear with the trigonometric functions sine and cosine) in all the tests considered ... of the errors resulting from the linear, quadratic and nonlinear (with trigonometric functions sine and cosine) outputs and for p = 1, 2.

  11. Error estimation for goal-oriented spatial adaptivity for the SN equations on triangular meshes

    International Nuclear Information System (INIS)

    Lathouwers, D.

    2011-01-01

    In this paper we investigate different error estimation procedures for use within a goal-oriented adaptive algorithm for the SN equations on unstructured meshes. The method is based on a dual-weighted residual approach where an appropriate adjoint problem is formulated and solved in order to obtain the importance of residual errors in the forward problem on the specific goal of interest. The forward residuals and the adjoint function are combined to obtain both economical finite element meshes tailored to the solution of the target functional and error estimates. Various approximations made to make the calculation of the adjoint angular flux more economically attractive are evaluated by comparing the performance of the resulting adaptive algorithm and the quality of the error estimators when applied to two shielding-type test problems. (author)

  12. North error estimation based on solar elevation errors in the third step of sky-polarimetric Viking navigation.

    Science.gov (United States)

    Száz, Dénes; Farkas, Alexandra; Barta, András; Kretzer, Balázs; Egri, Ádám; Horváth, Gábor

    2016-07-01

    The theory of sky-polarimetric Viking navigation has been widely accepted for decades without any information about the accuracy of this method. Previously, we have measured the accuracy of the first and second steps of this navigation method in psychophysical laboratory and planetarium experiments. Now, we have tested the accuracy of the third step in a planetarium experiment, assuming that the first and second steps are errorless. Using the fists of their outstretched arms, 10 test persons had to estimate the elevation angles (measured in numbers of fists and fingers) of black dots (representing the position of the occluded Sun) projected onto the planetarium dome. The test persons performed 2400 elevation estimations, 48% of which were more accurate than ±1°. We selected three test persons with the (i) largest and (ii) smallest elevation errors and (iii) highest standard deviation of the elevation error. From the errors of these three persons, we calculated their error function, from which the North errors (the angles with which they deviated from the geographical North) were determined for summer solstice and spring equinox, two specific dates of the Viking sailing period. The range of possible North errors ΔωN was the lowest and highest at low and high solar elevations, respectively. At high elevations, the maximal ΔωN was 35.6° and 73.7° at summer solstice and 23.8° and 43.9° at spring equinox for the best and worst test person (navigator), respectively. Thus, the best navigator was twice as good as the worst one. At solstice and equinox, high elevations occur the most frequently during the day, thus high North errors could occur more frequently than expected before. According to our findings, the ideal periods for sky-polarimetric Viking navigation are immediately after sunrise and before sunset, because the North errors are the lowest at low solar elevations.

  13. Effect of the Absorbed Photosynthetically Active Radiation Estimation Error on Net Primary Production Estimation - A Study with MODIS FPAR and TOMS Ultraviolet Reflective Products

    International Nuclear Information System (INIS)

    Kobayashi, H.; Matsunaga, T.; Hoyano, A.

    2002-01-01

    Absorbed photosynthetically active radiation (APAR), which is defined as the downward solar radiation in 400-700 nm absorbed by vegetation, is one of the significant variables for Net Primary Production (NPP) estimation from satellite data. Toward the reduction of the uncertainties in the global NPP estimation, it is necessary to clarify the APAR accuracy. In this paper, we first propose an improved PAR estimation method based on Eck and Dye's method, in which the ultraviolet (UV) reflectivity data derived from the Total Ozone Mapping Spectrometer (TOMS) at the top of the atmosphere are used for cloud transmittance estimation. The proposed method considers the variable effects of land surface UV reflectivity on the satellite-observed UV data. Monthly mean PAR comparisons between satellite-derived and ground-based data at various meteorological stations in Japan indicated that the improved PAR estimation method reduced the bias errors in the summer season. Assuming the relative error of the fraction of PAR (FPAR) derived from the Moderate Resolution Imaging Spectroradiometer (MODIS) to be 10%, we estimated APAR relative errors to be 10-15%. Annual NPP is calculated using APAR derived from MODIS FPAR and the improved PAR estimation method. It is shown that random and bias errors of annual NPP in a 1 km resolution pixel are less than 4% and 6%, respectively. The APAR bias errors due to the PAR bias errors also affect the estimated total NPP. We estimated the most probable total annual NPP in Japan by subtracting the bias PAR errors; it amounts to about 248 MtC/yr. Using the improved PAR estimation method and Eck and Dye's method, the total annual NPP differs from this most probable value by 4% and 9%, respectively. The previous intercomparison study among fifteen NPP models 4) showed that global NPP estimations among NPP models are 44.4-66.3 GtC/yr (coefficient of variation = 14%). Hence we conclude that the NPP estimation uncertainty due to APAR estimation error is small

  14. Goal-oriented error estimation for Cahn-Hilliard models of binary phase transition

    KAUST Repository

    van der Zee, Kristoffer G.

    2010-10-27

    A posteriori estimates of errors in quantities of interest are developed for the nonlinear system of evolution equations embodied in the Cahn-Hilliard model of binary phase transition. These involve the analysis of wellposedness of dual backward-in-time problems and the calculation of residuals. Mixed finite element approximations are developed and used to deliver numerical solutions of representative problems in one- and two-dimensional domains. Estimated errors are shown to be quite accurate in these numerical examples. © 2010 Wiley Periodicals, Inc.

  15. Error estimates for discretized quantum stochastic differential inclusions

    International Nuclear Information System (INIS)

    Ayoola, E.O.

    2001-09-01

    This paper is concerned with the error estimates involved in the solution of a discrete approximation of a quantum stochastic differential inclusion (QSDI). Our main results rely on certain properties of the averaged modulus of continuity for multivalued sesquilinear forms associated with the QSDI. We obtained results concerning the estimates of the Hausdorff distance between the set of solutions of the QSDI and the set of solutions of its discrete approximation. This extends the results of Dontchev and Farkhi concerning classical differential inclusions to the present noncommutative quantum setting involving inclusions in a certain locally convex space. (author)

  16. The determination of carbon dioxide concentration using atmospheric pressure ionization mass spectrometry/isotopic dilution and errors in concentration measurements caused by dryers.

    Science.gov (United States)

    DeLacy, Brendan G; Bandy, Alan R

    2008-01-01

    An atmospheric pressure ionization mass spectrometry/isotopically labeled standard (APIMS/ILS) method has been developed for the determination of carbon dioxide (CO(2)) concentration. Descriptions of the instrumental components, the ionization chemistry, and the statistics associated with the analytical method are provided. This method represents an alternative to the nondispersive infrared (NDIR) technique, which is currently used in the atmospheric community to determine atmospheric CO(2) concentrations. The APIMS/ILS and NDIR methods exhibit a decreased sensitivity for CO(2) in the presence of water vapor. Therefore, dryers such as a Nafion dryer are used to remove water before detection. The APIMS/ILS method measures mixing ratios and demonstrates linearity and range in the presence or absence of a dryer. The NDIR technique, on the other hand, measures molar concentrations. The second half of this paper describes errors in molar concentration measurements that are caused by drying. An equation describing the errors was derived from the ideal gas law, the conservation of mass, and Dalton's Law. The purpose of this derivation was to quantify errors in the NDIR technique that are caused by drying. Laboratory experiments were conducted to verify the errors created solely by the dryer in CO(2) concentration measurements post-dryer. The laboratory experiments verified the theoretically predicted errors in the derived equations. There are numerous references in the literature that describe the use of a dryer in conjunction with the NDIR technique. However, these references do not address the errors that are caused by drying.
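
    The record does not reproduce the derived error equation. For orientation only, the standard dilution relation below shows how removing water vapour at the same temperature and pressure raises the measured CO(2) molar concentration relative to its ambient value; it is not presented as the authors' result.

```latex
% Standard dilution relation (shown for illustration; the record does not
% reproduce the authors' derived error equation). At the same temperature
% and pressure, removing water vapour of mole fraction x_{H2O} raises the
% measured CO2 molar concentration relative to its ambient value:
\[
  c_{\mathrm{CO_2,\,dry}} \;=\; \frac{c_{\mathrm{CO_2,\,ambient}}}{1 - x_{\mathrm{H_2O}}},
  \qquad
  \frac{\Delta c}{c_{\mathrm{CO_2,\,ambient}}} \;=\; \frac{x_{\mathrm{H_2O}}}{1 - x_{\mathrm{H_2O}}}.
\]
```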

  17. Influence of the statistical distribution of bioassay measurement errors on the intake estimation

    International Nuclear Information System (INIS)

    Lee, T. Y; Kim, J. K

    2006-01-01

    The purpose of this study is to provide the guidance necessary for selecting error distributions, by analyzing the influence of the statistical distribution of bioassay measurement errors on the intake estimation. For this purpose, intakes were estimated using the maximum likelihood method for cases in which the error distributions are normal and lognormal, and the estimated intakes from the two distributions were compared. According to the results of this study, when the measurement results for lung retention are somewhat greater than the limit of detection, the distribution type appears to have negligible influence on the results. For measurement results for the daily excretion rate, however, the results obtained assuming a lognormal distribution were 10% higher than those obtained assuming a normal distribution. In view of these facts, where the uncertainty is governed by counting statistics, the distribution type is considered to have no influence on the intake estimation, whereas where other uncertainty components are predominant, it is clearly desirable to estimate the intake assuming a lognormal distribution

  18. On the error estimation and T-stability of the Mann iteration

    NARCIS (Netherlands)

    Maruster, Laura; Maruster, St.

    2015-01-01

    A formula of error estimation of Mann iteration is given in the case of strongly demicontractive mappings. Based on this estimation, a condition of strong convergence is obtained for the same class of mappings. T-stability for a particular case of strongly demicontractive mappings is proved. Some

  19. A posteriori error estimates for axisymmetric and nonlinear problems

    Czech Academy of Sciences Publication Activity Database

    Křížek, Michal; Němec, J.; Vejchodský, Tomáš

    2001-01-01

    Roč. 15, - (2001), s. 219-236 ISSN 1019-7168 R&D Projects: GA ČR GA201/01/1200; GA MŠk ME 148 Keywords : weighted Sobolev spaces * a posteriori error estimates * finite elements Subject RIV: BA - General Mathematics Impact factor: 0.886, year: 2001

  20. Estimating microalgae Synechococcus nidulans daily biomass concentration using neuro-fuzzy network

    Directory of Open Access Journals (Sweden)

    Vitor Badiale Furlong

    2013-02-01

    Full Text Available In this study, a neuro-fuzzy estimator was developed for the estimation of biomass concentration of the microalgae Synechococcus nidulans from initial batch concentrations, aiming to predict daily productivity. Nine replica experiments were performed. The growth was monitored daily through the optical density of the culture medium and kept constant up to the end of the exponential phase. The network training followed a full 3³ factorial design, in which the factors were the number of days in the entry vector (3, 5 and 7 days), the number of clusters (10, 30 and 50 clusters) and the internal weight softening parameter, Sigma (0.30, 0.45 and 0.60). These factors were confronted with the sum of the quadratic error in the validations. The validations had 24 (A) and 18 (B) days of culture growth. The validations demonstrated that in long-term experiments (Validation A) the use of a few clusters and a high Sigma is necessary. However, in short-term experiments (Validation B), Sigma did not influence the result. The optimum point occurred with 3 days in the entry vector, 10 clusters and a Sigma of 0.60, and the mean determination coefficient was 0.95. The neuro-fuzzy estimator proved to be a credible alternative for predicting microalgae growth.

  1. Estimating the approximation error when fixing unessential factors in global sensitivity analysis

    Energy Technology Data Exchange (ETDEWEB)

    Sobol' , I.M. [Institute for Mathematical Modelling of the Russian Academy of Sciences, Moscow (Russian Federation); Tarantola, S. [Joint Research Centre of the European Commission, TP361, Institute of the Protection and Security of the Citizen, Via E. Fermi 1, 21020 Ispra (Italy)]. E-mail: stefano.tarantola@jrc.it; Gatelli, D. [Joint Research Centre of the European Commission, TP361, Institute of the Protection and Security of the Citizen, Via E. Fermi 1, 21020 Ispra (Italy)]. E-mail: debora.gatelli@jrc.it; Kucherenko, S.S. [Imperial College London (United Kingdom); Mauntz, W. [Department of Biochemical and Chemical Engineering, Dortmund University (Germany)

    2007-07-15

    One of the major settings of global sensitivity analysis is that of fixing non-influential factors, in order to reduce the dimensionality of a model. However, this is often done without knowing the magnitude of the approximation error being produced. This paper presents a new theorem for the estimation of the average approximation error generated when fixing a group of non-influential factors. A simple function where analytical solutions are available is used to illustrate the theorem. The numerical estimation of small sensitivity indices is discussed.

  2. Computable Error Estimates for Finite Element Approximations of Elliptic Partial Differential Equations with Rough Stochastic Data

    KAUST Repository

    Hall, Eric Joseph

    2016-12-08

    We derive computable error estimates for finite element approximations of linear elliptic partial differential equations with rough stochastic coefficients. In this setting, the exact solutions contain high frequency content that standard a posteriori error estimates fail to capture. We propose goal-oriented estimates, based on local error indicators, for the pathwise Galerkin and expected quadrature errors committed in standard, continuous, piecewise linear finite element approximations. Derived using easily validated assumptions, these novel estimates can be computed at a relatively low cost and have applications to subsurface flow problems in geophysics where the conductivities are assumed to have lognormal distributions with low regularity. Our theory is supported by numerical experiments on test problems in one and two dimensions.

  3. To Error Problem Concerning Measuring Concentration of Carbon Oxide by Thermo-Chemical Sensors

    Directory of Open Access Journals (Sweden)

    V. I. Nazarov

    2007-01-01

    Full Text Available The paper presents additional errors in measuring the concentration of carbon oxide by thermo-chemical sensors. A number of analytical expressions for calculating these errors and for correcting deviations of environmental factors from admissible values have been obtained in the paper.

  4. Estimation of Mechanical Signals in Induction Motors using the Recursive Prediction Error Method

    DEFF Research Database (Denmark)

    Børsting, H.; Knudsen, Morten; Rasmussen, Henrik

    1993-01-01

    Sensor feedback of mechanical quantities for control applications in induction motors is troublesome and relatively expensive. In this paper a recursive prediction error (RPE) method has successfully been used to estimate the angular rotor speed ...

  5. Error Estimates for the Approximation of the Effective Hamiltonian

    International Nuclear Information System (INIS)

    Camilli, Fabio; Capuzzo Dolcetta, Italo; Gomes, Diogo A.

    2008-01-01

    We study approximation schemes for the cell problem arising in homogenization of Hamilton-Jacobi equations. We prove several error estimates concerning the rate of convergence of the approximation scheme to the effective Hamiltonian, both in the optimal control setting and in the calculus of variations setting

  6. A Posteriori Error Estimates Including Algebraic Error and Stopping Criteria for Iterative Solvers

    Czech Academy of Sciences Publication Activity Database

    Jiránek, P.; Strakoš, Zdeněk; Vohralík, M.

    2010-01-01

    Roč. 32, č. 3 (2010), s. 1567-1590 ISSN 1064-8275 R&D Projects: GA AV ČR IAA100300802 Grant - others:GA ČR(CZ) GP201/09/P464 Institutional research plan: CEZ:AV0Z10300504 Keywords : second-order elliptic partial differential equation * finite volume method * a posteriori error estimates * iterative methods for linear algebraic systems * conjugate gradient method * stopping criteria Subject RIV: BA - General Mathematics Impact factor: 3.016, year: 2010

  7. Reducing System Artifacts in Hyperspectral Image Data Analysis with the Use of Estimates of the Error Covariance in the Data; TOPICAL

    International Nuclear Information System (INIS)

    HAALAND, DAVID M.; VAN BENTHEM, MARK H.; WEHLBURG, CHRISTINE M.; KOEHLER, IV FREDERICK W.

    2002-01-01

    Hyperspectral Fourier transform infrared images have been obtained from a neoprene sample aged in air at elevated temperatures. The massive amount of spectra available from this heterogeneous sample provides the opportunity to perform quantitative analysis of the spectral data without the need for calibration standards. Multivariate curve resolution (MCR) methods with non-negativity constraints applied to the iterative alternating least squares analysis of the spectral data have been shown to achieve the goal of quantitative image analysis without the use of standards. However, the pure-component spectra and the relative concentration maps were heavily contaminated by the presence of system artifacts in the spectral data. We have demonstrated that the detrimental effects of these artifacts can be minimized by adding an estimate of the error covariance structure of the spectral image data to the MCR algorithm. The estimate is added by augmenting the concentration and pure-component spectra matrices with scores and eigenvectors obtained from the mean-centered repeat image differences of the sample. The implementation of augmentation is accomplished by employing efficient equality constraints on the MCR analysis. Augmentation with the scores from the repeat images is found to primarily improve the pure-component spectral estimates, while augmentation with the corresponding eigenvectors primarily improves the concentration maps. Augmentation with both scores and eigenvectors yielded the best result by generating less noisy pure-component spectral estimates and relative concentration maps that were largely free from a striping artifact that is present due to system errors in the FT-IR images. The MCR methods presented are general and can also be applied productively to non-image spectral data.
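
    A minimal MCR-ALS sketch with non-negativity constraints is given below to illustrate the alternating least-squares step referred to above; the error-covariance augmentation described in the record is not implemented, and the synthetic two-component data are purely illustrative.

```python
# Minimal MCR-ALS sketch with non-negativity constraints (via scipy's NNLS),
# illustrating the alternating least-squares step only; the augmentation with
# error-covariance scores/eigenvectors described above is not included.
import numpy as np
from scipy.optimize import nnls

def mcr_als(D, n_components, n_iter=50, seed=0):
    """Factor D (pixels x wavelengths) as C @ S.T with C, S >= 0."""
    rng = np.random.default_rng(seed)
    S = rng.random((D.shape[1], n_components))          # initial spectra guess
    for _ in range(n_iter):
        C = np.array([nnls(S, d)[0] for d in D])        # concentrations, row-wise
        S = np.array([nnls(C, col)[0] for col in D.T])  # spectra, column-wise
    return C, S

# tiny synthetic example: two overlapping components
wl = np.linspace(0.0, 1.0, 60)
S_true = np.vstack([np.exp(-((wl - 0.3) / 0.08) ** 2),
                    np.exp(-((wl - 0.6) / 0.10) ** 2)]).T
C_true = np.random.default_rng(1).random((40, 2))
D = C_true @ S_true.T + 0.01 * np.random.default_rng(2).standard_normal((40, 60))
C_est, S_est = mcr_als(D, n_components=2)
print("reconstruction RMS error:", np.sqrt(np.mean((D - C_est @ S_est.T) ** 2)))
```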

  8. Impact of transport model errors on the global and regional methane emissions estimated by inverse modelling

    Directory of Open Access Journals (Sweden)

    R. Locatelli

    2013-10-01

    Full Text Available A modelling experiment has been conceived to assess the impact of transport model errors on methane emissions estimated in an atmospheric inversion system. Synthetic methane observations, obtained from 10 different model outputs from the international TransCom-CH4 model inter-comparison exercise, are combined with a prior scenario of methane emissions and sinks, and integrated into the three-component PYVAR-LMDZ-SACS (PYthon VARiational - Laboratoire de Météorologie Dynamique model with Zooming capability - Simplified Atmospheric Chemistry System) inversion system to produce 10 different methane emission estimates at the global scale for the year 2005. The same methane sinks, emissions and initial conditions have been applied to produce the 10 synthetic observation datasets. The same inversion set-up (statistical errors, prior emissions, inverse procedure) is then applied to derive flux estimates by inverse modelling. Consequently, only differences in the modelling of atmospheric transport may cause differences in the estimated fluxes. In our framework, we show that transport model errors lead to a discrepancy of 27 Tg yr−1 at the global scale, representing 5% of total methane emissions. At continental and annual scales, transport model errors are proportionally larger than at the global scale, with errors ranging from 36 Tg yr−1 in North America to 7 Tg yr−1 in Boreal Eurasia (from 23 to 48%, respectively). At the model grid-scale, the spread of inverse estimates can reach 150% of the prior flux. Therefore, transport model errors contribute significantly to overall uncertainties in emission estimates by inverse modelling, especially when small spatial scales are examined. Sensitivity tests have been carried out to estimate the impact of the measurement network and the advantage of higher horizontal resolution in transport models. The large differences found between methane flux estimates inferred in these different configurations highly

  9. Missing texture reconstruction method based on error reduction algorithm using Fourier transform magnitude estimation scheme.

    Science.gov (United States)

    Ogawa, Takahiro; Haseyama, Miki

    2013-03-01

    A missing texture reconstruction method based on an error reduction (ER) algorithm, including a novel estimation scheme of Fourier transform magnitudes, is presented in this brief. In our method, the Fourier transform magnitude is estimated for a target patch including missing areas, and the missing intensities are estimated by retrieving its phase based on the ER algorithm. Specifically, by monitoring errors converged in the ER algorithm, known patches whose Fourier transform magnitudes are similar to that of the target patch are selected from the target image. In the second approach, the Fourier transform magnitude of the target patch is estimated from those of the selected known patches and their corresponding errors. Consequently, by using the ER algorithm, we can estimate both the Fourier transform magnitudes and phases to reconstruct the missing areas.
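
    The ER iteration itself can be sketched as alternating between enforcing a target Fourier magnitude and re-imposing the known pixel intensities. The code below assumes the target magnitude is already available (here the true one, for a toy texture); the patch-selection and magnitude-estimation scheme that is the paper's contribution is not reproduced.

```python
# Sketch of the error-reduction (ER) iteration: alternate between enforcing
# a target Fourier magnitude and re-imposing the known pixels. The target
# magnitude is assumed to be given (here, the true one of a toy texture).
import numpy as np

def er_reconstruct(patch, known_mask, target_mag, n_iter=200):
    """Fill the missing pixels of `patch` given a target Fourier magnitude."""
    est = np.where(known_mask, patch, patch[known_mask].mean())
    for _ in range(n_iter):
        spectrum = np.fft.fft2(est)
        phase = np.angle(spectrum)
        est = np.real(np.fft.ifft2(target_mag * np.exp(1j * phase)))
        est[known_mask] = patch[known_mask]          # keep the known intensities
    return est

# toy example: a sinusoidal texture with a missing block
x, y = np.meshgrid(np.arange(32), np.arange(32))
texture = np.sin(2 * np.pi * x / 8.0) * np.cos(2 * np.pi * y / 16.0)
mask = np.ones_like(texture, dtype=bool)
mask[10:20, 10:20] = False                            # missing area
target_mag = np.abs(np.fft.fft2(texture))             # here: the true magnitude
recon = er_reconstruct(texture, mask, target_mag)
print("RMS error in missing area:", np.sqrt(np.mean((recon - texture)[~mask] ** 2)))
```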

  10. Sampling of systematic errors to estimate likelihood weights in nuclear data uncertainty propagation

    International Nuclear Information System (INIS)

    Helgesson, P.; Sjöstrand, H.; Koning, A.J.; Rydén, J.; Rochman, D.; Alhassan, E.; Pomp, S.

    2016-01-01

    In methodologies for nuclear data (ND) uncertainty assessment and propagation based on random sampling, likelihood weights can be used to infer experimental information into the distributions for the ND. As the included number of correlated experimental points grows large, the computational time for the matrix inversion involved in obtaining the likelihood can become a practical problem. There are also other problems related to the conventional computation of the likelihood, e.g., the assumption that all experimental uncertainties are Gaussian. In this study, a way to estimate the likelihood which avoids matrix inversion is investigated; instead, the experimental correlations are included by sampling of systematic errors. It is shown that the model underlying the sampling methodology (using univariate normal distributions for random and systematic errors) implies a multivariate Gaussian for the experimental points (i.e., the conventional model). It is also shown that the likelihood estimates obtained through sampling of systematic errors approach the likelihood obtained with matrix inversion as the sample size for the systematic errors grows large. In studied practical cases, it is seen that the estimates for the likelihood weights converge impractically slowly with the sample size, compared to matrix inversion. The computational time is estimated to be greater than for matrix inversion in cases with more experimental points, too. Hence, the sampling of systematic errors has little potential to compete with matrix inversion in cases where the latter is applicable. Nevertheless, the underlying model and the likelihood estimates can be easier to intuitively interpret than the conventional model and the likelihood function involving the inverted covariance matrix. Therefore, this work can both have pedagogical value and be used to help motivate the conventional assumption of a multivariate Gaussian for experimental data. The sampling of systematic errors could also
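    The equivalence described above can be illustrated with a small toy example: the conventional likelihood computed from the full covariance matrix (one fully correlated systematic component plus independent random errors) is compared with a Monte Carlo average over sampled systematic shifts. All numerical values below are invented for illustration.

```python
import numpy as np
from scipy.stats import multivariate_normal, norm

rng = np.random.default_rng(2)

# Toy experiment: 5 measured points, one shared systematic error plus random errors
mu = np.array([1.0, 1.2, 0.9, 1.1, 1.0])          # model prediction (assumed)
y = np.array([1.05, 1.30, 1.00, 1.20, 1.10])      # "experimental" points (assumed)
sig_rand = np.full(5, 0.08)                        # independent (random) uncertainties
sig_sys = 0.10                                     # fully correlated (systematic) uncertainty

# Conventional likelihood: multivariate Gaussian with full covariance (matrix inversion)
cov = np.diag(sig_rand**2) + sig_sys**2 * np.ones((5, 5))
L_exact = multivariate_normal.pdf(y, mean=mu, cov=cov)

# Sampling approach: draw the systematic shift, average the conditional likelihoods
n_samples = 200_000
shifts = rng.normal(0.0, sig_sys, size=n_samples)
cond = np.prod(norm.pdf(y[None, :], loc=mu[None, :] + shifts[:, None], scale=sig_rand), axis=1)
L_sampled = cond.mean()

print(L_exact, L_sampled)   # the two estimates agree as n_samples grows large
```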

  11. A TOA-AOA-Based NLOS Error Mitigation Method for Location Estimation

    Directory of Open Access Journals (Sweden)

    Tianshuang Qiu

    2007-12-01

    Full Text Available This paper proposes a geometric method to locate a mobile station (MS) in a mobile cellular network when both the range and angle measurements are corrupted by non-line-of-sight (NLOS) errors. The MS location is restricted to an enclosed region by geometric constraints from the temporal-spatial characteristics of the radio propagation channel. A closed-form equation of the MS position, time of arrival (TOA), angle of arrival (AOA), and angle spread is provided. The solution space of the equation is very large because the angle spreads are random variables in nature. A constrained objective function is constructed to further limit the MS position. A Lagrange multiplier-based solution and a numerical solution are proposed to resolve the MS position. The estimation quality of the estimator in terms of "biased" or "unbiased" is discussed. The scale factors, which may be used to evaluate NLOS propagation level, can be estimated by the proposed method. AOA seen at base stations may be corrected to some degree. The performance comparisons among the proposed method and other hybrid location methods are investigated on different NLOS error models and with two scenarios of cell layout. It is found that the proposed method can deal with NLOS error effectively, and it is attractive for location estimation in cellular networks.

  12. Population size estimation in Yellowstone wolves with error-prone noninvasive microsatellite genotypes.

    Science.gov (United States)

    Creel, Scott; Spong, Goran; Sands, Jennifer L; Rotella, Jay; Zeigle, Janet; Joe, Lawrence; Murphy, Kerry M; Smith, Douglas

    2003-07-01

    Determining population sizes can be difficult, but is essential for conservation. By counting distinct microsatellite genotypes, DNA from noninvasive samples (hair, faeces) allows estimation of population size. Problems arise because genotypes from noninvasive samples are error-prone, but genotyping errors can be reduced by multiple polymerase chain reaction (PCR). For faecal genotypes from wolves in Yellowstone National Park, error rates varied substantially among samples, often above the 'worst-case threshold' suggested by simulation. Consequently, a substantial proportion of multilocus genotypes held one or more errors, despite multiple PCR. These genotyping errors created several genotypes per individual and caused overestimation (up to 5.5-fold) of population size. We propose a 'matching approach' to eliminate this overestimation bias.

  13. A review of some a posteriori error estimates for adaptive finite element methods

    Czech Academy of Sciences Publication Activity Database

    Segeth, Karel

    2010-01-01

    Vol. 80, No. 8 (2010), pp. 1589-1600 ISSN 0378-4754. [European Seminar on Coupled Problems. Jetřichovice, 08.06.2008-13.06.2008] R&D Projects: GA AV ČR(CZ) IAA100190803 Institutional research plan: CEZ:AV0Z10190503 Keywords: hp-adaptive finite element method * a posteriori error estimators * computational error estimates Subject RIV: BA - General Mathematics Impact factor: 0.812, year: 2010 http://www.sciencedirect.com/science/article/pii/S0378475408004230

  14. Estimating Concentrations of Road-Salt Constituents in Highway-Runoff from Measurements of Specific Conductance

    Science.gov (United States)

    Granato, Gregory E.; Smith, Kirk P.

    1999-01-01

    Discrete or composite samples of highway runoff may not adequately represent in-storm water-quality fluctuations because continuous records of water stage, specific conductance, pH, and temperature of the runoff indicate that these properties fluctuate substantially during a storm. Continuous records of water-quality properties can be used to maximize the information obtained about the stormwater runoff system being studied and can provide the context needed to interpret analyses of water samples. Concentrations of the road-salt constituents calcium, sodium, and chloride in highway runoff were estimated from theoretical and empirical relations between specific conductance and the concentrations of these ions. These relations were examined using the analysis of 233 highway-runoff samples collected from August 1988 through March 1995 at four highway-drainage monitoring stations along State Route 25 in southeastern Massachusetts. Theoretically, the specific conductance of a water sample is the sum of the individual conductances attributed to each ionic species in solution, that is, the product of the concentration of each ion in milliequivalents per liter (meq/L) multiplied by the equivalent ionic conductance at infinite dilution, thereby establishing the principle of superposition. Superposition provides an estimate of actual specific conductance that is within measurement error throughout the conductance range of many natural waters, with errors of less than ±5 percent below 1,000 microsiemens per centimeter (µS/cm) and ±10 percent between 1,000 and 4,000 µS/cm if all major ionic constituents are accounted for. A semi-empirical method (adjusted superposition) was used to adjust for concentration effects (superposition-method prediction errors at high and low concentrations) and to relate measured specific conductance to that calculated using superposition. The adjusted superposition method, which was developed to interpret the State Route 25 highway-runoff records, accounts for
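    A minimal sketch of the superposition calculation is given below; the equivalent ionic conductances at infinite dilution are approximate textbook values and the sample concentrations are invented, so the numbers are illustrative rather than taken from the study.

```python
# Approximate equivalent ionic conductances at infinite dilution, 25 degC,
# in uS/cm per meq/L (standard tabulated values; treat as assumptions).
LAMBDA_0 = {"Ca": 59.5, "Na": 50.1, "Cl": 76.4}
EQ_WEIGHT = {"Ca": 20.04, "Na": 22.99, "Cl": 35.45}   # mg per meq

def superposition_sc(conc_mg_per_l):
    """Estimate specific conductance (uS/cm) as the sum of the individual
    ionic contributions (concentration in meq/L times equivalent conductance)."""
    sc = 0.0
    for ion, mg_l in conc_mg_per_l.items():
        meq_l = mg_l / EQ_WEIGHT[ion]
        sc += meq_l * LAMBDA_0[ion]
    return sc

# Hypothetical road-salt-dominated runoff sample (mg/L, illustrative values)
sample = {"Ca": 40.0, "Na": 120.0, "Cl": 210.0}
print(round(superposition_sc(sample), 1), "uS/cm")
```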

  15. ERROR BOUNDS FOR SURFACE AREA ESTIMATORS BASED ON CROFTON’S FORMULA

    Directory of Open Access Journals (Sweden)

    Markus Kiderlen

    2011-05-01

    Full Text Available According to Crofton's formula, the surface area S(A) of a sufficiently regular compact set A in R^d is proportional to the mean of all total projections pA(u) on a linear hyperplane with normal u, uniformly averaged over all unit vectors u. In applications, pA(u) is only measured in k directions and the mean is approximated by a finite weighted sum Ŝ(A) of the total projections in these directions. The choice of the weights depends on the selected quadrature rule. We define an associated zonotope Z (depending only on the projection directions and the quadrature rule), and show that the relative error Ŝ(A)/S(A) is bounded from below by the inradius of Z and from above by the circumradius of Z. Applying a strengthened isoperimetric inequality due to Bonnesen, we show that the rectangular quadrature rule does not give the best possible error bounds for d = 2. In addition, we derive asymptotic behavior of the error (with increasing k) in the planar case. The paper concludes with applications to surface area estimation in design-based digital stereology where we show that the weights due to Bonnesen's inequality are better than the usual weights based on the rectangular rule and almost optimal in the sense that the relative error of the surface area estimator is very close to the minimal error.

  16. Error estimation in multitemporal InSAR deformation time series, with application to Lanzarote, Canary Islands

    Science.gov (United States)

    GonzáLez, Pablo J.; FernáNdez, José

    2011-10-01

    Interferometric Synthetic Aperture Radar (InSAR) is a reliable technique for measuring crustal deformation. However, despite its long application in geophysical problems, its error estimation has been largely overlooked. Currently, the largest problem with InSAR is still the atmospheric propagation errors, which is why multitemporal interferometric techniques have been successfully developed using a series of interferograms. However, none of the standard multitemporal interferometric techniques, namely PS or SB (Persistent Scatterers and Small Baselines, respectively), provides an estimate of their precision. Here, we present a method to compute reliable estimates of the precision of the deformation time series. We implement it for the SB multitemporal interferometric technique (a favorable technique for natural terrains, the most usual target of geophysical applications). We describe the method that uses a properly weighted scheme that allows us to compute estimates for all interferogram pixels, enhanced by a Monte Carlo resampling technique that properly propagates the interferogram errors (variance-covariances) into the unknown parameters (estimated errors for the displacements). We apply the multitemporal error estimation method to Lanzarote Island (Canary Islands), where no active magmatic activity has been reported in the last decades. We detect deformation around Timanfaya volcano (lengthening of the line of sight, i.e. subsidence), where the last eruption in 1730-1736 occurred. Deformation closely follows the surface temperature anomalies indicating that magma crystallization (cooling and contraction) of the 300-year shallow magmatic body under Timanfaya volcano is still ongoing.

  17. A Fast Soft Bit Error Rate Estimation Method

    Directory of Open Access Journals (Sweden)

    Ait-Idir Tarik

    2010-01-01

    Full Text Available We have suggested in a previous publication a method to estimate the Bit Error Rate (BER) of a digital communications system instead of using the famous Monte Carlo (MC) simulation. This method was based on the estimation of the probability density function (pdf) of soft observed samples. The kernel method was used for the pdf estimation. In this paper, we suggest using a Gaussian Mixture (GM) model. The Expectation Maximisation algorithm is used to estimate the parameters of this mixture. The optimal number of Gaussians is computed by using Mutual Information Theory. The analytical expression of the BER is therefore simply given by using the different estimated parameters of the Gaussian Mixture. Simulation results are presented to compare the three mentioned methods: Monte Carlo, Kernel and Gaussian Mixture. We analyze the performance of the proposed BER estimator in the framework of a multiuser code division multiple access system and show that attractive performance is achieved compared with conventional MC or Kernel aided techniques. The results show that the GM method can drastically reduce the needed number of samples to estimate the BER in order to reduce the required simulation run-time, even at very low BER.
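    A hedged sketch of the Gaussian-mixture idea is shown below: a mixture is fitted to synthetic soft samples and the BER is then obtained analytically as a weighted sum of Gaussian tail probabilities. The signal model and the choice of two components are assumptions made for the illustration, not the paper's set-up.

```python
import numpy as np
from scipy.stats import norm
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(3)

# Synthetic soft outputs for transmitted bit +1 (illustrative, not the paper's system):
# a Gaussian "clean" cluster plus a degraded/interference cluster.
soft = np.concatenate([rng.normal(1.0, 0.35, 9000), rng.normal(0.3, 0.6, 1000)])

# Fit a Gaussian mixture to the soft-sample pdf (the number of components would
# normally be chosen by an information-theoretic criterion; 2 is assumed here).
gm = GaussianMixture(n_components=2, random_state=0).fit(soft.reshape(-1, 1))
w = gm.weights_
mu = gm.means_.ravel()
sd = np.sqrt(gm.covariances_).ravel()

# Analytical BER = P(soft < 0 | bit = +1) under the fitted mixture
ber_gm = np.sum(w * norm.cdf(-mu / sd))

# Monte Carlo reference for comparison
ber_mc = np.mean(soft < 0)
print(ber_gm, ber_mc)
```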

  18. Estimations of Nitrogen Concentration in Sugarcane Using Hyperspectral Imagery

    Directory of Open Access Journals (Sweden)

    Poonsak Miphokasap

    2018-04-01

    Full Text Available This study aims to estimate the spatial variation of sugarcane Canopy Nitrogen Concentration (CNC) using spectral data, which were measured from a spaceborne hyperspectral image. Stepwise Multiple Linear Regression (SMLR) and Support Vector Regression (SVR) were applied to calibrate and validate the CNC estimation models. The raw spectral reflectance was transformed into a First-Derivative Spectrum (FDS) and absorption features to remove the spectral noise and finally used as input variables. The results indicate that the estimation models developed by non-linear SVR with a Radial Basis Function (RBF) kernel yield a higher correlation coefficient with CNC than the models computed by SMLR. The best model shows a coefficient of determination of 0.78 and a Root Mean Square Error (RMSE) of 0.035% nitrogen. The narrow spectral wavelengths sensitive to nitrogen content in the combined cultivar environments lie mainly in the visible-red region, the longer portion of the red edge, the shortwave infrared, and the far near-infrared. The most important conclusion from this experiment is that spectral signals from the space hyperspectral data contain meaningful information for quantifying sugarcane CNC across larger geographic areas. The nutrient deficient areas could be corrected by applying suitable farm management.

  19. Effects of structural error on the estimates of parameters of dynamical systems

    Science.gov (United States)

    Hadaegh, F. Y.; Bekey, G. A.

    1986-01-01

    In this paper, the notion of 'near-equivalence in probability' is introduced for identifying a system in the presence of several error sources. Following some basic definitions, necessary and sufficient conditions for the identifiability of parameters are given. The effects of structural error on the parameter estimates for both the deterministic and stochastic cases are considered.

  20. On global error estimation and control for initial value problems

    NARCIS (Netherlands)

    J. Lang (Jens); J.G. Verwer (Jan)

    2007-01-01

    textabstractThis paper addresses global error estimation and control for initial value problems for ordinary differential equations. The focus lies on a comparison between a novel approach based on the adjoint method combined with a small sample statistical initialization and the classical approach

  1. On global error estimation and control for initial value problems

    NARCIS (Netherlands)

    Lang, J.; Verwer, J.G.

    2007-01-01

    Abstract. This paper addresses global error estimation and control for initial value problems for ordinary differential equations. The focus lies on a comparison between a novel approach based on the adjoint method combined with a small sample statistical initialization and the classical approach

  2. Improved children's motor learning of the basketball free shooting pattern by associating subjective error estimation and extrinsic feedback.

    Science.gov (United States)

    Silva, Leandro de Carvalho da; Pereira-Monfredini, Carla Ferro; Teixeira, Luis Augusto

    2017-09-01

    This study aimed at assessing the interaction between subjective error estimation and frequency of extrinsic feedback in the learning of the basketball free shooting pattern by children. 10- to 12-year-olds were assigned to 1 of 4 groups combining subjective error estimation (required or not) with relative frequency of extrinsic feedback (33% or 100%). Analysis of performance was based on quality of movement pattern. Analysis showed superior learning of the group combining error estimation and 100% feedback frequency, both groups receiving feedback on 33% of trials achieved intermediate results, and the group combining no requirement of error estimation and 100% feedback frequency had the poorest learning. Our results show the benefit of subjective error estimation in association with high frequency of extrinsic feedback in children's motor learning of a sport motor pattern.

  3. Dual-energy X-ray absorptiometry: analysis of pediatric fat estimate errors due to tissue hydration effects.

    Science.gov (United States)

    Testolin, C G; Gore, R; Rivkin, T; Horlick, M; Arbo, J; Wang, Z; Chiumello, G; Heymsfield, S B

    2000-12-01

    Dual-energy X-ray absorptiometry (DXA) percent (%) fat estimates may be inaccurate in young children, who typically have high tissue hydration levels. This study was designed to provide a comprehensive analysis of pediatric tissue hydration effects on DXA %fat estimates. Phase 1 was experimental and included three in vitro studies to establish the physical basis of DXA %fat-estimation models. Phase 2 extended phase 1 models and consisted of theoretical calculations to estimate the %fat errors emanating from previously reported pediatric hydration effects. Phase 1 experiments supported the two-compartment DXA soft tissue model and established that the pixel ratio of low to high energy (the R value) is a predictable function of tissue elemental content. In phase 2, modeling of reference body composition values from birth to age 120 mo revealed that %fat errors will arise if a "constant" adult lean soft tissue R value is applied to the pediatric population; the maximum %fat error, approximately 0.8%, would be present at birth. High tissue hydration, as observed in infants and young children, leads to errors in DXA %fat estimates. The magnitude of these errors based on theoretical calculations is small and may not be of clinical or research significance.

  4. On the a priori estimation of collocation error covariance functions: a feasibility study

    DEFF Research Database (Denmark)

    Arabelos, D.N.; Forsberg, René; Tscherning, C.C.

    2007-01-01

    and the associated error covariance functions were conducted in the Arctic region north of 64 degrees latitude. The correlation between the known features of the data and the parameters variance and correlation length of the computed error covariance functions was estimated using multiple regression analysis...

  5. Theoretical and Experimental Investigation of Force Estimation Errors Using Active Magnetic Bearings with Embedded Hall Sensors

    DEFF Research Database (Denmark)

    Voigt, Andreas Jauernik; Santos, Ilmar

    2012-01-01

    ... of AMBs by embedding Hall sensors instead of mounting these directly on the pole surfaces, force estimation errors are investigated both numerically and experimentally. A linearized version of the conventionally applied quadratic correspondence between measured Hall voltage and applied AMB force ... to ∼ 20% of the nominal air gap, the force estimation error is found to be reduced by the linearized force equation as compared to the quadratic force equation, which is supported by experimental results. Additionally, the FE model is employed in a comparative study of the force estimation error behavior ...

  6. Rigorous covariance propagation of geoid errors to geodetic MDT estimates

    Science.gov (United States)

    Pail, R.; Albertella, A.; Fecher, T.; Savcenko, R.

    2012-04-01

    The mean dynamic topography (MDT) is defined as the difference between the mean sea surface (MSS) derived from satellite altimetry, averaged over several years, and the static geoid. Assuming geostrophic conditions, the ocean surface velocities, an important component of the global ocean circulation, can be derived from the MDT. Due to the availability of GOCE gravity field models, for the very first time MDT can now be derived solely from satellite observations (altimetry and gravity) down to spatial length-scales of 100 km and even below. Global gravity field models, parameterized in terms of spherical harmonic coefficients, are complemented by the full variance-covariance matrix (VCM). Therefore, for the geoid component a realistic statistical error estimate is available, while the error description of the altimetric component is still an open issue and is, if at all, attacked empirically. In this study we make the attempt to perform, based on the full gravity VCM, rigorous error propagation to derived geostrophic surface velocities, thus also considering all correlations. For the definition of the static geoid we use the third release of the time-wise GOCE model, as well as the satellite-only combination model GOCO03S. In detail, we will investigate the velocity errors resulting from the geoid component as a function of the harmonic degree, and the impact of using/not using covariances on the MDT errors and their correlations. When deriving an MDT, it is spectrally filtered to a certain maximum degree, which is usually driven by the signal content of the geoid model, by applying isotropic or non-isotropic filters. Since this filtering is acting also on the geoid component, the consistent integration of this filter process into the covariance propagation shall be performed, and its impact shall be quantified. The study will be performed for MDT estimates in specific test areas of particular oceanographic interest.
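    The core of such a rigorous propagation is the standard linear rule Sigma_y = A Sigma_x A^T applied with the full variance-covariance matrix. The toy sketch below contrasts propagation with the full VCM against propagation with variances only; the matrices are synthetic placeholders, unrelated to the actual GOCE functionals or filtering.

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy linear functional: derived quantities y = A x, where x are "coefficients"
# with a full variance-covariance matrix Sigma_x (all values illustrative).
n = 30
A = rng.normal(size=(5, n))
L = rng.normal(size=(n, n))
Sigma_x = L @ L.T / n                      # a synthetic, correlated VCM

# Rigorous propagation (keeping all correlations) vs. variances-only propagation
Sigma_y_full = A @ Sigma_x @ A.T
Sigma_y_diag = A @ np.diag(np.diag(Sigma_x)) @ A.T

print(np.sqrt(np.diag(Sigma_y_full)))      # realistic error bars
print(np.sqrt(np.diag(Sigma_y_diag)))      # what is obtained if covariances are ignored
```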

  7. Do Survey Data Estimate Earnings Inequality Correctly? Measurement Errors among Black and White Male Workers

    Science.gov (United States)

    Kim, ChangHwan; Tamborini, Christopher R.

    2012-01-01

    Few studies have considered how earnings inequality estimates may be affected by measurement error in self-reported earnings in surveys. Utilizing restricted-use data that links workers in the Survey of Income and Program Participation with their W-2 earnings records, we examine the effect of measurement error on estimates of racial earnings…

  8. Robust estimation of partially linear models for longitudinal data with dropouts and measurement error.

    Science.gov (United States)

    Qin, Guoyou; Zhang, Jiajia; Zhu, Zhongyi; Fung, Wing

    2016-12-20

    Outliers, measurement error, and missing data are commonly seen in longitudinal data because of the data collection process. However, no method can address all three of these issues simultaneously. This paper focuses on the robust estimation of partially linear models for longitudinal data with dropouts and measurement error. A new robust estimating equation, simultaneously tackling outliers, measurement error, and missingness, is proposed. The asymptotic properties of the proposed estimator are established under some regularity conditions. The proposed method is easy to implement in practice by utilizing the existing standard generalized estimating equations algorithms. The comprehensive simulation studies show the strength of the proposed method in dealing with longitudinal data with all three features. Finally, the proposed method is applied to data from the Lifestyle Education for Activity and Nutrition study and confirms the effectiveness of the intervention in producing weight loss at month 9. Copyright © 2016 John Wiley & Sons, Ltd.

  9. Psychological scaling of expert estimates of human error probabilities: application to nuclear power plant operation

    International Nuclear Information System (INIS)

    Comer, K.; Gaddy, C.D.; Seaver, D.A.; Stillwell, W.G.

    1985-01-01

    The US Nuclear Regulatory Commission and Sandia National Laboratories sponsored a project to evaluate psychological scaling techniques for use in generating estimates of human error probabilities. The project evaluated two techniques: direct numerical estimation and paired comparisons. Expert estimates were found to be consistent across and within judges. Convergent validity was good, in comparison to estimates in a handbook of human reliability. Predictive validity could not be established because of the lack of actual relative frequencies of error (which will be a difficulty inherent in validation of any procedure used to estimate HEPs). Application of expert estimates in probabilistic risk assessment and in human factors is discussed

  10. Error estimates in horocycle averages asymptotics: challenges from string theory

    NARCIS (Netherlands)

    Cardella, M.A.

    2010-01-01

    For modular functions of rapid decay, a classical result connects the error estimate in their long horocycle average asymptotic to the Riemann hypothesis. We study similar asymptotics for modular functions with growth conditions that are not as mild, such as polynomial growth and exponential growth.

  11. Computational Error Estimate for the Power Series Solution of Odes ...

    African Journals Online (AJOL)

    This paper compares the error estimation of the power series solution with the recursive Tau method for solving ordinary differential equations. From the computational viewpoint, the power series solution using zeros of the Chebyshev polynomial is effective, accurate and easy to use. Keywords: Lanczos Tau method, Chebyshev polynomial, ...

  12. IAS 8, Accounting Policies, Changes in Accounting Estimates and Errors – A Closer Look

    OpenAIRE

    Muthupandian, K S

    2008-01-01

    The International Accounting Standards Board issued the revised version of the International Accounting Standard 8, Accounting Policies, Changes in Accounting Estimates and Errors. The objective of IAS 8 is to prescribe the criteria for selecting, applying and changing accounting policies, together with the accounting treatment and disclosure of changes in accounting policies, changes in accounting estimates and the corrections of errors. This article presents a closer look at the standard (o...

  13. Error due to unresolved scales in estimation problems for atmospheric data assimilation

    Science.gov (United States)

    Janjic, Tijana

    The error arising due to unresolved scales in data assimilation procedures is examined. The problem of estimating the projection of the state of a passive scalar undergoing advection at a sequence of times is considered. The projection belongs to a finite-dimensional function space and is defined on the continuum. Using the continuum projection of the state of a passive scalar, a mathematical definition is obtained for the error arising due to the presence, in the continuum system, of scales unresolved by the discrete dynamical model. This error affects the estimation procedure through point observations that include the unresolved scales. In this work, two approximate methods for taking into account the error due to unresolved scales and the resulting correlations are developed and employed in the estimation procedure. The resulting formulas resemble the Schmidt-Kalman filter and the usual discrete Kalman filter, respectively. For this reason, the newly developed filters are called the Schmidt-Kalman filter and the traditional filter. In order to test the assimilation methods, a two-dimensional advection model with nonstationary spectrum was developed for passive scalar transport in the atmosphere. An analytical solution on the sphere was found depicting the model dynamics evolution. Using this analytical solution the model error is avoided, and the error due to unresolved scales is the only error left in the estimation problem. It is demonstrated that the traditional and the Schmidt-Kalman filter work well provided the exact covariance function of the unresolved scales is known. However, this requirement is not satisfied in practice, and the covariance function must be modeled. The Schmidt-Kalman filter cannot be computed in practice without further approximations. Therefore, the traditional filter is better suited for practical use. Also, the traditional filter does not require modeling of the full covariance function of the unresolved scales, but only

  14. Estimation of Separation Buffers for Wind-Prediction Error in an Airborne Separation Assistance System

    Science.gov (United States)

    Consiglio, Maria C.; Hoadley, Sherwood T.; Allen, B. Danette

    2009-01-01

    Wind prediction errors are known to affect the performance of automated air traffic management tools that rely on aircraft trajectory predictions. In particular, automated separation assurance tools, planned as part of the NextGen concept of operations, must be designed to account for and compensate for the impact of wind prediction errors and other system uncertainties. In this paper we describe a high fidelity batch simulation study designed to estimate the separation distance required to compensate for the effects of wind-prediction errors at increasing traffic densities on an airborne separation assistance system. These experimental runs are part of the Safety Performance of Airborne Separation experiment suite that examines the safety implications of prediction errors and system uncertainties on airborne separation assurance systems. In this experiment, wind-prediction errors were varied between zero and forty knots while traffic density was increased to several times current traffic levels. In order to accurately measure the full unmitigated impact of wind-prediction errors, no uncertainty buffers were added to the separation minima. The goal of the study was to measure the impact of wind-prediction errors in order to estimate the additional separation buffers necessary to preserve separation and to provide a baseline for future analyses. Buffer estimations from this study will be used and verified in upcoming safety evaluation experiments under similar simulation conditions. Results suggest that the strategic airborne separation functions exercised in this experiment can sustain wind prediction errors up to 40 kts at current day air traffic density with no additional separation distance buffer and at eight times the current day with no more than a 60% increase in separation distance buffer.

  15. How to Avoid Errors in Error Propagation: Prediction Intervals and Confidence Intervals in Forest Biomass

    Science.gov (United States)

    Lilly, P.; Yanai, R. D.; Buckley, H. L.; Case, B. S.; Woollons, R. C.; Holdaway, R. J.; Johnson, J.

    2016-12-01

    Calculations of forest biomass and elemental content require many measurements and models, each contributing uncertainty to the final estimates. While sampling error is commonly reported, based on replicate plots, error due to uncertainty in the regression used to estimate biomass from tree diameter is usually not quantified. Some published estimates of uncertainty due to the regression models have used the uncertainty in the prediction of individuals, ignoring uncertainty in the mean, while others have propagated uncertainty in the mean while ignoring individual variation. Using the simple case of the calcium concentration of sugar maple leaves, we compare the variation among individuals (the standard deviation) to the uncertainty in the mean (the standard error) and illustrate the declining importance in the prediction of individual concentrations as the number of individuals increases. For allometric models, the analogous statistics are the prediction interval (or the residual variation in the model fit) and the confidence interval (describing the uncertainty in the best fit model). The effect of propagating these two sources of error is illustrated using the mass of sugar maple foliage. The uncertainty in individual tree predictions was large for plots with few trees; for plots with 30 trees or more, the uncertainty in individuals was less important than the uncertainty in the mean. Authors of previously published analyses have reanalyzed their data to show the magnitude of these two sources of uncertainty at scales ranging from experimental plots to entire countries. The most correct analysis will take both sources of uncertainty into account, but for practical purposes, country-level reports of uncertainty in carbon stocks, as required by the IPCC, can ignore the uncertainty in individuals. Ignoring the uncertainty in the mean will lead to exaggerated estimates of confidence in estimates of forest biomass and carbon and nutrient contents.
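    The declining contribution of individual variation can be illustrated with a few lines of code; the numbers below are invented placeholders, not the study's calcium data.

```python
import numpy as np

# Illustrative numbers: mean foliar Ca with its standard error (uncertainty in the
# mean) and the standard deviation among individual leaves/trees (all assumed).
mean_ca = 8.0        # mg/g
se_mean = 0.15       # mg/g, uncertainty in the mean
sd_indiv = 1.2       # mg/g, variation among individuals

for n in (1, 5, 30, 300):
    # Uncertainty of the average over n new individuals: the individual variation
    # averages down as 1/sqrt(n), the uncertainty in the mean does not.
    combined = np.sqrt(se_mean**2 + sd_indiv**2 / n)
    print(f"n = {n:4d}: individual term = {sd_indiv/np.sqrt(n):.3f}, "
          f"combined = {combined:.3f} (floor set by the mean = {se_mean:.3f})")
```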

  16. Solution-verified reliability analysis and design of bistable MEMS using error estimation and adaptivity.

    Energy Technology Data Exchange (ETDEWEB)

    Eldred, Michael Scott; Subia, Samuel Ramirez; Neckels, David; Hopkins, Matthew Morgan; Notz, Patrick K.; Adams, Brian M.; Carnes, Brian; Wittwer, Jonathan W.; Bichon, Barron J.; Copps, Kevin D.

    2006-10-01

    This report documents the results for an FY06 ASC Algorithms Level 2 milestone combining error estimation and adaptivity, uncertainty quantification, and probabilistic design capabilities applied to the analysis and design of bistable MEMS. Through the use of error estimation and adaptive mesh refinement, solution verification can be performed in an automated and parameter-adaptive manner. The resulting uncertainty analysis and probabilistic design studies are shown to be more accurate, efficient, reliable, and convenient.

  17. Normalized Minimum Error Entropy Algorithm with Recursive Power Estimation

    Directory of Open Access Journals (Sweden)

    Namyong Kim

    2016-06-01

    Full Text Available The minimum error entropy (MEE) algorithm is known to be superior in signal processing applications under impulsive noise. In this paper, based on the analysis of behavior of the optimum weight and the properties of robustness against impulsive noise, a normalized version of the MEE algorithm is proposed. The step size of the MEE algorithm is normalized with the power of input entropy that is estimated recursively for reducing its computational complexity. The proposed algorithm yields lower minimum MSE (mean squared error) and faster convergence speed simultaneously than the original MEE algorithm does in the equalization simulation. On the condition of the same convergence speed, its performance enhancement in steady state MSE is above 3 dB.

  18. Estimation of time averages from irregularly spaced observations - With application to coastal zone color scanner estimates of chlorophyll concentration

    Science.gov (United States)

    Chelton, Dudley B.; Schlax, Michael G.

    1991-01-01

    The sampling error of an arbitrary linear estimate of a time-averaged quantity constructed from a time series of irregularly spaced observations at a fixed location is quantified through a formalism. The method is applied to satellite observations of chlorophyll from the coastal zone color scanner. The two specific linear estimates under consideration are the composite average formed from the simple average of all observations within the averaging period and the optimal estimate formed by minimizing the mean squared error of the temporal average based on all the observations in the time series. The resulting suboptimal estimates are shown to be more accurate than composite averages. Suboptimal estimates are also found to be nearly as accurate as optimal estimates using the correct signal and measurement error variances and correlation functions for realistic ranges of these parameters, which makes it a viable practical alternative to the composite average method generally employed at present.
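    A compact sketch of the comparison is given below, assuming a Gaussian signal covariance and white measurement noise (both invented for illustration): the optimal (Gauss-Markov) weights are obtained from the observation covariance matrix, and the expected squared error of both estimators of the time average is evaluated.

```python
import numpy as np

rng = np.random.default_rng(5)

# Signal model (assumed): zero-mean, Gaussian-shaped covariance; white measurement noise
sig2, lam, noise2, T = 1.0, 5.0, 0.25, 30.0
cov = lambda dt: sig2 * np.exp(-0.5 * (dt / lam) ** 2)

t_obs = np.sort(rng.uniform(0.0, T, size=8))        # irregular sampling times
tg = np.linspace(0.0, T, 1001)                       # fine grid for the integrals

# Covariance of the observations, and between observations and the true time average
C_dd = cov(t_obs[:, None] - t_obs[None, :]) + noise2 * np.eye(len(t_obs))
c_dA = np.array([np.trapz(cov(ti - tg), tg) / T for ti in t_obs])
var_A = np.trapz([np.trapz(cov(s - tg), tg) for s in tg], tg) / T**2

# Expected squared errors of the two linear estimators of the time average
w_opt = np.linalg.solve(C_dd, c_dA)                  # optimal (Gauss-Markov) weights
w_cmp = np.full(len(t_obs), 1.0 / len(t_obs))        # composite (simple) average

mse = lambda w: var_A - 2 * w @ c_dA + w @ C_dd @ w
print("optimal MSE:", mse(w_opt), " composite MSE:", mse(w_cmp))
```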

  19. Combining wrist age and third molars in forensic age estimation: how to calculate the joint age estimate and its error rate in age diagnostics.

    Science.gov (United States)

    Gelbrich, Bianca; Frerking, Carolin; Weiss, Sandra; Schwerdt, Sebastian; Stellzig-Eisenhauer, Angelika; Tausche, Eve; Gelbrich, Götz

    2015-01-01

    Forensic age estimation in living adolescents is based on several methods, e.g. the assessment of skeletal and dental maturation. Combination of several methods is mandatory, since age estimates from a single method are too imprecise due to biological variability. The correlation of the errors of the methods being combined must be known to calculate the precision of combined age estimates. To examine the correlation of the errors of the hand and the third molar method and to demonstrate how to calculate the combined age estimate. Clinical routine radiographs of the hand and dental panoramic images of 383 patients (aged 7.8-19.1 years, 56% female) were assessed. Lack of correlation (r = -0.024, 95% CI = -0.124 to + 0.076, p = 0.64) allows calculating the combined age estimate as the weighted average of the estimates from hand bones and third molars. Combination improved the standard deviations of errors (hand = 0.97, teeth = 1.35 years) to 0.79 years. Uncorrelated errors of the age estimates obtained from both methods allow straightforward determination of the common estimate and its variance. This is also possible when reference data for the hand and the third molar method are established independently from each other, using different samples.
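    Assuming the combination is done by inverse-variance weighting (a common choice that reproduces the reported 0.79 years, though not necessarily the exact weights used in the paper), the calculation looks as follows; the example ages are hypothetical.

```python
import numpy as np

# Standard deviations of the estimation errors reported in the abstract
sd_hand, sd_teeth = 0.97, 1.35          # years

# With uncorrelated errors, the inverse-variance weighted average is a natural choice
w_hand = (1 / sd_hand**2) / (1 / sd_hand**2 + 1 / sd_teeth**2)
w_teeth = 1.0 - w_hand
sd_combined = np.sqrt(1.0 / (1 / sd_hand**2 + 1 / sd_teeth**2))
print(w_hand, w_teeth, sd_combined)     # sd_combined is approximately 0.79 years

# Hypothetical case: hand-bone estimate 16.2 y, third-molar estimate 17.0 y
print(w_hand * 16.2 + w_teeth * 17.0)   # combined age estimate
```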

  20. A bio-optical algorithm for the remote estimation of the chlorophyll-a concentration in case 2 waters

    Energy Technology Data Exchange (ETDEWEB)

    Gitelson, Anatoly A; Gurlin, Daniela; Moses, Wesley J; Barrow, Tadd, E-mail: agitelson2@unl.ed [Center for Advanced Land Management Information Technologies (CALMIT), School of Natural Resources, University of Nebraska-Lincoln (United States)

    2009-10-15

    The objective of this work was to test the performance of a recently developed three-band model and its special case, a two-band model, for the remote estimation of the chlorophyll-a (chl-a) concentration in turbid productive case 2 waters. We specifically focused on (a) determining the ability of the models to estimate chl-a <20 mg m⁻³, typical for coastal and estuarine waters, and (b) assessing the potential of MODIS and MERIS to estimate chl-a concentrations in turbid productive waters, using red and near-infrared (NIR) bands. Reflectance spectra and water samples were collected in 89 stations over lakes in the United States with a wide variability in optical parameters (i.e. 2.1 […] estimate chlorophyll-a concentrations with a root mean square error (RMSE) of <1.65 mg m⁻³. MODIS (bands 13 and 15) and MERIS (bands 7, 9, and 10) red and NIR reflectances were simulated from the collected reflectance spectra and potential estimation errors were assessed. The MODIS two-band model is able to estimate chl-a concentrations with a RMSE of <7.5 mg m⁻³ for chl-a ranging from 2 to 50 mg m⁻³; however, the model loses its sensitivity for chl-a <20 mg m⁻³. Benefiting from the higher spectral resolution of the MERIS data, the MERIS three-band model accounts for 93% of chl-a variation and is able to estimate chl-a concentrations with a RMSE of <5.1 mg m⁻³ for chl-a ranging from 2 to 50 mg m⁻³, and a RMSE of <1.7 mg m⁻³ for chl-a ranging from 2 to 20 mg m⁻³. These findings imply that, provided that an atmospheric correction scheme specific to the red and NIR spectral region is available, the extensive database of MODIS and MERIS images could be used to
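    The indices themselves are simple to compute; the sketch below shows the two-band ratio and a three-band index of the form [R(λ1)⁻¹ − R(λ2)⁻¹]·R(λ3) together with a linear calibration against measured chl-a. Band positions, index values and chl-a values are placeholders, not the study's data.

```python
import numpy as np

def two_band_index(r_red, r_nir):
    """Two-band NIR/red reflectance ratio (special case of the three-band model)."""
    return r_nir / r_red

def three_band_index(r_l1, r_l2, r_l3):
    """Three-band model index: [R(l1)^-1 - R(l2)^-1] * R(l3)."""
    return (1.0 / r_l1 - 1.0 / r_l2) * r_l3

# Hypothetical calibration: index values computed from measured reflectance spectra
# (e.g. bands near 665, 710 and 750 nm) regressed against measured chl-a.
index = np.array([0.05, 0.12, 0.25, 0.40, 0.65])         # illustrative index values
chla = np.array([3.0, 8.0, 18.0, 30.0, 48.0])            # mg m^-3, illustrative
slope, intercept = np.polyfit(index, chla, 1)             # linear calibration
print(slope, intercept)
print(slope * 0.30 + intercept)                            # chl-a predicted for a new index
```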

  1. On the BER and capacity analysis of MIMO MRC systems with channel estimation error

    KAUST Repository

    Yang, Liang; Alouini, Mohamed-Slim

    2011-01-01

    In this paper, we investigate the effect of channel estimation error on the capacity and bit-error rate (BER) of a multiple-input multiple-output (MIMO) transmit maximal ratio transmission (MRT) and receive maximal ratio combining (MRC) systems over

  2. Regularization and error estimates for nonhomogeneous backward heat problems

    Directory of Open Access Journals (Sweden)

    Duc Trong Dang

    2006-01-01

    Full Text Available In this article, we study the inverse time problem for the non-homogeneous heat equation which is a severely ill-posed problem. We regularize this problem using the quasi-reversibility method and then obtain error estimates on the approximate solutions. Solutions are calculated by the contraction principle and shown in numerical experiments. We obtain also rates of convergence to the exact solution.

  3. Estimating Classification Errors under Edit Restrictions in Composite Survey-Register Data Using Multiple Imputation Latent Class Modelling (MILC)

    NARCIS (Netherlands)

    Boeschoten, Laura; Oberski, Daniel; De Waal, Ton

    2017-01-01

    Both registers and surveys can contain classification errors. These errors can be estimated by making use of a composite data set. We propose a new method based on latent class modelling to estimate the number of classification errors across several sources while taking into account impossible

  4. Estimation of glucose kinetics in fetal-maternal studies: Potential errors, solutions, and limitations

    International Nuclear Information System (INIS)

    Menon, R.K.; Bloch, C.A.; Sperling, M.A.

    1990-01-01

    We investigated whether errors occur in the estimation of ovine maternal-fetal glucose (Glc) kinetics using the isotope dilution technique when the Glc pool is rapidly expanded by exogenous (protocol A) or endogenous (protocol C) Glc entry and sought possible solutions (protocol B). In protocol A (n = 8), after attaining steady-state Glc specific activity (SA) by [U-14C]glucose (period 1), infusion of Glc (period 2) predictably decreased Glc SA, whereas [U-14C]glucose concentration unexpectedly rose from 7,208 ± 367 (means ± SE) in period 1 to 8,558 ± 308 disintegrations/min (dpm) per ml in period 2 (P less than 0.01). Fetal endogenous Glc production (EGP) was negligible during period 1 (0.44 ± 1.0), but yielded a physiologically impossible negative value of -2.1 ± 0.72 mg kg⁻¹ min⁻¹ during period 2. When the fall in Glc SA during Glc infusion was prevented by addition of [U-14C]glucose admixed with the exogenous Glc (protocol B; n = 7), EGP was no longer negative. In protocol C (n = 6), sequential infusions of four increasing doses of epinephrine serially decreased SA, whereas tracer Glc increased from 7,483 ± 608 to 11,525 ± 992 dpm/ml plasma (P less than 0.05), imposing an obligatory underestimation of EGP. Thus a tracer mixing problem leads to erroneous estimations of fetal Glc utilization and Glc production via the three-compartment model in sheep when the Glc pool is expanded exogenously or endogenously. These errors can be minimized by maintaining the Glc SA relatively constant.

  5. Error analysis and new dual-cosine window for estimating the sensor frequency response function from the step response data

    Science.gov (United States)

    Yang, Shuang-Long; Liang, Li-Ping; Liu, Hou-De; Xu, Ke-Jun

    2018-03-01

    Aiming at reducing the estimation error of the sensor frequency response function (FRF) estimated by the commonly used window-based spectral estimation method, the error models of interpolation and transient errors are derived in the form of non-parametric models. Accordingly, window effects on the errors are analyzed and reveal that the commonly used Hanning window leads to smaller interpolation error, which can also be significantly eliminated by the cubic spline interpolation method when estimating the FRF from the step response data, and a window with a smaller front-end value can restrain more transient error. Thus, a new dual-cosine window with its non-zero discrete Fourier transform bins at -3, -1, 0, 1, and 3 is constructed for FRF estimation. Compared with the Hanning window, the new dual-cosine window has equivalent interpolation error suppression capability and better transient error suppression capability when estimating the FRF from the step response; specifically, it reduces the asymptotic property of the transient error from O(N⁻²) of the Hanning window method to O(N⁻⁴) while only increasing the uncertainty slightly (about 0.4 dB). Then, one direction of a wind tunnel strain gauge balance, which is a high order, small damping, and non-minimum phase system, is employed as the example for verifying the new dual-cosine window-based spectral estimation method. The model simulation result shows that the new dual-cosine window method is better than the Hanning window method for FRF estimation, and compared with the Gans method and LPM method, it has the advantages of simple computation, less time consumption, and short data requirement; the actual data calculation result of the balance FRF is consistent with the simulation result. Thus, the new dual-cosine window is effective and practical for FRF estimation.

  6. Test models for improving filtering with model errors through stochastic parameter estimation

    International Nuclear Information System (INIS)

    Gershgorin, B.; Harlim, J.; Majda, A.J.

    2010-01-01

    The filtering skill for turbulent signals from nature is often limited by model errors created by utilizing an imperfect model for filtering. Updating the parameters in the imperfect model through stochastic parameter estimation is one way to increase filtering skill and model performance. Here a suite of stringent test models for filtering with stochastic parameter estimation is developed based on the Stochastic Parameterization Extended Kalman Filter (SPEKF). These new SPEKF-algorithms systematically correct both multiplicative and additive biases and involve exact formulas for propagating the mean and covariance including the parameters in the test model. A comprehensive study is presented of robust parameter regimes for increasing filtering skill through stochastic parameter estimation for turbulent signals as the observation time and observation noise are varied and even when the forcing is incorrectly specified. The results here provide useful guidelines for filtering turbulent signals in more complex systems with significant model errors.

  7. Sensorless SPMSM Position Estimation Using Position Estimation Error Suppression Control and EKF in Wide Speed Range

    Directory of Open Access Journals (Sweden)

    Zhanshan Wang

    2014-01-01

    Full Text Available The control of a high performance alternating current (AC) motor drive under sensorless operation needs the accurate estimation of rotor position. In this paper, a method of accurately estimating the rotor position by using both complex-number motor-model-based position estimation and a position-estimation-error suppression proportional-integral (PI) controller is proposed for the sensorless control of the surface permanent magnet synchronous motor (SPMSM). In order to guarantee the accuracy of rotor position estimation in the flux-weakening region, a scheme of identifying the permanent magnet flux of the SPMSM by an extended Kalman filter (EKF) is also proposed, which together form an effective combined method for realizing high-accuracy sensorless control of the SPMSM. The simulation results demonstrated the validity and feasibility of the proposed position/speed estimation system.

  8. State estimation bias induced by optimization under uncertainty and error cost asymmetry is likely reflected in perception.

    Science.gov (United States)

    Shimansky, Y P

    2011-05-01

    It is well known from numerous studies that perception can be significantly affected by intended action in many everyday situations, indicating that perception and related decision-making is not a simple, one-way sequence, but a complex iterative cognitive process. However, the underlying functional mechanisms are yet unclear. Based on an optimality approach, a quantitative computational model of one such mechanism has been developed in this study. It is assumed in the model that significant uncertainty about task-related parameters of the environment results in parameter estimation errors and an optimal control system should minimize the cost of such errors in terms of the optimality criterion. It is demonstrated that, if the cost of a parameter estimation error is significantly asymmetrical with respect to error direction, the tendency to minimize error cost creates a systematic deviation of the optimal parameter estimate from its maximum likelihood value. Consequently, optimization of parameter estimate and optimization of control action cannot be performed separately from each other under parameter uncertainty combined with asymmetry of estimation error cost, thus making the certainty equivalence principle non-applicable under those conditions. A hypothesis that not only the action, but also perception itself is biased by the above deviation of parameter estimate is supported by ample experimental evidence. The results provide important insights into the cognitive mechanisms of interaction between sensory perception and planning an action under realistic conditions. Implications for understanding related functional mechanisms of optimal control in the CNS are discussed.

  9. Error estimation in the neural network solution of ordinary differential equations.

    Science.gov (United States)

    Filici, Cristian

    2010-06-01

    In this article a method of error estimation for the neural approximation of the solution of an Ordinary Differential Equation is presented. Some examples of the application of the method support the theory presented. Copyright 2010. Published by Elsevier Ltd.

  10. Computable error estimates of a finite difference scheme for option pricing in exponential Lévy models

    KAUST Repository

    Kiessling, Jonas

    2014-05-06

    Option prices in exponential Lévy models solve certain partial integro-differential equations. This work focuses on developing novel, computable error approximations for a finite difference scheme that is suitable for solving such PIDEs. The scheme was introduced in (Cont and Voltchkova, SIAM J. Numer. Anal. 43(4):1596-1626, 2005). The main results of this work are new estimates of the dominating error terms, namely the time and space discretisation errors. In addition, the leading order terms of the error estimates are determined in a form that is more amenable to computations. The payoff is only assumed to satisfy an exponential growth condition, it is not assumed to be Lipschitz continuous as in previous works. If the underlying Lévy process has infinite jump activity, then the jumps smaller than some (Formula presented.) are approximated by diffusion. The resulting diffusion approximation error is also estimated, with leading order term in computable form, as well as the dependence of the time and space discretisation errors on this approximation. Consequently, it is possible to determine how to jointly choose the space and time grid sizes and the cut off parameter (Formula presented.). © 2014 Springer Science+Business Media Dordrecht.

  11. Estimation of surface area concentration of workplace incidental nanoparticles based on number and mass concentrations

    Science.gov (United States)

    Park, J. Y.; Ramachandran, G.; Raynor, P. C.; Kim, S. W.

    2011-10-01

    Surface area was estimated by three different methods using number and/or mass concentrations obtained from either two or three instruments that are commonly used in the field. The estimated surface area concentrations were compared with reference surface area concentrations (SAREF) calculated from the particle size distributions obtained from a scanning mobility particle sizer and an optical particle counter (OPC). The first estimation method (SAPSD) used particle size distribution measured by a condensation particle counter (CPC) and an OPC. The second method (SAINV1) used an inversion routine based on PM1.0, PM2.5, and number concentrations to reconstruct assumed lognormal size distributions by minimizing the difference between measurements and calculated values. The third method (SAINV2) utilized a simpler inversion method that used PM1.0 and number concentrations to construct a lognormal size distribution with an assumed value of geometric standard deviation. All estimated surface area concentrations were calculated from the reconstructed size distributions. These methods were evaluated using particle measurements obtained in a restaurant, an aluminum die-casting factory, and a diesel engine laboratory. SAPSD was 0.7-1.8 times higher and SAINV1 and SAINV2 were 2.2-8 times higher than SAREF in the restaurant and diesel engine laboratory. In the die casting facility, all estimated surface area concentrations were lower than SAREF. However, the estimated surface area concentration using all three methods had qualitatively similar exposure trends and rankings to those using SAREF within a workplace. This study suggests that surface area concentration estimation based on particle size distribution (SAPSD) is a more accurate and convenient method to estimate surface area concentrations than estimation methods using inversion routines and may be feasible to use for classifying exposure groups and identifying exposure trends.
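    A sketch of the simpler inversion (the SAINV2-type approach) is given below: a single lognormal mode with an assumed geometric standard deviation and particle density is fitted to the measured number and PM1 concentrations via the Hatch-Choate relations, and the implied surface area concentration is reported. The input values and assumed parameters are illustrative only.

```python
import numpy as np

def surface_area_from_n_and_m(number_cm3, pm_ug_m3, gsd=2.0, rho_kg_m3=1000.0):
    """Assume a single lognormal mode with fixed GSD and density, find the count
    median diameter (CMD) that reproduces the measured number and PM1-type mass
    concentrations, then return the implied surface area concentration."""
    n_m3 = number_cm3 * 1e6                       # particles per m^3
    mass_kg_m3 = pm_ug_m3 * 1e-9                  # kg per m^3
    ln2 = np.log(gsd) ** 2
    # mass = n * rho * (pi/6) * CMD^3 * exp(4.5 ln^2 GSD)  ->  solve for CMD
    cmd = (6.0 * mass_kg_m3 / (np.pi * rho_kg_m3 * n_m3 * np.exp(4.5 * ln2))) ** (1.0 / 3.0)
    # surface area = n * pi * CMD^2 * exp(2 ln^2 GSD)   (Hatch-Choate relations)
    sa_m2_m3 = n_m3 * np.pi * cmd**2 * np.exp(2.0 * ln2)
    return cmd * 1e9, sa_m2_m3 * 1e6              # CMD in nm, surface area in um^2/cm^3

# Hypothetical workplace reading: 5e4 particles/cm^3 and 50 ug/m^3 of PM1
print(surface_area_from_n_and_m(5e4, 50.0))
```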

  12. Estimation of surface area concentration of workplace incidental nanoparticles based on number and mass concentrations

    International Nuclear Information System (INIS)

    Park, J. Y.; Ramachandran, G.; Raynor, P. C.; Kim, S. W.

    2011-01-01

    Surface area was estimated by three different methods using number and/or mass concentrations obtained from either two or three instruments that are commonly used in the field. The estimated surface area concentrations were compared with reference surface area concentrations (SAREF) calculated from the particle size distributions obtained from a scanning mobility particle sizer and an optical particle counter (OPC). The first estimation method (SAPSD) used particle size distribution measured by a condensation particle counter (CPC) and an OPC. The second method (SAINV1) used an inversion routine based on PM1.0, PM2.5, and number concentrations to reconstruct assumed lognormal size distributions by minimizing the difference between measurements and calculated values. The third method (SAINV2) utilized a simpler inversion method that used PM1.0 and number concentrations to construct a lognormal size distribution with an assumed value of geometric standard deviation. All estimated surface area concentrations were calculated from the reconstructed size distributions. These methods were evaluated using particle measurements obtained in a restaurant, an aluminum die-casting factory, and a diesel engine laboratory. SAPSD was 0.7–1.8 times higher and SAINV1 and SAINV2 were 2.2–8 times higher than SAREF in the restaurant and diesel engine laboratory. In the die casting facility, all estimated surface area concentrations were lower than SAREF. However, the estimated surface area concentration using all three methods had qualitatively similar exposure trends and rankings to those using SAREF within a workplace. This study suggests that surface area concentration estimation based on particle size distribution (SAPSD) is a more accurate and convenient method to estimate surface area concentrations than estimation methods using inversion routines and may be feasible to use for classifying exposure groups and identifying exposure trends.

  13. Comparison of bipolar vs. tripolar concentric ring electrode Laplacian estimates.

    Science.gov (United States)

    Besio, W; Aakula, R; Dai, W

    2004-01-01

    Potentials on the body surface generated by the heart are functions of both space and time. The 12-lead electrocardiogram (ECG) provides useful global temporal assessment, but it yields limited spatial information due to the smoothing effect caused by the volume conductor. The smoothing complicates identification of multiple simultaneous bioelectrical events. In an attempt to circumvent the smoothing problem, some researchers used a five-point method (FPM) to numerically estimate the analytical solution of the Laplacian with an array of monopolar electrodes. The FPM is generalized to develop a bi-polar concentric ring electrode system. We have developed a new Laplacian ECG sensor, a tri-electrode sensor, based on a nine-point method (NPM) numerical approximation of the analytical Laplacian. For a comparison, the NPM, FPM and compact NPM were calculated over a 400 x 400 mesh with 1/400 spacing. Tri- and bi-electrode sensors were also simulated and their Laplacian estimates were compared against the analytical Laplacian. We found that tri-electrode sensors have a much-improved accuracy with significantly less relative and maximum errors in estimating the Laplacian operator. Apart from the higher accuracy, our new electrode configuration will allow better localization of the electrical activity of the heart than bi-electrode configurations.
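    For reference, the two finite-difference stencils that the bi- and tri-polar configurations are based on can be written down directly; the sketch below evaluates both on an arbitrary smooth test function and prints their absolute errors. Note that the paper's accuracy comparison concerns the concentric-ring sensor realization of these stencils, not the raw stencils themselves, so the relative performance here is purely illustrative.

```python
import numpy as np

h = 0.01
f = lambda x, y: np.sin(3 * x) * np.cos(2 * y)        # smooth test potential
lap_true = lambda x, y: -13.0 * f(x, y)               # analytic Laplacian of f

def fpm(x, y):
    """Five-point estimate of the Laplacian (underlies the bipolar configuration)."""
    return (f(x + h, y) + f(x - h, y) + f(x, y + h) + f(x, y - h) - 4 * f(x, y)) / h**2

def npm(x, y):
    """Nine-point estimate of the Laplacian (underlies the tripolar configuration)."""
    edges = f(x + h, y) + f(x - h, y) + f(x, y + h) + f(x, y - h)
    corners = f(x + h, y + h) + f(x + h, y - h) + f(x - h, y + h) + f(x - h, y - h)
    return (4 * edges + corners - 20 * f(x, y)) / (6 * h**2)

x0, y0 = 0.3, 0.7
print(abs(fpm(x0, y0) - lap_true(x0, y0)))   # five-point absolute error
print(abs(npm(x0, y0) - lap_true(x0, y0)))   # nine-point absolute error
```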

  14. Vector velocity volume flow estimation: Sources of error and corrections applied for arteriovenous fistulas

    DEFF Research Database (Denmark)

    Jensen, Jonas; Olesen, Jacob Bjerring; Stuart, Matthias Bo

    2016-01-01

    A method for vector velocity volume flow estimation is presented, along with an investigation of its sources of error and correction of actual volume flow measurements. Volume flow errors are quantified theoretically by numerical modeling, through flow phantom measurements, and studied in vivo ... radius. The error sources were also studied in vivo under realistic clinical conditions, and the theoretical results were applied for correcting the volume flow errors. Twenty dialysis patients with arteriovenous fistulas were scanned to obtain vector flow maps of fistulas. When fitting an ellipsis ...

  15. Estimation of heading gyrocompass error using a GPS 3DF system: Impact on ADCP measurements

    Directory of Open Access Journals (Sweden)

    Simón Ruiz

    2002-12-01

    Full Text Available Traditionally the horizontal orientation in a ship (heading has been obtained from a gyrocompass. This instrument is still used on research vessels but has an estimated error of about 2-3 degrees, inducing a systematic error in the cross-track velocity measured by an Acoustic Doppler Current Profiler (ADCP. The three-dimensional positioning system (GPS 3DF provides an independent heading measurement with accuracy better than 0.1 degree. The Spanish research vessel BIO Hespérides has been operating with this new system since 1996. For the first time on this vessel, the data from this new instrument are used to estimate gyrocompass error. The methodology we use follows the scheme developed by Griffiths (1994, which compares data from the gyrocompass and the GPS system in order to obtain an interpolated error function. In the present work we apply this methodology on mesoscale surveys performed during the observational phase of the OMEGA project, in the Alboran Sea. The heading-dependent gyrocompass error dominated. Errors in gyrocompass heading of 1.4-3.4 degrees have been found, which give a maximum error in measured cross-track ADCP velocity of 24 cm s-1.
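
    The relation between a heading error and the resulting cross-track velocity error is simple geometry: a heading offset dtheta rotates the measured velocity vector, so the spurious cross-track component is roughly U*sin(dtheta) for a ship speed U. The sketch below reproduces the order of magnitude quoted above; the ship speed is an assumed value, not one stated in the record.

```python
import numpy as np

# Back-of-the-envelope check: cross-track velocity error from a heading error.
U = 6.0  # assumed ship speed over ground, m/s (not stated in the record)
for dtheta_deg in (1.4, 3.4):
    err = U * np.sin(np.radians(dtheta_deg))
    print(f"heading error {dtheta_deg:.1f} deg -> cross-track error "
          f"{100 * err:.0f} cm/s")
```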

  16. Audit of the global carbon budget: estimate errors and their impact on uptake uncertainty

    Science.gov (United States)

    Ballantyne, A. P.; Andres, R.; Houghton, R.; Stocker, B. D.; Wanninkhof, R.; Anderegg, W.; Cooper, L. A.; DeGrandpre, M.; Tans, P. P.; Miller, J. B.; Alden, C.; White, J. W. C.

    2015-04-01

    Over the last 5 decades monitoring systems have been developed to detect changes in the accumulation of carbon (C) in the atmosphere and ocean; however, our ability to detect changes in the behavior of the global C cycle is still hindered by measurement and estimate errors. Here we present a rigorous and flexible framework for assessing the temporal and spatial components of estimate errors and their impact on uncertainty in net C uptake by the biosphere. We present a novel approach for incorporating temporally correlated random error into the error structure of emission estimates. Based on this approach, we conclude that the 2σ uncertainties of the atmospheric growth rate have decreased from 1.2 Pg C yr-1 in the 1960s to 0.3 Pg C yr-1 in the 2000s due to an expansion of the atmospheric observation network. The 2σ uncertainties in fossil fuel emissions have increased from 0.3 Pg C yr-1 in the 1960s to almost 1.0 Pg C yr-1 during the 2000s due to differences in national reporting errors and differences in energy inventories. Lastly, while land use emissions have remained fairly constant, their errors still remain high and thus their global C uptake uncertainty is not trivial. Currently, the absolute errors in fossil fuel emissions rival the total emissions from land use, highlighting the extent to which fossil fuels dominate the global C budget. Because errors in the atmospheric growth rate have decreased faster than errors in total emissions have increased, a ~20% reduction in the overall uncertainty of net C global uptake has occurred. Given all the major sources of error in the global C budget that we could identify, we are 93% confident that terrestrial C uptake has increased and 97% confident that ocean C uptake has increased over the last 5 decades. Thus, it is clear that arguably one of the most vital ecosystem services currently provided by the biosphere is the continued removal of approximately half of atmospheric CO2 emissions from the atmosphere
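
    One way to picture the effect of temporally correlated random error in emission estimates is to compare the uncertainty of a decadal mean computed under independence with the uncertainty obtained when year-to-year errors follow an AR(1) process. The sketch below uses hypothetical numbers (1-sigma of 0.25 Pg C yr-1 and a lag-1 autocorrelation of 0.95), not the values derived in the study.

```python
import numpy as np

rng = np.random.default_rng(42)
# Hypothetical error magnitude and autocorrelation (not the study's values).
sigma, rho, n_years, n_trials = 0.25, 0.95, 10, 20_000

# Simulate AR(1) errors in annual emission estimates and look at the
# resulting spread of the decadal-mean error.
decadal_means = np.empty(n_trials)
for k in range(n_trials):
    e = np.empty(n_years)
    e[0] = rng.normal(0.0, sigma)
    for t in range(1, n_years):
        e[t] = rho * e[t - 1] + rng.normal(0.0, sigma * np.sqrt(1 - rho**2))
    decadal_means[k] = e.mean()

naive_2sigma = 2 * sigma / np.sqrt(n_years)        # assumes independent years
correlated_2sigma = 2 * decadal_means.std(ddof=1)  # from the AR(1) simulation

print(f"2-sigma of decadal mean, independent errors: {naive_2sigma:.3f} Pg C/yr")
print(f"2-sigma of decadal mean, AR(1) errors      : {correlated_2sigma:.3f} Pg C/yr")
```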

  17. An estimate and evaluation of design error effects on nuclear power plant design adequacy

    International Nuclear Information System (INIS)

    Stevenson, J.D.

    1984-01-01

    An area of considerable concern in evaluating Design Control Quality Assurance procedures applied to the design and analysis of nuclear power plants is the level of design error expected or encountered. There is very little published data [1] on the level of error typically found in nuclear power plant design calculations, and even less on the impact such errors would be expected to have on the overall design adequacy of the plant. This paper is concerned with design error associated with civil and mechanical structural design and analysis found in calculations which form part of the Design or Stress reports. These reports are meant to document the design basis and adequacy of the plant. The estimates contained in this paper are based on the personal experience of the author. Table 1 gives a partial listing of the design documentation reviews performed by the author on which the observations contained in this paper are based. In the preparation of any design calculations, it is a utopian dream to presume such calculations can be made error free. The intent of this paper is to define error levels which might be expected in a competent engineering organization employing technically qualified engineers and accepted methods of Design Control. In addition, the effects of these errors on the probability of failure to meet applicable design code requirements are also estimated.

  18. Errors in the estimation method for the rejection of vibrations in adaptive optics systems

    Science.gov (United States)

    Kania, Dariusz

    2017-06-01

    In recent years the problem of the impact of mechanical vibrations on adaptive optics (AO) systems has received renewed attention. These vibrations are damped sinusoidal signals and have a deleterious effect on the system. One software solution for rejecting the vibrations is an adaptive method called AVC (Adaptive Vibration Cancellation), whose procedure has three steps: estimation of the perturbation parameters, estimation of the frequency response of the plant, and update of the reference signal to reject/minimize the vibration. In the first step, the choice of estimation method is very important. A very accurate and fast (below 10 ms) method for estimating these parameters has been presented in several publications in recent years. The method is based on spectrum interpolation and MSD time windows, and it can be used to estimate multifrequency signals. In this paper the estimation method is used within the AVC method to increase the system performance. Several parameters affect the accuracy of the obtained results, e.g. CiR - the number of signal periods in a measurement window, N - the number of samples in the FFT procedure, H - the time window order, SNR, b - the number of ADC bits, and γ - the damping ratio of the tested signal. Systematic errors increase when N, CiR and H decrease and when γ increases. The systematic error is approximately 10^-10 Hz/Hz for N = 2048 and CiR = 0.1. This paper presents equations that can be used to estimate the maximum systematic errors for given values of H, CiR and N before the start of the estimation process.
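
    A simplified version of the idea, frequency estimation of a damped sinusoid from a windowed FFT with spectral interpolation, is sketched below. It uses a Hann window and quadratic peak interpolation rather than the MSD windows and interpolation formulas of the cited method, so it only illustrates where parameters such as CiR, N and the damping ratio enter.

```python
import numpy as np

# Simplified stand-in for the cited estimator (Hann window + quadratic peak fit).
fs, N = 10_000.0, 2048            # sampling rate (Hz), FFT length
f0, gamma = 123.4, 0.002          # true vibration frequency and damping ratio

t = np.arange(N) / fs
x = np.exp(-2 * np.pi * f0 * gamma * t) * np.sin(2 * np.pi * f0 * t)

# Windowed FFT followed by quadratic interpolation around the spectral peak.
w = np.hanning(N)
spec = np.abs(np.fft.rfft(x * w))
k = int(np.argmax(spec[1:-1])) + 1                 # peak bin (avoid edges)
a, b, c = np.log(spec[k - 1]), np.log(spec[k]), np.log(spec[k + 1])
delta = 0.5 * (a - c) / (a - 2 * b + c)            # fractional bin offset
f_est = (k + delta) * fs / N

print(f"true f0 = {f0:.3f} Hz, estimated = {f_est:.3f} Hz, "
      f"error = {abs(f_est - f0):.4f} Hz")
```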

  19. Effects of Measurement Errors on Individual Tree Stem Volume Estimates for the Austrian National Forest Inventory

    Science.gov (United States)

    Ambros Berger; Thomas Gschwantner; Ronald E. McRoberts; Klemens. Schadauer

    2014-01-01

    National forest inventories typically estimate individual tree volumes using models that rely on measurements of predictor variables such as tree height and diameter, both of which are subject to measurement error. The aim of this study was to quantify the impacts of these measurement errors on the uncertainty of the model-based tree stem volume estimates. The impacts...
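
    As an illustration of how such measurement errors propagate, the sketch below pushes random diameter and height errors through a generic allometric stem volume model of the form v = a*d^b*h^c; the coefficients and error magnitudes are hypothetical and are not the Austrian inventory models or their error estimates.

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical allometric stem volume model and measurement error magnitudes.
a, b, c = 5.0e-5, 2.0, 1.0        # v = a * d^b * h^c (d in cm, h in m, v in m^3)
d_true, h_true = 35.0, 28.0       # "true" diameter (cm) and height (m)
sd_d, sd_h = 0.5, 1.5             # assumed measurement error standard deviations

n = 100_000
d_meas = d_true + rng.normal(0.0, sd_d, n)
h_meas = h_true + rng.normal(0.0, sd_h, n)
v = a * d_meas**b * h_meas**c

v_true = a * d_true**b * h_true**c
print(f"true volume            : {v_true:.4f} m^3")
print(f"mean estimated volume  : {v.mean():.4f} m^3")
print(f"volume uncertainty (sd): {v.std(ddof=1):.4f} m^3 "
      f"({100 * v.std(ddof=1) / v_true:.1f}% of true volume)")
```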

  20. Error estimates for near-Real-Time Satellite Soil Moisture as Derived from the Land Parameter Retrieval Model

    NARCIS (Netherlands)

    Parinussa, R.M.; Meesters, A.G.C.A.; Liu, Y.Y.; Dorigo, W.; Wagner, W.; de Jeu, R.A.M.

    2011-01-01

    A time-efficient solution to estimate the error of satellite surface soil moisture from the land parameter retrieval model is presented. The errors are estimated using an analytical solution for soil moisture retrievals from this radiative-transfer-based model that derives soil moisture from

  1. Estimating model error covariances in nonlinear state-space models using Kalman smoothing and the expectation-maximisation algorithm

    KAUST Repository

    Dreano, Denis

    2017-04-05

    Specification and tuning of errors from dynamical models are important issues in data assimilation. In this work, we propose an iterative expectation-maximisation (EM) algorithm to estimate the model error covariances using classical extended and ensemble versions of the Kalman smoother. We show that, for additive model errors, the estimate of the error covariance converges. We also investigate other forms of model error, such as parametric or multiplicative errors. We show that additive Gaussian model error is able to compensate for non additive sources of error in the algorithms we propose. We also demonstrate the limitations of the extended version of the algorithm and recommend the use of the more robust and flexible ensemble version. This article is a proof of concept of the methodology with the Lorenz-63 attractor. We developed an open-source Python library to enable future users to apply the algorithm to their own nonlinear dynamical models.
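
    A minimal, one-dimensional linear-Gaussian sketch of the additive model error case is given below: the E-step runs a Kalman filter and RTS smoother, and the M-step updates the model error variance Q in closed form. The scalar model, its parameter values and the initialization are assumptions for illustration; the paper itself works with extended and ensemble smoothers on the Lorenz-63 system.

```python
import numpy as np

# Assumed 1-D linear-Gaussian model (not the paper's setup):
#   x_t = a*x_{t-1} + w_t, w ~ N(0, Q);   y_t = x_t + v_t, v ~ N(0, R).
rng = np.random.default_rng(0)
a, Q_true, R, T = 0.9, 0.3, 0.5, 500
x = np.zeros(T)
for t in range(1, T):
    x[t] = a * x[t - 1] + rng.normal(scale=np.sqrt(Q_true))
y = x + rng.normal(scale=np.sqrt(R), size=T)

Q = 1.0                                   # initial guess for the model error variance
for _ in range(50):
    # E-step: Kalman filter.
    mf = np.zeros(T); Pf = np.zeros(T)    # filtered mean / variance
    mp = np.zeros(T); Pp = np.zeros(T)    # predicted mean / variance
    mf[0], Pf[0] = y[0], R
    for t in range(1, T):
        mp[t], Pp[t] = a * mf[t - 1], a**2 * Pf[t - 1] + Q
        K = Pp[t] / (Pp[t] + R)
        mf[t] = mp[t] + K * (y[t] - mp[t])
        Pf[t] = (1 - K) * Pp[t]
    # E-step: RTS smoother.
    ms, Ps = mf.copy(), Pf.copy()
    J = np.zeros(T)
    for t in range(T - 2, -1, -1):
        J[t] = Pf[t] * a / Pp[t + 1]
        ms[t] = mf[t] + J[t] * (ms[t + 1] - mp[t + 1])
        Ps[t] = Pf[t] + J[t]**2 * (Ps[t + 1] - Pp[t + 1])
    # M-step: closed-form update of Q from smoothed moments.
    num = 0.0
    for t in range(1, T):
        cross = J[t - 1] * Ps[t]          # Cov(x_t, x_{t-1} | all data)
        num += (Ps[t] + a**2 * Ps[t - 1] - 2 * a * cross
                + (ms[t] - a * ms[t - 1])**2)
    Q = num / (T - 1)

print(f"estimated Q = {Q:.3f}  (true Q = {Q_true})")
```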

  2. In vivo estimation of target registration errors during augmented reality laparoscopic surgery.

    Science.gov (United States)

    Thompson, Stephen; Schneider, Crispin; Bosi, Michele; Gurusamy, Kurinchi; Ourselin, Sébastien; Davidson, Brian; Hawkes, David; Clarkson, Matthew J

    2018-06-01

    Successful use of augmented reality for laparoscopic surgery requires that the surgeon has a thorough understanding of the likely accuracy of any overlay. Whilst the accuracy of such systems can be estimated in the laboratory, it is difficult to extend such methods to the in vivo clinical setting. Herein we describe a novel method that enables the surgeon to estimate in vivo errors during use. We show that the method enables quantitative evaluation of in vivo data gathered with the SmartLiver image guidance system. The SmartLiver system utilises an intuitive display to enable the surgeon to compare the positions of landmarks visible in both a projected model and in the live video stream. From this the surgeon can estimate the system accuracy when using the system to locate subsurface targets not visible in the live video. Visible landmarks may be either point or line features. We test the validity of the algorithm using an anatomically representative liver phantom, applying simulated perturbations to achieve clinically realistic overlay errors. We then apply the algorithm to in vivo data. The phantom results show that using projected errors of surface features provides a reliable predictor of subsurface target registration error for a representative human liver shape. Applying the algorithm to in vivo data gathered with the SmartLiver image-guided surgery system shows that the system is capable of accuracies around 12 mm; however, achieving this reliably remains a significant challenge. We present an in vivo quantitative evaluation of the SmartLiver image-guided surgery system, together with a validation of the evaluation algorithm. This is the first quantitative in vivo analysis of an augmented reality system for laparoscopic surgery.

  3. Error estimation and global fitting in transverse-relaxation dispersion experiments to determine chemical-exchange parameters

    International Nuclear Information System (INIS)

    Ishima, Rieko; Torchia, Dennis A.

    2005-01-01

    Off-resonance effects can introduce significant systematic errors in R2 measurements in constant-time Carr-Purcell-Meiboom-Gill (CPMG) transverse relaxation dispersion experiments. For an off-resonance chemical shift of 500 Hz, 15N relaxation dispersion profiles obtained from experiment and computer simulation indicated a systematic error of ca. 3%. This error is three- to five-fold larger than the random error in R2 caused by noise. Good estimates of total R2 uncertainty are critical in order to obtain accurate estimates of optimized chemical exchange parameters and their uncertainties derived from χ2 minimization of a target function. Here, we present a simple empirical approach that provides a good estimate of the total error (systematic + random) in 15N R2 values measured for the HIV protease. The advantage of this empirical error estimate is that it is applicable even when some of the factors that contribute to the off-resonance error are not known. These errors are incorporated into a χ2 minimization protocol, in which the Carver-Richards equation is used to fit the observed R2 dispersion profiles, that yields optimized chemical exchange parameters and their confidence limits. Optimized parameters are also derived, using the same protein sample and data-fitting protocol, from 1H R2 measurements in which systematic errors are negligible. Although 1H and 15N relaxation profiles of individual residues were well fit, the optimized exchange parameters had large uncertainties (confidence limits). In contrast, when a single pair of exchange parameters (the exchange lifetime, τex, and the fractional population, pa) were constrained to globally fit all R2 profiles for residues in the dimer interface of the protein, confidence limits were less than 8% for all optimized exchange parameters. In addition, F-tests showed that the quality of the fits obtained using τex, pa as global parameters was not improved when these parameters were free to fit the R

  4. Adaptive finite element analysis of incompressible viscous flow using posteriori error estimation and control of node density distribution

    International Nuclear Information System (INIS)

    Yashiki, Taturou; Yagawa, Genki; Okuda, Hiroshi

    1995-01-01

    The adaptive finite element method based on a posteriori error estimation is known to be a powerful technique for analyzing practical engineering problems, since it removes the subjective aspect of mesh subdivision and gives high accuracy at relatively low computational cost. In the adaptive procedure, both the error estimation and the mesh generation according to the error estimator are essential. In this paper, the adaptive procedure is realized by automatic mesh generation based on control of the node density distribution, which is decided according to the error estimator. The global percentage error, CPU time, degrees of freedom and accuracy of the solution of the adaptive procedure are compared with those of the conventional method using regular meshes. Numerical examples such as driven cavity flows at various Reynolds numbers and flows around a cylinder have shown the very high performance of the proposed adaptive procedure. (author)

  5. Investigating the error sources of the online state of charge estimation methods for lithium-ion batteries in electric vehicles

    Science.gov (United States)

    Zheng, Yuejiu; Ouyang, Minggao; Han, Xuebing; Lu, Languang; Li, Jianqiu

    2018-02-01

    State of charge (SOC) estimation is generally acknowledged as one of the most important functions of the battery management system for lithium-ion batteries in new energy vehicles. Although every effort is made for the various online SOC estimation methods to reliably increase estimation accuracy within the limited on-chip resources, little of the literature discusses the error sources of those SOC estimation methods. This paper first reviews the commonly studied SOC estimation methods using a conventional classification. A novel perspective focusing on the error analysis of the SOC estimation methods is then proposed. SOC estimation methods are analyzed from the viewpoints of the measured values, models, algorithms and state parameters. Subsequently, error flow charts are proposed to analyze the error sources, from the signal measurement to the models and algorithms, for the widely used online SOC estimation methods in new energy vehicles. Finally, with consideration of the working conditions, the choice of more reliable and applicable SOC estimation methods is discussed, and future development of the promising online SOC estimation methods is suggested.

  6. Computational error estimates for Monte Carlo finite element approximation with log normal diffusion coefficients

    KAUST Repository

    Sandberg, Mattias

    2015-01-07

    The Monte Carlo (and Multi-level Monte Carlo) finite element method can be used to approximate observables of solutions to diffusion equations with log normal distributed diffusion coefficients, e.g. modelling ground water flow. Typical models use log normal diffusion coefficients with Hölder regularity of order up to 1/2 a.s. This low regularity implies that the high frequency finite element approximation error (i.e. the error from frequencies larger than the mesh frequency) is not negligible and can be larger than the computable low frequency error. This talk will address how the total error can be estimated by the computable error.

  7. Adaptive finite element techniques for the Maxwell equations using implicit a posteriori error estimates

    NARCIS (Netherlands)

    Harutyunyan, D.; Izsak, F.; van der Vegt, Jacobus J.W.; Bochev, Mikhail A.

    For the adaptive solution of the Maxwell equations on three-dimensional domains with Nédélec edge finite element methods, we consider an implicit a posteriori error estimation technique. On each element of the tessellation an equation for the error is formulated and solved with a properly chosen

  8. An improved estimator for the hydration of fat-free mass from in vivo measurements subject to additive technical errors

    International Nuclear Information System (INIS)

    Kinnamon, Daniel D; Ludwig, David A; Lipshultz, Steven E; Miller, Tracie L; Lipsitz, Stuart R

    2010-01-01

    The hydration of fat-free mass, or hydration fraction (HF), is often defined as a constant body composition parameter in a two-compartment model and then estimated from in vivo measurements. We showed that the widely used estimator for the HF parameter in this model, the mean of the ratios of measured total body water (TBW) to fat-free mass (FFM) in individual subjects, can be inaccurate in the presence of additive technical errors. We then proposed a new instrumental variables estimator that accurately estimates the HF parameter in the presence of such errors. In Monte Carlo simulations, the mean of the ratios of TBW to FFM was an inaccurate estimator of the HF parameter, and inferences based on it had actual type I error rates more than 13 times the nominal 0.05 level under certain conditions. The instrumental variables estimator was accurate and maintained an actual type I error rate close to the nominal level in all simulations. When estimating and performing inference on the HF parameter, the proposed instrumental variables estimator should yield accurate estimates and correct inferences in the presence of additive technical errors, but the mean of the ratios of TBW to FFM in individual subjects may not
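
    The attenuation problem and an instrumental-variables fix can be illustrated with a toy simulation: when fat-free mass is measured with additive error, the mean of individual TBW/FFM ratios drifts away from the true hydration fraction, while using a second, independently errored FFM measurement as an instrument recovers it. The specific numbers and the use of a replicate measurement as the instrument are assumptions for illustration, not the estimator exactly as defined in the paper.

```python
import numpy as np

rng = np.random.default_rng(3)
n, hf_true = 2_000, 0.732                           # subjects, true hydration fraction

# Hypothetical data (all magnitudes are illustrative assumptions).
ffm_true = rng.normal(45.0, 8.0, n)                 # true fat-free mass (kg)
tbw = hf_true * ffm_true + rng.normal(0.0, 0.8, n)  # measured TBW with additive error
ffm1 = ffm_true + rng.normal(0.0, 5.0, n)           # FFM measurement 1
ffm2 = ffm_true + rng.normal(0.0, 5.0, n)           # independent replicate (instrument)

ratio_estimate = np.mean(tbw / ffm1)                # mean of individual ratios (biased)
iv_estimate = np.cov(tbw, ffm2)[0, 1] / np.cov(ffm1, ffm2)[0, 1]

print(f"true HF                : {hf_true:.3f}")
print(f"mean-of-ratios estimate: {ratio_estimate:.3f}")
print(f"IV estimate            : {iv_estimate:.3f}")
```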

  9. Performance of refractometry in quantitative estimation of isotopic concentration of heavy water in nuclear reactor

    International Nuclear Information System (INIS)

    Dhole, K.; Roy, M.; Ghosh, S.; Datta, A.; Tripathy, M.K.; Bose, H.

    2013-01-01

    Highlights: ► Rapid analysis of heavy water samples, with precise temperature control. ► Entire composition range covered. ► Both variations in mole and wt.% of D2O in the heavy water sample studied. ► Standard error of calibration and prediction were estimated. - Abstract: The method of refractometry has been investigated for the quantitative estimation of isotopic concentration of heavy water (D2O) in a simulated water sample. Feasibility of refractometry as an excellent analytical technique for rapid and non-invasive determination of D2O concentration in water samples has been amply demonstrated. Temperature of the samples has been precisely controlled to eliminate the effect of temperature fluctuation on refractive index measurement. The method is found to exhibit a reasonable analytical response to its calibration performance over the purity range of 0–100% D2O. An accuracy of below ±1% in the measurement of isotopic purity of heavy water for the entire range could be achieved
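
    The calibration workflow behind figures such as the standard error of calibration (SEC) and standard error of prediction (SEP) can be sketched as a linear fit of refractive index against D2O fraction followed by inverse prediction on held-out samples. The refractive index values below are synthetic and the temperature dependence is not modeled, so this is only a schematic of the calibration and validation steps.

```python
import numpy as np

rng = np.random.default_rng(11)

# Synthetic calibration data: refractive index decreasing roughly linearly with
# D2O fraction (values are illustrative, not measured).
def refractive_index(d2o_percent):
    return 1.3330 - 4.7e-5 * d2o_percent

cal_x = np.linspace(0.0, 100.0, 11)                       # calibration standards (% D2O)
cal_y = refractive_index(cal_x) + rng.normal(0, 2e-6, cal_x.size)
val_x = rng.uniform(0.0, 100.0, 20)                       # validation samples
val_y = refractive_index(val_x) + rng.normal(0, 2e-6, val_x.size)

# Fit n = a + b * (% D2O), then invert to predict concentration.
b, a = np.polyfit(cal_x, cal_y, 1)
predict = lambda n: (n - a) / b

sec = np.sqrt(np.mean((predict(cal_y) - cal_x) ** 2))     # standard error of calibration
sep = np.sqrt(np.mean((predict(val_y) - val_x) ** 2))     # standard error of prediction
print(f"SEC = {sec:.3f} % D2O, SEP = {sep:.3f} % D2O")
```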

  10. Robustness of SOC Estimation Algorithms for EV Lithium-Ion Batteries against Modeling Errors and Measurement Noise

    Directory of Open Access Journals (Sweden)

    Xue Li

    2015-01-01

    Full Text Available State of charge (SOC) is one of the most important parameters in a battery management system (BMS). There are numerous algorithms for SOC estimation, mostly of model-based observer/filter types such as Kalman filters, closed-loop observers, and robust observers. Modeling errors and measurement noises have a critical impact on the accuracy of SOC estimation in these algorithms. This paper is a comparative study of the robustness of SOC estimation algorithms against modeling errors and measurement noises. By using a typical battery platform for vehicle applications with sensor noise and battery aging characterization, three popular and representative SOC estimation methods (extended Kalman filter, PI-controlled observer, and H∞ observer) are compared on such robustness. The simulation and experimental results demonstrate the deterioration of SOC estimation accuracy under modeling errors resulting from aging and under larger measurement noise, which is quantitatively characterized. The findings of this paper provide useful information on the following aspects: (1) how SOC estimation accuracy depends on modeling reliability and voltage measurement accuracy; (2) pros and cons of typical SOC estimators in their robustness and reliability; (3) guidelines for requirements on battery system identification and sensor selection.

  11. Estimation of the wind turbine yaw error by support vector machines

    DEFF Research Database (Denmark)

    Sheibat-Othman, Nida; Othman, Sami; Tayari, Raoaa

    2015-01-01

    Wind turbine yaw error information is of high importance in controlling wind turbine power and structural load. Normally used wind vanes are imprecise. In this work, the estimation of yaw error in wind turbines is studied using support vector machines for regression (SVR). As the methodology...... is data-based, simulated data from a high fidelity aero-elastic model is used for learning. The model simulates a variable speed horizontal-axis wind turbine composed of three blades and a full converter. Both partial load (blade angles fixed at 0 deg) and full load zones (active pitch actuators...

  12. A new three-band algorithm for estimating chlorophyll concentrations in turbid inland lakes

    International Nuclear Information System (INIS)

    Duan Hongtao; Ma Ronghua; Zhao Chenlu; Zhou Lin; Shang Linlin; Zhang Yuanzhi; Loiselle, Steven Arthur; Xu Jingping

    2010-01-01

    A new three-band model was developed to estimate chlorophyll-a concentrations in turbid inland waters. This model makes a number of important improvements with respect to the three-band model commonly used, including lower restrictions on wavelength optimization and the use of coefficients which represent specific inherent optical properties. Results showed that the new model provides a significantly higher determination coefficient and lower root mean squared error (RMSE) with respect to the original model for upwelling data from Taihu Lake, China. The new model was tested using simulated data for the MERIS and GOCI satellite systems, showing high correlations with the former and poorer correlations with the latter, principally due to the lack of a 709 nm centered waveband. The new model provides numerous advantages, making it a suitable alternative for chlorophyll-a estimations in turbid and eutrophic waters.
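
    The three-band architecture referred to above is commonly written as chl-a ≈ A·[R(λ1)^-1 − R(λ2)^-1]·R(λ3) + B, with the bands often taken near 665, 700-710 and 730-760 nm. The sketch below fits the two coefficients to synthetic reflectance data; the band positions, reflectance model and all numbers are illustrative, not the optimized wavelengths or coefficients of the new model.

```python
import numpy as np

rng = np.random.default_rng(5)

# Synthetic data set (illustrative only): chlorophyll-a (mg m-3) and
# reflectances at three bands, with absorption at 665 nm increasing with chl-a.
n = 60
chl = rng.uniform(5.0, 80.0, n)
r665 = 0.02 / (1.0 + 0.015 * chl) + rng.normal(0, 2e-4, n)   # red absorption band
r709 = 0.02 + rng.normal(0, 2e-4, n)                         # reference band
r754 = 0.018 + rng.normal(0, 2e-4, n)                        # NIR band

# Three-band index and a linear fit of chl-a against it.
index = (1.0 / r665 - 1.0 / r709) * r754
A, B = np.polyfit(index, chl, 1)
chl_pred = A * index + B
rmse = np.sqrt(np.mean((chl_pred - chl) ** 2))

print(f"chl-a = {A:.2f} * index + {B:.2f}, RMSE = {rmse:.2f} mg m-3")
```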

  13. Anisotropic mesh adaptation for solution of finite element problems using hierarchical edge-based error estimates

    Energy Technology Data Exchange (ETDEWEB)

    Lipnikov, Konstantin [Los Alamos National Laboratory]; Agouzal, Abdellatif [UNIV DE LYON]; Vassilevski, Yuri [Los Alamos National Laboratory]

    2009-01-01

    We present a new technology for generating meshes minimizing the interpolation and discretization errors or their gradients. The key element of this methodology is construction of a space metric from edge-based error estimates. For a mesh with N_h triangles, the error is proportional to N_h^(-1) and the gradient of error is proportional to N_h^(-1/2), which are optimal asymptotics. The methodology is verified with numerical experiments.

  14. Estimates and Standard Errors for Ratios of Normalizing Constants from Multiple Markov Chains via Regeneration.

    Science.gov (United States)

    Doss, Hani; Tan, Aixin

    2014-09-01

    In the classical biased sampling problem, we have k densities π1(·), …, πk(·), each known up to a normalizing constant, i.e. for l = 1, …, k, πl(·) = νl(·)/ml, where νl(·) is a known function and ml is an unknown constant. For each l, we have an iid sample from πl(·), and the problem is to estimate the ratios ml/ms for all l and all s. This problem arises frequently in several situations in both frequentist and Bayesian inference. An estimate of the ratios was developed and studied by Vardi and his co-workers over two decades ago, and there has been much subsequent work on this problem from many different perspectives. In spite of this, there are no rigorous results in the literature on how to estimate the standard error of the estimate. We present a class of estimates of the ratios of normalizing constants that are appropriate for the case where the samples from the πl's are not necessarily iid sequences, but are Markov chains. We also develop an approach based on regenerative simulation for obtaining standard errors for the estimates of ratios of normalizing constants. These standard error estimates are valid for both the iid case and the Markov chain case.
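
    In the simplest iid setting, a ratio of normalizing constants can be estimated by importance sampling: m1/m2 ≈ mean of ν1(X)/ν2(X) for X drawn from π2, with the usual iid standard error. The sketch below does exactly this for two Gaussian shapes with equal normalizing constants (so the true ratio is 1); the Markov-chain and regeneration machinery of the paper is not reproduced here.

```python
import numpy as np

# Minimal sketch (iid importance-sampling case, not the Markov-chain /
# regeneration setting of the paper).
rng = np.random.default_rng(0)

nu1 = lambda x: np.exp(-0.5 * (x - 1.0) ** 2)     # unnormalized N(1, 1) density
nu2 = lambda x: np.exp(-0.5 * x ** 2)             # unnormalized N(0, 1) density

x = rng.normal(0.0, 1.0, size=200_000)            # sample from pi2 = N(0, 1)
w = nu1(x) / nu2(x)
ratio_hat = w.mean()                              # estimates m1 / m2 (true value = 1)
se_hat = w.std(ddof=1) / np.sqrt(x.size)          # naive iid standard error

print(f"m1/m2 estimate = {ratio_hat:.3f} +/- {se_hat:.3f}")
```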

  15. L∞-error estimate for a system of elliptic quasivariational inequalities

    Directory of Open Access Journals (Sweden)

    M. Boulbrachene

    2003-01-01

    Full Text Available We deal with the numerical analysis of a system of elliptic quasivariational inequalities (QVIs). Under W2,p(Ω)-regularity of the continuous solution, a quasi-optimal L∞-convergence of a piecewise linear finite element method is established, involving a monotone algorithm of Bensoussan-Lions type and standard uniform error estimates known for elliptic variational inequalities (VIs).

  16. Impact of Channel Estimation Errors on Multiuser Detection via the Replica Method

    Directory of Open Access Journals (Sweden)

    Li Husheng

    2005-01-01

    Full Text Available For practical wireless DS-CDMA systems, channel estimation is imperfect due to noise and interference. In this paper, the impact of channel estimation errors on multiuser detection (MUD is analyzed under the framework of the replica method. System performance is obtained in the large system limit for optimal MUD, linear MUD, and turbo MUD, and is validated by numerical results for finite systems.

  17. On the mean squared error of the ridge estimator of the covariance and precision matrix

    NARCIS (Netherlands)

    van Wieringen, Wessel N.

    2017-01-01

    For a suitably chosen ridge penalty parameter, the ridge regression estimator uniformly dominates the maximum likelihood regression estimator in terms of the mean squared error. Analogous results for the ridge maximum likelihood estimators of covariance and precision matrix are presented.
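
    The flavor of such a result can be checked numerically: shrinking the sample covariance toward a scaled identity target typically lowers the mean squared error relative to the sample (maximum likelihood) covariance when the sample size is small. The sketch below is a generic linear-shrinkage comparison with an arbitrary fixed penalty, not the specific ridge estimator or the optimal penalty of the paper.

```python
import numpy as np

rng = np.random.default_rng(2)
# Arbitrary dimension, sample size, trial count and penalty (illustration only).
p, n, n_trials, lam = 10, 15, 2_000, 0.3

# True covariance with an AR(1)-style correlation structure.
idx = np.arange(p)
sigma_true = 0.6 ** np.abs(idx[:, None] - idx[None, :])
chol = np.linalg.cholesky(sigma_true)

mse_ml, mse_ridge = 0.0, 0.0
for _ in range(n_trials):
    x = rng.normal(size=(n, p)) @ chol.T
    s = np.cov(x, rowvar=False, bias=True)          # ML (sample) covariance estimate
    target = np.trace(s) / p * np.eye(p)            # scaled identity shrinkage target
    s_ridge = (1 - lam) * s + lam * target          # linear shrinkage estimator
    mse_ml += np.sum((s - sigma_true) ** 2)
    mse_ridge += np.sum((s_ridge - sigma_true) ** 2)

print(f"mean squared error, ML estimate       : {mse_ml / n_trials:.3f}")
print(f"mean squared error, shrinkage estimate: {mse_ridge / n_trials:.3f}")
```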

  18. An Estimation of Human Error Probability of Filtered Containment Venting System Using Dynamic HRA Method

    Energy Technology Data Exchange (ETDEWEB)

    Jang, Seunghyun; Jae, Moosung [Hanyang University, Seoul (Korea, Republic of)

    2016-10-15

    The human failure events (HFEs) are considered in the development of system fault trees as well as accident sequence event trees as part of Probabilistic Safety Assessment (PSA). Several methods for analyzing human error, such as the Technique for Human Error Rate Prediction (THERP), Human Cognitive Reliability (HCR), and Standardized Plant Analysis Risk-Human Reliability Analysis (SPAR-H), are used, and new methods for human reliability analysis (HRA) are currently under development. This paper presents a dynamic HRA method for assessing human failure events, and an estimation of the human error probability for the filtered containment venting system (FCVS) is performed. The action associated with implementation of containment venting during a station blackout sequence is used as an example. In this report, the dynamic HRA method was used to analyze an FCVS-related operator action. The distributions of the required time and the available time were developed by the MAAP code and LHS sampling. Though the numerical calculations given here are only for illustrative purposes, the dynamic HRA method can be a useful tool for estimating human error probability, and it can be applied to any kind of operator action, including severe accident management strategies.
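
    The core of the quantified step, turning the two time distributions into a human error probability, amounts to estimating P(required time > available time) by sampling. The lognormal parameters below are placeholders, and plain Monte Carlo is used instead of Latin hypercube sampling and the MAAP-derived distributions of the paper.

```python
import numpy as np

rng = np.random.default_rng(9)
n = 1_000_000

# Placeholder lognormal distributions (minutes) for the operator action of
# initiating filtered containment venting: time required vs. time available.
t_required = rng.lognormal(mean=np.log(30.0), sigma=0.4, size=n)
t_available = rng.lognormal(mean=np.log(90.0), sigma=0.3, size=n)

hep = np.mean(t_required > t_available)   # probability the action is not completed in time
print(f"estimated human error probability = {hep:.2e}")
```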

  19. Wrinkles in the rare biosphere: Pyrosequencing errors can lead to artificial inflation of diversity estimates

    Energy Technology Data Exchange (ETDEWEB)

    Kunin, Victor; Engelbrektson, Anna; Ochman, Howard; Hugenholtz, Philip

    2009-08-01

    Massively parallel pyrosequencing of the small subunit (16S) ribosomal RNA gene has revealed that the extent of rare microbial populations in several environments, the 'rare biosphere', is orders of magnitude higher than previously thought. One important caveat with this method is that sequencing error could artificially inflate diversity estimates. Although the per-base error of 16S rDNA amplicon pyrosequencing has been shown to be as good as or lower than Sanger sequencing, no direct assessments of pyrosequencing errors on diversity estimates have been reported. Using only Escherichia coli MG1655 as a reference template, we find that 16S rDNA diversity is grossly overestimated unless relatively stringent read quality filtering and low clustering thresholds are applied. In particular, the common practice of removing reads with unresolved bases and anomalous read lengths is insufficient to ensure accurate estimates of microbial diversity. Furthermore, common and reproducible homopolymer length errors can result in relatively abundant spurious phylotypes further confounding data interpretation. We suggest that stringent quality-based trimming of 16S pyrotags and clustering thresholds no greater than 97% identity should be used to avoid overestimates of the rare biosphere.

  20. Motoneuron axon pathfinding errors in zebrafish: Differential effects related to concentration and timing of nicotine exposure

    International Nuclear Information System (INIS)

    Menelaou, Evdokia; Paul, Latoya T.; Perera, Surangi N.; Svoboda, Kurt R.

    2015-01-01

    Nicotine exposure during embryonic stages of development can affect many neurodevelopmental processes. In the developing zebrafish, exposure to nicotine was reported to cause axonal pathfinding errors in the later born secondary motoneurons (SMNs). These alterations in SMN axon morphology coincided with muscle degeneration at high nicotine concentrations (15–30 μM). Previous work showed that the paralytic mutant zebrafish known as sofa potato exhibited nicotine-induced effects onto SMN axons at these high concentrations but in the absence of any muscle deficits, indicating that pathfinding errors could occur independent of muscle effects. In this study, we used varying concentrations of nicotine at different developmental windows of exposure to specifically isolate its effects onto subpopulations of motoneuron axons. We found that nicotine exposure can affect SMN axon morphology in a dose-dependent manner. At low concentrations of nicotine, SMN axons exhibited pathfinding errors, in the absence of any nicotine-induced muscle abnormalities. Moreover, the nicotine exposure paradigms used affected the 3 subpopulations of SMN axons differently, but the dorsal projecting SMN axons were primarily affected. We then identified morphologically distinct pathfinding errors that best described the nicotine-induced effects on dorsal projecting SMN axons. To test whether SMN pathfinding was potentially influenced by alterations in the early born primary motoneuron (PMN), we performed dual labeling studies, where both PMN and SMN axons were simultaneously labeled with antibodies. We show that only a subset of the SMN axon pathfinding errors coincided with abnormal PMN axonal targeting in nicotine-exposed zebrafish. We conclude that nicotine exposure can exert differential effects depending on the levels of nicotine and developmental exposure window. - Highlights: • Embryonic nicotine exposure can specifically affect secondary motoneuron axons in a dose-dependent manner.

  1. Motoneuron axon pathfinding errors in zebrafish: Differential effects related to concentration and timing of nicotine exposure

    Energy Technology Data Exchange (ETDEWEB)

    Menelaou, Evdokia; Paul, Latoya T. [Department of Biological Sciences, Louisiana State University, Baton Rouge, LA 70803 (United States); Perera, Surangi N. [Joseph J. Zilber School of Public Health, University of Wisconsin — Milwaukee, Milwaukee, WI 53205 (United States); Svoboda, Kurt R., E-mail: svobodak@uwm.edu [Department of Biological Sciences, Louisiana State University, Baton Rouge, LA 70803 (United States); Joseph J. Zilber School of Public Health, University of Wisconsin — Milwaukee, Milwaukee, WI 53205 (United States)

    2015-04-01

    Nicotine exposure during embryonic stages of development can affect many neurodevelopmental processes. In the developing zebrafish, exposure to nicotine was reported to cause axonal pathfinding errors in the later born secondary motoneurons (SMNs). These alterations in SMN axon morphology coincided with muscle degeneration at high nicotine concentrations (15–30 μM). Previous work showed that the paralytic mutant zebrafish known as sofa potato exhibited nicotine-induced effects onto SMN axons at these high concentrations but in the absence of any muscle deficits, indicating that pathfinding errors could occur independent of muscle effects. In this study, we used varying concentrations of nicotine at different developmental windows of exposure to specifically isolate its effects onto subpopulations of motoneuron axons. We found that nicotine exposure can affect SMN axon morphology in a dose-dependent manner. At low concentrations of nicotine, SMN axons exhibited pathfinding errors, in the absence of any nicotine-induced muscle abnormalities. Moreover, the nicotine exposure paradigms used affected the 3 subpopulations of SMN axons differently, but the dorsal projecting SMN axons were primarily affected. We then identified morphologically distinct pathfinding errors that best described the nicotine-induced effects on dorsal projecting SMN axons. To test whether SMN pathfinding was potentially influenced by alterations in the early born primary motoneuron (PMN), we performed dual labeling studies, where both PMN and SMN axons were simultaneously labeled with antibodies. We show that only a subset of the SMN axon pathfinding errors coincided with abnormal PMN axonal targeting in nicotine-exposed zebrafish. We conclude that nicotine exposure can exert differential effects depending on the levels of nicotine and developmental exposure window. - Highlights: • Embryonic nicotine exposure can specifically affect secondary motoneuron axons in a dose-dependent manner.

  2. Pollutant forecasting error based on persistence of wind direction

    International Nuclear Information System (INIS)

    Cooper, R.E.

    1978-01-01

    The purpose of this report is to provide a means of estimating the reliability of forecasts of downwind pollutant concentrations from atmospheric puff releases. These forecasts are based on assuming the persistence of wind direction as determined at the time of release. This initial forecast will be used to deploy survey teams, to predict population centers that may be affected, and to estimate the amount of time available for emergency response. Reliability of forecasting is evaluated by developing a cumulative probability distribution of error as a function of elapsed time following an assumed release. The cumulative error is determined by comparing the forecast pollutant concentration with the concentration measured by sampling along the real-time meteorological trajectory. It may be concluded that the assumption of meteorological persistence for emergency response is not very good for periods longer than 3 hours. Even within this period, the possibility for large error exists due to wind direction shifts. These shifts could affect population areas totally different from those areas first indicated

  3. A feasibility study of mutual information based setup error estimation for radiotherapy

    International Nuclear Information System (INIS)

    Kim, Jeongtae; Fessler, Jeffrey A.; Lam, Kwok L.; Balter, James M.; Haken, Randall K. ten

    2001-01-01

    We have investigated a fully automatic setup error estimation method that aligns DRRs (digitally reconstructed radiographs) from a three-dimensional planning computed tomography image onto two-dimensional radiographs that are acquired in a treatment room. We have chosen a MI (mutual information)-based image registration method, hoping for robustness to intensity differences between the DRRs and the radiographs. The MI-based estimator is fully automatic since it is based on the image intensity values without segmentation. Using 10 repeated scans of an anthropomorphic chest phantom in one position and two single scans in two different positions, we evaluated the performance of the proposed method and a correlation-based method against the setup error determined by fiducial marker-based method. The mean differences between the proposed method and the fiducial marker-based method were smaller than 1 mm for translational parameters and 0.8 degree for rotational parameters. The standard deviations of estimates from the proposed method due to detector noise were smaller than 0.3 mm and 0.07 degree for the translational parameters and rotational parameters, respectively
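
    The core ingredient, a mutual information score computed from a joint intensity histogram and maximized over pose parameters, can be illustrated with a one-parameter translation search on synthetic images. The images, intensity mapping and search range below are made up; the actual system registers DRRs against treatment-room radiographs over full rigid-body transformations.

```python
import numpy as np

def mutual_information(a, b, bins=32):
    """Mutual information between two equally sized images via a joint histogram."""
    hist, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = hist / hist.sum()
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)
    nz = pxy > 0
    return np.sum(pxy[nz] * np.log(pxy[nz] / np.outer(px, py)[nz]))

# Hypothetical "DRR" and a shifted "radiograph" with a different intensity mapping.
rng = np.random.default_rng(0)
drr = np.cumsum(np.cumsum(rng.normal(size=(128, 128)), 0), 1)   # smooth-ish structure
true_shift = 3
radiograph = np.roll(drr, true_shift, axis=0) ** 2 + rng.normal(0, 1, drr.shape)

# Exhaustive 1-D translation search maximizing mutual information.
shifts = range(-8, 9)
scores = [mutual_information(np.roll(drr, s, axis=0), radiograph) for s in shifts]
best = list(shifts)[int(np.argmax(scores))]
print(f"estimated shift = {best} (true = {true_shift})")
```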

  4. Estimation methods with ordered exposure subject to measurement error and missingness in semi-ecological design

    Directory of Open Access Journals (Sweden)

    Kim Hyang-Mi

    2012-09-01

    Full Text Available Abstract Background In epidemiological studies, it is often not possible to accurately measure the exposures of participants even if their response variable can be measured without error. When there are several groups of subjects, occupational epidemiologists employ a group-based strategy (GBS) for exposure assessment to reduce bias due to measurement errors: individuals in a group/job within the study sample are commonly assigned the sample mean of exposure measurements from their group in evaluating the effect of exposure on the response. Therefore, exposure is estimated on an ecological level while health outcomes are ascertained for each subject. Such a study design leads to negligible bias in risk estimates when group means are estimated from 'large' samples. However, in many cases, only a small number of observations are available to estimate the group means, and this causes bias in the observed exposure-disease association. Also, the analysis in a semi-ecological design may involve exposure data with the majority missing and the rest observed with measurement errors, and complete response data collected with ascertainment. Methods In workplaces, groups/jobs are naturally ordered and this could be incorporated in the estimation procedure by constrained estimation methods together with the expectation and maximization (EM) algorithms for regression models having measurement error and missing values. Four methods were compared by a simulation study: naive complete-case analysis, GBS, the constrained GBS (CGBS), and the constrained expectation and maximization (CEM). We illustrated the methods in the analysis of decline in lung function due to exposures to carbon black. Results Naive and GBS approaches were shown to be inadequate when the number of exposure measurements is too small to accurately estimate group means. The CEM method appears to be best among them when within each exposure group at least a 'moderate' number of individuals have their

  5. On the BER and capacity analysis of MIMO MRC systems with channel estimation error

    KAUST Repository

    Yang, Liang

    2011-10-01

    In this paper, we investigate the effect of channel estimation error on the capacity and bit-error rate (BER) of multiple-input multiple-output (MIMO) systems with transmit maximal ratio transmission (MRT) and receive maximal ratio combining (MRC) over uncorrelated Rayleigh fading channels. We first derive the ergodic (average) capacity expressions for such systems when power adaptation is applied at the transmitter. The exact capacity expression for the uniform power allocation case is also presented. Furthermore, to investigate the diversity order of the MIMO MRT-MRC scheme, we derive the BER performance under a uniform power allocation policy. We also present an asymptotic BER performance analysis for the MIMO MRT-MRC system with multiuser diversity. The numerical results are given to illustrate the sensitivity of the main performance to the channel estimation error and the tightness of the approximate cutoff value. © 2011 IEEE.

  6. On Gait Analysis Estimation Errors Using Force Sensors on a Smart Rollator

    Directory of Open Access Journals (Sweden)

    Joaquin Ballesteros

    2016-11-01

    Full Text Available Gait analysis can provide valuable information on a person’s condition and rehabilitation progress. Gait is typically captured using external equipment and/or wearable sensors. These tests are largely constrained to specific controlled environments. In addition, gait analysis often requires experts for calibration, operation and/or to place sensors on volunteers. Alternatively, mobility support devices like rollators can be equipped with onboard sensors to monitor gait parameters, while users perform their Activities of Daily Living. Gait analysis in rollators may use odometry and force sensors in the handlebars. However, force based estimation of gait parameters is less accurate than traditional methods, especially when rollators are not properly used. This paper presents an evaluation of force based gait analysis using a smart rollator on different groups of users to determine when this methodology is applicable. In a second stage, the rollator is used in combination with two lab-based gait analysis systems to assess the rollator estimation error. Our results show that: (i) there is an inverse relation between the variance in the force difference between handlebars and support on the handlebars—related to the user condition—and the estimation error; and (ii) this error is lower than 10% when the variation in the force difference is above 7 N. This lower limit was exceeded by the 95.83% of our challenged volunteers. In conclusion, rollators are useful for gait characterization as long as users really need the device for ambulation.

  7. On Gait Analysis Estimation Errors Using Force Sensors on a Smart Rollator.

    Science.gov (United States)

    Ballesteros, Joaquin; Urdiales, Cristina; Martinez, Antonio B; van Dieën, Jaap H

    2016-11-10

    Gait analysis can provide valuable information on a person's condition and rehabilitation progress. Gait is typically captured using external equipment and/or wearable sensors. These tests are largely constrained to specific controlled environments. In addition, gait analysis often requires experts for calibration, operation and/or to place sensors on volunteers. Alternatively, mobility support devices like rollators can be equipped with onboard sensors to monitor gait parameters, while users perform their Activities of Daily Living. Gait analysis in rollators may use odometry and force sensors in the handlebars. However, force based estimation of gait parameters is less accurate than traditional methods, especially when rollators are not properly used. This paper presents an evaluation of force based gait analysis using a smart rollator on different groups of users to determine when this methodology is applicable. In a second stage, the rollator is used in combination with two lab-based gait analysis systems to assess the rollator estimation error. Our results show that: (i) there is an inverse relation between the variance in the force difference between handlebars and support on the handlebars-related to the user condition-and the estimation error; and (ii) this error is lower than 10% when the variation in the force difference is above 7 N. This lower limit was exceeded by the 95.83% of our challenged volunteers. In conclusion, rollators are useful for gait characterization as long as users really need the device for ambulation.

  8. Aquatic concentrations of chemical analytes compared to ecotoxicity estimates

    Science.gov (United States)

    Kostich, Mitchell S.; Flick, Robert W.; Batt, Angela L.; Mash, Heath E.; Boone, J. Scott; Furlong, Edward T.; Kolpin, Dana W.; Glassmeyer, Susan T.

    2017-01-01

    We describe screening level estimates of potential aquatic toxicity posed by 227 chemical analytes that were measured in 25 ambient water samples collected as part of a joint USGS/USEPA drinking water plant study. Measured concentrations were compared to biological effect concentration (EC) estimates, including USEPA aquatic life criteria, effective plasma concentrations of pharmaceuticals, published toxicity data summarized in the USEPA ECOTOX database, and chemical structure-based predictions. Potential dietary exposures were estimated using a generic 3-tiered food web accumulation scenario. For many analytes, few or no measured effect data were found, and for some analytes, reporting limits exceeded EC estimates, limiting the scope of conclusions. Results suggest occasional occurrence above ECs for copper, aluminum, strontium, lead, uranium, and nitrate. Sparse effect data for manganese, antimony, and vanadium suggest that these analytes may occur above ECs, but additional effect data would be desirable to corroborate EC estimates. These conclusions were not affected by bioaccumulation estimates. No organic analyte concentrations were found to exceed EC estimates, but ten analytes had concentrations in excess of 1/10th of their respective EC: triclocarban, norverapamil, progesterone, atrazine, metolachlor, triclosan, para-nonylphenol, ibuprofen, venlafaxine, and amitriptyline, suggesting more detailed characterization of these analytes.

  9. Interpolation Error Estimates for Mean Value Coordinates over Convex Polygons.

    Science.gov (United States)

    Rand, Alexander; Gillette, Andrew; Bajaj, Chandrajit

    2013-08-01

    In a similar fashion to estimates shown for Harmonic, Wachspress, and Sibson coordinates in [Gillette et al., AiCM, to appear], we prove interpolation error estimates for the mean value coordinates on convex polygons suitable for standard finite element analysis. Our analysis is based on providing a uniform bound on the gradient of the mean value functions for all convex polygons of diameter one satisfying certain simple geometric restrictions. This work makes rigorous an observed practical advantage of the mean value coordinates: unlike Wachspress coordinates, the gradient of the mean value coordinates does not become large as interior angles of the polygon approach π.

  10. Estimation of sampling error uncertainties in observed surface air temperature change in China

    Science.gov (United States)

    Hua, Wei; Shen, Samuel S. P.; Weithmann, Alexander; Wang, Huijun

    2017-08-01

    This study examines the sampling error uncertainties in the monthly surface air temperature (SAT) change in China over recent decades, focusing on the uncertainties of gridded data, national averages, and linear trends. Results indicate that large sampling error variances appear in the station-sparse areas of northern and western China, with the maximum value exceeding 2.0 K2, while small sampling error variances are found in the station-dense areas of southern and eastern China, with most grid values being less than 0.05 K2. In general, negative temperature anomalies were present in each month prior to the 1980s, and a warming began thereafter, accelerating in the early and mid-1990s. The increasing trend in the SAT series was observed for each month of the year, with the largest temperature increase and highest uncertainty of 0.51 ± 0.29 K (10 year)-1 occurring in February and the weakest trend and smallest uncertainty of 0.13 ± 0.07 K (10 year)-1 in August. The sampling error uncertainties in the national average annual mean SAT series are not sufficiently large to alter the conclusion of the persistent warming in China. In addition, the sampling error uncertainties in the SAT series show a clear variation compared with other uncertainty estimation methods, which is a plausible reason for the inconsistent variations between our estimate and other studies during this period.

  11. Estimating the State of Aerodynamic Flows in the Presence of Modeling Errors

    Science.gov (United States)

    da Silva, Andre F. C.; Colonius, Tim

    2017-11-01

    The ensemble Kalman filter (EnKF) has been proven to be successful in fields such as meteorology, in which high-dimensional nonlinear systems render classical estimation techniques impractical. When the model used to forecast state evolution misrepresents important aspects of the true dynamics, estimator performance may degrade. In this work, parametrization and state augmentation are used to track misspecified boundary conditions (e.g., free stream perturbations). The resolution error is modeled as a Gaussian-distributed random variable with the mean (bias) and variance to be determined. The dynamics of the flow past a NACA 0009 airfoil at high angles of attack and moderate Reynolds number is represented by a Navier-Stokes solver with immersed boundary capabilities. The pressure distribution on the airfoil or the velocity field in the wake, both randomized by synthetic noise, are sampled as measurement data and incorporated into the estimated state and bias following Kalman's analysis scheme. Insights about how to specify the modeling error covariance matrix, and its impact on the estimator performance, are conveyed. This work has been supported in part by a Grant from AFOSR (FA9550-14-1-0328) with Dr. Douglas Smith as program manager, and by a Science without Borders scholarship from the Ministry of Education of Brazil (Capes Foundation - BEX 12966/13-4).
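
    A compact way to see how state augmentation tracks a misspecified boundary condition is a scalar toy problem: the model forcing has an unknown bias, the bias is appended to the state vector, and a stochastic EnKF update lets the observed variable drag the bias estimate toward its true value. The dynamics, noise levels and ensemble size below are arbitrary choices, not the Navier-Stokes setup of the work; in realistic problems covariance inflation is usually needed to keep the bias ensemble from collapsing.

```python
import numpy as np

rng = np.random.default_rng(1)
Ne, T = 100, 200                      # ensemble size, number of assimilation steps
b_true, R = 2.0, 0.1                  # unknown forcing bias, observation error variance

# Hypothetical scalar surrogate dynamics (not the flow solver of the work).
x_true = 0.0
# Ensemble of augmented states [x, b]; the bias is initially unknown (zero mean).
Z = np.column_stack([rng.normal(0, 1, Ne), rng.normal(0, 1, Ne)])

for t in range(T):
    x_true = 0.95 * x_true + b_true
    y = x_true + rng.normal(0, np.sqrt(R))
    # Forecast: propagate each member with its own bias estimate plus model noise.
    Z[:, 0] = 0.95 * Z[:, 0] + Z[:, 1] + rng.normal(0, 0.05, Ne)
    # Analysis: observe x only (H = [1, 0]); stochastic EnKF update.
    C = np.cov(Z.T)                                  # 2x2 ensemble covariance
    K = C[:, 0] / (C[0, 0] + R)                      # Kalman gain for H = [1, 0]
    y_pert = y + rng.normal(0, np.sqrt(R), Ne)       # perturbed observations
    Z += np.outer(y_pert - Z[:, 0], K)

print(f"estimated bias = {Z[:, 1].mean():.2f} (true = {b_true})")
```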

  12. Partial-Interval Estimation of Count: Uncorrected and Poisson-Corrected Error Levels

    Science.gov (United States)

    Yoder, Paul J.; Ledford, Jennifer R.; Harbison, Amy L.; Tapp, Jon T.

    2018-01-01

    A simulation study that used 3,000 computer-generated event streams with known behavior rates, interval durations, and session durations was conducted to test whether the main and interaction effects of true rate and interval duration affect the error level of uncorrected and Poisson-transformed (i.e., "corrected") count as estimated by…

  13. Estimation of error in using born scaling for collision cross sections involving muonic ions

    International Nuclear Information System (INIS)

    Stodden, C.D.; Monkhorst, H.J.; Szalewicz, K.

    1988-01-01

    A quantitative estimate is obtained for the error involved in using Born scaling to calculate excitation and ionization cross sections for collisions between muonic ions. The impact parameter version of the Born Approximation is used to calculate cross sections and Coulomb corrections for the 1s→2s excitation of αμ in collisions with d. An error of about 50% is found around the peak of the cross section curve. The error falls to less than 5% for velocities above 2 a.u.

  14. Approximate damped oscillatory solutions and error estimates for the perturbed Klein–Gordon equation

    International Nuclear Information System (INIS)

    Ye, Caier; Zhang, Weiguo

    2015-01-01

    Highlights: • Analyze the dynamical behavior of the planar dynamical system corresponding to the perturbed Klein–Gordon equation. • Present the relations between the properties of traveling wave solutions and the perturbation coefficient. • Obtain all explicit expressions of approximate damped oscillatory solutions. • Investigate error estimates between exact damped oscillatory solutions and the approximate solutions and give some numerical simulations. - Abstract: The influence of perturbation on traveling wave solutions of the perturbed Klein–Gordon equation is studied by applying the bifurcation method and qualitative theory of dynamical systems. All possible approximate damped oscillatory solutions for this equation are obtained by using undetermined coefficient method. Error estimates indicate that the approximate solutions are meaningful. The results of numerical simulations also establish our analysis

  15. The effect of TWD estimation error on the geometry of machined surfaces in micro-EDM milling

    DEFF Research Database (Denmark)

    Puthumana, Govindan; Bissacco, Giuliano; Hansen, Hans Nørgaard

    In micro EDM (electrical discharge machining) milling, tool electrode wear must be effectively compensated in order to achieve high accuracy of machined features [1]. Tool wear compensation in micro-EDM milling can be based on off-line techniques with limited accuracy such as estimation...... and statistical characterization of the discharge population [3]. The TWD based approach permits the direct control of the position of the tool electrode front surface. However, TWD estimation errors will generate a self-amplifying error on the tool electrode axial depth during micro-EDM milling. Therefore....... The error propagation effect is demonstrated through a software simulation tool developed by the authors for determination of the correct TWD for subsequent use in compensation of electrode wear in EDM milling. The implemented model uses an initial arbitrary estimation of TWD and a single experiment...

  16. Full information estimations of a system of simultaneous equations with error component structure

    OpenAIRE

    Balestra, Pietro; Krishnakumar, Jaya

    1987-01-01

    In this paper we develop full information methods for estimating the parameters of a system of simultaneous equations with error component structure and establish relationships between the various structural estimat

  17. Improvement of least-squares collocation error estimates using local GOCE Tzz signal standard deviations

    DEFF Research Database (Denmark)

    Tscherning, Carl Christian

    2015-01-01

    outside the data area. On the other hand, a comparison of predicted quantities with observed values shows that the error also varies depending on the local data standard deviation. This quantity may be (and has been) estimated using the GOCE second order vertical derivative, Tzz, in the area covered...... by the satellite. The ratio between the nearly constant standard deviations of a predicted quantity (e.g. in a 25° × 25° area) and the standard deviations of Tzz in smaller cells (e.g., 1° × 1°) has been used as a scale factor in order to obtain more realistic error estimates. This procedure has been applied...

  18. Estimation of chlorophyll-a concentration in estuarine waters: case study of the Pearl River estuary, South China Sea

    International Nuclear Information System (INIS)

    Zhang Yuanzhi; Lin Hui; Chen, Chuqun; Chen Liding; Zhang Bing; Gitelson, Anatoly A

    2011-01-01

    The objective of this work is to estimate chlorophyll-a (chl-a) concentration in the Pearl River estuary in China. To test the performance of algorithms for the estimation of the chl-a concentration in these productive turbid waters, the maximum band ratio (MBR) and near-infrared-red (NIR-red) models are used in this study. Specific focus is placed on (a) comparing the ability of the models to estimate chl-a in the range 1-12 mg m⁻³, which is typical for coastal and estuarine waters, and (b) assessing the potential of the Moderate Resolution Imaging Spectrometer (MODIS) and Medium Resolution Imaging Spectrometer (MERIS) to estimate chl-a concentrations. Reflectance spectra and water samples were collected at 13 stations with chl-a ranging from 0.83 to 11.8 mg m⁻³ and total suspended matter from 9.9 to 21.5 g m⁻³. A close relationship was found between chl-a concentration and total suspended matter concentration, with the coefficient of determination (R²) above 0.89. The MBR calculated in the spectral bands of MODIS proved to be a good proxy for chl-a concentration (R² > 0.93). On the other hand, both the NIR-red three-band model, with wavebands around 665, 700, and 730 nm, and the NIR-red two-band model (with bands around 665 and 700 nm) explained more than 95% of the chl-a variation, and we were able to estimate chl-a concentrations with a root mean square error below 1 mg m⁻³. The two- and three-band NIR-red models with MERIS spectral bands accounted for 93% of the chl-a variation. These findings imply that the extensive database of MODIS and MERIS images could be used to quantitatively monitor chl-a in the Pearl River estuary.
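
    As a rough illustration of how a NIR-red band-ratio model of this kind can be calibrated against measured chl-a, the Python sketch below fits a linear relation between the three-band index and chl-a. All reflectance and concentration values are hypothetical stand-ins, and the linear calibration form is an assumption for illustration, not the coefficients reported in the study.

      import numpy as np

      def nir_red_three_band(r665, r700, r730):
          """Three-band NIR-red index: [R(665)^-1 - R(700)^-1] * R(730)."""
          return (1.0 / r665 - 1.0 / r700) * r730

      # Hypothetical reflectance spectra (rows = stations, columns = bands at 665/700/730 nm)
      refl = np.array([
          [0.030, 0.035, 0.055],
          [0.028, 0.036, 0.060],
          [0.025, 0.034, 0.058],
      ])
      chl_measured = np.array([2.1, 6.4, 9.8])  # mg m^-3, illustrative values only

      index = nir_red_three_band(refl[:, 0], refl[:, 1], refl[:, 2])

      # Linear calibration of chl-a against the index (least-squares fit)
      slope, intercept = np.polyfit(index, chl_measured, 1)
      chl_estimated = slope * index + intercept
      rmse = np.sqrt(np.mean((chl_estimated - chl_measured) ** 2))
      print(f"chl-a = {slope:.1f} * index + {intercept:.1f}, RMSE = {rmse:.2f} mg m^-3")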

  19. Estimation of chlorophyll-a concentration in estuarine waters: case study of the Pearl River estuary, South China Sea

    Energy Technology Data Exchange (ETDEWEB)

    Zhang Yuanzhi; Lin Hui [Institute of Space and Earth Information Science, Yuen Yuen Research Centre for Satellite Remote Sensing, Chinese University of Hong Kong, Shatin, N.T. (Hong Kong); Chen, Chuqun [South China Institute of Oceanology, Chinese Academy of Sciences, Guangzhou (China); Chen Liding [Research Center for Eco-Environmental Sciences, Chinese Academy of Sciences, Beijing (China); Zhang Bing [Center for Earth Observation and Digital Earth, Chinese Academy of Sciences, Beijing (China); Gitelson, Anatoly A, E-mail: yuanzhizhang@cuhk.edu.hk [Center for Advanced Land Management Information Technologies (CALMIT), School of Natural Resources, University of Nebraska-Lincoln (United States)

    2011-04-15

    The objective of this work is to estimate chlorophyll-a (chl-a) concentration in the Pearl River estuary in China. To test the performance of algorithms for the estimation of the chl-a concentration in these productive turbid waters, the maximum band ratio (MBR) and near-infrared-red (NIR-red) models are used in this study. Specific focus is placed on (a) comparing the ability of the models to estimate chl-a in the range 1-12 mg m⁻³, which is typical for coastal and estuarine waters, and (b) assessing the potential of the Moderate Resolution Imaging Spectrometer (MODIS) and Medium Resolution Imaging Spectrometer (MERIS) to estimate chl-a concentrations. Reflectance spectra and water samples were collected at 13 stations with chl-a ranging from 0.83 to 11.8 mg m⁻³ and total suspended matter from 9.9 to 21.5 g m⁻³. A close relationship was found between chl-a concentration and total suspended matter concentration, with the coefficient of determination (R²) above 0.89. The MBR calculated in the spectral bands of MODIS proved to be a good proxy for chl-a concentration (R² > 0.93). On the other hand, both the NIR-red three-band model, with wavebands around 665, 700, and 730 nm, and the NIR-red two-band model (with bands around 665 and 700 nm) explained more than 95% of the chl-a variation, and we were able to estimate chl-a concentrations with a root mean square error below 1 mg m⁻³. The two- and three-band NIR-red models with MERIS spectral bands accounted for 93% of the chl-a variation. These findings imply that the extensive database of MODIS and MERIS images could be used to quantitatively monitor chl-a in the Pearl River estuary.

  20. Characterizing Air Pollution Exposure Misclassification Errors Using Detailed Cell Phone Location Data

    Science.gov (United States)

    Yu, H.; Russell, A. G.; Mulholland, J. A.

    2017-12-01

    In air pollution epidemiologic studies with spatially resolved air pollution data, exposures are often estimated using the home locations of individual subjects. Due primarily to lack of data or logistic difficulties, the spatiotemporal mobility of subjects is mostly neglected, which is expected to result in exposure misclassification errors. In this study, we applied detailed cell phone location data to characterize potential exposure misclassification errors associated with home-based exposure estimation of air pollution. The cell phone data sample consists of 9,886 unique simcard IDs collected on one mid-week day in October 2013 from Shenzhen, China. The Community Multi-scale Air Quality model was used to simulate hourly ambient concentrations of six chosen pollutants at 3 km spatial resolution, which were then fused with observational data to correct for potential modeling biases and errors. Air pollution exposure for each simcard ID was estimated by matching hourly pollutant concentrations with detailed location data for corresponding IDs. Finally, the results were compared with exposure estimates obtained using the home location method to assess potential exposure misclassification errors. Our results show that the home-based method is likely to have substantial exposure misclassification errors, over-estimating exposures for subjects with higher exposure levels and under-estimating exposures for those with lower exposure levels. This has the potential to lead to a bias-to-the-null in the health effect estimates. Our findings suggest that the use of cell phone data has the potential for improving the characterization of exposure and exposure misclassification in air pollution epidemiology studies.
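
    A minimal sketch of the comparison described above — hourly gridded concentrations matched to a mobility trace versus a fixed home cell — is given below; the grid, trajectory, and concentration values are entirely hypothetical.

      import numpy as np

      # Hypothetical hourly concentration grid: conc[hour, row, col] for one day (24 h, 5x5 cells)
      rng = np.random.default_rng(3)
      conc = rng.uniform(10, 80, size=(24, 5, 5))

      # One subject: home grid cell and an hourly trajectory of grid cells from cell phone data
      home = (1, 1)
      trajectory = [(1, 1)] * 8 + [(3, 4)] * 10 + [(2, 2)] * 2 + [(1, 1)] * 4  # 24 hourly cells

      mobility_exposure = np.mean([conc[h, r, c] for h, (r, c) in enumerate(trajectory)])
      home_exposure = np.mean(conc[:, home[0], home[1]])

      print(f"mobility-based daily mean: {mobility_exposure:.1f}")
      print(f"home-based daily mean:     {home_exposure:.1f}")
      print(f"misclassification error:   {home_exposure - mobility_exposure:+.1f}")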

  1. Nitrogen concentration estimation with hyperspectral LiDAR

    Directory of Open Access Journals (Sweden)

    O. Nevalainen

    2013-10-01

    Full Text Available Agricultural lands have a strong impact on global carbon dynamics and nitrogen availability. Monitoring changes in agricultural lands requires more efficient and accurate methods. The first prototype of a full waveform hyperspectral Light Detection and Ranging (LiDAR) instrument has been developed at the Finnish Geodetic Institute (FGI). The instrument efficiently combines the benefits of passive and active remote sensing sensors. It is able to produce 3D point clouds with spectral information included for every point, which offers great potential in the field of remote sensing of the environment. This study investigates the performance of the hyperspectral LiDAR instrument in nitrogen estimation. The investigation was conducted by finding vegetation indices sensitive to nitrogen concentration using hyperspectral LiDAR data and validating their performance in nitrogen estimation. The nitrogen estimation was performed by calculating 28 published vegetation indices for ten oat samples grown under different fertilization conditions. Reference data were acquired by laboratory nitrogen concentration analysis. The performance of the indices in nitrogen estimation was determined by linear regression and leave-one-out cross-validation. The results indicate that the hyperspectral LiDAR instrument holds a good capability to estimate plant biochemical parameters such as nitrogen concentration. The instrument holds much potential in various environmental applications and provides a significant improvement to the remote sensing of the environment.
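
    A minimal Python sketch of the evaluation step described above — a vegetation index regressed against measured nitrogen with leave-one-out cross-validation — is shown below. The index values and nitrogen concentrations are hypothetical, and scikit-learn is assumed for the cross-validation utilities; this is not the study's data or code.

      import numpy as np
      from sklearn.linear_model import LinearRegression
      from sklearn.model_selection import LeaveOneOut, cross_val_predict

      # Hypothetical data: one vegetation index value per oat sample and lab-measured nitrogen (%)
      index_values = np.array([[0.41], [0.45], [0.52], [0.48], [0.55],
                               [0.60], [0.38], [0.44], [0.58], [0.50]])
      nitrogen = np.array([1.8, 2.0, 2.6, 2.3, 2.9, 3.3, 1.6, 2.1, 3.1, 2.5])

      model = LinearRegression()
      predicted = cross_val_predict(model, index_values, nitrogen, cv=LeaveOneOut())

      rmse_cv = np.sqrt(np.mean((predicted - nitrogen) ** 2))
      r2 = np.corrcoef(predicted, nitrogen)[0, 1] ** 2
      print(f"LOOCV RMSE = {rmse_cv:.3f} %N, R^2 = {r2:.3f}")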

  2. Testing and Estimating Shape-Constrained Nonparametric Density and Regression in the Presence of Measurement Error

    KAUST Repository

    Carroll, Raymond J.

    2011-03-01

    In many applications we can expect that, or are interested to know if, a density function or a regression curve satisfies some specific shape constraints. For example, when the explanatory variable, X, represents the value taken by a treatment or dosage, the conditional mean of the response, Y , is often anticipated to be a monotone function of X. Indeed, if this regression mean is not monotone (in the appropriate direction) then the medical or commercial value of the treatment is likely to be significantly curtailed, at least for values of X that lie beyond the point at which monotonicity fails. In the case of a density, common shape constraints include log-concavity and unimodality. If we can correctly guess the shape of a curve, then nonparametric estimators can be improved by taking this information into account. Addressing such problems requires a method for testing the hypothesis that the curve of interest satisfies a shape constraint, and, if the conclusion of the test is positive, a technique for estimating the curve subject to the constraint. Nonparametric methodology for solving these problems already exists, but only in cases where the covariates are observed precisely. However in many problems, data can only be observed with measurement errors, and the methods employed in the error-free case typically do not carry over to this error context. In this paper we develop a novel approach to hypothesis testing and function estimation under shape constraints, which is valid in the context of measurement errors. Our method is based on tilting an estimator of the density or the regression mean until it satisfies the shape constraint, and we take as our test statistic the distance through which it is tilted. Bootstrap methods are used to calibrate the test. The constrained curve estimators that we develop are also based on tilting, and in that context our work has points of contact with methodology in the error-free case.

  3. Adaptive Green-Kubo estimates of transport coefficients from molecular dynamics based on robust error analysis

    Science.gov (United States)

    Jones, Reese E.; Mandadapu, Kranthi K.

    2012-04-01

    We present a rigorous Green-Kubo methodology for calculating transport coefficients based on on-the-fly estimates of: (a) statistical stationarity of the relevant process, and (b) error in the resulting coefficient. The methodology uses time samples efficiently across an ensemble of parallel replicas to yield accurate estimates, which is particularly useful for estimating the thermal conductivity of semi-conductors near their Debye temperatures where the characteristic decay times of the heat flux correlation functions are large. Employing and extending the error analysis of Zwanzig and Ailawadi [Phys. Rev. 182, 280 (1969)], 10.1103/PhysRev.182.280 and Frenkel [in Proceedings of the International School of Physics "Enrico Fermi", Course LXXV (North-Holland Publishing Company, Amsterdam, 1980)] to the integral of correlation, we are able to provide tight theoretical bounds for the error in the estimate of the transport coefficient. To demonstrate the performance of the method, four test cases of increasing computational cost and complexity are presented: the viscosity of Ar and water, and the thermal conductivity of Si and GaN. In addition to producing accurate estimates of the transport coefficients for these materials, this work demonstrates precise agreement of the computed variances in the estimates of the correlation and the transport coefficient with the extended theory based on the assumption that fluctuations follow a Gaussian process. The proposed algorithm in conjunction with the extended theory enables the calculation of transport coefficients with the Green-Kubo method accurately and efficiently.
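
    The core Green-Kubo step — integrating a flux autocorrelation function up to a chosen cut-off lag — can be sketched as below. This omits the on-the-fly stationarity checks and error bounds that are the paper's contribution, and it uses a synthetic AR(1) flux whose analytic correlation time is known, so the recovered coefficient can be checked; the prefactor and time step are illustrative assumptions.

      import numpy as np

      def autocorrelation(flux, max_lag):
          """Estimate the flux autocorrelation <J(0)J(t)> from a single stationary series."""
          n = len(flux)
          return np.array([np.mean(flux[:n - lag] * flux[lag:]) for lag in range(max_lag)])

      def green_kubo_coefficient(flux, dt, max_lag, prefactor=1.0):
          """Transport coefficient = prefactor * integral of the autocorrelation (trapezoid rule)."""
          acf = autocorrelation(flux, max_lag)
          integral = dt * (np.sum(acf) - 0.5 * (acf[0] + acf[-1]))
          return prefactor * integral, acf

      # Synthetic AR(1) "flux" with correlation time tau, standing in for MD heat-flux output;
      # its autocorrelation integral equals tau, so the estimate can be compared to the truth.
      rng = np.random.default_rng(0)
      n, dt, tau = 50000, 0.01, 0.5
      alpha = np.exp(-dt / tau)
      noise = rng.normal(size=n)
      flux = np.empty(n)
      flux[0] = noise[0]
      for i in range(1, n):
          flux[i] = alpha * flux[i - 1] + np.sqrt(1.0 - alpha**2) * noise[i]

      coeff, _ = green_kubo_coefficient(flux, dt, max_lag=500)
      print(f"estimated coefficient: {coeff:.3f}  (exact value for this synthetic flux: {tau})")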

  4. An error reduction algorithm to improve lidar turbulence estimates for wind energy

    Directory of Open Access Journals (Sweden)

    J. F. Newman

    2017-02-01

    Full Text Available Remote-sensing devices such as lidars are currently being investigated as alternatives to cup anemometers on meteorological towers for the measurement of wind speed and direction. Although lidars can measure mean wind speeds at heights spanning an entire turbine rotor disk and can be easily moved from one location to another, they measure different values of turbulence than an instrument on a tower. Current methods for improving lidar turbulence estimates include the use of analytical turbulence models and expensive scanning lidars. While these methods provide accurate results in a research setting, they cannot be easily applied to smaller, vertically profiling lidars in locations where high-resolution sonic anemometer data are not available. Thus, there is clearly a need for a turbulence error reduction model that is simpler and more easily applicable to lidars that are used in the wind energy industry. In this work, a new turbulence error reduction algorithm for lidars is described. The Lidar Turbulence Error Reduction Algorithm, L-TERRA, can be applied using only data from a stand-alone vertically profiling lidar and requires minimal training with meteorological tower data. The basis of L-TERRA is a series of physics-based corrections that are applied to the lidar data to mitigate errors from instrument noise, volume averaging, and variance contamination. These corrections are applied in conjunction with a trained machine-learning model to improve turbulence estimates from a vertically profiling WINDCUBE v2 lidar. The lessons learned from creating the L-TERRA model for a WINDCUBE v2 lidar can also be applied to other lidar devices. L-TERRA was tested on data from two sites in the Southern Plains region of the United States. The physics-based corrections in L-TERRA brought regression line slopes much closer to 1 at both sites and significantly reduced the sensitivity of lidar turbulence errors to atmospheric stability. The accuracy of machine

  5. Unified theory to evaluate the effect of concentration difference and Peclet number on electroosmotic mobility error of micro electroosmotic flow

    KAUST Repository

    Wang, Wentao

    2012-03-01

    Both theoretical analysis and nonlinear 2D numerical simulations are used to study the concentration difference and Peclet number effect on the measurement error of electroosmotic mobility in microchannels. We propose a compact analytical model for this error as a function of normalized concentration difference and Peclet number in micro electroosmotic flow. The analytical predictions of the errors are consistent with the numerical simulations. © 2012 IEEE.

  6. Errors in causal inference: an organizational schema for systematic error and random error.

    Science.gov (United States)

    Suzuki, Etsuji; Tsuda, Toshihide; Mitsuhashi, Toshiharu; Mansournia, Mohammad Ali; Yamamoto, Eiji

    2016-11-01

    To provide an organizational schema for systematic error and random error in estimating causal measures, aimed at clarifying the concept of errors from the perspective of causal inference. We propose to divide systematic error into structural error and analytic error. With regard to random error, our schema shows its four major sources: nondeterministic counterfactuals, sampling variability, a mechanism that generates exposure events and measurement variability. Structural error is defined from the perspective of counterfactual reasoning and divided into nonexchangeability bias (which comprises confounding bias and selection bias) and measurement bias. Directed acyclic graphs are useful to illustrate this kind of error. Nonexchangeability bias implies a lack of "exchangeability" between the selected exposed and unexposed groups. A lack of exchangeability is not a primary concern of measurement bias, justifying its separation from confounding bias and selection bias. Many forms of analytic errors result from the small-sample properties of the estimator used and vanish asymptotically. Analytic error also results from wrong (misspecified) statistical models and inappropriate statistical methods. Our organizational schema is helpful for understanding the relationship between systematic error and random error from a previously less investigated aspect, enabling us to better understand the relationship between accuracy, validity, and precision. Copyright © 2016 Elsevier Inc. All rights reserved.

  7. Estimation of Dynamic Errors in Laser Optoelectronic Dimension Gauges for Geometric Measurement of Details

    Directory of Open Access Journals (Sweden)

    Khasanov Zimfir

    2018-01-01

    Full Text Available The article reviews the capabilities and particularities of the approach to the improvement of metrological characteristics of fiber-optic pressure sensors (FOPS) based on the estimation of dynamic errors in laser optoelectronic dimension gauges for geometric measurement of details. It is shown that the proposed criteria render new methods for conjugation of optoelectronic converters in the dimension gauge for geometric measurements in order to reduce the speed and volume requirements for the Random Access Memory (RAM) of the video controller which processes the signal. It is found that the lower the relative error, the higher the interrogation speed of the CCD array. It is thus shown that the maximum achievable dynamic accuracy characteristics of the optoelectronic gauge are determined by the following conditions: the parameter stability of the electronic circuits in the CCD array and the microprocessor calculator; linearity of characteristics; error dynamics and noise in all electronic circuits of the CCD array and microprocessor calculator.

  8. Minimum Mean-Square Error Estimation of Mel-Frequency Cepstral Features

    DEFF Research Database (Denmark)

    Jensen, Jesper; Tan, Zheng-Hua

    2015-01-01

    In this work we consider the problem of feature enhancement for noise-robust automatic speech recognition (ASR). We propose a method for minimum mean-square error (MMSE) estimation of mel-frequency cepstral features, which is based on a minimum number of well-established, theoretically consistent......-of-the-art MFCC feature enhancement algorithms within this class of algorithms, while theoretically suboptimal or based on theoretically inconsistent assumptions, perform close to optimally in the MMSE sense....

  9. Accuracy and Sources of Error for an Angle Independent Volume Flow Estimator

    DEFF Research Database (Denmark)

    Jensen, Jonas; Olesen, Jacob Bjerring; Hansen, Peter Møller

    2014-01-01

    This paper investigates sources of error for a vector velocity volume flow estimator. Quantification of the estimator's accuracy is performed theoretically and investigated in vivo. Womersley's model for pulsatile flow is used to simulate velocity profiles and calculate volume flow errors....... A BK Medical UltraView 800 ultrasound scanner with a 9 MHz linear array transducer is used to obtain Vector Flow Imaging sequences of a superficial part of the fistulas. Cross-sectional diameters of each fistula are measured on B-mode images by rotating the scan plane 90 degrees. The major axis...

  10. Analytical errors in measuring radioactivity in cell proteins and their effect on estimates of protein turnover in L cells

    International Nuclear Information System (INIS)

    Silverman, J.A.; Mehta, J.; Brocher, S.; Amenta, J.S.

    1985-01-01

    Previous studies on protein turnover in ³H-labelled L-cell cultures have shown recovery of total ³H at the end of a three-day experiment to be always significantly in excess of the ³H recovered at the beginning of the experiment. A number of possible sources for this error in measuring radioactivity in cell proteins have been reviewed. ³H-labelled proteins, when dissolved in NaOH and counted for radioactivity in a liquid-scintillation spectrometer, showed losses of 30-40% of the radioactivity; neither external nor internal standardization compensated for this loss. Hydrolysis of these proteins with either Pronase or concentrated HCl significantly increased the measured radioactivity. In addition, 5-10% of the cell protein is left on the plastic culture dish when cells are recovered in phosphate-buffered saline. Furthermore, this surface-adherent protein, after pulse labelling, contains proteins of high radioactivity that turn over rapidly and make a major contribution to the accumulating radioactivity in the medium. These combined errors can account for up to 60% of the total radioactivity in the cell culture. Similar analytical errors have been found in studies of other cell cultures. The effect of these analytical errors on estimates of protein turnover in cell cultures is discussed. (author)

  11. mBEEF-vdW: Robust fitting of error estimation density functionals

    DEFF Research Database (Denmark)

    Lundgård, Keld Troen; Wellendorff, Jess; Voss, Johannes

    2016-01-01

    . The functional is fitted within the Bayesian error estimation functional (BEEF) framework [J. Wellendorff et al., Phys. Rev. B 85, 235149 (2012); J. Wellendorff et al., J. Chem. Phys. 140, 144107 (2014)]. We improve the previously used fitting procedures by introducing a robust MM-estimator based loss function...... catalysis, including datasets that were not used for its training. Overall, we find that mBEEF-vdW has a higher general accuracy than competing popular functionals, and it is one of the best performing functionals on chemisorption systems, surface energies, lattice constants, and dispersion. We also show...

  12. Computable error estimates for Monte Carlo finite element approximation of elliptic PDE with lognormal diffusion coefficients

    KAUST Repository

    Hall, Eric

    2016-01-09

    The Monte Carlo (and Multi-level Monte Carlo) finite element method can be used to approximate observables of solutions to diffusion equations with lognormally distributed diffusion coefficients, e.g. modeling ground water flow. Typical models use lognormal diffusion coefficients with Hölder regularity of order up to 1/2 a.s. This low regularity implies that the high frequency finite element approximation error (i.e. the error from frequencies larger than the mesh frequency) is not negligible and can be larger than the computable low frequency error. We address how the total error can be estimated by the computable error.

  13. Variational Multiscale error estimator for anisotropic adaptive fluid mechanic simulations: application to convection-diffusion problems

    OpenAIRE

    Bazile , Alban; Hachem , Elie; Larroya-Huguet , Juan-Carlos; Mesri , Youssef

    2018-01-01

    In this work, we present a new a posteriori error estimator based on the Variational Multiscale method for anisotropic adaptive fluid mechanics problems. The general idea is to combine the large scale error based on the solved part of the solution with the sub-mesh scale error based on the unresolved part of the solution. We compute the latter with two different methods: one using the stabilizing parameters and the other using bubble functions. We propose two different...

  14. Estimating the Standard Error of the Judging in a modified-Angoff Standards Setting Procedure

    Directory of Open Access Journals (Sweden)

    Robert G. MacCann

    2004-03-01

    Full Text Available For a modified Angoff standards setting procedure, two methods of calculating the standard error of the judging were compared. The Central Limit Theorem (CLT) method is easy to calculate and uses readily available data. It estimates the variance of mean cut scores as a function of the variance of cut scores within a judging group, based on the independent judgements at Stage 1 of the process. Its theoretical drawback is that it is unable to take account of the effects of collaboration among the judges at Stages 2 and 3. The second method, an application of equipercentile (EQP) equating, relies on the selection of very large stable candidatures and the standardisation of the raw score distributions to remove effects associated with test difficulty. The standard error estimates were then empirically obtained from the mean cut score variation observed over a five year period. For practical purposes, the two methods gave reasonable agreement, with the CLT method working well for the top band, the band that attracts most public attention. For some bands in English and Mathematics, the CLT standard error was smaller than the EQP estimate, suggesting the CLT method be used with caution as an approximate guide only.

  15. A Generalizability Theory Approach to Standard Error Estimates for Bookmark Standard Settings

    Science.gov (United States)

    Lee, Guemin; Lewis, Daniel M.

    2008-01-01

    The bookmark standard-setting procedure is an item response theory-based method that is widely implemented in state testing programs. This study estimates standard errors for cut scores resulting from bookmark standard settings under a generalizability theory model and investigates the effects of different universes of generalization and error…

  16. Estimating the concentration of urea and creatinine in the human serum of normal and dialysis patients through Raman spectroscopy.

    Science.gov (United States)

    de Almeida, Maurício Liberal; Saatkamp, Cassiano Junior; Fernandes, Adriana Barrinha; Pinheiro, Antonio Luiz Barbosa; Silveira, Landulfo

    2016-09-01

    Urea and creatinine are commonly used as biomarkers of renal function. Abnormal concentrations of these biomarkers are indicative of pathological processes such as renal failure. This study aimed to develop a model based on Raman spectroscopy to estimate the concentration values of urea and creatinine in human serum. Blood sera from 55 clinically normal subjects and 47 patients with chronic kidney disease undergoing dialysis were collected, and concentrations of urea and creatinine were determined by spectrophotometric methods. A Raman spectrum was obtained with a high-resolution dispersive Raman spectrometer (830 nm). A spectral model was developed based on partial least squares (PLS), where the concentrations of urea and creatinine were correlated with the Raman features. Principal components analysis (PCA) was used to discriminate dialysis patients from normal subjects. The PLS model showed r = 0.97 and r = 0.93 for urea and creatinine, respectively. The root mean square errors of cross-validation (RMSECV) for the model were 17.6 and 1.94 mg/dL, respectively. PCA showed high discrimination between dialysis and normality (95 % accuracy). The Raman technique was able to determine the concentrations with low error and to discriminate dialysis from normal subjects, consistent with a rapid and low-cost test.
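
    A minimal sketch of a PLS calibration with leave-one-out RMSECV, in the spirit of the model described above, is given below. The spectra and urea values are synthetic stand-ins, and scikit-learn's PLSRegression with five components is assumed for illustration, not the authors' implementation or data.

      import numpy as np
      from sklearn.cross_decomposition import PLSRegression
      from sklearn.model_selection import LeaveOneOut, cross_val_predict

      # Hypothetical preprocessed Raman spectra (rows = serum samples, columns = Raman shifts)
      # and reference urea concentrations (mg/dL) from a spectrophotometric assay.
      rng = np.random.default_rng(1)
      n_samples, n_wavenumbers = 40, 300
      spectra = rng.normal(size=(n_samples, n_wavenumbers))
      urea = rng.uniform(10, 200, size=n_samples)
      spectra[:, 100] += 0.01 * urea          # inject a fake urea-sensitive band for the demo

      pls = PLSRegression(n_components=5)
      predicted = cross_val_predict(pls, spectra, urea, cv=LeaveOneOut()).ravel()

      rmsecv = np.sqrt(np.mean((predicted - urea) ** 2))
      r = np.corrcoef(predicted, urea)[0, 1]
      print(f"RMSECV = {rmsecv:.1f} mg/dL, r = {r:.2f}")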

  17. Estimating model error covariances in nonlinear state-space models using Kalman smoothing and the expectation-maximisation algorithm

    KAUST Repository

    Dreano, Denis; Tandeo, P.; Pulido, M.; Ait-El-Fquih, Boujemaa; Chonavel, T.; Hoteit, Ibrahim

    2017-01-01

    Specification and tuning of errors from dynamical models are important issues in data assimilation. In this work, we propose an iterative expectation-maximisation (EM) algorithm to estimate the model error covariances using classical extended

  18. Learning from errors in super-resolution.

    Science.gov (United States)

    Tang, Yi; Yuan, Yuan

    2014-11-01

    A novel framework of learning-based super-resolution is proposed by employing the process of learning from the estimation errors. The estimation errors generated by different learning-based super-resolution algorithms are statistically shown to be sparse and uncertain. The sparsity of the estimation errors means that most of the estimation errors are small. The uncertainty of the estimation errors means that the location of a pixel with a larger estimation error is random. Given this prior information about the estimation errors, a nonlinear boosting process of learning from these estimation errors is introduced into the general framework of the learning-based super-resolution. Within the novel framework of super-resolution, a low-rank decomposition technique is used to share the information of different super-resolution estimations and to remove the sparse estimation errors from different learning algorithms or training samples. The experimental results show the effectiveness and the efficiency of the proposed framework in enhancing the performance of different learning-based algorithms.

  19. Accurate and fast methods to estimate the population mutation rate from error prone sequences

    Directory of Open Access Journals (Sweden)

    Miyamoto Michael M

    2009-08-01

    Full Text Available Abstract Background The population mutation rate (θ) remains one of the most fundamental parameters in genetics, ecology, and evolutionary biology. However, its accurate estimation can be seriously compromised when working with error prone data such as expressed sequence tags, low coverage draft sequences, and other such unfinished products. This study is premised on the simple idea that a random sequence error due to a chance accident during data collection or recording will be distributed within a population dataset as a singleton (i.e., as a polymorphic site where one sampled sequence exhibits a unique base relative to the common nucleotide of the others). Thus, one can avoid these random errors by ignoring the singletons within a dataset. Results This strategy is implemented under an infinite sites model that focuses on only the internal branches of the sample genealogy where a shared polymorphism can arise (i.e., a variable site where each alternative base is represented by at least two sequences). This approach is first used to derive independently the same new Watterson and Tajima estimators of θ, as recently reported by Achaz [1] for error prone sequences. It is then used to modify the recent, full, maximum-likelihood model of Knudsen and Miyamoto [2], which incorporates various factors for experimental error and design with those for coalescence and mutation. These new methods are all accurate and fast according to evolutionary simulations and analyses of a real complex population dataset for the California sea hare. Conclusion In light of these results, we recommend the use of these three new methods for the determination of θ from error prone sequences. In particular, we advocate the new maximum likelihood model as a starting point for the further development of more complex coalescent/mutation models that also account for experimental error and design.
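
    The singleton-ignoring idea can be illustrated with a small Watterson-type estimator: drop all singleton sites and divide the remaining segregating-site count by the harmonic sum over allele counts 2..n-1. The sketch below assumes unfolded (derived) allele counts and hypothetical data; it follows the general strategy described in the abstract rather than reproducing the exact estimators of [1] or the likelihood model of [2].

      import numpy as np

      def watterson_theta_no_singletons(derived_counts, n):
          """Watterson-type estimator of theta that ignores singleton sites.

          derived_counts: iterable of derived-allele counts, one per segregating site.
          n: number of sampled sequences.
          Under the infinite-sites model, E[# sites with derived count i] = theta / i,
          so dividing the non-singleton site count by sum_{i=2}^{n-1} 1/i is unbiased
          when random sequencing errors appear only as singletons.
          """
          counts = np.asarray(derived_counts)
          non_singletons = np.sum((counts > 1) & (counts < n))   # drop singletons and fixed sites
          harmonic = np.sum(1.0 / np.arange(2, n))               # sum_{i=2}^{n-1} 1/i
          return non_singletons / harmonic

      # Hypothetical derived-allele counts at segregating sites in a sample of n = 10 sequences;
      # the "1"s could be genuine singletons or sequencing errors and are ignored either way.
      site_counts = [1, 1, 2, 3, 1, 5, 2, 7, 1, 4, 2, 1, 6]
      print(f"theta (singleton-free Watterson) = {watterson_theta_no_singletons(site_counts, 10):.2f}")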

  20. Feasibility of using acoustic velocity meters for estimating highly organic suspended-solids concentrations in streams

    Science.gov (United States)

    Patino, Eduardo

    1996-01-01

    A field experiment was conducted at the Levee 4 canal site below control structure G-88 in the Everglades agricultural area in northwestern Broward County, Florida, to study the relation of acoustic attenuation to suspended-solids concentrations. Acoustic velocity meter and temperature data were obtained with concurrent water samples analyzed for suspended-solids concentrations. Two separate acoustic velocity meter frequencies were used, 200 and 500 kilohertz, to determine the sensitivity of acoustic attenuation to frequency for the measured suspended-solids concentration range. Suspended-solids concentrations for water samples collected at the Levee 4 canal site from July 1993 to September 1994 ranged from 22 to 1,058 milligrams per liter, and organic content ranged from about 30 to 93 percent. Regression analyses showed that attenuation data from the acoustic velocity meter (automatic gain control) and temperature data alone do not provide enough information to adequately describe the concentrations of suspended solids. However, if velocity is also included as one of the independent variables in the regression model, a satisfactory correlation can be obtained. Thus, it is feasible to use acoustic velocity meter instrumentation to estimate suspended-solids concentrations in streams, even when suspended solids are primarily composed of organic material. Using the most comprehensive data set available for the study (500 kilohertz data), the best fit regression model produces a standard error of 69.7 milligrams per liter, with actual errors ranging from 2 to 128 milligrams per liter. Both acoustic velocity meter transmission frequencies of 200 and 500 kilohertz produced similar results, suggesting that transducers of either frequency could be used to collect attenuation data at the study site. Results indicate that calibration of each acoustic velocity meter system to the unique suspended-solids regime existing at each site will be required. More robust solutions may
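
    A minimal sketch of the kind of regression model described above — suspended-solids concentration regressed on acoustic attenuation (automatic gain control), temperature, and velocity — is shown below; the data are hypothetical and the log-linear form is an illustrative assumption, not the model actually fitted in the study.

      import numpy as np

      # Hypothetical concurrent records: acoustic attenuation (automatic gain control, dB),
      # water temperature (deg C), velocity (m/s), and sampled suspended-solids concentration (mg/L).
      agc  = np.array([12.1, 14.3, 18.7, 22.5, 25.0, 27.8, 30.2, 33.5])
      temp = np.array([26.0, 27.5, 28.1, 29.0, 28.4, 27.2, 26.8, 29.5])
      vel  = np.array([0.10, 0.15, 0.22, 0.35, 0.40, 0.55, 0.62, 0.70])
      ssc  = np.array([40.0, 65.0, 120.0, 220.0, 300.0, 450.0, 600.0, 820.0])

      # Multiple linear regression of log(SSC) on AGC, temperature, and velocity
      X = np.column_stack([np.ones_like(agc), agc, temp, vel])
      coeffs, *_ = np.linalg.lstsq(X, np.log(ssc), rcond=None)

      predicted = np.exp(X @ coeffs)
      std_error = np.sqrt(np.mean((predicted - ssc) ** 2))
      print(f"coefficients: {np.round(coeffs, 3)}, standard error = {std_error:.1f} mg/L")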

  1. An information-guided channel-hopping scheme for block-fading channels with estimation errors

    KAUST Repository

    Yang, Yuli

    2010-12-01

    The information-guided channel-hopping technique employing multiple transmit antennas was previously proposed for supporting high data rate transmission over fading channels. This scheme achieves higher data rates than some mature schemes, such as the well-known cyclic transmit antenna selection and space-time block coding, by exploiting the independence character of multiple channels, which effectively results in having an additional information transmitting channel. Moreover, maximum likelihood decoding may be performed by simply decoupling the signals conveyed by the different mapping methods. In this paper, we investigate the achievable spectral efficiency of this scheme in the case of having channel estimation errors, with optimum pilot overhead for minimum mean-square error channel estimation, when transmitting over block-fading channels. Our numerical results further substantiate the robustness of the presented scheme, even with imperfect channel state information. ©2010 IEEE.

  2. Trends and Correlation Estimation in Climate Sciences: Effects of Timescale Errors

    Science.gov (United States)

    Mudelsee, M.; Bermejo, M. A.; Bickert, T.; Chirila, D.; Fohlmeister, J.; Köhler, P.; Lohmann, G.; Olafsdottir, K.; Scholz, D.

    2012-12-01

    Trend describes time-dependence in the first moment of a stochastic process, and correlation measures the linear relation between two random variables. Accurately estimating the trend and correlation, including uncertainties, from climate time series data in the uni- and bivariate domain, respectively, allows first-order insights into the geophysical process that generated the data. Timescale errors, ubiquitous in paleoclimatology, where archives are sampled for proxy measurements and dated, pose a problem for the estimation. Statistical science and the various applied research fields, including geophysics, have almost completely ignored this problem due to its theoretical almost-intractability. However, computational adaptations or replacements of traditional error formulas have become technically feasible. This contribution gives a short overview of such an adaptation package, bootstrap resampling combined with parametric timescale simulation. We study linear regression, parametric change-point models and nonparametric smoothing for trend estimation. We introduce pairwise-moving block bootstrap resampling for correlation estimation. Both methods share robustness against autocorrelation and non-Gaussian distributional shape. We briefly touch on computing-intensive calibration of bootstrap confidence intervals and consider options to parallelize the related computer code. The following examples serve not only to illustrate the methods but also tell their own climate stories: (1) the search for climate drivers of the Agulhas Current on recent timescales, (2) the comparison of three stalagmite-based proxy series of regional, western German climate over the later part of the Holocene, and (3) trends and transitions in benthic oxygen isotope time series from the Cenozoic. Financial support by Deutsche Forschungsgemeinschaft (FOR 668, FOR 1070, MU 1595/4-1) and the European Commission (MC ITN 238512, MC ITN 289447) is acknowledged.
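
    A stripped-down sketch of block-bootstrap correlation estimation is given below; it resamples blocks of paired observations to respect autocorrelation, but omits the parametric timescale simulation that the full method combines with the bootstrap, and the two series are synthetic. The block length and bootstrap sample count are illustrative choices.

      import numpy as np

      def block_bootstrap_correlation(x, y, block_length, n_boot=2000, seed=0):
          """Pairwise moving-block bootstrap for the correlation of two time series.
          Blocks of consecutive (x, y) pairs are resampled together, preserving
          autocorrelation within blocks; returns bootstrap replications of Pearson's r."""
          rng = np.random.default_rng(seed)
          n = len(x)
          n_blocks = int(np.ceil(n / block_length))
          reps = np.empty(n_boot)
          for b in range(n_boot):
              starts = rng.integers(0, n - block_length + 1, size=n_blocks)
              idx = np.concatenate([np.arange(s, s + block_length) for s in starts])[:n]
              reps[b] = np.corrcoef(x[idx], y[idx])[0, 1]
          return reps

      # Synthetic autocorrelated series (stand-ins for two proxy records on a common timescale)
      rng = np.random.default_rng(1)
      n = 300
      noise = rng.normal(size=(2, n))
      x = np.empty(n); y = np.empty(n)
      x[0], y[0] = noise[:, 0]
      for i in range(1, n):                 # AR(1) persistence plus a shared signal
          x[i] = 0.7 * x[i - 1] + noise[0, i]
          y[i] = 0.7 * y[i - 1] + 0.5 * noise[0, i] + noise[1, i]

      reps = block_bootstrap_correlation(x, y, block_length=20)
      lo, hi = np.percentile(reps, [2.5, 97.5])
      print(f"r = {np.corrcoef(x, y)[0, 1]:.2f}, 95% bootstrap CI = [{lo:.2f}, {hi:.2f}]")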

  3. Estimation of distance error by fuzzy set theory required for strength determination of HDR (192)Ir brachytherapy sources.

    Science.gov (United States)

    Kumar, Sudhir; Datta, D; Sharma, S D; Chourasiya, G; Babu, D A R; Sharma, D N

    2014-04-01

    Verification of the strength of high dose rate (HDR) (192)Ir brachytherapy sources on receipt from the vendor is an important component of institutional quality assurance program. Either reference air-kerma rate (RAKR) or air-kerma strength (AKS) is the recommended quantity to specify the strength of gamma-emitting brachytherapy sources. The use of Farmer-type cylindrical ionization chamber of sensitive volume 0.6 cm(3) is one of the recommended methods for measuring RAKR of HDR (192)Ir brachytherapy sources. While using the cylindrical chamber method, it is required to determine the positioning error of the ionization chamber with respect to the source which is called the distance error. An attempt has been made to apply the fuzzy set theory to estimate the subjective uncertainty associated with the distance error. A simplified approach of applying this fuzzy set theory has been proposed in the quantification of uncertainty associated with the distance error. In order to express the uncertainty in the framework of fuzzy sets, the uncertainty index was estimated and was found to be within 2.5%, which further indicates that the possibility of error in measuring such distance may be of this order. It is observed that the relative distance li estimated by analytical method and fuzzy set theoretic approach are consistent with each other. The crisp values of li estimated using analytical method lie within the bounds computed using fuzzy set theory. This indicates that li values estimated using analytical methods are within 2.5% uncertainty. This value of uncertainty in distance measurement should be incorporated in the uncertainty budget, while estimating the expanded uncertainty in HDR (192)Ir source strength measurement.

  4. Estimating and comparing microbial diversity in the presence of sequencing errors

    Science.gov (United States)

    Chiu, Chun-Huo

    2016-01-01

    Estimating and comparing microbial diversity are statistically challenging due to limited sampling and possible sequencing errors for low-frequency counts, producing spurious singletons. The inflated singleton count seriously affects statistical analysis and inferences about microbial diversity. Previous statistical approaches to tackle the sequencing errors generally require different parametric assumptions about the sampling model or about the functional form of frequency counts. Different parametric assumptions may lead to drastically different diversity estimates. We focus on nonparametric methods which are universally valid for all parametric assumptions and can be used to compare diversity across communities. We develop here a nonparametric estimator of the true singleton count to replace the spurious singleton count in all methods/approaches. Our estimator of the true singleton count is in terms of the frequency counts of doubletons, tripletons and quadrupletons, provided these three frequency counts are reliable. To quantify microbial alpha diversity for an individual community, we adopt the measure of Hill numbers (effective number of taxa) under a nonparametric framework. Hill numbers, parameterized by an order q that determines the measures’ emphasis on rare or common species, include taxa richness (q = 0), Shannon diversity (q = 1, the exponential of Shannon entropy), and Simpson diversity (q = 2, the inverse of Simpson index). A diversity profile which depicts the Hill number as a function of order q conveys all information contained in a taxa abundance distribution. Based on the estimated singleton count and the original non-singleton frequency counts, two statistical approaches (non-asymptotic and asymptotic) are developed to compare microbial diversity for multiple communities. (1) A non-asymptotic approach refers to the comparison of estimated diversities of standardized samples with a common finite sample size or sample completeness. This
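
    A minimal sketch of the Hill-number computation described above is given below; the abundance counts are hypothetical, and the singleton-correction step is assumed to have been applied beforehand.

      import numpy as np

      def hill_number(abundances, q):
          """Hill number (effective number of taxa) of order q from abundance counts."""
          p = np.asarray(abundances, dtype=float)
          p = p[p > 0] / p.sum()
          if np.isclose(q, 1.0):                       # q = 1: exponential of Shannon entropy
              return np.exp(-np.sum(p * np.log(p)))
          return np.sum(p ** q) ** (1.0 / (1.0 - q))   # q = 0: richness, q = 2: inverse Simpson

      # Hypothetical OTU counts after replacing the spurious singleton count with an estimate
      counts = [120, 80, 45, 30, 12, 7, 3, 2, 2, 1]
      for q in (0, 1, 2):
          print(f"q = {q}: {hill_number(counts, q):.2f}")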

  5. Formulation of uncertainty relation of error and disturbance in quantum measurement by using quantum estimation theory

    International Nuclear Information System (INIS)

    Yu Watanabe; Masahito Ueda

    2012-01-01

    Full text: When we try to obtain information about a quantum system, we need to perform measurement on the system. The measurement process causes unavoidable state change. Heisenberg discussed a thought experiment of the position measurement of a particle by using a gamma-ray microscope, and found a trade-off relation between the error of the measured position and the disturbance in the momentum caused by the measurement process. The trade-off relation epitomizes the complementarity in quantum measurements: we cannot perform a measurement of an observable without causing disturbance in its canonically conjugate observable. However, at the time Heisenberg found the complementarity, quantum measurement theory was not established yet, and Kennard and Robertson's inequality was erroneously interpreted as a mathematical formulation of the complementarity. Kennard and Robertson's inequality actually implies the indeterminacy of the quantum state: non-commuting observables cannot have definite values simultaneously. However, Kennard and Robertson's inequality reflects the inherent nature of a quantum state alone, and does not concern any trade-off relation between the error and disturbance in the measurement process. In this talk, we report a resolution to the complementarity in quantum measurements. First, we find that it is necessary to involve the estimation process from the outcome of the measurement for quantifying the error and disturbance in the quantum measurement. We clarify the implicitly involved estimation process in Heisenberg's gamma-ray microscope and other measurement schemes, and formulate the error and disturbance for an arbitrary quantum measurement by using quantum estimation theory. The error and disturbance are defined in terms of the Fisher information, which gives the upper bound of the accuracy of the estimation. Second, we obtain uncertainty relations between the measurement errors of two observables [1], and between the error and disturbance in the

  6. A novel multitemporal InSAR model for joint estimation of deformation rates and orbital errors

    KAUST Repository

    Zhang, Lei

    2014-06-01

    Orbital errors, characterized typically as long-wavelength artifacts, commonly exist in interferometric synthetic aperture radar (InSAR) imagery as a result of inaccurate determination of the sensor state vector. Orbital errors degrade the precision of multitemporal InSAR products (i.e., ground deformation). Although research on orbital error reduction has been ongoing for nearly two decades and several algorithms for reducing the effect of the errors are already in existence, the errors cannot always be corrected efficiently and reliably. We propose a novel model that is able to jointly estimate deformation rates and orbital errors based on the different spatiotemporal characteristics of the two types of signals. The proposed model is able to isolate a long-wavelength ground motion signal from the orbital error even when the two types of signals exhibit similar spatial patterns. The proposed algorithm is efficient and requires no ground control points. In addition, the method is built upon wrapped phases of interferograms, eliminating the need of phase unwrapping. The performance of the proposed model is validated using both simulated and real data sets. The demo codes of the proposed model are also provided for reference. © 2013 IEEE.

  7. Improved model predictive control of resistive wall modes by error field estimator in EXTRAP T2R

    Science.gov (United States)

    Setiadi, A. C.; Brunsell, P. R.; Frassinetti, L.

    2016-12-01

    Many implementations of a model-based approach for toroidal plasma have shown better control performance compared to the conventional type of feedback controller. One prerequisite of model-based control is the availability of a control oriented model. This model can be obtained empirically through a systematic procedure called system identification. Such a model is used in this work to design a model predictive controller to stabilize multiple resistive wall modes in EXTRAP T2R reversed-field pinch. Model predictive control is an advanced control method that can optimize the future behaviour of a system. Furthermore, this paper will discuss an additional use of the empirical model which is to estimate the error field in EXTRAP T2R. Two potential methods are discussed that can estimate the error field. The error field estimator is then combined with the model predictive control and yields better radial magnetic field suppression.

  8. Application of the error propagation theory in estimates of static formation temperatures in geothermal and petroleum boreholes

    International Nuclear Information System (INIS)

    Verma, Surendra P.; Andaverde, Jorge; Santoyo, E.

    2006-01-01

    We used the error propagation theory to calculate uncertainties in static formation temperature estimates in geothermal and petroleum wells from three widely used methods (line-source or Horner method; spherical and radial heat flow method; and cylindrical heat source method). Although these methods commonly use an ordinary least-squares linear regression model considered in this study, we also evaluated two variants of a weighted least-squares linear regression model for the actual relationship between the bottom-hole temperature and the corresponding time functions. Equations based on the error propagation theory were derived for estimating uncertainties in the time function of each analytical method. These uncertainties in conjunction with those on bottom-hole temperatures were used to estimate individual weighting factors required for applying the two variants of the weighted least-squares regression model. Standard deviations and 95% confidence limits of intercept were calculated for both types of linear regressions. Applications showed that static formation temperatures computed with the spherical and radial heat flow method were generally greater (at the 95% confidence level) than those from the other two methods under study. When typical measurement errors of 0.25 h in time and 5 deg. C in bottom-hole temperature were assumed for the weighted least-squares model, the uncertainties in the estimated static formation temperatures were greater than those for the ordinary least-squares model. However, if these errors were smaller (about 1% in time and 0.5% in temperature measurements), the weighted least-squares linear regression model would generally provide smaller uncertainties for the estimated temperatures than the ordinary least-squares linear regression model. Therefore, the weighted model would be statistically correct and more appropriate for such applications. We also suggest that at least 30 precise and accurate BHT and time measurements along with
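
    A minimal sketch of a weighted least-squares Horner-type extrapolation is given below. The bottom-hole temperatures, shut-in times, circulation time, and assumed measurement errors are hypothetical, and the weighting scheme (propagating the time error through the Horner time function and combining it with the temperature error) is an illustrative simplification of the error-propagation treatment described above.

      import numpy as np

      # Hypothetical Horner-method data: shut-in times dt (h) after circulation time tc (h),
      # bottom-hole temperatures BHT (deg C), and assumed measurement errors.
      tc = 6.0
      dt = np.array([6.0, 12.0, 18.0, 24.0, 30.0])
      bht = np.array([92.0, 101.0, 106.0, 110.0, 112.0])
      sigma_t, sigma_bht = 0.25, 5.0          # 0.25 h in time, 5 deg C in BHT

      x = np.log((tc + dt) / dt)              # Horner time function

      # Propagate the time error into the time function: dx/d(dt) = -tc / (dt * (tc + dt))
      sigma_x = np.abs(-tc / (dt * (tc + dt))) * sigma_t

      # Weighted least squares: the static formation temperature is the intercept at x = 0.
      # Weights combine BHT errors with time-function errors mapped through an initial slope.
      slope0, _ = np.polyfit(x, bht, 1)
      weights = 1.0 / (sigma_bht**2 + (slope0 * sigma_x)**2)
      (slope, intercept), cov = np.polyfit(x, bht, 1, w=np.sqrt(weights), cov=True)

      print(f"static formation temperature = {intercept:.1f} +/- {np.sqrt(cov[1, 1]):.1f} deg C")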

  9. Statistical analysis of lifetime determinations in the presence of large errors

    International Nuclear Information System (INIS)

    Yost, G.P.

    1984-01-01

    The lifetimes of the new particles are very short, and most of the experiments which measure decay times are subject to measurement errors which are not negligible compared with the decay times themselves. Bartlett has analyzed the problem of lifetime estimation if the error on each event is small or zero. For the case of non-negligible measurement errors, σ_i, on each event, we are interested in a few basic questions: How well does maximum likelihood work? That is, (a) are the errors reasonable, (b) is the answer unbiased, and (c) are there other estimators with superior performance? We concentrate on the results of our Monte Carlo investigation for the case in which the experiment is sensitive over all times -∞ < x_i < ∞

  10. Error estimation for CFD aeroheating prediction under rarefied flow condition

    Science.gov (United States)

    Jiang, Yazhong; Gao, Zhenxun; Jiang, Chongwen; Lee, Chunhian

    2014-12-01

    Both direct simulation Monte Carlo (DSMC) and Computational Fluid Dynamics (CFD) methods have become widely used for aerodynamic prediction when reentry vehicles experience different flow regimes during flight. The implementation of slip boundary conditions in the traditional CFD method under the Navier-Stokes-Fourier (NSF) framework can extend the validity of this approach further into the transitional regime, with the benefit that much less computational cost is demanded compared to DSMC simulation. Correspondingly, an increasing error arises in the aeroheating calculation as the flow becomes more rarefied. To estimate the relative error of heat flux when applying this method to a rarefied flow in the transitional regime, a theoretical derivation is conducted and a dimensionless parameter ɛ is proposed by approximately analyzing the ratio of the second-order term to the first-order term in the heat flux expression of the Burnett equation. DSMC simulation for hypersonic flow over a cylinder in the transitional regime is performed to test the performance of the parameter ɛ, compared with two other parameters, Kn_ρ and Ma·Kn_ρ.

  11. Performance Analysis of Amplify-and-Forward Two-Way Relaying with Co-Channel Interference and Channel Estimation Error

    KAUST Repository

    Liang Yang,

    2013-06-01

    In this paper, we consider the performance of a two-way amplify-and-forward relaying network (AF TWRN) in the presence of unequal power co-channel interferers (CCI). Specifically, we first consider AF TWRN with an interference-limited relay and two noisy-nodes with channel estimation errors and CCI. We derive the approximate signal-to-interference plus noise ratio expressions and then use them to evaluate the outage probability, error probability, and achievable rate. Subsequently, to investigate the joint effects of the channel estimation error and CCI on the system performance, we extend our analysis to a multiple-relay network and derive several asymptotic performance expressions. For comparison purposes, we also provide the analysis for the relay selection scheme under the total power constraint at the relays. For AF TWRN with channel estimation error and CCI, numerical results show that the performance of the relay selection scheme is not always better than that of the all-relay participating case. In particular, the relay selection scheme can improve the system performance in the case of high power levels at the sources and small powers at the relays.

  12. Estimation of chlorophyll-a concentration in productive turbid waters using a Hyperspectral Imager for the Coastal Ocean-the Azov Sea case study

    International Nuclear Information System (INIS)

    Gitelson, Anatoly A; Gao Bocai; Li Rongrong; Berdnikov, Sergey; Saprygin, Vladislav

    2011-01-01

    We present here the results of chlorophyll-a (chl-a) concentration estimation using the red and near infrared (NIR) spectral bands of a Hyperspectral Imager for the Coastal Ocean (HICO) in productive turbid waters of the Azov Sea, Russia. During the data collection campaign in the summer of 2010 in Taganrog Bay and the Azov Sea, water samples were collected and concentrations of chl-a were measured analytically. The NIR-red models were tuned to optimize the spectral band selections and chl-a concentrations were retrieved from HICO data. The NIR-red three-band model with HICO-retrieved reflectances at wavelengths 684, 700, and 720 nm explained more than 85% of chl-a concentration variation in the range from 19.67 to 93.14 mg m⁻³ and was able to estimate chl-a with root mean square error below 10 mg m⁻³. The results indicate the high potential of HICO data to estimate chl-a concentration in turbid productive (Case II) waters in real-time, which will be of immense value to scientists, natural resource managers, and decision makers involved in managing the inland and coastal aquatic ecosystems.

  13. Robust Estimator for Non-Line-of-Sight Error Mitigation in Indoor Localization

    Directory of Open Access Journals (Sweden)

    Marco A

    2006-01-01

    Full Text Available Indoor localization systems are undoubtedly of interest in many application fields. Like outdoor systems, they suffer from non-line-of-sight (NLOS) errors which hinder their robustness and accuracy. Though many ad hoc techniques have been developed to deal with this problem, unfortunately most of them are not applicable indoors due to the high variability of the environment (movement of furniture and of people, etc.). In this paper, we describe the use of robust regression techniques to detect and reject NLOS measures in a location estimation using multilateration. We show how the least-median-of-squares technique can be used to overcome the effects of NLOS errors, even in environments with little infrastructure, and validate its suitability by comparing it to other methods described in the bibliography. We obtained remarkable results when using it in a real indoor positioning system that works with Bluetooth and ultrasound (BLUPS), even when nearly half the measures suffered from NLOS or other coarse errors.
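
    A minimal sketch of least-median-of-squares multilateration is given below: minimal subsets of three beacons are solved in closed form, and the candidate position whose squared range residuals have the smallest median is kept, which tolerates NLOS outliers in up to roughly half of the measurements. The beacon layout and range errors are hypothetical, and the linearized trilateration step is an illustrative choice, not the BLUPS implementation.

      import numpy as np
      from itertools import combinations

      def trilaterate(beacons, ranges):
          """Closed-form 2D position from exactly three beacon/range pairs (linearized)."""
          (x0, y0), (x1, y1), (x2, y2) = beacons
          d0, d1, d2 = ranges
          A = 2 * np.array([[x0 - x1, y0 - y1], [x0 - x2, y0 - y2]])
          b = np.array([d1**2 - d0**2 + x0**2 - x1**2 + y0**2 - y1**2,
                        d2**2 - d0**2 + x0**2 - x2**2 + y0**2 - y2**2])
          return np.linalg.solve(A, b)

      def lmeds_position(beacons, ranges):
          """Keep the minimal-subset solution whose residuals have the smallest median square
          over all measurements (robust to NLOS outliers)."""
          best_pos, best_med = None, np.inf
          for subset in combinations(range(len(beacons)), 3):
              try:
                  pos = trilaterate(beacons[list(subset)], ranges[list(subset)])
              except np.linalg.LinAlgError:
                  continue                                    # skip collinear beacon subsets
              residuals = np.linalg.norm(beacons - pos, axis=1) - ranges
              med = np.median(residuals ** 2)
              if med < best_med:
                  best_pos, best_med = pos, med
          return best_pos

      # Hypothetical setup: true position (2, 3); two of six range measurements are
      # corrupted by large positive NLOS errors.
      beacons = np.array([[0, 0], [10, 0], [0, 10], [10, 10], [5, 0], [0, 5]], dtype=float)
      true_pos = np.array([2.0, 3.0])
      ranges = np.linalg.norm(beacons - true_pos, axis=1)
      ranges[1] += 4.0   # NLOS outliers
      ranges[3] += 6.0

      print("LMedS estimate:", np.round(lmeds_position(beacons, ranges), 2))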

  14. Robust Estimator for Non-Line-of-Sight Error Mitigation in Indoor Localization

    Science.gov (United States)

    Casas, R.; Marco, A.; Guerrero, J. J.; Falcó, J.

    2006-12-01

    Indoor localization systems are undoubtedly of interest in many application fields. Like outdoor systems, they suffer from non-line-of-sight (NLOS) errors which hinder their robustness and accuracy. Though many ad hoc techniques have been developed to deal with this problem, unfortunately most of them are not applicable indoors due to the high variability of the environment (movement of furniture and of people, etc.). In this paper, we describe the use of robust regression techniques to detect and reject NLOS measures in a location estimation using multilateration. We show how the least-median-of-squares technique can be used to overcome the effects of NLOS errors, even in environments with little infrastructure, and validate its suitability by comparing it to other methods described in the bibliography. We obtained remarkable results when using it in a real indoor positioning system that works with Bluetooth and ultrasound (BLUPS), even when nearly half the measures suffered from NLOS or other coarse errors.

  15. Transposing Concentration-Discharge Curves onto Unmonitored Catchments to Estimate Seasonal Nutrient Loads

    Science.gov (United States)

    Minaudo, C.; Moatar, F.; Abbott, B. W.; Dupas, R.; Gascuel-Odoux, C.; Pinay, G.; Roubeix, V.; Danis, P. A.

    2017-12-01

    Many lakes and reservoirs in Europe suffer from severe eutrophication. Accurate quantification of nutrient loads is critical for effective mitigation measures, but this information is often unknown. For example, in France, only 50 out of 481 lakes and reservoirs have national monitoring allowing estimation of interannual nitrogen and phosphorus loads, and even these loads are computed from low-frequency data. To address this lack of data, we developed a straightforward method to predict seasonal loads in lake tributaries. First, we analyzed concentration-discharge (C-Q) curves in monitored catchments and identified slopes, intercepts, and coefficient of variation of the log(C)-log(Q) regressions determined for both low and high flows, separated by the median daily flow [Moatar et al., 2017]. Then, we used stepwise multiple linear regression models to empirically link the characteristics of C-Q curves with a set of catchment descriptors such as land use, lithology, morphology indices, climate, and hydrological indicators. Modeled C-Q relationships were then used to estimate annual and seasonal nutrient loads in nearby and similar unmonitored catchments. We implemented this approach on a large dataset from France where stream flow was surveyed daily and water quality (suspended solids, nitrate, total phosphorus, and orthophosphate concentrations) was measured on a monthly basis at 233 stations over the past 20 years in catchments from 10 to 3000 km². The concentration at the median daily flow (seen here as a metric of the general level of contamination in a catchment) was predicted with uncertainty ranging between 30 and 100%, depending on the variable. C-Q slopes were predicted with large errors, but a sensitivity analysis was conducted to determine the impact of C-Q slope uncertainties on computed annual and seasonal loads. This approach allows estimation of seasonal and annual nutrient loads and could be potentially implemented to improve protection and
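
    A minimal sketch of the segmented log(C)-log(Q) approach described above is given below: separate regressions are fitted below and above the median daily flow, and the fitted curves are then applied to a daily flow series to accumulate a load; all discharge and concentration values are hypothetical.

      import numpy as np

      # Hypothetical paired daily discharge Q (m^3/s) and monthly nitrate concentration C (mg/L)
      Q = np.array([0.8, 1.2, 2.5, 3.0, 4.8, 6.1, 8.0, 11.5, 15.0, 20.3])
      C = np.array([2.1, 2.4, 3.0, 3.2, 3.9, 4.3, 4.9, 5.6, 6.2, 7.1])

      q50 = np.median(Q)
      low, high = Q <= q50, Q > q50

      # Separate log(C)-log(Q) regressions below and above the median daily flow
      slope_lo, intercept_lo = np.polyfit(np.log(Q[low]), np.log(C[low]), 1)
      slope_hi, intercept_hi = np.polyfit(np.log(Q[high]), np.log(C[high]), 1)

      # Reconstruct a daily concentration series from daily flows and sum the loads
      def concentration(q):
          logq = np.log(q)
          logc = np.where(q <= q50, slope_lo * logq + intercept_lo,
                                     slope_hi * logq + intercept_hi)
          return np.exp(logc)

      daily_Q = np.random.default_rng(2).lognormal(mean=1.0, sigma=0.6, size=365)
      daily_load = concentration(daily_Q) * daily_Q * 86400 / 1e6   # tonnes/day from mg/L * m^3/s
      print(f"annual load estimate: {daily_load.sum():.1f} t")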

  16. Error Estimation in Preconditioned Conjugate Gradients

    Czech Academy of Sciences Publication Activity Database

    Strakoš, Zdeněk; Tichý, Petr

    2005-01-01

    Roč. 45, - (2005), s. 789-817 ISSN 0006-3835 R&D Projects: GA AV ČR 1ET400300415; GA AV ČR KJB1030306 Institutional research plan: CEZ:AV0Z10300504 Keywords : preconditioned conjugate gradient method * error bounds * stopping criteria * evaluation of convergence * numerical stability * finite precision arithmetic * rounding errors Subject RIV: BA - General Mathematics Impact factor: 0.509, year: 2005

  17. Rounding errors in weighing

    International Nuclear Information System (INIS)

    Jeach, J.L.

    1976-01-01

    When rounding error is large relative to weighing error, it cannot be ignored when estimating scale precision and bias from calibration data. Further, if the data grouping is coarse, rounding error is correlated with weighing error and may also have a mean quite different from zero. These facts are taken into account in a moment estimation method. A copy of the program listing for the MERDA program that provides moment estimates is available from the author. Experience suggests that if the data fall into four or more cells or groups, it is not necessary to apply the moment estimation method. Rather, the estimate given by equation (3) is valid in this instance. 5 tables

  18. Error Estimates for a Semidiscrete Finite Element Method for Fractional Order Parabolic Equations

    KAUST Repository

    Jin, Bangti

    2013-01-01

    We consider the initial boundary value problem for a homogeneous time-fractional diffusion equation with an initial condition ν(x) and a homogeneous Dirichlet boundary condition in a bounded convex polygonal domain Ω. We study two semidiscrete approximation schemes, i.e., the Galerkin finite element method (FEM) and lumped mass Galerkin FEM, using piecewise linear functions. We establish error estimates that are almost optimal with respect to the data regularity, including the cases of smooth and nonsmooth initial data, i.e., ν ∈ H²(Ω) ∩ H¹₀(Ω) and ν ∈ L²(Ω). For the lumped mass method, the optimal L²-norm error estimate is valid only under an additional assumption on the mesh, which in two dimensions is known to be satisfied for symmetric meshes. Finally, we present some numerical results that give insight into the reliability of the theoretical study. © 2013 Society for Industrial and Applied Mathematics.

  19. Optimal Error Estimates of Two Mixed Finite Element Methods for Parabolic Integro-Differential Equations with Nonsmooth Initial Data

    KAUST Repository

    Goswami, Deepjyoti

    2013-05-01

    In the first part of this article, a new mixed method is proposed and analyzed for parabolic integro-differential equations (PIDE) with nonsmooth initial data. Compared to the standard mixed method for PIDE, the present method does not bank on a reformulation using a resolvent operator. Based on energy arguments combined with a repeated use of an integral operator and without using a parabolic type duality technique, optimal L²-error estimates are derived for semidiscrete approximations when the initial condition is in L². Due to the presence of the integral term, it is further observed that a negative norm estimate plays a crucial role in our error analysis. Moreover, the proposed analysis follows the spirit of the proof techniques used in deriving optimal error estimates for finite element approximations to PIDE with smooth data and therefore it unifies both the theories, i.e., one for smooth data and other for nonsmooth data. Finally, we extend the proposed analysis to the standard mixed method for PIDE with rough initial data and provide an optimal error estimate in L², which improves upon the results available in the literature. © 2013 Springer Science+Business Media New York.

  20. On Estimation of the A-norm of the Error in CG and PCG

    Czech Academy of Sciences Publication Activity Database

    Strakoš, Zdeněk; Tichý, Petr

    2003-01-01

    Roč. 3, - (2003), s. 553-554 ISSN 1617-7061. [GAMM. Padua, 24.03.2003-28.03.2003] R&D Projects: GA ČR GA201/02/0595 Institutional research plan: CEZ:AV0Z1030915 Keywords : preconditioned conjugate gradient * error estimates * stopping criteria Subject RIV: BA - General Mathematics

  1. Estimates of error introduced when one-dimensional inverse heat transfer techniques are applied to multi-dimensional problems

    International Nuclear Information System (INIS)

    Lopez, C.; Koski, J.A.; Razani, A.

    2000-01-01

    A study of the errors introduced when one-dimensional inverse heat conduction techniques are applied to problems involving two-dimensional heat transfer effects was performed. The geometry used for the study was a cylinder with similar dimensions as a typical container used for the transportation of radioactive materials. The finite element analysis code MSC P/Thermal was used to generate synthetic test data that were then used as input for an inverse heat conduction code. Four different problems were considered including one with uniform flux around the outer surface of the cylinder and three with non-uniform flux applied over 360 deg, 180 deg, and 90 deg sections of the outer surface of the cylinder. The Sandia One-Dimensional Direct and Inverse Thermal (SODDIT) code was used to estimate the surface heat flux of all four cases. The error analysis was performed by comparing the results from SODDIT and the heat flux calculated based on the temperature results obtained from P/Thermal. Results showed an increase in error of the surface heat flux estimates as the applied heat became more localized. For the uniform case, SODDIT provided heat flux estimates with a maximum error of 0.5% whereas for the non-uniform cases, the maximum errors were found to be about 3%, 7%, and 18% for the 360 deg, 180 deg, and 90 deg cases, respectively

  2. The moving-window Bayesian maximum entropy framework: estimation of PM(2.5) yearly average concentration across the contiguous United States.

    Science.gov (United States)

    Akita, Yasuyuki; Chen, Jiu-Chiuan; Serre, Marc L

    2012-09-01

    Geostatistical methods are widely used in estimating long-term exposures for epidemiological studies on air pollution, despite their limited capabilities to handle spatial non-stationarity over large geographic domains and the uncertainty associated with missing monitoring data. We developed a moving-window (MW) Bayesian maximum entropy (BME) method and applied this framework to estimate fine particulate matter (PM(2.5)) yearly average concentrations over the contiguous US. The MW approach accounts for the spatial non-stationarity, while the BME method rigorously processes the uncertainty associated with data missingness in the air-monitoring system. In the cross-validation analyses conducted on a set of randomly selected complete PM(2.5) data in 2003 and on simulated data with different degrees of missing data, we demonstrate that the MW approach alone leads to at least 17.8% reduction in mean square error (MSE) in estimating the yearly PM(2.5). Moreover, the MWBME method further reduces the MSE by 8.4-43.7%, with the proportion of incomplete data increased from 18.3% to 82.0%. The MWBME approach leads to significant reductions in estimation error and thus is recommended for epidemiological studies investigating the effect of long-term exposure to PM(2.5) across large geographical domains with expected spatial non-stationarity.

  3. The moving-window Bayesian Maximum Entropy framework: Estimation of PM2.5 yearly average concentration across the contiguous United States

    Science.gov (United States)

    Akita, Yasuyuki; Chen, Jiu-Chiuan; Serre, Marc L.

    2013-01-01

    Geostatistical methods are widely used in estimating long-term exposures for air pollution epidemiological studies, despite their limited capabilities to handle spatial non-stationarity over large geographic domains and uncertainty associated with missing monitoring data. We developed a moving-window (MW) Bayesian Maximum Entropy (BME) method and applied this framework to estimate fine particulate matter (PM2.5) yearly average concentrations over the contiguous U.S. The MW approach accounts for the spatial non-stationarity, while the BME method rigorously processes the uncertainty associated with data missingness in the air monitoring system. In the cross-validation analyses conducted on a set of randomly selected complete PM2.5 data in 2003 and on simulated data with different degrees of missing data, we demonstrate that the MW approach alone leads to at least 17.8% reduction in mean square error (MSE) in estimating the yearly PM2.5. Moreover, the MWBME method further reduces the MSE by 8.4% to 43.7% as the proportion of incomplete data increased from 18.3% to 82.0%. The MWBME approach leads to significant reductions in estimation error and thus is recommended for epidemiological studies investigating the effect of long-term exposure to PM2.5 across large geographical domains with expected spatial non-stationarity. PMID:22739679

  4. A Sandwich-Type Standard Error Estimator of SEM Models with Multivariate Time Series

    Science.gov (United States)

    Zhang, Guangjian; Chow, Sy-Miin; Ong, Anthony D.

    2011-01-01

    Structural equation models are increasingly used as a modeling tool for multivariate time series data in the social and behavioral sciences. Standard error estimators of SEM models, originally developed for independent data, require modifications to accommodate the fact that time series data are inherently dependent. In this article, we extend a…

  5. Rate estimation in partially observed Markov jump processes with measurement errors

    OpenAIRE

    Amrein, Michael; Kuensch, Hans R.

    2010-01-01

    We present a simulation methodology for Bayesian estimation of rate parameters in Markov jump processes arising for example in stochastic kinetic models. To handle the problem of missing components and measurement errors in observed data, we embed the Markov jump process into the framework of a general state space model. We do not use diffusion approximations. Markov chain Monte Carlo and particle filter type algorithms are introduced, which allow sampling from the posterior distribution of t...

  6. Smoothed Spectra, Ogives, and Error Estimates for Atmospheric Turbulence Data

    Science.gov (United States)

    Dias, Nelson Luís

    2018-01-01

    A systematic evaluation is conducted of the smoothed spectrum, which is a spectral estimate obtained by averaging over a window of contiguous frequencies. The technique is extended to the ogive, as well as to the cross-spectrum. It is shown that, combined with existing variance estimates for the periodogram, the variance—and therefore the random error—associated with these estimates can be calculated in a straightforward way. The smoothed spectra and ogives are biased estimates; with simple power-law analytical models, correction procedures are devised, as well as a global constraint that enforces Parseval's identity. Several new results are thus obtained: (1) The analytical variance estimates compare well with the sample variance calculated for the Bartlett spectrum and the variance of the inertial subrange of the cospectrum is shown to be relatively much larger than that of the spectrum. (2) Ogives and spectra estimates with reduced bias are calculated. (3) The bias of the smoothed spectrum and ogive is shown to be negligible at the higher frequencies. (4) The ogives and spectra thus calculated have better frequency resolution than the Bartlett spectrum, with (5) gradually increasing variance and relative error towards the low frequencies. (6) Power-law identification and extraction of the rate of dissipation of turbulence kinetic energy are possible directly from the ogive. (7) The smoothed cross-spectrum is a valid inner product and therefore an acceptable candidate for coherence and spectral correlation coefficient estimation by means of the Cauchy-Schwarz inequality. The quadrature, phase function, coherence function and spectral correlation function obtained from the smoothed spectral estimates compare well with the classical ones derived from the Bartlett spectrum.
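
    The frequency-window averaging that defines the smoothed spectrum, and the ogive as a cumulative integral of the spectrum, can be sketched as follows; the window width and the white-noise test signal are arbitrary illustrative choices.

```python
# Sketch of a smoothed spectrum (periodogram averaged over windows of
# contiguous frequencies) and its ogive; the test signal is synthetic noise.
import numpy as np

rng = np.random.default_rng(2)
n, fs = 4096, 10.0                      # samples and sampling frequency (Hz)
x = rng.standard_normal(n)              # surrogate turbulence record
x -= x.mean()

# One-sided periodogram normalized so its integral equals the variance.
X = np.fft.rfft(x)
freqs = np.fft.rfftfreq(n, d=1.0 / fs)
periodogram = np.abs(X) ** 2 / (n * fs)
periodogram[1:-1] *= 2.0

def smooth_spectrum(freqs, spec, width=16):
    """Average the periodogram over blocks of `width` contiguous frequencies."""
    m = (len(spec) // width) * width
    return (freqs[:m].reshape(-1, width).mean(axis=1),
            spec[:m].reshape(-1, width).mean(axis=1))

f_smooth, s_smooth = smooth_spectrum(freqs, periodogram)

# Ogive: cumulative integral of the spectrum from the highest frequency down.
df = freqs[1] - freqs[0]
ogive = np.cumsum(periodogram[::-1])[::-1] * df

print("variance of record       :", round(x.var(), 4))
print("integral of periodogram  :", round(periodogram.sum() * df, 4))
print("ogive at lowest frequency:", round(ogive[0], 4))
```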

  7. Inversion of In Situ Light Absorption and Attenuation Measurements to Estimate Constituent Concentrations in Optically Complex Shelf Seas

    Science.gov (United States)

    Ramírez-Pérez, M.; Twardowski, M.; Trees, C.; Piera, J.; McKee, D.

    2018-01-01

    A deconvolution approach is presented to use spectral light absorption and attenuation data to estimate the concentration of the major nonwater compounds in complex shelf sea waters. The inversion procedure requires knowledge of local material-specific inherent optical properties (SIOPs) which are determined from natural samples using a bio-optical model that differentiates between Case I and Case II waters and uses least squares linear regression analysis to provide optimal SIOP values. A synthetic data set is used to demonstrate that the approach is fundamentally consistent and to test the sensitivity to injection of controlled levels of artificial noise into the input data. Self-consistency of the approach is further demonstrated by application to field data collected in the Ligurian Sea, with chlorophyll (Chl), the nonbiogenic component of total suspended solids (TSSnd), and colored dissolved organic material (CDOM) retrieved with RMSE of 0.61 mg m-3, 0.35 g m-3, and 0.02 m-1, respectively. The utility of the approach is finally demonstrated by application to depth profiles of in situ absorption and attenuation data resulting in profiles of optically significant constituents with associated error bar estimates. The advantages of this procedure lie in the simple input requirements, the avoidance of error amplification, full exploitation of the available spectral information from both absorption and attenuation channels, and the reasonably successful retrieval of constituent concentrations in an optically complex shelf sea.
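
    At its core, such a deconvolution is a linear unmixing of a measured non-water absorption spectrum against material-specific spectra. A minimal sketch with fabricated SIOP spectra and a non-negativity constraint (not the paper's bio-optical model or field data) is:

```python
# Minimal sketch of inverting a non-water absorption spectrum into constituent
# concentrations using material-specific IOPs (SIOPs); the SIOP spectra and the
# "measured" spectrum are fabricated examples, not the Ligurian Sea data.
import numpy as np
from scipy.optimize import nnls

wavelengths = np.arange(400, 701, 10)                        # nm

# Assumed specific absorption spectra for Chl, TSSnd, and CDOM.
a_chl = 0.04 * np.exp(-((wavelengths - 440.0) / 60.0) ** 2)  # m2 mg-1
a_tss = 0.03 * np.exp(-0.006 * (wavelengths - 400.0))        # m2 g-1
a_cdom = np.exp(-0.014 * (wavelengths - 440.0))              # normalized at 440 nm

A = np.column_stack([a_chl, a_tss, a_cdom])                  # (n_wavelengths, 3)

true_conc = np.array([2.0, 1.5, 0.05])   # Chl mg/m3, TSSnd g/m3, aCDOM(440) 1/m
measured = A @ true_conc
measured += np.random.default_rng(3).normal(0.0, 0.001, measured.size)

# Non-negative least squares keeps the retrieved concentrations physical.
conc, residual_norm = nnls(A, measured)
print("retrieved [Chl, TSSnd, CDOM]:", np.round(conc, 3))
print("spectral residual norm      :", round(residual_norm, 5))
```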

  8. Errors of Mean Dynamic Topography and Geostrophic Current Estimates in China's Marginal Seas from GOCE and Satellite Altimetry

    DEFF Research Database (Denmark)

    Jin, Shuanggen; Feng, Guiping; Andersen, Ole Baltazar

    2014-01-01

    The Gravity Field and Steady-State Ocean Circulation Explorer (GOCE) and satellite altimetry can provide very detailed and accurate estimates of the mean dynamic topography (MDT) and geostrophic currents in China's marginal seas, such as the newest high-resolution GOCE gravity field model GO-CONS-GCF-2-TIM-R4 and the new Centre National d'Etudes Spatiales mean sea surface model MSS_CNES_CLS_11 from satellite altimetry. However, errors and uncertainties of MDT and geostrophic current estimates from satellite observations are not generally quantified. In this paper, errors and uncertainties of MDT and geostrophic current estimates from satellite gravimetry and altimetry are investigated and evaluated in China's marginal seas. The cumulative error in MDT from GOCE is reduced from 22.75 to 9.89 cm when compared to the Gravity Recovery and Climate Experiment (GRACE) gravity field model ITG-Grace2010 results.

  9. Food photographs in nutritional surveillance: errors in portion size estimation using drawings of bread and photographs of margarine and beverages consumption.

    Science.gov (United States)

    De Keyzer, Willem; Huybrechts, Inge; De Maeyer, Mieke; Ocké, Marga; Slimani, Nadia; van 't Veer, Pieter; De Henauw, Stefaan

    2011-04-01

    Food photographs are widely used as instruments to estimate portion sizes of consumed foods. Several food atlases are available, all developed to be used in a specific context and for a given study population. Frequently, food photographs are adopted for use in other studies with a different context or another study population. In the present study, errors in portion size estimation of bread, margarine on bread and beverages by two-dimensional models used in the context of a Belgian food consumption survey are investigated. A sample of 111 men and women (age 45-65 years) were invited for breakfast; two test groups were created. One group was asked to estimate portion sizes of consumed foods using photographs 1-2 d after consumption, and a second group was asked the same after 4 d. Also, real-time assessment of portion sizes using photographs was performed. At the group level, large overestimation of margarine, acceptable underestimation of bread and only small estimation errors for beverages were found. Women tended to have smaller estimation errors for bread and margarine compared with men, while the opposite was found for beverages. Surprisingly, no major difference in estimation error was found after 4 d compared with 1-2 d. Individual estimation errors were large for all foods. The results from the present study suggest that the use of food photographs for portion size estimation of bread and beverages is acceptable for use in nutrition surveys. For photographs of margarine on bread, further validation using smaller amounts corresponding to actual consumption is recommended.

  10. Regression model development and computational procedures to support estimation of real-time concentrations and loads of selected constituents in two tributaries to Lake Houston near Houston, Texas, 2005-9

    Science.gov (United States)

    Lee, Michael T.; Asquith, William H.; Oden, Timothy D.

    2012-01-01

    In December 2005, the U.S. Geological Survey (USGS), in cooperation with the City of Houston, Texas, began collecting discrete water-quality samples for nutrients, total organic carbon, bacteria (Escherichia coli and total coliform), atrazine, and suspended sediment at two USGS streamflow-gaging stations that represent watersheds contributing to Lake Houston (08068500 Spring Creek near Spring, Tex., and 08070200 East Fork San Jacinto River near New Caney, Tex.). Data from the discrete water-quality samples collected during 2005–9, in conjunction with continuously monitored real-time data that included streamflow and other physical water-quality properties (specific conductance, pH, water temperature, turbidity, and dissolved oxygen), were used to develop regression models for the estimation of concentrations of water-quality constituents of substantial source watersheds to Lake Houston. The potential explanatory variables included discharge (streamflow), specific conductance, pH, water temperature, turbidity, dissolved oxygen, and time (to account for seasonal variations inherent in some water-quality data). The response variables (the selected constituents) at each site were nitrite plus nitrate nitrogen, total phosphorus, total organic carbon, E. coli, atrazine, and suspended sediment. The explanatory variables provide easily measured quantities to serve as potential surrogate variables to estimate concentrations of the selected constituents through statistical regression. Statistical regression also facilitates accompanying estimates of uncertainty in the form of prediction intervals. Each regression model potentially can be used to estimate concentrations of a given constituent in real time. Among other regression diagnostics, the diagnostics used as indicators of general model reliability and reported herein include the adjusted R-squared, the residual standard error, residual plots, and p-values. Adjusted R-squared values for the Spring Creek models ranged

  11. Estimates of Single Sensor Error Statistics for the MODIS Matchup Database Using Machine Learning

    Science.gov (United States)

    Kumar, C.; Podesta, G. P.; Minnett, P. J.; Kilpatrick, K. A.

    2017-12-01

    Sea surface temperature (SST) is a fundamental quantity for understanding weather and climate dynamics. Although sensors aboard satellites provide global and repeated SST coverage, a characterization of SST precision and bias is necessary for determining the suitability of SST retrievals in various applications. Guidance on how to derive meaningful error estimates is still being developed. Previous methods estimated retrieval uncertainty based on geophysical factors, e.g. season or "wet" and "dry" atmospheres, but the discrete nature of these bins led to spatial discontinuities in SST maps. Recently, a new approach clustered retrievals based on the terms (excluding offset) in the statistical algorithm used to estimate SST. This approach resulted in over 600 clusters - too many to understand the geophysical conditions that influence retrieval error. Using MODIS and buoy SST matchups (2002 - 2016), we use machine learning algorithms (recursive and conditional trees, random forests) to gain insight into geophysical conditions leading to the different signs and magnitudes of MODIS SST residuals (satellite SSTs minus buoy SSTs). MODIS retrievals were first split into three categories: residuals below -0.4 C, between -0.4 and 0.4 C, and above 0.4 C. These categories are heavily unbalanced, with residuals > 0.4 C being much less frequent. Performance of classification algorithms is affected by imbalance, thus we tested various rebalancing algorithms (oversampling, undersampling, combinations of the two). We consider multiple features for the decision tree algorithms: regressors from the MODIS SST algorithm, proxies for temperature deficit, and spatial homogeneity of brightness temperatures (BTs), e.g., the range of 11 μm BTs inside a 25 km2 area centered on the buoy location. These features and a rebalancing of classes led to an 81.9% accuracy when classifying SST retrievals into these categories. Cloud contamination is still one of the causes leading to negative SST residuals. Precision and accuracy of error estimates from our decision tree
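
    A much-simplified version of the classification step (a random forest on retrieval features with class rebalancing) might look like the following; the features and labels are simulated stand-ins for the MODIS matchup variables, and class-weight rebalancing is only one of the options mentioned above.

```python
# Sketch of classifying SST residual categories with a random forest and class
# rebalancing; the features and labels are simulated, not MODIS matchups.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(4)
n = 5000
# Stand-ins for: SST-algorithm regressors, a temperature-deficit proxy,
# and the spatial range of 11 um brightness temperatures.
X = rng.normal(size=(n, 4))
score = 0.8 * X[:, 0] - 0.6 * X[:, 3] + rng.normal(0.0, 0.5, n)
y = np.where(score < -1.5, "neg_residual",
             np.where(score > 1.8, "pos_residual", "small_residual"))

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# class_weight="balanced" is one simple rebalancing option; over- and
# undersampling are alternatives, as discussed in the abstract.
clf = RandomForestClassifier(n_estimators=200, class_weight="balanced",
                             random_state=0)
clf.fit(X_tr, y_tr)
print(classification_report(y_te, clf.predict(X_te)))
```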

  12. L∞-error estimates of a finite element method for the Hamilton-Jacobi-Bellman equations

    International Nuclear Information System (INIS)

    Bouldbrachene, M.

    1994-11-01

    We study the finite element approximation for the solution of the Hamilton-Jacobi-Bellman equations involving a system of quasi-variational inequalities (QVI). We also give the optimal L∞-error estimates, using the concepts of subsolutions and discrete regularity. (author). 7 refs

  13. A novel multitemporal insar model for joint estimation of deformation rates and orbital errors

    KAUST Repository

    Zhang, Lei; Ding, Xiaoli; Lu, Zhong; Jung, Hyungsup; Hu, Jun; Feng, Guangcai

    2014-01-01

    be corrected efficiently and reliably. We propose a novel model that is able to jointly estimate deformation rates and orbital errors based on the different spatiotemporal characteristics of the two types of signals. The proposed model is able to isolate a long

  14. Estimating oil product demand in Indonesia using a cointegrating error correction model

    International Nuclear Information System (INIS)

    Dahl, C.

    2001-01-01

    Indonesia's long oil production history and large population mean that Indonesian oil reserves, per capita, are the lowest in OPEC and that, eventually, Indonesia will become a net oil importer. Policy-makers want to forestall this day, since oil revenue comprised around a quarter of both the government budget and foreign exchange revenues for the fiscal years 1997/98. To help policy-makers determine how economic growth and oil-pricing policy affect the consumption of oil products, we estimate the demand for six oil products and total petroleum consumption, using an error correction-cointegration approach, and compare it with estimates on a lagged endogenous model using data for 1970-95. (author)

  15. Measurement error in mobile source air pollution exposure estimates due to residential mobility during pregnancy.

    Science.gov (United States)

    Pennington, Audrey Flak; Strickland, Matthew J; Klein, Mitchel; Zhai, Xinxin; Russell, Armistead G; Hansen, Craig; Darrow, Lyndsey A

    2017-09-01

    Prenatal air pollution exposure is frequently estimated using maternal residential location at the time of delivery as a proxy for residence during pregnancy. We describe residential mobility during pregnancy among 19,951 children from the Kaiser Air Pollution and Pediatric Asthma Study, quantify measurement error in spatially resolved estimates of prenatal exposure to mobile source fine particulate matter (PM2.5) due to ignoring this mobility, and simulate the impact of this error on estimates of epidemiologic associations. Two exposure estimates were compared, one calculated using complete residential histories during pregnancy (weighted average based on time spent at each address) and the second calculated using only residence at birth. Estimates were computed using annual averages of primary PM2.5 from traffic emissions modeled using a Research LINE-source dispersion model for near-surface releases (RLINE) at 250 m resolution. In this cohort, 18.6% of children were born to mothers who moved at least once during pregnancy. Mobile source PM2.5 exposure estimates calculated using complete residential histories during pregnancy and only residence at birth were highly correlated (rS > 0.9). Simulations indicated that ignoring residential mobility resulted in modest bias of epidemiologic associations toward the null, but varied by maternal characteristics and prenatal exposure windows of interest (ranging from -2% to -10% bias).
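
    The two exposure metrics being compared reduce to a time-weighted average over the residential history versus the value at the address held at delivery; a minimal sketch with hypothetical addresses and modeled concentrations is:

```python
# Sketch comparing exposure computed from the full residential history
# (time-weighted average) with exposure at the birth address only.
# Addresses, durations, and PM2.5 values are hypothetical.

residential_history = [
    # (modeled mobile-source PM2.5 at the address in ug/m3, days lived there)
    (1.8, 120),   # first address during pregnancy
    (0.9, 160),   # after one move; address held at delivery
]

def time_weighted_exposure(history):
    total_days = sum(days for _, days in history)
    return sum(conc * days for conc, days in history) / total_days

exposure_full_history = time_weighted_exposure(residential_history)
exposure_birth_address = residential_history[-1][0]

print("history-weighted exposure:", round(exposure_full_history, 2), "ug/m3")
print("birth-address exposure   :", round(exposure_birth_address, 2), "ug/m3")
```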

  16. Modeling of the effect of tool wear per discharge estimation error on the depth of machined cavities in micro-EDM milling

    DEFF Research Database (Denmark)

    Puthumana, Govindan; Bissacco, Giuliano; Hansen, Hans Nørgaard

    2017-01-01

    In micro-EDM milling, real time electrode wear compensation based on tool wear per discharge (TWD) estimation permits the direct control of the position of the tool electrode frontal surface. However, TWD estimation errors will cause errors on the tool electrode axial depth. A simulation tool is developed to determine the effects of errors in the initial estimation of TWD and its propagation effect with respect to the error on the depth of the cavity generated. Simulations were applied to micro-EDM milling of a slot of 5000 μm length and 50 μm depth and validated through slot milling experiments performed on a micro-EDM machine. Simulations and experimental results were found to be in good agreement, showing the effect of error amplification through the cavity depth.

  17. A methodology for translating positional error into measures of attribute error, and combining the two error sources

    Science.gov (United States)

    Yohay Carmel; Curtis Flather; Denis Dean

    2006-01-01

    This paper summarizes our efforts to investigate the nature, behavior, and implications of positional error and attribute error in spatiotemporal datasets. Estimating the combined influence of these errors on map analysis has been hindered by the fact that these two error types are traditionally expressed in different units (distance units, and categorical units,...

  18. The Euler equation with habits and measurement errors: Estimates on Russian micro data

    Directory of Open Access Journals (Sweden)

    Khvostova Irina

    2016-01-01

    Full Text Available This paper presents estimates of the consumption Euler equation for Russia. The estimation is based on micro-level panel data and accounts for the heterogeneity of agents’ preferences and measurement errors. The presence of multiplicative habits is checked using the Lagrange multiplier (LM test in a generalized method of moments (GMM framework. We obtain estimates of the elasticity of intertemporal substitution and of the subjective discount factor, which are consistent with the theoretical model and can be used for the calibration and the Bayesian estimation of dynamic stochastic general equilibrium (DSGE models for the Russian economy. We also show that the effects of habit formation are not significant. The hypotheses of multiplicative habits (external, internal, and both external and internal are not supported by the data.

  19. Error Estimates for a Semidiscrete Finite Element Method for Fractional Order Parabolic Equations

    KAUST Repository

    Jin, Bangti; Lazarov, Raytcho; Zhou, Zhi

    2013-01-01

    initial data, i.e., ν ∈ H²(Ω) ∩ H¹₀(Ω) and ν ∈ L²(Ω). For the lumped mass method, the optimal L²-norm error estimate is valid only under an additional assumption on the mesh, which in two dimensions is known to be satisfied for symmetric meshes. Finally

  20. Estimating and Predicting Metal Concentration Using Online Turbidity Values and Water Quality Models in Two Rivers of the Taihu Basin, Eastern China.

    Science.gov (United States)

    Yao, Hong; Zhuang, Wei; Qian, Yu; Xia, Bisheng; Yang, Yang; Qian, Xin

    2016-01-01

    Turbidity (T) has been widely used to detect the occurrence of pollutants in surface water. Using data collected from January 2013 to June 2014 at eleven sites along two rivers feeding the Taihu Basin, China, the relationship between the concentration of five metals (aluminum (Al), titanium (Ti), nickel (Ni), vanadium (V), lead (Pb)) and turbidity was investigated. Metal concentration was determined using inductively coupled plasma mass spectrometry (ICP-MS). The linear regression of metal concentration and turbidity provided a good fit, with R² = 0.86-0.93 for 72 data sets collected in the industrial river and R² = 0.60-0.85 for 60 data sets collected in the cleaner river. All the regressions presented good linear relationships, leading to the conclusion that the occurrence of the five metals is directly related to suspended solids, and their concentrations could be approximated using these regression equations. Thus, the linear regression equations were applied to estimate the metal concentrations using online turbidity data from January 1 to June 30 in 2014. In the prediction, the WASP 7.5.2 (Water Quality Analysis Simulation Program) model was introduced to interpret the transport and fates of total suspended solids; in addition, metal concentrations downstream of the two rivers were predicted. All the relative errors between the estimated and measured metal concentrations were within 30%, and those between the predicted and measured values were within 40%. The estimation and prediction of metal concentrations indicated that exploring the relationship between metals and turbidity might be an effective technique for efficient estimation and prediction of metal concentrations to facilitate better long-term monitoring with high temporal and spatial density.
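
    The regression step itself is a linear fit of metal concentration against turbidity, after which online turbidity readings are converted into concentration estimates and checked against laboratory values by relative error; the numbers below are invented for illustration and are not the Taihu Basin data.

```python
# Sketch of the turbidity-to-metal regression and a relative-error check;
# the turbidity and concentration values are invented, not the Taihu data.
import numpy as np

# Paired observations: turbidity (NTU) and ICP-MS metal concentration (ug/L).
turbidity = np.array([12.0, 25.0, 40.0, 55.0, 70.0, 90.0, 120.0, 150.0])
metal = np.array([8.0, 15.0, 22.0, 30.0, 37.0, 47.0, 62.0, 78.0])

slope, intercept = np.polyfit(turbidity, metal, 1)
r_squared = np.corrcoef(turbidity, metal)[0, 1] ** 2
print(f"C = {slope:.3f} * T + {intercept:.3f}   (R^2 = {r_squared:.3f})")

# Apply the regression to "online" turbidity values and compare against
# hypothetical laboratory measurements using relative error.
online_turbidity = np.array([33.0, 80.0, 140.0])
lab_measured = np.array([19.5, 42.0, 74.0])
estimated = slope * online_turbidity + intercept
relative_error = (estimated - lab_measured) / lab_measured
print("relative errors (%):", np.round(100 * relative_error, 1))
```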

  1. Estimating gene gain and loss rates in the presence of error in genome assembly and annotation using CAFE 3.

    Science.gov (United States)

    Han, Mira V; Thomas, Gregg W C; Lugo-Martinez, Jose; Hahn, Matthew W

    2013-08-01

    Current sequencing methods produce large amounts of data, but genome assemblies constructed from these data are often fragmented and incomplete. Incomplete and error-filled assemblies result in many annotation errors, especially in the number of genes present in a genome. This means that methods attempting to estimate rates of gene duplication and loss often will be misled by such errors and that rates of gene family evolution will be consistently overestimated. Here, we present a method that takes these errors into account, allowing one to accurately infer rates of gene gain and loss among genomes even with low assembly and annotation quality. The method is implemented in the newest version of the software package CAFE, along with several other novel features. We demonstrate the accuracy of the method with extensive simulations and reanalyze several previously published data sets. Our results show that errors in genome annotation do lead to higher inferred rates of gene gain and loss but that CAFE 3 sufficiently accounts for these errors to provide accurate estimates of important evolutionary parameters.

  2. CTER-rapid estimation of CTF parameters with error assessment.

    Science.gov (United States)

    Penczek, Pawel A; Fang, Jia; Li, Xueming; Cheng, Yifan; Loerke, Justus; Spahn, Christian M T

    2014-05-01

    In structural electron microscopy, the accurate estimation of the Contrast Transfer Function (CTF) parameters, particularly defocus and astigmatism, is of utmost importance for both initial evaluation of micrograph quality and for subsequent structure determination. Due to increases in the rate of data collection on modern microscopes equipped with new generation cameras, it is also important that the CTF estimation can be done rapidly and with minimal user intervention. Finally, in order to minimize the necessity for manual screening of the micrographs by a user it is necessary to provide an assessment of the errors of fitted parameters values. In this work we introduce CTER, a CTF parameters estimation method distinguished by its computational efficiency. The efficiency of the method makes it suitable for high-throughput EM data collection, and enables the use of a statistical resampling technique, bootstrap, that yields standard deviations of estimated defocus and astigmatism amplitude and angle, thus facilitating the automation of the process of screening out inferior micrograph data. Furthermore, CTER also outputs the spatial frequency limit imposed by reciprocal space aliasing of the discrete form of the CTF and the finite window size. We demonstrate the efficiency and accuracy of CTER using a data set collected on a 300kV Tecnai Polara (FEI) using the K2 Summit DED camera in super-resolution counting mode. Using CTER we obtained a structure of the 80S ribosome whose large subunit had a resolution of 4.03Å without, and 3.85Å with, inclusion of astigmatism parameters. Copyright © 2014 Elsevier B.V. All rights reserved.

  3. A method for the estimation of the residual error in the SALP approach for fault tree analysis

    International Nuclear Information System (INIS)

    Astolfi, M.; Contini, S.

    1980-01-01

    The aim of this report is the illustration of the algorithms implemented in the SALP-MP code for the estimation of the residual error. These algorithms are of more general use, and it would be possible to implement them on all codes of the SALP series previously developed, as well as, with minor modifications, in analysis procedures based on 'top-down' approaches. At present, combined 'top-down' - 'bottom-up' procedures are being studied in order to take advantage of both approaches for further reduction of computer time and better estimation of the residual error, for which the developed algorithms are still applicable

  4. Some error estimates for the lumped mass finite element method for a parabolic problem

    KAUST Repository

    Chatzipantelidis, P.

    2012-01-01

    We study the spatially semidiscrete lumped mass method for the model homogeneous heat equation with homogeneous Dirichlet boundary conditions. Improving earlier results we show that known optimal order smooth initial data error estimates for the standard Galerkin method carry over to the lumped mass method whereas nonsmooth initial data estimates require special assumptions on the triangulation. We also discuss the application to time discretization by the backward Euler and Crank-Nicolson methods. © 2011 American Mathematical Society.

  5. Error Estimates for Approximate Solutions of the Riccati Equation with Real or Complex Potentials

    Science.gov (United States)

    Finster, Felix; Smoller, Joel

    2010-09-01

    A method is presented for obtaining rigorous error estimates for approximate solutions of the Riccati equation, with real or complex potentials. Our main tool is to derive invariant region estimates for complex solutions of the Riccati equation. We explain the general strategy for applying these estimates and illustrate the method in typical examples, where the approximate solutions are obtained by gluing together WKB and Airy solutions of corresponding one-dimensional Schrödinger equations. Our method is motivated by, and has applications to, the analysis of linear wave equations in the geometry of a rotating black hole.

  6. Verification of functional a posteriori error estimates for obstacle problem in 1D

    Czech Academy of Sciences Publication Activity Database

    Harasim, P.; Valdman, Jan

    2013-01-01

    Roč. 49, č. 5 (2013), s. 738-754 ISSN 0023-5954 R&D Projects: GA ČR GA13-18652S Institutional support: RVO:67985556 Keywords : obstacle problem * a posteriori error estimate * variational inequalities Subject RIV: BA - General Mathematics Impact factor: 0.563, year: 2013 http://library.utia.cas.cz/separaty/2014/MTR/valdman-0424082.pdf

  7. The Effect of Error in Item Parameter Estimates on the Test Response Function Method of Linking.

    Science.gov (United States)

    Kaskowitz, Gary S.; De Ayala, R. J.

    2001-01-01

    Studied the effect of item parameter estimation for computation of linking coefficients for the test response function (TRF) linking/equating method. Simulation results showed that linking was more accurate when there was less error in the parameter estimates, and that 15 or 25 common items provided better results than 5 common items under both…

  8. A statistical approach to estimating effects of performance shaping factors on human error probabilities of soft controls

    International Nuclear Information System (INIS)

    Kim, Yochan; Park, Jinkyun; Jung, Wondea; Jang, Inseok; Hyun Seong, Poong

    2015-01-01

    Despite recent efforts toward data collection for supporting human reliability analysis, there remains a lack of empirical basis in determining the effects of performance shaping factors (PSFs) on human error probabilities (HEPs). To enhance the empirical basis regarding the effects of the PSFs, a statistical methodology using a logistic regression and stepwise variable selection was proposed, and the effects of the PSF on HEPs related with the soft controls were estimated through the methodology. For this estimation, more than 600 human error opportunities related to soft controls in a computerized control room were obtained through laboratory experiments. From the eight PSF surrogates and combinations of these variables, the procedure quality, practice level, and the operation type were identified as significant factors for screen switch and mode conversion errors. The contributions of these significant factors to HEPs were also estimated in terms of a multiplicative form. The usefulness and limitation of the experimental data and the techniques employed are discussed herein, and we believe that the logistic regression and stepwise variable selection methods will provide a way to estimate the effects of PSFs on HEPs in an objective manner. - Highlights: • It is necessary to develop an empirical basis for the effects of the PSFs on the HEPs. • A statistical method using a logistic regression and variable selection was proposed. • The effects of PSFs on the HEPs of soft controls were empirically investigated. • The significant factors were identified and their effects were estimated
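
    The statistical core, a logistic regression on PSF surrogates with stepwise variable selection, can be sketched as follows; the PSF columns and error outcomes are simulated placeholders, and forward selection by AIC stands in for whichever stepwise criterion was actually used.

```python
# Sketch of a logistic regression with forward stepwise selection for the
# effect of PSFs on error probability; the data are simulated placeholders.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(5)
n = 600
df = pd.DataFrame({
    "procedure_quality": rng.integers(0, 2, n),   # binary PSF surrogates
    "practice_level": rng.integers(0, 2, n),
    "operation_type": rng.integers(0, 2, n),
    "time_pressure": rng.integers(0, 2, n),
})
logit_p = -3.0 + 1.2 * df["procedure_quality"] + 0.8 * df["practice_level"]
df["error"] = (rng.random(n) < 1.0 / (1.0 + np.exp(-logit_p))).astype(float)

def fit(columns):
    """Logistic model with an intercept plus the selected PSF columns."""
    X = pd.DataFrame({"const": np.ones(n)})
    for col in columns:
        X[col] = df[col]
    return sm.Logit(df["error"], X).fit(disp=0)

candidates = ["procedure_quality", "practice_level", "operation_type",
              "time_pressure"]
selected, best_aic = [], fit([]).aic
improved = True
while improved and candidates:
    improved = False
    aics = {c: fit(selected + [c]).aic for c in candidates}
    best = min(aics, key=aics.get)
    if aics[best] < best_aic:
        selected.append(best)
        candidates.remove(best)
        best_aic = aics[best]
        improved = True

model = fit(selected)
print("selected PSFs:", selected)
print("multiplicative effects (odds ratios):")
print(np.exp(model.params))
```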

  9. Measurement error potential and control when quantifying volatile hydrocarbon concentrations in soils

    International Nuclear Information System (INIS)

    Siegrist, R.L.

    1991-01-01

    Due to their widespread use throughout commerce and industry, volatile hydrocarbons such as toluene, trichloroethene, and 1,1,1-trichloroethane routinely appear as principal pollutants in contaminated soil systems. Quantification of soil system hydrocarbons is necessary to confirm the presence of contamination and its nature and extent; to assess site risks and the need for cleanup; to evaluate remedial technologies; and to verify the performance of a selected alternative. Decisions regarding these issues have far-reaching impacts and, ideally, should be based on accurate measurements of soil hydrocarbon concentrations. Unfortunately, quantification of volatile hydrocarbons in soils is extremely difficult and there is normally little understanding of the accuracy and precision of these measurements. Rather, the assumption is often implicitly made that the hydrocarbon data are sufficiently accurate for the intended purpose. This paper presents a discussion of measurement error potential when quantifying volatile hydrocarbons in soils, and outlines some methods for understanding and managing these errors

  10. Type I Error Rates and Power Estimates of Selected Parametric and Nonparametric Tests of Scale.

    Science.gov (United States)

    Olejnik, Stephen F.; Algina, James

    1987-01-01

    Estimated Type I Error rates and power are reported for the Brown-Forsythe, O'Brien, Klotz, and Siegal-Tukey procedures. The effect of aligning the data using deviations from group means or group medians is investigated. (RB)

  11. A heteroskedastic error covariance matrix estimator using a first-order conditional autoregressive Markov simulation for deriving asympotical efficient estimates from ecological sampled Anopheles arabiensis aquatic habitat covariates

    Directory of Open Access Journals (Sweden)

    Githure John I

    2009-09-01

    Full Text Available Abstract Background Autoregressive regression coefficients for Anopheles arabiensis aquatic habitat models are usually assessed using global error techniques and are reported as error covariance matrices. A global statistic, however, will summarize error estimates from multiple habitat locations. This makes it difficult to identify where there are clusters of An. arabiensis aquatic habitats of acceptable prediction. It is therefore useful to conduct some form of spatial error analysis to detect clusters of An. arabiensis aquatic habitats based on uncertainty residuals from individual sampled habitats. In this research, a method of error estimation for spatial simulation models was demonstrated using autocorrelation indices and eigenfunction spatial filters to distinguish among the effects of parameter uncertainty on a stochastic simulation of ecological sampled Anopheles aquatic habitat covariates. A test for diagnostic checking of error residuals in an An. arabiensis aquatic habitat model may enable intervention efforts targeting productive habitat clusters, based on larval/pupal productivity, by using the asymptotic distribution of parameter estimates from a residual autocovariance matrix. The models considered in this research extend a normal regression analysis previously considered in the literature. Methods Field and remote-sampled data were collected during July 2006 to December 2007 in Karima rice-village complex in Mwea, Kenya. SAS 9.1.4® was used to explore univariate statistics, correlations, distributions, and to generate global autocorrelation statistics from the ecological sampled datasets. A local autocorrelation index was also generated using spatial covariance parameters (i.e., Moran's Indices) in a SAS/GIS® database. The Moran's statistic was decomposed into orthogonal and uncorrelated synthetic map pattern components using a Poisson model with a gamma-distributed mean (i.e., negative binomial regression). The eigenfunction

  12. Estimating cancer risk from outdoor concentrations of hazardous air pollutants in 1990

    Energy Technology Data Exchange (ETDEWEB)

    Woodruff, T.J.; Caldwell, J.; Cogliano, V.J.; Axelrad, D.A.

    2000-03-01

    A public health concern regarding hazardous air pollutants (HAPs) is their potential to cause cancer. It has been difficult to assess potential cancer risks from HAPs, due primarily to lack of ambient concentration data for the general population. The Environmental Protection Agency's Cumulative Exposure Project modeled 1990 outdoor concentrations of HAPs across the United States, which were combined with inhalation unit risk estimates to estimate the potential increase in excess cancer risk for individual carcinogenic HAPs. These were summed to provide an estimate of cancer risk from multiple HAPs. The analysis estimates a median excess cancer risk of 18 lifetime cancer cases per 100,000 people for all HAP concentrations. About 75% of estimated cancer risk was attributable to exposure to polycyclic organic matter, 1,3-butadiene, formaldehyde, benzene, and chromium. Consideration of some specific uncertainties, including underestimation of ambient concentrations, combining upper 95% confidence bound potency estimates, and changes to potency estimates, found that cancer risk may be underestimated by 15% or overestimated by 40-50%. Other unanalyzed uncertainties could make these under- or overestimates larger. This analysis used 1990 estimates of concentrations and can be used to track progress toward reducing cancer risk to the general population.
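
    The risk calculation described above is, at its core, a multiplication of a long-term ambient concentration by an inhalation unit risk, summed over pollutants. A toy sketch, with placeholder concentrations and unit risk values rather than the 1990 modeled data, is:

```python
# Toy sketch of summing excess lifetime cancer risks across HAPs:
# risk = long-term concentration (ug/m3) x inhalation unit risk (per ug/m3).
# The concentrations and unit risks below are placeholders, not 1990 values.

pollutants = {
    # name: (annual average concentration in ug/m3, unit risk per ug/m3)
    "benzene":       (1.2, 7.8e-6),
    "formaldehyde":  (2.0, 1.3e-5),
    "1,3-butadiene": (0.3, 3.0e-5),
    "chromium":      (0.001, 1.2e-2),
}

risks = {name: conc * unit_risk for name, (conc, unit_risk) in pollutants.items()}
total_risk = sum(risks.values())

for name, risk in sorted(risks.items(), key=lambda item: -item[1]):
    print(f"{name:15s} {risk:.2e}  ({100 * risk / total_risk:.0f}% of total)")
print(f"summed excess lifetime risk: {total_risk:.2e} "
      f"(~{total_risk * 1e5:.1f} cases per 100,000 people)")
```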

  13. Robust experiment design for estimating myocardial β adrenergic receptor concentration using PET

    International Nuclear Information System (INIS)

    Salinas, Cristian; Muzic, Raymond F. Jr.; Ernsberger, Paul; Saidel, Gerald M.

    2007-01-01

    Myocardial β adrenergic receptor (β-AR) concentration can substantially decrease in congestive heart failure and significantly increase in chronic volume overload, such as in severe aortic valve regurgitation. Positron emission tomography (PET) with an appropriate ligand-receptor model can be used for noninvasive estimation of myocardial β-AR concentration in vivo. An optimal design of the experiment protocol, however, is needed for sufficiently precise estimates of β-AR concentration in a heterogeneous population. Standard methods of optimal design do not account for a heterogeneous population with a wide range of β-AR concentrations and other physiological parameters and consequently are inadequate. To address this, we have developed a methodology to design a robust two-injection protocol that provides reliable estimates of myocardial β-AR concentration in normal and pathologic states. A two-injection protocol of the high affinity β-AR antagonist [18F]-(S)-fluorocarazolol was designed based on a computer-generated (or synthetic) population incorporating a wide range of β-AR concentrations. Timing and dosage of the ligand injections were optimally designed with minimax criterion to provide the least bad β-AR estimates for the worst case in the synthetic population. This robust experiment design for PET was applied to experiments with pigs before and after β-AR upregulation by chemical sympathectomy. Estimates of β-AR concentration were found by minimizing the difference between the model-predicted and experimental PET data. With this robust protocol, estimates of β-AR concentration showed high precision in both normal and pathologic states. The increase in β-AR concentration after sympathectomy predicted noninvasively with PET is consistent with the increase shown by in vitro assays in pig myocardium. A robust experiment protocol was designed for PET that yields reliable estimates of β-AR concentration in a population with normal and pathologic

  14. Combined Uncertainty and A-Posteriori Error Bound Estimates for General CFD Calculations: Theory and Software Implementation

    Science.gov (United States)

    Barth, Timothy J.

    2014-01-01

    This workshop presentation discusses the design and implementation of numerical methods for the quantification of statistical uncertainty, including a-posteriori error bounds, for output quantities computed using CFD methods. Hydrodynamic realizations often contain numerical error arising from finite-dimensional approximation (e.g. numerical methods using grids, basis functions, particles) and statistical uncertainty arising from incomplete information and/or statistical characterization of model parameters and random fields. The first task at hand is to derive formal error bounds for statistics given realizations containing finite-dimensional numerical error [1]. The error in computed output statistics contains contributions from both realization error and the error resulting from the calculation of statistics integrals using a numerical method. A second task is to devise computable a-posteriori error bounds by numerically approximating all terms arising in the error bound estimates. For the same reason that CFD calculations including error bounds but omitting uncertainty modeling are only of limited value, CFD calculations including uncertainty modeling but omitting error bounds are only of limited value. To gain maximum value from CFD calculations, a general software package for uncertainty quantification with quantified error bounds has been developed at NASA. The package provides implementations for a suite of numerical methods used in uncertainty quantification: Dense tensorization basis methods [3] and a subscale recovery variant [1] for non-smooth data, Sparse tensorization methods[2] utilizing node-nested hierarchies, Sampling methods[4] for high-dimensional random variable spaces.

  15. Estimating Classification Errors Under Edit Restrictions in Composite Survey-Register Data Using Multiple Imputation Latent Class Modelling (MILC

    Directory of Open Access Journals (Sweden)

    Boeschoten Laura

    2017-12-01

    Full Text Available Both registers and surveys can contain classification errors. These errors can be estimated by making use of a composite data set. We propose a new method based on latent class modelling to estimate the number of classification errors across several sources while taking into account impossible combinations with scores on other variables. Furthermore, the latent class model, by multiply imputing a new variable, enhances the quality of statistics based on the composite data set. The performance of this method is investigated by a simulation study, which shows that whether or not the method can be applied depends on the entropy R2 of the latent class model and the type of analysis a researcher is planning to do. Finally, the method is applied to public data from Statistics Netherlands.

  16. Verification of functional a posteriori error estimates for obstacle problem in 2D

    Czech Academy of Sciences Publication Activity Database

    Harasim, P.; Valdman, Jan

    2014-01-01

    Roč. 50, č. 6 (2014), s. 978-1002 ISSN 0023-5954 R&D Projects: GA ČR GA13-18652S Institutional support: RVO:67985556 Keywords : obstacle problem * a posteriori error estimate * finite element method * variational inequalities Subject RIV: BA - General Mathematics Impact factor: 0.541, year: 2014 http://library.utia.cas.cz/separaty/2015/MTR/valdman-0441661.pdf

  17. The estimation of differential counting measurements of positive quantities with relatively large statistical errors

    International Nuclear Information System (INIS)

    Vincent, C.H.

    1982-01-01

    Bayes' principle is applied to the differential counting measurement of a positive quantity in which the statistical errors are not necessarily small in relation to the true value of the quantity. The methods of estimation derived are found to give consistent results and to avoid the anomalous negative estimates sometimes obtained by conventional methods. One of the methods given provides a simple means of deriving the required estimates from conventionally presented results and appears to have wide potential applications. Both methods provide the actual posterior probability distribution of the quantity to be measured. A particularly important potential application is the correction of counts on low radioactivity samples for background. (orig.)
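
    Applying Bayes' principle to a background-corrected count can be illustrated with a simple grid posterior: the gross count is Poisson with mean equal to source plus background, the source contribution is constrained to be non-negative, and the posterior mean never goes negative even when the naive difference does. The flat prior and the particular counts below are illustrative assumptions, not the estimator derived in the paper.

```python
# Sketch of a Bayesian estimate of a positive quantity from a differential
# (gross minus background) count with large statistical errors; the flat
# prior on the non-negative source contribution is an illustrative assumption.
import numpy as np
from scipy.stats import poisson

gross_counts = 8          # sample measurement (source + background)
background_counts = 10    # separate background measurement, same live time

# Grid over the non-negative expected source counts.
s_grid = np.linspace(0.0, 30.0, 3001)
b_hat = background_counts          # simple plug-in estimate of the background

# Posterior weights: likelihood of the gross count times a flat prior on s >= 0.
weights = poisson.pmf(gross_counts, s_grid + b_hat)
weights /= weights.sum()

posterior_mean = (s_grid * weights).sum()
naive_estimate = gross_counts - background_counts   # conventional subtraction

print("naive background-subtracted estimate:", naive_estimate)   # negative
print("posterior mean of source counts     :", round(posterior_mean, 2))
```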

  18. An error bound estimate and convergence of the Nodal-LTSN solution in a rectangle

    International Nuclear Information System (INIS)

    Hauser, Eliete Biasotto; Pazos, Ruben Panta; Tullio de Vilhena, Marco

    2005-01-01

    In this work, we report the mathematical analysis concerning the error bound estimate and convergence of the Nodal-LTSN solution in a rectangle. To this end, we present an efficient algorithm, called the LTSN 2D-Diag solution, for Cartesian geometry

  19. Estimation of Toxicity Equivalent Concentration (TEQ) of ...

    African Journals Online (AJOL)

    Estimation of Toxicity Equivalent Concentration (TEQ) of carcinogenic polycyclic aromatic hydrocarbons in soils from Idu Ekpeye playground and University of Port ... Effective soil remediation and detoxification methods, such as dispersion by chemical reaction technology, should be deployed to clean up sites to avoid soil toxicity ...

  20. Estimating Prediction Uncertainty from Geographical Information System Raster Processing: A User's Manual for the Raster Error Propagation Tool (REPTool)

    Science.gov (United States)

    Gurdak, Jason J.; Qi, Sharon L.; Geisler, Michael L.

    2009-01-01

    The U.S. Geological Survey Raster Error Propagation Tool (REPTool) is a custom tool for use with the Environmental System Research Institute (ESRI) ArcGIS Desktop application to estimate error propagation and prediction uncertainty in raster processing operations and geospatial modeling. REPTool is designed to introduce concepts of error and uncertainty in geospatial data and modeling and provide users of ArcGIS Desktop a geoprocessing tool and methodology to consider how error affects geospatial model output. Similar to other geoprocessing tools available in ArcGIS Desktop, REPTool can be run from a dialog window, from the ArcMap command line, or from a Python script. REPTool consists of public-domain, Python-based packages that implement Latin Hypercube Sampling within a probabilistic framework to track error propagation in geospatial models and quantitatively estimate the uncertainty of the model output. Users may specify error for each input raster or model coefficient represented in the geospatial model. The error for the input rasters may be specified as either spatially invariant or spatially variable across the spatial domain. Users may specify model output as a distribution of uncertainty for each raster cell. REPTool uses the Relative Variance Contribution method to quantify the relative error contribution from the two primary components in the geospatial model - errors in the model input data and coefficients of the model variables. REPTool is appropriate for many types of geospatial processing operations, modeling applications, and related research questions, including applications that consider spatially invariant or spatially variable error in geospatial data.
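
    Independently of ArcGIS, the Latin Hypercube approach to propagating input error through a raster operation can be sketched as: draw LHS samples for each uncertain input, evaluate the model once per sample, and summarize the per-cell output distribution. The toy raster model and error magnitudes below are hypothetical and far simpler than a REPTool workflow.

```python
# Sketch of Latin Hypercube error propagation through a toy raster model
# (output = coefficient * raster1 + raster2); inputs and errors are hypothetical.
import numpy as np
from scipy.stats import norm, qmc

rng = np.random.default_rng(6)
rows, cols, n_samples = 20, 20, 200

raster1 = rng.uniform(0.0, 10.0, (rows, cols))    # e.g. a recharge proxy
raster2 = rng.uniform(0.0, 5.0, (rows, cols))     # e.g. a land-use factor
coef_mean, coef_sd = 1.5, 0.2                     # uncertain model coefficient
raster1_sd = 0.5                                  # spatially invariant input error

# Latin Hypercube samples in two dimensions: one for the coefficient and one
# for the (spatially invariant) error added to raster1.
sampler = qmc.LatinHypercube(d=2, seed=0)
u = sampler.random(n_samples)
coef_samples = norm.ppf(u[:, 0], loc=coef_mean, scale=coef_sd)
raster1_errors = norm.ppf(u[:, 1], loc=0.0, scale=raster1_sd)

# Evaluate the raster model once per LHS sample.
outputs = np.empty((n_samples, rows, cols))
for k in range(n_samples):
    outputs[k] = coef_samples[k] * (raster1 + raster1_errors[k]) + raster2

# Per-cell prediction uncertainty: standard deviation across the samples.
cell_sd = outputs.std(axis=0)
print("mean per-cell output standard deviation:", round(float(cell_sd.mean()), 3))
```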

  1. Sampling Error in Relation to Cyst Nematode Population Density Estimation in Small Field Plots.

    Science.gov (United States)

    Župunski, Vesna; Jevtić, Radivoje; Jokić, Vesna Spasić; Župunski, Ljubica; Lalošević, Mirjana; Ćirić, Mihajlo; Ćurčić, Živko

    2017-06-01

    Cyst nematodes are serious plant-parasitic pests which could cause severe yield losses and extensive damage. Since there is still very little information about error of population density estimation in small field plots, this study contributes to the broad issue of population density assessment. It was shown that there was no significant difference between cyst counts of five or seven bulk samples taken per 1-m² plot, if average cyst count per examined plot exceeds 75 cysts per 100 g of soil. Goodness of fit of data to probability distribution tested with the χ² test confirmed a negative binomial distribution of cyst counts for 21 out of 23 plots. The recommended measure of sampling precision of 17% expressed through the coefficient of variation (cv) was achieved if the plots of 1 m² contaminated with more than 90 cysts per 100 g of soil were sampled with 10-core bulk samples taken in five repetitions. If plots were contaminated with less than 75 cysts per 100 g of soil, 10-core bulk samples taken in seven repetitions gave cv higher than 23%. This study indicates that more attention should be paid to estimation of sampling error in experimental field plots to ensure more reliable estimation of population density of cyst nematodes.

  2. Sensitivity of APSIM/ORYZA model due to estimation errors in solar radiation

    OpenAIRE

    Alexandre Bryan Heinemann; Pepijn A.J. van Oort; Diogo Simões Fernandes; Aline de Holanda Nunes Maia

    2012-01-01

    Crop models are ideally suited to quantify existing climatic risks. However, they require historic climate data as input. While daily temperature and rainfall data are often available, the lack of observed solar radiation (Rs) data severely limits site-specific crop modelling. The objective of this study was to estimate Rs based on air temperature solar radiation models and to quantify the propagation of errors in simulated radiation on several APSIM/ORYZA crop model seasonal outputs, yield, ...

  3. Metrological assessment of TDR performance for measurement of potassium concentration in soil solution

    Directory of Open Access Journals (Sweden)

    Isaac de M. Ponciano

    2016-04-01

    Full Text Available ABSTRACT Despite the growing use of the time domain reflectometry (TDR) technique to monitor ions in the soil solution, there are few studies that provide insight into measurement error. To overcome this lack of information, a methodology based on the central limit theorem was used to quantify the uncertainty associated with using the technique to estimate potassium ion concentration in two soil types. Mathematical models based on electrical conductivity and soil moisture derived from TDR readings were used to estimate potassium concentration, and the results were compared to potassium concentration determined by flame spectrophotometry. It was possible to correct for random and systematic errors associated with TDR readings, significantly increasing the accuracy of the potassium estimation methodology. However, a single TDR reading can lead to an error of up to ± 18.84 mg L-1 K+ in soil solution (0 to 3 dS m-1), with a 95.42% degree of confidence, for a loamy sand soil; and an error of up to ± 12.50 mg L-1 of K+ (0 to 2.5 dS m-1) in soil solution, with a 95.06% degree of confidence, for a sandy clay soil.

  4. Using Analysis Increments (AI) to Estimate and Correct Systematic Errors in the Global Forecast System (GFS) Online

    Science.gov (United States)

    Bhargava, K.; Kalnay, E.; Carton, J.; Yang, F.

    2017-12-01

    Systematic forecast errors, arising from model deficiencies, form a significant portion of the total forecast error in weather prediction models like the Global Forecast System (GFS). While much effort has been expended to improve models, substantial model error remains. The aim here is to (i) estimate the model deficiencies in the GFS that lead to systematic forecast errors, and (ii) implement an online (i.e., within the model) correction scheme for the GFS following the methodology of Danforth et al. [2007] and Danforth and Kalnay [2008, GRL]. Analysis increments represent the corrections that new observations make on, in this case, the 6-hr forecast in the analysis cycle. Model bias corrections are estimated from the time average of the analysis increments divided by 6 hr, assuming that initial model errors grow linearly and, at first, ignoring the impact of observation bias. During 2012-2016, seasonal means of the 6-hr model bias are generally robust despite changes in model resolution and data assimilation systems, and their broad continental scales explain their insensitivity to model resolution. The daily bias dominates the sub-monthly analysis increments and consists primarily of diurnal and semidiurnal components, also requiring a low-dimensional correction. Analysis increments in 2015 and 2016 are reduced over oceans, which is attributed to improvements in the specification of the SSTs. These results encourage application of online correction, as suggested by Danforth and Kalnay, for mean, seasonal, diurnal and semidiurnal model biases in GFS to reduce both systematic and random errors. As the error growth in the short term is still linear, estimated model bias corrections can be added as a forcing term in the model tendency equation to correct online. Preliminary experiments with GFS, correcting temperature and specific humidity online, show a reduction in model bias in the 6-hr forecast. This approach can then be used to guide and optimize the design of sub
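
    A minimal sketch of the bias-estimation step described above: time-average the 6-hr analysis increments to approximate the systematic model error, convert it to a tendency, and add it as a forcing term. The synthetic temperature fields and the simple additive bias are stand-ins, not GFS output.

```python
# Minimal sketch of the bias-estimation step: average the 6-hr analysis
# increments to estimate the systematic model error, convert it to a tendency,
# and add it as a forcing term. The synthetic temperature fields and the simple
# additive bias are stand-ins, not GFS output.
import numpy as np

rng = np.random.default_rng(1)
n_cycles, nlat, nlon = 400, 90, 180
analysis = rng.normal(280.0, 5.0, size=(n_cycles, nlat, nlon))       # T analyses (K)
forecast_6h = analysis + rng.normal(0.3, 1.0, size=analysis.shape)   # biased 6-hr forecasts

increments = analysis - forecast_6h        # analysis increments (K)
bias_per_6h = increments.mean(axis=0)      # time-mean increment ~ -(6-hr model bias)
bias_tendency = bias_per_6h / (6 * 3600.0) # K/s forcing for the tendency equation

def corrected_tendency(model_tendency):
    """Add the estimated bias correction as an extra forcing term (online)."""
    return model_tendency + bias_tendency

print(bias_per_6h.mean())                  # approximately -0.3 K for this synthetic case
```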

  5. Constitutive error based parameter estimation technique for plate structures using free vibration signatures

    Science.gov (United States)

    Guchhait, Shyamal; Banerjee, Biswanath

    2018-04-01

    In this paper, a variant of constitutive equation error based material parameter estimation procedure for linear elastic plates is developed from partially measured free vibration signatures. It has been reported in many research articles that mode shape curvatures are much more sensitive than the mode shapes themselves for localizing inhomogeneity. Complying with this idea, an identification procedure is framed as an optimization problem where the proposed cost function measures the error in the constitutive relation due to incompatible curvature/strain and moment/stress fields. Unlike the standard constitutive equation error based procedure, wherein a solution of a coupled system is unavoidable in each iteration, we generate these incompatible fields via two linear solves. A simple, yet effective, penalty based approach is followed to incorporate measured data. The penalization parameter not only helps in incorporating corrupted measurement data weakly but also acts as a regularizer against the ill-posedness of the inverse problem. Explicit linear update formulas are then developed for anisotropic linear elastic material. Numerical examples are provided to show the applicability of the proposed technique. Finally, an experimental validation is also provided.

  6. Methods for estimating on-site ambient air concentrations at disposal sites

    International Nuclear Information System (INIS)

    Hwang, S.T.

    1987-01-01

    Currently, Gaussian-type dispersion modeling and a point source approximation are combined, via the virtual point source approach, to estimate the ambient air concentrations of pollutants dispersed downwind of an areawide emission source. This Gaussian dispersion modeling becomes less accurate as the receptor comes closer to the source, and becomes inapplicable for the estimation of on-site ambient air concentrations at disposal sites. Partial differential equations are solved with appropriate boundary conditions for use in estimating the on-site concentrations in the ambient air impacted by emissions from an area source such as land disposal sites. Two variations of the solution technique are presented, and their predictions are compared
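
    For context, the virtual point source approach mentioned in the record can be sketched as follows: the area source is replaced by a point source placed a virtual distance upwind so that the lateral plume spread at the source matches the source width. The dispersion-coefficient power laws, the W/4.3 convention and all numbers are assumptions for illustration only.

```python
# Illustration of the virtual point source idea contrasted in the record: shift
# the source a virtual distance x_v upwind so that the lateral spread there
# equals W/4.3, then apply the ground-level Gaussian plume formula. Dispersion
# power laws and all parameter values are assumptions.
import numpy as np

Q = 1.0e-3   # emission rate, g/s (assumed)
u = 3.0      # wind speed, m/s (assumed)
W = 50.0     # crosswind width of the area source, m (assumed)

def sigma_y(x):
    return 0.22 * x / np.sqrt(1.0 + 0.0001 * x)   # generic rural-type fit

def sigma_z(x):
    return 0.20 * x

# Virtual distance: coarse search for sigma_y(x_v) = W / 4.3.
xs = np.linspace(1.0, 5000.0, 50_000)
x_v = xs[np.argmin(np.abs(sigma_y(xs) - W / 4.3))]

def ground_conc(x_downwind, y=0.0):
    """Ground-level concentration (g/m^3) for a ground-level release."""
    x_eff = x_downwind + x_v
    sy, sz = sigma_y(x_eff), sigma_z(x_eff)
    return Q / (np.pi * sy * sz * u) * np.exp(-y**2 / (2.0 * sy**2))

print(f"x_v = {x_v:.0f} m, C(100 m) = {ground_conc(100.0):.2e} g/m^3")
```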

  7. Demonstrating the robustness of population surveillance data: implications of error rates on demographic and mortality estimates.

    Science.gov (United States)

    Fottrell, Edward; Byass, Peter; Berhane, Yemane

    2008-03-25

    As in any measurement process, a certain amount of error may be expected in routine population surveillance operations such as those in demographic surveillance sites (DSSs). Vital events are likely to be missed and errors made no matter what method of data capture is used or what quality control procedures are in place. The extent to which random errors in large, longitudinal datasets affect overall health and demographic profiles has important implications for the role of DSSs as platforms for public health research and clinical trials. Such knowledge is also of particular importance if the outputs of DSSs are to be extrapolated and aggregated with realistic margins of error and validity. This study uses the first 10-year dataset from the Butajira Rural Health Project (BRHP) DSS, Ethiopia, covering approximately 336,000 person-years of data. Simple programmes were written to introduce random errors and omissions into new versions of the definitive 10-year Butajira dataset. Key parameters of sex, age, death, literacy and roof material (an indicator of poverty) were selected for the introduction of errors based on their obvious importance in demographic and health surveillance and their established significant associations with mortality. Defining the original 10-year dataset as the 'gold standard' for the purposes of this investigation, population, age and sex compositions and Poisson regression models of mortality rate ratios were compared between each of the intentionally erroneous datasets and the original 'gold standard' 10-year data. The composition of the Butajira population was well represented despite introducing random errors, and differences between population pyramids based on the derived datasets were subtle. Regression analyses of well-established mortality risk factors were largely unaffected even by relatively high levels of random errors in the data. The low sensitivity of parameter estimates and regression analyses to significant amounts of

  8. Demonstrating the robustness of population surveillance data: implications of error rates on demographic and mortality estimates

    Directory of Open Access Journals (Sweden)

    Berhane Yemane

    2008-03-01

    estimates and regression analyses to significant amounts of randomly introduced errors indicates a high level of robustness of the dataset. This apparent inertia of population parameter estimates to simulated errors is largely due to the size of the dataset. Tolerable margins of random error in DSS data may exceed 20%. While this is not an argument in favour of poor quality data, reducing the time and valuable resources spent on detecting and correcting random errors in routine DSS operations may be justifiable as the returns from such procedures diminish with increasing overall accuracy. The money and effort currently spent on endlessly correcting DSS datasets would perhaps be better spent on increasing the surveillance population size and geographic spread of DSSs and analysing and disseminating research findings.
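
    The error-injection idea behind both versions of this record can be sketched in a few lines: corrupt a chosen fraction of one field at random and compare a crude mortality rate ratio before and after. The synthetic person-year table, rates and 20% error fraction are assumptions, not the Butajira data.

```python
# Sketch of the error-injection idea: corrupt a fraction of the 'sex' field at
# random and compare a crude mortality rate ratio before and after. The
# synthetic person-year table, rates and 20% error fraction are assumptions.
import numpy as np

rng = np.random.default_rng(2)
n = 100_000
sex = rng.integers(0, 2, n)                    # 0 = female, 1 = male
person_years = rng.uniform(0.5, 1.0, n)
rate = np.where(sex == 1, 0.012, 0.008)        # true male:female rate ratio = 1.5
deaths = rng.poisson(rate * person_years)

def rate_ratio(sex_field):
    r1 = deaths[sex_field == 1].sum() / person_years[sex_field == 1].sum()
    r0 = deaths[sex_field == 0].sum() / person_years[sex_field == 0].sum()
    return r1 / r0

error_rate = 0.20                              # corrupt 20% of records
flip = rng.random(n) < error_rate
sex_err = np.where(flip, 1 - sex, sex)

print(f"rate ratio, clean data:    {rate_ratio(sex):.2f}")
print(f"rate ratio, 20% corrupted: {rate_ratio(sex_err):.2f}")
```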

  9. Evaluation of the sources of error in the linepack estimation of a natural gas pipeline

    Energy Technology Data Exchange (ETDEWEB)

    Marco, Fabio Capelassi Gavazzi de [Transportadora Brasileira Gasoduto Bolivia-Brasil S.A. (TBG), Rio de Janeiro, RJ (Brazil)

    2012-07-01

    The intent of this work is to explore the behavior of the random error associated with the determination of linepack in a complex natural gas pipeline, based on the effect introduced by the uncertainty of the different variables involved. Many parameters are involved in the determination of the gas inventory in a transmission pipeline: geometrical (diameter, length and elevation profile), operational (pressure, temperature and gas composition), environmental (ambient / ground temperature) and those dependent on the modeling assumptions (compressibility factor and heat transfer coefficient). Due to the extent of a natural gas pipeline and the vast number of sensors involved, it is infeasible to determine analytically the magnitude of the resulting uncertainty in the linepack, thus this problem has been addressed using the Monte Carlo method. The approach consists of introducing random errors in the values of pressure, temperature and gas gravity that are employed in the determination of the linepack and verifying their impact. Additionally, the errors associated with three different modeling assumptions used to estimate the linepack are explored. The results reveal that pressure is the most critical variable while temperature is the least critical. In regard to the different methods of estimating the linepack, deviations around 1.6% were verified among the methods. (author)
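
    A compressed sketch of the Monte Carlo procedure described above: perturb pressure, temperature and gas gravity with assumed sensor uncertainties, recompute a (here greatly simplified) single-segment linepack each time, and report the spread. The inventory formula, compressibility correlation and all values are assumptions, not the operator's pipeline model.

```python
# Compressed sketch of the Monte Carlo procedure: perturb pressure, temperature
# and gas gravity with assumed sensor uncertainties, recompute a greatly
# simplified single-segment linepack each time, and report the spread. The
# inventory formula, compressibility correlation and all values are assumptions.
import numpy as np

rng = np.random.default_rng(7)
R = 8.314               # J/(mol K)
V = 5.0e4               # segment volume, m^3 (assumed)
P, T, G = 60e5, 288.15, 0.60                  # nominal pressure (Pa), temperature (K), gas gravity
sigma = {"P": 0.25e5, "T": 1.0, "G": 0.002}   # assumed 1-sigma measurement errors

def linepack_kg(P, T, G):
    M = 28.9647e-3 * G                  # molar mass from gas gravity, kg/mol
    Z = 1.0 - 2.0e-9 * P                # crude compressibility correlation (assumed)
    return P * V * M / (Z * R * T)      # ideal-gas-style inventory, corrected by Z

n = 20_000
samples = linepack_kg(P + rng.normal(0, sigma["P"], n),
                      T + rng.normal(0, sigma["T"], n),
                      G + rng.normal(0, sigma["G"], n))
print(f"linepack = {samples.mean() / 1e3:.1f} t +/- {samples.std(ddof=1) / 1e3:.2f} t (1 sigma)")
```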

  10. Capacity estimation and verification of quantum channels with arbitrarily correlated errors.

    Science.gov (United States)

    Pfister, Corsin; Rol, M Adriaan; Mantri, Atul; Tomamichel, Marco; Wehner, Stephanie

    2018-01-02

    The central figure of merit for quantum memories and quantum communication devices is their capacity to store and transmit quantum information. Here, we present a protocol that estimates a lower bound on a channel's quantum capacity, even when there are arbitrarily correlated errors. One application of these protocols is to test the performance of quantum repeaters for transmitting quantum information. Our protocol is easy to implement and comes in two versions. The first estimates the one-shot quantum capacity by preparing and measuring in two different bases, where all involved qubits are used as test qubits. The second verifies on-the-fly that a channel's one-shot quantum capacity exceeds a minimal tolerated value while storing or communicating data. We discuss the performance using simple examples, such as the dephasing channel for which our method is asymptotically optimal. Finally, we apply our method to a superconducting qubit in experiment.

  11. Triple collocation-based estimation of spatially correlated observation error covariance in remote sensing soil moisture data assimilation

    Science.gov (United States)

    Wu, Kai; Shu, Hong; Nie, Lei; Jiao, Zhenhang

    2018-01-01

    Spatially correlated errors are typically ignored in data assimilation, thus degenerating the observation error covariance R to a diagonal matrix. We argue that a nondiagonal R carries more observation information, making assimilation results more accurate. A method, denoted TC_Cov, was proposed for soil moisture data assimilation to estimate spatially correlated observation error covariance based on triple collocation (TC). Assimilation experiments were carried out to test the performance of TC_Cov. AMSR-E soil moisture was assimilated with a diagonal R matrix computed using TC, and assimilated using a nondiagonal R matrix as estimated by the proposed TC_Cov. The ensemble Kalman filter was used as the assimilation method. Our assimilation results were validated against climate change initiative data and ground-based soil moisture measurements using the Pearson correlation coefficient and unbiased root mean square difference metrics. These experiments confirmed that deterioration of diagonal R assimilation results occurs when the model simulation is more accurate than the observation data. Furthermore, the nondiagonal R achieved higher correlation coefficients and lower ubRMSD values than the diagonal R in the experiments, demonstrating the effectiveness of TC_Cov for estimating a richly structured R in data assimilation. In sum, compared with a diagonal R, a nondiagonal R may relieve the detrimental effects of assimilation when simulated model results outperform the observation data.
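
    The covariance-based form of plain triple collocation, on which the record's TC_Cov method builds, can be sketched with synthetic data as below; the spatially correlated extension itself is not reproduced. The three products and their error levels are assumptions.

```python
# Covariance-based triple collocation (TC) on synthetic data: estimate the
# error variance of each of three collocated soil moisture products. The
# spatially correlated extension (TC_Cov) is not reproduced here; the three
# products and their error levels are assumptions.
import numpy as np

rng = np.random.default_rng(3)
n = 3000
truth = 0.25 + 0.05 * rng.standard_normal(n)     # "true" soil moisture
x = truth + rng.normal(0, 0.02, n)               # product 1 (e.g. satellite retrieval)
y = 0.9 * truth + rng.normal(0, 0.03, n)         # product 2 (model, different scaling)
z = 1.1 * truth + rng.normal(0, 0.01, n)         # product 3 (ground measurements)

C = np.cov(np.vstack([x, y, z]))
err_var_x = C[0, 0] - C[0, 1] * C[0, 2] / C[1, 2]
err_var_y = C[1, 1] - C[0, 1] * C[1, 2] / C[0, 2]
err_var_z = C[2, 2] - C[0, 2] * C[1, 2] / C[0, 1]
print(np.sqrt([err_var_x, err_var_y, err_var_z]))   # roughly [0.02, 0.03, 0.01]
```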

  12. A machine learning method to estimate PM2.5 concentrations across China with remote sensing, meteorological and land use information.

    Science.gov (United States)

    Chen, Gongbo; Li, Shanshan; Knibbs, Luke D; Hamm, N A S; Cao, Wei; Li, Tiantian; Guo, Jianping; Ren, Hongyan; Abramson, Michael J; Guo, Yuming

    2018-04-24

    Machine learning algorithms have very high predictive ability. However, no study has used machine learning to estimate historical concentrations of PM2.5 (particulate matter with aerodynamic diameter ≤ 2.5 μm) at a daily time scale in China at the national level. To estimate daily concentrations of PM2.5 across China during 2005-2016. Daily ground-level PM2.5 data were obtained from 1479 stations across China during 2014-2016. Data on aerosol optical depth (AOD), meteorological conditions and other predictors were downloaded. A random forests model (a non-parametric machine learning algorithm) and two traditional regression models were developed to estimate ground-level PM2.5 concentrations. The best-fit model was then utilized to estimate the daily concentrations of PM2.5 across China with a resolution of 0.1° (≈10 km) during 2005-2016. The daily random forests model showed much higher predictive accuracy than the other two traditional regression models, explaining the majority of spatial variability in daily PM2.5 [10-fold cross-validation (CV) R² = 83%, root mean squared prediction error (RMSE) = 28.1 μg/m³]. At the monthly and annual time scales, the explained variability of average PM2.5 increased up to 86% (RMSE = 10.7 μg/m³ and 6.9 μg/m³, respectively). Taking advantage of a novel application of the modeling framework and the most recent ground-level PM2.5 observations, the machine learning method showed higher predictive ability than previous studies. The random forests approach can be used to estimate historical exposure to PM2.5 in China with high accuracy. Copyright © 2018 Elsevier B.V. All rights reserved.
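
    A sketch of the modeling idea with synthetic data: a random forest predicting daily PM2.5 from AOD-, meteorology- and land-use-style predictors, evaluated by 10-fold cross-validation. The features, their relationship to PM2.5 and the hyperparameters are placeholders, not the study's actual inputs or settings.

```python
# Sketch with synthetic data: a random forest predicting daily PM2.5 from
# AOD-, meteorology- and land-use-style predictors, evaluated by 10-fold
# cross-validation. Features, their relation to PM2.5 and hyperparameters are
# placeholders, not the study's inputs or settings.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_squared_error, r2_score
from sklearn.model_selection import KFold, cross_val_predict

rng = np.random.default_rng(0)
n = 5000
X = np.column_stack([
    rng.gamma(2.0, 0.3, n),      # AOD
    rng.normal(15, 10, n),       # temperature, deg C
    rng.uniform(0, 10, n),       # wind speed, m/s
    rng.uniform(0, 1, n),        # urban land-use fraction
])
pm25 = 40 + 30 * X[:, 0] - 0.5 * X[:, 1] - 2 * X[:, 2] + 40 * X[:, 3] \
       + rng.normal(0, 8, n)

rf = RandomForestRegressor(n_estimators=200, min_samples_leaf=5,
                           n_jobs=-1, random_state=0)
pred = cross_val_predict(rf, X, pm25, cv=KFold(10, shuffle=True, random_state=0))
print(f"CV R2 = {r2_score(pm25, pred):.2f}, "
      f"RMSE = {np.sqrt(mean_squared_error(pm25, pred)):.1f} ug/m3")
```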

  13. Quantile Regression With Measurement Error

    KAUST Repository

    Wei, Ying

    2009-08-27

    Regression quantiles can be substantially biased when the covariates are measured with error. In this paper we propose a new method that produces consistent linear quantile estimation in the presence of covariate measurement error. The method corrects the measurement error induced bias by constructing joint estimating equations that simultaneously hold for all the quantile levels. An iterative EM-type estimation algorithm to obtain the solutions to such joint estimation equations is provided. The finite sample performance of the proposed method is investigated in a simulation study, and compared to the standard regression calibration approach. Finally, we apply our methodology to part of the National Collaborative Perinatal Project growth data, a longitudinal study with an unusual measurement error structure. © 2009 American Statistical Association.

  14. Some examples of the estimation of error for calorimetric assay of plutonium-bearing solids

    International Nuclear Information System (INIS)

    Rodenburg, W.W.

    1977-04-01

    This report provides numerical examples of error estimation and related measurement assurance programs for the calorimetric assay of plutonium. It is primarily intended for users who do not consider themselves experts in the field of calorimetry. These examples will provide practical and useful information in establishing a calorimetric assay capability which fulfills regulatory requirements. 10 tables, 5 figures

  15. Can i just check...? Effects of edit check questions on measurement error and survey estimates

    NARCIS (Netherlands)

    Lugtig, Peter; Jäckle, Annette

    2014-01-01

    Household income is difficult to measure, since it requires the collection of information about all potential income sources for each member of a household. We assess the effects of two types of edit check questions on measurement error and survey estimates: within-wave edit checks use responses to

  16. Digital photography provides a fast, reliable, and noninvasive method to estimate anthocyanin pigment concentration in reproductive and vegetative plant tissues.

    Science.gov (United States)

    Del Valle, José C; Gallardo-López, Antonio; Buide, Mª Luisa; Whittall, Justen B; Narbona, Eduardo

    2018-03-01

    Anthocyanin pigments have become a model trait for evolutionary ecology as they often provide adaptive benefits for plants. Anthocyanins have been traditionally quantified biochemically or more recently using spectral reflectance. However, both methods require destructive sampling and can be labor intensive and challenging with small samples. Recent advances in digital photography and image processing make it the method of choice for measuring color in the wild. Here, we use digital images as a quick, noninvasive method to estimate relative anthocyanin concentrations in species exhibiting color variation. Using a consumer-level digital camera and a free image processing toolbox, we extracted RGB values from digital images to generate color indices. We tested petals, stems, pedicels, and calyces of six species, which contain different types of anthocyanin pigments and exhibit different pigmentation patterns. Color indices were assessed by their correlation to biochemically determined anthocyanin concentrations. For comparison, we also calculated color indices from spectral reflectance and tested the correlation with anthocyanin concentration. Indices perform differently depending on the nature of the color variation. For both digital images and spectral reflectance, the most accurate estimates of anthocyanin concentration emerge from anthocyanin content-chroma ratio, anthocyanin content-chroma basic, and strength of green indices. Color indices derived from both digital images and spectral reflectance strongly correlate with biochemically determined anthocyanin concentration; however, the estimates from digital images performed better than spectral reflectance in terms of r 2 and normalized root-mean-square error. This was particularly noticeable in a species with striped petals, but in the case of striped calyces, both methods showed a comparable relationship with anthocyanin concentration. Using digital images brings new opportunities to accurately quantify the

  17. GUM approach to uncertainty estimations for online 220Rn concentration measurements using Lucas scintillation cell

    International Nuclear Information System (INIS)

    Sathyabama, N.

    2014-01-01

    It is now widely recognized that, when all of the known or suspected components of errors have been evaluated and corrected, there still remains an uncertainty, that is, a doubt about how well the result of the measurement represents the value of the quantity being measured. Evaluation of measurement data - Guide to the expression of Uncertainty in Measurement (GUM) is a guidance document, the purpose of which is to promote full information on how uncertainty statements are arrived at and to provide a basis for the international comparison of measurement results. In this paper, uncertainty estimations following GUM guidelines have been made for the measured values of online thoron concentrations using Lucas scintillation cell to prove that the correction for disequilibrium between 220 Rn and 216 Po is significant in online 220 Rn measurements

  18. Computable error estimates of a finite difference scheme for option pricing in exponential Lévy models

    KAUST Repository

    Kiessling, Jonas; Tempone, Raul

    2014-01-01

    jump activity, then the jumps smaller than some (Formula presented.) are approximated by diffusion. The resulting diffusion approximation error is also estimated, with leading order term in computable form, as well as the dependence of the time

  19. Error Analysis on the Estimation of Cumulative Infiltration in Soil Using Green and AMPT Model

    Directory of Open Access Journals (Sweden)

    Muhamad Askari

    2006-08-01

    Full Text Available The Green and Ampt infiltration model is still useful for describing the infiltration process because of its clear physical basis and the existence of model parameter values for a wide range of soils. The objective of this study was to analyze the error in the estimation of cumulative infiltration in soil using the Green and Ampt model and to design a laboratory experiment for measuring cumulative infiltration. Parameters of the model were determined from soil physical properties obtained in the laboratory experiment. The Newton-Raphson method was used to estimate the wetting front during calculation, using Visual Basic for Applications (VBA) in MS Word. The results showed that  contributed the highest error in the estimation of cumulative infiltration, followed by K, H0, H1, and t, respectively. They also showed that the calculated cumulative infiltration is always lower than both the measured cumulative infiltration and the volumetric soil water content.
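
    The Newton-Raphson step mentioned in the abstract can be illustrated on the implicit Green-Ampt equation for cumulative infiltration, F = K·t + ψ·Δθ·ln(1 + F/(ψ·Δθ)). The soil parameter values below are textbook-style assumptions, not those of the study.

```python
# Illustration of the Newton-Raphson step on the implicit Green-Ampt equation
# F = K*t + psi*dtheta*ln(1 + F/(psi*dtheta)) for cumulative infiltration F.
# Soil parameter values are textbook-style assumptions, not those of the study.
import math

K = 1.09        # saturated hydraulic conductivity, cm/h (assumed)
psi = 11.01     # wetting-front suction head, cm (assumed)
dtheta = 0.247  # moisture deficit, i.e. porosity minus initial water content (assumed)

def green_ampt_F(t_hours, tol=1e-8, max_iter=100):
    """Solve F = K*t + c*ln(1 + F/c), with c = psi*dtheta, for F in cm."""
    c = psi * dtheta
    F = max(K * t_hours, 1e-6)                 # initial guess
    for _ in range(max_iter):
        g = F - c * math.log(1.0 + F / c) - K * t_hours
        dg = 1.0 - c / (c + F)                 # dg/dF = F / (c + F)
        F_new = F - g / dg
        if abs(F_new - F) < tol:
            return F_new
        F = F_new
    return F

for t in (0.5, 1.0, 2.0):
    print(f"t = {t} h -> F = {green_ampt_F(t):.3f} cm")
```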

  20. Analysis of a HP-refinement method for solving the neutron transport equation using two error estimators

    International Nuclear Information System (INIS)

    Fournier, D.; Le Tellier, R.; Suteau, C.; Herbin, R.

    2011-01-01

    Solving the time-independent neutron transport equation deterministically invariably consists in the successive discretization of the three variables: energy, angle and space. In the SNATCH solver used in this study, the energy and the angle are discretized with a multigroup approach and the discrete ordinates method, respectively. A set of coupled spatial transport equations is obtained and solved using the Discontinuous Galerkin Finite Element Method (DGFEM). Within this method, the spatial domain is decomposed into elements and the solution is approximated by a hierarchical polynomial basis in each one. This approach is time- and memory-consuming when the mesh becomes fine or the basis order high. To improve the computational time and the memory footprint, adaptive algorithms are proposed. These algorithms are based on an error estimate in each cell. If the error is large in a given region, the mesh is refined (h-refinement) or the polynomial basis order is increased (p-refinement). This paper addresses the choice between the two types of refinement. Two ways of estimating the error are compared on different benchmarks. Analyzing the differences, an hp-refinement method is proposed and tested. (author)

  1. Analysis of an a posteriori error estimator for the transport equation with SN and discontinuous Galerkin discretizations

    International Nuclear Information System (INIS)

    Fournier, D.; Le Tellier, R.; Suteau, C.

    2011-01-01

    We present an error estimator for the S N neutron transport equation discretized with an arbitrary high-order discontinuous Galerkin method. As a starting point, the estimator is obtained for conforming Cartesian meshes with a uniform polynomial order for the trial space then adapted to deal with non-conforming meshes and a variable polynomial order. Some numerical tests illustrate the properties of the estimator and its limitations. Finally, a simple shielding benchmark is analyzed in order to show the relevance of the estimator in an adaptive process.

  2. Procedures for using expert judgment to estimate human-error probabilities in nuclear power plant operations

    International Nuclear Information System (INIS)

    Seaver, D.A.; Stillwell, W.G.

    1983-03-01

    This report describes and evaluates several procedures for using expert judgment to estimate human-error probabilities (HEPs) in nuclear power plant operations. These HEPs are currently needed for several purposes, particularly for probabilistic risk assessments. Data do not exist for estimating these HEPs, so expert judgment can provide these estimates in a timely manner. Five judgmental procedures are described here: paired comparisons, ranking and rating, direct numerical estimation, indirect numerical estimation and multiattribute utility measurement. These procedures are evaluated in terms of several criteria: quality of judgments, difficulty of data collection, empirical support, acceptability, theoretical justification, and data processing. Situational constraints such as the number of experts available, the number of HEPs to be estimated, the time available, the location of the experts, and the resources available are discussed in regard to their implications for selecting a procedure for use

  3. Development of a framework to estimate human error for diagnosis tasks in advanced control room

    International Nuclear Information System (INIS)

    Kim, Ar Ryum; Jang, In Seok; Seong, Proong Hyun

    2014-01-01

    In the emergency situation of nuclear power plants (NPPs), a diagnosis of the occurring events is crucial for managing or controlling the plant to a safe and stable condition. If the operators fail to diagnose the occurring events or relevant situations, their responses can eventually be inappropriate or inadequate. Accordingly, many studies have been performed to identify the causes of diagnosis error and to estimate the probability of diagnosis error. D.I. Gertman et al. asserted that 'the cognitive failures stem from erroneous decision-making, poor understanding of rules and procedures, and inadequate problem solving and these failures may be due to quality of data and people's capacity for processing information'. Many researchers have also asserted that the human-system interface (HSI), procedures, training and available time are critical factors causing diagnosis error. In nuclear power plants, a diagnosis of the event is critical for the safe condition of the system. As advanced main control rooms are being adopted in nuclear power plants, the operators may obtain the plant data via computer-based HSI and procedures. In this regard, using simulation data, diagnosis errors and their causes were identified. From this study, some useful insights to reduce diagnosis errors of operators in advanced main control rooms were provided

  4. Estimation of maximum credible atmospheric radioactivity concentrations and dose rates from nuclear tests

    International Nuclear Information System (INIS)

    Telegadas, K.

    1979-01-01

    A simple technique is presented for estimating maximum credible gross beta air concentrations from nuclear detonations in the atmosphere, based on aircraft sampling of radioactivity following each Chinese nuclear test from 1964 to 1976. The calculated concentration is a function of the total yield and fission yield, the initial vertical radioactivity distribution, the time after detonation, and the rate of horizontal spread of the debris with time. Calculated maximum credible concentrations are compared with the highest concentrations measured during aircraft sampling. The technique provides a reasonable estimate of maximum air concentrations from 1 to 10 days after a detonation. An estimate of the whole-body external gamma dose rate corresponding to the maximum credible gross beta concentration is also given. (author)

  5. The detector response simulation for the CBM silicon tracking system as a tool for hit error estimation

    Energy Technology Data Exchange (ETDEWEB)

    Malygina, Hanna [Goethe Universitaet Frankfurt (Germany); KINR, Kyiv (Ukraine); GSI Helmholtzzentrum fuer Schwerionenforschung GmbH, Darmstadt (Germany); Friese, Volker; Zyzak, Maksym [GSI Helmholtzzentrum fuer Schwerionenforschung GmbH, Darmstadt (Germany); Collaboration: CBM-Collaboration

    2016-07-01

    The Compressed Baryonic Matter experiment (CBM) at FAIR is designed to explore the QCD phase diagram in the region of high net-baryon densities. As the central detector component, the Silicon Tracking System (STS) is based on double-sided micro-strip sensors. To achieve realistic modelling, the response of the silicon strip sensors should be precisely included in the digitizer, which simulates the complete chain of physical processes caused by charged particles traversing the detector, from charge creation in silicon to a digital output signal. The current implementation of the STS digitizer comprises non-uniform energy loss distributions (according to the Urban theory), thermal diffusion and charge redistribution over the read-out channels due to interstrip capacitances. Using the digitizer, one can test the influence of each physical process on the hit error separately. We have developed a new cluster position finding algorithm and a hit error estimation method for it. Estimated errors were verified by the width of the pull distribution (expected to be about unity) and its shape.

  6. Estimation of error on the cross-correlation, phase and time lag between evenly sampled light curves

    Science.gov (United States)

    Misra, R.; Bora, A.; Dewangan, G.

    2018-04-01

    Temporal analysis of radiation from astrophysical sources like active galactic nuclei, X-ray binaries and gamma-ray bursts provides information on the geometry and sizes of the emitting regions. Establishing that two light curves in different energy bands are correlated, and measuring the phase and time lag between them, is an important and frequently used temporal diagnostic. Generally the estimates are made by dividing the light curves into a large number of adjacent intervals to find the variance, or by using numerically expensive simulations. In this work we present alternative expressions for estimating the errors on the cross-correlation, phase and time lag between two shorter light curves when they cannot be divided into segments. Thus the estimates presented here allow for the analysis of light curves with a relatively small number of points, as well as for obtaining information on the longest time-scales available. The expressions have been tested using 200 light curves simulated from both white and 1/f stochastic processes with measurement errors. We also present an application to the XMM-Newton light curves of the active galactic nucleus Akn 564. The example shows that the estimates presented here allow for the analysis of light curves with a relatively small (∼1000) number of points.

  7. -Error Estimates of the Extrapolated Crank-Nicolson Discontinuous Galerkin Approximations for Nonlinear Sobolev Equations

    Directory of Open Access Journals (Sweden)

    Lee HyunYoung

    2010-01-01

    Full Text Available We analyze discontinuous Galerkin methods with penalty terms, namely, symmetric interior penalty Galerkin methods, to solve nonlinear Sobolev equations. We construct finite element spaces on which we develop fully discrete approximations using extrapolated Crank-Nicolson method. We adopt an appropriate elliptic-type projection, which leads to optimal error estimates of discontinuous Galerkin approximations in both spatial direction and temporal direction.

  8. Simultaneous estimation of cross-validation errors in least squares collocation applied for statistical testing and evaluation of the noise variance components

    Science.gov (United States)

    Behnabian, Behzad; Mashhadi Hossainali, Masoud; Malekzadeh, Ahad

    2018-02-01

    The cross-validation technique is a popular method to assess and improve the quality of prediction by least squares collocation (LSC). We present a formula for direct estimation of the vector of cross-validation errors (CVEs) in LSC which is much faster than element-wise CVE computation. We show that a quadratic form of CVEs follows Chi-squared distribution. Furthermore, a posteriori noise variance factor is derived by the quadratic form of CVEs. In order to detect blunders in the observations, estimated standardized CVE is proposed as the test statistic which can be applied when noise variances are known or unknown. We use LSC together with the methods proposed in this research for interpolation of crustal subsidence in the northern coast of the Gulf of Mexico. The results show that after detection and removing outliers, the root mean square (RMS) of CVEs and estimated noise standard deviation are reduced about 51 and 59%, respectively. In addition, RMS of LSC prediction error at data points and RMS of estimated noise of observations are decreased by 39 and 67%, respectively. However, RMS of LSC prediction error on a regular grid of interpolation points covering the area is only reduced about 4% which is a consequence of sparse distribution of data points for this case study. The influence of gross errors on LSC prediction results is also investigated by lower cutoff CVEs. It is indicated that after elimination of outliers, RMS of this type of errors is also reduced by 19.5% for a 5 km radius of vicinity. We propose a method using standardized CVEs for classification of dataset into three groups with presumed different noise variances. The noise variance components for each of the groups are estimated using restricted maximum-likelihood method via Fisher scoring technique. Finally, LSC assessment measures were computed for the estimated heterogeneous noise variance model and compared with those of the homogeneous model. The advantage of the proposed method is the
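
    The speed-up from computing all cross-validation errors in one pass, which the record derives for least squares collocation, has a simple analogue for linear (ridge / least squares type) predictors: e_cv,i = e_i / (1 − H_ii). The sketch below illustrates that analogue on synthetic data; it is not the paper's LSC formula.

```python
# Analogue sketch (not the paper's LSC formulas): for a linear ridge/least
# squares predictor the leave-one-out cross-validation errors follow directly
# from a single fit, e_cv[i] = e[i] / (1 - H[i, i]), avoiding n refits.
import numpy as np

rng = np.random.default_rng(5)
n, p = 200, 4
X = rng.standard_normal((n, p))
beta = np.array([1.0, -2.0, 0.5, 3.0])
y = X @ beta + rng.normal(0, 0.5, n)

lam = 1.0                                                  # regularization (assumed)
H = X @ np.linalg.solve(X.T @ X + lam * np.eye(p), X.T)    # hat matrix
resid = y - H @ y
cve_direct = resid / (1.0 - np.diag(H))                    # all LOO errors at once

# Brute-force check for the first observation: refit without it.
idx = np.arange(1, n)
b = np.linalg.solve(X[idx].T @ X[idx] + lam * np.eye(p), X[idx].T @ y[idx])
print(cve_direct[0], y[0] - X[0] @ b)                      # should agree closely
```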

  9. NLO error propagation exercise: statistical results

    International Nuclear Information System (INIS)

    Pack, D.J.; Downing, D.J.

    1985-09-01

    Error propagation is the extrapolation and cumulation of uncertainty (variance) about total amounts of special nuclear material, for example, uranium or 235 U, that are present in a defined location at a given time. The uncertainty results from the inevitable inexactness of individual measurements of weight, uranium concentration, 235 U enrichment, etc. The extrapolated and cumulated uncertainty leads directly to quantified limits of error on inventory differences (LEIDs) for such material. The NLO error propagation exercise was planned as a field demonstration of the utilization of statistical error propagation methodology at the Feed Materials Production Center in Fernald, Ohio from April 1 to July 1, 1983 in a single material balance area formed specially for the exercise. Major elements of the error propagation methodology were: variance approximation by Taylor series expansion; variance cumulation by uncorrelated primary error sources as suggested by Jaech; random effects ANOVA model estimation of variance effects (systematic error); provision for inclusion of process variance in addition to measurement variance; and exclusion of static material. The methodology was applied to material balance area transactions from the indicated time period through a FORTRAN computer code developed specifically for this purpose on the NLO HP-3000 computer. This paper contains a complete description of the error propagation methodology and a full summary of the numerical results of applying the methodology in the field demonstration. The error propagation LEIDs did encompass the actual uranium and 235 U inventory differences. Further, one can see that error propagation actually provides guidance for reducing inventory differences and LEIDs in future time periods
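
    The first element listed above, variance approximation by Taylor series expansion, can be illustrated for a single measured item: for a product of independent factors, the relative variances add to first order. The weights, factors and relative standard deviations below are invented for the example, not plant data.

```python
# Toy illustration of variance approximation by Taylor series expansion for one
# measured item: 235U mass = net weight x uranium factor x enrichment. For a
# product of independent factors the relative variances add to first order.
# All values are invented for the example, not plant data.
weight = 120.0        # kg of material
u_factor = 0.85       # g uranium per g material
enrich = 0.0072       # g 235U per g uranium
rel_sd = {"weight": 0.001, "u_factor": 0.002, "enrich": 0.003}   # assumed 1-sigma, relative

mass_235 = weight * u_factor * enrich                  # kg 235U
rel_var = sum(r ** 2 for r in rel_sd.values())         # first-order propagation
sd_235 = mass_235 * rel_var ** 0.5
print(f"{mass_235 * 1e3:.1f} g 235U +/- {sd_235 * 1e3:.2f} g (1 sigma)")
```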

  10. Evaluating EIV, OLS, and SEM Estimators of Group Slope Differences in the Presence of Measurement Error: The Single-Indicator Case

    Science.gov (United States)

    Culpepper, Steven Andrew

    2012-01-01

    Measurement error significantly biases interaction effects and distorts researchers' inferences regarding interactive hypotheses. This article focuses on the single-indicator case and shows how to accurately estimate group slope differences by disattenuating interaction effects with errors-in-variables (EIV) regression. New analytic findings were…

  11. Measurement Error Estimation for Capacitive Voltage Transformer by Insulation Parameters

    Directory of Open Access Journals (Sweden)

    Bin Chen

    2017-03-01

    Full Text Available Measurement errors of a capacitive voltage transformer (CVT) are related to its equivalent parameters, to which its capacitive divider contributes the most. In daily operation, dielectric aging, moisture, dielectric breakdown, etc., exert combined effects on the capacitive divider's insulation characteristics, leading to fluctuations in the equivalent parameters, which result in measurement error. This paper proposes an equivalent circuit model to represent a CVT which incorporates the insulation characteristics of the capacitive divider. After software simulation and laboratory experiments, the relationship between measurement errors and insulation parameters is obtained. It indicates that variation of the insulation parameters in a CVT will cause a corresponding measurement error. From field tests and calculation, equivalent capacitance mainly affects the magnitude error, while dielectric loss mainly affects the phase error. As the capacitance changes by 0.2%, the magnitude error can reach −0.2%. As the dielectric loss factor changes by 0.2%, the phase error can reach 5′. An increase of equivalent capacitance and dielectric loss factor in the high-voltage capacitor will cause a positive real power measurement error. An increase of equivalent capacitance and dielectric loss factor in the low-voltage capacitor will cause a negative real power measurement error.

  12. Multivariate Error Covariance Estimates by Monte-Carlo Simulation for Assimilation Studies in the Pacific Ocean

    Science.gov (United States)

    Borovikov, Anna; Rienecker, Michele M.; Keppenne, Christian; Johnson, Gregory C.

    2004-01-01

    One of the most difficult aspects of ocean state estimation is the prescription of the model forecast error covariances. The paucity of ocean observations limits our ability to estimate the covariance structures from model-observation differences. In most practical applications, simple covariances are usually prescribed. Rarely are cross-covariances between different model variables used. Here a comparison is made between a univariate Optimal Interpolation (UOI) scheme and a multivariate OI algorithm (MvOI) in the assimilation of ocean temperature. In the UOI case only temperature is updated using a Gaussian covariance function and in the MvOI salinity, zonal and meridional velocities as well as temperature, are updated using an empirically estimated multivariate covariance matrix. Earlier studies have shown that a univariate OI has a detrimental effect on the salinity and velocity fields of the model. Apparently, in a sequential framework it is important to analyze temperature and salinity together. For the MvOI an estimation of the model error statistics is made by Monte-Carlo techniques from an ensemble of model integrations. An important advantage of using an ensemble of ocean states is that it provides a natural way to estimate cross-covariances between the fields of different physical variables constituting the model state vector, at the same time incorporating the model's dynamical and thermodynamical constraints as well as the effects of physical boundaries. Only temperature observations from the Tropical Atmosphere-Ocean array have been assimilated in this study. In order to investigate the efficacy of the multivariate scheme two data assimilation experiments are validated with a large independent set of recently published subsurface observations of salinity, zonal velocity and temperature. For reference, a third control run with no data assimilation is used to check how the data assimilation affects systematic model errors. While the performance of the

  13. In vitro-in vivo extrapolation: estimation of human serum concentrations of chemicals equivalent to cytotoxic concentrations in vitro

    International Nuclear Information System (INIS)

    Guelden, Michael; Seibert, Hasso

    2003-01-01

    In the present study an extrapolation model for estimating serum concentrations of chemicals equivalent to in vitro effective concentrations is developed and applied to median cytotoxic concentrations (EC 50 ) determined in vitro. Nominal concentrations of a chemical in serum and in vitro are regarded as equivalent, if they result in the same aqueous concentration of the unbound form. The algorithm used is based on equilibrium distribution and requires albumin binding data, the octanol-water partition coefficient (K ow ), and the albumin concentrations and lipid volume fractions in vitro and in serum. The chemicals studied cover wide ranges of cytotoxic potency (EC 50 : 2.5-530000 μM) and lipophilicity (log K ow : -5 to 7). Their albumin binding characteristics have been determined by means of an in vitro cytotoxicity test as described previously. The equivalent serum concentrations of 19 of the 33 compounds investigated, having high protein binding and/or lipophilicity, were substantially higher than the EC 50 -values, by factors of 2.5-58. Prominent deviations between the equivalent nominal concentrations in serum and in vitro were largely restricted to chemicals with higher cytotoxic potency (EC 50 ≤1000 μM). The results suggest that estimates of equivalent serum concentrations based on in vitro data are robust for chemicals with low lipophilicity (log K ow ≤2) and low potency (EC 50 >1000 μM). With more potent chemicals or those with higher lipophilicity partitioning into lipids and/or binding to serum proteins have to be taken into account when estimating in vivo serum concentrations equivalent to in vitro effective concentrations

  14. An error bound estimate and convergence of the Nodal-LTS {sub N} solution in a rectangle

    Energy Technology Data Exchange (ETDEWEB)

    Hauser, Eliete Biasotto [Faculty of Mathematics, PUCRS Av Ipiranga 6681, Building 15, Porto Alegre - RS 90619-900 (Brazil)]. E-mail: eliete@pucrs.br; Pazos, Ruben Panta [Department of Mathematics, UNISC Av Independencia, 2293, room 1301, Santa Cruz do Sul - RS 96815-900 (Brazil)]. E-mail: rpp@impa.br; Tullio de Vilhena, Marco [Graduate Program in Applied Mathematics, UFRGS Av Bento Goncalves 9500, Building 43-111, Porto Alegre - RS 91509-900 (Brazil)]. E-mail: vilhena@mat.ufrgs.br

    2005-07-15

    In this work, we report the mathematical analysis concerning error bound estimate and convergence of the Nodal-LTS {sub N} solution in a rectangle. For such we present an efficient algorithm, called LTS {sub N} 2D-Diag solution for Cartesian geometry.

  15. Results and Error Estimates from GRACE Forward Modeling over Antarctica

    Science.gov (United States)

    Bonin, Jennifer; Chambers, Don

    2013-04-01

    Forward modeling using a weighted least squares technique allows GRACE information to be projected onto a pre-determined collection of local basins. This decreases the impact of spatial leakage, allowing estimates of mass change to be better localized. The technique is especially valuable where models of current-day mass change are poor, such as over Antarctica. However when tested previously, the least squares technique has required constraints in the form of added process noise in order to be reliable. Poor choice of local basin layout has also adversely affected results, as has the choice of spatial smoothing used with GRACE. To develop design parameters which will result in correct high-resolution mass detection and to estimate the systematic errors of the method over Antarctica, we use a "truth" simulation of the Antarctic signal. We apply the optimal parameters found from the simulation to RL05 GRACE data across Antarctica and the surrounding ocean. We particularly focus on separating the Antarctic peninsula's mass signal from that of the rest of western Antarctica. Additionally, we characterize how well the technique works for removing land leakage signal from the nearby ocean, particularly that near the Drake Passage.

  16. Sensitivity of APSIM/ORYZA model due to estimation errors in solar radiation

    Directory of Open Access Journals (Sweden)

    Alexandre Bryan Heinemann

    2012-01-01

    Full Text Available Crop models are ideally suited to quantify existing climatic risks. However, they require historic climate data as input. While daily temperature and rainfall data are often available, the lack of observed solar radiation (Rs data severely limits site-specific crop modelling. The objective of this study was to estimate Rs based on air temperature solar radiation models and to quantify the propagation of errors in simulated radiation on several APSIM/ORYZA crop model seasonal outputs, yield, biomass, leaf area (LAI and total accumulated solar radiation (SRA during the crop cycle. The accuracy of the 5 models for estimated daily solar radiation was similar, and it was not substantially different among sites. For water limited environments (no irrigation, crop model outputs yield, biomass and LAI was not sensitive for the uncertainties in radiation models studied here.
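
    One widely used air-temperature-based radiation model of the kind evaluated here is the Hargreaves-Samani relation, Rs = krs·√(Tmax − Tmin)·Ra; whether it is among the five models tested is not stated in the record, so treat the sketch below as a generic example. Ra (extraterrestrial radiation) is supplied by the caller, and krs = 0.16 is the usual interior-location default.

```python
# Generic example of an air-temperature-based radiation model of the kind
# evaluated in the study (Hargreaves-Samani form); whether this exact model was
# among the five tested is not stated in the record. Ra (extraterrestrial
# radiation) is supplied by the caller; krs = 0.16 is a common interior default.
import math

def hargreaves_rs(tmax_c, tmin_c, ra_mj_m2, krs=0.16):
    """Estimate daily solar radiation Rs (MJ m-2 d-1) from the temperature range."""
    return krs * math.sqrt(max(tmax_c - tmin_c, 0.0)) * ra_mj_m2

# Example day: 32/21 degC with Ra = 38 MJ m-2 d-1 (assumed low-latitude value).
print(hargreaves_rs(32.0, 21.0, 38.0))   # about 20 MJ m-2 d-1
```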

  17. Practical error estimates for Reynolds' lubrication approximation and its higher order corrections

    Energy Technology Data Exchange (ETDEWEB)

    Wilkening, Jon

    2008-12-10

    Reynolds lubrication approximation is used extensively to study flows between moving machine parts, in narrow channels, and in thin films. The solution of Reynolds equation may be thought of as the zeroth order term in an expansion of the solution of the Stokes equations in powers of the aspect ratio ε of the domain. In this paper, we show how to compute the terms in this expansion to arbitrary order on a two-dimensional, x-periodic domain and derive rigorous, a-priori error bounds for the difference between the exact solution and the truncated expansion solution. Unlike previous studies of this sort, the constants in our error bounds are either independent of the function h(x) describing the geometry, or depend on h and its derivatives in an explicit, intuitive way. Specifically, if the expansion is truncated at order 2k, the error is O(ε^{2k+2}) and h enters into the error bound only through its first and third inverse moments ∫₀¹ h(x)^{-m} dx, m = 1, 3, and via the max norms ‖(1/ℓ!) h^{ℓ-1} ∂_x^ℓ h‖_∞, 1 ≤ ℓ ≤ 2k+2. We validate our estimates by comparing with finite element solutions and present numerical evidence that suggests that even when h is real analytic and periodic, the expansion solution forms an asymptotic series rather than a convergent series.

  18. Dipole estimation errors due to not incorporating anisotropic conductivities in realistic head models for EEG source analysis

    Science.gov (United States)

    Hallez, Hans; Staelens, Steven; Lemahieu, Ignace

    2009-10-01

    EEG source analysis is a valuable tool for brain functionality research and for diagnosing neurological disorders, such as epilepsy. It requires a geometrical representation of the human head or a head model, which is often modeled as an isotropic conductor. However, it is known that some brain tissues, such as the skull or white matter, have an anisotropic conductivity. Many studies reported that the anisotropic conductivities have an influence on the calculated electrode potentials. However, few studies have assessed the influence of anisotropic conductivities on the dipole estimations. In this study, we want to determine the dipole estimation errors due to not taking into account the anisotropic conductivities of the skull and/or brain tissues. Therefore, head models are constructed with the same geometry, but with an anisotropically conducting skull and/or brain tissue compartment. These head models are used in simulation studies where the dipole location and orientation error is calculated due to neglecting anisotropic conductivities of the skull and brain tissue. Results show that not taking into account the anisotropic conductivities of the skull yields a dipole location error between 2 and 25 mm, with an average of 10 mm. When the anisotropic conductivities of the brain tissues are neglected, the dipole location error ranges between 0 and 5 mm. In this case, the average dipole location error was 2.3 mm. In all simulations, the dipole orientation error was smaller than 10°. We can conclude that the anisotropic conductivities of the skull have to be incorporated to improve the accuracy of EEG source analysis. The results of the simulation, as presented here, also suggest that incorporation of the anisotropic conductivities of brain tissues is not necessary. However, more studies are needed to confirm these suggestions.

  19. Dipole estimation errors due to not incorporating anisotropic conductivities in realistic head models for EEG source analysis

    International Nuclear Information System (INIS)

    Hallez, Hans; Staelens, Steven; Lemahieu, Ignace

    2009-01-01

    EEG source analysis is a valuable tool for brain functionality research and for diagnosing neurological disorders, such as epilepsy. It requires a geometrical representation of the human head or a head model, which is often modeled as an isotropic conductor. However, it is known that some brain tissues, such as the skull or white matter, have an anisotropic conductivity. Many studies reported that the anisotropic conductivities have an influence on the calculated electrode potentials. However, few studies have assessed the influence of anisotropic conductivities on the dipole estimations. In this study, we want to determine the dipole estimation errors due to not taking into account the anisotropic conductivities of the skull and/or brain tissues. Therefore, head models are constructed with the same geometry, but with an anisotropically conducting skull and/or brain tissue compartment. These head models are used in simulation studies where the dipole location and orientation error is calculated due to neglecting anisotropic conductivities of the skull and brain tissue. Results show that not taking into account the anisotropic conductivities of the skull yields a dipole location error between 2 and 25 mm, with an average of 10 mm. When the anisotropic conductivities of the brain tissues are neglected, the dipole location error ranges between 0 and 5 mm. In this case, the average dipole location error was 2.3 mm. In all simulations, the dipole orientation error was smaller than 10 deg. We can conclude that the anisotropic conductivities of the skull have to be incorporated to improve the accuracy of EEG source analysis. The results of the simulation, as presented here, also suggest that incorporation of the anisotropic conductivities of brain tissues is not necessary. However, more studies are needed to confirm these suggestions.

  20. Towards automatic global error control: Computable weak error expansion for the tau-leap method

    KAUST Repository

    Karlsson, Peer Jesper; Tempone, Raul

    2011-01-01

    This work develops novel error expansions with computable leading order terms for the global weak error in the tau-leap discretization of pure jump processes arising in kinetic Monte Carlo models. Accurate computable a posteriori error approximations are the basis for adaptive algorithms, a fundamental tool for numerical simulation of both deterministic and stochastic dynamical systems. These pure jump processes are simulated either by the tau-leap method, or by exact simulation, also referred to as dynamic Monte Carlo, the Gillespie Algorithm or the Stochastic Simulation Algorithm. Two types of estimates are presented: an a priori estimate for the relative error that gives a comparison between the work for the two methods depending on the propensity regime, and an a posteriori estimate with computable leading order term. © de Gruyter 2011.

  1. Reducing Monte Carlo error in the Bayesian estimation of risk ratios using log-binomial regression models.

    Science.gov (United States)

    Salmerón, Diego; Cano, Juan A; Chirlaque, María D

    2015-08-30

    In cohort studies, binary outcomes are very often analyzed by logistic regression. However, it is well known that when the goal is to estimate a risk ratio, the logistic regression is inappropriate if the outcome is common. In these cases, a log-binomial regression model is preferable. On the other hand, the estimation of the regression coefficients of the log-binomial model is difficult owing to the constraints that must be imposed on these coefficients. Bayesian methods allow a straightforward approach for log-binomial regression models and produce smaller mean squared errors in the estimation of risk ratios than the frequentist methods, and the posterior inferences can be obtained using the software WinBUGS. However, Markov chain Monte Carlo methods implemented in WinBUGS can lead to large Monte Carlo errors in the approximations to the posterior inferences because they produce correlated simulations, and the accuracy of the approximations are inversely related to this correlation. To reduce correlation and to improve accuracy, we propose a reparameterization based on a Poisson model and a sampling algorithm coded in R. Copyright © 2015 John Wiley & Sons, Ltd.
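
    A related, purely frequentist sketch (not the paper's Bayesian reparameterization): the "modified Poisson" approach of estimating a risk ratio for a common binary outcome with a Poisson GLM and robust standard errors, which sidesteps the log-binomial convergence problems alluded to above. The simulated cohort and the true risk ratio of 1.8 are assumptions.

```python
# Related frequentist sketch (not the paper's Bayesian sampler): the "modified
# Poisson" approach to estimating a risk ratio for a common binary outcome with
# a Poisson GLM and robust (HC0) standard errors. The simulated cohort and the
# true risk ratio of 1.8 are assumptions.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(11)
n = 2000
exposed = rng.integers(0, 2, n)
risk = 0.20 * np.where(exposed == 1, 1.8, 1.0)       # true risk ratio = 1.8
y = rng.binomial(1, risk)

X = sm.add_constant(exposed.astype(float))
fit = sm.GLM(y, X, family=sm.families.Poisson()).fit(cov_type="HC0")
print("estimated risk ratio:", np.exp(fit.params[1]))
```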

  2. Density functionals for surface science: Exchange-correlation model development with Bayesian error estimation

    DEFF Research Database (Denmark)

    Wellendorff, Jess; Lundgård, Keld Troen; Møgelhøj, Andreas

    2012-01-01

    A methodology for semiempirical density functional optimization, using regularization and cross-validation methods from machine learning, is developed. We demonstrate that such methods enable well-behaved exchange-correlation approximations in very flexible model spaces, thus avoiding the overfitting ... the energetics of intramolecular and intermolecular, bulk solid, and surface chemical bonding, and the developed optimization method explicitly handles making the compromise based on the directions in model space favored by different materials properties. The approach is applied to designing the Bayesian error ... sets validates the applicability of BEEF-vdW to studies in chemistry and condensed matter physics. Applications of the approximation and its Bayesian ensemble error estimate to two intricate surface science problems support this.

  3. Influence of binary mask estimation errors on robust speaker identification

    DEFF Research Database (Denmark)

    May, Tobias

    2017-01-01

    Missing-data strategies have been developed to improve the noise-robustness of automatic speech recognition systems in adverse acoustic conditions. This is achieved by classifying time-frequency (T-F) units into reliable and unreliable components, as indicated by a so-called binary mask. Different approaches have been proposed to handle unreliable feature components, each with distinct advantages. The direct masking (DM) approach attenuates unreliable T-F units in the spectral domain, which allows the extraction of conventionally used mel-frequency cepstral coefficients (MFCCs). Instead of attenuating ... Since each of these approaches utilizes the knowledge about reliable and unreliable feature components in a different way, they will respond differently to estimation errors in the binary mask. The goal of this study was to identify the most effective strategy to exploit knowledge about reliable

  4. Methods for estimating heterocyclic amine concentrations in cooked meats in the US diet.

    Science.gov (United States)

    Keating, G A; Bogen, K T

    2001-01-01

    Heterocyclic amines (HAs) are formed in numerous cooked foods commonly consumed in the diet. A method was developed to estimate dietary HA levels using HA concentrations in experimentally cooked meats reported in the literature and meat consumption data obtained from a national dietary survey. Cooking variables (meat internal temperature and weight loss, surface temperature and time) were used to develop relationships for estimating total HA concentrations in six meat types. Concentrations of five individual HAs were estimated for specific meat type/cooking method combinations based on linear regression of total and individual HA values obtained from the literature. Using these relationships, total and individual HA concentrations were estimated for 21 meat type/cooking method combinations at four meat doneness levels. Reported consumption of the 21 meat type/cooking method combinations was obtained from a national dietary survey and the age-specific daily HA intake calculated using the estimated HA concentrations (ng/g) and reported meat intakes. Estimated mean daily total HA intakes for children (to age 15 years) and adults (30+ years) were 11 and 7.0 ng/kg/day, respectively, with 2-amino-1-methyl-6-phenylimidazo[4,5-b]pyridine (PhIP) estimated to comprise approximately 65% of each intake. Pan-fried meats were the largest source of HA in the diet and chicken the largest source of HAs among the different meat types.
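
    A small illustrative calculation (numbers invented, not the paper's data) of the intake arithmetic described above: HA concentration in cooked meat (ng/g) times daily consumption (g/day), summed over meat type/cooking method combinations and normalised by body weight.

    ```python
    # Daily heterocyclic amine (HA) intake from concentration x consumption,
    # normalised by body weight.  All values are illustrative placeholders.
    ha_conc_ng_per_g = {"pan-fried beef": 5.2, "grilled chicken": 8.1}
    meat_intake_g_per_day = {"pan-fried beef": 60.0, "grilled chicken": 40.0}
    body_weight_kg = 70.0

    total_ng_per_kg_day = sum(
        ha_conc_ng_per_g[item] * meat_intake_g_per_day[item]
        for item in ha_conc_ng_per_g
    ) / body_weight_kg
    print(f"estimated HA intake: {total_ng_per_kg_day:.1f} ng/kg/day")
    ```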

  5. Error analysis of isotope dilution mass spectrometry method with internal standard

    International Nuclear Information System (INIS)

    Rizhinskii, M.W.; Vitinskii, M.Y.

    1989-02-01

    The computation algorithms of the normalized isotopic ratios and element concentration by isotope dilution mass spectrometry with internal standard are presented. A procedure based on the Monte-Carlo calculation is proposed for predicting the magnitude of the errors to be expected. The estimation of systematic and random errors is carried out in the case of the certification of uranium and plutonium reference materials as well as for the use of those reference materials in the analysis of irradiated nuclear fuels. 4 refs, 11 figs, 2 tabs
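
    A generic sketch of the Monte-Carlo error-prediction idea described in the record. The actual isotope-dilution expression and uncertainty budget of the paper are not reproduced; the `concentration` function below is a placeholder to be replaced by the real measurement equation, and the error magnitudes are illustrative.

    ```python
    # Monte-Carlo prediction of the random error of a derived concentration.
    import numpy as np

    rng = np.random.default_rng(1)

    def concentration(ratio_mix, ratio_spike, spike_amount):
        # Placeholder relation; substitute the actual isotope-dilution equation.
        return spike_amount * (ratio_spike - ratio_mix) / (ratio_mix - 0.0073)

    n = 100_000
    ratio_mix = 1.20 * (1 + 0.002 * rng.standard_normal(n))     # 0.2 % rel. error
    ratio_spike = 50.0 * (1 + 0.001 * rng.standard_normal(n))   # 0.1 % rel. error
    spike_amount = 10.0 * (1 + 0.001 * rng.standard_normal(n))  # 0.1 % rel. error

    c = concentration(ratio_mix, ratio_spike, spike_amount)
    print(f"predicted relative standard deviation: {c.std() / c.mean():.3%}")
    ```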

  6. Global Estimates of Errors in Quantum Computation by the Feynman-Vernon Formalism

    Science.gov (United States)

    Aurell, Erik

    2018-04-01

    The operation of a quantum computer is considered as a general quantum operation on a mixed state on many qubits followed by a measurement. The general quantum operation is further represented as a Feynman-Vernon double path integral over the histories of the qubits and of an environment, and afterward tracing out the environment. The qubit histories are taken to be paths on the two-sphere S^2 as in Klauder's coherent-state path integral of spin, and the environment is assumed to consist of harmonic oscillators initially in thermal equilibrium, and linearly coupled to qubit operators \hat{S}_z. The environment can then be integrated out to give a Feynman-Vernon influence action coupling the forward and backward histories of the qubits. This representation allows one to derive in a simple way estimates that the total error of operation of a quantum computer without error correction scales linearly with the number of qubits and the time of operation. It also allows one to discuss Kitaev's toric code interacting with an environment in the same manner.

  8. Estimating Terrestrial Wood Biomass from Observed Concentrations of Atmospheric Carbon Dioxide

    NARCIS (Netherlands)

    Schaefer, K. M.; Peters, W.; Carvalhais, N.; van der Werf, G.; Miller, J.

    2008-01-01

    We estimate terrestrial disequilibrium state and wood biomass from observed concentrations of atmospheric CO2 using the CarbonTracker system coupled to the SiBCASA biophysical model. Starting with a priori estimates of carbon flux from the land, ocean, and fossil fuels, CarbonTracker estimates net

  9. Taylor-series and Monte-Carlo-method uncertainty estimation of the width of a probability distribution based on varying bias and random error

    International Nuclear Information System (INIS)

    Wilson, Brandon M; Smith, Barton L

    2013-01-01

    Uncertainties are typically assumed to be constant or a linear function of the measured value; however, this is generally not true. Particle image velocimetry (PIV) is one example of a measurement technique that has highly nonlinear, time varying local uncertainties. Traditional uncertainty methods are not adequate for the estimation of the uncertainty of measurement statistics (mean and variance) in the presence of nonlinear, time varying errors. Propagation of instantaneous uncertainty estimates into measured statistics is performed allowing accurate uncertainty quantification of time-mean and statistics of measurements such as PIV. It is shown that random errors will always elevate the measured variance, and thus turbulent statistics such as u'u'-bar. Within this paper, nonlinear, time varying errors are propagated from instantaneous measurements into the measured mean and variance using the Taylor-series method. With these results and knowledge of the systematic and random uncertainty of each measurement, the uncertainty of the time-mean, the variance and covariance can be found. Applicability of the Taylor-series uncertainty equations to time varying systematic and random errors and asymmetric error distributions are demonstrated with Monte-Carlo simulations. The Taylor-series uncertainty estimates are always accurate for uncertainties on the mean quantity. The Taylor-series variance uncertainty is similar to the Monte-Carlo results for cases in which asymmetric random errors exist or the magnitude of the instantaneous variations in the random and systematic errors is near the ‘true’ variance. However, the Taylor-series method overpredicts the uncertainty in the variance as the instantaneous variations of systematic errors are large or are on the same order of magnitude as the ‘true’ variance. (paper)
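
    A short simulation (not from the paper) of the abstract's central point that random measurement error always inflates a measured variance: for independent error, the measured variance is approximately the true variance plus the error variance.

    ```python
    # Random error inflates the measured variance of a fluctuating quantity.
    import numpy as np

    rng = np.random.default_rng(2)
    true_signal = rng.normal(10.0, 2.0, 200_000)                      # true variance = 4
    measured = true_signal + rng.normal(0.0, 1.0, true_signal.size)   # error variance = 1

    print("true variance    :", round(true_signal.var(), 3))
    print("measured variance:", round(measured.var(), 3))             # ~ 4 + 1
    ```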

  10. Recursive prediction error methods for online estimation in nonlinear state-space models

    Directory of Open Access Journals (Sweden)

    Dag Ljungquist

    1994-04-01

    Full Text Available Several recursive algorithms for online, combined state and parameter estimation in nonlinear state-space models are discussed in this paper. Well-known algorithms such as the extended Kalman filter and alternative formulations of the recursive prediction error method are included, as well as a new method based on a line-search strategy. A comparison of the algorithms illustrates that they are very similar although the differences can be important for the online tracking capabilities and robustness. Simulation experiments on a simple nonlinear process show that the performance under certain conditions can be improved by including a line-search strategy.

  11. Errors in 'BED'-derived estimates of HIV incidence will vary by place, time and age.

    Directory of Open Access Journals (Sweden)

    Timothy B Hallett

    2009-05-01

    Full Text Available The BED Capture Enzyme Immunoassay, believed to distinguish recent HIV infections, is being used to estimate HIV incidence, although an important property of the test--how specificity changes with time since infection--has not been measured. We construct hypothetical scenarios for the performance of the BED test, consistent with current knowledge, and explore how this could influence errors in BED estimates of incidence using a mathematical model of six African countries. The model is also used to determine the conditions and the sample sizes required for the BED test to reliably detect trends in HIV incidence. If the chance of misclassification by BED increases with time since infection, the overall proportion of individuals misclassified could vary widely between countries, over time, and across age-groups, in a manner determined by the historic course of the epidemic and the age-pattern of incidence. Under some circumstances, changes in BED estimates over time can approximately track actual changes in incidence, but large sample sizes (50,000+) will be required for recorded changes to be statistically significant. The relationship between BED test specificity and time since infection has not been fully measured, but, if it decreases, errors in estimates of incidence could vary by place, time and age-group. This means that post-assay adjustment procedures using parameters from different populations or at different times may not be valid. Further research is urgently needed into the properties of the BED test, and the rate of misclassification in a wide range of populations.

  12. An empirical comparison of effective concentration estimators for evaluating aquatic toxicity test responses

    Energy Technology Data Exchange (ETDEWEB)

    Bailer, A.J.; Hughes, M.R.; Denton, D.L.; Oris, J.T.

    2000-01-01

    Aquatic toxicity tests are statistically evaluated by either hypothesis testing procedures to derive a no-observed-effect concentration or by inverting regression models to calculate the concentration associated with a specific reduction from the control response. These latter methods can be described as potency estimation methods. Standard US Environmental Protection Agency (USEPA) potency estimation methods are based on two different techniques. For continuous or count response data, a nominally nonparametric method that assumes monotonically decreasing responses and piecewise linear patterns between successive concentration groups is used. For quantal responses, a probit regression model with a linear dose term is fit. These techniques were compared with a recently developed parametric regression-based estimator, the relative inhibition estimator, RIp. This method is based on fitting generalized linear models, followed by estimation of the concentration associated with a particular decrement relative to control responses. These estimators, with levels of inhibition (p) of 25 and 50%, were applied to a series of chronic toxicity tests in a US EPA region 9 database of reference toxicity tests. Biological responses evaluated in these toxicity tests included the number of young produced in three broods by the water flea (Ceriodaphnia dubia) and germination success and tube length data from the giant kelp (Macrocystis pyrifera). The greatest discrepancy between the RIp and standard US EPA estimators was observed for C. dubia. The concentration-response pattern for this biological endpoint exhibited nonmonotonicity more frequently than for any of the other endpoints. Future work should consider optimal experimental designs to estimate these quantities, methods for constructing confidence intervals, and simulation studies to explore the behavior of these estimators under known conditions.
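
    A compact sketch in the spirit of the regression-based potency estimators discussed above (data simulated; the paper's exact RIp formulation is not reproduced): fit a Poisson generalized linear model of offspring counts on concentration and invert the fitted curve for the concentration giving a chosen percentage reduction from the control response.

    ```python
    # Concentration associated with a p% reduction from control, from a Poisson GLM.
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(3)
    conc = np.repeat([0.0, 0.5, 1.0, 2.0, 4.0], 10)     # toxicant concentrations
    counts = rng.poisson(25 * np.exp(-0.4 * conc))       # declining young per female

    fit = sm.GLM(counts, sm.add_constant(conc), family=sm.families.Poisson()).fit()
    b0, b1 = fit.params                                  # log-linear: mu = exp(b0 + b1*c)
    for p in (0.25, 0.50):
        print(f"concentration for {int(p * 100)}% inhibition:", np.log(1 - p) / b1)
    ```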

  13. Error analysis for reducing noisy wide-gap concentric cylinder rheometric data for nonlinear fluids - Theory and applications

    Science.gov (United States)

    Borgia, Andrea; Spera, Frank J.

    1990-01-01

    This work discusses the propagation of errors for the recovery of the shear rate from wide-gap concentric cylinder viscometric measurements of non-Newtonian fluids. A least-square regression of stress on angular velocity data to a system of arbitrary functions is used to propagate the errors for the series solution to the viscometric flow developed by Krieger and Elrod (1953) and Pawlowski (1953) ('power-law' approximation) and for the first term of the series developed by Krieger (1968). A numerical experiment shows that, for measurements affected by significant errors, the first term of the Krieger-Elrod-Pawlowski series ('infinite radius' approximation) and the power-law approximation may recover the shear rate with equal accuracy as the full Krieger-Elrod-Pawlowski solution. An experiment on a clay slurry indicates that the clay has a larger yield stress at rest than during shearing, and that, for the range of shear rates investigated, a four-parameter constitutive equation approximates reasonably well its rheology. The error analysis presented is useful for studying the rheology of fluids such as particle suspensions, slurries, foams, and magma.

  14. Mineral concentrations in diets, water, and milk and their value in estimating on-farm excretion of manure minerals in lactating dairy cows.

    Science.gov (United States)

    Castillo, A R; St-Pierre, N R; Silva del Rio, N; Weiss, W P

    2013-05-01

    of minerals in milk with NRC constants resulted in reduced estimated excretion of Ca, Na, Cu, Fe, and Zn, but median differences were [...]. Not accounting for mineral intake via drinking water and not using assayed concentrations of milk minerals lead to errors in estimating manure excretion of minerals (e.g., Ca, Na, Cl, and S). Copyright © 2013 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.

  15. Performance analysis of amplify-and-forward two-way relaying with co-channel interference and channel estimation error

    KAUST Repository

    Yang, Liang

    2013-04-01

    In this paper, we consider the performance of a two-way amplify-and-forward relaying network (AF TWRN) in the presence of unequal power co-channel interferers (CCI). Specifically, we consider AF TWRN with an interference-limited relay and two noisy-nodes with channel estimation error and CCI. We derive the approximate signal-to-interference plus noise ratio expressions and then use these expressions to evaluate the outage probability and error probability. Numerical results show that the approximate closed-form expressions are very close to the exact ones. © 2013 IEEE.

  16. A generalized adjoint framework for sensitivity and global error estimation in time-dependent nuclear reactor simulations

    International Nuclear Information System (INIS)

    Stripling, H.F.; Anitescu, M.; Adams, M.L.

    2013-01-01

    Highlights: ► We develop an abstract framework for computing the adjoint to the neutron/nuclide burnup equations posed as a system of differential algebraic equations. ► We validate use of the adjoint for computing both sensitivity to uncertain inputs and for estimating global time discretization error. ► Flexibility of the framework is leveraged to add heat transfer physics and compute its adjoint without a reformulation of the adjoint system. ► Such flexibility is crucial for high performance computing applications. -- Abstract: We develop a general framework for computing the adjoint variable to nuclear engineering problems governed by a set of differential–algebraic equations (DAEs). The nuclear engineering community has a rich history of developing and applying adjoints for sensitivity calculations; many such formulations, however, are specific to a certain set of equations, variables, or solution techniques. Any change or addition to the physics model would require a reformulation of the adjoint problem and substantial difficulties in its software implementation. In this work we propose an abstract framework that allows for the modification and expansion of the governing equations, leverages the existing theory of adjoint formulation for DAEs, and results in adjoint equations that can be used to efficiently compute sensitivities for parametric uncertainty quantification. Moreover, as we justify theoretically and demonstrate numerically, the same framework can be used to estimate global time discretization error. We first motivate the framework and show that the coupled Bateman and transport equations, which govern the time-dependent neutronic behavior of a nuclear reactor, may be formulated as a DAE system with a power constraint. We then use a variational approach to develop the parameter-dependent adjoint framework and apply existing theory to give formulations for sensitivity and global time discretization error estimates using the adjoint

  17. Results and Error Estimates from GRACE Forward Modeling over Greenland, Canada, and Alaska

    Science.gov (United States)

    Bonin, J. A.; Chambers, D. P.

    2012-12-01

    Forward modeling using a weighted least squares technique allows GRACE information to be projected onto a pre-determined collection of local basins. This decreases the impact of spatial leakage, allowing estimates of mass change to be better localized. The technique is especially valuable where models of current-day mass change are poor, such as over Greenland and Antarctica. However, the accuracy of the forward model technique has not been determined, nor is it known how the distribution of the local basins affects the results. We use a "truth" model composed of hydrology and ice-melt slopes as an example case, to estimate the uncertainties of this forward modeling method and expose those design parameters which may result in an incorrect high-resolution mass distribution. We then apply these optimal parameters in a forward model estimate created from RL05 GRACE data. We compare the resulting mass slopes with the expected systematic errors from the simulation, as well as GIA and basic trend-fitting uncertainties. We also consider whether specific regions (such as Ellesmere Island and Baffin Island) can be estimated reliably using our optimal basin layout.
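
    A toy illustration (synthetic numbers, not GRACE data) of the weighted-least-squares projection underlying the forward-modelling approach described above: a gridded signal is regressed onto predefined basin patterns using observation weights.

    ```python
    # Weighted least squares projection of a gridded field onto basin patterns.
    import numpy as np

    rng = np.random.default_rng(4)
    n_grid, n_basins = 500, 4
    basins = (rng.random((n_grid, n_basins)) > 0.75).astype(float)   # toy basin masks
    true_rates = np.array([2.0, -1.0, 0.5, 0.0])                     # mass-change rates
    obs = basins @ true_rates + rng.normal(0.0, 0.5, n_grid)         # noisy gridded field
    W = np.diag(np.full(n_grid, 1 / 0.5**2))                         # observation weights

    # Weighted normal equations: (A^T W A) x = A^T W y
    est = np.linalg.solve(basins.T @ W @ basins, basins.T @ W @ obs)
    print("estimated basin rates:", est.round(2))
    ```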

  18. Unconditional convergence and error estimates for bounded numerical solutions of the barotropic Navier-Stokes system

    Czech Academy of Sciences Publication Activity Database

    Feireisl, Eduard; Hošek, Radim; Maltese, D.; Novotný, A.

    2017-01-01

    Roč. 33, č. 4 (2017), s. 1208-1223 ISSN 0749-159X EU Projects: European Commission (XE) 320078 - MATHEF Institutional support: RVO:67985840 Keywords: convergence * error estimates * mixed numerical method * Navier–Stokes system Subject RIV: BA - General Mathematics OBOR OECD: Pure mathematics Impact factor: 1.079, year: 2016 http://onlinelibrary.wiley.com/doi/10.1002/num.22140/abstract

  19. A functional-type a posteriori error estimate of approximate solutions for Reissner-Mindlin plates and its implementation

    Science.gov (United States)

    Frolov, Maxim; Chistiakova, Olga

    2017-06-01

    The paper is devoted to a numerical justification of the recent a posteriori error estimate for Reissner-Mindlin plates. This majorant provides a reliable control of accuracy of any conforming approximate solution of the problem including solutions obtained with commercial software for mechanical engineering. The estimate is developed on the basis of the functional approach and is applicable to several types of boundary conditions. To verify the approach, numerical examples with mesh refinements are provided.

  20. A Statistical Algorithm for Estimating Chlorophyll Concentration in the New Caledonian Lagoon

    Directory of Open Access Journals (Sweden)

    Guillaume Wattelez

    2016-01-01

    Full Text Available Spatial and temporal dynamics of phytoplankton biomass and water turbidity can provide crucial information about the function, health and vulnerability of lagoon ecosystems (coral reefs, sea grasses, etc.). A statistical algorithm is proposed to estimate chlorophyll-a concentration ([chl-a]) in optically complex waters of the New Caledonian lagoon from MODIS-derived “remote-sensing” reflectance (Rrs). The algorithm is developed via supervised learning on match-ups gathered from 2002 to 2010. The best performance is obtained by combining two models, selected according to the ratio of Rrs in spectral bands centered on 488 and 555 nm: a log-linear model for low [chl-a] (AFLC) and a support vector machine (SVM) model or a classic model (OC3) for high [chl-a]. The log-linear model is developed based on SVM regression analysis. This approach outperforms the classical OC3 approach, especially in shallow waters, with a root mean squared error 30% lower. The proposed algorithm enables more accurate assessments of [chl-a] and its variability in this typical oligo- to meso-trophic tropical lagoon, from shallow coastal waters and nearby reefs to deeper waters and in the open ocean.
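
    A schematic of the two-model switching structure described above. The coefficients below are invented placeholders, not the fitted AFLC/SVM/OC3 parameters: the Rrs(488)/Rrs(555) band ratio selects either a low-[chl-a] log-linear model or a high-[chl-a] model.

    ```python
    # Band-ratio switch between a low-[chl-a] and a high-[chl-a] model (placeholders).
    import numpy as np

    def chl_estimate(rrs488, rrs555, ratio_threshold=1.0):
        ratio = rrs488 / rrs555
        log_ratio = np.log10(ratio)
        low_chl = 10 ** (-0.6 - 1.8 * log_ratio)    # placeholder log-linear (AFLC-style) model
        high_chl = 10 ** (-0.2 - 2.6 * log_ratio)   # placeholder high-[chl-a] (OC3-style) model
        return np.where(ratio >= ratio_threshold, low_chl, high_chl)

    print(chl_estimate(np.array([0.008, 0.004]), np.array([0.005, 0.006])))
    ```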

  1. A simulation study to quantify the impacts of exposure measurement error on air pollution health risk estimates in copollutant time-series models.

    Science.gov (United States)

    BackgroundExposure measurement error in copollutant epidemiologic models has the potential to introduce bias in relative risk (RR) estimates. A simulation study was conducted using empirical data to quantify the impact of correlated measurement errors in time-series analyses of a...

  2. Hydrometer test for estimation of immunoglobulin concentration in bovine colostrum.

    Science.gov (United States)

    Fleenor, W A; Stott, G H

    1980-06-01

    A practical field method for measuring immunoglobulin concentration in bovine colostrum has been developed from the linear relationship between colostral specific gravity and immunoglobulin concentration. Fourteen colostrums were collected within 24 h postpartum from nursed and unnursed cows and were assayed for specific gravity and major colostral constituents. Additionally, 15 colostrums were collected immediately postpartum prior to suckling and assayed for specific gravity and immunoglobulin concentration. Regression analysis provided an equation to estimate colostral immunoglobulin concentration from the specific gravity of fresh whole colostrum. From this, a colostrometer was developed for practical field use.
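
    A minimal sketch of the calibration principle behind the colostrometer described above, using synthetic data (the paper's fitted regression coefficients are not reproduced): fit a line of immunoglobulin concentration on colostral specific gravity and use it for prediction.

    ```python
    # Linear calibration of immunoglobulin (Ig) concentration on specific gravity.
    import numpy as np

    rng = np.random.default_rng(5)
    specific_gravity = rng.uniform(1.030, 1.075, 14)                         # 14 colostrums
    ig_mg_per_ml = 900 * (specific_gravity - 1.030) + rng.normal(0, 4, 14)   # synthetic assay

    slope, intercept = np.polyfit(specific_gravity, ig_mg_per_ml, 1)
    print(f"predicted Ig at SG 1.060: {slope * 1.060 + intercept:.0f} mg/mL (synthetic)")
    ```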

  3. Estimating ages of white-tailed deer: Age and sex patterns of error using tooth wear-and-replacement and consistency of cementum annuli

    Science.gov (United States)

    Samuel, Michael D.; Storm, Daniel J.; Rolley, Robert E.; Beissel, Thomas; Richards, Bryan J.; Van Deelen, Timothy R.

    2014-01-01

    The age structure of harvested animals provides the basis for many demographic analyses. Ages of harvested white-tailed deer (Odocoileus virginianus) and other ungulates often are estimated by evaluating replacement and wear patterns of teeth, which is subjective and error-prone. Few previous studies however, examined age- and sex-specific error rates. Counting cementum annuli of incisors is an alternative, more accurate method of estimating age, but factors that influence consistency of cementum annuli counts are poorly known. We estimated age of 1,261 adult (≥1.5 yr old) white-tailed deer harvested in Wisconsin and Illinois (USA; 2005–2008) using both wear-and-replacement and cementum annuli. We compared cementum annuli with wear-and-replacement estimates to assess misclassification rates by sex and age. Wear-and-replacement for estimating ages of white-tailed deer resulted in substantial misclassification compared with cementum annuli. Age classes of females were consistently underestimated, while those of males were underestimated for younger age classes but overestimated for older age classes. Misclassification resulted in an impression of a younger age-structure than actually was the case. Additionally, we obtained paired age-estimates from cementum annuli for 295 deer. Consistency of paired cementum annuli age-estimates decreased with age, was lower in females than males, and decreased as age estimates became less certain. Our results indicated that errors in the wear-and-replacement techniques are substantial and could impact demographic analyses that use age-structure information. 

  4. Estimators of the Relations of Equivalence, Tolerance and Preference Based on Pairwise Comparisons with Random Errors

    Directory of Open Access Journals (Sweden)

    Leszek Klukowski

    2012-01-01

    Full Text Available This paper presents a review of results of the author in the area of estimation of the relations of equivalence, tolerance and preference within a finite set based on multiple, independent (in a stochastic way) pairwise comparisons with random errors, in binary and multivalent forms. These estimators require weaker assumptions than those used in the literature on the subject. Estimates of the relations are obtained based on solutions to problems from discrete optimization. They allow application of both types of comparisons - binary and multivalent (this fact relates to the tolerance and preference relations). The estimates can be verified in a statistical way; in particular, it is possible to verify the type of the relation. The estimates have been applied by the author to problems regarding forecasting, financial engineering and bio-cybernetics. (original abstract)

  5. Sampling errors associated with soil composites used to estimate mean Ra-226 concentrations at an UMTRA remedial-action site

    International Nuclear Information System (INIS)

    Gilbert, R.O.; Baker, K.R.; Nelson, R.A.; Miller, R.H.; Miller, M.L.

    1987-07-01

    The decision whether to take additional remedial action (removal of soil) from regions contaminated by uranium mill tailings involves collecting 20 plugs of soil from each 10-m by 10-m plot in the region and analyzing a 500-g portion of the mixed soil for Ra-226. A soil sampling study was conducted in the windblown mill-tailings flood plain area at Shiprock, New Mexico, to evaluate whether reducing the number of soil plugs to 9 would have any appreciable impact on remedial-action decisions. The results of the Shiprock study are described and used in this paper to develop a simple model of the standard deviation of Ra-226 measurements on composite samples formed from 21 or fewer plugs. This model is used to predict, as a function of the number of soil plugs per composite, the percent accuracy with which the mean Ra-226 concentration in surface soil can be estimated, and the probability of making incorrect remedial action decisions on the basis of statistical tests. 8 refs., 15 figs., 9 tabs
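
    A simplified numerical illustration of the composite-sampling question studied in the report, under the (strong) assumption of independent, equally weighted plugs, in which case the standard deviation of a composite mean falls as 1/sqrt(number of plugs). The report's fitted model and decision-error calculations are not reproduced, and all numbers are illustrative.

    ```python
    # Precision of a composite-sample mean versus the number of soil plugs,
    # assuming independent plugs with a common standard deviation (illustrative).
    import numpy as np

    plug_sd = 4.0      # pCi/g, illustrative plug-to-plug standard deviation
    plot_mean = 6.0    # pCi/g, illustrative plot-mean Ra-226 concentration

    for n_plugs in (5, 9, 15, 21):
        composite_sd = plug_sd / np.sqrt(n_plugs)
        half_width = 1.96 * composite_sd              # approximate 95 % half-width
        print(f"{n_plugs:2d} plugs: SD {composite_sd:.2f} pCi/g, "
              f"±{100 * half_width / plot_mean:.0f}% of the mean")
    ```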

  6. Methods for Estimation of Radiation Risk in Epidemiological Studies Accounting for Classical and Berkson Errors in Doses

    KAUST Repository

    Kukush, Alexander

    2011-01-16

    With a binary response Y, the dose-response model under consideration is logistic in flavor with pr(Y=1 | D) = R(1+R)^{-1}, R = λ_0 + EAR·D, where λ_0 is the baseline incidence rate and EAR is the excess absolute risk per gray. The calculated thyroid dose of a person i is expressed as D_i^{mes} = f_i Q_i^{mes} / M_i^{mes}. Here, Q_i^{mes} is the measured content of radioiodine in the thyroid gland of person i at time t^{mes}, M_i^{mes} is the estimate of the thyroid mass, and f_i is the normalizing multiplier. The Q_i and M_i are measured with multiplicative errors V_i^Q and V_i^M, so that Q_i^{mes} = Q_i^{tr} V_i^Q (this is the classical measurement error model) and M_i^{tr} = M_i^{mes} V_i^M (this is the Berkson measurement error model). Here, Q_i^{tr} is the true content of radioactivity in the thyroid gland, and M_i^{tr} is the true value of the thyroid mass. The error in f_i is much smaller than the errors in (Q_i^{mes}, M_i^{mes}) and is ignored in the analysis. By means of Parametric Full Maximum Likelihood and Regression Calibration (under the assumption that the data set of true doses has a lognormal distribution), Nonparametric Full Maximum Likelihood, Nonparametric Regression Calibration, and a properly tuned SIMEX method, we study the influence of measurement errors in thyroid dose on the estimates of λ_0 and EAR. The simulation study is presented based on a real sample from the epidemiological studies. The doses were reconstructed in the framework of the Ukrainian-American project on the investigation of Post-Chernobyl thyroid cancers in Ukraine, and the underlying subpopulation was artificially enlarged in order to increase the statistical power. The true risk parameters were given the values from earlier epidemiological studies, and then the binary response was simulated according to the dose-response model.
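
    A compact simulation (not the paper's method) of the two error structures defined above, classical (measured = true × error) and Berkson (true = measured × error), showing the familiar result that a naive slope is attenuated under classical error but essentially unbiased under mean-one multiplicative Berkson error. The full-likelihood, regression-calibration and SIMEX corrections studied in the paper are not shown, and all parameter values are illustrative.

    ```python
    # Classical vs Berkson multiplicative errors and a naive linear slope.
    import numpy as np

    rng = np.random.default_rng(6)
    n, slope_true, err_sd = 50_000, 2.0, 0.4
    noise = lambda: rng.lognormal(-err_sd**2 / 2, err_sd, n)   # mean-one multiplicative error

    # Classical: D_mes = D_tr * V  ->  naive regression on D_mes is attenuated.
    d_tr = rng.lognormal(0.0, 0.5, n)
    d_mes = d_tr * noise()
    y = slope_true * d_tr + rng.normal(0.0, 0.5, n)
    print("classical-error slope:", round(np.polyfit(d_mes, y, 1)[0], 2))

    # Berkson: D_tr = D_mes * V  ->  naive regression on D_mes is nearly unbiased.
    d_mes_b = rng.lognormal(0.0, 0.5, n)
    d_tr_b = d_mes_b * noise()
    y_b = slope_true * d_tr_b + rng.normal(0.0, 0.5, n)
    print("Berkson-error slope  :", round(np.polyfit(d_mes_b, y_b, 1)[0], 2))
    ```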

  7. Methods for estimation of radiation risk in epidemiological studies accounting for classical and Berkson errors in doses.

    Science.gov (United States)

    Kukush, Alexander; Shklyar, Sergiy; Masiuk, Sergii; Likhtarov, Illya; Kovgan, Lina; Carroll, Raymond J; Bouville, Andre

    2011-02-16

    With a binary response Y, the dose-response model under consideration is logistic in flavor with pr(Y=1 | D) = R(1+R)^{-1}, R = λ_0 + EAR·D, where λ_0 is the baseline incidence rate and EAR is the excess absolute risk per gray. The calculated thyroid dose of a person i is expressed as D_i^{mes} = f_i Q_i^{mes} / M_i^{mes}. Here, Q_i^{mes} is the measured content of radioiodine in the thyroid gland of person i at time t^{mes}, M_i^{mes} is the estimate of the thyroid mass, and f_i is the normalizing multiplier. The Q_i and M_i are measured with multiplicative errors V_i^Q and V_i^M, so that Q_i^{mes} = Q_i^{tr} V_i^Q (this is the classical measurement error model) and M_i^{tr} = M_i^{mes} V_i^M (this is the Berkson measurement error model). Here, Q_i^{tr} is the true content of radioactivity in the thyroid gland, and M_i^{tr} is the true value of the thyroid mass. The error in f_i is much smaller than the errors in (Q_i^{mes}, M_i^{mes}) and is ignored in the analysis. By means of Parametric Full Maximum Likelihood and Regression Calibration (under the assumption that the data set of true doses has a lognormal distribution), Nonparametric Full Maximum Likelihood, Nonparametric Regression Calibration, and a properly tuned SIMEX method, we study the influence of measurement errors in thyroid dose on the estimates of λ_0 and EAR. The simulation study is presented based on a real sample from the epidemiological studies. The doses were reconstructed in the framework of the Ukrainian-American project on the investigation of Post-Chernobyl thyroid cancers in Ukraine, and the underlying subpopulation was artificially enlarged in order to increase the statistical power. The true risk parameters were given the values from earlier epidemiological studies, and then the binary response was simulated according to the dose-response model.

  8. Energy dependent mesh adaptivity of discontinuous isogeometric discrete ordinate methods with dual weighted residual error estimators

    Science.gov (United States)

    Owens, A. R.; Kópházi, J.; Welch, J. A.; Eaton, M. D.

    2017-04-01

    In this paper a hanging-node, discontinuous Galerkin, isogeometric discretisation of the multigroup, discrete ordinates (SN) equations is presented in which each energy group has its own mesh. The equations are discretised using Non-Uniform Rational B-Splines (NURBS), which allows the coarsest mesh to exactly represent the geometry for a wide range of engineering problems of interest; this would not be the case using straight-sided finite elements. Information is transferred between meshes via the construction of a supermesh. This is a non-trivial task for two arbitrary meshes, but is significantly simplified here by deriving every mesh from a common coarsest initial mesh. In order to take full advantage of this flexible discretisation, goal-based error estimators are derived for the multigroup, discrete ordinates equations with both fixed (extraneous) and fission sources, and these estimators are used to drive an adaptive mesh refinement (AMR) procedure. The method is applied to a variety of test cases for both fixed and fission source problems. The error estimators are found to be extremely accurate for linear NURBS discretisations, with degraded performance for quadratic discretisations owing to a reduction in relative accuracy of the "exact" adjoint solution required to calculate the estimators. Nevertheless, the method seems to produce optimal meshes in the AMR process for both linear and quadratic discretisations, and is ≈×100 more accurate than uniform refinement for the same amount of computational effort for a 67 group deep penetration shielding problem.

  9. Discretization error estimates in maximum norm for convergent splittings of matrices with a monotone preconditioning part

    Czech Academy of Sciences Publication Activity Database

    Axelsson, Owe; Karátson, J.

    2017-01-01

    Roč. 210, January 2017 (2017), s. 155-164 ISSN 0377-0427 Institutional support: RVO:68145535 Keywords: finite difference method * error estimates * matrix splitting * preconditioning Subject RIV: BA - General Mathematics OBOR OECD: Applied mathematics Impact factor: 1.357, year: 2016 http://www.sciencedirect.com/science/article/pii/S0377042716301492?via%3Dihub

  11. Reducing errors in aircraft atmospheric inversion estimates of point-source emissions: the Aliso Canyon natural gas leak as a natural tracer experiment

    Science.gov (United States)

    Gourdji, S. M.; Yadav, V.; Karion, A.; Mueller, K. L.; Conley, S.; Ryerson, T.; Nehrkorn, T.; Kort, E. A.

    2018-04-01

    Urban greenhouse gas (GHG) flux estimation with atmospheric measurements and modeling, i.e. the ‘top-down’ approach, can potentially support GHG emission reduction policies by assessing trends in surface fluxes and detecting anomalies from bottom-up inventories. Aircraft-collected GHG observations also have the potential to help quantify point-source emissions that may not be adequately sampled by fixed surface tower-based atmospheric observing systems. Here, we estimate CH4 emissions from a known point source, the Aliso Canyon natural gas leak in Los Angeles, CA from October 2015–February 2016, using atmospheric inverse models with airborne CH4 observations from twelve flights ≈4 km downwind of the leak and surface sensitivities from a mesoscale atmospheric transport model. This leak event has been well-quantified previously using various methods by the California Air Resources Board, thereby providing high confidence in the mass-balance leak rate estimates of Conley et al (2016), used here for comparison to inversion results. Inversions with an optimal setup are shown to provide estimates of the leak magnitude, on average, within a third of the mass balance values, with remaining errors in estimated leak rates predominantly explained by modeled wind speed errors of up to 10 m s⁻¹, quantified by comparing airborne meteorological observations with modeled values along the flight track. An inversion setup using scaled observational wind speed errors in the model-data mismatch covariance matrix is shown to significantly reduce the influence of transport model errors on spatial patterns and estimated leak rates from the inversions. In sum, this study takes advantage of a natural tracer release experiment (i.e. the Aliso Canyon natural gas leak) to identify effective approaches for reducing the influence of transport model error on atmospheric inversions of point-source emissions, while suggesting future potential for integrating surface tower and

  12. #2 - An Empirical Assessment of Exposure Measurement Error ...

    Science.gov (United States)

    Background: • Differing degrees of exposure error across pollutants • Previous focus on quantifying and accounting for exposure error in single-pollutant models • Examine exposure errors for multiple pollutants and provide insights on the potential for bias and attenuation of effect estimates in single- and bi-pollutant epidemiological models. The National Exposure Research Laboratory (NERL) Human Exposure and Atmospheric Sciences Division (HEASD) conducts research in support of EPA's mission to protect human health and the environment. The HEASD research program supports Goal 1 (Clean Air) and Goal 4 (Healthy People) of EPA's strategic plan. More specifically, our division conducts research to characterize the movement of pollutants from the source to contact with humans. Our multidisciplinary research program produces Methods, Measurements, and Models to identify relationships between and characterize processes that link source emissions, environmental concentrations, human exposures, and target-tissue dose. The impact of these tools is improved regulatory programs and policies for EPA.

  13. Variance estimation for generalized Cavalieri estimators

    OpenAIRE

    Johanna Ziegel; Eva B. Vedel Jensen; Karl-Anton Dorph-Petersen

    2011-01-01

    The precision of stereological estimators based on systematic sampling is of great practical importance. This paper presents methods of data-based variance estimation for generalized Cavalieri estimators where errors in sampling positions may occur. Variance estimators are derived under perturbed systematic sampling, systematic sampling with cumulative errors and systematic sampling with random dropouts. Copyright 2011, Oxford University Press.

  14. Direct tracking error characterization on a single-axis solar tracker

    International Nuclear Information System (INIS)

    Sallaberry, Fabienne; Pujol-Nadal, Ramon; Larcher, Marco; Rittmann-Frank, Mercedes Hannelore

    2015-01-01

    Highlights: • The solar tracker of a small-size parabolic trough collector was tested. • A testing procedure for the tracking error characterization of a single-axis tracker was proposed. • A statistical analysis on the tracking error distribution was done regarding different variables. • The optical losses due to the tracking error were calculated based on a ray-tracing simulation. - Abstract: The solar trackers are devices used to orientate solar concentrating systems in order to increase the focusing of the solar radiation on a receiver. A solar concentrator with a medium or high concentration ratio needs to be orientated correctly by an accurate solar tracking mechanism to avoid losing the sunrays out from the receiver. Hence, to obtain an appropriate operation, it is important to know the accuracy of a solar tracker in regard to the required precision of the concentrator in order to maximize the collector optical efficiency. A procedure for the characterization of the accuracy of a solar tracker is presented for a single-axis solar tracker. More precisely, this study focuses on the estimation of the positioning angle error of a parabolic trough collector using a direct procedure. A testing procedure, adapted from the International standard IEC 62817 for photovoltaic trackers, was defined. The results show that the angular tracking error was within ±0.4° for this tracker. The optical losses due to the tracking were calculated using the longitudinal incidence angle modifier obtained by ray-tracing simulation. The acceptance angles for various transversal angles were analyzed, and the average optical loss, due to the tracking, was 0.317% during the whole testing campaign. The procedure presented in this work showed that the tracker precision was adequate for the requirements of the analyzed optical system.

  15. Estimating spatiotemporal distribution of PM1 concentrations in China with satellite remote sensing, meteorology, and land use information.

    Science.gov (United States)

    Chen, Gongbo; Knibbs, Luke D; Zhang, Wenyi; Li, Shanshan; Cao, Wei; Guo, Jianping; Ren, Hongyan; Wang, Boguang; Wang, Hao; Williams, Gail; Hamm, N A S; Guo, Yuming

    2018-02-01

    PM1 might be more hazardous than PM2.5 (particulate matter with an aerodynamic diameter ≤1 μm and ≤2.5 μm, respectively). However, studies on PM1 concentrations and its health effects are limited due to a lack of PM1 monitoring data. To estimate spatial and temporal variations of PM1 concentrations in China during 2005-2014 using satellite remote sensing, meteorology, and land use information. Two types of Moderate Resolution Imaging Spectroradiometer (MODIS) Collection 6 aerosol optical depth (AOD) data, Dark Target (DT) and Deep Blue (DB), were combined. Generalised additive model (GAM) was developed to link ground-monitored PM1 data with AOD data and other spatial and temporal predictors (e.g., urban cover, forest cover and calendar month). A 10-fold cross-validation was performed to assess the predictive ability. The results of 10-fold cross-validation showed R2 and Root Mean Squared Error (RMSE) for monthly prediction were 71% and 13.0 μg/m3, respectively. For seasonal prediction, the R2 and RMSE were 77% and 11.4 μg/m3, respectively. The predicted annual mean concentration of PM1 across China was 26.9 μg/m3. The PM1 level was highest in winter while lowest in summer. Generally, the PM1 levels in entire China did not substantially change during the past decade. Regarding local heavy polluted regions, PM1 levels increased substantially in the South-Western Hebei and Beijing-Tianjin region. GAM with satellite-retrieved AOD, meteorology, and land use information has high predictive ability to estimate ground-level PM1. Ambient PM1 reached high levels in China during the past decade. The estimated results can be applied to evaluate the health effects of PM1. Copyright © 2017 Elsevier Ltd. All rights reserved.
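
    A sketch of the validation step only (10-fold cross-validated R2 and RMSE). A generic gradient-boosting regressor stands in for the paper's generalised additive model, and the synthetic predictors stand in for AOD, meteorology and land-use variables.

    ```python
    # 10-fold cross-validated R2 and RMSE for a stand-in PM1 prediction model.
    import numpy as np
    from sklearn.ensemble import GradientBoostingRegressor
    from sklearn.model_selection import KFold

    rng = np.random.default_rng(7)
    X = rng.normal(size=(1000, 5))                                 # synthetic predictors
    y = 25 + 8 * X[:, 0] - 3 * X[:, 1] + rng.normal(0, 5, 1000)    # synthetic "PM1"

    pred = np.empty_like(y)
    for train, test in KFold(n_splits=10, shuffle=True, random_state=0).split(X):
        pred[test] = GradientBoostingRegressor().fit(X[train], y[train]).predict(X[test])

    ss_res = np.sum((y - pred) ** 2)
    print("CV R2  :", round(1 - ss_res / np.sum((y - y.mean()) ** 2), 3))
    print("CV RMSE:", round(float(np.sqrt(ss_res / y.size)), 2))
    ```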

  16. An Empirical State Error Covariance Matrix Orbit Determination Example

    Science.gov (United States)

    Frisbee, Joseph H., Jr.

    2015-01-01

    State estimation techniques serve effectively to provide mean state estimates. However, the state error covariance matrices provided as part of these techniques suffer from some degree of lack of confidence in their ability to adequately describe the uncertainty in the estimated states. A specific problem with the traditional form of state error covariance matrices is that they represent only a mapping of the assumed observation error characteristics into the state space. Any errors that arise from other sources (environment modeling, precision, etc.) are not directly represented in a traditional, theoretical state error covariance matrix. First, consider that an actual observation contains only measurement error and that an estimated observation contains all other errors, known and unknown. Then it follows that a measurement residual (the difference between expected and observed measurements) contains all errors for that measurement. Therefore, a direct and appropriate inclusion of the actual measurement residuals in the state error covariance matrix of the estimate will result in an empirical state error covariance matrix. This empirical state error covariance matrix will fully include all of the errors in the state estimate. The empirical error covariance matrix is determined from a literal reinterpretation of the equations involved in the weighted least squares estimation algorithm. It is a formally correct, empirical state error covariance matrix obtained through use of the average form of the weighted measurement residual variance performance index rather than the usual total weighted residual form. Based on its formulation, this matrix will contain the total uncertainty in the state estimate, regardless as to the source of the uncertainty and whether the source is anticipated or not. It is expected that the empirical error covariance matrix will give a better, statistical representation of the state error in poorly modeled systems or when sensor performance

  17. Robust Online State of Charge Estimation of Lithium-Ion Battery Pack Based on Error Sensitivity Analysis

    Directory of Open Access Journals (Sweden)

    Ting Zhao

    2015-01-01

    Full Text Available Accurate and reliable state of charge (SOC) estimation is a key enabling technique for large format lithium-ion battery pack due to its vital role in battery safety and effective management. This paper tries to make three contributions to existing literatures through robust algorithms. (1) Observer based SOC estimation error model is established, where the crucial parameters on SOC estimation accuracy are determined by quantitative analysis, being a basis for parameters update. (2) The estimation method for a battery pack in which the inconsistency of cells is taken into consideration is proposed, ensuring all batteries’ SOC ranging from 0 to 1, effectively avoiding the battery overcharged/overdischarged. Online estimation of the parameters is also presented in this paper. (3) The SOC estimation accuracy of the battery pack is verified using the hardware-in-loop simulation platform. The experimental results at various dynamic test conditions, temperatures, and initial SOC difference between two cells demonstrate the efficacy of the proposed method.

  18. Black hole spectroscopy: Systematic errors and ringdown energy estimates

    Science.gov (United States)

    Baibhav, Vishal; Berti, Emanuele; Cardoso, Vitor; Khanna, Gaurav

    2018-02-01

    The relaxation of a distorted black hole to its final state provides important tests of general relativity within the reach of current and upcoming gravitational wave facilities. In black hole perturbation theory, this phase consists of a simple linear superposition of exponentially damped sinusoids (the quasinormal modes) and of a power-law tail. How many quasinormal modes are necessary to describe waveforms with a prescribed precision? What error do we incur by only including quasinormal modes, and not tails? What other systematic effects are present in current state-of-the-art numerical waveforms? These issues, which are basic to testing fundamental physics with distorted black holes, have hardly been addressed in the literature. We use numerical relativity waveforms and accurate evolutions within black hole perturbation theory to provide some answers. We show that (i) a determination of the fundamental l = m = 2 quasinormal frequencies and damping times to within 1% or better requires the inclusion of at least the first overtone, and preferably of the first two or three overtones; (ii) a determination of the black hole mass and spin with precision better than 1% requires the inclusion of at least two quasinormal modes for any given angular harmonic mode (ℓ, m). We also improve on previous estimates and fits for the ringdown energy radiated in the various multipoles. These results are important to quantify theoretical (as opposed to instrumental) limits in parameter estimation accuracy and tests of general relativity allowed by ringdown measurements with high signal-to-noise ratio gravitational wave detectors.

  19. Evapotranspiration estimates and consequences due to errors in the determination of the net radiation and advective effects

    International Nuclear Information System (INIS)

    Oliveira, G.M. de; Leitao, M. de M.V.B.R.

    2000-01-01

    The objective of this study was to analyze the consequences in the evapotranspiration estimates (ET) during the growing cycle of a peanut crop due to the errors committed in the determination of the radiation balance (Rn), as well as those caused by the advective effects. This research was conducted at the Experimental Station of CODEVASF in an irrigated perimeter located in the city of Rodelas, BA, during the period of September to December of 1996. The results showed that errors of the order of 2.2 MJ m⁻² d⁻¹ in the calculation of Rn, and consequently in the estimate of ET, can occur depending on the time considered for the daily total of Rn. It was verified that the surrounding areas of the experimental field, as well as the areas of exposed soil within the field, contributed significantly to the generation of local advection of sensible heat, which resulted in the increase of the evapotranspiration.

  20. Water Quality in the Upper Anacostia River, Maryland: Continuous and Discrete Monitoring with Simulations to Estimate Concentrations and Yields, 2003-05

    Science.gov (United States)

    Miller, Cherie V.; Gutierrez-Magness, Angelica L.; Feit Majedi, Brenda L.; Foster, Gregory D.

    2007-01-01

    concentrations of total phosphorus and total nitrogen had lower values of multiple R2 than suspended sediment, but the estimated bias for all the models was similar. The models for total nitrogen and total phosphorus tended to under-predict high concentrations and to over-predict low concentrations as compared to measured values. Annual yields (loads per square area in kilograms per year per square kilometer) were estimated for suspended sediment, total nitrogen, and total phosphorus using the U.S. Geological Survey models ESTIMATOR and LOADEST. The model LOADEST used hourly time steps and allowed the use of turbidity, which is strongly correlated to concentrations of suspended sediment, as a predictor variable. Annual yields for total nitrogen and total phosphorus were slightly higher but similar to previous estimates for other watersheds of the Chesapeake Bay, but annual yields for suspended sediment were higher by an order of magnitude for the two Anacostia River stations. Annual yields of suspended sediment at the two Anacostia River stations ranged from 131,000 to 248,000 kilograms per year per square kilometer for 2004 and 2005. LOADEST estimates were similar to those determined with ESTIMATOR, but had reduced errors associated with the estimates.

  1. Impact of exposure measurement error in air pollution epidemiology: effect of error type in time-series studies.

    Science.gov (United States)

    Goldman, Gretchen T; Mulholland, James A; Russell, Armistead G; Strickland, Matthew J; Klein, Mitchel; Waller, Lance A; Tolbert, Paige E

    2011-06-22

    Two distinctly different types of measurement error are Berkson and classical. Impacts of measurement error in epidemiologic studies of ambient air pollution are expected to depend on error type. We characterize measurement error due to instrument imprecision and spatial variability as multiplicative (i.e. additive on the log scale) and model it over a range of error types to assess impacts on risk ratio estimates both on a per measurement unit basis and on a per interquartile range (IQR) basis in a time-series study in Atlanta. Daily measures of twelve ambient air pollutants were analyzed: NO2, NOx, O3, SO2, CO, PM10 mass, PM2.5 mass, and PM2.5 components sulfate, nitrate, ammonium, elemental carbon and organic carbon. Semivariogram analysis was applied to assess spatial variability. Error due to this spatial variability was added to a reference pollutant time-series on the log scale using Monte Carlo simulations. Each of these time-series was exponentiated and introduced to a Poisson generalized linear model of cardiovascular disease emergency department visits. Measurement error resulted in reduced statistical significance for the risk ratio estimates for all amounts (corresponding to different pollutants) and types of error. When modelled as classical-type error, risk ratios were attenuated, particularly for primary air pollutants, with average attenuation in risk ratios on a per unit of measurement basis ranging from 18% to 92% and on an IQR basis ranging from 18% to 86%. When modelled as Berkson-type error, risk ratios per unit of measurement were biased away from the null hypothesis by 2% to 31%, whereas risk ratios per IQR were attenuated (i.e. biased toward the null) by 5% to 34%. For CO modelled error amount, a range of error types were simulated and effects on risk ratio bias and significance were observed. For multiplicative error, both the amount and type of measurement error impact health effect estimates in air pollution epidemiology. By modelling
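
    A small helper (values illustrative) for the two reporting scales compared in the abstract: a log risk ratio fitted per unit of measurement can be rescaled to a per-interquartile-range (IQR) risk ratio by multiplying the coefficient by the IQR before exponentiating.

    ```python
    # Per-unit versus per-IQR risk ratios (illustrative numbers).
    import numpy as np

    beta_per_unit = 0.0020     # fitted log risk ratio per 1 ppb of pollutant
    iqr = 12.0                 # interquartile range of the pollutant, ppb

    print(f"RR per unit: {np.exp(beta_per_unit):.4f}")
    print(f"RR per IQR : {np.exp(beta_per_unit * iqr):.4f}")
    ```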

  2. Estimating soil solution nitrate concentration from dielectric spectra using PLS analysis

    Science.gov (United States)

    Fast and reliable methods for in situ monitoring of soil nitrate-nitrogen concentration are vital for reducing nitrate-nitrogen losses to ground and surface waters from agricultural systems. While several studies have been done to indirectly estimate nitrate-nitrogen concentration from time domain s...

  3. Pollution concentration estimates in ecologically important zones

    Energy Technology Data Exchange (ETDEWEB)

    Skiba, Y.N. [Mexico City Univ. (Mexico). Center for Atmospheric Sciences

    1995-12-31

    A method based on the pollutant transport equation and the adjoint technique is described here for estimating the pollutant concentration level in ecologically important zones. The method directly relates the pollution level in such zones with the power of the pollution sources and the initial pollution field. Assuming that the wind or current velocities are known (from climatic data or a dynamic model), the main and adjoint pollutant transport equations can be considered in a limited area to solve such theoretically and practically important problems as: (1) optimal location of new industries in a given region with the aim of minimizing the pollution concentration in certain ecologically important zones, (2) optimization of emissions from operating industries, (3) detection of the plants violating sanitary regulations, (4) analysis of the emissions coming from the vehicle traffic (such emissions can be included in the model by means of the linear pollution sources located along the main roadways), (5) estimation of the oil pollution in various ecologically important oceanic (sea) zones in the case of an accident involving an oil tanker, (6) evaluation of the sea water desalination level in estuary regions, and others. These equations considered in a spherical shell domain can also be applied to the problems of transporting the pollutants from a huge industrial complex, or from the zone of an ecological catastrophe similar to the Chernobyl one.

  5. Assessment of sampling strategies for estimation of site mean concentrations of stormwater pollutants.

    Science.gov (United States)

    McCarthy, David T; Zhang, Kefeng; Westerlund, Camilla; Viklander, Maria; Bertrand-Krajewski, Jean-Luc; Fletcher, Tim D; Deletic, Ana

    2018-02-01

    The estimation of stormwater pollutant concentrations is a primary requirement of integrated urban water management. In order to determine effective sampling strategies for estimating pollutant concentrations, data from extensive field measurements at seven different catchments were used. At all sites, 1-min resolution continuous flow measurements, as well as flow-weighted samples, were taken and analysed for total suspended solids (TSS), total nitrogen (TN) and Escherichia coli (E. coli). For each of these parameters, the data were used to calculate the Event Mean Concentrations (EMCs) for each event. The measured Site Mean Concentrations (SMCs) were taken as the volume-weighted average of these EMCs for each parameter, at each site. 17 different sampling strategies, including random and fixed strategies, were tested to estimate SMCs, which were then compared with the measured SMCs. The ratios of estimated/measured SMCs were further analysed to determine the most effective sampling strategies. Results indicate that the random sampling strategies were the most promising for reproducing SMCs of TSS and TN, while some fixed sampling strategies were better for estimating the SMC of E. coli. The differences between taking one, two or three random samples were small (up to 20% for TSS, and 10% for TN and E. coli), indicating that there is little benefit in collecting more than one sample per event if attempting to estimate the SMC through monitoring of multiple events. It was estimated that an average of 27 events across the studied catchments is needed for characterising SMCs of TSS with a 90% confidence interval (CI) width of 1.0, followed by E. coli (average 12 events) and TN (average 11 events). The coefficient of variation of pollutant concentrations was linearly and significantly correlated to the 90% confidence interval ratio of the estimated/measured SMCs (R² = 0.49; P ...) ... sampling frequency needed to accurately estimate SMCs of pollutants.
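
    As a rough illustration of the volume-weighted averaging described above, the sketch below computes event mean concentrations (EMCs) from flow-weighted samples and a site mean concentration (SMC) as the volume-weighted average of the EMCs, then compares it with a one-random-sample-per-event strategy. All numbers, distributions and variable names are illustrative assumptions, not the study's data or code.

      # Sketch: EMC per event and volume-weighted SMC (illustrative only).
      import numpy as np

      rng = np.random.default_rng(0)

      def event_mean_concentration(volumes, concentrations):
          """EMC = total pollutant mass / total runoff volume for one event."""
          volumes = np.asarray(volumes, dtype=float)
          concentrations = np.asarray(concentrations, dtype=float)
          return np.sum(volumes * concentrations) / np.sum(volumes)

      # Hypothetical monitored events: per-sample runoff volumes (m3) and TSS (mg/L).
      events = [(rng.uniform(5, 50, size=8), rng.lognormal(4.0, 0.6, size=8))
                for _ in range(27)]          # ~27 events, as suggested for TSS

      emcs = np.array([event_mean_concentration(v, c) for v, c in events])
      event_volumes = np.array([v.sum() for v, _ in events])

      # Site mean concentration: volume-weighted average of the EMCs.
      smc = np.sum(event_volumes * emcs) / np.sum(event_volumes)

      # A "one random sample per event" strategy, for comparison with the full EMCs.
      one_sample = np.array([rng.choice(c) for _, c in events])
      smc_one = np.sum(event_volumes * one_sample) / np.sum(event_volumes)

      print(f"measured SMC {smc:.1f} mg/L, one-sample SMC {smc_one:.1f} mg/L, "
            f"ratio {smc_one / smc:.2f}")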

  6. Protocol for the estimation of average indoor radon-daughter concentrations: Second edition

    International Nuclear Information System (INIS)

    Langner, G.H. Jr.; Pacer, J.C.

    1988-05-01

    The Technical Measurements Center has developed a protocol which specifies the procedures to be used for determining indoor radon-daughter concentrations in support of Department of Energy remedial action programs. This document is the central part of the protocol and is to be used in conjunction with the individual procedure manuals. The manuals contain the information and procedures required to implement the proven methods for estimating average indoor radon-daughter concentration. Proven in this case means that these methods have been determined to provide reasonable assurance that the average radon-daughter concentration within a structure is either above, at, or below the standards established for remedial action programs. This document contains descriptions of the generic aspects of methods used for estimating radon-daughter concentration and provides guidance with respect to method selection for a given situation. It is expected that the latter section of this document will be revised whenever another estimation method is proven to be capable of satisfying the criteria of reasonable assurance and cost minimization. 22 refs., 6 figs., 3 tabs

  7. A Theoretically Consistent Method for Minimum Mean-Square Error Estimation of Mel-Frequency Cepstral Features

    DEFF Research Database (Denmark)

    Jensen, Jesper; Tan, Zheng-Hua

    2014-01-01

    We propose a method for minimum mean-square error (MMSE) estimation of mel-frequency cepstral features for noise robust automatic speech recognition (ASR). The method is based on a minimum number of well-established statistical assumptions; no assumptions are made which are inconsistent with others. ... The strength of the proposed method is that it allows MMSE estimation of mel-frequency cepstral coefficients (MFCC's), cepstral mean-subtracted MFCC's (CMS-MFCC's), velocity, and acceleration coefficients. Furthermore, the method is easily modified to take into account other compressive non-linearities than ... the logarithmic which is usually used for MFCC computation. The proposed method shows estimation performance which is identical to or better than state-of-the-art methods. It further shows comparable ASR performance, where the advantage of being able to use mel-frequency speech features based on a power non...

  8. Quantum tomography via compressed sensing: error bounds, sample complexity and efficient estimators

    International Nuclear Information System (INIS)

    Flammia, Steven T; Gross, David; Liu, Yi-Kai; Eisert, Jens

    2012-01-01

    Intuitively, if a density operator has small rank, then it should be easier to estimate from experimental data, since in this case only a few eigenvectors need to be learned. We prove two complementary results that confirm this intuition. Firstly, we show that a low-rank density matrix can be estimated using fewer copies of the state, i.e. the sample complexity of tomography decreases with the rank. Secondly, we show that unknown low-rank states can be reconstructed from an incomplete set of measurements, using techniques from compressed sensing and matrix completion. These techniques use simple Pauli measurements, and their output can be certified without making any assumptions about the unknown state. In this paper, we present a new theoretical analysis of compressed tomography, based on the restricted isometry property for low-rank matrices. Using these tools, we obtain near-optimal error bounds for the realistic situation where the data contain noise due to finite statistics, and the density matrix is full-rank with decaying eigenvalues. We also obtain upper bounds on the sample complexity of compressed tomography, and almost-matching lower bounds on the sample complexity of any procedure using adaptive sequences of Pauli measurements. Using numerical simulations, we compare the performance of two compressed sensing estimators—the matrix Dantzig selector and the matrix Lasso—with standard maximum-likelihood estimation (MLE). We find that, given comparable experimental resources, the compressed sensing estimators consistently produce higher fidelity state reconstructions than MLE. In addition, the use of an incomplete set of measurements leads to faster classical processing with no loss of accuracy. Finally, we show how to certify the accuracy of a low-rank estimate using direct fidelity estimation, and describe a method for compressed quantum process tomography that works for processes with small Kraus rank and requires only Pauli eigenstate preparations

  9. Method of estimating changes in vapor concentrations continuously generated from two-component organic solvents.

    Science.gov (United States)

    Hori, Hajime; Ishidao, Toru; Ishimatsu, Sumiyo

    2010-12-01

    We measured vapor concentrations continuously evaporated from two-component organic solvents in a reservoir and proposed a method to estimate and predict the evaporation rate or generated vapor concentrations. Two kinds of organic solvents were put into a small reservoir made of glass (3 cm in diameter and 3 cm high) that was installed in a cylindrical glass vessel (10 cm in diameter and 15 cm high). Air was introduced into the glass vessel at a flow rate of 150 ml/min, and the generated vapor concentrations were intermittently monitored for up to 5 hours with a gas chromatograph equipped with a flame ionization detector. The solvent systems tested in this study were the methanol-toluene system and the ethyl acetate-toluene system. The vapor concentrations of the more volatile component, that is, methanol in the methanol-toluene system and ethyl acetate in the ethyl acetate-toluene system, were high at first, and then decreased with time. On the other hand, the concentrations of the less volatile component were low at first, and then increased with time. A model for estimating multicomponent organic vapor concentrations was developed, based on a theory of vapor-liquid equilibria and a theory of the mass transfer rate, and estimated values were compared with experimental ones. The estimated vapor concentrations were in relatively good agreement with the experimental ones. The results suggest that changes in concentrations of two-component organic vapors continuously evaporating from a liquid reservoir can be estimated by the proposed model.
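
    The vapor-liquid-equilibrium side of such an estimate can be sketched with Raoult's law: the equilibrium headspace concentration of each component is proportional to its liquid mole fraction times its pure-component vapor pressure, and depleting the liquid in proportion to the partial pressures reproduces the qualitative trend reported above (the more volatile component high at first, then decreasing). The vapor pressures, amounts and evaporation rate below are illustrative assumptions; the paper's mass-transfer formulation is not reproduced here.

      # Sketch: Raoult's-law estimate of headspace vapor concentrations from a
      # two-component liquid, with a crude evaporation mass balance (illustrative).
      P_TOTAL = 101.325                            # kPa, ambient pressure
      P_SAT = {"methanol": 16.9, "toluene": 3.8}   # kPa near 25 C (approximate values)

      def vapor_ppm(moles):
          """Equilibrium vapor concentration (ppm, mole basis) of each component."""
          n_total = sum(moles.values())
          return {s: 1e6 * (moles[s] / n_total) * P_SAT[s] / P_TOTAL for s in moles}

      # Hypothetical initial liquid: equimolar methanol / toluene.
      liquid = {"methanol": 0.05, "toluene": 0.05}   # mol

      for step in range(5):
          conc = vapor_ppm(liquid)
          print(step, {s: round(c) for s, c in conc.items()})
          # Deplete the liquid in proportion to each component's partial pressure,
          # so the more volatile component evaporates faster.
          n_tot = sum(liquid.values())
          total_p = sum((liquid[s] / n_tot) * P_SAT[s] for s in liquid)
          evap = 0.002                              # mol evaporated per step (hypothetical)
          for s in list(liquid):
              x = liquid[s] / n_tot
              liquid[s] = max(liquid[s] - evap * (x * P_SAT[s]) / total_p, 1e-9)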

  10. Accuracy of crystal structure error estimates

    International Nuclear Information System (INIS)

    Taylor, R.; Kennard, O.

    1986-01-01

    A statistical analysis of 100 crystal structures retrieved from the Cambridge Structural Database is reported. Each structure has been determined independently by two different research groups. Comparison of the independent results leads to the following conclusions: (a) The e.s.d.'s of non-hydrogen-atom positional parameters are almost invariably too small. Typically, they are underestimated by a factor of 1.4-1.45. (b) The extent to which e.s.d.'s are underestimated varies significantly from structure to structure and from atom to atom within a structure. (c) Errors in the positional parameters of atoms belonging to the same chemical residue tend to be positively correlated. (d) The e.s.d.'s of heavy-atom positions are less reliable than those of light-atom positions. (e) Experimental errors in atomic positional parameters are normally, or approximately normally, distributed. (f) The e.s.d.'s of cell parameters are grossly underestimated, by an average factor of about 5 for cell lengths and 2.5 for cell angles. There is marginal evidence that the accuracy of atomic-coordinate e.s.d.'s also depends on diffractometer geometry, refinement procedure, whether or not the structure has a centre of symmetry, and the degree of precision attained in the structure determination. (orig.)

  11. Investigation of error estimation method of observational data and comparison method between numerical and observational results toward V and V of seismic simulation

    International Nuclear Information System (INIS)

    Suzuki, Yoshio; Kawakami, Yoshiaki; Nakajima, Norihiro

    2017-01-01

    Methods to estimate the errors included in observational data and methods to compare numerical results with observational results are investigated toward the verification and validation (V and V) of a seismic simulation. For error estimation, 144 publications from the past 5 years (2010 to 2014) in the structural engineering and earthquake engineering fields, where descriptions of acceleration data are frequent, were surveyed. It was found that processes to remove components regarded as errors from observational data are used in about 30% of those publications. Errors are caused by the resolution, the linearity, the temperature coefficient for sensitivity, the temperature coefficient for zero shift, the transverse sensitivity, the seismometer properties, aliasing, and so on. Those processes can be exploited to estimate the individual error contributions. For the comparison of numerical results with observational results, public materials of the ASME V and V Symposium 2012-2015, their references, and the above 144 publications were surveyed. Six methods have mainly been proposed in existing research. Evaluating those methods against nine criteria, their advantages and disadvantages are summarized. No single method is well established, so it is necessary to employ these methods while compensating for their disadvantages and/or to search for a novel method. (author)

  12. Bayesian Estimation and Selection of Nonlinear Vector Error Correction Models: The Case of the Sugar-Ethanol-Oil Nexus in Brazil

    OpenAIRE

    Kelvin Balcombe; George Rapsomanikis

    2008-01-01

    Nonlinear adjustment toward long-run price equilibrium relationships in the sugar-ethanol-oil nexus in Brazil is examined. We develop generalized bivariate error correction models that allow for cointegration between sugar, ethanol, and oil prices, where dynamic adjustments are potentially nonlinear functions of the disequilibrium errors. A range of models are estimated using Bayesian Markov chain Monte Carlo algorithms and compared using Bayesian model selection methods. The results suggest ...

  13. Threshold-based detection for amplify-and-forward cooperative communication systems with channel estimation error

    KAUST Repository

    Abuzaid, Abdulrahman I.

    2014-09-01

    Efficient receiver designs for cooperative communication systems are becoming increasingly important. In previous work, cooperative networks communicated with the use of $L$ relays. As the receiver is constrained, it can only process $U$ out of $L$ relays. Channel shortening and reduced-rank techniques were employed to design the preprocessing matrix. In this paper, a receiver structure is proposed which combines the joint iterative optimization (JIO) algorithm and our proposed threshold selection criteria. This receiver structure assists in determining the optimal $U_{opt}$. Furthermore, this receiver provides the freedom to choose $U ≤ U_{opt}$ for each frame depending upon the tolerable difference allowed for the mean square error (MSE). Our study and simulation results show that by choosing an appropriate threshold, it is possible to gain in terms of complexity savings without affecting the BER performance of the system. Furthermore, in this paper the effect of channel estimation errors on the MSE performance of the amplify-and-forward (AF) cooperative relaying system is investigated.

  14. Neutron data error estimate of criticality calculations for lattice in shielding containers with metal fissionable materials

    International Nuclear Information System (INIS)

    Vasil'ev, A.P.; Krepkij, A.S.; Lukin, A.V.; Mikhal'kova, A.G.; Orlov, A.I.; Perezhogin, V.D.; Samojlova, L.Yu.; Sokolov, Yu.A.; Terekhin, V.A.; Chernukhin, Yu.I.

    1991-01-01

    Critical mass experiments were performed using assemblies which simulated one-dimensional lattice consisting of shielding containers with metal fissile materials. Calculations of the criticality of the above assemblies were carried out using the KLAN program with the BAS neutron constants. Errors in the calculations of the criticality for one-, two-, and three-dimensional lattices are estimated. 3 refs.; 1 tab

  15. Quantitative estimation of the human error probability during soft control operations

    International Nuclear Information System (INIS)

    Lee, Seung Jun; Kim, Jaewhan; Jung, Wondea

    2013-01-01

    Highlights: ► An HRA method to evaluate execution HEP for soft control operations was proposed. ► The soft control tasks were analyzed and design-related influencing factors were identified. ► An application to evaluate the effects of soft controls was performed. - Abstract: In this work, a method was proposed for quantifying human errors that can occur during operation executions using soft controls. Soft controls of advanced main control rooms have totally different features from conventional controls, and thus they may have different human error modes and occurrence probabilities. It is important to identify the human error modes and quantify the error probability for evaluating the reliability of the system and preventing errors. This work suggests an evaluation framework for quantifying the execution error probability using soft controls. In the application result, it was observed that the human error probabilities of soft controls showed both positive and negative results compared to the conventional controls according to the design quality of advanced main control rooms

  16. Mean total arsenic concentrations in chicken 1989-2000 and estimated exposures for consumers of chicken.

    OpenAIRE

    Lasky, Tamar; Sun, Wenyu; Kadry, Abdel; Hoffman, Michael K

    2004-01-01

    The purpose of this study was to estimate mean concentrations of total arsenic in chicken liver tissue and then estimate total and inorganic arsenic ingested by humans through chicken consumption. We used national monitoring data from the Food Safety and Inspection Service National Residue Program to estimate mean arsenic concentrations for 1994-2000. Incorporating assumptions about the concentrations of arsenic in liver and muscle tissues as well as the proportions of inorganic and organic a...

  17. The uncorrected refractive error challenge

    Directory of Open Access Journals (Sweden)

    Kovin Naidoo

    2016-11-01

    Full Text Available Refractive error affects people of all ages, socio-economic status and ethnic groups. The most recent statistics estimate that, worldwide, 32.4 million people are blind and 191 million people have vision impairment. Vision impairment has been defined based on distance visual acuity only, and uncorrected distance refractive error (mainly myopia) is the single biggest cause of worldwide vision impairment. However, when we also consider near visual impairment, it is clear that even more people are affected. From research it was estimated that the number of people with vision impairment due to uncorrected distance refractive error was 107.8 million,1 and the number of people affected by uncorrected near refractive error was 517 million, giving a total of 624.8 million people.

  18. Unified theory to evaluate the effect of concentration difference and Peclet number on electroosmotic mobility error of micro electroosmotic flow

    KAUST Repository

    Wang, Wentao; Lee, Yi Kuen

    2012-01-01

    Both theoretical analysis and nonlinear 2D numerical simulations are used to study the concentration difference and Peclet number effect on the measurement error of electroosmotic mobility in microchannels. We propose a compact analytical model

  19. Improving the accuracy of Laplacian estimation with novel multipolar concentric ring electrodes

    Science.gov (United States)

    Ding, Quan; Besio, Walter G.

    2015-01-01

    Conventional electroencephalography with disc electrodes has major drawbacks including poor spatial resolution, poor selectivity and low signal-to-noise ratio that critically limit its use. Concentric ring electrodes, consisting of several elements including the central disc and a number of concentric rings, are a promising alternative with potential to improve all of the aforementioned aspects significantly. In our previous work, the tripolar concentric ring electrode was successfully used in a wide range of applications demonstrating its superiority to the conventional disc electrode, in particular in accuracy of Laplacian estimation. This paper takes the next step toward further improving the Laplacian estimation with novel multipolar concentric ring electrodes by completing and validating a general approach to estimation of the Laplacian for an (n + 1)-polar electrode with n rings using the (4n + 1)-point method for n ≥ 2, which allows cancellation of all the truncation terms up to the order of 2n. An explicit formula based on inversion of a square Vandermonde matrix is derived to make computation of the multipolar Laplacian more efficient. To confirm the analytic result that the accuracy of the Laplacian estimate increases with n, and to assess the significance of this gain in accuracy for practical applications, finite element method model analysis has been performed. Multipolar concentric ring electrode configurations with n ranging from 1 ring (bipolar electrode configuration) to 6 rings (septapolar electrode configuration) were directly compared, and the obtained results suggest the significance of the increase in Laplacian accuracy caused by the increase of n. PMID:26693200
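
    A way to see the truncation-term cancellation described above is to build the weights from a small Vandermonde-type system: with rings at radii r, 2r, ..., nr, the weights are chosen so that the weighted ring-minus-centre differences cancel all even-order terms up to order 2n except the Laplacian term. The sketch below is a simplified illustration under that equal-spacing assumption, not the authors' implementation; for n = 2 it reproduces the familiar tripolar nine-point formula.

      # Sketch: weights for an (n+1)-polar concentric ring electrode so that the
      # weighted sum of (ring average - centre) differences cancels truncation
      # terms up to order 2n, leaving the Laplacian term (illustrative).
      import numpy as np

      def laplacian_weights(n):
          """Solve sum_i w_i * i^(2k) = delta_{k,1} for k = 1..n (Vandermonde-type)."""
          i = np.arange(1, n + 1, dtype=float)
          A = np.vander(i ** 2, N=n, increasing=True).T * (i ** 2)   # rows: (i^2)^k, k = 1..n
          b = np.zeros(n)
          b[0] = 1.0
          return np.linalg.solve(A, b)

      def surface_laplacian(v_centre, v_rings, r):
          """Laplacian estimate at the centre disc; v_rings[i-1] is the average
          potential on the ring of radius i*r."""
          w = laplacian_weights(len(v_rings))
          return (4.0 / r ** 2) * np.dot(w, np.asarray(v_rings) - v_centre)

      # Quick check on v(x, y) = x^2 + y^2, whose Laplacian is 4 everywhere.
      r = 0.01
      rings = [(k * r) ** 2 for k in range(1, 4)]   # ring averages of x^2 + y^2
      print(surface_laplacian(0.0, rings, r))       # ~4.0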

  20. An Error Estimate for Symplectic Euler Approximation of Optimal Control Problems

    KAUST Repository

    Karlsson, Jesper; Larsson, Stig; Sandberg, Mattias; Szepessy, Anders; Tempone, Raul

    2015-01-01

    This work focuses on numerical solutions of optimal control problems. A time discretization error representation is derived for the approximation of the associated value function. It concerns symplectic Euler solutions of the Hamiltonian system connected with the optimal control problem. The error representation has a leading-order term consisting of an error density that is computable from symplectic Euler solutions. Under an assumption of the pathwise convergence of the approximate dual function as the maximum time step goes to zero, we prove that the remainder is of higher order than the leading-error density part in the error representation. With the error representation, it is possible to perform adaptive time stepping. We apply an adaptive algorithm originally developed for ordinary differential equations. The performance is illustrated by numerical tests.
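
    As a minimal illustration of the discretization named above, the following is a symplectic Euler step for a separable Hamiltonian; the optimal-control setting, the adjoint (dual) variables and the computable error density of the paper are not reproduced here.

      # Sketch: symplectic Euler for a separable Hamiltonian H(q, p) = p^2/2 + V(q),
      # illustrated on the harmonic oscillator (V(q) = q^2/2), where the energy
      # error stays bounded instead of drifting (illustrative only).
      import numpy as np

      def symplectic_euler(dV, q0, p0, h, n_steps):
          q, p = q0, p0
          traj = [(q, p)]
          for _ in range(n_steps):
              p = p - h * dV(q)       # momentum update uses the old position
              q = q + h * p           # position update uses the new momentum
              traj.append((q, p))
          return np.array(traj)

      traj = symplectic_euler(lambda q: q, q0=1.0, p0=0.0, h=0.05, n_steps=2000)
      energy = 0.5 * traj[:, 1] ** 2 + 0.5 * traj[:, 0] ** 2
      print(f"energy drift over 2000 steps: {energy.max() - energy.min():.4f}")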

  1. Towards better error statistics for atmospheric inversions of methane surface fluxes

    Directory of Open Access Journals (Sweden)

    A. Berchet

    2013-07-01

    Full Text Available We adapt general statistical methods to estimate the optimal error covariance matrices in a regional inversion system inferring methane surface emissions from atmospheric concentrations. Using a minimal set of physical hypotheses on the patterns of errors, we compute a guess of the error statistics that is optimal in regard to objective statistical criteria for the specific inversion system. With this very general approach applied to a real-data case, we recover sources of errors in the observations and in the prior state of the system that are consistent with expert knowledge while inferred from objective criteria and with affordable computation costs. By not assuming any specific error patterns, our results depict the variability and the inter-dependency of errors induced by complex factors such as the misrepresentation of the observations in the transport model or the inability of the model to reproduce well the situations of steep gradients of concentrations. Situations with probable significant biases (e.g., during the night when vertical mixing is ill-represented by the transport model can also be diagnosed by our methods in order to point at necessary improvement in a model. By additionally analysing the sensitivity of the inversion to each observation, guidelines to enhance data selection in regional inversions are also proposed. We applied our method to a recent significant accidental methane release from an offshore platform in the North Sea and found methane fluxes of the same magnitude than what was officially declared.

  2. Technical Note: Error metrics for estimating the accuracy of needle/instrument placement during transperineal magnetic resonance/ultrasound-guided prostate interventions.

    Science.gov (United States)

    Bonmati, Ester; Hu, Yipeng; Villarini, Barbara; Rodell, Rachael; Martin, Paul; Han, Lianghao; Donaldson, Ian; Ahmed, Hashim U; Moore, Caroline M; Emberton, Mark; Barratt, Dean C

    2018-04-01

    Image-guided systems that fuse magnetic resonance imaging (MRI) with three-dimensional (3D) ultrasound (US) images for performing targeted prostate needle biopsy and minimally invasive treatments for prostate cancer are of increasing clinical interest. To date, a wide range of different accuracy estimation procedures and error metrics have been reported, which makes comparing the performance of different systems difficult. A set of nine measures are presented to assess the accuracy of MRI-US image registration, needle positioning, needle guidance, and overall system error, with the aim of providing a methodology for estimating the accuracy of instrument placement using a MR/US-guided transperineal approach. Using the SmartTarget fusion system, an MRI-US image alignment error was determined to be 2.0 ± 1.0 mm (mean ± SD), and an overall system instrument targeting error of 3.0 ± 1.2 mm. Three needle deployments for each target phantom lesion was found to result in a 100% lesion hit rate and a median predicted cancer core length of 5.2 mm. The application of a comprehensive, unbiased validation assessment for MR/US guided systems can provide useful information on system performance for quality assurance and system comparison. Furthermore, such an analysis can be helpful in identifying relationships between these errors, providing insight into the technical behavior of these systems. © 2018 American Association of Physicists in Medicine.

  3. Estimation of local concentration from measurements of stochastic adsorption dynamics using carbon nanotube-based sensors

    International Nuclear Information System (INIS)

    Jang, Hong; Lee, Jay H.; Braatz, Richard D.

    2016-01-01

    This paper proposes a maximum likelihood estimation (MLE) method for estimating the time-varying local concentration of the target molecule proximate to the sensor from the time profile of monomolecular adsorption and desorption on the surface of the sensor at nanoscale. Recently, several carbon nanotube sensors have been developed that can selectively detect target molecules at a trace concentration level. These sensors use light intensity changes mediated by adsorption or desorption phenomena on their surfaces. The molecular events occurring at trace concentration levels are inherently stochastic, posing a challenge for optimal estimation. The stochastic behavior is modeled by the chemical master equation (CME), composed of a set of ordinary differential equations describing the time evolution of probabilities for the possible adsorption states. Given the significant stochastic nature of the underlying phenomena, rigorous stochastic estimation based on the CME should lead to improved accuracy over deterministic estimation based on the continuum model. Motivated by this expectation, we formulate the MLE based on an analytical solution of the relevant CME, both for constant and for time-varying local concentrations, with the objective of estimating the analyte concentration field in real time from the adsorption readings of the sensor array. The performance of the MLE and of deterministic least squares is compared using data generated by kinetic Monte Carlo (KMC) simulations of the stochastic process. Some future challenges are described for estimating and controlling the concentration field in a distributed domain using the sensor technology.
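
    A much-reduced sketch of the idea: if a single adsorption site follows a two-state Markov model with adsorption rate k_on*c and desorption rate k_off, the constant local concentration c has a closed-form maximum likelihood estimate from the empty-state dwell times. The rate constants and the reduction to one site are illustrative assumptions, not the paper's full chemical-master-equation treatment of a sensor array.

      # Sketch: MLE of a constant local concentration c from stochastic
      # adsorption/desorption dwell times of a single site (illustrative).
      import numpy as np

      rng = np.random.default_rng(1)
      K_ON, K_OFF = 2.0, 0.5      # 1/(nM s) and 1/s, hypothetical rate constants
      c_true = 3.0                # nM, "unknown" local concentration

      # Simulate dwell times: empty-state lifetimes ~ Exp(k_on*c), bound ~ Exp(k_off).
      n_events = 200
      t_empty = rng.exponential(1.0 / (K_ON * c_true), size=n_events)
      t_bound = rng.exponential(1.0 / K_OFF, size=n_events)   # carries no info on c

      # For exponential empty-state dwell times the MLE of the adsorption rate is
      # n / sum(t_empty); dividing by k_on gives the concentration estimate.
      c_hat = n_events / (K_ON * np.sum(t_empty))
      se = c_hat / np.sqrt(n_events)            # approximate standard error
      print(f"true c = {c_true} nM, MLE c = {c_hat:.2f} +/- {se:.2f} nM")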

  4. Evaluation of Data with Systematic Errors

    International Nuclear Information System (INIS)

    Froehner, F. H.

    2003-01-01

    Application-oriented evaluated nuclear data libraries such as ENDF and JEFF contain not only recommended values but also uncertainty information in the form of 'covariance' or 'error files'. These can neither be constructed nor utilized properly without a thorough understanding of uncertainties and correlations. It is shown how incomplete information about errors is described by multivariate probability distributions or, more summarily, by covariance matrices, and how correlations are caused by incompletely known common errors. Parameter estimation for the practically most important case of the Gaussian distribution with common errors is developed in close analogy to the more familiar case without. The formalism shows that, contrary to widespread belief, common ('systematic') and uncorrelated ('random' or 'statistical') errors are to be added in quadrature. It also shows explicitly that repetition of a measurement reduces mainly the statistical uncertainties but not the systematic ones. While statistical uncertainties are readily estimated from the scatter of repeatedly measured data, systematic uncertainties can only be inferred from prior information about common errors and their propagation. The optimal way to handle error-affected auxiliary quantities ('nuisance parameters') in data fitting and parameter estimation is to adjust them on the same footing as the parameters of interest and to integrate (marginalize) them out of the joint posterior distribution afterward
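
    The two points emphasised above, off-diagonal covariance terms arising from a common error and the quadrature combination in which repetition shrinks only the statistical part, can be checked numerically; all values below are illustrative.

      # Sketch: covariance matrix and combined uncertainty for n repeated
      # measurements sharing one common (systematic) error (illustrative values).
      import numpy as np

      n = 10
      sigma_stat = 0.05     # uncorrelated (statistical) uncertainty per measurement
      sigma_sys = 0.03      # common (systematic) uncertainty shared by all of them

      # Covariance of the n measurements: diagonal statistical part plus a
      # rank-one block from the fully correlated common error.
      cov = np.eye(n) * sigma_stat**2 + np.ones((n, n)) * sigma_sys**2

      # Uncertainty of the unweighted mean: the statistical part shrinks as
      # 1/sqrt(n), the systematic part does not.
      w = np.full(n, 1.0 / n)
      sigma_mean = np.sqrt(w @ cov @ w)
      print(sigma_mean)                                    # ~0.034
      print(np.hypot(sigma_stat / np.sqrt(n), sigma_sys))  # same value, in quadrature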

  5. Negative control exposure studies in the presence of measurement error: implications for attempted effect estimate calibration.

    Science.gov (United States)

    Sanderson, Eleanor; Macdonald-Wallis, Corrie; Davey Smith, George

    2018-04-01

    Negative control exposure studies are increasingly being used in epidemiological studies to strengthen causal inference regarding an exposure-outcome association when unobserved confounding is thought to be present. Negative control exposure studies contrast the magnitude of association of the negative control, which has no causal effect on the outcome but is associated with the unmeasured confounders in the same way as the exposure, with the magnitude of the association of the exposure with the outcome. A markedly larger effect of the exposure on the outcome than the negative control on the outcome strengthens inference that the exposure has a causal effect on the outcome. We investigate the effect of measurement error in the exposure and negative control variables on the results obtained from a negative control exposure study. We do this in models with continuous and binary exposure and negative control variables using analysis of the bias of the estimated coefficients and Monte Carlo simulations. Our results show that measurement error in either the exposure or negative control variables can bias the estimated results from the negative control exposure study. Measurement error is common in the variables used in epidemiological studies; these results show that negative control exposure studies cannot be used to precisely determine the size of the effect of the exposure variable, or adequately adjust for unobserved confounding; however, they can be used as part of a body of evidence to aid inference as to whether a causal effect of the exposure on the outcome is present.

  6. Modeling systematic errors: polychromatic sources of Beer-Lambert deviations in HPLC/UV and nonchromatographic spectrophotometric assays.

    Science.gov (United States)

    Galli, C

    2001-07-01

    It is well established that the use of polychromatic radiation in spectrophotometric assays leads to excursions from the Beer-Lambert limit. This Note models the resulting systematic error as a function of assay spectral width, slope of molecular extinction coefficient, and analyte concentration. The theoretical calculations are compared with recent experimental results; a parameter is introduced which can be used to estimate the magnitude of the systematic error in both chromatographic and nonchromatographic spectrophotometric assays. It is important to realize that the polychromatic radiation employed in common laboratory equipment can yield assay errors up to approximately 4%, even at absorption levels generally considered 'safe' (i.e. absorption <1). Thus careful consideration of instrumental spectral width, analyte concentration, and slope of molecular extinction coefficient is required to ensure robust analytical methods.
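
    The deviation can be reproduced in a few lines: when the detector integrates over a finite band in which the molar extinction coefficient varies, the apparent absorbance falls below the monochromatic Beer-Lambert value, and the shortfall grows with concentration. The band shape and extinction-coefficient slope below are illustrative, not fitted to the Note's data.

      # Sketch: apparent absorbance of a polychromatic band vs. the ideal
      # Beer-Lambert value at the band centre (illustrative parameters).
      import numpy as np

      wl = np.linspace(250.0, 260.0, 101)          # nm, detector bandpass
      eps = 1.0e4 - 300.0 * (wl - 255.0)           # L/(mol*cm), linearly sloped
      path = 1.0                                   # cm

      for c in (1e-5, 5e-5, 1e-4):                 # mol/L
          trans = np.mean(10.0 ** (-eps * c * path))   # band-averaged transmittance
          a_measured = -np.log10(trans)
          a_ideal = 1.0e4 * c * path                   # monochromatic, band centre
          print(f"c={c:.0e}  A_ideal={a_ideal:.3f}  A_measured={a_measured:.3f}  "
                f"error={100 * (a_measured - a_ideal) / a_ideal:+.1f}%")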

  7. A Novel Approach of Understanding and Incorporating Error of Chemical Transport Models into a Geostatistical Framework

    Science.gov (United States)

    Reyes, J.; Vizuete, W.; Serre, M. L.; Xu, Y.

    2015-12-01

    The EPA employs a vast monitoring network to measure ambient PM2.5 concentrations across the United States, with one of its goals being to quantify exposure within the population. However, there are several areas of the country with sparse monitoring spatially and temporally. One means to fill in these monitoring gaps is to use PM2.5 modeled estimates from Chemical Transport Models (CTMs), specifically the Community Multi-scale Air Quality (CMAQ) model. CMAQ is able to provide complete spatial coverage but is subject to systematic and random error due to model uncertainty. Due to the deterministic nature of CMAQ, often these uncertainties are not quantified. Much effort is employed to quantify the efficacy of these models through different metrics of model performance. Currently, evaluation is specific only to locations with observed data. Multiyear studies across the United States are challenging because the error and model performance of CMAQ are not uniform over such large space/time domains. Error changes regionally and temporally. Because of the complex mix of species that constitute PM2.5, CMAQ error is also a function of increasing PM2.5 concentration. To address this issue we introduce a model performance evaluation for PM2.5 CMAQ that is regionalized and non-linear. This model performance evaluation leads to error quantification for each CMAQ grid cell, so that areas and time periods of error are better characterized. The regionalized error correction approach is non-linear and is therefore more flexible at characterizing model performance than approaches that rely on linearity assumptions and assume homoscedasticity of CMAQ prediction errors. Corrected CMAQ data are then incorporated into the modern geostatistical framework of Bayesian Maximum Entropy (BME). Through cross validation it is shown that incorporating error-corrected CMAQ data leads to more accurate estimates than just using observed data by themselves.

  8. Point-of-care estimation of haemoglobin concentration in all age ...

    African Journals Online (AJOL)

    Point-of-care estimation of haemoglobin concentration in all age groups in clinical ... and the results were compared using standard scatter and difference plots. ... Hb measurements with a smaller sample volume, improved turnaround time, ...

  9. Estimating BrAC from transdermal alcohol concentration data using the BrAC estimator software program.

    Science.gov (United States)

    Luczak, Susan E; Rosen, I Gary

    2014-08-01

    Transdermal alcohol sensor (TAS) devices have the potential to allow researchers and clinicians to unobtrusively collect naturalistic drinking data for weeks at a time, but the transdermal alcohol concentration (TAC) data these devices produce do not consistently correspond with breath alcohol concentration (BrAC) data. We present and test the BrAC Estimator software, a program designed to produce individualized estimates of BrAC from TAC data by fitting mathematical models to a specific person wearing a specific TAS device. Two TAS devices were worn simultaneously by 1 participant for 18 days. The trial began with a laboratory alcohol session to calibrate the model and was followed by a field trial with 10 drinking episodes. Model parameter estimates and fit indices were compared across drinking episodes to examine the calibration phase of the software. Software-generated estimates of peak BrAC, time of peak BrAC, and area under the BrAC curve were compared with breath analyzer data to examine the estimation phase of the software. In this single-subject design with breath analyzer peak BrAC scores ranging from 0.013 to 0.057, the software created consistent models for the 2 TAS devices, despite differences in raw TAC data, and was able to compensate for the attenuation of peak BrAC and latency of the time of peak BrAC that are typically observed in TAC data. This software program represents an important initial step for making it possible for non-mathematician researchers and clinicians to obtain estimates of BrAC from TAC data in naturalistic drinking environments. Future research with more participants and greater variation in alcohol consumption levels and patterns, as well as examination of gain-scheduling calibration procedures and nonlinear models of diffusion, will help to determine how precise these software models can become. Copyright © 2014 by the Research Society on Alcoholism.

  10. Estimating the Persistence and the Autocorrelation Function of a Time Series that is Measured with Error

    DEFF Research Database (Denmark)

    Hansen, Peter Reinhard; Lunde, Asger

    An economic time series can often be viewed as a noisy proxy for an underlying economic variable. Measurement errors will influence the dynamic properties of the observed process and may conceal the persistence of the underlying time series. In this paper we develop instrumental variable (IV) ... application despite the large sample. Unit root tests based on the IV estimator have better finite sample properties in this context. ...

  11. L2-Error Estimates of the Extrapolated Crank-Nicolson Discontinuous Galerkin Approximations for Nonlinear Sobolev Equations

    Directory of Open Access Journals (Sweden)

    Hyun Young Lee

    2010-01-01

    Full Text Available We analyze discontinuous Galerkin methods with penalty terms, namely symmetric interior penalty Galerkin methods, to solve nonlinear Sobolev equations. We construct finite element spaces on which we develop fully discrete approximations using the extrapolated Crank-Nicolson method. We adopt an appropriate elliptic-type projection, which leads to optimal ℓ∞(L2) error estimates of discontinuous Galerkin approximations in both the spatial and the temporal direction.

  12. Global error estimation based on the tolerance proportionality for some adaptive Runge-Kutta codes

    Science.gov (United States)

    Calvo, M.; González-Pinto, S.; Montijano, J. I.

    2008-09-01

    Modern codes for the numerical solution of Initial Value Problems (IVPs) in ODEs are based on adaptive methods that, for a user-supplied tolerance δ, attempt to advance the integration selecting the size of each step so that some measure of the local error is ≈ δ. Although this policy does not ensure that the global errors are under the prescribed tolerance, after the early studies of Stetter [Considerations concerning a theory for ODE-solvers, in: R. Burlisch, R.D. Grigorieff, J. Schröder (Eds.), Numerical Treatment of Differential Equations, Proceedings of Oberwolfach, 1976, Lecture Notes in Mathematics, vol. 631, Springer, Berlin, 1978, pp. 188-200; Tolerance proportionality in ODE codes, in: R. März (Ed.), Proceedings of the Second Conference on Numerical Treatment of Ordinary Differential Equations, Humbold University, Berlin, 1980, pp. 109-123] and the extensions of Higham [Global error versus tolerance for explicit Runge-Kutta methods, IMA J. Numer. Anal. 11 (1991) 457-480; The tolerance proportionality of adaptive ODE solvers, J. Comput. Appl. Math. 45 (1993) 227-236; The reliability of standard local error control algorithms for initial value ordinary differential equations, in: Proceedings: The Quality of Numerical Software: Assessment and Enhancement, IFIP Series, Springer, Berlin, 1997], it has been proved that in many existing explicit Runge-Kutta codes the global errors behave asymptotically as some rational power of δ. This step-size policy, for a given IVP, determines at each grid point tn a new step-size hn+1 = h(tn; δ) so that h(t; δ) is a continuous function of t. In this paper a study of the tolerance proportionality property under a discontinuous step-size policy that does not allow the step size to be changed if the step-size ratio between two consecutive steps is close to unity is carried out. This theory is applied to obtain global error estimations in a few problems that have been solved with
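
    The tolerance-proportionality behaviour discussed above can be observed empirically: integrating a problem with a known solution at several tolerances and fitting log(global error) against log(tolerance) estimates the rational power referred to. The sketch below uses an off-the-shelf adaptive Runge-Kutta code (SciPy's RK45) and a simple linear test problem, not the codes or problems studied in the paper.

      # Sketch: empirical tolerance proportionality of an adaptive RK code,
      # estimating p in  global_error ~ C * tol^p  (illustrative test problem).
      import numpy as np
      from scipy.integrate import solve_ivp

      def f(t, y):                      # y' = -y + sin(t), y(0) = 1
          return -y + np.sin(t)

      def exact(t):                     # closed-form solution for the error check
          return 1.5 * np.exp(-t) + 0.5 * (np.sin(t) - np.cos(t))

      tols = np.array([1e-4, 1e-5, 1e-6, 1e-7, 1e-8])
      errs = []
      for tol in tols:
          sol = solve_ivp(f, (0.0, 10.0), [1.0], method="RK45", rtol=tol, atol=tol)
          errs.append(abs(sol.y[0, -1] - exact(10.0)))

      p, _ = np.polyfit(np.log(tols), np.log(errs), 1)
      print(f"observed global error ~ tol^{p:.2f}")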

  13. Modeling Input Errors to Improve Uncertainty Estimates for Sediment Transport Model Predictions

    Science.gov (United States)

    Jung, J. Y.; Niemann, J. D.; Greimann, B. P.

    2016-12-01

    Bayesian methods using Markov chain Monte Carlo algorithms have recently been applied to sediment transport models to assess the uncertainty in the model predictions due to the parameter values. Unfortunately, the existing approaches can only attribute overall uncertainty to the parameters. This limitation is critical because no model can produce accurate forecasts if forced with inaccurate input data, even if the model is well founded in physical theory. In this research, an existing Bayesian method is modified to consider the potential errors in input data during the uncertainty evaluation process. The input error is modeled using Gaussian distributions, and the means and standard deviations are treated as uncertain parameters. The proposed approach is tested by coupling it to the Sedimentation and River Hydraulics - One Dimension (SRH-1D) model and simulating a 23-km reach of the Tachia River in Taiwan. The Wu equation in SRH-1D is used for computing the transport capacity for a bed material load of non-cohesive material. Three types of input data are considered uncertain: (1) the input flowrate at the upstream boundary, (2) the water surface elevation at the downstream boundary, and (3) the water surface elevation at a hydraulic structure in the middle of the reach. The benefits of modeling the input errors in the uncertainty analysis are evaluated by comparing the accuracy of the most likely forecast and the coverage of the observed data by the credible intervals to those of the existing method. The results indicate that the internal boundary condition has the largest uncertainty among those considered. Overall, the uncertainty estimates from the new method are notably different from those of the existing method for both the calibration and forecast periods.

  14. Optimal classifier selection and negative bias in error rate estimation: an empirical study on high-dimensional prediction

    Directory of Open Access Journals (Sweden)

    Boulesteix Anne-Laure

    2009-12-01

    Full Text Available Abstract Background In biometric practice, researchers often apply a large number of different methods in a "trial-and-error" strategy to get as much as possible out of their data and, due to publication pressure or pressure from the consulting customer, present only the most favorable results. This strategy may induce a substantial optimistic bias in prediction error estimation, which is quantitatively assessed in the present manuscript. The focus of our work is on class prediction based on high-dimensional data (e.g. microarray data), since such analyses are particularly exposed to this kind of bias. Methods In our study we consider a total of 124 variants of classifiers (possibly including variable selection or tuning steps) within a cross-validation evaluation scheme. The classifiers are applied to original and modified real microarray data sets, some of which are obtained by randomly permuting the class labels to mimic non-informative predictors while preserving their correlation structure. Results We assess the minimal misclassification rate over the different variants of classifiers in order to quantify the bias arising when the optimal classifier is selected a posteriori in a data-driven manner. The bias resulting from the parameter tuning (including gene selection parameters as a special case) and the bias resulting from the choice of the classification method are examined both separately and jointly. Conclusions The median minimal error rate over the investigated classifiers was as low as 31% and 41% based on permuted uninformative predictors from studies on colon cancer and prostate cancer, respectively. We conclude that the strategy to present only the optimal result is not acceptable because it yields a substantial bias in error rate estimation, and suggest alternative approaches for properly reporting classification accuracy.

  15. Dynamic Programming and Error Estimates for Stochastic Control Problems with Maximum Cost

    International Nuclear Information System (INIS)

    Bokanowski, Olivier; Picarelli, Athena; Zidani, Hasnaa

    2015-01-01

    This work is concerned with stochastic optimal control for a running maximum cost. A direct approach based on dynamic programming techniques is studied leading to the characterization of the value function as the unique viscosity solution of a second order Hamilton–Jacobi–Bellman (HJB) equation with an oblique derivative boundary condition. A general numerical scheme is proposed and a convergence result is provided. Error estimates are obtained for the semi-Lagrangian scheme. These results can apply to the case of lookback options in finance. Moreover, optimal control problems with maximum cost arise in the characterization of the reachable sets for a system of controlled stochastic differential equations. Some numerical simulations on examples of reachable analysis are included to illustrate our approach

  16. Dynamic Programming and Error Estimates for Stochastic Control Problems with Maximum Cost

    Energy Technology Data Exchange (ETDEWEB)

    Bokanowski, Olivier, E-mail: boka@math.jussieu.fr [Laboratoire Jacques-Louis Lions, Université Paris-Diderot (Paris 7) UFR de Mathématiques - Bât. Sophie Germain (France); Picarelli, Athena, E-mail: athena.picarelli@inria.fr [Projet Commands, INRIA Saclay & ENSTA ParisTech (France); Zidani, Hasnaa, E-mail: hasnaa.zidani@ensta.fr [Unité de Mathématiques appliquées (UMA), ENSTA ParisTech (France)

    2015-02-15

    This work is concerned with stochastic optimal control for a running maximum cost. A direct approach based on dynamic programming techniques is studied leading to the characterization of the value function as the unique viscosity solution of a second order Hamilton–Jacobi–Bellman (HJB) equation with an oblique derivative boundary condition. A general numerical scheme is proposed and a convergence result is provided. Error estimates are obtained for the semi-Lagrangian scheme. These results can apply to the case of lookback options in finance. Moreover, optimal control problems with maximum cost arise in the characterization of the reachable sets for a system of controlled stochastic differential equations. Some numerical simulations on examples of reachable analysis are included to illustrate our approach.

  17. The influence of different error estimates in the detection of postoperative cognitive dysfunction using reliable change indices with correction for practice effects.

    Science.gov (United States)

    Lewis, Matthew S; Maruff, Paul; Silbert, Brendan S; Evered, Lis A; Scott, David A

    2007-02-01

    The reliable change index (RCI) expresses change relative to its associated error, and is useful in the identification of postoperative cognitive dysfunction (POCD). This paper examines four common RCIs that each account for error in different ways. Three rules incorporate a constant correction for practice effects and are contrasted with the standard RCI that has no correction for practice. These rules are applied to 160 patients undergoing coronary artery bypass graft (CABG) surgery who completed neuropsychological assessments preoperatively and 1 week postoperatively, using error and reliability data from a comparable healthy nonsurgical control group. The rules all identify POCD in a similar proportion of patients, but the within-subject standard deviation (WSD), which expresses the effects of random error, is a theoretically appropriate denominator when a constant correction for practice, removing the effects of systematic error, is deducted from the numerator of the RCI.
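
    One common form of the practice-corrected index described above subtracts the control group's mean practice effect from the patient's change score and divides by a within-subject standard deviation (WSD) derived from the controls' repeat assessments. The data and the exact WSD definition below are illustrative, not the study's.

      # Sketch: reliable change index with a constant practice-effect correction
      # and the within-subject SD (WSD) as the error term (illustrative data).
      import numpy as np

      # Healthy control group tested twice (no surgery): columns = time 1, time 2.
      controls = np.array([[50, 53], [47, 49], [55, 57], [52, 55], [49, 50.0]])
      practice = np.mean(controls[:, 1] - controls[:, 0])   # mean practice effect

      # WSD from repeated control measurements: sqrt of the mean within-pair variance.
      wsd = np.sqrt(np.mean(np.var(controls, axis=1, ddof=1)))

      def rci(pre, post):
          """Change score, corrected for practice, relative to random error."""
          return ((post - pre) - practice) / wsd

      # A hypothetical CABG patient declining from 51 to 44 one week after surgery.
      z = rci(51.0, 44.0)
      print(f"RCI = {z:.2f}; POCD flagged: {z <= -1.96}")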

  18. Generalized Gaussian Error Calculus

    CERN Document Server

    Grabe, Michael

    2010-01-01

    For the first time in 200 years Generalized Gaussian Error Calculus addresses a rigorous, complete and self-consistent revision of the Gaussian error calculus. Since experimentalists realized that measurements in general are burdened by unknown systematic errors, the classical, widespread used evaluation procedures scrutinizing the consequences of random errors alone turned out to be obsolete. As a matter of course, the error calculus to-be, treating random and unknown systematic errors side by side, should ensure the consistency and traceability of physical units, physical constants and physical quantities at large. The generalized Gaussian error calculus considers unknown systematic errors to spawn biased estimators. Beyond, random errors are asked to conform to the idea of what the author calls well-defined measuring conditions. The approach features the properties of a building kit: any overall uncertainty turns out to be the sum of a contribution due to random errors, to be taken from a confidence inter...

  19. Effects of self-absorption on simultaneous estimation of temperature distribution and concentration fields of soot and metal-oxide nanoparticles in nanofluid fuel flames using a spectrometer

    Science.gov (United States)

    Liu, Guannan; Liu, Dong

    2018-06-01

    An improved inverse reconstruction model that accounts for the self-absorption effect was proposed for the temperature distribution and concentration fields of soot and metal-oxide nanoparticles in nanofluid fuel flames, based on flame emission spectrometry. The effects of self-absorption on the temperature profile and concentration fields were investigated for various measurement errors, flame optical thicknesses and numbers of detecting lines. The model that neglects self-absorption caused serious reconstruction errors, especially in nanofluid fuel flames with large optical thicknesses, while the improved model successfully recovered the temperature distribution and concentration fields of soot and metal-oxide nanoparticles regardless of the optical thickness. By increasing the number of detecting lines, the reconstruction accuracy can be greatly improved because more flame emission information is received by the spectrometer. With an adequate number of detecting lines, the estimates of the temperature distribution and concentration fields of soot and metal-oxide nanoparticles in flames with large optical thicknesses were still satisfactory, even from noisy radiation intensities with a signal-to-noise ratio (SNR) as low as 46 dB. The results showed that the improved reconstruction model is effective and robust for concurrently retrieving the temperature distribution and volume fraction fields of soot and metal-oxide nanoparticles from exact and noisy data in nanofluid fuel sooting flames with different optical thicknesses.

  20. Expert estimation of human error probabilities in nuclear power plant operations: a review of probability assessment and scaling

    International Nuclear Information System (INIS)

    Stillwell, W.G.; Seaver, D.A.; Schwartz, J.P.

    1982-05-01

    This report reviews probability assessment and psychological scaling techniques that could be used to estimate human error probabilities (HEPs) in nuclear power plant operations. The techniques rely on expert opinion and can be used to estimate HEPs where data do not exist or are inadequate. These techniques have been used in various other contexts and have been shown to produce reasonably accurate probabilities. Some problems do exist, and limitations are discussed. Additional topics covered include methods for combining estimates from multiple experts, the effects of training on probability estimates, and some ideas on structuring the relationship between performance shaping factors and HEPs. Preliminary recommendations are provided along with cautions regarding the costs of implementing the recommendations. Additional research is required before definitive recommendations can be made

  1. Mixture models reveal multiple positional bias types in RNA-Seq data and lead to accurate transcript concentration estimates.

    Directory of Open Access Journals (Sweden)

    Andreas Tuerk

    2017-05-01

    Full Text Available Accuracy of transcript quantification with RNA-Seq is negatively affected by positional fragment bias. This article introduces Mix2 (rd. "mixquare"), a transcript quantification method which uses a mixture of probability distributions to model and thereby neutralize the effects of positional fragment bias. The parameters of Mix2 are trained by Expectation Maximization resulting in simultaneous transcript abundance and bias estimates. We compare Mix2 to Cufflinks, RSEM, eXpress and PennSeq; state-of-the-art quantification methods implementing some form of bias correction. On four synthetic biases we show that the accuracy of Mix2 overall exceeds the accuracy of the other methods and that its bias estimates converge to the correct solution. We further evaluate Mix2 on real RNA-Seq data from the Microarray and Sequencing Quality Control (MAQC, SEQC) Consortia. On MAQC data, Mix2 achieves improved correlation to qPCR measurements with a relative increase in R² between 4% and 50%. Mix2 also yields repeatable concentration estimates across technical replicates with a relative increase in R² between 8% and 47% and reduced standard deviation across the full concentration range. We further observe more accurate detection of differential expression with a relative increase in true positives between 74% and 378% for 5% false positives. In addition, Mix2 reveals 5 dominant biases in MAQC data deviating from the common assumption of a uniform fragment distribution. On SEQC data, Mix2 yields higher consistency between measured and predicted concentration ratios. A relative error of 20% or less is obtained for 51% of transcripts by Mix2, 40% of transcripts by Cufflinks and RSEM and 30% by eXpress. Titration order consistency is correct for 47% of transcripts for Mix2, 41% for Cufflinks and RSEM and 34% for eXpress. We further observe improved repeatability across laboratory sites with a relative increase in R² between 8% and 44% and reduced standard deviation.

  2. Estimation of radon concentration in dwellings in and around ...

    Indian Academy of Sciences (India)

    Besides, it is also known that out of the total radiation dose received from natural and man-made sources, 60% of the dose is due to radon and its progeny. Taking this into account, an attempt has been made to estimate radon concentration in dwellings in and around Guwahati using aluminium dosimeter cups with CR-39 ...

  3. Estimation of measurement variances

    International Nuclear Information System (INIS)

    Anon.

    1981-01-01

    In the previous two sessions, it was assumed that the measurement error variances were known quantities when the variances of the safeguards indices were calculated. These known quantities are actually estimates based on historical data and on data generated by the measurement program. Session 34 discusses how measurement error parameters are estimated for different situations. The various error types are considered. The purpose of the session is to enable participants to: (1) estimate systematic error variances from standard data; (2) estimate random error variances from data such as replicate measurement data; (3) perform a simple analysis of variance to characterize the measurement error structure when biases vary over time.
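
    Items (1) and (2) in the list above can be sketched as follows: the pooled within-item variance of replicate measurements estimates the random error variance, while repeated measurements of a known standard estimate the bias and its uncertainty. The data below are made up and the treatment is deliberately simplified.

      # Sketch: random-error variance from replicate measurements and a bias
      # estimate from repeated measurements of a known standard (made-up data).
      import numpy as np

      # Replicate measurements of several items (rows = items, columns = replicates).
      replicates = np.array([[10.1, 10.3, 10.2],
                             [ 8.7,  8.9,  8.8],
                             [12.4, 12.1, 12.3],
                             [ 9.9, 10.0, 10.2]])
      # Pooled within-item variance = random (repeatability) error variance.
      var_random = np.mean(np.var(replicates, axis=1, ddof=1))

      # Measurements of a standard with a known reference value: the mean offset
      # estimates the systematic bias, with its own standard error.
      standard_true = 10.00
      standard_meas = np.array([10.12, 10.09, 10.15, 10.11, 10.08])
      bias = np.mean(standard_meas) - standard_true
      se_bias = np.sqrt(np.var(standard_meas, ddof=1) / len(standard_meas))

      print(f"random error variance : {var_random:.4f}")
      print(f"estimated bias        : {bias:.3f} +/- {se_bias:.3f}")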

  4. Measurement error in a single regressor

    NARCIS (Netherlands)

    Meijer, H.J.; Wansbeek, T.J.

    2000-01-01

    For the setting of multiple regression with measurement error in a single regressor, we present some very simple formulas to assess the result that one may expect when correcting for measurement error. It is shown where the corrected estimated regression coefficients and the error variance may lie,

  5. Throughput Estimation Method in Burst ACK Scheme for Optimizing Frame Size and Burst Frame Number Appropriate to SNR-Related Error Rate

    Science.gov (United States)

    Ohteru, Shoko; Kishine, Keiji

    The Burst ACK scheme enhances effective throughput by reducing ACK overhead when a transmitter sends multiple data frames sequentially to a destination. IEEE 802.11e is one such example. The size of the data frame body and the number of burst data frames are important burst transmission parameters that affect throughput. The larger the burst transmission parameters are, the better the throughput under error-free conditions becomes. However, large data frames could reduce throughput under error-prone conditions caused by signal-to-noise ratio (SNR) deterioration. If the throughput can be calculated from the burst transmission parameters and error rate, the appropriate ranges of the burst transmission parameters could be narrowed down, and the necessary buffer size for storing transmit data or received data temporarily could be estimated. In this paper, we present a method that features a simple algorithm for estimating the effective throughput from the burst transmission parameters and error rate. The calculated throughput values agree well with the measured ones for actual wireless boards based on the IEEE 802.11-based original MAC protocol. We also calculate throughput values for larger values of the burst transmission parameters outside the assignable values of the wireless boards and find the appropriate values of the burst transmission parameters.
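
    A generic back-of-the-envelope model in the spirit described, not the authors' algorithm: a burst of N data frames of payload S bytes is followed by one block acknowledgement, the bit-error rate implied by the SNR translates into a frame error rate, and the expected delivered payload divided by the burst airtime gives the effective throughput. All timing constants below are illustrative assumptions.

      # Sketch: effective throughput of a burst of N data frames + one block ACK,
      # as a function of payload size and bit-error rate (illustrative constants).
      PHY_RATE = 54e6        # bit/s
      T_OVERHEAD = 150e-6    # s, per-frame preamble/header/spacing (hypothetical)
      T_BLOCK_ACK = 100e-6   # s, block ACK exchange (hypothetical)

      def throughput(payload_bytes, n_frames, ber):
          frame_bits = 8 * payload_bytes
          per = 1.0 - (1.0 - ber) ** frame_bits           # frame error rate from BER
          t_frame = frame_bits / PHY_RATE + T_OVERHEAD
          t_burst = n_frames * t_frame + T_BLOCK_ACK
          delivered = n_frames * frame_bits * (1.0 - per)  # expected good payload bits
          return delivered / t_burst

      for ber in (0.0, 1e-6, 1e-5):
          best = max(((throughput(s, n, ber), s, n)
                      for s in (256, 512, 1024, 1500)
                      for n in (1, 4, 8, 16)))
          print(f"BER={ber:.0e}: best {best[0] / 1e6:5.1f} Mbit/s "
                f"at frame {best[1]} B, burst {best[2]}")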

  6. On-board adaptive model for state of charge estimation of lithium-ion batteries based on Kalman filter with proportional integral-based error adjustment

    Science.gov (United States)

    Wei, Jingwen; Dong, Guangzhong; Chen, Zonghai

    2017-10-01

    With the rapid development of battery-powered electric vehicles, the lithium-ion battery plays a critical role in the reliability of the vehicle system. In order to provide timely management and protection for battery systems, it is necessary to develop a reliable battery model and accurate battery parameter estimation to describe battery dynamic behaviors. Therefore, this paper focuses on an on-board adaptive model for state-of-charge (SOC) estimation of lithium-ion batteries. Firstly, a first-order equivalent circuit battery model is employed to describe battery dynamic characteristics. Then, the recursive least squares algorithm and the off-line identification method are used to provide good initial values of model parameters to ensure filter stability and reduce the convergence time. Thirdly, an extended Kalman filter (EKF) is applied to estimate battery SOC and model parameters on-line. Considering that the EKF is essentially a first-order Taylor approximation of the battery model, which contains inevitable model errors, a proportional integral-based error adjustment technique is employed to improve the performance of the EKF method and correct model parameters. Finally, the experimental results on lithium-ion batteries indicate that the proposed EKF with proportional integral-based error adjustment method can provide a robust and accurate battery model and on-line parameter estimation.
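    As a concrete illustration of the estimation chain described above, the following is a minimal sketch of an EKF applied to a first-order RC equivalent circuit battery model. The OCV curve, circuit parameters and noise settings are assumed placeholder values, and the paper's on-line parameter identification and proportional integral-based error adjustment are not reproduced here.

```python
# Minimal EKF sketch for SOC estimation with a first-order RC equivalent circuit.
# All model parameters below are assumed, illustrative values.
import numpy as np

Q_CAP = 2.0 * 3600                 # assumed capacity: 2 Ah in coulombs
R0, R1, C1 = 0.05, 0.02, 1500.0    # assumed ohmic and RC-pair parameters
DT = 1.0                           # sample time [s]

def ocv(soc):                      # assumed open-circuit-voltage curve
    return 3.2 + 1.0 * soc - 0.2 * soc ** 2

def docv_dsoc(soc):
    return 1.0 - 0.4 * soc

def ekf_soc(current, voltage, soc0=0.9):
    a = np.exp(-DT / (R1 * C1))
    x = np.array([soc0, 0.0])      # state: [SOC, RC polarization voltage]
    P = np.diag([1e-2, 1e-3])
    Qn = np.diag([1e-7, 1e-6])     # process noise (assumed)
    Rn = 1e-3                      # measurement noise (assumed)
    estimates = []
    for I, V in zip(current, voltage):
        # prediction step (coulomb counting + RC relaxation)
        x = np.array([x[0] - I * DT / Q_CAP, a * x[1] + R1 * (1.0 - a) * I])
        F = np.diag([1.0, a])
        P = F @ P @ F.T + Qn
        # measurement update with the terminal voltage
        v_pred = ocv(x[0]) - x[1] - R0 * I
        H = np.array([docv_dsoc(x[0]), -1.0])
        S = H @ P @ H + Rn
        K = P @ H / S
        x = x + K * (V - v_pred)
        P = (np.eye(2) - np.outer(K, H)) @ P
        estimates.append(x[0])
    return np.array(estimates)
```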

  7. On estimation of the noise variance in high-dimensional linear models

    OpenAIRE

    Golubev, Yuri; Krymova, Ekaterina

    2017-01-01

    We consider the problem of recovering the unknown noise variance in the linear regression model. To estimate the nuisance (a vector of regression coefficients) we use a family of spectral regularisers of the maximum likelihood estimator. The noise estimation is based on the adaptive normalisation of the squared error. We derive the upper bound for the concentration of the proposed method around the ideal estimator (the case of zero nuisance).

  8. Positional accommodative intraocular lens power error induced by the estimation of the corneal power and the effective lens position

    Directory of Open Access Journals (Sweden)

    David P Piñero

    2015-01-01

    Full Text Available Purpose: To evaluate the predictability of the refractive correction achieved with a positional accommodating intraocular lens (IOL) and to develop a potential optimization of it by minimizing the error associated with the keratometric estimation of the corneal power and by developing a predictive formula for the effective lens position (ELP). Materials and Methods: Clinical data from 25 eyes of 14 patients (age range, 52-77 years) undergoing cataract surgery with implantation of the accommodating IOL Crystalens HD (Bausch and Lomb) were retrospectively reviewed. In all cases, the calculation of an adjusted IOL power (P_IOLadj) based on Gaussian optics considering the residual refractive error was done using a variable keratometric index value (n_kadj) for corneal power estimation, with and without using an estimation algorithm for ELP obtained by multiple regression analysis (ELP_adj). P_IOLadj was compared to the real IOL power implanted (P_IOLReal, calculated with the SRK-T formula) and also to the values estimated by the Haigis, HofferQ, and Holladay I formulas. Results: No statistically significant differences were found between P_IOLReal and P_IOLadj when ELP_adj was used (P = 0.10), with a range of agreement between calculations of 1.23 D. In contrast, P_IOLReal was significantly higher when compared to P_IOLadj without using ELP_adj and also compared to the values estimated by the other formulas. Conclusions: Predictable refractive outcomes can be obtained with the accommodating IOL Crystalens HD using a variable keratometric index for corneal power estimation and by estimating ELP with an algorithm dependent on anatomical factors and age.

  9. Bayesian estimation of a proportion under an asymmetric observation error Estimación bayesiana de una proporción bajo error de estimación asimétrico

    Directory of Open Access Journals (Sweden)

    Juan Carlos Correa Morales

    2012-06-01

    Full Text Available The process of estimating a proportion that is associated with a sensitive question can yield responses that do not necessarily correspond to reality. To reduce the probability of false responses to this kind of sensitive question, some authors have proposed randomized response techniques assuming an asymmetric observation error. In this paper we present a generalization of the case where a symmetric error is assumed, since this assumption could be unrealistic in practice. Under the assumption of an asymmetric error the likelihood function is built. By doing this we intend that in practice the final user has an alternative method to reduce the probability of false responses. Assuming informative a priori distributions, an expression for the posterior distribution is found. Since this posterior distribution does not have a closed mathematical expression, it is necessary to use the Gibbs sampler to carry out the estimation process. This technique is illustrated using real data about drug consumption that were collected by the Oficina de Bienestar of the Universidad Nacional de Colombia at Medellín.

  10. (How) do we learn from errors? A prospective study of the link between the ward's learning practices and medication administration errors.

    Science.gov (United States)

    Drach-Zahavy, A; Somech, A; Admi, H; Peterfreund, I; Peker, H; Priente, O

    2014-03-01

    Attention in the ward should shift from preventing medication administration errors to managing them. Nevertheless, little is known about the practices nursing wards apply to learn from medication administration errors as a means of limiting them. To test the effectiveness of four types of learning practices, namely, non-integrated, integrated, supervisory and patchy learning practices, in limiting medication administration errors. Data were collected from a convenience sample of 4 hospitals in Israel by multiple methods (observations and self-report questionnaires) at two time points. The sample included 76 wards (360 nurses). Medication administration error was defined as any deviation from prescribed medication processes and measured by a validated structured observation sheet. Wards' use of medication administration technologies, location of the medication station, and workload were observed; learning practices and demographics were measured by validated questionnaires. Results of the mixed linear model analysis indicated that the use of technology and quiet location of the medication cabinet were significantly associated with reduced medication administration errors (estimate = .03, p < .05), whereas workload was associated with increased errors (estimate = .04, p < .05). Of the learning practices, supervisory learning was the only practice significantly linked to reduced medication administration errors (estimate = -.04, p < .05); other learning practices were significantly linked to higher levels of medication administration errors (estimate = -.03, p < .05) or were not associated with them (p > .05). How wards manage errors might have implications for medication administration errors beyond the effects of typical individual, organizational and technology risk factors. Head nurses can facilitate learning from errors by "management by walking around" and monitoring nurses' medication administration behaviors. Copyright © 2013 Elsevier Ltd. All rights reserved.

  11. Accurate measurement of peripheral blood mononuclear cell concentration using image cytometry to eliminate RBC-induced counting error.

    Science.gov (United States)

    Chan, Leo Li-Ying; Laverty, Daniel J; Smith, Tim; Nejad, Parham; Hei, Hillary; Gandhi, Roopali; Kuksin, Dmitry; Qiu, Jean

    2013-02-28

    Peripheral blood mononuclear cells (PBMCs) have been widely researched in the fields of immunology, infectious disease, oncology, transplantation, hematological malignancy, and vaccine development. Specifically, in immunology research, PBMCs have been utilized to monitor concentration, viability, proliferation, and cytokine production from immune cells, which are critical for both clinical trials and biomedical research. The viability and concentration of isolated PBMCs are traditionally measured by manual counting with trypan blue (TB) using a hemacytometer. One of the common issues of PBMC isolation is red blood cell (RBC) contamination. The RBC contamination can be dependent on the donor sample and/or technical skill level of the operator. RBC contamination in a PBMC sample can introduce error to the measured concentration, which can pass down to future experimental assays performed on these cells. To resolve this issue, RBC lysing protocol can be used to eliminate potential error caused by RBC contamination. In the recent years, a rapid fluorescence-based image cytometry system has been utilized for bright-field and fluorescence imaging analysis of cellular characteristics (Nexcelom Bioscience LLC, Lawrence, MA). The Cellometer image cytometry system has demonstrated the capability of automated concentration and viability detection in disposable counting chambers of unpurified mouse splenocytes and PBMCs stained with acridine orange (AO) and propidium iodide (PI) under fluorescence detection. In this work, we demonstrate the ability of Cellometer image cytometry system to accurately measure PBMC concentration, despite RBC contamination, by comparison of five different total PBMC counting methods: (1) manual counting of trypan blue-stained PBMCs in hemacytometer, (2) manual counting of PBMCs in bright-field images, (3) manual counting of acetic acid lysing of RBCs with TB-stained PBMCs, (4) automated counting of acetic acid lysing of RBCs with PI-stained PBMCs

  12. Estimations of natural variability between satellite measurements of trace species concentrations

    Science.gov (United States)

    Sheese, P.; Walker, K. A.; Boone, C. D.; Degenstein, D. A.; Kolonjari, F.; Plummer, D. A.; von Clarmann, T.

    2017-12-01

    In order to validate satellite measurements of atmospheric states, it is necessary to understand the range of random and systematic errors inherent in the measurements. On occasions where the measurements do not agree within those errors, a common "go-to" explanation is that the unexplained difference can be chalked up to "natural variability". However, the expected natural variability is often left ambiguous and rarely quantified. This study will look to quantify the expected natural variability of both O3 and NO2 between two satellite instruments: ACE-FTS (Atmospheric Chemistry Experiment - Fourier Transform Spectrometer) and OSIRIS (Optical Spectrograph and Infrared Imaging System). By sampling the CMAM30 (30-year specified dynamics simulation of the Canadian Middle Atmosphere Model) climate chemistry model throughout the upper troposphere and stratosphere at times and geolocations of coincident ACE-FTS and OSIRIS measurements at varying coincidence criteria, height-dependent expected values of O3 and NO2 variability will be estimated and reported on. The results could also be used to better optimize the coincidence criteria used in satellite measurement validation studies.

  13. Adjoint-Based a Posteriori Error Estimation for Coupled Time-Dependent Systems

    KAUST Repository

    Asner, Liya; Tavener, Simon; Kay, David

    2012-01-01

    We consider time-dependent parabolic problems coupled across a common interface which we formulate using a Lagrange multiplier construction and solve by applying a monolithic solution technique. We derive an adjoint-based a posteriori error representation for a quantity of interest given by a linear functional of the solution. We establish the accuracy of our error representation formula through numerical experimentation and investigate the effect of error in the adjoint solution. Crucially, the error representation affords a distinction between temporal and spatial errors and can be used as a basis for a blockwise time-space refinement strategy. Numerical tests illustrate the efficacy of the refinement strategy by capturing the distinctive behavior of a localized traveling wave solution. The saddle point systems considered here are equivalent to those arising in the mortar finite element technique for parabolic problems. © 2012 Society for Industrial and Applied Mathematics.

  14. Regularization and error estimates for asymmetric backward nonhomogeneous heat equations in a ball

    Directory of Open Access Journals (Sweden)

    Le Minh Triet

    2016-09-01

    Full Text Available The backward heat problem (BHP) has been researched by many authors in the last five decades; it consists in recovering the initial distribution from the final temperature data. There are some articles [1,2,3] related to the axi-symmetric BHP in a disk, but studies in spherical coordinates are rare. Therefore, we wish to study a backward problem for the nonhomogeneous heat equation associated with asymmetric final data in a ball. In this article, we modify the quasi-boundary value method to construct a stable approximate solution for this problem. As a result, we obtain a regularized solution and sharp estimates for its error. At the end, a numerical experiment is provided to illustrate our method.

  15. Estimated effects of temperature on secondary organic aerosol concentrations.

    Science.gov (United States)

    Sheehan, P E; Bowman, F M

    2001-06-01

    The temperature-dependence of secondary organic aerosol (SOA) concentrations is explored using an absorptive-partitioning model under a variety of simplified atmospheric conditions. Experimentally determined partitioning parameters for high yield aromatics are used. Variation of vapor pressures with temperature is assumed to be the main source of temperature effects. Known semivolatile products are used to define a modeling range of vaporization enthalpy of 10-25 kcal/mol. The effect of diurnal temperature variations on model predictions for various assumed vaporization enthalpies, precursor emission rates, and primary organic concentrations is explored. Results show that temperature is likely to have a significant influence on SOA partitioning and resulting SOA concentrations. A 10 degrees C decrease in temperature is estimated to increase SOA yields by 20-150%, depending on the assumed vaporization enthalpy. In model simulations, high daytime temperatures tend to reduce SOA concentrations by 16-24%, while cooler nighttime temperatures lead to a 22-34% increase, compared to constant temperature conditions. Results suggest that currently available constant temperature partitioning coefficients do not adequately represent atmospheric SOA partitioning behavior. Air quality models neglecting the temperature dependence of partitioning are expected to underpredict peak SOA concentrations as well as mistime their occurrence.
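    The temperature adjustment implied above is usually written as a Clausius-Clapeyron scaling of the absorptive partitioning coefficient. The following worked example uses illustrative parameter values (not the study's) simply to show the size of the effect.

```python
# Scale an absorptive partitioning coefficient from a reference temperature to T.
# Reference coefficient and vaporization enthalpy are illustrative values.
import numpy as np

R_KCAL = 1.987e-3   # gas constant, kcal mol^-1 K^-1

def kp_at_T(kp_ref, T, T_ref=298.0, dHvap=15.0):
    """Partitioning coefficient at temperature T [K], given its value at T_ref."""
    return kp_ref * (T / T_ref) * np.exp((dHvap / R_KCAL) * (1.0 / T - 1.0 / T_ref))

# A 10 K cooling with dHvap = 15 kcal/mol raises Kp by roughly a factor of 2.3,
# consistent with the strong temperature sensitivity discussed above.
print(kp_at_T(0.02, 288.0) / 0.02)
```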

  16. Using marginal structural measurement-error models to estimate the long-term effect of antiretroviral therapy on incident AIDS or death.

    Science.gov (United States)

    Cole, Stephen R; Jacobson, Lisa P; Tien, Phyllis C; Kingsley, Lawrence; Chmiel, Joan S; Anastos, Kathryn

    2010-01-01

    To estimate the net effect of imperfectly measured highly active antiretroviral therapy on incident acquired immunodeficiency syndrome or death, the authors combined inverse probability-of-treatment-and-censoring weighted estimation of a marginal structural Cox model with regression-calibration methods. Between 1995 and 2007, 950 human immunodeficiency virus-positive men and women were followed in 2 US cohort studies. During 4,054 person-years, 374 initiated highly active antiretroviral therapy, 211 developed acquired immunodeficiency syndrome or died, and 173 dropped out. Accounting for measured confounders and determinants of dropout, the weighted hazard ratio for acquired immunodeficiency syndrome or death comparing use of highly active antiretroviral therapy in the prior 2 years with no therapy was 0.36 (95% confidence limits: 0.21, 0.61). This association was relatively constant over follow-up (P = 0.19) and stronger than crude or adjusted hazard ratios of 0.75 and 0.95, respectively. Accounting for measurement error in reported exposure using external validation data on 331 men and women provided a hazard ratio of 0.17, with bias shifted from the hazard ratio to the estimate of precision as seen by the 2.5-fold wider confidence limits (95% confidence limits: 0.06, 0.43). Marginal structural measurement-error models can simultaneously account for 3 major sources of bias in epidemiologic research: validated exposure measurement error, measured selection bias, and measured time-fixed and time-varying confounding.

  17. Nonlinear error dynamics for cycled data assimilation methods

    International Nuclear Information System (INIS)

    Moodey, Alexander J F; Lawless, Amos S; Potthast, Roland W E; Van Leeuwen, Peter Jan

    2013-01-01

    We investigate the error dynamics for cycled data assimilation systems, such that the inverse problem of state determination is solved at t_k, k = 1, 2, 3, …, with a first guess given by the state propagated via a dynamical system model M_k from time t_{k−1} to time t_k. In particular, for nonlinear dynamical systems M_k that are Lipschitz continuous with respect to their initial states, we provide deterministic estimates for the development of the error ‖e_k‖ := ‖x_k^(a) − x_k^(t)‖ between the estimated state x^(a) and the true state x^(t) over time. Clearly, observation error of size δ > 0 leads to an estimation error in every assimilation step. These errors can accumulate, if they are not (a) controlled in the reconstruction and (b) damped by the dynamical system M_k under consideration. A data assimilation method is called stable, if the error in the estimate is bounded in time by some constant C. The key task of this work is to provide estimates for the error ‖e_k‖, depending on the size δ of the observation error, the reconstruction operator R_α, the observation operator H and the Lipschitz constants K^(1) and K^(2) on the lower and higher modes of M_k controlling the damping behaviour of the dynamics. We show that systems can be stabilized by choosing α sufficiently small, but the bound C will then depend on the data error δ in the form c‖R_α‖δ with some constant c. Since ‖R_α‖ → ∞ for α → 0, the constant might be large. Numerical examples for this behaviour in the nonlinear case are provided using a (low-dimensional) Lorenz '63 system. (paper)

  18. Telemetry location error in a forested habitat

    Science.gov (United States)

    Chu, D.S.; Hoover, B.A.; Fuller, M.R.; Geissler, P.H.; Amlaner, Charles J.

    1989-01-01

    The error associated with locations estimated by radio-telemetry triangulation can be large and variable in a hardwood forest. We assessed the magnitude and cause of telemetry location errors in a mature hardwood forest by using a 4-element Yagi antenna and compass bearings toward four transmitters, from 21 receiving sites. The distance error from the azimuth intersection to known transmitter locations ranged from 0 to 9251 meters. Ninety-five percent of the estimated locations were within 16 to 1963 meters, and 50% were within 99 to 416 meters of actual locations. Angles within 20° of parallel had larger distance errors than other angles. While angle appeared most important, greater distances and the amount of vegetation between receivers and transmitters also contributed to distance error.
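    The triangulation behind these error figures reduces to intersecting two compass bearings taken from known receiver sites and measuring how far the intersection falls from the true transmitter. The sketch below uses made-up coordinates and bearings purely to illustrate that calculation.

```python
# Intersect two bearing lines (azimuths in degrees, clockwise from north) and
# report the distance from the intersection to the true transmitter location.
import numpy as np

def intersect_bearings(p1, az1, p2, az2):
    d1 = np.array([np.sin(np.radians(az1)), np.cos(np.radians(az1))])  # (east, north)
    d2 = np.array([np.sin(np.radians(az2)), np.cos(np.radians(az2))])
    # Solve p1 + t1*d1 = p2 + t2*d2 for t1, t2
    t = np.linalg.solve(np.column_stack([d1, -d2]),
                        np.asarray(p2, float) - np.asarray(p1, float))
    return np.asarray(p1, float) + t[0] * d1

true_tx = np.array([500.0, 800.0])                              # hypothetical transmitter [m]
estimate = intersect_bearings((0, 0), 32.5, (1000, 0), 329.0)   # bearings with small errors
print(np.linalg.norm(estimate - true_tx))                       # location error in meters
```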

  19. Fuel Burn Estimation Model

    Science.gov (United States)

    Chatterji, Gano

    2011-01-01

    Conclusions: Validated the fuel estimation procedure using flight test data. A good fuel model can be created if weight and fuel data are available. Error in assumed takeoff weight results in similar amount of error in the fuel estimate. Fuel estimation error bounds can be determined.

  20. Characterization of mixing errors in a coupled physical biogeochemical model of the North Atlantic: implications for nonlinear estimation using Gaussian anamorphosis

    Directory of Open Access Journals (Sweden)

    D. Béal

    2010-02-01

    Full Text Available In biogeochemical models coupled to ocean circulation models, vertical mixing is an important physical process which governs the nutrient supply and the plankton residence in the euphotic layer. However, vertical mixing is often poorly represented in numerical simulations because of approximate parameterizations of sub-grid scale turbulence, wind forcing errors and other mis-represented processes such as restratification by mesoscale eddies. Getting a sufficient knowledge of the nature and structure of these errors is necessary to implement appropriate data assimilation methods and to evaluate if they can be controlled by a given observation system.

    In this paper, Monte Carlo simulations are conducted to study mixing errors induced by approximate wind forcings in a three-dimensional coupled physical-biogeochemical model of the North Atlantic with a 1/4° horizontal resolution. An ensemble forecast involving 200 members is performed during the 1998 spring bloom, by prescribing perturbations of the wind forcing to generate mixing errors. The biogeochemical response is shown to be rather complex because of nonlinearities and threshold effects in the coupled model. The response of the surface phytoplankton depends on the region of interest and is particularly sensitive to the local stratification. In addition, the statistical relationships computed between the various physical and biogeochemical variables reflect the signature of the non-Gaussian behaviour of the system. It is shown that significant information on the ecosystem can be retrieved from observations of chlorophyll concentration or sea surface temperature if a simple nonlinear change of variables (anamorphosis) is performed by mapping separately and locally the ensemble percentiles of the distributions of each state variable on the Gaussian percentiles. The results of idealized observational updates (performed with perfect observations and neglecting horizontal correlations
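    The anamorphosis step described above can be illustrated with a rank-based transform: each ensemble member of a variable is mapped to the standard normal quantile of its empirical percentile. This is a generic sketch of the idea, not the study's implementation.

```python
# Map an ensemble of a (possibly skewed) state variable onto Gaussian percentiles.
import numpy as np
from scipy.stats import norm, rankdata

def anamorphosis(ensemble):
    """Rank-based Gaussian anamorphosis of a 1-D ensemble."""
    p = (rankdata(ensemble) - 0.5) / ensemble.size   # empirical percentiles in (0, 1)
    return norm.ppf(p)

members = np.random.lognormal(mean=0.0, sigma=1.0, size=200)  # e.g. skewed chlorophyll values
print(anamorphosis(members).std())   # close to 1: the transformed variable is ~Gaussian
```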

  1. Generalizing human error rates: A taxonomic approach

    International Nuclear Information System (INIS)

    Buffardi, L.; Fleishman, E.; Allen, J.

    1989-01-01

    It is well established that human error plays a major role in malfunctioning of complex, technological systems and in accidents associated with their operation. Estimates of the rate of human error in the nuclear industry range from 20-65% of all system failures. In response to this, the Nuclear Regulatory Commission has developed a variety of techniques for estimating human error probabilities for nuclear power plant personnel. Most of these techniques require the specification of the range of human error probabilities for various tasks. Unfortunately, very little objective performance data on error probabilities exist for nuclear environments. Thus, when human reliability estimates are required, for example in computer simulation modeling of system reliability, only subjective estimates (usually based on experts' best guesses) can be provided. The objective of the current research is to provide guidelines for the selection of human error probabilities based on actual performance data taken in other complex environments and applying them to nuclear settings. A key feature of this research is the application of a comprehensive taxonomic approach to nuclear and non-nuclear tasks to evaluate their similarities and differences, thus providing a basis for generalizing human error estimates across tasks. In recent years significant developments have occurred in classifying and describing tasks. Initial goals of the current research are to: (1) identify alternative taxonomic schemes that can be applied to tasks, and (2) describe nuclear tasks in terms of these schemes. Three standardized taxonomic schemes (Ability Requirements Approach, Generalized Information-Processing Approach, Task Characteristics Approach) are identified, modified, and evaluated for their suitability in comparing nuclear and non-nuclear power plant tasks. An agenda for future research and its relevance to nuclear power plant safety is also discussed

  2. On-line estimator/detector design for a plutonium nitrate concentrator unit

    International Nuclear Information System (INIS)

    Candy, J.V.; Rozsa, R.B.

    1979-04-01

    In this report we consider the design of a nonlinear estimator to be used in conjunction with on-line detectors for a plutonium nitrate concentrator. Using a complex state-of-the-art process model to simulate realistic data, we show that the estimator performance using a simplified process model is adequate over a wide range of operation. The estimator is used to simulate and characterize some on-line diversion detectors, i.e., detectors designed to indicate if some of the critical special nuclear material in process is stolen or diverted from the unit. Several different diversion scenarios are presented. Simulation results indicate that the estimators and detectors yielded reasonable performance for the scenarios investigated.

  3. Bootstrap-Based Improvements for Inference with Clustered Errors

    OpenAIRE

    Doug Miller; A. Colin Cameron; Jonah B. Gelbach

    2006-01-01

    Microeconometrics researchers have increasingly realized the essential need to account for any within-group dependence in estimating standard errors of regression parameter estimates. The typical preferred solution is to calculate cluster-robust or sandwich standard errors that permit quite general heteroskedasticity and within-cluster error correlation, but presume that the number of clusters is large. In applications with few (5-30) clusters, standard asymptotic tests can over-reject considerably.

  4. Uncertainty estimation and risk prediction in air quality

    International Nuclear Information System (INIS)

    Garaud, Damien

    2011-01-01

    This work is about uncertainty estimation and risk prediction in air quality. Firstly, we build a multi-model ensemble of air quality simulations which can take into account all uncertainty sources related to air quality modeling. Ensembles of photochemical simulations at continental and regional scales are automatically generated. Then, these ensembles are calibrated with a combinatorial optimization method. It selects a sub-ensemble which is representative of uncertainty or shows good resolution and reliability for probabilistic forecasting. This work shows that it is possible to estimate and forecast uncertainty fields related to ozone and nitrogen dioxide concentrations or to improve the reliability of threshold exceedance predictions. The approach is compared with Monte Carlo simulations, calibrated or not. The Monte Carlo approach appears to be less representative of the uncertainties than the multi-model approach. Finally, we quantify the observational error, the representativeness error and the modeling errors. The work is applied to the impact of thermal power plants, in order to quantify the uncertainty on the impact estimates. (author) [fr

  5. Measurement Error Affects Risk Estimates for Recruitment to the Hudson River Stock of Striped Bass

    Directory of Open Access Journals (Sweden)

    Dennis J. Dunning

    2002-01-01

    Full Text Available We examined the consequences of ignoring the distinction between measurement error and natural variability in an assessment of risk to the Hudson River stock of striped bass posed by entrainment at the Bowline Point, Indian Point, and Roseton power plants. Risk was defined as the probability that recruitment of age-1+ striped bass would decline by 80% or more, relative to the equilibrium value, at least once during the time periods examined (1, 5, 10, and 15 years). Measurement error, estimated using two abundance indices from independent beach seine surveys conducted on the Hudson River, accounted for 50% of the variability in one index and 56% of the variability in the other. If a measurement error of 50% was ignored and all of the variability in abundance was attributed to natural causes, the risk that recruitment of age-1+ striped bass would decline by 80% or more after 15 years was 0.308 at the current level of entrainment mortality (11%). However, the risk decreased almost tenfold (to 0.032) if a measurement error of 50% was considered. The change in risk attributable to decreasing the entrainment mortality rate from 11 to 0% was very small (0.009) and similar in magnitude to the change in risk associated with an action proposed in Amendment #5 to the Interstate Fishery Management Plan for Atlantic striped bass (0.006): an increase in the instantaneous fishing mortality rate from 0.33 to 0.4. The proposed increase in fishing mortality was not considered an adverse environmental impact, which suggests that potentially costly efforts to reduce entrainment mortality on the Hudson River stock of striped bass are not warranted.

  6. The Huber’s Method-based Gas Concentration Reconstruction in Multicomponent Gas Mixtures from Multispectral Laser Measurements under Noise Overshoot Conditions

    Directory of Open Access Journals (Sweden)

    V. A. Gorodnichev

    2016-01-01

    Full Text Available Laser gas analysers are among the most promising instruments for rapid quantitative analysis of gaseous air pollution. A difficulty of laser gas analysis is that reconstruction of the concentrations of gas mixture components is unstable under the real noise present in the recorded laser signal, which necessitates special processing algorithms. When reconstructing the quantitative composition of multi-component gas mixtures from multispectral laser measurements, methods such as Tikhonov regularization, quasi-solution search, and Bayesian estimation are used efficiently. These methods make it possible to determine the quantitative composition of gas mixtures from single measurement results under measurement noise. In remote sensing of stationary gas formations, or in laboratory analysis of previously collected air samples (when the gas mixture is stationary), the reconstruction of gas concentrations in multicomponent mixtures under measurement noise can be much simpler. The paper considers the problem of multispectral laser analysis of stationary gas mixtures for which it is possible to conduct a series of measurements. With noise overshoots in the recorded laser signal (and, consequently, overshoots in the gas concentrations determined from a single measurement), stable (robust) estimation techniques must be used to substantially reduce the impact of the overshoots on the estimates of the required parameters. The paper proposes the Huber method to determine gas concentrations in multicomponent mixtures under signal overshoots. Mathematical modelling was conducted to estimate the value of the Huber parameter and the efficiency of Huber's method in finding stable estimates of gas concentrations in multicomponent stationary mixtures from laser measurements. The mathematical modelling results show that despite the considerable difference among the errors of the mixture gas components themselves a character of
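    As an illustration of the robust-fitting idea, the sketch below fits a linear multi-gas absorption model with a Huber loss so that a single overshoot barely perturbs the recovered concentrations. The absorption matrix, signal values and the Huber scale are synthetic placeholders rather than the authors' data.

```python
# Robust recovery of gas concentrations c from a linear absorption model A @ c = y
# using a Huber loss; one measurement contains a large overshoot.
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(0)
A = rng.uniform(0.1, 1.0, size=(12, 3))      # absorption coefficients: 12 laser lines, 3 gases
c_true = np.array([2.0, 0.5, 1.0])           # true concentrations (arbitrary units)
y = A @ c_true + rng.normal(0.0, 0.01, 12)
y[4] += 1.5                                  # a single noise overshoot

fit = least_squares(lambda c: A @ c - y, x0=np.ones(3),
                    loss='huber', f_scale=0.05)   # f_scale plays the role of the Huber parameter
print(fit.x)                                 # close to c_true despite the overshoot
```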

  7. A multi-band semi-analytical algorithm for estimating chlorophyll-a concentration in the Yellow River Estuary, China.

    Science.gov (United States)

    Chen, Jun; Quan, Wenting; Cui, Tingwei

    2015-01-01

    In this study, two sample semi-analytical algorithms and one new unified multi-band semi-analytical algorithm (UMSA) for estimating chlorophyll-a (Chla) concentration were constructed by specifying optimal wavelengths. The three algorithms, namely the three-band semi-analytical algorithm (TSA), the four-band semi-analytical algorithm (FSA), and the UMSA algorithm, were calibrated and validated with the dataset collected in the Yellow River Estuary between September 1 and 10, 2009. A comparison of the accuracies of the TSA, FSA, and UMSA algorithms showed that the UMSA algorithm outperformed the other two. Using the UMSA algorithm to retrieve Chla concentration in the Yellow River Estuary reduced the NRMSE (normalized root mean square error) by 25.54% compared with the FSA algorithm and by 29.66% compared with the TSA algorithm. These are significant improvements upon previous methods. Additionally, the study revealed that the TSA and FSA algorithms are merely more specific forms of the UMSA algorithm. Owing to the special form of the UMSA algorithm, if the same bands were used for both the TSA and UMSA algorithms or the FSA and UMSA algorithms, the UMSA algorithm would theoretically produce superior results in comparison with the TSA and FSA algorithms. Thus, good results may also be produced if the UMSA algorithm were applied to predict Chla concentration for the datasets of Gitelson et al. (2008) and Le et al. (2009).
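    For reference, the classical three-band (TSA) form that the unified algorithm generalizes relates Chla to [Rrs(λ1)^-1 − Rrs(λ2)^-1]·Rrs(λ3). The wavelengths and calibration coefficients in the sketch below are hypothetical placeholders, not the values calibrated in this study.

```python
# Illustrative three-band semi-analytical Chla estimate; coefficients a, b and the
# reflectance values are hypothetical and would need regional calibration.
def chla_three_band(rrs_660, rrs_700, rrs_740, a=117.4, b=23.2):
    """Chlorophyll-a (mg m^-3) from remote-sensing reflectances at three bands."""
    index = (1.0 / rrs_660 - 1.0 / rrs_700) * rrs_740
    return a * index + b

print(chla_three_band(0.004, 0.006, 0.005))
```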

  8. Varying coefficients model with measurement error.

    Science.gov (United States)

    Li, Liang; Greene, Tom

    2008-06-01

    We propose a semiparametric partially varying coefficient model to study the relationship between serum creatinine concentration and the glomerular filtration rate (GFR) among kidney donors and patients with chronic kidney disease. A regression model is used to relate serum creatinine to GFR and demographic factors in which coefficient of GFR is expressed as a function of age to allow its effect to be age dependent. GFR measurements obtained from the clearance of a radioactively labeled isotope are assumed to be a surrogate for the true GFR, with the relationship between measured and true GFR expressed using an additive error model. We use locally corrected score equations to estimate parameters and coefficient functions, and propose an expected generalized cross-validation (EGCV) method to select the kernel bandwidth. The performance of the proposed methods, which avoid distributional assumptions on the true GFR and residuals, is investigated by simulation. Accounting for measurement error using the proposed model reduced apparent inconsistencies in the relationship between serum creatinine and GFR among different clinical data sets derived from kidney donor and chronic kidney disease source populations.

  9. Asteroid orbital error analysis: Theory and application

    Science.gov (United States)

    Muinonen, K.; Bowell, Edward

    1992-01-01

    We present a rigorous Bayesian theory for asteroid orbital error estimation in which the probability density of the orbital elements is derived from the noise statistics of the observations. For Gaussian noise in a linearized approximation the probability density is also Gaussian, and the errors of the orbital elements at a given epoch are fully described by the covariance matrix. The law of error propagation can then be applied to calculate past and future positional uncertainty ellipsoids (Cappellari et al. 1976, Yeomans et al. 1987, Whipple et al. 1991). To our knowledge, this is the first time a Bayesian approach has been formulated for orbital element estimation. In contrast to the classical Fisherian school of statistics, the Bayesian school allows a priori information to be formally present in the final estimation. However, Bayesian estimation does give the same results as Fisherian estimation when no a priori information is assumed (Lehtinen 1988, and references therein).
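    The error-propagation step mentioned above is, in the linearized Gaussian case, a single matrix product: the covariance of the orbital elements is pushed through the Jacobian of the position prediction. The numbers below are placeholders chosen only to show the mechanics.

```python
# Linearized propagation of an element covariance to a positional uncertainty ellipsoid.
import numpy as np

cov_elements = np.diag([1e-8, 1e-10, 1e-9])     # toy covariance of three orbital elements
J = np.array([[1.0e4, 2.0e2, 0.0],              # toy Jacobian d(position)/d(elements)
              [0.0,   5.0e3, 1.0e3],
              [3.0e2, 0.0,   4.0e3]])

cov_position = J @ cov_elements @ J.T           # law of error propagation
print(np.sqrt(np.diag(cov_position)))           # 1-sigma positional uncertainties
```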

  10. Measurement error models with uncertainty about the error variance

    NARCIS (Netherlands)

    Oberski, D.L.; Satorra, A.

    2013-01-01

    It is well known that measurement error in observable variables induces bias in estimates in standard regression analysis and that structural equation models are a typical solution to this problem. Often, multiple indicator equations are subsumed as part of the structural equation model, allowing

  11. Methodology for estimating sodium aerosol concentrations during breeder reactor fires

    International Nuclear Information System (INIS)

    Fields, D.E.; Miller, C.W.

    1985-01-01

    We have devised and applied a methodology for estimating the concentration of aerosols released at building surfaces and monitored at other building surface points. We have used this methodology to make calculations that suggest, for one air-cooled breeder reactor design, cooling will not be compromised by severe liquid-metal fires

  12. Studying the errors in the estimation of the variation of energy by the "patched-conics" model in the three-dimensional swing-by

    Science.gov (United States)

    Negri, Rodolfo Batista; Prado, Antonio Fernando Bertachini de Almeida; Sukhanov, Alexander

    2017-11-01

    The swing-by maneuver is a technique used to change the energy of a spacecraft by using a close approach to a celestial body. This procedure was used many times in real missions. Usually, the first approach to design this type of mission is based on the "patched-conics" model, which splits the maneuver into three "two-body dynamics." This approach causes an error in the estimation of the energy variations, which depends on the geometry of the maneuver and the system of primaries considered. Therefore, the goal of the present paper is to study the errors caused by this approximation. The comparison of the results is made with the trajectories obtained using the more realistic restricted three-body problem, assumed here to be the "real values" for the maneuver. The results shown here describe the effects of each parameter involved in the swing-by. Some examples using bodies in the solar system are used in this part of the paper. The study is then generalized to cover different mass parameters, and its influence is analyzed to give an idea of the amount of the error expected for a given system of primaries. The results presented here may help in estimating errors in the preliminary mission analysis using the "patched-conics" approach.

  13. Estimation of chromium (VI) in various body parts of local chicken

    International Nuclear Information System (INIS)

    Mahmud, T.; Rehman, R.; Anwar, J.; Abbas, A.; Farooq, M.

    2011-01-01

    Chicken is a common meat source in our food. Chickens are often fed feed containing small pieces of leather bearing Cr (VI) that persists from the chrome tanning process. The core purpose of the present study was to determine the concentration of Cr (VI) in different body parts of chicken such as leg, arm, head, heart, liver and bone. Cr (VI) was estimated by atomic absorption spectrophotometry on sample solutions prepared by ashing and digestion with nitric acid. The results showed that leg meat had the highest mean concentration (1.266 mg/kg, standard error 0.037 mg/kg), while the lowest average concentration was found in the arm (0.233 mg/kg, standard error 0.019 mg/kg). Among bones, the maximum mean concentration was found in the head (1.433 mg/kg, standard error 0.670 mg/kg). A Kruskal-Wallis test showed that Cr (VI) concentrations were not similar in the meat and bones of the chicken. (author)

  14. Error Evaluation in a Stereovision-Based 3D Reconstruction System

    Directory of Open Access Journals (Sweden)

    Kohler Sophie

    2010-01-01

    Full Text Available The work presented in this paper deals with the performance analysis of the whole 3D reconstruction process of imaged objects, specifically of the set of geometric primitives describing their outline, extracted from a pair of images with known camera models. The proposed analysis focuses on error estimation for the edge detection process, the starting step for the whole reconstruction procedure. The fitting parameters describing the geometric features composing the workpiece to be evaluated are used as quality measures to determine error bounds and finally to estimate the edge detection errors. These error estimates are then propagated up to the final 3D reconstruction step. The suggested error analysis procedure for stereovision-based reconstruction tasks also allows the quality of the 3D reconstruction to be evaluated. The resulting final error estimates make it possible to state whether the reconstruction results fulfill a priori defined criteria, for example dimensional constraints including tolerance information, in vision-based quality control applications.

  15. Study of dosimetry errors in the framework of a concerted international study about the risk of cancer in nuclear industry workers. Study of the errors made on dose estimations of 100 to 3000 keV photons

    International Nuclear Information System (INIS)

    Thierry Chef, I.

    2000-01-01

    Ionizing radiation is an uncontested cancer risk factor, and radioprotection standards are defined on the basis of epidemiological studies of persons exposed to high doses of radiation (atomic bombs and therapeutic medical exposures). An epidemiological study of cancer risk has been carried out on nuclear industry workers from 17 countries in order to check these standards and to directly evaluate the risk linked with long-duration exposures to low doses. The techniques used to measure the workers' doses have changed with time, and these changes have differed among the countries considered. The study of dosimetry errors aims at assessing the comparability of doses across time periods and countries, and at quantifying the errors that could have disturbed the dose measurements during the early years, so that they can be taken into account in the risk estimation. A compilation of the information available about dosimetry in the participating countries has been performed and the main sources of errors have been identified. Experiments have been carried out to test the response of the dosimeters used and to evaluate the conditions of exposure inside the companies. The biases and uncertainties have been estimated per company and per period of time, and the largest correspond to the oldest measurements performed. This study also contributes to improving knowledge of the working conditions and of the precision of dose estimates. (J.S.)

  16. Active/passive microwave sensor comparison of MIZ-ice concentration estimates. [Marginal Ice Zone (MIZ)

    Science.gov (United States)

    Burns, B. A.; Cavalieri, D. J.; Keller, M. R.

    1986-01-01

    Active and passive microwave data collected during the 1984 summer Marginal Ice Zone Experiment in the Fram Strait (MIZEX 84) are used to compare ice concentration estimates derived from synthetic aperture radar (SAR) data to those obtained from passive microwave imagery at several frequencies. The comparison is carried out to evaluate SAR performance against the more established passive microwave technique, and to investigate discrepancies in terms of how ice surface conditions, imaging geometry, and choice of algorithm parameters affect each sensor. Active and passive estimates of ice concentration agree on average to within 12%. Estimates from the multichannel passive microwave data show best agreement with the SAR estimates because the multichannel algorithm effectively accounts for the range in ice floe brightness temperatures observed in the MIZ.

  17. Genetic Algorithm Tuning of PID Controller in Smith Predictor for Glucose Concentration Control

    OpenAIRE

    Tsonyo Slavov; Olympia Roeva

    2011-01-01

    This paper focuses on the design of a glucose concentration control system based on a nonlinear plant model of an E. coli MC4110 fed-batch cultivation process. Due to the significant time delay in real-time glucose concentration measurement, a correction of the glucose concentration measurement is proposed and a Smith predictor (SP) control structure based on a universal PID controller is designed. To reduce the influence of model error in the SP structure, the estimate of the measured glucose concentration is used. For...

  18. Long-Term Precipitation Analysis and Estimation of Precipitation Concentration Index Using Three Support Vector Machine Methods

    Directory of Open Access Journals (Sweden)

    Milan Gocic

    2016-01-01

    Full Text Available The monthly precipitation data from 29 stations in Serbia during the period of 1946–2012 were considered. Precipitation trends were calculated using linear regression method. Three CLINO periods (1961–1990, 1971–2000, and 1981–2010 in three subregions were analysed. The CLINO 1981–2010 period had a significant increasing trend. Spatial pattern of the precipitation concentration index (PCI was presented. For the purpose of PCI prediction, three Support Vector Machine (SVM models, namely, SVM coupled with the discrete wavelet transform (SVM-Wavelet, the firefly algorithm (SVM-FFA, and using the radial basis function (SVM-RBF, were developed and used. The estimation and prediction results of these models were compared with each other using three statistical indicators, that is, root mean square error, coefficient of determination, and coefficient of efficiency. The experimental results showed that an improvement in predictive accuracy and capability of generalization can be achieved by the SVM-Wavelet approach. Moreover, the results indicated the proposed SVM-Wavelet model can adequately predict the PCI.
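    The quantity being predicted above, the precipitation concentration index, is commonly defined as PCI = 100·Σp_i²/(Σp_i)² over the twelve monthly totals of a year. The worked example below uses made-up monthly values only to show how the index separates even from strongly seasonal regimes.

```python
# Precipitation concentration index (PCI) from twelve monthly totals.
import numpy as np

def pci(monthly_precip):
    p = np.asarray(monthly_precip, dtype=float)
    return 100.0 * np.sum(p ** 2) / np.sum(p) ** 2

uniform  = [50.0] * 12                                       # perfectly even regime -> ~8.3
seasonal = [5, 5, 10, 30, 80, 120, 150, 110, 60, 20, 6, 4]   # concentrated regime -> ~16.8
print(pci(uniform), pci(seasonal))
```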

  19. Prospective detection of large prediction errors: a hypothesis testing approach

    International Nuclear Information System (INIS)

    Ruan, Dan

    2010-01-01

    Real-time motion management is important in radiotherapy. In addition to effective monitoring schemes, prediction is required to compensate for system latency, so that treatment can be synchronized with tumor motion. However, it is difficult to predict tumor motion at all times, and it is critical to determine when large prediction errors may occur. Such information can be used to pause the treatment beam or adjust monitoring/prediction schemes. In this study, we propose a hypothesis testing approach for detecting instants corresponding to potentially large prediction errors in real time. We treat the future tumor location as a random variable, and obtain its empirical probability distribution with the kernel density estimation-based method. Under the null hypothesis, the model probability is assumed to be a concentrated Gaussian centered at the prediction output. Under the alternative hypothesis, the model distribution is assumed to be non-informative uniform, which reflects the situation that the future position cannot be inferred reliably. We derive the likelihood ratio test (LRT) for this hypothesis testing problem and show that with the method of moments for estimating the null hypothesis Gaussian parameters, the LRT reduces to a simple test on the empirical variance of the predictive random variable. This conforms to the intuition to expect a (potentially) large prediction error when the estimate is associated with high uncertainty, and to expect an accurate prediction when the uncertainty level is low. We tested the proposed method on patient-derived respiratory traces. The 'ground-truth' prediction error was evaluated by comparing the prediction values with retrospective observations, and the large prediction regions were subsequently delineated by thresholding the prediction errors. The receiver operating characteristic curve was used to describe the performance of the proposed hypothesis testing method. Clinical implication was represented by miss
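    The decision rule that the testing problem reduces to can be sketched very compactly: estimate the predictive distribution from samples (e.g., via kernel density estimation) and flag the time point when the empirical variance exceeds a threshold. The samples and threshold below are illustrative values, not patient data.

```python
# Flag potentially large prediction errors when the predictive variance is high.
import numpy as np

def flag_unreliable(predictive_samples, var_threshold):
    """True when the empirical variance of the predictive samples exceeds the threshold."""
    return np.var(predictive_samples, ddof=1) > var_threshold

rng = np.random.default_rng(1)
confident = rng.normal(loc=12.0, scale=0.3, size=50)   # tightly concentrated prediction
uncertain = rng.normal(loc=12.0, scale=2.5, size=50)   # diffuse prediction
print(flag_unreliable(confident, 1.0))   # False -> prediction can be trusted
print(flag_unreliable(uncertain, 1.0))   # True  -> potential large prediction error
```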

  20. Comparison between calorimeter and HLNC errors

    International Nuclear Information System (INIS)

    Goldman, A.S.; De Ridder, P.; Laszlo, G.

    1991-01-01

    This paper summarizes an error analysis that compares systematic and random errors of total plutonium mass estimated for high-level neutron coincidence counter (HLNC) and calorimeter measurements. This task was part of an International Atomic Energy Agency (IAEA) study on the comparison of the two instruments to determine if HLNC measurement errors met IAEA standards and if the calorimeter gave "significantly" better precision. Our analysis was based on propagation of error models that contained all known sources of errors including uncertainties associated with plutonium isotopic measurements. 5 refs., 2 tabs