WorldWideScience

Sample records for model structure error

  1. Structure and asymptotic theory for nonlinear models with GARCH errors

    Directory of Open Access Journals (Sweden)

    Felix Chan

    2015-01-01

    Full Text Available Nonlinear time series models, especially those with regime-switching and/or conditionally heteroskedastic errors, have become increasingly popular in the economics and finance literature. However, much of the research has concentrated on the empirical applications of various models, with little theoretical or statistical analysis associated with the structure of the processes or the associated asymptotic theory. In this paper, we derive sufficient conditions for strict stationarity and ergodicity of three different specifications of the first-order smooth transition autoregressions with heteroskedastic errors. This is essential, among other reasons, to establish the conditions under which the traditional LM linearity tests based on Taylor expansions are valid. We also provide sufficient conditions for consistency and asymptotic normality of the Quasi-Maximum Likelihood Estimator for a general nonlinear conditional mean model with first-order GARCH errors.
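
    As a rough illustration of the class of processes this paper analyzes (not the authors' own code), the sketch below simulates a first-order logistic smooth transition autoregression whose errors follow a GARCH(1,1) process; all parameter values are arbitrary choices for the example.

```python
import numpy as np

# Illustrative simulation of a first-order smooth transition AR model
# with GARCH(1,1) errors. Parameter values are invented, not from the paper.
rng = np.random.default_rng(0)
T = 2000
phi1, phi2 = 0.3, -0.5               # AR coefficients in the two regimes
gamma, c = 5.0, 0.0                  # transition slope and location
omega, alpha, beta = 0.1, 0.1, 0.8   # GARCH(1,1) parameters

y = np.zeros(T)
h = np.full(T, omega / (1 - alpha - beta))   # conditional variance
eps = np.zeros(T)
for t in range(1, T):
    G = 1.0 / (1.0 + np.exp(-gamma * (y[t - 1] - c)))       # logistic transition
    h[t] = omega + alpha * eps[t - 1] ** 2 + beta * h[t - 1]
    eps[t] = np.sqrt(h[t]) * rng.standard_normal()
    y[t] = (phi1 + (phi2 - phi1) * G) * y[t - 1] + eps[t]

print("sample mean and variance:", y.mean(), y.var())
```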

  2. Quantifying model structural error: Efficient Bayesian calibration of a regional groundwater flow model using surrogates and a data-driven error model

    Science.gov (United States)

    Xu, Tianfang; Valocchi, Albert J.; Ye, Ming; Liang, Feng

    2017-05-01

    Groundwater model structural error is ubiquitous, due to simplification and/or misrepresentation of real aquifer systems. During model calibration, the basic hydrogeological parameters may be adjusted to compensate for structural error. This may result in biased predictions when such calibrated models are used to forecast aquifer responses to new forcing. We investigate the impact of model structural error on calibration and prediction of a real-world groundwater flow model, using a Bayesian method with a data-driven error model to explicitly account for model structural error. The error-explicit Bayesian method jointly infers model parameters and structural error and thereby reduces parameter compensation. In this study, Bayesian inference is facilitated using high performance computing and fast surrogate models (based on machine learning techniques) as a substitute for the computationally expensive groundwater model. We demonstrate that with explicit treatment of model structural error, the Bayesian method yields parameter posterior distributions that are substantially different from those derived using classical Bayesian calibration that does not account for model structural error. We also found that the error-explicit Bayesian method gives significantly more accurate prediction along with reasonable credible intervals. Finally, through variance decomposition, we provide a comprehensive assessment of prediction uncertainty contributed from parameter, model structure, and measurement uncertainty. The results suggest that the error-explicit Bayesian approach provides a solution to real-world modeling applications for which data support the presence of model structural error, yet model deficiency cannot be specifically identified or corrected.
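
    The surrogate idea can be illustrated in a few lines: replace an expensive simulator with a cheap approximation trained on a handful of model runs, then use the approximation inside the calibration loop. The sketch below uses a radial-basis-function interpolant as a stand-in surrogate; the `expensive_model` function and all numbers are hypothetical, and the paper's actual surrogates are machine-learning models trained on a real groundwater simulator.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

# Hypothetical "expensive" simulator: maps a 2-D parameter vector to one output.
def expensive_model(theta):
    k, s = theta
    return np.exp(-k) * np.sin(3 * s) + 0.5 * k * s

rng = np.random.default_rng(1)
train_theta = rng.uniform(0.0, 1.0, size=(50, 2))            # design points
train_out = np.array([expensive_model(t) for t in train_theta])

surrogate = RBFInterpolator(train_theta, train_out)          # cheap emulator

test_theta = rng.uniform(0.0, 1.0, size=(5, 2))
exact = np.array([expensive_model(t) for t in test_theta])
approx = surrogate(test_theta)
print("surrogate error at test points:", np.abs(exact - approx))
```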

  3. Structure and Asymptotic theory for Nonlinear Models with GARCH Errors

    NARCIS (Netherlands)

    F. Chan (Felix); M.J. McAleer (Michael); M.C. Medeiros (Marcelo)

    2011-01-01

    Nonlinear time series models, especially those with regime-switching and conditionally heteroskedastic errors, have become increasingly popular in the economics and finance literature. However, much of the research has concentrated on the empirical applications of various models, with li

  4. Development and estimation of a semi-compensatory model with flexible error structure

    DEFF Research Database (Denmark)

    Kaplan, Sigal; Shiftan, Yoram; Bekhor, Shlomo

    ..., a disadvantage of current semi-compensatory models versus compensatory models is their behaviorally non-realistic assumption of an independent error structure. This study proposes a novel semi-compensatory model incorporating a flexible error structure. Specifically, the model represents a sequence... -response model and the utility-based choice by alternatively (i) a nested-logit model and (ii) an error-component logit. In order to test the suggested methodology, the model was estimated for a sample of 1,893 ranked choices and respective threshold values from 631 students who participated in a web-based two...

  5. A Fully Bayesian Approach to Improved Calibration and Prediction of Groundwater Models With Structure Error

    Science.gov (United States)

    Xu, T.; Valocchi, A. J.

    2014-12-01

    Effective water resource management typically relies on numerical models to analyse groundwater flow and solute transport processes. These models are usually subject to model structure error due to simplification and/or misrepresentation of the real system. As a result, the model outputs may systematically deviate from measurements, thus violating a key assumption for traditional regression-based calibration and uncertainty analysis. On the other hand, model structure error induced bias can be described statistically in an inductive, data-driven way based on historical model-to-measurement misfit. We adopt a fully Bayesian approach that integrates a Gaussian process error model, which accounts for model structure error, into the calibration, prediction and uncertainty analysis of groundwater models. The posterior distributions of parameters of the groundwater model and the Gaussian process error model are jointly inferred using DREAM, an efficient Markov chain Monte Carlo sampler. We test the usefulness of the fully Bayesian approach on a synthetic case study of surface water-groundwater interaction under changing pumping conditions. We first illustrate through this example that traditional least squares regression without accounting for model structure error yields biased parameter estimates due to parameter compensation as well as biased predictions. In contrast, the Bayesian approach gives less biased parameter estimates. Moreover, the integration of a Gaussian process error model significantly reduces predictive bias and leads to prediction intervals that are more consistent with observations. The results highlight the importance of explicit treatment of model structure error especially in circumstances where subsequent decision-making and risk analysis require accurate prediction and uncertainty quantification. In addition, the data-driven error modelling approach is capable of extracting more information from observation data than using a groundwater model alone.
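
    The central idea, model output plus a correlated bias term plus white noise, can be written down compactly. The sketch below evaluates the log-likelihood of observations under a Gaussian-process error model for a toy one-parameter simulator; it is a minimal illustration of the statistical structure, not the authors' DREAM-based implementation, and every function and number is invented for the example.

```python
import numpy as np
from scipy.linalg import cho_factor, cho_solve

# Toy simulator and synthetic observations (all values invented).
def simulator(theta, x):
    return theta * np.sqrt(x)

x = np.linspace(1.0, 10.0, 20)
rng = np.random.default_rng(2)
obs = simulator(2.0, x) + 0.5 * np.sin(x) + 0.1 * rng.standard_normal(x.size)

def log_likelihood(theta, sigma_b=0.5, ell=2.0, sigma_e=0.1):
    """Observations = simulator + GP bias (squared-exponential) + white noise."""
    r = obs - simulator(theta, x)
    K = sigma_b**2 * np.exp(-0.5 * (x[:, None] - x[None, :])**2 / ell**2)
    C = K + sigma_e**2 * np.eye(x.size)
    cf = cho_factor(C)
    logdet = 2.0 * np.sum(np.log(np.diag(cf[0])))
    return -0.5 * (r @ cho_solve(cf, r) + logdet + x.size * np.log(2 * np.pi))

thetas = np.linspace(1.5, 2.5, 11)
best = thetas[np.argmax([log_likelihood(t) for t in thetas])]
print("best theta on grid:", best)
```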

  6. Model structural uncertainty quantification and hydrologic parameter and prediction error analysis using airborne electromagnetic data

    DEFF Research Database (Denmark)

    Minsley, B. J.; Christensen, Nikolaj Kruse; Christensen, Steen

    Model structure, or the spatial arrangement of subsurface lithological units, is fundamental to the hydrological behavior of Earth systems. Knowledge of geological model structure is critically important in order to make informed hydrological predictions and management decisions. Model structure is never perfectly known, however, and incorrect assumptions can be a significant source of error when making model predictions. We describe a systematic approach for quantifying model structural uncertainty that is based on the integration of sparse borehole observations and large-scale airborne... indicator simulation, we produce many realizations of model structure that are consistent with observed datasets and prior knowledge. Given estimates of model structural uncertainty, we incorporate hydrologic observations to evaluate the hydrologic parameter or prediction errors that occur when...

  7. Identifiability of Gaussian Structural Equation Models with Same Error Variances

    CERN Document Server

    Peters, Jonas

    2012-01-01

    We consider structural equation models (SEMs) in which variables can be written as a function of their parents and noise terms (the latter are assumed to be jointly independent). Corresponding to each SEM, there is a directed acyclic graph (DAG) G_0 describing the relationships between the variables. In Gaussian SEMs with linear functions, the graph can be identified from the joint distribution only up to Markov equivalence classes (assuming faithfulness). It has been shown, however, that this constitutes an exceptional case. In the case of linear functions and non-Gaussian noise, the DAG becomes identifiable. Apart from few exceptions the same is true for non-linear functions and arbitrarily distributed additive noise. In this work, we prove identifiability for a third modification: if we require all noise variables to have the same variances, again, the DAG can be recovered from the joint Gaussian distribution. Our result can be applied to the problem of causal inference. If the data follow a Gaussian SEM w...

  8. An automatic 3D CAD model errors detection method of aircraft structural part for NC machining

    Directory of Open Access Journals (Sweden)

    Bo Huang

    2015-10-01

    Full Text Available Feature-based NC machining, which requires a high-quality 3D CAD model, is widely used in machining aircraft structural parts. However, there has been little research on how to automatically detect CAD model errors. As a result, the user has to manually check for errors with great effort before NC programming. This paper proposes an automatic CAD model error detection approach for aircraft structural parts. First, the base faces are identified based on the reference directions corresponding to machining coordinate systems. Then, the CAD models are partitioned into multiple local regions based on the base faces. Finally, the CAD model error types are evaluated based on heuristic rules. A prototype system based on CATIA has been developed to verify the effectiveness of the proposed approach.

  9. Low Frequency Predictive Skill Despite Structural Instability and Model Error

    Science.gov (United States)

    2014-09-30

    ...issue in contemporary applied mathematics is the development of simpler dynamical models for a reduced subset of variables in complex high... In this article I developed a new practical framework for creating a stochastically parameterized reduced model for slow variables of complex... suitable coarse-grained variables is a necessary but not sufficient condition for this predictive skill, and 4 elementary examples are given here...

  10. Correcting for Measurement Error in Time-Varying Covariates in Marginal Structural Models.

    Science.gov (United States)

    Kyle, Ryan P; Moodie, Erica E M; Klein, Marina B; Abrahamowicz, Michał

    2016-08-01

    Unbiased estimation of causal parameters from marginal structural models (MSMs) requires a fundamental assumption of no unmeasured confounding. Unfortunately, the time-varying covariates used to obtain inverse probability weights are often error-prone. Although substantial measurement error in important confounders is known to undermine control of confounders in conventional unweighted regression models, this issue has received comparatively limited attention in the MSM literature. Here we propose a novel application of the simulation-extrapolation (SIMEX) procedure to address measurement error in time-varying covariates, and we compare 2 approaches. The direct approach to SIMEX-based correction targets outcome model parameters, while the indirect approach corrects the weights estimated using the exposure model. We assess the performance of the proposed methods in simulations under different clinically plausible assumptions. The simulations demonstrate that measurement errors in time-dependent covariates may induce substantial bias in MSM estimators of causal effects of time-varying exposures, and that both proposed SIMEX approaches yield practically unbiased estimates in scenarios featuring low-to-moderate degrees of error. We illustrate the proposed approach in a simple analysis of the relationship between sustained virological response and liver fibrosis progression among persons infected with hepatitis C virus, while accounting for measurement error in γ-glutamyltransferase, using data collected in the Canadian Co-infection Cohort Study from 2003 to 2014.
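
    SIMEX itself is easy to sketch: add successively more simulated measurement error to the error-prone covariate, track how the estimated coefficient changes, and extrapolate back to the no-error case. The toy linear-regression version below conveys the idea only; the paper applies the procedure to time-varying covariates inside marginal structural models, which is considerably more involved, and all numbers here are invented.

```python
import numpy as np

rng = np.random.default_rng(3)
n, beta_true, sigma_u = 5000, 1.0, 0.5

x = rng.standard_normal(n)                        # true covariate
y = beta_true * x + 0.2 * rng.standard_normal(n)  # outcome
w = x + sigma_u * rng.standard_normal(n)          # error-prone measurement

def slope(x_, y_):
    c = np.cov(x_, y_)
    return c[0, 1] / c[0, 0]

lambdas = np.array([0.0, 0.5, 1.0, 1.5, 2.0])
estimates = []
for lam in lambdas:
    # average over remeasurements with extra noise of variance lam * sigma_u^2
    est = np.mean([slope(w + np.sqrt(lam) * sigma_u * rng.standard_normal(n), y)
                   for _ in range(50)])
    estimates.append(est)

# quadratic extrapolation back to lambda = -1 (the "no measurement error" point)
coef = np.polyfit(lambdas, estimates, 2)
print("naive estimate:", estimates[0])
print("SIMEX estimate:", np.polyval(coef, -1.0))
```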

  11. Report: Low Frequency Predictive Skill Despite Structural Instability and Model Error

    Science.gov (United States)

    2013-09-30

    Andrew J. Majda, New York University, Courant Institute of Mathematical Sciences, 251 Mercer Street, New York, NY

  12. Framework for Understanding Structural Errors (FUSE): a modular framework to diagnose differences between hydrological models

    Science.gov (United States)

    Clark, Martyn P.; Slater, Andrew G.; Rupp, David E.; Woods, Ross A.; Vrugt, Jasper A.; Gupta, Hoshin V.; Wagener, Thorsten; Hay, Lauren E.

    2008-01-01

    The problems of identifying the most appropriate model structure for a given problem and quantifying the uncertainty in model structure remain outstanding research challenges for the discipline of hydrology. Progress on these problems requires understanding of the nature of differences between models. This paper presents a methodology to diagnose differences in hydrological model structures: the Framework for Understanding Structural Errors (FUSE). FUSE was used to construct 79 unique model structures by combining components of 4 existing hydrological models. These new models were used to simulate streamflow in two of the basins used in the Model Parameter Estimation Experiment (MOPEX): the Guadalupe River (Texas) and the French Broad River (North Carolina). Results show that the new models produced simulations of streamflow that were at least as good as the simulations produced by the models that participated in the MOPEX experiment. Our initial application of the FUSE method for the Guadalupe River exposed relationships between model structure and model performance, suggesting that the choice of model structure is just as important as the choice of model parameters. However, further work is needed to evaluate model simulations using multiple criteria to diagnose the relative importance of model structural differences in various climate regimes and to assess the amount of independent information in each of the models. This work will be crucial to both identifying the most appropriate model structure for a given problem and quantifying the uncertainty in model structure. To facilitate research on these problems, the FORTRAN-90 source code for FUSE is available upon request from the lead author.

  13. STRUCTURED BACKWARD ERRORS FOR STRUCTURED KKT SYSTEMS

    Institute of Scientific and Technical Information of China (English)

    Xin-xiu Li; Xin-guo Liu

    2004-01-01

    In this paper we study structured backward errors for some structured KKT systems.Normwise structured backward errors for structured KKT systems are defined, and computable formulae of the structured backward errors are obtained. Simple numerical examples show that the structured backward errors may be much larger than the unstructured ones in some cases.
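
    For context, the standard (unstructured) normwise backward error of an approximate solution x̂ of Ax = b is the Rigal-Gaches quantity ||b - Ax̂|| / (||A|| ||x̂|| + ||b||); the structured variants defined in the paper constrain the perturbations to respect the KKT block structure. The snippet below computes only the unstructured quantity for a small saddle-point system, as a hypothetical baseline against which a structured backward error could be compared; the matrices are random examples.

```python
import numpy as np

rng = np.random.default_rng(4)
n, m = 5, 2
H = rng.standard_normal((n, n))
H = H @ H.T + n * np.eye(n)                       # symmetric positive definite block
A = rng.standard_normal((m, n))

# KKT (saddle-point) matrix [[H, A^T], [A, 0]]
K = np.block([[H, A.T], [A, np.zeros((m, m))]])
b = rng.standard_normal(n + m)

x = np.linalg.solve(K, b)
x_hat = x + 1e-8 * rng.standard_normal(x.size)    # perturbed "computed" solution

r = b - K @ x_hat
eta = np.linalg.norm(r) / (np.linalg.norm(K, 2) * np.linalg.norm(x_hat)
                           + np.linalg.norm(b))
print("unstructured normwise backward error:", eta)
```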

  14. Accounting for environmental variability, modeling errors, and parameter estimation uncertainties in structural identification

    Science.gov (United States)

    Behmanesh, Iman; Moaveni, Babak

    2016-07-01

    This paper presents a Hierarchical Bayesian model updating framework to account for the effects of ambient temperature and excitation amplitude. The proposed approach is applied for model calibration, response prediction and damage identification of a footbridge under changing environmental/ambient conditions. The concrete Young's modulus of the footbridge deck is the considered updating structural parameter with its mean and variance modeled as functions of temperature and excitation amplitude. The identified modal parameters over 27 months of continuous monitoring of the footbridge are used to calibrate the updating parameters. One of the objectives of this study is to show that by increasing the levels of information in the updating process, the posterior variation of the updating structural parameter (concrete Young's modulus) is reduced. To this end, the calibration is performed at three information levels using (1) the identified modal parameters, (2) modal parameters and ambient temperatures, and (3) modal parameters, ambient temperatures, and excitation amplitudes. The calibrated model is then validated by comparing the model-predicted natural frequencies and those identified from measured data after deliberate change to the structural mass. It is shown that accounting for modeling error uncertainties is crucial for reliable response prediction, and accounting only the estimated variability of the updating structural parameter is not sufficient for accurate response predictions. Finally, the calibrated model is used for damage identification of the footbridge.

  15. Modeling workplace contact networks: The effects of organizational structure, architecture, and reporting errors on epidemic predictions.

    Science.gov (United States)

    Potter, Gail E; Smieszek, Timo; Sailer, Kerstin

    2015-09-01

    Face-to-face social contacts are potentially important transmission routes for acute respiratory infections, and understanding the contact network can improve our ability to predict, contain, and control epidemics. Although workplaces are important settings for infectious disease transmission, few studies have collected workplace contact data and estimated workplace contact networks. We use contact diaries, architectural distance measures, and institutional structures to estimate social contact networks within a Swiss research institute. Some contact reports were inconsistent, indicating reporting errors. We adjust for this with a latent variable model, jointly estimating the true (unobserved) network of contacts and duration-specific reporting probabilities. We find that contact probability decreases with distance, and that research group membership, role, and shared projects are strongly predictive of contact patterns. Estimated reporting probabilities were low only for 0-5 min contacts. Adjusting for reporting error changed the estimate of the duration distribution, but did not change the estimates of covariate effects and had little effect on epidemic predictions. Our epidemic simulation study indicates that inclusion of network structure based on architectural and organizational structure data can improve the accuracy of epidemic forecasting models.

  16. Bayesian statistical modeling of spatially correlated error structure in atmospheric tracer inverse analysis

    Directory of Open Access Journals (Sweden)

    C. Mukherjee

    2011-01-01

    Full Text Available Inverse modeling applications in atmospheric chemistry are increasingly addressing the challenging statistical issues of data synthesis by adopting refined statistical analysis methods. This paper advances this line of research by addressing several central questions in inverse modeling, focusing specifically on Bayesian statistical computation. Motivated by problems of refining bottom-up estimates of source/sink fluxes of trace gases and aerosols based on increasingly high-resolution satellite retrievals of atmospheric chemical concentrations, we address head-on the need for integrating formal spatial statistical methods of residual error structure in global scale inversion models. We do this using analytically and computationally tractable spatial statistical models, known as conditional autoregressive spatial models, as components of a global inversion framework. We develop Markov chain Monte Carlo methods to explore and fit these spatial structures in an overall statistical framework that simultaneously estimates source fluxes. Additional aspects of the study extend the statistical framework to utilize priors in a more physically realistic manner, and to formally address and deal with missing data in satellite retrievals. We demonstrate the analysis in the context of inferring carbon monoxide (CO) sources constrained by satellite retrievals of column CO from the Measurement of Pollution in the Troposphere (MOPITT) instrument on the TERRA satellite, paying special attention to evaluating performance of the inverse approach using various statistical diagnostic metrics. This is developed using synthetic data generated to resemble MOPITT data to define a proof-of-concept and model assessment, and then in analysis of real MOPITT data.

  17. Effects of error covariance structure on estimation of model averaging weights and predictive performance

    Science.gov (United States)

    Lu, Dan; Ye, Ming; Meyer, Philip D.; Curtis, Gary P.; Shi, Xiaoqing; Niu, Xu-Feng; Yabusaki, Steve B.

    2013-01-01

    When conducting model averaging for assessing groundwater conceptual model uncertainty, the averaging weights are often evaluated using model selection criteria such as AIC, AICc, BIC, and KIC (Akaike Information Criterion, Corrected Akaike Information Criterion, Bayesian Information Criterion, and Kashyap Information Criterion, respectively). However, this method often leads to an unrealistic situation in which the best model receives overwhelmingly large averaging weight (close to 100%), which cannot be justified by available data and knowledge. It was found in this study that this problem was caused by using the covariance matrix, CE, of measurement errors for estimating the negative log likelihood function common to all the model selection criteria. This problem can be resolved by using the covariance matrix, Cek, of total errors (including model errors and measurement errors) to account for the correlation between the total errors. An iterative two-stage method was developed in the context of maximum likelihood inverse modeling to iteratively infer the unknown Cek from the residuals during model calibration. The inferred Cek was then used in the evaluation of model selection criteria and model averaging weights. While this method was limited to serial data using time series techniques in this study, it can be extended to spatial data using geostatistical techniques. The method was first evaluated in a synthetic study and then applied to an experimental study, in which alternative surface complexation models were developed to simulate column experiments of uranium reactive transport. It was found that the total errors of the alternative models were temporally correlated due to the model errors. The iterative two-stage method using Cekresolved the problem that the best model receives 100% model averaging weight, and the resulting model averaging weights were supported by the calibration results and physical understanding of the alternative models. Using Cek
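
    Given values of an information criterion for competing models, the averaging weights follow from the usual exponential formula, and it is easy to see how a modest criterion difference already pushes one model toward a weight of one. The snippet below computes weights from illustrative, made-up criterion values and also evaluates a Gaussian log-likelihood with a full error covariance matrix, which is the kind of quantity the authors propose to build from the total (model plus measurement) errors; it is a generic illustration, not the iterative two-stage method itself.

```python
import numpy as np

# Model-averaging weights from (made-up) information-criterion values.
ic = np.array([100.0, 104.0, 112.0])          # smaller is better
delta = ic - ic.min()
w = np.exp(-0.5 * delta) / np.sum(np.exp(-0.5 * delta))
print("averaging weights:", w)                # first model dominates

# Gaussian log-likelihood of residuals r with a full error covariance C,
# e.g. a covariance of total errors that includes temporal correlation.
def gaussian_loglik(r, C):
    sign, logdet = np.linalg.slogdet(C)
    return -0.5 * (r @ np.linalg.solve(C, r) + logdet + r.size * np.log(2 * np.pi))

n = 20
t = np.arange(n)
C_total = 0.5**2 * np.exp(-np.abs(t[:, None] - t[None, :]) / 3.0)  # AR-like correlation
r = np.random.default_rng(5).multivariate_normal(np.zeros(n), C_total)
print("log-likelihood with correlated total errors:", gaussian_loglik(r, C_total))
```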

  18. Effects of error covariance structure on estimation of model averaging weights and predictive performance

    Energy Technology Data Exchange (ETDEWEB)

    Lu, Dan; Ye, Ming; Meyer, Philip D.; Curtis, Gary P.; Shi, Xiaoqing; Niu, Xu-Feng; Yabusaki, Steven B.

    2013-07-23

    When conducting model averaging for assessing groundwater conceptual model uncertainty, the averaging weights are often evaluated using model selection criteria such as AIC, AICc, BIC, and KIC (Akaike Information Criterion, Corrected Akaike Information Criterion, Bayesian Information Criterion, and Kashyap Information Criterion, respectively). However, this method often leads to an unrealistic situation in which the best model receives overwhelmingly large averaging weight (close to 100%), which cannot be justified by available data and knowledge. It was found in this study that this problem was caused by using the covariance matrix, CE, of measurement errors for estimating the negative log likelihood function common to all the model selection criteria. This problem can be resolved by using the covariance matrix, Cek, of total errors (including model errors and measurement errors) to account for the correlation between the total errors. An iterative two-stage method was developed in the context of maximum likelihood inverse modeling to iteratively infer the unknown Cek from the residuals during model calibration. The inferred Cek was then used in the evaluation of model selection criteria and model averaging weights. While this method was limited to serial data using time series techniques in this study, it can be extended to spatial data using geostatistical techniques. The method was first evaluated in a synthetic study and then applied to an experimental study, in which alternative surface complexation models were developed to simulate column experiments of uranium reactive transport. It was found that the total errors of the alternative models were temporally correlated due to the model errors. The iterative two-stage method using Cek resolved the problem that the best model receives 100% model averaging weight, and the resulting model averaging weights were supported by the calibration results and physical understanding of the alternative models. Using Cek

  19. Effects of error covariance structure on estimation of model averaging weights and predictive performance

    Science.gov (United States)

    Lu, Dan; Ye, Ming; Meyer, Philip D.; Curtis, Gary P.; Shi, Xiaoqing; Niu, Xu-Feng; Yabusaki, Steve B.

    2013-09-01

    When conducting model averaging for assessing groundwater conceptual model uncertainty, the averaging weights are often evaluated using model selection criteria such as AIC, AICc, BIC, and KIC (Akaike Information Criterion, Corrected Akaike Information Criterion, Bayesian Information Criterion, and Kashyap Information Criterion, respectively). However, this method often leads to an unrealistic situation in which the best model receives overwhelmingly large averaging weight (close to 100%), which cannot be justified by available data and knowledge. It was found in this study that this problem was caused by using the covariance matrix, Cɛ, of measurement errors for estimating the negative log likelihood function common to all the model selection criteria. This problem can be resolved by using the covariance matrix, Cek, of total errors (including model errors and measurement errors) to account for the correlation between the total errors. An iterative two-stage method was developed in the context of maximum likelihood inverse modeling to iteratively infer the unknown Cek from the residuals during model calibration. The inferred Cek was then used in the evaluation of model selection criteria and model averaging weights. While this method was limited to serial data using time series techniques in this study, it can be extended to spatial data using geostatistical techniques. The method was first evaluated in a synthetic study and then applied to an experimental study, in which alternative surface complexation models were developed to simulate column experiments of uranium reactive transport. It was found that the total errors of the alternative models were temporally correlated due to the model errors. The iterative two-stage method using Cek resolved the problem that the best model receives 100% model averaging weight, and the resulting model averaging weights were supported by the calibration results and physical understanding of the alternative models. Using Cek

  20. Doubly-Latent Models of School Contextual Effects: Integrating Multilevel and Structural Equation Approaches to Control Measurement and Sampling Error

    Science.gov (United States)

    Marsh, Herbert W.; Ludtke, Oliver; Robitzsch, Alexander; Trautwein, Ulrich; Asparouhov, Tihomir; Muthen, Bengt; Nagengast, Benjamin

    2009-01-01

    This article is a methodological-substantive synergy. Methodologically, we demonstrate latent-variable contextual models that integrate structural equation models (with multiple indicators) and multilevel models. These models simultaneously control for and unconfound measurement error due to sampling of items at the individual (L1) and group (L2)…

  1. Measurement Error Models in Astronomy

    CERN Document Server

    Kelly, Brandon C

    2011-01-01

    I discuss the effects of measurement error on regression and density estimation. I review the statistical methods that have been developed to correct for measurement error that are most popular in astronomical data analysis, discussing their advantages and disadvantages. I describe functional models for accounting for measurement error in regression, with emphasis on the methods of moments approach and the modified loss function approach. I then describe structural models for accounting for measurement error in regression and density estimation, with emphasis on maximum-likelihood and Bayesian methods. As an example of a Bayesian application, I analyze an astronomical data set subject to large measurement errors and a non-linear dependence between the response and covariate. I conclude with some directions for future research.
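
    One of the simplest moment-based corrections discussed in this literature is the attenuation correction for a linear regression with a noisy covariate: the naive slope is biased toward zero by the reliability ratio var(x)/(var(x)+var(u)) and can be rescaled when the measurement-error variance is known. The sketch below is generic textbook material used to illustrate the idea, not code from the review, and all numbers are invented.

```python
import numpy as np

rng = np.random.default_rng(6)
n, beta, sigma_u = 10000, 2.0, 1.0

x = rng.standard_normal(n)                       # true covariate
y = beta * x + 0.5 * rng.standard_normal(n)      # response
w = x + sigma_u * rng.standard_normal(n)         # covariate measured with error

cov = np.cov(w, y)
beta_naive = cov[0, 1] / cov[0, 0]               # attenuated toward zero

# Method-of-moments correction using the known measurement-error variance.
reliability = (w.var() - sigma_u**2) / w.var()
beta_corrected = beta_naive / reliability

print("naive:", beta_naive, " corrected:", beta_corrected)
```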

  2. A new analysis of fine-structure constant measurements and modelling errors from quasar absorption lines

    CERN Document Server

    Wilczynska, Michael R; King, Julian A; Murphy, Michael T; Bainbridge, Matthew B; Flambaum, Victor V

    2015-01-01

    We present an analysis of 23 absorption systems along the lines of sight towards 18 quasars in the redshift range of $0.4 \leq z_{abs} \leq 2.3$ observed on the Very Large Telescope (VLT) using the Ultraviolet and Visual Echelle Spectrograph (UVES). Considering both statistical and systematic error contributions we find a robust estimate of the weighted mean deviation of the fine-structure constant from its current, laboratory value of $\Delta\alpha/\alpha=\left(0.22\pm0.23\right)\times10^{-5}$, consistent with the dipole variation reported in Webb et al. and King et al. This paper also examines modelling methodologies and systematic effects. In particular we focus on the consequences of fitting quasar absorption systems with too few absorbing components and of selectively fitting only the stronger components in an absorption complex. We show that using insufficient continuum regions around an absorption complex causes a significant increase in the scatter of a sample of $\Delta\alpha/\alpha$ measurements, th...

  3. Unscented predictive variable structure filter for satellite attitude estimation with model errors when using low precision sensors

    Science.gov (United States)

    Cao, Lu; Li, Hengnian

    2016-10-01

    In the satellite attitude estimation problem, serious model errors always exist and hinder the estimation performance of the Attitude Determination and Control System (ADCS), especially for a small satellite with low-precision sensors. To deal with this problem, a new algorithm for attitude estimation, referred to as the unscented predictive variable structure filter (UPVSF), is presented. This strategy is based on the variable structure control concept and the unscented transform (UT) sampling method. It can be implemented in real time with an ability to estimate the model errors on-line, in order to improve the state estimation precision. In addition, the model errors in this filter are not restricted to Gaussian noise; therefore, it has the advantage of dealing with various kinds of model errors or noises. It is anticipated that the UT sampling strategy can further enhance the robustness and accuracy of the novel UPVSF. Numerical simulations show that the proposed UPVSF is more effective and robust in dealing with model errors and low-precision sensors compared with the traditional unscented Kalman filter (UKF).
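
    The unscented transform that underlies the proposed filter is easy to sketch in isolation: deterministic sigma points are propagated through the nonlinearity and re-weighted to recover a mean and covariance. The snippet below shows only this building block on a toy nonlinear function; it is not the authors' UPVSF, whose variable-structure model-error estimation is the paper's actual contribution, and the test function and numbers are invented.

```python
import numpy as np

def unscented_transform(mean, cov, f, alpha=1.0, beta=2.0, kappa=0.0):
    """Propagate (mean, cov) through nonlinear f using standard UT sigma points."""
    n = mean.size
    lam = alpha**2 * (n + kappa) - n
    S = np.linalg.cholesky((n + lam) * cov)              # matrix square root
    sigma = np.vstack([mean, mean + S.T, mean - S.T])    # 2n+1 sigma points
    wm = np.full(2 * n + 1, 1.0 / (2 * (n + lam)))
    wc = wm.copy()
    wm[0] = lam / (n + lam)
    wc[0] = lam / (n + lam) + (1 - alpha**2 + beta)
    Y = np.array([f(s) for s in sigma])
    y_mean = wm @ Y
    d = Y - y_mean
    y_cov = (wc[:, None] * d).T @ d
    return y_mean, y_cov

# Toy nonlinearity: polar-to-Cartesian conversion.
f = lambda x: np.array([x[0] * np.cos(x[1]), x[0] * np.sin(x[1])])
m, P = np.array([1.0, 0.5]), np.diag([0.01, 0.04])
print(unscented_transform(m, P, f))
```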

  4. Doubly-Latent Models of School Contextual Effects: Integrating Multilevel and Structural Equation Approaches to Control Measurement and Sampling Error.

    Science.gov (United States)

    Marsh, Herbert W; Lüdtke, Oliver; Robitzsch, Alexander; Trautwein, Ulrich; Asparouhov, Tihomir; Muthén, Bengt; Nagengast, Benjamin

    2009-11-30

    This article is a methodological-substantive synergy. Methodologically, we demonstrate latent-variable contextual models that integrate structural equation models (with multiple indicators) and multilevel models. These models simultaneously control for and unconfound measurement error due to sampling of items at the individual (L1) and group (L2) levels and sampling error due the sampling of persons in the aggregation of L1 characteristics to form L2 constructs. We consider a set of models that are latent or manifest in relation to sampling items (measurement error) and sampling of persons (sampling error) and discuss when different models might be most useful. We demonstrate the flexibility of these 4 core models by extending them to include random slopes, latent (single-level or cross-level) interactions, and latent quadratic effects. Substantively we use these models to test the big-fish-little-pond effect (BFLPE), showing that individual student levels of academic self-concept (L1-ASC) are positively associated with individual level achievement (L1-ACH) and negatively associated with school-average achievement (L2-ACH)-a finding with important policy implications for the way schools are structured. Extending tests of the BFLPE in new directions, we show that the nonlinear effects of the L1-ACH (a latent quadratic effect) and the interaction between gender and L1-ACH (an L1 × L1 latent interaction) are not significant. Although random-slope models show no significant school-to-school variation in relations between L1-ACH and L1-ASC, the negative effects of L2-ACH (the BFLPE) do vary somewhat with individual L1-ACH. We conclude with implications for diverse applications of the set of latent contextual models, including recommendations about their implementation, effect size estimates (and confidence intervals) appropriate to multilevel models, and directions for further research in contextual effect analysis.

  5. Error handling strategies in multiphase inverse modeling

    Energy Technology Data Exchange (ETDEWEB)

    Finsterle, S.; Zhang, Y.

    2010-12-01

    Parameter estimation by inverse modeling involves the repeated evaluation of a function of residuals. These residuals represent both errors in the model and errors in the data. In practical applications of inverse modeling of multiphase flow and transport, the error structure of the final residuals often significantly deviates from the statistical assumptions that underlie standard maximum likelihood estimation using the least-squares method. Large random or systematic errors are likely to lead to convergence problems, biased parameter estimates, misleading uncertainty measures, or poor predictive capabilities of the calibrated model. The multiphase inverse modeling code iTOUGH2 supports strategies that identify and mitigate the impact of systematic or non-normal error structures. We discuss these approaches and provide an overview of the error handling features implemented in iTOUGH2.

  6. Error Estimates of Theoretical Models: a Guide

    CERN Document Server

    Dobaczewski, J; Reinhard, P -G

    2014-01-01

    This guide offers suggestions/insights on uncertainty quantification of nuclear structure models. We discuss a simple approach to statistical error estimates, strategies to assess systematic errors, and show how to uncover inter-dependencies by correlation analysis. The basic concepts are illustrated through simple examples. By providing theoretical error bars on predicted quantities and using statistical methods to study correlations between observables, theory can significantly enhance the feedback between experiment and nuclear modeling.
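
    The correlation-analysis step the guide advocates can be demonstrated on any least-squares fit: the estimated parameter covariance (from the Jacobian at the optimum) is rescaled into a correlation matrix whose off-diagonal entries expose inter-dependencies between model parameters. The example below fits an invented two-parameter model with scipy and prints that matrix; it illustrates the generic recipe, not a nuclear-structure calculation.

```python
import numpy as np
from scipy.optimize import curve_fit

def model(x, a, b):
    return a * np.exp(-b * x)

rng = np.random.default_rng(7)
x = np.linspace(0.0, 4.0, 40)
y = model(x, 2.5, 1.3) + 0.05 * rng.standard_normal(x.size)

popt, pcov = curve_fit(model, x, y, p0=[1.0, 1.0])

# Convert the parameter covariance matrix into a correlation matrix.
sd = np.sqrt(np.diag(pcov))
corr = pcov / np.outer(sd, sd)
print("estimates:", popt)
print("statistical error bars:", sd)
print("parameter correlation matrix:\n", corr)
```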

  7. Dominant modes via model error

    Science.gov (United States)

    Yousuff, A.; Breida, M.

    1992-01-01

    Obtaining a reduced model of a stable mechanical system with proportional damping is considered. Such systems can be conveniently represented in modal coordinates. Two popular schemes, the modal cost analysis and the balancing method, offer simple means of identifying dominant modes for retention in the reduced model. The dominance is measured via the modal costs in the case of modal cost analysis and via the singular values of the Gramian-product in the case of balancing. Though these measures do not exactly reflect the more appropriate model error, which is the H2 norm of the output-error between the full and the reduced models, they do lead to simple computations. Normally, the model error is computed after the reduced model is obtained, since it is believed that, in general, the model error cannot be easily computed a priori. The authors point out that the model error can also be calculated a priori, just as easily as the above measures. Hence, the model error itself can be used to determine the dominant modes. Moreover, the simplicity of the computations does not presume any special properties of the system, such as small damping, orthogonal symmetry, etc.
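
    The model error referred to here, the H2 norm of the transfer function from input to the output difference between the full and reduced models, can be computed from a Lyapunov equation. The sketch below builds a small modal system with proportional damping, truncates one mode, and evaluates the H2 error norm; it only demonstrates the computation the authors argue is cheap enough to use for mode selection, with arbitrary example numbers.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

def h2_norm(A, B, C):
    """H2 norm via the controllability Gramian: A P + P A^T + B B^T = 0."""
    P = solve_continuous_lyapunov(A, -B @ B.T)
    return np.sqrt(np.trace(C @ P @ C.T))

# Modal system with proportional damping: three modes (arbitrary numbers).
omega = np.array([1.0, 3.0, 10.0])        # natural frequencies
zeta = 0.02                               # damping ratio
blocks, Bs, Cs = [], [], []
for w in omega:
    blocks.append(np.array([[0.0, 1.0], [-w**2, -2 * zeta * w]]))
    Bs.append(np.array([[0.0], [1.0]]))
    Cs.append(np.array([[1.0, 0.0]]))

def assemble(idx):
    A = np.zeros((2 * len(idx), 2 * len(idx)))
    B = np.zeros((2 * len(idx), 1))
    C = np.zeros((1, 2 * len(idx)))
    for k, i in enumerate(idx):
        A[2*k:2*k+2, 2*k:2*k+2] = blocks[i]
        B[2*k:2*k+2] = Bs[i]
        C[:, 2*k:2*k+2] = Cs[i]
    return A, B, C

A_full, B_full, C_full = assemble([0, 1, 2])
A_red, B_red, C_red = assemble([0, 1])     # drop the highest-frequency mode

# Error system: parallel connection of the full model and the negated reduced model.
A_err = np.block([[A_full, np.zeros((6, 4))], [np.zeros((4, 6)), A_red]])
B_err = np.vstack([B_full, B_red])
C_err = np.hstack([C_full, -C_red])
print("H2 model error of truncating mode 3:", h2_norm(A_err, B_err, C_err))
```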

  8. Development and estimation of a semi-compensatory model with flexible error structure

    DEFF Research Database (Denmark)

    Kaplan, Sigal; Shiftan, Yoram; Bekhor, Shlomo

    2009-01-01

    , and utility-based choice accommodating alternatively nested substitution patterns across the alternatives and random taste variation across the population. The proposed model is applied to off-campus rental apartment choice of students. Results show (i) the estimated model for a universal realm of 200...

  9. Development and estimation of a semi-compensatory model with a flexible error structure

    DEFF Research Database (Denmark)

    Kaplan, Sigal; Shiftan, Yoram; Bekhor, Shlomo

    2012-01-01

    In decisions involving many alternatives, such as residential choice, individuals conduct a two-stage decision process, consisting of eliminating non-viable alternatives and choice from the retained choice set. In light of the potential of semi-compensatory discrete choice models to mathematically...

  10. Detecting Serial Correlation in the Error Structure of a Cross-Lagged Panel Model.

    Science.gov (United States)

    1988-02-02

    ...in turn, was motivated by the seminal work of Paul Lazarsfeld (1948) on the analysis of panel studies involving discrete variables.

  11. Analysis of modeling errors in system identification

    Science.gov (United States)

    Hadaegh, F. Y.; Bekey, G. A.

    1986-01-01

    This paper is concerned with the identification of a system in the presence of several error sources. Following some basic definitions, the notion of 'near-equivalence in probability' is introduced using the concept of near-equivalence between a model and a process. Necessary and sufficient conditions for the identifiability of system parameters are given. The effect of structural error on the parameter estimates is considered for both deterministic and stochastic cases.

  12. Error Models of the Analog to Digital Converters

    OpenAIRE

    Michaeli Linus; Šaliga Ján

    2014-01-01

    Error models of the Analog to Digital Converters describe metrological properties of the signal conversion from analog to digital domain in a concise form using few dominant error parameters. Knowledge of the error models allows the end user to provide fast testing in the crucial points of the full input signal range and to use identified error models for post correction in the digital domain. The imperfections of the internal ADC structure determine the error characteristics represented by t...

  13. An update on modeling dose-response relationships: Accounting for correlated data structure and heterogeneous error variance in linear and nonlinear mixed models.

    Science.gov (United States)

    Gonçalves, M A D; Bello, N M; Dritz, S S; Tokach, M D; DeRouchey, J M; Woodworth, J C; Goodband, R D

    2016-05-01

    Advanced methods for dose-response assessments are used to estimate the minimum concentrations of a nutrient that maximizes a given outcome of interest, thereby determining nutritional requirements for optimal performance. Contrary to standard modeling assumptions, experimental data often present a design structure that includes correlations between observations (i.e., blocking, nesting, etc.) as well as heterogeneity of error variances; either can mislead inference if disregarded. Our objective is to demonstrate practical implementation of linear and nonlinear mixed models for dose-response relationships accounting for correlated data structure and heterogeneous error variances. To illustrate, we modeled data from a randomized complete block design study to evaluate the standardized ileal digestible (SID) Trp:Lys ratio dose-response on G:F of nursery pigs. A base linear mixed model was fitted to explore the functional form of G:F relative to Trp:Lys ratios and assess model assumptions. Next, we fitted 3 competing dose-response mixed models to G:F, namely a quadratic polynomial (QP) model, a broken-line linear (BLL) ascending model, and a broken-line quadratic (BLQ) ascending model, all of which included heteroskedastic specifications, as dictated by the base model. The GLIMMIX procedure of SAS (version 9.4) was used to fit the base and QP models and the NLMIXED procedure was used to fit the BLL and BLQ models. We further illustrated the use of a grid search of initial parameter values to facilitate convergence and parameter estimation in nonlinear mixed models. Fit between competing dose-response models was compared using a maximum likelihood-based Bayesian information criterion (BIC). The QP, BLL, and BLQ models fitted on G:F of nursery pigs yielded BIC values of 353.7, 343.4, and 345.2, respectively, thus indicating a better fit of the BLL model. The BLL breakpoint estimate of the SID Trp:Lys ratio was 16.5% (95% confidence interval [16.1, 17.0]). Problems with
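
    The broken-line linear (BLL) ascending model that won the BIC comparison is just a two-piece function with a plateau beyond the breakpoint, and a stripped-down version can be fitted with nonlinear least squares. The sketch below uses invented data and ignores the random effects and heterogeneous error variances handled by the mixed-model analysis in the paper; it only shows the functional form and a breakpoint estimate.

```python
import numpy as np
from scipy.optimize import curve_fit

def bll(x, plateau, slope, breakpoint):
    """Broken-line linear ascending model: rises with `slope` up to
    `breakpoint`, then stays at `plateau`."""
    return plateau + slope * np.minimum(x - breakpoint, 0.0)

rng = np.random.default_rng(8)
trp_lys = np.linspace(14.0, 21.0, 30)          # invented dose levels (SID Trp:Lys, %)
gf = bll(trp_lys, 0.68, 0.02, 16.5) + 0.005 * rng.standard_normal(trp_lys.size)

popt, pcov = curve_fit(bll, trp_lys, gf, p0=[0.6, 0.01, 17.0])
plateau, slope, brk = popt
print(f"estimated breakpoint: {brk:.2f} (SE {np.sqrt(pcov[2, 2]):.2f})")
```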

  14. Error Propagation in a System Model

    Science.gov (United States)

    Schloegel, Kirk (Inventor); Bhatt, Devesh (Inventor); Oglesby, David V. (Inventor); Madl, Gabor (Inventor)

    2015-01-01

    Embodiments of the present subject matter can enable the analysis of signal value errors for system models. In an example, signal value errors can be propagated through the functional blocks of a system model to analyze possible effects as the signal value errors impact incident functional blocks. This propagation of the errors can be applicable to many models of computation including avionics models, synchronous data flow, and Kahn process networks.

  15. The Argos-CLS Kalman Filter: Error Structures and State-Space Modelling Relative to Fastloc GPS Data.

    Directory of Open Access Journals (Sweden)

    Andrew D Lowther

    Full Text Available Understanding how an animal utilises its surroundings requires its movements through space to be described accurately. Satellite telemetry is the only means of acquiring movement data for many species; however, data are prone to varying amounts of spatial error; the recent application of state-space models (SSMs) to the location estimation problem has provided a means to incorporate spatial errors when characterising animal movements. The predominant platform for collecting satellite telemetry data on free-ranging animals, Service Argos, recently provided an alternative Doppler location estimation algorithm that is purported to be more accurate and generate a greater number of locations than its predecessor. We provide a comprehensive assessment of the performance of this new estimation process on data from free-ranging animals relative to concurrently collected Fastloc GPS data. Additionally, we test the efficacy of three readily available SSMs in predicting the movement of two focal animals. Raw Argos location estimates generated by the new algorithm were greatly improved compared to the old system. Approximately twice as many Argos locations were derived compared to GPS on the devices used. Root Mean Square Errors (RMSE) for each optimal SSM were less than 4.25 km with some producing RMSE of less than 2.50 km. Differences in the biological plausibility of the tracks between the two focal animals used to investigate the utility of SSMs highlight the importance of considering animal behaviour in movement studies. The ability to reprocess Argos data collected since 2008 with the new algorithm should permit questions of animal movement to be revisited at a finer resolution.
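
    A state-space treatment of error-prone location data can be illustrated with the simplest possible example: a random-walk movement model observed through noisy fixes and filtered with a Kalman filter. The sketch below is generic, with one filter per coordinate and invented noise levels; the SSMs evaluated in the paper, and Argos error structures themselves, are substantially richer.

```python
import numpy as np

rng = np.random.default_rng(9)
T = 200
sigma_move, sigma_obs = 0.5, 3.0               # process and observation SD (km), invented

truth = np.cumsum(sigma_move * rng.standard_normal((T, 2)), axis=0)   # random-walk track
obs = truth + sigma_obs * rng.standard_normal((T, 2))                 # Argos-like noisy fixes

# Scalar Kalman filter applied independently to each coordinate.
est = np.zeros((T, 2))
x, P = obs[0], np.full(2, sigma_obs**2)
est[0] = x
for t in range(1, T):
    P = P + sigma_move**2                       # predict
    K = P / (P + sigma_obs**2)                  # Kalman gain
    x = x + K * (obs[t] - x)                    # update
    P = (1 - K) * P
    est[t] = x

rmse = lambda a, b: np.sqrt(np.mean(np.sum((a - b)**2, axis=1)))
print("RMSE raw fixes :", rmse(obs, truth))
print("RMSE filtered  :", rmse(est, truth))
```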

  16. The Argos-CLS Kalman Filter: Error Structures and State-Space Modelling Relative to Fastloc GPS Data.

    Science.gov (United States)

    Lowther, Andrew D; Lydersen, Christian; Fedak, Mike A; Lovell, Phil; Kovacs, Kit M

    2015-01-01

    Understanding how an animal utilises its surroundings requires its movements through space to be described accurately. Satellite telemetry is the only means of acquiring movement data for many species however data are prone to varying amounts of spatial error; the recent application of state-space models (SSMs) to the location estimation problem have provided a means to incorporate spatial errors when characterising animal movements. The predominant platform for collecting satellite telemetry data on free-ranging animals, Service Argos, recently provided an alternative Doppler location estimation algorithm that is purported to be more accurate and generate a greater number of locations that its predecessor. We provide a comprehensive assessment of this new estimation process performance on data from free-ranging animals relative to concurrently collected Fastloc GPS data. Additionally, we test the efficacy of three readily-available SSM in predicting the movement of two focal animals. Raw Argos location estimates generated by the new algorithm were greatly improved compared to the old system. Approximately twice as many Argos locations were derived compared to GPS on the devices used. Root Mean Square Errors (RMSE) for each optimal SSM were less than 4.25 km with some producing RMSE of less than 2.50 km. Differences in the biological plausibility of the tracks between the two focal animals used to investigate the utility of SSM highlights the importance of considering animal behaviour in movement studies. The ability to reprocess Argos data collected since 2008 with the new algorithm should permit questions of animal movement to be revisited at a finer resolution.

  17. SnugDock: Paratope Structural Optimization during Antibody-Antigen Docking Compensates for Errors in Antibody Homology Models

    Science.gov (United States)

    Sircar, Aroop; Gray, Jeffrey J.

    2010-01-01

    High resolution structures of antibody-antigen complexes are useful for analyzing the binding interface and to make rational choices for antibody engineering. When a crystallographic structure of a complex is unavailable, the structure must be predicted using computational tools. In this work, we illustrate a novel approach, named SnugDock, to predict high-resolution antibody-antigen complex structures by simultaneously structurally optimizing the antibody-antigen rigid-body positions, the relative orientation of the antibody light and heavy chains, and the conformations of the six complementarity determining region loops. This approach is especially useful when the crystal structure of the antibody is not available, requiring allowances for inaccuracies in an antibody homology model which would otherwise frustrate rigid-backbone docking predictions. Local docking using SnugDock with the lowest-energy RosettaAntibody homology model produced more accurate predictions than standard rigid-body docking. SnugDock can be combined with ensemble docking to mimic conformer selection and induced fit resulting in increased sampling of diverse antibody conformations. The combined algorithm produced four medium (Critical Assessment of PRediction of Interactions-CAPRI rating) and seven acceptable lowest-interface-energy predictions in a test set of fifteen complexes. Structural analysis shows that diverse paratope conformations are sampled, but docked paratope backbones are not necessarily closer to the crystal structure conformations than the starting homology models. The accuracy of SnugDock predictions suggests a new genre of general docking algorithms with flexible binding interfaces targeted towards making homology models useful for further high-resolution predictions. PMID:20098500

  18. Error Models of the Analog to Digital Converters

    Science.gov (United States)

    Michaeli, Linus; Šaliga, Ján

    2014-04-01

    Error models of analog-to-digital converters describe the metrological properties of the signal conversion from the analog to the digital domain in a concise form using a few dominant error parameters. Knowledge of the error models allows the end user to perform fast testing at the crucial points of the full input signal range and to use the identified error models for post-correction in the digital domain. The imperfections of the internal ADC structure determine the error characteristics, represented by the nonlinearities as a function of the output code. Progress in microelectronics, missing information about circuit details, and the lack of knowledge about interfering effects caused by ADC installation favour another modeling approach based on input-output behavioral characterization by an input-output error box. Internal links in the ADC structure mean that the input-output error function can be described in a concise form by a suitable function. The modeled functional parameters allow determining the integral error parameters of the ADC. The paper is a survey of error models, starting from structural models for the most common architectures and their linkage with behavioral models represented by a simple look-up table or a functional description of nonlinear errors for the output codes.
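
    The look-up-table post-correction mentioned here amounts to storing an estimate of the code-dependent error and subtracting it in the digital domain. The sketch below fakes a mildly nonlinear 8-bit ADC transfer characteristic, builds such a table from a calibration ramp, and applies it; all numbers are invented for illustration and the fake `adc` function is not any real converter's error model.

```python
import numpy as np

BITS, FS = 8, 1.0
codes = 2**BITS

def adc(v):
    """Hypothetical nonlinear 8-bit ADC: ideal quantizer plus a smooth INL bump."""
    inl = 1.5 * np.sin(2 * np.pi * v / FS)                  # INL in LSB (invented)
    return np.clip(np.round(v / FS * (codes - 1) + inl), 0, codes - 1).astype(int)

# Calibration ramp: estimate the mean error (in LSB) for every output code.
v_cal = np.linspace(0, FS, 100 * codes)
ideal = v_cal / FS * (codes - 1)
measured = adc(v_cal)
lut = np.zeros(codes)
for c in range(codes):
    mask = measured == c
    if mask.any():
        lut[c] = np.mean(measured[mask] - ideal[mask])      # error model per code

# Post-correction in the digital domain: subtract the tabulated error.
v_test = np.random.default_rng(10).uniform(0, FS, 1000)
raw = adc(v_test)
corrected = raw - lut[raw]

rms = lambda e: np.sqrt(np.mean(e**2))
print("rms error before (LSB):", rms(raw - v_test / FS * (codes - 1)))
print("rms error after  (LSB):", rms(corrected - v_test / FS * (codes - 1)))
```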

  19. Error Models of the Analog to Digital Converters

    Directory of Open Access Journals (Sweden)

    Michaeli Linus

    2014-04-01

    Full Text Available Error models of the Analog to Digital Converters describe metrological properties of the signal conversion from analog to digital domain in a concise form using few dominant error parameters. Knowledge of the error models allows the end user to provide fast testing in the crucial points of the full input signal range and to use identified error models for post correction in the digital domain. The imperfections of the internal ADC structure determine the error characteristics represented by the nonlinearities as a function of the output code. Progress in the microelectronics and missing information about circuital details together with the lack of knowledge about interfering effects caused by ADC installation prefers another modeling approach based on the input-output behavioral characterization by the input-output error box. Internal links in the ADC structure cause that the input-output error function could be described in a concise form by suitable function. Modeled functional parameters allow determining the integral error parameters of ADC. Paper is a survey of error models starting from the structural models for the most common architectures and their linkage with the behavioral models represented by the simple look up table or the functional description of nonlinear errors for the output codes.

  20. Model error estimation in ensemble data assimilation

    Directory of Open Access Journals (Sweden)

    S. Gillijns

    2007-01-01

    Full Text Available A new methodology is proposed to estimate and account for systematic model error in linear filtering as well as in nonlinear ensemble-based filtering. Our results extend the work of Dee and Todling (2000) on constant bias errors to time-varying model errors. In contrast to existing methodologies, the new filter can also deal with the case where no dynamical model for the systematic error is available. In the latter case, the applicability is limited by a matrix rank condition which has to be satisfied in order for the filter to exist. The performance of the filter developed in this paper is limited by the availability and the accuracy of observations and by the variance of the stochastic model error component. The effect of these aspects on the estimation accuracy is investigated in several numerical experiments using the Lorenz (1996) model. Experimental results indicate that the availability of a dynamical model for the systematic error significantly reduces the variance of the model error estimates, but has only a minor effect on the estimates of the system state. The filter is able to estimate additive model error of any type, provided that the rank condition is satisfied and that the stochastic errors and measurement errors are significantly smaller than the systematic errors. The results of this study are encouraging. However, it remains to be seen how the filter performs in more realistic applications.
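
    The constant-bias setting of Dee and Todling (2000) that this work generalizes can be illustrated by augmenting the state vector with a bias term that the filter estimates alongside the state. The scalar sketch below shows the augmentation for a linear system with an unknown constant forcing bias; it is a minimal Kalman-filter illustration with invented numbers, not the ensemble filter developed in the paper.

```python
import numpy as np

rng = np.random.default_rng(11)
T, a, bias_true = 300, 0.95, 0.4
q, r = 0.05, 0.5                       # process and observation noise variances (invented)

# Truth: x_{t+1} = a x_t + bias + noise; the filter's forecast model omits the bias.
x = np.zeros(T)
for t in range(1, T):
    x[t] = a * x[t - 1] + bias_true + np.sqrt(q) * rng.standard_normal()
y = x + np.sqrt(r) * rng.standard_normal(T)

# Augmented state z = [x, b]; the systematic error b is modelled as a random constant.
F = np.array([[a, 1.0], [0.0, 1.0]])
H = np.array([[1.0, 0.0]])
Q = np.diag([q, 1e-6])
z, P = np.zeros(2), np.diag([1.0, 1.0])
for t in range(T):
    z = F @ z                                   # predict state and bias
    P = F @ P @ F.T + Q
    S = H @ P @ H.T + r
    K = (P @ H.T) / S                           # Kalman gain
    z = z + (K * (y[t] - H @ z)).ravel()        # update with the observation
    P = (np.eye(2) - K @ H) @ P

print("estimated systematic model error:", z[1], " (true:", bias_true, ")")
```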

  1. Error estimation and adaptive chemical transport modeling

    Directory of Open Access Journals (Sweden)

    Malte Braack

    2014-09-01

    Full Text Available We present a numerical method for using several chemical transport models of increasing accuracy and complexity in an adaptive way. Over most of the domain, a simplified chemical model may be used, whereas in certain regions a more complex model is needed for accuracy reasons. A mathematically derived error estimator measures the modeling error and indicates where more accurate models should be used. The error is measured in terms of output functionals; therefore, one has to consider adjoint problems, which carry the sensitivity information. This concept is demonstrated by means of ozone formation and pollution emission.

  2. Error model identification of inertial navigation platform based on errors-in-variables model

    Institute of Scientific and Technical Information of China (English)

    Liu Ming; Liu Yu; Su Baoku

    2009-01-01

    Because the real input acceleration cannot be obtained during error model identification of an inertial navigation platform, both the input and output data contain noise. In this case, the conventional regression model and the least squares (LS) method will produce biased results. Based on the models of inertial navigation platform error and observation error, the errors-in-variables (EV) model and the total least squares (TLS) method are proposed to identify the error model of the inertial navigation platform. The estimation precision is improved, and the result is better than that of the conventional regression-model-based LS method. The simulation results illustrate the effectiveness of the proposed method.
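
    In the classical linear case, the errors-in-variables estimate referred to here can be computed with a singular value decomposition of the augmented data matrix, which is the standard total least squares construction. The snippet below compares ordinary and total least squares on data with noise on both inputs and outputs; it is a generic illustration with invented data, not the platform-error-model identification itself.

```python
import numpy as np

rng = np.random.default_rng(12)
n, beta_true = 500, np.array([1.5, -0.8])

X_true = rng.standard_normal((n, 2))
y_true = X_true @ beta_true
X = X_true + 0.3 * rng.standard_normal((n, 2))   # noisy inputs
y = y_true + 0.3 * rng.standard_normal(n)        # noisy outputs

# Ordinary least squares (biased when the inputs are noisy).
beta_ls = np.linalg.lstsq(X, y, rcond=None)[0]

# Total least squares: smallest right singular vector of the augmented matrix [X y].
_, _, Vt = np.linalg.svd(np.column_stack([X, y]))
v = Vt[-1]
beta_tls = -v[:-1] / v[-1]

print("true :", beta_true)
print("LS   :", beta_ls)
print("TLS  :", beta_tls)
```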

  3. Effect Of Oceanic Lithosphere Age Errors On Model Discrimination

    Science.gov (United States)

    DeLaughter, J. E.

    2016-12-01

    The thermal structure of the oceanic lithosphere is the subject of a long-standing controversy. Because the thermal structure varies with age, it governs properties such as heat flow, density, and bathymetry with important implications for plate tectonics. Though bathymetry, geoid, and heat flow for young (geoid, and heat flow data to an inverse model to determine lithospheric structure details. Though inverse models usually include the effect of errors in bathymetry, heat flow, and geoid, they rarely examine the effects of errors in age. This may have the effect of introducing subtle biases into inverse models of the oceanic lithosphere. Because the inverse problem for thermal structure is both ill-posed and ill-conditioned, these overlooked errors may have a greater effect than expected. The problem is further complicated by the non-uniform distribution of age and errors in age estimates; for example, only 30% of the oceanic lithosphere is older than 80 MY and less than 3% is older than 150 MY. To determine the potential strength of such biases, I have used the age and error maps of Mueller et al (2008) to forward model the bathymetry for half space and GDH1 plate models. For ages less than 20 MY, both models give similar results. The errors induced by uncertainty in age are relatively large and suggest that when possible young lithosphere should be excluded when examining the lithospheric thermal model. As expected, GDH1 bathymetry converges asymptotically on the theoretical result for error-free data for older data. The resulting uncertainty is nearly as large as that introduced by errors in the other parameters; in the absence of other errors, the models can only be distinguished for ages greater than 80 MY. These results suggest that the problem should be approached with the minimum possible number of variables. For example, examining the direct relationship of geoid to bathymetry or heat flow instead of their relationship to age should reduce uncertainties

  4. Parameter estimation and error analysis in environmental modeling and computation

    Science.gov (United States)

    Kalmaz, E. E.

    1986-01-01

    A method for the estimation of parameters and error analysis in the development of nonlinear modeling for environmental impact assessment studies is presented. The modular computer program can interactively fit different nonlinear models to the same set of data, dynamically changing the error structure associated with observed values. Parameter estimation techniques and sequential estimation algorithms employed in parameter identification and model selection are first discussed. Then, least-square parameter estimation procedures are formulated, utilizing differential or integrated equations, and are used to define a model for association of error with experimentally observed data.

  5. Error Resilient Video Compression Using Behavior Models

    Directory of Open Access Journals (Sweden)

    Jacco R. Taal

    2004-03-01

    Full Text Available Wireless and Internet video applications are inherently subjected to bit errors and packet errors, respectively. This is especially so if constraints on the end-to-end compression and transmission latencies are imposed. Therefore, it is necessary to develop methods to optimize the video compression parameters and the rate allocation of these applications that take into account residual channel bit errors. In this paper, we study the behavior of a predictive (interframe) video encoder and model the encoder's behavior using only the statistics of the original input data and of the underlying channel prone to bit errors. The resulting data-driven behavior models are then used to carry out group-of-pictures partitioning and to control the rate of the video encoder in such a way that the overall quality of the decoded video with compression and channel errors is optimized.

  6. Comparison of Prediction-Error-Modelling Criteria

    DEFF Research Database (Denmark)

    Jørgensen, John Bagterp; Jørgensen, Sten Bay

    2007-01-01

    Single and multi-step prediction-error-methods based on the maximum likelihood and least squares criteria are compared. The prediction-error methods studied are based on predictions using the Kalman filter and Kalman predictors for a linear discrete-time stochastic state space model, which is a realization of a continuous-discrete multivariate stochastic transfer function model. The proposed prediction-error methods are demonstrated for a SISO system parameterized by the transfer functions with time delays of a continuous-discrete-time linear stochastic system. The simulations for this case suggest ... computational resources. The identification method is suitable for predictive control.

  7. Generalization error bounds for stationary autoregressive models

    CERN Document Server

    McDonald, Daniel J; Schervish, Mark

    2011-01-01

    We derive generalization error bounds for stationary univariate autoregressive (AR) models. We show that the stationarity assumption alone lets us treat the estimation of AR models as a regularized kernel regression without the need to further regularize the model arbitrarily. We thereby bound the Rademacher complexity of AR models and apply existing Rademacher complexity results to characterize the predictive risk of AR models. We demonstrate our methods by predicting interest rate movements.

  8. Spatial Error Metrics for Oceanographic Model Verification

    Science.gov (United States)

    2012-02-01

    quantitatively and qualitatively for this oceanographic data and successfully separates the model error into displacement and intensity components. This... oceanographic models as well, though one would likely need to make special modifications to handle the often-used nonuniform spacing between depth layers

  9. Improving Localization Accuracy: Successive Measurements Error Modeling

    Directory of Open Access Journals (Sweden)

    Najah Abu Ali

    2015-07-01

    Full Text Available Vehicle self-localization is an essential requirement for many of the safety applications envisioned for vehicular networks. The mathematical models used in current vehicular localization schemes focus on modeling the localization error itself, and overlook the potential correlation between successive localization measurement errors. In this paper, we first investigate the existence of correlation between successive positioning measurements, and then incorporate this correlation into the modeling of the positioning error. We use the Yule-Walker equations to determine the degree of correlation between a vehicle’s future position and its past positions, and then propose a -order Gauss–Markov model to predict the future position of a vehicle from its past positions. We investigate the existence of correlation for two datasets representing the mobility traces of two vehicles over a period of time. We prove the existence of correlation between successive measurements in the two datasets, and show that the time correlation between measurements can have a value up to four minutes. Through simulations, we validate the robustness of our model and show that it is possible to use the first-order Gauss–Markov model, which has the least complexity, and still maintain an accurate estimation of a vehicle’s future location over time using only its current position. Our model can assist in providing better modeling of positioning errors and can be used as a prediction tool to improve the performance of classical localization algorithms such as the Kalman filter.
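
    A compact sketch of the prediction idea in the record, with a synthetic one-dimensional trace standing in for the vehicle mobility datasets: the Yule-Walker equations give AR coefficients for the (roughly stationary) position increments, which are then used for a one-step-ahead prediction.

```python
# Hedged sketch: Yule-Walker AR(p) estimation and one-step position prediction.
import numpy as np

def yule_walker(x, p):
    """Solve the Yule-Walker equations for AR(p) coefficients of series x."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    n = len(x)
    r = np.array([x[:n - k] @ x[k:] / n for k in range(p + 1)])      # autocovariances
    R = np.array([[r[abs(i - j)] for j in range(p)] for i in range(p)])
    return np.linalg.solve(R, r[1:])

rng = np.random.default_rng(1)
inc = np.zeros(600)
for t in range(1, 600):                  # AR(1) increments: correlated successive errors
    inc[t] = 1.0 + 0.7 * (inc[t - 1] - 1.0) + 0.2 * rng.standard_normal()
pos = np.cumsum(inc)                     # synthetic position trace

p = 3
a = yule_walker(inc, p)                  # model the increments, not the raw positions
mu = inc.mean()
recent = inc[-p:][::-1]                  # most recent increment first
next_increment = mu + a @ (recent - mu)
print("predicted next position:", round(pos[-1] + next_increment, 2))
```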

  10. Soft error mechanisms, modeling and mitigation

    CERN Document Server

    Sayil, Selahattin

    2016-01-01

    This book introduces readers to various radiation soft-error mechanisms such as soft delays, radiation induced clock jitter and pulses, and single event (SE) coupling induced effects. In addition to discussing various radiation hardening techniques for combinational logic, the author also describes new mitigation strategies targeting commercial designs. Coverage includes novel soft error mitigation techniques such as the Dynamic Threshold Technique and Soft Error Filtering based on Transmission gate with varied gate and body bias. The discussion also includes modeling of SE crosstalk noise, delay and speed-up effects. Various mitigation strategies to eliminate SE coupling effects are also introduced. Coverage also includes the reliability of low power energy-efficient designs and the impact of leakage power consumption optimizations on soft error robustness. The author presents an analysis of various power optimization techniques, enabling readers to make design choices that reduce static power consumption an...

  11. Sensitivity analysis of geometric errors in additive manufacturing medical models.

    Science.gov (United States)

    Pinto, Jose Miguel; Arrieta, Cristobal; Andia, Marcelo E; Uribe, Sergio; Ramos-Grez, Jorge; Vargas, Alex; Irarrazaval, Pablo; Tejos, Cristian

    2015-03-01

    Additive manufacturing (AM) models are used in medical applications for surgical planning, prosthesis design and teaching. For these applications, the accuracy of the AM models is essential. Unfortunately, this accuracy is compromised due to errors introduced by each of the building steps: image acquisition, segmentation, triangulation, printing and infiltration. However, the contribution of each step to the final error remains unclear. We performed a sensitivity analysis comparing errors obtained from a reference with those obtained modifying parameters of each building step. Our analysis considered global indexes to evaluate the overall error, and local indexes to show how this error is distributed along the surface of the AM models. Our results show that the standard building process tends to overestimate the AM models, i.e. models are larger than the original structures. They also show that the triangulation resolution and the segmentation threshold are critical factors, and that the errors are concentrated at regions with high curvatures. Errors could be reduced choosing better triangulation and printing resolutions, but there is an important need for modifying some of the standard building processes, particularly the segmentation algorithms. Copyright © 2015 IPEM. Published by Elsevier Ltd. All rights reserved.

  12. A probabilistic model for reducing medication errors.

    Directory of Open Access Journals (Sweden)

    Phung Anh Nguyen

    Full Text Available BACKGROUND: Medication errors are common, life threatening, costly but preventable. Information technology and automated systems are highly efficient for preventing medication errors and therefore widely employed in hospital settings. The aim of this study was to construct a probabilistic model that can reduce medication errors by identifying uncommon or rare associations between medications and diseases. METHODS AND FINDINGS: Association rules of mining techniques are utilized for 103.5 million prescriptions from Taiwan's National Health Insurance database. The dataset included 204.5 million diagnoses with ICD9-CM codes and 347.7 million medications by using ATC codes. Disease-Medication (DM) and Medication-Medication (MM) associations were computed by their co-occurrence and associations' strength were measured by the interestingness or lift values which were being referred as Q values. The DMQs and MMQs were used to develop the AOP model to predict the appropriateness of a given prescription. Validation of this model was done by comparing the results of evaluation performed by the AOP model and verified by human experts. The results showed 96% accuracy for appropriate and 45% accuracy for inappropriate prescriptions, with a sensitivity and specificity of 75.9% and 89.5%, respectively. CONCLUSIONS: We successfully developed the AOP model as an efficient tool for automatic identification of uncommon or rare associations between disease-medication and medication-medication in prescriptions. The AOP model helps to reduce medication errors by alerting physicians, improving the patients' safety and the overall quality of care.
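
    The association-strength measure referred to above (lift, reported as a Q value) reduces to a ratio of co-occurrence probabilities; a toy sketch with invented counts, not the NHI data:

```python
# Hedged sketch: association "lift" (Q value) for a disease-medication pair
# from co-occurrence counts in a set of prescriptions. Counts are invented.
def lift(n_both, n_disease, n_medication, n_total):
    """lift = P(disease and medication) / (P(disease) * P(medication))"""
    p_both = n_both / n_total
    p_d = n_disease / n_total
    p_m = n_medication / n_total
    return p_both / (p_d * p_m)

q = lift(n_both=40, n_disease=2_000, n_medication=1_500, n_total=100_000)
print(f"Q (lift) = {q:.2f}")   # Q >> 1 suggests a common pairing; Q near 0 flags a rare one
```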

  13. Which forcing data errors matter most when modeling seasonal snowpacks?

    Science.gov (United States)

    Raleigh, M. S.; Lundquist, J. D.; Clark, M. P.

    2014-12-01

    High quality forcing data are critical when modeling seasonal snowpacks and snowmelt, but their quality is often compromised due to measurement errors or deficiencies in gridded data products (e.g., spatio-temporal interpolation, empirical parameterizations, or numerical weather model outputs). To assess the relative impact of errors in different meteorological forcings, many studies have conducted sensitivity analyses where errors (e.g., bias) are imposed on one forcing at a time and changes in model output are compared. Although straightforward, this approach only considers simplistic error structures and cannot quantify interactions in different meteorological forcing errors (i.e., it assumes a linear system). Here we employ the Sobol' method of global sensitivity analysis, which allows us to test how co-existing errors in six meteorological forcings (i.e., air temperature, precipitation, wind speed, humidity, incoming shortwave and longwave radiation) impact specific modeled snow variables (i.e., peak snow water equivalent, snowmelt rates, and snow disappearance timing). Using the Sobol' framework across a large number of realizations (>100000 simulations annually at each site), we test how (1) the type (e.g., bias vs. random errors), (2) distribution (e.g., uniform vs. normal), and (3) magnitude (e.g., instrument uncertainty vs. field uncertainty) of forcing errors impact key outputs from a physically based snow model (the Utah Energy Balance). We also assess the role of climate by conducting the analysis at sites in maritime, intermountain, continental, and tundra snow zones. For all outputs considered, results show that (1) biases in forcing data are more important than random errors, (2) the choice of error distribution can enhance the importance of specific forcings, and (3) the level of uncertainty considered dictates the relative importance of forcings. While the relative importance of forcings varied with snow variable and climate, the results broadly
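
    A rough illustration of a first-order Sobol' index computation in the spirit of this study, using an invented surrogate for the response of peak SWE to three forcing biases (the real analysis drives the Utah Energy Balance model over six forcings and far more realizations):

```python
# Hedged sketch: Saltelli-style first-order Sobol' indices for a toy surrogate model.
import numpy as np

def surrogate(x):
    # x columns: air-temperature bias [K], precipitation bias [-], longwave bias [W m^-2]
    t, p, lw = x[:, 0], x[:, 1], x[:, 2]
    return -60.0 * t + 400.0 * p - 1.5 * lw + 20.0 * t * p   # change in peak SWE [mm], toy

rng = np.random.default_rng(2)
n, k = 50_000, 3
lo = np.array([-2.0, -0.2, -20.0])
hi = np.array([2.0, 0.2, 20.0])
A = rng.uniform(lo, hi, size=(n, k))
B = rng.uniform(lo, hi, size=(n, k))

fA, fB = surrogate(A), surrogate(B)
var_y = np.var(np.concatenate([fA, fB]))
for i, name in enumerate(["T bias", "P bias", "LW bias"]):
    ABi = A.copy()
    ABi[:, i] = B[:, i]
    s1 = np.mean(fB * (surrogate(ABi) - fA)) / var_y   # Saltelli (2010) first-order estimator
    print(f"S1[{name}] = {s1:.2f}")
```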

  14. Modeling Approach of Regression Orthogonal Experiment Design for Thermal Error Compensation of CNC Turning Center

    Institute of Scientific and Technical Information of China (English)

    2002-01-01

    Thermally induced errors can account for as much as 70% of the dimensional errors on a workpiece. Accurate modeling of errors is an essential part of error compensation. Based on an analysis of the existing approaches to thermal error modeling for machine tools, a new approach of regression orthogonal design is proposed, which combines statistical theory with machine structures, surrounding conditions, engineering judgement, and experience in modeling. A whole computation and analysis procedure is given. ...

  15. Towards a Bayesian total error analysis of conceptual rainfall-runoff models: Characterising model error using storm-dependent parameters

    Science.gov (United States)

    Kuczera, George; Kavetski, Dmitri; Franks, Stewart; Thyer, Mark

    2006-11-01

    Calibration and prediction in conceptual rainfall-runoff (CRR) modelling is affected by the uncertainty in the observed forcing/response data and the structural error in the model. This study works towards the goal of developing a robust framework for dealing with these sources of error and focuses on model error. The characterisation of model error in CRR modelling has been thwarted by the convenient but indefensible treatment of CRR models as deterministic descriptions of catchment dynamics. This paper argues that the fluxes in CRR models should be treated as stochastic quantities because their estimation involves spatial and temporal averaging. Acceptance that CRR models are intrinsically stochastic paves the way for a more rational characterisation of model error. The hypothesis advanced in this paper is that CRR model error can be characterised by storm-dependent random variation of one or more CRR model parameters. A simple sensitivity analysis is used to identify the parameters most likely to behave stochastically, with variation in these parameters yielding the largest changes in model predictions as measured by the Nash-Sutcliffe criterion. A Bayesian hierarchical model is then formulated to explicitly differentiate between forcing, response and model error. It provides a very general framework for calibration and prediction, as well as for testing hypotheses regarding model structure and data uncertainty. A case study calibrating a six-parameter CRR model to daily data from the Abercrombie catchment (Australia) demonstrates the considerable potential of this approach. Allowing storm-dependent variation in just two model parameters (with one of the parameters characterising model error and the other reflecting input uncertainty) yields a substantially improved model fit raising the Nash-Sutcliffe statistic from 0.74 to 0.94. Of particular significance is the use of posterior diagnostics to test the key assumptions about the data and model errors

  16. Regression Model With Elliptically Contoured Errors

    CERN Document Server

    Arashi, M; Tabatabaey, S M M

    2012-01-01

    For the regression model where the errors follow the elliptically contoured distribution (ECD), we consider the least squares (LS), restricted LS (RLS), preliminary test (PT), Stein-type shrinkage (S) and positive-rule shrinkage (PRS) estimators for the regression parameters. We compare the quadratic risks of the estimators to determine the relative dominance properties of the five estimators.

  17. Understanding error generation in fused deposition modeling

    Science.gov (United States)

    Bochmann, Lennart; Bayley, Cindy; Helu, Moneer; Transchel, Robert; Wegener, Konrad; Dornfeld, David

    2015-03-01

    Additive manufacturing offers completely new possibilities for the manufacturing of parts. The advantages of flexibility and convenience of additive manufacturing have had a significant impact on many industries, and optimizing part quality is crucial for expanding its utilization. This research aims to determine the sources of imprecision in fused deposition modeling (FDM). Process errors in terms of surface quality, accuracy and precision are identified and quantified, and an error-budget approach is used to characterize errors of the machine tool. It was determined that accuracy and precision in the y direction (0.08-0.30 mm) are generally greater than in the x direction (0.12-0.62 mm) and the z direction (0.21-0.57 mm). Furthermore, accuracy and precision tend to decrease at increasing axis positions. The results of this work can be used to identify possible process improvements in the design and control of FDM technology.

  18. Multiscale measurement error models for aggregated small area health data.

    Science.gov (United States)

    Aregay, Mehreteab; Lawson, Andrew B; Faes, Christel; Kirby, Russell S; Carroll, Rachel; Watjou, Kevin

    2016-08-01

    Spatial data are often aggregated from a finer (smaller) to a coarser (larger) geographical level. The process of data aggregation induces a scaling effect which smoothes the variation in the data. To address the scaling problem, multiscale models that link the convolution models at different scale levels via the shared random effect have been proposed. One of the main goals in aggregated health data is to investigate the relationship between predictors and an outcome at different geographical levels. In this paper, we extend multiscale models to examine whether a predictor effect at a finer level holds true at a coarser level. To adjust for predictor uncertainty due to aggregation, we applied measurement error models in the framework of the multiscale approach. To assess the benefit of using multiscale measurement error models, we compare the performance of multiscale models with and without measurement error in both real and simulated data. We found that ignoring the measurement error in multiscale models underestimates the regression coefficient, while it overestimates the variance of the spatially structured random effect. On the other hand, accounting for the measurement error in multiscale models provides a better model fit and unbiased parameter estimates.

  19. Quantum error-correction failure distributions: Comparison of coherent and stochastic error models

    Science.gov (United States)

    Barnes, Jeff P.; Trout, Colin J.; Lucarelli, Dennis; Clader, B. D.

    2017-06-01

    We compare failure distributions of quantum error correction circuits for stochastic errors and coherent errors. We utilize a fully coherent simulation of a fault-tolerant quantum error correcting circuit for a d =3 Steane and surface code. We find that the output distributions are markedly different for the two error models, showing that no simple mapping between the two error models exists. Coherent errors create very broad and heavy-tailed failure distributions. This suggests that they are susceptible to outlier events and that mean statistics, such as pseudothreshold estimates, may not provide the key figure of merit. This provides further statistical insight into why coherent errors can be so harmful for quantum error correction. These output probability distributions may also provide a useful metric that can be utilized when optimizing quantum error correcting codes and decoding procedures for purely coherent errors.

  20. Comparative study and error analysis of digital elevation model interpolations

    Institute of Scientific and Technical Information of China (English)

    CHEN Ji-long; WU Wei; LIU Hong-bin

    2008-01-01

    Researchers in P.R. China commonly create triangulated irregular networks (TINs) from contours and then convert TINs into digital elevation models (DEMs). However, the DEM produced by this method cannot precisely describe and simulate key hydrological features such as rivers and drainage borders. Taking a hilly region in southwestern China as a research area and using ArcGIS™ software, we analyzed the errors of different interpolations to obtain distributions of the errors and precisions of different algorithms and to provide references for DEM production. The results show that different interpolation errors satisfy normal distributions, and large errors exist near the structure lines of the terrain. Furthermore, the results also show that the precision of a DEM interpolated with the Australian National University digital elevation model (ANUDEM) is higher than that interpolated with TIN. The DEM interpolated with TIN is acceptable for generating DEMs in the hilly region of southwestern China.

  1. Hierarchical Boltzmann simulations and model error estimation

    Science.gov (United States)

    Torrilhon, Manuel; Sarna, Neeraj

    2017-08-01

    A hierarchical simulation approach for Boltzmann's equation should provide a single numerical framework in which a coarse representation can be used to compute gas flows as accurately and efficiently as in computational fluid dynamics, but a subsequent refinement allows the result to be successively improved toward the complete Boltzmann result. We use Hermite discretization, or moment equations, for the steady linearized Boltzmann equation for a proof-of-concept of such a framework. All representations of the hierarchy are rotationally invariant and the numerical method is formulated on fully unstructured triangular and quadrilateral meshes using an implicit discontinuous Galerkin formulation. We demonstrate the performance of the numerical method on model problems which in particular highlight the relevance of stability of boundary conditions on curved domains. The hierarchical nature of the method also allows model error estimates to be provided by comparing subsequent representations. We present various model errors for a flow through a curved channel with obstacles.

  2. Nonclassical measurements errors in nonlinear models

    DEFF Research Database (Denmark)

    Madsen, Edith; Mulalic, Ismir

    Discrete choice models and in particular logit type models play an important role in understanding and quantifying individual or household behavior in relation to transport demand. An example is the choice of travel mode for a given trip under the budget and time restrictions that the individuals of a household face. In this case an important policy parameter is the effect of income (reflecting the household budget) on the choice of travel mode. This paper deals with the consequences of measurement error in income (an explanatory variable) in discrete choice models. Since it is likely to give misleading estimates of the income effect, it is of interest to investigate the magnitude of the estimation bias and, if possible, use estimation techniques that take the measurement error problem into account. We use data from the Danish National Travel Survey (NTS) and merge it with administrative register data...

  3. Error propagation in energetic carrying capacity models

    Science.gov (United States)

    Pearse, Aaron T.; Stafford, Joshua D.

    2014-01-01

    Conservation objectives derived from carrying capacity models have been used to inform management of landscapes for wildlife populations. Energetic carrying capacity models are particularly useful in conservation planning for wildlife; these models use estimates of food abundance and energetic requirements of wildlife to target conservation actions. We provide a general method for incorporating a foraging threshold (i.e., density of food at which foraging becomes unprofitable) when estimating food availability with energetic carrying capacity models. We use a hypothetical example to describe how past methods for adjustment of foraging thresholds biased results of energetic carrying capacity models in certain instances. Adjusting foraging thresholds at the patch level of the species of interest provides results consistent with ecological foraging theory. Presentation of two case studies suggests variation in bias which, in certain instances, created large errors in conservation objectives and may have led to inefficient allocation of limited resources. Our results also illustrate how small errors or biases in application of input parameters, when extrapolated to large spatial extents, propagate errors in conservation planning and can have negative implications for target populations.
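
    The patch-level adjustment described above amounts to subtracting the foraging threshold from food density before converting to energy; a toy sketch with invented numbers:

```python
# Hedged sketch: energetic carrying capacity with a foraging (giving-up) threshold
# applied at the patch level. All values are illustrative placeholders.
def use_days(food_kg_per_ha, area_ha, threshold_kg_per_ha,
             energy_kj_per_kg, daily_requirement_kj):
    # Only food above the foraging threshold is treated as available to the population.
    available_kg = max(food_kg_per_ha - threshold_kg_per_ha, 0.0) * area_ha
    return available_kg * energy_kj_per_kg / daily_requirement_kj

print(use_days(food_kg_per_ha=120.0, area_ha=50.0, threshold_kg_per_ha=50.0,
               energy_kj_per_kg=14_000.0, daily_requirement_kj=1_200.0))
```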

  4. A Probabilistic Model for Reducing Medication Errors

    Science.gov (United States)

    Nguyen, Phung Anh; Syed-Abdul, Shabbir; Iqbal, Usman; Hsu, Min-Huei; Huang, Chen-Ling; Li, Hsien-Chang; Clinciu, Daniel Livius; Jian, Wen-Shan; Li, Yu-Chuan Jack

    2013-01-01

    Background Medication errors are common, life threatening, costly but preventable. Information technology and automated systems are highly efficient for preventing medication errors and therefore widely employed in hospital settings. The aim of this study was to construct a probabilistic model that can reduce medication errors by identifying uncommon or rare associations between medications and diseases. Methods and Finding(s) Association rules of mining techniques are utilized for 103.5 million prescriptions from Taiwan’s National Health Insurance database. The dataset included 204.5 million diagnoses with ICD9-CM codes and 347.7 million medications by using ATC codes. Disease-Medication (DM) and Medication-Medication (MM) associations were computed by their co-occurrence and associations’ strength were measured by the interestingness or lift values which were being referred as Q values. The DMQs and MMQs were used to develop the AOP model to predict the appropriateness of a given prescription. Validation of this model was done by comparing the results of evaluation performed by the AOP model and verified by human experts. The results showed 96% accuracy for appropriate and 45% accuracy for inappropriate prescriptions, with a sensitivity and specificity of 75.9% and 89.5%, respectively. Conclusions We successfully developed the AOP model as an efficient tool for automatic identification of uncommon or rare associations between disease-medication and medication-medication in prescriptions. The AOP model helps to reduce medication errors by alerting physicians, improving the patients’ safety and the overall quality of care. PMID:24312659

  5. Biomedical model fitting and error analysis.

    Science.gov (United States)

    Costa, Kevin D; Kleinstein, Steven H; Hershberg, Uri

    2011-09-20

    This Teaching Resource introduces students to curve fitting and error analysis; it is the second of two lectures on developing mathematical models of biomedical systems. The first focused on identifying, extracting, and converting required constants--such as kinetic rate constants--from experimental literature. To understand how such constants are determined from experimental data, this lecture introduces the principles and practice of fitting a mathematical model to a series of measurements. We emphasize using nonlinear models for fitting nonlinear data, avoiding problems associated with linearization schemes that can distort and misrepresent the data. To help ensure proper interpretation of model parameters estimated by inverse modeling, we describe a rigorous six-step process: (i) selecting an appropriate mathematical model; (ii) defining a "figure-of-merit" function that quantifies the error between the model and data; (iii) adjusting model parameters to get a "best fit" to the data; (iv) examining the "goodness of fit" to the data; (v) determining whether a much better fit is possible; and (vi) evaluating the accuracy of the best-fit parameter values. Implementation of the computational methods is based on MATLAB, with example programs provided that can be modified for particular applications. The problem set allows students to use these programs to develop practical experience with the inverse-modeling process in the context of determining the rates of cell proliferation and death for B lymphocytes using data from BrdU-labeling experiments.
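
    The Teaching Resource supplies MATLAB programs; an analogous sketch of several of those steps in Python, with a synthetic exponential-decay dataset and illustrative assumptions throughout, might look like this:

```python
# Hedged sketch: nonlinear model fitting with a chi-square figure of merit.
import numpy as np
from scipy.optimize import curve_fit

def model(t, n0, k):
    return n0 * np.exp(-k * t)             # (i) assumed model: exponential cell loss

rng = np.random.default_rng(3)
t = np.linspace(0.0, 10.0, 25)
y_true = model(t, 100.0, 0.35)
sigma = 0.05 * y_true + 1.0                # assumed measurement uncertainties
y = y_true + rng.normal(0.0, sigma)

# (ii)-(iii) curve_fit minimises the chi-square figure of merit sum(((y - model)/sigma)**2)
popt, pcov = curve_fit(model, t, y, p0=[80.0, 0.2], sigma=sigma, absolute_sigma=True)
perr = np.sqrt(np.diag(pcov))              # (vi) rough accuracy of the best-fit parameters

# (iv) goodness of fit: a reduced chi-square near 1 indicates an adequate fit
chi2_red = np.sum(((y - model(t, *popt)) / sigma) ** 2) / (len(t) - len(popt))
print(popt, perr, round(chi2_red, 2))
```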

  6. Application of an Error Statistics Estimation Method to the PSAS Forecast Error Covariance Model

    Institute of Scientific and Technical Information of China (English)

    2006-01-01

    In atmospheric data assimilation systems, the forecast error covariance model is an important component. However, the parameters required by a forecast error covariance model are difficult to obtain due to the absence of the truth. This study applies an error statistics estimation method to the Physical-space Statistical Analysis System (PSAS) height-wind forecast error covariance model. This method consists of two components: the first component computes the error statistics by using the National Meteorological Center (NMC) method, which is a lagged-forecast difference approach, within the framework of the PSAS height-wind forecast error covariance model; the second obtains a calibration formula to rescale the error standard deviations provided by the NMC method. The calibration is against the error statistics estimated by using a maximum-likelihood estimation (MLE) with rawindsonde height observed-minus-forecast residuals. A complete set of formulas for estimating the error statistics and for the calibration is applied to a one-month-long dataset generated by a general circulation model of the Global Model and Assimilation Office (GMAO), NASA. There is a clear constant relationship between the error statistics estimates of the NMC-method and MLE. The final product provides a full set of 6-hour error statistics required by the PSAS height-wind forecast error covariance model over the globe. The features of these error statistics are examined and discussed.
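
    A bare-bones sketch of the lagged-forecast (NMC) difference step followed by a rescaling factor standing in for the MLE-based calibration; the synthetic arrays and the 1/sqrt(2) convention are assumptions of this sketch, not necessarily the authors' choices.

```python
# Hedged sketch: forecast error standard deviations from 48-h minus 24-h forecast
# differences valid at the same times, with a simple calibration factor.
import numpy as np

def nmc_error_std(f48, f24, calibration=1.0):
    """f48, f24: arrays of shape (n_times, n_gridpoints) of forecasts valid at the
    same times; returns a per-gridpoint error standard deviation estimate."""
    diff = f48 - f24
    # The difference of two forecasts roughly double-counts the error variance,
    # hence the assumed 1/sqrt(2) factor; the calibration factor plays the role of
    # the MLE-based rescaling described in the record.
    return calibration * diff.std(axis=0, ddof=1) / np.sqrt(2.0)

rng = np.random.default_rng(4)
f24 = 5500.0 + 30.0 * rng.standard_normal((120, 10))   # synthetic 500-hPa heights [m]
f48 = f24 + 12.0 * rng.standard_normal((120, 10))
print(np.round(nmc_error_std(f48, f24, calibration=1.1), 1))
```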

  7. Error sensitivity analysis in 10-30-day extended range forecasting by using a nonlinear cross-prediction error model

    Science.gov (United States)

    Xia, Zhiye; Xu, Lisheng; Chen, Hongbin; Wang, Yongqian; Liu, Jinbao; Feng, Wenlan

    2017-06-01

    Extended range forecasting of 10-30 days, which lies between medium-term and climate prediction in terms of timescale, plays a significant role in decision-making processes for the prevention and mitigation of disastrous meteorological events. The sensitivity of initial error, model parameter error, and random error in a nonlinear cross-prediction error (NCPE) model, and their stability in the prediction validity period in 10-30-day extended range forecasting, are analyzed quantitatively. The associated sensitivity of precipitable water, temperature, and geopotential height during cases of heavy rain and hurricane is also discussed. The results are summarized as follows. First, the initial error and random error interact. When the ratio of random error to initial error is small (10^-6 to 10^-2), minor variation in random error cannot significantly change the dynamic features of a chaotic system, and therefore random error has minimal effect on the prediction. When the ratio is in the range of 10^-1 to 2 (i.e., random error dominates), attention should be paid to the random error instead of only the initial error. When the ratio is around 10^-2 to 10^-1, both influences must be considered. Their mutual effects may bring considerable uncertainty to extended range forecasting, and de-noising is therefore necessary. Second, in terms of model parameter error, the embedding dimension m should be determined by the factual nonlinear time series. The dynamic features of a chaotic system cannot be depicted because of the incomplete structure of the attractor when m is small. When m is large, prediction indicators can vanish because of the scarcity of phase points in phase space. A method for overcoming the cut-off effect (m > 4) is proposed. Third, for heavy rains, precipitable water is more sensitive to the prediction validity period than temperature or geopotential height; however, for hurricanes, geopotential height is most sensitive, followed by precipitable water.

  8. Error structure as a function of substrate and inhibitor concentration in enzyme kinetic experiments.

    Science.gov (United States)

    Mannervik, B; Jakobson, I; Warholm, M

    1986-01-01

    Optimal design of experiments as well as proper analysis of data are dependent on knowledge of the experimental error. A detailed analysis of the error structure of kinetic data obtained with acetylcholinesterase showed conclusively that the classical assumptions of constant absolute or constant relative error are inadequate for the dependent variable (velocity). The best mathematical models for the experimental error involved the substrate and inhibitor concentrations and reflected the rate law for the initial velocity. Data obtained with other enzymes displayed similar relationships between experimental error and the independent variables. The new empirical error functions were shown superior to previously used models when utilized in weighted non-linear-regression analysis of kinetic data. The results suggest that, in the spectrophotometric assays used in the present study, the observed experimental variance is primarily due to errors in determination of the concentrations of substrate and inhibitor and not to error in measuring the velocity. PMID:3753447

  9. Structured error recovery for code-word-stabilized quantum codes

    Science.gov (United States)

    Li, Yunfan; Dumer, Ilya; Grassl, Markus; Pryadko, Leonid P.

    2010-05-01

    Code-word-stabilized (CWS) codes are, in general, nonadditive quantum codes that can correct errors by an exhaustive search of different error patterns, similar to the way that we decode classical nonlinear codes. For an n-qubit quantum code correcting errors on up to t qubits, this brute-force approach consecutively tests different errors of weight t or less and employs a separate n-qubit measurement in each test. In this article, we suggest an error grouping technique that allows one to simultaneously test large groups of errors in a single measurement. This structured error recovery technique exponentially reduces the number of measurements by about 3^t times. While it still leaves exponentially many measurements for a generic CWS code, the technique is equivalent to syndrome-based recovery for the special case of additive CWS codes.

  10. Modeling Error in Quantitative Macro-Comparative Research

    Directory of Open Access Journals (Sweden)

    Salvatore J. Babones

    2015-08-01

    Full Text Available Much quantitative macro-comparative research (QMCR) relies on a common set of published data sources to answer similar research questions using a limited number of statistical tools. Since all researchers have access to much the same data, one might expect quick convergence of opinion on most topics. In reality, of course, differences of opinion abound and persist. Many of these differences can be traced, implicitly or explicitly, to the different ways researchers choose to model error in their analyses. Much careful attention has been paid in the political science literature to the error structures characteristic of time series cross-sectional (TSCE) data, but much less attention has been paid to the modeling of error in broadly cross-national research involving large panels of countries observed at limited numbers of time points. Here, and especially in the sociology literature, multilevel modeling has become a hegemonic – but often poorly understood – research tool. I argue that widely-used types of multilevel models, commonly known as fixed effects models (FEMs) and random effects models (REMs), can produce wildly spurious results when applied to trended data due to mis-specification of error. I suggest that in most commonly-encountered scenarios, difference models are more appropriate for use in QMCR.

  11. Hybrid Models for Trajectory Error Modelling in Urban Environments

    Science.gov (United States)

    Angelatsa, E.; Parés, M. E.; Colomina, I.

    2016-06-01

    This paper tackles the first step of any strategy aiming to improve the trajectory of terrestrial mobile mapping systems in urban environments. We present an approach to model the error of terrestrial mobile mapping trajectories, combining deterministic and stochastic models. Due to the specific urban environment, the deterministic component will be modelled with non-continuous functions composed of linear shifts, drifts or polynomial functions. In addition, we will introduce a stochastic error component for modelling the residual noise of the trajectory error function. The first step of error modelling requires knowing the actual trajectory error values for several representative environments. In order to determine the trajectory errors as accurately as possible, (almost) error-free trajectories should be estimated using non-semantic features extracted from a sequence of images collected with the terrestrial mobile mapping system and from a full set of ground control points. Once the references are estimated, they will be used to determine the actual errors in the terrestrial mobile mapping trajectory. The rigorous analysis of these data sets will allow us to characterize the errors of a terrestrial mobile mapping system for a wide range of environments. This information will be of great use in future campaigns to improve the results of the 3D point cloud generation. The proposed approach has been evaluated using real data. The data originate from a mobile mapping campaign over an urban and controlled area of Dortmund (Germany), with harmful GNSS conditions. The mobile mapping system, which includes two laser scanners and two cameras, was mounted on a van and driven over a controlled area for around three hours. The results show the suitability of decomposing the trajectory error into non-continuous deterministic and stochastic components.

  12. Structure determines medication errors in nursing units: a mechanistic approach.

    Science.gov (United States)

    Hung, Chang-Chiao; Lee, Bih-O; Tsai, Shu-Ling; Tseng, Yun Shan; Chang, Chia-Hao

    2015-03-01

    Medication errors have long been considered critical in global health care systems. However, few studies have been conducted to explore the effects of nursing unit structure on medication errors. The purpose of this study, therefore, was to determine the effects of structural factors on medication errors in nursing units. A total of 977 staff nurses and 62 head nurses participated in this cross-sectional design study. The findings show that professional autonomy (β = .53, t = 6.03, p nursing experts (β = .52, t = 5.99, p medication error rates. This study shows that the structural factors influence medication administration, and the mechanistic approach is specifically associated with low medication error rates. The author suggests that head nurses should consider strategies that require adjustments to unit control mechanisms.

  13. Analysis of error-prone survival data under additive hazards models: measurement error effects and adjustments.

    Science.gov (United States)

    Yan, Ying; Yi, Grace Y

    2016-07-01

    Covariate measurement error occurs commonly in survival analysis. Under the proportional hazards model, measurement error effects have been well studied, and various inference methods have been developed to correct for error effects under such a model. In contrast, error-contaminated survival data under the additive hazards model have received relatively less attention. In this paper, we investigate this problem by exploring measurement error effects on parameter estimation and the change of the hazard function. New insights of measurement error effects are revealed, as opposed to well-documented results for the Cox proportional hazards model. We propose a class of bias correction estimators that embraces certain existing estimators as special cases. In addition, we exploit the regression calibration method to reduce measurement error effects. Theoretical results for the developed methods are established, and numerical assessments are conducted to illustrate the finite sample performance of our methods.

  14. Modeling human response errors in synthetic flight simulator domain

    Science.gov (United States)

    Ntuen, Celestine A.

    1992-01-01

    This paper presents a control theoretic approach to modeling human response errors (HRE) in the flight simulation domain. The human pilot is modeled as a supervisor of a highly automated system. The synthesis uses the theory of optimal control pilot modeling for integrating the pilot's observation error and the error due to the simulation model (experimental error). Methods for solving the HRE problem are suggested. Experimental verification of the models will be tested in a flight quality handling simulation.

  15. Error field and magnetic diagnostic modeling for W7-X

    Energy Technology Data Exchange (ETDEWEB)

    Lazerson, Sam A. [PPPL; Gates, David A. [PPPL; NEILSON, GEORGE H. [PPPL; OTTE, M.; Bozhenkov, S.; Pedersen, T. S.; GEIGER, J.; LORE, J.

    2014-07-01

    The prediction, detection, and compensation of error fields for the W7-X device will play a key role in achieving a high beta (β = 5%), steady state (30 minute pulse) operating regime utilizing the island divertor system [1]. Additionally, detection and control of the equilibrium magnetic structure in the scrape-off layer will be necessary in the long-pulse campaign as bootstrap current evolution may result in poor edge magnetic structure [2]. An SVD analysis of the magnetic diagnostics set indicates an ability to measure the toroidal current and stored energy, while profile variations go undetected in the magnetic diagnostics. An additional set of magnetic diagnostics is proposed which improves the ability to constrain the equilibrium current and pressure profiles. However, even with the ability to accurately measure equilibrium parameters, the presence of error fields can modify both the plasma response and divertor magnetic field structures in unfavorable ways. Vacuum flux surface mapping experiments allow for direct measurement of these modifications to magnetic structure. The ability to conduct such an experiment is a unique feature of stellarators. The trim coils may then be used to forward model the effect of an applied n = 1 error field. This allows the determination of lower limits for the detection of error field amplitude and phase using flux surface mapping. *Research supported by the U.S. DOE under Contract No. DE-AC02-09CH11466 with Princeton University.

  16. System modeling based measurement error analysis of digital sun sensors

    Institute of Scientific and Technical Information of China (English)

    WEI Minsong; XING Fei; WANG Geng; YOU Zheng

    2015-01-01

    Stringent attitude determination accuracy is required for the development of the advanced space technologies and thus the accuracy improvement of digital sun sensors is necessary. In this paper, we presented a proposal for measurement error analysis of a digital sun sensor. A system modeling including three different error sources was built and employed for system error analysis. Numerical simulations were also conducted to study the measurement error introduced by different sources of error. Based on our model and study, the system errors from different error sources are coupled and the system calibration should be elaborately designed to realize a digital sun sensor with extra-high accuracy.

  17. Cosmic error and the statistics of large scale structure

    CERN Document Server

    Szapudi, I; Szapudi, Istvan; Colombi, Stephane

    1995-01-01

    We examine the errors on counts in cells extracted from galaxy surveys. The measurement error, related to the finite number of sampling cells, is disentangled from the "cosmic error", due to the finiteness of the survey. Using the hierarchical model and assuming locally Poisson behavior, we identified three contributions to the cosmic error: The finite volume effect is proportional to the average of the two-point correlation function over the whole survey. It accounts for possible fluctuations of the density field at scales larger than the sample size. The edge effect is related to the geometry of the survey. It accounts for the fact that objects near the boundary carry less statistical weight than those further away from it. The discreteness effect is due to the fact that the underlying smooth random field is sampled with a finite number of objects. This is the "shot noise" error. Measurements of errors in artificial hierarchical samples showed excellent agreement with our predictions. The probability dist...

  18. A statistical model for point-based target registration error with anisotropic fiducial localizer error.

    Science.gov (United States)

    Wiles, Andrew D; Likholyot, Alexander; Frantz, Donald D; Peters, Terry M

    2008-03-01

    Error models associated with point-based medical image registration problems were first introduced in the late 1990s. The concepts of fiducial localizer error, fiducial registration error, and target registration error are commonly used in the literature. The model for estimating the target registration error at a position r in a coordinate frame defined by a set of fiducial markers rigidly fixed relative to one another is ubiquitous in the medical imaging literature. The model has also been extended to simulate the target registration error at the point of interest in optically tracked tools. However, the model is limited to describing the error in situations where the fiducial localizer error is assumed to have an isotropic normal distribution in R^3. In this work, the model is generalized to include a fiducial localizer error that has an anisotropic normal distribution. Similar to the previous models, the root mean square statistic rms tre is provided along with an extension that provides the covariance Sigma tre. The new model is verified using a Monte Carlo simulation and a set of statistical hypothesis tests. Finally, the differences between the two assumptions, isotropic and anisotropic, are discussed within the context of their use in 1) optical tool tracking simulation and 2) image registration.
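
    The Monte Carlo verification mentioned in the record can be sketched as follows, with an invented fiducial layout, target point, and anisotropic FLE standard deviations: perturb the fiducials, refit the rigid registration, and accumulate the displacement of the target.

```python
# Hedged sketch: Monte Carlo RMS target registration error (TRE) under anisotropic
# fiducial localizer error (FLE). The geometry and noise levels are invented.
import numpy as np

rng = np.random.default_rng(5)
fiducials = np.array([[0, 0, 0], [60, 0, 0], [0, 60, 0], [0, 0, 60]], dtype=float)  # mm
target = np.array([30.0, 30.0, 120.0])
fle_std = np.array([0.1, 0.1, 0.4])       # anisotropic: worse along the camera view axis

def rigid_fit(src, dst):
    """Least-squares rigid transform (Kabsch/Arun) mapping src onto dst."""
    cs, cd = src.mean(0), dst.mean(0)
    u, _, vt = np.linalg.svd((src - cs).T @ (dst - cd))
    d = np.sign(np.linalg.det(vt.T @ u.T))
    R = vt.T @ np.diag([1.0, 1.0, d]) @ u.T
    return R, cd - R @ cs

tre = []
for _ in range(5_000):
    measured = fiducials + rng.normal(0.0, fle_std, fiducials.shape)
    R, t = rigid_fit(fiducials, measured)
    tre.append(np.linalg.norm(R @ target + t - target))
print("RMS TRE [mm]:", round(np.sqrt(np.mean(np.square(tre))), 3))
```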

  19. Cognitive modelling of pilot errors and error recovery in flight management tasks

    NARCIS (Netherlands)

    Lüdtke, A.; Osterloh, J.P.; Mioch, T.; Rister, F.; Looije, R.

    2009-01-01

    This paper presents a cognitive modelling approach to predict pilot errors and error recovery during the interaction with aircraft cockpit systems. The model allows execution of flight procedures in a virtual simulation environment and production of simulation traces. We present traces for the inter

  20. Programming errors in traversal programs over structured data

    CERN Document Server

    Laemmel, Ralf; Kaiser, Markus

    2012-01-01

    Traversal strategies à la Stratego (also à la Strafunski and 'Scrap Your Boilerplate') provide an exceptionally versatile and uniform means of querying and transforming deeply nested and heterogeneously structured data including terms in functional programming and rewriting, objects in OO programming, and XML documents in XML programming. However, the resulting traversal programs are prone to programming errors. We are specifically concerned with errors that go beyond conservative type errors; examples we examine include divergent traversals, prematurely terminated traversals, and traversals with dead code. Based on an inventory of possible programming errors we explore options of static typing and static analysis so that some categories of errors can be avoided. This exploration generates suggestions for improvements to strategy libraries as well as their underlying programming languages. Haskell is used for illustrations and specifications with sufficient explanations to make the presentation comprehens...

  1. Radiation risk estimation based on measurement error models

    CERN Document Server

    Masiuk, Sergii; Shklyar, Sergiy; Chepurny, Mykola; Likhtarov, Illya

    2017-01-01

    This monograph discusses statistics and risk estimates applied to radiation damage under the presence of measurement errors. The first part covers nonlinear measurement error models, with a particular emphasis on efficiency of regression parameter estimators. In the second part, risk estimation in models with measurement errors is considered. Efficiency of the methods presented is verified using data from radio-epidemiological studies.

  2. Analysis of offset error for segmented micro-structure optical element based on optical diffraction theory

    Science.gov (United States)

    Su, Jinyan; Wu, Shibin; Yang, Wei; Wang, Lihua

    2016-10-01

    Micro-structure optical elements are gradually being applied in modern optical systems due to characteristics such as light weight, ease of replication, high diffraction efficiency and many design variables. The Fresnel lens is a typical micro-structure optical element, so in this paper we take the Fresnel lens as the basis of our research. An analytic solution to the Point Spread Function (PSF) of the segmented Fresnel lens is derived based on the theory of optical diffraction, and the mathematical simulation model is established. We then take a segmented Fresnel lens with 5 sub-mirrors as an example. In order to analyze the influence of different offset errors on the system's far-field image quality, we obtain the analytic solution to the PSF of the system under different offset errors by using the Fourier transform. The result shows that the translation errors along the XYZ axes and the tilt errors around the XY axes introduce phase errors which affect the imaging quality of the system. The translation errors along the XYZ axes have a linear relationship with the corresponding phase errors, and the tilt errors around the XY axes have a trigonometric-function relationship with the corresponding phase errors. In addition, the standard deviations of the translation errors along the XY axes have a quadratic nonlinear relationship with the system's Strehl ratio. Finally, the tolerances of the different offset errors are obtained according to the Strehl criterion.

  3. Dynamic Error Analysis Method for Vibration Shape Reconstruction of Smart FBG Plate Structure

    Directory of Open Access Journals (Sweden)

    Hesheng Zhang

    2016-01-01

    Full Text Available Shape reconstruction of aerospace plate structures is an important issue for the safe operation of aerospace vehicles. One way to achieve such reconstruction is by constructing a smart fiber Bragg grating (FBG) plate structure with discretely distributed FBG sensor arrays and using reconstruction algorithms, in which error analysis of the reconstruction algorithm is a key link. Considering that traditional error analysis methods can only deal with static data, a new dynamic data error analysis method is proposed based on the LMS algorithm for shape reconstruction of the smart FBG plate structure. Firstly, the smart FBG structure and the orthogonal curved network based reconstruction method are introduced. Then, a dynamic error analysis model is proposed for dynamic reconstruction error analysis. Thirdly, parameter identification is done for the proposed dynamic error analysis model based on the least mean square (LMS) algorithm. Finally, an experimental verification platform is constructed and experimental dynamic reconstruction analysis is done. Experimental results show that the dynamic characteristics of the reconstruction performance for the plate structure can be obtained accurately based on the proposed dynamic error analysis method. The proposed method can also be used for other data acquisition systems and data processing systems as a general error analysis method.
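
    A bare-bones sketch of the LMS identification step named above, run on synthetic signals rather than the FBG-plate measurements; the filter order, step size, and "true" system are assumptions for illustration.

```python
# Hedged sketch: least-mean-square (LMS) identification of a short FIR error model.
import numpy as np

def lms_identify(u, d, order=4, mu=0.05):
    """u: excitation signal, d: measured (desired) signal.
    Returns identified weights and the residual error history."""
    w = np.zeros(order)
    e_hist = np.zeros(len(d))
    for n in range(order - 1, len(d)):
        x = u[n - order + 1:n + 1][::-1]   # most recent sample first
        e = d[n] - w @ x                   # instantaneous model error
        w += 2.0 * mu * e * x              # LMS weight update
        e_hist[n] = e
    return w, e_hist

rng = np.random.default_rng(6)
u = rng.standard_normal(3_000)
true_h = np.array([0.6, -0.3, 0.1])        # assumed "true" dynamic error model
d = np.convolve(u, true_h)[:len(u)] + 0.01 * rng.standard_normal(len(u))
w, e = lms_identify(u, d, order=4, mu=0.05)
print("identified weights:", np.round(w, 3))   # should approach [0.6, -0.3, 0.1, 0.0]
```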

  4. Approximation error in PDE-based modelling of vehicular platoons

    Science.gov (United States)

    Hao, He; Barooah, Prabir

    2012-08-01

    We study the problem of how much error is introduced in approximating the dynamics of a large vehicular platoon by using a partial differential equation, as was done in Barooah, Mehta, and Hespanha [Barooah, P., Mehta, P.G., and Hespanha, J.P. (2009), 'Mistuning-based Decentralised Control of Vehicular Platoons for Improved Closed Loop Stability', IEEE Transactions on Automatic Control, 54, 2100-2113], Hao, Barooah, and Mehta [Hao, H., Barooah, P., and Mehta, P.G. (2011), 'Stability Margin Scaling Laws of Distributed Formation Control as a Function of Network Structure', IEEE Transactions on Automatic Control, 56, 923-929]. In particular, we examine the difference between the stability margins of the coupled-ordinary differential equations (ODE) model and its partial differential equation (PDE) approximation, which we call the approximation error. The stability margin is defined as the absolute value of the real part of the least stable pole. The PDE model has proved useful in the design of distributed control schemes (Barooah et al. 2009; Hao et al. 2011); it provides insight into the effect of gains of local controllers on the closed-loop stability margin that is lacking in the coupled-ODE model. Here we show that the ratio of the approximation error to the stability margin is O(1/N), where N is the number of vehicles. Thus, the PDE model is an accurate approximation of the coupled-ODE model when N is large. Numerical computations are provided to corroborate the analysis.

  5. VOLUMETRIC ERROR COMPENSATION IN FIVE-AXIS CNC MACHINING CENTER THROUGH KINEMATICS MODELING OF GEOMETRIC ERROR

    Directory of Open Access Journals (Sweden)

    Pooyan Vahidi Pashsaki

    2016-06-01

    Full Text Available Accuracy of a five-axis CNC machine tool is affected by a vast number of error sources. This paper investigates volumetric error modeling and its compensation as the basis for the creation of new tool paths to improve workpiece accuracy. The volumetric error model of a five-axis machine tool with the configuration RTTTR (tilting head B-axis and rotary table A′ on the workpiece side) was set up taking into consideration rigid body kinematics and homogeneous transformation matrices, in which 43 error components are included. The volumetric error comprises 43 error components that can separately reduce the geometrical and dimensional accuracy of workpieces. The machining accuracy of a workpiece is guaranteed by the position of the cutting tool center point (TCP) relative to the workpiece. Because of these errors, the cutting tool deviates from its ideal position relative to the workpiece and a machining error is experienced. For the compensation process, detection of the present tool path, analysis of the geometric errors of the RTTTR five-axis CNC machine tool, translation of the current component positions to compensated positions using the kinematic error model, conversion of the newly created components to new tool paths using the compensation algorithms, and finally editing of the old G-codes using a G-code generator algorithm have been employed.
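
    The kinematic-chain idea behind the 43-component model can be sketched with just two axes and three error terms; the axis strokes, error magnitudes, and the chain itself are invented and far simpler than the RTTTR configuration.

```python
# Hedged sketch: homogeneous transformation matrices for an ideal vs. error-perturbed
# kinematic chain; the TCP displacement between the two is the volumetric error.
import numpy as np

def trans(x=0.0, y=0.0, z=0.0):
    T = np.eye(4)
    T[:3, 3] = [x, y, z]
    return T

def rot_small(a=0.0, b=0.0, c=0.0):
    """Small-angle rotation errors about x, y, z (radians)."""
    T = np.eye(4)
    T[:3, :3] = [[1.0, -c, b], [c, 1.0, -a], [-b, a, 1.0]]
    return T

def tcp_position(dx=0.0, dz=0.0, eb=0.0):
    # Nominal X move of 200 mm with positioning error dx and angular error eb,
    # followed by a nominal Z move of -150 mm with positioning error dz.
    chain = trans(x=200.0) @ trans(x=dx) @ rot_small(b=eb) @ trans(z=-150.0) @ trans(z=dz)
    return (chain @ np.array([0.0, 0.0, 0.0, 1.0]))[:3]

ideal = tcp_position()
actual = tcp_position(dx=0.02, dz=-0.015, eb=50e-6)
print("volumetric error [mm]:", np.round(actual - ideal, 4))
# A compensated tool path would be shifted by the negative of this deviation.
```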

  6. Investigating Experimental Effects within the Framework of Structural Equation Modeling: An Example with Effects on Both Error Scores and Reaction Times

    Science.gov (United States)

    Schweizer, Karl

    2008-01-01

    Structural equation modeling provides the framework for investigating experimental effects on the basis of variances and covariances in repeated measurements. A special type of confirmatory factor analysis as part of this framework enables the appropriate representation of the experimental effect and the separation of experimental and…

  7. Performance Assessment of Hydrological Models Considering Acceptable Forecast Error Threshold

    Directory of Open Access Journals (Sweden)

    Qianjin Dong

    2015-11-01

    Full Text Available It is essential to consider an acceptable threshold in the assessment of a hydrological model, both because research on this question is scarce in the hydrology community and because errors do not necessarily cause risk. Two forecast errors, the rainfall forecast error and the peak flood forecast error, have been studied based on reliability theory. The first order second moment (FOSM) and bound methods are used to identify the reliability. Through the case study of the Dahuofang (DHF) Reservoir, it is shown that the correlation between these two errors has a great influence on the reliability index of the hydrological model. In particular, the reliability index of the DHF hydrological model decreases with increasing correlation. Based on reliability theory, the proposed performance evaluation framework incorporating the acceptable forecast error threshold and the correlation among multiple errors can be used to evaluate the performance of a hydrological model and to quantify the uncertainties of a hydrological model output.
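
    The FOSM reliability index referenced above can be illustrated with a linearised limit state and a correlated pair of forecast errors (all numbers invented); as in the record's finding, the index drops as the correlation between the two errors grows.

```python
# Hedged sketch: FOSM reliability index for a linearised limit state with two correlated
# forecast errors (rainfall and peak flood). Values are illustrative only.
import numpy as np

def fosm_beta(g_at_mean, grad, cov):
    """beta = g(mean) / sqrt(grad' C grad) for limit state g ~ g(mean) + grad'(x - mean)."""
    return g_at_mean / np.sqrt(grad @ cov @ grad)

std = np.array([0.15, 0.20])          # assumed error standard deviations (relative)
grad = np.array([-0.8, -1.0])         # assumed sensitivities of the limit state to each error
g_at_mean = 0.30                      # assumed margin below the acceptable threshold

for rho in (0.0, 0.3, 0.6, 0.9):
    cov = np.outer(std, std) * np.array([[1.0, rho], [rho, 1.0]])
    print(f"rho = {rho:.1f}  ->  beta = {fosm_beta(g_at_mean, grad, cov):.2f}")
```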

  8. Probe Error Modeling Research Based on Bayesian Network

    Institute of Scientific and Technical Information of China (English)

    Wu Huaiqiang; Xing Zilong; Zhang Jian; Yan Yan

    2015-01-01

    Probe calibration is carried out under specific conditions, so most of the error caused by changes in the speed parameter is left uncorrected. In order to reduce the influence of this error on measurement accuracy, this article analyzes the relationship between the speed parameter and the probe error and uses a Bayesian network to establish a probe error model. The model combines prior knowledge with sample data; as new data arrive, it reflects the changing probe errors and continually revises the modeling results.

  9. PRESAGE: Protecting Structured Address Generation against Soft Errors

    Energy Technology Data Exchange (ETDEWEB)

    Sharma, Vishal C.; Gopalakrishnan, Ganesh; Krishnamoorthy, Sriram

    2016-12-28

    Modern computer scaling trends in pursuit of larger component counts and power efficiency have, unfortunately, led to less reliable hardware and consequently soft errors escaping into application data ("silent data corruptions"). Techniques to enhance system resilience hinge on the availability of efficient error detectors that have high detection rates, low false positive rates, and low computational overhead. Unfortunately, efficient detectors to detect faults during address generation have not been widely researched (especially in the context of indexing large arrays). We present a novel lightweight compiler-driven technique called PRESAGE for detecting bit-flips affecting structured address computations. A key insight underlying PRESAGE is that any address computation scheme that propagates an already incurred error is better than a scheme that corrupts one particular array access but otherwise (falsely) appears to compute perfectly. Ensuring the propagation of errors allows one to place detectors at loop exit points and helps turn silent corruptions into easily detectable error situations. Our experiments using the PolyBench benchmark suite indicate that PRESAGE-based error detectors have a high error-detection rate while incurring low overheads.

  10. Bias Errors in Measurement of Vibratory Power and Implication for Active Control of Structural Vibration

    DEFF Research Database (Denmark)

    Ohlrich, Mogens; Henriksen, Eigil; Laugesen, Søren

    1997-01-01

    …control of vibratory power transmission into structures. This is demonstrated by computer simulations using a theoretical model of a beam structure which is driven by one primary source and two control sources. These simulations reveal the influence of residual errors on power measurements…, and the limitations imposed in active control of structural vibration based upon a strategy of power minimisation…

  11. Deterministic treatment of model error in geophysical data assimilation

    CERN Document Server

    Carrassi, Alberto

    2015-01-01

    This chapter describes a novel approach for the treatment of model error in geophysical data assimilation. In this method, model error is treated as a deterministic process fully correlated in time. This allows for the derivation of the evolution equations for the relevant moments of the model error statistics required in data assimilation procedures, along with an approximation suitable for application to large numerical models typical of environmental science. In this contribution we first derive the equations for the model error dynamics in the general case, and then for the particular situation of parametric error. We show how this deterministic description of the model error can be incorporated in sequential and variational data assimilation procedures. A numerical comparison with standard methods is given using low-order dynamical systems, prototypes of atmospheric circulation, and a realistic soil model. The deterministic approach proves to be very competitive with only minor additional computational c...

  12. An error assessment of the kriging based approximation model using a mean square error

    Energy Technology Data Exchange (ETDEWEB)

    Ju, Byeong Hyeon; Cho, Tae Min; Lee, Byung Chai [Korea Advanced Institute of Science and Technology, Daejeon (Korea, Republic of); Jung, Do Hyun [Korea Automotive Technology Institute, Chonan (Korea, Republic of)

    2006-08-15

    A kriging model is a type of approximation model used as a deterministic surrogate for a computationally expensive analysis or simulation. Although it has various advantages, it is difficult to assess the accuracy of the approximation. It is generally known that, unlike a response surface method, the Mean Square Error (MSE) obtained from a kriging model does not provide statistically exact error bounds, so cross validation is mainly used instead. However, cross validation also carries many uncertainties, and it cannot be used when a maximum error over a given region is required. To address this problem, we first propose a modified mean square error that can account for relative errors. Using the modified mean square error, we develop a strategy of adding a new sample at the location where the MSE is largest when the MSE is used to assess the kriging model. Finally, we offer guidelines for the use of the MSE obtained from the kriging model. Four test problems show that the proposed strategy is a proper method for assessing the accuracy of a kriging model. Based on the results of the four test problems, a convergence coefficient of 0.01 is recommended for an exact function approximation.
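
    The "add a sample where the MSE is largest" idea can be illustrated with a toy one-dimensional kriging (Gaussian-process) surrogate; the Gaussian correlation model, unit prior variance, and the test function below are assumptions for illustration only and do not reproduce the authors' modified MSE:

        import numpy as np

        def kriging(x_train, y_train, x_test, length=0.2, noise=1e-8):
            """Simple-kriging (GP regression) mean and mean square error with a
            Gaussian correlation model and unit prior variance (illustrative)."""
            k = lambda a, b: np.exp(-((a[:, None] - b[None, :]) / length) ** 2)
            K = k(x_train, x_train) + noise * np.eye(len(x_train))
            Ks = k(x_train, x_test)
            w = np.linalg.solve(K, Ks)
            mean = w.T @ y_train
            mse = 1.0 - np.sum(Ks * w, axis=0)
            return mean, np.maximum(mse, 0.0)

        f = lambda x: np.sin(8 * x) + 0.3 * x        # stand-in for an expensive model
        x = np.array([0.0, 0.3, 0.7, 1.0])           # initial samples
        x_test = np.linspace(0.0, 1.0, 201)

        for it in range(5):                          # infill where the MSE peaks
            mean, mse = kriging(x, f(x), x_test)
            x_new = x_test[np.argmax(mse)]
            x = np.append(x, x_new)
            print(it, x_new, mse.max())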

  13. Influence of model errors in optimal sensor placement

    Science.gov (United States)

    Vincenzi, Loris; Simonini, Laura

    2017-02-01

    The paper investigates the role of model errors and parametric uncertainties in optimal or near optimal sensor placements for structural health monitoring (SHM) and modal testing. The near optimal set of measurement locations is obtained by the Information Entropy theory; the results of the placement process depend considerably on the so-called covariance matrix of prediction error as well as on the definition of the correlation function. A constant and an exponential correlation function depending on the distance between sensors are first assumed; then a proposal depending on both distance and modal vectors is presented. With reference to a simple case study, the effect of model uncertainties on results is described, and the reliability and robustness of the proposed correlation function in the case of model errors are tested with reference to 2D and 3D benchmark case studies. A measure of the quality of the obtained sensor configuration is considered through the use of independent assessment criteria. In conclusion, the results obtained by applying the proposed procedure to a real five-span steel footbridge are described. The proposed method also allows higher modes to be estimated more accurately when the number of sensors is greater than the number of modes of interest. In addition, the results show a smaller variation in the sensor positions when uncertainties occur.

  14. Error Model of Curves in GIS and Digitization Experiment

    Institute of Scientific and Technical Information of China (English)

    GUO Tongde; WANG Jiayao; WANG Guangxia

    2006-01-01

    A stochastic error process of curves is proposed as the error model to describe the errors of curves in GIS. In terms of the stochastic process, four characteristics concerning the local error of curves, namely the mean error function, standard error function, absolute error function, and the correlation function of errors, are put forward. The total error of a curve is expressed by a mean square integral of the stochastic error process. The probabilistic and geometric meanings of the characteristics mentioned above are also discussed. A scan digitization experiment is designed to check the efficiency of the model. In the experiment, a piece of contour line is digitized more than 100 times and many sample functions are derived from the experiment. Finally, all the error characteristics are estimated on the basis of the sample functions. The experimental results show that the systematic error in digitized map data is not negligible, and the errors of points on curves depend chiefly on the curvature and the concavity of the curves.

  15. Error rate information in attention allocation pilot models

    Science.gov (United States)

    Faulkner, W. H.; Onstott, E. D.

    1977-01-01

    The Northrop urgency decision pilot model was used in a command tracking task to compare the optimized performance of multiaxis attention allocation pilot models whose urgency functions were (1) based on tracking error alone, and (2) based on both tracking error and error rate. A matrix of system dynamics and command inputs was employed to create both symmetric and asymmetric two-axis compensatory tracking tasks. All tasks were single loop on each axis. Analysis showed that a model that allocates control attention through nonlinear urgency functions using only error information could not achieve the performance of the full model, whose attention-shifting algorithm included both error and error rate terms. Subsequent to this analysis, tracking performance predictions for the full model were verified by piloted flight simulation. Complete model and simulation data are presented.

  16. Using Laser Scanners to Augment the Systematic Error Pointing Model

    Science.gov (United States)

    Wernicke, D. R.

    2016-08-01

    The antennas of the Deep Space Network (DSN) rely on precise pointing algorithms to communicate with spacecraft that are billions of miles away. Although the existing systematic error pointing model is effective at reducing blind pointing errors due to static misalignments, several of its terms have a strong dependence on seasonal and even daily thermal variation and are thus not easily modeled. Changes in the thermal state of the structure create a separation from the model and introduce a varying pointing offset. Compensating for this varying offset is possible by augmenting the pointing model with laser scanners. In this approach, laser scanners mounted to the alidade measure structural displacements while a series of transformations generate correction angles. Two sets of experiments were conducted in August 2015 using commercially available laser scanners. When compared with historical monopulse corrections under similar conditions, the computed corrections are within 3 mdeg of the mean. However, although the results show promise, several key challenges relating to the sensitivity of the optical equipment to sunlight render an implementation of this approach impractical. Other measurement devices such as inclinometers may be implementable at a significantly lower cost.

  17. Performance analysis of FXLMS algorithm with secondary path modeling error

    Institute of Scientific and Technical Information of China (English)

    SUN Xu; CHEN Duanshi

    2003-01-01

    Performance analysis of the filtered-X LMS (FXLMS) algorithm with secondary-path modeling error is carried out in both the time and frequency domains. It is first shown that the effect of secondary-path modeling error on the performance of the FXLMS algorithm is determined by how the relative error of the secondary-path model is distributed over frequency; if the relative error is uniformly distributed, the secondary-path modeling error has no effect on the performance of the algorithm. In addition, a limitation property of the FXLMS algorithm is proved, which implies that the negative effects of secondary-path modeling error can be compensated by increasing the adaptive filter length. Finally, some insights into the "spillover" phenomenon of the FXLMS algorithm are given.
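
    For readers unfamiliar with the algorithm, a toy FXLMS loop with an imperfect secondary-path model is sketched below; the FIR paths, filter length, and step size are arbitrary illustrative choices, not taken from the paper:

        import numpy as np

        rng = np.random.default_rng(0)
        s_true = np.array([0.0, 0.8, 0.3])            # true secondary path (toy FIR)
        s_hat  = np.array([0.0, 0.7, 0.4])            # imperfect secondary-path model
        p      = np.array([0.0, 0.0, 0.5, 0.4, 0.2])  # primary path (toy FIR)
        L, mu  = 16, 0.01                             # adaptive filter length, step size

        w     = np.zeros(L)                           # control filter weights
        xbuf  = np.zeros(L)                           # reference signal buffer
        ybuf  = np.zeros(len(s_true))                 # control output buffer
        fxbuf = np.zeros(L)                           # filtered-reference buffer
        xs    = np.zeros(len(s_hat))                  # x filtered through s_hat
        pbuf  = np.zeros(len(p))
        err2  = []

        for n in range(20000):
            x = rng.standard_normal()
            xbuf  = np.roll(xbuf, 1);  xbuf[0]  = x
            pbuf  = np.roll(pbuf, 1);  pbuf[0]  = x
            d = p @ pbuf                              # disturbance at the error sensor
            y = w @ xbuf                              # control filter output
            ybuf  = np.roll(ybuf, 1);  ybuf[0]  = y
            e = d - s_true @ ybuf                     # residual error signal
            xs    = np.roll(xs, 1);    xs[0]    = x
            fx = s_hat @ xs                           # reference filtered by the *model*
            fxbuf = np.roll(fxbuf, 1); fxbuf[0] = fx
            w += mu * e * fxbuf                       # FXLMS weight update
            err2.append(e * e)

        print("mean squared error, first vs last 1000 samples:",
              np.mean(err2[:1000]), np.mean(err2[-1000:]))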

  18. Tone, Syllable Structure and Interlanguage Phonology: Chinese Learners' Stress Errors.

    Science.gov (United States)

    Juffs, Alan

    1990-01-01

    Analysis of Chinese undergraduates' oral production of English lexical items they had already been exposed to found that factors affecting both placement and phonetic stress errors involved native language patterns, syllable structure, and the lexical item's position in the rhythmic sequence. (45 references) (CB)

  19. On the Correspondence between Mean Forecast Errors and Climate Errors in CMIP5 Models

    Energy Technology Data Exchange (ETDEWEB)

    Ma, H. -Y.; Xie, S.; Klein, S. A.; Williams, K. D.; Boyle, J. S.; Bony, S.; Douville, H.; Fermepin, S.; Medeiros, B.; Tyteca, S.; Watanabe, M.; Williamson, D.

    2014-02-01

    The present study examines the correspondence between short- and long-term systematic errors in five atmospheric models by comparing the 16 five-day hindcast ensembles from the Transpose Atmospheric Model Intercomparison Project II (Transpose-AMIP II) for July–August 2009 (short term) to the climate simulations from phase 5 of the Coupled Model Intercomparison Project (CMIP5) and AMIP for the June–August mean conditions of the years of 1979–2008 (long term). Because the short-term hindcasts were conducted with identical climate models used in the CMIP5/AMIP simulations, one can diagnose over what time scale systematic errors in these climate simulations develop, thus yielding insights into their origin through a seamless modeling approach. The analysis suggests that most systematic errors of precipitation, clouds, and radiation processes in the long-term climate runs are present by day 5 in ensemble average hindcasts in all models. Errors typically saturate after few days of hindcasts with amplitudes comparable to the climate errors, and the impacts of initial conditions on the simulated ensemble mean errors are relatively small. This robust bias correspondence suggests that these systematic errors across different models likely are initiated by model parameterizations since the atmospheric large-scale states remain close to observations in the first 2–3 days. However, biases associated with model physics can have impacts on the large-scale states by day 5, such as zonal winds, 2-m temperature, and sea level pressure, and the analysis further indicates a good correspondence between short- and long-term biases for these large-scale states. Therefore, improving individual model parameterizations in the hindcast mode could lead to the improvement of most climate models in simulating their climate mean state and potentially their future projections.

  20. The effect of uncertainty and systematic errors in hydrological modelling

    Science.gov (United States)

    Steinsland, I.; Engeland, K.; Johansen, S. S.; Øverleir-Petersen, A.; Kolberg, S. A.

    2014-12-01

    The aims of hydrological model identification and calibration are to find the best possible set of process parametrizations and parameter values that transform inputs (e.g. precipitation and temperature) into outputs (e.g. streamflow). These models enable us to make predictions of streamflow. Several sources of uncertainty have the potential to hamper robust model calibration and identification. In order to grasp the interaction between model parameters, inputs and streamflow, it is important to account for both systematic and random errors in the inputs (e.g. precipitation and temperature) and in the streamflows. By random errors we mean errors that are independent from time step to time step, whereas by systematic errors we mean errors that persist for a longer period. Both random and systematic errors are important in the observation and interpolation of precipitation and temperature inputs. Important random errors come from the measurements themselves and from the network of gauges. Important systematic errors originate from the under-catch in precipitation gauges and from unknown spatial trends that are approximated in the interpolation. For streamflow observations, the water level recordings might give random errors, whereas the rating curve contributes mainly a systematic error. In this study we want to answer the question "What is the effect of random and systematic errors in inputs and observed streamflow on estimated model parameters and streamflow predictions?". To answer it, we systematically test the effect of including uncertainties in inputs and streamflow during model calibration and simulation in the distributed HBV model operating on daily time steps for the Osali catchment in Norway. The case study is based on observations with carefully quantified uncertainty, and increased uncertainties and systematic errors are introduced realistically, for example by removing a precipitation gauge from the network. We find that the systematic errors in

  1. The effect of model errors in variational assimilation

    Science.gov (United States)

    Wergen, Werner

    1992-08-01

    A linearized, one-dimensional shallow water model is used to investigate the effect of model errors in four-dimensional variational assimilation. A suitable initialization scheme for variational assimilation is proposed. Introducing deliberate phase speed errors in the model, the results from variational assimilation are compared to standard analysis/forecast cycle experiments. While the latter draws to the data and reflects the model errors only in the data-void areas, variational assimilation with the model used as strong constraint is shown to distribute the model errors over the entire analysis domain. The implications for verification and diagnostics are discussed. Temporal weighting of the observations can reduce the errors towards the end of the assimilation period, but may deteriorate the subsequent forecasts. An extension to variational assimilation is proposed, which seeks to determine from the observations not only the initial state but also some of the tunable parameters of the model. The potential usefulness of this approach for parameterization studies and for a separation of forecast errors into model and analysis errors is discussed. Finally, variational assimilations with the model used as weak constraint are presented. While showing a good performance in the assimilation, forecasts can suffer severely if the extra terms in the equations, up to which the model is enforced, are unable to compensate for the real model error. In the discussion, an overall appraisal of both assimilation methods is given.

  2. Propagation error minimization method for multiple structural displacement monitoring system

    Science.gov (United States)

    Jeon, Haemin; Shin, Jae-Uk; Myung, Hyun

    2013-04-01

    In the previous study, a visually servoed paired structured light system (ViSP) which is composed of two sides facing each other, each with one or two lasers, a 2-DOF manipulator, a camera, and a screen has been proposed. The lasers project their parallel beams to the screen on the opposite side and 6-DOF relative displacement between two sides is estimated by calculating positions of the projected laser beams and rotation angles of the manipulators. To apply the system to massive civil structures such as long-span bridges or high-rise buildings, the whole area should be divided into multiple partitions and each ViSP module is placed in each partition in a cascaded manner. In other words, the movement of the entire structure can be monitored by multiplying the estimated displacements from multiple ViSP modules. In the multiplication, however, there is a major problem that the displacement estimation error is propagated throughout the multiple modules. To solve the problem, propagation error minimization method (PEMM) which uses Newton-Raphson formulation inspired by the error back-propagation algorithm is proposed. In this method, a propagation error at the last module is calculated and then the estimated displacement from ViSP at each partition is updated in reverse order by using the proposed PEMM that minimizes the propagation error. To verify the performance of the proposed method, various simulations and experimental tests have been performed. The results show that the propagation error is significantly reduced after applying PEMM.

  3. NASA Model of "Threat and Error" in Pediatric Cardiac Surgery: Patterns of Error Chains.

    Science.gov (United States)

    Hickey, Edward; Pham-Hung, Eric; Nosikova, Yaroslavna; Halvorsen, Fredrik; Gritti, Michael; Schwartz, Steven; Caldarone, Christopher A; Van Arsdell, Glen

    2017-04-01

    We introduced the National Aeronautics and Space Association threat-and-error model to our surgical unit. All admissions are considered flights, which should pass through stepwise deescalations in risk during surgical recovery. We hypothesized that errors significantly influence risk deescalation and contribute to poor outcomes. Patient flights (524) were tracked in real time for threats, errors, and unintended states by full-time performance personnel. Expected risk deescalation was wean from mechanical support, sternal closure, extubation, intensive care unit (ICU) discharge, and discharge home. Data were accrued from clinical charts, bedside data, reporting mechanisms, and staff interviews. Infographics of flights were openly discussed weekly for consensus. In 12% (64 of 524) of flights, the child failed to deescalate sequentially through expected risk levels; unintended increments instead occurred. Failed deescalations were highly associated with errors (426; 257 flights; p < 0.0001). Consequential errors (263; 173 flights) were associated with a 29% rate of failed deescalation versus 4% in flights with no consequential error (p < 0.0001). The most dangerous errors were apical errors typically (84%) occurring in the operating room, which caused chains of propagating unintended states (n = 110): these had a 43% (47 of 110) rate of failed deescalation (versus 4%; p < 0.0001). Chains of unintended state were often (46%) amplified by additional (up to 7) errors in the ICU that would worsen clinical deviation. Overall, failed deescalations in risk were extremely closely linked to brain injury (n = 13; p < 0.0001) or death (n = 7; p < 0.0001). Deaths and brain injury after pediatric cardiac surgery almost always occur from propagating error chains that originate in the operating room and are often amplified by additional ICU errors. Copyright © 2017 The Society of Thoracic Surgeons. Published by Elsevier Inc. All rights reserved.

  4. The Detection of Structural Deformation Errors in Attitude Determination

    Institute of Scientific and Technical Information of China (English)

    M. J. Moore; C. Rizos; J. Wang

    2003-01-01

    In the determination of the attitude parameters from a multi-antenna GPS array, one of the major assumptions is that the body frame is rigid at all times. If this assumption is not true then the derived attitude parameters will be in error. It is well known that in airborne platforms the wings often experience some displacement during flight, especially during periods of initializing manoeuvres, such as taking off, landing, and banking. Often it is at these points in time that it is most critical to have the most precise attitude parameters. There are a number of techniques available for the detection of modeling errors. The CUSUM algorithm has successfully been implemented in the past to detect small persistent changes. In this paper the authors investigate different methods of generating the residuals, to be tested by the CUSUM algorithm, in an effort to determine which technique is best suited for the detection of structural deformation of an airborne platform. The methods investigated include monitoring the mean of the residuals generated from the difference between the known body frame coordinates and those calculated from the derived attitude parameters. The generated residuals are then passed to a CUSUM algorithm to detect any small persistent changes. An alternative method involves transforming the generated residuals into the frequency domain through the use of the Fast Fourier Transform. The CUSUM algorithm is then used to detect any frequency changes. The final technique investigated involves transforming the generated residuals using the Haar wavelet. The wavelet coefficients are then monitored by the CUSUM algorithm in order to detect any significant change to the rigidity of the body frame. Detecting structural deformation, and quantifying the degree of deformation, during flight will ensure that these effects can be removed from the system, thus ensuring the most precise and reliable attitude parameter solutions. This paper, through a series
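
    A bare-bones two-sided CUSUM detector of the kind referred to above is sketched below on a synthetic residual sequence; the drift and threshold parameters and the injected change are illustrative assumptions:

        import numpy as np

        def cusum(residuals, drift=0.5, threshold=5.0):
            """Two-sided CUSUM on standardized residuals; returns the index of
            the first alarm or None. drift (k) and threshold (h) are in sigma
            units and are illustrative values."""
            gp = gm = 0.0
            for i, r in enumerate(residuals):
                gp = max(0.0, gp + r - drift)
                gm = max(0.0, gm - r - drift)
                if gp > threshold or gm > threshold:
                    return i
            return None

        rng = np.random.default_rng(1)
        r = rng.standard_normal(500)
        r[300:] += 1.0                  # small persistent change (e.g. wing flexure)
        print("alarm at sample:", cusum(r))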

  5. Dual Numbers Approach in Multiaxis Machines Error Modeling

    Directory of Open Access Journals (Sweden)

    Jaroslav Hrdina

    2014-01-01

    Full Text Available Multiaxis machines error modeling is set in the context of modern differential geometry and linear algebra. We apply special classes of matrices over dual numbers and propose a generalization of such concept by means of general Weil algebras. We show that the classification of the geometric errors follows directly from the algebraic properties of the matrices over dual numbers and thus the calculus over the dual numbers is the proper tool for the methodology of multiaxis machines error modeling.
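
    The algebraic idea can be illustrated with a minimal dual-number class (a + b*eps with eps*eps = 0), whose nilpotent part carries a first-order perturbation through ordinary arithmetic; this is only a toy in Python, not the paper's Weil-algebra formalism:

        class Dual:
            """Dual number a + b*eps with eps**2 == 0; the eps part carries a
            first-order (error) perturbation through ordinary arithmetic."""
            def __init__(self, a, b=0.0):
                self.a, self.b = a, b
            def __add__(self, other):
                return Dual(self.a + other.a, self.b + other.b)
            def __mul__(self, other):
                return Dual(self.a * other.a, self.a * other.b + self.b * other.a)
            def __repr__(self):
                return f"{self.a} + {self.b}*eps"

        # A nominal rotation parameter with a small first-order error attached:
        theta = Dual(0.7853981633974483, 1e-5)
        print(theta * theta)      # the eps part is exactly the linearized error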

  6. Stress Recovery and Error Estimation for Shell Structures

    Science.gov (United States)

    Yazdani, A. A.; Riggs, H. R.; Tessler, A.

    2000-01-01

    The Penalized Discrete Least-Squares (PDLS) stress recovery (smoothing) technique developed for two dimensional linear elliptic problems is adapted here to three-dimensional shell structures. The surfaces are restricted to those which have a 2-D parametric representation, or which can be built-up of such surfaces. The proposed strategy involves mapping the finite element results to the 2-D parametric space which describes the geometry, and smoothing is carried out in the parametric space using the PDLS-based Smoothing Element Analysis (SEA). Numerical results for two well-known shell problems are presented to illustrate the performance of SEA/PDLS for these problems. The recovered stresses are used in the Zienkiewicz-Zhu a posteriori error estimator. The estimated errors are used to demonstrate the performance of SEA-recovered stresses in automated adaptive mesh refinement of shell structures. The numerical results are encouraging. Further testing involving more complex, practical structures is necessary.

  7. Optical linear algebra processors: noise and error-source modeling.

    Science.gov (United States)

    Casasent, D; Ghosh, A

    1985-06-01

    The modeling of system and component noise and error sources in optical linear algebra processors (OLAP's) are considered, with attention to the frequency-multiplexed OLAP. General expressions are obtained for the output produced as a function of various component errors and noise. A digital simulator for this model is discussed.

  8. Optical linear algebra processors - Noise and error-source modeling

    Science.gov (United States)

    Casasent, D.; Ghosh, A.

    1985-01-01

    The modeling of system and component noise and error sources in optical linear algebra processors (OLAPs) are considered, with attention to the frequency-multiplexed OLAP. General expressions are obtained for the output produced as a function of various component errors and noise. A digital simulator for this model is discussed.

  9. Avoiding and identifying errors in health technology assessment models: qualitative study and methodological review.

    Science.gov (United States)

    Chilcott, J; Tappenden, P; Rawdin, A; Johnson, M; Kaltenthaler, E; Paisley, S; Papaioannou, D; Shippam, A

    2010-05-01

    identifying errors; and barriers and facilitators. There was no common language in the discussion of modelling errors and there was inconsistency in the perceived boundaries of what constitutes an error. Asked about the definition of model error, there was a tendency for interviewees to exclude matters of judgement from being errors and focus on 'slips' and 'lapses', but discussion of slips and lapses comprised less than 20% of the discussion on types of errors. Interviewees devoted 70% of the discussion to softer elements of the process of defining the decision question and conceptual modelling, mostly the realms of judgement, skills, experience and training. The original focus concerned model errors, but it may be more useful to refer to modelling risks. Several interviewees discussed concepts of validation and verification, with notable consistency in interpretation: verification meaning the process of ensuring that the computer model correctly implemented the intended model, whereas validation means the process of ensuring that a model is fit for purpose. Methodological literature on verification and validation of models makes reference to the Hermeneutic philosophical position, highlighting that the concept of model validation should not be externalized from the decision-makers and the decision-making process. Interviewees demonstrated examples of all major error types identified in the literature: errors in the description of the decision problem, in model structure, in use of evidence, in implementation of the model, in operation of the model, and in presentation and understanding of results. The HTA error classifications were compared against existing classifications of model errors in the literature. A range of techniques and processes are currently used to avoid errors in HTA models: engaging with clinical experts, clients and decision-makers to ensure mutual understanding, producing written documentation of the proposed model, explicit conceptual modelling

  10. Error estimates for the Skyrme-Hartree-Fock model

    CERN Document Server

    Erler, J

    2014-01-01

    There are many complementing strategies to estimate the extrapolation errors of a model which was calibrated in least-squares fits. We consider the Skyrme-Hartree-Fock model for nuclear structure and dynamics and exemplify the following five strategies: uncertainties from statistical analysis, covariances between observables, trends of residuals, variation of fit data, dedicated variation of model parameters. This gives useful insight into the impact of the key fit data as they are: binding energies, charge r.m.s. radii, and charge formfactor. Amongst others, we check in particular the predictive value for observables in the stable nucleus $^{208}$Pb, the super-heavy element $^{266}$Hs, $r$-process nuclei, and neutron stars.

  11. Error Control of Iterative Linear Solvers for Integrated Groundwater Models

    CERN Document Server

    Dixon, Matthew; Brush, Charles; Chung, Francis; Dogrul, Emin; Kadir, Tariq

    2010-01-01

    An open problem that arises when using modern iterative linear solvers, such as the preconditioned conjugate gradient (PCG) method or Generalized Minimum RESidual method (GMRES) is how to choose the residual tolerance in the linear solver to be consistent with the tolerance on the solution error. This problem is especially acute for integrated groundwater models which are implicitly coupled to another model, such as surface water models, and resolve both multiple scales of flow and temporal interaction terms, giving rise to linear systems with variable scaling. This article uses the theory of 'forward error bound estimation' to show how rescaling the linear system affects the correspondence between the residual error in the preconditioned linear system and the solution error. Using examples of linear systems from models developed using the USGS GSFLOW package and the California State Department of Water Resources' Integrated Water Flow Model (IWFM), we observe that this error bound guides the choice of a prac...
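
    The issue can be illustrated with the classical forward error bound ||x - x_hat|| / ||x|| <= cond(A) * ||r|| / ||b||, which is what makes the choice of residual tolerance nontrivial for badly conditioned or badly scaled systems. The NumPy sketch below (generic, unrelated to the GSFLOW or IWFM code) constructs an ill-conditioned system and compares the relative residual, the actual solution error, and the bound:

        import numpy as np

        rng = np.random.default_rng(2)
        n = 50
        U, _ = np.linalg.qr(rng.standard_normal((n, n)))
        V, _ = np.linalg.qr(rng.standard_normal((n, n)))
        s = np.logspace(0, -10, n)              # singular values spanning 10 decades
        A = U @ np.diag(s) @ V.T                # deliberately ill-conditioned matrix

        x_true = rng.standard_normal(n)
        b = A @ x_true
        x_hat = np.linalg.solve(A, b)           # a solver's answer in finite precision
        r = b - A @ x_hat

        rel = lambda v, w: np.linalg.norm(v) / np.linalg.norm(w)
        print("relative residual        :", rel(r, b))
        print("relative solution error  :", rel(x_hat - x_true, x_true))
        print("forward bound cond(A)*res:", np.linalg.cond(A) * rel(r, b))

    A tiny residual therefore does not guarantee a correspondingly tiny solution error; the conditioning (and any rescaling of A and b) mediates between the two, which is the correspondence the article analyzes.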

  12. Bayesian modeling growth curves for quail assuming skewness in errors

    Directory of Open Access Journals (Sweden)

    Robson Marcelo Rossi

    2014-06-01

    Full Text Available Bayesian modeling of growth curves for quail assuming skewness in the errors - Assuming normal distributions in data analysis is common in many areas of knowledge. However, other distributions that model a skewness parameter can be used when the data have tails heavier than the normal. This article presents alternatives to the assumption of normality in the errors by adding asymmetric distributions. A Bayesian approach is proposed to fit nonlinear models when the errors are not normal; the t, skew-normal and skew-t distributions are adopted. The methodology is applied to different growth curves for quail body weights. It was found that the Gompertz model assuming skew-normal errors and skew-t errors, respectively for males and females, gave the best fit to the data.

  13. Correcting biased observation model error in data assimilation

    CERN Document Server

    Harlim, John

    2016-01-01

    While the formulation of most data assimilation schemes assumes an unbiased observation model error, in real applications, model error with nontrivial biases is unavoidable. A practical example is the error in the radiative transfer model (which is used to assimilate satellite measurements) in the presence of clouds. As a consequence, many (in fact 99%) of the cloudy observed measurements are not being used although they may contain useful information. This paper presents a novel nonparametric Bayesian scheme which is able to learn the observation model error distribution and correct the bias in incoming observations. This scheme can be used in tandem with any data assimilation forecasting system. The proposed model error estimator uses nonparametric likelihood functions constructed with data-driven basis functions based on the theory of kernel embeddings of conditional distributions developed in the machine learning community. Numerically, we show positive results with two examples. The first example is des...

  14. Error Model and Accuracy Calibration of 5-Axis Machine Tool

    Directory of Open Access Journals (Sweden)

    Fangyu Pan

    2013-08-01

    Full Text Available To improve machining precision and reduce the geometric errors of a 5-axis machine tool, an error model and a calibration procedure are presented in this paper. The error model is built from multi-body system theory and characteristic matrices, which establish the theoretical relationship between the cutting tool and the workpiece. Accuracy calibration is difficult to achieve, but with a laser-based approach (laser interferometer and laser tracker) the errors can be measured accurately, which benefits later compensation.

  15. Use of the breeding technique to estimate the structure of the analysis 'errors of the day'

    Directory of Open Access Journals (Sweden)

    M. Corazza

    2003-01-01

    Full Text Available A 3D-variational data assimilation scheme for a quasi-geostrophic channel model (Morss, 1998) is used to study the structure of the background error and its relationship to the corresponding bred vectors. The "true" evolution of the model atmosphere is defined by an integration of the model and "rawinsonde observations" are simulated by randomly perturbing the true state at fixed locations. Case studies using different observational densities are considered to compare the evolution of the bred vectors to the spatial structure of the background error. In addition, the bred vector dimension (BV-dimension), defined by Patil et al. (2001), is applied to the bred vectors. It is found that after 3-5 days the bred vectors develop well organized structures which are very similar for the two different norms (enstrophy and streamfunction) considered in this paper. When 10 surrogate bred vectors (corresponding to different days from that of the background error) are used to describe the local patterns of the background error, the explained variance is quite high, about 85-88%, indicating that the statistical average properties of the bred vectors represent well those of the background error. However, a subspace of 10 bred vectors corresponding to the time of the background error increased the percentage of explained variance to 96-98%, with the largest percentage when the background errors are large. These results suggest that a statistical basis of bred vectors collected over time can be used to create an effective constant background error covariance for data assimilation with 3D-Var. Including the "errors of the day" through the use of bred vectors corresponding to the background forecast time can bring an additional significant improvement.

  16. Predictive error analysis for a water resource management model

    Science.gov (United States)

    Gallagher, Mark; Doherty, John

    2007-02-01

    Summary: In calibrating a model, a set of parameters is assigned to the model which will be employed for the making of all future predictions. If these parameters are estimated through solution of an inverse problem, formulated to be properly posed through either pre-calibration or mathematical regularisation, then solution of this inverse problem will, of necessity, lead to a simplified parameter set that omits the details of reality, while still fitting historical data acceptably well. Furthermore, estimates of parameters so obtained will be contaminated by measurement noise. Both of these phenomena will lead to errors in predictions made by the model, with the potential for error increasing with the hydraulic property detail on which the prediction depends. Integrity of model usage demands that model predictions be accompanied by some estimate of the possible errors associated with them. The present paper applies theory developed in a previous work to the analysis of predictive error associated with a real world, water resource management model. The analysis offers many challenges, including the fact that the model is a complex one that was partly calibrated by hand. Nevertheless, it is typical of models which are commonly employed as the basis for the making of important decisions, and for which such an analysis must be made. The potential errors associated with point-based and averaged water level and creek inflow predictions are examined, together with the dependence of these errors on the amount of averaging involved. Error variances associated with predictions made by the existing model are compared with "optimized error variances" that could have been obtained had calibration been undertaken in such a way as to minimize predictive error variance. The contributions by different parameter types to the overall error variance of selected predictions are also examined.

  17. Testing and Inference in Nonlinear Cointegrating Vector Error Correction Models

    DEFF Research Database (Denmark)

    Kristensen, Dennis; Rahbek, Anders

    In this paper, we consider a general class of vector error correction models which allow for asymmetric and non-linear error correction. We provide asymptotic results for (quasi-)maximum likelihood (QML) based estimators and tests. General hypothesis testing is considered, where testing...... symmetric non-linear error correction are considered. A simulation study shows that the finite sample properties of the bootstrapped tests are satisfactory with good size and power properties for reasonable sample sizes....

  18. A Morphographemic Model for Error Correction in Nonconcatenative Strings

    CERN Document Server

    Bowden, T; Bowden, Tanya; Kiraz, George Anton

    1995-01-01

    This paper introduces a spelling correction system which integrates seamlessly with morphological analysis using a multi-tape formalism. Handling of various Semitic error problems is illustrated, with reference to Arabic and Syriac examples. The model handles errors in vocalisation, diacritics, phonetic syncopation and morphographemic idiosyncrasies, in addition to Damerau errors. A complementary correction strategy for morphologically sound but morphosyntactically ill-formed words is outlined.

  19. FMEA: a model for reducing medical errors.

    Science.gov (United States)

    Chiozza, Maria Laura; Ponzetti, Clemente

    2009-06-01

    Patient safety is a management issue, in view of the fact that clinical risk management has become an important part of hospital management. Failure Mode and Effect Analysis (FMEA) is a proactive technique for error detection and reduction, firstly introduced within the aerospace industry in the 1960s. Early applications in the health care industry dating back to the 1990s included critical systems in the development and manufacture of drugs and in the prevention of medication errors in hospitals. In 2008, the Technical Committee of the International Organization for Standardization (ISO), licensed a technical specification for medical laboratories suggesting FMEA as a method for prospective risk analysis of high-risk processes. Here we describe the main steps of the FMEA process and review data available on the application of this technique to laboratory medicine. A significant reduction of the risk priority number (RPN) was obtained when applying FMEA to blood cross-matching, to clinical chemistry analytes, as well as to point-of-care testing (POCT).

  20. Filtering multiscale dynamical systems in the presence of model error

    CERN Document Server

    Harlim, John

    2013-01-01

    In this review article, we report two important competing data assimilation schemes that were developed in the past 20 years, discuss the current methods that are operationally used in weather forecasting applications, and point out one major challenge in data assimilation community: "utilize these existing schemes in the presence of model error". The aim of this paper is to provide theoretical guidelines to mitigate model error in practical applications of filtering multiscale dynamical systems with reduced models. This is a prototypical situation in many applications due to limited ability to resolve the smaller scale processes as well as the difficulty to model the interaction across scales. We present simple examples to point out the importance of accounting for model error when the separation of scales are not apparent. These examples also elucidate the necessity of treating model error as a stochastic process in a nontrivial fashion for optimal filtering, in the sense that the mean and covariance estima...

  1. Integrating a calibrated groundwater flow model with error-correcting data-driven models to improve predictions

    Science.gov (United States)

    Demissie, Yonas K.; Valocchi, Albert J.; Minsker, Barbara S.; Bailey, Barbara A.

    2009-01-01

    Summary: Physically-based groundwater models (PBMs), such as MODFLOW, contain numerous parameters which are usually estimated using statistically-based methods, which assume that the underlying error is white noise. However, because of the practical difficulties of representing all the natural subsurface complexity, numerical simulations are often prone to large uncertainties that can result in both random and systematic model error. The systematic errors can be attributed to conceptual, parameter, and measurement uncertainty, and most often it can be difficult to determine their physical cause. In this paper, we have developed a framework to handle systematic error in physically-based groundwater flow model applications that uses error-correcting data-driven models (DDMs) in a complementary fashion. The data-driven models are separately developed to predict the MODFLOW head prediction errors, which were subsequently used to update the head predictions at existing and proposed observation wells. The framework is evaluated using a hypothetical case study developed based on a phytoremediation site at the Argonne National Laboratory. This case study includes structural, parameter, and measurement uncertainties. In terms of bias and prediction uncertainty range, the complementary modeling framework has shown substantial improvements (up to 64% reduction in RMSE and prediction error ranges) over the original MODFLOW model, in both the calibration and the verification periods. Moreover, the spatial and temporal correlations of the prediction errors are significantly reduced, thus resulting in reduced local biases and structures in the model prediction errors.
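
    A toy version of the complementary idea, with a random forest standing in for the data-driven error model and entirely synthetic data in place of MODFLOW output, might look as follows (scikit-learn's RandomForestRegressor is an assumed, convenient choice, not the authors' DDM):

        import numpy as np
        from sklearn.ensemble import RandomForestRegressor

        rng = np.random.default_rng(3)
        X = rng.uniform(0, 1, size=(300, 2))             # e.g. well coordinates (synthetic)
        true_head = 50 - 10 * X[:, 0] + 3 * np.sin(6 * X[:, 1])
        pbm_head  = 50 - 10 * X[:, 0]                    # the "PBM" misses the sinusoidal term
        obs_head  = true_head + 0.1 * rng.standard_normal(300)

        train, test = np.arange(200), np.arange(200, 300)
        ddm = RandomForestRegressor(n_estimators=200, random_state=0)
        ddm.fit(X[train], obs_head[train] - pbm_head[train])   # learn the PBM errors

        corrected = pbm_head[test] + ddm.predict(X[test])       # error-corrected heads
        rmse = lambda a, b: np.sqrt(np.mean((a - b) ** 2))
        print("PBM RMSE      :", rmse(pbm_head[test], obs_head[test]))
        print("corrected RMSE:", rmse(corrected, obs_head[test]))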

  2. ASYMPTOTICS OF MEAN TRANSFORMATION ESTIMATORS WITH ERRORS IN VARIABLES MODEL

    Institute of Scientific and Technical Information of China (English)

    CUI Hengjian

    2005-01-01

    This paper addresses the estimation, and its asymptotics, of the mean transformation θ = E[h(X)] of a random variable X based on n i.i.d. observations from the errors-in-variables model Y = X + v, where v is a measurement error with a known distribution and h(·) is a known smooth function. The asymptotics of the deconvolution kernel estimator are given for ordinary smooth error distributions, and those of the expectation extrapolation estimator for normal error distributions. Under some mild regularity conditions, consistency and asymptotic normality are obtained for both types of estimators. Simulations show that they have good performance.

  3. On Network-Error Correcting Convolutional Codes under the BSC Edge Error Model

    CERN Document Server

    Prasad, K

    2010-01-01

    Convolutional network-error correcting codes (CNECCs) are known to provide error correcting capability in acyclic instantaneous networks within the network coding paradigm under small field size conditions. In this work, we investigate the performance of CNECCs under the error model of the network where the edges are assumed to be statistically independent binary symmetric channels, each with the same probability of error $p_e$($0\\leq p_e<0.5$). We obtain bounds on the performance of such CNECCs based on a modified generating function (the transfer function) of the CNECCs. For a given network, we derive a mathematical condition on how small $p_e$ should be so that only single edge network-errors need to be accounted for, thus reducing the complexity of evaluating the probability of error of any CNECC. Simulations indicate that convolutional codes are required to possess different properties to achieve good performance in low $p_e$ and high $p_e$ regimes. For the low $p_e$ regime, convolutional codes with g...

  4. Error Model and Compensation of Bell-Shaped Vibratory Gyro

    Directory of Open Access Journals (Sweden)

    Zhong Su

    2015-09-01

    Full Text Available A bell-shaped vibratory angular velocity gyro (BVG), inspired by the traditional Chinese bell, is a type of axisymmetric shell resonator gyroscope. This paper focuses on the development of an error model and compensation for the BVG. A dynamic equation is first established, based on a study of the BVG working mechanism. This equation is then used to evaluate the relationship between the angular rate output signal and the bell-shaped resonator characteristics, analyze the influence of the main error sources, and set up an error model for the BVG. The error sources are classified according to their error propagation characteristics, and the compensation method is presented based on the error model. Finally, using the error model and compensation method, the BVG is calibrated experimentally, including rough compensation, temperature and bias compensation, scale factor compensation, and noise filtering. The experimentally obtained bias instability improves from 20.5°/h to 4.7°/h, the random walk from 2.8°/√h to 0.7°/√h, and the nonlinearity from 0.2% to 0.03%. After error compensation, there is a good linear relationship between the sensing signal and the angular velocity, suggesting that the BVG is a good candidate for the field of low and medium rotational speed measurement.

  5. Deconvolution Estimation in Measurement Error Models: The R Package decon

    Directory of Open Access Journals (Sweden)

    Xiao-Feng Wang

    2011-03-01

    Full Text Available Data from many scientific areas often come with measurement error. Density or distribution function estimation from contaminated data and nonparametric regression with errors in variables are two important topics in measurement error models. In this paper, we present a new software package decon for R, which contains a collection of functions that use the deconvolution kernel methods to deal with the measurement error problems. The functions allow the errors to be either homoscedastic or heteroscedastic. To make the deconvolution estimators computationally more efficient in R, we adapt the fast Fourier transform algorithm for density estimation with error-free data to the deconvolution kernel estimation. We discuss the practical selection of the smoothing parameter in deconvolution methods and illustrate the use of the package through both simulated and real examples.

  6. A Comprehensive Trainable Error Model for Sung Music Queries

    CERN Document Server

    Birmingham, W P; 10.1613/jair.1334

    2011-01-01

    We propose a model for errors in sung queries, a variant of the hidden Markov model (HMM). This is a solution to the problem of identifying the degree of similarity between a (typically error-laden) sung query and a potential target in a database of musical works, an important problem in the field of music information retrieval. Similarity metrics are a critical component of query-by-humming (QBH) applications which search audio and multimedia databases for strong matches to oral queries. Our model comprehensively expresses the types of error or variation between target and query: cumulative and non-cumulative local errors, transposition, tempo and tempo changes, insertions, deletions and modulation. The model is not only expressive, but automatically trainable, or able to learn and generalize from query examples. We present results of simulations, designed to assess the discriminatory potential of the model, and tests with real sung queries, to demonstrate relevance to real-world applications.

  7. Fractionally Integrated Models With ARCH Errors

    OpenAIRE

    Hauser, Michael A.; Kunst, Robert M.

    1993-01-01

    Abstract: We introduce ARFIMA-ARCH models which simultaneously incorporate fractional differencing and conditional heteroskedasticity. We develop the likelihood function and a numerical estimation procedure for this model class. Two ARCH models - Engle- and Weiss-type - are explicitly treated and stationarity conditions are derived. Finite-sample properties of the estimation procedure are explored by Monte Carlo simulation. An application to the Standard & Poor 500 Index indicates existence o...

  8. Effect of GPS errors on Emission model

    DEFF Research Database (Denmark)

    Lehmann, Anders; Gross, Allan

    In this paper we will show how Global Positioning System (GPS) data obtained from smartphones can be used to model air quality in urban settings. The paper examines the uncertainty of smartphone location utilising GPS, and ties this location uncertainty to air quality models. The results presented

  9. Application of structural equation models for evaluating epidemiological data and for calculation of the benchmark dose

    DEFF Research Database (Denmark)

    Budtz-Jørgensen, Esben; Keiding, Niels; Grandjean, P.

    2003-01-01

    observational epidemiology; measurement error; multiple endpoints; structural equation models; safety standard

  10. Estimation in the polynomial errors-in-variables model

    Institute of Scientific and Technical Information of China (English)

    2002-01-01

    Estimators are presented for the coefficients of the polynomial errors-in-variables (EV) model when replicated observations are taken at some experimental points. These estimators are shown to be strongly consistent under mild conditions.

  11. Reducing RANS Model Error Using Random Forest

    Science.gov (United States)

    Wang, Jian-Xun; Wu, Jin-Long; Xiao, Heng; Ling, Julia

    2016-11-01

    Reynolds-Averaged Navier-Stokes (RANS) models are still the work-horse tools in the turbulence modeling of industrial flows. However, the model discrepancy due to the inadequacy of modeled Reynolds stresses largely diminishes the reliability of simulation results. In this work we use a physics-informed machine learning approach to improve the RANS modeled Reynolds stresses and propagate them to obtain the mean velocity field. Specifically, the functional forms of Reynolds stress discrepancies with respect to mean flow features are trained based on an offline database of flows with similar characteristics. The random forest model is used to predict Reynolds stress discrepancies in new flows. Then the improved Reynolds stresses are propagated to the velocity field via RANS equations. The effects of expanding the feature space through the use of a complete basis of Galilean tensor invariants are also studied. The flow in a square duct, which is challenging for standard RANS models, is investigated to demonstrate the merit of the proposed approach. The results show that both the Reynolds stresses and the propagated velocity field are improved over the baseline RANS predictions. SAND Number: SAND2016-7437 A

  12. Calibration of parallel kinematics machine using generalized distance error model

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    This paper focuses on enhancing the accuracy of a parallel kinematics machine through kinematic calibration. In the calibration process, constructing a well-structured identification Jacobian matrix and measuring the end-effector position and orientation are the two main difficulties. In this paper, the identification Jacobian matrix is constructed easily by numerical calculation using the unit virtual velocity method. A generalized distance error model is presented to avoid measuring the position and orientation directly, which is difficult to do. Finally, a measurement tool is given for acquiring the data points in the calibration process. Experimental studies confirmed the effectiveness of the method. It is also shown in the paper that the proposed approach can be applied to other types of parallel manipulators.

  13. Quantifying uncertainty in climatological fields from GPS radio occultation: an empirical-analytical error model

    Directory of Open Access Journals (Sweden)

    B. Scherllin-Pirscher

    2011-05-01

    Full Text Available Due to the measurement principle of the radio occultation (RO) technique, RO data are highly suitable for climate studies. Single RO profiles can be used to build climatological fields of different atmospheric parameters like bending angle, refractivity, density, pressure, geopotential height, and temperature. RO climatologies are affected by random (statistical) errors, sampling errors, and systematic errors, yielding a total climatological error. Based on empirical error estimates, we provide a simple analytical error model for these error components, which accounts for vertical, latitudinal, and seasonal variations. The vertical structure of each error component is modeled as constant around the tropopause region; above this region the error increases exponentially, and below it the increase follows an inverse height power-law. The statistical error strongly depends on the number of measurements. It is found to be the smallest error component for monthly mean 10° zonal mean climatologies with more than 600 measurements per bin. Due to the smallest atmospheric variability, the sampling error is found to be smallest at low latitudes equatorwards of 40°. Beyond 40°, this error increases roughly linearly, with a stronger increase in hemispheric winter than in hemispheric summer. The sampling error model accounts for this hemispheric asymmetry. However, we recommend subtracting the sampling error when using RO climatologies for climate research, since the residual sampling error remaining after such subtraction is estimated to be 50 % of the sampling error for bending angle and 30 % or less for the other atmospheric parameters. The systematic error accounts for potential residual biases in the measurements as well as in the retrieval process and generally dominates the total climatological error. Overall the total error in monthly means is estimated to be smaller than 0.07 % in refractivity and 0.15 K in temperature at low to mid latitudes, increasing towards
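
    The described vertical shape (constant around the tropopause, exponential growth above, inverse height power-law below) can be written as a small function; the constants in the sketch below are placeholders, not the published fit parameters:

        import numpy as np

        def vertical_error(z, z_bot=8.0, z_top=14.0, s0=0.1, H=10.0, p=1.5):
            """Error standard deviation versus altitude z (km):
            constant s0 in [z_bot, z_top], exponential growth above z_top,
            inverse height power-law increase below z_bot.
            All constants here are illustrative placeholders."""
            z = np.asarray(z, dtype=float)
            above = s0 * np.exp((z - z_top) / H)
            below = s0 * (z_bot / np.maximum(z, 1e-6)) ** p
            return np.where(z > z_top, above, np.where(z < z_bot, below, s0))

        print(vertical_error([2.0, 10.0, 25.0]))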

  14. A novel data-driven approach to model error estimation in Data Assimilation

    Science.gov (United States)

    Pathiraja, Sahani; Moradkhani, Hamid; Marshall, Lucy; Sharma, Ashish

    2016-04-01

    Error characterisation is a fundamental component of Data Assimilation (DA) studies. Effectively describing model error statistics has been a challenging area, with many traditional methods requiring some level of subjectivity (for instance in defining the error covariance structure). Recent advances have focused on removing the need for tuning of error parameters, although there are still some outstanding issues. Many methods focus only on the first and second moments, and rely on assuming multivariate Gaussian statistics. We propose a non-parametric, data-driven framework to estimate the full distributional form of model error, i.e. the transition density p(x_t | x_{t-1}). All sources of uncertainty associated with the model simulations are considered, without needing to assign error characteristics/devise stochastic perturbations for individual components of model uncertainty (e.g. input, parameter and structural). A training period is used to derive the error distribution of observed variables, conditioned on (potentially hidden) states. Errors in hidden states are estimated from the conditional distribution of observed variables using non-linear optimization. The framework is discussed in detail, and an application to a hydrologic case study with hidden states for one-day-ahead streamflow prediction is presented. Results demonstrate improved predictions and more realistic uncertainty bounds compared to a standard tuning approach.
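
    The sketch below is a much-simplified, unconditional variant of this idea (the paper's framework conditions on hidden states): model errors collected over a training/hindcast period are summarized non-parametrically and then used to dress a new forecast with an uncertainty band. All data here are synthetic.

        # Simplified sketch: empirical error distribution from a training period via KDE.
        import numpy as np
        from scipy.stats import gaussian_kde

        rng = np.random.default_rng(1)

        # Hypothetical training period: simulated vs. observed streamflow
        q_sim_train = rng.gamma(shape=2.0, scale=5.0, size=1000)
        q_obs_train = q_sim_train * rng.lognormal(mean=0.0, sigma=0.2, size=1000)

        errors = q_obs_train - q_sim_train                  # empirical model errors
        error_pdf = gaussian_kde(errors)                    # non-parametric error density

        # One-day-ahead forecast: run the model, then dress it with sampled errors
        q_sim_forecast = 12.0
        ensemble = q_sim_forecast + error_pdf.resample(500)[0]
        print(np.percentile(ensemble, [5, 50, 95]).round(2))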

  15. Error detection and rectification in digital terrain models

    Science.gov (United States)

    Hannah, M. J.

    1979-01-01

    Digital terrain models produced by computer correlation of stereo images are likely to contain occasional gross errors in terrain elevation. These errors typically result from having mismatched sub-areas of the two images, a problem which can occur for a variety of image- and terrain-related reasons. Such elevation errors produce undesirable effects when the models are further processed, and should be detected and corrected as early in the processing as possible. Algorithms have been developed to detect and correct errors in digital terrain models. These algorithms focus on the use of constraints on both the allowable slope and the allowable change in slope in local areas around each point. Relaxation-like techniques are employed in the iteration of the detection and correction phases to obtain best results.
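
    A minimal sketch of the slope-constraint idea (not the original algorithm): grid cells whose slope to any direct neighbour exceeds an allowable threshold are flagged and replaced by a local median, and the detection/correction pass is iterated a few times. Grid spacing and thresholds are assumed values.

        # Slope-constrained blunder detection and correction in a gridded DTM (sketch).
        import numpy as np
        from scipy.ndimage import median_filter

        def clean_dtm(z, spacing=30.0, max_slope=0.6, n_iter=3):
            z = z.astype(float).copy()
            for _ in range(n_iter):
                dz = np.zeros_like(z)
                # slopes to the four direct neighbours (np.roll wraps at the borders;
                # acceptable for a demonstration)
                for shift, axis in [(1, 0), (-1, 0), (1, 1), (-1, 1)]:
                    dz = np.maximum(dz, np.abs(z - np.roll(z, shift, axis=axis)) / spacing)
                bad = dz > max_slope
                if not bad.any():
                    break
                z[bad] = median_filter(z, size=3)[bad]     # replace flagged cells locally
            return z

        dem = np.cumsum(np.random.default_rng(2).normal(0.0, 1.0, (50, 50)), axis=0)
        dem[25, 25] += 500.0                               # inject a gross elevation spike
        cleaned = clean_dtm(dem)
        print(abs(cleaned[25, 25] - dem[25, 25]) > 100.0)  # True: spike was removed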

  16. Adaptive Error Estimation in Linearized Ocean General Circulation Models

    Science.gov (United States)

    Chechelnitsky, Michael Y.

    1999-01-01

    Data assimilation methods are routinely used in oceanography. The statistics of the model and measurement errors need to be specified a priori. This study addresses the problem of estimating model and measurement error statistics from observations. We start by applying innovation-based methods of adaptive error estimation, with low-dimensional models in the North Pacific (5-60 deg N, 132-252 deg E), to TOPEX/POSEIDON (T/P) sea level anomaly data, acoustic tomography data from the ATOC project, and the MIT General Circulation Model (GCM). A reduced-state linear model that describes large-scale internal (baroclinic) error dynamics is used. The methods are shown to be sensitive to the initial guess for the error statistics and the type of observations. A new off-line approach is developed, the covariance matching approach (CMA), where covariance matrices of model-data residuals are "matched" to their theoretical expectations using familiar least squares methods. This method uses observations directly instead of the innovations sequence and is shown to be related to the MT method and the method of Fu et al. (1993). Twin experiments using the same linearized MIT GCM suggest that altimetric data are ill-suited to the estimation of internal GCM errors, but that such estimates can in theory be obtained using acoustic data. The CMA is then applied to T/P sea level anomaly data and a linearization of a global GFDL GCM which uses two vertical modes. We show that the CMA method can be used with a global model and a global data set, and that the estimates of the error statistics are robust. We show that the fraction of the GCM-T/P residual variance explained by the model error is larger than that derived in Fukumori et al. (1999) with the method of Fu et al. (1993). Most of the model error is explained by the barotropic mode. However, we find that the impact of the change in the error statistics on the data assimilation estimates is very small. This is explained by the large

  17. Identification of coefficients in platform drift error model

    Institute of Scientific and Technical Information of China (English)

    邓正隆; 徐松艳; 付振宪

    2002-01-01

    The identification of the coefficients in the drift error model of a floated-gyro inertial navigation platform was investigated by following the principle of the inertial navigation platform and using gyro and accelerometer output models. A complete platform drift error model was established, with the parameters as state variables, thereby establishing the system state equation and observation equation. Since these two equations are both nonlinear, the Extended Kalman Filter (EKF) was adopted, and the problem of parameter identification was converted into a problem of state estimation. During the simulation, multi-position testing schemes were designed to excite the parameters by means of the gravity acceleration. Using these schemes, twenty-four error coefficients of three gyros and six error coefficients of three accelerometers were identified, which showed the feasibility of this method.

  18. Assessment of errors and uncertainty patterns in GIA modeling

    DEFF Research Database (Denmark)

    Barletta, Valentina Roberta; Spada, G.

    During the last decade many efforts have been devoted to the assessment of global sea level rise and to the determination of the mass balance of continental ice sheets. In this context, the important role of glacial-isostatic adjustment (GIA) has been clearly recognized. Yet, in many cases only one...... "preferred" GIA model has been used, without any consideration of the possible errors involved. Lacking a rigorous assessment of systematic errors in GIA modeling, the reliability of the results is uncertain. GIA sensitivity and uncertainties associated with the viscosity models have been explored......, such as time-evolving shorelines and paleo-coastlines. In this study we quantify these uncertainties and their propagation in GIA response using a Monte Carlo approach to obtain spatio-temporal patterns of GIA errors. A direct application is the error estimates in ice mass balance in Antarctica and Greenland...

  19. Discrete choice models with multiplicative error terms

    DEFF Research Database (Denmark)

    Fosgerau, Mogens; Bierlaire, Michel

    2009-01-01

    differences. We develop some properties of this type of model and show that in several cases the change from an additive to a multiplicative formulation, maintaining a specification of V, may lead to a large improvement in fit, sometimes larger than that gained from introducing random coefficients in V....

  20. The effect of prediction error correlation on optimal sensor placement in structural dynamics

    Science.gov (United States)

    Papadimitriou, Costas; Lombaert, Geert

    2012-04-01

    The problem of estimating the optimal sensor locations for parameter estimation in structural dynamics is revisited. The effect of spatially correlated prediction errors on the optimal sensor placement is investigated. The information entropy is used as a performance measure of the sensor configuration. The optimal sensor location is formulated as an optimization problem involving discrete-valued variables, which is solved using computationally efficient sequential sensor placement algorithms. Asymptotic estimates for the information entropy are used to develop useful properties that provide insight into the dependence of the information entropy on the number and location of sensors. A theoretical analysis shows that the spatial correlation length of the prediction errors controls the minimum distance between the sensors and should be taken into account when designing optimal sensor locations with potential sensor distances up to the order of the characteristic length of the dynamic problem considered. Implementation issues for modal identification and structural-related model parameter estimation are addressed. Theoretical and computational developments are illustrated by designing the optimal sensor configurations for a continuous beam model, a discrete chain-like stiffness-mass model and a finite element model of a footbridge in Wetteren (Belgium). Results point out the crucial effect that the spatial correlation of the prediction errors has on the design of optimal sensor locations for structural dynamics applications, simultaneously revealing potential inadequacies of spatially uncorrelated prediction error models.
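
    The following toy sketch conveys the sequential placement idea for a simply supported beam: at each step, the candidate location that maximizes log det of the Fisher information (and thereby minimizes the asymptotic information entropy) is added, with the prediction errors given an exponential spatial correlation. Mode shapes, correlation length and problem sizes are invented for illustration. In such a toy setup the selected locations typically end up spaced by roughly the correlation length, echoing the minimum-distance property discussed above.

        # Greedy sensor placement under spatially correlated prediction errors (sketch).
        import numpy as np

        n_dof, n_modes, n_sensors = 60, 4, 6
        x = np.linspace(0.0, 1.0, n_dof)                  # candidate locations on a beam
        Phi = np.stack([np.sin((k + 1) * np.pi * x) for k in range(n_modes)], axis=1)

        corr_len = 0.15                                    # prediction-error correlation length
        Sigma = np.exp(-np.abs(x[:, None] - x[None, :]) / corr_len)   # error covariance

        selected = []
        for _ in range(n_sensors):
            best_j, best_val = None, -np.inf
            for j in range(n_dof):
                if j in selected:
                    continue
                idx = selected + [j]
                S = Sigma[np.ix_(idx, idx)]
                P = Phi[idx, :]
                fisher = P.T @ np.linalg.solve(S, P)       # information of this sensor set
                val = np.linalg.slogdet(fisher + 1e-9 * np.eye(n_modes))[1]
                if val > best_val:
                    best_j, best_val = j, val
            selected.append(best_j)

        print("sensor locations:", np.round(np.sort(x[selected]), 2))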

  1. Background Error Correlation Modeling with Diffusion Operators

    Science.gov (United States)

    2013-01-01

    functions defined on the orthogonal curvilinear grid of the Navy Coastal Ocean Model (NCOM) [28] set up in Monterey Bay (Fig. 4). The number N... H_2 = [1 1; 1 −1], the HMs with order N = 2^n, n = 1, 2, ... can be easily constructed. HMs with N = 12, 20 were constructed "manually" more than a century

  2. Bayesian modeling of measurement error in predictor variables

    NARCIS (Netherlands)

    Fox, Gerardus J.A.; Glas, Cornelis A.W.

    2003-01-01

    It is shown that measurement error in predictor variables can be modeled using item response theory (IRT). The predictor variables, that may be defined at any level of an hierarchical regression model, are treated as latent variables. The normal ogive model is used to describe the relation between

  3. Forecasting the Euro exchange rate using vector error correction models

    NARCIS (Netherlands)

    Aarle, B. van; Bos, M.; Hlouskova, J.

    2000-01-01

    Forecasting the Euro Exchange Rate Using Vector Error Correction Models. — This paper presents an exchange rate model for the Euro exchange rates of four major currencies, namely the US dollar, the British pound, the Japanese yen and the Swiss franc. The model is based on the monetary approach of ex

  4. VQ-based model for binary error process

    Science.gov (United States)

    Csóka, Tibor; Polec, Jaroslav; Csóka, Filip; Kotuliaková, Kvetoslava

    2017-05-01

    A variety of complex techniques, such as forward error correction (FEC), automatic repeat request (ARQ), hybrid ARQ or cross-layer optimization, require in their design and optimization phase a realistic model of the binary error process present in a specific digital channel. Past and more recent modeling approaches focus on capturing one or more stochastic characteristics with precision sufficient for the desired model application, thereby applying concepts and methods that severely limit the model applicability (e.g. in the form of prerequisite expectations about the modeled process). The proposed novel concept utilizing a Vector Quantization (VQ)-based approach to binary process modeling offers a viable alternative capable of superior modeling of the most commonly observed small- and large-scale stochastic characteristics of a binary error process on the digital channel. Precision of the proposed model was verified using multiple statistical distances against the data captured in a wireless sensor network logical channel trace. Furthermore, the Pearson goodness-of-fit test of all model variants' output was performed to conclusively demonstrate usability of the model for a realistic captured binary error process. Finally, the presented results prove the proposed model's applicability and its ability to far surpass the capabilities of the reference Elliot's model.

  5. Modeling of Bit Error Rate in Cascaded 2R Regenerators

    DEFF Research Database (Denmark)

    Öhman, Filip; Mørk, Jesper

    2006-01-01

    This paper presents a simple and efficient model for estimating the bit error rate in a cascade of optical 2R-regenerators. The model includes the influences of amplifier noise, finite extinction ratio and nonlinear reshaping. The interplay between the different signal impairments and the rege...

  6. A Sandwich-Type Standard Error Estimator of SEM Models with Multivariate Time Series

    Science.gov (United States)

    Zhang, Guangjian; Chow, Sy-Miin; Ong, Anthony D.

    2011-01-01

    Structural equation models are increasingly used as a modeling tool for multivariate time series data in the social and behavioral sciences. Standard error estimators of SEM models, originally developed for independent data, require modifications to accommodate the fact that time series data are inherently dependent. In this article, we extend a…

  8. A model for navigational errors in complex environmental fields.

    Science.gov (United States)

    Postlethwaite, Claire M; Walker, Michael M

    2014-12-21

    Many animals are believed to navigate using environmental signals such as light, sound, odours and magnetic fields. However, animals rarely navigate directly to their target location, but instead make a series of navigational errors which are corrected during transit. In previous work, we introduced a model showing that differences between an animal's 'cognitive map' of the environmental signals used for navigation and the true nature of these signals caused a systematic pattern in orientation errors when navigation begins. The model successfully predicted the pattern of errors seen in previously collected data from homing pigeons, but underestimated the amplitude of the errors. In this paper, we extend our previous model to include more complicated distortions of the contour lines of the environmental signals. Specifically, we consider the occurrence of critical points in the fields describing the signals. We consider three scenarios and compute orientation errors as parameters are varied in each case. We show that the occurrence of critical points can be associated with large variations in initial orientation errors over a small geographic area. We discuss the implications that these results have on predicting how animals will behave when encountering complex distortions in any environmental signals they use to navigate.

  9. From Systematic Errors to Cosmology Using Large-Scale Structure

    Science.gov (United States)

    Hunterer, Dragan

    We propose to carry out a two-pronged program to significantly improve links between galaxy surveys and constraints on primordial cosmology and fundamental physics. We will first develop the methodology to self-calibrate the survey, that is, determine the large-angle calibration systematics internally from the survey. We will use this information to correct biases that propagate from the largest to smaller angular scales. Our approach for tackling the systematics is very complementary to existing ones, in particular in the sense that it does not assume knowledge of specific systematic maps or templates. It is timely to undertake these analyses, since none of the currently known methods addresses the multiplicative effects of large-angle calibration errors that contaminate the small-scale signal and present one of the most significant sources of error in the large-scale structure. The second part of the proposal is to precisely quantify the statistical and systematic errors in the reconstruction of the Integrated Sachs-Wolfe (ISW) contribution to the cosmic microwave background (CMB) sky map using information from galaxy surveys. Unlike the ISW contributions to CMB power, the ISW map reconstruction has not been studied in detail to date. We will create a nimble plug-and-play pipeline to ascertain how reliably a map from an arbitrary LSS survey can be used to separate the late-time and early-time contributions to CMB anisotropy at large angular scales. We will pay particular attention to partial sky coverage, incomplete redshift information, finite redshift range, and imperfect knowledge of the selection function for the galaxy survey. Our work should serve as the departure point for a variety of implications in cosmology, including the physical origin of the large-angle CMB "anomalies".

  10. A cumulative entropy method for distribution recognition of model error

    Science.gov (United States)

    Liang, Yingjie; Chen, Wen

    2015-02-01

    This paper develops a cumulative entropy method (CEM) to recognize the most suitable distribution for model error. In terms of the CEM, the Lévy stable distribution is employed to capture the statistical properties of model error. The strategies are tested on 250 experiments of axially loaded CFT steel stub columns in conjunction with the four national building codes of Japan (AIJ, 1997), China (DL/T, 1999), the Eurocode 4 (EU4, 2004), and the United States (AISC, 2005). The cumulative entropy method is validated as more computationally efficient than the Shannon entropy method. Compared with the Kolmogorov-Smirnov test and root mean square deviation, the CEM provides an alternative and powerful model selection criterion to recognize the most suitable distribution for the model error.

  11. Assessment of errors and uncertainty patterns in GIA modeling

    DEFF Research Database (Denmark)

    Barletta, Valentina Roberta; Spada, G.

    GIA modeling. GIA errors are also important in the far field of previously glaciated areas and in the time evolution of global indicators. In this regard we also account for other possible errors sources which can impact global indicators like the sea level history related to GIA. The thermal......During the last decade many efforts have been devoted to the assessment of global sea level rise and to the determination of the mass balance of continental ice sheets. In this context, the important role of glacial-isostatic adjustment (GIA) has been clearly recognized. Yet, in many cases only one...... in the literature. However, at least two major sources of errors remain. The first is associated with the ice models, spatial distribution of ice and history of melting (this is especially the case of Antarctica), the second with the numerical implementation of model features relevant to sea level modeling...

  12. Estimating Model Prediction Error: Should You Treat Predictions as Fixed or Random?

    Science.gov (United States)

    Wallach, Daniel; Thorburn, Peter; Asseng, Senthold; Challinor, Andrew J.; Ewert, Frank; Jones, James W.; Rotter, Reimund; Ruane, Alexander

    2016-01-01

    Crop models are important tools for impact assessment of climate change, as well as for exploring management options under current climate. It is essential to evaluate the uncertainty associated with predictions of these models. We compare two criteria of prediction error: MSEP_fixed, which evaluates the mean squared error of prediction for a model with fixed structure, parameters and inputs, and MSEP_uncertain(X), which evaluates the mean squared error averaged over the distributions of model structure, inputs and parameters. Comparison of model outputs with data can be used to estimate the former. The latter has a squared bias term, which can be estimated using hindcasts, and a model variance term, which can be estimated from a simulation experiment. The separate contributions to MSEP_uncertain(X) can be estimated using a random effects ANOVA. It is argued that MSEP_uncertain(X) is the more informative uncertainty criterion, because it is specific to each prediction situation.
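
    As a schematic of the decomposition (with toy numbers, not crop-model output), MSEP_uncertain(X) for a single prediction situation splits into a squared bias plus a model variance estimated from an ensemble of model/parameter/input draws:

        # Toy decomposition of prediction error into squared bias and model variance.
        import numpy as np

        rng = np.random.default_rng(4)
        y_obs = 5.0                                       # observed outcome for one situation
        n_draws = 2000

        # Each draw stands in for one plausible model version (structure/parameter/input draw)
        predictions = 4.2 + rng.normal(0.0, 0.8, n_draws)

        bias_sq = (predictions.mean() - y_obs) ** 2       # estimable from hindcast comparison
        model_var = predictions.var(ddof=1)               # estimable from a simulation experiment
        msep_uncertain = bias_sq + model_var

        print(f"bias^2 = {bias_sq:.3f}, model variance = {model_var:.3f}, "
              f"MSEP_uncertain = {msep_uncertain:.3f}")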

  13. Error modeling and tolerance design of a parallel manipulator with full-circle rotation

    Directory of Open Access Journals (Sweden)

    Yanbing Ni

    2016-05-01

    Full Text Available A method for improving the accuracy of a parallel manipulator with full-circle rotation is systematically investigated in this work via kinematic analysis, error modeling, sensitivity analysis, and tolerance allocation. First, a kinematic analysis of the mechanism is made using the space vector chain method. Using the results as a basis, an error model is formulated considering the main error sources. Position and orientation error-mapping models are established by mathematical transformation of the parallelogram structure characteristics. Second, a sensitivity analysis is performed on the geometric error sources. A global sensitivity evaluation index is proposed to evaluate the contribution of the geometric errors to the accuracy of the end-effector. The analysis results provide a theoretical basis for the allocation of tolerances to the parts of the mechanical design. Finally, based on the results of the sensitivity analysis, the design of the tolerances can be solved as a nonlinearly constrained optimization problem. A genetic algorithm is applied to carry out the allocation of the manufacturing tolerances of the parts. Accordingly, the tolerance ranges for nine kinds of geometrical error sources are obtained. The achievements made in this work can also be applied to other similar parallel mechanisms with full-circle rotation to improve error modeling and design accuracy.

  14. Data Quality in Linear Regression Models: Effect of Errors in Test Data and Errors in Training Data on Predictive Accuracy

    Directory of Open Access Journals (Sweden)

    Barbara D. Klein

    1999-01-01

    Full Text Available Although databases used in many organizations have been found to contain errors, little is known about the effect of these errors on predictions made by linear regression models. The paper uses a real-world example, the prediction of the net asset values of mutual funds, to investigate the effect of data quality on linear regression models. The results of two experiments are reported. The first experiment shows that the error rate and magnitude of error in data used in model prediction negatively affect the predictive accuracy of linear regression models. The second experiment shows that the error rate and the magnitude of error in data used to build the model positively affect the predictive accuracy of linear regression models. All findings are statistically significant. The findings have managerial implications for users and builders of linear regression models.
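
    A small Monte Carlo sketch of the two experimental settings (synthetic data, not the mutual-fund example): noise is added either to the predictors used at prediction time or to the predictors used to fit the model, and the resulting prediction RMSE is compared against the error-free case. The sketch only illustrates how such an experiment can be set up; it does not reproduce the paper's specific findings.

        # Effect of predictor errors in test vs. training data on a simple linear model.
        import numpy as np

        rng = np.random.default_rng(5)
        n, beta = 2000, 3.0
        x = rng.normal(size=n)
        y = beta * x + rng.normal(scale=0.5, size=n)

        def rmse_with_errors(err_in_train=0.0, err_in_test=0.0):
            x_fit = x + rng.normal(scale=err_in_train, size=n)    # noisy training predictors
            b = np.polyfit(x_fit, y, 1)                           # fit a simple linear model
            x_new = x + rng.normal(scale=err_in_test, size=n)     # noisy prediction-time predictors
            y_hat = np.polyval(b, x_new)
            return np.sqrt(np.mean((y - y_hat) ** 2))

        print("clean:          ", round(rmse_with_errors(), 3))
        print("errors in test: ", round(rmse_with_errors(err_in_test=0.5), 3))
        print("errors in train:", round(rmse_with_errors(err_in_train=0.5), 3))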

  15. A priori discretization error metrics for distributed hydrologic modeling applications

    Science.gov (United States)

    Liu, Hongli; Tolson, Bryan A.; Craig, James R.; Shafii, Mahyar

    2016-12-01

    Watershed spatial discretization is an important step in developing a distributed hydrologic model. A key difficulty in the spatial discretization process is maintaining a balance between the aggregation-induced information loss and the increase in computational burden caused by the inclusion of additional computational units. Objective identification of an appropriate discretization scheme still remains a challenge, in part because of the lack of quantitative measures for assessing discretization quality, particularly prior to simulation. This study proposes a priori discretization error metrics to quantify the information loss of any candidate discretization scheme without having to run and calibrate a hydrologic model. These error metrics are applicable to multi-variable and multi-site discretization evaluation and provide directly interpretable information to the hydrologic modeler about discretization quality. The first metric, a subbasin error metric, quantifies the routing information loss from discretization, and the second, a hydrological response unit (HRU) error metric, improves upon existing a priori metrics by quantifying the information loss due to changes in land cover or soil type property aggregation. The metrics are straightforward to understand and easy to recode. Informed by the error metrics, a two-step discretization decision-making approach is proposed with the advantage of reducing extreme errors and meeting the user-specified discretization error targets. The metrics and decision-making approach are applied to the discretization of the Grand River watershed in Ontario, Canada. Results show that information loss increases as discretization gets coarser. Moreover, results help to explain the modeling difficulties associated with smaller upstream subbasins since the worst discretization errors and highest error variability appear in smaller upstream areas instead of larger downstream drainage areas. Hydrologic modeling experiments under

  16. Two Error Models for Calibrating SCARA Robots based on the MDH Model

    Directory of Open Access Journals (Sweden)

    Li Xiaolong

    2017-01-01

    Full Text Available This paper describes the process of using two error models for calibrating Selective Compliance Assembly Robot Arm (SCARA) robots based on the modified Denavit-Hartenberg (MDH) model, with the aim of improving the robot's accuracy. One of the error models is the position error model, which uses robot position errors with respect to an accurate robot base frame built before the measurement commenced. The other is the distance error model, which uses only the robot moving distance to calculate errors. Because calibration requires the end-effector to be accurately measured, a laser tracker was used to measure the robot position and distance errors. After calibrating the robot, the end-effector locations were measured again while compensating with the error model parameters obtained from the calibration. The finding is that the robot's accuracy improved greatly after compensating with the calibrated parameters.

  17. Direct cointegration testing in error-correction models

    NARCIS (Netherlands)

    F.R. Kleibergen (Frank); H.K. van Dijk (Herman)

    1994-01-01

    An error correction model is specified having only exactly identified parameters, some of which reflect a possible departure from a cointegration model. Wald, likelihood ratio, and Lagrange multiplier statistics are derived to test for the significance of these parameters. The con

  18. Calibrating Car-Following Model Considering Measurement Errors

    Directory of Open Access Journals (Sweden)

    Chang-qiao Shao

    2013-01-01

    Full Text Available The car-following model has important applications in traffic and safety engineering. To enhance the accuracy of the model in predicting the behavior of individual drivers, considerable studies strive to improve model calibration technologies. However, microscopic car-following models are generally calibrated using macroscopic traffic data while ignoring measurement errors in variables, which leads to unreliable and erroneous conclusions. This paper aims to develop a technology to calibrate the well-known Van Aerde model. In particular, the effect of measurement errors in variables on the accuracy of the estimates is considered. In order to complete calibration of the model using microscopic data, a new parameter estimation method, named the two-step approach, is proposed. The result shows that the modified Van Aerde model is, to a certain extent, more reliable than the generic model.

  19. How well can we forecast future model error and uncertainty by mining past model performance data

    Science.gov (United States)

    Solomatine, Dimitri

    2016-04-01

    ) method by Koenker and Bassett in which linear regression is used to build predictive models for distribution quantiles [1] (b) the UNEEC method [2,3,7] which takes into account the input variables influencing such uncertainty and uses more advanced machine learning (non-linear) methods (e.g. neural networks or the k-NN method) (c) the recent DUBRAE method (Dynamic Uncertainty Model By Regression on Absolute Error), an autoregressive model of model residuals which first corrects the model residual and then employs an autoregressive statistical model for uncertainty prediction [5]. 2. The data uncertainty (parametric and/or input) - in this case we study the propagation of uncertainty (typically presented probabilistically) from parameters or inputs to the model outputs. For real complex non-linear functions (models) implemented in software, various versions of Monte Carlo simulation are used: values of parameters or inputs are sampled from the assumed distributions and the model is run multiple times to generate multiple outputs. The data generated by Monte Carlo analysis can be used to build a machine learning model which will be able to make predictions of model uncertainty for the future. This method is named MLUE (Machine Learning for Uncertainty Estimation) and is covered in [4,6]. 3. Structural uncertainty stemming from inadequate model structure. The paper discusses the possibilities and experiences of building models able to forecast (rather than analyse) residual and parametric uncertainty of hydrological models. References [1] Koenker, R., and G. Bassett (1978). Regression quantiles. Econometrica, 46(1), 33-50, doi:10.2307/1913643. [2] D.L. Shrestha, D.P. Solomatine (2006). Machine learning approaches for estimation of prediction interval for the model output. Neural Networks J., 19(2), 225-235. [3] D.P. Solomatine, D.L. Shrestha (2009). A novel method to estimate model uncertainty using machine learning techniques. Water Resources Res. 45, W00B11. [4] D. L
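
    A compact sketch of approach (a), quantile regression on model residuals (synthetic data; the single predictor here stands in for whatever input variables drive the error): fitted lower and upper quantile lines give a data-driven prediction interval for future model error.

        # Quantile regression bands for model residuals (illustrative data).
        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(6)
        flow = rng.gamma(2.0, 10.0, 1500)                     # predictor of residual spread
        resid = rng.normal(0.0, 0.05 * flow)                  # residuals grow with flow

        X = sm.add_constant(flow)
        q05 = sm.QuantReg(resid, X).fit(q=0.05).params        # lower-quantile model
        q95 = sm.QuantReg(resid, X).fit(q=0.95).params        # upper-quantile model

        flow_new = np.array([5.0, 20.0, 60.0])
        X_new = sm.add_constant(flow_new)
        lower, upper = X_new @ q05, X_new @ q95
        for f, lo, hi in zip(flow_new, lower, upper):
            print(f"flow={f:5.1f}  90% residual band: [{lo:6.2f}, {hi:6.2f}]")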

  20. Augmented GNSS differential corrections minimum mean square error estimation sensitivity to spatial correlation modeling errors.

    Science.gov (United States)

    Kassabian, Nazelie; Lo Presti, Letizia; Rispoli, Francesco

    2014-06-11

    Railway signaling is a safety system that has evolved over the last couple of centuries towards autonomous functionality. Recently, great effort has been devoted in this field towards the use and exploitation of Global Navigation Satellite System (GNSS) signals and GNSS augmentation systems, in view of lowering railway track equipment and maintenance costs, which is a priority to sustain the investments for modernizing the local and regional lines, most of which lack automatic train protection systems and are still manually operated. The objective of this paper is to assess the sensitivity of the Linear Minimum Mean Square Error (LMMSE) algorithm to modeling errors in the spatial correlation function that characterizes true pseudorange Differential Corrections (DCs). This study is inspired by the railway application; however, it applies to all transportation systems, including the road sector, that need to be complemented by an augmentation system in order to deliver accurate and reliable positioning with integrity specifications. A vector of noisy pseudorange DC measurements is simulated, assuming a Gauss-Markov model with a decay rate parameter inversely proportional to the correlation distance that exists between two points of a certain environment. The LMMSE algorithm is applied to this vector to estimate the true DC, and the estimation error is compared to the noise added during simulation. The results show that for large enough values of the ratio of correlation distance to Reference Station (RS) separation distance, the LMMSE brings a considerable advantage in terms of estimation error accuracy and precision. Conversely, the LMMSE algorithm may deteriorate the quality of the DC measurements whenever the ratio falls below a certain threshold.

  1. Augmented GNSS Differential Corrections Minimum Mean Square Error Estimation Sensitivity to Spatial Correlation Modeling Errors

    Directory of Open Access Journals (Sweden)

    Nazelie Kassabian

    2014-06-01

    Full Text Available Railway signaling is a safety system that has evolved over the last couple of centuries towards autonomous functionality. Recently, great effort has been devoted in this field towards the use and exploitation of Global Navigation Satellite System (GNSS) signals and GNSS augmentation systems, in view of lowering railway track equipment and maintenance costs, which is a priority to sustain the investments for modernizing the local and regional lines, most of which lack automatic train protection systems and are still manually operated. The objective of this paper is to assess the sensitivity of the Linear Minimum Mean Square Error (LMMSE) algorithm to modeling errors in the spatial correlation function that characterizes true pseudorange Differential Corrections (DCs). This study is inspired by the railway application; however, it applies to all transportation systems, including the road sector, that need to be complemented by an augmentation system in order to deliver accurate and reliable positioning with integrity specifications. A vector of noisy pseudorange DC measurements is simulated, assuming a Gauss-Markov model with a decay rate parameter inversely proportional to the correlation distance that exists between two points of a certain environment. The LMMSE algorithm is applied to this vector to estimate the true DC, and the estimation error is compared to the noise added during simulation. The results show that for large enough values of the ratio of correlation distance to Reference Station (RS) separation distance, the LMMSE brings a considerable advantage in terms of estimation error accuracy and precision. Conversely, the LMMSE algorithm may deteriorate the quality of the DC measurements whenever the ratio falls below a certain threshold.
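
    To make the estimator concrete, here is a one-dimensional toy of the setup in these two records: DCs with an exponential (Gauss-Markov) spatial correlation are observed with noise at a few reference stations, and the correction at a user location is obtained with the LMMSE formula. Station geometry, variances and the decay rate are invented values.

        # LMMSE estimate of a differential correction under Gauss-Markov spatial correlation.
        import numpy as np

        rng = np.random.default_rng(7)
        d_rs = np.array([0.0, 30.0, 80.0, 150.0])          # reference-station positions (km)
        d_user = 50.0                                       # user position (km)
        corr_dist, sigma_dc, sigma_noise = 100.0, 1.0, 0.3

        C_yy = sigma_dc**2 * np.exp(-np.abs(d_rs[:, None] - d_rs[None, :]) / corr_dist) \
               + sigma_noise**2 * np.eye(len(d_rs))         # measurement covariance
        C_xy = sigma_dc**2 * np.exp(-np.abs(d_user - d_rs) / corr_dist)   # cross-covariance

        y = rng.multivariate_normal(np.zeros(len(d_rs)), C_yy)           # simulated noisy DCs
        dc_user = C_xy @ np.linalg.solve(C_yy, y)                        # LMMSE estimate
        print(np.round(dc_user, 3))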

  2. On the Numerical Modelling and Error Compensation for General Gough-Stewart Platform

    Directory of Open Access Journals (Sweden)

    Eusebio Hernandez

    2014-11-01

    Full Text Available Parallel robots are specially designed to perform high-precision tasks. Nevertheless, manufacturing, assembling and control issues can reduce their capacity to perform adequately. Observing the measurement data acquired with high-precision devices - such as laser-based instruments - it is not surprising that the error data follow patterns or have a structure because, in many cases, the greatest error comes from a mechanical bias introduced by manufacturing issues. Even though we cannot determine with certainty where the error comes from, a pattern in the measured data suggests that it can feasibly be modelled and corrected - in a significant proportion - by purely software applications, without the need to disassemble or re-manufacture any component. This work deals with the problem of finding a mathematical model which adequately fits the error data from the legs of a general Gough-Stewart platform. Hence, we obtain an expression which can be subtracted from the control parameters in order to compensate for the inherent mechanical error in the legs. The purpose of this article is two-fold: (1) to present numerical results of the beneficial effects of the error compensation in the legs as well as in the end-effector, and (2) to introduce a numerical methodology to find a model for error compensation and to numerically simulate its effects. Numerical, graphical and statistical evidence of the error improvements, according to this methodology, is provided.
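
    A numerical toy of the compensation idea (invented leg-error data, not the paper's measurements): a low-order polynomial is fitted to the structured part of the measured leg error and subtracted from the commanded leg length in software.

        # Fit a leg-error model and apply it as a software compensation term.
        import numpy as np

        rng = np.random.default_rng(8)
        leg_cmd = np.linspace(500.0, 900.0, 60)                  # commanded leg lengths (mm)
        leg_err = 0.002 * (leg_cmd - 700.0) + 0.15 * np.sin(leg_cmd / 80.0) \
                  + rng.normal(0.0, 0.02, leg_cmd.size)          # structured error + noise

        coeffs = np.polyfit(leg_cmd, leg_err, deg=5)             # error model for this leg
        leg_corrected = leg_cmd - np.polyval(coeffs, leg_cmd)    # compensated command

        residual = leg_err - np.polyval(coeffs, leg_cmd)
        print("rms error before:", round(np.std(leg_err), 3),
              " after:", round(np.std(residual), 3))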

  3. Prediction error, ketamine and psychosis: An updated model.

    Science.gov (United States)

    Corlett, Philip R; Honey, Garry D; Fletcher, Paul C

    2016-11-01

    In 2007, we proposed an explanation of delusion formation as aberrant prediction error-driven associative learning. Further, we argued that the NMDA receptor antagonist ketamine provided a good model for this process. Subsequently, we validated the model in patients with psychosis, relating aberrant prediction error signals to delusion severity. During the ensuing period, we have developed these ideas, drawing on the simple principle that brains build a model of the world and refine it by minimising prediction errors, as well as using it to guide perceptual inferences. While previously we focused on the prediction error signal per se, an updated view takes into account its precision, as well as the precision of prior expectations. With this expanded perspective, we see several possible routes to psychotic symptoms - which may explain the heterogeneity of psychotic illness, as well as the fact that other drugs, with different pharmacological actions, can produce psychotomimetic effects. In this article, we review the basic principles of this model and highlight specific ways in which prediction errors can be perturbed, in particular considering the reliability and uncertainty of predictions. The expanded model explains hallucinations as perturbations of the uncertainty mediated balance between expectation and prediction error. Here, expectations dominate and create perceptions by suppressing or ignoring actual inputs. Negative symptoms may arise due to poor reliability of predictions in service of action. By mapping from biology to belief and perception, the account proffers new explanations of psychosis. However, challenges remain. We attempt to address some of these concerns and suggest future directions, incorporating other symptoms into the model, building towards better understanding of psychosis. © The Author(s) 2016.

  4. Precise Asymptotics of Error Variance Estimator in Partially Linear Models

    Institute of Scientific and Technical Information of China (English)

    Shao-jun Guo; Min Chen; Feng Liu

    2008-01-01

    In this paper, we focus our attention on the precise asymptotics of the error variance estimator in partially linear regression models, y_i = x_i^T β + g(t_i) + ε_i, 1 ≤ i ≤ n, where {ε_i, i = 1, ..., n} are i.i.d. random errors with mean 0 and positive finite variance σ². Following the ideas of Allan Gut and Aurel Spataru [7,8] and Zhang [21] on precise asymptotics in the Baum-Katz and Davis laws of large numbers and precise rates in laws of the iterated logarithm, respectively, and subject to some regularity conditions, we obtain the corresponding results in partially linear regression models.

  5. Improved Systematic Pointing Error Model for the DSN Antennas

    Science.gov (United States)

    Rochblatt, David J.; Withington, Philip M.; Richter, Paul H.

    2011-01-01

    New pointing models have been developed for large reflector antennas whose construction is founded on an elevation-over-azimuth mount. At JPL, the new models were applied to the Deep Space Network (DSN) 34-meter antenna subnet for correction of their systematic pointing errors; this achieved significant improvement in performance at Ka-band (32-GHz) and X-band (8.4-GHz). The new models provide pointing improvements relative to the traditional models by a factor of two to three, which translates to approximately 3-dB performance improvement at Ka-band. For radio science experiments where blind pointing performance is critical, the new innovation provides an enabling technology. The model extends the traditional physical models with higher-order mathematical terms, thereby increasing the resolution of the model for a better fit to the underlying systematic imperfections that are the cause of antenna pointing errors. The philosophy of the traditional model was that all mathematical terms in the model must be traced to a physical phenomenon causing antenna pointing errors. The traditional physical terms are: antenna axis tilts, gravitational flexure, azimuth collimation, azimuth encoder fixed offset, azimuth and elevation skew, elevation encoder fixed offset, residual refraction, azimuth encoder scale error, and antenna pointing de-rotation terms for beam waveguide (BWG) antennas. Besides the addition of spherical harmonics terms, the new models differ from the traditional ones in that the coefficients for the cross-elevation and elevation corrections are completely independent and may be different, while in the traditional model, some of the terms are identical. In addition, the new software allows for all-sky or mission-specific model development, and can utilize the previously used model as an a priori estimate for the development of the updated models.

  6. Stochastic modelling and analysis of IMU sensor errors

    Science.gov (United States)

    Zaho, Y.; Horemuz, M.; Sjöberg, L. E.

    2011-12-01

    The performance of a GPS/INS integration system is greatly determined by the ability of the stand-alone INS system to determine position and attitude during GPS outages. The positional and attitude precision degrades rapidly during GPS outage due to INS sensor errors. With the advantages of low price and small volume, Micro Electrical Mechanical Sensors (MEMS) have been widely used in GPS/INS integration. However, a standalone MEMS unit can keep a reasonable positional precision for only a few seconds due to systematic and random sensor errors. General stochastic error sources existing in inertial sensors can be modelled (IEEE STD 647, 2006) as Quantization Noise, Random Walk, Bias Instability, Rate Random Walk and Rate Ramp. Here we apply different methods to analyze the stochastic sensor errors, i.e. autoregressive modelling, the Gauss-Markov process, Power Spectral Density and Allan Variance. Then tests on a MEMS-based inertial measurement unit were carried out with these methods. The results show that the different methods give similar estimates of the stochastic error model parameters. These values can be used further in the Kalman filter for better navigation accuracy and in the Doppler frequency estimate for faster acquisition after GPS signal outage.
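
    As an illustration of one of these analysis methods, the overlapping Allan deviation of a simulated rate signal (white noise plus a random-walk bias; magnitudes arbitrary) can be computed directly. On a log-log plot, a slope of -1/2 at short averaging times indicates angle random walk, a flat region indicates bias instability, and a slope of +1/2 at long averaging times indicates rate random walk.

        # Overlapping Allan deviation of a simulated gyro rate signal.
        import numpy as np

        def allan_deviation(omega, fs, taus):
            dt = 1.0 / fs
            theta = np.cumsum(omega) * dt                     # integrate rate to angle
            adev = []
            for tau in taus:
                m = int(round(tau * fs))
                d = theta[2 * m:] - 2 * theta[m:-m] + theta[:-2 * m]
                adev.append(np.sqrt(np.mean(d**2) / (2.0 * tau**2)))
            return np.array(adev)

        fs, n = 100.0, 200_000
        rng = np.random.default_rng(9)
        omega = rng.normal(0.0, 0.05, n) + np.cumsum(rng.normal(0.0, 1e-5, n))

        taus = np.logspace(-1, 2, 10)
        for tau, ad in zip(taus, allan_deviation(omega, fs, taus)):
            print(f"tau = {tau:7.2f} s   sigma = {ad:.5f}")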

  7. Application of variance components estimation to calibrate geoid error models.

    Science.gov (United States)

    Guo, Dong-Mei; Xu, Hou-Ze

    2015-01-01

    The method of using Global Positioning System-leveling data to obtain orthometric heights has been well studied. A simple formulation for the weighted least squares problem has been presented in an earlier work. This formulation allows one to directly employ errors-in-variables models that completely describe the covariance matrices of the observables. However, the important question of what accuracy level can be achieved has not yet been satisfactorily answered by this traditional formulation. One of the main reasons for this is the incorrectness of the stochastic models used in the adjustment, which motivates improving the stochastic models of the measurement noise. Therefore, determining the stochastic model of the observables in the combined adjustment of heterogeneous height types is the main focus of this paper. Firstly, the well-known method of variance component estimation is employed to calibrate the errors of heterogeneous height data in a combined least squares adjustment of ellipsoidal, orthometric and gravimetric geoid heights. Specifically, the iterative algorithms of minimum norm quadratic unbiased estimation are used to estimate the variance components for each of the heterogeneous observations. Secondly, two different statistical models are presented to illustrate the theory. The first method directly uses the errors-in-variables as a priori covariance matrices, and the second method analyzes the biases of the variance components and then proposes bias-corrected variance component estimators. Several numerical test results show the capability and effectiveness of the variance component estimation procedure in the combined adjustment for calibrating the geoid error model.

  8. Influences of observation errors in eddy flux data on inverse model parameter estimation

    Directory of Open Access Journals (Sweden)

    G. Lasslop

    2008-09-01

    Full Text Available Eddy covariance data are increasingly used to estimate parameters of ecosystem models. For proper maximum likelihood parameter estimates the error structure in the observed data has to be fully characterized. In this study we propose a method to characterize the random error of the eddy covariance flux data, and analyse the error distribution, standard deviation, cross- and autocorrelation of CO2 and H2O flux errors at four different European eddy covariance flux sites. Moreover, we examine how the treatment of those errors and of additional systematic errors influences statistical estimates of parameters and their associated uncertainties with three models of increasing complexity – a hyperbolic light response curve, a light response curve coupled to water fluxes and the SVAT scheme BETHY. In agreement with previous studies we find that the error standard deviation scales with the flux magnitude. The previously found strongly leptokurtic error distribution is revealed to be largely due to a superposition of almost Gaussian distributions with standard deviations varying by flux magnitude. The cross-correlations of CO2 and H2O fluxes were in all cases negligible (R2 below 0.2), while the autocorrelation is usually below 0.6 at a lag of 0.5 h and decays rapidly at larger time lags. This implies that in these cases the weighted least squares criterion yields maximum likelihood estimates. To study the influence of the observation errors on model parameter estimates we used synthetic datasets, based on observations of two different sites. We first fitted the respective models to observations and then added the random error estimates described above and the systematic error, respectively, to the model output. This strategy enables us to compare the estimated parameters with true parameters. We illustrate that the correct implementation of the random error standard deviation scaling with flux

  9. Error Modelling and Experimental Validation for a Planar 3-PPR Parallel Manipulator

    DEFF Research Database (Denmark)

    Wu, Guanglei; Bai, Shaoping; Kepler, Jørgen Asbøl

    2011-01-01

    In this paper, the positioning error of a 3-PPR planar parallel manipulator is studied with an error model and experimental validation. First, the displacement and workspace are analyzed. An error model considering both configuration errors and joint clearance errors is established. Using this model, the maximum positioning error was estimated for a U-shape PPR planar manipulator, the results being compared with the experimental measurements. It is found that the error distribution from the simulation approximates that of the measurements.

  11. Multiplicity Control in Structural Equation Modeling

    Science.gov (United States)

    Cribbie, Robert A.

    2007-01-01

    Researchers conducting structural equation modeling analyses rarely, if ever, control for the inflated probability of Type I errors when evaluating the statistical significance of multiple parameters in a model. In this study, the Type I error control, power and true model rates of familywise and false discovery rate controlling procedures were…

  12. Accelerating Monte Carlo Markov chains with proxy and error models

    Science.gov (United States)

    Josset, Laureline; Demyanov, Vasily; Elsheikh, Ahmed H.; Lunati, Ivan

    2015-12-01

    In groundwater modeling, Monte Carlo Markov Chain (MCMC) simulations are often used to calibrate aquifer parameters and propagate the uncertainty to the quantity of interest (e.g., pollutant concentration). However, this approach requires a large number of flow simulations and incurs high computational cost, which prevents a systematic evaluation of the uncertainty in the presence of complex physical processes. To avoid this computational bottleneck, we propose to use an approximate model (proxy) to predict the response of the exact model. Here, we use a proxy that entails a very simplified description of the physics with respect to the detailed physics described by the "exact" model. The error model accounts for the simplification of the physical process; and it is trained on a learning set of realizations, for which both the proxy and exact responses are computed. First, the key features of the set of curves are extracted using functional principal component analysis; then, a regression model is built to characterize the relationship between the curves. The performance of the proposed approach is evaluated on the Imperial College Fault model. We show that the joint use of the proxy and the error model to infer the model parameters in a two-stage MCMC set-up allows longer chains at a comparable computational cost. Unnecessary evaluations of the exact responses are avoided through a preliminary evaluation of the proposal made on the basis of the corrected proxy response. The error model trained on the learning set is crucial to provide a sufficiently accurate prediction of the exact response and guide the chains to the low misfit regions. The proposed methodology can be extended to multiple-chain algorithms or other Bayesian inference methods. Moreover, FPCA is not limited to the specific presented application and offers a general framework to build error models.
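
    The two-stage logic can be illustrated with a deliberately trivial one-dimensional delayed-acceptance Metropolis sampler (all three functions are placeholders, not the paper's flow models or its FPCA-based error model): a candidate is first screened with the error-model-corrected proxy, and the expensive model is evaluated only for candidates that pass the screening.

        # Two-stage (delayed-acceptance) Metropolis with a corrected proxy, toy 1-D example.
        import numpy as np

        rng = np.random.default_rng(10)

        def exact_misfit(theta):                      # expensive model (stand-in)
            return (theta - 2.0) ** 2

        def proxy_misfit(theta):                      # simplified-physics proxy (biased)
            return (theta - 1.6) ** 2

        def error_model(p):                           # correction trained on a learning set
            return 1.05 * p + 0.7

        theta = 0.0
        m_exact_curr = exact_misfit(theta)
        m_proxy_curr = error_model(proxy_misfit(theta))
        chain, n_exact = [], 0

        for _ in range(5000):
            prop = theta + rng.normal(0.0, 0.5)
            m_proxy_prop = error_model(proxy_misfit(prop))
            # stage 1: cheap screening with the corrected proxy
            if np.log(rng.random()) < -(m_proxy_prop - m_proxy_curr):
                m_exact_prop = exact_misfit(prop)
                n_exact += 1
                # stage 2: delayed-acceptance correction with the exact model
                if np.log(rng.random()) < (-(m_exact_prop - m_exact_curr)
                                           + (m_proxy_prop - m_proxy_curr)):
                    theta = prop
                    m_exact_curr, m_proxy_curr = m_exact_prop, m_proxy_prop
            chain.append(theta)

        print("posterior mean:", round(float(np.mean(chain[1000:])), 2),
              "| exact-model evaluations:", n_exact)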

  13. EMPIRICAL LIKELIHOOD FOR LINEAR MODELS UNDER m-DEPENDENT ERRORS

    Institute of Scientific and Technical Information of China (English)

    Qin Yongsong; Jiang Bo; Li Yufang

    2005-01-01

    In this paper, the empirical likelihood confidence regions for the regression coefficient in a linear model are constructed under m-dependent errors. It is shown that the blockwise empirical likelihood is a good way to deal with dependent samples.

  14. Bayesian network models for error detection in radiotherapy plans.

    Science.gov (United States)

    Kalet, Alan M; Gennari, John H; Ford, Eric C; Phillips, Mark H

    2015-04-07

    The purpose of this study is to design and develop a probabilistic network for detecting errors in radiotherapy plans for use at the time of initial plan verification. Our group has initiated a multi-pronged approach to reduce these errors. We report on our development of Bayesian models of radiotherapy plans. Bayesian networks consist of joint probability distributions that define the probability of one event, given some set of other known information. Using the networks, we find the probability of obtaining certain radiotherapy parameters, given a set of initial clinical information. A low probability in a propagated network then corresponds to potential errors to be flagged for investigation. To build our networks we first interviewed medical physicists and other domain experts to identify the relevant radiotherapy concepts and their associated interdependencies and to construct a network topology. Next, to populate the network's conditional probability tables, we used the Hugin Expert software to learn parameter distributions from a subset of de-identified data derived from a radiation oncology based clinical information database system. These data represent 4990 unique prescription cases over a 5 year period. Under test case scenarios with approximately 1.5% introduced error rates, network performance produced areas under the ROC curve of 0.88, 0.98, and 0.89 for the lung, brain and female breast cancer error detection networks, respectively. Comparison of the brain network to human experts performance (AUC of 0.90 ± 0.01) shows the Bayes network model performs better than domain experts under the same test conditions. Our results demonstrate the feasibility and effectiveness of comprehensive probabilistic models as part of decision support systems for improved detection of errors in initial radiotherapy plan verification procedures.

  15. The effects of weighting function errors on spatial filters for structural control. [sensors for vibration damping

    Science.gov (United States)

    Lindner, Douglas K.; Reichard, Karl M.

    1992-01-01

    Distributed-effect sensors, which respond to spatially distributed inputs over a significant gauge length, encompass piezoelectric laminate films, modal-domain optical fiber sensors, and holographic sensors; they can be fabricated with spatially varying sensitivity to a distributed measurand for spatial filtering. Such spatial filters are configurable to extract various structural parameters from distributed measurements that cannot be directly measured by sensors. A modeling framework is presented for the integration of distributed-effect sensors into state-space structural models, noting the effects of fabrication errors on sensor operation.

  16. Vertical mixing in atmospheric tracer transport models: error characterization and propagation

    Directory of Open Access Journals (Sweden)

    C. Gerbig

    2008-02-01

    Full Text Available Imperfect representation of vertical mixing near the surface in atmospheric transport models leads to uncertainties in modelled tracer mixing ratios. When using the atmosphere as an integrator to derive surface-atmosphere exchange from mixing ratio observations made in the atmospheric boundary layer, this uncertainty has to be quantified and taken into account. A comparison between radiosonde-derived mixing heights and mixing heights derived from ECMWF meteorological data during May–June 2005 in Europe revealed random discrepancies of about 40% for the daytime with insignificant bias errors, and much larger values approaching 100% for nocturnal mixing layers with bias errors also exceeding 50%. The Stochastic Time Inverted Lagrangian Transport (STILT) model was used to propagate this uncertainty into CO2 mixing ratio uncertainties, accounting for spatial and temporal error covariance. Average values of 3 ppm were found for the 2-month period, indicating that this represents a large fraction of the overall uncertainty. A pseudo data experiment shows that the error propagation with STILT avoids biases in flux retrievals when applied in inversions. The results indicate that flux inversions employing transport models based on current generation meteorological products have misrepresented an important part of the model error structure likely leading to biases in the estimated mean and uncertainties. We strongly recommend including the solution presented in this work: better, higher resolution atmospheric models, a proper description of correlated random errors, and a modification of the overall sampling strategy.

  17. Multivariate DCC-GARCH Model: -With Various Error Distributions

    OpenAIRE

    Orskaug, Elisabeth

    2009-01-01

    In this thesis we have studied the DCC-GARCH model with Gaussian, Student's t and skew Student's t-distributed errors. For a basic understanding of the GARCH model, the univariate GARCH and multivariate GARCH models in general were discussed before the DCC-GARCH model was considered. The maximum likelihood method is used to estimate the parameters. The estimation of the correctly specified likelihood is difficult, and hence the DCC model was designed to allow for two-stage estim...

  18. Error Assessment in Modeling with Fractal Brownian Motions

    CERN Document Server

    Qiao, Bingqiang

    2013-01-01

    To model a given time series $F(t)$ with fractal Brownian motions (fBms), it is necessary to have appropriate error assessment for related quantities. Usually the fractal dimension $D$ is derived from the Hurst exponent $H$ via the relation $D=2-H$, and the Hurst exponent can be evaluated by analyzing the dependence of the rescaled range $\langle|F(t+\tau)-F(t)|\rangle$ on the time span $\tau$. For fBms, the error of the rescaled range not only depends on data sampling but also varies with $H$ due to the presence of long term memory. This error for a given time series then cannot be assessed without knowing the fractal dimension. We carry out extensive numerical simulations to explore the error of the rescaled range of fBms and find that for $0error of $\langle|F(t+\tau)-F(t)|\rangle$. The e...

  19. An Empirical Point Error Model for TLS Derived Point Clouds

    Science.gov (United States)

    Ozendi, Mustafa; Akca, Devrim; Topan, Hüseyin

    2016-06-01

    The random error pattern of point clouds has a significant effect on the quality of the final 3D model. The magnitude and distribution of random errors should be modelled numerically. This work aims at developing such an anisotropic point error model, specifically for terrestrial laser scanner (TLS) acquired 3D point clouds. A priori precisions of the basic TLS observations, which are the range, horizontal angle and vertical angle, are determined by predefined and practical measurement configurations, performed in real-world test environments. The a priori precisions of the horizontal (σθ) and vertical (σα) angles are constant for each point of a data set, and can be determined directly through repetitive scanning of the same environment. In our practical tests, the precisions of the horizontal and vertical angles were found to be σθ = ±36.6 cc and σα = ±17.8 cc, respectively. On the other hand, the a priori precision of the range observation (σρ) is assumed to be a function of range, incidence angle of the incoming laser ray, and reflectivity of the object surface. Hence, it is a variable, and is computed for each point individually by employing an empirically developed formula varying as σρ = ±2–12 mm for a FARO Focus X330 laser scanner. This procedure was followed by the computation of error ellipsoids of each point using the law of variance-covariance propagation. The direction and size of the error ellipsoids were computed by the principal components transformation. The usability and feasibility of the model was investigated in real-world scenarios. These investigations validated the suitability and practicality of the proposed method.
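
    The variance-covariance propagation step mentioned in the record can be sketched as follows; the spherical-to-Cartesian observation model, the unit conversion for centesimal arc seconds, the function names and the numerical inputs are assumptions made for illustration, not the authors' exact formulation.

        import numpy as np

        CC_TO_RAD = np.pi / 200.0 / 10_000.0   # 1 cc (centesimal arc second) in radians

        def point_error_ellipsoid(rho, theta, alpha, s_rho, s_theta_cc, s_alpha_cc):
            """Propagate a priori TLS precisions (range, horizontal and vertical angle)
            to a Cartesian covariance and return error-ellipsoid axes and directions."""
            s_theta = s_theta_cc * CC_TO_RAD
            s_alpha = s_alpha_cc * CC_TO_RAD
            # Assumed model: x = r cos(a) cos(t), y = r cos(a) sin(t), z = r sin(a)
            ca, sa, ct, st = np.cos(alpha), np.sin(alpha), np.cos(theta), np.sin(theta)
            J = np.array([[ca * ct, -rho * ca * st, -rho * sa * ct],
                          [ca * st,  rho * ca * ct, -rho * sa * st],
                          [sa,       0.0,            rho * ca     ]])
            cov_obs = np.diag([s_rho**2, s_theta**2, s_alpha**2])
            cov_xyz = J @ cov_obs @ J.T                 # law of variance-covariance propagation
            vals, vecs = np.linalg.eigh(cov_xyz)        # principal components transformation
            return np.sqrt(vals), vecs                  # 1-sigma semi-axes and their directions

        # Illustrative numbers in the spirit of the record (not the authors' exact model).
        axes, dirs = point_error_ellipsoid(rho=30.0, theta=0.3, alpha=0.1,
                                           s_rho=0.004, s_theta_cc=36.6, s_alpha_cc=17.8)
        print(axes)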

  20. Efficient and Error-Correcting Data Structures for Membership and Polynomial Evaluation

    CERN Document Server

    Chen, Victor; de Wolf, Ronald

    2009-01-01

    We construct efficient data structures that are resilient against a constant fraction of adversarial noise. Our model requires that the decoder answers most queries correctly with high probability and for the remaining queries, the decoder with high probability either answers correctly or declares "don't know." Furthermore, if there is no noise on the data structure, it answers all queries correctly with high probability. Our model is the common generalization of a model proposed recently by de Wolf and the notion of "relaxed locally decodable codes" developed in the PCP literature. We measure the efficiency of a data structure in terms of its length, measured by the number of bits in its representation, and query-answering time, measured by the number of bit-probes to the (possibly corrupted) representation. In this work, we study two data structure problems: membership and polynomial evaluation. We show that these two problems have constructions that are simultaneously efficient and error-correcting.

  1. Error Correction and the Structure of Inter-Trial Fluctuations in a Redundant Movement Task.

    Science.gov (United States)

    John, Joby; Dingwell, Jonathan B; Cusumano, Joseph P

    2016-09-01

    We study inter-trial movement fluctuations exhibited by human participants during the repeated execution of a virtual shuffleboard task. Focusing on skilled performance, theoretical analysis of a previously-developed general model of inter-trial error correction is used to predict the temporal and geometric structure of variability near a goal equivalent manifold (GEM). The theory also predicts that the goal-level error scales linearly with intrinsic body-level noise via the total body-goal sensitivity, a new derived quantity that illustrates how task performance arises from the interaction of active error correction and passive sensitivity properties along the GEM. Linear models estimated from observed fluctuations, together with a novel application of bootstrapping to the estimation of dynamical and correlation properties of the inter-trial dynamics, are used to experimentally confirm all predictions, thus validating our model. In addition, we show that, unlike "static" variability analyses, our dynamical approach yields results that are independent of the coordinates used to measure task execution and, in so doing, provides a new set of task coordinates that are intrinsic to the error-regulation process itself.

  2. Estimation in the polynomial errors-in-variables model

    Institute of Scientific and Technical Information of China (English)

    ZHANG; Sanguo

    2002-01-01

    [1] Kendall, M. G., Stuart, A., The Advanced Theory of Statistics, Vol. 2, New York: Charles Griffin, 1979. [2] Fuller, W. A., Measurement Error Models, New York: Wiley, 1987. [3] Carroll, R. J., Ruppert, D., Stefanski, L. A., Measurement Error in Nonlinear Models, London: Chapman & Hall, 1995. [4] Stout, W. F., Almost Sure Convergence, New York: Academic Press, 1974, 154. [5] Petrov, V. V., Sums of Independent Random Variables, New York: Springer-Verlag, 1975, 272. [6] Zhang, S. G., Chen, X. R., Consistency of modified MLE in EV model with replicated observation, Science in China, Ser. A, 2001, 44(3): 304-310. [7] Lai, T. L., Robbins, H., Wei, C. Z., Strong consistency of least squares estimates in multiple regression, J. Multivariate Anal., 1979, 9: 343-362.

  3. A Model for Geometry-Dependent Errors in Length Artifacts.

    Science.gov (United States)

    Sawyer, Daniel; Parry, Brian; Phillips, Steven; Blackburn, Chris; Muralikrishnan, Bala

    2012-01-01

    We present a detailed model of dimensional changes in long length artifacts, such as step gauges and ball bars, due to bending under gravity. The comprehensive model is based on evaluation of the gauge points relative to the neutral bending surface. It not only yields the errors observed when the gauge points are located off the neutral bending surface of a bar or rod, but also reveals the significant error associated with out-of-straightness of a bar or rod even if the gauge points are located in the neutral bending surface. For example, one experimental result shows a length change of greater than 1.5 µm on a 1 m ball bar with an out-of-straightness of 0.4 mm. This and other results are in agreement with the model presented in this paper.

  4. Trans-dimensional inversion of microtremor array dispersion data with hierarchical autoregressive error models

    Science.gov (United States)

    Dettmer, Jan; Molnar, Sheri; Steininger, Gavin; Dosso, Stan E.; Cassidy, John F.

    2012-02-01

    This paper applies a general trans-dimensional Bayesian inference methodology and hierarchical autoregressive data-error models to the inversion of microtremor array dispersion data for shear wave velocity (vs) structure. This approach accounts for the limited knowledge of the optimal earth model parametrization (e.g. the number of layers in the vs profile) and of the data-error statistics in the resulting vs parameter uncertainty estimates. The assumed earth model parametrization influences estimates of parameter values and uncertainties due to different parametrizations leading to different ranges of data predictions. The support of the data for a particular model is often non-unique and several parametrizations may be supported. A trans-dimensional formulation accounts for this non-uniqueness by including a model-indexing parameter as an unknown so that groups of models (identified by the indexing parameter) are considered in the results. The earth model is parametrized in terms of a partition model with interfaces given over a depth-range of interest. In this work, the number of interfaces (layers) in the partition model represents the trans-dimensional model indexing. In addition, serial data-error correlations are addressed by augmenting the geophysical forward model with a hierarchical autoregressive error model that can account for a wide range of error processes with a small number of parameters. Hence, the limited knowledge about the true statistical distribution of data errors is also accounted for in the earth model parameter estimates, resulting in more realistic uncertainties and parameter values. Hierarchical autoregressive error models do not rely on point estimates of the model vector to estimate data-error statistics, and have no requirement for computing the inverse or determinant of a data-error covariance matrix. This approach is particularly useful for trans-dimensional inverse problems, as point estimates may not be representative of the

  5. Error structure of enzyme kinetic experiments. Implications for weighting in regression analysis of experimental data.

    Science.gov (United States)

    Askelöf, P; Korsfeldt, M; Mannervik, B

    1976-10-01

    Knowledge of the error structure of a given set of experimental data is a necessary prerequisite for incisive analysis and for discrimination between alternative mathematical models of the data set. A reaction system consisting of glutathione S-transferase A (glutathione S-aryltransferase), glutathione, and 3,4-dichloro-1-nitrobenzene was investigated under steady-state conditions. It was found that the experimental error increased with initial velocity, v, and that the variance (estimated by replicates) could be described by a polynomial in v, Var(v) = K0 + K1·v + K2·v^2, or by a power function, Var(v) = K0 + K1·v^K2. These equations were good approximations irrespective of whether different v values were generated by changing substrate or enzyme concentrations. The selection of these models was based mainly on experiments involving varying enzyme concentration, which, unlike v, is not considered a stochastic variable. Different models of the variance, expressed as functions of enzyme concentration, were examined by regression analysis, and the models could then be transformed to functions in which velocity is substituted for enzyme concentration owing to the proportionality between these variables. Thus, neither the absolute nor the relative error was independent of velocity, a result previously obtained for glutathione reductase in this laboratory [BioSystems 7, 101-119 (1975)]. If the experimental errors or velocities were standardized by division with their corresponding mean velocity value they showed a normal (Gaussian) distribution provided that the coefficient of variation was approximately constant for the data considered. Furthermore, it was established that the errors in the independent variables (enzyme and substrate concentrations) were small in comparison with the error in the velocity determinations. For weighting in regression analysis the inverted value of the local variance in each experimental point should be used. It was found that the
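
    A hedged sketch of how such a variance model translates into weights for regression (weight = inverse of the local variance): the replicate variances are first fitted with the polynomial model, and the fitted variances are then supplied as per-point standard deviations to a nonlinear fit. The Michaelis-Menten rate law, the function names and all numbers below are placeholders, not data from the study.

        import numpy as np
        from scipy.optimize import curve_fit

        # Hypothetical replicate data: substrate concentration s, mean velocity v, replicate variance.
        s = np.array([0.05, 0.1, 0.2, 0.5, 1.0, 2.0, 5.0])
        v = np.array([0.9, 1.6, 2.6, 4.1, 5.2, 5.9, 6.4])
        var_rep = np.array([0.004, 0.010, 0.022, 0.055, 0.090, 0.115, 0.135])

        # Step 1: fit the polynomial variance model Var(v) = K0 + K1*v + K2*v**2 to the replicates.
        K2, K1, K0 = np.polyfit(v, var_rep, 2)

        # Step 2: weighted fit of a rate law (Michaelis-Menten here, purely as an example),
        # weighting each point by the inverse of its local variance, i.e. sigma = sqrt(Var(v)).
        def michaelis_menten(s, vmax, km):
            return vmax * s / (km + s)

        sigma = np.sqrt(K0 + K1 * v + K2 * v**2)
        popt, pcov = curve_fit(michaelis_menten, s, v, p0=[6.0, 0.3], sigma=sigma,
                               absolute_sigma=True)
        print("Vmax, Km =", popt)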

  6. Identifying errors in dust models from data assimilation.

    Science.gov (United States)

    Pope, R J; Marsham, J H; Knippertz, P; Brooks, M E; Roberts, A J

    2016-09-16

    Airborne mineral dust is an important component of the Earth system and is increasingly predicted prognostically in weather and climate models. The recent development of data assimilation for remotely sensed aerosol optical depths (AODs) into models offers a new opportunity to better understand the characteristics and sources of model error. Here we examine assimilation increments from Moderate Resolution Imaging Spectroradiometer AODs over northern Africa in the Met Office global forecast model. The model underpredicts (overpredicts) dust in light (strong) winds, consistent with (submesoscale) mesoscale processes lifting dust in reality but being missed by the model. Dust is overpredicted in the Sahara and underpredicted in the Sahel. Using observations of lighting and rain, we show that haboobs (cold pool outflows from moist convection) are an important dust source in reality but are badly handled by the model's convection scheme. The approach shows promise to serve as a useful framework for future model development.

  7. Rank-Defect Adjustment Model for Survey-Line Systematic Errors in Marine Survey Net

    Institute of Scientific and Technical Information of China (English)

    2002-01-01

    In this paper, the structure of systematic and random errors in a marine survey net is discussed in detail and the adjustment method for observations of the marine survey net is studied, in which the rank-defect characteristic is identified for the first time. On the basis of the survey-line systematic error model, the formulae of the rank-defect adjustment model are deduced according to modern adjustment theory. An example of calculations with really observed data is carried out to demonstrate the efficiency of this adjustment model. Moreover, it is proved that the semi-systematic error correction method used at present in marine gravimetry in China is a special case of the adjustment model presented in this paper.

  8. Comparative Protein Structure Modeling Using MODELLER.

    Science.gov (United States)

    Webb, Benjamin; Sali, Andrej

    2016-06-20

    Comparative protein structure modeling predicts the three-dimensional structure of a given protein sequence (target) based primarily on its alignment to one or more proteins of known structure (templates). The prediction process consists of fold assignment, target-template alignment, model building, and model evaluation. This unit describes how to calculate comparative models using the program MODELLER and how to use the ModBase database of such models, and discusses all four steps of comparative modeling, frequently observed errors, and some applications. Modeling lactate dehydrogenase from Trichomonas vaginalis (TvLDH) is described as an example. The download and installation of the MODELLER software is also described. © 2016 by John Wiley & Sons, Inc.
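
    A minimal automodel script in the spirit of the TvLDH example is sketched below; the alignment, template and sequence names follow the MODELLER tutorial conventions and should be treated as assumptions that must be adapted to the local installation and input files.

        # Minimal comparative-modelling sketch in the spirit of the TvLDH example
        # (file and code names follow MODELLER tutorial conventions and are assumptions here).
        from modeller import *
        from modeller.automodel import automodel

        env = environ()
        env.io.atom_files_directory = ['.', '../atom_files']  # where template PDB files live

        a = automodel(env,
                      alnfile='TvLDH-1bdmA.ali',   # target-template alignment (PIR format)
                      knowns='1bdmA',              # template structure code
                      sequence='TvLDH')            # target sequence code
        a.starting_model = 1
        a.ending_model = 5                          # build five models, then assess them
        a.make()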

  9. Testing and Inference in Nonlinear Cointegrating Vector Error Correction Models

    DEFF Research Database (Denmark)

    Kristensen, Dennis; Rahbek, Anders

    In this paper, we consider a general class of vector error correction models which allow for asymmetric and non-linear error correction. We provide asymptotic results for (quasi-)maximum likelihood (QML) based estimators and tests. General hypothesis testing is considered, where testing for linearity is of particular interest as parameters of non-linear components vanish under the null. To solve the latter type of testing, we use the so-called sup tests, which here requires development of new (uniform) weak convergence results. These results are potentially useful in general for analysis of non-stationary non-linear time series models. Thus the paper provides a full asymptotic theory for estimators as well as standard and non-standard test statistics. The derived asymptotic results prove to be new compared to results found elsewhere in the literature due to the impact of the estimated...

  10. Assessing Numerical Error in Structural Dynamics Using Energy Balance

    Directory of Open Access Journals (Sweden)

    Rabindranath Andujar

    2013-01-01

    Full Text Available This work applies the variational principles of Lagrange and Hamilton to the assessment of numerical methods of linear structural analysis. Different numerical methods are used to simulate the behaviour of three structural configurations and benchmarked in their computation of the Lagrangian action integral over time. According to the principle of energy conservation, the difference at each time step between the kinetic and the strain energies must equal the work done by the external forces. By computing this difference, the degree of accuracy of each combination of numerical methods can be assessed. Moreover, it is often difficult to perceive numerical instabilities due to the inherent complexities of the modelled structures. By means of the proposed procedure, these complexities can be globally controlled and visualized in a straightforward way. The paper presents the variational principles to be considered for the collection and computation of the energy-related parameters (kinetic, strain, dissipative, and external work). It then introduces a systematic framework within which the numerical methods can be compared in a qualitative as well as in a quantitative manner. Finally, a series of numerical experiments is conducted using three simple 2D models subjected to the effect of four different dynamic loadings.
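
    A toy version of such an energy-balance check can be sketched for a single-degree-of-freedom oscillator integrated with an explicit central-difference scheme, comparing the accumulated external work with the kinetic plus strain energy (the balance assumed here for an undamped system starting from rest); all parameters and names are illustrative and unrelated to the paper's 2D models.

        import numpy as np

        # Undamped SDOF oscillator m*u'' + k*u = F(t), integrated by central differences.
        m, k, dt, n = 1.0, 400.0, 0.002, 5000
        F = lambda t: 100.0 * np.sin(8.0 * t)

        u = np.zeros(n)
        v = np.zeros(n)
        u[1] = 0.5 * dt**2 * F(0.0) / m                      # start from rest
        for i in range(1, n - 1):
            u[i + 1] = 2 * u[i] - u[i - 1] + dt**2 * (F(i * dt) - k * u[i]) / m
        v[1:-1] = (u[2:] - u[:-2]) / (2 * dt)                # central-difference velocities

        kinetic = 0.5 * m * v**2
        strain = 0.5 * k * u**2
        # External work accumulated with the trapezoidal rule over the displacement increments.
        work = np.concatenate(([0.0], np.cumsum(0.5 * (F(np.arange(n - 1) * dt) +
                                                       F(np.arange(1, n) * dt)) * np.diff(u))))
        residual = kinetic + strain - work                   # energy-balance error measure
        print("max |energy residual| =", np.max(np.abs(residual[1:-1])))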

  11. Errors Made by Elementary Fourth Grade Students When Modelling Word Problems and the Elimination of Those Errors through Scaffolding

    Science.gov (United States)

    Ulu, Mustafa

    2017-01-01

    This study aims to identify errors made by primary school students when modelling word problems and to eliminate those errors through scaffolding. A 10-question problem-solving achievement test was used in the research. The qualitative and quantitative designs were utilized together. The study group of the quantitative design comprises 248…

  12. Correction of placement error in EBL using model based method

    Science.gov (United States)

    Babin, Sergey; Borisov, Sergey; Militsin, Vladimir; Komagata, Tadashi; Wakatsuki, Tetsuro

    2016-10-01

    The main source of placement error in maskmaking using electron beam is charging. DISPLACE software provides a method to correct placement errors for any layout, based on a physical model. The charge of a photomask and multiple discharge mechanisms are simulated to find the charge distribution over the mask. The beam deflection is calculated for each location on the mask, creating data for the placement correction. The software considers the mask layout, EBL system setup, resist, and writing order, as well as other factors such as fogging and proximity effects correction. The output of the software is the data for placement correction. Unknown physical parameters such as fogging can be found from calibration experiments. A test layout on a single calibration mask was used to calibrate physical parameters used in the correction model. The extracted model parameters were used to verify the correction. As an ultimate test for the correction, a sophisticated layout was used for verification that was very different from the calibration mask. The placement correction results were predicted by DISPLACE, and the mask was fabricated and measured. A good correlation of the measured and predicted values of the correction all over the mask with the complex pattern confirmed the high accuracy of the charging placement error correction.

  13. Systematic evaluation of autoregressive error models as post-processors for a probabilistic streamflow forecast system

    Science.gov (United States)

    Morawietz, Martin; Xu, Chong-Yu; Gottschalk, Lars; Tallaksen, Lena

    2010-05-01

    A post-processor is necessary in a probabilistic streamflow forecast system to account for the hydrologic uncertainty introduced by the hydrological model. In this study different variants of an autoregressive error model that can be used as a post-processor for short to medium range streamflow forecasts are evaluated. The deterministic HBV model is used to form the basis for the streamflow forecast. The general structure of the error models used as post-processor is a first-order autoregressive model of the form d_t = α·d_(t-1) + σ·ε_t, where d_t is the model error (observed minus simulated streamflow) at time t, α and σ are the parameters of the error model, and ε_t is the residual error described through a probability distribution. The following aspects are investigated: (1) Use of constant parameters α and σ versus the use of state-dependent parameters. The state-dependent parameters vary depending on the states of temperature, precipitation, snow water equivalent and simulated streamflow. (2) Use of a standard Normal distribution for ε_t versus use of an empirical distribution function constituted through the normalized residuals of the error model in the calibration period. (3) Comparison of two different transformations, i.e. logarithmic versus square root, that are applied to the streamflow data before the error model is applied. The reason for applying a transformation is to make the residuals of the error model homoscedastic over the range of streamflow values of different magnitudes. Through combination of these three characteristics, eight variants of the autoregressive post-processor are generated. These are calibrated and validated in 55 catchments throughout Norway. The discrete ranked probability score with 99 flow percentiles as standardized thresholds is used for evaluation. In addition, a non-parametric bootstrap is used to construct confidence intervals and evaluate the significance of the results. The main
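
    One variant of such a post-processor (constant parameters, logarithmic transformation, Gaussian residuals) can be sketched as below; the fitting formulas, function names and synthetic flows are illustrative assumptions, and the state-dependent and empirical-distribution variants are not covered.

        import numpy as np

        def fit_ar1_error_model(q_obs, q_sim):
            """Constant-parameter variant: log-transform flows, fit d_t = alpha*d_{t-1} + sigma*eps_t."""
            d = np.log(q_obs) - np.log(q_sim)                     # model error in transformed space
            alpha = np.sum(d[1:] * d[:-1]) / np.sum(d[:-1] ** 2)  # least-squares AR(1) coefficient
            sigma = np.std(d[1:] - alpha * d[:-1], ddof=1)
            return alpha, sigma

        def forecast_with_post_processor(q_sim_future, last_error, alpha, sigma, n_ens=500, seed=1):
            """Propagate the error model over the forecast horizon to get a predictive ensemble."""
            rng = np.random.default_rng(seed)
            d = np.full(n_ens, last_error)
            ens = np.empty((len(q_sim_future), n_ens))
            for t in range(len(q_sim_future)):
                d = alpha * d + sigma * rng.standard_normal(n_ens)
                ens[t] = np.exp(np.log(q_sim_future[t]) + d)      # back-transform to flow space
            return ens   # rows: lead time, columns: ensemble members

        # Hypothetical usage with synthetic flows (purely illustrative):
        q_obs = np.exp(np.random.default_rng(0).normal(3.0, 0.3, 400))
        q_sim = q_obs * np.exp(np.random.default_rng(1).normal(0.0, 0.2, 400))
        alpha, sigma = fit_ar1_error_model(q_obs, q_sim)
        ens = forecast_with_post_processor(q_sim[-10:], np.log(q_obs[-1]) - np.log(q_sim[-1]),
                                           alpha, sigma)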

  14. Topological quantum error correction in the Kitaev honeycomb model

    Science.gov (United States)

    Lee, Yi-Chan; Brell, Courtney G.; Flammia, Steven T.

    2017-08-01

    The Kitaev honeycomb model is an approximate topological quantum error correcting code in the same phase as the toric code, but requiring only a 2-body Hamiltonian. As a frustrated spin model, it is well outside the commuting models of topological quantum codes that are typically studied, but its exact solubility makes it more amenable to analysis of effects arising in this noncommutative setting than a generic topologically ordered Hamiltonian. Here we study quantum error correction in the honeycomb model using both analytic and numerical techniques. We first prove explicit exponential bounds on the approximate degeneracy, local indistinguishability, and correctability of the code space. These bounds are tighter than can be achieved using known general properties of topological phases. Our proofs are specialized to the honeycomb model, but some of the methods may nonetheless be of broader interest. Following this, we numerically study noise caused by thermalization processes in the perturbative regime close to the toric code renormalization group fixed point. The appearance of non-topological excitations in this setting has no significant effect on the error correction properties of the honeycomb model in the regimes we study. Although the behavior of this model is found to be qualitatively similar to that of the standard toric code in most regimes, we find numerical evidence of an interesting effect in the low-temperature, finite-size regime where a preferred lattice direction emerges and anyon diffusion is geometrically constrained. We expect this effect to yield an improvement in the scaling of the lifetime with system size as compared to the standard toric code.

  15. Structural Damage Detection Using Frequency Domain Error Localization.

    Science.gov (United States)

    1994-12-01

    APPENDIX D. FE MODEL / COMPUTER CODES — a brief description of the MATLAB routines employed in this thesis.

  16. Medication errors in the intensive care unit: literature review using the SEIPS model.

    Science.gov (United States)

    Frith, Karen H

    2013-01-01

    Medication errors in intensive care units put patients at risk for injury or death every day. Safety requires an organized and systematic approach to improving the tasks, technology, environment, and organizational culture associated with medication systems. The Systems Engineering Initiative for Patient Safety model can help leaders and health care providers understand the complicated and high-risk work associated with critical care. Using this model, the author combines a human factors approach with the well-known structure-process-outcome model of quality improvement to examine research literature. The literature review reveals that human factors, including stress, high workloads, knowledge deficits, and performance deficits, are associated with medication errors. Factors contributing to medication errors are frequent interruptions, communication problems, and poor fit of health information technology to the workflow of providers. Multifaceted medication safety interventions are needed so that human factors and system problems can be addressed simultaneously.

  17. Modelling application for cognitive reliability and error analysis method

    Directory of Open Access Journals (Sweden)

    Fabio De Felice

    2013-10-01

    Full Text Available The automation of production systems has delegated to machines the execution of highly repetitive and standardized tasks. In the last decade, however, the failure of the automatic factory model has led to partially automated configurations of production systems. In this scenario, the centrality and responsibility of the role entrusted to human operators are therefore heightened, because the role requires problem-solving and decision-making ability. Thus, the human operator is the core of a cognitive process that leads to decisions and influences the safety of the whole system as a function of operator reliability. The aim of this paper is to propose a modelling application for the cognitive reliability and error analysis method.

  18. Likelihood-Based Inference in Nonlinear Error-Correction Models

    DEFF Research Database (Denmark)

    Kristensen, Dennis; Rahbæk, Anders

    We consider a class of vector nonlinear error correction models where the transfer function (or loadings) of the stationary relationships is nonlinear. This includes in particular the smooth transition models. A general representation theorem is given which establishes the dynamic properties...... and a linear trend in general. Gaussian likelihood-based estimators are considered for the long-run cointegration parameters, and the short-run parameters. Asymptotic theory is provided for these and it is discussed to what extent asymptotic normality and mixed normality can be found. A simulation study...

  19. Error modelling and experimental validation of a planar 3-PPR parallel manipulator with joint clearances

    DEFF Research Database (Denmark)

    Wu, Guanglei; Bai, Shaoping; Kepler, Jørgen Asbøl

    2012-01-01

    This paper deals with the error modelling and analysis of a 3-PPR planar parallel manipulator with joint clearances. The kinematics and the Cartesian workspace of the manipulator are analyzed. An error model is established with considerations of both configuration errors and joint clearances. Using this model, the upper bounds and distributions of the pose errors for this manipulator are established. The results are compared with experimental measurements and show the effectiveness of the error prediction model.

  20. Bias Errors in Measurement of Vibratory Power and Implication for Active Control of Structural Vibration

    DEFF Research Database (Denmark)

    Ohlrich, Mogens; Henriksen, Eigil; Laugesen, Søren

    1997-01-01

    of a degree for the phase. This implies that input power at a single point can be measured to within one dB in practical structures which possess some damping. The uncertainty is increased, however, when sums of measured power contributions from more sources are to be minimised, as is the case in active control of vibratory power transmission into structures. This is demonstrated by computer simulations using a theoretical model of a beam structure which is driven by one primary source and two control sources. These simulations reveal the influence of residual errors on power measurements, and the limitations imposed in active control of structural vibration based upon a strategy of power minimisation.

  1. Analysis and Correction of Systematic Height Model Errors

    Science.gov (United States)

    Jacobsen, K.

    2016-06-01

    The geometry of digital height models (DHM) determined with optical satellite stereo combinations depends upon the image orientation, influenced by the satellite camera, the system calibration and attitude registration. As standard these days the image orientation is available in form of rational polynomial coefficients (RPC). Usually a bias correction of the RPC based on ground control points is required. In most cases the bias correction requires an affine transformation, sometimes only shifts, in image or object space. For some satellites and some cases, for example those caused by a small base length, such an image orientation does not lead to the achievable accuracy of height models. As reported e.g. by Yong-hua et al. 2015 and Zhang et al. 2015, especially the Chinese stereo satellite ZiYuan-3 (ZY-3) has a limited calibration accuracy and an attitude recording rate of just 4 Hz, which may not be satisfying. Zhang et al. 2015 tried to improve the attitude based on the color sensor bands of ZY-3, but the color images, like detailed satellite orientation information, are not always available. There is a tendency toward systematic deformation in a Pléiades tri-stereo combination with small base length. The small base length enlarges small systematic errors in object space. Systematic height model errors have also been detected in some other satellite stereo combinations. The largest influence is the unsatisfactory leveling of the height models, but low-frequency height deformations can also be seen. In theory a tilt of the DHM can be eliminated by ground control points (GCP), but often the GCP accuracy and distribution are not optimal, not allowing a correct leveling of the height model. In addition, a model deformation at GCP locations may lead to suboptimal DHM leveling. Supported by reference height models, better accuracy has been reached. As reference height model the Shuttle Radar Topography Mission (SRTM) digital surface model (DSM) or the new AW3D30 DSM, based on ALOS PRISM images, are
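
    The leveling step discussed above can be illustrated with a small sketch that fits a plane to DHM-minus-reference height differences at control points and subtracts it from the height model; the grid, the synthetic tilt, the GCP sample and the function names are assumptions for illustration, not data from the paper.

        import numpy as np

        def remove_dhm_tilt(x, y, dh):
            """Fit dh = a*x + b*y + c to (DHM - reference) height differences at control
            points; the fitted plane is the tilt/offset used to level the height model."""
            A = np.column_stack([x, y, np.ones_like(x)])
            coeffs, *_ = np.linalg.lstsq(A, dh, rcond=None)   # least-squares plane fit
            return coeffs                                     # (a, b, c)

        def level_dhm(dhm, xx, yy, coeffs):
            """Subtract the fitted tilt and offset from the full height model grid."""
            a, b, c = coeffs
            return dhm - (a * xx + b * yy + c)

        # Hypothetical example: a 100 x 100 grid with a synthetic tilt of 2 cm/km in x.
        xx, yy = np.meshgrid(np.arange(100) * 30.0, np.arange(100) * 30.0)   # 30 m spacing
        true_h = np.zeros_like(xx)
        dhm = true_h + 2e-5 * xx + 0.4                                       # tilted, offset DHM
        gcp_idx = (np.random.default_rng(0).integers(0, 100, 20),
                   np.random.default_rng(1).integers(0, 100, 20))
        coeffs = remove_dhm_tilt(xx[gcp_idx], yy[gcp_idx], dhm[gcp_idx] - true_h[gcp_idx])
        levelled = level_dhm(dhm, xx, yy, coeffs)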

  2. ANALYSIS AND CORRECTION OF SYSTEMATIC HEIGHT MODEL ERRORS

    Directory of Open Access Journals (Sweden)

    K. Jacobsen

    2016-06-01

    Full Text Available The geometry of digital height models (DHM) determined with optical satellite stereo combinations depends upon the image orientation, influenced by the satellite camera, the system calibration and attitude registration. As standard these days the image orientation is available in form of rational polynomial coefficients (RPC). Usually a bias correction of the RPC based on ground control points is required. In most cases the bias correction requires an affine transformation, sometimes only shifts, in image or object space. For some satellites and some cases, for example those caused by a small base length, such an image orientation does not lead to the achievable accuracy of height models. As reported e.g. by Yong-hua et al. 2015 and Zhang et al. 2015, especially the Chinese stereo satellite ZiYuan-3 (ZY-3) has a limited calibration accuracy and an attitude recording rate of just 4 Hz, which may not be satisfying. Zhang et al. 2015 tried to improve the attitude based on the color sensor bands of ZY-3, but the color images, like detailed satellite orientation information, are not always available. There is a tendency toward systematic deformation in a Pléiades tri-stereo combination with small base length. The small base length enlarges small systematic errors in object space. Systematic height model errors have also been detected in some other satellite stereo combinations. The largest influence is the unsatisfactory leveling of the height models, but low-frequency height deformations can also be seen. In theory a tilt of the DHM can be eliminated by ground control points (GCP), but often the GCP accuracy and distribution are not optimal, not allowing a correct leveling of the height model. In addition, a model deformation at GCP locations may lead to suboptimal DHM leveling. Supported by reference height models, better accuracy has been reached. As reference height model the Shuttle Radar Topography Mission (SRTM) digital surface model (DSM) or the new AW3D30 DSM, based on ALOS

  3. IDENTIFICATION ERROR BOUNDS AND ASYMPTOTIC DISTRIBUTIONS FOR SYSTEMS WITH STRUCTURAL UNCERTAINTIES

    Institute of Scientific and Technical Information of China (English)

    Gang George YIN; Shaobai KAN; Le Yi WANG

    2006-01-01

    This work is concerned with identification of systems that are subject to not only measurement noises, but also structural uncertainties such as unmodeled dynamics, sensor nonlinear mismatch, and observation bias. Identification errors are analyzed for their dependence on these structural uncertainties. Asymptotic distributions of scaled sequences of estimation errors are derived.

  4. Modeling SMAP Spacecraft Attitude Control Estimation Error Using Signal Generation Model

    Science.gov (United States)

    Rizvi, Farheen

    2016-01-01

    Two ground simulation software packages are used to model the SMAP spacecraft dynamics. The CAST software uses a higher fidelity model than the ADAMS software. The ADAMS software models the spacecraft plant, controller and actuators, and assumes a perfect sensor and estimator model. In this simulation study, the spacecraft dynamics results from the ADAMS software are used, as the CAST software is unavailable. The main source of spacecraft dynamics error in the higher fidelity CAST software is the estimation error. A signal generation model is developed to capture the effect of this estimation error in the overall spacecraft dynamics. Then, this signal generation model is included in the ADAMS software spacecraft dynamics estimate such that the results are similar to CAST. This signal generation model has characteristics (mean, variance and power spectral density) similar to those of the true CAST estimation error. In this way, the ADAMS software can still be used while capturing the higher fidelity spacecraft dynamics modeling from the CAST software.

  5. On the Modeling of Error Functions as High Dimensional Landscapes for Weight Initialization in Learning Networks

    CERN Document Server

    Julius; T., Sumana; Adityakrishna, C S

    2016-01-01

    Next generation deep neural networks for classification hosted on embedded platforms will rely on fast, efficient, and accurate learning algorithms. Initialization of weights in learning networks has a great impact on the classification accuracy. In this paper we focus on deriving good initial weights by modeling the error function of a deep neural network as a high-dimensional landscape. We observe that due to the inherent complexity in its algebraic structure, such an error function may conform to general results of the statistics of large systems. To this end we apply some results from Random Matrix Theory to analyse these functions. We model the error function in terms of a Hamiltonian in N-dimensions and derive some theoretical results about its general behavior. These results are further used to make better initial guesses of weights for the learning algorithm.

  6. A Systems Modeling Approach for Risk Management of Command File Errors

    Science.gov (United States)

    Meshkat, Leila

    2012-01-01

    The main cause of commanding errors is often (but not always) procedural: lack of maturity in the processes, incompleteness of requirements, or lack of compliance with these procedures. Other causes of commanding errors include lack of understanding of system states, inadequate communication, and making hasty changes in standard procedures in response to an unexpected event. In general, it is important to look at the big picture prior to taking corrective actions. In the case of errors traced back to procedures, considering the reliability of the process as a metric during its design may help to reduce risk. This metric is obtained by using data from the nuclear industry regarding human reliability. A structured method for the collection of anomaly data will help the operator think systematically about the anomaly and facilitate risk management. Formal models can be used for risk-based design and risk management. A generic set of models can be customized for a broad range of missions.

  7. Evaluation Of Statistical Models For Forecast Errors From The HBV-Model

    Science.gov (United States)

    Engeland, K.; Kolberg, S.; Renard, B.; Stensland, I.

    2009-04-01

    Three statistical models for the forecast errors for inflow to the Langvatn reservoir in Northern Norway have been constructed and tested according to how well the distribution and median values of the forecast errors fit the observations. For the first model, observed and forecasted inflows were transformed by the Box-Cox transformation before a first-order autoregressive model was constructed for the forecast errors. The parameters were conditioned on climatic conditions. In the second model, the Normal Quantile Transformation (NQT) was applied to observed and forecasted inflows before a similar first-order autoregressive model was constructed for the forecast errors. For the last model, positive and negative errors were modeled separately. The errors were first NQT-transformed before a model was constructed in which the mean values were conditioned on climate, forecasted inflow and yesterday's error. To test the three models we applied three criteria: we wanted a) the median values to be close to the observed values; b) the forecast intervals to be narrow; c) the distribution to be correct. The results showed that it is difficult to obtain a correct model for the forecast errors, and that the main challenge is to account for the auto-correlation in the errors. Models 1 and 2 gave similar results, and their main drawback is that the distributions are not correct. The 95% forecast intervals were well identified, but smaller forecast intervals were over-estimated, and larger intervals were under-estimated. Model 3 gave a distribution that fits better, but the median values do not fit well since the auto-correlation is not properly accounted for. If the 95% forecast interval is of interest, Model 2 is recommended. If the whole distribution is of interest, Model 3 is recommended.
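
    The core of the second model (normal quantile transformation followed by a first-order auto-regressive error model) can be sketched as follows; the plotting-position formula, the synthetic inflows and the omission of the climate conditioning are simplifying assumptions for illustration.

        import numpy as np
        from scipy.stats import norm, rankdata

        def nqt(x):
            """Normal quantile transform: map values to standard-normal scores via their ranks."""
            p = (rankdata(x) - 0.5) / len(x)       # plotting positions in (0, 1)
            return norm.ppf(p)

        def fit_ar1(errors):
            """First-order autoregressive model e_t = phi*e_{t-1} + s*eps_t for transformed errors."""
            phi = np.sum(errors[1:] * errors[:-1]) / np.sum(errors[:-1] ** 2)
            s = np.std(errors[1:] - phi * errors[:-1], ddof=1)
            return phi, s

        # Hypothetical inflow forecasts and observations (illustrative numbers only).
        rng = np.random.default_rng(42)
        obs = np.exp(rng.normal(4.0, 0.5, 1000))
        fcst = obs * np.exp(rng.normal(0.05, 0.25, 1000))
        errors = nqt(obs) - nqt(fcst)              # forecast errors in NQT space
        phi, s = fit_ar1(errors)
        print(f"phi = {phi:.2f}, s = {s:.2f}")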

  8. Covariance approximation for large multivariate spatial data sets with an application to multiple climate model errors

    KAUST Repository

    Sang, Huiyan

    2011-12-01

    This paper investigates the cross-correlations across multiple climate model errors. We build a Bayesian hierarchical model that accounts for the spatial dependence of individual models as well as cross-covariances across different climate models. Our method allows for a nonseparable and nonstationary cross-covariance structure. We also present a covariance approximation approach to facilitate the computation in the modeling and analysis of very large multivariate spatial data sets. The covariance approximation consists of two parts: a reduced-rank part to capture the large-scale spatial dependence, and a sparse covariance matrix to correct the small-scale dependence error induced by the reduced rank approximation. We pay special attention to the case that the second part of the approximation has a block-diagonal structure. Simulation results of model fitting and prediction show substantial improvement of the proposed approximation over the predictive process approximation and the independent blocks analysis. We then apply our computational approach to the joint statistical modeling of multiple climate model errors. © 2012 Institute of Mathematical Statistics.
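
    The two-part covariance approximation can be illustrated in one dimension: a reduced-rank (predictive-process style) term built from a small set of knots plus a block-diagonal correction of the remaining small-scale covariance. The exponential kernel, knot placement, block sizes and function names below are assumptions for illustration, not the paper's configuration.

        import numpy as np

        def exp_cov(coords_a, coords_b, sill=1.0, rng_par=0.3):
            """Exponential covariance between two sets of 1-D locations (illustrative kernel)."""
            d = np.abs(coords_a[:, None] - coords_b[None, :])
            return sill * np.exp(-d / rng_par)

        def reduced_rank_plus_blockdiag(s, knots, blocks):
            """Low-rank (predictive-process) part plus a block-diagonal correction of the remainder."""
            C_sk = exp_cov(s, knots)
            C_kk = exp_cov(knots, knots)
            low_rank = C_sk @ np.linalg.solve(C_kk, C_sk.T)       # captures large-scale dependence
            resid = exp_cov(s, s) - low_rank                      # small-scale remainder
            correction = np.zeros_like(resid)
            for idx in blocks:                                    # keep only within-block remainder
                correction[np.ix_(idx, idx)] = resid[np.ix_(idx, idx)]
            return low_rank + correction

        # Illustrative 1-D example: 200 sites, 15 knots, blocks of 20 contiguous sites.
        s = np.linspace(0.0, 1.0, 200)
        knots = np.linspace(0.0, 1.0, 15)
        blocks = [np.arange(i, i + 20) for i in range(0, 200, 20)]
        C_approx = reduced_rank_plus_blockdiag(s, knots, blocks)
        print("max abs error vs full covariance:", np.max(np.abs(C_approx - exp_cov(s, s))))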

  9. Error sources in atomic force microscopy for dimensional measurements: Taxonomy and modeling

    DEFF Research Database (Denmark)

    Marinello, F.; Voltan, A.; Savio, E.

    2010-01-01

    This paper aimed at identifying the error sources that occur in dimensional measurements performed using atomic force microscopy. In particular, a set of characterization techniques for errors quantification is presented. The discussion on error sources is organized in four main categories......: scanning system, tip-surface interaction, environment, and data processing. The discussed errors include scaling effects, squareness errors, hysteresis, creep, tip convolution, and thermal drift. A mathematical model of the measurement system is eventually described, as a reference basis for errors...

  10. Holographic quantum error-correcting codes: toy models for the bulk/boundary correspondence

    OpenAIRE

    Pastawski, Fernando; Yoshida, Beni; Harlow, Daniel; Preskill, John

    2015-01-01

    We propose a family of exactly solvable toy models for the AdS/CFT correspondence based on a novel construction of quantum error-correcting codes with a tensor network structure. Our building block is a special type of tensor with maximal entanglement along any bipartition, which gives rise to an isometry from the bulk Hilbert space to the boundary Hilbert space. The entire tensor network is an encoder for a quantum error-correcting code, where the bulk and boundary degrees of freedom may be ...

  11. Semiparametric modeling: Correcting low-dimensional model error in parametric models

    Science.gov (United States)

    Berry, Tyrus; Harlim, John

    2016-03-01

    In this paper, a semiparametric modeling approach is introduced as a paradigm for addressing model error arising from unresolved physical phenomena. Our approach compensates for model error by learning an auxiliary dynamical model for the unknown parameters. Practically, the proposed approach consists of the following steps. Given a physics-based model and a noisy data set of historical observations, a Bayesian filtering algorithm is used to extract a time-series of the parameter values. Subsequently, the diffusion forecast algorithm is applied to the retrieved time-series in order to construct the auxiliary model for the time evolving parameters. The semiparametric forecasting algorithm consists of integrating the existing physics-based model with an ensemble of parameters sampled from the probability density function of the diffusion forecast. To specify initial conditions for the diffusion forecast, a Bayesian semiparametric filtering method that extends the Kalman-based filtering framework is introduced. In difficult test examples, which introduce chaotically and stochastically evolving hidden parameters into the Lorenz-96 model, we show that our approach can effectively compensate for model error, with forecasting skill comparable to that of the perfect model.

  12. The Relationship between Root Mean Square Error of Approximation and Model Misspecification in Confirmatory Factor Analysis Models

    Science.gov (United States)

    Savalei, Victoria

    2012-01-01

    The fit index root mean square error of approximation (RMSEA) is extremely popular in structural equation modeling. However, its behavior under different scenarios remains poorly understood. The present study generates continuous curves where possible to capture the full relationship between RMSEA and various "incidental parameters," such as…

  13. Modelling Soft Error Probability in Firmware: A Case Study

    Directory of Open Access Journals (Sweden)

    DG Kourie

    2012-06-01

    Full Text Available This case study involves an analysis of firmware that controls explosions in mining operations. The purpose is to estimate the probability that external disruptive events (such as electro-magnetic interference) could drive the firmware into a state which results in an unintended explosion. Two probabilistic models are built, based on two possible types of disruptive events: a single spike of interference, and a burst of multiple spikes of interference. The models suggest that the system conforms to the IEC 61508 Safety Integrity Levels, even under very conservative assumptions of operation. The case study serves as a platform for future researchers to build on when probabilistically modelling soft errors in other contexts.

  14. Diagnosing Model Errors in Canopy-Atmosphere Exchange Using Empirical Orthogonal Functions

    Science.gov (United States)

    Drewry, D.; Albertson, J.

    2004-12-01

    Multi-layer canopy process models (MLCPMs) have been established as tools for estimating local-scale canopy-atmosphere scalar (carbon dioxide, heat and water vapor) exchange as well as testing hypotheses regarding the mechanistic functioning of complex vegetated land surfaces and the interactions between vegetation and the local microenvironment. These model frameworks are composed of a coupled set of component submodels relating radiation attenuation and absorption, photosynthesis, turbulent mixing, stomatal conductance, surface energy balance and soil and subsurface processes. Submodel formulations have been validated for a variety of ecosystems under varying environmental conditions. However, each submodel component requires parameter values that are known to vary seasonally as canopy structure changes, and over shorter periods characterized by shifts in the environmental regime. The temporal dependence of submodel parameters limits application of MLCPMs to short-term integrations for which a specific parameterization can be trusted. We present a novel application of empirical orthogonal function (EOF) analysis to the identification of the primary source of MLCPM error. Carbon dioxide (CO2) concentration profiles, a commonly collected and underutilized data source, are the observed quantity in this analysis. The technique relies on an ensemble of model runs transformed to EOF space to determine the characteristic patterns of model error associated with specific submodel parameters. These patterns provide a basis onto which error residual (modeled - measured) CO2 concentration profiles can be projected to identify the primary source of model error. Synthetic tests and application to field data collected at Duke Forest (North Carolina, USA) are presented.
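
    The EOF step can be sketched as an SVD of an ensemble of perturbed model profiles, after which an observed-minus-modelled residual profile is projected onto the leading patterns; the ensemble, the residual, the function names and all dimensions below are synthetic assumptions rather than the Duke Forest data.

        import numpy as np

        def compute_eofs(ensemble_profiles):
            """EOFs of an ensemble of CO2 concentration profiles (rows: runs, columns: levels)."""
            anomalies = ensemble_profiles - ensemble_profiles.mean(axis=0)
            _, svals, vt = np.linalg.svd(anomalies, full_matrices=False)
            explained = svals**2 / np.sum(svals**2)
            return vt, explained          # rows of vt are the EOF patterns in level space

        def project_residual(residual_profile, eofs, n_modes=3):
            """Amplitudes of the leading EOF patterns in an observed-minus-modelled residual."""
            return eofs[:n_modes] @ residual_profile

        # Hypothetical ensemble of model runs with perturbed submodel parameters (illustrative).
        rng = np.random.default_rng(3)
        levels = 30
        ensemble = 380.0 + rng.normal(0.0, 2.0, (50, levels)).cumsum(axis=1) * 0.1
        eofs, explained = compute_eofs(ensemble)
        residual = rng.normal(0.0, 0.5, levels)
        print("leading-mode amplitudes:", project_residual(residual, eofs))
        print("variance explained by first 3 EOFs:", explained[:3].sum())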

  15. Uncertainty and error in complex plasma chemistry models

    Science.gov (United States)

    Turner, Miles M.

    2015-06-01

    Chemistry models that include dozens of species and hundreds to thousands of reactions are common in low-temperature plasma physics. The rate constants used in such models are uncertain, because they are obtained from some combination of experiments and approximate theories. Since the predictions of these models are a function of the rate constants, these predictions must also be uncertain. However, systematic investigations of the influence of uncertain rate constants on model predictions are rare to non-existent. In this work we examine a particular chemistry model, for helium-oxygen plasmas. This chemistry is of topical interest because of its relevance to biomedical applications of atmospheric pressure plasmas. We trace the primary sources for every rate constant in the model, and hence associate an error bar (or equivalently, an uncertainty) with each. We then use a Monte Carlo procedure to quantify the uncertainty in predicted plasma species densities caused by the uncertainty in the rate constants. Under the conditions investigated, the range of uncertainty in most species densities is a factor of two to five. However, the uncertainty can vary strongly for different species, over time, and with other plasma conditions. There are extreme (pathological) cases where the uncertainty is more than a factor of ten. One should therefore be cautious in drawing any conclusion from plasma chemistry modelling, without first ensuring that the conclusion in question survives an examination of the related uncertainty.
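
    The Monte Carlo procedure can be sketched on a toy two-reaction chemistry: each draw perturbs the rate constants within assumed multiplicative uncertainty bands, integrates the rate equations, and records a predicted density. The reaction scheme, rate constants and uncertainty factors below are placeholders, not the helium-oxygen chemistry of the paper.

        import numpy as np
        from scipy.integrate import solve_ivp

        # Toy chemistry: A -> B (k1), B -> A (k2). Nominal rate constants and
        # multiplicative uncertainty factors are placeholders, not values from the paper.
        k_nominal = np.array([1.0e-2, 3.0e-3])
        uncertainty_factor = np.array([2.0, 5.0])     # "known to within a factor of 2 / 5"

        def rhs(t, y, k):
            a, b = y
            return [-k[0] * a + k[1] * b, k[0] * a - k[1] * b]

        def sample_density(rng):
            """One Monte Carlo draw: log-uniform rate constants within their uncertainty bands."""
            k = k_nominal * uncertainty_factor ** rng.uniform(-1.0, 1.0, size=2)
            sol = solve_ivp(rhs, (0.0, 500.0), [1.0, 0.0], args=(k,), rtol=1e-8)
            return sol.y[1, -1]                       # density of species B at the end time

        rng = np.random.default_rng(7)
        samples = np.array([sample_density(rng) for _ in range(200)])
        print("B density: median and 2.5/97.5 percentiles:",
              np.median(samples), np.percentile(samples, [2.5, 97.5]))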

  16. Likelihood-Based Inference in Nonlinear Error-Correction Models

    DEFF Research Database (Denmark)

    Kristensen, Dennis; Rahbæk, Anders

    We consider a class of vector nonlinear error correction models where the transfer function (or loadings) of the stationary relationships is nonlinear. This includes in particular the smooth transition models. A general representation theorem is given which establishes the dynamic properties of the process in terms of stochastic and deterministic trends as well as stationary components. In particular, the behaviour of the cointegrating relations is described in terms of geometric ergodicity. Despite the fact that no deterministic terms are included, the process will have both stochastic trends and a linear trend in general. Gaussian likelihood-based estimators are considered for the long-run cointegration parameters, and the short-run parameters. Asymptotic theory is provided for these and it is discussed to what extent asymptotic normality and mixed normality can be found. A simulation study...

  17. Accounting for model error due to unresolved scales within ensemble Kalman filtering

    CERN Document Server

    Mitchell, Lewis

    2014-01-01

    We propose a method to account for model error due to unresolved scales in the context of the ensemble transform Kalman filter (ETKF). The approach extends to this class of algorithms the deterministic model error formulation recently explored for variational schemes and extended Kalman filter. The model error statistic required in the analysis update is estimated using historical reanalysis increments and a suitable model error evolution law. Two different versions of the method are described; a time-constant model error treatment where the same model error statistical description is time-invariant, and a time-varying treatment where the assumed model error statistics is randomly sampled at each analysis step. We compare both methods with the standard method of dealing with model error through inflation and localization, and illustrate our results with numerical simulations on a low order nonlinear system exhibiting chaotic dynamics. The results show that the filter skill is significantly improved through th...
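
    The key step of the time-varying treatment, sampling archived increments as a proxy for model error and adding them to the forecast ensemble, might look roughly as follows; the array shapes, the mean-removal choice, the scaling factor and the function name are assumptions for illustration, not the authors' implementation.

        import numpy as np

        def add_model_error(forecast_ensemble, archived_increments, rng, scale=1.0):
            """Perturb each forecast member with a randomly drawn historical reanalysis
            increment, used here as a crude sample of the model-error distribution."""
            n_state, n_members = forecast_ensemble.shape
            draws = rng.integers(0, archived_increments.shape[1], size=n_members)
            perturbations = archived_increments[:, draws]
            perturbations -= perturbations.mean(axis=1, keepdims=True)  # keep ensemble mean unchanged
            return forecast_ensemble + scale * perturbations

        # Illustrative shapes: 40-variable state (e.g. Lorenz-96), 20 members, 500 archived increments.
        rng = np.random.default_rng(11)
        ensemble = rng.normal(size=(40, 20))
        increments = rng.normal(scale=0.3, size=(40, 500))
        ensemble = add_model_error(ensemble, increments, rng)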

  18. A predictive model for dimensional errors in fused deposition modeling

    DEFF Research Database (Denmark)

    Stolfi, A.

    2015-01-01

    This work concerns the effect of deposition angle (a) and layer thickness (L) on the dimensional performance of FDM parts using a predictive model based on the geometrical description of the FDM filament profile. An experimental validation over the whole a range from 0° to 177° at 3° steps and two...

  19. Influence of X-ray Powder Diffraction Instrument Error on Crystalline Structure Analysis

    Institute of Scientific and Technical Information of China (English)

    HUANG Qing-Ming; YU Jian-Chang; WANG Yun-Min; WU Wan-Guo

    2005-01-01

    Standard mica was used to correct the X-ray powder diffraction instrument error, and mathematical methods were employed to find the correction equation. By analyzing a mullite sample and comparing the corrected and uncorrected analysis results, we found the former to be clearly more reasonable. The conclusion is that X-ray powder diffraction instrument error greatly affects crystalline structure analysis, and that the above method is convenient and effective for the correction of instrument error.

  20. IDENTIFYING BANK LENDING CHANNEL IN INDONESIA: A VECTOR ERROR CORRECTION APPROACH WITH STRUCTURAL BREAK

    Directory of Open Access Journals (Sweden)

    Akhsyim Afandi

    2017-03-01

    Full Text Available Whether monetary policy works through the bank lending channel requires that a monetary-induced change in bank loans originates from the supply side. Most empirical studies that employed vector autoregressive (VAR) models failed to fulfill this requirement. Aiming to offer a solution to this identification problem, this paper developed a five-variable vector error correction (VEC) model of two separate bank credit markets in Indonesia. Departing from previous studies, the model of each market took account of one structural break endogenously determined by implementing a unit root test. A cointegration test that took account of one structural break suggested two cointegrating vectors identified as bank lending supply and demand relations. The estimated VEC system for both markets suggested that bank loans adjusted more strongly in the direction of the supply equation.

  1. Evaluation of statistical models for forecast errors from the HBV model

    Science.gov (United States)

    Engeland, Kolbjørn; Renard, Benjamin; Steinsland, Ingelin; Kolberg, Sjur

    2010-04-01

    Three statistical models for the forecast errors for inflow into the Langvatn reservoir in Northern Norway have been constructed and tested according to the agreement between (i) the forecast distribution and the observations and (ii) median values of the forecast distribution and the observations. For the first model, observed and forecasted inflows were transformed by the Box-Cox transformation before a first-order auto-regressive model was constructed for the forecast errors. The parameters were conditioned on weather classes. In the second model, the Normal Quantile Transformation (NQT) was applied to observed and forecasted inflows before a similar first-order auto-regressive model was constructed for the forecast errors. For the third model, positive and negative errors were modeled separately. The errors were first NQT-transformed before conditioning the mean error values on climate, forecasted inflow and yesterday's error. To test the three models we applied three criteria: we wanted (a) the forecast distribution to be reliable; (b) the forecast intervals to be narrow; (c) the median values of the forecast distribution to be close to the observed values. Models 1 and 2 gave almost identical results. The median values improved the forecast, with the Nash-Sutcliffe Reff increasing from 0.77 for the original forecast to 0.87 for the corrected forecasts. Models 1 and 2 over-estimated the forecast intervals but gave the narrowest intervals. Their main drawback was that the distributions are less reliable than that of Model 3. For Model 3 the median values did not fit well since the auto-correlation was not accounted for. Since Model 3 did not benefit from the potential variance reduction that lies in bias estimation and removal, it gave on average wider forecast intervals than the two other models. At the same time Model 3 on average slightly under-estimated the forecast intervals, probably explained by the use of average measures to evaluate the fit.

  2. Impact of transport model errors on the global and regional methane emissions estimated by inverse modelling

    Directory of Open Access Journals (Sweden)

    R. Locatelli

    2013-04-01

    Full Text Available A modelling experiment has been conceived to assess the impact of transport model errors on the methane emissions estimated by an atmospheric inversion system. Synthetic methane observations, given by 10 different model outputs from the international TransCom-CH4 model exercise, are combined with a prior scenario of methane emissions and sinks, and integrated into the PYVAR-LMDZ-SACS inverse system to produce 10 different methane emission estimates at the global scale for the year 2005. The same set-up has been used to produce the synthetic observations and to compute flux estimates by inverse modelling, which means that only differences in the modelling of atmospheric transport may cause differences in the estimated fluxes. In our framework, we show that transport model errors lead to a discrepancy of 27 Tg CH4 per year at the global scale, representing 5% of the total methane emissions. At continental and yearly scales, transport model errors have larger impacts depending on the region, ranging from 36 Tg CH4 in North America to 7 Tg CH4 in Boreal Eurasia (from 23% to 48%). At the model gridbox scale, the spread of inverse estimates can even reach 150% of the prior flux. Thus, transport model errors contribute to significant uncertainties in the methane estimates by inverse modelling, especially when small spatial scales are invoked. Sensitivity tests have been carried out to estimate the impact of the measurement network and the advantage of higher resolution models. The analysis of methane estimated fluxes in these different configurations questions the consistency of transport model errors in current inverse systems. For future methane inversions, an improvement in the modelling of the atmospheric transport would make the estimations more accurate. Likewise, errors of the observation covariance matrix should be more consistently prescribed in future inversions in order to limit the impact of transport model errors on estimated methane

  3. Error Analysis and Compensation Method on the Mechanical Structure of the Hydraulic Control System

    Directory of Open Access Journals (Sweden)

    Luo Yanyan

    2016-01-01

    Full Text Available Mechanical deformation of the mechanical transmission parts in a hydraulic control system directly affects the loading accuracy of the system. To improve the mechanical properties of the system, the force analysis and motion analysis of the mechanism are simulated based on the four-bar linkage structure (FLS), and a kinematics simulation is designed using a Matlab program, which produces a bar graph of the system error. The system error was calculated accurately according to the results of the structural mechanics simulation made with the SolidWorks Motion module. The structure of the system is modified when the systematic errors exceed the required limit values, until they reach the required values.

  4. Universal geometric error modeling of the CNC machine tools based on the screw theory

    Science.gov (United States)

    Tian, Wenjie; He, Baiyan; Huang, Tian

    2011-05-01

    The methods to improve the precision of CNC (Computerized Numerical Control) machine tools can be classified into two categories: error prevention and error compensation. Error prevention improves precision via high accuracy in manufacturing and assembly. Error compensation analyzes the source errors that affect the machining error, establishes the error model, and reaches the ideal position and orientation by modifying the trajectory in real time. Error modeling is the key to compensation, so the error modeling method is of great significance. Many researchers have focused on this topic and proposed many methods, but these methods can hardly describe the 6-dimensional configuration error of the machine tools. In this paper, a universal geometric error model of CNC machine tools is obtained utilizing screw theory. The 6-dimensional error vector is expressed with a twist, and the error vector is transformed between different frames with the adjoint transformation matrix. This model can describe the overall position and orientation errors of the tool relative to the workpiece entirely. It provides the mathematical model for compensation, and also provides a guideline for the manufacture, assembly and precision synthesis of machine tools.
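    As an illustration of expressing a 6-dimensional error with a twist, the sketch below maps an error twist between frames with the 6x6 adjoint of a homogeneous transformation. The twist ordering (translational part first), the frame offset and the error magnitudes are assumptions made only for this example.

```python
import numpy as np

def skew(v):
    """3x3 skew-symmetric matrix of a 3-vector."""
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

def adjoint(T):
    """6x6 adjoint of a homogeneous transform T = [[R, p], [0, 1]],
    mapping a twist (v, w) expressed in one frame into another frame."""
    R, p = T[:3, :3], T[:3, 3]
    Ad = np.zeros((6, 6))
    Ad[:3, :3] = R
    Ad[:3, 3:] = skew(p) @ R
    Ad[3:, 3:] = R
    return Ad

# Hypothetical small geometric error twist: (dx, dy, dz, rx, ry, rz)
err_twist = np.array([1e-6, 2e-6, 0.0, 1e-5, 0.0, 5e-6])
T = np.eye(4)
T[:3, 3] = [0.5, 0.0, 0.2]        # assumed offset between axis and tool frames
print(adjoint(T) @ err_twist)     # the same error expressed in the other frame
```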

  5. Statistical Inference for Partially Linear Regression Models with Measurement Errors

    Institute of Scientific and Technical Information of China (English)

    Jinhong YOU; Qinfeng XU; Bin ZHOU

    2008-01-01

    In this paper, the authors investigate three aspects of statistical inference for the partially linear regression models where some covariates are measured with errors. Firstly, a bandwidth selection procedure is proposed, which is a combination of the difference-based technique and GCV method. Secondly, a goodness-of-fit test procedure is proposed, which is an extension of the generalized likelihood technique. Thirdly, a variable selection procedure for the parametric part is provided based on the nonconcave penalization and corrected profile least squares. Same as "Variable selection via nonconcave penalized likelihood and its oracle properties" (J. Amer. Statist. Assoc., 96, 2001, 1348-1360), it is shown that the resulting estimator has an oracle property with a proper choice of regularization parameters and penalty function. Simulation studies are conducted to illustrate the finite sample performances of the proposed procedures.

  6. Regularized multivariate regression models with skew-t error distributions

    KAUST Repository

    Chen, Lianfu

    2014-06-01

    We consider regularization of the parameters in multivariate linear regression models with the errors having a multivariate skew-t distribution. An iterative penalized likelihood procedure is proposed for constructing sparse estimators of both the regression coefficient and inverse scale matrices simultaneously. The sparsity is introduced through penalizing the negative log-likelihood by adding L1-penalties on the entries of the two matrices. Taking advantage of the hierarchical representation of skew-t distributions, and using the expectation conditional maximization (ECM) algorithm, we reduce the problem to penalized normal likelihood and develop a procedure to minimize the ensuing objective function. Using a simulation study the performance of the method is assessed, and the methodology is illustrated using a real data set with a 24-dimensional response vector. © 2014 Elsevier B.V.

  7. Classification errors in contingency tables analyzed with hierarchical log-linear models. Technical report No. 20

    Energy Technology Data Exchange (ETDEWEB)

    Korn, E L

    1978-08-01

    This thesis is concerned with the effect of classification error on contingency tables being analyzed with hierarchical log-linear models (independence in an I x J table is a particular hierarchical log-linear model). Hierarchical log-linear models provide a concise way of describing independence and partial independences between the different dimensions of a contingency table. The structure of classification errors on contingency tables that will be used throughout is defined. This structure is a generalization of Bross' model, but here attention is paid to the different possible ways a contingency table can be sampled. Hierarchical log-linear models and the effect of misclassification on them are described. Some models, such as independence in an I x J table, are preserved by misclassification, i.e., the presence of classification error will not change the fact that a specific table belongs to that model. Other models are not preserved by misclassification; this implies that the usual tests of whether a sampled table belongs to such a model will not have the right significance level. A simple criterion is given to determine which hierarchical log-linear models are preserved by misclassification. Maximum likelihood theory is used to perform log-linear model analysis in the presence of known misclassification probabilities. It is shown that the Pitman asymptotic power of tests between different hierarchical log-linear models is reduced because of the misclassification. A general expression is given for the increase in sample size necessary to compensate for this loss of power, and some specific cases are examined.
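    A small numerical sketch of the preservation property described above, assuming a Bross-type misclassification structure that acts independently on the rows and columns of a 2 x 2 table; the probabilities are invented for the example. Independence in the true table carries over to the observed table, which remains an outer product of its margins.

```python
import numpy as np

# True table with independent margins: P[i, j] = r[i] * c[j]
r = np.array([0.3, 0.7])
c = np.array([0.4, 0.6])
P_true = np.outer(r, c)

# Misclassification matrices: M[i, a] = P(observed category a | true category i),
# assumed to act independently on rows and columns.
M_row = np.array([[0.9, 0.1], [0.2, 0.8]])
M_col = np.array([[0.85, 0.15], [0.05, 0.95]])

P_obs = M_row.T @ P_true @ M_col

# Independence is preserved: the observed table is still an outer product
# of its own margins (up to floating-point error).
rows, cols = P_obs.sum(axis=1), P_obs.sum(axis=0)
print(np.allclose(P_obs, np.outer(rows, cols)))   # True
```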

  8. Bayesian Hierarchical Model Characterization of Model Error in Ocean Data Assimilation and Forecasts

    Science.gov (United States)

    2013-09-30

    [Fragmentary record excerpt: a figure caption noting where tracer concentration measurements are available, with circles indicating a regular 19 × 37 spatial grid; a section heading "Time-Varying Error Covariance Models"; and an invited-talk listing (Wikle, 2013) on using quadratic nonlinear statistical emulators to facilitate ocean biogeochemical data assimilation.]

  9. FUZZY MODEL OPTIMIZATION FOR TIME SERIES DATA USING A TRANSLATION IN THE EXTENT OF MEAN ERROR

    OpenAIRE

    Nurhayadi; ., Subanar; Abdurakhman; Agus Maman Abadi

    2014-01-01

    Recently, many researchers have written about forecasting stock prices, electricity load demand and academic enrollment using fuzzy methods. In general, however, such modeling does not yet consider the position of the model relative to the actual data, which means that the error is not handled optimally. Error that is not managed well can reduce forecasting accuracy. Therefore, this paper discusses reducing error using a model translation. The error that will be reduced i...

  10. Error Modelling and Experimental Validation of a Planar 3-PPR Parallel Manipulator with Joint Clearances

    OpenAIRE

    Wu, Guanglei; Shaoping, Bai; Jørgen A., Kepler; Caro, Stéphane

    2012-01-01

    This paper deals with the error modelling and analysis of a 3-\underline{P}PR planar parallel manipulator with joint clearances. The kinematics and the Cartesian workspace of the manipulator are analyzed. An error model is established that considers both configuration errors and joint clearances. Using this model, the upper bounds and distributions of the pose errors for this manipulator are established. The results are compared with experimental measurements a...

  11. Impact of transport model errors on the global and regional methane emissions estimated by inverse modelling

    NARCIS (Netherlands)

    Locatelli, R.; Bousquet, P.; Chevallier, F.; Fortems-Cheney, A.; Szopa, S.; Saunois, M.; Agusti-Panareda, A.; Bergmann, D.; Bian, H.; Cameron-Smith, P.; Chipperfield, M.P.; Gloor, E.; Houweling, S.; Kawa, S.R.; Krol, M.C.; Patra, P.K.; Prinn, R.G.; Rigby, M.; Saito, R.; Wilson, C.

    2013-01-01

    A modelling experiment has been conceived to assess the impact of transport model errors on methane emissions estimated in an atmospheric inversion system. Synthetic methane observations, obtained from 10 different model outputs from the international TransCom-CH4 model inter-comparison exercise, ar

  12. Avoidable errors in deposited macromolecular structures: an impediment to efficient data mining.

    Science.gov (United States)

    Dauter, Zbigniew; Wlodawer, Alexander; Minor, Wladek; Jaskolski, Mariusz; Rupp, Bernhard

    2014-05-01

    Whereas the vast majority of the more than 85 000 crystal structures of macromolecules currently deposited in the Protein Data Bank are of high quality, some suffer from a variety of imperfections. Although this fact has been pointed out in the past, it is still worth periodic updates so that the metadata obtained by global analysis of the available crystal structures, as well as the utilization of the individual structures for tasks such as drug design, should be based on only the most reliable data. Here, selected abnormal deposited structures have been analysed based on the Bayesian reasoning that the correctness of a model must be judged against both the primary evidence as well as prior knowledge. These structures, as well as information gained from the corresponding publications (if available), have emphasized some of the most prevalent types of common problems. The errors are often perfect illustrations of the nature of human cognition, which is frequently influenced by preconceptions that may lead to fanciful results in the absence of proper validation. Common errors can be traced to negligence and a lack of rigorous verification of the models against electron density, creation of non-parsimonious models, generation of improbable numbers, application of incorrect symmetry, illogical presentation of the results, or violation of the rules of chemistry and physics. Paying more attention to such problems, not only in the final validation stages but during the structure-determination process as well, is necessary not only in order to maintain the highest possible quality of the structural repositories and databases but most of all to provide a solid basis for subsequent studies, including large-scale data-mining projects. For many scientists PDB deposition is a rather infrequent event, so the need for proper training and supervision is emphasized, as well as the need for constant alertness of reason and critical judgment as absolutely necessary safeguarding

  13. Fourier transform based dynamic error modeling method for ultra-precision machine tool

    Science.gov (United States)

    Chen, Guoda; Liang, Yingchun; Ehmann, Kornel F.; Sun, Yazhou; Bai, Qingshun

    2014-08-01

    In some industrial fields, the workpiece surface needs to meet not only surface roughness requirements but also strict requirements on multi-scale frequency-domain errors. The ultra-precision machine tool is the most important carrier for the ultra-precision machining of such parts, and its errors are the key factor influencing the multi-scale frequency-domain errors of the machined surface. Volumetric error modeling is the bridge linking machine errors to machined-surface errors. However, the error modeling methods available from previous research are hard to use for analyzing the relationship between the dynamic errors of the machine motion components and the multi-scale frequency-domain errors of the machined surface, which is an important reference in the design and accuracy improvement of ultra-precision machine tools. In this paper, a Fourier transform based dynamic error modeling method is presented, built on the theoretical basis of rigid-body kinematics and homogeneous transformation matrices. A case study is carried out, showing that the proposed method can realize a uniform and regular numerical description of the machine dynamic errors and the volumetric errors. The proposed method has strong potential for predicting the frequency-domain errors of the machined surface, extracting multi-scale frequency-domain error information, and analyzing the relationship between the machine motion components and the frequency-domain errors of the machined surface.
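    A toy illustration of the Fourier decomposition step, assuming a synthetic straightness error signal of one motion component with two periodic components; the frequencies and amplitudes are placeholders, and the full method would additionally propagate each component through the homogeneous-transformation-based volumetric error model.

```python
import numpy as np

# Synthetic dynamic error of a slide (metres): two periodic components,
# e.g. a ball-screw pitch frequency and a higher-frequency vibration.
n, fs = 1000, 1000.0                       # samples, sampling rate [Hz]
t = np.arange(n) / fs
err = 0.5e-6 * np.sin(2 * np.pi * 25.0 * t) + 0.1e-6 * np.sin(2 * np.pi * 80.0 * t)

# Fourier decomposition of the dynamic error signal
spec = np.fft.rfft(err * np.hanning(n))
freq = np.fft.rfftfreq(n, d=1.0 / fs)
dominant = freq[np.argmax(np.abs(spec[1:])) + 1]    # skip the DC bin
print(f"dominant dynamic-error frequency: {dominant:.1f} Hz")
```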

  14. Entropy Error Model of Planar Geometry Features in GIS

    Institute of Scientific and Technical Information of China (English)

    LI Dajun; GUAN Yunlan; GONG Jianya; DU Daosheng

    2003-01-01

    The positional error of line segments is usually described using the "g-band"; however, its band width depends on the chosen confidence level. In fact, given different confidence levels, a series of concentric bands can be obtained. To overcome the effect of the confidence level on the error indicator, we introduce union entropy theory and propose an entropy error ellipse index for points, then extend it to line segments and polygons, establishing an entropy error band for line segments and an entropy error donut for polygons. The research shows that the entropy error indices can be determined uniquely and are not influenced by the confidence level, and that they are suitable for describing the positional uncertainty of planar geometry features.

  15. An Activation-Based Model of Routine Sequence Errors

    Science.gov (United States)

    2015-04-01

    Occasionally, after completing a step, the screen cleared and the participants were interrupted to perform a simple arithmetic task; the interruption... In accordance with the columnar data, the distribution of errors clusters around the +/-1 errors, and falls away in both directions as the error type gets... has been accessed in working memory, slowly decaying as time passes. Activation strengthening is calculated according to $A_s = \ln\left(\sum_{j=1}^{n} t_j^{-d}\right)$.

  16. Generalized multiplicative error models: Asymptotic inference and empirical analysis

    Science.gov (United States)

    Li, Qian

    This dissertation consists of two parts. The first part focuses on extended Multiplicative Error Models (MEM) that include two extreme cases for nonnegative series. These extreme cases are common phenomena in high-frequency financial time series. The Location MEM(p,q) model incorporates a location parameter so that the series are required to have positive lower bounds. The estimator for the location parameter turns out to be the minimum of all the observations and is shown to be consistent. The second case captures the nontrivial fraction of zero outcomes in a series and combines a so-called Zero-Augmented general F distribution with a linear MEM(p,q). Under certain strict stationarity and moment conditions, we establish consistency and asymptotic normality of the semiparametric estimators for these two new models. The second part of this dissertation examines the differences and similarities between trades in the home market and trades in the foreign market of cross-listed stocks. We exploit the multiplicative framework to model trading duration, volume per trade and price volatility for Canadian shares that are cross-listed on the New York Stock Exchange (NYSE) and the Toronto Stock Exchange (TSX). We explore the clustering effect, the interaction between trading variables, and the time needed for price equilibrium after a perturbation in each market. The clustering effect is studied through the use of a univariate MEM(1,1) on each variable, while the interactions among duration, volume and price volatility are captured by a multivariate system of MEM(p,q). After estimating these models by a standard QMLE procedure, we exploit the impulse response function to compute the calendar time for a perturbation in these variables to be absorbed into price variance, and use common statistical tests to identify the differences between the two markets in each aspect. These differences are of considerable interest to traders, stock exchanges and policy makers.
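    A minimal simulation of a linear MEM(1,1) for a nonnegative series such as trade durations, assuming unit-mean exponential innovations and illustrative parameter values; the Location-MEM and Zero-Augmented variants described above extend this basic recursion.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_mem11(n, omega=0.1, alpha=0.2, beta=0.7):
    """Linear MEM(1,1): x_t = mu_t * eps_t, mu_t = omega + alpha*x_{t-1} + beta*mu_{t-1},
    with i.i.d. unit-mean exponential innovations eps_t (an assumption)."""
    x = np.empty(n)
    mu = omega / (1.0 - alpha - beta)          # start at the unconditional mean
    x_prev = mu
    for t in range(n):
        mu = omega + alpha * x_prev + beta * mu
        x[t] = mu * rng.exponential(1.0)       # positive, unit-mean innovation
        x_prev = x[t]
    return x

series = simulate_mem11(5000)
print(series.mean())   # close to omega / (1 - alpha - beta) = 1.0
```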

  17. A Long-Term Memory Competitive Process Model of a Common Procedural Error

    Science.gov (United States)

    2013-08-01

    A novel computational cognitive model explains human procedural error in terms of declarative memory processes. This is an early version of a process model intended to predict and explain multiple classes of procedural error a priori. We begin with postcompletion error (PCE), a type of systematic

  18. Bayesian hierarchical error model for analysis of gene expression data

    National Research Council Canada - National Science Library

    Cho, HyungJun; Lee, Jae K

    2004-01-01

    .... Moreover, the same gene often shows quite heterogeneous error variability under different biological and experimental conditions, which must be estimated separately for evaluating the statistical...

  19. New mathematical model for error reduction of stressed lap

    Science.gov (United States)

    Zhao, Pu; Yang, Shuming; Sun, Lin; Shi, Xinyu; Liu, Tao; Jiang, Zhuangde

    2016-05-01

    Stressed lap, compared to traditional polishing methods, has high processing efficiency. However, this method has disadvantages in processing nonsymmetric surface errors. A basic-function method is proposed to calculate parameters for a stressed-lap polishing system. It aims to minimize residual errors and is based on a matrix and nonlinear optimization algorithm. The results show that residual root-mean-square could be >15% after one process for classical trefoil error. The surface period errors close to the lap diameter were removed efficiently, up to 50% material removal.

  20. Stochastic model error in the LANS-alpha and NS-alpha deconvolution models of turbulence

    CERN Document Server

    Olson, Eric

    2015-01-01

    This paper reports on a computational study of the model error in the LANS-alpha and NS-alpha deconvolution models of homogeneous isotropic turbulence. The focus is on how well the model error may be characterized by a stochastic force. Computations are also performed for a new turbulence model obtained as a rescaled limit of the deconvolution model. The technique used is to plug a solution obtained from direct numerical simulation of the incompressible Navier--Stokes equations into the competing turbulence models and to then compute the time evolution of the resulting residual. All computations have been done in two dimensions rather than three for convenience and efficiency. When the effective averaging length scale in any of the models is $\\alpha_0=0.01$ the time evolution of the root-mean-squared residual error grows as $\\sqrt t$. This growth rate is consistent with the hypothesis that the model error may be characterized by a stochastic force. When $\\alpha_0=0.20$ the residual error grows linearly. Linea...

  1. Allowing for model error in strong constraint 4D-Var

    Science.gov (United States)

    Howes, Katherine; Lawless, Amos; Fowler, Alison

    2016-04-01

    Four dimensional variational data assimilation (4D-Var) can be used to obtain the best estimate of the initial conditions of an environmental forecasting model, namely the analysis. In practice, when the forecasting model contains errors, the analysis from the 4D-Var algorithm will be degraded to allow for errors later in the forecast window. This work focuses on improving the analysis at the initial time by allowing for the fact that the model contains error, within the context of strong constraint 4D-Var. The 4D-Var method developed acknowledges the presence of random error in the model at each time step by replacing the observation error covariance matrix with an error covariance matrix that includes both observation error and model error statistics. It is shown that this new matrix represents the correct error statistics of the innovations in the presence of model error. A method for estimating this matrix using innovation statistics, without requiring prior knowledge of the model error statistics, is presented. The method is demonstrated numerically using a non-linear chaotic system with erroneous parameter values. We show that the new method works to reduce the analysis error covariance when compared with a standard strong constraint 4D-Var scheme. We discuss the fact that an improved analysis will not necessarily provide a better forecast.
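    The sketch below illustrates the core idea with synthetic data: in the presence of additive model error, the innovation variance grows with the observation's position in the window, and a combined observation-plus-model-error covariance can be estimated directly from samples of innovations without prior knowledge of the model error statistics. The variances, lead times and diagonal structure are assumptions made only for the illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic innovations d = y - H(x_b): generated here from independent
# observation error (variance r) plus accumulated model error (variance q
# per step, growing with lead time t within the assimilation window).
r, q = 0.5**2, 0.2**2
lead_times = np.arange(1, 6)                       # observation times in the window
n_samples = 2000
innov = np.stack([rng.normal(0.0, np.sqrt(r + t * q), n_samples)
                  for t in lead_times])

# Empirical estimate of the combined error covariance from innovation statistics,
# without prior knowledge of the model-error statistics (diagonal case only).
R_tilde = np.diag(innov.var(axis=1))
print(np.round(np.diag(R_tilde), 3))               # grows with lead time: r + t*q
```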

  2. Sensitivity of the model error parameter specification in weak-constraint four-dimensional variational data assimilation

    Science.gov (United States)

    Shaw, Jeremy A.; Daescu, Dacian N.

    2017-08-01

    This article presents the mathematical framework to evaluate the sensitivity of a forecast error aspect to the input parameters of a weak-constraint four-dimensional variational data assimilation system (w4D-Var DAS), extending the established theory from strong-constraint 4D-Var. Emphasis is placed on the derivation of the equations for evaluating the forecast sensitivity to parameters in the DAS representation of the model error statistics, including bias, standard deviation, and correlation structure. A novel adjoint-based procedure for adaptive tuning of the specified model error covariance matrix is introduced. Results from numerical convergence tests establish the validity of the model error sensitivity equations. Preliminary experiments providing a proof-of-concept are performed using the Lorenz multi-scale model to illustrate the theoretical concepts and potential benefits for practical applications.

  3. Local concurrent error detection and correction in data structures using virtual backpointers

    Science.gov (United States)

    Li, Chung-Chi Jim; Chen, Paul Peichuan; Fuchs, W. Kent

    1989-01-01

    A new technique, based on virtual backpointers, for local concurrent error detection and correction in linked data structures is presented. Two new data structures, the Virtual Double Linked List and the B-tree with Virtual Backpointers, are described. For these structures, double errors can be detected in O(1) time and errors detected during forward moves can be corrected in O(1) time. The application of a concurrent auditor process to data structure error detection and correction is analyzed, and an implementation is described, to determine the effect on the mean time to failure of a multi-user shared database system. The implementation utilizes a Sequent shared-memory multiprocessor system operating on a shared database of Virtual Double Linked Lists.
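    The sketch below conveys only the general idea of local O(1) error detection through redundant back-link information; it is not the exact virtual-backpointer encoding or the B-tree construction of the cited paper, and the XOR check word is an invented stand-in.

```python
class Node:
    """Doubly linked node with a redundant check word.

    The check word is derived from the identities of the neighbouring nodes;
    this only illustrates the idea of local, O(1) error detection, not the
    exact virtual-backpointer encoding of the cited paper."""
    def __init__(self, key):
        self.key = key
        self.next = None
        self.prev = None
        self.check = None

    def recompute_check(self):
        self.check = (id(self.next) ^ id(self.prev)) & 0xFFFFFFFF

    def is_consistent(self):
        return self.check == ((id(self.next) ^ id(self.prev)) & 0xFFFFFFFF)

# Build a tiny list and verify each node locally during a forward move.
nodes = [Node(k) for k in range(3)]
for a, b in zip(nodes, nodes[1:]):
    a.next, b.prev = b, a
for n in nodes:
    n.recompute_check()

nodes[1].next = nodes[0]          # simulate a corrupted forward pointer
print([n.is_consistent() for n in nodes])   # [True, False, True]
```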

  4. The Impact of Modeling Errors on Interferometer Calibration for 21 cm Power Spectra

    CERN Document Server

    Ewall-Wice, Aaron; Liu, Adrian; Hewitt, Jacqueline

    2016-01-01

    We study the impact of sky-based calibration errors from source mismodeling on 21 cm power spectrum measurements with an interferometer and propose a method for suppressing their effects. While emission from faint sources that are not accounted for in calibration catalogs is believed to be spectrally smooth, deviations of true visibilities from model visibilities are not, due to the inherent chromaticity of the interferometer's sky-response (the "wedge"). Thus, unmodeled foregrounds at the $\\approx 1$ mJy level introduce frequency structure into gain solutions on the same line-of-sight scales on which we hope to observe the cosmological signal. We derive analytic expressions describing these errors using linearized approximations of the calibration equations and determine the impact of this bias on measurements of the 21 cm power spectrum during the Epoch of Reionization (EoR). Given our current precision in primary beam and foreground modeling, this noise will significantly impact the sensitivity of existing...

  5. Selecting Human Error Types for Cognitive Modelling and Simulation

    NARCIS (Netherlands)

    Mioch, T.; Osterloh, J.P.; Javaux, D.

    2010-01-01

    This paper presents a method that has enabled us to make a selection of error types and error production mechanisms relevant to the HUMAN European project, and discusses the reasons underlying those choices. We claim that this method has the advantage that it is very exhaustive in determining the re

  6. Assessment of errors and uncertainty patterns in GIA modeling

    DEFF Research Database (Denmark)

    Barletta, Valentina Roberta; Spada, G.

    , such as time-evolving shorelines and paleo-coastlines. In this study we quantify these uncertainties and their propagation in GIA response using a Monte Carlo approach to obtain spatio-temporal patterns of GIA errors. A direct application is the error estimates in ice mass balance in Antarctica and Greenland...

  7. Assessment of errors and uncertainty patterns in GIA modeling

    DEFF Research Database (Denmark)

    Barletta, Valentina Roberta; Spada, G.

    2012-01-01

    , such as time-evolving shorelines and paleo coastlines. In this study we quantify these uncertainties and their propagation in GIA response using a Monte Carlo approach to obtain spatio-temporal patterns of GIA errors. A direct application is the error estimates in ice mass balance in Antarctica and Greenland...

  8. Error Characterization of Flight Trajectories Reconstructed Using Structure from Motion

    Science.gov (United States)

    2015-03-27

    a dot product. The higher the value of the dot product, the more similar the vectors; for example, the dot product of two identical vectors is 1... camera positions is given by the following equation derived from trigonometric relationships: $\Phi = 2\tan^{-1}\!\left(\frac{d}{2H}\right)$ (3.14), where d is the distance... ground footprint of one image. From trigonometric relationships and the camera model equations, percent overlap is given as: $\mathrm{Overlap} = 2H\tan(\Theta/2)\ldots$

  9. Corruption of parameter behavior and regionalization by model and forcing data errors: A Bayesian example using the SNOW17 model

    Science.gov (United States)

    He, Minxue; Hogue, Terri S.; Franz, Kristie J.; Margulis, Steven A.; Vrugt, Jasper A.

    2011-07-01

    The current study evaluates the impacts of various sources of uncertainty involved in hydrologic modeling on parameter behavior and regionalization utilizing different Bayesian likelihood functions and the Differential Evolution Adaptive Metropolis (DREAM) algorithm. The developed likelihood functions differ in their underlying assumptions and treatment of error sources. We apply the developed method to a snow accumulation and ablation model (National Weather Service SNOW17) and generate parameter ensembles to predict snow water equivalent (SWE). Observational data include precipitation and air temperature forcing along with SWE measurements from 24 sites with diverse hydroclimatic characteristics. A multiple linear regression model is used to construct regionalization relationships between model parameters and site characteristics. Results indicate that model structural uncertainty has the largest influence on SNOW17 parameter behavior. Precipitation uncertainty is the second largest source of uncertainty, showing greater impact at wetter sites. Measurement uncertainty in SWE tends to have little impact on the final model parameters and resulting SWE predictions. Considering all sources of uncertainty, parameters related to air temperature and snowfall fraction exhibit the strongest correlations to site characteristics. Parameters related to the length of the melting period also show high correlation to site characteristics. Finally, model structural uncertainty and precipitation uncertainty dramatically alter parameter regionalization relationships in comparison to cases where only uncertainty in model parameters or output measurements is considered. Our results demonstrate that accurate treatment of forcing, parameter, model structural, and calibration data errors is critical for deriving robust regionalization relationships.

  10. Impact of transport model errors on the global and regional methane emissions estimated by inverse modelling

    Directory of Open Access Journals (Sweden)

    R. Locatelli

    2013-10-01

    Full Text Available A modelling experiment has been conceived to assess the impact of transport model errors on methane emissions estimated in an atmospheric inversion system. Synthetic methane observations, obtained from 10 different model outputs from the international TransCom-CH4 model inter-comparison exercise, are combined with a prior scenario of methane emissions and sinks, and integrated into the three-component PYVAR-LMDZ-SACS (PYthon VARiational-Laboratoire de Météorologie Dynamique model with Zooming capability-Simplified Atmospheric Chemistry System) inversion system to produce 10 different methane emission estimates at the global scale for the year 2005. The same methane sinks, emissions and initial conditions have been applied to produce the 10 synthetic observation datasets. The same inversion set-up (statistical errors, prior emissions, inverse procedure) is then applied to derive flux estimates by inverse modelling. Consequently, only differences in the modelling of atmospheric transport may cause differences in the estimated fluxes. In our framework, we show that transport model errors lead to a discrepancy of 27 Tg yr−1 at the global scale, representing 5% of total methane emissions. At continental and annual scales, transport model errors are proportionally larger than at the global scale, with errors ranging from 36 Tg yr−1 in North America to 7 Tg yr−1 in Boreal Eurasia (from 23% to 48%, respectively). At the model grid-scale, the spread of inverse estimates can reach 150% of the prior flux. Therefore, transport model errors contribute significantly to overall uncertainties in emission estimates by inverse modelling, especially when small spatial scales are examined. Sensitivity tests have been carried out to estimate the impact of the measurement network and the advantage of higher horizontal resolution in transport models. The large differences found between methane flux estimates inferred in these different configurations highly

  11. A Logistic Regression Model with a Hierarchical Random Error Term for Analyzing the Utilization of Public Transport

    Directory of Open Access Journals (Sweden)

    Chong Wei

    2015-01-01

    Full Text Available Logistic regression models have been widely used in previous studies to analyze public transport utilization. These studies have shown travel time to be an indispensable variable for such analysis and usually consider it to be a deterministic variable. This formulation does not allow us to capture travelers’ perception error regarding travel time, and recent studies have indicated that this error can have a significant effect on modal choice behavior. In this study, we propose a logistic regression model with a hierarchical random error term. The proposed model adds a new random error term for the travel time variable. This term structure enables us to investigate travelers’ perception error regarding travel time from a given choice behavior dataset. We also propose an extended model that allows constraining the sign of this error in the model. We develop two Gibbs samplers to estimate the basic hierarchical model and the extended model. The performance of the proposed models is examined using a well-known dataset.
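    A minimal Monte Carlo sketch of the idea of a random error on perceived travel time inside a binary logit, with invented parameter values; the paper's actual model is a hierarchical Bayesian formulation estimated with Gibbs sampling rather than this simple simulation.

```python
import numpy as np

rng = np.random.default_rng(2)

def prob_transit(travel_time_transit, travel_time_car,
                 beta_time=-0.1, asc_transit=0.5, sigma_perc=5.0, n_draws=5000):
    """P(choose public transport) in a binary logit where the perceived transit
    travel time equals the measured time plus a random perception error.
    All parameter values are purely illustrative."""
    perceived = travel_time_transit + rng.normal(0.0, sigma_perc, n_draws)
    v_transit = asc_transit + beta_time * perceived
    v_car = beta_time * travel_time_car
    p = 1.0 / (1.0 + np.exp(-(v_transit - v_car)))
    return p.mean()                      # integrate the logit over the error draws

print(prob_transit(travel_time_transit=40.0, travel_time_car=30.0))
```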

  12. Regularized Structural Equation Modeling.

    Science.gov (United States)

    Jacobucci, Ross; Grimm, Kevin J; McArdle, John J

    A new method is proposed that extends the use of regularization in both lasso and ridge regression to structural equation models. The method is termed regularized structural equation modeling (RegSEM). RegSEM penalizes specific parameters in structural equation models, with the goal of creating easier to understand and simpler models. Although regularization has gained wide adoption in regression, very little has transferred to models with latent variables. By adding penalties to specific parameters in a structural equation model, researchers have a high level of flexibility in reducing model complexity, overcoming poor fitting models, and the creation of models that are more likely to generalize to new samples. The proposed method was evaluated through a simulation study, two illustrative examples involving a measurement model, and one empirical example involving the structural part of the model to demonstrate RegSEM's utility.

  13. Nuclear numerical range and quantum error correction codes for non-unitary noise models

    Science.gov (United States)

    Lipka-Bartosik, Patryk; Życzkowski, Karol

    2017-01-01

    We introduce a notion of nuclear numerical range defined as the set of expectation values of a given operator A among normalized pure states, which belong to the nucleus of an auxiliary operator Z. This notion proves to be applicable to investigate models of quantum noise with block-diagonal structure of the corresponding Kraus operators. The problem of constructing a suitable quantum error correction code for this model can be restated as a geometric problem of finding intersection points of certain sets in the complex plane. This technique, worked out in the case of two-qubit systems, can be generalized for larger dimensions.

  14. Error Threshold for Spatially Resolved Evolution in the Quasispecies Model

    Energy Technology Data Exchange (ETDEWEB)

    Altmeyer, S.; McCaskill, J. S.

    2001-06-18

    The error threshold for quasispecies in 1, 2, 3, and ∞ dimensions is investigated by stochastic simulation and analytically. The results show a monotonic decrease in the maximal sustainable error probability with decreasing diffusion coefficient, independently of the spatial dimension. It is thereby established that physical interactions between sequences are necessary in order for spatial effects to enhance the stabilization of biological information. The analytically tractable behavior in an ∞-dimensional (simplex) space provides a good guide to the spatial dependence of the error threshold in lower dimensional Euclidean space.
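    For orientation, the sketch below evaluates the classical non-spatial (well-mixed) error threshold for a single-peak fitness landscape, where the master sequence survives only while sigma*(1-p)^L > 1, i.e. up to a per-site error probability of roughly ln(sigma)/L; the sequence length and superiority values are illustrative. The paper's spatial results describe how this maximal sustainable error probability changes with the diffusion coefficient.

```python
import numpy as np

def master_fraction(p_err, L=20, sigma=10.0):
    """Equilibrium fraction of the master sequence in the simple single-peak
    quasispecies model (neglecting back mutations)."""
    Q = (1.0 - p_err) ** L                 # probability of error-free replication
    x = (sigma * Q - 1.0) / (sigma - 1.0)  # standard single-peak result
    return max(x, 0.0)

p_crit = np.log(10.0) / 20                 # approx. error threshold ln(sigma)/L
for p in [0.05, 0.10, p_crit, 0.15]:
    print(f"p_err={p:.3f}  master fraction={master_fraction(p):.3f}")
```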

  15. Structured methods for identifying and correcting potential human errors in space operations.

    Science.gov (United States)

    Nelson, W R; Haney, L N; Ostrom, L T; Richards, R E

    1998-01-01

    Human performance plays a significant role in the development and operation of any complex system, and human errors are significant contributors to degraded performance, incidents, and accidents for technologies as diverse as medical systems, commercial aircraft, offshore oil platforms, nuclear power plants, and space systems. To date, serious accidents attributed to human error have fortunately been rare in space operations. However, as flight rates go up and the duration of space missions increases, the accident rate could increase unless proactive action is taken to identify and correct potential human errors in space operations. The Idaho National Engineering and Environmental Laboratory (INEEL) has developed and applied structured methods of human error analysis to identify potential human errors, assess their effects on system performance, and develop strategies to prevent the errors or mitigate their consequences. These methods are being applied in NASA-sponsored programs to the domain of commercial aviation, focusing on airplane maintenance and air traffic management. The application of human error analysis to space operations could help minimize the risks associated with human error in the design and operation of future space systems.

  16. Estimating model error covariances in nonlinear state-space models using Kalman smoothing and the expectation-maximisation algorithm

    KAUST Repository

    Dreano, D.

    2017-04-05

    Specification and tuning of errors from dynamical models are important issues in data assimilation. In this work, we propose an iterative expectation-maximisation (EM) algorithm to estimate the model error covariances using classical extended and ensemble versions of the Kalman smoother. We show that, for additive model errors, the estimate of the error covariance converges. We also investigate other forms of model error, such as parametric or multiplicative errors. We show that additive Gaussian model error is able to compensate for non-additive sources of error in the algorithms we propose. We also demonstrate the limitations of the extended version of the algorithm and recommend the use of the more robust and flexible ensemble version. This article is a proof of concept of the methodology with the Lorenz-63 attractor. We developed an open-source Python library to enable future users to apply the algorithm to their own nonlinear dynamical models.
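    A scalar linear-Gaussian sketch of the additive-error case, assuming the observation-error variance R and the model coefficient are known: the E-step runs a Kalman filter and RTS smoother (including lag-one covariances) and the M-step updates the model-error variance Q in closed form. The cited work replaces the exact smoother with extended and ensemble versions for nonlinear models.

```python
import numpy as np

rng = np.random.default_rng(3)

# Simulate a scalar linear-Gaussian state-space model with additive model error:
#   x_t = a*x_{t-1} + w_t,  w_t ~ N(0, Q_true);   y_t = x_t + v_t,  v_t ~ N(0, R)
a, Q_true, R, T = 0.9, 0.3, 0.5, 500
x = np.zeros(T)
for t in range(1, T):
    x[t] = a * x[t - 1] + rng.normal(0.0, np.sqrt(Q_true))
y = x + rng.normal(0.0, np.sqrt(R), T)

def em_model_error_variance(y, a, R, Q0=1.0, n_iter=50):
    """EM estimate of the additive model-error variance Q (scalar special case)."""
    T = len(y)
    Q = Q0
    for _ in range(n_iter):
        # E-step: Kalman filter
        xp, Pp = np.zeros(T), np.zeros(T)       # one-step predictions
        xf, Pf = np.zeros(T), np.zeros(T)       # filtered estimates
        xf[0], Pf[0] = y[0], R
        for t in range(1, T):
            xp[t], Pp[t] = a * xf[t - 1], a * a * Pf[t - 1] + Q
            K = Pp[t] / (Pp[t] + R)
            xf[t] = xp[t] + K * (y[t] - xp[t])
            Pf[t] = (1.0 - K) * Pp[t]
        # E-step: RTS smoother with lag-one covariances
        xs, Ps = xf.copy(), Pf.copy()
        C = np.zeros(T)                         # C[t] = Cov(x_{t-1}, x_t | all data)
        for t in range(T - 2, -1, -1):
            J = a * Pf[t] / Pp[t + 1]
            xs[t] = xf[t] + J * (xs[t + 1] - xp[t + 1])
            Ps[t] = Pf[t] + J * J * (Ps[t + 1] - Pp[t + 1])
            C[t + 1] = J * Ps[t + 1]
        # M-step: expected squared model-error increments
        Q = np.mean((xs[1:] - a * xs[:-1]) ** 2
                    + Ps[1:] + a * a * Ps[:-1] - 2.0 * a * C[1:])
    return Q

print(round(em_model_error_variance(y, a, R), 3))   # close to Q_true = 0.3
```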

  17. The impact of model and rainfall forcing errors on characterizing soil moisture uncertainty in land surface modeling

    Directory of Open Access Journals (Sweden)

    V. Maggioni

    2012-10-01

    Full Text Available The contribution of rainfall forcing errors relative to model (structural and parameter uncertainty in the prediction of soil moisture is investigated by integrating the NASA Catchment Land Surface Model (CLSM, forced with hydro-meteorological data, in the Oklahoma region. Rainfall-forcing uncertainty is introduced using a stochastic error model that generates ensemble rainfall fields from satellite rainfall products. The ensemble satellite rain fields are propagated through CLSM to produce soil moisture ensembles. Errors in CLSM are modeled with two different approaches: either by perturbing model parameters (representing model parameter uncertainty or by adding randomly generated noise (representing model structure and parameter uncertainty to the model prognostic variables. Our findings highlight that the method currently used in the NASA GEOS-5 Land Data Assimilation System to perturb CLSM variables poorly describes the uncertainty in the predicted soil moisture, even when combined with rainfall model perturbations. On the other hand, by adding model parameter perturbations to rainfall forcing perturbations, a better characterization of uncertainty in soil moisture simulations is observed. Specifically, an analysis of the rank histograms shows that the most consistent ensemble of soil moisture is obtained by combining rainfall and model parameter perturbations. When rainfall forcing and model prognostic perturbations are added, the rank histogram shows a U-shape at the domain average scale, which corresponds to a lack of variability in the forecast ensemble. The more accurate estimation of the soil moisture prediction uncertainty obtained by combining rainfall and parameter perturbations is encouraging for the application of this approach in ensemble data assimilation systems.
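    A small sketch of the rank-histogram diagnostic mentioned above: each observation is ranked within the sorted ensemble, and an under-dispersive ensemble piles ranks up in the outer bins (the U-shape). The synthetic truth, forecast and spread values are invented for the illustration.

```python
import numpy as np

rng = np.random.default_rng(4)

def rank_histogram(ensemble, observations):
    """ensemble: (n_members, n_times); observations: (n_times,).
    Returns counts of the observation's rank within each sorted ensemble."""
    n_members, _ = ensemble.shape
    ranks = (ensemble < observations).sum(axis=0)        # rank in 0..n_members
    return np.bincount(ranks, minlength=n_members + 1)

# Under-dispersive ensemble: the spread is smaller than the forecast error,
# so the truth often falls outside the ensemble and the histogram is U-shaped.
truth = rng.normal(0.0, 1.0, 2000)
forecast = truth + rng.normal(0.0, 1.0, 2000)            # forecast error std = 1.0
ens = forecast + rng.normal(0.0, 0.3, (20, 2000))        # ensemble spread only 0.3

print(rank_histogram(ens, truth))                        # piles up in the end bins
```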

  18. Statistical analysis-based error models for the Microsoft Kinect(TM) depth sensor.

    Science.gov (United States)

    Choo, Benjamin; Landau, Michael; DeVore, Michael; Beling, Peter A

    2014-09-18

    The stochastic error characteristics of the Kinect sensing device are presented for each axis direction. Depth (z) directional error is measured using a flat surface, and horizontal (x) and vertical (y) errors are measured using a novel 3D checkerboard. Results show that the stochastic nature of the Kinect measurement error is affected mostly by the depth at which the object being sensed is located, though radial factors must be considered, as well. Measurement and statistics-based models are presented for the stochastic error in each axis direction, which are based on the location and depth value of empirical data measured for each pixel across the entire field of view. The resulting models are compared against existing Kinect error models, and through these comparisons, the proposed model is shown to be a more sophisticated and precise characterization of the Kinect error distributions.
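    A hedged illustration of what a depth-dependent axial error model can look like; the quadratic growth with depth is a common assumption in the Kinect literature, and the coefficients below are placeholders rather than the statistics estimated in this paper.

```python
def depth_noise_std(z, a=1.5e-3, b=3.0e-3):
    """Illustrative axial (z) noise model: sigma_z(z) = a + b * z**2, with z in
    metres and sigma in metres. The coefficients are placeholders, not the
    values estimated in the cited paper."""
    return a + b * z ** 2

for z in [0.8, 1.5, 2.5, 4.0]:
    print(f"z = {z:.1f} m  ->  sigma_z = {1000 * depth_noise_std(z):.1f} mm")
```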

  19. Statistical Analysis-Based Error Models for the Microsoft Kinect™ Depth Sensor

    Science.gov (United States)

    Choo, Benjamin; Landau, Michael; DeVore, Michael; Beling, Peter A.

    2014-01-01

    The stochastic error characteristics of the Kinect sensing device are presented for each axis direction. Depth (z) directional error is measured using a flat surface, and horizontal (x) and vertical (y) errors are measured using a novel 3D checkerboard. Results show that the stochastic nature of the Kinect measurement error is affected mostly by the depth at which the object being sensed is located, though radial factors must be considered, as well. Measurement and statistics-based models are presented for the stochastic error in each axis direction, which are based on the location and depth value of empirical data measured for each pixel across the entire field of view. The resulting models are compared against existing Kinect error models, and through these comparisons, the proposed model is shown to be a more sophisticated and precise characterization of the Kinect error distributions. PMID:25237896

  20. Fourier decomposition of spatial localization errors reveals an idiotropic dominance of an internal model of gravity.

    Science.gov (United States)

    De Sá Teixeira, Nuno Alexandre

    2014-12-01

    Given its conspicuous nature, gravity has been acknowledged by several research lines as a prime factor in structuring the spatial perception of one's environment. One such line of enquiry has focused on errors in spatial localization aimed at the vanishing location of moving objects - it has been systematically reported that humans mislocalize spatial positions forward, in the direction of motion (representational momentum) and downward in the direction of gravity (representational gravity). Moreover, spatial localization errors were found to evolve dynamically with time in a pattern congruent with an anticipated trajectory (representational trajectory). The present study attempts to ascertain the degree to which vestibular information plays a role in these phenomena. Human observers performed a spatial localization task while tilted to varying degrees and referring to the vanishing locations of targets moving along several directions. A Fourier decomposition of the obtained spatial localization errors revealed that although spatial errors were increased "downward" mainly along the body's longitudinal axis (idiotropic dominance), the degree of misalignment between the latter and physical gravity modulated the time course of the localization responses. This pattern is surmised to reflect increased uncertainty about the internal model when faced with conflicting cues regarding the perceived "downward" direction.
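    A toy version of the Fourier decomposition of localization errors across motion directions, with synthetic data: a constant component plus a first harmonic stands in for a "downward" shift aligned with a particular axis. The error values and sampling of directions are invented for the example.

```python
import numpy as np

# Motion directions (radians) and synthetic downward localization errors:
# a constant offset plus a first-harmonic component along one axis.
directions = np.linspace(0, 2 * np.pi, 12, endpoint=False)
errors = 0.8 + 1.5 * np.sin(directions) + 0.4 * np.cos(directions)

# Discrete Fourier decomposition of the error pattern across directions
coeffs = np.fft.rfft(errors) / len(errors)
dc, first = coeffs[0].real, coeffs[1]
print(f"constant shift: {dc:.2f}")
print(f"first harmonic amplitude: {2 * abs(first):.2f}")   # ~ sqrt(1.5**2 + 0.4**2)
```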

  1. Bayesian modeling of measurement error in predictor variables using item response theory

    NARCIS (Netherlands)

    Fox, Gerardus J.A.; Glas, Cornelis A.W.

    2000-01-01

    This paper focuses on handling measurement error in predictor variables using item response theory (IRT). Measurement error is of great importance in the assessment of theoretical constructs, such as intelligence or school climate. Measurement error is modeled by treating the predictors as unobserved

  2. Bayesian modeling of measurement error in predictor variables using item response theory

    NARCIS (Netherlands)

    Fox, Jean-Paul; Glas, Cees A.W.

    2000-01-01

    This paper focuses on handling measurement error in predictor variables using item response theory (IRT). Measurement error is of great importance in the assessment of theoretical constructs, such as intelligence or school climate. Measurement error is modeled by treating the predictors as unobserved

  3. Making refractive error services sustainable: the International Eye Foundation model

    Directory of Open Access Journals (Sweden)

    Victoria M Sheffield

    2007-09-01

    Full Text Available The International Eye Foundation (IEF believes that the most effective strategy for making spectacles affordable and accessible is to integrate refractive error services into ophthalmic services and to run the refractive error service as a business – thereby making it sustainable. An optical service should be able to deal with high volumes of patients and generate enough revenue – not just to cover its own costs, but also to contribute to ophthalmic clinical services.

  4. Research of error structure of standard time signal synchronization system via digital television channels

    OpenAIRE

    Троцько, Максим Леонідович; Тріщ, Роман Михайлович

    2014-01-01

    The error structure of the standard time signal synchronization system via digital television channels was investigated. The relevance of this research is determined by the change of the television broadcasting format in Ukraine from analog to digital, which has necessitated the creation of a new standard time signal transmission system adapted to the current format. An estimate of the basic permissible error of the system of standard time signal transmission via digital television channels, wh...

  5. The problem with total error models in establishing performance specifications and a simple remedy.

    Science.gov (United States)

    Krouwer, Jan S

    2016-08-01

    A recent issue in this journal revisited performance specifications since the Stockholm conference. Of the three recommended methods, two use total error models to establish performance specifications. It is shown that the most commonly used total error model - the Westgard model - is deficient, yet even more complete models fail to capture all errors that comprise total error. Moreover, total error models are often set at 95% of results, which leaves 5% of results unspecified. Glucose meter performance standards are used to illustrate these problems. The Westgard model is useful to assess assay performance but not to set performance specifications. Total error can be used to set performance specifications if the specifications include 100% of the results.
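    For concreteness, the Westgard-style linear total error model combines bias and imprecision as TE = |bias| + z*SD, with z of about 1.65 covering roughly 95% of results and therefore leaving about 5% unspecified, which is the criticism raised above. The glucose-meter numbers below are hypothetical.

```python
def total_error(bias, sd, z=1.65):
    """Westgard-style linear total error model: TE = |bias| + z*SD.
    With z = 1.65, roughly 95% of results fall within TE, leaving ~5% unspecified."""
    return abs(bias) + z * sd

# Hypothetical glucose meter performance at 100 mg/dL
print(total_error(bias=3.0, sd=4.0))   # 9.6 mg/dL
```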

  6. Model-observer similarity, error modeling and social learning in rhesus macaques.

    Science.gov (United States)

    Monfardini, Elisabetta; Hadj-Bouziane, Fadila; Meunier, Martine

    2014-01-01

    Monkeys readily learn to discriminate between rewarded and unrewarded items or actions by observing their conspecifics. However, they do not systematically learn from humans. Understanding what makes human-to-monkey transmission of knowledge work or fail could help identify mediators and moderators of social learning that operate regardless of language or culture, and transcend inter-species differences. Do monkeys fail to learn when human models show a behavior too dissimilar from the animals' own, or when they show a faultless performance devoid of error? To address this question, six rhesus macaques trained to find which object within a pair concealed a food reward were successively tested with three models: a familiar conspecific, a 'stimulus-enhancing' human actively drawing the animal's attention to one object of the pair without actually performing the task, and a 'monkey-like' human performing the task in the same way as the monkey model did. Reward was manipulated to ensure that all models showed equal proportions of errors and successes. The 'monkey-like' human model improved the animals' subsequent object discrimination learning as much as a conspecific did, whereas the 'stimulus-enhancing' human model tended on the contrary to retard learning. Modeling errors rather than successes optimized learning from the monkey and 'monkey-like' models, while exacerbating the adverse effect of the 'stimulus-enhancing' model. These findings identify error modeling as a moderator of social learning in monkeys that amplifies the models' influence, whether beneficial or detrimental. By contrast, model-observer similarity in behavior emerged as a mediator of social learning, that is, a prerequisite for a model to work in the first place. The latter finding suggests that, as preverbal infants, macaques need to perceive the model as 'like-me' and that, once this condition is fulfilled, any agent can become an effective model.

  7. Dynamic term structure models

    DEFF Research Database (Denmark)

    Andreasen, Martin Møller; Meldrum, Andrew

    This paper studies whether dynamic term structure models for US nominal bond yields should enforce the zero lower bound by a quadratic policy rate or a shadow rate specification. We address the question by estimating quadratic term structure models (QTSMs) and shadow rate models with at most four...

  8. Modeling and Sensitivity Analysis of Navigation Parameter Errors for Airborne Synthetic Aperture Radar Stereo Geolocation

    Institute of Scientific and Technical Information of China (English)

    PANG Lei; ZHANG Jixian; YAN Qin

    2010-01-01

    For high-resolution airborne synthetic aperture radar (SAR) stereo geolocation applications, the final geolocation accuracy is influenced by various error parameter sources. In this paper, an airborne SAR stereo geolocation parameter error model, involving the parameter errors derived from the navigation system on the flight platform, is put forward. Moreover, a near-direct method for the modeling and sensitivity analysis of navigation parameter errors is also given. This method directly uses the ground reference to calculate the covariance matrix relationship between the parameter errors and the eventual geolocation errors for ground target points. In addition, using the errors of true flight track parameters, the paper verifies the method, provides a corresponding sensitivity analysis for the airborne SAR stereo geolocation model, and demonstrates its efficiency.

  9. Measuring errors and violations on the road: a bifactor modeling approach to the Driver Behavior Questionnaire.

    Science.gov (United States)

    Rowe, Richard; Roman, Gabriela D; McKenna, Frank P; Barker, Edward; Poulter, Damian

    2015-01-01

    The Driver Behavior Questionnaire (DBQ) is a self-report measure of driving behavior that has been widely used over more than 20 years. Despite this wealth of evidence a number of questions remain, including understanding the correlation between its violations and errors sub-components, identifying how these components are related to crash involvement, and testing whether a DBQ based on a reduced number of items can be effective. We address these issues using a bifactor modeling approach to data drawn from the UK Cohort II longitudinal study of novice drivers. This dataset provides observations on 12,012 drivers with DBQ data collected at .5, 1, 2 and 3 years after passing their test. A bifactor model, including a general factor onto which all items loaded, and specific factors for ordinary violations, aggressive violations, slips and errors fitted the data better than correlated factors and second-order factor structures. A model based on only 12 items replicated this structure and produced factor scores that were highly correlated with the full model. The ordinary violations and general factor were significant independent predictors of crash involvement at 6 months after starting independent driving. The discussion considers the role of the general and specific factors in crash involvement.

  10. Quantification of Transport Model Error Impacts on CO2 Inversions Using NASA's GEOS-5 GCM

    Science.gov (United States)

    Ott, L.; Pawson, S.; Weir, B.

    2014-12-01

    Remote sensing observations of CO2 offer the opportunity to reduce uncertainty in global carbon flux estimates. However, a number of studies have shown that inversion flux estimates are strongly influenced by errors in model transport. We will present results from modeling studies designed to quantify how such errors influence simulations of surface and column CO2 mixing ratios. These studies were conducted using the Goddard Earth Observing System, version 5 (GEOS-5) Atmospheric General Circulation Model (AGCM) and the implementation of a suite of tracers associated with errors in boundary layer, convective, and large scale transport. Unlike traditional tagged tracers which are emitted by a certain process or region, error tracers are emitted as air parcels are transported through the atmosphere. The magnitude of error tracer emissions is based on previously published ensembles of AGCM simulations with perturbations to subgrid convective and boundary layer transport, and on comparisons of several reanalysis products to estimate errors in large scale wind fields. Transport error tracers are simulated with several different e-folding lifetimes (e.g. 1, 4, 10, and 30 day) to examine differences between transient and persistent model errors. This quantification of transport error is then used in an illustrative Bayesian synthesis inversion to demonstrate how transport errors influence surface CO2 mixing ratios and how this translates into inferred biosphere flux error.

  11. Spatial Distribution of the Errors in Modeling the Mid-Latitude Critical Frequencies by Different Models

    Science.gov (United States)

    Kilifarska, N. A.

    Several models describe the spatial distribution of the highest frequency reflected from the F2 ionospheric layer (foF2). However, the distribution of these models' errors over the globe, and how the errors depend on season, solar activity, etc., has been unknown until now. The aim of the present paper is therefore to compare the accuracy of CCIR, URSI, and a newly created theoretical model in describing the latitudinal and longitudinal variation of the mid-latitude maximum electron density. A comparison between the above-mentioned models and all VI data available from the Boulder data bank (between 35 deg and 70 deg) has been made. Data for three whole years with different solar activity - 1976 (F_10.7 = 73.6), 1981 (F_10.7 = 20.6), 1983 (F_10.7 = 119.6) - have been compared. The final results show that: 1. the areas with the greatest and smallest errors depend on UT, season and solar activity; 2. the error distributions of the CCIR and URSI models are very similar and do not coincide with those of the theoretical model. The latter result indicates that the theoretical model, described briefly below, may be a real alternative to the empirical CCIR and URSI models. The different spatial distributions of the models' errors give users a chance to choose the most appropriate model, depending on their needs. Taking into account that theoretical models have equal accuracy in regions with many ionosonde stations or without any, this result shows that our model can be used to improve the global mapping of the mid-latitude ionosphere. Moreover, if real values of the input aeronomical parameters (neutral composition, temperatures and winds) are used, it may be expected that this theoretical model can be applied for real or near-real-time mapping of the main ionospheric parameters (foF2 and hmF2).

  12. Error Modeling and Analysis for InSAR Spatial Baseline Determination of Satellite Formation Flying

    Directory of Open Access Journals (Sweden)

    Jia Tu

    2012-01-01

    Full Text Available Spatial baseline determination is a key technology for interferometric synthetic aperture radar (InSAR) missions. Based on the intersatellite baseline measurement using dual-frequency GPS, errors induced in InSAR spatial baseline measurement are studied in detail. The classification and characteristics of the errors are analyzed, and error models are set up. Simulations of single-factor and total error sources are used to evaluate the impacts of errors on the spatial baseline measurement: single-factor simulations analyze the impact of each individual error type, while total error source simulations analyze the impacts of the error sources induced by GPS measurement, baseline transformation, and the entire spatial baseline measurement, respectively. Simulation results show that errors related to GPS measurement are the main error sources for spatial baseline determination, and that carrier phase noise of the GPS observations and the fixing error of the GPS receiver antenna are the main factors among them. In addition, according to the error values listed in this paper, 1 mm level InSAR spatial baseline determination should be achievable.

  13. DISTANCE MEASURING MODELING AND ERROR ANALYSIS OF DUAL CCD VISION SYSTEM SIMULATING HUMAN EYES AND NECK

    Institute of Scientific and Technical Information of China (English)

    Wang Xuanyin; Xiao Baoping; Pan Feng

    2003-01-01

    A dual-CCD simulating human eyes and neck (DSHEN) vision system is put forward, and its structure and principle are introduced. The DSHEN vision system can perform movements simulating the human eyes and neck by means of four rotating joints, and realize precise object recognition and distance measurement in all orientations. The mathematical model of the DSHEN vision system is built, and its movement equation is solved. The coordinate error and measurement precision affected by the movement parameters are analyzed by means of an intersection measuring method. A theoretical foundation is thus provided for further research on automatic object recognition and precise target tracking.

  14. Holographic quantum error-correcting codes: Toy models for the bulk/boundary correspondence

    CERN Document Server

    Pastawski, Fernando; Harlow, Daniel; Preskill, John

    2015-01-01

    We propose a family of exactly solvable toy models for the AdS/CFT correspondence based on a novel construction of quantum error-correcting codes with a tensor network structure. Our building block is a special type of tensor with maximal entanglement along any bipartition, which gives rise to an exact isometry from bulk operators to boundary operators. The entire tensor network is a quantum error-correcting code, where the bulk and boundary degrees of freedom may be identified as logical and physical degrees of freedom respectively. These models capture key features of entanglement in the AdS/CFT correspondence; in particular, the Ryu-Takayanagi formula and the negativity of tripartite information are obeyed exactly in many cases. That bulk logical operators can be represented on multiple boundary regions mimics the Rindler-wedge reconstruction of boundary operators from bulk operators, realizing explicitly the quantum error-correcting features of AdS/CFT recently proposed by Almheiri et al. in arXiv:1411.70...

  15. Bayesian inversion of microtremor array dispersion data with hierarchical trans-dimensional earth and autoregressive error models

    Science.gov (United States)

    Molnar, S.; Dettmer, J.; Steininger, G.; Dosso, S. E.; Cassidy, J. F.

    2013-12-01

    This paper applies hierarchical, trans-dimensional Bayesian models for earth and residual-error parametrizations to the inversion of microtremor array dispersion data for shear-wave velocity (Vs) structure. The earth is parametrized in terms of flat-lying, homogeneous layers and residual errors are parametrized with a first-order autoregressive data-error model. The inversion accounts for the limited knowledge of the optimal earth and residual error model parametrization (e.g. the number of layers in the Vs profile) in the resulting Vs parameter uncertainty estimates. The assumed earth model parametrization influences estimates of parameter values and uncertainties due to different parametrizations leading to different ranges of data predictions. The support of the data for a particular model is often non-unique and several parametrizations may be supported. A trans-dimensional formulation accounts for this non-uniqueness by including a model-indexing parameter as an unknown so that groups of models (identified by the index) are considered in the results. In addition, serial residual-error correlations are addressed by augmenting the geophysical forward model with a hierarchical autoregressive error model that can account for a wide range of error processes with a small number of parameters. Hence, the limited knowledge about the true statistical distribution of data errors is also accounted for in the earth model parameter estimates, resulting in more realistic uncertainties and parameter values. Hierarchical autoregressive error models do not rely on point estimates of the model vector to estimate residual-error statistics, and have no requirement for computing the inverse or determinant of a covariance matrix. This approach is particularly useful for trans-dimensional inverse problems, as point estimates may not be representative of the state space that spans multiple subspaces of different dimensions. The autoregressive process is restricted to first order and
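
    As a minimal illustration of the residual-error treatment described above, the Python sketch below evaluates the exact Gaussian log-likelihood of a residual series under a first-order autoregressive (AR(1)) error model without forming or inverting a covariance matrix; the function name and the toy residuals are ours, not the paper's.

      import numpy as np

      def ar1_loglik(residuals, phi, sigma):
          # Exact log-likelihood of residuals under an AR(1) error model.
          # phi   : autoregressive coefficient (|phi| < 1)
          # sigma : standard deviation of the white-noise innovations
          r = np.asarray(residuals, dtype=float)
          innov = r[1:] - phi * r[:-1]              # innovations e_t = r_t - phi*r_{t-1}
          var0 = sigma**2 / (1.0 - phi**2)          # stationary variance of the first residual
          ll = -0.5 * (np.log(2 * np.pi * var0) + r[0]**2 / var0)
          ll += -0.5 * np.sum(np.log(2 * np.pi * sigma**2) + innov**2 / sigma**2)
          return ll

      # toy usage with synthetic residuals from some forward-model fit
      res = np.random.default_rng(0).normal(size=200)
      print(ar1_loglik(res, phi=0.3, sigma=1.0))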

  16. A Generalized Process Model of Human Action Selection and Error and its Application to Error Prediction

    Science.gov (United States)

    2014-07-01

    Macmillan & Creelman, 2005). This is quite a high degree of discriminability and it means that when the decision model predicts a probability of... ROC analysis. Pattern Recognition Letters, 27(8), 861-874. Retrieved from Google Scholar. Macmillan, N. A., & Creelman, C. D. (2005). Detection

  17. Fundamental modeling issues on benchmark structure for structural health monitoring

    Institute of Scientific and Technical Information of China (English)

    HU Sau-Lon James

    2009-01-01

    The IASC-ASCE Structural Health Monitoring Task Group developed a series of benchmark problems, and participants of the benchmark study were charged with using a 12-degree-of-freedom (DOF) shear building as their identification model. The present article addresses improperness, including the parameter and modeling errors, of using this particular model for the intended purpose of damage detection, while the measurements of damaged structures are synthesized from a full-order finite-element model. In addressing parameter errors, a model calibration procedure is utilized to tune the mass and stiffness matrices of the baseline identification model, and a 12-DOF shear building model that preserves the first three modes of the full-order model is obtained. Sequentially, this calibrated model is employed as the baseline model while performing the damage detection under various damage scenarios. Numerical results indicate that the 12-DOF shear building model is an over-simplified identification model, through which only idealized damage situations for the benchmark structure can be detected. It is suggested that a more sophisticated 3-dimensional frame structure model should be adopted as the identification model, if one intends to detect local member damages correctly.

  18. Fundamental modeling issues on benchmark structure for structural health monitoring

    Institute of Scientific and Technical Information of China (English)

    LI HuaJun; ZHANG Min; WANG JunRong; HU Sau-Lon James

    2009-01-01

    The IASC-ASCE Structural Health Monitoring Task Group developed a series of benchmark problems,and participants of the benchmark study were charged with using a 12-degree-of-freedom (DOF) shear building as their identification model. The present article addresses improperness, including the parameter and modeling errors, of using this particular model for the intended purpose of damage detection, while the measurements of damaged structures are synthesized from a full-order finite-element model. In addressing parameter errors, a model calibration procedure is utilized to tune the mass and stiffness matrices of the baseline identification model, and a 12-DOF shear building model that preserves the first three modes of the full-order model is obtained. Sequentially, this calibrated model is employed as the baseline model while performing the damage detection under various damage scenarios. Numerical results indicate that the 12-DOF shear building model is an over-simplified identification model, through which only idealized damage situations for the benchmark structure can be detected. It is suggested that a more sophisticated 3-dimensional frame structure model should be adopted as the identification model, if one intends to detect local member damages correctly.

  19. Error Propagation in Equations for Geochemical Modeling of Radiogenic Isotopes in Two-Component Mixing

    Indian Academy of Sciences (India)

    Surendra P Verma

    2000-03-01

    This paper presents error propagation equations for modeling of radiogenic isotopes during mixing of two components or end-members. These equations can be used to estimate errors on an isotopic ratio in the mixture of two components, as a function of the analytical errors or the total errors of geological field sampling and analytical errors. Two typical cases ("Small errors" and "Large errors") are illustrated for mixing of Sr isotopes. Similar examples can be formulated for the other radiogenic isotopic ratios. Actual isotopic data for sediment and basalt samples from the Cocos plate are also included to further illustrate the use of these equations. The isotopic compositions of the predicted mixtures can be used to constrain the origin of magmas in the central part of the Mexican Volcanic Belt. These examples show the need for high-quality experimental data in order for them to be useful in geochemical modeling of magmatic processes.
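
    The mixing relation behind such calculations can be sketched as follows; this is a hedged illustration with made-up end-member values, not the paper's data, and uncertainties are propagated here by simple Monte Carlo sampling rather than the paper's analytical expressions.

      import numpy as np

      def sr_mixture_ratio(x, c_a, r_a, c_b, r_b):
          # 87Sr/86Sr of a two-component mixture with mass fraction x of end-member A;
          # c_* are Sr concentrations, r_* the end-member isotopic ratios.
          return (x * c_a * r_a + (1 - x) * c_b * r_b) / (x * c_a + (1 - x) * c_b)

      rng = np.random.default_rng(1)
      n = 100_000
      c_a = rng.normal(300.0, 10.0, n)      # ppm Sr, sediment-like end-member (illustrative)
      r_a = rng.normal(0.7090, 0.0001, n)
      c_b = rng.normal(450.0, 15.0, n)      # ppm Sr, basalt-like end-member (illustrative)
      r_b = rng.normal(0.7035, 0.0001, n)

      for x in (0.1, 0.5, 0.9):
          r_mix = sr_mixture_ratio(x, c_a, r_a, c_b, r_b)
          print(f"x = {x:.1f}: ratio = {r_mix.mean():.5f} +/- {r_mix.std():.5f}")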

  20. FUZZY MODEL OPTIMIZATION FOR TIME SERIES DATA USING A TRANSLATION IN THE EXTENT OF MEAN ERROR

    Directory of Open Access Journals (Sweden)

    Nurhayadi

    2014-01-01

    Full Text Available Recently, many researchers have written about forecasting stock prices, electricity load demand and academic enrollment using fuzzy methods. In general, however, such modeling does not yet consider the position of the model relative to the actual data, which means that the error is not handled optimally. An error that is not managed well can reduce the accuracy of the forecast. Therefore, this paper discusses reducing the error using a translation of the model. The error to be reduced is the Mean Square Error (MSE). The analysis is done mathematically, and the empirical study is done by applying the translation to a fuzzy model for enrollment forecasting at the University of Alabama. The results of this analysis show that translation in the extent of the mean error can reduce the MSE.
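
    The core idea, shifting (translating) the fuzzy model output so that the mean error is removed, can be sketched as below; the data are illustrative, not the Alabama enrollment series, and for a pure translation the MSE-optimal shift is simply the mean residual.

      import numpy as np

      def translate_to_minimize_mse(actual, predicted):
          # Shift model predictions by the mean error; this is the MSE-optimal translation.
          actual = np.asarray(actual, float)
          predicted = np.asarray(predicted, float)
          shift = np.mean(actual - predicted)
          adjusted = predicted + shift
          mse_before = np.mean((actual - predicted) ** 2)
          mse_after = np.mean((actual - adjusted) ** 2)
          return adjusted, shift, mse_before, mse_after

      # toy forecast sitting systematically below the data
      actual = np.array([15100.0, 15400.0, 15900.0, 16200.0, 16000.0, 15700.0])
      predicted = actual - 250.0 + np.random.default_rng(2).normal(0.0, 50.0, actual.size)
      _, shift, before, after = translate_to_minimize_mse(actual, predicted)
      print(f"shift = {shift:.1f}, MSE before = {before:.0f}, MSE after = {after:.0f}")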

  1. Spindle Thermal Error Optimization Modeling of a Five-axis Machine Tool

    Institute of Scientific and Technical Information of China (English)

    Qianjian GUO; Shuo FAN; Rufeng XU; Xiang CHENG; Guoyong ZHAO; Jianguo YANG

    2017-01-01

    Aiming at the problem of low machining accuracy and uncontrollable thermal errors of NC machine tools, spindle thermal error measurement, modeling and compensation of a two-turntable five-axis machine tool are researched. Measurement experiments on heat sources and thermal errors are carried out, and the GRA (grey relational analysis) method is introduced into the selection of temperature variables used for thermal error modeling. In order to analyze the influence of different heat sources on spindle thermal errors, an ANN (artificial neural network) model is presented, and the ABC (artificial bee colony) algorithm is introduced to train the link weights of the ANN; a new ABC-NN (artificial bee colony-based neural network) modeling method is proposed and used in the prediction of spindle thermal errors. In order to test the prediction performance of the ABC-NN model, an experiment system is developed, and the prediction results of LSR (least squares regression), ANN and ABC-NN are compared with the measurement results of spindle thermal errors. Experiment results show that the prediction accuracy of the ABC-NN model is higher than LSR and ANN, and the residual error is smaller than 3 μm, so the new modeling method is feasible. The proposed research provides guidance for compensating thermal errors and improving the machining accuracy of NC machine tools.

  2. Spindle Thermal Error Optimization Modeling of a Five-axis Machine Tool

    Science.gov (United States)

    Guo, Qianjian; Fan, Shuo; Xu, Rufeng; Cheng, Xiang; Zhao, Guoyong; Yang, Jianguo

    2017-03-01

    Aiming at the problem of low machining accuracy and uncontrollable thermal errors of NC machine tools, spindle thermal error measurement, modeling and compensation of a two turntable five-axis machine tool are researched. Measurement experiment of heat sources and thermal errors are carried out, and GRA(grey relational analysis) method is introduced into the selection of temperature variables used for thermal error modeling. In order to analyze the influence of different heat sources on spindle thermal errors, an ANN (artificial neural network) model is presented, and ABC(artificial bee colony) algorithm is introduced to train the link weights of ANN, a new ABC-NN(Artificial bee colony-based neural network) modeling method is proposed and used in the prediction of spindle thermal errors. In order to test the prediction performance of ABC-NN model, an experiment system is developed, the prediction results of LSR (least squares regression), ANN and ABC-NN are compared with the measurement results of spindle thermal errors. Experiment results show that the prediction accuracy of ABC-NN model is higher than LSR and ANN, and the residual error is smaller than 3 μm, the new modeling method is feasible. The proposed research provides instruction to compensate thermal errors and improve machining accuracy of NC machine tools.
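
    A minimal sketch of the GRA step, selecting the temperature sensors most related to the measured thermal error, is given below; it uses the common Deng grey relational grade with distinguishing coefficient 0.5, and the sensor sequences are synthetic, not the paper's measurements.

      import numpy as np

      def grey_relational_grade(reference, candidates, rho=0.5):
          # Grey relational grade of each candidate temperature sequence with respect to
          # the thermal-error reference sequence (higher grade = stronger relation).
          def norm(x):
              x = np.asarray(x, float)
              return (x - x.min()) / (x.max() - x.min())
          ref = norm(reference)
          deltas = np.array([np.abs(ref - norm(c)) for c in candidates])
          d_min, d_max = deltas.min(), deltas.max()          # global two-level extremes
          coeff = (d_min + rho * d_max) / (deltas + rho * d_max)
          return coeff.mean(axis=1)

      t = np.linspace(0.0, 1.0, 50)
      error = 20.0 * (1.0 - np.exp(-3.0 * t))                      # spindle thermal drift, micrometres
      temps = np.vstack([25.0 + 15.0 * (1.0 - np.exp(-3.0 * t)),   # sensor near the spindle
                         25.0 + 5.0 * t,                           # ambient sensor
                         25.0 + 10.0 * np.sqrt(t)])                # machine-bed sensor
      print(grey_relational_grade(error, temps))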

  3. Error assessment of digital elevation models obtained by interpolation

    Directory of Open Access Journals (Sweden)

    Jean François Mas

    2009-10-01

    Full Text Available Few studies have focused on evaluating the errors inherent in digital elevation models (DEMs). For this reason, the errors of DEMs obtained by different interpolation methods (ARC/INFO, IDRISI, ILWIS and NEW-MIEL) and with different resolutions were evaluated, with the aim of obtaining a more accurate representation of the relief. This evaluation of the interpolation methods is crucial, bearing in mind that DEMs are the most effective way of representing the land surface for terrain analysis and that they are widely used in the environmental sciences. The results show that the resolution, the interpolation method and the inputs (contour lines alone, or together with drainage data and spot heights) have an important influence on the magnitude of the errors generated in the DEM. In this study, which was carried out with 50 m contour lines in a mountainous area, the most suitable resolution was 30 m. The DEM with the smallest error (a mean quadratic error, EMC, of 7.3 m) was obtained with ARC/INFO. However, free programs such as NEW-MIEL or ILWIS yielded results with an EMC of 10 m.
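
    The kind of check-point comparison used in such assessments can be sketched as follows (illustrative elevations only, not the study's data):

      import numpy as np

      def dem_error_stats(dem_elev, check_elev):
          # Compare interpolated DEM elevations with independent check-point elevations;
          # returns the mean error (bias) and the root mean square error.
          diff = np.asarray(dem_elev, float) - np.asarray(check_elev, float)
          return diff.mean(), np.sqrt(np.mean(diff ** 2))

      dem = np.array([1502.1, 1487.3, 1530.8, 1510.0, 1495.6])
      gps = np.array([1497.0, 1490.2, 1524.5, 1515.3, 1492.1])
      bias, rmse = dem_error_stats(dem, gps)
      print(f"bias = {bias:.1f} m, RMSE = {rmse:.1f} m")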

  4. Empirical analysis and modeling of errors of atmospheric profiles from GPS radio occultation

    Directory of Open Access Journals (Sweden)

    B. Scherllin-Pirscher

    2011-05-01

    Full Text Available The utilization of radio occultation (RO) data in atmospheric studies requires precise knowledge of error characteristics. We present results of an empirical error analysis of GPS radio occultation (RO) bending angle, refractivity, dry pressure, dry geopotential height, and dry temperature. We find very good agreement between data characteristics of different missions (CHAMP, GRACE-A, and Formosat-3/COSMIC (F3C)). In the global mean, observational errors (standard deviation from "true" profiles at mean tangent point location) agree within 0.3 % in bending angle, 0.1 % in refractivity, and 0.2 K in dry temperature at all altitude levels between 4 km and 35 km. Above ≈20 km, the observational errors show a strong seasonal dependence at high latitudes. Larger errors occur in hemispheric wintertime and are associated mainly with background data used in the retrieval process. The comparison between UCAR and WEGC results (both data centers have independent inversion processing chains) reveals different magnitudes of observational errors in atmospheric parameters, which are attributable to different background fields used. Based on the empirical error estimates, we provide a simple analytical error model for GPS RO atmospheric parameters and account for vertical, latitudinal, and seasonal variations. In the model, which spans the altitude range from 4 km to 35 km, a constant error is adopted around the tropopause region amounting to 0.8 % for bending angle, 0.35 % for refractivity, 0.15 % for dry pressure, 10 m for dry geopotential height, and 0.7 K for dry temperature. Below this region the observational error increases following an inverse height power-law and above it increases exponentially. The observational error model is the same for UCAR and WEGC data but due to somewhat different error characteristics below about 10 km and above about 20 km some parameters have to be adjusted. Overall, the observational error model is easily applicable and
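
    The shape of such an analytical error model, constant around the tropopause, growing with an inverse height power-law below and exponentially above, can be sketched as below; the break heights, exponent and scale height are illustrative placeholders, not the fitted values of the paper.

      import numpy as np

      def ro_obs_error(z, s0, z_bot=10.0, z_top=20.0, b=1.0, hs=10.0):
          # Piecewise observational-error profile: constant s0 between z_bot and z_top,
          # inverse-height power-law growth below, exponential growth above (z in km).
          z = np.asarray(z, float)
          err = np.full_like(z, s0)
          low, high = z < z_bot, z > z_top
          err[low] = s0 * (z_bot / z[low]) ** b
          err[high] = s0 * np.exp((z[high] - z_top) / hs)
          return err

      z = np.linspace(4.0, 35.0, 8)
      print(ro_obs_error(z, s0=0.7))        # e.g. dry temperature error in K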

  5. Impact of operational model nesting approaches and inherent errors for coastal simulations

    Science.gov (United States)

    Brown, Jennifer M.; Norman, Danielle L.; Amoudry, Laurent O.; Souza, Alejandro J.

    2016-11-01

    A region of freshwater influence (ROFI) under hypertidal conditions is used to demonstrate inherent problems for nested operational modelling systems. Such problems can impact the accurate simulation of freshwater export within shelf seas, so must be considered in coastal ocean modelling studies. In Liverpool Bay (our UK study site), freshwater inflow from 3 large estuaries forms a coastal front that moves in response to tides and winds. The cyclic occurrence of stratification and remixing is important for the biogeochemical cycles, as nutrient and pollutant loaded freshwater is introduced into the coastal system. Validation methods, using coastal observations from fixed moorings and cruise transects, are used to assess the simulation of the ROFI, through improved spatial structure and temporal variability of the front, as guidance for best practise model setup. A structured modelling system using a 180 m grid nested within a 1.8 km grid demonstrates how compensation for error at the coarser resolution can have an adverse impact on the nested, high resolution application. Using 2008, a year of typical calm and stormy periods with variable river influence, the sensitivities of the ROFI dynamics to initial and boundary conditions are investigated. It is shown that accurate representation of the initial water column structure is important at the regional scale and that the boundary conditions are most important at the coastal scale. Although increased grid resolution captures the frontal structure, the accuracy in frontal position is determined by the offshore boundary conditions and therefore the accuracy of the coarser regional model.

  6. Meta Modeling of Transmission Error for Spur, Helical and Planetary Gears for Wind Turbine Application

    OpenAIRE

    Irfan, Muhammad

    2013-01-01

    Detailed analysis of drive train dynamics requires accounting for the transmission error that arises in gears. However, direct computation of the transmission error requires a 3-dimensional contact analysis with the correct gear geometry, which is impractically computationally intensive. Therefore, a simplified representation of the transmission error, a so-called meta-model, is developed. The model is based on the response surface method, and the coefficients of the angle-dependent tran...

  7. Increasing safety of a robotic system for inner ear surgery using probabilistic error modeling near vital anatomy

    Science.gov (United States)

    Dillon, Neal P.; Siebold, Michael A.; Mitchell, Jason E.; Blachon, Gregoire S.; Balachandran, Ramya; Fitzpatrick, J. Michael; Webster, Robert J.

    2016-03-01

    Safe and effective planning for robotic surgery that involves cutting or ablation of tissue must consider all potential sources of error when determining how close the tool may come to vital anatomy. A pre-operative plan that does not adequately consider potential deviations from ideal system behavior may lead to patient injury. Conversely, a plan that is overly conservative may result in ineffective or incomplete performance of the task. Thus, enforcing simple, uniform-thickness safety margins around vital anatomy is insufficient in the presence of spatially varying, anisotropic error. Prior work has used registration error to determine a variable-thickness safety margin around vital structures that must be approached during mastoidectomy but ultimately preserved. In this paper, these methods are extended to incorporate image distortion and physical robot errors, including kinematic errors and deflections of the robot. These additional sources of error are discussed and stochastic models for a bone-attached robot for otologic surgery are developed. An algorithm for generating appropriate safety margins based on a desired probability of preserving the underlying anatomical structure is presented. Simulations are performed on a CT scan of a cadaver head and safety margins are calculated around several critical structures for planning of a robotic mastoidectomy.
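
    One way to turn such combined error models into a margin is a simple quantile of Monte Carlo samples, sketched below with made-up error statistics; the planning algorithm in the paper is spatially varying and anisotropic, which this toy version omits.

      import numpy as np

      def safety_margin(error_samples, p_preserve=0.999):
          # Margin such that the structure is preserved with probability >= p_preserve,
          # given sampled signed errors along the surface normal (positive = toward it).
          return np.quantile(error_samples, p_preserve)

      rng = np.random.default_rng(7)
      registration = rng.normal(0.0, 0.4, 100_000)   # mm, illustrative registration error
      robot = rng.normal(0.1, 0.2, 100_000)          # mm, kinematics + deflection, illustrative
      print(f"margin = {safety_margin(registration + robot):.2f} mm")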

  8. The impact of experimental measurement errors on long-term viscoelastic predictions. [of structural materials

    Science.gov (United States)

    Tuttle, M. E.; Brinson, H. F.

    1986-01-01

    The impact of slight errors in measured viscoelastic parameters on subsequent long-term viscoelastic predictions is numerically evaluated using the Schapery nonlinear viscoelastic model. Of the seven Schapery parameters, the results indicated that long-term predictions were most sensitive to errors in the power law parameter n. Although errors in the other parameters were significant as well, errors in n dominated all other factors at long times. The process of selecting an appropriate short-term test cycle so as to ensure an accurate long-term prediction was considered, and a short-term test cycle was selected using material properties typical for T300/5208 graphite-epoxy at 149 C. The process of selection is described, and its individual steps are itemized.

  9. Correction of approximation errors with Random Forests applied to modelling of aerosol first indirect effect

    Directory of Open Access Journals (Sweden)

    A. Lipponen

    2013-04-01

    Full Text Available In atmospheric models, due to their computational time or resource limitations, physical processes have to be simulated using reduced models. The use of a reduced model, however, induces errors to the simulation results. These errors are referred to as approximation errors. In this paper, we propose a novel approach to correct these approximation errors. We model the approximation error as an additive noise process in the simulation model and employ the Random Forest (RF) regression algorithm for constructing a computationally low cost predictor for the approximation error. In this way, the overall simulation problem is decomposed into two separate and computationally efficient simulation problems: solution of the reduced model and prediction of the approximation error realization. The approach is tested for handling approximation errors due to a reduced coarse sectional representation of aerosol size distribution in a cloud droplet activation calculation. The results show a significant improvement in the accuracy of the simulation compared to the conventional simulation with a reduced model. The proposed approach is rather general and extension of it to different parameterizations or reduced process models that are coupled to geoscientific models is a straightforward task. Another major benefit of this method is that it can be applied to physical processes that are dependent on a large number of variables making them difficult to be parameterized by traditional methods.
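
    The decomposition described above, run the reduced model and add a learned correction, can be sketched with scikit-learn as below; the "full" and "reduced" models are synthetic stand-ins, not the aerosol activation scheme of the paper.

      import numpy as np
      from sklearn.ensemble import RandomForestRegressor

      rng = np.random.default_rng(3)
      X = rng.uniform(0.0, 1.0, size=(2000, 5))                       # model input parameters
      y_full = np.sin(3 * X[:, 0]) + X[:, 1] ** 2 + 0.5 * X[:, 2]     # accurate (expensive) model
      y_reduced = np.sin(3 * X[:, 0]) + X[:, 1] ** 2                  # reduced model misses one term
      approx_error = y_full - y_reduced

      rf = RandomForestRegressor(n_estimators=200, random_state=0)
      rf.fit(X[:1500], approx_error[:1500])                           # train on paired model runs

      # at prediction time only the reduced model is run; the RF adds the correction
      y_corrected = y_reduced[1500:] + rf.predict(X[1500:])
      rmse_plain = np.sqrt(np.mean((y_reduced[1500:] - y_full[1500:]) ** 2))
      rmse_corr = np.sqrt(np.mean((y_corrected - y_full[1500:]) ** 2))
      print(rmse_plain, rmse_corr)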

  10. Predictive vegetation modeling for conservation: impact of error propagation from digital elevation data.

    Science.gov (United States)

    Van Niel, Kimberly P; Austin, Mike P

    2007-01-01

    The effect of digital elevation model (DEM) error on environmental variables, and subsequently on predictive habitat models, has not been explored. Based on an error analysis of a DEM, multiple error realizations of the DEM were created and used to develop both direct and indirect environmental variables for input to predictive habitat models. The study explores the effects of DEM error and the resultant uncertainty of results on typical steps in the modeling procedure for prediction of vegetation species presence/absence. Results indicate that all of these steps and results, including the statistical significance of environmental variables, shapes of species response curves in generalized additive models (GAMs), stepwise model selection, coefficients and standard errors for generalized linear models (GLMs), prediction accuracy (Cohen's kappa and AUC), and spatial extent of predictions, were greatly affected by this type of error. Error in the DEM can affect the reliability of interpretations of model results and level of accuracy in predictions, as well as the spatial extent of the predictions. We suggest that the sensitivity of DEM-derived environmental variables to error in the DEM should be considered before including them in the modeling processes.

  11. Development of an RTK-GPS Positioning Application with an Improved Position Error Model for Smartphones

    Directory of Open Access Journals (Sweden)

    Dongha Lee

    2012-09-01

    Full Text Available This study developed a smartphone application that provides wireless communication, NTRIP client, and RTK processing features, and which can simplify the Network RTK-GPS system while reducing the required cost. A determination method for an error model in Network RTK measurements was proposed, considering both random and autocorrelation errors, to accurately calculate the coordinates measured by the application using state estimation filters. The performance evaluation of the developed application showed that it could perform high-precision real-time positioning, within several centimeters of error range at a frequency of 20 Hz. A Kalman Filter was applied to the coordinates measured from the application, to evaluate the appropriateness of the determination method for an error model, as proposed in this study. The results were more accurate, compared with those of the existing error model, which only considered the random error.

  12. Development of an RTK-GPS positioning application with an improved position error model for smartphones.

    Science.gov (United States)

    Hwang, Jinsang; Yun, Hongsik; Suh, Yongcheol; Cho, Jeongho; Lee, Dongha

    2012-09-25

    This study developed a smartphone application that provides wireless communication, NTRIP client, and RTK processing features, and which can simplify the Network RTK-GPS system while reducing the required cost. A determination method for an error model in Network RTK measurements was proposed, considering both random and autocorrelation errors, to accurately calculate the coordinates measured by the application using state estimation filters. The performance evaluation of the developed application showed that it could perform high-precision real-time positioning, within several centimeters of error range at a frequency of 20 Hz. A Kalman Filter was applied to the coordinates measured from the application, to evaluate the appropriateness of the determination method for an error model, as proposed in this study. The results were more accurate, compared with those of the existing error model, which only considered the random error.
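
    A common way to let a Kalman filter honour both random and autocorrelated (AR(1)) measurement errors is state augmentation; the minimal sketch below is our illustration of that idea, not the filter or error parameters actually used in the paper.

      import numpy as np

      def kalman_ar1(z, q_pos=1e-4, phi=0.9, q_ar=1e-3, r_white=1e-3):
          # Kalman filter for a (static) position observed with white noise plus an
          # AR(1) error, handled by augmenting the state: x = [position, ar1_error].
          F = np.array([[1.0, 0.0], [0.0, phi]])
          Q = np.diag([q_pos, q_ar])
          H = np.array([[1.0, 1.0]])             # measurement sees position + AR(1) error
          R = np.array([[r_white]])
          x, P, out = np.zeros(2), np.eye(2), []
          for zk in z:
              x, P = F @ x, F @ P @ F.T + Q                       # predict
              S = H @ P @ H.T + R
              K = P @ H.T @ np.linalg.inv(S)
              x = x + (K @ (np.atleast_1d(zk) - H @ x)).ravel()   # update
              P = (np.eye(2) - K @ H) @ P
              out.append(x[0])
          return np.array(out)

      rng = np.random.default_rng(4)
      n, truth, ar = 500, 10.0, np.zeros(500)
      for k in range(1, n):
          ar[k] = 0.9 * ar[k - 1] + rng.normal(0.0, 0.03)
      z = truth + ar + rng.normal(0.0, 0.03, n)
      print(abs(kalman_ar1(z)[-100:].mean() - truth))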

  13. Sensitivity to Estimation Errors in Mean-variance Models

    Institute of Scientific and Technical Information of China (English)

    Zhi-ping Chen; Cai-e Zhao

    2003-01-01

    In order to give a complete and accurate description of the sensitivity of efficient portfolios to changes in assets' expected returns, variances and covariances, the joint effect of estimation errors in means, variances and covariances on the efficient portfolio's weights is investigated in this paper. It is proved that the efficient portfolio's composition is a Lipschitz continuous, differentiable mapping of these parameters under suitable conditions. The change rate of the efficient portfolio's weights with respect to variations in risk-return estimations is derived by estimating the Lipschitz constant. Our general quantitative results show that the efficient portfolio's weights are normally not so sensitive to estimation errors about means and variances. Moreover, we point out those extreme cases which might cause stability problems and how to avoid them in practice. Preliminary numerical results are also provided as an illustration to our theoretical results.
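
    The kind of sensitivity experiment the paper quantifies analytically can also be reproduced numerically, as below; portfolio weights are recomputed under small perturbations of the expected returns (all numbers are illustrative, and the closed-form weights used here are the standard unconstrained mean-variance solution, not the paper's derivation).

      import numpy as np

      def mv_weights(mu, cov):
          # Unconstrained mean-variance (tangency-style) weights, normalised to sum to one.
          w = np.linalg.solve(cov, mu)
          return w / w.sum()

      mu = np.array([0.06, 0.07, 0.05])
      cov = np.array([[0.040, 0.006, 0.004],
                      [0.006, 0.030, 0.005],
                      [0.004, 0.005, 0.020]])
      w0 = mv_weights(mu, cov)

      rng = np.random.default_rng(5)
      perturbed = [mv_weights(mu + rng.normal(0.0, 0.005, 3), cov) for _ in range(1000)]
      print("base weights:           ", np.round(w0, 3))
      print("weight std under errors:", np.round(np.std(perturbed, axis=0), 3))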

  14. Global identifiability of linear structural equation models

    CERN Document Server

    Drton, Mathias; Sullivant, Seth

    2010-01-01

    Structural equation models are multivariate statistical models that are defined by specifying noisy functional relationships among random variables. We consider the classical case of linear relationships and additive Gaussian noise terms. We give a necessary and sufficient condition for global identifiability of the model in terms of a mixed graph encoding the linear structural equations and the correlation structure of the error terms. Global identifiability is understood to mean injectivity of the parametrization of the model and is fundamental in particular for applicability of standard statistical methodology.

  15. Analysis of errors in spectral reconstruction with a Laplace transform pair model

    Energy Technology Data Exchange (ETDEWEB)

    Archer, B.R.; Bushong, S.C. (Baylor Univ., Houston, TX (USA). Coll. of Medicine); Wagner, L.K. (Texas Univ., Houston (USA). Dept. of Radiology); Johnston, D.A.; Almond, P.R. (Anderson (M.D.) Hospital and Tumor Inst., Houston, TX (USA))

    1985-05-01

    The sensitivity of a Laplace transform pair model for spectral reconstruction to random errors in attenuation measurements of diagnostic x-ray units has been investigated. No spectral deformation or significant alteration resulted from the simulated attenuation errors. It is concluded that the range of spectral uncertainties to be expected from the application of this model is acceptable for most scientific applications.

  16. Modeling Distance and Bandwidth Dependency of TOA-Based UWB Ranging Error for Positioning

    NARCIS (Netherlands)

    Bellusci, G.; Janssen, G.J.M.; Yan, J.; Tiberius, C.C.J.M.

    2009-01-01

    A statistical model for the range error provided by TOA estimation using UWB signals is given, based on UWB channel measurements between 3.1 and 10.6 GHz. The range error has been modeled as a Gaussian random variable for LOS and as a combination of a Gaussian and an exponential random variable for
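
    Such an error model is easy to exercise in simulation; the sketch below samples ranging errors as a zero-mean Gaussian for LOS and as a Gaussian plus a positive exponential excess delay for NLOS, with purely illustrative standard deviations and scale.

      import numpy as np

      def sample_range_error(n, los=True, sigma=0.05, scale=0.30, rng=None):
          # TOA ranging errors in metres: Gaussian for LOS; for NLOS an additional
          # exponentially distributed (always positive) excess-delay bias is added.
          rng = rng if rng is not None else np.random.default_rng()
          err = rng.normal(0.0, sigma, n)
          if not los:
              err += rng.exponential(scale, n)
          return err

      rng = np.random.default_rng(6)
      print(sample_range_error(5, los=True, rng=rng))
      print(sample_range_error(5, los=False, rng=rng))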

  17. On the Influence of Weather Forecast Errors in Short-Term Load Forecasting Models

    OpenAIRE

    Fay, D; Ringwood, John; Condon, M.

    2004-01-01

    Weather information is an important factor in load forecasting models. This weather information usually takes the form of actual weather readings. However, online operation of load forecasting models requires the use of weather forecasts, with associated weather forecast errors. A technique is proposed to model weather forecast errors to reflect current accuracy. A load forecasting model is then proposed which combines the forecasts of several load forecasting models. This approach allows the...

  18. Probability-Based Diagnostic Imaging Technique Using Error Functions for Active Structural Health Monitoring

    Directory of Open Access Journals (Sweden)

    Rahim Gorgin

    2014-07-01

    Full Text Available This study presents a novel probability-based diagnostic imaging (PDI) technique using error functions for active structural health monitoring (SHM). To achieve this, first the changes between baseline and current signals of each sensing path are measured, and by taking the root mean square of such changes, the energy of the scattered signal at different times can be calculated. Then, for different pairs of signal acquisition paths, an error function based on the energy of the scattered signals is introduced. Finally, the resultant error function is fused into the final estimation of the probability of damage presence in the monitoring area. As for applications, the developed methods were employed in various damage identification cases, including cracks located in regions among an active sensor network with different configurations (pulse-echo and pitch-catch), and holes located in regions outside the active network sensors with the pitch-catch configuration. The results identified using experimental Lamb wave signals at different central frequencies corroborated that the developed PDI technique using error functions is capable of monitoring structural damage, regardless of its shape, size and location. The developed method does not need direct interpretation of overlaid and dispersed Lamb wave components for damage identification and can monitor damage located anywhere in the structure. These advantages qualify the presented PDI method for online structural health monitoring.
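
    The first step of the technique, turning the baseline/current signal change of one sensing path into a scattered-signal energy via the root mean square, can be sketched as follows (synthetic signals, not the experimental Lamb-wave data):

      import numpy as np

      def scattered_energy(baseline, current):
          # Root mean square of the change between baseline and current signals of one
          # sensing path, used as the energy of the scattered signal.
          d = np.asarray(current, float) - np.asarray(baseline, float)
          return np.sqrt(np.mean(d ** 2))

      # toy signals: the 'current' signal carries a small extra scattered wave packet
      t = np.linspace(0.0, 1e-4, 2000)
      baseline = np.sin(2 * np.pi * 3e5 * t) * np.exp(-((t - 3e-5) / 1e-5) ** 2)
      scatter = 0.1 * np.sin(2 * np.pi * 3e5 * t) * np.exp(-((t - 6e-5) / 1e-5) ** 2)
      print(scattered_energy(baseline, baseline + scatter))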

  19. Empirical analysis and modeling of errors of atmospheric profiles from GPS radio occultation

    Directory of Open Access Journals (Sweden)

    U. Foelsche

    2011-09-01

    Full Text Available The utilization of radio occultation (RO) data in atmospheric studies requires precise knowledge of error characteristics. We present results of an empirical error analysis of GPS RO bending angle, refractivity, dry pressure, dry geopotential height, and dry temperature. We find very good agreement between data characteristics of different missions (CHAMP, GRACE-A, and Formosat-3/COSMIC (F3C)). In the global mean, observational errors (standard deviation from "true" profiles at mean tangent point location) agree within 0.3% in bending angle, 0.1% in refractivity, and 0.2 K in dry temperature at all altitude levels between 4 km and 35 km. Above 35 km the increase of the CHAMP raw bending angle observational error is more pronounced than that of GRACE-A and F3C leading to a larger observational error of about 1% at 42 km. Above ≈20 km, the observational errors show a strong seasonal dependence at high latitudes. Larger errors occur in hemispheric wintertime and are associated mainly with background data used in the retrieval process, particularly under conditions when the ionospheric residual is large. The comparison between UCAR and WEGC results (both data centers have independent inversion processing chains) reveals different magnitudes of observational errors in atmospheric parameters, which are attributable to different background fields used. Based on the empirical error estimates, we provide a simple analytical error model for GPS RO atmospheric parameters for the altitude range of 4 km to 35 km and up to 50 km for UCAR raw bending angle and refractivity. In the model, which accounts for vertical, latitudinal, and seasonal variations, a constant error is adopted around the tropopause region amounting to 0.8% for bending angle, 0.35% for refractivity, 0.15% for dry pressure, 10 m for dry geopotential height, and 0.7 K for dry temperature. Below this region the observational error increases following an inverse height power-law and above it increases

  20. Error Modeling and Compensation of Circular Motion on a New Circumferential Drilling System

    Directory of Open Access Journals (Sweden)

    Qiang Fang

    2015-01-01

    Full Text Available A new flexible circumferential drilling system is proposed to drill in the fuselage docking area. To analyze the influence of the circular motion error on the drilling accuracy, the nominal forward kinematic model is derived using the Denavit-Hartenberg (D-H) method, and this model is further developed to model the kinematic errors caused by the circular positioning error and the synchronization error using homogeneous transformation matrices (HTM). A laser tracker is utilized to measure the circular motion error of the two measurement points at both sides. A circular motion compensation experiment is implemented according to the calculated positioning error and synchronization error. Experimental results show that the positioning error and synchronization error were reduced by 65.0% and 58.8%, respectively, due to the adopted compensation, and therefore the circular motion accuracy is substantially improved. Finally, position errors of the two measurement points are analyzed to have little influence on the measurement result and the validity of the proposed compensation method is proved.

  1. Error budget analysis of SCIAMACHY limb ozone profile retrievals using the SCIATRAN model

    Directory of Open Access Journals (Sweden)

    N. Rahpoe

    2013-10-01

    Full Text Available A comprehensive error characterization of SCIAMACHY (Scanning Imaging Absorption Spectrometer for Atmospheric CHartographY) limb ozone profiles has been established based upon SCIATRAN transfer model simulations. The study was carried out in order to evaluate the possible impact of parameter uncertainties, e.g. in albedo, stratospheric aerosol optical extinction, temperature, pressure, pointing, and ozone absorption cross section on the limb ozone retrieval. Together with the a posteriori covariance matrix available from the retrieval, total random and systematic errors are defined for SCIAMACHY ozone profiles. Main error sources are the pointing errors, errors in the knowledge of stratospheric aerosol parameters, and cloud interference. Systematic errors are of the order of 7%, while the random error amounts to 10–15% for most of the stratosphere. These numbers can be used for the interpretation of instrument intercomparison and validation of the SCIAMACHY V 2.5 limb ozone profiles in a rigorous manner.

  2. MODELING AND COMPENSATION TECHNIQUE FOR THE GEOMETRIC ERRORS OF FIVE-AXIS CNC MACHINE TOOLS

    Institute of Scientific and Technical Information of China (English)

    2003-01-01

    One of the important trends in precision machining is the development of real-time error compensation techniques. Error compensation for multi-axis CNC machine tools is very difficult and attractive. Modeling of the geometric error of five-axis CNC machine tools based on multi-body systems is proposed, and the key technique of the compensation, identifying the geometric error parameters, is developed. The simulation of cutting a workpiece to verify the modeling based on multi-body systems is also considered.

  3. Integration of Error Compensation of Coordinate Measuring Machines into Feature Measurement: Part I—Model Development

    Directory of Open Access Journals (Sweden)

    Roque Calvo

    2016-09-01

    Full Text Available The development of an error compensation model for coordinate measuring machines (CMMs) and its integration into feature measurement is presented. CMMs are widespread and dependable instruments in industry and laboratories for dimensional measurement. From the tip probe sensor to the machine display, there is a complex transformation of probed point coordinates through the geometrical feature model that makes the assessment of accuracy and uncertainty of measurement results difficult. Therefore, error compensation is not standardized, unlike for other, simpler instruments. Detailed coordinate error compensation models are generally based on the CMM as a rigid body and require a detailed mapping of the CMM's behavior. In this paper a new type of error compensation model is proposed. It evaluates the error from the vectorial composition of length error by axis and its integration into the geometrical measurement model. The variability not explained by the model is incorporated into the uncertainty budget. Model parameters are analyzed and linked to the geometrical errors and uncertainty of the CMM response. Next, the outstanding measurement models of flatness, angle, and roundness are developed. The proposed models are useful for measurement improvement with easy integration into CMM signal processing, in particular in industrial environments where built-in solutions are sought. A battery of implementation tests is presented in Part II, where the experimental endorsement of the model is included.

  4. Phase Error Modeling and Its Impact on Precise Orbit Determination of GRACE Satellites

    Directory of Open Access Journals (Sweden)

    Jia Tu

    2012-01-01

    Full Text Available Limiting factors for the precise orbit determination (POD) of low-earth orbit (LEO) satellites using dual-frequency GPS are nowadays mainly encountered in the in-flight phase error modeling. The phase error is modeled as a systematic and a random component, each depending on the direction of GPS signal reception. The systematic part and the standard deviation of the random part in the phase error model are, respectively, estimated by bin-wise mean and standard deviation values of phase postfit residuals computed by orbit determination. By removing the systematic component and adjusting the weight of the phase observation data according to the standard deviation of the random component, the orbit can be further improved by the POD approach. The GRACE data of 1–31 January 2006 are processed, and three types of orbit solutions, POD without phase error model correction, POD with mean value correction of the phase error model, and POD with phase error model correction, are obtained. The three-dimensional (3D) orbit improvements derived from the phase error model correction are 0.0153 m for GRACE A and 0.0131 m for GRACE B, and the 3D influences arising from the random part of the phase error model are 0.0068 m and 0.0075 m for GRACE A and GRACE B, respectively. Thus the random part of the phase error model cannot be neglected for POD. It is also demonstrated by phase postfit residual analysis, orbit comparison with the JPL precise science orbit, and orbit validation with KBR data that the results derived from POD with phase error model correction are better than the other two types of orbit solutions generated in this paper.
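
    The bin-wise estimation of the systematic and random parts can be sketched as below; here the residuals are binned by elevation angle only and the data are synthetic, whereas the paper bins by the full direction of GPS signal reception.

      import numpy as np

      def binwise_phase_error(elev_deg, residuals, bin_width=5.0):
          # Per-elevation-bin mean (systematic part) and standard deviation (random part)
          # of phase post-fit residuals.
          elev = np.asarray(elev_deg, float)
          res = np.asarray(residuals, float)
          edges = np.arange(0.0, 90.0 + bin_width, bin_width)
          means, stds = [], []
          for lo, hi in zip(edges[:-1], edges[1:]):
              sel = (elev >= lo) & (elev < hi)
              means.append(res[sel].mean() if sel.any() else np.nan)
              stds.append(res[sel].std() if sel.any() else np.nan)
          return edges, np.array(means), np.array(stds)

      rng = np.random.default_rng(8)
      elev = rng.uniform(5.0, 85.0, 5000)
      res = 0.002 * np.cos(np.radians(elev)) + rng.normal(0.0, 0.003, 5000)   # metres
      edges, mean_sys, std_rand = binwise_phase_error(elev, res)
      print(np.round(mean_sys * 1000, 2))    # systematic part per bin, in mm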

  5. Detection of overlay error in double patterning gratings using phase-structured illumination.

    Science.gov (United States)

    Peterhänsel, Sandy; Gödecke, Maria Laura; Paz, Valeriano Ferreras; Frenner, Karsten; Osten, Wolfgang

    2015-09-21

    With the help of simulations we study the benefits of using coherent, phase-structured illumination to detect the overlay error in resist gratings fabricated by double patterning. Evaluating the intensity and phase distribution along the focused spot of a high numerical aperture microscope, the capability of detecting magnitude and direction of overlay errors in the range of a few nanometers is investigated for a wide range of gratings. Furthermore, two measurement approaches are presented and tested for their reliability in the presence of white Gaussian noise.

  6. Numerical study of an error model for a strap-down INS

    Science.gov (United States)

    Grigorie, T. L.; Sandu, D. G.; Corcau, C. L.

    2016-10-01

    The paper presents a numerical study related to a mathematical error model developed for a strap-down inertial navigation system. The study aims to validate the error model by using Matlab/Simulink software models implementing the inertial navigator and the error model mathematics. To generate the inputs for the evaluation Matlab/Simulink software, software models of the inertial sensors are used. The sensor models were developed based on the IEEE equivalent models for the inertial sensors and on the analysis of the data sheets of real inertial sensors. The paper successively presents the inertial navigation equations (attitude, position and speed), the mathematics of the inertial navigator error model, the software implementations, and the numerical evaluation results.

  7. Thermal Error Modelling of the Spindle Using Neurofuzzy Systems

    OpenAIRE

    Jingan Feng; Xiaoqi Tang; Yanlei Li; Bao Song

    2016-01-01

    This paper proposes a new combined model to predict the spindle deformation, which combines the grey models and the ANFIS (adaptive neurofuzzy inference system) model. The grey models are used to preprocess the original data, and the ANFIS model is used to adjust the combined model. The outputs of the grey models are used as the inputs of the ANFIS model to train the model. To evaluate the performance of the combined model, an experiment is implemented. Three Pt100 thermal resistances are use...

  8. OOK power model based dynamic error testing for smart electricity meter

    Science.gov (United States)

    Wang, Xuewei; Chen, Jingxia; Yuan, Ruiming; Jia, Xiaolu; Zhu, Meng; Jiang, Zhenyu

    2017-02-01

    This paper formulates the dynamic error testing problem for a smart meter, with consideration and investigation of both the testing signal and the dynamic error testing method. To solve the dynamic error testing problems, the paper establishes an on-off-keying (OOK) testing dynamic current model and an OOK testing dynamic load energy (TDLE) model. Then two types of TDLE sequences and three modes of OOK testing dynamic power are proposed. In addition, a novel algorithm, which helps to solve the problem of dynamic electric energy measurement's traceability, is derived for dynamic errors. Based on the above research, OOK TDLE sequence generation equipment is developed and a dynamic error testing system is constructed. Using the testing system, five kinds of meters were tested in the three dynamic power modes. The test results show that the dynamic error is closely related to the dynamic power mode and the measurement uncertainty is 0.38%.

  9. Addressing Conceptual Model Uncertainty in the Evaluation of Model Prediction Errors

    Science.gov (United States)

    Carrera, J.; Pool, M.

    2014-12-01

    Model predictions are uncertain because of errors in model parameters, future forcing terms, and model concepts. The latter remain the largest and most difficult to assess source of uncertainty in long term model predictions. We first review existing methods to evaluate conceptual model uncertainty. We argue that they are highly sensitive to the ingenuity of the modeler, in the sense that they rely on the modeler's ability to propose alternative model concepts. Worse, we find that the standard practice of stochastic methods leads to poor, potentially biased and often too optimistic, estimation of actual model errors. This is bad news because stochastic methods are purported to properly represent uncertainty. We contend that the problem does not lie in the stochastic approach itself, but in the way it is applied. Specifically, stochastic inversion methodologies, which demand quantitative information, tend to ignore geological understanding, which is conceptually rich. We illustrate some of these problems with the application to the Mar del Plata aquifer, where extensive data are available for nearly a century. Geologically based models, where spatial variability is handled through zonation, yield calibration fits similar to geostatistically based models, but much better predictions. In fact, the appearance of the stochastic T fields is similar to the geologically based models only in areas with a high density of data. We take this finding to illustrate the ability of stochastic models to accommodate many data, but also, ironically, their inability to address conceptual model uncertainty. In fact, stochastic model realizations tend to be too close to the "most likely" one (i.e., they do not really realize the full conceptual uncertainty). The second part of the presentation is devoted to arguing that acknowledging model uncertainty may lead to qualitatively different decisions than just working with "most likely" model predictions. Therefore, efforts should concentrate on

  10. Unravelling the Sources of Climate Model Errors in Subpolar Gyre Sea-Surface Temperatures

    Science.gov (United States)

    Rubino, Angelo; Zanchettin, Davide

    2017-04-01

    Climate model biases are systematic errors affecting geophysical quantities simulated by coupled general circulation models and Earth system models against observational targets. In this regard, biases affecting sea-surface temperatures (SSTs) are a major concern due to the crucial role of SST in the dynamical coupling between the atmosphere and the ocean, and for the associated variability. Strong SST biases can be detrimental for the overall quality of historical climate simulations, they contribute to uncertainty in simulated features of climate scenarios and complicate initialization and assessment of decadal climate prediction experiments. We use a dynamic linear model developed within a Bayesian hierarchical framework for a probabilistic assessment of spatial and temporal characteristics of SST errors in ensemble climate simulations. In our formulation, the statistical model distinguishes between local and regional errors, further separated into seasonal and non-seasonal components. This contribution, based on a framework developed for the study of biases in the Tropical Atlantic in the frame of the European project PREFACE, focuses on the subpolar gyre region in the North Atlantic Ocean, where climate models are typically affected by a strong cold SST bias. We will use results from an application of our statistical model to an ensemble of hindcasts with the MiKlip prototype system for decadal climate predictions to demonstrate how the decadal evolution of model errors toward the subpolar gyre cold bias is substantially shaped by a seasonal signal. We will demonstrate that such a seasonal signal stems from the superposition of propagating large-scale seasonal errors originated in the Labrador Sea and of large-scale as well as mesoscale seasonal errors originated along the Gulf Stream. Based on these results, we will discuss how pronounced distinctive characteristics of the different error components distinguished by our model allow for a clearer connection

  11. A New Method for Identifying the Model Error of Adjustment System

    Institute of Scientific and Technical Information of China (English)

    TAO Benzao; ZHANG Chaoyu

    2005-01-01

    Some theoretical problems affecting parameter estimation are discussed in this paper. The influence of, and transformation between, errors of the stochastic and functional models is pointed out as well. For choosing the best adjustment model, a formula for estimating and identifying the model error, which differs from the existing methods in the literature, is proposed. On the basis of the proposed formula, an effective approach for selecting the best model of the adjustment system is given.

  12. Removing Specification Errors from the Usual Formulation of Binary Choice Models

    Directory of Open Access Journals (Sweden)

    P.A.V.B. Swamy

    2016-06-01

    Full Text Available We develop a procedure for removing four major specification errors from the usual formulation of binary choice models. The model that results from this procedure is different from the conventional probit and logit models. This difference arises as a direct consequence of our relaxation of the usual assumption that omitted regressors constituting the error term of a latent linear regression model do not introduce omitted regressor biases into the coefficients of the included regressors.

  13. General expression of double ellipsoidal heat source model and its error analysis

    Institute of Scientific and Technical Information of China (English)

    2008-01-01

    In order to analyze the maximum power density error for different heat flux distribution parameter values in the double ellipsoidal heat source model, a general expression of the double ellipsoidal heat source model was derived from the Goldak double ellipsoidal heat source model, and the error of the maximum power density was analyzed on this basis. The calculation error of the thermal cycling parameters caused by the maximum power density error was compared quantitatively by numerical simulation. The results show that, to guarantee the accuracy of welding numerical simulation, it is better to introduce an error correction coefficient into the Goldak double ellipsoidal heat source model expression. Moreover, the heat flux distribution parameter should take a higher value for higher power density welding methods.
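
    For reference, the Goldak double-ellipsoid power density and its maximum (at the origin) can be written as below; the welding parameters are illustrative, and the error-correction coefficient discussed in the paper is not included.

      import numpy as np

      def goldak_q(x, y, z, Q, a, b, c, f):
          # Goldak double-ellipsoid heat source power density for one half (front or rear);
          # use (a_f, f_f) and (a_r, f_r) with f_f + f_r = 2 for the full source.
          # Q: effective power (W); a, b, c: semi-axes (m); f: heat fraction factor.
          coeff = 6.0 * np.sqrt(3.0) * f * Q / (a * b * c * np.pi * np.sqrt(np.pi))
          return coeff * np.exp(-3 * x**2 / a**2 - 3 * y**2 / b**2 - 3 * z**2 / c**2)

      Q = 0.75 * 200.0 * 24.0     # efficiency * current * voltage, illustrative values
      # maximum power density sits at the origin; compare two distribution-parameter choices
      print(goldak_q(0.0, 0.0, 0.0, Q, a=0.004, b=0.003, c=0.002, f=0.6))
      print(goldak_q(0.0, 0.0, 0.0, Q, a=0.005, b=0.003, c=0.002, f=0.6))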

  14. Modeling and Experimental Study of Soft Error Propagation Based on Cellular Automaton

    OpenAIRE

    2016-01-01

    Aiming to estimate SEE soft error performance of complex electronic systems, a soft error propagation model based on cellular automaton is proposed and an estimation methodology based on circuit partitioning and error propagation is presented. Simulations indicate that different fault grade jamming and different coupling factors between cells are the main parameters influencing the vulnerability of the system. Accelerated radiation experiments have been developed to determine the main paramet...

  15. The impact of modelling errors on interferometer calibration for 21 cm power spectra

    Science.gov (United States)

    Ewall-Wice, Aaron; Dillon, Joshua S.; Liu, Adrian; Hewitt, Jacqueline

    2017-09-01

    We study the impact of sky-based calibration errors from source mismodelling on 21 cm power spectrum measurements with an interferometer and propose a method for suppressing their effects. While emission from faint sources that are not accounted for in calibration catalogues is believed to be spectrally smooth, deviations of true visibilities from model visibilities are not, due to the inherent chromaticity of the interferometer's sky response (the 'wedge'). Thus, unmodelled foregrounds, below the confusion limit of many instruments, introduce frequency structure into gain solutions on the same line-of-sight scales on which we hope to observe the cosmological signal. We derive analytic expressions describing these errors using linearized approximations of the calibration equations and estimate the impact of this bias on measurements of the 21 cm power spectrum during the epoch of reionization. Given our current precision in primary beam and foreground modelling, this noise will significantly impact the sensitivity of existing experiments that rely on sky-based calibration. Our formalism describes the scaling of calibration with array and sky-model parameters and can be used to guide future instrument design and calibration strategy. We find that sky-based calibration that downweights long baselines can eliminate contamination in most of the region outside of the wedge with only a modest increase in instrumental noise.

  16. Gossip quantity and network structure can counterbalance errors in transmission and promote the evolution of cooperation

    CERN Document Server

    Giardini, Francesca

    2016-01-01

    There are several mechanisms in human societies that help support cooperation. Gossiping, for instance, allows defectors to be identified, who can then be punished via ostracism. However, the evolutionary stability of gossip might be undermined by the fact that it is more error-prone than direct observation, whereas ostracism could be ineffective if the partner selection mechanism is not robust. The aim of this work is to investigate the conditions under which the combination of gossip and ostracism might support cooperation in groups of different sizes. We are also interested in exploring the extent to which errors in transmission might undermine the reliability of gossip as a mechanism for identifying defectors. Our results show that a large quantity of gossip is necessary to support cooperation, and that group structure can mitigate the effects of errors in transmission.

  17. Macroscopic model and truncation error of discrete Boltzmann method

    Science.gov (United States)

    Hwang, Yao-Hsin

    2016-10-01

    A derivation procedure to secure the macroscopically equivalent equation and its truncation error for the discrete Boltzmann method is proffered in this paper. Essential presumptions of two time scales and a small parameter in the Chapman-Enskog expansion are dispensed with in the present formulation. The equilibrium particle distribution function, instead of its original non-equilibrium form, is chosen as the key variable in the derivation route. Taylor series expansion encompassing fundamental algebraic manipulations is adequate to realize the macroscopic differential counterpart. A self-contained and comprehensive practice for the linear one-dimensional convection-diffusion equation is illustrated in detail. Numerical validations of the incurred truncation error in one- and two-dimensional cases with various distribution functions are conducted to verify the present formulation. As shown in the computational results, excellent agreement between numerical results and theoretical predictions is found in the test problems. Straightforward extensions to more complicated systems, including convection-diffusion-reaction, multi-relaxation times in the collision operator, as well as multi-dimensional Navier-Stokes equations, are also exposed in the Appendix to point out its expediency in solving complicated flow problems.

  18. Structural model integrity

    Science.gov (United States)

    Wallerstein, D. V.; Lahey, R. S.; Haggenmacher, G. W.

    1977-01-01

    Many of the practical aspects and problems of ensuring the integrity of a structural model are discussed, as well as the steps which have been taken in the NASTRAN system to assure that these checks can be routinely performed. Model integrity as used applies not only to the structural model but also to the loads applied to the model. Emphasis is also placed on the fact that when dealing with substructure analysis, all of the checking procedures discussed should be applied at the lowest level of substructure prior to any coupling.

  19. Maneuver Performance Assessment of the Cassini Spacecraft Through Execution-Error Modeling and Analysis

    Science.gov (United States)

    Wagner, Sean

    2014-01-01

    The Cassini spacecraft has executed nearly 300 maneuvers since 1997, providing ample data for execution-error model updates. With maneuvers through 2017, opportunities remain to improve on the models and remove biases identified in maneuver executions. This manuscript focuses on how execution-error models can be used to judge maneuver performance, while providing a means for detecting performance degradation. Additionally, this paper describes Cassini's execution-error model updates in August 2012. An assessment of Cassini's maneuver performance through OTM-368 on January 5, 2014 is also presented.

  20. Continuous-Discrete Time Prediction-Error Identification Relevant for Linear Model Predictive Control

    DEFF Research Database (Denmark)

    Jørgensen, John Bagterp; Jørgensen, Sten Bay

    2007-01-01

    A prediction-error method tailored for model-based predictive control is presented. The prediction-error methods studied are based on predictions using the Kalman filter and Kalman predictors for a linear discrete-time stochastic state-space model. The linear discrete-time stochastic state-space model is realized from a continuous-discrete-time linear stochastic system specified using transfer functions with time delays. It is argued that the prediction-error criterion should be selected such that it is compatible with the objective function of the predictive controller in which the model...

  1. Maneuver Performance Assessment of the Cassini Spacecraft Through Execution-Error Modeling and Analysis

    Science.gov (United States)

    Wagner, Sean

    2014-01-01

    The Cassini spacecraft has executed nearly 300 maneuvers since 1997, providing ample data for execution-error model updates. With maneuvers through 2017, opportunities remain to improve on the models and remove biases identified in maneuver executions. This manuscript focuses on how execution-error models can be used to judge maneuver performance, while providing a means for detecting performance degradation. Additionally, this paper describes Cassini's execution-error model updates in August 2012. An assessment of Cassini's maneuver performance through OTM-368 on January 5, 2014 is also presented.

  2. Model of Head-Positioning Error Due to Rotational Vibration of Hard Disk Drives

    Science.gov (United States)

    Matsuda, Yasuhiro; Yamaguchi, Takashi; Saegusa, Shozo; Shimizu, Toshihiko; Hamaguchi, Tetsuya

    An analytical model of head-positioning error due to rotational vibration of a hard disk drive is proposed. The model takes into account the rotational vibration of the base plate caused by the reaction force of the head-positioning actuator, the relationship between the rotational vibration and head-track offset, and the sensitivity function of track-following feedback control. Error calculated by the model agrees well with measured error. It is thus concluded that this model can predict the data transfer performance of a disk drive in read mode.
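
    A generic sketch of how the three ingredients named above combine; the symbols are illustrative and not the paper's notation: a base-plate rotational vibration, excited by the actuator reaction force, maps to a head-track offset, and the track-following loop attenuates that offset through its sensitivity function.

```latex
% theta(s): base-plate rotational vibration excited by the actuator reaction
% force; d(s) = k_theta * theta(s): resulting head-track offset; S(s):
% sensitivity function of the track-following feedback loop.
\begin{equation}
  e(s) \;=\; S(s)\, d(s)
       \;=\; \frac{k_{\theta}\,\theta(s)}{1 + P(s)\,C(s)} ,
\end{equation}
% with P(s) the actuator/plant and C(s) the feedback controller, so the
% residual positioning error e inherits the vibration spectrum shaped by S.
```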

  3. Artificial Error Tuning Based on Design a Novel SISO Fuzzy Backstepping Adaptive Variable Structure Control

    Directory of Open Access Journals (Sweden)

    Samaneh Zahmatkesh

    2013-10-01

    Full Text Available This paper examines a single-input single-output (SISO) chattering-free variable structure control (VSC) whose controller coefficients are tuned online by a fuzzy backstepping algorithm to control a continuum robot manipulator. The variable structure methodology is selected as a framework to construct the control law and to address the stability and robustness of the closed-loop system based on a Lyapunov formulation. The main goal is to guarantee an acceptable error result and to adjust the trajectory following. The proposed approach combines the design technique of a Lyapunov-based variable structure controller with a modified proportional-plus-derivative (P+D) fuzzy estimator that estimates the nonlinearity of the undefined system dynamics in the backstepping controller. The inputs represent the variable structure function, the error, and the modified rate of error; the outputs represent the joint torques. The fuzzy backstepping methodology tunes the variable structure function online based on an adaptive methodology. The performance of the SISO VSC tuned online by the fuzzy backstepping algorithm (FBSAVSC) is validated through comparison with VSC. Simulation results show good trajectory-following performance in the presence of an uncertain joint torque load.

  4. Trans-dimensional matched-field geoacoustic inversion with hierarchical error models and interacting Markov chains.

    Science.gov (United States)

    Dettmer, Jan; Dosso, Stan E

    2012-10-01

    This paper develops a trans-dimensional approach to matched-field geoacoustic inversion, including interacting Markov chains to improve efficiency and an autoregressive model to account for correlated errors. The trans-dimensional approach and hierarchical seabed model allow inversion without assuming any particular parametrization by relaxing model specification to a range of plausible seabed models (e.g., in this case, the number of sediment layers is an unknown parameter). Data errors are addressed by sampling statistical error-distribution parameters, including correlated errors (covariance), by applying a hierarchical autoregressive error model. The well-known difficulty of low acceptance rates for trans-dimensional jumps is addressed with interacting Markov chains, resulting in a substantial increase in efficiency. The trans-dimensional seabed model and the hierarchical error model relax the degree of prior assumptions required in the inversion, resulting in substantially improved (more realistic) uncertainty estimates and a more automated algorithm. In particular, the approach gives seabed parameter uncertainty estimates that account for uncertainty due to prior model choice (layering and data error statistics). The approach is applied to data measured on a vertical array in the Mediterranean Sea.

  5. Error-preceding brain activity reflects (mal-)adaptive adjustments of cognitive control: a modeling study.

    Science.gov (United States)

    Steinhauser, Marco; Eichele, Heike; Juvodden, Hilde T; Huster, Rene J; Ullsperger, Markus; Eichele, Tom

    2012-01-01

    Errors in choice tasks are preceded by gradual changes in brain activity presumably related to fluctuations in cognitive control that promote the occurrence of errors. In the present paper, we use connectionist modeling to explore the hypothesis that these fluctuations reflect (mal-)adaptive adjustments of cognitive control. We considered ERP data from a study in which the probability of conflict in an Eriksen-flanker task was manipulated in sub-blocks of trials. Errors in these data were preceded by a gradual decline of N2 amplitude. After fitting a connectionist model of conflict adaptation to the data, we analyzed simulated N2 amplitude, simulated response times (RTs), and stimulus history preceding errors in the model, and found that the model produced the same pattern as obtained in the empirical data. Moreover, this pattern is not found in alternative models in which cognitive control varies randomly or in an oscillating manner. Our simulations suggest that the decline of N2 amplitude preceding errors reflects an increasing adaptation of cognitive control to specific task demands, which leads to an error when these task demands change. Taken together, these results provide evidence that error-preceding brain activity can reflect adaptive adjustments rather than unsystematic fluctuations of cognitive control, and therefore, that these errors are actually a consequence of the adaptiveness of human cognition.

  6. Data with hierarchical structure: impact of intraclass correlation and sample size on type-I error.

    Science.gov (United States)

    Musca, Serban C; Kamiejski, Rodolphe; Nugier, Armelle; Méot, Alain; Er-Rafiy, Abdelatif; Brauer, Markus

    2011-01-01

    Least squares analyses (e.g., ANOVAs, linear regressions) of hierarchical data lead to Type-I error rates that depart severely from the nominal Type-I error rate assumed. Thus, when least squares methods are used to analyze hierarchical data coming from designs in which some groups are assigned to the treatment condition, and others to the control condition (i.e., the widely used "groups nested under treatment" experimental design), the Type-I error rate is seriously inflated, leading too often to the incorrect rejection of the null hypothesis (i.e., the incorrect conclusion of an effect of the treatment). To highlight the severity of the problem, we present simulations showing how the Type-I error rate is affected under different conditions of intraclass correlation and sample size. For all simulations the Type-I error rate after application of the popular Kish (1965) correction is also considered, and the limitations of this correction technique are discussed. We conclude with suggestions on how one should collect and analyze data bearing a hierarchical structure.
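
    As a rough illustration of the inflation described above, the following minimal Python sketch estimates the Type-I error rate of an ordinary two-sample t-test applied to clustered ("groups nested under treatment") data while ignoring the grouping; the group sizes, intraclass correlations, and number of simulations are illustrative assumptions, not the settings of the published simulations.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def type1_error_rate(n_groups=10, group_size=20, icc=0.2, n_sims=2000, alpha=0.05):
    """Fraction of null data sets in which a naive t-test (ignoring the
    hierarchical structure) rejects at level alpha. There is no true
    treatment effect, so every rejection is a false positive."""
    var_between = icc            # variance of the group-level random effect
    var_within = 1.0 - icc       # residual variance (total variance = 1)
    rejections = 0
    for _ in range(n_sims):
        group_effects = rng.normal(0.0, np.sqrt(var_between), n_groups)
        y = (np.repeat(group_effects, group_size)
             + rng.normal(0.0, np.sqrt(var_within), n_groups * group_size))
        # half of the groups are labelled 'treatment', half 'control'
        treated = np.repeat(np.arange(n_groups) < n_groups // 2, group_size)
        _, p = stats.ttest_ind(y[treated], y[~treated])
        rejections += p < alpha
    return rejections / n_sims

print(type1_error_rate(icc=0.0))   # close to the nominal 0.05
print(type1_error_rate(icc=0.2))   # substantially above 0.05: inflated Type-I error
```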

  7. Data with hierarchical structure: impact of intraclass correlation and sample size on Type-I error

    Directory of Open Access Journals (Sweden)

    Serban C Musca

    2011-04-01

    Full Text Available Least squares analyses (e.g., ANOVAs, linear regressions) of hierarchical data lead to Type-I error rates that depart severely from the nominal Type-I error rate assumed. Thus, when least squares methods are used to analyze hierarchical data coming from designs in which some groups are assigned to the treatment condition, and others to the control condition (i.e., the widely used "groups nested under treatment" experimental design), the Type-I error rate is seriously inflated, leading too often to the incorrect rejection of the null hypothesis (i.e., the incorrect conclusion of an effect of the treatment). To highlight the severity of the problem, we present simulations showing how the Type-I error rate is affected under different conditions of intraclass correlation and sample size. For all simulations the Type-I error rate after application of the popular Kish (1965) correction is also considered, and the limitations of this correction technique are discussed. We conclude with suggestions on how one should collect and analyze data bearing a hierarchical structure.

  8. Thermal Error Modelling of the Spindle Using Neurofuzzy Systems

    Directory of Open Access Journals (Sweden)

    Jingan Feng

    2016-01-01

    Full Text Available This paper proposes a new combined model to predict the spindle deformation, which combines grey models and the ANFIS (adaptive neurofuzzy inference system) model. The grey models are used to preprocess the original data, and the ANFIS model is used to adjust the combined model. The outputs of the grey models are used as the inputs of the ANFIS model to train the model. To evaluate the performance of the combined model, an experiment is implemented. Three Pt100 thermal resistances are used to monitor the spindle temperature and an inductive current sensor is used to obtain the spindle deformation. The experimental results show that the combined model predicts the spindle deformation better than a BP network and can greatly improve the performance of the spindle.
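
    A minimal sketch of the GM(1,1) grey-model stage described above, assuming an illustrative spindle-temperature series; in the combined model the grey-model outputs would then be fed to an ANFIS regressor rather than used directly, and none of the values below come from the paper's experiment.

```python
import numpy as np

def gm11_fit_predict(x0, n_ahead=1):
    """Fit a GM(1,1) grey model to the series x0 and return the fitted values
    plus n_ahead extrapolated points, on the original (non-accumulated) scale."""
    x1 = np.cumsum(x0)                                 # accumulated generating operation
    z1 = 0.5 * (x1[1:] + x1[:-1])                      # mean sequence of consecutive x1 values
    B = np.column_stack([-z1, np.ones(len(z1))])
    a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]   # development and grey-input coefficients
    k = np.arange(len(x0) + n_ahead)
    x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a
    return np.diff(x1_hat, prepend=0.0)                # inverse accumulation

# illustrative spindle temperatures (deg C) sampled during warm-up
spindle_temp = np.array([21.3, 22.1, 23.0, 23.6, 24.1, 24.5])
print(gm11_fit_predict(spindle_temp, n_ahead=2))
```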

  9. Monte Carlo Euler approximations of HJM term structure financial models

    KAUST Repository

    Björk, Tomas

    2012-11-22

    We present Monte Carlo-Euler methods for a weak approximation problem related to the Heath-Jarrow-Morton (HJM) term structure model, based on Itô stochastic differential equations in infinite dimensional spaces, and prove strong and weak error convergence estimates. The weak error estimates are based on stochastic flows and discrete dual backward problems, and they can be used to identify different error contributions arising from time and maturity discretization as well as the classical statistical error due to finite sampling. Explicit formulas for efficient computation of sharp error approximation are included. Due to the structure of the HJM models considered here, the computational effort devoted to the error estimates is low compared to the work to compute Monte Carlo solutions to the HJM model. Numerical examples with known exact solution are included in order to show the behavior of the estimates. © 2012 Springer Science+Business Media Dordrecht.
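
    A minimal Monte Carlo-Euler sketch in the spirit of the experiments described above, assuming a one-factor HJM model with constant volatility and an illustrative initial forward curve (none of these choices are taken from the paper); the gap between the Monte Carlo bond price and the exact value from the initial curve combines the time-discretization and statistical errors that the paper's estimates separate.

```python
import numpy as np

rng = np.random.default_rng(1)

# One-factor HJM with constant volatility sigma; under the risk-neutral measure
# the no-arbitrage drift is mu(t, T) = sigma * int_t^T sigma du = sigma^2 (T - t).
sigma, T_max = 0.01, 2.0
n_steps = 104                          # weekly time steps over two years
dt = T_max / n_steps
maturities = np.linspace(0.0, T_max, n_steps + 1)
f0 = 0.03 + 0.005 * maturities         # assumed initial forward curve f(0, T)
n_paths = 20_000

f = np.tile(f0, (n_paths, 1))          # f(t, T) on the maturity grid, one row per path
int_r = np.zeros(n_paths)
for k in range(n_steps):
    t = k * dt
    int_r += f[:, k] * dt              # short rate r(t) = f(t, t) read off the shared grid
    drift = sigma**2 * np.clip(maturities - t, 0.0, None)
    dW = rng.normal(0.0, np.sqrt(dt), (n_paths, 1))
    f += drift * dt + sigma * dW       # Euler-Maruyama step for all maturities at once

mc_price = np.exp(-int_r).mean()                  # weak approximation of P(0, T_max)
exact_price = np.exp(-np.trapz(f0, maturities))   # known exact value from the initial curve
print(mc_price, exact_price)
```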

  10. An Error Model for the Cirac-Zoller CNOT gate

    CERN Document Server

    Felloni, Sara

    2009-01-01

    In the framework of ion-trap quantum computing, we develop a characterization of experimentally realistic imperfections which may affect the Cirac-Zoller implementation of the CNOT gate. The CNOT operation is performed by applying a protocol of five laser pulses of appropriate frequency and polarization. The laser-pulse protocol exploits auxiliary levels, and its imperfect implementation leads to unitary as well as non-unitary errors affecting the CNOT operation. We provide a characterization of such imperfections, which are physically realistic and have never been considered before to the best of our knowledge. Our characterization shows that imperfect laser pulses unavoidably cause a leak of information from the states which alone should be transformed by the ideal gate, into the ancillary states exploited by the experimental implementation.

  11. Modeling and Error Analysis of a Superconducting Gravity Gradiometer.

    Science.gov (United States)

    1979-08-01

  12. Approach for wideband direction-of-arrival estimation in the presence of array model errors

    Institute of Scientific and Technical Information of China (English)

    Chen Deli; Zhang Cong; Tao Huamin; Lu Huanzhang

    2009-01-01

    The presence of array imperfections and mutual coupling in sensor arrays poses several challenges for the development of effective algorithms for the direction-of-arrival (DOA) estimation problem in array processing. A correlation-domain wideband DOA estimation algorithm without array calibration is proposed to deal with these array model errors, using an arbitrary antenna array of omnidirectional elements. By using matrix operators that have memory and oblivion characteristics, this algorithm can separate the incident signals effectively. Compared with other typical wideband DOA estimation algorithms based on subspace theory, this algorithm achieves robust DOA estimation with respect to position error, gain-phase error, and mutual coupling, by utilizing a relaxation technique based on signal separation. The signal separation category and the robustness of this algorithm to the array model errors are analyzed and proved. The validity and robustness of this algorithm in the presence of array model errors are confirmed by theoretical analysis and simulation results.

  13. Model selection for marginal regression analysis of longitudinal data with missing observations and covariate measurement error.

    Science.gov (United States)

    Shen, Chung-Wei; Chen, Yi-Hau

    2015-10-01

    Missing observations and covariate measurement error commonly arise in longitudinal data. However, existing methods for model selection in marginal regression analysis of longitudinal data fail to address the potential bias resulting from these issues. To tackle this problem, we propose a new model selection criterion, the Generalized Longitudinal Information Criterion, which is based on an approximately unbiased estimator for the expected quadratic error of a considered marginal model accounting for both data missingness and covariate measurement error. The simulation results reveal that the proposed method performs quite well in the presence of missing data and covariate measurement error. In contrast, naive procedures that ignore such complexity in the data may perform quite poorly. The proposed method is applied to data from the Taiwan Longitudinal Study on Aging to assess the relationship of depression with health and social status in the elderly, accommodating measurement error in the covariate as well as missing observations.

  14. Local and omnibus goodness-of-fit tests in classical measurement error models

    KAUST Repository

    Ma, Yanyuan

    2010-09-14

    We consider functional measurement error models, i.e. models where covariates are measured with error and yet no distributional assumptions are made about the mismeasured variable. We propose and study a score-type local test and an orthogonal series-based, omnibus goodness-of-fit test in this context, where no likelihood function is available or calculated-i.e. all the tests are proposed in the semiparametric model framework. We demonstrate that our tests have optimality properties and computational advantages that are similar to those of the classical score tests in the parametric model framework. The test procedures are applicable to several semiparametric extensions of measurement error models, including when the measurement error distribution is estimated non-parametrically as well as for generalized partially linear models. The performance of the local score-type and omnibus goodness-of-fit tests is demonstrated through simulation studies and analysis of a nutrition data set.

  15. Highly porous thermal protection materials: Modelling and prediction of the methodical experimental errors

    Science.gov (United States)

    Cherepanov, Valery V.; Alifanov, Oleg M.; Morzhukhina, Alena V.; Budnik, Sergey A.

    2016-11-01

    The formation mechanisms and the main factors affecting the systematic error of thermocouples were investigated. According to the results of experimental studies and mathematical modelling, it was established that in highly porous heat-resistant materials for aerospace applications the thermocouple errors are determined by two competing mechanisms, with the errors correlated with the difference between the radiative and conductive heat fluxes. A comparative analysis was carried out and some features of the methodical error formation related to the distance from the heated surface were established.

  16. Identification and estimation of nonlinear models using two samples with nonclassical measurement errors

    KAUST Repository

    Carroll, Raymond J.

    2010-05-01

    This paper considers identification and estimation of a general nonlinear Errors-in-Variables (EIV) model using two samples. Both samples consist of a dependent variable, some error-free covariates, and an error-prone covariate, for which the measurement error has unknown distribution and could be arbitrarily correlated with the latent true values; and neither sample contains an accurate measurement of the corresponding true variable. We assume that the regression model of interest - the conditional distribution of the dependent variable given the latent true covariate and the error-free covariates - is the same in both samples, but the distributions of the latent true covariates vary with observed error-free discrete covariates. We first show that the general latent nonlinear model is nonparametrically identified using the two samples when both could have nonclassical errors, without either instrumental variables or independence between the two samples. When the two samples are independent and the nonlinear regression model is parameterized, we propose sieve Quasi Maximum Likelihood Estimation (Q-MLE) for the parameter of interest, and establish its root-n consistency and asymptotic normality under possible misspecification, and its semiparametric efficiency under correct specification, with easily estimated standard errors. A Monte Carlo simulation and a data application are presented to show the power of the approach.

  17. 3D CMM strain-gauge triggering probe error characteristics modeling using fuzzy logic

    DEFF Research Database (Denmark)

    Achiche, Sofiane; Wozniak, A; Fan, Zhun;

    2008-01-01

    The error values of CMMs depend on the probing direction; hence their spatial variation is a key part of the probe inaccuracy. This paper presents genetically-generated fuzzy knowledge bases (FKBs) to model the spatial error characteristics of a CMM module-changing probe. Two automatically generat...

  18. 3D CMM Strain-Gauge Triggering Probe Error Characteristics Modeling

    DEFF Research Database (Denmark)

    Achiche, Sofiane; Wozniak, Adam; Fan, Zhun;

    2008-01-01

    The error values of CMMs depend on the probing direction; hence their spatial variation is a key part of the probe inaccuracy. This paper presents genetically-generated fuzzy knowledge bases (FKBs) to model the spatial error characteristics of a CMM module-changing probe. Two automatically generat...

  19. Taking the Error Term of the Factor Model into Account: The Factor Score Predictor Interval

    Science.gov (United States)

    Beauducel, Andre

    2013-01-01

    The problem of factor score indeterminacy implies that the factor and the error scores cannot be completely disentangled in the factor model. It is therefore proposed to compute Harman's factor score predictor that contains an additive combination of factor and error variance. This additive combination is discussed in the framework of classical…

  20. Structural Equation Model Trees

    Science.gov (United States)

    Brandmaier, Andreas M.; von Oertzen, Timo; McArdle, John J.; Lindenberger, Ulman

    2013-01-01

    In the behavioral and social sciences, structural equation models (SEMs) have become widely accepted as a modeling tool for the relation between latent and observed variables. SEMs can be seen as a unification of several multivariate analysis techniques. SEM Trees combine the strengths of SEMs and the decision tree paradigm by building tree…

  1. Particle Filter with Novel Nonlinear Error Model for Miniature Gyroscope-Based Measurement While Drilling Navigation.

    Science.gov (United States)

    Li, Tao; Yuan, Gannan; Li, Wang

    2016-03-15

    The derivation of a conventional error model for the miniature gyroscope-based measurement while drilling (MGWD) system is based on the assumption that the errors of attitude are small enough so that the direction cosine matrix (DCM) can be approximated or simplified by the errors of small-angle attitude. However, the simplification of the DCM would introduce errors to the navigation solutions of the MGWD system if the initial alignment cannot provide precise attitude, especially for the low-cost microelectromechanical system (MEMS) sensors operated in harsh multilateral horizontal downhole drilling environments. This paper proposes a novel nonlinear error model (NNEM) by the introduction of the error of DCM, and the NNEM can reduce the propagated errors under large-angle attitude error conditions. The zero velocity and zero position are the reference points and the innovations in the states estimation of particle filter (PF) and Kalman filter (KF). The experimental results illustrate that the performance of PF is better than KF and the PF with NNEM can effectively restrain the errors of system states, especially for the azimuth, velocity, and height in the quasi-stationary condition.
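
    For readers unfamiliar with the filters being compared, the following is a minimal bootstrap particle filter for a one-dimensional toy state; the dynamics, noise levels, and resampling scheme are illustrative assumptions, and the sketch does not implement the MGWD nonlinear error model itself.

```python
import numpy as np

rng = np.random.default_rng(2)

def bootstrap_particle_filter(observations, n_particles=1000, q=0.1, r=0.5):
    """Toy PF: nonlinear scalar state, Gaussian process noise (std q) and
    Gaussian measurement noise (std r). Returns the filtered state estimates."""
    particles = rng.normal(0.0, 1.0, n_particles)          # initial ensemble
    estimates = []
    for z in observations:
        # propagate every particle through the (nonlinear) transition model
        particles = particles + 0.5 * np.sin(particles) + rng.normal(0.0, q, n_particles)
        # weight by the observation likelihood
        weights = np.exp(-0.5 * ((z - particles) / r) ** 2) + 1e-300
        weights /= weights.sum()
        estimates.append(np.sum(weights * particles))
        # systematic resampling to avoid weight degeneracy
        cum = np.cumsum(weights)
        cum[-1] = 1.0
        positions = (rng.random() + np.arange(n_particles)) / n_particles
        particles = particles[np.searchsorted(cum, positions)]
    return np.array(estimates)

# usage: x_hat = bootstrap_particle_filter(measured_series)
```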

  2. Particle Filter with Novel Nonlinear Error Model for Miniature Gyroscope-Based Measurement While Drilling Navigation

    Directory of Open Access Journals (Sweden)

    Tao Li

    2016-03-01

    Full Text Available The derivation of a conventional error model for the miniature gyroscope-based measurement while drilling (MGWD) system is based on the assumption that the errors of attitude are small enough so that the direction cosine matrix (DCM) can be approximated or simplified by the errors of small-angle attitude. However, the simplification of the DCM would introduce errors to the navigation solutions of the MGWD system if the initial alignment cannot provide precise attitude, especially for the low-cost microelectromechanical system (MEMS) sensors operated in harsh multilateral horizontal downhole drilling environments. This paper proposes a novel nonlinear error model (NNEM) by the introduction of the error of DCM, and the NNEM can reduce the propagated errors under large-angle attitude error conditions. The zero velocity and zero position are the reference points and the innovations in the states estimation of particle filter (PF) and Kalman filter (KF). The experimental results illustrate that the performance of PF is better than KF and the PF with NNEM can effectively restrain the errors of system states, especially for the azimuth, velocity, and height in the quasi-stationary condition.

  3. Distortion Modeling and Error Robust Coding Scheme for H.26L Video

    Institute of Scientific and Technical Information of China (English)

    CHEN Chuan; YU Songyu; CHENG Lianji

    2004-01-01

    Transmission of hybrid-coded video, including motion compensation and spatial prediction, over an error-prone channel results in the well-known problem of error propagation because of the drift in reference frames between encoder and decoder. The prediction loop propagates errors and causes substantial degradation in video quality. Especially in H.26L video, both intra and inter prediction strategies are used to improve compression efficiency; however, they make error propagation more serious. This work proposes distortion models for H.26L video to optimally estimate the overall distortion of decoder frame reconstruction due to quantization, error propagation, and error concealment. Based on these statistical distortion models, our error-robust coding scheme integrates only the distinct distortion between intra and inter macroblocks into a rate-distortion based framework to select a suitable coding mode for each macroblock, and so the cost in computational complexity is modest. Simulations under typical 3GPP/3GPP2 channel and Internet channel conditions have shown that our proposed scheme achieves much better performance than those currently used in H.26L. The error propagation estimation and its effect at fractional pixel-level prediction have also been tested. All the results have demonstrated that our proposed scheme achieves a good balance between compression efficiency and error robustness for H.26L video, at the cost of modest additional complexity.

  4. Model error analyses of photochemistry mechanisms using the BEATBOX/BOXMOX data assimilation toy model

    Science.gov (United States)

    Knote, C. J.; Eckl, M.; Barré, J.; Emmons, L. K.

    2016-12-01

    Simplified descriptions of photochemistry in the atmosphere ('photochemical mechanisms'), necessary to reduce the computational burden of a model simulation, contribute significantly to the overall uncertainty of an air quality model. Understanding how the photochemical mechanism contributes to observed model errors through examination of results of the complete model system is next to impossible due to cancellation and amplification effects amongst the tightly interconnected model components. Here we present BEATBOX, a novel method to evaluate photochemical mechanisms using the underlying chemistry box model BOXMOX. With BOXMOX we can rapidly initialize various mechanisms (e.g. MOZART, RACM, CBMZ, MCM) with homogenized observations (e.g. from field campaigns) and conduct idealized 'chemistry in a jar' simulations under controlled conditions. BEATBOX is a data assimilation toy model built upon BOXMOX which allows the effects of assimilating observations (e.g., CO, NO2, O3) into these simulations to be explored. In this presentation we show how we use the Master Chemical Mechanism (MCM, U Leeds) as a benchmark for more simplified mechanisms like MOZART, and use BEATBOX to homogenize the chemical environment and diagnose errors within the more simplified mechanisms. We present BEATBOX as a new, freely available tool that allows researchers to rapidly evaluate their chemistry mechanism against a range of others under varying chemical conditions.

  5. On the asymptotic ergodic capacity of FSO links with generalized pointing error model

    KAUST Repository

    Al-Quwaiee, Hessa

    2015-09-11

    Free-space optical (FSO) communication systems are negatively affected by two physical phenomena, namely, scintillation due to atmospheric turbulence and pointing errors. To quantify the effect of these two factors on FSO system performance, we need an effective mathematical model for them. Scintillations are typically modeled by the log-normal and Gamma-Gamma distributions for weak and strong turbulence conditions, respectively. In this paper, we propose and study a generalized pointing error model based on the Beckmann distribution. We then derive the asymptotic ergodic capacity of FSO systems under the joint impact of turbulence and generalized pointing error impairments. © 2015 IEEE.

  6. Adjustment of measurements with multiplicative errors: error analysis, estimates of the variance of unit weight, and effect on volume estimation from LiDAR-type digital elevation models.

    Science.gov (United States)

    Shi, Yun; Xu, Peiliang; Peng, Junhuan; Shi, Chuang; Liu, Jingnan

    2014-01-10

    Modern observation technology has verified that measurement errors can be proportional to the true values of measurements, as is the case for GPS, VLBI baselines and LiDAR. Observational models of this type are called multiplicative error models. This paper extends the work of Xu and Shimada published in 2000 on multiplicative error models to the analytical error analysis of quantities of practical interest and to estimates of the variance of unit weight. We analytically derive the variance-covariance matrices of the three least squares (LS) adjustments, the adjusted measurements and the corrections of measurements in multiplicative error models. For quality evaluation, we construct five estimators for the variance of unit weight in association with the three LS adjustment methods. Although LiDAR measurements are contaminated with multiplicative random errors, LiDAR-based digital elevation models (DEM) have been constructed as if they were of additive random errors. We will simulate a model landslide, which is assumed to be surveyed with LiDAR, and investigate the effect of LiDAR-type multiplicative error measurements on DEM construction and its effect on the estimate of landslide mass volume from the constructed DEM.
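
    For reference, the generic distinction between the two observational model classes discussed above can be written as follows; the notation is generic and not taken from the paper.

```latex
% y_i: measurement of the true quantity f_i(beta); epsilon_i: zero-mean error.
\begin{align}
  \text{additive errors:}       \quad & y_i = f_i(\beta) + \epsilon_i, \\
  \text{multiplicative errors:} \quad & y_i = f_i(\beta)\,\bigl(1 + \epsilon_i\bigr),
\end{align}
% In the multiplicative case the dispersion scales with the signal,
% Var(y_i) = f_i(\beta)^2 \sigma^2, so the weights of a least-squares
% adjustment depend on the unknown parameters and must be updated jointly
% with the estimates.
```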

  7. SU-E-T-51: Bayesian Network Models for Radiotherapy Error Detection

    Energy Technology Data Exchange (ETDEWEB)

    Kalet, A; Phillips, M; Gennari, J [University of Washington, Seattle, WA (United States)]

    2014-06-01

    Purpose: To develop a probabilistic model of radiotherapy plans using Bayesian networks that will detect potential errors in radiation delivery. Methods: Semi-structured interviews with medical physicists and other domain experts were employed to generate a set of layered nodes and arcs forming a Bayesian Network (BN) which encapsulates relevant radiotherapy concepts and their associated interdependencies. Concepts in the final network were limited to those whose parameters are represented in the institutional database at a level significant enough to develop mathematical distributions. The concept-relation knowledge base was constructed using the Web Ontology Language (OWL) and translated into Hugin Expert Bayes Network files via the RHugin package in the R statistical programming language. A subset of de-identified data derived from a Mosaiq relational database representing 1937 unique prescription cases was processed and pre-screened for errors and then used by the Hugin implementation of the Expectation-Maximization (EM) algorithm for machine learning of all parameter distributions. Individual networks were generated for each of several commonly treated anatomic regions identified by ICD-9 neoplasm categories including lung, brain, lymphoma, and female breast. Results: The resulting Bayesian networks represent a large part of the probabilistic knowledge inherent in treatment planning. By populating the networks entirely with data captured from a clinical oncology information management system over the course of several years of normal practice, we were able to create accurate probability tables with no additional time spent by experts or clinicians. These probabilistic descriptions of treatment planning allow one to check whether a treatment plan is within the normal scope of practice, given some initial set of clinical evidence, and thereby detect potential outliers to be flagged for further investigation. Conclusion: The networks developed here support the

  8. Fixing Geometric Errors on Polygonal Models: A Survey

    Institute of Scientific and Technical Information of China (English)

    Tao Ju

    2009-01-01

    Polygonal models are popular representations of 3D objects. The use of polygonal models in computational applications often requires a model to properly bound a 3D solid. That is, the polygonal model needs to be closed, manifold, and free of self-intersections. This paper surveys a sizeable literature for repairing models that do not satisfy these criteria, focusing on categorizing the methods by their methodology and capability. We hope to offer pointers to further readings for researchers and practitioners, and suggestions of promising directions for future research endeavors.

  9. The Role of Model and Initial Condition Error in Numerical Weather Forecasting Investigated with an Observing System Simulation Experiment

    Science.gov (United States)

    Prive, Nikki C.; Errico, Ronald M.

    2013-01-01

    A series of experiments that explore the roles of model and initial condition error in numerical weather prediction are performed using an observing system simulation experiment (OSSE) framework developed at the National Aeronautics and Space Administration Global Modeling and Assimilation Office (NASA/GMAO). The use of an OSSE allows the analysis and forecast errors to be explicitly calculated, and different hypothetical observing networks can be tested with ease. In these experiments, both a full global OSSE framework and an 'identical twin' OSSE setup are utilized to compare the behavior of the data assimilation system and evolution of forecast skill with and without model error. The initial condition error is manipulated by varying the distribution and quality of the observing network and the magnitude of observation errors. The results show that model error has a strong impact on both the quality of the analysis field and the evolution of forecast skill, including both systematic and unsystematic model error components. With a realistic observing network, the analysis state retains a significant quantity of error due to systematic model error. If errors of the analysis state are minimized, model error acts to rapidly degrade forecast skill during the first 24-48 hours of forward integration. In the presence of model error, the impact of observation errors on forecast skill is small, but in the absence of model error, observation errors cause a substantial degradation of the skill of medium range forecasts.

  10. Accounting for covariate measurement error in a Cox model analysis of recurrence of depression.

    Science.gov (United States)

    Liu, K; Mazumdar, S; Stone, R A; Dew, M A; Houck, P R; Reynolds, C F

    2001-01-01

    When a covariate measured with error is used as a predictor in a survival analysis using the Cox model, the parameter estimate is usually biased. In clinical research, covariates measured without error such as treatment procedure or sex are often used in conjunction with a covariate measured with error. In a randomized clinical trial of two types of treatments, we account for the measurement error in the covariate, log-transformed total rapid eye movement (REM) activity counts, in a Cox model analysis of the time to recurrence of major depression in an elderly population. Regression calibration and two variants of a likelihood-based approach are used to account for measurement error. The likelihood-based approach is extended to account for the correlation between replicate measures of the covariate. Using the replicate data decreases the standard error of the parameter estimate for log(total REM) counts while maintaining the bias reduction of the estimate. We conclude that covariate measurement error and the correlation between replicates can affect results in a Cox model analysis and should be accounted for. In the depression data, these methods render comparable results that have less bias than the results when measurement error is ignored.
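
    A minimal numerical sketch of the regression-calibration step mentioned above, using simulated replicate measurements of the error-prone covariate; the variable names and noise levels are assumptions, and the calibrated covariate would subsequently replace the naive covariate when fitting the Cox model with any survival package.

```python
import numpy as np

rng = np.random.default_rng(3)

n, n_reps = 200, 2
x_true = rng.normal(0.0, 1.0, n)                           # unobserved true covariate, e.g. log(total REM)
w = x_true[:, None] + rng.normal(0.0, 0.6, (n, n_reps))    # replicate error-prone measurements

w_bar = w.mean(axis=1)
sigma2_u = np.mean(np.var(w, axis=1, ddof=1))              # measurement-error variance from replicates
sigma2_x = np.var(w_bar, ddof=1) - sigma2_u / n_reps       # estimated variance of the true covariate
reliability = sigma2_x / (sigma2_x + sigma2_u / n_reps)    # attenuation (reliability) factor

# regression-calibration substitute E[X | W]: shrink the replicate mean toward its overall mean
x_calibrated = w_bar.mean() + reliability * (w_bar - w_bar.mean())
# x_calibrated then enters the Cox regression in place of w_bar, reducing attenuation bias
```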

  11. Comparison Experiments of Different Model Error Schemes in Ensemble Kalman Filter Soil Moisture Assimilation

    Science.gov (United States)

    Nie, Suping; Zhu, Jiang; Luo, Yong

    2010-05-01

    The purpose of this study is to explore the performance of different model error schemes in soil moisture data assimilation. Based on the ensemble Kalman filter (EnKF) and the atmosphere-vegetation interaction model (AVIM), point-scale analysis results for three schemes, 1) covariance inflation (CI), 2) direct random disturbance (DRD), and 3) error source random disturbance (ESRD), are compared under conditions of different observational error estimations, different observation layers, and different observation intervals using a series of idealized experiments. The results show that all these schemes obtain good assimilation results when the assumed observational error is an accurate statistical representation of the actual error used to perturb the original truth value, and the ESRD scheme has the lowest root mean square error (RMSE). Overestimation or underestimation of the observational errors can strongly affect the assimilation results of the CI and DRD schemes. The performance of these two schemes deteriorates markedly, while the ESRD scheme retains its capability. When the observation layers or the observation interval increase, the performance of both the CI and DRD schemes declines evidently. For the ESRD scheme, however, which can assimilate multi-layer observations in a coordinated way, the additional observations further improve the assimilation results. Moreover, as the ESRD scheme incorporates a degree of model error estimation in its assimilation process, it also performs well when assimilating temporally sparse observations.
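
    A minimal sketch of the covariance-inflation (CI) scheme inside a stochastic ensemble Kalman filter analysis step, using a toy three-layer soil-moisture state; the inflation factor, ensemble size, and observation operator are illustrative assumptions rather than the settings used with AVIM.

```python
import numpy as np

rng = np.random.default_rng(4)

def enkf_ci_update(ensemble, obs, obs_err_std, H, inflation=1.1):
    """Stochastic EnKF analysis with multiplicative covariance inflation.
    ensemble: (n_members, n_state); obs: (n_obs,); H: (n_obs, n_state)."""
    mean = ensemble.mean(axis=0)
    ensemble = mean + inflation * (ensemble - mean)   # CI: inflate the forecast spread
    X = ensemble - ensemble.mean(axis=0)
    P = X.T @ X / (len(ensemble) - 1)                 # inflated forecast covariance
    R = np.eye(len(obs)) * obs_err_std**2
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)      # Kalman gain
    perturbed_obs = obs + rng.normal(0.0, obs_err_std, (len(ensemble), len(obs)))
    return ensemble + (perturbed_obs - ensemble @ H.T) @ K.T

# toy usage: 40 members, 3 soil layers, only the top layer observed
ens = rng.normal(0.25, 0.05, size=(40, 3))
H = np.array([[1.0, 0.0, 0.0]])
analysis = enkf_ci_update(ens, obs=np.array([0.30]), obs_err_std=0.02, H=H)
```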

  12. Reporting error in weight and its implications for bias in economic models.

    Science.gov (United States)

    Cawley, John; Maclean, Johanna Catherine; Hammer, Mette; Wintfeld, Neil

    2015-12-01

    Most research on the economic consequences of obesity uses data on self-reported weight, which contains reporting error that has the potential to bias coefficient estimates in economic models. The purpose of this paper is to measure the extent and characteristics of reporting error in weight, and to examine its impact on regression coefficients in models of the healthcare consequences of obesity. We analyze data from the National Health and Nutrition Examination Survey (NHANES) for 2003-2010, which includes both self-reports and measurements of weight and height. We find that reporting error in weight is non-classical: underweight respondents tend to overreport, and overweight and obese respondents tend to underreport, their weight, with underreporting increasing in measured weight. This error results in roughly 1 out of 7 obese individuals being misclassified as non-obese. Reporting error is also correlated with other common regressors in economic models, such as education. Although it is a common misconception that reporting error always causes attenuation bias, comparisons of models that use self-reported and measured weight confirm that reporting error can cause upward bias in coefficient estimates. For example, use of self-reports leads to overestimates of the probability that an obese man uses a prescription drug, has a healthcare visit, or has a hospital admission. These findings underscore that models of the consequences of obesity should use measurements of weight, when available, and that social science datasets should measure weight rather than simply ask subjects to report their weight.

  13. Sources of Error in Synthetic Remote Sensing Data and Potential Impacts on Ecohydrological Models in Semiarid Rangelands

    Science.gov (United States)

    Olsoy, P.; Flores, A. N.; Glenn, N. F.

    2014-12-01

    Semiarid rangelands have a high level of both spatial and temporal vegetation heterogeneity due to slow net primary production rates and highly variable rainfall. Ecohydrological modeling in these ecosystems requires high resolution inputs of vegetation structure and function. We used the Spatial and Temporal Adaptive Reflectance Fusion Model (STARFM) to create eight synthetic Landsat TM images across a growing season (April - September). STARFM fuses the high spatial resolution of Landsat TM with the high temporal resolution of Terra MODIS. Previous attempts to assess the accuracy and quantify model errors of STARFM have used pixel-based regression and difference image analysis, as well as examining the distribution of those errors across land cover types. However, those model errors have not previously been compared to a null model (i.e., using the nearest available Landsat scene). If there is very little change occurring, then you would expect the model to have artificially high correlation coefficients and low error estimates. Additionally, we examined several other potential sources of error: i) time of year or season, ii) vegetation height class from airborne LiDAR, iii) solar radiation (i.e., aspect), and iv) snow. We found that STARFM added new information when compared to the null model, yet the null model was highly accurate during large parts of the growing season (June through September, r2 = 0.95 - 0.97) suggesting that simply reporting r2 values from pixel-based regression is insufficient to assess model accuracy. We found that areas with snow in the preceding model input imagery (NDSI > 0.4) increased errors threefold (RMSE(snow) = 0.3223, RMSE(not-snow) = 0.1017). We also found that pixels with shrub or tree vegetation (height > 0.3 m) tended to have higher errors when compared to ground or grass pixels. Finally, our results indicate that during the middle of the growing season, there are patterns in the error that relate to solar radiation with the

  14. Statistical analysis of error propagation from radar rainfall to hydrological models

    Directory of Open Access Journals (Sweden)

    D. Zhu

    2013-04-01

    Full Text Available This study attempts to characterise the manner in which inherent errors in radar rainfall estimate inputs influence the character of the stream flow simulation uncertainty in validated hydrological modelling. An artificial statistical error model described by a Gaussian distribution was developed to generate realisations of possible combinations of normalised errors and normalised bias to reflect the identified radar error and temporal dependence. These realisations were embedded in the 5 km/15 min UK Nimrod radar rainfall data and used to generate ensembles of stream flow simulations using three different hydrological models with varying degrees of complexity: the fully distributed, physically based model MIKE SHE; the semi-distributed, lumped model TOPMODEL; and the unit hydrograph model PRTF. These models were built for this purpose and applied to the Upper Medway Catchment (220 km2) in South-East England. The results show that the normalised bias of the radar rainfall estimates was amplified in the simulated stream flow and was the dominant factor with a significant impact on the stream flow simulations. This preliminary radar-error-generation model could be developed more rigorously and comprehensively to represent the error characteristics of weather radars for quantitative measurement of rainfall.
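
    The sketch below illustrates the general idea of such an error generator, assuming a lag-one autoregressive structure for the normalised errors and a constant normalised bias imposed multiplicatively on the radar series; the parameter values are illustrative and are not the calibrated Nimrod error statistics.

```python
import numpy as np

rng = np.random.default_rng(5)

def rainfall_error_realisations(radar_rain, n_real=100, bias=0.1, sigma=0.3, rho=0.7):
    """Generate n_real perturbed rainfall series: Gaussian normalised errors
    with AR(1) temporal dependence (coefficient rho) plus a normalised bias."""
    n = len(radar_rain)
    out = np.empty((n_real, n))
    for i in range(n_real):
        eps = np.empty(n)
        eps[0] = rng.normal(0.0, sigma)
        for t in range(1, n):
            eps[t] = rho * eps[t - 1] + rng.normal(0.0, sigma * np.sqrt(1.0 - rho**2))
        out[i] = radar_rain * (1.0 + bias + eps)
    return np.clip(out, 0.0, None)          # rainfall cannot be negative

# each realisation would then drive the hydrological models to form a flow ensemble
ensemble_rain = rainfall_error_realisations(np.array([0.0, 1.2, 3.4, 2.1, 0.5]))
```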

  15. Global tropospheric ozone modeling: Quantifying errors due to grid resolution

    OpenAIRE

    Wild, Oliver; Prather, Michael J.

    2006-01-01

    Ozone production in global chemical models is dependent on model resolution because ozone chemistry is inherently nonlinear, the timescales for chemical production are short, and precursors are artificially distributed over the spatial scale of the model grid. In this study we examine the sensitivity of ozone, its precursors, and its production to resolution by running a global chemical transport model at four different resolutions between T21 (5.6° × 5.6°) and T106 (1.1° × 1.1°) and by quant...

  16. Creation and implementation of department-wide structured reports: an analysis of the impact on error rate in radiology reports.

    Science.gov (United States)

    Hawkins, C Matthew; Hall, Seth; Zhang, Bin; Towbin, Alexander J

    2014-10-01

    The purpose of this study was to evaluate and compare textual error rates and subtypes in radiology reports before and after implementation of department-wide structured reports. Randomly selected radiology reports that were generated following the implementation of department-wide structured reports were evaluated for textual errors by two radiologists. For each report, the text was compared to the corresponding audio file. Errors in each report were tabulated and classified. Error rates were compared to results from a prior study performed prior to implementation of structured reports. Calculated error rates included the average number of errors per report, the average number of nongrammatical errors per report, the percentage of reports with an error, and the percentage of reports with a nongrammatical error. Identical versions of voice-recognition software were used for both studies. A total of 644 radiology reports were randomly evaluated as part of this study. There was a statistically significant reduction in the percentage of reports with nongrammatical errors (33 to 26%; p = 0.024). The likelihood of at least one missense omission error (omission errors that changed the meaning of a phrase or sentence) occurring in a report was significantly reduced from 3.5 to 1.2% (p = 0.0175). A statistically significant reduction in the likelihood of at least one commission error (retained statements from a standardized report that contradict the dictated findings or impression) occurring in a report was also observed (3.9 to 0.8%; p = 0.0007). Carefully constructed structured reports can help to reduce certain error types in radiology reports.

  17. Modeling data revisions : Measurement error and dynamics of "true" values

    NARCIS (Netherlands)

    Jacobs, Jan P. A. M.; van Norden, Simon

    2011-01-01

    Policy makers must base their decisions on preliminary and partially revised data of varying reliability. Realistic modeling of data revisions is required to guide decision makers in their assessment of current and future conditions. This paper provides a new framework with which to model data revis

  18. Modeling data revisions : Measurement error and dynamics of "true" values

    NARCIS (Netherlands)

    Jacobs, Jan P. A. M.; van Norden, Simon

    2011-01-01

    Policy makers must base their decisions on preliminary and partially revised data of varying reliability. Realistic modeling of data revisions is required to guide decision makers in their assessment of current and future conditions. This paper provides a new framework with which to model data revis

  19. Modeling the probability distribution of positional errors incurred by residential address geocoding

    Directory of Open Access Journals (Sweden)

    Mazumdar Soumya

    2007-01-01

    Full Text Available Abstract Background The assignment of a point-level geocode to subjects' residences is an important data assimilation component of many geographic public health studies. Often, these assignments are made by a method known as automated geocoding, which attempts to match each subject's address to an address-ranged street segment georeferenced within a streetline database and then interpolate the position of the address along that segment. Unfortunately, this process results in positional errors. Our study sought to model the probability distribution of positional errors associated with automated geocoding and E911 geocoding. Results Positional errors were determined for 1423 rural addresses in Carroll County, Iowa as the vector difference between each 100%-matched automated geocode and its true location as determined by orthophoto and parcel information. Errors were also determined for 1449 60%-matched geocodes and 2354 E911 geocodes. Huge (> 15 km) outliers occurred among the 60%-matched geocoding errors; outliers occurred for the other two types of geocoding errors also but were much smaller. E911 geocoding was more accurate (median error length = 44 m) than 100%-matched automated geocoding (median error length = 168 m). The empirical distributions of positional errors associated with 100%-matched automated geocoding and E911 geocoding exhibited a distinctive Greek-cross shape and had many other interesting features that were not capable of being fitted adequately by a single bivariate normal or t distribution. However, mixtures of t distributions with two or three components fit the errors very well. Conclusion Mixtures of bivariate t distributions with few components appear to be flexible enough to fit many positional error datasets associated with geocoding, yet parsimonious enough to be feasible for nascent applications of measurement-error methodology to spatial epidemiology.
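
    In generic notation (not the paper's), a two-component bivariate t mixture for the positional error vector has the density below; the heavy tails of the t components absorb the occasional very large geocoding errors that a single bivariate normal cannot accommodate.

```latex
% e: 2-D positional error vector; pi_k: component weights; mu_k, Sigma_k, nu_k:
% location, scale matrix and degrees of freedom of the k-th bivariate t component.
\begin{equation}
  f(\mathbf{e}) \;=\; \sum_{k=1}^{2} \pi_k\,
  t_{\nu_k}\!\bigl(\mathbf{e};\, \boldsymbol{\mu}_k, \boldsymbol{\Sigma}_k\bigr),
  \qquad \pi_1 + \pi_2 = 1 .
\end{equation}
```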

  20. The Dynamic Modeling of Multiple Pairs of Spur Gears in Mesh, Including Friction and Geometrical Errors

    Directory of Open Access Journals (Sweden)

    Shengxiang Jia

    2003-01-01

    Full Text Available This article presents a dynamic model of three shafts and two pairs of gears in mesh, with 26 degrees of freedom, including the effects of variable tooth stiffness, pitch and profile errors, friction, and a localized tooth crack on one of the gears. The article also details how geometrical errors in teeth can be included in a model. The model incorporates the effects of variations in torsional mesh stiffness in gear teeth by using a common formula to describe the stiffness that occurs as the gears mesh together. The comparison between the presence and absence of geometrical errors in teeth was made by using Matlab and Simulink models, which were developed from the equations of motion. The effects of pitch and profile errors on the resultant input pinion angular velocity coherent-signal of the input pinion's average are discussed by investigating some of the common diagnostic functions and changes to the frequency spectra results.

  1. Robust Modeling of Low-Cost MEMS Sensor Errors in Mobile Devices Using Fast Orthogonal Search

    Directory of Open Access Journals (Sweden)

    M. Tamazin

    2013-01-01

    Full Text Available Accessibility to inertial navigation systems (INS) has been severely limited by cost in the past. The introduction of low-cost microelectromechanical system-based INS to be integrated with GPS in order to provide a reliable positioning solution has enabled more widespread use in mobile devices. The random errors of the MEMS inertial sensors may deteriorate the overall system accuracy in mobile devices. These errors are modeled stochastically and are included in the error model of the estimation techniques used, such as the Kalman filter or the particle filter. A first-order Gauss-Markov model is usually used to describe the stochastic nature of these errors. However, if the autocorrelation sequences of these random components are examined, it can be determined that the first-order Gauss-Markov model is not adequate to describe such stochastic behavior. A robust modeling technique based on fast orthogonal search is introduced to remove MEMS-based inertial sensor errors inside mobile devices that are used for several location-based services. The proposed method is applied to MEMS-based gyroscopes and accelerometers. Results show that the proposed method models low-cost MEMS sensor errors with no need for denoising techniques, using a smaller model order and less computation, outperforming traditional methods by two orders of magnitude.
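
    A minimal sketch of the first-order Gauss-Markov process referred to above, together with the empirical autocorrelation check that can reveal its inadequacy; the correlation time and noise level are illustrative assumptions, and the sketch does not implement fast orthogonal search itself.

```python
import numpy as np

rng = np.random.default_rng(6)

def gauss_markov(n, tau=100.0, sigma=0.05, dt=1.0):
    """Discrete first-order Gauss-Markov process with correlation time tau:
    x[k] = (1 - dt/tau) * x[k-1] + white noise."""
    x = np.zeros(n)
    for k in range(1, n):
        x[k] = (1.0 - dt / tau) * x[k - 1] + rng.normal(0.0, sigma * np.sqrt(dt))
    return x

def autocorrelation(x, max_lag=300):
    x = x - x.mean()
    acf = np.correlate(x, x, mode="full")[len(x) - 1:]
    return acf[:max_lag] / acf[0]

# If the ACF of a recorded gyro/accelerometer error does not decay as a single
# exponential, a lone first-order Gauss-Markov term is not an adequate error model.
acf_gm = autocorrelation(gauss_markov(5_000))
```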

  2. A Framework for Structural Equation Models in General Pedigrees

    National Research Council Canada - National Science Library

    Morris, Nathan J; Elston, Robert C; Stein, Catherine M

    Background/Aims: Structural Equation Modeling (SEM) is an analysis approach that accounts for both the causal relationships between variables and the errors associated with the measurement of these variables...

  3. A novel multitemporal insar model for joint estimation of deformation rates and orbital errors

    KAUST Repository

    Zhang, Lei

    2014-06-01

    Orbital errors, characterized typically as long-wavelength artifacts, commonly exist in interferometric synthetic aperture radar (InSAR) imagery as a result of inaccurate determination of the sensor state vector. Orbital errors degrade the precision of multitemporal InSAR products (i.e., ground deformation). Although research on orbital error reduction has been ongoing for nearly two decades and several algorithms for reducing the effect of the errors are already in existence, the errors cannot always be corrected efficiently and reliably. We propose a novel model that is able to jointly estimate deformation rates and orbital errors based on the different spatiotemporal characteristics of the two types of signals. The proposed model is able to isolate a long-wavelength ground motion signal from the orbital error even when the two types of signals exhibit similar spatial patterns. The proposed algorithm is efficient and requires no ground control points. In addition, the method is built upon wrapped phases of interferograms, eliminating the need for phase unwrapping. The performance of the proposed model is validated using both simulated and real data sets. The demo codes of the proposed model are also provided for reference. © 2013 IEEE.

  4. Panel data models extended to spatial error autocorrelation or a spatially lagged dependent variable

    NARCIS (Netherlands)

    Elhorst, J. Paul

    2001-01-01

    This paper surveys panel data models extended to spatial error autocorrelation or a spatially lagged dependent variable. In particular, it focuses on the specification and estimation of four panel data models commonly used in applied research: the fixed effects model, the random effects model, the

  5. Statistical modeling and analysis of the influence of antenna polarization error on received power

    Institute of Scientific and Technical Information of China (English)

    2002-01-01

    The problem of statistical modeling of antenna polarization error is studied and the statistical characteristics of the antenna's received power are analyzed. A novel Stokes-vector-based method is presented to describe the concept of the antenna's polarization purity. A statistical model of the antenna's polarization error in the polarization domain is then built. When an antenna with polarization error of uniform distribution is illuminated by an arbitrarily polarized incident field, the probability density of the antenna's received power is derived analytically. Finally, a group of curves of the deviation and standard deviation of the received power are plotted numerically.

  6. On the Asymptotic Capacity of Dual-Aperture FSO Systems with a Generalized Pointing Error Model

    KAUST Repository

    Al-Quwaiee, Hessa

    2016-06-28

    Free-space optical (FSO) communication systems are negatively affected by two physical phenomena, namely, scintillation due to atmospheric turbulence and pointing errors. To quantify the effect of these two factors on FSO system performance, we need an effective mathematical model for them. In this paper, we propose and study a generalized pointing error model based on the Beckmann distribution. We then derive a generic expression for the asymptotic capacity of FSO systems under the joint impact of turbulence and generalized pointing error impairments. Finally, the asymptotic channel capacity formulas are extended to quantify FSO system performance with selection and switched-and-stay diversity.

  7. Modelling soft error probability in firmware: A case study

    African Journals Online (AJOL)

    A rough and notional schematic of the components involved is supplied in ..... To date, this claim of potential electromagnetic interference is entirely a ... single spike case will illuminate the probabilistic model needed for the bursty case. For.

  8. Compliance Modeling and Error Compensation of a 3-Parallelogram Lightweight Robotic Arm

    DEFF Research Database (Denmark)

    2015-01-01

    This paper presents compliance modeling and error compensation for lightweight robotic arms built with parallelogram linkages, i.e., Π joints. The Cartesian stiffness matrix is derived using the virtual joint method. Based on the developed stiffness model, a method to compensate for the compliance error is introduced, illustrated with a 3-parallelogram robot in a pick-and-place application. The results show that this compensation method can effectively improve the operation accuracy.

  9. Structured building model reduction toward parallel simulation

    Energy Technology Data Exchange (ETDEWEB)

    Dobbs, Justin R. [Cornell University; Hencey, Brondon M. [Cornell University

    2013-08-26

    Building energy model reduction exchanges accuracy for improved simulation speed by reducing the number of dynamical equations. Parallel computing aims to improve simulation times without loss of accuracy but is poorly utilized by contemporary simulators and is inherently limited by inter-processor communication. This paper bridges these disparate techniques to implement efficient parallel building thermal simulation. We begin with a survey of three structured reduction approaches that compares their performance to a leading unstructured method. We then use structured model reduction to find thermal clusters in the building energy model and allocate processing resources. Experimental results demonstrate faster simulation and low error without any interprocessor communication.

  10. Error statistics of hidden Markov model and hidden Boltzmann model results

    Directory of Open Access Journals (Sweden)

    Newberg Lee A

    2009-07-01

    Full Text Available Abstract Background Hidden Markov models and hidden Boltzmann models are employed in computational biology and a variety of other scientific fields for a variety of analyses of sequential data. Whether the associated algorithms are used to compute an actual probability or, more generally, an odds ratio or some other score, a frequent requirement is that the error statistics of a given score be known. What is the chance that random data would achieve that score or better? What is the chance that a real signal would achieve a given score threshold? Results Here we present a novel general approach to estimating these false positive and true positive rates that is significantly more efficient than are existing general approaches. We validate the technique via an implementation within the HMMER 3.0 package, which scans DNA or protein sequence databases for patterns of interest, using a profile-HMM. Conclusion The new approach is faster than general naïve sampling approaches, and more general than other current approaches. It provides an efficient mechanism by which to estimate error statistics for hidden Markov model and hidden Boltzmann model results.

  11. Error statistics of hidden Markov model and hidden Boltzmann model results

    Science.gov (United States)

    Newberg, Lee A

    2009-01-01

    Background Hidden Markov models and hidden Boltzmann models are employed in computational biology and a variety of other scientific fields for a variety of analyses of sequential data. Whether the associated algorithms are used to compute an actual probability or, more generally, an odds ratio or some other score, a frequent requirement is that the error statistics of a given score be known. What is the chance that random data would achieve that score or better? What is the chance that a real signal would achieve a given score threshold? Results Here we present a novel general approach to estimating these false positive and true positive rates that is significantly more efficient than are existing general approaches. We validate the technique via an implementation within the HMMER 3.0 package, which scans DNA or protein sequence databases for patterns of interest, using a profile-HMM. Conclusion The new approach is faster than general naïve sampling approaches, and more general than other current approaches. It provides an efficient mechanism by which to estimate error statistics for hidden Markov model and hidden Boltzmann model results. PMID:19589158

  12. An Enhanced MEMS Error Modeling Approach Based on Nu-Support Vector Regression

    Directory of Open Access Journals (Sweden)

    Deepak Bhatt

    2012-07-01

    Full Text Available Micro Electro Mechanical System (MEMS)-based inertial sensors have made possible the development of civilian land vehicle navigation systems by offering a low-cost solution. However, accurate modeling of the MEMS sensor errors is one of the most challenging tasks in the design of low-cost navigation systems. These sensors exhibit significant errors such as biases, drift, and noise, which are negligible for higher grade units. Different conventional techniques utilizing the Gauss Markov model and neural network methods have previously been used to model the errors. However, the Gauss Markov model works unsatisfactorily in the case of MEMS units due to the presence of high inherent sensor errors. On the other hand, modeling the random drift utilizing a Neural Network (NN) is time consuming, thereby affecting its real-time implementation. We overcome these existing drawbacks by developing an enhanced Support Vector Machine (SVM)-based error model. Unlike NN, SVMs do not suffer from local minimisation or over-fitting problems and deliver a reliable global solution. Experimental results showed that the proposed SVM approach reduced the noise standard deviation by 10–35% for gyroscopes and 61–76% for accelerometers. Further, positional error drifts under static conditions improved by 41% and 80% in comparison to the NN and GM approaches.
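
    To illustrate the general idea of support-vector regression of sensor error, the sketch below fits scikit-learn's NuSVR to synthetic temperature-dependent gyroscope bias and subtracts the prediction; the data, features, and hyper-parameters are illustrative assumptions rather than the configuration used in the paper.

      import numpy as np
      from sklearn.model_selection import train_test_split
      from sklearn.pipeline import make_pipeline
      from sklearn.preprocessing import StandardScaler
      from sklearn.svm import NuSVR

      # Synthetic static gyroscope record: temperature-dependent bias plus noise (deg/s).
      rng = np.random.default_rng(0)
      temperature = np.linspace(20, 60, 2000) + rng.normal(0, 0.2, 2000)   # warm-up profile (C)
      true_bias = 0.002 * (temperature - 20) + 5e-5 * (temperature - 40) ** 2
      gyro = true_bias + rng.normal(0, 0.02, temperature.size)

      X = temperature.reshape(-1, 1)
      X_tr, X_te, y_tr, y_te = train_test_split(X, gyro, test_size=0.3, random_state=1)

      # Nu-SVR with feature scaling; predicts the bias as a function of temperature.
      model = make_pipeline(StandardScaler(), NuSVR(nu=0.5, C=1.0, gamma="scale"))
      model.fit(X_tr, y_tr)

      # Compensate the held-out samples by subtracting the predicted bias.
      residual = y_te - model.predict(X_te)
      print("uncompensated std:", y_te.std())
      print("compensated std:  ", residual.std())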

  13. Automated contouring error detection based on supervised geometric attribute distribution models for radiation therapy: A general strategy

    Energy Technology Data Exchange (ETDEWEB)

    Chen, Hsin-Chen; Tan, Jun; Dolly, Steven; Kavanaugh, James; Harold Li, H.; Altman, Michael; Gay, Hiram; Thorstad, Wade L.; Mutic, Sasa; Li, Hua, E-mail: huli@radonc.wustl.edu [Department of Radiation Oncology, Washington University, St. Louis, Missouri 63110 (United States); Anastasio, Mark A. [Department of Biomedical Engineering, Washington University, St. Louis, Missouri 63110 (United States); Low, Daniel A. [Department of Radiation Oncology, University of California Los Angeles, Los Angeles, California 90095 (United States)

    2015-02-15

    Purpose: One of the most critical steps in radiation therapy treatment is accurate tumor and critical organ-at-risk (OAR) contouring. Both manual and automated contouring processes are prone to errors and to a large degree of inter- and intraobserver variability. These are often due to the limitations of imaging techniques in visualizing human anatomy as well as to inherent anatomical variability among individuals. Physicians/physicists have to reverify all the radiation therapy contours of every patient before using them for treatment planning, which is tedious, laborious, and still not an error-free process. In this study, the authors developed a general strategy based on novel geometric attribute distribution (GAD) models to automatically detect radiation therapy OAR contouring errors and facilitate the current clinical workflow. Methods: Considering the radiation therapy structures’ geometric attributes (centroid, volume, and shape), the spatial relationship of neighboring structures, as well as anatomical similarity of individual contours among patients, the authors established GAD models to characterize the interstructural centroid and volume variations, and the intrastructural shape variations of each individual structure. The GAD models are scalable and deformable, and constrained by their respective principal attribute variations calculated from training sets with verified OAR contours. A new iterative weighted GAD model-fitting algorithm was developed for contouring error detection. Receiver operating characteristic (ROC) analysis was employed in a unique way to optimize the model parameters to satisfy clinical requirements. A total of forty-four head-and-neck patient cases, each of which includes nine critical OAR contours, were utilized to demonstrate the proposed strategy. Twenty-nine out of these forty-four patient cases were utilized to train the inter- and intrastructural GAD models. These training data and the remaining fifteen testing data sets

  14. Mars Entry Atmospheric Data System Modeling, Calibration, and Error Analysis

    Science.gov (United States)

    Karlgaard, Christopher D.; VanNorman, John; Siemers, Paul M.; Schoenenberger, Mark; Munk, Michelle M.

    2014-01-01

    The Mars Science Laboratory (MSL) Entry, Descent, and Landing Instrumentation (MEDLI)/Mars Entry Atmospheric Data System (MEADS) project installed seven pressure ports through the MSL Phenolic Impregnated Carbon Ablator (PICA) heatshield to measure heatshield surface pressures during entry. These measured surface pressures are used to generate estimates of atmospheric quantities based on modeled surface pressure distributions. In particular, the quantities to be estimated from the MEADS pressure measurements include the dynamic pressure, angle of attack, and angle of sideslip. This report describes the calibration of the pressure transducers utilized to reconstruct the atmospheric data and associated uncertainty models, pressure modeling and uncertainty analysis, and system performance results. The results indicate that the MEADS pressure measurement system hardware meets the project requirements.

  15. Optimal control design that accounts for model mismatch errors

    Energy Technology Data Exchange (ETDEWEB)

    Kim, T.J. [Sandia National Labs., Albuquerque, NM (United States); Hull, D.G. [Texas Univ., Austin, TX (United States). Dept. of Aerospace Engineering and Engineering Mechanics

    1995-02-01

    A new technique is presented in this paper that reduces the complexity of state differential equations while accounting for modeling assumptions. The mismatch controls are defined as the differences between the model equations and the true state equations. The performance index of the optimal control problem is formulated with a set of tuning parameters that are user-selected to tune the control solution in order to achieve the best results. Computer simulations demonstrate that the tuned control law outperforms the untuned controller and produces results that are comparable to a numerically-determined, piecewise-linear optimal controller.

  16. On the modelling of excitations in geared systems by transmission errors

    Science.gov (United States)

    Velex, P.; Ajmi, M.

    2006-03-01

    This paper introduces an original theoretical approach to the modelling of pinion-gear excitations valid for three-dimensional models of single-stage geared transmissions. Shape deviations and errors on gears are considered and the associated equations of motion account for time-varying mesh stiffness, and also torsional, flexural and axial couplings. Starting from the instantaneous contact conditions between the teeth, the equations of motion are re-formulated in terms of quasi-static transmission errors under load and no-load transmission errors. The range of application of transmission error-based formulations is analysed and some new equations are proposed which make it possible to introduce rigorously meshing excitations via transmission errors. Using an extended finite element model of a spur and helical gear test rig, the dynamic results from the formulations based on transmission errors are compared with the reference solutions. Both sets of results are found to be in close agreement, thus validating the proposed theory. The paper concludes with a critical analysis of the interests and limitations concerning the concept of transmission errors as excitation terms in gear dynamics.

  17. A Nonlinear Multiparameters Temperature Error Modeling and Compensation of POS Applied in Airborne Remote Sensing System

    Directory of Open Access Journals (Sweden)

    Jianli Li

    2014-01-01

    Full Text Available The position and orientation system (POS) is key equipment for airborne remote sensing systems, providing high-precision position, velocity, and attitude information for various imaging payloads. Temperature error is the main source affecting the precision of a POS. The traditional temperature error model is a linear function of a single temperature parameter, which is not sufficient for the higher accuracy requirements of POS. The traditional compensation method based on neural networks also suffers from poor repeatability of the error under different temperature conditions. In order to improve the precision and generalization ability of temperature error compensation for POS, a nonlinear multiparameter temperature error modeling and compensation method based on a Bayesian regularization neural network was proposed. The temperature error of POS was analyzed and a nonlinear multiparameter model was established. The Bayesian regularization method was used as the evaluation criterion, which further optimized the coefficients of the temperature error model. The experimental results show that the proposed method can improve adaptability to the temperature environment and precision. The developed POS was successfully applied in an airborne TSMFTIS remote sensing system for the first time, improving the accuracy of the reconstructed spectrum by 47.99%.

  18. Modeling the Error of the Medtronic Paradigm Veo Enlite Glucose Sensor.

    Science.gov (United States)

    Biagi, Lyvia; Ramkissoon, Charrise M; Facchinetti, Andrea; Leal, Yenny; Vehi, Josep

    2017-06-12

    Continuous glucose monitors (CGMs) are prone to inaccuracy due to time lags, sensor drift, calibration errors, and measurement noise. The aim of this study is to derive the model of the error of the second generation Medtronic Paradigm Veo Enlite (ENL) sensor and compare it with the Dexcom SEVEN PLUS (7P), G4 PLATINUM (G4P), and advanced G4 for Artificial Pancreas studies (G4AP) systems. An enhanced methodology to a previously employed technique was utilized to dissect the sensor error into several components. The dataset used included 37 inpatient sessions in 10 subjects with type 1 diabetes (T1D), in which CGMs were worn in parallel and blood glucose (BG) samples were analyzed every 15 ± 5 min. Calibration error and sensor drift of the ENL sensor were best described by a linear relationship related to the gain and offset. The mean time lag estimated by the model is 9.4 ± 6.5 min. The overall average mean absolute relative difference (MARD) of the ENL sensor was 11.68 ± 5.07%. Calibration error had the highest contribution to total error in the ENL sensor. This was also reported in the 7P, G4P, and G4AP. The model of the ENL sensor error will be useful to test the in silico performance of CGM-based applications, i.e., the artificial pancreas, employing this kind of sensor.
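
    For reference, the mean absolute relative difference (MARD) quoted above is the average of |CGM - BG| / BG over paired readings, expressed as a percentage. A minimal sketch with made-up paired samples (not data from the study):

      import numpy as np

      def mard(cgm, bg):
          """Mean absolute relative difference (%) between paired CGM and reference BG readings."""
          cgm, bg = np.asarray(cgm, dtype=float), np.asarray(bg, dtype=float)
          return 100.0 * np.mean(np.abs(cgm - bg) / bg)

      # Hypothetical paired samples (mg/dL).
      bg  = np.array([90, 120, 150, 180, 210, 250])
      cgm = np.array([99, 111, 162, 170, 230, 238])
      print(f"MARD = {mard(cgm, bg):.1f}%")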

  19. Modeling the Error of the Medtronic Paradigm Veo Enlite Glucose Sensor

    Directory of Open Access Journals (Sweden)

    Lyvia Biagi

    2017-06-01

    Full Text Available Continuous glucose monitors (CGMs) are prone to inaccuracy due to time lags, sensor drift, calibration errors, and measurement noise. The aim of this study is to derive the model of the error of the second generation Medtronic Paradigm Veo Enlite (ENL) sensor and compare it with the Dexcom SEVEN PLUS (7P), G4 PLATINUM (G4P), and advanced G4 for Artificial Pancreas studies (G4AP) systems. An enhanced methodology to a previously employed technique was utilized to dissect the sensor error into several components. The dataset used included 37 inpatient sessions in 10 subjects with type 1 diabetes (T1D), in which CGMs were worn in parallel and blood glucose (BG) samples were analyzed every 15 ± 5 min. Calibration error and sensor drift of the ENL sensor were best described by a linear relationship related to the gain and offset. The mean time lag estimated by the model is 9.4 ± 6.5 min. The overall average mean absolute relative difference (MARD) of the ENL sensor was 11.68 ± 5.07%. Calibration error had the highest contribution to total error in the ENL sensor. This was also reported in the 7P, G4P, and G4AP. The model of the ENL sensor error will be useful to test the in silico performance of CGM-based applications, i.e., the artificial pancreas, employing this kind of sensor.

  20. Modeling And Analysis Of The Surface Roughness And Geometrical Error Using Taguchi And Response Surface Methodology

    Directory of Open Access Journals (Sweden)

    DR.S.C.JAYSWAL

    2011-07-01

    Full Text Available This experimental work presents a technique to achieve better surface quality by controlling the surface roughness and geometrical error. In machining operations, achieving the desired surface quality of the machined product is a challenging job, because these quality features are highly correlated and are expected to be influenced directly or indirectly by the process parameters or their interactive effects. Four input process parameters (spindle speed, depth of cut, feed rate, and stepover) were therefore selected to minimize the surface roughness and geometrical error simultaneously, using the robust design concept of the Taguchi L9(3^4) method coupled with the response surface concept. Mathematical models for surface roughness and geometrical error were obtained from response surface analysis to predict values of surface roughness and geometrical error. S/N ratio and ANOVA analyses were also performed to identify the significant parameters influencing surface roughness and geometrical error.
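
    For concreteness, the sketch below computes the smaller-the-better signal-to-noise ratio used in Taguchi analysis, S/N = -10 * log10(mean(y^2)), for made-up surface roughness replicates from a few runs of an L9 array; the values are illustrative only.

      import numpy as np

      def sn_smaller_the_better(values):
          """Taguchi smaller-the-better S/N ratio: -10*log10(mean(y^2))."""
          y = np.asarray(values, dtype=float)
          return -10.0 * np.log10(np.mean(y ** 2))

      # Hypothetical surface roughness (Ra, micrometres) replicates for three of the nine runs.
      runs = {
          "run 1": [1.82, 1.90, 1.77],
          "run 2": [1.25, 1.31, 1.28],
          "run 3": [2.40, 2.55, 2.47],
      }
      for name, ra in runs.items():
          print(name, "S/N =", round(sn_smaller_the_better(ra), 2), "dB")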

  1. A hierarchical Bayes error correction model to explain dynamic effects

    NARCIS (Netherlands)

    D. Fok (Dennis); C. Horváth (Csilla); R. Paap (Richard); Ph.H.B.F. Franses (Philip Hans)

    2004-01-01

    textabstractFor promotional planning and market segmentation it is important to understand the short-run and long-run effects of the marketing mix on category and brand sales. In this paper we put forward a sales response model to explain the differences in short-run and long-run effects of promotions.

  2. Improved modeling of multivariate measurement errors based on the Wishart distribution.

    Science.gov (United States)

    Wentzell, Peter D; Cleary, Cody S; Kompany-Zareh, M

    2017-03-22

    The error covariance matrix (ECM) is an important tool for characterizing the errors from multivariate measurements, representing both the variance and covariance in the errors across multiple channels. Such information is useful in understanding and minimizing sources of experimental error and in the selection of optimal data analysis procedures. Experimental ECMs, normally obtained through replication, are inherently noisy, inconvenient to obtain, and offer limited interpretability. Significant advantages can be realized by building a model for the ECM based on established error types. Such models are less noisy, reduce the need for replication, mitigate mathematical complications such as matrix singularity, and provide greater insights. While the fitting of ECM models using least squares has been previously proposed, the present work establishes that fitting based on the Wishart distribution offers a much better approach. Simulation studies show that the Wishart method results in parameter estimates with a smaller variance and also facilitates the statistical testing of alternative models using a parameterized bootstrap method. The new approach is applied to fluorescence emission data to establish the acceptability of various models containing error terms related to offset, multiplicative offset, shot noise and uniform independent noise. The implications of the number of replicates, as well as single vs. multiple replicate sets are also described.
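
    As a rough illustration of likelihood-based ECM model fitting, the sketch below fits a simple two-parameter error model (uniform independent noise plus signal-proportional shot noise) to a replicate-based sample covariance by maximizing the Wishart log-likelihood with SciPy; the parameterization, data, and optimizer settings are illustrative assumptions, not those of the paper.

      import numpy as np
      from scipy.optimize import minimize
      from scipy.stats import wishart

      rng = np.random.default_rng(0)
      p, m = 8, 50                                    # channels, replicate measurements
      signal = np.linspace(1.0, 5.0, p)               # mean signal per channel
      true_ecm = 0.05**2 * np.eye(p) + np.diag(0.02**2 * signal)

      # Simulated replicate errors and their (noisy) experimental ECM.
      errors = rng.multivariate_normal(np.zeros(p), true_ecm, size=m)
      S = np.cov(errors, rowvar=False)

      def neg_loglik(theta):
          sigma_iid, sigma_shot = np.exp(theta)       # log-parameterization keeps variances positive
          ecm = sigma_iid**2 * np.eye(p) + np.diag(sigma_shot**2 * signal)
          # (m - 1) * S follows a Wishart distribution with m - 1 degrees of freedom and scale ECM.
          return -wishart.logpdf((m - 1) * S, df=m - 1, scale=ecm)

      fit = minimize(neg_loglik, x0=np.log([0.1, 0.1]), method="Nelder-Mead")
      print("estimated (iid, shot) standard deviations:", np.exp(fit.x))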

  3. Drivers of coupled model ENSO error dynamics and the spring predictability barrier

    Science.gov (United States)

    Larson, Sarah M.; Kirtman, Ben P.

    2017-06-01

    Despite recent improvements in ENSO simulations, ENSO predictions ultimately remain limited by error growth and model inadequacies. Determining the accompanying dynamical processes that drive the growth of certain types of errors may help the community better recognize which error sources provide an intrinsic limit to predictability. This study applies a dynamical analysis to previously developed CCSM4 error ensemble experiments that have been used to model noise-driven error growth. Analysis reveals that ENSO-independent error growth is instigated via a coupled instability mechanism. Daily error fields indicate that persistent stochastic zonal wind stress perturbations (τx′) near the equatorial dateline activate the coupled instability, first driving local SST and anomalous zonal current changes that then induce upwelling anomalies and a clear thermocline response. In particular, March presents a window of opportunity for stochastic τx′ to impose a lasting influence on the evolution of eastern Pacific SST through December, suggesting that stochastic τx′ is an important contributor to the spring predictability barrier. Stochastic winds occurring in other months only temporarily affect eastern Pacific SST for 2-3 months. Comparison of a control simulation with an ENSO cycle and the ENSO-independent error ensemble experiments reveals that once the instability is initiated, the subsequent error growth is modulated via an ENSO-like mechanism, namely the seasonal strength of the Bjerknes feedback. Furthermore, unlike ENSO events that exhibit growth through the fall, the growth of ENSO-independent SST errors terminates once the seasonal strength of the Bjerknes feedback weakens in fall. Results imply that the heat content supplied by the subsurface precursor preceding the onset of an ENSO event is paramount to maintaining the growth of the instability (or event) through fall.

  4. Drivers of coupled model ENSO error dynamics and the spring predictability barrier

    Science.gov (United States)

    Larson, Sarah M.; Kirtman, Ben P.

    2016-07-01

    Despite recent improvements in ENSO simulations, ENSO predictions ultimately remain limited by error growth and model inadequacies. Determining the accompanying dynamical processes that drive the growth of certain types of errors may help the community better recognize which error sources provide an intrinsic limit to predictability. This study applies a dynamical analysis to previously developed CCSM4 error ensemble experiments that have been used to model noise-driven error growth. Analysis reveals that ENSO-independent error growth is instigated via a coupled instability mechanism. Daily error fields indicate that persistent stochastic zonal wind stress perturbations (τx′) near the equatorial dateline activate the coupled instability, first driving local SST and anomalous zonal current changes that then induce upwelling anomalies and a clear thermocline response. In particular, March presents a window of opportunity for stochastic τx′ to impose a lasting influence on the evolution of eastern Pacific SST through December, suggesting that stochastic τx′ is an important contributor to the spring predictability barrier. Stochastic winds occurring in other months only temporarily affect eastern Pacific SST for 2-3 months. Comparison of a control simulation with an ENSO cycle and the ENSO-independent error ensemble experiments reveals that once the instability is initiated, the subsequent error growth is modulated via an ENSO-like mechanism, namely the seasonal strength of the Bjerknes feedback. Furthermore, unlike ENSO events that exhibit growth through the fall, the growth of ENSO-independent SST errors terminates once the seasonal strength of the Bjerknes feedback weakens in fall. Results imply that the heat content supplied by the subsurface precursor preceding the onset of an ENSO event is paramount to maintaining the growth of the instability (or event) through fall.

  5. Spatial measurement error and correction by spatial SIMEX in linear regression models when using predicted air pollution exposures.

    Science.gov (United States)

    Alexeeff, Stacey E; Carroll, Raymond J; Coull, Brent

    2016-04-01

    Spatial modeling of air pollution exposures is widespread in air pollution epidemiology research as a way to improve exposure assessment. However, there are key sources of exposure model uncertainty when air pollution is modeled, including estimation error and model misspecification. We examine the use of predicted air pollution levels in linear health effect models under a measurement error framework. For the prediction of air pollution exposures, we consider a universal Kriging framework, which may include land-use regression terms in the mean function and a spatial covariance structure for the residuals. We derive the bias induced by estimation error and by model misspecification in the exposure model, and we find that a misspecified exposure model can induce asymptotic bias in the effect estimate of air pollution on health. We propose a new spatial simulation extrapolation (SIMEX) procedure, and we demonstrate that the procedure has good performance in correcting this asymptotic bias. We illustrate spatial SIMEX in a study of air pollution and birthweight in Massachusetts.
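
    A minimal sketch of the classical (non-spatial) SIMEX idea that the spatial procedure builds on: refit the health model after adding extra measurement error in increasing amounts lambda, then extrapolate the coefficient back to lambda = -1. The data, the assumed known error variance, and the quadratic extrapolant are illustrative assumptions.

      import numpy as np

      rng = np.random.default_rng(0)
      n = 2000
      true_exposure = rng.normal(10, 2, n)                  # true pollutant level
      outcome = 3.0 - 0.5 * true_exposure + rng.normal(0, 1, n)
      sigma_u = 1.5                                         # assumed known measurement-error SD
      observed = true_exposure + rng.normal(0, sigma_u, n)  # error-prone exposure

      def slope(x, y):
          X = np.column_stack([np.ones_like(x), x])
          return np.linalg.lstsq(X, y, rcond=None)[0][1]

      # Simulation step: add extra error with variance lambda * sigma_u^2, average over B replicates.
      lambdas = np.array([0.0, 0.5, 1.0, 1.5, 2.0])
      B = 200
      sim_slopes = [np.mean([slope(observed + rng.normal(0, np.sqrt(lam) * sigma_u, n), outcome)
                             for _ in range(B)]) for lam in lambdas]

      # Extrapolation step: fit a quadratic in lambda and evaluate at lambda = -1.
      coef = np.polyfit(lambdas, sim_slopes, deg=2)
      print("naive slope:", slope(observed, outcome))
      print("SIMEX slope:", np.polyval(coef, -1.0))
      print("true slope:  -0.5")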

  6. Error and Uncertainty Analysis for Ecological Modeling and Simulation

    Science.gov (United States)

    2001-12-01

    These methods are based on spatial variability theory (Journel and Huijbregts, 1978; Goovaerts, 1997; Chiles and Delfiner, 1999). For the mathematics of sequential Gaussian simulation, the reader is referred to Chiles and Delfiner (1999) and Goovaerts (1997).

  7. Experiments in Error Propagation within Hierarchal Combat Models

    Science.gov (United States)

    2015-09-01

    and variances of Blue MTTK, Red MTTK, and P[Blue Wins] by Experimental Design are statistically different (Wackerly, Mendenhall III and Schaeffer 2008). Although the data are not normally distributed, the t-test is robust to non-normality (Wackerly, Mendenhall III and Schaeffer 2008). There is...this is handled by transforming the predicted values with a natural logarithm (Wackerly, Mendenhall III and Schaeffer 2008). The model considers

  8. A Unified Process Model of Syntactic and Semantic Error Recovery in Sentence Understanding

    CERN Document Server

    Holbrook, J K; Mahesh, K; Holbrook, Jennifer K.; Eiselt, Kurt P.; Mahesh, Kavi

    1994-01-01

    The development of models of human sentence processing has traditionally followed one of two paths. Either the model posited a sequence of processing modules, each with its own task-specific knowledge (e.g., syntax and semantics), or it posited a single processor utilizing different types of knowledge inextricably integrated into a monolithic knowledge base. Our previous work in modeling the sentence processor resulted in a model in which different processing modules used separate knowledge sources but operated in parallel to arrive at the interpretation of a sentence. One highlight of this model is that it offered an explanation of how the sentence processor might recover from an error in choosing the meaning of an ambiguous word. Recent experimental work by Laurie Stowe strongly suggests that the human sentence processor deals with syntactic error recovery using a mechanism very much like that proposed by our model of semantic error recovery. Another way to interpret Stowe's finding is this: the human sente...

  9. Error detection in GPS observations by means of Multi-process models

    DEFF Research Database (Denmark)

    Thomsen, Henrik F.

    2001-01-01

    The main purpose of this article is to present the idea of using Multi-process models as a method of detecting errors in GPS observations. The theory behind Multi-process models and double-differenced phase observations in GPS is presented briefly. It is shown how to model cycle slips in the Multi-process models.

  10. Bayesian modeling of measurement error in predictor variables using item response theory

    NARCIS (Netherlands)

    Fox, Jean-Paul; Glas, Cees A.W.

    2003-01-01

    It is shown that measurement error in predictor variables can be modeled using item response theory (IRT). The predictor variables, that may be defined at any level of an hierarchical regression model, are treated as latent variables. The normal ogive model is used to describe the relation between t

  11. Neighboring extremal optimal control design including model mismatch errors

    Energy Technology Data Exchange (ETDEWEB)

    Kim, T.J. [Sandia National Labs., Albuquerque, NM (United States); Hull, D.G. [Texas Univ., Austin, TX (United States). Dept. of Aerospace Engineering and Engineering Mechanics

    1994-11-01

    The mismatch control technique that is used to simplify model equations of motion in order to determine analytic optimal control laws is extended using neighboring extremal theory. The first variation optimal control equations are linearized about the extremal path to account for perturbations in the initial state and the final constraint manifold. A numerical example demonstrates that the tuning procedure inherent in the mismatch control method increases the performance of the controls to the level of a numerically-determined piecewise-linear controller.

  12. Effect of assay measurement error on parameter estimation in concentration-QTc interval modeling.

    Science.gov (United States)

    Bonate, Peter L

    2013-01-01

    Linear mixed-effects models (LMEMs) of concentration-double-delta QTc intervals (QTc intervals corrected for placebo and baseline effects) assume that the concentration measurement error is negligible, which is an incorrect assumption. Previous studies have shown in linear models that independent variable error can attenuate the slope estimate with a corresponding increase in the intercept. Monte Carlo simulation was used to examine the impact of assay measurement error (AME) on the parameter estimates of an LMEM and nonlinear MEM (NMEM) concentration-ddQTc interval model from a 'typical' thorough QT study. For the LMEM, the type I error rate was unaffected by assay measurement error. Significant slope attenuation ( > 10%) occurred when the AME exceeded 40%, independent of the sample size. Increasing AME also decreased the between-subject variance of the slope, increased the residual variance, and had no effect on the between-subject variance of the intercept. For a typical analytical assay having an assay measurement error of less than 15%, the relative bias in the estimates of the model parameters and variance components was less than 15% in all cases. The NMEM appeared to be more robust to AME error as most parameters were unaffected by measurement error. Monte Carlo simulation was then used to determine whether the simulation-extrapolation method of parameter bias correction could be applied to cases of large AME in LMEMs. For analytical assays with large AME ( > 30%), the simulation-extrapolation method could correct biased model parameter estimates to near-unbiased levels.

  13. A Structured Methodology for Spreadsheet Modelling

    CERN Document Server

    Knight, Brian; Rajalingham, Kamalesen

    2008-01-01

    In this paper, we discuss the problem of the software engineering of a class of business spreadsheet models. A methodology for structured software development is proposed, which is based on structured analysis of data, represented as Jackson diagrams. It is shown that this analysis allows a straightforward modularisation, and that individual modules may be represented with indentation in the block-structured form of structured programs. The benefits of structured format are discussed, in terms of comprehensibility, ease of maintenance, and reduction in errors. The capability of the methodology to provide a modular overview in the model is described, and examples are given. The potential for a reverse-engineering tool, to transform existing spreadsheet models is discussed.

  14. Error reduction and representation in stages (ERRIS) in hydrological modelling for ensemble streamflow forecasting

    Science.gov (United States)

    Li, Ming; Wang, Q. J.; Bennett, James C.; Robertson, David E.

    2016-09-01

    This study develops a new error modelling method for ensemble short-term and real-time streamflow forecasting, called error reduction and representation in stages (ERRIS). The novelty of ERRIS is that it does not rely on a single complex error model but runs a sequence of simple error models through four stages. At each stage, an error model attempts to incrementally improve over the previous stage. Stage 1 establishes parameters of a hydrological model and parameters of a transformation function for data normalization, Stage 2 applies a bias correction, Stage 3 applies autoregressive (AR) updating, and Stage 4 applies a Gaussian mixture distribution to represent model residuals. In a case study, we apply ERRIS for one-step-ahead forecasting at a range of catchments. The forecasts at the end of Stage 4 are shown to be much more accurate than at Stage 1 and to be highly reliable in representing forecast uncertainty. Specifically, the forecasts become more accurate by applying the AR updating at Stage 3, and more reliable in uncertainty spread by using a mixture of two Gaussian distributions to represent the residuals at Stage 4. ERRIS can be applied to any existing calibrated hydrological models, including those calibrated to deterministic (e.g. least-squares) objectives.
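
    To make the staged idea concrete, the sketch below applies a bias correction followed by simple AR(1) error updating to synthetic one-step-ahead forecasts, in the spirit of Stages 2 and 3; the data, the omitted normalization, and the coefficient estimation are simplified assumptions rather than the ERRIS implementation.

      import numpy as np

      rng = np.random.default_rng(0)
      n = 500
      observed = 50 + 10 * np.sin(np.linspace(0, 12 * np.pi, n)) + rng.normal(0, 2, n)

      # Hypothetical raw model forecasts: a constant bias plus strongly autocorrelated error.
      model_error = np.zeros(n)
      for t in range(1, n):
          model_error[t] = 0.8 * model_error[t - 1] + rng.normal(0, 2)
      raw_forecast = observed + 3.0 + model_error

      # Stage-2-style bias correction estimated on a calibration period.
      calib, test = slice(0, 300), slice(300, n)
      forecast = raw_forecast + np.mean(observed[calib] - raw_forecast[calib])

      # Stage-3-style AR(1) updating: estimate rho from calibration errors, then correct
      # each forecast with the previous time step's (now known) error.
      errors = observed[calib] - forecast[calib]
      rho = np.corrcoef(errors[:-1], errors[1:])[0, 1]
      updated = forecast.copy()
      for t in range(1, n):
          updated[t] = forecast[t] + rho * (observed[t - 1] - forecast[t - 1])

      rmse = lambda f: float(np.sqrt(np.mean((observed[test] - f[test]) ** 2)))
      print("RMSE bias-corrected:", rmse(forecast))
      print("RMSE AR(1)-updated: ", rmse(updated))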

  15. A Stable Clock Error Model Using Coupled First and Second Order Gauss-Markov Processes

    Science.gov (United States)

    Carpenter, Russell; Lee, Taesul

    2008-01-01

    Long data outages may occur in applications of global navigation satellite system technology to orbit determination for missions that spend significant fractions of their orbits above the navigation satellite constellation(s). Current clock error models based on the random walk idealization may not be suitable in these circumstances, since the covariance of the clock errors may become large enough to overflow flight computer arithmetic. A model that is stable, but which approximates the existing models over short time horizons is desirable. A coupled first- and second-order Gauss-Markov process is such a model.
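
    As background, the sketch below simulates a discrete first-order Gauss-Markov process, whose variance stays bounded at its steady-state value rather than growing without limit as a random walk's does; the time constant and noise level are arbitrary illustrative values, and the model described above couples a first- and a second-order process so that it approximates the random-walk models over short horizons while remaining stable.

      import numpy as np

      def first_order_gauss_markov(n_steps, dt, tau, sigma, rng):
          # Discrete simulation of x' = -x/tau + w with steady-state standard deviation sigma.
          phi = np.exp(-dt / tau)                      # state transition over one step
          q = sigma**2 * (1.0 - phi**2)                # discrete process-noise variance
          x = np.zeros(n_steps)
          for k in range(1, n_steps):
              x[k] = phi * x[k - 1] + rng.normal(0.0, np.sqrt(q))
          return x

      rng = np.random.default_rng(0)
      x = first_order_gauss_markov(n_steps=20000, dt=1.0, tau=500.0, sigma=1e-8, rng=rng)

      # Unlike a random walk, the sample standard deviation stays near the chosen sigma.
      print("sample std:", x.std(), "(steady-state sigma: 1e-8)")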

  16. An empirical assessment of exposure measurement error and effect attenuation in bipollutant epidemiologic models.

    Science.gov (United States)

    Dionisio, Kathie L; Baxter, Lisa K; Chang, Howard H

    2014-11-01

    Using multipollutant models to understand combined health effects of exposure to multiple pollutants is becoming more common. However, complex relationships between pollutants and differing degrees of exposure error across pollutants can make health effect estimates from multipollutant models difficult to interpret. We aimed to quantify relationships between multiple pollutants and their associated exposure errors across metrics of exposure and to use empirical values to evaluate potential attenuation of coefficients in epidemiologic models. We used three daily exposure metrics (central-site measurements, air quality model estimates, and population exposure model estimates) for 193 ZIP codes in the Atlanta, Georgia, metropolitan area from 1999 through 2002 for PM2.5 and its components (EC and SO4), as well as O3, CO, and NOx, to construct three types of exposure error: δspatial (comparing air quality model estimates to central-site measurements), δpopulation (comparing population exposure model estimates to air quality model estimates), and δtotal (comparing population exposure model estimates to central-site measurements). We compared exposure metrics and exposure errors within and across pollutants and derived attenuation factors (ratio of observed to true coefficient for pollutant of interest) for single- and bipollutant model coefficients. Pollutant concentrations and their exposure errors were moderately to highly correlated (typically, > 0.5), especially for CO, NOx, and EC (i.e., "local" pollutants); correlations differed across exposure metrics and types of exposure error. Spatial variability was evident, with variance of exposure error for local pollutants ranging from 0.25 to 0.83 for δspatial and δtotal. The attenuation of model coefficients in single- and bipollutant epidemiologic models relative to the true value differed across types of exposure error, pollutants, and space. Under a classical exposure-error framework, attenuation may be

  17. Alternatives to accuracy and bias metrics based on percentage errors for radiation belt modeling applications

    Energy Technology Data Exchange (ETDEWEB)

    Morley, Steven Karl [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-07-01

    This report reviews existing literature describing forecast accuracy metrics, concentrating on those based on relative errors and percentage errors. We then review how the most common of these metrics, the mean absolute percentage error (MAPE), has been applied in recent radiation belt modeling literature. Finally, we describe metrics based on the ratios of predicted to observed values (the accuracy ratio) that address the drawbacks inherent in using MAPE. Specifically, we define and recommend the median log accuracy ratio as a measure of bias and the median symmetric accuracy as a measure of accuracy.
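
    For concreteness, a small sketch of the quantities being compared: MAPE, the median log accuracy ratio (a bias measure), and the median symmetric accuracy, here computed as 100*(exp(median|ln(prediction/observation)|) - 1); the sample arrays are made up.

      import numpy as np

      def mape(pred, obs):
          return 100.0 * np.mean(np.abs(pred - obs) / np.abs(obs))

      def median_log_accuracy_ratio(pred, obs):
          # Sign indicates bias direction: positive means over-prediction on (log) average.
          return np.median(np.log(pred / obs))

      def median_symmetric_accuracy(pred, obs):
          # Symmetric in pred/obs: over- and under-prediction by the same factor score equally.
          return 100.0 * (np.exp(np.median(np.abs(np.log(pred / obs)))) - 1.0)

      # Made-up positive predicted/observed values spanning several orders of magnitude.
      obs  = np.array([1e2, 5e2, 1e3, 2e4, 8e4, 3e5])
      pred = np.array([2e2, 4e2, 5e2, 3e4, 4e4, 6e5])

      print("MAPE (%):                     ", mape(pred, obs))
      print("median log accuracy ratio:    ", median_log_accuracy_ratio(pred, obs))
      print("median symmetric accuracy (%):", median_symmetric_accuracy(pred, obs))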

  18. MODELING OF MANUFACTURING ERRORS FOR PIN-GEAR ELEMENTS OF PLANETARY GEARBOX

    Directory of Open Access Journals (Sweden)

    Ivan M. Egorov

    2014-11-01

    Full Text Available The theoretical background for the calculation of k-h-v type cycloid reducers was developed relatively long ago. Recently, however, cycloid reducer design has again attracted heightened attention, because such devices are used in many complex engineering systems, particularly mechatronic and robotic systems. The development of advanced technological capabilities for manufacturing such reducers today makes it possible to realize their essential features: high efficiency, high gear ratio, kinematic accuracy, and smooth motion. An adequate mathematical model makes it possible to adjust the kinematic accuracy of the reducer by rational selection of manufacturing tolerances for its parts, and to automate the design process for cycloid reducers with account of various factors, including technological ones. A mathematical model and technique have been developed for modeling the kinematic error of the reducer with account of multiple factors, including manufacturing errors. The errors are considered in a way convenient for predicting kinematic accuracy early at the manufacturing stage, based on measurements of reducer parts on coordinate measuring machines. In the model, the wheel manufacturing errors are determined by the eccentricity and radius deviation of the pin tooth centers circle, and by the deviation between the pin tooth axes positions and the centers circle. The satellite manufacturing errors are determined by the satellite eccentricity deviation and the satellite rim eccentricity. Due to collinearity, the pin tooth and pin tooth hole diameter errors and the satellite tooth profile errors for a designated contact point are integrated into one deviation. Software implementation of the model makes it possible to estimate the influence of these errors on the satellite rotation angle error and

  19. ANALYSIS OF INFLATION IN NORTH SUMATRA: AN ERROR CORRECTION MODEL (ECM)

    Directory of Open Access Journals (Sweden)

    Hafsyah Aprilia

    2012-06-01

    Full Text Available The research was conducted to determine the effect of economic variables that can explain the change or variation in the rate of inflation in the Consumer Price Index (CPI), the dependent variable. The explanatory (independent) variables used as controls are the SBI rate, the nominal interest rate spread (SBI), and the value of the rupiah against the U.S. dollar. Based on these results, and according to the specific purpose of model equation II, it is suggested that economic actors can use the SBI interest rate spread as an indicator of variations in the CPI inflation rate at intervals of 8 and 12 months, with the note that the obtained level of explanation has not yet reached the optimal value
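
    A minimal sketch of the two-step (Engle-Granger) error correction model structure referred to here, using made-up series in place of the CPI and interest-rate data: estimate the long-run relationship in levels, then regress the differenced dependent variable on the differenced regressor and the lagged residual (the error correction term).

      import numpy as np

      rng = np.random.default_rng(0)
      n = 200
      x = np.cumsum(rng.normal(0, 1, n))              # I(1) regressor (e.g., an interest rate spread)
      y = 2.0 + 0.8 * x + rng.normal(0, 0.5, n)       # cointegrated dependent series (e.g., inflation)

      def ols(X, y):
          return np.linalg.lstsq(X, y, rcond=None)[0]

      # Step 1: long-run (levels) regression; residuals form the error correction term (ECT).
      beta_lr = ols(np.column_stack([np.ones(n), x]), y)
      ect = y - (beta_lr[0] + beta_lr[1] * x)

      # Step 2: short-run dynamics with the lagged error correction term.
      dy, dx = np.diff(y), np.diff(x)
      X_sr = np.column_stack([np.ones(n - 1), dx, ect[:-1]])
      beta_sr = ols(X_sr, dy)

      print("long-run slope:            ", beta_lr[1])
      print("short-run slope, ECT coef: ", beta_sr[1], beta_sr[2])   # ECT coefficient should be negative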

  20. Error Modeling and Design Optimization of Parallel Manipulators

    DEFF Research Database (Denmark)

    Wu, Guanglei

    challenges due to their highly nonlinear behaviors, thus, the parameter and performance analysis, especially the accuracy and stiffness, are particularly important. Toward the requirements of robotic technology such as light weight, compactness, high accuracy and low energy consumption, utilizing optimization techniques in the design procedure is a suitable approach to handle these complex tasks. As there is no unified design guideline for the parallel manipulators, the study described in this thesis aims to provide a systematic analysis for this type of mechanisms in the early design stage, focusing on accuracy analysis and design optimization. The proposed approach is illustrated with the planar and spherical parallel manipulators. The geometric design, kinematic and dynamic analysis, kinetostatic modeling and stiffness analysis are also presented. Firstly, the study on the geometric architecture and kinematic

  1. Why Is Rainfall Error Analysis Requisite for Data Assimilation and Climate Modeling?

    Science.gov (United States)

    Hou, Arthur Y.; Zhang, Sara Q.

    2004-01-01

    Given the large temporal and spatial variability of precipitation processes, errors in rainfall observations are difficult to quantify yet crucial to making effective use of rainfall data for improving atmospheric analysis, weather forecasting, and climate modeling. We highlight the need for developing a quantitative understanding of systematic and random errors in precipitation observations by examining explicit examples of how each type of errors can affect forecasts and analyses in global data assimilation. We characterize the error information needed from the precipitation measurement community and how it may be used to improve data usage within the general framework of analysis techniques, as well as accuracy requirements from the perspective of climate modeling and global data assimilation.

  2. Error Modeling and Sensitivity Analysis of a Five-Axis Machine Tool

    Directory of Open Access Journals (Sweden)

    Wenjie Tian

    2014-01-01

    Full Text Available Geometric error modeling and its sensitivity analysis are carried out in this paper, which is helpful for precision design of machine tools. Screw theory and rigid body kinematics are used to establish the error model of an RRTTT-type five-axis machine tool, which enables the source errors affecting the compensable and uncompensable pose accuracy of the machine tool to be explicitly separated, thereby providing designers and/or field engineers with an informative guideline for the accuracy improvement by suitable measures, that is, component tolerancing in design, manufacturing, and assembly processes, and error compensation. The sensitivity analysis method is proposed, and the sensitivities of compensable and uncompensable pose accuracies are analyzed. The analysis results will be used for the precision design of the machine tool.

  3. An attempt to lower sources of systematic measurement error using Hierarchical Generalized Linear Modeling (HGLM).

    Science.gov (United States)

    Sideridis, George D; Tsaousis, Ioannis; Katsis, Athanasios

    2014-01-01

    The purpose of the present studies was to test the effects of systematic sources of measurement error on the parameter estimates of scales using the Rasch model. Studies 1 and 2 tested the effects of mood and affectivity. Study 3 evaluated the effects of fatigue. Last, studies 4 and 5 tested the effects of motivation on a number of parameters of the Rasch model (e.g., ability estimates). Results indicated that (a) the parameters of interest and the psychometric properties of the scales were substantially distorted in the presence of all systematic sources of error, and, (b) the use of HGLM provides a way of adjusting the parameter estimates in the presence of these sources of error. It is concluded that validity in measurement requires a thorough evaluation of potential sources of error and appropriate adjustments based on each occasion.

  4. Performance of cumulant-based rank reduction estimator in presence of unexpected modeling errors

    Institute of Scientific and Technical Information of China (English)

    王鼎

    2015-01-01

    Compared with the rank reduction estimator (RARE) based on second-order statistics (called SOS-RARE), the RARE based on fourth-order cumulants (referred to as FOC-RARE) can handle more sources and suppress the negative impact of Gaussian colored noise. However, the unexpected modeling errors appearing in practice are known to significantly degrade the performance of the RARE. Therefore, the direction-of-arrival (DOA) estimation performance of the FOC-RARE is quantitatively derived. The explicit expression for the direction-finding (DF) error is derived via first-order perturbation analysis, and then the theoretical formula for the mean square error (MSE) is given. Simulation results validate the theoretical analysis and reveal that the FOC-RARE is more robust to unexpected modeling errors than the SOS-RARE.

  5. Measurement Error in Proportional Hazards Models for Survival Data with Long-term Survivors

    Institute of Scientific and Technical Information of China (English)

    Xiao-bing ZHAO; Xian ZHOU

    2012-01-01

    This work studies a proportional hazards model for survival data with "long-term survivors", in which covariates are subject to linear measurement error. It is well known that the naïve estimators from both partial and full likelihood methods are inconsistent under this measurement error model. For measurement error models, methods of unbiased estimating function and corrected likelihood have been proposed in the literature. In this paper, we apply the corrected partial and full likelihood approaches to estimate the model and obtain statistical inference from survival data with long-term survivors. The asymptotic properties of the estimators are established. Simulation results illustrate that the proposed approaches provide useful tools for the models considered.

  6. Real-time prediction of atmospheric Lagrangian coherent structures based on forecast data: An application and error analysis

    Science.gov (United States)

    BozorgMagham, Amir E.; Ross, Shane D.; Schmale, David G.

    2013-09-01

    The language of Lagrangian coherent structures (LCSs) provides a new means for studying transport and mixing of passive particles advected by an atmospheric flow field. Recent observations suggest that LCSs govern the large-scale atmospheric motion of airborne microorganisms, paving the way for more efficient models and management strategies for the spread of infectious diseases affecting plants, domestic animals, and humans. In addition, having reliable predictions of the timing of hyperbolic LCSs may contribute to improved aerobiological sampling of microorganisms with unmanned aerial vehicles and LCS-based early warning systems. Chaotic atmospheric dynamics lead to unavoidable forecasting errors in the wind velocity field, which compounds errors in LCS forecasting. In this study, we reveal the cumulative effects of errors of (short-term) wind field forecasts on the finite-time Lyapunov exponent (FTLE) fields and the associated LCSs when realistic forecast plans impose certain limits on the forecasting parameters. Objectives of this paper are to (a) quantify the accuracy of prediction of FTLE-LCS features and (b) determine the sensitivity of such predictions to forecasting parameters. Results indicate that forecasts of attracting LCSs exhibit less divergence from the archive-based LCSs than the repelling features. This result is important since attracting LCSs are the backbone of long-lived features in moving fluids. We also show under what circumstances one can trust the forecast results if one merely wants to know if an LCS passed over a region and does not need to precisely know the passage time.

  7. Triangle network motifs predict complexes by complementing high-error interactomes with structural information

    Directory of Open Access Journals (Sweden)

    Labudde Dirk

    2009-06-01

    Full Text Available Abstract Background A lot of high-throughput studies produce protein-protein interaction networks (PPINs) with many errors and missing information. Even for genome-wide approaches, there is often a low overlap between PPINs produced by different studies. Second-level neighbors separated by two protein-protein interactions (PPIs) were previously used for predicting protein function and finding complexes in high-error PPINs. We retrieve second level neighbors in PPINs, and complement these with structural domain-domain interactions (SDDIs) representing binding evidence on proteins, forming PPI-SDDI-PPI triangles. Results We find low overlap between PPINs, SDDIs and known complexes, all well below 10%. We evaluate the overlap of PPI-SDDI-PPI triangles with known complexes from the Munich Information center for Protein Sequences (MIPS). PPI-SDDI-PPI triangles have ~20 times higher overlap with MIPS complexes than using second-level neighbors in PPINs without SDDIs. The biological interpretation for triangles is that a SDDI causes two proteins to be observed with common interaction partners in high-throughput experiments. The relatively few SDDIs overlapping with PPINs are part of highly connected SDDI components, and are more likely to be detected in experimental studies. We demonstrate the utility of PPI-SDDI-PPI triangles by reconstructing myosin-actin processes in the nucleus, cytoplasm, and cytoskeleton, which were not obvious in the original PPIN. Using other complementary datatypes in place of SDDIs to form triangles, such as PubMed co-occurrences or threading information, results in a similar ability to find protein complexes. Conclusion Given high-error PPINs with missing information, triangles of mixed datatypes are a promising direction for finding protein complexes. Integrating PPINs with SDDIs improves finding complexes. Structural SDDIs partially explain the high functional similarity of second-level neighbors in PPINs. We estimate that
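
    A minimal sketch of how such mixed triangles can be enumerated, assuming the two edge sets are given as Python sets of protein pairs (toy identifiers, not data from the paper): for every structural domain-domain interaction (A, C), report each common PPI neighbour B, giving a PPI-SDDI-PPI triangle A-B-C.

      from collections import defaultdict

      # Toy protein-protein interactions (high-throughput, error-prone) and structural DDIs.
      ppi = {("A", "B"), ("B", "C"), ("C", "D"), ("B", "D"), ("A", "E")}
      sddi = {("A", "C"), ("A", "D")}

      # Adjacency of the PPI network.
      neighbours = defaultdict(set)
      for u, v in ppi:
          neighbours[u].add(v)
          neighbours[v].add(u)

      # A PPI-SDDI-PPI triangle: proteins a and c linked by an SDDI and sharing a PPI partner b.
      triangles = []
      for a, c in sddi:
          for b in neighbours[a] & neighbours[c]:
              triangles.append((a, b, c))

      print(triangles)   # e.g. ('A', 'B', 'C'): A-B and B-C are PPIs, A-C is an SDDI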

  8. Consistent Fundamental Matrix Estimation in a Quadratic Measurement Error Model Arising in Motion Analysis

    OpenAIRE

    Kukush, A.; Markovsky, I.; Van Huffel, S.

    2002-01-01

    Consistent estimators of the rank-deficient fundamental matrix yielding information on the relative orientation of two images in two-view motion analysis are derived. The estimators are derived by minimizing a corrected contrast function in a quadratic measurement error model. In addition, a consistent estimator for the measurement error variance is obtained. Simulation results show the improved accuracy of the newly proposed estimator compared to the ordinary total least-squares estimator.

  9. Modeling Human Error Mechanism for Soft Control in Advanced Control Rooms (ACRs)

    Energy Technology Data Exchange (ETDEWEB)

    Aljneibi, Hanan Salah Ali [Khalifa Univ., Abu Dhabi (United Arab Emirates); Ha, Jun Su; Kang, Seongkeun; Seong, Poong Hyun [KAIST, Daejeon (Korea, Republic of)

    2015-10-15

    To achieve the switch from conventional analog-based design to digital design in ACRs, a large number of manual operating controls and switches have to be replaced by a few common multi-function devices, which are called the soft control system. The soft controls in APR-1400 ACRs are classified into safety-grade and non-safety-grade soft controls; each was designed using different and independent input devices in ACRs. Operations using soft controls require operators to perform new tasks that were not necessary with conventional controls, such as navigating computerized displays to monitor plant information and control devices. These computerized displays and soft controls may make operations more convenient, but they might also cause new types of human error. In this study, the human error mechanism during soft control is studied and modeled for use in the analysis and enhancement of human performance (or reduction of human errors) during NPP operation. The developed model would contribute to many applications for improving human performance (or reducing human errors), HMI designs, and operators' training programs in ACRs. The developed model of the human error mechanism for soft control is based on the assumptions that a human operator has a certain amount of cognitive resource capacity and that, if the resources required by operating tasks are greater than the resources invested by the operator, human error (or poor human performance) is likely to occur (especially 'slips'); good HMI (Human-Machine Interface) design decreases the required resources; an operator's skillfulness decreases the required resources; and high vigilance increases the invested resources.

  10. High dimensional linear regression models under long memory dependence and measurement error

    Science.gov (United States)

    Kaul, Abhishek

    This dissertation consists of three chapters. The first chapter introduces the models under consideration and motivates problems of interest. A brief literature review is also provided in this chapter. The second chapter investigates the properties of Lasso under long range dependent model errors. Lasso is a computationally efficient approach to model selection and estimation, and its properties are well studied when the regression errors are independent and identically distributed. We study the case where the regression errors form a long memory moving average process. We establish a finite sample oracle inequality for the Lasso solution. We then show the asymptotic sign consistency in this setup. These results are established in the high dimensional setup (p > n) where p can be increasing exponentially with n. Finally, we show the consistency and n^(1/2-d)-consistency of Lasso, along with the oracle property of adaptive Lasso, in the case where p is fixed. Here d is the memory parameter of the stationary error sequence. The performance of Lasso is also analysed in the present setup with a simulation study. The third chapter proposes and investigates the properties of a penalized quantile based estimator for measurement error models. Standard formulations of prediction problems in high dimension regression models assume the availability of fully observed covariates and sub-Gaussian and homogeneous model errors. This makes these methods inapplicable to measurement error models where covariates are unobservable and observations are possibly non sub-Gaussian and heterogeneous. We propose weighted penalized corrected quantile estimators for the regression parameter vector in linear regression models with additive measurement errors, where unobservable covariates are nonrandom. The proposed estimators forgo the need for the above mentioned model assumptions. We study these estimators in both the fixed dimension and high dimensional sparse setups; in the latter setup, the
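
    As a small illustration of the Lasso setting studied in the second chapter, the sketch below generates long-memory (ARFIMA(0, d, 0)-style) regression errors from the fractional-differencing moving-average weights and fits scikit-learn's Lasso to a sparse signal; the dimensions, the value of d, and the penalty level are arbitrary illustrative choices, not those used in the dissertation.

      import numpy as np
      from sklearn.linear_model import Lasso

      rng = np.random.default_rng(0)
      n, p, d = 400, 50, 0.3                      # d is the long-memory parameter (0 < d < 0.5)

      # Long-memory errors: ARFIMA(0, d, 0) built from its infinite moving-average weights.
      innov = rng.normal(0, 1, 2 * n)
      psi = np.ones(2 * n)
      for k in range(1, 2 * n):
          psi[k] = psi[k - 1] * (k - 1 + d) / k
      errors = np.convolve(innov, psi)[n:2 * n]   # discard burn-in

      X = rng.normal(0, 1, (n, p))
      beta = np.zeros(p)
      beta[:3] = [3.0, -2.0, 1.5]                 # sparse true coefficients
      y = X @ beta + errors

      fit = Lasso(alpha=0.1).fit(X, y)
      selected = np.flatnonzero(fit.coef_)
      print("selected coefficients:", selected)   # ideally {0, 1, 2}
      print("estimates:", np.round(fit.coef_[selected], 2))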

  11. Using Errors to Teach through a Two-Staged, Structured Review: Peer-Reviewed Quizzes and "What's Wrong with Me?"

    Science.gov (United States)

    Coppola, Brian P.; Pontrello, Jason K.

    2014-01-01

    Using errors as a method of learning has been made explicit through a two-staged peer review and discussion. During organic chemistry discussion sessions, quizzes are followed by a structured peer review designed to help students identify and discuss student errors. After the face-to-face discussion, a second stage of review involves analyzing and…

  12. Error Analysis of p-Version Discontinuous Galerkin Method for Heat Transfer in Built-up Structures

    Science.gov (United States)

    Kaneko, Hideaki; Bey, Kim S.

    2004-01-01

    The purpose of this paper is to provide an error analysis for the p-version of the discontinuous Galerkin finite element method for heat transfer in built-up structures. As a special case of the results in this paper, a theoretical error estimate for the numerical experiments recently conducted by James Tomey is obtained.

  13. Relative Error Model Reduction via Time-Weighted Balanced Stochastic Singular Perturbation

    DEFF Research Database (Denmark)

    Tahavori, Maryamsadat; Shaker, Hamid Reza

    2012-01-01

    A new mixed method for relative error model reduction of linear time invariant (LTI) systems is proposed in this paper. This order reduction technique is mainly based upon time-weighted balanced stochastic model reduction method and singular perturbation model reduction technique. Compared...... by using the concept and properties of the reciprocal systems. The results are further illustrated by two practical numerical examples: a model of CD player and a model of the atmospheric storm track....

  14. An MEG signature corresponding to an axiomatic model of reward prediction error.

    Science.gov (United States)

    Talmi, Deborah; Fuentemilla, Lluis; Litvak, Vladimir; Duzel, Emrah; Dolan, Raymond J

    2012-01-01

    Optimal decision-making is guided by evaluating the outcomes of previous decisions. Prediction errors are theoretical teaching signals which integrate two features of an outcome: its inherent value and prior expectation of its occurrence. To uncover the magnetic signature of prediction errors in the human brain we acquired magnetoencephalographic (MEG) data while participants performed a gambling task. Our primary objective was to use formal criteria, based upon an axiomatic model (Caplin and Dean, 2008a), to determine the presence and timing profile of MEG signals that express prediction errors. We report analyses at the sensor level, implemented in SPM8, time locked to outcome onset. We identified, for the first time, an MEG signature of prediction error, which emerged approximately 320 ms after an outcome and was expressed as an interaction between outcome valence and probability. This signal followed earlier, separate signals for outcome valence and probability, which emerged approximately 200 ms after an outcome. Strikingly, the time course of the prediction error signal, as well as the early valence signal, resembled the Feedback-Related Negativity (FRN). In simultaneously acquired EEG data we obtained a robust FRN, but the win and loss signals that comprised this difference wave did not comply with the axiomatic model. Our findings motivate an explicit examination of the critical issue of timing embodied in computational models of prediction errors as seen in human electrophysiological data.

  15. Active Magnetic Bearing Rotor Model Updating Using Resonance and MAC Error

    Directory of Open Access Journals (Sweden)

    Yuanping Xu

    2015-01-01

    Full Text Available Modern control techniques can improve the performance and robustness of a rotor active magnetic bearing (AMB) system. Since those control methods usually rely on system models, it is important to obtain a precise rotor AMB analytical model. However, the interference fits and shrink effects of the rotor AMB introduce inaccuracy into the final system model. In this paper, an experiment-based model updating method is proposed to improve the accuracy of the finite element (FE) model used in a rotor AMB system. Modelling error is minimized by applying the Nelder-Mead simplex numerical optimization algorithm to properly adjust the FE model parameters. Both the resonance frequency errors and the modal assurance criterion (MAC) values are minimized simultaneously to account for the rotor natural frequencies as well as for the mode shapes. Verification of the updated rotor model is performed by comparing the experimental and analytical frequency responses. The close agreement demonstrates the effectiveness of the proposed model updating methodology.
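
    The frequency-plus-MAC objective minimised with the Nelder-Mead simplex can be sketched on a toy two-degree-of-freedom mass-spring model (the paper updates a full rotor-AMB FE model); the "experimental" modal data and weighting below are assumptions for illustration.

        # Nelder-Mead model updating with combined resonance-frequency and MAC error.
        import numpy as np
        from scipy.linalg import eigh
        from scipy.optimize import minimize

        def modal(k1, k2, m1=1.0, m2=1.0):
            K = np.array([[k1 + k2, -k2], [-k2, k2]])
            M = np.diag([m1, m2])
            w2, phi = eigh(K, M)               # generalized eigenproblem
            return np.sqrt(w2), phi

        def mac(a, b):
            """Modal assurance criterion between two mode-shape vectors."""
            return (a @ b) ** 2 / ((a @ a) * (b @ b))

        # "Experimental" modal data from the (to the optimiser unknown) true stiffnesses.
        w_exp, phi_exp = modal(k1=1200.0, k2=800.0)

        def objective(params):
            w, phi = modal(*params)
            freq_err = np.sum(((w - w_exp) / w_exp) ** 2)
            mac_err = sum(1.0 - mac(phi[:, i], phi_exp[:, i]) for i in range(2))
            return freq_err + mac_err          # both terms minimised simultaneously

        res = minimize(objective, x0=[1000.0, 1000.0], method="Nelder-Mead")
        print(res.x)                           # should approach [1200, 800]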

  16. Admissibilities of linear estimator in a class of linear models with a multivariate t error variable

    Institute of Scientific and Technical Information of China (English)

    2010-01-01

    This paper discusses admissibilities of estimators in a class of linear models, which includes the following common models: the univariate and multivariate linear models, the growth curve model, the extended growth curve model, the seemingly unrelated regression equations, the variance components model, and so on. It is proved that admissible estimators of functions of the regression coefficient β in the class of linear models with multivariate t error terms, called Model II, are also admissible in the case that the error terms have a multivariate normal distribution under a strictly convex loss function or a matrix loss function. It is also proved under Model II that the usual estimators of β are admissible for p ≤ 2 with a quadratic loss function, and are admissible for any p with a matrix loss function, where p is the dimension of β.

  17. Error Analysis of Satellite Precipitation-Driven Modeling of Flood Events in Complex Alpine Terrain

    Directory of Open Access Journals (Sweden)

    Yiwen Mei

    2016-03-01

    Full Text Available The error in satellite precipitation-driven complex terrain flood simulations is characterized in this study for eight different global satellite products and 128 flood events over the Eastern Italian Alps. The flood events are grouped according to two flood types: rain floods and flash floods. The satellite precipitation products and runoff simulations are evaluated based on systematic and random error metrics applied on the matched event pairs and basin-scale event properties (i.e., rainfall and runoff cumulative depth and time series shape). Overall, error characteristics exhibit dependency on the flood type. Generally, the timing of the event precipitation mass center and the dispersion of the time series derived from satellite precipitation exhibit good agreement with the reference; the cumulative depth is mostly underestimated. The study shows a dampening effect in both systematic and random error components of the satellite-driven hydrograph relative to the satellite-retrieved hyetograph. The systematic error in the shape of the time series shows a significant dampening effect. The random error dampening effect is less pronounced for the flash flood events and the rain flood events with a high runoff coefficient. This event-based analysis of satellite precipitation error propagation in flood modeling sheds light on the application of satellite precipitation in mountain flood hydrology.

  18. A Phillips curve interpretation of error-correction models of the wage and price dynamics

    DEFF Research Database (Denmark)

    Harck, Søren H.

    This paper presents a model of employment, distribution and inflation in which a modern error correction specification of the nominal wage and price dynamics (referring to claims on income by workers and firms) occupies a prominent role. It is brought out, explicitly, how this rather typical error-correction setting, which actually seems to capture the wage and price dynamics of many large-scale econometric models quite well, is fully compatible with the notion of an old-fashioned Phillips curve with finite slope. It is shown how the steady-state impact of various shocks to the model can be profitably...

  19. A Phillips curve interpretation of error-correction models of the wage and price dynamics

    DEFF Research Database (Denmark)

    Harck, Søren H.

    2009-01-01

    This paper presents a model of employment, distribution and inflation in which a modern error correction specification of the nominal wage and price dynamics (referring to claims on income by workers and firms) occupies a prominent role. It is brought out, explicitly, how this rather typical error-correction setting, which actually seems to capture the wage and price dynamics of many large-scale econometric models quite well, is fully compatible with the notion of an old-fashioned Phillips curve with finite slope. It is shown how the steady-state impact of various shocks to the model can be profitably...

  20. Dynamic Modeling Accuracy Dependence on Errors in Sensor Measurements, Mass Properties, and Aircraft Geometry

    Science.gov (United States)

    Grauer, Jared A.; Morelli, Eugene A.

    2013-01-01

    A nonlinear simulation of the NASA Generic Transport Model was used to investigate the effects of errors in sensor measurements, mass properties, and aircraft geometry on the accuracy of dynamic models identified from flight data. Measurements from a typical system identification maneuver were systematically and progressively deteriorated and then used to estimate stability and control derivatives within a Monte Carlo analysis. Based on the results, recommendations were provided for maximum allowable errors in sensor measurements, mass properties, and aircraft geometry to achieve desired levels of dynamic modeling accuracy. Results using other flight conditions, parameter estimation methods, and a full-scale F-16 nonlinear aircraft simulation were compared with these recommendations.

  1. Execution-Error Modeling and Analysis of the GRAIL Spacecraft Pair

    Science.gov (United States)

    Goodson, Troy D.

    2013-01-01

    The GRAIL spacecraft, Ebb and Flow (aka GRAIL-A and GRAIL-B), completed their prime mission in June and extended mission in December 2012. The excellent performance of the propulsion and attitude control subsystems contributed significantly to the mission's success. In order to better understand this performance, the Navigation Team has analyzed and refined the execution-error models for delta-v maneuvers. There were enough maneuvers in the prime mission to form the basis of a model update that was used in the extended mission. This paper documents the evolution of the execution-error models along with the analysis and software used.

  2. Incorporating experimental design and error into coalescent/mutation models of population history.

    Science.gov (United States)

    Knudsen, Bjarne; Miyamoto, Michael M

    2007-08-01

    Coalescent theory provides a powerful framework for estimating the evolutionary, demographic, and genetic parameters of a population from a small sample of individuals. Current coalescent models have largely focused on population genetic factors (e.g., mutation, population growth, and migration) rather than on the effects of experimental design and error. This study develops a new coalescent/mutation model that accounts for unobserved polymorphisms due to missing data, sequence errors, and multiple reads for diploid individuals. The importance of accommodating these effects of experimental design and error is illustrated with evolutionary simulations and a real data set from a population of the California sea hare. In particular, a failure to account for sequence errors can lead to overestimated mutation rates, inflated coalescent times, and inappropriate conclusions about the population. This current model can now serve as a starting point for the development of newer models with additional experimental and population genetic factors. It is currently implemented as a maximum-likelihood method, but this model may also serve as the basis for the development of Bayesian approaches that incorporate experimental design and error.

  3. Partial Least Squares Structural Equation Modeling with R

    Science.gov (United States)

    Ravand, Hamdollah; Baghaei, Purya

    2016-01-01

    Structural equation modeling (SEM) has become widespread in educational and psychological research. Its flexibility in addressing complex theoretical models and the proper treatment of measurement error has made it the model of choice for many researchers in the social sciences. Nevertheless, the model imposes some daunting assumptions and…

  4. A Monte-Carlo Bayesian framework for urban rainfall error modelling

    Science.gov (United States)

    Ochoa Rodriguez, Susana; Wang, Li-Pen; Willems, Patrick; Onof, Christian

    2016-04-01

    Rainfall estimates of the highest possible accuracy and resolution are required for urban hydrological applications, given the small size and fast response which characterise urban catchments. While significant progress has been made in recent years towards meeting rainfall input requirements for urban hydrology -including increasing use of high spatial resolution radar rainfall estimates in combination with point rain gauge records- rainfall estimates will never be perfect and the true rainfall field is, by definition, unknown [1]. Quantifying the residual errors in rainfall estimates is crucial in order to understand their reliability, as well as the impact that their uncertainty may have on subsequent runoff estimates. The quantification of errors in rainfall estimates has been an active topic of research for decades. However, existing rainfall error models have several shortcomings, including the fact that they are limited to describing errors associated with a single data source (i.e. errors associated with rain gauge measurements or radar QPEs alone) and with a single representative error source (e.g. radar-rain gauge differences, spatial-temporal resolution). Moreover, rainfall error models have been mostly developed for and tested at large scales. Studies at urban scales are mostly limited to analyses of the propagation of errors in rain gauge records alone through urban drainage models and to tests of model sensitivity to uncertainty arising from unmeasured rainfall variability. Only a few radar rainfall error models -originally developed for large scales- have been tested at urban scales [2] and have been shown to fail to capture well small-scale storm dynamics, including storm peaks, which are of utmost importance for urban runoff simulations. In this work a Monte-Carlo Bayesian framework for rainfall error modelling at urban scales is introduced, which explicitly accounts for relevant errors (arising from insufficient accuracy and/or resolution) in multiple data

  5. A flexible additive inflation scheme for treating model error in ensemble Kalman Filters

    Science.gov (United States)

    Sommer, Matthias; Janjic, Tijana

    2017-04-01

    Data assimilation algorithms require an accurate estimate of the uncertainty of the prior, or background, field. However, the background error covariance derived from the ensemble of numerical model simulations does not adequately represent this uncertainty. This is partially due to the sampling error that arises from the use of a small number of ensemble members to represent the background error covariance. It is also partially a consequence of the fact that the model does not represent its own error. Several mechanisms have been introduced so far aiming at alleviating the detrimental effects of misrepresented ensemble covariances, allowing for the successful implementation of ensemble data assimilation techniques for atmospheric dynamics. One of the established approaches in ensemble data assimilation is additive inflation, which perturbs each ensemble member with a sample from a given distribution. This results in a fixed rank of the model error covariance matrix. Here, a more flexible approach is suggested where the model error samples are treated as additional synthetic ensemble members which are used in the update step of data assimilation but are not forecast. In this way, the rank of the model error covariance matrix can be chosen independently of the ensemble. The effect of this altered additive inflation method on the performance of the filter is analyzed here in an idealised experiment. It is shown that the additional synthetic ensemble members can make it feasible to achieve convergence in an otherwise divergent setting of data assimilation. The use of this method also allows for a less stringent localization radius.
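
    A minimal sketch of the augmented-update idea, assuming a linear observation operator and a stochastic (perturbed-observation) EnKF; the dimensions, observation error and model-error statistics are illustrative, not those of the idealised experiment.

        # Model-error samples enter the analysis as extra perturbation columns only.
        import numpy as np

        rng = np.random.default_rng(1)
        n, m, q = 40, 10, 20                  # state size, forecast members, error samples

        X = rng.standard_normal((n, m)) + 2.0           # forecast ensemble (n x m)
        Q_samples = 0.5 * rng.standard_normal((n, q))   # zero-mean model-error draws

        xb = X.mean(axis=1, keepdims=True)
        E = (X - xb) / np.sqrt(m - 1)                   # forecast perturbations
        Eq = Q_samples / np.sqrt(q)                     # synthetic-member perturbations
        A = np.hstack([E, Eq])                          # rank m+q background perturbations

        H = np.zeros((5, n)); H[np.arange(5), np.arange(0, n, 8)] = 1.0
        R = 0.2 * np.eye(5)
        y = H @ (xb.ravel() + 1.0) + rng.multivariate_normal(np.zeros(5), R)

        S = H @ A                                       # Pb = A A^T, so H Pb H^T = S S^T
        K = A @ S.T @ np.linalg.inv(S @ S.T + R)

        # Only the forecast members are updated; synthetic members are then discarded.
        Y_pert = y[:, None] + rng.multivariate_normal(np.zeros(5), R, size=m).T
        Xa = X + K @ (Y_pert - H @ X)
        print(Xa.mean(axis=1)[:5])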

  6. High resolution modeling of CO2 over Europe: implications for representation errors of satellite retrievals

    Directory of Open Access Journals (Sweden)

    T. Koch

    2010-01-01

    Full Text Available Satellite retrievals for column CO2 with better spatial and temporal sampling are expected to improve the current surface flux estimates of CO2 via inverse techniques. However, the spatial scale mismatch between remotely sensed CO2 and current generation inverse models can induce representation errors, which can cause systematic biases in flux estimates. This study is focused on estimating these representation errors associated with the utilization of satellite measurements in global models with a horizontal resolution of about 1 degree or less. For this we used simulated CO2 from the high resolution modeling framework WRF-VPRM, which links CO2 fluxes from a diagnostic biosphere model to a weather forecasting model at 10×10 km2 horizontal resolution. Sub-grid variability of column averaged CO2, i.e. the variability not resolved by global models, reached up to 1.2 ppm with a median value of 0.4 ppm. Statistical analysis of the simulation results indicates that orography plays an important role. Using sub-grid variability of orography and CO2 fluxes as well as the resolved mixing ratio of CO2, a linear model can be formulated that could explain about 50% of the spatial patterns in the systematic (bias or correlated) error component of the representation error in column and near-surface CO2 during day- and night-times. These findings give hints for a parameterization of the representation error which would allow for the representation error to be taken into account in inverse models or data assimilation systems.

  7. Finding of Correction Factor and Dimensional Error in Bio-AM Model by FDM Technique

    Science.gov (United States)

    Manmadhachary, Aiamunoori; Ravi Kumar, Yennam; Krishnanand, Lanka

    2016-06-01

    Additive Manufacturing (AM) is a rapid manufacturing process in which input data can be provided from various sources such as 3-Dimensional (3D) Computer Aided Design (CAD), Computed Tomography (CT), Magnetic Resonance Imaging (MRI) and 3D scanner data. From CT/MRI data, Biomedical Additive Manufacturing (Bio-AM) models can be manufactured. The Bio-AM model gives a better lead on preplanning of oral and maxillofacial surgery. However, manufacturing an accurate Bio-AM model is one of the unsolved problems. The current paper quantifies the error between the Standard Triangle Language (STL) model and the Bio-AM model of a dry mandible and determines a correction factor for Bio-AM models produced with the Fused Deposition Modelling (FDM) technique. In the present work, dry mandible CT images are acquired by a CT scanner and converted into a 3D CAD model in the form of an STL model. The data are then sent to an FDM machine for fabrication of the Bio-AM model. The difference between the Bio-AM and STL model dimensions is considered the dimensional error, and the ratio of STL to Bio-AM model dimensions is considered the correction factor. This correction factor helps to fabricate the AM model with accurate dimensions of the patient anatomy. These true-dimensional Bio-AM models increase the safety and accuracy of pre-planning in oral and maxillofacial surgery. The correction factor for the Dimension SST 768 FDM AM machine is 1.003 and the dimensional error is limited to 0.3 %.
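
    A worked example of the two quantities defined above (the dimensions are made-up illustrative values, not the paper's measurements):

        # Dimensional error and correction factor for a Bio-AM model.
        stl_dim   = 101.20   # dimension on the STL (CAD) model, mm
        bioam_dim = 100.90   # same dimension on the printed Bio-AM model, mm

        dimensional_error_pct = abs(stl_dim - bioam_dim) / stl_dim * 100
        correction_factor     = stl_dim / bioam_dim        # ratio STL / Bio-AM

        # Scaling the CAD geometry by the correction factor before printing should
        # bring the fabricated part closer to the intended (STL) dimensions.
        print(f"error = {dimensional_error_pct:.2f} %, factor = {correction_factor:.4f}")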

  8. Building a Structural Model: Parameterization and Structurality

    Directory of Open Access Journals (Sweden)

    Michel Mouchart

    2016-04-01

    Full Text Available A specific concept of structural model is used as a background for discussing the structurality of its parameterization. Conditions for a structural model to be also causal are examined. Difficulties and pitfalls arising from the parameterization are analyzed. In particular, pitfalls when considering alternative parameterizations of the same model are shown to have led to ungrounded conclusions in the literature. Discussions of observationally equivalent models related to different economic mechanisms are used to make clear the connection between an economically meaningful parameterization and an economically meaningful decomposition of a complex model. The design of economic policy is used for drawing some practical implications of the proposed analysis.

  9. Uncovering the Best Skill Multimap by Constraining the Error Probabilities of the Gain-Loss Model

    Science.gov (United States)

    Anselmi, Pasquale; Robusto, Egidio; Stefanutti, Luca

    2012-01-01

    The Gain-Loss model is a probabilistic skill multimap model for assessing learning processes. In practical applications, more than one skill multimap could be plausible, while none corresponds to the true one. The article investigates whether constraining the error probabilities is a way of uncovering the best skill assignment among a number of…

  10. A Hierarchical Bayes Error Correction Model to Explain Dynamic Effects of Price Changes

    NARCIS (Netherlands)

    D. Fok (Dennis); R. Paap (Richard); C. Horváth (Csilla); Ph.H.B.F. Franses (Philip Hans)

    2005-01-01

    textabstractThe authors put forward a sales response model to explain the differences in immediate and dynamic effects of promotional prices and regular prices on sales. The model consists of a vector autoregression rewritten in error-correction format which allows to disentangle the immediate

  11. A Percentile Regression Model for the Number of Errors in Group Conversation Tests.

    Science.gov (United States)

    Liski, Erkki P.; Puntanen, Simo

    A statistical model is presented for analyzing the results of group conversation tests in English, developed in a Finnish university study from 1977 to 1981. The model is illustrated with the findings from the study. In this study, estimates of percentile curves for the number of errors are of greater interest than the mean regression line. It was…

  12. Thermal Error Modeling of a Machine Tool Using Data Mining Scheme

    Science.gov (United States)

    Wang, Kun-Chieh; Tseng, Pai-Chang

    In this paper the knowledge discovery technique is used to build an effective and transparent mathematical thermal error model for machine tools. Our proposed thermal error modeling methodology (called KRL) integrates the schemes of K-means theory (KM), rough-set theory (RS), and linear regression model (LR). First, to explore the machine tool's thermal behavior, an integrated system is designed to simultaneously measure the temperature ascents at selected characteristic points and the thermal deformations at the spindle nose under suitable real machining conditions. Second, the obtained data are classified by the KM method, further reduced by the RS scheme, and a linear thermal error model is established by the LR technique. To evaluate the performance of our proposed model, an adaptive neural fuzzy inference system (ANFIS) thermal error model is introduced for comparison. Finally, a verification experiment is carried out and the results reveal that the proposed KRL model is effective in predicting thermal behavior in machine tools. Our proposed KRL model is transparent, easily understood by users, and can be easily programmed or modified for different machining conditions.
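
    The KM + LR part of the pipeline can be sketched as below; the rough-set reduction step is replaced here by simply keeping one representative sensor per cluster, and the temperature/deformation data are synthetic, so this is only an illustration of the workflow, not the paper's KRL model.

        # Cluster temperature-sensor signals, keep one sensor per cluster, then fit a
        # linear thermal-error model on the retained sensors (synthetic data).
        import numpy as np
        from sklearn.cluster import KMeans
        from sklearn.linear_model import LinearRegression

        rng = np.random.default_rng(0)
        n_samples, n_sensors = 120, 8

        base = rng.standard_normal((n_samples, 3)).cumsum(axis=0)   # three thermal sources
        group = rng.integers(0, 3, size=n_sensors)                  # each sensor follows one source
        T = base[:, group] + 0.1 * rng.standard_normal((n_samples, n_sensors))

        # Spindle thermal drift driven mainly by two of the sources.
        drift = 2.0 * base[:, 0] - 1.2 * base[:, 1] + 0.2 * rng.standard_normal(n_samples)

        km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(T.T)   # sensors as objects
        kept = [int(np.flatnonzero(km.labels_ == c)[0]) for c in range(3)]

        lr = LinearRegression().fit(T[:, kept], drift)
        print("kept sensors:", kept, " R^2 =", round(lr.score(T[:, kept], drift), 3))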

  13. Automated evolutionary restructuring of workflows to minimise errors via stochastic model checking

    DEFF Research Database (Denmark)

    Herbert, Luke Thomas; Hansen, Zaza Nadja Lee; Jacobsen, Peter

    2014-01-01

    This paper presents a framework for the automated restructuring of workflows that allows one to minimise the impact of errors on a production workflow. The framework allows for the modelling of workflows by means of a formalised subset of the Business Process Modelling and Notation (BPMN) language...

  14. Uncovering the Best Skill Multimap by Constraining the Error Probabilities of the Gain-Loss Model

    Science.gov (United States)

    Anselmi, Pasquale; Robusto, Egidio; Stefanutti, Luca

    2012-01-01

    The Gain-Loss model is a probabilistic skill multimap model for assessing learning processes. In practical applications, more than one skill multimap could be plausible, while none corresponds to the true one. The article investigates whether constraining the error probabilities is a way of uncovering the best skill assignment among a number of…

  15. A Hierarchical Bayes Error Correction Model to Explain Dynamic Effects of Price Changes

    NARCIS (Netherlands)

    D. Fok (Dennis); R. Paap (Richard); C. Horváth (Csilla); Ph.H.B.F. Franses (Philip Hans)

    2005-01-01

    textabstractThe authors put forward a sales response model to explain the differences in immediate and dynamic effects of promotional prices and regular prices on sales. The model consists of a vector autoregression rewritten in error-correction format which allows to disentangle the immediate effec

  16. Modelling for registration of remotely sensed imagery when reference control points contain error

    Institute of Scientific and Technical Information of China (English)

    GE; Yong; Leung; Yee; MA; Jianghong; WANG; Jinfeng

    2006-01-01

    Reference control points (RCPs) used in establishing the regression model in the registration or geometric correction of remote sensing images are generally assumed to be "perfect". That is, the RCPs, as explanatory variables in the regression equation, are accurate and the coordinates of their locations have no errors. Thus the ordinary least squares (OLS) estimator has been applied extensively to the registration or geometric correction of remotely sensed data. However, this assumption is often invalid in practice because RCPs always contain errors. Moreover, these errors are actually one of the main sources which lower the accuracy of geometric correction of an uncorrected image. Under this situation, the OLS estimator is biased. It cannot handle explanatory variables with errors and cannot propagate errors appropriately from the RCPs to the corrected image. Therefore, it is essential to develop new feasible methods to overcome such a problem. This paper introduces a consistent adjusted least squares (CALS) estimator and proposes a relaxed consistent adjusted least squares (RCALS) estimator, with the latter being more general and flexible, for geometric correction or registration. These estimators have good capability in correcting errors contained in the RCPs, and in appropriately propagating errors of the RCPs to the corrected image, with and without prior information. The objective of the CALS and proposed RCALS estimators is to improve the accuracy of the measurement values by weakening the measurement errors. The conceptual arguments are substantiated by real remotely sensed data. Compared to the OLS estimator, the CALS and RCALS estimators give a superior overall performance in estimating the regression coefficients and the variance of measurement errors.
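
    The idea behind such adjusted estimators can be illustrated with a one-dimensional errors-in-variables example: ordinary least squares is attenuated when the explanatory coordinates contain error, while subtracting the (known) error variance from the normal equations removes the bias. This generic sketch illustrates the principle only and is not the paper's CALS/RCALS estimator.

        # OLS attenuation under coordinate error vs. a variance-corrected estimator.
        import numpy as np

        rng = np.random.default_rng(0)
        n, beta_true, sigma_u = 500, 2.0, 0.5

        x_true = rng.uniform(0, 10, n)                        # error-free coordinates
        x_obs = x_true + sigma_u * rng.standard_normal(n)     # RCPs observed with error
        y = beta_true * x_true + 0.2 * rng.standard_normal(n)

        xc, yc = x_obs - x_obs.mean(), y - y.mean()
        beta_ols = (xc @ yc) / (xc @ xc)                      # biased towards zero
        beta_adj = (xc @ yc) / (xc @ xc - n * sigma_u**2)     # error-variance corrected

        print(f"OLS: {beta_ols:.3f}  adjusted: {beta_adj:.3f}  true: {beta_true}")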

  17. Quality specifications for glucose meters: assessment by simulation modeling of errors in insulin dose.

    Science.gov (United States)

    Boyd, J C; Bruns, D E

    2001-02-01

    Proposed quality specifications for glucose meters allow results to be in error by 5-10% or more of the "true" concentration. Because meters are used as aids in the adjustment of insulin doses, we aimed to characterize the quantitative effect of meter error on the ability to identify the insulin dose appropriate for the true glucose concentration. Using Monte Carlo simulation, we generated random "true" glucose values within defined intervals. These values were converted to "measured" glucose values using mathematical models of glucose meters having defined imprecision (CV) and bias. For each combination of bias and imprecision, 10,000-20,000 true and measured glucose concentrations were matched with the corresponding insulin doses specified by selected insulin-dosing regimens. Discrepancies in prescribed doses were counted and their frequencies plotted in relation to bias and imprecision. For meters with a total analytical error of 5%, dosage errors occurred in approximately 8-23% of insulin doses. At 10% total error, 16-45% of doses were in error. Large errors of insulin dose (two-step or greater) occurred >5% of the time when the CV and/or bias exceeded 10-15%. Total dosage error rates were affected only slightly by choices of sliding scale among insulin dosage rules or by the range of blood glucose. To provide the intended insulin dosage 95% of the time required that both the bias and the CV of the glucose meter be <1% or <2%, depending on mean glucose concentrations and the rules for insulin dosing. Glucose meters that meet current quality specifications allow a large fraction of administered insulin doses to differ from the intended doses. The effects of such dosage errors on blood glucose and on patient outcomes require study.
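
    A minimal Monte Carlo sketch in the spirit of this design is shown below: apply a meter bias and CV to simulated "true" glucose values and count how often a simple sliding scale changes the prescribed dose. The sliding scale, glucose range and number of draws are illustrative assumptions, not the study's dosing rules.

        # Effect of glucose-meter bias/CV on sliding-scale insulin dose discrepancies.
        import numpy as np

        def sliding_scale_dose(glucose_mg_dl):
            """Toy sliding scale: one extra unit per 50 mg/dL above 150 mg/dL."""
            return int((glucose_mg_dl - 150) // 50) + 1 if glucose_mg_dl >= 150 else 0

        def dose_error_rate(bias_pct, cv_pct, n=20000, lo=80, hi=350, seed=0):
            rng = np.random.default_rng(seed)
            true = rng.uniform(lo, hi, n)
            measured = true * (1 + bias_pct / 100) * (1 + (cv_pct / 100) * rng.standard_normal(n))
            wrong = sum(sliding_scale_dose(t) != sliding_scale_dose(m)
                        for t, m in zip(true, measured))
            return wrong / n

        for cv in (2, 5, 10):     # zero bias, CV equal to the total analytical error
            print(f"CV {cv}% -> {100 * dose_error_rate(0, cv):.1f}% dose discrepancies")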

  18. Systematic Geometric Error Modeling for Workspace Volumetric Calibration of a 5-axis Turbine Blade Grinding Machine

    Institute of Scientific and Technical Information of China (English)

    Abdul Wahid Khan; Chen Wuyi

    2010-01-01

    A systematic geometric model has been presented for calibration of a newly designed 5-axis turbine blade grinding machine. This machine is designed to serve a specific purpose: to attain high accuracy and high efficiency grinding of turbine blades by eliminating the hand grinding process. Although its topology is RPPPR (P: prismatic; R: rotary), its design is quite distinct from competitive machine tools. As error quantification is the only way to investigate, maintain and improve its accuracy, calibration is recommended for its performance assessment and acceptance testing. A systematic geometric error modeling technique is implemented and 52 position-dependent and position-independent errors are identified, while considering the machine as five rigid bodies and eliminating the set-up errors of workpiece and cutting tool. 39 of them are found to be influential errors and are accommodated for finding the resultant effect between the cutting tool and the workpiece in the workspace volume. Rigid body kinematics techniques and homogeneous transformation matrices are used for error synthesis.
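
    The error-synthesis step with homogeneous transformation matrices (HTMs) can be sketched as follows for a reduced two-axis chain; the small-angle error parameters below are illustrative, whereas the full machine would chain five axis HTMs with its 39 influential errors.

        # Geometric error synthesis by composing ideal-motion and small-error HTMs.
        import numpy as np

        def error_htm(dx, dy, dz, ex, ey, ez):
            """Small-angle error HTM: translational (dx,dy,dz), angular (ex,ey,ez) errors."""
            return np.array([[1.0, -ez,  ey, dx],
                             [ ez, 1.0, -ex, dy],
                             [-ey,  ex, 1.0, dz],
                             [0.0, 0.0, 0.0, 1.0]])

        def trans(x, y, z):
            T = np.eye(4); T[:3, 3] = [x, y, z]; return T

        # Two prismatic axes: ideal motion composed with illustrative error HTMs.
        T_x = trans(100.0, 0.0, 0.0) @ error_htm(0.005, 0.002, -0.001, 2e-5, -1e-5, 3e-5)
        T_y = trans(0.0, 50.0, 0.0) @ error_htm(-0.003, 0.004, 0.002, -1e-5, 2e-5, 1e-5)

        actual = T_x @ T_y                                   # tool pose with errors
        ideal = trans(100.0, 0.0, 0.0) @ trans(0.0, 50.0, 0.0)
        print("resultant volumetric error (mm):", (actual - ideal)[:3, 3])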

  19. Comparison of Neural Network Error Measures for Simulation of Slender Marine Structures

    DEFF Research Database (Denmark)

    Christiansen, Niels H.; Voie, Per Erlend Torbergsen; Winther, Ole

    2014-01-01

    Training of an artificial neural network (ANN) adjusts the internal weights of the network in order to minimize a predefined error measure. This error measure is given by an error function. Several different error functions are suggested in the literature. However, the far most common measure...

  20. Accuracy of travel time distribution (TTD) models as affected by TTD complexity, observation errors, and model and tracer selection

    Science.gov (United States)

    Green, Christopher T.; Zhang, Yong; Jurgens, Bryant C.; Starn, J. Jeffrey; Landon, Matthew K.

    2014-01-01

    Analytical models of the travel time distribution (TTD) from a source area to a sample location are often used to estimate groundwater ages and solute concentration trends. The accuracies of these models are not well known for geologically complex aquifers. In this study, synthetic datasets were used to quantify the accuracy of four analytical TTD models as affected by TTD complexity, observation errors, model selection, and tracer selection. Synthetic TTDs and tracer data were generated from existing numerical models with complex hydrofacies distributions for one public-supply well and 14 monitoring wells in the Central Valley, California. Analytical TTD models were calibrated to synthetic tracer data, and prediction errors were determined for estimates of TTDs and conservative tracer (NO3−) concentrations. Analytical models included a new, scale-dependent dispersivity model (SDM) for two-dimensional transport from the watertable to a well, and three other established analytical models. The relative influence of the error sources (TTD complexity, observation error, model selection, and tracer selection) depended on the type of prediction. Geological complexity gave rise to complex TTDs in monitoring wells that strongly affected errors of the estimated TTDs. However, prediction errors for NO3− and median age depended more on tracer concentration errors. The SDM tended to give the most accurate estimates of the vertical velocity and other predictions, although TTD model selection had minor effects overall. Adding tracers improved predictions if the new tracers had different input histories. Studies using TTD models should focus on the factors that most strongly affect the desired predictions.

  1. The propagation of inventory-based positional errors into statistical landslide susceptibility models

    Science.gov (United States)

    Steger, Stefan; Brenning, Alexander; Bell, Rainer; Glade, Thomas

    2016-12-01

    There is unanimous agreement that a precise spatial representation of past landslide occurrences is a prerequisite to produce high quality statistical landslide susceptibility models. Even though perfectly accurate landslide inventories rarely exist, investigations of how landslide inventory-based errors propagate into subsequent statistical landslide susceptibility models are scarce. The main objective of this research was to systematically examine whether and how inventory-based positional inaccuracies of different magnitudes influence modelled relationships, validation results, variable importance and the visual appearance of landslide susceptibility maps. The study was conducted for a landslide-prone site located in the districts of Amstetten and Waidhofen an der Ybbs, eastern Austria, where an earth-slide point inventory was available. The methodological approach comprised an artificial introduction of inventory-based positional errors into the present landslide data set and an in-depth evaluation of subsequent modelling results. Positional errors were introduced by artificially changing the original landslide position by a mean distance of 5, 10, 20, 50 and 120 m. The resulting differently precise response variables were separately used to train logistic regression models. Odds ratios of predictor variables provided insights into modelled relationships. Cross-validation and spatial cross-validation enabled an assessment of predictive performances and permutation-based variable importance. All analyses were additionally carried out with synthetically generated data sets to further verify the findings under rather controlled conditions. The results revealed that an increasing positional inventory-based error was generally related to increasing distortions of modelling and validation results. However, the findings also highlighted that interdependencies between inventory-based spatial inaccuracies and statistical landslide susceptibility models are complex. The

  2. Impact of transport and modelling errors on the estimation of methane sources and sinks by inverse modelling

    Science.gov (United States)

    Locatelli, Robin; Bousquet, Philippe; Chevallier, Frédéric

    2013-04-01

    Since the nineties, inverse modelling by assimilating atmospheric measurements into a chemical transport model (CTM) has been used to derive sources and sinks of atmospheric trace gases. More recently, the high global warming potential of methane (CH4) and unexplained variations of its atmospheric mixing ratio caught the attention of several research groups. Indeed, the diversity and the variability of methane sources induce high uncertainty in the present and the future evolution of the CH4 budget. With the increase of available measurement data to constrain inversions (satellite data, high frequency surface and tall tower observations, FTIR spectrometry,...), the main limiting factor is about to become the representation of atmospheric transport in CTMs. Indeed, errors in transport modelling convert directly into flux changes when assuming perfect transport in atmospheric inversions. Hence, we propose an inter-model comparison in order to quantify the impact of transport and modelling errors on the CH4 fluxes estimated within a variational inversion framework. Several inversion experiments are conducted using the same set-up (prior emissions, measurement and prior errors, OH field, initial conditions) of the variational system PYVAR, developed at LSCE (Laboratoire des Sciences du Climat et de l'Environnement, France). Nine different models (ACTM, IFS, IMPACT, IMPACT1x1, MOZART, PCTM, TM5, TM51x1 and TOMCAT) used in the TRANSCOM-CH4 experiment (Patra et al, 2011) provide synthetic measurement data at up to 280 surface sites to constrain the inversions performed using the PYVAR system. Only the CTM (and the meteorological drivers which drive them) used to create the pseudo-observations varies among inversions. Consequently, the comparisons of the nine inverted methane fluxes obtained for 2005 give a good order of magnitude of the impact of transport and modelling errors on the estimated fluxes with current and future networks. It is shown that transport and modelling errors

  3. Research on identifying the dynamic error model of strapdown gyro on 3-axis turntable

    Institute of Scientific and Technical Information of China (English)

    WANG Hai; REN Shun-qing; WANG Chang-hong

    2005-01-01

    The dynamic errors of gyros are important error sources of a strapdown inertial navigation system. In order to identify the dynamic error model coefficients accurately, the static error model coefficients, which lay a foundation for compensation while identifying the dynamic error model, are identified in the gravity acceleration fields by using the angular position function of the three-axis turntable. The angular acceleration and angular velocity are excited on the input, output and spin axes of the gyros when the outer axis and the middle axis of a three-axis turntable are in the uniform angular velocity state simultaneously, while the inner axis of the turntable is in different static angular positions. 8 groups of data are sampled when the inner axis is in 8 different angular positions. These data are a function of the middle axis positions and the inner axis positions. For these data, the harmonic analysis method is applied twice, versus the middle axis positions and the inner axis positions respectively, so that the dynamic error model coefficients are finally identified through the least squares method. In the meantime the optimal angular velocities of the outer axis and the middle axis are selected by computing the determination value of the information matrix.

  4. Exact sampling of the unobserved covariates in Bayesian spline models for measurement error problems.

    Science.gov (United States)

    Bhadra, Anindya; Carroll, Raymond J

    2016-07-01

    In truncated polynomial spline or B-spline models where the covariates are measured with error, a fully Bayesian approach to model fitting requires the covariates and model parameters to be sampled at every Markov chain Monte Carlo iteration. Sampling the unobserved covariates poses a major computational problem and usually Gibbs sampling is not possible. This forces the practitioner to use a Metropolis-Hastings step which might suffer from unacceptable performance due to poor mixing and might require careful tuning. In this article we show for the cases of truncated polynomial spline or B-spline models of degree equal to one, the complete conditional distribution of the covariates measured with error is available explicitly as a mixture of double-truncated normals, thereby enabling a Gibbs sampling scheme. We demonstrate via a simulation study that our technique performs favorably in terms of computational efficiency and statistical performance. Our results indicate up to 62 and 54 % increase in mean integrated squared error efficiency when compared to existing alternatives while using truncated polynomial splines and B-splines respectively. Furthermore, there is evidence that the gain in efficiency increases with the measurement error variance, indicating the proposed method is a particularly valuable tool for challenging applications that present high measurement error. We conclude with a demonstration on a nutritional epidemiology data set from the NIH-AARP study and by pointing out some possible extensions of the current work.

  5. Thermal Error Modeling of a Machining Center Using Grey System Theory and Adaptive Network-Based Fuzzy Inference System

    Science.gov (United States)

    Wang, Kun-Chieh; Tseng, Pai-Chung; Lin, Kuo-Ming

    Thermal effect on machine tools is a well-recognized problem in an environment of increasing demand for product quality. The performance of a thermal error compensation system typically depends on the accuracy and robustness of the thermal error model. This work presents a novel thermal error model utilizing two mathematical schemes: the grey system theory and the adaptive network-based fuzzy inference system (ANFIS). First, the measured temperature and deformation results are analyzed via the grey system theory to obtain the influence ranking of temperature ascents on the thermal drift of the spindle. Then, using the highly ranked temperature ascents as inputs for the ANFIS and training these data by the hybrid learning rule, a thermal compensation model is constructed. The grey system theory effectively reduces the number of temperature sensors needed on a machine structure for prediction, and the ANFIS has the advantages of good accuracy and robustness. For testing the performance of the proposed ANFIS model, a real-cutting operation test was conducted. Comparison results demonstrate that the modeling scheme of the ANFIS coupled with the grey system theory has good predictive ability.

  6. Correction for Measurement Error from Genotyping-by-Sequencing in Genomic Variance and Genomic Prediction Models

    DEFF Research Database (Denmark)

    Ashraf, Bilal; Janss, Luc; Jensen, Just

    Genotyping-by-sequencing (GBSeq) is becoming a cost-effective genotyping platform for species without available SNP arrays. GBSeq sequences short reads from restriction sites covering a limited part of the genome (e.g., 5-10%) with low sequencing depth per individual (e.g., 5-10X per....... In the current work we show how the correction for measurement error in GBSeq can also be applied in whole genome genomic variance and genomic prediction models. Bayesian whole-genome random regression models are proposed to allow implementation of large-scale SNP-based models with a per-SNP correction...... for measurement error. We show correct retrieval of genomic explained variance, and improved genomic prediction when accounting for the measurement error in GBSeq data...

  7. An EOQ model for imperfect quality items with partial backordering under screening errors

    Directory of Open Access Journals (Sweden)

    Ehsan Sharifi

    2015-12-01

    Full Text Available In practice, when a lot is received, an inspection process is necessary to identify the defective items. In addition, the inspection process itself is not error-free and may contain misclassification errors. In this paper, an economic order quantity model for imperfect quality items with partial backordering under screening errors is studied. The objective is to maximize the expected annual profit by optimizing the order size and the maximum number of backordered units. The aim of this paper is also to develop a general and practical model that is more realistic in competitive commercial situations. To validate the developed model, a case study and a numerical example are presented, and a sensitivity analysis is also carried out.

  8. VARYING COEFFICIENT MODELS FOR DATA WITH AUTO-CORRELATED ERROR PROCESS.

    Science.gov (United States)

    Chen, Zhao; Li, Runze; Li, Yan

    2015-04-01

    Varying coefficient models have been popular in the literature. In this paper, we propose a profile least squares estimation procedure for the regression coefficients when the random error is an auto-regressive (AR) process. We further study the asymptotic properties of the proposed procedure, and establish the asymptotic normality of the resulting estimate. We show that the resulting estimate of the regression coefficients has the same asymptotic bias and variance as the local linear estimate for varying coefficient models with independent and identically distributed observations. We apply the SCAD variable selection procedure (Fan and Li, 2001) to reduce the model complexity of the AR error process. Numerical comparison and finite sample performance of the resulting estimate are examined by Monte Carlo studies. Our simulation results demonstrate that the proposed procedure is much more efficient than the one ignoring the error correlation. The proposed methodology is illustrated by a real data example.

  9. The mean error estimation of TOPSIS method using a fuzzy reference models

    Directory of Open Access Journals (Sweden)

    Wojciech Sałabun

    2013-04-01

    Full Text Available The Technique for Order of Preference by Similarity to Ideal Solution (TOPSIS) is a commonly used multi-criteria decision-making method. A number of authors have proposed improvements, known as extensions, of the TOPSIS method, but these extensions have not been examined with respect to accuracy. Accuracy estimation is very difficult because reference values for the obtained results are not known; therefore, the results of each extension are compared to one another. In this paper, the author proposes a new method to estimate the mean error of TOPSIS with the use of a fuzzy reference model (FRM). This method provides reference values. In experiments involving 1,000 models, 28 million cases are simulated to estimate the mean error. Results of four commonly used normalization procedures are compared. Additionally, the author demonstrates the relationship between the value of the mean error and the nonlinearity of models and the number of alternatives.
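
    For reference, a minimal TOPSIS implementation with one common (vector) normalisation is sketched below; the study compares several normalisation procedures, of which this is only one, and the decision matrix is a made-up example.

        # Minimal TOPSIS: normalise, weight, distance to ideal/anti-ideal, closeness.
        import numpy as np

        def topsis(decision_matrix, weights, benefit):
            """Rank alternatives; benefit[j] is True for benefit criteria, False for cost."""
            X = np.asarray(decision_matrix, float)
            V = X / np.linalg.norm(X, axis=0) * np.asarray(weights, float)
            ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
            anti = np.where(benefit, V.min(axis=0), V.max(axis=0))
            d_plus = np.linalg.norm(V - ideal, axis=1)
            d_minus = np.linalg.norm(V - anti, axis=1)
            return d_minus / (d_plus + d_minus)    # closeness coefficient, higher is better

        scores = topsis([[250, 16, 12], [200, 20, 8], [300, 11, 14]],
                        weights=[0.4, 0.35, 0.25], benefit=[False, True, True])
        print("ranking (best first):", scores.argsort()[::-1])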

  10. Biases in atmospheric CO2 estimates from correlated meteorology modeling errors

    Science.gov (United States)

    Miller, S. M.; Hayek, M. N.; Andrews, A. E.; Fung, I.; Liu, J.

    2015-03-01

    Estimates of CO2 fluxes that are based on atmospheric measurements rely upon a meteorology model to simulate atmospheric transport. These models provide a quantitative link between the surface fluxes and CO2 measurements taken downwind. Errors in the meteorology can therefore cause errors in the estimated CO2 fluxes. Meteorology errors that correlate or covary across time and/or space are particularly worrisome; they can cause biases in modeled atmospheric CO2 that are easily confused with the CO2 signal from surface fluxes, and they are difficult to characterize. In this paper, we leverage an ensemble of global meteorology model outputs combined with a data assimilation system to estimate these biases in modeled atmospheric CO2. In one case study, we estimate the magnitude of month-long CO2 biases relative to CO2 boundary layer enhancements and quantify how that answer changes if we either include or remove error correlations or covariances. In a second case study, we investigate which meteorological conditions are associated with these CO2 biases. In the first case study, we estimate uncertainties of 0.5-7 ppm in monthly-averaged CO2 concentrations, depending upon location (95% confidence interval). These uncertainties correspond to 13-150% of the mean afternoon CO2 boundary layer enhancement at individual observation sites. When we remove error covariances, however, this range drops to 2-22%. Top-down studies that ignore these covariances could therefore underestimate the uncertainties and/or propagate transport errors into the flux estimate. In the second case study, we find that these month-long errors in atmospheric transport are anti-correlated with temperature and planetary boundary layer (PBL) height over terrestrial regions. In marine environments, by contrast, these errors are more strongly associated with weak zonal winds. Many errors, however, are not correlated with a single meteorological parameter, suggesting that a single meteorological proxy is

  11. An Updating Method for Structural Dynamics Models with Uncertainties

    Directory of Open Access Journals (Sweden)

    B. Faverjon

    2008-01-01

    Full Text Available One challenge in the numerical simulation of industrial structures is model validation based on experimental data. Among the indirect or parametric methods available, one is based on the “mechanical” concept of constitutive relation error estimator introduced in order to quantify the quality of finite element analyses. In the case of uncertain measurements obtained from a family of quasi-identical structures, parameters need to be modeled randomly. In this paper, we consider the case of a damped structure modeled with stochastic variables. Polynomial chaos expansion and reduced bases are used to solve the stochastic problems involved in the calculation of the error.

  12. Heteroscedasticity and/or Autocorrelation Checks in Longitudinal Nonlinear Models with Elliptical and AR(1) Errors

    Institute of Scientific and Technical Information of China (English)

    Chun-Zheng CAO; Jin-Guan LIN

    2012-01-01

    The aim of this paper is to study the tests for variance heterogeneity and/or autocorrelation in nonlinear regression models with elliptical and AR(1) errors. The elliptical class includes several symmetric multivariate distributions such as the normal, Student-t, and power exponential, among others. Several diagnostic tests using score statistics and their adjustments are constructed. The asymptotic properties, including the asymptotic chi-square distribution and approximate powers under local alternatives of the score statistics, are studied. The properties of the test statistics are investigated through Monte Carlo simulations. A data set previously analyzed under normal errors is reanalyzed under elliptical models to illustrate our test methods.

  13. Estimating numerical errors due to operator splitting in global atmospheric chemistry models: Transport and chemistry

    Science.gov (United States)

    Santillana, Mauricio; Zhang, Lin; Yantosca, Robert

    2016-01-01

    We present upper bounds for the numerical errors introduced when using operator splitting methods to integrate transport and non-linear chemistry processes in global chemical transport models (CTM). We show that (a) operator splitting strategies that evaluate the stiff non-linear chemistry operator at the end of the time step are more accurate, and (b) the results of numerical simulations that use different operator splitting strategies differ by at most 10%, in a prototype one-dimensional non-linear chemistry-transport model. We find similar upper bounds in operator splitting numerical errors in global CTM simulations.
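
    The kind of ordering comparison behind finding (a) can be sketched on a toy one-dimensional problem: first-order (Lie) splitting of periodic upwind advection and a stiff nonlinear decay term, with the two operator orderings compared against a small-step reference (computed here with the chemistry-last ordering). The grid, wind speed and rate constant below are illustrative assumptions, not the prototype model of the paper.

        # Operator-splitting orderings for advection + nonlinear chemistry (c' = -k c^2).
        import numpy as np

        nx, L, u, k = 100, 1.0, 1.0, 5.0
        dx = L / nx
        c0 = np.exp(-200 * (np.linspace(0, L, nx) - 0.3) ** 2) + 0.1

        def advect(c, dt):                 # periodic first-order upwind advection
            return c - u * dt / dx * (c - np.roll(c, 1))

        def chem(c, dt):                   # exact solution of c' = -k c^2 over dt
            return c / (1 + k * dt * c)

        def run(order, dt, t_end=0.3):
            c = c0.copy()
            for _ in range(int(round(t_end / dt))):
                c = chem(advect(c, dt), dt) if order == "chemistry_last" else advect(chem(c, dt), dt)
            return c

        ref = run("chemistry_last", dt=1e-4)    # near-converged reference
        for order in ("chemistry_last", "chemistry_first"):
            err = np.max(np.abs(run(order, dt=2e-3) - ref)) / np.max(ref)
            print(order, f"relative error: {100 * err:.2f}%")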

  14. A Two-Warehouse Inventory Model with Imperfect Quality and Inspection Errors

    Directory of Open Access Journals (Sweden)

    Tie Wang

    2012-09-01

    Full Text Available In this study, we establish a new inventory model with two warehouses, imperfect quality and inspection errors simultaneously. The mathematical model, which maximizes the annual total profit, and the solution procedure are developed. As a byproduct, we correct some technical errors in the development of the optimal ordering policies in the above two papers. Moreover, we find a mild condition, satisfied by most common distributions, that ensures the concavity of ETPU(y). Proposition 1 is used to determine the optimal solution of ETPU(y).

  15. Error analysis for momentum conservation in Atomic-Continuum Coupled Model

    Science.gov (United States)

    Yang, Yantao; Cui, Junzhi; Han, Tiansi

    2016-08-01

    Atomic-Continuum Coupled Model (ACCM) is a multiscale computation model proposed by Xiang et al. (in IOP conference series materials science and engineering, 2010), which is used to study and simulate the dynamics and thermal-mechanical coupling behavior of crystal materials, especially metallic crystals. In this paper, we construct a set of interpolation basis functions for the common BCC and FCC lattices, respectively, implementing the computation of ACCM. Based on this interpolation approximation, we give a rigorous mathematical analysis of the error in the momentum conservation equation introduced by ACCM, and derive a sequence of inequalities that bound the error. A numerical experiment is carried out to verify our result.

  16. Accounting for spatial correlation errors in the assimilation of GRACE into hydrological models through localization

    Science.gov (United States)

    Khaki, M.; Schumacher, M.; Forootan, E.; Kuhn, M.; Awange, J. L.; van Dijk, A. I. J. M.

    2017-10-01

    Assimilation of terrestrial water storage (TWS) information from the Gravity Recovery And Climate Experiment (GRACE) satellite mission can provide significant improvements in hydrological modelling. However, the rather coarse spatial resolution of GRACE TWS and its spatially correlated errors pose considerable challenges for achieving realistic assimilation results. Consequently, successful data assimilation depends on rigorous modelling of the full error covariance matrix of the GRACE TWS estimates, as well as realistic error behavior for hydrological model simulations. In this study, we assess the application of local analysis (LA) to maximize the contribution of GRACE TWS in hydrological data assimilation. For this, we assimilate GRACE TWS into the World-Wide Water Resources Assessment system (W3RA) over the Australian continent while applying LA and accounting for existing spatial correlations using the full error covariance matrix. GRACE TWS data is applied with different spatial resolutions including 1° to 5° grids, as well as basin averages. The ensemble-based sequential filtering technique of the Square Root Analysis (SQRA) is applied to assimilate TWS data into W3RA. For each spatial scale, the performance of the data assimilation is assessed through comparison with independent in-situ groundwater and soil moisture observations. Overall, the results demonstrate that LA is able to stabilize the inversion process (within the implementation of the SQRA filter), leading to fewer errors for all spatial scales considered, with an average RMSE improvement of 54% (e.g., 52.23 mm down to 26.80 mm) for all the cases with respect to groundwater in-situ measurements. Validating the assimilated results with groundwater observations indicates that LA leads to 13% better (in terms of RMSE) assimilation results compared to the cases with Gaussian error assumptions. This highlights the great potential of LA and the use of the full error covariance matrix of GRACE TWS

  17. Design considerations for case series models with exposure onset measurement error.

    Science.gov (United States)

    Mohammed, Sandra M; Dalrymple, Lorien S; Sentürk, Damla; Nguyen, Danh V

    2013-02-28

    The case series model allows for estimation of the relative incidence of events, such as cardiovascular events, within a pre-specified time window after an exposure, such as an infection. The method requires only cases (individuals with events) and controls for all fixed/time-invariant confounders. The measurement error case series model extends the original case series model to handle imperfect data, where the timing of an infection (exposure) is not known precisely. In this work, we propose a method for power/sample size determination for the measurement error case series model. Extensive simulation studies are used to assess the accuracy of the proposed sample size formulas. We also examine the magnitude of the relative loss of power due to exposure onset measurement error, compared with the ideal situation where the time of exposure is measured precisely. To facilitate the design of case series studies, we provide publicly available web-based tools for determining power/sample size for both the measurement error case series model as well as the standard case series model.

  18. Preventable Medical Errors Driven Modeling of Medical Best Practice Guidance Systems.

    Science.gov (United States)

    Ou, Andrew Y-Z; Jiang, Yu; Wu, Po-Liang; Sha, Lui; Berlin, Richard B

    2017-01-01

    In a medical environment such as Intensive Care Unit, there are many possible reasons to cause errors, and one important reason is the effect of human intellectual tasks. When designing an interactive healthcare system such as medical Cyber-Physical-Human Systems (CPHSystems), it is important to consider whether the system design can mitigate the errors caused by these tasks or not. In this paper, we first introduce five categories of generic intellectual tasks of humans, where tasks among each category may lead to potential medical errors. Then, we present an integrated modeling framework to model a medical CPHSystem and use UPPAAL as the foundation to integrate and verify the whole medical CPHSystem design models. With a verified and comprehensive model capturing the human intellectual tasks effects, we can design a more accurate and acceptable system. We use a cardiac arrest resuscitation guidance and navigation system (CAR-GNSystem) for such medical CPHSystem modeling. Experimental results show that the CPHSystem models help determine system design flaws and can mitigate the potential medical errors caused by the human intellectual tasks.

  19. ResQ: An Approach to Unified Estimation of B-Factor and Residue-Specific Error in Protein Structure Prediction.

    Science.gov (United States)

    Yang, Jianyi; Wang, Yan; Zhang, Yang

    2016-02-22

    Computer-based structure prediction becomes a major tool to provide large-scale structure models for annotating biological function of proteins. Information of residue-level accuracy and thermal mobility (or B-factor), which is critical to decide how biologists utilize the predicted models, is however missed in most structure prediction pipelines. We developed ResQ for unified residue-level model quality and B-factor estimations by combining local structure assembly variations with sequence-based and structure-based profiling. ResQ was tested on 635 non-redundant proteins with structure models generated by I-TASSER, where the average difference between estimated and observed distance errors is 1.4Å for the confidently modeled proteins. ResQ was further tested on structure decoys from CASP9-11 experiments, where the error of local structure quality prediction is consistently lower than or comparable to other state-of-the-art predictors. Finally, ResQ B-factor profile was used to assist molecular replacement, which resulted in successful solutions on several proteins that could not be solved from constant B-factor settings.

  20. Structural system identification: Structural dynamics model validation

    Energy Technology Data Exchange (ETDEWEB)

    Red-Horse, J.R.

    1997-04-01

    Structural system identification is concerned with the development of systematic procedures and tools for developing predictive analytical models based on a physical structure's dynamic response characteristics. It is a multidisciplinary process that involves the ability (1) to define high fidelity physics-based analysis models, (2) to acquire accurate test-derived information for physical specimens using diagnostic experiments, (3) to validate the numerical simulation model by reconciling differences that inevitably exist between the analysis model and the experimental data, and (4) to quantify uncertainties in the final system models and subsequent numerical simulations. The goal of this project was to develop structural system identification techniques and software suitable for both research and production applications in code and model validation.

  1. Behaviour of large scale structures of the electron content as a key parameterfor range errors in GNSS applications

    Directory of Open Access Journals (Sweden)

    L. Ciraolo

    2004-06-01

    Full Text Available The Total Electron Content (TEC) of the ionosphere is a key parameter for describing the ionospheric state. This paper deals with the large scale behaviour of TEC under low and high solar activity conditions. Large scale structures of the plasma density are formed by fundamental ionospheric processes mainly driven by solar radiation input, neutral winds and electric fields. The monitoring of large scale structures contributes to a comprehensive understanding of these coupling mechanisms, which are rather complex particularly under perturbed geomagnetic conditions. The paper addresses techniques to monitor TEC with sufficient accuracy of a few TEC units (10^16 m^-2) to measure large scale structures over Europe and over the polar areas. The availability of GPS data from global GPS receiver networks, e.g., those from the International GPS Service (IGS), is dense enough to generate TEC maps on a continuous basis. A model assisted technique is briefly described for mapping TEC over the European and polar areas. A statistical estimation of horizontal TEC gradients reveals large scale gradients of up to about 6 TECU/1000 km under high solar activity conditions at an occurrence probability level of about 1%. Occasionally, during severe ionospheric storms this value may increase by a factor of 10 or even more. A close correlation of large scale gradients and the geomagnetic activity has been found, giving the chance to forecast TEC gradient amplitudes by using predicted geomagnetic indices. Since TEC is proportional to first-order range errors in Global Satellite Navigation Systems (GNSS) such as the US GPS and the Russian GLONASS, the study of the behaviour of this parameter has a practical meaning in GNSS based navigation and positioning. The paper addresses the close relationship between TEC and ranging errors in GNSS. With a view to Galileo, Europe's own planned global satellite navigation system, some aspects related to the mitigation of ionospheric

  2. Factors influencing superimposition error of 3D cephalometric landmarks by plane orientation method using 4 reference points: 4 point superimposition error regression model.

    Directory of Open Access Journals (Sweden)

    Jae Joon Hwang

    Full Text Available Superimposition has been used as a method to evaluate the changes of orthodontic or orthopedic treatment in the dental field. With the introduction of cone beam CT (CBCT), evaluating 3 dimensional changes after treatment became possible by superimposition. 4 point plane orientation is one of the simplest ways to achieve superimposition of 3 dimensional images. To find factors influencing the superimposition error of cephalometric landmarks by the 4 point plane orientation method and to evaluate the reproducibility of cephalometric landmarks for analyzing superimposition error, 20 patients were analyzed who had a normal skeletal and occlusal relationship and underwent CBCT for the diagnosis of temporomandibular disorder. The nasion, sella turcica, basion and the midpoint between the left and the right most posterior point of the lesser wing of the sphenoidal bone were used to define a three-dimensional (3D) anatomical reference co-ordinate system. Another 15 reference cephalometric points were also determined three times in the same image. The reorientation error of each landmark could be explained substantially (23%) by a linear regression model, which consists of 3 factors describing the position of each landmark relative to the reference axes and the locating error. The 4 point plane orientation system may produce an amount of reorientation error that varies according to the perpendicular distance between the landmark and the x-axis; the reorientation error also increases as the locating error and the shift of the reference axes viewed from each landmark increase. Therefore, in order to reduce the reorientation error, the accuracy of all landmarks including the reference points is important. Construction of the regression model using reference points of greater precision is required for the clinical application of this model.

  3. Errors in causal inference: an organizational schema for systematic error and random error.

    Science.gov (United States)

    Suzuki, Etsuji; Tsuda, Toshihide; Mitsuhashi, Toshiharu; Mansournia, Mohammad Ali; Yamamoto, Eiji

    2016-11-01

    To provide an organizational schema for systematic error and random error in estimating causal measures, aimed at clarifying the concept of errors from the perspective of causal inference. We propose to divide systematic error into structural error and analytic error. With regard to random error, our schema shows its four major sources: nondeterministic counterfactuals, sampling variability, a mechanism that generates exposure events and measurement variability. Structural error is defined from the perspective of counterfactual reasoning and divided into nonexchangeability bias (which comprises confounding bias and selection bias) and measurement bias. Directed acyclic graphs are useful to illustrate this kind of error. Nonexchangeability bias implies a lack of "exchangeability" between the selected exposed and unexposed groups. A lack of exchangeability is not a primary concern of measurement bias, justifying its separation from confounding bias and selection bias. Many forms of analytic errors result from the small-sample properties of the estimator used and vanish asymptotically. Analytic error also results from wrong (misspecified) statistical models and inappropriate statistical methods. Our organizational schema is helpful for understanding the relationship between systematic error and random error from a previously less investigated aspect, enabling us to better understand the relationship between accuracy, validity, and precision. Copyright © 2016 Elsevier Inc. All rights reserved.

  4. An effective approach for the minimization of errors in capacitance-voltage carrier profiling of quantum structures

    Energy Technology Data Exchange (ETDEWEB)

    Biswas, Dipankar, E-mail: diiibiswas@yahoo.co.in; Panda, Siddhartha [Institute of Radiophysics and Electronics, University of Calcutta, 92 A. P. C. Road, Kolkata 700009 (India)

    2014-04-07

    Experimental capacitance–voltage (C-V) profiling of semiconductor heterojunctions and quantum wells has remained ever important and relevant. The apparent carrier distributions (ACDs) thus obtained reveal the carrier depletions, carrier peaks and their positions, in and around the quantum structures. Inevitable errors, encountered in such measurements, are the deviations of the peak concentrations of the ACDs and their positions, from the actual carrier peaks obtained from quantum mechanical computations with the fundamental parameters. In spite of the very wide use of the C-V method, comprehensive discussions on the qualitative and quantitative nature of the errors remain wanting. The errors are dependent on the fundamental parameters, the temperature of measurements, the Debye length, and the series resistance. In this paper, the errors have been studied with doping concentration, band offset, and temperature. From this study, a rough estimate may be drawn about the error. It is seen that the error in the position of the ACD peak decreases at higher doping, higher band offset, and lower temperature, whereas the error in the peak concentration changes in a strange fashion. A completely new method is introduced, for derivation of the carrier profiles from C-V measurements on quantum structures to minimize errors which are inevitable in the conventional formulation.
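
    For reference, the conventional route from a measured C-V curve to the apparent carrier distribution (the quantity whose errors are discussed above) uses the depletion approximation. The sketch below applies the textbook formulas to a synthetic C-V curve; the material constants, diode area and bias sweep are illustrative assumptions.

```python
import numpy as np

q = 1.602e-19                  # elementary charge, C
eps_s = 12.9 * 8.854e-12       # semiconductor permittivity, F/m (GaAs, illustrative)
A = 1e-7                       # diode area, m^2 (illustrative)

# Synthetic C-V data standing in for a measurement (reverse-bias sweep)
V = np.linspace(0.0, -3.0, 200)                 # bias, V
C = 2e-12 / np.sqrt(1.0 + np.abs(V))            # capacitance, F (toy curve)

# Depletion-approximation formulas for the apparent carrier distribution:
#   apparent depth      w(V)    = eps_s * A / C(V)
#   apparent density    N_CV(w) = 2 / ( q * eps_s * A^2 * |d(1/C^2)/dV| )
w = eps_s * A / C
d_invC2_dV = np.gradient(1.0 / C**2, V)
N_cv = 2.0 / (q * eps_s * A**2 * np.abs(d_invC2_dV))

for depth, dens in list(zip(w * 1e9, N_cv * 1e-6))[::50]:
    print(f"depth {depth:8.1f} nm   apparent carrier density {dens:9.3e} cm^-3")
```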

  5. Frequency Weighted Model Order Reduction Technique and Error Bounds for Discrete Time Systems

    Directory of Open Access Journals (Sweden)

    Muhammad Imran

    2014-01-01

    for the whole frequency range. However, certain applications (like controller reduction) require frequency weighted approximation, which introduces the concept of using frequency weights in model reduction techniques. Limitations of some existing frequency weighted model reduction techniques include the lack of stability of reduced order models (for the two-sided weighting case) and the lack of frequency response error bounds. A new frequency weighted technique for balanced model reduction of discrete time systems is proposed. The proposed technique guarantees stable reduced order models even when two-sided weightings are present. An efficient technique for computing the frequency weighted Gramians is also proposed. Results are compared with other existing frequency weighted model reduction techniques for discrete time systems. Moreover, the proposed technique yields frequency response error bounds.
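
    The frequency-weighted algorithm proposed in the paper is not reproduced here, but the unweighted discrete-time balanced truncation that it builds on, together with the classical twice-the-sum-of-discarded-Hankel-singular-values error bound, can be sketched compactly. The toy system below is an assumption for illustration.

```python
import numpy as np
from scipy.linalg import solve_discrete_lyapunov, cholesky, svd

def balanced_truncation_dt(A, B, C, r):
    """Unweighted square-root balanced truncation of a stable discrete-time (A, B, C)."""
    Wc = solve_discrete_lyapunov(A, B @ B.T)        # controllability Gramian
    Wo = solve_discrete_lyapunov(A.T, C.T @ C)      # observability Gramian
    Lc = cholesky(Wc, lower=True)                   # Wc = Lc Lc^T
    Lo = cholesky(Wo, lower=True)                   # Wo = Lo Lo^T
    U, hsv, Vt = svd(Lo.T @ Lc)                     # hsv = Hankel singular values
    S = np.diag(hsv[:r] ** -0.5)
    T = Lc @ Vt.T[:, :r] @ S                        # right projection
    Ti = S @ U[:, :r].T @ Lo.T                      # left projection (Ti @ T = I_r)
    Ar, Br, Cr = Ti @ A @ T, Ti @ B, C @ T
    bound = 2.0 * hsv[r:].sum()                     # a priori H-infinity error bound
    return Ar, Br, Cr, hsv, bound

# Toy stable discrete-time system (illustrative)
rng = np.random.default_rng(1)
n = 6
A = np.diag(rng.uniform(0.1, 0.8, n))
B = rng.normal(size=(n, 1))
C = rng.normal(size=(1, n))

Ar, Br, Cr, hsv, bound = balanced_truncation_dt(A, B, C, r=2)
print("Hankel singular values:", np.round(hsv, 4))
print("H-infinity error bound for the 2-state model:", round(bound, 4))
```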

  6. Wide-aperture laser beam measurement using transmission diffuser: errors modeling

    Science.gov (United States)

    Matsak, Ivan S.

    2015-06-01

    Instrumental errors in measuring the diameter of a wide-aperture laser beam were modeled in order to build a measurement setup and justify its metrological characteristics. The modeled setup is based on a CCD camera and a transmission diffuser. This method is appropriate for precision measurement of large laser beam widths from 10 mm up to 1000 mm. It is impossible to measure such beams with other methods based on a slit, pinhole, knife edge or direct CCD camera measurement. The method is suitable for continuous and pulsed laser irradiation. However, the transmission diffuser method lacks the metrological justification required in the field of wide-aperture beam forming system verification. Since no standard wide-aperture flat-top beam is available, modelling is the preferred way to provide basic reference points for developing the measurement system. Modelling was conducted in MathCAD. A super-Lorentz distribution with shape parameter 6-12 was used as a model of the beam. Theoretical evaluation showed that the key parameters influencing the error are the relative beam size, spatial non-uniformity of the diffuser, lens distortion, physical vignetting, CCD spatial resolution and effective camera ADC resolution. Errors were modeled for the 90%-of-power beam diameter criterion. The 12th-order super-Lorentz distribution was the primary model because it closely matches the experimental distribution at the output of the test beam forming system, although other orders were also used. Analytic expressions were obtained by analyzing the modelling results for each influencing factor. It was shown that an error below 1% is attainable through a suitable choice of the parameters in these expressions; the choice was based on the parameters of commercially available components of the setup. The method can reach a 0.1% error level when calibration procedures and multiple measurements are used.
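
    As a rough illustration of the beam model referred to above, the sketch below builds a radially symmetric super-Lorentz profile, I(r) proportional to 1/(1 + (r/w)^p), and extracts the diameter containing 90% of the power from the encircled energy. The functional form, grid, pixel pitch and parameter values are assumptions for illustration, not the MathCAD model from the study.

```python
import numpy as np

def encircled_power_diameter(I, dx, fraction=0.9):
    """Diameter of the centred circle containing `fraction` of the total power."""
    n = I.shape[0]
    y, x = np.mgrid[:n, :n]
    r = np.hypot(x - (n - 1) / 2, y - (n - 1) / 2) * dx
    order = np.argsort(r.ravel())
    cum = np.cumsum(I.ravel()[order])
    idx = np.searchsorted(cum, fraction * cum[-1])
    return 2.0 * r.ravel()[order][idx]

# Super-Lorentz ("flat-top") beam model of order p and radius parameter w
n, dx = 512, 0.5          # grid size, pixel pitch in mm (assumptions)
w, p = 60.0, 12           # mm, shape parameter within the abstract's 6-12 range
y, x = np.mgrid[:n, :n]
r = np.hypot(x - (n - 1) / 2, y - (n - 1) / 2) * dx
I = 1.0 / (1.0 + (r / w) ** p)

d90 = encircled_power_diameter(I, dx, fraction=0.9)
print(f"90%-power beam diameter: {d90:.1f} mm")
```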

  7. A Novel Error Model of Optical Systems and an On-Orbit Calibration Method for Star Sensors

    Directory of Open Access Journals (Sweden)

    Shuang Wang

    2015-12-01

    Full Text Available In order to improve the on-orbit measurement accuracy of star sensors, the effects of image-plane rotary error, image-plane tilt error and distortions of optical systems resulting from the on-orbit thermal environment were studied in this paper. Since these issues will affect the precision of star image point positions, in this paper, a novel measurement error model based on the traditional error model is explored. Due to the orthonormal characteristics of image-plane rotary-tilt errors and the strong nonlinearity among these error parameters, it is difficult to calibrate all the parameters simultaneously. To solve this difficulty, for the new error model, a modified two-step calibration method based on the Extended Kalman Filter (EKF and Least Square Methods (LSM is presented. The former one is used to calibrate the main point drift, focal length error and distortions of optical systems while the latter estimates the image-plane rotary-tilt errors. With this calibration method, the precision of star image point position influenced by the above errors is greatly improved from 15.42% to 1.389%. Finally, the simulation results demonstrate that the presented measurement error model for star sensors has higher precision. Moreover, the proposed two-step method can effectively calibrate model error parameters, and the calibration precision of on-orbit star sensors is also improved obviously.

  8. Structural dynamic modifications via models

    Indian Academy of Sciences (India)

    T K Kundra

    2000-06-01

    Structural dynamic modification techniques attempt to reduce dynamic design time and can be implemented beginning with spatial models of structures, dynamic test data or updated models. The models assumed in this discussion are mathematical models, namely mass, stiffness, and damping matrices of the equations of motion of a structure. These models are identified/extracted from dynamic test data viz. frequency response functions (FRFs). Alternatively these models could have been obtained by adjusting or updating the finite element model of the structure in the light of the test data. The methods of structural modification for getting desired dynamic characteristics by using modifiers namely mass, beams and tuned absorbers are discussed.

  9. a Dynamically Interactive Column Physics Model Suitable for Diagnosing Regional Climate Variability and GCM Errors

    Science.gov (United States)

    Sardeshmukh, P.

    2002-12-01

    Climate diagnosis increasingly involves understanding the evolution of relatively weak signals in a high-order chaotic system of strongly interacting components. Assessing the sensitivity of the full system to perturbations (or errors) is problematic: forward "brute force" calculations are too expensive, and backward "adjoint" calculations assume linearity. Fortunately, for many specific problems it is not necessary to consider the full complexity of climate interactions; diagnosis with simpler models can be very useful. The simplification usually involves restricting the focus to component subsystems (atmosphere, ocean, land, cryosphere etc) or using models that emphasize "dynamics" over "physics" or vice versa. In growing recognition of climate sensitivity to the details of atmospheric physics, attention is increasingly turning toward diagnostic models with complex physics and simplified dynamics. Single column models (SCMs) that consider complex diabatic interactions within a single atmospheric column are the best and most extreme examples of these, and the focus of this study. For regional studies, the use of SCMs is problematic because, in part, advection by the large-scale circulation is typically decoupled from diabatic interactions. This decoupling, while apparently necessary to make an SCM workable at all, can nevertheless lead to rapid spurious error growth in SCM experiments, especially in the tropics. An SCM framework that couples the vertical advective tendencies to the column physics is developed here. Conceptually, the column is viewed as being embedded in a region of uniform background winds, temperature and humidity, which allows all fluctuating advection terms to be specified in terms of vertical velocity, temperature, and humidity. The vertical velocity at any instant is given by a formula that links the vertical temperature advection to the history of the SCM-generated diabatic heating rates up to that instant. The parameters in this coupling

  10. Research on Time-series Modeling and Filtering Methods for MEMS Gyroscope Random Drift Error

    Science.gov (United States)

    Wang, Xiao Yi; Meng, Xiu Yun

    2017-03-01

    The precision of MEMS gyroscopes is reduced by random drift error. This paper applied time series analysis to model the random drift error of a MEMS gyroscope. Based on the established model, a Kalman filter was employed to compensate for the error. To overcome the disadvantages of the conventional Kalman filter, the Sage-Husa adaptive filtering algorithm was utilized to improve the accuracy of the filtering results, and the orthogonality property of the innovation sequence was exploited to deal with outliers. The results showed that, compared with the conventional Kalman filter, the modified filter not only enhances filtering accuracy but also resists outliers, which assures the stability of the filtering and thus improves the performance of the gyroscope.
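
    A minimal version of this pipeline, namely fitting a low-order time-series model to a static gyro record and using it as the process model of a Kalman filter, might look as follows. The AR(1) structure, the noise levels and the plain (non-adaptive) filter are simplifying assumptions; the Sage-Husa adaptation and the outlier handling described in the paper are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate a static gyro record: AR(1) drift plus white measurement noise
n, phi_true = 50_000, 0.995
drift = np.zeros(n)
for k in range(1, n):
    drift[k] = phi_true * drift[k - 1] + rng.normal(0.0, 0.01)   # deg/s
z = drift + rng.normal(0.0, 0.02, n)                             # gyro output at rest

# Time-series modelling: AR(1) coefficient from the lag-2 / lag-1 autocovariance
# ratio (the white measurement noise cancels out of both lags)
zc = z - z.mean()
phi_hat = np.dot(zc[2:], zc[:-2]) / np.dot(zc[1:], zc[:-1])

# Scalar Kalman filter: state = drift, measurement = gyro output
q, r = 0.01**2, 0.02**2          # process / measurement noise variances (assumed known)
x, P = 0.0, 1.0
x_hat = np.empty(n)
for k in range(n):
    x, P = phi_hat * x, phi_hat**2 * P + q         # predict
    K = P / (P + r)                                # Kalman gain
    x, P = x + K * (z[k] - x), (1.0 - K) * P       # update
    x_hat[k] = x

print(f"AR(1) coefficient: true {phi_true}, estimated {phi_hat:.4f}")
print(f"drift RMSE  raw: {np.sqrt(np.mean((z - drift)**2)):.4f}"
      f"   filtered: {np.sqrt(np.mean((x_hat - drift)**2)):.4f}")
```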

  11. Phase errors elimination in compact digital holoscope (CDH) based on a reasonable mathematical model

    Science.gov (United States)

    Wen, Yongfu; Qu, Weijuan; Cheng, Cheeyuen; Wang, Zhaomin; Asundi, Anand

    2015-03-01

    In the compact digital holoscope (CDH) measurement process, we theoretically need to ensure that the distances from the reference wave and the object wave to the hologram plane match exactly. However, this is not easy to realize in practice due to human factors, which can lead to a phase error in the reconstruction result. In this paper, a strict theoretical analysis of the wavefront interference is performed to derive the mathematical model of the phase error, and a phase error elimination method is then proposed based on this advanced mathematical model, which has a more explicit physical meaning. Experiments are carried out to verify the performance of the presented method, and the results indicate that it is effective and allows the operator to work more flexibly.

  12. Bayesian networks modeling for thermal error of numerical control machine tools

    Institute of Scientific and Technical Information of China (English)

    Xin-hua YAO; Jian-zhong FU; Zi-chen CHEN

    2008-01-01

    The interaction between the heat source location, its intensity, thermal expansion coefficient, the machine system configuration and the running environment creates complex thermal behavior of a machine tool, and also makes thermal error prediction difficult. To address this issue, a novel prediction method for machine tool thermal error based on Bayesian networks (BNs) was presented. The method described causal relationships of factors inducing thermal deformation by graph theory and estimated the thermal error by Bayesian statistical techniques. Due to the effective combination of domain knowledge and sampled data, the BN method could adapt to the change of running state of machine, and obtain satisfactory prediction accuracy. Experiments on spindle thermal deformation were conducted to evaluate the modeling performance. Experimental results indicate that the BN method performs far better than the least squares (LS) analysis in terms of modeling estimation accuracy.
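
    To make the idea concrete, here is a toy discrete Bayesian network relating a (hypothetical) heat-source intensity to spindle temperature rise and thermal error, with inference by direct enumeration. The structure, the states and the conditional probability tables are invented for illustration and are not taken from the paper.

```python
import numpy as np

# States: heat source H in {low, high}, temperature rise T in {small, large},
# thermal error E in {in_tol, out_of_tol}.  All probabilities are illustrative guesses.
P_H = np.array([0.6, 0.4])               # P(H)
P_T_given_H = np.array([[0.8, 0.2],      # P(T | H = low)
                        [0.3, 0.7]])     # P(T | H = high)
P_E_given_T = np.array([[0.9, 0.1],      # P(E | T = small)
                        [0.4, 0.6]])     # P(E | T = large)

def p_out_of_tol_given_heat(h):
    """P(E = out_of_tol | H = h), summing over the hidden temperature state."""
    return sum(P_T_given_H[h, t] * P_E_given_T[t, 1] for t in range(2))

for h, name in enumerate(("low", "high")):
    print(f"P(thermal error out of tolerance | heat source {name}) = "
          f"{p_out_of_tol_given_heat(h):.3f}")

p_marginal = sum(P_H[h] * p_out_of_tol_given_heat(h) for h in range(2))
print(f"marginal P(thermal error out of tolerance) = {p_marginal:.3f}")
```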

  13. The study about forming high-precision optical lens minimalized sinuous error structures for designed surface

    Science.gov (United States)

    Katahira, Yu; Fukuta, Masahiko; Katsuki, Masahide; Momochi, Takeshi; Yamamoto, Yoshihiro

    2016-09-01

    Recently, the required quality of aspherical lenses mounted on camera units has been increasing. Optical lenses in high-volume production are generally made by a molding process using cemented carbide or Ni-P coated steel molds, chosen according to the lens material, such as glass or plastic. Additionally, high-quality cut or ground mold surfaces can be obtained thanks to developments in mold production technologies; as a result, form errors below 100 nm PV and surface roughness below 1 nm Ra can be achieved on molds. Furthermore, still higher quality is now required, not only in form error (PV) and surface roughness (Ra) but also in other surface characteristics. For instance, middle spatial frequency undulations on the lens surface can cause distorted shapes at imaging. In this study, we focused on several types of sinuous structures, which can be classified as form errors with respect to the designed surface and which deteriorate optical system performance, and we developed mold production processes that minimize undulations on the surface. The report describes the analysis process, which uses the power spectral density (PSD) to evaluate micro undulations on the machined surface quantitatively. In addition, it is shown that a grinding process with circumferential velocity control is effective for fabricating large-aperture lenses and can minimize undulations appearing on the outer area of the machined surface; the optical glass lens molding process using a high-precision press machine is also described.
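
    A quantitative PSD check of mid-spatial-frequency content on a measured (here simulated) surface profile can be as simple as a windowed periodogram; the profile, sampling step and band limits below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated 1-D surface profile (micrometres): nanometre-scale roughness plus a
# mid-spatial-frequency ripple with a 0.2 mm period (i.e. 5 cycles/mm)
dx = 1e-3                                    # sampling step, mm
x = np.arange(4096) * dx                     # ~4 mm trace
profile = 1e-3 * rng.normal(size=x.size) + 5e-3 * np.sin(2 * np.pi * x / 0.2)

# One-sided PSD estimate via a Hann-windowed periodogram
window = np.hanning(x.size)
spec = np.fft.rfft((profile - profile.mean()) * window)
freq = np.fft.rfftfreq(x.size, dx)           # cycles per mm
psd = 2.0 * np.abs(spec)**2 * dx / np.sum(window**2)   # um^2 per (cycles/mm)

# RMS amplitude inside an assumed mid-spatial-frequency band of 1-10 cycles/mm
band = (freq >= 1.0) & (freq <= 10.0)
msf_rms = np.sqrt(np.sum(psd[band]) * (freq[1] - freq[0]))
print(f"RMS undulation in the 1-10 cycles/mm band: {msf_rms * 1e3:.2f} nm")
```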

  14. Error modeling, sensitivity analysis and assembly process of a class of 3-DOF parallel kinematic machines with parallelogram struts

    Institute of Scientific and Technical Information of China (English)

    2002-01-01

    This paper presents an error modeling methodology that enables the tolerance design, assembly and kinematic calibration of a class of 3-DOF parallel kinematic machines with parallelogram struts to be integrated into a unified framework. The error mapping function is formulated to identify the source errors affecting the uncompensable pose error. The sensitivity analysis in the sense of statistics is also carried out to investigate the influences of source errors on the pose accuracy. An assembly process that can effectively minimize the uncompensable pose error is proposed as one of the results of this investigation.

  15. Statistical model and error analysis of a proposed audio fingerprinting algorithm

    Science.gov (United States)

    McCarthy, E. P.; Balado, F.; Silvestre, G. C. M.; Hurley, N. J.

    2006-01-01

    In this paper we present a statistical analysis of a particular audio fingerprinting method proposed by Haitsma et al. [1]. Due to the excellent robustness and synchronisation properties of this particular fingerprinting method, we would like to examine its performance for varying values of the parameters involved in the computation and ascertain its capabilities. For this reason, we pursue a statistical model of the fingerprint (also known as a hash, message digest or label). Initially we follow the work of a previous attempt made by Doets and Lagendijk [2-4] to obtain such a statistical model. By reformulating the representation of the fingerprint as a quadratic form, we present a model in which the parameters derived by Doets and Lagendijk may be obtained more easily. Furthermore, our model allows further insight into certain aspects of the behaviour of the fingerprinting algorithm not previously examined. Using our model, we then analyse the probability of error (Pe) of the hash. We identify two particular error scenarios and obtain an expression for the probability of error in each case. We present three methods of varying accuracy to approximate Pe following Gaussian noise addition to the signal of interest. We then analyse the probability of error following desynchronisation of the signal at the input of the hashing system and provide an approximation to Pe for different parameters of the algorithm under varying degrees of desynchronisation.
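
    A simplified sketch of the kind of fingerprint analysed here, per-frame 32-bit hashes built from signs of band-energy differences and compared via the bit error rate, is given below. The frame length, the linear band layout and the synthetic test signal are assumptions; the actual scheme uses overlapping frames and logarithmically spaced bands.

```python
import numpy as np

def subfingerprints(signal, fs, frame_len=2048, hop=64, n_bands=33):
    """Per-frame 32-bit hashes from signs of time- and band-differences of band energies."""
    edges = np.linspace(300.0, 2000.0, n_bands + 1)      # band edges in Hz (simplified)
    window = np.hanning(frame_len)
    freqs = np.fft.rfftfreq(frame_len, 1.0 / fs)
    n_frames = 1 + (len(signal) - frame_len) // hop
    energies = np.empty((n_frames, n_bands))
    for i in range(n_frames):
        frame = signal[i * hop: i * hop + frame_len] * window
        power = np.abs(np.fft.rfft(frame)) ** 2
        for b in range(n_bands):
            sel = (freqs >= edges[b]) & (freqs < edges[b + 1])
            energies[i, b] = power[sel].sum()
    band_diff = np.diff(energies, axis=1)          # differences between adjacent bands
    bits = (band_diff[1:] - band_diff[:-1]) > 0    # sign of the time difference: 32 bits/frame
    return bits

rng = np.random.default_rng(0)
fs = 8000
clean = rng.normal(size=4 * fs)                    # stand-in for a 4 s audio excerpt
noisy = clean + 0.3 * rng.normal(size=clean.size)

ber = np.mean(subfingerprints(clean, fs) != subfingerprints(noisy, fs))
print(f"bit error rate between clean and noisy fingerprints: {ber:.3f}")
```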

  16. Role of Forcing Uncertainty and Background Model Error Characterization in Snow Data Assimilation

    Science.gov (United States)

    Kumar, Sujay V.; Dong, Jiarul; Peters-Lidard, Christa D.; Mocko, David; Gomez, Breogan

    2017-01-01

    Accurate specification of the model error covariances in data assimilation systems is a challenging issue. Ensemble land data assimilation methods rely on stochastic perturbations of input forcing and model prognostic fields for developing representations of input model error covariances. This article examines the limitations of using a single forcing dataset for specifying forcing uncertainty inputs for assimilating snow depth retrievals. Using an idealized data assimilation experiment, the article demonstrates that the use of hybrid forcing input strategies (either through the use of an ensemble of forcing products or through the added use of the forcing climatology) provides a better characterization of the background model error, which leads to improved data assimilation results, especially during the snow accumulation and melt-time periods. The use of hybrid forcing ensembles is then employed for assimilating snow depth retrievals from the AMSR2 (Advanced Microwave Scanning Radiometer 2) instrument over two domains in the continental USA with different snow evolution characteristics. Over a region near the Great Lakes, where the snow evolution tends to be ephemeral, the use of hybrid forcing ensembles provides significant improvements relative to the use of a single forcing dataset. Over the Colorado headwaters characterized by large snow accumulation, the impact of using the forcing ensemble is less prominent and is largely limited to the snow transition time periods. The results of the article demonstrate that improving the background model error through the use of a forcing ensemble enables the assimilation system to better incorporate the observational information.

  17. Role of forcing uncertainty and background model error characterization in snow data assimilation

    Directory of Open Access Journals (Sweden)

    S. V. Kumar

    2017-06-01

    Full Text Available Accurate specification of the model error covariances in data assimilation systems is a challenging issue. Ensemble land data assimilation methods rely on stochastic perturbations of input forcing and model prognostic fields for developing representations of input model error covariances. This article examines the limitations of using a single forcing dataset for specifying forcing uncertainty inputs for assimilating snow depth retrievals. Using an idealized data assimilation experiment, the article demonstrates that the use of hybrid forcing input strategies (either through the use of an ensemble of forcing products or through the added use of the forcing climatology) provides a better characterization of the background model error, which leads to improved data assimilation results, especially during the snow accumulation and melt-time periods. The use of hybrid forcing ensembles is then employed for assimilating snow depth retrievals from the AMSR2 instrument over two domains in the continental USA with different snow evolution characteristics. Over a region near the Great Lakes, where the snow evolution tends to be ephemeral, the use of hybrid forcing ensembles provides significant improvements relative to the use of a single forcing dataset. Over the Colorado headwaters characterized by large snow accumulation, the impact of using the forcing ensemble is less prominent and is largely limited to the snow transition time periods. The results of the article demonstrate that improving the background model error through the use of a forcing ensemble enables the assimilation system to better incorporate the observational information.

  18. On Inertial Body Tracking in the Presence of Model Calibration Errors.

    Science.gov (United States)

    Miezal, Markus; Taetz, Bertram; Bleser, Gabriele

    2016-07-22

    In inertial body tracking, the human body is commonly represented as a biomechanical model consisting of rigid segments with known lengths and connecting joints. The model state is then estimated via sensor fusion methods based on data from attached inertial measurement units (IMUs). This requires the relative poses of the IMUs w.r.t. the segments (the IMU-to-segment calibrations, subsequently called I2S calibrations) to be known. Since calibration methods based on static poses, movements and manual measurements are still the most widely used, potentially large human-induced calibration errors have to be expected. This work compares three newly developed/adapted extended Kalman filter (EKF) and optimization-based sensor fusion methods with an existing EKF-based method w.r.t. their segment orientation estimation accuracy in the presence of model calibration errors with and without using magnetometer information. While the existing EKF-based method uses a segment-centered kinematic chain biomechanical model and a constant angular acceleration motion model, the newly developed/adapted methods are all based on a free segments model, where each segment is represented with six degrees of freedom in the global frame. Moreover, these methods differ in the assumed motion model (constant angular acceleration, constant angular velocity, inertial data as control input), the state representation (segment-centered, IMU-centered) and the estimation method (EKF, sliding window optimization). In addition to the free segments representation, the optimization-based method also represents each IMU with six degrees of freedom in the global frame. In the evaluation on simulated and real data from a three segment model (an arm), the optimization-based method showed the smallest mean errors, standard deviations and maximum errors throughout all tests. It also showed the lowest dependency on magnetometer information and motion agility. Moreover, it was insensitive w.r.t. I2S position and

  19. Scale interactions on diurnal to seasonal timescales and their relevance to model systematic errors

    Directory of Open Access Journals (Sweden)

    G. Yang

    2003-06-01

    Full Text Available Examples of current research into systematic errors in climate models are used to demonstrate the importance of scale interactions on diurnal, intraseasonal and seasonal timescales for the mean and variability of the tropical climate system. It has enabled some conclusions to be drawn about possible processes that may need to be represented, and some recommendations to be made regarding model improvements. It has been shown that the Maritime Continent heat source is a major driver of the global circulation yet is poorly represented in GCMs. A new climatology of the diurnal cycle has been used to provide compelling evidence of important land-sea breeze and gravity wave effects, which may play a crucial role in the heat and moisture budget of this key region for the tropical and global circulation. The role of the diurnal cycle has also been emphasized for intraseasonal variability associated with the Madden Julian Oscillation (MJO). It is suggested that the diurnal cycle in Sea Surface Temperature (SST) during the suppressed phase of the MJO leads to a triggering of cumulus congestus clouds, which serve to moisten the free troposphere and hence precondition the atmosphere for the next active phase. It has been further shown that coupling between the ocean and atmosphere on intraseasonal timescales leads to a more realistic simulation of the MJO. These results stress the need for models to be able to simulate firstly, the observed tri-modal distribution of convection, and secondly, the coupling between the ocean and atmosphere on diurnal to intraseasonal timescales. It is argued, however, that the current representation of the ocean mixed layer in coupled models is not adequate to represent the complex structure of the observed mixed layer, in particular the formation of salinity barrier layers which can potentially provide much stronger local coupling between the atmosphere and ocean on diurnal to intraseasonal timescales.

  20. A Comparison between Different Error Modeling of MEMS Applied to GPS/INS Integrated Systems

    Directory of Open Access Journals (Sweden)

    Fabio Dovis

    2013-07-01

    Full Text Available Advances in the development of micro-electromechanical systems (MEMS) have made possible the fabrication of cheap and small dimension accelerometers and gyroscopes, which are being used in many applications where the global positioning system (GPS) and the inertial navigation system (INS) integration is carried out, i.e., identifying track defects, terrestrial and pedestrian navigation, unmanned aerial vehicles (UAVs), stabilization of many platforms, etc. Although these MEMS sensors are low-cost, they present different errors, which degrade the accuracy of the navigation systems in a short period of time. Therefore, a suitable modeling of these errors is necessary in order to minimize them and, consequently, improve the system performance. In this work, the techniques currently most used to analyze the stochastic errors that affect these sensors are shown and compared: we examine in detail the autocorrelation, the Allan variance (AV) and the power spectral density (PSD) techniques. Subsequently, an analysis and modeling of the inertial sensors, which combines autoregressive (AR) filters and wavelet de-noising, is also achieved. Since a low-cost INS (MEMS grade) presents error sources with short-term (high-frequency) and long-term (low-frequency) components, we introduce a method that compensates for these error terms by doing a complete analysis of Allan variance, wavelet de-noising and the selection of the level of decomposition for a suitable combination between these techniques. Eventually, in order to assess the stochastic models obtained with these techniques, the Extended Kalman Filter (EKF) of a loosely-coupled GPS/INS integration strategy is augmented with different states. Results show a comparison between the proposed method and the traditional sensor error models under GPS signal blockages using real data collected in urban roadways.
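
    Of the analysis techniques listed, the Allan variance is the easiest to reproduce. A compact non-overlapping implementation for a static gyro record is sketched below, with a simulated white-noise-plus-wandering-bias signal standing in for real sensor data; all numbers are illustrative.

```python
import numpy as np

def allan_variance(y, fs, m_list):
    """Non-overlapping Allan variance of rate samples y for cluster sizes in m_list."""
    taus, avars = [], []
    for m in m_list:
        n_clusters = len(y) // m
        if n_clusters < 2:
            break
        means = y[: n_clusters * m].reshape(n_clusters, m).mean(axis=1)
        taus.append(m / fs)
        avars.append(0.5 * np.mean(np.diff(means) ** 2))
    return np.array(taus), np.array(avars)

rng = np.random.default_rng(0)
fs, n = 100.0, 200_000                           # sample rate (Hz) and record length
white = 0.05 * rng.normal(size=n)                # angle-random-walk-like component (deg/s)
bias = np.cumsum(0.0005 * rng.normal(size=n))    # slowly wandering bias (rate random walk)
gyro = white + bias

m_list = np.unique(np.logspace(0, 4, 40).astype(int))
tau, avar = allan_variance(gyro, fs, m_list)
for t, a in list(zip(tau, np.sqrt(avar)))[::8]:
    print(f"tau = {t:9.2f} s   Allan deviation = {a:.4e} deg/s")
```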

  1. Dynamically constrained uncertainty for the Kalman filter covariance in the presence of model error

    Science.gov (United States)

    Grudzien, Colin; Carrassi, Alberto; Bocquet, Marc

    2017-04-01

    The forecasting community has long understood the impact of dynamic instability on the uncertainty of predictions in physical systems, and this has led to innovative filtering design to take advantage of the knowledge of process models. The advantages of this combined approach to filtering, including both a dynamic and statistical understanding, have included dimensional reductions and robust feature selection in the observational design of filters. In the context of perfect models, we have shown that the uncertainty in prediction is damped along the directions of stability and the support of the uncertainty conforms to the dominant system instabilities. Our current work likewise demonstrates this constraint on the uncertainty for systems with model error; specifically: we produce analytical upper bounds on the uncertainty in the stable, backwards orthogonal Lyapunov vectors in terms of the local Lyapunov exponents and the scale of the additive noise; we demonstrate that for systems with model noise, the least upper bound on the uncertainty depends on the inverse relationship of the leading Lyapunov exponent and the observational certainty; and we numerically compute the invariant scaling factor of the model error which determines the asymptotic uncertainty. This dynamic scaling of model error is identifiable independently of the noise and is computable directly in terms of the system's dynamic invariants -- in this way the physical process itself may mollify the growth of modelling errors. For systems with strongly dissipative behaviour, we demonstrate that the growth of the uncertainty can be confined to the unstable-neutral modes independently of the filtering process, and we connect the observational design to take advantage of a dynamic characteristic of the filtering error.

  2. A method for the quantification of model form error associated with physical systems.

    Energy Technology Data Exchange (ETDEWEB)

    Wallen, Samuel P.; Brake, Matthew Robert

    2014-03-01

    In the process of model validation, models are often declared valid when the differences between model predictions and experimental data sets are satisfactorily small. However, little consideration is given to the effectiveness of a model using parameters that deviate slightly from those that were fitted to data, such as a higher load level. Furthermore, few means exist to compare and choose between two or more models that reproduce data equally well. These issues can be addressed by analyzing model form error, which is the error associated with the differences between the physical phenomena captured by models and that of the real system. This report presents a new quantitative method for model form error analysis and applies it to data taken from experiments on tape joint bending vibrations. Two models for the tape joint system are compared, and suggestions for future improvements to the method are given. As the available data set is too small to draw any statistical conclusions, the focus of this paper is the development of a methodology that can be applied to general problems.

  3. A measurement error model for physical activity level as measured by a questionnaire with application to the 1999-2006 NHANES questionnaire.

    Science.gov (United States)

    Tooze, Janet A; Troiano, Richard P; Carroll, Raymond J; Moshfegh, Alanna J; Freedman, Laurence S

    2013-06-01

    Systematic investigations into the structure of measurement error of physical activity questionnaires are lacking. We propose a measurement error model for a physical activity questionnaire that uses physical activity level (the ratio of total energy expenditure to basal energy expenditure) to relate questionnaire-based reports of physical activity level to true physical activity levels. The 1999-2006 National Health and Nutrition Examination Survey physical activity questionnaire was administered to 433 participants aged 40-69 years in the Observing Protein and Energy Nutrition (OPEN) Study (Maryland, 1999-2000). Valid estimates of participants' total energy expenditure were also available from doubly labeled water, and basal energy expenditure was estimated from an equation; the ratio of those measures estimated true physical activity level ("truth"). We present a measurement error model that accommodates the mixture of errors that arise from assuming a classical measurement error model for doubly labeled water and a Berkson error model for the equation used to estimate basal energy expenditure. The method was then applied to the OPEN Study. Correlations between the questionnaire-based physical activity level and truth were modest (r = 0.32-0.41); attenuation factors (0.43-0.73) indicate that the use of questionnaire-based physical activity level would lead to attenuated estimates of effect size. Results suggest that sample sizes for estimating relationships between physical activity level and disease should be inflated, and that regression calibration can be used to provide measurement error-adjusted estimates of relationships between physical activity and disease.
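
    The practical consequence described here, attenuation of effect estimates and its correction, is easy to demonstrate on simulated data with a classical measurement error structure. The numbers below are illustrative and unrelated to the OPEN Study; in practice the attenuation factor would be estimated from a validation substudy rather than from the unobservable truth.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000

# True physical activity level and an outcome that depends linearly on it
pal_true = rng.normal(1.7, 0.2, n)
outcome = 2.0 + 1.5 * pal_true + rng.normal(0.0, 0.5, n)

# Questionnaire-based PAL: classical measurement error around the truth
pal_quest = pal_true + rng.normal(0.0, 0.25, n)

# Naive regression on the error-prone exposure yields an attenuated slope
slope_naive = np.polyfit(pal_quest, outcome, 1)[0]

# Attenuation factor lambda = var(truth) / var(observed); correction divides by it
lam = pal_true.var() / pal_quest.var()     # in practice estimated from a validation study
slope_corrected = slope_naive / lam

print(f"true slope 1.50 | naive {slope_naive:.2f} | "
      f"attenuation factor {lam:.2f} | corrected {slope_corrected:.2f}")
```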

  4. The Dopamine Prediction Error: Contributions to Associative Models of Reward Learning

    Science.gov (United States)

    Nasser, Helen M.; Calu, Donna J.; Schoenbaum, Geoffrey; Sharpe, Melissa J.

    2017-01-01

    Phasic activity of midbrain dopamine neurons is currently thought to encapsulate the prediction-error signal described in Sutton and Barto’s (1981) model-free reinforcement learning algorithm. This phasic signal is thought to contain information about the quantitative value of reward, which transfers to the reward-predictive cue after learning. This is argued to endow the reward-predictive cue with the value inherent in the reward, motivating behavior toward cues signaling the presence of reward. Yet theoretical and empirical research has implicated prediction-error signaling in learning that extends far beyond a transfer of quantitative value to a reward-predictive cue. Here, we review the research which demonstrates the complexity of how dopaminergic prediction errors facilitate learning. After briefly discussing the literature demonstrating that phasic dopaminergic signals can act in the manner described by Sutton and Barto (1981), we consider how these signals may also influence attentional processing across multiple attentional systems in distinct brain circuits. Then, we discuss how prediction errors encode and promote the development of context-specific associations between cues and rewards. Finally, we consider recent evidence that shows dopaminergic activity contains information about causal relationships between cues and rewards that reflect information garnered from rich associative models of the world that can be adapted in the absence of direct experience. In discussing this research we hope to support the expansion of how dopaminergic prediction errors are thought to contribute to the learning process beyond the traditional concept of transferring quantitative value. PMID:28275359

  5. A three-component model of the control error in manual tracking of continuous random signals.

    Science.gov (United States)

    Gerisch, Hans; Staude, Gerhard; Wolf, Werner; Bauch, Gerhard

    2013-10-01

    The performance of human operators acting within closed-loop control systems is investigated in a classic tracking task. The dependence of the control error (tracking error) on the parameters display gain, k(display), and input signal frequency bandwidth, f(g), which alter task difficulty and presumably the control delay, is studied with the aim of functionally specifying it via a model. The human operator as an element of a cascaded human-machine control system (e.g., car driving or piloting an airplane) codetermines the overall system performance. Control performance of humans in continuous tracking has been described in earlier studies. Using a handheld joystick, 10 participants tracked continuous random input signals. The parameters f(g) and k(display) were altered between experiments. Increased task difficulty promoted lengthened control delay and, consequently, increased control error. Tracking performance degraded profoundly with target deflection components above 1 Hz, confirming earlier reports. The control error is composed of a delay-induced component, a demand-based component, and a novel component: a human tracking limit. Accordingly, a new model that allows concepts of the observed control error to be split into these three components is suggested. To achieve optimal performance in control systems that include a human operator (e.g., vehicles, remote controlled rovers, crane control), (a) tasks should be kept as simple as possible to achieve shortest control delays, and (b) task components requiring higher-frequency (> 1 Hz) tracking actions should be avoided or automated by technical systems.

  6. Notes on power of normality tests of error terms in regression models

    Energy Technology Data Exchange (ETDEWEB)

    Střelec, Luboš [Department of Statistics and Operation Analysis, Faculty of Business and Economics, Mendel University in Brno, Zemědělská 1, Brno, 61300 (Czech Republic)

    2015-03-10

    Normality is one of the basic assumptions in applying statistical procedures. For example, in linear regression most of the inferential procedures are based on the assumption of normality, i.e. the disturbance vector is assumed to be normally distributed. Failure to assess non-normality of the error terms may lead to incorrect results of the usual statistical inference techniques such as the t-test or F-test. Thus, error terms should be normally distributed in order to allow us to make exact inferences. As a consequence, normally distributed stochastic errors are necessary in order to make inferences that are not misleading, which explains the necessity and importance of robust tests of normality. Therefore, the aim of this contribution is to discuss normality testing of error terms in regression models. In this contribution, we introduce the general RT class of robust tests for normality, and present and discuss the trade-off between power and robustness of selected classical and robust normality tests of error terms in regression models.
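
    As a practical illustration of the workflow being evaluated (fit a regression, then test the residuals for normality), a minimal sketch using two classical tests from SciPy follows. The simulated heavy-tailed errors and the chosen tests are illustrative; they are not the robust RT-class tests proposed in the contribution.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 200
x = rng.uniform(0.0, 10.0, n)

# Linear model with deliberately heavy-tailed (non-normal) error terms
errors = stats.t.rvs(df=3, size=n, random_state=rng)
y = 1.0 + 0.5 * x + errors

# Ordinary least squares fit and residuals
slope, intercept = np.polyfit(x, y, 1)
residuals = y - (intercept + slope * x)

# Classical normality tests applied to the residuals
sw_stat, sw_p = stats.shapiro(residuals)
k2_stat, k2_p = stats.normaltest(residuals)      # D'Agostino-Pearson K^2
print(f"Shapiro-Wilk:       W  = {sw_stat:.3f}, p = {sw_p:.4f}")
print(f"D'Agostino-Pearson: K2 = {k2_stat:.3f}, p = {k2_p:.4f}")
```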

  7. The localization and correction of errors in models: a constraint-based approach

    OpenAIRE

    Piechowiak, S.; Rodriguez, J

    2005-01-01

    Model-based diagnosis and constraint-based reasoning are well-known generic paradigms for which the most difficult task lies in the construction of the models used. We consider the problem of localizing and correcting the errors in a model. We present a method to debug a model. To help the debugging task, we propose to use the model-based diagnosis solver. This method has been used in a real application: the development of a model of a railway signalling system.

  8. Relative Efficiency of Maximum Likelihood and Other Estimators in a Nonlinear Regression Model with Small Measurement Errors

    OpenAIRE

    Kukush, Alexander; Schneeweiss, Hans

    2004-01-01

    We compare the asymptotic covariance matrix of the ML estimator in a nonlinear measurement error model to the asymptotic covariance matrices of the CS and SQS estimators studied in Kukush et al (2002). For small measurement error variances they are equal up to the order of the measurement error variance and thus nearly equally efficient.

  9. What Kind of Initial Errors Cause the Severest Prediction Uncertainty of El Niño in Zebiak-Cane Model

    Institute of Scientific and Technical Information of China (English)

    XU Hui; DUAN Wansuo

    2008-01-01

    With the Zebiak-Cane (ZC) model, the initial error that has the largest effect on ENSO prediction is explored by conditional nonlinear optimal perturbation (CNOP). The results demonstrate that CNOP-type errors cause the largest prediction error of ENSO in the ZC model. By analyzing the behavior of CNOP-type errors, we find that for the normal states and the relatively weak El Niño events in the ZC model, the predictions tend to yield false alarms due to the uncertainties caused by CNOP. For the relatively strong El Niño events, the ZC model largely underestimates their intensities. Also, our results suggest that the error growth of El Niño in the ZC model depends on the phases of both the annual cycle and ENSO. The condition during northern spring and summer is most favorable for the error growth. The ENSO prediction bestriding these two seasons may be the most difficult. A linear singular vector (LSV) approach is also used to estimate the error growth of ENSO, but it underestimates the prediction uncertainties of ENSO in the ZC model. This result indicates that different initial errors cause different amplitudes of prediction errors though they have the same magnitudes. CNOP yields the severest prediction uncertainty. That is to say, the prediction skill of ENSO is closely related to the types of initial error. This finding illustrates a theoretical basis of data assimilation. It is expected that a data assimilation method can filter the initial errors related to CNOP and improve the ENSO forecast skill.

  10. Using Computational Cognitive Modeling to Diagnose Possible Sources of Aviation Error

    Science.gov (United States)

    Byrne, M. D.; Kirlik, Alex

    2003-01-01

    We present a computational model of a closed-loop, pilot-aircraft-visual scene-taxiway system created to shed light on possible sources of taxi error. Creating the cognitive aspects of the model using ACT-R required us to conduct studies with subject matter experts to identify experiential adaptations pilots bring to taxiing. Five decision strategies were found, ranging from cognitively-intensive but precise, to fast, frugal but robust. We provide evidence for the model by comparing its behavior to a NASA Ames Research Center simulation of Chicago O'Hare surface operations. Decision horizons were highly variable; the model selected the most accurate strategy given time available. We found a signature in the simulation data of the use of globally robust heuristics to cope with short decision horizons as revealed by errors occurring most frequently at atypical taxiway geometries or clearance routes. These data provided empirical support for the model.

  11. Integrated materials–structural models

    DEFF Research Database (Denmark)

    Stang, Henrik; Geiker, Mette Rica

    2008-01-01

    Reliable service life models for load carrying structures are significant elements in the evaluation of the performance and sustainability of existing and new structures. Furthermore, reliable service life models are prerequisites for the evaluation of the sustainability of maintenance strategies, repair works and strengthening methods for structures. A very significant part of the infrastructure consists of reinforced concrete structures. Even though reinforced concrete structures typically are very competitive, certain concrete structures suffer from various types of degradation. A framework

  12. Filter design for failure detection and isolation in the presence of modeling errors and disturbances

    DEFF Research Database (Denmark)

    Niemann, Hans Henrik; Stoustrup, Jakob

    1996-01-01

    The design problem of filters for robust failure detection and isolation, (FDI) is addressed in this paper. The failure detection problem will be considered with respect to both modeling errors and disturbances. Both an approach based on failure detection observers as well as an approach based...

  13. Moderate Deviations for M-estimators in Linear Models with φ-mixing Errors

    Institute of Scientific and Technical Information of China (English)

    Jun FAN

    2012-01-01

    In this paper, the moderate deviations for the M-estimators of regression parameter in a linear model are obtained when the errors form a strictly stationary φ-mixing sequence. The results are applied to study many different types of M-estimators such as Huber's estimator, Lp-regression estimator, least squares estimator and least absolute deviation estimator.

  14. Rank-based Tests of the Cointegrating Rank in Semiparametric Error Correction Models

    NARCIS (Netherlands)

    Hallin, M.; van den Akker, R.; Werker, B.J.M.

    2012-01-01

    Abstract: This paper introduces rank-based tests for the cointegrating rank in an Error Correction Model with i.i.d. elliptical innovations. The tests are asymptotically distribution-free, and their validity does not depend on the actual distribution of the innovations. This result holds despite the

  15. Measurement error in earnings data : Using a mixture model approach to combine survey and register data

    NARCIS (Netherlands)

    Meijer, E.; Rohwedder, S.; Wansbeek, T.J.

    2012-01-01

    Survey data on earnings tend to contain measurement error. Administrative data are superior in principle, but are worthless in case of a mismatch. We develop methods for prediction in mixture factor analysis models that combine both data sources to arrive at a single earnings figure. We apply the me

  16. Likelihood-Based Cointegration Analysis in Panels of Vector Error Correction Models

    NARCIS (Netherlands)

    J.J.J. Groen (Jan); F.R. Kleibergen (Frank)

    1999-01-01

    textabstractWe propose in this paper a likelihood-based framework for cointegration analysis in panels of a fixed number of vector error correction models. Maximum likelihood estimators of the cointegrating vectors are constructed using iterated Generalized Method of Moments estimators. Using these

  17. TESTING SPHERICITY IN A GMANOVA-MANOVA MODEL WITH NORMAL ERROR

    Institute of Scientific and Technical Information of China (English)

    Bai Peng; Shi Lei

    2008-01-01

    This article presents a statistic for testing the sphericity in a GMANOVA-MANOVA model with normal error. It is shown that the null distribution of this statistic is beta and its nonnull distribution is given in series form of beta distributions.

  18. Sensitivity of subject-specific models to errors in musculo-skeletal geometry

    NARCIS (Netherlands)

    Carbone, Vincenzo; van der Krogt, Marjolein; Koopman, Hubertus F.J.M.; Verdonschot, Nicolaas Jacobus Joseph

    2012-01-01

    Subject-specific musculo-skeletal models of the lower extremity are an important tool for investigating various biomechanical problems, for instance the results of surgery such as joint replacements and tendon transfers. The aim of this study was to assess the potential effects of errors in

  19. A background error covariance model of significant wave height employing Monte Carlo simulation

    Institute of Scientific and Technical Information of China (English)

    GUO Yanyou; HOU Yijun; ZHANG Chunmei; YANG Jie

    2012-01-01

    The quality of background error statistics is one of the key components for successful assimilation of observations in a numerical model. The background error covariance (BEC) of ocean waves is generally estimated under an assumption that it is stationary over a period of time and uniform over a domain. However, error statistics are in fact functions of the physical processes governing the meteorological situation and vary with the wave condition. In this paper, we simulated the BEC of the significant wave height (SWH) employing Monte Carlo methods. An interesting result is that the BEC varies consistently with the mean wave direction (MWD). In the model domain, the BEC of the SWH decreases significantly when the MWD changes abruptly. A new BEC model of the SWH based on the correlation between the BEC and MWD was then developed. A case study of regional data assimilation was performed, where the SWH observations of buoy 22001 were used to assess the SWH hindcast. The results show that the new BEC model benefits wave prediction and allows reasonable approximations of anisotropy and inhomogeneous errors.
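
    A bare-bones version of the Monte Carlo idea, estimating the background error covariance of SWH from an ensemble of perturbed wave fields, is sketched below. The synthetic 1-D wave field, the perturbation recipe and the ensemble size are invented for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)
n_grid, n_members = 80, 500               # grid points along a transect, ensemble size

# Synthetic background SWH field (metres) along the transect
x = np.linspace(0.0, 1.0, n_grid)
swh_background = 1.5 + 0.8 * np.sin(2 * np.pi * x)

# Monte Carlo ensemble: spatially correlated perturbations of the background
length_scale = 0.1
dist = np.abs(x[:, None] - x[None, :])
corr = np.exp(-0.5 * (dist / length_scale) ** 2)        # assumed correlation shape
L = np.linalg.cholesky(corr + 1e-10 * np.eye(n_grid))
ensemble = swh_background[:, None] + 0.2 * (L @ rng.normal(size=(n_grid, n_members)))

# Sample background error covariance (BEC) of SWH from the ensemble
bec = np.cov(ensemble)
print("error variance along the transect: "
      f"{bec.diagonal().min():.3f} to {bec.diagonal().max():.3f} m^2")
```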

  20. Sensitivity of subject-specific models to errors in musculo-skeletal geometry

    NARCIS (Netherlands)

    Carbone, V.; Krogt, van der M.M.; Koopman, H.F.J.M.; Verdonschot, N.J.J.

    2012-01-01

    Subject-specific musculo-skeletal models of the lower extremity are an important tool for investigating various biomechanical problems, for instance the results of surgery such as joint replacements and tendon transfers. The aim of this study was to assess the potential effects of errors in musculo-