WorldWideScience

Sample records for modeling multiplicative error

  1. Discrete choice models with multiplicative error terms

    DEFF Research Database (Denmark)

    Fosgerau, Mogens; Bierlaire, Michel

    2009-01-01

    differences. We develop some properties of this type of model and show that in several cases the change from an additive to a multiplicative formulation, maintaining a specification of V, may lead to a large improvement in fit, sometimes larger than that gained from introducing random coefficients in V....

  2. Generalized multiplicative error models: Asymptotic inference and empirical analysis

    Science.gov (United States)

    Li, Qian

This dissertation consists of two parts. The first part focuses on extended Multiplicative Error Models (MEM) that include two extreme cases for nonnegative series. These extreme cases are common phenomena in high-frequency financial time series. The Location MEM(p,q) model incorporates a location parameter so that the series are required to have positive lower bounds. The estimator for the location parameter turns out to be the minimum of all the observations and is shown to be consistent. The second case captures the feature of a nontrivial fraction of zero outcomes in a series and combines a so-called Zero-Augmented general F distribution with a linear MEM(p,q). Under certain strict stationarity and moment conditions, we establish consistency and asymptotic normality of the semiparametric estimators for these two new models. The second part of this dissertation examines the differences and similarities between trades in the home market and trades in the foreign market of cross-listed stocks. We exploit the multiplicative framework to model trading duration, volume per trade and price volatility for Canadian shares that are cross-listed on the New York Stock Exchange (NYSE) and the Toronto Stock Exchange (TSX). We explore the clustering effect, the interaction between trading variables, and the time needed for price equilibrium after a perturbation in each market. The clustering effect is studied through the use of a univariate MEM(1,1) on each variable, while the interactions among duration, volume and price volatility are captured by a multivariate system of MEM(p,q). After estimating these models by a standard QMLE procedure, we exploit the impulse response function to compute the calendar time for a perturbation in these variables to be absorbed into price variance, and use common statistical tests to identify the differences between the two markets in each aspect. These differences are of considerable interest to traders, stock exchanges and policy makers.
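
    For illustration only (not part of the record above), the following is a minimal Python sketch of the multiplicative error recursion that MEM(1,1) denotes, x_t = mu_t * eps_t with mu_t = omega + alpha*x_{t-1} + beta*mu_{t-1}, simulated and fitted with an exponential quasi-maximum-likelihood (QMLE) criterion; parameter values and function names are illustrative assumptions.

```python
# Hedged sketch: simulate and fit a MEM(1,1), x_t = mu_t * eps_t,
# mu_t = omega + alpha * x_{t-1} + beta * mu_{t-1}, E[eps_t] = 1,
# using an exponential QMLE. All names and values are illustrative.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

def simulate_mem(omega, alpha, beta, n=5000):
    x = np.empty(n)
    mu = omega / (1.0 - alpha - beta)          # start at the unconditional mean
    for t in range(n):
        x[t] = mu * rng.exponential(1.0)        # unit-mean multiplicative error
        mu = omega + alpha * x[t] + beta * mu
    return x

def neg_exp_quasi_loglik(params, x):
    omega, alpha, beta = params
    if omega <= 0 or alpha < 0 or beta < 0 or alpha + beta >= 1:
        return np.inf                           # keep the recursion positive and stationary
    mu = np.empty_like(x)
    mu[0] = x.mean()                            # simple initialization
    for t in range(1, len(x)):
        mu[t] = omega + alpha * x[t - 1] + beta * mu[t - 1]
    return np.sum(np.log(mu) + x / mu)          # exponential quasi-likelihood

x = simulate_mem(omega=0.1, alpha=0.2, beta=0.7)
fit = minimize(neg_exp_quasi_loglik, x0=[0.05, 0.1, 0.8], args=(x,),
               method="Nelder-Mead")
print("QMLE estimates (omega, alpha, beta):", fit.x)
```

    Any unit-mean nonnegative innovation distribution could replace the exponential draw; the QMLE point estimates remain consistent as long as the conditional mean mu_t is correctly specified.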

  3. ON ASYMPTOTIC NORMALITY OF PARAMETERS IN MULTIPLE LINEAR ERRORS-IN-VARIABLES MODEL

    Institute of Scientific and Technical Information of China (English)

    ZHANG Sanguo; CHEN Xiru

    2003-01-01

This paper studies the parameter estimation of multiple dimensional linear errors-in-variables (EV) models in the case where replicated observations are available in some experimental points. Asymptotic normality is established under mild conditions, and the parameters entering the asymptotic variance are consistently estimated to render the result usable in the construction of large-sample confidence regions.

  4. Covariance approximation for large multivariate spatial data sets with an application to multiple climate model errors

    KAUST Repository

    Sang, Huiyan

    2011-12-01

    This paper investigates the cross-correlations across multiple climate model errors. We build a Bayesian hierarchical model that accounts for the spatial dependence of individual models as well as cross-covariances across different climate models. Our method allows for a nonseparable and nonstationary cross-covariance structure. We also present a covariance approximation approach to facilitate the computation in the modeling and analysis of very large multivariate spatial data sets. The covariance approximation consists of two parts: a reduced-rank part to capture the large-scale spatial dependence, and a sparse covariance matrix to correct the small-scale dependence error induced by the reduced rank approximation. We pay special attention to the case that the second part of the approximation has a block-diagonal structure. Simulation results of model fitting and prediction show substantial improvement of the proposed approximation over the predictive process approximation and the independent blocks analysis. We then apply our computational approach to the joint statistical modeling of multiple climate model errors. © 2012 Institute of Mathematical Statistics.
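
    As a loose illustration of the two-part covariance approximation described above (a reduced-rank term plus a block-diagonal correction for the small-scale residual), the following hedged numpy sketch uses an exponential covariance on a 1-D domain; the covariance function, knot layout and blocking are assumptions made for the demo, not the paper's choices.

```python
# Hedged sketch: reduced-rank (predictive-process style) covariance built from
# m knots, plus a block-diagonal correction of the small-scale residual.
import numpy as np

def exp_cov(a, b, sill=1.0, range_len=0.3):
    d = np.abs(a[:, None] - b[None, :])
    return sill * np.exp(-d / range_len)

n, m, n_blocks = 200, 20, 10
s = np.sort(np.random.default_rng(1).uniform(0, 1, n))   # 1-D locations
knots = np.linspace(0, 1, m)

C_nn = exp_cov(s, s)
C_nm = exp_cov(s, knots)
C_mm = exp_cov(knots, knots)

low_rank = C_nm @ np.linalg.solve(C_mm, C_nm.T)           # large-scale part
residual = C_nn - low_rank                                 # small-scale error

# Keep only the block-diagonal of the residual (independent-blocks correction).
approx = low_rank.copy()
for idx in np.array_split(np.arange(n), n_blocks):
    approx[np.ix_(idx, idx)] += residual[np.ix_(idx, idx)]

print("Frobenius error, low-rank only :", np.linalg.norm(C_nn - low_rank))
print("Frobenius error, with blocks   :", np.linalg.norm(C_nn - approx))
```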

  5. Adjustment of measurements with multiplicative errors: error analysis, estimates of the variance of unit weight, and effect on volume estimation from LiDAR-type digital elevation models.

    Science.gov (United States)

    Shi, Yun; Xu, Peiliang; Peng, Junhuan; Shi, Chuang; Liu, Jingnan

    2014-01-10

Modern observation technology has verified that measurement errors can be proportional to the true values of measurements, as for GPS and VLBI baselines and LiDAR. Observational models of this type are called multiplicative error models. This paper extends the work of Xu and Shimada, published in 2000, on multiplicative error models to the analytical error analysis of quantities of practical interest and to estimates of the variance of unit weight. We analytically derive the variance-covariance matrices of the three least squares (LS) adjustments, the adjusted measurements and the corrections of measurements in multiplicative error models. For quality evaluation, we construct five estimators for the variance of unit weight in association with the three LS adjustment methods. Although LiDAR measurements are contaminated with multiplicative random errors, LiDAR-based digital elevation models (DEM) have been constructed as if the errors were additive. We will simulate a model landslide, which is assumed to be surveyed with LiDAR, and investigate the effect of LiDAR-type multiplicative error measurements on DEM construction and on the estimate of landslide mass volume from the constructed DEM.
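
    For context, a generic way to write a multiplicative (proportional) error observation model and a corresponding weighted least-squares criterion is sketched below; the notation is illustrative and not necessarily the formulation used by Xu and Shimada or by this paper.

```latex
% Generic multiplicative error model: the error standard deviation scales
% with the true value, so variances are proportional to the squared signal.
\[
  y_i = \bar{y}_i\,(1 + \varepsilon_i), \qquad
  \operatorname{E}[\varepsilon_i] = 0, \qquad
  \operatorname{Var}(y_i) = \bar{y}_i^{\,2}\,\sigma_{\varepsilon}^{2},
\]
\[
  \hat{\boldsymbol\beta}
  = \arg\min_{\boldsymbol\beta}\;
    \sum_i w_i \bigl(y_i - f_i(\boldsymbol\beta)\bigr)^{2},
  \qquad w_i \propto 1/\bar{y}_i^{\,2}.
\]
```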

  6. The Dynamic Modeling of Multiple Pairs of Spur Gears in Mesh, Including Friction and Geometrical Errors

    Directory of Open Access Journals (Sweden)

    Shengxiang Jia

    2003-01-01

This article presents a dynamic model of three shafts and two pairs of gears in mesh, with 26 degrees of freedom, including the effects of variable tooth stiffness, pitch and profile errors, friction, and a localized tooth crack on one of the gears. The article also details how geometrical errors in teeth can be included in a model. The model incorporates the effects of variations in torsional mesh stiffness in gear teeth by using a common formula to describe the stiffness that occurs as the gears mesh together. The comparison between the presence and absence of geometrical errors in teeth was made by using Matlab and Simulink models, which were developed from the equations of motion. The effects of pitch and profile errors on the coherent signal of the resultant input pinion's average angular velocity are discussed by investigating some of the common diagnostic functions and changes in the frequency spectra results.

  7. Sensory feedback, error correction, and remapping in a multiple oscillator model of place cell activity

    Directory of Open Access Journals (Sweden)

    Joseph D. Monaco

    2011-09-01

Mammals navigate by integrating self-motion signals (‘path integration’) and occasionally fixing on familiar environmental landmarks. The rat hippocampus is a model system of spatial representation in which place cells are thought to integrate both sensory and spatial information from entorhinal cortex. The localized firing fields of hippocampal place cells and entorhinal grid cells demonstrate a phase relationship with the local theta (6–10 Hz) rhythm that may be a temporal signature of path integration. However, encoding self-motion in the phase of theta oscillations requires high temporal precision and is susceptible to idiothetic noise, neuronal variability, and a changing environment. We present a model based on oscillatory interference theory, previously studied in the context of grid cells, in which transient temporal synchronization among a pool of path-integrating theta oscillators produces hippocampal-like place fields. We hypothesize that a spatiotemporally extended sensory interaction with external cues modulates feedback to the theta oscillators. We implement a form of this cue-driven feedback and show that it can retrieve fixed points in the phase code of position. A single cue can smoothly reset oscillator phases to correct for both systematic errors and continuous noise in path integration. Further, simulations in which local and global cues are rotated against each other reveal a phase-code mechanism in which conflicting cue arrangements can reproduce experimentally observed distributions of ‘partial remapping’ responses. This abstract model demonstrates that phase-code feedback can provide stability to the temporal coding of position during navigation and may contribute to the context-dependence of hippocampal spatial representations. While the anatomical substrates of these processes have not been fully characterized, our findings suggest several signatures that can be evaluated in future experiments.

  8. Neutron multiplication error in TRU waste measurements

    Energy Technology Data Exchange (ETDEWEB)

Veilleux, John [Los Alamos National Laboratory]; Stanfield, Sean B [CCP]; Wachter, Joe [CCP]; Ceo, Bob [CCP]

    2009-01-01

The Total Measurement Uncertainty (TMU) in neutron assays of transuranic (TRU) waste comprises several components, including counting statistics, matrix and source distribution, calibration inaccuracy, background effects, and neutron multiplication error. While a minor component for low plutonium masses, neutron multiplication error is often the major contributor to the TMU for items containing more than 140 g of weapons-grade plutonium. Neutron multiplication arises when neutrons from spontaneous fission and other nuclear events induce fissions in other fissile isotopes in the waste, thereby multiplying the overall coincidence neutron response in passive neutron measurements. Since passive neutron counters cannot differentiate between spontaneous and induced fission neutrons, multiplication can lead to positive bias in the measurements. Although neutron multiplication can only result in a positive bias, it has, for the purpose of mathematical simplicity, generally been treated as an error that can lead to either a positive or negative result in the TMU. While the factors that contribute to neutron multiplication include the total mass of fissile nuclides, the presence of moderating material in the matrix, the concentration and geometry of the fissile sources, and other factors, measurement uncertainty is generally determined as a function of the fissile mass in most TMU software calculations because this is the only quantity determined by the passive neutron measurement. Neutron multiplication error has a particularly pernicious consequence for TRU waste analysis because the measured Fissile Gram Equivalent (FGE) plus twice the TMU error must be less than 200 for TRU waste packaged in 55-gal drums and less than 325 for boxed waste. For this reason, large errors due to neutron multiplication can lead to increased rejections of TRU waste containers. This report will attempt to better define the error term due to neutron multiplication and arrive at values that are

  9. Dominant modes via model error

    Science.gov (United States)

    Yousuff, A.; Breida, M.

    1992-01-01

    Obtaining a reduced model of a stable mechanical system with proportional damping is considered. Such systems can be conveniently represented in modal coordinates. Two popular schemes, the modal cost analysis and the balancing method, offer simple means of identifying dominant modes for retention in the reduced model. The dominance is measured via the modal costs in the case of modal cost analysis and via the singular values of the Gramian-product in the case of balancing. Though these measures do not exactly reflect the more appropriate model error, which is the H2 norm of the output-error between the full and the reduced models, they do lead to simple computations. Normally, the model error is computed after the reduced model is obtained, since it is believed that, in general, the model error cannot be easily computed a priori. The authors point out that the model error can also be calculated a priori, just as easily as the above measures. Hence, the model error itself can be used to determine the dominant modes. Moreover, the simplicity of the computations does not presume any special properties of the system, such as small damping, orthogonal symmetry, etc.
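
    A minimal sketch of the point that the model error can be computed a priori: for a modally decoupled (block-diagonal) system, the error incurred by truncating a set of modes equals the H2 norm of the discarded subsystem, so candidate modes can be ranked directly by this error. The example system, mode parameters and helper names below are illustrative assumptions, not the paper's setup.

```python
# Hedged sketch: rank modes of a modal-form system by the H2 model error
# their truncation would cause (equal to the H2 norm of the dropped mode).
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

def h2_norm(A, B, C):
    # ||G||_2^2 = trace(C P C^T), where A P + P A^T + B B^T = 0 (A stable).
    P = solve_continuous_lyapunov(A, -B @ B.T)
    return float(np.sqrt(np.trace(C @ P @ C.T)))

def mode_block(wn, zeta):
    # Second-order mode in state-space form, states [position, velocity].
    return np.array([[0.0, 1.0], [-wn**2, -2.0 * zeta * wn]])

modes = [(1.0, 0.02), (3.0, 0.02), (10.0, 0.02)]     # (natural freq, damping)
A = np.zeros((6, 6)); B = np.zeros((6, 1)); C = np.zeros((1, 6))
for k, (wn, z) in enumerate(modes):
    A[2*k:2*k+2, 2*k:2*k+2] = mode_block(wn, z)
    B[2*k+1, 0] = 1.0                                 # force enters the velocity state
    C[0, 2*k] = 1.0                                   # output reads the position state

# Model error incurred by dropping each single mode.
for k, (wn, _) in enumerate(modes):
    idx = slice(2*k, 2*k+2)
    err = h2_norm(A[idx, idx], B[idx, :], C[:, idx])
    print(f"mode {k} (wn={wn}): truncation error (H2 norm) = {err:.4f}")
```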

  10. Measurement Error Models in Astronomy

    CERN Document Server

    Kelly, Brandon C

    2011-01-01

    I discuss the effects of measurement error on regression and density estimation. I review the statistical methods that have been developed to correct for measurement error that are most popular in astronomical data analysis, discussing their advantages and disadvantages. I describe functional models for accounting for measurement error in regression, with emphasis on the methods of moments approach and the modified loss function approach. I then describe structural models for accounting for measurement error in regression and density estimation, with emphasis on maximum-likelihood and Bayesian methods. As an example of a Bayesian application, I analyze an astronomical data set subject to large measurement errors and a non-linear dependence between the response and covariate. I conclude with some directions for future research.
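
    A small illustrative sketch (not taken from the paper) of the classical method-of-moments correction mentioned above, for simple linear regression with a noisy covariate: the naive slope is attenuated by the measurement error variance and can be corrected when that variance is known.

```python
# Hedged sketch: method-of-moments (attenuation) correction for measurement
# error in simple linear regression, with x_obs = x + u and Var(u) known.
import numpy as np

rng = np.random.default_rng(42)
n, beta, sigma_u = 5000, 2.0, 0.8
x = rng.normal(0.0, 1.0, n)                 # true covariate
x_obs = x + rng.normal(0.0, sigma_u, n)     # covariate measured with error
y = 1.0 + beta * x + rng.normal(0.0, 0.5, n)

beta_naive = np.cov(x_obs, y)[0, 1] / np.var(x_obs, ddof=1)
beta_mom = np.cov(x_obs, y)[0, 1] / (np.var(x_obs, ddof=1) - sigma_u**2)

print(f"naive slope     : {beta_naive:.3f}  (attenuated)")
print(f"corrected slope : {beta_mom:.3f}  (true value {beta})")
```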

  11. Systematic error mitigation in multiple field astrometry

    CERN Document Server

    Gai, Mario

    2011-01-01

Combination of more than two fields provides constraints on the systematic error of simultaneous observations. The concept is investigated in the context of the Gravitation Astrometric Measurement Experiment (GAME), which aims at measurement of the PPN parameter $\gamma$ at the $10^{-7}-10^{-8}$ level. Robust self-calibration and control of systematic error is crucial to the achievement of the precision goal. The present work is focused on the concept investigation and practical implementation strategy of systematic error control over four simultaneously observed fields, implementing a "double differential" measurement technique. Some basic requirements on geometry, observing and calibration strategy are derived, discussing the fundamental characteristics of the proposed concept.

  12. An Expert System for Diagnosing Children's Multiplication Errors.

    Science.gov (United States)

    Attisha, M.; Yazdani, M.

    1984-01-01

    Describes a microcomputer-based system for diagnosing children's multiplication errors which incorporates the knowledge base of all known systematic multiplication errors, and utilizes a modular approach to cope with the program's complexity. Each module's function, how the programs interact, and the design of pupil-machine interaction are…

  13. Validation and Error in Multiplicative Utility Functions

    Science.gov (United States)

    1978-12-01

decision analysis with multiple objectives: the Mexico City Airport. Bell Journal of Economics and Management Science, 1973, 4, 101-117...

  14. The method of translation additive and multiplicative error in the instrumental component of the measurement uncertainty

    Science.gov (United States)

    Vasilevskyi, Olexander M.; Kucheruk, Volodymyr Y.; Bogachuk, Volodymyr V.; Gromaszek, Konrad; Wójcik, Waldemar; Smailova, Saule; Askarova, Nursanat

    2016-09-01

The paper proposes a method for translating additive and multiplicative errors into the instrumental component of the measurement uncertainty; the mathematical models are obtained by a Taylor expansion of the transformation equations of the measuring instruments used.
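
    As background, the generic first-order Taylor expansion of a transformation equation y = f(x_1, ..., x_n), which underlies this kind of conversion of instrument errors into an instrumental uncertainty component, can be written as follows; this is the standard propagation formula, shown only for orientation and not taken from the paper.

```latex
% First-order Taylor expansion of the transformation equation about the
% nominal point, and the resulting combined uncertainty contribution.
\[
  y \approx f(\bar{x}_1,\dots,\bar{x}_n)
          + \sum_{i=1}^{n} \frac{\partial f}{\partial x_i}\,\Delta x_i ,
  \qquad
  u_y^{2} \approx \sum_{i=1}^{n}
          \left(\frac{\partial f}{\partial x_i}\right)^{2} u_{x_i}^{2}.
\]
```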

  15. Performance Assessment of Hydrological Models Considering Acceptable Forecast Error Threshold

    Directory of Open Access Journals (Sweden)

    Qianjin Dong

    2015-11-01

It is essential to consider an acceptable error threshold in the assessment of a hydrological model, both because research on this topic is scarce in the hydrology community and because errors do not necessarily cause risk. Two forecast errors, rainfall forecast error and peak flood forecast error, have been studied based on reliability theory. The first-order second-moment (FOSM) and bound methods are used to identify the reliability. Through the case study of the Dahuofang (DHF) Reservoir, it is shown that the correlation between these two errors has a great influence on the reliability index of the hydrological model. In particular, the reliability index of the DHF hydrological model decreases with increasing correlation. Based on reliability theory, the proposed performance evaluation framework, incorporating the acceptable forecast error threshold and the correlation among the multiple errors, can be used to evaluate the performance of a hydrological model and to quantify the uncertainties of its output.
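
    A toy first-order second-moment (FOSM) computation of a reliability index for a linear limit state built from two correlated forecast errors is sketched below; the weights, moments and threshold are invented for illustration and are not the DHF case-study values.

```python
# Hedged sketch: FOSM reliability index for g = threshold - (w1*e1 + w2*e2)
# with correlated forecast errors e1, e2. All numbers are illustrative.
import numpy as np

mu = np.array([0.10, 0.15])          # mean rainfall / peak-flood forecast errors
sd = np.array([0.05, 0.08])          # their standard deviations
rho = 0.6                            # correlation between the two errors
w = np.array([1.0, 1.0])             # weights in the combined error
threshold = 0.45                     # acceptable forecast error threshold

cov = np.array([[sd[0]**2,           rho * sd[0] * sd[1]],
                [rho * sd[0] * sd[1], sd[1]**2          ]])

mu_g = threshold - w @ mu            # mean of the limit state g
sd_g = np.sqrt(w @ cov @ w)          # standard deviation of g
beta = mu_g / sd_g                   # FOSM reliability index

# With positive weights, sd_g grows as rho grows, so beta decreases with
# increasing correlation, matching the qualitative finding above.
print(f"FOSM reliability index beta = {beta:.2f}")
```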

  16. Error Propagation in a System Model

    Science.gov (United States)

    Schloegel, Kirk (Inventor); Bhatt, Devesh (Inventor); Oglesby, David V. (Inventor); Madl, Gabor (Inventor)

    2015-01-01

    Embodiments of the present subject matter can enable the analysis of signal value errors for system models. In an example, signal value errors can be propagated through the functional blocks of a system model to analyze possible effects as the signal value errors impact incident functional blocks. This propagation of the errors can be applicable to many models of computation including avionics models, synchronous data flow, and Kahn process networks.

  17. Impact of channel estimation error on channel capacity of multiple input multiple output system

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

In order to investigate the impact of channel estimation error on the channel capacity of a multiple input multiple output (MIMO) system, a novel method is proposed to explore the channel capacity in a correlated Rayleigh fading environment. A system model is constructed based on the channel estimation error at the receiver side. Using the properties of the Wishart distribution, the lower bound of the channel capacity is derived when the MIMO channel is of full rank. Then a method is proposed to select the optimum set of transmit antennas based on the lower bound of the mean channel capacity. The novel method can be easily implemented with low computational complexity. The simulation results show that the channel capacity of a MIMO system is sensitive to channel estimation error, and is maximized when the signal-to-noise ratio increases to a certain point. Proper selection of transmit antennas can increase the channel capacity of the MIMO system by about 1 bit/s in a flat fading environment with a rank-deficient channel matrix.
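
    The qualitative effect described above can be illustrated with a commonly used capacity lower bound in which the channel estimation error acts as additional noise; the exact bound derived in the paper may differ, and the antenna numbers, SNR and error variances below are arbitrary.

```python
# Hedged sketch: ergodic MIMO capacity lower bound when the receiver only has
# a channel estimate H_hat with per-entry error variance sigma_e^2, treated
# as extra noise. All parameters are illustrative.
import numpy as np

rng = np.random.default_rng(7)

def capacity_lower_bound(nt, nr, snr, sigma_e2, trials=2000):
    caps = []
    eff_snr = snr / (1.0 + snr * sigma_e2)          # estimation error as noise
    for _ in range(trials):
        h_hat = (rng.normal(size=(nr, nt)) + 1j * rng.normal(size=(nr, nt))) \
                * np.sqrt((1.0 - sigma_e2) / 2.0)    # estimated channel part
        g = np.eye(nr) + (eff_snr / nt) * h_hat @ h_hat.conj().T
        caps.append(np.log2(np.linalg.det(g).real))
    return np.mean(caps)

for sigma_e2 in (0.0, 0.05, 0.2):
    c = capacity_lower_bound(nt=4, nr=4, snr=10.0, sigma_e2=sigma_e2)
    print(f"sigma_e^2 = {sigma_e2:>4}: capacity lower bound ~ {c:.2f} bit/s/Hz")
```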

  18. Propagation error minimization method for multiple structural displacement monitoring system

    Science.gov (United States)

    Jeon, Haemin; Shin, Jae-Uk; Myung, Hyun

    2013-04-01

    In the previous study, a visually servoed paired structured light system (ViSP) which is composed of two sides facing each other, each with one or two lasers, a 2-DOF manipulator, a camera, and a screen has been proposed. The lasers project their parallel beams to the screen on the opposite side and 6-DOF relative displacement between two sides is estimated by calculating positions of the projected laser beams and rotation angles of the manipulators. To apply the system to massive civil structures such as long-span bridges or high-rise buildings, the whole area should be divided into multiple partitions and each ViSP module is placed in each partition in a cascaded manner. In other words, the movement of the entire structure can be monitored by multiplying the estimated displacements from multiple ViSP modules. In the multiplication, however, there is a major problem that the displacement estimation error is propagated throughout the multiple modules. To solve the problem, propagation error minimization method (PEMM) which uses Newton-Raphson formulation inspired by the error back-propagation algorithm is proposed. In this method, a propagation error at the last module is calculated and then the estimated displacement from ViSP at each partition is updated in reverse order by using the proposed PEMM that minimizes the propagation error. To verify the performance of the proposed method, various simulations and experimental tests have been performed. The results show that the propagation error is significantly reduced after applying PEMM.

  19. Model error estimation in ensemble data assimilation

    Directory of Open Access Journals (Sweden)

    S. Gillijns

    2007-01-01

A new methodology is proposed to estimate and account for systematic model error in linear filtering as well as in nonlinear ensemble based filtering. Our results extend the work of Dee and Todling (2000) on constant bias errors to time-varying model errors. In contrast to existing methodologies, the new filter can also deal with the case where no dynamical model for the systematic error is available. In the latter case, the applicability is limited by a matrix rank condition which has to be satisfied in order for the filter to exist. The performance of the filter developed in this paper is limited by the availability and the accuracy of observations and by the variance of the stochastic model error component. The effect of these aspects on the estimation accuracy is investigated in several numerical experiments using the Lorenz (1996) model. Experimental results indicate that the availability of a dynamical model for the systematic error significantly reduces the variance of the model error estimates, but has only a minor effect on the estimates of the system state. The filter is able to estimate additive model error of any type, provided that the rank condition is satisfied and that the stochastic errors and measurement errors are significantly smaller than the systematic errors. The results of this study are encouraging. However, it remains to be seen how the filter performs in more realistic applications.
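
    For readers unfamiliar with the test bed, a minimal sketch of the Lorenz (1996) model used in such experiments is given below (the standard formulation with cyclic indexing and a fourth-order Runge-Kutta step); the forcing, dimension and spin-up length are illustrative choices, not the paper's exact setup.

```python
# Hedged sketch of the Lorenz (1996) model:
#   dx_i/dt = (x_{i+1} - x_{i-2}) * x_{i-1} - x_i + F, with cyclic indices.
import numpy as np

def lorenz96_rhs(x, forcing=8.0):
    return (np.roll(x, -1) - np.roll(x, 2)) * np.roll(x, 1) - x + forcing

def rk4_step(x, dt, forcing=8.0):
    k1 = lorenz96_rhs(x, forcing)
    k2 = lorenz96_rhs(x + 0.5 * dt * k1, forcing)
    k3 = lorenz96_rhs(x + 0.5 * dt * k2, forcing)
    k4 = lorenz96_rhs(x + dt * k3, forcing)
    return x + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

n, dt = 40, 0.05
x = 8.0 * np.ones(n); x[0] += 0.01      # small perturbation of the fixed point
for _ in range(1000):                    # spin-up onto the attractor
    x = rk4_step(x, dt)
print("sample state after spin-up:", np.round(x[:5], 3))
```

    A model-error experiment in this spirit could, for example, run a "truth" trajectory with F = 8 and assimilate observations into a biased model with a perturbed forcing.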

  20. Error-Resilient Triple-Watermarking with Multiple Description Coding

    Directory of Open Access Journals (Sweden)

    Shu-Chuan Chu

    2010-03-01

Watermarking is one useful solution for digital rights management (DRM) systems, and it has been a popular research topic over the last decade. In this paper, beyond the inherent requirement that a watermark withstand intentional or unintentional attacks, we not only consider the survivability of the embedded watermark, but also focus on retaining the watermarked image quality with the aid of multiple description coding (MDC). MDC is a technique for error-resilient coding, suitable for transmitting compressed data over multiple channels. In this paper, we propose a new algorithm for vector quantization (VQ) based image watermarking, which is suitable for error-resilient transmission. By incorporating watermarking with MDC, the proposed scheme for embedding three watermarks can effectively overcome channel impairments while retaining the capability for ownership protection. With the promising simulation results presented, we demonstrate the utility and practicability of our algorithm.

  1. Error handling strategies in multiphase inverse modeling

    Energy Technology Data Exchange (ETDEWEB)

    Finsterle, S.; Zhang, Y.

    2010-12-01

    Parameter estimation by inverse modeling involves the repeated evaluation of a function of residuals. These residuals represent both errors in the model and errors in the data. In practical applications of inverse modeling of multiphase flow and transport, the error structure of the final residuals often significantly deviates from the statistical assumptions that underlie standard maximum likelihood estimation using the least-squares method. Large random or systematic errors are likely to lead to convergence problems, biased parameter estimates, misleading uncertainty measures, or poor predictive capabilities of the calibrated model. The multiphase inverse modeling code iTOUGH2 supports strategies that identify and mitigate the impact of systematic or non-normal error structures. We discuss these approaches and provide an overview of the error handling features implemented in iTOUGH2.

  2. Stepwise multiple test procedures and control of directional errors

    OpenAIRE

    1999-01-01

One of the most difficult problems occurring with stepwise multiple test procedures for a set of two-sided hypotheses is the control of directional errors if rejection of a hypothesis is accomplished with a directional decision. In this paper we generalize a result for so-called step-down procedures derived by Shaffer to a large class of stepwise or closed multiple test procedures. In a unifying way we obtain results for a large class of order statistics procedures includin...

  3. Error Control of Iterative Linear Solvers for Integrated Groundwater Models

    CERN Document Server

    Dixon, Matthew; Brush, Charles; Chung, Francis; Dogrul, Emin; Kadir, Tariq

    2010-01-01

    An open problem that arises when using modern iterative linear solvers, such as the preconditioned conjugate gradient (PCG) method or Generalized Minimum RESidual method (GMRES) is how to choose the residual tolerance in the linear solver to be consistent with the tolerance on the solution error. This problem is especially acute for integrated groundwater models which are implicitly coupled to another model, such as surface water models, and resolve both multiple scales of flow and temporal interaction terms, giving rise to linear systems with variable scaling. This article uses the theory of 'forward error bound estimation' to show how rescaling the linear system affects the correspondence between the residual error in the preconditioned linear system and the solution error. Using examples of linear systems from models developed using the USGS GSFLOW package and the California State Department of Water Resources' Integrated Water Flow Model (IWFM), we observe that this error bound guides the choice of a prac...
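
    A small numerical illustration of the forward error bound underlying this discussion, relating the relative solution error of A x = b to the condition number and the relative residual, is sketched below; the matrix and perturbation are synthetic stand-ins, not outputs of GSFLOW or IWFM.

```python
# Hedged sketch: forward error bound ||x - x_hat|| / ||x|| <= cond(A) * ||r|| / ||b||,
# which shows how rescaling the system changes the mapping from a residual
# tolerance to a solution-error tolerance. Data are synthetic.
import numpy as np

rng = np.random.default_rng(3)
n = 50
A = rng.normal(size=(n, n)) + 5 * np.eye(n)      # illustrative well-posed system
x_true = rng.normal(size=n)
b = A @ x_true

x_hat = x_true + 1e-6 * rng.normal(size=n)        # stand-in for an iterative solution
r = b - A @ x_hat                                 # residual of the approximate solution

bound = np.linalg.cond(A) * np.linalg.norm(r) / np.linalg.norm(b)
actual = np.linalg.norm(x_true - x_hat) / np.linalg.norm(x_true)
print(f"forward error bound: {bound:.2e}   actual relative error: {actual:.2e}")
```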

  4. Multiplicity Control in Structural Equation Modeling

    Science.gov (United States)

    Cribbie, Robert A.

    2007-01-01

Researchers conducting structural equation modeling analyses rarely, if ever, control for the inflated probability of Type I errors when evaluating the statistical significance of multiple parameters in a model. In this study, the Type I error control, power and true model rates of familywise and false discovery rate controlling procedures were…
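
    As an example of the kind of false-discovery-rate control such studies evaluate, here is a hedged sketch of the Benjamini-Hochberg step-up procedure; the p-values are invented and the study's exact procedures may differ.

```python
# Hedged sketch: Benjamini-Hochberg step-up FDR procedure at level q.
import numpy as np

def benjamini_hochberg(pvals, q=0.05):
    p = np.asarray(pvals)
    order = np.argsort(p)
    m = len(p)
    thresholds = q * (np.arange(1, m + 1) / m)       # step-up critical values
    below = p[order] <= thresholds
    k = np.max(np.nonzero(below)[0]) + 1 if below.any() else 0
    rejected = np.zeros(m, dtype=bool)
    rejected[order[:k]] = True                        # reject the k smallest p-values
    return rejected

pvals = [0.001, 0.008, 0.039, 0.041, 0.042, 0.06, 0.074, 0.205, 0.212, 0.216]
print("rejected at FDR 5%:", benjamini_hochberg(pvals))
```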

  5. Error Estimates of Theoretical Models: a Guide

    CERN Document Server

Dobaczewski, J; Reinhard, P-G

    2014-01-01

    This guide offers suggestions/insights on uncertainty quantification of nuclear structure models. We discuss a simple approach to statistical error estimates, strategies to assess systematic errors, and show how to uncover inter-dependencies by correlation analysis. The basic concepts are illustrated through simple examples. By providing theoretical error bars on predicted quantities and using statistical methods to study correlations between observables, theory can significantly enhance the feedback between experiment and nuclear modeling.

  6. Error estimation and adaptive chemical transport modeling

    Directory of Open Access Journals (Sweden)

    Malte Braack

    2014-09-01

We present a numerical method to use several chemical transport models of increasing accuracy and complexity in an adaptive way. In large parts of the domain, a simplified chemical model may be used, whereas in certain regions a more complex model is needed for accuracy reasons. A mathematically derived error estimator measures the modeling error and provides information on where to use more accurate models. The error is measured in terms of output functionals. Therefore, one has to consider adjoint problems which carry sensitivity information. This concept is demonstrated by means of ozone formation and pollution emission.

  7. Error model identification of inertial navigation platform based on errors-in-variables model

    Institute of Scientific and Technical Information of China (English)

    Liu Ming; Liu Yu; Su Baoku

    2009-01-01

Because the real input acceleration cannot be obtained during the error model identification of an inertial navigation platform, both the input and output data contain noises. In this case, the conventional regression model and the least squares (LS) method will result in bias. Based on the models of inertial navigation platform error and observation error, the errors-in-variables (EV) model and the total least squares (TLS) method are proposed to identify the error model of the inertial navigation platform. The estimation precision is improved and the result is better than that of the conventional regression model based LS method. The simulation results illustrate the effectiveness of the proposed method.
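
    A compact illustration of the total least squares (TLS) idea for an errors-in-variables model, computed from the SVD of the augmented data matrix and compared with ordinary least squares on synthetic data, is sketched below; it is a generic demonstration, not the paper's platform error model.

```python
# Hedged sketch: total least squares for y = X b when both X and y are noisy,
# via the SVD of the augmented matrix [X  y]; ordinary LS shown for contrast.
import numpy as np

rng = np.random.default_rng(5)
n, b_true = 500, np.array([1.5, -0.7])
x_clean = rng.normal(size=(n, 2))
y_clean = x_clean @ b_true
x_obs = x_clean + 0.5 * rng.normal(size=(n, 2))    # noisy inputs
y_obs = y_clean + 0.5 * rng.normal(size=n)          # noisy outputs

b_ls = np.linalg.lstsq(x_obs, y_obs, rcond=None)[0]  # biased (attenuated) estimate

z = np.column_stack([x_obs, y_obs])
_, _, vt = np.linalg.svd(z, full_matrices=False)
v = vt[-1]                                           # singular vector of the smallest singular value
b_tls = -v[:-1] / v[-1]

print("true:", b_true, " LS:", np.round(b_ls, 3), " TLS:", np.round(b_tls, 3))
```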

  8. Error Resilient Video Compression Using Behavior Models

    Directory of Open Access Journals (Sweden)

    Jacco R. Taal

    2004-03-01

Wireless and Internet video applications are inherently subjected to bit errors and packet errors, respectively. This is especially so if constraints on the end-to-end compression and transmission latencies are imposed. Therefore, it is necessary to develop methods to optimize the video compression parameters and the rate allocation of these applications that take into account residual channel bit errors. In this paper, we study the behavior of a predictive (interframe) video encoder and model the encoder's behavior using only the statistics of the original input data and of the underlying channel prone to bit errors. The resulting data-driven behavior models are then used to carry out group-of-pictures partitioning and to control the rate of the video encoder in such a way that the overall quality of the decoded video with compression and channel errors is optimized.

  9. Comparison of Prediction-Error-Modelling Criteria

    DEFF Research Database (Denmark)

    Jørgensen, John Bagterp; Jørgensen, Sten Bay

    2007-01-01

Single and multi-step prediction-error-methods based on the maximum likelihood and least squares criteria are compared. The prediction-error methods studied are based on predictions using the Kalman filter and Kalman predictors for a linear discrete-time stochastic state space model, which is a realization of a continuous-discrete multivariate stochastic transfer function model. The proposed prediction-error methods are demonstrated for a SISO system parameterized by the transfer functions with time delays of a continuous-discrete-time linear stochastic system. The simulations for this case suggest... computational resources. The identification method is suitable for predictive control.

  10. Analysis of modeling errors in system identification

    Science.gov (United States)

    Hadaegh, F. Y.; Bekey, G. A.

    1986-01-01

    This paper is concerned with the identification of a system in the presence of several error sources. Following some basic definitions, the notion of 'near-equivalence in probability' is introduced using the concept of near-equivalence between a model and process. Necessary and sufficient conditions for the identifiability of system parameters are given. The effect of structural error on the parameter estimates for both deterministic and stochastic cases are considered.

  11. Generalization error bounds for stationary autoregressive models

    CERN Document Server

    McDonald, Daniel J; Schervish, Mark

    2011-01-01

    We derive generalization error bounds for stationary univariate autoregressive (AR) models. We show that the stationarity assumption alone lets us treat the estimation of AR models as a regularized kernel regression without the need to further regularize the model arbitrarily. We thereby bound the Rademacher complexity of AR models and apply existing Rademacher complexity results to characterize the predictive risk of AR models. We demonstrate our methods by predicting interest rate movements.

  12. Minimum Symbol Error Rate Detection in Single-Input Multiple-Output Channels with Markov Noise

    DEFF Research Database (Denmark)

    Christensen, Lars P.B.

    2005-01-01

Minimum symbol error rate detection in Single-Input Multiple-Output (SIMO) channels with Markov noise is presented. The special case of zero-mean Gauss-Markov noise is examined more closely, as it only requires knowledge of the second-order moments. In this special case, it is shown that optimal detection can be achieved by a Multiple-Input Multiple-Output (MIMO) whitening filter followed by a traditional BCJR algorithm. The Gauss-Markov noise model provides a reasonable approximation for co-channel interference, making it an interesting single-user detector for many multiuser communication systems...

  13. Spatial Error Metrics for Oceanographic Model Verification

    Science.gov (United States)

    2012-02-01

quantitatively and qualitatively for this oceanographic data and successfully separates the model error into displacement and intensity components. This... oceanographic models as well, though one would likely need to make special modifications to handle the often-used nonuniform spacing between depth layers

  14. Improving Localization Accuracy: Successive Measurements Error Modeling

    Directory of Open Access Journals (Sweden)

    Najah Abu Ali

    2015-07-01

Vehicle self-localization is an essential requirement for many of the safety applications envisioned for vehicular networks. The mathematical models used in current vehicular localization schemes focus on modeling the localization error itself, and overlook the potential correlation between successive localization measurement errors. In this paper, we first investigate the existence of correlation between successive positioning measurements, and then incorporate this correlation into the modeling of the positioning error. We use the Yule–Walker equations to determine the degree of correlation between a vehicle’s future position and its past positions, and then propose a pth-order Gauss–Markov model to predict the future position of a vehicle from its past p positions. We investigate the existence of correlation for two datasets representing the mobility traces of two vehicles over a period of time. We prove the existence of correlation between successive measurements in the two datasets, and show that the time correlation between measurements can have a value of up to four minutes. Through simulations, we validate the robustness of our model and show that it is possible to use the first-order Gauss–Markov model, which has the least complexity, and still maintain an accurate estimation of a vehicle’s future location over time using only its current position. Our model can assist in providing better modeling of positioning errors and can be used as a prediction tool to improve the performance of classical localization algorithms such as the Kalman filter.
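
    A minimal sketch of the Yule-Walker step described above, fitting an autoregressive (Gauss-Markov) model to a correlated error series and producing a one-step prediction, is given below; the synthetic AR(1) series and helper function are assumptions made for the demo.

```python
# Hedged sketch: estimate AR coefficients of a position-error series with the
# Yule-Walker equations and use the fitted first-order Gauss-Markov model for
# a one-step-ahead prediction. The error series is synthetic.
import numpy as np

rng = np.random.default_rng(11)

def yule_walker(x, order):
    x = np.asarray(x) - np.mean(x)
    r = np.array([np.dot(x[: len(x) - k], x[k:]) / len(x) for k in range(order + 1)])
    R = np.array([[r[abs(i - j)] for j in range(order)] for i in range(order)])
    return np.linalg.solve(R, r[1 : order + 1])      # AR coefficients

# Synthetic correlated localization-error series: e_t = 0.8 e_{t-1} + noise.
e = np.zeros(2000)
for t in range(1, len(e)):
    e[t] = 0.8 * e[t - 1] + rng.normal(scale=0.1)

phi = yule_walker(e, order=1)
prediction = phi[0] * e[-1]                           # one-step-ahead prediction
print(f"estimated AR(1) coefficient: {phi[0]:.3f}, next-step prediction: {prediction:.3f}")
```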

  15. Soft error mechanisms, modeling and mitigation

    CERN Document Server

    Sayil, Selahattin

    2016-01-01

    This book introduces readers to various radiation soft-error mechanisms such as soft delays, radiation induced clock jitter and pulses, and single event (SE) coupling induced effects. In addition to discussing various radiation hardening techniques for combinational logic, the author also describes new mitigation strategies targeting commercial designs. Coverage includes novel soft error mitigation techniques such as the Dynamic Threshold Technique and Soft Error Filtering based on Transmission gate with varied gate and body bias. The discussion also includes modeling of SE crosstalk noise, delay and speed-up effects. Various mitigation strategies to eliminate SE coupling effects are also introduced. Coverage also includes the reliability of low power energy-efficient designs and the impact of leakage power consumption optimizations on soft error robustness. The author presents an analysis of various power optimization techniques, enabling readers to make design choices that reduce static power consumption an...

  16. A Long-Term Memory Competitive Process Model of a Common Procedural Error

    Science.gov (United States)

    2013-08-01

A novel computational cognitive model explains human procedural error in terms of declarative memory processes. This is an early version of a process model intended to predict and explain multiple classes of procedural error a priori. We begin with postcompletion error (PCE), a type of systematic

  17. VQ-based model for binary error process

    Science.gov (United States)

    Csóka, Tibor; Polec, Jaroslav; Csóka, Filip; Kotuliaková, Kvetoslava

    2017-05-01

A variety of complex techniques, such as forward error correction (FEC), automatic repeat request (ARQ), hybrid ARQ or cross-layer optimization, require in their design and optimization phase a realistic model of the binary error process present in a specific digital channel. Past and more recent modeling approaches focus on capturing one or more stochastic characteristics with precision sufficient for the desired model application, thereby applying concepts and methods that severely limit the model's applicability (e.g., in the form of expectations about the modeled process imposed as prerequisites). The proposed novel concept utilizing a Vector Quantization (VQ)-based approach to binary process modeling offers a viable alternative capable of superior modeling of the most commonly observed small- and large-scale stochastic characteristics of a binary error process on the digital channel. Precision of the proposed model was verified using multiple statistical distances against the data captured in a wireless sensor network logical channel trace. Furthermore, Pearson's goodness-of-fit test of all model variants' output was performed to conclusively demonstrate usability of the model for a realistic captured binary error process. Finally, the presented results prove the proposed model's applicability and its ability to far surpass the capabilities of the reference Elliott model.

  18. A probabilistic model for reducing medication errors.

    Directory of Open Access Journals (Sweden)

    Phung Anh Nguyen

BACKGROUND: Medication errors are common, life threatening, costly but preventable. Information technology and automated systems are highly efficient for preventing medication errors and are therefore widely employed in hospital settings. The aim of this study was to construct a probabilistic model that can reduce medication errors by identifying uncommon or rare associations between medications and diseases. METHODS AND FINDINGS: Association rule mining techniques are utilized for 103.5 million prescriptions from Taiwan's National Health Insurance database. The dataset included 204.5 million diagnoses with ICD9-CM codes and 347.7 million medications by using ATC codes. Disease-Medication (DM) and Medication-Medication (MM) associations were computed by their co-occurrence, and the associations' strength was measured by the interestingness or lift values, referred to as Q values. The DMQs and MMQs were used to develop the AOP model to predict the appropriateness of a given prescription. Validation of this model was done by comparing the results of evaluation performed by the AOP model and verified by human experts. The results showed 96% accuracy for appropriate and 45% accuracy for inappropriate prescriptions, with a sensitivity and specificity of 75.9% and 89.5%, respectively. CONCLUSIONS: We successfully developed the AOP model as an efficient tool for automatic identification of uncommon or rare associations between disease-medication and medication-medication pairs in prescriptions. The AOP model helps to reduce medication errors by alerting physicians, improving the patients' safety and the overall quality of care.

  19. Regression Model With Elliptically Contoured Errors

    CERN Document Server

    Arashi, M; Tabatabaey, S M M

    2012-01-01

    For the regression model where the errors follow the elliptically contoured distribution (ECD), we consider the least squares (LS), restricted LS (RLS), preliminary test (PT), Stein-type shrinkage (S) and positive-rule shrinkage (PRS) estimators for the regression parameters. We compare the quadratic risks of the estimators to determine the relative dominance properties of the five estimators.

  20. Understanding error generation in fused deposition modeling

    Science.gov (United States)

    Bochmann, Lennart; Bayley, Cindy; Helu, Moneer; Transchel, Robert; Wegener, Konrad; Dornfeld, David

    2015-03-01

    Additive manufacturing offers completely new possibilities for the manufacturing of parts. The advantages of flexibility and convenience of additive manufacturing have had a significant impact on many industries, and optimizing part quality is crucial for expanding its utilization. This research aims to determine the sources of imprecision in fused deposition modeling (FDM). Process errors in terms of surface quality, accuracy and precision are identified and quantified, and an error-budget approach is used to characterize errors of the machine tool. It was determined that accuracy and precision in the y direction (0.08-0.30 mm) are generally greater than in the x direction (0.12-0.62 mm) and the z direction (0.21-0.57 mm). Furthermore, accuracy and precision tend to decrease at increasing axis positions. The results of this work can be used to identify possible process improvements in the design and control of FDM technology.

  1. Quantum error-correction failure distributions: Comparison of coherent and stochastic error models

    Science.gov (United States)

    Barnes, Jeff P.; Trout, Colin J.; Lucarelli, Dennis; Clader, B. D.

    2017-06-01

We compare failure distributions of quantum error correction circuits for stochastic errors and coherent errors. We utilize a fully coherent simulation of a fault-tolerant quantum error correcting circuit for a d = 3 Steane and surface code. We find that the output distributions are markedly different for the two error models, showing that no simple mapping between the two error models exists. Coherent errors create very broad and heavy-tailed failure distributions. This suggests that they are susceptible to outlier events and that mean statistics, such as pseudothreshold estimates, may not provide the key figure of merit. This provides further statistical insight into why coherent errors can be so harmful for quantum error correction. These output probability distributions may also provide a useful metric that can be utilized when optimizing quantum error correcting codes and decoding procedures for purely coherent errors.

  2. Hierarchical Boltzmann simulations and model error estimation

    Science.gov (United States)

    Torrilhon, Manuel; Sarna, Neeraj

    2017-08-01

A hierarchical simulation approach for Boltzmann's equation should provide a single numerical framework in which a coarse representation can be used to compute gas flows as accurately and efficiently as in computational fluid dynamics, but in which subsequent refinement allows the result to be successively improved toward the complete Boltzmann result. We use Hermite discretization, or moment equations, for the steady linearized Boltzmann equation as a proof of concept for such a framework. All representations of the hierarchy are rotationally invariant and the numerical method is formulated on fully unstructured triangular and quadrilateral meshes using an implicit discontinuous Galerkin formulation. We demonstrate the performance of the numerical method on model problems, which in particular highlights the relevance of stability of boundary conditions on curved domains. The hierarchical nature of the method also allows us to provide model error estimates by comparing subsequent representations. We present various model errors for a flow through a curved channel with obstacles.

  3. Nonclassical measurements errors in nonlinear models

    DEFF Research Database (Denmark)

    Madsen, Edith; Mulalic, Ismir

Discrete choice models, and in particular logit type models, play an important role in understanding and quantifying individual or household behavior in relation to transport demand. An example is the choice of travel mode for a given trip under the budget and time restrictions that the individuals of a household face. In this case an important policy parameter is the effect of income (reflecting the household budget) on the choice of travel mode. This paper deals with the consequences of measurement error in income (an explanatory variable) in discrete choice models. Since it is likely to give misleading estimates of the income effect, it is of interest to investigate the magnitude of the estimation bias and, if possible, use estimation techniques that take the measurement error problem into account. We use data from the Danish National Travel Survey (NTS) and merge it with administrative register data...

  4. Error propagation in energetic carrying capacity models

    Science.gov (United States)

    Pearse, Aaron T.; Stafford, Joshua D.

    2014-01-01

Conservation objectives derived from carrying capacity models have been used to inform management of landscapes for wildlife populations. Energetic carrying capacity models are particularly useful in conservation planning for wildlife; these models use estimates of food abundance and energetic requirements of wildlife to target conservation actions. We provide a general method for incorporating a foraging threshold (i.e., density of food at which foraging becomes unprofitable) when estimating food availability with energetic carrying capacity models. We use a hypothetical example to describe how past methods for adjustment of foraging thresholds biased results of energetic carrying capacity models in certain instances. Adjusting foraging thresholds at the patch level of the species of interest provides results consistent with ecological foraging theory. Presentation of two case studies suggests variation in bias which, in certain instances, created large errors in conservation objectives and may have led to inefficient allocation of limited resources. Our results also illustrate how small errors or biases in application of input parameters, when extrapolated to large spatial extents, propagate errors in conservation planning and can have negative implications for target populations.

  5. A Probabilistic Model for Reducing Medication Errors

    Science.gov (United States)

    Nguyen, Phung Anh; Syed-Abdul, Shabbir; Iqbal, Usman; Hsu, Min-Huei; Huang, Chen-Ling; Li, Hsien-Chang; Clinciu, Daniel Livius; Jian, Wen-Shan; Li, Yu-Chuan Jack

    2013-01-01

Background Medication errors are common, life threatening, costly but preventable. Information technology and automated systems are highly efficient for preventing medication errors and therefore widely employed in hospital settings. The aim of this study was to construct a probabilistic model that can reduce medication errors by identifying uncommon or rare associations between medications and diseases. Methods and Findings Association rule mining techniques are utilized for 103.5 million prescriptions from Taiwan’s National Health Insurance database. The dataset included 204.5 million diagnoses with ICD9-CM codes and 347.7 million medications by using ATC codes. Disease-Medication (DM) and Medication-Medication (MM) associations were computed by their co-occurrence and associations’ strength were measured by the interestingness or lift values which were being referred as Q values. The DMQs and MMQs were used to develop the AOP model to predict the appropriateness of a given prescription. Validation of this model was done by comparing the results of evaluation performed by the AOP model and verified by human experts. The results showed 96% accuracy for appropriate and 45% accuracy for inappropriate prescriptions, with a sensitivity and specificity of 75.9% and 89.5%, respectively. Conclusions We successfully developed the AOP model as an efficient tool for automatic identification of uncommon or rare associations between disease-medication and medication-medication in prescriptions. The AOP model helps to reduce medication errors by alerting physicians, improving the patients’ safety and the overall quality of care. PMID:24312659

  6. Biomedical model fitting and error analysis.

    Science.gov (United States)

    Costa, Kevin D; Kleinstein, Steven H; Hershberg, Uri

    2011-09-20

    This Teaching Resource introduces students to curve fitting and error analysis; it is the second of two lectures on developing mathematical models of biomedical systems. The first focused on identifying, extracting, and converting required constants--such as kinetic rate constants--from experimental literature. To understand how such constants are determined from experimental data, this lecture introduces the principles and practice of fitting a mathematical model to a series of measurements. We emphasize using nonlinear models for fitting nonlinear data, avoiding problems associated with linearization schemes that can distort and misrepresent the data. To help ensure proper interpretation of model parameters estimated by inverse modeling, we describe a rigorous six-step process: (i) selecting an appropriate mathematical model; (ii) defining a "figure-of-merit" function that quantifies the error between the model and data; (iii) adjusting model parameters to get a "best fit" to the data; (iv) examining the "goodness of fit" to the data; (v) determining whether a much better fit is possible; and (vi) evaluating the accuracy of the best-fit parameter values. Implementation of the computational methods is based on MATLAB, with example programs provided that can be modified for particular applications. The problem set allows students to use these programs to develop practical experience with the inverse-modeling process in the context of determining the rates of cell proliferation and death for B lymphocytes using data from BrdU-labeling experiments.
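
    A hedged Python analogue of steps (i) through (vi) above (the resource itself uses MATLAB): fit a nonlinear model with scipy.optimize.curve_fit, inspect the sum-of-squares figure of merit, and read approximate parameter uncertainties from the fit covariance. The exponential-decay model and synthetic data are illustrative stand-ins for the BrdU-labeling example.

```python
# Hedged sketch of an inverse-modeling workflow: nonlinear fit, goodness of
# fit, and approximate accuracy of the best-fit parameter values.
import numpy as np
from scipy.optimize import curve_fit

def model(t, a, k):                       # (i) candidate mathematical model
    return a * np.exp(-k * t)

rng = np.random.default_rng(2)
t = np.linspace(0.0, 10.0, 40)
data = model(t, 2.0, 0.4) + 0.05 * rng.normal(size=t.size)   # synthetic data

# (ii)-(iii) curve_fit minimizes the sum-of-squares figure of merit.
popt, pcov = curve_fit(model, t, data, p0=[1.0, 0.1])
residuals = data - model(t, *popt)

# (iv)-(vi) goodness of fit and accuracy of the best-fit parameter values.
sse = float(np.sum(residuals**2))
perr = np.sqrt(np.diag(pcov))             # approximate parameter standard errors
print(f"best fit: a = {popt[0]:.3f} +/- {perr[0]:.3f}, k = {popt[1]:.3f} +/- {perr[1]:.3f}")
print(f"sum of squared errors: {sse:.4f}")
```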

  7. Automatic Registration and Error Detection of Multiple Slices Using Landmarks

    Directory of Open Access Journals (Sweden)

    Hans Frimmel

    2001-01-01

Objectives. When analysing the 3D structure of tissue, serial sectioning and staining of the resulting slices is sometimes the preferred option. This leads to severe registration problems. In this paper, a method for automatic registration and error detection of slices using landmark needles has been developed. A cost function takes some parameters from the current state of the problem to be solved as input and gives the quality of the current solution as output. The cost function used in this paper is based on a model of the slices and the landmark needles. The method has been used to register slices of prostates in order to create 3D computer models. Manual registration of the same prostates has been undertaken and compared with the results from the algorithm. Methods. Prostates from sixteen men who underwent radical prostatectomy were formalin-fixed with landmark needles, sliced, and the slices were computer reconstructed. The cost function takes rotation and translation for each prostate slice, as well as slope and offset for each landmark needle, as input. The current quality of fit of the model, using the input parameters given, is returned. The function takes the built-in instability of the model into account. The method uses a standard algorithm to optimize the prostate slice positions. To verify the result, a standard statistical method was used. Results. The methods were evaluated for 16 prostates. When testing blindly, a physician could not determine whether the registrations shown to him were created by the automated method described in this paper, or manually by an expert, except in one out of 16 cases. Visual inspection and analysis of the outlier confirmed that the input data had been deformed. The automatic detection of erroneous slices marked a few slices, including the outlier, as suspicious. Conclusions. The model-based registration performs better than traditional simple slice-wise registration. In the case of prostate

  8. Application of an Error Statistics Estimation Method to the PSAS Forecast Error Covariance Model

    Institute of Scientific and Technical Information of China (English)

    2006-01-01

    In atmospheric data assimilation systems, the forecast error covariance model is an important component. However, the parameters required by a forecast error covariance model are difficult to obtain due to the absence of the truth. This study applies an error statistics estimation method to the Physical-space Statistical Analysis System (PSAS) height-wind forecast error covariance model. This method consists of two components: the first component computes the error statistics by using the National Meteorological Center (NMC) method, which is a lagged-forecast difference approach, within the framework of the PSAS height-wind forecast error covariance model; the second obtains a calibration formula to rescale the error standard deviations provided by the NMC method. The calibration is against the error statistics estimated by using a maximum-likelihood estimation (MLE) with rawindsonde height observed-minus-forecast residuals. A complete set of formulas for estimating the error statistics and for the calibration is applied to a one-month-long dataset generated by a general circulation model of the Global Model and Assimilation Office (GMAO), NASA. There is a clear constant relationship between the error statistics estimates of the NMC-method and MLE. The final product provides a full set of 6-hour error statistics required by the PSAS height-wind forecast error covariance model over the globe. The features of these error statistics are examined and discussed.

  9. Estimation in the polynomial errors-in-variables model

    Institute of Scientific and Technical Information of China (English)

    ZHANG; Sanguo

    2002-01-01

  10. Accelerating Monte Carlo Markov chains with proxy and error models

    Science.gov (United States)

    Josset, Laureline; Demyanov, Vasily; Elsheikh, Ahmed H.; Lunati, Ivan

    2015-12-01

    In groundwater modeling, Monte Carlo Markov Chain (MCMC) simulations are often used to calibrate aquifer parameters and propagate the uncertainty to the quantity of interest (e.g., pollutant concentration). However, this approach requires a large number of flow simulations and incurs high computational cost, which prevents a systematic evaluation of the uncertainty in the presence of complex physical processes. To avoid this computational bottleneck, we propose to use an approximate model (proxy) to predict the response of the exact model. Here, we use a proxy that entails a very simplified description of the physics with respect to the detailed physics described by the "exact" model. The error model accounts for the simplification of the physical process; and it is trained on a learning set of realizations, for which both the proxy and exact responses are computed. First, the key features of the set of curves are extracted using functional principal component analysis; then, a regression model is built to characterize the relationship between the curves. The performance of the proposed approach is evaluated on the Imperial College Fault model. We show that the joint use of the proxy and the error model to infer the model parameters in a two-stage MCMC set-up allows longer chains at a comparable computational cost. Unnecessary evaluations of the exact responses are avoided through a preliminary evaluation of the proposal made on the basis of the corrected proxy response. The error model trained on the learning set is crucial to provide a sufficiently accurate prediction of the exact response and guide the chains to the low misfit regions. The proposed methodology can be extended to multiple-chain algorithms or other Bayesian inference methods. Moreover, FPCA is not limited to the specific presented application and offers a general framework to build error models.

  11. Error Models of the Analog to Digital Converters

    OpenAIRE

    Michaeli Linus; Šaliga Ján

    2014-01-01

    Error models of the Analog to Digital Converters describe metrological properties of the signal conversion from analog to digital domain in a concise form using a few dominant error parameters. Knowledge of the error models allows the end user to provide fast testing in the crucial points of the full input signal range and to use identified error models for post correction in the digital domain. The imperfections of the internal ADC structure determine the error characteristics represented by t...

  12. Hybrid Models for Trajectory Error Modelling in Urban Environments

    Science.gov (United States)

    Angelatsa, E.; Parés, M. E.; Colomina, I.

    2016-06-01

    This paper tackles the first step of any strategy aiming to improve the trajectory of terrestrial mobile mapping systems in urban environments. We present an approach to model the error of terrestrial mobile mapping trajectories, combining deterministic and stochastic models. Because of the specific urban environment, the deterministic component is modelled with non-continuous functions composed of linear shifts, drifts or polynomial functions. In addition, we introduce a stochastic error component for modelling the residual noise of the trajectory error function. The first step in error modelling is to know the actual trajectory error values for several representative environments. In order to determine the trajectory errors as accurately as possible, (almost) error-free reference trajectories should be estimated using non-semantic features extracted from a sequence of images collected with the terrestrial mobile mapping system and from a full set of ground control points. Once the references are estimated, they are used to determine the actual errors in the terrestrial mobile mapping trajectory. The rigorous analysis of these data sets allows us to characterize the errors of a terrestrial mobile mapping system for a wide range of environments. This information will be of great use in future campaigns to improve the results of 3D point cloud generation. The proposed approach has been evaluated using real data. The data originate from a mobile mapping campaign over an urban and controlled area of Dortmund (Germany), with harmful GNSS conditions. The mobile mapping system, which includes two laser scanners and two cameras, was mounted on a van and driven over a controlled area for around three hours. The results show the suitability of decomposing the trajectory error into non-continuous deterministic and stochastic components.

  13. Analysis of error-prone survival data under additive hazards models: measurement error effects and adjustments.

    Science.gov (United States)

    Yan, Ying; Yi, Grace Y

    2016-07-01

    Covariate measurement error occurs commonly in survival analysis. Under the proportional hazards model, measurement error effects have been well studied, and various inference methods have been developed to correct for error effects under such a model. In contrast, error-contaminated survival data under the additive hazards model have received relatively less attention. In this paper, we investigate this problem by exploring measurement error effects on parameter estimation and the change of the hazard function. New insights into measurement error effects are revealed, as opposed to well-documented results for the Cox proportional hazards model. We propose a class of bias correction estimators that embraces certain existing estimators as special cases. In addition, we exploit the regression calibration method to reduce measurement error effects. Theoretical results for the developed methods are established, and numerical assessments are conducted to illustrate the finite sample performance of our methods.

  14. Estimating model error covariances in nonlinear state-space models using Kalman smoothing and the expectation-maximisation algorithm

    KAUST Repository

    Dreano, D.

    2017-04-05

    Specification and tuning of errors from dynamical models are important issues in data assimilation. In this work, we propose an iterative expectation-maximisation (EM) algorithm to estimate the model error covariances using classical extended and ensemble versions of the Kalman smoother. We show that, for additive model errors, the estimate of the error covariance converges. We also investigate other forms of model error, such as parametric or multiplicative errors. We show that additive Gaussian model error is able to compensate for non-additive sources of error in the algorithms we propose. We also demonstrate the limitations of the extended version of the algorithm and recommend the use of the more robust and flexible ensemble version. This article is a proof of concept of the methodology with the Lorenz-63 attractor. We developed an open-source Python library to enable future users to apply the algorithm to their own nonlinear dynamical models.
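
    A minimal linear-Gaussian sketch of the EM idea follows; it is my own illustration, not the paper's code, which works with extended and ensemble Kalman smoothers on the Lorenz-63 system. The E-step runs a Kalman (RTS) smoother with the current guess of the model error variance q, and the M-step re-estimates q from the smoothed moments; the scalar state-space model and its parameter values are assumptions for the example.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy scalar state-space model: x_k = a*x_{k-1} + w_k,  y_k = x_k + v_k.
a, q_true, r, T = 0.9, 0.5, 1.0, 400
x, y = np.zeros(T), np.zeros(T)
for k in range(1, T):
    x[k] = a * x[k - 1] + rng.normal(scale=np.sqrt(q_true))
    y[k] = x[k] + rng.normal(scale=np.sqrt(r))

def kalman_smoother(q):
    """Kalman filter plus RTS smoother; returns smoothed means, variances and
    lag-one smoothed cross-covariances."""
    xf, pf = np.zeros(T), np.zeros(T)      # filtered mean / variance
    xp, pp = np.zeros(T), np.zeros(T)      # predicted mean / variance
    pf[0] = 1.0
    for k in range(1, T):
        xp[k] = a * xf[k - 1]
        pp[k] = a * a * pf[k - 1] + q
        gain = pp[k] / (pp[k] + r)
        xf[k] = xp[k] + gain * (y[k] - xp[k])
        pf[k] = (1.0 - gain) * pp[k]
    xs, ps, cs = xf.copy(), pf.copy(), np.zeros(T)
    for k in range(T - 2, -1, -1):
        J = pf[k] * a / pp[k + 1]
        xs[k] = xf[k] + J * (xs[k + 1] - xp[k + 1])
        ps[k] = pf[k] + J * J * (ps[k + 1] - pp[k + 1])
        cs[k + 1] = J * ps[k + 1]          # Cov(x_{k+1}, x_k | all data)
    return xs, ps, cs

# EM iterations: the M-step re-estimates q from the smoothed statistics.
q = 2.0                                    # deliberately poor initial guess
for _ in range(30):
    xs, ps, cs = kalman_smoother(q)
    e2 = (ps[1:] + xs[1:] ** 2
          - 2 * a * (cs[1:] + xs[1:] * xs[:-1])
          + a * a * (ps[:-1] + xs[:-1] ** 2))
    q = e2.mean()                          # E[(x_k - a*x_{k-1})^2 | all data]

print(f"estimated model error variance q ≈ {q:.3f} (truth {q_true})")
```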

  15. Modeling human response errors in synthetic flight simulator domain

    Science.gov (United States)

    Ntuen, Celestine A.

    1992-01-01

    This paper presents a control theoretic approach to modeling human response errors (HRE) in the flight simulation domain. The human pilot is modeled as a supervisor of a highly automated system. The synthesis uses the theory of optimal control pilot modeling for integrating the pilot's observation error and the error due to the simulation model (experimental error). Methods for solving the HRE problem are suggested. The models will be verified experimentally in a flight quality handling simulation.

  16. System modeling based measurement error analysis of digital sun sensors

    Institute of Scientific and Technical Information of China (English)

    WEI Minsong; XING Fei; WANG Geng; YOU Zheng

    2015-01-01

    Stringent attitude determination accuracy is required for the development of advanced space technologies, and thus the accuracy of digital sun sensors must be improved. In this paper, we present a proposal for measurement error analysis of a digital sun sensor. A system model including three different error sources was built and employed for system error analysis. Numerical simulations were also conducted to study the measurement error introduced by the different error sources. Based on our model and study, the system errors from different error sources are coupled, and the system calibration should be elaborately designed to realize a digital sun sensor with extra-high accuracy.

  17. A statistical model for point-based target registration error with anisotropic fiducial localizer error.

    Science.gov (United States)

    Wiles, Andrew D; Likholyot, Alexander; Frantz, Donald D; Peters, Terry M

    2008-03-01

    Error models associated with point-based medical image registration problems were first introduced in the late 1990s. The concepts of fiducial localizer error, fiducial registration error, and target registration error are commonly used in the literature. The model for estimating the target registration error at a position r in a coordinate frame defined by a set of fiducial markers rigidly fixed relative to one another is ubiquitous in the medical imaging literature. The model has also been extended to simulate the target registration error at the point of interest in optically tracked tools. However, the model is limited to describing the error in situations where the fiducial localizer error is assumed to have an isotropic normal distribution in R^3. In this work, the model is generalized to include a fiducial localizer error that has an anisotropic normal distribution. Similar to the previous models, the root mean square statistic RMS(TRE) is provided along with an extension that provides the covariance Σ(TRE). The new model is verified using a Monte Carlo simulation and a set of statistical hypothesis tests. Finally, the differences between the two assumptions, isotropic and anisotropic, are discussed within the context of their use in 1) optical tool tracking simulation and 2) image registration.
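
    The Monte Carlo verification step can be sketched directly: the fiducials are perturbed with anisotropic Gaussian noise, the rigid registration is recovered (here with the SVD/Kabsch solution), and the displacement of the target is accumulated. The fiducial layout, the target position and the noise covariance below are arbitrary assumptions, not the configurations used in the paper.

```python
import numpy as np

rng = np.random.default_rng(3)

fiducials = np.array([[0., 0., 0.], [100., 0., 0.], [0., 100., 0.], [0., 0., 100.]])
target = np.array([50., 50., 200.])            # point of interest
fle_cov = np.diag([0.05, 0.05, 0.45]) ** 2     # anisotropic FLE covariance (mm^2)

def rigid_fit(src, dst):
    """Least-squares rigid transform (Kabsch/Procrustes) mapping src to dst."""
    sc, dc = src.mean(axis=0), dst.mean(axis=0)
    U, _, Vt = np.linalg.svd((src - sc).T @ (dst - dc))
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R, dc - R @ sc

n_trials, sq_err = 20000, 0.0
L = np.linalg.cholesky(fle_cov)
for _ in range(n_trials):
    noisy = fiducials + rng.normal(size=fiducials.shape) @ L.T
    R, t = rigid_fit(fiducials, noisy)         # registration from noisy fiducials
    sq_err += np.sum((R @ target + t - target) ** 2)

print(f"RMS target registration error ≈ {np.sqrt(sq_err / n_trials):.3f} mm")
```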

  18. Methods and apparatus using commutative error detection values for fault isolation in multiple node computers

    Science.gov (United States)

    Almasi, Gheorghe [Ardsley, NY; Blumrich, Matthias Augustin [Ridgefield, CT; Chen, Dong [Croton-On-Hudson, NY; Coteus, Paul [Yorktown, NY; Gara, Alan [Mount Kisco, NY; Giampapa, Mark E [Irvington, NY; Heidelberger, Philip [Cortlandt Manor, NY; Hoenicke, Dirk I [Ossining, NY; Singh, Sarabjeet [Mississauga, CA; Steinmacher-Burow, Burkhard D [Wernau, DE; Takken, Todd [Brewster, NY; Vranas, Pavlos [Bedford Hills, NY

    2008-06-03

    Methods and apparatus perform fault isolation in multiple node computing systems using commutative error detection values--for example, checksums--to identify and to isolate faulty nodes. When information associated with a reproducible portion of a computer program is injected into a network by a node, a commutative error detection value is calculated. At intervals, node fault detection apparatus associated with the multiple node computer system retrieves commutative error detection values associated with the node and stores them in memory. When the computer program is executed again by the multiple node computer system, new commutative error detection values are created and stored in memory. The node fault detection apparatus identifies faulty nodes by comparing commutative error detection values associated with reproducible portions of the application program generated by a particular node from different runs of the application program. Differences in values indicate a possible faulty node.
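
    The comparison step lends itself to a small illustration (my own sketch of the idea, not the patented apparatus): each node records a commutative detection value for every reproducible program section, and values from two runs are compared, with any mismatch flagging the node as suspect. The XOR-of-CRC32 checksum and the node/section layout are illustrative choices.

```python
import zlib

def section_checksum(messages):
    """Commutative detection value for the traffic a node injects in one
    reproducible section: XOR of per-message CRC32s, so message order
    (which may differ between runs) does not affect the value."""
    value = 0
    for msg in messages:
        value ^= zlib.crc32(msg)
    return value

def faulty_nodes(run_a, run_b):
    """Compare stored per-node, per-section values from two runs of the same
    program and report nodes whose values differ anywhere."""
    suspects = set()
    for node in run_a:
        for section, value in run_a[node].items():
            if run_b[node].get(section) != value:
                suspects.add(node)
    return suspects

# Illustrative data: node 2 corrupts one message in the second run.
run1 = {n: {"phase1": section_checksum([b"msg-a", b"msg-b"])} for n in range(4)}
run2 = {n: {"phase1": section_checksum([b"msg-b", b"msg-a"])} for n in range(4)}
run2[2]["phase1"] = section_checksum([b"msg-a", b"msg-X"])

print("suspect nodes:", faulty_nodes(run1, run2))   # -> {2}
```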

  19. Sampling errors in rainfall estimates by multiple satellites

    Science.gov (United States)

    North, Gerald R.; Shen, Samuel S. P.; Upson, Robert

    1993-01-01

    This paper examines the sampling characteristics of combining data collected by several low-orbiting satellites attempting to estimate the space-time average of rain rates. The several satellites can have different orbital and swath-width parameters. The satellite overpasses are allowed to make partial coverage snapshots of the grid box with each overpass. Such partial visits are considered in an approximate way, letting each intersection area fraction of the grid box by a particular satellite swath be a random variable with mean and variance parameters computed from exact orbit calculations. The derivation procedure is based upon the spectral minimum mean-square error formalism introduced by North and Nakamoto. By using a simple parametric form for the spacetime spectral density, simple formulas are derived for a large number of examples, including the combination of the Tropical Rainfall Measuring Mission with an operational sun-synchronous orbiter. The approximations and results are discussed and directions for future research are summarized.

  20. Multiple Overimputation to Address Missing Data and Measurement Error: Application to HIV Treatment During Pregnancy and Pregnancy Outcomes.

    Science.gov (United States)

    Bengtson, Angela M; Westreich, Daniel; Musonda, Patrick; Pettifor, Audrey; Chibwesha, Carla; Chi, Benjamin H; Vwalika, Bellington; Pence, Brian W; Stringer, Jeffrey S A; Miller, William C

    2016-09-01

    Investigations of the association of combination antiretroviral therapy (ART) with pregnancy outcomes often rely on routinely collected clinical data, which are prone to missing data and measurement error. Measurement error in gestational age may bias the relation between combination ART and gestational age-based outcomes. We demonstrate the use of multiple overimputation to address missing data and measurement error in gestational age. Using routinely collected clinical data from public health facilities in Lusaka, Zambia, we multiply imputed missing data and multiply overimputed observed values of gestational age. Poisson models with robust variance estimators were used to estimate risk ratios (RRs) for the associations of duration of combination ART with small for gestational age (SGA) and preterm birth. We compared results from a complete-case analysis, using multiple imputation to address missing data only and using multiple overimputation to address missing data and measurement error. In the complete-case analysis, there was no evidence of an association between duration of combination ART and SGA or preterm birth. When we performed multiple overimputation, RRs for SGA moved past the null, but remained imprecise. For preterm birth, RRs for 9-32 weeks of combination ART moved away from the null as the variance due to measurement error increased. When we used multiple overimputation to account for measurement error and missing data, we observed an increased risk of preterm birth with longer duration of combination ART. Future analyses examining associations between combination ART and pregnancy outcomes should consider using multiple overimputation to address measurement error in gestational age.
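
    The overimputation step can be sketched as follows (a simplified illustration, not the authors' analysis): in each of m data sets, every observed gestational age is replaced by a draw centred on the recorded value with a variance reflecting the assumed measurement error, missing values are imputed, the analysis is refit, and the m estimates are pooled with Rubin's rules. The synthetic data, the crude risk-difference estimator and the assumed error variance are illustrative assumptions; the paper fits Poisson models with robust variance estimators.

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic routine data: recorded gestational age (weeks) and ART duration group.
n = 2000
long_art = rng.integers(0, 2, size=n)                  # 1 = longer ART duration
true_ga = rng.normal(38.5 - 0.4 * long_art, 2.0, n)    # latent true gestational age
recorded_ga = true_ga + rng.normal(0, 1.5, n)           # measurement error in records
recorded_ga[rng.uniform(size=n) < 0.10] = np.nan        # ~10% missing

me_var = 1.5 ** 2         # assumed measurement error variance (weeks^2)
m = 20                    # number of overimputed data sets
estimates, variances = [], []
for _ in range(m):
    ga = recorded_ga.copy()
    obs = ~np.isnan(ga)
    # Overimpute observed values: draw around the recorded value.
    ga[obs] = rng.normal(ga[obs], np.sqrt(me_var))
    # Impute missing values from the observed-data distribution (simplistic).
    ga[~obs] = rng.normal(np.nanmean(recorded_ga), np.nanstd(recorded_ga), (~obs).sum())
    preterm = (ga < 37).astype(float)
    # Crude risk difference for preterm birth by ART group, with its variance.
    p1, p0 = preterm[long_art == 1].mean(), preterm[long_art == 0].mean()
    n1, n0 = (long_art == 1).sum(), (long_art == 0).sum()
    estimates.append(p1 - p0)
    variances.append(p1 * (1 - p1) / n1 + p0 * (1 - p0) / n0)

# Rubin's rules: pooled estimate and total variance.
q_bar = np.mean(estimates)
total_var = np.mean(variances) + (1 + 1 / m) * np.var(estimates, ddof=1)
print(f"pooled risk difference {q_bar:.3f} (SE {np.sqrt(total_var):.3f})")
```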

  1. Correction of placement error in EBL using model based method

    Science.gov (United States)

    Babin, Sergey; Borisov, Sergey; Militsin, Vladimir; Komagata, Tadashi; Wakatsuki, Tetsuro

    2016-10-01

    The main source of placement error in maskmaking using an electron beam is charging. DISPLACE software provides a method to correct placement errors for any layout, based on a physical model. The charging of the photomask and multiple discharge mechanisms are simulated to find the charge distribution over the mask. The beam deflection is calculated for each location on the mask, creating data for the placement correction. The software considers the mask layout, EBL system setup, resist, and writing order, as well as other factors such as fogging and proximity effect correction. The output of the software is the data for placement correction. Unknown physical parameters such as fogging can be found from calibration experiments. A test layout on a single calibration mask was used to calibrate the physical parameters used in the correction model. The extracted model parameters were used to verify the correction. As an ultimate test of the correction, a sophisticated layout that was very different from the calibration mask was used for verification. The placement correction results were predicted by DISPLACE, and the mask was fabricated and measured. A good correlation of the measured and predicted values of the correction over the whole mask with the complex pattern confirmed the high accuracy of the charging placement error correction.

  2. Cognitive modelling of pilot errors and error recovery in flight management tasks

    NARCIS (Netherlands)

    Lüdtke, A.; Osterloh, J.P.; Mioch, T.; Rister, F.; Looije, R.

    2009-01-01

    This paper presents a cognitive modelling approach to predict pilot errors and error recovery during the interaction with aircraft cockpit systems. The model allows execution of flight procedures in a virtual simulation environment and production of simulation traces. We present traces for the inter

  3. Radiation risk estimation based on measurement error models

    CERN Document Server

    Masiuk, Sergii; Shklyar, Sergiy; Chepurny, Mykola; Likhtarov, Illya

    2017-01-01

    This monograph discusses statistics and risk estimates applied to radiation damage under the presence of measurement errors. The first part covers nonlinear measurement error models, with a particular emphasis on efficiency of regression parameter estimators. In the second part, risk estimation in models with measurement errors is considered. Efficiency of the methods presented is verified using data from radio-epidemiological studies.

  4. VOLUMETRIC ERROR COMPENSATION IN FIVE-AXIS CNC MACHINING CENTER THROUGH KINEMATICS MODELING OF GEOMETRIC ERROR

    Directory of Open Access Journals (Sweden)

    Pooyan Vahidi Pashsaki

    2016-06-01

    Full Text Available Accuracy of a five-axis CNC machine tool is affected by a vast number of error sources. This paper investigates volumetric error modeling and its compensation as the basis for creating new tool paths to improve workpiece accuracy. The volumetric error model of a five-axis machine tool with the configuration RTTTR (tilting head B-axis and rotary table A΄ on the workpiece side) was set up taking into consideration rigid body kinematics and homogeneous transformation matrices, in which 43 error components are included. The volumetric error comprises 43 error components that can each reduce the geometrical and dimensional accuracy of workpieces. The machining accuracy of the workpiece is governed by the position of the cutting tool center point (TCP) relative to the workpiece. When the cutting tool deviates from its ideal position relative to the workpiece, a machining error results. The compensation process consists of detecting the present tool path and analysing the geometrical error of the RTTTR five-axis CNC machine tool, translating current component positions to compensated positions using the kinematic error model, converting the newly created components to new tool paths using the compensation algorithms, and finally editing the old G-codes using a G-code generator algorithm.

  5. Modelling Soft Error Probability in Firmware: A Case Study

    Directory of Open Access Journals (Sweden)

    DG Kourie

    2012-06-01

    Full Text Available This case study involves an analysis of firmware that controls explosions in mining operations. The purpose is to estimate the probability that external disruptive events (such as electromagnetic interference) could drive the firmware into a state which results in an unintended explosion. Two probabilistic models are built, based on two possible types of disruptive events: a single spike of interference, and a burst of multiple spikes of interference. The models suggest that the system conforms to the IEC 61508 Safety Integrity Levels, even under very conservative assumptions of operation. The case study serves as a platform for future researchers to build on when probabilistically modelling soft errors in other contexts.

  6. Probe Error Modeling Research Based on Bayesian Network

    Institute of Scientific and Technical Information of China (English)

    Wu Huaiqiang; Xing Zilong; Zhang Jian; Yan Yan

    2015-01-01

    Probe calibration is carried out under specific conditions; most of the error caused by changes in the speed parameter has not been corrected. In order to reduce the influence of measuring error on measurement accuracy, this article analyzes the relationship between the speed parameter and probe error and uses a Bayesian network to establish a model of the probe error. The model takes account of prior knowledge and sample data; as data are updated, it can reflect changes in the probe errors and constantly revise the modeling results.

  7. Deterministic treatment of model error in geophysical data assimilation

    CERN Document Server

    Carrassi, Alberto

    2015-01-01

    This chapter describes a novel approach for the treatment of model error in geophysical data assimilation. In this method, model error is treated as a deterministic process fully correlated in time. This allows for the derivation of the evolution equations for the relevant moments of the model error statistics required in data assimilation procedures, along with an approximation suitable for application to large numerical models typical of environmental science. In this contribution we first derive the equations for the model error dynamics in the general case, and then for the particular situation of parametric error. We show how this deterministic description of the model error can be incorporated in sequential and variational data assimilation procedures. A numerical comparison with standard methods is given using low-order dynamical systems, prototypes of atmospheric circulation, and a realistic soil model. The deterministic approach proves to be very competitive with only minor additional computational c...

  8. Error Models of the Analog to Digital Converters

    Science.gov (United States)

    Michaeli, Linus; Šaliga, Ján

    2014-04-01

    Error models of the Analog to Digital Converters describe metrological properties of the signal conversion from analog to digital domain in a concise form using a few dominant error parameters. Knowledge of the error models allows the end user to provide fast testing at the crucial points of the full input signal range and to use identified error models for post correction in the digital domain. The imperfections of the internal ADC structure determine the error characteristics represented by the nonlinearities as a function of the output code. Progress in microelectronics and missing information about circuit details, together with the lack of knowledge about interfering effects caused by ADC installation, favour another modeling approach based on the input-output behavioral characterization by the input-output error box. Internal links in the ADC structure mean that the input-output error function can be described in a concise form by a suitable function. The modeled functional parameters allow determining the integral error parameters of the ADC. The paper is a survey of error models starting from the structural models for the most common architectures and their linkage with the behavioral models represented by a simple look-up table or the functional description of nonlinear errors for the output codes.

  9. Error Models of the Analog to Digital Converters

    Directory of Open Access Journals (Sweden)

    Michaeli Linus

    2014-04-01

    Full Text Available Error models of the Analog to Digital Converters describe metrological properties of the signal conversion from analog to digital domain in a concise form using a few dominant error parameters. Knowledge of the error models allows the end user to provide fast testing at the crucial points of the full input signal range and to use identified error models for post correction in the digital domain. The imperfections of the internal ADC structure determine the error characteristics represented by the nonlinearities as a function of the output code. Progress in microelectronics and missing information about circuit details, together with the lack of knowledge about interfering effects caused by ADC installation, favour another modeling approach based on the input-output behavioral characterization by the input-output error box. Internal links in the ADC structure mean that the input-output error function can be described in a concise form by a suitable function. The modeled functional parameters allow determining the integral error parameters of the ADC. The paper is a survey of error models starting from the structural models for the most common architectures and their linkage with the behavioral models represented by a simple look-up table or the functional description of nonlinear errors for the output codes.

  10. An error assessment of the kriging based approximation model using a mean square error

    Energy Technology Data Exchange (ETDEWEB)

    Ju, Byeong Hyeon; Cho, Tae Min; Lee, Byung Chai [Korea Advanced Institute of Science and Technology, Daejeon (Korea, Republic of); Jung, Do Hyun [Korea Automotive Technology Institute, Chonan (Korea, Republic of)

    2006-08-15

    A Kriging model is a sort of approximation model and is used as a deterministic model of a computationally expensive analysis or simulation. Although it has various advantages, it is difficult to assess the accuracy of the approximated model. It is generally known that the Mean Square Error (MSE) obtained from the kriging model cannot provide statistically exact error bounds, in contrast to a response surface method, so cross validation is mainly used. But cross validation also has many uncertainties. Moreover, cross validation cannot be used when a maximum error is required in the given region. To solve this problem, we first proposed a modified mean square error which can consider relative errors. Using the modified mean square error, we developed the strategy of adding a new sample at the location where the MSE is largest when the MSE is used for the assessment of the kriging model. Finally, we offer guidelines for the use of the MSE obtained from the kriging model. Four test problems show that the proposed strategy is a proper method which can assess the accuracy of the kriging model. Based on the results of the four test problems, a convergence coefficient of 0.01 is recommended for an exact function approximation.
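
    The sample-adding strategy can be illustrated with a small adaptive loop: a Kriging (Gaussian process) surrogate is fitted, its predictive mean square error is evaluated over the region, and the next sample is placed where that error is largest. The one-dimensional test function, the scikit-learn surrogate and the stopping tolerance are illustrative assumptions; the paper's modified MSE additionally accounts for relative errors.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def expensive_fn(x):                        # stand-in for a costly simulation
    return np.sin(3 * x) + 0.5 * x

rng = np.random.default_rng(5)
X = rng.uniform(0, 4, size=(4, 1))          # small initial design
y = expensive_fn(X).ravel()
grid = np.linspace(0, 4, 400).reshape(-1, 1)

for it in range(10):
    gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0), normalize_y=True)
    gp.fit(X, y)
    mean, std = gp.predict(grid, return_std=True)
    mse = std ** 2                          # Kriging predictive MSE over the region
    if mse.max() < 1e-4:                    # accuracy judged acceptable
        break
    x_new = grid[np.argmax(mse)]            # add a sample where the MSE peaks
    X = np.vstack([X, x_new[None, :]])
    y = np.append(y, expensive_fn(x_new)[0])

print(f"{len(X)} samples, max predictive MSE {mse.max():.2e}")
```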

  11. Error Model of Curves in GIS and Digitization Experiment

    Institute of Scientific and Technical Information of China (English)

    GUO Tongde; WANG Jiayao; WANG Guangxia

    2006-01-01

    A stochastic error process of curves is proposed as the error model to describe the errors of curves in GIS. In terms of the stochastic process, four characteristics concerning the local error of curves, namely the mean error function, standard error function, absolute error function, and the correlation function of errors, are put forward. The total error of a curve is expressed by a mean square integral of the stochastic error process. The probabilistic and geometric meanings of the characteristics mentioned above are also discussed. A scan digitization experiment is designed to check the efficiency of the model. In the experiment, a piece of contour line is digitized more than 100 times and a large number of sample functions are derived. Finally, all the error characteristics are estimated on the basis of the sample functions. The experimental results show that the systematic error in digitized map data is not negligible, and that the errors of points on curves depend chiefly on the curvature and the concavity of the curves.

  12. Error rate information in attention allocation pilot models

    Science.gov (United States)

    Faulkner, W. H.; Onstott, E. D.

    1977-01-01

    The Northrop urgency decision pilot model was used in a command tracking task to compare the optimized performance of multiaxis attention allocation pilot models whose urgency functions were (1) based on tracking error alone, and (2) based on both tracking error and error rate. A matrix of system dynamics and command inputs was employed to create both symmetric and asymmetric two-axis compensatory tracking tasks. All tasks were single loop on each axis. Analysis showed that a model that allocates control attention through nonlinear urgency functions using only error information could not match the performance of the full model, whose attention shifting algorithm included both error and error rate terms. Subsequent to this analysis, tracking performance predictions for the full model were verified by piloted flight simulation. Complete model and simulation data are presented.

  13. Multiplicative LSTM for sequence modelling

    OpenAIRE

    Krause, Ben; Lu, Liang; Murray, Iain; Renals, Steve

    2016-01-01

    This paper introduces multiplicative LSTM, a novel hybrid recurrent neural network architecture for sequence modelling that combines the long short-term memory (LSTM) and multiplicative recurrent neural network architectures. Multiplicative LSTM is motivated by its flexibility to have very different recurrent transition functions for each possible input, which we argue helps make it more expressive in autoregressive density estimation. We show empirically that multiplicative LSTM outperforms ...

  14. Performance analysis of FXLMS algorithm with secondary path modeling error

    Institute of Scientific and Technical Information of China (English)

    SUN Xu; CHEN Duanshi

    2003-01-01

    Performance analysis of the filtered-X LMS (FXLMS) algorithm with secondary path modeling error is carried out in both the time and frequency domains. It is shown first that the effects of secondary path modeling error on the performance of the FXLMS algorithm are determined by how the relative error of the secondary path model is distributed over frequency. If the relative error is uniformly distributed, the secondary path modeling error has no effect on the performance of the algorithm. In addition, a limitation property of the FXLMS algorithm is proved, which implies that the negative effects of secondary path modeling error can be compensated for by increasing the adaptive filter length. Finally, some insights into the "spillover" phenomenon of the FXLMS algorithm are given.
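
    A minimal FXLMS loop is sketched below to make the role of the secondary path model explicit: the reference signal is filtered through the secondary path estimate before driving the LMS update, so any mismatch between the model and the true path is precisely the modeling error discussed above. The paths, the roughly 10% model error, the filter length and the step size are toy assumptions.

```python
import numpy as np

rng = np.random.default_rng(6)

n = 20000
x = rng.normal(size=n)                       # reference (noise source) signal
p = np.array([0.8, 0.4, 0.2])                # primary path (unknown to the controller)
s = np.array([0.6, 0.3])                     # true secondary path
s_hat = np.array([0.66, 0.27])               # secondary path model with ~10% error

d = np.convolve(x, p)[:n]                    # disturbance at the error microphone
L = 16                                       # adaptive (control) filter length
w = np.zeros(L)
mu = 0.002
xbuf = np.zeros(L)                           # reference history for the control filter
ybuf = np.zeros(len(s))                      # control output history (true path)
fxbuf = np.zeros(L)                          # filtered-reference history for the update
xsbuf = np.zeros(len(s_hat))                 # reference history for the path model
errs = np.zeros(n)

for k in range(n):
    xbuf = np.roll(xbuf, 1); xbuf[0] = x[k]
    y = w @ xbuf                             # anti-noise sample
    ybuf = np.roll(ybuf, 1); ybuf[0] = y
    e = d[k] + s @ ybuf                      # residual error at the microphone
    xsbuf = np.roll(xsbuf, 1); xsbuf[0] = x[k]
    fx = s_hat @ xsbuf                       # reference filtered by the path *model*
    fxbuf = np.roll(fxbuf, 1); fxbuf[0] = fx
    w -= mu * e * fxbuf                      # FXLMS coefficient update
    errs[k] = e

print("mean squared error, first vs last 1000 samples:",
      np.mean(errs[:1000] ** 2), np.mean(errs[-1000:] ** 2))
```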

  15. On the Correspondence between Mean Forecast Errors and Climate Errors in CMIP5 Models

    Energy Technology Data Exchange (ETDEWEB)

    Ma, H. -Y.; Xie, S.; Klein, S. A.; Williams, K. D.; Boyle, J. S.; Bony, S.; Douville, H.; Fermepin, S.; Medeiros, B.; Tyteca, S.; Watanabe, M.; Williamson, D.

    2014-02-01

    The present study examines the correspondence between short- and long-term systematic errors in five atmospheric models by comparing the 16 five-day hindcast ensembles from the Transpose Atmospheric Model Intercomparison Project II (Transpose-AMIP II) for July–August 2009 (short term) to the climate simulations from phase 5 of the Coupled Model Intercomparison Project (CMIP5) and AMIP for the June–August mean conditions of the years of 1979–2008 (long term). Because the short-term hindcasts were conducted with identical climate models used in the CMIP5/AMIP simulations, one can diagnose over what time scale systematic errors in these climate simulations develop, thus yielding insights into their origin through a seamless modeling approach. The analysis suggests that most systematic errors of precipitation, clouds, and radiation processes in the long-term climate runs are present by day 5 in ensemble average hindcasts in all models. Errors typically saturate after few days of hindcasts with amplitudes comparable to the climate errors, and the impacts of initial conditions on the simulated ensemble mean errors are relatively small. This robust bias correspondence suggests that these systematic errors across different models likely are initiated by model parameterizations since the atmospheric large-scale states remain close to observations in the first 2–3 days. However, biases associated with model physics can have impacts on the large-scale states by day 5, such as zonal winds, 2-m temperature, and sea level pressure, and the analysis further indicates a good correspondence between short- and long-term biases for these large-scale states. Therefore, improving individual model parameterizations in the hindcast mode could lead to the improvement of most climate models in simulating their climate mean state and potentially their future projections.

  16. The effect of uncertainty and systematic errors in hydrological modelling

    Science.gov (United States)

    Steinsland, I.; Engeland, K.; Johansen, S. S.; Øverleir-Petersen, A.; Kolberg, S. A.

    2014-12-01

    The aims of hydrological model identification and calibration are to find the best possible set of process parametrizations and parameter values that transform inputs (e.g. precipitation and temperature) to outputs (e.g. streamflow). These models enable us to make predictions of streamflow. Several sources of uncertainty have the potential to hamper a robust model calibration and identification. In order to grasp the interaction between model parameters, inputs and streamflow, it is important to account for both systematic and random errors in inputs (e.g. precipitation and temperature) and streamflow. By random errors we mean errors that are independent from time step to time step, whereas by systematic errors we mean errors that persist for a longer period. Both random and systematic errors are important in the observation and interpolation of precipitation and temperature inputs. Important random errors come from the measurements themselves and from the network of gauges. Important systematic errors originate from the under-catch in precipitation gauges and from unknown spatial trends that are approximated in the interpolation. For streamflow observations, the water level recordings might give random errors, whereas the rating curve contributes mainly a systematic error. In this study we want to answer the question "What is the effect of random and systematic errors in inputs and observed streamflow on estimated model parameters and streamflow predictions?". To answer it, we systematically test the effect of including uncertainties in inputs and streamflow during model calibration and simulation in a distributed HBV model operating on daily time steps for the Osali catchment in Norway. The case study is based on observations with carefully quantified uncertainty, and increased uncertainties and systematic errors are introduced realistically, for example by removing a precipitation gauge from the network. We find that the systematic errors in

  17. The effect of model errors in variational assimilation

    Science.gov (United States)

    Wergen, Werner

    1992-08-01

    A linearized, one-dimensional shallow water model is used to investigate the effect of model errors in four-dimensional variational assimilation. A suitable initialization scheme for variational assimilation is proposed. Introducing deliberate phase speed errors in the model, the results from variational assimilation are compared to standard analysis/forecast cycle experiments. While the latter draws to the data and reflects the model errors only in the data-void areas, variational assimilation with the model used as strong constraint is shown to distribute the model errors over the entire analysis domain. The implications for verification and diagnostics are discussed. Temporal weighting of the observations can reduce the errors towards the end of the assimilation period, but may deteriorate the subsequent forecasts. An extension to variational assimilation is proposed, which seeks not only to determine the initial state from the observations but also some of the tunable parameters of the model. The potential usefulness of this approach for parameterization studies and for a separation of forecast errors into model and analysis errors is discussed. Finally, variational assimilations with the model used as weak constraint are presented. While showing a good performance in the assimilation, forecasts can suffer severely if the extra terms in the equations, up to which the model is enforced, are unable to compensate for the real model error. In the discussion, an overall appraisal of both assimilation methods is given.

  18. NASA Model of "Threat and Error" in Pediatric Cardiac Surgery: Patterns of Error Chains.

    Science.gov (United States)

    Hickey, Edward; Pham-Hung, Eric; Nosikova, Yaroslavna; Halvorsen, Fredrik; Gritti, Michael; Schwartz, Steven; Caldarone, Christopher A; Van Arsdell, Glen

    2017-04-01

    We introduced the National Aeronautics and Space Administration threat-and-error model to our surgical unit. All admissions are considered flights, which should pass through stepwise deescalations in risk during surgical recovery. We hypothesized that errors significantly influence risk deescalation and contribute to poor outcomes. Patient flights (524) were tracked in real time for threats, errors, and unintended states by full-time performance personnel. Expected risk deescalation was wean from mechanical support, sternal closure, extubation, intensive care unit (ICU) discharge, and discharge home. Data were accrued from clinical charts, bedside data, reporting mechanisms, and staff interviews. Infographics of flights were openly discussed weekly for consensus. In 12% (64 of 524) of flights, the child failed to deescalate sequentially through expected risk levels; unintended increments instead occurred. Failed deescalations were highly associated with errors (426; 257 flights; p < 0.0001). Consequential errors (263; 173 flights) were associated with a 29% rate of failed deescalation versus 4% in flights with no consequential error (p < 0.0001). The most dangerous errors were apical errors typically (84%) occurring in the operating room, which caused chains of propagating unintended states (n = 110): these had a 43% (47 of 110) rate of failed deescalation (versus 4%; p < 0.0001). Chains of unintended states were often (46%) amplified by additional (up to 7) errors in the ICU that would worsen clinical deviation. Overall, failed deescalations in risk were extremely closely linked to brain injury (n = 13; p < 0.0001) or death (n = 7; p < 0.0001). Deaths and brain injury after pediatric cardiac surgery almost always occur from propagating error chains that originate in the operating room and are often amplified by additional ICU errors.

  19. Predictive vegetation modeling for conservation: impact of error propagation from digital elevation data.

    Science.gov (United States)

    Van Niel, Kimberly P; Austin, Mike P

    2007-01-01

    The effect of digital elevation model (DEM) error on environmental variables, and subsequently on predictive habitat models, has not been explored. Based on an error analysis of a DEM, multiple error realizations of the DEM were created and used to develop both direct and indirect environmental variables for input to predictive habitat models. The study explores the effects of DEM error and the resultant uncertainty of results on typical steps in the modeling procedure for prediction of vegetation species presence/absence. Results indicate that all of these steps and results, including the statistical significance of environmental variables, shapes of species response curves in generalized additive models (GAMs), stepwise model selection, coefficients and standard errors for generalized linear models (GLMs), prediction accuracy (Cohen's kappa and AUC), and spatial extent of predictions, were greatly affected by this type of error. Error in the DEM can affect the reliability of interpretations of model results and level of accuracy in predictions, as well as the spatial extent of the predictions. We suggest that the sensitivity of DEM-derived environmental variables to error in the DEM should be considered before including them in the modeling processes.

  20. Multiple Bit Error Tolerant Galois Field Architectures Over GF (2m

    Directory of Open Access Journals (Sweden)

    Mahesh Poolakkaparambil

    2012-06-01

    Full Text Available Radiation-induced transient faults like single event upsets (SEUs) and multiple event upsets (MEUs) in memories are well researched. As a result of technology scaling, it is observed that logic blocks are also vulnerable to malfunctioning when they are deployed in radiation-prone environments. However, the current literature lacks efforts to mitigate such issues in digital logic circuits exposed to a natural radiation-prone environment or subjected to malicious attacks by an eavesdropper using highly energized particles. This may lead to catastrophe in critical applications such as widely used cryptographic hardware. In this paper, novel dynamic error correction architectures, based on BCH codes, are proposed for correcting multiple errors, which makes the circuits robust against radiation-induced faults irrespective of the location of the errors. As a benchmark test case, the finite field multiplier circuit is considered as the functional block which can be the target of major attacks. The proposed scheme has the capability to handle stuck-at faults, which are also a major cause of failure affecting the overall yield of a nano-CMOS integrated chip. The experimental results show that the proposed dynamic error detection and correction architecture results in a 50% reduction in critical path delay by dynamically bypassing the error correction logic when no error is present. The area overhead for the larger multiplier is within 150%, which is 33% lower than that of TMR and comparable to the 130% overhead of single-error-correcting Hamming and LDPC based techniques.

  1. Dual Numbers Approach in Multiaxis Machines Error Modeling

    Directory of Open Access Journals (Sweden)

    Jaroslav Hrdina

    2014-01-01

    Full Text Available Multiaxis machine error modeling is set in the context of modern differential geometry and linear algebra. We apply special classes of matrices over dual numbers and propose a generalization of this concept by means of general Weil algebras. We show that the classification of the geometric errors follows directly from the algebraic properties of the matrices over dual numbers, and thus the calculus over dual numbers is the proper tool for the methodology of multiaxis machine error modeling.
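
    To make the dual number machinery concrete, here is a tiny dual number class (my own sketch, not the authors' Weil algebra formalism): arithmetic on a + bε with ε² = 0 propagates first-order error terms automatically, which is the property the kinematic matrices over dual numbers exploit.

```python
class Dual:
    """Dual number a + b*eps with eps**2 = 0; b carries a first-order error term."""
    def __init__(self, a, b=0.0):
        self.a, self.b = a, b
    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.a + other.a, self.b + other.b)
    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        # (a1 + b1*eps)(a2 + b2*eps) = a1*a2 + (a1*b2 + b1*a2)*eps
        return Dual(self.a * other.a, self.a * other.b + self.b * other.a)
    __radd__, __rmul__ = __add__, __mul__
    def __repr__(self):
        return f"{self.a} + {self.b}ε"

# Illustrative use: a nominal axis position of 100 mm with a 0.01 mm geometric
# error, scaled by a nominal gain of 2 that itself carries a 0.002 error.
x = Dual(100.0, 0.01)
g = Dual(2.0, 0.002)
print(x * g)   # real part: nominal result; dual part: combined first-order error
```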

  2. Multiple Linear Regressions by Maximizing the Likelihood under Assumption of Generalized Gauss-Laplace Distribution of the Error.

    Science.gov (United States)

    Jäntschi, Lorentz; Bálint, Donatella; Bolboacă, Sorana D

    2016-01-01

    Multiple linear regression analysis is widely used to link an outcome with predictors for better understanding of the behaviour of the outcome of interest. Usually, under the assumption that the errors follow a normal distribution, the coefficients of the model are estimated by minimizing the sum of squared deviations. A new approach based on maximum likelihood estimation is proposed for finding the coefficients of linear models with two predictors without any restrictive assumptions on the distribution of the errors. The algorithm was developed, implemented, and tested as proof-of-concept using fourteen sets of compounds by investigating the link between activity/property (as outcome) and structural feature information incorporated by molecular descriptors (as predictors). The results on real data demonstrated that in all investigated cases the power of the error differs significantly from the conventional value of two when the Gauss-Laplace distribution is used to relax the restrictive assumption of normally distributed errors. Therefore, the Gauss-Laplace distribution of the error could not be rejected, while the hypothesis that the power of the error from the Gauss-Laplace distribution is normally distributed also failed to be rejected.
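
    The estimation idea can be sketched with a small numerical example (an illustration, not the authors' implementation): the intercept, the two regression coefficients, the scale and the error power are estimated jointly by maximising the generalized Gauss-Laplace (generalized normal) log-likelihood. The synthetic two-predictor data set and the optimiser settings are assumptions.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import gammaln

rng = np.random.default_rng(7)

# Synthetic data with two predictors and heavier-than-normal tails (Laplace errors).
n = 300
X = np.column_stack([np.ones(n), rng.normal(size=n), rng.normal(size=n)])
beta_true = np.array([1.0, 2.0, -0.5])
y = X @ beta_true + rng.laplace(scale=0.7, size=n)

def neg_loglik(params):
    b, log_alpha, log_p = params[:3], params[3], params[4]
    alpha, p = np.exp(log_alpha), np.exp(log_p)     # scale > 0, power > 0
    resid = y - X @ b
    # Generalized Gauss-Laplace (generalized normal) log-density, summed.
    ll = n * (np.log(p) - np.log(2 * alpha) - gammaln(1.0 / p))
    ll -= np.sum((np.abs(resid) / alpha) ** p)
    return -ll

# Start from the ordinary least-squares fit with power fixed at 2 (normal case).
start = np.concatenate([np.linalg.lstsq(X, y, rcond=None)[0], [0.0, np.log(2.0)]])
res = minimize(neg_loglik, start, method="Nelder-Mead",
               options={"maxiter": 20000, "xatol": 1e-8, "fatol": 1e-8})
b_hat, p_hat = res.x[:3], np.exp(res.x[4])
print("coefficients:", np.round(b_hat, 3))
print(f"estimated error power ≈ {p_hat:.2f}  (2 = normal, 1 = Laplace)")
```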

  3. Optical linear algebra processors: noise and error-source modeling.

    Science.gov (United States)

    Casasent, D; Ghosh, A

    1985-06-01

    The modeling of system and component noise and error sources in optical linear algebra processors (OLAPs) is considered, with attention to the frequency-multiplexed OLAP. General expressions are obtained for the output produced as a function of various component errors and noise. A digital simulator for this model is discussed.

  4. Optical linear algebra processors - Noise and error-source modeling

    Science.gov (United States)

    Casasent, D.; Ghosh, A.

    1985-01-01

    The modeling of system and component noise and error sources in optical linear algebra processors (OLAPs) is considered, with attention to the frequency-multiplexed OLAP. General expressions are obtained for the output produced as a function of various component errors and noise. A digital simulator for this model is discussed.

  5. Bayesian modeling growth curves for quail assuming skewness in errors

    Directory of Open Access Journals (Sweden)

    Robson Marcelo Rossi

    2014-06-01

    Full Text Available Assuming normal distributions in data analysis is common in many areas of knowledge. However, other distributions that can model the skewness parameter may be used in situations where data with tails heavier than the normal must be modelled. This article presents alternatives to the assumption of normality of the errors by adding asymmetric distributions. A Bayesian approach is proposed to fit nonlinear models when the errors are not normal; the t, skew-normal and skew-t distributions are adopted. The methodology is applied to different growth curves for quail body weights. It was found that the Gompertz model assuming skew-normal errors and skew-t errors, for males and females respectively, provided the best fit to the data.

  6. Correcting biased observation model error in data assimilation

    CERN Document Server

    Harlim, John

    2016-01-01

    While the formulation of most data assimilation schemes assumes an unbiased observation model error, in real applications, model error with nontrivial biases is unavoidable. A practical example is the error in the radiative transfer model (which is used to assimilate satellite measurements) in the presence of clouds. As a consequence, many (in fact 99%) of the cloudy observed measurements are not being used although they may contain useful information. This paper presents a novel nonparametric Bayesian scheme which is able to learn the observation model error distribution and correct the bias in incoming observations. This scheme can be used in tandem with any data assimilation forecasting system. The proposed model error estimator uses nonparametric likelihood functions constructed with data-driven basis functions based on the theory of kernel embeddings of conditional distributions developed in the machine learning community. Numerically, we show positive results with two examples. The first example is des...

  7. Error Model and Accuracy Calibration of 5-Axis Machine Tool

    Directory of Open Access Journals (Sweden)

    Fangyu Pan

    2013-08-01

    Full Text Available To improve the machining precision and reduce the geometric errors of a 5-axis machine tool, an error model and calibration method are presented in this paper. The error model is built using multi-body system theory and characteristic matrices, which establish the relationship between the cutting tool and the workpiece in theory. Accuracy calibration was difficult to achieve, but with a laser approach (laser interferometer and laser tracker) the errors can be displayed accurately, which benefits later compensation.

  8. Bit Error Rate Performance Analysis on Modulation Techniques of Wideband Code Division Multiple Access

    CERN Document Server

    Masud, M A; Rahman, M A

    2010-01-01

    At the beginning of the 21st century there has been a dramatic shift in the market dynamics of telecommunication services. Downlink transmission from the base station to the mobile using M-ary Quadrature Amplitude Modulation (QAM) and Quadrature Phase Shift Keying (QPSK) modulation schemes is considered in a Wideband Code Division Multiple Access (W-CDMA) system. We have performed the performance analysis of these modulation techniques when the system is subjected to Additive White Gaussian Noise (AWGN) and multipath Rayleigh fading in the channel. The research has been performed using MATLAB 7.6 for simulation and evaluation of the Bit Error Rate (BER) and Signal-to-Noise Ratio (SNR) for W-CDMA system models. The analysis of Quadrature Phase Shift Keying and 16-ary Quadrature Amplitude Modulation, which are used in the wideband code division multiple access system, shows that the system could adopt a more suitable modulation technique to suit the channel quality; thus we can d...
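
    A small Monte Carlo bit error rate comparison in the spirit of the study is sketched below in Python rather than MATLAB: Gray-coded QPSK over an AWGN channel and over flat Rayleigh fading with perfect channel knowledge. The bit counts and Eb/N0 grid are arbitrary, and W-CDMA spreading/despreading is omitted, so this is only an illustrative baseline.

```python
import numpy as np

rng = np.random.default_rng(8)

def qpsk_ber(ebn0_db, n_bits=400_000, rayleigh=False):
    """Monte Carlo BER for Gray-coded QPSK with coherent detection."""
    bits = rng.integers(0, 2, size=n_bits)
    # Map bit pairs to QPSK symbols with unit symbol energy (Es = 2*Eb).
    sym = ((1 - 2 * bits[0::2]) + 1j * (1 - 2 * bits[1::2])) / np.sqrt(2)
    ebn0 = 10 ** (ebn0_db / 10)
    n0 = 1 / (2 * ebn0)                       # Es = 1, so Es/N0 = 2*Eb/N0
    noise = np.sqrt(n0 / 2) * (rng.normal(size=sym.size) + 1j * rng.normal(size=sym.size))
    h = 1.0
    if rayleigh:                              # flat Rayleigh fading, known at receiver
        h = (rng.normal(size=sym.size) + 1j * rng.normal(size=sym.size)) / np.sqrt(2)
    r = h * sym + noise
    z = r / h if rayleigh else r              # ideal channel equalisation
    bhat = np.empty(n_bits, dtype=int)
    bhat[0::2] = (z.real < 0).astype(int)
    bhat[1::2] = (z.imag < 0).astype(int)
    return np.mean(bhat != bits)

for snr in (0, 4, 8, 12):
    print(f"Eb/N0 = {snr:2d} dB  AWGN BER = {qpsk_ber(snr):.4f}  "
          f"Rayleigh BER = {qpsk_ber(snr, rayleigh=True):.4f}")
```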

  9. Improved modeling of multivariate measurement errors based on the Wishart distribution.

    Science.gov (United States)

    Wentzell, Peter D; Cleary, Cody S; Kompany-Zareh, M

    2017-03-22

    The error covariance matrix (ECM) is an important tool for characterizing the errors from multivariate measurements, representing both the variance and covariance in the errors across multiple channels. Such information is useful in understanding and minimizing sources of experimental error and in the selection of optimal data analysis procedures. Experimental ECMs, normally obtained through replication, are inherently noisy, inconvenient to obtain, and offer limited interpretability. Significant advantages can be realized by building a model for the ECM based on established error types. Such models are less noisy, reduce the need for replication, mitigate mathematical complications such as matrix singularity, and provide greater insights. While the fitting of ECM models using least squares has been previously proposed, the present work establishes that fitting based on the Wishart distribution offers a much better approach. Simulation studies show that the Wishart method results in parameter estimates with a smaller variance and also facilitates the statistical testing of alternative models using a parameterized bootstrap method. The new approach is applied to fluorescence emission data to establish the acceptability of various models containing error terms related to offset, multiplicative offset, shot noise and uniform independent noise. The implications of the number of replicates, as well as single vs. multiple replicate sets are also described.
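
    The Wishart-based fitting can be sketched compactly: for Gaussian replicate errors, (n-1)·S follows a Wishart distribution with scale matrix Σ, so the parameters of a structured ECM model Σ(θ) can be estimated by maximising the Wishart log-likelihood of the experimental covariance matrix. The two-term model below (uniform independent noise plus a multiplicative offset term) and the synthetic spectra are illustrative assumptions, not the paper's fluorescence data.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import wishart

rng = np.random.default_rng(9)

# Synthetic replicate "spectra": mean signal + iid noise + multiplicative offset noise.
n_rep, n_chan = 40, 25
signal = np.exp(-0.5 * ((np.arange(n_chan) - 12) / 4.0) ** 2)
sd_iid_true, sd_mult_true = 0.02, 0.05
reps = (signal
        + rng.normal(scale=sd_iid_true, size=(n_rep, n_chan))
        + signal * rng.normal(scale=sd_mult_true, size=(n_rep, 1)))

S = np.cov(reps, rowvar=False)              # experimental error covariance matrix
df = n_rep - 1

def ecm_model(theta):
    """Structured ECM: uniform independent noise + multiplicative offset noise."""
    sd_iid, sd_mult = np.exp(theta)
    return sd_iid ** 2 * np.eye(n_chan) + sd_mult ** 2 * np.outer(signal, signal)

def neg_loglik(theta):
    # (n-1) * S ~ Wishart(n-1, Sigma) under Gaussian replicate errors.
    return -wishart.logpdf(df * S, df=df, scale=ecm_model(theta))

res = minimize(neg_loglik, x0=np.log([0.05, 0.05]), method="Nelder-Mead")
print("estimated sd (iid, multiplicative):", np.round(np.exp(res.x), 4))
```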

  10. Predictive error analysis for a water resource management model

    Science.gov (United States)

    Gallagher, Mark; Doherty, John

    2007-02-01

    In calibrating a model, a set of parameters is assigned to the model which will be employed for the making of all future predictions. If these parameters are estimated through solution of an inverse problem, formulated to be properly posed through either pre-calibration or mathematical regularisation, then solution of this inverse problem will, of necessity, lead to a simplified parameter set that omits the details of reality, while still fitting historical data acceptably well. Furthermore, estimates of parameters so obtained will be contaminated by measurement noise. Both of these phenomena will lead to errors in predictions made by the model, with the potential for error increasing with the hydraulic property detail on which the prediction depends. Integrity of model usage demands that model predictions be accompanied by some estimate of the possible errors associated with them. The present paper applies theory developed in a previous work to the analysis of predictive error associated with a real world, water resource management model. The analysis offers many challenges, including the fact that the model is a complex one that was partly calibrated by hand. Nevertheless, it is typical of models which are commonly employed as the basis for the making of important decisions, and for which such an analysis must be made. The potential errors associated with point-based and averaged water level and creek inflow predictions are examined, together with the dependence of these errors on the amount of averaging involved. Error variances associated with predictions made by the existing model are compared with "optimized error variances" that could have been obtained had calibration been undertaken in such a way as to minimize predictive error variance. The contributions by different parameter types to the overall error variance of selected predictions are also examined.

  11. Testing and Inference in Nonlinear Cointegrating Vector Error Correction Models

    DEFF Research Database (Denmark)

    Kristensen, Dennis; Rahbek, Anders

    In this paper, we consider a general class of vector error correction models which allow for asymmetric and non-linear error correction. We provide asymptotic results for (quasi-)maximum likelihood (QML) based estimators and tests. General hypothesis testing is considered, where testing...... symmetric non-linear error correction are considered. A simulation study shows that the finite sample properties of the bootstrapped tests are satisfactory with good size and power properties for reasonable sample sizes....

  12. A Morphographemic Model for Error Correction in Nonconcatenative Strings

    CERN Document Server

    Bowden, T; Bowden, Tanya; Kiraz, George Anton

    1995-01-01

    This paper introduces a spelling correction system which integrates seamlessly with morphological analysis using a multi-tape formalism. Handling of various Semitic error problems is illustrated, with reference to Arabic and Syriac examples. The model handles errors in vocalisation, diacritics, phonetic syncopation and morphographemic idiosyncrasies, in addition to Damerau errors. A complementary correction strategy for morphologically sound but morphosyntactically ill-formed words is outlined.

  13. Error Types in the Approximate System of Arab Students of English: A Multiple Classificatory Taxonomy

    Science.gov (United States)

    Btoosh, Mousa A.

    2011-01-01

    This study aims at providing a comprehensive account of the types of errors produced by Arab students of English as a second language based on a multiple classificatory taxonomy developed for this purpose. The corpus providing the database for the study consists of three parts: (i) short tape-recorded interviews, (ii) translated sentences and…

  14. FMEA: a model for reducing medical errors.

    Science.gov (United States)

    Chiozza, Maria Laura; Ponzetti, Clemente

    2009-06-01

    Patient safety is a management issue, in view of the fact that clinical risk management has become an important part of hospital management. Failure Mode and Effect Analysis (FMEA) is a proactive technique for error detection and reduction, first introduced within the aerospace industry in the 1960s. Early applications in the health care industry dating back to the 1990s included critical systems in the development and manufacture of drugs and in the prevention of medication errors in hospitals. In 2008, the Technical Committee of the International Organization for Standardization (ISO) licensed a technical specification for medical laboratories suggesting FMEA as a method for prospective risk analysis of high-risk processes. Here we describe the main steps of the FMEA process and review data available on the application of this technique to laboratory medicine. A significant reduction of the risk priority number (RPN) was obtained when applying FMEA to blood cross-matching, to clinical chemistry analytes, as well as to point-of-care testing (POCT).
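
    The core FMEA bookkeeping is simple enough to show directly: each failure mode is scored for severity, occurrence and detectability, the risk priority number is their product, and mitigation effort is focused on the highest values. The failure modes and scores below are invented for illustration and are not taken from the cited laboratory applications.

```python
from dataclasses import dataclass

@dataclass
class FailureMode:
    process_step: str
    failure: str
    severity: int      # 1 (negligible) .. 10 (catastrophic)
    occurrence: int    # 1 (rare) .. 10 (frequent)
    detection: int     # 1 (always detected) .. 10 (undetectable)

    @property
    def rpn(self) -> int:
        """Risk priority number: severity x occurrence x detection."""
        return self.severity * self.occurrence * self.detection

modes = [
    FailureMode("sample labelling", "wrong patient identifier", 9, 3, 4),
    FailureMode("cross-matching", "clerical transcription error", 10, 2, 5),
    FailureMode("POCT glucose", "uncalibrated device used", 6, 4, 3),
]

# Rank failure modes by RPN so mitigation targets the riskiest steps first.
for m in sorted(modes, key=lambda m: m.rpn, reverse=True):
    print(f"RPN {m.rpn:4d}  {m.process_step}: {m.failure}")
```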

  15. Parameter estimation and error analysis in environmental modeling and computation

    Science.gov (United States)

    Kalmaz, E. E.

    1986-01-01

    A method for the estimation of parameters and error analysis in the development of nonlinear modeling for environmental impact assessment studies is presented. The modular computer program can interactively fit different nonlinear models to the same set of data, dynamically changing the error structure associated with observed values. Parameter estimation techniques and sequential estimation algorithms employed in parameter identification and model selection are first discussed. Then, least-square parameter estimation procedures are formulated, utilizing differential or integrated equations, and are used to define a model for association of error with experimentally observed data.

  16. An automatic 3D CAD model errors detection method of aircraft structural part for NC machining

    Directory of Open Access Journals (Sweden)

    Bo Huang

    2015-10-01

    Full Text Available Feature-based NC machining, which requires a high-quality 3D CAD model, is widely used in machining aircraft structural parts. However, there has been little research on how to automatically detect CAD model errors. As a result, the user has to check for errors manually, with great effort, before NC programming. This paper proposes an automatic CAD model error detection approach for aircraft structural parts. First, the base faces are identified based on the reference directions corresponding to machining coordinate systems. Then, the CAD models are partitioned into multiple local regions based on the base faces. Finally, the CAD model error types are evaluated based on heuristic rules. A prototype system based on CATIA has been developed to verify the effectiveness of the proposed approach.

  17. Filtering multiscale dynamical systems in the presence of model error

    CERN Document Server

    Harlim, John

    2013-01-01

    In this review article, we report two important competing data assimilation schemes that were developed in the past 20 years, discuss the current methods that are operationally used in weather forecasting applications, and point out one major challenge in the data assimilation community: utilizing these existing schemes in the presence of model error. The aim of this paper is to provide theoretical guidelines to mitigate model error in practical applications of filtering multiscale dynamical systems with reduced models. This is a prototypical situation in many applications due to the limited ability to resolve the smaller-scale processes as well as the difficulty of modelling the interaction across scales. We present simple examples to point out the importance of accounting for model error when the separation of scales is not apparent. These examples also elucidate the necessity of treating model error as a stochastic process in a nontrivial fashion for optimal filtering, in the sense that the mean and covariance estima...

  18. ASYMPTOTICS OF MEAN TRANSFORMATION ESTIMATORS WITH ERRORS IN VARIABLES MODEL

    Institute of Scientific and Technical Information of China (English)

    CUI Hengjian

    2005-01-01

    This paper addresses estimation and its asymptotics of the mean transformation θ = E[h(X)] of a random variable X based on n i.i.d. observations from the errors-in-variables model Y = X + v, where v is a measurement error with a known distribution and h(·) is a known smooth function. The asymptotics of the deconvolution kernel estimator are given for ordinary smooth error distributions, and those of the expectation extrapolation estimator for normal error distributions. Under some mild regularity conditions, consistency and asymptotic normality are obtained for both types of estimators. Simulations show they have good performance.
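    The simplest special case conveys the idea: for h(x) = x² and zero-mean error with known variance, E[Y²] = E[X²] + σ_v², so a bias-corrected moment estimator is available in closed form. The sketch below only illustrates this special case, not the deconvolution or extrapolation estimators of the paper:

```python
import numpy as np

# Estimate theta = E[h(X)] with h(x) = x**2 from Y = X + v, where
# v ~ N(0, sigma_v**2) is known.  Since E[Y**2] = E[X**2] + sigma_v**2,
# a bias-corrected estimator is mean(Y**2) - sigma_v**2.
rng = np.random.default_rng(2)
n, sigma_v = 5000, 0.8
x = rng.gamma(shape=2.0, scale=1.0, size=n)      # unobserved
y = x + rng.normal(0, sigma_v, size=n)           # observed with error

naive = np.mean(y**2)
corrected = np.mean(y**2) - sigma_v**2
print("true E[X^2] ~", np.mean(x**2))
print("naive:", naive, " bias-corrected:", corrected)
```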

  19. On Network-Error Correcting Convolutional Codes under the BSC Edge Error Model

    CERN Document Server

    Prasad, K

    2010-01-01

    Convolutional network-error correcting codes (CNECCs) are known to provide error correcting capability in acyclic instantaneous networks within the network coding paradigm under small field size conditions. In this work, we investigate the performance of CNECCs under the error model of the network where the edges are assumed to be statistically independent binary symmetric channels, each with the same probability of error $p_e$($0\\leq p_e<0.5$). We obtain bounds on the performance of such CNECCs based on a modified generating function (the transfer function) of the CNECCs. For a given network, we derive a mathematical condition on how small $p_e$ should be so that only single edge network-errors need to be accounted for, thus reducing the complexity of evaluating the probability of error of any CNECC. Simulations indicate that convolutional codes are required to possess different properties to achieve good performance in low $p_e$ and high $p_e$ regimes. For the low $p_e$ regime, convolutional codes with g...

  20. Error Model and Compensation of Bell-Shaped Vibratory Gyro

    Directory of Open Access Journals (Sweden)

    Zhong Su

    2015-09-01

    Full Text Available A bell-shaped vibratory angular velocity gyro (BVG), inspired by the Chinese traditional bell, is a type of axisymmetric shell resonator gyroscope. This paper focuses on the development of an error model and compensation of the BVG. A dynamic equation is first established, based on a study of the BVG working mechanism. This equation is then used to evaluate the relationship between the angular rate output signal and the bell-shaped resonator character, analyze the influence of the main error sources and set up an error model for the BVG. The error sources are classified from the error propagation characteristics, and the compensation method is presented based on the error model. Finally, using the error model and compensation method, the BVG is calibrated experimentally, including rough compensation, temperature and bias compensation, scale factor compensation and noise filtering. The experimentally obtained bias instability improves from 20.5°/h to 4.7°/h, the random walk from 2.8°/h^(1/2) to 0.7°/h^(1/2) and the nonlinearity from 0.2% to 0.03%. Based on the error compensation, it is shown that there is a good linear relationship between the sensing signal and the angular velocity, suggesting that the BVG is a good candidate for the field of low and medium rotational speed measurement.

  1. Effect Of Oceanic Lithosphere Age Errors On Model Discrimination

    Science.gov (United States)

    DeLaughter, J. E.

    2016-12-01

    The thermal structure of the oceanic lithosphere is the subject of a long-standing controversy. Because the thermal structure varies with age, it governs properties such as heat flow, density, and bathymetry with important implications for plate tectonics. Though bathymetry, geoid, and heat flow for young […] geoid, and heat flow data to an inverse model to determine lithospheric structure details. Though inverse models usually include the effect of errors in bathymetry, heat flow, and geoid, they rarely examine the effects of errors in age. This may have the effect of introducing subtle biases into inverse models of the oceanic lithosphere. Because the inverse problem for thermal structure is both ill-posed and ill-conditioned, these overlooked errors may have a greater effect than expected. The problem is further complicated by the non-uniform distribution of age and errors in age estimates; for example, only 30% of the oceanic lithosphere is older than 80 MY and less than 3% is older than 150 MY. To determine the potential strength of such biases, I have used the age and error maps of Mueller et al. (2008) to forward model the bathymetry for half-space and GDH1 plate models. For ages less than 20 MY, both models give similar results. The errors induced by uncertainty in age are relatively large and suggest that, when possible, young lithosphere should be excluded when examining the lithospheric thermal model. As expected, GDH1 bathymetry converges asymptotically on the theoretical result for error-free data at older ages. The resulting uncertainty is nearly as large as that introduced by errors in the other parameters; in the absence of other errors, the models can only be distinguished for ages greater than 80 MY. These results suggest that the problem should be approached with the minimum possible number of variables. For example, examining the direct relationship of geoid to bathymetry or heat flow instead of their relationship to age should reduce uncertainties
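    A sketch of how an age error maps into a depth difference for the two model families. The depth-age coefficients below are approximate values commonly quoted in the literature (e.g. Stein and Stein, 1992, for GDH1) and are assumptions here, not numbers from the abstract:

```python
import numpy as np

# Depth-age relations: a half-space cooling model (depth ~ sqrt(age))
# and a GDH1-like plate model with a young and an old branch.
def halfspace_depth(age_ma):
    return 2500.0 + 350.0 * np.sqrt(age_ma)            # metres

def gdh1_depth(age_ma):
    age_ma = np.asarray(age_ma, dtype=float)
    young = 2600.0 + 365.0 * np.sqrt(age_ma)
    old = 5651.0 - 2473.0 * np.exp(-0.0278 * age_ma)
    return np.where(age_ma <= 20.0, young, old)

ages = np.array([5.0, 40.0, 80.0, 120.0])              # Myr
age_err = 2.0                                           # +/- 2 Myr age uncertainty
for model in (halfspace_depth, gdh1_depth):
    d_err = 0.5 * np.abs(model(ages + age_err) - model(ages - age_err))
    print(model.__name__, "depth error (m) for +/-2 Myr:", np.round(d_err, 1))
# The sqrt(age) slope makes the same age error far more damaging on
# young lithosphere than on old lithosphere.
```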

  2. Deconvolution Estimation in Measurement Error Models: The R Package decon

    Directory of Open Access Journals (Sweden)

    Xiao-Feng Wang

    2011-03-01

    Full Text Available Data from many scientific areas often come with measurement error. Density or distribution function estimation from contaminated data and nonparametric regression with errors in variables are two important topics in measurement error models. In this paper, we present a new software package decon for R, which contains a collection of functions that use the deconvolution kernel methods to deal with the measurement error problems. The functions allow the errors to be either homoscedastic or heteroscedastic. To make the deconvolution estimators computationally more efficient in R, we adapt the fast Fourier transform algorithm for density estimation with error-free data to the deconvolution kernel estimation. We discuss the practical selection of the smoothing parameter in deconvolution methods and illustrate the use of the package through both simulated and real examples.

  3. Multiplicative earthquake likelihood models incorporating strain rates

    Science.gov (United States)

    Rhoades, D. A.; Christophersen, A.; Gerstenberger, M. C.

    2017-01-01

    We examine the potential for strain-rate variables to improve long-term earthquake likelihood models. We derive a set of multiplicative hybrid earthquake likelihood models in which cell rates in a spatially uniform baseline model are scaled using combinations of covariates derived from earthquake catalogue data, fault data, and strain rates for the New Zealand region. Three components of the strain rate estimated from GPS data over the period 1991-2011 are considered: the shear, rotational and dilatational strain rates. The hybrid model parameters are optimised for earthquakes of M 5 and greater over the period 1987-2006 and tested on earthquakes from the period 2012-2015, which is independent of the strain rate estimates. The shear strain rate is overall the most informative individual covariate, as indicated by Molchan error diagrams as well as multiplicative modelling. Most models including strain rates are significantly more informative than the best models excluding strain rates in both the fitting and testing period. A hybrid that combines the shear and dilatational strain rates with a smoothed seismicity covariate is the most informative model in the fitting period, and a simpler model without the dilatational strain rate is the most informative in the testing period. These results have implications for probabilistic seismic hazard analysis and can be used to improve the background model component of medium-term and short-term earthquake forecasting models.
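    A schematic of the multiplicative scaling idea: cell rates from a uniform baseline are multiplied by powers of normalised covariates and renormalised to a fixed total. The functional form, covariates, and exponents below are illustrative assumptions, not the exact parameterisation fitted by Rhoades et al.:

```python
import numpy as np

# Schematic multiplicative hybrid: baseline cell rates scaled by powers
# of covariates (smoothed seismicity, shear strain rate), then
# renormalised so the total expected rate is unchanged.
rng = np.random.default_rng(3)
n_cells = 1000
baseline = np.full(n_cells, 1.0 / n_cells)               # uniform rate density
smoothed_seismicity = rng.lognormal(0.0, 1.0, n_cells)
shear_strain = rng.lognormal(0.0, 1.0, n_cells)

beta = {"seismicity": 0.8, "shear": 0.5}                  # would be fitted in practice
hybrid = baseline * smoothed_seismicity**beta["seismicity"] * shear_strain**beta["shear"]
hybrid *= baseline.sum() / hybrid.sum()                   # keep the total rate fixed
print("max/min hybrid cell rate ratio:", hybrid.max() / hybrid.min())
```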

  4. A Comprehensive Trainable Error Model for Sung Music Queries

    CERN Document Server

    Birmingham, W P; 10.1613/jair.1334

    2011-01-01

    We propose a model for errors in sung queries, a variant of the hidden Markov model (HMM). This is a solution to the problem of identifying the degree of similarity between a (typically error-laden) sung query and a potential target in a database of musical works, an important problem in the field of music information retrieval. Similarity metrics are a critical component of query-by-humming (QBH) applications which search audio and multimedia databases for strong matches to oral queries. Our model comprehensively expresses the types of error or variation between target and query: cumulative and non-cumulative local errors, transposition, tempo and tempo changes, insertions, deletions and modulation. The model is not only expressive, but automatically trainable, or able to learn and generalize from query examples. We present results of simulations, designed to assess the discriminatory potential of the model, and tests with real sung queries, to demonstrate relevance to real-world applications.

  5. Which forcing data errors matter most when modeling seasonal snowpacks?

    Science.gov (United States)

    Raleigh, M. S.; Lundquist, J. D.; Clark, M. P.

    2014-12-01

    High quality forcing data are critical when modeling seasonal snowpacks and snowmelt, but their quality is often compromised due to measurement errors or deficiencies in gridded data products (e.g., spatio-temporal interpolation, empirical parameterizations, or numerical weather model outputs). To assess the relative impact of errors in different meteorological forcings, many studies have conducted sensitivity analyses where errors (e.g., bias) are imposed on one forcing at a time and changes in model output are compared. Although straightforward, this approach only considers simplistic error structures and cannot quantify interactions in different meteorological forcing errors (i.e., it assumes a linear system). Here we employ the Sobol' method of global sensitivity analysis, which allows us to test how co-existing errors in six meteorological forcings (i.e., air temperature, precipitation, wind speed, humidity, incoming shortwave and longwave radiation) impact specific modeled snow variables (i.e., peak snow water equivalent, snowmelt rates, and snow disappearance timing). Using the Sobol' framework across a large number of realizations (>100000 simulations annually at each site), we test how (1) the type (e.g., bias vs. random errors), (2) distribution (e.g., uniform vs. normal), and (3) magnitude (e.g., instrument uncertainty vs. field uncertainty) of forcing errors impact key outputs from a physically based snow model (the Utah Energy Balance). We also assess the role of climate by conducting the analysis at sites in maritime, intermountain, continental, and tundra snow zones. For all outputs considered, results show that (1) biases in forcing data are more important than random errors, (2) the choice of error distribution can enhance the importance of specific forcings, and (3) the level of uncertainty considered dictates the relative importance of forcings. While the relative importance of forcings varied with snow variable and climate, the results broadly
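    The Sobol' machinery can be sketched with a plain NumPy Saltelli-style estimator of first-order indices. The toy "peak SWE" response function and the forcing-bias ranges below are assumptions for illustration only; the study itself runs the Utah Energy Balance snow model:

```python
import numpy as np

# Monte Carlo estimate of first-order Sobol indices (Saltelli 2010
# estimator) for a toy surrogate of peak SWE as a function of forcing
# biases.
rng = np.random.default_rng(4)

def toy_peak_swe(bias):                       # bias columns: T, precip, LW
    t_b, p_b, lw_b = bias.T
    return (1.0 + p_b) * np.exp(-0.5 * t_b) - 0.1 * lw_b - 0.2 * t_b * lw_b

names = ["air temp bias (K)", "precip bias (frac)", "longwave bias (frac)"]
lo = np.array([-2.0, -0.3, -0.2])
hi = np.array([2.0, 0.3, 0.2])

n = 20000
A = lo + (hi - lo) * rng.random((n, 3))
B = lo + (hi - lo) * rng.random((n, 3))
fA, fB = toy_peak_swe(A), toy_peak_swe(B)
var = np.var(np.concatenate([fA, fB]))

for i, name in enumerate(names):
    ABi = A.copy()
    ABi[:, i] = B[:, i]                        # replace one column at a time
    s_i = np.mean(fB * (toy_peak_swe(ABi) - fA)) / var
    print(f"first-order Sobol index, {name}: {s_i:.2f}")
```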

  6. Sensitivity analysis of geometric errors in additive manufacturing medical models.

    Science.gov (United States)

    Pinto, Jose Miguel; Arrieta, Cristobal; Andia, Marcelo E; Uribe, Sergio; Ramos-Grez, Jorge; Vargas, Alex; Irarrazaval, Pablo; Tejos, Cristian

    2015-03-01

    Additive manufacturing (AM) models are used in medical applications for surgical planning, prosthesis design and teaching. For these applications, the accuracy of the AM models is essential. Unfortunately, this accuracy is compromised due to errors introduced by each of the building steps: image acquisition, segmentation, triangulation, printing and infiltration. However, the contribution of each step to the final error remains unclear. We performed a sensitivity analysis comparing errors obtained from a reference with those obtained modifying parameters of each building step. Our analysis considered global indexes to evaluate the overall error, and local indexes to show how this error is distributed along the surface of the AM models. Our results show that the standard building process tends to overestimate the AM models, i.e. models are larger than the original structures. They also show that the triangulation resolution and the segmentation threshold are critical factors, and that the errors are concentrated at regions with high curvatures. Errors could be reduced choosing better triangulation and printing resolutions, but there is an important need for modifying some of the standard building processes, particularly the segmentation algorithms.

  7. Fractionally Integrated Models With ARCH Errors

    OpenAIRE

    Hauser, Michael A.; Kunst, Robert M.

    1993-01-01

    We introduce ARFIMA-ARCH models which simultaneously incorporate fractional differencing and conditional heteroskedasticity. We develop the likelihood function and a numerical estimation procedure for this model class. Two ARCH models - Engle- and Weiss-type - are explicitly treated and stationarity conditions are derived. Finite-sample properties of the estimation procedure are explored by Monte Carlo simulation. An application to the Standard & Poor 500 Index indicates existence o...

  8. Effect of GPS errors on Emission model

    DEFF Research Database (Denmark)

    Lehmann, Anders; Gross, Allan

    In this paper we will show how Global Positioning Services (GPS) data obtained from smartphones can be used to model air quality in urban settings. The paper examines the uncertainty of smartphone location utilising GPS, and ties this location uncertainty to air quality models. The results presented...

  9. Estimation in the polynomial errors-in-variables model

    Institute of Scientific and Technical Information of China (English)

    2002-01-01

    Estimators are presented for the coefficients of the polynomial errors-in-variables (EV) model when replicated observations are taken at some experimental points. These estimators are shown to be strongly consistent under mild conditions.

  10. Reducing RANS Model Error Using Random Forest

    Science.gov (United States)

    Wang, Jian-Xun; Wu, Jin-Long; Xiao, Heng; Ling, Julia

    2016-11-01

    Reynolds-Averaged Navier-Stokes (RANS) models are still the work-horse tools in the turbulence modeling of industrial flows. However, the model discrepancy due to the inadequacy of modeled Reynolds stresses largely diminishes the reliability of simulation results. In this work we use a physics-informed machine learning approach to improve the RANS modeled Reynolds stresses and propagate them to obtain the mean velocity field. Specifically, the functional forms of Reynolds stress discrepancies with respect to mean flow features are trained based on an offline database of flows with similar characteristics. The random forest model is used to predict Reynolds stress discrepancies in new flows. Then the improved Reynolds stresses are propagated to the velocity field via RANS equations. The effects of expanding the feature space through the use of a complete basis of Galilean tensor invariants are also studied. The flow in a square duct, which is challenging for standard RANS models, is investigated to demonstrate the merit of the proposed approach. The results show that both the Reynolds stresses and the propagated velocity field are improved over the baseline RANS predictions. SAND Number: SAND2016-7437 A
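    A minimal sketch of the machine-learning step: a random forest learns the Reynolds-stress discrepancy as a function of mean-flow features on training flows and predicts it for a new flow. The feature set and synthetic data are placeholders, not the actual DNS/RANS training database of the study:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Learn the Reynolds-stress discrepancy from mean-flow features, then
# predict it for a new flow.  Features and targets are synthetic.
rng = np.random.default_rng(5)
n_train, n_feat = 5000, 5
X_train = rng.normal(size=(n_train, n_feat))       # e.g. invariants of strain/rotation
delta_tau = 0.3 * X_train[:, 0] - 0.2 * X_train[:, 1] ** 2 + 0.05 * rng.normal(size=n_train)

rf = RandomForestRegressor(n_estimators=200, random_state=0)
rf.fit(X_train, delta_tau)

X_new = rng.normal(size=(10, n_feat))               # features from the new (square-duct) flow
print("predicted discrepancies:", np.round(rf.predict(X_new), 3))
# The corrected stresses (RANS value + predicted discrepancy) would then
# be propagated through the RANS equations to update the velocity field.
```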

  11. Quantifying model structural error: Efficient Bayesian calibration of a regional groundwater flow model using surrogates and a data-driven error model

    Science.gov (United States)

    Xu, Tianfang; Valocchi, Albert J.; Ye, Ming; Liang, Feng

    2017-05-01

    Groundwater model structural error is ubiquitous, due to simplification and/or misrepresentation of real aquifer systems. During model calibration, the basic hydrogeological parameters may be adjusted to compensate for structural error. This may result in biased predictions when such calibrated models are used to forecast aquifer responses to new forcing. We investigate the impact of model structural error on calibration and prediction of a real-world groundwater flow model, using a Bayesian method with a data-driven error model to explicitly account for model structural error. The error-explicit Bayesian method jointly infers model parameters and structural error and thereby reduces parameter compensation. In this study, Bayesian inference is facilitated using high performance computing and fast surrogate models (based on machine learning techniques) as a substitute for the computationally expensive groundwater model. We demonstrate that with explicit treatment of model structural error, the Bayesian method yields parameter posterior distributions that are substantially different from those derived using classical Bayesian calibration that does not account for model structural error. We also found that the error-explicit Bayesian method gives significantly more accurate prediction along with reasonable credible intervals. Finally, through variance decomposition, we provide a comprehensive assessment of prediction uncertainty contributed from parameter, model structure, and measurement uncertainty. The results suggest that the error-explicit Bayesian approach provides a solution to real-world modeling applications for which data support the presence of model structural error, yet model deficiency cannot be specifically identified or corrected.

  12. Multiple models adaptive feedforward decoupling controller

    Institute of Scientific and Technical Information of China (English)

    Wang Xin; Li Shaoyuan; Wang Zhongjie

    2005-01-01

    When the parameters of the system change abruptly, a new multivariable adaptive feedforward decoupling controller using multiple models is presented to improve the transient response. The system models are composed of multiple fixed models, one free-running adaptive model and one re-initialized adaptive model. The fixed models are used to provide initial control to the process. The re-initialized adaptive model can be reinitialized as the selected model to improve the adaptation speed. The free-running adaptive controller is added to guarantee the overall system stability. At each instant, the best system model is selected according to the switching index and the corresponding controller is designed. During the controller design, the interaction is viewed as the measurable disturbance and eliminated by the choice of the weighting polynomial matrix. It not only eliminates the steady-state error but also decouples the system dynamically. The global convergence is obtained and several simulation examples are presented to illustrate the effectiveness of the proposed controller.

  13. Treatment of multiple network parameter errors through a genetic-based algorithm

    Energy Technology Data Exchange (ETDEWEB)

    Stacchini de Souza, Julio C.; Do Coutto Filho, Milton B.; Meza, Edwin B. Mitacc [Department of Electrical Engineering, Institute of Computing, Fluminense Federal University, Rua Passo da Patria, 156 - Sao Domingos, 24210-240 Niteroi, Rio de Janeiro (Brazil)

    2009-11-15

    This paper proposes a genetic algorithm-based methodology for network parameter estimation and correction. Network parameter errors may come from many different sources, such as: imprecise data provided by manufacturers, poor estimation of transmission lines lengths and changes in transmission network design which are not adequately updated in the corresponding database. Network parameter data are employed by almost all power system analysis tools, from real time monitoring to long-term planning. The presence of parameter errors contaminates the results obtained by these tools and compromises decision-making processes. To get rid of single or multiple network parameter errors, a methodology that combines genetic algorithms and power system state estimation is proposed. Tests with the IEEE 14-bus system and a real Brazilian system are performed to illustrate the proposed method. (author)

  14. Pupillary response predicts multiple object tracking load, error rate, and conscientiousness, but not inattentional blindness.

    Science.gov (United States)

    Wright, Timothy J; Boot, Walter R; Morgan, Chelsea S

    2013-09-01

    Research on inattentional blindness (IB) has uncovered few individual difference measures that predict failures to detect an unexpected event. Notably, no clear relationship exists between primary task performance and IB. This is perplexing as better task performance is typically associated with increased effort and should result in fewer spare resources to process the unexpected event. We utilized a psychophysiological measure of effort (pupillary response) to explore whether differences in effort devoted to the primary task (multiple object tracking) are related to IB. Pupillary response was sensitive to tracking load and differences in primary task error rates. Furthermore, pupillary response was a better predictor of conscientiousness than primary task errors; errors were uncorrelated with conscientiousness. Despite being sensitive to task load, individual differences in performance and conscientiousness, pupillary response did not distinguish between those who noticed the unexpected event and those who did not. Results provide converging evidence that effort and primary task engagement may be unrelated to IB.

  15. Errors in the estimation of the variance: implications for multiple-probability fluctuation analysis.

    Science.gov (United States)

    Saviane, Chiara; Silver, R Angus

    2006-06-15

    Synapses play a crucial role in information processing in the brain. Amplitude fluctuations of synaptic responses can be used to extract information about the mechanisms underlying synaptic transmission and its modulation. In particular, multiple-probability fluctuation analysis can be used to estimate the number of functional release sites, the mean probability of release and the amplitude of the mean quantal response from fits of the relationship between the variance and mean amplitude of postsynaptic responses, recorded at different probabilities. To determine these quantal parameters, calculate their uncertainties and the goodness-of-fit of the model, it is important to weight the contribution of each data point in the fitting procedure. We therefore investigated the errors associated with measuring the variance by determining the best estimators of the variance of the variance and have used simulations of synaptic transmission to test their accuracy and reliability under different experimental conditions. For central synapses, which generally have a low number of release sites, the amplitude distribution of synaptic responses is not normal, thus the use of a theoretical variance of the variance based on the normal assumption is not a good approximation. However, appropriate estimators can be derived for the population and for limited sample sizes using a more general expression that involves higher moments and introducing unbiased estimators based on the h-statistics. Our results are likely to be relevant for various applications of fluctuation analysis when few channels or release sites are present.
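    A plug-in estimate of the variance of the sample variance illustrates the weighting problem; the exact identity used below holds for iid data without a normality assumption, while the paper goes further and builds unbiased estimators from h-statistics, which this sketch omits. The toy amplitude data are an assumption:

```python
import numpy as np

# Var(s^2) = (1/n) * (mu4 - (n - 3)/(n - 1) * sigma^4) for iid data;
# sample central moments replace mu4 and sigma^2 here.
def var_of_sample_variance(x):
    x = np.asarray(x, dtype=float)
    n = x.size
    m2 = np.mean((x - x.mean()) ** 2)
    m4 = np.mean((x - x.mean()) ** 4)
    return (m4 - (n - 3.0) / (n - 1.0) * m2 ** 2) / n

rng = np.random.default_rng(6)
amplitudes = rng.binomial(n=5, p=0.4, size=60) * 1.2   # toy non-normal response amplitudes
w = var_of_sample_variance(amplitudes)
print("estimated Var(s^2):", w, "-> weight 1/w for the variance-mean fit")
```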

  16. Towards a Bayesian total error analysis of conceptual rainfall-runoff models: Characterising model error using storm-dependent parameters

    Science.gov (United States)

    Kuczera, George; Kavetski, Dmitri; Franks, Stewart; Thyer, Mark

    2006-11-01

    Calibration and prediction in conceptual rainfall-runoff (CRR) modelling is affected by the uncertainty in the observed forcing/response data and the structural error in the model. This study works towards the goal of developing a robust framework for dealing with these sources of error and focuses on model error. The characterisation of model error in CRR modelling has been thwarted by the convenient but indefensible treatment of CRR models as deterministic descriptions of catchment dynamics. This paper argues that the fluxes in CRR models should be treated as stochastic quantities because their estimation involves spatial and temporal averaging. Acceptance that CRR models are intrinsically stochastic paves the way for a more rational characterisation of model error. The hypothesis advanced in this paper is that CRR model error can be characterised by storm-dependent random variation of one or more CRR model parameters. A simple sensitivity analysis is used to identify the parameters most likely to behave stochastically, with variation in these parameters yielding the largest changes in model predictions as measured by the Nash-Sutcliffe criterion. A Bayesian hierarchical model is then formulated to explicitly differentiate between forcing, response and model error. It provides a very general framework for calibration and prediction, as well as for testing hypotheses regarding model structure and data uncertainty. A case study calibrating a six-parameter CRR model to daily data from the Abercrombie catchment (Australia) demonstrates the considerable potential of this approach. Allowing storm-dependent variation in just two model parameters (with one of the parameters characterising model error and the other reflecting input uncertainty) yields a substantially improved model fit raising the Nash-Sutcliffe statistic from 0.74 to 0.94. Of particular significance is the use of posterior diagnostics to test the key assumptions about the data and model errors

  17. Multiscale measurement error models for aggregated small area health data.

    Science.gov (United States)

    Aregay, Mehreteab; Lawson, Andrew B; Faes, Christel; Kirby, Russell S; Carroll, Rachel; Watjou, Kevin

    2016-08-01

    Spatial data are often aggregated from a finer (smaller) to a coarser (larger) geographical level. The process of data aggregation induces a scaling effect which smoothes the variation in the data. To address the scaling problem, multiscale models that link the convolution models at different scale levels via a shared random effect have been proposed. One of the main goals in aggregated health data is to investigate the relationship between predictors and an outcome at different geographical levels. In this paper, we extend multiscale models to examine whether a predictor effect at a finer level holds true at a coarser level. To adjust for predictor uncertainty due to aggregation, we applied measurement error models in the framework of the multiscale approach. To assess the benefit of using multiscale measurement error models, we compare the performance of multiscale models with and without measurement error in both real and simulated data. We found that ignoring the measurement error in multiscale models underestimates the regression coefficient, while it overestimates the variance of the spatially structured random effect. On the other hand, accounting for the measurement error in multiscale models provides a better model fit and unbiased parameter estimates.

  18. Error detection and rectification in digital terrain models

    Science.gov (United States)

    Hannah, M. J.

    1979-01-01

    Digital terrain models produced by computer correlation of stereo images are likely to contain occasional gross errors in terrain elevation. These errors typically result from having mismatched sub-areas of the two images, a problem which can occur for a variety of image- and terrain-related reasons. Such elevation errors produce undesirable effects when the models are further processed, and should be detected and corrected as early in the processing as possible. Algorithms have been developed to detect and correct errors in digital terrain models. These algorithms focus on the use of constraints on both the allowable slope and the allowable change in slope in local areas around each point. Relaxation-like techniques are employed in the iteration of the detection and correction phases to obtain best results.
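    A much-simplified version of the slope-constraint idea can be sketched in a few lines: flag DEM cells whose local slope to any 4-neighbour exceeds a threshold, then replace flagged cells with a neighbourhood median. The thresholds and the one-shot correction rule are assumptions, not Hannah's full iterative algorithm:

```python
import numpy as np

# Flag cells violating a maximum rise/run slope constraint, then apply
# a simple median-of-neighbours correction.
def detect_spikes(dem, cell_size=30.0, max_slope=1.0):
    dz = np.zeros_like(dem, dtype=float)
    for shift, axis in [(1, 0), (-1, 0), (1, 1), (-1, 1)]:
        dz = np.maximum(dz, np.abs(dem - np.roll(dem, shift, axis=axis)))
    return dz / cell_size > max_slope

rng = np.random.default_rng(7)
dem = 500.0 + 0.5 * np.add.outer(np.arange(50), np.arange(50)) + rng.normal(0, 1.0, (50, 50))
dem[20, 20] += 400.0                                      # inject a gross elevation blunder
bad = detect_spikes(dem)
med = np.median(np.stack([np.roll(dem, s, axis=a)
                          for s, a in [(1, 0), (-1, 0), (1, 1), (-1, 1)]]), axis=0)
dem[bad] = med[bad]                                       # simple correction step
print("cells flagged:", int(bad.sum()))
```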

  19. Adaptive Error Estimation in Linearized Ocean General Circulation Models

    Science.gov (United States)

    Chechelnitsky, Michael Y.

    1999-01-01

    Data assimilation methods are routinely used in oceanography. The statistics of the model and measurement errors need to be specified a priori. This study addresses the problem of estimating model and measurement error statistics from observations. We start by testing innovation-based methods of adaptive error estimation with low-dimensional models in the North Pacific (5-60 deg N, 132-252 deg E), applied to TOPEX/POSEIDON (T/P) sea level anomaly data, acoustic tomography data from the ATOC project, and the MIT General Circulation Model (GCM). A reduced-state linear model that describes large-scale internal (baroclinic) error dynamics is used. The methods are shown to be sensitive to the initial guess for the error statistics and the type of observations. A new off-line approach is developed, the covariance matching approach (CMA), where covariance matrices of model-data residuals are "matched" to their theoretical expectations using familiar least squares methods. This method uses observations directly instead of the innovations sequence and is shown to be related to the MT method and the method of Fu et al. (1993). Twin experiments using the same linearized MIT GCM suggest that altimetric data are ill-suited to the estimation of internal GCM errors, but that such estimates can in theory be obtained using acoustic data. The CMA is then applied to T/P sea level anomaly data and a linearization of a global GFDL GCM which uses two vertical modes. We show that the CMA method can be used with a global model and a global data set, and that the estimates of the error statistics are robust. We show that the fraction of the GCM-T/P residual variance explained by the model error is larger than that derived in Fukumori et al. (1999) with the method of Fu et al. (1993). Most of the model error is explained by the barotropic mode. However, we find that the impact of the change in the error statistics on the data assimilation estimates is very small. This is explained by the large

  20. Identification of coefficients in platform drift error model

    Institute of Scientific and Technical Information of China (English)

    邓正隆; 徐松艳; 付振宪

    2002-01-01

    The identification of the coefficients in the drift error model of a floated gyro inertial navigation platform was investigated by following the principle of the inertial navigation platform and using gyro and accelerometer output models, and a complete platform drift error model was established, with the parameters as state variables, thereby establishing the system state equation and observation equation. Since these two equations are both nonlinear, the Extended Kalman Filter (EKF) was adopted. Then the problem of parameter identification was converted into a problem of state estimation. During the simulation, multi-position testing schemes were designed to excite the parameters by gravity acceleration. Using these schemes, twenty-four error coefficients of three gyros and six error coefficients of three accelerometers were identified, which showed the feasibility of this method.

  1. Assessment of errors and uncertainty patterns in GIA modeling

    DEFF Research Database (Denmark)

    Barletta, Valentina Roberta; Spada, G.

    During the last decade many efforts have been devoted to the assessment of global sea level rise and to the determination of the mass balance of continental ice sheets. In this context, the important role of glacial-isostatic adjustment (GIA) has been clearly recognized. Yet, in many cases only one "preferred" GIA model has been used, without any consideration of the possible errors involved. Lacking a rigorous assessment of systematic errors in GIA modeling, the reliability of the results is uncertain. GIA sensitivity and uncertainties associated with the viscosity models have been explored..., such as time-evolving shorelines and paleo-coastlines. In this study we quantify these uncertainties and their propagation in GIA response using a Monte Carlo approach to obtain spatio-temporal patterns of GIA errors. A direct application is the error estimates in ice mass balance in Antarctica and Greenland...

  2. Multiplicity description by gluon model

    CERN Document Server

    Kokoulina, E S

    2015-01-01

    Study of high multiplicity events in proton-proton interactions is carried out at the U-70 accelerator (IHEP, Protvino). These events are extremely rare. Usually, Monte Carlo codes underestimate topological cross sections in this region. The gluon dominance model (GDM) was proposed to describe them. It is based on QCD and a phenomenological scheme of the hadronization stage. This model indicates a recombination mechanism of hadronization and gluon fission. The future program of the SVD Collaboration is aimed at studying the long-standing puzzle of excess soft-photon yield and its connection with high multiplicity, at the U-70 and at the Nuclotron facility at JINR, Dubna.

  3. Background Error Correlation Modeling with Diffusion Operators

    Science.gov (United States)

    2013-01-01

    [Abstract not recoverable from the source record; the extracted fragments refer to correlation functions defined on the orthogonal curvilinear grid of the Navy Coastal Ocean Model (NCOM) set up in Monterey Bay, and to Hadamard matrices: starting from H2 = [1 1; 1 -1], matrices of order N = 2^n, n = 1, 2, ..., can be easily constructed, while those with N = 12, 20 were constructed "manually" more than a century ago.]

  4. How well can we forecast future model error and uncertainty by mining past model performance data

    Science.gov (United States)

    Solomatine, Dimitri

    2016-04-01

    … (a) the quantile regression (QR) method by Koenker and Bassett, in which linear regression is used to build predictive models for distribution quantiles [1]; (b) the UNEEC method [2,3,7], which takes into account the input variables influencing such uncertainty and uses more advanced (non-linear) machine learning methods (e.g. neural networks or the k-NN method); (c) the recent DUBRAE method (Dynamic Uncertainty Model By Regression on Absolute Error), an autoregressive model of model residuals which first corrects the model residual and then employs an autoregressive statistical model for uncertainty prediction [5]. 2. The data uncertainty (parametric and/or input): in this case we study the propagation of uncertainty (typically presented probabilistically) from parameters or inputs to the model outputs. For real complex non-linear functions (models) implemented in software, various versions of Monte Carlo simulation are used: values of parameters or inputs are sampled from the assumed distributions and the model is run multiple times to generate multiple outputs. The data generated by Monte Carlo analysis can be used to build a machine learning model able to make predictions of model uncertainty for the future; this method is named MLUE (Machine Learning for Uncertainty Estimation) and is covered in [4,6]. 3. Structural uncertainty stemming from inadequate model structure. The paper discusses the possibilities and experiences of building models able to forecast (rather than analyse) residual and parametric uncertainty of hydrological models. References: [1] Koenker, R., and G. Bassett (1978). Regression quantiles. Econometrica, 46(1), 33-50, doi:10.2307/1913643. [2] D.L. Shrestha, D.P. Solomatine (2006). Machine learning approaches for estimation of prediction interval for the model output. Neural Networks J., 19(2), 225-235. [3] D.P. Solomatine, D.L. Shrestha (2009). A novel method to estimate model uncertainty using machine learning techniques. Water Resources Res. 45, W00B11. [4] D. L
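    The residual-quantile idea behind approaches (a) and (b) can be sketched with any quantile regressor; here a gradient-boosting quantile model is used instead of the linear QR of [1], and the "rainfall intensity" feature and heteroscedastic error data are synthetic assumptions:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

# Learn the 5th and 95th percentiles of model error as a function of an
# input variable, which then bracket future predictions.
rng = np.random.default_rng(8)
rain = rng.uniform(0, 50, 2000).reshape(-1, 1)
residual = rng.normal(0, 0.2 + 0.05 * rain.ravel())      # heteroscedastic model error

bands = {}
for q in (0.05, 0.95):
    gbr = GradientBoostingRegressor(loss="quantile", alpha=q, random_state=0)
    bands[q] = gbr.fit(rain, residual)

for x in np.array([[5.0], [45.0]]):
    lo = bands[0.05].predict(x.reshape(1, -1))[0]
    hi = bands[0.95].predict(x.reshape(1, -1))[0]
    print(f"rain={x[0]:4.1f} mm/h -> 90% error band [{lo:+.2f}, {hi:+.2f}]")
```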

  5. Bayesian modeling of measurement error in predictor variables

    NARCIS (Netherlands)

    Fox, Gerardus J.A.; Glas, Cornelis A.W.

    2003-01-01

    It is shown that measurement error in predictor variables can be modeled using item response theory (IRT). The predictor variables, which may be defined at any level of a hierarchical regression model, are treated as latent variables. The normal ogive model is used to describe the relation between the observed item responses and the latent predictor variables.

  6. Forecasting the Euro exchange rate using vector error correction models

    NARCIS (Netherlands)

    Aarle, B. van; Bos, M.; Hlouskova, J.

    2000-01-01

    Forecasting the Euro Exchange Rate Using Vector Error Correction Models. — This paper presents an exchange rate model for the Euro exchange rates of four major currencies, namely the US dollar, the British pound, the Japanese yen and the Swiss franc. The model is based on the monetary approach of exchange rate determination.

  7. Modeling of Bit Error Rate in Cascaded 2R Regenerators

    DEFF Research Database (Denmark)

    Öhman, Filip; Mørk, Jesper

    2006-01-01

    This paper presents a simple and efficient model for estimating the bit error rate in a cascade of optical 2R-regenerators. The model includes the influences of amplifier noise, finite extinction ratio and nonlinear reshaping. The interplay between the different signal impairments and the rege...

  8. Comparative study and error analysis of digital elevation model interpolations

    Institute of Scientific and Technical Information of China (English)

    CHEN Ji-long; WU Wei; LIU Hong-bin

    2008-01-01

    Researchers in P.R. China commonly create triangulated irregular networks (TINs) from contours and then convert TINs into digital elevation models (DEMs). However, the DEM produced by this method cannot precisely describe and simulate key hydrological features such as rivers and drainage borders. Taking a hilly region in southwestern China as a research area and using ArcGIS software, we analyzed the errors of different interpolations to obtain the error distributions and precisions of different algorithms and to provide references for DEM production. The results show that the different interpolation errors satisfy normal distributions, and that large errors exist near terrain structure lines. Furthermore, the results also show that the precision of a DEM interpolated with the Australian National University digital elevation model (ANUDEM) is higher than that interpolated with a TIN. The DEM interpolated with a TIN is acceptable for generating DEMs in the hilly region of southwestern China.

  9. A model for navigational errors in complex environmental fields.

    Science.gov (United States)

    Postlethwaite, Claire M; Walker, Michael M

    2014-12-21

    Many animals are believed to navigate using environmental signals such as light, sound, odours and magnetic fields. However, animals rarely navigate directly to their target location, but instead make a series of navigational errors which are corrected during transit. In previous work, we introduced a model showing that differences between an animal's 'cognitive map' of the environmental signals used for navigation and the true nature of these signals caused a systematic pattern in orientation errors when navigation begins. The model successfully predicted the pattern of errors seen in previously collected data from homing pigeons, but underestimated the amplitude of the errors. In this paper, we extend our previous model to include more complicated distortions of the contour lines of the environmental signals. Specifically, we consider the occurrence of critical points in the fields describing the signals. We consider three scenarios and compute orientation errors as parameters are varied in each case. We show that the occurrence of critical points can be associated with large variations in initial orientation errors over a small geographic area. We discuss the implications that these results have on predicting how animals will behave when encountering complex distortions in any environmental signals they use to navigate.

  10. A cumulative entropy method for distribution recognition of model error

    Science.gov (United States)

    Liang, Yingjie; Chen, Wen

    2015-02-01

    This paper develops a cumulative entropy method (CEM) to recognize the most suitable distribution for model error. In terms of the CEM, the Lévy stable distribution is employed to capture the statistical properties of model error. The strategies are tested on 250 experiments of axially loaded CFT steel stub columns in conjunction with four design codes: Japan (AIJ, 1997), China (DL/T, 1999), Eurocode 4 (EU4, 2004), and the United States (AISC, 2005). The cumulative entropy method is validated as more computationally efficient than the Shannon entropy method. Compared with the Kolmogorov-Smirnov test and the root mean square deviation, the CEM provides an alternative and powerful model selection criterion to recognize the most suitable distribution for the model error.
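    For orientation, one common definition of cumulative entropy, -∫ F(x) ln F(x) dx, can be evaluated from an empirical CDF. The model-error sample below is synthetic, and the paper's exact selection criterion may differ in detail; this only illustrates the computation:

```python
import numpy as np

# Empirical cumulative entropy of a sample of model-error ratios
# (test / predicted capacity), using the empirical CDF on the gaps
# between sorted observations.
def cumulative_entropy(sample):
    x = np.sort(np.asarray(sample, dtype=float))
    n = x.size
    F = np.arange(1, n) / n                    # empirical CDF between order statistics
    dx = np.diff(x)
    return np.sum(-F * np.log(F) * dx)

rng = np.random.default_rng(9)
error_ratio = rng.lognormal(mean=0.02, sigma=0.08, size=250)   # toy test/prediction ratios
print("cumulative entropy of the error sample:", round(cumulative_entropy(error_ratio), 4))
```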

  11. Assessment of errors and uncertainty patterns in GIA modeling

    DEFF Research Database (Denmark)

    Barletta, Valentina Roberta; Spada, G.

    During the last decade many efforts have been devoted to the assessment of global sea level rise and to the determination of the mass balance of continental ice sheets. In this context, the important role of glacial-isostatic adjustment (GIA) has been clearly recognized. Yet, in many cases only one... in the literature. However, at least two major sources of errors remain. The first is associated with the ice models, spatial distribution of ice and history of melting (this is especially the case of Antarctica), the second with the numerical implementation of model features relevant to sea level modeling... GIA modeling. GIA errors are also important in the far field of previously glaciated areas and in the time evolution of global indicators. In this regard we also account for other possible error sources which can impact global indicators, like the sea level history related to GIA. The thermal...

  12. Data Quality in Linear Regression Models: Effect of Errors in Test Data and Errors in Training Data on Predictive Accuracy

    Directory of Open Access Journals (Sweden)

    Barbara D. Klein

    1999-01-01

    Full Text Available Although databases used in many organizations have been found to contain errors, little is known about the effect of these errors on predictions made by linear regression models. The paper uses a real-world example, the prediction of the net asset values of mutual funds, to investigate the effect of data quality on linear regression models. The results of two experiments are reported. The first experiment shows that the error rate and magnitude of error in data used in model prediction negatively affect the predictive accuracy of linear regression models. The second experiment shows that the error rate and the magnitude of error in data used to build the model positively affect the predictive accuracy of linear regression models. All findings are statistically significant. The findings have managerial implications for users and builders of linear regression models.
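    A tiny simulation in the spirit of the first experiment: fit an ordinary least-squares model on clean data, then measure predictive accuracy as errors are injected into the test inputs. The error rates and magnitudes are arbitrary illustration values, not those of the mutual-fund study:

```python
import numpy as np

# Fit OLS on clean training data, then corrupt a fraction of test-input
# fields and observe the loss of predictive accuracy.
rng = np.random.default_rng(10)
n, p = 2000, 3
X = rng.normal(size=(n, p))
y = X @ np.array([1.0, -2.0, 0.5]) + rng.normal(0, 0.5, n)

X_train, y_train, X_test, y_test = X[:1000], y[:1000], X[1000:], y[1000:]
beta = np.linalg.lstsq(X_train, y_train, rcond=None)[0]

for error_rate in (0.0, 0.1, 0.3):
    X_noisy = X_test.copy()
    mask = rng.random(X_noisy.shape) < error_rate          # fraction of corrupted fields
    X_noisy[mask] += rng.normal(0, 2.0, mask.sum())        # magnitude of injected error
    rmse = np.sqrt(np.mean((X_noisy @ beta - y_test) ** 2))
    print(f"test-data error rate {error_rate:.0%}: RMSE = {rmse:.2f}")
```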

  13. Improved assessment of multiple sclerosis lesion segmentation agreement via detection and outline error estimates

    Directory of Open Access Journals (Sweden)

    Wack David S

    2012-07-01

    Full Text Available Background: Presented is the method "Detection and Outline Error Estimates" (DOEE) for assessing rater agreement in the delineation of multiple sclerosis (MS) lesions. The DOEE method divides operator or rater assessment into two parts: (1) Detection Error (DE) -- rater agreement in detecting the same regions to mark, and (2) Outline Error (OE) -- agreement of the raters in outlining of the same lesion. Methods: DE, OE and Similarity Index (SI) values were calculated for two raters tested on a set of 17 fluid-attenuated inversion-recovery (FLAIR) images of patients with MS. DE, OE, and SI values were tested for dependence with mean total area (MTA) of the raters' Regions of Interest (ROIs). Results: When correlated with MTA, neither DE (ρ = .056, p = .83) nor the ratio of OE to MTA (ρ = .23, p = .37), referred to as Outline Error Rate (OER), exhibited significant correlation. In contrast, SI is found to be strongly correlated with MTA (ρ = .75, p ... Conclusions: The DE and OER indices are proposed as a better method than SI for comparing rater agreement of ROIs, which also provide specific information for raters to improve their agreement.

  14. A priori discretization error metrics for distributed hydrologic modeling applications

    Science.gov (United States)

    Liu, Hongli; Tolson, Bryan A.; Craig, James R.; Shafii, Mahyar

    2016-12-01

    Watershed spatial discretization is an important step in developing a distributed hydrologic model. A key difficulty in the spatial discretization process is maintaining a balance between the aggregation-induced information loss and the increase in computational burden caused by the inclusion of additional computational units. Objective identification of an appropriate discretization scheme still remains a challenge, in part because of the lack of quantitative measures for assessing discretization quality, particularly prior to simulation. This study proposes a priori discretization error metrics to quantify the information loss of any candidate discretization scheme without having to run and calibrate a hydrologic model. These error metrics are applicable to multi-variable and multi-site discretization evaluation and provide directly interpretable information to the hydrologic modeler about discretization quality. The first metric, a subbasin error metric, quantifies the routing information loss from discretization, and the second, a hydrological response unit (HRU) error metric, improves upon existing a priori metrics by quantifying the information loss due to changes in land cover or soil type property aggregation. The metrics are straightforward to understand and easy to recode. Informed by the error metrics, a two-step discretization decision-making approach is proposed with the advantage of reducing extreme errors and meeting the user-specified discretization error targets. The metrics and decision-making approach are applied to the discretization of the Grand River watershed in Ontario, Canada. Results show that information loss increases as discretization gets coarser. Moreover, results help to explain the modeling difficulties associated with smaller upstream subbasins since the worst discretization errors and highest error variability appear in smaller upstream areas instead of larger downstream drainage areas. Hydrologic modeling experiments under

  15. Two Error Models for Calibrating SCARA Robots based on the MDH Model

    Directory of Open Access Journals (Sweden)

    Li Xiaolong

    2017-01-01

    Full Text Available This paper describes the process of using two error models for calibrating Selective Compliance Assembly Robot Arm (SCARA) robots based on the modified Denavit-Hartenberg (MDH) model, with the aim of improving the robot's accuracy. One of the error models is the position error model, which uses robot position errors with respect to an accurate robot base frame built before the measurement commenced. The other model is the distance error model, which uses only the robot moving distance to calculate errors. Because calibration requires the end-effector to be accurately measured, a laser tracker was used to measure the robot position and distance errors. After calibrating the robot, the end-effector locations were measured again, compensating with the error models' parameters obtained from the calibration. The finding is that the robot's accuracy improved greatly after compensating for the calibrated parameters.

  16. Direct cointegration testing in error-correction models

    NARCIS (Netherlands)

    F.R. Kleibergen (Frank); H.K. van Dijk (Herman)

    1994-01-01

    An error correction model is specified having only exactly identified parameters, some of which reflect a possible departure from a cointegration model. Wald, likelihood ratio, and Lagrange multiplier statistics are derived to test for the significance of these parameters. The con...

  17. Structure and Asymptotic theory for Nonlinear Models with GARCH Errors

    NARCIS (Netherlands)

    F. Chan (Felix); M.J. McAleer (Michael); M.C. Medeiros (Marcelo)

    2011-01-01

    Nonlinear time series models, especially those with regime-switching and conditionally heteroskedastic errors, have become increasingly popular in the economics and finance literature. However, much of the research has concentrated on the empirical applications of various models, with little theoretical or statistical analysis associated with the structure of the processes or the associated asymptotic theory.

  18. Calibrating Car-Following Model Considering Measurement Errors

    Directory of Open Access Journals (Sweden)

    Chang-qiao Shao

    2013-01-01

    Full Text Available Car-following models have important applications in traffic and safety engineering. To enhance the accuracy of a model in predicting the behavior of an individual driver, considerable effort has been devoted to improving model calibration technologies. However, microscopic car-following models are generally calibrated using macroscopic traffic data, ignoring measurement errors-in-variables, which leads to unreliable and erroneous conclusions. This paper aims to develop a technology to calibrate the well-known Van Aerde model. In particular, the effect of measurement errors-in-variables on the accuracy of the estimates is considered. In order to complete calibration of the model using microscopic data, a new parameter estimation method, named the two-step approach, is proposed. The results show that the modified Van Aerde model is, to a certain extent, more reliable than the generic model.

  19. Multiple Model Approaches to Modelling and Control,

    DEFF Research Database (Denmark)

    Why Multiple Models? This book presents a variety of approaches which produce complex models or controllers by piecing together a number of simpler subsystems. This divide-and-conquer strategy is a long-standing and general way of coping with complexity in engineering systems, nature and human problem solving.

  20. Application of Multiple Evaluation Models in Brazil

    Directory of Open Access Journals (Sweden)

    Rafael Victal Saliba

    2008-07-01

    Full Text Available Based on two different samples, this article tests the performance of a number of Value Drivers commonly used by finance practitioners for evaluating companies, through simple cross-sectional regression models which estimate the parameters associated with each Value Driver, denominated Market Multiples. We are able to diagnose the behavior of several multiples in the period 1994-2004, with an outlook also on the particularities of the economic activities performed by the sample companies (and their impacts on performance) through a subsequent analysis segregating the sample companies by sector. Extrapolating simple multiples evaluation standards from analysts of the main financial institutions in Brazil, we find that adjusting the ratio formulation to allow for an intercept does not provide satisfactory results in terms of pricing-error reduction. The results found, in spite of evidencing a certain relative and absolute superiority among the multiples, may not be generally representative, given the samples' limitations.

  1. Decision Aids for Multiple-Decision Disease Management as Affected by Weather Input Errors

    Science.gov (United States)

    Many disease management decision support systems (DSS) rely, exclusively or in part, on weather inputs to calculate an indicator for disease hazard. Error in the weather inputs, typically due to forecasting, interpolation or estimation from off-site sources, may affect model calculations and manage...

  2. Structure and asymptotic theory for nonlinear models with GARCH errors

    Directory of Open Access Journals (Sweden)

    Felix Chan

    2015-01-01

    Full Text Available Nonlinear time series models, especially those with regime-switching and/or conditionally heteroskedastic errors, have become increasingly popular in the economics and finance literature. However, much of the research has concentrated on the empirical applications of various models, with little theoretical or statistical analysis associated with the structure of the processes or the associated asymptotic theory. In this paper, we derive sufficient conditions for strict stationarity and ergodicity of three different specifications of the first-order smooth transition autoregressions with heteroskedastic errors. This is essential, among other reasons, to establish the conditions under which the traditional LM linearity tests based on Taylor expansions are valid. We also provide sufficient conditions for consistency and asymptotic normality of the Quasi-Maximum Likelihood Estimator for a general nonlinear conditional mean model with first-order GARCH errors.

  3. Augmented GNSS differential corrections minimum mean square error estimation sensitivity to spatial correlation modeling errors.

    Science.gov (United States)

    Kassabian, Nazelie; Lo Presti, Letizia; Rispoli, Francesco

    2014-06-11

    Railway signaling is a safety system that has evolved over the last couple of centuries towards autonomous functionality. Recently, great effort is being devoted in this field towards the use and exploitation of Global Navigation Satellite System (GNSS) signals and GNSS augmentation systems, in view of lower railway track equipment and maintenance costs, which is a priority to sustain the investments for modernizing the local and regional lines, most of which lack automatic train protection systems and are still manually operated. The objective of this paper is to assess the sensitivity of the Linear Minimum Mean Square Error (LMMSE) algorithm to modeling errors in the spatial correlation function that characterizes true pseudorange Differential Corrections (DCs). This study is inspired by the railway application; however, it applies to all transportation systems, including the road sector, that need to be complemented by an augmentation system in order to deliver accurate and reliable positioning with integrity specifications. A vector of noisy pseudorange DC measurements is simulated, assuming a Gauss-Markov model with a decay rate parameter inversely proportional to the correlation distance that exists between two points of a certain environment. The LMMSE algorithm is applied to this vector to estimate the true DC, and the estimation error is compared to the noise added during simulation. The results show that for large enough correlation distance to Reference Stations (RSs) distance separation ratio values, the LMMSE brings considerable advantage in terms of estimation error accuracy and precision. Conversely, the LMMSE algorithm may deteriorate the quality of the DC measurements whenever the ratio falls below a certain threshold.
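    The experiment can be sketched directly: simulate differential corrections as a Gauss-Markov spatial process, filter noisy measurements with the LMMSE estimator x_hat = C (C + R)^{-1} y, and compare a matched against a badly mismatched assumed correlation distance. Station geometry, distances, and noise levels below are illustrative assumptions:

```python
import numpy as np

# LMMSE filtering of noisy differential corrections under an assumed
# Gauss-Markov (exponential) spatial correlation model.
rng = np.random.default_rng(11)
n_rs = 15
pos = np.sort(rng.uniform(0, 300.0, n_rs))                 # reference-station positions (km)
d = np.abs(pos[:, None] - pos[None, :])

def gauss_markov_cov(dist, sigma2, corr_dist):
    return sigma2 * np.exp(-dist / corr_dist)

true_cd, sigma2_dc, sigma2_noise = 80.0, 1.0, 0.3
C_true = gauss_markov_cov(d, sigma2_dc, true_cd)
dc_true = rng.multivariate_normal(np.zeros(n_rs), C_true)   # true DCs
y = dc_true + rng.normal(0, np.sqrt(sigma2_noise), n_rs)    # noisy measurements

for assumed_cd in (80.0, 10.0):                              # matched vs. badly mismatched
    C = gauss_markov_cov(d, sigma2_dc, assumed_cd)
    x_hat = C @ np.linalg.solve(C + sigma2_noise * np.eye(n_rs), y)
    rmse = np.sqrt(np.mean((x_hat - dc_true) ** 2))
    print(f"assumed correlation distance {assumed_cd:5.1f} km -> RMSE {rmse:.3f}")
```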

  4. Augmented GNSS Differential Corrections Minimum Mean Square Error Estimation Sensitivity to Spatial Correlation Modeling Errors

    Directory of Open Access Journals (Sweden)

    Nazelie Kassabian

    2014-06-01

    Full Text Available Railway signaling is a safety system that has evolved over the last couple of centuries towards autonomous functionality. Recently, great effort is being devoted in this field towards the use and exploitation of Global Navigation Satellite System (GNSS) signals and GNSS augmentation systems, in view of lower railway track equipment and maintenance costs, which is a priority to sustain the investments for modernizing the local and regional lines, most of which lack automatic train protection systems and are still manually operated. The objective of this paper is to assess the sensitivity of the Linear Minimum Mean Square Error (LMMSE) algorithm to modeling errors in the spatial correlation function that characterizes true pseudorange Differential Corrections (DCs). This study is inspired by the railway application; however, it applies to all transportation systems, including the road sector, that need to be complemented by an augmentation system in order to deliver accurate and reliable positioning with integrity specifications. A vector of noisy pseudorange DC measurements is simulated, assuming a Gauss-Markov model with a decay rate parameter inversely proportional to the correlation distance that exists between two points of a certain environment. The LMMSE algorithm is applied to this vector to estimate the true DC, and the estimation error is compared to the noise added during simulation. The results show that for large enough correlation distance to Reference Stations (RSs) distance separation ratio values, the LMMSE brings considerable advantage in terms of estimation error accuracy and precision. Conversely, the LMMSE algorithm may deteriorate the quality of the DC measurements whenever the ratio falls below a certain threshold.

  5. Modeling Error in Quantitative Macro-Comparative Research

    Directory of Open Access Journals (Sweden)

    Salvatore J. Babones

    2015-08-01

    Full Text Available Much quantitative macro-comparative research (QMCR) relies on a common set of published data sources to answer similar research questions using a limited number of statistical tools. Since all researchers have access to much the same data, one might expect quick convergence of opinion on most topics. In reality, of course, differences of opinion abound and persist. Many of these differences can be traced, implicitly or explicitly, to the different ways researchers choose to model error in their analyses. Much careful attention has been paid in the political science literature to the error structures characteristic of time series cross-sectional (TSCS) data, but much less attention has been paid to the modeling of error in broadly cross-national research involving large panels of countries observed at limited numbers of time points. Here, and especially in the sociology literature, multilevel modeling has become a hegemonic – but often poorly understood – research tool. I argue that widely-used types of multilevel models, commonly known as fixed effects models (FEMs) and random effects models (REMs), can produce wildly spurious results when applied to trended data due to mis-specification of error. I suggest that in most commonly-encountered scenarios, difference models are more appropriate for use in QMCR.

  6. Identification of multiple inputs single output errors-in-variables system using cumulant

    Institute of Scientific and Technical Information of China (English)

    Haihui Long; Jiankang Zhao

    2014-01-01

    A higher-order cumulant-based weighted least square (HOCWLS) and a higher-order cumulant-based iterative least square (HOCILS) are derived for multiple inputs single output (MISO) errors-in-variables (EIV) systems from noisy input/output data. Whether the noises of the input/output of the system are white or colored, the proposed algorithms can be insensitive to these noises and yield unbiased estimates. To realize adaptive parameter estimates, a higher-order cumulant-based recursive least square (HOCRLS) method is also studied. Convergence analysis of the HOCRLS is conducted by using the stochastic process theory and the stochastic martingale theory. It indicates that the parameter estimation error of HOCRLS consistently converges to zero under a generalized persistent excitation condition. The usefulness of the proposed algorithms is assessed through numerical simulations.

  7. An empirical assessment of exposure measurement error and effect attenuation in bipollutant epidemiologic models.

    Science.gov (United States)

    Dionisio, Kathie L; Baxter, Lisa K; Chang, Howard H

    2014-11-01

    Using multipollutant models to understand combined health effects of exposure to multiple pollutants is becoming more common. However, complex relationships between pollutants and differing degrees of exposure error across pollutants can make health effect estimates from multipollutant models difficult to interpret. We aimed to quantify relationships between multiple pollutants and their associated exposure errors across metrics of exposure and to use empirical values to evaluate potential attenuation of coefficients in epidemiologic models. We used three daily exposure metrics (central-site measurements, air quality model estimates, and population exposure model estimates) for 193 ZIP codes in the Atlanta, Georgia, metropolitan area from 1999 through 2002 for PM2.5 and its components (EC and SO4), as well as O3, CO, and NOx, to construct three types of exposure error: δspatial (comparing air quality model estimates to central-site measurements), δpopulation (comparing population exposure model estimates to air quality model estimates), and δtotal (comparing population exposure model estimates to central-site measurements). We compared exposure metrics and exposure errors within and across pollutants and derived attenuation factors (ratio of observed to true coefficient for pollutant of interest) for single- and bipollutant model coefficients. Pollutant concentrations and their exposure errors were moderately to highly correlated (typically, > 0.5), especially for CO, NOx, and EC (i.e., "local" pollutants); correlations differed across exposure metrics and types of exposure error. Spatial variability was evident, with variance of exposure error for local pollutants ranging from 0.25 to 0.83 for δspatial and δtotal. The attenuation of model coefficients in single- and bipollutant epidemiologic models relative to the true value differed across types of exposure error, pollutants, and space. Under a classical exposure-error framework, attenuation may be

  8. Prediction error, ketamine and psychosis: An updated model.

    Science.gov (United States)

    Corlett, Philip R; Honey, Garry D; Fletcher, Paul C

    2016-11-01

    In 2007, we proposed an explanation of delusion formation as aberrant prediction error-driven associative learning. Further, we argued that the NMDA receptor antagonist ketamine provided a good model for this process. Subsequently, we validated the model in patients with psychosis, relating aberrant prediction error signals to delusion severity. During the ensuing period, we have developed these ideas, drawing on the simple principle that brains build a model of the world and refine it by minimising prediction errors, as well as using it to guide perceptual inferences. While previously we focused on the prediction error signal per se, an updated view takes into account its precision, as well as the precision of prior expectations. With this expanded perspective, we see several possible routes to psychotic symptoms - which may explain the heterogeneity of psychotic illness, as well as the fact that other drugs, with different pharmacological actions, can produce psychotomimetic effects. In this article, we review the basic principles of this model and highlight specific ways in which prediction errors can be perturbed, in particular considering the reliability and uncertainty of predictions. The expanded model explains hallucinations as perturbations of the uncertainty mediated balance between expectation and prediction error. Here, expectations dominate and create perceptions by suppressing or ignoring actual inputs. Negative symptoms may arise due to poor reliability of predictions in service of action. By mapping from biology to belief and perception, the account proffers new explanations of psychosis. However, challenges remain. We attempt to address some of these concerns and suggest future directions, incorporating other symptoms into the model, building towards better understanding of psychosis. © The Author(s) 2016.
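
    The key computational idea above is that prediction errors are weighted by the relative precision of prior expectations and sensory evidence. The toy update below is purely illustrative (the single-step Gaussian form and all numbers are assumptions, not the authors' model): when prior precision is high relative to sensory precision, the input is largely explained away, the situation the authors link to perception being dominated by expectation.

```python
# Illustrative precision-weighted belief update (all values are invented).
prior_mean, prior_precision = 0.0, 4.0       # expectation and its precision (1/variance)
sensory_input, sensory_precision = 1.0, 1.0  # actual input and its precision

prediction_error = sensory_input - prior_mean
# The weight given to the prediction error grows with sensory precision
# and shrinks with prior precision.
learning_rate = sensory_precision / (sensory_precision + prior_precision)
posterior_mean = prior_mean + learning_rate * prediction_error

print(posterior_mean)  # 0.2: strong priors dominate and the input is largely explained away
```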

  9. Precise Asymptotics of Error Variance Estimator in Partially Linear Models

    Institute of Scientific and Technical Information of China (English)

    Shao-jun Guo; Min Chen; Feng Liu

    2008-01-01

    In this paper, we focus our attention on the precise asymptotics of the error variance estimator in partially linear regression models, $y_i = x_i^T\beta + g(t_i) + \varepsilon_i$, $1 \le i \le n$, where $\{\varepsilon_i, i = 1,\ldots,n\}$ are i.i.d. random errors with mean 0 and positive finite variance $\sigma^2$. Following the ideas of Allan Gut and Aurel Spataru [7,8] and Zhang [21] on precise asymptotics in the Baum-Katz and Davis laws of large numbers and on precise rates in laws of the iterated logarithm, respectively, and subject to some regularity conditions, we obtain the corresponding results in partially linear regression models.

  10. Improved Systematic Pointing Error Model for the DSN Antennas

    Science.gov (United States)

    Rochblatt, David J.; Withington, Philip M.; Richter, Paul H.

    2011-01-01

    New pointing models have been developed for large reflector antennas whose construction is founded on elevation over azimuth mount. At JPL, the new models were applied to the Deep Space Network (DSN) 34-meter antennas' subnet for corrections of their systematic pointing errors; it achieved significant improvement in performance at Ka-band (32-GHz) and X-band (8.4-GHz). The new models provide pointing improvements relative to the traditional models by a factor of two to three, which translate to approximately 3-dB performance improvement at Ka-band. For radio science experiments where blind pointing performance is critical, the new innovation provides a new enabling technology. The model extends the traditional physical models with higher-order mathematical terms, thereby increasing the resolution of the model for a better fit to the underlying systematic imperfections that are the cause of antenna pointing errors. The philosophy of the traditional model was that all mathematical terms in the model must be traced to a physical phenomenon causing antenna pointing errors. The traditional physical terms are: antenna axis tilts, gravitational flexure, azimuth collimation, azimuth encoder fixed offset, azimuth and elevation skew, elevation encoder fixed offset, residual refraction, azimuth encoder scale error, and antenna pointing de-rotation terms for beam waveguide (BWG) antennas. Besides the addition of spherical harmonics terms, the new models differ from the traditional ones in that the coefficients for the cross-elevation and elevation corrections are completely independent and may be different, while in the traditional model, some of the terms are identical. In addition, the new software allows for all-sky or mission-specific model development, and can utilize the previously used model as an a priori estimate for the development of the updated models.

  11. Stochastic modelling and analysis of IMU sensor errors

    Science.gov (United States)

    Zaho, Y.; Horemuz, M.; Sjöberg, L. E.

    2011-12-01

    The performance of a GPS/INS integration system is greatly determined by the ability of the stand-alone INS system to determine position and attitude within a GPS outage. The positional and attitude precision degrades rapidly during GPS outages due to INS sensor errors. With the advantages of low price and small volume, Micro Electrical Mechanical Sensors (MEMS) have been widely used in GPS/INS integration. However, a standalone MEMS unit can keep a reasonable positional precision for only a few seconds due to systematic and random sensor errors. General stochastic error sources existing in inertial sensors can be modelled as Quantization Noise, Random Walk, Bias Instability, Rate Random Walk and Rate Ramp (IEEE STD 647, 2006). Here we apply different methods to analyze the stochastic sensor errors, i.e. autoregressive modelling, Gauss-Markov process, Power Spectral Density and Allan Variance. Tests on a MEMS based inertial measurement unit were then carried out with these methods. The results show that the different methods give similar estimates of the stochastic error model parameters. These values can be used further in the Kalman filter for better navigation accuracy and in the Doppler frequency estimate for faster acquisition after a GPS signal outage.
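
    One of the analysis methods named above, the Allan variance, is easy to sketch. The snippet below computes a non-overlapping Allan variance for a synthetic gyro rate record (the sampling rate, noise level and bias are illustrative assumptions); on a log-log plot, the slope of the Allan deviation versus averaging time identifies which error process dominates at each time scale.

```python
import numpy as np

def allan_variance(rate, fs, cluster_sizes):
    """Non-overlapping Allan variance of a rate signal sampled at fs (Hz)."""
    avars, taus = [], []
    for m in cluster_sizes:
        n_clusters = len(rate) // m
        if n_clusters < 2:
            break
        # Average the rate over clusters of m consecutive samples.
        means = rate[: n_clusters * m].reshape(n_clusters, m).mean(axis=1)
        avars.append(0.5 * np.mean(np.diff(means) ** 2))
        taus.append(m / fs)
    return np.array(taus), np.array(avars)

# Synthetic gyro record: white noise (angle random walk) plus a constant bias (assumed values).
fs = 100.0                           # assumed sampling rate, Hz
rng = np.random.default_rng(1)
rate = 0.01 + 0.1 * rng.standard_normal(int(3600 * fs))

taus, avars = allan_variance(rate, fs,
                             cluster_sizes=np.unique(np.logspace(0, 4, 30, dtype=int)))
# Where angle random walk dominates, sqrt(avars) vs taus has slope -1/2 on a log-log plot.
print(np.column_stack([taus[:5], np.sqrt(avars[:5])]))
```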

  12. Application of variance components estimation to calibrate geoid error models.

    Science.gov (United States)

    Guo, Dong-Mei; Xu, Hou-Ze

    2015-01-01

    The method of using Global Positioning System-leveling data to obtain orthometric heights has been well studied. A simple formulation of the weighted least squares problem has been presented in an earlier work. This formulation allows one to directly employ errors-in-variables models that completely describe the covariance matrices of the observables. However, an important question, namely what accuracy level can actually be achieved, has yet to be satisfactorily answered by this traditional formulation. One of the main reasons for this is the incorrectness of the stochastic models used in the adjustment, which in turn leaves room for improving the stochastic models of the measurement noises. Therefore the determination of the stochastic model of the observables in the combined adjustment with heterogeneous height types is the main focus of this paper. Firstly, the well-known method of variance component estimation is employed to calibrate the errors of heterogeneous height data in a combined least squares adjustment of ellipsoidal, orthometric and gravimetric geoid heights. Specifically, iterative algorithms of minimum norm quadratic unbiased estimation are used to estimate the variance components for each type of heterogeneous observation. Secondly, two different statistical models are presented to illustrate the theory. The first method directly uses the errors-in-variables as a priori covariance matrices, and the second method analyzes the biases of the variance components and then proposes bias-corrected variance component estimators. Several numerical test results show the capability and effectiveness of the variance component estimation procedure in the combined adjustment for calibrating the geoid error model.

  13. Multiple Linear Regression for Reconstruction of Gene Regulatory Networks in Solving Cascade Error Problems

    Directory of Open Access Journals (Sweden)

    Faridah Hani Mohamed Salleh

    2017-01-01

    Full Text Available Gene regulatory network (GRN) reconstruction is the process of identifying regulatory gene interactions from experimental data through computational analysis. One of the main reasons for the reduced performance of previous GRN methods has been inaccurate prediction of cascade motifs. Cascade error is defined as the wrong prediction of cascade motifs, where an indirect interaction is misinterpreted as a direct interaction. Despite the active research on various GRN prediction methods, the discussion on specific methods to solve problems related to cascade errors is still lacking. In fact, the experiments conducted by past studies were not specifically geared towards proving the ability of GRN prediction methods to avoid the occurrence of cascade errors. Hence, this research aims to propose Multiple Linear Regression (MLR) to infer GRNs from gene expression data and to avoid wrongly inferring an indirect interaction (A → B → C) as a direct interaction (A → C). Since the number of observations in the real experimental datasets was far less than the number of predictors, some predictors were eliminated by extracting random subnetworks from global interaction networks via an established extraction method. In addition, the experiment was extended to assess the effectiveness of MLR in dealing with cascade error by using a novel experimental procedure proposed in this work. The experiment revealed that the number of cascade errors was very minimal. Apart from that, the Belsley collinearity test proved that multicollinearity did affect the datasets used in this experiment greatly. All the tested subnetworks obtained satisfactory results, with AUROC values above 0.5.
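
    The core idea can be illustrated in a few lines (this is a schematic of the general MLR approach, not the authors' pipeline, and the expression data below are synthetic): each gene is regressed on all remaining genes, so an indirect effect such as A → B → C is absorbed by the direct regulator B and the spurious edge A → C receives a near-zero coefficient.

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic expression matrix: 50 samples x 4 genes with a cascade A -> B -> C and an unrelated D.
n = 50
A = rng.standard_normal(n)
B = 0.8 * A + 0.3 * rng.standard_normal(n)
C = 0.8 * B + 0.3 * rng.standard_normal(n)
D = rng.standard_normal(n)
X = np.column_stack([A, B, C, D])
genes = ["A", "B", "C", "D"]

# Multiple linear regression: model each gene as a linear function of all the other genes.
for j, target in enumerate(genes):
    predictors = np.delete(X, j, axis=1)
    design = np.column_stack([np.ones(n), predictors])
    coef, *_ = np.linalg.lstsq(design, X[:, j], rcond=None)
    others = [g for g in genes if g != target]
    print(target, "<-", dict(zip(others, np.round(coef[1:], 2))))
# Because B is included as a predictor of C, the indirect effect of A on C is absorbed
# by B, so the spurious direct edge A -> C receives a near-zero coefficient.
```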

  14. Error Modelling and Experimental Validation for a Planar 3-PPR Parallel Manipulator

    DEFF Research Database (Denmark)

    Wu, Guanglei; Bai, Shaoping; Kepler, Jørgen Asbøl

    2011-01-01

    In this paper, the positioning error of a 3-PPR planar parallel manipulator is studied with an error model and experimental validation. First, the displacement and workspace are analyzed. An error model considering both configuration errors and joint clearance errors is established. Using this model, the maximum positioning error was estimated for a U-shape PPR planar manipulator, and the results were compared with experimental measurements. It is found that the error distribution from the simulation approximates that of the measurements.

  16. Trans-dimensional inversion of microtremor array dispersion data with hierarchical autoregressive error models

    Science.gov (United States)

    Dettmer, Jan; Molnar, Sheri; Steininger, Gavin; Dosso, Stan E.; Cassidy, John F.

    2012-02-01

    This paper applies a general trans-dimensional Bayesian inference methodology and hierarchical autoregressive data-error models to the inversion of microtremor array dispersion data for shear wave velocity (vs) structure. This approach accounts for the limited knowledge of the optimal earth model parametrization (e.g. the number of layers in the vs profile) and of the data-error statistics in the resulting vs parameter uncertainty estimates. The assumed earth model parametrization influences estimates of parameter values and uncertainties due to different parametrizations leading to different ranges of data predictions. The support of the data for a particular model is often non-unique and several parametrizations may be supported. A trans-dimensional formulation accounts for this non-uniqueness by including a model-indexing parameter as an unknown so that groups of models (identified by the indexing parameter) are considered in the results. The earth model is parametrized in terms of a partition model with interfaces given over a depth-range of interest. In this work, the number of interfaces (layers) in the partition model represents the trans-dimensional model indexing. In addition, serial data-error correlations are addressed by augmenting the geophysical forward model with a hierarchical autoregressive error model that can account for a wide range of error processes with a small number of parameters. Hence, the limited knowledge about the true statistical distribution of data errors is also accounted for in the earth model parameter estimates, resulting in more realistic uncertainties and parameter values. Hierarchical autoregressive error models do not rely on point estimates of the model vector to estimate data-error statistics, and have no requirement for computing the inverse or determinant of a data-error covariance matrix. This approach is particularly useful for trans-dimensional inverse problems, as point estimates may not be representative of the

  17. Doubly-Latent Models of School Contextual Effects: Integrating Multilevel and Structural Equation Approaches to Control Measurement and Sampling Error

    Science.gov (United States)

    Marsh, Herbert W.; Ludtke, Oliver; Robitzsch, Alexander; Trautwein, Ulrich; Asparouhov, Tihomir; Muthen, Bengt; Nagengast, Benjamin

    2009-01-01

    This article is a methodological-substantive synergy. Methodologically, we demonstrate latent-variable contextual models that integrate structural equation models (with multiple indicators) and multilevel models. These models simultaneously control for and unconfound measurement error due to sampling of items at the individual (L1) and group (L2)…

  18. EMPIRICAL LIKELIHOOD FOR LINEAR MODELS UNDER m-DEPENDENT ERRORS

    Institute of Scientific and Technical Information of China (English)

    Qin Yongsong; Jiang Bo; Li Yufang

    2005-01-01

    In this paper,the empirical likelihood confidence regions for the regression coefficient in a linear model are constructed under m-dependent errors. It is shown that the blockwise empirical likelihood is a good way to deal with dependent samples.

  19. Bayesian network models for error detection in radiotherapy plans.

    Science.gov (United States)

    Kalet, Alan M; Gennari, John H; Ford, Eric C; Phillips, Mark H

    2015-04-07

    The purpose of this study is to design and develop a probabilistic network for detecting errors in radiotherapy plans for use at the time of initial plan verification. Our group has initiated a multi-pronged approach to reduce these errors. We report on our development of Bayesian models of radiotherapy plans. Bayesian networks consist of joint probability distributions that define the probability of one event, given some set of other known information. Using the networks, we find the probability of obtaining certain radiotherapy parameters, given a set of initial clinical information. A low probability in a propagated network then corresponds to potential errors to be flagged for investigation. To build our networks we first interviewed medical physicists and other domain experts to identify the relevant radiotherapy concepts and their associated interdependencies and to construct a network topology. Next, to populate the network's conditional probability tables, we used the Hugin Expert software to learn parameter distributions from a subset of de-identified data derived from a radiation oncology based clinical information database system. These data represent 4990 unique prescription cases over a 5 year period. Under test case scenarios with approximately 1.5% introduced error rates, network performance produced areas under the ROC curve of 0.88, 0.98, and 0.89 for the lung, brain and female breast cancer error detection networks, respectively. Comparison of the brain network to human experts' performance (AUC of 0.90 ± 0.01) shows the Bayes network model performs better than domain experts under the same test conditions. Our results demonstrate the feasibility and effectiveness of comprehensive probabilistic models as part of decision support systems for improved detection of errors in initial radiotherapy plan verification procedures.
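
    A toy version of the flagging logic is sketched below: given a learned conditional probability table, a plan parameter whose probability falls below a threshold given the clinical context is flagged for human review. The table structure, probabilities and threshold here are invented for illustration and are far simpler than the clinical networks described above.

```python
# Toy conditional probability table: P(prescribed dose level | treatment site).
# All values are invented for illustration.
cpt = {
    "lung":   {"low": 0.15, "standard": 0.80, "high": 0.05},
    "breast": {"low": 0.05, "standard": 0.90, "high": 0.05},
}

def check_plan(site, dose_level, threshold=0.10):
    """Flag a plan parameter whose conditional probability falls below the threshold."""
    p = cpt[site].get(dose_level, 0.0)
    status = "FLAG for review" if p < threshold else "ok"
    return p, status

for site, dose in [("lung", "standard"), ("breast", "high")]:
    p, status = check_plan(site, dose)
    print(f"site={site:6s} dose={dose:8s} P={p:.2f} -> {status}")
```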

  20. Incorporating experimental design and error into coalescent/mutation models of population history.

    Science.gov (United States)

    Knudsen, Bjarne; Miyamoto, Michael M

    2007-08-01

    Coalescent theory provides a powerful framework for estimating the evolutionary, demographic, and genetic parameters of a population from a small sample of individuals. Current coalescent models have largely focused on population genetic factors (e.g., mutation, population growth, and migration) rather than on the effects of experimental design and error. This study develops a new coalescent/mutation model that accounts for unobserved polymorphisms due to missing data, sequence errors, and multiple reads for diploid individuals. The importance of accommodating these effects of experimental design and error is illustrated with evolutionary simulations and a real data set from a population of the California sea hare. In particular, a failure to account for sequence errors can lead to overestimated mutation rates, inflated coalescent times, and inappropriate conclusions about the population. This current model can now serve as a starting point for the development of newer models with additional experimental and population genetic factors. It is currently implemented as a maximum-likelihood method, but this model may also serve as the basis for the development of Bayesian approaches that incorporate experimental design and error.

  1. Error sensitivity analysis in 10-30-day extended range forecasting by using a nonlinear cross-prediction error model

    Science.gov (United States)

    Xia, Zhiye; Xu, Lisheng; Chen, Hongbin; Wang, Yongqian; Liu, Jinbao; Feng, Wenlan

    2017-06-01

    Extended range forecasting of 10-30 days, which lies between medium-term and climate prediction in terms of timescale, plays a significant role in decision-making processes for the prevention and mitigation of disastrous meteorological events. The sensitivity of initial error, model parameter error, and random error in a nonlinear cross-prediction error (NCPE) model, and their stability in the prediction validity period in 10-30-day extended range forecasting, are analyzed quantitatively. The associated sensitivity of precipitable water, temperature, and geopotential height during cases of heavy rain and hurricane is also discussed. The results are summarized as follows. First, the initial error and random error interact. When the ratio of random error to initial error is small (10^-6 to 10^-2), minor variation in random error cannot significantly change the dynamic features of a chaotic system, and therefore random error has minimal effect on the prediction. When the ratio is in the range of 10^-1 to 2 (i.e., random error dominates), attention should be paid to the random error instead of only the initial error. When the ratio is around 10^-2 to 10^-1, both influences must be considered. Their mutual effects may bring considerable uncertainty to extended range forecasting, and de-noising is therefore necessary. Second, in terms of model parameter error, the embedding dimension m should be determined by the actual nonlinear time series. The dynamic features of a chaotic system cannot be depicted because of the incomplete structure of the attractor when m is small. When m is large, prediction indicators can vanish because of the scarcity of phase points in phase space. A method for overcoming the cut-off effect (m > 4) is proposed. Third, for heavy rains, precipitable water is more sensitive to the prediction validity period than temperature or geopotential height; however, for hurricanes, geopotential height is most sensitive, followed by precipitable water.

  2. Multivariate DCC-GARCH Model: -With Various Error Distributions

    OpenAIRE

    Orskaug, Elisabeth

    2009-01-01

    In this thesis we have studied the DCC-GARCH model with Gaussian, Student's t and skew Student's t-distributed errors. For a basic understanding of the GARCH model, the univariate GARCH and multivariate GARCH models in general were discussed before the DCC-GARCH model was considered. The maximum likelihood method is used to estimate the parameters. The estimation of the correctly specified likelihood is difficult, and hence the DCC model was designed to allow for two-stage estimation ...
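
    For readers unfamiliar with the univariate building block, a minimal GARCH(1,1) simulation with Gaussian errors is sketched below (parameter values are illustrative). The DCC stage, which couples several such univariate volatility models through a dynamic conditional correlation matrix and is estimated in a second step, is not shown.

```python
import numpy as np

def simulate_garch11(omega, alpha, beta, n, seed=0):
    """Simulate r_t = sigma_t * z_t with sigma_t^2 = omega + alpha*r_{t-1}^2 + beta*sigma_{t-1}^2."""
    rng = np.random.default_rng(seed)
    r = np.zeros(n)
    sigma2 = np.zeros(n)
    sigma2[0] = omega / (1.0 - alpha - beta)   # unconditional variance as starting value
    r[0] = np.sqrt(sigma2[0]) * rng.standard_normal()
    for t in range(1, n):
        sigma2[t] = omega + alpha * r[t - 1] ** 2 + beta * sigma2[t - 1]
        r[t] = np.sqrt(sigma2[t]) * rng.standard_normal()
    return r, sigma2

# Illustrative parameters (alpha + beta < 1 ensures covariance stationarity).
returns, cond_var = simulate_garch11(omega=0.05, alpha=0.08, beta=0.90, n=1000)
print(returns[:5], cond_var[:5])
```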

  3. Error Assessment in Modeling with Fractal Brownian Motions

    CERN Document Server

    Qiao, Bingqiang

    2013-01-01

    To model a given time series $F(t)$ with fractal Brownian motions (fBms), it is necessary to have appropriate error assessment for related quantities. Usually the fractal dimension $D$ is derived from the Hurst exponent $H$ via the relation $D=2-H$, and the Hurst exponent can be evaluated by analyzing the dependence of the rescaled range $\langle|F(t+\tau)-F(t)|\rangle$ on the time span $\tau$. For fBms, the error of the rescaled range not only depends on data sampling but also varies with $H$ due to the presence of long term memory. This error for a given time series then cannot be assessed without knowing the fractal dimension. We carry out extensive numerical simulations to explore the error of the rescaled range of fBms and find that for $0<H<1$ the error of $\langle|F(t+\tau)-F(t)|\rangle$ ... The e...
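
    The estimation step referred to above is easy to sketch: for an fBm, the mean absolute increment $\langle|F(t+\tau)-F(t)|\rangle$ scales as $\tau^H$, so a log-log fit of increment size against time span gives $H$ and hence $D = 2 - H$. The generator below builds an fBm by Cholesky factorisation of its covariance and is purely illustrative.

```python
import numpy as np

def fbm(n, hurst, seed=0):
    """Fractional Brownian motion on n points via Cholesky factorisation of its covariance."""
    t = np.arange(1, n + 1, dtype=float)
    cov = 0.5 * (t[:, None] ** (2 * hurst) + t[None, :] ** (2 * hurst)
                 - np.abs(t[:, None] - t[None, :]) ** (2 * hurst))
    L = np.linalg.cholesky(cov + 1e-10 * np.eye(n))
    return L @ np.random.default_rng(seed).standard_normal(n)

F = fbm(2000, hurst=0.7)

# Mean absolute increment as a function of the time span tau.
taus = np.array([1, 2, 4, 8, 16, 32, 64])
incr = np.array([np.mean(np.abs(F[tau:] - F[:-tau])) for tau in taus])

# Slope of the log-log fit estimates H; the fractal dimension follows as D = 2 - H.
H_est = np.polyfit(np.log(taus), np.log(incr), 1)[0]
print("estimated H:", round(H_est, 3), " D:", round(2 - H_est, 3))
```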

  4. An Empirical Point Error Model for TLS Derived Point Clouds

    Science.gov (United States)

    Ozendi, Mustafa; Akca, Devrim; Topan, Hüseyin

    2016-06-01

    The random error pattern of point clouds has a significant effect on the quality of the final 3D model. The magnitude and distribution of random errors should be modelled numerically. This work aims at developing such an anisotropic point error model, specifically for terrestrial laser scanner (TLS) acquired 3D point clouds. A priori precisions of the basic TLS observations, which are the range, horizontal angle and vertical angle, are determined by predefined and practical measurement configurations, performed in real-world test environments. The a priori precisions of the horizontal (σ_θ) and vertical (σ_α) angles are constant for each point of a data set, and can directly be determined through repetitive scanning of the same environment. In our practical tests, the precisions of the horizontal and vertical angles were found to be σ_θ = ±36.6 cc and σ_α = ±17.8 cc, respectively. On the other hand, the a priori precision of the range observation (σ_ρ) is assumed to be a function of range, incidence angle of the incoming laser ray, and reflectivity of the object surface. Hence, it is a variable, and is computed for each point individually by employing an empirically developed formula varying as σ_ρ = ±2-12 mm for a FARO Focus X330 laser scanner. This procedure was followed by the computation of error ellipsoids for each point using the law of variance-covariance propagation. The direction and size of the error ellipsoids were computed by the principal components transformation. The usability and feasibility of the model were investigated in real world scenarios. These investigations validated the suitability and practicality of the proposed method.
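
    The propagation step can be sketched compactly: the a priori precisions of the range and the two angles are pushed through the spherical-to-Cartesian conversion with the law of variance-covariance propagation, and the error ellipsoid axes follow from an eigen-decomposition of the resulting covariance matrix. The numerical values below are illustrative placeholders, not the calibration results quoted above.

```python
import numpy as np

def point_error_ellipsoid(rho, theta, alpha, s_rho, s_theta, s_alpha):
    """Covariance and error-ellipsoid axes of a TLS point (range, horizontal angle, vertical angle)."""
    # Spherical -> Cartesian: x = rho cos(a) cos(t), y = rho cos(a) sin(t), z = rho sin(a)
    J = np.array([
        [np.cos(alpha) * np.cos(theta), -rho * np.cos(alpha) * np.sin(theta), -rho * np.sin(alpha) * np.cos(theta)],
        [np.cos(alpha) * np.sin(theta),  rho * np.cos(alpha) * np.cos(theta), -rho * np.sin(alpha) * np.sin(theta)],
        [np.sin(alpha),                  0.0,                                  rho * np.cos(alpha)],
    ])
    C_obs = np.diag([s_rho**2, s_theta**2, s_alpha**2])
    C_xyz = J @ C_obs @ J.T                      # law of variance-covariance propagation
    eigval, eigvec = np.linalg.eigh(C_xyz)
    return C_xyz, np.sqrt(eigval), eigvec        # semi-axes = sqrt of eigenvalues

# Illustrative values: 30 m range, 5 mm range precision, 10" angular precisions (converted to rad).
arcsec = np.pi / (180 * 3600)
C, axes, _ = point_error_ellipsoid(rho=30.0, theta=0.3, alpha=0.1,
                                   s_rho=0.005, s_theta=10 * arcsec, s_alpha=10 * arcsec)
print("semi-axes of the error ellipsoid (m):", np.round(axes, 5))
```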

  5. MODELING OF MANUFACTURING ERRORS FOR PIN-GEAR ELEMENTS OF PLANETARY GEARBOX

    Directory of Open Access Journals (Sweden)

    Ivan M. Egorov

    2014-11-01

    Full Text Available The theoretical background for the calculation of k-h-v type cycloid reducers was developed relatively long ago. However, recently the matter of cycloid reducer design has again attracted heightened attention. The reason is that such devices are used in many complex engineering systems, particularly in mechatronic and robotic systems. The development of advanced technological capabilities for manufacturing such reducers today makes it possible to realize the essential features of these devices: high efficiency, high gear ratio, kinematic accuracy and smooth motion. The presence of an adequate mathematical model makes it possible to adjust the kinematic accuracy of the reducer by rational selection of manufacturing tolerances for its parts, and thus to automate the design process for cycloid reducers taking into account various factors, including technological ones. A mathematical model and technique have been developed that allow the kinematic error of the reducer to be modelled with account taken of multiple factors, including manufacturing errors. The errors are considered in a way convenient for predicting kinematic accuracy early at the manufacturing stage, based on the results of measuring the reducer parts on coordinate measuring machines. In the modelling, the wheel manufacturing errors are described by the eccentricity and radius deviation of the pin tooth centers circle, and the deviation between the pin tooth axes positions and the centers circle. The satellite manufacturing errors are described by the satellite eccentricity deviation and the satellite rim eccentricity. Due to collinearity, the pin tooth and pin tooth hole diameter errors and the satellite tooth profile errors for a designated contact point are integrated into one deviation. Software implementation of the model makes it possible to estimate the influence of these errors on the satellite rotation angle error and

  6. Experimental study of a multiplicative model of multiple ionospheric reflections

    Science.gov (United States)

    Mirkotan, S. F.; Zhuravlev, S. V.; Kosovtsov, Iu. N.

    1983-04-01

    An important parameter of a partially scattered ionospheric signal is the signal-noise energy parameter beta. A new method for determining beta sub n (where n is the multiplicity of reflection) has been proposed on the basis of the statistical multiplicative model of Mirkotan et al. (1981, 1982). This paper describes an experimental verification of the proposed method; data on beta sub n obtained by the traditional method and by the new method are compared. In addition, the validity of the multiplicative model is evaluated, and features of the mechanism responsible for the multiple scattering of an ionospheric signal are examined.

  7. A Model for Geometry-Dependent Errors in Length Artifacts.

    Science.gov (United States)

    Sawyer, Daniel; Parry, Brian; Phillips, Steven; Blackburn, Chris; Muralikrishnan, Bala

    2012-01-01

    We present a detailed model of dimensional changes in long length artifacts, such as step gauges and ball bars, due to bending under gravity. The comprehensive model is based on evaluation of the gauge points relative to the neutral bending surface. It yields the errors observed when the gauge points are located off the neutral bending surface of a bar or rod but also reveals the significant error associated with out-of-straightness of a bar or rod even if the gauge points are located in the neutral bending surface. For example, one experimental result shows a length change of greater than 1.5 µm on a 1 m ball bar with an out-of-straightness of 0.4 mm. This and other results are in agreement with the model presented in this paper.

  8. Approximation error in PDE-based modelling of vehicular platoons

    Science.gov (United States)

    Hao, He; Barooah, Prabir

    2012-08-01

    We study the problem of how much error is introduced in approximating the dynamics of a large vehicular platoon by using a partial differential equation, as was done in Barooah, Mehta, and Hespanha [Barooah, P., Mehta, P.G., and Hespanha, J.P. (2009), 'Mistuning-based Decentralised Control of Vehicular Platoons for Improved Closed Loop Stability', IEEE Transactions on Automatic Control, 54, 2100-2113], Hao, Barooah, and Mehta [Hao, H., Barooah, P., and Mehta, P.G. (2011), 'Stability Margin Scaling Laws of Distributed Formation Control as a Function of Network Structure', IEEE Transactions on Automatic Control, 56, 923-929]. In particular, we examine the difference between the stability margins of the coupled-ordinary differential equations (ODE) model and its partial differential equation (PDE) approximation, which we call the approximation error. The stability margin is defined as the absolute value of the real part of the least stable pole. The PDE model has proved useful in the design of distributed control schemes (Barooah et al. 2009; Hao et al. 2011); it provides insight into the effect of gains of local controllers on the closed-loop stability margin that is lacking in the coupled-ODE model. Here we show that the ratio of the approximation error to the stability margin is O(1/N), where N is the number of vehicles. Thus, the PDE model is an accurate approximation of the coupled-ODE model when N is large. Numerical computations are provided to corroborate the analysis.

  9. Identifying errors in dust models from data assimilation.

    Science.gov (United States)

    Pope, R J; Marsham, J H; Knippertz, P; Brooks, M E; Roberts, A J

    2016-09-16

    Airborne mineral dust is an important component of the Earth system and is increasingly predicted prognostically in weather and climate models. The recent development of data assimilation for remotely sensed aerosol optical depths (AODs) into models offers a new opportunity to better understand the characteristics and sources of model error. Here we examine assimilation increments from Moderate Resolution Imaging Spectroradiometer AODs over northern Africa in the Met Office global forecast model. The model underpredicts (overpredicts) dust in light (strong) winds, consistent with (submesoscale) mesoscale processes lifting dust in reality but being missed by the model. Dust is overpredicted in the Sahara and underpredicted in the Sahel. Using observations of lightning and rain, we show that haboobs (cold pool outflows from moist convection) are an important dust source in reality but are badly handled by the model's convection scheme. The approach shows promise to serve as a useful framework for future model development.

  10. Decision aids for multiple-decision disease management as affected by weather input errors.

    Science.gov (United States)

    Pfender, W F; Gent, D H; Mahaffee, W F; Coop, L B; Fox, A D

    2011-06-01

    Many disease management decision support systems (DSSs) rely, exclusively or in part, on weather inputs to calculate an indicator for disease hazard. Error in the weather inputs, typically due to forecasting, interpolation, or estimation from off-site sources, may affect model calculations and management decision recommendations. The extent to which errors in weather inputs affect the quality of the final management outcome depends on a number of aspects of the disease management context, including whether management consists of a single dichotomous decision, or of a multi-decision process extending over the cropping season(s). Decision aids for multi-decision disease management typically are based on simple or complex algorithms of weather data which may be accumulated over several days or weeks. It is difficult to quantify accuracy of multi-decision DSSs due to temporally overlapping disease events, existence of more than one solution to optimizing the outcome, opportunities to take later recourse to modify earlier decisions, and the ongoing, complex decision process in which the DSS is only one component. One approach to assessing importance of weather input errors is to conduct an error analysis in which the DSS outcome from high-quality weather data is compared with that from weather data with various levels of bias and/or variance from the original data. We illustrate this analytical approach for two types of DSS, an infection risk index for hop powdery mildew and a simulation model for grass stem rust. Further exploration of analysis methods is needed to address problems associated with assessing uncertainty in multi-decision DSSs.

  11. Error checking and graphical representation of multiple-complete-digest (MCD) restriction-fragment maps.

    Science.gov (United States)

    Thayer, E C; Olson, M V; Karp, R M

    1999-01-01

    Genetic and physical maps display the relative positions of objects or markers occurring within a target DNA molecule. In constructing maps, the primary objective is to determine the ordering of these objects. A further objective is to assign a coordinate to each object, indicating its distance from a reference end of the target molecule. This paper describes a computational method and a body of software for assigning coordinates to map objects, given a solution or partial solution to the ordering problem. We describe our method in the context of multiple-complete-digest (MCD) mapping, but it should be applicable to a variety of other mapping problems. Because of errors in the data or insufficient clone coverage to uniquely identify the true ordering of the map objects, a partial ordering is typically the best one can hope for. Once a partial ordering has been established, one often seeks to overlay a metric along the map to assess the distances between the map objects. This problem often proves intractable because of data errors such as erroneous local length measurements (e.g., large clone lengths on low-resolution physical maps). We present a solution to the coordinate assignment problem for MCD restriction-fragment mapping, in which a coordinated set of single-enzyme restriction maps are simultaneously constructed. We show that the coordinate assignment problem can be expressed as the solution of a system of linear constraints. If the linear system is free of inconsistencies, it can be solved using the standard Bellman-Ford algorithm. In the more typical case where the system is inconsistent, our program perturbs it to find a new consistent system of linear constraints, close to those of the given inconsistent system, using a modified Bellman-Ford algorithm. Examples are provided of simple map inconsistencies and the methods by which our program detects candidate data errors and directs the user to potential suspect regions of the map.
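
    The coordinate-assignment idea described above is the classic difference-constraint formulation: each constraint of the form x_v − x_u ≤ w becomes an edge u → v with weight w, shortest-path distances from a virtual source give a feasible set of coordinates, and a negative cycle signals an inconsistency that must be perturbed. The sketch below shows that core step only, with made-up constraints, and is not the authors' MCD software.

```python
def solve_difference_constraints(n, constraints):
    """Constraints are tuples (u, v, w) meaning x[v] - x[u] <= w; returns coordinates or None."""
    INF = float("inf")
    # Virtual source (node n) with zero-weight edges to every node.
    edges = list(constraints) + [(n, v, 0) for v in range(n)]
    dist = [INF] * n + [0]
    for _ in range(n):                       # n+1 nodes -> n relaxation rounds (Bellman-Ford)
        changed = False
        for u, v, w in edges:
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
                changed = True
        if not changed:
            break
    # One more pass: any further improvement means a negative cycle (inconsistent constraints).
    for u, v, w in edges:
        if dist[u] + w < dist[v]:
            return None
    return dist[:n]

# Three map objects: x1 - x0 <= 5, x0 - x1 <= -3 (i.e. x1 - x0 >= 3), x2 - x1 <= 4.
print(solve_difference_constraints(3, [(0, 1, 5), (1, 0, -3), (1, 2, 4)]))
```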

  12. Testing and Inference in Nonlinear Cointegrating Vector Error Correction Models

    DEFF Research Database (Denmark)

    Kristensen, Dennis; Rahbek, Anders

    In this paper, we consider a general class of vector error correction models which allow for asymmetric and non-linear error correction. We provide asymptotic results for (quasi-)maximum likelihood (QML) based estimators and tests. General hypothesis testing is considered, where testing for linearity is of particular interest as parameters of non-linear components vanish under the null. To solve the latter type of testing, we use the so-called sup tests, which here requires development of new (uniform) weak convergence results. These results are potentially useful in general for analysis of non-stationary non-linear time series models. Thus the paper provides a full asymptotic theory for estimators as well as standard and non-standard test statistics. The derived asymptotic results prove to be new compared to results found elsewhere in the literature due to the impact of the estimated

  13. Implementation of multiple 3D scans for error calculation on object digital reconstruction

    Directory of Open Access Journals (Sweden)

    Sidiropoulos Andreas

    2017-01-01

    Full Text Available Laser scanning is a widespread methodology for visualizing the natural environment and the man-made structures in it. Laser scanners digitize reality by making highly accurate measurements, from which they create a set of points in 3D space, called a point cloud, that depicts an entire area or object or parts of them. Triangulation laser scanners are based on triangulation geometry and are mainly used to digitize hand-held objects at very close range. In many cases, users of such devices take for granted the accuracy specifications provided by the laser scanner manufacturers and the respective software, and for many applications this is enough. In this paper we use point clouds of two cubes, geometrically similar to each other but differing in material, collected by a triangulation laser scanner under a repetition method. At first, the data of each repetition are compared to each other to examine the consistency of the scanner under multiple measurements of the same scene. Then, the geometry of the objects is reconstructed and the results are compared to data derived from a digital caliper. The errors of the calculated dimensions were estimated by use of the law of error propagation.

  14. On Measurement of Efficiency of Cobb-Douglas Production Function with Additive and Multiplicative Errors

    Directory of Open Access Journals (Sweden)

    Md. Moyazzem Hossain

    2015-02-01

    Full Text Available In developing countries, the efficiency of economic development is assessed through the analysis of industrial production. An examination of the characteristics of the industrial sector is an essential aspect of growth studies. Most developed countries are highly industrialized, in line with the maxim "the more industrialization, the more development". For proper industrialization and industrial development we have to study the industrial input-output relationship, which leads to production analysis. For a number of reasons, econometricians believe that industrial production is the most important component of economic development: if domestic industrial production increases, GDP will increase; if the elasticity of labor is higher, employment rates will increase; and investment will increase if the elasticity of capital is higher. In this regard, this paper should be helpful in suggesting the most suitable Cobb-Douglas production function to forecast the production process for some selected manufacturing industries of developing countries like Bangladesh. This paper chooses the appropriate Cobb-Douglas function, which gives the optimal combination of inputs, that is, the combination that enables the desired level of output to be produced with minimum cost and hence with maximum profitability, for some selected manufacturing industries of Bangladesh over the period 1978-79 to 2011-2012. The estimated results show that the estimates of both capital and labor elasticity of the Cobb-Douglas production function with additive errors are more efficient than those of the Cobb-Douglas production function with multiplicative errors.
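
    The additive/multiplicative distinction at the heart of the abstract can be made concrete. With multiplicative errors, Q = A·K^α·L^β·exp(ε) becomes linear after taking logs and is fit by ordinary least squares; with additive errors, Q = A·K^α·L^β + ε, the elasticities must be estimated by nonlinear least squares. The sketch below fits both specifications to synthetic data, not the Bangladeshi industry series.

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(7)
n = 200
K = rng.uniform(1.0, 10.0, n)                 # capital input (synthetic)
L = rng.uniform(1.0, 10.0, n)                 # labour input (synthetic)
A, alpha, beta = 2.0, 0.4, 0.6
Q = A * K**alpha * L**beta * np.exp(0.05 * rng.standard_normal(n))

# Multiplicative-error specification: log Q = log A + alpha*log K + beta*log L + eps (OLS on logs).
X = np.column_stack([np.ones(n), np.log(K), np.log(L)])
coef, *_ = np.linalg.lstsq(X, np.log(Q), rcond=None)
print("multiplicative (log-OLS): A=%.2f alpha=%.2f beta=%.2f"
      % (np.exp(coef[0]), coef[1], coef[2]))

# Additive-error specification: Q = A*K^alpha*L^beta + eps (nonlinear least squares).
def cobb_douglas(inputs, A, alpha, beta):
    K, L = inputs
    return A * K**alpha * L**beta

popt, _ = curve_fit(cobb_douglas, (K, L), Q, p0=[1.0, 0.5, 0.5])
print("additive (nonlinear LS):  A=%.2f alpha=%.2f beta=%.2f" % tuple(popt))
```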

  15. Error field and magnetic diagnostic modeling for W7-X

    Energy Technology Data Exchange (ETDEWEB)

    Lazerson, Sam A. [PPPL]; Gates, David A. [PPPL]; Neilson, George H. [PPPL]; Otte, M.; Bozhenkov, S.; Pedersen, T. S.; Geiger, J.; Lore, J.

    2014-07-01

    The prediction, detection, and compensation of error fields for the W7-X device will play a key role in achieving a high beta (β = 5%), steady state (30 minute pulse) operating regime utilizing the island divertor system [1]. Additionally, detection and control of the equilibrium magnetic structure in the scrape-off layer will be necessary in the long-pulse campaign, as bootstrap current evolution may result in poor edge magnetic structure [2]. An SVD analysis of the magnetic diagnostics set indicates an ability to measure the toroidal current and stored energy, while profile variations go undetected in the magnetic diagnostics. An additional set of magnetic diagnostics is proposed which improves the ability to constrain the equilibrium current and pressure profiles. However, even with the ability to accurately measure equilibrium parameters, the presence of error fields can modify both the plasma response and divertor magnetic field structures in unfavorable ways. Vacuum flux surface mapping experiments allow for direct measurement of these modifications to the magnetic structure. The ability to conduct such an experiment is a unique feature of stellarators. The trim coils may then be used to forward model the effect of an applied n = 1 error field. This allows the determination of lower limits for the detection of error field amplitude and phase using flux surface mapping. *Research supported by the U.S. DOE under Contract No. DE-AC02-09CH11466 with Princeton University.

  16. Errors Made by Elementary Fourth Grade Students When Modelling Word Problems and the Elimination of Those Errors through Scaffolding

    Science.gov (United States)

    Ulu, Mustafa

    2017-01-01

    This study aims to identify errors made by primary school students when modelling word problems and to eliminate those errors through scaffolding. A 10-question problem-solving achievement test was used in the research. The qualitative and quantitative designs were utilized together. The study group of the quantitative design comprises 248…

  17. Influence of model errors in optimal sensor placement

    Science.gov (United States)

    Vincenzi, Loris; Simonini, Laura

    2017-02-01

    The paper investigates the role of model errors and parametric uncertainties in optimal or near-optimal sensor placements for structural health monitoring (SHM) and modal testing. The near-optimal set of measurement locations is obtained by the Information Entropy theory; the results of the placement process depend considerably on the so-called covariance matrix of prediction error as well as on the definition of the correlation function. A constant and an exponential correlation function depending on the distance between sensors are firstly assumed; then a proposal depending on both distance and modal vectors is presented. With reference to a simple case study, the effect of model uncertainties on the results is described, and the reliability and robustness of the proposed correlation function in the case of model errors are tested with reference to 2D and 3D benchmark case studies. A measure of the quality of the obtained sensor configuration is considered through the use of independent assessment criteria. In conclusion, the results obtained by applying the proposed procedure to a real 5-span steel footbridge are described. The proposed method also allows higher modes to be estimated better when the number of sensors is greater than the number of modes of interest. In addition, the results show a smaller variation in the sensor positions when uncertainties occur.
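
    To make the role of the correlation function concrete, the snippet below assembles the prediction-error covariance matrix for a set of candidate sensor locations under three assumptions: constant correlation, exponential decay with distance, and an assumed distance-plus-mode-shape variant (not the paper's exact expression); the locations, mode shapes and numerical values are illustrative.

```python
import numpy as np

# Candidate sensor locations (m) along a beam and illustrative values of the first three mode shapes.
x = np.linspace(1.0, 9.0, 6)
modes = np.column_stack([np.sin(k * np.pi * x / 10.0) for k in (1, 2, 3)])
sigma = 0.05          # prediction-error standard deviation (assumed)
lam = 3.0             # correlation length (assumed)

d = np.abs(x[:, None] - x[None, :])

R_constant = np.ones_like(d)                 # errors fully correlated everywhere
R_exponential = np.exp(-d / lam)             # correlation decays with sensor distance

# Assumed distance-and-mode-shape variant: cosine similarity of the modal vectors
# at two locations, damped by the exponential distance term.
norms = np.linalg.norm(modes, axis=1)
cos_sim = (modes @ modes.T) / np.outer(norms, norms)
R_modal = np.exp(-d / lam) * cos_sim

for name, R in [("constant", R_constant), ("exponential", R_exponential), ("distance+modal", R_modal)]:
    cov = sigma**2 * R                       # covariance matrix of the prediction error
    print(f"{name:15s} first row: {np.round(cov[0], 5)}")
```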

  18. Topological quantum error correction in the Kitaev honeycomb model

    Science.gov (United States)

    Lee, Yi-Chan; Brell, Courtney G.; Flammia, Steven T.

    2017-08-01

    The Kitaev honeycomb model is an approximate topological quantum error correcting code in the same phase as the toric code, but requiring only a 2-body Hamiltonian. As a frustrated spin model, it is well outside the commuting models of topological quantum codes that are typically studied, but its exact solubility makes it more amenable to analysis of effects arising in this noncommutative setting than a generic topologically ordered Hamiltonian. Here we study quantum error correction in the honeycomb model using both analytic and numerical techniques. We first prove explicit exponential bounds on the approximate degeneracy, local indistinguishability, and correctability of the code space. These bounds are tighter than can be achieved using known general properties of topological phases. Our proofs are specialized to the honeycomb model, but some of the methods may nonetheless be of broader interest. Following this, we numerically study noise caused by thermalization processes in the perturbative regime close to the toric code renormalization group fixed point. The appearance of non-topological excitations in this setting has no significant effect on the error correction properties of the honeycomb model in the regimes we study. Although the behavior of this model is found to be qualitatively similar to that of the standard toric code in most regimes, we find numerical evidence of an interesting effect in the low-temperature, finite-size regime where a preferred lattice direction emerges and anyon diffusion is geometrically constrained. We expect this effect to yield an improvement in the scaling of the lifetime with system size as compared to the standard toric code.

  19. The propagation of inventory-based positional errors into statistical landslide susceptibility models

    Science.gov (United States)

    Steger, Stefan; Brenning, Alexander; Bell, Rainer; Glade, Thomas

    2016-12-01

    systematic comparisons of 12 models provided valuable evidence that the respective error-propagation was not only determined by the degree of positional inaccuracy inherent in the landslide data, but also by the spatial representation of landslides and the environment, landslide magnitude, the characteristics of the study area, the selected classification method and an interplay of predictors within multiple variable models. Based on the results, we deduced that a direct propagation of minor to moderate inventory-based positional errors into modelling results can be partly counteracted by adapting the modelling design (e.g. generalization of input data, opting for strongly generalizing classifiers). Since positional errors within landslide inventories are common and subsequent modelling and validation results are likely to be distorted, the potential existence of inventory-based positional inaccuracies should always be considered when assessing landslide susceptibility by means of empirical models.

  20. A Monte-Carlo Bayesian framework for urban rainfall error modelling

    Science.gov (United States)

    Ochoa Rodriguez, Susana; Wang, Li-Pen; Willems, Patrick; Onof, Christian

    2016-04-01

    Rainfall estimates of the highest possible accuracy and resolution are required for urban hydrological applications, given the small size and fast response which characterise urban catchments. While significant progress has been made in recent years towards meeting rainfall input requirements for urban hydrology - including increasing use of high spatial resolution radar rainfall estimates in combination with point rain gauge records - rainfall estimates will never be perfect and the true rainfall field is, by definition, unknown [1]. Quantifying the residual errors in rainfall estimates is crucial in order to understand their reliability, as well as the impact that their uncertainty may have on subsequent runoff estimates. The quantification of errors in rainfall estimates has been an active topic of research for decades. However, existing rainfall error models have several shortcomings, including the fact that they are limited to describing errors associated with a single data source (i.e. errors associated with rain gauge measurements or radar QPEs alone) and with a single representative error source (e.g. radar-rain gauge differences, spatial and temporal resolution). Moreover, rainfall error models have been mostly developed for and tested at large scales. Studies at urban scales are mostly limited to analyses of the propagation of errors in rain gauge records alone through urban drainage models, and to tests of model sensitivity to uncertainty arising from unmeasured rainfall variability. Only a few radar rainfall error models - originally developed for large scales - have been tested at urban scales [2] and have been shown to fail to capture small-scale storm dynamics well, including storm peaks, which are of utmost importance for urban runoff simulations. In this work a Monte-Carlo Bayesian framework for rainfall error modelling at urban scales is introduced, which explicitly accounts for relevant errors (arising from insufficient accuracy and/or resolution) in multiple data

  1. Robust Adaptive Beamforming for Multiple Signals of Interest with Cycle Frequency Error

    Directory of Open Access Journals (Sweden)

    Huang Chia-Cheng

    2010-01-01

    Full Text Available This paper deals with the problem of robust adaptive array beamforming by exploiting the signal cyclostationarity. Recently, a novel cyclostationarity-exploiting beamforming method was proposed by J.-H. Lee and C.-C. Huang (2009) for dealing with the situation of multiple signals of interest (SOIs), based on the LS-SCORE algorithm. This method is referred to as the multiple LS-SCORE (MLS-SCORE) algorithm. However, the MLS-SCORE algorithm suffers from severe performance degradation even if there is a small mismatch in the cycle frequencies of the SOIs. In this paper, we evaluate the performance of the MLS-SCORE algorithm in the presence of cycle frequency error (CFE). The output SINR of an adaptive beamformer using the MLS-SCORE algorithm deteriorates as the number of data snapshots increases. To tackle this difficulty, we present an efficient method to find an appropriate estimate for each of the cycle frequencies of the SOIs iteratively, to achieve robust adaptive beamforming against the CFE. Simulation results showing the effectiveness of the proposed method are provided.

  2. The Dopamine Prediction Error: Contributions to Associative Models of Reward Learning

    Science.gov (United States)

    Nasser, Helen M.; Calu, Donna J.; Schoenbaum, Geoffrey; Sharpe, Melissa J.

    2017-01-01

    Phasic activity of midbrain dopamine neurons is currently thought to encapsulate the prediction-error signal described in Sutton and Barto’s (1981) model-free reinforcement learning algorithm. This phasic signal is thought to contain information about the quantitative value of reward, which transfers to the reward-predictive cue after learning. This is argued to endow the reward-predictive cue with the value inherent in the reward, motivating behavior toward cues signaling the presence of reward. Yet theoretical and empirical research has implicated prediction-error signaling in learning that extends far beyond a transfer of quantitative value to a reward-predictive cue. Here, we review the research which demonstrates the complexity of how dopaminergic prediction errors facilitate learning. After briefly discussing the literature demonstrating that phasic dopaminergic signals can act in the manner described by Sutton and Barto (1981), we consider how these signals may also influence attentional processing across multiple attentional systems in distinct brain circuits. Then, we discuss how prediction errors encode and promote the development of context-specific associations between cues and rewards. Finally, we consider recent evidence that shows dopaminergic activity contains information about causal relationships between cues and rewards that reflect information garnered from rich associative models of the world that can be adapted in the absence of direct experience. In discussing this research we hope to support the expansion of how dopaminergic prediction errors are thought to contribute to the learning process beyond the traditional concept of transferring quantitative value. PMID:28275359

  3. Modelling application for cognitive reliability and error analysis method

    Directory of Open Access Journals (Sweden)

    Fabio De Felice

    2013-10-01

    Full Text Available The automation of production systems has delegated to machines the execution of highly repetitive and standardized tasks. In the last decade, however, the failure of the automatic factory model has led to partially automated configurations of production systems. In this scenario, therefore, the centrality and responsibility of the role entrusted to human operators are heightened, because the role requires problem-solving and decision-making ability. The human operator is thus at the core of a cognitive process that leads to decisions, and the safety of the whole system depends on his or her reliability. The aim of this paper is to propose a modelling application for the cognitive reliability and error analysis method.

  4. Likelihood-Based Inference in Nonlinear Error-Correction Models

    DEFF Research Database (Denmark)

    Kristensen, Dennis; Rahbæk, Anders

    We consider a class of vector nonlinear error correction models where the transfer function (or loadings) of the stationary relationships is nonlinear. This includes in particular the smooth transition models. A general representation theorem is given which establishes the dynamic properties...... and a linear trend in general. Gaussian likelihood-based estimators are considered for the long-run cointegration parameters, and the short-run parameters. Asymptotic theory is provided for these and it is discussed to what extent asymptotic normality and mixed normality can be found. A simulation study......

  5. Error modelling and experimental validation of a planar 3-PPR parallel manipulator with joint clearances

    DEFF Research Database (Denmark)

    Wu, Guanglei; Bai, Shaoping; Kepler, Jørgen Asbøl

    2012-01-01

    This paper deals with the error modelling and analysis of a 3-PPR planar parallel manipulator with joint clearances. The kinematics and the Cartesian workspace of the manipulator are analyzed. An error model is established with considerations of both configuration errors and joint clearances. Using this model, the upper bounds and distributions of the pose errors for this manipulator are established. The results are compared with experimental measurements and show the effectiveness of the error prediction model....

  6. Wide-aperture laser beam measurement using transmission diffuser: errors modeling

    Science.gov (United States)

    Matsak, Ivan S.

    2015-06-01

    Instrumental errors in measuring the diameter of a wide-aperture laser beam were modeled in order to build a measurement setup and justify its metrological characteristics. The modeled setup is based on a CCD camera and a transmission diffuser. This method is appropriate for precision measurement of large laser beam widths from 10 mm up to 1000 mm. Such beams cannot be measured with other methods based on a slit, pinhole, knife edge or direct CCD camera measurement. The method is suitable for continuous and pulsed laser irradiation. However, the transmission diffuser method lacks the metrological justification required in the field of wide-aperture beam forming system verification. Since no standard wide-aperture flat-top beam is available, modelling is the preferred way to provide basic reference points for developing the measurement system. Modelling was conducted in MathCAD. A super-Lorentz distribution with shape parameter 6-12 was used as a model of the beam. Theoretical evaluation showed that the key parameters influencing the error are: relative beam size, spatial non-uniformity of the diffuser, lens distortion, physical vignetting, CCD spatial resolution and effective camera ADC resolution. Errors were modeled for the 90%-of-power beam diameter criterion. The 12th-order super-Lorentz distribution was the primary model, because it closely matches the experimental distribution at the output of the test beam forming system, although other orders were also used. Analytic expressions were obtained by analyzing the modelling results for each influencing factor. It was shown that an error below 1% is attainable through an appropriate choice of the expression parameters, based on commercially available components of the setup. The method can reach 0.1% error when calibration procedures and multiple measurements are used.
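
    As a rough illustration of the 90%-of-power diameter criterion mentioned above, the sketch below integrates an assumed super-Lorentzian intensity profile I(r) = 1/(1 + (r/w)^(2p)) radially and finds the diameter enclosing 90% of the power; the profile form, shape parameter and grid are illustrative assumptions, not the authors' MathCAD model.

        import numpy as np

        def super_lorentz(r, w=1.0, p=12):
            # assumed super-Lorentzian intensity profile: I(r) = 1 / (1 + (r/w)^(2p))
            return 1.0 / (1.0 + (r / w) ** (2 * p))

        def d90(w=1.0, p=12, r_max=3.0, n=200000):
            # diameter enclosing 90% of the power of an axially symmetric beam
            r = np.linspace(0.0, r_max, n)
            power = np.cumsum(super_lorentz(r, w, p) * 2.0 * np.pi * r) * (r[1] - r[0])
            r90 = r[np.searchsorted(power, 0.9 * power[-1])]
            return 2.0 * r90

        print(d90())  # close to 2*w*sqrt(0.9) for a nearly flat-top (high-order) profile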

  7. Modeling considerations for using expression data from multiple species.

    Science.gov (United States)

    Siewert, Elizabeth; Kechris, Katerina J

    2013-10-15

    Although genome-wide expression data sets from multiple species are now more commonly generated, there have been few studies on how to best integrate this type of correlated data into models. Starting with a single-species, linear regression model that predicts transcription factor binding sites as a case study, we investigated how best to take into account the correlated expression data when extending this model to multiple species. Using a multivariate regression model, we accounted for the phylogenetic relationships among the species in two ways: (i) a repeated-measures model, where the error term is constrained; and (ii) a Bayesian hierarchical model, where the prior distributions of the regression coefficients are constrained. We show that both multiple-species models improve predictive performance over the single-species model. When compared with each other, the repeated-measures model outperformed the Bayesian model. We suggest a possible explanation for the better performance of the model with the constrained error term. Copyright © 2013 John Wiley & Sons, Ltd.

  8. Analysis and Correction of Systematic Height Model Errors

    Science.gov (United States)

    Jacobsen, K.

    2016-06-01

    The geometry of digital height models (DHM) determined with optical satellite stereo combinations depends upon the image orientation, which is influenced by the satellite camera, the system calibration and the attitude registration. As is standard these days, the image orientation is available in the form of rational polynomial coefficients (RPC). Usually a bias correction of the RPC based on ground control points is required. In most cases the bias correction requires an affine transformation, sometimes only shifts, in image or object space. For some satellites and some cases, for example those caused by a small base length, such an image orientation does not lead to the possible accuracy of height models. As reported e.g. by Yong-hua et al. 2015 and Zhang et al. 2015, especially the Chinese stereo satellite ZiYuan-3 (ZY-3) has a limited calibration accuracy and only a 4 Hz attitude recording, which may not be satisfying. Zhang et al. 2015 tried to improve the attitude based on the color sensor bands of ZY-3, but the color images, like detailed satellite orientation information, are not always available. There is a tendency towards systematic deformation in a Pléiades tri-stereo combination with small base length. The small base length magnifies small systematic errors in object space. But systematic height model errors have also been detected in some other satellite stereo combinations. The largest influence is the unsatisfactory leveling of height models, but low frequency height deformations can also be seen. In theory a tilt of the DHM can be eliminated by ground control points (GCP), but often the GCP accuracy and distribution are not optimal, not allowing a correct leveling of the height model. In addition a model deformation at GCP locations may lead to suboptimal DHM leveling. Better accuracy has been reached when supported by reference height models. As reference height model the Shuttle Radar Topography Mission (SRTM) digital surface model (DSM) or the new AW3D30 DSM, based on ALOS PRISM images, are

  9. ANALYSIS AND CORRECTION OF SYSTEMATIC HEIGHT MODEL ERRORS

    Directory of Open Access Journals (Sweden)

    K. Jacobsen

    2016-06-01

    Full Text Available The geometry of digital height models (DHM) determined with optical satellite stereo combinations depends upon the image orientation, which is influenced by the satellite camera, the system calibration and the attitude registration. As is standard these days, the image orientation is available in the form of rational polynomial coefficients (RPC). Usually a bias correction of the RPC based on ground control points is required. In most cases the bias correction requires an affine transformation, sometimes only shifts, in image or object space. For some satellites and some cases, for example those caused by a small base length, such an image orientation does not lead to the possible accuracy of height models. As reported e.g. by Yong-hua et al. 2015 and Zhang et al. 2015, especially the Chinese stereo satellite ZiYuan-3 (ZY-3) has a limited calibration accuracy and only a 4 Hz attitude recording, which may not be satisfying. Zhang et al. 2015 tried to improve the attitude based on the color sensor bands of ZY-3, but the color images, like detailed satellite orientation information, are not always available. There is a tendency towards systematic deformation in a Pléiades tri-stereo combination with small base length. The small base length magnifies small systematic errors in object space. But systematic height model errors have also been detected in some other satellite stereo combinations. The largest influence is the unsatisfactory leveling of height models, but low frequency height deformations can also be seen. In theory a tilt of the DHM can be eliminated by ground control points (GCP), but often the GCP accuracy and distribution are not optimal, not allowing a correct leveling of the height model. In addition a model deformation at GCP locations may lead to suboptimal DHM leveling. Better accuracy has been reached when supported by reference height models. As reference height model the Shuttle Radar Topography Mission (SRTM) digital surface model (DSM) or the new AW3D30 DSM, based on ALOS

  10. Using Laser Scanners to Augment the Systematic Error Pointing Model

    Science.gov (United States)

    Wernicke, D. R.

    2016-08-01

    The antennas of the Deep Space Network (DSN) rely on precise pointing algorithms to communicate with spacecraft that are billions of miles away. Although the existing systematic error pointing model is effective at reducing blind pointing errors due to static misalignments, several of its terms have a strong dependence on seasonal and even daily thermal variation and are thus not easily modeled. Changes in the thermal state of the structure create a separation from the model and introduce a varying pointing offset. Compensating for this varying offset is possible by augmenting the pointing model with laser scanners. In this approach, laser scanners mounted to the alidade measure structural displacements while a series of transformations generate correction angles. Two sets of experiments were conducted in August 2015 using commercially available laser scanners. When compared with historical monopulse corrections under similar conditions, the computed corrections are within 3 mdeg of the mean. However, although the results show promise, several key challenges relating to the sensitivity of the optical equipment to sunlight render an implementation of this approach impractical. Other measurement devices such as inclinometers may be implementable at a significantly lower cost.

  11. A Conceptual Framework to use Remediation of Errors Based on Multiple External Remediation Applied to Learning Objects

    Directory of Open Access Journals (Sweden)

    Maici Duarte Leite

    2014-09-01

    Full Text Available This paper presents the application of some concepts of Intelligent Tutoring Systems (ITS) to elaborate a conceptual framework that uses the remediation of errors with Multiple External Representations (MERs) in Learning Objects (LO). To this end, the development of an LO for teaching the Pythagorean Theorem through this framework is demonstrated. This study explored the error remediation process through a classification of mathematical errors, providing support for the use of MERs in the remediation of errors. The main objective of the proposed framework is to assist the individual learner in recovering from a mistake made during interaction with the LO, whether through carelessness or lack of knowledge. Initially, we present the compiled classification of mathematical errors and their relationship with MERs. Then the concepts involved in the proposed conceptual framework are presented. Finally, an experiment with an LO developed with an authoring tool called FARMA, using the conceptual framework for teaching the Pythagorean Theorem, is presented.

  12. Improved model predictive control of resistive wall modes by error field estimator in EXTRAP T2R

    Science.gov (United States)

    Setiadi, A. C.; Brunsell, P. R.; Frassinetti, L.

    2016-12-01

    Many implementations of a model-based approach for toroidal plasma have shown better control performance compared to the conventional type of feedback controller. One prerequisite of model-based control is the availability of a control oriented model. This model can be obtained empirically through a systematic procedure called system identification. Such a model is used in this work to design a model predictive controller to stabilize multiple resistive wall modes in EXTRAP T2R reversed-field pinch. Model predictive control is an advanced control method that can optimize the future behaviour of a system. Furthermore, this paper will discuss an additional use of the empirical model which is to estimate the error field in EXTRAP T2R. Two potential methods are discussed that can estimate the error field. The error field estimator is then combined with the model predictive control and yields better radial magnetic field suppression.

  13. Modeling Approach of Regression Orthogonal Experiment Design for Thermal Error Compensation of CNC Turning Center

    Institute of Scientific and Technical Information of China (English)

    2002-01-01

    Thermally induced errors can account for as much as 70% of the dimensional errors on a workpiece. Accurate modeling of errors is an essential part of error compensation. Based on an analysis of the existing approaches to thermal error modeling for machine tools, a new approach of regression orthogonal design is proposed, which combines statistical theory with machine structure, surrounding conditions, engineering judgement, and modeling experience. A complete computation and analysis procedure is given. ...

  14. Modeling SMAP Spacecraft Attitude Control Estimation Error Using Signal Generation Model

    Science.gov (United States)

    Rizvi, Farheen

    2016-01-01

    Two ground simulation software packages are used to model the SMAP spacecraft dynamics. The CAST software uses a higher fidelity model than the ADAMS software. The ADAMS software models the spacecraft plant, controller and actuator models, and assumes a perfect sensor and estimator model. In this simulation study, the spacecraft dynamics results from the ADAMS software are used because the CAST software is unavailable. The main source of spacecraft dynamics error in the higher fidelity CAST software is the estimation error. A signal generation model is developed to capture the effect of this estimation error in the overall spacecraft dynamics. This signal generation model is then included in the ADAMS software spacecraft dynamics estimate such that the results are similar to CAST. The signal generation model has characteristics (mean, variance and power spectral density) similar to those of the true CAST estimation error. In this way, the ADAMS software can still be used while capturing the higher fidelity spacecraft dynamics modeling from the CAST software.
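
    A minimal sketch of the signal-generation idea described above, assuming the estimation error can be approximated by filtered Gaussian noise: white noise is shaped to an assumed power spectral density and rescaled to a prescribed mean and variance. The low-pass shape, cut-off and amplitude are placeholders, not the actual CAST error statistics.

        import numpy as np

        def colored_noise(n, psd_shape, mean=0.0, std=1.0, seed=0):
            # shape white Gaussian noise to an assumed one-sided PSD, then rescale
            rng = np.random.default_rng(seed)
            freqs = np.fft.rfftfreq(n)
            spectrum = np.fft.rfft(rng.standard_normal(n)) * np.sqrt(psd_shape(freqs))
            x = np.fft.irfft(spectrum, n)
            return (x - x.mean()) / x.std() * std + mean

        # low-pass shaped, estimation-error-like sequence (cut-off and std are assumptions)
        err = colored_noise(4096, lambda f: 1.0 / (1.0 + (f / 0.01) ** 2), mean=0.0, std=0.05)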

  15. Evaluation Of Statistical Models For Forecast Errors From The HBV-Model

    Science.gov (United States)

    Engeland, K.; Kolberg, S.; Renard, B.; Stensland, I.

    2009-04-01

    Three statistical models for the forecast errors for inflow to the Langvatn reservoir in Northern Norway have been constructed and tested according to how well the distribution and median values of the forecast errors fit the observations. For the first model, observed and forecasted inflows were transformed by the Box-Cox transformation before a first order autoregressive model was constructed for the forecast errors. The parameters were conditioned on climatic conditions. In the second model the Normal Quantile Transformation (NQT) was applied to observed and forecasted inflows before a similar first order autoregressive model was constructed for the forecast errors. For the last model, positive and negative errors were modeled separately. The errors were first NQT-transformed, and a model was then constructed in which the mean values were conditioned on climate, forecasted inflow and yesterday's error. To test the three models we applied three criteria: we wanted a) the median values to be close to the observed values; b) the forecast intervals to be narrow; c) the distribution to be correct. The results showed that it is difficult to obtain a correct model for the forecast errors, and that the main challenge is to account for the auto-correlation in the errors. Models 1 and 2 gave similar results, and their main drawback is that the distributions are not correct. The 95% forecast intervals were well identified, but smaller forecast intervals were over-estimated, and larger intervals were under-estimated. Model 3 gave a distribution that fits better, but the median values do not fit well since the auto-correlation is not properly accounted for. If the 95% forecast interval is of interest, Model 2 is recommended. If the whole distribution is of interest, Model 3 is recommended.
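
    A minimal sketch of the structure of the first error model described above (Box-Cox transform followed by a first-order autoregressive model for the forecast errors); the lambda value, the inflow numbers and the unconditioned AR(1) fit are placeholders, with no conditioning on climatic state.

        import numpy as np

        def box_cox(x, lam=0.3):
            # Box-Cox transform; the lambda value is an illustrative choice
            return (x ** lam - 1.0) / lam if lam != 0 else np.log(x)

        def fit_ar1(e):
            # least-squares estimate of the AR(1) coefficient of a zero-mean error series
            return np.dot(e[:-1], e[1:]) / np.dot(e[:-1], e[:-1])

        # placeholder observed and forecasted inflow series (m3/s)
        obs = np.array([12.0, 15.0, 14.0, 20.0, 25.0, 22.0, 18.0])
        fcst = np.array([11.0, 16.0, 13.0, 18.0, 27.0, 21.0, 19.0])

        e = box_cox(obs) - box_cox(fcst)                 # transformed forecast errors
        phi = fit_ar1(e - e.mean())                      # persistence of the errors
        next_err = e.mean() + phi * (e[-1] - e.mean())   # one-step-ahead error forecast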

  16. Model based correction of placement error in EBL and its verification

    Science.gov (United States)

    Babin, Sergey; Borisov, Sergey; Militsin, Vladimir; Komagata, Tadashi; Wakatsuki, Tetsuro

    2016-05-01

    In maskmaking, the main source of error contributing to placement error is charging. The DISPLACE software corrects the placement error for any layout based on a physical model. The charge of a photomask and multiple discharge mechanisms are simulated to find the charge distribution over the mask. The beam deflection is calculated for each location on the mask, creating data for the placement correction. The software considers the mask layout, the EBL system setup, the resist, and the writing order, as well as other factors such as fogging and proximity effects correction. The output of the software is the data for placement correction. One important step is the calibration of the physical model. A test layout on a single calibration mask was used for calibration. The extracted model parameters were used to verify the correction. As an ultimate test of the correction, a sophisticated layout that was very different from the calibration mask was used for the verification. The placement correction results were predicted by DISPLACE. A good correlation between the measured and predicted values of the correction confirmed the high accuracy of the charging placement error correction.

  17. Error sources in atomic force microscopy for dimensional measurements: Taxonomy and modeling

    DEFF Research Database (Denmark)

    Marinello, F.; Voltan, A.; Savio, E.

    2010-01-01

    This paper aimed at identifying the error sources that occur in dimensional measurements performed using atomic force microscopy. In particular, a set of characterization techniques for errors quantification is presented. The discussion on error sources is organized in four main categories......: scanning system, tip-surface interaction, environment, and data processing. The discussed errors include scaling effects, squareness errors, hysteresis, creep, tip convolution, and thermal drift. A mathematical model of the measurement system is eventually described, as a reference basis for errors...

  18. Semiparametric modeling: Correcting low-dimensional model error in parametric models

    Science.gov (United States)

    Berry, Tyrus; Harlim, John

    2016-03-01

    In this paper, a semiparametric modeling approach is introduced as a paradigm for addressing model error arising from unresolved physical phenomena. Our approach compensates for model error by learning an auxiliary dynamical model for the unknown parameters. Practically, the proposed approach consists of the following steps. Given a physics-based model and a noisy data set of historical observations, a Bayesian filtering algorithm is used to extract a time-series of the parameter values. Subsequently, the diffusion forecast algorithm is applied to the retrieved time-series in order to construct the auxiliary model for the time evolving parameters. The semiparametric forecasting algorithm consists of integrating the existing physics-based model with an ensemble of parameters sampled from the probability density function of the diffusion forecast. To specify initial conditions for the diffusion forecast, a Bayesian semiparametric filtering method that extends the Kalman-based filtering framework is introduced. In difficult test examples, which introduce chaotically and stochastically evolving hidden parameters into the Lorenz-96 model, we show that our approach can effectively compensate for model error, with forecasting skill comparable to that of the perfect model.

  19. Error estimates for the Skyrme-Hartree-Fock model

    CERN Document Server

    Erler, J

    2014-01-01

    There are many complementing strategies to estimate the extrapolation errors of a model which was calibrated in least-squares fits. We consider the Skyrme-Hartree-Fock model for nuclear structure and dynamics and exemplify the following five strategies: uncertainties from statistical analysis, covariances between observables, trends of residuals, variation of fit data, dedicated variation of model parameters. This gives useful insight into the impact of the key fit data as they are: binding energies, charge r.m.s. radii, and charge formfactor. Amongst others, we check in particular the predictive value for observables in the stable nucleus $^{208}$Pb, the super-heavy element $^{266}$Hs, $r$-process nuclei, and neutron stars.

  20. Multiple scattering Model in GEANT4

    CERN Document Server

    Urbàn, L

    2002-01-01

    We present a new multiple scattering (MSC) model to simulate the multiple scattering of charged particles in matter. This model does not use the Moliere formalism, it is based on the more complete Lewis theory. The model simulates the scattering of the particle after a given step, computes the path length correction and the lateral displacement as well.

  1. Uncertainty and error in complex plasma chemistry models

    Science.gov (United States)

    Turner, Miles M.

    2015-06-01

    Chemistry models that include dozens of species and hundreds to thousands of reactions are common in low-temperature plasma physics. The rate constants used in such models are uncertain, because they are obtained from some combination of experiments and approximate theories. Since the predictions of these models are a function of the rate constants, these predictions must also be uncertain. However, systematic investigations of the influence of uncertain rate constants on model predictions are rare to non-existent. In this work we examine a particular chemistry model, for helium-oxygen plasmas. This chemistry is of topical interest because of its relevance to biomedical applications of atmospheric pressure plasmas. We trace the primary sources for every rate constant in the model, and hence associate an error bar (or equivalently, an uncertainty) with each. We then use a Monte Carlo procedure to quantify the uncertainty in predicted plasma species densities caused by the uncertainty in the rate constants. Under the conditions investigated, the range of uncertainty in most species densities is a factor of two to five. However, the uncertainty can vary strongly for different species, over time, and with other plasma conditions. There are extreme (pathological) cases where the uncertainty is more than a factor of ten. One should therefore be cautious in drawing any conclusion from plasma chemistry modelling, without first ensuring that the conclusion in question survives an examination of the related uncertainty.
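
    A generic sketch of the Monte Carlo procedure described above, assuming each uncertain rate constant is sampled log-normally around its nominal value with a factor-of-two error bar; the two-reaction "model" is purely a stand-in for a real plasma chemistry solver.

        import numpy as np

        rng = np.random.default_rng(1)

        def toy_density(k1, k2, n0=1.0e16, t=1.0e-3):
            # stand-in for a chemistry model: density set by two competing rate constants
            return n0 * k1 / (k1 + k2) * (1.0 - np.exp(-(k1 + k2) * t))

        k1_nom, k2_nom, factor = 1.0e3, 5.0e2, 2.0   # nominal rates and assumed uncertainty

        samples = []
        for _ in range(5000):
            k1 = k1_nom * factor ** rng.standard_normal()   # log-normal sample, stays positive
            k2 = k2_nom * factor ** rng.standard_normal()
            samples.append(toy_density(k1, k2))

        lo, hi = np.percentile(samples, [2.5, 97.5])
        print(f"95% range of the predicted density: {lo:.3e} -- {hi:.3e}")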

  2. An experimental test of the accumulated copying error model of cultural mutation for Acheulean handaxe size.

    Science.gov (United States)

    Kempe, Marius; Lycett, Stephen; Mesoudi, Alex

    2012-01-01

    Archaeologists interested in explaining changes in artifact morphology over long time periods have found it useful to create models in which the only source of change is random and unintentional copying error, or 'cultural mutation'. These models can be used as null hypotheses against which to detect non-random processes such as cultural selection or biased transmission. One proposed cultural mutation model is the accumulated copying error model, where individuals attempt to copy the size of another individual's artifact exactly but make small random errors due to physiological limits on the accuracy of their perception. Here, we first derive the model within an explicit mathematical framework, generating the predictions that multiple independently-evolving artifact chains should diverge over time such that their between-chain variance increases while the mean artifact size remains constant. We then present the first experimental test of this model in which 200 participants, split into 20 transmission chains, were asked to faithfully copy the size of the previous participant's handaxe image on an iPad. The experimental findings supported the model's prediction that between-chain variance should increase over time and did so in a manner quantitatively in line with the model. However, when the initial size of the image that the participants resized was larger than the size of the image they were copying, subjects tended to increase the size of the image, resulting in the mean size increasing rather than staying constant. This suggests that items of material culture formed by reductive vs. additive processes may mutate differently when individuals attempt to replicate faithfully the size of previously-produced artifacts. Finally, we show that a dataset of 2601 Acheulean handaxes shows less variation than predicted given our empirically measured copying error variance, suggesting that other processes counteracted the variation in handaxe size generated by perceptual
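
    A small sketch of the accumulated copying error idea described above: independent transmission chains copy a size with a small proportional (perceptual) error each generation, so the between-chain variance grows while the mean stays roughly constant. The chain count, chain length and the 5% coefficient of variation are illustrative assumptions.

        import numpy as np

        rng = np.random.default_rng(0)

        def simulate_chains(n_chains=20, n_generations=10, start_size=100.0, cv=0.05):
            # each generation copies the previous size with a small proportional error
            sizes = np.full(n_chains, start_size)
            history = [sizes.copy()]
            for _ in range(n_generations):
                sizes = sizes * (1.0 + cv * rng.standard_normal(n_chains))
                history.append(sizes.copy())
            return np.array(history)          # shape: (generations + 1, chains)

        h = simulate_chains()
        print("mean size per generation:      ", np.round(h.mean(axis=1), 1))
        print("between-chain variance by step:", np.round(h.var(axis=1), 2))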

  3. Likelihood-Based Inference in Nonlinear Error-Correction Models

    DEFF Research Database (Denmark)

    Kristensen, Dennis; Rahbæk, Anders

    We consider a class of vector nonlinear error correction models where the transfer function (or loadings) of the stationary relationships is nonlinear. This includes in particular the smooth transition models. A general representation theorem is given which establishes the dynamic properties...... of the process in terms of stochastic and deterministic trends as well as stationary components. In particular, the behaviour of the cointegrating relations is described in terms of geometric ergodicity. Despite the fact that no deterministic terms are included, the process will have both stochastic trends...... and a linear trend in general. Gaussian likelihood-based estimators are considered for the long-run cointegration parameters, and the short-run parameters. Asymptotic theory is provided for these and it is discussed to what extent asymptotic normality and mixed normality can be found. A simulation study......

  4. Modeling Multiple Causes of Carcinogenesis

    Energy Technology Data Exchange (ETDEWEB)

    Jones, T D

    1999-01-24

    An array of epidemiological results and databases on test animals indicate that the risk of cancer and atherosclerosis can be up- or down-regulated by diet through a range of 200%. Other factors contribute incrementally and include the natural terrestrial environment and various human activities that jointly produce complex exposures to endotoxin-producing microorganisms, ionizing radiations, and chemicals. Ordinary personal habits and simple physical irritants have been demonstrated to affect the immune response and the risk of disease. There tends to be poor statistical correlation of long-term risk with single-agent exposures incurred throughout working careers. However, Agency recommendations for the control of hazardous exposures to humans have been substance-specific instead of contextually realistic, even though there is consistent evidence for common mechanisms of toxicological and carcinogenic action. That behavior seems to be best explained by molecular stresses from cellular oxygen metabolism and phagocytosis of antigenic invasion, as well as breakdown of normal metabolic compounds associated with homeostatic- and injury-related renewal of cells. There is continually mounting evidence that marrow stroma, comprised largely of monocyte-macrophages and fibroblasts, is important to phagocytic and cytokinetic response, but the complex action of the immune process is difficult to infer from first-principle logic or biomarkers of toxic injury. The many diverse database studies all seem to implicate two important processes, i.e., the univalent reduction of molecular oxygen and the breakdown of arginine, an amino acid, by hydrolysis or digestion of protein, which is attendant to normal antigen-antibody action. This behavior indicates that protection guidelines and risk coefficients should be context dependent to include reference considerations of the composite action of parameters that mediate oxygen metabolism. A logic of this type permits the realistic common-scale modeling of

  5. Accounting for model error due to unresolved scales within ensemble Kalman filtering

    CERN Document Server

    Mitchell, Lewis

    2014-01-01

    We propose a method to account for model error due to unresolved scales in the context of the ensemble transform Kalman filter (ETKF). The approach extends to this class of algorithms the deterministic model error formulation recently explored for variational schemes and the extended Kalman filter. The model error statistics required in the analysis update are estimated using historical reanalysis increments and a suitable model error evolution law. Two different versions of the method are described: a time-constant treatment, where the model error statistical description is time-invariant, and a time-varying treatment, where the assumed model error statistics are randomly sampled at each analysis step. We compare both methods with the standard method of dealing with model error through inflation and localization, and illustrate our results with numerical simulations on a low order nonlinear system exhibiting chaotic dynamics. The results show that the filter skill is significantly improved through th...

  6. A predictive model for dimensional errors in fused deposition modeling

    DEFF Research Database (Denmark)

    Stolfi, A.

    2015-01-01

    This work concerns the effect of deposition angle (a) and layer thickness (L) on the dimensional performance of FDM parts using a predictive model based on the geometrical description of the FDM filament profile. An experimental validation over the whole a range from 0° to 177° at 3° steps and two...

  7. Multiple Imputation Strategies for Multiple Group Structural Equation Models

    Science.gov (United States)

    Enders, Craig K.; Gottschall, Amanda C.

    2011-01-01

    Although structural equation modeling software packages use maximum likelihood estimation by default, there are situations where one might prefer to use multiple imputation to handle missing data rather than maximum likelihood estimation (e.g., when incorporating auxiliary variables). The selection of variables is one of the nuances associated…

  8. Dynamical modelling of coordinated multiple robot systems

    Science.gov (United States)

    Hayati, Samad

    1987-01-01

    The state of the art in the modeling of the dynamics of coordinated multiple robot manipulators is summarized and various problems related to this subject are discussed. It is recognized that dynamics modeling is a component used in the design of controllers for multiple cooperating robots. As such, the discussion addresses some problems related to the control of multiple robots. The techniques used to date in the modeling of closed kinematic chains are summarized. Various efforts made to date for the control of coordinated multiple manipulators is summarized.

  9. Transmission of Successful Route Error Message(RERR) in Routing Aware Multiple Description Video Coding over Mobile Ad-Hoc Network

    CERN Document Server

    Shah, Kinjal; Sharma, Dharmendar; Mishra, Priyanka; Rakesh, Nitin

    2011-01-01

    Video transmission over mobile ad-hoc networks is becoming important as these networks become more widely used in wireless networking. We propose a routing-aware multiple description video coding approach to support video transmission over mobile ad-hoc networks with single and multiple path transport. We build a model to estimate the loss probability of each packet transmitted over the network, based on the standard ad-hoc routing messages and network parameters, without losing the RERR message. We then calculate the frame loss probability in order to eliminate error without any loss of data.

  10. Evaluation of statistical models for forecast errors from the HBV model

    Science.gov (United States)

    Engeland, Kolbjørn; Renard, Benjamin; Steinsland, Ingelin; Kolberg, Sjur

    2010-04-01

    Three statistical models for the forecast errors for inflow into the Langvatn reservoir in Northern Norway have been constructed and tested according to the agreement between (i) the forecast distribution and the observations and (ii) median values of the forecast distribution and the observations. For the first model observed and forecasted inflows were transformed by the Box-Cox transformation before a first order auto-regressive model was constructed for the forecast errors. The parameters were conditioned on weather classes. In the second model the Normal Quantile Transformation (NQT) was applied on observed and forecasted inflows before a similar first order auto-regressive model was constructed for the forecast errors. For the third model positive and negative errors were modeled separately. The errors were first NQT-transformed before conditioning the mean error values on climate, forecasted inflow and yesterday's error. To test the three models we applied three criteria: we wanted (a) the forecast distribution to be reliable; (b) the forecast intervals to be narrow; (c) the median values of the forecast distribution to be close to the observed values. Models 1 and 2 gave almost identical results. The median values improved the forecast with the Nash-Sutcliffe Reff increasing from 0.77 for the original forecast to 0.87 for the corrected forecasts. Models 1 and 2 over-estimated the forecast intervals but gave the narrowest intervals. Their main drawback was that the distributions are less reliable than Model 3. For Model 3 the median values did not fit well since the auto-correlation was not accounted for. Since Model 3 did not benefit from the potential variance reduction that lies in bias estimation and removal, it gave on average wider forecast intervals than the two other models. At the same time Model 3 on average slightly under-estimated the forecast intervals, probably explained by the use of average measures to evaluate the fit.

  11. Impact of transport model errors on the global and regional methane emissions estimated by inverse modelling

    Directory of Open Access Journals (Sweden)

    R. Locatelli

    2013-04-01

    Full Text Available A modelling experiment has been conceived to assess the impact of transport model errors on the methane emissions estimated by an atmospheric inversion system. Synthetic methane observations, given by 10 different model outputs from the international TransCom-CH4 model exercise, are combined with a prior scenario of methane emissions and sinks, and integrated into the PYVAR-LMDZ-SACS inverse system to produce 10 different methane emission estimates at the global scale for the year 2005. The same set-up has been used to produce the synthetic observations and to compute flux estimates by inverse modelling, which means that only differences in the modelling of atmospheric transport may cause differences in the estimated fluxes. In our framework, we show that transport model errors lead to a discrepancy of 27 Tg CH4 per year at the global scale, representing 5% of the total methane emissions. At continental and yearly scales, transport model errors have bigger impacts depending on the region, ranging from 36 Tg CH4 in North America to 7 Tg CH4 in Boreal Eurasia (from 23% to 48%). At the model gridbox scale, the spread of inverse estimates can even reach 150% of the prior flux. Thus, transport model errors contribute to significant uncertainties in the methane estimates by inverse modelling, especially when small spatial scales are invoked. Sensitivity tests have been carried out to estimate the impact of the measurement network and the advantage of higher resolution models. The analysis of methane estimated fluxes in these different configurations questions the consistency of transport model errors in current inverse systems. For future methane inversions, an improvement in the modelling of the atmospheric transport would make the estimations more accurate. Likewise, errors of the observation covariance matrix should be more consistently prescribed in future inversions in order to limit the impact of transport model errors on estimated methane

  12. Universal geometric error modeling of the CNC machine tools based on the screw theory

    Science.gov (United States)

    Tian, Wenjie; He, Baiyan; Huang, Tian

    2011-05-01

    The methods to improve the precision of CNC (Computerized Numerical Control) machine tools can be classified into two categories: error prevention and error compensation. Error prevention improves the precision through high accuracy in manufacturing and assembly. Error compensation analyzes the source errors that affect the machining error, establishes the error model, and reaches the ideal position and orientation by modifying the trajectory in real time. Error modeling is the key to compensation, so the error modeling method is of great significance. Many researchers have focused on this topic and proposed many methods, but these methods can hardly describe the 6-dimensional configuration error of the machine tools. In this paper, the universal geometric error model of CNC machine tools is obtained utilizing screw theory. The 6-dimensional error vector is expressed with a twist, and the error vector transforms between different frames with the adjoint transformation matrix. This model can describe the overall position and orientation errors of the tool relative to the workpiece entirely. It provides the mathematical model for compensation, and also provides a guideline for the manufacture, assembly and precision synthesis of machine tools.
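
    To illustrate the twist-and-adjoint machinery mentioned above, the sketch below maps a 6-dimensional error twist from one frame to another using the adjoint of a homogeneous transformation, with the (v, w) twist ordering; the frame pose and the error values are arbitrary illustrative numbers, not the paper's model.

        import numpy as np

        def skew(p):
            # 3x3 skew-symmetric matrix of a 3-vector
            return np.array([[0.0, -p[2], p[1]],
                             [p[2], 0.0, -p[0]],
                             [-p[1], p[0], 0.0]])

        def adjoint(R, p):
            # adjoint of the homogeneous transform (R, p) acting on twists ordered (v, w)
            A = np.zeros((6, 6))
            A[:3, :3] = R
            A[:3, 3:] = skew(p) @ R
            A[3:, 3:] = R
            return A

        R = np.eye(3)                          # assumed orientation of the axis frame
        p = np.array([0.5, 0.0, 0.2])          # assumed position of the axis frame (m)
        err_twist = np.array([1e-5, -2e-5, 0.0, 0.0, 0.0, 3e-6])   # linear + angular error

        err_in_reference_frame = adjoint(R, p) @ err_twist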

  13. Statistical Inference for Partially Linear Regression Models with Measurement Errors

    Institute of Scientific and Technical Information of China (English)

    Jinhong YOU; Qinfeng XU; Bin ZHOU

    2008-01-01

    In this paper, the authors investigate three aspects of statistical inference for partially linear regression models where some covariates are measured with errors. Firstly, a bandwidth selection procedure is proposed, which is a combination of the difference-based technique and the GCV method. Secondly, a goodness-of-fit test procedure is proposed, which is an extension of the generalized likelihood technique. Thirdly, a variable selection procedure for the parametric part is provided based on the nonconcave penalization and corrected profile least squares. As in "Variable selection via nonconcave penalized likelihood and its oracle properties" (J. Amer. Statist. Assoc., 96, 2001, 1348-1360), it is shown that the resulting estimator has an oracle property with a proper choice of regularization parameters and penalty function. Simulation studies are conducted to illustrate the finite sample performances of the proposed procedures.

  14. Regularized multivariate regression models with skew-t error distributions

    KAUST Repository

    Chen, Lianfu

    2014-06-01

    We consider regularization of the parameters in multivariate linear regression models with the errors having a multivariate skew-t distribution. An iterative penalized likelihood procedure is proposed for constructing sparse estimators of both the regression coefficient and inverse scale matrices simultaneously. The sparsity is introduced through penalizing the negative log-likelihood by adding L1-penalties on the entries of the two matrices. Taking advantage of the hierarchical representation of skew-t distributions, and using the expectation conditional maximization (ECM) algorithm, we reduce the problem to penalized normal likelihood and develop a procedure to minimize the ensuing objective function. Using a simulation study the performance of the method is assessed, and the methodology is illustrated using a real data set with a 24-dimensional response vector. © 2014 Elsevier B.V.

  15. Calibration of parallel kinematics machine using generalized distance error model

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    This paper focuses on the accuracy enhancement of parallel kinematics machines through kinematic calibration. In the calibration process, the construction of a well-structured identification Jacobian matrix and the measurement of the end-effector position and orientation are two main difficulties. In this paper, the identification Jacobian matrix is constructed easily by numerical calculation utilizing the unit virtual velocity method. A generalized distance error model is presented to avoid measuring the position and orientation directly, which is difficult to do. Finally, a measurement tool is given for acquiring the data points in the calibration process. Experimental studies confirmed the effectiveness of the method. It is also shown in the paper that the proposed approach can be applied to other types of parallel manipulators.

  16. Study on Laser Visual Measurement Method for Seamless Steel PipeStraightness Error by Multiple Line-structured Laser Sensors

    Institute of Scientific and Technical Information of China (English)

    陈长水; 谢建平; 王佩琳

    2001-01-01

    An original non-contact measurement method using multiple line-structured laser sensors for the straightness error of seamless steel pipe is introduced in this paper. An arc appears on the surface of the measured seamless steel pipe when it is illuminated by a line-structured laser source. After the image of the arc is captured by a CCD camera, the coordinates of the center of the pipe cross-section circle containing the arc can be worked out through a certain algorithm. Similarly, multiple line-structured laser sensors are mounted parallel to the pipe. The straightness error of the seamless steel pipe, therefore, can be inferred from the coordinates of the multiple cross-section centers obtained from each line-structured laser sensor.

  17. Subspace electrode selection methodology for EEG multiple source localization error reduction due to uncertain conductivity values.

    Science.gov (United States)

    Crevecoeur, Guillaume; Yitembe, Bertrand; Dupre, Luc; Van Keer, Roger

    2013-01-01

    This paper proposes a modification of the subspace correlation cost function and the Recursively Applied and Projected Multiple Signal Classification (RAP-MUSIC) method for electroencephalography (EEG) source analysis in epilepsy. This makes it possible to reconstruct neural source locations and orientations that are less degraded by uncertain knowledge of the head conductivity values. An extended linear forward model is used in the subspace correlation cost function that incorporates the sensitivity of the EEG potentials to the uncertain conductivity value parameter. More specifically, the principal vector of the subspace correlation function is used to provide relevant information for solving the EEG inverse problem. A simulation study is carried out on a simplified spherical head model with an uncertain skull to soft tissue conductivity ratio. Results show an improvement in the reconstruction accuracy of source parameters compared to the traditional methodology when using conductivity ratio values that differ from the actual conductivity ratio.

  18. Bayesian Hierarchical Model Characterization of Model Error in Ocean Data Assimilation and Forecasts

    Science.gov (United States)

    2013-09-30


  19. FUZZY MODEL OPTIMIZATION FOR TIME SERIES DATA USING A TRANSLATION IN THE EXTENT OF MEAN ERROR

    OpenAIRE

    Nurhayadi; ., Subanar; Abdurakhman; Agus Maman Abadi

    2014-01-01

    Recently, many researchers have written about forecasting stock prices, electricity load demand and academic enrollment using fuzzy methods. However, in general, such modeling does not yet consider the position of the model relative to the actual data, which means that the error is not handled optimally. Error that is not managed well can reduce the accuracy of the forecasting. Therefore, the paper will discuss reducing error using model translation. The error that will be reduced i...

  20. Error Modelling and Experimental Validation of a Planar 3-PPR Parallel Manipulator with Joint Clearances

    OpenAIRE

    Wu, Guanglei; Shaoping, Bai; Jørgen A., Kepler; Caro, Stéphane

    2012-01-01

    International audience; This paper deals with the error modelling and analysis of a 3-PPR planar parallel manipulator with joint clearances. The kinematics and the Cartesian workspace of the manipulator are analyzed. An error model is established with considerations of both configuration errors and joint clearances. Using this model, the upper bounds and distributions of the pose errors for this manipulator are established. The results are compared with experimental measurements a...

  1. Impact of transport model errors on the global and regional methane emissions estimated by inverse modelling

    NARCIS (Netherlands)

    Locatelli, R.; Bousquet, P.; Chevallier, F.; Fortems-Cheney, A.; Szopa, S.; Saunois, M.; Agusti-Panareda, A.; Bergmann, D.; Bian, H.; Cameron-Smith, P.; Chipperfield, M.P.; Gloor, E.; Houweling, S.; Kawa, S.R.; Krol, M.C.; Patra, P.K.; Prinn, R.G.; Rigby, M.; Saito, R.; Wilson, C.

    2013-01-01

    A modelling experiment has been conceived to assess the impact of transport model errors on methane emissions estimated in an atmospheric inversion system. Synthetic methane observations, obtained from 10 different model outputs from the international TransCom-CH4 model inter-comparison exercise, ar

  2. A model study of potential sampling errors due to data scatter around expendable bathythermograph transects in the tropical Pacific

    Science.gov (United States)

    Mcphaden, Michael J.; Busalacchi, Antonio J.; Picaut, Joel; Raymond, Gary

    1988-01-01

    A linear multiple vertical-mode model described by McPhaden et al. (1988) is used to examine potential errors due to data scatter around expendable bathythermograph (XBT) transects in the tropical Pacific. Two methods of sampling are compared. In the first, the model was sampled along approximately straight lines of grid points corresponding to the mean positions of XBT tracks in the eastern, central, and western Pacific; in the second, the model was sampled again at the dates and locations of actual XBT casts for 1979-1983. The model indicates that the zonal scatter of data around XBT transects can in general lead to about a 2 dyn cm error in dynamic height in composite sections of XBT data. Errors larger than 2 dyn cm occurred in regions where the XBT sample spacing in the zonal direction was insufficient to resolve Rossby wave variations in the model.

  3. Fourier transform based dynamic error modeling method for ultra-precision machine tool

    Science.gov (United States)

    Chen, Guoda; Liang, Yingchun; Ehmann, Kornel F.; Sun, Yazhou; Bai, Qingshun

    2014-08-01

    In some industrial fields, the workpiece surface needs to meet not only surface roughness requirements but also strict requirements on multi-scale frequency domain errors. The ultra-precision machine tool is the most important carrier for ultra-precision machining of such parts, and its errors are the key factor influencing the multi-scale frequency domain errors of the machined surface. Volumetric error modeling is the bridge linking the machine errors to the machined surface errors. However, the error modeling methods available from previous research are difficult to use for analyzing the relationship between the dynamic errors of the machine motion components and the multi-scale frequency domain errors of the machined surface, a relationship that serves as an important reference in the design and accuracy improvement of ultra-precision machine tools. In this paper, a Fourier transform based dynamic error modeling method is presented, built on the theoretical basis of rigid body kinematics and homogeneous transformation matrices. A case study is carried out, which shows that the proposed method can provide a consistent and regular numerical description of the machine dynamic errors and the volumetric errors. The proposed method has strong potential for predicting frequency domain errors on the machined surface, extracting multi-scale frequency domain error information, and analyzing the relationship between the machine motion components and the frequency domain errors of the machined surface.
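
    A minimal sketch of the frequency-domain step implied above: take the Fourier transform of a measured (here synthetic) dynamic error trace of one motion component to see which frequency bands dominate and would therefore map onto the machined surface. The sampling rate, drift and vibration terms are placeholders.

        import numpy as np

        fs = 1000.0                                   # assumed sampling rate of the error trace (Hz)
        t = np.arange(0.0, 2.0, 1.0 / fs)
        # placeholder dynamic error: slow thermal-like drift plus a higher-frequency vibration
        error = 0.20e-6 * np.sin(2 * np.pi * 1.5 * t) + 0.05e-6 * np.sin(2 * np.pi * 120.0 * t)

        spectrum = np.fft.rfft(error)
        freqs = np.fft.rfftfreq(error.size, d=1.0 / fs)
        amplitude = 2.0 * np.abs(spectrum) / error.size   # single-sided amplitude spectrum

        dominant = freqs[np.argsort(amplitude)[-3:]]      # frequency bands that dominate the error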

  4. Entropy Error Model of Planar Geometry Features in GIS

    Institute of Scientific and Technical Information of China (English)

    LI Dajun; GUAN Yunlan; GONG Jianya; DU Daosheng

    2003-01-01

    The positional error of line segments is usually described using the "g-band"; however, its band width depends on the choice of confidence level. In fact, given different confidence levels, a series of concentric bands can be obtained. To overcome the effect of the confidence level on the error indicator, by introducing the union entropy theory, we propose an entropy error ellipse index for points, then extend it to line segments and polygons, and establish an entropy error band for line segments and an entropy error donut for polygons. The research shows that the entropy error index can be determined uniquely and is not influenced by the confidence level, and that these indices are suitable for describing the positional uncertainty of planar geometry features.

  5. Background Error Covariance Estimation Using Information from a Single Model Trajectory with Application to Ocean Data Assimilation

    Science.gov (United States)

    Keppenne, Christian L.; Rienecker, Michele; Kovach, Robin M.; Vernieres, Guillaume

    2014-01-01

    An attractive property of ensemble data assimilation methods is that they provide flow dependent background error covariance estimates which can be used to update fields of observed variables as well as fields of unobserved model variables. Two methods to estimate background error covariances are introduced which share the above property with ensemble data assimilation methods but do not involve the integration of multiple model trajectories. Instead, all the necessary covariance information is obtained from a single model integration. The Space Adaptive Forecast error Estimation (SAFE) algorithm estimates error covariances from the spatial distribution of model variables within a single state vector. The Flow Adaptive error Statistics from a Time series (FAST) method constructs an ensemble sampled from a moving window along a model trajectory. SAFE and FAST are applied to the assimilation of Argo temperature profiles into version 4.1 of the Modular Ocean Model (MOM4.1) coupled to the GEOS-5 atmospheric model and to the CICE sea ice model. The results are validated against unassimilated Argo salinity data. They show that SAFE and FAST are competitive with the ensemble optimal interpolation (EnOI) used by the Global Modeling and Assimilation Office (GMAO) to produce its ocean analysis. Because of their reduced cost, SAFE and FAST hold promise for high-resolution data assimilation applications.
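
    A minimal sketch of the FAST idea as summarized above: treat the most recent states of a single model trajectory as a moving-window ensemble and use its sample covariance as a flow-dependent background error covariance. The window length and the toy trajectory are assumptions for illustration only.

        import numpy as np

        def fast_covariance(trajectory, window=20):
            # trajectory: (n_times, n_state); the last `window` states form the ensemble
            ens = trajectory[-window:]
            anomalies = ens - ens.mean(axis=0)
            return anomalies.T @ anomalies / (window - 1)   # (n_state, n_state) covariance

        rng = np.random.default_rng(2)
        traj = np.cumsum(rng.standard_normal((200, 5)), axis=0)   # placeholder model trajectory
        B = fast_covariance(traj)                                 # background error covariance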

  6. An Activation-Based Model of Routine Sequence Errors

    Science.gov (United States)

    2015-04-01

    Occasionally, after completing a step, the screen cleared and the participants were interrupted to perform a simple arithmetic task; the interruption ... In accordance with the columnar data, the distribution of errors clusters around the +/-1 errors, and falls away in both directions as the error type gets ... has been accessed in working memory, slowly decaying as time passes. Activation strengthening is calculated according to: As = ln( sum_{j=1}^{n} t_j^(-d) )
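
    Assuming the reconstructed activation-strengthening formula above, As = ln(sum_j t_j^(-d)), the snippet below evaluates it for an item accessed at a few past times; the decay rate d = 0.5 is only a conventional placeholder value.

        import math

        def activation_strengthening(access_ages, d=0.5):
            # As = ln( sum_j t_j^(-d) ), t_j = time since the j-th access, d = decay rate
            return math.log(sum(t ** (-d) for t in access_ages))

        # item accessed 2, 10 and 60 seconds ago
        print(activation_strengthening([2.0, 10.0, 60.0]))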

  7. Bayesian hierarchical error model for analysis of gene expression data

    National Research Council Canada - National Science Library

    Cho, HyungJun; Lee, Jae K

    2004-01-01

    .... Moreover, the same gene often shows quite heterogeneous error variability under different biological and experimental conditions, which must be estimated separately for evaluating the statistical...

  8. New mathematical model for error reduction of stressed lap

    Science.gov (United States)

    Zhao, Pu; Yang, Shuming; Sun, Lin; Shi, Xinyu; Liu, Tao; Jiang, Zhuangde

    2016-05-01

    Stressed lap, compared to traditional polishing methods, has high processing efficiency. However, this method has disadvantages in processing nonsymmetric surface errors. A basic-function method is proposed to calculate parameters for a stressed-lap polishing system. It aims to minimize residual errors and is based on a matrix and nonlinear optimization algorithm. The results show that residual root-mean-square could be >15% after one process for classical trefoil error. The surface period errors close to the lap diameter were removed efficiently, up to 50% material removal.

  9. Stochastic model error in the LANS-alpha and NS-alpha deconvolution models of turbulence

    CERN Document Server

    Olson, Eric

    2015-01-01

    This paper reports on a computational study of the model error in the LANS-alpha and NS-alpha deconvolution models of homogeneous isotropic turbulence. The focus is on how well the model error may be characterized by a stochastic force. Computations are also performed for a new turbulence model obtained as a rescaled limit of the deconvolution model. The technique used is to plug a solution obtained from direct numerical simulation of the incompressible Navier-Stokes equations into the competing turbulence models and to then compute the time evolution of the resulting residual. All computations have been done in two dimensions rather than three for convenience and efficiency. When the effective averaging length scale in any of the models is $\alpha_0=0.01$ the time evolution of the root-mean-squared residual error grows as $\sqrt{t}$. This growth rate is consistent with the hypothesis that the model error may be characterized by a stochastic force. When $\alpha_0=0.20$ the residual error grows linearly. Linea...

  10. Allowing for model error in strong constraint 4D-Var

    Science.gov (United States)

    Howes, Katherine; Lawless, Amos; Fowler, Alison

    2016-04-01

    Four dimensional variational data assimilation (4D-Var) can be used to obtain the best estimate of the initial conditions of an environmental forecasting model, namely the analysis. In practice, when the forecasting model contains errors, the analysis from the 4D-Var algorithm will be degraded to allow for errors later in the forecast window. This work focuses on improving the analysis at the initial time by allowing for the fact that the model contains error, within the context of strong constraint 4D-Var. The 4D-Var method developed acknowledges the presence of random error in the model at each time step by replacing the observation error covariance matrix with an error covariance matrix that includes both observation error and model error statistics. It is shown that this new matrix represents the correct error statistics of the innovations in the presence of model error. A method for estimating this matrix using innovation statistics, without requiring prior knowledge of the model error statistics, is presented. The method is demonstrated numerically using a non-linear chaotic system with erroneous parameter values. We show that the new method works to reduce the analysis error covariance when compared with a standard strong constraint 4D-Var scheme. We discuss the fact that an improved analysis will not necessarily provide a better forecast.
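
    A highly simplified sketch of the central substitution described above: the observation-error covariance used in the strong-constraint 4D-Var cost function is replaced by a matrix that also carries model error accumulated up to the observation time. Treating the model error as additive, uncorrelated per step and already mapped into observation space is an assumption made purely for illustration, not the paper's innovation-statistics estimator.

        import numpy as np

        def effective_obs_covariance(R, Q, k):
            # observation-error covariance augmented by k steps of accumulated model error
            return R + k * Q

        R = np.diag([0.10, 0.10])    # assumed instrument error covariance
        Q = np.diag([0.02, 0.02])    # assumed per-step model error in observation space
        R_eff_step5 = effective_obs_covariance(R, Q, 5)   # used in place of R at that time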

  11. Holographic quantum error-correcting codes: Toy models for the bulk/boundary correspondence

    CERN Document Server

    Pastawski, Fernando; Harlow, Daniel; Preskill, John

    2015-01-01

    We propose a family of exactly solvable toy models for the AdS/CFT correspondence based on a novel construction of quantum error-correcting codes with a tensor network structure. Our building block is a special type of tensor with maximal entanglement along any bipartition, which gives rise to an exact isometry from bulk operators to boundary operators. The entire tensor network is a quantum error-correcting code, where the bulk and boundary degrees of freedom may be identified as logical and physical degrees of freedom respectively. These models capture key features of entanglement in the AdS/CFT correspondence; in particular, the Ryu-Takayanagi formula and the negativity of tripartite information are obeyed exactly in many cases. That bulk logical operators can be represented on multiple boundary regions mimics the Rindler-wedge reconstruction of boundary operators from bulk operators, realizing explicitly the quantum error-correcting features of AdS/CFT recently proposed by Almheiri et al. in arXiv:1411.70...

  12. Solving Inverse Radiation Transport Problems with Multi-Sensor Data in the Presence of Correlated Measurement and Modeling Errors

    Energy Technology Data Exchange (ETDEWEB)

    Thomas, Edward V. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Stork, Christopher L. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Mattingly, John K. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2015-07-01

    Inverse radiation transport focuses on identifying the configuration of an unknown radiation source given its observed radiation signatures. The inverse problem is traditionally solved by finding the set of transport model parameter values that minimizes a weighted sum of the squared differences by channel between the observed signature and the signature predicted by the hypothesized model parameters. The weights are inversely proportional to the sum of the variances of the measurement and model errors at a given channel. The traditional implicit (often inaccurate) assumption is that the errors (differences between the modeled and observed radiation signatures) are independent across channels. Here, an alternative method that accounts for correlated errors between channels is described and illustrated using an inverse problem based on the combination of gamma and neutron multiplicity counting measurements.
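
    The contrast between the traditional independent-error weighting and the correlated-error alternative can be written as weighted versus generalized least squares; the residuals and the channel covariance below are made-up numbers, not measured gamma/neutron statistics.

        import numpy as np

        # Residuals between observed and modeled signatures in four channels (illustrative).
        r = np.array([0.8, -0.5, 0.6, -0.2])

        # Traditional approach: independent errors, weights = 1 / (sigma_meas^2 + sigma_model^2).
        var_channel = np.array([0.4, 0.3, 0.5, 0.2])
        chi2_independent = np.sum(r**2 / var_channel)

        # Alternative: full covariance with assumed correlation between neighboring channels.
        corr = 0.6
        C = np.diag(var_channel)
        for i in range(len(r) - 1):
            C[i, i + 1] = C[i + 1, i] = corr * np.sqrt(var_channel[i] * var_channel[i + 1])

        chi2_correlated = r @ np.linalg.solve(C, r)

        print(f"independent-error misfit: {chi2_independent:.3f}")
        print(f"correlated-error misfit : {chi2_correlated:.3f}")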

  13. Selecting Human Error Types for Cognitive Modelling and Simulation

    NARCIS (Netherlands)

    Mioch, T.; Osterloh, J.P.; Javaux, D.

    2010-01-01

    This paper presents a method that has enabled us to make a selection of error types and error production mechanisms relevant to the HUMAN European project, and discusses the reasons underlying those choices. We claim that this method has the advantage that it is very exhaustive in determining the re

  14. Assessment of errors and uncertainty patterns in GIA modeling

    DEFF Research Database (Denmark)

    Barletta, Valentina Roberta; Spada, G.

    , such as time-evolving shorelines and paleo-coastlines. In this study we quantify these uncertainties and their propagation in GIA response using a Monte Carlo approach to obtain spatio-temporal patterns of GIA errors. A direct application is the error estimates in ice mass balance in Antarctica and Greenland...

  15. Assessment of errors and uncertainty patterns in GIA modeling

    DEFF Research Database (Denmark)

    Barletta, Valentina Roberta; Spada, G.

    2012-01-01

    , such as time-evolving shorelines and paleo coastlines. In this study we quantify these uncertainties and their propagation in GIA response using a Monte Carlo approach to obtain spatio-temporal patterns of GIA errors. A direct application is the error estimates in ice mass balance in Antarctica and Greenland...

  16. Multiple comparisons in genetic association studies: a hierarchical modeling approach.

    Science.gov (United States)

    Yi, Nengjun; Xu, Shizhong; Lou, Xiang-Yang; Mallick, Himel

    2014-02-01

    Multiple comparisons or multiple testing has been viewed as a thorny issue in genetic association studies aiming to detect disease-associated genetic variants from a large number of genotyped variants. We alleviate the problem of multiple comparisons by proposing a hierarchical modeling approach that is fundamentally different from the existing methods. The proposed hierarchical models simultaneously fit as many variables as possible and shrink unimportant effects towards zero. Thus, the hierarchical models yield more efficient estimates of parameters than the traditional methods that analyze genetic variants separately, and also coherently address the multiple comparisons problem due to largely reducing the effective number of genetic effects and the number of statistically "significant" effects. We develop a method for computing the effective number of genetic effects in hierarchical generalized linear models, and propose a new adjustment for multiple comparisons, the hierarchical Bonferroni correction, based on the effective number of genetic effects. Our approach not only increases the power to detect disease-associated variants but also controls the Type I error. We illustrate and evaluate our method with real and simulated data sets from genetic association studies. The method has been implemented in our freely available R package BhGLM (http://www.ssg.uab.edu/bhglm/).
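
    A rough sketch of the idea of an effective number of effects under shrinkage, assuming for illustration a ridge-type shrinkage of genetic effects so that the effective number can be taken as the trace of the hat matrix; the design matrix and shrinkage strength are invented, and the actual computation in BhGLM differs in detail.

        import numpy as np

        rng = np.random.default_rng(42)
        n_subjects, n_variants = 200, 50
        X = rng.integers(0, 3, size=(n_subjects, n_variants)).astype(float)  # genotype codes 0/1/2

        lam = 10.0  # assumed shrinkage strength, standing in for the hierarchical prior
        hat = X @ np.linalg.solve(X.T @ X + lam * np.eye(n_variants), X.T)

        m_effective = np.trace(hat)        # effective number of genetic effects
        alpha = 0.05
        print(f"effective number of effects      : {m_effective:.1f} (out of {n_variants})")
        print(f"naive Bonferroni threshold       : {alpha / n_variants:.5f}")
        print(f"hierarchical Bonferroni threshold: {alpha / m_effective:.5f}")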

  17. Impact of transport model errors on the global and regional methane emissions estimated by inverse modelling

    Directory of Open Access Journals (Sweden)

    R. Locatelli

    2013-10-01

    Full Text Available A modelling experiment has been conceived to assess the impact of transport model errors on methane emissions estimated in an atmospheric inversion system. Synthetic methane observations, obtained from 10 different model outputs from the international TransCom-CH4 model inter-comparison exercise, are combined with a prior scenario of methane emissions and sinks, and integrated into the three-component PYVAR-LMDZ-SACS (PYthon VARiational-Laboratoire de Météorologie Dynamique model with Zooming capability-Simplified Atmospheric Chemistry System) inversion system to produce 10 different methane emission estimates at the global scale for the year 2005. The same methane sinks, emissions and initial conditions have been applied to produce the 10 synthetic observation datasets. The same inversion set-up (statistical errors, prior emissions, inverse procedure) is then applied to derive flux estimates by inverse modelling. Consequently, only differences in the modelling of atmospheric transport may cause differences in the estimated fluxes. In our framework, we show that transport model errors lead to a discrepancy of 27 Tg yr−1 at the global scale, representing 5% of total methane emissions. At continental and annual scales, transport model errors are proportionally larger than at the global scale, with errors ranging from 36 Tg yr−1 in North America to 7 Tg yr−1 in Boreal Eurasia (from 23 to 48%, respectively). At the model grid-scale, the spread of inverse estimates can reach 150% of the prior flux. Therefore, transport model errors contribute significantly to overall uncertainties in emission estimates by inverse modelling, especially when small spatial scales are examined. Sensitivity tests have been carried out to estimate the impact of the measurement network and the advantage of higher horizontal resolution in transport models. The large differences found between methane flux estimates inferred in these different configurations highly

  18. Analysis and Application of Multiple-Precision Computation and Round-off Error for Nonlinear Dynamical Systems

    Institute of Scientific and Technical Information of China (English)

    2006-01-01

    This research reveals the dependency of floating point computation in nonlinear dynamical systems on machine precision and step-size by applying a multiple-precision approach in the Lorenz nonlinear equations. The paper also demonstrates the procedures for obtaining a real numerical solution in the Lorenz system with long-time integration and a new multiple-precision-based approach used to identify the maximum effective computation time (MECT) and optimal step-size (OS). In addition, the authors introduce how to analyze round-off error in a long-time integration in some typical cases of nonlinear systems and present its approximate estimate expression.
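
    A minimal multiple-precision experiment in the spirit of the study, using the mpmath library to integrate the Lorenz equations with fixed-step RK4 at two working precisions and to report when the two solutions diverge; the step size, precision levels and divergence tolerance are arbitrary illustrative choices, not the paper's settings.

        from mpmath import mp, mpf

        def lorenz_rhs(s):
            sigma, rho, beta = mpf(10), mpf(28), mpf(8) / 3
            x, y, z = s
            return [sigma * (y - x), x * (rho - z) - y, x * y - beta * z]

        def rk4_x_trajectory(dps, n_steps, h="0.01"):
            """Integrate the Lorenz system with fixed-step RK4 at `dps` decimal digits; return x(t)."""
            mp.dps = dps
            h = mpf(h)
            s = [mpf(1), mpf(1), mpf(1)]
            xs = []
            for _ in range(n_steps):
                k1 = lorenz_rhs(s)
                k2 = lorenz_rhs([s[i] + h / 2 * k1[i] for i in range(3)])
                k3 = lorenz_rhs([s[i] + h / 2 * k2[i] for i in range(3)])
                k4 = lorenz_rhs([s[i] + h * k3[i] for i in range(3)])
                s = [s[i] + h / 6 * (k1[i] + 2 * k2[i] + 2 * k3[i] + k4[i]) for i in range(3)]
                xs.append(float(s[0]))
            return xs

        low = rk4_x_trajectory(dps=16, n_steps=3000)
        high = rk4_x_trajectory(dps=50, n_steps=3000)

        # First step at which round-off makes the low-precision run depart noticeably from the high-precision one.
        diverge = next((i for i, (a, b) in enumerate(zip(low, high)) if abs(a - b) > 1e-3), None)
        print("divergence step:", diverge)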

  19. Bayesian inversion of microtremor array dispersion data with hierarchical trans-dimensional earth and autoregressive error models

    Science.gov (United States)

    Molnar, S.; Dettmer, J.; Steininger, G.; Dosso, S. E.; Cassidy, J. F.

    2013-12-01

    This paper applies hierarchical, trans-dimensional Bayesian models for earth and residual-error parametrizations to the inversion of microtremor array dispersion data for shear-wave velocity (Vs) structure. The earth is parametrized in terms of flat-lying, homogeneous layers and residual errors are parametrized with a first-order autoregressive data-error model. The inversion accounts for the limited knowledge of the optimal earth and residual error model parametrization (e.g. the number of layers in the Vs profile) in the resulting Vs parameter uncertainty estimates. The assumed earth model parametrization influences estimates of parameter values and uncertainties due to different parametrizations leading to different ranges of data predictions. The support of the data for a particular model is often non-unique and several parametrizations may be supported. A trans-dimensional formulation accounts for this non-uniqueness by including a model-indexing parameter as an unknown so that groups of models (identified by the index) are considered in the results. In addition, serial residual-error correlations are addressed by augmenting the geophysical forward model with a hierarchical autoregressive error model that can account for a wide range of error processes with a small number of parameters. Hence, the limited knowledge about the true statistical distribution of data errors is also accounted for in the earth model parameter estimates, resulting in more realistic uncertainties and parameter values. Hierarchical autoregressive error models do not rely on point estimates of the model vector to estimate residual-error statistics, and have no requirement for computing the inverse or determinant of a covariance matrix. This approach is particularly useful for trans-dimensional inverse problems, as point estimates may not be representative of the state space that spans multiple subspaces of different dimensions. The autoregressive process is restricted to first order and
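
    The first-order autoregressive residual-error model has a compact likelihood; the sketch below, with simulated residuals and arbitrary parameter values, evaluates the AR(1) log-likelihood directly, without building or inverting a covariance matrix, which is the computational advantage noted above.

        import numpy as np

        def ar1_loglik(residuals, phi, sigma):
            """Gaussian log-likelihood of residuals under an AR(1) error model:
            r[0] ~ N(0, sigma^2 / (1 - phi^2)), r[t] | r[t-1] ~ N(phi * r[t-1], sigma^2)."""
            r = np.asarray(residuals, dtype=float)
            var0 = sigma**2 / (1.0 - phi**2)
            ll = -0.5 * (np.log(2.0 * np.pi * var0) + r[0] ** 2 / var0)
            innov = r[1:] - phi * r[:-1]
            ll += np.sum(-0.5 * (np.log(2.0 * np.pi * sigma**2) + innov**2 / sigma**2))
            return ll

        rng = np.random.default_rng(3)
        r = np.zeros(500)
        for t in range(1, r.size):
            r[t] = 0.7 * r[t - 1] + 0.1 * rng.standard_normal()

        # The correlated-error model should fit these residuals better than an uncorrelated one.
        print(ar1_loglik(r, phi=0.7, sigma=0.1) > ar1_loglik(r, phi=0.0, sigma=0.1))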

  20. Corruption of parameter behavior and regionalization by model and forcing data errors: A Bayesian example using the SNOW17 model

    Science.gov (United States)

    He, Minxue; Hogue, Terri S.; Franz, Kristie J.; Margulis, Steven A.; Vrugt, Jasper A.

    2011-07-01

    The current study evaluates the impacts of various sources of uncertainty involved in hydrologic modeling on parameter behavior and regionalization utilizing different Bayesian likelihood functions and the Differential Evolution Adaptive Metropolis (DREAM) algorithm. The developed likelihood functions differ in their underlying assumptions and treatment of error sources. We apply the developed method to a snow accumulation and ablation model (National Weather Service SNOW17) and generate parameter ensembles to predict snow water equivalent (SWE). Observational data include precipitation and air temperature forcing along with SWE measurements from 24 sites with diverse hydroclimatic characteristics. A multiple linear regression model is used to construct regionalization relationships between model parameters and site characteristics. Results indicate that model structural uncertainty has the largest influence on SNOW17 parameter behavior. Precipitation uncertainty is the second largest source of uncertainty, showing greater impact at wetter sites. Measurement uncertainty in SWE tends to have little impact on the final model parameters and resulting SWE predictions. Considering all sources of uncertainty, parameters related to air temperature and snowfall fraction exhibit the strongest correlations to site characteristics. Parameters related to the length of the melting period also show high correlation to site characteristics. Finally, model structural uncertainty and precipitation uncertainty dramatically alter parameter regionalization relationships in comparison to cases where only uncertainty in model parameters or output measurements is considered. Our results demonstrate that accurate treatment of forcing, parameter, model structural, and calibration data errors is critical for deriving robust regionalization relationships.

  1. Error Threshold for Spatially Resolved Evolution in the Quasispecies Model

    Energy Technology Data Exchange (ETDEWEB)

    Altmeyer, S.; McCaskill, J. S.

    2001-06-18

    The error threshold for quasispecies in 1, 2, 3, and ∞ dimensions is investigated by stochastic simulation and analytically. The results show a monotonic decrease in the maximal sustainable error probability with decreasing diffusion coefficient, independently of the spatial dimension. It is thereby established that physical interactions between sequences are necessary in order for spatial effects to enhance the stabilization of biological information. The analytically tractable behavior in an ∞-dimensional (simplex) space provides a good guide to the spatial dependence of the error threshold in lower dimensional Euclidean space.

  2. Statistical analysis-based error models for the Microsoft Kinect(TM) depth sensor.

    Science.gov (United States)

    Choo, Benjamin; Landau, Michael; DeVore, Michael; Beling, Peter A

    2014-09-18

    The stochastic error characteristics of the Kinect sensing device are presented for each axis direction. Depth (z) directional error is measured using a flat surface, and horizontal (x) and vertical (y) errors are measured using a novel 3D checkerboard. Results show that the stochastic nature of the Kinect measurement error is affected mostly by the depth at which the object being sensed is located, though radial factors must be considered, as well. Measurement and statistics-based models are presented for the stochastic error in each axis direction, which are based on the location and depth value of empirical data measured for each pixel across the entire field of view. The resulting models are compared against existing Kinect error models, and through these comparisons, the proposed model is shown to be a more sophisticated and precise characterization of the Kinect error distributions.

  3. Statistical Analysis-Based Error Models for the Microsoft Kinect™ Depth Sensor

    Science.gov (United States)

    Choo, Benjamin; Landau, Michael; DeVore, Michael; Beling, Peter A.

    2014-01-01

    The stochastic error characteristics of the Kinect sensing device are presented for each axis direction. Depth (z) directional error is measured using a flat surface, and horizontal (x) and vertical (y) errors are measured using a novel 3D checkerboard. Results show that the stochastic nature of the Kinect measurement error is affected mostly by the depth at which the object being sensed is located, though radial factors must be considered, as well. Measurement and statistics-based models are presented for the stochastic error in each axis direction, which are based on the location and depth value of empirical data measured for each pixel across the entire field of view. The resulting models are compared against existing Kinect error models, and through these comparisons, the proposed model is shown to be a more sophisticated and precise characterization of the Kinect error distributions. PMID:25237896

  4. Avoiding and identifying errors in health technology assessment models: qualitative study and methodological review.

    Science.gov (United States)

    Chilcott, J; Tappenden, P; Rawdin, A; Johnson, M; Kaltenthaler, E; Paisley, S; Papaioannou, D; Shippam, A

    2010-05-01

    Health policy decisions must be relevant, evidence-based and transparent. Decision-analytic modelling supports this process but its role is reliant on its credibility. Errors in mathematical decision models or simulation exercises are unavoidable but little attention has been paid to processes in model development. Numerous error avoidance/identification strategies could be adopted but it is difficult to evaluate the merits of strategies for improving the credibility of models without first developing an understanding of error types and causes. The study aims to describe the current comprehension of errors in the HTA modelling community and generate a taxonomy of model errors. Four primary objectives are to: (1) describe the current understanding of errors in HTA modelling; (2) understand current processes applied by the technology assessment community for avoiding errors in development, debugging and critically appraising models for errors; (3) use HTA modellers' perceptions of model errors with the wider non-HTA literature to develop a taxonomy of model errors; and (4) explore potential methods and procedures to reduce the occurrence of errors in models. It also describes the model development process as perceived by practitioners working within the HTA community. A methodological review was undertaken using an iterative search methodology. Exploratory searches informed the scope of interviews; later searches focused on issues arising from the interviews. Searches were undertaken in February 2008 and January 2009. In-depth qualitative interviews were performed with 12 HTA modellers from academic and commercial modelling sectors. All qualitative data were analysed using the Framework approach. Descriptive and explanatory accounts were used to interrogate the data within and across themes and subthemes: organisation, roles and communication; the model development process; definition of error; types of model error; strategies for avoiding errors; strategies for

  5. Bayesian modeling of measurement error in predictor variables using item response theory

    NARCIS (Netherlands)

    Fox, Gerardus J.A.; Glas, Cornelis A.W.

    2000-01-01

    This paper focuses on handling measurement error in predictor variables using item response theory (IRT). Measurement error is of great importance in the assessment of theoretical constructs, such as intelligence or the school climate. Measurement error is modeled by treating the predictors as unobserved

  6. Bayesian modeling of measurement error in predictor variables using item response theory

    NARCIS (Netherlands)

    Fox, Jean-Paul; Glas, Cees A.W.

    2000-01-01

    This paper focuses on handling measurement error in predictor variables using item response theory (IRT). Measurement error is of great importance in the assessment of theoretical constructs, such as intelligence or the school climate. Measurement error is modeled by treating the predictors as unobserved

  7. Making refractive error services sustainable: the International Eye Foundation model

    Directory of Open Access Journals (Sweden)

    Victoria M Sheffield

    2007-09-01

    Full Text Available The International Eye Foundation (IEF) believes that the most effective strategy for making spectacles affordable and accessible is to integrate refractive error services into ophthalmic services and to run the refractive error service as a business – thereby making it sustainable. An optical service should be able to deal with high volumes of patients and generate enough revenue – not just to cover its own costs, but also to contribute to ophthalmic clinical services.

  8. Cerebellar encoding of multiple candidate error cues in the service of motor learning.

    Science.gov (United States)

    Guo, Christine C; Ke, Michael C; Raymond, Jennifer L

    2014-07-23

    For learning to occur through trial and error, the nervous system must effectively detect and encode performance errors. To examine this process, we designed a set of oculomotor learning tasks with more than one visual object providing potential error cues, as would occur in a natural visual scene. A task-relevant visual target and a task-irrelevant visual background both influenced vestibulo-ocular reflex learning in rhesus monkeys. Thus, motor learning does not identify a single error cue based on behavioral relevance, but can be simultaneously influenced by more than one cue. Moreover, the relative weighting of the different cues could vary. If the speed of the visual target's motion on the retina was low (≪1°/s), background motion dominated learning, but if target speed was high, the effects of the background were suppressed. The target and background motion had similar, nonlinear effects on the putative neural instructive signals carried by cerebellar climbing fibers, but with a stronger influence of the background on the climbing fibers than on learning. In contrast, putative neural instructive signals carried by the simple spikes of Purkinje cells were influenced solely by the motion of the visual target. Because they are influenced by different cues during training, joint control of learning by the climbing fibers and Purkinje cells may expand the learning capacity of the cerebellar circuit. Copyright © 2014 the authors 0270-6474/14/339880-11$15.00/0.

  9. Trends of Summer Air Temperatures in the Romanian Carpathians Detected by Using a Serially Correlated Errors Model

    Directory of Open Access Journals (Sweden)

    Adina-Eliza CROITORU

    2014-11-01

    Full Text Available This paper investigates summer temperature trends in the Romanian Carpathian Mountains, for three types of topographies: summit, slope and depression. We used a change-point regression model with serially correlated errors and compared it with a mainstream change-point model with independent errors. Statistical theory ensures that the former model gives a more accurate trend analysis than the latter model. For both models we identified strongly decreasing trends before the change-point and strongly increasing trends afterwards for most summer temperature series. The change-points are more consistent with each other, in the early 1980s, when using the former model. These general results occur for all topography types. A separate multiple regression model reveals that the temperature dynamics in the Romanian Carpathians can be explained by a linear effect of several major atmospheric circulation patterns.

  10. Framework for Understanding Structural Errors (FUSE): a modular framework to diagnose differences between hydrological models

    Science.gov (United States)

    Clark, Martyn P.; Slater, Andrew G.; Rupp, David E.; Woods, Ross A.; Vrugt, Jasper A.; Gupta, Hoshin V.; Wagener, Thorsten; Hay, Lauren E.

    2008-01-01

    The problems of identifying the most appropriate model structure for a given problem and quantifying the uncertainty in model structure remain outstanding research challenges for the discipline of hydrology. Progress on these problems requires understanding of the nature of differences between models. This paper presents a methodology to diagnose differences in hydrological model structures: the Framework for Understanding Structural Errors (FUSE). FUSE was used to construct 79 unique model structures by combining components of 4 existing hydrological models. These new models were used to simulate streamflow in two of the basins used in the Model Parameter Estimation Experiment (MOPEX): the Guadalupe River (Texas) and the French Broad River (North Carolina). Results show that the new models produced simulations of streamflow that were at least as good as the simulations produced by the models that participated in the MOPEX experiment. Our initial application of the FUSE method for the Guadalupe River exposed relationships between model structure and model performance, suggesting that the choice of model structure is just as important as the choice of model parameters. However, further work is needed to evaluate model simulations using multiple criteria to diagnose the relative importance of model structural differences in various climate regimes and to assess the amount of independent information in each of the models. This work will be crucial to both identifying the most appropriate model structure for a given problem and quantifying the uncertainty in model structure. To facilitate research on these problems, the FORTRAN-90 source code for FUSE is available upon request from the lead author.

  11. The problem with total error models in establishing performance specifications and a simple remedy.

    Science.gov (United States)

    Krouwer, Jan S

    2016-08-01

    A recent issue in this journal revisited performance specifications since the Stockholm conference. Of the three recommended methods, two use total error models to establish performance specifications. It is shown that the most commonly used total error model - the Westgard model - is deficient, yet even more complete models fail to capture all errors that comprise total error. Moreover, total error models are often set at 95% of results, which leave 5% of results as unspecified. Glucose meter performance standards are used to illustrate these problems. The Westgard model is useful to assess assay performance but not to set performance specifications. Total error can be used to set performance specifications if the specifications include 100% of the results.
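
    The Westgard-style total error computation discussed above is a one-line formula; a small sketch with invented glucose meter figures, using the common z = 1.65 convention that covers roughly 95% of results (which is exactly the 5% left unspecified).

        def westgard_total_error(bias_percent, cv_percent, z=1.65):
            """Westgard-style total analytical error: TE = |bias| + z * CV."""
            return abs(bias_percent) + z * cv_percent

        # Hypothetical glucose meter performance figures (illustrative, not from any standard).
        bias, cv = 3.0, 4.0    # percent
        spec = 15.0            # allowable total error, percent

        te = westgard_total_error(bias, cv)
        verdict = "meets" if te <= spec else "fails"
        print(f"total error = {te:.1f}%  ->  {verdict} the {spec:.0f}% specification")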

  12. Model-observer similarity, error modeling and social learning in rhesus macaques.

    Science.gov (United States)

    Monfardini, Elisabetta; Hadj-Bouziane, Fadila; Meunier, Martine

    2014-01-01

    Monkeys readily learn to discriminate between rewarded and unrewarded items or actions by observing their conspecifics. However, they do not systematically learn from humans. Understanding what makes human-to-monkey transmission of knowledge work or fail could help identify mediators and moderators of social learning that operate regardless of language or culture, and transcend inter-species differences. Do monkeys fail to learn when human models show a behavior too dissimilar from the animals' own, or when they show a faultless performance devoid of error? To address this question, six rhesus macaques trained to find which object within a pair concealed a food reward were successively tested with three models: a familiar conspecific, a 'stimulus-enhancing' human actively drawing the animal's attention to one object of the pair without actually performing the task, and a 'monkey-like' human performing the task in the same way as the monkey model did. Reward was manipulated to ensure that all models showed equal proportions of errors and successes. The 'monkey-like' human model improved the animals' subsequent object discrimination learning as much as a conspecific did, whereas the 'stimulus-enhancing' human model tended on the contrary to retard learning. Modeling errors rather than successes optimized learning from the monkey and 'monkey-like' models, while exacerbating the adverse effect of the 'stimulus-enhancing' model. These findings identify error modeling as a moderator of social learning in monkeys that amplifies the models' influence, whether beneficial or detrimental. By contrast, model-observer similarity in behavior emerged as a mediator of social learning, that is, a prerequisite for a model to work in the first place. The latter finding suggests that, as preverbal infants, macaques need to perceive the model as 'like-me' and that, once this condition is fulfilled, any agent can become an effective model.

  13. Modeling and Sensitivity Analysis of Navigation Parameter Errors for Airborne Synthetic Aperture Radar Stereo Geolocation

    Institute of Scientific and Technical Information of China (English)

    PANG Lei; ZHANG Jixian; YAN Qin

    2010-01-01

    For the high-resolution airborne synthetic aperture radar (SAR) stereo geolocation application, the final geolocation accuracy is influenced by various error parameter sources. In this paper, an airborne SAR stereo geolocation parameter error model, involving the parameter errors derived from the navigation system on the flight platform, has been put forward. Moreover, a kind of near-direct method for modeling and sensitivity analysis of navigation parameter errors is also given. This method directly uses the ground reference to calculate the covariance matrix relationship between the parameter errors and the eventual geolocation errors for ground target points. In addition, utilizing true flight track parameter errors, this paper gives a verification of the method and a corresponding sensitivity analysis for the airborne SAR stereo geolocation model, and proves its efficiency.
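
    The near-direct covariance mapping can be illustrated generically: for a geolocation function g(p) and a navigation parameter error covariance C_p, the geolocation error covariance is approximately J C_p J^T, with J the Jacobian of g. The geolocation function and numbers below are placeholders, not the SAR stereo geolocation model itself.

        import numpy as np

        def geolocate(p):
            """Placeholder mapping from navigation parameters to ground coordinates."""
            x0, y0, heading, velocity = p
            return np.array([x0 + 100.0 * velocity * np.cos(heading),
                             y0 + 100.0 * velocity * np.sin(heading)])

        def numerical_jacobian(f, p, eps=1e-6):
            p = np.asarray(p, dtype=float)
            f0 = f(p)
            J = np.zeros((f0.size, p.size))
            for j in range(p.size):
                dp = np.zeros_like(p)
                dp[j] = eps
                J[:, j] = (f(p + dp) - f0) / eps
            return J

        p0 = np.array([0.0, 0.0, 0.3, 1.0])     # nominal navigation parameters (made up)
        C_p = np.diag([1.0, 1.0, 1e-4, 1e-2])   # assumed navigation parameter error covariance

        J = numerical_jacobian(geolocate, p0)
        C_geo = J @ C_p @ J.T                    # propagated geolocation error covariance
        print("geolocation error std (m):", np.sqrt(np.diag(C_geo)))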

  14. Multiple Model Approaches to Modelling and Control,

    DEFF Research Database (Denmark)

    on the ease with which prior knowledge can be incorporated. It is interesting to note that researchers in Control Theory, Neural Networks, Statistics, Artificial Intelligence and Fuzzy Logic have more or less independently developed very similar modelling methods, calling them Local Model Networks, Operating... of introduction of existing knowledge, as well as the ease of model interpretation. This book attempts to outline much of the common ground between the various approaches, encouraging the transfer of ideas. Recent progress in algorithms and analysis is presented, with constructive algorithms for automated model...

  15. An efficient algorithm for identifying matches with errors in multiple long molecular sequences.

    Science.gov (United States)

    Leung, M Y; Blaisdell, B E; Burge, C; Karlin, S

    1991-10-20

    An efficient algorithm is described for finding matches, repeats and other word relations, allowing for errors, in large data sets of long molecular sequences. The algorithm entails hashing on fixed-size words in conjunction with the use of a linked list connecting all occurrences of the same word. The average memory and run time requirement both increase almost linearly with the total sequence length. Some results of the program's performance on a database of Escherichia coli DNA sequences are presented.
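
    The core data structure described above, hashing fixed-size words and chaining all occurrences of the same word, can be sketched in a few lines; only exact word matches are collected here, whereas the published algorithm extends such seeds to matches with errors.

        from collections import defaultdict

        def index_words(sequences, k):
            """Map every length-k word to the list of (sequence index, offset) where it occurs."""
            occurrences = defaultdict(list)
            for seq_id, seq in enumerate(sequences):
                for i in range(len(seq) - k + 1):
                    occurrences[seq[i:i + k]].append((seq_id, i))
            return occurrences

        sequences = ["ACGTACGGACGT", "TTACGTAAACGG", "GGGACGTACGTT"]
        index = index_words(sequences, k=4)

        # Words occurring in more than one sequence are candidate seeds for repeats and matches.
        for word, occ in sorted(index.items()):
            if len({seq_id for seq_id, _ in occ}) > 1:
                print(word, occ)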

  16. Mapping vulnerability of multiple aquifers using multiple models and fuzzy logic to objectively derive model structures.

    Science.gov (United States)

    Nadiri, Ata Allah; Sedghi, Zahra; Khatibi, Rahman; Gharekhani, Maryam

    2017-09-01

    Driven by contamination risks, mapping Vulnerability Indices (VI) of multiple aquifers (both unconfined and confined) is investigated by integrating the basic DRASTIC framework with multiple models overarched by Artificial Neural Networks (ANN). The DRASTIC framework is a proactive tool to assess VI values using the data from the hydrosphere, lithosphere and anthroposphere. However, a research case arises for the application of multiple models on the ground of poor determination coefficients between the VI values and non-point anthropogenic contaminants. The paper formulates SCFL models, which are derived from the multiple model philosophy of Supervised Committee (SC) machines and Fuzzy Logic (FL) and hence SCFL as their integration. The Fuzzy Logic-based (FL) models include: Sugeno Fuzzy Logic (SFL), Mamdani Fuzzy Logic (MFL), Larsen Fuzzy Logic (LFL) models. The basic DRASTIC framework uses prescribed rating and weighting values based on expert judgment but the four FL-based models (SFL, MFL, LFL and SCFL) derive their values as per internal strategy within these models. The paper reports that FL and multiple models improve considerably on the correlation between the modeled vulnerability indices and observed nitrate-N values and as such it provides evidence that the SCFL multiple models can be an alternative to the basic framework even for multiple aquifers. The study area with multiple aquifers is in Varzeqan plain, East Azerbaijan, northwest Iran. Copyright © 2017 Elsevier B.V. All rights reserved.

  17. Hubble Frontier Fields: systematic errors in strong lensing models of galaxy clusters - implications for cosmography

    Science.gov (United States)

    Acebron, Ana; Jullo, Eric; Limousin, Marceau; Tilquin, André; Giocoli, Carlo; Jauzac, Mathilde; Mahler, Guillaume; Richard, Johan

    2017-09-01

    Strong gravitational lensing by galaxy clusters is a fundamental tool to study dark matter and constrain the geometry of the Universe. Recently, the Hubble Space Telescope Frontier Fields programme has allowed a significant improvement of mass and magnification measurements but lensing models still have a residual root mean square between 0.2 arcsec and a few arcseconds, not yet completely understood. Systematic errors have to be better understood and treated in order to use strong lensing clusters as reliable cosmological probes. We have analysed two simulated Hubble-Frontier-Fields-like clusters from the Hubble Frontier Fields Comparison Challenge, Ares and Hera. We use several estimators (relative bias on magnification, density profiles, ellipticity and orientation) to quantify the goodness of our reconstructions by comparing our multiple models, optimized with the parametric software lenstool, with the input models. We have quantified the impact of systematic errors arising, first, from the choice of different density profiles and configurations and, secondly, from the availability of constraints (spectroscopic or photometric redshifts, redshift ranges of the background sources) in the parametric modelling of strong lensing galaxy clusters and therefore on the retrieval of cosmological parameters. We find that substructures in the outskirts have a significant impact on the position of the multiple images, yielding tighter cosmological contours. The need for wide-field imaging around massive clusters is thus reinforced. We show that competitive cosmological constraints can be obtained also with complex multimodal clusters and that photometric redshifts improve the constraints on cosmological parameters when considering a narrow range of (spectroscopic) redshifts for the sources.

  18. Balancing Type One and Two Errors in Multiple Testing for Differential Expression of Genes.

    Science.gov (United States)

    Gordon, Alexander; Chen, Linlin; Glazko, Galina; Yakovlev, Andrei

    2009-03-15

    A new procedure is proposed to balance type I and II errors in significance testing for differential expression of individual genes. Suppose that a collection, F(k), of k lists of selected genes is available, each of them approximating by their content the true set of differentially expressed genes. For example, such sets can be generated by a subsampling counterpart of the delete-d-jackknife method controlling the per-comparison error rate for each subsample. A final list of candidate genes, denoted by S(*), is composed in such a way that its contents be closest in some sense to all the sets thus generated. To measure "closeness" of gene lists, we introduce an asymmetric distance between sets with its asymmetry arising from a generally unequal assignment of the relative costs of type I and type II errors committed in the course of gene selection. The optimal set S(*) is defined as a minimizer of the average asymmetric distance from an arbitrary set S to all sets in the collection F(k). The minimization problem can be solved explicitly, leading to a frequency criterion for the inclusion of each gene in the final set. The proposed method is tested by resampling from real microarray gene expression data with artificially introduced shifts in expression levels of pre-defined genes, thereby mimicking their differential expression.
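
    Reading the asymmetric distance as d(S, S_i) = c1*|S \ S_i| + c2*|S_i \ S|, with c1 and c2 the relative costs of type I and type II errors (my paraphrase of the construction; the paper's exact weighting may differ), minimizing the average distance over the collection reduces to a per-gene frequency rule: include a gene if and only if it appears in at least a fraction c1/(c1+c2) of the k lists. A small sketch:

        def optimal_gene_set(gene_lists, cost_fp=1.0, cost_fn=1.0):
            """Frequency criterion: keep a gene iff its inclusion frequency >= cost_fp / (cost_fp + cost_fn)."""
            k = len(gene_lists)
            counts = {}
            for genes in gene_lists:
                for g in set(genes):
                    counts[g] = counts.get(g, 0) + 1
            threshold = cost_fp / (cost_fp + cost_fn)
            return {g for g, c in counts.items() if c / k >= threshold}

        lists = [{"A", "B", "C"}, {"A", "B"}, {"A", "C", "D"}, {"A", "B", "D"}]
        print(optimal_gene_set(lists, cost_fp=1.0, cost_fn=1.0))   # genes in at least 50% of the lists
        print(optimal_gene_set(lists, cost_fp=3.0, cost_fn=1.0))   # stricter: at least 75% of the lists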

  19. Quantification of Transport Model Error Impacts on CO2 Inversions Using NASA's GEOS-5 GCM

    Science.gov (United States)

    Ott, L.; Pawson, S.; Weir, B.

    2014-12-01

    Remote sensing observations of CO2 offer the opportunity to reduce uncertainty in global carbon flux estimates. However, a number of studies have shown that inversion flux estimates are strongly influenced by errors in model transport. We will present results from modeling studies designed to quantify how such errors influence simulations of surface and column CO2 mixing ratios. These studies were conducted using the Goddard Earth Observing System, version 5 (GEOS-5) Atmospheric General Circulation Model (AGCM) and the implementation of a suite of tracers associated with errors in boundary layer, convective, and large scale transport. Unlike traditional tagged tracers which are emitted by a certain process or region, error tracers are emitted as air parcels are transported through the atmosphere. The magnitude of error tracer emissions is based on previously published ensembles of AGCM simulations with perturbations to subgrid convective and boundary layer transport, and on comparisons of several reanalysis products to estimate errors in large scale wind fields. Transport error tracers are simulated with several different e-folding lifetimes (e.g. 1, 4, 10, and 30 day) to examine differences between transient and persistent model errors. This quantification of transport error is then used in an illustrative Bayesian synthesis inversion to demonstrate how transport errors influence surface CO2 mixing ratios and how this translates into inferred biosphere flux error.

  20. Spatial Distribution of the Errors in Modeling the Mid-Latitude Critical Frequencies by Different Models

    Science.gov (United States)

    Kilifarska, N. A.

    Several models describe the spatial distribution of the greatest frequency yielding reflection from the F2 ionospheric layer (foF2). However, the distribution of these models' errors over the globe, and how the errors depend on season, solar activity, etc., has remained unknown. The aim of the present paper is therefore to compare the accuracy of CCIR, URSI, and a newly created theoretical model in describing the latitudinal and longitudinal variation of the mid-latitude maximum electron density. A comparison has been made between the above-mentioned models and all VI data available from the Boulder data bank (between 35 deg and 70 deg). Data for three whole years with different solar activity - 1976 (F_10.7 = 73.6), 1981 (F_10.7 = 20.6), 1983 (F_10.7 = 119.6) - have been compared. The final results show that: 1. the areas with the greatest and smallest errors depend on UT, season and solar activity; 2. the error distributions of the CCIR and URSI models are very similar and do not coincide with those of the theoretical model. The latter result indicates that the theoretical model, described briefly below, may be a real alternative to the empirical CCIR and URSI models. The different spatial distributions of the models' errors give users a chance to choose the most appropriate model, depending on their needs. Taking into account that the theoretical model has equal accuracy in regions with many ionosonde stations and in regions without any, this result shows that our model can be used to improve the global mapping of the mid-latitude ionosphere. Moreover, if real values of the input aeronomical parameters (neutral composition, temperatures and winds) are used, it may be expected that this theoretical model can be applied for real or near-real-time mapping of the main ionospheric parameters (foF2 and hmF2).

  1. Error Modeling and Analysis for InSAR Spatial Baseline Determination of Satellite Formation Flying

    Directory of Open Access Journals (Sweden)

    Jia Tu

    2012-01-01

    Full Text Available Spatial baseline determination is a key technology for interferometric synthetic aperture radar (InSAR) missions. Based on the intersatellite baseline measurement using dual-frequency GPS, errors induced by InSAR spatial baseline measurement are studied in detail. The classifications and characters of errors are analyzed, and models for errors are set up. The simulations of single factor and total error sources are selected to evaluate the impacts of errors on spatial baseline measurement. Single factor simulations are used to analyze the impact of the error of a single type, while total error sources simulations are used to analyze the impacts of error sources induced by GPS measurement, baseline transformation, and the entire spatial baseline measurement, respectively. Simulation results show that errors related to GPS measurement are the main error sources for the spatial baseline determination, and carrier phase noise of GPS observation and fixing error of GPS receiver antenna are main factors of errors related to GPS measurement. In addition, according to the error values listed in this paper, 1 mm level InSAR spatial baseline determination should be realized.

  2. A Generalized Process Model of Human Action Selection and Error and its Application to Error Prediction

    Science.gov (United States)

    2014-07-01

    Macmillan & Creelman, 2005). This is a quite high degree of discriminability and it means that when the decision model predicts a probability of... ROC analysis. Pattern Recognition Letters, 27(8), 861-874. Retrieved from Google Scholar. Macmillan, N. A., & Creelman, C. D. (2005). Detection

  3. Error Propagation in Equations for Geochemical Modeling of Radiogenic Isotopes in Two-Component Mixing

    Indian Academy of Sciences (India)

    Surendra P Verma

    2000-03-01

    This paper presents error propagation equations for modeling of radiogenic isotopes during mixing of two components or end-members. These equations can be used to estimate errors on an isotopic ratio in the mixture of two components, as a function of the analytical errors or the total errors of geological field sampling and analytical errors. Two typical cases ("Small errors" and "Large errors") are illustrated for mixing of Sr isotopes. Similar examples can be formulated for the other radiogenic isotopic ratios. Actual isotopic data for sediment and basalt samples from the Cocos plate are also included to further illustrate the use of these equations. The isotopic compositions of the predicted mixtures can be used to constrain the origin of magmas in the central part of the Mexican Volcanic Belt. These examples show the need of high quality experimental data for them to be useful in geochemical modeling of magmatic processes.

  4. Development and estimation of a semi-compensatory model with flexible error structure

    DEFF Research Database (Denmark)

    Kaplan, Sigal; Shiftan, Yoram; Bekhor, Shlomo

    -response model and the utility-based choice by alternatively (i) a nested-logit model and (ii) an error-component logit. In order to test the suggested methodology, the model was estimated for a sample of 1,893 ranked choices and respective threshold values from 631 students who participated in a web-based two..., a disadvantage of current semi-compensatory models versus compensatory models is their behaviorally non-realistic assumption of an independent error structure. This study proposes a novel semi-compensatory model incorporating a flexible error structure. Specifically, the model represents a sequence...

  5. FUZZY MODEL OPTIMIZATION FOR TIME SERIES DATA USING A TRANSLATION IN THE EXTENT OF MEAN ERROR

    Directory of Open Access Journals (Sweden)

    Nurhayadi

    2014-01-01

    Full Text Available Recently, many researchers have written about forecasting stock prices, electricity load demand and academic enrollment using fuzzy methods. In general, however, such modeling does not consider the position of the model relative to the actual data, which means that the error is not handled optimally. Error that is not managed well reduces forecasting accuracy. Therefore, this paper discusses reducing error using model translation. The error to be reduced is the Mean Square Error (MSE). The analysis is done mathematically, and the empirical study applies translation to a fuzzy model for enrollment forecasting at the University of Alabama. The results show that a translation in the extent of the mean error can reduce the MSE.
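
    The translation idea, shifting the fitted model by the mean of its errors so that the MSE drops by exactly the square of that mean, can be verified in a few lines; the enrollment-style numbers below are illustrative, not the Alabama data or the paper's fuzzy model output.

        import numpy as np

        actual   = np.array([13100, 13600, 13900, 14700, 15400, 15300, 15600], dtype=float)
        forecast = np.array([13300, 13800, 14000, 14500, 15200, 15500, 15400], dtype=float)  # hypothetical fuzzy-model output

        mean_error = np.mean(actual - forecast)
        translated = forecast + mean_error          # translate the whole model by its mean error

        mse = lambda a, f: float(np.mean((a - f) ** 2))
        print(f"MSE before translation: {mse(actual, forecast):.1f}")
        print(f"MSE after  translation: {mse(actual, translated):.1f}")   # smaller by mean_error**2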

  6. Spindle Thermal Error Optimization Modeling of a Five-axis Machine Tool

    Institute of Scientific and Technical Information of China (English)

    Qianjian GUO; Shuo FAN; Rufeng XU; Xiang CHENG; Guoyong ZHAO; Jianguo YANG

    2017-01-01

    Aiming at the problem of low machining accuracy and uncontrollable thermal errors of NC machine tools, spindle thermal error measurement, modeling and compensation of a two turntable five-axis machine tool are researched. Measurement experiments of heat sources and thermal errors are carried out, and the GRA (grey relational analysis) method is introduced into the selection of temperature variables used for thermal error modeling. In order to analyze the influence of different heat sources on spindle thermal errors, an ANN (artificial neural network) model is presented, and the ABC (artificial bee colony) algorithm is introduced to train the link weights of the ANN; a new ABC-NN (artificial bee colony-based neural network) modeling method is proposed and used in the prediction of spindle thermal errors. In order to test the prediction performance of the ABC-NN model, an experiment system is developed; the prediction results of LSR (least squares regression), ANN and ABC-NN are compared with the measurement results of spindle thermal errors. Experiment results show that the prediction accuracy of the ABC-NN model is higher than that of LSR and ANN, and the residual error is smaller than 3 μm, so the new modeling method is feasible. The proposed research provides instruction to compensate thermal errors and improve machining accuracy of NC machine tools.

  7. Spindle Thermal Error Optimization Modeling of a Five-axis Machine Tool

    Science.gov (United States)

    Guo, Qianjian; Fan, Shuo; Xu, Rufeng; Cheng, Xiang; Zhao, Guoyong; Yang, Jianguo

    2017-03-01

    Aiming at the problem of low machining accuracy and uncontrollable thermal errors of NC machine tools, spindle thermal error measurement, modeling and compensation of a two turntable five-axis machine tool are researched. Measurement experiments of heat sources and thermal errors are carried out, and the GRA (grey relational analysis) method is introduced into the selection of temperature variables used for thermal error modeling. In order to analyze the influence of different heat sources on spindle thermal errors, an ANN (artificial neural network) model is presented, and the ABC (artificial bee colony) algorithm is introduced to train the link weights of the ANN; a new ABC-NN (artificial bee colony-based neural network) modeling method is proposed and used in the prediction of spindle thermal errors. In order to test the prediction performance of the ABC-NN model, an experiment system is developed; the prediction results of LSR (least squares regression), ANN and ABC-NN are compared with the measurement results of spindle thermal errors. Experiment results show that the prediction accuracy of the ABC-NN model is higher than that of LSR and ANN, and the residual error is smaller than 3 μm, so the new modeling method is feasible. The proposed research provides instruction to compensate thermal errors and improve machining accuracy of NC machine tools.
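
    Grey relational analysis as used here for temperature variable selection can be sketched generically: each candidate temperature series and the thermal error series are normalized, a grey relational grade is computed for each pair, and the variables are ranked by grade. The data, the distinguishing coefficient of 0.5 and the simplified per-series form of the relational coefficient below are illustrative assumptions, not the authors' implementation.

        import numpy as np

        def grey_relational_grade(reference, candidate, rho=0.5):
            """Simplified grey relational grade between a reference series and one candidate series."""
            norm = lambda s: (s - s.min()) / (s.max() - s.min())
            r, c = norm(np.asarray(reference, float)), norm(np.asarray(candidate, float))
            delta = np.abs(r - c)
            coeff = (delta.min() + rho * delta.max()) / (delta + rho * delta.max())
            return float(coeff.mean())

        rng = np.random.default_rng(7)
        t = np.linspace(0.0, 1.0, 120)
        thermal_error = 25 * (1 - np.exp(-3 * t)) + rng.normal(0, 0.3, t.size)   # synthetic spindle drift (um)
        temperatures = {
            "spindle bearing": 20 + 8 * (1 - np.exp(-3 * t)) + rng.normal(0, 0.2, t.size),
            "motor housing":   20 + 5 * (1 - np.exp(-1 * t)) + rng.normal(0, 0.2, t.size),
            "ambient":         20 + rng.normal(0, 0.2, t.size),
        }

        ranking = sorted(((grey_relational_grade(thermal_error, s), name) for name, s in temperatures.items()),
                         reverse=True)
        for grade, name in ranking:
            print(f"{name:16s} grade = {grade:.3f}")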

  8. Error assessment of digital elevation models obtained by interpolation

    Directory of Open Access Journals (Sweden)

    Jean François Mas

    2009-10-01

    Full Text Available Few studies have focused on evaluating the errors inherent in digital elevation models (DEMs). For this reason, the errors of DEMs obtained by different interpolation methods (ARC/INFO, IDRISI, ILWIS and NEW-MIEL) and at different resolutions were evaluated, with the aim of obtaining a more accurate representation of the relief. This evaluation of interpolation methods is crucial, considering that DEMs are the most effective way of representing the land surface for terrain analysis and that they are widely used in the environmental sciences. The results show that the resolution, the interpolation method and the inputs (contour lines alone, or together with drainage data and spot heights) have an important influence on the magnitude of the errors generated in the DEM. In this study, carried out using contour lines at a 50 m interval in a mountainous area, the most suitable resolution was 30 m. The DEM with the smallest error (mean square error, EMC, of 7.3 m) was obtained with ARC/INFO. However, free programs such as NEWMIEL or ILWIS produced results with an EMC of 10 m.

  9. Background Error Covariance Estimation using Information from a Single Model Trajectory with Application to Ocean Data Assimilation into the GEOS-5 Coupled Model

    Science.gov (United States)

    Keppenne, Christian L.; Rienecker, Michele M.; Kovach, Robin M.; Vernieres, Guillaume; Koster, Randal D. (Editor)

    2014-01-01

    An attractive property of ensemble data assimilation methods is that they provide flow dependent background error covariance estimates which can be used to update fields of observed variables as well as fields of unobserved model variables. Two methods to estimate background error covariances are introduced which share the above property with ensemble data assimilation methods but do not involve the integration of multiple model trajectories. Instead, all the necessary covariance information is obtained from a single model integration. The Space Adaptive Forecast error Estimation (SAFE) algorithm estimates error covariances from the spatial distribution of model variables within a single state vector. The Flow Adaptive error Statistics from a Time series (FAST) method constructs an ensemble sampled from a moving window along a model trajectory. SAFE and FAST are applied to the assimilation of Argo temperature profiles into version 4.1 of the Modular Ocean Model (MOM4.1) coupled to the GEOS-5 atmospheric model and to the CICE sea ice model. The results are validated against unassimilated Argo salinity data. They show that SAFE and FAST are competitive with the ensemble optimal interpolation (EnOI) used by the Global Modeling and Assimilation Office (GMAO) to produce its ocean analysis. Because of their reduced cost, SAFE and FAST hold promise for high-resolution data assimilation applications.

  10. Empirical analysis and modeling of errors of atmospheric profiles from GPS radio occultation

    Directory of Open Access Journals (Sweden)

    B. Scherllin-Pirscher

    2011-05-01

    Full Text Available The utilization of radio occultation (RO) data in atmospheric studies requires precise knowledge of error characteristics. We present results of an empirical error analysis of GPS radio occultation (RO) bending angle, refractivity, dry pressure, dry geopotential height, and dry temperature. We find very good agreement between data characteristics of different missions (CHAMP, GRACE-A, and Formosat-3/COSMIC (F3C)). In the global mean, observational errors (standard deviation from "true" profiles at mean tangent point location) agree within 0.3 % in bending angle, 0.1 % in refractivity, and 0.2 K in dry temperature at all altitude levels between 4 km and 35 km. Above ≈20 km, the observational errors show a strong seasonal dependence at high latitudes. Larger errors occur in hemispheric wintertime and are associated mainly with background data used in the retrieval process. The comparison between UCAR and WEGC results (both data centers have independent inversion processing chains) reveals different magnitudes of observational errors in atmospheric parameters, which are attributable to different background fields used. Based on the empirical error estimates, we provide a simple analytical error model for GPS RO atmospheric parameters and account for vertical, latitudinal, and seasonal variations. In the model, which spans the altitude range from 4 km to 35 km, a constant error is adopted around the tropopause region amounting to 0.8 % for bending angle, 0.35 % for refractivity, 0.15 % for dry pressure, 10 m for dry geopotential height, and 0.7 K for dry temperature. Below this region the observational error increases following an inverse height power-law and above it increases exponentially. The observational error model is the same for UCAR and WEGC data but due to somewhat different error characteristics below about 10 km and above about 20 km some parameters have to be adjusted. Overall, the observational error model is easily applicable and
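
    The analytical error model described above has a simple piecewise form; the sketch below implements it for refractivity, using the 0.35 % constant quoted in the abstract but with altitude bounds, power-law and exponential scale parameters that are placeholder values of my own, since the abstract does not list them.

        import numpy as np

        def ro_observational_error(z_km, s0=0.35, z_bottom=10.0, z_top=20.0, power=1.5, scale_km=7.0):
            """Piecewise RO-style error model (percent): constant s0 between z_bottom and z_top,
            inverse height power-law increase below, exponential increase above."""
            z = np.asarray(z_km, dtype=float)
            err = np.full_like(z, s0)
            below, above = z < z_bottom, z > z_top
            err[below] = s0 * (z_bottom / z[below]) ** power
            err[above] = s0 * np.exp((z[above] - z_top) / scale_km)
            return err

        altitudes = np.array([5.0, 8.0, 15.0, 25.0, 35.0])
        for z, e in zip(altitudes, ro_observational_error(altitudes)):
            print(f"z = {z:4.0f} km  ->  error ~ {e:.2f} %")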

  11. Meta Modeling of Transmission Error for Spur, Helical and Planetary Gears for Wind Turbine Application

    OpenAIRE

    Irfan, Muhammad

    2013-01-01

    Detailed analysis of drive train dynamics requires accounting for the transmission error that arises in gears. However, the direct computation of the transmission error requires a 3-dimensional contact analysis with correct gear geometry, which is impractically computationally intense. Therefore, a simplified representation of the transmission error is desired, a so-called meta-model, is developed. The model is based on response surface method, and the coefficients of the angle-dependent tran...

  12. Correction of approximation errors with Random Forests applied to modelling of aerosol first indirect effect

    Directory of Open Access Journals (Sweden)

    A. Lipponen

    2013-04-01

    Full Text Available In atmospheric models, due to their computational time or resource limitations, physical processes have to be simulated using reduced models. The use of a reduced model, however, induces errors to the simulation results. These errors are referred to as approximation errors. In this paper, we propose a novel approach to correct these approximation errors. We model the approximation error as an additive noise process in the simulation model and employ the Random Forest (RF) regression algorithm for constructing a computationally low cost predictor for the approximation error. In this way, the overall simulation problem is decomposed into two separate and computationally efficient simulation problems: solution of the reduced model and prediction of the approximation error realization. The approach is tested for handling approximation errors due to a reduced coarse sectional representation of aerosol size distribution in a cloud droplet activation calculation. The results show a significant improvement in the accuracy of the simulation compared to the conventional simulation with a reduced model. The proposed approach is rather general and extension of it to different parameterizations or reduced process models that are coupled to geoscientific models is a straightforward task. Another major benefit of this method is that it can be applied to physical processes that are dependent on a large number of variables making them difficult to be parameterized by traditional methods.
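
    The correction scheme lends itself to a compact sketch with scikit-learn: a cheap "reduced" model is systematically biased, a Random Forest is trained on the discrepancy between full and reduced model outputs, and at prediction time the reduced output plus the predicted error approximates the full model. The toy full and reduced models below are stand-ins for the aerosol activation calculation.

        import numpy as np
        from sklearn.ensemble import RandomForestRegressor

        rng = np.random.default_rng(0)

        def full_model(x):       # expensive reference model (toy)
            return np.sin(3 * x[:, 0]) + 0.5 * x[:, 1] ** 2

        def reduced_model(x):    # cheap reduced model with approximation error (toy)
            return 3 * x[:, 0] * np.exp(-x[:, 0]) + 0.5 * x[:, 1] ** 2

        # Training set: run both models once, learn the approximation error = full - reduced.
        X_train = rng.uniform(0, 2, size=(2000, 2))
        rf = RandomForestRegressor(n_estimators=100, random_state=0)
        rf.fit(X_train, full_model(X_train) - reduced_model(X_train))

        # At simulation time only the reduced model is run; the forest supplies the correction.
        X_new = rng.uniform(0, 2, size=(500, 2))
        corrected = reduced_model(X_new) + rf.predict(X_new)

        rmse = lambda a, b: float(np.sqrt(np.mean((a - b) ** 2)))
        print("RMSE, reduced vs full  :", rmse(reduced_model(X_new), full_model(X_new)))
        print("RMSE, corrected vs full:", rmse(corrected, full_model(X_new)))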

  13. Contaminant point source localization error estimates as functions of data quantity and model quality

    Science.gov (United States)

    Hansen, Scott K.; Vesselinov, Velimir V.

    2016-10-01

    We develop empirically-grounded error envelopes for localization of a point contamination release event in the saturated zone of a previously uncharacterized heterogeneous aquifer into which a number of plume-intercepting wells have been drilled. We assume that flow direction in the aquifer is known exactly and velocity is known to within a factor of two of our best guess from well observations prior to source identification. Other aquifer and source parameters must be estimated by interpretation of well breakthrough data via the advection-dispersion equation. We employ high performance computing to generate numerous random realizations of aquifer parameters and well locations, simulate well breakthrough data, and then employ unsupervised machine optimization techniques to estimate the most likely spatial (or space-time) location of the source. Tabulating the accuracy of these estimates from the multiple realizations, we relate the size of 90% and 95% confidence envelopes to the data quantity (number of wells) and model quality (fidelity of ADE interpretation model to actual concentrations in a heterogeneous aquifer with channelized flow). We find that for purely spatial localization of the contaminant source, increased data quantities can make up for reduced model quality. For space-time localization, we find similar qualitative behavior, but significantly degraded spatial localization reliability and less improvement from extra data collection. Since the space-time source localization problem is much more challenging, we also tried a multiple-initial-guess optimization strategy. This greatly enhanced performance, but gains from additional data collection remained limited.

  14. Quantifying uncertainty in climatological fields from GPS radio occultation: an empirical-analytical error model

    Directory of Open Access Journals (Sweden)

    B. Scherllin-Pirscher

    2011-05-01

    Full Text Available Due to the measurement principle of the radio occultation (RO) technique, RO data are highly suitable for climate studies. Single RO profiles can be used to build climatological fields of different atmospheric parameters like bending angle, refractivity, density, pressure, geopotential height, and temperature. RO climatologies are affected by random (statistical) errors, sampling errors, and systematic errors, yielding a total climatological error. Based on empirical error estimates, we provide a simple analytical error model for these error components, which accounts for vertical, latitudinal, and seasonal variations. The vertical structure of each error component is modeled as constant around the tropopause region. Above this region the error increases exponentially; below, the increase follows an inverse height power-law. The statistical error strongly depends on the number of measurements. It is found to be the smallest error component for monthly mean 10° zonal mean climatologies with more than 600 measurements per bin. Owing to the small atmospheric variability, the sampling error is found to be smallest at low latitudes equatorward of 40°. Beyond 40°, this error increases roughly linearly, with a stronger increase in hemispheric winter than in hemispheric summer. The sampling error model accounts for this hemispheric asymmetry. However, we recommend subtracting the sampling error when using RO climatologies for climate research, since the residual sampling error remaining after such subtraction is estimated to be 50 % of the sampling error for bending angle and 30 % or less for the other atmospheric parameters. The systematic error accounts for potential residual biases in the measurements as well as in the retrieval process and generally dominates the total climatological error. Overall the total error in monthly means is estimated to be smaller than 0.07 % in refractivity and 0.15 K in temperature at low to mid latitudes, increasing towards
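
    The vertical structure described here, constant around the tropopause with inverse-power-law growth below and exponential growth above, can be written as a small piecewise function. The parameter values below are placeholders rather than the fitted coefficients of the paper.

      import numpy as np

      def vertical_error_model(z_km, s0, z_bot=8.0, z_top=16.0, p=1.5, H=10.0):
          """Piecewise vertical error profile: constant s0 in [z_bot, z_top],
          inverse height power-law below, exponential growth above.
          All parameter values here are illustrative placeholders."""
          z = np.asarray(z_km, dtype=float)
          err = np.full_like(z, s0)
          below = z < z_bot
          above = z > z_top
          err[below] = s0 * (z_bot / z[below]) ** p          # grows as altitude decreases
          err[above] = s0 * np.exp((z[above] - z_top) / H)   # grows exponentially with altitude
          return err

      # e.g. a refractivity error profile (in %) evaluated at 4, 12 and 30 km
      print(vertical_error_model([4.0, 12.0, 30.0], s0=0.35))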

  15. Development of an RTK-GPS Positioning Application with an Improved Position Error Model for Smartphones

    Directory of Open Access Journals (Sweden)

    Dongha Lee

    2012-09-01

    Full Text Available This study developed a smartphone application that provides wireless communication, NTRIP client, and RTK processing features, and which can simplify the Network RTK-GPS system while reducing the required cost. A determination method for an error model in Network RTK measurements was proposed, considering both random and autocorrelation errors, to accurately calculate the coordinates measured by the application using state estimation filters. The performance evaluation of the developed application showed that it could perform high-precision real-time positioning, within several centimeters of error range at a frequency of 20 Hz. A Kalman Filter was applied to the coordinates measured from the application, to evaluate the appropriateness of the determination method for an error model, as proposed in this study. The results were more accurate, compared with those of the existing error model, which only considered the random error.

  16. Development of an RTK-GPS positioning application with an improved position error model for smartphones.

    Science.gov (United States)

    Hwang, Jinsang; Yun, Hongsik; Suh, Yongcheol; Cho, Jeongho; Lee, Dongha

    2012-09-25

    This study developed a smartphone application that provides wireless communication, NTRIP client, and RTK processing features, and which can simplify the Network RTK-GPS system while reducing the required cost. A determination method for an error model in Network RTK measurements was proposed, considering both random and autocorrelation errors, to accurately calculate the coordinates measured by the application using state estimation filters. The performance evaluation of the developed application showed that it could perform high-precision real-time positioning, within several centimeters of error range at a frequency of 20 Hz. A Kalman Filter was applied to the coordinates measured from the application, to evaluate the appropriateness of the determination method for an error model, as proposed in this study. The results were more accurate, compared with those of the existing error model, which only considered the random error.
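
    A minimal sketch, in Python, of a Kalman filter applied to coordinates whose measurement error has both a white and an autocorrelated (AR(1)) component, in the spirit of the error model described above. The one-dimensional setup, dynamics, and noise levels are invented for illustration and are not the authors' filter.

      import numpy as np

      rng = np.random.default_rng(1)

      # Simulated 1-D coordinate measurements at 20 Hz (illustrative values).
      n = 1200
      phi, sig_ar, sig_wn = 0.95, 0.004, 0.01        # AR(1) coefficient, AR and white noise std (m)
      true_pos = np.cumsum(rng.normal(0.0, 1e-4, n))  # slowly drifting true coordinate
      ar = np.zeros(n)
      for k in range(1, n):
          ar[k] = phi * ar[k - 1] + rng.normal(0.0, sig_ar)
      z = true_pos + ar + rng.normal(0.0, sig_wn, n)

      # Kalman filter with augmented state [position, autocorrelated error].
      F = np.array([[1.0, 0.0], [0.0, phi]])
      Q = np.diag([1e-8, sig_ar ** 2])
      H = np.array([[1.0, 1.0]])
      R = np.array([[sig_wn ** 2]])

      x = np.zeros(2)
      P = np.eye(2) * 1e-2
      est = np.empty(n)
      for k in range(n):
          # predict
          x = F @ x
          P = F @ P @ F.T + Q
          # update
          S = H @ P @ H.T + R
          K = P @ H.T @ np.linalg.inv(S)
          x = x + (K @ (z[k] - H @ x)).ravel()
          P = (np.eye(2) - K @ H) @ P
          est[k] = x[0]

      print("raw RMS error     :", np.sqrt(np.mean((z - true_pos) ** 2)))
      print("filtered RMS error:", np.sqrt(np.mean((est - true_pos) ** 2)))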

  17. Sensitivity to Estimation Errors in Mean-variance Models

    Institute of Scientific and Technical Information of China (English)

    Zhi-ping Chen; Cai-e Zhao

    2003-01-01

    In order to give a complete and accurate description of the sensitivity of efficient portfolios to changes in assets' expected returns, variances and covariances, the joint effect of estimation errors in means, variances and covariances on the efficient portfolio's weights is investigated in this paper. It is proved that the efficient portfolio's composition is a Lipschitz continuous, differentiable mapping of these parameters under suitable conditions. The change rate of the efficient portfolio's weights with respect to variations in risk-return estimations is derived by estimating the Lipschitz constant. Our general quantitative results show that the efficient portfolio's weights are normally not very sensitive to estimation errors in means and variances. Moreover, we point out those extreme cases which might cause stability problems and how to avoid them in practice. Preliminary numerical results are also provided as an illustration of our theoretical results.
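
    The sensitivity discussed above can be seen numerically by computing closed-form mean-variance efficient weights for a target return and perturbing the estimated means. The covariance matrix, target return, and perturbation size below are made up for illustration.

      import numpy as np

      def efficient_weights(mu, Sigma, target):
          """Closed-form mean-variance weights: min w'Sigma w s.t. w'mu = target, sum(w) = 1."""
          inv = np.linalg.inv(Sigma)
          ones = np.ones(len(mu))
          A = ones @ inv @ ones
          B = ones @ inv @ mu
          C = mu @ inv @ mu
          D = A * C - B ** 2
          return ((C - B * target) * (inv @ ones) + (A * target - B) * (inv @ mu)) / D

      rng = np.random.default_rng(2)
      mu = np.array([0.06, 0.08, 0.10, 0.12])
      L = rng.normal(0.0, 0.1, (4, 4))
      Sigma = L @ L.T + 0.05 * np.eye(4)          # illustrative positive-definite covariance

      w0 = efficient_weights(mu, Sigma, target=0.09)
      w1 = efficient_weights(mu + rng.normal(0.0, 0.005, 4), Sigma, target=0.09)  # perturbed means
      print("weights          :", np.round(w0, 3))
      print("perturbed weights:", np.round(w1, 3))
      print("max |change|     :", np.abs(w1 - w0).max())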

  18. A Fully Bayesian Approach to Improved Calibration and Prediction of Groundwater Models With Structure Error

    Science.gov (United States)

    Xu, T.; Valocchi, A. J.

    2014-12-01

    Effective water resource management typically relies on numerical models to analyse groundwater flow and solute transport processes. These models are usually subject to model structure error due to simplification and/or misrepresentation of the real system. As a result, the model outputs may systematically deviate from measurements, thus violating a key assumption for traditional regression-based calibration and uncertainty analysis. On the other hand, model structure error induced bias can be described statistically in an inductive, data-driven way based on historical model-to-measurement misfit. We adopt a fully Bayesian approach that integrates a Gaussian process error model, which accounts for model structure error, into the calibration, prediction and uncertainty analysis of groundwater models. The posterior distributions of parameters of the groundwater model and the Gaussian process error model are jointly inferred using DREAM, an efficient Markov chain Monte Carlo sampler. We test the usefulness of the fully Bayesian approach on a synthetic case study of surface-ground water interaction under changing pumping conditions. We first illustrate through this example that traditional least squares regression without accounting for model structure error yields biased parameter estimates, due to parameter compensation, as well as biased predictions. In contrast, the Bayesian approach gives less biased parameter estimates. Moreover, the integration of a Gaussian process error model significantly reduces predictive bias and leads to prediction intervals that are more consistent with observations. The results highlight the importance of explicit treatment of model structure error, especially in circumstances where subsequent decision-making and risk analysis require accurate prediction and uncertainty quantification. In addition, the data-driven error modelling approach is capable of extracting more information from observation data than using a groundwater model alone.
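
    A much-simplified, non-Bayesian sketch of the underlying idea: fit a Gaussian process to historical model-to-measurement residuals and add its prediction to the simulator output. The paper's approach jointly samples groundwater-model and error-model parameters with DREAM, which is not reproduced here; the toy simulator and data below are invented.

      import numpy as np
      from sklearn.gaussian_process import GaussianProcessRegressor
      from sklearn.gaussian_process.kernels import RBF, WhiteKernel

      rng = np.random.default_rng(3)

      # Synthetic "observations" and a structurally deficient simulator that misses a trend.
      t = np.linspace(0.0, 10.0, 120)[:, None]
      observed = 5.0 + 0.8 * np.sin(t.ravel()) + 0.05 * t.ravel() ** 2 \
                 + rng.normal(0.0, 0.05, t.shape[0])
      simulated = 5.0 + 0.8 * np.sin(t.ravel())

      # Train a GP on model-to-measurement residuals at calibration times,
      # then use it to correct the simulator at the remaining times.
      idx = rng.permutation(t.shape[0])
      train, test = idx[:80], idx[80:]
      gp = GaussianProcessRegressor(kernel=RBF(length_scale=2.0) + WhiteKernel(1e-2),
                                    normalize_y=True)
      gp.fit(t[train], observed[train] - simulated[train])
      corrected = simulated[test] + gp.predict(t[test])

      print("simulator-only RMSE:", np.sqrt(np.mean((simulated[test] - observed[test]) ** 2)))
      print("bias-corrected RMSE:", np.sqrt(np.mean((corrected - observed[test]) ** 2)))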

  19. Analysis of errors in spectral reconstruction with a Laplace transform pair model

    Energy Technology Data Exchange (ETDEWEB)

    Archer, B.R.; Bushong, S.C. (Baylor Univ., Houston, TX (USA). Coll. of Medicine); Wagner, L.K. (Texas Univ., Houston (USA). Dept. of Radiology); Johnston, D.A.; Almond, P.R. (Anderson (M.D.) Hospital and Tumor Inst., Houston, TX (USA))

    1985-05-01

    The sensitivity of a Laplace transform pair model for spectral reconstruction to random errors in attenuation measurements of diagnostic x-ray units has been investigated. No spectral deformation or significant alteration resulted from the simulated attenuation errors. It is concluded that the range of spectral uncertainties to be expected from the application of this model is acceptable for most scientific applications.

  20. Modeling Distance and Bandwidth Dependency of TOA-Based UWB Ranging Error for Positioning

    NARCIS (Netherlands)

    Bellusci, G.; Janssen, G.J.M.; Yan, J.; Tiberius, C.C.J.M.

    2009-01-01

    A statistical model for the range error provided by TOA estimation using UWB signals is given, based on UWB channel measurements between 3.1 and 10.6 GHz. The range error has been modeled as a Gaussian random variable for LOS and as a combination of a Gaussian and an exponential random variable for
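
    A small sampler for the two error distributions named above (Gaussian for LOS, Gaussian plus exponential for NLOS). The bias, standard deviation, and exponential scale are illustrative and, unlike the paper's model, do not depend on distance or bandwidth.

      import numpy as np

      rng = np.random.default_rng(4)

      def sample_range_error(n, los, bias=0.02, sigma=0.05, nlos_scale=0.30):
          """Draw TOA-based range errors in metres: Gaussian for LOS, Gaussian plus an
          exponential excess-delay term for NLOS. Parameter values are illustrative only."""
          err = rng.normal(bias, sigma, n)
          if not los:
              err = err + rng.exponential(nlos_scale, n)   # positive excess-delay component
          return err

      los_err = sample_range_error(10_000, los=True)
      nlos_err = sample_range_error(10_000, los=False)
      print("LOS  range error mean/std: %.3f / %.3f m" % (los_err.mean(), los_err.std()))
      print("NLOS range error mean/std: %.3f / %.3f m" % (nlos_err.mean(), nlos_err.std()))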

  1. On the Influence of Weather Forecast Errors in Short-Term Load Forecasting Models

    OpenAIRE

    Fay, D; Ringwood, John; Condon, M.

    2004-01-01

    Weather information is an important factor in load forecasting models. This weather information usually takes the form of actual weather readings. However, online operation of load forecasting models requires the use of weather forecasts, with associated weather forecast errors. A technique is proposed to model weather forecast errors to reflect current accuracy. A load forecasting model is then proposed which combines the forecasts of several load forecasting models. This approach allows the...

  2. A Multiple Model Approach to Modeling Based on LPF Algorithm

    Institute of Scientific and Technical Information of China (English)

    2001-01-01

    Input-output data fitting methods are often used for modeling nonlinear systems of unknown structure. Based on model-on-demand tactics, a multiple model approach to modeling nonlinear systems is presented. The basic idea is to find, from vast historical system input-output data sets, the data sets matching the current working point, and then to develop a local model using the Local Polynomial Fitting (LPF) algorithm. As the working point changes, multiple local models are built, which together realize exact modeling of the global system. Comparison with other methods in simulation shows good performance: the estimation is simple, effective, and reliable.
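
    A generic sketch of the model-on-demand idea: retrieve the historical samples nearest the current working point and fit a weighted local polynomial to them. The toy system, neighbourhood size, and weighting are illustrative and not the paper's exact LPF algorithm.

      import numpy as np

      rng = np.random.default_rng(5)

      # Historical input-output data from an unknown nonlinear system (illustrative).
      u_hist = rng.uniform(-2.0, 2.0, 5000)
      y_hist = np.sin(2.0 * u_hist) + 0.3 * u_hist ** 2 + rng.normal(0.0, 0.05, u_hist.size)

      def local_poly_predict(u_query, k=200, degree=2):
          """Model-on-demand prediction: fit a weighted local polynomial to the k
          historical samples closest to the query point."""
          d = np.abs(u_hist - u_query)
          idx = np.argsort(d)[:k]
          h = d[idx].max() + 1e-12
          w = (1.0 - (d[idx] / h) ** 3) ** 3            # tricube weights
          coef = np.polyfit(u_hist[idx], y_hist[idx], degree, w=np.sqrt(w))
          return np.polyval(coef, u_query)

      for u in (-1.5, 0.0, 1.2):
          print(f"u = {u:5.2f}  local model: {local_poly_predict(u):.3f}  "
                f"truth: {np.sin(2 * u) + 0.3 * u ** 2:.3f}")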

  3. Empirical analysis and modeling of errors of atmospheric profiles from GPS radio occultation

    Directory of Open Access Journals (Sweden)

    U. Foelsche

    2011-09-01

    Full Text Available The utilization of radio occultation (RO data in atmospheric studies requires precise knowledge of error characteristics. We present results of an empirical error analysis of GPS RO bending angle, refractivity, dry pressure, dry geopotential height, and dry temperature. We find very good agreement between data characteristics of different missions (CHAMP, GRACE-A, and Formosat-3/COSMIC (F3C. In the global mean, observational errors (standard deviation from "true" profiles at mean tangent point location agree within 0.3% in bending angle, 0.1% in refractivity, and 0.2 K in dry temperature at all altitude levels between 4 km and 35 km. Above 35 km the increase of the CHAMP raw bending angle observational error is more pronounced than that of GRACE-A and F3C leading to a larger observational error of about 1% at 42 km. Above ≈20 km, the observational errors show a strong seasonal dependence at high latitudes. Larger errors occur in hemispheric wintertime and are associated mainly with background data used in the retrieval process particularly under conditions when ionospheric residual is large. The comparison between UCAR and WEGC results (both data centers have independent inversion processing chains reveals different magnitudes of observational errors in atmospheric parameters, which are attributable to different background fields used. Based on the empirical error estimates, we provide a simple analytical error model for GPS RO atmospheric parameters for the altitude range of 4 km to 35 km and up to 50 km for UCAR raw bending angle and refractivity. In the model, which accounts for vertical, latitudinal, and seasonal variations, a constant error is adopted around the tropopause region amounting to 0.8% for bending angle, 0.35% for refractivity, 0.15% for dry pressure, 10 m for dry geopotential height, and 0.7 K for dry temperature. Below this region the observational error increases following an inverse height power-law and above it increases

  4. Error Modeling and Compensation of Circular Motion on a New Circumferential Drilling System

    Directory of Open Access Journals (Sweden)

    Qiang Fang

    2015-01-01

    Full Text Available A new flexible circumferential drilling system is proposed for drilling in the fuselage docking area. To analyze the influence of the circular motion error on the drilling accuracy, the nominal forward kinematic model is derived using the Denavit-Hartenberg (D-H) method, and this model is further developed to describe the kinematic errors caused by the circular positioning error and the synchronization error using homogeneous transformation matrices (HTM). A laser tracker is utilized to measure the circular motion error of the two measurement points at both sides. A circular motion compensation experiment is implemented according to the calculated positioning error and synchronization error. Experimental results show that the positioning error and synchronization error were reduced by 65.0% and 58.8%, respectively, due to the adopted compensation, and therefore the circular motion accuracy is substantially improved. Finally, the position errors of the two measurement points are shown to have little influence on the measurement result, and the validity of the proposed compensation method is proved.
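
    A rough sketch of how small angular errors can be propagated to a drill-point position error through homogeneous transformation matrices. The kinematic chain, offsets, and error magnitudes are invented for illustration and do not reproduce the paper's D-H model.

      import numpy as np

      def rot_z(a):
          c, s = np.cos(a), np.sin(a)
          return np.array([[c, -s, 0, 0], [s, c, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1.0]])

      def rot_x(a):
          c, s = np.cos(a), np.sin(a)
          return np.array([[1, 0, 0, 0], [0, c, -s, 0], [0, s, c, 0], [0, 0, 0, 1.0]])

      def trans(x, y, z):
          T = np.eye(4)
          T[:3, 3] = [x, y, z]
          return T

      theta = np.deg2rad(30.0)                     # commanded ring angle
      tool = trans(2.0, 0.0, 0.15)                 # drill-tip offset from the ring joint (m, illustrative)

      T_nominal = rot_z(theta) @ tool
      d_theta = np.deg2rad(0.02)                   # circular positioning error (illustrative)
      sync_tilt = np.deg2rad(0.01)                 # synchronization error mapped to a small tilt
      T_actual = rot_z(theta + d_theta) @ rot_x(sync_tilt) @ tool

      p_err = T_actual[:3, 3] - T_nominal[:3, 3]
      print("drill-tip position error (mm):", np.round(1000.0 * p_err, 3))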

  5. Error budget analysis of SCIAMACHY limb ozone profile retrievals using the SCIATRAN model

    Directory of Open Access Journals (Sweden)

    N. Rahpoe

    2013-10-01

    Full Text Available A comprehensive error characterization of SCIAMACHY (Scanning Imaging Absorption Spectrometer for Atmospheric CHartographY limb ozone profiles has been established based upon SCIATRAN transfer model simulations. The study was carried out in order to evaluate the possible impact of parameter uncertainties, e.g. in albedo, stratospheric aerosol optical extinction, temperature, pressure, pointing, and ozone absorption cross section on the limb ozone retrieval. Together with the a posteriori covariance matrix available from the retrieval, total random and systematic errors are defined for SCIAMACHY ozone profiles. Main error sources are the pointing errors, errors in the knowledge of stratospheric aerosol parameters, and cloud interference. Systematic errors are of the order of 7%, while the random error amounts to 10–15% for most of the stratosphere. These numbers can be used for the interpretation of instrument intercomparison and validation of the SCIAMACHY V 2.5 limb ozone profiles in a rigorous manner.

  6. Stochastic analysis of multiple-passband spectral classifications systems affected by observation errors

    Science.gov (United States)

    Tsokos, C. P.

    1980-01-01

    The classification of targets viewed by a pushbroom type multiple band spectral scanner by algorithms suitable for implementation in high speed online digital circuits is considered. A class of algorithms suitable for use with a pipelined classifier is investigated through simulations based on observed data from agricultural targets. It is shown that time distribution of target types is an important determining factor in classification efficiency.

  7. Error correction coding for frequency-hopping multiple-access spread spectrum communication systems

    Science.gov (United States)

    Healy, T. J.

    1982-01-01

    A communication system which would effect channel coding for frequency-hopped multiple-access is described. It is shown that in theory coding can increase the spectrum utilization efficiency of a system with mutual interference to 100 percent. Various coding strategies are discussed and some initial comparisons are given. Some of the problems associated with implementing the type of system described here are discussed.

  8. MODELING AND COMPENSATION TECHNIQUE FOR THE GEOMETRIC ERRORS OF FIVE-AXIS CNC MACHINE TOOLS

    Institute of Scientific and Technical Information of China (English)

    2003-01-01

    One of the important trends in precision machining is the development of real-time error compensation techniques. Error compensation for multi-axis CNC machine tools is difficult but attractive. A model of the geometric error of five-axis CNC machine tools based on multi-body systems is proposed, and the key technique of the compensation, identification of the geometric error parameters, is developed. Simulation of workpiece cutting to verify the multi-body-system-based model is also considered.

  9. Genotype-based association mapping of complex diseases: gene-environment interactions with multiple genetic markers and measurement error in environmental exposures.

    Science.gov (United States)

    Lobach, Iryna; Fan, Ruzong; Carroll, Raymond J

    2010-12-01

    With the advent of dense single nucleotide polymorphism genotyping, population-based association studies have become the major tools for identifying human disease genes and for fine gene mapping of complex traits. We develop a genotype-based approach for association analysis of case-control studies of gene-environment interactions in the case when environmental factors are measured with error and genotype data are available on multiple genetic markers. To directly use the observed genotype data, we propose two genotype-based models: genotype effect and additive effect models. Our approach offers several advantages. First, the proposed risk functions can directly incorporate the observed genotype data while modeling the linkage disequilibrium information in the regression coefficients, thus eliminating the need to infer haplotype phase. Compared with the haplotype-based approach, an estimating procedure based on the proposed methods can be much simpler and significantly faster. In addition, there is no potential risk due to haplotype phase estimation. Further, by fitting the proposed models, it is possible to analyze the risk alleles/variants of complex diseases, including their dominant or additive effects. To model measurement error, we adopt the pseudo-likelihood method by Lobach et al. [2008]. Performance of the proposed method is examined using simulation experiments. An application of our method is illustrated using a population-based case-control study of association between calcium intake with the risk of colorectal adenoma development.

  10. Integration of Error Compensation of Coordinate Measuring Machines into Feature Measurement: Part I—Model Development

    Directory of Open Access Journals (Sweden)

    Roque Calvo

    2016-09-01

    Full Text Available The development of an error compensation model for coordinate measuring machines (CMMs) and its integration into feature measurement is presented. CMMs are widespread and dependable instruments in industry and laboratories for dimensional measurement. From the tip probe sensor to the machine display, there is a complex transformation of probed point coordinates through the geometrical feature model that makes the assessment of accuracy and measurement uncertainty of the results difficult. Therefore, error compensation is not standardized, unlike for other, simpler instruments. Detailed coordinate error compensation models generally treat the CMM as a rigid body and require a detailed mapping of the CMM's behavior. In this paper a new type of error compensation model is proposed. It evaluates the error from the vectorial composition of length errors by axis and their integration into the geometrical measurement model. The variability not explained by the model is incorporated into the uncertainty budget. Model parameters are analyzed and linked to the geometrical errors and the uncertainty of the CMM response. Next, the measurement models for flatness, angle, and roundness are developed. The proposed models are useful for measurement improvement with easy integration into CMM signal processing, in particular in industrial environments where built-in solutions are sought. A battery of implementation tests is presented in Part II, where the experimental endorsement of the model is included.

  11. Phase Error Modeling and Its Impact on Precise Orbit Determination of GRACE Satellites

    Directory of Open Access Journals (Sweden)

    Jia Tu

    2012-01-01

    Full Text Available Limiting factors for the precise orbit determination (POD) of low-earth-orbit (LEO) satellites using dual-frequency GPS nowadays lie mainly in the in-flight phase error modeling. The phase error is modeled as a systematic and a random component, each depending on the direction of GPS signal reception. The systematic part and the standard deviation of the random part in the phase error model are estimated, respectively, by the bin-wise mean and standard deviation values of phase postfit residuals computed by orbit determination. By removing the systematic component and adjusting the weight of the phase observation data according to the standard deviation of the random component, the orbit can be further improved by the POD approach. The GRACE data of 1–31 January 2006 are processed, and three types of orbit solutions are obtained: POD without phase error model correction, POD with mean value correction of the phase error model, and POD with full phase error model correction. The three-dimensional (3D) orbit improvements derived from the phase error model correction are 0.0153 m for GRACE A and 0.0131 m for GRACE B, and the 3D influences arising from the random part of the phase error model are 0.0068 m and 0.0075 m for GRACE A and GRACE B, respectively. Thus the random part of the phase error model cannot be neglected for POD. Phase postfit residual analysis, orbit comparison with the JPL precise science orbit, and orbit validation with KBR data also demonstrate that the results derived from POD with phase error model correction are better than the other two types of orbit solutions generated in this paper.
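
    A minimal sketch of the bin-wise step described above: group synthetic phase postfit residuals by the direction of signal reception, take the per-bin mean (systematic part) and standard deviation (random part), and subtract the systematic part. The bin sizes and residual statistics are illustrative.

      import numpy as np

      rng = np.random.default_rng(6)

      # Synthetic phase postfit residuals tagged with the direction of signal reception.
      n = 200_000
      azim = rng.uniform(0.0, 360.0, n)            # degrees
      elev = rng.uniform(0.0, 90.0, n)
      resid = 0.002 * np.sin(np.deg2rad(azim)) * np.cos(np.deg2rad(elev)) \
              + rng.normal(0.0, 0.003, n)          # systematic pattern + random noise (metres)

      # Bin-wise mean (systematic part) and standard deviation (random part), 10 deg x 10 deg bins.
      az_bin = (azim // 10).astype(int)
      el_bin = (elev // 10).astype(int)
      mean_map = np.zeros((36, 9))
      std_map = np.zeros((36, 9))
      for i in range(36):
          for j in range(9):
              sel = (az_bin == i) & (el_bin == j)
              mean_map[i, j] = resid[sel].mean()
              std_map[i, j] = resid[sel].std()

      # Correction: subtract the systematic part; the std map would drive observation weights.
      corrected = resid - mean_map[az_bin, el_bin]
      print("residual RMS before/after correction: %.4f / %.4f m"
            % (np.sqrt(np.mean(resid ** 2)), np.sqrt(np.mean(corrected ** 2))))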

  12. Numerical study of an error model for a strap-down INS

    Science.gov (United States)

    Grigorie, T. L.; Sandu, D. G.; Corcau, C. L.

    2016-10-01

    The paper presents a numerical study related to a mathematical error model developed for a strap-down inertial navigation system. The study aims to validate the error model by using Matlab/Simulink software models implementing the inertial navigator and the error model mathematics. To generate the inputs for the evaluation Matlab/Simulink software, software models of the inertial sensors are used. The sensor models were developed based on the IEEE equivalent models for the inertial sensors and on the analysis of the data sheets of real inertial sensors. The paper successively presents the inertial navigation equations (attitude, position and speed), the mathematics of the inertial navigator error model, the software implementations, and the numerical evaluation results.

  13. Thermal Error Modelling of the Spindle Using Neurofuzzy Systems

    OpenAIRE

    Jingan Feng; Xiaoqi Tang; Yanlei Li; Bao Song

    2016-01-01

    This paper proposes a new combined model to predict the spindle deformation, which combines the grey models and the ANFIS (adaptive neurofuzzy inference system) model. The grey models are used to preprocess the original data, and the ANFIS model is used to adjust the combined model. The outputs of the grey models are used as the inputs of the ANFIS model to train the model. To evaluate the performance of the combined model, an experiment is implemented. Three Pt100 thermal resistances are used to monitor the spindle temperature and an inductive current sensor is used to obtain the spindle deformation. The experimental results display that the combined model can better predict the spindle deformation compared to BP network, and it can greatly improve the performance of the spindle.

  14. OOK power model based dynamic error testing for smart electricity meter

    Science.gov (United States)

    Wang, Xuewei; Chen, Jingxia; Yuan, Ruiming; Jia, Xiaolu; Zhu, Meng; Jiang, Zhenyu

    2017-02-01

    This paper formulates the dynamic error testing problem for a smart meter, with consideration and investigation of both the testing signal and the dynamic error testing method. To solve the dynamic error testing problems, the paper establishes an on-off-keying (OOK) testing dynamic current model and an OOK testing dynamic load energy (TDLE) model. Then two types of TDLE sequences and three modes of OOK testing dynamic power are proposed. In addition, a novel algorithm, which helps to solve the problem of dynamic electric energy measurement's traceability, is derived for dynamic errors. Based on the above research, OOK TDLE sequence generation equipment is developed and a dynamic error testing system is constructed. Using the testing system, five kinds of meters were tested in the three dynamic power modes. The test results show that the dynamic error is closely related to dynamic power mode and the measurement uncertainty is 0.38%.

  15. Realistic face modeling based on multiple deformations

    Institute of Scientific and Technical Information of China (English)

    GONG Xun; WANG Guo-yin

    2007-01-01

    On the basis of the assumption that the human face belongs to a linear class, a multiple-deformation model is proposed to recover face shape from a few points on a single 2D image. Compared to the conventional methods, this study has the following advantages. First, the proposed modified 3D sparse deforming model is a noniterative approach that can compute global translation efficiently and accurately. Subsequently, the overfitting problem can be alleviated based on the proposed multiple deformation model. Finally, by keeping the main features, the texture generated is realistic. The comparison results show that this novel method outperforms the existing methods by using ground truth data and that realistic 3D faces can be recovered efficiently from a single photograph.

  16. Modeling misidentification errors that result from use of genetic tags in capture-recapture studies

    Science.gov (United States)

    Yoshizaki, J.; Brownie, C.; Pollock, K.H.; Link, W.A.

    2011-01-01

    Misidentification of animals is potentially important when naturally existing features (natural tags) such as DNA fingerprints (genetic tags) are used to identify individual animals. For example, when misidentification leads to multiple identities being assigned to an animal, traditional estimators tend to overestimate population size. Accounting for misidentification in capture-recapture models requires detailed understanding of the mechanism. Using genetic tags as an example, we outline a framework for modeling the effect of misidentification in closed population studies when individual identification is based on natural tags that are consistent over time (non-evolving natural tags). We first assume a single sample is obtained per animal for each capture event, and then generalize to the case where multiple samples (such as hair or scat samples) are collected per animal per capture occasion. We introduce methods for estimating population size and, using a simulation study, we show that our new estimators perform well for cases with moderately high capture probabilities or high misidentification rates. In contrast, conventional estimators can seriously overestimate population size when errors due to misidentification are ignored. © 2009 Springer Science+Business Media, LLC.
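
    A toy two-occasion simulation of the effect described above: when misidentification turns one animal into several identities, the Lincoln-Petersen (Chapman) estimator overestimates population size. The misidentification mechanism here is simplified relative to the authors' model.

      import numpy as np

      rng = np.random.default_rng(7)

      N, p_capture, p_misid = 500, 0.4, 0.15   # true size, capture prob., misidentification prob.

      def two_occasion_estimate(misidentify):
          ids_occ = []
          ghost = N                             # counter for spurious new identities
          for _ in range(2):
              captured = np.flatnonzero(rng.random(N) < p_capture)
              ids = []
              for a in captured:
                  if misidentify and rng.random() < p_misid:
                      ids.append(ghost)         # genotyping error creates a new identity
                      ghost += 1
                  else:
                      ids.append(a)
              ids_occ.append(set(ids))
          n1, n2 = len(ids_occ[0]), len(ids_occ[1])
          m = len(ids_occ[0] & ids_occ[1])      # recaptures (matched identities)
          return (n1 + 1) * (n2 + 1) / (m + 1) - 1   # Chapman version of Lincoln-Petersen

      est_clean = np.mean([two_occasion_estimate(False) for _ in range(500)])
      est_misid = np.mean([two_occasion_estimate(True) for _ in range(500)])
      print(f"true N = {N}, estimate without misID = {est_clean:.0f}, with misID = {est_misid:.0f}")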

  17. Addressing Conceptual Model Uncertainty in the Evaluation of Model Prediction Errors

    Science.gov (United States)

    Carrera, J.; Pool, M.

    2014-12-01

    Model predictions are uncertain because of errors in model parameters, future forcing terms, and model concepts. The latter remain the largest and most difficult to assess source of uncertainty in long term model predictions. We first review existing methods to evaluate conceptual model uncertainty. We argue that they are highly sensitive to the ingenuity of the modeler, in the sense that they rely on the modeler's ability to propose alternative model concepts. Worse, we find that the standard practice of stochastic methods leads to poor, potentially biased and often too optimistic, estimation of actual model errors. This is bad news because stochastic methods are purported to properly represent uncertainty. We contend that the problem does not lie in the stochastic approach itself, but in the way it is applied. Specifically, stochastic inversion methodologies, which demand quantitative information, tend to ignore geological understanding, which is conceptually rich. We illustrate some of these problems with the application to Mar del Plata aquifer, where extensive data are available for nearly a century. Geologically based models, where spatial variability is handled through zonation, yield calibration fits similar to geostatistically based models, but much better predictions. In fact, the appearance of the stochastic T fields is similar to the geologically based models only in areas with high density of data. We take this finding to illustrate the ability of stochastic models to accommodate many data, but also, ironically, their inability to address conceptual model uncertainty. In fact, stochastic model realizations tend to be too close to the "most likely" one (i.e., they do not really realize the full conceptual uncertainty). The second part of the presentation is devoted to arguing that acknowledging model uncertainty may lead to qualitatively different decisions than just working with "most likely" model predictions. Therefore, efforts should concentrate on

  18. Unravelling the Sources of Climate Model Errors in Subpolar Gyre Sea-Surface Temperatures

    Science.gov (United States)

    Rubino, Angelo; Zanchettin, Davide

    2017-04-01

    Climate model biases are systematic errors affecting geophysical quantities simulated by coupled general circulation models and Earth system models against observational targets. In this regard, biases affecting sea-surface temperatures (SSTs) are a major concern due to the crucial role of SST in the dynamical coupling between the atmosphere and the ocean, and for the associated variability. Strong SST biases can be detrimental for the overall quality of historical climate simulations, they contribute to uncertainty in simulated features of climate scenarios and complicate initialization and assessment of decadal climate prediction experiments. We use a dynamic linear model developed within a Bayesian hierarchical framework for a probabilistic assessment of spatial and temporal characteristics of SST errors in ensemble climate simulations. In our formulation, the statistical model distinguishes between local and regional errors, further separated into seasonal and non-seasonal components. This contribution, based on a framework developed for the study of biases in the Tropical Atlantic in the frame of the European project PREFACE, focuses on the subpolar gyre region in the North Atlantic Ocean, where climate models are typically affected by a strong cold SST bias. We will use results from an application of our statistical model to an ensemble of hindcasts with the MiKlip prototype system for decadal climate predictions to demonstrate how the decadal evolution of model errors toward the subpolar gyre cold bias is substantially shaped by a seasonal signal. We will demonstrate that such seasonal signal stems from the superposition of propagating large-scale seasonal errors originated in the Labrador Sea and of large-scale as well as mesoscale seasonal errors originated along the Gulf Stream. Based on these results, we will discuss how pronounced distinctive characteristics of the different error components distinguished by our model allow for a clearer connection

  19. A Multiple Bridge for Elimination of Contact-Resistance Errors in Resistance Strain-Gage Measurements

    Science.gov (United States)

    1946-03-01

  20. Model for Dynamic Multiple of CPPI Strategy

    Directory of Open Access Journals (Sweden)

    Guangyuan Xing

    2014-01-01

    Full Text Available Focusing on the parameter "Multiple" of the CPPI strategy, this study proposes a dynamic setting model of the multiple for gap risk management purposes. First, CPPI gap risk is measured as the probability that the value loss of the active asset exceeds its allowed maximum drop determined by a given multiple setting. Moreover, according to a statistical estimation using the SV-EVT approach, a dynamic choice of the multiple is detailed as a function of time-varying asset volatility, expected loss, and the possibility of occurrence of extreme events in the active asset returns, illustrated empirically on Shanghai composite index data. This study not only enriches the literature on dynamic proportion portfolio insurance, but also provides a practical reference for CPPI investors in choosing a moderate risky exposure that achieves gap risk management, which promotes CPPI's application in emerging capital markets.
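
    A bare-bones sketch of the idea of setting the multiple from the tail of the active-asset return distribution so that the gap-risk probability stays below a target. An empirical quantile of simulated returns stands in for the paper's SV-EVT tail estimate, and all numbers are illustrative.

      import numpy as np

      rng = np.random.default_rng(8)

      # Stand-in for active-asset daily returns; the paper estimates the tail with an
      # SV-EVT model, whereas a plain empirical quantile is used here instead.
      returns = rng.standard_t(df=4, size=5000) * 0.02

      def dynamic_multiple(ret_sample, gap_risk=0.001, m_cap=10.0):
          """Pick the CPPI multiple m so that a one-period loss exceeding 1/m
          has probability no larger than gap_risk."""
          worst_loss = -np.quantile(ret_sample, gap_risk)
          if worst_loss <= 0.0:
              return m_cap
          return min(1.0 / worst_loss, m_cap)

      print("multiple in calm conditions    :", round(dynamic_multiple(returns), 2))
      print("multiple in stressed conditions:", round(dynamic_multiple(returns * 2.5), 2))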

  1. A New Method for Identifying the Model Error of Adjustment System

    Institute of Scientific and Technical Information of China (English)

    TAO Benzao; ZHANG Chaoyu

    2005-01-01

    Some theoretical problems affecting parameter estimation are discussed in this paper. The influence of, and transformation between, errors in the stochastic and functional models are pointed out as well. For choosing the best adjustment model, a formula for estimating and identifying the model error is proposed that differs from the existing methods in the literature. On the basis of the proposed formula, an effective approach for selecting the best model of the adjustment system is given.

  2. Removing Specification Errors from the Usual Formulation of Binary Choice Models

    Directory of Open Access Journals (Sweden)

    P.A.V.B. Swamy

    2016-06-01

    Full Text Available We develop a procedure for removing four major specification errors from the usual formulation of binary choice models. The model that results from this procedure is different from the conventional probit and logit models. This difference arises as a direct consequence of our relaxation of the usual assumption that omitted regressors constituting the error term of a latent linear regression model do not introduce omitted regressor biases into the coefficients of the included regressors.

  3. General expression of double ellipsoidal heat source model and its error analysis

    Institute of Scientific and Technical Information of China (English)

    2008-01-01

    In order to analyze the maximum power density error with different heat flux distribution parameter values for the double ellipsoidal heat source model, a general expression of the double ellipsoidal heat source model was derived from the Goldak double ellipsoidal heat source model, and the error of the maximum power density was analyzed on this basis. The calculation error of thermal cycling parameters caused by the maximum power density error was compared quantitatively by numerical simulation. The results show that, to guarantee the accuracy of welding numerical simulation, it is better to introduce an error correction coefficient into the expression of the Goldak double ellipsoidal heat source model, and the heat flux distribution parameter should take a higher value for higher power density welding methods.
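
    The Goldak double ellipsoidal heat source referred to above has a standard closed-form flux expression; evaluating its peak power density for two sets of distribution parameters illustrates the kind of error the paper quantifies. The parameter values below are illustrative, and no correction coefficient is included.

      import numpy as np

      def goldak_flux(x, y, z, Q, a_f, a_r, b, c):
          """Goldak double-ellipsoidal volumetric heat flux (W/m^3); x is the welding
          direction, with the front semi-ellipsoid for x >= 0 and the rear for x < 0.
          The fractions f_f, f_r follow the usual continuity condition."""
          f_f = 2.0 * a_f / (a_f + a_r)
          f_r = 2.0 * a_r / (a_f + a_r)
          a = np.where(np.asarray(x) >= 0.0, a_f, a_r)
          f = np.where(np.asarray(x) >= 0.0, f_f, f_r)
          coeff = 6.0 * np.sqrt(3.0) * f * Q / (a * b * c * np.pi * np.sqrt(np.pi))
          return coeff * np.exp(-3.0 * (x / a) ** 2 - 3.0 * (y / b) ** 2 - 3.0 * (z / c) ** 2)

      Q = 1500.0                                     # effective heat input, W (illustrative)
      base = dict(a_f=2.0e-3, a_r=4.0e-3, b=2.0e-3, c=2.0e-3)
      alt = dict(a_f=2.5e-3, a_r=5.0e-3, b=2.5e-3, c=2.5e-3)   # different distribution parameters

      q_base = goldak_flux(0.0, 0.0, 0.0, Q, **base)
      q_alt = goldak_flux(0.0, 0.0, 0.0, Q, **alt)
      print("peak power density: %.3e vs %.3e W/m^3 (%.1f%% change)"
            % (q_base, q_alt, 100.0 * (q_alt - q_base) / q_base))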

  4. Modeling and Experimental Study of Soft Error Propagation Based on Cellular Automaton

    OpenAIRE

    2016-01-01

    Aiming to estimate SEE soft error performance of complex electronic systems, a soft error propagation model based on cellular automaton is proposed and an estimation methodology based on circuit partitioning and error propagation is presented. Simulations indicate that different fault grade jamming and different coupling factors between cells are the main parameters influencing the vulnerability of the system. Accelerated radiation experiments have been developed to determine the main paramet...

  5. Macroscopic model and truncation error of discrete Boltzmann method

    Science.gov (United States)

    Hwang, Yao-Hsin

    2016-10-01

    A derivation procedure to secure the macroscopically equivalent equation and its truncation error for the discrete Boltzmann method is proffered in this paper. Essential presumptions of two time scales and a small parameter in the Chapman-Enskog expansion are disposed of in the present formulation. The equilibrium particle distribution function, instead of its original non-equilibrium form, is chosen as the key variable in the derivation route. Taylor series expansion encompassing fundamental algebraic manipulations is adequate to realize the macroscopically differential counterpart. A self-contained and comprehensive practice for the linear one-dimensional convection-diffusion equation is illustrated in detail. Numerical validations of the incurred truncation error in one- and two-dimensional cases with various distribution functions are conducted to verify the present formulation. As shown in the computational results, excellent agreement between numerical results and theoretical predictions is found in the test problems. Straightforward extensions to more complicated systems including convection-diffusion-reaction, multi-relaxation times in the collision operator, as well as multi-dimensional Navier-Stokes equations are also exposed in the Appendix to point out its expediency in solving complicated flow problems.

  6. Maneuver Performance Assessment of the Cassini Spacecraft Through Execution-Error Modeling and Analysis

    Science.gov (United States)

    Wagner, Sean

    2014-01-01

    The Cassini spacecraft has executed nearly 300 maneuvers since 1997, providing ample data for execution-error model updates. With maneuvers through 2017, opportunities remain to improve on the models and remove biases identified in maneuver executions. This manuscript focuses on how execution-error models can be used to judge maneuver performance, while providing a means for detecting performance degradation. Additionally, this paper describes Cassini's execution-error model updates in August 2012. An assessment of Cassini's maneuver performance through OTM-368 on January 5, 2014 is also presented.

  7. Continuous-Discrete Time Prediction-Error Identification Relevant for Linear Model Predictive Control

    DEFF Research Database (Denmark)

    Jørgensen, John Bagterp; Jørgensen, Sten Bay

    2007-01-01

    A prediction-error method tailored for model based predictive control is presented. The prediction-error method studied is based on predictions using the Kalman filter and Kalman predictors for a linear discrete-time stochastic state space model. The linear discrete-time stochastic state space model is realized from a continuous-discrete-time linear stochastic system specified using transfer functions with time-delays. It is argued that the prediction-error criterion should be selected such that it is compatible with the objective function of the predictive controller in which the model is used.

  8. Maneuver Performance Assessment of the Cassini Spacecraft Through Execution-Error Modeling and Analysis

    Science.gov (United States)

    Wagner, Sean

    2014-01-01

    The Cassini spacecraft has executed nearly 300 maneuvers since 1997, providing ample data for execution-error model updates. With maneuvers through 2017, opportunities remain to improve on the models and remove biases identified in maneuver executions. This manuscript focuses on how execution-error models can be used to judge maneuver performance, while providing a means for detecting performance degradation. Additionally, this paper describes Cassini's execution-error model updates in August 2012. An assessment of Cassini's maneuver performance through OTM-368 on January 5, 2014 is also presented.

  9. Model of Head-Positioning Error Due to Rotational Vibration of Hard Disk Drives

    Science.gov (United States)

    Matsuda, Yasuhiro; Yamaguchi, Takashi; Saegusa, Shozo; Shimizu, Toshihiko; Hamaguchi, Tetsuya

    An analytical model of head-positioning error due to rotational vibration of a hard disk drive is proposed. The model takes into account the rotational vibration of the base plate caused by the reaction force of the head-positioning actuator, the relationship between the rotational vibration and head-track offset, and the sensitivity function of track-following feedback control. Error calculated by the model agrees well with measured error. It is thus concluded that this model can predict the data transfer performance of a disk drive in read mode.

  10. Trans-dimensional matched-field geoacoustic inversion with hierarchical error models and interacting Markov chains.

    Science.gov (United States)

    Dettmer, Jan; Dosso, Stan E

    2012-10-01

    This paper develops a trans-dimensional approach to matched-field geoacoustic inversion, including interacting Markov chains to improve efficiency and an autoregressive model to account for correlated errors. The trans-dimensional approach and hierarchical seabed model allows inversion without assuming any particular parametrization by relaxing model specification to a range of plausible seabed models (e.g., in this case, the number of sediment layers is an unknown parameter). Data errors are addressed by sampling statistical error-distribution parameters, including correlated errors (covariance), by applying a hierarchical autoregressive error model. The well-known difficulty of low acceptance rates for trans-dimensional jumps is addressed with interacting Markov chains, resulting in a substantial increase in efficiency. The trans-dimensional seabed model and the hierarchical error model relax the degree of prior assumptions required in the inversion, resulting in substantially improved (more realistic) uncertainty estimates and a more automated algorithm. In particular, the approach gives seabed parameter uncertainty estimates that account for uncertainty due to prior model choice (layering and data error statistics). The approach is applied to data measured on a vertical array in the Mediterranean Sea.

  11. Validation of Multiple Tools for Flat Plate Photovoltaic Modeling Against Measured Data

    Energy Technology Data Exchange (ETDEWEB)

    Freeman, J.; Whitmore, J.; Blair, N.; Dobos, A. P.

    2014-08-01

    This report expands upon a previous work by the same authors, published in the 40th IEEE Photovoltaic Specialists conference. In this validation study, comprehensive analysis is performed on nine photovoltaic systems for which NREL could obtain detailed performance data and specifications, including three utility-scale systems and six commercial scale systems. Multiple photovoltaic performance modeling tools were used to model these nine systems, and the error of each tool was analyzed compared to quality-controlled measured performance data. This study shows that, excluding identified outliers, all tools achieve annual errors within +/-8% and hourly root mean squared errors less than 7% for all systems. It is further shown using SAM that module model and irradiance input choices can change the annual error with respect to measured data by as much as 6.6% for these nine systems, although all combinations examined still fall within an annual error range of +/-8.5%. Additionally, a seasonal variation in monthly error is shown for all tools. Finally, the effects of irradiance data uncertainty and the use of default loss assumptions on annual error are explored, and two approaches to reduce the error inherent in photovoltaic modeling are proposed.

  12. Error-preceding brain activity reflects (mal-)adaptive adjustments of cognitive control: a modeling study.

    Science.gov (United States)

    Steinhauser, Marco; Eichele, Heike; Juvodden, Hilde T; Huster, Rene J; Ullsperger, Markus; Eichele, Tom

    2012-01-01

    Errors in choice tasks are preceded by gradual changes in brain activity presumably related to fluctuations in cognitive control that promote the occurrence of errors. In the present paper, we use connectionist modeling to explore the hypothesis that these fluctuations reflect (mal-)adaptive adjustments of cognitive control. We considered ERP data from a study in which the probability of conflict in an Eriksen-flanker task was manipulated in sub-blocks of trials. Errors in these data were preceded by a gradual decline of N2 amplitude. After fitting a connectionist model of conflict adaptation to the data, we analyzed simulated N2 amplitude, simulated response times (RTs), and stimulus history preceding errors in the model, and found that the model produced the same pattern as obtained in the empirical data. Moreover, this pattern is not found in alternative models in which cognitive control varies randomly or in an oscillating manner. Our simulations suggest that the decline of N2 amplitude preceding errors reflects an increasing adaptation of cognitive control to specific task demands, which leads to an error when these task demands change. Taken together, these results provide evidence that error-preceding brain activity can reflect adaptive adjustments rather than unsystematic fluctuations of cognitive control, and therefore, that these errors are actually a consequence of the adaptiveness of human cognition.

  13. Effects of error covariance structure on estimation of model averaging weights and predictive performance

    Science.gov (United States)

    Lu, Dan; Ye, Ming; Meyer, Philip D.; Curtis, Gary P.; Shi, Xiaoqing; Niu, Xu-Feng; Yabusaki, Steve B.

    2013-01-01

    When conducting model averaging for assessing groundwater conceptual model uncertainty, the averaging weights are often evaluated using model selection criteria such as AIC, AICc, BIC, and KIC (Akaike Information Criterion, Corrected Akaike Information Criterion, Bayesian Information Criterion, and Kashyap Information Criterion, respectively). However, this method often leads to an unrealistic situation in which the best model receives overwhelmingly large averaging weight (close to 100%), which cannot be justified by available data and knowledge. It was found in this study that this problem was caused by using the covariance matrix, CE, of measurement errors for estimating the negative log likelihood function common to all the model selection criteria. This problem can be resolved by using the covariance matrix, Cek, of total errors (including model errors and measurement errors) to account for the correlation between the total errors. An iterative two-stage method was developed in the context of maximum likelihood inverse modeling to iteratively infer the unknown Cek from the residuals during model calibration. The inferred Cek was then used in the evaluation of model selection criteria and model averaging weights. While this method was limited to serial data using time series techniques in this study, it can be extended to spatial data using geostatistical techniques. The method was first evaluated in a synthetic study and then applied to an experimental study, in which alternative surface complexation models were developed to simulate column experiments of uranium reactive transport. It was found that the total errors of the alternative models were temporally correlated due to the model errors. The iterative two-stage method using Cek resolved the problem that the best model receives 100% model averaging weight, and the resulting model averaging weights were supported by the calibration results and physical understanding of the alternative models. Using Cek
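
    A minimal sketch of the iterative two-stage idea on a linear toy model: ordinary least squares gives residuals, an AR(1) total-error covariance is inferred from them, and generalized least squares with that covariance is repeated. The real study embeds this in maximum likelihood calibration of reactive-transport models; the toy model and AR(1) structure here are only illustrative.

      import numpy as np

      rng = np.random.default_rng(9)

      # Toy "true" process observed with temporally correlated total error (AR(1)).
      n = 300
      t = np.linspace(0.0, 10.0, n)
      X = np.column_stack([np.ones(n), t, np.sin(t)])
      beta_true = np.array([2.0, 0.5, 1.5])
      e = np.zeros(n)
      for k in range(1, n):
          e[k] = 0.8 * e[k - 1] + rng.normal(0.0, 0.2)
      y = X @ beta_true + e

      def ar1_covariance(rho, sigma2, n):
          idx = np.arange(n)
          return sigma2 * rho ** np.abs(idx[:, None] - idx[None, :])

      # Stage 1: ordinary least squares, ignoring error correlation.
      beta = np.linalg.lstsq(X, y, rcond=None)[0]
      for it in range(5):
          resid = y - X @ beta
          rho = np.corrcoef(resid[:-1], resid[1:])[0, 1]        # lag-1 autocorrelation
          sigma2 = resid.var()
          C = ar1_covariance(rho, sigma2, n)
          # Stage 2: generalized least squares with the inferred total-error covariance.
          Ci = np.linalg.inv(C)
          beta = np.linalg.solve(X.T @ Ci @ X, X.T @ Ci @ y)
      print("estimated rho:", round(rho, 3), " beta:", np.round(beta, 3))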

  14. Effects of error covariance structure on estimation of model averaging weights and predictive performance

    Energy Technology Data Exchange (ETDEWEB)

    Lu, Dan; Ye, Ming; Meyer, Philip D.; Curtis, Gary P.; Shi, Xiaoqing; Niu, Xu-Feng; Yabusaki, Steven B.

    2013-07-23

    When conducting model averaging for assessing groundwater conceptual model uncertainty, the averaging weights are often evaluated using model selection criteria such as AIC, AICc, BIC, and KIC (Akaike Information Criterion, Corrected Akaike Information Criterion, Bayesian Information Criterion, and Kashyap Information Criterion, respectively). However, this method often leads to an unrealistic situation in which the best model receives overwhelmingly large averaging weight (close to 100%), which cannot be justified by available data and knowledge. It was found in this study that this problem was caused by using the covariance matrix, CE, of measurement errors for estimating the negative log likelihood function common to all the model selection criteria. This problem can be resolved by using the covariance matrix, Cek, of total errors (including model errors and measurement errors) to account for the correlation between the total errors. An iterative two-stage method was developed in the context of maximum likelihood inverse modeling to iteratively infer the unknown Cek from the residuals during model calibration. The inferred Cek was then used in the evaluation of model selection criteria and model averaging weights. While this method was limited to serial data using time series techniques in this study, it can be extended to spatial data using geostatistical techniques. The method was first evaluated in a synthetic study and then applied to an experimental study, in which alternative surface complexation models were developed to simulate column experiments of uranium reactive transport. It was found that the total errors of the alternative models were temporally correlated due to the model errors. The iterative two-stage method using Cek resolved the problem that the best model receives 100% model averaging weight, and the resulting model averaging weights were supported by the calibration results and physical understanding of the alternative models. Using Cek

  15. Effects of error covariance structure on estimation of model averaging weights and predictive performance

    Science.gov (United States)

    Lu, Dan; Ye, Ming; Meyer, Philip D.; Curtis, Gary P.; Shi, Xiaoqing; Niu, Xu-Feng; Yabusaki, Steve B.

    2013-09-01

    When conducting model averaging for assessing groundwater conceptual model uncertainty, the averaging weights are often evaluated using model selection criteria such as AIC, AICc, BIC, and KIC (Akaike Information Criterion, Corrected Akaike Information Criterion, Bayesian Information Criterion, and Kashyap Information Criterion, respectively). However, this method often leads to an unrealistic situation in which the best model receives overwhelmingly large averaging weight (close to 100%), which cannot be justified by available data and knowledge. It was found in this study that this problem was caused by using the covariance matrix, Cɛ, of measurement errors for estimating the negative log likelihood function common to all the model selection criteria. This problem can be resolved by using the covariance matrix, Cek, of total errors (including model errors and measurement errors) to account for the correlation between the total errors. An iterative two-stage method was developed in the context of maximum likelihood inverse modeling to iteratively infer the unknown Cek from the residuals during model calibration. The inferred Cek was then used in the evaluation of model selection criteria and model averaging weights. While this method was limited to serial data using time series techniques in this study, it can be extended to spatial data using geostatistical techniques. The method was first evaluated in a synthetic study and then applied to an experimental study, in which alternative surface complexation models were developed to simulate column experiments of uranium reactive transport. It was found that the total errors of the alternative models were temporally correlated due to the model errors. The iterative two-stage method using Cek resolved the problem that the best model receives 100% model averaging weight, and the resulting model averaging weights were supported by the calibration results and physical understanding of the alternative models. Using Cek

  16. Thermal Error Modelling of the Spindle Using Neurofuzzy Systems

    Directory of Open Access Journals (Sweden)

    Jingan Feng

    2016-01-01

    Full Text Available This paper proposes a new combined model to predict the spindle deformation, which combines the grey models and the ANFIS (adaptive neurofuzzy inference system model. The grey models are used to preprocess the original data, and the ANFIS model is used to adjust the combined model. The outputs of the grey models are used as the inputs of the ANFIS model to train the model. To evaluate the performance of the combined model, an experiment is implemented. Three Pt100 thermal resistances are used to monitor the spindle temperature and an inductive current sensor is used to obtain the spindle deformation. The experimental results display that the combined model can better predict the spindle deformation compared to BP network, and it can greatly improve the performance of the spindle.

  17. Integrating a calibrated groundwater flow model with error-correcting data-driven models to improve predictions

    Science.gov (United States)

    Demissie, Yonas K.; Valocchi, Albert J.; Minsker, Barbara S.; Bailey, Barbara A.

    2009-01-01

    Physically-based groundwater models (PBMs), such as MODFLOW, contain numerous parameters which are usually estimated using statistically-based methods, which assume that the underlying error is white noise. However, because of the practical difficulties of representing all the natural subsurface complexity, numerical simulations are often prone to large uncertainties that can result in both random and systematic model error. The systematic errors can be attributed to conceptual, parameter, and measurement uncertainty, and most often it can be difficult to determine their physical cause. In this paper, we have developed a framework to handle systematic error in physically-based groundwater flow model applications that uses error-correcting data-driven models (DDMs) in a complementary fashion. The data-driven models are separately developed to predict the MODFLOW head prediction errors, which were subsequently used to update the head predictions at existing and proposed observation wells. The framework is evaluated using a hypothetical case study developed based on a phytoremediation site at the Argonne National Laboratory. This case study includes structural, parameter, and measurement uncertainties. In terms of bias and prediction uncertainty range, the complementary modeling framework has shown substantial improvements (up to 64% reduction in RMSE and prediction error ranges) over the original MODFLOW model, in both the calibration and the verification periods. Moreover, the spatial and temporal correlations of the prediction errors are significantly reduced, thus resulting in reduced local biases and structures in the model prediction errors.

  18. Modeling Sea-Level Change using Errors-in-Variables Integrated Gaussian Processes

    Science.gov (United States)

    Cahill, Niamh; Parnell, Andrew; Kemp, Andrew; Horton, Benjamin

    2014-05-01

    We perform Bayesian inference on historical and late Holocene (last 2000 years) rates of sea-level change. The data that form the input to our model are tide-gauge measurements and proxy reconstructions from cores of coastal sediment. To accurately estimate rates of sea-level change and reliably compare tide-gauge compilations with proxy reconstructions it is necessary to account for the uncertainties that characterize each dataset. Many previous studies used simple linear regression models (most commonly polynomial regression) resulting in overly precise rate estimates. The model we propose uses an integrated Gaussian process approach, where a Gaussian process prior is placed on the rate of sea-level change and the data itself is modeled as the integral of this rate process. The non-parametric Gaussian process model is known to be well suited to modeling time series data. The advantage of using an integrated Gaussian process is that it allows for the direct estimation of the derivative of a one dimensional curve. The derivative at a particular time point will be representative of the rate of sea level change at that time point. The tide gauge and proxy data are complicated by multiple sources of uncertainty, some of which arise as part of the data collection exercise. Most notably, the proxy reconstructions include temporal uncertainty from dating of the sediment core using techniques such as radiocarbon. As a result of this, the integrated Gaussian process model is set in an errors-in-variables (EIV) framework so as to take account of this temporal uncertainty. The data must be corrected for land-level change known as glacio-isostatic adjustment (GIA) as it is important to isolate the climate-related sea-level signal. The correction for GIA introduces covariance between individual age and sea level observations into the model. The proposed integrated Gaussian process model allows for the estimation of instantaneous rates of sea-level change and accounts for all

  19. An Error Model for the Cirac-Zoller CNOT gate

    CERN Document Server

    Felloni, Sara

    2009-01-01

    In the framework of ion-trap quantum computing, we develop a characterization of experimentally realistic imperfections which may affect the Cirac-Zoller implementation of the CNOT gate. The CNOT operation is performed by applying a protocol of five laser pulses of appropriate frequency and polarization. The laser-pulse protocol exploits auxiliary levels, and its imperfect implementation leads to unitary as well as non-unitary errors affecting the CNOT operation. We provide a characterization of such imperfections, which are physically realistic and have never been considered before to the best of our knowledge. Our characterization shows that imperfect laser pulses unavoidably cause a leak of information from the states which alone should be transformed by the ideal gate, into the ancillary states exploited by the experimental implementation.

  20. Modeling and Error Analysis of a Superconducting Gravity Gradiometer.

    Science.gov (United States)

    1979-08-01


  1. Approach for wideband direction-of-arrival estimation in the presence of array model errors

    Institute of Scientific and Technical Information of China (English)

    Chen Deli; Zhang Cong; Tao Huamin; Lu Huanzhang

    2009-01-01

    The presence of array imperfections and mutual coupling in sensor arrays poses several challenges for the development of effective algorithms for the direction-of-arrival (DOA) estimation problem in array processing. A correlation-domain wideband DOA estimation algorithm without array calibration is proposed to deal with these array model errors, using an arbitrary antenna array of omnidirectional elements. By using matrix operators that have memory and oblivion characteristics, this algorithm can separate the incident signals effectively. Compared with other typical wideband DOA estimation algorithms based on subspace theory, this algorithm can obtain robust DOA estimates with respect to position error, gain-phase error, and mutual coupling, by utilizing a relaxation technique based on signal separation. The signal separation category and the robustness of this algorithm to the array model errors are analyzed and proved. The validity and robustness of this algorithm in the presence of array model errors are confirmed by theoretical analysis and simulation results.

  2. Model selection for marginal regression analysis of longitudinal data with missing observations and covariate measurement error.

    Science.gov (United States)

    Shen, Chung-Wei; Chen, Yi-Hau

    2015-10-01

    Missing observations and covariate measurement error commonly arise in longitudinal data. However, existing methods for model selection in marginal regression analysis of longitudinal data fail to address the potential bias resulting from these issues. To tackle this problem, we propose a new model selection criterion, the Generalized Longitudinal Information Criterion, which is based on an approximately unbiased estimator for the expected quadratic error of a considered marginal model, accounting for both data missingness and covariate measurement error. The simulation results reveal that the proposed method performs quite well in the presence of missing data and covariate measurement error. In contrast, naive procedures that do not take care of such complexity in the data may perform quite poorly. The proposed method is applied to data from the Taiwan Longitudinal Study on Aging to assess the relationship of depression with health and social status in the elderly, accommodating measurement error in the covariate as well as missing observations.

  3. Doubly-Latent Models of School Contextual Effects: Integrating Multilevel and Structural Equation Approaches to Control Measurement and Sampling Error.

    Science.gov (United States)

    Marsh, Herbert W; Lüdtke, Oliver; Robitzsch, Alexander; Trautwein, Ulrich; Asparouhov, Tihomir; Muthén, Bengt; Nagengast, Benjamin

    2009-11-30

    This article is a methodological-substantive synergy. Methodologically, we demonstrate latent-variable contextual models that integrate structural equation models (with multiple indicators) and multilevel models. These models simultaneously control for and unconfound measurement error due to sampling of items at the individual (L1) and group (L2) levels and sampling error due to the sampling of persons in the aggregation of L1 characteristics to form L2 constructs. We consider a set of models that are latent or manifest in relation to sampling items (measurement error) and sampling of persons (sampling error) and discuss when different models might be most useful. We demonstrate the flexibility of these 4 core models by extending them to include random slopes, latent (single-level or cross-level) interactions, and latent quadratic effects. Substantively, we use these models to test the big-fish-little-pond effect (BFLPE), showing that individual student levels of academic self-concept (L1-ASC) are positively associated with individual-level achievement (L1-ACH) and negatively associated with school-average achievement (L2-ACH), a finding with important policy implications for the way schools are structured. Extending tests of the BFLPE in new directions, we show that the nonlinear effects of L1-ACH (a latent quadratic effect) and the interaction between gender and L1-ACH (an L1 × L1 latent interaction) are not significant. Although random-slope models show no significant school-to-school variation in relations between L1-ACH and L1-ASC, the negative effects of L2-ACH (the BFLPE) do vary somewhat with individual L1-ACH. We conclude with implications for diverse applications of the set of latent contextual models, including recommendations about their implementation, effect size estimates (and confidence intervals) appropriate to multilevel models, and directions for further research in contextual effect analysis.

  4. Local and omnibus goodness-of-fit tests in classical measurement error models

    KAUST Repository

    Ma, Yanyuan

    2010-09-14

    We consider functional measurement error models, i.e. models where covariates are measured with error and yet no distributional assumptions are made about the mismeasured variable. We propose and study a score-type local test and an orthogonal series-based, omnibus goodness-of-fit test in this context, where no likelihood function is available or calculated; i.e., all the tests are proposed in the semiparametric model framework. We demonstrate that our tests have optimality properties and computational advantages that are similar to those of the classical score tests in the parametric model framework. The test procedures are applicable to several semiparametric extensions of measurement error models, including when the measurement error distribution is estimated non-parametrically as well as for generalized partially linear models. The performance of the local score-type and omnibus goodness-of-fit tests is demonstrated through simulation studies and analysis of a nutrition data set.

  5. Highly porous thermal protection materials: Modelling and prediction of the methodical experimental errors

    Science.gov (United States)

    Cherepanov, Valery V.; Alifanov, Oleg M.; Morzhukhina, Alena V.; Budnik, Sergey A.

    2016-11-01

    The formation mechanisms and the main factors affecting the systematic error of thermocouples were investigated. According to the results of experimental studies and mathematical modelling it was established that in highly porous heat resistant materials for aerospace application the thermocouple errors are determined by two competing mechanisms provided correlation between the errors and the difference between radiation and conduction heat fluxes. The comparative analysis was carried out and some features of the methodical error formation related to the distances from the heated surface were established.

  6. In vivo models of multiple myeloma (MM).

    Science.gov (United States)

    Sanchez, Eric; Chen, Haiming; Berenson, James R

    2014-06-01

    The development of the plasma cell tumor (PCT) model was the first widely accepted in vivo model of multiple myeloma (MM). Potter and colleagues used this chemically induced PCT model to study the pathophysiology of malignant plasma cells and also used it to screen anti-MM agents. Two decades later the C57BL/KaLwRij mouse strain was found to spontaneously develop MM. Testing of pamidronate using this endogenously arising MM model revealed significant reductions in MM-associated bone disease, which was subsequently confirmed in human trials in MM patients. Transgenic models have also been developed in which the MM is localized in the bone marrow causing lytic bone lesions. Experiments in a transgenic model showed that a new oral proteasome inhibitor was effective at reducing MM burden. A clinical trial later confirmed this observation and validated the model. The xenograft model has been used to grow human MM in immunocompromised mice. The xenograft models of MM have been very useful in optimizing drug schedules and doses, which have helped in the treatments given to MM patients. However, in vivo models have been criticized for having a low clinical predictive power of new chemical entities (NCEs). Despite this, the knowledge gained from in vivo models of MM has without a doubt benefited MM patients.

  7. A novel data-driven approach to model error estimation in Data Assimilation

    Science.gov (United States)

    Pathiraja, Sahani; Moradkhani, Hamid; Marshall, Lucy; Sharma, Ashish

    2016-04-01

    Error characterisation is a fundamental component of Data Assimilation (DA) studies. Effectively describing model error statistics has been a challenging area, with many traditional methods requiring some level of subjectivity (for instance in defining the error covariance structure). Recent advances have focused on removing the need for tuning of error parameters, although there are still some outstanding issues. Many methods focus only on the first and second moments, and rely on assuming multivariate Gaussian statistics. We propose a non-parametric, data-driven framework to estimate the full distributional form of model error, i.e. the transition density p(xt|xt-1). All sources of uncertainty associated with the model simulations are considered, without needing to assign error characteristics/devise stochastic perturbations for individual components of model uncertainty (e.g. input, parameter and structural). A training period is used to derive the error distribution of observed variables, conditioned on (potentially hidden) states. Errors in hidden states are estimated from the conditional distribution of observed variables using non-linear optimization. The framework is discussed in detail, and an application to a hydrologic case study with hidden states for one-day-ahead streamflow prediction is presented. Results demonstrate improved predictions and more realistic uncertainty bounds compared to a standard tuning approach.

  8. Error modeling and tolerance design of a parallel manipulator with full-circle rotation

    Directory of Open Access Journals (Sweden)

    Yanbing Ni

    2016-05-01

    Full Text Available A method for improving the accuracy of a parallel manipulator with full-circle rotation is systematically investigated in this work via kinematic analysis, error modeling, sensitivity analysis, and tolerance allocation. First, a kinematic analysis of the mechanism is made using the space vector chain method. Using the results as a basis, an error model is formulated considering the main error sources. Position and orientation error-mapping models are established by mathematical transformation of the parallelogram structure characteristics. Second, a sensitivity analysis is performed on the geometric error sources. A global sensitivity evaluation index is proposed to evaluate the contribution of the geometric errors to the accuracy of the end-effector. The analysis results provide a theoretical basis for the allocation of tolerances to the parts of the mechanical design. Finally, based on the results of the sensitivity analysis, the design of the tolerances can be solved as a nonlinearly constrained optimization problem. A genetic algorithm is applied to carry out the allocation of the manufacturing tolerances of the parts. Accordingly, the tolerance ranges for nine kinds of geometrical error sources are obtained. The achievements made in this work can also be applied to other similar parallel mechanisms with full-circle rotation to improve error modeling and design accuracy.

  9. Identification and estimation of nonlinear models using two samples with nonclassical measurement errors

    KAUST Repository

    Carroll, Raymond J.

    2010-05-01

    This paper considers identification and estimation of a general nonlinear Errors-in-Variables (EIV) model using two samples. Both samples consist of a dependent variable, some error-free covariates, and an error-prone covariate, for which the measurement error has unknown distribution and could be arbitrarily correlated with the latent true values; and neither sample contains an accurate measurement of the corresponding true variable. We assume that the regression model of interest - the conditional distribution of the dependent variable given the latent true covariate and the error-free covariates - is the same in both samples, but the distributions of the latent true covariates vary with observed error-free discrete covariates. We first show that the general latent nonlinear model is nonparametrically identified using the two samples when both could have nonclassical errors, without either instrumental variables or independence between the two samples. When the two samples are independent and the nonlinear regression model is parameterized, we propose sieve Quasi Maximum Likelihood Estimation (Q-MLE) for the parameter of interest, and establish its root-n consistency and asymptotic normality under possible misspecification, and its semiparametric efficiency under correct specification, with easily estimated standard errors. A Monte Carlo simulation and a data application are presented to show the power of the approach.

  10. 3D CMM strain-gauge triggering probe error characteristics modeling using fuzzy logic

    DEFF Research Database (Denmark)

    Achiche, Sofiane; Wozniak, A; Fan, Zhun;

    2008-01-01

    The error values of CMMs depend on the probing direction; hence their spatial variation is a key part of the probe inaccuracy. This paper presents genetically-generated fuzzy knowledge bases (FKBs) to model the spatial error characteristics of a CMM module-changing probe. Two automatically generat...

  11. 3D CMM Strain-Gauge Triggering Probe Error Characteristics Modeling

    DEFF Research Database (Denmark)

    Achiche, Sofiane; Wozniak, Adam; Fan, Zhun;

    2008-01-01

    The error values of CMMs depend on the probing direction; hence their spatial variation is a key part of the probe inaccuracy. This paper presents genetically-generated fuzzy knowledge bases (FKBs) to model the spatial error characteristics of a CMM module-changing probe. Two automatically generat...

  12. Taking the Error Term of the Factor Model into Account: The Factor Score Predictor Interval

    Science.gov (United States)

    Beauducel, Andre

    2013-01-01

    The problem of factor score indeterminacy implies that the factor and the error scores cannot be completely disentangled in the factor model. It is therefore proposed to compute Harman's factor score predictor that contains an additive combination of factor and error variance. This additive combination is discussed in the framework of classical…

  13. Bit Error Rate Performance for Multicarrier Code Division Multiple Access over Generalized η-μ Fading Environment

    Directory of Open Access Journals (Sweden)

    James Osuru Mark

    2011-01-01

    Full Text Available The multicarrier code division multiple access (MC-CDMA) system has received considerable attention from researchers owing to its great potential in achieving high data rate transmission in wireless communications. Due to the detrimental effects of multipath fading, the performance of the system degrades. Similarly, the impact of non-orthogonality of spreading codes can exist and cause interference. This paper addresses the performance of the multicarrier code division multiple access system under the influence of a frequency-selective generalized η-µ fading channel and multiple access interference caused by other active users to the desired one. We apply the Gaussian approximation technique to analyse the performance of the system. The average bit error rate is derived and expressed in Gauss hypergeometric functions. The maximal ratio combining diversity technique is utilized to alleviate the deleterious effect of multipath fading. We observe that the system performance improves when the parameter η increases or decreases under format 1 or format 2 conditions, respectively.

  14. Particle Filter with Novel Nonlinear Error Model for Miniature Gyroscope-Based Measurement While Drilling Navigation.

    Science.gov (United States)

    Li, Tao; Yuan, Gannan; Li, Wang

    2016-03-15

    The derivation of a conventional error model for the miniature gyroscope-based measurement while drilling (MGWD) system is based on the assumption that the errors of attitude are small enough so that the direction cosine matrix (DCM) can be approximated or simplified by the errors of small-angle attitude. However, the simplification of the DCM would introduce errors to the navigation solutions of the MGWD system if the initial alignment cannot provide precise attitude, especially for the low-cost microelectromechanical system (MEMS) sensors operated in harsh multilateral horizontal downhole drilling environments. This paper proposes a novel nonlinear error model (NNEM) by the introduction of the error of DCM, and the NNEM can reduce the propagated errors under large-angle attitude error conditions. The zero velocity and zero position are the reference points and the innovations in the states estimation of particle filter (PF) and Kalman filter (KF). The experimental results illustrate that the performance of PF is better than KF and the PF with NNEM can effectively restrain the errors of system states, especially for the azimuth, velocity, and height in the quasi-stationary condition.
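
    For readers unfamiliar with the filter itself, a generic bootstrap particle filter is sketched below; it illustrates the propagate-weight-resample cycle only, not the paper's MGWD-specific nonlinear error model, and all functions and values are illustrative:

```python
import numpy as np

def bootstrap_particle_filter(observations, n_particles, propagate, likelihood, rng):
    """Generic bootstrap particle filter: propagate, weight, resample.

    `propagate(particles, rng)` draws from the state-transition model and
    `likelihood(obs, particles)` returns p(obs | particle); both are
    placeholders for a problem-specific model such as a navigation error model.
    """
    particles = rng.normal(size=n_particles)           # hypothetical initial spread
    estimates = []
    for obs in observations:
        particles = propagate(particles, rng)
        weights = likelihood(obs, particles)
        weights /= weights.sum()
        estimates.append(np.sum(weights * particles))  # posterior-mean estimate
        # Multinomial resampling to avoid weight degeneracy.
        idx = rng.choice(n_particles, size=n_particles, p=weights)
        particles = particles[idx]
    return np.array(estimates)

# Toy usage: random-walk state observed with Gaussian noise.
rng = np.random.default_rng(1)
truth = np.cumsum(rng.normal(0, 0.1, 100))
obs = truth + rng.normal(0, 0.5, 100)
est = bootstrap_particle_filter(
    obs, 500,
    propagate=lambda p, r: p + r.normal(0, 0.1, p.shape),
    likelihood=lambda y, p: np.exp(-0.5 * ((y - p) / 0.5) ** 2),
    rng=rng,
)
```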

  15. Particle Filter with Novel Nonlinear Error Model for Miniature Gyroscope-Based Measurement While Drilling Navigation

    Directory of Open Access Journals (Sweden)

    Tao Li

    2016-03-01

    Full Text Available The derivation of a conventional error model for the miniature gyroscope-based measurement while drilling (MGWD system is based on the assumption that the errors of attitude are small enough so that the direction cosine matrix (DCM can be approximated or simplified by the errors of small-angle attitude. However, the simplification of the DCM would introduce errors to the navigation solutions of the MGWD system if the initial alignment cannot provide precise attitude, especially for the low-cost microelectromechanical system (MEMS sensors operated in harsh multilateral horizontal downhole drilling environments. This paper proposes a novel nonlinear error model (NNEM by the introduction of the error of DCM, and the NNEM can reduce the propagated errors under large-angle attitude error conditions. The zero velocity and zero position are the reference points and the innovations in the states estimation of particle filter (PF and Kalman filter (KF. The experimental results illustrate that the performance of PF is better than KF and the PF with NNEM can effectively restrain the errors of system states, especially for the azimuth, velocity, and height in the quasi-stationary condition.

  16. Distortion Modeling and Error Robust Coding Scheme for H.26L Video

    Institute of Scientific and Technical Information of China (English)

    CHENChuan; YUSongyu; CHENGLianji

    2004-01-01

    Transmission of hybrid-coded video including motion compensation and spatial prediction over an error-prone channel results in the well-known problem of error propagation, because of the drift in reference frames between encoder and decoder. The prediction loop propagates errors and causes substantial degradation in video quality. Especially in H.26L video, both intra and inter prediction strategies are used to improve compression efficiency; however, they make error propagation more serious. This work proposes distortion models for H.26L video to optimally estimate the overall distortion of decoder frame reconstruction due to quantization, error propagation, and error concealment. Based on these statistical distortion models, our error robust coding scheme only integrates the distinct distortion between intra and inter macroblocks into a rate-distortion based framework to select a suitable coding mode for each macroblock, and so the cost in computation complexity is modest. Simulations under typical 3GPP/3GPP2 channel and Internet channel conditions have shown that our proposed scheme achieves much better performance than those currently used in H.26L. The error propagation estimation and its effect with fractional pixel-level prediction have also been tested. All the results have demonstrated that our proposed scheme achieves a good balance between compression efficiency and error robustness for H.26L video, at the cost of modest additional complexity.

  17. Model error analyses of photochemistry mechanisms using the BEATBOX/BOXMOX data assimilation toy model

    Science.gov (United States)

    Knote, C. J.; Eckl, M.; Barré, J.; Emmons, L. K.

    2016-12-01

    Simplified descriptions of photochemistry in the atmosphere ('photochemical mechanisms'), necessary to reduce the computational burden of a model simulation, contribute significantly to the overall uncertainty of an air quality model. Understanding how the photochemical mechanism contributes to observed model errors through examination of results of the complete model system is next to impossible due to cancellation and amplification effects amongst the tightly interconnected model components. Here we present BEATBOX, a novel method to evaluate photochemical mechanisms using the underlying chemistry box model BOXMOX. With BOXMOX we can rapidly initialize various mechanisms (e.g. MOZART, RACM, CBMZ, MCM) with homogenized observations (e.g. from field campaigns) and conduct idealized 'chemistry in a jar' simulations under controlled conditions. BEATBOX is a data assimilation toy model built upon BOXMOX which allows us to simulate the effects of assimilating observations (e.g., CO, NO2, O3) into these simulations. In this presentation we show how we use the Master Chemical Mechanism (MCM, U Leeds) as a benchmark for more simplified mechanisms like MOZART, use BEATBOX to homogenize the chemical environment, and diagnose errors within the more simplified mechanisms. We present BEATBOX as a new, freely available tool that allows researchers to rapidly evaluate their chemistry mechanism against a range of others under varying chemical conditions.

  18. Influences of observation errors in eddy flux data on inverse model parameter estimation

    Directory of Open Access Journals (Sweden)

    G. Lasslop

    2008-09-01

    Full Text Available Eddy covariance data are increasingly used to estimate parameters of ecosystem models. For proper maximum likelihood parameter estimates the error structure in the observed data has to be fully characterized. In this study we propose a method to characterize the random error of the eddy covariance flux data, and analyse the error distribution, standard deviation, cross- and autocorrelation of CO2 and H2O flux errors at four different European eddy covariance flux sites. Moreover, we examine how the treatment of those errors and additional systematic errors influence statistical estimates of parameters and their associated uncertainties with three models of increasing complexity – a hyperbolic light response curve, a light response curve coupled to water fluxes and the SVAT scheme BETHY. In agreement with previous studies we find that the error standard deviation scales with the flux magnitude. The previously found strongly leptokurtic error distribution is revealed to be largely due to a superposition of almost Gaussian distributions with standard deviations varying by flux magnitude. The cross-correlations of CO2 and H2O fluxes were in all cases negligible (R2 below 0.2), while the autocorrelation is usually below 0.6 at a lag of 0.5 h and decays rapidly at larger time lags. This implies that in these cases the weighted least squares criterion yields maximum likelihood estimates. To study the influence of the observation errors on model parameter estimates we used synthetic datasets, based on observations of two different sites. We first fitted the respective models to observations and then added the random error estimates described above and the systematic error, respectively, to the model output. This strategy enables us to compare the estimated parameters with the true parameters. We illustrate that the correct implementation of the random error standard deviation scaling with flux
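
    A minimal sketch of the weighting idea the abstract points to: if the random-error standard deviation scales with flux magnitude, the weights in a least-squares fit should scale accordingly. The light-response curve, parameter values, and noise model below are illustrative only, not the paper's fitted values:

```python
import numpy as np
from scipy.optimize import curve_fit

def light_response(ppfd, alpha, gpp_max):
    """Hyperbolic (rectangular-hyperbola) light-response curve."""
    return alpha * ppfd * gpp_max / (alpha * ppfd + gpp_max)

rng = np.random.default_rng(2)
ppfd = np.linspace(0, 2000, 300)              # photosynthetic photon flux density
true_flux = light_response(ppfd, 0.05, 30.0)
sigma = 0.5 + 0.1 * np.abs(true_flux)         # error std scales with flux magnitude
obs_flux = true_flux + rng.normal(0, sigma)

# Weighted least squares: supplying sigma makes curve_fit minimize
# sum(((obs - model) / sigma)**2), i.e. the maximum likelihood criterion
# under the flux-dependent Gaussian error model.
popt, pcov = curve_fit(light_response, ppfd, obs_flux, p0=[0.03, 20.0],
                       sigma=sigma, absolute_sigma=True)
```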

  19. Multiplicative ARMA models to generate hourly series of global irradiation

    Energy Technology Data Exchange (ETDEWEB)

    Mora-Lopez, L. [Universidad de Malaga (Spain). Dpto. Lenguajes y C. Computacion; Sidrach-de-Cardona, M. [Universidad de Malaga (Spain). Dpto. Fisica Aplicada

    1998-11-01

    A methodology to generate hourly series of global irradiation is proposed. The only input parameter which is required is the monthly mean value of daily global irradiation, which is available for most locations. The procedure to obtain new series is based on the use of a multiplicative autoregressive moving-average statistical model for time series with regular and seasonal components. The multiplicative nature of these models enables capture of the two types of relationships observed in recorded hourly series of global irradiation: on the one hand, the relationship between the value at one hour and the value at the previous hour; and on the other hand, the relationship between the value at one hour in one day and the value at the same hour in the previous day. In this paper the main drawback which arises when using these models to generate new series is solved: namely, the need for available recorded series in order to obtain the three parameters contained in the statistical ARMA model which is proposed (autoregressive coefficient, moving-average coefficient and variance of the error term). Specifically, expressions which enable estimation of these parameters using only monthly mean values of daily global irradiation are proposed in this paper. (author)
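
    The sketch below simulates an illustrative multiplicative seasonal structure of the kind described above (a regular AR term combined with a seasonal MA term at the daily lag); the exact model form and parameter values in the paper may differ, and the deterministic daily profile derived from the monthly mean irradiation is omitted:

```python
import numpy as np

def simulate_multiplicative_arma(n_hours, phi, theta_seasonal, sigma, season=24, rng=None):
    """Simulate z_t from (1 - phi*B) z_t = (1 - Theta*B^season) eps_t,
    i.e. a regular AR(1) term combined with a seasonal MA term at lag 24."""
    rng = rng or np.random.default_rng()
    eps = rng.normal(0, sigma, n_hours + season)
    z = np.zeros(n_hours + season)
    for t in range(season, n_hours + season):
        z[t] = phi * z[t - 1] + eps[t] - theta_seasonal * eps[t - season]
    return z[season:]

# Hourly stochastic deviations; in the methodology these would be combined with
# a deterministic profile built from the monthly mean of daily global irradiation.
series = simulate_multiplicative_arma(n_hours=24 * 30, phi=0.8,
                                      theta_seasonal=0.6, sigma=0.1,
                                      rng=np.random.default_rng(3))
```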

  20. On the asymptotic ergodic capacity of FSO links with generalized pointing error model

    KAUST Repository

    Al-Quwaiee, Hessa

    2015-09-11

    Free-space optical (FSO) communication systems are negatively affected by two physical phenomena, namely, scintillation due to atmospheric turbulence and pointing errors. To quantify the effect of these two factors on FSO system performance, we need an effective mathematical model for them. Scintillations are typically modeled by the log-normal and Gamma-Gamma distributions for weak and strong turbulence conditions, respectively. In this paper, we propose and study a generalized pointing error model based on the Beckmann distribution. We then derive the asymptotic ergodic capacity of FSO systems under the joint impact of turbulence and generalized pointing error impairments. © 2015 IEEE.

  1. Reconstruction error analysis of azimuth multiple-phase-center SAR signals

    Institute of Scientific and Technical Information of China (English)

    马喜乐; 董臻; 何峰; 孙造宇; 梁甸农

    2014-01-01

    The influence of array errors on the signal-reconstruction performance of azimuth multiple-phase-center (AMPC) synthetic aperture radar (SAR) systems is analyzed. The array errors are modeled as a stochastic process and, in combination with the least-squares (LS) reconstruction algorithm, an analytical expression for the reconstruction-error power spectrum is derived; analytical expressions for the signal-to-noise ratio (SNR) and the azimuth ambiguity-to-signal ratio (AASR) then follow. Simulation experiments confirm the validity of the theoretical analysis. The analysis indicates that, as the pulse repetition frequency (PRF) of the system increases, the image quality of AMPC SAR can be improved by decreasing the reconstruction coefficients. The approach and results support the system design and image-quality prediction of AMPC SAR.

  2. Variable bit rate video traffic modeling by multiplicative multifractal model

    Institute of Scientific and Technical Information of China (English)

    Huang Xiaodong; Zhou Yuanhua; Zhang Rongfu

    2006-01-01

    A multiplicative multifractal process can model video traffic well. The multiplier distributions in the multiplicative multifractal model for video traffic are investigated, and it is found that the Gaussian distribution is not suitable for describing the multipliers on small time scales. A new statistical distribution, the symmetric Pareto distribution, is introduced and applied instead of the Gaussian for the multipliers on those scales. Based on that, the algorithm is updated so that the symmetric Pareto distribution and the Gaussian distribution are used to model video traffic on different time scales. The simulation results demonstrate that the algorithm can model video traffic more accurately.
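
    A toy multiplicative cascade is sketched below to show the mechanism: traffic mass is split recursively by random multipliers, with a near-Gaussian multiplier law at coarse scales and a heavier-tailed symmetric law at fine scales. The multiplier distributions here (clipped Gaussian and scaled Student-t) are stand-ins; the paper's symmetric Pareto form is not reproduced:

```python
import numpy as np

def multiplicative_cascade(levels, draw_multiplier, rng):
    """Conservative binomial cascade: at each level the 'traffic mass' in every
    bin is split in two by a random multiplier m in (0, 1); the sibling gets 1 - m."""
    mass = np.array([1.0])
    for _ in range(levels):
        m = draw_multiplier(mass.size, rng)
        mass = np.column_stack([mass * m, mass * (1.0 - m)]).ravel()
    return mass

rng = np.random.default_rng(4)

# Illustrative multiplier laws: near-Gaussian for coarse scales,
# heavier-tailed and symmetric about 0.5 for fine scales.
gaussian_mult = lambda n, r: np.clip(r.normal(0.5, 0.1, n), 0.01, 0.99)
heavy_mult = lambda n, r: np.clip(0.5 + 0.1 * r.standard_t(df=2, size=n), 0.01, 0.99)

coarse = multiplicative_cascade(6, gaussian_mult, rng)                 # coarse-scale splits
traffic = np.concatenate([multiplicative_cascade(4, heavy_mult, rng) * c for c in coarse])
```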

  3. Fixing Geometric Errors on Polygonal Models: A Survey

    Institute of Scientific and Technical Information of China (English)

    Tao Ju

    2009-01-01

    Polygonal models are popular representations of 3D objects. The use of polygonal models in computational applications often requires a model to properly bound a 3D solid. That is, the polygonal model needs to be closed, manifold, and free of self-intersections. This paper surveys a sizeable literature for repairing models that do not satisfy these criteria, focusing on categorizing the methods by their methodology and capability. We hope to offer pointers to further readings for researchers and practitioners, and suggestions of promising directions for future research endeavors.

  4. The Role of Model and Initial Condition Error in Numerical Weather Forecasting Investigated with an Observing System Simulation Experiment

    Science.gov (United States)

    Prive, Nikki C.; Errico, Ronald M.

    2013-01-01

    A series of experiments that explore the roles of model and initial condition error in numerical weather prediction are performed using an observing system simulation experiment (OSSE) framework developed at the National Aeronautics and Space Administration Global Modeling and Assimilation Office (NASA/GMAO). The use of an OSSE allows the analysis and forecast errors to be explicitly calculated, and different hypothetical observing networks can be tested with ease. In these experiments, both a full global OSSE framework and an 'identical twin' OSSE setup are utilized to compare the behavior of the data assimilation system and evolution of forecast skill with and without model error. The initial condition error is manipulated by varying the distribution and quality of the observing network and the magnitude of observation errors. The results show that model error has a strong impact on both the quality of the analysis field and the evolution of forecast skill, including both systematic and unsystematic model error components. With a realistic observing network, the analysis state retains a significant quantity of error due to systematic model error. If errors of the analysis state are minimized, model error acts to rapidly degrade forecast skill during the first 24-48 hours of forward integration. In the presence of model error, the impact of observation errors on forecast skill is small, but in the absence of model error, observation errors cause a substantial degradation of the skill of medium range forecasts.

  5. On the Numerical Modelling and Error Compensation for General Gough-Stewart Platform

    Directory of Open Access Journals (Sweden)

    Eusebio Hernandez

    2014-11-01

    Full Text Available Parallel robots are specially designed to perform high-precision tasks. Nevertheless, manufacturing, assembling and control issues can reduce their capacity to perform adequately. Observing the measurement data acquired with high-precision devices - such as laser-based instruments - it is not surprising that the error data follow patterns or have a structure because, in many cases, the greatest error comes from a mechanical bias introduced by manufacturing issues. Even though we cannot determine with certainty where the error comes from, a pattern in the measured data suggests that it is feasible for it to be modelled and corrected - in a significant proportion - by purely software applications, without the need to disassemble or re-manufacture any component. This work deals with the problem of finding a mathematical model which adequately fits the error data from the legs of a general Gough-Stewart platform. Hence, we obtain an expression which can be subtracted from the control parameters in order to compensate the inherent mechanical error in the legs. The purpose of this article is two-fold: 1) to present numerical results of the beneficial effects of the error compensation in the legs as well as in the end-effector, and 2) to introduce a numerical methodology to find a model for error compensation and to numerically simulate its effects. Numerical, graphical and statistical evidence of the error improvements, according to this methodology, is provided.
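
    A minimal sketch of the fit-and-subtract idea for one leg, assuming a low-order polynomial error model and entirely hypothetical calibration data (the paper's actual model class and measurements may differ):

```python
import numpy as np

# Hypothetical calibration data: commanded leg lengths vs. laser-measured
# leg-length errors for one leg of the platform (units: mm).
rng = np.random.default_rng(5)
commanded = np.linspace(500.0, 900.0, 60)
measured_error = 0.02 + 1.5e-4 * (commanded - 700.0) + rng.normal(0, 0.004, commanded.size)

# Fit a low-order polynomial to the structured (systematic) part of the error.
coeffs = np.polyfit(commanded, measured_error, deg=2)
error_model = np.poly1d(coeffs)

# Compensation: subtract the predicted error from the commanded leg length.
def compensated_length(target_length):
    return target_length - error_model(target_length)
```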

  6. Accounting for covariate measurement error in a Cox model analysis of recurrence of depression.

    Science.gov (United States)

    Liu, K; Mazumdar, S; Stone, R A; Dew, M A; Houck, P R; Reynolds, C F

    2001-01-01

    When a covariate measured with error is used as a predictor in a survival analysis using the Cox model, the parameter estimate is usually biased. In clinical research, covariates measured without error such as treatment procedure or sex are often used in conjunction with a covariate measured with error. In a randomized clinical trial of two types of treatments, we account for the measurement error in the covariate, log-transformed total rapid eye movement (REM) activity counts, in a Cox model analysis of the time to recurrence of major depression in an elderly population. Regression calibration and two variants of a likelihood-based approach are used to account for measurement error. The likelihood-based approach is extended to account for the correlation between replicate measures of the covariate. Using the replicate data decreases the standard error of the parameter estimate for log(total REM) counts while maintaining the bias reduction of the estimate. We conclude that covariate measurement error and the correlation between replicates can affect results in a Cox model analysis and should be accounted for. In the depression data, these methods render comparable results that have less bias than the results when measurement error is ignored.

  7. Comparison Experiments of Different Model Error Schemes in Ensemble Kalman Filter Soil Moisture Assimilation

    Science.gov (United States)

    Nie, Suping; Zhu, Jiang; Luo, Yong

    2010-05-01

    The purpose of this study is to explore the performance of different model error schemes in soil moisture data assimilation. Based on the ensemble Kalman filter (EnKF) and the atmosphere-vegetation interaction model (AVIM), point-scale analysis results for three schemes, 1) covariance inflation (CI), 2) direct random disturbance (DRD), and 3) error source random disturbance (ESRD), are compared under conditions of different observational error estimations, different observation layers, and different observation intervals using a series of idealized experiments. The results show that all these schemes obtain good assimilation results when the assumed observational error is an accurate statistical representation of the actual error used to perturb the original truth value, and the ESRD scheme has the smallest root mean square error (RMSE). Overestimation or underestimation of the observational errors sensitively affects the assimilation results of the CI and DRD schemes: the performance of these two schemes deteriorates obviously, while the ESRD scheme retains its capability well. When the observation layers or observation interval increase, the performance of both the CI and DRD schemes declines evidently. But for the ESRD scheme, as it can assimilate multi-layer observations in a coordinated way, the increased observations improve the assimilation results further. Moreover, as the ESRD scheme contains a certain amount of model error estimation in its assimilation process, it also performs well in assimilating sparse-in-time observations.
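
    The sketch below shows one EnKF analysis step with the simplest of the three schemes, multiplicative covariance inflation (CI); the observation operator, state layout, and numbers are illustrative placeholders, not the AVIM configuration used in the study:

```python
import numpy as np

def enkf_analysis(ensemble, obs, obs_error_var, h, inflation=1.0, rng=None):
    """One EnKF analysis step with multiplicative covariance inflation.

    `ensemble` has shape (n_members, n_state); `h` maps a state vector to
    observation space. inflation > 1 widens the forecast spread to compensate
    for unrepresented model error (the 'CI' scheme)."""
    rng = rng or np.random.default_rng()
    mean = ensemble.mean(axis=0)
    ensemble = mean + inflation * (ensemble - mean)      # covariance inflation

    hx = np.array([h(x) for x in ensemble])              # (n_members, n_obs)
    hx_mean = hx.mean(axis=0)
    x_pert, hx_pert = ensemble - mean, hx - hx_mean
    n = ensemble.shape[0]
    p_xy = x_pert.T @ hx_pert / (n - 1)
    p_yy = hx_pert.T @ hx_pert / (n - 1) + obs_error_var * np.eye(hx.shape[1])
    gain = p_xy @ np.linalg.inv(p_yy)

    # Perturbed-observation update for each ensemble member.
    obs_pert = obs + rng.normal(0, np.sqrt(obs_error_var), size=hx.shape)
    return ensemble + (obs_pert - hx) @ gain.T

# Toy usage: 2-layer soil moisture state, surface layer observed.
rng = np.random.default_rng(6)
ens = rng.normal([0.25, 0.30], 0.02, size=(50, 2))
analysis = enkf_analysis(ens, obs=np.array([0.28]), obs_error_var=0.01 ** 2,
                         h=lambda x: x[:1], inflation=1.1, rng=rng)
```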

  8. Incremental activity modeling in multiple disjoint cameras.

    Science.gov (United States)

    Loy, Chen Change; Xiang, Tao; Gong, Shaogang

    2012-09-01

    Activity modeling and unusual event detection in a network of cameras is challenging, particularly when the camera views are not overlapped. We show that it is possible to detect unusual events in multiple disjoint cameras as context-incoherent patterns through incremental learning of time delayed dependencies between distributed local activities observed within and across camera views. Specifically, we model multicamera activities using a Time Delayed Probabilistic Graphical Model (TD-PGM) with different nodes representing activities in different decomposed regions from different views and the directed links between nodes encoding their time delayed dependencies. To deal with visual context changes, we formulate a novel incremental learning method for modeling time delayed dependencies that change over time. We validate the effectiveness of the proposed approach using a synthetic data set and videos captured from a camera network installed at a busy underground station.

  9. Reporting error in weight and its implications for bias in economic models.

    Science.gov (United States)

    Cawley, John; Maclean, Johanna Catherine; Hammer, Mette; Wintfeld, Neil

    2015-12-01

    Most research on the economic consequences of obesity uses data on self-reported weight, which contains reporting error that has the potential to bias coefficient estimates in economic models. The purpose of this paper is to measure the extent and characteristics of reporting error in weight, and to examine its impact on regression coefficients in models of the healthcare consequences of obesity. We analyze data from the National Health and Nutrition Examination Survey (NHANES) for 2003-2010, which includes both self-reports and measurements of weight and height. We find that reporting error in weight is non-classical: underweight respondents tend to overreport, and overweight and obese respondents tend to underreport, their weight, with underreporting increasing in measured weight. This error results in roughly 1 out of 7 obese individuals being misclassified as non-obese. Reporting error is also correlated with other common regressors in economic models, such as education. Although it is a common misconception that reporting error always causes attenuation bias, comparisons of models that use self-reported and measured weight confirm that reporting error can cause upward bias in coefficient estimates. For example, use of self-reports leads to overestimates of the probability that an obese man uses a prescription drug, has a healthcare visit, or has a hospital admission. These findings underscore that models of the consequences of obesity should use measurements of weight, when available, and that social science datasets should measure weight rather than simply ask subjects to report their weight.

  10. Intelligent control using multiple models based on on-line learning

    Institute of Scientific and Technical Information of China (English)

    Junyong ZHAI; Shumin FEI; Feipeng DA

    2006-01-01

    In this paper we deal with the problem of plants with large parameter variations under different operating modes. A novel intelligent control algorithm based on multiple models is proposed to improve the dynamic response performance. At the same time, an adaptive model bank is applied to establish models without prior system information. Multiple models and corresponding controllers are automatically established on-line by a conventional adaptive model and a re-initialized one. The best controller is chosen by the performance function at every instant. The closed-loop system's stability and the asymptotic convergence of the tracking error can be guaranteed. Simulation results have confirmed the validity of the proposed method.

  11. Statistical analysis of error propagation from radar rainfall to hydrological models

    Directory of Open Access Journals (Sweden)

    D. Zhu

    2013-04-01

    Full Text Available This study attempts to characterise the manner in which inherent error in radar rainfall estimates influences the character of stream flow simulation uncertainty in validated hydrological modelling. An artificial statistical error model described by a Gaussian distribution was developed to generate realisations of possible combinations of normalised errors and normalised bias to reflect the identified radar error and temporal dependence. These realisations were embedded in the 5 km/15 min UK Nimrod radar rainfall data and used to generate ensembles of stream flow simulations using three different hydrological models with varying degrees of complexity, which consist of a fully distributed physically-based model MIKE SHE, a semi-distributed, lumped model TOPMODEL and the unit hydrograph model PRTF. These models were built for this purpose and applied to the Upper Medway Catchment (220 km2) in South-East England. The results show that the normalised bias of the radar rainfall estimates was enhanced in the simulated stream flow and was also the dominant factor that had a significant impact on stream flow simulations. This preliminary radar-error-generation model could be developed more rigorously and comprehensively for the error characteristics of weather radars for quantitative measurement of rainfall.
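
    A minimal sketch of generating such error realisations: each ensemble member gets a Gaussian normalised bias plus a temporally correlated (here AR(1)) normalised error series, applied multiplicatively to the radar estimate. The parameter values and the exact error structure are illustrative, not those identified in the paper:

```python
import numpy as np

def rainfall_error_realisations(radar_rain, n_realisations, bias_std, error_std,
                                lag1_corr, rng=None):
    """Generate perturbed rainfall series from a radar estimate."""
    rng = rng or np.random.default_rng()
    n_steps = radar_rain.size
    out = np.empty((n_realisations, n_steps))
    for k in range(n_realisations):
        bias = rng.normal(0.0, bias_std)                 # normalised bias, one per member
        e = np.empty(n_steps)
        e[0] = rng.normal(0.0, error_std)
        for t in range(1, n_steps):                      # AR(1) temporal dependence
            e[t] = lag1_corr * e[t - 1] + np.sqrt(1 - lag1_corr ** 2) * rng.normal(0.0, error_std)
        out[k] = np.maximum(radar_rain * (1.0 + bias + e), 0.0)
    return out

radar_rain = np.array([0.0, 0.2, 1.4, 3.0, 2.2, 0.8, 0.1, 0.0])   # mm per 15 min
ensembles = rainfall_error_realisations(radar_rain, n_realisations=100,
                                        bias_std=0.1, error_std=0.2, lag1_corr=0.6)
```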

  12. Global tropospheric ozone modeling: Quantifying errors due to grid resolution

    OpenAIRE

    Wild, Oliver; Prather, Michael J.

    2006-01-01

    Ozone production in global chemical models is dependent on model resolution because ozone chemistry is inherently nonlinear, the timescales for chemical production are short, and precursors are artificially distributed over the spatial scale of the model grid. In this study we examine the sensitivity of ozone, its precursors, and its production to resolution by running a global chemical transport model at four different resolutions between T21 (5.6° × 5.6°) and T106 (1.1° × 1.1°) and by quant...

  13. Modeling data revisions : Measurement error and dynamics of "true" values

    NARCIS (Netherlands)

    Jacobs, Jan P. A. M.; van Norden, Simon

    2011-01-01

    Policy makers must base their decisions on preliminary and partially revised data of varying reliability. Realistic modeling of data revisions is required to guide decision makers in their assessment of current and future conditions. This paper provides a new framework with which to model data revis

  14. Modeling data revisions : Measurement error and dynamics of "true" values

    NARCIS (Netherlands)

    Jacobs, Jan P. A. M.; van Norden, Simon

    2011-01-01

    Policy makers must base their decisions on preliminary and partially revised data of varying reliability. Realistic modeling of data revisions is required to guide decision makers in their assessment of current and future conditions. This paper provides a new framework with which to model data revis

  15. On the importance of measurement error correlations in data assimilation for integrated hydrological models

    Science.gov (United States)

    Camporese, Matteo; Botto, Anna

    2017-04-01

    Data assimilation is becoming increasingly popular in hydrological and earth system modeling, as it allows us to integrate multisource observation data in modeling predictions and, in doing so, to reduce uncertainty. For this reason, data assimilation has been recently the focus of much attention also for physically-based integrated hydrological models, whereby multiple terrestrial compartments (e.g., snow cover, surface water, groundwater) are solved simultaneously, in an attempt to tackle environmental problems in a holistic approach. Recent examples include the joint assimilation of water table, soil moisture, and river discharge measurements in catchment models of coupled surface-subsurface flow using the ensemble Kalman filter (EnKF). One of the typical assumptions in these studies is that the measurement errors are uncorrelated, whereas in certain situations it is reasonable to believe that some degree of correlation occurs, due for example to the fact that a pair of sensors share the same soil type. The goal of this study is to show if and how the measurement error correlations between different observation data play a significant role on assimilation results in a real-world application of an integrated hydrological model. The model CATHY (CATchment HYdrology) is applied to reproduce the hydrological dynamics observed in an experimental hillslope. The physical model, located in the Department of Civil, Environmental and Architectural Engineering of the University of Padova (Italy), consists of a reinforced concrete box containing a soil prism with maximum height of 3.5 m, length of 6 m, and width of 2 m. The hillslope is equipped with sensors to monitor the pressure head and soil moisture responses to a series of generated rainfall events applied onto a 60 cm thick sand layer overlying a sandy clay soil. The measurement network is completed by two tipping bucket flow gages to measure the two components (subsurface and surface) of the outflow. By collecting

  16. Modeling the probability distribution of positional errors incurred by residential address geocoding

    Directory of Open Access Journals (Sweden)

    Mazumdar Soumya

    2007-01-01

    Full Text Available Abstract Background The assignment of a point-level geocode to subjects' residences is an important data assimilation component of many geographic public health studies. Often, these assignments are made by a method known as automated geocoding, which attempts to match each subject's address to an address-ranged street segment georeferenced within a streetline database and then interpolate the position of the address along that segment. Unfortunately, this process results in positional errors. Our study sought to model the probability distribution of positional errors associated with automated geocoding and E911 geocoding. Results Positional errors were determined for 1423 rural addresses in Carroll County, Iowa as the vector difference between each 100%-matched automated geocode and its true location as determined by orthophoto and parcel information. Errors were also determined for 1449 60%-matched geocodes and 2354 E911 geocodes. Huge (>15 km) outliers occurred among the 60%-matched geocoding errors; outliers occurred for the other two types of geocoding errors also but were much smaller. E911 geocoding was more accurate (median error length = 44 m) than 100%-matched automated geocoding (median error length = 168 m). The empirical distributions of positional errors associated with 100%-matched automated geocoding and E911 geocoding exhibited a distinctive Greek-cross shape and had many other interesting features that were not capable of being fitted adequately by a single bivariate normal or t distribution. However, mixtures of t distributions with two or three components fit the errors very well. Conclusion Mixtures of bivariate t distributions with few components appear to be flexible enough to fit many positional error datasets associated with geocoding, yet parsimonious enough to be feasible for nascent applications of measurement-error methodology to spatial epidemiology.

  17. Robust Modeling of Low-Cost MEMS Sensor Errors in Mobile Devices Using Fast Orthogonal Search

    Directory of Open Access Journals (Sweden)

    M. Tamazin

    2013-01-01

    Full Text Available Accessibility to inertial navigation systems (INS) has been severely limited by cost in the past. The introduction of low-cost microelectromechanical system (MEMS)-based INS, integrated with GPS to provide a reliable positioning solution, has enabled more widespread use in mobile devices. The random errors of the MEMS inertial sensors may deteriorate the overall system accuracy in mobile devices. These errors are modeled stochastically and are included in the error model of the estimation techniques used, such as the Kalman filter or particle filter. A first-order Gauss-Markov model is usually used to describe the stochastic nature of these errors. However, if the autocorrelation sequences of these random components are examined, it can be determined that the first-order Gauss-Markov model is not adequate to describe such stochastic behavior. A robust modeling technique based on fast orthogonal search is introduced to remove MEMS-based inertial sensor errors inside mobile devices that are used for several location-based services. The proposed method is applied to MEMS-based gyroscopes and accelerometers. Results show that the proposed method models low-cost MEMS sensor errors with no need for denoising techniques, using a smaller model order and less computation, outperforming traditional methods by two orders of magnitude.
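
    The diagnostic mentioned above (comparing autocorrelation sequences against the first-order Gauss-Markov assumption) can be illustrated with a short simulation; the fast orthogonal search method itself is not reproduced here, and the parameter values are illustrative:

```python
import numpy as np

def gauss_markov_1st_order(n, tau, sigma, dt, rng):
    """Simulate a first-order Gauss-Markov process, the model conventionally
    used for inertial sensor bias drift (correlation time tau, driving std sigma)."""
    beta = dt / tau
    x = np.zeros(n)
    for k in range(1, n):
        x[k] = (1 - beta) * x[k - 1] + sigma * np.sqrt(2 * beta) * rng.normal()
    return x

def autocorrelation(x, max_lag):
    x = x - x.mean()
    denom = np.dot(x, x)
    return np.array([np.dot(x[:x.size - lag], x[lag:]) / denom for lag in range(max_lag)])

rng = np.random.default_rng(7)
drift = gauss_markov_1st_order(n=20000, tau=300.0, sigma=0.01, dt=0.01, rng=rng)
acf = autocorrelation(drift, max_lag=500)
# A GM1 process has an exponentially decaying ACF; comparing this shape against
# the ACF of recorded MEMS gyro/accelerometer residuals is the check the abstract
# refers to before moving to a richer model such as fast orthogonal search.
```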

  18. A novel multitemporal insar model for joint estimation of deformation rates and orbital errors

    KAUST Repository

    Zhang, Lei

    2014-06-01

    Orbital errors, characterized typically as long-wavelength artifacts, commonly exist in interferometric synthetic aperture radar (InSAR) imagery as a result of inaccurate determination of the sensor state vector. Orbital errors degrade the precision of multitemporal InSAR products (i.e., ground deformation). Although research on orbital error reduction has been ongoing for nearly two decades and several algorithms for reducing the effect of the errors are already in existence, the errors cannot always be corrected efficiently and reliably. We propose a novel model that is able to jointly estimate deformation rates and orbital errors based on the different spatio-temporal characteristics of the two types of signals. The proposed model is able to isolate a long-wavelength ground motion signal from the orbital error even when the two types of signals exhibit similar spatial patterns. The proposed algorithm is efficient and requires no ground control points. In addition, the method is built upon the wrapped phases of interferograms, eliminating the need for phase unwrapping. The performance of the proposed model is validated using both simulated and real data sets. The demo codes of the proposed model are also provided for reference. © 2013 IEEE.

  19. Panel data models extended to spatial error autocorrelation or a spatially lagged dependent variable

    NARCIS (Netherlands)

    Elhorst, J. Paul

    2001-01-01

    This paper surveys panel data models extended to spatial error autocorrelation or a spatially lagged dependent variable. In particular, it focuses on the specification and estimation of four panel data models commonly used in applied research: the fixed effects model, the random effects model, the

  20. Statistical modeling and analysis of the influence of antenna polarization error on received power

    Institute of Scientific and Technical Information of China (English)

    2002-01-01

    The problem of statistical modeling of antenna polarization error is studied and the statistical characteristics of the antenna's received power are analyzed. A novel Stokes-vector-based method is presented to describe the concept of an antenna's polarization purity. A statistical model of the antenna's polarization error in the polarization domain is then built. When an antenna with a polarization error of uniform distribution is illuminated by an arbitrarily polarized incident field, the probability density of the antenna's received power is derived analytically. Finally, a group of curves of the deviation and standard deviation of the received power is plotted numerically.

  1. On the Asymptotic Capacity of Dual-Aperture FSO Systems with a Generalized Pointing Error Model

    KAUST Repository

    Al-Quwaiee, Hessa

    2016-06-28

    Free-space optical (FSO) communication systems are negatively affected by two physical phenomena, namely, scintillation due to atmospheric turbulence and pointing errors. To quantify the effect of these two factors on FSO system performance, we need an effective mathematical model for them. In this paper, we propose and study a generalized pointing error model based on the Beckmann distribution. We then derive a generic expression for the asymptotic capacity of FSO systems under the joint impact of turbulence and generalized pointing error impairments. Finally, the asymptotic channel capacity formula is extended to quantify FSO system performance with selection and switched-and-stay diversity.

  2. Modelling soft error probability in firmware: A case study

    African Journals Online (AJOL)

    A rough and notional schematic of the components involved is supplied in ..... To date, this claim of potential electromagnetic interference is entirely a ... single spike case will illuminate the probabilistic model needed for the bursty case. For.

  3. Compliance Modeling and Error Compensation of a 3-Parallelogram Lightweight Robotic Arm

    DEFF Research Database (Denmark)

    2015-01-01

    This paper presents compliance modeling and error compensation for lightweight robotic arms built with parallelogram linkages, i.e., Π joints. The Cartesian stiffness matrix is derived using the virtual joint method. Based on the developed stiffness model, a method to compensate the compliance error is introduced, illustrated with a 3-parallelogram robot in a pick-and-place application. The results show that this compensation method can effectively improve the operation accuracy.
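
    A minimal sketch of stiffness-based compliance compensation: the predicted deflection under the payload wrench is subtracted from the commanded pose. The stiffness matrix, load, and pose below are hypothetical numbers, not values from the paper:

```python
import numpy as np

# Hypothetical 3x3 Cartesian (translational) stiffness matrix (N/mm) from a
# virtual-joint model at a given arm configuration, and the payload force (N).
K = np.array([[120.0,  5.0,  2.0],
              [  5.0, 90.0,  3.0],
              [  2.0,  3.0, 60.0]])
f_payload = np.array([0.0, 0.0, -20.0])        # gravity load on the end-effector

# Compliance error: deflection delta = K^-1 * f. Compensation shifts the
# commanded pose by the negative of the predicted deflection.
deflection = np.linalg.solve(K, f_payload)
target_pose = np.array([400.0, 0.0, 300.0])    # desired pick position (mm)
commanded_pose = target_pose - deflection
```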

  4. Low Frequency Predictive Skill Despite Structural Instability and Model Error

    Science.gov (United States)

    2014-09-30

    suitable coarse-grained variables is a necessary but not sufficient condition for this predictive skill, and 4 elementary examples are given here...issue in contemporary applied mathematics is the development of simpler dynamical models for a reduced subset of variables in complex high...In this article I developed a new practical framework of creating a stochastically parameterized reduced model for slow variables of complex

  5. Error statistics of hidden Markov model and hidden Boltzmann model results

    Directory of Open Access Journals (Sweden)

    Newberg Lee A

    2009-07-01

    Full Text Available Abstract Background Hidden Markov models and hidden Boltzmann models are employed in computational biology and a variety of other scientific fields for a variety of analyses of sequential data. Whether the associated algorithms are used to compute an actual probability or, more generally, an odds ratio or some other score, a frequent requirement is that the error statistics of a given score be known. What is the chance that random data would achieve that score or better? What is the chance that a real signal would achieve a given score threshold? Results Here we present a novel general approach to estimating these false positive and true positive rates that is significantly more efficient than are existing general approaches. We validate the technique via an implementation within the HMMER 3.0 package, which scans DNA or protein sequence databases for patterns of interest, using a profile-HMM. Conclusion The new approach is faster than general naïve sampling approaches, and more general than other current approaches. It provides an efficient mechanism by which to estimate error statistics for hidden Markov model and hidden Boltzmann model results.

  6. Error statistics of hidden Markov model and hidden Boltzmann model results

    Science.gov (United States)

    Newberg, Lee A

    2009-01-01

    Background Hidden Markov models and hidden Boltzmann models are employed in computational biology and a variety of other scientific fields for a variety of analyses of sequential data. Whether the associated algorithms are used to compute an actual probability or, more generally, an odds ratio or some other score, a frequent requirement is that the error statistics of a given score be known. What is the chance that random data would achieve that score or better? What is the chance that a real signal would achieve a given score threshold? Results Here we present a novel general approach to estimating these false positive and true positive rates that is significantly more efficient than are existing general approaches. We validate the technique via an implementation within the HMMER 3.0 package, which scans DNA or protein sequence databases for patterns of interest, using a profile-HMM. Conclusion The new approach is faster than general naïve sampling approaches, and more general than other current approaches. It provides an efficient mechanism by which to estimate error statistics for hidden Markov model and hidden Boltzmann model results. PMID:19589158

  7. An Enhanced MEMS Error Modeling Approach Based on Nu-Support Vector Regression

    Directory of Open Access Journals (Sweden)

    Deepak Bhatt

    2012-07-01

    Full Text Available Micro Electro Mechanical System (MEMS)-based inertial sensors have made possible the development of civilian land vehicle navigation systems by offering a low-cost solution. However, accurate modeling of the MEMS sensor errors is one of the most challenging tasks in the design of low-cost navigation systems. These sensors exhibit significant errors, such as biases, drift and noise, which are negligible for higher-grade units. Conventional techniques based on the Gauss-Markov model and neural networks have previously been used to model these errors. However, the Gauss-Markov model works unsatisfactorily for MEMS units due to the presence of high inherent sensor errors. On the other hand, modeling the random drift with a Neural Network (NN) is time consuming, thereby affecting its real-time implementation. We overcome these existing drawbacks by developing an enhanced Support Vector Machine (SVM)-based error model. Unlike NNs, SVMs do not suffer from local minimisation or over-fitting problems and deliver a reliable global solution. Experimental results show that the proposed SVM approach reduced the noise standard deviation by 10–35% for gyroscopes and 61–76% for accelerometers. Further, positional error drift under static conditions improved by 41% and 80% in comparison with the NN and GM approaches.
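    The record describes Nu-support-vector regression as the error model. The following sketch shows the general idea on synthetic data using scikit-learn's NuSVR; the drift signal, noise level and hyperparameters are assumptions for illustration only, not the authors' setup.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import NuSVR

rng = np.random.default_rng(1)

# Synthetic stand-in for a MEMS gyro bias record: slow drift plus white noise.
t = np.linspace(0, 3600, 2000)              # seconds
drift = 0.02 * np.sin(t / 600) + 1e-5 * t   # deg/s, illustrative only
observed = drift + rng.normal(0, 0.005, t.size)

# Fit a Nu-SVR that maps time to the drift component of the signal.
X = t.reshape(-1, 1)
model = make_pipeline(StandardScaler(), NuSVR(nu=0.5, C=1.0, gamma="scale"))
model.fit(X, observed)

# Subtracting the predicted drift should leave mostly white noise.
residual = observed - model.predict(X)
print("raw signal std:", round(observed.std(), 4))
print("residual std:  ", round(residual.std(), 4))
```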

  8. Vertical mixing in atmospheric tracer transport models: error characterization and propagation

    Directory of Open Access Journals (Sweden)

    C. Gerbig

    2008-02-01

    Full Text Available Imperfect representation of vertical mixing near the surface in atmospheric transport models leads to uncertainties in modelled tracer mixing ratios. When using the atmosphere as an integrator to derive surface-atmosphere exchange from mixing ratio observations made in the atmospheric boundary layer, this uncertainty has to be quantified and taken into account. A comparison between radiosonde-derived mixing heights and mixing heights derived from ECMWF meteorological data during May–June 2005 in Europe revealed random discrepancies of about 40% for the daytime with insignificant bias errors, and much larger values approaching 100% for nocturnal mixing layers with bias errors also exceeding 50%. The Stochastic Time Inverted Lagrangian Transport (STILT model was used to propagate this uncertainty into CO2 mixing ratio uncertainties, accounting for spatial and temporal error covariance. Average values of 3 ppm were found for the 2 month period, indicating that this represents a large fraction of the overall uncertainty. A pseudo data experiment shows that the error propagation with STILT avoids biases in flux retrievals when applied in inversions. The results indicate that flux inversions employing transport models based on current generation meteorological products have misrepresented an important part of the model error structure likely leading to biases in the estimated mean and uncertainties. We strongly recommend including the solution presented in this work: better, higher resolution atmospheric models, a proper description of correlated random errors, and a modification of the overall sampling strategy.

  9. Fuzzy Neural Network-Based Interacting Multiple Model for Multi-Node Target Tracking Algorithm

    Directory of Open Access Journals (Sweden)

    Baoliang Sun

    2016-11-01

    Full Text Available An interacting multiple model for multi-node target tracking algorithm was proposed based on a fuzzy neural network (FNN) to solve the multi-node target tracking problem of wireless sensor networks (WSNs). Measured error variance was adaptively adjusted during the multiple model interacting output stage using the difference between the theoretical and estimated values of the measured error covariance matrix. The FNN fusion system was established during multi-node fusion to integrate the target state estimates from different nodes and consequently obtain a network target state estimate. The feasibility of the algorithm was verified on a network of nine detection nodes. Experimental results indicated that the proposed algorithm could track a maneuvering target effectively under sensor failure and unknown system measurement errors. The proposed algorithm exhibited great practicability in the multi-node target tracking of WSNs.

  10. Rank-based model selection for multiple ions quantum tomography

    Science.gov (United States)

    Guţă, Mădălin; Kypraios, Theodore; Dryden, Ian

    2012-10-01

    The statistical analysis of measurement data has become a key component of many quantum engineering experiments. As standard full state tomography becomes unfeasible for large dimensional quantum systems, one needs to exploit prior information and the ‘sparsity’ properties of the experimental state in order to reduce the dimensionality of the estimation problem. In this paper we propose model selection as a general principle for finding the simplest, or most parsimonious, explanation of the data, by fitting different models and choosing the estimator with the best trade-off between likelihood fit and model complexity. We apply two well established model selection methods—the Akaike information criterion (AIC) and the Bayesian information criterion (BIC)—to models consisting of states of fixed rank and datasets such as are currently produced in multiple-ion experiments. We test the performance of AIC and BIC on randomly chosen low-rank states of four ions, and study the dependence of the selected rank on the number of measurement repetitions for single-ion states. We then apply the methods to real data from a four-ion experiment aimed at creating a Smolin state of rank 4. By applying the two methods together with the Pearson χ2 test we conclude that the data can be suitably described with a model whose rank is between 7 and 9. Additionally we find that the mean square error of the maximum likelihood estimator for pure states is close to that of the optimal over all possible measurements.
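    A minimal sketch of the AIC/BIC comparison described here, assuming illustrative maximized log-likelihoods and a standard parameter count for a rank-r state in dimension d (both are assumptions, not the paper's numbers):

```python
import numpy as np

def aic(loglik, k):
    """Akaike information criterion (smaller is better)."""
    return 2 * k - 2 * loglik

def bic(loglik, k, n):
    """Bayesian information criterion (smaller is better); n is the sample size."""
    return k * np.log(n) - 2 * loglik

def rank_r_params(r, d=16):
    """Assumed parameter count of a rank-r density matrix in dimension d: 2*d*r - r**2 - 1."""
    return 2 * d * r - r**2 - 1

# Hypothetical maximized log-likelihoods for rank-r candidate models fitted to
# n measurement outcomes of a four-ion (d = 16) state; the values are made up.
n = 5000
loglik_by_rank = {1: -7200.0, 2: -7050.0, 4: -7010.0, 8: -6995.0}

for r, ll in loglik_by_rank.items():
    k = rank_r_params(r)
    print(f"rank {r}: k={k:3d}  AIC={aic(ll, k):9.1f}  BIC={bic(ll, k, n):9.1f}")
```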

  11. Model structural uncertainty quantification and hydrologic parameter and prediction error analysis using airborne electromagnetic data

    DEFF Research Database (Denmark)

    Minsley, B. J.; Christensen, Nikolaj Kruse; Christensen, Steen

    Model structure, or the spatial arrangement of subsurface lithological units, is fundamental to the hydrological behavior of Earth systems. Knowledge of geological model structure is critically important in order to make informed hydrological predictions and management decisions. Model structure is never perfectly known, however, and incorrect assumptions can be a significant source of error when making model predictions. We describe a systematic approach for quantifying model structural uncertainty that is based on the integration of sparse borehole observations and large-scale airborne ... indicator simulation, we produce many realizations of model structure that are consistent with observed datasets and prior knowledge. Given estimates of model structural uncertainty, we incorporate hydrologic observations to evaluate the hydrologic parameter or prediction errors that occur when ...

  12. Mars Entry Atmospheric Data System Modeling, Calibration, and Error Analysis

    Science.gov (United States)

    Karlgaard, Christopher D.; VanNorman, John; Siemers, Paul M.; Schoenenberger, Mark; Munk, Michelle M.

    2014-01-01

    The Mars Science Laboratory (MSL) Entry, Descent, and Landing Instrumentation (MEDLI)/Mars Entry Atmospheric Data System (MEADS) project installed seven pressure ports through the MSL Phenolic Impregnated Carbon Ablator (PICA) heatshield to measure heatshield surface pressures during entry. These measured surface pressures are used to generate estimates of atmospheric quantities based on modeled surface pressure distributions. In particular, the quantities to be estimated from the MEADS pressure measurements include the dynamic pressure, angle of attack, and angle of sideslip. This report describes the calibration of the pressure transducers utilized to reconstruct the atmospheric data and associated uncertainty models, pressure modeling and uncertainty analysis, and system performance results. The results indicate that the MEADS pressure measurement system hardware meets the project requirements.

  13. Optimal control design that accounts for model mismatch errors

    Energy Technology Data Exchange (ETDEWEB)

    Kim, T.J. [Sandia National Labs., Albuquerque, NM (United States); Hull, D.G. [Texas Univ., Austin, TX (United States). Dept. of Aerospace Engineering and Engineering Mechanics

    1995-02-01

    A new technique is presented in this paper that reduces the complexity of state differential equations while accounting for modeling assumptions. The mismatch controls are defined as the differences between the model equations and the true state equations. The performance index of the optimal control problem is formulated with a set of tuning parameters that are user-selected to tune the control solution in order to achieve the best results. Computer simulations demonstrate that the tuned control law outperforms the untuned controller and produces results that are comparable to a numerically-determined, piecewise-linear optimal controller.

  14. On the modelling of excitations in geared systems by transmission errors

    Science.gov (United States)

    Velex, P.; Ajmi, M.

    2006-03-01

    This paper introduces an original theoretical approach to the modelling of pinion-gear excitations valid for three-dimensional models of single-stage geared transmissions. Shape deviations and errors on gears are considered and the associated equations of motion account for time-varying mesh stiffness, and also torsional, flexural and axial couplings. Starting from the instantaneous contact conditions between the teeth, the equations of motion are re-formulated in terms of quasi-static transmission errors under load and no-load transmission errors. The range of application of transmission error-based formulations is analysed and some new equations are proposed which make it possible to introduce rigorously meshing excitations via transmission errors. Using an extended finite element model of a spur and helical gear test rig, the dynamic results from the formulations based on transmission errors are compared with the reference solutions. Both sets of results are found to be in close agreement, thus validating the proposed theory. The paper concludes with a critical analysis of the interests and limitations concerning the concept of transmission errors as excitation terms in gear dynamics.

  15. A Nonlinear Multiparameters Temperature Error Modeling and Compensation of POS Applied in Airborne Remote Sensing System

    Directory of Open Access Journals (Sweden)

    Jianli Li

    2014-01-01

    Full Text Available The position and orientation system (POS) is a key piece of equipment for airborne remote sensing systems, providing high-precision position, velocity, and attitude information for various imaging payloads. Temperature error is the main source affecting the precision of POS. The traditional temperature error model is a linear function of a single temperature parameter, which is not sufficient for the higher accuracy requirements of POS. The traditional compensation method based on a neural network faces serious problems with repeatability error under different temperature conditions. In order to improve the precision and generalization ability of temperature error compensation for POS, a nonlinear multiparameter temperature error modeling and compensation method based on a Bayesian regularization neural network is proposed. The temperature error of POS was analyzed and a nonlinear multiparameter model was established. Bayesian regularization was used as the evaluation criterion, which further optimized the coefficients of the temperature error model. The experimental results show that the proposed method can improve temperature environmental adaptability and precision. The developed POS has been successfully applied in an airborne TSMFTIS remote sensing system for the first time, improving the accuracy of the reconstructed spectrum by 47.99%.

  16. Modeling the Error of the Medtronic Paradigm Veo Enlite Glucose Sensor.

    Science.gov (United States)

    Biagi, Lyvia; Ramkissoon, Charrise M; Facchinetti, Andrea; Leal, Yenny; Vehi, Josep

    2017-06-12

    Continuous glucose monitors (CGMs) are prone to inaccuracy due to time lags, sensor drift, calibration errors, and measurement noise. The aim of this study is to derive the model of the error of the second generation Medtronic Paradigm Veo Enlite (ENL) sensor and compare it with the Dexcom SEVEN PLUS (7P), G4 PLATINUM (G4P), and advanced G4 for Artificial Pancreas studies (G4AP) systems. An enhanced methodology to a previously employed technique was utilized to dissect the sensor error into several components. The dataset used included 37 inpatient sessions in 10 subjects with type 1 diabetes (T1D), in which CGMs were worn in parallel and blood glucose (BG) samples were analyzed every 15 ± 5 min. Calibration error and sensor drift of the ENL sensor were best described by a linear relationship related to the gain and offset. The mean time lag estimated by the model is 9.4 ± 6.5 min. The overall average mean absolute relative difference (MARD) of the ENL sensor was 11.68 ± 5.07%. Calibration error had the highest contribution to total error in the ENL sensor. This was also reported in the 7P, G4P, and G4AP. The model of the ENL sensor error will be useful to test the in silico performance of CGM-based applications, i.e., the artificial pancreas, employing this kind of sensor.

  17. Modeling the Error of the Medtronic Paradigm Veo Enlite Glucose Sensor

    Directory of Open Access Journals (Sweden)

    Lyvia Biagi

    2017-06-01

    Full Text Available Continuous glucose monitors (CGMs) are prone to inaccuracy due to time lags, sensor drift, calibration errors, and measurement noise. The aim of this study is to derive the model of the error of the second generation Medtronic Paradigm Veo Enlite (ENL) sensor and compare it with the Dexcom SEVEN PLUS (7P), G4 PLATINUM (G4P), and advanced G4 for Artificial Pancreas studies (G4AP) systems. An enhanced methodology to a previously employed technique was utilized to dissect the sensor error into several components. The dataset used included 37 inpatient sessions in 10 subjects with type 1 diabetes (T1D), in which CGMs were worn in parallel and blood glucose (BG) samples were analyzed every 15 ± 5 min. Calibration error and sensor drift of the ENL sensor were best described by a linear relationship related to the gain and offset. The mean time lag estimated by the model is 9.4 ± 6.5 min. The overall average mean absolute relative difference (MARD) of the ENL sensor was 11.68 ± 5.07%. Calibration error had the highest contribution to total error in the ENL sensor. This was also reported in the 7P, G4P, and G4AP. The model of the ENL sensor error will be useful to test the in silico performance of CGM-based applications, i.e., the artificial pancreas, employing this kind of sensor.

  18. Modeling And Analysis Of The Surface Roughness And Geometrical Error Using Taguchi And Response Surface Methodology

    Directory of Open Access Journals (Sweden)

    DR.S.C.JAYSWAL

    2011-07-01

    Full Text Available This experimental work presents a technique to determine better surface quality by controlling the surface roughness and geometrical error. In machining operations, achieving the desired surface quality features of the machined product is a challenging job, because these quality features are highly correlated and are expected to be influenced directly or indirectly by the direct effects of process parameters or their interactive effects. Thus, four input process parameters, namely spindle speed, depth of cut, feed rate, and stepover, have been selected to minimize the surface roughness and geometrical error simultaneously by using the robust design concept of the Taguchi L9(3^4) method coupled with the response surface concept. Mathematical models for surface roughness and geometrical error were obtained from response surface analysis to predict values of surface roughness and geometrical error. S/N ratio and ANOVA analyses were also performed to identify the significant parameters influencing surface roughness and geometrical error.

  19. A hierarchical Bayes error correction model to explain dynamic effects

    NARCIS (Netherlands)

    D. Fok (Dennis); C. Horváth (Csilla); R. Paap (Richard); Ph.H.B.F. Franses (Philip Hans)

    2004-01-01

    textabstractFor promotional planning and market segmentation it is important to understand the short-run and long-run effects of the marketing mix on category and brand sales. In this paper we put forward a sales response model to explain the differences in short-run and long-run effects of promotio

  20. A MULTIPLE INTELLIGENT AGENT SYSTEM FOR CREDIT RISK PREDICTION VIA AN OPTIMIZATION OF LOCALIZED GENERALIZATION ERROR WITH DIVERSITY

    Institute of Scientific and Technical Information of China (English)

    Daniel S. YEUNG; Wing W. Y. NG; Aki P. F. CHAN; Patrick P. K. CHAN; Michael FIRTH; Eric C. C. TSANG

    2007-01-01

    Company bankruptcies cost billions of dollars in losses to banks each year. Thus credit risk prediction is a critical part of a bank's loan approval decision process. Traditional financial models for credit risk prediction are no longer adequate for describing today's complex relationship between the financial health and potential bankruptcy of a company. In this work, a multiple classifier system (embedded in a multiple intelligent agent system) is proposed to predict the financial health of a company. In our model, each individual agent (classifier) makes a prediction on the likelihood of credit risk based on only partial information of the company. Each of the agents is an expert, but has limited knowledge (represented by features) about the company. The decisions of all agents are combined together to form a final credit risk prediction. Experiments show that our model out-performs other existing methods using the benchmarking Compustat American Corporations dataset.

  1. Systematic evaluation of autoregressive error models as post-processors for a probabilistic streamflow forecast system

    Science.gov (United States)

    Morawietz, Martin; Xu, Chong-Yu; Gottschalk, Lars; Tallaksen, Lena

    2010-05-01

    A post-processor is necessary in a probabilistic streamflow forecast system to account for the hydrologic uncertainty introduced by the hydrological model. In this study, different variants of an autoregressive error model that can be used as a post-processor for short- to medium-range streamflow forecasts are evaluated. The deterministic HBV model is used to form the basis for the streamflow forecast. The general structure of the error models used as post-processors is a first-order autoregressive model of the form d_t = α·d_{t-1} + σ·ε_t, where d_t is the model error (observed minus simulated streamflow) at time t, α and σ are the parameters of the error model, and ε_t is the residual error described through a probability distribution. The following aspects are investigated: (1) use of constant parameters α and σ versus the use of state-dependent parameters. The state-dependent parameters vary depending on the states of temperature, precipitation, snow water equivalent and simulated streamflow. (2) Use of a standard normal distribution for ε_t versus use of an empirical distribution function constituted through the normalized residuals of the error model in the calibration period. (3) Comparison of two different transformations, i.e. logarithmic versus square root, that are applied to the streamflow data before the error model is applied. The reason for applying a transformation is to make the residuals of the error model homoscedastic over the range of streamflow values of different magnitudes. Through combination of these three characteristics, eight variants of the autoregressive post-processor are generated. These are calibrated and validated in 55 catchments throughout Norway. The discrete ranked probability score with 99 flow percentiles as standardized thresholds is used for evaluation. In addition, a non-parametric bootstrap is used to construct confidence intervals and evaluate the significance of the results. The main
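    A minimal sketch of the first-order autoregressive post-processor d_t = α·d_{t-1} + σ·ε_t described above, fitted to synthetic calibration errors and used to dress a deterministic simulation with an error ensemble; the data and parameter values are assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)

def fit_ar1(errors):
    """Least-squares fit of d_t = alpha * d_{t-1} + sigma * eps_t."""
    d_prev, d_curr = errors[:-1], errors[1:]
    alpha = np.dot(d_prev, d_curr) / np.dot(d_prev, d_prev)
    sigma = np.std(d_curr - alpha * d_prev, ddof=1)
    return alpha, sigma

# Illustrative calibration-period errors (observed minus simulated streamflow,
# on transformed flows); given a little persistence so the AR(1) term matters.
white = rng.normal(0, 0.3, 500)
calib_errors = np.convolve(white, [0.6, 0.4], mode="same")

alpha, sigma = fit_ar1(calib_errors)

# Probabilistic one-step-ahead forecast: deterministic simulation plus an AR(1) error ensemble.
sim_next = 2.1                                  # hypothetical simulated (transformed) flow
last_error = calib_errors[-1]
ensemble = sim_next + alpha * last_error + sigma * rng.standard_normal(1000)
print(f"alpha={alpha:.2f}, sigma={sigma:.2f}")
print("forecast 5-95% band:", np.quantile(ensemble, [0.05, 0.95]))
```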

  2. Drivers of coupled model ENSO error dynamics and the spring predictability barrier

    Science.gov (United States)

    Larson, Sarah M.; Kirtman, Ben P.

    2017-06-01

    Despite recent improvements in ENSO simulations, ENSO predictions ultimately remain limited by error growth and model inadequacies. Determining the accompanying dynamical processes that drive the growth of certain types of errors may help the community better recognize which error sources provide an intrinsic limit to predictability. This study applies a dynamical analysis to previously developed CCSM4 error ensemble experiments that have been used to model noise-driven error growth. Analysis reveals that ENSO-independent error growth is instigated via a coupled instability mechanism. Daily error fields indicate that persistent stochastic zonal wind stress perturbations (τ'_x) near the equatorial dateline activate the coupled instability, first driving local SST and anomalous zonal current changes that then induce upwelling anomalies and a clear thermocline response. In particular, March presents a window of opportunity for stochastic τ'_x to impose a lasting influence on the evolution of eastern Pacific SST through December, suggesting that stochastic τ'_x is an important contributor to the spring predictability barrier. Stochastic winds occurring in other months only temporarily affect eastern Pacific SST for 2-3 months. Comparison of a control simulation with an ENSO cycle and the ENSO-independent error ensemble experiments reveals that once the instability is initiated, the subsequent error growth is modulated via an ENSO-like mechanism, namely the seasonal strength of the Bjerknes feedback. Furthermore, unlike ENSO events that exhibit growth through the fall, the growth of ENSO-independent SST errors terminates once the seasonal strength of the Bjerknes feedback weakens in fall. Results imply that the heat content supplied by the subsurface precursor preceding the onset of an ENSO event is paramount to maintaining the growth of the instability (or event) through fall.

  3. Drivers of coupled model ENSO error dynamics and the spring predictability barrier

    Science.gov (United States)

    Larson, Sarah M.; Kirtman, Ben P.

    2016-07-01

    Despite recent improvements in ENSO simulations, ENSO predictions ultimately remain limited by error growth and model inadequacies. Determining the accompanying dynamical processes that drive the growth of certain types of errors may help the community better recognize which error sources provide an intrinsic limit to predictability. This study applies a dynamical analysis to previously developed CCSM4 error ensemble experiments that have been used to model noise-driven error growth. Analysis reveals that ENSO-independent error growth is instigated via a coupled instability mechanism. Daily error fields indicate that persistent stochastic zonal wind stress perturbations (τ'_x) near the equatorial dateline activate the coupled instability, first driving local SST and anomalous zonal current changes that then induce upwelling anomalies and a clear thermocline response. In particular, March presents a window of opportunity for stochastic τ'_x to impose a lasting influence on the evolution of eastern Pacific SST through December, suggesting that stochastic τ'_x is an important contributor to the spring predictability barrier. Stochastic winds occurring in other months only temporarily affect eastern Pacific SST for 2-3 months. Comparison of a control simulation with an ENSO cycle and the ENSO-independent error ensemble experiments reveals that once the instability is initiated, the subsequent error growth is modulated via an ENSO-like mechanism, namely the seasonal strength of the Bjerknes feedback. Furthermore, unlike ENSO events that exhibit growth through the fall, the growth of ENSO-independent SST errors terminates once the seasonal strength of the Bjerknes feedback weakens in fall. Results imply that the heat content supplied by the subsurface precursor preceding the onset of an ENSO event is paramount to maintaining the growth of the instability (or event) through fall.

  4. Error and Uncertainty Analysis for Ecological Modeling and Simulation

    Science.gov (United States)

    2001-12-01

    ... Delfiner, 1999; Goovaerts, 1997; Journel and Huijbregts, 1978). These methods are based on the spatial variability theory, that is, spatial ... mathematics on the sequential Gaussian simulation, the reader is referred to Chiles and Delfiner (1999) and Goovaerts (1997).

  5. Experiments in Error Propagation within Hierarchal Combat Models

    Science.gov (United States)

    2015-09-01

    and variances of Blue MTTK, Red MTTK, and P[Blue Wins] by Experimental Design are statistically different (Wackerly, Mendenhall III and Schaeffer 2008). Although the data is not normally distributed, the t-test is robust to non-normality (Wackerly, Mendenhall III and Schaeffer 2008). There is...this is handled by transforming the predicted values with a natural logarithm (Wackerly, Mendenhall III and Schaeffer 2008). The model considers

  6. A Unified Process Model of Syntactic and Semantic Error Recovery in Sentence Understanding

    CERN Document Server

    Holbrook, J K; Mahesh, K; Holbrook, Jennifer K.; Eiselt, Kurt P.; Mahesh, Kavi

    1994-01-01

    The development of models of human sentence processing has traditionally followed one of two paths. Either the model posited a sequence of processing modules, each with its own task-specific knowledge (e.g., syntax and semantics), or it posited a single processor utilizing different types of knowledge inextricably integrated into a monolithic knowledge base. Our previous work in modeling the sentence processor resulted in a model in which different processing modules used separate knowledge sources but operated in parallel to arrive at the interpretation of a sentence. One highlight of this model is that it offered an explanation of how the sentence processor might recover from an error in choosing the meaning of an ambiguous word. Recent experimental work by Laurie Stowe strongly suggests that the human sentence processor deals with syntactic error recovery using a mechanism very much like that proposed by our model of semantic error recovery. Another way to interpret Stowe's finding is this: the human sente...

  7. Estimating Model Prediction Error: Should You Treat Predictions as Fixed or Random?

    Science.gov (United States)

    Wallach, Daniel; Thorburn, Peter; Asseng, Senthold; Challinor, Andrew J.; Ewert, Frank; Jones, James W.; Rotter, Reimund; Ruane, Alexander

    2016-01-01

    Crop models are important tools for impact assessment of climate change, as well as for exploring management options under current climate. It is essential to evaluate the uncertainty associated with predictions of these models. We compare two criteria of prediction error: MSEP_fixed, which evaluates the mean squared error of prediction for a model with fixed structure, parameters and inputs, and MSEP_uncertain(X), which evaluates the mean squared error averaged over the distributions of model structure, inputs and parameters. Comparison of model outputs with data can be used to estimate the former. The latter has a squared bias term, which can be estimated using hindcasts, and a model variance term, which can be estimated from a simulation experiment. The separate contributions to MSEP_uncertain(X) can be estimated using a random effects ANOVA. It is argued that MSEP_uncertain(X) is the more informative uncertainty criterion, because it is specific to each prediction situation.

  8. A Meta-Meta-Analysis: Empirical Review of Statistical Power, Type I Error Rates, Effect Sizes, and Model Selection of Meta-Analyses Published in Psychology

    Science.gov (United States)

    Cafri, Guy; Kromrey, Jeffrey D.; Brannick, Michael T.

    2010-01-01

    This article uses meta-analyses published in "Psychological Bulletin" from 1995 to 2005 to describe meta-analyses in psychology, including examination of statistical power, Type I errors resulting from multiple comparisons, and model choice. Retrospective power estimates indicated that univariate categorical and continuous moderators, individual…

  9. Empirical likelihood-based dimension reduction inference for linear error-in-responses models with validation study

    Institute of Scientific and Technical Information of China (English)

    2004-01-01

    with validation study, J. Nonparametric Statistics, 1995, 4: 365-394.[15]Wang, Q. H., Estimation of partial linear error-in-variables model, Jourmal of Multivariate Analysis, 1999, 69:30-64.[16]Wang, Q. H., Estimation of linear error-in-covariables models with validation data under random censorship,Journal of Multivariate Analysis, 2000, 74: 245-266.[17]Wang, Q. H., Estimation of partial linear error-in-response models with validation data, Ann. Inst. Statist. Math.,2003, 55(1): 21~39[18]Wang, Q. H., Dimension reduction in partly linear error-in-response model error-in-response models with validation data, Journal of Multivariate Analysis, 2003, 85(2): 234-252.[19]Wang, Q. H., Rao, J. N. K., Empirical likelihood-based in linear errors-in-covariables models with validation data, Biometrika, 2002, 89: 345-358.[20]Owen, A., Empirical likelihood for linear models, Ann. Statist., 1991, 19: 1725-1747.[21]Li, K. C., Sliced inverse regression for dimension reduction (with discussion), J. Amer. Statist. Assoc., 1991,86: 337-342.[22]Duan, N., Li, K. C., Slicing regression: a link-free regression method, Ann. Statist. 1991, 19, 505-530.[23]Zhu, L. X., Fang, K. T, Asymptotics for kernel estimator of sliced inverse regression, Ann. Statist., 1996, 24:1053-1068.[24]Carroll, R. J., Li, K. C., Errors in variables for nonlinear regression: dimension reduction and data visualization,J. Amer. Statist. Assoc., 1992, 87: 1040-1050.[25]Rosner, B., Willett, W. C., Spiegelman, D., Correction of logistic regression relative risk estimates and confidence intervals for systematic within-person measurement error, Statist. Med., 1989, 8: 1075-1093.[26]H(a)rdle, W., Stoke, T M., Investigating smooth multiple regression by the method of average derivatives, J. Amer.Statist. Assoc., 1989, 84: 986-995.

  10. Asteroid Models from Multiple Data Sources

    CERN Document Server

    Durech, J; Delbo, M; Kaasalainen, M; Viikinkoski, M

    2015-01-01

    In the past decade, hundreds of asteroid shape models have been derived using the lightcurve inversion method. At the same time, a new framework of 3-D shape modeling based on the combined analysis of widely different data sources such as optical lightcurves, disk-resolved images, stellar occultation timings, mid-infrared thermal radiometry, optical interferometry, and radar delay-Doppler data, has been developed. This multi-data approach allows the determination of most of the physical and surface properties of asteroids in a single, coherent inversion, with spectacular results. We review the main results of asteroid lightcurve inversion and also recent advances in multi-data modeling. We show that models based on remote sensing data were confirmed by spacecraft encounters with asteroids, and we discuss how the multiplication of highly detailed 3-D models will help to refine our general knowledge of the asteroid population. The physical and surface properties of asteroids, i.e., their spin, 3-D shape, densit...

  11. Molecular Code Division Multiple Access: Gaussian Mixture Modeling

    Science.gov (United States)

    Zamiri-Jafarian, Yeganeh

    Communication between nano-devices is an emerging research field in nanotechnology. Molecular Communication (MC), which is a bio-inspired paradigm, is a promising technique for communication in nano-networks. In MC, molecules are administered to exchange information among nano-devices. Due to the nature of molecular signals, traditional communication methods cannot be directly applied to the MC framework. The objective of this thesis is to present novel diffusion-based MC methods for multiple nano-devices communicating with each other in the same environment. A new channel model and detection technique, along with a molecular-based access method, are proposed here for communication between asynchronous users. In this work, the received molecular signal is modeled as a Gaussian mixture distribution when the MC system undergoes Brownian noise and inter-symbol interference (ISI). This novel approach provides a suitable model for the diffusion-based MC system. Using the proposed Gaussian mixture model, a simple receiver is designed by minimizing the error probability. To determine an optimum detection threshold, an iterative algorithm is derived which minimizes a linear approximation of the error probability function. Also, a memory-based receiver is proposed to improve the performance of the MC system by considering previously detected symbols in obtaining the threshold value. Numerical evaluations reveal that theoretical analysis of the bit error rate (BER) performance based on the Gaussian mixture model matches simulation results very closely. Furthermore, in this thesis, molecular code division multiple access (MCDMA) is proposed to overcome the inter-user interference (IUI) caused by asynchronous users communicating in a shared propagation environment. Based on the selected molecular codes, a chip detection scheme with an adaptable threshold value is developed for the MCDMA system when the proposed Gaussian mixture model is considered. Results indicate that the

  12. The multiple process model of goal-directed reaching revisited.

    Science.gov (United States)

    Elliott, Digby; Lyons, James; Hayes, Spencer J; Burkitt, James J; Roberts, James W; Grierson, Lawrence E M; Hansen, Steve; Bennett, Simon J

    2017-01-01

    Recently our group forwarded a model of speed-accuracy relations in goal-directed reaching. A fundamental feature of our multiple process model was the distinction between two types of online regulation: impulse control and limb-target control. Impulse control begins during the initial stages of the movement trajectory and involves a comparison of actual limb velocity and direction to an internal representation of expectations about the limb trajectory. Limb-target control involves discrete error-reduction based on the relative positions of the limb and the target late in the movement. Our model also considers the role of eye movements, practice, energy optimization and strategic behavior in limb control. Here, we review recent work conducted to test specific aspects of our model. As well, we consider research not fully incorporated into our earlier contribution. We conclude that a slightly modified and expanded version of our model, that includes crosstalk between the two forms of online regulation, does an excellent job of explaining speed, accuracy, and energy optimization in goal-directed reaching.

  13. Mixtures of multiplicative cascade models in geochemistry

    Directory of Open Access Journals (Sweden)

    F. P. Agterberg

    2007-05-01

    Full Text Available Multifractal modeling of geochemical map data can help to explain the nature of frequency distributions of element concentration values for small rock samples and their spatial covariance structure. Useful frequency distribution models are the lognormal and Pareto distributions, which plot as straight lines on logarithmic probability and log-log paper, respectively. The model of de Wijs is a simple multiplicative cascade resulting in a discrete logbinomial distribution that closely approximates the lognormal. In this model, smaller blocks resulting from dividing larger blocks into parts have concentration values with constant ratios that are scale-independent. The approach can be modified by adopting random variables for these ratios. Other modifications include a single cascade model with ratio parameters that depend on the magnitude of the concentration value. The Turcotte model, which is another variant of the model of de Wijs, results in a Pareto distribution. Often a single straight line on logarithmic probability or log-log paper does not provide a good fit to observed data and two or more distributions should be fitted. For example, geochemical background and anomalies (extremely high values) have separate frequency distributions for concentration values and for local singularity coefficients. Mixtures of distributions can be simulated by adding the results of separate cascade models. Regardless of the properties of background, an unbiased estimate can be obtained of the parameter of the Pareto distribution characterizing anomalies in the upper tail of the element concentration frequency distribution or the lower tail of the local singularity distribution. Computer simulation experiments and practical examples are used to illustrate the approach.
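    The model of de Wijs mentioned in this record is easy to simulate: each block is repeatedly split into two halves whose concentrations are (1+d) and (1−d) times the parent value. The sketch below is an illustrative one-dimensional version with an assumed dispersion coefficient d:

```python
import numpy as np

rng = np.random.default_rng(3)

def de_wijs_cascade(levels, d=0.4):
    """1-D multiplicative cascade of de Wijs: each cell of concentration c is split
    into two halves with concentrations (1 + d) * c and (1 - d) * c, the order of
    the two halves being chosen at random at every split."""
    conc = np.array([1.0])
    for _ in range(levels):
        hi, lo = conc * (1 + d), conc * (1 - d)
        swap = rng.random(conc.size) < 0.5
        left = np.where(swap, lo, hi)
        right = np.where(swap, hi, lo)
        conc = np.column_stack([left, right]).ravel()
    return conc

values = de_wijs_cascade(levels=14)          # 2**14 cells
print("cells:", values.size, " overall mean:", round(values.mean(), 3))
# The cell values follow a discrete logbinomial distribution, which for many
# subdivision levels closely approximates a lognormal.
print("std of log-concentration:", round(np.log(values).std(), 3))
```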

  14. A Sandwich-Type Standard Error Estimator of SEM Models with Multivariate Time Series

    Science.gov (United States)

    Zhang, Guangjian; Chow, Sy-Miin; Ong, Anthony D.

    2011-01-01

    Structural equation models are increasingly used as a modeling tool for multivariate time series data in the social and behavioral sciences. Standard error estimators of SEM models, originally developed for independent data, require modifications to accommodate the fact that time series data are inherently dependent. In this article, we extend a…

  15. A Sandwich-Type Standard Error Estimator of SEM Models with Multivariate Time Series

    Science.gov (United States)

    Zhang, Guangjian; Chow, Sy-Miin; Ong, Anthony D.

    2011-01-01

    Structural equation models are increasingly used as a modeling tool for multivariate time series data in the social and behavioral sciences. Standard error estimators of SEM models, originally developed for independent data, require modifications to accommodate the fact that time series data are inherently dependent. In this article, we extend a…

  16. Error detection in GPS observations by means of Multi-process models

    DEFF Research Database (Denmark)

    Thomsen, Henrik F.

    2001-01-01

    The main purpose of this article is to present the idea of using Multi-process models as a method of detecting errors in GPS observations. The theory behind Multi-process models, and double-differenced phase observations in GPS, is presented briefly. It is shown how to model cycle slips in the Multi...

  17. Bayesian modeling of measurement error in predictor variables using item response theory

    NARCIS (Netherlands)

    Fox, Jean-Paul; Glas, Cees A.W.

    2003-01-01

    It is shown that measurement error in predictor variables can be modeled using item response theory (IRT). The predictor variables, which may be defined at any level of a hierarchical regression model, are treated as latent variables. The normal ogive model is used to describe the relation between t

  18. Neighboring extremal optimal control design including model mismatch errors

    Energy Technology Data Exchange (ETDEWEB)

    Kim, T.J. [Sandia National Labs., Albuquerque, NM (United States); Hull, D.G. [Texas Univ., Austin, TX (United States). Dept. of Aerospace Engineering and Engineering Mechanics

    1994-11-01

    The mismatch control technique that is used to simplify model equations of motion in order to determine analytic optimal control laws is extended using neighboring extremal theory. The first variation optimal control equations are linearized about the extremal path to account for perturbations in the initial state and the final constraint manifold. A numerical example demonstrates that the tuning procedure inherent in the mismatch control method increases the performance of the controls to the level of a numerically-determined piecewise-linear controller.

  19. Effect of assay measurement error on parameter estimation in concentration-QTc interval modeling.

    Science.gov (United States)

    Bonate, Peter L

    2013-01-01

    Linear mixed-effects models (LMEMs) of concentration-double-delta QTc intervals (QTc intervals corrected for placebo and baseline effects) assume that the concentration measurement error is negligible, which is an incorrect assumption. Previous studies have shown in linear models that independent variable error can attenuate the slope estimate with a corresponding increase in the intercept. Monte Carlo simulation was used to examine the impact of assay measurement error (AME) on the parameter estimates of an LMEM and a nonlinear MEM (NMEM) concentration-ddQTc interval model from a 'typical' thorough QT study. For the LMEM, the type I error rate was unaffected by assay measurement error. Significant slope attenuation (>10%) occurred when the AME exceeded 40%, independent of the sample size. Increasing AME also decreased the between-subject variance of the slope, increased the residual variance, and had no effect on the between-subject variance of the intercept. For a typical analytical assay having an assay measurement error of less than 15%, the relative bias in the estimates of the model parameters and variance components was less than 15% in all cases. The NMEM appeared to be more robust to AME, as most parameters were unaffected by measurement error. Monte Carlo simulation was then used to determine whether the simulation-extrapolation method of parameter bias correction could be applied to cases of large AME in LMEMs. For analytical assays with large AME (>30%), the simulation-extrapolation method could correct biased model parameter estimates to near-unbiased levels.
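    The slope-attenuation effect discussed in this record can be reproduced with a small Monte Carlo experiment. The sketch below uses a plain linear regression and multiplicative assay error on the concentrations; the sample sizes, error levels and noise values are assumptions, not the study's simulation settings:

```python
import numpy as np

rng = np.random.default_rng(4)

def mean_fitted_slope(true_slope=1.0, ame_cv=0.4, n_subjects=40, n_reps=500):
    """Average fitted slope when the concentration (x) carries multiplicative assay error."""
    slopes = []
    for _ in range(n_reps):
        conc = rng.uniform(10, 100, n_subjects)                       # true concentrations
        ddqtc = true_slope * conc + rng.normal(0, 5, n_subjects)      # response with noise
        measured = conc * (1 + rng.normal(0, ame_cv, n_subjects))     # assay measurement error
        slopes.append(np.polyfit(measured, ddqtc, 1)[0])
    return float(np.mean(slopes))

for cv in (0.05, 0.15, 0.40):
    print(f"AME {cv:4.0%}: mean estimated slope = {mean_fitted_slope(ame_cv=cv):.3f}  (true 1.0)")
```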

  20. Analysis of an incomplete longitudinal composite variable using a marginalized random effects model and multiple imputation.

    Science.gov (United States)

    Gosho, Masahiko; Maruo, Kazushi; Ishii, Ryota; Hirakawa, Akihiro

    2016-11-16

    The total score, which is calculated as the sum of scores in multiple items or questions, is repeatedly measured in longitudinal clinical studies. A mixed effects model for repeated measures method is often used to analyze these data; however, if one or more individual items are not measured, the method cannot be directly applied to the total score. We develop two simple and interpretable procedures that infer fixed effects for a longitudinal continuous composite variable. These procedures consider that the items that compose the total score are multivariate longitudinal continuous data and, simultaneously, handle subject-level and item-level missing data. One procedure is based on a multivariate marginalized random effects model with a multiple of Kronecker product covariance matrices for serial time dependence and correlation among items. The other procedure is based on a multiple imputation approach with a multivariate normal model. In terms of the type-1 error rate and the bias of treatment effect in total score, the marginalized random effects model and multiple imputation procedures performed better than the standard mixed effects model for repeated measures analysis with listwise deletion and single imputations for handling item-level missing data. In particular, the mixed effects model for repeated measures with listwise deletion resulted in substantial inflation of the type-1 error rate. The marginalized random effects model and multiple imputation methods provide for a more efficient analysis by fully utilizing the partially available data, compared to the mixed effects model for repeated measures method with listwise deletion.

  1. Parametric bootstrap methods for testing multiplicative terms in GGE and AMMI models.

    Science.gov (United States)

    Forkman, Johannes; Piepho, Hans-Peter

    2014-09-01

    The genotype main effects and genotype-by-environment interaction effects (GGE) model and the additive main effects and multiplicative interaction (AMMI) model are two common models for analysis of genotype-by-environment data. These models are frequently used by agronomists, plant breeders, geneticists and statisticians for analysis of multi-environment trials. In such trials, a set of genotypes, for example, crop cultivars, are compared across a range of environments, for example, locations. The GGE and AMMI models use singular value decomposition to partition genotype-by-environment interaction into an ordered sum of multiplicative terms. This article deals with the problem of testing the significance of these multiplicative terms in order to decide how many terms to retain in the final model. We propose parametric bootstrap methods for this problem. Models with fixed main effects, fixed multiplicative terms and random normally distributed errors are considered. Two methods are derived: a full and a simple parametric bootstrap method. These are compared with the alternatives of using approximate F-tests and cross-validation. In a simulation study based on four multi-environment trials, both bootstrap methods performed well with regard to Type I error rate and power. The simple parametric bootstrap method is particularly easy to use, since it only involves repeated sampling of standard normally distributed values. This method is recommended for selecting the number of multiplicative terms in GGE and AMMI models. The proposed methods can also be used for testing components in principal component analysis.
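    The multiplicative terms in question are the singular-value-decomposition components of the double-centred genotype-by-environment table. The sketch below illustrates a simplified version of the simple parametric bootstrap idea (repeated sampling of standard normal matrices); it is an assumption-laden illustration, not the exact procedure of the paper:

```python
import numpy as np

rng = np.random.default_rng(5)

def first_term_share(matrix):
    """Share of the interaction sum of squares captured by the first SVD (multiplicative) term."""
    s = np.linalg.svd(matrix, compute_uv=False)
    return s[0] ** 2 / np.sum(s ** 2)

def bootstrap_first_term(ge_table, n_boot=2000):
    """Simplified sketch: double-centre the genotype-by-environment table, then compare the
    observed first-term share with shares from standard-normal matrices of the same size."""
    z = ge_table - ge_table.mean(axis=0) - ge_table.mean(axis=1, keepdims=True) + ge_table.mean()
    observed = first_term_share(z)
    g, e = z.shape
    null = np.array([first_term_share(rng.standard_normal((g, e))) for _ in range(n_boot)])
    return observed, float(np.mean(null >= observed))

# Hypothetical table of mean yields: 8 genotypes x 5 environments with a genuine
# multiplicative interaction component added.
table = rng.normal(5.0, 0.5, (8, 5)) + np.outer(rng.normal(0, 0.8, 8), rng.normal(0, 0.6, 5))
share, p = bootstrap_first_term(table)
print(f"first-term share = {share:.2f}, bootstrap p-value = {p:.3f}")
```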

  2. Correcting for Measurement Error in Time-Varying Covariates in Marginal Structural Models.

    Science.gov (United States)

    Kyle, Ryan P; Moodie, Erica E M; Klein, Marina B; Abrahamowicz, Michał

    2016-08-01

    Unbiased estimation of causal parameters from marginal structural models (MSMs) requires a fundamental assumption of no unmeasured confounding. Unfortunately, the time-varying covariates used to obtain inverse probability weights are often error-prone. Although substantial measurement error in important confounders is known to undermine control of confounders in conventional unweighted regression models, this issue has received comparatively limited attention in the MSM literature. Here we propose a novel application of the simulation-extrapolation (SIMEX) procedure to address measurement error in time-varying covariates, and we compare 2 approaches. The direct approach to SIMEX-based correction targets outcome model parameters, while the indirect approach corrects the weights estimated using the exposure model. We assess the performance of the proposed methods in simulations under different clinically plausible assumptions. The simulations demonstrate that measurement errors in time-dependent covariates may induce substantial bias in MSM estimators of causal effects of time-varying exposures, and that both proposed SIMEX approaches yield practically unbiased estimates in scenarios featuring low-to-moderate degrees of error. We illustrate the proposed approach in a simple analysis of the relationship between sustained virological response and liver fibrosis progression among persons infected with hepatitis C virus, while accounting for measurement error in γ-glutamyltransferase, using data collected in the Canadian Co-infection Cohort Study from 2003 to 2014.
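    As a generic illustration of the SIMEX idea used in this record (simulate extra measurement error at increasing levels, then extrapolate back to the no-error case), the sketch below corrects the slope of a simple linear regression; the data, error variance and extrapolation function are assumptions:

```python
import numpy as np

rng = np.random.default_rng(6)

def simex_slope(w, y, sigma_u, lambdas=(0.5, 1.0, 1.5, 2.0), n_sim=200):
    """SIMEX sketch for a simple regression slope when w = x + N(0, sigma_u^2).
    Simulation step: add extra error scaled by each lambda; extrapolation step:
    fit a quadratic in lambda and evaluate it at lambda = -1 (the no-error case)."""
    lam_pts = [0.0]
    slope_pts = [np.polyfit(w, y, 1)[0]]                 # naive (attenuated) slope
    for lam in lambdas:
        sims = [np.polyfit(w + rng.normal(0, np.sqrt(lam) * sigma_u, w.size), y, 1)[0]
                for _ in range(n_sim)]
        lam_pts.append(lam)
        slope_pts.append(np.mean(sims))
    quad = np.polyfit(lam_pts, slope_pts, 2)
    return np.polyval(quad, -1.0), slope_pts[0]

# Synthetic data: true slope 1.0, covariate measured with substantial error.
x = rng.normal(0, 1, 1000)
y = 1.0 * x + rng.normal(0, 0.5, 1000)
w = x + rng.normal(0, 0.6, 1000)                         # error-prone covariate
corrected, naive = simex_slope(w, y, sigma_u=0.6)
print(f"naive slope {naive:.3f}  SIMEX-corrected slope {corrected:.3f}  (true 1.0)")
```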

  3. Error reduction and representation in stages (ERRIS) in hydrological modelling for ensemble streamflow forecasting

    Science.gov (United States)

    Li, Ming; Wang, Q. J.; Bennett, James C.; Robertson, David E.

    2016-09-01

    This study develops a new error modelling method for ensemble short-term and real-time streamflow forecasting, called error reduction and representation in stages (ERRIS). The novelty of ERRIS is that it does not rely on a single complex error model but runs a sequence of simple error models through four stages. At each stage, an error model attempts to incrementally improve over the previous stage. Stage 1 establishes parameters of a hydrological model and parameters of a transformation function for data normalization, Stage 2 applies a bias correction, Stage 3 applies autoregressive (AR) updating, and Stage 4 applies a Gaussian mixture distribution to represent model residuals. In a case study, we apply ERRIS for one-step-ahead forecasting at a range of catchments. The forecasts at the end of Stage 4 are shown to be much more accurate than at Stage 1 and to be highly reliable in representing forecast uncertainty. Specifically, the forecasts become more accurate by applying the AR updating at Stage 3, and more reliable in uncertainty spread by using a mixture of two Gaussian distributions to represent the residuals at Stage 4. ERRIS can be applied to any existing calibrated hydrological models, including those calibrated to deterministic (e.g. least-squares) objectives.
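    A toy illustration of the staged idea is sketched below for two of the stages (a bias correction followed by AR(1) error updating) on synthetic log-flows; it is not the ERRIS implementation and all numbers are assumptions:

```python
import numpy as np

rng = np.random.default_rng(7)

# Synthetic log-flows: the simulation carries a constant bias and AR(1)-persistent errors.
n = 500
obs_log = np.sin(np.arange(n) / 30) + 3.0
err = np.zeros(n)
for t in range(1, n):
    err[t] = 0.7 * err[t - 1] + rng.normal(0, 0.2)
sim_log = obs_log - 0.3 + err                 # stage-1 (raw) simulation

calib = slice(0, 300)                          # calibration period; the rest is validation

# Stage 2 (bias correction): remove the mean calibration-period error.
bias = np.mean(obs_log[calib] - sim_log[calib])
stage2 = sim_log + bias

# Stage 3 (AR updating): fit an AR(1) to stage-2 errors, then update one step ahead.
d = obs_log[calib] - stage2[calib]
alpha = np.dot(d[:-1], d[1:]) / np.dot(d[:-1], d[:-1])
stage3 = stage2.copy()
for t in range(301, n):
    stage3[t] = stage2[t] + alpha * (obs_log[t - 1] - stage2[t - 1])

for name, f in [("stage 1", sim_log), ("stage 2", stage2), ("stage 3", stage3)]:
    rmse = np.sqrt(np.mean((obs_log[300:] - f[300:]) ** 2))
    print(f"{name}: validation RMSE = {rmse:.3f}")
```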

  4. A Stable Clock Error Model Using Coupled First and Second Order Gauss-Markov Processes

    Science.gov (United States)

    Carpenter, Russell; Lee, Taesul

    2008-01-01

    Long data outages may occur in applications of global navigation satellite system technology to orbit determination for missions that spend significant fractions of their orbits above the navigation satellite constellation(s). Current clock error models based on the random walk idealization may not be suitable in these circumstances, since the covariance of the clock errors may become large enough to overflow flight computer arithmetic. A model that is stable, but which approximates the existing models over short time horizons is desirable. A coupled first- and second-order Gauss-Markov process is such a model.
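    The sketch below simulates only the first-order Gauss-Markov component to show the key property exploited here, a bounded steady-state variance, in contrast with a random walk; time constants and noise levels are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(8)

def first_order_gauss_markov(tau, sigma_ss, dt, n):
    """Discrete first-order Gauss-Markov process x_{k+1} = phi * x_k + w_k with time
    constant tau and steady-state standard deviation sigma_ss; unlike a random walk,
    its variance stays bounded at sigma_ss**2 no matter how long it is propagated."""
    phi = np.exp(-dt / tau)
    q = sigma_ss**2 * (1.0 - phi**2)      # driving-noise variance that keeps the process stationary
    x = np.zeros(n)
    for k in range(1, n):
        x[k] = phi * x[k - 1] + rng.normal(0.0, np.sqrt(q))
    return x

dt, n = 1.0, 100_000                       # 1 s steps, roughly 28 h of propagation
fogm = first_order_gauss_markov(tau=3600.0, sigma_ss=1e-8, dt=dt, n=n)   # illustrative clock-bias units
walk = np.cumsum(rng.normal(0.0, 1e-10, n))                              # random-walk comparison
print("first-order Gauss-Markov std (bounded): ", fogm.std())
print("random-walk std (keeps growing with n): ", walk.std())
```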

  5. Alternatives to accuracy and bias metrics based on percentage errors for radiation belt modeling applications

    Energy Technology Data Exchange (ETDEWEB)

    Morley, Steven Karl [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-07-01

    This report reviews existing literature describing forecast accuracy metrics, concentrating on those based on relative errors and percentage errors. We then review how the most common of these metrics, the mean absolute percentage error (MAPE), has been applied in recent radiation belt modeling literature. Finally, we describe metrics based on the ratios of predicted to observed values (the accuracy ratio) that address the drawbacks inherent in using MAPE. Specifically, we define and recommend the median log accuracy ratio as a measure of bias and the median symmetric accuracy as a measure of accuracy.
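    The sketch below computes MAPE alongside the two recommended ratio-based metrics, using the common definitions of the median log accuracy ratio and median symmetric accuracy (assumed here to match the report's usage) on synthetic flux-like data:

```python
import numpy as np

def mape(pred, obs):
    """Mean absolute percentage error, in percent."""
    return 100 * np.mean(np.abs((pred - obs) / obs))

def median_log_accuracy_ratio(pred, obs):
    """Bias measure: median of log(predicted/observed); 0 is unbiased,
    positive values indicate over-prediction on average."""
    return np.median(np.log(pred / obs))

def median_symmetric_accuracy(pred, obs):
    """Accuracy measure, in percent, based on the accuracy ratio; over- and
    under-prediction by the same factor are penalized equally."""
    return 100 * (np.exp(np.median(np.abs(np.log(pred / obs)))) - 1)

# Illustrative flux-like observations spanning several orders of magnitude,
# with factor-type prediction errors and a slight over-prediction bias.
rng = np.random.default_rng(9)
obs = 10 ** rng.uniform(2, 6, 1000)
pred = obs * np.exp(rng.normal(0.1, 0.5, 1000))

print(f"MAPE                      = {mape(pred, obs):8.1f} %")
print(f"median log accuracy ratio = {median_log_accuracy_ratio(pred, obs):8.3f}")
print(f"median symmetric accuracy = {median_symmetric_accuracy(pred, obs):8.1f} %")
```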

  6. Analysis of Inflation in North Sumatra: An Error Correction Model (ECM)

    Directory of Open Access Journals (Sweden)

    Hafsyah Aprilia

    2012-06-01

    Full Text Available The research was conducted to determine the effect of economic variables that can explain the change or variation in the rate of inflation in the Consumer Price Index (CPI), which serves as the dependent variable. The explanatory (independent) variables used as controls are the SBI rate, the nominal interest rate spread (SBI), and the value of the rupiah against the U.S. dollar. Based on these results, and in accordance with the specific purpose of model equation II, it is suggested that economic actors can use the SBI interest rate spread as an indicator of variations in the CPI inflation rate at intervals of 8 and 12 months, with the caveat that the obtained level of explanation is not yet optimal.

  7. Error Modeling and Design Optimization of Parallel Manipulators

    DEFF Research Database (Denmark)

    Wu, Guanglei

    challenges due to their highly nonlinear behaviors; thus, the parameter and performance analysis, especially the accuracy and stiffness, are particularly important. Toward the requirements of robotic technology such as light weight, compactness, high accuracy and low energy consumption, utilizing optimization techniques in the design procedure is a suitable approach to handle these complex tasks. As there is no unified design guideline for parallel manipulators, the study described in this thesis aims to provide a systematic analysis for this type of mechanism in the early design stage, focusing on accuracy analysis and design optimization. The proposed approach is illustrated with the planar and spherical parallel manipulators. The geometric design, kinematic and dynamic analysis, kinetostatic modeling and stiffness analysis are also presented. Firstly, the study on the geometric architecture and kinematic...

  8. Why Is Rainfall Error Analysis Requisite for Data Assimilation and Climate Modeling?

    Science.gov (United States)

    Hou, Arthur Y.; Zhang, Sara Q.

    2004-01-01

    Given the large temporal and spatial variability of precipitation processes, errors in rainfall observations are difficult to quantify yet crucial to making effective use of rainfall data for improving atmospheric analysis, weather forecasting, and climate modeling. We highlight the need for developing a quantitative understanding of systematic and random errors in precipitation observations by examining explicit examples of how each type of errors can affect forecasts and analyses in global data assimilation. We characterize the error information needed from the precipitation measurement community and how it may be used to improve data usage within the general framework of analysis techniques, as well as accuracy requirements from the perspective of climate modeling and global data assimilation.

  9. Error Modeling and Sensitivity Analysis of a Five-Axis Machine Tool

    Directory of Open Access Journals (Sweden)

    Wenjie Tian

    2014-01-01

    Full Text Available Geometric error modeling and its sensitivity analysis are carried out in this paper, which is helpful for precision design of machine tools. Screw theory and rigid body kinematics are used to establish the error model of an RRTTT-type five-axis machine tool, which enables the source errors affecting the compensable and uncompensable pose accuracy of the machine tool to be explicitly separated, thereby providing designers and/or field engineers with an informative guideline for the accuracy improvement by suitable measures, that is, component tolerancing in design, manufacturing, and assembly processes, and error compensation. The sensitivity analysis method is proposed, and the sensitivities of compensable and uncompensable pose accuracies are analyzed. The analysis results will be used for the precision design of the machine tool.

  10. An attempt to lower sources of systematic measurement error using Hierarchical Generalized Linear Modeling (HGLM).

    Science.gov (United States)

    Sideridis, George D; Tsaousis, Ioannis; Katsis, Athanasios

    2014-01-01

    The purpose of the present studies was to test the effects of systematic sources of measurement error on the parameter estimates of scales using the Rasch model. Studies 1 and 2 tested the effects of mood and affectivity. Study 3 evaluated the effects of fatigue. Last, studies 4 and 5 tested the effects of motivation on a number of parameters of the Rasch model (e.g., ability estimates). Results indicated that (a) the parameters of interest and the psychometric properties of the scales were substantially distorted in the presence of all systematic sources of error, and, (b) the use of HGLM provides a way of adjusting the parameter estimates in the presence of these sources of error. It is concluded that validity in measurement requires a thorough evaluation of potential sources of error and appropriate adjustments based on each occasion.

  11. Performance of cumulant-based rank reduction estimator in presence of unexpected modeling errors

    Institute of Scientific and Technical Information of China (English)

    王鼎

    2015-01-01

    Compared with the rank reduction estimator (RARE) based on second-order statistics (called SOS-RARE), the RARE based on fourth-order cumulants (referred to as FOC-RARE) can handle more sources and restrain the negative impacts of Gaussian colored noise. However, the unexpected modeling errors appearing in practice are known to significantly degrade the performance of the RARE. Therefore, the direction-of-arrival (DOA) estimation performance of the FOC-RARE is quantitatively derived. The explicit expression for the direction-finding (DF) error is derived via first-order perturbation analysis, and then the theoretical formula for the mean square error (MSE) is given. Simulation results validate the theoretical analysis and reveal that the FOC-RARE is more robust to unexpected modeling errors than the SOS-RARE.

  12. A Probabilistic Collocation Method Based Statistical Gate Delay Model Considering Process Variations and Multiple Input Switching

    CERN Document Server

    Kumar, Y Satish; Talarico, Claudio; Wang, Janet; 10.1109/DATE.2005.31

    2011-01-01

    Since the advent of new nanotechnologies, the variability of gate delay due to process variations has become a major concern. This paper proposes a new gate delay model that includes the impact of both process variations and multiple input switching. The proposed model uses an orthogonal-polynomial-based probabilistic collocation method to construct an analytical delay equation from circuit timing performance. From the experimental results, our approach has less than 0.2% error on the mean delay of gates and less than 3% error on the standard deviation.
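
    As a rough, self-contained illustration of the collocation idea (not the authors' implementation), the Python sketch below fits a one-dimensional probabilists'-Hermite polynomial chaos expansion to a hypothetical gate-delay response evaluated at Gauss-Hermite collocation points, then reads the delay mean and standard deviation off the coefficients; the gate_delay function and all numerical values are placeholders.

        import math
        import numpy as np
        from numpy.polynomial import hermite_e as H

        # Hypothetical gate-delay response to one standardized process variable xi ~ N(0, 1);
        # in a real flow this evaluation would come from circuit simulation.
        def gate_delay(xi, nominal=10.0e-12, sens=0.8e-12, curv=0.05e-12):
            return nominal + sens * xi + curv * xi ** 2

        order = 2
        nodes, _ = H.hermegauss(order + 1)      # collocation points: roots of He_{order+1}
        samples = gate_delay(nodes)             # "simulator" runs at the collocation points

        V = H.hermevander(nodes, order)         # probabilists' Hermite basis evaluated at the nodes
        coeffs, *_ = np.linalg.lstsq(V, samples, rcond=None)

        # Orthogonality under N(0,1): E[He_n(xi)^2] = n!, so moments follow from the coefficients.
        mean_delay = coeffs[0]
        std_delay = math.sqrt(sum(coeffs[n] ** 2 * math.factorial(n) for n in range(1, order + 1)))
        print(f"mean delay = {mean_delay:.3e} s, std = {std_delay:.3e} s")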

  13. Measurement Error in Proportional Hazards Models for Survival Data with Long-term Survivors

    Institute of Scientific and Technical Information of China (English)

    Xiao-bing ZHAO; Xian ZHOU

    2012-01-01

    This work studies a proportional hazards model for survival data with "long-term survivors", in which covariates are subject to linear measurement error. It is well known that the naïve estimators from both partial and full likelihood methods are inconsistent under this measurement error model. For measurement error models, methods of unbiased estimating function and corrected likelihood have been proposed in the literature. In this paper, we apply the corrected partial and full likelihood approaches to estimate the model and obtain statistical inference from survival data with long-term survivors. The asymptotic properties of the estimators are established. Simulation results illustrate that the proposed approaches provide useful tools for the models considered.

  14. Identifying types and causes of errors in mortality data in a clinical registry using multiple information systems.

    Science.gov (United States)

    Koetsier, Antonie; Peek, Niels; de Keizer, Nicolette

    2012-01-01

    Errors may occur in the registration of in-hospital mortality, making it less reliable as a quality indicator. We assessed the types of errors made in in-hospital mortality registration in the clinical quality registry National Intensive Care Evaluation (NICE) by comparing its mortality data to data from a national insurance claims database. Subsequently, we performed site visits at eleven Intensive Care Units (ICUs) to investigate the number, types and causes of errors made in in-hospital mortality registration. A total of 255 errors were found in the NICE registry. Two different types of software malfunction accounted for almost 80% of the errors. The remaining 20% were five types of manual transcription errors and human failures to record outcome data. Clinical registries should be aware of the possible existence of errors in recorded outcome data and understand their causes. In order to prevent errors, we recommend thoroughly verifying the software used in the registration process.

  15. Consistent Fundamental Matrix Estimation in a Quadratic Measurement Error Model Arising in Motion Analysis

    OpenAIRE

    Kukush, A.; Markovsky, I.; Van Huffel, S.

    2002-01-01

    Consistent estimators of the rank-deficient fundamental matrix yielding information on the relative orientation of two images in two-view motion analysis are derived. The estimators are derived by minimizing a corrected contrast function in a quadratic measurement error model. In addition, a consistent estimator for the measurement error variance is obtained. Simulation results show the improved accuracy of the newly proposed estimator compared to the ordinary total least-squares estimator.

  16. Modeling Human Error Mechanism for Soft Control in Advanced Control Rooms (ACRs)

    Energy Technology Data Exchange (ETDEWEB)

    Aljneibi, Hanan Salah Ali [Khalifa Univ., Abu Dhabi (United Arab Emirates); Ha, Jun Su; Kang, Seongkeun; Seong, Poong Hyun [KAIST, Daejeon (Korea, Republic of)

    2015-10-15

    To achieve the switch from conventional analog-based design to digital design in ACRs, a large number of manual operating controls and switches have to be replaced by a few common multi-function devices called the soft control system. The soft controls in APR-1400 ACRs are classified into safety-grade and non-safety-grade soft controls; each is designed using different and independent input devices. Operations using soft controls require operators to perform new tasks that were not necessary with conventional controls, such as navigating computerized displays to monitor plant information and control devices. These computerized displays and soft controls may make operations more convenient, but they may also cause new types of human error. In this study, the human error mechanism during soft control is studied and modeled for use in analyzing and enhancing human performance (or reducing human errors) during NPP operation. The developed model could support many applications for improving human performance, HMI designs, and operators' training programs in ACRs. The model of the human error mechanism for soft control is based on the following assumptions: a human operator has a certain amount of capacity in cognitive resources, and if the resources required by operating tasks exceed the resources invested by the operator, human error (or poor human performance) is likely to occur (especially 'slips'); good HMI (human-machine interface) design decreases the required resources; the operator's skillfulness decreases the required resources; and high vigilance increases the invested resources.

  17. Identifiability of Gaussian Structural Equation Models with Same Error Variances

    CERN Document Server

    Peters, Jonas

    2012-01-01

    We consider structural equation models (SEMs) in which variables can be written as a function of their parents and noise terms (the latter are assumed to be jointly independent). Corresponding to each SEM, there is a directed acyclic graph (DAG) G_0 describing the relationships between the variables. In Gaussian SEMs with linear functions, the graph can be identified from the joint distribution only up to Markov equivalence classes (assuming faithfulness). It has been shown, however, that this constitutes an exceptional case. In the case of linear functions and non-Gaussian noise, the DAG becomes identifiable. Apart from a few exceptions, the same is true for non-linear functions and arbitrarily distributed additive noise. In this work, we prove identifiability for a third modification: if we require all noise variables to have the same variances, again, the DAG can be recovered from the joint Gaussian distribution. Our result can be applied to the problem of causal inference. If the data follow a Gaussian SEM w...

  18. High dimensional linear regression models under long memory dependence and measurement error

    Science.gov (United States)

    Kaul, Abhishek

    This dissertation consists of three chapters. The first chapter introduces the models under consideration and motivates problems of interest. A brief literature review is also provided in this chapter. The second chapter investigates the properties of Lasso under long range dependent model errors. Lasso is a computationally efficient approach to model selection and estimation, and its properties are well studied when the regression errors are independent and identically distributed. We study the case where the regression errors form a long memory moving average process. We establish a finite sample oracle inequality for the Lasso solution. We then show the asymptotic sign consistency in this setup. These results are established in the high dimensional setup (p > n) where p can be increasing exponentially with n. Finally, we show the consistency, n^{1/2-d}-consistency of Lasso, along with the oracle property of adaptive Lasso, in the case where p is fixed. Here d is the memory parameter of the stationary error sequence. The performance of Lasso is also analysed in the present setup with a simulation study. The third chapter proposes and investigates the properties of a penalized quantile based estimator for measurement error models. Standard formulations of prediction problems in high dimension regression models assume the availability of fully observed covariates and sub-Gaussian and homogeneous model errors. This makes these methods inapplicable to measurement error models where covariates are unobservable and observations are possibly non sub-Gaussian and heterogeneous. We propose weighted penalized corrected quantile estimators for the regression parameter vector in linear regression models with additive measurement errors, where unobservable covariates are nonrandom. The proposed estimators forgo the need for the above mentioned model assumptions. We study these estimators in both the fixed dimension and high dimensional sparse setups, in the latter setup, the

  19. Relative Error Model Reduction via Time-Weighted Balanced Stochastic Singular Perturbation

    DEFF Research Database (Denmark)

    Tahavori, Maryamsadat; Shaker, Hamid Reza

    2012-01-01

    A new mixed method for relative error model reduction of linear time invariant (LTI) systems is proposed in this paper. This order reduction technique is mainly based upon the time-weighted balanced stochastic model reduction method and the singular perturbation model reduction technique. Compared...... by using the concept and properties of the reciprocal systems. The results are further illustrated by two practical numerical examples: a model of a CD player and a model of the atmospheric storm track.

  20. An MEG signature corresponding to an axiomatic model of reward prediction error.

    Science.gov (United States)

    Talmi, Deborah; Fuentemilla, Lluis; Litvak, Vladimir; Duzel, Emrah; Dolan, Raymond J

    2012-01-01

    Optimal decision-making is guided by evaluating the outcomes of previous decisions. Prediction errors are theoretical teaching signals which integrate two features of an outcome: its inherent value and the prior expectation of its occurrence. To uncover the magnetic signature of prediction errors in the human brain we acquired magnetoencephalographic (MEG) data while participants performed a gambling task. Our primary objective was to use formal criteria, based upon an axiomatic model (Caplin and Dean, 2008a), to determine the presence and timing profile of MEG signals that express prediction errors. We report analyses at the sensor level, implemented in SPM8, time locked to outcome onset. We identified, for the first time, an MEG signature of prediction error, which emerged approximately 320 ms after an outcome and was expressed as an interaction between outcome valence and probability. This signal followed earlier, separate signals for outcome valence and probability, which emerged approximately 200 ms after an outcome. Strikingly, the time course of the prediction error signal, as well as the early valence signal, resembled the Feedback-Related Negativity (FRN). In simultaneously acquired EEG data we obtained a robust FRN, but the win and loss signals that comprised this difference wave did not comply with the axiomatic model. Our findings motivate an explicit examination of the critical issue of timing embodied in computational models of prediction errors as seen in human electrophysiological data.

  1. Active Magnetic Bearing Rotor Model Updating Using Resonance and MAC Error

    Directory of Open Access Journals (Sweden)

    Yuanping Xu

    2015-01-01

    Full Text Available Modern control techniques can improve the performance and robustness of a rotor active magnetic bearing (AMB) system. Since those control methods usually rely on system models, it is important to obtain a precise analytical model of the rotor AMB. However, the interference fits and shrink effects of the rotor AMB introduce inaccuracy into the final system model. In this paper, an experiment-based model updating method is proposed to improve the accuracy of the finite element (FE) model used in a rotor AMB system. Modelling error is minimized by applying the Nelder-Mead simplex numerical optimization algorithm to properly adjust the FE model parameters. Both the errors in the resonance frequencies and in the modal assurance criterion (MAC) values are minimized simultaneously to account for the rotor natural frequencies as well as for the mode shapes. Verification of the updated rotor model is performed by comparing the experimental and analytical frequency responses. The close agreement demonstrates the effectiveness of the proposed model updating methodology.
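
    The combination of resonance-frequency error and MAC error in a Nelder-Mead search can be sketched in a few lines. The toy example below substitutes a two-degree-of-freedom mass-spring system for the rotor FE model and treats its stiffnesses as the parameters to be updated; the "measured" modal data, the objective weighting and all numbers are illustrative assumptions, not the paper's setup.

        import numpy as np
        from scipy.linalg import eigh
        from scipy.optimize import minimize

        M = np.diag([1.0, 1.0])                      # toy 2-DOF stand-in for the rotor FE model

        def modal(k):
            """Natural frequencies (Hz) and mode shapes for stiffnesses k = (k1, k2)."""
            K = np.array([[k[0] + k[1], -k[1]],
                          [-k[1], k[1]]])
            w2, phi = eigh(K, M)
            return np.sqrt(w2) / (2 * np.pi), phi

        def mac(p, q):
            return (p @ q) ** 2 / ((p @ p) * (q @ q))

        # "Measured" modal data, here generated from known stiffnesses acting as the true structure.
        f_exp, phi_exp = modal([4.0e4, 2.5e4])

        def objective(k):
            f_num, phi_num = modal(k)
            freq_err = np.sum(((f_num - f_exp) / f_exp) ** 2)
            mac_err = sum(1.0 - mac(phi_num[:, i], phi_exp[:, i]) for i in range(2))
            return freq_err + mac_err                # frequency error and MAC error minimized together

        start = np.array([3.0e4, 3.0e4])             # initial (inaccurate) FE parameters
        res = minimize(objective, start, method="Nelder-Mead")
        print("updated stiffnesses:", res.x, "objective:", res.fun)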

  2. Admissibilities of linear estimator in a class of linear models with a multivariate t error variable

    Institute of Scientific and Technical Information of China (English)

    2010-01-01

    This paper discusses admissibilities of estimators in a class of linear models, which includes the following common models: the univariate and multivariate linear models, the growth curve model, the extended growth curve model, the seemingly unrelated regression equations, the variance components model, and so on. It is proved that admissible estimators of functions of the regression coefficient β in the class of linear models with multivariate t error terms, called Model II, are also admissible in the case where the error terms have a multivariate normal distribution, under a strictly convex loss function or a matrix loss function. It is also proved under Model II that the usual estimators of β are admissible for p 2 with a quadratic loss function, and are admissible for any p with a matrix loss function, where p is the dimension of β.

  3. Rank-Defect Adjustment Model for Survey-Line Systematic Errors in Marine Survey Net

    Institute of Scientific and Technical Information of China (English)

    2002-01-01

    In this paper, the structure of systematic and random errors in a marine survey net is discussed in detail, and the adjustment method for observations of a marine survey net is studied, in which the rank-defect characteristic is identified for the first time. On the basis of the survey-line systematic error model, the formulae of the rank-defect adjustment model are deduced according to modern adjustment theory. An example of calculations with real observed data is carried out to demonstrate the efficiency of this adjustment model. Moreover, it is proved that the semi-systematic error correction method used at present in marine gravimetry in China is a special case of the adjustment model presented in this paper.

  4. Error Analysis of Satellite Precipitation-Driven Modeling of Flood Events in Complex Alpine Terrain

    Directory of Open Access Journals (Sweden)

    Yiwen Mei

    2016-03-01

    Full Text Available The error in satellite precipitation-driven complex terrain flood simulations is characterized in this study for eight different global satellite products and 128 flood events over the Eastern Italian Alps. The flood events are grouped according to two flood types: rain floods and flash floods. The satellite precipitation products and runoff simulations are evaluated based on systematic and random error metrics applied on the matched event pairs and basin-scale event properties (i.e., rainfall and runoff cumulative depth and time series shape). Overall, error characteristics exhibit dependency on the flood type. Generally, the timing of the event precipitation mass center and the dispersion of the time series derived from satellite precipitation exhibit good agreement with the reference; the cumulative depth is mostly underestimated. The study shows a dampening effect in both systematic and random error components of the satellite-driven hydrograph relative to the satellite-retrieved hyetograph. The systematic error in the shape of the time series shows a significant dampening effect. The random error dampening effect is less pronounced for the flash flood events and the rain flood events with a high runoff coefficient. This event-based analysis of the satellite precipitation error propagation in flood modeling sheds light on the application of satellite precipitation in mountain flood hydrology.

  5. Multiple Temperature Model for Near Continuum Flows

    Energy Technology Data Exchange (ETDEWEB)

    XU, Kun; Liu, Hongwei [Hong Kong University of Science and Technology, Kowloon (Hong Kong); Jiang, Jianzheng [Chinese Academy of Sciences, Beijing (China)

    2007-09-15

    In the near continuum flow regime, the flow may have different translational temperatures in different directions. It is well known that for increasingly rarefied flow fields, the predictions from continuum formulation, such as the Navier-Stokes equations, lose accuracy. These inaccuracies may be partially due to the single temperature assumption in the Navier-Stokes equations. Here, based on the gas-kinetic Bhatnagar-Gross-Krook (BGK) equation, a multitranslational temperature model is proposed and used in the flow calculations. In order to fix all three translational temperatures, two constraints are additionally proposed to model the energy exchange in different directions. Based on the multiple temperature assumption, the Navier-Stokes relation between the stress and strain is replaced by the temperature relaxation term, and the Navier-Stokes assumption is recovered only in the limiting case when the flow is close to the equilibrium with the same temperature in different directions. In order to validate the current model, both the Couette and Poiseuille flows are studied in the transition flow regime.

  6. A Phillips curve interpretation of error-correction models of the wage and price dynamics

    DEFF Research Database (Denmark)

    Harck, Søren H.

     This paper presents a model of employment, distribution and inflation in which a modern error correction specification of the nominal wage and price dynamics (referring to claims on income by workers and firms) occupies a prominent role. It is brought out, explicitly, how this rather typical error......-correction setting, which actually seems to capture the wage and price dynamics of many large- scale econometric models quite well, is fully compatible with the notion of an old-fashioned Phillips curve with finite slope. It is shown how the steady-state impact of various shocks to the model can be profitably...

  7. A Phillips curve interpretation of error-correction models of the wage and price dynamics

    DEFF Research Database (Denmark)

    Harck, Søren H.

    2009-01-01

    This paper presents a model of employment, distribution and inflation in which a modern error correction specification of the nominal wage and price dynamics (referring to claims on income by workers and firms) occupies a prominent role. It is brought out, explicitly, how this rather typical error......-correction setting, which actually seems to capture the wage and price dynamics of many large- scale econometric models quite well, is fully compatible with the notion of an old-fashioned Phillips curve with finite slope. It is shown how the steady-state impact of various shocks to the model can be profitably...

  8. Dynamic Modeling Accuracy Dependence on Errors in Sensor Measurements, Mass Properties, and Aircraft Geometry

    Science.gov (United States)

    Grauer, Jared A.; Morelli, Eugene A.

    2013-01-01

    A nonlinear simulation of the NASA Generic Transport Model was used to investigate the effects of errors in sensor measurements, mass properties, and aircraft geometry on the accuracy of dynamic models identified from flight data. Measurements from a typical system identification maneuver were systematically and progressively deteriorated and then used to estimate stability and control derivatives within a Monte Carlo analysis. Based on the results, recommendations were provided for maximum allowable errors in sensor measurements, mass properties, and aircraft geometry to achieve desired levels of dynamic modeling accuracy. Results using other flight conditions, parameter estimation methods, and a full-scale F-16 nonlinear aircraft simulation were compared with these recommendations.

  9. Execution-Error Modeling and Analysis of the GRAIL Spacecraft Pair

    Science.gov (United States)

    Goodson, Troy D.

    2013-01-01

    The GRAIL spacecraft, Ebb and Flow (aka GRAIL-A and GRAIL-B), completed their prime mission in June and extended mission in December 2012. The excellent performance of the propulsion and attitude control subsystems contributed significantly to the mission's success. In order to better understand this performance, the Navigation Team has analyzed and refined the execution-error models for delta-v maneuvers. There were enough maneuvers in the prime mission to form the basis of a model update that was used in the extended mission. This paper documents the evolution of the execution-error models along with the analysis and software used.

  10. Direction of Effects in Multiple Linear Regression Models.

    Science.gov (United States)

    Wiedermann, Wolfgang; von Eye, Alexander

    2015-01-01

    Previous studies analyzed asymmetric properties of the Pearson correlation coefficient using higher than second order moments. These asymmetric properties can be used to determine the direction of dependence in a linear regression setting (i.e., establish which of two variables is more likely to be on the outcome side) within the framework of cross-sectional observational data. Extant approaches are restricted to the bivariate regression case. The present contribution extends the direction of dependence methodology to a multiple linear regression setting by analyzing distributional properties of residuals of competing multiple regression models. It is shown that, under certain conditions, the third central moments of estimated regression residuals can be used to decide upon direction of effects. In addition, three different approaches for statistical inference are discussed: a combined D'Agostino normality test, a skewness difference test, and a bootstrap difference test. Type I error and power of the procedures are assessed using Monte Carlo simulations, and an empirical example is provided for illustrative purposes. In the discussion, issues concerning the quality of psychological data, possible extensions of the proposed methods to the fourth central moment of regression residuals, and potential applications are addressed.
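
    A minimal sketch of the underlying idea, assuming simulated data and ordinary least squares fits, is shown below: the residuals of two competing regression models are compared through their skewness (third standardized moment) and a D'Agostino-type normality test; the paper's formal decision rules and bootstrap test are not reproduced here.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(0)
        n = 2000

        # Simulate a "true" direction x1, x2 -> y with skewed (non-normal) errors.
        x1 = rng.exponential(1.0, n)
        x2 = rng.normal(0.0, 1.0, n)
        y = 1.0 + 0.8 * x1 - 0.5 * x2 + rng.exponential(1.0, n)

        def residuals(target, predictors):
            """OLS residuals of target regressed on an intercept and the given predictors."""
            X = np.column_stack([np.ones(len(target))] + predictors)
            beta, *_ = np.linalg.lstsq(X, target, rcond=None)
            return target - X @ beta

        res_target = residuals(y, [x1, x2])          # candidate model: y ~ x1 + x2
        res_reverse = residuals(x1, [y, x2])         # competing model: x1 ~ y + x2

        # Compare the third-moment behaviour of the two residual sets; the paper's tests
        # formalize how these quantities are used to decide on the direction of effect.
        for name, r in [("y ~ x1 + x2", res_target), ("x1 ~ y + x2", res_reverse)]:
            print(name, "skewness:", stats.skew(r), "normality p:", stats.normaltest(r).pvalue)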

  11. A flexible additive inflation scheme for treating model error in ensemble Kalman Filters

    Science.gov (United States)

    Sommer, Matthias; Janjic, Tijana

    2017-04-01

    Data assimilation algorithms require an accurate estimate of the uncertainty of the prior (background) field. However, the background error covariance derived from the ensemble of numerical model simulations does not adequately represent its uncertainty. This is partially due to the sampling error that arises from the use of a small number of ensemble members to represent the background error covariance. It is also partially a consequence of the fact that the model does not represent its own error. Several mechanisms have been introduced so far aiming at alleviating the detrimental effects of misrepresented ensemble covariances, allowing for the successful implementation of ensemble data assimilation techniques for atmospheric dynamics. One of the established approaches in ensemble data assimilation is additive inflation, which perturbs each ensemble member with a sample from a given distribution. This results in a fixed rank of the model error covariance matrix. Here, a more flexible approach is suggested in which the model error samples are treated as additional synthetic ensemble members that are used in the update step of data assimilation but are not forecast. In this way, the rank of the model error covariance matrix can be chosen independently of the ensemble. The effect of this altered additive inflation method on the performance of the filter is analyzed here in an idealised experiment. It is shown that the additional synthetic ensemble members can make it feasible to achieve convergence in an otherwise divergent setting of data assimilation. The use of this method also allows for a less stringent localization radius.
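
    The following sketch, with placeholder state dimensions, observation operator and model-error distribution, illustrates the idea of treating model-error samples as synthetic members: they enter the ensemble anomalies used to build the covariance and the gain, but only the dynamical members are updated and forecast.

        import numpy as np

        rng = np.random.default_rng(0)
        n, m, k, p = 40, 10, 20, 5                    # state dim, ensemble, synthetic members, observations

        X = rng.normal(size=(n, m))                   # forecast ensemble (placeholder dynamics)
        Q_samples = 0.3 * rng.normal(size=(n, k))     # model-error draws from an assumed distribution
        H = np.eye(p, n)                              # observe the first p state components
        R = 0.5 * np.eye(p)
        y = rng.normal(size=p)                        # observations (placeholder)

        # Augment the ensemble anomalies with the synthetic members for the covariance only.
        A_ens = (X - X.mean(axis=1, keepdims=True)) / np.sqrt(m - 1)
        A_err = (Q_samples - Q_samples.mean(axis=1, keepdims=True)) / np.sqrt(k - 1)
        A_aug = np.hstack([A_ens, A_err])             # rank up to (m-1)+(k-1) instead of m-1

        P = A_aug @ A_aug.T                           # flow-dependent plus model-error covariance
        K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)  # Kalman gain

        # Stochastic EnKF update: only the m dynamical members are updated and forecast.
        Y_pert = y[:, None] + rng.multivariate_normal(np.zeros(p), R, size=m).T
        X_analysis = X + K @ (Y_pert - H @ X)
        print(X_analysis.shape)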

  12. High resolution modeling of CO2 over Europe: implications for representation errors of satellite retrievals

    Directory of Open Access Journals (Sweden)

    T. Koch

    2010-01-01

    Full Text Available Satellite retrievals for column CO2 with better spatial and temporal sampling are expected to improve the current surface flux estimates of CO2 via inverse techniques. However, the spatial scale mismatch between remotely sensed CO2 and current generation inverse models can induce representation errors, which can cause systematic biases in flux estimates. This study is focused on estimating these representation errors associated with the utilization of satellite measurements in global models with a horizontal resolution of about 1 degree or less. For this we used simulated CO2 from the high resolution modeling framework WRF-VPRM, which links CO2 fluxes from a diagnostic biosphere model to a weather forecasting model at 10×10 km2 horizontal resolution. Sub-grid variability of column averaged CO2, i.e. the variability not resolved by global models, reached up to 1.2 ppm with a median value of 0.4 ppm. Statistical analysis of the simulation results indicates that orography plays an important role. Using sub-grid variability of orography and CO2 fluxes as well as the resolved mixing ratio of CO2, a linear model can be formulated that could explain about 50% of the spatial patterns in the systematic (bias or correlated) error component of representation error in column and near-surface CO2 during day- and night-times. These findings give hints for a parameterization of representation error which would allow the representation error to be taken into account in inverse models or data assimilation systems.

  13. Finding of Correction Factor and Dimensional Error in Bio-AM Model by FDM Technique

    Science.gov (United States)

    Manmadhachary, Aiamunoori; Ravi Kumar, Yennam; Krishnanand, Lanka

    2016-06-01

    Additive Manufacturing (AM) is a rapid manufacturing process in which input data can be provided from various sources such as 3-Dimensional (3D) Computer Aided Design (CAD), Computed Tomography (CT), Magnetic Resonance Imaging (MRI) and 3D scanner data. From CT/MRI data, Biomedical Additive Manufacturing (Bio-AM) models can be manufactured. A Bio-AM model gives a better basis for preplanning oral and maxillofacial surgery; however, manufacturing an accurate Bio-AM model remains an unsolved problem. The current paper quantifies the error between the Standard Triangle Language (STL) model and the Bio-AM model of a dry mandible and derives a correction factor for Bio-AM models produced with the Fused Deposition Modelling (FDM) technique. In the present work, dry mandible CT images are acquired with a CT scanner and converted into a 3D CAD model in the form of an STL model. The data are then sent to an FDM machine for fabrication of the Bio-AM model. The difference between the Bio-AM and STL model dimensions is taken as the dimensional error, and the ratio of the STL to Bio-AM model dimensions is taken as the correction factor. This correction factor helps to fabricate AM models with accurate dimensions of the patient anatomy. These dimensionally true Bio-AM models increase safety and accuracy in the pre-planning of oral and maxillofacial surgery. The correction factor for the Dimension SST 768 FDM machine is 1.003 and the dimensional error is limited to 0.3 %.
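
    The arithmetic behind the correction factor and the percentage dimensional error can be illustrated as follows; the dimension values are invented for illustration and merely chosen so that the result is of the reported order (about 1.003 and 0.3 %).

        # Illustrative (not the paper's) measurements of one mandible dimension in mm.
        stl_dim = 101.30       # dimension on the STL (CAD) model
        bio_am_dim = 101.00    # same dimension measured on the printed Bio-AM part

        correction_factor = stl_dim / bio_am_dim                        # ratio of STL to Bio-AM dimensions
        dimensional_error = (stl_dim - bio_am_dim) / stl_dim * 100.0    # percentage error

        print(f"correction factor = {correction_factor:.3f}")    # about 1.003
        print(f"dimensional error = {dimensional_error:.2f} %")  # about 0.30 %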

  14. On asymptotics of t-type regression estimation in multiple linear model

    Institute of Scientific and Technical Information of China (English)

    2004-01-01

    We consider a robust estimator (t-type regression estimator) of a multiple linear regression model, obtained by maximizing the marginal likelihood of a scaled t-type error distribution. The marginal likelihood can also be applied to the de-correlated response when the within-subject correlation can be consistently estimated from an initial estimate of the model based on the independent working assumption. This paper shows that such a t-type estimator is consistent.

  15. Binary variable multiple-model multiple imputation to address missing data mechanism uncertainty: application to a smoking cessation trial.

    Science.gov (United States)

    Siddique, Juned; Harel, Ofer; Crespi, Catherine M; Hedeker, Donald

    2014-07-30

    The true missing data mechanism is never known in practice. We present a method for generating multiple imputations for binary variables, which formally incorporates missing data mechanism uncertainty. Imputations are generated from a distribution of imputation models rather than a single model, with the distribution reflecting subjective notions of missing data mechanism uncertainty. Parameter estimates and standard errors are obtained using rules for nested multiple imputation. Using simulation, we investigate the impact of missing data mechanism uncertainty on post-imputation inferences and show that incorporating this uncertainty can increase the coverage of parameter estimates. We apply our method to a longitudinal smoking cessation trial where nonignorably missing data were a concern. Our method provides a simple approach for formalizing subjective notions regarding nonresponse and can be implemented using existing imputation software.

  16. Uncovering the Best Skill Multimap by Constraining the Error Probabilities of the Gain-Loss Model

    Science.gov (United States)

    Anselmi, Pasquale; Robusto, Egidio; Stefanutti, Luca

    2012-01-01

    The Gain-Loss model is a probabilistic skill multimap model for assessing learning processes. In practical applications, more than one skill multimap could be plausible, while none corresponds to the true one. The article investigates whether constraining the error probabilities is a way of uncovering the best skill assignment among a number of…

  17. A Hierarchical Bayes Error Correction Model to Explain Dynamic Effects of Price Changes

    NARCIS (Netherlands)

    D. Fok (Dennis); R. Paap (Richard); C. Horváth (Csilla); Ph.H.B.F. Franses (Philip Hans)

    2005-01-01

    The authors put forward a sales response model to explain the differences in immediate and dynamic effects of promotional prices and regular prices on sales. The model consists of a vector autoregression rewritten in error-correction format which allows to disentangle the immediate

  18. A Percentile Regression Model for the Number of Errors in Group Conversation Tests.

    Science.gov (United States)

    Liski, Erkki P.; Puntanen, Simo

    A statistical model is presented for analyzing the results of group conversation tests in English, developed in a Finnish university study from 1977 to 1981. The model is illustrated with the findings from the study. In this study, estimates of percentile curves for the number of errors are of greater interest than the mean regression line. It was…

  19. Thermal Error Modeling of a Machine Tool Using Data Mining Scheme

    Science.gov (United States)

    Wang, Kun-Chieh; Tseng, Pai-Chang

    In this paper the knowledge discovery technique is used to build an effective and transparent mathematic thermal error model for machine tools. Our proposed thermal error modeling methodology (called KRL) integrates the schemes of K-means theory (KM), rough-set theory (RS), and linear regression model (LR). First, to explore the machine tool's thermal behavior, an integrated system is designed to simultaneously measure the temperature ascents at selected characteristic points and the thermal deformations at spindle nose under suitable real machining conditions. Second, the obtained data are classified by the KM method, further reduced by the RS scheme, and a linear thermal error model is established by the LR technique. To evaluate the performance of our proposed model, an adaptive neural fuzzy inference system (ANFIS) thermal error model is introduced for comparison. Finally, a verification experiment is carried out and results reveal that the proposed KRL model is effective in predicting thermal behavior in machine tools. Our proposed KRL model is transparent, easily understood by users, and can be easily programmed or modified for different machining conditions.
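
    A minimal sketch of the KM-plus-LR part of such a pipeline, using synthetic stand-in data and scikit-learn (the rough-set reduction step is replaced here by simply keeping one representative sensor per cluster), might look as follows.

        import numpy as np
        from sklearn.cluster import KMeans
        from sklearn.linear_model import LinearRegression

        rng = np.random.default_rng(1)
        # Synthetic stand-in data: temperature rises at 16 characteristic points over 200 samples,
        # and the measured thermal drift of the spindle nose (mm).
        T = rng.normal(size=(200, 16)).cumsum(axis=0) * 0.01
        drift = 0.8 * T[:, 0] - 0.3 * T[:, 7] + 0.02 * rng.normal(size=200)

        # Step 1 (KM): group sensors with similar temperature histories, keep one representative each.
        km = KMeans(n_clusters=4, n_init=10, random_state=0).fit(T.T)
        representatives = [np.where(km.labels_ == c)[0][0] for c in range(4)]

        # Step 2 (stand-in for the RS reduction): use only the representative sensors;
        # the paper further prunes attributes with rough-set theory.
        X = T[:, representatives]

        # Step 3 (LR): linear thermal-error model from the reduced temperature set.
        model = LinearRegression().fit(X, drift)
        print("selected sensors:", representatives, "R^2 =", model.score(X, drift))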

  20. Automated evolutionary restructuring of workflows to minimise errors via stochastic model checking

    DEFF Research Database (Denmark)

    Herbert, Luke Thomas; Hansen, Zaza Nadja Lee; Jacobsen, Peter

    2014-01-01

    This paper presents a framework for the automated restructuring of workflows that allows one to minimise the impact of errors on a production workflow. The framework allows for the modelling of workflows by means of a formalised subset of the Business Process Modelling and Notation (BPMN) language...

  1. Uncovering the Best Skill Multimap by Constraining the Error Probabilities of the Gain-Loss Model

    Science.gov (United States)

    Anselmi, Pasquale; Robusto, Egidio; Stefanutti, Luca

    2012-01-01

    The Gain-Loss model is a probabilistic skill multimap model for assessing learning processes. In practical applications, more than one skill multimap could be plausible, while none corresponds to the true one. The article investigates whether constraining the error probabilities is a way of uncovering the best skill assignment among a number of…

  2. A Hierarchical Bayes Error Correction Model to Explain Dynamic Effects of Price Changes

    NARCIS (Netherlands)

    D. Fok (Dennis); R. Paap (Richard); C. Horváth (Csilla); Ph.H.B.F. Franses (Philip Hans)

    2005-01-01

    The authors put forward a sales response model to explain the differences in immediate and dynamic effects of promotional prices and regular prices on sales. The model consists of a vector autoregression rewritten in error-correction format which allows to disentangle the immediate effec

  3. Modelling for registration of remotely sensed imagery when reference control points contain error

    Institute of Scientific and Technical Information of China (English)

    GE; Yong; Leung; Yee; MA; Jianghong; WANG; Jinfeng

    2006-01-01

    Reference control points (RCPs) used in establishing the regression model in the registration or geometric correction of remote sensing images are generally assumed to be "perfect". That is, the RCPs, as explanatory variables in the regression equation, are accurate and the coordinates of their locations have no errors. Thus the ordinary least squares (OLS) estimator has been applied extensively to the registration or geometric correction of remotely sensed data. However, this assumption is often invalid in practice because RCPs always contain errors. Moreover, the errors are actually one of the main sources that lower the accuracy of geometric correction of an uncorrected image. Under this situation, the OLS estimator is biased. It cannot handle explanatory variables with errors and cannot appropriately propagate errors from the RCPs to the corrected image. Therefore, it is essential to develop new feasible methods to overcome such a problem. This paper introduces a consistent adjusted least squares (CALS) estimator and proposes a relaxed consistent adjusted least squares (RCALS) estimator, with the latter being more general and flexible, for geometric correction or registration. These estimators have good capability in correcting errors contained in the RCPs, and in appropriately propagating errors of the RCPs to the corrected image, with and without prior information. The objective of the CALS and proposed RCALS estimators is to improve the accuracy of the measured values by weakening the measurement errors. The conceptual arguments are substantiated by real remotely sensed data. Compared to the OLS estimator, the CALS and RCALS estimators give a superior overall performance in estimating the regression coefficients and the variance of measurement errors.
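
    The sketch below is not the paper's CALS/RCALS derivation, but it illustrates the adjusted-least-squares idea they build on: when the explanatory coordinates carry a known measurement-error variance, the cross-product matrix can be debiased before solving, which removes the attenuation seen in OLS.

        import numpy as np

        rng = np.random.default_rng(0)
        n = 2000
        beta_true = np.array([2.0, -1.0])
        sigma_u = 0.5                                  # assumed known std of the RCP coordinate errors

        X_true = rng.normal(size=(n, 2))
        y = X_true @ beta_true + 0.1 * rng.normal(size=n)
        X_obs = X_true + sigma_u * rng.normal(size=(n, 2))   # explanatory variables observed with error

        beta_ols = np.linalg.solve(X_obs.T @ X_obs, X_obs.T @ y)        # biased (attenuated) by the errors
        Su = n * sigma_u**2 * np.eye(2)                                  # expected error contribution to X'X
        beta_als = np.linalg.solve(X_obs.T @ X_obs - Su, X_obs.T @ y)    # adjusted (debiased) least squares

        print("OLS:", beta_ols)
        print("ALS:", beta_als)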

  4. Quality specifications for glucose meters: assessment by simulation modeling of errors in insulin dose.

    Science.gov (United States)

    Boyd, J C; Bruns, D E

    2001-02-01

    Proposed quality specifications for glucose meters allow results to be in error by 5-10% or more of the "true" concentration. Because meters are used as aids in the adjustment of insulin doses, we aimed to characterize the quantitative effect of meter error on the ability to identify the insulin dose appropriate for the true glucose concentration. Using Monte Carlo simulation, we generated random "true" glucose values within defined intervals. These values were converted to "measured" glucose values using mathematical models of glucose meters having defined imprecision (CV) and bias. For each combination of bias and imprecision, 10,000-20,000 true and measured glucose concentrations were matched with the corresponding insulin doses specified by selected insulin-dosing regimens. Discrepancies in prescribed doses were counted and their frequencies plotted in relation to bias and imprecision. For meters with a total analytical error of 5%, dosage errors occurred in approximately 8-23% of insulin doses. At 10% total error, 16-45% of doses were in error. Large errors of insulin dose (two-step or greater) occurred >5% of the time when the CV and/or bias exceeded 10-15%. Total dosage error rates were affected only slightly by choices of sliding scale among insulin dosage rules or by the range of blood glucose. To provide the intended insulin dosage 95% of the time required that both the bias and the CV of the glucose meter be <1% or <2%, depending on mean glucose concentrations and the rules for insulin dosing. Glucose meters that meet current quality specifications allow a large fraction of administered insulin doses to differ from the intended doses. The effects of such dosage errors on blood glucose and on patient outcomes require study.
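
    A compact Monte Carlo sketch in the spirit of this simulation is given below; the sliding-scale dosing bands, glucose range and the multiplicative bias/CV error model are illustrative assumptions, not the dosing rules used in the study.

        import numpy as np

        rng = np.random.default_rng(0)

        # Hypothetical sliding-scale rule: the insulin dose step depends on the glucose band (mg/dL).
        bands = np.array([0, 80, 120, 160, 200, 250, 300, 1e9])
        def dose_step(glucose):
            return np.digitize(glucose, bands) - 1

        def dose_error_rate(bias_pct, cv_pct, n=20000, lo=60, hi=300):
            """Fraction of cases where the meter reading leads to a different dose step than the true value."""
            true = rng.uniform(lo, hi, size=n)
            measured = true * (1 + bias_pct / 100) * (1 + rng.normal(0, cv_pct / 100, size=n))
            return np.mean(dose_step(measured) != dose_step(true))

        for bias, cv in [(0, 5), (5, 5), (0, 10), (10, 10)]:
            print(f"bias {bias}%, CV {cv}%: dosage error rate {dose_error_rate(bias, cv):.1%}")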

  5. Multiple Retrieval Models and Regression Models for Prior Art Search

    CERN Document Server

    Lopez, Patrice

    2009-01-01

    This paper presents the system called PATATRAS (PATent and Article Tracking, Retrieval and AnalysiS) realized for the IP track of CLEF 2009. Our approach presents three main characteristics: 1. The usage of multiple retrieval models (KL, Okapi) and term index definitions (lemma, phrase, concept) for the three languages considered in the present track (English, French, German) producing ten different sets of ranked results. 2. The merging of the different results based on multiple regression models using an additional validation set created from the patent collection. 3. The exploitation of patent metadata and of the citation structures for creating restricted initial working sets of patents and for producing a final re-ranking regression model. As we exploit specific metadata of the patent documents and the citation relations only at the creation of initial working sets and during the final post ranking step, our architecture remains generic and easy to extend.

  6. Systematic Geometric Error Modeling for Workspace Volumetric Calibration of a 5-axis Turbine Blade Grinding Machine

    Institute of Scientific and Technical Information of China (English)

    Abdul Wahid Khan; Chen Wuyi

    2010-01-01

    A systematic geometric model has been presented for calibration of a newly designed 5-axis turbine blade grinding machine. This machine is designed to serve a specific purpose: to attain high accuracy and high efficiency grinding of turbine blades by eliminating the hand grinding process. Although its topology is RPPPR (P: prismatic; R: rotary), its design is quite distinct from competitive machine tools. As error quantification is the only way to investigate, maintain and improve its accuracy, calibration is recommended for its performance assessment and acceptance testing. A systematic geometric error modeling technique is implemented and 52 position-dependent and position-independent errors are identified while considering the machine as five rigid bodies and eliminating the set-up errors of the workpiece and cutting tool. Thirty-nine of them are found to be influential errors and are accommodated for finding the resultant effect between the cutting tool and the workpiece in the workspace volume. Rigid body kinematics techniques and homogeneous transformation matrices are used for error synthesis.
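
    The error-synthesis step can be illustrated with a small-angle homogeneous-transformation sketch; the per-axis error values, the five-axis chain and the nominal tool point below are all placeholders rather than the calibrated errors of the grinding machine.

        import numpy as np

        def htm(rx=0.0, ry=0.0, rz=0.0, dx=0.0, dy=0.0, dz=0.0):
            """Small-angle homogeneous transform collecting one axis's angular (rad) and linear (mm) errors."""
            T = np.eye(4)
            T[:3, :3] = np.array([[1, -rz, ry],
                                  [rz, 1, -rx],
                                  [-ry, rx, 1]])     # first-order rotation for small error angles
            T[:3, 3] = [dx, dy, dz]
            return T

        # Hypothetical position-dependent errors for an R-P-P-P-R chain; real values would
        # come from calibration measurements at each axis position.
        axis_errors = [htm(rz=20e-6, dx=0.004),
                       htm(ry=15e-6, dy=0.002),
                       htm(rx=10e-6, dz=0.003),
                       htm(dx=0.001),
                       htm(rz=30e-6, dy=0.005)]

        # Error synthesis: compose the per-axis error transforms along the kinematic chain.
        E = np.eye(4)
        for T in axis_errors:
            E = E @ T

        tool_point = np.array([0.0, 0.0, 100.0, 1.0])      # nominal tool tip in the last frame (mm)
        volumetric_error = (E @ tool_point - tool_point)[:3]
        print("resultant tool-workpiece position error (mm):", volumetric_error)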

  7. Machine Learning Based Multi-Physical-Model Blending for Enhancing Renewable Energy Forecast -- Improvement via Situation Dependent Error Correction

    Energy Technology Data Exchange (ETDEWEB)

    Lu, Siyuan; Hwang, Youngdeok; Khabibrakhmanov, Ildar; Marianno, Fernando J.; Shao, Xiaoyan; Zhang, Jie; Hodge, Bri-Mathias; Hamann, Hendrik F.

    2015-07-15

    With increasing penetration of solar and wind energy in the total energy supply mix, the pressing need for accurate energy forecasting has become well recognized. Here we report the development of a machine-learning based model blending approach for statistically combining multiple meteorological models to improve the accuracy of solar/wind power forecasts. Importantly, we demonstrate that in addition to the parameters to be predicted (such as solar irradiance and power), including additional atmospheric state parameters which collectively define weather situations as machine learning input provides further enhanced accuracy for the blended result. Functional analysis of variance shows that the error of an individual model has substantial dependence on the weather situation. The machine-learning approach effectively reduces such situation-dependent error and thus produces more accurate results compared to conventional multi-model ensemble approaches based on simplistic equally or unequally weighted model averaging. Validation results over an extended period of time show over 30% improvement in solar irradiance/power forecast accuracy compared to forecasts based on the best individual model.
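
    A minimal sketch of situation-aware blending, assuming three synthetic forecast models and two weather-state features, is shown below; a gradient-boosted regressor stands in for the paper's machine-learning blender and is compared against a simple model average.

        import numpy as np
        from sklearn.ensemble import GradientBoostingRegressor
        from sklearn.model_selection import train_test_split

        rng = np.random.default_rng(0)
        n = 5000
        # Synthetic irradiance forecasts from three hypothetical NWP models plus weather-situation
        # features (cloud cover, temperature); the errors are made situation-dependent on purpose.
        cloud = rng.uniform(0, 1, n)
        temp = rng.normal(15, 8, n)
        truth = 800 * (1 - 0.7 * cloud) + 2 * temp + rng.normal(0, 20, n)
        f1 = truth + 60 * cloud * rng.normal(1, 0.3, n)        # model 1: biased under cloudy skies
        f2 = truth + 3 * (temp - 15) + rng.normal(0, 30, n)    # model 2: temperature-dependent error
        f3 = truth + rng.normal(0, 50, n)                      # model 3: noisy but unbiased

        X = np.column_stack([f1, f2, f3, cloud, temp])         # forecasts + state parameters as features
        X_tr, X_te, y_tr, y_te = train_test_split(X, truth, test_size=0.3, random_state=0)

        blend = GradientBoostingRegressor().fit(X_tr, y_tr)
        simple_avg = X_te[:, :3].mean(axis=1)
        print("RMSE simple average:", np.sqrt(np.mean((simple_avg - y_te) ** 2)))
        print("RMSE ML blend      :", np.sqrt(np.mean((blend.predict(X_te) - y_te) ** 2)))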

  8. Accuracy of travel time distribution (TTD) models as affected by TTD complexity, observation errors, and model and tracer selection

    Science.gov (United States)

    Green, Christopher T.; Zhang, Yong; Jurgens, Bryant C.; Starn, J. Jeffrey; Landon, Matthew K.

    2014-01-01

    Analytical models of the travel time distribution (TTD) from a source area to a sample location are often used to estimate groundwater ages and solute concentration trends. The accuracies of these models are not well known for geologically complex aquifers. In this study, synthetic datasets were used to quantify the accuracy of four analytical TTD models as affected by TTD complexity, observation errors, model selection, and tracer selection. Synthetic TTDs and tracer data were generated from existing numerical models with complex hydrofacies distributions for one public-supply well and 14 monitoring wells in the Central Valley, California. Analytical TTD models were calibrated to synthetic tracer data, and prediction errors were determined for estimates of TTDs and conservative tracer (NO3−) concentrations. Analytical models included a new, scale-dependent dispersivity model (SDM) for two-dimensional transport from the water table to a well, and three other established analytical models. The relative influence of the error sources (TTD complexity, observation error, model selection, and tracer selection) depended on the type of prediction. Geological complexity gave rise to complex TTDs in monitoring wells that strongly affected errors of the estimated TTDs. However, prediction errors for NO3− and median age depended more on tracer concentration errors. The SDM tended to give the most accurate estimates of the vertical velocity and other predictions, although TTD model selection had minor effects overall. Adding tracers improved predictions if the new tracers had different input histories. Studies using TTD models should focus on the factors that most strongly affect the desired predictions.

  9. Impact of transport and modelling errors on the estimation of methane sources and sinks by inverse modelling

    Science.gov (United States)

    Locatelli, Robin; Bousquet, Philippe; Chevallier, Frédéric

    2013-04-01

    Since the nineties, inverse modelling by assimilating atmospheric measurements into a chemical transport model (CTM) has been used to derive sources and sinks of atmospheric trace gases. More recently, the high global warming potential of methane (CH4) and unexplained variations of its atmospheric mixing ratio caught the attention of several research groups. Indeed, the diversity and the variability of methane sources induce high uncertainty on the present and future evolution of the CH4 budget. With the increase of available measurement data to constrain inversions (satellite data, high frequency surface and tall tower observations, FTIR spectrometry,...), the main limiting factor is about to become the representation of atmospheric transport in CTMs. Indeed, errors in transport modelling directly convert into flux changes when assuming perfect transport in atmospheric inversions. Hence, we propose an inter-model comparison in order to quantify the impact of transport and modelling errors on the CH4 fluxes estimated within a variational inversion framework. Several inversion experiments are conducted using the same set-up (prior emissions, measurement and prior errors, OH field, initial conditions) of the variational system PYVAR, developed at LSCE (Laboratoire des Sciences du Climat et de l'Environnement, France). Nine different models (ACTM, IFS, IMPACT, IMPACT1x1, MOZART, PCTM, TM5, TM51x1 and TOMCAT) used in the TRANSCOM-CH4 experiment (Patra et al., 2011) provide synthetic measurement data at up to 280 surface sites to constrain the inversions performed using the PYVAR system. Only the CTM (and the meteorological drivers which drive it) used to create the pseudo-observations varies among inversions. Consequently, the comparisons of the nine inverted methane fluxes obtained for 2005 give a good order of magnitude of the impact of transport and modelling errors on the estimated fluxes with current and future networks. It is shown that transport and modelling errors

  10. Research on identifying the dynamic error model of strapdown gyro on 3-axis turntable

    Institute of Scientific and Technical Information of China (English)

    WANG Hai; REN Shun-qing; WANG Chang-hong

    2005-01-01

    The dynamic errors of gyros are important error sources of a strapdown inertial navigation system. In order to identify the dynamic error model coefficients accurately, the static error model coefficients, which lay a foundation for compensation while identifying the dynamic error model, are identified in the gravity acceleration field by using the angular position function of the three-axis turntable. Angular acceleration and angular velocity are excited on the input, output and spin axes of the gyros when the outer axis and the middle axis of the three-axis turntable are simultaneously in the uniform angular velocity state, while the inner axis of the turntable is in different static angular positions. Eight groups of data are sampled with the inner axis in eight different angular positions. These data are a function of the middle axis positions and the inner axis positions. For these data, the harmonic analysis method is applied twice, versus the middle axis positions and the inner axis positions respectively, so that the dynamic error model coefficients are finally identified through the least squares method. In the meantime, the optimal angular velocities of the outer axis and the middle axis are selected by computing the determinant of the information matrix.

  11. Exact sampling of the unobserved covariates in Bayesian spline models for measurement error problems.

    Science.gov (United States)

    Bhadra, Anindya; Carroll, Raymond J

    2016-07-01

    In truncated polynomial spline or B-spline models where the covariates are measured with error, a fully Bayesian approach to model fitting requires the covariates and model parameters to be sampled at every Markov chain Monte Carlo iteration. Sampling the unobserved covariates poses a major computational problem and usually Gibbs sampling is not possible. This forces the practitioner to use a Metropolis-Hastings step which might suffer from unacceptable performance due to poor mixing and might require careful tuning. In this article we show that, for the cases of truncated polynomial spline or B-spline models of degree equal to one, the complete conditional distribution of the covariates measured with error is available explicitly as a mixture of double-truncated normals, thereby enabling a Gibbs sampling scheme. We demonstrate via a simulation study that our technique performs favorably in terms of computational efficiency and statistical performance. Our results indicate up to 62% and 54% increases in mean integrated squared error efficiency when compared to existing alternatives while using truncated polynomial splines and B-splines respectively. Furthermore, there is evidence that the gain in efficiency increases with the measurement error variance, indicating the proposed method is a particularly valuable tool for challenging applications that present high measurement error. We conclude with a demonstration on a nutritional epidemiology data set from the NIH-AARP study and by pointing out some possible extensions of the current work.
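
    The sampling step this enables can be sketched as follows; the mixture weights, means, standard deviations and truncation limits are placeholders standing in for the quantities that the spline model's complete conditional would supply.

        import numpy as np
        from scipy.stats import truncnorm

        rng = np.random.default_rng(0)

        def rmix_trunc_normal(weights, means, sds, lower, upper, size):
            """Draw from a mixture of normals, each truncated to [lower, upper]."""
            means, sds = np.asarray(means), np.asarray(sds)
            comp = rng.choice(len(weights), p=weights, size=size)    # pick a mixture component per draw
            a = (lower - means[comp]) / sds[comp]                    # standardized truncation limits
            b = (upper - means[comp]) / sds[comp]
            return truncnorm.rvs(a, b, loc=means[comp], scale=sds[comp], random_state=rng)

        # Placeholder mixture parameters standing in for the model's complete conditional.
        draws = rmix_trunc_normal(weights=[0.4, 0.6], means=[1.0, 2.5], sds=[0.3, 0.5],
                                  lower=0.0, upper=3.0, size=5)
        print(draws)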

  12. Correction for Measurement Error from Genotyping-by-Sequencing in Genomic Variance and Genomic Prediction Models

    DEFF Research Database (Denmark)

    Ashraf, Bilal; Janss, Luc; Jensen, Just

    Genotyping-by-sequencing (GBSeq) is becoming a cost-effective genotyping platform for species without available SNP arrays. GBSeq sequences short reads from restriction sites covering a limited part of the genome (e.g., 5-10%) with low sequencing depth per individual (e.g., 5-10X per..... In the current work we show how the correction for measurement error in GBSeq can also be applied in whole genome genomic variance and genomic prediction models. Bayesian whole-genome random regression models are proposed to allow implementation of large-scale SNP-based models with a per-SNP correction...... for measurement error. We show correct retrieval of genomic explained variance, and improved genomic prediction when accounting for the measurement error in GBSeq data...

  13. Medication errors in the intensive care unit: literature review using the SEIPS model.

    Science.gov (United States)

    Frith, Karen H

    2013-01-01

    Medication errors in intensive care units put patients at risk for injury or death every day. Safety requires an organized and systematic approach to improving the tasks, technology, environment, and organizational culture associated with medication systems. The Systems Engineering Initiative for Patient Safety model can help leaders and health care providers understand the complicated and high-risk work associated with critical care. Using this model, the author combines a human factors approach with the well-known structure-process-outcome model of quality improvement to examine research literature. The literature review reveals that human factors, including stress, high workloads, knowledge deficits, and performance deficits, are associated with medication errors. Factors contributing to medication errors are frequent interruptions, communication problems, and poor fit of health information technology to the workflow of providers. Multifaceted medication safety interventions are needed so that human factors and system problems can be addressed simultaneously.

  14. An EOQ model for imperfect quality items with partial backordering under screening errors

    Directory of Open Access Journals (Sweden)

    Ehsan Sharifi

    2015-12-01

    Full Text Available In practice, when a lot is received, an inspection process is necessary to identify the defective items. In addition, the inspection process itself is not error-free and may involve misclassification errors. In this paper, an economic order quantity model for imperfect quality items with partial backordering under screening errors is studied. The objective is to maximize the expected annual profit by optimizing the order size and the maximum number of backordered units. The aim of this paper is also to develop a general and practical model that is more realistic in competitive commercial situations. To validate the developed model, a case study and a numerical example are illustrated, and a sensitivity analysis is also carried out.

  15. VARYING COEFFICIENT MODELS FOR DATA WITH AUTO-CORRELATED ERROR PROCESS.

    Science.gov (United States)

    Chen, Zhao; Li, Runze; Li, Yan

    2015-04-01

    Varying coefficient models have been popular in the literature. In this paper, we propose a profile least squares estimation procedure for their regression coefficients when the random error is an auto-regressive (AR) process. We further study the asymptotic properties of the proposed procedure, and establish the asymptotic normality of the resulting estimate. We show that the resulting estimate for the regression coefficients has the same asymptotic bias and variance as the local linear estimate for varying coefficient models with independent and identically distributed observations. We apply the SCAD variable selection procedure (Fan and Li, 2001) to reduce the model complexity of the AR error process. Numerical comparison and finite sample performance of the resulting estimate are examined by Monte Carlo studies. Our simulation results demonstrate that the proposed procedure is much more efficient than the one ignoring the error correlation. The proposed methodology is illustrated by a real data example.

  16. The mean error estimation of TOPSIS method using a fuzzy reference models

    Directory of Open Access Journals (Sweden)

    Wojciech Sałabun

    2013-04-01

    Full Text Available The Technique for Order of Preference by Similarity to Ideal Solution (TOPSIS) is a commonly used multi-criteria decision-making method. A number of authors have proposed improvements, known as extensions, of the TOPSIS method, but these extensions have not been examined with respect to accuracy. Accuracy estimation is very difficult because reference values for the obtained results are not known; therefore, the results of each extension are compared to one another. In this paper, the author proposes a new method to estimate the mean error of TOPSIS with the use of a fuzzy reference model (FRM). This method provides reference values. In experiments involving 1,000 models, 28 million cases are simulated to estimate the mean error. The results of four commonly used normalization procedures were compared. Additionally, the author demonstrates the relationship between the value of the mean error and the nonlinearity of models and the number of alternatives.
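
    For reference, a bare-bones TOPSIS ranking (the method whose mean error the paper estimates) can be written as below; the decision matrix, weights and criterion directions are illustrative only.

        import numpy as np

        def topsis(decision_matrix, weights, benefit):
            """Rank alternatives with classical TOPSIS using vector normalization."""
            X = np.asarray(decision_matrix, dtype=float)
            w = np.asarray(weights, dtype=float)
            V = X / np.linalg.norm(X, axis=0) * w            # normalize each criterion, then weight
            ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
            anti = np.where(benefit, V.min(axis=0), V.max(axis=0))
            d_plus = np.linalg.norm(V - ideal, axis=1)       # distance to the ideal solution
            d_minus = np.linalg.norm(V - anti, axis=1)       # distance to the anti-ideal solution
            return d_minus / (d_plus + d_minus)              # closeness coefficient, higher is better

        # Three alternatives, two benefit criteria and one cost criterion (illustrative values).
        scores = topsis([[250, 16, 12], [200, 16, 8], [300, 32, 16]],
                        weights=[0.4, 0.4, 0.2],
                        benefit=[True, True, False])
        print(scores, "ranking (best first):", scores.argsort()[::-1])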

  17. Biases in atmospheric CO2 estimates from correlated meteorology modeling errors

    Science.gov (United States)

    Miller, S. M.; Hayek, M. N.; Andrews, A. E.; Fung, I.; Liu, J.

    2015-03-01

    Estimates of CO2 fluxes that are based on atmospheric measurements rely upon a meteorology model to simulate atmospheric transport. These models provide a quantitative link between the surface fluxes and CO2 measurements taken downwind. Errors in the meteorology can therefore cause errors in the estimated CO2 fluxes. Meteorology errors that correlate or covary across time and/or space are particularly worrisome; they can cause biases in modeled atmospheric CO2 that are easily confused with the CO2 signal from surface fluxes, and they are difficult to characterize. In this paper, we leverage an ensemble of global meteorology model outputs combined with a data assimilation system to estimate these biases in modeled atmospheric CO2. In one case study, we estimate the magnitude of month-long CO2 biases relative to CO2 boundary layer enhancements and quantify how that answer changes if we either include or remove error correlations or covariances. In a second case study, we investigate which meteorological conditions are associated with these CO2 biases. In the first case study, we estimate uncertainties of 0.5-7 ppm in monthly-averaged CO2 concentrations, depending upon location (95% confidence interval). These uncertainties correspond to 13-150% of the mean afternoon CO2 boundary layer enhancement at individual observation sites. When we remove error covariances, however, this range drops to 2-22%. Top-down studies that ignore these covariances could therefore underestimate the uncertainties and/or propagate transport errors into the flux estimate. In the second case study, we find that these month-long errors in atmospheric transport are anti-correlated with temperature and planetary boundary layer (PBL) height over terrestrial regions. In marine environments, by contrast, these errors are more strongly associated with weak zonal winds. Many errors, however, are not correlated with a single meteorological parameter, suggesting that a single meteorological proxy is

  18. Evaluation of multiple-sphere head models for MEG source localization

    Energy Technology Data Exchange (ETDEWEB)

    Lalancette, M; Cheyne, D [Department of Diagnostic Imaging, The Hospital for Sick Children, 555 University Ave., Toronto, Ontario M5G 1X8 (Canada); Quraan, M, E-mail: marc.lalancette@sickkids.ca, E-mail: douglas.cheyne@utoronto.ca [Krembil Neuroscience Centre, Toronto Western Research Institute, University Health Network, Toronto, Ontario M5T 2S8 (Canada)

    2011-09-07

    Magnetoencephalography (MEG) source analysis has largely relied on spherical conductor models of the head to simplify forward calculations of the brain's magnetic field. Multiple- (or overlapping, local) sphere models, where an optimal sphere is selected for each sensor, are considered an improvement over single-sphere models and are computationally simpler than realistic models. However, there is limited information available regarding the different methods used to generate these models and their relative accuracy. We describe a variety of single- and multiple-sphere fitting approaches, including a novel method that attempts to minimize the field error. An accurate boundary element method simulation was used to evaluate the relative field measurement error (12% on average) and dipole fit localization bias (3.5 mm) of each model over the entire brain. All spherical models can contribute in the order of 1 cm to the localization bias in regions of the head that depart significantly from a sphere (inferior frontal and temporal). These spherical approximation errors can give rise to larger localization differences when all modeling effects are taken into account and with more complex source configurations or other inverse techniques, as shown with a beamformer example. Results differed noticeably depending on the source location, making it difficult to recommend a fitting method that performs best in general. Given these limitations, it may be advisable to expand the use of realistic head models.

  19. Evaluation of multiple-sphere head models for MEG source localization.

    Science.gov (United States)

    Lalancette, M; Quraan, M; Cheyne, D

    2011-09-07

    Magnetoencephalography (MEG) source analysis has largely relied on spherical conductor models of the head to simplify forward calculations of the brain's magnetic field. Multiple- (or overlapping, local) sphere models, where an optimal sphere is selected for each sensor, are considered an improvement over single-sphere models and are computationally simpler than realistic models. However, there is limited information available regarding the different methods used to generate these models and their relative accuracy. We describe a variety of single- and multiple-sphere fitting approaches, including a novel method that attempts to minimize the field error. An accurate boundary element method simulation was used to evaluate the relative field measurement error (12% on average) and dipole fit localization bias (3.5 mm) of each model over the entire brain. All spherical models can contribute in the order of 1 cm to the localization bias in regions of the head that depart significantly from a sphere (inferior frontal and temporal). These spherical approximation errors can give rise to larger localization differences when all modeling effects are taken into account and with more complex source configurations or other inverse techniques, as shown with a beamformer example. Results differed noticeably depending on the source location, making it difficult to recommend a fitting method that performs best in general. Given these limitations, it may be advisable to expand the use of realistic head models.

  20. In vivo models of multiple system atrophy.

    Science.gov (United States)

    Fernagut, Pierre-Olivier; Ghorayeb, Imad; Diguet, Elsa; Tison, François

    2005-08-01

    Multiple system atrophy (MSA) is a sporadic adult-onset neurodegenerative disorder of unknown etiology clinically characterized by a combination of parkinsonian, pyramidal, and cerebellar signs. Levodopa-unresponsive parkinsonism is present in 80% of MSA cases, and this dominant clinical presentation (MSA-P) is associated with a combined degeneration of the substantia nigra pars compacta and the striatum in anatomically related areas. The limited knowledge of the pathophysiology of MSA and the lack of therapeutic strategies prompted the development of lesion models reproducing striatonigral degeneration, the substrate of levodopa-unresponsive parkinsonism in MSA-P. Such lesion models were first developed in rats with two different stereotaxic strategies using either two neurotoxins ("double toxin-double lesion") or a single neurotoxin ("single toxin-double lesion"). Double-lesioned rat models showed severe motor impairment compared to those with a single nigral or striatal lesion and helped to mimic different stages of the disease. Systemic models were also developed in mice and primates using the nigral toxin 1-methyl-4-phenyl-1,2,3,6-tetrahydropyridine (MPTP) and the striatal toxin 3-nitropropionic acid (3-NP). In mice, although MPTP reduced the subsequent sensitivity to 3-NP in a sequential lesion, simultaneous nigral and striatal insults were shown to exacerbate striatal damage. MPTP-treated monkeys displayed a significant worsening of parkinsonism and a loss of levodopa-responsiveness after the appearance of hindlimb dystonia and striatal lesion formation induced by subsequent 3-NP intoxication. The different species and intoxication paradigms used will be useful to investigate functional changes in substantia nigra and striatum and to define neuroprotective, neurorestorative, or symptomatic therapeutic strategies.

  1. Heteroscedasticity and/or Autocorrelation Checks in Longitudinal Nonlinear Models with Elliptical and AR(1) Errors

    Institute of Scientific and Technical Information of China (English)

    Chun-Zheng CAO; Jin-Guan LIN

    2012-01-01

    The aim of this paper is to study the tests for variance heterogeneity and/or autocorrelation in nonlinear regression models with elliptical and AR(1) errors. The elliptical class includes several symmetric multivariate distributions such as normal, Student-t, power exponential, among others. Several diagnostic tests using score statistics and their adjustment are constructed. The asymptotic properties, including asymptotic chi-square and approximate powers under local alternatives of the score statistics, are studied. The properties of the test statistics are investigated through Monte Carlo simulations. A data set previously analyzed under normal errors is reanalyzed under elliptical models to illustrate our test methods.
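
    As a simple concrete example of a score-type check for variance heterogeneity, the sketch below computes a Breusch-Pagan (Koenker) LM statistic for a linear model with normal errors; it is not the paper's score test for elliptical or AR(1) errors, and the simulated data are illustrative.

```python
import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(0)

# Simulated linear model whose error variance grows with x (heteroscedastic)
n = 200
x = rng.uniform(0, 10, n)
y = 1.0 + 2.0 * x + rng.normal(0, 0.5 + 0.3 * x, n)

# Ordinary least squares fit and residuals
X = np.column_stack([np.ones(n), x])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta

# Koenker's studentized Breusch-Pagan score (LM) statistic:
# regress the squared residuals on the regressors and use n * R^2
e2 = resid ** 2
gamma, *_ = np.linalg.lstsq(X, e2, rcond=None)
r2 = 1.0 - np.sum((e2 - X @ gamma) ** 2) / np.sum((e2 - e2.mean()) ** 2)
lm = n * r2
p_value = chi2.sf(lm, df=X.shape[1] - 1)
print(f"LM statistic = {lm:.2f}, p-value = {p_value:.4f}")
```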

  2. Estimating numerical errors due to operator splitting in global atmospheric chemistry models: Transport and chemistry

    Science.gov (United States)

    Santillana, Mauricio; Zhang, Lin; Yantosca, Robert

    2016-01-01

    We present upper bounds for the numerical errors introduced when using operator splitting methods to integrate transport and non-linear chemistry processes in global chemical transport models (CTM). We show that (a) operator splitting strategies that evaluate the stiff non-linear chemistry operator at the end of the time step are more accurate, and (b) the results of numerical simulations that use different operator splitting strategies differ by at most 10%, in a prototype one-dimensional non-linear chemistry-transport model. We find similar upper bounds in operator splitting numerical errors in global CTM simulations.
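
    The toy example below illustrates the effect of operator ordering in first-order (Lie) splitting on a zero-dimensional transport-plus-chemistry equation; the model, coefficients, and step sizes are invented for illustration and are far simpler than a global CTM.

```python
import numpy as np

# Toy 0-D "transport + chemistry" model (illustrative, not the paper's CTM):
#   dc/dt = -a * (c - c_in)   (linear relaxation toward an inflow value, "transport")
#           - k * c**2        (stiff nonlinear second-order loss, "chemistry")
a, c_in, k = 1.0, 1.0, 5.0

def transport_step(c, dt):
    """Exact solution of dc/dt = -a*(c - c_in) over dt."""
    return c_in + (c - c_in) * np.exp(-a * dt)

def chemistry_step(c, dt):
    """Exact solution of dc/dt = -k*c**2 over dt."""
    return c / (1.0 + k * c * dt)

def integrate(c0, dt, t_end, chemistry_last=True):
    """First-order (Lie) operator splitting with a chosen operator ordering."""
    c = c0
    for _ in range(round(t_end / dt)):
        if chemistry_last:
            c = chemistry_step(transport_step(c, dt), dt)
        else:
            c = transport_step(chemistry_step(c, dt), dt)
    return c

c0, t_end = 2.0, 1.0
reference = integrate(c0, dt=1e-5, t_end=t_end)      # near-exact reference
for dt in (0.2, 0.1, 0.05):
    last = integrate(c0, dt, t_end, chemistry_last=True)
    first = integrate(c0, dt, t_end, chemistry_last=False)
    print(f"dt={dt:.2f}  chem-last error={abs(last - reference):.2e}  "
          f"chem-first error={abs(first - reference):.2e}")
```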

  3. A Two-Warehouse Inventory Model with Imperfect Quality and Inspection Errors

    Directory of Open Access Journals (Sweden)

    Tie Wang

    2012-09-01

    Full Text Available In this study, we establish a new inventory model that considers two warehouses, imperfect quality, and inspection errors simultaneously. The mathematical model, which maximizes the annual total profit, and the solution procedure are developed. As a byproduct, we correct some technical errors made in developing the optimal ordering policies in two earlier papers. Moreover, we find a mild condition, satisfied by most common distributions, that ensures the concavity of ETPU(y). Proposition 1 is used to determine the optimal solution of ETPU(y).

  4. Error analysis for momentum conservation in Atomic-Continuum Coupled Model

    Science.gov (United States)

    Yang, Yantao; Cui, Junzhi; Han, Tiansi

    2016-08-01

    Atomic-Continuum Coupled Model (ACCM) is a multiscale computation model proposed by Xiang et al. (in IOP conference series materials science and engineering, 2010), which is used to study and simulate dynamics and thermal-mechanical coupling behavior of crystal materials, especially metallic crystals. In this paper, we construct a set of interpolation basis functions for the common BCC and FCC lattices, respectively, implementing the computation of ACCM. Based on this interpolation approximation, we give a rigorous mathematical analysis of the error of momentum conservation equation introduced by ACCM, and derive a sequence of inequalities that bound the error. Numerical experiment is carried out to verify our result.

  5. Accounting for spatial correlation errors in the assimilation of GRACE into hydrological models through localization

    Science.gov (United States)

    Khaki, M.; Schumacher, M.; Forootan, E.; Kuhn, M.; Awange, J. L.; van Dijk, A. I. J. M.

    2017-10-01

    Assimilation of terrestrial water storage (TWS) information from the Gravity Recovery And Climate Experiment (GRACE) satellite mission can provide significant improvements in hydrological modelling. However, the rather coarse spatial resolution of GRACE TWS and its spatially correlated errors pose considerable challenges for achieving realistic assimilation results. Consequently, successful data assimilation depends on rigorous modelling of the full error covariance matrix of the GRACE TWS estimates, as well as realistic error behavior for hydrological model simulations. In this study, we assess the application of local analysis (LA) to maximize the contribution of GRACE TWS in hydrological data assimilation. For this, we assimilate GRACE TWS into the World-Wide Water Resources Assessment system (W3RA) over the Australian continent while applying LA and accounting for existing spatial correlations using the full error covariance matrix. GRACE TWS data is applied with different spatial resolutions including 1° to 5° grids, as well as basin averages. The ensemble-based sequential filtering technique of the Square Root Analysis (SQRA) is applied to assimilate TWS data into W3RA. For each spatial scale, the performance of the data assimilation is assessed through comparison with independent in-situ ground water and soil moisture observations. Overall, the results demonstrate that LA is able to stabilize the inversion process (within the implementation of the SQRA filter) leading to less errors for all spatial scales considered with an average RMSE improvement of 54% (e.g., 52.23 mm down to 26.80 mm) for all the cases with respect to groundwater in-situ measurements. Validating the assimilated results with groundwater observations indicates that LA leads to 13% better (in terms of RMSE) assimilation results compared to the cases with Gaussian errors assumptions. This highlights the great potential of LA and the use of the full error covariance matrix of GRACE TWS

  6. Uncertainty analysis of the Operational Simplified Surface Energy Balance (SSEBop) model at multiple flux tower sites

    Science.gov (United States)

    Chen, Mingshi; Senay, Gabriel B.; Singh, Ramesh K.; Verdin, James P.

    2016-01-01

    Evapotranspiration (ET) is an important component of the water cycle – ET from the land surface returns approximately 60% of the global precipitation back to the atmosphere. ET also plays an important role in energy transport among the biosphere, atmosphere, and hydrosphere. Current regional to global and daily to annual ET estimation relies mainly on surface energy balance (SEB) ET models or statistical and empirical methods driven by remote sensing data and various climatological databases. These models have uncertainties due to inevitable input errors, poorly defined parameters, and inadequate model structures. The eddy covariance measurements on water, energy, and carbon fluxes at the AmeriFlux tower sites provide an opportunity to assess the ET modeling uncertainties. In this study, we focused on uncertainty analysis of the Operational Simplified Surface Energy Balance (SSEBop) model for ET estimation at multiple AmeriFlux tower sites with diverse land cover characteristics and climatic conditions. The 8-day composite 1-km MODerate resolution Imaging Spectroradiometer (MODIS) land surface temperature (LST) was used as input land surface temperature for the SSEBop algorithms. The other input data were taken from the AmeriFlux database. Results of statistical analysis indicated that the SSEBop model performed well in estimating ET with an R2 of 0.86 between estimated ET and eddy covariance measurements at 42 AmeriFlux tower sites during 2001–2007. It was encouraging to see that the best performance was observed for croplands, where R2 was 0.92 with a root mean square error of 13 mm/month. The uncertainties or random errors from input variables and parameters of the SSEBop model led to monthly ET estimates with relative errors less than 20% across multiple flux tower sites distributed across different biomes. This uncertainty of the SSEBop model lies within the error range of other SEB models, suggesting systematic error or bias of the SSEBop model is within

  7. Uncertainty analysis of the Operational Simplified Surface Energy Balance (SSEBop) model at multiple flux tower sites

    Science.gov (United States)

    Chen, Mingshi; Senay, Gabriel B.; Singh, Ramesh K.; Verdin, James P.

    2016-05-01

    Evapotranspiration (ET) is an important component of the water cycle - ET from the land surface returns approximately 60% of the global precipitation back to the atmosphere. ET also plays an important role in energy transport among the biosphere, atmosphere, and hydrosphere. Current regional to global and daily to annual ET estimation relies mainly on surface energy balance (SEB) ET models or statistical and empirical methods driven by remote sensing data and various climatological databases. These models have uncertainties due to inevitable input errors, poorly defined parameters, and inadequate model structures. The eddy covariance measurements on water, energy, and carbon fluxes at the AmeriFlux tower sites provide an opportunity to assess the ET modeling uncertainties. In this study, we focused on uncertainty analysis of the Operational Simplified Surface Energy Balance (SSEBop) model for ET estimation at multiple AmeriFlux tower sites with diverse land cover characteristics and climatic conditions. The 8-day composite 1-km MODerate resolution Imaging Spectroradiometer (MODIS) land surface temperature (LST) was used as input land surface temperature for the SSEBop algorithms. The other input data were taken from the AmeriFlux database. Results of statistical analysis indicated that the SSEBop model performed well in estimating ET with an R2 of 0.86 between estimated ET and eddy covariance measurements at 42 AmeriFlux tower sites during 2001-2007. It was encouraging to see that the best performance was observed for croplands, where R2 was 0.92 with a root mean square error of 13 mm/month. The uncertainties or random errors from input variables and parameters of the SSEBop model led to monthly ET estimates with relative errors less than 20% across multiple flux tower sites distributed across different biomes. This uncertainty of the SSEBop model lies within the error range of other SEB models, suggesting systematic error or bias of the SSEBop model is within the

  8. Design considerations for case series models with exposure onset measurement error.

    Science.gov (United States)

    Mohammed, Sandra M; Dalrymple, Lorien S; Sentürk, Damla; Nguyen, Danh V

    2013-02-28

    The case series model allows for estimation of the relative incidence of events, such as cardiovascular events, within a pre-specified time window after an exposure, such as an infection. The method requires only cases (individuals with events) and controls for all fixed/time-invariant confounders. The measurement error case series model extends the original case series model to handle imperfect data, where the timing of an infection (exposure) is not known precisely. In this work, we propose a method for power/sample size determination for the measurement error case series model. Extensive simulation studies are used to assess the accuracy of the proposed sample size formulas. We also examine the magnitude of the relative loss of power due to exposure onset measurement error, compared with the ideal situation where the time of exposure is measured precisely. To facilitate the design of case series studies, we provide publicly available web-based tools for determining power/sample size for both the measurement error case series model as well as the standard case series model.

  9. Diagnosing Model Errors in Canopy-Atmosphere Exchange Using Empirical Orthogonal Functions

    Science.gov (United States)

    Drewry, D.; Albertson, J.

    2004-12-01

    Multi-layer canopy process models (MLCPMs) have been established as tools for estimating local-scale canopy-atmosphere scalar (carbon dioxide, heat and water vapor) exchange as well as testing hypotheses regarding the mechanistic functioning of complex vegetated land surfaces and the interactions between vegetation and the local microenvironment. These model frameworks are composed of a coupled set of component submodels relating radiation attenuation and absorption, photosynthesis, turbulent mixing, stomatal conductance, surface energy balance and soil and subsurface processes. Submodel formulations have been validated for a variety of ecosystems under varying environmental conditions. However, each submodel component requires parameter values that are known to vary seasonally as canopy structure changes, and over shorter periods characterized by shifts in the environmental regime. The temporal dependence of submodel parameters limits application of MLCPMs to short-term integrations for which a specific parameterization can be trusted. We present a novel application of empirical orthogonal function (EOF) analysis to the identification of the primary source of MLCPM error. Carbon dioxide (CO2) concentration profiles, a commonly collected and underutilized data source, are the observed quantity in this analysis. The technique relies on an ensemble of model runs transformed to EOF space to determine the characteristic patterns of model error associated with specific submodel parameters. These patterns provide a basis onto which error residual (modeled - measured) CO2 concentration profiles can be projected to identify the primary source of model error. Synthetic tests and application to field data collected at Duke Forest (North Carolina, USA) are presented.
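
    To make the EOF projection idea concrete, the sketch below builds EOFs from a synthetic ensemble of residual CO2 profiles via an SVD and projects a new residual onto the leading patterns; the ensemble, the two "error patterns", and all dimensions are entirely hypothetical and do not come from the study.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical ensemble of modeled-minus-measured CO2 profile residuals:
# rows = ensemble members (runs with perturbed submodel parameters), columns = heights
n_runs, n_levels = 60, 12
heights = np.linspace(1.0, 30.0, n_levels)          # metres, illustrative

# Two synthetic "error patterns": one near-surface, one canopy-top shaped
pattern_surface = np.exp(-heights / 5.0)
pattern_canopy = np.exp(-0.5 * ((heights - 20.0) / 4.0) ** 2)
ensemble = (rng.normal(size=(n_runs, 1)) * pattern_surface
            + 0.5 * rng.normal(size=(n_runs, 1)) * pattern_canopy
            + 0.05 * rng.normal(size=(n_runs, n_levels)))

# EOFs are the right singular vectors of the centered ensemble
anom = ensemble - ensemble.mean(axis=0)
_, s, vt = np.linalg.svd(anom, full_matrices=False)
explained = s ** 2 / np.sum(s ** 2)
print("variance explained by leading EOFs:", np.round(explained[:3], 2))

# Project a new residual profile onto the leading EOFs to see which error
# pattern (and hence which submodel parameter) it most resembles
residual = 0.8 * pattern_surface + 0.05 * rng.normal(size=n_levels)
loadings = vt[:2] @ (residual - ensemble.mean(axis=0))
print("loadings on EOF1, EOF2:", np.round(loadings, 2))
```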

  10. Preventable Medical Errors Driven Modeling of Medical Best Practice Guidance Systems.

    Science.gov (United States)

    Ou, Andrew Y-Z; Jiang, Yu; Wu, Po-Liang; Sha, Lui; Berlin, Richard B

    2017-01-01

    In a medical environment such as an intensive care unit, there are many possible causes of errors, and one important cause is the effect of human intellectual tasks. When designing an interactive healthcare system such as a medical Cyber-Physical-Human System (CPHSystem), it is important to consider whether the system design can mitigate the errors caused by these tasks. In this paper, we first introduce five categories of generic human intellectual tasks, where tasks in each category may lead to potential medical errors. Then, we present an integrated modeling framework to model a medical CPHSystem and use UPPAAL as the foundation to integrate and verify the whole set of medical CPHSystem design models. With a verified and comprehensive model capturing the effects of human intellectual tasks, we can design a more accurate and acceptable system. We use a cardiac arrest resuscitation guidance and navigation system (CAR-GNSystem) for such medical CPHSystem modeling. Experimental results show that the CPHSystem models help determine system design flaws and can mitigate the potential medical errors caused by human intellectual tasks.

  11. Factors influencing superimposition error of 3D cephalometric landmarks by plane orientation method using 4 reference points: 4 point superimposition error regression model.

    Directory of Open Access Journals (Sweden)

    Jae Joon Hwang

    Full Text Available Superimposition has been used as a method to evaluate the changes brought about by orthodontic or orthopedic treatment in the dental field. With the introduction of cone beam CT (CBCT), evaluating three-dimensional changes after treatment by superimposition became possible. Four-point plane orientation is one of the simplest ways to achieve superimposition of three-dimensional images. To find the factors influencing the superimposition error of cephalometric landmarks in the four-point plane orientation method, and to evaluate the reproducibility of cephalometric landmarks for analyzing superimposition error, 20 patients were analyzed who had a normal skeletal and occlusal relationship and underwent CBCT for the diagnosis of temporomandibular disorder. The nasion, sella turcica, basion, and the midpoint between the left and right most posterior points of the lesser wing of the sphenoid bone were used to define a three-dimensional (3D) anatomical reference co-ordinate system. Another 15 reference cephalometric points were also determined three times in the same image. The reorientation error of each landmark could be explained substantially (23%) by a linear regression model consisting of three factors describing the position of each landmark relative to the reference axes and the locating error. The four-point plane orientation system may produce an amount of reorientation error that varies according to the perpendicular distance between the landmark and the x-axis; the reorientation error also increases as the locating error and the shift of the reference axes viewed from each landmark increase. Therefore, in order to reduce the reorientation error, the accuracy of all landmarks, including the reference points, is important. Construction of the regression model using reference points of greater precision is required for the clinical application of this model.

  12. Frequency Weighted Model Order Reduction Technique and Error Bounds for Discrete Time Systems

    Directory of Open Access Journals (Sweden)

    Muhammad Imran

    2014-01-01

    for the whole frequency range. However, certain applications (like controller reduction) require frequency weighted approximation, which introduces the concept of using frequency weights in model reduction techniques. Limitations of some existing frequency weighted model reduction techniques include the lack of stability of the reduced order models (for the two-sided weighting case) and the lack of frequency response error bounds. A new frequency weighted technique for balanced model reduction of discrete time systems is proposed. The proposed technique guarantees stable reduced order models even for the case when two-sided weightings are present. An efficient technique for computing the frequency weighted Gramians is also proposed. Results are compared with other existing frequency weighted model reduction techniques for discrete time systems. Moreover, the proposed technique yields frequency response error bounds.

  13. A Novel Error Model of Optical Systems and an On-Orbit Calibration Method for Star Sensors

    Directory of Open Access Journals (Sweden)

    Shuang Wang

    2015-12-01

    Full Text Available In order to improve the on-orbit measurement accuracy of star sensors, the effects of image-plane rotary error, image-plane tilt error and distortions of optical systems resulting from the on-orbit thermal environment are studied in this paper. Since these issues affect the precision of star image point positions, a novel measurement error model based on the traditional error model is explored. Due to the orthonormal characteristics of image-plane rotary-tilt errors and the strong nonlinearity among these error parameters, it is difficult to calibrate all the parameters simultaneously. To solve this difficulty, for the new error model, a modified two-step calibration method based on the Extended Kalman Filter (EKF) and Least Square Methods (LSM) is presented. The former is used to calibrate the principal point drift, focal length error and distortions of the optical system, while the latter estimates the image-plane rotary-tilt errors. With this calibration method, the precision of the star image point position influenced by the above errors is greatly improved, from 15.42% to 1.389%. Finally, the simulation results demonstrate that the presented measurement error model for star sensors has higher precision. Moreover, the proposed two-step method can effectively calibrate the model error parameters, and the calibration precision of on-orbit star sensors is also improved noticeably.

  14. Research on Time-series Modeling and Filtering Methods for MEMS Gyroscope Random Drift Error

    Science.gov (United States)

    Wang, Xiao Yi; Meng, Xiu Yun

    2017-03-01

    The precision of MEMS gyroscopes is reduced by random drift error. This paper applies time series analysis to model the random drift error of a MEMS gyroscope. Based on the established model, a Kalman filter is employed to compensate for the error. To overcome the disadvantages of the conventional Kalman filter, the Sage-Husa adaptive filtering algorithm is utilized to improve the accuracy of the filtering results, and the orthogonality of the innovation sequence during filtering is used to deal with outliers. The results show that, compared with the conventional Kalman filter, the modified filter not only enhances filtering accuracy but also resists outliers, which ensures the stability of the filter and thus improves the performance of the gyroscope.
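
    A minimal sketch of the general idea, fitting an AR(1) model to a simulated drift signal and filtering with a standard (non-adaptive) Kalman filter, is shown below; the noise levels are invented, the least-squares AR fit is only approximate in the presence of measurement noise, and the Sage-Husa adaptation and innovation-based outlier handling from the paper are omitted.

```python
import numpy as np

rng = np.random.default_rng(3)

# Simulated gyro drift: a slowly varying AR(1) process observed with white noise
n = 2000
phi_true, sigma_w, sigma_v = 0.98, 0.05, 0.05    # illustrative values
drift = np.zeros(n)
for k in range(1, n):
    drift[k] = phi_true * drift[k - 1] + sigma_w * rng.normal()
z = drift + sigma_v * rng.normal(size=n)         # raw gyro output

# 1) Time-series modelling: naive least-squares AR(1) fit to the raw output
#    (slightly attenuated by measurement noise; good enough for a sketch)
phi_hat = np.dot(z[1:], z[:-1]) / np.dot(z[:-1], z[:-1])

# 2) Scalar Kalman filter built on the fitted AR(1) drift model
Q, R = sigma_w ** 2, sigma_v ** 2                # noise levels assumed known here
x, P = 0.0, 1.0
filtered = np.empty(n)
for k in range(n):
    x, P = phi_hat * x, phi_hat ** 2 * P + Q     # predict
    K = P / (P + R)                              # Kalman gain
    x, P = x + K * (z[k] - x), (1 - K) * P       # update
    filtered[k] = x

print(f"fitted AR(1) coefficient: {phi_hat:.3f} (true value {phi_true})")
print(f"RMS drift error, raw output : {np.sqrt(np.mean((z - drift) ** 2)):.4f}")
print(f"RMS drift error, KF estimate: {np.sqrt(np.mean((filtered - drift) ** 2)):.4f}")
```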

  15. Phase errors elimination in compact digital holoscope (CDH) based on a reasonable mathematical model

    Science.gov (United States)

    Wen, Yongfu; Qu, Weijuan; Cheng, Cheeyuen; Wang, Zhaomin; Asundi, Anand

    2015-03-01

    In the compact digital holoscope (CDH) measurement process, theoretically, we need to ensure that the distances from the reference wave and the object wave to the hologram plane match exactly. However, this is not easy to realize in practice due to human factors, which can lead to a phase error in the reconstruction result. In this paper, a strict theoretical analysis of the wavefront interference is performed to derive the mathematical model of the phase error, and a phase error elimination method is then proposed based on this refined mathematical model, which has a more explicit physical meaning. Experiments are carried out to verify the performance of the presented method, and the results indicate that it is effective and allows the operator to work more flexibly.

  16. On the Modeling of Error Functions as High Dimensional Landscapes for Weight Initialization in Learning Networks

    CERN Document Server

    Julius,; T., Sumana; Adityakrishna, C S

    2016-01-01

    Next generation deep neural networks for classification hosted on embedded platforms will rely on fast, efficient, and accurate learning algorithms. Initialization of weights in learning networks has a great impact on the classification accuracy. In this paper we focus on deriving good initial weights by modeling the error function of a deep neural network as a high-dimensional landscape. We observe that due to the inherent complexity in its algebraic structure, such an error function may conform to general results of the statistics of large systems. To this end we apply some results from Random Matrix Theory to analyse these functions. We model the error function in terms of a Hamiltonian in N-dimensions and derive some theoretical results about its general behavior. These results are further used to make better initial guesses of weights for the learning algorithm.

  17. Bayesian networks modeling for thermal error of numerical control machine tools

    Institute of Scientific and Technical Information of China (English)

    Xin-hua YAO; Jian-zhong FU; Zi-chen CHEN

    2008-01-01

    The interaction between the heat source location, its intensity, the thermal expansion coefficient, the machine system configuration and the running environment creates complex thermal behavior of a machine tool, and also makes thermal error prediction difficult. To address this issue, a novel prediction method for machine tool thermal error based on Bayesian networks (BNs) was presented. The method described the causal relationships of factors inducing thermal deformation by graph theory and estimated the thermal error by Bayesian statistical techniques. Due to the effective combination of domain knowledge and sampled data, the BN method could adapt to changes in the running state of the machine and obtain satisfactory prediction accuracy. Experiments on spindle thermal deformation were conducted to evaluate the modeling performance. Experimental results indicate that the BN method performs far better than the least squares (LS) analysis in terms of modeling estimation accuracy.

  18. A Systems Modeling Approach for Risk Management of Command File Errors

    Science.gov (United States)

    Meshkat, Leila

    2012-01-01

    The main cause of commanding errors is often (but not always) related to procedures: a lack of maturity in the processes, incompleteness of requirements, or a lack of compliance with these procedures. Other causes of commanding errors include a lack of understanding of system states, inadequate communication, and making hasty changes to standard procedures in response to an unexpected event. In general, it is important to look at the big picture prior to taking corrective actions. In the case of errors traced back to procedures, considering the reliability of the process as a metric during its design may help to reduce risk. This metric is obtained by using data from the nuclear industry regarding human reliability. A structured method for the collection of anomaly data will help the operator think systematically about the anomaly and facilitate risk management. Formal models can be used for risk-based design and risk management. A generic set of models can be customized for a broad range of missions.

  19. Modeling and characterization of multiple coupled lines

    Science.gov (United States)

    Tripathi, Alok

    1999-10-01

    A configuration-oriented circuit model for multiple coupled lines in an inhomogeneous medium is developed and presented in this thesis. This circuit model consists of a network of uncoupled transmission lines and is readily modeled with simulation tools like LIBRA© and SPICE ©. It provides an equivalent circuit representation which is simple and topologically meaningful as compared to the model based on modal decomposition. The configuration-oriented model is derived by decomposing the immittance matrices associated with an n coupled line 2n-port system. Time- and frequency- domain simulations of typical coupled line multiports are included to exemplify the utility of the model. The model is useful for the simulation and design of general single and multilayer coupled line components, such as filters and couplers, and for the investigation of signal integrity issues including crosstalk in interconnects associated with high speed digital and mixed signal electronic modules and packages. It is shown that multiconductor lossless structures in an inhomogeneous medium can be characterized by multiport time-domain reflection (MR) measurements. A synthesis technique of an equivalent lossless (non-dispersive) uniform multiconductor n coupled lines (UMCL) 2n-port system from the measured discrete time-domain reflection response is presented. This procedure is based on the decomposition of the characteristic immittance matrices of the UMCL in terms of partial mode immittance matrices. The decomposition scheme leads to the discrete transition matrix function of a UMCL 2n-port system. This in turn establishes a relationship between the normal-mode parameters of the UMCL and the measured impulse reflection and transmission response. Equivalence between the synthesis procedure presented in this thesis and the solution of a special form of an algebraic Riccati matrix equation whose solution can lead to the normal-mode parameters and a real termination network is illustrated. In

  20. Error modeling, sensitivity analysis and assembly process of a class of 3-DOF parallel kinematic machines with parallelogram struts

    Institute of Scientific and Technical Information of China (English)

    2002-01-01

    This paper presents an error modeling methodology that enables the tolerance design, assembly and kinematic calibration of a class of 3-DOF parallel kinematic machines with parallelogram struts to be integrated into a unified framework. The error mapping function is formulated to identify the source errors affecting the uncompensable pose error. The sensitivity analysis in the sense of statistics is also carried out to investigate the influences of source errors on the pose accuracy. An assembly process that can effectively minimize the uncompensable pose error is proposed as one of the results of this investigation.

  1. Statistical model and error analysis of a proposed audio fingerprinting algorithm

    Science.gov (United States)

    McCarthy, E. P.; Balado, F.; Silvestre, G. C. M.; Hurley, N. J.

    2006-01-01

    In this paper we present a statistical analysis of a particular audio fingerprinting method proposed by Haitsma et al. [1]. Due to the excellent robustness and synchronisation properties of this particular fingerprinting method, we would like to examine its performance for varying values of the parameters involved in the computation and ascertain its capabilities. For this reason, we pursue a statistical model of the fingerprint (also known as a hash, message digest or label). Initially we follow the work of a previous attempt made by Doets and Lagendijk [2-4] to obtain such a statistical model. By reformulating the representation of the fingerprint as a quadratic form, we present a model in which the parameters derived by Doets and Lagendijk may be obtained more easily. Furthermore, our model allows further insight into certain aspects of the behaviour of the fingerprinting algorithm not previously examined. Using our model, we then analyse the probability of error (Pe) of the hash. We identify two particular error scenarios and obtain an expression for the probability of error in each case. We present three methods of varying accuracy to approximate Pe following Gaussian noise addition to the signal of interest. We then analyse the probability of error following desynchronisation of the signal at the input of the hashing system and provide an approximation to Pe for different parameters of the algorithm under varying degrees of desynchronisation.

  2. Role of Forcing Uncertainty and Background Model Error Characterization in Snow Data Assimilation

    Science.gov (United States)

    Kumar, Sujay V.; Dong, Jiarul; Peters-Lidard, Christa D.; Mocko, David; Gomez, Breogan

    2017-01-01

    Accurate specification of the model error covariances in data assimilation systems is a challenging issue. Ensemble land data assimilation methods rely on stochastic perturbations of input forcing and model prognostic fields for developing representations of input model error covariances. This article examines the limitations of using a single forcing dataset for specifying forcing uncertainty inputs for assimilating snow depth retrievals. Using an idealized data assimilation experiment, the article demonstrates that the use of hybrid forcing input strategies (either through the use of an ensemble of forcing products or through the added use of the forcing climatology) provide a better characterization of the background model error, which leads to improved data assimilation results, especially during the snow accumulation and melt-time periods. The use of hybrid forcing ensembles is then employed for assimilating snow depth retrievals from the AMSR2 (Advanced Microwave Scanning Radiometer 2) instrument over two domains in the continental USA with different snow evolution characteristics. Over a region near the Great Lakes, where the snow evolution tends to be ephemeral, the use of hybrid forcing ensembles provides significant improvements relative to the use of a single forcing dataset. Over the Colorado headwaters characterized by large snow accumulation, the impact of using the forcing ensemble is less prominent and is largely limited to the snow transition time periods. The results of the article demonstrate that improving the background model error through the use of a forcing ensemble enables the assimilation system to better incorporate the observational information.

  3. Role of forcing uncertainty and background model error characterization in snow data assimilation

    Directory of Open Access Journals (Sweden)

    S. V. Kumar

    2017-06-01

    Full Text Available Accurate specification of the model error covariances in data assimilation systems is a challenging issue. Ensemble land data assimilation methods rely on stochastic perturbations of input forcing and model prognostic fields for developing representations of input model error covariances. This article examines the limitations of using a single forcing dataset for specifying forcing uncertainty inputs for assimilating snow depth retrievals. Using an idealized data assimilation experiment, the article demonstrates that the use of hybrid forcing input strategies (either through the use of an ensemble of forcing products or through the added use of the forcing climatology) provide a better characterization of the background model error, which leads to improved data assimilation results, especially during the snow accumulation and melt-time periods. The use of hybrid forcing ensembles is then employed for assimilating snow depth retrievals from the AMSR2 instrument over two domains in the continental USA with different snow evolution characteristics. Over a region near the Great Lakes, where the snow evolution tends to be ephemeral, the use of hybrid forcing ensembles provides significant improvements relative to the use of a single forcing dataset. Over the Colorado headwaters characterized by large snow accumulation, the impact of using the forcing ensemble is less prominent and is largely limited to the snow transition time periods. The results of the article demonstrate that improving the background model error through the use of a forcing ensemble enables the assimilation system to better incorporate the observational information.

  4. On Inertial Body Tracking in the Presence of Model Calibration Errors.

    Science.gov (United States)

    Miezal, Markus; Taetz, Bertram; Bleser, Gabriele

    2016-07-22

    In inertial body tracking, the human body is commonly represented as a biomechanical model consisting of rigid segments with known lengths and connecting joints. The model state is then estimated via sensor fusion methods based on data from attached inertial measurement units (IMUs). This requires the relative poses of the IMUs w.r.t. the segments-the IMU-to-segment calibrations, subsequently called I2S calibrations-to be known. Since calibration methods based on static poses, movements and manual measurements are still the most widely used, potentially large human-induced calibration errors have to be expected. This work compares three newly developed/adapted extended Kalman filter (EKF) and optimization-based sensor fusion methods with an existing EKF-based method w.r.t. their segment orientation estimation accuracy in the presence of model calibration errors with and without using magnetometer information. While the existing EKF-based method uses a segment-centered kinematic chain biomechanical model and a constant angular acceleration motion model, the newly developed/adapted methods are all based on a free segments model, where each segment is represented with six degrees of freedom in the global frame. Moreover, these methods differ in the assumed motion model (constant angular acceleration, constant angular velocity, inertial data as control input), the state representation (segment-centered, IMU-centered) and the estimation method (EKF, sliding window optimization). In addition to the free segments representation, the optimization-based method also represents each IMU with six degrees of freedom in the global frame. In the evaluation on simulated and real data from a three segment model (an arm), the optimization-based method showed the smallest mean errors, standard deviations and maximum errors throughout all tests. It also showed the lowest dependency on magnetometer information and motion agility. Moreover, it was insensitive w.r.t. I2S position and

  5. A Comparison between Different Error Modeling of MEMS Applied to GPS/INS Integrated Systems

    Directory of Open Access Journals (Sweden)

    Fabio Dovis

    2013-07-01

    Full Text Available Advances in the development of micro-electromechanical systems (MEMS) have made possible the fabrication of cheap, small accelerometers and gyroscopes, which are being used in many applications where global positioning system (GPS) and inertial navigation system (INS) integration is carried out, e.g., identifying track defects, terrestrial and pedestrian navigation, unmanned aerial vehicles (UAVs), stabilization of many platforms, etc. Although these MEMS sensors are low-cost, they present different errors, which degrade the accuracy of the navigation systems in a short period of time. Therefore, a suitable modeling of these errors is necessary in order to minimize them and, consequently, improve the system performance. In this work, the techniques currently most used to analyze the stochastic errors that affect these sensors are shown and compared: we examine in detail the autocorrelation, the Allan variance (AV) and the power spectral density (PSD) techniques. Subsequently, an analysis and modeling of the inertial sensors, which combines autoregressive (AR) filters and wavelet de-noising, is also achieved. Since a low-cost INS (MEMS grade) presents error sources with short-term (high-frequency) and long-term (low-frequency) components, we introduce a method that compensates for these error terms by doing a complete analysis of the Allan variance, wavelet de-noising and the selection of the level of decomposition for a suitable combination of these techniques. Eventually, in order to assess the stochastic models obtained with these techniques, the Extended Kalman Filter (EKF) of a loosely-coupled GPS/INS integration strategy is augmented with different states. Results show a comparison between the proposed method and the traditional sensor error models under GPS signal blockages using real data collected on urban roadways.
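
    As an illustration of one of the techniques mentioned, the sketch below computes an overlapping Allan variance for a simulated gyro output containing white noise plus a rate random walk; the sampling rate, noise magnitudes, and cluster sizes are arbitrary choices, and the AR/wavelet modeling from the paper is not included.

```python
import numpy as np

def allan_variance(y, fs, m_list):
    """Overlapping Allan variance of rate samples y (sampling rate fs, in Hz)
    for the averaging factors (cluster sizes) in m_list."""
    cumsum = np.concatenate(([0.0], np.cumsum(y)))
    taus, avars = [], []
    for m in m_list:
        if 2 * m >= len(y):
            break
        cluster = (cumsum[m:] - cumsum[:-m]) / m      # overlapping cluster averages
        diffs = cluster[m:] - cluster[:-m]
        taus.append(m / fs)
        avars.append(0.5 * np.mean(diffs ** 2))
    return np.array(taus), np.array(avars)

rng = np.random.default_rng(4)
fs, n = 100.0, 200_000                               # illustrative sampling setup
white = 0.01 * rng.normal(size=n)                    # angle random walk (white rate noise)
rate_rw = np.cumsum(1e-5 * rng.normal(size=n))       # rate random walk
y = white + rate_rw                                  # simulated gyro rate output

m_list = np.unique(np.logspace(0, 4.5, 30).astype(int))
taus, avars = allan_variance(y, fs, m_list)
for tau, av in zip(taus[::6], avars[::6]):
    print(f"tau = {tau:9.2f} s   Allan deviation = {np.sqrt(av):.6f}")
```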

  6. Dynamically constrained uncertainty for the Kalman filter covariance in the presence of model error

    Science.gov (United States)

    Grudzien, Colin; Carrassi, Alberto; Bocquet, Marc

    2017-04-01

    The forecasting community has long understood the impact of dynamic instability on the uncertainty of predictions in physical systems, and this has led to innovative filtering designs that take advantage of the knowledge of process models. The advantages of this combined approach to filtering, including both a dynamic and a statistical understanding, have included dimensional reductions and robust feature selection in the observational design of filters. In the context of a perfect model, we have shown that the uncertainty in prediction is damped along the directions of stability and that the support of the uncertainty conforms to the dominant system instabilities. Our current work likewise demonstrates this constraint on the uncertainty for systems with model error. Specifically, we (i) produce analytical upper bounds on the uncertainty in the stable, backwards orthogonal Lyapunov vectors in terms of the local Lyapunov exponents and the scale of the additive noise; (ii) demonstrate that, for systems with model noise, the least upper bound on the uncertainty depends on the inverse relationship of the leading Lyapunov exponent and the observational certainty; and (iii) numerically compute the invariant scaling factor of the model error which determines the asymptotic uncertainty. This dynamic scaling of model error is identifiable independently of the noise and is computable directly in terms of the system's dynamic invariants; in this way the physical process itself may mollify the growth of modelling errors. For systems with strongly dissipative behaviour, we demonstrate that the growth of the uncertainty can be confined to the unstable-neutral modes independently of the filtering process, and we connect the observational design to take advantage of a dynamic characteristic of the filtering error.

  7. Performance Analysis for Multiple Moving Observers Passive Localization in the Presence of Systematic Errors

    Institute of Scientific and Technical Information of China (English)

    徐征; 曲长文; 王昌海

    2013-01-01

    The presence of systematic errors may have a great effect on the performance of passive localization with multiple observers. In this paper, the Cramer-Rao lower bound (CRLB) is derived for measurements obtained by multiple moving observers and corrupted by systematic errors. First, the statistical information of the measurement error is derived according to the specific systematic error model. Because the systematic error makes the measurements at different time instants dependent, the error covariance matrix is non-diagonal. A recursive calculation form is then derived by rewriting the non-diagonal error covariance matrix in terms of block matrices. Finally, simulations are performed for bearings-only passive localization by multiple moving observers, and the related localization performance analysis is carried out. Simulation results indicate that systematic errors can have a great effect on the CRLB of the location error and should be given careful consideration in localization.
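
    The sketch below illustrates, for a hypothetical bearings-only geometry, how a shared systematic bias correlates the measurement errors and inflates the position CRLB relative to the independent-error case; it treats the bias as a zero-mean random offset and does not reproduce the paper's specific systematic error model or its recursive block-matrix computation.

```python
import numpy as np

# Bearings-only localization of a stationary emitter by one moving observer.
# Geometry and noise levels are invented for illustration.
target = np.array([5000.0, 8000.0])                      # metres
obs = np.column_stack([np.linspace(0, 4000, 20),          # observer track
                       np.zeros(20)])

dx, dy = target[0] - obs[:, 0], target[1] - obs[:, 1]
r2 = dx ** 2 + dy ** 2
# Jacobian of bearing = atan2(dx, dy) with respect to target position (x, y)
J = np.column_stack([dy / r2, -dx / r2])

sigma_r = np.deg2rad(1.0)      # independent random bearing error (1 deg)
sigma_s = np.deg2rad(0.5)      # shared systematic bearing bias (0.5 deg)

R_random = sigma_r ** 2 * np.eye(len(obs))
R_biased = R_random + sigma_s ** 2 * np.ones((len(obs), len(obs)))

for name, R in [("random errors only   ", R_random),
                ("with systematic bias ", R_biased)]:
    fim = J.T @ np.linalg.solve(R, J)                    # Fisher information matrix
    crlb = np.linalg.inv(fim)
    print(f"{name}: position RMS lower bound = {np.sqrt(np.trace(crlb)):.1f} m")
```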

  8. Application of structural equation models for evaluating epidemiological data and for calculation of the benchmark dose

    DEFF Research Database (Denmark)

    Budtz-Jørgensen, Esben; Keiding, Niels; Grandjean, P.

    2003-01-01

    observational epidemiology; measurement error; multiple endpoints; structural equation models; safety standard

  9. A method for the quantification of model form error associated with physical systems.

    Energy Technology Data Exchange (ETDEWEB)

    Wallen, Samuel P.; Brake, Matthew Robert

    2014-03-01

    In the process of model validation, models are often declared valid when the differences between model predictions and experimental data sets are satisfactorily small. However, little consideration is given to the effectiveness of a model using parameters that deviate slightly from those that were fitted to data, such as a higher load level. Furthermore, few means exist to compare and choose between two or more models that reproduce data equally well. These issues can be addressed by analyzing model form error, which is the error associated with the differences between the physical phenomena captured by models and that of the real system. This report presents a new quantitative method for model form error analysis and applies it to data taken from experiments on tape joint bending vibrations. Two models for the tape joint system are compared, and suggestions for future improvements to the method are given. As the available data set is too small to draw any statistical conclusions, the focus of this paper is the development of a methodology that can be applied to general problems.

  10. A three-component model of the control error in manual tracking of continuous random signals.

    Science.gov (United States)

    Gerisch, Hans; Staude, Gerhard; Wolf, Werner; Bauch, Gerhard

    2013-10-01

    The performance of human operators acting within closed-loop control systems is investigated in a classic tracking task. The dependence of the control error (tracking error) on the parameters display gain, k(display), and input signal frequency bandwidth, f(g), which alter task difficulty and presumably the control delay, is studied with the aim of functionally specifying it via a model. The human operator as an element of a cascaded human-machine control system (e.g., car driving or piloting an airplane) codetermines the overall system performance. Control performance of humans in continuous tracking has been described in earlier studies. Using a handheld joystick, 10 participants tracked continuous random input signals. The parameters f(g) and k(display) were altered between experiments. Increased task difficulty promoted lengthened control delay and, consequently, increased control error. Tracking performance degraded profoundly with target deflection components above 1 Hz, confirming earlier reports. The control error is composed of a delay-induced component, a demand-based component, and a novel component: a human tracking limit. Accordingly, a new model that allows concepts of the observed control error to be split into these three components is suggested. To achieve optimal performance in control systems that include a human operator (e.g., vehicles, remote controlled rovers, crane control), (a) tasks should be kept as simple as possible to achieve the shortest control delays, and (b) task components requiring higher-frequency (> 1 Hz) tracking actions should be avoided or automated by technical systems.

  11. Notes on power of normality tests of error terms in regression models

    Energy Technology Data Exchange (ETDEWEB)

    Střelec, Luboš [Department of Statistics and Operation Analysis, Faculty of Business and Economics, Mendel University in Brno, Zemědělská 1, Brno, 61300 (Czech Republic)

    2015-03-10

    Normality is one of the basic assumptions in applying statistical procedures. For example, in linear regression most of the inferential procedures are based on the assumption of normality, i.e. the disturbance vector is assumed to be normally distributed. Failure to detect non-normality of the error terms may lead to incorrect results from the usual statistical inference techniques such as the t-test or F-test. Thus, error terms should be normally distributed in order to allow us to make exact inferences. As a consequence, normally distributed stochastic errors are necessary in order to make inferences that are not misleading, which explains the necessity and importance of robust tests of normality. Therefore, the aim of this contribution is to discuss normality testing of error terms in regression models. In this contribution, we introduce the general RT class of robust tests for normality, and present and discuss the trade-off between power and robustness of selected classical and robust normality tests of error terms in regression models.
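
    To show what such a residual check looks like in practice, the sketch below fits a linear model and applies two classical normality tests to the residuals using SciPy; the simulated data are illustrative, and the RT class of robust tests discussed in the contribution is not implemented here.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)

# Linear model with two choices of error distribution
n = 300
x = rng.uniform(0, 10, n)
X = np.column_stack([np.ones(n), x])
beta = np.array([1.0, 0.5])

for label, errors in [("normal errors", rng.normal(0, 1, n)),
                      ("heavy-tailed errors", rng.standard_t(df=3, size=n))]:
    y = X @ beta + errors
    b_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ b_hat
    sw_stat, sw_p = stats.shapiro(resid)          # Shapiro-Wilk test
    jb_stat, jb_p = stats.jarque_bera(resid)      # Jarque-Bera test
    print(f"{label}: Shapiro-Wilk p = {sw_p:.3f}, Jarque-Bera p = {jb_p:.3f}")
```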

  12. The localization and correction of errors in models: a constraint-based approach

    OpenAIRE

    Piechowiak, S.; Rodriguez, J

    2005-01-01

    Model-based diagnosis and constraint-based reasoning are well-known generic paradigms for which the most difficult task lies in the construction of the models used. We consider the problem of localizing and correcting the errors in a model and present a method to debug a model. To assist the debugging task, we propose to use the model-based diagnosis solver. This method has been used in a real application: the development of a model of a railway signalling system.

  13. Relative Efficiency of Maximum Likelihood and Other Estimators in a Nonlinear Regression Model with Small Measurement Errors

    OpenAIRE

    Kukush, Alexander; Schneeweiss, Hans

    2004-01-01

    We compare the asymptotic covariance matrix of the ML estimator in a nonlinear measurement error model to the asymptotic covariance matrices of the CS and SQS estimators studied in Kukush et al (2002). For small measurement error variances they are equal up to the order of the measurement error variance and thus nearly equally efficient.

  14. What Kind of Initial Errors Cause the Severest Prediction Uncertainty of El Nino in Zebiak-Cane Model

    Institute of Scientific and Technical Information of China (English)

    XU Hui; DUAN Wansuo

    2008-01-01

    With the Zebiak-Cane (ZC) model, the initial error that has the largest effect on ENSO prediction is explored by conditional nonlinear optimal perturbation (CNOP). The results demonstrate that CNOP-type errors cause the largest prediction error of ENSO in the ZC model. By analyzing the behavior of CNOP-type errors, we find that for the normal states and the relatively weak El Nino events in the ZC model, the predictions tend to yield false alarms due to the uncertainties caused by CNOP. For the relatively strong El Nino events, the ZC model largely underestimates their intensities. Also, our results suggest that the error growth of El Nino in the ZC model depends on the phases of both the annual cycle and ENSO. The condition during northern spring and summer is most favorable for the error growth. The ENSO prediction bestriding these two seasons may be the most difficult. A linear singular vector (LSV) approach is also used to estimate the error growth of ENSO, but it underestimates the prediction uncertainties of ENSO in the ZC model. This result indicates that different initial errors cause different amplitudes of prediction errors even though they have the same magnitude. CNOP yields the severest prediction uncertainty. That is to say, the prediction skill of ENSO is closely related to the type of initial error. This finding illustrates a theoretical basis of data assimilation. It is expected that a data assimilation method can filter the initial errors related to CNOP and improve the ENSO forecast skill.

  15. Using Computational Cognitive Modeling to Diagnose Possible Sources of Aviation Error

    Science.gov (United States)

    Byrne, M. D.; Kirlik, Alex

    2003-01-01

    We present a computational model of a closed-loop pilot-aircraft-visual scene-taxiway system created to shed light on possible sources of taxi error. Creating the cognitive aspects of the model in ACT-R required us to conduct studies with subject matter experts to identify the experiential adaptations pilots bring to taxiing. Five decision strategies were found, ranging from cognitively intensive but precise to fast and frugal but robust. We provide evidence for the model by comparing its behavior to a NASA Ames Research Center simulation of Chicago O'Hare surface operations. Decision horizons were highly variable, and the model selected the most accurate strategy given the time available. The simulation data showed a signature of globally robust heuristics used to cope with short decision horizons: errors occurred most frequently at atypical taxiway geometries or clearance routes. These data provide empirical support for the model.

  16. A method for sensitivity analysis to assess the effects of measurement error in multiple exposure variables using external validation data

    Directory of Open Access Journals (Sweden)

    George O. Agogo

    2016-10-01

    Background: Measurement error in self-reported dietary intakes is known to bias the association between dietary intake and a health outcome of interest, such as risk of a disease. The association can be distorted further by mismeasured confounders, leading to invalid results and conclusions. It is, however, difficult to adjust for the bias in the association when there are no internal validation data. Methods: We proposed a method to adjust for the bias in the diet-disease association (hereafter, association) due to measurement error in dietary intake and a mismeasured confounder when there are no internal validation data. The method combines prior information on the validity of the self-report instrument with the observed data to adjust for the bias in the association. We compared the proposed method with the method that ignores the confounder effect, and with the method that ignores measurement errors completely. We assessed the sensitivity of the estimates to various magnitudes of measurement error, error correlations and uncertainty in the literature-reported validation data. We applied the methods to fruit and vegetable (FV) intake, cigarette smoking (confounder) and all-cause mortality data from the European Prospective Investigation into Cancer and Nutrition study. Results: Using the proposed method resulted in about a fourfold increase in the strength of association between FV intake and mortality. For weakly correlated errors, measurement error in the confounder minimally affected the hazard ratio estimate for FV intake; the effect was more pronounced for strong error correlations. Conclusions: The proposed method permits sensitivity analysis on measurement error structures and accounts for uncertainties in the reported validity coefficients. The method is useful in assessing the direction and quantifying the magnitude of bias in the association due to measurement errors in the confounders.
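
    A crude, univariate sketch of the kind of sensitivity analysis described (hypothetical numbers; it does not implement the authors' full correction, which also handles the mismeasured confounder): de-attenuate a naive log hazard ratio using an externally reported validity coefficient, and vary that coefficient to see how sensitive the corrected estimate is.

```python
import numpy as np

log_hr_naive = np.log(0.96)          # hypothetical naive hazard ratio for FV intake
for rho in (0.3, 0.4, 0.5):          # assumed validity coefficients from external studies
    attenuation = rho**2             # classical-error attenuation factor (lambda = rho^2)
    log_hr_adj = log_hr_naive / attenuation
    print(f"validity {rho:.1f}: corrected HR = {np.exp(log_hr_adj):.2f}")
```

    For a validity coefficient around 0.4-0.5, the correction strengthens the log association by roughly a factor of four to six, consistent in spirit with the magnitude reported in the record.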

  17. Report: Low Frequency Predictive Skill Despite Structural Instability and Model Error

    Science.gov (United States)

    2013-09-30

    Andrew J. Majda, New York University, Courant Institute of Mathematical Sciences, 251 Mercer Street, New York, NY. [Only the report documentation page is available for this record; no abstract is included.]

  18. Filter design for failure detection and isolation in the presence of modeling errors and disturbances

    DEFF Research Database (Denmark)

    Niemann, Hans Henrik; Stoustrup, Jakob

    1996-01-01

    The design problem of filters for robust failure detection and isolation (FDI) is addressed in this paper. The failure detection problem will be considered with respect to both modeling errors and disturbances. Both an approach based on failure detection observers as well as an approach based...
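
    A minimal numerical sketch of observer-based residual generation (illustrative system and gain matrices, not the design method of this record): the residual r = y - C x_hat stays at zero while the plant is fault-free and reacts once an additive actuator fault is injected halfway through the simulation.

```python
import numpy as np

A = np.array([[0.9, 0.1], [0.0, 0.8]])     # discrete-time plant dynamics
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
L = np.array([[0.5], [0.3]])               # observer gain (A - L C is stable here)

x = np.zeros((2, 1))
x_hat = np.zeros((2, 1))
for k in range(100):
    u = np.array([[1.0]])
    fault = np.array([[0.5]]) if k >= 50 else np.array([[0.0]])
    x = A @ x + B @ (u + fault)            # plant with an additive actuator fault
    y = C @ x
    r = y - C @ x_hat                      # residual used for failure detection
    x_hat = A @ x_hat + B @ u + L @ r      # Luenberger observer update
    if k in (49, 60, 99):
        print(f"k={k:3d}  residual={r[0, 0]:+.3f}")
```

    In a robust design, the gain would additionally be chosen to keep the residual insensitive to modeling errors and disturbances while remaining sensitive to faults, which is the trade-off this record addresses.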

  19. Moderate Deviations for M-estimators in Linear Models with φ-mixing Errors

    Institute of Scientific and Technical Information of China (English)

    Jun FAN

    2012-01-01

    In this paper, the moderate deviations for the M-estimators of the regression parameter in a linear model are obtained when the errors form a strictly stationary φ-mixing sequence. The results are applied to study many different types of M-estimators, such as Huber's estimator, the Lp-regression estimator, the least squares estimator and the least absolute deviation estimator.
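
    A quick illustration of two of the M-estimators mentioned above on synthetic contaminated data (assuming statsmodels is available; the moderate-deviation theory itself is not reproduced): Huber's estimator via robust linear regression compared with ordinary least squares.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 300
X = sm.add_constant(rng.normal(size=n))
beta = np.array([1.0, 2.0])
err = rng.normal(size=n)
err[:15] += 10.0                           # a few gross outliers in the errors
y = X @ beta + err

print("OLS:  ", sm.OLS(y, X).fit().params)                               # pulled by outliers
print("Huber:", sm.RLM(y, X, M=sm.robust.norms.HuberT()).fit().params)   # closer to beta
```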

  20. Rank-based Tests of the Cointegrating Rank in Semiparametric Error Correction Models

    NARCIS (Netherlands)

    Hallin, M.; van den Akker, R.; Werker, B.J.M.

    2012-01-01

    Abstract: This paper introduces rank-based tests for the cointegrating rank in an Error Correction Model with i.i.d. elliptical innovations. The tests are asymptotically distribution-free, and their validity does not depend on the actual distribution of the innovations. This result holds despite the
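
    For context only, a sketch of a standard likelihood-based cointegrating-rank test (the Johansen procedure, assuming statsmodels is available); the distribution-free rank-based tests introduced in this record are a different construction and are not implemented here.

```python
import numpy as np
from statsmodels.tsa.vector_ar.vecm import coint_johansen

rng = np.random.default_rng(3)
T = 500
common = np.cumsum(rng.normal(size=T))            # shared stochastic trend
y1 = common + rng.normal(size=T)
y2 = 0.5 * common + rng.normal(size=T)            # cointegrated with y1
data = np.column_stack([y1, y2])

res = coint_johansen(data, det_order=0, k_ar_diff=1)
print("trace statistics:   ", res.lr1)            # test statistics for rank 0, 1
print("critical values 95%:", res.cvt[:, 1])      # compare statistic with critical value
```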