Structural Model Error and Decision Relevancy
Goldsby, M.; Lusk, G.
2017-12-01
The extent to which climate models can underwrite specific climate policies has long been a contentious issue. Skeptics frequently deny that climate models are trustworthy in an attempt to undermine climate action, whereas policy makers often desire information that exceeds the capabilities of extant models. While not skeptics, a group of mathematicians and philosophers [Frigg et al. (2014)] recently argued that even tiny differences between the structure of a complex dynamical model and its target system can lead to dramatic predictive errors, possibly resulting in disastrous consequences when policy decisions are based upon those predictions. They call this result the Hawkmoth effect (HME), and seemingly use it to rebuke right-wing proposals to forgo mitigation in favor of adaptation. However, a vigorous debate has emerged between Frigg et al. on one side and another philosopher-mathematician pair [Winsberg and Goodwin (2016)] on the other. On one hand, Frigg et al. argue that their result shifts the burden to climate scientists to demonstrate that their models do not fall prey to the HME. On the other hand, Winsberg and Goodwin suggest that arguments like those asserted by Frigg et al. can be, if taken seriously, "dangerous": they fail to consider the variety of purposes for which models can be used, and thus too hastily undermine large swaths of climate science. They put the burden back on Frigg et al. to show their result has any effect on climate science. This paper seeks to attenuate this debate by establishing an irenic middle position; we find that there is more agreement between sides than it first seems. We distinguish a 'decision standard' from a 'burden of proof', which helps clarify the contributions to the debate from both sides. In making this distinction, we argue that scientists bear the burden of assessing the consequences of HME, but that the standard Frigg et al. adopt for decision relevancy is too strict.
Allegrini, Franco; Braga, Jez W B; Moreira, Alessandro C O; Olivieri, Alejandro C
2018-06-29
A new multivariate regression model, named Error Covariance Penalized Regression (ECPR), is presented. Following a penalized regression strategy, the proposed model incorporates information about the measurement error structure of the system, using the error covariance matrix (ECM) as a penalization term. Results are reported from both simulations and experimental data based on replicate mid and near infrared (MIR and NIR) spectral measurements. The results for ECPR are better under non-iid conditions when compared with traditional first-order multivariate methods such as ridge regression (RR), principal component regression (PCR) and partial least-squares regression (PLS). Copyright © 2018 Elsevier B.V. All rights reserved.
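The exact ECPR objective is specified in the paper; as a rough, hypothetical sketch, it can be viewed as a generalized ridge estimator in which the identity penalty matrix of ridge regression is replaced by the error covariance matrix. The form below, including the `lam` weighting and all simulated values, is an assumption for illustration only:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated calibration data: 30 samples, 50 spectral channels.
n, p = 30, 50
X_clean = rng.normal(size=(n, p))
beta_true = rng.normal(size=p)
# Non-iid, channel-correlated measurement noise with known covariance (the ECM).
idx = np.arange(p)
ecm = 0.1 * np.exp(-0.3 * np.abs(np.subtract.outer(idx, idx)))
X = X_clean + rng.multivariate_normal(np.zeros(p), ecm, size=n)
y = X_clean @ beta_true

def ecpr_fit(X, y, ecm, lam=1.0):
    # Hypothetical generalized-ridge form: penalty matrix = error covariance.
    return np.linalg.solve(X.T @ X + lam * len(X) * ecm, X.T @ y)

def ridge_fit(X, y, lam=1.0):
    # Ordinary ridge for comparison: identity penalty.
    return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)

b_ecpr = ecpr_fit(X, y, ecm)
b_ridge = ridge_fit(X, y)
print("coef error, ECPR vs ridge:",
      np.linalg.norm(b_ecpr - beta_true), np.linalg.norm(b_ridge - beta_true))
```

Both estimators shrink the coefficients; the difference is only in the direction of shrinkage, which the ECM-based penalty aligns with the noise structure.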
Validation of the measurement model concept for error structure identification
International Nuclear Information System (INIS)
Shukla, Pavan K.; Orazem, Mark E.; Crisalle, Oscar D.
2004-01-01
The development of different forms of measurement models for impedance has allowed examination of key assumptions on which the use of such models to assess error structure are based. The stochastic error structures obtained using the transfer-function and Voigt measurement models were identical, even when non-stationary phenomena caused some of the data to be inconsistent with the Kramers-Kronig relations. The suitability of the measurement model for assessment of consistency with the Kramers-Kronig relations, however, was found to be more sensitive to the confidence interval for the parameter estimates than to the number of parameters in the model. A tighter confidence interval was obtained for the Voigt measurement model, which made the Voigt measurement model a more sensitive tool for identification of inconsistencies with the Kramers-Kronig relations.
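A minimal sketch of the Voigt measurement model itself (a series resistance plus K parallel RC elements), which underlies the fitting procedure discussed above; the parameter values are illustrative:

```python
import numpy as np

def voigt_impedance(omega, r0, rk, tauk):
    """Voigt measurement model: solution resistance R0 in series with
    K parallel RC (Voigt) elements, Z = R0 + sum_k Rk / (1 + j*omega*tauk)."""
    omega = np.asarray(omega, dtype=float)[:, None]
    return r0 + (np.asarray(rk) / (1.0 + 1j * omega * np.asarray(tauk))).sum(axis=1)

omega = np.logspace(-2, 4, 60)   # angular frequency, rad/s
Z = voigt_impedance(omega, r0=10.0, rk=[50.0, 20.0], tauk=[1.0, 1e-2])
# Low-frequency limit -> R0 + sum(Rk); high-frequency limit -> R0.
print(Z.real[0], Z.real[-1])
```

In practice the number of Voigt elements K is increased until the fit is statistically justified, and the residuals are used to estimate the stochastic error structure.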
Measurement Model Specification Error in LISREL Structural Equation Models.
Baldwin, Beatrice; Lomax, Richard
This LISREL study examines the robustness of the maximum likelihood estimates under varying degrees of measurement model misspecification. A true model containing five latent variables (two endogenous and three exogenous) and two indicator variables per latent variable was used. Measurement model misspecification considered included errors of…
Battauz, Michela; Bellio, Ruggero
2011-01-01
This paper proposes a structural analysis for generalized linear models when some explanatory variables are measured with error and the measurement error variance is a function of the true variables. The focus is on latent variables investigated on the basis of questionnaires and estimated using item response theory models. Latent variable…
Using SMAP to identify structural errors in hydrologic models
Crow, W. T.; Reichle, R. H.; Chen, F.; Xia, Y.; Liu, Q.
2017-12-01
Despite decades of effort, and the development of progressively more complex models, there continues to be underlying uncertainty regarding the representation of basic water and energy balance processes in land surface models. Soil moisture occupies a central conceptual position between atmospheric forcing of the land surface and resulting surface water fluxes. As such, direct observations of soil moisture are potentially of great value for identifying and correcting fundamental structural problems affecting these models. However, to date, this potential has not yet been realized using satellite-based retrieval products. Using soil moisture data sets produced by the NASA Soil Moisture Active/Passive mission, this presentation will explore the use of the remotely-sensed soil moisture data products as a constraint to reject certain types of surface runoff parameterizations within a land surface model. Results will demonstrate that the precision of the SMAP Level 4 Surface and Root-Zone soil moisture product allows for the robust sampling of correlation statistics describing the true strength of the relationship between pre-storm soil moisture and subsequent storm-scale runoff efficiency (i.e., total storm flow divided by total rainfall, both in units of depth). For a set of 16 basins located in the South-Central United States, we will use these sampled correlations to demonstrate that so-called "infiltration-excess" runoff parameterizations under-predict the importance of pre-storm soil moisture for determining storm-scale runoff efficiency. To conclude, we will discuss prospects for leveraging this insight to improve short-term hydrologic forecasting and additional avenues for SMAP soil moisture products to provide process-level insight for hydrologic modelers.
Development and estimation of a semi-compensatory model with a flexible error structure
DEFF Research Database (Denmark)
Kaplan, Sigal; Shiftan, Yoram; Bekhor, Shlomo
2012-01-01
…distributed error terms across alternatives at the choice stage. This study relaxes the assumption by introducing nested substitution patterns and alternatively random taste heterogeneity at the choice stage, thus equating the structural flexibility of semi-compensatory models to their compensatory counterparts. The proposed model is applied to off-campus rental apartment choice by students. Results show the feasibility and importance of introducing a flexible error structure into semi-compensatory models.
Kaplan, D; Wenger, R N
1993-10-01
This article presents a didactic discussion on the role of asymptotically independent test statistics and separable hypotheses as they pertain to issues of specification error, power, and model modification in the covariance structure modeling framework. Specifically, it is shown that when restricting two parameter estimates on the basis of the multivariate Wald test, the condition of asymptotic independence is necessary but not sufficient for the univariate Wald test statistics to sum to the multivariate Wald test. Instead, what is required is mutual asymptotic independence (MAI) among the univariate tests. This result generalizes to sets of multivariate tests as well. When MAI is lacking, hypotheses can exhibit transitive relationships. It is also shown that the pattern of zero and non-zero elements of the covariance matrix of the estimates is indicative of mutually asymptotically independent test statistics and of separable and transitive hypotheses. The concepts of MAI, separability, and transitivity serve as an explanatory framework for how specification errors are propagated through systems of equations and how power analyses are differentially affected by specification errors of the same magnitude. A small population study supports the major findings of this article. The question of univariate versus multivariate sequential model modification is also addressed. We argue that multivariate sequential model modification strategies do not take into account the typical lack of MAI, thus inadvertently misleading substantive investigators. Instead, a prudent approach favors univariate sequential model modification.
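The summation property can be checked numerically: with a diagonal covariance of the estimates (mutual asymptotic independence), the univariate Wald statistics sum exactly to the multivariate statistic, while with correlated estimates they do not. A small sketch with arbitrary illustrative values:

```python
import numpy as np

def wald_multivariate(theta, cov, idx):
    """Multivariate Wald statistic for H0: theta[idx] = 0."""
    t = theta[idx]
    c = cov[np.ix_(idx, idx)]
    return float(t @ np.linalg.solve(c, t))

theta = np.array([0.8, -0.5])
cov_indep = np.diag([0.04, 0.09])        # mutually asymptotically independent
cov_dep = np.array([[0.04, 0.03],
                    [0.03, 0.09]])       # correlated estimates

# Sum of the two univariate Wald statistics.
w_uni = theta[0] ** 2 / 0.04 + theta[1] ** 2 / 0.09
print(w_uni,
      wald_multivariate(theta, cov_indep, [0, 1]),
      wald_multivariate(theta, cov_dep, [0, 1]))
```

Only the diagonal (MAI) case reproduces the sum of the univariate tests, which is the necessary-versus-sufficient distinction the article draws.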
Multiple Imputation to Account for Measurement Error in Marginal Structural Models.
Edwards, Jessie K; Cole, Stephen R; Westreich, Daniel; Crane, Heidi; Eron, Joseph J; Mathews, W Christopher; Moore, Richard; Boswell, Stephen L; Lesko, Catherine R; Mugavero, Michael J
2015-09-01
Marginal structural models are an important tool for observational studies. These models typically assume that variables are measured without error. We describe a method to account for differential and nondifferential measurement error in a marginal structural model. We illustrate the method estimating the joint effects of antiretroviral therapy initiation and current smoking on all-cause mortality in a United States cohort of 12,290 patients with HIV followed for up to 5 years between 1998 and 2011. Smoking status was likely measured with error, but a subset of 3,686 patients who reported smoking status on separate questionnaires composed an internal validation subgroup. We compared a standard joint marginal structural model fit using inverse probability weights to a model that also accounted for misclassification of smoking status using multiple imputation. In the standard analysis, current smoking was not associated with increased risk of mortality. After accounting for misclassification, current smoking without therapy was associated with increased mortality (hazard ratio [HR] = 1.2; 95% confidence interval [CI]: 0.6, 2.3). The HR for current smoking and therapy (HR = 0.4; 95% CI: 0.2, 0.7) was similar to the HR for no smoking and therapy (HR = 0.4; 95% CI: 0.2, 0.6). Multiple imputation can be used to account for measurement error in concert with methods for causal inference to strengthen results from observational studies.
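A toy sketch of the imputation idea, assuming nondifferential misclassification and using the validation subgroup to estimate the reclassification probabilities. The simple prevalence target and all numbers are illustrative, not the paper's hazard-ratio analysis:

```python
import numpy as np

rng = np.random.default_rng(1)

# Full cohort: reported smoking status (with error); a validation subset has truth.
n = 5000
true = rng.binomial(1, 0.4, n)
sens, spec = 0.85, 0.95                       # unknown in practice
reported = np.where(true == 1,
                    rng.binomial(1, sens, n),
                    rng.binomial(1, 1 - spec, n))
val = rng.random(n) < 0.3                     # internal validation subgroup

# Step 1: estimate P(true | reported) from the validation data.
p_true_given_r1 = true[val & (reported == 1)].mean()   # positive predictive value
p_true_given_r0 = true[val & (reported == 0)].mean()   # 1 - negative predictive value

# Step 2: multiply impute the true status outside the validation subgroup.
M = 20
imputed_prev = []
for _ in range(M):
    imp = true.astype(float).copy()           # validation subjects keep their truth
    mask = ~val
    p = np.where(reported[mask] == 1, p_true_given_r1, p_true_given_r0)
    imp[mask] = rng.binomial(1, p)
    imputed_prev.append(imp.mean())

print("naive prevalence:", reported.mean())
print("MI prevalence   :", np.mean(imputed_prev), "(true:", true.mean(), ")")
```

In the paper the imputed exposure feeds into the inverse-probability-weighted marginal structural model and the M analyses are pooled with Rubin's rules; here only the bias correction itself is shown.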
DEFF Research Database (Denmark)
Minsley, B. J.; Christensen, Nikolaj Kruse; Christensen, Steen
Model structure, or the spatial arrangement of subsurface lithological units, is fundamental to the hydrological behavior of Earth systems. Knowledge of geological model structure is critically important in order to make informed hydrological predictions and management decisions. Model structure is never perfectly known, however, and incorrect assumptions can be a significant source of error when making model predictions. We describe a systematic approach for quantifying model structural uncertainty that is based on the integration of sparse borehole observations and large-scale airborne electromagnetic (AEM) data. Our estimates of model structural uncertainty follow a Bayesian framework that accounts for both the uncertainties in geophysical parameter estimates given AEM data, and the uncertainties in the relationship between lithology and geophysical parameters. Using geostatistical sequential…
Development and estimation of a semi-compensatory model with flexible error structure
DEFF Research Database (Denmark)
Kaplan, Sigal; Shiftan, Yoram; Bekhor, Shlomo
2009-01-01
…that alleviates these simplifying assumptions concerning (i) the number of alternatives, (ii) the representation of choice set formation, and (iii) the error structure. The proposed semi-compensatory model represents a sequence of choice set formation based on the conjunctive heuristic with correlated thresholds, and utility-based choice accommodating alternatively nested substitution patterns across the alternatives and random taste variation across the population. The proposed model is applied to off-campus rental apartment choice of students. Results show (i) the estimated model for a universal realm of 200 alternatives and 41 choice sets, (ii) the threshold representation as a function of individual characteristics, and (iii) the feasibility and importance of introducing a flexible error structure into semi-compensatory models.
Helle, Samuli
2018-03-01
Revealing causal effects from correlative data is very challenging and a contemporary problem in human life history research owing to the lack of an experimental approach. Problems with causal inference arising from measurement error in independent variables, whether related either to inaccurate measurement technique or validity of measurements, seem not to be well known in this field. The aim of this study is to show how structural equation modeling (SEM) with latent variables can be applied to account for measurement error in independent variables when the researcher has recorded several indicators of a hypothesized latent construct. As a simple example of this approach, measurement error in lifetime allocation of resources to reproduction in Finnish preindustrial women is modelled in the context of the survival cost of reproduction. In humans, lifetime energetic resources allocated in reproduction are almost impossible to quantify with precision and, thus, typically used measures of lifetime reproductive effort (e.g., lifetime reproductive success and parity) are likely to be plagued by measurement error. These results are contrasted with those obtained from a traditional regression approach where the single best proxy of lifetime reproductive effort available in the data is used for inference. As expected, the inability to account for measurement error in women's lifetime reproductive effort resulted in the underestimation of its underlying effect size on post-reproductive survival. This article emphasizes the advantages that the SEM framework can provide in handling measurement error via multiple-indicator latent variables in human life history studies. © 2017 Wiley Periodicals, Inc.
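The attenuation effect described here is easy to reproduce in simulation: a noisy proxy of a latent predictor shrinks the estimated slope by the proxy's reliability, which is what a multiple-indicator latent-variable model corrects for. A minimal sketch in which a known-reliability correction stands in for the SEM machinery (all values are illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 20000
effort = rng.normal(0, 1, n)                  # latent lifetime reproductive effort
survival = -0.5 * effort + rng.normal(0, 1, n)  # true effect size: -0.5

# A single noisy proxy attenuates the estimated effect...
proxy = effort + rng.normal(0, 1, n)
slope_naive = np.cov(proxy, survival)[0, 1] / np.var(proxy)

# ...while the reliability (known here: var(true)/var(proxy) = 0.5) lets us
# disattenuate, mimicking what a latent-variable SEM recovers from
# multiple indicators of the same construct.
reliability = 1.0 / (1.0 + 1.0)
slope_corrected = slope_naive / reliability

print("naive:", slope_naive, " corrected:", slope_corrected)
```

The naive slope is roughly half the true effect, matching the abstract's point that ignoring measurement error underestimates the survival cost of reproduction.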
Behmanesh, Iman; Moaveni, Babak
2016-07-01
This paper presents a Hierarchical Bayesian model updating framework to account for the effects of ambient temperature and excitation amplitude. The proposed approach is applied for model calibration, response prediction and damage identification of a footbridge under changing environmental/ambient conditions. The concrete Young's modulus of the footbridge deck is the considered updating structural parameter with its mean and variance modeled as functions of temperature and excitation amplitude. The identified modal parameters over 27 months of continuous monitoring of the footbridge are used to calibrate the updating parameters. One of the objectives of this study is to show that by increasing the levels of information in the updating process, the posterior variation of the updating structural parameter (concrete Young's modulus) is reduced. To this end, the calibration is performed at three information levels using (1) the identified modal parameters, (2) modal parameters and ambient temperatures, and (3) modal parameters, ambient temperatures, and excitation amplitudes. The calibrated model is then validated by comparing the model-predicted natural frequencies and those identified from measured data after deliberate change to the structural mass. It is shown that accounting for modeling error uncertainties is crucial for reliable response prediction, and accounting only the estimated variability of the updating structural parameter is not sufficient for accurate response predictions. Finally, the calibrated model is used for damage identification of the footbridge.
Potter, Gail E; Smieszek, Timo; Sailer, Kerstin
2015-09-01
Face-to-face social contacts are potentially important transmission routes for acute respiratory infections, and understanding the contact network can improve our ability to predict, contain, and control epidemics. Although workplaces are important settings for infectious disease transmission, few studies have collected workplace contact data and estimated workplace contact networks. We use contact diaries, architectural distance measures, and institutional structures to estimate social contact networks within a Swiss research institute. Some contact reports were inconsistent, indicating reporting errors. We adjust for this with a latent variable model, jointly estimating the true (unobserved) network of contacts and duration-specific reporting probabilities. We find that contact probability decreases with distance, and that research group membership, role, and shared projects are strongly predictive of contact patterns. Estimated reporting probabilities were low only for 0-5 min contacts. Adjusting for reporting error changed the estimate of the duration distribution, but did not change the estimates of covariate effects and had little effect on epidemic predictions. Our epidemic simulation study indicates that inclusion of network structure based on architectural and organizational structure data can improve the accuracy of epidemic forecasting models.
Directory of Open Access Journals (Sweden)
J. H. Spaaks
2013-09-01
In hydrological modeling, model structures are developed in an iterative cycle as more and different types of measurements become available and our understanding of the hillslope or watershed improves. However, with increasing complexity of the model, it becomes more and more difficult to detect which parts of the model are deficient, or which processes should also be incorporated into the model during the next development step. In this study, we first compare two methods (the Shuffled Complex Evolution Metropolis algorithm (SCEM-UA) and the Simultaneous parameter Optimization and Data Assimilation algorithm (SODA)) to calibrate a purposely deficient 3-D hillslope-scale model to error-free, artificially generated measurements. We use a multi-objective approach based on distributed pressure head at the soil–bedrock interface and hillslope-scale discharge and water balance. For these idealized circumstances, SODA's usefulness as a diagnostic methodology is demonstrated by its ability to identify the timing and location of processes that are missing in the model. We show that SODA's state updates provide information that could readily be incorporated into an improved model structure, and that this type of information cannot be gained from parameter estimation methods such as SCEM-UA. We then expand on the SODA result by performing yet another calibration, in which we investigate whether SODA's state updating patterns are still capable of providing insight into model structure deficiencies when there are fewer measurements, which are moreover subject to measurement noise. We conclude that SODA can help guide the discussion between experimentalists and modelers by providing accurate and detailed information on how to improve spatially distributed hydrologic models.
Modeling coherent errors in quantum error correction
Greenbaum, Daniel; Dutton, Zachary
2018-01-01
Analysis of quantum error correcting codes is typically done using a stochastic, Pauli channel error model for describing the noise on physical qubits. However, it was recently found that coherent errors (systematic rotations) on physical data qubits result in both physical and logical error rates that differ significantly from those predicted by a Pauli model. Here we examine the accuracy of the Pauli approximation for noise containing coherent errors (characterized by a rotation angle ε) under the repetition code. We derive an analytic expression for the logical error channel as a function of arbitrary code distance d and concatenation level n, in the small error limit. We find that coherent physical errors result in logical errors that are partially coherent and therefore non-Pauli. However, the coherent part of the logical error is negligible at fewer than ε^{-(d^n-1)} error correction cycles when the decoder is optimized for independent Pauli errors, thus providing a regime of validity for the Pauli approximation. Above this number of correction cycles, the persistent coherent logical error will cause logical failure more quickly than the Pauli model would predict, and this may need to be combated with coherent suppression methods at the physical level or larger codes.
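The qualitative difference can be illustrated on a single qubit, without any code: a coherent over-rotation accumulates in amplitude, so its error probability grows quadratically with the number of applications, while the Pauli twirl of the same channel accumulates in probability, growing only linearly. This simplified sketch is not the paper's repetition-code calculation:

```python
import numpy as np

eps, N = 0.01, 100

# Coherent over-rotation by eps about X applied N times: the rotation angles
# add, so the flip probability is sin^2(N*eps/2) ~ (N*eps/2)^2.
p_coherent = np.sin(N * eps / 2) ** 2

# Pauli (stochastic) approximation: an X flip with probability sin^2(eps/2)
# at each step, so to leading order the probabilities add linearly in N.
p_pauli = N * np.sin(eps / 2) ** 2

print("coherent:", p_coherent, " Pauli approximation:", p_pauli)
```

After enough cycles the coherent channel fails far sooner than the Pauli model predicts, which is the regime-of-validity issue the abstract describes at the logical level.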
Lu, Dan; Ye, Ming; Meyer, Philip D.; Curtis, Gary P.; Shi, Xiaoqing; Niu, Xu-Feng; Yabusaki, Steve B.
2013-01-01
When conducting model averaging for assessing groundwater conceptual model uncertainty, the averaging weights are often evaluated using model selection criteria such as AIC, AICc, BIC, and KIC (Akaike Information Criterion, Corrected Akaike Information Criterion, Bayesian Information Criterion, and Kashyap Information Criterion, respectively). However, this method often leads to an unrealistic situation in which the best model receives overwhelmingly large averaging weight (close to 100%), which cannot be justified by available data and knowledge. It was found in this study that this problem was caused by using the covariance matrix, CE, of measurement errors for estimating the negative log likelihood function common to all the model selection criteria. This problem can be resolved by using the covariance matrix, Cek, of total errors (including model errors and measurement errors) to account for the correlation between the total errors. An iterative two-stage method was developed in the context of maximum likelihood inverse modeling to iteratively infer the unknown Cek from the residuals during model calibration. The inferred Cek was then used in the evaluation of model selection criteria and model averaging weights. While this method was limited to serial data using time series techniques in this study, it can be extended to spatial data using geostatistical techniques. The method was first evaluated in a synthetic study and then applied to an experimental study, in which alternative surface complexation models were developed to simulate column experiments of uranium reactive transport. It was found that the total errors of the alternative models were temporally correlated due to the model errors. The iterative two-stage method using Cek resolved the problem that the best model receives 100% model averaging weight, and the resulting model averaging weights were supported by the calibration results and physical understanding of the alternative models. Using Cek…
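A minimal sketch of the two-stage idea under an assumed AR(1) structure for the total errors: alternate between re-estimating the error covariance from the calibration residuals and re-calibrating the parameters by generalized least squares. The AR(1) form and all values here are illustrative, not the study's surface complexation models:

```python
import numpy as np

rng = np.random.default_rng(3)
t = np.arange(200)
X = np.column_stack([np.ones(200), t / 100.0])
beta = np.array([1.0, 2.0])

# "Total" error: AR(1)-correlated model error plus white measurement noise.
e = np.zeros(200)
for i in range(1, 200):
    e[i] = 0.8 * e[i - 1] + rng.normal(0, 0.3)
y = X @ beta + e + rng.normal(0, 0.1, 200)

b = np.linalg.lstsq(X, y, rcond=None)[0]            # stage 1: OLS start
for _ in range(5):                                  # iterate the two stages
    r = y - X @ b
    rho = np.corrcoef(r[:-1], r[1:])[0, 1]          # AR(1) coefficient from residuals
    s2 = np.var(r)
    C = s2 * rho ** np.abs(np.subtract.outer(t, t)) # inferred total-error covariance
    Ci = np.linalg.inv(C)
    b = np.linalg.solve(X.T @ Ci @ X, X.T @ Ci @ y) # stage 2: GLS re-calibration
print("calibrated parameters:", b)
```

The inferred covariance would then enter the likelihood term of each selection criterion, deflating the over-confident weights that a pure measurement-error covariance produces.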
Development and estimation of a semi-compensatory model with flexible error structure
DEFF Research Database (Denmark)
Kaplan, Sigal; Shiftan, Yoram; Bekhor, Shlomo
…-response model and the utility-based choice by alternatively (i) a nested-logit model and (ii) an error-component logit. In order to test the suggested methodology, the model was estimated for a sample of 1,893 ranked choices and respective threshold values from 631 students who participated in a web-based two…
Aircraft system modeling error and control error
Kulkarni, Nilesh V. (Inventor); Kaneshige, John T. (Inventor); Krishnakumar, Kalmanje S. (Inventor); Burken, John J. (Inventor)
2012-01-01
A method for modeling error-driven adaptive control of an aircraft. Normal aircraft plant dynamics is modeled, using an original plant description in which a controller responds to a tracking error e(k) to drive the component to a normal reference value according to an asymptote curve. Where the system senses that (1) at least one aircraft plant component is experiencing an excursion and (2) the return of this component value toward its reference value is not proceeding according to the expected controller characteristics, neural network (NN) modeling of aircraft plant operation may be changed. However, if (1) is satisfied but the error component is returning toward its reference value according to expected controller characteristics, the NN will continue to model operation of the aircraft plant according to an original description.
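A hypothetical monitor for condition (2) might compare successive tracking-error magnitudes against the expected first-order decay; the threshold logic below is an illustrative sketch, not the patented method:

```python
def decay_ok(errors, lam=0.9, tol=0.05):
    """Check whether a tracking-error sequence follows the expected
    first-order asymptote |e(k+1)| <= (lam + tol) * |e(k)|.
    lam and tol are hypothetical controller characteristics."""
    ratios = [abs(b) / abs(a) for a, b in zip(errors, errors[1:]) if abs(a) > 1e-9]
    return all(r <= lam + tol for r in ratios)

# A nominally converging excursion vs. one that stalls short of the reference.
nominal = [1.0 * 0.9 ** k for k in range(10)]
stuck = [1.0, 0.95, 0.93, 0.92, 0.92, 0.91]
print(decay_ok(nominal), decay_ok(stuck))
```

In the scheme the abstract describes, a failed check like `decay_ok(stuck)` is what would trigger a change to the neural-network plant model, while a passing check leaves the original description in place.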
Bayesian treatment of a chemical mass balance receptor model with multiplicative error structure
Keats, Andrew; Cheng, Man-Ting; Yee, Eugene; Lien, Fue-Sang
The chemical mass balance (CMB) receptor model is commonly used in source apportionment studies as a means for attributing measured airborne particulate matter (PM) to its constituent emission sources. Traditionally, error terms (e.g., measurement and source profile uncertainty) associated with the model have been treated in an additive sense. In this work, however, arguments are made for the assumption of multiplicative errors, and the effects of this assumption are realized in a Bayesian probabilistic formulation which incorporates a 'modified' receptor model. One practical, beneficial effect of the multiplicative error assumption is that it automatically precludes the possibility of negative source contributions, without requiring additional constraints on the problem. The present Bayesian treatment further differs from traditional approaches in that the source profiles are inferred alongside the source contributions. Existing knowledge regarding the source profiles is incorporated as prior information to be updated through the Bayesian inferential scheme. Hundreds of parameters are therefore present in the expression for the joint probability of the source contributions and profiles (the posterior probability density function, or PDF), whose domain is explored efficiently using the Hamiltonian Markov chain Monte Carlo method. The overall methodology is evaluated and results compared to the US Environmental Protection Agency's standard CMB model using a test case based on PM data from Fresno, California.
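The positivity benefit of the multiplicative-error assumption can be sketched in a toy apportionment problem: working with log-space residuals and parameterizing the contributions as exponentials makes negative source contributions impossible by construction. This simple gradient-descent fit stands in for the paper's Bayesian/HMC inference, and all values are illustrative:

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy CMB setup: 8 chemical species, 3 emission sources.
S = rng.uniform(0.1, 1.0, size=(8, 3))            # source profiles (known here)
x_true = np.array([2.0, 0.5, 1.0])                # source contributions
c = (S @ x_true) * rng.lognormal(0.0, 0.05, 8)    # multiplicative measurement error

# Parameterize x = exp(z): contributions are positive by construction,
# one practical benefit of the multiplicative-error formulation.
z = np.zeros(3)
for _ in range(2000):
    x = np.exp(z)
    resid = np.log(c) - np.log(S @ x)             # log-space (multiplicative) residuals
    grad = -(S.T @ (resid / (S @ x))) * x         # d/dz of 0.5 * sum(resid**2)
    z -= 0.1 * grad
print("estimated contributions:", np.exp(z))
```

In the full Bayesian treatment the profiles in `S` are uncertain and inferred jointly with the contributions, with priors encoding existing profile knowledge.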
Overholser, Rosanna; Xu, Ronghui
2014-11-01
The effective degrees of freedom is a useful concept for describing model complexity. Recently the number of effective degrees of freedom has been shown to relate to the concept of conditional Akaike information (cAI) in mixed effects models. This relationship was made explicit under linear mixed-effects models with i.i.d. errors, and later also extended to the generalized linear and the proportional hazards mixed models. We show that under linear mixed-effects models with correlated errors, the number of effective degrees of freedom is asymptotically equal to the trace of the usual `hat' matrix plus the number of parameters in the error covariance matrix. Using it one can define a crude version of the conditional AIC (cAIC), which is known to be inaccurate due to the estimation of unknown variance parameters. We compare this crude version to several corrected versions of cAIC for linear mixed models with correlated errors, including one that is asymptotically unbiased after accounting for the unknown parameters, but which is also difficult to compute without specific programming for each case of the error correlation structure.
Spaaks, J.H.; Bouten, W.
2013-01-01
In hydrological modeling, model structures are developed in an iterative cycle as more and different types of measurements become available and our understanding of the hillslope or watershed improves. However, with increasing complexity of the model, it becomes more and more difficult to detect
Dong, Li-hu; Li, Feng-ri; Song, Yu-wen
2015-03-01
Based on the biomass data of 276 sampling trees of Pinus koraiensis, Abies nephrolepis, Picea koraiensis and Larix gmelinii, the mono-element and dual-element additive system of biomass equations for the four conifer species was developed. The model error structure (additive vs. multiplicative) of the allometric equation was evaluated using likelihood analysis, while nonlinear seemingly unrelated regression was used to estimate the parameters in the additive system of biomass equations. The results indicated that the assumption of multiplicative error structure was strongly supported for the biomass equations of total and tree components for the four conifer species. Thus, the additive system of log-transformed biomass equations was developed. The adjusted coefficient of determination (Ra²) of the additive system of biomass equations for the four conifer species was 0.85-0.99, the mean relative error was between -7.7% and 5.5%, and the mean absolute relative error was less than 30.5%. Adding total tree height in the additive systems of biomass equations could significantly improve model fitting performance and predicting precision, and the biomass equations of total, aboveground and stem were better than biomass equations of root, branch, foliage and crown. The precision of each biomass equation in the additive system varied from 77.0% to 99.7% with a mean value of 92.3%, which would be suitable for predicting the biomass of the four natural conifer species.
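The likelihood comparison of additive versus multiplicative error structures can be sketched as follows: fit the allometry in log space, then compare the lognormal log-likelihood on the raw scale (including its Jacobian term) against a normal-error alternative with the same mean function. This is a simplification of the published likelihood-analysis procedure, and the data are simulated:

```python
import numpy as np

rng = np.random.default_rng(5)

# Simulated tree data with multiplicative (lognormal) error: B = a * D^b * exp(eps).
D = rng.uniform(5, 50, 150)                          # diameter, cm
B = 0.1 * D ** 2.4 * rng.lognormal(0.0, 0.3, 150)    # biomass, kg

# Fit the allometry in log space (the MLE under multiplicative error).
b_log, a_log = np.polyfit(np.log(D), np.log(B), 1)
mu_log = a_log + b_log * np.log(D)
s2_log = np.mean((np.log(B) - mu_log) ** 2)
# Lognormal log-likelihood on the raw scale (Jacobian term: -sum(log B)).
ll_mult = -0.5 * len(B) * (np.log(2 * np.pi * s2_log) + 1) - np.log(B).sum()

# Additive-error alternative: same mean function, normal errors on the raw scale.
resid = B - np.exp(mu_log)
s2_add = np.mean(resid ** 2)
ll_add = -0.5 * len(B) * (np.log(2 * np.pi * s2_add) + 1)

print("lognormal logL:", ll_mult, " normal logL:", ll_add)
```

With truly multiplicative data the lognormal likelihood dominates, which is the kind of evidence that supports fitting the log-transformed additive system rather than the raw-scale equations.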
Harring, Jeffrey R; Blozis, Shelley A
2014-06-01
Nonlinear mixed-effects (NLME) models remain popular among practitioners for analyzing continuous repeated measures data taken on each of a number of individuals when interest centers on characterizing individual-specific change. Within this framework, variation and correlation among the repeated measurements may be partitioned into interindividual variation and intraindividual variation components. The covariance structure of the residuals is, in many applications, consigned to be independent with homogeneous variances, σ², not because it is believed that intraindividual variation adheres to this structure, but because many software programs that estimate parameters of such models are not well-equipped to handle other, possibly more realistic, patterns. In this article, we describe how the programmatic environment within SAS may be utilized to model residual structures for serial correlation and variance heterogeneity. An empirical example is used to illustrate the capabilities of the module.
Transition Models with Measurement Errors
Magnac, Thierry; Visser, Michael
1999-01-01
In this paper, we estimate a transition model that allows for measurement errors in the data. The measurement errors arise because the survey design is partly retrospective, so that individuals sometimes forget or misclassify their past labor market transitions. The observed data are adjusted for errors via a measurement-error mechanism. The parameters of the distribution of the true data, and those of the measurement-error mechanism are estimated by a two-stage method. The results, based on ...
A Geomagnetic Reference Error Model
Maus, S.; Woods, A. J.; Nair, M. C.
2011-12-01
The accuracy of geomagnetic field models, such as the International Geomagnetic Reference Field (IGRF) and the World Magnetic Model (WMM), has benefitted tremendously from the ongoing series of satellite magnetic missions. However, what do we mean by accuracy? When comparing a geomagnetic reference model with a magnetic field measurement (for example of an electronic compass), three contributions play a role: (1) The instrument error, which is not subject of this discussion, (2) the error of commission, namely the error of the model coefficients themselves in representing the geomagnetic main field, and (3) the error of omission, comprising contributions to the geomagnetic field which are not represented in the reference model. The latter can further be subdivided into the omission of the crustal field and the omission of the disturbance field. Several factors have a strong influence on these errors: The error of commission primarily depends on the time elapsed since the last update of the reference model. The omission error for the crustal field depends on altitude of the measurement, while the omission error for the disturbance field has a strong latitudinal dependence, peaking under the auroral electrojets. A further complication arises for the uncertainty in magnetic declination, which is directly dependent on the strength of the horizontal field. Here, we present an error model which takes all of these factors into account. This error model will be implemented as an online-calculator, providing the uncertainty of the magnetic elements at the entered location and time.
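A sketch of how such an error model might combine the commission and omission terms, and why declination uncertainty blows up where the horizontal field is weak. The quadrature combination and all numeric values are assumptions for illustration, not the online calculator's actual specification:

```python
import math

def declination_uncertainty_deg(sigma_commission_nt, sigma_crustal_nt,
                                sigma_disturbance_nt, horizontal_field_nt):
    """Combine the three error sources in quadrature, then convert the
    field uncertainty into a declination angle (small-angle approximation:
    sigma_D ~ sigma_B / H)."""
    sigma_b = math.sqrt(sigma_commission_nt ** 2 + sigma_crustal_nt ** 2
                        + sigma_disturbance_nt ** 2)
    return math.degrees(sigma_b / horizontal_field_nt)

# Near the equator the horizontal field is strong, so declination is well-defined...
print(declination_uncertainty_deg(20, 200, 50, 30000))
# ...while near the auroral zones a weak horizontal field and a larger
# disturbance-field omission error inflate the uncertainty.
print(declination_uncertainty_deg(20, 200, 150, 2000))
```

In the described model each input would itself vary: the commission term with time since the last model update, the crustal term with altitude, and the disturbance term with latitude.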
Sofyan, Hizir; Maulia, Eva; Miftahuddin
2017-11-01
A country has several important parameters for achieving economic prosperity, such as tax revenue and the inflation rate. One of the largest revenues in Indonesia's State Budget comes from the tax sector. Meanwhile, the inflation rate occurring in a country can be used as an indicator of the economic problems the country faces. Given the importance of tax revenue and inflation-rate control in achieving economic prosperity, it is necessary to analyze the relationship between tax revenue and the inflation rate. This study aims to produce the best VECM (Vector Error Correction Model) with optimal lag under various alpha levels, and to perform structural analysis using the Impulse Response Function (IRF) of the VECM models to examine the relationship between tax revenue and inflation in Banda Aceh. The results showed that the best model for the tax revenue and inflation-rate data in Banda Aceh City using alpha 0.01 is a VECM with optimal lag 2, while the best model using alpha 0.05 and 0.1 is a VECM with optimal lag 3. The VECM model with alpha 0.01 yielded four significant models: the income tax model and the inflation rates of Banda Aceh overall, of health, and of education in Banda Aceh. The VECM models with alpha 0.05 and 0.1 yielded one significant model, the income tax model. Based on these VECM models, two structural IRF analyses were formed to examine the relationship between tax revenue and inflation in Banda Aceh: the IRF with VECM(2) and the IRF with VECM(3).
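The error-correction idea behind a VECM can be sketched with a toy bivariate example. This is synthetic data, not the Banda Aceh series; the single-equation form and all coefficients are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
T = 2000

# Simulate a cointegrated pair: x is a random walk, y tracks x with a
# mean-reverting (stationary) gap u, so y - x is the cointegrating relation.
x = np.cumsum(rng.normal(size=T))
u = np.zeros(T)
for t in range(1, T):
    u[t] = 0.5 * u[t - 1] + rng.normal()
y = x + u

# Single-equation error-correction regression:
#   dy_t = alpha * (y_{t-1} - x_{t-1}) + eps_t
# A negative alpha signals adjustment back toward the equilibrium relation.
dy = np.diff(y)
ect = (y - x)[:-1]                          # lagged error-correction term
alpha = np.sum(ect * dy) / np.sum(ect**2)   # OLS slope (no intercept)
print(f"estimated adjustment coefficient alpha = {alpha:.3f}")
```

A full VECM would estimate the cointegrating rank and lag order jointly (e.g. via Johansen's procedure); the sketch only shows why the coefficient on the lagged equilibrium error is expected to be negative.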
Stochastic Models of Human Errors
Elshamy, Maged; Elliott, Dawn M. (Technical Monitor)
2002-01-01
Humans play an important role in the overall reliability of engineering systems. More often than not, accidents and system failures are traced to human errors. Therefore, in order to have meaningful system risk analysis, the reliability of the human element must be taken into consideration. Describing the human error process by mathematical models is key to analyzing contributing factors. The objective of this research effort is therefore to establish stochastic models, substantiated by a sound theoretical foundation, to address the occurrence of human errors in the processing of the Space Shuttle.
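A minimal stochastic model of the kind gestured at here is a homogeneous Poisson error process; the rate and sample size below are hypothetical, not taken from the study:

```python
import math
import random
import statistics

random.seed(1)
LAM = 0.8   # assumed true error rate per processed task (hypothetical)
N = 5000    # number of observed processing tasks

def poisson(lam):
    """Draw a Poisson variate via Knuth's multiplication method (stdlib only)."""
    limit, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p <= limit:
            return k
        k += 1

# Error count per task ~ Poisson(LAM); the MLE of the rate is the sample mean.
counts = [poisson(LAM) for _ in range(N)]
lam_hat = statistics.fmean(counts)          # maximum-likelihood rate estimate
p_any = sum(c > 0 for c in counts) / N      # empirical P(at least one error)
print(lam_hat, p_any)
```

Real human-error models condition the rate on workload, fatigue, and task type; the constant-rate process is only the simplest substantiated starting point.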
Mathematical Model of the Laser Gyro Errors
Directory of Open Access Journals (Sweden)
V. N. Enin
2017-01-01
Full Text Available The paper presents the analysed and systemised results of an experimental study of laser gyro (LG) errors. It determines the structure of the resulting LG error as a linear combination of random processes characterizing natural and technical fluctuations of the difference frequency of the counter-propagating waves, together with a random constant zero shift present in the sensor readings. It formulates requirements for the structure and form of the analytic description of the error model, and presents a generalized model of the LG fluctuation processes, on the basis of which a mathematical model of the errors of the LG as an inertial sensor was developed. The model is represented by a system of stochastic differential equations and functional relationships characterizing the resulting error of the sensor. The paper provides a correlation analysis of the model equations and gives final equations for the mean-square values of the particular components, which allow the resulting error parameters to be identified. The model parameters are expressed through the values of the power spectral density of the particular components. The discrete form of the model is considered, and the convergence of the continuous and difference equations is shown under the conditions of the limiting transition. Further research activities are defined.
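The error structure described above (a random constant zero shift plus fluctuation processes) can be sketched numerically. The decomposition below into a constant bias, white noise, and an exponentially correlated (first-order Markov) process is a common inertial-sensor convention used here as a stand-in; all noise levels are invented:

```python
import numpy as np

rng = np.random.default_rng(2)
N, dt = 20000, 0.01            # sample count and time step; illustrative

# Three-component error model: constant zero shift + white noise +
# exponentially correlated process (discrete first-order Markov form).
bias = rng.normal(0.0, 0.05)                    # random constant zero shift
white = rng.normal(0.0, 0.10, N)                # natural fluctuations
tau, sigma_m = 1.0, 0.08                        # Markov time constant, level
a = np.exp(-dt / tau)
drive = rng.normal(0.0, sigma_m * np.sqrt(1 - a**2), N)
markov = np.zeros(N)
for k in range(1, N):
    markov[k] = a * markov[k - 1] + drive[k]

# The resulting sensor error and its mean-square value, which is what the
# paper's correlation analysis identifies component-by-component.
err = bias + white + markov
ms = np.mean(err**2)
print(ms)
```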
International Nuclear Information System (INIS)
Nuamah, N.N.N.N.
1991-01-01
This paper postulates the assumptions underlying the Mean Approach model and recasts the normal equations of this model in terms of partitioned covariance matrices. These covariance structures have been analysed. (author). 16 refs
Measurement error models with interactions
Midthune, Douglas; Carroll, Raymond J.; Freedman, Laurence S.; Kipnis, Victor
2016-01-01
An important use of measurement error models is to correct regression models for bias due to covariate measurement error. Most measurement error models assume that the observed error-prone covariate ($W$) is a linear function of the unobserved true covariate ($X$) plus other covariates ($Z$) in the regression model. In this paper, we consider models for $W$ that include interactions between $X$ and $Z$. We derive the conditional distribution of
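The bias that motivates such models can be demonstrated with a small simulation; the outcome model and all coefficients below are invented, and the naive fit simply substitutes the error-prone W for X:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 200000

# True outcome model with an X*Z interaction; W observes X with classical
# additive error of the same variance as X (so reliability = 0.5).
X = rng.normal(size=n)
Z = rng.normal(size=n)
Y = 1.0 + 2.0 * X + 0.5 * Z + 1.0 * X * Z + rng.normal(size=n)
W = X + rng.normal(scale=1.0, size=n)          # error-prone covariate

# Naive regression on [1, W, Z, W*Z]: the X main effect and the X*Z
# interaction are both attenuated by var(X)/(var(X)+var(err)) = 0.5.
A = np.column_stack([np.ones(n), W, Z, W * Z])
beta = np.linalg.lstsq(A, Y, rcond=None)[0]
print(beta)   # naive estimates of [intercept, X, Z, X*Z]
```

Here the large-sample naive estimates are [1, 1, 0.5, 0.5] rather than the true [1, 2, 0.5, 1], which is exactly the kind of bias that a correctly specified measurement error model removes.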
Lowther, Andrew D; Lydersen, Christian; Fedak, Mike A; Lovell, Phil; Kovacs, Kit M
2015-01-01
Understanding how an animal utilises its surroundings requires its movements through space to be described accurately. Satellite telemetry is the only means of acquiring movement data for many species; however, the data are prone to varying amounts of spatial error. The recent application of state-space models (SSMs) to the location estimation problem has provided a means to incorporate spatial errors when characterising animal movements. The predominant platform for collecting satellite telemetry data on free-ranging animals, Service Argos, recently provided an alternative Doppler location estimation algorithm that is purported to be more accurate and to generate a greater number of locations than its predecessor. We provide a comprehensive assessment of the performance of this new estimation process on data from free-ranging animals relative to concurrently collected Fastloc GPS data. Additionally, we test the efficacy of three readily available SSMs in predicting the movement of two focal animals. Raw Argos location estimates generated by the new algorithm were greatly improved compared to the old system. Approximately twice as many Argos locations were derived compared to GPS on the devices used. Root Mean Square Errors (RMSE) for each optimal SSM were less than 4.25 km, with some producing RMSE of less than 2.50 km. Differences in the biological plausibility of the tracks between the two focal animals used to investigate the utility of SSMs highlight the importance of considering animal behaviour in movement studies. The ability to reprocess Argos data collected since 2008 with the new algorithm should permit questions of animal movement to be revisited at a finer resolution.
Structured sparse error coding for face recognition with occlusion.
Li, Xiao-Xin; Dai, Dao-Qing; Zhang, Xiao-Fei; Ren, Chuan-Xian
2013-05-01
Face recognition with occlusion is common in the real world. Inspired by work on structured sparse representation, we explore the structure of the error incurred by occlusion from two aspects: the error morphology and the error distribution. Since human beings recognize occlusion mainly by its region shape or profile, without knowing accurately what the occlusion is, we argue that the shape of the occlusion is also an important feature. We propose a morphological graph model to describe the morphological structure of the error. Due to the uncertainty of the occlusion, the distribution of the error incurred by occlusion is also uncertain. However, we observe that the unoccluded and occluded parts of the error, measured by the correntropy induced metric, each follow an exponential distribution. Incorporating the two aspects of the error structure, we propose structured sparse error coding for face recognition with occlusion. Our extensive experiments demonstrate that the proposed method is more stable and has a higher breakdown point in dealing with occlusion problems in face recognition than related state-of-the-art methods, especially in extreme situations such as high-level occlusion and low feature dimension.
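The correntropy induced metric mentioned above can be sketched in a few lines; the Gaussian kernel is standard for correntropy, but the bandwidth below is an arbitrary choice:

```python
import math

def cim(e, sigma=1.0):
    """Correntropy induced metric for a single error value e:
    CIM(e) = sqrt(k(0) - k(e)) with a Gaussian kernel k."""
    k = lambda v: math.exp(-v * v / (2.0 * sigma * sigma))
    return math.sqrt(k(0.0) - k(e))

# Small errors behave like a scaled L2 distance, while large
# (occlusion-like) errors saturate near sqrt(k(0)) = 1, which is what
# caps the influence of occluded pixels on the coding.
print(cim(0.1), cim(10.0))
```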
A vector model for error propagation
Energy Technology Data Exchange (ETDEWEB)
Smith, D.L.; Geraldo, L.P.
1989-03-01
A simple vector model for error propagation, which is entirely equivalent to the conventional statistical approach, is discussed. It offers considerable insight into the nature of error propagation while, at the same time, readily demonstrating the significance of uncertainty correlations. This model is well suited to the analysis of error for sets of neutron-induced reaction cross sections. 7 refs., 1 fig.
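The abstract states that the vector model is equivalent to conventional statistical error propagation, which for a scalar function of correlated inputs reduces to a quadratic form in the input covariance matrix. A sketch with illustrative sensitivities shows why the correlation term matters:

```python
import numpy as np

# Linearized error propagation: for f(x) with sensitivity (Jacobian row) j
# at the evaluation point, var(f) = j @ Sigma @ j, where Sigma is the input
# covariance matrix. The off-diagonal term carries the correlation.
j = np.array([2.0, -1.0])          # df/dx1, df/dx2 (illustrative values)
sig = np.array([0.3, 0.4])         # input standard deviations

stds = {}
for rho in (0.0, 0.8):
    Sigma = np.array([[sig[0]**2,               rho * sig[0] * sig[1]],
                      [rho * sig[0] * sig[1],   sig[1]**2]])
    var_f = j @ Sigma @ j
    stds[rho] = np.sqrt(var_f)
    print(f"rho={rho}: std(f) = {stds[rho]:.4f}")
```

With opposite-signed sensitivities, a positive input correlation cancels part of the propagated variance, which is the effect of uncertainty correlations the abstract highlights for cross-section sets.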
Cole, Stephen R; Jacobson, Lisa P; Tien, Phyllis C; Kingsley, Lawrence; Chmiel, Joan S; Anastos, Kathryn
2010-01-01
To estimate the net effect of imperfectly measured highly active antiretroviral therapy on incident acquired immunodeficiency syndrome or death, the authors combined inverse probability-of-treatment-and-censoring weighted estimation of a marginal structural Cox model with regression-calibration methods. Between 1995 and 2007, 950 human immunodeficiency virus-positive men and women were followed in 2 US cohort studies. During 4,054 person-years, 374 initiated highly active antiretroviral therapy, 211 developed acquired immunodeficiency syndrome or died, and 173 dropped out. Accounting for measured confounders and determinants of dropout, the weighted hazard ratio for acquired immunodeficiency syndrome or death comparing use of highly active antiretroviral therapy in the prior 2 years with no therapy was 0.36 (95% confidence limits: 0.21, 0.61). This association was relatively constant over follow-up (P = 0.19) and stronger than crude or adjusted hazard ratios of 0.75 and 0.95, respectively. Accounting for measurement error in reported exposure using external validation data on 331 men and women provided a hazard ratio of 0.17, with bias shifted from the hazard ratio to the estimate of precision as seen by the 2.5-fold wider confidence limits (95% confidence limits: 0.06, 0.43). Marginal structural measurement-error models can simultaneously account for 3 major sources of bias in epidemiologic research: validated exposure measurement error, measured selection bias, and measured time-fixed and time-varying confounding.
An Error Analysis of Structured Light Scanning of Biological Tissue
DEFF Research Database (Denmark)
Jensen, Sebastian Hoppe Nesgaard; Wilm, Jakob; Aanæs, Henrik
2017-01-01
This paper presents an error analysis and correction model for four structured light methods applied to three common types of biological tissue; skin, fat and muscle. Despite its many advantages, structured light is based on the assumption of direct reflection at the object surface only....... This assumption is violated by most biological material e.g. human skin, which exhibits subsurface scattering. In this study, we find that in general, structured light scans of biological tissue deviate significantly from the ground truth. We show that a large portion of this error can be predicted with a simple......, statistical linear model based on the scan geometry. As such, scans can be corrected without introducing any specially designed pattern strategy or hardware. We can effectively reduce the error in a structured light scanner applied to biological tissue by as much as factor of two or three....
Evolutionary modeling-based approach for model errors correction
Directory of Open Access Journals (Sweden)
S. Q. Wan
2012-08-01
Full Text Available The inverse problem of using the information in historical data to estimate model errors is one of the frontier research topics in science. In this study, we investigate such a problem using the classic Lorenz (1963) equation as a prediction model and the Lorenz equation with a periodic evolutionary function as an accurate representation of reality to generate "observational data."
On the basis of the intelligent features of evolutionary modeling (EM), including self-organization, self-adaptation and self-learning, the dynamic information contained in the historical data can be identified and extracted automatically by computer. Thereby, a new approach is proposed in the present paper to estimate model errors based on EM. Numerical tests demonstrate the ability of the new approach to correct model structural errors. In fact, it can actualize the combination of statistics and dynamics to a certain extent.
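The evolutionary-modeling idea of recovering unrepresented dynamics from historical data can be caricatured with a (1+1) evolution strategy fitting a single missing-forcing amplitude. This toy is far simpler than the paper's EM scheme, and every value in it is invented:

```python
import math
import random

random.seed(6)

# "Reality" contains a periodic forcing a*sin(t) that the prediction model
# lacks; we recover the amplitude from noisy historical data with a minimal
# (1+1) evolution strategy (mutate, keep if better, shrink on failure).
ts = [0.1 * k for k in range(200)]
A_TRUE = 0.7
obs = [A_TRUE * math.sin(t) + random.gauss(0, 0.05) for t in ts]

def misfit(a):
    return sum((o - a * math.sin(t)) ** 2 for t, o in zip(ts, obs))

a, step = 0.0, 0.5
for _ in range(500):
    cand = a + random.gauss(0, step)
    if misfit(cand) < misfit(a):
        a = cand            # keep the better "individual"
    else:
        step *= 0.995       # slowly shrink the mutation size
print(a)
```

A real EM system evolves the functional form itself (e.g. by genetic programming), not just one amplitude; the sketch only shows the select-and-mutate loop that drives it.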
Measurement error models, methods, and applications
Buonaccorsi, John P
2010-01-01
Over the last 20 years, comprehensive strategies for treating measurement error in complex models and accounting for the use of extra data to estimate measurement error parameters have emerged. Focusing on both established and novel approaches, "Measurement Error: Models, Methods, and Applications" provides an overview of the main techniques and illustrates their application in various models. It describes the impacts of measurement errors on naive analyses that ignore them and presents ways to correct for them across a variety of statistical models, from simple one-sample problems to regres
Error estimation and adaptive chemical transport modeling
Directory of Open Access Journals (Sweden)
Malte Braack
2014-09-01
Full Text Available We present a numerical method to use several chemical transport models of increasing accuracy and complexity in an adaptive way. In large parts of the domain, a simplified chemical model may be used, whereas in certain regions a more complex model is needed for accuracy reasons. A mathematically derived error estimator measures the modeling error and provides information on where to use more accurate models. The error is measured in terms of output functionals; therefore, one has to consider adjoint problems which carry sensitivity information. This concept is demonstrated by means of ozone formation and pollution emission.
Comparison of Prediction-Error-Modelling Criteria
DEFF Research Database (Denmark)
Jørgensen, John Bagterp; Jørgensen, Sten Bay
2007-01-01
Single and multi-step prediction-error methods based on the maximum likelihood and least squares criteria are compared. The prediction-error methods studied are based on predictions using the Kalman filter and Kalman predictors for a linear discrete-time stochastic state space model, which is a r...
Error Resilient Video Compression Using Behavior Models
Directory of Open Access Journals (Sweden)
Jacco R. Taal
2004-03-01
Full Text Available Wireless and Internet video applications are inherently subjected to bit errors and packet errors, respectively. This is especially so if constraints on the end-to-end compression and transmission latencies are imposed. Therefore, it is necessary to develop methods to optimize the video compression parameters and the rate allocation of these applications that take into account residual channel bit errors. In this paper, we study the behavior of a predictive (interframe) video encoder and model the encoder's behavior using only the statistics of the original input data and of the underlying channel prone to bit errors. The resulting data-driven behavior models are then used to carry out group-of-pictures partitioning and to control the rate of the video encoder in such a way that the overall quality of the decoded video with compression and channel errors is optimized.
Functional Error Models to Accelerate Nested Sampling
Josset, L.; Elsheikh, A. H.; Demyanov, V.; Lunati, I.
2014-12-01
The main challenge in groundwater problems is the reliance on large numbers of unknown parameters with a wide range of associated uncertainties. To translate this uncertainty to quantities of interest (for instance the concentration of pollutant in a drinking well), a large number of forward flow simulations is required. To make the problem computationally tractable, Josset et al. (2013, 2014) introduced the concept of functional error models. It consists of two elements: a proxy model that is cheaper to evaluate than the full physics flow solver, and an error model to account for the missing physics. The coupling of the proxy model and the error model provides reliable predictions that approximate the full physics model's responses. The error model is tailored to the problem at hand by building it for the question of interest. It follows a typical approach in machine learning where both the full physics and proxy models are evaluated for a training set (a subset of realizations) and the set of responses is used to construct the error model using functional data analysis. Once the error model is devised, a prediction of the full physics response for a new geostatistical realization can be obtained by computing the proxy response and applying the error model. We propose the use of functional error models in a Bayesian inference context by combining them with Nested Sampling (Skilling 2006; El Sheikh et al. 2013, 2014). Nested Sampling offers a means to compute the Bayesian Evidence by transforming the multidimensional integral into a 1D integral. The algorithm is simple: starting with an active set of samples, at each iteration, the sample with the lowest likelihood is set aside and replaced by a sample of higher likelihood. The main challenge is to find this sample of higher likelihood. We suggest a new approach: first the active set is sampled, both proxy and full physics models are run and the functional error model is built. Then, at each iteration of the Nested
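The Nested Sampling loop described above can be sketched in miniature for a 1-D Gaussian likelihood with a uniform prior. Rejection sampling stands in here for the hard step of drawing a higher-likelihood replacement; real samplers use MCMC or, as in this work, cheap proxy-plus-error-model evaluations:

```python
import math
import random

random.seed(4)

# Gaussian log-likelihood, uniform prior on [-5, 5]; the analytic evidence
# is (1/10) * integral of N(0,1) over [-5, 5], i.e. very close to 0.1.
def loglike(x):
    return -0.5 * x * x - 0.5 * math.log(2.0 * math.pi)

n_live, iters = 200, 600
live = [random.uniform(-5.0, 5.0) for _ in range(n_live)]

Z, X_prev = 0.0, 1.0
for i in range(1, iters + 1):
    idx = min(range(n_live), key=lambda k: loglike(live[k]))
    L_min = loglike(live[idx])
    X = math.exp(-i / n_live)            # expected shrinkage of prior volume
    Z += math.exp(L_min) * (X_prev - X)  # accumulate the 1-D evidence integral
    X_prev = X
    # Replace the worst live point with a prior draw above the likelihood
    # threshold (simple rejection; fine for a toy, too slow in practice).
    while True:
        cand = random.uniform(-5.0, 5.0)
        if loglike(cand) > L_min:
            live[idx] = cand
            break

# Add the contribution of the surviving live points.
Z += X_prev * sum(math.exp(loglike(x)) for x in live) / n_live
print(f"evidence estimate Z = {Z:.4f} (analytic ~ 0.1000)")
```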
Parameters and error of a theoretical model
International Nuclear Information System (INIS)
Moeller, P.; Nix, J.R.; Swiatecki, W.
1986-09-01
We propose a definition for the error of a theoretical model of the type whose parameters are determined from adjustment to experimental data. By applying a standard statistical method, the maximum-likelihood method, we derive expressions for both the parameters of the theoretical model and its error. We investigate the derived equations by solving them for simulated experimental and theoretical quantities generated by use of random number generators. 2 refs., 4 tabs
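In the special case of equal experimental errors, the maximum-likelihood estimate of such a theoretical-model error has a simple closed form, illustrated below with simulated deviations (all numbers invented, not from the nuclear-model application):

```python
import math
import random
import statistics

random.seed(5)
sigma_exp, sigma_th_true, n = 0.2, 0.5, 4000

# Simulated theory-minus-experiment deviations: experimental scatter plus
# an intrinsic "theoretical error" of the model, both Gaussian.
total_sd = math.hypot(sigma_exp, sigma_th_true)
d = [random.gauss(0.0, total_sd) for _ in range(n)]

# With equal experimental errors the maximum-likelihood estimate is
# closed-form: sigma_th^2 = mean(d^2) - sigma_exp^2 (clipped at zero).
# Unequal errors require solving the likelihood equation numerically.
ms = statistics.fmean(x * x for x in d)
sigma_th_hat = math.sqrt(max(ms - sigma_exp**2, 0.0))
print(sigma_th_hat)
```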
Soft error mechanisms, modeling and mitigation
Sayil, Selahattin
2016-01-01
This book introduces readers to various radiation soft-error mechanisms such as soft delays, radiation induced clock jitter and pulses, and single event (SE) coupling induced effects. In addition to discussing various radiation hardening techniques for combinational logic, the author also describes new mitigation strategies targeting commercial designs. Coverage includes novel soft error mitigation techniques such as the Dynamic Threshold Technique and Soft Error Filtering based on Transmission gate with varied gate and body bias. The discussion also includes modeling of SE crosstalk noise, delay and speed-up effects. Various mitigation strategies to eliminate SE coupling effects are also introduced. Coverage also includes the reliability of low power energy-efficient designs and the impact of leakage power consumption optimizations on soft error robustness. The author presents an analysis of various power optimization techniques, enabling readers to make design choices that reduce static power consumption an...
Spatial Linear Mixed Models with Covariate Measurement Errors.
Li, Yi; Tang, Haicheng; Lin, Xihong
2009-01-01
Spatial data with covariate measurement errors have been commonly observed in public health studies. Existing work mainly concentrates on parameter estimation using Gibbs sampling, and no work has been conducted to understand and quantify the theoretical impact of ignoring measurement error on spatial data analysis in the form of the asymptotic biases in regression coefficients and variance components when measurement error is ignored. Plausible implementations, from frequentist perspectives, of maximum likelihood estimation in spatial covariate measurement error models are also elusive. In this paper, we propose a new class of linear mixed models for spatial data in the presence of covariate measurement errors. We show that the naive estimators of the regression coefficients are attenuated while the naive estimators of the variance components are inflated, if measurement error is ignored. We further develop a structural modeling approach to obtaining the maximum likelihood estimator by accounting for the measurement error. We study the large sample properties of the proposed maximum likelihood estimator, and propose an EM algorithm to draw inference. All the asymptotic properties are shown under the increasing-domain asymptotic framework. We illustrate the method by analyzing the Scottish lip cancer data, and evaluate its performance through a simulation study, all of which elucidate the importance of adjusting for covariate measurement errors.
Sensitivity analysis of geometric errors in additive manufacturing medical models.
Pinto, Jose Miguel; Arrieta, Cristobal; Andia, Marcelo E; Uribe, Sergio; Ramos-Grez, Jorge; Vargas, Alex; Irarrazaval, Pablo; Tejos, Cristian
2015-03-01
Additive manufacturing (AM) models are used in medical applications for surgical planning, prosthesis design and teaching. For these applications, the accuracy of the AM models is essential. Unfortunately, this accuracy is compromised due to errors introduced by each of the building steps: image acquisition, segmentation, triangulation, printing and infiltration. However, the contribution of each step to the final error remains unclear. We performed a sensitivity analysis comparing errors obtained from a reference with those obtained modifying parameters of each building step. Our analysis considered global indexes to evaluate the overall error, and local indexes to show how this error is distributed along the surface of the AM models. Our results show that the standard building process tends to overestimate the AM models, i.e. models are larger than the original structures. They also show that the triangulation resolution and the segmentation threshold are critical factors, and that the errors are concentrated at regions with high curvatures. Errors could be reduced choosing better triangulation and printing resolutions, but there is an important need for modifying some of the standard building processes, particularly the segmentation algorithms. Copyright © 2015 IPEM. Published by Elsevier Ltd. All rights reserved.
Multiple indicators, multiple causes measurement error models.
Tekwe, Carmen D; Carter, Randy L; Cullings, Harry M; Carroll, Raymond J
2014-11-10
Multiple indicators, multiple causes (MIMIC) models are often employed by researchers studying the effects of an unobservable latent variable on a set of outcomes, when causes of the latent variable are observed. There are times, however, when the causes of the latent variable are not observed because measurements of the causal variable are contaminated by measurement error. The objectives of this paper are as follows: (i) to develop a novel model by extending the classical linear MIMIC model to allow both Berkson and classical measurement errors, defining the MIMIC measurement error (MIMIC ME) model; (ii) to develop likelihood-based estimation methods for the MIMIC ME model; and (iii) to apply the newly defined MIMIC ME model to atomic bomb survivor data to study the impact of dyslipidemia and radiation dose on the physical manifestations of dyslipidemia. As a by-product of our work, we also obtain a data-driven estimate of the variance of the classical measurement error associated with an estimate of the amount of radiation dose received by atomic bomb survivors at the time of their exposure. Copyright © 2014 John Wiley & Sons, Ltd.
Which forcing data errors matter most when modeling seasonal snowpacks?
Raleigh, M. S.; Lundquist, J. D.; Clark, M. P.
2014-12-01
High quality forcing data are critical when modeling seasonal snowpacks and snowmelt, but their quality is often compromised due to measurement errors or deficiencies in gridded data products (e.g., spatio-temporal interpolation, empirical parameterizations, or numerical weather model outputs). To assess the relative impact of errors in different meteorological forcings, many studies have conducted sensitivity analyses where errors (e.g., bias) are imposed on one forcing at a time and changes in model output are compared. Although straightforward, this approach only considers simplistic error structures and cannot quantify interactions in different meteorological forcing errors (i.e., it assumes a linear system). Here we employ the Sobol' method of global sensitivity analysis, which allows us to test how co-existing errors in six meteorological forcings (i.e., air temperature, precipitation, wind speed, humidity, incoming shortwave and longwave radiation) impact specific modeled snow variables (i.e., peak snow water equivalent, snowmelt rates, and snow disappearance timing). Using the Sobol' framework across a large number of realizations (>100000 simulations annually at each site), we test how (1) the type (e.g., bias vs. random errors), (2) distribution (e.g., uniform vs. normal), and (3) magnitude (e.g., instrument uncertainty vs. field uncertainty) of forcing errors impact key outputs from a physically based snow model (the Utah Energy Balance). We also assess the role of climate by conducting the analysis at sites in maritime, intermountain, continental, and tundra snow zones. For all outputs considered, results show that (1) biases in forcing data are more important than random errors, (2) the choice of error distribution can enhance the importance of specific forcings, and (3) the level of uncertainty considered dictates the relative importance of forcings. While the relative importance of forcings varied with snow variable and climate, the results broadly
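The variance-based logic of the Sobol' analysis can be sketched with a pick-freeze estimate of first-order indices on a toy linear "snow model". The three stand-in forcings and their weights are invented; the real study uses the Utah Energy Balance model and six forcings:

```python
import numpy as np

rng = np.random.default_rng(7)
N = 200000

# Toy model: output depends linearly on three forcings with different
# weights, so the analytic first-order Sobol' indices are a_i^2 / sum(a^2).
a = np.array([3.0, 1.0, 0.5])      # e.g. temperature, precipitation, wind

def model(x):
    return x @ a

X = rng.normal(size=(N, 3))
y = model(X)
V = y.var()

S = []
for i in range(3):
    X2 = rng.normal(size=(N, 3))   # resample every forcing...
    X2[:, i] = X[:, i]             # ...except forcing i (pick-freeze)
    Si = np.cov(y, model(X2))[0, 1] / V
    S.append(Si)
    print(f"S_{i} ~ {Si:.3f}  (exact {a[i]**2 / np.sum(a**2):.3f})")
```

First-order indices only capture main effects; the full Sobol' framework used in the study also yields total-effect indices, which pick up the forcing-error interactions that one-at-a-time sensitivity analyses miss.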
Effect Of Oceanic Lithosphere Age Errors On Model Discrimination
DeLaughter, J. E.
2016-12-01
The thermal structure of the oceanic lithosphere is the subject of a long-standing controversy. Because the thermal structure varies with age, it governs properties such as heat flow, density, and bathymetry, with important implications for plate tectonics. Though bathymetry, geoid, and heat flow for young ... appears to be shallower than expected for older lithosphere, indicating a plate model is a better fit. It is therefore useful to jointly fit bathymetry, geoid, and heat flow data to an inverse model to determine details of lithospheric structure. Though inverse models usually include the effect of errors in bathymetry, heat flow, and geoid, they rarely examine the effects of errors in age. This may introduce subtle biases into inverse models of the oceanic lithosphere. Because the inverse problem for thermal structure is both ill-posed and ill-conditioned, these overlooked errors may have a greater effect than expected. The problem is further complicated by the non-uniform distribution of ages and of age errors; for example, only 30% of the oceanic lithosphere is older than 80 MY and less than 3% is older than 150 MY. To determine the potential strength of such biases, I have used the age and error maps of Mueller et al. (2008) to forward model the bathymetry for half-space and GDH1 plate models. For ages less than 20 MY, both models give similar results. The errors induced by uncertainty in age are relatively large and suggest that, when possible, young lithosphere should be excluded when examining the lithospheric thermal model. As expected, GDH1 bathymetry converges asymptotically on the theoretical error-free result at older ages. The resulting uncertainty is nearly as large as that introduced by errors in the other parameters; in the absence of other errors, the models can only be distinguished for ages greater than 80 MY. These results suggest that the problem should be approached with the minimum possible number of
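The half-space and plate depth-age curves can be compared directly. The GDH1 coefficients below follow the commonly quoted Stein and Stein (1994) fit and should be treated as assumed values, not re-derived from this study:

```python
import math

# Depth-age curves in meters, age t in Myr. GDH1 coincides with the
# half-space curve for young ages and flattens toward an asymptote.
def halfspace(t):
    return 2600.0 + 365.0 * math.sqrt(t)

def gdh1(t):
    if t <= 20.0:
        return 2600.0 + 365.0 * math.sqrt(t)
    return 5651.0 - 2473.0 * math.exp(-0.0278 * t)

# A +/-2 Myr age error maps into a depth spread that shrinks with age for
# the plate model, while the model separation grows, which is why the two
# models only become distinguishable for old lithosphere.
for t in (10.0, 50.0, 100.0):
    d_sep = abs(halfspace(t) - gdh1(t))
    d_err = abs(gdh1(t + 2.0) - gdh1(t - 2.0))
    print(f"{t:5.0f} Myr: model separation {d_sep:7.1f} m, "
          f"age-error spread {d_err:5.1f} m")
```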
Understanding error generation in fused deposition modeling
International Nuclear Information System (INIS)
Bochmann, Lennart; Transchel, Robert; Wegener, Konrad; Bayley, Cindy; Helu, Moneer; Dornfeld, David
2015-01-01
Additive manufacturing offers completely new possibilities for the manufacturing of parts. The advantages of flexibility and convenience of additive manufacturing have had a significant impact on many industries, and optimizing part quality is crucial for expanding its utilization. This research aims to determine the sources of imprecision in fused deposition modeling (FDM). Process errors in terms of surface quality, accuracy and precision are identified and quantified, and an error-budget approach is used to characterize errors of the machine tool. It was determined that accuracy and precision in the y direction (0.08–0.30 mm) are generally greater than in the x direction (0.12–0.62 mm) and the z direction (0.21–0.57 mm). Furthermore, accuracy and precision tend to decrease at increasing axis positions. The results of this work can be used to identify possible process improvements in the design and control of FDM technology. (paper)
Nonclassical measurement errors in nonlinear models
DEFF Research Database (Denmark)
Madsen, Edith; Mulalic, Ismir
This paper deals with the consequences of measurement error in income (an explanatory variable) in discrete choice models, where an important policy parameter is the effect of income (reflecting the household budget) on the choice of travel mode. Since measurement error is likely to give misleading estimates of the income effect, it is of interest to investigate the magnitude of the estimation bias and, if possible, use estimation techniques that take the measurement error problem into account. We use data from the Danish National Travel Survey (NTS) and merge it with administrative register data that contain very detailed information about incomes. This gives a unique opportunity to learn about the magnitude and nature of the measurement error in income reported by the respondents in the Danish NTS compared to income from the administrative register (correct measure). We find that the classical ...
Effect of GPS errors on Emission model
DEFF Research Database (Denmark)
Lehmann, Anders; Gross, Allan
In this paper we show how Global Positioning Services (GPS) data obtained from smartphones can be used to model air quality in urban settings. The paper examines the uncertainty of smartphone location utilising GPS and ties this location uncertainty to air quality models. The results presented in this paper indicate that the location error from using smartphones is within the accuracy needed to use the location data in air quality modelling. The nature of smartphone location data enables more accurate and near-real-time air quality modelling and monitoring. The location data is harvested from user ...
Error propagation in energetic carrying capacity models
Pearse, Aaron T.; Stafford, Joshua D.
2014-01-01
Conservation objectives derived from carrying capacity models have been used to inform management of landscapes for wildlife populations. Energetic carrying capacity models are particularly useful in conservation planning for wildlife; these models use estimates of food abundance and energetic requirements of wildlife to target conservation actions. We provide a general method for incorporating a foraging threshold (i.e., the density of food at which foraging becomes unprofitable) when estimating food availability with energetic carrying capacity models. We use a hypothetical example to describe how past methods for adjusting foraging thresholds biased results of energetic carrying capacity models in certain instances. Adjusting foraging thresholds at the patch level for the species of interest provides results consistent with ecological foraging theory. Two case studies suggest variation in bias that, in certain instances, created large errors in conservation objectives and may have led to inefficient allocation of limited resources. Our results also illustrate how small errors or biases in input parameters, when extrapolated to large spatial extents, propagate into errors in conservation planning and can have negative implications for target populations.
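A patch-level foraging threshold can be sketched as follows. Food below the threshold density is treated as unprofitable and excluded patch by patch before summing energy; the densities, seed energy content, and daily requirement are hypothetical placeholders, not values from the study:

```python
def available_energy(patches, threshold, energy_per_kg):
    """Total food energy above a patch-level foraging threshold.
    patches: list of (food_density_kg_per_ha, area_ha)."""
    total = 0.0
    for density, area in patches:
        usable = max(0.0, density - threshold) * area  # kg accessible above threshold
        total += usable * energy_per_kg
    return total

patches = [(120.0, 10.0), (40.0, 25.0), (5.0, 50.0)]   # kg/ha, ha (hypothetical)
kj = available_energy(patches, threshold=20.0, energy_per_kg=14500.0)  # assumed kJ/kg
duck_energy_days = kj / 1200.0  # assumed daily requirement, kJ/bird/day
print(round(duck_energy_days))
```

Note that the third patch contributes nothing: applying the threshold per patch, rather than subtracting a landscape-wide constant, is what keeps the estimate consistent with foraging theory.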
Inference for One-Way ANOVA with Equicorrelation Error Structure
Directory of Open Access Journals (Sweden)
Weiyan Mu
2014-01-01
Full Text Available We consider inference in a one-way ANOVA model with equicorrelated error structures. Hypotheses of the equality of the means are discussed. A generalized F-test has been proposed in the literature to compare the means of all populations, but its performance was not discussed. We propose two methods, one based on generalized pivotal quantities and a parametric bootstrap method, to test the hypothesis of equality of the means. We compare the empirical performance of the proposed tests with the generalized F-test. The simulation results show that the generalized F-test does not perform well in terms of Type I error rate, whereas the proposed tests perform much better. We also provide corresponding simultaneous confidence intervals for all pair-wise differences of the means, whose coverage probabilities are close to the confidence level.
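The parametric bootstrap idea can be sketched in a toy form: simulate groups whose errors share an equicorrelation structure, then calibrate a between-group statistic against its simulated null distribution. The correlation value, group sizes, and means below are illustrative, and the equicorrelation parameter is assumed known rather than estimated:

```python
import random, statistics

def simulate_group(mu, n, rho, sigma=1.0, rng=random):
    # Equicorrelated errors: e_ij = sqrt(rho)*b_i + sqrt(1-rho)*z_ij,
    # which gives corr(e_ij, e_ik) = rho within group i.
    b = rng.gauss(0, sigma)
    return [mu + (rho ** 0.5) * b + ((1 - rho) ** 0.5) * rng.gauss(0, sigma)
            for _ in range(n)]

def between_group_stat(groups):
    means = [statistics.mean(g) for g in groups]
    grand = statistics.mean(means)
    return sum((m - grand) ** 2 for m in means)

rng = random.Random(7)
data = [simulate_group(mu, n=10, rho=0.3, rng=rng) for mu in (0.0, 0.0, 3.0)]
observed = between_group_stat(data)

# Parametric bootstrap of the statistic's null distribution (H0: equal means)
boot = [between_group_stat([simulate_group(0.0, 10, 0.3, rng=rng) for _ in range(3)])
        for _ in range(2000)]
p_value = sum(b >= observed for b in boot) / len(boot)
print(p_value)
```

Because the bootstrap draws respect the within-group correlation, the test's Type I error stays near nominal even when a naive F-test would not.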
Directory of Open Access Journals (Sweden)
Tea Ya. Danelyan
2014-01-01
Full Text Available The article states the general principles of structural modeling from the perspective of systems theory, and relates it to other types of modeling in order to align them with the main directions of modeling. Mathematical methods of structural modeling, in particular the method of expert evaluations, are considered.
Modeling error distributions of growth curve models through Bayesian methods.
Zhang, Zhiyong
2016-06-01
Growth curve models are widely used in social and behavioral sciences. However, typical growth curve models often assume that the errors are normally distributed, although non-normal data may be even more common than normal data. To avoid possible statistical inference problems from blindly assuming normality, a general Bayesian framework is proposed to flexibly model normal and non-normal data through explicit specification of the error distributions. A simulation study shows that when the distribution of the error is correctly specified, one can avoid the loss in efficiency of standard error estimates. A real example on the analysis of mathematical ability growth data from the Early Childhood Longitudinal Study, Kindergarten Class of 1998-99, is used to show the application of the proposed methods. Instructions and code on how to conduct growth curve analysis with both normal and non-normal error distributions using the MCMC procedure of SAS are provided.
Varying coefficients model with measurement error.
Li, Liang; Greene, Tom
2008-06-01
We propose a semiparametric partially varying coefficient model to study the relationship between serum creatinine concentration and the glomerular filtration rate (GFR) among kidney donors and patients with chronic kidney disease. A regression model is used to relate serum creatinine to GFR and demographic factors, in which the coefficient of GFR is expressed as a function of age to allow its effect to be age-dependent. GFR measurements obtained from the clearance of a radioactively labeled isotope are assumed to be a surrogate for the true GFR, with the relationship between measured and true GFR expressed using an additive error model. We use locally corrected score equations to estimate parameters and coefficient functions, and propose an expected generalized cross-validation (EGCV) method to select the kernel bandwidth. The performance of the proposed methods, which avoid distributional assumptions on the true GFR and residuals, is investigated by simulation. Accounting for measurement error using the proposed model reduced apparent inconsistencies in the relationship between serum creatinine and GFR among different clinical data sets derived from kidney donor and chronic kidney disease source populations.
Yan, Ying; Yi, Grace Y
2016-07-01
Covariate measurement error occurs commonly in survival analysis. Under the proportional hazards model, measurement error effects have been well studied, and various inference methods have been developed to correct for error effects under such a model. In contrast, error-contaminated survival data under the additive hazards model have received relatively less attention. In this paper, we investigate this problem by exploring measurement error effects on parameter estimation and the change of the hazard function. New insights into measurement error effects are revealed, as opposed to well-documented results for the Cox proportional hazards model. We propose a class of bias correction estimators that embraces certain existing estimators as special cases. In addition, we exploit the regression calibration method to reduce measurement error effects. Theoretical results for the developed methods are established, and numerical assessments are conducted to illustrate the finite sample performance of our methods.
Modeling human response errors in synthetic flight simulator domain
Ntuen, Celestine A.
1992-01-01
This paper presents a control theoretic approach to modeling human response errors (HRE) in the flight simulation domain. The human pilot is modeled as a supervisor of a highly automated system. The synthesis uses the theory of optimal control pilot modeling for integrating the pilot's observation error and the error due to the simulation model (experimental error). Methods for solving the HRE problem are suggested. Experimental verification of the models will be tested in a flight quality handling simulation.
Error Propagation in Equations for Geochemical Modeling of ...
Indian Academy of Sciences (India)
This paper presents error propagation equations for modeling of radiogenic isotopes during mixing of two components or end-members. These equations can be used to estimate errors on an isotopic ratio in the mixture of two components, as a function of the analytical errors or the total errors of geological field sampling ...
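A generic first-order (Gaussian) error-propagation sketch for two-component mixing follows. The mixing formula is the standard concentration-weighted isotope-ratio mixture; the specific propagation equations of the paper are not reproduced here, and all numerical values (a hypothetical Sr mixing case) are placeholders:

```python
import math

def mix_ratio(f, Ca, Ra, Cb, Rb):
    """Isotopic ratio of a two-component mixture (concentration-weighted):
    R_m = (f*Ca*Ra + (1-f)*Cb*Rb) / (f*Ca + (1-f)*Cb)."""
    return (f * Ca * Ra + (1 - f) * Cb * Rb) / (f * Ca + (1 - f) * Cb)

def propagated_error(func, values, sigmas, h=1e-6):
    """First-order error propagation via numerical partial derivatives:
    sigma_out^2 = sum_i (d func / d x_i)^2 * sigma_i^2."""
    var = 0.0
    for i, s in enumerate(sigmas):
        up = list(values); up[i] += h
        dn = list(values); dn[i] -= h
        deriv = (func(*up) - func(*dn)) / (2 * h)  # central difference
        var += (deriv * s) ** 2
    return math.sqrt(var)

# Hypothetical Sr mixing: f, Ca (ppm), 87Sr/86Sr of A, Cb (ppm), ratio of B
vals   = [0.4, 400.0, 0.7040, 150.0, 0.7120]
sigmas = [0.01, 8.0, 0.00002, 5.0, 0.00002]
print(round(propagated_error(mix_ratio, vals, sigmas), 6))
```

Analytic partial derivatives would be exact, as in the paper's equations; the finite-difference version above is merely a compact way to check them.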
The sensitivity of evapotranspiration models to errors in model ...
African Journals Online (AJOL)
Dr Obe
ABSTRACT. Five evapotranspiration (Et) models (the Penman, Blaney-Criddle, Thornthwaite, Blaney-Morin-Nigeria, and Jensen-Haise models) were analyzed for parameter sensitivity under Nigerian climatic conditions. The sensitivity of each model to errors in any of its measured parameters (variables) was ...
Dynamic Error Analysis Method for Vibration Shape Reconstruction of Smart FBG Plate Structure
Directory of Open Access Journals (Sweden)
Hesheng Zhang
2016-01-01
Full Text Available Shape reconstruction of aerospace plate structures is an important issue for safe operation of aerospace vehicles. One way to achieve such reconstruction is by constructing a smart fiber Bragg grating (FBG) plate structure with discrete distributed FBG sensor arrays and using reconstruction algorithms, in which error analysis of the reconstruction algorithm is a key link. Considering that traditional error analysis methods can only deal with static data, a new dynamic error analysis method is proposed based on the LMS algorithm for shape reconstruction of smart FBG plate structures. Firstly, the smart FBG structure and the reconstruction method based on an orthogonal curved network are introduced. Then, a dynamic error analysis model is proposed for dynamic reconstruction error analysis. Thirdly, the parameters of the proposed dynamic error analysis model are identified based on the least-mean-square (LMS) algorithm. Finally, an experimental verification platform is constructed and experimental dynamic reconstruction analysis is performed. Experimental results show that the dynamic characteristics of the reconstruction performance for plate structures can be obtained accurately with the proposed dynamic error analysis method. The proposed method can also be used for other data acquisition and data processing systems as a general error analysis method.
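The LMS identification step can be illustrated with a minimal adaptive-filter sketch. This is a generic 2-tap FIR identification, not the paper's reconstruction model; the "true" weights and noise level are assumed for the demonstration:

```python
import random

def lms_identify(x, d, taps=2, mu=0.05, passes=5):
    """Identify FIR model weights by least-mean-square (stochastic gradient) steps."""
    w = [0.0] * taps
    for _ in range(passes):
        for n in range(taps - 1, len(x)):
            frame = [x[n - i] for i in range(taps)]
            y = sum(wi * xi for wi, xi in zip(w, frame))   # current model output
            e = d[n] - y                                   # instantaneous error
            w = [wi + 2 * mu * e * xi for wi, xi in zip(w, frame)]  # gradient step
    return w

rng = random.Random(0)
x = [rng.uniform(-1, 1) for _ in range(500)]               # excitation signal
true_w = [0.8, -0.3]                                       # assumed "true" dynamics
d = [true_w[0] * x[n] + true_w[1] * x[n - 1] + rng.gauss(0, 0.01)
     for n in range(len(x))]                               # d[0] is never used below
w = lms_identify(x, d)
print([round(wi, 2) for wi in w])
```

Because LMS updates on each new sample, it tracks time-varying error dynamics, which is exactly why it suits the dynamic (rather than static) analysis described above.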
Chen, Yuzhen; Xie, Fugui; Liu, Xinjun; Zhou, Yanhua
2014-07-01
Parallel robots with SCARA (selective compliance assembly robot arm) motions are utilized widely in the field of high-speed pick-and-place manipulation. Error modeling for these robots generally simplifies each parallelogram structure they contain to a single link. Because the resulting error model fails to reflect the error features of the parallelogram structures, accuracy design and kinematic calibration based on that model are undermined. An error modeling methodology is proposed to establish an error model of parallel robots with parallelogram structures. The error model can embody the geometric errors of all joints, including those of the parallelogram structures, and thus captures more exhaustively the factors that reduce the accuracy of the robot. Based on the error model and some sensitivity indices defined in the sense of statistics, sensitivity analysis is carried out. Accordingly, some atlases are depicted to express each geometric error's influence on the moving platform's pose errors. From these atlases, the geometric errors that have greater impact on the accuracy of the moving platform are identified, and some sensitive areas where the pose errors of the moving platform are extremely sensitive to the geometric errors are also figured out. By taking into account error factors that are generally neglected in existing modeling methods, the proposed modeling method can thoroughly disclose the process of error transmission and enhance the efficacy of accuracy design and calibration.
Modeling Errors in Daily Precipitation Measurements: Additive or Multiplicative?
Tian, Yudong; Huffman, George J.; Adler, Robert F.; Tang, Ling; Sapiano, Matthew; Maggioni, Viviana; Wu, Huan
2013-01-01
The definition and quantification of uncertainty depend on the error model used. For uncertainties in precipitation measurements, two types of error models have been widely adopted: the additive error model and the multiplicative error model. This leads to incompatible specifications of uncertainties and impedes intercomparison and application. In this letter, we assess the suitability of both models for satellite-based daily precipitation measurements in an effort to clarify the uncertainty representation. Three criteria were employed to evaluate the applicability of either model: (1) better separation of the systematic and random errors; (2) applicability to the large range of variability in daily precipitation; and (3) better predictive skills. It is found that the multiplicative error model is a much better choice under all three criteria. It extracted the systematic errors more cleanly, was more consistent with the large variability of precipitation measurements, and produced superior predictions of the error characteristics. The additive error model had several weaknesses, such as non-constant variance resulting from systematic errors leaking into random errors, and the lack of prediction capability. Therefore, the multiplicative error model is a better choice.
Radiation risk estimation based on measurement error models
Masiuk, Sergii; Shklyar, Sergiy; Chepurny, Mykola; Likhtarov, Illya
2017-01-01
This monograph discusses statistics and risk estimates applied to radiation damage under the presence of measurement errors. The first part covers nonlinear measurement error models, with a particular emphasis on efficiency of regression parameter estimators. In the second part, risk estimation in models with measurement errors is considered. Efficiency of the methods presented is verified using data from radio-epidemiological studies.
Modeling gene expression measurement error: a quasi-likelihood approach
Directory of Open Access Journals (Sweden)
Strimmer Korbinian
2003-03-01
Full Text Available Abstract Background Using suitable error models for gene expression measurements is essential in the statistical analysis of microarray data. However, the true probabilistic model underlying gene expression intensity readings is generally not known. Instead, in currently used approaches some simple parametric model is assumed (usually a transformed normal distribution) or the empirical distribution is estimated. However, both these strategies may not be optimal for gene expression data, as the non-parametric approach ignores known structural information whereas the fully parametric models run the risk of misspecification. A further related problem is the choice of a suitable scale for the model (e.g. observed vs. log-scale). Results Here a simple semi-parametric model for gene expression measurement error is presented. In this approach inference is based on an approximate likelihood function (the extended quasi-likelihood). Only partial knowledge about the unknown true distribution is required to construct this function. In the case of gene expression this information is available in the form of the postulated (e.g. quadratic) variance structure of the data. As the quasi-likelihood behaves (almost) like a proper likelihood, it allows for the estimation of calibration and variance parameters, and it is also straightforward to obtain corresponding approximate confidence intervals. Unlike most other frameworks, it also allows analysis on any preferred scale, i.e. both on the original linear scale as well as on a transformed scale. It can also be employed in regression approaches to model systematic (e.g. array or dye) effects. Conclusions The quasi-likelihood framework provides a simple and versatile approach to analyze gene expression data that does not make any strong distributional assumptions about the underlying error model. For several simulated as well as real data sets it provides a better fit to the data than competing models. In an example it also
Modeling measurement error in tumor characterization studies
Directory of Open Access Journals (Sweden)
Marjoram Paul
2011-07-01
Full Text Available Abstract Background Etiologic studies of cancer increasingly use molecular features such as gene expression, DNA methylation and sequence mutation to subclassify the cancer type. In large population-based studies, the tumor tissues available for study are archival specimens that provide variable amounts of amplifiable DNA for molecular analysis. As molecular features measured from small amounts of tumor DNA are inherently noisy, we propose a novel approach to improve statistical efficiency when comparing groups of samples. We illustrate the phenomenon using the MethyLight technology, applying our proposed analysis to compare MLH1 DNA methylation levels in males and females studied in the Colon Cancer Family Registry. Results We introduce two methods for computing empirical weights to model heteroscedasticity that is caused by sampling variable quantities of DNA for molecular analysis. In a simulation study, we show that using these weights in a linear regression model is more powerful for identifying differentially methylated loci than standard regression analysis. The increase in power depends on the underlying relationship between variation in outcome measure and input DNA quantity in the study samples. Conclusions Tumor characteristics measured from small amounts of tumor DNA are inherently noisy. We propose a statistical analysis that accounts for the measurement error due to sampling variation of the molecular feature and show how it can improve the power to detect differential characteristics between patient groups.
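The empirical-weighting idea can be sketched with a closed-form weighted least squares fit in which low-DNA-input samples are down-weighted. The readings, group coding, and DNA quantities below are toy numbers, not Colon Cancer Family Registry data, and weights proportional to inverse variance are assumed:

```python
def wls(x, y, w):
    """Closed-form weighted least squares for y = a + b*x.
    Weights w are assumed proportional to 1/variance of each reading."""
    sw = sum(w)
    mx = sum(wi * xi for wi, xi in zip(w, x)) / sw     # weighted mean of x
    my = sum(wi * yi for wi, yi in zip(w, y)) / sw     # weighted mean of y
    b = sum(wi * (xi - mx) * (yi - my) for wi, xi, yi in zip(w, x, y)) / \
        sum(wi * (xi - mx) ** 2 for wi, xi in zip(w, x))
    return my - b * mx, b                              # intercept, slope

# Hypothetical methylation readings: x = group indicator, w ~ input DNA (ng)
x = [0, 0, 0, 1, 1, 1]
y = [0.20, 0.25, 0.60, 0.55, 0.50, 0.52]
w = [50.0, 40.0, 2.0, 45.0, 50.0, 48.0]   # the noisy 0.60 came from only 2 ng input
intercept, slope = wls(x, y, w)
print(round(intercept, 3), round(slope, 3))
```

With a binary group indicator, the WLS slope is simply the difference of the weighted group means, so the low-input outlier barely disturbs the group contrast.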
PRESAGE: Protecting Structured Address Generation against Soft Errors
Energy Technology Data Exchange (ETDEWEB)
Sharma, Vishal C.; Gopalakrishnan, Ganesh; Krishnamoorthy, Sriram
2016-12-28
Modern computer scaling trends in pursuit of larger component counts and power efficiency have, unfortunately, led to less reliable hardware and consequently to soft errors escaping into application data ("silent data corruptions"). Techniques to enhance system resilience hinge on the availability of efficient error detectors that have high detection rates, low false positive rates, and low computational overhead. Unfortunately, efficient detectors for faults during address generation have not been widely researched (especially in the context of indexing large arrays). We present a novel lightweight compiler-driven technique called PRESAGE for detecting bit-flips affecting structured address computations. A key insight underlying PRESAGE is that any address computation scheme that propagates an already incurred error is better than a scheme that corrupts one particular array access but otherwise (falsely) appears to compute perfectly. Ensuring the propagation of errors allows one to place detectors at loop exit points and helps turn silent corruptions into easily detectable error situations. Our experiments using the PolyBench benchmark suite indicate that PRESAGE-based error detectors have a high error-detection rate while incurring low overheads.
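The core PRESAGE insight (make address errors propagate so a single loop-exit check can catch them) can be modeled in a few lines. This is an illustrative simulation of the idea, not the actual compiler transformation:

```python
# Compute array offsets *incrementally* so that a bit-flip in the offset
# register propagates to every later offset, making it visible to one cheap
# checker at loop exit instead of silently corrupting a single access.

def access_pattern(n, stride, flip_at=None, flip_bit=0):
    offsets, off = [], 0
    for i in range(n):
        if i == flip_at:
            off ^= 1 << flip_bit       # simulated soft error in the offset register
        offsets.append(off)
        off += stride                  # dependent (propagating) address computation
    return offsets, off

# Fault-free run: the final offset acts as a loop-exit detector.
offsets, final = access_pattern(8, stride=4)
assert final == 8 * 4

# Faulty run: the flip reaches the final offset, so the exit check catches it.
_, final_faulty = access_pattern(8, stride=4, flip_at=3, flip_bit=5)
print(final_faulty != 8 * 4)           # prints True: corruption detected at exit
```

Had each offset been recomputed independently as `i * stride`, the flip would corrupt one access and vanish, which is exactly the "falsely appears to compute perfectly" case the paper warns against.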
Directory of Open Access Journals (Sweden)
Pooyan Vahidi Pashsaki
2016-06-01
Full Text Available Accuracy of a five-axis CNC machine tool is affected by a vast number of error sources. This paper investigates volumetric error modeling and its compensation as the basis for creating new tool paths that improve workpiece accuracy. The volumetric error model of a five-axis machine tool with the RTTTR configuration (tilting head B-axis and rotary table A′-axis on the workpiece side) was set up using rigid-body kinematics and homogeneous transformation matrices, and includes 43 error components, each of which can separately reduce the geometric and dimensional accuracy of workpieces. Machining accuracy is governed by the position of the tool center point (TCP) relative to the workpiece: when the cutting tool deviates from its ideal position relative to the workpiece, a machining error results. The compensation process comprises detecting the current tool path and analyzing the geometric errors of the RTTTR five-axis CNC machine tool, translating current component positions into compensated positions using the kinematic error model, converting the newly created components into new tool paths using the compensation algorithms, and finally editing the old G-codes with a G-code generator algorithm.
Prediction Errors of Molecular Machine Learning Models Lower than Hybrid DFT Error.
Faber, Felix A; Hutchison, Luke; Huang, Bing; Gilmer, Justin; Schoenholz, Samuel S; Dahl, George E; Vinyals, Oriol; Kearnes, Steven; Riley, Patrick F; von Lilienfeld, O Anatole
2017-11-14
We investigate the impact of choosing regressors and molecular representations for the construction of fast machine learning (ML) models of 13 electronic ground-state properties of organic molecules. The performance of each regressor/representation/property combination is assessed using learning curves which report out-of-sample errors as a function of training set size with up to ∼118k distinct molecules. Molecular structures and properties at the hybrid density functional theory (DFT) level of theory come from the QM9 database [Ramakrishnan et al., Sci. Data 2014, 1, 140022] and include enthalpies and free energies of atomization, HOMO/LUMO energies and gap, dipole moment, polarizability, zero point vibrational energy, heat capacity, and the highest fundamental vibrational frequency. Various molecular representations have been studied (Coulomb matrix, bag of bonds, BAML and ECFP4, molecular graphs (MG)), as well as newly developed distribution-based variants including histograms of distances (HD), angles (HDA/MARAD), and dihedrals (HDAD). Regressors include linear models (Bayesian ridge regression (BR) and linear regression with elastic net regularization (EN)), random forest (RF), kernel ridge regression (KRR), and two types of neural networks, graph convolutions (GC) and gated graph networks (GG). Out-of-sample errors are strongly dependent on the choice of representation, regressor, and molecular property. Electronic properties are typically best accounted for by MG and GC, while energetic properties are better described by HDAD and KRR. The specific combinations with the lowest out-of-sample errors in the ∼118k training set size limit are (free) energies and enthalpies of atomization (HDAD/KRR), HOMO/LUMO eigenvalue and gap (MG/GC), dipole moment (MG/GC), static polarizability (MG/GG), zero point vibrational energy (HDAD/KRR), heat capacity at room temperature (HDAD/KRR), and highest fundamental vibrational frequency (BAML/RF). We present numerical
Quasi-eccentricity error modeling and compensation in vision metrology
Shen, Yijun; Zhang, Xu; Cheng, Wei; Zhu, Limin
2018-04-01
Circular targets are commonly used in vision applications for their detection accuracy and robustness. The eccentricity error of the circular target caused by perspective projection is one of the main factors of measurement error and needs to be compensated in high-accuracy measurement. In this study, the impact of lens distortion on the eccentricity error is comprehensively investigated. The traditional eccentricity error turns into a quasi-eccentricity error in the non-linear camera model. The quasi-eccentricity error model is established by comparing the quasi-center of the distorted ellipse with the true projection of the object circle center. Then, an eccentricity error compensation framework is proposed which compensates the error by iteratively refining the image point toward the true projection of the circle center. Both simulation and real experiments confirm the effectiveness of the proposed method in several vision applications.
Modelling the basic error tendencies of human operators
International Nuclear Information System (INIS)
Reason, J.
1988-01-01
The paper outlines the primary structural features of human cognition: a limited, serial workspace interacting with a parallel distributed knowledge base. It is argued that the essential computational features of human cognition - to be captured by an adequate operator model - reside in the mechanisms by which stored knowledge structures are selected and brought into play. Two such computational 'primitives' are identified: similarity-matching and frequency-gambling. These two retrieval heuristics, it is argued, shape both the overall character of human performance (i.e. its heavy reliance on pattern-matching) and its basic error tendencies ('strong-but-wrong' responses, confirmation, similarity and frequency biases, and cognitive 'lock-up'). The various features of human cognition are integrated with a dynamic operator model capable of being represented in software form. This computer model, when run repeatedly with a variety of problem configurations, should produce a distribution of behaviours which, in toto, simulate the general character of operator performance. (author)
WORKING MEMORY STRUCTURE REVEALED IN ANALYSIS OF RECALL ERRORS
Directory of Open Access Journals (Sweden)
Regina V Ershova
2017-12-01
Full Text Available We analyzed working memory errors from 193 Russian college students taking the Tarnow Unchunkable Test, which utilizes double-digit items on a visual display. In three-item trials with at most one error per trial, single incorrect tens and ones digits ("singlets") were overrepresented and made up the majority of errors, indicating a base-10 organization. These errors indicate that there are separate memory maps for each position and that there are pointers that can move primarily within these maps. Several pointers make up a pointer collection. The number of pointer collections possible is the working memory capacity limit. A model for self-organizing maps is constructed in which the organization is created by turning common pointer collections into maps, thereby replacing a pointer collection with a single pointer. The factors 5 and 11 were underrepresented in the errors, presumably because base-10 properties beyond positional order were used for error correction, perhaps reflecting the existence of additional maps of integers divisible by 5 and integers divisible by 11.
Quality assurance and human error effects on the structural safety
International Nuclear Information System (INIS)
Bertero, R.; Lopez, R.; Sarrate, M.
1991-01-01
Statistical surveys show that the frequency of failure of structures is much larger than that expected by the codes. Evidence exists that human errors (especially during the design process) are the main cause of the difference between the failure probability admitted by codes and reality. In this paper, the attenuation of human error effects using tools of quality assurance is analyzed. In particular, the importance of the independent design review is highlighted, and different approaches are discussed. The experience from the Atucha II project, as well as the USA and German practice on independent design review, is summarized. (Author)
Measurement Error in Designed Experiments for Second Order Models
McMahan, Angela Renee
1997-01-01
Measurement error (ME) in the factor levels of designed experiments is often overlooked in the planning and analysis of experimental designs. A familiar model for this type of ME, called the Berkson error model, is discussed at length. Previous research has examined the effect of Berkson error on two-level factorial and fractional factorial designs. This dissertation extends the examination to designs for second order models. The results are used to suggest ...
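The Berkson model mentioned above behaves quite differently from classical measurement error, and a small simulation (assumed error variances and sample sizes) makes the contrast concrete: under Berkson error the actual factor level varies around the set level and OLS stays nearly unbiased, whereas under classical error the observed level is a noisy version of the true one and the OLS slope is attenuated toward zero:

```python
import random

def ols_slope(x, y):
    n = len(x); mx = sum(x) / n; my = sum(y) / n
    return sum((a - mx) * (b - my) for a, b in zip(x, y)) / \
           sum((a - mx) ** 2 for a in x)

rng = random.Random(1)
beta = 2.0                                                  # assumed true slope

# Berkson: X_true = x_set + U; response driven by X_true, regress on x_set
x_set = [rng.uniform(0, 10) for _ in range(20000)]          # design settings
x_true_b = [x + rng.gauss(0, 1.5) for x in x_set]
y_b = [beta * x + rng.gauss(0, 1.0) for x in x_true_b]
slope_berkson = ols_slope(x_set, y_b)

# Classical: X_obs = X_true + U; response driven by X_true, regress on X_obs
x_true_c = [rng.uniform(0, 10) for _ in range(20000)]
y_c = [beta * x + rng.gauss(0, 1.0) for x in x_true_c]
x_obs = [x + rng.gauss(0, 1.5) for x in x_true_c]
slope_classical = ols_slope(x_obs, y_c)

print(round(slope_berkson, 2), round(slope_classical, 2))
```

The classical slope shrinks by roughly the reliability ratio var(X)/(var(X) + var(U)); the Berkson slope does not, which is why Berkson error in factor levels mainly inflates variance rather than biasing effect estimates.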
On the Correspondence between Mean Forecast Errors and Climate Errors in CMIP5 Models
Energy Technology Data Exchange (ETDEWEB)
Ma, H. -Y.; Xie, S.; Klein, S. A.; Williams, K. D.; Boyle, J. S.; Bony, S.; Douville, H.; Fermepin, S.; Medeiros, B.; Tyteca, S.; Watanabe, M.; Williamson, D.
2014-02-01
The present study examines the correspondence between short- and long-term systematic errors in five atmospheric models by comparing the 16 five-day hindcast ensembles from the Transpose Atmospheric Model Intercomparison Project II (Transpose-AMIP II) for July–August 2009 (short term) to the climate simulations from phase 5 of the Coupled Model Intercomparison Project (CMIP5) and AMIP for the June–August mean conditions of the years of 1979–2008 (long term). Because the short-term hindcasts were conducted with identical climate models used in the CMIP5/AMIP simulations, one can diagnose over what time scale systematic errors in these climate simulations develop, thus yielding insights into their origin through a seamless modeling approach. The analysis suggests that most systematic errors of precipitation, clouds, and radiation processes in the long-term climate runs are present by day 5 in ensemble average hindcasts in all models. Errors typically saturate after few days of hindcasts with amplitudes comparable to the climate errors, and the impacts of initial conditions on the simulated ensemble mean errors are relatively small. This robust bias correspondence suggests that these systematic errors across different models likely are initiated by model parameterizations since the atmospheric large-scale states remain close to observations in the first 2–3 days. However, biases associated with model physics can have impacts on the large-scale states by day 5, such as zonal winds, 2-m temperature, and sea level pressure, and the analysis further indicates a good correspondence between short- and long-term biases for these large-scale states. Therefore, improving individual model parameterizations in the hindcast mode could lead to the improvement of most climate models in simulating their climate mean state and potentially their future projections.
Bayesian Total Error Analysis - An Error Sensitive Approach to Model Calibration
Franks, S. W.; Kavetski, D.; Kuczera, G.
2002-12-01
The majority of environmental models require calibration of their parameters before meaningful predictions of catchment behaviour can be made. Despite the importance of reliable parameter estimates, there are growing concerns about the ability of objective-based inference methods to adequately calibrate environmental models. The problem lies with the formulation of the objective or likelihood function, which is currently implemented using essentially ad hoc methods. We outline limitations of current calibration methodologies and introduce a more systematic Bayesian Total Error Analysis (BATEA) framework for environmental model calibration and validation, which imposes a hitherto missing rigour in environmental modelling by requiring the specification of physically realistic model and data uncertainty models with explicit assumptions that can and must be tested against available evidence. The BATEA formalism enables inference of the hydrological parameters and also of any latent variables of the uncertainty models, e.g., precipitation depth errors. The latter could be useful for improving data sampling and measurement methodologies. In addition, distinguishing between the various sources of errors will reduce the current ambiguity about parameter and predictive uncertainty and enable rational testing of environmental models' hypotheses. Markov chain Monte Carlo methods are employed to manage the increased computational requirements of BATEA. A case study using synthetic data demonstrates that explicitly accounting for forcing errors leads to immediate advantages over traditional regression (e.g., standard least squares calibration), which ignores rainfall history corruption, and over pseudo-likelihood methods (e.g., GLUE), which do not explicitly characterise data and model errors. It is precisely data and model errors that are responsible for the need for calibration in the first place; we expect that understanding these errors will force fundamental shifts in the model
Structural damage detection robust against time synchronization errors
International Nuclear Information System (INIS)
Yan, Guirong; Dyke, Shirley J
2010-01-01
Structural damage detection based on wireless sensor networks can be affected significantly by time synchronization errors among sensors. Precise time synchronization of sensor nodes has been viewed as crucial for addressing this issue. However, precise time synchronization over a long period of time is often impractical in large wireless sensor networks due to two inherent challenges. First, time synchronization needs to be performed periodically, requiring frequent wireless communication among sensors at significant energy cost. Second, significant time synchronization errors may result from node failures which are likely to occur during long-term deployment over civil infrastructures. In this paper, a damage detection approach is proposed that is robust against time synchronization errors in wireless sensor networks. The paper first examines the ways in which time synchronization errors distort identified mode shapes, and then proposes a strategy for reducing distortion in the identified mode shapes. Modified values for these identified mode shapes are then used in conjunction with flexibility-based damage detection methods to localize damage. This alternative approach relaxes the need for frequent sensor synchronization and can tolerate significant time synchronization errors caused by node failures. The proposed approach is successfully demonstrated through numerical simulations and experimental tests in a lab
Study of error modeling in kinematic calibration of parallel manipulators
Directory of Open Access Journals (Sweden)
Liping Wang
2016-10-01
Error modeling is the foundation of kinematic calibration, which is a main approach to assuring the accuracy of parallel manipulators. This article investigates the influence of the error model on the kinematic calibration of parallel manipulators. Based on a coupling analysis between error parameters, an identifiability index for evaluating the error model is proposed. Taking a 3PRS parallel manipulator as an example, three error models with different values of the identifiability index are given. With the same parameter identification, measurement, and compensation methods, computer simulations and prototype experiments of the kinematic calibration with each error model are performed. The simulation and experiment results show that kinematic calibration using the error model with a larger identifiability index leads to better accuracy of the manipulator. An approach to error modeling is then proposed to obtain a larger value of the identifiability index. This study is also useful for error modeling in the kinematic calibration of other parallel manipulators.
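Parameter coupling of the kind the abstract describes can be probed through the singular values of the identification Jacobian. The sketch below uses the inverse condition number as an illustrative identifiability proxy; it is an assumption for demonstration, not the paper's exact index, and the Jacobians are made up:

```python
import numpy as np

def identifiability_index(jacobian):
    """One plausible identifiability proxy: the inverse condition number of
    the identification Jacobian (1 = fully decoupled error parameters,
    near 0 = some parameter combination is practically unidentifiable)."""
    s = np.linalg.svd(jacobian, compute_uv=False)
    return float(s[-1] / s[0])

# Two hypothetical error models for the same mechanism: in model B two
# error parameters are strongly coupled (near-collinear Jacobian columns).
J_a = np.array([[1.0, 0.0], [0.0, 1.0], [0.5, -0.5]])
J_b = np.array([[1.0, 0.99], [1.0, 1.01], [0.5, 0.5]])

print(identifiability_index(J_a) > identifiability_index(J_b))  # True
```

A larger value of such an index means the identification step spreads measurement information more evenly over the error parameters, matching the paper's finding that a bigger index yields better calibrated accuracy.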
Error modeling for surrogates of dynamical systems using machine learning
Trehan, Sumeet; Carlberg, Kevin T.; Durlofsky, Louis J.
2017-12-01
A machine-learning-based framework for modeling the error introduced by surrogate models of parameterized dynamical systems is proposed. The framework entails the use of high-dimensional regression techniques (e.g., random forests, LASSO) to map a large set of inexpensively computed `error indicators' (i.e., features) produced by the surrogate model at a given time instance to a prediction of the surrogate-model error in a quantity of interest (QoI). This eliminates the need for the user to hand-select a small number of informative features. The methodology requires a training set of parameter instances at which the time-dependent surrogate-model error is computed by simulating both the high-fidelity and surrogate models. Using these training data, the method first determines regression-model locality (via classification or clustering), and subsequently constructs a `local' regression model to predict the time-instantaneous error within each identified region of feature space. We consider two uses for the resulting error model: (1) as a correction to the surrogate-model QoI prediction at each time instance, and (2) as a way to statistically model arbitrary functions of the time-dependent surrogate-model error (e.g., time-integrated errors). We apply the proposed framework to model errors in reduced-order models of nonlinear oil--water subsurface flow simulations. The reduced-order models used in this work entail application of trajectory piecewise linearization with proper orthogonal decomposition. When the first use of the method is considered, numerical experiments demonstrate consistent improvement in accuracy in the time-instantaneous QoI prediction relative to the original surrogate model, across a large number of test cases. When the second use is considered, results show that the proposed method provides accurate statistical predictions of the time- and well-averaged errors.
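The core regression step of such a framework can be sketched in a few lines: map inexpensively computed error indicators to the measured surrogate error, then use the fitted model to correct new surrogate predictions. Plain least squares stands in here for the paper's random forests/LASSO, and all data are synthetic assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical training data: for each parameter instance/time step we have
# cheap 'error indicators' produced by the surrogate (e.g. residual norms)
# and the true surrogate error obtained by also running the full model.
n_train, n_feat = 200, 3
X = rng.normal(size=(n_train, n_feat))              # error indicators
w_true = np.array([0.8, -0.3, 0.1])
y = X @ w_true + 0.05 * rng.normal(size=n_train)    # measured surrogate error

# Regression step (least squares as a stand-in for random forests / LASSO).
Xb = np.hstack([X, np.ones((n_train, 1))])          # add intercept column
coef, *_ = np.linalg.lstsq(Xb, y, rcond=None)

# Use the fitted error model to correct a new surrogate QoI prediction.
x_new = rng.normal(size=n_feat)
predicted_error = np.hstack([x_new, 1.0]) @ coef
print(np.allclose(coef[:3], w_true, atol=0.05))
```

The paper's first use case corresponds to subtracting `predicted_error` from the surrogate's quantity-of-interest prediction at each time instance.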
Interferometric GPS Attitude: A Stochastic Error Model
1993-02-01
... the GPS attitude as well as the attitude errors observed during the sea trial. It characterizes these errors as stochastic processes. ... the multipath error, which changes with satellite geometry. The satellites, in 12-hour orbits, move through about 33 degrees in 4,000 seconds, which
Dual Numbers Approach in Multiaxis Machines Error Modeling
Directory of Open Access Journals (Sweden)
Jaroslav Hrdina
2014-01-01
Multiaxis machine error modeling is set in the context of modern differential geometry and linear algebra. We apply special classes of matrices over dual numbers and propose a generalization of this concept by means of general Weil algebras. We show that the classification of the geometric errors follows directly from the algebraic properties of matrices over dual numbers, and thus the calculus of dual numbers is the proper tool for the methodology of multiaxis machine error modeling.
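The algebraic fact the abstract relies on is that a dual number a + b·ε with ε² = 0 propagates first-order terms exactly, which is why matrices over dual numbers capture first-order geometric error models. A minimal self-contained sketch of dual-number arithmetic (this class is illustrative, not the paper's Weil-algebra machinery):

```python
class Dual:
    """Dual number a + b*eps with eps**2 == 0; the eps part carries
    first-order (error/derivative) information exactly."""
    def __init__(self, real, eps=0.0):
        self.real, self.eps = real, eps

    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.real + other.real, self.eps + other.eps)

    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        # (a + b*eps)(c + d*eps) = ac + (ad + bc)*eps, since eps**2 = 0
        return Dual(self.real * other.real,
                    self.real * other.eps + self.eps * other.real)

x = Dual(3.0, 1.0)        # seed eps part = 1 to track first-order sensitivity
y = x * x + x             # f(x) = x**2 + x
print(y.real, y.eps)      # 12.0 7.0 -> value f(3) and derivative f'(3) = 7
```

Replacing the scalar entries of a kinematic transformation matrix with such dual numbers is the mechanism by which first-order geometric errors are classified algebraically.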
Use of the breeding technique to estimate the structure of the analysis 'errors of the day'
Directory of Open Access Journals (Sweden)
M. Corazza
2003-01-01
A 3D-variational data assimilation scheme for a quasi-geostrophic channel model (Morss, 1998) is used to study the structure of the background error and its relationship to the corresponding bred vectors. The "true" evolution of the model atmosphere is defined by an integration of the model, and "rawinsonde observations" are simulated by randomly perturbing the true state at fixed locations. Case studies using different observational densities are considered to compare the evolution of the bred vectors to the spatial structure of the background error. In addition, the bred vector dimension (BV-dimension), defined by Patil et al. (2001), is applied to the bred vectors. It is found that after 3-5 days the bred vectors develop well-organized structures which are very similar for the two different norms (enstrophy and streamfunction) considered in this paper. When 10 surrogate bred vectors (corresponding to days different from that of the background error) are used to describe the local patterns of the background error, the explained variance is quite high, about 85-88%, indicating that the statistical average properties of the bred vectors represent well those of the background error. However, a subspace of 10 bred vectors corresponding to the time of the background error increased the percentage of explained variance to 96-98%, with the largest percentage when the background errors are large. These results suggest that a statistical basis of bred vectors collected over time can be used to create an effective constant background error covariance for data assimilation with 3D-Var. Including the "errors of the day" through the use of bred vectors corresponding to the background forecast time can bring an additional significant improvement.
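The "explained variance" statistic used above is the fraction of the background-error energy captured by projecting the error onto the bred-vector subspace. A minimal numerical sketch with synthetic vectors (dimensions and noise level are illustrative assumptions, not the channel-model setup):

```python
import numpy as np

def explained_variance(error, basis):
    """Fraction of the background-error variance captured by the subspace
    spanned by the columns of `basis` (e.g. 10 bred vectors): project the
    error onto the subspace by least squares and compare squared norms."""
    coeffs, *_ = np.linalg.lstsq(basis, error, rcond=None)
    projection = basis @ coeffs
    return float(np.dot(projection, projection) / np.dot(error, error))

rng = np.random.default_rng(2)
n = 100                                   # state dimension (synthetic)
bred = rng.normal(size=(n, 10))           # 10 bred vectors (synthetic)
# A background error lying mostly in the bred-vector subspace:
error = bred @ rng.normal(size=10) + 0.2 * rng.normal(size=n)

print(round(explained_variance(error, bred), 2))  # close to 1
```

Values near the paper's 96-98% arise exactly when, as here, the error has only a small component orthogonal to the subspace.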
Chilcott, J; Tappenden, P; Rawdin, A; Johnson, M; Kaltenthaler, E; Paisley, S; Papaioannou, D; Shippam, A
2010-05-01
identifying errors; and barriers and facilitators. There was no common language in the discussion of modelling errors and there was inconsistency in the perceived boundaries of what constitutes an error. Asked about the definition of model error, there was a tendency for interviewees to exclude matters of judgement from being errors and focus on 'slips' and 'lapses', but discussion of slips and lapses comprised less than 20% of the discussion on types of errors. Interviewees devoted 70% of the discussion to softer elements of the process of defining the decision question and conceptual modelling, mostly the realms of judgement, skills, experience and training. The original focus concerned model errors, but it may be more useful to refer to modelling risks. Several interviewees discussed concepts of validation and verification, with notable consistency in interpretation: verification meaning the process of ensuring that the computer model correctly implemented the intended model, whereas validation means the process of ensuring that a model is fit for purpose. Methodological literature on verification and validation of models makes reference to the Hermeneutic philosophical position, highlighting that the concept of model validation should not be externalized from the decision-makers and the decision-making process. Interviewees demonstrated examples of all major error types identified in the literature: errors in the description of the decision problem, in model structure, in use of evidence, in implementation of the model, in operation of the model, and in presentation and understanding of results. The HTA error classifications were compared against existing classifications of model errors in the literature. 
A range of techniques and processes are currently used to avoid errors in HTA models: engaging with clinical experts, clients and decision-makers to ensure mutual understanding, producing written documentation of the proposed model, explicit conceptual modelling
Optical linear algebra processors - Noise and error-source modeling
Casasent, D.; Ghosh, A.
1985-01-01
The modeling of system and component noise and error sources in optical linear algebra processors (OLAPs) is considered, with attention to the frequency-multiplexed OLAP. General expressions are obtained for the output produced as a function of various component errors and noise. A digital simulator for this model is discussed.
NASA Model of "Threat and Error" in Pediatric Cardiac Surgery: Patterns of Error Chains.
Hickey, Edward; Pham-Hung, Eric; Nosikova, Yaroslavna; Halvorsen, Fredrik; Gritti, Michael; Schwartz, Steven; Caldarone, Christopher A; Van Arsdell, Glen
2017-04-01
We introduced the National Aeronautics and Space Administration threat-and-error model to our surgical unit. All admissions are considered flights, which should pass through stepwise deescalations in risk during surgical recovery. We hypothesized that errors significantly influence risk deescalation and contribute to poor outcomes. Patient flights (524) were tracked in real time for threats, errors, and unintended states by full-time performance personnel. Expected risk deescalation was wean from mechanical support, sternal closure, extubation, intensive care unit (ICU) discharge, and discharge home. Data were accrued from clinical charts, bedside data, reporting mechanisms, and staff interviews. Infographics of flights were openly discussed weekly for consensus. In 12% (64 of 524) of flights, the child failed to deescalate sequentially through expected risk levels; unintended increments instead occurred. Failed deescalations were highly associated with errors (426 errors in 257 flights; p < 0.0001). Flights with consequential errors were associated with a 29% rate of failed deescalation versus 4% in flights with no consequential error (p < 0.0001). The most dangerous errors were apical errors, typically (84%) occurring in the operating room, which caused chains of propagating unintended states (n = 110): these had a 43% (47 of 110) rate of failed deescalation (versus 4%; p < 0.0001). Chains of unintended state were often (46%) amplified by additional (up to 7) errors in the ICU that would worsen clinical deviation. Overall, failed deescalations in risk were extremely closely linked to brain injury (n = 13; p < 0.0001) or death (n = 7; p < 0.0001). Deaths and brain injury after pediatric cardiac surgery almost always occur from propagating error chains that originate in the operating room and are often amplified by additional ICU errors.
Managing errors in radiology: a working model
International Nuclear Information System (INIS)
Melvin, C.; Bodley, R.; Booth, A.; Meagher, T.; Record, C.; Savage, P.
2004-01-01
AIM: To develop a practical mechanism for reviewing reporting discrepancies as addressed in the Royal College of Radiologists publication 'To err is human. The case for review of reporting discrepancies'. MATERIALS AND METHODS: A regular meeting was developed, and has evolved, within the department to review discrepancies. Standard forms were devised for submission of cases as well as recording and classification of discrepancies. This has resulted in availability of figures that can be audited annually. RESULTS: Eighty-one cases involving error were reviewed over a 12-month period. Seven further cases flagged as discrepancies were not identified on peer review. Twenty-four reports were amended subsequent to the meeting. Nineteen additional cases were brought to the meeting as illustrative of teaching points or for discussion. CONCLUSION: We have evolved a successful process of reviewing reporting errors, which enjoys the confidence and support of all clinical radiologists, and is perceived as a method of improving patient care through an increasing awareness of lapses in performance
The error model and experiment of measuring angular position error based on laser collimation
Cai, Yangyang; Yang, Jing; Li, Jiakun; Feng, Qibo
2018-01-01
The rotary axis is the reference component of rotational motion. Angular position error is the most critical factor impairing machining precision among the six degree-of-freedom (DOF) geometric errors of a rotary axis. In this paper, a method for measuring the angular position error of a rotary axis based on laser collimation is thoroughly investigated: the error model is established, and 360° full-range measurement is realized by using a high-precision servo turntable. The change in the spatial attitude of each moving part is described accurately by 3×3 transformation matrices, and the influences of various factors on the measurement results are analyzed in detail. Experimental results show that the measurement method can achieve high measurement accuracy and a large measurement range.
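The 3×3-matrix description of attitude error can be sketched numerically: compose the commanded rotation with a small angular position error, then recover the error from the residual rotation. The angle values are illustrative, not the paper's turntable data:

```python
import numpy as np

def rot_z(theta):
    """3x3 rotation about the rotary axis (z) by angle theta (rad)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

# Nominal command vs actual pose with a small angular position error
# (values are hypothetical, for illustration only).
theta_cmd = np.deg2rad(30.0)
delta = np.deg2rad(0.01)                 # 36 arcsec angular position error
R_actual = rot_z(theta_cmd + delta)

# Recover the error from the residual rotation R_err = R_cmd^T @ R_actual;
# for a z rotation its angle is atan2 of the (1, 0) and (0, 0) entries.
R_err = rot_z(theta_cmd).T @ R_actual
recovered = np.arctan2(R_err[1, 0], R_err[0, 0])
print(np.isclose(recovered, delta))  # True
```

In the paper's setting, the laser-collimation measurement plays the role of `R_actual`, and the same matrix bookkeeping isolates the angular position error from the other DOF errors.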
Incorporating measurement error in n = 1 psychological autoregressive modeling
Schuurman, Noémi K.; Houtveen, Jan H.; Hamaker, Ellen L.
2015-01-01
Measurement error is omnipresent in psychological data. However, the vast majority of applications of autoregressive time series analyses in psychology do not take measurement error into account. Disregarding measurement error when it is present in the data results in a bias of the autoregressive parameters. We discuss two models that take measurement error into account: An autoregressive model with a white noise term (AR+WN), and an autoregressive moving average (ARMA) model. In a simulation study we compare the parameter recovery performance of these models, and compare this performance for both a Bayesian and frequentist approach. We find that overall, the AR+WN model performs better. Furthermore, we find that for realistic (i.e., small) sample sizes, psychological research would benefit from a Bayesian approach in fitting these models. Finally, we illustrate the effect of disregarding measurement error in an AR(1) model by means of an empirical application on mood data in women. We find that, depending on the person, approximately 30–50% of the total variance was due to measurement error, and that disregarding this measurement error results in a substantial underestimation of the autoregressive parameters. PMID:26283988
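The bias the abstract warns about is easy to reproduce: adding white measurement noise to an AR(1) series attenuates the estimated lag-1 autocorrelation toward zero. A minimal simulation (parameters are illustrative; with noise variance equal to the process variance, the observed autocorrelation is halved):

```python
import numpy as np

rng = np.random.default_rng(3)
phi, n = 0.7, 20000

# Simulate a latent AR(1) process, then observe it with measurement error.
x = np.zeros(n)
for t in range(1, n):
    x[t] = phi * x[t - 1] + rng.normal()
noise_sd = np.sqrt(np.var(x))            # ~50% of total variance is error
y = x + noise_sd * rng.normal(size=n)

def lag1_autocorr(z):
    z = z - z.mean()
    return float(np.dot(z[:-1], z[1:]) / np.dot(z, z))

print(round(lag1_autocorr(x), 2))  # near the true phi = 0.7
print(round(lag1_autocorr(y), 2))  # attenuated toward phi / 2 = 0.35
```

This matches the abstract's empirical finding: with 30-50% of total variance due to measurement error, naive AR estimates are substantially too small, which is what the AR+WN model corrects.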
Empirical study of the GARCH model with rational errors
International Nuclear Information System (INIS)
Chen, Ting Ting; Takaishi, Tetsuya
2013-01-01
We use the GARCH model with a fat-tailed error distribution described by a rational function and apply it to stock price data from the Tokyo Stock Exchange. To determine the model parameters we perform Bayesian inference on the model. Bayesian inference is implemented by the Metropolis-Hastings algorithm with an adaptive multi-dimensional Student's t-proposal density. In order to compare our model with the GARCH model with standard normal errors, we calculate the information criteria AIC and DIC, and find that both criteria favor the GARCH model with a rational error distribution. We also assess the accuracy of the volatility by using the realized volatility and find that good accuracy is obtained for the GARCH model with a rational error distribution. Thus we conclude that the GARCH model with a rational error distribution is superior to the GARCH model with normal errors, and it can be used as an alternative to GARCH models with other fat-tailed distributions.
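Regardless of the error distribution, the GARCH(1,1) conditional variance follows the same deterministic recursion; the distribution (normal, Student's t, or the rational function here) only enters the likelihood. A minimal sketch with illustrative parameters (not values fitted to Tokyo Stock Exchange data):

```python
import numpy as np

def garch11_volatility(returns, omega, alpha, beta):
    """GARCH(1,1) conditional variance recursion:
    sigma2[t] = omega + alpha * r[t-1]**2 + beta * sigma2[t-1],
    initialized at the unconditional variance omega / (1 - alpha - beta)."""
    sigma2 = np.empty(len(returns))
    sigma2[0] = omega / (1.0 - alpha - beta)
    for t in range(1, len(returns)):
        sigma2[t] = omega + alpha * returns[t - 1] ** 2 + beta * sigma2[t - 1]
    return sigma2

# Illustrative returns and parameters (hypothetical numbers).
r = np.array([0.0, 2.0, -1.0, 0.5])
s2 = garch11_volatility(r, omega=0.1, alpha=0.1, beta=0.8)
print(s2.round(3))  # [1.    0.9   1.22  1.176]
```

In a Bayesian fit such as the paper's, this recursion is evaluated inside the likelihood at every Metropolis-Hastings proposal for (omega, alpha, beta).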
Deconvolution Estimation in Measurement Error Models: The R Package decon
Wang, Xiao-Feng; Wang, Bin
2011-01-01
Data from many scientific areas often come with measurement error. Density or distribution function estimation from contaminated data and nonparametric regression with errors-in-variables are two important topics in measurement error models. In this paper, we present a new software package decon for R, which contains a collection of functions that use the deconvolution kernel methods to deal with the measurement error problems. The functions allow the errors to be either homoscedastic or heteroscedastic. To make the deconvolution estimators computationally more efficient in R, we adapt the fast Fourier transform algorithm for density estimation with error-free data to the deconvolution kernel estimation. We discuss the practical selection of the smoothing parameter in deconvolution methods and illustrate the use of the package through both simulated and real examples. PMID:21614139
Bayesian approach to errors-in-variables in regression models
Rozliman, Nur Aainaa; Ibrahim, Adriana Irawati Nur; Yunus, Rossita Mohammad
2017-05-01
In many applications and experiments, data sets are often contaminated with error or mismeasured covariates. When at least one of the covariates in a model is measured with error, Errors-in-Variables (EIV) model can be used. Measurement error, when not corrected, would cause misleading statistical inferences and analysis. Therefore, our goal is to examine the relationship of the outcome variable and the unobserved exposure variable given the observed mismeasured surrogate by applying the Bayesian formulation to the EIV model. We shall extend the flexible parametric method proposed by Hossain and Gustafson (2009) to another nonlinear regression model which is the Poisson regression model. We shall then illustrate the application of this approach via a simulation study using Markov chain Monte Carlo sampling methods.
Learning (from) the errors of a systems biology model.
Engelhardt, Benjamin; Fröhlich, Holger; Kschischo, Maik
2016-02-11
Mathematical modelling is a labour intensive process involving several iterations of testing on real data and manual model modifications. In biology, the domain knowledge guiding model development is in many cases itself incomplete and uncertain. A major problem in this context is that biological systems are open. Missed or unknown external influences as well as erroneous interactions in the model could thus lead to severely misleading results. Here we introduce the dynamic elastic-net, a data driven mathematical method which automatically detects such model errors in ordinary differential equation (ODE) models. We demonstrate for real and simulated data, how the dynamic elastic-net approach can be used to automatically (i) reconstruct the error signal, (ii) identify the target variables of model error, and (iii) reconstruct the true system state even for incomplete or preliminary models. Our work provides a systematic computational method facilitating modelling of open biological systems under uncertain knowledge.
A Model of Self-Monitoring Blood Glucose Measurement Error.
Vettoretti, Martina; Facchinetti, Andrea; Sparacino, Giovanni; Cobelli, Claudio
2017-07-01
A reliable model of the probability density function (PDF) of self-monitoring of blood glucose (SMBG) measurement error would be important for several applications in diabetes, like testing in silico insulin therapies. In the literature, the PDF of SMBG error is usually described by a Gaussian function, whose symmetry and simplicity are unable to properly describe the variability of experimental data. Here, we propose a new methodology to derive more realistic models of the SMBG error PDF. The blood glucose range is divided into zones where the error (absolute or relative) presents a constant standard deviation (SD). In each zone, a suitable PDF model is fitted by maximum likelihood to experimental data. Model validation is performed by goodness-of-fit tests. The method is tested on two databases collected by the One Touch Ultra 2 (OTU2; Lifescan Inc, Milpitas, CA) and the Bayer Contour Next USB (BCN; Bayer HealthCare LLC, Diabetes Care, Whippany, NJ). In both cases, skew-normal and exponential models are used to describe the distribution of errors and outliers, respectively. Two zones were identified: zone 1 with constant-SD absolute error; zone 2 with constant-SD relative error. Goodness-of-fit tests confirmed that the identified PDF models are valid and superior to the Gaussian models used so far in the literature. The proposed methodology allows realistic models of the SMBG error PDF to be derived. These models can be used in several investigations of present interest in the scientific community, for example, to perform in silico clinical trials to compare SMBG-based with nonadjunctive CGM-based insulin treatments.
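The two-zone structure identified above can be sketched as a simple SD function of blood glucose. The threshold and SD values below are illustrative assumptions, not the fitted OTU2/BCN parameters:

```python
def smbg_error_sd(bg, threshold=75.0, sd_abs=5.0, sd_rel=0.065):
    """Two-zone SMBG error model in the spirit of the paper: below a blood
    glucose threshold (mg/dl) the error has constant *absolute* SD; above
    it, constant *relative* SD. All numbers here are hypothetical."""
    if bg < threshold:
        return sd_abs                 # zone 1: constant-SD absolute error
    return sd_rel * bg                # zone 2: constant-SD relative error

print(round(smbg_error_sd(60.0), 1))    # 5.0 mg/dl in zone 1
print(round(smbg_error_sd(200.0), 1))   # 13.0 mg/dl (6.5% of 200) in zone 2
```

Within each zone, the paper then fits a skew-normal PDF (plus an exponential tail for outliers) to the standardized errors, rather than assuming the Gaussian shape.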
Error and Uncertainty Analysis for Ecological Modeling and Simulation
National Research Council Canada - National Science Library
Gertner, George
1998-01-01
The main objectives of this project are a) to develop a general methodology for conducting sensitivity and uncertainty analysis and building error budgets in simulation modeling over space and time; and b...
Efficiency in Linear Model with AR (1) and Correlated Error ...
African Journals Online (AJOL)
Nekky Umera
Assumptions in the classical normal linear regression model include that of lack of autocorrelation of the error terms ... which the classical linear regression model is based will usually be violated. These violations, seen in widespread .... we conclude in section 5. The Model. We assume a simple linear regression model:.
Assessment of errors and uncertainty patterns in GIA modeling
DEFF Research Database (Denmark)
Barletta, Valentina Roberta; Spada, G.
2012-01-01
During the last decade many efforts have been devoted to the assessment of global sea level rise and to the determination of the mass balance of continental ice sheets. In this context, the important role of glacial-isostatic adjustment (GIA) has been clearly recognized. Yet, in many cases only one...... "preferred" GIA model has been used, without any consideration of the possible errors involved. Lacking a rigorous assessment of systematic errors in GIA modeling, the reliability of the results is uncertain. GIA sensitivity and uncertainties associated with the viscosity models have been explored...... in the literature. However, at least two major sources of errors remain. The first is associated with the ice models, spatial distribution of ice and history of melting (this is especially the case of Antarctica), the second with the numerical implementation of model features relevant to sea level modeling...
Televantou, Ioulia; Marsh, Herbert W.; Kyriakides, Leonidas; Nagengast, Benjamin; Fletcher, John; Malmberg, Lars-Erik
2015-01-01
The main objective of this study was to quantify the impact of failing to account for measurement error on school compositional effects. Multilevel structural equation models were incorporated to control for measurement error and/or sampling error. Study 1, a large sample of English primary students in Years 1 and 4, revealed a significantly…
Bayesian modeling of measurement error in predictor variables
Fox, Gerardus J.A.; Glas, Cornelis A.W.
2003-01-01
It is shown that measurement error in predictor variables can be modeled using item response theory (IRT). The predictor variables, that may be defined at any level of an hierarchical regression model, are treated as latent variables. The normal ogive model is used to describe the relation between
Multiplicity Control in Structural Equation Modeling
Cribbie, Robert A.
2007-01-01
Researchers conducting structural equation modeling analyses rarely, if ever, control for the inflated probability of Type I errors when evaluating the statistical significance of multiple parameters in a model. In this study, the Type I error control, power and true model rates of familywise and false discovery rate controlling procedures were…
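One of the false-discovery-rate procedures typically compared in such studies is the Benjamini-Hochberg step-up rule. A generic sketch (not SEM-specific; the p-values are hypothetical):

```python
def benjamini_hochberg(p_values, q=0.05):
    """Benjamini-Hochberg step-up procedure: reject the nulls with the k
    smallest p-values, where k is the largest rank i such that
    p_(i) <= (i / m) * q for the sorted p-values p_(1) <= ... <= p_(m)."""
    m = len(p_values)
    order = sorted(range(m), key=lambda i: p_values[i])
    k = 0
    for rank, i in enumerate(order, start=1):
        if p_values[i] <= rank / m * q:
            k = rank
    rejected = [False] * m
    for i in order[:k]:
        rejected[i] = True
    return rejected

# Hypothetical p-values for several SEM path coefficients.
p = [0.001, 0.02, 0.04, 0.30, 0.009]
print(benjamini_hochberg(p, q=0.05))  # [True, True, True, False, True]
```

Compared with a Bonferroni familywise correction at 0.05/5 = 0.01 (which would reject only two of these), the FDR procedure retains more power while still controlling the expected proportion of false discoveries.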
A Sandwich-Type Standard Error Estimator of SEM Models with Multivariate Time Series
Zhang, Guangjian; Chow, Sy-Miin; Ong, Anthony D.
2011-01-01
Structural equation models are increasingly used as a modeling tool for multivariate time series data in the social and behavioral sciences. Standard error estimators of SEM models, originally developed for independent data, require modifications to accommodate the fact that time series data are inherently dependent. In this article, we extend a…
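The "sandwich" idea being extended here is easiest to see in its generic i.i.d.-robust (HC0) form for ordinary least squares; the paper's contribution is adapting the middle ("meat") matrix to dependent time series, which this sketch does not do. All data below are synthetic:

```python
import numpy as np

def ols_sandwich_se(X, y):
    """Bread-meat-bread (HC0) sandwich standard errors for OLS:
    cov = (X'X)^-1 [sum_i e_i^2 x_i x_i'] (X'X)^-1."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    bread = np.linalg.inv(X.T @ X)
    meat = (X * resid[:, None] ** 2).T @ X
    cov = bread @ meat @ bread
    return beta, np.sqrt(np.diag(cov))

rng = np.random.default_rng(4)
n = 500
X = np.column_stack([np.ones(n), rng.normal(size=n)])
# Heteroskedastic noise, where the sandwich form earns its keep:
y = 1.0 + 2.0 * X[:, 1] + rng.normal(size=n) * (1 + np.abs(X[:, 1]))
beta, se = ols_sandwich_se(X, y)
print(beta.round(1))   # roughly [1.0, 2.0]
```

For time series, residual products at different lags also enter the meat matrix, which is the modification the article develops for SEM models.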
Modeling of Bit Error Rate in Cascaded 2R Regenerators
DEFF Research Database (Denmark)
Öhman, Filip; Mørk, Jesper
2006-01-01
This paper presents a simple and efficient model for estimating the bit error rate in a cascade of optical 2R-regenerators. The model includes the influences of amplifier noise, finite extinction ratio and nonlinear reshaping. The interplay between the different signal impairments and the rege...
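A back-of-the-envelope version of cascaded-regenerator BER: under Gaussian noise, BER = (1/2)·erfc(Q/√2), and an ideal 2R cascade resets the noise at each span so bit errors accumulate roughly linearly. This is a textbook approximation, not the paper's model with finite extinction ratio and nonlinear reshaping:

```python
import math

def ber_from_q(q):
    """Gaussian-noise bit error rate for decision Q-factor q."""
    return 0.5 * math.erfc(q / math.sqrt(2.0))

def cascaded_ber(q_single, n_spans):
    """Idealized 2R cascade: each regenerator resets the noise, so errors
    accumulate approximately linearly with the number of spans."""
    return min(1.0, n_spans * ber_from_q(q_single))

print(f"{ber_from_q(6.0):.2e}")        # ~1e-9 for Q = 6
print(f"{cascaded_ber(6.0, 10):.2e}")  # ten spans -> ~ten times higher
```

The paper's model departs from this ideal picture precisely because finite extinction ratio and imperfect reshaping let impairments leak through each regenerator instead of being fully reset.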
Error modeling and tolerance design of a parallel manipulator with full-circle rotation
Directory of Open Access Journals (Sweden)
Yanbing Ni
2016-05-01
A method for improving the accuracy of a parallel manipulator with full-circle rotation is systematically investigated in this work via kinematic analysis, error modeling, sensitivity analysis, and tolerance allocation. First, a kinematic analysis of the mechanism is made using the space vector chain method. Using the results as a basis, an error model is formulated considering the main error sources. Position and orientation error-mapping models are established by mathematical transformation of the parallelogram structure characteristics. Second, a sensitivity analysis is performed on the geometric error sources. A global sensitivity evaluation index is proposed to evaluate the contribution of the geometric errors to the accuracy of the end-effector. The analysis results provide a theoretical basis for the allocation of tolerances to the parts of the mechanical design. Finally, based on the results of the sensitivity analysis, the design of the tolerances can be formulated as a nonlinearly constrained optimization problem. A genetic algorithm is applied to carry out the allocation of the manufacturing tolerances of the parts. Accordingly, tolerance ranges for nine kinds of geometric error sources are obtained. The achievements made in this work can also be applied to other similar parallel mechanisms with full-circle rotation to improve error modeling and design accuracy.
Itakura, Kota; Hatakeyama, Go; Akiyoshi, Masanori; Komoda, Norihisa
Recently, there have been various proposals for multi-agent simulation tools. However, with such simulation tools, analysts who do not have programming skills spend a lot of time developing programs, because the notation of simulation models is not sufficiently defined and the programming language varies across tools. To solve this problem, a programming environment that defines the notation of simulation models has been proposed. In this environment, analysts can design a simulation with a graph representation and obtain the program code without writing programs. However, it is difficult to find errors that cause unintended behavior in a simulation. Therefore, we propose a support method, a model debugger, which helps users to find errors. The debugger generates candidates of errors using a user's report of unintended behavior based on "typical report patterns". Candidates of errors are extracted from a "tree structure of error-inducing factors" that consists of source patterns of errors. In this paper, we describe experiments that compare the time needed for examinees to find errors. Experimental results show that the time to find errors is shortened by utilizing our model debugger.
Hacker, Joshua; Angevine, Wayne
2013-04-01
Experiments with the single-column implementation of the Weather Research and Forecasting mesoscale model provide a basis for deducing land-atmosphere coupling errors in the model. Coupling occurs both through heat and moisture fluxes through the land-atmosphere interface and roughness sub-layer, and turbulent heat, moisture, and momentum fluxes through the atmospheric surface layer. This work primarily addresses the turbulent fluxes, which are parameterized following Monin-Obukhov similarity theory applied to the atmospheric surface layer. By combining ensemble data assimilation and parameter estimation, the model error can be characterized. Ensemble data assimilation of 2-m temperature and water vapor mixing ratio, and 10-m wind components, forces the model to follow observations during a month-long simulation for a column over the well-instrumented ARM Central Facility near Lamont, OK. One-hour errors in predicted observations are systematically small but non-zero, and the systematic errors measure bias as a function of local time of day. Analysis increments for state elements nearby (15-m AGL) can be too small or have the wrong sign, indicating systematically biased covariances and model error. Experiments using the ensemble filter to objectively estimate a parameter controlling the thermal land-atmosphere coupling show that the parameter adapts to offset the model errors, but that the errors cannot be eliminated. Results suggest either structural error or further parametric error that may be difficult to estimate. Experiments omitting atypical observations such as soil and flux measurements lead to qualitatively similar deductions, showing potential for assimilating common in-situ observations as an inexpensive framework for deducing and isolating model errors. We finish by presenting recent results from a deeper examination of the second-moment ensemble statistics, which demonstrate the effect of assimilation on the coupling through the stability function in
Experimental Errors in QSAR Modeling Sets: What We Can Do and What We Cannot Do.
Zhao, Linlin; Wang, Wenyi; Sedykh, Alexander; Zhu, Hao
2017-06-30
Numerous chemical data sets have become available for quantitative structure-activity relationship (QSAR) modeling studies. However, the quality of different data sources may be different based on the nature of experimental protocols. Therefore, potential experimental errors in the modeling sets may lead to the development of poor QSAR models and further affect the predictions of new compounds. In this study, we explored the relationship between the ratio of questionable data in the modeling sets, which was obtained by simulating experimental errors, and the QSAR modeling performance. To this end, we used eight data sets (four continuous endpoints and four categorical endpoints) that have been extensively curated both in-house and by our collaborators to create over 1800 various QSAR models. Each data set was duplicated to create several new modeling sets with different ratios of simulated experimental errors (i.e., randomizing the activities of part of the compounds) in the modeling process. A fivefold cross-validation process was used to evaluate the modeling performance, which deteriorates when the ratio of experimental errors increases. All of the resulting models were also used to predict external sets of new compounds, which were excluded at the beginning of the modeling process. The modeling results showed that the compounds with relatively large prediction errors in cross-validation processes are likely to be those with simulated experimental errors. However, after removing a certain number of compounds with large prediction errors in the cross-validation process, the external predictions of new compounds did not show improvement. Our conclusion is that the QSAR predictions, especially consensus predictions, can identify compounds with potential experimental errors. But removing those compounds by the cross-validation procedure is not a reasonable means to improve model predictivity due to overfitting.
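The noise-injection experiment described in this abstract can be sketched in a few lines: shuffle the activities of a fraction of the compounds, refit, and watch the cross-validated error deteriorate. The data and the ordinary-least-squares "model" below are synthetic stand-ins for the curated QSAR sets and learners, chosen only to make the effect reproducible.

```python
import numpy as np

rng = np.random.default_rng(42)
n, p = 400, 5
X = rng.normal(size=(n, p))
y_clean = X @ rng.normal(size=p) + 0.1 * rng.normal(size=n)

def cv_rmse(y, k=5):
    """k-fold cross-validated RMSE of an ordinary-least-squares model."""
    idx = np.arange(n)
    errs = []
    for f in range(k):
        test = idx % k == f
        b, *_ = np.linalg.lstsq(X[~test], y[~test], rcond=None)
        errs.append(np.sqrt(np.mean((X[test] @ b - y[test]) ** 2)))
    return float(np.mean(errs))

results = {}
for ratio in (0.0, 0.1, 0.3):
    y = y_clean.copy()
    bad = rng.choice(n, int(ratio * n), replace=False)
    y[bad] = rng.permutation(y[bad])  # simulated experimental errors
    results[ratio] = cv_rmse(y)
# cross-validated error deteriorates monotonically as the error ratio grows
```

As in the paper, the degradation is visible in cross-validation long before any external test set is consulted.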
The Sensitivity of Evapotranspiration Models to Errors in Model ...
African Journals Online (AJOL)
Three levels of sensitivity, herein termed sensitivity ratings, were established, namely: 'Highly Sensitive' (Rating: 1); 'Moderately Sensitive' (Rating: 2); and 'Not Too Sensitive' (Rating: 3). The ratings were based on the amount of error in the measured parameter required to introduce a ±10% relative error in the predicted Et. The level of ...
GMM estimation in panel data models with measurement error
Wansbeek, T.J.
Griliches and Hausman (J. Econom. 32 (1986) 93) have introduced GMM estimation in panel data models with measurement error. We present a simple, systematic approach to derive moment conditions for such models under a variety of assumptions. (C) 2001 Elsevier Science S.A. All rights reserved.
How well can we forecast future model error and uncertainty by mining past model performance data
Solomatine, Dimitri
2016-04-01
) method by Koenker and Bassett, in which linear regression is used to build predictive models for distribution quantiles [1]; (b) the UNEEC method [2,3,7], which takes into account the input variables influencing such uncertainty and uses more advanced (non-linear) machine learning methods (e.g., neural networks or the k-NN method); (c) the recent DUBRAE method (Dynamic Uncertainty Model By Regression on Absolute Error), an autoregressive model of model residuals, which first corrects the model residual and then employs an autoregressive statistical model for uncertainty prediction [5]. 2. Data uncertainty (parametric and/or input): in this case we study the propagation of uncertainty (typically represented probabilistically) from parameters or inputs to the model outputs. For real complex non-linear functions (models) implemented in software, various versions of Monte Carlo simulation are used: values of parameters or inputs are sampled from the assumed distributions and the model is run multiple times to generate multiple outputs. The data generated by Monte Carlo analysis can be used to build a machine learning model able to predict model uncertainty in the future. This method is named MLUE (Machine Learning for Uncertainty Estimation) and is covered in [4,6]. 3. Structural uncertainty stemming from inadequate model structure. The paper discusses the possibilities and experiences of building models able to forecast (rather than analyse) the residual and parametric uncertainty of hydrological models. References: [1] Koenker, R., and G. Bassett (1978). Regression quantiles. Econometrica, 46(1), 33-50, doi:10.2307/1913643. [2] D.L. Shrestha, D.P. Solomatine (2006). Machine learning approaches for estimation of prediction interval for the model output. Neural Networks J., 19(2), 225-235. [3] D.P. Solomatine, D.L. Shrestha (2009). A novel method to estimate model uncertainty using machine learning techniques. Water Resources Res. 45, W00B11.
[4] D. L
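Item 2 above (Monte Carlo propagation of parametric and input uncertainty) reduces to a simple recipe: sample parameters and inputs from their assumed distributions, run the model repeatedly, and read prediction intervals off the output sample. The toy rainfall-runoff relation and its distributions below are invented purely for illustration; they are not a real hydrological model.

```python
import numpy as np

rng = np.random.default_rng(1)

def model(rain, k):
    # hypothetical nonlinear rainfall-runoff relation, for the sketch only
    return k * rain ** 1.5

k = rng.normal(0.30, 0.05, 20000)    # parametric uncertainty
rain = rng.normal(10.0, 1.0, 20000)  # input (forcing) uncertainty
runoff = model(rain, k)              # one model run per Monte Carlo draw

# empirical 90% prediction interval from the Monte Carlo output sample
lo, hi = np.percentile(runoff, [5, 95])
```

The (input, output) pairs generated this way are exactly the training data the MLUE approach mines to forecast, rather than re-simulate, future uncertainty.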
Three-Phase Text Error Correction Model for Korean SMS Messages
Byun, Jeunghyun; Park, So-Young; Lee, Seung-Wook; Rim, Hae-Chang
In this paper, we propose a three-phase text error correction model consisting of a word spacing error correction phase, a syllable-based spelling error correction phase, and a word-based spelling error correction phase. In order to reduce the text error correction complexity, the proposed model corrects text errors step by step. With the aim of correcting word spacing errors, spelling errors, and mixed errors in SMS messages, the proposed model tries to separately manage the word spacing error correction phase and the spelling error correction phase. For the purpose of utilizing both the syllable-based approach covering various errors and the word-based approach correcting some specific errors accurately, the proposed model subdivides the spelling error correction phase into the syllable-based phase and the word-based phase. Experimental results show that the proposed model can improve the performance by solving the text error correction problem based on the divide-and-conquer strategy.
Prediction error, ketamine and psychosis: An updated model.
Corlett, Philip R; Honey, Garry D; Fletcher, Paul C
2016-11-01
In 2007, we proposed an explanation of delusion formation as aberrant prediction error-driven associative learning. Further, we argued that the NMDA receptor antagonist ketamine provided a good model for this process. Subsequently, we validated the model in patients with psychosis, relating aberrant prediction error signals to delusion severity. During the ensuing period, we have developed these ideas, drawing on the simple principle that brains build a model of the world and refine it by minimising prediction errors, as well as using it to guide perceptual inferences. While previously we focused on the prediction error signal per se, an updated view takes into account its precision, as well as the precision of prior expectations. With this expanded perspective, we see several possible routes to psychotic symptoms - which may explain the heterogeneity of psychotic illness, as well as the fact that other drugs, with different pharmacological actions, can produce psychotomimetic effects. In this article, we review the basic principles of this model and highlight specific ways in which prediction errors can be perturbed, in particular considering the reliability and uncertainty of predictions. The expanded model explains hallucinations as perturbations of the uncertainty mediated balance between expectation and prediction error. Here, expectations dominate and create perceptions by suppressing or ignoring actual inputs. Negative symptoms may arise due to poor reliability of predictions in service of action. By mapping from biology to belief and perception, the account proffers new explanations of psychosis. However, challenges remain. We attempt to address some of these concerns and suggest future directions, incorporating other symptoms into the model, building towards better understanding of psychosis. © The Author(s) 2016.
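The precision-weighted balance between prior expectation and sensory evidence at the heart of this account can be written as a one-line update: the percept equals the prior plus a precision-scaled prediction error. The numbers below are purely illustrative.

```python
def percept(prior_mu, prior_prec, obs, obs_prec):
    # posterior = precision-weighted average of expectation and input,
    # i.e. prior plus a precision-scaled prediction error
    gain = obs_prec / (prior_prec + obs_prec)
    return prior_mu + gain * (obs - prior_mu)

balanced = percept(0.0, 1.0, 1.0, 1.0)              # evidence and prior share equally
hallucination_like = percept(0.0, 99.0, 1.0, 1.0)   # over-precise prior; input largely ignored
weak_prior = percept(0.0, 1.0, 1.0, 99.0)           # precise input dominates the percept
```

The "hallucination-like" case mirrors the paper's claim that over-weighted expectations can create perceptions by suppressing actual inputs.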
Probabilistic modeling of systematic errors in two-hybrid experiments.
Sontag, David; Singh, Rohit; Berger, Bonnie
2007-01-01
We describe a novel probabilistic approach to estimating errors in two-hybrid (2H) experiments. Such experiments are frequently used to elucidate protein-protein interaction networks in a high-throughput fashion; however, a significant challenge with these is their relatively high error rate, specifically, a high false-positive rate. We describe a comprehensive error model for 2H data, accounting for both random and systematic errors. The latter arise from limitations of the 2H experimental protocol: in theory, the reporting mechanism of a 2H experiment should be activated if and only if the two proteins being tested truly interact; in practice, even in the absence of a true interaction, it may be activated by some proteins - either by themselves or through promiscuous interaction with other proteins. We describe a probabilistic relational model that explicitly models the above phenomenon and use Markov Chain Monte Carlo (MCMC) algorithms to compute both the probability of an observed 2H interaction being true as well as the probability of individual proteins being self-activating/promiscuous. This is the first approach that explicitly models systematic errors in protein-protein interaction data; in contrast, previous work on this topic has modeled errors as being independent and random. By explicitly modeling the sources of noise in 2H systems, we find that we are better able to make use of the available experimental data. In comparison with Bader et al.'s method for estimating confidence in 2H predicted interactions, the proposed method performed 5-10% better overall, and in particular regimes improved prediction accuracy by as much as 76%. http://theory.csail.mit.edu/probmod2H
A New Open-Loop Fiber Optic Gyro Error Compensation Method Based on Angular Velocity Error Modeling
Zhang, Yanshun; Guo, Yajing; Li, Chunyu; Wang, Yixin; Wang, Zhanqing
2015-01-01
With the open-loop fiber optic gyro (OFOG) model, output voltage and angular velocity can effectively compensate OFOG errors. However, the model cannot reflect the characteristics of OFOG errors well when it comes to pretty large dynamic angular velocities. This paper puts forward a modeling scheme with OFOG output voltage and temperature as the input variables and angular velocity error as the output variable. Firstly, the angular ve...
Identification of simultaneous equation models with measurement error : a computerized evaluation
Merckens, Arjen; Bekker, Paul
1993-01-01
Rank conditions for identification in structural models are often difficult to evaluate. Here we consider simultaneous equation models with measurement error, and we show that previously published rank conditions for identification are not well-suited for evaluation. An alternative rank condition is
Bayesian analysis of data and model error in rainfall-runoff hydrological models
Kavetski, D.; Franks, S. W.; Kuczera, G.
2004-12-01
A major unresolved issue in the identification and use of conceptual hydrologic models is realistic description of uncertainty in the data and model structure. In particular, hydrologic parameters often cannot be measured directly and must be inferred (calibrated) from observed forcing/response data (typically, rainfall and runoff). However, rainfall varies significantly in space and time, yet is often estimated from sparse gauge networks. Recent work showed that current calibration methods (e.g., standard least squares, multi-objective calibration, generalized likelihood uncertainty estimation) ignore forcing uncertainty and assume that the rainfall is known exactly. Consequently, they can yield strongly biased and misleading parameter estimates. This deficiency confounds attempts to reliably test model hypotheses, to generalize results across catchments (the regionalization problem) and to quantify predictive uncertainty when the hydrologic model is extrapolated. This paper continues the development of a Bayesian total error analysis (BATEA) methodology for the calibration and identification of hydrologic models, which explicitly incorporates the uncertainty in both the forcing and response data, and allows systematic model comparison based on residual model errors and formal Bayesian hypothesis testing (e.g., using Bayes factors). BATEA is based on explicit stochastic models for both forcing and response uncertainty, whereas current techniques focus solely on response errors. Hence, unlike existing methods, the BATEA parameter equations directly reflect the modeler's confidence in all the data. We compare several approaches to approximating the parameter distributions: a) full Markov Chain Monte Carlo methods and b) simplified approaches based on linear approximations. Studies using synthetic and real data from the US and Australia show that BATEA systematically reduces the parameter bias, leads to more meaningful model fits and allows model comparison taking
Testing and Inference in Nonlinear Cointegrating Vector Error Correction Models
DEFF Research Database (Denmark)
Kristensen, Dennis; Rahbek, Anders
In this paper, we consider a general class of vector error correction models which allow for asymmetric and non-linear error correction. We provide asymptotic results for (quasi-)maximum likelihood (QML) based estimators and tests. General hypothesis testing is considered, where testing for linearity is of particular interest as parameters of non-linear components vanish under the null. To solve the latter type of testing, we use the so-called sup tests, which here requires development of new (uniform) weak convergence results. These results are potentially useful in general for analysis ... symmetric non-linear error correction are considered. A simulation study shows that the finite sample properties of the bootstrapped tests are satisfactory with good size and power properties for reasonable sample sizes.
Consistent estimation of linear panel data models with measurement error
Meijer, Erik; Spierdijk, Laura; Wansbeek, Thomas
2017-01-01
Measurement error causes a bias towards zero when estimating a panel data linear regression model. The panel data context offers various opportunities to derive instrumental variables allowing for consistent estimation. We consider three sources of moment conditions: (i) restrictions on the
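The "bias towards zero" this abstract starts from is easy to reproduce: under classical measurement error, the OLS slope shrinks by the reliability ratio var(x)/(var(x)+var(e)). A short simulation with illustrative variances:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
x = rng.normal(0.0, 1.0, n)             # true regressor, variance 1
y = 2.0 * x + rng.normal(0.0, 0.5, n)   # true slope = 2
x_obs = x + rng.normal(0.0, 1.0, n)     # observed with error, error variance 1

slope = np.cov(x_obs, y)[0, 1] / np.var(x_obs)
# reliability ratio = 1 / (1 + 1) = 0.5, so the estimated slope is near 2 * 0.5 = 1
```

Instrumental variables built from the panel structure, as the abstract describes, are one way to undo exactly this attenuation.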
Identification of linear error-models with projected dynamical systems
Czech Academy of Sciences Publication Activity Database
Krejčí, Pavel; Kuhnen, K.
2004-01-01
Vol. 10, No. 1 (2004), pp. 59-91. ISSN 1387-3954. Keywords: identification; error models; projected dynamical systems. Subject area: BA - General Mathematics. Impact factor: 0.292 (2004). http://www.informaworld.com/smpp/content~db=all~content=a713682517
Testing for spatial error dependence in probit models
Amaral, P. V.; Anselin, L.; Arribas-Bel, D.
2013-01-01
In this note, we compare three test statistics that have been suggested to assess the presence of spatial error autocorrelation in probit models. We highlight the differences between the tests proposed by Pinkse and Slade (J Econom 85(1):125-254, 1998), Pinkse (Asymptotics of the Moran test and a
Bayesian network models for error detection in radiotherapy plans
Kalet, Alan M.; Gennari, John H.; Ford, Eric C.; Phillips, Mark H.
2015-04-01
The purpose of this study is to design and develop a probabilistic network for detecting errors in radiotherapy plans for use at the time of initial plan verification. Our group has initiated a multi-pronged approach to reduce these errors. We report on our development of Bayesian models of radiotherapy plans. Bayesian networks consist of joint probability distributions that define the probability of one event, given some set of other known information. Using the networks, we find the probability of obtaining certain radiotherapy parameters, given a set of initial clinical information. A low probability in a propagated network then corresponds to potential errors to be flagged for investigation. To build our networks we first interviewed medical physicists and other domain experts to identify the relevant radiotherapy concepts and their associated interdependencies and to construct a network topology. Next, to populate the network’s conditional probability tables, we used the Hugin Expert software to learn parameter distributions from a subset of de-identified data derived from a radiation oncology based clinical information database system. These data represent 4990 unique prescription cases over a 5 year period. Under test case scenarios with approximately 1.5% introduced error rates, network performance produced areas under the ROC curve of 0.88, 0.98, and 0.89 for the lung, brain and female breast cancer error detection networks, respectively. Comparison of the brain network to human experts performance (AUC of 0.90 ± 0.01) shows the Bayes network model performs better than domain experts under the same test conditions. Our results demonstrate the feasibility and effectiveness of comprehensive probabilistic models as part of decision support systems for improved detection of errors in initial radiotherapy plan verification procedures.
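The flagging logic can be caricatured with a single conditional probability table: a parameter combination whose propagated probability falls below a threshold is flagged for investigation. The table entries and threshold below are made up for the sketch; the paper learns its distributions from 4990 clinical prescription cases.

```python
# toy conditional table P(prescription dose | treatment site);
# the probabilities here are invented, not clinical values
p_dose_given_site = {
    ("brain", "60Gy"): 0.70, ("brain", "45Gy"): 0.28, ("brain", "5Gy"): 0.02,
    ("lung",  "60Gy"): 0.55, ("lung",  "45Gy"): 0.40, ("lung",  "5Gy"): 0.05,
}

def flag_for_review(site, dose, threshold=0.05):
    # combinations never seen in the table get probability 0 and are always flagged
    return p_dose_given_site.get((site, dose), 0.0) <= threshold

typical = flag_for_review("brain", "60Gy")  # common prescription, not flagged
suspect = flag_for_review("brain", "5Gy")   # rare combination, flagged as a potential error
```

A full Bayesian network generalizes this lookup by propagating evidence through many such conditional tables at once.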
Semiparametric analysis of linear transformation models with covariate measurement errors.
Sinha, Samiran; Ma, Yanyuan
2014-03-01
We take a semiparametric approach in fitting a linear transformation model to a right censored data when predictive variables are subject to measurement errors. We construct consistent estimating equations when repeated measurements of a surrogate of the unobserved true predictor are available. The proposed approach applies under minimal assumptions on the distributions of the true covariate or the measurement errors. We derive the asymptotic properties of the estimator and illustrate the characteristics of the estimator in finite sample performance via simulation studies. We apply the method to analyze an AIDS clinical trial data set that motivated the work. © 2013, The International Biometric Society.
A New Open-Loop Fiber Optic Gyro Error Compensation Method Based on Angular Velocity Error Modeling
Directory of Open Access Journals (Sweden)
Yanshun Zhang
2015-02-01
With the open-loop fiber optic gyro (OFOG) model, output voltage and angular velocity can effectively compensate OFOG errors. However, the model cannot reflect the characteristics of OFOG errors well when it comes to pretty large dynamic angular velocities. This paper puts forward a modeling scheme with OFOG output voltage u and temperature T as the input variables and angular velocity error Δω as the output variable. Firstly, the angular velocity error Δω is extracted from OFOG output signals, and then the output voltage u, temperature T and angular velocity error Δω are used as the learning samples to train a Radial-Basis-Function (RBF) neural network model. Then the nonlinear mapping model over T, u and Δω is established, and thus Δω can be calculated automatically to compensate OFOG errors according to T and u. The results of the experiments show that the established model can be used to compensate the nonlinear OFOG errors. The maximum, the minimum and the mean square error of OFOG angular velocity are decreased by 97.0%, 97.1% and 96.5% relative to their initial values, respectively. Compared with the direct modeling of gyro angular velocity, which we researched before, the experimental results of the compensating method proposed in this paper are further reduced by 1.6%, 1.4% and 1.42%, respectively, so the performance of this method is better than that of the direct modeling for gyro angular velocity.
A new open-loop fiber optic gyro error compensation method based on angular velocity error modeling.
Zhang, Yanshun; Guo, Yajing; Li, Chunyu; Wang, Yixin; Wang, Zhanqing
2015-02-27
With the open-loop fiber optic gyro (OFOG) model, output voltage and angular velocity can effectively compensate OFOG errors. However, the model cannot reflect the characteristics of OFOG errors well when it comes to pretty large dynamic angular velocities. This paper puts forward a modeling scheme with OFOG output voltage u and temperature T as the input variables and angular velocity error Δω as the output variable. Firstly, the angular velocity error Δω is extracted from OFOG output signals, and then the output voltage u, temperature T and angular velocity error Δω are used as the learning samples to train a Radial-Basis-Function (RBF) neural network model. Then the nonlinear mapping model over T, u and Δω is established and thus Δω can be calculated automatically to compensate OFOG errors according to T and u. The results of the experiments show that the established model can be used to compensate the nonlinear OFOG errors. The maximum, the minimum and the mean square error of OFOG angular velocity are decreased by 97.0%, 97.1% and 96.5% relative to their initial values, respectively. Compared with the direct modeling of gyro angular velocity, which we researched before, the experimental results of the compensating method proposed in this paper are further reduced by 1.6%, 1.4% and 1.42%, respectively, so the performance of this method is better than that of the direct modeling for gyro angular velocity.
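The RBF step described here amounts to fitting a weighted sum of Gaussian basis functions mapping (u, T) to Δω and subtracting the prediction from the raw output. A minimal sketch with synthetic training data standing in for measured OFOG samples:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, (200, 2))            # columns: u, T (normalized, synthetic)
y = np.sin(3.0 * X[:, 0]) + 0.5 * X[:, 1] ** 2  # synthetic stand-in for Δω

centers = X[::5]  # 40 RBF centers taken from the training samples

def design(X, C, gamma=4.0):
    # Gaussian radial basis matrix: phi[i, j] = exp(-gamma * ||x_i - c_j||^2)
    d2 = ((X[:, None, :] - C[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-gamma * d2)

w, *_ = np.linalg.lstsq(design(X, centers), y, rcond=None)

def compensate(u, T):
    """Predicted Δω for a new (u, T); subtract it from the raw gyro output."""
    return float(design(np.array([[u, T]]), centers) @ w)

rmse = float(np.sqrt(np.mean((design(X, centers) @ w - y) ** 2)))
```

The linear least-squares solve for the output weights is what makes RBF training cheap compared with fully nonlinear network fitting.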
An Empirical Point Error Model for TLS Derived Point Clouds
Ozendi, Mustafa; Akca, Devrim; Topan, Hüseyin
2016-06-01
The random error pattern of point clouds has a significant effect on the quality of the final 3D model. The magnitude and distribution of random errors should be modelled numerically. This work aims at developing such an anisotropic point error model, specifically for terrestrial laser scanner (TLS) acquired 3D point clouds. A priori precisions of the basic TLS observations, which are the range, horizontal angle and vertical angle, are determined by predefined and practical measurement configurations, performed in real-world test environments. The a priori precisions of the horizontal (σ_θ) and vertical (σ_α) angles are constant for each point of a data set, and can be determined directly through repetitive scanning of the same environment. In our practical tests, the precisions of the horizontal and vertical angles were found to be σ_θ = ±36.6cc and σ_α = ±17.8cc, respectively. On the other hand, the a priori precision of the range observation (σ_ρ) is assumed to be a function of the range, the incidence angle of the incoming laser ray, and the reflectivity of the object surface. Hence, it is a variable, and is computed for each point individually by employing an empirically developed formula varying as σ_ρ = ±2-12 mm for a FARO Focus X330 laser scanner. This procedure was followed by the computation of the error ellipsoid of each point using the law of variance-covariance propagation. The direction and size of the error ellipsoids were computed by the principal components transformation. The usability and feasibility of the model was investigated in real-world scenarios. These investigations validated the suitability and practicality of the proposed method.
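The variance-covariance propagation step can be sketched directly: build the Jacobian of the spherical-to-Cartesian mapping, propagate the diagonal observation covariance, and take principal components for the error ellipsoid. The spherical parameterization and the input precisions below are plausible assumptions for illustration, not necessarily the authors' exact configuration.

```python
import numpy as np

def point_covariance(rho, theta, alpha, s_rho, s_theta, s_alpha):
    ca, sa = np.cos(alpha), np.sin(alpha)
    ct, st = np.cos(theta), np.sin(theta)
    # Jacobian of (x, y, z) = (rho*cos(a)*cos(t), rho*cos(a)*sin(t), rho*sin(a))
    # with respect to (rho, theta, alpha)
    J = np.array([
        [ca * ct, -rho * ca * st, -rho * sa * ct],
        [ca * st,  rho * ca * ct, -rho * sa * st],
        [sa,       0.0,            rho * ca],
    ])
    S = np.diag([s_rho ** 2, s_theta ** 2, s_alpha ** 2])
    return J @ S @ J.T  # law of variance-covariance propagation

# illustrative point: 10 m range, angle precisions expressed in radians
C = point_covariance(rho=10.0, theta=0.3, alpha=0.1,
                     s_rho=0.005, s_theta=1.8e-4, s_alpha=0.9e-4)
evals, evecs = np.linalg.eigh(C)  # principal components transformation
semi_axes = np.sqrt(evals)        # semi-axes of the 1-sigma error ellipsoid
```

Because the angular terms scale with range, the ellipsoid's anisotropy grows with distance from the scanner, which is exactly what the empirical model captures.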
Error Modelling and Experimental Validation for a Planar 3-PPR Parallel Manipulator
DEFF Research Database (Denmark)
Wu, Guanglei; Bai, Shaoping; Kepler, Jørgen Asbøl
2011-01-01
In this paper, the positioning error of a 3-PPR planar parallel manipulator is studied with an error model and experimental validation. First, the displacement and workspace are analyzed. An error model considering both configuration errors and joint clearance errors is established. Using this mo...
Assessing Numerical Error in Structural Dynamics Using Energy Balance
Directory of Open Access Journals (Sweden)
Rabindranath Andujar
2013-01-01
This work applies the variational principles of Lagrange and Hamilton to the assessment of numerical methods of linear structural analysis. Different numerical methods are used to simulate the behaviour of three structural configurations and benchmarked in their computation of the Lagrangian action integral over time. According to the principle of energy conservation, the difference at each time step between the kinetic and the strain energies must equal the work done by the external forces. By computing this difference, the degree of accuracy of each combination of numerical methods can be assessed. Moreover, it is often difficult to perceive numerical instabilities due to the inherent complexities of the modelled structures. By means of the proposed procedure, these complexities can be globally controlled and visualized in a straightforward way. The paper presents the variational principles to be considered for the collection and computation of the energy-related parameters (kinetic, strain, dissipative, and external work). It then introduces a systematic framework within which the numerical methods can be compared in a qualitative as well as in a quantitative manner. Finally, a series of numerical experiments is conducted using three simple 2D models subjected to the effect of four different dynamic loadings.
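A minimal version of the proposed energy-balance check, assuming an undamped single-degree-of-freedom oscillator in free vibration (zero external work, so kinetic plus strain energy should remain at its initial value); the drift of that balance then measures the integrator's numerical error:

```python
# illustrative mass, stiffness and time step; not the paper's 2D models
m, k = 1.0, 4.0
dt, steps = 1e-3, 5000
x, v = 1.0, 0.0  # initial displacement and velocity

def energy(x, v):
    return 0.5 * m * v * v + 0.5 * k * x * x  # kinetic + strain energy

E0, max_drift = energy(x, v), 0.0
for _ in range(steps):
    v += dt * (-k * x / m)  # semi-implicit (symplectic) Euler step
    x += dt * v
    max_drift = max(max_drift, abs(energy(x, v) - E0) / E0)
# small max_drift -> this scheme honours the energy balance at every step
```

Swapping in a non-symplectic scheme (e.g. explicit Euler) makes the drift grow without bound, which is the kind of instability the paper's procedure is designed to expose.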
Error analysis of short term wind power prediction models
International Nuclear Information System (INIS)
De Giorgi, Maria Grazia; Ficarella, Antonio; Tarantino, Marco
2011-01-01
The integration of wind farms into power networks has become an important problem. This is because the electricity produced cannot be stored, given the high cost of storage, and electricity production must follow market demand. Short- to long-range wind forecasting over different periods of time is becoming an important process for the management of wind farms. Time series modelling of wind speeds is based upon the valid assumption that all the causative factors are implicitly accounted for in the sequence of occurrence of the process itself. Hence time series modelling is equivalent to physical modelling. Auto Regressive Moving Average (ARMA) models, which perform a linear mapping between inputs and outputs, and Artificial Neural Networks (ANNs) and Adaptive Neuro-Fuzzy Inference Systems (ANFIS), which perform a non-linear mapping, provide a robust approach to wind power prediction. In this work, these models are developed in order to forecast the power production of a wind farm with three wind turbines, using real load data and comparing different prediction periods. This comparative analysis covers, for the first time, various forecasting methods and time horizons, together with a deep performance analysis focused on the normalised mean error and its statistical distribution, in order to identify forecasting methods with a narrower error distribution and therefore a lower probability of prediction errors. (author)
Testing and Inference in Nonlinear Cointegrating Vector Error Correction Models
DEFF Research Database (Denmark)
Kristensen, Dennis; Rahbek, Anders
2013-01-01
We analyze estimators and tests for a general class of vector error correction models that allows for asymmetric and nonlinear error correction. For a given number of cointegration relationships, general hypothesis testing is considered, where testing for linearity is of particular interest. Under the null of linearity, parameters of nonlinear components vanish, leading to a nonstandard testing problem. We apply so-called sup-tests to resolve this issue, which requires development of new (uniform) functional central limit theory and results for convergence of stochastic integrals. We provide a full ... versions that are simple to compute. A simulation study shows that the finite-sample properties of the bootstrapped tests are satisfactory with good size and power properties for reasonable sample sizes.
Structural DMR: A Technique for Implementation of Soft Error Tolerant FIR Filters
Reviriego, P.; Bleakley, Chris J.; Maestro, J.A.
2011-01-01
In this brief, an efficient technique for implementation of soft-error-tolerant finite impulse response (FIR) filters is presented. The proposed technique uses two implementations of the basic filter with different structures operating in parallel. A soft error occurring in either filter causes the outputs of the filters to differ, or mismatch, for at least one sample. The filters are specifically designed so that, when a soft error occurs, they produce distinct error patterns at the filter o...
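The DMR idea can be demonstrated behaviourally: two FIR implementations with different internal structures (direct form and transposed form here, as stand-ins for whatever structures the brief actually uses) produce identical outputs when healthy, so any single soft error shows up as a sample-level mismatch. Taps and input are illustrative.

```python
def fir_direct(h, x):
    # direct form: y[n] = sum_k h[k] * x[n-k]
    return [sum(h[k] * x[n - k] for k in range(len(h)) if n - k >= 0)
            for n in range(len(x))]

def fir_transposed(h, x):
    # transposed form: same impulse response, different register structure
    N, regs, y = len(h), [0.0] * (len(h) - 1), []
    for xn in x:
        y.append(h[0] * xn + regs[0])
        for i in range(N - 2):          # update reads old neighbour values first
            regs[i] = regs[i + 1] + h[i + 1] * xn
        regs[N - 2] = h[N - 1] * xn
    return y

h = [0.25, 0.5, 0.25]
x = [1.0, 2.0, -1.0, 0.0, 3.0]
ok = fir_direct(h, x) == fir_transposed(h, x)  # fault-free: outputs agree
faulty = fir_direct([0.25, 0.5, 0.5], x)       # soft error modelled as a flipped tap
mismatch = faulty != fir_transposed(h, x)      # the disagreement detects the fault
```

Using structurally different implementations, rather than two copies of the same one, is what protects against a single fault corrupting both channels identically.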
Some aspects of statistical modeling of human-error probability
International Nuclear Information System (INIS)
Prairie, R.R.
1982-01-01
Human reliability analyses (HRA) are often performed as part of risk assessment and reliability projects. Recent events in nuclear power have shown the potential importance of the human element. There are several ongoing efforts in the US and elsewhere with the purpose of modeling human error such that the human contribution can be incorporated into an overall risk assessment associated with one or more aspects of nuclear power. The effort described here uses the HRA (event tree) to quantify and model the human contribution to risk. As an example, risk analyses are being prepared on several nuclear power plants as part of the Interim Reliability Assessment Program (IREP). In this process the risk analyst selects the elements of his fault tree that could be contributed to by human error. He then solicits the HF analyst to do an HRA on these elements
Approximate Minimization of the Regularized Expected Error over Kernel Models
Czech Academy of Sciences Publication Activity Database
Kůrková, Věra; Sanguineti, M.
2008-01-01
Roč. 33, č. 3 (2008), s. 747-756 ISSN 0364-765X R&D Projects: GA ČR GA201/05/0557; GA ČR GA201/08/1744 Institutional research plan: CEZ:AV0Z10300504 Keywords : suboptimal solutions * expected error * convex functionals * kernel methods * model complexity * rates of convergence Subject RIV: BA - General Mathematics Impact factor: 1.086, year: 2008
Error Estimation of An Ensemble Statistical Seasonal Precipitation Prediction Model
Shen, Samuel S. P.; Lau, William K. M.; Kim, Kyu-Myong; Li, Gui-Long
2001-01-01
This NASA Technical Memorandum describes an optimal ensemble canonical correlation forecasting model for seasonal precipitation. Each individual forecast is based on the canonical correlation analysis (CCA) in the spectral spaces whose bases are empirical orthogonal functions (EOF). The optimal weights in the ensemble forecasting crucially depend on the mean square error of each individual forecast. An estimate of the mean square error of a CCA prediction is also made using the spectral method. The error is decomposed onto EOFs of the predictand and decreases linearly according to the correlation between the predictor and predictand. Since the new CCA scheme is derived for continuous fields of predictor and predictand, an area factor is automatically included. Thus our model is an improvement of the spectral CCA scheme of Barnett and Preisendorfer. The improvements include (1) the use of the area factor, (2) the estimation of prediction error, and (3) the optimal ensemble of multiple forecasts. The new CCA model is applied to the seasonal forecasting of the United States (US) precipitation field. The predictor is the sea surface temperature (SST). The US Climate Prediction Center's reconstructed SST is used as the predictor's historical data. The US National Center for Environmental Prediction's optimally interpolated precipitation (1951-2000) is used as the predictand's historical data. Our forecast experiments show that the new ensemble canonical correlation scheme yields reasonable forecasting skill. For example, when using September-October-November SST to predict the next season's December-January-February precipitation, the spatial pattern correlation between the observed and predicted fields is positive in 46 of the 50 years of experiments. The positive correlations are close to or greater than 0.4 in 29 years, which indicates excellent performance of the forecasting model. The forecasting skill can be further enhanced when several predictors are used.
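The MSE-dependent ensemble weighting described in the abstract above can be illustrated with a short sketch. The member forecasts and per-member mean square errors below are hypothetical, and the inverse-MSE weighting is a standard minimum-variance choice consistent with the abstract's description, not code from the memorandum.

```python
import numpy as np

# Sketch of MSE-weighted ensemble averaging: each member forecast gets a
# weight inversely proportional to its estimated mean square error, with
# weights renormalized to sum to 1. All numbers here are hypothetical.
member_forecasts = np.array([
    [1.2, 0.8, 0.5],   # member 1 forecast at three grid points
    [1.0, 0.9, 0.7],   # member 2
    [1.5, 0.6, 0.4],   # member 3
])
member_mse = np.array([0.5, 0.2, 1.0])   # estimated MSE of each member

weights = (1.0 / member_mse) / np.sum(1.0 / member_mse)
ensemble = weights @ member_forecasts
print(weights)   # the low-error member 2 dominates the combination
```

The more accurate a member forecast, the larger its weight, which is the sense in which the optimal weights "crucially depend on the mean square error of each individual forecast."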
Modeling Input Errors to Improve Uncertainty Estimates for Sediment Transport Model Predictions
Jung, J. Y.; Niemann, J. D.; Greimann, B. P.
2016-12-01
Bayesian methods using Markov chain Monte Carlo algorithms have recently been applied to sediment transport models to assess the uncertainty in the model predictions due to the parameter values. Unfortunately, the existing approaches can only attribute overall uncertainty to the parameters. This limitation is critical because no model can produce accurate forecasts if forced with inaccurate input data, even if the model is well founded in physical theory. In this research, an existing Bayesian method is modified to consider the potential errors in input data during the uncertainty evaluation process. The input error is modeled using Gaussian distributions, and the means and standard deviations are treated as uncertain parameters. The proposed approach is tested by coupling it to the Sedimentation and River Hydraulics - One Dimension (SRH-1D) model and simulating a 23-km reach of the Tachia River in Taiwan. The Wu equation in SRH-1D is used for computing the transport capacity for a bed material load of non-cohesive material. Three types of input data are considered uncertain: (1) the input flowrate at the upstream boundary, (2) the water surface elevation at the downstream boundary, and (3) the water surface elevation at a hydraulic structure in the middle of the reach. The benefits of modeling the input errors in the uncertainty analysis are evaluated by comparing the accuracy of the most likely forecast and the coverage of the observed data by the credible intervals to those of the existing method. The results indicate that the internal boundary condition has the largest uncertainty among those considered. Overall, the uncertainty estimates from the new method are notably different from those of the existing method for both the calibration and forecast periods.
International Nuclear Information System (INIS)
Fruehwirth, R.
1993-01-01
We present an estimation procedure of the error components in a linear regression model with multiple independent stochastic error contributions. After solving the general problem we apply the results to the estimation of the actual trajectory in track fitting with multiple scattering. (orig.)
Topological quantum error correction in the Kitaev honeycomb model
Lee, Yi-Chan; Brell, Courtney G.; Flammia, Steven T.
2017-08-01
The Kitaev honeycomb model is an approximate topological quantum error correcting code in the same phase as the toric code, but requiring only a 2-body Hamiltonian. As a frustrated spin model, it is well outside the commuting models of topological quantum codes that are typically studied, but its exact solubility makes it more amenable to analysis of effects arising in this noncommutative setting than a generic topologically ordered Hamiltonian. Here we study quantum error correction in the honeycomb model using both analytic and numerical techniques. We first prove explicit exponential bounds on the approximate degeneracy, local indistinguishability, and correctability of the code space. These bounds are tighter than can be achieved using known general properties of topological phases. Our proofs are specialized to the honeycomb model, but some of the methods may nonetheless be of broader interest. Following this, we numerically study noise caused by thermalization processes in the perturbative regime close to the toric code renormalization group fixed point. The appearance of non-topological excitations in this setting has no significant effect on the error correction properties of the honeycomb model in the regimes we study. Although the behavior of this model is found to be qualitatively similar to that of the standard toric code in most regimes, we find numerical evidence of an interesting effect in the low-temperature, finite-size regime where a preferred lattice direction emerges and anyon diffusion is geometrically constrained. We expect this effect to yield an improvement in the scaling of the lifetime with system size as compared to the standard toric code.
Effect of unrepresented model errors on estimated soil hydraulic material properties
Directory of Open Access Journals (Sweden)
S. Jaumann
2017-09-01
Full Text Available Unrepresented model errors influence the estimation of effective soil hydraulic material properties. As the required model complexity for a consistent description of the measurement data is application dependent and unknown a priori, we implemented a structural error analysis based on the inversion of increasingly complex models. We show that the method can indicate unrepresented model errors and quantify their effects on the resulting material properties. To this end, a complicated 2-D subsurface architecture (ASSESS) was forced with a fluctuating groundwater table while time domain reflectometry (TDR) and hydraulic potential measurement devices monitored the hydraulic state. In this work, we analyze the quantitative effect of unrepresented (i) sensor position uncertainty, (ii) small-scale heterogeneity, and (iii) 2-D flow phenomena on estimated soil hydraulic material properties with a 1-D and a 2-D study. The results of these studies demonstrate three main points: (i) the fewer sensors are available per material, the larger is the effect of unrepresented model errors on the resulting material properties. (ii) The 1-D study yields biased parameters due to unrepresented lateral flow. (iii) Representing and estimating sensor positions as well as small-scale heterogeneity decreased the mean absolute error of the volumetric water content data by more than a factor of 2, to 0.004.
Functional multiple indicators, multiple causes measurement error models.
Tekwe, Carmen D; Zoh, Roger S; Bazer, Fuller W; Wu, Guoyao; Carroll, Raymond J
2017-05-08
Objective measures of oxygen consumption and carbon dioxide production by mammals are used to predict their energy expenditure. Since energy expenditure is not directly observable, it can be viewed as a latent construct with multiple physical indirect measures such as respiratory quotient, volumetric oxygen consumption, and volumetric carbon dioxide production. Metabolic rate is defined as the rate at which metabolism occurs in the body. Metabolic rate is also not directly observable. However, heat is produced as a result of metabolic processes within the body. Therefore, metabolic rate can be approximated by heat production plus some errors. While energy expenditure and metabolic rates are correlated, they are not equivalent. Energy expenditure results from physical function, while metabolism can occur within the body without the occurrence of physical activities. In this manuscript, we present a novel approach for studying the relationship between metabolic rate and indicators of energy expenditure. We do so by extending our previous work on MIMIC ME models to allow responses that are sparsely observed functional data, defining the sparse functional multiple indicators, multiple cause measurement error (FMIMIC ME) models. The mean curves in our proposed methodology are modeled using basis splines. A novel approach for estimating the variance of the classical measurement error based on functional principal components is presented. The model parameters are estimated using the EM algorithm and a discussion of the model's identifiability is provided. We show that the defined model is not a trivial extension of longitudinal or functional data methods, due to the presence of the latent construct. Results from its application to data collected on Zucker diabetic fatty rats are provided. Simulation results investigating the properties of our approach are also presented. © 2017, The International Biometric Society.
Likelihood-Based Inference in Nonlinear Error-Correction Models
DEFF Research Database (Denmark)
Kristensen, Dennis; Rahbæk, Anders
We consider a class of vector nonlinear error correction models where the transfer function (or loadings) of the stationary relationships is nonlinear. This includes in particular the smooth transition models. A general representation theorem is given which establishes the dynamic properties … and a linear trend in general. Gaussian likelihood-based estimators are considered for the long-run cointegration parameters and the short-run parameters. Asymptotic theory is provided for these, and it is discussed to what extent asymptotic normality and mixed normality can be found. A simulation study …
DEFF Research Database (Denmark)
Wu, Guanglei; Bai, Shaoping; Kepler, Jørgen Asbøl
2012-01-01
This paper deals with the error modelling and analysis of a 3-PPR planar parallel manipulator with joint clearances. The kinematics and the Cartesian workspace of the manipulator are analyzed. An error model is established with considerations of both configuration errors and joint clearances. Using this model, the upper bounds and distributions of the pose errors for this manipulator are established. The results are compared with experimental measurements and show the effectiveness of the error prediction model.
Error apportionment for atmospheric chemistry-transport models – a new approach to model evaluation
Directory of Open Access Journals (Sweden)
E. Solazzo
2016-05-01
Full Text Available In this study, methods are proposed to diagnose the causes of errors in air quality (AQ) modelling systems. We investigate the deviation between modelled and observed time series of surface ozone through a revised formulation for breaking down the mean square error (MSE) into bias, variance and the minimum achievable MSE (mMSE). The bias measures the accuracy and implies the existence of systematic errors and poor representation of data complexity, the variance measures the precision and provides an estimate of the variability of the modelling results in relation to the observed data, and the mMSE reflects unsystematic errors and provides a measure of the associativity between the modelled and the observed fields through the correlation coefficient. Each of the error components is analysed independently and apportioned to resolved processes based on the corresponding timescale (long scale, synoptic, diurnal, and intra-day) and as a function of model complexity. The apportionment of the error is applied to the AQMEII (Air Quality Model Evaluation International Initiative) group of models, which embrace the majority of regional AQ modelling systems currently used in Europe and North America. The proposed technique has proven to be a compact estimator of the operational metrics commonly used for model evaluation (bias, variance, and correlation coefficient), and has the further benefit of apportioning the error to the originating timescale, thus allowing for a clearer diagnosis of the processes that caused the error.
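The bias/variance/mMSE breakdown described in the abstract above can be sketched numerically. This is a hedged reconstruction of a Murphy-style MSE decomposition consistent with the abstract (bias², a variance-difference term, and a correlation-dependent minimum achievable MSE); the synthetic "ozone" series is purely illustrative.

```python
import numpy as np

def mse_decomposition(model, obs):
    """Break the mean square error into bias^2, a variance-difference
    term, and a correlation-dependent 'minimum achievable' term.
    (Murphy-style decomposition; population statistics, ddof=0.)"""
    bias2 = (model.mean() - obs.mean()) ** 2
    var_term = (model.std() - obs.std()) ** 2
    r = np.corrcoef(model, obs)[0, 1]
    mmse = 2.0 * model.std() * obs.std() * (1.0 - r)
    return bias2, var_term, mmse

rng = np.random.default_rng(0)
obs = rng.normal(40.0, 10.0, 1000)              # synthetic observed series
model = 0.8 * obs + rng.normal(5.0, 4.0, 1000)  # synthetic modelled series

bias2, var_term, mmse = mse_decomposition(model, obs)
mse = np.mean((model - obs) ** 2)
print(bias2 + var_term + mmse - mse)   # the three components sum to the MSE
```

Because the three components are exactly additive, each can be analysed and apportioned independently, which is what makes the decomposition useful as a diagnostic.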
Global tropospheric ozone modeling: Quantifying errors due to grid resolution
Wild, Oliver; Prather, Michael J.
2006-06-01
Ozone production in global chemical models is dependent on model resolution because ozone chemistry is inherently nonlinear, the timescales for chemical production are short, and precursors are artificially distributed over the spatial scale of the model grid. In this study we examine the sensitivity of ozone, its precursors, and its production to resolution by running a global chemical transport model at four different resolutions between T21 (5.6° × 5.6°) and T106 (1.1° × 1.1°) and by quantifying the errors in regional and global budgets. The sensitivity to vertical mixing through the parameterization of boundary layer turbulence is also examined. We find less ozone production in the boundary layer at higher resolution, consistent with slower chemical production in polluted emission regions and greater export of precursors. Agreement with ozonesonde and aircraft measurements made during the NASA TRACE-P campaign over the western Pacific in spring 2001 is consistently better at higher resolution. We demonstrate that the numerical errors in transport processes on a given resolution converge geometrically for a tracer at successively higher resolutions. The convergence in ozone production on progressing from T21 to T42, T63, and T106 resolution is likewise monotonic but indicates that there are still large errors at 120 km scales, suggesting that T106 resolution is too coarse to resolve regional ozone production. Diagnosing the ozone production and precursor transport that follow a short pulse of emissions over east Asia in springtime allows us to quantify the impacts of resolution on both regional and global ozone. Production close to continental emission regions is overestimated by 27% at T21 resolution, by 13% at T42 resolution, and by 5% at T106 resolution. However, subsequent ozone production in the free troposphere is not greatly affected. We find that the export of short-lived precursors such as NOx by convection is overestimated at coarse resolution.
A Systems Modeling Approach for Risk Management of Command File Errors
Meshkat, Leila
2012-01-01
The main cause of commanding errors is often (but not always) procedural: lack of maturity in the processes, incompleteness of requirements, or lack of compliance with these procedures. Other causes of commanding errors include lack of understanding of system states, inadequate communication, and making hasty changes in standard procedures in response to an unexpected event. In general, it is important to look at the big picture prior to making corrective actions. In the case of errors traced back to procedures, considering the reliability of the process as a metric during its design may help to reduce risk. This metric is obtained by using data from the nuclear industry regarding human reliability. A structured method for the collection of anomaly data will help the operator think systematically about the anomaly and facilitate risk management. Formal models can be used for risk-based design and risk management. A generic set of models can be customized for a broad range of missions.
Bennett, A.; Nijssen, B.; Chegwidden, O.; Wood, A.; Clark, M. P.
2017-12-01
Model intercomparison experiments have been conducted to quantify the variability introduced during the model development process, but have had limited success in identifying the sources of this model variability. The Structure for Unifying Multiple Modeling Alternatives (SUMMA) has been developed as a framework which defines a general set of conservation equations for mass and energy as well as a common core of numerical solvers along with the ability to set options for choosing between different spatial discretizations and flux parameterizations. SUMMA can be thought of as a framework for implementing meta-models which allows for the investigation of the impacts of decisions made during the model development process. Through this flexibility we develop a hierarchy of definitions which allows for models to be compared to one another. This vocabulary allows us to define the notion of weak equivalence between model instantiations. Through this weak equivalence we develop the concept of model mimicry, which can be used to investigate the introduction of uncertainty and error during the modeling process as well as provide a framework for identifying modeling decisions which may complement or negate one another. We instantiate SUMMA instances that mimic the behaviors of the Variable Infiltration Capacity (VIC) model and the Precipitation Runoff Modeling System (PRMS) by choosing modeling decisions which are implemented in each model. We compare runs from these models and their corresponding mimics across the Columbia River Basin located in the Pacific Northwest of the United States and Canada. From these comparisons, we are able to determine the extent to which model implementation has an effect on the results, as well as determine the changes in sensitivity of parameters due to these implementation differences. By examining these changes in results and sensitivities we can attempt to postulate changes in the modeling decisions which may provide better estimation of
Semiparametric modeling: Correcting low-dimensional model error in parametric models
International Nuclear Information System (INIS)
Berry, Tyrus; Harlim, John
2016-01-01
In this paper, a semiparametric modeling approach is introduced as a paradigm for addressing model error arising from unresolved physical phenomena. Our approach compensates for model error by learning an auxiliary dynamical model for the unknown parameters. Practically, the proposed approach consists of the following steps. Given a physics-based model and a noisy data set of historical observations, a Bayesian filtering algorithm is used to extract a time-series of the parameter values. Subsequently, the diffusion forecast algorithm is applied to the retrieved time-series in order to construct the auxiliary model for the time evolving parameters. The semiparametric forecasting algorithm consists of integrating the existing physics-based model with an ensemble of parameters sampled from the probability density function of the diffusion forecast. To specify initial conditions for the diffusion forecast, a Bayesian semiparametric filtering method that extends the Kalman-based filtering framework is introduced. In difficult test examples, which introduce chaotically and stochastically evolving hidden parameters into the Lorenz-96 model, we show that our approach can effectively compensate for model error, with forecasting skill comparable to that of the perfect model.
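The Lorenz-96 system used as the test bed above is simple to reproduce; a minimal RK4 integrator is sketched below. The semiparametric filtering and diffusion-forecast machinery themselves are not reproduced here, and the forcing F = 8 with 40 variables is the conventional chaotic configuration, assumed rather than taken from the paper.

```python
import numpy as np

def lorenz96_rhs(x, forcing=8.0):
    # dx_i/dt = (x_{i+1} - x_{i-2}) * x_{i-1} - x_i + F, cyclic indices
    return (np.roll(x, -1) - np.roll(x, 2)) * np.roll(x, 1) - x + forcing

def step_rk4(x, dt=0.01):
    # classical fourth-order Runge-Kutta step
    k1 = lorenz96_rhs(x)
    k2 = lorenz96_rhs(x + 0.5 * dt * k1)
    k3 = lorenz96_rhs(x + 0.5 * dt * k2)
    k4 = lorenz96_rhs(x + dt * k3)
    return x + dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6.0

x = 8.0 * np.ones(40)   # rest state equal to the forcing
x[0] += 0.01            # small perturbation to trigger the chaotic regime
for _ in range(1000):   # integrate to t = 10 model time units
    x = step_rk4(x)
print(x)                # bounded, irregular (chaotic) state
```

Chaotically evolving hidden parameters can then be introduced into systems like this one to stress-test a forecasting scheme, which is the role Lorenz-96 plays in the paper's experiments.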
Sang, Huiyan
2011-12-01
This paper investigates the cross-correlations across multiple climate model errors. We build a Bayesian hierarchical model that accounts for the spatial dependence of individual models as well as cross-covariances across different climate models. Our method allows for a nonseparable and nonstationary cross-covariance structure. We also present a covariance approximation approach to facilitate the computation in the modeling and analysis of very large multivariate spatial data sets. The covariance approximation consists of two parts: a reduced-rank part to capture the large-scale spatial dependence, and a sparse covariance matrix to correct the small-scale dependence error induced by the reduced rank approximation. We pay special attention to the case that the second part of the approximation has a block-diagonal structure. Simulation results of model fitting and prediction show substantial improvement of the proposed approximation over the predictive process approximation and the independent blocks analysis. We then apply our computational approach to the joint statistical modeling of multiple climate model errors. © 2012 Institute of Mathematical Statistics.
PENDEKATAN ERROR CORRECTION MODEL SEBAGAI PENENTU HARGA SAHAM
Directory of Open Access Journals (Sweden)
David Kaluge
2017-03-01
Full Text Available This research was to find the effect of profitability, rate of interest, GDP, and foreign exchange rate on stock prices. The approach used was an error correction model. Profitability was indicated by the variables EPS and ROI, while the SBI (1 month) was used for representing the interest rate. This research found that all variables simultaneously affected the stock prices significantly. Partially, EPS, PER, and the foreign exchange rate significantly affected the prices both in the short run and the long run. Interestingly, SBI and GDP did not affect the prices at all. The variable ROI had only a long-run impact on the prices.
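An error correction model of the kind named in the abstract above can be sketched with the two-step Engle-Granger procedure. The simulated series and coefficients below are illustrative assumptions, not the paper's Indonesian data.

```python
import numpy as np

# Two-step Engle-Granger error-correction sketch on simulated data.
rng = np.random.default_rng(1)
n = 300
x = np.cumsum(rng.normal(size=n))                   # I(1) driver, e.g. log EPS
y = 2.0 + 0.5 * x + rng.normal(scale=0.3, size=n)   # cointegrated "price"

# Step 1: long-run relation y_t = a + b*x_t + u_t
A = np.column_stack([np.ones(n), x])
a, b = np.linalg.lstsq(A, y, rcond=None)[0]
ect = y - (a + b * x)                               # error-correction term

# Step 2: short-run dynamics dy_t = c + alpha*ect_{t-1} + g*dx_t + e_t
dy, dx = np.diff(y), np.diff(x)
B = np.column_stack([np.ones(n - 1), ect[:-1], dx])
c, alpha, g = np.linalg.lstsq(B, dy, rcond=None)[0]
print(alpha)   # negative: deviations from equilibrium are corrected
```

The sign and size of `alpha` capture the distinction the abstract draws between short-run and long-run effects: a significantly negative `alpha` means the series is pulled back toward the long-run relation.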
A predictive model for dimensional errors in fused deposition modeling
DEFF Research Database (Denmark)
Stolfi, A.
2015-01-01
This work concerns the effect of deposition angle (a) and layer thickness (L) on the dimensional performance of FDM parts using a predictive model based on the geometrical description of the FDM filament profile. An experimental validation over the whole a range from 0° to 177° at 3° steps and two values of L (0.254 mm, 0.330 mm) was produced by comparing predicted values with external face-to-face measurements. After removing outliers, the results show that the developed two-parameter model can serve as tool for modeling the FDM dimensional behavior in a wide range of deposition angles.
Correction of thickness measurement errors for two adjacent sheet structures in MR images
International Nuclear Information System (INIS)
Cheng Yuanzhi; Wang Shuguo; Sato, Yoshinobu; Nishii, Takashi; Tamura, Shinichi
2007-01-01
We present a new method for measuring the thickness of two adjacent sheet structures in MR images. In the hip joint, in which the femoral and acetabular cartilages are adjacent to each other, a conventional measurement technique based on the second derivative zero crossings (called the zero-crossings method) can introduce large underestimation errors in measurements of cartilage thickness. In this study, we have developed a model-based approach for accurate thickness measurement. We model the imaging process for two adjacent sheet structures, which simulate the two articular cartilages in the hip joint. This model can be used to predict the shape of the intensity profile along the sheet normal orientation. Using an optimization technique, the model parameters are adjusted to minimize the differences between the predicted intensity profile and the actual intensity profiles observed in the MR data. The set of model parameters that minimize the difference between the model and the MR data yield the thickness estimation. Using three phantoms and one normal cadaveric specimen, the usefulness of the new model-based method is demonstrated by comparing the model-based results with the results generated using the zero-crossings method. (author)
Regularized Structural Equation Modeling
Jacobucci, Ross; Grimm, Kevin J.; McArdle, John J.
2016-01-01
A new method is proposed that extends the use of regularization in both lasso and ridge regression to structural equation models. The method is termed regularized structural equation modeling (RegSEM). RegSEM penalizes specific parameters in structural equation models, with the goal of creating easier to understand and simpler models. Although regularization has gained wide adoption in regression, very little has transferred to models with latent variables. By adding penalties to specific parameters in a structural equation model, researchers have a high level of flexibility in reducing model complexity, overcoming poor fitting models, and the creation of models that are more likely to generalize to new samples. The proposed method was evaluated through a simulation study, two illustrative examples involving a measurement model, and one empirical example involving the structural part of the model to demonstrate RegSEM’s utility. PMID:27398019
Directory of Open Access Journals (Sweden)
Akhsyim Afandi
2017-03-01
Full Text Available There was a question whether monetary policy working through the bank lending channel requires a monetary-induced change in bank loans to originate from the supply side. Most empirical studies that employed vector autoregressive (VAR) models failed to fulfill this requirement. Aiming to offer a solution to this identification problem, this paper developed a five-variable vector error correction (VEC) model of two separate bank credit markets in Indonesia. Departing from previous studies, the model of each market took account of one structural break endogenously determined by implementing a unit root test. A cointegration test that took account of one structural break suggested two cointegrating vectors identified as bank lending supply and demand relations. The estimated VEC system for both markets suggested that bank loans adjusted more strongly in the direction of the supply equation.
Uncertainty and error in complex plasma chemistry models
Turner, Miles M.
2015-06-01
Chemistry models that include dozens of species and hundreds to thousands of reactions are common in low-temperature plasma physics. The rate constants used in such models are uncertain, because they are obtained from some combination of experiments and approximate theories. Since the predictions of these models are a function of the rate constants, these predictions must also be uncertain. However, systematic investigations of the influence of uncertain rate constants on model predictions are rare to non-existent. In this work we examine a particular chemistry model, for helium-oxygen plasmas. This chemistry is of topical interest because of its relevance to biomedical applications of atmospheric pressure plasmas. We trace the primary sources for every rate constant in the model, and hence associate an error bar (or equivalently, an uncertainty) with each. We then use a Monte Carlo procedure to quantify the uncertainty in predicted plasma species densities caused by the uncertainty in the rate constants. Under the conditions investigated, the range of uncertainty in most species densities is a factor of two to five. However, the uncertainty can vary strongly for different species, over time, and with other plasma conditions. There are extreme (pathological) cases where the uncertainty is more than a factor of ten. One should therefore be cautious in drawing any conclusion from plasma chemistry modelling, without first ensuring that the conclusion in question survives an examination of the related uncertainty.
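The Monte Carlo uncertainty propagation described above can be sketched with a toy two-reaction balance. The real helium-oxygen chemistry has hundreds of reactions; the rate constants, error-bar factors, and densities below are hypothetical stand-ins.

```python
import numpy as np

# Propagate lognormal uncertainty in rate constants through a toy balance:
# production k0 * n_e against loss k1 * n (the background gas density cancels).
rng = np.random.default_rng(2)

k_nominal = np.array([1e-10, 5e-12])   # hypothetical rate constants (cm^3/s)
uncertainty = np.array([2.0, 3.0])     # hypothetical factor-of-k error bars

def steady_state_density(k, n_e=1e10):
    # toy steady state: k[0] * n_e = k[1] * n  ->  n = k[0] * n_e / k[1]
    return k[0] * n_e / k[1]

samples = []
for _ in range(5000):
    # sample each rate constant within its stated error bar (lognormal)
    k = k_nominal * uncertainty ** rng.normal(size=2)
    samples.append(steady_state_density(k))

lo, hi = np.percentile(samples, [2.5, 97.5])
print(hi / lo)   # spread of the predicted density over the 95% interval
```

Even in this two-reaction toy, factor-of-two and factor-of-three rate-constant error bars compound into a much larger spread in the predicted density, which is the qualitative point the abstract makes about full chemistry sets.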
James W. Hardin; Henrik Schmeidiche; Raymond J. Carroll
2003-01-01
This paper discusses and illustrates the method of regression calibration. This is a straightforward technique for fitting models with additive measurement error. We present this discussion in terms of generalized linear models (GLMs) following the notation defined in Hardin and Carroll (2003). Discussion will include specified measurement error, measurement error estimated by replicate error-prone proxies, and measurement error estimated by instrumental variables. The discussion focuses on s...
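A minimal sketch of regression calibration with replicate error-prone proxies, in the spirit of the method the paper discusses; the simulated data and the simple moment-based slope estimator are assumptions for illustration, not the GLM machinery of the paper.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 20000
x = rng.normal(0.0, 1.0, n)                  # true covariate (unobserved)
y = 1.0 + 2.0 * x + rng.normal(0.0, 1.0, n)  # outcome; true slope = 2
w1 = x + rng.normal(0.0, 1.0, n)             # two error-prone replicates
w2 = x + rng.normal(0.0, 1.0, n)

# naive fit on the averaged proxy is attenuated toward zero
wbar = (w1 + w2) / 2
naive = np.cov(wbar, y)[0, 1] / np.var(wbar)

# estimate the measurement-error variance from replicate differences,
# then calibrate: E[X | wbar] ~= mu + lam * (wbar - mu)
var_u = np.var(w1 - w2) / 2                  # Var(U) per replicate
var_x = np.var(wbar) - var_u / 2
lam = var_x / (var_x + var_u / 2)
x_hat = wbar.mean() + lam * (wbar - wbar.mean())
calibrated = np.cov(x_hat, y)[0, 1] / np.var(x_hat)
print(naive, calibrated)   # calibrated slope is close to the true value 2
```

The naive slope is shrunk by the reliability ratio, while regressing on the calibrated `x_hat` undoes the attenuation; this is the core idea that the paper's GLM treatment generalizes.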
To Err Is Human; To Structurally Prime from Errors Is Also Human
Slevc, L. Robert; Ferreira, Victor S.
2013-01-01
Natural language contains disfluencies and errors. Do listeners simply discard information that was clearly produced in error, or can erroneous material persist to affect subsequent processing? Two experiments explored this question using a structural priming paradigm. Speakers described dative-eliciting pictures after hearing prime sentences that…
International Nuclear Information System (INIS)
Carl Stern; Martin Lee
1999-01-01
Phase I work studied the feasibility of developing software for automatic component calibration and error correction in beamline optics models. A prototype application was developed that corrects quadrupole field strength errors in beamline models
Brassington, Gary
2017-04-01
The mean absolute error (MAE) and root mean square error (RMSE) are two metrics that are often used interchangeably as measures of ocean forecast accuracy. Recent literature has debated which of these should be preferred, though the conclusions have largely been based on empirical arguments. We note that in general RMSE² = MAE² + Var_k[|ε|], such that RMSE includes both the MAE as well as additional information related to the variance (biased estimator) of the errors ε with sample size k. The greater sensitivity of RMSE to a small number of outliers is directly attributable to the variance of the absolute error. Further statistical properties for both metrics are derived and compared based on the assumption that the errors are Gaussian. For an unbiased (or bias-corrected) model, both MAE and RMSE are shown to estimate the total error standard deviation to within a constant coefficient, such that MAE ≈ √(2/π)·RMSE. Both metrics have comparable behaviour in response to model bias and asymptote to the model bias as the bias increases. MAE is shown to be an unbiased estimator while RMSE is a biased estimator. MAE also has a lower sample variance compared with RMSE, indicating MAE is the most robust choice. For real-time applications where there is a likelihood of "bad" observations we recommend TESD = √(π/2)·MAE ± (1/√k)·√(π/2 − 1)·√(π/2)·MAE as an unbiased estimator of the total error standard deviation, with error estimates (one standard deviation) based on the sample variance and defined as a scaling of the MAE itself. A sample size (k) on the order of 90 and 9000 provides an error scaling of 10% and 1% respectively. Nonetheless, if the model performance is being analysed using a large sample of delayed-mode quality-controlled observations then RMSE might be preferred where the second-moment sensitivity to large model errors is important. Alternatively, for model intercomparisons the information might be compactly represented by a …
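The Gaussian relation in the abstract above, MAE ≈ √(2/π)·RMSE for unbiased errors, is easy to verify numerically with synthetic data:

```python
import numpy as np

# Check MAE ≈ sqrt(2/pi) * RMSE for zero-bias Gaussian forecast errors.
rng = np.random.default_rng(4)
errors = rng.normal(0.0, 1.5, 100000)   # unbiased Gaussian errors, sigma = 1.5

mae = np.mean(np.abs(errors))
rmse = np.sqrt(np.mean(errors ** 2))
print(mae / rmse, np.sqrt(2 / np.pi))   # both close to 0.798
```

For a half-normal |ε| the expected absolute error is σ·√(2/π) while the RMSE estimates σ itself, so the ratio of the two metrics is the constant coefficient the abstract derives.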
A new stochastic model considering satellite clock interpolation errors in precise point positioning
Wang, Shengli; Yang, Fanlin; Gao, Wang; Yan, Lizi; Ge, Yulong
2018-03-01
Precise clock products are typically interpolated based on the sampling interval of the observational data when they are used in precise point positioning. However, due to the occurrence of white noise in atomic clocks, a residual component of such noise will inevitably reside within the observations when clock errors are interpolated, and such noise will affect the resolution of the positioning results. In this paper, which is based on a twenty-one-week analysis of the atomic clock noise characteristics of numerous satellites, a new stochastic observation model that considers satellite clock interpolation errors is proposed. First, the systematic error of each satellite in the IGR clock product was extracted using a wavelet de-noising method to obtain the empirical characteristics of atomic clock noise within each clock product. Then, based on those empirical characteristics, a stochastic observation model was structured that considered the satellite clock interpolation errors. Subsequently, the IGR and IGS clock products at different time intervals were used for experimental validation. A verification using 179 stations worldwide from the IGS showed that, compared with the conventional model, the convergence times using the stochastic model proposed in this study were respectively shortened by 4.8% and 4.0% when the IGR and IGS 300-s-interval clock products were used and by 19.1% and 19.4% when the 900-s-interval clock products were used. Furthermore, the disturbances during the initial phase of the calculation were also effectively improved.
Peak-counts blood flow model-errors and limitations
International Nuclear Information System (INIS)
Mullani, N.A.; Marani, S.K.; Ekas, R.D.; Gould, K.L.
1984-01-01
The peak-counts model has several advantages, but its use may be limited due to the condition that the venous egress may not be negligible at the time of peak counts. Consequently, blood flow measurements by the peak-counts model will depend on the bolus size, bolus duration, and the minimum transit time of the bolus through the region of interest. The effect of bolus size on the measurement of extraction fraction and blood flow was evaluated by injecting 1 to 30 ml of rubidium chloride in the femoral vein of a dog and measuring the myocardial activity with a beta probe over the heart. Regional blood flow measurements were not found to vary with bolus sizes up to 30 ml. The effect of bolus duration was studied by injecting a 10 cc bolus of tracer at different speeds in the femoral vein of a dog. All intravenous injections undergo a broadening of the bolus duration due to the transit time of the tracer through the lungs and the heart. This transit time was found to range from 4-6 seconds FWHM and dominates the duration of the bolus to the myocardium for injections of up to 3 seconds. A computer simulation has been carried out in which the different parameters of delay time, extraction fraction, and bolus duration can be changed to assess the errors in the peak-counts model. The results of the simulations show that the error will be greatest for short transit time delays and for low extraction fractions.
Avoidable errors in deposited macromolecular structures: an impediment to efficient data mining.
Dauter, Zbigniew; Wlodawer, Alexander; Minor, Wladek; Jaskolski, Mariusz; Rupp, Bernhard
2014-05-01
Whereas the vast majority of the more than 85 000 crystal structures of macromolecules currently deposited in the Protein Data Bank are of high quality, some suffer from a variety of imperfections. Although this fact has been pointed out in the past, it is still worth periodic updates so that the metadata obtained by global analysis of the available crystal structures, as well as the utilization of the individual structures for tasks such as drug design, should be based on only the most reliable data. Here, selected abnormal deposited structures have been analysed based on the Bayesian reasoning that the correctness of a model must be judged against both the primary evidence as well as prior knowledge. These structures, as well as information gained from the corresponding publications (if available), have emphasized some of the most prevalent types of common problems. The errors are often perfect illustrations of the nature of human cognition, which is frequently influenced by preconceptions that may lead to fanciful results in the absence of proper validation. Common errors can be traced to negligence and a lack of rigorous verification of the models against electron density, creation of non-parsimonious models, generation of improbable numbers, application of incorrect symmetry, illogical presentation of the results, or violation of the rules of chemistry and physics. Paying more attention to such problems, not only in the final validation stages but during the structure-determination process as well, is necessary not only in order to maintain the highest possible quality of the structural repositories and databases but most of all to provide a solid basis for subsequent studies, including large-scale data-mining projects. For many scientists PDB deposition is a rather infrequent event, so the need for proper training and supervision is emphasized, as well as the need for constant alertness of reason and critical judgment as absolutely necessary safeguarding
Dynamic time warping in phoneme modeling for fast pronunciation error detection.
Miodonska, Zuzanna; Bugdol, Marcin D; Krecichwost, Michal
2016-02-01
The presented paper describes a novel approach to the detection of pronunciation errors. It makes use of the modeling of well-pronounced and mispronounced phonemes by means of the Dynamic Time Warping (DTW) algorithm. Four approaches that make use of the DTW phoneme modeling were developed to detect pronunciation errors: Variations of the Word Structure (VoWS), Normalized Phoneme Distances Thresholding (NPDT), Furthest Segment Search (FSS) and Normalized Furthest Segment Search (NFSS). The performance evaluation of each module was carried out using a speech database of correctly and incorrectly pronounced words in the Polish language, with up to 10 patterns of every trained word from a set of 12 words having different phonetic structures. The performance of DTW modeling was compared to Hidden Markov Models (HMM) that were used for the same four approaches (VoWS, NPDT, FSS, NFSS). The average error rate (AER) was the lowest for DTW with NPDT (AER=0.287) and scored better than HMM with FSS (AER=0.473), which was the best result for HMM. The DTW modeling was faster than HMM for all four approaches. This technique can be used for computer-assisted pronunciation training systems that can work with a relatively small training speech corpus (less than 20 patterns per word) to support speech therapy at home. Copyright © 2015 Elsevier Ltd. All rights reserved.
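The DTW distance underlying the phoneme models above can be sketched in a few lines; the feature sequences here are invented toy values, not the paper's acoustic features:

```python
import numpy as np

def dtw_distance(a, b):
    """Classic dynamic-programming DTW between two 1-D feature sequences."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # Extend the cheapest of the three admissible warping steps
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

# A distorted production of a phoneme "template" scores a larger distance
template = [0.0, 1.0, 2.0, 1.0, 0.0]
good = [0.0, 1.1, 1.9, 1.0, 0.1]
bad = [0.0, 0.2, 0.3, 0.2, 0.0]
print(dtw_distance(template, good), dtw_distance(template, bad))
```

Thresholding such distances against per-phoneme templates is the basic mechanism behind approaches like NPDT.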
Scipal, K.; Holmes, T.; de Jeu, R.; Naeimi, V.; Wagner, W.
2008-12-01
In the last few years, research has made significant progress towards operational soil moisture remote sensing, which has led to the availability of several global data sets. For an optimal use of these data, an accurate estimate of the error structure is an important condition. To address the validation problem we introduce the triple collocation error estimation technique. The triple collocation technique is a powerful tool to estimate the root mean square error while simultaneously solving for systematic differences in the climatologies of a set of three independent data sources. We evaluate the method by applying it to a passive microwave (TRMM radiometer) derived, an active microwave (ERS-2 scatterometer) derived, and a modelled (ERA-Interim reanalysis) soil moisture data set. The results suggest that the method provides realistic error estimates.
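A minimal numerical sketch of triple collocation in covariance notation, assuming three unbiased, mutually independent observing systems (all values invented; real applications must first rescale the climatologies):

```python
import numpy as np

rng = np.random.default_rng(1)
truth = rng.normal(0.25, 0.08, size=50_000)  # synthetic "true" soil moisture

# Three independent observing systems with unknown error levels (invented)
sx, sy, sz = 0.02, 0.04, 0.03
x = truth + rng.normal(0.0, sx, truth.size)
y = truth + rng.normal(0.0, sy, truth.size)
z = truth + rng.normal(0.0, sz, truth.size)

def tc_rmse(a, b, c):
    """RMSE of data set `a` by triple collocation (covariance notation)."""
    C = np.cov(np.vstack([a, b, c]))
    # Error variance of a = Var(a) - Cov(a,b) * Cov(a,c) / Cov(b,c)
    return np.sqrt(C[0, 0] - C[0, 1] * C[0, 2] / C[1, 2])

print(tc_rmse(x, y, z), tc_rmse(y, x, z), tc_rmse(z, x, y))
```

Each call recovers the corresponding injected error standard deviation without ever seeing `truth`, which is the appeal of the method for validation against sparse ground data.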
Structured methods for identifying and correcting potential human errors in aviation operations
Energy Technology Data Exchange (ETDEWEB)
Nelson, W.R.
1997-10-01
Human errors have been identified as the source of approximately 60% of the incidents and accidents that occur in commercial aviation. It can be assumed that a very large number of human errors occur in aviation operations, even though in most cases the redundancies and diversities built into the design of aircraft systems prevent the errors from leading to serious consequences. In addition, when it is acknowledged that many system failures have their roots in human errors that occur in the design phase, it becomes apparent that the identification and elimination of potential human errors could significantly decrease the risks of aviation operations. This will become even more critical during the design of advanced automation-based aircraft systems as well as next-generation systems for air traffic management. Structured methods to identify and correct potential human errors in aviation operations have been developed and are currently undergoing testing at the Idaho National Engineering and Environmental Laboratory (INEEL).
International Nuclear Information System (INIS)
Trehan, Sumeet; Carlberg, Kevin T.; Durlofsky, Louis J.
2017-01-01
A machine learning-based framework for modeling the error introduced by surrogate models of parameterized dynamical systems is proposed. The framework entails the use of high-dimensional regression techniques (e.g., random forests and LASSO) to map a large set of inexpensively computed "error indicators" (i.e., features) produced by the surrogate model at a given time instance to a prediction of the surrogate-model error in a quantity of interest (QoI). This eliminates the need for the user to hand-select a small number of informative features. The methodology requires a training set of parameter instances at which the time-dependent surrogate-model error is computed by simulating both the high-fidelity and surrogate models. Using these training data, the method first determines regression-model locality (via classification or clustering) and subsequently constructs a "local" regression model to predict the time-instantaneous error within each identified region of feature space. We consider 2 uses for the resulting error model: (1) as a correction to the surrogate-model QoI prediction at each time instance and (2) as a way to statistically model arbitrary functions of the time-dependent surrogate-model error (e.g., time-integrated errors). We then apply the proposed framework to model errors in reduced-order models of nonlinear oil-water subsurface flow simulations, with time-varying well-control (bottom-hole pressure) parameters. The reduced-order models used in this work entail application of trajectory piecewise linearization in conjunction with proper orthogonal decomposition. Moreover, when the first use of the method is considered, numerical experiments demonstrate consistent improvement in accuracy in the time-instantaneous QoI prediction relative to the original surrogate model, across a large number of test cases. When the second use is considered, results show that the proposed method provides accurate statistical predictions of the time- and well
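The indicator-to-error regression idea can be sketched with a dependency-free stand-in. The paper uses random forests and LASSO on surrogate-model features; here an ordinary least-squares fit on synthetic indicators, with every relationship and value invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic stand-in: the "true" surrogate-model QoI error depends
# nonlinearly on two cheap error indicators (relationships invented)
T = 2_000
ind = rng.uniform(0.0, 1.0, size=(T, 2))
true_err = 0.8 * ind[:, 0] ** 2 + 0.3 * ind[:, 1] + rng.normal(0.0, 0.01, T)

# Regression error model on polynomial features; least squares keeps
# the sketch dependency-free in place of random forests / LASSO
X = np.column_stack([np.ones(T), ind, ind**2])
train, test = slice(0, 1_500), slice(1_500, None)
coef, *_ = np.linalg.lstsq(X[train], true_err[train], rcond=None)

# Use (1) from the abstract: predict the error on held-out instances,
# which could then be subtracted from the surrogate QoI as a correction
pred = X[test] @ coef
print(np.sqrt(np.mean((pred - true_err[test]) ** 2)))
```

The held-out prediction error lands near the injected noise level, mirroring how a well-trained error model tightens the surrogate's QoI prediction.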
Energy Technology Data Exchange (ETDEWEB)
Korn, E L
1978-08-01
This thesis is concerned with the effect of classification error on contingency tables being analyzed with hierarchical log-linear models (independence in an I x J table is a particular hierarchical log-linear model). Hierarchical log-linear models provide a concise way of describing independence and partial independences between the different dimensions of a contingency table. The structure of classification errors on contingency tables that will be used throughout is defined. This structure is a generalization of Bross' model, but here attention is paid to the different possible ways a contingency table can be sampled. Hierarchical log-linear models and the effect of misclassification on them are described. Some models, such as independence in an I x J table, are preserved by misclassification, i.e., the presence of classification error will not change the fact that a specific table belongs to that model. Other models are not preserved by misclassification; this implies that the usual tests to see if a sampled table belongs to that model will not be of the right significance level. A simple criterion will be given to determine which hierarchical log-linear models are preserved by misclassification. Maximum likelihood theory is used to perform log-linear model analysis in the presence of known misclassification probabilities. It will be shown that the Pitman asymptotic power of tests between different hierarchical log-linear models is reduced because of the misclassification. A general expression will be given for the increase in sample size necessary to compensate for this loss of power and some specific cases will be examined.
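One preservation result is easy to see numerically: when misclassification acts independently on the row and column variables (a Bross-style model, sketched here with invented probabilities), an independent table remains independent, i.e., rank one:

```python
import numpy as np

# True independent 3x2 table of cell probabilities: P = r c^T
r = np.array([0.5, 0.3, 0.2])
c = np.array([0.6, 0.4])
P = np.outer(r, c)

# Column-stochastic misclassification matrices acting independently on
# the row and column classifications (probabilities invented)
A = np.array([[0.90, 0.05, 0.05],
              [0.05, 0.90, 0.10],
              [0.05, 0.05, 0.85]])
B = np.array([[0.95, 0.10],
              [0.05, 0.90]])

P_obs = A @ P @ B.T  # table observed under classification error
# P_obs = (A r)(B c)^T is still an outer product, hence still independent
print(np.linalg.matrix_rank(P_obs), P_obs.sum())
```

If the misclassification were jointly dependent on both dimensions, the observed table would generally not factor this way, which is why some models are not preserved.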
Regularized multivariate regression models with skew-t error distributions
Chen, Lianfu
2014-06-01
We consider regularization of the parameters in multivariate linear regression models with the errors having a multivariate skew-t distribution. An iterative penalized likelihood procedure is proposed for constructing sparse estimators of both the regression coefficient and inverse scale matrices simultaneously. The sparsity is introduced through penalizing the negative log-likelihood by adding L1-penalties on the entries of the two matrices. Taking advantage of the hierarchical representation of skew-t distributions, and using the expectation conditional maximization (ECM) algorithm, we reduce the problem to penalized normal likelihood and develop a procedure to minimize the ensuing objective function. Using a simulation study the performance of the method is assessed, and the methodology is illustrated using a real data set with a 24-dimensional response vector. © 2014 Elsevier B.V.
Error modelling of quantum Hall array resistance standards
Marzano, Martina; Oe, Takehiko; Ortolano, Massimo; Callegaro, Luca; Kaneko, Nobu-Hisa
2018-04-01
Quantum Hall array resistance standards (QHARSs) are integrated circuits composed of interconnected quantum Hall effect elements that allow the realization of virtually arbitrary resistance values. In recent years, techniques were presented to efficiently design QHARS networks. An open problem is that of the evaluation of the accuracy of a QHARS, which is affected by contact and wire resistances. In this work, we present a general and systematic procedure for the error modelling of QHARSs, which is based on modern circuit analysis techniques and Monte Carlo evaluation of the uncertainty. As a practical example, this method of analysis is applied to the characterization of a 1 MΩ QHARS developed by the National Metrology Institute of Japan. Software tools are provided to apply the procedure to other arrays.
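The Monte Carlo idea can be illustrated with a deliberately simplified series array. Real QHARS designs use multiple-series connections that suppress wire and contact effects far below this toy estimate, and all resistance values here are invented:

```python
import numpy as np

rng = np.random.default_rng(42)

R_K = 25812.807  # von Klitzing constant, ohms
R_H = R_K / 2.0  # i = 2 plateau resistance of a single Hall element

# Toy array: N elements in plain series; each interconnection contributes
# a small series wire resistance (uniform 0-100 mOhm, invented values)
N = 10
nominal = N * R_H

n_mc = 100_000
wire = rng.uniform(0.0, 0.1, size=(n_mc, N))
realized = nominal + wire.sum(axis=1)

# Monte Carlo distribution of the relative deviation from nominal
rel_error = (realized - nominal) / nominal
print(f"mean {rel_error.mean():.2e}, std {rel_error.std():.2e}")
```

The output is a relative-error distribution rather than a single worst-case bound, which is the practical appeal of Monte Carlo uncertainty evaluation for such networks.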
Volcanic ash modeling with the NMMB-MONARCH-ASH model: quantification of offline modeling errors
Marti, Alejandro; Folch, Arnau
2018-03-01
Volcanic ash modeling systems are used to simulate the atmospheric dispersion of volcanic ash and to generate forecasts that quantify the impacts from volcanic eruptions on infrastructures, air quality, aviation, and climate. The efficiency of response and mitigation actions is directly associated with the accuracy of the volcanic ash cloud detection and modeling systems. Operational forecasts build on offline coupled modeling systems in which meteorological variables are updated at the specified coupling intervals. Despite the concerns from other communities regarding the accuracy of this strategy, the quantification of the systematic errors and shortcomings associated with the offline modeling systems has received no attention. This paper employs the NMMB-MONARCH-ASH model to quantify these errors by employing different quantitative and categorical evaluation scores. The skills of the offline coupling strategy are compared against those from an online forecast considered to be the best estimate of the true outcome. Case studies are considered for a synthetic eruption with constant eruption source parameters and for two historical events, which suitably illustrate the severe aviation disruptive effects of European (2010 Eyjafjallajökull) and South American (2011 Cordón Caulle) volcanic eruptions. Evaluation scores indicate that systematic errors due to the offline modeling are of the same order of magnitude as those associated with the source term uncertainties. In particular, traditional offline forecasts employed in operational model setups can result in significant uncertainties, failing to reproduce, in the worst cases, up to 45-70 % of the ash cloud of an online forecast. These inconsistencies are anticipated to be even more relevant in scenarios in which the meteorological conditions change rapidly in time. The outcome of this paper encourages operational groups responsible for real-time advisories for aviation to consider employing computationally
Structural Damage Detection Using Frequency Domain Error Localization.
1994-12-01
Hand-eye calibration using a target registration error model.
Chen, Elvis C S; Morgan, Isabella; Jayarathne, Uditha; Ma, Burton; Peters, Terry M
2017-10-01
Surgical cameras are prevalent in modern operating theatres and are often used as a surrogate for direct vision. Visualisation techniques (e.g. image fusion) made possible by tracking the camera require accurate hand-eye calibration between the camera and the tracking system. The authors introduce the concept of 'guided hand-eye calibration', where calibration measurements are facilitated by a target registration error (TRE) model. They formulate hand-eye calibration as a registration problem between homologous point-line pairs. For each measurement, the position of a monochromatic ball-tip stylus (a point) and its projection onto the image (a line) is recorded, and the TRE of the resulting calibration is predicted using a TRE model. The TRE model is then used to guide the placement of the calibration tool, so that the subsequent measurement minimises the predicted TRE. Assessing TRE after each measurement produces accurate calibration using a minimal number of measurements. As a proof of principle, they evaluated guided calibration using a webcam and an endoscopic camera. Their endoscopic camera results suggest that millimetre TRE is achievable when at least 15 measurements are acquired with the tracker sensor ∼80 cm away on the laparoscope handle for a target ∼20 cm away from the camera.
Local concurrent error detection and correction in data structures using virtual backpointers
Li, Chung-Chi Jim; Chen, Paul Peichuan; Fuchs, W. Kent
1989-01-01
A new technique, based on virtual backpointers, for local concurrent error detection and correction in linked data structures is presented. Two new data structures, the Virtual Double Linked List, and the B-tree with Virtual Backpointers, are described. For these structures, double errors can be detected in O(1) time and errors detected during forward moves can be corrected in O(1) time. The application of a concurrent auditor process to data structure error detection and correction is analyzed, and an implementation is described, to determine the effect on mean time to failure of a multi-user shared database system. The implementation utilizes a Sequent shared memory multiprocessor system operating on a shared database of Virtual Double Linked Lists.
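A rough sketch of the virtual-backpointer idea, using pool indices as stand-ins for memory addresses (the published structures differ in detail; this only shows the O(1) per-step consistency check during a forward traversal):

```python
class Node:
    __slots__ = ("key", "next", "vb")
    def __init__(self, key):
        self.key = key
        self.next = None  # forward pointer (index into the node pool)
        self.vb = None    # virtual backpointer: prev_index XOR next_index

def build(keys):
    """Build a circular virtual double linked list in an indexable pool."""
    pool = [Node(k) for k in keys]
    for i, node in enumerate(pool):
        nxt = i + 1 if i + 1 < len(pool) else 0
        prv = i - 1 if i > 0 else len(pool) - 1
        node.next = nxt
        node.vb = prv ^ nxt
    return pool

def traverse_checked(pool):
    """Forward traversal with O(1) local error detection per step:
    the prev index recovered from vb must match where we came from."""
    prev, cur = len(pool) - 1, 0
    for _ in range(len(pool)):
        node = pool[cur]
        if node.vb ^ node.next != prev:  # structural error detected
            return False
        prev, cur = cur, node.next
    return True

pool = build(["a", "b", "c", "d"])
assert traverse_checked(pool)
pool[2].next = 0  # corrupt a forward pointer
assert not traverse_checked(pool)
```

Storing `prev XOR next` instead of an explicit back pointer gives the redundancy needed for detection without an extra field per node.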
Modeling Fluid Structure Interaction
National Research Council Canada - National Science Library
Benaroya, Haym
2000-01-01
The principal goal of this program is on integrating experiments with analytical modeling to develop physics-based reduced-order analytical models of nonlinear fluid-structure interactions in articulated naval platforms...
Hebbian errors in learning: an analysis using the Oja model.
Rădulescu, Anca; Cox, Kingsley; Adams, Paul
2009-06-21
Recent work on long term potentiation in brain slices shows that Hebb's rule is not completely synapse-specific, probably due to intersynapse diffusion of calcium or other factors. We previously suggested that such errors in Hebbian learning might be analogous to mutations in evolution. We examine this proposal quantitatively, extending the classical Oja unsupervised model of learning by a single linear neuron to include Hebbian inspecificity. We introduce an error matrix E, which expresses possible crosstalk between updating at different connections. When there is no inspecificity, this gives the classical result of convergence to the first principal component of the input distribution (PC1). We show the modified algorithm converges to the leading eigenvector of the matrix EC, where C is the input covariance matrix. In the most biologically plausible case when there are no intrinsically privileged connections, E has diagonal elements Q and off-diagonal elements (1-Q)/(n-1), where Q, the quality, is expected to decrease with the number of inputs n and with a synaptic parameter b that reflects synapse density, calcium diffusion, etc. We study the dependence of the learning accuracy on b, n and the amount of input activity or correlation (analytically and computationally). We find that accuracy increases (learning becomes gradually less useful) with increases in b, particularly for intermediate (i.e., biologically realistic) correlation strength, although some useful learning always occurs up to the trivial limit Q=1/n. We discuss the relation of our results to Hebbian unsupervised learning in the brain. When the mechanism lacks specificity, the network fails to learn the expected, and typically most useful, result, especially when the input correlation is weak. Hebbian crosstalk would reflect the very high density of synapses along dendrites, and inevitably degrades learning.
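The modified Oja dynamics can be reproduced in a small simulation; the covariance, quality Q, and learning rate below are invented for illustration, following the error-matrix form described in the abstract:

```python
import numpy as np

rng = np.random.default_rng(7)
n = 4

# Diagonal input covariance with a clear first principal component
C = np.diag([4.0, 2.0, 1.0, 0.5])
stds = np.sqrt(np.diag(C))

# Crosstalk matrix E: diagonal Q, off-diagonal (1 - Q)/(n - 1)
Q = 0.8
E = np.full((n, n), (1.0 - Q) / (n - 1))
np.fill_diagonal(E, Q)

w = rng.normal(size=n)
w /= np.linalg.norm(w)
eta = 0.005
for _ in range(100_000):
    x = rng.normal(size=n) * stds       # sample x ~ N(0, C)
    y = w @ x
    w += eta * E @ (y * x - y**2 * w)   # Oja update filtered by crosstalk E

# The modified rule tracks the leading eigenvector of E C, not of C
vals, vecs = np.linalg.eig(E @ C)
lead = np.real(vecs[:, np.argmax(np.real(vals))])
print(abs(w @ lead) / np.linalg.norm(w))  # near 1 when aligned
```

With Q = 1 (no crosstalk) E is the identity and the classical convergence to PC1 of C is recovered.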
Generalized multiplicative error models: Asymptotic inference and empirical analysis
Li, Qian
This dissertation consists of two parts. The first part focuses on extended Multiplicative Error Models (MEM) that include two extreme cases for nonnegative series. These extreme cases are common phenomena in high-frequency financial time series. The Location MEM(p,q) model incorporates a location parameter so that the series are required to have positive lower bounds. The estimator for the location parameter turns out to be the minimum of all the observations and is shown to be consistent. The second case captures the nontrivial fraction of zero outcomes feature in a series and combines a so-called Zero-Augmented general F distribution with linear MEM(p,q). Under certain strict stationary and moment conditions, we establish a consistency and asymptotic normality of the semiparametric estimation for these two new models. The second part of this dissertation examines the differences and similarities between trades in the home market and trades in the foreign market of cross-listed stocks. We exploit the multiplicative framework to model trading duration, volume per trade and price volatility for Canadian shares that are cross-listed in the New York Stock Exchange (NYSE) and the Toronto Stock Exchange (TSX). We explore the clustering effect, interaction between trading variables, and the time needed for price equilibrium after a perturbation for each market. The clustering effect is studied through the use of univariate MEM(1,1) on each variable, while the interactions among duration, volume and price volatility are captured by a multivariate system of MEM(p,q). After estimating these models by a standard QMLE procedure, we exploit the Impulse Response function to compute the calendar time for a perturbation in these variables to be absorbed into price variance, and use common statistical tests to identify the difference between the two markets in each aspect. These differences are of considerable interest to traders, stock exchanges and policy makers.
He, Minxue; Hogue, Terri S.; Franz, Kristie J.; Margulis, Steven A.; Vrugt, Jasper A.
2011-07-01
The current study evaluates the impacts of various sources of uncertainty involved in hydrologic modeling on parameter behavior and regionalization utilizing different Bayesian likelihood functions and the Differential Evolution Adaptive Metropolis (DREAM) algorithm. The developed likelihood functions differ in their underlying assumptions and treatment of error sources. We apply the developed method to a snow accumulation and ablation model (National Weather Service SNOW17) and generate parameter ensembles to predict snow water equivalent (SWE). Observational data include precipitation and air temperature forcing along with SWE measurements from 24 sites with diverse hydroclimatic characteristics. A multiple linear regression model is used to construct regionalization relationships between model parameters and site characteristics. Results indicate that model structural uncertainty has the largest influence on SNOW17 parameter behavior. Precipitation uncertainty is the second largest source of uncertainty, showing greater impact at wetter sites. Measurement uncertainty in SWE tends to have little impact on the final model parameters and resulting SWE predictions. Considering all sources of uncertainty, parameters related to air temperature and snowfall fraction exhibit the strongest correlations to site characteristics. Parameters related to the length of the melting period also show high correlation to site characteristics. Finally, model structural uncertainty and precipitation uncertainty dramatically alter parameter regionalization relationships in comparison to cases where only uncertainty in model parameters or output measurements is considered. Our results demonstrate that accurate treatment of forcing, parameter, model structural, and calibration data errors is critical for deriving robust regionalization relationships.
Selecting Human Error Types for Cognitive Modelling and Simulation
Mioch, T.; Osterloh, J.P.; Javaux, D.
2010-01-01
This paper presents a method that has enabled us to make a selection of error types and error production mechanisms relevant to the HUMAN European project, and discusses the reasons underlying those choices. We claim that this method has the advantage that it is very exhaustive in determining the
Assessment of errors and uncertainty patterns in GIA modeling
DEFF Research Database (Denmark)
Barletta, Valentina Roberta; Spada, G.
, such as time-evolving shorelines and paleo-coastlines. In this study we quantify these uncertainties and their propagation in GIA response using a Monte Carlo approach to obtain spatio-temporal patterns of GIA errors. A direct application is the error estimates in ice mass balance in Antarctica and Greenland...
Learning from Errors: A Model of Individual Processes
Tulis, Maria; Steuer, Gabriele; Dresel, Markus
2016-01-01
Errors bear the potential to improve knowledge acquisition, provided that learners are able to deal with them in an adaptive and reflexive manner. However, learners experience a host of different--often impeding or maladaptive--emotional and motivational states in the face of academic errors. Research has made few attempts to develop a theory that…
Locatelli, R.; Bousquet, P.; Chevallier, F.; Fortems-Cheney, A.; Szopa, S.; Saunois, M.; Agusti-Panareda, A.; Bergmann, D.; Bian, H.; Cameron-Smith, P.; Chipperfield, M. P.; Gloor, E.; Houweling, S.; Kawa, S. R.; Krol, M.; Patra, P. K.; Prinn, R. G.; Rigby, M.; Saito, R.; Wilson, C.
2013-10-01
A modelling experiment has been conceived to assess the impact of transport model errors on methane emissions estimated in an atmospheric inversion system. Synthetic methane observations, obtained from 10 different model outputs from the international TransCom-CH4 model inter-comparison exercise, are combined with a prior scenario of methane emissions and sinks, and integrated into the three-component PYVAR-LMDZ-SACS (PYthon VARiational-Laboratoire de Météorologie Dynamique model with Zooming capability-Simplified Atmospheric Chemistry System) inversion system to produce 10 different methane emission estimates at the global scale for the year 2005. The same methane sinks, emissions and initial conditions have been applied to produce the 10 synthetic observation datasets. The same inversion set-up (statistical errors, prior emissions, inverse procedure) is then applied to derive flux estimates by inverse modelling. Consequently, only differences in the modelling of atmospheric transport may cause differences in the estimated fluxes. In our framework, we show that transport model errors lead to a discrepancy of 27 Tg yr-1 at the global scale, representing 5% of total methane emissions. At continental and annual scales, transport model errors are proportionally larger than at the global scale, with errors ranging from 36 Tg yr-1 in North America to 7 Tg yr-1 in Boreal Eurasia (from 23 to 48%, respectively). At the model grid-scale, the spread of inverse estimates can reach 150% of the prior flux. Therefore, transport model errors contribute significantly to overall uncertainties in emission estimates by inverse modelling, especially when small spatial scales are examined. Sensitivity tests have been carried out to estimate the impact of the measurement network and the advantage of higher horizontal resolution in transport models. The large differences found between methane flux estimates inferred in these different configurations highly question the consistency of
Koepke, C.; Irving, J.; Roubinet, D.
2014-12-01
Geophysical methods have gained much interest in hydrology over the past two decades because of their ability to provide estimates of the spatial distribution of subsurface properties at a scale that is often relevant to key hydrological processes. Because of an increased desire to quantify uncertainty in hydrological predictions, many hydrogeophysical inverse problems have recently been posed within a Bayesian framework, such that estimates of hydrological properties and their corresponding uncertainties can be obtained. With the Bayesian approach, it is often necessary to make significant approximations to the associated hydrological and geophysical forward models such that stochastic sampling from the posterior distribution, for example using Markov-chain-Monte-Carlo (MCMC) methods, is computationally feasible. These approximations lead to model structural errors, which, so far, have not been properly treated in hydrogeophysical inverse problems. Here, we study the inverse problem of estimating unsaturated hydraulic properties, namely the van Genuchten-Mualem (VGM) parameters, in a layered subsurface from time-lapse, zero-offset-profile (ZOP) ground penetrating radar (GPR) data, collected over the course of an infiltration experiment. In particular, we investigate the effects of assumptions made for computational tractability of the stochastic inversion on model prediction errors as a function of depth and time. These assumptions are that (i) infiltration is purely vertical and can be modeled by the 1D Richards equation, and (ii) the petrophysical relationship between water content and relative dielectric permittivity is known. Results indicate that model errors for this problem are far from Gaussian and independent and identically distributed, which has been the common assumption in previous efforts in this domain. In order to develop a more appropriate likelihood formulation, we use (i) a stochastic description of the model error that is obtained through
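The stochastic sampling step referred to above can be illustrated with a minimal Metropolis-Hastings random walk. A one-dimensional Gaussian stands in for the actual hydrogeophysical posterior; that target, the step size, and the chain length are assumptions for illustration only:

```python
import numpy as np

def metropolis(log_post, x0, n_steps, step=0.5, rng=None):
    """Random-walk Metropolis sampler for a 1-D log-posterior."""
    rng = np.random.default_rng(0) if rng is None else rng
    samples, x, lp = [], x0, log_post(x0)
    for _ in range(n_steps):
        x_new = x + step * rng.standard_normal()   # propose
        lp_new = log_post(x_new)
        if np.log(rng.random()) < lp_new - lp:     # accept/reject
            x, lp = x_new, lp_new
        samples.append(x)
    return np.array(samples)

# Stand-in posterior: N(mean=2, sd=1), up to an additive constant
log_post = lambda x: -0.5 * (x - 2.0) ** 2
chain = metropolis(log_post, x0=0.0, n_steps=5000)
```

In a real hydrogeophysical inversion the log-posterior would combine the forward model, the petrophysical relation, and the likelihood of the GPR travel times; the accept/reject mechanics are unchanged.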
The Role of Human Error in Design, Construction, and Reliability of Marine Structures.
1994-10-01
Welding Research Council: Dr. Ramswar Bhattacharyya; Dr. Martin Prager. Canada Centre for Minerals and Energy; American Iron and Steel Institute. ...construction of ship structures. Figure 1.2 - Human errors. Can human and organizational error (HOE) be quantified? Yes, if and as desirable, HOE can be. Organization Error Classification: ...could be quality control problems such as excessive misalignments or use of lower-grade steel that would result in systematically lowering the
Students’ errors in solving combinatorics problems observed from the characteristics of RME modeling
Meika, I.; Suryadi, D.; Darhim
2018-01-01
This article was written based on the learning evaluation results of students' errors in solving combinatorics problems observed from the characteristics of Realistic Mathematics Education (RME), that is, modeling. A descriptive method was employed, involving 55 students from two international-based pilot state senior high schools in Banten. The findings of the study suggested that the students still committed errors in simplifying the problem as much as 46%; errors in making the mathematical model (horizontal mathematization) as much as 60%; errors in finishing the mathematical model (vertical mathematization) as much as 65%; and errors in interpretation as well as validation as much as 66%.
Maggioni, V.; Anagnostou, E. N.; Reichle, R. H.
2013-01-01
The contribution of rainfall forcing errors relative to model (structural and parameter) uncertainty in the prediction of soil moisture is investigated by integrating the NASA Catchment Land Surface Model (CLSM), forced with hydro-meteorological data, in the Oklahoma region. Rainfall-forcing uncertainty is introduced using a stochastic error model that generates ensemble rainfall fields from satellite rainfall products. The ensemble satellite rain fields are propagated through CLSM to produce soil moisture ensembles. Errors in CLSM are modeled with two different approaches: either by perturbing model parameters (representing model parameter uncertainty) or by adding randomly generated noise (representing model structure and parameter uncertainty) to the model prognostic variables. Our findings highlight that the method currently used in the NASA GEOS-5 Land Data Assimilation System to perturb CLSM variables poorly describes the uncertainty in the predicted soil moisture, even when combined with rainfall model perturbations. On the other hand, by adding model parameter perturbations to rainfall forcing perturbations, a better characterization of uncertainty in soil moisture simulations is observed. Specifically, an analysis of the rank histograms shows that the most consistent ensemble of soil moisture is obtained by combining rainfall and model parameter perturbations. When rainfall forcing and model prognostic perturbations are added, the rank histogram shows a U-shape at the domain average scale, which corresponds to a lack of variability in the forecast ensemble. The more accurate estimation of the soil moisture prediction uncertainty obtained by combining rainfall and parameter perturbations is encouraging for the application of this approach in ensemble data assimilation systems.
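The rank-histogram diagnostic used in the analysis above can be sketched as follows; the observations and ensembles are synthetic stand-ins, not CLSM output. A roughly flat histogram indicates a statistically consistent ensemble, while a U-shape indicates the under-dispersion described in the abstract:

```python
import numpy as np

def rank_histogram(obs, ensemble):
    """obs: (n,), ensemble: (n, m) -> counts over the m+1 rank bins."""
    ranks = (ensemble < obs[:, None]).sum(axis=1)   # rank of obs in ensemble
    return np.bincount(ranks, minlength=ensemble.shape[1] + 1)

rng = np.random.default_rng(1)
n, m = 5000, 9
obs = rng.standard_normal(n)                 # synthetic "truth" observations
ens = rng.standard_normal((n, m))            # consistent ensemble: same distribution

counts_flat = rank_histogram(obs, ens)       # roughly flat histogram
counts_under = rank_histogram(obs, 0.3 * ens)  # under-dispersed: U-shape
```

The under-dispersed ensemble piles counts into the extreme rank bins because the observation frequently falls outside the too-narrow ensemble spread.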
Application of grey theory in identification model of human error criticality
International Nuclear Information System (INIS)
Li Pengcheng; Zhang Li; Wang Yiqun
2009-01-01
The identification model for human error criticality is constructed on the basis of the principles of Failure Mode and Effects Analysis. It consists of three decision-making factors, namely the scale of the probability of occurrence of a human error mode, the scale of the probability of the error's effect, and the scale of error-consequence criticality. Because it is difficult to assign a weight to each factor, this paper employs grey theory to identify human error criticality, which provides a new viewpoint for prioritizing error criticality and origin and overcomes the problem of assigning actual weights. (authors)
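A minimal grey relational analysis of the kind the paper builds on might look like the following sketch. The three factor scales and the scores are invented, and the formulation (deviation from an ideal reference, distinguishing coefficient rho = 0.5) is one common textbook variant, not necessarily the authors':

```python
import numpy as np

def grey_relational_grade(data, rho=0.5):
    """data: (n_modes, n_factors), larger-is-worse criticality scales."""
    norm = (data - data.min(0)) / (data.max(0) - data.min(0))  # scale to [0, 1]
    ref = norm.max(0)                         # ideal (most critical) reference
    delta = np.abs(norm - ref)                # deviation sequences
    coef = (delta.min() + rho * delta.max()) / (delta + rho * delta.max())
    return coef.mean(axis=1)                  # grade: mean coefficient per mode

# Three hypothetical error modes scored on occurrence, effect probability,
# and consequence criticality (all values illustrative)
modes = np.array([[0.9, 0.8, 0.9],    # mode A: high on all scales
                  [0.2, 0.3, 0.1],    # mode B: low on all scales
                  [0.6, 0.5, 0.4]])   # mode C: intermediate
grades = grey_relational_grade(modes)
```

Modes are then ranked by grade, sidestepping any explicit weight assignment across the three factors.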
Modeling of alpha-particle-induced soft error rate in DRAM
International Nuclear Information System (INIS)
Shin, H.
1999-01-01
Alpha-particle-induced soft error in 256M DRAM was numerically investigated. A unified model for alpha-particle-induced charge collection and a soft-error-rate simulator (SERS) were developed. The author investigated the soft error rate of 256M DRAM and identified the bit-bar mode as one of the dominant modes for soft error. In addition, for the first time, it was found that trench-oxide depth has a significant influence on soft error rate, and it should be determined by the tradeoff between soft error rate and cell-to-cell isolation characteristics.
Dreano, Denis
2017-04-05
Specification and tuning of errors from dynamical models are important issues in data assimilation. In this work, we propose an iterative expectation-maximisation (EM) algorithm to estimate the model error covariances using classical extended and ensemble versions of the Kalman smoother. We show that, for additive model errors, the estimate of the error covariance converges. We also investigate other forms of model error, such as parametric or multiplicative errors. We show that additive Gaussian model error is able to compensate for non-additive sources of error in the algorithms we propose. We also demonstrate the limitations of the extended version of the algorithm and recommend the use of the more robust and flexible ensemble version. This article is a proof of concept of the methodology with the Lorenz-63 attractor. We developed an open-source Python library to enable future users to apply the algorithm to their own nonlinear dynamical models.
De Sá Teixeira, Nuno Alexandre
2014-12-01
Given its conspicuous nature, gravity has been acknowledged by several research lines as a prime factor in structuring the spatial perception of one's environment. One such line of enquiry has focused on errors in spatial localization aimed at the vanishing location of moving objects - it has been systematically reported that humans mislocalize spatial positions forward, in the direction of motion (representational momentum) and downward in the direction of gravity (representational gravity). Moreover, spatial localization errors were found to evolve dynamically with time in a pattern congruent with an anticipated trajectory (representational trajectory). The present study attempts to ascertain the degree to which vestibular information plays a role in these phenomena. Human observers performed a spatial localization task while tilted to varying degrees and referring to the vanishing locations of targets moving along several directions. A Fourier decomposition of the obtained spatial localization errors revealed that although spatial errors were increased "downward" mainly along the body's longitudinal axis (idiotropic dominance), the degree of misalignment between the latter and physical gravity modulated the time course of the localization responses. This pattern is surmised to reflect increased uncertainty about the internal model when faced with conflicting cues regarding the perceived "downward" direction.
Assessment of errors and uncertainty patterns in GIA modeling
DEFF Research Database (Denmark)
Barletta, Valentina Roberta; Spada, G.
2012-01-01
During the last decade many efforts have been devoted to the assessment of global sea level rise and to the determination of the mass balance of continental ice sheets. In this context, the important role of glacial-isostatic adjustment (GIA) has been clearly recognized. Yet, in many cases only one......, such as time-evolving shorelines and paleo coastlines. In this study we quantify these uncertainties and their propagation in GIA response using a Monte Carlo approach to obtain spatio-temporal patterns of GIA errors. A direct application is the error estimates in ice mass balance in Antarctica and Greenland...... due to GIA. GIA errors are also important in the far field of previously glaciated areas and in the time evolution of global indicators. In this regard we also account for other possible errors sources which can impact global indicators like the sea level history related to GIA....
Bayesian modeling of measurement error in predictor variables using item response theory
Fox, Gerardus J.A.; Glas, Cornelis A.W.
2000-01-01
This paper focuses on handling measurement error in predictor variables using item response theory (IRT). Measurement error is of great importance in the assessment of theoretical constructs, such as intelligence or the school climate. Measurement error is modeled by treating the predictors as unobserved
Error sources in atomic force microscopy for dimensional measurements: Taxonomy and modeling
DEFF Research Database (Denmark)
Marinello, F.; Voltan, A.; Savio, E.
2010-01-01
: scanning system, tip-surface interaction, environment, and data processing. The discussed errors include scaling effects, squareness errors, hysteresis, creep, tip convolution, and thermal drift. A mathematical model of the measurement system is eventually described, as a reference basis for errors...
Chegwidden, O.; Nijssen, B.; Pytlak, E.
2017-12-01
Any model simulation has errors, including errors in meteorological data, process understanding, model structure, and model parameters. These errors may express themselves as bias, timing lags, and differences in sensitivity between the model and the physical world. The evaluation and handling of these errors can greatly affect the legitimacy, validity and usefulness of the resulting scientific product. In this presentation we will discuss a case study of handling and communicating model errors during the development of a hydrologic climate change dataset for the Pacific Northwestern United States. The dataset was the result of a four-year collaboration between the University of Washington, Oregon State University, the Bonneville Power Administration, the United States Army Corps of Engineers and the Bureau of Reclamation. Along the way, the partnership facilitated the discovery of multiple systematic errors in the streamflow dataset. Through an iterative review process, some of those errors could be resolved. For the errors that remained, honest communication of the shortcomings promoted the dataset's legitimacy. Thoroughly explaining errors also improved ways in which the dataset would be used in follow-on impact studies. Finally, we will discuss the development of the "streamflow bias-correction" step often applied to climate change datasets that will be used in impact modeling contexts. We will describe the development of a series of bias-correction techniques through close collaboration among universities and stakeholders. Through that process, both universities and stakeholders learned about the others' expectations and workflows. This mutual learning process allowed for the development of methods that accommodated the stakeholders' specific engineering requirements. The iterative revision process also produced a functional and actionable dataset while preserving its scientific merit. We will describe how encountering earlier techniques' pitfalls allowed us
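One common form of the streamflow bias correction discussed above is empirical quantile mapping: each simulated value is mapped onto the observed distribution by matching empirical quantiles over a training period. The sketch below uses synthetic gamma-distributed flows and a simple monotone bias; it is an illustration of the general technique, not the partnership's actual method:

```python
import numpy as np

def quantile_map(simulated, obs_train, sim_train):
    """Correct `simulated` using training-period observed/simulated CDFs."""
    # Empirical quantile of each value within the simulated training sample
    q = np.searchsorted(np.sort(sim_train), simulated) / len(sim_train)
    q = np.clip(q, 0.0, 1.0)
    # Map that quantile onto the observed training distribution
    return np.quantile(obs_train, q)

rng = np.random.default_rng(0)
obs_train = rng.gamma(2.0, 500.0, size=5000)    # synthetic "observed" flows
sim_train = obs_train * 1.3 + 200.0             # model with a wet bias
corrected = quantile_map(sim_train, obs_train, sim_train)
```

In practice the mapping is fitted per month or season and applied to future-scenario simulations, which is where the stakeholder-specific engineering requirements mentioned above come into play.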
Simulation Model for Correction and Modeling of Probe Head Errors in Five-Axis Coordinate Systems
Directory of Open Access Journals (Sweden)
Adam Gąska
2016-05-01
Simulative methods are nowadays frequently used in metrology for the simulation of measurement uncertainty and the prediction of errors that may occur during measurements. In coordinate metrology, such methods are primarily used with the typical three-axis Coordinate Measuring Machines (CMMs), and lately, also with mobile measuring systems. However, no similar simulative models have been developed for five-axis systems in spite of their growing popularity in recent years. This paper presents the numerical model of probe head errors for probe heads that are used in five-axis coordinate systems. The model is based on measurements of material standards (standard ring) and the use of the Monte Carlo method combined with select interpolation methods. The developed model may be used in conjunction with one of the known models of CMM kinematic errors to form a virtual model of a five-axis coordinate system. In addition, the developed methodology allows for the correction of identified probe head errors, thus improving measurement accuracy. Subsequent verification tests prove the correct functioning of the presented model.
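The Monte Carlo idea behind such virtual measuring systems can be sketched as follows: perturb each simulated probing with randomized probe-head errors and report the spread of the results as the uncertainty estimate. The two-point diameter measurement and the 2 um error magnitude are illustrative assumptions, not values identified from the standard ring:

```python
import numpy as np

rng = np.random.default_rng(42)
n_trials = 20000
true_diameter = 25.000                        # mm, nominal ring diameter

# Each trial: two probed points across the ring, each carrying a random
# probe-head error (assumed here: zero-mean Gaussian, 2 um standard deviation)
probe_error = rng.normal(0.0, 0.002, size=(n_trials, 2))   # mm
measured = true_diameter + probe_error.sum(axis=1)

bias = measured.mean() - true_diameter        # systematic component
u = measured.std(ddof=1)                      # standard uncertainty
expanded = 2.0 * u                            # k = 2 coverage (~95%)
```

A full virtual five-axis system would draw the per-point errors from the probe-head error map identified on the material standard, rather than from a fixed Gaussian as here.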
Spatio‐temporal analysis and modeling of short‐term wind power forecast errors
DEFF Research Database (Denmark)
Tastu, Julija; Pinson, Pierre; Kotwa, Ewelina
2011-01-01
for the spatio-temporal dependencies observed in the wind generation field. However, it is intuitively expected that, owing to the inertia of meteorological forecasting systems, a forecast error made at a given point in space and time will be related to forecast errors at other points in space in the following...... period. The existence of such underlying correlation patterns is demonstrated and analyzed in this paper, considering the case-study of western Denmark. The effects of prevailing wind speed and direction on autocorrelation and cross-correlation patterns are thoroughly described. For a flat terrain region...... of small size like western Denmark, significant correlation between the various zones is observed for time delays up to 5 h. Wind direction is shown to play a crucial role, while the effect of wind speed is more complex. Nonlinear models permitting capture of the interdependence structure of wind power......
Model-observer similarity, error modeling and social learning in rhesus macaques.
Directory of Open Access Journals (Sweden)
Elisabetta Monfardini
Full Text Available Monkeys readily learn to discriminate between rewarded and unrewarded items or actions by observing their conspecifics. However, they do not systematically learn from humans. Understanding what makes human-to-monkey transmission of knowledge work or fail could help identify mediators and moderators of social learning that operate regardless of language or culture, and transcend inter-species differences. Do monkeys fail to learn when human models show a behavior too dissimilar from the animals' own, or when they show a faultless performance devoid of error? To address this question, six rhesus macaques trained to find which object within a pair concealed a food reward were successively tested with three models: a familiar conspecific, a 'stimulus-enhancing' human actively drawing the animal's attention to one object of the pair without actually performing the task, and a 'monkey-like' human performing the task in the same way as the monkey model did. Reward was manipulated to ensure that all models showed equal proportions of errors and successes. The 'monkey-like' human model improved the animals' subsequent object discrimination learning as much as a conspecific did, whereas the 'stimulus-enhancing' human model tended on the contrary to retard learning. Modeling errors rather than successes optimized learning from the monkey and 'monkey-like' models, while exacerbating the adverse effect of the 'stimulus-enhancing' model. These findings identify error modeling as a moderator of social learning in monkeys that amplifies the models' influence, whether beneficial or detrimental. By contrast, model-observer similarity in behavior emerged as a mediator of social learning, that is, a prerequisite for a model to work in the first place. The latter finding suggests that, as preverbal infants, macaques need to perceive the model as 'like-me' and that, once this condition is fulfilled, any agent can become an effective model.
De-noising of GPS structural monitoring observation error using wavelet analysis
Directory of Open Access Journals (Sweden)
Mosbeh R. Kaloop
2016-03-01
In the process of continuously monitoring a structure's state properties, such as static and dynamic responses, using the Global Positioning System (GPS), there are unavoidable errors in the observation data. These GPS errors and measurement noises are a disadvantage in precise monitoring applications because they cover up the signals that are needed. The current study aims to apply three methods that are widely used to mitigate sensor observation errors. The three methods are based on wavelet analysis, namely the principal component analysis method, the wavelet compression method, and the de-noising method. These methods are used to de-noise the GPS observation errors and to prove their performance using GPS measurements collected from the short-term monitoring system designed for the Mansoura Railway Bridge located in Egypt. The results have shown that GPS errors can effectively be removed, while the full-movement components of the structure can be extracted from the original signals using wavelet analysis.
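A single-level Haar wavelet de-noising step of the general kind applied to such GPS records can be sketched as follows. The signal is synthetic and the hard threshold is an illustrative choice, not the study's tuned setting:

```python
import numpy as np

def haar_denoise(signal, threshold):
    """One-level Haar transform, hard-threshold the details, invert."""
    s = signal[: len(signal) // 2 * 2]            # ensure even length
    approx = (s[0::2] + s[1::2]) / np.sqrt(2)     # low-pass: structure
    detail = (s[0::2] - s[1::2]) / np.sqrt(2)     # high-pass: mostly noise
    detail = np.where(np.abs(detail) < threshold, 0.0, detail)
    out = np.empty_like(s)                        # inverse Haar transform
    out[0::2] = (approx + detail) / np.sqrt(2)
    out[1::2] = (approx - detail) / np.sqrt(2)
    return out

rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 1024)
clean = np.sin(2 * np.pi * 2 * t)                 # slow structural movement
noisy = clean + rng.normal(0.0, 0.2, t.size)      # GPS observation noise
denoised = haar_denoise(noisy, threshold=0.5)
```

Practical pipelines decompose over several levels and choose the threshold from a noise estimate (e.g. the median absolute deviation of the finest details), but the transform-threshold-invert pattern is the same.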
Vašek, Jakub; Hlásná Čepková, Petra; Viehmannová, Iva; Ocelák, Martin; Cachique Huansi, Danter; Vejl, Pavel
2017-01-01
An analysis of the population structure and genetic diversity for any organism often depends on one or more molecular marker techniques. Nonetheless, these techniques are not absolutely reliable because of various sources of errors arising during the genotyping process. Thus, a complex analysis of genotyping error was carried out with the AFLP method in 169 samples of the oil seed plant Plukenetia volubilis L. from small isolated subpopulations in the Peruvian Amazon. Samples were collected in nine localities from the region of San Martin. Analysis was done in eight datasets with a genotyping error from 0 to 5%. Using eleven primer combinations, 102 to 275 markers were obtained according to the dataset. It was found that it is only possible to obtain the most reliable and robust results through a multiple-level filtering process. Genotyping error and software set up influence both the estimation of population structure and genetic diversity, where in our case population number (K) varied between 2-9 depending on the dataset and statistical method used. Surprisingly, discrepancies in K number were caused more by statistical approaches than by genotyping errors themselves. However, for estimation of genetic diversity, the degree of genotyping error was critical because descriptive parameters (He, FST, PLP 5%) varied substantially (by at least 25%). Due to low gene flow, P. volubilis mostly consists of small isolated subpopulations (ΦPT = 0.252-0.323) with some degree of admixture given by socio-economic connectivity among the sites; a direct link between the genetic and geographic distances was not confirmed. The study illustrates the successful application of AFLP to infer genetic structure in non-model plants.
Structural brain differences in school-age children with residual speech sound errors.
Preston, Jonathan L; Molfese, Peter J; Mencl, W Einar; Frost, Stephen J; Hoeft, Fumiko; Fulbright, Robert K; Landi, Nicole; Grigorenko, Elena L; Seki, Ayumi; Felsenfeld, Susan; Pugh, Kenneth R
2014-01-01
The purpose of the study was to identify structural brain differences in school-age children with residual speech sound errors. Voxel based morphometry was used to compare gray and white matter volumes for 23 children with speech sound errors, ages 8;6-11;11, and 54 typically speaking children matched on age, oral language, and IQ. We hypothesized that regions associated with production and perception of speech sounds would differ between groups. Results indicated greater gray matter volumes for the speech sound error group relative to typically speaking controls in bilateral superior temporal gyrus. There was greater white matter volume in the corpus callosum for the speech sound error group, but less white matter volume in right lateral occipital gyrus. Results may indicate delays in neuronal pruning in critical speech regions or differences in the development of networks for speech perception and production.
Directory of Open Access Journals (Sweden)
Maggi Kelly
2017-12-01
Light detection and ranging (Lidar) data can be used to create wall-to-wall forest structure and fuel products that are required for wildfire behavior simulation models. We know that Lidar-derived forest parameters have a non-negligible error associated with them, yet we do not know how this error influences the results of fire behavior modeling that uses these layers as inputs. Here, we evaluated the influence of error associated with two Lidar data products, canopy height (CH) and canopy base height (CBH), on simulated fire behavior in a case study in the Sierra Nevada, California, USA. We used a Monte Carlo simulation approach with expected randomized error added to each model input: Model 1 used the original, unmodified data, Model 2 incorporated error in the CH layer, and Model 3 incorporated error in the CBH layer. This sensitivity analysis showed that the expected error associated with CH and CBH did not greatly influence modeled results: conditional burn probability, fire size, and fire size distributions were very similar between Model 1, Model 2, and Model 3. However, the impact of introduced error was more pronounced with CBH than with CH, and at lower canopy heights, the addition of error increased modeled canopy burn probability. Our work suggests that the use of Lidar data, even with its inherent error, can contribute to reliable and robust estimates of modeled forest fire behavior, and forest managers should be confident in using Lidar data products in their fire behavior modeling workflow.
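The Monte Carlo error-injection design can be sketched with a toy crown-fire rule: re-run the model with randomized error added to a Lidar-derived input and compare output statistics. The logistic relation between canopy base height and burn probability and the 0.5 m Lidar error are invented stand-ins for the actual fire-behavior model and error estimates:

```python
import numpy as np

def crown_fire_prob(cbh_m):
    """Toy rule: lower canopy base height -> higher torching probability."""
    return 1.0 / (1.0 + np.exp(1.5 * (cbh_m - 3.0)))

rng = np.random.default_rng(7)
cbh = rng.uniform(1.0, 8.0, size=10000)            # "Lidar-derived" CBH (m)
lidar_error = rng.normal(0.0, 0.5, size=cbh.size)  # assumed Lidar CBH error

p_original = crown_fire_prob(cbh)                  # analogous to Model 1
p_perturbed = crown_fire_prob(cbh + lidar_error)   # analogous to Model 3

mean_shift = abs(p_perturbed.mean() - p_original.mean())
```

Comparing the full output distributions, as the study does for burn probability and fire size, shows whether input error of realistic magnitude materially changes the modeled behavior.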
Error Model in Wavelet-compressed Images
Directory of Open Access Journals (Sweden)
Gloria Puetamán G.
2007-06-01
In this paper we study image compression through a comparison between the Wavelet and Fourier models, by minimizing the error function. The particular problem we consider is to determine a basis {ei} minimizing the error function between the original image and the one recovered after compression. It is worth noting that there are many applications, in fields as diverse as medicine and astronomy, where no image deterioration is acceptable, since all the information contained, even what might be regarded as noise, is considered essential.
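The error-minimization idea can be illustrated with best k-term approximation in an orthonormal basis: keeping the k largest-magnitude coefficients minimizes the L2 reconstruction error achievable with k terms in that basis. The sketch below uses the DFT on a 1-D signal as a stand-in for either transform applied to an image:

```python
import numpy as np

def compress_topk(signal, k):
    """Keep the k largest-magnitude transform coefficients, zero the rest."""
    coeffs = np.fft.fft(signal)
    small = np.argsort(np.abs(coeffs))[:-k]   # indices of discarded coefficients
    coeffs[small] = 0.0
    return np.fft.ifft(coeffs).real

t = np.linspace(0, 1, 256, endpoint=False)
# Signal built from two pure tones: exactly 4 nonzero DFT coefficients
signal = np.sin(2 * np.pi * 3 * t) + 0.5 * np.sin(2 * np.pi * 10 * t)
approx = compress_topk(signal, k=4)           # 4 of 256 coefficients retained
err = np.sqrt(np.mean((signal - approx) ** 2))
```

Comparing this error across bases (wavelet vs. Fourier) at a fixed coefficient budget is the essence of the comparison described above: the better basis concentrates the image's energy in fewer coefficients.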
A practical guideline for human error assessment: A causal model
Ayele, Y. Z.; Barabadi, A.
2017-12-01
To meet availability targets and reduce system downtime, effective maintenance is of great importance. However, maintenance performance is affected in complex ways by human factors. Hence, for maintenance operations to be effective, these factors need to be assessed and quantified. To avoid the inadequacies of traditional human error assessment (HEA) approaches, the application of Bayesian Networks (BN) is gaining popularity. The main purpose of this paper is to propose a BN-based HEA framework for maintenance operations. The proposed framework aids in assessing the effects of human performance influencing factors on the likelihood of human error during maintenance activities. Further, the paper investigates how operational issues must be considered in system failure-rate analysis, maintenance planning, and the prediction of human error in pre- and post-maintenance operations. The goal is to assess how monitoring and evaluating human-factor performance can lead to better operation and maintenance.
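The kind of Bayesian-network calculation such a framework performs can be sketched with two performance influencing factors feeding a conditional probability table for human error during maintenance. Every probability below is an illustrative assumption, not a value from the paper:

```python
# Two hypothetical performance influencing factors
p_fatigue = 0.3                 # P(worker is fatigued)
p_poor_training = 0.2           # P(training is inadequate)

# Conditional probability table: P(error | fatigue, poor_training)
cpt = {(True, True): 0.40, (True, False): 0.15,
       (False, True): 0.10, (False, False): 0.02}

# Marginal probability of error, by enumeration over the parents
p_error = sum(
    cpt[(f, t)]
    * (p_fatigue if f else 1 - p_fatigue)
    * (p_poor_training if t else 1 - p_poor_training)
    for f in (True, False)
    for t in (True, False)
)

# Diagnostic query via Bayes' rule: P(fatigue | error observed)
p_fatigue_given_error = (
    sum(cpt[(True, t)] * (p_poor_training if t else 1 - p_poor_training)
        for t in (True, False)) * p_fatigue / p_error
)
```

The diagnostic direction is what makes the BN useful for maintenance planning: observing an error raises the posterior probability of the contributing factors, pointing to where interventions pay off.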
Structural Equation Model Trees
Brandmaier, Andreas M.; von Oertzen, Timo; McArdle, John J.; Lindenberger, Ulman
2015-01-01
In the behavioral and social sciences, structural equation models (SEMs) have become widely accepted as a modeling tool for the relation between latent and observed variables. SEMs can be seen as a unification of several multivariate analysis techniques. SEM Trees combine the strengths of SEMs and the decision tree paradigm by building tree structures that separate a data set recursively into subsets with significantly different parameter estimates in a SEM. SEM Trees provide means for finding covariates and covariate interactions that predict differences in structural parameters in observed as well as in latent space and facilitate theory-guided exploration of empirical data. We describe the methodology, discuss theoretical and practical implications, and demonstrate applications to a factor model and a linear growth curve model. PMID:22984789
Impact of operational model nesting approaches and inherent errors for coastal simulations
Brown, Jennifer M.; Norman, Danielle L.; Amoudry, Laurent O.; Souza, Alejandro J.
2016-11-01
A region of freshwater influence (ROFI) under hypertidal conditions is used to demonstrate inherent problems for nested operational modelling systems. Such problems can impact the accurate simulation of freshwater export within shelf seas, so must be considered in coastal ocean modelling studies. In Liverpool Bay (our UK study site), freshwater inflow from 3 large estuaries forms a coastal front that moves in response to tides and winds. The cyclic occurrence of stratification and remixing is important for the biogeochemical cycles, as nutrient and pollutant loaded freshwater is introduced into the coastal system. Validation methods, using coastal observations from fixed moorings and cruise transects, are used to assess the simulation of the ROFI, through improved spatial structure and temporal variability of the front, as guidance for best practise model setup. A structured modelling system using a 180 m grid nested within a 1.8 km grid demonstrates how compensation for error at the coarser resolution can have an adverse impact on the nested, high resolution application. Using 2008, a year of typical calm and stormy periods with variable river influence, the sensitivities of the ROFI dynamics to initial and boundary conditions are investigated. It is shown that accurate representation of the initial water column structure is important at the regional scale and that the boundary conditions are most important at the coastal scale. Although increased grid resolution captures the frontal structure, the accuracy in frontal position is determined by the offshore boundary conditions and therefore the accuracy of the coarser regional model.
Thermal Error Test and Intelligent Modeling Research on the Spindle of High Speed CNC Machine Tools
Luo, Zhonghui; Peng, Bin; Xiao, Qijun; Bai, Lu
2018-03-01
Thermal error is the main factor affecting the accuracy of precision machining. Reflecting the current research focus on machine-tool thermal error, this paper experimentally studies thermal error testing and intelligent modeling for the spindle of a vertical high-speed CNC machine tool. Several thermal-error testing devices are designed, in which 7 temperature sensors measure the temperature of the machine-tool spindle system and 2 displacement sensors detect the thermal error displacement. A thermal error compensation model with good inverse-prediction ability is established by applying principal component analysis, optimizing the temperature measuring points, extracting the characteristic values closely associated with the thermal error displacement, and using artificial neural network techniques.
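A minimal sketch of that pipeline (PCA over multi-sensor temperatures, then a linear map from the leading components to thermal displacement) might look as follows. The synthetic data, sensor count, and linear "truth" model are assumptions made for illustration:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 200
mode = np.cumsum(rng.normal(size=n))            # shared spindle warm-up mode
# 7 temperature sensors, each a scaled copy of the warm-up mode plus noise.
temps = np.outer(mode, rng.uniform(0.5, 1.5, 7)) + rng.normal(scale=0.3, size=(n, 7))
displacement = 2.0 * mode + rng.normal(scale=0.5, size=n)   # micrometres (toy)

# PCA via SVD of the centred temperature matrix.
X = temps - temps.mean(axis=0)
U, S, Vt = np.linalg.svd(X, full_matrices=False)
explained = S**2 / np.sum(S**2)
k = 2
scores = X @ Vt[:k].T                           # leading component scores

# Linear thermal-error model on the principal component scores.
A = np.column_stack([scores, np.ones(n)])
coef, *_ = np.linalg.lstsq(A, displacement, rcond=None)
rmse = np.sqrt(np.mean((A @ coef - displacement) ** 2))
print(f"variance explained by PC1: {explained[0]:.2f}, fit RMSE: {rmse:.2f} um")
```

In practice the linear map would be replaced by the trained neural network, but the temperature-point reduction step is the same.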
Scipal, K.; Holmes, T.R.H.; de Jeu, R.A.M.; Naeimi, V.; Wagner, W.W.
2008-01-01
In the last few years, research has made significant progress towards operational soil moisture remote sensing, which has led to the availability of several global data sets. For an optimal use of these data, an accurate estimation of the error structure is an important condition. To solve for the
DEFF Research Database (Denmark)
Ohlrich, Mogens; Henriksen, Eigil; Laugesen, Søren
1997-01-01
Uncertainties in power measurements performed with piezoelectric accelerometers and force transducers are investigated. It is shown that the inherent structural damping of the transducers is responsible for a bias phase error, which typically is on the order of one degree. Fortunately, such bias ...
Spindle Thermal Error Optimization Modeling of a Five-axis Machine Tool
Guo, Qianjian; Fan, Shuo; Xu, Rufeng; Cheng, Xiang; Zhao, Guoyong; Yang, Jianguo
2017-05-01
To address the low machining accuracy and uncontrollable thermal errors of NC machine tools, spindle thermal error measurement, modeling, and compensation are studied for a two-turntable five-axis machine tool. Measurement experiments on heat sources and thermal errors are carried out, and grey relational analysis (GRA) is introduced into the selection of temperature variables used for thermal error modeling. To analyze the influence of different heat sources on spindle thermal errors, an artificial neural network (ANN) model is presented, and an artificial bee colony (ABC) algorithm is introduced to train the link weights of the ANN; the resulting ABC-NN (artificial bee colony-based neural network) modeling method is proposed and used to predict spindle thermal errors. To test the prediction performance of the ABC-NN model, an experimental system is developed, and the predictions of LSR (least squares regression), ANN, and ABC-NN are compared with the measured spindle thermal errors. Experimental results show that the prediction accuracy of the ABC-NN model is higher than that of LSR and ANN, with a residual error smaller than 3 μm, demonstrating that the new modeling method is feasible. The proposed research provides guidance for compensating thermal errors and improving the machining accuracy of NC machine tools.
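The ABC training idea can be sketched with a stripped-down artificial bee colony minimizing a toy least-squares objective (two weights of a linear model standing in for network link weights). Colony size, abandonment limit, and the two-parameter model are illustrative assumptions, not the paper's configuration:

```python
import random

random.seed(7)
TRUE_W = [1.5, -0.8]
DATA = [((x1, x2), TRUE_W[0] * x1 + TRUE_W[1] * x2)
        for x1, x2 in [(random.uniform(-1, 1), random.uniform(-1, 1))
                       for _ in range(50)]]

def cost(w):
    # Sum of squared prediction errors over the toy data set.
    return sum((w[0] * x1 + w[1] * x2 - y) ** 2 for (x1, x2), y in DATA)

def abc_minimise(n_bees=20, iters=200, limit=10, span=2.0):
    foods = [[random.uniform(-span, span) for _ in range(2)]
             for _ in range(n_bees)]
    trials = [0] * n_bees
    for _ in range(iters):
        for i in range(n_bees):
            # Neighborhood move: perturb one coordinate toward/away from
            # another randomly chosen food source (employed/onlooker phase).
            j = random.randrange(2)
            k = random.choice([p for p in range(n_bees) if p != i])
            cand = foods[i][:]
            cand[j] += random.uniform(-1, 1) * (foods[i][j] - foods[k][j])
            if cost(cand) < cost(foods[i]):
                foods[i], trials[i] = cand, 0
            else:
                trials[i] += 1
            if trials[i] > limit:     # scout phase: abandon a stale source
                foods[i] = [random.uniform(-span, span) for _ in range(2)]
                trials[i] = 0
    return min(foods, key=cost)

best = abc_minimise()
print([round(v, 2) for v in best])
```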
Error assessment of digital elevation models obtained by interpolation
Directory of Open Access Journals (Sweden)
Jean François Mas; Azucena Pérez Vega
2009-10-01
Few studies have focused on evaluating the errors inherent in digital elevation models (DEMs). For this reason, the errors of DEMs obtained by different interpolation methods (ARC/INFO, IDRISI, ILWIS and NEW-MIEL) and at different resolutions were evaluated, with the aim of obtaining a more accurate representation of the relief. This evaluation of interpolation methods is crucial, considering that DEMs are the most effective way of representing the land surface for terrain analysis and are widely used in the environmental sciences. The results show that the resolution, the interpolation method, and the inputs (contour lines alone, or together with stream data and spot heights) strongly influence the magnitude of the errors generated in the DEM. In this study, carried out using 50 m contour lines in a mountainous area, the most suitable resolution was 30 m. The DEM with the smallest error (a root mean square error, RMSE, of 7.3 m) was obtained with ARC/INFO. However, free programs such as NEW-MIEL or ILWIS produced results with an RMSE of 10 m.
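The evaluation workflow (interpolate a surface from samples, then compute the RMSE at independent check points) can be sketched as follows, with an invented smooth surface and simple inverse-distance weighting standing in for the GIS packages compared in the study:

```python
import math
import random

random.seed(1)
# Invented smooth terrain surface (metres) over a 500 x 500 m area.
surface = lambda x, y: 100 + 20 * math.sin(x / 50) + 10 * math.cos(y / 80)

samples = [(random.uniform(0, 500), random.uniform(0, 500)) for _ in range(300)]
samples = [(x, y, surface(x, y)) for x, y in samples]

def idw(x, y, pts, power=2, k=8):
    # Inverse-distance-weighted interpolation from the k nearest samples.
    nearest = sorted(pts, key=lambda p: (p[0] - x) ** 2 + (p[1] - y) ** 2)[:k]
    num = den = 0.0
    for px, py, pz in nearest:
        d2 = (px - x) ** 2 + (py - y) ** 2
        if d2 == 0:
            return pz
        w = 1.0 / d2 ** (power / 2)
        num += w * pz
        den += w
    return num / den

checks = [(random.uniform(0, 500), random.uniform(0, 500)) for _ in range(100)]
errors = [idw(x, y, samples) - surface(x, y) for x, y in checks]
rmse = math.sqrt(sum(e * e for e in errors) / len(errors))
print(f"RMSE: {rmse:.2f} m")
```

Comparing RMSE values computed this way across interpolators and resolutions is exactly the kind of assessment the abstract describes.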
Monte Carlo Euler approximations of HJM term structure financial models
Björk, Tomas
2012-11-22
We present Monte Carlo-Euler methods for a weak approximation problem related to the Heath-Jarrow-Morton (HJM) term structure model, based on Itô stochastic differential equations in infinite dimensional spaces, and prove strong and weak error convergence estimates. The weak error estimates are based on stochastic flows and discrete dual backward problems, and they can be used to identify different error contributions arising from time and maturity discretization as well as the classical statistical error due to finite sampling. Explicit formulas for efficient computation of sharp error approximation are included. Due to the structure of the HJM models considered here, the computational effort devoted to the error estimates is low compared to the work to compute Monte Carlo solutions to the HJM model. Numerical examples with known exact solution are included in order to show the behavior of the estimates. © 2012 Springer Science+Business Media Dordrecht.
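The weak-error check can be illustrated in a far simpler setting than HJM: scalar geometric Brownian motion, where the exact expectation E[X_T] = x0·exp(μT) is known, so the combined time-discretization and statistical error of a Monte Carlo-Euler estimate can be measured directly. All parameters are illustrative:

```python
import math
import random

def euler_mc(mu=0.05, sigma=0.2, x0=1.0, T=1.0, steps=50, paths=10000, seed=2):
    # Euler-Maruyama paths of dX = mu*X dt + sigma*X dW, averaged at t = T.
    rng = random.Random(seed)
    dt = T / steps
    total = 0.0
    for _ in range(paths):
        x = x0
        for _ in range(steps):
            x += mu * x * dt + sigma * x * rng.gauss(0.0, math.sqrt(dt))
        total += x
    return total / paths

approx = euler_mc()
exact = math.exp(0.05)                      # E[X_T] for x0 = 1, mu = 0.05, T = 1
print(f"weak error: {abs(approx - exact):.4f}")
```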
Mrs. Malaprop’s Neighborhood: Using Word Errors to Reveal Neighborhood Structure
Goldrick, Matthew; Folk, Jocelyn R.; Rapp, Brenda
2009-01-01
Many theories of language production and perception assume that in the normal course of processing a word, additional non-target words (lexical neighbors) become active. The properties of these neighbors can provide insight into the structure of representations and processing mechanisms in the language processing system. To infer the properties of neighbors, we examined the non-semantic errors produced in both spoken and written word production by four individuals who suffered neurological injury. Using converging evidence from multiple language tasks, we first demonstrate that the errors originate in disruption to the processes involved in the retrieval of word form representations from long-term memory. The targets and errors produced were then examined for their similarity along a number of dimensions. A novel statistical simulation procedure was developed to determine the significance of the observed similarities between targets and errors relative to multiple chance baselines. The results reveal that in addition to position-specific form overlap (the only consistent claim of traditional definitions of neighborhood structure) the dimensions of lexical frequency, grammatical category, target length and initial segment independently contribute to the activation of non-target words in both spoken and written production. Additional analyses confirm the relevance of these dimensions for word production showing that, in both written and spoken modalities, the retrieval of a target word is facilitated by increasing neighborhood density, as defined by the results of the target-error analyses. PMID:20161591
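The traditional position-specific form-overlap definition of a neighborhood mentioned above can be sketched directly; the mini-lexicon is invented for illustration:

```python
# Neighbors under the classic definition: same length, exactly one
# position differs (a single-substitution "form overlap" neighbor).
LEXICON = {"cat", "bat", "hat", "cot", "cap", "can", "dog", "cog", "cart"}

def substitution_neighbors(word, lexicon):
    return {
        w for w in lexicon
        if len(w) == len(word)
        and sum(a != b for a, b in zip(w, word)) == 1
    }

print(sorted(substitution_neighbors("cat", LEXICON)))
# → ['bat', 'can', 'cap', 'cot', 'hat']

# Neighborhood density = neighbor count; per the target-error analyses
# described above, denser neighborhoods facilitate retrieval.
print(len(substitution_neighbors("cat", LEXICON)))
# → 5
```

The study's contribution is that additional dimensions (frequency, grammatical category, length, initial segment) would have to be layered on top of this simple overlap criterion.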
Towards New Empirical Versions of Financial and Accounting Models Corrected for Measurement Errors
Francois-Éric Racicot; Raymond Théoret; Alain Coen
2006-01-01
In this paper, we propose a new empirical version of the Fama and French model, based on the Hausman (1978) specification test and aimed at discarding measurement errors in the variables. The proposed empirical framework is general enough to be used for correcting other financial and accounting models for measurement errors. Removing measurement errors is important at many levels, such as information disclosure, corporate governance, and the protection of investors.
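Why measurement errors matter for such regressions can be seen in a small simulation of attenuation bias (errors-in-variables): when a regressor is observed with noise, the OLS slope shrinks toward zero. This sketches the problem, not the authors' Hausman-based correction; all numbers are synthetic:

```python
import random

random.seed(5)
n = 5000
beta = 1.0
x_true = [random.gauss(0, 1) for _ in range(n)]
y = [beta * x + random.gauss(0, 0.5) for x in x_true]
x_obs = [x + random.gauss(0, 1) for x in x_true]     # noisy measurement of x

def ols_slope(xs, ys):
    mx = sum(xs) / len(xs)
    my = sum(ys) / len(ys)
    cov = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
    var = sum((a - mx) ** 2 for a in xs)
    return cov / var

# Expected attenuation factor: var(x) / (var(x) + var(noise)) = 1 / 2.
print(f"slope with true x:  {ols_slope(x_true, y):.2f}")
print(f"slope with noisy x: {ols_slope(x_obs, y):.2f}")
```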
Role-modeling and medical error disclosure: a national survey of trainees.
Martinez, William; Hickson, Gerald B; Miller, Bonnie M; Doukas, David J; Buckley, John D; Song, John; Sehgal, Niraj L; Deitz, Jennifer; Braddock, Clarence H; Lehmann, Lisa Soleymani
2014-03-01
To measure trainees' exposure to negative and positive role-modeling for responding to medical errors and to examine the association between that exposure and trainees' attitudes and behaviors regarding error disclosure. Between May 2011 and June 2012, 435 residents at two large academic medical centers and 1,187 medical students from seven U.S. medical schools received anonymous, electronic questionnaires. The questionnaire asked respondents about (1) experiences with errors, (2) training for responding to errors, (3) behaviors related to error disclosure, (4) exposure to role-modeling for responding to errors, and (5) attitudes regarding disclosure. Using multivariate regression, the authors analyzed whether frequency of exposure to negative and positive role-modeling independently predicted two primary outcomes: (1) attitudes regarding disclosure and (2) nontransparent behavior in response to a harmful error. The response rate was 55% (884/1,622). Training on how to respond to errors had the largest independent, positive effect on attitudes (standardized effect estimate, 0.32; P < .001). Frequency of exposure to negative role-modeling independently predicted nontransparent behavior in response to a harmful error (OR 1.37, 95% CI 1.15-1.64; P < .001) and was negatively associated with attitudes regarding disclosure of errors. Negative role models may be a significant impediment to disclosure among trainees.
Detection of overlay error in double patterning gratings using phase-structured illumination.
Peterhänsel, Sandy; Gödecke, Maria Laura; Paz, Valeriano Ferreras; Frenner, Karsten; Osten, Wolfgang
2015-09-21
With the help of simulations we study the benefits of using coherent, phase-structured illumination to detect the overlay error in resist gratings fabricated by double patterning. Evaluating the intensity and phase distribution along the focused spot of a high numerical aperture microscope, the capability of detecting magnitude and direction of overlay errors in the range of a few nanometers is investigated for a wide range of gratings. Furthermore, two measurement approaches are presented and tested for their reliability in the presence of white Gaussian noise.
Modeling Dynamics of Wikipedia: An Empirical Analysis Using a Vector Error Correction Model
Directory of Open Access Journals (Sweden)
Liu Feng-Jun
2017-01-01
In this paper, we construct a system dynamics model of Wikipedia based on co-evolution theory, and investigate the interrelationships among topic popularity, group size, collaborative conflict, coordination mechanism, and information quality using the vector error correction model (VECM). This study provides a useful framework for analyzing the dynamics of Wikipedia and presents a formal exposition of the VECM methodology in information systems research.
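The error-correction mechanism at the heart of a VECM can be sketched in one dimension: two series share a long-run equilibrium (y tracking x) and deviations from it are corrected at speed alpha, which least squares on the lagged disequilibrium recovers. A full VECM generalizes this to a vector of variables (such as group size and information quality) with estimated cointegration relations; all numbers below are synthetic:

```python
import random

random.seed(11)
alpha = 0.3                        # true error-correction speed
x = [0.0]                          # random-walk driver series
y = [0.0]                          # series corrected toward x
for _ in range(2000):
    x.append(x[-1] + random.gauss(0, 1))
    y.append(y[-1] + alpha * (x[-1] - y[-1]) + random.gauss(0, 0.5))

dy = [y[t + 1] - y[t] for t in range(2000)]       # changes in y
z = [x[t + 1] - y[t] for t in range(2000)]        # disequilibrium terms

# OLS through the origin: delta-y regressed on the disequilibrium.
est = sum(d * s for d, s in zip(dy, z)) / sum(s * s for s in z)
print(f"estimated error-correction speed: {est:.2f}")
```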
Hwang, Jinsang; Yun, Hongsik; Suh, Yongcheol; Cho, Jeongho; Lee, Dongha
2012-09-25
This study developed a smartphone application that provides wireless communication, an NTRIP client, and RTK processing features, which can simplify a Network RTK-GPS system while reducing the required cost. A method for determining an error model for Network RTK measurements was proposed, considering both random and autocorrelation errors, to accurately calculate the coordinates measured by the application using state estimation filters. The performance evaluation of the developed application showed that it could perform high-precision real-time positioning, within an error range of several centimeters, at a frequency of 20 Hz. A Kalman filter was applied to the coordinates measured by the application to evaluate the appropriateness of the proposed error model determination method. The results were more accurate than those obtained with the existing error model, which considered only the random error.
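A state-estimation filter of the kind applied to the measured coordinates can be sketched as a constant-velocity Kalman filter over noisy 1-D position fixes. The noise levels, update rate, and measurement model below are illustrative assumptions, not the paper's calibrated Network RTK error model:

```python
import random

random.seed(4)
dt = 0.05                          # 20 Hz updates
q, r = 1e-4, 0.05 ** 2             # assumed process / measurement noise

# State [position, velocity] with a large prior uncertainty.
x_pos, x_vel = 0.0, 0.0
P = [[1.0, 0.0], [0.0, 1.0]]

def kalman_step(z, x_pos, x_vel, P):
    # Predict with constant-velocity dynamics F = [[1, dt], [0, 1]].
    x_pos += x_vel * dt
    P00 = P[0][0] + dt * (P[1][0] + P[0][1]) + dt * dt * P[1][1] + q
    P01 = P[0][1] + dt * P[1][1]
    P10 = P[1][0] + dt * P[1][1]
    P11 = P[1][1] + q
    # Update with the position measurement z (H = [1, 0]).
    K0 = P00 / (P00 + r)
    K1 = P10 / (P00 + r)
    innov = z - x_pos
    x_pos += K0 * innov
    x_vel += K1 * innov
    P = [[(1 - K0) * P00, (1 - K0) * P01],
         [P10 - K1 * P00, P11 - K1 * P01]]
    return x_pos, x_vel, P

truth = [0.1 * k * dt for k in range(200)]        # steady 0.1 m/s drift
errs = []
for true_pos in truth:
    z = true_pos + random.gauss(0, 0.05)          # noisy position fix
    x_pos, x_vel, P = kalman_step(z, x_pos, x_vel, P)
    errs.append(abs(x_pos - true_pos))
print(f"mean abs error after warm-up: {sum(errs[50:]) / len(errs[50:]):.3f} m")
```

Extending the measurement-noise model with an autocorrelated component is what the paper's proposed error model adds on top of this baseline.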
Analysis of Error Propagation Within Hierarchical Air Combat Models
2016-06-01
Of the factors (variables), the other variables were fixed at their baseline levels. The red dots with the standard deviation error bars represent... conducted an analysis to determine if the means and variances of MOEs of interest were statistically different by experimental design (Pav, 2015). To do... summarized data. In the summarized data set, we summarize each Design Point (DP) by its mean and standard deviation, over the stochastic replications.
Bryson, Mitch; Ferrari, Renata; Figueira, Will; Pizarro, Oscar; Madin, Josh; Williams, Stefan; Byrne, Maria
2017-08-01
Habitat structural complexity is one of the most important factors in determining the makeup of biological communities. Recent advances in structure-from-motion and photogrammetry have resulted in a proliferation of 3D digital representations of habitats from which structural complexity can be measured. Little attention has been paid to quantifying the measurement errors associated with these techniques, including the variability of results under different surveying and environmental conditions. Such errors have the potential to confound studies that compare habitat complexity over space and time. This study evaluated the accuracy, precision, and bias in measurements of marine habitat structural complexity derived from structure-from-motion and photogrammetric measurements using repeated surveys of artificial reefs (with known structure) as well as natural coral reefs. We quantified measurement errors as a function of survey image coverage, actual surface rugosity, and the morphological community composition of the habitat-forming organisms (reef corals). Our results indicated that measurements could be biased by up to 7.5% of the total observed ranges of structural complexity based on the environmental conditions present during any particular survey. Positive relationships were found between measurement errors and actual complexity, and the strength of these relationships was increased when coral morphology and abundance were also used as predictors. The numerous advantages of structure-from-motion and photogrammetry techniques for quantifying and investigating marine habitats will mean that they are likely to replace traditional measurement techniques (e.g., chain-and-tape). To this end, our results have important implications for data collection and the interpretation of measurements when examining changes in habitat complexity using structure-from-motion and photogrammetry.
On the Influence of Weather Forecast Errors in Short-Term Load Forecasting Models
Fay, D.; Ringwood, John; Condon, M.
2004-01-01
Weather information is an important factor in load forecasting models. This weather information usually takes the form of actual weather readings. However, online operation of load forecasting models requires the use of weather forecasts, with associated weather forecast errors. A technique is proposed to model weather forecast errors to reflect current accuracy. A load forecasting model is then proposed which combines the forecasts of several load forecasting models. This approach allows the...
Analysis of errors in spectral reconstruction with a Laplace transform pair model
International Nuclear Information System (INIS)
Archer, B.R.; Bushong, S.C.
1985-01-01
The sensitivity of a Laplace transform pair model for spectral reconstruction to random errors in attenuation measurements of diagnostic x-ray units has been investigated. No spectral deformation or significant alteration resulted from the simulated attenuation errors. It is concluded that the range of spectral uncertainties to be expected from the application of this model is acceptable for most scientific applications. (author)
Potential Hydraulic Modelling Errors Associated with Rheological Data Extrapolation in Laminar Flow
International Nuclear Information System (INIS)
Shadday, Martin A. Jr.
1997-01-01
The potential errors associated with the modelling of flows of non-Newtonian slurries through pipes, due to inadequate rheological models and extrapolation outside the ranges of the databases, are demonstrated. The behaviors of both dilatant and pseudoplastic fluids with yield stresses, and the errors associated with treating them as Bingham plastics, are investigated
Error budget analysis of SCIAMACHY limb ozone profile retrievals using the SCIATRAN model
Directory of Open Access Journals (Sweden)
N. Rahpoe
2013-10-01
A comprehensive error characterization of SCIAMACHY (Scanning Imaging Absorption Spectrometer for Atmospheric CHartographY) limb ozone profiles has been established based upon SCIATRAN transfer model simulations. The study was carried out in order to evaluate the possible impact of parameter uncertainties, e.g. in albedo, stratospheric aerosol optical extinction, temperature, pressure, pointing, and ozone absorption cross section on the limb ozone retrieval. Together with the a posteriori covariance matrix available from the retrieval, total random and systematic errors are defined for SCIAMACHY ozone profiles. Main error sources are the pointing errors, errors in the knowledge of stratospheric aerosol parameters, and cloud interference. Systematic errors are of the order of 7%, while the random error amounts to 10–15% for most of the stratosphere. These numbers can be used for the interpretation of instrument intercomparison and validation of the SCIAMACHY V 2.5 limb ozone profiles in a rigorous manner.
Sigmund, Armin; Pfister, Lena; Sayde, Chadi; Thomas, Christoph K.
2017-06-01
In recent years, the spatial resolution of fiber-optic distributed temperature sensing (DTS) has been enhanced in various studies by helically coiling the fiber around a support structure. While solid polyvinyl chloride tubes are an appropriate support structure under water, they can produce considerable errors in aerial deployments due to radiative heating or cooling. We used meshed reinforcing fabric as a novel support structure to measure high-resolution vertical temperature profiles with a height of several meters above a meadow and within and above a small lake. This study aimed at quantifying the radiation error for the coiled DTS system and the contribution caused by the novel support structure via heat conduction. A quantitative and comprehensive energy balance model is proposed and tested, which includes the shortwave radiative, longwave radiative, convective, and conductive heat transfers and allows for modeling fiber temperatures as well as quantifying the radiation error. The sensitivity of the energy balance model to the conduction error caused by the reinforcing fabric is discussed in terms of its albedo, emissivity, and thermal conductivity. Modeled radiation errors amounted to -1.0 and 1.3 K at 2 m height but ranged up to 2.8 K for very high incoming shortwave radiation (1000 J s-1 m-2) and very weak winds (0.1 m s-1). After correcting for the radiation error by means of the presented energy balance, the root mean square error between DTS and reference air temperatures from an aspirated resistance thermometer or an ultrasonic anemometer was 0.42 and 0.26 K above the meadow and the lake, respectively. Conduction between the reinforcing fabric and the fiber cable had a small effect on fiber temperatures, although at the contact points between fabric and cable temperature artifacts of up to 2.5 K were observed. Overall, the reinforcing fabric offers several advantages over conventional support structures published to date in the literature, as it minimizes both radiation and conduction errors.
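The steady-state balance (absorbed shortwave plus net longwave exchange equals convective loss) can be sketched numerically; the albedo, emissivity, and convective-transfer scaling below are illustrative assumptions, not the paper's calibrated coefficients:

```python
import math

SIGMA = 5.67e-8            # Stefan-Boltzmann constant, W m-2 K-4

def radiation_error(sw_down, wind, t_air=293.15, albedo=0.1, emissivity=0.9):
    # Crude forced-convection scaling for the heat-transfer coefficient
    # (assumed functional form, per unit fiber surface area).
    h = 100.0 + 400.0 * math.sqrt(max(wind, 0.05))
    t = t_air
    for _ in range(100):   # fixed-point iteration on the energy balance
        absorbed_sw = (1 - albedo) * sw_down
        net_lw = emissivity * SIGMA * (t_air**4 - t**4)
        t = t_air + (absorbed_sw + net_lw) / h
    return t - t_air       # fiber-air difference = radiation error

# Strong sun with weak wind gives the largest error; wind ventilates it away.
print(f"{radiation_error(1000, 0.1):.2f} K")
print(f"{radiation_error(1000, 5.0):.2f} K")
```

The qualitative behavior (errors of a few kelvin at 1000 J s-1 m-2 and 0.1 m s-1, shrinking rapidly with wind speed) mirrors the sensitivity reported above.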
Directory of Open Access Journals (Sweden)
Volodymyr Kharchenko
2017-03-01
Purpose: the aim of this study is to research applied models for preventing air traffic controller errors in terminal control areas (TMA) under uncertainty conditions. A theoretical framework describing safety events and errors of air traffic controllers connected with TMA operations is proposed. Methods: optimisation of the TMA formal description based on the Threat and Error Management model and the TMA network model of air traffic flows. Results: the human factors variables associated with safety events in the work of air traffic controllers under uncertainty conditions were obtained, and principles for applying the Threat and Error Management model to air traffic controller operations and the TMA network model of air traffic flows were proposed. Discussion: the information processing context for preventing air traffic controller errors and examples of threats in the work of air traffic controllers relevant for TMA operations under uncertainty conditions are presented.
Orthogonality of the Mean and Error Distribution in Generalized Linear Models.
Huang, Alan; Rathouz, Paul J
2017-01-01
We show that the mean-model parameter is always orthogonal to the error distribution in generalized linear models. Thus, the maximum likelihood estimator of the mean-model parameter will be asymptotically efficient regardless of whether the error distribution is known completely, known up to a finite vector of parameters, or left completely unspecified, in which case the likelihood is taken to be an appropriate semiparametric likelihood. Moreover, the maximum likelihood estimator of the mean-model parameter will be asymptotically independent of the maximum likelihood estimator of the error distribution. This generalizes some well-known results for the special cases of normal, gamma and multinomial regression models, and, perhaps more interestingly, suggests that asymptotically efficient estimation and inferences can always be obtained if the error distribution is nonparametrically estimated along with the mean. In contrast, estimation and inferences using misspecified error distributions or variance functions are generally not efficient.
On low-frequency errors of uniformly modulated filtered white-noise models for ground motions
Safak, Erdal; Boore, David M.
1988-01-01
Low-frequency errors of a commonly used non-stationary stochastic model (uniformly modulated filtered white-noise model) for earthquake ground motions are investigated. It is shown both analytically and by numerical simulation that uniformly modulated filtered white-noise-type models systematically overestimate the spectral response for periods longer than the effective duration of the earthquake, because of the built-in low-frequency errors in the model. The errors, which are significant for low-magnitude short-duration earthquakes, can be eliminated by using filtered shot-noise-type models (i.e., white noise, modulated by the envelope first, and then filtered).
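The model class under discussion can be sketched directly: white noise passed through a second-order (Kanai-Tajimi-like) filter and then scaled by a deterministic envelope. Filter frequency, damping, and the envelope shape are illustrative parameters; the alternative noted above (filtered shot noise) would modulate first and filter second:

```python
import math
import random

random.seed(9)
dt = 0.01
n = 4000                           # 40 s record
fg, zg = 2.5, 0.6                  # filter frequency (Hz) and damping (assumed)
wg = 2 * math.pi * fg

def envelope(t, t_rise=2.0, t_flat=10.0, decay=0.2):
    # Parabolic rise, flat strong-motion phase, exponential decay.
    if t < t_rise:
        return (t / t_rise) ** 2
    if t < t_flat:
        return 1.0
    return math.exp(-decay * (t - t_flat))

# Second-order filter driven by white noise, integrated semi-implicitly.
x = v = 0.0
accel = []
for k in range(n):
    w = random.gauss(0, 1) / math.sqrt(dt)          # white-noise excitation
    a = -2 * zg * wg * v - wg * wg * x + w
    v += a * dt
    x += v * dt
    # Kanai-Tajimi-style output acceleration, uniformly modulated last.
    accel.append(envelope(k * dt) * (2 * zg * wg * v + wg * wg * x))

peak = max(abs(a) for a in accel)
print(f"peak modulated acceleration: {peak:.1f}")
```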
Directory of Open Access Journals (Sweden)
Roque Calvo; Roberto D’Amato; Emilio Gómez; Rosario Domingo
2016-09-01
The development of an error compensation model for coordinate measuring machines (CMMs) and its integration into feature measurement is presented. CMMs are widespread and dependable instruments in industry and laboratories for dimensional measurement. From the tip probe sensor to the machine display, there is a complex transformation of probed point coordinates through the geometrical feature model that makes the assessment of accuracy and uncertainty measurement results difficult. Therefore, error compensation is not standardized, conversely to other simpler instruments. Detailed coordinate error compensation models are generally based on CMM as a rigid-body and it requires a detailed mapping of the CMM’s behavior. In this paper a new model type of error compensation is proposed. It evaluates the error from the vectorial composition of length error by axis and its integration into the geometrical measurement model. The non-explained variability by the model is incorporated into the uncertainty budget. Model parameters are analyzed and linked to the geometrical errors and uncertainty of CMM response. Next, the outstanding measurement models of flatness, angle, and roundness are developed. The proposed models are useful for measurement improvement with easy integration into CMM signal processing, in particular in industrial environments where built-in solutions are sought. A battery of implementation tests are presented in Part II, where the experimental endorsement of the model is included.
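The core idea, composing the measured length error vectorially from per-axis length errors and projecting it onto the measurement direction, can be sketched as follows. The per-axis error coefficients are invented for illustration:

```python
import math

# Assumed relative length error per axis (dimensionless, e.g. um per um).
AXIS_ERROR = {"x": 2e-6, "y": 3e-6, "z": 5e-6}

def length_error(p_start, p_end):
    # Each axis contributes its own length error proportional to the
    # travel along that axis; the error on the measured length is the
    # projection of the error vector onto the measurement direction.
    d = [b - a for a, b in zip(p_start, p_end)]
    length = math.sqrt(sum(c * c for c in d))
    err = [AXIS_ERROR[ax] * c for ax, c in zip("xyz", d)]
    return sum(e * c for e, c in zip(err, d)) / length

e = length_error((0, 0, 0), (100.0, 100.0, 0.0))   # coordinates in mm
print(f"{e * 1000:.4f} um")
```

In the paper's model, the variability this composition leaves unexplained would then feed the uncertainty budget.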
Structured building model reduction toward parallel simulation
Energy Technology Data Exchange (ETDEWEB)
Dobbs, Justin R. [Cornell University; Hencey, Brondon M. [Cornell University
2013-08-26
Building energy model reduction exchanges accuracy for improved simulation speed by reducing the number of dynamical equations. Parallel computing aims to improve simulation times without loss of accuracy but is poorly utilized by contemporary simulators and is inherently limited by inter-processor communication. This paper bridges these disparate techniques to implement efficient parallel building thermal simulation. We begin with a survey of three structured reduction approaches that compares their performance to a leading unstructured method. We then use structured model reduction to find thermal clusters in the building energy model and allocate processing resources. Experimental results demonstrate faster simulation and low error without any inter-processor communication.
Addressing Conceptual Model Uncertainty in the Evaluation of Model Prediction Errors
Carrera, J.; Pool, M.
2014-12-01
Model predictions are uncertain because of errors in model parameters, future forcing terms, and model concepts. The latter remain the largest and most difficult to assess source of uncertainty in long term model predictions. We first review existing methods to evaluate conceptual model uncertainty. We argue that they are highly sensitive to the ingenuity of the modeler, in the sense that they rely on the modeler's ability to propose alternative model concepts. Worse, we find that the standard practice of stochastic methods leads to poor, potentially biased and often too optimistic, estimation of actual model errors. This is bad news because stochastic methods are purported to properly represent uncertainty. We contend that the problem does not lie on the stochastic approach itself, but on the way it is applied. Specifically, stochastic inversion methodologies, which demand quantitative information, tend to ignore geological understanding, which is conceptually rich. We illustrate some of these problems with the application to Mar del Plata aquifer, where extensive data are available for nearly a century. Geologically based models, where spatial variability is handled through zonation, yield calibration fits similar to geostatistically based models, but much better predictions. In fact, the appearance of the stochastic T fields is similar to the geologically based models only in areas with high density of data. We take this finding to illustrate the ability of stochastic models to accommodate many data, but also, ironically, their inability to address conceptual model uncertainty. In fact, stochastic model realizations tend to be too close to the "most likely" one (i.e., they do not really realize the full conceptual uncertainty). The second part of the presentation is devoted to argue that acknowledging model uncertainty may lead to qualitatively different decisions than just working with "most likely" model predictions. Therefore, efforts should concentrate on
Tso, Chak-Hau Michael; Kuras, Oliver; Wilkinson, Paul B.; Uhlemann, Sebastian; Chambers, Jonathan E.; Meldrum, Philip I.; Graham, James; Sherlock, Emma F.; Binley, Andrew
2017-11-01
Measurement errors can play a pivotal role in geophysical inversion. Most inverse models require users to prescribe or assume a statistical model of data errors before inversion. Wrongly prescribed errors can lead to over- or under-fitting of data; however, the derivation of models of data errors is often neglected. With the heightened interest in uncertainty estimation within hydrogeophysics, better characterisation and treatment of measurement errors are needed to provide improved image appraisal. Here we focus on the role of measurement errors in electrical resistivity tomography (ERT). We have analysed two time-lapse ERT datasets: one contains 96 sets of direct and reciprocal data collected from a surface ERT line within a 24 h timeframe; the other is a two-year-long cross-borehole survey at a UK nuclear site with 246 sets of over 50,000 measurements. Our study includes the characterisation of the spatial and temporal behaviour of measurement errors using autocorrelation and correlation coefficient analysis. We find that, in addition to well-known proportionality effects, ERT measurements can also be sensitive to the combination of electrodes used, i.e. errors may not be uncorrelated as often assumed. Based on these findings, we develop a new error model that allows grouping based on electrode number in addition to fitting a linear model to transfer resistance. The new model explains the observed measurement errors better and shows superior inversion results and uncertainty estimates in synthetic examples. It is robust, because it groups errors together based on the electrodes used to make the measurements. The new model can be readily applied to the diagonal data weighting matrix widely used in common inversion methods, as well as to the data covariance matrix in a Bayesian inversion framework. We demonstrate its application using extensive ERT monitoring datasets from the two aforementioned sites.
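The core of the linear error model described in the abstract above, error magnitude as an offset plus a term proportional to transfer resistance, can be sketched with an ordinary least-squares fit. The data, parameter names, and values below are illustrative assumptions, not figures from the paper; the paper's additional grouping by electrode would amount to fitting one such model per electrode group.

```python
# Illustrative sketch: fit a linear ERT error model  |e| = a + b * |R|,
# where R is the transfer resistance and e the direct-reciprocal misfit.
# Synthetic data; a and b are hypothetical parameters.

def fit_linear_error_model(resistances, errors):
    """Ordinary least squares for errors = a + b * resistances."""
    n = len(resistances)
    mean_r = sum(resistances) / n
    mean_e = sum(errors) / n
    cov = sum((r - mean_r) * (e - mean_e) for r, e in zip(resistances, errors))
    var = sum((r - mean_r) ** 2 for r in resistances)
    b = cov / var
    a = mean_e - b * mean_r
    return a, b

# Synthetic reciprocal errors generated from a known model (a=0.01, b=0.02).
R = [0.5, 1.0, 2.0, 4.0, 8.0]
E = [0.01 + 0.02 * r for r in R]
a, b = fit_linear_error_model(R, E)
print(a, b)  # recovers the generating parameters
```

The fitted intercept and slope would then populate the diagonal data weighting matrix mentioned in the abstract.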
Irving, J.; Koepke, C.; Elsheikh, A. H.
2017-12-01
Bayesian solutions to geophysical and hydrological inverse problems are dependent upon a forward process model linking subsurface parameters to measured data, which is typically assumed to be known perfectly in the inversion procedure. However, in order to make the stochastic solution of the inverse problem computationally tractable using, for example, Markov-chain-Monte-Carlo (MCMC) methods, fast approximations of the forward model are commonly employed. This introduces model error into the problem, which has the potential to significantly bias posterior statistics and hamper data integration efforts if not properly accounted for. Here, we present a new methodology for addressing the issue of model error in Bayesian solutions to hydrogeophysical inverse problems that is geared towards the common case where these errors cannot be effectively characterized globally through some parametric statistical distribution or locally based on interpolation between a small number of computed realizations. Rather than focusing on the construction of a global or local error model, we instead work towards identification of the model-error component of the residual through a projection-based approach. In this regard, pairs of approximate and detailed model runs are stored in a dictionary that grows at a specified rate during the MCMC inversion procedure. At each iteration, a local model-error basis is constructed for the current test set of model parameters using the K-nearest neighbour entries in the dictionary, which is then used to separate the model error from the other error sources before computing the likelihood of the proposed set of model parameters. We demonstrate the performance of our technique on the inversion of synthetic crosshole ground-penetrating radar traveltime data for three different subsurface parameterizations of varying complexity. The synthetic data are generated using the eikonal equation, whereas a straight-ray forward model is assumed in the inversion
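The dictionary idea in the abstract above can be sketched minimally: store pairs of approximate and detailed forward-model runs, then estimate the model error at new parameters from the K nearest dictionary entries. The paper constructs a local projection basis from those neighbours; a plain KNN average is used here as a simplified stand-in, and both toy forward models are assumptions for illustration.

```python
# Minimal sketch of dictionary-based model-error estimation. The "detailed"
# and "approximate" forward models below are toy stand-ins.

def detailed_model(m):   # expensive forward model (toy)
    return m ** 2 + 0.1 * m

def approx_model(m):     # fast approximation with systematic model error
    return m ** 2

# Dictionary of (parameter, model_error) pairs from paired runs; in an MCMC
# setting this would grow at a specified rate during the inversion.
dictionary = [(m, detailed_model(m) - approx_model(m))
              for m in [0.0, 0.5, 1.0, 1.5, 2.0, 2.5, 3.0]]

def knn_error_estimate(m_new, K=3):
    """Average model error over the K nearest dictionary entries."""
    nearest = sorted(dictionary, key=lambda p: abs(p[0] - m_new))[:K]
    return sum(err for _, err in nearest) / K

m_test = 1.2
corrected = approx_model(m_test) + knn_error_estimate(m_test)
print(corrected, detailed_model(m_test))  # corrected output tracks the detailed model
```

In the likelihood evaluation of an MCMC step, the estimated model-error component would be separated from the residual before computing data misfit.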
OOK power model based dynamic error testing for smart electricity meter
Wang, Xuewei; Chen, Jingxia; Yuan, Ruiming; Jia, Xiaolu; Zhu, Meng; Jiang, Zhenyu
2017-02-01
This paper formulates the dynamic error testing problem for a smart meter, with consideration and investigation of both the testing signal and the dynamic error testing method. To solve the dynamic error testing problems, the paper establishes an on-off-keying (OOK) testing dynamic current model and an OOK testing dynamic load energy (TDLE) model. Then two types of TDLE sequences and three modes of OOK testing dynamic power are proposed. In addition, a novel algorithm, which helps to solve the problem of dynamic electric energy measurement's traceability, is derived for dynamic errors. Based on the above research, OOK TDLE sequence generation equipment was developed and a dynamic error testing system was constructed. Using the testing system, five kinds of meters were tested in the three dynamic power modes. The test results show that the dynamic error is closely related to the dynamic power mode, and the measurement uncertainty is 0.38%.
A Sensor Dynamic Measurement Error Prediction Model Based on NAPSO-SVM.
Jiang, Minlan; Jiang, Lan; Jiang, Dingde; Li, Fei; Song, Houbing
2018-01-15
Dynamic measurement error correction is an effective way to improve sensor precision. Dynamic measurement error prediction is an important part of error correction, and support vector machines (SVMs) are often used for predicting the dynamic measurement errors of sensors. Traditionally, the SVM parameters were set manually, which cannot ensure the model's performance. In this paper, an SVM method based on an improved particle swarm optimization (NAPSO) is proposed to predict the dynamic measurement errors of sensors. Natural selection and simulated annealing are added to the PSO to improve its ability to avoid local optima. To verify the performance of NAPSO-SVM, three types of algorithms are selected to optimize the SVM's parameters: the particle swarm optimization algorithm (PSO), the improved PSO optimization algorithm (NAPSO), and the glowworm swarm optimization (GSO). The dynamic measurement error data of two sensors are applied as the test data. The root mean squared error and mean absolute percentage error are employed to evaluate the prediction models' performances. The experimental results show that, among the three tested algorithms, the NAPSO-SVM method has better prediction precision and smaller prediction errors, and it is an effective method for predicting the dynamic measurement errors of sensors.
Robust estimation of errors-in-variables models using M-estimators
Guo, Cuiping; Peng, Junhuan
2017-07-01
The traditional errors-in-variables (EIV) models are widely adopted in the applied sciences. EIV model estimators, however, can be highly biased by gross errors. This paper focuses on robust estimation in EIV models. A new class of robust estimators, called robust weighted total least squares (RWTLS) estimators, is introduced. Robust estimators of the parameters of EIV models are derived from M-estimators and the Lagrange multiplier method. A simulated example is carried out to demonstrate the performance of the presented RWTLS. The results show that the RWTLS algorithm can indeed resist gross errors and achieve a reliable solution.
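For illustration of the M-estimation ingredient only: the sketch below is an iteratively reweighted least-squares (IRLS) Huber M-estimator for a slope-only ordinary regression, showing how a gross error is down-weighted. It deliberately omits the total-least-squares part (errors in the regressors) that distinguishes the paper's RWTLS; data and tuning constant are toy choices.

```python
# IRLS Huber M-estimator for y = b * x (through the origin). Not the paper's
# RWTLS: the errors-in-x treatment is omitted. Toy data and tuning constant.

def huber_irls_slope(x, y, c=1.345, iters=50):
    # Start from the OLS slope, then reweight residuals with Huber weights.
    b = sum(xi * yi for xi, yi in zip(x, y)) / sum(xi * xi for xi in x)
    for _ in range(iters):
        r = [yi - b * xi for xi, yi in zip(x, y)]
        w = [1.0 if abs(ri) <= c else c / abs(ri) for ri in r]
        b = (sum(wi * xi * yi for wi, xi, yi in zip(w, x, y))
             / sum(wi * xi * xi for wi, xi in zip(w, x)))
    return b

x = [1.0, 2.0, 3.0, 4.0, 5.0]
y = [2.0, 4.0, 6.0, 8.0, 30.0]   # last point carries a gross error; true b = 2
b_hat = huber_irls_slope(x, y)
print(b_hat)  # much closer to 2 than the OLS slope (about 3.8)
```

The Huber weight caps the influence of the outlying residual, which is the same mechanism RWTLS applies within the total-least-squares objective.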
DEFF Research Database (Denmark)
Jørgensen, John Bagterp; Jørgensen, Sten Bay
2007-01-01
A prediction-error method tailored for model-based predictive control is presented. The prediction-error methods studied are based on predictions using the Kalman filter and Kalman predictors for a linear discrete-time stochastic state-space model. The linear discrete-time stochastic state-space model is realized from a continuous-discrete-time linear stochastic system specified using transfer functions with time delays. It is argued that the prediction-error criterion should be selected such that it is compatible with the objective function of the predictive controller in which the model...
Wagner, Sean
2014-01-01
The Cassini spacecraft has executed nearly 300 maneuvers since 1997, providing ample data for execution-error model updates. With maneuvers through 2017, opportunities remain to improve on the models and remove biases identified in maneuver executions. This manuscript focuses on how execution-error models can be used to judge maneuver performance, while providing a means for detecting performance degradation. Additionally, this paper describes Cassini's execution-error model updates in August 2012. An assessment of Cassini's maneuver performance through OTM-368 on January 5, 2014 is also presented.
International Nuclear Information System (INIS)
1997-01-01
This report documents a numerical simulation model of the natural gas market in Germany, France, the Netherlands and Belgium. It is a part of a project called "Internationalization and structural change in the gas market" aiming to enhance the understanding of the factors behind the current and upcoming changes in the European gas market, especially the downstream part of the gas chain. The model takes European border prices of gas as given, adds transmission and distribution cost and profit margins as well as gas taxes to calculate gas prices. The model includes demand sub-models for households, chemical industry, other industry, the commercial sector and electricity generation. Demand responses to price changes are assumed to take time, and the long run effects are significantly larger than the short run effects. For the household sector and the electricity sector, the dynamics are modeled by distinguishing between energy use in the old and new capital stock. In addition to prices and the activity level (GDP), the model includes the extension of the gas network as a potentially important variable in explaining the development of gas demand. The properties of numerical simulation models are often described by dynamic multipliers, which describe the behaviour of important variables when key explanatory variables are changed. At the end, the report shows the results of a model experiment where the costs in transmission and distribution were reduced. 6 refs., 9 figs., 1 tab.
Error propagation of partial least squares for parameters optimization in NIR modeling
Du, Chenzhao; Dai, Shengyun; Qiao, Yanjiang; Wu, Zhisheng
2018-03-01
A novel methodology is proposed to determine the error propagation of partial least squares (PLS) for parameter optimization in near-infrared (NIR) modeling. The parameters include spectral pretreatment, latent variables, and variable selection. In this paper, an open-source dataset (corn) and a complicated dataset (Gardenia) were used to establish PLS models under different modeling parameters. The error propagation of the modeling parameters for water quantity in corn and geniposide quantity in Gardenia was presented in terms of both type I and type II error. For example, when the variable importance in the projection (VIP), interval partial least squares (iPLS), and backward interval partial least squares (BiPLS) variable selection algorithms were used for geniposide in Gardenia, compared with synergy interval partial least squares (SiPLS), the error weight varied from 5% to 65%, 55%, and 15%, respectively. The results demonstrate how, and to what extent, the different modeling parameters affect error propagation of PLS for parameter optimization in NIR modeling. The larger the error weight, the worse the model. Finally, our trials yielded a workable process for developing robust PLS models for corn and Gardenia under the optimal modeling parameters. Furthermore, it could provide significant guidance for the selection of modeling parameters of other multivariate calibration models.
Christensen, Nikolaj K; Minsley, Burke J.; Christensen, Steen
2017-01-01
We present a new methodology to combine spatially dense high-resolution airborne electromagnetic (AEM) data and sparse borehole information to construct multiple plausible geological structures using a stochastic approach. The method developed allows for quantification of the performance of groundwater models built from different geological realizations of structure. Multiple structural realizations are generated using geostatistical Monte Carlo simulations that treat sparse borehole lithological observations as hard data and dense geophysically derived structural probabilities as soft data. Each structural model is used to define 3-D hydrostratigraphical zones of a groundwater model, and the hydraulic parameter values of the zones are estimated by using nonlinear regression to fit hydrological data (hydraulic head and river discharge measurements). Use of the methodology is demonstrated for a synthetic domain having structures of categorical deposits consisting of sand, silt, or clay. It is shown that using dense AEM data with the methodology can significantly improve the estimated accuracy of the sediment distribution as compared to when borehole data are used alone. It is also shown that this use of AEM data can improve the predictive capability of a calibrated groundwater model that uses the geological structures as zones. However, such structural models will always contain errors because even with dense AEM data it is not possible to perfectly resolve the structures of a groundwater system. It is shown that when using such erroneous structures in a groundwater model, they can lead to biased parameter estimates and biased model predictions, therefore impairing the model's predictive capability.
Local and omnibus goodness-of-fit tests in classical measurement error models
Ma, Yanyuan
2010-09-14
We consider functional measurement error models, i.e. models where covariates are measured with error and yet no distributional assumptions are made about the mismeasured variable. We propose and study a score-type local test and an orthogonal series-based, omnibus goodness-of-fit test in this context, where no likelihood function is available or calculated; i.e., all the tests are proposed in the semiparametric model framework. We demonstrate that our tests have optimality properties and computational advantages that are similar to those of the classical score tests in the parametric model framework. The test procedures are applicable to several semiparametric extensions of measurement error models, including when the measurement error distribution is estimated non-parametrically as well as for generalized partially linear models. The performance of the local score-type and omnibus goodness-of-fit tests is demonstrated through simulation studies and analysis of a nutrition data set.
SLC beam line error analysis using a model-based expert system
International Nuclear Information System (INIS)
Lee, M.; Kleban, S.
1988-02-01
Commissioning a particle beam line is usually a very time-consuming and labor-intensive task for accelerator physicists. To aid in commissioning, we developed a model-based expert system that identifies error-free regions and localizes beam line errors. This paper will give examples of the use of our system for SLC commissioning. 8 refs., 5 figs.
Carroll, Raymond J.
2010-05-01
This paper considers identification and estimation of a general nonlinear Errors-in-Variables (EIV) model using two samples. Both samples consist of a dependent variable, some error-free covariates, and an error-prone covariate, for which the measurement error has unknown distribution and could be arbitrarily correlated with the latent true values; and neither sample contains an accurate measurement of the corresponding true variable. We assume that the regression model of interest - the conditional distribution of the dependent variable given the latent true covariate and the error-free covariates - is the same in both samples, but the distributions of the latent true covariates vary with observed error-free discrete covariates. We first show that the general latent nonlinear model is nonparametrically identified using the two samples when both could have nonclassical errors, without either instrumental variables or independence between the two samples. When the two samples are independent and the nonlinear regression model is parameterized, we propose sieve Quasi Maximum Likelihood Estimation (Q-MLE) for the parameter of interest, and establish its root-n consistency and asymptotic normality under possible misspecification, and its semiparametric efficiency under correct specification, with easily estimated standard errors. A Monte Carlo simulation and a data application are presented to show the power of the approach.
Hawkins, C Matthew; Hall, Seth; Zhang, Bin; Towbin, Alexander J
2014-10-01
The purpose of this study was to evaluate and compare textual error rates and subtypes in radiology reports before and after implementation of department-wide structured reports. Randomly selected radiology reports that were generated following the implementation of department-wide structured reports were evaluated for textual errors by two radiologists. For each report, the text was compared to the corresponding audio file. Errors in each report were tabulated and classified. Error rates were compared to results from a prior study performed before the implementation of structured reports. Calculated error rates included the average number of errors per report, the average number of nongrammatical errors per report, the percentage of reports with an error, and the percentage of reports with a nongrammatical error. Identical versions of voice-recognition software were used for both studies. A total of 644 radiology reports were randomly evaluated as part of this study. There was a statistically significant reduction in the percentage of reports with nongrammatical errors (33 to 26%; p = 0.024). The likelihood of at least one missense omission error (omission errors that changed the meaning of a phrase or sentence) occurring in a report was significantly reduced from 3.5 to 1.2% (p = 0.0175). A statistically significant reduction in the likelihood of at least one commission error (retained statements from a standardized report that contradict the dictated findings or impression) occurring in a report was also observed (3.9 to 0.8%; p = 0.0007). Carefully constructed structured reports can help to reduce certain error types in radiology reports.
Differential measurement errors in zero-truncated regression models for count data.
Huang, Yih-Huei; Hwang, Wen-Han; Chen, Fei-Yin
2011-12-01
Measurement errors in covariates may result in biased estimates in regression analysis. Most methods to correct this bias assume nondifferential measurement errors-i.e., that measurement errors are independent of the response variable. However, in regression models for zero-truncated count data, the number of error-prone covariate measurements for a given observational unit can equal its response count, implying a situation of differential measurement errors. To address this challenge, we develop a modified conditional score approach to achieve consistent estimation. The proposed method represents a novel technique, with efficiency gains achieved by augmenting random errors, and performs well in a simulation study. The method is demonstrated in an ecology application. © 2011, The International Biometric Society.
Dynamic modeling of predictive uncertainty by regression on absolute errors
Pianosi, F.; Raso, L.
2012-01-01
Uncertainty of hydrological forecasts represents valuable information for water managers and hydrologists. This explains the popularity of probabilistic models, which provide the entire distribution of the hydrological forecast. Nevertheless, many existing hydrological models are deterministic and
Deffner, Veronika; Küchenhoff, Helmut; Breitner, Susanne; Schneider, Alexandra; Cyrys, Josef; Peters, Annette
2018-03-13
The ultrafine particle measurements in the Augsburger Umweltstudie, a panel study conducted in Augsburg, Germany, exhibit measurement error from various sources. Measurements from mobile devices show classical, possibly individual-specific, measurement error; Berkson-type error, which may also vary individually, occurs if measurements from fixed monitoring stations are used. The combination of fixed-site and individual exposure measurements results in a mixture of the two error types. We extended existing bias analysis approaches to linear mixed models with a complex error structure including individual-specific error components, autocorrelated errors, and a mixture of classical and Berkson error. Theoretical considerations and simulation results show that autocorrelation may severely change the attenuation of the effect estimates. Furthermore, unbalanced designs and the inclusion of confounding variables influence the degree of attenuation. Bias correction with the method of moments using data with mixture measurement error partially yielded better results than using incomplete data with classical error. Confidence intervals (CIs) based on the delta method achieved better coverage probabilities than those based on bootstrap samples. Moreover, we present the application of these new methods to heart rate measurements within the Augsburger Umweltstudie: the corrected effect estimates were slightly higher than their naive equivalents. The substantial measurement error of ultrafine particle measurements has little impact on the results. The developed methodology is generally applicable to longitudinal data with measurement error. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
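The classical-versus-Berkson distinction in the abstract above can be made concrete with a small simulation: classical error in a covariate attenuates the regression slope by the reliability ratio, whereas Berkson error leaves it approximately unbiased. The true slope, noise levels, and sample size below are illustrative assumptions, not values from the study.

```python
# Simulation sketch of attenuation ("regression dilution") under classical
# measurement error, contrasted with Berkson error, in simple regression.
# True slope beta = 2; all settings are toy choices.
import random

random.seed(0)
n, beta, sd_x, sd_u = 20000, 2.0, 1.0, 1.0

def slope(xs, ys):
    """OLS slope of ys on xs."""
    m = len(xs)
    mx, my = sum(xs) / m, sum(ys) / m
    cov = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
    var = sum((a - mx) ** 2 for a in xs)
    return cov / var

# Classical error: regress y on w = x + u; the slope attenuates by the
# reliability ratio sd_x^2 / (sd_x^2 + sd_u^2) = 0.5 here.
x = [random.gauss(0, sd_x) for _ in range(n)]
y = [beta * xi + random.gauss(0, 0.5) for xi in x]
w = [xi + random.gauss(0, sd_u) for xi in x]
slope_classical = slope(w, y)

# Berkson error: the true exposure is x = z + u for an assigned z; the
# slope of y on z remains approximately unbiased.
z = [random.gauss(0, sd_x) for _ in range(n)]
xb = [zi + random.gauss(0, sd_u) for zi in z]
yb = [beta * xi + random.gauss(0, 0.5) for xi in xb]
slope_berkson = slope(z, yb)

print(slope_classical)  # near beta * 0.5 = 1.0
print(slope_berkson)    # near beta = 2.0
```

Autocorrelated errors and individual-specific components, as analysed in the paper, would modify the simple reliability-ratio attenuation shown here.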
SU-E-T-51: Bayesian Network Models for Radiotherapy Error Detection
Energy Technology Data Exchange (ETDEWEB)
Kalet, A; Phillips, M; Gennari, J [University of Washington, Seattle, WA (United States)
2014-06-01
Purpose: To develop a probabilistic model of radiotherapy plans using Bayesian networks that will detect potential errors in radiation delivery. Methods: Semi-structured interviews with medical physicists and other domain experts were employed to generate a set of layered nodes and arcs forming a Bayesian Network (BN) which encapsulates relevant radiotherapy concepts and their associated interdependencies. Concepts in the final network were limited to those whose parameters are represented in the institutional database at a level significant enough to develop mathematical distributions. The concept-relation knowledge base was constructed using the Web Ontology Language (OWL) and translated into Hugin Expert Bayes Network files via the RHugin package in the R statistical programming language. A subset of de-identified data derived from a Mosaiq relational database representing 1937 unique prescription cases was processed and pre-screened for errors and then used by the Hugin implementation of the Expectation-Maximization (EM) algorithm for machine learning of all parameter distributions. Individual networks were generated for each of several commonly treated anatomic regions identified by ICD-9 neoplasm categories including lung, brain, lymphoma, and female breast. Results: The resulting Bayesian networks represent a large part of the probabilistic knowledge inherent in treatment planning. By populating the networks entirely with data captured from a clinical oncology information management system over the course of several years of normal practice, we were able to create accurate probability tables with no additional time spent by experts or clinicians. These probabilistic descriptions of the treatment planning allow one to check if a treatment plan is within the normal scope of practice, given some initial set of clinical evidence, and thereby detect potential outliers to be flagged for further investigation. Conclusion: The networks developed here support the
Johnson, Brian R; Atallah, Joel; Plachetzki, David C
2013-08-28
A composite biological structure, such as an insect head or abdomen, contains many internal structures with distinct functions. Composite structures are often used in RNA-seq studies, though it is unclear how expression of the same gene in the different tissues and structures within a composite affects the measurement (or even the utility) of the resulting patterns of gene expression. Here we determine how complex composite tissue structure affects measures of gene expression using RNA-seq. We focus on two structures in the honey bee (the sting gland and digestive tract) both contained within one larger structure, the whole abdomen. For each of the three structures, we used RNA-seq to identify differentially expressed genes between two developmental stages, nurse bees and foragers. Based on RNA-seq for each structure-specific extraction, we found that RNA-seq with composite structures leads to many false negatives (genes strongly differentially expressed in particular structures which are not found to be differentially expressed within the composite structure). We also found a significant number of genes with one pattern of differential expression in the tissue-specific extraction, and the opposite in the composite extraction, suggesting multiple signals from such genes within the composite structure. We found these patterns for different classes of genes, including transcription factors. Many RNA-seq studies currently use composite extractions, and even whole insect extractions, when tissue- and structure-specific extractions are possible. This is due to the logistical difficulty of micro-dissection and unawareness of the potential errors associated with composite extractions. The present study suggests that RNA-seq studies of composite structures are prone to false negatives and difficult-to-interpret positive signals for genes with variable patterns of local expression. In general, our results suggest that RNA-seq on large composite structures should be avoided
On the asymptotic ergodic capacity of FSO links with generalized pointing error model
Al-Quwaiee, Hessa
2015-09-11
Free-space optical (FSO) communication systems are negatively affected by two physical phenomena, namely, scintillation due to atmospheric turbulence and pointing errors. To quantify the effect of these two factors on FSO system performance, we need an effective mathematical model for them. Scintillations are typically modeled by the log-normal and Gamma-Gamma distributions for weak and strong turbulence conditions, respectively. In this paper, we propose and study a generalized pointing error model based on the Beckmann distribution. We then derive the asymptotic ergodic capacity of FSO systems under the joint impact of turbulence and generalized pointing error impairments. © 2015 IEEE.
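The Beckmann pointing-error model named in the abstract above describes the radial displacement r = sqrt(x² + y²) of the beam, with x and y independent Gaussians that may have nonzero means and unequal variances. A Monte Carlo sketch is below; the parameter values are illustrative assumptions, and the zero-mean equal-variance special case is checked against the known Rayleigh mean.

```python
# Monte Carlo sketch of Beckmann-distributed radial pointing error.
# Parameter values are illustrative, not from the paper.
import math
import random

random.seed(1)

def sample_beckmann(mu_x, mu_y, sd_x, sd_y, n):
    """Radial displacement r = sqrt(x^2 + y^2) with independent Gaussian x, y."""
    return [math.hypot(random.gauss(mu_x, sd_x), random.gauss(mu_y, sd_y))
            for _ in range(n)]

# Special case mu_x = mu_y = 0, sd_x = sd_y = s reduces to a Rayleigh
# distribution with mean s * sqrt(pi / 2).
s, n = 0.3, 200000
r = sample_beckmann(0.0, 0.0, s, s, n)
mean_r = sum(r) / n
print(mean_r, s * math.sqrt(math.pi / 2))  # the two should nearly agree
```

In a capacity study, samples like these would be combined with turbulence fading samples to evaluate the joint impairment numerically.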
ANALYZING NUMERICAL ERRORS IN DOMAIN HEAT TRANSPORT MODELS USING THE CVBEM.
Hromadka, T.V.
1987-01-01
Besides providing an exact solution for steady-state heat conduction processes (Laplace-Poisson equations), the CVBEM (complex variable boundary element method) can be used for the numerical error analysis of domain model solutions. For problems where soil-water phase change latent heat effects dominate the thermal regime, heat transport can be approximately modeled as a time-stepped steady-state condition in the thawed and frozen regions, respectively. The CVBEM provides an exact solution of the two-dimensional steady-state heat transport problem, and also provides the error in matching the prescribed boundary conditions by the development of a modeling error distribution or an approximate boundary generation.
Building a Structural Model: Parameterization and Structurality
Directory of Open Access Journals (Sweden)
Michel Mouchart
2016-04-01
A specific concept of structural model is used as a background for discussing the structurality of its parameterization. Conditions for a structural model to also be causal are examined. Difficulties and pitfalls arising from the parameterization are analyzed. In particular, pitfalls when considering alternative parameterizations of the same model are shown to have led to ungrounded conclusions in the literature. Discussions of observationally equivalent models related to different economic mechanisms are used to make clear the connection between an economically meaningful parameterization and an economically meaningful decomposition of a complex model. The design of economic policy is used for drawing some practical implications of the proposed analysis.
Error Modelling for Multi-Sensor Measurements in Infrastructure-Free Indoor Navigation
Directory of Open Access Journals (Sweden)
Laura Ruotsalainen
2018-02-01
Full Text Available The long-term objective of our research is to develop a method for infrastructure-free simultaneous localization and mapping (SLAM) and context recognition for tactical situational awareness. Localization will be realized by propagating motion measurements obtained using a monocular camera, a foot-mounted Inertial Measurement Unit (IMU), sonar, and a barometer. Due to the size and weight requirements set by tactical applications, Micro-Electro-Mechanical (MEMS) sensors will be used. However, MEMS sensors suffer from biases and drift errors that may substantially decrease the position accuracy. Therefore, sophisticated error modelling and implementation of integration algorithms are key for providing a viable result. Algorithms used for multi-sensor fusion have traditionally been different versions of Kalman filters. However, Kalman filters are based on the assumptions that the state propagation and measurement models are linear with additive Gaussian noise. Neither assumption is correct for tactical applications, especially for dismounted soldiers or rescue personnel. Therefore, error modelling and implementation of advanced fusion algorithms are essential for providing a viable result. Our approach is to use particle filtering (PF), which is a sophisticated option for integrating measurements emerging from pedestrian motion with non-Gaussian error characteristics. This paper discusses the statistical modelling of the measurement errors from inertial sensors and vision-based heading and translation measurements, so that the correct error probability density functions (pdfs) are included in the particle filter implementation. Model fitting is then used to verify the pdfs of the measurement errors. Based on the deduced error models of the measurements, a particle filtering method is developed to fuse all this information, where the weight of each particle is computed from the specific models derived. The performance of the developed method is
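The weight-update step described above can be sketched as follows. This is a minimal 1-D illustration, assuming a heavy-tailed Laplace measurement-error pdf as a stand-in for the deduced error models, and omitting resampling for brevity; all function names and numeric values are hypothetical, not taken from the paper.

```python
import math
import random

random.seed(0)

def laplace_pdf(x, mu=0.0, b=0.5):
    """Heavy-tailed Laplace density; a stand-in for the non-Gaussian
    measurement-error pdfs deduced from the sensor data."""
    return math.exp(-abs(x - mu) / b) / (2.0 * b)

def pf_step(particles, weights, control, measurement, step_noise=0.2):
    """One predict-update cycle of a 1-D position particle filter.
    Each particle's weight is computed from the measurement-error pdf,
    as the abstract describes (resampling omitted for brevity)."""
    new_particles, new_weights = [], []
    for p, w in zip(particles, weights):
        p_new = p + control + random.gauss(0.0, step_noise)  # propagate motion
        w_new = w * laplace_pdf(measurement - p_new)         # non-Gaussian likelihood
        new_particles.append(p_new)
        new_weights.append(w_new)
    total = sum(new_weights)
    new_weights = [w / total for w in new_weights]           # normalise
    return new_particles, new_weights

# toy run: a pedestrian moves +1 m per step; the measured position is noisy
n = 500
particles = [random.gauss(0.0, 1.0) for _ in range(n)]
weights = [1.0 / n] * n
for t in range(1, 6):
    particles, weights = pf_step(particles, weights, control=1.0,
                                 measurement=float(t) + random.gauss(0.0, 0.3))
estimate = sum(p * w for p, w in zip(particles, weights))
```

After five steps the weighted posterior mean tracks the true position (about 5 m here), despite the non-Gaussian likelihood.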
Directory of Open Access Journals (Sweden)
Yun Shi
2014-01-01
Full Text Available Modern observation technology has verified that measurement errors can be proportional to the true values of measurements, as in GPS, VLBI baselines, and LiDAR. Observational models of this type are called multiplicative error models. This paper extends the work of Xu and Shimada, published in 2000, on multiplicative error models to the analytical error analysis of quantities of practical interest and to estimates of the variance of unit weight. We analytically derive the variance-covariance matrices of the three least squares (LS) adjustments, the adjusted measurements, and the corrections of measurements in multiplicative error models. For quality evaluation, we construct five estimators for the variance of unit weight in association with the three LS adjustment methods. Although LiDAR measurements are contaminated with multiplicative random errors, LiDAR-based digital elevation models (DEMs) have been constructed as if the errors were additive. We will simulate a model landslide, which is assumed to be surveyed with LiDAR, and investigate the effect of LiDAR-type multiplicative error measurements on DEM construction and its effect on the estimate of landslide mass volume from the constructed DEM.
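A minimal simulation of the multiplicative error model, with an estimate of the variance of unit weight built from relative residuals; the distance, noise level, and sample size are illustrative, not values from the paper.

```python
import random
import statistics

random.seed(42)

def simulate_multiplicative(true_value, sigma0, n):
    """Measurements whose error is proportional to the true value:
    y_i = x * (1 + e_i), e_i ~ N(0, sigma0^2) -- the multiplicative
    error model discussed in the abstract."""
    return [true_value * (1.0 + random.gauss(0.0, sigma0)) for _ in range(n)]

true_distance = 100.0   # e.g. a LiDAR range in metres (illustrative)
sigma0 = 0.01           # 1% proportional error
y = simulate_multiplicative(true_distance, sigma0, 2000)

x_hat = statistics.mean(y)                     # LS estimate of the distance
# Estimate the standard deviation of unit weight from the *relative*
# residuals, the natural quality measure under multiplicative errors.
rel_residuals = [yi / x_hat - 1.0 for yi in y]
sigma0_hat = statistics.stdev(rel_residuals)
```

Because the error scales with the signal, the relative residuals, not the raw ones, recover the unit-weight noise level.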
Accounting for covariate measurement error in a Cox model analysis of recurrence of depression.
Liu, K; Mazumdar, S; Stone, R A; Dew, M A; Houck, P R; Reynolds, C F
2001-01-01
When a covariate measured with error is used as a predictor in a survival analysis using the Cox model, the parameter estimate is usually biased. In clinical research, covariates measured without error such as treatment procedure or sex are often used in conjunction with a covariate measured with error. In a randomized clinical trial of two types of treatments, we account for the measurement error in the covariate, log-transformed total rapid eye movement (REM) activity counts, in a Cox model analysis of the time to recurrence of major depression in an elderly population. Regression calibration and two variants of a likelihood-based approach are used to account for measurement error. The likelihood-based approach is extended to account for the correlation between replicate measures of the covariate. Using the replicate data decreases the standard error of the parameter estimate for log(total REM) counts while maintaining the bias reduction of the estimate. We conclude that covariate measurement error and the correlation between replicates can affect results in a Cox model analysis and should be accounted for. In the depression data, these methods render comparable results that have less bias than the results when measurement error is ignored.
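The regression calibration idea used above can be sketched in a linear-regression surrogate of the Cox setting (the Cox partial likelihood itself is omitted for brevity). The reliability ratio below uses known variances; in practice it would be estimated from the replicate measurements. All numeric values are illustrative.

```python
import random
import statistics

random.seed(1)

n, k = 4000, 2                  # subjects, replicate measurements per subject
beta_true = 0.5                 # true effect of the covariate
sig_x, sig_u = 1.0, 1.0         # covariate spread and measurement-error spread

x = [random.gauss(0.0, sig_x) for _ in range(n)]   # true covariate, e.g. log(REM)
wbar = [xi + statistics.mean(random.gauss(0.0, sig_u) for _ in range(k))
        for xi in x]                               # mean of k error-prone replicates
outcome = [beta_true * xi + random.gauss(0.0, 0.2) for xi in x]

def slope(u, v):
    """Ordinary least-squares slope of v on u."""
    mu_u, mu_v = statistics.mean(u), statistics.mean(v)
    num = sum((a - mu_u) * (b - mu_v) for a, b in zip(u, v))
    den = sum((a - mu_u) ** 2 for a in u)
    return num / den

naive = slope(wbar, outcome)    # attenuated towards zero by measurement error
# Regression calibration: rescale by the reliability ratio
# lambda = var(X) / (var(X) + var(U)/k); replicates shrink var(U)/k.
lam = sig_x ** 2 / (sig_x ** 2 + sig_u ** 2 / k)
corrected = naive / lam
```

The naive slope is biased towards zero (here by the factor 1/1.5), and using replicate means both reduces that bias and tightens the corrected estimate, mirroring the abstract's conclusion.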
Global tropospheric ozone modeling: Quantifying errors due to grid resolution
Wild, Oliver; Prather, Michael J
2006-01-01
Ozone production in global chemical models is dependent on model resolution because ozone chemistry is inherently nonlinear, the timescales for chemical production are short, and precursors are artificially distributed over the spatial scale of the model grid. In this study we examine the sensitivity of ozone, its precursors, and its production to resolution by running a global chemical transport model at four different resolutions between T21 (5.6° × 5.6°) and T106 (1.1° × 1.1°) and by quant...
Specification test for Markov models with measurement errors.
Kim, Seonjin; Zhao, Zhibiao
2014-09-01
Most existing works on specification testing assume that we have direct observations from the model of interest. We study specification testing for Markov models based on contaminated observations. The evolving model dynamics of the unobservable Markov chain is implicitly coded into the conditional distribution of the observed process. To test whether the underlying Markov chain follows a parametric model, we propose measuring the deviation between nonparametric and parametric estimates of conditional regression functions of the observed process. Specifically, we construct a nonparametric simultaneous confidence band for conditional regression functions and check whether the parametric estimate is contained within the band.
Muhlfeld, Clint C.; Taper, Mark L.; Staples, David F.; Shepard, Bradley B.
2006-01-01
Despite the widespread use of redd counts to monitor trends in salmonid populations, few studies have evaluated the uncertainties in observed counts. We assessed the variability in redd counts for migratory bull trout Salvelinus confluentus among experienced observers in Lion and Goat creeks, which are tributaries to the Swan River, Montana. We documented substantially lower observer variability in bull trout redd counts than did previous studies. Observer counts ranged from 78% to 107% of our best estimates of true redd numbers in Lion Creek and from 90% to 130% of our best estimates in Goat Creek. Observers made both errors of omission and errors of false identification, and we modeled this combination by use of a binomial probability of detection and a Poisson count distribution of false identifications. Redd detection probabilities were high (mean = 83%) and exhibited no significant variation among observers (SD = 8%). We applied this error structure to annual redd counts in the Swan River basin (1982–2004) to correct for observer error and thus derived more accurate estimates of redd numbers and associated confidence intervals. Our results indicate that bias in redd counts can be reduced if experienced observers are used to conduct annual redd counts. Future studies should assess both sources of observer error to increase the validity of using redd counts for inferring true redd numbers in different basins. This information will help fisheries biologists to more precisely monitor population trends, identify recovery and extinction thresholds for conservation and recovery programs, ascertain and predict how management actions influence distribution and abundance, and examine effects of recovery and restoration activities.
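The binomial-detection-plus-Poisson-false-identification error structure can be sketched as below, together with a moment-based correction of the observed counts. The 83% detection probability is the mean quoted above; the false-identification rate and true redd number are hypothetical.

```python
import math
import random

random.seed(7)

def poisson(lam):
    """Knuth's algorithm for a Poisson draw (adequate for small lam)."""
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p <= L:
            return k
        k += 1

def observe_redds(n_true, p_detect=0.83, false_rate=2.0):
    """Observed count = binomial detection of true redds (errors of omission)
    plus Poisson false identifications -- the error structure modeled in the
    study.  p_detect is the 83% mean from the abstract; false_rate is
    illustrative."""
    detected = sum(1 for _ in range(n_true) if random.random() < p_detect)
    return detected + poisson(false_rate)

n_true = 120                                   # hypothetical true redd number
counts = [observe_redds(n_true) for _ in range(500)]
mean_count = sum(counts) / len(counts)
# moment correction: E[count] = p * N + lambda  =>  N_hat = (count - lambda) / p
n_hat = (mean_count - 2.0) / 0.83
```

Averaging over surveys and inverting the error model recovers the true redd number, which is the sense in which corrected counts give "more accurate estimates of redd numbers".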
Impact of sensor and measurement timing errors on model-based insulin sensitivity.
Pretty, Christopher G; Signal, Matthew; Fisk, Liam; Penning, Sophie; Le Compte, Aaron; Shaw, Geoffrey M; Desaive, Thomas; Chase, J Geoffrey
2014-05-01
A model-based insulin sensitivity parameter (SI) is often used in glucose-insulin system models to define the glycaemic response to insulin. As a parameter identified from clinical data, insulin sensitivity can be affected by blood glucose (BG) sensor error and measurement timing error, which can subsequently impact analyses or glycaemic variability during control. This study assessed the impact of both measurement timing and BG sensor errors on identified values of SI and its hour-to-hour variability within a common type of glucose-insulin system model. Retrospective clinical data were used from 270 patients admitted to the Christchurch Hospital ICU between 2005 and 2007 to identify insulin sensitivity profiles. We developed error models for the Abbott Optium Xceed glucometer and measurement timing from clinical data. The effect of these errors on the re-identified insulin sensitivity was investigated by Monte-Carlo analysis. The results of the study show that timing errors in isolation have little clinically significant impact on identified SI level or variability. The clinical impact of changes to SI level induced by combined sensor and timing errors is likely to be significant during glycaemic control. Identified values of SI were mostly (90th percentile) within 29% of the true value when influenced by both sources of error. However, these effects may be overshadowed by physiological factors arising from the critical condition of the patients or other under-modelled or un-modelled dynamics. Thus, glycaemic control protocols that are designed to work with data from glucometers need to be robust to these errors and not be too aggressive in dosing insulin. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
Modeling the probability distribution of positional errors incurred by residential address geocoding
Directory of Open Access Journals (Sweden)
Mazumdar Soumya
2007-01-01
Full Text Available Abstract Background The assignment of a point-level geocode to subjects' residences is an important data assimilation component of many geographic public health studies. Often, these assignments are made by a method known as automated geocoding, which attempts to match each subject's address to an address-ranged street segment georeferenced within a streetline database and then interpolate the position of the address along that segment. Unfortunately, this process results in positional errors. Our study sought to model the probability distribution of positional errors associated with automated geocoding and E911 geocoding. Results Positional errors were determined for 1423 rural addresses in Carroll County, Iowa as the vector difference between each 100%-matched automated geocode and its true location as determined by orthophoto and parcel information. Errors were also determined for 1449 60%-matched geocodes and 2354 E911 geocodes. Huge (>15 km) outliers occurred among the 60%-matched geocoding errors; outliers occurred for the other two types of geocoding errors also, but were much smaller. E911 geocoding was more accurate (median error length = 44 m) than 100%-matched automated geocoding (median error length = 168 m). The empirical distributions of positional errors associated with 100%-matched automated geocoding and E911 geocoding exhibited a distinctive Greek-cross shape and had many other interesting features that could not be fitted adequately by a single bivariate normal or t distribution. However, mixtures of t distributions with two or three components fit the errors very well. Conclusion Mixtures of bivariate t distributions with few components appear to be flexible enough to fit many positional error datasets associated with geocoding, yet parsimonious enough to be feasible for nascent applications of measurement-error methodology to spatial epidemiology.
DEFF Research Database (Denmark)
Ashraf, Bilal; Janss, Luc; Jensen, Just
sample). The GBSeq data can be used directly in genomic models in the form of individual SNP allele-frequency estimates (e.g., reference reads/total reads per polymorphic site per individual), but is subject to measurement error due to the low sequencing depth per individual. Due to technical reasons....... In the current work we show how the correction for measurement error in GBSeq can also be applied in whole genome genomic variance and genomic prediction models. Bayesian whole-genome random regression models are proposed to allow implementation of large-scale SNP-based models with a per-SNP correction...... for measurement error. We show correct retrieval of genomic explained variance, and improved genomic prediction when accounting for the measurement error in GBSeq data...
Using multipollutant models to understand the combined health effects of exposure to multiple pollutants is becoming more common. However, the complex relationships between pollutants and differing degrees of exposure error across pollutants can make health effect estimates from ...
Structural system identification: Structural dynamics model validation
Energy Technology Data Exchange (ETDEWEB)
Red-Horse, J.R.
1997-04-01
Structural system identification is concerned with the development of systematic procedures and tools for developing predictive analytical models based on a physical structure's dynamic response characteristics. It is a multidisciplinary process that involves the ability (1) to define high fidelity physics-based analysis models, (2) to acquire accurate test-derived information for physical specimens using diagnostic experiments, (3) to validate the numerical simulation model by reconciling differences that inevitably exist between the analysis model and the experimental data, and (4) to quantify uncertainties in the final system models and subsequent numerical simulations. The goal of this project was to develop structural system identification techniques and software suitable for both research and production applications in code and model validation.
A novel multitemporal InSAR model for joint estimation of deformation rates and orbital errors
Zhang, Lei
2014-06-01
Orbital errors, typically characterized as long-wavelength artifacts, commonly exist in interferometric synthetic aperture radar (InSAR) imagery as a result of inaccurate determination of the sensor state vector. Orbital errors degrade the precision of multitemporal InSAR products (i.e., ground deformation). Although research on orbital error reduction has been ongoing for nearly two decades and several algorithms for reducing the effect of the errors are already in existence, the errors cannot always be corrected efficiently and reliably. We propose a novel model that is able to jointly estimate deformation rates and orbital errors based on the different spatiotemporal characteristics of the two types of signals. The proposed model is able to isolate a long-wavelength ground motion signal from the orbital error even when the two types of signals exhibit similar spatial patterns. The proposed algorithm is efficient and requires no ground control points. In addition, the method is built upon wrapped phases of interferograms, eliminating the need for phase unwrapping. The performance of the proposed model is validated using both simulated and real data sets. The demo codes of the proposed model are also provided for reference. © 2013 IEEE.
On the Asymptotic Capacity of Dual-Aperture FSO Systems with a Generalized Pointing Error Model
Al-Quwaiee, Hessa
2016-06-28
Free-space optical (FSO) communication systems are negatively affected by two physical phenomena, namely, scintillation due to atmospheric turbulence and pointing errors. To quantify the effect of these two factors on FSO system performance, we need an effective mathematical model for them. In this paper, we propose and study a generalized pointing error model based on the Beckmann distribution. We then derive a generic expression for the asymptotic capacity of FSO systems under the joint impact of turbulence and generalized pointing error impairments. Finally, the asymptotic channel capacity formulas are extended to quantify FSO system performance with selection and switched-and-stay diversity.
Error statistics of hidden Markov model and hidden Boltzmann model results
Directory of Open Access Journals (Sweden)
Newberg Lee A
2009-07-01
Full Text Available Abstract Background Hidden Markov models and hidden Boltzmann models are employed in computational biology and many other scientific fields for a variety of analyses of sequential data. Whether the associated algorithms are used to compute an actual probability or, more generally, an odds ratio or some other score, a frequent requirement is that the error statistics of a given score be known. What is the chance that random data would achieve that score or better? What is the chance that a real signal would achieve a given score threshold? Results Here we present a novel general approach to estimating these false positive and true positive rates that is significantly more efficient than existing general approaches. We validate the technique via an implementation within the HMMER 3.0 package, which scans DNA or protein sequence databases for patterns of interest, using a profile-HMM. Conclusion The new approach is faster than general naïve sampling approaches, and more general than other current approaches. It provides an efficient mechanism by which to estimate error statistics for hidden Markov model and hidden Boltzmann model results.
International Nuclear Information System (INIS)
Chen, Hsin-Chen; Tan, Jun; Dolly, Steven; Kavanaugh, James; Harold Li, H.; Altman, Michael; Gay, Hiram; Thorstad, Wade L.; Mutic, Sasa; Li, Hua; Anastasio, Mark A.; Low, Daniel A.
2015-01-01
Purpose: One of the most critical steps in radiation therapy treatment is accurate tumor and critical organ-at-risk (OAR) contouring. Both manual and automated contouring processes are prone to errors and to a large degree of inter- and intraobserver variability. These are often due to the limitations of imaging techniques in visualizing human anatomy as well as to inherent anatomical variability among individuals. Physicians/physicists have to reverify all the radiation therapy contours of every patient before using them for treatment planning, which is tedious, laborious, and still not an error-free process. In this study, the authors developed a general strategy based on novel geometric attribute distribution (GAD) models to automatically detect radiation therapy OAR contouring errors and facilitate the current clinical workflow. Methods: Considering the radiation therapy structures’ geometric attributes (centroid, volume, and shape), the spatial relationship of neighboring structures, as well as anatomical similarity of individual contours among patients, the authors established GAD models to characterize the interstructural centroid and volume variations, and the intrastructural shape variations of each individual structure. The GAD models are scalable and deformable, and constrained by their respective principal attribute variations calculated from training sets with verified OAR contours. A new iterative weighted GAD model-fitting algorithm was developed for contouring error detection. Receiver operating characteristic (ROC) analysis was employed in a unique way to optimize the model parameters to satisfy clinical requirements. A total of forty-four head-and-neck patient cases, each of which includes nine critical OAR contours, were utilized to demonstrate the proposed strategy. Twenty-nine out of these forty-four patient cases were utilized to train the inter- and intrastructural GAD models. These training data and the remaining fifteen testing data sets
Laurier, Dominique; Rage, Estelle
2018-01-01
Exposure measurement error represents one of the most important sources of uncertainty in epidemiology. When exposure uncertainty is not or only poorly accounted for, it can lead to biased risk estimates and a distortion of the shape of the exposure-response relationship. In occupational cohort studies, the time-dependent nature of exposure and changes in the method of exposure assessment may create complex error structures. When a method of group-level exposure assessment is used, individual worker practices and the imprecision of the instrument used to measure the average exposure for a group of workers may give rise to errors that are shared between workers, within workers or both. In contrast to unshared measurement error, the effects of shared errors remain largely unknown. Moreover, exposure uncertainty and magnitude of exposure are typically highest for the earliest years of exposure. We conduct a simulation study based on exposure data of the French cohort of uranium miners to compare the effects of shared and unshared exposure uncertainty on risk estimation and on the shape of the exposure-response curve in proportional hazards models. Our results indicate that uncertainty components shared within workers cause more bias in risk estimation and a more severe attenuation of the exposure-response relationship than unshared exposure uncertainty or exposure uncertainty shared between individuals. These findings underline the importance of careful characterisation and modeling of exposure uncertainty in observational studies. PMID:29408862
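A sketch of how shared and unshared multiplicative error components might be generated in a simulation of this kind: one lognormal draw is shared across all of a worker's annual exposure records (the within-worker shared component), and an independent draw perturbs each record (the unshared component). The lognormal form and all numeric values are assumptions for illustration, not the cohort's fitted error model.

```python
import random

random.seed(11)

def exposures_with_error(true_exposures, sd_shared=0.3, sd_unshared=0.3):
    """Apply multiplicative lognormal errors to per-worker exposure histories.
    `true_exposures` is a list of workers, each a list of annual exposures.
    One error draw is shared within each worker; another is drawn per record."""
    observed = []
    for worker in true_exposures:
        shared = random.lognormvariate(0.0, sd_shared)      # same for all years
        observed.append([x * shared * random.lognormvariate(0.0, sd_unshared)
                         for x in worker])                  # plus per-record error
    return observed

# hypothetical annual exposures (arbitrary units) for three workers
truth = [[10.0, 8.0, 5.0], [20.0, 15.0], [3.0, 3.0, 2.0, 1.0]]
obs = exposures_with_error(truth)
```

Feeding such perturbed histories into a proportional hazards fit, and comparing against the truth, is the basic design the simulation study describes.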
An Enhanced MEMS Error Modeling Approach Based on Nu-Support Vector Regression
Directory of Open Access Journals (Sweden)
Deepak Bhatt
2012-07-01
Full Text Available Micro Electro Mechanical System (MEMS)-based inertial sensors have made possible the development of a civilian land vehicle navigation system by offering a low-cost solution. However, the accurate modeling of the MEMS sensor errors is one of the most challenging tasks in the design of low-cost navigation systems. These sensors exhibit significant errors such as biases, drift, and noise, which are negligible for higher-grade units. Different conventional techniques utilizing the Gauss-Markov model and the neural network method have previously been used to model the errors. However, the Gauss-Markov model works unsatisfactorily in the case of MEMS units due to the presence of high inherent sensor errors. On the other hand, modeling the random drift utilizing a Neural Network (NN) is time consuming, thereby affecting its real-time implementation. We overcome these existing drawbacks by developing an enhanced Support Vector Machine (SVM)-based error model. Unlike NNs, SVMs do not suffer from local minima or over-fitting problems and deliver a reliable global solution. Experimental results proved that the proposed SVM approach reduced the noise standard deviation by 10–35% for gyroscopes and 61–76% for accelerometers. Further, positional error drifts under static conditions improved by 41% and 80% in comparison to the NN and GM approaches.
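For reference, the conventional first-order Gauss-Markov drift model that the abstract says works unsatisfactorily for MEMS units can be sketched in a few lines. The parameter values are illustrative, and the paper's SVM error model is not reproduced here.

```python
import math
import random

random.seed(3)

def gauss_markov_drift(n_steps, dt=0.01, tau=300.0, sigma=0.05, b0=0.0):
    """First-order Gauss-Markov process, the conventional model for a slowly
    wandering inertial-sensor bias:  b_{k+1} = exp(-dt/tau) * b_k + w_k.
    tau is the correlation time (s); sigma is the stationary bias std."""
    phi = math.exp(-dt / tau)
    q = sigma * math.sqrt(1.0 - phi ** 2)   # discrete-time driving-noise std
    bias, out = b0, []
    for _ in range(n_steps):
        bias = phi * bias + random.gauss(0.0, q)
        out.append(bias)
    return out

# 200 s of simulated gyro bias drift at 100 Hz
drift = gauss_markov_drift(20000)
```

The mismatch between this smooth, exponentially correlated model and the high, irregular errors of real MEMS units is what motivates the data-driven SVM alternative.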
Sensitivity, Error and Uncertainty Quantification: Interfacing Models at Different Scales
International Nuclear Information System (INIS)
Krstic, Predrag S.
2014-01-01
Discussion on the accuracy of AMO data to be used in plasma modeling codes for astrophysics and nuclear fusion applications, including plasma-material interfaces (PMI), involves many orders of magnitude of energy, spatial and temporal scales. Thus, energies run from tens of K to hundreds of millions of K, while temporal and spatial scales go from fs to years and from nm's to m's and more, respectively. The key challenge for theory and simulation in this field is the consistent integration of all processes and scales, i.e. an "integrated AMO science" (IAMO). The principal goal of the IAMO science is to enable accurate studies of interactions of electrons, atoms, molecules, and photons in a many-body environment, including the complex collision physics of plasma-material interfaces, leading to the best decisions and predictions. However, the accuracy requirement for particular data strongly depends on the sensitivity of the respective plasma modeling applications to these data, which stresses the need for immediate sensitivity-analysis feedback from the plasma modeling and material design communities. Thus, data provision to the plasma modeling community is a "two-way road" as far as the accuracy of the data is concerned, requiring close interactions of the AMO and plasma modeling communities.
2012-09-30
atmospheric models and the chaotic growth of initial-condition (IC) error. The aim of our work is to provide new methods that begin to systematically disentangle the model inadequacy signal from the initial condition error signal.
Mars Entry Atmospheric Data System Modeling, Calibration, and Error Analysis
Karlgaard, Christopher D.; VanNorman, John; Siemers, Paul M.; Schoenenberger, Mark; Munk, Michelle M.
2014-01-01
The Mars Science Laboratory (MSL) Entry, Descent, and Landing Instrumentation (MEDLI)/Mars Entry Atmospheric Data System (MEADS) project installed seven pressure ports through the MSL Phenolic Impregnated Carbon Ablator (PICA) heatshield to measure heatshield surface pressures during entry. These measured surface pressures are used to generate estimates of atmospheric quantities based on modeled surface pressure distributions. In particular, the quantities to be estimated from the MEADS pressure measurements include the dynamic pressure, angle of attack, and angle of sideslip. This report describes the calibration of the pressure transducers utilized to reconstruct the atmospheric data and associated uncertainty models, pressure modeling and uncertainty analysis, and system performance results. The results indicate that the MEADS pressure measurement system hardware meets the project requirements.
The importance of time-stepping errors in ocean models
Williams, P. D.
2011-12-01
Many ocean models use leapfrog time stepping. The Robert-Asselin (RA) filter is usually applied after each leapfrog step, to control the computational mode. However, it will be shown in this presentation that the RA filter generates very large amounts of numerical diapycnal mixing. In some ocean models, the numerical diapycnal mixing from the RA filter is as large as the physical diapycnal mixing. This lowers our confidence in the fidelity of the simulations. In addition to the above problem, the RA filter also damps the physical solution and degrades the numerical accuracy. These two concomitant problems occur because the RA filter does not conserve the mean state, averaged over the three time slices on which it operates. The presenter has recently proposed a simple modification to the RA filter, which does conserve the three-time-level mean state. The modified filter has become known as the Robert-Asselin-Williams (RAW) filter. When used in conjunction with the leapfrog scheme, the RAW filter eliminates the numerical damping of the physical solution and increases the amplitude accuracy by two orders, yielding third-order accuracy. The phase accuracy is unaffected and remains second-order. The RAW filter can easily be incorporated into existing models of the ocean, typically via the insertion of just a single line of code. Better simulations are obtained, at almost no additional computational expense. Results will be shown from recent implementations of the RAW filter in various ocean models. For example, in the UK Met Office Hadley Centre ocean model, sea-surface temperature and sea-ice biases in the North Atlantic Ocean are found to be reduced. These improvements are encouraging for the use of the RAW filter in other ocean models.
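The RA and RAW filters differ by essentially a single line, as the abstract notes. The following minimal sketch applies both to leapfrog integration of the oscillation equation du/dt = iωu, whose exact amplitude is conserved; the parameter values are illustrative.

```python
import cmath

def leapfrog_filtered(omega=1.0, dt=0.2, n_steps=300, nu=0.2, alpha=0.53):
    """Leapfrog integration of du/dt = i*omega*u with the Robert-Asselin-
    Williams (RAW) filter.  alpha = 1 recovers the classical RA filter;
    alpha = 0.53 is the commonly quoted RAW choice, which approximately
    conserves the three-time-level mean state."""
    u_prev = 1.0 + 0.0j                      # exact initial condition
    u_curr = cmath.exp(1j * omega * dt)      # exact first step
    for _ in range(n_steps):
        u_next = u_prev + 2.0 * dt * 1j * omega * u_curr   # leapfrog step
        d = 0.5 * nu * (u_prev - 2.0 * u_curr + u_next)    # filter displacement
        u_curr += alpha * d                  # RAW splits the displacement...
        u_next += (alpha - 1.0) * d          # ...the "single line of code" RA lacks
        u_prev, u_curr = u_curr, u_next
    return abs(u_curr)                       # exact amplitude is 1

amp_ra = leapfrog_filtered(alpha=1.0)    # classical RA filter: visibly damped
amp_raw = leapfrog_filtered(alpha=0.53)  # RAW filter: damping largely removed
```

Comparing the two final amplitudes shows the RA filter's spurious damping of the physical solution and the RAW filter's near-conservation, which is the mechanism behind the improvements reported for ocean models.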
Error Modeling and Design Optimization of Parallel Manipulators
DEFF Research Database (Denmark)
Wu, Guanglei
challenges due to their highly nonlinear behaviors, thus, the parameter and performance analysis, especially the accuracy and stiffness, are particularly important. Toward the requirements of robotic technology such as light weight, compactness, high accuracy and low energy consumption, utilizing optimization...... theory and virtual spring approach, a general kinetostatic model of the spherical parallel manipulators is developed and validated with a Finite Element approach. This model is applied to the stiffness analysis of a special spherical parallel manipulator with unlimited rolling motion and the obtained stiffness....
Hickey, Edward J; Nosikova, Yaroslavna; Pham-Hung, Eric; Gritti, Michael; Schwartz, Steven; Caldarone, Christopher A; Redington, Andrew; Van Arsdell, Glen S
2015-02-01
We hypothesized that the National Aeronautics and Space Administration "threat and error" model (which is derived from analyzing >30,000 commercial flights, and explains >90% of crashes) is directly applicable to pediatric cardiac surgery. We implemented a unit-wide performance initiative, whereby every surgical admission constitutes a "flight" and is tracked in real time, with the aim of identifying errors. The first 500 consecutive patients (524 flights) were analyzed, with an emphasis on the relationship between error cycles and permanent harmful outcomes. Among 524 patient flights (risk adjustment for congenital heart surgery category: 1-6; median: 2) 68 (13%) involved residual hemodynamic lesions, 13 (2.5%) permanent end-organ injuries, and 7 deaths (1.3%). Preoperatively, 763 threats were identified in 379 (72%) flights. Only 51% of patient flights (267) were error free. In the remaining 257 flights, 430 errors occurred, most commonly related to proficiency (280; 65%) or judgment (69, 16%). In most flights with errors (173 of 257; 67%), an unintended clinical state resulted, ie, the error was consequential. In 60% of consequential errors (n = 110; 21% of total), subsequent cycles of additional error/unintended states occurred. Cycles, particularly those containing multiple errors, were very significantly associated with permanent harmful end-states, including residual hemodynamic lesions (P < .0001), end-organ injury (P < .0001), and death (P < .0001). Deaths were almost always preceded by cycles (6 of 7; P < .0001). Human error, if not mitigated, often leads to cycles of error and unintended patient states, which are dangerous and precede the majority of harmful outcomes. Efforts to manage threats and error cycles (through crew resource management techniques) are likely to yield large increases in patient safety. Copyright © 2015. Published by Elsevier Inc.
Directory of Open Access Journals (Sweden)
Jianli Li
2014-01-01
Full Text Available The position and orientation system (POS) is a key piece of equipment for airborne remote sensing systems, providing high-precision position, velocity, and attitude information for various imaging payloads. Temperature error is the main error source affecting the precision of POS. The traditional temperature error model is a linear function of a single temperature parameter, which is not sufficient for the higher accuracy requirements of POS. The traditional compensation method based on neural networks suffers from poor repeatability under different temperature conditions. In order to improve the precision and generalization ability of temperature error compensation for POS, a nonlinear multiparameter temperature error modeling and compensation method based on a Bayesian regularization neural network is proposed. The temperature error of POS was analyzed and a nonlinear multiparameter model was established. Bayesian regularization was used as the evaluation criterion, which further optimized the coefficients of the temperature error model. The experimental results show that the proposed method improves temperature environmental adaptability and precision. The developed POS has been successfully applied in an airborne TSMFTIS remote sensing system for the first time, improving the accuracy of the reconstructed spectrum by 47.99%.
Modeling the Error of the Medtronic Paradigm Veo Enlite Glucose Sensor.
Biagi, Lyvia; Ramkissoon, Charrise M; Facchinetti, Andrea; Leal, Yenny; Vehi, Josep
2017-06-12
Continuous glucose monitors (CGMs) are prone to inaccuracy due to time lags, sensor drift, calibration errors, and measurement noise. The aim of this study is to derive the model of the error of the second generation Medtronic Paradigm Veo Enlite (ENL) sensor and compare it with the Dexcom SEVEN PLUS (7P), G4 PLATINUM (G4P), and advanced G4 for Artificial Pancreas studies (G4AP) systems. An enhanced methodology to a previously employed technique was utilized to dissect the sensor error into several components. The dataset used included 37 inpatient sessions in 10 subjects with type 1 diabetes (T1D), in which CGMs were worn in parallel and blood glucose (BG) samples were analyzed every 15 ± 5 min. Calibration error and sensor drift of the ENL sensor were best described by a linear relationship related to the gain and offset. The mean time lag estimated by the model is 9.4 ± 6.5 min. The overall average mean absolute relative difference (MARD) of the ENL sensor was 11.68 ± 5.07%. Calibration error had the highest contribution to total error in the ENL sensor; this was also reported for the 7P, G4P, and G4AP. The model of the ENL sensor error will be useful to test the in silico performance of CGM-based applications, e.g., the artificial pancreas, employing this kind of sensor.
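The kind of sensor error model described (first-order time lag, drifting linear gain/offset calibration, plus white noise) can be simulated directly. The sketch below is illustrative only: it borrows the 9.4 min mean lag from the abstract, but the drift rate, offset, noise level, and glucose trace are invented, and this is not the fitted ENL model.

```python
import numpy as np

rng = np.random.default_rng(1)

dt = 5.0                      # sample period (min)
t = np.arange(0, 12*60, dt)   # 12 h of samples
bg = 120 + 40*np.sin(2*np.pi*t/360)   # synthetic blood glucose (mg/dL)

# First-order lag: interstitial glucose trails blood glucose by tau.
tau = 9.4                     # time lag (min), mean value from abstract
ig = np.empty_like(bg)
ig[0] = bg[0]
alpha = dt/(tau + dt)
for k in range(1, len(bg)):
    ig[k] = ig[k-1] + alpha*(bg[k] - ig[k-1])

# Drifting linear calibration (hypothetical rates) plus white noise.
gain = 1.0 + 1e-4*t
offset = -2.0
cgm = gain*ig + offset + rng.normal(0, 3.0, len(t))

# Mean absolute relative difference vs. reference blood glucose.
mard = 100*np.mean(np.abs(cgm - bg)/bg)
print(f"MARD = {mard:.1f}%")
```

Such a simulator is exactly what "in silico testing of CGM-based applications" requires: candidate controllers see realistic, structured sensor error rather than plain white noise.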
Performances of estimators of linear auto-correlated error model ...
African Journals Online (AJOL)
The performances of five estimators of linear models with autocorrelated disturbance terms are compared when the independent variable is exponential. The results reveal that for both small and large samples, the Ordinary Least Squares (OLS) estimator compares favourably with the Generalized Least Squares (GLS) estimators in ...
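A minimal simulation of the comparison described above, under assumed values (AR(1) disturbances with known ρ, an exponential regressor). OLS is fit directly, while GLS uses the Prais-Winsten transform that whitens AR(1) errors; with ρ known, GLS should show the smaller sampling spread for the slope.

```python
import numpy as np

rng = np.random.default_rng(2)

def ar1_noise(n, rho, sigma, rng):
    # Stationary AR(1) disturbance with innovation sd sigma.
    e = np.empty(n)
    e[0] = rng.normal(0, sigma/np.sqrt(1 - rho**2))
    for k in range(1, n):
        e[k] = rho*e[k-1] + rng.normal(0, sigma)
    return e

n, rho = 50, 0.7
x = np.exp(np.linspace(0, 2, n))            # exponential regressor
X = np.column_stack([np.ones(n), x])
beta_true = np.array([1.0, 0.5])

def ols(X, y):
    return np.linalg.lstsq(X, y, rcond=None)[0]

def gls_ar1(X, y, rho):
    # Prais-Winsten transform whitens AR(1) errors, then OLS applies.
    Xs, ys = X.astype(float).copy(), y.astype(float).copy()
    Xs[0] *= np.sqrt(1 - rho**2); ys[0] *= np.sqrt(1 - rho**2)
    Xs[1:] = X[1:] - rho*X[:-1]; ys[1:] = y[1:] - rho*y[:-1]
    return ols(Xs, ys)

slopes_ols, slopes_gls = [], []
for _ in range(500):
    y = X @ beta_true + ar1_noise(n, rho, 0.5, rng)
    slopes_ols.append(ols(X, y)[1])
    slopes_gls.append(gls_ar1(X, y, rho)[1])

sd_ols, sd_gls = np.std(slopes_ols), np.std(slopes_gls)
print(sd_ols, sd_gls)
```

In practice ρ must itself be estimated, which is one reason feasible-GLS advantages over OLS can shrink, as comparisons of this kind investigate.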
Aligned rank tests for the linear model with heteroscedastic errors
Albers, Willem/Wim; Akritas, Michael G.
1993-01-01
We consider the problem of testing subhypotheses in a heteroscedastic linear regression model. The proposed test statistics are based on the ranks of scaled residuals obtained under the null hypothesis. Any estimator that is √n-consistent under the null hypothesis can be used to form the residuals.
Measurement system and model for simultaneously measuring 6DOF geometric errors.
Zhao, Yuqiong; Zhang, Bin; Feng, Qibo
2017-09-04
A measurement system to simultaneously measure six degree-of-freedom (6DOF) geometric errors is proposed. The measurement method is based on a combination of mono-frequency laser interferometry and laser fiber collimation. A simpler and more integrated optical configuration is designed. To compensate for the measurement errors introduced by error crosstalk, element fabrication error, laser beam drift, and nonparallelism of the two measurement beams, a unified measurement model, which can improve the measurement accuracy, is deduced and established using the ray-tracing method. A numerical simulation using the optical design software Zemax is conducted, and the results verify the correctness of the model. Several experiments are performed to demonstrate the feasibility and effectiveness of the proposed system and measurement model.
Sun, Chuanzhi; Wang, Lei; Tan, Jiubin; Zhao, Bo; Tang, Yangchao
2016-02-01
The paper develops a roundness measurement model that accounts for multiple systematic errors: eccentricity, probe offset, radius of the probe tip head, and tilt error, for roundness measurement of cylindrical components. The effects of these systematic errors and of the component radius on roundness measurement are analysed. The proposed method is built on an instrument with a high-precision rotating spindle. The effectiveness of the proposed method is verified by experiment with a standard cylindrical component measured on a roundness measuring machine. Compared to the traditional limacon measurement model, the accuracy of roundness measurement can be increased by about 2.2 μm using the proposed model for an object with a large radius of around 37 mm. The proposed method can improve the accuracy of roundness measurement and can be used for error separation, calibration, and comparison, especially for cylindrical components with a large radius.
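The core of any such model, separating spindle eccentricity from genuine form error, can be illustrated with a least-squares limaçon fit. This is the textbook baseline the paper improves on, not the paper's full multi-error model, and all dimensions below are invented.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic radial profile of a nominally round part on a rotary
# spindle: radius R, spindle eccentricity (ex, ey), a 3-lobe form
# error, and probe noise. All values (mm) are illustrative.
theta = np.linspace(0, 2*np.pi, 720, endpoint=False)
R, ex, ey = 37.0, 0.015, -0.010
form = 0.002*np.cos(3*theta)              # 3-lobed out-of-roundness
r = (R + ex*np.cos(theta) + ey*np.sin(theta) + form
     + rng.normal(0, 2e-4, theta.size))

# Least-squares limacon fit r ~ R + ex*cos(theta) + ey*sin(theta);
# the residual after removing the fitted terms is the roundness error.
A = np.column_stack([np.ones_like(theta), np.cos(theta), np.sin(theta)])
coef, *_ = np.linalg.lstsq(A, r, rcond=None)
residual = r - A @ coef
roundness = residual.max() - residual.min()   # peak-to-valley (mm)
print(f"fitted eccentricity: ({coef[1]:.4f}, {coef[2]:.4f}) mm")
print(f"roundness error: {roundness*1000:.2f} um")
```

The limaçon approximation itself degrades for large eccentricity-to-radius ratios, which is the regime where the paper's additional systematic-error terms (probe offset, tip radius, tilt) matter.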
Complete Systematic Error Model of SSR for Sensor Registration in ATC Surveillance Networks.
Jarama, Ángel J; López-Araquistain, Jaime; Miguel, Gonzalo de; Besada, Juan A
2017-09-21
In this paper, a complete and rigorous mathematical model for secondary surveillance radar systematic errors (biases) is developed. The model takes into account the physical effects systematically affecting the measurement processes. The azimuth biases are calculated from the physical error of the antenna calibration and the errors of the angle-measurement device. Distance bias is calculated from the delay of the signal produced by the refractive index of the atmosphere, and from clock errors, while the altitude bias is calculated taking into account the atmospheric conditions (pressure and temperature). It is shown, using simulated and real data, that adapting a classical bias estimation process to use the complete parametrized model results in improved accuracy in the bias estimation.
Improved modeling of multivariate measurement errors based on the Wishart distribution.
Wentzell, Peter D; Cleary, Cody S; Kompany-Zareh, M
2017-03-22
The error covariance matrix (ECM) is an important tool for characterizing the errors from multivariate measurements, representing both the variance and covariance in the errors across multiple channels. Such information is useful in understanding and minimizing sources of experimental error and in the selection of optimal data analysis procedures. Experimental ECMs, normally obtained through replication, are inherently noisy, inconvenient to obtain, and offer limited interpretability. Significant advantages can be realized by building a model for the ECM based on established error types. Such models are less noisy, reduce the need for replication, mitigate mathematical complications such as matrix singularity, and provide greater insights. While the fitting of ECM models using least squares has been previously proposed, the present work establishes that fitting based on the Wishart distribution offers a much better approach. Simulation studies show that the Wishart method results in parameter estimates with a smaller variance and also facilitates the statistical testing of alternative models using a parameterized bootstrap method. The new approach is applied to fluorescence emission data to establish the acceptability of various models containing error terms related to offset, multiplicative offset, shot noise, and uniform independent noise. The implications of the number of replicates, as well as single vs. multiple replicate sets, are also described. Copyright © 2016 Elsevier B.V. All rights reserved.
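A toy version of Wishart-based ECM fitting, assuming a simple two-term error model (independent noise plus a fully correlated offset) that is our invention, not the paper's fluorescence model. It exploits the fact that for n replicates the scaled sample covariance follows a Wishart distribution with n − 1 degrees of freedom.

```python
import numpy as np
from scipy.stats import wishart
from scipy.optimize import minimize

# Illustrative error model for a p-channel measurement: independent
# noise (variance a2) plus a fully correlated offset (variance b2):
# Sigma = a2*I + b2*J. Parameter names are ours, not the paper's.
p, n_rep = 8, 25
a2_true, b2_true = 0.5, 0.3
Sigma_true = a2_true*np.eye(p) + b2_true*np.ones((p, p))

# Simulated experimental ECM: (n-1)*S ~ Wishart(n-1, Sigma).
df = n_rep - 1
S = wishart.rvs(df=df, scale=Sigma_true, random_state=4)/df

def nll(log_params):
    # Negative Wishart log-likelihood of the observed (scaled) ECM
    # under the parameterized model; log-params keep variances > 0.
    a2, b2 = np.exp(log_params)
    Sigma = a2*np.eye(p) + b2*np.ones((p, p))
    return -wishart.logpdf(df*S, df=df, scale=Sigma)

res = minimize(nll, x0=np.log([1.0, 1.0]), method="Nelder-Mead")
a2_hat, b2_hat = np.exp(res.x)
print(a2_hat, b2_hat)
```

Least-squares fitting would instead minimize element-wise squared deviations between S and Sigma, ignoring that the sampling noise of covariance elements is itself heteroscedastic and correlated, which is the shortcoming the Wishart likelihood addresses.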
Directory of Open Access Journals (Sweden)
Labudde Dirk
2009-06-01
Background: Many high-throughput studies produce protein-protein interaction networks (PPINs) with many errors and missing information. Even for genome-wide approaches, there is often a low overlap between PPINs produced by different studies. Second-level neighbors, separated by two protein-protein interactions (PPIs), were previously used for predicting protein function and finding complexes in high-error PPINs. We retrieve second-level neighbors in PPINs and complement these with structural domain-domain interactions (SDDIs), representing binding evidence on proteins, forming PPI-SDDI-PPI triangles. Results: We find low overlap between PPINs, SDDIs and known complexes, all well below 10%. We evaluate the overlap of PPI-SDDI-PPI triangles with known complexes from the Munich Information center for Protein Sequences (MIPS). PPI-SDDI-PPI triangles have ~20 times higher overlap with MIPS complexes than using second-level neighbors in PPINs without SDDIs. The biological interpretation for triangles is that a SDDI causes two proteins to be observed with common interaction partners in high-throughput experiments. The relatively few SDDIs overlapping with PPINs are part of highly connected SDDI components, and are more likely to be detected in experimental studies. We demonstrate the utility of PPI-SDDI-PPI triangles by reconstructing myosin-actin processes in the nucleus, cytoplasm, and cytoskeleton, which were not obvious in the original PPIN. Using other complementary datatypes in place of SDDIs to form triangles, such as PubMed co-occurrences or threading information, results in a similar ability to find protein complexes. Conclusion: Given high-error PPINs with missing information, triangles of mixed datatypes are a promising direction for finding protein complexes. Integrating PPINs with SDDIs improves finding complexes. Structural SDDIs partially explain the high functional similarity of second-level neighbors in PPINs. We estimate that
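The triangle construction itself is easy to state in code. A miniature sketch with invented protein names (real PPINs would of course be far larger):

```python
from collections import defaultdict

# Hypothetical miniature networks; protein names are invented.
ppi = {("A", "B"), ("B", "C"), ("C", "D"), ("A", "E")}
sddi = {("A", "C"), ("D", "E")}      # structural domain-domain evidence

# Undirected PPI adjacency.
adj = defaultdict(set)
for u, v in ppi:
    adj[u].add(v)
    adj[v].add(u)

def ppi_sddi_ppi_triangles(sddi, adj):
    # A triangle is an SDDI edge (x, z) plus a common PPI neighbour y:
    # the SDDI "explains" why x and z share an interaction partner.
    out = set()
    for x, z in sddi:
        for y in adj[x] & adj[z]:
            out.add((x, y, z))
    return out

tri = ppi_sddi_ppi_triangles(sddi, adj)
print(tri)   # {('A', 'B', 'C')}
```

Swapping `sddi` for another evidence set (e.g. PubMed co-occurrence pairs) gives the mixed-datatype triangles the conclusion mentions, with no change to the algorithm.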
Identifiability and error minimization of receptor model parameters with PET
International Nuclear Information System (INIS)
Delforge, J.; Syrota, A.; Mazoyer, B.M.
1989-01-01
The identifiability problem and the general framework for experimental design optimization are presented. The methodology is applied to the problem of the receptor-ligand model parameter estimation with dynamic positron emission tomography data. The first attempts to identify the model parameters from data obtained with a single tracer injection led to disappointing numerical results. The possibility of improving parameter estimation using a new experimental design combining an injection of the labelled ligand and an injection of the cold ligand (displacement experiment) has been investigated. However, this second protocol led to two very different numerical solutions and it was necessary to demonstrate which solution was biologically valid. This has been possible by using a third protocol including both a displacement and a co-injection experiment. (authors). 16 refs.; 14 figs
Preston, Dale L.; Sokolnikov, Mikhail; Napier, Bruce A.; Degteva, Marina; Moroz, Brian; Vostrotin, Vadim; Shiskina, Elena; Birchall, Alan; Stram, Daniel O.
2017-01-01
In epidemiological studies, exposures of interest are often measured with uncertainties, which may be independent or correlated. Independent errors can often be characterized relatively easily while correlated measurement errors have shared and hierarchical components that complicate the description of their structure. For some important studies, Monte Carlo dosimetry systems that provide multiple realizations of exposure estimates have been used to represent such complex error structures. While the effects of independent measurement errors on parameter estimation and methods to correct these effects have been studied comprehensively in the epidemiological literature, the literature on the effects of correlated errors, and associated correction methods is much more sparse. In this paper, we implement a novel method that calculates corrected confidence intervals based on the approximate asymptotic distribution of parameter estimates in linear excess relative risk (ERR) models. These models are widely used in survival analysis, particularly in radiation epidemiology. Specifically, for the dose effect estimate of interest (increase in relative risk per unit dose), a mixture distribution consisting of a normal and a lognormal component is applied. This choice of asymptotic approximation guarantees that corrected confidence intervals will always be bounded, a result which does not hold under a normal approximation. A simulation study was conducted to evaluate the proposed method in survival analysis using a realistic ERR model. We used both simulated Monte Carlo dosimetry systems (MCDS) and actual dose histories from the Mayak Worker Dosimetry System 2013, a MCDS for plutonium exposures in the Mayak Worker Cohort. Results show our proposed methods provide much improved coverage probabilities for the dose effect parameter, and noticeable improvements for other model parameters. PMID:28369141
International Nuclear Information System (INIS)
Dixit, P.K.; Vaid, B.A.; Sharma, K.C.
1986-01-01
The structure disorder model, recently proposed to explain the thermodynamic properties near first-order transitions, is generalized to include the pressure-induced transitions in tetrahedrally coordinated tin and A^N B^(8-N) compounds (with N = 2, 3). For Sn the calculated values of the change in thermodynamic quantities during the transition are found to be close to the experimental values. For A^N B^(8-N) compounds, the transition is explained in a satisfactory manner in terms of partial ionic bonds and covalent bonds. The change in compressibility near the transition is found to be in agreement with that obtained from experiments. (author)
Error Analysis of p-Version Discontinuous Galerkin Method for Heat Transfer in Built-up Structures
Kaneko, Hideaki; Bey, Kim S.
2004-01-01
The purpose of this paper is to provide an error analysis for the p-version of the discontinuous Galerkin finite element method for heat transfer in built-up structures. As a special case of the results in this paper, a theoretical error estimate for the numerical experiments recently conducted by James Tomey is obtained.
Relative Error Model Reduction via Time-Weighted Balanced Stochastic Singular Perturbation
DEFF Research Database (Denmark)
Tahavori, Maryamsadat; Shaker, Hamid Reza
2012-01-01
A new mixed method for relative error model reduction of linear time invariant (LTI) systems is proposed in this paper. This order reduction technique is mainly based upon time-weighted balanced stochastic model reduction method and singular perturbation model reduction technique. Compared...
Incorporating Measurement Error from Modeled Air Pollution Exposures into Epidemiological Analyses.
Samoli, Evangelia; Butland, Barbara K
2017-12-01
Outdoor air pollution exposures used in epidemiological studies are commonly predicted from spatiotemporal models incorporating limited measurements, temporal factors, geographic information system variables, and/or satellite data. Measurement error in these exposure estimates leads to imprecise estimation of health effects and their standard errors. We reviewed methods for measurement error correction that have been applied in epidemiological studies that use model-derived air pollution data. We identified seven cohort studies and one panel study that have employed measurement error correction methods. These methods included regression calibration, risk set regression calibration, regression calibration with instrumental variables, the simulation extrapolation approach (SIMEX), and methods under the non-parametric or parametric bootstrap. Corrections resulted in small increases in the absolute magnitude of the health effect estimate and its standard error under most scenarios. Limited application of measurement error correction methods in air pollution studies may be attributed to the absence of exposure validation data and the methodological complexity of the proposed methods. Future epidemiological studies should consider in their design phase the requirements for the measurement error correction method to be applied later, while methodological advances are needed in the multi-pollutant setting.
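Of the correction methods listed, SIMEX is the most self-contained to sketch: add extra noise of known variance λσ²_u to the error-prone exposure, watch the naive estimate degrade as a function of λ, and extrapolate back to λ = −1 (the point of zero measurement error). The example below uses a simple linear model with invented numbers, not any of the reviewed studies' data.

```python
import numpy as np

rng = np.random.default_rng(5)

# Classical measurement error in the exposure; all numbers invented.
n = 2000
x_true = rng.normal(10, 2, n)               # true exposure
sigma_u = 1.5                               # known measurement-error sd
x_obs = x_true + rng.normal(0, sigma_u, n)  # error-prone exposure
y = 0.5*x_true + rng.normal(0, 1, n)        # outcome; true slope is 0.5

def slope(x, y):
    return np.polyfit(x, y, 1)[0]

naive = slope(x_obs, y)                     # attenuated by the error

# Simulation step: add extra error with variance lam*sigma_u^2 and
# average the naive slope over B replicates for each lam.
lams = np.array([0.0, 0.5, 1.0, 1.5, 2.0])
B = 50
mean_slopes = [np.mean([slope(x_obs + rng.normal(0, np.sqrt(lam)*sigma_u, n), y)
                        for _ in range(B)]) for lam in lams]

# Extrapolation step: fit a quadratic in lam, evaluate at lam = -1.
simex = np.polyval(np.polyfit(lams, mean_slopes, 2), -1.0)
print(naive, simex)
```

The quadratic extrapolant is only an approximation to the true attenuation curve, so SIMEX reduces rather than eliminates the bias; that residual bias is one of the trade-offs the reviewed studies weigh against simpler corrections.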
A Temperature Sensor Clustering Method for Thermal Error Modeling of Heavy Milling Machine Tools
Directory of Open Access Journals (Sweden)
Fengchun Li
2017-01-01
A clustering method is an effective way to select the proper temperature sensor locations for thermal error modeling of machine tools. In this paper, a new temperature sensor clustering method is proposed. By analyzing the characteristics of the sensor temperatures in a heavy floor-type milling machine tool, an indicator involving both the Euclidean distance and the correlation coefficient was proposed to reflect the differences between temperature sensors, and the indicator was expressed as a distance matrix to be used for hierarchical clustering. Then, the weight coefficient in the distance matrix and the number of clusters (groups) were optimized by a genetic algorithm (GA), and the fitness function of the GA was rebuilt by establishing the thermal error model at one rotation speed, then deriving its accuracy at two different rotation speeds with a temperature disturbance. Thus, the parameters for clustering, as well as the final selection of the temperature sensors, were derived. Finally, the method proposed in this paper was verified on a machine tool. According to the selected temperature sensors, a thermal error model of the machine tool was established and used to predict the thermal error. The results indicate that the selected temperature sensors can accurately predict thermal error at different rotation speeds, and the proposed temperature sensor clustering method is expected to be applicable to thermal error modeling for other machine tools.
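A minimal sketch of the clustering-indicator idea, assuming (as an illustration, not the paper's exact formula) a dissimilarity that mixes a scaled Euclidean distance with a correlation distance under a weight w. The paper tunes w and the number of clusters with a GA, which is omitted here; the sensor histories are invented.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

rng = np.random.default_rng(6)

# Synthetic temperature histories for 6 sensors: two physical groups
# (spindle-driven and ambient-driven) plus noise. Data are invented.
t = np.linspace(0, 8, 200)                      # hours
spindle = 20 + 10*(1 - np.exp(-t/1.5))
ambient = 20 + 2*np.sin(2*np.pi*t/8)
temps = np.vstack([spindle + rng.normal(0, .2, t.size) for _ in range(3)] +
                  [ambient + rng.normal(0, .2, t.size) for _ in range(3)])

# Combined dissimilarity: weighted sum of an RMS (scaled Euclidean)
# distance and a correlation distance.
w = 0.5                                         # weight (GA-tuned in paper)
n = temps.shape[0]
D = np.zeros((n, n))
for i in range(n):
    for j in range(i + 1, n):
        eucl = np.linalg.norm(temps[i] - temps[j])/np.sqrt(t.size)
        corr = 1 - abs(np.corrcoef(temps[i], temps[j])[0, 1])
        D[i, j] = D[j, i] = w*eucl + (1 - w)*corr

labels = fcluster(linkage(squareform(D), method="average"),
                  t=2, criterion="maxclust")
print(labels)
```

One representative sensor is then chosen per cluster, which removes the collinearity among near-duplicate sensors that otherwise destabilizes the thermal error regression.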
Wind Power Prediction Based on LS-SVM Model with Error Correction
Directory of Open Access Journals (Sweden)
ZHANG, Y.
2017-02-01
As conventional energy sources are non-renewable, the world's major countries are investing heavily in renewable energy research. Wind power represents the development trend of future energy, but the intermittency and volatility of wind energy are the main causes of the poor accuracy of wind power prediction. By analyzing the error level at different time points, it can be found that the errors at adjacent times are often approximately the same, so a least squares support vector machine (LS-SVM) model with error correction is used to predict wind power in this paper. In simulations with wind power data from two wind farms, the proposed method effectively improves the prediction accuracy of wind power, and the error distribution is concentrated with almost no bias. The improved method takes into account the error correction process of the model, which improves the prediction accuracy over the traditional models (RBF, Elman, LS-SVM). Compared with the single LS-SVM prediction model, the mean absolute error of the proposed method decreased by 52 percent. The research work in this paper will be helpful for the reasonable arrangement of dispatching operation plans, the normal operation of wind farms, and the large-scale development and full utilization of renewable energy resources.
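A compact sketch of both ingredients: an LS-SVM regressor, solved via its dual linear system, and an error-correction step that adds the previous time step's observed error to the raw prediction. The "wind power" series and all parameters are invented; the point is only that when adjacent errors are strongly correlated, as the abstract observes, the corrected prediction beats the raw one.

```python
import numpy as np

rng = np.random.default_rng(7)

def lssvm_fit(X, y, gamma=1.0, s2=2.0):
    """LS-SVM regression with an RBF kernel (1-D inputs): solve the
    dual system [[0, 1^T], [1, K + I/gamma]] [b; alpha] = [0; y]."""
    K = np.exp(-(X[:, None] - X[None, :])**2/(2*s2))
    n = len(y)
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = K + np.eye(n)/gamma
    sol = np.linalg.solve(A, np.concatenate([[0.0], y]))
    b, alpha = sol[0], sol[1:]
    return lambda Xq: np.exp(-(Xq[:, None] - X[None, :])**2/(2*s2)) @ alpha + b

# Invented data: power is a smooth function of wind speed plus a
# strongly autocorrelated disturbance, so adjacent prediction errors
# are approximately equal -- the premise of the correction step.
T = 400
speed = rng.uniform(3, 12, T)
e = np.zeros(T)
for k in range(1, T):
    e[k] = 0.95*e[k-1] + rng.normal(0, 0.1)
power = np.tanh((speed - 7)/2) + e

f = lssvm_fit(speed[:200], power[:200])

pred = f(speed[200:])
err = power[200:] - pred
corrected = pred[1:] + err[:-1]            # add previous step's error
mae_raw = np.mean(np.abs(err[1:]))
mae_cor = np.mean(np.abs(power[201:] - corrected))
print(mae_raw, mae_cor)
```

The correction only pays off when the error autocorrelation is high; with white residuals it would add noise instead, which is why the paper's gain (52% lower MAE) depends on the volatility structure of the wind data.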
Measurement Rounding Errors in an Assessment Model of Project Led Engineering Education
Directory of Open Access Journals (Sweden)
Francisco Moreira
2009-11-01
This paper analyzes the rounding errors that occur in the assessment of an interdisciplinary Project-Led Education (PLE) process implemented in the Integrated Master degree on Industrial Management and Engineering (IME) at the University of Minho. PLE is an innovative educational methodology which makes use of active learning, promoting higher levels of motivation and students' autonomy. The assessment model is based on multiple evaluation components with different weights. Each component can be evaluated by several teachers involved in different Project Supporting Courses (PSC). This model can be affected by different types of errors, namely: (1) rounding errors, and (2) non-uniform criteria for rounding the grades. A rigorous analysis of the assessment model was made and the rounding errors involved in each project component were characterized and measured. This resulted in a global maximum error of 0.308 on the individual student project grade, on a 0 to 100 scale. This analysis is intended to improve not only the reliability of the assessment results, but also teachers' awareness of this problem. Recommendations are also made in order to improve the assessment model and reduce the rounding errors as much as possible.
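The worst-case rounding arithmetic can be illustrated in a few lines. The weights and grades below are hypothetical, not the IME model's actual components; note also that Python's round() rounds halves to even.

```python
# Each component grade is rounded to the nearest integer before the
# weighted final grade is computed; weights sum to 1. All numbers are
# hypothetical -- the paper's model has more components and teachers.
weights = [0.40, 0.30, 0.20, 0.10]
raw = [78.5, 66.4, 91.2, 59.5]            # unrounded component grades

exact = sum(w*g for w, g in zip(weights, raw))
# round() uses round-half-to-even: 78.5 -> 78, 59.5 -> 60.
rounded = sum(w*round(g) for w, g in zip(weights, raw))

# Worst-case bound: each rounding perturbs a grade by at most 0.5,
# so the final grade moves by at most 0.5 * sum(weights) = 0.5.
bound = 0.5*sum(weights)
print(exact, rounded, abs(rounded - exact), bound)
```

Rounding later (or not at all) in the aggregation chain shrinks this error, which is the direction the paper's recommendations point.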
Integrated materials–structural models
DEFF Research Database (Denmark)
Stang, Henrik; Geiker, Mette Rica
2008-01-01
Reliable service life models for load carrying structures are significant elements in the evaluation of the performance and sustainability of existing and new structures. Furthermore, reliable service life models are prerequisites for the evaluation of the sustainability of maintenance strategies. The integration of structural modelling and materials concepts will be operational both in identifying important research issues and in answering the 'real' needs of society. Integrated materials-structural models will allow synergy to develop between materials and structural research. On one side the structural modelling should define a framework in which materials research results eventually should fit in, and on the other side the materials research should define needs and capabilities in structural modelling. Integrated materials-structural models of a general nature are almost non-existent in the field of cement based...
A Stable Clock Error Model Using Coupled First and Second Order Gauss-Markov Processes
Carpenter, Russell; Lee, Taesul
2008-01-01
Long data outages may occur in applications of global navigation satellite system technology to orbit determination for missions that spend significant fractions of their orbits above the navigation satellite constellation(s). Current clock error models based on the random walk idealization may not be suitable in these circumstances, since the covariance of the clock errors may become large enough to overflow flight computer arithmetic. A model that is stable, but which approximates the existing models over short time horizons is desirable. A coupled first- and second-order Gauss-Markov process is such a model.
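The stability argument can be checked by propagating variances. For a first-order Gauss-Markov (FOGM) state the variance saturates at qτ/2, while a random-walk state grows linearly without bound, eventually overflowing flight arithmetic; the parameters below are illustrative, not from any mission.

```python
import numpy as np

# Illustrative parameters: 1 s steps, a 1 h FOGM time constant, and
# process-noise PSD q for the clock-bias state.
dt, tau, q = 1.0, 3600.0, 1e-4

phi = np.exp(-dt/tau)                # FOGM state transition over dt
var_fogm = 0.0                       # FOGM state variance
var_rw = 0.0                         # random-walk state variance
for _ in range(200000):              # ~55 hours of propagation
    var_rw += q*dt                                        # unbounded growth
    var_fogm = phi**2*var_fogm + (q*tau/2)*(1 - phi**2)   # saturates

print(var_rw, var_fogm, q*tau/2)     # RW diverges; FOGM -> q*tau/2
```

Over horizons much shorter than τ, phi ≈ 1 and the FOGM variance increment reduces to q·dt, which is how the model approximates the existing random-walk formulation while remaining bounded over long outages.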
Buonaccorsi, John P; Romeo, Giovanni; Thoresen, Magne
2018-03-01
When fitting regression models, measurement error in any of the predictors typically leads to biased coefficients and incorrect inferences. A plethora of methods have been proposed to correct for this. Obtaining standard errors and confidence intervals using the corrected estimators can be challenging and, in addition, there is concern about remaining bias in the corrected estimators. The bootstrap, which is one option to address these problems, has received limited attention in this context. It has usually been employed by simply resampling observations, which, while suitable in some situations, is not always formally justified. In addition, the simple bootstrap does not allow for estimating bias in non-linear models, including logistic regression. Model-based bootstrapping, which can potentially estimate bias in addition to being robust to the original sampling or whether the measurement error variance is constant or not, has received limited attention. However, it faces challenges that are not present in handling regression models with no measurement error. This article develops new methods for model-based bootstrapping when correcting for measurement error in logistic regression with replicate measures. The methodology is illustrated using two examples, and a series of simulations are carried out to assess and compare the simple and model-based bootstrap methods, as well as other standard methods. While not always perfect, the model-based approaches offer some distinct improvements over the other methods. © 2017, The International Biometric Society.
The type I error rate for in vivo Comet assay data when the hierarchical structure is disregarded
DEFF Research Database (Denmark)
Hansen, Merete Kjær; Kulahci, Murat
The hierarchical structure of the data is often disregarded in the analysis, and this imposes considerable impact on the type I error rate. This study aims to demonstrate the implications that result from disregarding the hierarchical structure. Different combinations of the factor levels as they appear in a literature study give type I error rates up to 0.51, and for all combinations the type I error rate is greater than the nominal α of 0.05. Closed-form expressions based on scaled F-distributions using the Welch-Satterthwaite approximation are provided to show how the type I error rate is affected. With this study we hope to motivate researchers to be more precise regarding...
Energy Technology Data Exchange (ETDEWEB)
Morley, Steven Karl [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
2016-07-01
This report reviews existing literature describing forecast accuracy metrics, concentrating on those based on relative errors and percentage errors. We then review how the most common of these metrics, the mean absolute percentage error (MAPE), has been applied in recent radiation belt modeling literature. Finally, we describe metrics based on the ratios of predicted to observed values (the accuracy ratio) that address the drawbacks inherent in using MAPE. Specifically, we define and recommend the median log accuracy ratio as a measure of bias and the median symmetric accuracy as a measure of accuracy.
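The two recommended metrics are easy to implement. For predictions p_i and observations o_i, the log accuracy ratio is ln(p_i/o_i); the median log accuracy ratio measures bias, and the median symmetric accuracy is 100·(exp(median|ln(p_i/o_i)|) − 1), which treats over- and under-prediction by the same factor symmetrically, unlike MAPE.

```python
import numpy as np

def median_log_accuracy_ratio(pred, obs):
    """Median of ln(pred/obs): 0 means unbiased; a factor-of-2 over-
    and under-prediction contribute with equal magnitude."""
    return np.median(np.log(np.asarray(pred)/np.asarray(obs)))

def median_symmetric_accuracy(pred, obs):
    """100*(exp(median|ln(pred/obs)|) - 1): a median unsigned
    percentage error that is symmetric in multiplicative errors."""
    q = np.abs(np.log(np.asarray(pred)/np.asarray(obs)))
    return 100.0*(np.exp(np.median(q)) - 1.0)

obs = np.array([1.0, 2.0, 4.0, 8.0])
pred = np.array([2.0, 1.0, 4.0, 16.0])  # factor-2 misses both ways
mlar = median_log_accuracy_ratio(pred, obs)
mdsa = median_symmetric_accuracy(pred, obs)
print(mlar, mdsa)
```

On this example the median symmetric accuracy is 100% (a median factor-of-2 error), whereas MAPE would score the same factor-of-2 over- and under-predictions differently (100% vs. 50%); both metrics also require strictly positive observations.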
3D CMM strain-gauge triggering probe error characteristics modeling using fuzzy logic
DEFF Research Database (Denmark)
Achiche, Sofiane; Wozniak, A; Fan, Zhun
2008-01-01
The error values of CMMs depend on the probing direction; hence its spatial variation is a key part of the probe inaccuracy. This paper presents genetically-generated fuzzy knowledge bases (FKBs) to model the spatial error characteristics of a CMM module-changing probe. Two automatically generated FKBs based on two optimization paradigms are used for the reconstruction of the direction-dependent probe error w. The angles beta and gamma are used as input variables of the FKBs; they describe the spatial direction of probe triggering. The learning algorithm used to generate the FKBs is a real...
3D CMM Strain-Gauge Triggering Probe Error Characteristics Modeling
DEFF Research Database (Denmark)
Achiche, Sofiane; Wozniak, Adam; Fan, Zhun
2008-01-01
The error values of CMMs depend on the probing direction; hence its spatial variation is a key part of the probe inaccuracy. This paper presents genetically-generated fuzzy knowledge bases (FKBs) to model the spatial error characteristics of a CMM module-changing probe. Two automatically generated FKBs based on two optimization paradigms are used for the reconstruction of the direction-dependent probe error w. The angles β and γ are used as input variables of the FKBs; they describe the spatial direction of probe triggering. The learning algorithm used to generate the FKBs is a real/binary-like...
MODELING OF MANUFACTURING ERRORS FOR PIN-GEAR ELEMENTS OF PLANETARY GEARBOX
Directory of Open Access Journals (Sweden)
Ivan M. Egorov
2014-11-01
Theoretical background for the calculation of k-h-v type cycloid reducers was developed relatively long ago. However, recently the matters of cycloid reducer design have again attracted heightened attention, because such devices are used in many complex engineering systems, particularly in mechatronic and robotic systems. The development of advanced technological capabilities for manufacturing such reducers today makes it possible to realize the essential features of such devices: high efficiency, high gear ratio, kinematic accuracy and smooth motion. An adequate mathematical model makes it possible to adjust the kinematic accuracy of the reducer by rational selection of manufacturing tolerances for its parts, and to automate the design process for cycloid reducers taking various factors, including technological ones, into account. A mathematical model and technique have been developed for modeling the kinematic error of the reducer with account of multiple factors, including manufacturing errors. The errors are considered in a way convenient for prediction of kinematic accuracy early at the manufacturing stage, according to the results of measuring the reducer parts on coordinate measuring machines. In the model, the wheel manufacturing errors are determined by the eccentricity and radius deviation of the pin tooth centers circle, and the deviation between the pin tooth axes positions and the centers circle. The satellite manufacturing errors are determined by the satellite eccentricity deviation and the satellite rim eccentricity. Due to the collinearity, the pin tooth and pin tooth hole diameter errors and the satellite tooth profile errors for a designated contact point are integrated into one deviation. Software implementation of the model makes it possible to estimate the influence of these errors on the satellite rotation angle error and
Error Modeling and Sensitivity Analysis of a Five-Axis Machine Tool
Directory of Open Access Journals (Sweden)
Wenjie Tian
2014-01-01
Geometric error modeling and its sensitivity analysis are carried out in this paper, which is helpful for precision design of machine tools. Screw theory and rigid body kinematics are used to establish the error model of an RRTTT-type five-axis machine tool, which enables the source errors affecting the compensable and uncompensable pose accuracy of the machine tool to be explicitly separated, thereby providing designers and/or field engineers with an informative guideline for the accuracy improvement by suitable measures, that is, component tolerancing in design, manufacturing, and assembly processes, and error compensation. The sensitivity analysis method is proposed, and the sensitivities of compensable and uncompensable pose accuracies are analyzed. The analysis results will be used for the precision design of the machine tool.
Why Is Rainfall Error Analysis Requisite for Data Assimilation and Climate Modeling?
Hou, Arthur Y.; Zhang, Sara Q.
2004-01-01
Given the large temporal and spatial variability of precipitation processes, errors in rainfall observations are difficult to quantify yet crucial to making effective use of rainfall data for improving atmospheric analysis, weather forecasting, and climate modeling. We highlight the need for developing a quantitative understanding of systematic and random errors in precipitation observations by examining explicit examples of how each type of errors can affect forecasts and analyses in global data assimilation. We characterize the error information needed from the precipitation measurement community and how it may be used to improve data usage within the general framework of analysis techniques, as well as accuracy requirements from the perspective of climate modeling and global data assimilation.
Lu, Xinjiang; Liu, Wenbo; Zhou, Chuang; Huang, Minghui
2017-06-13
The least-squares support vector machine (LS-SVM) is a popular data-driven modeling method and has been successfully applied to a wide range of applications. However, it has some disadvantages, including being ineffective at handling non-Gaussian noise as well as being sensitive to outliers. In this paper, a robust LS-SVM method is proposed and is shown to have more reliable performance when modeling a nonlinear system under conditions where Gaussian or non-Gaussian noise is present. The construction of a new objective function allows for a reduction of the mean of the modeling error as well as the minimization of its variance, and it does not constrain the mean of the modeling error to zero. This differs from the traditional LS-SVM, which uses a worst-case scenario approach in order to minimize the modeling error and constrains the mean of the modeling error to zero. In doing so, the proposed method takes the modeling error distribution information into consideration and is thus less conservative and more robust in regards to random noise. A solving method is then developed in order to determine the optimal parameters for the proposed robust LS-SVM. An additional analysis indicates that the proposed LS-SVM gives a smaller weight to a large-error training sample and a larger weight to a small-error training sample, and is thus more robust than the traditional LS-SVM. The effectiveness of the proposed robust LS-SVM is demonstrated using both artificial and real life cases.
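As background for the abstract above: the standard (non-robust) LS-SVM regression that the paper modifies reduces to a single linear solve of the KKT system. A minimal numpy sketch on invented toy data is given below; the paper's robust variant would additionally shape the objective by the modeling-error distribution, which is not reproduced here.

```python
import numpy as np

def lssvm_fit(X, y, gamma=1e3, sigma=1.0):
    """Standard LS-SVM regression: solve the KKT linear system
    [[0, 1^T], [1, K + I/gamma]] [b; alpha] = [0; y]."""
    n = len(y)
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = np.exp(-d2 / (2 * sigma**2))       # RBF kernel matrix
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = K + np.eye(n) / gamma      # ridge-regularized kernel block
    rhs = np.concatenate([[0.0], y])
    sol = np.linalg.solve(A, rhs)
    return sol[0], sol[1:]                 # bias b, dual weights alpha

def lssvm_predict(Xtr, b, alpha, Xte, sigma=1.0):
    d2 = ((Xte[:, None, :] - Xtr[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma**2)) @ alpha + b

# Toy 1-D regression: fit y = sin(x) on 20 points
X = np.linspace(0, 2 * np.pi, 20)[:, None]
y = np.sin(X[:, 0])
b, alpha = lssvm_fit(X, y)
yhat = lssvm_predict(X, b, alpha, X)
```

With a large `gamma` (weak regularization) the fit nearly interpolates the training data; the robust reformulation matters once heavy-tailed noise or outliers are added.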
Directory of Open Access Journals (Sweden)
E. Solazzo
2017-09-01
The work here complements the overview analysis of the modelling systems participating in the third phase of the Air Quality Model Evaluation International Initiative (AQMEII3) by focusing on the performance for hourly surface ozone of two modelling systems, Chimere for Europe and CMAQ for North America. The evaluation strategy outlined over the three phases of the AQMEII activity, aimed at building up a diagnostic methodology for model evaluation, is pursued here, and novel diagnostic methods are proposed. In addition to evaluating the base case simulation, in which all model components are configured in their standard mode, the analysis also makes use of sensitivity simulations in which the models have been applied with altered and/or zeroed lateral boundary conditions, emissions of anthropogenic precursors, and ozone dry deposition. To help understand the causes of model deficiencies, the error components (bias, variance, and covariance) of the base case and of the sensitivity runs are analysed in conjunction with timescale considerations and error modelling using the available error fields of temperature, wind speed, and NOx concentration. The results reveal the effectiveness and diagnostic power of the methods devised (which remains the main scope of this study), allowing detection of the timescales and fields to which the two models are most sensitive. The representation of planetary boundary layer (PBL) dynamics is pivotal to both models. In particular, (i) fluctuations slower than ∼ 1.5 days account for 70–85 % of the mean square error of the full (undecomposed) ozone time series; (ii) a recursive, systematic error with daily periodicity is detected, responsible for 10–20 % of the total quadratic error; (iii) errors in representing the timing of the daily transition between stability regimes in the PBL are responsible for a covariance error as large as 9 ppb (as much as the standard deviation of the network …
Goodness-of-fit test in a multivariate errors-in-variables model $AX=B$
Kukush, Alexander; Tsaregorodtsev, Yaroslav
2016-01-01
We consider a multivariable functional errors-in-variables model $AX\approx B$, where the data matrices $A$ and $B$ are observed with errors and a matrix parameter $X$ is to be estimated. A goodness-of-fit test is constructed based on the total least squares estimator. The proposed test is asymptotically chi-squared under the null hypothesis. The power of the test under local alternatives is discussed.
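The total least squares estimator on which the abstract's test is built has a closed form via the SVD of the compound matrix [A B]; a minimal numpy sketch follows, with illustrative matrix sizes (not taken from the paper):

```python
import numpy as np

def tls(A, B):
    """Total least squares solution of A X ~ B, where both A and B are
    noisy: take the SVD of [A B] and read X off the right singular
    vectors associated with the smallest singular values."""
    n = A.shape[1]
    _, _, Vt = np.linalg.svd(np.hstack([A, B]))
    V = Vt.T
    V12 = V[:n, n:]        # top-right block
    V22 = V[n:, n:]        # bottom-right block (assumed invertible)
    return -V12 @ np.linalg.inv(V22)

# Consistency check: with error-free data, TLS recovers X exactly
rng = np.random.default_rng(0)
A = rng.standard_normal((10, 3))
X_true = rng.standard_normal((3, 2))
B = A @ X_true
X_hat = tls(A, B)
```

Unlike ordinary least squares, which attributes all error to B, this estimator treats A and B symmetrically, which is what the errors-in-variables setting requires.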
Modeling Human Error Mechanism for Soft Control in Advanced Control Rooms (ACRs)
International Nuclear Information System (INIS)
Aljneibi, Hanan Salah Ali; Ha, Jun Su; Kang, Seongkeun; Seong, Poong Hyun
2015-01-01
To achieve the switch from conventional analog-based design to digital design in ACRs, a large number of manual operating controls and switches have to be replaced by a few common multi-function devices, collectively called the soft control system. The soft controls in APR-1400 ACRs are classified into safety-grade and non-safety-grade soft controls, each designed with different and independent input devices in ACRs. Operations using soft controls require operators to perform new tasks that were not necessary with conventional controls, such as navigating computerized displays to monitor plant information and control devices. These computerized displays and soft controls may make operations more convenient, but they may also introduce new types of human error. In this study, the human error mechanism during soft control is studied and modeled for use in analyzing and enhancing human performance (or reducing human errors) during NPP operation. The developed model of the human error mechanism for soft control is based on the following assumptions: a human operator has a certain amount of capacity in cognitive resources, and if the resources required by operating tasks exceed the resources invested by the operator, human error (or poor human performance) is likely to occur (especially 'slips'); good HMI (human-machine interface) design decreases the required resources; an operator's skillfulness decreases the required resources; and high vigilance increases the invested resources. The developed model would contribute to many applications for improving human performance (or reducing human errors), HMI designs, and operators' training programs in ACRs.
Zong, Yali; Hu, Naigang; Duan, Baoyan; Yang, Guigeng; Cao, Hongjun; Xu, Wanye
2016-03-01
Inevitable manufacturing errors and inconsistency between assumed and actual boundary conditions can affect the shape precision and cable tensions of a cable-network antenna, and even result in failure of the structure in service. In this paper, an analytical sensitivity analysis method for the shape precision and cable tensions with respect to the parameters carrying uncertainty was studied. Based on the sensitivity analysis, an optimal design procedure was proposed to alleviate the effects of the parameters that carry uncertainty. The validity of the calculated sensitivities is examined against those computed by a finite difference method. Comparison with a traditional design method shows that the presented design procedure can remarkably reduce the influence of the uncertainties on the antenna performance. Moreover, the results suggest that slender front net cables, thick tension ties, relatively slender boundary cables and a high tension level in particular improve the ability of cable-network antenna structures to resist the effects of the uncertainties on the antenna performance.
Lark, R. M.; Lawley, R. S.; Barron, A. J. M.; Aldiss, D. T.; Ambrose, K.; Cooper, A. H.; Lee, J. R.; Waters, C. N.
2015-06-01
It is generally accepted that geological line work, such as mapped boundaries, are uncertain for various reasons. It is difficult to quantify this uncertainty directly, because the investigation of error in a boundary at a single location may be costly and time consuming, and many such observations are needed to estimate an uncertainty model with confidence. However, it is recognized across many disciplines that experts generally have a tacit model of the uncertainty of information that they produce (interpretations, diagnoses, etc.) and formal methods exist to extract this model in usable form by elicitation. In this paper we report a trial in which uncertainty models for geological boundaries mapped by geologists of the British Geological Survey (BGS) in six geological scenarios were elicited from a group of five experienced BGS geologists. In five cases a consensus distribution was obtained, which reflected both the initial individually elicited distribution and a structured process of group discussion in which individuals revised their opinions. In a sixth case a consensus was not reached. This concerned a boundary between superficial deposits where the geometry of the contact is hard to visualize. The trial showed that the geologists' tacit model of uncertainty in mapped boundaries reflects factors in addition to the cartographic error usually treated by buffering line work or in written guidance on its application. It suggests that further application of elicitation, to scenarios at an appropriate level of generalization, could be useful to provide working error models for the application and interpretation of line work.
Identifying model error in metabolic flux analysis - a generalized least squares approach.
Sokolenko, Stanislav; Quattrociocchi, Marco; Aucoin, Marc G
2016-09-13
The estimation of intracellular flux through traditional metabolic flux analysis (MFA) using an overdetermined system of equations is a well established practice in metabolic engineering. Despite the continued evolution of the methodology since its introduction, there has been little focus on validation and identification of poor model fit outside of identifying "gross measurement error". The growing complexity of metabolic models, which are increasingly generated from genome-level data, has necessitated robust validation that can directly assess model fit. In this work, MFA calculation is framed as a generalized least squares (GLS) problem, highlighting the applicability of the common t-test for model validation. To differentiate between measurement and model error, we simulate ideal flux profiles directly from the model, perturb them with estimated measurement error, and compare their validation to real data. Application of this strategy to an established Chinese Hamster Ovary (CHO) cell model shows how fluxes validated by traditional means may be largely non-significant due to a lack of model fit. With further simulation, we explore how t-test significance relates to calculation error and show that fluxes found to be non-significant have 2-4 fold larger error (if measurement uncertainty is in the 5-10 % range). The proposed validation method goes beyond traditional detection of "gross measurement error" to identify lack of fit between model and data. Although the focus of this work is on t-test validation and traditional MFA, the presented framework is readily applicable to other regression analysis methods and MFA formulations.
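The GLS framing in the abstract amounts to a covariance-weighted least squares solve; a minimal numpy sketch with an invented toy stoichiometry (not the paper's CHO model) is:

```python
import numpy as np

def gls_fluxes(S, m, Sigma):
    """Generalized least squares flux estimate for an overdetermined
    MFA system S v ~ m with measurement covariance Sigma:
    v_hat = (S^T Sigma^-1 S)^-1 S^T Sigma^-1 m."""
    W = np.linalg.inv(Sigma)
    cov_v = np.linalg.inv(S.T @ W @ S)   # covariance of the estimate
    v_hat = cov_v @ S.T @ W @ m
    return v_hat, cov_v

# Toy overdetermined system: 4 measured rates, 2 free fluxes
S = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0],
              [1.0, -1.0]])
v_true = np.array([2.0, 3.0])
Sigma = np.diag([0.1, 0.1, 0.2, 0.2])   # assumed measurement variances
m = S @ v_true                          # error-free measurements
v_hat, cov_v = gls_fluxes(S, m, Sigma)
```

The t-statistic the paper uses for validation would then be each `v_hat[i] / sqrt(cov_v[i, i])`, compared against the appropriate Student-t quantile.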
Error modeling of DEMs from topographic surveys of rivers using fuzzy inference systems
Bangen, Sara; Hensleigh, James; McHugh, Peter; Wheaton, Joseph
2016-02-01
Digital elevation models (DEMs) have become commonplace in the earth sciences as a tool to characterize surface topography and set modeling boundary conditions. All DEMs have a degree of inherent error that is propagated to subsequent models and analyses. While previous research has shown that DEM error is spatially variable, it is often represented as spatially uniform for analytical simplicity. Fuzzy inference systems (FIS) offer a tractable approach for modeling spatially variable DEM error, including flexibility in the number of inputs and calibration of outputs based on survey technique and modeling environment. We compare three FIS error models for DEMs derived from total station (TS) surveys of wadeable streams and test them at 34 sites in the Columbia River basin. The models differ in complexity regarding the number/type of inputs and the degree of site-specific parameterization. A 2-input FIS uses inputs derived from the topographic point cloud (slope, point density). A 4-input FIS adds interpolation error and 3-D point quality. The 5-input FIS adds bed-surface roughness estimates. Both the 4- and 5-input FIS model outputs were parameterized to site-specific values. In the wetted channel we found that (i) the 5-input FIS resulted in a lower mean δz due to including roughness, and (ii) the 4- and 5-input FIS resulted in a higher standard deviation and maximum δz due to the inclusion of site-specific bank heights. All three FIS gave plausible estimates of DEM error, with the two more complicated models offering an improvement in the ability to detect spatially localized areas of DEM uncertainty.
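The flavor of a 2-input FIS of the kind described can be sketched with two rules and Sugeno-style weighted-average defuzzification; the memberships and the output levels (0.05 m, 0.5 m) below are invented for illustration, not the paper's calibrated values:

```python
def fis_dem_error(slope, density):
    """Toy 2-input fuzzy inference for DEM vertical uncertainty.
    Inputs are normalized to [0, 1]; steep slope OR sparse point
    density implies high error, gentle slope AND dense points low."""
    high_slope = slope                      # linear membership functions
    low_density = 1.0 - density
    w_high = max(high_slope, low_density)   # fuzzy OR -> max
    w_low = min(1.0 - high_slope, density)  # fuzzy AND -> min
    ERR_HIGH, ERR_LOW = 0.5, 0.05           # output levels in meters
    return (w_high * ERR_HIGH + w_low * ERR_LOW) / (w_high + w_low)
```

A steep, sparsely surveyed cell then receives a much larger δz estimate than a flat, densely surveyed one, which is the qualitative behavior the calibrated models encode.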
Error Analysis of Satellite Precipitation-Driven Modeling of Flood Events in Complex Alpine Terrain
Directory of Open Access Journals (Sweden)
Yiwen Mei
2016-03-01
The error in satellite precipitation-driven complex terrain flood simulations is characterized in this study for eight different global satellite products and 128 flood events over the Eastern Italian Alps. The flood events are grouped according to two flood types: rain floods and flash floods. The satellite precipitation products and runoff simulations are evaluated based on systematic and random error metrics applied to the matched event pairs and basin-scale event properties (i.e., rainfall and runoff cumulative depth and time series shape). Overall, error characteristics exhibit dependency on the flood type. Generally, the timing of the event precipitation mass center and the dispersion of the time series derived from satellite precipitation exhibit good agreement with the reference, while the cumulative depth is mostly underestimated. The study shows a dampening effect in both systematic and random error components of the satellite-driven hydrograph relative to the satellite-retrieved hyetograph. The systematic error in the shape of the time series shows a significant dampening effect. The random error dampening effect is less pronounced for the flash flood events and the rain flood events with a high runoff coefficient. This event-based analysis of satellite precipitation error propagation in flood modeling sheds light on the application of satellite precipitation in mountain flood hydrology.
A system dynamic simulation model for managing the human error in power tools industries
Jamil, Jastini Mohd; Shaharanee, Izwan Nizal Mohd
2017-10-01
In the modern and competitive world of today, every organization faces situations in which work does not proceed as planned and is delayed when problems occur. Human error is often cited as the culprit. Errors made by employees cause them to spend additional time identifying and checking for the error, which in turn can affect the normal operations of the company as well as the company's reputation. Employees are a key element of the organization in running all of its activities; hence, employee work performance is a crucial factor in organizational success. The purpose of this study is to identify the factors that cause increasing employee errors in the organization, using a system dynamics approach. The target of this study is employees of the Regional Material Field team in the purchasing department of the power tools industry. Questionnaires were distributed to the respondents to obtain their perceptions of the root causes of employee errors in the company. A system dynamics model was developed to simulate the factors behind the increasing employee errors and their impact. The findings of this study showed that the increase in employee errors was generally caused by workload, work capacity, job stress, motivation and employee performance, and that the problem could be alleviated by increasing the number of employees in the organization.
A Phillips curve interpretation of error-correction models of the wage and price dynamics
DEFF Research Database (Denmark)
Harck, Søren H.
2009-01-01
This paper presents a model of employment, distribution and inflation in which a modern error-correction specification of the nominal wage and price dynamics (referring to claims on income by workers and firms) occupies a prominent role. It is brought out, explicitly, how this rather typical error-correction setting, which actually seems to capture the wage and price dynamics of many large-scale econometric models quite well, is fully compatible with the notion of an old-fashioned Phillips curve with finite slope. It is shown how the steady-state impact of various shocks to the model can be profitably …
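A minimal two-step error-correction estimate in the Engle-Granger spirit can illustrate the specification the abstract refers to; the wage/price-like series below are simulated, not the paper's data:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500
x = np.cumsum(rng.standard_normal(n))      # I(1) driving series
u = np.zeros(n)
for t in range(1, n):                      # stationary AR(1) deviation
    u[t] = 0.5 * u[t - 1] + rng.standard_normal()
y = x + u                                  # y cointegrated with x

# Step 1: long-run relation y = beta * x + residual (OLS)
beta = (x @ y) / (x @ x)
ect = y - beta * x                         # error-correction term

# Step 2: short-run dynamics  dy_t = alpha * ect_{t-1} + g * dx_t + e_t
dy, dx, z = np.diff(y), np.diff(x), ect[:-1]
Z = np.column_stack([z, dx])
alpha, g = np.linalg.lstsq(Z, dy, rcond=None)[0]
```

A significantly negative `alpha` is the hallmark of error correction: deviations from the long-run claims relation are gradually worked off, which is what reconciles the specification with a finite-slope Phillips curve.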
Directory of Open Access Journals (Sweden)
M. Ridolfi
2014-12-01
We review the main factors driving the calculation of the tangent height of spaceborne limb measurements: the ray-tracing method, the refractive index model and the assumed atmosphere. We find that commonly used ray-tracing and refraction models are very accurate, at least in the mid-infrared. The factor with the largest effect on the tangent height calculation is the assumed atmosphere. Using a climatological model in place of the real atmosphere may cause tangent height errors up to ± 200 m. Depending on the adopted retrieval scheme, these errors may have a significant impact on the derived profiles.
Grauer, Jared A.; Morelli, Eugene A.
2013-01-01
A nonlinear simulation of the NASA Generic Transport Model was used to investigate the effects of errors in sensor measurements, mass properties, and aircraft geometry on the accuracy of dynamic models identified from flight data. Measurements from a typical system identification maneuver were systematically and progressively deteriorated and then used to estimate stability and control derivatives within a Monte Carlo analysis. Based on the results, recommendations were provided for maximum allowable errors in sensor measurements, mass properties, and aircraft geometry to achieve desired levels of dynamic modeling accuracy. Results using other flight conditions, parameter estimation methods, and a full-scale F-16 nonlinear aircraft simulation were compared with these recommendations.
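The core of such a Monte Carlo study, progressively degrading a measurement and watching the scatter of the estimated parameter grow, can be sketched on a toy linear model (the aircraft simulation itself is of course far richer):

```python
import numpy as np

def slope_scatter(noise_std, trials=200):
    """Monte Carlo scatter of an OLS slope estimate of y = 2 x when
    the y 'sensor' is corrupted by zero-mean Gaussian noise."""
    rng = np.random.default_rng(2)
    x = np.linspace(0.0, 1.0, 20)
    slopes = []
    for _ in range(trials):
        y = 2.0 * x + rng.normal(0.0, noise_std, x.size)
        slopes.append((x @ y) / (x @ x))    # OLS through the origin
    return np.std(slopes)

# Progressively deteriorate the measurement, as in the study design
scatter = [slope_scatter(s) for s in (0.0, 0.1, 0.5)]
```

Plotting parameter scatter against injected error level is exactly how maximum allowable sensor errors for a target modeling accuracy can be read off.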
Execution-Error Modeling and Analysis of the GRAIL Spacecraft Pair
Goodson, Troy D.
2013-01-01
The GRAIL spacecraft, Ebb and Flow (aka GRAIL-A and GRAIL-B), completed their prime mission in June and extended mission in December 2012. The excellent performance of the propulsion and attitude control subsystems contributed significantly to the mission's success. In order to better understand this performance, the Navigation Team has analyzed and refined the execution-error models for delta-v maneuvers. There were enough maneuvers in the prime mission to form the basis of a model update that was used in the extended mission. This paper documents the evolution of the execution-error models along with the analysis and software used.
Using surrogate biomarkers to improve measurement error models in nutritional epidemiology
Keogh, Ruth H; White, Ian R; Rodwell, Sheila A
2013-01-01
Nutritional epidemiology relies largely on self-reported measures of dietary intake, errors in which give biased estimated diet–disease associations. Self-reported measurements come from questionnaires and food records. Unbiased biomarkers are scarce; however, surrogate biomarkers, which are correlated with intake but not unbiased, can also be useful. It is important to quantify and correct for the effects of measurement error on diet–disease associations. Challenges arise because there is no gold standard, and errors in self-reported measurements are correlated with true intake and each other. We describe an extended model for error in questionnaire, food record, and surrogate biomarker measurements. The focus is on estimating the degree of bias in estimated diet–disease associations due to measurement error. In particular, we propose using sensitivity analyses to assess the impact of changes in values of model parameters which are usually assumed fixed. The methods are motivated by and applied to measures of fruit and vegetable intake from questionnaires, 7-day diet diaries, and surrogate biomarker (plasma vitamin C) from over 25000 participants in the Norfolk cohort of the European Prospective Investigation into Cancer and Nutrition. Our results show that the estimated effects of error in self-reported measurements are highly sensitive to model assumptions, resulting in anything from a large attenuation to a small amplification in the diet–disease association. Commonly made assumptions could result in a large overcorrection for the effects of measurement error. Increased understanding of relationships between potential surrogate biomarkers and true dietary intake is essential for obtaining good estimates of the effects of measurement error in self-reported measurements on observed diet–disease associations. Copyright © 2013 John Wiley & Sons, Ltd. PMID:23553407
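The attenuation the abstract refers to has a simple classical-measurement-error benchmark: regressing an outcome on a noisy intake measure shrinks the slope by the factor λ = var(T)/(var(T)+var(E)). A simulated sketch (all numbers invented):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 100_000
true_intake = rng.normal(0.0, 1.0, n)              # var(T) = 1
reported = true_intake + rng.normal(0.0, 1.0, n)   # classical error, var(E) = 1
outcome = 1.0 * true_intake + rng.normal(0.0, 0.5, n)

# OLS slope of outcome on the error-prone reported intake
slope = np.cov(reported, outcome)[0, 1] / np.var(reported)
lam = 1.0 / (1.0 + 1.0)   # theoretical attenuation factor, 0.5
```

The estimated diet-outcome slope lands near λ times the true effect; the paper's point is that once errors are correlated with true intake and with each other, the bias can go in either direction, which is why the sensitivity analyses matter.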
Murray, J. R.
2017-12-01
Earth surface displacements measured at Global Navigation Satellite System (GNSS) sites record crustal deformation due, for example, to slip on faults underground. A primary objective in designing geodetic networks to study crustal deformation is to maximize the ability to recover parameters of interest like fault slip. Given Green's functions (GFs) relating observed displacement to motion on buried dislocations representing a fault, one can use various methods to estimate spatially variable slip. However, assumptions embodied in the GFs, e.g., use of a simplified elastic structure, introduce spatially correlated model prediction errors (MPE) not reflected in measurement uncertainties (Duputel et al., 2014). In theory, selection algorithms should incorporate inter-site correlations to identify measurement locations that give unique information. I assess the impact of MPE on site selection by expanding existing methods (Klein et al., 2017; Reeves and Zhe, 1999) to incorporate this effect. Reeves and Zhe's algorithm sequentially adds or removes a predetermined number of data according to a criterion that minimizes the sum of squared errors (SSE) on parameter estimates. Adapting this method to GNSS network design, Klein et al. select new sites that maximize model resolution, using trade-off curves to determine when additional resolution gain is small. Their analysis uses uncorrelated data errors and GFs for a uniform elastic half space. I compare results using GFs for spatially variable strike slip on a discretized dislocation in a uniform elastic half space, a layered elastic half space, and a layered half space with inclusion of MPE. I define an objective criterion to terminate the algorithm once the next site removal would increase SSE more than the expected incremental SSE increase if all sites had equal impact. Using a grid of candidate sites with 8 km spacing, I find the relative value of the selected sites (defined by the percent increase in SSE that further
Finding of Correction Factor and Dimensional Error in Bio-AM Model by FDM Technique
Manmadhachary, Aiamunoori; Ravi Kumar, Yennam; Krishnanand, Lanka
2016-06-01
Additive Manufacturing (AM) is a rapid manufacturing process in which input data can be provided from various sources such as 3-Dimensional (3D) Computer Aided Design (CAD), Computed Tomography (CT), Magnetic Resonance Imaging (MRI) and 3D scanner data. From CT/MRI data, Biomedical Additive Manufacturing (Bio-AM) models can be manufactured. The Bio-AM model gives a better lead on preplanning of oral and maxillofacial surgery; however, manufacturing an accurate Bio-AM model is one of the unsolved problems. The current paper quantifies the error between the Standard Triangle Language (STL) model and the Bio-AM model of a dry mandible, and finds a correction factor for Bio-AM models produced with the Fused Deposition Modelling (FDM) technique. In the present work, dry mandible CT images are acquired by a CT scanner and converted into a 3D CAD model in the form of an STL model. The data are then sent to an FDM machine for fabrication of the Bio-AM model. The difference between the Bio-AM and STL model dimensions is considered the dimensional error, and the ratio of STL to Bio-AM model dimensions is considered the correction factor. This correction factor helps to fabricate the AM model with accurate dimensions of the patient anatomy. Such dimensionally true Bio-AM models increase the safety and accuracy in pre-planning of oral and maxillofacial surgery. The correction factor for the Dimension SST 768 FDM AM machine is 1.003 and the dimensional error is limited to 0.3 %.
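The correction-factor arithmetic in the abstract reduces to two ratios; the measurement pair below is illustrative, chosen to reproduce the reported 1.003 factor and 0.3 % error:

```python
def correction_factor(stl_dim, am_dim):
    """Ratio of the STL (design) dimension to the fabricated Bio-AM
    dimension; scaling the CAD model by this factor before printing
    offsets the systematic FDM undersizing."""
    return stl_dim / am_dim

def dimensional_error_pct(stl_dim, am_dim):
    return abs(am_dim - stl_dim) / stl_dim * 100.0

stl, printed = 100.0, 99.7   # mm, hypothetical measurement pair
factor = correction_factor(stl, printed)
err = dimensional_error_pct(stl, printed)
```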
Evaluation of parametric models by the prediction error in colorectal cancer survival analysis.
Baghestani, Ahmad Reza; Gohari, Mahmood Reza; Orooji, Arezoo; Pourhoseingholi, Mohamad Amin; Zali, Mohammad Reza
2015-01-01
The aim of this study is to determine the factors influencing predicted survival time for patients with colorectal cancer (CRC) using parametric models, and to select the best model by the prediction-error technique. Survival models are statistical techniques for estimating or predicting the time up to specific events. Prediction is important in medical science, and the accuracy of prediction is determined by a measurement, generally based on loss functions, called the prediction error. A total of 600 colorectal cancer patients admitted to the Cancer Registry Center of the Gastroenterology and Liver Disease Research Center, Taleghani Hospital, Tehran, were followed for at least 5 years and had complete information for this study. Body Mass Index (BMI), sex, family history of CRC, tumor site, stage of disease and histology of tumor were included in the analysis. Survival times were compared by the log-rank test, and multivariate analysis was carried out using parametric models including log-normal, Weibull and log-logistic regression. For selecting the best model, the prediction error by apparent loss was used. The log-rank test showed better survival for females, BMI over 25, patients with early-stage disease at diagnosis and patients with colon tumor site. The prediction error by apparent loss was estimated and indicated that the Weibull model was the best one for multivariate analysis; according to the Weibull model, BMI and stage were independent prognostic factors. In this study, Weibull regression thus showed the better fit according to prediction error, and prediction error can serve as a criterion for selecting the best model for predicting prognostic factors in survival analysis.
PRODUCT STRUCTURE DIGITAL MODEL
Directory of Open Access Journals (Sweden)
V.M. Sineglazov
2005-02-01
Research results on the representation of product structure by means of the CADDS5 computer-aided design (CAD) system, the Optegra Product Data Management (PDM) system and the Windchill Product Life Cycle Management (PLM) system are examined in this work. An analysis of structure component development and its storage in the various systems is carried out. Algorithms of structure transformation required for correct representation of the structure are considered. A management analysis of the electronic mockup presentation of the product structure is carried out for the Windchill system.
Rater Stringency Error in Performance Rating: A Contrast of Three Models.
Cason, Gerald J.; Cason, Carolyn L.
The use of three remedies for errors in the measurement of ability that arise from differences in rater stringency is discussed. Models contrasted are: (1) Conventional; (2) Handicap; and (3) deterministic Rater Response Theory (RRT). General model requirements, power, bias of measures, computing cost, and complexity are contrasted. Contrasts are…
Thermal Error Modeling of a Machine Tool Using Data Mining Scheme
Wang, Kun-Chieh; Tseng, Pai-Chang
In this paper the knowledge discovery technique is used to build an effective and transparent mathematical thermal error model for machine tools. Our proposed thermal error modeling methodology (called KRL) integrates the schemes of K-means theory (KM), rough-set theory (RS), and linear regression model (LR). First, to explore the machine tool's thermal behavior, an integrated system is designed to simultaneously measure the temperature ascents at selected characteristic points and the thermal deformations at spindle nose under suitable real machining conditions. Second, the obtained data are classified by the KM method, further reduced by the RS scheme, and a linear thermal error model is established by the LR technique. To evaluate the performance of our proposed model, an adaptive neural fuzzy inference system (ANFIS) thermal error model is introduced for comparison. Finally, a verification experiment is carried out and results reveal that the proposed KRL model is effective in predicting thermal behavior in machine tools. Our proposed KRL model is transparent, easily understood by users, and can be easily programmed or modified for different machining conditions.
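Two of the pipeline's three stages, grouping correlated temperature sensors and regressing the thermal drift on one representative per group, can be caricatured as follows; the rough-set reduction step is omitted and all data are synthetic:

```python
import numpy as np

rng = np.random.default_rng(4)
t = np.linspace(0.0, 1.0, 50)
base_a, base_b = 10.0 * t, 5.0 * np.sin(3.0 * t)   # two thermal regimes
# Six sensors: 0-2 follow regime A, 3-5 follow regime B (tiny noise)
sensors = np.array([base_a + 0.001 * rng.standard_normal(50) for _ in range(3)]
                   + [base_b + 0.001 * rng.standard_normal(50) for _ in range(3)])

# KM stage: naive 2-means over the sensor time series, seeded deterministically
centroids = sensors[[0, 3]].copy()
for _ in range(10):
    d = ((sensors[:, None, :] - centroids[None, :, :]) ** 2).sum(-1)
    labels = d.argmin(1)
    centroids = np.array([sensors[labels == k].mean(0) for k in (0, 1)])

# Keep one representative sensor per cluster (the one nearest its centroid)
reps = [np.flatnonzero(labels == k)[d[labels == k, k].argmin()] for k in (0, 1)]

# LR stage: spindle thermal drift as a linear function of the representatives
drift = 2.0 * base_a + 3.0 * base_b + 0.5          # synthetic spindle drift
Z = np.column_stack([sensors[reps].T, np.ones(50)])
coef, *_ = np.linalg.lstsq(Z, drift, rcond=None)
residual = drift - Z @ coef
```

The point of the grouping stage is the same as in the paper: redundant, strongly correlated sensors are collapsed before regression, which keeps the final linear model small and interpretable.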
Making the error-controlling algorithm of observable operator models constructive.
Zhao, Ming-Jie; Jaeger, Herbert; Thon, Michael
2009-12-01
Observable operator models (OOMs) are a class of models for stochastic processes that properly subsumes the class that can be modeled by finite-dimensional hidden Markov models (HMMs). One of the main advantages of OOMs over HMMs is that they admit asymptotically correct learning algorithms. A series of learning algorithms has been developed, with increasing computational and statistical efficiency, whose recent culmination was the error-controlling (EC) algorithm developed by the first author. The EC algorithm is an iterative, asymptotically correct algorithm that yields (and minimizes) an assured upper bound on the modeling error. The run time is faster by at least one order of magnitude than EM-based HMM learning algorithms and yields significantly more accurate models than the latter. Here we present a significant improvement of the EC algorithm: the constructive error-controlling (CEC) algorithm. CEC inherits from EC the main idea of minimizing an upper bound on the modeling error but is constructive where EC needs iterations. As a consequence, we obtain further gains in learning speed without loss in modeling accuracy.
A Hierarchical Bayes Error Correction Model to Explain Dynamic Effects of Price Changes
D. Fok (Dennis); R. Paap (Richard); C. Horváth (Csilla); Ph.H.B.F. Franses (Philip Hans)
2005-01-01
The authors put forward a sales response model to explain the differences in immediate and dynamic effects of promotional prices and regular prices on sales. The model consists of a vector autoregression rewritten in error-correction format, which allows disentangling the immediate …
Covariate Measurement Error Adjustment for Multilevel Models with Application to Educational Data
Battauz, Michela; Bellio, Ruggero; Gori, Enrico
2011-01-01
This article proposes a multilevel model for the assessment of school effectiveness where the intake achievement is a predictor and the response variable is the achievement in the subsequent periods. The achievement is a latent variable that can be estimated on the basis of an item response theory model and hence subject to measurement error.…
Compliance Modeling and Error Compensation of a 3-Parallelogram Lightweight Robotic Arm
DEFF Research Database (Denmark)
Wu, Guanglei; Guo, Sheng; Bai, Shaoping
2015-01-01
This paper presents compliance modeling and error compensation for lightweight robotic arms built with parallelogram linkages, i.e., Π joints. The Cartesian stiffness matrix is derived using the virtual joint method. Based on the developed stiffness model, a method to compensate the compliance...
Locatelli, Robin; Bousquet, Philippe; Chevallier, Frédéric
2013-04-01
Since the nineties, inverse modelling by assimilating atmospheric measurements into a chemical transport model (CTM) has been used to derive sources and sinks of atmospheric trace gases. More recently, the high global warming potential of methane (CH4) and unexplained variations of its atmospheric mixing ratio caught the attention of several research groups. Indeed, the diversity and the variability of methane sources induce high uncertainty on the present and the future evolution of the CH4 budget. With the increase of available measurement data to constrain inversions (satellite data, high frequency surface and tall tower observations, FTIR spectrometry,...), the main limiting factor is about to become the representation of atmospheric transport in CTMs. Indeed, errors in transport modelling directly convert into flux changes when assuming perfect transport in atmospheric inversions. Hence, we propose an inter-model comparison in order to quantify the impact of transport and modelling errors on the CH4 fluxes estimated in a variational inversion framework. Several inversion experiments are conducted using the same set-up (prior emissions, measurement and prior errors, OH field, initial conditions) of the variational system PYVAR, developed at LSCE (Laboratoire des Sciences du Climat et de l'Environnement, France). Nine different models (ACTM, IFS, IMPACT, IMPACT1x1, MOZART, PCTM, TM5, TM51x1 and TOMCAT) used in the TRANSCOM-CH4 experiment (Patra et al., 2011) provide synthetic measurement data at up to 280 surface sites to constrain the inversions performed using the PYVAR system. Only the CTM (and the meteorological drivers which drive it) used to create the pseudo-observations varies among inversions. Consequently, the comparison of the nine inverted methane fluxes obtained for 2005 gives a good estimate of the order of magnitude of the impact of transport and modelling errors on the estimated fluxes with current and future networks. It is shown that transport and modelling errors…
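The mechanism the abstract describes, transport error aliasing directly into flux estimates, can be illustrated with a tiny variational inversion. The dimensions, fluxes, and the 10% transport bias below are invented for illustration; the analytic minimiser of the usual cost function J(x) = (x-xb)'B⁻¹(x-xb) + (y-Hx)'R⁻¹(y-Hx) is standard.

```python
import numpy as np

rng = np.random.default_rng(0)
n_flux, n_obs = 3, 50
x_true = np.array([10.0, 20.0, 15.0])        # "true" regional fluxes (toy units)
H = rng.uniform(0.1, 1.0, (n_obs, n_flux))   # true transport operator
y = H @ x_true                               # noise-free synthetic observations
x_b = np.array([12.0, 18.0, 14.0])           # prior fluxes
B = np.eye(n_flux) * 25.0                    # prior error covariance
R = np.eye(n_obs) * 0.01                     # observation error covariance

def invert(Hm):
    """Analytic minimiser of the variational cost function with model Hm."""
    K = B @ Hm.T @ np.linalg.inv(Hm @ B @ Hm.T + R)
    return x_b + K @ (y - Hm @ x_b)

x_perfect = invert(H)         # same transport used for obs and inversion
x_biased  = invert(1.1 * H)   # 10% transport-model error

print(np.round(x_perfect, 2))  # close to x_true
print(np.round(x_biased, 2))   # transport error aliases into the flux estimate
```

With perfect transport the fluxes are recovered almost exactly; with the biased operator the inversion systematically compensates by deflating the fluxes, exactly the effect the inter-model comparison quantifies.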
Structural dynamic modifications via models
Indian Academy of Sciences (India)
of structural dynamic optimization techniques. A review of structural optimization in vibratory environments is given by Rao (1989). 2. SDM techniques. SDM methods may be broadly divided into two groups. Those which employ a model of the structure and those that use dynamic test data directly. The model used by the ...
Modeling and Simulation on Errors of Feed Unit by Considering Change of Force Bearing Point
Directory of Open Access Journals (Sweden)
Zhang Wei
2017-01-01
The linear feed unit is a type of precision linear motion component that is widely used in computer numerical control (CNC) machine tools. The contact stiffness and error influence the performance of the feed unit directly. Thus, investigating contact stiffness and error is important in optimizing the design and improving the performance of the linear feed unit. In this study, the contact mechanics and the deformation of the roller between the roller and rail are analyzed. Calculation models of the contact stiffness and error, based on Hertz theory and multi-body kinematics, are established, and the change of the contact angle is also considered. The errors and contact stiffness curves in five directions, and the changes in the slope of the stiffness curve after the load increases beyond a certain size, are obtained. The motion precision errors of the roller linear feed unit are analyzed. The effectiveness of the proposed models of contact stiffness and error is verified through simulation on a specialized test system of the linear feed unit.
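The load-dependent stiffness that Hertz theory predicts can be shown with the simplest case, a sphere on a flat. A roller-rail pair is really a line contact with different exponents, so this is only an illustration of why contact stiffness changes slope as the load grows; the steel properties are assumed, not taken from the paper.

```python
def hertz_point_contact(F, R, E1=210e9, nu1=0.3, E2=210e9, nu2=0.3):
    """Hertzian point contact (sphere on flat): deflection and stiffness at load F.
    delta = (9 F^2 / (16 R E*^2))^(1/3); since F = C delta^(3/2), dF/ddelta = 1.5 F/delta.
    Steel properties are assumed defaults; a roller guide is a line contact, so this
    is only the simplest illustration of load-dependent stiffness."""
    Es = 1.0 / ((1 - nu1**2) / E1 + (1 - nu2**2) / E2)   # effective modulus E*
    delta = (9.0 * F**2 / (16.0 * R * Es**2)) ** (1.0 / 3.0)
    k = 1.5 * F / delta
    return delta, k

d1, k1 = hertz_point_contact(100.0, 5e-3)    # 100 N on a 5 mm radius ball
d2, k2 = hertz_point_contact(400.0, 5e-3)
print(f"{k2 / k1:.2f}")   # stiffness grows as F^(1/3): ratio = 4^(1/3) ~ 1.59
```

The nonlinearity k ∝ F^(1/3) is why the stiffness curves in such models change slope once the preload or external load passes a certain size.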
Asavaskulkiet, Krissada
2014-01-01
This paper proposes a novel face super-resolution reconstruction (hallucination) technique for the YCbCr color space. The underlying idea is to learn an error regression model together with multi-linear principal component analysis (MPCA). In the hallucination framework, many color face images are represented in YCbCr space. To reduce the time complexity of color face hallucination, the color face images can be naturally described as tensors or multi-linear arrays. In addition, error regression analysis is used to obtain an error estimate from the existing LR image in tensor space. The learning process derives, via MPCA, the reconstruction errors on the training dataset and then finds the relationship between input and error by regression analysis. The hallucination process uses the standard MPCA back-projection method, after which the result is corrected with the error estimate. We show that our hallucination technique is suitable for color face images in both RGB and YCbCr space. By using the MPCA subspace with the error regression model, we can generate photorealistic color face images. Our approach is demonstrated by extensive experiments with high-quality hallucinated color faces. Comparison with existing algorithms shows the effectiveness of the proposed method.
International Nuclear Information System (INIS)
Reer, B.; Mertens, J.
1996-05-01
Actions and errors by the operating personnel, which are of significance for the safety of a technical system, are classified according to various criteria. Each type of action thus identified is roughly discussed with respect to its quantifiability by state-of-the-art human reliability analysis (HRA) within a probabilistic safety assessment (PSA). Thereby, the principal limits of quantifying human actions are discussed with special emphasis on data quality and cognitive error modelling. In this connection, the basic procedure for a HRA is briefly described under realistic conditions. With respect to the quantitative part of a HRA - the determination of error probabilities - an evaluating description of the standard method THERP (Technique for Human Error Rate Prediction) is given using eight evaluation criteria. Furthermore, six new developments (EdF's PHRA, HCR, HCR/ORE, SLIM, HEART, INTENT) are briefly described and roughly evaluated. The report concludes with a catalogue of requirements for HRA methods. (orig.) [de]
Steger, Stefan; Brenning, Alexander; Bell, Rainer; Glade, Thomas
2016-12-01
There is unanimous agreement that a precise spatial representation of past landslide occurrences is a prerequisite to produce high quality statistical landslide susceptibility models. Even though perfectly accurate landslide inventories rarely exist, investigations of how landslide inventory-based errors propagate into subsequent statistical landslide susceptibility models are scarce. The main objective of this research was to systematically examine whether and how inventory-based positional inaccuracies of different magnitudes influence modelled relationships, validation results, variable importance and the visual appearance of landslide susceptibility maps. The study was conducted for a landslide-prone site located in the districts of Amstetten and Waidhofen an der Ybbs, eastern Austria, where an earth-slide point inventory was available. The methodological approach comprised an artificial introduction of inventory-based positional errors into the present landslide data set and an in-depth evaluation of subsequent modelling results. Positional errors were introduced by artificially changing the original landslide position by a mean distance of 5, 10, 20, 50 and 120 m. The resulting differently precise response variables were separately used to train logistic regression models. Odds ratios of predictor variables provided insights into modelled relationships. Cross-validation and spatial cross-validation enabled an assessment of predictive performances and permutation-based variable importance. All analyses were additionally carried out with synthetically generated data sets to further verify the findings under rather controlled conditions. The results revealed that an increasing positional inventory-based error was generally related to increasing distortions of modelling and validation results. However, the findings also highlighted that interdependencies between inventory-based spatial inaccuracies and statistical landslide susceptibility models are complex.…
Accuracy of devices for self-monitoring of blood glucose: A stochastic error model.
Vettoretti, M; Facchinetti, A; Sparacino, G; Cobelli, C
2015-01-01
Self-monitoring of blood glucose (SMBG) devices are portable systems that allow measuring glucose concentration in a small drop of blood obtained via finger-prick. SMBG measurements are key in type 1 diabetes (T1D) management, e.g. for tuning insulin dosing. A reliable model of SMBG accuracy would be important in several applications, e.g. in in silico design and optimization of insulin therapy. In the literature, the most used model to describe SMBG error is the Gaussian distribution, which, however, is too simplistic to properly account for the observed variability. Here, a methodology to derive a stochastic model of SMBG accuracy is presented. The method consists in dividing the glucose range into zones in which absolute/relative error presents constant standard deviation (SD) and, then, fitting by maximum-likelihood a skew-normal distribution model to the absolute/relative error distribution in each zone. The method was tested on a database of SMBG measurements collected by the One Touch Ultra 2 (Lifescan Inc., Milpitas, CA). In particular, two zones were identified: zone 1 (BG ≤ 75 mg/dl) with constant-SD absolute error and zone 2 (BG > 75 mg/dl) with constant-SD relative error. Mean and SD of the identified skew-normal distributions are, respectively, 2.03 and 6.51 in zone 1, and 4.78% and 10.09% in zone 2. Visual predictive check validation showed that the derived two-zone model accurately reproduces the SMBG measurement error distribution, performing significantly better than the single-zone Gaussian model used previously in the literature. This stochastic model allows a more realistic SMBG scenario for in silico design and optimization of T1D insulin therapy.
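The two-zone skew-normal error model described above can be sketched as follows. The skew-normal sampler uses the standard Azzalini construction; the shape and scale parameters below are illustrative placeholders, not the fitted One Touch Ultra 2 values reported in the paper.

```python
import numpy as np

def skew_normal(alpha, loc, scale, size, rng):
    """Sample a skew-normal via the Azzalini construction:
    X = delta*|Z0| + sqrt(1-delta^2)*Z1 has shape parameter alpha."""
    delta = alpha / np.sqrt(1.0 + alpha**2)
    z0, z1 = rng.standard_normal((2, size))
    x = delta * np.abs(z0) + np.sqrt(1.0 - delta**2) * z1
    return loc + scale * x

def smbg_error(bg_true, rng):
    """Two-zone SMBG error: constant-SD absolute error below 75 mg/dl,
    constant-SD relative error above. Parameters are illustrative only."""
    if bg_true <= 75.0:
        return skew_normal(alpha=2.0, loc=0.0, scale=6.5, size=1, rng=rng)[0]
    return bg_true * skew_normal(alpha=2.0, loc=0.0, scale=0.10, size=1, rng=rng)[0]

rng = np.random.default_rng(1)
readings = [bg + smbg_error(bg, rng) for bg in (60.0, 120.0, 200.0)]
print([round(r, 1) for r in readings])
```

Replacing a single Gaussian with zone-wise skew-normals lets the simulated meter reproduce both the asymmetry and the glucose-dependent spread of real SMBG errors.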
International Nuclear Information System (INIS)
Decortis, F.; Drozdowicz, B.; Masson, M.
1990-01-01
In this paper the needs and requirements for developing a cognitive model of a human operator are discussed and the computer architecture, currently being developed, is described. Given the approach taken, namely the division of the problem into specialised tasks within an area and using the architecture chosen, it is possible to build independently several cognitive and psychological models such as errors and stress models, as well as models of temporal, qualitative and an analogical reasoning. (author)
Bhadra, Anindya; Carroll, Raymond J
2016-07-01
In truncated polynomial spline or B-spline models where the covariates are measured with error, a fully Bayesian approach to model fitting requires the covariates and model parameters to be sampled at every Markov chain Monte Carlo iteration. Sampling the unobserved covariates poses a major computational problem and usually Gibbs sampling is not possible. This forces the practitioner to use a Metropolis-Hastings step which might suffer from unacceptable performance due to poor mixing and might require careful tuning. In this article we show that, for truncated polynomial spline or B-spline models of degree one, the complete conditional distribution of the covariates measured with error is available explicitly as a mixture of double-truncated normals, thereby enabling a Gibbs sampling scheme. We demonstrate via a simulation study that our technique performs favorably in terms of computational efficiency and statistical performance. Our results indicate up to 62% and 54% increases in mean integrated squared error efficiency when compared to existing alternatives while using truncated polynomial splines and B-splines, respectively. Furthermore, there is evidence that the gain in efficiency increases with the measurement error variance, indicating the proposed method is a particularly valuable tool for challenging applications that present high measurement error. We conclude with a demonstration on a nutritional epidemiology data set from the NIH-AARP study and by pointing out some possible extensions of the current work.
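The key ingredient of such a Gibbs scheme, drawing from a mixture of double-truncated normals, can be sketched directly. The means, scales, and mixture weight below are invented stand-ins for the quantities that would come from the spline model's complete conditional, and the rejection sampler is for illustration only (real code would use an inverse-CDF or specialised truncated-normal sampler).

```python
import numpy as np

def trunc_normal(mu, sigma, lo, hi, size, rng):
    """Rejection sampler for a normal truncated to [lo, hi] (illustrative;
    inefficient when the interval has low probability mass)."""
    out = np.empty(0)
    while out.size < size:
        draw = rng.normal(mu, sigma, size=2 * size)
        out = np.concatenate([out, draw[(draw >= lo) & (draw <= hi)]])
    return out[:size]

rng = np.random.default_rng(0)
# One Gibbs draw for an unobserved covariate X | rest, taken to be a
# two-component mixture of normals truncated at a spline knot k.
k = 1.0
w = 0.4                                    # mixture weight (assumed, not derived)
u = rng.uniform(size=5000)
below = trunc_normal(0.8, 0.5, -np.inf, k, 5000, rng)
above = trunc_normal(1.4, 0.5, k, np.inf, 5000, rng)
x = np.where(u < w, below, above)
print(round(float(x.mean()), 2))
```

Because the complete conditional is available in closed form, each covariate update is a direct draw like this one, with no Metropolis-Hastings tuning or rejection of whole chain moves.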
Using APEX to Model Anticipated Human Error: Analysis of a GPS Navigational Aid
VanSelst, Mark; Freed, Michael; Shefto, Michael (Technical Monitor)
1997-01-01
The interface development process can be dramatically improved by predicting design-facilitated human error at an early stage in the design process. The approach we advocate is to SIMULATE the behavior of a human agent carrying out tasks with a well-specified user interface, ANALYZE the simulation for instances of human error, and then REFINE the interface or protocol to minimize predicted error. This approach, incorporated into the APEX modeling architecture, differs from past approaches to human simulation in its emphasis on error rather than, e.g., learning rate or speed of response. The APEX model consists of two major components: (1) a powerful action selection component capable of simulating behavior in complex, multiple-task environments; and (2) a resource architecture which constrains cognitive, perceptual, and motor capabilities to within empirically demonstrated limits. The model mimics human errors arising from interactions between limited human resources and elements of the computer interface whose design fails to anticipate those limits. We analyze the design of a hand-held Global Positioning System (GPS) device used for tactical and navigational decisions in small yacht racing. The analysis demonstrates how human system modeling can be an effective design aid, helping to accelerate the process of refining a product (or procedure).
Linear mixed models for replication data to efficiently allow for covariate measurement error.
Bartlett, Jonathan W; De Stavola, Bianca L; Frost, Chris
2009-11-10
It is well known that measurement error in the covariates of regression models generally causes bias in parameter estimates. Correction for such biases requires information concerning the measurement error, which is often in the form of internal validation or replication data. Regression calibration (RC) is a popular approach to correct for covariate measurement error, which involves predicting the true covariate using error-prone measurements. Likelihood methods have previously been proposed as an alternative approach to estimate the parameters in models affected by measurement error, but have been relatively infrequently employed in medical statistics and epidemiology, partly because of computational complexity and concerns regarding robustness to distributional assumptions. We show how a standard random-intercepts model can be used to obtain maximum likelihood (ML) estimates when the outcome model is linear or logistic regression under certain normality assumptions, when internal error-prone replicate measurements are available. Through simulations we show that for linear regression, ML gives more efficient estimates than RC, although the gain is typically small. Furthermore, we show that RC and ML estimates remain consistent even when the normality assumptions are violated. For logistic regression, our implementation of ML is consistent if the true covariate is conditionally normal given the outcome, in contrast to RC. In simulations, this ML estimator showed less bias in situations where RC gives non-negligible biases. Our proposal makes the ML approach to dealing with covariate measurement error more accessible to researchers, which we hope will improve its viability as a useful alternative to methods such as RC.
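The attenuation bias and the regression calibration (RC) correction discussed above are easy to demonstrate with simulated replicate data. The variances and sample size below are arbitrary; the RC recipe itself, estimating the error variance from the replicate difference and shrinking the average measurement toward its mean, is the standard one.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 20000
beta = 2.0
x = rng.normal(0.0, 1.0, n)                  # true (unobserved) covariate
y = beta * x + rng.normal(0.0, 1.0, n)       # linear outcome model
w1 = x + rng.normal(0.0, 0.8, n)             # two error-prone replicates
w2 = x + rng.normal(0.0, 0.8, n)

def slope(a, b):
    return np.cov(a, b)[0, 1] / np.var(a, ddof=1)

naive = slope(w1, y)                         # attenuated towards zero

# Regression calibration: replace W with E[X | W-bar] using the
# replicate-estimated error variance, then regress Y on that prediction.
wbar = (w1 + w2) / 2.0
sigma2_u = np.var(w1 - w2, ddof=1) / 2.0     # per-replicate error variance
sigma2_x = np.var(wbar, ddof=1) - sigma2_u / 2.0
x_hat = wbar.mean() + (sigma2_x / (sigma2_x + sigma2_u / 2.0)) * (wbar - wbar.mean())
corrected = slope(x_hat, y)

print(round(naive, 2), round(corrected, 2))  # naive biased toward 0, RC near 2.0
```

The naive slope is shrunk by the reliability ratio sigma_x^2/(sigma_x^2 + sigma_u^2), while RC recovers the true coefficient; the abstract's point is that full maximum likelihood can do slightly better still, especially for logistic outcomes.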
Energy Technology Data Exchange (ETDEWEB)
Biswas, Dipankar, E-mail: diiibiswas@yahoo.co.in; Panda, Siddhartha [Institute of Radiophysics and Electronics, University of Calcutta, 92 A. P. C. Road, Kolkata 700009 (India)
2014-04-07
Experimental capacitance–voltage (C-V) profiling of semiconductor heterojunctions and quantum wells has remained ever important and relevant. The apparent carrier distributions (ACDs) thus obtained reveal the carrier depletions, carrier peaks and their positions, in and around the quantum structures. Inevitable errors, encountered in such measurements, are the deviations of the peak concentrations of the ACDs and their positions, from the actual carrier peaks obtained from quantum mechanical computations with the fundamental parameters. In spite of the very wide use of the C-V method, comprehensive discussions on the qualitative and quantitative nature of the errors remain wanting. The errors are dependent on the fundamental parameters, the temperature of measurements, the Debye length, and the series resistance. In this paper, the errors have been studied with doping concentration, band offset, and temperature. From this study, a rough estimate may be drawn about the error. It is seen that the error in the position of the ACD peak decreases at higher doping, higher band offset, and lower temperature, whereas the error in the peak concentration changes in a strange fashion. A completely new method is introduced, for derivation of the carrier profiles from C-V measurements on quantum structures to minimize errors which are inevitable in the conventional formulation.
Directory of Open Access Journals (Sweden)
A. Sigmund
2017-06-01
In recent years, the spatial resolution of fiber-optic distributed temperature sensing (DTS) has been enhanced in various studies by helically coiling the fiber around a support structure. While solid polyvinyl chloride tubes are an appropriate support structure under water, they can produce considerable errors in aerial deployments due to radiative heating or cooling. We used meshed reinforcing fabric as a novel support structure to measure high-resolution vertical temperature profiles with a height of several meters above a meadow and within and above a small lake. This study aimed at quantifying the radiation error for the coiled DTS system and the contribution caused by the novel support structure via heat conduction. A quantitative and comprehensive energy balance model is proposed and tested, which includes the shortwave radiative, longwave radiative, convective, and conductive heat transfers and allows for modeling fiber temperatures as well as quantifying the radiation error. The sensitivity of the energy balance model to the conduction error caused by the reinforcing fabric is discussed in terms of its albedo, emissivity, and thermal conductivity. Modeled radiation errors amounted to −1.0 and 1.3 K at 2 m height but ranged up to 2.8 K for very high incoming shortwave radiation (1000 J s⁻¹ m⁻²) and very weak winds (0.1 m s⁻¹). After correcting for the radiation error by means of the presented energy balance, the root mean square error between DTS and reference air temperatures from an aspirated resistance thermometer or an ultrasonic anemometer was 0.42 and 0.26 K above the meadow and the lake, respectively. Conduction between reinforcing fabric and fiber cable had a small effect on fiber temperatures (< 0.18 K). Only for locations where the plastic rings that supported the reinforcing fabric touched the fiber-optic cable were significant temperature artifacts of up to 2.5 K observed. Overall, the…
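The steady-state energy balance behind such a radiation-error correction can be sketched and solved by Newton iteration. The convection law h = h0 + h1*wind and every coefficient below are illustrative placeholders chosen to mimic a thin fiber, not the calibrated values of the study.

```python
SB = 5.67e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def fiber_temperature(t_air, sw_down, lw_down, wind, albedo=0.5,
                      emissivity=0.9, h0=150.0, h1=300.0):
    """Solve the steady-state fiber energy balance by Newton iteration:
        (1 - albedo)*SW + eps*(LW - SB*T^4) + h(wind)*(T_air - T) = 0
    All coefficients (albedo, emissivity, convection law h = h0 + h1*wind)
    are assumed placeholder values, not the paper's calibrated ones."""
    h = h0 + h1 * wind
    t = t_air
    for _ in range(50):
        f = (1 - albedo) * sw_down + emissivity * (lw_down - SB * t**4) + h * (t_air - t)
        df = -4 * emissivity * SB * t**3 - h
        t -= f / df
    return t

t_air = 293.15
lw_balanced = SB * t_air**4       # longwave field that exactly offsets emission
# No shortwave and balanced longwave: zero radiation error
print(round(fiber_temperature(t_air, 0.0, lw_balanced, 2.0) - t_air, 3))
# Strong sun, very weak wind: a radiation error of a few kelvin
print(round(fiber_temperature(t_air, 1000.0, lw_balanced, 0.1) - t_air, 2))
```

The radiation error is the modeled fiber-minus-air temperature difference; subtracting it from the DTS reading is the correction the abstract evaluates against reference thermometers.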
Evolutionary Naturalism and the Logical Structure of Valuation: The Other Side of Error Theory
Directory of Open Access Journals (Sweden)
Richard A Richards
2006-01-01
On one standard philosophical position adopted by evolutionary naturalists, human ethical systems are nothing more than evolutionary adaptations that facilitate social behavior. Belief in an absolute moral foundation is therefore in error. But evolutionary naturalism, by its commitment to the basic valuational concept of fitness, reveals another, logical error: standard conceptions of value in terms of simple predication and properties are mistaken. Valuation has instead a relational structure that makes reference to respects, subjects and environments. This relational nature is illustrated by the analogy commonly drawn between value and color. Color perception, as recognized by the ecological concept, is relational and dependent on subject and environment. In a similar way, value is relational and dependent on subject and environment. This makes value subjective, but also objective in that it is grounded on facts about mattering. At bottom, values are complex relational facts. The view presented here, unlike other prominent relational and naturalistic conceptions of value, recognizes the full range of valuation in nature. The advantages of this relational conception are, first, that it gets valuation right; second, it provides a framework to better explain and understand valuation in all its varieties and patterns.
Probabilistic modeling of timber structures
DEFF Research Database (Denmark)
Köhler, Jochen; Sørensen, John Dalsgaard; Faber, Michael Havbro
2007-01-01
The present paper contains a proposal for the probabilistic modeling of timber material properties. It is produced in the context of the Probabilistic Model Code (PMC) of the Joint Committee on Structural Safety (JCSS) [Joint Committee of Structural Safety. Probabilistic Model Code, Internet Publ...
Douanla Tayo, Lionel; Abomo Fouda, Marcel Olivier
2015-01-01
This study aims at assessing the effect of government spending on education on economic growth in Cameroon over the period 1980-2012 using a vector error correction model. The estimated results show that these expenditures had a significant and positive impact on economic growth in both the short and the long run. The estimated error correction model shows that an increase of 1% in the growth rate of private gross fixed capital formation and government education spending led to increases of 5.03% a...
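The error-correction mechanism used in studies like this one can be sketched with simulated data: a long-run equilibrium ties the two series together, and the adjustment coefficient measures how fast deviations die out. The coefficients and series below are synthetic; in practice the long-run coefficient beta would come from a first-stage cointegration regression rather than being known.

```python
import numpy as np

rng = np.random.default_rng(3)
T = 5000
alpha, beta = -0.3, 1.5          # adjustment speed and long-run coefficient (chosen)
x = np.cumsum(rng.normal(size=T))            # I(1) driver (e.g. log spending)
y = np.empty(T)
y[0] = beta * x[0]
for t in range(1, T):
    # Error-correction DGP: dy_t = alpha*(y_{t-1} - beta*x_{t-1}) + short-run dx term + noise
    y[t] = (y[t-1] + alpha * (y[t-1] - beta * x[t-1])
            + 0.5 * (x[t] - x[t-1]) + rng.normal(scale=0.1))

# Estimate the ECM by OLS: regress dy on the lagged equilibrium error and dx
dy = np.diff(y)
dx = np.diff(x)
ec = (y - beta * x)[:-1]                     # lagged error-correction term
X = np.column_stack([ec, dx, np.ones(T - 1)])
coef, *_ = np.linalg.lstsq(X, dy, rcond=None)
print(np.round(coef[:2], 2))   # ~[-0.3, 0.5]: adjustment speed and short-run effect
```

A significantly negative adjustment coefficient is what licenses the "short and long run" language in the abstract: shocks have an immediate effect through dx and a persistent one through the slowly closing equilibrium gap.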
Khaki, M.; Schumacher, M.; Forootan, E.; Kuhn, M.; Awange, J. L.; van Dijk, A. I. J. M.
2017-10-01
Assimilation of terrestrial water storage (TWS) information from the Gravity Recovery And Climate Experiment (GRACE) satellite mission can provide significant improvements in hydrological modelling. However, the rather coarse spatial resolution of GRACE TWS and its spatially correlated errors pose considerable challenges for achieving realistic assimilation results. Consequently, successful data assimilation depends on rigorous modelling of the full error covariance matrix of the GRACE TWS estimates, as well as realistic error behavior for hydrological model simulations. In this study, we assess the application of local analysis (LA) to maximize the contribution of GRACE TWS in hydrological data assimilation. For this, we assimilate GRACE TWS into the World-Wide Water Resources Assessment system (W3RA) over the Australian continent while applying LA and accounting for existing spatial correlations using the full error covariance matrix. GRACE TWS data are applied with different spatial resolutions including 1° to 5° grids, as well as basin averages. The ensemble-based sequential filtering technique of the Square Root Analysis (SQRA) is applied to assimilate TWS data into W3RA. For each spatial scale, the performance of the data assimilation is assessed through comparison with independent in-situ groundwater and soil moisture observations. Overall, the results demonstrate that LA is able to stabilize the inversion process (within the implementation of the SQRA filter), leading to fewer errors for all spatial scales considered, with an average RMSE improvement of 54% (e.g., 52.23 mm down to 26.80 mm) for all the cases with respect to groundwater in-situ measurements. Validating the assimilated results with groundwater observations indicates that LA leads to 13% better (in terms of RMSE) assimilation results compared to the cases with Gaussian error assumptions. This highlights the great potential of LA and the use of the full error covariance matrix of GRACE TWS…
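The localisation idea that underlies local analysis can be illustrated on a sample covariance from a small ensemble: spurious long-range covariances, which destabilise the filter update, are damped by a Schur (elementwise) product with a compactly supported taper. The grid, ensemble size, and the Gaussian-with-cutoff taper below are illustrative; operational systems typically use the Gaspari-Cohn function instead.

```python
import numpy as np

rng = np.random.default_rng(7)
n_grid, n_ens = 40, 20
coords = np.arange(n_grid, dtype=float)
d = np.abs(coords[:, None] - coords[None, :])   # pairwise grid distances

# Small ensemble of smooth random fields (true correlation length: a few cells)
smoother = np.exp(-d**2 / (2 * 3.0**2))
ens = smoother @ rng.standard_normal((n_grid, n_ens))

P_sample = np.cov(ens)      # sample covariance: noisy spurious long-range terms

# Localisation: Schur product with a compactly supported taper.
# A Gaussian with a hard cutoff stands in for the usual Gaspari-Cohn function.
L_loc = 5.0
taper = np.exp(-d**2 / (2 * L_loc**2)) * (d <= 3 * L_loc)
P_loc = P_sample * taper

far = d > 15.0
print(round(float(np.abs(P_sample[far]).mean()), 3))  # spurious long-range covariance
print(round(float(np.abs(P_loc[far]).mean()), 4))     # exactly zero beyond the cutoff
```

By zeroing covariances beyond the localisation radius, each grid cell's update is driven only by nearby information, which is what stabilises the square-root filter inversion in the study.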
International Nuclear Information System (INIS)
Du, Z C; Lv, C F; Hong, M S
2006-01-01
A new error modelling and identification method based on the cross grid encoder is proposed in this paper. Generally, there are 21 error components in the geometric error of 3-axis NC machine tools. However, according to our theoretical analysis, the squareness error among different guide ways affects not only the translational error components but also the rotational ones. Therefore, a revised synthetic error model is developed, and the mapping relationship between the error components and the radial motion error of a round workpiece manufactured on the NC machine tool is deduced. This mapping relationship shows that the radial error of circular motion is the comprehensive result of all the error components of the link, worktable, sliding table and main spindle block. Aiming to overcome the solution singularity shortcoming of traditional error component identification methods, a new multi-step identification method for the error components using cross grid encoder measurement technology is proposed, based on the kinematic error model of the NC machine tool. Firstly, the 12 translational error components of the NC machine tool are measured and identified by the least squares method (LSM) when the machine tool moves linearly in the three orthogonal planes: the XOY plane, the XOZ plane and the YOZ plane. Secondly, the circular error tracks are measured when the machine tool moves circularly in the same orthogonal planes using the cross grid encoder Heidenhain KGM 182, from which the 9 rotational error components can be identified by LSM. Finally, experimental validation of the above modelling theory and identification method is carried out on the 3-axis CNC vertical machining centre Cincinnati 750 Arrow. All 21 error components were successfully measured by this method. The research shows that the multi-step modelling and identification method is very suitable for on-machine measurement.
Bergen, Silas; Sheppard, Lianne; Sampson, Paul D; Kim, Sun-Young; Richards, Mark; Vedal, Sverre; Kaufman, Joel D; Szpiro, Adam A
2013-09-01
Studies estimating health effects of long-term air pollution exposure often use a two-stage approach: building exposure models to assign individual-level exposures, which are then used in regression analyses. This requires accurate exposure modeling and careful treatment of exposure measurement error. To illustrate the importance of accounting for exposure model characteristics in two-stage air pollution studies, we considered a case study based on data from the Multi-Ethnic Study of Atherosclerosis (MESA). We built national spatial exposure models that used partial least squares and universal kriging to estimate annual average concentrations of four PM2.5 components: elemental carbon (EC), organic carbon (OC), silicon (Si), and sulfur (S). We predicted PM2.5 component exposures for the MESA cohort and estimated cross-sectional associations with carotid intima-media thickness (CIMT), adjusting for subject-specific covariates. We corrected for measurement error using recently developed methods that account for the spatial structure of predicted exposures. Our models performed well, with cross-validated R2 values ranging from 0.62 to 0.95. Naïve analyses that did not account for measurement error indicated statistically significant associations between CIMT and exposure to OC, Si, and S. EC and OC exhibited little spatial correlation, and the corrected inference was unchanged from the naïve analysis. The Si and S exposure surfaces displayed notable spatial correlation, resulting in corrected confidence intervals (CIs) that were 50% wider than the naïve CIs, but that were still statistically significant. The impact of correcting for measurement error on health effect inference is concordant with the degree of spatial correlation in the exposure surfaces. Exposure model characteristics must be considered when performing two-stage air pollution epidemiologic analyses because naïve health effect inference may be inappropriate.
Shen, Chung-Wei; Chen, Yi-Hau
2015-10-01
Missing observations and covariate measurement error commonly arise in longitudinal data. However, existing methods for model selection in marginal regression analysis of longitudinal data fail to address the potential bias resulting from these issues. To tackle this problem, we propose a new model selection criterion, the Generalized Longitudinal Information Criterion, which is based on an approximately unbiased estimator for the expected quadratic error of a considered marginal model accounting for both data missingness and covariate measurement error. The simulation results reveal that the proposed method performs quite well in the presence of missing data and covariate measurement error. On the contrary, the naive procedures without taking care of such complexity in data may perform quite poorly. The proposed method is applied to data from the Taiwan Longitudinal Study on Aging to assess the relationship of depression with health and social status in the elderly, accommodating measurement error in the covariate as well as missing observations.
Seismic attenuation relationship with homogeneous and heterogeneous prediction-error variance models
Mu, He-Qing; Xu, Rong-Rong; Yuen, Ka-Veng
2014-03-01
Peak ground acceleration (PGA) estimation is an important task in earthquake engineering practice. One of the most well-known models is the Boore-Joyner-Fumal formula, which estimates the PGA using the moment magnitude, the site-to-fault distance and the site foundation properties. In the present study, the complexity of this formula and the homogeneity assumption for the prediction-error variance are investigated, and an efficiency-robustness balanced formula is proposed. For this purpose, a reduced-order Monte Carlo simulation algorithm for Bayesian model class selection is presented to obtain the most suitable predictive formula and prediction-error model for the seismic attenuation relationship. In this approach, each model class (a predictive formula with a prediction-error model) is evaluated according to its plausibility given the data. The one with the highest plausibility is robust since it possesses the optimal balance between the data fitting capability and the sensitivity to noise. A database of strong ground motion records in the Tangshan region of China is obtained from the China Earthquake Data Center for the analysis. The optimal predictive formula is proposed based on this database. It is shown that the proposed formula with heterogeneous prediction-error variance is much simpler than the attenuation model suggested by Boore, Joyner and Fumal (1993).
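The functional form of a Boore-Joyner-Fumal-type attenuation relationship can be sketched directly. The coefficients and the depth term below are illustrative placeholders, not the published regression values, and the site-class terms of the full formula are omitted.

```python
import numpy as np

def bjf_log_pga(m, d, b, h):
    """Boore-Joyner-Fumal-type attenuation form:
        log10(PGA) = b1 + b2*(M-6) + b3*(M-6)^2 + b4*r + b5*log10(r),
        r = sqrt(d^2 + h^2)
    Site terms omitted; coefficients below are placeholders, not the
    published values."""
    r = np.sqrt(d**2 + h**2)
    return b[0] + b[1] * (m - 6) + b[2] * (m - 6)**2 + b[3] * r + b[4] * np.log10(r)

b = np.array([-0.04, 0.22, -0.06, -0.001, -0.8])   # placeholder coefficients
h = 5.6                                            # fictitious depth term, km

d = np.array([5.0, 20.0, 80.0])                    # epicentral distances, km
pga = 10 ** bjf_log_pga(6.5, d, b, h)
print(np.round(pga, 3))   # PGA decays monotonically with distance
```

Model class selection as described in the abstract then compares variants of this form (dropping terms, or letting the prediction-error variance vary with magnitude or distance) by their posterior plausibility given the strong-motion database.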
DEFF Research Database (Denmark)
Andreasen, Martin Møller; Meldrum, Andrew
pricing factors using the sequential regression approach. Our findings suggest that the two models largely provide the same in-sample fit, but loadings from ordinary and risk-adjusted Campbell-Shiller regressions are generally best matched by the shadow rate models. We also find that the shadow rate models perform better than the QTSMs when forecasting bond yields out of sample.
Frequency Weighted Model Order Reduction Technique and Error Bounds for Discrete Time Systems
Directory of Open Access Journals (Sweden)
Muhammad Imran
2014-01-01
for the whole frequency range. However, certain applications (like controller reduction) require frequency weighted approximation, which introduces the concept of using frequency weights in model reduction techniques. Limitations of some existing frequency weighted model reduction techniques include lack of stability of reduced-order models (for the two-sided weighting case) and lack of frequency response error bounds. A new frequency weighted technique for balanced model reduction of discrete time systems is proposed. The proposed technique guarantees stable reduced-order models even when two-sided weightings are present. An efficient technique for computing frequency weighted Gramians is also proposed. Results are compared with other existing frequency weighted model reduction techniques for discrete time systems. Moreover, the proposed technique yields frequency response error bounds.
Hwang, Jae Joon; Kim, Kee-Deog; Park, Hyok; Park, Chang Seo; Jeong, Ho-Gul
2014-01-01
Superimposition has been used as a method to evaluate the changes of orthodontic or orthopedic treatment in the dental field. With the introduction of cone beam CT (CBCT), evaluating three-dimensional changes after treatment became possible by superimposition. 4-point plane orientation is one of the simplest ways to achieve superimposition of three-dimensional images. To find factors influencing the superimposition error of cephalometric landmarks by the 4-point plane orientation method, and to evaluate the reproducibility of cephalometric landmarks for analyzing superimposition error, 20 patients were analyzed who had normal skeletal and occlusal relationships and underwent CBCT for diagnosis of temporomandibular disorder. The nasion, sella turcica, basion and the midpoint between the left and right most posterior points of the lesser wing of the sphenoid bone were used to define a three-dimensional (3D) anatomical reference co-ordinate system. Another 15 reference cephalometric points were also determined three times in the same image. The reorientation error of each landmark could be explained substantially (23%) by a linear regression model consisting of three factors describing the position of each landmark relative to the reference axes and the locating error. The 4-point plane orientation system may produce an amount of reorientation error that varies according to the perpendicular distance between the landmark and the x-axis; the reorientation error also increases as the locating error and the shift of the reference axes viewed from each landmark increase. Therefore, in order to reduce the reorientation error, the accuracy of all landmarks, including the reference points, is important. Construction of the regression model using reference points of greater precision is required for the clinical application of this model.
A Novel Error Model of Optical Systems and an On-Orbit Calibration Method for Star Sensors
Directory of Open Access Journals (Sweden)
Shuang Wang
2015-12-01
In order to improve the on-orbit measurement accuracy of star sensors, the effects of image-plane rotary error, image-plane tilt error and distortions of optical systems resulting from the on-orbit thermal environment were studied in this paper. Since these issues affect the precision of star image point positions, a novel measurement error model based on the traditional error model is explored. Due to the orthonormal characteristics of image-plane rotary-tilt errors and the strong nonlinearity among these error parameters, it is difficult to calibrate all the parameters simultaneously. To overcome this difficulty, a modified two-step calibration method for the new error model, based on the Extended Kalman Filter (EKF) and Least Square Methods (LSM), is presented. The former is used to calibrate the principal point drift, focal length error and distortions of optical systems, while the latter estimates the image-plane rotary-tilt errors. With this calibration method, the error in star image point position caused by the above effects is greatly reduced, from 15.42% to 1.389%. Finally, the simulation results demonstrate that the presented measurement error model for star sensors has higher precision. Moreover, the proposed two-step method can effectively calibrate the model error parameters, and the calibration precision of on-orbit star sensors is clearly improved.
Zeng, Jicai; Zha, Yuanyuan; Zhang, Yonggen; Shi, Liangsheng; Zhu, Yan; Yang, Jinzhong
2017-11-01
Multi-scale modeling of localized groundwater flow problems in a large-scale aquifer has been extensively investigated in the context of the cost-benefit trade-off. An alternative is to couple parent and child models with different spatial and temporal scales, which may result in non-trivial sub-model errors in the local areas of interest. Basically, such errors in the child models originate from deficiencies in the coupling method, as well as from inadequacy in the spatial and temporal discretizations of the parent and child models. In this study, we investigate the sub-model errors within a generalized one-way coupling scheme, given its numerical stability and efficiency, which enables more flexibility in choosing sub-models. To couple the models at different scales, the head solution at the parent scale is delivered downward onto the child boundary nodes by means of spatial and temporal head interpolation. The efficiency of the coupled model is improved either by refining the grid or time step size in the parent and child models, or by carefully locating the sub-model boundary nodes. The temporal truncation errors in the sub-models can be significantly reduced by an adaptive local time-stepping scheme. The generalized one-way coupling scheme is promising for handling multi-scale groundwater flow problems with complex stresses and heterogeneity.
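The one-way coupling step described above — delivering the parent-scale head solution down onto child boundary nodes by spatial and temporal interpolation — can be sketched with simple linear interpolation. The grid coordinates, head values and times below are purely illustrative, not from the study.

```python
import numpy as np

# Parent model: coarse grid and two time-level snapshots of hydraulic head
parent_x = np.array([0.0, 100.0, 200.0, 300.0])   # parent node x-coordinates (m)
head_t0 = np.array([10.0, 9.0, 8.0, 7.0])         # heads at parent time t = 0 d
head_t1 = np.array([9.0, 8.5, 8.0, 7.5])          # heads at parent time t = 10 d

def child_boundary_head(xb, t, t0=0.0, t1=10.0):
    """Spatial + temporal linear interpolation of parent heads onto a child boundary node."""
    h0 = np.interp(xb, parent_x, head_t0)  # spatial interpolation at each parent time level
    h1 = np.interp(xb, parent_x, head_t1)
    w = (t - t0) / (t1 - t0)               # temporal interpolation weight
    return (1 - w) * h0 + w * h1

# Head prescribed on a child boundary node at x = 150 m, mid-way through the parent step
print(child_boundary_head(150.0, 5.0))
```

Refining the parent grid or time step shrinks the interpolation error at these boundary nodes, which is one of the efficiency levers the abstract mentions.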
Modeling Structural Brain Connectivity
DEFF Research Database (Denmark)
Ambrosen, Karen Marie Sandø
The human brain consists of a gigantic complex network of interconnected neurons. Together all these connections determine who we are, how we react and how we interpret the world. Knowledge about how the brain is connected can further our understanding of the brain’s structural organization, help improve diagnosis, and potentially allow better treatment of a wide range of neurological disorders. Tractography based on diffusion magnetic resonance imaging is a unique tool to estimate this “structural connectivity” of the brain non-invasively and in vivo. During the last decade, brain connectivity has increasingly been analyzed using graph theoretic measures adopted from network science, and this characterization of the brain’s structural connectivity has been shown to be useful for the classification of populations, such as healthy and diseased subjects. The structural connectivity of the brain…
DEFF Research Database (Denmark)
Del Giudice, Dario; Löwe, Roland; Madsen, Henrik
2015-01-01
In urban rainfall-runoff, commonly applied statistical techniques for uncertainty quantification mostly ignore systematic output errors originating from simplified models and erroneous inputs. Consequently, the resulting predictive uncertainty is often unreliable. Our objective is to present two … These properties make it more suitable for off-line applications. The IND can help in diagnosing the causes of output errors and is computationally inexpensive. It produces best results on short forecast horizons that are typical for online applications.
Error-Rate Estimation Based on Multi-Signal Flow Graph Model and Accelerated Radiation Tests.
Directory of Open Access Journals (Sweden)
Wei He
A method of evaluating the single-event-effect soft-error vulnerability of space instruments before launch has been an active research topic in recent years. In this paper, a multi-signal flow graph model is introduced to analyze fault diagnosis and mean time to failure (MTTF) for space instruments. A model for the system functional error rate (SFER) is proposed. In addition, an experimental method and an accelerated radiation testing system for a signal processing platform based on a field programmable gate array (FPGA) are presented. Based on experimental results for different ions (O, Si, Cl, Ti) at the HI-13 Tandem Accelerator, the SFER of the signal processing platform is approximately 10^-3 (error/particle/cm^2), while the MTTF is approximately 110.7 h.
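The two headline figures combine in a simple way if soft errors are treated as a Poisson process: the MTTF is the reciprocal of the error rate per unit fluence times the particle flux. The flux value below is an assumption chosen for illustration, not a number from the paper.

```python
# MTTF under a Poisson error-arrival assumption: MTTF = 1 / (SFER * flux).
sfer = 1e-3   # system functional error rate, errors per unit fluence (error/(particle/cm^2))
flux = 9.0    # on-orbit particle flux in particles/cm^2 per hour (illustrative assumption)

mttf = 1.0 / (sfer * flux)   # hours between functional errors, on average
print(round(mttf, 1))
```

With these illustrative inputs the sketch lands near the ~110 h MTTF reported in the abstract, showing how the SFER scales the expected time to failure for a given radiation environment.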
Error detection in GPS observations by means of Multi-process models
DEFF Research Database (Denmark)
Thomsen, Henrik F.
2001-01-01
The main purpose of this article is to present the idea of using Multi-process models as a method of detecting errors in GPS observations. The theory behind Multi-process models, and double differenced phase observations in GPS, is briefly presented. It is shown how to model cycle slips in the Multi-process context by means of a simple simulation. The simulation is used to illustrate how the method works, and it is concluded that the method deserves further investigation.
Scale interactions on diurnal to seasonal timescales and their relevance to model systematic errors
Directory of Open Access Journals (Sweden)
G. Yang
2003-06-01
Examples of current research into systematic errors in climate models are used to demonstrate the importance of scale interactions on diurnal, intraseasonal and seasonal timescales for the mean and variability of the tropical climate system. It has enabled some conclusions to be drawn about possible processes that may need to be represented, and some recommendations to be made regarding model improvements. It has been shown that the Maritime Continent heat source is a major driver of the global circulation, yet is poorly represented in GCMs. A new climatology of the diurnal cycle has been used to provide compelling evidence of important land-sea breeze and gravity wave effects, which may play a crucial role in the heat and moisture budget of this key region for the tropical and global circulation. The role of the diurnal cycle has also been emphasized for intraseasonal variability associated with the Madden-Julian Oscillation (MJO). It is suggested that the diurnal cycle in Sea Surface Temperature (SST) during the suppressed phase of the MJO leads to a triggering of cumulus congestus clouds, which serve to moisten the free troposphere and hence precondition the atmosphere for the next active phase. It has been further shown that coupling between the ocean and atmosphere on intraseasonal timescales leads to a more realistic simulation of the MJO. These results stress the need for models to be able to simulate, firstly, the observed tri-modal distribution of convection and, secondly, the coupling between the ocean and atmosphere on diurnal to intraseasonal timescales. It is argued, however, that the current representation of the ocean mixed layer in coupled models is not adequate to represent the complex structure of the observed mixed layer, in particular the formation of salinity barrier layers, which can potentially provide much stronger local coupling between the atmosphere and ocean on diurnal to intraseasonal timescales.
Multilevel Analysis of Structural Equation Models via the EM Algorithm.
Jo, See-Heyon
The question of how to analyze unbalanced hierarchical data generated from structural equation models has been a common problem for researchers and analysts. Among difficulties plaguing statistical modeling are estimation bias due to measurement error and the estimation of the effects of the individual's hierarchical social milieu. This paper…
Effects of Employing Ridge Regression in Structural Equation Models.
McQuitty, Shaun
1997-01-01
LISREL 8 invokes a ridge option when maximum likelihood or generalized least squares are used to estimate a structural equation model with a nonpositive definite covariance or correlation matrix. Implications of the ridge option for model fit, parameter estimates, and standard errors are explored through two examples. (SLD)
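The ridge option described above effectively adds a constant to the diagonal of a non-positive-definite covariance or correlation matrix until estimation can proceed. A minimal sketch of that idea follows; the starting constant, growth factor and example matrix are illustrative, not LISREL's exact rule.

```python
import numpy as np

def ridge_adjust(S, c=1e-3, factor=10.0):
    """Add c*diag(S) to S, increasing c until the result is positive definite."""
    while True:
        adjusted = S + c * np.diag(np.diag(S))
        if np.all(np.linalg.eigvalsh(adjusted) > 0):
            return adjusted, c
        c *= factor

# A correlation matrix that is not positive definite (illustrative):
# the 0.9/0.9/0.2 pattern is internally inconsistent, giving a negative eigenvalue.
S = np.array([[1.0, 0.9, 0.2],
              [0.9, 1.0, 0.9],
              [0.2, 0.9, 1.0]])
adjusted, c = ridge_adjust(S)
print(np.min(np.linalg.eigvalsh(S)), c)
```

As the abstract notes, the price of this fix is that fit statistics, parameter estimates and standard errors are then computed from an altered matrix, so they must be interpreted with care.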
Oscillating water column structural model
Energy Technology Data Exchange (ETDEWEB)
Copeland, Guild [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Bull, Diana L [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Jepsen, Richard Alan [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Gordon, Margaret Ellen [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)
2014-09-01
An oscillating water column (OWC) wave energy converter is a structure with an opening to the ocean below the free surface, i.e. a structure with a moonpool. Two structural models for a non-axisymmetric terminator design OWC, the Backward Bent Duct Buoy (BBDB) are discussed in this report. The results of this structural model design study are intended to inform experiments and modeling underway in support of the U.S. Department of Energy (DOE) initiated Reference Model Project (RMP). A detailed design developed by Re Vision Consulting used stiffeners and girders to stabilize the structure against the hydrostatic loads experienced by a BBDB device. Additional support plates were added to this structure to account for loads arising from the mooring line attachment points. A simplified structure was designed in a modular fashion. This simplified design allows easy alterations to the buoyancy chambers and uncomplicated analysis of resulting changes in buoyancy.
Some Deep Structure Manifestations in Second Language Errors of English Voiced and Voiceless "th."
Moustafa, Margaret Heiss
Native speakers of Egyptian Arabic make errors in their pronunciation of English that cannot always be accounted for by a contrastive analysis of Egyptian Arabic and English. This study focuses on three types of errors in the pronunciation of voiced and voiceless "th" made by fluent speakers of English. These errors were noted…
The Combined Effects of Measurement Error and Omitting Confounders in the Single-Mediator Model.
Fritz, Matthew S; Kenny, David A; MacKinnon, David P
2016-01-01
Mediation analysis requires a number of strong assumptions be met in order to make valid causal inferences. Failing to account for violations of these assumptions, such as not modeling measurement error or omitting a common cause of the effects in the model, can bias the parameter estimates of the mediated effect. When the independent variable is perfectly reliable, for example when participants are randomly assigned to levels of treatment, measurement error in the mediator tends to underestimate the mediated effect, while the omission of a confounding variable of the mediator-to-outcome relation tends to overestimate the mediated effect. Violations of these two assumptions often co-occur, however, in which case the mediated effect could be overestimated, underestimated, or even, in very rare circumstances, unbiased. To explore the combined effect of measurement error and omitted confounders in the same model, the effect of each violation on the single-mediator model is first examined individually. Then the combined effect of having measurement error and omitted confounders in the same model is discussed. Throughout, an empirical example is provided to illustrate the effect of violating these assumptions on the mediated effect.
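The attenuation described above — measurement error in the mediator shrinking the estimated mediator-to-outcome path toward zero — follows the classical reliability-ratio result: conditional on X, the observed coefficient is the true one times Var(M|X)/(Var(M|X)+Var(error)). A small simulation illustrates it; the path coefficients and error variances are illustrative.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 200_000
a, b = 0.5, 0.7                       # true paths X -> M and M -> Y

x = rng.normal(size=n)
m = a * x + rng.normal(size=n)        # true mediator; Var(M | X) = 1
y = b * m + rng.normal(size=n)
m_obs = m + rng.normal(0, 1.0, n)     # mediator measured with error variance 1

# OLS of Y on (M_obs, X): the b path is attenuated by Var(M|X)/(Var(M|X)+1) = 0.5
X = np.column_stack([m_obs, x, np.ones(n)])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
b_hat = coef[0]
print(round(b_hat, 2))
```

With these values the estimated b path comes out near 0.35 rather than 0.7, so the mediated effect a*b is underestimated — exactly the direction of bias the abstract attributes to mediator measurement error when X is perfectly reliable.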
Range walk error correction and modeling on Pseudo-random photon counting system
Shen, Shanshan; Chen, Qian; He, Weiji
2017-08-01
Signal-to-noise ratio and depth accuracy are modeled for the pseudo-random ranging system with two random processes. The theoretical results developed herein capture the effects of code length and signal energy fluctuation, and are shown to agree with Monte Carlo simulation measurements. First, the SNR is developed as a function of the code length. Using Geiger-mode avalanche photodiodes (GMAPDs), longer code length is proven to reduce the noise effect and improve SNR. Second, the Cramer-Rao lower bound (CRLB) on range accuracy is derived to justify that longer code length can bring better range accuracy. Combining the SNR model and the CRLB model, it is shown that range accuracy can be improved by increasing the code length to reduce the noise-induced error. Third, the CRLB on range accuracy is shown to converge to previously published theories, and the Gauss range walk model is introduced into the range accuracy analysis. Experimental tests also converge to the boundary model presented in this paper. It has been proven that depth error caused by fluctuation of the number of detected photon counts in the laser echo pulse leads to depth drift of the Time Point Spread Function (TPSF). Finally, a numerical fitting function is used to determine the relationship between the depth error and the photon counting ratio. Depth error due to different echo energies is calibrated so that the corrected depth accuracy is improved to 1 cm.
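The qualitative SNR-versus-code-length relationship can be sketched under a common shot-noise-limited assumption: accumulated signal grows linearly with code length while noise grows with its square root, so SNR scales as the square root of the code length. This scaling is an assumption for illustration, not the paper's exact derivation.

```python
import math

def snr_gain_db(n_long, n_short):
    """SNR improvement (dB) from lengthening the pseudo-random code, assuming
    the shot-noise-limited scaling SNR ~ sqrt(code length) (illustrative)."""
    return 10.0 * math.log10(math.sqrt(n_long / n_short))

# Quadrupling the code length doubles SNR, i.e. about a 3 dB gain
print(round(snr_gain_db(1024, 256), 1))
```

Under this scaling, the CRLB on range accuracy, which tightens with SNR, also improves with code length, matching the direction of the abstract's conclusion.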
Moroni, Rossana; Blomstedt, Paul; Wilhelm, Lars; Reinikainen, Tapani; Sippola, Erkki; Corander, Jukka
2010-10-10
Headspace gas chromatographic measurements of ethanol content in blood specimens from suspect drunk drivers are routinely carried out in forensic laboratories. In the widely established standard statistical framework, measurement errors in such data are represented by Gaussian distributions for the population of blood specimens at any given level of ethanol content. It is known that the variance of measurement errors increases as a function of the level of ethanol content, and the standard statistical approach addresses this issue by replacing the unknown population variances by estimates derived from a large sample using a linear regression model. Appropriate statistical analysis of the systematic and random components in the measurement errors is necessary in order to guarantee legally sound security corrections reported to the police authority. Here we address this issue by developing a novel statistical approach that takes into account any potential non-linearity in the relationship between the level of ethanol content and the variability of measurement errors. Our method is based on standard non-parametric kernel techniques for density estimation using a large database of laboratory measurements for blood specimens. Furthermore, we also address the issue of systematic errors in the measurement process by a statistical model that incorporates the sign of the error term in the security correction calculations. Analysis of a set of certified reference material (CRM) blood samples demonstrates the importance of explicitly handling the direction of the systematic errors in establishing the statistical uncertainty about the true level of ethanol content. Use of our statistical framework to aid quality control in the laboratory is also discussed.
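The non-parametric core of such an approach — estimating how error variability changes with ethanol level without assuming linearity — can be sketched with a Gaussian-kernel (Nadaraya-Watson) smoother of squared measurement errors. The data, variance law and bandwidth below are synthetic and illustrative, not the authors' method or data.

```python
import numpy as np

rng = np.random.default_rng(1)
level = rng.uniform(0.2, 3.0, 5000)      # true ethanol levels (illustrative units)
sd_true = 0.01 + 0.02 * level            # error SD grows with level (synthetic law)
errors = rng.normal(0, sd_true)          # simulated measurement errors

def kernel_variance(x0, x, e, h=0.2):
    """Kernel-weighted estimate of Var(error | level = x0), Gaussian kernel, bandwidth h."""
    w = np.exp(-0.5 * ((x - x0) / h) ** 2)
    return np.sum(w * e ** 2) / np.sum(w)

low = kernel_variance(0.5, level, errors)    # variance at a low ethanol level
high = kernel_variance(2.5, level, errors)   # variance at a high ethanol level
print(low < high)
```

Because nothing linear is assumed, the same estimator would also track a non-linear variance profile, which is the flexibility the abstract argues for over the linear-regression variance estimates of the standard framework.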
Tooze, Janet A; Troiano, Richard P; Carroll, Raymond J; Moshfegh, Alanna J; Freedman, Laurence S
2013-06-01
Systematic investigations into the structure of measurement error of physical activity questionnaires are lacking. We propose a measurement error model for a physical activity questionnaire that uses physical activity level (the ratio of total energy expenditure to basal energy expenditure) to relate questionnaire-based reports of physical activity level to true physical activity levels. The 1999-2006 National Health and Nutrition Examination Survey physical activity questionnaire was administered to 433 participants aged 40-69 years in the Observing Protein and Energy Nutrition (OPEN) Study (Maryland, 1999-2000). Valid estimates of participants' total energy expenditure were also available from doubly labeled water, and basal energy expenditure was estimated from an equation; the ratio of those measures estimated true physical activity level ("truth"). We present a measurement error model that accommodates the mixture of errors that arise from assuming a classical measurement error model for doubly labeled water and a Berkson error model for the equation used to estimate basal energy expenditure. The method was then applied to the OPEN Study. Correlations between the questionnaire-based physical activity level and truth were modest (r = 0.32-0.41); attenuation factors (0.43-0.73) indicate that the use of questionnaire-based physical activity level would lead to attenuated estimates of effect size. Results suggest that sample sizes for estimating relationships between physical activity level and disease should be inflated, and that regression calibration can be used to provide measurement error-adjusted estimates of relationships between physical activity and disease.
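The attenuation factors reported above translate directly into the two practical consequences the abstract draws: naive slopes shrink by the factor lambda, so regression calibration divides the naive slope by lambda, and required sample sizes inflate by 1/lambda^2. The disease-model slope below is illustrative; the attenuation factor uses the lower end of the abstract's reported range.

```python
# Classical measurement-error results: E[beta_observed] = lambda * beta_true.
lam = 0.43            # attenuation factor (lower end of the abstract's 0.43-0.73 range)
beta_naive = 0.10     # naive slope of disease outcome on questionnaire PAL (illustrative)

beta_calibrated = beta_naive / lam   # regression-calibration corrected slope
inflation = 1.0 / lam ** 2           # factor by which sample size must grow
print(round(beta_calibrated, 3), round(inflation, 1))
```

At lambda = 0.43 the naive slope understates the true association by more than half, and a study would need roughly 5.4 times the sample size to retain the power it would have had with error-free physical activity measurements.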
A comparison between different error modeling of MEMS applied to GPS/INS integrated systems.
Quinchia, Alex G; Falco, Gianluca; Falletti, Emanuela; Dovis, Fabio; Ferrer, Carles
2013-07-24
Advances in the development of micro-electromechanical systems (MEMS) have made possible the fabrication of cheap and small dimension accelerometers and gyroscopes, which are being used in many applications where the global positioning system (GPS) and the inertial navigation system (INS) integration is carried out, i.e., identifying track defects, terrestrial and pedestrian navigation, unmanned aerial vehicles (UAVs), stabilization of many platforms, etc. Although these MEMS sensors are low-cost, they present different errors, which degrade the accuracy of the navigation systems in a short period of time. Therefore, a suitable modeling of these errors is necessary in order to minimize them and, consequently, improve the system performance. In this work, the techniques currently most used to analyze the stochastic errors that affect these sensors are shown and compared: we examine in detail the autocorrelation, the Allan variance (AV) and the power spectral density (PSD) techniques. Subsequently, an analysis and modeling of the inertial sensors, which combines autoregressive (AR) filters and wavelet de-noising, is also achieved. Since a low-cost INS (MEMS grade) presents error sources with short-term (high-frequency) and long-term (low-frequency) components, we introduce a method that compensates for these error terms by doing a complete analysis of Allan variance, wavelet de-noising and the selection of the level of decomposition for a suitable combination between these techniques. Eventually, in order to assess the stochastic models obtained with these techniques, the Extended Kalman Filter (EKF) of a loosely-coupled GPS/INS integration strategy is augmented with different states. Results show a comparison between the proposed method and the traditional sensor error models under GPS signal blockages using real data collected in urban roadways.
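Of the techniques compared above, the Allan variance is the easiest to sketch: average the signal in clusters of m samples and take half the mean squared difference of successive cluster means. This is the basic non-overlapping estimator; real MEMS characterizations use long records and typically the overlapping variant.

```python
import numpy as np

def allan_variance(y, m):
    """Non-overlapping Allan variance for cluster size m samples."""
    k = len(y) // m
    means = y[:k * m].reshape(k, m).mean(axis=1)  # cluster averages
    return 0.5 * np.mean(np.diff(means) ** 2)

# For a strictly alternating signal, size-1 clusters differ by +/-1 everywhere,
# giving AV = 0.5, while size-2 clusters all average to 0.5, giving AV = 0.
y = np.array([0.0, 1.0] * 8)
print(allan_variance(y, 1), allan_variance(y, 2))
```

Plotting the Allan variance against cluster time on log-log axes is what lets the different stochastic error terms (quantization, angle random walk, bias instability, rate random walk) be read off from the slopes.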
Effects of model error on cardiac electrical wave state reconstruction using data assimilation.
LaVigne, Nicholas S; Holt, Nathan; Hoffman, Matthew J; Cherry, Elizabeth M
2017-09-01
Reentrant electrical scroll waves have been shown to underlie many cardiac arrhythmias, but the inability to observe locations away from the heart surfaces and the restriction of observations to only one or two state variables have made understanding arrhythmia mechanisms challenging. Recently, we showed that data assimilation from spatiotemporally sparse surrogate observations could be used to reconstruct a reliable time series of state estimates of reentrant cardiac electrical waves including unobserved variables in one and three spatial dimensions. However, real cardiac tissue is unlikely to be described accurately by mathematical models because of errors in model formulation and parameterization as well as intrinsic but poorly described spatial heterogeneity of electrophysiological properties in the heart. Here, we extend our previous work to assess how model error affects the accuracy of cardiac state estimates achieved using data assimilation with the Local Ensemble Transform Kalman Filter. We focus on one-dimensional states of discordant alternans characterized by significant wavelength oscillations. We demonstrate that data assimilation can provide high-quality estimates under a wide range of model error conditions, ranging from varying one or more parameter values to using an entirely different model to generate the truth state. We illustrate how multiplicative and additive inflation can be used to reduce error in the state estimates. Even when the truth state contains underlying spatial heterogeneity, we show that using a homogeneous model in the data assimilation algorithm can achieve good results. Overall, we find data assimilation to be a robust approach for reconstructing complex cardiac electrical states corresponding to arrhythmias even in the presence of model error.
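The multiplicative inflation mentioned above is a standard ensemble-filter remedy for model error: scale each member's perturbation about the ensemble mean by a factor rho > 1, widening the spread without moving the mean. A minimal sketch (ensemble values and rho are illustrative):

```python
import numpy as np

def inflate(ensemble, rho):
    """Multiplicative covariance inflation: scale perturbations about the ensemble mean."""
    mean = ensemble.mean(axis=0)
    return mean + rho * (ensemble - mean)

ens = np.array([[1.0], [2.0], [3.0]])   # 3-member ensemble, one state variable
inflated = inflate(ens, 1.1)

# The mean is unchanged; the ensemble variance grows by rho^2
print(float(inflated.mean()), float(inflated.var() / ens.var()))
```

Additive inflation instead adds random perturbations to each member; both serve the same purpose here, preventing the filter from becoming overconfident when the model (e.g. a homogeneous model assimilating a heterogeneous truth) is wrong.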
A method for the quantification of model form error associated with physical systems.
Energy Technology Data Exchange (ETDEWEB)
Wallen, Samuel P.; Brake, Matthew Robert
2014-03-01
In the process of model validation, models are often declared valid when the differences between model predictions and experimental data sets are satisfactorily small. However, little consideration is given to the effectiveness of a model using parameters that deviate slightly from those that were fitted to data, such as a higher load level. Furthermore, few means exist to compare and choose between two or more models that reproduce data equally well. These issues can be addressed by analyzing model form error, which is the error associated with the differences between the physical phenomena captured by models and that of the real system. This report presents a new quantitative method for model form error analysis and applies it to data taken from experiments on tape joint bending vibrations. Two models for the tape joint system are compared, and suggestions for future improvements to the method are given. As the available data set is too small to draw any statistical conclusions, the focus of this paper is the development of a methodology that can be applied to general problems.
Pathiraja, S.; Anghileri, D.; Burlando, P.; Sharma, A.; Marshall, L.; Moradkhani, H.
2018-03-01
The global prevalence of rapid and extensive land use change necessitates hydrologic modelling methodologies capable of handling non-stationarity. This is particularly true in the context of Hydrologic Forecasting using Data Assimilation. Data Assimilation has been shown to dramatically improve forecast skill in hydrologic and meteorological applications, although such improvements are conditional on using bias-free observations and model simulations. A hydrologic model calibrated to a particular set of land cover conditions has the potential to produce biased simulations when the catchment is disturbed. This paper sheds new light on the impacts of bias or systematic errors in hydrologic data assimilation, in the context of forecasting in catchments with changing land surface conditions and a model calibrated to pre-change conditions. We posit that in such cases, the impact of systematic model errors on assimilation or forecast quality is dependent on the inherent prediction uncertainty that persists even in pre-change conditions. Through experiments on a range of catchments, we develop a conceptual relationship between total prediction uncertainty and the impacts of land cover changes on the hydrologic regime to demonstrate how forecast quality is affected when using state estimation Data Assimilation with no modifications to account for land cover changes. This work shows that systematic model errors as a result of changing or changed catchment conditions do not always necessitate adjustments to the modelling or assimilation methodology, for instance through re-calibration of the hydrologic model, time varying model parameters or revised offline/online bias estimation.
Response errors explain the failure of independent-channels models of perception of temporal order
Directory of Open Access Journals (Sweden)
Miguel A García-Pérez
2012-04-01
Independent-channels models of perception of temporal order (also referred to as threshold models or perceptual latency models) have been ruled out because two formal properties of these models (monotonicity and parallelism) are not borne out by data from ternary tasks in which observers must judge whether stimulus A was presented before, after, or simultaneously with stimulus B. These models generally assume that observed responses are authentic indicators of unobservable judgments, but blinks, lapses of attention, or errors in pressing the response keys (maybe, but not only, motivated by time pressure when reaction times are being recorded) may make observers misreport their judgments or simply guess a response. We present an extension of independent-channels models that considers response errors and we show that the model produces psychometric functions that do not satisfy monotonicity and parallelism. The model is illustrated by fitting it to data from a published study in which the ternary task was used. The fitted functions describe very accurately the absence of monotonicity and parallelism shown by the data. These characteristics of empirical data are thus consistent with independent-channels models when response errors are taken into consideration. The implications of these results for the analysis and interpretation of temporal-order judgment data are discussed.
Response errors explain the failure of independent-channels models of perception of temporal order.
García-Pérez, Miguel A; Alcalá-Quintana, Rocío
2012-01-01
Independent-channels models of perception of temporal order (also referred to as threshold models or perceptual latency models) have been ruled out because two formal properties of these models (monotonicity and parallelism) are not borne out by data from ternary tasks in which observers must judge whether stimulus A was presented before, after, or simultaneously with stimulus B. These models generally assume that observed responses are authentic indicators of unobservable judgments, but blinks, lapses of attention, or errors in pressing the response keys (maybe, but not only, motivated by time pressure when reaction times are being recorded) may make observers misreport their judgments or simply guess a response. We present an extension of independent-channels models that considers response errors and we show that the model produces psychometric functions that do not satisfy monotonicity and parallelism. The model is illustrated by fitting it to data from a published study in which the ternary task was used. The fitted functions describe very accurately the absence of monotonicity and parallelism shown by the data. These characteristics of empirical data are thus consistent with independent-channels models when response errors are taken into consideration. The implications of these results for the analysis and interpretation of temporal order judgment data are discussed.
An approach to improving the structure of error-handling code in the Linux kernel
DEFF Research Database (Denmark)
Saha, Suman; Lawall, Julia; Muller, Gilles
2011-01-01
The C language does not provide any abstractions for exception handling or other forms of error handling, leaving programmers to devise their own conventions for detecting and handling errors. The Linux coding style guidelines suggest placing error handling code at the end of each function, where...... an automatic program transformation that transforms error-handling code into this style. We have applied our transformation to the Linux 2.6.34 kernel source code, on which it reorganizes the error handling code of over 1800 functions, in about 25 minutes....
Error Analysis on the Estimation of Cumulative Infiltration in Soil Using the Green and Ampt Model
Directory of Open Access Journals (Sweden)
Muhamad Askari
2006-08-01
The Green and Ampt infiltration model remains useful for describing the infiltration process because of its clear physical basis and the availability of parameter values for a wide range of soils. The objective of this study was to analyze error in the estimation of cumulative infiltration in soil using the Green and Ampt model and to design a laboratory experiment for measuring cumulative infiltration. The model parameters were determined from soil physical properties obtained in the laboratory. The Newton-Raphson method was used to estimate the wetting front during calculation, implemented in Visual Basic for Applications (VBA) in MS Word. The results showed that one parameter contributed the highest error in the estimation of cumulative infiltration, followed by K, H0, H1, and t, respectively. They also showed that the calculated cumulative infiltration is always lower than both the measured cumulative infiltration and the volumetric soil water content.
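The Newton-Raphson step named in the abstract can be sketched as follows for the implicit Green-Ampt equation; the parameter values are illustrative, not the study's, and the function name is ours:

```python
import math

def green_ampt_F(K, psi_dtheta, t, tol=1e-10, max_iter=50):
    """Cumulative infiltration F(t) from the implicit Green-Ampt equation

        F - a*ln(1 + F/a) = K*t,   with a = (suction head) * (moisture deficit),

    solved for F by Newton-Raphson iteration.
    """
    a = psi_dtheta
    F = max(K * t, 1e-9)              # initial guess: the gravity-only term K*t
    for _ in range(max_iter):
        g = F - a * math.log(1.0 + F / a) - K * t   # residual of the equation
        dg = F / (a + F)                            # dg/dF
        step = g / dg
        F -= step
        if abs(step) < tol:
            break
    return F

# Illustrative values: K = 0.65 cm/h, a = 11 cm, t = 2 h
F = green_ampt_F(K=0.65, psi_dtheta=11.0, t=2.0)
```

Because the residual is convex in F, Newton-Raphson converges reliably from any positive starting guess; F always exceeds K*t since the suction term adds infiltration on top of the gravity-driven component.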
Comparison of Neural Network Error Measures for Simulation of Slender Marine Structures
DEFF Research Database (Denmark)
Christiansen, Niels H.; Voie, Per Erlend Torbergsen; Winther, Ole
2014-01-01
Training of an artificial neural network (ANN) adjusts the internal weights of the network in order to minimize a predefined error measure. This error measure is given by an error function. Several different error functions are suggested in the literature. However, by far the most common measure...... for regression is the mean square error. This paper looks into the possibility of improving the performance of neural networks by selecting or defining error functions that are tailor-made for a specific objective. A neural network trained to simulate tension forces in an anchor chain on a floating offshore...... platform is designed and tested. The purpose of setting up the network is to reduce calculation time in a fatigue life analysis. Therefore, the networks trained on different error functions are compared with respect to accuracy of rain flow counts of stress cycles over a number of time series simulations...
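The idea of swapping the training error function can be sketched on a toy linear model; this is our illustration of the principle, not the paper's network, and the learning rate, data, and function names are assumptions:

```python
import numpy as np

def fit_linear(x, y, grad_loss, lr=0.1, epochs=2000):
    """Fit y ~ w*x + b by gradient descent under a chosen error measure.

    grad_loss(residual) returns d(loss)/d(residual); swapping it swaps the
    training objective (here MSE vs MAE), mirroring the idea of selecting
    an error function tailored to the task.
    """
    w, b = 0.0, 0.0
    for _ in range(epochs):
        r = w * x + b - y              # residuals of the current fit
        g = grad_loss(r)               # per-sample gradient w.r.t. residual
        w -= lr * np.mean(g * x)
        b -= lr * np.mean(g)
    return w, b

rng = np.random.default_rng(1)
x = np.linspace(0.0, 1.0, 50)
y = 2.0 * x + 1.0 + rng.normal(0.0, 0.05, 50)   # noisy line, true w=2, b=1
w_mse, b_mse = fit_linear(x, y, grad_loss=lambda r: 2.0 * r)  # mean square error
w_mae, b_mae = fit_linear(x, y, grad_loss=np.sign)            # mean absolute error
```

With symmetric light-tailed noise both objectives recover the line; the choice matters once the error measure encodes the downstream objective, e.g. weighting the load cycles that dominate a fatigue analysis.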
Canadian RCM projected climate-change signal and its sensitivity to model errors
Sushama, L.; Laprise, R.; Caya, D.; Frigon, A.; Slivitzky, M.
2006-12-01
Climate change is commonly evaluated as the difference between simulated climates under future and current forcings, based on the assumption that systematic errors in the current-climate simulation do not affect the climate-change signal. In this paper, we investigate the Canadian Regional Climate Model (CRCM) projected climate changes in the climatological means and extremes of selected basin-scale surface fields and its sensitivity to model errors for Fraser, Mackenzie, Yukon, Nelson, Churchill and Mississippi basins, covering the major climate regions in North America, using current (1961-1990) and future climate simulations (2041-2070; A2 and IS92a scenarios) performed with two versions of CRCM. Assessment of errors in both model versions suggests the presence of nonnegligible biases in the surface fields, due primarily to the internal dynamics and physics of the regional model and to the errors in the driving data at the boundaries. In general, results demonstrate that, in spite of the errors in the two model versions, the simulated climate-change signals associated with the long-term monthly climatology of various surface water balance components (such as precipitation, evaporation, snow water equivalent (SWE), runoff and soil moisture) are consistent in sign, but differ in magnitude. The same is found for projected changes to the low-flow characteristics (frequency, timing and return levels) studied here. High-flow characteristics, particularly the seasonal distribution and return levels, appear to be more sensitive to the model version. CRCM climate-change projections indicate an increase in the average annual precipitation for all basins except Mississippi, while annual runoff increases in Fraser, Mackenzie and Yukon basins. A decrease in runoff is projected for Mississippi. A significant decrease in snow cover is projected for all basins, with maximum decrease in Fraser. Significant changes are also noted in the frequency, timing and return levels for low
Zhou, Tony; Dickson, Jennifer L; Geoffrey Chase, J
2018-01-01
Continuous glucose monitoring (CGM) devices have been effective in managing diabetes and offer potential benefits for use in the intensive care unit (ICU). Use of CGM devices in the ICU has been limited, primarily due to the higher point accuracy errors over currently used traditional intermittent blood glucose (BG) measures. General models of CGM errors, including drift and random errors, are lacking, but would enable better design of protocols to utilize these devices. This article presents an autoregressive (AR) based modeling method that separately characterizes the drift and random noise of the GlySure CGM sensor (GlySure Limited, Oxfordshire, UK). Clinical sensor data (n = 33) and reference measurements were used to generate 2 AR models to describe sensor drift and noise. These models were used to generate 100 Monte Carlo simulations based on reference blood glucose measurements. These were then compared to the original CGM clinical data using mean absolute relative difference (MARD) and a Trend Compass. The point accuracy MARD was very similar between simulated and clinical data (9.6% vs 9.9%). A Trend Compass was used to assess trend accuracy, and found simulated and clinical sensor profiles were similar (simulated trend index 11.4° vs clinical trend index 10.9°). The model and method accurately represents cohort sensor behavior over patients, providing a general modeling approach to any such sensor by separately characterizing each type of error that can arise in the data. Overall, it enables better protocol design based on accurate expected CGM sensor behavior, as well as enabling the analysis of what level of each type of sensor error would be necessary to obtain desired glycemic control safety and performance with a given protocol.
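The drift-plus-noise structure described above can be sketched with two AR(1) processes overlaid on a reference trace; the coefficients below are illustrative placeholders, not the fitted GlySure values, and the function names are ours:

```python
import numpy as np

def simulate_cgm(ref_bg, drift_phi=0.99, drift_sd=0.05,
                 noise_phi=0.5, noise_sd=1.0, rng=None):
    """Overlay separate AR(1) drift and AR(1) random noise on reference BG.

    A simplified stand-in for the two-AR-model idea: slow, highly
    persistent drift plus faster-decorrelating random error.
    """
    rng = np.random.default_rng(42) if rng is None else rng
    n = len(ref_bg)
    drift = np.zeros(n)
    noise = np.zeros(n)
    for k in range(1, n):
        drift[k] = drift_phi * drift[k - 1] + rng.normal(0.0, drift_sd)
        noise[k] = noise_phi * noise[k - 1] + rng.normal(0.0, noise_sd)
    return ref_bg + drift + noise

def mard(sim, ref):
    """Mean absolute relative difference, in percent."""
    return 100.0 * np.mean(np.abs(sim - ref) / ref)

ref = np.full(500, 8.0)        # constant 8 mmol/L reference trace (illustrative)
sim = simulate_cgm(ref)        # one Monte Carlo realization of the sensor trace
err = mard(sim, ref)           # point-accuracy summary of this realization
```

Repeating `simulate_cgm` many times against real reference measurements yields the Monte Carlo ensemble against which point accuracy (MARD) and trend accuracy can be compared, as the abstract describes.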
A heteroscedastic measurement error model for method comparison data with replicate measurements.
Nawarathna, Lakshika S; Choudhary, Pankaj K
2015-03-30
Measurement error models offer a flexible framework for modeling data collected in studies comparing methods of quantitative measurement. These models generally make two simplifying assumptions: (i) the measurements are homoscedastic, and (ii) the unobservable true values of the methods are linearly related. One or both of these assumptions may be violated in practice. In particular, error variabilities of the methods may depend on the magnitude of measurement, or the true values may be nonlinearly related. Data with these features call for a heteroscedastic measurement error model that allows nonlinear relationships in the true values. We present such a model for the case when the measurements are replicated, discuss its fitting, and explain how to evaluate similarity of measurement methods and agreement between them, which are two common goals of data analysis, under this model. Model fitting involves dealing with lack of a closed form for the likelihood function. We consider estimation methods that approximate either the likelihood or the model to yield approximate maximum likelihood estimates. The fitting methods are evaluated in a simulation study. The proposed methodology is used to analyze a cholesterol dataset. Copyright © 2015 John Wiley & Sons, Ltd.
Hardy, Ryan A.; Nerem, R. Steven; Wiese, David N.
2017-12-01
Systematic errors in Gravity Recovery and Climate Experiment (GRACE) monthly mass estimates over the Greenland and Antarctic ice sheets can originate from low-frequency biases in the European Centre for Medium-Range Weather Forecasts (ECMWF) Operational Analysis model, the atmospheric component of the Atmosphere and Ocean De-Aliasing Level-1B (AOD1B) product used to forward model atmospheric and ocean gravity signals in GRACE processing. These biases are revealed in differences in surface pressure between the ECMWF Operational Analysis model, state-of-the-art reanalyses, and in situ surface pressure measurements. While some of these errors are attributable to well-understood discrete model changes and have published corrections, we examine errors these corrections do not address. We compare multiple models and in situ data in Antarctica and Greenland to determine which models have the most skill relative to monthly averages of the dealiasing model. We also evaluate linear combinations of these models and synthetic pressure fields generated from direct interpolation of pressure observations. These models consistently reveal drifts in the dealiasing model that cause the acceleration of Antarctica's mass loss between April 2002 and August 2016 to be underestimated by approximately 4 Gt yr-2. We find similar results after attempting to solve the inverse problem, recovering pressure biases directly from the GRACE Jet Propulsion Laboratory RL05.1M mascon solutions. Over Greenland, we find a 2 Gt yr-1 bias in mass trend. While our analysis focuses on errors in Release 05 of AOD1B, we also evaluate the new AOD1B RL06 product. We find that this new product mitigates some of the aforementioned biases.
Bao, T.; Diks, C.; Li, H.
We estimate the CAPM model on European stock market data, allowing for asymmetric and fat-tailed return distributions using independent and identically asymmetric power distributed (IIAPD) innovations. The results indicate that the generalized CAPM with IIAPD errors has desirable properties. It is
Modeling human tracking error in several different anti-tank systems
Kleinman, D. L.
1981-01-01
An optimal control model for generating time histories of human tracking errors in antitank systems is outlined. Monte Carlo simulations of human operator responses for three Army antitank systems are compared. System/manipulator dependent data comparisons reflecting human operator limitations in perceiving displayed quantities and executing intended control motions are presented. Motor noise parameters are also discussed.
Confidence Intervals for Weighted Composite Scores under the Compound Binomial Error Model
Kim, Kyung Yong; Lee, Won-Chan
2018-01-01
Reporting confidence intervals with test scores helps test users make important decisions about examinees by providing information about the precision of test scores. Although a variety of estimation procedures based on the binomial error model are available for computing intervals for test scores, these procedures assume that items are randomly…
The Preisach hysteresis model: Error bounds for numerical identification and inversion
Czech Academy of Sciences Publication Activity Database
Krejčí, Pavel
2013-01-01
Vol. 6, No. 1 (2013), pp. 101-119. ISSN 1937-1632. R&D Projects: GA ČR GAP201/10/2315. Institutional support: RVO:67985840. Keywords: hysteresis; Preisach model; error bounds. Subject RIV: BA - General Mathematics. http://www.aimsciences.org/journals/displayArticlesnew.jsp?paperID=7779
DEFF Research Database (Denmark)
Niemann, Hans Henrik; Stoustrup, Jakob
1996-01-01
The design problem of filters for robust failure detection and isolation (FDI) is addressed in this paper. The failure detection problem will be considered with respect to both modeling errors and disturbances. Both an approach based on failure detection observers and an approach based on...
Measurement Error and Bias in Value-Added Models. Research Report. ETS RR-17-25
Kane, Michael T.
2017-01-01
By aggregating residual gain scores (the differences between each student's current score and a predicted score based on prior performance) for a school or a teacher, value-added models (VAMs) can be used to generate estimates of school or teacher effects. It is known that random errors in the prior scores will introduce bias into predictions of…
Rank-based Tests of the Cointegrating Rank in Semiparametric Error Correction Models
Hallin, M.; van den Akker, R.; Werker, B.J.M.
2012-01-01
Abstract: This paper introduces rank-based tests for the cointegrating rank in an Error Correction Model with i.i.d. elliptical innovations. The tests are asymptotically distribution-free, and their validity does not depend on the actual distribution of the innovations. This result holds despite the
Error Analysis of Some Demand Simplifications in Hydraulic Models of Water Supply Networks
Directory of Open Access Journals (Sweden)
Joaquín Izquierdo
2013-01-01
Mathematical modeling of water distribution networks makes use of simplifications aimed to optimize the development and use of the mathematical models involved. Simplified models are used systematically by water utilities, frequently with no awareness of the implications of the assumptions used. Some simplifications are derived from the various levels of granularity at which a network can be considered. This is the case of some demand simplifications, specifically, when consumptions associated with a line are equally allocated to the ends of the line. In this paper, we present examples of situations where this kind of simplification produces models that are very unrealistic. We also identify the main variables responsible for the errors. By performing some error analysis, we assess to what extent such a simplification is valid. Using this information, guidelines are provided that enable the user to establish if a given simplification is acceptable or, on the contrary, supplies information that differs substantially from reality. We also develop easy to implement formulae that enable the allocation of inner line demand to the line ends with minimal error; finally, we assess the errors associated with the simplification and locate the points of a line where maximum discrepancies occur.
Hsiao, Yu-Yu; Kwok, Oi-Man; Lai, Mark H. C.
2018-01-01
Path models with observed composites based on multiple items (e.g., mean or sum score of the items) are commonly used to test interaction effects. Under this practice, researchers generally assume that the observed composites are measured without errors. In this study, we reviewed and evaluated two alternative methods within the structural…
Sensitivity of subject-specific models to errors in musculo-skeletal geometry
Carbone, V.; van der Krogt, M.M.; Koopman, H.F.J.M.; Verdonschot, N.
2012-01-01
Subject-specific musculo-skeletal models of the lower extremity are an important tool for investigating various biomechanical problems, for instance the results of surgery such as joint replacements and tendon transfers. The aim of this study was to assess the potential effects of errors in
Probabilistic Modeling of Timber Structures
DEFF Research Database (Denmark)
Köhler, J.D.; Sørensen, John Dalsgaard; Faber, Michael Havbro
2005-01-01
The present paper contains a proposal for the probabilistic modeling of timber material properties. It is produced in the context of the Probabilistic Model Code (PMC) of the Joint Committee on Structural Safety (JCSS) and of the COST action E24 'Reliability of Timber Structures'. The present pro...... probabilistic model for these basic properties is presented and possible refinements are given related to updating of the probabilistic model given new information, modeling of the spatial variation of strength properties and the duration of load effects.
SPAR Model Structural Efficiencies
Energy Technology Data Exchange (ETDEWEB)
John Schroeder; Dan Henry
2013-04-01
The Nuclear Regulatory Commission (NRC) and the Electric Power Research Institute (EPRI) are supporting initiatives aimed at improving the quality of probabilistic risk assessments (PRAs). Included in these initiatives are the resolution of key technical issues that have been judged to have the most significant influence on the baseline core damage frequency of the NRC’s Standardized Plant Analysis Risk (SPAR) models and licensee PRA models. Previous work addressed issues associated with support system initiating event analysis and loss of off-site power/station blackout analysis. The key technical issues were: • Development of a standard methodology and implementation of support system initiating events • Treatment of loss of offsite power • Development of standard approach for emergency core cooling following containment failure Some of the related issues were not fully resolved. This project continues the effort to resolve outstanding issues. The work scope was intended to include substantial collaboration with EPRI; however, EPRI has had other higher priority initiatives to support. Therefore this project has addressed SPAR modeling issues. The issues addressed are • SPAR model transparency • Common cause failure modeling deficiencies and approaches • Ac and dc modeling deficiencies and approaches • Instrumentation and control system modeling deficiencies and approaches
Sharp Threshold Detection Based on Sup-norm Error rates in High-dimensional Models
DEFF Research Database (Denmark)
Callot, Laurent; Caner, Mehmet; Kock, Anders Bredahl
We propose a new estimator, the thresholded scaled Lasso, in high dimensional threshold regressions. First, we establish an upper bound on the sup-norm estimation error of the scaled Lasso estimator of Lee et al. (2012). This is a non-trivial task as the literature on high-dimensional models has...... focused almost exclusively on estimation errors in stronger norms. We show that this sup-norm bound can be used to distinguish between zero and non-zero coefficients at a much finer scale than would have been possible using classical oracle inequalities. Thus, our sup-norm bound is tailored to consistent...
Goal-oriented error estimation for Cahn-Hilliard models of binary phase transition
van der Zee, Kristoffer G.
2010-10-27
A posteriori estimates of errors in quantities of interest are developed for the nonlinear system of evolution equations embodied in the Cahn-Hilliard model of binary phase transition. These involve the analysis of wellposedness of dual backward-in-time problems and the calculation of residuals. Mixed finite element approximations are developed and used to deliver numerical solutions of representative problems in one- and two-dimensional domains. Estimated errors are shown to be quite accurate in these numerical examples. © 2010 Wiley Periodicals, Inc.
International Nuclear Information System (INIS)
Suh, Sang Moon; Cheon, Se Woo; Lee, Yong Hee; Lee, Jung Woon; Park, Young Taek
1996-01-01
SACOM (Simulation Analyser with Cognitive Operator Model) is being developed at the Korea Atomic Energy Research Institute to simulate human operators' cognitive characteristics during emergency situations of nuclear power plants. An operator model with error mechanisms has been developed and combined into SACOM to simulate the human operator's cognitive information process based on Rasmussen's decision ladder model. The operational logic for five different cognitive activities (Agents), the operator's attentional control (Controller), short-term memory (Blackboard), and long-term memory (Knowledge Base) has been developed and implemented on a blackboard architecture. A trial simulation with a scenario for emergency operation has been performed to verify the operational logic. It was found that the operator model with error mechanisms is suitable for simulating the operator's cognitive behavior in emergency situations.
Mutation-selection dynamics and error threshold in an evolutionary model for Turing machines.
Musso, Fabio; Feverati, Giovanni
2012-01-01
We investigate the mutation-selection dynamics for an evolutionary computation model based on Turing machines. The use of Turing machines allows for very simple mechanisms of code growth and code activation/inactivation through point mutations. To any value of the point mutation probability corresponds a maximum amount of active code that can be maintained by selection and the Turing machines that reach it are said to be at the error threshold. Simulations with our model show that the Turing machines population evolve toward the error threshold. Mathematical descriptions of the model point out that this behaviour is due more to the mutation-selection dynamics than to the intrinsic nature of the Turing machines. This indicates that this result is much more general than the model considered here and could play a role also in biological evolution. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.
Three-dimensional ray-tracing model for the study of advanced refractive errors in keratoconus.
Schedin, Staffan; Hallberg, Per; Behndig, Anders
2016-01-20
We propose a numerical three-dimensional (3D) ray-tracing model for the analysis of advanced corneal refractive errors. The 3D modeling was based on measured corneal elevation data by means of Scheimpflug photography. A mathematical description of the measured corneal surfaces from a keratoconus (KC) patient was used for the 3D ray tracing, based on Snell's law of refraction. A model of a commercial intraocular lens (IOL) was included in the analysis. By modifying the posterior IOL surface, it was shown that the imaging quality could be significantly improved. The RMS values were reduced by approximately 50% close to the retina, both for on- and off-axis geometries. The 3D ray-tracing model can constitute a basis for simulation of customized IOLs that are able to correct the advanced, irregular refractive errors in KC.
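The per-surface refraction step underlying such ray tracing is the vector form of Snell's law; a minimal sketch follows, with illustrative refractive indices (the corneal surfaces, IOL geometry, and elevation data of the study are not reproduced here):

```python
import numpy as np

def refract(d, n, n1, n2):
    """Refract unit direction d at a surface with unit normal n using the
    vector form of Snell's law. Returns None on total internal reflection."""
    d = d / np.linalg.norm(d)
    n = n / np.linalg.norm(n)
    cos_i = -np.dot(d, n)
    if cos_i < 0.0:                 # flip normal to oppose the incoming ray
        n, cos_i = -n, -cos_i
    eta = n1 / n2
    sin2_t = eta**2 * (1.0 - cos_i**2)
    if sin2_t > 1.0:
        return None                 # total internal reflection
    # transmitted direction: tangential part scaled by eta, normal part rebuilt
    return eta * d + (eta * cos_i - np.sqrt(1.0 - sin2_t)) * n

# Ray entering an air/cornea-like interface (n = 1.0 -> 1.376) at 30 degrees
d = np.array([np.sin(np.radians(30.0)), -np.cos(np.radians(30.0)), 0.0])
t = refract(d, np.array([0.0, 1.0, 0.0]), 1.0, 1.376)
```

Tracing a bundle of such rays through each measured surface and accumulating their intersections with planes near the retina yields the RMS spot sizes the abstract uses to compare IOL designs.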
Starns, Jeffrey J; Dubé, Chad; Frelinger, Matthew E
2018-05-01
In this report, we evaluate single-item and forced-choice recognition memory for the same items and use the resulting accuracy and reaction time data to test the predictions of discrete-state and continuous models. For the single-item trials, participants saw a word and indicated whether or not it was studied on a previous list. The forced-choice trials had one studied and one non-studied word that both appeared in the earlier single-item trials and both received the same response. Thus, forced-choice trials always had one word with a previous correct response and one with a previous error. Participants were asked to select the studied word regardless of whether they previously called both words "studied" or "not studied." The diffusion model predicts that forced-choice accuracy should be lower when the word with a previous error had a fast versus a slow single-item RT, because fast errors are associated with more compelling misleading memory retrieval. The two-high-threshold (2HT) model does not share this prediction because all errors are guesses, so error RT is not related to memory strength. A low-threshold version of the discrete state approach predicts an effect similar to the diffusion model, because errors are a mixture of responses based on misleading retrieval and guesses, and the guesses should tend to be slower. Results showed that faster single-trial errors were associated with lower forced-choice accuracy, as predicted by the diffusion and low-threshold models. Copyright © 2018 Elsevier Inc. All rights reserved.
A New Method to Solving AR Model Parameters Considering Random Errors of Design Matrix
Directory of Open Access Journals (Sweden)
YAO Yibin
2017-11-01
The ordinary least squares method cannot handle the situation in which errors exist both in the design matrix and in the observation vector when computing the parameter values of an AR model. In this article, a new method is proposed that considers the random errors of the design matrix. Because the design matrix and the observation vector come from the same source, the number of error-bearing quantities can be equalized by introducing virtual observations. The problem can then be solved within the framework of ordinary least squares through an equivalence transformation of the observation equation. Results from both simulated and observed data show that the new method is superior to the SVD method and to ordinary least squares, verifying its feasibility and effectiveness.
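The errors-in-design-matrix problem can be illustrated with the classical SVD-based total least squares estimator, which is the comparison baseline the abstract names (the paper's virtual-observation method itself is not reproduced here; the toy AR(2) coefficients are ours):

```python
import numpy as np

def tls(A, b):
    """Total least squares solution of A x ~ b when both A and b carry errors.

    Classical SVD approach: the solution comes from the right singular
    vector of [A | b] with the smallest singular value.
    """
    n = A.shape[1]
    C = np.column_stack([A, b])
    _, _, Vt = np.linalg.svd(C)
    v = Vt[-1]                       # v is proportional to [x; -1]
    return -v[:n] / v[n]

# Toy AR(2) series: y_k = 1.2*y_{k-1} - 0.5*y_{k-2} + small noise
rng = np.random.default_rng(7)
y = np.zeros(300)
y[0], y[1] = 1.0, 0.8
for k in range(2, 300):
    y[k] = 1.2 * y[k - 1] - 0.5 * y[k - 2] + rng.normal(0.0, 0.01)

A = np.column_stack([y[1:-1], y[:-2]])     # design matrix of noisy lagged values
b = y[2:]
phi_tls = tls(A, b)
phi_ols = np.linalg.lstsq(A, b, rcond=None)[0]
```

In an AR model the lagged values in the design matrix are the same noisy observations that appear in the observation vector, which is exactly why a method treating both as error-bearing is attractive.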
The approach of Bayesian model indicates media awareness of medical errors
Ravichandran, K.; Arulchelvan, S.
2016-06-01
This research study brings out the factors behind the increase in medical malpractice in the Indian subcontinent in the present-day environment and the impact of television media awareness on it. Increased media reporting of medical malpractice and errors leads to hospitals taking corrective action and improving the quality of the medical services they provide. The model of Cultivation Theory can be used to measure the influence of media in creating awareness of medical errors. Patients' perceptions of various errors rendered by the medical industry, drawn from different parts of India, were taken up for this study. A Bayesian method was used for the data analysis; it gives absolute values indicating how well the recommended values are satisfied. The study also examines the impact of family medical records maintained online by the family doctor in reducing medical malpractice, underscoring the importance of service quality in the medical industry through ICT.
Directory of Open Access Journals (Sweden)
Shuting Wan
2015-06-01
Natural wind is stochastic, being characterized by its speed and direction which change randomly and frequently. Because of the certain lag in control systems and the yaw body itself, wind turbines cannot be accurately aligned toward the wind direction when the wind speed and wind direction change frequently. Thus, wind turbines often suffer from a series of engineering issues during operation, including frequent yaw, vibration overruns and downtime. This paper aims to study the effects of yaw error on wind turbine running characteristics at different wind speeds and control stages by establishing a wind turbine model, yaw error model and the equivalent wind speed model that includes the wind shear and tower shadow effects. Formulas for the relevant effect coefficients Tc, Sc and Pc were derived. The simulation results indicate that the effects of the aerodynamic torque, rotor speed and power output due to yaw error at different running stages are different and that the effect rules for each coefficient are not identical when the yaw error varies. These results may provide theoretical support for optimizing the yaw control strategies for each stage to increase the running stability of wind turbines and the utilization rate of wind energy.
Mitigating Errors in External Respiratory Surrogate-Based Models of Tumor Position
International Nuclear Information System (INIS)
Malinowski, Kathleen T.; McAvoy, Thomas J.; George, Rohini; Dieterich, Sonja; D'Souza, Warren D.
2012-01-01
Purpose: To investigate the effect of tumor site, measurement precision, tumor–surrogate correlation, training data selection, model design, and interpatient and interfraction variations on the accuracy of external marker-based models of tumor position. Methods and Materials: Cyberknife Synchrony system log files comprising synchronously acquired positions of external markers and the tumor from 167 treatment fractions were analyzed. The accuracy of Synchrony, ordinary-least-squares regression, and partial-least-squares regression models for predicting the tumor position from the external markers was evaluated. The quantity and timing of the data used to build the predictive model were varied. The effects of tumor–surrogate correlation and the precision in both the tumor and the external surrogate position measurements were explored by adding noise to the data. Results: The tumor position prediction errors increased over the duration of a fraction. Increasing the training data quantities did not always lead to more accurate models. Adding uncorrelated noise to the external marker-based inputs degraded the tumor–surrogate correlation models by 16% for partial-least-squares and 57% for ordinary-least-squares. External marker and tumor position measurement errors led to tumor position prediction changes 0.3–3.6 times the magnitude of the measurement errors, varying widely with model algorithm. The tumor position prediction errors were significantly associated with the patient index but not with the fraction index or tumor site. Partial-least-squares was as accurate as Synchrony and more accurate than ordinary-least-squares. Conclusions: The accuracy of surrogate-based inferential models of tumor position was affected by all the investigated factors, except for the tumor site and fraction index.
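The sensitivity of an ordinary-least-squares surrogate model to input noise can be sketched on synthetic data; the marker count, coefficients, and noise levels below are illustrative, not taken from the Cyberknife log files:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: tumor position driven by 3 external-marker signals
n, p = 500, 3
markers = rng.normal(size=(n, p))
true_coef = np.array([0.8, -0.5, 0.3])
tumor = markers @ true_coef + 0.05 * rng.normal(size=n)

def ols_rmse(noise_sd):
    """Fit OLS on noise-corrupted marker inputs; return prediction RMSE."""
    noisy = markers + noise_sd * rng.normal(size=markers.shape)
    X = np.column_stack([np.ones(n), noisy])
    beta, *_ = np.linalg.lstsq(X, tumor, rcond=None)
    return np.sqrt(np.mean((X @ beta - tumor) ** 2))

clean_rmse = ols_rmse(0.0)
noisy_rmse = ols_rmse(0.5)   # uncorrelated input noise degrades the fit
```

The degradation with uncorrelated input noise mirrors the abstract's finding that OLS is more noise-sensitive than the partial-least-squares alternative.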
A dynamic model to predict modulation sidebands of a planetary gear set having manufacturing errors
Inalpolat, Murat; Kahraman, Ahmet
2010-02-01
In this study, a nonlinear time-varying dynamic model is proposed to predict modulation sidebands of planetary gear sets. This discrete dynamic model includes periodically time-varying gear mesh stiffnesses and the nonlinearities associated with tooth separations. The model uses forms of gear mesh interface excitations that are amplitude and frequency modulated due to a class of gear manufacturing errors to predict dynamic forces at all sun-planet and ring-planet gear meshes. The predicted gear mesh force spectra are shown to exhibit well-defined modulation sidebands at frequencies associated with the rotational speeds of gears relative to the planet carrier. This model is further combined with a previously developed model that accounts for amplitude modulations due to rotation of the carrier to predict acceleration spectra at a fixed position in the planetary transmission housing. Individual contributions of each gear error in the form of amplitude and frequency modulations are illustrated through an example analysis. Comparisons are made to measured spectra to demonstrate the capability of the model in predicting the sidebands of a planetary gear set with gear manufacturing errors and a rotating carrier.
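The basic sideband mechanism the abstract describes, amplitude modulation of a gear-mesh tone at the carrier rotation frequency producing spectral lines at the mesh frequency plus or minus the carrier frequency, can be sketched with a toy signal (the frequencies and modulation depth are illustrative, not from the dynamic model):

```python
import numpy as np

fs = 2048.0                      # sampling rate, Hz
t = np.arange(0, 4, 1 / fs)      # 4 s record -> 0.25 Hz frequency resolution
f_mesh, f_carrier = 200.0, 5.0   # hypothetical mesh and carrier frequencies

# Gear-mesh tone amplitude-modulated by the carrier rotation
signal = (1 + 0.5 * np.cos(2 * np.pi * f_carrier * t)) * np.cos(2 * np.pi * f_mesh * t)

spec = np.abs(np.fft.rfft(signal)) / len(t)   # single-sided amplitude/2 per tone
freqs = np.fft.rfftfreq(len(t), 1 / fs)

def amp_at(f):
    """Spectrum magnitude at the bin nearest frequency f."""
    return spec[np.argmin(np.abs(freqs - f))]
```

The spectrum shows the mesh line at 200 Hz flanked by sidebands at 195 and 205 Hz, i.e. at the mesh frequency offset by the carrier-relative rotational speed, exactly the signature the model predicts for manufacturing errors.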
Hanson, C. V.; Schmidt, A.; Law, B. E.; Moore, W.
2015-12-01
The validity of land biosphere model outputs relies on accurate representations of ecosystem processes within the model. Typically, a vegetation or land cover type for a given area (several km² or larger resolution) is assumed to have uniform properties. The limited spatial and temporal resolution of models prevents resolving finer-scale heterogeneous flux patterns that arise from variations in vegetation. This representation error must be quantified carefully if models are informed through data assimilation, in order to assign appropriate weighting to model outputs and measurement data. The representation error is usually only estimated, or ignored entirely, due to the difficulty in determining reasonable values. UAS-based gas sensors allow measurements of atmospheric CO2 concentrations with unprecedented spatial resolution, providing a means of determining the representation error for CO2 fluxes empirically. In this study we use three-dimensional CO2 concentration data in combination with high-resolution footprint analyses to quantify the representation error for modelled CO2 fluxes at typical resolutions of regional land biosphere models. CO2 concentration data were collected using an Atlatl X6A hexacopter carrying a highly calibrated closed-path infrared gas analyzer based sampling system with an uncertainty of ≤ ±0.2 ppm CO2. Gas concentration data were mapped in three dimensions using the UAS on-board position data and compared to footprints generated using WRF 3.61.
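The quantity being estimated can be sketched numerically: when a coarse model pixel carries a single flux value, the representation error is the spread of the fine-scale fluxes that value stands in for. The field below is synthetic; the study derives it empirically from UAS CO2 data and footprint analyses:

```python
import numpy as np

rng = np.random.default_rng(8)

# Synthetic fine-scale (e.g., 100 m) CO2 flux field inside one coarse pixel,
# in umol m^-2 s^-1 (numbers illustrative, not the study's data)
fine = rng.normal(loc=-4.0, scale=1.5, size=(100, 100))

coarse_value = fine.mean()                 # what the model pixel resolves
representation_error = fine.std(ddof=1)    # subgrid spread the model cannot see
```

In data assimilation this spread would be added to the observation-error budget so that point measurements are not over-weighted against coarse model output.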
Tapsoba, Jean de Dieu; Lee, Shen-Ming; Wang, Ching-Yun
2014-02-20
Data collected in many epidemiological or clinical research studies are often contaminated with measurement errors that may be of classical or Berkson error type. The measurement error may also be a combination of both classical and Berkson errors and failure to account for both errors could lead to unreliable inference in many situations. We consider regression analysis in generalized linear models when some covariates are prone to a mixture of Berkson and classical errors, and calibration data are available only for some subjects in a subsample. We propose an expected estimating equation approach to accommodate both errors in generalized linear regression analyses. The proposed method can consistently estimate the classical and Berkson error variances based on the available data, without knowing the mixture percentage. We investigated its finite-sample performance numerically. Our method is illustrated by an application to real data from an HIV vaccine study. Copyright © 2013 John Wiley & Sons, Ltd.
Structure of the standard model
Energy Technology Data Exchange (ETDEWEB)
Langacker, Paul [Pennsylvania Univ., PA (United States). Dept. of Physics
1996-07-01
This lecture presents the structure of the standard model, approaching the following aspects: the standard model Lagrangian, spontaneous symmetry breaking, gauge interactions (covering charged currents, quantum electrodynamics, the neutral current and gauge self-interactions), and problems with the standard model, such as the gauge, fermion, Higgs and hierarchy, strong CP and graviton problems.
Autcha Araveeporn
2013-01-01
This paper compares a least-squares Random Coefficient Autoregressive (RCA) model with a least-squares RCA model based on autocorrelated errors (RCA-AR). We looked at only the first-order models, denoted RCA(1) and RCA(1)-AR(1). The efficiency of the least-squares method was checked by applying the models to Brownian motion and the Wiener process, and the efficiency followed closely the asymptotic properties of a normal distribution. In a simulation study, we compared the performance of RCA(1) an...
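The RCA(1) recursion and its conditional least-squares fit can be sketched as follows; the coefficient and variance choices are illustrative, not those of the simulation study:

```python
import numpy as np

rng = np.random.default_rng(2)

# RCA(1): y_t = (beta + b_t) * y_{t-1} + e_t, with a zero-mean random
# coefficient disturbance b_t
n, beta = 50_000, 0.5
b = rng.normal(scale=0.2, size=n)   # random coefficient disturbances
e = rng.normal(size=n)              # innovations
y = np.zeros(n)
for t in range(1, n):
    y[t] = (beta + b[t]) * y[t - 1] + e[t]

# Conditional least squares: regress y_t on y_{t-1}; consistent for beta
# because b_t is independent of the past with mean zero
beta_hat = np.sum(y[1:] * y[:-1]) / np.sum(y[:-1] ** 2)
```

The estimate recovers the mean coefficient beta even though the realized coefficient varies from step to step, which is the property the paper's efficiency comparisons rest on.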
Directory of Open Access Journals (Sweden)
Mahmudul Mannan Toy
2011-01-01
The broad objective of this study is to empirically estimate the export supply model of Bangladesh. The techniques of cointegration, Engle-Granger causality and vector error correction are applied to estimate the export supply model. The econometric analysis uses time series data on the variables of interest, collected from various secondary sources. The study empirically tests the hypotheses of long-run relationships and causality between the variables of the model. The cointegration analysis shows that all the variables of the study are cointegrated at their first differences, meaning that there exists a long-run relationship among the variables. The VECM estimation shows the dynamics of the variables in the export supply function and the short-run and long-run elasticities of export supply with respect to each independent variable. The error correction term is found to be negative, which indicates that any short-run disequilibrium will converge to equilibrium in the long run.
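The Engle-Granger/error-correction logic the abstract describes can be sketched numerically with a synthetic cointegrated pair (not the Bangladesh export data): a long-run regression yields the error-correction term, whose negative coefficient in the short-run equation pulls deviations back to equilibrium:

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic cointegrated pair: x is a random walk, y tracks 2x in the long run
n = 2000
x = np.cumsum(rng.normal(size=n))
y = 2.0 * x + rng.normal(size=n)           # stationary deviation -> cointegration

# Engle-Granger step 1: long-run regression, keep the residual
X = np.column_stack([np.ones(n), x])
b, *_ = np.linalg.lstsq(X, y, rcond=None)
ect = y - X @ b                             # error-correction term

# Step 2: regress dy on the lagged ECT (and dx); a negative ECT coefficient
# means short-run deviations are corrected toward the long-run equilibrium
dy, dx = np.diff(y), np.diff(x)
Z = np.column_stack([np.ones(n - 1), ect[:-1], dx])
g, *_ = np.linalg.lstsq(Z, dy, rcond=None)
ect_coef = g[1]                             # negative, as in the abstract
```

Here the deviation is pure noise, so the adjustment coefficient is close to -1; in real export data it is typically a smaller negative number, implying slower convergence.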
Generative models for chemical structures.
White, David; Wilson, Richard C
2010-07-26
We apply recently developed techniques for pattern recognition to construct a generative model for chemical structure. This approach can be viewed as ligand-based de novo design. We construct a statistical model describing the structural variations present in a set of molecules which may be sampled to generate new structurally similar examples. We prevent the possibility of generating chemically invalid molecules, according to our implicit hydrogen model, by projecting samples onto the nearest chemically valid molecule. By populating the input set with molecules that are active against a target, we show how new molecules may be generated that will likely also be active against the target.
Directory of Open Access Journals (Sweden)
Claudimar Pereira da Veiga
2012-08-01
The importance of demand forecasting as a management tool is a well-documented issue. However, it is difficult to measure the costs generated by forecasting errors and to find a model that adequately captures the detailed operation of each company. In general, when linear models fail in the forecasting process, more complex nonlinear models are considered. Although some studies comparing traditional models and neural networks have been conducted in the literature, the conclusions are usually contradictory. In this sense, the objective was to compare the accuracy of linear methods and neural networks with the current method used by the company. The results of this analysis also served as input to evaluate the influence of demand forecasting errors on the financial performance of the company. The study was based on historical data from five groups of food products, from 2004 to 2008. In general, all models tested presented good results (much better than the current forecasting method used), with mean absolute percent error (MAPE) around 10%. The total financial impact for the company was 6.05% of annual sales.
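The accuracy metric used throughout, mean absolute percent error (MAPE), is straightforward to compute; the demand and forecast series below are illustrative, not the company's data:

```python
import numpy as np

def mape(actual, forecast):
    """Mean absolute percent error, in percent."""
    actual = np.asarray(actual, dtype=float)
    forecast = np.asarray(forecast, dtype=float)
    return 100.0 * np.mean(np.abs((actual - forecast) / actual))

demand = [120, 135, 150, 160, 155]
forecast = [110, 140, 145, 170, 150]
print(f"MAPE = {mape(demand, forecast):.2f}%")
```

Note that MAPE is undefined for zero-demand periods and penalizes over- and under-forecasts asymmetrically, which matters when translating forecast error into financial impact.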
Calculating radiotherapy margins based on Bayesian modelling of patient specific random errors
Herschtal, A.; te Marvelde, L.; Mengersen, K.; Hosseinifard, Z.; Foroudi, F.; Devereux, T.; Pham, D.; Ball, D.; Greer, P. B.; Pichler, P.; Eade, T.; Kneebone, A.; Bell, L.; Caine, H.; Hindson, B.; Kron, T.
2015-02-01
Collected real-life clinical target volume (CTV) displacement data show that some patients undergoing external beam radiotherapy (EBRT) demonstrate significantly more fraction-to-fraction variability in their displacement (‘random error’) than others. This contrasts with the common assumption made by historical recipes for margin estimation for EBRT, that the random error is constant across patients. In this work we present statistical models of CTV displacements in which random errors are characterised by an inverse gamma (IG) distribution in order to assess the impact of random error variability on CTV-to-PTV margin widths, for eight real world patient cohorts from four institutions, and for different sites of malignancy. We considered a variety of clinical treatment requirements and penumbral widths. The eight cohorts consisted of a total of 874 patients and 27 391 treatment sessions. Compared to a traditional margin recipe that assumes constant random errors across patients, for a typical 4 mm penumbral width, the IG based margin model mandates that in order to satisfy the common clinical requirement that 90% of patients receive at least 95% of prescribed RT dose to the entire CTV, margins be increased by a median of 10% (range over the eight cohorts -19% to +35%). This substantially reduces the proportion of patients for whom margins are too small to satisfy clinical requirements.
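The modeling idea, per-patient random-error variances drawn from an inverse-gamma distribution rather than fixed at a single population value, can be sketched as follows; the shape and scale values are illustrative, not the fitted cohort parameters:

```python
import numpy as np

rng = np.random.default_rng(4)

# Per-patient displacement variances: sigma^2 ~ inverse-gamma(shape, scale),
# sampled via the identity scale / Gamma(shape, 1) ~ InvGamma(shape, scale)
n_patients, shape, scale = 5000, 4.0, 9.0
sigma2 = scale / rng.gamma(shape, size=n_patients)
sigma = np.sqrt(sigma2)

# A fixed-sigma margin recipe uses one population-average SD for everyone;
# patients whose random error exceeds that average are under-covered
sigma_mean = sigma.mean()
frac_exceeding = float(np.mean(sigma > sigma_mean))
```

Because the inverse-gamma is right-skewed, a sizeable minority of patients have random errors well above the population mean, which is why the IG-based recipe generally mandates wider margins to hit a 90%-of-patients coverage requirement.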
Yoshizaki, J.; Pollock, K.H.; Brownie, C.; Webster, R.A.
2009-01-01
Misidentification of animals is potentially important when naturally existing features (natural tags) are used to identify individual animals in a capture-recapture study. Photographic identification (photoID) typically uses photographic images of animals' naturally existing features as tags (photographic tags) and is subject to two main causes of identification errors: those related to quality of photographs (non-evolving natural tags) and those related to changes in natural marks (evolving natural tags). The conventional methods for analysis of capture-recapture data do not account for identification errors, and to do so requires a detailed understanding of the misidentification mechanism. Focusing on the situation where errors are due to evolving natural tags, we propose a misidentification mechanism and outline a framework for modeling the effect of misidentification in closed population studies. We introduce methods for estimating population size based on this model. Using a simulation study, we show that conventional estimators can seriously overestimate population size when errors due to misidentification are ignored, and that, in comparison, our new estimators have better properties except in cases with low capture probabilities (<0.2) or low misidentification rates (<2.5%). © 2009 by the Ecological Society of America.
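The direction of the bias can be reproduced with a minimal two-sample (Lincoln-Petersen) simulation in which some genuine recaptures go unrecognized; the parameters are illustrative, and the paper develops proper estimators for closed-population models with evolving natural tags:

```python
import numpy as np

rng = np.random.default_rng(5)

# True population N; each animal is caught independently in each sample.
# A recaptured animal is correctly matched only with probability 1 - mis_rate;
# missed matches look like new animals, deflating the match count m2.
N, p_capture, mis_rate = 1000, 0.3, 0.2
estimates = []
for _ in range(500):
    s1 = rng.random(N) < p_capture
    s2 = rng.random(N) < p_capture
    recaptured = s1 & s2
    identified = recaptured & (rng.random(N) > mis_rate)
    n1, n2, m2 = s1.sum(), s2.sum(), identified.sum()
    estimates.append(n1 * n2 / m2)        # Lincoln-Petersen estimator

mean_estimate = float(np.mean(estimates))  # biased above the true N = 1000
```

With 20% of recaptures unrecognized, the naive estimator inflates the population size by roughly 1/(1 - mis_rate), i.e. about 25% here, matching the overestimation the abstract reports.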
On the Likely Utility of Hybrid Weights Optimized for Variances in Hybrid Error Covariance Models
Satterfield, E.; Hodyss, D.; Kuhl, D.; Bishop, C. H.
2017-12-01
Because of imperfections in ensemble data assimilation schemes, one cannot assume that the ensemble covariance is equal to the true error covariance of a forecast. Previous work demonstrated how information about the distribution of true error variances given an ensemble sample variance can be revealed from an archive of (observation-minus-forecast, ensemble-variance) data pairs. Here, we derive a simple and intuitively compelling formula to obtain the mean of this distribution of true error variances given an ensemble sample variance from (observation-minus-forecast, ensemble-variance) data pairs produced by a single run of a data assimilation system. This formula takes the form of a Hybrid weighted average of the climatological forecast error variance and the ensemble sample variance. Here, we test the extent to which these readily obtainable weights can be used to rapidly optimize the covariance weights used in Hybrid data assimilation systems that employ weighted averages of static covariance models and flow-dependent ensemble based covariance models. Univariate data assimilation and multi-variate cycling ensemble data assimilation are considered. In both cases, it is found that our computationally efficient formula gives Hybrid weights that closely approximate the optimal weights found through the simple but computationally expensive process of testing every plausible combination of weights.
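The idea of scoring hybrid weights against an archive of (observation-minus-forecast, ensemble-variance) pairs can be sketched with synthetic data; the distributions below are stand-ins, and the brute-force scan is the "computationally expensive" baseline the abstract contrasts with its closed-form weights:

```python
import numpy as np

rng = np.random.default_rng(6)

# Each case has a true error variance; the ensemble variance is a noisy
# chi-squared sample of it, and the innovation is a draw with that variance
n_cases, ens_size = 100_000, 10
true_var = rng.gamma(shape=4.0, scale=0.25, size=n_cases)
innov = rng.normal(scale=np.sqrt(true_var))                       # obs-minus-forecast
ens_var = true_var * rng.chisquare(ens_size - 1, n_cases) / (ens_size - 1)

clim_var = innov.var()   # static, climatological error variance

def score(w):
    """Mean squared misfit of the hybrid variance to squared innovations."""
    est = w * ens_var + (1 - w) * clim_var
    return np.mean((innov ** 2 - est) ** 2)

ws = np.linspace(0, 1, 21)
best_w = ws[np.argmin([score(w) for w in ws])]   # strictly interior optimum
```

The optimum lands strictly between 0 and 1: the noisy ensemble variance is informative but should be blended with the static climatological variance, which is the rationale for hybrid covariance models.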
Directory of Open Access Journals (Sweden)
Wenjuan Wei
Mass transfer models are useful in predicting the emissions of volatile organic compounds (VOCs) and formaldehyde from building materials in indoor environments. They are also useful for human exposure evaluation and in sustainable building design. The measurement errors in the emission characteristic parameters in these mass transfer models, i.e., the initial emittable concentration (C0), the diffusion coefficient (D), and the partition coefficient (K), can result in errors in predicting indoor VOC and formaldehyde concentrations. These errors have not yet been quantitatively well analyzed in the literature. This paper addresses this by using modelling to assess these errors for some typical building conditions. The error in C0, as measured in environmental chambers and applied to a reference living room in Beijing, has the largest influence on the model prediction error in indoor VOC and formaldehyde concentration, while the error in K has the least effect. A correlation between the errors in D, K, and C0 and the error in the indoor VOC and formaldehyde concentration prediction is then derived for engineering applications. In addition, the influence of temperature on the model prediction of emissions is investigated. It shows the impact of temperature fluctuations on the prediction errors in indoor VOC and formaldehyde concentrations to be less than 7% at 23±0.5°C and less than 30% at 23±2°C.
Wei, Wenjuan; Xiong, Jianyin; Zhang, Yinping
2013-01-01
Mass transfer models are useful in predicting the emissions of volatile organic compounds (VOCs) and formaldehyde from building materials in indoor environments. They are also useful for human exposure evaluation and in sustainable building design. The measurement errors in the emission characteristic parameters in these mass transfer models, i.e., the initial emittable concentration (C 0), the diffusion coefficient (D), and the partition coefficient (K), can result in errors in predicting indoor VOC and formaldehyde concentrations. These errors have not yet been quantitatively well analyzed in the literature. This paper addresses this by using modelling to assess these errors for some typical building conditions. The error in C 0, as measured in environmental chambers and applied to a reference living room in Beijing, has the largest influence on the model prediction error in indoor VOC and formaldehyde concentration, while the error in K has the least effect. A correlation between the errors in D, K, and C 0 and the error in the indoor VOC and formaldehyde concentration prediction is then derived for engineering applications. In addition, the influence of temperature on the model prediction of emissions is investigated. It shows the impact of temperature fluctuations on the prediction errors in indoor VOC and formaldehyde concentrations to be less than 7% at 23±0.5°C and less than 30% at 23±2°C.
High‐resolution trench photomosaics from image‐based modeling: Workflow and error analysis
Reitman, Nadine G.; Bennett, Scott E. K.; Gold, Ryan D.; Briggs, Richard; Duross, Christopher
2015-01-01
Photomosaics are commonly used to construct maps of paleoseismic trench exposures, but the conventional process of manually using image‐editing software is time consuming and produces undesirable artifacts and distortions. Herein, we document and evaluate the application of image‐based modeling (IBM) for creating photomosaics and 3D models of paleoseismic trench exposures, illustrated with a case‐study trench across the Wasatch fault in Alpine, Utah. Our results include a structure‐from‐motion workflow for the semiautomated creation of seamless, high‐resolution photomosaics designed for rapid implementation in a field setting. Compared with conventional manual methods, the IBM photomosaic method provides a more accurate, continuous, and detailed record of paleoseismic trench exposures in approximately half the processing time and 15%–20% of the user input time. Our error analysis quantifies the effect of the number and spatial distribution of control points on model accuracy. For this case study, an ∼87 m² exposure of a benched trench photographed at viewing distances of 1.5–7 m yields a model with <2 cm root mean square error (RMSE) with as few as six control points. RMSE decreases as more control points are implemented, but the gains in accuracy are minimal beyond 12 control points. Spreading control points throughout the target area helps to minimize error. We propose that 3D digital models and corresponding photomosaics should be standard practice in paleoseismic exposure archiving. The error analysis serves as a guide for future investigations that seek balance between speed and accuracy during photomosaic and 3D model construction.
Measurement error in epidemiologic studies of air pollution based on land-use regression models.
Basagaña, Xavier; Aguilera, Inmaculada; Rivera, Marcela; Agis, David; Foraster, Maria; Marrugat, Jaume; Elosua, Roberto; Künzli, Nino
2013-10-15
Land-use regression (LUR) models are increasingly used to estimate air pollution exposure in epidemiologic studies. These models use air pollution measurements taken at a small set of locations and modeling based on geographical covariates for which data are available at all study participant locations. The process of LUR model development commonly includes a variable selection procedure. When LUR model predictions are used as explanatory variables in a model for a health outcome, measurement error can lead to bias of the regression coefficients and to inflation of their variance. In previous studies dealing with spatial predictions of air pollution, bias was shown to be small while most of the effect of measurement error was on the variance. In this study, we show that in realistic cases where LUR models are applied to health data, bias in health-effect estimates can be substantial. This bias depends on the number of air pollution measurement sites, the number of available predictors for model selection, and the amount of explainable variability in the true exposure. These results should be taken into account when interpreting health effects from studies that used LUR models.
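The bias mechanism can be reproduced in a small simulation: when a LUR model with many candidate predictors is fitted at only a few monitoring sites, the noisy prediction coefficients bias the downstream health-effect estimate. All sizes and variances below are illustrative, and this sketch omits the variable-selection step the paper also examines:

```python
import numpy as np

rng = np.random.default_rng(7)

n_subjects, n_sites, true_effect = 5000, 30, 1.0
p_real, p_noise = 2, 12                               # 12 candidate predictors are pure noise
G = rng.normal(size=(n_subjects, p_real + p_noise))   # geographic covariates
beta_lur = np.concatenate([[1.0, 0.5], np.zeros(p_noise)])
exposure = G @ beta_lur + rng.normal(size=n_subjects)  # true exposure (partly unexplained)
health = true_effect * exposure + rng.normal(size=n_subjects)

def health_slope():
    """Fit a LUR model at n_sites random locations, then regress health
    on the LUR-predicted exposure for all subjects."""
    sites = rng.choice(n_subjects, size=n_sites, replace=False)
    Xs = np.column_stack([np.ones(n_sites), G[sites]])
    coef, *_ = np.linalg.lstsq(Xs, exposure[sites], rcond=None)
    pred = np.column_stack([np.ones(n_subjects), G]) @ coef
    return np.cov(pred, health)[0, 1] / np.var(pred)

# Averaged over many site draws, the estimate sits well below the true effect of 1.0
mean_slope = float(np.mean([health_slope() for _ in range(200)]))
```

With 14 fitted coefficients supported by only 30 sites, the predicted-exposure surface carries substantial coefficient noise, and the health-effect estimate is biased well away from the truth, consistent with the abstract's warning that bias grows with the number of candidate predictors relative to measurement sites.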
Error rates in bite mark analysis in an in vivo animal model.
Avon, S L; Victor, C; Mayhall, J T; Wood, R E
2010-09-10
Recent judicial decisions have specified that one foundation of reliability of comparative forensic disciplines is description of both scientific approach used and calculation of error rates in determining the reliability of an expert opinion. Thirty volunteers were recruited for the analysis of dermal bite marks made using a previously established in vivo porcine-skin model. Ten participants were recruited from three separate groups: dentists with no experience in forensics, dentists with an interest in forensic odontology, and board-certified diplomates of the American Board of Forensic Odontology (ABFO). Examiner demographics and measures of experience in bite mark analysis were collected for each volunteer. Each participant received 18 completely documented, simulated in vivo porcine bite mark cases and three paired sets of human dental models. The paired maxillary and mandibular models were identified as suspect A, suspect B, and suspect C. Examiners were tasked to determine, using an analytic method of their own choosing, whether each bite mark of the 18 bite mark cases provided was attributable to any of the suspect dentitions provided. Their findings were recorded on a standardized recording form. The results of the study demonstrated that the group of inexperienced examiners often performed as well as the board-certified group, and both inexperienced and board-certified groups performed better than those with an interest in forensic odontology that had not yet received board certification. Incorrect suspect attributions (possible false inculpation) were most common among this intermediate group. Error rates were calculated for each of the three observer groups for each of the three suspect dentitions. This study demonstrates that error rates can be calculated using an animal model for human dermal bite marks, and although clinical experience is useful, other factors may be responsible for accuracy in bite mark analysis. Further, this study demonstrates
Dagne, Getachew A; Huang, Yangxin
2013-09-30
Common problems to many longitudinal HIV/AIDS, cancer, vaccine, and environmental exposure studies are the presence of a lower limit of quantification of an outcome with skewness and time-varying covariates with measurement errors. There has been relatively little work published simultaneously dealing with these features of longitudinal data. In particular, left-censored data falling below a limit of detection may sometimes have a proportion larger than expected under a usually assumed log-normal distribution. In such cases, alternative models, which can account for a high proportion of censored data, should be considered. In this article, we present an extension of the Tobit model that incorporates a mixture of true undetectable observations and those values from a skew-normal distribution for an outcome with possible left censoring and skewness, and covariates with substantial measurement error. To quantify the covariate process, we offer a flexible nonparametric mixed-effects model within the Tobit framework. A Bayesian modeling approach is used to assess the simultaneous impact of left censoring, skewness, and measurement error in covariates on inference. The proposed methods are illustrated using real data from an AIDS clinical study. Copyright © 2013 John Wiley & Sons, Ltd.
BEATBOX v1.0: Background Error Analysis Testbed with Box Models
Directory of Open Access Journals (Sweden)
C. Knote
2018-02-01
The Background Error Analysis Testbed (BEATBOX) is a new data assimilation framework for box models. Based on the BOX Model eXtension (BOXMOX) to the Kinetic Pre-Processor (KPP), this framework allows users to conduct performance evaluations of data assimilation experiments, sensitivity analyses, and detailed chemical scheme diagnostics from an observation simulation system experiment (OSSE) point of view. The BEATBOX framework incorporates an observation simulator and a data assimilation system with the possibility of choosing ensemble, adjoint, or combined sensitivities. A user-friendly, Python-based interface allows for the tuning of many parameters for atmospheric chemistry and data assimilation research as well as for educational purposes, for example observation error, model covariances, ensemble size, perturbation distribution in the initial conditions, and so on. In this work, the testbed is described and two case studies are presented to illustrate the design of a typical OSSE experiment, data assimilation experiments, a sensitivity analysis, and a method for diagnosing model errors. BEATBOX is released as an open source tool for the atmospheric chemistry and data assimilation communities.
Modeling conically scanning lidar error in complex terrain with WAsP Engineering
Energy Technology Data Exchange (ETDEWEB)
Bingoel, F.; Mann, J.; Foussekis, D.
2008-11-15
Conically scanning lidars assume the flow to be homogeneous in order to deduce the horizontal wind speed. However, in mountainous or complex terrain this assumption is not valid, implying an erroneous wind speed. The magnitude of this error is measured by collocating a meteorological mast and a lidar at two Greek sites, one hilly and one mountainous. The maximum error for the sites investigated is of the order of 10%. In order to predict the error for various wind directions the flows at both sites are simulated with the linearized flow model, WAsP Engineering 2.0. The measurement data are compared with the model predictions with good results for the hilly site, but with less success at the mountainous site. This is a deficiency of the flow model, but the methods presented in this paper can be used with any flow model. An abbreviated version of this report has been submitted to Meteorologische Zeitschrift. This work is partly financed through the UPWIND project (WP6, D3) funded by the European Commission.
BEATBOX v1.0: Background Error Analysis Testbed with Box Models
Knote, Christoph; Barré, Jérôme; Eckl, Max
2018-02-01
The Background Error Analysis Testbed (BEATBOX) is a new data assimilation framework for box models. Based on the BOX Model eXtension (BOXMOX) to the Kinetic Pre-Processor (KPP), this framework allows users to conduct performance evaluations of data assimilation experiments, sensitivity analyses, and detailed chemical scheme diagnostics from an observation simulation system experiment (OSSE) point of view. The BEATBOX framework incorporates an observation simulator and a data assimilation system with the possibility of choosing ensemble, adjoint, or combined sensitivities. A user-friendly, Python-based interface allows for the tuning of many parameters for atmospheric chemistry and data assimilation research as well as for educational purposes, for example observation error, model covariances, ensemble size, perturbation distribution in the initial conditions, and so on. In this work, the testbed is described and two case studies are presented to illustrate the design of a typical OSSE experiment, data assimilation experiments, a sensitivity analysis, and a method for diagnosing model errors. BEATBOX is released as an open source tool for the atmospheric chemistry and data assimilation communities.
Error characterization of CO2 vertical mixing in the atmospheric transport model WRF-VPRM
Directory of Open Access Journals (Sweden)
U. Karstens
2012-03-01
One of the dominant uncertainties in inverse estimates of regional CO2 surface-atmosphere fluxes is related to model errors in vertical transport within the planetary boundary layer (PBL). In this study we present the results from a synthetic experiment using the atmospheric model WRF-VPRM to realistically simulate transport of CO2 for large parts of the European continent at 10 km spatial resolution. To elucidate the impact of vertical mixing error on modeled CO2 mixing ratios, we simulated a month during the growing season (August 2006) with different commonly used parameterizations of the PBL (the Mellor-Yamada-Janjić (MYJ) and Yonsei University (YSU) schemes). To isolate the effect of transport errors we prescribed the same CO2 surface fluxes for both simulations. Differences in simulated CO2 mixing ratios (model bias) were on the order of 3 ppm during daytime, with larger values at night. We present a simple method to reduce this bias by 70–80% when the true height of the mixed layer is known.
The Application of Model Life Table Systems in China: Assessment of System Bias and Error
Directory of Open Access Journals (Sweden)
Songbo Hu
2014-12-01
and projection. Although China is the world's most populous country with approximately a fifth of the world's population, none of the empirical tables from mainland China were used in calibrating the existing models. In this paper, we applied three recent model life table systems with different inputs to China mortality data to investigate whether or not these systems truly reflect Chinese mortality epidemiological patterns and whether or not system biases exist. The resulting residuals show that, in most cases, the male infant mortality rate (1q0), adult mortality rate (45q15) and old-age mortality rate (20q60) have a strong bias towards being overestimated and the life expectancy at birth (e0) is biased towards being underestimated. We also give the detailed results for each case. Furthermore, we found that the average relative errors (AREs) for females are larger than those for males for e0, 45q15 and 20q60, but for 1q0, males have larger AREs in the Wilmoth and Murray systems. We also found that the urban population has larger errors than the rural population in almost all cases. Finally, by comparing the AREs with 10 other countries, we found the errors for China are larger than those for other countries in most cases. It is concluded that these existing model life table systems cannot accurately reflect Chinese mortality epidemiological situations and trajectories. Therefore, model life tables should be used with caution when applied to China on the basis of 5q0.
The application of model life table systems in China: assessment of system bias and error.
Hu, Songbo; Yu, Chuanhua
2014-12-01
and projection. Although China is the world's most populous country with approximately a fifth of the world's population, none of the empirical tables from mainland China were used in calibrating the existing models. In this paper, we applied three recent model life table systems with different inputs to China mortality data to investigate whether or not these systems truly reflect Chinese mortality epidemiological patterns and whether or not system biases exist. The resulting residuals show that, in most cases, the male infant mortality rate (1q0), adult mortality rate (45q15) and old age mortality rate (20q60) have a strong bias towards being overestimated and the life expectancy at birth (e0) is biased towards being underestimated. We also give the detailed results for each case. Furthermore, we found that the average relative errors (AREs) for females are larger than those for males for e0, 45q15 and 20q60, but for 1q0, males have larger AREs in the Wilmoth and Murray systems. We also found that the urban population has larger errors than the rural population in almost all cases. Finally, by comparing the AREs with 10 other countries, we found the errors for China are larger than those for other countries in most cases. It is concluded that these existing model life table systems cannot accurately reflect Chinese mortality epidemiological situations and trajectories. Therefore, model life tables should be used with caution when applied to China on the basis of 5q0.
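The comparison metric, average relative error (ARE) across life-table indices, can be computed as below; the observed and modeled values are hypothetical stand-ins for (1q0, 45q15, 20q60, e0), not figures from the study:

```python
import numpy as np

def average_relative_error(observed, modeled):
    """Average relative error (ARE) across life-table indices, in percent."""
    observed = np.asarray(observed, dtype=float)
    modeled = np.asarray(modeled, dtype=float)
    return 100.0 * np.mean(np.abs(modeled - observed) / observed)

# Hypothetical observed vs model-system values for (1q0, 45q15, 20q60, e0)
observed = np.array([0.012, 0.115, 0.55, 75.0])
modeled = np.array([0.014, 0.127, 0.60, 73.5])
are = average_relative_error(observed, modeled)
print(f"ARE = {are:.1f}%")
```

Because the indices span very different scales, relative rather than absolute errors are averaged, so a 2-per-1000 miss in infant mortality weighs as heavily as a 1.5-year miss in life expectancy.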
Popov, I; Valašková, J; Štefaničková, J; Krásnik, V
2017-01-01
A substantial part of the population suffers from some kind of refractive error. It is envisaged that the prevalence of refractive errors may change with the development of society. The aim of this study is to determine the prevalence of refractive errors using calculations based on the Gullstrand schematic eye model. We used the Gullstrand schematic eye model to calculate refraction retrospectively. Refraction was presented as the need for glasses correction at a vertex distance of 12 mm. The necessary data were obtained using the optical biometer Lenstar LS900. Data which could not be obtained due to the limitations of the device were substituted by theoretical data from the Gullstrand schematic eye model. Only analyses from the right eyes are presented. The data were interpreted using descriptive statistics, Pearson correlation and the t-test. The statistical tests were conducted at a significance level of 5%. Our sample included 1663 patients (665 male, 998 female) within the age range of 19 to 96 years. Average age was 70.8 ± 9.53 years. Average refraction of the eye was 2.73 ± 2.13 D (males 2.49 ± 2.34, females 2.90 ± 2.76). The mean absolute error from emmetropia was 3.01 ± 1.58 (males 2.83 ± 2.95, females 3.25 ± 3.35). 89.06% of the sample was hyperopic, 6.61% myopic and 4.33% emmetropic. We did not find any correlation between refraction and age. Females were more hyperopic than males. We did not find any statistically significant hypermetropic shift of refraction with age. According to our estimation, the calculations of refractive errors using the Gullstrand schematic eye model showed a significant hypermetropic shift of more than +2 D. Our results could be used in the future for comparing the prevalence of refractive errors using the same methods. Key words: refractive errors, refraction, Gullstrand schematic eye model, population, emmetropia.
Temporal structures in shell models
DEFF Research Database (Denmark)
Okkels, F.
2001-01-01
The intermittent dynamics of the turbulent Gledzer, Ohkitani, and Yamada shell-model is completely characterized by a single type of burstlike structure, which moves through the shells like a front. This temporal structure is described by the dynamics of the instantaneous configuration of the shell...
The effect of error models in the multiscale inversion of binary permeability fields
Ray, J.; Bloemenwaanders, B. V.; McKenna, S. A.; Marzouk, Y. M.
2010-12-01
We present results from a recently developed multiscale inversion technique for binary media, with emphasis on the effect of subgrid model errors on the inversion. Binary media are a useful fine-scale representation of heterogeneous porous media. Averaged properties of the binary field representations can be used to characterize flow through the porous medium at the macroscale. Both direct measurements of the averaged properties and upscaling are complicated and may not provide accurate results. However, it may be possible to infer upscaled properties of the binary medium from indirect measurements at the coarse scale. Multiscale inversion, performed with a subgrid model to connect disparate scales together, can also yield information on the fine-scale properties. We model the binary medium using truncated Gaussian fields, and develop a subgrid model for the upscaled permeability based on excursion sets of those fields. The subgrid model requires an estimate of the proportion of inclusions at the block scale as well as some geometrical parameters of the inclusions as inputs, and predicts the effective permeability. The inclusion proportion is assumed to be spatially varying, modeled using Gaussian processes and represented using a truncated Karhunen-Loève (KL) expansion. This expansion is used, along with the subgrid model, to pose a Bayesian inverse problem for the KL weights and the geometrical parameters of the inclusions. The model error is represented in two different ways: (1) as a homoscedastic error and (2) as a heteroscedastic error, dependent on inclusion proportionality and geometry. The error models impact the form of the likelihood function in the expression for the posterior density of the objects of inference. The problem is solved using an adaptive Markov Chain Monte Carlo method, and joint posterior distributions are developed for the KL weights and inclusion geometry. Effective permeabilities and tracer breakthrough times at a few
Models for Ballistic Wind Measurement Error Analysis. Volume II. Users’ Manual.
1983-01-01
AD-A129 360; ASL-CR-83-0008-1; Reports Control Symbol OSO-1366. Models for Ballistic Wind Measurement Error Analysis, Volume II: Users' Manual. New Mexico State University, Las Cruces, Physical Science Laboratory.
Directory of Open Access Journals (Sweden)
Georgia Feideropoulou
2004-09-01
We extend a stochastic model of hierarchical dependencies between wavelet coefficients of still images to the spatiotemporal decomposition of video sequences, obtained by a motion-compensated 2D+t wavelet decomposition. We propose new estimators for the parameters of this model which provide better statistical performances. Based on this model, we deduce an optimal predictor of missing samples in the spatiotemporal wavelet domain and use it in two applications: quality enhancement and error concealment of scalable video transmitted over packet networks. Simulation results show significant quality improvement achieved by this technique with different packetization strategies for a scalable video bit stream.
Application of the epidemiological model in studying human error in aviation
Cheaney, E. S.; Billings, C. E.
1981-01-01
An epidemiological model is described in conjunction with the analytical process through which aviation occurrence reports are decomposed into the events and factors pertinent to it. The model represents a process in which disease, emanating from environmental conditions, manifests itself in symptoms that may lead to fatal illness, recoverable illness, or no illness, depending on individual circumstances of patient vulnerability, preventive actions, and intervention. In the aviation system, the analogy of the disease process is the predilection for error of the human participants. This arises from factors in the operating or physical environment and results in errors of commission or omission that, again depending on individual circumstances, may lead to accidents, system perturbations, or harmless corrections. A discussion of previous investigations, each of which applied the epidemiological method, exemplifies its use and effectiveness.
Discrete Discriminant analysis based on tree-structured graphical models
DEFF Research Database (Denmark)
Perez de la Cruz, Gonzalo; Eslava, Guillermina
The purpose of this paper is to illustrate the potential use of discriminant analysis based on tree-structured graphical models for discrete variables. This is done by comparing its empirical performance using estimated error rates for real and simulated data. The results show that discriminant...
Directory of Open Access Journals (Sweden)
Da Liu
2013-01-01
A combined forecast with weights adaptively selected and errors calibrated by a hidden Markov model (HMM) is proposed to model the day-ahead electricity price. Firstly, several single models were built to forecast the electricity price separately. Then the validation errors from every individual model were transformed into two discrete sequences, an emission sequence and a state sequence, to build the HMM, obtaining a transition matrix and an emission matrix representing the forecasting-ability state of the individual models. The combining weights of the individual models were decided by the state transition matrices in the HMM and the best-predicted sample ratio of each individual model among all the models in the validation set. The individual forecasts were averaged to get the combined forecast with the weights obtained above. The residuals of the combined forecast were calibrated by the possible error calculated from the emission matrix of the HMM. A case study of the day-ahead electricity market of Pennsylvania-New Jersey-Maryland (PJM), USA, suggests that the proposed method outperforms individual price-forecasting techniques such as support vector machines (SVM), generalized regression neural networks (GRNN), day-ahead modeling, and self-organized map (SOM) similar-days modeling.
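The weight-selection-and-averaging step of such a combined forecast can be sketched as follows. The "best forecast" counts and the individual price forecasts are hypothetical, and the HMM error-calibration step is omitted.

```python
# Hypothetical validation counts: how often each individual model was best
best_counts = {"svm": 12, "grnn": 7, "som": 11}
total = sum(best_counts.values())
weights = {m: c / total for m, c in best_counts.items()}  # normalize to 1

# Hypothetical individual day-ahead price forecasts ($/MWh) to combine
forecasts = {"svm": 41.2, "grnn": 39.8, "som": 40.5}
combined = sum(weights[m] * forecasts[m] for m in forecasts)
print(round(combined, 2))
```

In the full method the weights would come from the HMM state analysis rather than raw counts, and the combined residual would then be corrected using the emission matrix.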
Sensitivity of ice flow in Greenland to errors in model forcing, using the Ice Sheet System Model
Schlegel, N.; Larour, E. Y.; Seroussi, H.; Morlighem, M.; Halkides, D. J.
2013-12-01
A clear understanding of how ice sheets respond to climate change requires an examination of ice sheet model uncertainty. This includes the quantification of uncertainties associated with model forcing, as well as the clarification of exactly which error sources most influence modeled ice flow dynamics. The Ice Sheet System Model (ISSM) is a finite-element model capable of simulating transient ice flow on an anisotropic mesh that can be refined to higher resolutions. This model also considers longitudinal stresses in areas of enhanced ice flow, offering a distinct advantage in terms of modeling fast-flowing outlet glaciers. Using established uncertainty quantification capabilities within ISSM, we compare the sensitivity of ice flow within key basins of the Greenland Ice Sheet to errors in various forcings, including surface mass balance components and temperature. We investigate how these errors propagate through the model as uncertainties in estimates of Greenland ice discharge. This work was performed at the California Institute of Technology's Jet Propulsion Laboratory under a contract with the National Aeronautics and Space Administration's Modeling, Analysis and Prediction (MAP) Program.
Martinez, William; Browning, David; Varrin, Pamela; Sarnoff Lee, Barbara; Bell, Sigall K
2017-05-10
To test whether an educational model involving patients and family members (P/F) in medical error disclosure training for interprofessional clinicians can narrow existing gaps between clinician and P/F views about disclosure. Parallel presurveys/postsurveys using Likert scale questions for clinicians and P/F. Baseline surveys were completed by 91% (50/55) of clinicians who attended the workshops and 74% (65/88) of P/F from a hospital patient and family advisory council. P/F's baseline views about disclosure were significantly different from clinicians' in 70% (7/10) of the disclosure expectation items and 100% (3/3) of the disclosure vignette items. For example, compared with clinicians, P/F more strongly agreed that "patients want to know all the details of what happened" and more strongly disagreed that "patients find explanation(s) more confusing than helpful." In the medication error vignette, compared with clinicians, P/F more strongly agreed that the error should be disclosed and that the patient would want to know, and more strongly disagreed that disclosure would do more harm than good (all differences statistically significant). The training narrows gaps in views about medical error disclosure and brings patients' and clinicians' views closer together.
Developing a feeling for error: Practices of monitoring and modelling air pollution data
Directory of Open Access Journals (Sweden)
Emma Garnett
2016-08-01
This paper is based on ethnographic research of data practices in a public health project called Weather Health and Air Pollution. (All names are pseudonyms.) I examine two different kinds of practices that make air pollution data, focusing on how they relate to particular modes of sensing and articulating air pollution. I begin by describing the interstitial spaces involved in making measurements of air pollution at monitoring sites and in the running of a computer simulation. Specifically, I attend to a shared dimension of these practices, the checking of a numerical reading for error. Checking a measurement for error is routine practice and a fundamental component of making data, yet these are also moments of interpretation, where the form and meaning of numbers are ambiguous. Through two case studies of modelling and monitoring data practices, I show that making a ‘good’ (error-free) measurement requires developing a feeling for the instrument–air pollution interaction in terms of the intended functionality of the measurements made. These affective dimensions of practice are useful analytically, making explicit the interaction of standardised ways of knowing and embodied skill in stabilising data. I suggest that environmental data practices can be studied through researchers’ materialisation of error, which complicates normative accounts of Big Data and highlights the non-linear and entangled relations that are at work in the making of stable, accurate data.
Sjarif, Indra Nurcahyo; 小谷, 浩示; Lin, Ching-Yang
2011-01-01
This paper investigates the causal relationship between fishery exports and economic growth in Indonesia by utilizing cointegration and error-correction models. Using annual data from 1969 to 2005, we find evidence of a long-run relationship as well as bi-directional causality between exports and economic growth in Indonesia's fishery sub-sector. To the best of our knowledge, this is the first research that examines this issue focusing on a natural resource based indu...
Directory of Open Access Journals (Sweden)
Christian NZENGUE PEGNET
2011-07-01
The recent financial turmoil has clearly highlighted the potential role of financial factors in the amplification of macroeconomic developments and stressed the importance of analyzing the relationship between banks' balance sheets and economic activity. This paper assesses the impact of the bank capital channel in the transmission of shocks in Europe on the basis of banks' balance sheet data. The empirical analysis is carried out through a Principal Component Analysis and a Vector Error Correction Model.
DEFF Research Database (Denmark)
Yang, Yukay
I consider multivariate (vector) time series models in which the error covariance matrix may be time-varying. I derive a test of constancy of the error covariance matrix against the alternative that the covariance matrix changes over time. I design a new family of Lagrange-multiplier tests against ... to consider multivariate volatility modelling.
Parinussa, R.M.; Meesters, A.G.C.A.; Liu, Y.Y.; Dorigo, W.; Wagner, W.; de Jeu, R.A.M.
2011-01-01
A time-efficient solution to estimate the error of satellite surface soil moisture from the land parameter retrieval model is presented. The errors are estimated using an analytical solution for soil moisture retrievals from this radiative-transfer-based model that derives soil moisture from
Van Zeijl, H.W.; Bijnen, F.G.C.; Slabbekoorn, J.
2004-01-01
To validate the Front-To-Backwafer Alignment (FTBA) calibration and to investigate process-related overlay errors, electrical overlay test structures that require FTBA are used [1]. Anisotropic KOH etching through the wafer is applied to transfer the backwafer pattern to the frontwafer. Consequently,
Taking error into account when fitting models using Approximate Bayesian Computation.
van der Vaart, Elske; Prangle, Dennis; Sibly, Richard M
2018-03-01
Stochastic computer simulations are often the only practical way of answering questions relating to ecological management. However, due to their complexity, such models are difficult to calibrate and evaluate. Approximate Bayesian Computation (ABC) offers an increasingly popular approach to this problem, widely applied across a variety of fields. However, ensuring the accuracy of ABC's estimates has been difficult. Here, we obtain more accurate estimates by incorporating estimation of error into the ABC protocol. We show how this can be done where the data consist of repeated measures of the same quantity and errors may be assumed to be normally distributed and independent. We then derive the correct acceptance probabilities for a probabilistic ABC algorithm, and update the coverage test with which accuracy is assessed. We apply this method, which we call error-calibrated ABC, to a toy example and a realistic 14-parameter simulation model of earthworms that is used in environmental risk assessment. A comparison with exact methods and the diagnostic coverage test show that our approach improves estimation of parameter values and their credible intervals for both models. © 2017 by the Ecological Society of America.
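The acceptance step of an ABC scheme with normally distributed, independent errors on repeated measures of one quantity can be sketched as below. This is a toy illustration, not the authors' error-calibrated ABC algorithm: the data, prior bounds, and error scale are all hypothetical, and a flat-prior posterior is recovered by probabilistic rejection.

```python
import math
import random
import statistics

random.seed(0)

# Toy setting: repeated measures of one quantity theta, each with
# independent normal measurement error of known scale sigma_err.
sigma_err = 0.5
observed = [2.1, 1.8, 2.3, 2.0]          # hypothetical repeated measures
obs_mean = statistics.mean(observed)
se = sigma_err / math.sqrt(len(observed))  # error scale of the sample mean

def accept_prob(theta):
    # Accept with probability proportional to the likelihood of the
    # observed mean under the error model (probabilistic acceptance step).
    return math.exp(-((obs_mean - theta) ** 2) / (2.0 * se ** 2))

# Flat prior on [0, 4]; keep proposals that survive the acceptance test
accepted = [theta for theta in (random.uniform(0.0, 4.0) for _ in range(20000))
            if random.random() < accept_prob(theta)]

posterior_mean = statistics.mean(accepted)
print(round(posterior_mean, 2))
```

With errors folded into the acceptance probability, the accepted sample concentrates around the measured mean, which is the intuition behind calibrating ABC with an explicit error model.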
Varotsos, G. K.; Nistazakis, H. E.; Petkovic, M. I.; Djordjevic, G. T.; Tombras, G. S.
2017-11-01
Over the last years, terrestrial free-space optical (FSO) communication systems have attracted increasing scientific and commercial interest in response to the growing demands for ultra-high-bandwidth, cost-effective and secure wireless data transmissions. However, due to the signal propagation through the atmosphere, the performance of such links depends strongly on atmospheric conditions such as weather phenomena and the turbulence effect. Additionally, their operation is affected significantly by the pointing errors effect, which is caused by misalignment of the optical beam between the transmitter and the receiver. In order to address this significant performance degradation, several statistical models have been proposed, while particular attention has also been given to diversity methods. Here, the turbulence-induced fading of the received optical signal irradiance is studied through the Málaga (M) distribution, which is an accurate model suitable for weak to strong turbulence conditions and unifies most of the well-known, previously emerged models. Thus, taking into account the atmospheric turbulence conditions along with the pointing errors effect with nonzero boresight and the modulation technique that is used, we derive mathematical expressions for the estimation of the average bit error rate performance of SIMO FSO links. Finally, proper numerical results are given to verify our derived expressions, and Monte Carlo simulations are also provided to further validate the accuracy of the proposed analysis and the obtained mathematical expressions.
Narayanan, Neethu; Gupta, Suman; Gajbhiye, V T; Manjaiah, K M
2017-04-01
A carboxymethyl cellulose-nano organoclay (nano montmorillonite modified with 35-45 wt % dimethyl dialkyl (C14-C18) amine (DMDA)) composite was prepared by the solution intercalation method. The prepared composite was characterized by infrared spectroscopy (FTIR), X-ray diffraction (XRD) and scanning electron microscopy (SEM). The composite was evaluated for its pesticide sorption efficiency for atrazine, imidacloprid and thiamethoxam. The sorption data were fitted to Langmuir and Freundlich isotherms using linear and non-linear methods. The linear regression method suggested best fitting of the sorption data to Type II Langmuir and Freundlich isotherms. In order to avoid the bias resulting from linearization, seven different error parameters were also analyzed by the non-linear regression method. The non-linear error analysis suggested that the sorption data fitted the Langmuir model better than the Freundlich model. The maximum sorption capacity, Q0 (μg/g), was highest for imidacloprid (2000), followed by thiamethoxam (1667) and atrazine (1429). The study suggests that the coefficient of determination of linear regression alone cannot be used for comparing the best fitting of the Langmuir and Freundlich models, and non-linear error analysis needs to be done to avoid inaccurate results. Copyright © 2017 Elsevier Ltd. All rights reserved.
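Fitting the Langmuir isotherm by direct non-linear error minimization, rather than by linearization, can be sketched as below. The concentration-sorption data and the grid-search solver are hypothetical illustrations, not the study's actual data or method.

```python
# Langmuir isotherm: q = Q0 * b * C / (1 + b * C).
# Minimal non-linear fit: grid search over (Q0, b) minimizing the sum of
# squared errors on the untransformed isotherm (hypothetical data).
C = [5, 10, 25, 50, 100]                       # concentration (ug/mL)
q = [333.3, 571.4, 1000.0, 1333.3, 1600.0]     # sorbed amount (ug/g)

def sse(Q0, b):
    return sum((qi - Q0 * b * c / (1 + b * c)) ** 2 for c, qi in zip(C, q))

best = min(((sse(Q0, b), Q0, b)
            for Q0 in range(1000, 3001, 50)
            for b in [k / 1000 for k in range(1, 101)]),
           key=lambda t: t[0])
_, Q0_hat, b_hat = best
print(Q0_hat, b_hat)
```

Because the squared error is measured on the original q values, large-C points are not given the distorted weight a linearized (e.g. reciprocal) fit would give them, which is exactly the bias the abstract warns about.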
Directory of Open Access Journals (Sweden)
Farzad Shahabian
2013-12-01
This study undertakes a statistical evaluation of the accuracy of nine models that have been previously proposed for estimating the ultimate resistance of plate girders subjected to patch loading. For each model, mean errors and standard errors, as well as the probability of underestimating or overestimating patch load resistance, are estimated and the resultant values are compared with one another. Prior to that, the models are initially calibrated in order to improve interaction formulae using an experimental data set collected from the literature. The models are then analyzed by computing design factors associated with a target risk level (probability of exceedance). These models are compared with one another considering uncertainties in material and geometrical properties. The Monte Carlo simulation method is used to generate random variables. The statistical parameters of the calibrated models are calculated for various coefficients of variation regardless of their correlation with the random resistance variables. These probabilistic results are very useful for evaluating the stochastic sensitivity of the calibrated models.
International Nuclear Information System (INIS)
Olsen, A.R.; Cunningham, M.E.
1980-01-01
With the increasing sophistication and use of computer codes in the nuclear industry, there is a growing awareness of the need to identify and quantify the uncertainties of these codes. In any effort to model physical mechanisms, the results obtained from the model are subject to some degree of uncertainty. This uncertainty has two primary sources. First, there is uncertainty in the model's representation of reality. Second, there is an uncertainty in the input data required by the model. If individual models are combined into a predictive sequence, the uncertainties from an individual model will propagate through the sequence and add to the uncertainty of results later obtained. Nuclear fuel rod stored-energy models, characterized as a combination of numerous submodels, exemplify models so affected. Each submodel depends on output from previous calculations and may involve iterative interdependent submodel calculations for the solution. The iterative nature of the model and the cost of running the model severely limit the uncertainty analysis procedures. An approach for uncertainty analysis under these conditions was designed for the particular case of stored-energy models. It is assumed that the complicated model is correct, that a simplified model based on physical considerations can be designed to approximate the complicated model, and that linear error propagation techniques can be used on the simplified model
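Linear error propagation through a simplified surrogate model, as described above, can be sketched like this. The surrogate model and the input uncertainties are hypothetical.

```python
import math

# First-order (linear) error propagation: sigma_f^2 = sum_i (df/dx_i)^2 * sigma_i^2,
# with partial derivatives taken by finite differences on a simplified model.
def stored_energy(T, c):
    # Toy surrogate for a stored-energy submodel: energy ~ c * T
    return c * T

# Hypothetical inputs: (nominal value, standard deviation)
inputs = {"T": (900.0, 20.0), "c": (0.30, 0.015)}

def propagate(f, inputs, h=1e-6):
    x0 = {k: v[0] for k, v in inputs.items()}
    var = 0.0
    for k, (val, sig) in inputs.items():
        step = h * max(abs(val), 1.0)
        xp = dict(x0)
        xp[k] = val + step
        deriv = (f(**xp) - f(**x0)) / step     # finite-difference partial
        var += (deriv * sig) ** 2
    return math.sqrt(var)

sigma_f = propagate(stored_energy, inputs)
print(round(sigma_f, 1))
```

The point of the approach above is that this cheap propagation is run on the simplified surrogate, because running the full iterative model enough times for a sampling-based analysis would be too expensive.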
Penn, C. A.; Clow, D. W.; Sexstone, G. A.
2017-12-01
Water supply forecasts are an important tool for water resource managers in areas where surface water is relied on for irrigating agricultural lands and for municipal water supplies. Forecast errors, which correspond to inaccurate predictions of total surface water volume, can lead to mis-allocated water and productivity loss, thus costing stakeholders millions of dollars. The objective of this investigation is to provide water resource managers with an improved understanding of factors contributing to forecast error, and to help increase the accuracy of future forecasts. In many watersheds of the western United States, snowmelt contributes 50-75% of annual surface water flow and controls both the timing and volume of peak flow. Water supply forecasts from the Natural Resources Conservation Service (NRCS), National Weather Service, and similar cooperators use precipitation and snowpack measurements to provide water resource managers with an estimate of seasonal runoff volume. The accuracy of these forecasts can be limited by available snowpack and meteorological data. In the headwaters of the Rio Grande, NRCS produces January through June monthly Water Supply Outlook Reports. This study evaluates the accuracy of these forecasts since 1990, and examines what factors may contribute to forecast error. The Rio Grande headwaters has experienced recent changes in land cover from bark beetle infestation and a large wildfire, which can affect hydrological processes within the watershed. To investigate trends and possible contributing factors in forecast error, a semi-distributed hydrological model was calibrated and run to simulate daily streamflow for the period 1990-2015. Annual and seasonal watershed and sub-watershed water balance properties were compared with seasonal water supply forecasts. Gridded meteorological datasets were used to assess changes in the timing and volume of spring precipitation events that may contribute to forecast error. Additionally, a
A spatial error model with continuous random effects and an application to growth convergence
Laurini, Márcio Poletti
2017-10-01
We propose a spatial error model with continuous random effects based on Matérn covariance functions and apply this model for the analysis of income convergence processes (β-convergence). The use of a model with continuous random effects permits a clearer visualization and interpretation of the spatial dependency patterns, avoids the problems of defining neighborhoods in spatial econometrics models, and allows projecting the spatial effects for every possible location in the continuous space, circumventing the existing aggregations in discrete lattice representations. We apply this model approach to analyze the economic growth of Brazilian municipalities between 1991 and 2010 using unconditional and conditional formulations and a spatiotemporal model of convergence. The results indicate that the estimated spatial random effects are consistent with the existence of income convergence clubs for Brazilian municipalities in this period.
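The Matérn covariance function underlying such continuous random effects has simple closed forms for half-integer smoothness. The sketch below implements the nu = 1/2 (exponential) and nu = 3/2 cases with illustrative parameters.

```python
import math

# Matérn covariance between two locations at distance d, for the two
# half-integer smoothness values with closed forms; sigma2 is the
# variance and rho the range parameter (values here are illustrative).
def matern(d, sigma2=1.0, rho=1.0, nu=0.5):
    if d == 0.0:
        return sigma2
    if nu == 0.5:                        # exponential covariance
        return sigma2 * math.exp(-d / rho)
    if nu == 1.5:
        s = math.sqrt(3.0) * d / rho
        return sigma2 * (1.0 + s) * math.exp(-s)
    raise ValueError("only nu in {0.5, 1.5} implemented in this sketch")

print(round(matern(1.0, nu=0.5), 4))
print(round(matern(1.0, nu=1.5), 4))
```

Because the covariance is defined for every distance d, the random effect is available at any point in continuous space, which is what lets the model avoid discrete-lattice neighborhood definitions.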
Sheu, Yun Robert; Feder, Elie; Balsim, Igor; Levin, Victor F; Bleicher, Andrew G; Branstetter, Barton F
2010-06-01
Peer review is an essential process for physicians because it facilitates improved quality of patient care and continuing physician learning and improvement. However, peer review often is not well received by radiologists, who note that it is time intensive, is subjective, and lacks a demonstrable impact on patient care. Current advances in peer review include the RADPEER system, with its standardization of discrepancies and incorporation of the peer-review process into the PACS itself. The purpose of this study was to build on RADPEER and similar systems by using a mathematical model to optimally select the types of cases to be reviewed, for each radiologist undergoing review, on the basis of the past frequency of interpretive error, the likelihood of morbidity from an error, the financial cost of an error, and the time required for the reviewing radiologist to interpret the study. The investigators compiled 612,890 preliminary radiology reports authored by residents and attending radiologists at a large tertiary care medical center from 1999 to 2004. Discrepancies between preliminary and final interpretations were classified by severity and validated by repeat review of major discrepancies. A mathematical model was then used to calculate, for each author of a preliminary report, the combined morbidity and financial costs of expected errors across 3 modalities (MRI, CT, and conventional radiography) and 4 departmental divisions (neuroradiology, abdominal imaging, musculoskeletal imaging, and thoracic imaging). A customized report was generated for each on-call radiologist that determined the category (modality and body part) with the highest total cost function. A universal total cost based on probability data from all radiologists was also compiled. The use of mathematical models to guide case selection could optimize the efficiency and effectiveness of physician time spent on peer review and produce more concrete and meaningful feedback to radiologists.
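The kind of cost function described above, used to rank case categories for review, can be sketched as an expected avoidable cost per minute of reviewer time. The error rates, costs, and times below are hypothetical, not the study's data.

```python
# Hypothetical per-category figures: past interpretive-error rate, combined
# morbidity + financial cost per error, and reviewer minutes per case.
categories = {
    ("CT", "neuro"):    {"err_rate": 0.020, "cost": 900.0, "minutes": 6.0},
    ("MRI", "msk"):     {"err_rate": 0.012, "cost": 700.0, "minutes": 9.0},
    ("XR", "thoracic"): {"err_rate": 0.030, "cost": 300.0, "minutes": 2.0},
}

def cost_per_review_minute(c):
    # Expected error cost surfaced per minute of reviewer effort
    return c["err_rate"] * c["cost"] / c["minutes"]

ranked = sorted(categories,
                key=lambda k: cost_per_review_minute(categories[k]),
                reverse=True)
print(ranked[0])
```

A customized report would then direct each radiologist's peer-review time toward the top-ranked category rather than a random sample of cases.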
Handbook of structural equation modeling
Hoyle, Rick H
2012-01-01
The first comprehensive structural equation modeling (SEM) handbook, this accessible volume presents both the mechanics of SEM and specific SEM strategies and applications. The editor, contributors, and editorial advisory board are leading methodologists who have organized the book to move from simpler material to more statistically complex modeling approaches. Sections cover the foundations of SEM; statistical underpinnings, from assumptions to model modifications; steps in implementation, from data preparation through writing the SEM report; and basic and advanced applications, inclu
Bodmer, James E; English, Anthony; Brady, Megan; Blackwell, Ken; Haxhinasto, Kari; Fotedar, Sunaina; Borgman, Kurt; Bai, Er-Wei; Moy, Alan B
2005-09-01
Transendothelial impedance across an endothelial monolayer grown on a microelectrode has previously been modeled as a repeating pattern of disks in which the electrical circuit consists of a resistor and capacitor in series. Although this numerical model breaks down barrier function into measurements of cell-cell adhesion, cell-matrix adhesion, and membrane capacitance, such solution parameters can be inaccurate without understanding model stability and error. In this study, we have evaluated modeling stability and error by using a chi-square (χ2) evaluation and Levenberg-Marquardt nonlinear least-squares (LM-NLS) method of the real and/or imaginary data in which the experimental measurement is compared with the calculated measurement derived by the model. Modeling stability and error were dependent on current frequency and the type of experimental data modeled. Solution parameters of cell-matrix adhesion were most susceptible to modeling instability. Furthermore, the LM-NLS method displayed frequency-dependent instability of the solution parameters, regardless of whether the real or imaginary data were analyzed. However, the LM-NLS method identified stable and reproducible solution parameters between all types of experimental data when a defined frequency spectrum of the entire data set was selected on the basis of a criterion of minimizing error. The frequency bandwidth that produced stable solution parameters varied greatly among different data types. Thus a numerical model based on characterizing transendothelial impedance as a resistor and capacitor in series and as a repeating pattern of disks is not sufficient to characterize the entire frequency spectrum of experimental transendothelial impedance.
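The series resistor-capacitor element of the disk model gives an impedance magnitude that falls with frequency, which is why fit stability depends on the frequency band chosen. A minimal sketch with hypothetical R and C values:

```python
import math

# Impedance magnitude of a resistor R in series with a capacitor C at
# frequency f (Hz): |Z| = sqrt(R^2 + (1 / (2*pi*f*C))^2).
# R and C below are hypothetical, for illustration only.
def z_mag(R, C, f):
    xc = 1.0 / (2.0 * math.pi * f * C)   # capacitive reactance
    return math.sqrt(R * R + xc * xc)

R = 2000.0   # ohms
C = 1e-6     # farads
for f in (100.0, 1000.0, 10000.0):
    print(f, round(z_mag(R, C, f), 1))
```

At low frequency the capacitive term dominates and at high frequency |Z| flattens toward R, so different parameters are constrained in different bands, consistent with the frequency-dependent fit stability reported above.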
National Research Council Canada - National Science Library
Klempner, Scott
2008-01-01
... Error modeling and propagation methodology is developed for each link in the imaging chain, and representative values are determined for the purpose of exercising the model and observing the system...
In-Body Ranging with Ultra-Wideband Signals: Techniques and Modeling of the Ranging Error
Directory of Open Access Journals (Sweden)
Muzaffer Kanaan
2017-01-01
Results are shown for the problem of accurate ranging within the human body using ultra-wideband signals. The ability to accurately measure the range between a sensor implanted in the human body and an external receiver can make possible a number of new medical applications, such as improved wireless capsule endoscopy, next-generation microrobotic surgery systems, and targeted drug delivery systems. The contributions of this paper are twofold. First, we propose two novel range estimators: one based on an implementation of the so-called CLEAN algorithm for estimating channel profiles and another based on neural networks. Second, we develop models to describe the statistics of the ranging error for both types of estimators. Such models are important for the design and performance analysis of localization systems. It is shown that the ranging error in both cases follows a heavy-tail distribution known as the Generalized Extreme Value distribution. Our results also indicate that the estimator based on neural networks outperforms the CLEAN-based estimator, providing ranging errors better than or equal to 3.23 mm with 90% probability.
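A Generalized Extreme Value (GEV) model of the ranging error can be evaluated directly from its closed-form CDF. The location, scale, and shape parameters below are hypothetical, not the fitted values from the paper.

```python
import math

# GEV CDF: F(x) = exp(-(1 + xi*(x - mu)/sigma)^(-1/xi)) for xi != 0,
# with the Gumbel limit exp(-exp(-(x - mu)/sigma)) at xi = 0.
def gev_cdf(x, mu, sigma, xi):
    if xi == 0.0:
        return math.exp(-math.exp(-(x - mu) / sigma))
    t = 1.0 + xi * (x - mu) / sigma
    if t <= 0.0:                 # outside the distribution's support
        return 0.0 if xi > 0 else 1.0
    return math.exp(-t ** (-1.0 / xi))

# Probability the ranging error is at most 3.23 mm under hypothetical
# parameters (mu, sigma in mm; positive xi gives the heavy upper tail)
p = gev_cdf(3.23, mu=1.2, sigma=0.8, xi=0.25)
print(round(p, 2))
```

A positive shape parameter xi produces the heavy upper tail that the ranging-error statistics above exhibit; fitting mu, sigma, and xi to measured errors would yield quantiles like the 90% bound quoted in the abstract.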
Modeling Inborn Errors of Hepatic Metabolism Using Induced Pluripotent Stem Cells.
Pournasr, Behshad; Duncan, Stephen A
2017-11-01
Inborn errors of hepatic metabolism are caused by deficiencies, commonly of a single enzyme, that arise from heritable mutations in the genome. Individually such diseases are rare, but collectively they are common. Advances in genome-wide association studies and DNA sequencing have helped researchers identify the underlying genetic basis of such diseases. Unfortunately, cellular and animal models that accurately recapitulate these inborn errors of hepatic metabolism in the laboratory have been lacking. Recently, investigators have exploited molecular techniques to generate induced pluripotent stem cells from patients' somatic cells. Induced pluripotent stem cells can differentiate into a wide variety of cell types, including hepatocytes, thereby offering an innovative approach to unravel the mechanisms underlying inborn errors of hepatic metabolism. Moreover, such cell models could potentially provide a platform for the discovery of therapeutics. In this mini-review, we present a brief overview of the state-of-the-art in using pluripotent stem cells for such studies. © 2017 American Heart Association, Inc.
Modelling earthquake location errors at a reservoir scale: a case study in the Upper Rhine Graben
Kinnaert, X.; Gaucher, E.; Achauer, U.; Kohl, T.
2016-08-01
Earthquake absolute location errors which can be encountered in an underground reservoir are investigated. In such an exploitation context, earthquake hypocentre errors can have an impact on the field development and economic consequences. The approach using the state-of-the-art techniques covers both the location uncertainty and the location inaccuracy—or bias—problematics. It consists, first, in creating a 3-D synthetic seismic cloud of events in the reservoir and calculating the seismic traveltimes to a monitoring network assuming certain propagation conditions. In a second phase, the earthquakes are relocated with assumptions different from the initial conditions. Finally, the initial and relocated hypocentres are compared. As a result, location errors driven by the seismic onset time picking uncertainties and inaccuracies are quantified in 3-D. Effects induced by erroneous assumptions associated with the velocity model are also modelled. In particular, 1-D velocity model uncertainties, a local 3-D perturbation of the velocity and a 3-D geostructural model are considered. The present approach is applied to the site of Rittershoffen (Alsace, France), which is one of the deep geothermal fields existing in the Upper Rhine Graben. This example allows setting realistic scenarios based on the knowledge of the site. In that case, the zone of interest, monitored by an existing seismic network, ranges between 1 and 5 km depth in a radius of 2 km around a geothermal well. Well log data provided a reference 1-D velocity model used for the synthetic earthquake relocation. The 3-D analysis highlights the role played by the seismic network coverage and the velocity model in the amplitude and orientation of the location uncertainties and inaccuracies at subsurface levels. The location errors are neither isotropic nor aleatoric in the zone of interest. This suggests that although location inaccuracies may be smaller than location uncertainties, both quantities can have a
DEFF Research Database (Denmark)
Lowes, F.J.; Olsen, Nils
2004-01-01
Most modern spherical harmonic geomagnetic models based on satellite data include estimates of the variances of the spherical harmonic coefficients of the model; these estimates are based on the geometry of the data and the fitting functions, and on the magnitude of the residuals. However, … led to quite inaccurate variance estimates. We estimate correction factors which range from 1/4 to 20, with the largest increases being for the zonal, m = 0, and sectorial, m = n, terms. With no correction, the OSVM variances give a mean-square vector field error of prediction over the Earth's surface …
International Nuclear Information System (INIS)
Jung, W.D.; Kim, T.W.; Park, C.K.
1991-01-01
This paper presents an integrated approach to prediction of human error probabilities with a computer program, HREP (Human Reliability Evaluation Program). HREP is developed to provide simplicity in Human Reliability Analysis (HRA) and consistency in the obtained results. The basic assumption made in developing HREP is that human behaviors can be quantified in two separate steps. One is the diagnosis error evaluation step, and the other the response error evaluation step. HREP integrates the Human Cognitive Reliability (HCR) model and the HRA Event Tree technique. The former corresponds to the Diagnosis model, and the latter to the Response model. HREP consists of HREP-IN and HREP-MAIN. HREP-IN is used to generate input files. HREP-MAIN is used to evaluate selected human errors in a given input file. HREP-MAIN is divided into three subsections: the diagnosis evaluation step, the subaction evaluation step and the modification step. The final modification step takes dependency and/or recovery factors into consideration. (author)
Dong, Li-hu; Li, Feng-ri; Jia, Wei-wei; Liu, Fu-xiang; Wang, He-zhi
2011-10-01
Based on the biomass data of 516 sampling trees, and by using non-linear error-in-variable modeling approach, the compatible models for the total biomass and the biomass of six components including aboveground part, underground part, stem, crown, branch, and foliage of 15 major tree species (or groups) in Heilongjiang Province were established, and the best models for the total biomass and components biomass were selected. The compatible models based on total biomass were developed by adopting the method of joint control different level ratio function. The heteroscedasticity of the models for total biomass was eliminated with log transformation, and the weighted regression was applied to the models for each individual component. Among the compatible biomass models established for the 15 major species (or groups) , the model for total biomass had the highest prediction precision (90% or more), followed by the models for aboveground part and stem biomass, with a precision of 87.5% or more. The prediction precision of the biomass models for other components was relatively low, but it was still greater than 80% for most test tree species. The modeling efficiency (EF) values of the total, aboveground part, and stem biomass models for all the tree species (or groups) were over 0.9, and the EF values of the underground part, crown, branch, and foliage biomass models were over 0.8.
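The log-transformed allometric fit underlying such biomass models can be sketched as follows. The diameters, coefficients, and noise level are invented for illustration; the log transformation is the same device the study uses to eliminate heteroscedasticity.

```python
import numpy as np

rng = np.random.default_rng(1)
dbh = rng.uniform(5.0, 40.0, size=200)        # diameter at breast height, cm
a_true, b_true = -2.0, 2.4                    # assumed allometric coefficients

# Simulated biomass with multiplicative error: ln(B) = a + b*ln(DBH) + eps
biomass = np.exp(a_true + b_true * np.log(dbh) + rng.normal(0.0, 0.1, 200))

# Ordinary least squares on the log-transformed model.
X = np.column_stack([np.ones_like(dbh), np.log(dbh)])
coef, *_ = np.linalg.lstsq(X, np.log(biomass), rcond=None)
a_hat, b_hat = coef
```

Fitting component equations (stem, branch, foliage, ...) and constraining them to sum to the total is the extra "compatibility" step the study adds on top of this basic regression.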
Automated evolutionary restructuring of workflows to minimise errors via stochastic model checking
DEFF Research Database (Denmark)
Herbert, Luke Thomas; Hansen, Zaza Nadja Lee; Jacobsen, Peter
2014-01-01
This paper presents a framework for the automated restructuring of workflows that allows one to minimise the impact of errors on a production workflow. The framework allows for the modelling of workflows by means of a formalised subset of the Business Process Modelling and Notation (BPMN) language, a well-established visual language for modelling workflows in a business context. The framework’s modelling language is extended to include the tracking of real-valued quantities associated with the process (such as time, cost, temperature). In addition, this language also allows for an intention … by means of a case study from the food industry. Through this case study we explore the extent to which the risk of production faults can be reduced and the impact of these can be minimised, primarily through restructuring of the production workflows. This approach is fully automated and only the modelling …
Probabilistic models for structured sparsity
DEFF Research Database (Denmark)
Andersen, Michael Riis
Sparsity has become an increasingly popular choice of regularization in machine learning and statistics. The sparsity assumption for a matrix X means that most of the entries in X are equal to exactly zero. Structured sparsity is a generalization of sparsity and assumes that the set of locations of the non-zero coefficients in X contains structure that can be exploited. This thesis deals with probabilistic models for structured sparsity for regularization of ill-posed problems. The aim of the thesis is two-fold: to construct sparsity promoting prior distributions for structured sparsity … of each time series is decomposed into a non-negative linear combination of elements from a dictionary of shared covariance matrix components. A variational Bayes algorithm is derived for approximate posterior inference. The proposed model is validated using a functional magnetic resonance imaging (fMRI) …
Kipnis, Victor; Freedman, Laurence S; Carroll, Raymond J; Midthune, Douglas
2016-03-01
Semicontinuous data in the form of a mixture of a large portion of zero values and continuously distributed positive values frequently arise in many areas of biostatistics. This article is motivated by the analysis of relationships between disease outcomes and intakes of episodically consumed dietary components. An important aspect of studies in nutritional epidemiology is that true diet is unobservable and commonly evaluated by food frequency questionnaires with substantial measurement error. Following the regression calibration approach for measurement error correction, unknown individual intakes in the risk model are replaced by their conditional expectations given mismeasured intakes and other model covariates. Those regression calibration predictors are estimated using short-term unbiased reference measurements in a calibration substudy. Since dietary intakes are often "energy-adjusted," e.g., by using ratios of the intake of interest to total energy intake, the correct estimation of the regression calibration predictor for each energy-adjusted episodically consumed dietary component requires modeling short-term reference measurements of the component (a semicontinuous variable), and energy (a continuous variable) simultaneously in a bivariate model. In this article, we develop such a bivariate model, together with its application to regression calibration. We illustrate the new methodology using data from the NIH-AARP Diet and Health Study (Schatzkin et al., 2001, American Journal of Epidemiology 154, 1119-1125), and also evaluate its performance in a simulation study. © 2015, The International Biometric Society.
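The regression calibration idea can be illustrated in the simplest linear case: the mismeasured intake W is replaced by an estimate of E[X | W] built from a calibration substudy with unbiased reference measurements. All distributions and effect sizes here are invented, and the paper's bivariate semicontinuous model is considerably richer.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 20_000
x = rng.normal(5.0, 1.0, n)               # true intake (unobservable)
w = x + rng.normal(0.0, 1.0, n)           # questionnaire with measurement error
y = 0.5 * x + rng.normal(0.0, 1.0, n)     # outcome; true slope is 0.5

# Naive regression of y on w attenuates the slope toward zero.
naive = np.polyfit(w, y, 1)[0]

# Calibration substudy: regress an unbiased reference measurement (here x
# itself, standing in for a short-term reference instrument) on w, then use
# the predicted E[X | W] in the risk model.
sub = slice(0, 2000)
g = np.polyfit(w[sub], x[sub], 1)
x_hat = np.polyval(g, w)
calibrated = np.polyfit(x_hat, y, 1)[0]
```

With equal variances for true intake and measurement error, the naive slope is attenuated by roughly half, while the calibrated slope recovers the true effect.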
Ozone Production in Global Tropospheric Models: Quantifying Errors due to Grid Resolution
Wild, O.; Prather, M. J.
2005-12-01
Ozone production in global chemical models is dependent on model resolution because ozone chemistry is inherently nonlinear, the timescales for chemical production are short, and precursors are artificially distributed over the spatial scale of the model grid. In this study we examine the sensitivity of ozone, its precursors, and its production to resolution by running a global chemical transport model at four different resolutions between T21 (5.6° × 5.6°) and T106 (1.1° × 1.1°) and by quantifying the errors in regional and global budgets. The sensitivity to vertical mixing through the parameterization of boundary layer turbulence is also examined. We find less ozone production in the boundary layer at higher resolution, consistent with slower chemical production in polluted emission regions and greater export of precursors. Agreement with ozonesonde and aircraft measurements made during the NASA TRACE-P campaign over the Western Pacific in spring 2001 is consistently better at higher resolution. We demonstrate that the numerical errors in transport processes at a given resolution converge geometrically for a tracer at successively higher resolutions. The convergence in ozone production on progressing from T21 to T42, T63 and T106 resolution is likewise monotonic but still indicates large errors at 120 km scales, suggesting that T106 resolution is still too coarse to resolve regional ozone production. Diagnosing the ozone production and precursor transport that follow a short pulse of emissions over East Asia in springtime allows us to quantify the impacts of resolution on both regional and global ozone. Production close to continental emission regions is overestimated by 27% at T21 resolution, by 13% at T42 resolution, and by 5% at T106 resolution, but subsequent ozone production in the free troposphere is less significantly affected.
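The core resolution effect, nonlinear chemistry applied to grid-averaged precursors, can be demonstrated with a toy field. The quadratic rate law below is an assumed stand-in, not the model's actual ozone chemistry: applying a convex rate to block averages biases the mean production (Jensen's inequality), which is the mechanism behind the resolution dependence described above.

```python
import numpy as np

rng = np.random.default_rng(3)
# A heterogeneous "fine grid" precursor field (lognormal, so strongly skewed).
fine = rng.lognormal(mean=0.0, sigma=1.0, size=(64, 64))

def production(c):
    return c ** 2   # assumed nonlinear rate law, for illustration only

# Truth: apply the chemistry at full resolution, then average.
true_mean = production(fine).mean()

# Coarse model: average precursors over 8x8 blocks, then apply the chemistry.
coarse = fine.reshape(8, 8, 8, 8).mean(axis=(1, 3))
coarse_mean = production(coarse).mean()
```

For this convex rate law the coarse estimate falls below the fine-grid truth; the sign and size of the bias in a real model depend on the actual chemistry, which is why the study quantifies it empirically at T21 through T106.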
Ideal point error for model assessment in data-driven river flow forecasting
Directory of Open Access Journals (Sweden)
C. W. Dawson
2012-08-01
Full Text Available When analysing the performance of hydrological models in river forecasting, researchers use a number of diverse statistics. Although some statistics appear to be used more regularly in such analyses than others, there is a distinct lack of consistency in evaluation, making studies undertaken by different authors or performed at different locations difficult to compare in a meaningful manner. Moreover, even within individual reported case studies, substantial contradictions are found to occur between one measure of performance and another. In this paper we examine the ideal point error (IPE metric – a recently introduced measure of model performance that integrates a number of recognised metrics in a logical way. Having a single, integrated measure of performance is appealing as it should permit more straightforward model inter-comparisons. However, this is reliant on a transferrable standardisation of the individual metrics that are combined to form the IPE. This paper examines one potential option for standardisation: the use of naive model benchmarking.
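An ideal-point-style score can be sketched as follows: each metric is benchmarked against a naive (persistence) model, and the benchmarked metrics are combined as a distance from the ideal point at zero. The metric set and weighting here are illustrative assumptions; the published IPE definition may differ in detail.

```python
import numpy as np

obs   = np.array([3.1, 3.4, 4.0, 5.2, 6.0, 5.5, 4.8, 4.1])  # invented flows
model = np.array([3.0, 3.6, 4.1, 5.0, 5.8, 5.6, 4.9, 4.3])  # invented forecast

# Naive benchmark: persistence (previous observation).
naive = np.roll(obs, 1)
naive[0] = obs[0]

def rmse(a, b):
    return float(np.sqrt(np.mean((a - b) ** 2)))

def mae(a, b):
    return float(np.mean(np.abs(a - b)))

# Standardize each metric by the naive benchmark, then take the distance
# from the ideal point (all ratios equal to zero).
ratios = np.array([rmse(obs, model) / rmse(obs, naive),
                   mae(obs, model) / mae(obs, naive)])
ipe = float(np.sqrt(np.mean(ratios ** 2)))
```

A score below 1 indicates the model beats the naive benchmark on the combined metrics; benchmarking against a naive model is exactly the standardisation option the paper examines.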
Web service availability-impact of error recovery and traffic model
International Nuclear Information System (INIS)
Martinello, Magnos; Kaâniche, Mohamed; Kanoun, Karama
2005-01-01
Internet is often used for transaction-based applications such as online banking, stock trading and shopping, where service interruptions or outages are unacceptable. Therefore, it is important for designers of such applications to analyze how hardware, software and performance-related failures affect the quality of service delivered to the users. This paper presents analytical models for evaluating the service availability of web cluster architectures. A composite performance and availability modeling approach is defined considering various causes of service unavailability. In particular, web cluster systems are modeled taking into account two error recovery strategies (client-transparent and non-client-transparent) as well as two traffic models (Poisson and modulated Poisson). Sensitivity analysis results are presented to show their impact on the web service availability. The obtained results provide useful guidelines to web designers.
Oliveira, R. A. J.; Vila, D. A.; Maggioni, V.; Morales, C. A.
2015-12-01
This study aims to investigate, over the different regions of Brazil, the error characteristics and uncertainties (random and systematic error components) in satellite-based precipitation estimates by comparing the Goddard Profiling Algorithm (GPROF), applied to different sensors from the GPM database (such as GMI, TMI, SSMI/S, AMSR2, and MHS, among others), and the Integrated Multi-satellitE Retrievals for GPM (IMERG) algorithm. The analyses are made against other ground-based (S- and X-band dual-polarization weather radar) and space-based (e.g., the TRMM-PR and GPM-DPR [Ku-band] active radars) rainfall estimates as references at instantaneous timescales, respecting their temporal limitations. The Precipitation Uncertainties for Satellite Hydrology (PUSH) framework is used for uncertainty characterization and error modeling. Specifically, this study is focused on regions of Brazil where the campaigns of the CHUVA project occurred (CHUVA/GoAmazon [IOP1 and 2]) in Amazonia and on southern Brazil, where S-band dual-polarization radars (e.g., the FCTH radar) are located.
Fast Outage Probability Simulation for FSO Links with a Generalized Pointing Error Model
Ben Issaid, Chaouki
2017-02-07
Over the past few years, free-space optical (FSO) communication has gained significant attention. In fact, FSO can provide cost-effective and unlicensed links, with high-bandwidth capacity and low error rate, making it an exciting alternative to traditional wireless radio-frequency communication systems. However, the system performance is affected not only by the presence of atmospheric turbulences, which occur due to random fluctuations in the air refractive index but also by the existence of pointing errors. Metrics, such as the outage probability which quantifies the probability that the instantaneous signal-to-noise ratio is smaller than a given threshold, can be used to analyze the performance of this system. In this work, we consider weak and strong turbulence regimes, and we study the outage probability of an FSO communication system under a generalized pointing error model with both a nonzero boresight component and different horizontal and vertical jitter effects. More specifically, we use an importance sampling approach which is based on the exponential twisting technique to offer fast and accurate results.
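The exponential-twisting idea can be shown on a toy Gaussian tail probability: samples are drawn from a tilted (mean-shifted) law and reweighted by the likelihood ratio, so a rare outage event is hit often instead of almost never. The FSO channel model itself is not reproduced here; this is only the importance-sampling mechanics.

```python
import math
import random

random.seed(0)
threshold = -4.0      # outage event {Z < threshold}, Z ~ N(0, 1)
theta = threshold     # tilt: sample from N(theta, 1) so the event is common

n = 200_000
acc = 0.0
for _ in range(n):
    z = random.gauss(theta, 1.0)
    if z < threshold:
        # Likelihood ratio dN(0,1)/dN(theta,1) = exp(-theta*z + theta**2 / 2)
        acc += math.exp(-theta * z + theta * theta / 2.0)
p_hat = acc / n

# Closed-form reference for the standard normal tail.
exact = 0.5 * math.erfc(-threshold / math.sqrt(2.0))
```

A crude Monte Carlo estimate of a probability near 3e-5 would need hundreds of millions of samples for comparable accuracy; the tilted estimator reaches it with a small fraction of that budget.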
Khoshgoftaar, Taghi M; Van Hulse, Jason; Napolitano, Amri
2010-05-01
Neural network algorithms such as multilayer perceptrons (MLPs) and radial basis function networks (RBFNets) have been used to construct learners which exhibit strong predictive performance. Two data related issues that can have a detrimental impact on supervised learning initiatives are class imbalance and labeling errors (or class noise). Imbalanced data can make it more difficult for the neural network learning algorithms to distinguish between examples of the various classes, and class noise can lead to the formulation of incorrect hypotheses. Both class imbalance and labeling errors are pervasive problems encountered in a wide variety of application domains. Many studies have been performed to investigate these problems in isolation, but few have focused on their combined effects. This study presents a comprehensive empirical investigation using neural network algorithms to learn from imbalanced data with labeling errors. In particular, the first component of our study investigates the impact of class noise and class imbalance on two common neural network learning algorithms, while the second component considers the ability of data sampling (which is commonly used to address the issue of class imbalance) to improve their performances. Our results, for which over two million models were trained and evaluated, show that conclusions drawn using the more commonly studied C4.5 classifier may not apply when using neural networks.
Tian, Wei; Cai, Li; Thissen, David; Xin, Tao
2013-01-01
In item response theory (IRT) modeling, the item parameter error covariance matrix plays a critical role in statistical inference procedures. When item parameters are estimated using the EM algorithm, the parameter error covariance matrix is not an automatic by-product of item calibration. Cai proposed the use of Supplemented EM algorithm for…
Residual sweeping errors in turbulent particle pair diffusion in a Lagrangian diffusion model.
Directory of Open Access Journals (Sweden)
Nadeem A Malik
Full Text Available Thomson, D. J. & Devenish, B. J. [J. Fluid Mech. 526, 277 (2005)] and others have suggested that sweeping effects make Lagrangian properties in Kinematic Simulations (KS; Fung et al. [Fung J. C. H., Hunt J. C. R., Malik N. A. & Perkins R. J., J. Fluid Mech. 236, 281 (1992)]) unreliable. However, such a conclusion can only be drawn under the assumption of locality. The major aim here is to quantify the sweeping errors in KS without assuming locality. Through a novel analysis based upon analysing pairs of particle trajectories in a frame of reference moving with the large energy containing scales of motion it is shown that the normalized integrated error [Formula: see text] in the turbulent pair diffusivity (K) due to the sweeping effect decreases with increasing pair separation (σl), such that [Formula: see text] as σl/η → ∞; and [Formula: see text] as σl/η → 0. η is the Kolmogorov turbulence microscale. There is an intermediate range of separations 1 < σl/η < ∞ in which the error [Formula: see text] remains negligible. Simulations using KS show that in the swept frame of reference, this intermediate range is large, covering almost the entire inertial subrange simulated, 1 < σl/η < 10^5, implying that the deviation from locality observed in KS cannot be attributed to sweeping errors. This is important for pair diffusion theory and modeling. PACS numbers: 47.27.E?, 47.27.Gs, 47.27.jv, 47.27.Ak, 47.27.tb, 47.27.eb, 47.11.-j.
Modeling the North American vertical datum of 1988 errors in the conterminous United States
Li, X.
2018-02-01
A large systematic difference (ranging from -20 cm to +130 cm) was found between NAVD 88 (North American Vertical Datum of 1988) and the pure gravimetric geoid models. This difference not only makes it very difficult to augment the local geoid model by directly using the vast NAVD 88 network with state-of-the-art technologies recently developed in geodesy, but also limits the ability of researchers to effectively demonstrate the geoid model improvements on the NAVD 88 network. Here, both conventional regression analyses based on various predefined basis functions such as polynomials, B-splines, and Legendre functions and Latent Variable Analysis (LVA) such as Factor Analysis (FA) are used to analyze the systematic difference. Besides giving a mathematical model, the regression results do not reveal a great deal about the physical reasons that caused the large differences in NAVD 88, which may be of interest to various researchers. Furthermore, there is still a significant amount of non-Gaussian signal left in the residuals of the conventional regression models. On the other hand, the FA method not only provides a better fit of the data, but also offers possible explanations of the error sources. Without requiring extra hypothesis tests on the model coefficients, the results from FA are more efficient in terms of capturing the systematic difference. Furthermore, without using a covariance model, a novel interpolating method based on the relationship between the loading matrix and the factor scores is developed for predictive purposes. The prediction error analysis shows that about 3-7 cm precision is expected in NAVD 88 after removing the systematic difference.
Long-Run Effects in Large Heterogeneous Panel Data Models with Cross-Sectionally Correlated Errors
Chudik, Alexander; Mohaddes, Kamiar; Pesaran, M Hashem; Raissi, Mehdi
2016-01-01
This paper develops a cross-sectionally augmented distributed lag (CS-DL) approach to the estimation of long-run effects in large dynamic heterogeneous panel data models with cross-sectionally dependent errors. The asymptotic distribution of the CS-DL estimator is derived under coefficient heterogeneity in the case where the time dimension (T) and the cross-section dimension (N) are both large. The CS-DL approach is compared with more standard panel data estimators that are based on autoregre...
Long-run effects in large heterogenous panel data models with cross-sectionally correlated errors
Chudik, Alexander; Mohaddes, Kamiar; Pesaran, M. Hashem; Raissi, Mehdi
2015-01-01
This paper develops a cross-sectionally augmented distributed lag (CS-DL) approach to the estimation of long-run effects in large dynamic heterogeneous panel data models with cross-sectionally dependent errors. The asymptotic distribution of the CS-DL estimator is derived under coefficient heterogeneity in the case where the time dimension (T) and the cross-section dimension (N) are both large. The CS-DL approach is compared with more standard panel data estimators that are based on autoregre...
A queueing model for error control of partial buffer sharing in ATM
Directory of Open Access Journals (Sweden)
Ahn Boo Yong
1999-01-01
Full Text Available We model the error control of the partial buffer sharing of ATM by a queueing system M1, M2/G/1/K+1 with threshold and instantaneous Bernoulli feedback. We first derive the system equations and develop a recursive method to compute the loss probabilities at an arbitrary time epoch. We then build an approximation scheme to compute the mean waiting time of each class of cells. An algorithm is developed for finding the optimal threshold and queue capacity for a given quality of service.
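A Markovian stand-in for partial buffer sharing (exponential rather than general service, so a birth-death chain rather than the paper's M1, M2/G/1/K+1 system) shows how threshold admission yields per-class loss probabilities. Rates, capacity, and threshold are invented for illustration.

```python
import numpy as np

lam1, lam2, mu = 0.3, 0.5, 1.0    # class-1/class-2 arrival rates, service rate
K, L = 10, 6                      # buffer capacity, class-2 admission threshold

# Birth-death chain on queue length n: both classes admitted below L,
# only class 1 admitted between L and K.
birth = [lam1 + lam2 if n < L else lam1 for n in range(K)]
pi = np.ones(K + 1)
for n in range(K):
    pi[n + 1] = pi[n] * birth[n] / mu     # detailed balance recursion
pi /= pi.sum()

# By PASTA, an arrival sees the stationary distribution.
loss1 = pi[K]            # class 1 is lost only when the buffer is full
loss2 = pi[L:].sum()     # class 2 is lost at or above the threshold
```

Sweeping `L` and `K` over a grid and picking the cheapest pair meeting per-class loss targets mirrors the optimization step mentioned in the abstract.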
Retrievals from GOMOS stellar occultation measurements using characterization of modeling errors
Directory of Open Access Journals (Sweden)
V. F. Sofieva
2010-08-01
Full Text Available In this paper, we discuss the development of the inversion algorithm for the GOMOS (Global Ozone Monitoring by Occultation of Stars) instrument on board the Envisat satellite. The proposed algorithm accurately takes into account the wavelength-dependent modeling errors, which are mainly due to the incomplete scintillation correction in the stratosphere. Special attention is paid to the numerical efficiency of the algorithm. The developed method is tested on a large data set and its advantages are demonstrated. Its main advantage is a proper characterization of the uncertainties of the retrieved profiles of atmospheric constituents, which is of high importance for data assimilation, trend analyses and validation.
Recursive prediction error methods for online estimation in nonlinear state-space models
Directory of Open Access Journals (Sweden)
Dag Ljungquist
1994-04-01
Full Text Available Several recursive algorithms for online, combined state and parameter estimation in nonlinear state-space models are discussed in this paper. Well-known algorithms such as the extended Kalman filter and alternative formulations of the recursive prediction error method are included, as well as a new method based on a line-search strategy. A comparison of the algorithms illustrates that they are very similar although the differences can be important for the online tracking capabilities and robustness. Simulation experiments on a simple nonlinear process show that the performance under certain conditions can be improved by including a line-search strategy.
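Joint state-and-parameter estimation via the extended Kalman filter, one of the algorithms compared above, can be sketched for a scalar system with an unknown coefficient: the parameter is appended to the state and tracked online. All system values and noise levels are invented.

```python
import numpy as np

rng = np.random.default_rng(5)
a_true, u = 0.8, 1.0                   # unknown coefficient, known input
x = 0.0
ys = []
for _ in range(2000):
    x = a_true * x + u + rng.normal(0.0, 0.1)
    ys.append(x + rng.normal(0.0, 0.1))

# Augmented state z = [x, a]; dynamics: x' = a*x + u, a' = a (slow random walk).
z = np.array([0.0, 0.5])               # deliberately poor initial guess for a
P = np.eye(2)
Q = np.diag([0.1 ** 2, 1e-6])          # process noise; tiny drift allowed on a
R = 0.1 ** 2                           # measurement noise variance
H = np.array([1.0, 0.0])               # we observe x only

for y in ys:
    F = np.array([[z[1], z[0]], [0.0, 1.0]])   # Jacobian of the dynamics
    z = np.array([z[1] * z[0] + u, z[1]])      # predict
    P = F @ P @ F.T + Q
    S = H @ P @ H + R                          # innovation variance
    K = P @ H / S                              # Kalman gain
    z = z + K * (y - z[0])                     # measurement update
    P = P - np.outer(K, H @ P)

a_hat = z[1]
```

The cross-covariance between the state prediction error and the parameter, built up through the Jacobian, is what lets the filter identify `a`; this is the same mechanism the recursive prediction error methods exploit.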
Estimating the State of Aerodynamic Flows in the Presence of Modeling Errors
da Silva, Andre F. C.; Colonius, Tim
2017-11-01
The ensemble Kalman filter (EnKF) has been proven to be successful in fields such as meteorology, in which high-dimensional nonlinear systems render classical estimation techniques impractical. When the model used to forecast state evolution misrepresents important aspects of the true dynamics, estimator performance may degrade. In this work, parametrization and state augmentation are used to track misspecified boundary conditions (e.g., free stream perturbations). The resolution error is modeled as a Gaussian-distributed random variable with the mean (bias) and variance to be determined. The dynamics of the flow past a NACA 0009 airfoil at high angles of attack and moderate Reynolds number is represented by a Navier-Stokes equations solver with immersed boundaries capabilities. The pressure distribution on the airfoil or the velocity field in the wake, both randomized by synthetic noise, are sampled as measurement data and incorporated into the estimated state and bias following Kalman's analysis scheme. Insights about how to specify the
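The state-augmentation idea above can be sketched with a minimal ensemble Kalman filter on a scalar system whose forecast model omits a constant forcing term; the augmented ensemble learns the missing term as a bias estimate. Dimensions, noise levels, and the system itself are invented stand-ins for the Navier-Stokes setting.

```python
import numpy as np

rng = np.random.default_rng(11)
bias_true = 0.7                  # forcing missing from the forecast model
T, N = 300, 100                  # time steps, ensemble size
R = 0.2 ** 2                     # observation noise variance

x_true = 0.0
ens = np.zeros((2, N))           # rows: [state, bias estimate]
ens[1] = rng.normal(0.0, 1.0, N) # prior ensemble over the unknown bias

for _ in range(T):
    x_true = 0.9 * x_true + bias_true + rng.normal(0.0, 0.05)
    y = x_true + rng.normal(0.0, 0.2)
    # Forecast: each member propagates with its own bias estimate.
    ens[0] = 0.9 * ens[0] + ens[1] + rng.normal(0.0, 0.05, N)
    # Analysis: gain from ensemble covariances, perturbed-observation update.
    C = np.cov(ens)                              # 2x2 sample covariance
    K = C[:, 0] / (C[0, 0] + R)                  # gain for the observed component
    innov = y + rng.normal(0.0, 0.2, N) - ens[0]
    ens += K[:, None] * innov[None, :]

bias_hat = ens[1].mean()
```

Because only the state is observed, the bias is corrected purely through its sample covariance with the state forecast, which is the essence of tracking misspecified boundary conditions by augmentation.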